| code (string, 75 to 104k chars) | docstring (string, 1 to 46.9k chars) | text (string, 164 to 112k chars) |
|---|---|---|
def command_show(self):
""" Show metadata """
self.parser = argparse.ArgumentParser(
description="Show metadata of available objects")
self.options_select()
self.options_formatting()
self.options_utils()
self.options = self.parser.parse_args(self.arguments[2:])
        self.show(brief=False) | Show metadata | Below is the instruction that describes the task:
### Input:
Show metadata
### Response:
def command_show(self):
""" Show metadata """
self.parser = argparse.ArgumentParser(
description="Show metadata of available objects")
self.options_select()
self.options_formatting()
self.options_utils()
self.options = self.parser.parse_args(self.arguments[2:])
self.show(brief=False) |
def _calculate(self, startingPercentage, endPercentage, startDate, endDate):
"""This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`.
Both parameters will be correct at this time.
:param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0].
It represents the value, where the error calculation should be started.
25.0 for example means that the first 25% of all calculated errors will be ignored.
:param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0].
It represents the value, after which all error values will be ignored. 90.0 for example means that
the last 10% of all local errors will be ignored.
:param float startDate: Epoch representing the start date used for error calculation.
:param float endDate: Epoch representing the end date used in the error calculation.
:return: Returns a float representing the error.
:rtype: float
"""
# get the defined subset of error values
errorValues = self._get_error_values(startingPercentage, endPercentage, startDate, endDate)
        errorValues = [item for item in errorValues if item is not None]
        if not errorValues:
            return 1.0
share = 1.0 / float(len(errorValues))
product = 1.0
for errorValue in errorValues:
# never multiply with zero!
if 0 == errorValue:
continue
product *= errorValue**share
return product | This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`.
Both parameters will be correct at this time.
:param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0].
It represents the value, where the error calculation should be started.
25.0 for example means that the first 25% of all calculated errors will be ignored.
:param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0].
It represents the value, after which all error values will be ignored. 90.0 for example means that
the last 10% of all local errors will be ignored.
:param float startDate: Epoch representing the start date used for error calculation.
:param float endDate: Epoch representing the end date used in the error calculation.
:return: Returns a float representing the error.
:rtype: float | Below is the instruction that describes the task:
### Input:
This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`.
Both parameters will be correct at this time.
:param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0].
It represents the value, where the error calculation should be started.
25.0 for example means that the first 25% of all calculated errors will be ignored.
:param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0].
It represents the value, after which all error values will be ignored. 90.0 for example means that
the last 10% of all local errors will be ignored.
:param float startDate: Epoch representing the start date used for error calculation.
:param float endDate: Epoch representing the end date used in the error calculation.
:return: Returns a float representing the error.
:rtype: float
### Response:
def _calculate(self, startingPercentage, endPercentage, startDate, endDate):
"""This is the error calculation function that gets called by :py:meth:`BaseErrorMeasure.get_error`.
Both parameters will be correct at this time.
:param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0].
It represents the value, where the error calculation should be started.
25.0 for example means that the first 25% of all calculated errors will be ignored.
:param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0].
It represents the value, after which all error values will be ignored. 90.0 for example means that
the last 10% of all local errors will be ignored.
:param float startDate: Epoch representing the start date used for error calculation.
:param float endDate: Epoch representing the end date used in the error calculation.
:return: Returns a float representing the error.
:rtype: float
"""
# get the defined subset of error values
errorValues = self._get_error_values(startingPercentage, endPercentage, startDate, endDate)
        errorValues = [item for item in errorValues if item is not None]
        if not errorValues:
            return 1.0
share = 1.0 / float(len(errorValues))
product = 1.0
for errorValue in errorValues:
# never multiply with zero!
if 0 == errorValue:
continue
product *= errorValue**share
return product |
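As a rough, self-contained sketch of what this method computes (a share-weighted geometric mean over the non-missing local errors, skipping zeros), the following uses made-up sample values:

```python
def geometric_mean_error(error_values):
    # Keep only the defined local errors; an empty subset counts as the worst error.
    values = [v for v in error_values if v is not None]
    if not values:
        return 1.0
    share = 1.0 / len(values)
    product = 1.0
    for value in values:
        if value == 0:  # never multiply by zero
            continue
        product *= value ** share
    return product

print(geometric_mean_error([0.5, None, 2.0, 0.0]))  # ~1.0
```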
def get_address():
"""
Get one or more existing address(es) owned by your user.
---
parameters:
- name: address
in: body
description: The address you'd like to get info about.
required: false
schema:
$ref: '#/definitions/Address'
responses:
'200':
description: Your new address
schema:
items:
$ref: '#/definitions/Address'
type: array
default:
description: unexpected error
schema:
$ref: '#/definitions/errorModel'
security:
- kid: []
- typ: []
- alg: []
operationId: getAddress
"""
address = request.jws_payload['data'].get('address')
currency = request.jws_payload['data'].get('currency')
network = request.jws_payload['data'].get('network')
addysq = ses.query(wm.Address).filter(wm.Address.user_id == current_user.id)
if address:
addysq = addysq.filter(wm.Address.address == address)
elif currency:
addysq = addysq.filter(wm.Address.currency == currency)
elif network:
addysq = addysq.filter(wm.Address.network == network)
if addysq.count() == 0:
return "Invalid Request", 400
addys = [json.loads(jsonify2(a, 'Address')) for a in addysq]
response = current_app.bitjws.create_response(addys)
ses.close()
return response | Get one or more existing address(es) owned by your user.
---
parameters:
- name: address
in: body
description: The address you'd like to get info about.
required: false
schema:
$ref: '#/definitions/Address'
responses:
'200':
description: Your new address
schema:
items:
$ref: '#/definitions/Address'
type: array
default:
description: unexpected error
schema:
$ref: '#/definitions/errorModel'
security:
- kid: []
- typ: []
- alg: []
operationId: getAddress | Below is the instruction that describes the task:
### Input:
Get one or more existing address(es) owned by your user.
---
parameters:
- name: address
in: body
description: The address you'd like to get info about.
required: false
schema:
$ref: '#/definitions/Address'
responses:
'200':
description: Your new address
schema:
items:
$ref: '#/definitions/Address'
type: array
default:
description: unexpected error
schema:
$ref: '#/definitions/errorModel'
security:
- kid: []
- typ: []
- alg: []
operationId: getAddress
### Response:
def get_address():
"""
Get one or more existing address(es) owned by your user.
---
parameters:
- name: address
in: body
description: The address you'd like to get info about.
required: false
schema:
$ref: '#/definitions/Address'
responses:
'200':
description: Your new address
schema:
items:
$ref: '#/definitions/Address'
type: array
default:
description: unexpected error
schema:
$ref: '#/definitions/errorModel'
security:
- kid: []
- typ: []
- alg: []
operationId: getAddress
"""
address = request.jws_payload['data'].get('address')
currency = request.jws_payload['data'].get('currency')
network = request.jws_payload['data'].get('network')
addysq = ses.query(wm.Address).filter(wm.Address.user_id == current_user.id)
if address:
addysq = addysq.filter(wm.Address.address == address)
elif currency:
addysq = addysq.filter(wm.Address.currency == currency)
elif network:
addysq = addysq.filter(wm.Address.network == network)
if addysq.count() == 0:
return "Invalid Request", 400
addys = [json.loads(jsonify2(a, 'Address')) for a in addysq]
response = current_app.bitjws.create_response(addys)
ses.close()
return response |
def modified(self):
"""Union[datetime.datetime, None]: Datetime at which the dataset was
last modified (:data:`None` until set from the server).
"""
modified_time = self._properties.get("lastModifiedTime")
if modified_time is not None:
# modified_time will be in milliseconds.
return google.cloud._helpers._datetime_from_microseconds(
1000.0 * float(modified_time)
) | Union[datetime.datetime, None]: Datetime at which the dataset was
last modified (:data:`None` until set from the server). | Below is the instruction that describes the task:
### Input:
Union[datetime.datetime, None]: Datetime at which the dataset was
last modified (:data:`None` until set from the server).
### Response:
def modified(self):
"""Union[datetime.datetime, None]: Datetime at which the dataset was
last modified (:data:`None` until set from the server).
"""
modified_time = self._properties.get("lastModifiedTime")
if modified_time is not None:
# modified_time will be in milliseconds.
return google.cloud._helpers._datetime_from_microseconds(
1000.0 * float(modified_time)
) |
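`_datetime_from_microseconds` belongs to google-cloud-core and is not shown in this row; a standalone sketch of the same millisecond conversion (returning a naive UTC datetime for brevity) might look like:

```python
import datetime

def datetime_from_millis(millis):
    # lastModifiedTime is a string of epoch milliseconds.
    return datetime.datetime(1970, 1, 1) + datetime.timedelta(milliseconds=float(millis))

print(datetime_from_millis("1609459200000"))  # 2021-01-01 00:00:00
```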
def _typelist(x):
"""Helper function converting all items of x to instances."""
if isinstance(x, collections.Sequence):
return list(map(_to_instance, x))
elif isinstance(x, collections.Iterable):
return x
    return None if x is None else [_to_instance(x)] | Helper function converting all items of x to instances. | Below is the instruction that describes the task:
### Input:
Helper function converting all items of x to instances.
### Response:
def _typelist(x):
"""Helper function converting all items of x to instances."""
if isinstance(x, collections.Sequence):
return list(map(_to_instance, x))
elif isinstance(x, collections.Iterable):
return x
return None if x is None else [_to_instance(x)] |
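A small demonstration of the three branches, using a hypothetical `_to_instance` that instantiates classes and passes everything else through (the real helper is not shown in this row); note that on Python 3.3+ the ABCs live in `collections.abc`:

```python
from collections.abc import Iterable, Sequence

def _to_instance(x):
    # Hypothetical stand-in: instantiate classes, leave other objects untouched.
    return x() if isinstance(x, type) else x

def typelist(x):
    if isinstance(x, Sequence):
        return list(map(_to_instance, x))
    elif isinstance(x, Iterable):
        return x
    return None if x is None else [_to_instance(x)]

print(typelist([int, 3.5]))  # [0, 3.5]  (sequence: mapped element-wise)
print(typelist(None))        # None
print(typelist(int))         # [0]       (single item: wrapped in a list)
```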
def find_related_modules(package, related_name_re='.+',
ignore_exceptions=False):
"""Find matching modules using a package and a module name pattern."""
warnings.warn('find_related_modules has been deprecated.',
DeprecationWarning)
package_elements = package.rsplit(".", 1)
try:
if len(package_elements) == 2:
pkg = __import__(package_elements[0], globals(), locals(), [
package_elements[1]])
pkg = getattr(pkg, package_elements[1])
else:
pkg = __import__(package_elements[0], globals(), locals(), [])
pkg_path = pkg.__path__
except AttributeError:
return []
# Find all modules named according to related_name
p = re.compile(related_name_re)
modules = []
for name in find_modules(package, include_packages=True):
if p.match(name.split('.')[-1]):
try:
modules.append(import_string(name, silent=ignore_exceptions))
except Exception as e:
if not ignore_exceptions:
raise e
    return modules | Find matching modules using a package and a module name pattern. | Below is the instruction that describes the task:
### Input:
Find matching modules using a package and a module name pattern.
### Response:
def find_related_modules(package, related_name_re='.+',
ignore_exceptions=False):
"""Find matching modules using a package and a module name pattern."""
warnings.warn('find_related_modules has been deprecated.',
DeprecationWarning)
package_elements = package.rsplit(".", 1)
try:
if len(package_elements) == 2:
pkg = __import__(package_elements[0], globals(), locals(), [
package_elements[1]])
pkg = getattr(pkg, package_elements[1])
else:
pkg = __import__(package_elements[0], globals(), locals(), [])
pkg_path = pkg.__path__
except AttributeError:
return []
# Find all modules named according to related_name
p = re.compile(related_name_re)
modules = []
for name in find_modules(package, include_packages=True):
if p.match(name.split('.')[-1]):
try:
modules.append(import_string(name, silent=ignore_exceptions))
except Exception as e:
if not ignore_exceptions:
raise e
return modules |
def close(self):
"""End the report."""
endpoint = self.endpoint.replace("/api/v1/spans", "")
logger.debug("Zipkin trace may be located at this URL {}/traces/{}".format(endpoint, self.trace_id)) | End the report. | Below is the the instruction that describes the task:
### Input:
End the report.
### Response:
def close(self):
"""End the report."""
endpoint = self.endpoint.replace("/api/v1/spans", "")
logger.debug("Zipkin trace may be located at this URL {}/traces/{}".format(endpoint, self.trace_id)) |
def loadable_modules(self):
'''The list of loadable module profile dictionaries.'''
with self._mutex:
if not self._loadable_modules:
self._loadable_modules = []
for mp in self._obj.get_loadable_modules():
self._loadable_modules.append(utils.nvlist_to_dict(mp.properties))
            return self._loadable_modules | The list of loadable module profile dictionaries. | Below is the instruction that describes the task:
### Input:
The list of loadable module profile dictionaries.
### Response:
def loadable_modules(self):
'''The list of loadable module profile dictionaries.'''
with self._mutex:
if not self._loadable_modules:
self._loadable_modules = []
for mp in self._obj.get_loadable_modules():
self._loadable_modules.append(utils.nvlist_to_dict(mp.properties))
return self._loadable_modules |
def refill_main_wallet(self, from_address, to_address, nfees, ntokens, password, min_confirmations=6, sync=False):
"""
Refill the Federation wallet with tokens and fees. This keeps the federation wallet clean.
Dealing with exact values simplifies the transactions. No need to calculate change. Easier to keep track of the
unspents and prevent double spends that would result in transactions being rejected by the bitcoin network.
Args:
from_address (Tuple[str]): Refill wallet address. Refills the federation wallet with tokens and fees
to_address (str): Federation wallet address
nfees (int): Number of fees to transfer. Each fee is 10000 satoshi. Used to pay for the transactions
ntokens (int): Number of tokens to transfer. Each token is 600 satoshi. Used to register hashes in the blockchain
password (str): Password for the Refill wallet. Used to sign the transaction
        min_confirmations (int): Number of confirmations when choosing the inputs of the transaction. Defaults to 6
sync (bool): Perform the transaction in synchronous mode, the call to the function will block until there is at
            least one confirmation on the blockchain. Defaults to False
Returns:
str: transaction id
"""
path, from_address = from_address
unsigned_tx = self._t.simple_transaction(from_address,
[(to_address, self.fee)] * nfees + [(to_address, self.token)] * ntokens,
min_confirmations=min_confirmations)
signed_tx = self._t.sign_transaction(unsigned_tx, password)
txid = self._t.push(signed_tx)
return txid | Refill the Federation wallet with tokens and fees. This keeps the federation wallet clean.
Dealing with exact values simplifies the transactions. No need to calculate change. Easier to keep track of the
unspents and prevent double spends that would result in transactions being rejected by the bitcoin network.
Args:
from_address (Tuple[str]): Refill wallet address. Refills the federation wallet with tokens and fees
to_address (str): Federation wallet address
nfees (int): Number of fees to transfer. Each fee is 10000 satoshi. Used to pay for the transactions
ntokens (int): Number of tokens to transfer. Each token is 600 satoshi. Used to register hashes in the blockchain
password (str): Password for the Refill wallet. Used to sign the transaction
min_confirmations (int): Number of confirmations when choosing the inputs of the transaction. Defaults to 6
sync (bool): Perform the transaction in synchronous mode, the call to the function will block until there is at
least one confirmation on the blockchain. Defaults to False
Returns:
str: transaction id | Below is the instruction that describes the task:
### Input:
Refill the Federation wallet with tokens and fees. This keeps the federation wallet clean.
Dealing with exact values simplifies the transactions. No need to calculate change. Easier to keep track of the
unspents and prevent double spends that would result in transactions being rejected by the bitcoin network.
Args:
from_address (Tuple[str]): Refill wallet address. Refills the federation wallet with tokens and fees
to_address (str): Federation wallet address
nfees (int): Number of fees to transfer. Each fee is 10000 satoshi. Used to pay for the transactions
ntokens (int): Number of tokens to transfer. Each token is 600 satoshi. Used to register hashes in the blockchain
password (str): Password for the Refill wallet. Used to sign the transaction
min_confirmations (int): Number of confirmations when choosing the inputs of the transaction. Defaults to 6
sync (bool): Perform the transaction in synchronous mode, the call to the function will block until there is at
least one confirmation on the blockchain. Defaults to False
Returns:
str: transaction id
### Response:
def refill_main_wallet(self, from_address, to_address, nfees, ntokens, password, min_confirmations=6, sync=False):
"""
Refill the Federation wallet with tokens and fees. This keeps the federation wallet clean.
Dealing with exact values simplifies the transactions. No need to calculate change. Easier to keep track of the
unspents and prevent double spends that would result in transactions being rejected by the bitcoin network.
Args:
from_address (Tuple[str]): Refill wallet address. Refills the federation wallet with tokens and fees
to_address (str): Federation wallet address
nfees (int): Number of fees to transfer. Each fee is 10000 satoshi. Used to pay for the transactions
ntokens (int): Number of tokens to transfer. Each token is 600 satoshi. Used to register hashes in the blockchain
password (str): Password for the Refill wallet. Used to sign the transaction
        min_confirmations (int): Number of confirmations when choosing the inputs of the transaction. Defaults to 6
sync (bool): Perform the transaction in synchronous mode, the call to the function will block until there is at
            least one confirmation on the blockchain. Defaults to False
Returns:
str: transaction id
"""
path, from_address = from_address
unsigned_tx = self._t.simple_transaction(from_address,
[(to_address, self.fee)] * nfees + [(to_address, self.token)] * ntokens,
min_confirmations=min_confirmations)
signed_tx = self._t.sign_transaction(unsigned_tx, password)
txid = self._t.push(signed_tx)
return txid |
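The outputs handed to `simple_transaction` are just repeated `(address, amount)` pairs, which is why no change output has to be computed; a sketch with placeholder values (the per-output amounts are taken from the docstring, not from code shown here):

```python
fee = 10000    # satoshi per fee output (assumed from the docstring)
token = 600    # satoshi per token output (assumed from the docstring)
to_address = "FEDERATION_WALLET_ADDRESS"  # placeholder

nfees, ntokens = 2, 3
outputs = [(to_address, fee)] * nfees + [(to_address, token)] * ntokens
print(outputs)
# 2 outputs of 10000 satoshi followed by 3 outputs of 600 satoshi
```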
def skesa_assemble(self):
"""
Run skesa to assemble genomes
"""
with progressbar(self.metadata) as bar:
for sample in bar:
# Initialise the assembly command
sample.commands.assemble = str()
try:
if sample.general.trimmedcorrectedfastqfiles:
# If the sample is a pure isolate, assemble it. Otherwise, run the pre-metagenome pipeline
try:
status = sample.run.Description
except AttributeError:
status = 'unknown'
if status == 'metagenome':
self.merge(sample)
else:
# Set the output directory
sample.general.assembly_output = os.path.join(sample.general.outputdirectory,
'assembly_output')
make_path(sample.general.assembly_output)
sample.general.assemblyfile = os.path.join(sample.general.assembly_output,
'{name}_unfiltered.fasta'
.format(name=sample.name))
sample.general.bestassemblyfile = os.path.join(sample.general.assembly_output,
'{name}.fasta'
.format(name=sample.name))
fastqfiles = sample.general.trimmedcorrectedfastqfiles
                            # Set the forward fastq files
sample.general.assemblyfastq = fastqfiles
forward = fastqfiles[0]
gz = True if '.gz' in forward else False
# If there are two fastq files
if len(fastqfiles) == 2:
# Set the reverse fastq name https://github.com/ncbi/SKESA/issues/7
sample.commands.assemble = 'skesa --fastq {fastqfiles} --cores {threads} ' \
'--use_paired_ends --vector_percent 1 ' \
'--contigs_out {contigs}'\
.format(fastqfiles=','.join(fastqfiles),
threads=self.cpus,
contigs=sample.general.assemblyfile)
# Same as above, but use single read settings for the assembler
else:
sample.commands.assemble = 'skesa --fastq {fastqfiles} --cores {threads} ' \
'--vector_percent 1 --contigs_out {contigs}'\
.format(fastqfiles=','.join(fastqfiles),
threads=self.cpus,
contigs=sample.general.assemblyfile)
# If there are no fastq files, populate the metadata appropriately
else:
sample.general.assembly_output = 'NA'
sample.general.assemblyfastq = 'NA'
sample.general.bestassemblyfile = 'NA'
except AttributeError:
sample.general.assembly_output = 'NA'
sample.general.assemblyfastq = 'NA'
sample.general.trimmedcorrectedfastqfiles = 'NA'
sample.general.bestassemblyfile = 'NA'
if sample.commands.assemble and not os.path.isfile(sample.general.assemblyfile):
# Run the assembly
out, err = run_subprocess(sample.commands.assemble)
write_to_logfile(sample.commands.assemble,
sample.commands.assemble,
self.logfile,
sample.general.logout,
sample.general.logerr,
None,
None)
write_to_logfile(out,
err,
self.logfile,
sample.general.logout,
sample.general.logerr,
None,
                                     None) | Run skesa to assemble genomes | Below is the instruction that describes the task:
### Input:
Run skesa to assemble genomes
### Response:
def skesa_assemble(self):
"""
Run skesa to assemble genomes
"""
with progressbar(self.metadata) as bar:
for sample in bar:
# Initialise the assembly command
sample.commands.assemble = str()
try:
if sample.general.trimmedcorrectedfastqfiles:
# If the sample is a pure isolate, assemble it. Otherwise, run the pre-metagenome pipeline
try:
status = sample.run.Description
except AttributeError:
status = 'unknown'
if status == 'metagenome':
self.merge(sample)
else:
# Set the output directory
sample.general.assembly_output = os.path.join(sample.general.outputdirectory,
'assembly_output')
make_path(sample.general.assembly_output)
sample.general.assemblyfile = os.path.join(sample.general.assembly_output,
'{name}_unfiltered.fasta'
.format(name=sample.name))
sample.general.bestassemblyfile = os.path.join(sample.general.assembly_output,
'{name}.fasta'
.format(name=sample.name))
fastqfiles = sample.general.trimmedcorrectedfastqfiles
                            # Set the forward fastq files
sample.general.assemblyfastq = fastqfiles
forward = fastqfiles[0]
gz = True if '.gz' in forward else False
# If there are two fastq files
if len(fastqfiles) == 2:
# Set the reverse fastq name https://github.com/ncbi/SKESA/issues/7
sample.commands.assemble = 'skesa --fastq {fastqfiles} --cores {threads} ' \
'--use_paired_ends --vector_percent 1 ' \
'--contigs_out {contigs}'\
.format(fastqfiles=','.join(fastqfiles),
threads=self.cpus,
contigs=sample.general.assemblyfile)
# Same as above, but use single read settings for the assembler
else:
sample.commands.assemble = 'skesa --fastq {fastqfiles} --cores {threads} ' \
'--vector_percent 1 --contigs_out {contigs}'\
.format(fastqfiles=','.join(fastqfiles),
threads=self.cpus,
contigs=sample.general.assemblyfile)
# If there are no fastq files, populate the metadata appropriately
else:
sample.general.assembly_output = 'NA'
sample.general.assemblyfastq = 'NA'
sample.general.bestassemblyfile = 'NA'
except AttributeError:
sample.general.assembly_output = 'NA'
sample.general.assemblyfastq = 'NA'
sample.general.trimmedcorrectedfastqfiles = 'NA'
sample.general.bestassemblyfile = 'NA'
if sample.commands.assemble and not os.path.isfile(sample.general.assemblyfile):
# Run the assembly
out, err = run_subprocess(sample.commands.assemble)
write_to_logfile(sample.commands.assemble,
sample.commands.assemble,
self.logfile,
sample.general.logout,
sample.general.logerr,
None,
None)
write_to_logfile(out,
err,
self.logfile,
sample.general.logout,
sample.general.logerr,
None,
None) |
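The two command branches above differ only in the `--use_paired_ends` flag; a hypothetical helper that builds the same kind of command string (flag order may differ slightly from the method above):

```python
def build_skesa_command(fastqfiles, threads, contigs):
    # Paired-end data (two FASTQ files) gets the --use_paired_ends flag.
    paired = '--use_paired_ends ' if len(fastqfiles) == 2 else ''
    return ('skesa --fastq {files} --cores {threads} {paired}--vector_percent 1 '
            '--contigs_out {contigs}').format(files=','.join(fastqfiles),
                                              threads=threads,
                                              paired=paired,
                                              contigs=contigs)

print(build_skesa_command(['sample_R1.fastq.gz', 'sample_R2.fastq.gz'], 4, 'sample_unfiltered.fasta'))
print(build_skesa_command(['sample.fastq.gz'], 4, 'sample_unfiltered.fasta'))
```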
def get_attached_container_host_config_kwargs(self, action, container_name, kwargs=None):
"""
Generates keyword arguments for the Docker client to set up the HostConfig or start an attached container.
:param action: Action configuration.
:type action: ActionConfig
:param container_name: Container name or id. Set ``None`` when included in kwargs for ``create_container``.
:type container_name: unicode | str | NoneType
:param kwargs: Additional keyword arguments to complement or override the configuration-based values.
:type kwargs: dict | NoneType
:return: Resulting keyword arguments.
:rtype: dict
"""
if container_name:
c_kwargs = {'container': container_name}
else:
c_kwargs = {}
update_kwargs(c_kwargs, kwargs)
return c_kwargs | Generates keyword arguments for the Docker client to set up the HostConfig or start an attached container.
:param action: Action configuration.
:type action: ActionConfig
:param container_name: Container name or id. Set ``None`` when included in kwargs for ``create_container``.
:type container_name: unicode | str | NoneType
:param kwargs: Additional keyword arguments to complement or override the configuration-based values.
:type kwargs: dict | NoneType
:return: Resulting keyword arguments.
:rtype: dict | Below is the instruction that describes the task:
### Input:
Generates keyword arguments for the Docker client to set up the HostConfig or start an attached container.
:param action: Action configuration.
:type action: ActionConfig
:param container_name: Container name or id. Set ``None`` when included in kwargs for ``create_container``.
:type container_name: unicode | str | NoneType
:param kwargs: Additional keyword arguments to complement or override the configuration-based values.
:type kwargs: dict | NoneType
:return: Resulting keyword arguments.
:rtype: dict
### Response:
def get_attached_container_host_config_kwargs(self, action, container_name, kwargs=None):
"""
Generates keyword arguments for the Docker client to set up the HostConfig or start an attached container.
:param action: Action configuration.
:type action: ActionConfig
:param container_name: Container name or id. Set ``None`` when included in kwargs for ``create_container``.
:type container_name: unicode | str | NoneType
:param kwargs: Additional keyword arguments to complement or override the configuration-based values.
:type kwargs: dict | NoneType
:return: Resulting keyword arguments.
:rtype: dict
"""
if container_name:
c_kwargs = {'container': container_name}
else:
c_kwargs = {}
update_kwargs(c_kwargs, kwargs)
return c_kwargs |
def altitudes(self):
'''
A list of the altitudes of each vertex [AltA, AltB, AltC], list of
floats.
An altitude is the shortest distance from a vertex to the side
opposite of it.
'''
a = self.area * 2
return [a / self.a, a / self.b, a / self.c] | A list of the altitudes of each vertex [AltA, AltB, AltC], list of
floats.
An altitude is the shortest distance from a vertex to the side
opposite of it. | Below is the instruction that describes the task:
### Input:
A list of the altitudes of each vertex [AltA, AltB, AltC], list of
floats.
An altitude is the shortest distance from a vertex to the side
opposite of it.
### Response:
def altitudes(self):
'''
A list of the altitudes of each vertex [AltA, AltB, AltC], list of
floats.
An altitude is the shortest distance from a vertex to the side
opposite of it.
'''
a = self.area * 2
return [a / self.a, a / self.b, a / self.c] |
def user(self, extra_params=None):
"""
The User currently assigned to the Ticket
"""
if self.get('assigned_to_id', None):
users = self.space.users(
id=self['assigned_to_id'],
extra_params=extra_params
)
if users:
                return users[0] | The User currently assigned to the Ticket | Below is the instruction that describes the task:
### Input:
The User currently assigned to the Ticket
### Response:
def user(self, extra_params=None):
"""
The User currently assigned to the Ticket
"""
if self.get('assigned_to_id', None):
users = self.space.users(
id=self['assigned_to_id'],
extra_params=extra_params
)
if users:
return users[0] |
def get_port_channel_detail_output_lacp_aggr_member_sync(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
get_port_channel_detail = ET.Element("get_port_channel_detail")
config = get_port_channel_detail
output = ET.SubElement(get_port_channel_detail, "output")
lacp = ET.SubElement(output, "lacp")
aggr_member = ET.SubElement(lacp, "aggr-member")
sync = ET.SubElement(aggr_member, "sync")
sync.text = kwargs.pop('sync')
callback = kwargs.pop('callback', self._callback)
        return callback(config) | Auto Generated Code | Below is the instruction that describes the task:
### Input:
Auto Generated Code
### Response:
def get_port_channel_detail_output_lacp_aggr_member_sync(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
get_port_channel_detail = ET.Element("get_port_channel_detail")
config = get_port_channel_detail
output = ET.SubElement(get_port_channel_detail, "output")
lacp = ET.SubElement(output, "lacp")
aggr_member = ET.SubElement(lacp, "aggr-member")
sync = ET.SubElement(aggr_member, "sync")
sync.text = kwargs.pop('sync')
callback = kwargs.pop('callback', self._callback)
return callback(config) |
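The method only nests ElementTree elements and passes the result to the callback; the XML payload it builds can be previewed on its own:

```python
import xml.etree.ElementTree as ET

get_port_channel_detail = ET.Element("get_port_channel_detail")
output = ET.SubElement(get_port_channel_detail, "output")
lacp = ET.SubElement(output, "lacp")
aggr_member = ET.SubElement(lacp, "aggr-member")
sync = ET.SubElement(aggr_member, "sync")
sync.text = "true"  # placeholder value for the sync leaf
print(ET.tostring(get_port_channel_detail).decode())
# <get_port_channel_detail><output><lacp><aggr-member><sync>true</sync></aggr-member></lacp></output></get_port_channel_detail>
```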
def get_pickle_protocol():
"""
Allow configuration of the pickle protocol on a per-machine basis.
This way, if you use multiple platforms with different versions of
pickle, you can configure each of them to use the highest protocol
supported by all of the machines that you want to be able to
communicate.
"""
try:
protocol_str = os.environ['PYLEARN2_PICKLE_PROTOCOL']
except KeyError:
# If not defined, we default to 0 because this is the default
# protocol used by cPickle.dump (and because it results in
# maximum portability)
protocol_str = '0'
if protocol_str == 'pickle.HIGHEST_PROTOCOL':
return pickle.HIGHEST_PROTOCOL
return int(protocol_str) | Allow configuration of the pickle protocol on a per-machine basis.
This way, if you use multiple platforms with different versions of
pickle, you can configure each of them to use the highest protocol
supported by all of the machines that you want to be able to
communicate. | Below is the instruction that describes the task:
### Input:
Allow configuration of the pickle protocol on a per-machine basis.
This way, if you use multiple platforms with different versions of
pickle, you can configure each of them to use the highest protocol
supported by all of the machines that you want to be able to
communicate.
### Response:
def get_pickle_protocol():
"""
Allow configuration of the pickle protocol on a per-machine basis.
This way, if you use multiple platforms with different versions of
pickle, you can configure each of them to use the highest protocol
supported by all of the machines that you want to be able to
communicate.
"""
try:
protocol_str = os.environ['PYLEARN2_PICKLE_PROTOCOL']
except KeyError:
# If not defined, we default to 0 because this is the default
# protocol used by cPickle.dump (and because it results in
# maximum portability)
protocol_str = '0'
if protocol_str == 'pickle.HIGHEST_PROTOCOL':
return pickle.HIGHEST_PROTOCOL
return int(protocol_str) |
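The protocol choice is driven entirely by the `PYLEARN2_PICKLE_PROTOCOL` environment variable; a standalone sketch of the same lookup applied to a dump:

```python
import os
import pickle

os.environ['PYLEARN2_PICKLE_PROTOCOL'] = 'pickle.HIGHEST_PROTOCOL'

protocol_str = os.environ.get('PYLEARN2_PICKLE_PROTOCOL', '0')
if protocol_str == 'pickle.HIGHEST_PROTOCOL':
    protocol = pickle.HIGHEST_PROTOCOL
else:
    protocol = int(protocol_str)

payload = pickle.dumps({'weights': [1, 2, 3]}, protocol=protocol)
print(protocol, len(payload))
```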
def extract_agg_curves(dstore, what):
"""
Aggregate loss curves of the given loss type and tags for
event based risk calculations. Use it as
/extract/agg_curves/structural?taxonomy=RC&zipcode=20126
:returns:
array of shape (S, P), being P the number of return periods
and S the number of statistics
"""
from openquake.calculators.export.loss_curves import get_loss_builder
oq = dstore['oqparam']
loss_type, tags = get_loss_type_tags(what)
if 'curves-stats' in dstore: # event_based_risk
losses = _get_curves(dstore['curves-stats'], oq.lti[loss_type])
stats = dstore['curves-stats'].attrs['stats']
elif 'curves-rlzs' in dstore: # event_based_risk, 1 rlz
losses = _get_curves(dstore['curves-rlzs'], oq.lti[loss_type])
assert losses.shape[1] == 1, 'There must be a single realization'
stats = [b'mean'] # suitable to be stored as hdf5 attribute
else:
raise KeyError('No curves found in %s' % dstore)
res = _filter_agg(dstore['assetcol'], losses, tags, stats)
cc = dstore['assetcol/cost_calculator']
res.units = cc.get_units(loss_types=[loss_type])
res.return_periods = get_loss_builder(dstore).return_periods
return res | Aggregate loss curves of the given loss type and tags for
event based risk calculations. Use it as
/extract/agg_curves/structural?taxonomy=RC&zipcode=20126
:returns:
array of shape (S, P), being P the number of return periods
and S the number of statistics | Below is the instruction that describes the task:
### Input:
Aggregate loss curves of the given loss type and tags for
event based risk calculations. Use it as
/extract/agg_curves/structural?taxonomy=RC&zipcode=20126
:returns:
array of shape (S, P), being P the number of return periods
and S the number of statistics
### Response:
def extract_agg_curves(dstore, what):
"""
Aggregate loss curves of the given loss type and tags for
event based risk calculations. Use it as
/extract/agg_curves/structural?taxonomy=RC&zipcode=20126
:returns:
array of shape (S, P), being P the number of return periods
and S the number of statistics
"""
from openquake.calculators.export.loss_curves import get_loss_builder
oq = dstore['oqparam']
loss_type, tags = get_loss_type_tags(what)
if 'curves-stats' in dstore: # event_based_risk
losses = _get_curves(dstore['curves-stats'], oq.lti[loss_type])
stats = dstore['curves-stats'].attrs['stats']
elif 'curves-rlzs' in dstore: # event_based_risk, 1 rlz
losses = _get_curves(dstore['curves-rlzs'], oq.lti[loss_type])
assert losses.shape[1] == 1, 'There must be a single realization'
stats = [b'mean'] # suitable to be stored as hdf5 attribute
else:
raise KeyError('No curves found in %s' % dstore)
res = _filter_agg(dstore['assetcol'], losses, tags, stats)
cc = dstore['assetcol/cost_calculator']
res.units = cc.get_units(loss_types=[loss_type])
res.return_periods = get_loss_builder(dstore).return_periods
return res |
def update_forward_refs(cls, **localns: Any) -> None:
"""
Try to update ForwardRefs on fields based on this Model, globalns and localns.
"""
globalns = sys.modules[cls.__module__].__dict__
globalns.setdefault(cls.__name__, cls)
for f in cls.__fields__.values():
            update_field_forward_refs(f, globalns=globalns, localns=localns) | Try to update ForwardRefs on fields based on this Model, globalns and localns. | Below is the instruction that describes the task:
### Input:
Try to update ForwardRefs on fields based on this Model, globalns and localns.
### Response:
def update_forward_refs(cls, **localns: Any) -> None:
"""
Try to update ForwardRefs on fields based on this Model, globalns and localns.
"""
globalns = sys.modules[cls.__module__].__dict__
globalns.setdefault(cls.__name__, cls)
for f in cls.__fields__.values():
update_field_forward_refs(f, globalns=globalns, localns=localns) |
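This is the hook pydantic exposes for resolving self-referencing (forward-referenced) models; a typical call site, assuming the pydantic v1 API, looks like:

```python
from typing import List
from pydantic import BaseModel  # pydantic v1 API assumed

class Node(BaseModel):
    value: int
    children: List['Node'] = []  # 'Node' is still a ForwardRef at class-creation time

Node.update_forward_refs()  # resolve the string annotation against the module namespace

tree = Node(value=1, children=[{'value': 2}])
print(tree.children[0].value)  # 2
```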
def show_banner(ctx, param, value):
"""Shows dynaconf awesome banner"""
if not value or ctx.resilient_parsing:
return
set_settings()
click.echo(settings.dynaconf_banner)
click.echo("Learn more at: http://github.com/rochacbruno/dynaconf")
    ctx.exit() | Shows dynaconf awesome banner | Below is the instruction that describes the task:
### Input:
Shows dynaconf awesome banner
### Response:
def show_banner(ctx, param, value):
"""Shows dynaconf awesome banner"""
if not value or ctx.resilient_parsing:
return
set_settings()
click.echo(settings.dynaconf_banner)
click.echo("Learn more at: http://github.com/rochacbruno/dynaconf")
ctx.exit() |
def _parse_text(self, text):
"""Parse text (string) and return list of parsed sentences (strings).
Each sentence consists of space separated token elements and the
token format returned by the PatternParser is WORD/TAG/PHRASE/ROLE/(LEMMA)
(separated by a forward slash '/')
:param str text: A string.
"""
if isinstance(self.tokenizer, PatternTokenizer):
parsed_text = pattern_parse(text, tokenize=True, lemmata=False)
else:
_tokenized = []
_sentences = sent_tokenize(text, tokenizer=self.tokenizer)
for s in _sentences:
_tokenized.append(" ".join(self.tokenizer.tokenize(s)))
parsed_text = pattern_parse(
_tokenized,
tokenize=False,
lemmata=False)
return parsed_text.split('\n') | Parse text (string) and return list of parsed sentences (strings).
Each sentence consists of space separated token elements and the
token format returned by the PatternParser is WORD/TAG/PHRASE/ROLE/(LEMMA)
(separated by a forward slash '/')
:param str text: A string. | Below is the instruction that describes the task:
### Input:
Parse text (string) and return list of parsed sentences (strings).
Each sentence consists of space separated token elements and the
token format returned by the PatternParser is WORD/TAG/PHRASE/ROLE/(LEMMA)
(separated by a forward slash '/')
:param str text: A string.
### Response:
def _parse_text(self, text):
"""Parse text (string) and return list of parsed sentences (strings).
Each sentence consists of space separated token elements and the
token format returned by the PatternParser is WORD/TAG/PHRASE/ROLE/(LEMMA)
(separated by a forward slash '/')
:param str text: A string.
"""
if isinstance(self.tokenizer, PatternTokenizer):
parsed_text = pattern_parse(text, tokenize=True, lemmata=False)
else:
_tokenized = []
_sentences = sent_tokenize(text, tokenizer=self.tokenizer)
for s in _sentences:
_tokenized.append(" ".join(self.tokenizer.tokenize(s)))
parsed_text = pattern_parse(
_tokenized,
tokenize=False,
lemmata=False)
return parsed_text.split('\n') |
def get(self, path='', page=False, retry=3, **options):
"""
Get an item from the Graph API.
:param path: A string describing the path to the item.
:param page: A boolean describing whether to return a generator that
iterates over each page of results.
:param retry: An integer describing how many times the request may be retried.
:param options: Graph API parameters such as 'limit', 'offset' or 'since'.
Floating-point numbers will be returned as :class:`decimal.Decimal`
instances.
See `Facebook's Graph API documentation <http://developers.facebook.com/docs/reference/api/>`_
for an exhaustive list of parameters.
"""
response = self._query(
method='GET',
path=path,
data=options,
page=page,
retry=retry
)
if response is False:
raise FacebookError('Could not get "%s".' % path)
return response | Get an item from the Graph API.
:param path: A string describing the path to the item.
:param page: A boolean describing whether to return a generator that
iterates over each page of results.
:param retry: An integer describing how many times the request may be retried.
:param options: Graph API parameters such as 'limit', 'offset' or 'since'.
Floating-point numbers will be returned as :class:`decimal.Decimal`
instances.
See `Facebook's Graph API documentation <http://developers.facebook.com/docs/reference/api/>`_
for an exhaustive list of parameters. | Below is the instruction that describes the task:
### Input:
Get an item from the Graph API.
:param path: A string describing the path to the item.
:param page: A boolean describing whether to return a generator that
iterates over each page of results.
:param retry: An integer describing how many times the request may be retried.
:param options: Graph API parameters such as 'limit', 'offset' or 'since'.
Floating-point numbers will be returned as :class:`decimal.Decimal`
instances.
See `Facebook's Graph API documentation <http://developers.facebook.com/docs/reference/api/>`_
for an exhaustive list of parameters.
### Response:
def get(self, path='', page=False, retry=3, **options):
"""
Get an item from the Graph API.
:param path: A string describing the path to the item.
:param page: A boolean describing whether to return a generator that
iterates over each page of results.
:param retry: An integer describing how many times the request may be retried.
:param options: Graph API parameters such as 'limit', 'offset' or 'since'.
Floating-point numbers will be returned as :class:`decimal.Decimal`
instances.
See `Facebook's Graph API documentation <http://developers.facebook.com/docs/reference/api/>`_
for an exhaustive list of parameters.
"""
response = self._query(
method='GET',
path=path,
data=options,
page=page,
retry=retry
)
if response is False:
raise FacebookError('Could not get "%s".' % path)
return response |
async def mark_fixed(self, *, comment: str = None):
"""Mark fixes.
:param comment: Reason machine is fixed.
:type comment: `str`
"""
params = {
"system_id": self.system_id
}
if comment:
params["comment"] = comment
self._data = await self._handler.mark_fixed(**params)
return self | Mark fixes.
:param comment: Reason machine is fixed.
:type comment: `str` | Below is the instruction that describes the task:
### Input:
Mark fixes.
:param comment: Reason machine is fixed.
:type comment: `str`
### Response:
async def mark_fixed(self, *, comment: str = None):
"""Mark fixes.
:param comment: Reason machine is fixed.
:type comment: `str`
"""
params = {
"system_id": self.system_id
}
if comment:
params["comment"] = comment
self._data = await self._handler.mark_fixed(**params)
return self |
def get_n_cluster_in_events(event_numbers):
'''Calculates the number of cluster in every given event.
An external C++ library is used since there is no sufficient solution in python possible.
Because of np.bincount # BUG #225 for values > int32 and the different handling under 32/64 bit operating systems.
Parameters
----------
event_numbers : numpy.array
List of event numbers to be checked.
Returns
-------
numpy.array
First dimension is the event number.
Second dimension is the number of cluster of the event.
'''
logging.debug("Calculate the number of cluster in every given event")
    event_numbers = np.ascontiguousarray(event_numbers) # change memory alignment for c++ library
result_event_numbers = np.empty_like(event_numbers)
result_count = np.empty_like(event_numbers, dtype=np.uint32)
result_size = analysis_functions.get_n_cluster_in_events(event_numbers, result_event_numbers, result_count)
return np.vstack((result_event_numbers[:result_size], result_count[:result_size])).T | Calculates the number of cluster in every given event.
An external C++ library is used since there is no sufficient solution in python possible.
Because of np.bincount # BUG #225 for values > int32 and the different handling under 32/64 bit operating systems.
Parameters
----------
event_numbers : numpy.array
List of event numbers to be checked.
Returns
-------
numpy.array
First dimension is the event number.
Second dimension is the number of cluster of the event. | Below is the instruction that describes the task:
### Input:
Calculates the number of cluster in every given event.
An external C++ library is used since there is no sufficient solution in python possible.
Because of np.bincount # BUG #225 for values > int32 and the different handling under 32/64 bit operating systems.
Parameters
----------
event_numbers : numpy.array
List of event numbers to be checked.
Returns
-------
numpy.array
First dimension is the event number.
Second dimension is the number of cluster of the event.
### Response:
def get_n_cluster_in_events(event_numbers):
'''Calculates the number of cluster in every given event.
An external C++ library is used since there is no sufficient solution in python possible.
Because of np.bincount # BUG #225 for values > int32 and the different handling under 32/64 bit operating systems.
Parameters
----------
event_numbers : numpy.array
List of event numbers to be checked.
Returns
-------
numpy.array
First dimension is the event number.
Second dimension is the number of cluster of the event.
'''
logging.debug("Calculate the number of cluster in every given event")
    event_numbers = np.ascontiguousarray(event_numbers) # change memory alignment for c++ library
result_event_numbers = np.empty_like(event_numbers)
result_count = np.empty_like(event_numbers, dtype=np.uint32)
result_size = analysis_functions.get_n_cluster_in_events(event_numbers, result_event_numbers, result_count)
return np.vstack((result_event_numbers[:result_size], result_count[:result_size])).T |
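For modest array sizes the same per-event counts can be obtained in pure numpy with `np.unique`; the compiled helper exists mainly to sidestep the `np.bincount` limitations mentioned in the docstring:

```python
import numpy as np

event_numbers = np.array([0, 0, 1, 3, 3, 3], dtype=np.int64)
events, counts = np.unique(event_numbers, return_counts=True)
print(np.vstack((events, counts)).T)
# [[0 2]
#  [1 1]
#  [3 3]]
```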
def logger():
"""Configure program logger."""
scriptlogger = logging.getLogger(__program__)
# ensure logger is not reconfigured
if not scriptlogger.hasHandlers():
# set log level
scriptlogger.setLevel(logging.INFO)
fmt = '%(name)s:%(levelname)s: %(message)s'
# configure terminal log
streamhandler = logging.StreamHandler()
streamhandler.setFormatter(logging.Formatter(fmt))
        scriptlogger.addHandler(streamhandler) | Configure program logger. | Below is the instruction that describes the task:
### Input:
Configure program logger.
### Response:
def logger():
"""Configure program logger."""
scriptlogger = logging.getLogger(__program__)
# ensure logger is not reconfigured
if not scriptlogger.hasHandlers():
# set log level
scriptlogger.setLevel(logging.INFO)
fmt = '%(name)s:%(levelname)s: %(message)s'
# configure terminal log
streamhandler = logging.StreamHandler()
streamhandler.setFormatter(logging.Formatter(fmt))
scriptlogger.addHandler(streamhandler) |
def looks_like_gene(self):
'''Returns true iff: length >=6, length is a multiple of 3, first codon is start, last codon is a stop and has no other stop codons'''
return self.is_complete_orf() \
and len(self) >= 6 \
and len(self) %3 == 0 \
        and self.seq[0:3].upper() in genetic_codes.starts[genetic_code] | Returns true iff: length >=6, length is a multiple of 3, first codon is start, last codon is a stop and has no other stop codons | Below is the instruction that describes the task:
### Input:
Returns true iff: length >=6, length is a multiple of 3, first codon is start, last codon is a stop and has no other stop codons
### Response:
def looks_like_gene(self):
'''Returns true iff: length >=6, length is a multiple of 3, first codon is start, last codon is a stop and has no other stop codons'''
return self.is_complete_orf() \
and len(self) >= 6 \
and len(self) %3 == 0 \
and self.seq[0:3].upper() in genetic_codes.starts[genetic_code] |
def PrepareMergeTaskStorage(self, task):
"""Prepares a task storage for merging.
Args:
task (Task): task.
Raises:
IOError: if the task storage does not exist.
OSError: if the task storage does not exist.
"""
if task.identifier not in self._task_storage_writers:
raise IOError('Storage writer for task: {0:s} does not exist.'.format(
task.identifier)) | Prepares a task storage for merging.
Args:
task (Task): task.
Raises:
IOError: if the task storage does not exist.
OSError: if the task storage does not exist. | Below is the instruction that describes the task:
### Input:
Prepares a task storage for merging.
Args:
task (Task): task.
Raises:
IOError: if the task storage does not exist.
OSError: if the task storage does not exist.
### Response:
def PrepareMergeTaskStorage(self, task):
"""Prepares a task storage for merging.
Args:
task (Task): task.
Raises:
IOError: if the task storage does not exist.
OSError: if the task storage does not exist.
"""
if task.identifier not in self._task_storage_writers:
raise IOError('Storage writer for task: {0:s} does not exist.'.format(
task.identifier)) |
def read_single_knmi_file(filename):
"""reads a single file of KNMI's meteorological time series
data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens
Args:
filename: the file to be opened
Returns:
pandas data frame including time series
"""
hourly_data_obs_raw = pd.read_csv(
filename,
parse_dates=[['YYYYMMDD', 'HH']],
date_parser=lambda yyyymmdd, hh: pd.datetime(int(str(yyyymmdd)[0:4]),
int(str(yyyymmdd)[4:6]),
int(str(yyyymmdd)[6:8]),
int(hh) - 1),
skiprows=31,
skipinitialspace=True,
na_values='',
keep_date_col=True,
)
hourly_data_obs_raw.index = hourly_data_obs_raw['YYYYMMDD_HH']
hourly_data_obs_raw.index = hourly_data_obs_raw.index + pd.Timedelta(hours=1)
columns_hourly = ['temp', 'precip', 'glob', 'hum', 'wind', 'ssd']
hourly_data_obs = pd.DataFrame(
index=hourly_data_obs_raw.index,
columns=columns_hourly,
data=dict(
temp=hourly_data_obs_raw['T'] / 10 + 273.15,
precip=hourly_data_obs_raw['RH'] / 10,
glob=hourly_data_obs_raw['Q'] * 10000 / 3600.,
hum=hourly_data_obs_raw['U'],
wind=hourly_data_obs_raw['FH'] / 10,
ssd=hourly_data_obs_raw['SQ'] * 6,
),
)
# remove negative values
negative_values = hourly_data_obs['precip'] < 0.0
hourly_data_obs.loc[negative_values, 'precip'] = 0.0
return hourly_data_obs | reads a single file of KNMI's meteorological time series
data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens
Args:
filename: the file to be opened
Returns:
pandas data frame including time series | Below is the instruction that describes the task:
### Input:
reads a single file of KNMI's meteorological time series
data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens
Args:
filename: the file to be opened
Returns:
pandas data frame including time series
### Response:
def read_single_knmi_file(filename):
"""reads a single file of KNMI's meteorological time series
data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens
Args:
filename: the file to be opened
Returns:
pandas data frame including time series
"""
hourly_data_obs_raw = pd.read_csv(
filename,
parse_dates=[['YYYYMMDD', 'HH']],
date_parser=lambda yyyymmdd, hh: pd.datetime(int(str(yyyymmdd)[0:4]),
int(str(yyyymmdd)[4:6]),
int(str(yyyymmdd)[6:8]),
int(hh) - 1),
skiprows=31,
skipinitialspace=True,
na_values='',
keep_date_col=True,
)
hourly_data_obs_raw.index = hourly_data_obs_raw['YYYYMMDD_HH']
hourly_data_obs_raw.index = hourly_data_obs_raw.index + pd.Timedelta(hours=1)
columns_hourly = ['temp', 'precip', 'glob', 'hum', 'wind', 'ssd']
hourly_data_obs = pd.DataFrame(
index=hourly_data_obs_raw.index,
columns=columns_hourly,
data=dict(
temp=hourly_data_obs_raw['T'] / 10 + 273.15,
precip=hourly_data_obs_raw['RH'] / 10,
glob=hourly_data_obs_raw['Q'] * 10000 / 3600.,
hum=hourly_data_obs_raw['U'],
wind=hourly_data_obs_raw['FH'] / 10,
ssd=hourly_data_obs_raw['SQ'] * 6,
),
)
# remove negative values
negative_values = hourly_data_obs['precip'] < 0.0
hourly_data_obs.loc[negative_values, 'precip'] = 0.0
return hourly_data_obs |
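The column arithmetic converts KNMI's 0.1-unit integer encodings into physical units; a minimal sketch of the same conversions on a single made-up row (the unit comments reflect KNMI's documented encodings and are an assumption, not part of the code above):

```python
import pandas as pd

raw = pd.DataFrame({'T': [105], 'RH': [23], 'Q': [180], 'U': [87], 'FH': [34], 'SQ': [5]})
converted = pd.DataFrame({
    'temp': raw['T'] / 10 + 273.15,    # 0.1 degC -> K
    'precip': raw['RH'] / 10,          # 0.1 mm   -> mm
    'glob': raw['Q'] * 10000 / 3600.,  # J/cm2 per hour -> W/m2
    'hum': raw['U'],                   # %
    'wind': raw['FH'] / 10,            # 0.1 m/s  -> m/s
    'ssd': raw['SQ'] * 6,              # 0.1 h    -> minutes
})
print(converted.round(2))
```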
def spmt(t, peak_delay=6, under_delay=16, peak_disp=1, under_disp=1,
p_u_ratio=6):
"""Normalized SPM HRF function from sum of two gamma PDFs
Parameters
----------
t : array-like
vector of times at which to sample HRF
Returns
-------
hrf : array
vector length ``len(t)`` of samples from HRF at times `t`
Notes
-----
[1] This is the canonical HRF function as used in SPM. It
has the following defaults:
- delay of response (relative to onset) : 6s
- delay of undershoot (relative to onset) : 16s
- dispersion of response : 1s
- dispersion of undershoot : 1s
- ratio of response to undershoot : 6s
- onset : 0s
- length of kernel : 32s
References:
-----
[1] http://nipy.org/
[2] https://github.com/fabianp/hrf_estimation
"""
return spm_hrf_compat(t, peak_delay=peak_delay, under_delay=under_delay,
peak_disp=peak_disp, under_disp=under_disp,
p_u_ratio=p_u_ratio, normalize=True) | Normalized SPM HRF function from sum of two gamma PDFs
Parameters
----------
t : array-like
vector of times at which to sample HRF
Returns
-------
hrf : array
vector length ``len(t)`` of samples from HRF at times `t`
Notes
-----
[1] This is the canonical HRF function as used in SPM. It
has the following defaults:
- delay of response (relative to onset) : 6s
- delay of undershoot (relative to onset) : 16s
- dispersion of response : 1s
- dispersion of undershoot : 1s
- ratio of response to undershoot : 6s
- onset : 0s
- length of kernel : 32s
References:
-----
[1] http://nipy.org/
[2] https://github.com/fabianp/hrf_estimation | Below is the instruction that describes the task:
### Input:
Normalized SPM HRF function from sum of two gamma PDFs
Parameters
----------
t : array-like
vector of times at which to sample HRF
Returns
-------
hrf : array
vector length ``len(t)`` of samples from HRF at times `t`
Notes
-----
[1] This is the canonical HRF function as used in SPM. It
has the following defaults:
- delay of response (relative to onset) : 6s
- delay of undershoot (relative to onset) : 16s
- dispersion of response : 1s
- dispersion of undershoot : 1s
- ratio of response to undershoot : 6s
- onset : 0s
- length of kernel : 32s
References:
-----
[1] http://nipy.org/
[2] https://github.com/fabianp/hrf_estimation
### Response:
def spmt(t, peak_delay=6, under_delay=16, peak_disp=1, under_disp=1,
p_u_ratio=6):
"""Normalized SPM HRF function from sum of two gamma PDFs
Parameters
----------
t : array-like
vector of times at which to sample HRF
Returns
-------
hrf : array
vector length ``len(t)`` of samples from HRF at times `t`
Notes
-----
[1] This is the canonical HRF function as used in SPM. It
has the following defaults:
- delay of response (relative to onset) : 6s
- delay of undershoot (relative to onset) : 16s
- dispersion of response : 1s
- dispersion of undershoot : 1s
- ratio of response to undershoot : 6s
- onset : 0s
- length of kernel : 32s
References:
-----
[1] http://nipy.org/
[2] https://github.com/fabianp/hrf_estimation
"""
return spm_hrf_compat(t, peak_delay=peak_delay, under_delay=under_delay,
peak_disp=peak_disp, under_disp=under_disp,
p_u_ratio=p_u_ratio, normalize=True) |
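`spm_hrf_compat` itself is not shown in this row, but the double-gamma shape it evaluates can be sketched with scipy; the normalization below is purely illustrative and not the exact SPM convention:

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak_delay=6, under_delay=16, peak_disp=1, under_disp=1, p_u_ratio=6):
    t = np.asarray(t, dtype=float)
    peak = gamma.pdf(t, peak_delay / peak_disp, scale=peak_disp)
    undershoot = gamma.pdf(t, under_delay / under_disp, scale=under_disp)
    return peak - undershoot / p_u_ratio

t = np.arange(0, 32, 0.1)
hrf = double_gamma_hrf(t)
hrf /= hrf.sum()  # crude normalization for illustration only
print(round(float(t[hrf.argmax()]), 1))  # peak near 5 s
```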
def get_country_short(self, ip):
''' Get country_short '''
rec = self.get_all(ip)
    return rec and rec.country_short | Get country_short | Below is the instruction that describes the task:
### Input:
Get country_short
### Response:
def get_country_short(self, ip):
''' Get country_short '''
rec = self.get_all(ip)
return rec and rec.country_short |
def refresh(self):
"""Do a full refresh of all devices and automations."""
self.get_devices(refresh=True)
        self.get_automations(refresh=True) | Do a full refresh of all devices and automations. | Below is the instruction that describes the task:
### Input:
Do a full refresh of all devices and automations.
### Response:
def refresh(self):
"""Do a full refresh of all devices and automations."""
self.get_devices(refresh=True)
self.get_automations(refresh=True) |
def get_qpimage_raw(self, idx=0):
"""Return QPImage without background correction"""
qpi = qpimage.QPImage(h5file=self.path,
h5mode="r",
h5dtype=self.as_type,
).copy()
# Remove previously performed background correction
qpi.set_bg_data(None)
# Force meta data
for key in self.meta_data:
qpi[key] = self.meta_data[key]
# set identifier
qpi["identifier"] = self.get_identifier(idx)
        return qpi | Return QPImage without background correction | Below is the instruction that describes the task:
### Input:
Return QPImage without background correction
### Response:
def get_qpimage_raw(self, idx=0):
"""Return QPImage without background correction"""
qpi = qpimage.QPImage(h5file=self.path,
h5mode="r",
h5dtype=self.as_type,
).copy()
# Remove previously performed background correction
qpi.set_bg_data(None)
# Force meta data
for key in self.meta_data:
qpi[key] = self.meta_data[key]
# set identifier
qpi["identifier"] = self.get_identifier(idx)
return qpi |
async def spop(self, name, count=None):
"""
Remove and return a random member of set ``name``
``count`` should be type of int and default set to 1.
If ``count`` is supplied, pops a list of ``count`` random
+ members of set ``name``
"""
if count and isinstance(count, int):
return await self.execute_command('SPOP', name, count)
else:
return await self.execute_command('SPOP', name) | Remove and return a random member of set ``name``
``count`` should be type of int and default set to 1.
If ``count`` is supplied, pops a list of ``count`` random
+ members of set ``name`` | Below is the instruction that describes the task:
### Input:
Remove and return a random member of set ``name``
``count`` should be type of int and default set to 1.
If ``count`` is supplied, pops a list of ``count`` random
+ members of set ``name``
### Response:
async def spop(self, name, count=None):
"""
Remove and return a random member of set ``name``
``count`` should be type of int and default set to 1.
If ``count`` is supplied, pops a list of ``count`` random
+ members of set ``name``
"""
if count and isinstance(count, int):
return await self.execute_command('SPOP', name, count)
else:
return await self.execute_command('SPOP', name) |
def _walk_polyline(tid, intersect, T, mesh, plane, dist_tol):
"""
Given an intersection, walk through the mesh triangles, computing
intersection with the cut plane for each visited triangle and adding
    those intersections to a polyline.
"""
T = set(T)
p = []
# Loop until we have explored all the triangles for the current
# polyline
while True:
p.append(intersect[1])
tid, intersections, T = get_next_triangle(mesh, T, plane,
intersect, dist_tol)
if tid is None:
break
# get_next_triangle returns triangles that our plane actually
# intersects (as opposed to touching only a single vertex),
# hence the assert
assert len(intersections) == 2
# Of the two returned intersections, one should have the
# intersection point equal to p[-1]
if la.norm(intersections[0][1] - p[-1]) < dist_tol:
intersect = intersections[1]
else:
assert la.norm(intersections[1][1] - p[-1]) < dist_tol, \
'%s not close to %s' % (str(p[-1]), str(intersections))
intersect = intersections[0]
return p, T | Given an intersection, walk through the mesh triangles, computing
intersection with the cut plane for each visited triangle and adding
those intersections to a polyline. | Below is the the instruction that describes the task:
### Input:
Given an intersection, walk through the mesh triangles, computing
intersection with the cut plane for each visited triangle and adding
those intersections to a polyline.
### Response:
def _walk_polyline(tid, intersect, T, mesh, plane, dist_tol):
"""
Given an intersection, walk through the mesh triangles, computing
intersection with the cut plane for each visited triangle and adding
those intersections to a polyline.
"""
T = set(T)
p = []
# Loop until we have explored all the triangles for the current
# polyline
while True:
p.append(intersect[1])
tid, intersections, T = get_next_triangle(mesh, T, plane,
intersect, dist_tol)
if tid is None:
break
# get_next_triangle returns triangles that our plane actually
# intersects (as opposed to touching only a single vertex),
# hence the assert
assert len(intersections) == 2
# Of the two returned intersections, one should have the
# intersection point equal to p[-1]
if la.norm(intersections[0][1] - p[-1]) < dist_tol:
intersect = intersections[1]
else:
assert la.norm(intersections[1][1] - p[-1]) < dist_tol, \
'%s not close to %s' % (str(p[-1]), str(intersections))
intersect = intersections[0]
return p, T |
def create_record(destination, file_ids, width=None, height=None):
"""
Creates a master record for the HTML report; this doesn't contain the actual HTML, but reports
are required to be records rather than files and we can link more than one HTML file to a report
"""
[project, path, name] = parse_destination(destination)
files = [dxpy.dxlink(file_id) for file_id in file_ids]
details = {"files": files}
if width:
details["width"] = width
if height:
details["height"] = height
try:
dxrecord = dxpy.new_dxrecord(project=project, folder=path, types=["Report", "HTMLReport"], details=details, name=name)
dxrecord.close()
return dxrecord.get_id()
except dxpy.DXAPIError as ex:
parser.error("Could not create an HTML report record on DNAnexus servers! ({ex})".format(ex=ex)) | Creates a master record for the HTML report; this doesn't contain contain the actual HTML, but reports
are required to be records rather than files and we can link more than one HTML file to a report | Below is the the instruction that describes the task:
### Input:
Creates a master record for the HTML report; this doesn't contain the actual HTML, but reports
are required to be records rather than files and we can link more than one HTML file to a report
### Response:
def create_record(destination, file_ids, width=None, height=None):
"""
Creates a master record for the HTML report; this doesn't contain the actual HTML, but reports
are required to be records rather than files and we can link more than one HTML file to a report
"""
[project, path, name] = parse_destination(destination)
files = [dxpy.dxlink(file_id) for file_id in file_ids]
details = {"files": files}
if width:
details["width"] = width
if height:
details["height"] = height
try:
dxrecord = dxpy.new_dxrecord(project=project, folder=path, types=["Report", "HTMLReport"], details=details, name=name)
dxrecord.close()
return dxrecord.get_id()
except dxpy.DXAPIError as ex:
parser.error("Could not create an HTML report record on DNAnexus servers! ({ex})".format(ex=ex)) |
def start(self):
"""Indicate that we are performing work in a thread.
:returns: multiprocessing job object
"""
if self.run is True:
self.job = multiprocessing.Process(target=self.indicator)
self.job.start()
return self.job | Indicate that we are performing work in a thread.
:returns: multiprocessing job object | Below is the the instruction that describes the task:
### Input:
Indicate that we are performing work in a thread.
:returns: multiprocessing job object
### Response:
def start(self):
"""Indicate that we are performing work in a thread.
:returns: multiprocessing job object
"""
if self.run is True:
self.job = multiprocessing.Process(target=self.indicator)
self.job.start()
return self.job |
def prod(self, values, axis=0, dtype=None):
"""compute the product over each group
Parameters
----------
values : array_like, [keys, ...]
values to multiply per group
axis : int, optional
alternative reduction axis for values
dtype : output dtype
Returns
-------
unique: ndarray, [groups]
unique keys
reduced : ndarray, [groups, ...]
value array, reduced over groups
"""
values = np.asarray(values)
return self.unique, self.reduce(values, axis=axis, dtype=dtype, operator=np.multiply) | compute the product over each group
Parameters
----------
values : array_like, [keys, ...]
values to multiply per group
axis : int, optional
alternative reduction axis for values
dtype : output dtype
Returns
-------
unique: ndarray, [groups]
unique keys
reduced : ndarray, [groups, ...]
value array, reduced over groups | Below is the the instruction that describes the task:
### Input:
compute the product over each group
Parameters
----------
values : array_like, [keys, ...]
values to multiply per group
axis : int, optional
alternative reduction axis for values
dtype : output dtype
Returns
-------
unique: ndarray, [groups]
unique keys
reduced : ndarray, [groups, ...]
value array, reduced over groups
### Response:
def prod(self, values, axis=0, dtype=None):
"""compute the product over each group
Parameters
----------
values : array_like, [keys, ...]
values to multiply per group
axis : int, optional
alternative reduction axis for values
dtype : output dtype
Returns
-------
unique: ndarray, [groups]
unique keys
reduced : ndarray, [groups, ...]
value array, reduced over groups
"""
values = np.asarray(values)
return self.unique, self.reduce(values, axis=axis, dtype=dtype, operator=np.multiply) |
def _make_links_absolute(html, base_url):
"""
Make all links absolute.
"""
url_changes = []
soup = BeautifulSoup(html)
for tag in soup.find_all('a', href=True):
old = tag['href']
fixed = urljoin(base_url, old)
if old != fixed:
url_changes.append((old, fixed))
tag['href'] = fixed
for tag in soup.find_all('img', src=True):
old = tag['src']
fixed = urljoin(base_url, old)
if old != fixed:
url_changes.append((old, fixed))
tag['src'] = fixed
return mark_safe(six.text_type(soup)), url_changes | Make all links absolute. | Below is the the instruction that describes the task:
### Input:
Make all links absolute.
### Response:
def _make_links_absolute(html, base_url):
"""
Make all links absolute.
"""
url_changes = []
soup = BeautifulSoup(html)
for tag in soup.find_all('a', href=True):
old = tag['href']
fixed = urljoin(base_url, old)
if old != fixed:
url_changes.append((old, fixed))
tag['href'] = fixed
for tag in soup.find_all('img', src=True):
old = tag['src']
fixed = urljoin(base_url, old)
if old != fixed:
url_changes.append((old, fixed))
tag['src'] = fixed
return mark_safe(six.text_type(soup)), url_changes |
def check_type(self, value):
"""Hook for type-checking, invoked during assignment. Allows size 1
numpy arrays and lists, but raises TypeError if value can not
be cast to a scalar.
"""
try:
scalar = asscalar(value)
except ValueError as e:
raise TypeError(e)
super(Parameter, self).check_type(scalar) | Hook for type-checking, invoked during assignment. Allows size 1
numpy arrays and lists, but raises TypeError if value can not
be cast to a scalar. | Below is the the instruction that describes the task:
### Input:
Hook for type-checking, invoked during assignment. Allows size 1
numpy arrays and lists, but raises TypeError if value can not
be cast to a scalar.
### Response:
def check_type(self, value):
"""Hook for type-checking, invoked during assignment. Allows size 1
numpy arrays and lists, but raises TypeError if value can not
be cast to a scalar.
"""
try:
scalar = asscalar(value)
except ValueError as e:
raise TypeError(e)
super(Parameter, self).check_type(scalar) |
def client_file(self):
"""Specify path to the ipcontroller-client.json file.
This file is stored in the ipython_dir/profile folders.
Returns :
- str, File path to client file
"""
return os.path.join(self.ipython_dir,
'profile_{0}'.format(self.profile),
'security/ipcontroller-client.json') | Specify path to the ipcontroller-client.json file.
This file is stored in the ipython_dir/profile folders.
Returns :
- str, File path to client file | Below is the the instruction that describes the task:
### Input:
Specify path to the ipcontroller-client.json file.
This file is stored in the ipython_dir/profile folders.
Returns :
- str, File path to client file
### Response:
def client_file(self):
"""Specify path to the ipcontroller-client.json file.
This file is stored in the ipython_dir/profile folders.
Returns :
- str, File path to client file
"""
return os.path.join(self.ipython_dir,
'profile_{0}'.format(self.profile),
'security/ipcontroller-client.json') |
def block_process_call(self, addr, cmd, vals):
"""block_process_call(addr, cmd, vals) -> results
Perform SMBus Block Process Call transaction.
"""
self._set_addr(addr)
data = ffi.new("union i2c_smbus_data *")
list_to_smbus_data(data, vals)
if SMBUS.i2c_smbus_access(self._fd, SMBUS.I2C_SMBUS_WRITE,
ffi.cast("__u8", cmd),
SMBUS.I2C_SMBUS_BLOCK_PROC_CALL,
data):
raise IOError(ffi.errno)
return smbus_data_to_list(data) | block_process_call(addr, cmd, vals) -> results
Perform SMBus Block Process Call transaction. | Below is the the instruction that describes the task:
### Input:
block_process_call(addr, cmd, vals) -> results
Perform SMBus Block Process Call transaction.
### Response:
def block_process_call(self, addr, cmd, vals):
"""block_process_call(addr, cmd, vals) -> results
Perform SMBus Block Process Call transaction.
"""
self._set_addr(addr)
data = ffi.new("union i2c_smbus_data *")
list_to_smbus_data(data, vals)
if SMBUS.i2c_smbus_access(self._fd, SMBUS.I2C_SMBUS_WRITE,
ffi.cast("__u8", cmd),
SMBUS.I2C_SMBUS_BLOCK_PROC_CALL,
data):
raise IOError(ffi.errno)
return smbus_data_to_list(data) |
def end_parallel(self):
"""
Ends a parallel region by merging the channels into a single stream.
Returns:
Stream: Stream for which subsequent transformations are no longer parallelized.
.. seealso:: :py:meth:`set_parallel`, :py:meth:`parallel`
"""
outport = self.oport
if isinstance(self.oport.operator, streamsx.topology.graph.Marker):
if self.oport.operator.kind == "$Union$":
pto = self.topology.graph.addPassThruOperator()
pto.addInputPort(outputPort=self.oport)
outport = pto.addOutputPort(schema=self.oport.schema)
op = self.topology.graph.addOperator("$EndParallel$")
op.addInputPort(outputPort=outport)
oport = op.addOutputPort(schema=self.oport.schema)
endP = Stream(self.topology, oport)
return endP | Ends a parallel region by merging the channels into a single stream.
Returns:
Stream: Stream for which subsequent transformations are no longer parallelized.
.. seealso:: :py:meth:`set_parallel`, :py:meth:`parallel` | Below is the the instruction that describes the task:
### Input:
Ends a parallel region by merging the channels into a single stream.
Returns:
Stream: Stream for which subsequent transformations are no longer parallelized.
.. seealso:: :py:meth:`set_parallel`, :py:meth:`parallel`
### Response:
def end_parallel(self):
"""
Ends a parallel region by merging the channels into a single stream.
Returns:
Stream: Stream for which subsequent transformations are no longer parallelized.
.. seealso:: :py:meth:`set_parallel`, :py:meth:`parallel`
"""
outport = self.oport
if isinstance(self.oport.operator, streamsx.topology.graph.Marker):
if self.oport.operator.kind == "$Union$":
pto = self.topology.graph.addPassThruOperator()
pto.addInputPort(outputPort=self.oport)
outport = pto.addOutputPort(schema=self.oport.schema)
op = self.topology.graph.addOperator("$EndParallel$")
op.addInputPort(outputPort=outport)
oport = op.addOutputPort(schema=self.oport.schema)
endP = Stream(self.topology, oport)
return endP |
def from_hdf5(cls, f):
"""
Load an object from an HDF5 file.
Requires ``h5py``.
Parameters
----------
f : str, :class:`h5py.File`
Either the filename or an open HDF5 file.
"""
if isinstance(f, str):
import h5py
f = h5py.File(f)
pos = quantity_from_hdf5(f['pos'])
vel = quantity_from_hdf5(f['vel'])
frame = None
if 'frame' in f:
g = f['frame']
frame_mod = g.attrs['module']
frame_cls = g.attrs['class']
frame_units = [u.Unit(x.decode('utf-8')) for x in g['units']]
if u.dimensionless_unscaled in frame_units:
units = DimensionlessUnitSystem()
else:
units = UnitSystem(*frame_units)
pars = dict()
for k in g['parameters']:
pars[k] = quantity_from_hdf5(g['parameters/'+k])
exec("from {0} import {1}".format(frame_mod, frame_cls))
frame_cls = eval(frame_cls)
frame = frame_cls(units=units, **pars)
return cls(pos=pos, vel=vel, frame=frame) | Load an object from an HDF5 file.
Requires ``h5py``.
Parameters
----------
f : str, :class:`h5py.File`
Either the filename or an open HDF5 file. | Below is the the instruction that describes the task:
### Input:
Load an object from an HDF5 file.
Requires ``h5py``.
Parameters
----------
f : str, :class:`h5py.File`
Either the filename or an open HDF5 file.
### Response:
def from_hdf5(cls, f):
"""
Load an object from an HDF5 file.
Requires ``h5py``.
Parameters
----------
f : str, :class:`h5py.File`
Either the filename or an open HDF5 file.
"""
if isinstance(f, str):
import h5py
f = h5py.File(f)
pos = quantity_from_hdf5(f['pos'])
vel = quantity_from_hdf5(f['vel'])
frame = None
if 'frame' in f:
g = f['frame']
frame_mod = g.attrs['module']
frame_cls = g.attrs['class']
frame_units = [u.Unit(x.decode('utf-8')) for x in g['units']]
if u.dimensionless_unscaled in frame_units:
units = DimensionlessUnitSystem()
else:
units = UnitSystem(*frame_units)
pars = dict()
for k in g['parameters']:
pars[k] = quantity_from_hdf5(g['parameters/'+k])
exec("from {0} import {1}".format(frame_mod, frame_cls))
frame_cls = eval(frame_cls)
frame = frame_cls(units=units, **pars)
return cls(pos=pos, vel=vel, frame=frame) |
def open_book(self, for_writing=False) -> piecash.Book:
"""
Opens the database. Call this using 'with'.
If database file is not found, an in-memory database will be created.
"""
filename = None
# check if the file path is already a URL.
file_url = urllib.parse.urlparse(self.filename)
if file_url.scheme == "file" or file_url.scheme == "sqlite":
filename = file_url.path[1:]
else:
filename = self.filename
if not os.path.isfile(filename):
log(WARN, "Database %s requested but not found. Creating an in-memory book.", filename)
return self.create_book()
access_type = "read/write" if for_writing else "readonly"
log(INFO, "Using %s in %s mode.", filename, access_type)
# file_path = path.relpath(self.filename)
file_path = path.abspath(filename)
if not for_writing:
book = piecash.open_book(file_path, open_if_lock=True)
else:
book = piecash.open_book(file_path, open_if_lock=True, readonly=False)
# book = create_book()
return book | Opens the database. Call this using 'with'.
If database file is not found, an in-memory database will be created. | Below is the the instruction that describes the task:
### Input:
Opens the database. Call this using 'with'.
If database file is not found, an in-memory database will be created.
### Response:
def open_book(self, for_writing=False) -> piecash.Book:
"""
Opens the database. Call this using 'with'.
If database file is not found, an in-memory database will be created.
"""
filename = None
# check if the file path is already a URL.
file_url = urllib.parse.urlparse(self.filename)
if file_url.scheme == "file" or file_url.scheme == "sqlite":
filename = file_url.path[1:]
else:
filename = self.filename
if not os.path.isfile(filename):
log(WARN, "Database %s requested but not found. Creating an in-memory book.", filename)
return self.create_book()
access_type = "read/write" if for_writing else "readonly"
log(INFO, "Using %s in %s mode.", filename, access_type)
# file_path = path.relpath(self.filename)
file_path = path.abspath(filename)
if not for_writing:
book = piecash.open_book(file_path, open_if_lock=True)
else:
book = piecash.open_book(file_path, open_if_lock=True, readonly=False)
# book = create_book()
return book |
def threw(self, error_type=None):
"""
Determining whether the exception is thrown
Args:
error_type:
None: checking without specified exception
Specified Exception
Return: Boolean
"""
if not error_type:
return True if len(self.exceptions) > 0 else False
else:
return uch.obj_in_list(self.exceptions, error_type) | Determining whether the exception is thrown
Args:
error_type:
None: checking without specified exception
Specified Exception
Return: Boolean | Below is the the instruction that describes the task:
### Input:
Determining whether the exception is thrown
Args:
error_type:
None: checking without specified exception
Specified Exception
Return: Boolean
### Response:
def threw(self, error_type=None):
"""
Determining whether the exception is thrown
Args:
error_type:
None: checking without specified exception
Specified Exception
Return: Boolean
"""
if not error_type:
return True if len(self.exceptions) > 0 else False
else:
return uch.obj_in_list(self.exceptions, error_type) |
def binom(n, k):
"""Binomial coefficients for :math:`n \choose k`
:param n,k: non-negative integers
:complexity: O(k)
"""
prod = 1
for i in range(k):
prod = (prod * (n - i)) // (i + 1)
return prod | Binomial coefficients for :math:`n \choose k`
:param n,k: non-negative integers
:complexity: O(k) | Below is the the instruction that describes the task:
### Input:
Binomial coefficients for :math:`n \choose k`
:param n,k: non-negative integers
:complexity: O(k)
### Response:
def binom(n, k):
"""Binomial coefficients for :math:`n \choose k`
:param n,k: non-negative integers
:complexity: O(k)
"""
prod = 1
for i in range(k):
prod = (prod * (n - i)) // (i + 1)
return prod |
def _handle_status(self, key, value):
"""Parse a status code from the attached GnuPG process.
:raises: :exc:`~exceptions.ValueError` if the status message is unknown.
"""
if key in ("GOOD_PASSPHRASE"):
pass
elif key == "KEY_CONSIDERED":
self.status = key.replace("_", " ").lower()
elif key == "KEY_NOT_CREATED":
self.status = 'key not created'
elif key == "KEY_CREATED":
(self.type, self.fingerprint) = value.split()
self.status = 'key created'
elif key == "NODATA":
self.status = nodata(value)
elif key == "PROGRESS":
self.status = progress(value.split(' ', 1)[0])
elif key == "PINENTRY_LAUNCHED":
log.warn(("GnuPG has just attempted to launch whichever pinentry "
"program you have configured, in order to obtain the "
"passphrase for this key. If you did not use the "
"`passphrase=` parameter, please try doing so. Otherwise, "
"see Issues #122 and #137:"
"\nhttps://github.com/isislovecruft/python-gnupg/issues/122"
"\nhttps://github.com/isislovecruft/python-gnupg/issues/137"))
self.status = 'key not created'
elif (key.startswith("TRUST_") or
key.startswith("PKA_TRUST_") or
key == "NEWSIG"):
pass
else:
raise ValueError("Unknown status message: %r" % key)
if self.type in ('B', 'P'):
self.primary_created = True
if self.type in ('B', 'S'):
self.subkey_created = True | Parse a status code from the attached GnuPG process.
:raises: :exc:`~exceptions.ValueError` if the status message is unknown. | Below is the the instruction that describes the task:
### Input:
Parse a status code from the attached GnuPG process.
:raises: :exc:`~exceptions.ValueError` if the status message is unknown.
### Response:
def _handle_status(self, key, value):
"""Parse a status code from the attached GnuPG process.
:raises: :exc:`~exceptions.ValueError` if the status message is unknown.
"""
if key in ("GOOD_PASSPHRASE"):
pass
elif key == "KEY_CONSIDERED":
self.status = key.replace("_", " ").lower()
elif key == "KEY_NOT_CREATED":
self.status = 'key not created'
elif key == "KEY_CREATED":
(self.type, self.fingerprint) = value.split()
self.status = 'key created'
elif key == "NODATA":
self.status = nodata(value)
elif key == "PROGRESS":
self.status = progress(value.split(' ', 1)[0])
elif key == "PINENTRY_LAUNCHED":
log.warn(("GnuPG has just attempted to launch whichever pinentry "
"program you have configured, in order to obtain the "
"passphrase for this key. If you did not use the "
"`passphrase=` parameter, please try doing so. Otherwise, "
"see Issues #122 and #137:"
"\nhttps://github.com/isislovecruft/python-gnupg/issues/122"
"\nhttps://github.com/isislovecruft/python-gnupg/issues/137"))
self.status = 'key not created'
elif (key.startswith("TRUST_") or
key.startswith("PKA_TRUST_") or
key == "NEWSIG"):
pass
else:
raise ValueError("Unknown status message: %r" % key)
if self.type in ('B', 'P'):
self.primary_created = True
if self.type in ('B', 'S'):
self.subkey_created = True |
def check_latitude(self, ds):
'''
Check variable(s) that define latitude and are defined correctly according to CF.
CF §4.1 Variables representing latitude must always explicitly include
the units attribute; there is no default value. The recommended unit
of latitude is degrees_north. Also acceptable are degree_north,
degree_N, degrees_N, degreeN, and degreesN.
Optionally, the latitude type may be indicated additionally by
providing the standard_name attribute with the value latitude, and/or
the axis attribute with the value Y.
- Four checks per latitude variable
- (H) latitude has units attribute
- (M) latitude has an allowed units attribute
- (L) latitude uses degrees_north (if not in rotated pole)
- (M) latitude defines either standard_name or axis
:param netCDF4.Dataset ds: An open netCDF dataset
:rtype: list
:return: List of results
'''
ret_val = []
allowed_lat_units = [
'degrees_north',
'degree_north',
'degree_n',
'degrees_n',
'degreen',
'degreesn'
]
# Determine the grid mappings in this dataset
grid_mapping = []
grid_mapping_variables = cfutil.get_grid_mapping_variables(ds)
for name in grid_mapping_variables:
variable = ds.variables[name]
grid_mapping_name = getattr(variable, 'grid_mapping_name', None)
if grid_mapping_name:
grid_mapping.append(grid_mapping_name)
latitude_variables = cfutil.get_latitude_variables(ds)
for latitude in latitude_variables:
variable = ds.variables[latitude]
units = getattr(variable, 'units', None)
units_is_string = isinstance(units, basestring)
standard_name = getattr(variable, 'standard_name', None)
axis = getattr(variable, 'axis', None)
# Check that latitude defines units
valid_latitude = TestCtx(BaseCheck.HIGH, self.section_titles['4.1'])
valid_latitude.assert_true(units is not None,
"latitude variable '{}' must define units".format(latitude))
ret_val.append(valid_latitude.to_result())
# Check that latitude uses allowed units
allowed_units = TestCtx(BaseCheck.MEDIUM, self.section_titles['4.1'])
if standard_name == 'grid_latitude':
e_n_units = cfutil.VALID_LAT_UNITS | cfutil.VALID_LON_UNITS
# check that the units aren't in east and north degrees units,
# but are convertible to angular units
allowed_units.assert_true(units not in e_n_units and
Unit(units) == Unit('degree'),
"Grid latitude variable '{}' should use degree equivalent units without east or north components. "
"Current units are {}".format(latitude, units))
else:
allowed_units.assert_true(units_is_string and units.lower() in allowed_lat_units,
"latitude variable '{}' should define valid units for latitude"
"".format(latitude))
ret_val.append(allowed_units.to_result())
# Check that latitude uses degrees_north
if standard_name == 'latitude' and units != 'degrees_north':
# This is only a recommendation and we won't penalize but we
# will include a recommended action.
msg = ("CF recommends latitude variable '{}' to use units degrees_north"
"".format(latitude))
recommended_units = Result(BaseCheck.LOW, (1, 1), self.section_titles['4.1'], [msg])
ret_val.append(recommended_units)
y_variables = ds.get_variables_by_attributes(axis='Y')
# Check that latitude defines either standard_name or axis
definition = TestCtx(BaseCheck.MEDIUM, self.section_titles['4.1'])
definition.assert_true(standard_name == 'latitude' or axis == 'Y' or y_variables != [],
"latitude variable '{}' should define standard_name='latitude' or axis='Y'"
"".format(latitude))
ret_val.append(definition.to_result())
return ret_val | Check variable(s) that define latitude and are defined correctly according to CF.
CF §4.1 Variables representing latitude must always explicitly include
the units attribute; there is no default value. The recommended unit
of latitude is degrees_north. Also acceptable are degree_north,
degree_N, degrees_N, degreeN, and degreesN.
Optionally, the latitude type may be indicated additionally by
providing the standard_name attribute with the value latitude, and/or
the axis attribute with the value Y.
- Four checks per latitude variable
- (H) latitude has units attribute
- (M) latitude has an allowed units attribute
- (L) latitude uses degrees_north (if not in rotated pole)
- (M) latitude defines either standard_name or axis
:param netCDF4.Dataset ds: An open netCDF dataset
:rtype: list
:return: List of results | Below is the the instruction that describes the task:
### Input:
Check variable(s) that define latitude and are defined correctly according to CF.
CF §4.1 Variables representing latitude must always explicitly include
the units attribute; there is no default value. The recommended unit
of latitude is degrees_north. Also acceptable are degree_north,
degree_N, degrees_N, degreeN, and degreesN.
Optionally, the latitude type may be indicated additionally by
providing the standard_name attribute with the value latitude, and/or
the axis attribute with the value Y.
- Four checks per latitude variable
- (H) latitude has units attribute
- (M) latitude has an allowed units attribute
- (L) latitude uses degrees_north (if not in rotated pole)
- (M) latitude defines either standard_name or axis
:param netCDF4.Dataset ds: An open netCDF dataset
:rtype: list
:return: List of results
### Response:
def check_latitude(self, ds):
'''
Check variable(s) that define latitude and are defined correctly according to CF.
CF §4.1 Variables representing latitude must always explicitly include
the units attribute; there is no default value. The recommended unit
of latitude is degrees_north. Also acceptable are degree_north,
degree_N, degrees_N, degreeN, and degreesN.
Optionally, the latitude type may be indicated additionally by
providing the standard_name attribute with the value latitude, and/or
the axis attribute with the value Y.
- Four checks per latitude variable
- (H) latitude has units attribute
- (M) latitude has an allowed units attribute
- (L) latitude uses degrees_north (if not in rotated pole)
- (M) latitude defines either standard_name or axis
:param netCDF4.Dataset ds: An open netCDF dataset
:rtype: list
:return: List of results
'''
ret_val = []
allowed_lat_units = [
'degrees_north',
'degree_north',
'degree_n',
'degrees_n',
'degreen',
'degreesn'
]
# Determine the grid mappings in this dataset
grid_mapping = []
grid_mapping_variables = cfutil.get_grid_mapping_variables(ds)
for name in grid_mapping_variables:
variable = ds.variables[name]
grid_mapping_name = getattr(variable, 'grid_mapping_name', None)
if grid_mapping_name:
grid_mapping.append(grid_mapping_name)
latitude_variables = cfutil.get_latitude_variables(ds)
for latitude in latitude_variables:
variable = ds.variables[latitude]
units = getattr(variable, 'units', None)
units_is_string = isinstance(units, basestring)
standard_name = getattr(variable, 'standard_name', None)
axis = getattr(variable, 'axis', None)
# Check that latitude defines units
valid_latitude = TestCtx(BaseCheck.HIGH, self.section_titles['4.1'])
valid_latitude.assert_true(units is not None,
"latitude variable '{}' must define units".format(latitude))
ret_val.append(valid_latitude.to_result())
# Check that latitude uses allowed units
allowed_units = TestCtx(BaseCheck.MEDIUM, self.section_titles['4.1'])
if standard_name == 'grid_latitude':
e_n_units = cfutil.VALID_LAT_UNITS | cfutil.VALID_LON_UNITS
# check that the units aren't in east and north degrees units,
# but are convertible to angular units
allowed_units.assert_true(units not in e_n_units and
Unit(units) == Unit('degree'),
"Grid latitude variable '{}' should use degree equivalent units without east or north components. "
"Current units are {}".format(latitude, units))
else:
allowed_units.assert_true(units_is_string and units.lower() in allowed_lat_units,
"latitude variable '{}' should define valid units for latitude"
"".format(latitude))
ret_val.append(allowed_units.to_result())
# Check that latitude uses degrees_north
if standard_name == 'latitude' and units != 'degrees_north':
# This is only a recommendation and we won't penalize but we
# will include a recommended action.
msg = ("CF recommends latitude variable '{}' to use units degrees_north"
"".format(latitude))
recommended_units = Result(BaseCheck.LOW, (1, 1), self.section_titles['4.1'], [msg])
ret_val.append(recommended_units)
y_variables = ds.get_variables_by_attributes(axis='Y')
# Check that latitude defines either standard_name or axis
definition = TestCtx(BaseCheck.MEDIUM, self.section_titles['4.1'])
definition.assert_true(standard_name == 'latitude' or axis == 'Y' or y_variables != [],
"latitude variable '{}' should define standard_name='latitude' or axis='Y'"
"".format(latitude))
ret_val.append(definition.to_result())
return ret_val |
def add_scripts_to_package():
"""
Update the "scripts" parameter of the setup_arguments with any scripts
found in the "scripts" directory.
:return:
"""
global setup_arguments
if os.path.isdir('scripts'):
setup_arguments['scripts'] = [
os.path.join('scripts', f) for f in os.listdir('scripts')
] | Update the "scripts" parameter of the setup_arguments with any scripts
found in the "scripts" directory.
:return: | Below is the the instruction that describes the task:
### Input:
Update the "scripts" parameter of the setup_arguments with any scripts
found in the "scripts" directory.
:return:
### Response:
def add_scripts_to_package():
"""
Update the "scripts" parameter of the setup_arguments with any scripts
found in the "scripts" directory.
:return:
"""
global setup_arguments
if os.path.isdir('scripts'):
setup_arguments['scripts'] = [
os.path.join('scripts', f) for f in os.listdir('scripts')
] |
def _prepare_read(self, start, stop, frames):
"""Seek to start frame and calculate length."""
if start != 0 and not self.seekable():
raise ValueError("start is only allowed for seekable files")
if frames >= 0 and stop is not None:
raise TypeError("Only one of {frames, stop} may be used")
start, stop, _ = slice(start, stop).indices(self.frames)
if stop < start:
stop = start
if frames < 0:
frames = stop - start
if self.seekable():
self.seek(start, SEEK_SET)
return frames | Seek to start frame and calculate length. | Below is the the instruction that describes the task:
### Input:
Seek to start frame and calculate length.
### Response:
def _prepare_read(self, start, stop, frames):
"""Seek to start frame and calculate length."""
if start != 0 and not self.seekable():
raise ValueError("start is only allowed for seekable files")
if frames >= 0 and stop is not None:
raise TypeError("Only one of {frames, stop} may be used")
start, stop, _ = slice(start, stop).indices(self.frames)
if stop < start:
stop = start
if frames < 0:
frames = stop - start
if self.seekable():
self.seek(start, SEEK_SET)
return frames |
def list_running_zones(self):
"""
Returns the currently active relay.
:returns: Returns the running relay number or None if no relays are
active.
:rtype: string
"""
self.update_controller_info()
if self.running is None or not self.running:
return None
return int(self.running[0]['relay']) | Returns the currently active relay.
:returns: Returns the running relay number or None if no relays are
active.
:rtype: string | Below is the the instruction that describes the task:
### Input:
Returns the currently active relay.
:returns: Returns the running relay number or None if no relays are
active.
:rtype: string
### Response:
def list_running_zones(self):
"""
Returns the currently active relay.
:returns: Returns the running relay number or None if no relays are
active.
:rtype: string
"""
self.update_controller_info()
if self.running is None or not self.running:
return None
return int(self.running[0]['relay']) |
def ori(ip, rc=None, r=None, iq=None, ico=None, pl=None, fl=None, fs=None,
ot=None, coe=None, moc=None):
# pylint: disable=too-many-arguments, redefined-outer-name, invalid-name
"""
This function is a wrapper for
:meth:`~pywbem.WBEMConnection.OpenReferenceInstances`.
Open an enumeration session to retrieve the association instances that
reference a source instance.
Use the :func:`~wbemcli.piwp` function to retrieve the next set of
instances or the :func:`~wbemcli.ce` function to close the enumeration
session before it is complete.
Parameters:
ip (:class:`~pywbem.CIMInstanceName`):
Source instance path.
rc (:term:`string`):
ResultClass filter: Include only traversals across this association
(result) class.
`None` means this filter is not applied.
r (:term:`string`):
Role filter: Include only traversals from this role (= reference
name) in source object.
`None` means this filter is not applied.
iq (:class:`py:bool`):
IncludeQualifiers flag: Include qualifiers.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: Clients cannot rely on qualifiers to
be returned in this operation.
ico (:class:`py:bool`):
IncludeClassOrigin flag: Include class origin information for the
properties in the retrieved instances.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: WBEM servers may either implement this
parameter as specified, or may treat any specified value as `False`.
pl (:term:`string` or :term:`py:iterable` of :term:`string`):
PropertyList: Names of properties to be included (if not otherwise
excluded). An empty iterable indicates to include no properties.
If `None`, all properties will be included.
fl (:term:`string`):
Filter query language to be used for the filter defined in the `fs`
parameter. The DMTF-defined Filter Query Language
(see :term:`DSP0212`) is specified as "DMTF:FQL".
`None` means that no such filtering is performed.
fs (:term:`string`):
Filter to apply to objects to be returned. Based on filter query
language defined by `fl` parameter.
`None` means that no such filtering is performed.
ot (:class:`~pywbem.Uint32`):
Operation timeout in seconds. This is the minimum time the WBEM server
must keep the enumeration session open between requests on that
session.
A value of 0 indicates that the server should never time out.
The server may reject the proposed value.
`None` will cause the server to use its default timeout.
coe (:class:`py:bool`):
Continue on error flag.
`None` will cause the server to use its default of `False`.
moc (:class:`~pywbem.Uint32`):
Maximum number of objects to return for this operation.
`None` will cause the server to use its default of 0.
Returns:
A :func:`~py:collections.namedtuple` object containing the following
named items:
* **instances** (list of :class:`~pywbem.CIMInstance`):
The retrieved instances.
* **eos** (:class:`py:bool`):
`True` if the enumeration session is exhausted after this operation.
Otherwise `eos` is `False` and the `context` item is the context
object for the next operation on the enumeration session.
* **context** (:func:`py:tuple` of server_context, namespace):
A context object identifying the open enumeration session, including
its current enumeration state, and the namespace. This object must be
supplied with the next pull or close operation for this enumeration
session.
"""
return CONN.OpenReferenceInstances(ip,
ResultClass=rc,
Role=r,
IncludeQualifiers=iq,
IncludeClassOrigin=ico,
PropertyList=pl,
FilterQueryLanguage=fl,
FilterQuery=fs,
OperationTimeout=ot,
ContinueOnError=coe,
MaxObjectCount=moc) | This function is a wrapper for
:meth:`~pywbem.WBEMConnection.OpenReferenceInstances`.
Open an enumeration session to retrieve the association instances that
reference a source instance.
Use the :func:`~wbemcli.piwp` function to retrieve the next set of
instances or the :func:`~wbemcli.ce` function to close the enumeration
session before it is complete.
Parameters:
ip (:class:`~pywbem.CIMInstanceName`):
Source instance path.
rc (:term:`string`):
ResultClass filter: Include only traversals across this association
(result) class.
`None` means this filter is not applied.
r (:term:`string`):
Role filter: Include only traversals from this role (= reference
name) in source object.
`None` means this filter is not applied.
iq (:class:`py:bool`):
IncludeQualifiers flag: Include qualifiers.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: Clients cannot rely on qualifiers to
be returned in this operation.
ico (:class:`py:bool`):
IncludeClassOrigin flag: Include class origin information for the
properties in the retrieved instances.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: WBEM servers may either implement this
parameter as specified, or may treat any specified value as `False`.
pl (:term:`string` or :term:`py:iterable` of :term:`string`):
PropertyList: Names of properties to be included (if not otherwise
excluded). An empty iterable indicates to include no properties.
If `None`, all properties will be included.
fl (:term:`string`):
Filter query language to be used for the filter defined in the `fs`
parameter. The DMTF-defined Filter Query Language
(see :term:`DSP0212`) is specified as "DMTF:FQL".
`None` means that no such filtering is performed.
fs (:term:`string`):
Filter to apply to objects to be returned. Based on filter query
language defined by `fl` parameter.
`None` means that no such filtering is performed.
ot (:class:`~pywbem.Uint32`):
Operation timeout in seconds. This is the minimum time the WBEM server
must keep the enumeration session open between requests on that
session.
A value of 0 indicates that the server should never time out.
The server may reject the proposed value.
`None` will cause the server to use its default timeout.
coe (:class:`py:bool`):
Continue on error flag.
`None` will cause the server to use its default of `False`.
moc (:class:`~pywbem.Uint32`):
Maximum number of objects to return for this operation.
`None` will cause the server to use its default of 0.
Returns:
A :func:`~py:collections.namedtuple` object containing the following
named items:
* **instances** (list of :class:`~pywbem.CIMInstance`):
The retrieved instances.
* **eos** (:class:`py:bool`):
`True` if the enumeration session is exhausted after this operation.
Otherwise `eos` is `False` and the `context` item is the context
object for the next operation on the enumeration session.
* **context** (:func:`py:tuple` of server_context, namespace):
A context object identifying the open enumeration session, including
its current enumeration state, and the namespace. This object must be
supplied with the next pull or close operation for this enumeration
session. | Below is the the instruction that describes the task:
### Input:
This function is a wrapper for
:meth:`~pywbem.WBEMConnection.OpenReferenceInstances`.
Open an enumeration session to retrieve the association instances that
reference a source instance.
Use the :func:`~wbemcli.piwp` function to retrieve the next set of
instances or the :func:`~wbemcli.ce` function to close the enumeration
session before it is complete.
Parameters:
ip (:class:`~pywbem.CIMInstanceName`):
Source instance path.
rc (:term:`string`):
ResultClass filter: Include only traversals across this association
(result) class.
`None` means this filter is not applied.
r (:term:`string`):
Role filter: Include only traversals from this role (= reference
name) in source object.
`None` means this filter is not applied.
iq (:class:`py:bool`):
IncludeQualifiers flag: Include qualifiers.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: Clients cannot rely on qualifiers to
be returned in this operation.
ico (:class:`py:bool`):
IncludeClassOrigin flag: Include class origin information for the
properties in the retrieved instances.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: WBEM servers may either implement this
parameter as specified, or may treat any specified value as `False`.
pl (:term:`string` or :term:`py:iterable` of :term:`string`):
PropertyList: Names of properties to be included (if not otherwise
excluded). An empty iterable indicates to include no properties.
If `None`, all properties will be included.
fl (:term:`string`):
Filter query language to be used for the filter defined in the `fs`
parameter. The DMTF-defined Filter Query Language
(see :term:`DSP0212`) is specified as "DMTF:FQL".
`None` means that no such filtering is performed.
fs (:term:`string`):
Filter to apply to objects to be returned. Based on filter query
language defined by `fl` parameter.
`None` means that no such filtering is performed.
ot (:class:`~pywbem.Uint32`):
Operation timeout in seconds. This is the minimum time the WBEM server
must keep the enumeration session open between requests on that
session.
A value of 0 indicates that the server should never time out.
The server may reject the proposed value.
`None` will cause the server to use its default timeout.
coe (:class:`py:bool`):
Continue on error flag.
`None` will cause the server to use its default of `False`.
moc (:class:`~pywbem.Uint32`):
Maximum number of objects to return for this operation.
`None` will cause the server to use its default of 0.
Returns:
A :func:`~py:collections.namedtuple` object containing the following
named items:
* **instances** (list of :class:`~pywbem.CIMInstance`):
The retrieved instances.
* **eos** (:class:`py:bool`):
`True` if the enumeration session is exhausted after this operation.
Otherwise `eos` is `False` and the `context` item is the context
object for the next operation on the enumeration session.
* **context** (:func:`py:tuple` of server_context, namespace):
A context object identifying the open enumeration session, including
its current enumeration state, and the namespace. This object must be
supplied with the next pull or close operation for this enumeration
session.
### Response:
def ori(ip, rc=None, r=None, iq=None, ico=None, pl=None, fl=None, fs=None,
ot=None, coe=None, moc=None):
# pylint: disable=too-many-arguments, redefined-outer-name, invalid-name
"""
This function is a wrapper for
:meth:`~pywbem.WBEMConnection.OpenReferenceInstances`.
Open an enumeration session to retrieve the association instances that
reference a source instance.
Use the :func:`~wbemcli.piwp` function to retrieve the next set of
instances or the :func:`~wbemcli.ce` function to close the enumeration
session before it is complete.
Parameters:
ip (:class:`~pywbem.CIMInstanceName`):
Source instance path.
rc (:term:`string`):
ResultClass filter: Include only traversals across this association
(result) class.
`None` means this filter is not applied.
r (:term:`string`):
Role filter: Include only traversals from this role (= reference
name) in source object.
`None` means this filter is not applied.
iq (:class:`py:bool`):
IncludeQualifiers flag: Include qualifiers.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: Clients cannot rely on qualifiers to
be returned in this operation.
ico (:class:`py:bool`):
IncludeClassOrigin flag: Include class origin information for the
properties in the retrieved instances.
`None` will cause the server default of `False` to be used.
Deprecated in :term:`DSP0200`: WBEM servers may either implement this
parameter as specified, or may treat any specified value as `False`.
pl (:term:`string` or :term:`py:iterable` of :term:`string`):
PropertyList: Names of properties to be included (if not otherwise
excluded). An empty iterable indicates to include no properties.
If `None`, all properties will be included.
fl (:term:`string`):
Filter query language to be used for the filter defined in the `fs`
parameter. The DMTF-defined Filter Query Language
(see :term:`DSP0212`) is specified as "DMTF:FQL".
`None` means that no such filtering is performed.
fs (:term:`string`):
Filter to apply to objects to be returned. Based on filter query
language defined by `fl` parameter.
`None` means that no such filtering is performed.
ot (:class:`~pywbem.Uint32`):
Operation timeout in seconds. This is the minimum time the WBEM server
must keep the enumeration session open between requests on that
session.
A value of 0 indicates that the server should never time out.
The server may reject the proposed value.
`None` will cause the server to use its default timeout.
coe (:class:`py:bool`):
Continue on error flag.
`None` will cause the server to use its default of `False`.
moc (:class:`~pywbem.Uint32`):
Maximum number of objects to return for this operation.
`None` will cause the server to use its default of 0.
Returns:
A :func:`~py:collections.namedtuple` object containing the following
named items:
* **instances** (list of :class:`~pywbem.CIMInstance`):
The retrieved instances.
* **eos** (:class:`py:bool`):
`True` if the enumeration session is exhausted after this operation.
Otherwise `eos` is `False` and the `context` item is the context
object for the next operation on the enumeration session.
* **context** (:func:`py:tuple` of server_context, namespace):
A context object identifying the open enumeration session, including
its current enumeration state, and the namespace. This object must be
supplied with the next pull or close operation for this enumeration
session.
"""
return CONN.OpenReferenceInstances(ip,
ResultClass=rc,
Role=r,
IncludeQualifiers=iq,
IncludeClassOrigin=ico,
PropertyList=pl,
FilterQueryLanguage=fl,
FilterQuery=fs,
OperationTimeout=ot,
ContinueOnError=coe,
MaxObjectCount=moc) |
def clock(self, interval, basis="system"):
"""Return a NodeInput tuple for triggering an event every interval.
Args:
interval (int): The interval at which this input should
trigger. If basis == system (the default), this interval must
be in seconds. Otherwise it will be in units of whatever the
basis tick is configured with.
basis (str): The basis to use for calculating the interval. This
can either be system, tick_1 or tick_2. System means that the
clock will use either the fast or regular builtin tick. Passing
tick_1 or tick_2 will cause the clock to be generated based on
the selected tick.
"""
if basis == "system":
if (interval % 10) == 0:
tick = self.allocator.attach_stream(self.system_tick)
count = interval // 10
else:
tick = self.allocator.attach_stream(self.fast_tick)
count = interval
trigger = InputTrigger(u'count', '>=', count)
return (tick, trigger)
elif basis == 'tick_1':
tick = self.allocator.attach_stream(self.user1_tick)
trigger = InputTrigger(u'count', '>=', interval)
return (tick, trigger)
elif basis == 'tick_2':
tick = self.allocator.attach_stream(self.user2_tick)
trigger = InputTrigger(u'count', '>=', interval)
return (tick, trigger)
raise SensorGraphSemanticError("Unkwown tick source specified in RootScope.clock", basis=basis) | Return a NodeInput tuple for triggering an event every interval.
Args:
interval (int): The interval at which this input should
trigger. If basis == system (the default), this interval must
be in seconds. Otherwise it will be in units of whatever the
basis tick is configured with.
basis (str): The basis to use for calculating the interval. This
can either be system, tick_1 or tick_2. System means that the
clock will use either the fast or regular builtin tick. Passing
tick_1 or tick_2 will cause the clock to be generated based on
the selected tick. | Below is the the instruction that describes the task:
### Input:
Return a NodeInput tuple for triggering an event every interval.
Args:
interval (int): The interval at which this input should
trigger. If basis == system (the default), this interval must
be in seconds. Otherwise it will be in units of whatever the
basis tick is configured with.
basis (str): The basis to use for calculating the interval. This
can either be system, tick_1 or tick_2. System means that the
clock will use either the fast or regular builtin tick. Passing
tick_1 or tick_2 will cause the clock to be generated based on
the selected tick.
### Response:
def clock(self, interval, basis="system"):
"""Return a NodeInput tuple for triggering an event every interval.
Args:
interval (int): The interval at which this input should
trigger. If basis == system (the default), this interval must
be in seconds. Otherwise it will be in units of whatever the
basis tick is configured with.
basis (str): The basis to use for calculating the interval. This
can either be system, tick_1 or tick_2. System means that the
clock will use either the fast or regular builtin tick. Passing
tick_1 or tick_2 will cause the clock to be generated based on
the selected tick.
"""
if basis == "system":
if (interval % 10) == 0:
tick = self.allocator.attach_stream(self.system_tick)
count = interval // 10
else:
tick = self.allocator.attach_stream(self.fast_tick)
count = interval
trigger = InputTrigger(u'count', '>=', count)
return (tick, trigger)
elif basis == 'tick_1':
tick = self.allocator.attach_stream(self.user1_tick)
trigger = InputTrigger(u'count', '>=', interval)
return (tick, trigger)
elif basis == 'tick_2':
tick = self.allocator.attach_stream(self.user2_tick)
trigger = InputTrigger(u'count', '>=', interval)
return (tick, trigger)
raise SensorGraphSemanticError("Unkwown tick source specified in RootScope.clock", basis=basis) |
def _quoteattr(self, attr):
"""Escape an XML attribute. Value can be unicode."""
attr = xml_safe(attr)
if isinstance(attr, unicode) and not UNICODE_STRINGS:
attr = attr.encode(self.encoding)
return saxutils.quoteattr(attr) | Escape an XML attribute. Value can be unicode. | Below is the the instruction that describes the task:
### Input:
Escape an XML attribute. Value can be unicode.
### Response:
def _quoteattr(self, attr):
"""Escape an XML attribute. Value can be unicode."""
attr = xml_safe(attr)
if isinstance(attr, unicode) and not UNICODE_STRINGS:
attr = attr.encode(self.encoding)
return saxutils.quoteattr(attr) |
def make_coord_dict(subs, subscript_dict, terse=True):
"""
This is for assisting with the lookup of a particular element, such that the output
of this function would take the place of %s in this expression
`variable.loc[%s]`
Parameters
----------
subs: list of strings
coordinates, either as names of dimensions, or positions within a dimension
subscript_dict: dict
the full dictionary of subscript names and values
terse: Binary Flag
- If true, includes only elements that do not cover the full range of values in their
respective dimension
- If false, returns all dimensions
Returns
-------
coordinates: dictionary
Coordinates needed to access the xarray quantities we're interested in.
Examples
--------
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2': ['D', 'E', 'F']})
{'Dim2': ['D']}
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2':['D', 'E', 'F']},
>>> terse=False)
{'Dim2': ['D'], 'Dim1': ['A', 'B', 'C']}
"""
sub_elems_list = [y for x in subscript_dict.values() for y in x]
coordinates = {}
for sub in subs:
if sub in sub_elems_list:
name = find_subscript_name(subscript_dict, sub)
coordinates[name] = [sub]
elif not terse:
coordinates[sub] = subscript_dict[sub]
return coordinates | This is for assisting with the lookup of a particular element, such that the output
of this function would take the place of %s in this expression
`variable.loc[%s]`
Parameters
----------
subs: list of strings
coordinates, either as names of dimensions, or positions within a dimension
subscript_dict: dict
the full dictionary of subscript names and values
terse: Binary Flag
- If true, includes only elements that do not cover the full range of values in their
respective dimension
- If false, returns all dimensions
Returns
-------
coordinates: dictionary
Coordinates needed to access the xarray quantities we're interested in.
Examples
--------
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2': ['D', 'E', 'F']})
{'Dim2': ['D']}
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2':['D', 'E', 'F']},
>>> terse=False)
{'Dim2': ['D'], 'Dim1': ['A', 'B', 'C']} | Below is the the instruction that describes the task:
### Input:
This is for assisting with the lookup of a particular element, such that the output
of this function would take the place of %s in this expression
`variable.loc[%s]`
Parameters
----------
subs: list of strings
coordinates, either as names of dimensions, or positions within a dimension
subscript_dict: dict
the full dictionary of subscript names and values
terse: Binary Flag
- If true, includes only elements that do not cover the full range of values in their
respective dimension
- If false, returns all dimensions
Returns
-------
coordinates: dictionary
Coordinates needed to access the xarray quantities we're interested in.
Examples
--------
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2': ['D', 'E', 'F']})
{'Dim2': ['D']}
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2':['D', 'E', 'F']},
>>> terse=False)
{'Dim2': ['D'], 'Dim1': ['A', 'B', 'C']}
### Response:
def make_coord_dict(subs, subscript_dict, terse=True):
"""
This is for assisting with the lookup of a particular element, such that the output
of this function would take the place of %s in this expression
`variable.loc[%s]`
Parameters
----------
subs: list of strings
coordinates, either as names of dimensions, or positions within a dimension
subscript_dict: dict
the full dictionary of subscript names and values
terse: Binary Flag
- If true, includes only elements that do not cover the full range of values in their
respective dimension
- If false, returns all dimensions
Returns
-------
coordinates: dictionary
Coordinates needed to access the xarray quantities we're interested in.
Examples
--------
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2': ['D', 'E', 'F']})
{'Dim2': ['D']}
>>> make_coord_dict(['Dim1', 'D'], {'Dim1': ['A', 'B', 'C'], 'Dim2':['D', 'E', 'F']},
>>> terse=False)
{'Dim2': ['D'], 'Dim1': ['A', 'B', 'C']}
"""
sub_elems_list = [y for x in subscript_dict.values() for y in x]
coordinates = {}
for sub in subs:
if sub in sub_elems_list:
name = find_subscript_name(subscript_dict, sub)
coordinates[name] = [sub]
elif not terse:
coordinates[sub] = subscript_dict[sub]
return coordinates |
def get_changed_devices(self, timestamp):
"""Get data since last timestamp.
This is done via a blocking call, pass NONE for initial state.
"""
if timestamp is None:
payload = {}
else:
payload = {
'timeout': SUBSCRIPTION_WAIT,
'minimumdelay': SUBSCRIPTION_MIN_WAIT
}
payload.update(timestamp)
# double the timeout here so requests doesn't timeout before vera
payload.update({
'id': 'lu_sdata',
})
logger.debug("get_changed_devices() requesting payload %s", str(payload))
r = self.data_request(payload, TIMEOUT*2)
r.raise_for_status()
# If the Vera disconnects before writing a full response (as lu_sdata
# will do when interrupted by a Luup reload), the requests module will
# happily return 200 with an empty string. So, test for empty response,
# so we don't rely on the JSON parser to throw an exception.
if r.text == "":
raise PyveraError("Empty response from Vera")
# Catch a wide swath of what the JSON parser might throw, within
# reason. Unfortunately, some parsers don't specifically return
# json.decode.JSONDecodeError, but so far most seem to derive what
# they do throw from ValueError, so that's helpful.
try:
result = r.json()
except ValueError as ex:
raise PyveraError("JSON decode error: " + str(ex))
if not ( type(result) is dict
and 'loadtime' in result and 'dataversion' in result ):
raise PyveraError("Unexpected/garbled response from Vera")
# At this point, all good. Update timestamp and return change data.
device_data = result.get('devices')
timestamp = {
'loadtime': result.get('loadtime'),
'dataversion': result.get('dataversion')
}
return [device_data, timestamp] | Get data since last timestamp.
This is done via a blocking call, pass NONE for initial state. | Below is the the instruction that describes the task:
### Input:
Get data since last timestamp.
This is done via a blocking call, pass NONE for initial state.
### Response:
def get_changed_devices(self, timestamp):
"""Get data since last timestamp.
This is done via a blocking call, pass NONE for initial state.
"""
if timestamp is None:
payload = {}
else:
payload = {
'timeout': SUBSCRIPTION_WAIT,
'minimumdelay': SUBSCRIPTION_MIN_WAIT
}
payload.update(timestamp)
# double the timeout here so requests doesn't timeout before vera
payload.update({
'id': 'lu_sdata',
})
logger.debug("get_changed_devices() requesting payload %s", str(payload))
r = self.data_request(payload, TIMEOUT*2)
r.raise_for_status()
# If the Vera disconnects before writing a full response (as lu_sdata
# will do when interrupted by a Luup reload), the requests module will
# happily return 200 with an empty string. So, test for empty response,
# so we don't rely on the JSON parser to throw an exception.
if r.text == "":
raise PyveraError("Empty response from Vera")
# Catch a wide swath of what the JSON parser might throw, within
# reason. Unfortunately, some parsers don't specifically return
# json.decode.JSONDecodeError, but so far most seem to derive what
# they do throw from ValueError, so that's helpful.
try:
result = r.json()
except ValueError as ex:
raise PyveraError("JSON decode error: " + str(ex))
if not ( type(result) is dict
and 'loadtime' in result and 'dataversion' in result ):
raise PyveraError("Unexpected/garbled response from Vera")
# At this point, all good. Update timestamp and return change data.
device_data = result.get('devices')
timestamp = {
'loadtime': result.get('loadtime'),
'dataversion': result.get('dataversion')
}
return [device_data, timestamp] |
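A rough polling loop built on top of this call; `controller` stands in for whatever object exposes get_changed_devices and is an assumption, not part of the snippet above:

timestamp = None  # None requests the full initial state
while True:
    devices, timestamp = controller.get_changed_devices(timestamp)
    for device in devices or []:
        print(device.get('name'), device.get('status'))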
def _get_struct_shapewithstyle(self, shape_number):
"""Get the values for the SHAPEWITHSTYLE record."""
obj = _make_object("ShapeWithStyle")
obj.FillStyles = self._get_struct_fillstylearray(shape_number)
obj.LineStyles = self._get_struct_linestylearray(shape_number)
bc = BitConsumer(self._src)
obj.NumFillBits = n_fill_bits = bc.u_get(4)
obj.NumlineBits = n_line_bits = bc.u_get(4)
obj.ShapeRecords = self._get_shaperecords(
n_fill_bits, n_line_bits, shape_number)
        return obj | Get the values for the SHAPEWITHSTYLE record. | Below is the instruction that describes the task:
### Input:
Get the values for the SHAPEWITHSTYLE record.
### Response:
def _get_struct_shapewithstyle(self, shape_number):
"""Get the values for the SHAPEWITHSTYLE record."""
obj = _make_object("ShapeWithStyle")
obj.FillStyles = self._get_struct_fillstylearray(shape_number)
obj.LineStyles = self._get_struct_linestylearray(shape_number)
bc = BitConsumer(self._src)
obj.NumFillBits = n_fill_bits = bc.u_get(4)
obj.NumlineBits = n_line_bits = bc.u_get(4)
obj.ShapeRecords = self._get_shaperecords(
n_fill_bits, n_line_bits, shape_number)
return obj |
def set_index(self, field, value):
"""
set_index(field, value)
Works like :meth:`add_index`, but ensures that there is only
    one index on a given field. If another index is found, it is removed
first.
:param field: The index field.
:type field: string
:param value: The index value.
:type value: string or integer
:rtype: :class:`RiakObject <riak.riak_object.RiakObject>`
"""
to_rem = set((x for x in self.indexes if x[0] == field))
self.indexes.difference_update(to_rem)
return self.add_index(field, value) | set_index(field, value)
Works like :meth:`add_index`, but ensures that there is only
    one index on a given field. If another index is found, it is removed
first.
:param field: The index field.
:type field: string
:param value: The index value.
:type value: string or integer
    :rtype: :class:`RiakObject <riak.riak_object.RiakObject>` | Below is the instruction that describes the task:
### Input:
set_index(field, value)
Works like :meth:`add_index`, but ensures that there is only
    one index on a given field. If another index is found, it is removed
first.
:param field: The index field.
:type field: string
:param value: The index value.
:type value: string or integer
:rtype: :class:`RiakObject <riak.riak_object.RiakObject>`
### Response:
def set_index(self, field, value):
"""
set_index(field, value)
Works like :meth:`add_index`, but ensures that there is only
    one index on a given field. If another index is found, it is removed
first.
:param field: The index field.
:type field: string
:param value: The index value.
:type value: string or integer
:rtype: :class:`RiakObject <riak.riak_object.RiakObject>`
"""
to_rem = set((x for x in self.indexes if x[0] == field))
self.indexes.difference_update(to_rem)
return self.add_index(field, value) |
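A minimal usage sketch, assuming `obj` is a RiakObject-style instance whose `indexes` set and add_index method match the code above (the field and value names are invented):

obj.add_index('field1_bin', 'val1')
obj.set_index('field1_bin', 'val2')   # the ('field1_bin', 'val1') entry is removed first
assert ('field1_bin', 'val2') in obj.indexes
assert ('field1_bin', 'val1') not in obj.indexes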
def make_inputs(input_file: Optional[str],
translator: inference.Translator,
input_is_json: bool,
input_factors: Optional[List[str]] = None) -> Generator[inference.TranslatorInput, None, None]:
"""
Generates TranslatorInput instances from input. If input is None, reads from stdin. If num_input_factors > 1,
the function will look for factors attached to each token, separated by '|'.
If source is not None, reads from the source file. If num_source_factors > 1, num_source_factors source factor
filenames are required.
:param input_file: The source file (possibly None).
:param translator: Translator that will translate each line of input.
:param input_is_json: Whether the input is in json format.
:param input_factors: Source factor files.
:return: TranslatorInput objects.
"""
if input_file is None:
check_condition(input_factors is None, "Translating from STDIN, not expecting any factor files.")
for sentence_id, line in enumerate(sys.stdin, 1):
if input_is_json:
yield inference.make_input_from_json_string(sentence_id=sentence_id,
json_string=line,
translator=translator)
else:
yield inference.make_input_from_factored_string(sentence_id=sentence_id,
factored_string=line,
translator=translator)
else:
input_factors = [] if input_factors is None else input_factors
inputs = [input_file] + input_factors
if not input_is_json:
check_condition(translator.num_source_factors == len(inputs),
"Model(s) require %d factors, but %d given (through --input and --input-factors)." % (
translator.num_source_factors, len(inputs)))
with ExitStack() as exit_stack:
streams = [exit_stack.enter_context(data_io.smart_open(i)) for i in inputs]
for sentence_id, inputs in enumerate(zip(*streams), 1):
if input_is_json:
yield inference.make_input_from_json_string(sentence_id=sentence_id,
json_string=inputs[0],
translator=translator)
else:
yield inference.make_input_from_multiple_strings(sentence_id=sentence_id, strings=list(inputs)) | Generates TranslatorInput instances from input. If input is None, reads from stdin. If num_input_factors > 1,
the function will look for factors attached to each token, separated by '|'.
If source is not None, reads from the source file. If num_source_factors > 1, num_source_factors source factor
filenames are required.
:param input_file: The source file (possibly None).
:param translator: Translator that will translate each line of input.
:param input_is_json: Whether the input is in json format.
:param input_factors: Source factor files.
    :return: TranslatorInput objects. | Below is the instruction that describes the task:
### Input:
Generates TranslatorInput instances from input. If input is None, reads from stdin. If num_input_factors > 1,
the function will look for factors attached to each token, separated by '|'.
If source is not None, reads from the source file. If num_source_factors > 1, num_source_factors source factor
filenames are required.
:param input_file: The source file (possibly None).
:param translator: Translator that will translate each line of input.
:param input_is_json: Whether the input is in json format.
:param input_factors: Source factor files.
:return: TranslatorInput objects.
### Response:
def make_inputs(input_file: Optional[str],
translator: inference.Translator,
input_is_json: bool,
input_factors: Optional[List[str]] = None) -> Generator[inference.TranslatorInput, None, None]:
"""
Generates TranslatorInput instances from input. If input is None, reads from stdin. If num_input_factors > 1,
the function will look for factors attached to each token, separated by '|'.
If source is not None, reads from the source file. If num_source_factors > 1, num_source_factors source factor
filenames are required.
:param input_file: The source file (possibly None).
:param translator: Translator that will translate each line of input.
:param input_is_json: Whether the input is in json format.
:param input_factors: Source factor files.
:return: TranslatorInput objects.
"""
if input_file is None:
check_condition(input_factors is None, "Translating from STDIN, not expecting any factor files.")
for sentence_id, line in enumerate(sys.stdin, 1):
if input_is_json:
yield inference.make_input_from_json_string(sentence_id=sentence_id,
json_string=line,
translator=translator)
else:
yield inference.make_input_from_factored_string(sentence_id=sentence_id,
factored_string=line,
translator=translator)
else:
input_factors = [] if input_factors is None else input_factors
inputs = [input_file] + input_factors
if not input_is_json:
check_condition(translator.num_source_factors == len(inputs),
"Model(s) require %d factors, but %d given (through --input and --input-factors)." % (
translator.num_source_factors, len(inputs)))
with ExitStack() as exit_stack:
streams = [exit_stack.enter_context(data_io.smart_open(i)) for i in inputs]
for sentence_id, inputs in enumerate(zip(*streams), 1):
if input_is_json:
yield inference.make_input_from_json_string(sentence_id=sentence_id,
json_string=inputs[0],
translator=translator)
else:
yield inference.make_input_from_multiple_strings(sentence_id=sentence_id, strings=list(inputs)) |
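A small call sketch for the file-based path; the file names are placeholders and `translator` is assumed to be an inference.Translator whose model expects two source factors (when reading STDIN instead, factors ride along on each token as token|factor):

trans_inputs = make_inputs(input_file='dev.src',
                           translator=translator,
                           input_is_json=False,
                           input_factors=['dev.pos'])  # one factor file per extra source factor
for t_input in trans_inputs:
    print(t_input.sentence_id)  # attribute name assumed from the constructor calls above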
def memoize(func):
"""
Decorator for unerasable memoization based on function arguments, for
functions without keyword arguments.
"""
class Memoizer(dict):
def __missing__(self, args):
val = func(*args)
self[args] = val
return val
memory = Memoizer()
@wraps(func)
def wrapper(*args):
return memory[args]
return wrapper | Decorator for unerasable memoization based on function arguments, for
    functions without keyword arguments. | Below is the instruction that describes the task:
### Input:
Decorator for unerasable memoization based on function arguments, for
functions without keyword arguments.
### Response:
def memoize(func):
"""
Decorator for unerasable memoization based on function arguments, for
functions without keyword arguments.
"""
class Memoizer(dict):
def __missing__(self, args):
val = func(*args)
self[args] = val
return val
memory = Memoizer()
@wraps(func)
def wrapper(*args):
return memory[args]
return wrapper |
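A quick usage sketch for the decorator:

@memoize
def fib(n):
    # without the cache this recursion is exponential; with it each n is computed once
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(300))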
def webui(ctx, host, port, cdn, scheduler_rpc, fetcher_rpc, max_rate, max_burst,
username, password, need_auth, webui_instance, process_time_limit, get_object=False):
"""
Run WebUI
"""
app = load_cls(None, None, webui_instance)
g = ctx.obj
app.config['taskdb'] = g.taskdb
app.config['projectdb'] = g.projectdb
app.config['resultdb'] = g.resultdb
app.config['cdn'] = cdn
if max_rate:
app.config['max_rate'] = max_rate
if max_burst:
app.config['max_burst'] = max_burst
if username:
app.config['webui_username'] = username
if password:
app.config['webui_password'] = password
app.config['need_auth'] = need_auth
app.config['process_time_limit'] = process_time_limit
# inject queues for webui
for name in ('newtask_queue', 'status_queue', 'scheduler2fetcher',
'fetcher2processor', 'processor2result'):
app.config['queues'][name] = getattr(g, name, None)
# fetcher rpc
if isinstance(fetcher_rpc, six.string_types):
import umsgpack
fetcher_rpc = connect_rpc(ctx, None, fetcher_rpc)
app.config['fetch'] = lambda x: umsgpack.unpackb(fetcher_rpc.fetch(x).data)
else:
# get fetcher instance for webui
fetcher_config = g.config.get('fetcher', {})
webui_fetcher = ctx.invoke(fetcher, async_mode=False, get_object=True, no_input=True, **fetcher_config)
app.config['fetch'] = lambda x: webui_fetcher.fetch(x)
if isinstance(scheduler_rpc, six.string_types):
scheduler_rpc = connect_rpc(ctx, None, scheduler_rpc)
if scheduler_rpc is None and os.environ.get('SCHEDULER_NAME'):
app.config['scheduler_rpc'] = connect_rpc(ctx, None, 'http://%s/' % (
os.environ['SCHEDULER_PORT_23333_TCP'][len('tcp://'):]))
elif scheduler_rpc is None:
app.config['scheduler_rpc'] = connect_rpc(ctx, None, 'http://127.0.0.1:23333/')
else:
app.config['scheduler_rpc'] = scheduler_rpc
app.debug = g.debug
g.instances.append(app)
if g.get('testing_mode') or get_object:
return app
    app.run(host=host, port=port) | Run WebUI | Below is the instruction that describes the task:
### Input:
Run WebUI
### Response:
def webui(ctx, host, port, cdn, scheduler_rpc, fetcher_rpc, max_rate, max_burst,
username, password, need_auth, webui_instance, process_time_limit, get_object=False):
"""
Run WebUI
"""
app = load_cls(None, None, webui_instance)
g = ctx.obj
app.config['taskdb'] = g.taskdb
app.config['projectdb'] = g.projectdb
app.config['resultdb'] = g.resultdb
app.config['cdn'] = cdn
if max_rate:
app.config['max_rate'] = max_rate
if max_burst:
app.config['max_burst'] = max_burst
if username:
app.config['webui_username'] = username
if password:
app.config['webui_password'] = password
app.config['need_auth'] = need_auth
app.config['process_time_limit'] = process_time_limit
# inject queues for webui
for name in ('newtask_queue', 'status_queue', 'scheduler2fetcher',
'fetcher2processor', 'processor2result'):
app.config['queues'][name] = getattr(g, name, None)
# fetcher rpc
if isinstance(fetcher_rpc, six.string_types):
import umsgpack
fetcher_rpc = connect_rpc(ctx, None, fetcher_rpc)
app.config['fetch'] = lambda x: umsgpack.unpackb(fetcher_rpc.fetch(x).data)
else:
# get fetcher instance for webui
fetcher_config = g.config.get('fetcher', {})
webui_fetcher = ctx.invoke(fetcher, async_mode=False, get_object=True, no_input=True, **fetcher_config)
app.config['fetch'] = lambda x: webui_fetcher.fetch(x)
if isinstance(scheduler_rpc, six.string_types):
scheduler_rpc = connect_rpc(ctx, None, scheduler_rpc)
if scheduler_rpc is None and os.environ.get('SCHEDULER_NAME'):
app.config['scheduler_rpc'] = connect_rpc(ctx, None, 'http://%s/' % (
os.environ['SCHEDULER_PORT_23333_TCP'][len('tcp://'):]))
elif scheduler_rpc is None:
app.config['scheduler_rpc'] = connect_rpc(ctx, None, 'http://127.0.0.1:23333/')
else:
app.config['scheduler_rpc'] = scheduler_rpc
app.debug = g.debug
g.instances.append(app)
if g.get('testing_mode') or get_object:
return app
app.run(host=host, port=port) |
def find_vmrun(self):
"""
Searches for vmrun.
:returns: path to vmrun
"""
# look for vmrun
vmrun_path = self.config.get_section_config("VMware").get("vmrun_path")
if not vmrun_path:
if sys.platform.startswith("win"):
vmrun_path = shutil.which("vmrun")
if vmrun_path is None:
# look for vmrun.exe using the VMware Workstation directory listed in the registry
vmrun_path = self._find_vmrun_registry(r"SOFTWARE\Wow6432Node\VMware, Inc.\VMware Workstation")
if vmrun_path is None:
# look for vmrun.exe using the VIX directory listed in the registry
vmrun_path = self._find_vmrun_registry(r"SOFTWARE\Wow6432Node\VMware, Inc.\VMware VIX")
elif sys.platform.startswith("darwin"):
vmrun_path = "/Applications/VMware Fusion.app/Contents/Library/vmrun"
else:
vmrun_path = "vmrun"
if vmrun_path and not os.path.isabs(vmrun_path):
vmrun_path = shutil.which(vmrun_path)
if not vmrun_path:
raise VMwareError("Could not find VMware vmrun, please make sure it is installed")
if not os.path.isfile(vmrun_path):
raise VMwareError("vmrun {} is not accessible".format(vmrun_path))
if not os.access(vmrun_path, os.X_OK):
raise VMwareError("vmrun is not executable")
if os.path.basename(vmrun_path).lower() not in ["vmrun", "vmrun.exe"]:
raise VMwareError("Invalid vmrun executable name {}".format(os.path.basename(vmrun_path)))
self._vmrun_path = vmrun_path
return vmrun_path | Searches for vmrun.
    :returns: path to vmrun | Below is the instruction that describes the task:
### Input:
Searches for vmrun.
:returns: path to vmrun
### Response:
def find_vmrun(self):
"""
Searches for vmrun.
:returns: path to vmrun
"""
# look for vmrun
vmrun_path = self.config.get_section_config("VMware").get("vmrun_path")
if not vmrun_path:
if sys.platform.startswith("win"):
vmrun_path = shutil.which("vmrun")
if vmrun_path is None:
# look for vmrun.exe using the VMware Workstation directory listed in the registry
vmrun_path = self._find_vmrun_registry(r"SOFTWARE\Wow6432Node\VMware, Inc.\VMware Workstation")
if vmrun_path is None:
# look for vmrun.exe using the VIX directory listed in the registry
vmrun_path = self._find_vmrun_registry(r"SOFTWARE\Wow6432Node\VMware, Inc.\VMware VIX")
elif sys.platform.startswith("darwin"):
vmrun_path = "/Applications/VMware Fusion.app/Contents/Library/vmrun"
else:
vmrun_path = "vmrun"
if vmrun_path and not os.path.isabs(vmrun_path):
vmrun_path = shutil.which(vmrun_path)
if not vmrun_path:
raise VMwareError("Could not find VMware vmrun, please make sure it is installed")
if not os.path.isfile(vmrun_path):
raise VMwareError("vmrun {} is not accessible".format(vmrun_path))
if not os.access(vmrun_path, os.X_OK):
raise VMwareError("vmrun is not executable")
if os.path.basename(vmrun_path).lower() not in ["vmrun", "vmrun.exe"]:
raise VMwareError("Invalid vmrun executable name {}".format(os.path.basename(vmrun_path)))
self._vmrun_path = vmrun_path
return vmrun_path |
def train_async(train_dataset,
eval_dataset,
analysis_dir,
output_dir,
features,
model_type,
max_steps=5000,
num_epochs=None,
train_batch_size=100,
eval_batch_size=16,
min_eval_frequency=100,
top_n=None,
layer_sizes=None,
learning_rate=0.01,
epsilon=0.0005,
job_name=None, # cloud param
job_name_prefix='', # cloud param
cloud=None, # cloud param
):
  # NOTE: if you make a change to this doc string, you MUST COPY it 4 TIMES in
# mltoolbox.{classification|regression}.{dnn|linear}, but you must remove
# the model_type parameter, and maybe change the layer_sizes and top_n
# parameters!
# Datalab does some tricky things and messing with train.__doc__ will
# not work!
"""Train model locally or in the cloud.
Local Training:
Args:
train_dataset: CsvDataSet
eval_dataset: CsvDataSet
analysis_dir: The output directory from local_analysis
output_dir: Output directory of training.
features: file path or features object. Example:
{
"col_A": {"transform": "scale", "default": 0.0},
"col_B": {"transform": "scale","value": 4},
# Note col_C is missing, so default transform used.
"col_D": {"transform": "hash_one_hot", "hash_bucket_size": 4},
"col_target": {"transform": "target"},
"col_key": {"transform": "key"}
}
The keys correspond to the columns in the input files as defined by the
schema file during preprocessing. Some notes
1) The "key" and "target" transforms are required.
2) Default values are optional. These are used if the input data has
missing values during training and prediction. If not supplied for a
column, the default value for a numerical column is that column's
          mean value, and for a categorical column the empty string is used.
        3) For numerical columns, the following transforms are supported:
          i) {"transform": "identity"}: does nothing to the number. (default)
          ii) {"transform": "scale"}: scales the column values to -1, 1.
          iii) {"transform": "scale", "value": a}: scales the column values
             to -a, a.
           For categorical columns, the following transforms are supported:
i) {"transform": "one_hot"}: A one-hot vector using the full
vocabulary is used. (default)
ii) {"transform": "embedding", "embedding_dim": d}: Each label is
embedded into an d-dimensional space.
model_type: One of 'linear_classification', 'linear_regression',
'dnn_classification', 'dnn_regression'.
max_steps: Int. Number of training steps to perform.
num_epochs: Maximum number of training data epochs on which to train.
The training job will run for max_steps or num_epochs, whichever occurs
first.
train_batch_size: number of rows to train on in one step.
eval_batch_size: number of rows to eval in one step. One pass of the eval
      dataset is done. If eval_batch_size does not perfectly divide the number
of eval instances, the last fractional batch is not used.
min_eval_frequency: Minimum number of training steps between evaluations.
top_n: Int. For classification problems, the output graph will contain the
labels and scores for the top n classes with a default of n=1. Use
None for regression problems.
layer_sizes: List. Represents the layers in the connected DNN.
If the model type is DNN, this must be set. Example [10, 3, 2], this
will create three DNN layers where the first layer will have 10 nodes,
      the middle layer will have 3 nodes, and the last layer will have 2
nodes.
learning_rate: tf.train.AdamOptimizer's learning rate,
epsilon: tf.train.AdamOptimizer's epsilon value.
Cloud Training:
Args:
All local training arguments are valid for cloud training. Cloud training
contains two additional args:
cloud: A CloudTrainingConfig object.
job_name: Training job name. A default will be picked if None.
job_name_prefix: If job_name is None, the job will be named
'<job_name_prefix>_<timestamp>'.
Returns:
A google.datalab.utils.Job object that can be used to query state from or wait.
"""
import google.datalab.utils as du
if model_type not in ['linear_classification', 'linear_regression', 'dnn_classification',
'dnn_regression']:
raise ValueError('Unknown model_type %s' % model_type)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
if cloud:
return cloud_train(
train_dataset=train_dataset,
eval_dataset=eval_dataset,
analysis_dir=analysis_dir,
output_dir=output_dir,
features=features,
model_type=model_type,
max_steps=max_steps,
num_epochs=num_epochs,
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
min_eval_frequency=min_eval_frequency,
top_n=top_n,
layer_sizes=layer_sizes,
learning_rate=learning_rate,
epsilon=epsilon,
job_name=job_name,
job_name_prefix=job_name_prefix,
config=cloud,
)
else:
def fn():
return local_train(
train_dataset=train_dataset,
eval_dataset=eval_dataset,
analysis_dir=analysis_dir,
output_dir=output_dir,
features=features,
model_type=model_type,
max_steps=max_steps,
num_epochs=num_epochs,
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
min_eval_frequency=min_eval_frequency,
top_n=top_n,
layer_sizes=layer_sizes,
learning_rate=learning_rate,
epsilon=epsilon)
return du.LambdaJob(fn, job_id=None) | Train model locally or in the cloud.
Local Training:
Args:
train_dataset: CsvDataSet
eval_dataset: CsvDataSet
analysis_dir: The output directory from local_analysis
output_dir: Output directory of training.
features: file path or features object. Example:
{
"col_A": {"transform": "scale", "default": 0.0},
"col_B": {"transform": "scale","value": 4},
# Note col_C is missing, so default transform used.
"col_D": {"transform": "hash_one_hot", "hash_bucket_size": 4},
"col_target": {"transform": "target"},
"col_key": {"transform": "key"}
}
The keys correspond to the columns in the input files as defined by the
schema file during preprocessing. Some notes
1) The "key" and "target" transforms are required.
2) Default values are optional. These are used if the input data has
missing values during training and prediction. If not supplied for a
column, the default value for a numerical column is that column's
          mean value, and for a categorical column the empty string is used.
        3) For numerical columns, the following transforms are supported:
          i) {"transform": "identity"}: does nothing to the number. (default)
          ii) {"transform": "scale"}: scales the column values to -1, 1.
          iii) {"transform": "scale", "value": a}: scales the column values
             to -a, a.
           For categorical columns, the following transforms are supported:
i) {"transform": "one_hot"}: A one-hot vector using the full
vocabulary is used. (default)
ii) {"transform": "embedding", "embedding_dim": d}: Each label is
embedded into an d-dimensional space.
model_type: One of 'linear_classification', 'linear_regression',
'dnn_classification', 'dnn_regression'.
max_steps: Int. Number of training steps to perform.
num_epochs: Maximum number of training data epochs on which to train.
The training job will run for max_steps or num_epochs, whichever occurs
first.
train_batch_size: number of rows to train on in one step.
eval_batch_size: number of rows to eval in one step. One pass of the eval
      dataset is done. If eval_batch_size does not perfectly divide the number
of eval instances, the last fractional batch is not used.
min_eval_frequency: Minimum number of training steps between evaluations.
top_n: Int. For classification problems, the output graph will contain the
labels and scores for the top n classes with a default of n=1. Use
None for regression problems.
layer_sizes: List. Represents the layers in the connected DNN.
If the model type is DNN, this must be set. Example [10, 3, 2], this
will create three DNN layers where the first layer will have 10 nodes,
      the middle layer will have 3 nodes, and the last layer will have 2
nodes.
learning_rate: tf.train.AdamOptimizer's learning rate,
epsilon: tf.train.AdamOptimizer's epsilon value.
Cloud Training:
Args:
All local training arguments are valid for cloud training. Cloud training
contains two additional args:
cloud: A CloudTrainingConfig object.
job_name: Training job name. A default will be picked if None.
job_name_prefix: If job_name is None, the job will be named
'<job_name_prefix>_<timestamp>'.
Returns:
    A google.datalab.utils.Job object that can be used to query state from or wait. | Below is the instruction that describes the task:
### Input:
Train model locally or in the cloud.
Local Training:
Args:
train_dataset: CsvDataSet
eval_dataset: CsvDataSet
analysis_dir: The output directory from local_analysis
output_dir: Output directory of training.
features: file path or features object. Example:
{
"col_A": {"transform": "scale", "default": 0.0},
"col_B": {"transform": "scale","value": 4},
# Note col_C is missing, so default transform used.
"col_D": {"transform": "hash_one_hot", "hash_bucket_size": 4},
"col_target": {"transform": "target"},
"col_key": {"transform": "key"}
}
The keys correspond to the columns in the input files as defined by the
schema file during preprocessing. Some notes
1) The "key" and "target" transforms are required.
2) Default values are optional. These are used if the input data has
missing values during training and prediction. If not supplied for a
column, the default value for a numerical column is that column's
          mean value, and for a categorical column the empty string is used.
        3) For numerical columns, the following transforms are supported:
          i) {"transform": "identity"}: does nothing to the number. (default)
          ii) {"transform": "scale"}: scales the column values to -1, 1.
          iii) {"transform": "scale", "value": a}: scales the column values
             to -a, a.
           For categorical columns, the following transforms are supported:
i) {"transform": "one_hot"}: A one-hot vector using the full
vocabulary is used. (default)
ii) {"transform": "embedding", "embedding_dim": d}: Each label is
embedded into an d-dimensional space.
model_type: One of 'linear_classification', 'linear_regression',
'dnn_classification', 'dnn_regression'.
max_steps: Int. Number of training steps to perform.
num_epochs: Maximum number of training data epochs on which to train.
The training job will run for max_steps or num_epochs, whichever occurs
first.
train_batch_size: number of rows to train on in one step.
eval_batch_size: number of rows to eval in one step. One pass of the eval
      dataset is done. If eval_batch_size does not perfectly divide the number
of eval instances, the last fractional batch is not used.
min_eval_frequency: Minimum number of training steps between evaluations.
top_n: Int. For classification problems, the output graph will contain the
labels and scores for the top n classes with a default of n=1. Use
None for regression problems.
layer_sizes: List. Represents the layers in the connected DNN.
If the model type is DNN, this must be set. Example [10, 3, 2], this
will create three DNN layers where the first layer will have 10 nodes,
      the middle layer will have 3 nodes, and the last layer will have 2
nodes.
learning_rate: tf.train.AdamOptimizer's learning rate,
epsilon: tf.train.AdamOptimizer's epsilon value.
Cloud Training:
Args:
All local training arguments are valid for cloud training. Cloud training
contains two additional args:
cloud: A CloudTrainingConfig object.
job_name: Training job name. A default will be picked if None.
job_name_prefix: If job_name is None, the job will be named
'<job_name_prefix>_<timestamp>'.
Returns:
A google.datalab.utils.Job object that can be used to query state from or wait.
### Response:
def train_async(train_dataset,
eval_dataset,
analysis_dir,
output_dir,
features,
model_type,
max_steps=5000,
num_epochs=None,
train_batch_size=100,
eval_batch_size=16,
min_eval_frequency=100,
top_n=None,
layer_sizes=None,
learning_rate=0.01,
epsilon=0.0005,
job_name=None, # cloud param
job_name_prefix='', # cloud param
cloud=None, # cloud param
):
  # NOTE: if you make a change to this doc string, you MUST COPY it 4 TIMES in
# mltoolbox.{classification|regression}.{dnn|linear}, but you must remove
# the model_type parameter, and maybe change the layer_sizes and top_n
# parameters!
# Datalab does some tricky things and messing with train.__doc__ will
# not work!
"""Train model locally or in the cloud.
Local Training:
Args:
train_dataset: CsvDataSet
eval_dataset: CsvDataSet
analysis_dir: The output directory from local_analysis
output_dir: Output directory of training.
features: file path or features object. Example:
{
"col_A": {"transform": "scale", "default": 0.0},
"col_B": {"transform": "scale","value": 4},
# Note col_C is missing, so default transform used.
"col_D": {"transform": "hash_one_hot", "hash_bucket_size": 4},
"col_target": {"transform": "target"},
"col_key": {"transform": "key"}
}
The keys correspond to the columns in the input files as defined by the
schema file during preprocessing. Some notes
1) The "key" and "target" transforms are required.
2) Default values are optional. These are used if the input data has
missing values during training and prediction. If not supplied for a
column, the default value for a numerical column is that column's
          mean value, and for a categorical column the empty string is used.
        3) For numerical columns, the following transforms are supported:
          i) {"transform": "identity"}: does nothing to the number. (default)
          ii) {"transform": "scale"}: scales the column values to -1, 1.
          iii) {"transform": "scale", "value": a}: scales the column values
             to -a, a.
           For categorical columns, the following transforms are supported:
i) {"transform": "one_hot"}: A one-hot vector using the full
vocabulary is used. (default)
ii) {"transform": "embedding", "embedding_dim": d}: Each label is
embedded into an d-dimensional space.
model_type: One of 'linear_classification', 'linear_regression',
'dnn_classification', 'dnn_regression'.
max_steps: Int. Number of training steps to perform.
num_epochs: Maximum number of training data epochs on which to train.
The training job will run for max_steps or num_epochs, whichever occurs
first.
train_batch_size: number of rows to train on in one step.
eval_batch_size: number of rows to eval in one step. One pass of the eval
      dataset is done. If eval_batch_size does not perfectly divide the number
of eval instances, the last fractional batch is not used.
min_eval_frequency: Minimum number of training steps between evaluations.
top_n: Int. For classification problems, the output graph will contain the
labels and scores for the top n classes with a default of n=1. Use
None for regression problems.
layer_sizes: List. Represents the layers in the connected DNN.
If the model type is DNN, this must be set. Example [10, 3, 2], this
will create three DNN layers where the first layer will have 10 nodes,
      the middle layer will have 3 nodes, and the last layer will have 2
nodes.
learning_rate: tf.train.AdamOptimizer's learning rate,
epsilon: tf.train.AdamOptimizer's epsilon value.
Cloud Training:
Args:
All local training arguments are valid for cloud training. Cloud training
contains two additional args:
cloud: A CloudTrainingConfig object.
job_name: Training job name. A default will be picked if None.
job_name_prefix: If job_name is None, the job will be named
'<job_name_prefix>_<timestamp>'.
Returns:
A google.datalab.utils.Job object that can be used to query state from or wait.
"""
import google.datalab.utils as du
if model_type not in ['linear_classification', 'linear_regression', 'dnn_classification',
'dnn_regression']:
raise ValueError('Unknown model_type %s' % model_type)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
if cloud:
return cloud_train(
train_dataset=train_dataset,
eval_dataset=eval_dataset,
analysis_dir=analysis_dir,
output_dir=output_dir,
features=features,
model_type=model_type,
max_steps=max_steps,
num_epochs=num_epochs,
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
min_eval_frequency=min_eval_frequency,
top_n=top_n,
layer_sizes=layer_sizes,
learning_rate=learning_rate,
epsilon=epsilon,
job_name=job_name,
job_name_prefix=job_name_prefix,
config=cloud,
)
else:
def fn():
return local_train(
train_dataset=train_dataset,
eval_dataset=eval_dataset,
analysis_dir=analysis_dir,
output_dir=output_dir,
features=features,
model_type=model_type,
max_steps=max_steps,
num_epochs=num_epochs,
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
min_eval_frequency=min_eval_frequency,
top_n=top_n,
layer_sizes=layer_sizes,
learning_rate=learning_rate,
epsilon=epsilon)
return du.LambdaJob(fn, job_id=None) |
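A hedged local-training sketch; the CsvDataSet objects, directories and column names below are placeholders, and the final wait call assumes the returned Job supports blocking as the docstring states:

features = {
    'row_id': {'transform': 'key'},
    'price':  {'transform': 'target'},
    'weight': {'transform': 'scale'},
    'color':  {'transform': 'one_hot'},
}
job = train_async(train_dataset=train_csv,
                  eval_dataset=eval_csv,
                  analysis_dir='./analysis',
                  output_dir='./training_output',
                  features=features,
                  model_type='dnn_regression',
                  layer_sizes=[64, 32],   # required for the dnn_* model types
                  max_steps=2000)
job.wait()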
def commit_input_confirm_timeout(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
commit = ET.Element("commit")
config = commit
input = ET.SubElement(commit, "input")
confirm_timeout = ET.SubElement(input, "confirm-timeout")
confirm_timeout.text = kwargs.pop('confirm_timeout')
callback = kwargs.pop('callback', self._callback)
    return callback(config) | Auto Generated Code | Below is the instruction that describes the task:
### Input:
Auto Generated Code
### Response:
def commit_input_confirm_timeout(self, **kwargs):
"""Auto Generated Code
"""
config = ET.Element("config")
commit = ET.Element("commit")
config = commit
input = ET.SubElement(commit, "input")
confirm_timeout = ET.SubElement(input, "confirm-timeout")
confirm_timeout.text = kwargs.pop('confirm_timeout')
callback = kwargs.pop('callback', self._callback)
return callback(config) |
def encrypt(self, plaintext, nonce, encoder=encoding.RawEncoder):
"""
Encrypts the plaintext message using the given `nonce` and returns
the ciphertext encoded with the encoder.
.. warning:: It is **VITALLY** important that the nonce is a nonce,
i.e. it is a number used only once for any given key. If you fail
to do this, you compromise the privacy of the messages encrypted.
:param plaintext: [:class:`bytes`] The plaintext message to encrypt
:param nonce: [:class:`bytes`] The nonce to use in the encryption
:param encoder: The encoder to use to encode the ciphertext
:rtype: [:class:`nacl.utils.EncryptedMessage`]
"""
if len(nonce) != self.NONCE_SIZE:
raise ValueError("The nonce must be exactly %s bytes long" %
self.NONCE_SIZE)
ciphertext = libnacl.crypto_box_afternm(
plaintext,
nonce,
self._shared_key,
)
encoded_nonce = encoder.encode(nonce)
encoded_ciphertext = encoder.encode(ciphertext)
return EncryptedMessage._from_parts(
encoded_nonce,
encoded_ciphertext,
encoder.encode(nonce + ciphertext),
) | Encrypts the plaintext message using the given `nonce` and returns
the ciphertext encoded with the encoder.
.. warning:: It is **VITALLY** important that the nonce is a nonce,
i.e. it is a number used only once for any given key. If you fail
to do this, you compromise the privacy of the messages encrypted.
:param plaintext: [:class:`bytes`] The plaintext message to encrypt
:param nonce: [:class:`bytes`] The nonce to use in the encryption
:param encoder: The encoder to use to encode the ciphertext
    :rtype: [:class:`nacl.utils.EncryptedMessage`] | Below is the instruction that describes the task:
### Input:
Encrypts the plaintext message using the given `nonce` and returns
the ciphertext encoded with the encoder.
.. warning:: It is **VITALLY** important that the nonce is a nonce,
i.e. it is a number used only once for any given key. If you fail
to do this, you compromise the privacy of the messages encrypted.
:param plaintext: [:class:`bytes`] The plaintext message to encrypt
:param nonce: [:class:`bytes`] The nonce to use in the encryption
:param encoder: The encoder to use to encode the ciphertext
:rtype: [:class:`nacl.utils.EncryptedMessage`]
### Response:
def encrypt(self, plaintext, nonce, encoder=encoding.RawEncoder):
"""
Encrypts the plaintext message using the given `nonce` and returns
the ciphertext encoded with the encoder.
.. warning:: It is **VITALLY** important that the nonce is a nonce,
i.e. it is a number used only once for any given key. If you fail
to do this, you compromise the privacy of the messages encrypted.
:param plaintext: [:class:`bytes`] The plaintext message to encrypt
:param nonce: [:class:`bytes`] The nonce to use in the encryption
:param encoder: The encoder to use to encode the ciphertext
:rtype: [:class:`nacl.utils.EncryptedMessage`]
"""
if len(nonce) != self.NONCE_SIZE:
raise ValueError("The nonce must be exactly %s bytes long" %
self.NONCE_SIZE)
ciphertext = libnacl.crypto_box_afternm(
plaintext,
nonce,
self._shared_key,
)
encoded_nonce = encoder.encode(nonce)
encoded_ciphertext = encoder.encode(ciphertext)
return EncryptedMessage._from_parts(
encoded_nonce,
encoded_ciphertext,
encoder.encode(nonce + ciphertext),
) |
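A minimal sketch, assuming `box` is an instance of the surrounding Box-style class and NONCE_SIZE is the required nonce length (24 bytes for crypto_box):

import os

nonce = os.urandom(box.NONCE_SIZE)                 # fresh random nonce; never reuse with the same key pair
encrypted = box.encrypt(b"attack at dawn", nonce)
# the EncryptedMessage keeps the encoded nonce and ciphertext alongside the combined bytes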
def iter_up(self, include_self=True):
"""Iterates up the tree to the root."""
if include_self: yield self
parent = self.parent
while parent is not None:
yield parent
try:
parent = parent.parent
except AttributeError:
            return | Iterates up the tree to the root. | Below is the instruction that describes the task:
### Input:
Iterates up the tree to the root.
### Response:
def iter_up(self, include_self=True):
"""Iterates up the tree to the root."""
if include_self: yield self
parent = self.parent
while parent is not None:
yield parent
try:
parent = parent.parent
except AttributeError:
return |
async def flush(self, request: Request, stacks: List[Stack]):
"""
Add a typing stack after each stack.
"""
ns: List[Stack] = []
for stack in stacks:
ns.extend(self.typify(stack))
if len(ns) > 1 and ns[-1] == Stack([lyr.Typing()]):
ns[-1].get_layer(lyr.Typing).active = False
        await self.next(request, ns) | Add a typing stack after each stack. | Below is the instruction that describes the task:
### Input:
Add a typing stack after each stack.
### Response:
async def flush(self, request: Request, stacks: List[Stack]):
"""
Add a typing stack after each stack.
"""
ns: List[Stack] = []
for stack in stacks:
ns.extend(self.typify(stack))
if len(ns) > 1 and ns[-1] == Stack([lyr.Typing()]):
ns[-1].get_layer(lyr.Typing).active = False
await self.next(request, ns) |
def reduce(self, colors):
"""Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
                  ``'EOL'`` to indicate the end of a line.
:return: Yields lines of optimized text.
"""
need_reset = False
line = []
for color, items in itertools.groupby(colors):
if color is None:
if need_reset:
line.append("\x1b[49m")
need_reset = False
line.append(self.pad * len(list(items)))
elif color == "EOL":
if need_reset:
line.append("\x1b[49m")
need_reset = False
yield "".join(line)
else:
line.pop()
yield "".join(line)
line = []
else:
need_reset = True
line.append("\x1b[48;5;%dm%s" % (
color, self.pad * len(list(items)))) | Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
                  ``'EOL'`` to indicate the end of a line.
    :return: Yields lines of optimized text. | Below is the instruction that describes the task:
### Input:
Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
                  ``'EOL'`` to indicate the end of a line.
:return: Yields lines of optimized text.
### Response:
def reduce(self, colors):
"""Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
                  ``'EOL'`` to indicate the end of a line.
:return: Yields lines of optimized text.
"""
need_reset = False
line = []
for color, items in itertools.groupby(colors):
if color is None:
if need_reset:
line.append("\x1b[49m")
need_reset = False
line.append(self.pad * len(list(items)))
elif color == "EOL":
if need_reset:
line.append("\x1b[49m")
need_reset = False
yield "".join(line)
else:
line.pop()
yield "".join(line)
line = []
else:
need_reset = True
line.append("\x1b[48;5;%dm%s" % (
color, self.pad * len(list(items)))) |
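A worked example of the merging, assuming `optimizer` is an instance of the surrounding class with its `pad` string:

colors = [196, 196, None, 'EOL']      # two pixels in xterm color 196, one transparent, end of line
line = next(optimizer.reduce(colors))
# '\x1b[48;5;196m' is emitted once for the two-pixel run, followed by the
# '\x1b[49m' reset; the trailing transparent padding is trimmed at end of line.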
def is_valid_file(path):
''' Returns True if provided file exists and is a file, or False otherwise. '''
    return os.path.exists(path) and os.path.isfile(path) | Returns True if provided file exists and is a file, or False otherwise. | Below is the instruction that describes the task:
### Input:
Returns True if provided file exists and is a file, or False otherwise.
### Response:
def is_valid_file(path):
''' Returns True if provided file exists and is a file, or False otherwise. '''
return os.path.exists(path) and os.path.isfile(path) |
def add_text_mask(self, start, method_str, text_producer):
"""Adds a handler that produces a plain text response.
Parameters
----------
start : string
The URL prefix that must be matched to perform this request.
method_str : string
The HTTP method for which to trigger the request.
text_producer : function(esrh, args)
A function returning a string. The function takes two arguments.
esrh is the QuickServerRequestHandler object that called the
function. args is a map containing the arguments to the request
(i.e., the rest of the URL as path segment array 'paths', a map of
all query fields / flags 'query', the fragment string 'fragment',
and if the method was a POST the JSON form content 'post'). If the
result is None a 404 error is sent.
"""
def send_text(drh, rem_path):
text = text_producer(drh, rem_path)
if not isinstance(text, Response):
text = Response(text)
ctype = text.get_ctype("text/plain")
code = text.code
text = text.response
if text is None:
drh.send_error(404, "File not found")
return None
f = BytesIO()
if isinstance(text, (str, unicode)):
try:
text = text.decode('utf8')
except AttributeError:
pass
text = text.encode('utf8')
f.write(text)
f.flush()
size = f.tell()
f.seek(0)
# handle ETag caching
if drh.request_version >= "HTTP/1.1":
e_tag = "{0:x}".format(zlib.crc32(f.read()) & 0xFFFFFFFF)
f.seek(0)
match = _getheader(drh.headers, 'if-none-match')
if match is not None:
if drh.check_cache(e_tag, match):
f.close()
return None
drh.send_header("ETag", e_tag, end_header=True)
drh.send_header("Cache-Control",
"max-age={0}".format(self.max_age),
end_header=True)
drh.send_response(code)
drh.send_header("Content-Type", ctype)
drh.send_header("Content-Length", size)
drh.end_headers()
return f
self._add_file_mask(start, method_str, send_text) | Adds a handler that produces a plain text response.
Parameters
----------
start : string
The URL prefix that must be matched to perform this request.
method_str : string
The HTTP method for which to trigger the request.
text_producer : function(esrh, args)
A function returning a string. The function takes two arguments.
esrh is the QuickServerRequestHandler object that called the
function. args is a map containing the arguments to the request
(i.e., the rest of the URL as path segment array 'paths', a map of
all query fields / flags 'query', the fragment string 'fragment',
and if the method was a POST the JSON form content 'post'). If the
        result is None a 404 error is sent. | Below is the instruction that describes the task:
### Input:
Adds a handler that produces a plain text response.
Parameters
----------
start : string
The URL prefix that must be matched to perform this request.
method_str : string
The HTTP method for which to trigger the request.
text_producer : function(esrh, args)
A function returning a string. The function takes two arguments.
esrh is the QuickServerRequestHandler object that called the
function. args is a map containing the arguments to the request
(i.e., the rest of the URL as path segment array 'paths', a map of
all query fields / flags 'query', the fragment string 'fragment',
and if the method was a POST the JSON form content 'post'). If the
result is None a 404 error is sent.
### Response:
def add_text_mask(self, start, method_str, text_producer):
"""Adds a handler that produces a plain text response.
Parameters
----------
start : string
The URL prefix that must be matched to perform this request.
method_str : string
The HTTP method for which to trigger the request.
text_producer : function(esrh, args)
A function returning a string. The function takes two arguments.
esrh is the QuickServerRequestHandler object that called the
function. args is a map containing the arguments to the request
(i.e., the rest of the URL as path segment array 'paths', a map of
all query fields / flags 'query', the fragment string 'fragment',
and if the method was a POST the JSON form content 'post'). If the
result is None a 404 error is sent.
"""
def send_text(drh, rem_path):
text = text_producer(drh, rem_path)
if not isinstance(text, Response):
text = Response(text)
ctype = text.get_ctype("text/plain")
code = text.code
text = text.response
if text is None:
drh.send_error(404, "File not found")
return None
f = BytesIO()
if isinstance(text, (str, unicode)):
try:
text = text.decode('utf8')
except AttributeError:
pass
text = text.encode('utf8')
f.write(text)
f.flush()
size = f.tell()
f.seek(0)
# handle ETag caching
if drh.request_version >= "HTTP/1.1":
e_tag = "{0:x}".format(zlib.crc32(f.read()) & 0xFFFFFFFF)
f.seek(0)
match = _getheader(drh.headers, 'if-none-match')
if match is not None:
if drh.check_cache(e_tag, match):
f.close()
return None
drh.send_header("ETag", e_tag, end_header=True)
drh.send_header("Cache-Control",
"max-age={0}".format(self.max_age),
end_header=True)
drh.send_response(code)
drh.send_header("Content-Type", ctype)
drh.send_header("Content-Length", size)
drh.end_headers()
return f
self._add_file_mask(start, method_str, send_text) |
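A small registration sketch; `server` is assumed to be the QuickServer instance and the route and parameter names are invented:

def hello(req_handler, args):
    name = args['query'].get('name', 'world')   # 'query' per the args layout described above
    return "hello {0}\n".format(name)

server.add_text_mask('/api/hello/', 'GET', hello)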
def _check_registry_type(folder=None):
"""Check if the user has placed a registry_type.txt file to choose the registry type
If a default registry type file is found, the DefaultBackingType and DefaultBackingFile
class parameters in ComponentRegistry are updated accordingly.
Args:
folder (string): The folder that we should check for a default registry type
"""
folder = _registry_folder(folder)
default_file = os.path.join(folder, 'registry_type.txt')
try:
with open(default_file, "r") as infile:
data = infile.read()
data = data.strip()
ComponentRegistry.SetBackingStore(data)
except IOError:
pass | Check if the user has placed a registry_type.txt file to choose the registry type
If a default registry type file is found, the DefaultBackingType and DefaultBackingFile
class parameters in ComponentRegistry are updated accordingly.
Args:
        folder (string): The folder that we should check for a default registry type | Below is the instruction that describes the task:
### Input:
Check if the user has placed a registry_type.txt file to choose the registry type
If a default registry type file is found, the DefaultBackingType and DefaultBackingFile
class parameters in ComponentRegistry are updated accordingly.
Args:
folder (string): The folder that we should check for a default registry type
### Response:
def _check_registry_type(folder=None):
"""Check if the user has placed a registry_type.txt file to choose the registry type
If a default registry type file is found, the DefaultBackingType and DefaultBackingFile
class parameters in ComponentRegistry are updated accordingly.
Args:
folder (string): The folder that we should check for a default registry type
"""
folder = _registry_folder(folder)
default_file = os.path.join(folder, 'registry_type.txt')
try:
with open(default_file, "r") as infile:
data = infile.read()
data = data.strip()
ComponentRegistry.SetBackingStore(data)
except IOError:
pass |
def rolling_update(config=None, name=None, image=None, container_name=None, rc_new=None):
"""
Performs a simple rolling update of a ReplicationController.
See https://github.com/kubernetes/kubernetes/blob/master/docs/design/simple-rolling-update.md
for algorithm details. We have modified it slightly to allow for keeping the same RC name
between updates, which is not supported by default by kubectl.
:param config: An instance of K8sConfig. If omitted, reads from ~/.kube/config.
:param name: The name of the ReplicationController we want to update.
:param image: The updated image version we want applied.
:param container_name: The name of the container we're targeting for the update.
Required if more than one container is present.
:param rc_new: An instance of K8sReplicationController with the new configuration to apply.
Mutually exclusive with [image, container_name] if specified.
:return:
"""
if name is None:
raise SyntaxError(
'K8sReplicationController: name: [ {0} ] cannot be None.'.format(name))
if image is None and rc_new is None:
raise SyntaxError(
"K8sReplicationController: please specify either 'image' or 'rc_new'")
if container_name is not None and image is not None and rc_new is not None:
raise SyntaxError(
'K8sReplicationController: rc_new is mutually exclusive with an (container_name, image) pair.')
return K8sReplicationController._rolling_update_init(
config=config,
name=name,
image=image,
container_name=container_name,
rc_new=rc_new) | Performs a simple rolling update of a ReplicationController.
See https://github.com/kubernetes/kubernetes/blob/master/docs/design/simple-rolling-update.md
for algorithm details. We have modified it slightly to allow for keeping the same RC name
between updates, which is not supported by default by kubectl.
:param config: An instance of K8sConfig. If omitted, reads from ~/.kube/config.
:param name: The name of the ReplicationController we want to update.
:param image: The updated image version we want applied.
:param container_name: The name of the container we're targeting for the update.
Required if more than one container is present.
:param rc_new: An instance of K8sReplicationController with the new configuration to apply.
Mutually exclusive with [image, container_name] if specified.
    :return: | Below is the instruction that describes the task:
### Input:
Performs a simple rolling update of a ReplicationController.
See https://github.com/kubernetes/kubernetes/blob/master/docs/design/simple-rolling-update.md
for algorithm details. We have modified it slightly to allow for keeping the same RC name
between updates, which is not supported by default by kubectl.
:param config: An instance of K8sConfig. If omitted, reads from ~/.kube/config.
:param name: The name of the ReplicationController we want to update.
:param image: The updated image version we want applied.
:param container_name: The name of the container we're targeting for the update.
Required if more than one container is present.
:param rc_new: An instance of K8sReplicationController with the new configuration to apply.
Mutually exclusive with [image, container_name] if specified.
:return:
### Response:
def rolling_update(config=None, name=None, image=None, container_name=None, rc_new=None):
"""
Performs a simple rolling update of a ReplicationController.
See https://github.com/kubernetes/kubernetes/blob/master/docs/design/simple-rolling-update.md
for algorithm details. We have modified it slightly to allow for keeping the same RC name
between updates, which is not supported by default by kubectl.
:param config: An instance of K8sConfig. If omitted, reads from ~/.kube/config.
:param name: The name of the ReplicationController we want to update.
:param image: The updated image version we want applied.
:param container_name: The name of the container we're targeting for the update.
Required if more than one container is present.
:param rc_new: An instance of K8sReplicationController with the new configuration to apply.
Mutually exclusive with [image, container_name] if specified.
:return:
"""
if name is None:
raise SyntaxError(
'K8sReplicationController: name: [ {0} ] cannot be None.'.format(name))
if image is None and rc_new is None:
raise SyntaxError(
"K8sReplicationController: please specify either 'image' or 'rc_new'")
if container_name is not None and image is not None and rc_new is not None:
raise SyntaxError(
'K8sReplicationController: rc_new is mutually exclusive with an (container_name, image) pair.')
return K8sReplicationController._rolling_update_init(
config=config,
name=name,
image=image,
container_name=container_name,
rc_new=rc_new) |
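A call sketch, assuming rolling_update is exposed as a static method of K8sReplicationController as the body suggests; the names and image tag are invented:

K8sReplicationController.rolling_update(
    name='frontend',
    container_name='web',                      # needed when the pod template has several containers
    image='registry.example.com/web:1.2.3')    # config omitted, so ~/.kube/config is used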
def pmtm(x, NW=None, k=None, NFFT=None, e=None, v=None, method='adapt', show=False):
"""Multitapering spectral estimation
:param array x: the data
:param float NW: The time half bandwidth parameter (typical values are
2.5,3,3.5,4). Must be provided otherwise the tapering windows and
eigen values (outputs of dpss) must be provided
:param int k: uses the first k Slepian sequences. If *k* is not provided,
*k* is set to *NW*2*.
:param NW:
:param e: the window concentrations (eigenvalues)
:param v: the matrix containing the tapering windows
:param str method: set how the eigenvalues are used. Must be
in ['unity', 'adapt', 'eigen']
:param bool show: plot results
:return: Sk (complex), weights, eigenvalues
    Usually in spectral estimation the means to reduce bias is to use a tapering
    window. In order to reduce variance we need to average different spectra.
    The problem is that we have only one set of data. Thus we need to
    decompose a set into several segments. Such methods are well-known: simple
    Daniell's periodogram, Welch's method and so on. The drawback of such
    methods is a loss of resolution since the segments used to compute the
    spectrum are smaller than the data set.
    The interest of the multitapering method is to keep a good resolution while
    reducing bias and variance.
    How does it work? First we compute several simple periodograms with the
    whole data set (to keep good resolution), but each periodogram is computed
    with a different tapering window. Then, we average all these spectra.
    To avoid redundancy and bias due to the tapers, mtm uses special tapers.
.. plot::
:width: 80%
:include-source:
from spectrum import data_cosine, dpss, pmtm
data = data_cosine(N=2048, A=0.1, sampling=1024, freq=200)
# If you already have the DPSS windows
[tapers, eigen] = dpss(2048, 2.5, 4)
res = pmtm(data, e=eigen, v=tapers, show=False)
# You do not need to compute the DPSS before end
res = pmtm(data, NW=2.5, show=False)
res = pmtm(data, NW=2.5, k=4, show=True)
.. versionchanged:: 0.6.2
APN modified method to return each Sk as complex values, the eigenvalues
and the weights
"""
assert method in ['adapt','eigen','unity']
N = len(x)
# if dpss not provided, compute them
if e is None and v is None:
if NW is not None:
[tapers, eigenvalues] = dpss(N, NW, k=k)
else:
raise ValueError("NW must be provided (e.g. 2.5, 3, 3.5, 4")
elif e is not None and v is not None:
eigenvalues = e[:]
tapers = v[:]
else:
raise ValueError("if e provided, v must be provided as well and viceversa.")
nwin = len(eigenvalues) # length of the eigen values vector to be used later
# set the NFFT
if NFFT==None:
NFFT = max(256, 2**nextpow2(N))
Sk_complex = np.fft.fft(np.multiply(tapers.transpose(), x), NFFT)
Sk = abs(Sk_complex)**2
    # if NFFT is smaller than N, truncate; otherwise zero-pad.
# compute
if method in ['eigen', 'unity']:
if method == 'unity':
weights = np.ones((nwin, 1))
elif method == 'eigen':
# The S_k spectrum can be weighted by the eigenvalues, as in Park et al.
weights = np.array([_x/float(i+1) for i,_x in enumerate(eigenvalues)])
weights = weights.reshape(nwin,1)
elif method == 'adapt':
# This version uses the equations from [2] (P&W pp 368-370).
# Wrap the data modulo nfft if N > nfft
sig2 = np.dot(x, x) / float(N)
Sk = abs(np.fft.fft(np.multiply(tapers.transpose(), x), NFFT))**2
Sk = Sk.transpose()
S = (Sk[:,0] + Sk[:,1]) / 2 # Initial spectrum estimate
S = S.reshape(NFFT, 1)
Stemp = np.zeros((NFFT,1))
S1 = np.zeros((NFFT,1))
# Set tolerance for acceptance of spectral estimate:
tol = 0.0005 * sig2 / float(NFFT)
i = 0
a = sig2 * (1 - eigenvalues)
# converges very quickly but for safety; set i<100
while sum(np.abs(S-S1))/NFFT > tol and i<100:
i = i + 1
# calculate weights
b1 = np.multiply(S, np.ones((1,nwin)))
b2 = np.multiply(S,eigenvalues.transpose()) + np.ones((NFFT,1))*a.transpose()
b = b1/b2
# calculate new spectral estimate
wk=(b**2)*(np.ones((NFFT,1))*eigenvalues.transpose())
S1 = sum(wk.transpose()*Sk.transpose())/ sum(wk.transpose())
S1 = S1.reshape(NFFT, 1)
Stemp = S1
S1 = S
S = Stemp # swap S and S1
weights=wk
if show is True:
from pylab import semilogy
if method == "adapt":
Sk = np.mean(Sk * weights, axis=1)
else:
Sk = np.mean(Sk * weights, axis=0)
semilogy(Sk)
return Sk_complex, weights, eigenvalues | Multitapering spectral estimation
:param array x: the data
:param float NW: The time half bandwidth parameter (typical values are
2.5,3,3.5,4). Must be provided otherwise the tapering windows and
eigen values (outputs of dpss) must be provided
:param int k: uses the first k Slepian sequences. If *k* is not provided,
*k* is set to *NW*2*.
:param NW:
:param e: the window concentrations (eigenvalues)
:param v: the matrix containing the tapering windows
:param str method: set how the eigenvalues are used. Must be
in ['unity', 'adapt', 'eigen']
:param bool show: plot results
:return: Sk (complex), weights, eigenvalues
Usually in spectral estimation the means to reduce bias is to use a tapering
window. In order to reduce variance we need to average different spectra.
The problem is that we have only one set of data. Thus we need to
decompose a set into several segments. Such methods are well-known: simple
Daniell's periodogram, Welch's method and so on. The drawback of such
methods is a loss of resolution since the segments used to compute the
spectrum are smaller than the data set.
The interest of the multitapering method is to keep a good resolution while
reducing bias and variance.
How does it work? First we compute different simple periodograms with the
whole data set (to keep good resolution) but each periodogram is computed
with a different tapering window. Then, we average all these spectra.
To avoid redundancy and bias due to the tapers, mtm uses special tapers.
.. plot::
:width: 80%
:include-source:
from spectrum import data_cosine, dpss, pmtm
data = data_cosine(N=2048, A=0.1, sampling=1024, freq=200)
# If you already have the DPSS windows
[tapers, eigen] = dpss(2048, 2.5, 4)
res = pmtm(data, e=eigen, v=tapers, show=False)
# You do not need to compute the DPSS beforehand
res = pmtm(data, NW=2.5, show=False)
res = pmtm(data, NW=2.5, k=4, show=True)
.. versionchanged:: 0.6.2
APN modified method to return each Sk as complex values, the eigenvalues
and the weights | Below is the the instruction that describes the task:
### Input:
Multitapering spectral estimation
:param array x: the data
:param float NW: The time half bandwidth parameter (typical values are
2.5,3,3.5,4). Must be provided otherwise the tapering windows and
eigen values (outputs of dpss) must be provided
:param int k: uses the first k Slepian sequences. If *k* is not provided,
*k* is set to *NW*2*.
:param NW:
:param e: the window concentrations (eigenvalues)
:param v: the matrix containing the tapering windows
:param str method: set how the eigenvalues are used. Must be
in ['unity', 'adapt', 'eigen']
:param bool show: plot results
:return: Sk (complex), weights, eigenvalues
Usually in spectral estimation the means to reduce bias is to use a tapering
window. In order to reduce variance we need to average different spectra.
The problem is that we have only one set of data. Thus we need to
decompose a set into several segments. Such methods are well-known: simple
Daniell's periodogram, Welch's method and so on. The drawback of such
methods is a loss of resolution since the segments used to compute the
spectrum are smaller than the data set.
The interest of the multitapering method is to keep a good resolution while
reducing bias and variance.
How does it work? First we compute different simple periodograms with the
whole data set (to keep good resolution) but each periodogram is computed
with a different tapering window. Then, we average all these spectra.
To avoid redundancy and bias due to the tapers, mtm uses special tapers.
.. plot::
:width: 80%
:include-source:
from spectrum import data_cosine, dpss, pmtm
data = data_cosine(N=2048, A=0.1, sampling=1024, freq=200)
# If you already have the DPSS windows
[tapers, eigen] = dpss(2048, 2.5, 4)
res = pmtm(data, e=eigen, v=tapers, show=False)
# You do not need to compute the DPSS beforehand
res = pmtm(data, NW=2.5, show=False)
res = pmtm(data, NW=2.5, k=4, show=True)
.. versionchanged:: 0.6.2
APN modified method to return each Sk as complex values, the eigenvalues
and the weights
### Response:
def pmtm(x, NW=None, k=None, NFFT=None, e=None, v=None, method='adapt', show=False):
"""Multitapering spectral estimation
:param array x: the data
:param float NW: The time half bandwidth parameter (typical values are
2.5,3,3.5,4). Must be provided otherwise the tapering windows and
eigen values (outputs of dpss) must be provided
:param int k: uses the first k Slepian sequences. If *k* is not provided,
*k* is set to *NW*2*.
:param NW:
:param e: the window concentrations (eigenvalues)
:param v: the matrix containing the tapering windows
:param str method: set how the eigenvalues are used. Must be
in ['unity', 'adapt', 'eigen']
:param bool show: plot results
:return: Sk (complex), weights, eigenvalues
Usually in spectral estimation the means to reduce bias is to use a tapering
window. In order to reduce variance we need to average different spectra.
The problem is that we have only one set of data. Thus we need to
decompose a set into several segments. Such methods are well-known: simple
Daniell's periodogram, Welch's method and so on. The drawback of such
methods is a loss of resolution since the segments used to compute the
spectrum are smaller than the data set.
The interest of the multitapering method is to keep a good resolution while
reducing bias and variance.
How does it work? First we compute different simple periodograms with the
whole data set (to keep good resolution) but each periodogram is computed
with a different tapering window. Then, we average all these spectra.
To avoid redundancy and bias due to the tapers, mtm uses special tapers.
.. plot::
:width: 80%
:include-source:
from spectrum import data_cosine, dpss, pmtm
data = data_cosine(N=2048, A=0.1, sampling=1024, freq=200)
# If you already have the DPSS windows
[tapers, eigen] = dpss(2048, 2.5, 4)
res = pmtm(data, e=eigen, v=tapers, show=False)
# You do not need to compute the DPSS beforehand
res = pmtm(data, NW=2.5, show=False)
res = pmtm(data, NW=2.5, k=4, show=True)
.. versionchanged:: 0.6.2
APN modified method to return each Sk as complex values, the eigenvalues
and the weights
"""
assert method in ['adapt','eigen','unity']
N = len(x)
# if dpss not provided, compute them
if e is None and v is None:
if NW is not None:
[tapers, eigenvalues] = dpss(N, NW, k=k)
else:
raise ValueError("NW must be provided (e.g. 2.5, 3, 3.5, 4")
elif e is not None and v is not None:
eigenvalues = e[:]
tapers = v[:]
else:
raise ValueError("if e provided, v must be provided as well and viceversa.")
nwin = len(eigenvalues) # length of the eigen values vector to be used later
# set the NFFT
if NFFT is None:
NFFT = max(256, 2**nextpow2(N))
Sk_complex = np.fft.fft(np.multiply(tapers.transpose(), x), NFFT)
Sk = abs(Sk_complex)**2
# if NFFT is smaller than N, truncate; otherwise zero-pad.
# compute
if method in ['eigen', 'unity']:
if method == 'unity':
weights = np.ones((nwin, 1))
elif method == 'eigen':
# The S_k spectrum can be weighted by the eigenvalues, as in Park et al.
weights = np.array([_x/float(i+1) for i,_x in enumerate(eigenvalues)])
weights = weights.reshape(nwin,1)
elif method == 'adapt':
# This version uses the equations from [2] (P&W pp 368-370).
# Wrap the data modulo nfft if N > nfft
sig2 = np.dot(x, x) / float(N)
Sk = abs(np.fft.fft(np.multiply(tapers.transpose(), x), NFFT))**2
Sk = Sk.transpose()
S = (Sk[:,0] + Sk[:,1]) / 2 # Initial spectrum estimate
S = S.reshape(NFFT, 1)
Stemp = np.zeros((NFFT,1))
S1 = np.zeros((NFFT,1))
# Set tolerance for acceptance of spectral estimate:
tol = 0.0005 * sig2 / float(NFFT)
i = 0
a = sig2 * (1 - eigenvalues)
# converges very quickly but for safety; set i<100
while sum(np.abs(S-S1))/NFFT > tol and i<100:
i = i + 1
# calculate weights
b1 = np.multiply(S, np.ones((1,nwin)))
b2 = np.multiply(S,eigenvalues.transpose()) + np.ones((NFFT,1))*a.transpose()
b = b1/b2
# calculate new spectral estimate
wk=(b**2)*(np.ones((NFFT,1))*eigenvalues.transpose())
S1 = sum(wk.transpose()*Sk.transpose())/ sum(wk.transpose())
S1 = S1.reshape(NFFT, 1)
Stemp = S1
S1 = S
S = Stemp # swap S and S1
weights=wk
if show is True:
from pylab import semilogy
if method == "adapt":
Sk = np.mean(Sk * weights, axis=1)
else:
Sk = np.mean(Sk * weights, axis=0)
semilogy(Sk)
return Sk_complex, weights, eigenvalues |
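The row above returns the per-taper complex spectra, the weights and the eigenvalues rather than a finished PSD, so a caller has to do the averaging step itself. The sketch below mirrors the averaging used in the `show` branch of the code; the helper name `combine_mtm` is an illustrative assumption, not part of the spectrum package API.

```python
import numpy as np

def combine_mtm(Sk_complex, weights, method='adapt'):
    """Average per-taper spectra into a single PSD estimate (sketch).

    Mirrors the plotting branch above: adaptive weights have shape
    (NFFT, n_tapers) and vary per frequency bin, while 'unity'/'eigen'
    weights have shape (n_tapers, 1).
    """
    Sk = np.abs(Sk_complex) ** 2          # per-taper power spectra, (n_tapers, NFFT)
    if method == 'adapt':
        return np.mean(Sk.transpose() * weights, axis=1)
    return np.mean(Sk * weights, axis=0)

# Hypothetical usage, following the docstring example:
# Sk, weights, eigenvalues = pmtm(data, NW=2.5, k=4, show=False)
# psd = combine_mtm(Sk, weights, method='adapt')
```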
def _from_dict(cls, _dict):
"""Initialize a Environment object from a json dictionary."""
args = {}
if 'environment_id' in _dict:
args['environment_id'] = _dict.get('environment_id')
if 'name' in _dict:
args['name'] = _dict.get('name')
if 'description' in _dict:
args['description'] = _dict.get('description')
if 'created' in _dict:
args['created'] = string_to_datetime(_dict.get('created'))
if 'updated' in _dict:
args['updated'] = string_to_datetime(_dict.get('updated'))
if 'status' in _dict:
args['status'] = _dict.get('status')
if 'read_only' in _dict:
args['read_only'] = _dict.get('read_only')
if 'size' in _dict:
args['size'] = _dict.get('size')
if 'requested_size' in _dict:
args['requested_size'] = _dict.get('requested_size')
if 'index_capacity' in _dict:
args['index_capacity'] = IndexCapacity._from_dict(
_dict.get('index_capacity'))
if 'search_status' in _dict:
args['search_status'] = SearchStatus._from_dict(
_dict.get('search_status'))
return cls(**args) | Initialize an Environment object from a JSON dictionary.
### Input:
Initialize an Environment object from a JSON dictionary.
### Response:
def _from_dict(cls, _dict):
"""Initialize a Environment object from a json dictionary."""
args = {}
if 'environment_id' in _dict:
args['environment_id'] = _dict.get('environment_id')
if 'name' in _dict:
args['name'] = _dict.get('name')
if 'description' in _dict:
args['description'] = _dict.get('description')
if 'created' in _dict:
args['created'] = string_to_datetime(_dict.get('created'))
if 'updated' in _dict:
args['updated'] = string_to_datetime(_dict.get('updated'))
if 'status' in _dict:
args['status'] = _dict.get('status')
if 'read_only' in _dict:
args['read_only'] = _dict.get('read_only')
if 'size' in _dict:
args['size'] = _dict.get('size')
if 'requested_size' in _dict:
args['requested_size'] = _dict.get('requested_size')
if 'index_capacity' in _dict:
args['index_capacity'] = IndexCapacity._from_dict(
_dict.get('index_capacity'))
if 'search_status' in _dict:
args['search_status'] = SearchStatus._from_dict(
_dict.get('search_status'))
return cls(**args) |
def sow(self):
'''
Distributes attributes named in sow_vars from self to each AgentType
in the market, storing them in respectively named attributes.
Parameters
----------
none
Returns
-------
none
'''
for var_name in self.sow_vars:
this_seed = getattr(self,var_name)
for this_type in self.agents:
setattr(this_type,var_name,this_seed) | Distributes attributes named in sow_vars from self to each AgentType
in the market, storing them in respectively named attributes.
Parameters
----------
none
Returns
-------
none | Below is the the instruction that describes the task:
### Input:
Distributes attributes named in sow_vars from self to each AgentType
in the market, storing them in respectively named attributes.
Parameters
----------
none
Returns
-------
none
### Response:
def sow(self):
'''
Distributes attributes named in sow_vars from self to each AgentType
in the market, storing them in respectively named attributes.
Parameters
----------
none
Returns
-------
none
'''
for var_name in self.sow_vars:
this_seed = getattr(self,var_name)
for this_type in self.agents:
setattr(this_type,var_name,this_seed) |
def sha256_file(path):
"""Calculate sha256 hex digest of a file.
:param path: The path of the file you are calculating the digest of.
:type path: str
:returns: The sha256 hex digest of the specified file.
:rtype: str
"""
h = hashlib.sha256()
with open(path, 'rb') as f:
for chunk in iter(lambda: f.read(CHUNK_SIZE), b''):
h.update(chunk)
return h.hexdigest() | Calculate sha256 hex digest of a file.
:param path: The path of the file you are calculating the digest of.
:type path: str
:returns: The sha256 hex digest of the specified file.
:rtype: str
### Input:
Calculate sha256 hex digest of a file.
:param path: The path of the file you are calculating the digest of.
:type path: str
:returns: The sha256 hex digest of the specified file.
:rtype: str
### Response:
def sha256_file(path):
"""Calculate sha256 hex digest of a file.
:param path: The path of the file you are calculating the digest of.
:type path: str
:returns: The sha256 hex digest of the specified file.
:rtype: builtin_function_or_method
"""
h = hashlib.sha256()
with open(path, 'rb') as f:
for chunk in iter(lambda: f.read(CHUNK_SIZE), b''):
h.update(chunk)
return h.hexdigest() |
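The two-argument `iter(callable, sentinel)` loop above is what keeps memory flat: it keeps calling `f.read(CHUNK_SIZE)` until the empty-bytes sentinel comes back. The same pattern works on any binary stream, as in this small self-contained sketch (the buffer contents are made up):

```python
import hashlib
import io

buf = io.BytesIO(b"example payload " * 1000)     # stand-in for a large file
h = hashlib.sha256()
for chunk in iter(lambda: buf.read(4096), b''):  # call read() until it returns b''
    h.update(chunk)
print(h.hexdigest())                             # 64-character hex digest
```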
def nbytes(self):
"""The number of bytes required to encode this command.
Encoded commands are comprised of a two byte opcode, followed by a
one byte size, and then the command argument bytes. The size
indicates the number of bytes required to represent command
arguments.
"""
return len(self.opcode) + 1 + sum(arg.nbytes for arg in self.argdefns) | The number of bytes required to encode this command.
Encoded commands are comprised of a two byte opcode, followed by a
one byte size, and then the command argument bytes. The size
indicates the number of bytes required to represent command
arguments. | Below is the the instruction that describes the task:
### Input:
The number of bytes required to encode this command.
Encoded commands are comprised of a two byte opcode, followed by a
one byte size, and then the command argument bytes. The size
indicates the number of bytes required to represent command
arguments.
### Response:
def nbytes(self):
"""The number of bytes required to encode this command.
Encoded commands are comprised of a two byte opcode, followed by a
one byte size, and then the command argument bytes. The size
indicates the number of bytes required to represent command
arguments.
"""
return len(self.opcode) + 1 + sum(arg.nbytes for arg in self.argdefns) |
def GetMessages(self, formatter_mediator, event):
"""Determines the formatted message strings for an event object.
Args:
formatter_mediator (FormatterMediator): mediates the interactions
between formatters and other components, such as storage and Windows
EventLog resources.
event (EventObject): event.
Returns:
tuple(str, str): formatted message string and short message string.
Raises:
WrongFormatter: if the event object cannot be formatted by the formatter.
"""
if self.DATA_TYPE != event.data_type:
raise errors.WrongFormatter('Unsupported data type: {0:s}.'.format(
event.data_type))
event_values = event.CopyToDict()
message_type = event_values.get('message_type', None)
if message_type is not None:
event_values['message_type'] = (
self._MESSAGE_TYPE.get(message_type, 'UNKNOWN'))
message_status = event_values.get('message_status', None)
if message_status is not None:
event_values['message_status'] = (
self._MESSAGE_STATUS.get(message_status, 'UNKNOWN'))
return self._ConditionalFormatMessages(event_values) | Determines the formatted message strings for an event object.
Args:
formatter_mediator (FormatterMediator): mediates the interactions
between formatters and other components, such as storage and Windows
EventLog resources.
event (EventObject): event.
Returns:
tuple(str, str): formatted message string and short message string.
Raises:
WrongFormatter: if the event object cannot be formatted by the formatter. | Below is the the instruction that describes the task:
### Input:
Determines the formatted message strings for an event object.
Args:
formatter_mediator (FormatterMediator): mediates the interactions
between formatters and other components, such as storage and Windows
EventLog resources.
event (EventObject): event.
Returns:
tuple(str, str): formatted message string and short message string.
Raises:
WrongFormatter: if the event object cannot be formatted by the formatter.
### Response:
def GetMessages(self, formatter_mediator, event):
"""Determines the formatted message strings for an event object.
Args:
formatter_mediator (FormatterMediator): mediates the interactions
between formatters and other components, such as storage and Windows
EventLog resources.
event (EventObject): event.
Returns:
tuple(str, str): formatted message string and short message string.
Raises:
WrongFormatter: if the event object cannot be formatted by the formatter.
"""
if self.DATA_TYPE != event.data_type:
raise errors.WrongFormatter('Unsupported data type: {0:s}.'.format(
event.data_type))
event_values = event.CopyToDict()
message_type = event_values.get('message_type', None)
if message_type is not None:
event_values['message_type'] = (
self._MESSAGE_TYPE.get(message_type, 'UNKNOWN'))
message_status = event_values.get('message_status', None)
if message_status is not None:
event_values['message_status'] = (
self._MESSAGE_STATUS.get(message_status, 'UNKNOWN'))
return self._ConditionalFormatMessages(event_values) |
def solve_map(expr, vars):
"""Solves the map-form, by recursively calling its RHS with new vars.
let-forms are binary expressions. The LHS should evaluate to an IAssociative
that can be used as new vars with which to solve a new query, of which
the RHS is the root. In most cases, the LHS will be a Var (var).
Typically, map-forms result from the dotty "dot" (.) operator. For example,
the query "User.name" will translate to a map-form with the var "User"
on LHS and a var to "name" on the RHS. With top-level vars being
something like {"User": {"name": "Bob"}}, the Var on the LHS will
evaluate to {"name": "Bob"}, which subdict will then be used on the RHS as
new vars, and that whole form will evaluate to "Bob".
"""
lhs_values, _ = __solve_for_repeated(expr.lhs, vars)
def lazy_map():
try:
for lhs_value in repeated.getvalues(lhs_values):
yield solve(expr.rhs,
__nest_scope(expr.lhs, vars, lhs_value)).value
except errors.EfilterNoneError as error:
error.root = expr
raise
return Result(repeated.lazy(lazy_map), ()) | Solves the map-form, by recursively calling its RHS with new vars.
let-forms are binary expressions. The LHS should evaluate to an IAssociative
that can be used as new vars with which to solve a new query, of which
the RHS is the root. In most cases, the LHS will be a Var (var).
Typically, map-forms result from the dotty "dot" (.) operator. For example,
the query "User.name" will translate to a map-form with the var "User"
on LHS and a var to "name" on the RHS. With top-level vars being
something like {"User": {"name": "Bob"}}, the Var on the LHS will
evaluate to {"name": "Bob"}, which subdict will then be used on the RHS as
new vars, and that whole form will evaluate to "Bob". | Below is the the instruction that describes the task:
### Input:
Solves the map-form, by recursively calling its RHS with new vars.
let-forms are binary expressions. The LHS should evaluate to an IAssociative
that can be used as new vars with which to solve a new query, of which
the RHS is the root. In most cases, the LHS will be a Var (var).
Typically, map-forms result from the dotty "dot" (.) operator. For example,
the query "User.name" will translate to a map-form with the var "User"
on LHS and a var to "name" on the RHS. With top-level vars being
something like {"User": {"name": "Bob"}}, the Var on the LHS will
evaluate to {"name": "Bob"}, which subdict will then be used on the RHS as
new vars, and that whole form will evaluate to "Bob".
### Response:
def solve_map(expr, vars):
"""Solves the map-form, by recursively calling its RHS with new vars.
let-forms are binary expressions. The LHS should evaluate to an IAssociative
that can be used as new vars with which to solve a new query, of which
the RHS is the root. In most cases, the LHS will be a Var (var).
Typically, map-forms result from the dotty "dot" (.) operator. For example,
the query "User.name" will translate to a map-form with the var "User"
on LHS and a var to "name" on the RHS. With top-level vars being
something like {"User": {"name": "Bob"}}, the Var on the LHS will
evaluate to {"name": "Bob"}, which subdict will then be used on the RHS as
new vars, and that whole form will evaluate to "Bob".
"""
lhs_values, _ = __solve_for_repeated(expr.lhs, vars)
def lazy_map():
try:
for lhs_value in repeated.getvalues(lhs_values):
yield solve(expr.rhs,
__nest_scope(expr.lhs, vars, lhs_value)).value
except errors.EfilterNoneError as error:
error.root = expr
raise
return Result(repeated.lazy(lazy_map), ()) |
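The scoping idea described in that docstring can be illustrated without the EFILTER machinery: evaluate the left-hand side, then use its value as the variable scope for the right-hand side. The mini-resolver below is a standalone sketch of that idea, not the library's solver.

```python
def resolve_dotted(vars_, dotted):
    """Resolve 'User.name' by letting each LHS value become the scope for the RHS."""
    scope = vars_
    for part in dotted.split('.'):
        scope = scope[part]   # the LHS result is the new vars for the next hop
    return scope

print(resolve_dotted({"User": {"name": "Bob"}}, "User.name"))   # -> Bob
```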
def set_properties(self, properties, **kwargs):
"""
:param properties: Property names and values given as key-value pairs of strings
:type properties: dict
Given key-value pairs in *properties* for property names and
values, the properties are set on the project for the given
property names. Any property with a value of :const:`None`
indicates the property will be deleted.
.. note:: Any existing properties not mentioned in *properties*
are not modified by this method.
"""
return dxpy.api.project_set_properties(self._dxid, {"properties": properties}, **kwargs) | :param properties: Property names and values given as key-value pairs of strings
:type properties: dict
Given key-value pairs in *properties* for property names and
values, the properties are set on the project for the given
property names. Any property with a value of :const:`None`
indicates the property will be deleted.
.. note:: Any existing properties not mentioned in *properties*
are not modified by this method. | Below is the the instruction that describes the task:
### Input:
:param properties: Property names and values given as key-value pairs of strings
:type properties: dict
Given key-value pairs in *properties* for property names and
values, the properties are set on the project for the given
property names. Any property with a value of :const:`None`
indicates the property will be deleted.
.. note:: Any existing properties not mentioned in *properties*
are not modified by this method.
### Response:
def set_properties(self, properties, **kwargs):
"""
:param properties: Property names and values given as key-value pairs of strings
:type properties: dict
Given key-value pairs in *properties* for property names and
values, the properties are set on the project for the given
property names. Any property with a value of :const:`None`
indicates the property will be deleted.
.. note:: Any existing properties not mentioned in *properties*
are not modified by this method.
"""
return dxpy.api.project_set_properties(self._dxid, {"properties": properties}, **kwargs) |
def augassign_handle(self, tokens):
"""Process assignments."""
internal_assert(len(tokens) == 3, "invalid assignment tokens", tokens)
name, op, item = tokens
out = ""
if op == "|>=":
out += name + " = (" + item + ")(" + name + ")"
elif op == "|*>=":
out += name + " = (" + item + ")(*" + name + ")"
elif op == "<|=":
out += name + " = " + name + "((" + item + "))"
elif op == "<*|=":
out += name + " = " + name + "(*(" + item + "))"
elif op == "..=" or op == "<..=":
out += name + " = _coconut_forward_compose((" + item + "), " + name + ")"
elif op == "..>=":
out += name + " = _coconut_forward_compose(" + name + ", (" + item + "))"
elif op == "<*..=":
out += name + " = _coconut_forward_star_compose((" + item + "), " + name + ")"
elif op == "..*>=":
out += name + " = _coconut_forward_star_compose(" + name + ", (" + item + "))"
elif op == "??=":
out += name + " = " + item + " if " + name + " is None else " + name
elif op == "::=":
ichain_var = lazy_chain_var + "_" + str(self.ichain_count)
self.ichain_count += 1
# this is necessary to prevent a segfault caused by self-reference
out += (
ichain_var + " = " + name + "\n"
+ name + " = _coconut.itertools.chain.from_iterable(" + lazy_list_handle([ichain_var, "(" + item + ")"]) + ")"
)
else:
out += name + " " + op + " " + item
return out | Process assignments. | Below is the the instruction that describes the task:
### Input:
Process assignments.
### Response:
def augassign_handle(self, tokens):
"""Process assignments."""
internal_assert(len(tokens) == 3, "invalid assignment tokens", tokens)
name, op, item = tokens
out = ""
if op == "|>=":
out += name + " = (" + item + ")(" + name + ")"
elif op == "|*>=":
out += name + " = (" + item + ")(*" + name + ")"
elif op == "<|=":
out += name + " = " + name + "((" + item + "))"
elif op == "<*|=":
out += name + " = " + name + "(*(" + item + "))"
elif op == "..=" or op == "<..=":
out += name + " = _coconut_forward_compose((" + item + "), " + name + ")"
elif op == "..>=":
out += name + " = _coconut_forward_compose(" + name + ", (" + item + "))"
elif op == "<*..=":
out += name + " = _coconut_forward_star_compose((" + item + "), " + name + ")"
elif op == "..*>=":
out += name + " = _coconut_forward_star_compose(" + name + ", (" + item + "))"
elif op == "??=":
out += name + " = " + item + " if " + name + " is None else " + name
elif op == "::=":
ichain_var = lazy_chain_var + "_" + str(self.ichain_count)
self.ichain_count += 1
# this is necessary to prevent a segfault caused by self-reference
out += (
ichain_var + " = " + name + "\n"
+ name + " = _coconut.itertools.chain.from_iterable(" + lazy_list_handle([ichain_var, "(" + item + ")"]) + ")"
)
else:
out += name + " " + op + " " + item
return out |
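To see what those string templates produce, here is a standalone sketch reproducing three of the simpler branches above, so the generated Python can be inspected without a Coconut compiler instance (whitespace in real Coconut output may differ):

```python
def augassign_sketch(name, op, item):
    if op == "|>=":
        return name + " = (" + item + ")(" + name + ")"
    if op == "??=":
        return name + " = " + item + " if " + name + " is None else " + name
    if op == "..>=":
        return name + " = _coconut_forward_compose(" + name + ", (" + item + "))"
    return name + " " + op + " " + item

print(augassign_sketch("x", "|>=", "f"))     # x = (f)(x)
print(augassign_sketch("y", "??=", "0"))     # y = 0 if y is None else y
print(augassign_sketch("g", "..>=", "h"))    # g = _coconut_forward_compose(g, (h))
```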
def add_extra_urls(self, item_session: ItemSession):
'''Add additional URLs such as robots.txt, favicon.ico.'''
if item_session.url_record.level == 0 and self._sitemaps:
extra_url_infos = (
self.parse_url(
'{0}://{1}/robots.txt'.format(
item_session.url_record.url_info.scheme,
item_session.url_record.url_info.hostname_with_port)
),
self.parse_url(
'{0}://{1}/sitemap.xml'.format(
item_session.url_record.url_info.scheme,
item_session.url_record.url_info.hostname_with_port)
)
)
for url_info in extra_url_infos:
item_session.add_child_url(url_info.url) | Add additional URLs such as robots.txt, favicon.ico. | Below is the the instruction that describes the task:
### Input:
Add additional URLs such as robots.txt, favicon.ico.
### Response:
def add_extra_urls(self, item_session: ItemSession):
'''Add additional URLs such as robots.txt, favicon.ico.'''
if item_session.url_record.level == 0 and self._sitemaps:
extra_url_infos = (
self.parse_url(
'{0}://{1}/robots.txt'.format(
item_session.url_record.url_info.scheme,
item_session.url_record.url_info.hostname_with_port)
),
self.parse_url(
'{0}://{1}/sitemap.xml'.format(
item_session.url_record.url_info.scheme,
item_session.url_record.url_info.hostname_with_port)
)
)
for url_info in extra_url_infos:
item_session.add_child_url(url_info.url) |
def get_parent_aligned_annotation(self, ref_id):
"""" Give the aligment annotation that a reference annotation belongs to directly, or indirectly through other
reference annotations.
:param str ref_id: Id of a reference annotation.
:raises KeyError: If no annotation exists with the id or if it belongs to an alignment annotation.
:returns: The alignment annotation at the end of the reference chain.
"""
parentTier = self.tiers[self.annotations[ref_id]]
while "PARENT_REF" in parentTier[2] and len(parentTier[2]) > 0:
ref_id = parentTier[1][ref_id][0]
parentTier = self.tiers[self.annotations[ref_id]]
return parentTier[0][ref_id] | Give the alignment annotation that a reference annotation belongs to directly, or indirectly through other
reference annotations.
:param str ref_id: Id of a reference annotation.
:raises KeyError: If no annotation exists with the id or if it belongs to an alignment annotation.
:returns: The alignment annotation at the end of the reference chain. | Below is the the instruction that describes the task:
### Input:
Give the alignment annotation that a reference annotation belongs to directly, or indirectly through other
reference annotations.
:param str ref_id: Id of a reference annotation.
:raises KeyError: If no annotation exists with the id or if it belongs to an alignment annotation.
:returns: The alignment annotation at the end of the reference chain.
### Response:
def get_parent_aligned_annotation(self, ref_id):
"""" Give the aligment annotation that a reference annotation belongs to directly, or indirectly through other
reference annotations.
:param str ref_id: Id of a reference annotation.
:raises KeyError: If no annotation exists with the id or if it belongs to an alignment annotation.
:returns: The alignment annotation at the end of the reference chain.
"""
parentTier = self.tiers[self.annotations[ref_id]]
while "PARENT_REF" in parentTier[2] and len(parentTier[2]) > 0:
ref_id = parentTier[1][ref_id][0]
parentTier = self.tiers[self.annotations[ref_id]]
return parentTier[0][ref_id] |
def setup_authentication_methods(authn_config, template_env):
"""Add all authentication methods specified in the configuration."""
routing = {}
ac = AuthnBroker()
for authn_method in authn_config:
cls = make_cls_from_name(authn_method["class"])
instance = cls(template_env=template_env, **authn_method["kwargs"])
ac.add(authn_method["acr"], instance)
routing[instance.url_endpoint] = VerifierMiddleware(instance)
return ac, routing | Add all authentication methods specified in the configuration. | Below is the the instruction that describes the task:
### Input:
Add all authentication methods specified in the configuration.
### Response:
def setup_authentication_methods(authn_config, template_env):
"""Add all authentication methods specified in the configuration."""
routing = {}
ac = AuthnBroker()
for authn_method in authn_config:
cls = make_cls_from_name(authn_method["class"])
instance = cls(template_env=template_env, **authn_method["kwargs"])
ac.add(authn_method["acr"], instance)
routing[instance.url_endpoint] = VerifierMiddleware(instance)
return ac, routing |
def translate(self, instruction):
"""Return IR representation of an instruction.
"""
try:
trans_instrs = self.__translate(instruction)
except NotImplementedError:
unkn_instr = self._builder.gen_unkn()
unkn_instr.address = instruction.address << 8 | (0x0 & 0xff)
trans_instrs = [unkn_instr]
self._log_not_supported_instruction(instruction)
except Exception:
self._log_translation_exception(instruction)
raise
return trans_instrs | Return IR representation of an instruction. | Below is the the instruction that describes the task:
### Input:
Return IR representation of an instruction.
### Response:
def translate(self, instruction):
"""Return IR representation of an instruction.
"""
try:
trans_instrs = self.__translate(instruction)
except NotImplementedError:
unkn_instr = self._builder.gen_unkn()
unkn_instr.address = instruction.address << 8 | (0x0 & 0xff)
trans_instrs = [unkn_instr]
self._log_not_supported_instruction(instruction)
except Exception:
self._log_translation_exception(instruction)
raise
return trans_instrs |
def SetActiveBreakpoints(self, breakpoints_data):
"""Adds new breakpoints and removes missing ones.
Args:
breakpoints_data: updated list of active breakpoints.
"""
with self._lock:
ids = set([x['id'] for x in breakpoints_data])
# Clear breakpoints that no longer show up in active breakpoints list.
for breakpoint_id in six.viewkeys(self._active) - ids:
self._active.pop(breakpoint_id).Clear()
# Create new breakpoints.
self._active.update([
(x['id'],
python_breakpoint.PythonBreakpoint(
x,
self._hub_client,
self,
self.data_visibility_policy))
for x in breakpoints_data
if x['id'] in ids - six.viewkeys(self._active) - self._completed])
# Remove entries from completed_breakpoints_ that weren't listed in
# breakpoints_data vector. These are confirmed to have been removed by the
# hub and the debuglet can now assume that they will never show up ever
# again. The backend never reuses breakpoint IDs.
self._completed &= ids
if self._active:
self._next_expiration = datetime.min # Not known.
else:
self._next_expiration = datetime.max | Adds new breakpoints and removes missing ones.
Args:
breakpoints_data: updated list of active breakpoints. | Below is the the instruction that describes the task:
### Input:
Adds new breakpoints and removes missing ones.
Args:
breakpoints_data: updated list of active breakpoints.
### Response:
def SetActiveBreakpoints(self, breakpoints_data):
"""Adds new breakpoints and removes missing ones.
Args:
breakpoints_data: updated list of active breakpoints.
"""
with self._lock:
ids = set([x['id'] for x in breakpoints_data])
# Clear breakpoints that no longer show up in active breakpoints list.
for breakpoint_id in six.viewkeys(self._active) - ids:
self._active.pop(breakpoint_id).Clear()
# Create new breakpoints.
self._active.update([
(x['id'],
python_breakpoint.PythonBreakpoint(
x,
self._hub_client,
self,
self.data_visibility_policy))
for x in breakpoints_data
if x['id'] in ids - six.viewkeys(self._active) - self._completed])
# Remove entries from completed_breakpoints_ that weren't listed in
# breakpoints_data vector. These are confirmed to have been removed by the
# hub and the debuglet can now assume that they will never show up ever
# again. The backend never reuses breakpoint IDs.
self._completed &= ids
if self._active:
self._next_expiration = datetime.min # Not known.
else:
self._next_expiration = datetime.max |
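The bookkeeping above is a generic "sync local state to a server list" pattern built from set differences. A stripped-down sketch with plain dicts and sets (no debugger objects; all names are illustrative) shows the same flow:

```python
def sync(active, completed, incoming_ids, make):
    """Keep `active` in step with `incoming_ids`; never recreate `completed` ids."""
    ids = set(incoming_ids)
    for bp_id in set(active) - ids:               # no longer active on the server
        active.pop(bp_id)
    for bp_id in ids - set(active) - completed:   # newly reported by the server
        active[bp_id] = make(bp_id)
    completed &= ids                              # forget ids the server dropped

active, completed = {}, set()
sync(active, completed, ['a', 'b'], make=lambda i: object())
sync(active, completed, ['b', 'c'], make=lambda i: object())
print(sorted(active))    # ['b', 'c']
```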
def as_ipywidget(self):
""" Provides an IPywidgets player that can be used in a notebook. """
from IPython.display import Audio
return Audio(data=self.y, rate=self.sr) | Provides an IPywidgets player that can be used in a notebook. | Below is the the instruction that describes the task:
### Input:
Provides an IPywidgets player that can be used in a notebook.
### Response:
def as_ipywidget(self):
""" Provides an IPywidgets player that can be used in a notebook. """
from IPython.display import Audio
return Audio(data=self.y, rate=self.sr) |
def del_from_groups(self, username, groups):
"""Delete user from groups"""
# it follows the same logic as add_to_groups
# but with MOD_DELETE
ldap_client = self._bind()
tmp = self._get_user(self._byte_p2(username), ALL_ATTRS)
if tmp is None:
raise UserDoesntExist(username, self.backend_name)
dn = tmp[0]
attrs = tmp[1]
attrs['dn'] = dn
self._normalize_group_attrs(attrs)
dn = self._byte_p2(tmp[0])
for group in groups:
group = self._byte_p2(group)
for attr in self.group_attrs:
content = self._byte_p2(self.group_attrs[attr] % attrs)
ldif = [(ldap.MOD_DELETE, attr, self._byte_p3(content))]
try:
ldap_client.modify_s(group, ldif)
except ldap.NO_SUCH_ATTRIBUTE as e:
self._logger(
severity=logging.INFO,
msg="%(backend)s: user '%(user)s'"
" wasn't member of group '%(group)s'"
" (attribute '%(attr)s')" % {
'user': username,
'group': self._uni(group),
'attr': attr,
'backend': self.backend_name
}
)
except Exception as e:
ldap_client.unbind_s()
self._exception_handler(e)
ldap_client.unbind_s() | Delete user from groups | Below is the the instruction that describes the task:
### Input:
Delete user from groups
### Response:
def del_from_groups(self, username, groups):
"""Delete user from groups"""
# it follows the same logic as add_to_groups
# but with MOD_DELETE
ldap_client = self._bind()
tmp = self._get_user(self._byte_p2(username), ALL_ATTRS)
if tmp is None:
raise UserDoesntExist(username, self.backend_name)
dn = tmp[0]
attrs = tmp[1]
attrs['dn'] = dn
self._normalize_group_attrs(attrs)
dn = self._byte_p2(tmp[0])
for group in groups:
group = self._byte_p2(group)
for attr in self.group_attrs:
content = self._byte_p2(self.group_attrs[attr] % attrs)
ldif = [(ldap.MOD_DELETE, attr, self._byte_p3(content))]
try:
ldap_client.modify_s(group, ldif)
except ldap.NO_SUCH_ATTRIBUTE as e:
self._logger(
severity=logging.INFO,
msg="%(backend)s: user '%(user)s'"
" wasn't member of group '%(group)s'"
" (attribute '%(attr)s')" % {
'user': username,
'group': self._uni(group),
'attr': attr,
'backend': self.backend_name
}
)
except Exception as e:
ldap_client.unbind_s()
self._exception_handler(e)
ldap_client.unbind_s() |
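For context, `modify_s` takes a list of (operation, attribute, values) tuples. A minimal standalone python-ldap sketch of removing one member value, with the same tolerance for users who were never in the group, might look like this (server URI, bind credentials, DN and attribute name are all made-up assumptions):

```python
import ldap

conn = ldap.initialize("ldap://ldap.example.org")            # hypothetical server
conn.simple_bind_s("cn=admin,dc=example,dc=org", "secret")    # hypothetical credentials
try:
    group_dn = "cn=developers,ou=groups,dc=example,dc=org"
    ldif = [(ldap.MOD_DELETE, "memberUid", [b"alice"])]       # assumed attribute/value
    try:
        conn.modify_s(group_dn, ldif)
    except ldap.NO_SUCH_ATTRIBUTE:
        pass   # the user was not a member; ignore, as the method above does
finally:
    conn.unbind_s()
```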
def _get_mtime():
"""
Get the modified time of the RPM Database.
Returns:
Unix ticks
"""
return os.path.exists(RPM_PATH) and int(os.path.getmtime(RPM_PATH)) or 0 | Get the modified time of the RPM Database.
Returns:
Unix ticks | Below is the the instruction that describes the task:
### Input:
Get the modified time of the RPM Database.
Returns:
Unix ticks
### Response:
def _get_mtime():
"""
Get the modified time of the RPM Database.
Returns:
Unix ticks
"""
return os.path.exists(RPM_PATH) and int(os.path.getmtime(RPM_PATH)) or 0 |
def get_list_of_paths(self):
"""
return a list of unique paths in the file list
"""
all_paths = []
for p in self.fl_metadata:
try:
all_paths.append(p['path'])
except:
try:
print('cls_filelist - no key path, ignoring folder ' + str(p))
except:
print('cls_filelist - no key path, ignoring odd character folder')
return list(set(all_paths)) | return a list of unique paths in the file list | Below is the the instruction that describes the task:
### Input:
return a list of unique paths in the file list
### Response:
def get_list_of_paths(self):
"""
return a list of unique paths in the file list
"""
all_paths = []
for p in self.fl_metadata:
try:
all_paths.append(p['path'])
except:
try:
print('cls_filelist - no key path, ignoring folder ' + str(p))
except:
print('cls_filelist - no key path, ignoring odd character folder')
return list(set(all_paths)) |
def get_transformed_feature_info(features, schema):
"""Returns information about the transformed features.
Returns:
Dict in the form
{transformed_feature_name: {dtype: tf type, size: int or None}}. If the size
is None, then the tensor is a sparse tensor.
"""
info = collections.defaultdict(dict)
for name, transform in six.iteritems(features):
transform_name = transform['transform']
source_column = transform['source_column']
if transform_name == IDENTITY_TRANSFORM:
schema_type = next(col['type'].lower() for col in schema if col['name'] == source_column)
if schema_type == FLOAT_SCHEMA:
info[name]['dtype'] = tf.float32
elif schema_type == INTEGER_SCHEMA:
info[name]['dtype'] = tf.int64
else:
raise ValueError('identity should only be applied to integer or float'
'columns, but was used on %s' % name)
info[name]['size'] = 1
elif transform_name == SCALE_TRANSFORM:
info[name]['dtype'] = tf.float32
info[name]['size'] = 1
elif transform_name == ONE_HOT_TRANSFORM:
info[name]['dtype'] = tf.int64
info[name]['size'] = 1
elif transform_name == EMBEDDING_TRANSFROM:
info[name]['dtype'] = tf.int64
info[name]['size'] = 1
elif transform_name == MULTI_HOT_TRANSFORM:
info[name]['dtype'] = tf.int64
info[name]['size'] = None
elif transform_name == BOW_TRANSFORM or transform_name == TFIDF_TRANSFORM:
info[name + '_ids']['dtype'] = tf.int64
info[name + '_weights']['dtype'] = tf.float32
info[name + '_ids']['size'] = None
info[name + '_weights']['size'] = None
elif transform_name == KEY_TRANSFORM:
schema_type = next(col['type'].lower() for col in schema if col['name'] == source_column)
if schema_type == FLOAT_SCHEMA:
info[name]['dtype'] = tf.float32
elif schema_type == INTEGER_SCHEMA:
info[name]['dtype'] = tf.int64
else:
info[name]['dtype'] = tf.string
info[name]['size'] = 1
elif transform_name == TARGET_TRANSFORM:
# If the input is a string, it gets converted to an int (id)
schema_type = next(col['type'].lower() for col in schema if col['name'] == source_column)
if schema_type in NUMERIC_SCHEMA:
info[name]['dtype'] = tf.float32
else:
info[name]['dtype'] = tf.int64
info[name]['size'] = 1
elif transform_name == IMAGE_TRANSFORM:
info[name]['dtype'] = tf.float32
info[name]['size'] = IMAGE_BOTTLENECK_TENSOR_SIZE
else:
raise ValueError('Unknown transform %s' % transform_name)
return info | Returns information about the transformed features.
Returns:
Dict in the form
{transformed_feature_name: {dtype: tf type, size: int or None}}. If the size
is None, then the tensor is a sparse tensor. | Below is the the instruction that describes the task:
### Input:
Returns information about the transformed features.
Returns:
Dict in the form
{transformed_feature_name: {dtype: tf type, size: int or None}}. If the size
is None, then the tensor is a sparse tensor.
### Response:
def get_transformed_feature_info(features, schema):
"""Returns information about the transformed features.
Returns:
Dict in the form
{transformed_feature_name: {dtype: tf type, size: int or None}}. If the size
is None, then the tensor is a sparse tensor.
"""
info = collections.defaultdict(dict)
for name, transform in six.iteritems(features):
transform_name = transform['transform']
source_column = transform['source_column']
if transform_name == IDENTITY_TRANSFORM:
schema_type = next(col['type'].lower() for col in schema if col['name'] == source_column)
if schema_type == FLOAT_SCHEMA:
info[name]['dtype'] = tf.float32
elif schema_type == INTEGER_SCHEMA:
info[name]['dtype'] = tf.int64
else:
raise ValueError('identity should only be applied to integer or float'
'columns, but was used on %s' % name)
info[name]['size'] = 1
elif transform_name == SCALE_TRANSFORM:
info[name]['dtype'] = tf.float32
info[name]['size'] = 1
elif transform_name == ONE_HOT_TRANSFORM:
info[name]['dtype'] = tf.int64
info[name]['size'] = 1
elif transform_name == EMBEDDING_TRANSFROM:
info[name]['dtype'] = tf.int64
info[name]['size'] = 1
elif transform_name == MULTI_HOT_TRANSFORM:
info[name]['dtype'] = tf.int64
info[name]['size'] = None
elif transform_name == BOW_TRANSFORM or transform_name == TFIDF_TRANSFORM:
info[name + '_ids']['dtype'] = tf.int64
info[name + '_weights']['dtype'] = tf.float32
info[name + '_ids']['size'] = None
info[name + '_weights']['size'] = None
elif transform_name == KEY_TRANSFORM:
schema_type = next(col['type'].lower() for col in schema if col['name'] == source_column)
if schema_type == FLOAT_SCHEMA:
info[name]['dtype'] = tf.float32
elif schema_type == INTEGER_SCHEMA:
info[name]['dtype'] = tf.int64
else:
info[name]['dtype'] = tf.string
info[name]['size'] = 1
elif transform_name == TARGET_TRANSFORM:
# If the input is a string, it gets converted to an int (id)
schema_type = next(col['type'].lower() for col in schema if col['name'] == source_column)
if schema_type in NUMERIC_SCHEMA:
info[name]['dtype'] = tf.float32
else:
info[name]['dtype'] = tf.int64
info[name]['size'] = 1
elif transform_name == IMAGE_TRANSFORM:
info[name]['dtype'] = tf.float32
info[name]['size'] = IMAGE_BOTTLENECK_TENSOR_SIZE
else:
raise ValueError('Unknown transform %s' % transform_name)
return info |
def reset(self, dim):
""" Resets / Initializes the hash for the specified dimension. """
if self.dim != dim:
self.dim = dim
self.normals = self.rand.randn(self.projection_count, dim)
self.tree_root = RandomBinaryProjectionTreeNode() | Resets / Initializes the hash for the specified dimension. | Below is the the instruction that describes the task:
### Input:
Resets / Initializes the hash for the specified dimension.
### Response:
def reset(self, dim):
""" Resets / Initializes the hash for the specified dimension. """
if self.dim != dim:
self.dim = dim
self.normals = self.rand.randn(self.projection_count, dim)
self.tree_root = RandomBinaryProjectionTreeNode() |
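The `normals` matrix initialised above is what turns a vector into a binary hash in random-projection LSH: one bit per hyperplane, depending on which side of the plane the vector falls. A minimal sketch of that hashing step, independent of the tree node class, could be:

```python
import numpy as np

rng = np.random.RandomState(0)
projection_count, dim = 8, 32
normals = rng.randn(projection_count, dim)      # same shape as self.normals above

def hash_vector(v):
    # Sign of each projection gives one bit of the hash.
    return ''.join('1' if d > 0 else '0' for d in normals.dot(v))

print(hash_vector(rng.randn(dim)))              # e.g. '10110100'
```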
def show_subnetpool(self, subnetpool, **_params):
"""Fetches information of a certain subnetpool."""
return self.get(self.subnetpool_path % (subnetpool), params=_params) | Fetches information of a certain subnetpool. | Below is the the instruction that describes the task:
### Input:
Fetches information of a certain subnetpool.
### Response:
def show_subnetpool(self, subnetpool, **_params):
"""Fetches information of a certain subnetpool."""
return self.get(self.subnetpool_path % (subnetpool), params=_params) |
def wigner_d_small(J, beta):
u"""Return the small Wigner d matrix for angular momentum J.
We use the general formula from [Edmonds74]_, equation 4.1.15.
Some examples from [Edmonds74]_:
>>> from sympy import Integer, symbols, pi
>>> half = 1/Integer(2)
>>> beta = symbols("beta", real=True)
>>> wigner_d_small(half, beta)
Matrix([
[ cos(beta/2), sin(beta/2)],
[-sin(beta/2), cos(beta/2)]])
>>> from sympy import pprint
>>> pprint(wigner_d_small(2*half, beta), use_unicode=True)
⎡        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎤
⎢     cos ⎜─⎟         √2⋅sin⎜─⎟⋅cos⎜─⎟        sin ⎜─⎟     ⎥
⎢         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎥
⎢                                                         ⎥
⎢       ⎛β⎞    ⎛β⎞       2⎛β⎞      2⎛β⎞         ⎛β⎞    ⎛β⎞⎥
⎢-√2⋅sin⎜─⎟⋅cos⎜─⎟  - sin ⎜─⎟ + cos ⎜─⎟   √2⋅sin⎜─⎟⋅cos⎜─⎟⎥
⎢       ⎝2⎠    ⎝2⎠        ⎝2⎠       ⎝2⎠         ⎝2⎠    ⎝2⎠⎥
⎢                                                         ⎥
⎢        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎥
⎢     sin ⎜─⎟        -√2⋅sin⎜─⎟⋅cos⎜─⎟        cos ⎜─⎟     ⎥
⎣         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎦
From table 4 in [Edmonds74]_
>>> wigner_d_small(half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/2, sqrt(2)/2],
[-sqrt(2)/2, sqrt(2)/2]])
>>> wigner_d_small(2*half, beta).subs({beta:pi/2})
Matrix([
[ 1/2, sqrt(2)/2, 1/2],
[-sqrt(2)/2, 0, sqrt(2)/2],
[ 1/2, -sqrt(2)/2, 1/2]])
>>> wigner_d_small(3*half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/4, sqrt(6)/4, sqrt(6)/4, sqrt(2)/4],
[-sqrt(6)/4, -sqrt(2)/4, sqrt(2)/4, sqrt(6)/4],
[ sqrt(6)/4, -sqrt(2)/4, -sqrt(2)/4, sqrt(6)/4],
[-sqrt(2)/4, sqrt(6)/4, -sqrt(6)/4, sqrt(2)/4]])
>>> wigner_d_small(4*half, beta).subs({beta:pi/2})
Matrix([
[ 1/4, 1/2, sqrt(6)/4, 1/2, 1/4],
[ -1/2, -1/2, 0, 1/2, 1/2],
[sqrt(6)/4, 0, -1/2, 0, sqrt(6)/4],
[ -1/2, 1/2, 0, -1/2, 1/2],
[ 1/4, -1/2, sqrt(6)/4, -1/2, 1/4]])
"""
def prod(x):
p = 1
for i, xi in enumerate(x): p = p*xi
return p
M = [J-i for i in range(2*J+1)]
d = []
for Mi in M:
row = []
for Mj in M:
# We get the maximum and minimum value of sigma.
sigmamax = max([-Mi-Mj, J-Mj])
sigmamin = min([0, J-Mi])
dij = sqrt(factorial(J+Mi)*factorial(J-Mi) /
factorial(J+Mj)/factorial(J-Mj))
terms = [[(-1)**(J-Mi-s),
binomial(J+Mj, J-Mi-s),
binomial(J-Mj, s),
cos(beta/2)**(2*s+Mi+Mj),
sin(beta/2)**(2*J-2*s-Mj-Mi)]
for s in range(sigmamin, sigmamax+1)]
terms = [prod(term) if 0 not in term else 0 for term in terms]
dij = dij*sum(terms)
row += [dij]
d += [row]
return Matrix(d) | u"""Return the small Wigner d matrix for angular momentum J.
We use the general formula from [Edmonds74]_, equation 4.1.15.
Some examples from [Edmonds74]_:
>>> from sympy import Integer, symbols, pi
>>> half = 1/Integer(2)
>>> beta = symbols("beta", real=True)
>>> wigner_d_small(half, beta)
Matrix([
[ cos(beta/2), sin(beta/2)],
[-sin(beta/2), cos(beta/2)]])
>>> from sympy import pprint
>>> pprint(wigner_d_small(2*half, beta), use_unicode=True)
⎡        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎤
⎢     cos ⎜─⎟         √2⋅sin⎜─⎟⋅cos⎜─⎟        sin ⎜─⎟     ⎥
⎢         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎥
⎢                                                         ⎥
⎢       ⎛β⎞    ⎛β⎞       2⎛β⎞      2⎛β⎞         ⎛β⎞    ⎛β⎞⎥
⎢-√2⋅sin⎜─⎟⋅cos⎜─⎟  - sin ⎜─⎟ + cos ⎜─⎟   √2⋅sin⎜─⎟⋅cos⎜─⎟⎥
⎢       ⎝2⎠    ⎝2⎠        ⎝2⎠       ⎝2⎠         ⎝2⎠    ⎝2⎠⎥
⎢                                                         ⎥
⎢        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎥
⎢     sin ⎜─⎟        -√2⋅sin⎜─⎟⋅cos⎜─⎟        cos ⎜─⎟     ⎥
⎣         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎦
From table 4 in [Edmonds74]_
>>> wigner_d_small(half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/2, sqrt(2)/2],
[-sqrt(2)/2, sqrt(2)/2]])
>>> wigner_d_small(2*half, beta).subs({beta:pi/2})
Matrix([
[ 1/2, sqrt(2)/2, 1/2],
[-sqrt(2)/2, 0, sqrt(2)/2],
[ 1/2, -sqrt(2)/2, 1/2]])
>>> wigner_d_small(3*half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/4, sqrt(6)/4, sqrt(6)/4, sqrt(2)/4],
[-sqrt(6)/4, -sqrt(2)/4, sqrt(2)/4, sqrt(6)/4],
[ sqrt(6)/4, -sqrt(2)/4, -sqrt(2)/4, sqrt(6)/4],
[-sqrt(2)/4, sqrt(6)/4, -sqrt(6)/4, sqrt(2)/4]])
>>> wigner_d_small(4*half, beta).subs({beta:pi/2})
Matrix([
[ 1/4, 1/2, sqrt(6)/4, 1/2, 1/4],
[ -1/2, -1/2, 0, 1/2, 1/2],
[sqrt(6)/4, 0, -1/2, 0, sqrt(6)/4],
[ -1/2, 1/2, 0, -1/2, 1/2],
[ 1/4, -1/2, sqrt(6)/4, -1/2, 1/4]]) | Below is the the instruction that describes the task:
### Input:
u"""Return the small Wigner d matrix for angular momentum J.
We use the general formula from [Edmonds74]_, equation 4.1.15.
Some examples from [Edmonds74]_:
>>> from sympy import Integer, symbols, pi
>>> half = 1/Integer(2)
>>> beta = symbols("beta", real=True)
>>> wigner_d_small(half, beta)
Matrix([
[ cos(beta/2), sin(beta/2)],
[-sin(beta/2), cos(beta/2)]])
>>> from sympy import pprint
>>> pprint(wigner_d_small(2*half, beta), use_unicode=True)
⎡        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎤
⎢     cos ⎜─⎟         √2⋅sin⎜─⎟⋅cos⎜─⎟        sin ⎜─⎟     ⎥
⎢         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎥
⎢                                                         ⎥
⎢       ⎛β⎞    ⎛β⎞       2⎛β⎞      2⎛β⎞         ⎛β⎞    ⎛β⎞⎥
⎢-√2⋅sin⎜─⎟⋅cos⎜─⎟  - sin ⎜─⎟ + cos ⎜─⎟   √2⋅sin⎜─⎟⋅cos⎜─⎟⎥
⎢       ⎝2⎠    ⎝2⎠        ⎝2⎠       ⎝2⎠         ⎝2⎠    ⎝2⎠⎥
⎢                                                         ⎥
⎢        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎥
⎢     sin ⎜─⎟        -√2⋅sin⎜─⎟⋅cos⎜─⎟        cos ⎜─⎟     ⎥
⎣         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎦
From table 4 in [Edmonds74]_
>>> wigner_d_small(half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/2, sqrt(2)/2],
[-sqrt(2)/2, sqrt(2)/2]])
>>> wigner_d_small(2*half, beta).subs({beta:pi/2})
Matrix([
[ 1/2, sqrt(2)/2, 1/2],
[-sqrt(2)/2, 0, sqrt(2)/2],
[ 1/2, -sqrt(2)/2, 1/2]])
>>> wigner_d_small(3*half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/4, sqrt(6)/4, sqrt(6)/4, sqrt(2)/4],
[-sqrt(6)/4, -sqrt(2)/4, sqrt(2)/4, sqrt(6)/4],
[ sqrt(6)/4, -sqrt(2)/4, -sqrt(2)/4, sqrt(6)/4],
[-sqrt(2)/4, sqrt(6)/4, -sqrt(6)/4, sqrt(2)/4]])
>>> wigner_d_small(4*half, beta).subs({beta:pi/2})
Matrix([
[ 1/4, 1/2, sqrt(6)/4, 1/2, 1/4],
[ -1/2, -1/2, 0, 1/2, 1/2],
[sqrt(6)/4, 0, -1/2, 0, sqrt(6)/4],
[ -1/2, 1/2, 0, -1/2, 1/2],
[ 1/4, -1/2, sqrt(6)/4, -1/2, 1/4]])
### Response:
def wigner_d_small(J, beta):
u"""Return the small Wigner d matrix for angular momentum J.
We use the general formula from [Edmonds74]_, equation 4.1.15.
Some examples from [Edmonds74]_:
>>> from sympy import Integer, symbols, pi
>>> half = 1/Integer(2)
>>> beta = symbols("beta", real=True)
>>> wigner_d_small(half, beta)
Matrix([
[ cos(beta/2), sin(beta/2)],
[-sin(beta/2), cos(beta/2)]])
>>> from sympy import pprint
>>> pprint(wigner_d_small(2*half, beta), use_unicode=True)
⎡        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎤
⎢     cos ⎜─⎟         √2⋅sin⎜─⎟⋅cos⎜─⎟        sin ⎜─⎟     ⎥
⎢         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎥
⎢                                                         ⎥
⎢       ⎛β⎞    ⎛β⎞       2⎛β⎞      2⎛β⎞         ⎛β⎞    ⎛β⎞⎥
⎢-√2⋅sin⎜─⎟⋅cos⎜─⎟  - sin ⎜─⎟ + cos ⎜─⎟   √2⋅sin⎜─⎟⋅cos⎜─⎟⎥
⎢       ⎝2⎠    ⎝2⎠        ⎝2⎠       ⎝2⎠         ⎝2⎠    ⎝2⎠⎥
⎢                                                         ⎥
⎢        2⎛β⎞               ⎛β⎞    ⎛β⎞           2⎛β⎞     ⎥
⎢     sin ⎜─⎟        -√2⋅sin⎜─⎟⋅cos⎜─⎟        cos ⎜─⎟     ⎥
⎣         ⎝2⎠               ⎝2⎠    ⎝2⎠            ⎝2⎠     ⎦
From table 4 in [Edmonds74]_
>>> wigner_d_small(half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/2, sqrt(2)/2],
[-sqrt(2)/2, sqrt(2)/2]])
>>> wigner_d_small(2*half, beta).subs({beta:pi/2})
Matrix([
[ 1/2, sqrt(2)/2, 1/2],
[-sqrt(2)/2, 0, sqrt(2)/2],
[ 1/2, -sqrt(2)/2, 1/2]])
>>> wigner_d_small(3*half, beta).subs({beta:pi/2})
Matrix([
[ sqrt(2)/4, sqrt(6)/4, sqrt(6)/4, sqrt(2)/4],
[-sqrt(6)/4, -sqrt(2)/4, sqrt(2)/4, sqrt(6)/4],
[ sqrt(6)/4, -sqrt(2)/4, -sqrt(2)/4, sqrt(6)/4],
[-sqrt(2)/4, sqrt(6)/4, -sqrt(6)/4, sqrt(2)/4]])
>>> wigner_d_small(4*half, beta).subs({beta:pi/2})
Matrix([
[ 1/4, 1/2, sqrt(6)/4, 1/2, 1/4],
[ -1/2, -1/2, 0, 1/2, 1/2],
[sqrt(6)/4, 0, -1/2, 0, sqrt(6)/4],
[ -1/2, 1/2, 0, -1/2, 1/2],
[ 1/4, -1/2, sqrt(6)/4, -1/2, 1/4]])
"""
def prod(x):
p = 1
for i, xi in enumerate(x): p = p*xi
return p
M = [J-i for i in range(2*J+1)]
d = []
for Mi in M:
row = []
for Mj in M:
# We get the maximum and minimum value of sigma.
sigmamax = max([-Mi-Mj, J-Mj])
sigmamin = min([0, J-Mi])
dij = sqrt(factorial(J+Mi)*factorial(J-Mi) /
factorial(J+Mj)/factorial(J-Mj))
terms = [[(-1)**(J-Mi-s),
binomial(J+Mj, J-Mi-s),
binomial(J-Mj, s),
cos(beta/2)**(2*s+Mi+Mj),
sin(beta/2)**(2*J-2*s-Mj-Mi)]
for s in range(sigmamin, sigmamax+1)]
terms = [prod(term) if 0 not in term else 0 for term in terms]
dij = dij*sum(terms)
row += [dij]
d += [row]
return Matrix(d) |
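A quick sanity check on the formula (a sketch, assuming `wigner_d_small` and its sympy dependencies such as `Matrix`, `cos`, `sin`, `sqrt`, `factorial` and `binomial` are already in scope): since d(β) represents a rotation, d(β)·d(β)ᵀ should simplify to the identity for any real β.

```python
from sympy import Integer, symbols, simplify

half = 1 / Integer(2)
beta = symbols("beta", real=True)

d = wigner_d_small(2 * half, beta)    # the J = 1 matrix shown in the docstring
print(simplify(d * d.T))              # expected: the 3x3 identity matrix
```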
def cmd_iter(
self,
tgt,
fun,
arg=(),
timeout=None,
tgt_type='glob',
ret='',
kwarg=None,
**kwargs):
'''
Execute a single command via the salt-ssh subsystem and return a
generator
.. versionadded:: 2015.5.0
'''
ssh = self._prep_ssh(
tgt,
fun,
arg,
timeout,
tgt_type,
kwarg,
**kwargs)
for ret in ssh.run_iter(jid=kwargs.get('jid', None)):
yield ret | Execute a single command via the salt-ssh subsystem and return a
generator
.. versionadded:: 2015.5.0 | Below is the the instruction that describes the task:
### Input:
Execute a single command via the salt-ssh subsystem and return a
generator
.. versionadded:: 2015.5.0
### Response:
def cmd_iter(
self,
tgt,
fun,
arg=(),
timeout=None,
tgt_type='glob',
ret='',
kwarg=None,
**kwargs):
'''
Execute a single command via the salt-ssh subsystem and return a
generator
.. versionadded:: 2015.5.0
'''
ssh = self._prep_ssh(
tgt,
fun,
arg,
timeout,
tgt_type,
kwarg,
**kwargs)
for ret in ssh.run_iter(jid=kwargs.get('jid', None)):
yield ret |
def push_header(self, filename):
"""
Push the header to a given filename
:param filename: the file path to push into.
"""
# open file and read it all
with open(filename, "r") as infile:
content = infile.read()
# push header
content = self.__header + content
# re-write file with the header
with open(filename, "w") as outfile:
outfile.write(content) | Push the header to a given filename
:param filename: the file path to push into. | Below is the the instruction that describes the task:
### Input:
Push the header to a given filename
:param filename: the file path to push into.
### Response:
def push_header(self, filename):
"""
Push the header to a given filename
:param filename: the file path to push into.
"""
# open file and read it all
with open(filename, "r") as infile:
content = infile.read()
# push header
content = self.__header + content
# re-write file with the header
with open(filename, "w") as outfile:
outfile.write(content) |
def add_object_file(self, obj_file):
"""
Add object file to the jit. object_file can be instance of
:class:ObjectFile or a string representing file system path
"""
if isinstance(obj_file, str):
obj_file = object_file.ObjectFileRef.from_path(obj_file)
ffi.lib.LLVMPY_MCJITAddObjectFile(self, obj_file) | Add object file to the jit. object_file can be instance of
:class:ObjectFile or a string representing file system path | Below is the the instruction that describes the task:
### Input:
Add object file to the jit. object_file can be instance of
:class:ObjectFile or a string representing file system path
### Response:
def add_object_file(self, obj_file):
"""
Add object file to the jit. object_file can be instance of
:class:ObjectFile or a string representing file system path
"""
if isinstance(obj_file, str):
obj_file = object_file.ObjectFileRef.from_path(obj_file)
ffi.lib.LLVMPY_MCJITAddObjectFile(self, obj_file) |
def validate_properties(self, model, context=None):
"""
Validate simple properties
Performs validation on simple properties to return a result object.
:param model: object or dict
:param context: object, dict or None
:return: shiftschema.result.Result
"""
result = Result()
for property_name in self.properties:
prop = self.properties[property_name]
value = self.get(model, property_name)
errors = prop.validate(
value=value,
model=model,
context=context
)
if errors:
result.add_errors(
errors=errors,
property_name=property_name
)
return result | Validate simple properties
Performs validation on simple properties to return a result object.
:param model: object or dict
:param context: object, dict or None
:return: shiftschema.result.Result | Below is the the instruction that describes the task:
### Input:
Validate simple properties
Performs validation on simple properties to return a result object.
:param model: object or dict
:param context: object, dict or None
:return: shiftschema.result.Result
### Response:
def validate_properties(self, model, context=None):
"""
Validate simple properties
Performs validation on simple properties to return a result object.
:param model: object or dict
:param context: object, dict or None
:return: shiftschema.result.Result
"""
result = Result()
for property_name in self.properties:
prop = self.properties[property_name]
value = self.get(model, property_name)
errors = prop.validate(
value=value,
model=model,
context=context
)
if errors:
result.add_errors(
errors=errors,
property_name=property_name
)
return result |
def report(self, req):
"""Adds a report request to the cache.
Returns ``None`` if it could not be aggregated, and callers need to
send the request to the server, otherwise it returns ``CACHED_OK``.
Args:
req (:class:`sc_messages.ReportRequest`): the request
to be aggregated
Result:
``None`` if the request was not cached, otherwise ``CACHED_OK``
"""
if self._cache is None:
return None # no cache, send request now
if not isinstance(req, sc_messages.ServicecontrolServicesReportRequest):
raise ValueError(u'Invalid request')
if req.serviceName != self.service_name:
_logger.error(u'bad report(): service_name %s does not match ours %s',
req.serviceName, self.service_name)
raise ValueError(u'Service name mismatch')
report_req = req.reportRequest
if report_req is None:
_logger.error(u'bad report(): no report_request in %s', req)
raise ValueError(u'Expected report_request not set')
if _has_high_important_operation(report_req) or self._cache is None:
return None
ops_by_signature = _key_by_signature(report_req.operations,
_sign_operation)
# Concurrency:
#
# This holds a lock on the cache while updating it. No i/o operations
# are performed, so any waiting threads see minimal delays
with self._cache as cache:
for key, op in ops_by_signature.items():
agg = cache.get(key)
if agg is None:
cache[key] = operation.Aggregator(op, self._kinds)
else:
agg.add(op)
return self.CACHED_OK | Adds a report request to the cache.
Returns ``None`` if it could not be aggregated, and callers need to
send the request to the server, otherwise it returns ``CACHED_OK``.
Args:
req (:class:`sc_messages.ReportRequest`): the request
to be aggregated
Result:
``None`` if the request was not cached, otherwise ``CACHED_OK`` | Below is the the instruction that describes the task:
### Input:
Adds a report request to the cache.
Returns ``None`` if it could not be aggregated, and callers need to
send the request to the server, otherwise it returns ``CACHED_OK``.
Args:
req (:class:`sc_messages.ReportRequest`): the request
to be aggregated
Result:
``None`` if the request was not cached, otherwise ``CACHED_OK``
### Response:
def report(self, req):
"""Adds a report request to the cache.
Returns ``None`` if it could not be aggregated, and callers need to
send the request to the server, otherwise it returns ``CACHED_OK``.
Args:
req (:class:`sc_messages.ReportRequest`): the request
to be aggregated
Result:
``None`` if the request was not cached, otherwise ``CACHED_OK``
"""
if self._cache is None:
return None # no cache, send request now
if not isinstance(req, sc_messages.ServicecontrolServicesReportRequest):
raise ValueError(u'Invalid request')
if req.serviceName != self.service_name:
_logger.error(u'bad report(): service_name %s does not match ours %s',
req.serviceName, self.service_name)
raise ValueError(u'Service name mismatch')
report_req = req.reportRequest
if report_req is None:
_logger.error(u'bad report(): no report_request in %s', req)
raise ValueError(u'Expected report_request not set')
if _has_high_important_operation(report_req) or self._cache is None:
return None
ops_by_signature = _key_by_signature(report_req.operations,
_sign_operation)
# Concurrency:
#
# This holds a lock on the cache while updating it. No i/o operations
# are performed, so any waiting threads see minimal delays
with self._cache as cache:
for key, op in ops_by_signature.items():
agg = cache.get(key)
if agg is None:
cache[key] = operation.Aggregator(op, self._kinds)
else:
agg.add(op)
return self.CACHED_OK |
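A rough sketch of how a caller might branch on the return value of `report`: `None` means the request has to go to the service right away, while `CACHED_OK` means it was folded into the local cache and will be flushed later. The `transport.services.Report` call is a placeholder assumption, not an API confirmed by this row.

def report_or_send(aggregator, transport, req):
    # Try to fold the request into the local aggregation cache first.
    if aggregator.report(req) is not None:
        return  # CACHED_OK: the operation will be flushed to the service later
    # Not cacheable (no cache configured, or a high-importance operation):
    # hand the request to the service immediately; `transport` is hypothetical.
    transport.services.Report(req)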
def save(yaml_dict, filepath):
'''
Save YAML settings to the specified file path.
'''
yamldict.dump(yaml_dict, open(filepath, 'w'), default_flow_style=False) | Save YAML settings to the specified file path. | Below is the the instruction that describes the task:
### Input:
Save YAML settings to the specified file path.
### Response:
def save(yaml_dict, filepath):
'''
Save YAML settings to the specified file path.
'''
yamldict.dump(yaml_dict, open(filepath, 'w'), default_flow_style=False) |
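The same idea as `save`, restated with plain PyYAML so the snippet runs on its own; treating the project's `yamldict.dump` as PyYAML-compatible is an assumption here.

import yaml  # PyYAML, used as a stand-in for the project's yamldict wrapper

def save_settings(settings, filepath):
    # Block-style output (default_flow_style=False); the file is closed promptly.
    with open(filepath, "w") as outfile:
        yaml.dump(settings, outfile, default_flow_style=False)

save_settings({"debug": True, "workers": 4}, "settings.yaml")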
def put(self, file_path, upload_path = ''):
"""PUT
Args:
file_path: Full path for a file you want to upload
upload_path: Ndrive path where you want to upload file
ex) /Picture/
Returns:
True: Upload success
False: Upload failed
"""
f = open(file_path, "rb")  # binary mode so non-text files are not corrupted on upload
c = f.read()
file_name = os.path.basename(file_path)
now = datetime.datetime.now().isoformat()
url = nurls['put'] + upload_path + file_name
headers = {'userid': self.user_id,
'useridx': self.useridx,
'MODIFYDATE': now,
'Content-Type': magic.from_file(file_path, mime=True),
'charset': 'UTF-8',
'Origin': 'http://ndrive2.naver.com',
}
r = self.session.put(url = url, data = c, headers = headers)
return self.resultManager(r.text) | PUT
Args:
file_path: Full path for a file you want to upload
upload_path: Ndrive path where you want to upload file
ex) /Picture/
Returns:
True: Upload success
False: Upload failed | Below is the the instruction that describes the task:
### Input:
PUT
Args:
file_path: Full path for a file you want to upload
upload_path: Ndrive path where you want to upload file
ex) /Picture/
Returns:
True: Upload success
False: Upload failed
### Response:
def put(self, file_path, upload_path = ''):
"""PUT
Args:
file_path: Full path for a file you want to upload
upload_path: Ndrive path where you want to upload file
ex) /Picture/
Returns:
True: Upload success
False: Upload failed
"""
f = open(file_path, "rb")  # binary mode so non-text files are not corrupted on upload
c = f.read()
file_name = os.path.basename(file_path)
now = datetime.datetime.now().isoformat()
url = nurls['put'] + upload_path + file_name
headers = {'userid': self.user_id,
'useridx': self.useridx,
'MODIFYDATE': now,
'Content-Type': magic.from_file(file_path, mime=True),
'charset': 'UTF-8',
'Origin': 'http://ndrive2.naver.com',
}
r = self.session.put(url = url, data = c, headers = headers)
return self.resultManager(r.text) |
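To make the request shape concrete, here is a dependency-light sketch of the same PUT pattern; the header names mirror the helper above, while the base URL, credentials, and the use of mimetypes instead of python-magic are assumptions.

import datetime
import mimetypes
import os

def put_file(session, base_url, user_id, useridx, file_path, upload_path=""):
    # session is assumed to be an already authenticated requests.Session
    file_name = os.path.basename(file_path)
    content_type = mimetypes.guess_type(file_path)[0] or "application/octet-stream"
    headers = {
        "userid": user_id,
        "useridx": useridx,
        "MODIFYDATE": datetime.datetime.now().isoformat(),
        "Content-Type": content_type,
        "charset": "UTF-8",
    }
    # Read as bytes so pictures and other binary files survive the upload intact.
    with open(file_path, "rb") as f:
        return session.put(base_url + upload_path + file_name,
                           data=f.read(), headers=headers)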
def once(dispatcher, event, handle, *args):
"""
Used to do a mapping like event -> handle
but handle is called just once upon event.
"""
def shell(dispatcher, *args):
try:
handle(dispatcher, *args)
except Exception as e:
raise e
finally:
dispatcher.del_map(event, shell)
dispatcher.add_map(event, shell, *args) | Used to do a mapping like event -> handle
but handle is called just once upon event. | Below is the the instruction that describes the task:
### Input:
Used to do a mapping like event -> handle
but handle is called just once upon event.
### Response:
def once(dispatcher, event, handle, *args):
"""
Used to do a mapping like event -> handle
but handle is called just once upon event.
"""
def shell(dispatcher, *args):
try:
handle(dispatcher, *args)
except Exception as e:
raise e
finally:
dispatcher.del_map(event, shell)
dispatcher.add_map(event, shell, *args) |
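A small self-contained demonstration of the one-shot mapping: the handler removes itself after its first invocation. The `Dispatcher` class below is a minimal stand-in (an assumption), since the real dispatcher is not shown in this row, and `once` is restated in slightly simplified form so the snippet runs on its own.

class Dispatcher:
    # Minimal event dispatcher with add_map/del_map/drive, for illustration only.
    def __init__(self):
        self.handlers = {}
    def add_map(self, event, handle, *args):
        self.handlers.setdefault(event, []).append((handle, args))
    def del_map(self, event, handle):
        self.handlers[event] = [(h, a) for h, a in self.handlers.get(event, []) if h is not handle]
    def drive(self, event, *args):
        for handle, extra in list(self.handlers.get(event, [])):
            handle(self, *(args + extra))

def once(dispatcher, event, handle, *args):
    def shell(dispatcher, *args):
        try:
            handle(dispatcher, *args)
        finally:
            dispatcher.del_map(event, shell)
    dispatcher.add_map(event, shell, *args)

d = Dispatcher()
once(d, "ready", lambda disp: print("fired"))
d.drive("ready")   # prints "fired"
d.drive("ready")   # prints nothing; the handler removed itself after the first call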