<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def softmax_classifier(input_, num_classes, labels=None, loss_weight=None, per_example_weights=None, weights=None, bias=tf.zeros_initializer(), parameter_modifier=parameters.identity, name=PROVIDED):
"""Creates a fully-connected linear layer followed by a softmax.

  This returns `(softmax, loss)` where `loss` is the cross entropy loss.

  Args:
    input_: A rank 2 Tensor or a Pretty Tensor holding the activation before
      the logits (penultimate layer).
    num_classes: The number of classes.
    labels: The target labels to learn as a float tensor. Use None to not
      include a training loss.
    loss_weight: A scalar multiplier for the loss.
    per_example_weights: A Tensor with a weight per example.
    weights: The initializer for the weights (see `fully_connected`).
    bias: The initializer for the bias (see `fully_connected`).
    parameter_modifier: A modifier for the parameters that compute the logits.
    name: The optional name.

  Returns:
    A named tuple holding:
      softmax: The result of this layer with softmax normalization.
      loss: The cross entropy loss.

  Raises:
    ValueError: If the datatype is wrong.
  """ |
full = input_.fully_connected(num_classes,
activation_fn=None,
name=name,
weights=weights,
bias=bias,
parameter_modifier=parameter_modifier)
return full.softmax(labels=labels,
loss_weight=loss_weight,
per_example_weights=per_example_weights,
name=name) |
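The `(softmax, loss)` pair this layer produces can be sketched in pure Python, without TensorFlow or PrettyTensor. The helper names below are hypothetical, not part of either library:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, labels):
    """Cross entropy between predicted probabilities and one-hot labels."""
    return -sum(l * math.log(p) for p, l in zip(probs, labels) if l > 0)

# The classifier applies a linear layer to get logits, then these two steps.
probs = softmax([2.0, 1.0, 0.1])
loss = cross_entropy(probs, [1.0, 0.0, 0.0])
```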
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def softmax(input_, labels=None, name=PROVIDED, loss_weight=None, per_example_weights=None):
"""Applies softmax and, if labels is not None, also adds a loss.

  Args:
    input_: A rank 2 Tensor or a Pretty Tensor holding the logits.
    labels: The target labels to learn as a float tensor. Use None to not
      include a training loss.
    name: The optional name.
    loss_weight: A scalar multiplier for the loss.
    per_example_weights: A Tensor with a weight per example.

  Returns:
    A tuple of a handle to the softmax and a handle to the loss tensor.

  Raises:
    ValueError: If the datatype is wrong.
  """ |
if labels is not None:
# Cache the current layer because we only want softmax to change the head.
full = input_.as_layer()
return SoftmaxResult(input_.softmax_activation(),
full.cross_entropy(
labels,
name=name,
loss_weight=loss_weight,
per_example_weights=per_example_weights))
else:
return SoftmaxResult(input_.softmax_activation(), None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def evaluate_precision_recall(input_, labels, threshold=0.5, per_example_weights=None, name=PROVIDED, phase=Phase.train):
"""Computes the precision and recall of the prediction vs the labels.

  Args:
    input_: A rank 2 Tensor or a Pretty Tensor holding the result of the model.
    labels: The target labels to learn as a float tensor.
    threshold: The threshold to use to decide if the prediction is true.
    per_example_weights: A Tensor with a weight per example.
    name: An optional name.
    phase: The phase of this model; non training phases compute a total across
      all examples.

  Returns:
    Precision and Recall.
  """ |
_ = name # Eliminate warning, name used for namescoping by PT.
selected, sum_retrieved, sum_relevant = _compute_precision_recall(
input_, labels, threshold, per_example_weights)
if phase != Phase.train:
dtype = tf.float32
# Create the variables in all cases so that the load logic is easier.
relevant_count = tf.get_variable(
'relevant_count', [],
dtype,
tf.zeros_initializer(),
collections=[bookkeeper.GraphKeys.TEST_VARIABLES],
trainable=False)
retrieved_count = tf.get_variable(
'retrieved_count', [],
dtype,
tf.zeros_initializer(),
collections=[bookkeeper.GraphKeys.TEST_VARIABLES],
trainable=False)
selected_count = tf.get_variable(
'selected_count', [],
dtype,
tf.zeros_initializer(),
collections=[bookkeeper.GraphKeys.TEST_VARIABLES],
trainable=False)
with input_.g.device(selected_count.device):
selected = tf.assign_add(selected_count, selected)
with input_.g.device(retrieved_count.device):
sum_retrieved = tf.assign_add(retrieved_count, sum_retrieved)
with input_.g.device(relevant_count.device):
sum_relevant = tf.assign_add(relevant_count, sum_relevant)
return (tf.where(tf.equal(sum_retrieved, 0),
tf.zeros_like(selected),
selected/sum_retrieved),
tf.where(tf.equal(sum_relevant, 0),
tf.zeros_like(selected),
selected/sum_relevant)) |
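The metric itself, including the `tf.where` guards that return 0 instead of dividing by zero, can be sketched in plain Python (a minimal stand-in, not the PrettyTensor implementation):

```python
def precision_recall(scores, labels, threshold=0.5):
    """Precision and recall of thresholded scores vs. binary labels,
    returning 0.0 for an empty denominator, as the tf.where guards do."""
    retrieved = [s > threshold for s in scores]
    relevant = [l > 0 for l in labels]
    selected = sum(1 for r, v in zip(retrieved, relevant) if r and v)
    n_retrieved = sum(retrieved)
    n_relevant = sum(relevant)
    precision = selected / n_retrieved if n_retrieved else 0.0
    recall = selected / n_relevant if n_relevant else 0.0
    return precision, recall
```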
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _eval_metric(input_, topk, correct_predictions, examples, phase):
"""Creates the standard tracking variables if in test and returns accuracy.""" |
my_parameters = {}
if phase in (Phase.test, Phase.infer):
dtype = tf.float32
# Create the variables using tf.Variable because we don't want to share.
count = tf.Variable(tf.constant(0, dtype=dtype),
name='count_%d' % topk,
collections=[bookkeeper.GraphKeys.TEST_VARIABLES],
trainable=False)
correct = tf.Variable(tf.constant(0, dtype=dtype),
name='correct_%d' % topk,
collections=[bookkeeper.GraphKeys.TEST_VARIABLES],
trainable=False)
my_parameters['count'] = count
my_parameters['correct'] = correct
with input_.g.device(count.device):
examples = tf.assign_add(count, examples)
with input_.g.device(correct.device):
correct_predictions = tf.assign_add(correct, correct_predictions)
return correct_predictions, examples, my_parameters |
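The effect of the test-phase `count`/`correct` variables is a running accuracy accumulated across batches. A minimal sketch without TensorFlow variables (class name is hypothetical):

```python
class StreamingAccuracy(object):
    """Accumulates correct/total counts across batches, analogous to the
    assign_add updates on the test-phase variables above."""

    def __init__(self):
        self.correct = 0.0
        self.count = 0.0

    def update(self, correct_in_batch, batch_size):
        """Add one batch's counts and return the accuracy so far."""
        self.correct += correct_in_batch
        self.count += batch_size
        return self.correct / self.count
```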
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _compute_precision_recall(input_, labels, threshold, per_example_weights):
"""Returns the numerator shared by both metrics, plus the denominator of precision and the denominator of recall.""" |
# To apply per_example_weights, we need to collapse each row to a scalar, but
# we really want the sum.
labels.get_shape().assert_is_compatible_with(input_.get_shape())
relevant = tf.to_float(tf.greater(labels, 0))
retrieved = tf.to_float(tf.greater(input_, threshold))
selected = relevant * retrieved
if per_example_weights is not None:
per_example_weights = _convert_and_assert_per_example_weights_compatible(
input_,
per_example_weights,
dtype=None)
per_example_weights = tf.to_float(tf.greater(per_example_weights, 0))
selected = functions.reduce_batch_sum(selected) * per_example_weights
relevant = functions.reduce_batch_sum(relevant) * per_example_weights
retrieved = functions.reduce_batch_sum(retrieved) * per_example_weights
sum_relevant = tf.reduce_sum(relevant)
sum_retrieved = tf.reduce_sum(retrieved)
selected = tf.reduce_sum(selected)
return selected, sum_retrieved, sum_relevant |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unroll_state_saver(input_layer, name, state_shapes, template, lengths=None):
"""Unrolls the given function with state taken from the state saver.

  Args:
    input_layer: The input sequence.
    name: The name of this layer.
    state_shapes: A list of shapes, one for each state variable.
    template: A template with unbound variables for input and states that
      returns a RecurrentResult.
    lengths: The length of each item in the batch. If provided, use this to
      truncate computation.

  Returns:
    A sequence from applying the given template to each item in the input
    sequence.
  """ |
state_saver = input_layer.bookkeeper.recurrent_state
state_names = [STATE_NAME % name + '_%d' % i
for i in xrange(len(state_shapes))]
if hasattr(state_saver, 'add_state'):
for state_name, state_shape in zip(state_names, state_shapes):
initial_state = tf.zeros(state_shape[1:], dtype=input_layer.dtype)
state_saver.add_state(state_name,
initial_state=initial_state,
batch_size=state_shape[0])
if lengths is not None:
max_length = tf.reduce_max(lengths)
else:
max_length = None
results = []
prev_states = []
for state_name, state_shape in zip(state_names, state_shapes):
my_shape = list(state_shape)
my_shape[0] = -1
prev_states.append(tf.reshape(state_saver.state(state_name), my_shape))
my_parameters = None
for i, layer in enumerate(input_layer.sequence):
with input_layer.g.name_scope('unroll_%00d' % i):
if i > 0 and max_length is not None:
# TODO(eiderman): Right now everything after length is undefined.
# If we can efficiently propagate the last result to the end, then
# models with only a final output would require a single softmax
# computation.
# pylint: disable=cell-var-from-loop
result = control_flow_ops.cond(
i < max_length,
lambda: unwrap_all(*template(layer, *prev_states).flatten()),
lambda: unwrap_all(out, *prev_states))
out = result[0]
prev_states = result[1:]
else:
out, prev_states = template(layer, *prev_states)
if my_parameters is None:
my_parameters = out.layer_parameters
results.append(prettytensor.unwrap(out))
updates = [state_saver.save_state(state_name, prettytensor.unwrap(prev_state))
for state_name, prev_state in zip(state_names, prev_states)]
# Set it up so that update is evaluated when the result of this method is
# evaluated by injecting a dependency on an arbitrary result.
with tf.control_dependencies(updates):
results[0] = tf.identity(results[0])
return input_layer.with_sequence(results, parameters=my_parameters) |
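Stripped of the state saver, conditional truncation, and graph bookkeeping, the core of the unroll is a loop that threads state through a template. A sketch with a hypothetical `unroll` helper and a running-sum "cell":

```python
def unroll(template, sequence, initial_state):
    """Apply an (input, state) -> (output, new_state) template across a
    sequence, carrying state between timesteps."""
    state = initial_state
    outputs = []
    for item in sequence:
        out, state = template(item, state)
        outputs.append(out)
    return outputs, state

# A toy cell whose output and new state are both the cumulative sum.
outs, final_state = unroll(lambda x, s: (x + s, x + s), [1, 2, 3], 0)
```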
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cleave_sequence(input_layer, unroll=None):
"""Cleaves a tensor into a sequence; this is the inverse of squash.

  Recurrent methods unroll across an array of Tensors with each one being a
  timestep. This cleaves the first dim so that the result is an array of
  Tensors. It is the inverse of squash_sequence.

  Args:
    input_layer: The input layer.
    unroll: The number of time steps.

  Returns:
    A PrettyTensor containing an array of tensors.

  Raises:
    ValueError: If unroll is not specified and it has no default or it is <= 0.
  """ |
if unroll is None:
raise ValueError('You must set unroll either here or in the defaults.')
  # Validate unroll before using it as a divisor below.
  if unroll <= 0:
    raise ValueError('Unroll must be > 0: %s' % unroll)
  shape = input_layer.shape
  if shape[0] is not None and shape[0] % unroll != 0:
    raise ValueError('Must divide the split dimension evenly: %d mod %d != 0' %
                     (shape[0], unroll))
  if unroll == 1:
    splits = [input_layer.tensor]
  else:
    splits = tf.split(
        value=input_layer.tensor, num_or_size_splits=unroll, axis=0)
result = input_layer.with_sequence(splits)
# This is an abuse of the defaults system, but it is safe because we are only
# modifying result.
defaults = result.defaults
if 'unroll' in defaults:
del defaults['unroll']
return result |
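The splitting itself is simple to illustrate on a plain list standing in for the `[time * batch, ...]` tensor (helper name hypothetical):

```python
def cleave(flat, unroll):
    """Split the leading dimension of a flattened time*batch list into
    `unroll` equal timestep chunks, erroring on uneven or invalid splits."""
    if unroll <= 0:
        raise ValueError('Unroll must be > 0: %s' % unroll)
    if len(flat) % unroll != 0:
        raise ValueError('Must divide the split dimension evenly: %d mod %d != 0'
                         % (len(flat), unroll))
    step = len(flat) // unroll
    return [flat[i * step:(i + 1) * step] for i in range(unroll)]
```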
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_sequence_pretty_tensor(sequence_input, shape=None, save_state=True):
"""Creates a PrettyTensor object for the given sequence. The first dimension is treated as a time-dimension * batch and a default is set for `unroll` and `state_saver`. TODO(eiderman):
Remove shape. Args: sequence_input: A SequenceInput or StateSavingSequenceInput shape: The shape of each item in the sequence (including batch). save_state: If true, use the sequence_input's state and save_state methods. Returns: 2 Layers: inputs, targets """ |
inputs = prettytensor.wrap_sequence(sequence_input.inputs, tensor_shape=shape)
targets = prettytensor.wrap_sequence(sequence_input.targets)
if save_state:
bookkeeper.set_recurrent_state_saver(sequence_input)
return inputs, targets |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flatten(self):
"""Create a flattened version by putting output first and then states.""" |
ls = [self.output]
ls.extend(self.state)
return ls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self, fetch_list, feed_dict=None, sess=None):
"""Runs the graph with the provided feeds and fetches.

  This function wraps Session.run(), but takes care of state saving and
  restoring by feeding in states and storing the new state values.

  Args:
    fetch_list: A list of requested output tensors.
    feed_dict: A dictionary of feeds - see Session.run(). Optional.
    sess: The TensorFlow session to run. Can be None.

  Returns:
    The requested tensors as numpy arrays.

  Raises:
    ValueError: If the default graph during object construction was different
      from the current default graph.
  """ |
if tf.get_default_graph() != self._graph:
raise ValueError('The current default graph is different from the graph'
' used at construction time of RecurrentRunner.')
if feed_dict is None:
all_feeds_dict = {}
else:
all_feeds_dict = dict(feed_dict)
all_feeds_dict.update(self._state_feeds)
all_fetches_list = list(fetch_list)
all_fetches_list += self._state_fetches
sess = sess or tf.get_default_session()
# Run the compute graph.
fetches = sess.run(all_fetches_list, all_feeds_dict)
# Update the feeds for the next time step.
states = fetches[len(fetch_list):]
for i, s in enumerate(states):
self._state_feeds[self._state_feed_names[i]] = s
return fetches[:len(fetch_list)] |
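One step of that feed/fetch bookkeeping can be isolated without TensorFlow: merge the saved state into the feeds, append the state fetches, then split the results back apart. `session_run` below is a hypothetical stand-in for `sess.run`:

```python
def run_once(session_run, fetch_list, feed_dict, state_feeds, state_names):
    """One step of RecurrentRunner.run's bookkeeping: feed states in,
    fetch new states out, and return (user outputs, new state feeds)."""
    all_feeds = dict(feed_dict)
    all_feeds.update(state_feeds)
    all_fetches = list(fetch_list) + list(state_names)
    results = session_run(all_fetches, all_feeds)
    outputs = results[:len(fetch_list)]
    new_state = dict(zip(state_names, results[len(fetch_list):]))
    return outputs, new_state
```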
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __get_vpc_info(self, ifarr):
'''
If VPC is enabled,
Return the VPC domain and interface name of the VPC peerlink.
'''
if (self.vpc_vbtbl == None):
self.vpc_vbtbl = self.snmpobj.get_bulk(OID_VPC_PEERLINK_IF)
if ((self.vpc_vbtbl == None) | (len(self.vpc_vbtbl) == 0)):
return (None, None)
domain = natlas_snmp.get_last_oid_token(self.vpc_vbtbl[0][0][0])
ifidx = str(self.vpc_vbtbl[0][0][1])
ifname = self.snmpobj.cache_lookup(ifarr, OID_ETH_IF_DESC + '.' + ifidx)
ifname = self.shorten_port_name(ifname)
return (domain, ifname) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_macs(self, ip, display_progress):
'''
Return array of MAC addresses from single node at IP
'''
if (ip == '0.0.0.0'):
return None
ret_macs = []
snmpobj = natlas_snmp(ip)
# find valid credentials for this node
if (snmpobj.get_cred(self.config.snmp_creds) == 0):
return None
system_name = util.shorten_host_name(snmpobj.get_val(OID_SYSNAME), self.config.host_domains)
# cache some common MIB trees
vlan_vbtbl = snmpobj.get_bulk(OID_VLANS)
ifname_vbtbl = snmpobj.get_bulk(OID_IFNAME)
for vlan_row in vlan_vbtbl:
for vlan_n, vlan_v in vlan_row:
# get VLAN ID from OID
vlan = natlas_snmp.get_last_oid_token(vlan_n)
if (vlan >= 1002):
continue
vmacs = self.get_macs_for_vlan(ip, vlan, display_progress, snmpobj, system_name, ifname_vbtbl)
if (vmacs != None):
ret_macs.extend(vmacs)
if (display_progress == 1):
print('')
return ret_macs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_macs_for_vlan(self, ip, vlan, display_progress=0, snmpobj=None, system_name=None, ifname_vbtbl=None):
'''
Return array of MAC addresses for a single VLAN from a single node at an IP
'''
ret_macs = []
if (snmpobj == None):
snmpobj = natlas_snmp(ip)
if (snmpobj.get_cred(self.config.snmp_creds) == 0):
return None
if (ifname_vbtbl == None):
ifname_vbtbl = snmpobj.get_bulk(OID_IFNAME)
if (system_name == None):
system_name = util.shorten_host_name(snmpobj.get_val(OID_SYSNAME), self.config.host_domains)
# change our SNMP credentials
old_cred = snmpobj.v2_community
snmpobj.v2_community = old_cred + '@' + str(vlan)
if (display_progress == 1):
sys.stdout.write(str(vlan)) # found VLAN
sys.stdout.flush()
# get CAM table for this VLAN
cam_vbtbl = snmpobj.get_bulk(OID_VLAN_CAM)
portnum_vbtbl = snmpobj.get_bulk(OID_BRIDGE_PORTNUMS)
ifindex_vbtbl = snmpobj.get_bulk(OID_IFINDEX)
cam_match = None
if (cam_vbtbl == None):
# error getting CAM for VLAN
return None
for cam_row in cam_vbtbl:
for cam_n, cam_v in cam_row:
cam_entry = natlas_mac.mac_format_ascii(cam_v, 0)
# find the interface index
p = cam_n.getOid()
portnum_oid = '%s.%i.%i.%i.%i.%i.%i' % (OID_BRIDGE_PORTNUMS, p[11], p[12], p[13], p[14], p[15], p[16])
bridge_portnum = snmpobj.cache_lookup(portnum_vbtbl, portnum_oid)
# get the interface index and description
try:
ifidx = snmpobj.cache_lookup(ifindex_vbtbl, OID_IFINDEX + '.' + bridge_portnum)
port = snmpobj.cache_lookup(ifname_vbtbl, OID_IFNAME + '.' + ifidx)
except TypeError:
port = 'None'
mac_addr = natlas_mac.mac_format_ascii(cam_v, 1)
if (display_progress == 1):
sys.stdout.write('.') # found CAM entry
sys.stdout.flush()
entry = natlas_mac.mac_object(system_name, ip, vlan, mac_addr, port)
ret_macs.append(entry)
# restore SNMP credentials
snmpobj.v2_community = old_cred
return ret_macs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mac_hex_to_ascii(mac_hex, inc_dots):
'''
Format a hex MAC string to ASCII
Args:
mac_hex: Value from SNMP
inc_dots: 1 to format as aabb.ccdd.eeff, 0 to format aabbccddeeff
Returns:
String representation of the mac_hex
'''
v = mac_hex[2:]
ret = ''
for i in range(0, len(v), 4):
ret += v[i:i+4]
if ((inc_dots) & ((i+4) < len(v))):
ret += '.'
return ret |
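The formatter above is self-contained enough to restate and exercise directly; this copy just replaces the manual dot insertion with a `join` (same behavior, assuming a `0x`-prefixed hex string as input):

```python
def mac_hex_to_ascii(mac_hex, inc_dots):
    """Format '0xaabbccddeeff' as 'aabb.ccdd.eeff' (inc_dots=1)
    or 'aabbccddeeff' (inc_dots=0)."""
    v = mac_hex[2:]  # strip the '0x' prefix
    parts = [v[i:i + 4] for i in range(0, len(v), 4)]
    return ('.' if inc_dots else '').join(parts)
```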
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_switch_macs(self, switch_ip=None, node=None, vlan=None, mac=None, port=None, verbose=0):
'''
Get the CAM table from a switch.
Args:
switch_ip IP address of the device
node natlas_node from new_node()
vlan Filter results by VLAN
mac Filter results by MAC address (regex)
port Filter results by port (regex)
verbose Display progress to stdout
switch_ip or node is required
Return:
Array of natlas_mac objects
'''
if (switch_ip == None):
if (node == None):
raise Exception('get_switch_macs() requires switch_ip or node parameter')
return None
switch_ip = node.get_ipaddr()
mac_obj = natlas_mac(self.config)
if (vlan == None):
# get all MACs
macs = mac_obj.get_macs(switch_ip, verbose)
else:
# get MACs only for one VLAN
macs = mac_obj.get_macs_for_vlan(switch_ip, vlan, verbose)
if ((mac == None) & (port == None)):
return macs if macs else []
# filter results
ret = []
for m in macs:
if (mac != None):
if (re.match(mac, m.mac) == None):
continue
if (port != None):
if (re.match(port, m.port) == None):
continue
ret.append(m)
return ret |
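The mac/port filtering at the end uses `re.match`, which anchors at the start of the string. A stand-alone sketch of that filtering step (the `Entry` class and `filter_entries` helper are hypothetical, not natlas APIs):

```python
import re

class Entry(object):
    """A hypothetical stand-in for a natlas_mac result object."""
    def __init__(self, mac, port):
        self.mac = mac
        self.port = port

def filter_entries(entries, **patterns):
    """Keep entries whose named attributes match the given regexes from
    the start of the string; None-valued patterns are skipped."""
    ret = []
    for e in entries:
        if all(re.match(pat, getattr(e, attr))
               for attr, pat in patterns.items() if pat is not None):
            ret.append(e)
    return ret
```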
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_arp_table(self, switch_ip, ip=None, mac=None, interf=None, arp_type=None):
'''
Get the ARP table from a switch.
Args:
switch_ip IP address of the device
ip Filter results by IP (regex)
mac Filter results by MAC (regex)
interf Filter results by INTERFACE (regex)
arp_type Filter results by ARP Type
Return:
Array of natlas_arp objects
'''
node = natlas_node(switch_ip)
if (node.try_snmp_creds(self.config.snmp_creds) == 0):
return []
arp = node.get_arp_table()
if (arp == None):
return []
if ((ip == None) & (mac == None) & (interf == None) & (arp_type == None)):
# no filtering
return arp
interf = str(interf) if interf else None
# filter the result table
ret = []
for a in arp:
if (ip != None):
if (re.match(ip, a.ip) == None):
continue
if (mac != None):
if (re.match(mac, a.mac) == None):
continue
if (interf != None):
if (re.match(interf, str(a.interf)) == None):
continue
if (arp_type != None):
if (re.match(arp_type, a.arp_type) == None):
continue
ret.append(a)
return ret |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __query_node(self, ip, host):
'''
Query this node.
Return node details and if we already knew about it or if this is a new node.
Don't save the node to the known list, just return info about it.
Args:
ip: IP Address of the node.
host: Hostname of this node (if known from CDP/LLDP)
Returns:
natlas_node: Node of this object
int: NODE_NEW = Newly discovered node
NODE_NEWIP = Already knew about this node but not by this IP
NODE_KNOWN = Already knew about this node
'''
host = util.shorten_host_name(host, self.config.host_domains)
node, node_updated = self.__get_known_node(ip, host)
if (node == None):
# new node
node = natlas_node()
node.name = host
node.ip = [ip]
state = NODE_NEW
else:
# existing node
if (node.snmpobj.success == 1):
# we already queried this node successfully - return it
return (node, NODE_KNOWN)
# existing node but we couldn't connect before
if (node_updated == 1):
state = NODE_NEWIP
else:
state = NODE_KNOWN
node.name = host
if (ip == 'UNKNOWN'):
return (node, state)
# vmware ESX reports the IP as 0.0.0.0
# LLDP can return an empty string for IPs.
if ((ip == '0.0.0.0') | (ip == '')):
return (node, state)
# find valid credentials for this node
if (node.try_snmp_creds(self.config.snmp_creds) == 0):
return (node, state)
node.name = node.get_system_name(self.config.host_domains)
if (node.name != host):
# the hostname changed (cdp/lldp vs snmp)!
# double check we don't already know about this node
if (state == NODE_NEW):
node2, node_updated2 = self.__get_known_node(ip, host)
if ((node2 != None) & (node_updated2 == 0)):
return (node, NODE_KNOWN)
if (node_updated2 == 1):
state = NODE_NEWIP
# Finally, if we still don't have a name, use the IP.
# e.g. Maybe CDP/LLDP was empty and we don't have good credentials
# for this device. A blank name can break Dot.
if ((node.name == None) | (node.name == '')):
node.name = node.get_ipaddr()
node.opts.get_serial = True # CDP/LLDP does not report, need for extended ACL
node.query_node()
return (node, state) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __get_known_node(self, ip, host):
'''
Look for known nodes by IP and HOST.
If found by HOST, add the IP if not already known.
Return:
node: Node, if found. Otherwise None.
updated: 1=updated, 0=not updated
'''
# already known by IP ?
for ex in self.nodes:
for exip in ex.ip:
if (exip == '0.0.0.0'):
continue
if (exip == ip):
return (ex, 0)
# already known by HOST ?
node = self.__get_known_node_by_host(host)
if (node != None):
# node already known
if (ip not in node.ip):
node.ip.append(ip)
return (node, 1)
return (node, 0)
return (None, 0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __get_known_node_by_host(self, hostname):
'''
Determine if the node is already known by hostname.
If it is, return it.
'''
for n in self.nodes:
if (n.name == hostname):
return n
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_file_and_add_volume(self,
                               runtime,          # type: List[Text]
                               volume,           # type: MapperEnt
                               host_outdir_tgt,  # type: Optional[Text]
                               secret_store,     # type: Optional[SecretStore]
                               tmpdir_prefix     # type: Text
                               ):  # type: (...) -> Text
    """Create the file and add a mapping.""" |
if not host_outdir_tgt:
tmp_dir, tmp_prefix = os.path.split(tmpdir_prefix)
new_file = os.path.join(
tempfile.mkdtemp(prefix=tmp_prefix, dir=tmp_dir),
os.path.basename(volume.resolved))
writable = True if volume.type == "CreateWritableFile" else False
if secret_store:
contents = secret_store.retrieve(volume.resolved)
else:
contents = volume.resolved
dirname = os.path.dirname(host_outdir_tgt or new_file)
if not os.path.exists(dirname):
os.makedirs(dirname)
with open(host_outdir_tgt or new_file, "wb") as file_literal:
file_literal.write(contents.encode("utf-8"))
if not host_outdir_tgt:
self.append_volume(runtime, new_file, volume.target,
writable=writable)
if writable:
ensure_writable(host_outdir_tgt or new_file)
else:
ensure_non_writable(host_outdir_tgt or new_file)
return host_outdir_tgt or new_file |
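The `host_outdir_tgt is None` branch above stages the literal contents into a fresh temp directory. A self-contained sketch of just that staging step (function name is hypothetical):

```python
import os
import tempfile

def create_file_literal(contents, basename, tmpdir_prefix):
    """Write `contents` to a new file inside a fresh temp directory derived
    from tmpdir_prefix, and return the new path."""
    tmp_dir, tmp_prefix = os.path.split(tmpdir_prefix)
    new_file = os.path.join(
        tempfile.mkdtemp(prefix=tmp_prefix, dir=tmp_dir or None),
        basename)
    with open(new_file, 'wb') as fh:
        fh.write(contents.encode('utf-8'))
    return new_file
```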
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_volumes(self,
                pathmapper,          # type: PathMapper
                runtime,             # type: List[Text]
                tmpdir_prefix,       # type: Text
                secret_store=None,   # type: Optional[SecretStore]
                any_path_okay=False  # type: bool
                ):  # type: (...) -> None
    """Append volume mappings to the runtime option list.""" |
container_outdir = self.builder.outdir
for key, vol in (itm for itm in pathmapper.items() if itm[1].staged):
host_outdir_tgt = None # type: Optional[Text]
if vol.target.startswith(container_outdir + "/"):
host_outdir_tgt = os.path.join(
self.outdir, vol.target[len(container_outdir) + 1:])
if not host_outdir_tgt and not any_path_okay:
raise WorkflowException(
    "No mandatory DockerRequirement, yet path is outside "
    "the designated output directory, also known as "
    "$(runtime.outdir): {}".format(vol))
if vol.type in ("File", "Directory"):
self.add_file_or_directory_volume(
runtime, vol, host_outdir_tgt)
elif vol.type == "WritableFile":
self.add_writable_file_volume(
runtime, vol, host_outdir_tgt, tmpdir_prefix)
elif vol.type == "WritableDirectory":
self.add_writable_directory_volume(
runtime, vol, host_outdir_tgt, tmpdir_prefix)
elif vol.type in ["CreateFile", "CreateWritableFile"]:
new_path = self.create_file_and_add_volume(
runtime, vol, host_outdir_tgt, secret_store, tmpdir_prefix)
pathmapper.update(
key, new_path, vol.target, vol.type, vol.staged) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def docker_monitor(self, cidfile, tmpdir_prefix, cleanup_cidfile, process):
    # type: (Text, Text, bool, subprocess.Popen) -> None
    """Record memory usage of the running Docker container.""" |
# Todo: consider switching to `docker create` / `docker start`
# instead of `docker run` as `docker create` outputs the container ID
# to stdout, but the container is frozen, thus allowing us to start the
# monitoring process without dealing with the cidfile or too-fast
# container execution
cid = None
while cid is None:
time.sleep(1)
if process.returncode is not None:
if cleanup_cidfile:
os.remove(cidfile)
return
try:
with open(cidfile) as cidhandle:
cid = cidhandle.readline().strip()
except (OSError, IOError):
cid = None
max_mem = self.docker_get_memory(cid)
tmp_dir, tmp_prefix = os.path.split(tmpdir_prefix)
stats_file = tempfile.NamedTemporaryFile(prefix=tmp_prefix, dir=tmp_dir)
with open(stats_file.name, mode="w") as stats_file_handle:
stats_proc = subprocess.Popen(
['docker', 'stats', '--no-trunc', '--format', '{{.MemPerc}}',
cid], stdout=stats_file_handle, stderr=subprocess.DEVNULL)
process.wait()
stats_proc.kill()
max_mem_percent = 0
with open(stats_file.name, mode="r") as stats:
for line in stats:
try:
mem_percent = float(re.sub(
CONTROL_CODE_RE, '', line).replace('%', ''))
if mem_percent > max_mem_percent:
max_mem_percent = mem_percent
except ValueError:
break
_logger.info(u"[job %s] Max memory used: %iMiB", self.name,
int((max_mem_percent * max_mem) / (2 ** 20)))
if cleanup_cidfile:
os.remove(cidfile) |
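The stats parsing at the end reads `docker stats --format '{{.MemPerc}}'` output and keeps the peak, bailing out at the first unparsable line just as the `ValueError` branch does. That piece in isolation (helper name hypothetical):

```python
def max_mem_percent(lines):
    """Return the peak memory percentage from MemPerc-style lines
    ('12.3%'), stopping at the first line that fails to parse."""
    peak = 0.0
    for line in lines:
        try:
            value = float(line.strip().replace('%', ''))
        except ValueError:
            break
        if value > peak:
            peak = value
    return peak
```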
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def adjustFiles(rec, op):  # def line reconstructed from the recursive calls below
    """Apply a mapping function to each File path in the object `rec`.""" |
if isinstance(rec, MutableMapping):
if rec.get("class") == "File":
rec["path"] = op(rec["path"])
for d in rec:
adjustFiles(rec[d], op)
if isinstance(rec, MutableSequence):
for d in rec:
adjustFiles(d, op) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_types(srctype, sinktype, linkMerge, valueFrom):
    # type: (Any, Any, Optional[Text], Optional[Text]) -> Text
    """Check if the source and sink types are "pass", "warning", or "exception".""" |
if valueFrom is not None:
return "pass"
if linkMerge is None:
if can_assign_src_to_sink(srctype, sinktype, strict=True):
return "pass"
if can_assign_src_to_sink(srctype, sinktype, strict=False):
return "warning"
return "exception"
if linkMerge == "merge_nested":
return check_types({"items": _get_type(srctype), "type": "array"},
_get_type(sinktype), None, None)
if linkMerge == "merge_flattened":
return check_types(merge_flatten_type(_get_type(srctype)), _get_type(sinktype), None, None)
raise WorkflowException(u"Unrecognized linkMerge enum '{}'".format(linkMerge)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def merge_flatten_type(src):
    # type: (Any) -> Any
    """Return the merge flattened type of the source type.""" |
if isinstance(src, MutableSequence):
return [merge_flatten_type(t) for t in src]
if isinstance(src, MutableMapping) and src.get("type") == "array":
return src
return {"items": src, "type": "array"} |
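The wrapping rule is easy to restate with plain `list`/`dict` in place of the `MutableSequence`/`MutableMapping` ABCs (same logic, simplified types):

```python
def merge_flatten_type(src):
    """Wrap a type in an array schema: unions recurse element-wise,
    existing arrays pass through unchanged."""
    if isinstance(src, list):
        return [merge_flatten_type(t) for t in src]
    if isinstance(src, dict) and src.get("type") == "array":
        return src
    return {"items": src, "type": "array"}
```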
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def can_assign_src_to_sink(src, sink, strict=False):
    # type: (Any, Any, bool) -> bool
    """Check for identical type specifications, ignoring extra keys like inputBinding.

    src: admissible source types
    sink: admissible sink types

    In non-strict comparison, at least one source type must match one sink
    type. In strict comparison, all source types must match at least one
    sink type.
    """ |
if src == "Any" or sink == "Any":
return True
if isinstance(src, MutableMapping) and isinstance(sink, MutableMapping):
if sink.get("not_connected") and strict:
return False
if src["type"] == "array" and sink["type"] == "array":
return can_assign_src_to_sink(src["items"], sink["items"], strict)
if src["type"] == "record" and sink["type"] == "record":
return _compare_records(src, sink, strict)
if src["type"] == "File" and sink["type"] == "File":
for sinksf in sink.get("secondaryFiles", []):
if not [1 for srcsf in src.get("secondaryFiles", []) if sinksf == srcsf]:
if strict:
return False
return True
return can_assign_src_to_sink(src["type"], sink["type"], strict)
if isinstance(src, MutableSequence):
if strict:
for this_src in src:
if not can_assign_src_to_sink(this_src, sink):
return False
return True
for this_src in src:
if can_assign_src_to_sink(this_src, sink):
return True
return False
if isinstance(sink, MutableSequence):
for this_sink in sink:
if can_assign_src_to_sink(src, this_sink):
return True
return False
return src == sink |
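The union-handling core of that check, reduced to scalar types and lists (a simplified sketch under the stated strict/non-strict semantics, not the full record/File logic):

```python
def can_assign(src, sink, strict=False):
    """Scalar/union sketch of can_assign_src_to_sink: non-strict needs
    one source alternative to match; strict needs all of them to."""
    if src == "Any" or sink == "Any":
        return True
    if isinstance(src, list):
        hits = [can_assign(s, sink, strict) for s in src]
        return all(hits) if strict else any(hits)
    if isinstance(sink, list):
        return any(can_assign(src, s, strict) for s in sink)
    return src == sink
```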
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _compare_records(src, sink, strict=False):
    # type: (MutableMapping[Text, Any], MutableMapping[Text, Any], bool) -> bool
    """Compare two records, ensuring they have compatible fields.

    This handles normalizing record names, which will be relative to the
    workflow step, so that they can be compared.
    """ |
def _rec_fields(rec): # type: (MutableMapping[Text, Any]) -> MutableMapping[Text, Any]
out = {}
for field in rec["fields"]:
name = shortname(field["name"])
out[name] = field["type"]
return out
srcfields = _rec_fields(src)
sinkfields = _rec_fields(sink)
for key in six.iterkeys(sinkfields):
if (not can_assign_src_to_sink(
srcfields.get(key, "null"), sinkfields.get(key, "null"), strict)
and sinkfields.get(key) is not None):
_logger.info("Record comparison failure for %s and %s\n"
"Did not match fields for %s: %s and %s",
src["name"], sink["name"], key, srcfields.get(key),
sinkfields.get(key))
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_all_types(src_dict, sinks, sourceField):
# type: (Dict[Text, Any], List[Dict[Text, Any]], Text) -> Dict[Text, List[SrcSink]] # sourceField is either "source" or "outputSource" """Given a list of sinks, check if their types match with the types of their sources. """ |
validation = {"warning": [], "exception": []} # type: Dict[Text, List[SrcSink]]
for sink in sinks:
if sourceField in sink:
valueFrom = sink.get("valueFrom")
if isinstance(sink[sourceField], MutableSequence):
srcs_of_sink = [src_dict[parm_id] for parm_id in sink[sourceField]]
linkMerge = sink.get("linkMerge", ("merge_nested"
if len(sink[sourceField]) > 1 else None))
else:
parm_id = sink[sourceField]
srcs_of_sink = [src_dict[parm_id]]
linkMerge = None
for src in srcs_of_sink:
check_result = check_types(src, sink, linkMerge, valueFrom)
if check_result == "warning":
validation["warning"].append(SrcSink(src, sink, linkMerge))
elif check_result == "exception":
validation["exception"].append(SrcSink(src, sink, linkMerge))
return validation |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def output_callback(self, out, process_status):
""" Collect the final status and outputs. """ |
self.final_status.append(process_status)
self.final_output.append(out) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _runner(self, job, runtime_context):
""" Job running thread. """ |
try:
job.run(runtime_context)
except WorkflowException as err:
_logger.exception("Got workflow error")
self.exceptions.append(err)
except Exception as err: # pylint: disable=broad-except
_logger.exception("Got workflow error")
self.exceptions.append(WorkflowException(Text(err)))
finally:
with runtime_context.workflow_eval_lock:
self.threads.remove(threading.current_thread())
if isinstance(job, JobBase):
self.allocated_ram -= job.builder.resources["ram"]
self.allocated_cores -= job.builder.resources["cores"]
runtime_context.workflow_eval_lock.notifyAll() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_job(self, job, # type: Union[JobBase, WorkflowJob, None] runtime_context # type: RuntimeContext """ Execute a single Job in a separate thread. """ |
if job is not None:
with self.pending_jobs_lock:
self.pending_jobs.append(job)
with self.pending_jobs_lock:
n = 0
while (n+1) <= len(self.pending_jobs):
job = self.pending_jobs[n]
if isinstance(job, JobBase):
if ((job.builder.resources["ram"])
> self.max_ram
or (job.builder.resources["cores"])
> self.max_cores):
                    _logger.error(
                        'Job "%s" cannot be run, requests more resources (%s) '
                        'than available on this host (max ram %d, max cores %d)',
                        job.name, job.builder.resources,
                        self.max_ram,
                        self.max_cores)
self.pending_jobs.remove(job)
return
if ((self.allocated_ram + job.builder.resources["ram"])
> self.max_ram
or (self.allocated_cores + job.builder.resources["cores"])
> self.max_cores):
_logger.debug(
'Job "%s" cannot run yet, resources (%s) are not '
'available (already allocated ram is %d, allocated cores is %d, '
                        'max ram %d, max cores %d)',
job.name, job.builder.resources,
self.allocated_ram,
self.allocated_cores,
self.max_ram,
self.max_cores)
n += 1
continue
thread = threading.Thread(target=self._runner, args=(job, runtime_context))
thread.daemon = True
self.threads.add(thread)
if isinstance(job, JobBase):
self.allocated_ram += job.builder.resources["ram"]
self.allocated_cores += job.builder.resources["cores"]
thread.start()
self.pending_jobs.remove(job) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_next_completion(self, runtime_context):
# type: (RuntimeContext) -> None """ Wait for jobs to finish. """ |
if runtime_context.workflow_eval_lock is not None:
runtime_context.workflow_eval_lock.wait()
if self.exceptions:
raise self.exceptions[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append_volume(runtime, source, target, writable=False):
# type: (List[Text], Text, Text, bool) -> None """Add binding arguments to the runtime list.""" |
runtime.append(u"--volume={}:{}:{}".format(
docker_windows_path_adjust(source), target,
"rw" if writable else "ro")) |
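The flag format produced above can be shown with a minimal sketch; `volume_flag` is a hypothetical name and the `docker_windows_path_adjust` step is omitted:

```python
# Sketch of the docker --volume argument built by append_volume,
# without the Windows path adjustment.
def volume_flag(source, target, writable=False):
    return u"--volume={}:{}:{}".format(source, target,
                                       "rw" if writable else "ro")

print(volume_flag("/tmp/job/input.txt", "/var/lib/cwl/input.txt"))
# --volume=/tmp/job/input.txt:/var/lib/cwl/input.txt:ro
```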
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_writable_file_volume(self, runtime, # type: List[Text] volume, # type: MapperEnt host_outdir_tgt, # type: Optional[Text] tmpdir_prefix # type: Text """Append a writable file mapping to the runtime option list.""" |
if self.inplace_update:
self.append_volume(runtime, volume.resolved, volume.target,
writable=True)
else:
if host_outdir_tgt:
# shortcut, just copy to the output directory
# which is already going to be mounted
if not os.path.exists(os.path.dirname(host_outdir_tgt)):
os.makedirs(os.path.dirname(host_outdir_tgt))
shutil.copy(volume.resolved, host_outdir_tgt)
else:
tmp_dir, tmp_prefix = os.path.split(tmpdir_prefix)
tmpdir = tempfile.mkdtemp(prefix=tmp_prefix, dir=tmp_dir)
file_copy = os.path.join(
tmpdir, os.path.basename(volume.resolved))
shutil.copy(volume.resolved, file_copy)
self.append_volume(runtime, file_copy, volume.target,
writable=True)
ensure_writable(host_outdir_tgt or file_copy) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_writable_directory_volume(self, runtime, # type: List[Text] volume, # type: MapperEnt host_outdir_tgt, # type: Optional[Text] tmpdir_prefix # type: Text """Append a writable directory mapping to the runtime option list.""" |
if volume.resolved.startswith("_:"):
# Synthetic directory that needs creating first
if not host_outdir_tgt:
tmp_dir, tmp_prefix = os.path.split(tmpdir_prefix)
new_dir = os.path.join(
tempfile.mkdtemp(prefix=tmp_prefix, dir=tmp_dir),
os.path.basename(volume.target))
self.append_volume(runtime, new_dir, volume.target,
writable=True)
elif not os.path.exists(host_outdir_tgt):
os.makedirs(host_outdir_tgt)
else:
if self.inplace_update:
self.append_volume(runtime, volume.resolved, volume.target,
writable=True)
else:
if not host_outdir_tgt:
tmp_dir, tmp_prefix = os.path.split(tmpdir_prefix)
tmpdir = tempfile.mkdtemp(prefix=tmp_prefix, dir=tmp_dir)
new_dir = os.path.join(
tmpdir, os.path.basename(volume.resolved))
shutil.copytree(volume.resolved, new_dir)
self.append_volume(
runtime, new_dir, volume.target,
writable=True)
else:
shutil.copytree(volume.resolved, host_outdir_tgt)
ensure_writable(host_outdir_tgt or new_dir) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make(self, cwl):
"""Instantiate a CWL object from a CWL document.""" |
load = load_tool.load_tool(cwl, self.loading_context)
if isinstance(load, int):
raise Exception("Error loading tool")
return Callable(load, self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def realize_input_schema(input_types, # type: MutableSequence[Dict[Text, Any]] schema_defs # type: Dict[Text, Any] """Replace references to named typed with the actual types.""" |
for index, entry in enumerate(input_types):
if isinstance(entry, string_types):
if '#' in entry:
_, input_type_name = entry.split('#')
else:
input_type_name = entry
if input_type_name in schema_defs:
entry = input_types[index] = schema_defs[input_type_name]
if isinstance(entry, collections.Mapping):
if isinstance(entry['type'], string_types) and '#' in entry['type']:
_, input_type_name = entry['type'].split('#')
if input_type_name in schema_defs:
input_types[index]['type'] = realize_input_schema(
schema_defs[input_type_name], schema_defs)
if isinstance(entry['type'], collections.MutableSequence):
input_types[index]['type'] = realize_input_schema(
entry['type'], schema_defs)
if isinstance(entry['type'], collections.Mapping):
input_types[index]['type'] = realize_input_schema(
[input_types[index]['type']], schema_defs)
if entry['type'] == 'array':
items = entry['items'] if \
not isinstance(entry['items'], string_types) else [entry['items']]
input_types[index]['items'] = realize_input_schema(items, schema_defs)
if entry['type'] == 'record':
input_types[index]['fields'] = realize_input_schema(
entry['fields'], schema_defs)
return input_types |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_input_template(tool):
# type: (Process) -> Dict[Text, Any] """Generate an example input object for the given CWL process.""" |
template = yaml.comments.CommentedMap()
for inp in realize_input_schema(tool.tool["inputs"], tool.schemaDefs):
name = shortname(inp["id"])
value, comment = generate_example_input(
inp['type'], inp.get('default', None))
template.insert(0, name, value, comment)
return template |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_relative(base, obj):
"""Relativize the location URI of a File or Directory object.""" |
uri = obj.get("location", obj.get("path"))
if ":" in uri.split("/")[0] and not uri.startswith("file://"):
pass
else:
if uri.startswith("file://"):
uri = uri_file_path(uri)
obj["location"] = os.path.relpath(uri, base) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def printdeps(obj, # type: Mapping[Text, Any] document_loader, # type: Loader stdout, # type: Union[TextIO, StreamWriter] relative_deps, # type: bool uri, # type: Text basedir=None, # type: Text nestdirs=True # type: bool """Print a JSON representation of the dependencies of the CWL document.""" |
deps = find_deps(obj, document_loader, uri, basedir=basedir,
nestdirs=nestdirs)
if relative_deps == "primary":
base = basedir if basedir else os.path.dirname(uri_file_path(str(uri)))
elif relative_deps == "cwd":
base = os.getcwd()
visit_class(deps, ("File", "Directory"), functools.partial(
make_relative, base))
stdout.write(json_dumps(deps, indent=4)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_deps(obj, # type: Mapping[Text, Any] document_loader, # type: Loader uri, # type: Text basedir=None, # type: Text nestdirs=True # type: bool """Find the dependencies of the CWL document.""" |
deps = {"class": "File", "location": uri, "format": CWL_IANA} # type: Dict[Text, Any]
def loadref(base, uri):
return document_loader.fetch(document_loader.fetcher.urljoin(base, uri))
sfs = scandeps(
basedir if basedir else uri, obj, {"$import", "run"},
{"$include", "$schemas", "location"}, loadref, nestdirs=nestdirs)
if sfs is not None:
deps["secondaryFiles"] = sfs
return deps |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def print_pack(document_loader, # type: Loader processobj, # type: CommentedMap uri, # type: Text metadata # type: Dict[Text, Any] """Return a CWL serialization of the CWL document in JSON.""" |
packed = pack(document_loader, processobj, uri, metadata)
if len(packed["$graph"]) > 1:
return json_dumps(packed, indent=4)
return json_dumps(packed["$graph"][0], indent=4) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_default_container(builder, # type: HasReqsHints default_container=None, # type: Text use_biocontainers=None, # type: bool """Default finder for default containers.""" |
if not default_container and use_biocontainers:
default_container = get_container_from_software_requirements(
use_biocontainers, builder)
return default_container |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(*args, **kwargs):
"""Run cwltool.""" |
signal.signal(signal.SIGTERM, _signal_handler)
try:
sys.exit(main(*args, **kwargs))
finally:
_terminate_processes() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, value):
# type: (Text) -> Text """ Add the given value to the store. Returns a placeholder value to use until the real value is needed. """ |
if not isinstance(value, string_types):
raise Exception("Secret store only accepts strings")
if value not in self.secrets:
placeholder = "(secret-%s)" % Text(uuid.uuid4())
self.secrets[placeholder] = value
return placeholder
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def store(self, secrets, job):
# type: (List[Text], MutableMapping[Text, Any]) -> None """Sanitize the job object of any of the given secrets.""" |
for j in job:
if j in secrets:
job[j] = self.add(job[j]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_secret(self, value):
# type: (Any) -> bool """Test if the provided document has any of our secrets.""" |
if isinstance(value, string_types):
for k in self.secrets:
if k in value:
return True
elif isinstance(value, MutableMapping):
for this_value in value.values():
if self.has_secret(this_value):
return True
elif isinstance(value, MutableSequence):
for this_value in value:
if self.has_secret(this_value):
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def retrieve(self, value):
# type: (Any) -> Any """Replace placeholders with their corresponding secrets.""" |
if isinstance(value, string_types):
for key, this_value in self.secrets.items():
value = value.replace(key, this_value)
elif isinstance(value, MutableMapping):
return {k: self.retrieve(v) for k, v in value.items()}
elif isinstance(value, MutableSequence):
            return [self.retrieve(v) for v in value]
return value |
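The placeholder scheme used by the secret store can be exercised end to end with a minimal sketch; a module-level dict stands in for the instance attribute, and the function names are illustrative:

```python
import uuid

# Minimal round trip through the placeholder scheme: add() swaps a
# secret for a uuid-based placeholder, retrieve() substitutes it back.
store = {}

def add(value):
    placeholder = "(secret-%s)" % uuid.uuid4()
    store[placeholder] = value
    return placeholder

def retrieve(value):
    for key, real in store.items():
        value = value.replace(key, real)
    return value

token = add("hunter2")
masked = "password is %s" % token   # safe to log or serialize
print(retrieve(masked))             # password is hunter2
```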
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_mod_11_2(numeric_string):
# type: (Text) -> bool """ Validate numeric_string for its MOD-11-2 checksum. Any "-" in the numeric_string are ignored. The last digit of numeric_string is assumed to be the checksum, 0-9 or X. See ISO/IEC 7064:2003 and https://support.orcid.org/knowledgebase/articles/116780-structure-of-the-orcid-identifier """ |
# Strip -
nums = numeric_string.replace("-", "")
total = 0
# skip last (check)digit
for num in nums[:-1]:
digit = int(num)
total = (total+digit)*2
remainder = total % 11
result = (12-remainder) % 11
if result == 10:
checkdigit = "X"
else:
checkdigit = str(result)
# Compare against last digit or X
return nums[-1].upper() == checkdigit |
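The checksum walk above can be verified against the well-known example identifier from the ORCID documentation; `mod_11_2_ok` is a self-contained restatement of the same algorithm:

```python
# Standalone MOD-11-2 check (ISO/IEC 7064), mirroring the function above.
def mod_11_2_ok(numeric_string):
    nums = numeric_string.replace("-", "")
    total = 0
    for num in nums[:-1]:           # skip the trailing check digit
        total = (total + int(num)) * 2
    result = (12 - total % 11) % 11
    checkdigit = "X" if result == 10 else str(result)
    return nums[-1].upper() == checkdigit

print(mod_11_2_ok("0000-0002-1825-0097"))  # True
print(mod_11_2_ok("0000-0002-1825-0098"))  # False
```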
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _valid_orcid(orcid):
# type: (Optional[Text]) -> Text """ Ensure orcid is a valid ORCID identifier. The string must be equivalent to one of these forms: 0000-0002-1825-0097 orcid.org/0000-0002-1825-0097 http://orcid.org/0000-0002-1825-0097 https://orcid.org/0000-0002-1825-0097 If the ORCID number or prefix is invalid, a ValueError is raised. The returned ORCID string is always in the form of: https://orcid.org/0000-0002-1825-0097 """ |
if orcid is None or not orcid:
raise ValueError(u'ORCID cannot be unspecified')
# Liberal in what we consume, e.g. ORCID.org/0000-0002-1825-009x
orcid = orcid.lower()
match = re.match(
        # Note: concatenated r"" r"" below so we can add comments to pattern
# Optional hostname, with or without protocol
r"(http://orcid\.org/|https://orcid\.org/|orcid\.org/)?"
# alternative pattern, but probably messier
# r"^((https?://)?orcid.org/)?"
# ORCID number is always 4x4 numerical digits,
# but last digit (modulus 11 checksum)
# can also be X (but we made it lowercase above).
# e.g. 0000-0002-1825-0097
# or 0000-0002-1694-233x
r"(?P<orcid>(\d{4}-\d{4}-\d{4}-\d{3}[0-9x]))$",
orcid)
help_url = u"https://support.orcid.org/knowledgebase/articles/"\
"116780-structure-of-the-orcid-identifier"
if not match:
raise ValueError(u"Invalid ORCID: %s\n%s" % (orcid, help_url))
# Conservative in what we produce:
# a) Ensure any checksum digit is uppercase
orcid_num = match.group("orcid").upper()
# b) ..and correct
if not _check_mod_11_2(orcid_num):
raise ValueError(
u"Invalid ORCID checksum: %s\n%s" % (orcid_num, help_url))
# c) Re-add the official prefix https://orcid.org/
return u"https://orcid.org/%s" % orcid_num |
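The accepted spellings and the canonical output form can be demonstrated with a reduced sketch; `normalize_orcid` is hypothetical and skips the MOD-11-2 checksum validation performed above:

```python
import re

# Reduced normalizer: accept bare, host-prefixed and protocol-prefixed
# ORCID spellings and return the canonical https://orcid.org/ form.
def normalize_orcid(orcid):
    match = re.match(
        r"(https?://orcid\.org/|orcid\.org/)?"
        r"(?P<orcid>\d{4}-\d{4}-\d{4}-\d{3}[0-9Xx])$",
        orcid.strip())
    if not match:
        raise ValueError("Invalid ORCID: %s" % orcid)
    return "https://orcid.org/%s" % match.group("orcid").upper()

print(normalize_orcid("orcid.org/0000-0002-1825-0097"))
# https://orcid.org/0000-0002-1825-0097
```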
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def checksum_copy(src_file, # type: IO dst_file=None, # type: Optional[IO] hasher=Hasher, # type: Callable[[], Any] buffersize=1024*1024 # type: int """Compute checksums while copying a file.""" |
# TODO: Use hashlib.new(Hasher_str) instead?
checksum = hasher()
contents = src_file.read(buffersize)
if dst_file and hasattr(dst_file, "name") and hasattr(src_file, "name"):
temp_location = os.path.join(os.path.dirname(dst_file.name),
str(uuid.uuid4()))
try:
os.rename(dst_file.name, temp_location)
os.link(src_file.name, dst_file.name)
dst_file = None
os.unlink(temp_location)
except OSError:
pass
if os.path.exists(temp_location):
os.rename(temp_location, dst_file.name) # type: ignore
while contents != b"":
if dst_file is not None:
dst_file.write(contents)
checksum.update(contents)
contents = src_file.read(buffersize)
if dst_file is not None:
dst_file.flush()
return checksum.hexdigest().lower() |
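The core copy-while-hashing loop (without the hardlink fast path above) can be reduced to a few lines; `checksum_copy_sketch` is an illustrative name:

```python
import hashlib
import io

# Copy src to dst in buffered chunks while updating a running digest.
def checksum_copy_sketch(src, dst=None, hasher=hashlib.sha1,
                         buffersize=1024 * 1024):
    checksum = hasher()
    contents = src.read(buffersize)
    while contents != b"":
        if dst is not None:
            dst.write(contents)
        checksum.update(contents)
        contents = src.read(buffersize)
    return checksum.hexdigest().lower()

src = io.BytesIO(b"hello")
dst = io.BytesIO()
print(checksum_copy_sketch(src, dst))
# aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```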
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def copy_job_order(job, job_order_object):
# type: (Any, Any) -> Any """Create copy of job object for provenance.""" |
if not hasattr(job, "tool"):
# direct command line tool execution
return job_order_object
customised_job = {} # new job object for RO
for each, i in enumerate(job.tool["inputs"]):
with SourceLine(job.tool["inputs"], each, WorkflowException,
_logger.isEnabledFor(logging.DEBUG)):
iid = shortname(i["id"])
if iid in job_order_object:
customised_job[iid] = copy.deepcopy(job_order_object[iid])
# add the input element in dictionary for provenance
elif "default" in i:
customised_job[iid] = copy.deepcopy(i["default"])
# add the default elements in the dictionary for provenance
else:
pass
return customised_job |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def evaluate(self, process, # type: Process job, # type: Any job_order_object, # type: Dict[Text, Text] research_obj # type: ResearchObject """Evaluate the nature of the job.""" |
if not hasattr(process, "steps"):
# record provenance of independent commandline tool executions
self.prospective_prov(job)
customised_job = copy_job_order(job, job_order_object)
self.used_artefacts(customised_job, self.workflow_run_uri)
research_obj.create_job(customised_job, job)
elif hasattr(job, "workflow"):
# record provenance of workflow executions
self.prospective_prov(job)
customised_job = copy_job_order(job, job_order_object)
self.used_artefacts(customised_job, self.workflow_run_uri) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_process(self, process_name, when, process_run_id=None):
# type: (Text, datetime.datetime, Optional[str]) -> str """Record the start of each Process.""" |
if process_run_id is None:
process_run_id = uuid.uuid4().urn
prov_label = "Run of workflow/packed.cwl#main/" + process_name
self.document.activity(
process_run_id, None, None,
{provM.PROV_TYPE: WFPROV["ProcessRun"],
provM.PROV_LABEL: prov_label})
self.document.wasAssociatedWith(
process_run_id, self.engine_uuid, str("wf:main/" + process_name))
self.document.wasStartedBy(
process_run_id, None, self.workflow_run_uri,
when, None, None)
return process_run_id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def declare_string(self, value):
# type: (Union[Text, str]) -> Tuple[ProvEntity,Text] """Save as string in UTF-8.""" |
byte_s = BytesIO(str(value).encode(ENCODING))
data_file = self.research_object.add_data_file(byte_s, content_type=TEXT_PLAIN)
checksum = posixpath.basename(data_file)
# FIXME: Don't naively assume add_data_file uses hash in filename!
data_id = "data:%s" % posixpath.split(data_file)[1]
entity = self.document.entity(
data_id, {provM.PROV_TYPE: WFPROV["Artifact"],
provM.PROV_VALUE: str(value)}) # type: ProvEntity
return entity, checksum |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def finalize_prov_profile(self, name):
# type: (Optional[Text]) -> List[Identifier] """Transfer the provenance related files to the RO.""" |
# NOTE: Relative posix path
if name is None:
# master workflow, fixed filenames
filename = "primary.cwlprov"
else:
# ASCII-friendly filename, avoiding % as we don't want %2520 in manifest.json
wf_name = urllib.parse.quote(str(name), safe="").replace("%", "_")
# Note that the above could cause overlaps for similarly named
# workflows, but that's OK as we'll also include run uuid
        # which also covers the case of this step being run in
# multiple places or iterations
filename = "%s.%s.cwlprov" % (wf_name, self.workflow_run_uuid)
basename = posixpath.join(_posix_path(PROVENANCE), filename)
# TODO: Also support other profiles than CWLProv, e.g. ProvOne
# list of prov identifiers of provenance files
prov_ids = []
# https://www.w3.org/TR/prov-xml/
with self.research_object.write_bag_file(basename + ".xml") as provenance_file:
self.document.serialize(provenance_file, format="xml", indent=4)
prov_ids.append(self.provenance_ns[filename + ".xml"])
# https://www.w3.org/TR/prov-n/
with self.research_object.write_bag_file(basename + ".provn") as provenance_file:
self.document.serialize(provenance_file, format="provn", indent=2)
prov_ids.append(self.provenance_ns[filename + ".provn"])
# https://www.w3.org/Submission/prov-json/
with self.research_object.write_bag_file(basename + ".json") as provenance_file:
self.document.serialize(provenance_file, format="json", indent=2)
prov_ids.append(self.provenance_ns[filename + ".json"])
# "rdf" aka https://www.w3.org/TR/prov-o/
# which can be serialized to ttl/nt/jsonld (and more!)
# https://www.w3.org/TR/turtle/
with self.research_object.write_bag_file(basename + ".ttl") as provenance_file:
self.document.serialize(provenance_file, format="rdf", rdf_format="turtle")
prov_ids.append(self.provenance_ns[filename + ".ttl"])
# https://www.w3.org/TR/n-triples/
with self.research_object.write_bag_file(basename + ".nt") as provenance_file:
self.document.serialize(provenance_file, format="rdf", rdf_format="ntriples")
prov_ids.append(self.provenance_ns[filename + ".nt"])
# https://www.w3.org/TR/json-ld/
# TODO: Use a nice JSON-LD context
# see also https://eprints.soton.ac.uk/395985/
# 404 Not Found on https://provenance.ecs.soton.ac.uk/prov.jsonld :(
with self.research_object.write_bag_file(basename + ".jsonld") as provenance_file:
self.document.serialize(provenance_file, format="rdf", rdf_format="json-ld")
prov_ids.append(self.provenance_ns[filename + ".jsonld"])
_logger.debug(u"[provenance] added provenance: %s", prov_ids)
return prov_ids |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _initialize_bagit(self):
# type: () -> None """Write fixed bagit header.""" |
self.self_check()
bagit = os.path.join(self.folder, "bagit.txt")
# encoding: always UTF-8 (although ASCII would suffice here)
# newline: ensure LF also on Windows
with open(bagit, "w", encoding=ENCODING, newline='\n') as bag_it_file:
# TODO: \n or \r\n ?
bag_it_file.write(u"BagIt-Version: 0.97\n")
bag_it_file.write(u"Tag-File-Character-Encoding: %s\n" % ENCODING) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def user_provenance(self, document):
# type: (ProvDocument) -> None """Add the user provenance.""" |
self.self_check()
(username, fullname) = _whoami()
if not self.full_name:
self.full_name = fullname
document.add_namespace(UUID)
document.add_namespace(ORCID)
document.add_namespace(FOAF)
account = document.agent(
ACCOUNT_UUID, {provM.PROV_TYPE: FOAF["OnlineAccount"],
"prov:label": username,
FOAF["accountName"]: username})
user = document.agent(
self.orcid or USER_UUID,
{provM.PROV_TYPE: PROV["Person"],
"prov:label": self.full_name,
FOAF["name"]: self.full_name,
FOAF["account"]: account})
# cwltool may be started on the shell (directly by user),
# by shell script (indirectly by user)
# or from a different program
# (which again is launched by any of the above)
#
# We can't tell in which way, but ultimately we're still
        # acting on behalf of that user (even if we might
# get their name wrong!)
document.actedOnBehalfOf(account, user) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_bag_file(self, path, encoding=ENCODING):
# type: (Text, Optional[str]) -> IO """Write the bag file into our research object.""" |
self.self_check()
# For some reason below throws BlockingIOError
#fp = BufferedWriter(WritableBagFile(self, path))
bag_file = cast(IO, WritableBagFile(self, path))
if encoding is not None:
# encoding: match Tag-File-Character-Encoding: UTF-8
# newline: ensure LF also on Windows
return cast(IO,
TextIOWrapper(bag_file, encoding=encoding, newline="\n"))
return bag_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_tagfile(self, path, timestamp=None):
# type: (Text, datetime.datetime) -> None """Add tag files to our research object.""" |
self.self_check()
checksums = {}
# Read file to calculate its checksum
if os.path.isdir(path):
return
# FIXME: do the right thing for directories
with open(path, "rb") as tag_file:
# FIXME: Should have more efficient open_tagfile() that
# does all checksums in one go while writing through,
# adding checksums after closing.
# Below probably OK for now as metadata files
# are not too large..?
checksums[SHA1] = checksum_copy(tag_file, hasher=hashlib.sha1)
tag_file.seek(0)
checksums[SHA256] = checksum_copy(tag_file, hasher=hashlib.sha256)
tag_file.seek(0)
checksums[SHA512] = checksum_copy(tag_file, hasher=hashlib.sha512)
rel_path = _posix_path(os.path.relpath(path, self.folder))
self.tagfiles.add(rel_path)
self.add_to_manifest(rel_path, checksums)
if timestamp is not None:
self._file_provenance[rel_path] = {"createdOn": timestamp.isoformat()} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _ro_aggregates(self):
# type: () -> List[Dict[str, Any]] """Gather dictionary of files to be added to the manifest.""" |
def guess_mediatype(rel_path):
# type: (str) -> Dict[str, str]
"""Return the mediatypes."""
media_types = {
# Adapted from
# https://w3id.org/bundle/2014-11-05/#media-types
"txt": TEXT_PLAIN,
"ttl": 'text/turtle; charset="UTF-8"',
"rdf": 'application/rdf+xml',
"json": 'application/json',
"jsonld": 'application/ld+json',
"xml": 'application/xml',
##
"cwl": 'text/x+yaml; charset="UTF-8"',
"provn": 'text/provenance-notation; charset="UTF-8"',
"nt": 'application/n-triples',
}
conforms_to = {
"provn": 'http://www.w3.org/TR/2013/REC-prov-n-20130430/',
"cwl": 'https://w3id.org/cwl/',
}
prov_conforms_to = {
"provn": 'http://www.w3.org/TR/2013/REC-prov-n-20130430/',
"rdf": 'http://www.w3.org/TR/2013/REC-prov-o-20130430/',
"ttl": 'http://www.w3.org/TR/2013/REC-prov-o-20130430/',
"nt": 'http://www.w3.org/TR/2013/REC-prov-o-20130430/',
"jsonld": 'http://www.w3.org/TR/2013/REC-prov-o-20130430/',
"xml": 'http://www.w3.org/TR/2013/NOTE-prov-xml-20130430/',
"json": 'http://www.w3.org/Submission/2013/SUBM-prov-json-20130424/',
}
extension = rel_path.rsplit(".", 1)[-1].lower() # type: Optional[str]
if extension == rel_path:
# No ".", no extension
extension = None
local_aggregate = {} # type: Dict[str, Any]
if extension in media_types:
local_aggregate["mediatype"] = media_types[extension]
if extension in conforms_to:
# TODO: Open CWL file to read its declared "cwlVersion", e.g.
# cwlVersion = "v1.0"
local_aggregate["conformsTo"] = conforms_to[extension]
if (rel_path.startswith(_posix_path(PROVENANCE))
and extension in prov_conforms_to):
if ".cwlprov" in rel_path:
# Our own!
local_aggregate["conformsTo"] = [prov_conforms_to[extension], CWLPROV_VERSION]
else:
# Some other PROV
# TODO: Recognize ProvOne etc.
local_aggregate["conformsTo"] = prov_conforms_to[extension]
return local_aggregate
aggregates = [] # type: List[Dict]
for path in self.bagged_size.keys():
aggregate_dict = {} # type: Dict[str, Any]
(folder, filename) = posixpath.split(path)
# NOTE: Here we end up aggregating the abstract
# data items by their sha1 hash, so that it matches
# the entity() in the prov files.
# TODO: Change to nih:sha-256; hashes
# https://tools.ietf.org/html/rfc6920#section-7
aggregate_dict["uri"] = 'urn:hash::sha1:' + filename
aggregate_dict["bundledAs"] = {
                # The arcp URI is a suitable ORE proxy; local to this Research Object.
# (as long as we don't also aggregate it by relative path!)
"uri": self.base_uri + path,
# relate it to the data/ path
"folder": "/%s/" % folder,
"filename": filename,
}
if path in self._file_provenance:
# Made by workflow run, merge captured provenance
aggregate_dict["bundledAs"].update(self._file_provenance[path])
else:
# Probably made outside wf run, part of job object?
pass
if path in self._content_types:
aggregate_dict["mediatype"] = self._content_types[path]
aggregates.append(aggregate_dict)
for path in self.tagfiles:
if (not (path.startswith(METADATA) or path.startswith(WORKFLOW) or
path.startswith(SNAPSHOT))):
# probably a bagit file
continue
if path == posixpath.join(METADATA, "manifest.json"):
# Should not really be there yet! But anyway, we won't
# aggregate it.
continue
rel_aggregates = {} # type: Dict[str, Any]
# These are local paths like metadata/provenance - but
# we need to relativize them to our current directory,
# as we are saved in metadata/manifest.json
uri = posixpath.relpath(path, METADATA)
rel_aggregates["uri"] = uri
rel_aggregates.update(guess_mediatype(path))
if path in self._file_provenance:
# Propagate file provenance (e.g. timestamp)
rel_aggregates.update(self._file_provenance[path])
elif not path.startswith(SNAPSHOT):
# make new timestamp?
rel_aggregates.update(self._self_made())
aggregates.append(rel_aggregates)
aggregates.extend(self._external_aggregates)
return aggregates |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def packed_workflow(self, packed):
# type: (Text) -> None """Pack CWL description to generate re-runnable CWL object in RO.""" |
self.self_check()
rel_path = posixpath.join(_posix_path(WORKFLOW), "packed.cwl")
# Write as binary
with self.write_bag_file(rel_path, encoding=None) as write_pack:
# YAML is always UTF8, but json.dumps gives us str in py2
write_pack.write(packed.encode(ENCODING))
_logger.debug(u"[provenance] Added packed workflow: %s", rel_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_data_file(self, sha1hash):
# type: (str) -> bool """Confirms the presence of the given file in the RO.""" |
folder = os.path.join(self.folder, DATA, sha1hash[0:2])
hash_path = os.path.join(folder, sha1hash)
return os.path.isfile(hash_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_to_manifest(self, rel_path, checksums):
# type: (Text, Dict[str,str]) -> None """Add files to the research object manifest.""" |
self.self_check()
if posixpath.isabs(rel_path):
raise ValueError("rel_path must be relative: %s" % rel_path)
if posixpath.commonprefix(["data/", rel_path]) == "data/":
# payload file, go to manifest
manifest = "manifest"
else:
# metadata file, go to tag manifest
manifest = "tagmanifest"
# Add checksums to corresponding manifest files
for (method, hash_value) in checksums.items():
# File not in manifest because we bailed out on
# existence in bagged_size above
manifestpath = os.path.join(
self.folder, "%s-%s.txt" % (manifest, method.lower()))
# encoding: match Tag-File-Character-Encoding: UTF-8
# newline: ensure LF also on Windows
with open(manifestpath, "a", encoding=ENCODING, newline='\n') \
as checksum_file:
line = u"%s %s\n" % (hash_value, rel_path)
_logger.debug(u"[provenance] Added to %s: %s", manifestpath, line)
checksum_file.write(line) |
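The manifest lines written above follow the BagIt convention of one `<hash> <path>` entry per file, with payload files under `data/` routed to `manifest-*.txt` and everything else to `tagmanifest-*.txt`. A minimal standalone sketch of that routing (hypothetical helper name `manifest_line`; the real method appends to files under `self.folder` for every checksum algorithm it is given):

```python
import hashlib
import posixpath

def manifest_line(rel_path, data):
    """Mirror add_to_manifest's routing: payload files under data/ go to
    manifest-<alg>.txt, all other (tag) files to tagmanifest-<alg>.txt,
    with one "<hash> <path>" line per file."""
    checksum = hashlib.sha1(data).hexdigest()  # nosec - bagit manifest hash
    if posixpath.commonprefix(["data/", rel_path]) == "data/":
        manifest = "manifest-sha1.txt"
    else:
        manifest = "tagmanifest-sha1.txt"
    return manifest, "%s %s\n" % (checksum, rel_path)
```

Note that `posixpath.commonprefix` compares character-by-character, so only paths that literally start with `data/` are treated as payload.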
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_job(self, builder_job, # type: Dict[Text, Any] wf_job=None, # type: Callable[[Dict[Text, Text], Callable[[Any, Any], Any], RuntimeContext], Generator[Any, None, None]] is_output=False #TODO customise the file """Generate the new job object with RO specific relative paths.""" |
copied = copy.deepcopy(builder_job)
relativised_input_objecttemp = {} # type: Dict[Text, Any]
self._relativise_files(copied)
def jdefault(o):
return dict(o)
if is_output:
rel_path = posixpath.join(_posix_path(WORKFLOW), "primary-output.json")
else:
rel_path = posixpath.join(_posix_path(WORKFLOW), "primary-job.json")
j = json_dumps(copied, indent=4, ensure_ascii=False, default=jdefault)
with self.write_bag_file(rel_path) as file_path:
file_path.write(j + u"\n")
_logger.debug(u"[provenance] Generated customised job file: %s",
rel_path)
# Generate dictionary with keys as workflow level input IDs and values
# as
# 1) for files the relativised location containing hash
# 2) for other attributes, the actual value.
relativised_input_objecttemp = {}
for key, value in copied.items():
if isinstance(value, MutableMapping):
if value.get("class") in ("File", "Directory"):
relativised_input_objecttemp[key] = value
else:
relativised_input_objecttemp[key] = value
self.relativised_input_object.update(
{k: v for k, v in relativised_input_objecttemp.items() if v})
return self.relativised_input_object |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _relativise_files(self, structure):
# type: (Dict[Any, Any]) -> None """Save any file objects into the RO and update the local paths.""" |
# Base case - we found a File we need to update
_logger.debug(u"[provenance] Relativising: %s", structure)
if isinstance(structure, MutableMapping):
if structure.get("class") == "File":
relative_path = None
if "checksum" in structure:
alg, checksum = structure["checksum"].split("$")
if alg != SHA1:
raise TypeError(
"Only SHA1 CWL checksums are currently supported: "
"{}".format(structure))
if self.has_data_file(checksum):
prefix = checksum[0:2]
relative_path = posixpath.join(
"data", prefix, checksum)
if relative_path is None and "location" in structure:
# Register in RO; but why was this not picked
# up by used_artefacts?
_logger.info("[provenance] Adding to RO %s", structure["location"])
fsaccess = StdFsAccess("")
with fsaccess.open(structure["location"], "rb") as fp:
relative_path = self.add_data_file(fp)
checksum = posixpath.basename(relative_path)
structure["checksum"] = "%s$%s" % (SHA1, checksum)
if relative_path is not None:
# RO-relative path as new location
structure["location"] = posixpath.join("..", relative_path)
else:
_logger.warning("Could not determine RO path for file %s", structure)
if "path" in structure:
del structure["path"]
if structure.get("class") == "Directory":
# TODO: Generate anonymous Directory with a "listing"
# pointing to the hashed files
del structure["location"]
for val in structure.values():
self._relativise_files(val)
return
if isinstance(structure, (str, Text)):
# Just a string value, no need to iterate further
return
try:
for obj in iter(structure):
# Recurse and rewrite any nested File objects
self._relativise_files(obj)
except TypeError:
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self, save_to=None):
# type: (Optional[str]) -> None """Close the Research Object, optionally saving to specified folder. Closing will remove any temporary files used by this research object. After calling this method, this ResearchObject instance can no longer be used, except for no-op calls to .close(). The 'saveTo' folder should not exist - if it does, it will be deleted. It is safe to call this function multiple times without the 'saveTo' argument, e.g. within a try..finally block to ensure the temporary files of this Research Object are removed. """ |
if save_to is None:
if not self.closed:
_logger.debug(u"[provenance] Deleting temporary %s", self.folder)
shutil.rmtree(self.folder, ignore_errors=True)
else:
save_to = os.path.abspath(save_to)
_logger.info(u"[provenance] Finalizing Research Object")
self._finalize() # write manifest etc.
# TODO: Write as archive (.zip or .tar) based on extension?
if os.path.isdir(save_to):
_logger.info(u"[provenance] Deleting existing %s", save_to)
shutil.rmtree(save_to)
shutil.move(self.folder, save_to)
_logger.info(u"[provenance] Research Object saved to %s", save_to)
self.folder = save_to
self.closed = True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def printrdf(wflow, ctx, style):
# type: (Process, ContextType, Text) -> Text """Serialize the CWL document into a string, ready for printing.""" |
rdf = gather(wflow, ctx).serialize(format=style, encoding='utf-8')
if not rdf:
return u""
return rdf.decode('utf-8') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_runtime(self, env, # type: MutableMapping[Text, Text] runtime_context # type: RuntimeContext """ Returns the Singularity runtime list of commands and options.""" |
any_path_okay = self.builder.get_requirement("DockerRequirement")[1] \
or False
runtime = [u"singularity", u"--quiet", u"exec", u"--contain", u"--pid",
u"--ipc"]
if _singularity_supports_userns():
runtime.append(u"--userns")
runtime.append(u"--bind")
runtime.append(u"{}:{}:rw".format(
docker_windows_path_adjust(os.path.realpath(self.outdir)),
self.builder.outdir))
runtime.append(u"--bind")
tmpdir = "/tmp" # nosec
runtime.append(u"{}:{}:rw".format(
docker_windows_path_adjust(os.path.realpath(self.tmpdir)), tmpdir))
self.add_volumes(self.pathmapper, runtime, any_path_okay=True,
secret_store=runtime_context.secret_store,
tmpdir_prefix=runtime_context.tmpdir_prefix)
if self.generatemapper is not None:
self.add_volumes(
self.generatemapper, runtime, any_path_okay=any_path_okay,
secret_store=runtime_context.secret_store,
tmpdir_prefix=runtime_context.tmpdir_prefix)
runtime.append(u"--pwd")
runtime.append(u"%s" % (docker_windows_path_adjust(self.builder.outdir)))
if runtime_context.custom_net:
raise UnsupportedRequirement(
"Singularity implementation does not support custom networking")
elif runtime_context.disable_net:
runtime.append(u"--net")
env["SINGULARITYENV_TMPDIR"] = tmpdir
env["SINGULARITYENV_HOME"] = self.builder.outdir
for name, value in self.environment.items():
env["SINGULARITYENV_{}".format(name)] = str(value)
return (runtime, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_tool(uri, # type: Union[Text, CommentedMap, CommentedSeq] loadingContext # type: LoadingContext """Make a Python CWL object.""" |
if loadingContext.loader is None:
raise ValueError("loadingContext must have a loader")
resolveduri, metadata = loadingContext.loader.resolve_ref(uri)
processobj = None
if isinstance(resolveduri, MutableSequence):
for obj in resolveduri:
if obj['id'].endswith('#main'):
processobj = obj
break
if not processobj:
raise WorkflowException(
u"Tool file contains graph of multiple objects, must specify "
"one of #%s" % ", #".join(
urllib.parse.urldefrag(i["id"])[1] for i in resolveduri
if "id" in i))
elif isinstance(resolveduri, MutableMapping):
processobj = resolveduri
else:
raise Exception("Must resolve to list or dict")
tool = loadingContext.construct_tool_object(processobj, loadingContext)
if loadingContext.jobdefaults:
jobobj = loadingContext.jobdefaults
for inp in tool.tool["inputs"]:
if shortname(inp["id"]) in jobobj:
inp["default"] = jobobj[shortname(inp["id"])]
return tool |
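The `$graph` handling above picks the object whose `id` ends in `#main`, or fails listing the available fragments. That selection step can be sketched on its own (hypothetical helper name `select_main`; the real code also handles single-document resolution and applies job defaults):

```python
import urllib.parse

def select_main(resolved):
    """Pick the process object whose id ends with '#main' from a $graph
    list; otherwise raise, listing the available fragment ids."""
    for obj in resolved:
        if obj["id"].endswith("#main"):
            return obj
    raise ValueError(
        "Tool file contains graph of multiple objects, must specify "
        "one of #%s" % ", #".join(
            urllib.parse.urldefrag(o["id"])[1] for o in resolved if "id" in o))
```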
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def revmap_file(builder, outdir, f):
# type: (Builder, Text, Dict[Text, Any]) -> Union[Dict[Text, Any], None] """Remap a file from internal path to external path. For Docker, this maps from the path inside the container to the path outside the container. Recognizes files in the pathmapper or remaps internal output directories to the external directory. """
split = urllib.parse.urlsplit(outdir)
if not split.scheme:
outdir = file_uri(str(outdir))
# builder.outdir is the inner (container/compute node) output directory
# outdir is the outer (host/storage system) output directory
if "location" in f and "path" not in f:
if f["location"].startswith("file://"):
f["path"] = convert_pathsep_to_unix(uri_file_path(f["location"]))
else:
return f
if "path" in f:
path = f["path"]
uripath = file_uri(path)
del f["path"]
if "basename" not in f:
f["basename"] = os.path.basename(path)
if not builder.pathmapper:
raise ValueError("Do not call revmap_file using a builder that doesn't have a pathmapper.")
revmap_f = builder.pathmapper.reversemap(path)
if revmap_f and not builder.pathmapper.mapper(revmap_f[0]).type.startswith("Writable"):
f["location"] = revmap_f[1]
elif uripath == outdir or uripath.startswith(outdir+os.sep):
f["location"] = file_uri(path)
elif path == builder.outdir or path.startswith(builder.outdir+os.sep):
f["location"] = builder.fs_access.join(outdir, path[len(builder.outdir) + 1:])
elif not os.path.isabs(path):
f["location"] = builder.fs_access.join(outdir, path)
else:
raise WorkflowException(u"Output file path %s must be within designated output directory (%s) or an input "
u"file pass through." % (path, builder.outdir))
return f
raise WorkflowException(u"Output File object is missing both 'location' "
"and 'path' fields: %s" % f) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_adjust(builder, file_o):
# type: (Builder, Dict[Text, Any]) -> Dict[Text, Any] """ Map files to assigned path inside a container. We need to also explicitly walk over input, as implicit reassignment doesn't reach everything in builder.bindings """ |
if not builder.pathmapper:
raise ValueError("Do not call check_adjust using a builder that doesn't have a pathmapper.")
file_o["path"] = docker_windows_path_adjust(
builder.pathmapper.mapper(file_o["location"])[1])
dn, bn = os.path.split(file_o["path"])
if file_o.get("dirname") != dn:
file_o["dirname"] = Text(dn)
if file_o.get("basename") != bn:
file_o["basename"] = Text(bn)
if file_o["class"] == "File":
nr, ne = os.path.splitext(file_o["basename"])
if file_o.get("nameroot") != nr:
file_o["nameroot"] = Text(nr)
if file_o.get("nameext") != ne:
file_o["nameext"] = Text(ne)
if not ACCEPTLIST_RE.match(file_o["basename"]):
raise WorkflowException(
"Invalid filename: '{}' contains illegal characters".format(
file_o["basename"]))
return file_o |
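The derived name fields above come from `os.path.split` and `os.path.splitext`; note `splitext` strips only the last extension, so `reads.fastq.gz` yields nameroot `reads.fastq` and nameext `.gz`. A small sketch of just that derivation (hypothetical helper name `name_fields`):

```python
import os.path

def name_fields(path):
    """Derive the dirname/basename/nameroot/nameext fields that
    check_adjust fills in for a File object."""
    dn, bn = os.path.split(path)
    nr, ne = os.path.splitext(bn)
    return {"dirname": dn, "basename": bn, "nameroot": nr, "nameext": ne}
```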
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _flat_crossproduct_scatter(process, # type: WorkflowJobStep joborder, # type: MutableMapping[Text, Any] scatter_keys, # type: MutableSequence[Text] callback, # type: ReceiveScatterOutput startindex, # type: int runtimeContext # type: RuntimeContext """ Inner loop. """ |
scatter_key = scatter_keys[0]
jobl = len(joborder[scatter_key])
steps = []
put = startindex
for index in range(0, jobl):
sjob = copy.copy(joborder)
sjob[scatter_key] = joborder[scatter_key][index]
if len(scatter_keys) == 1:
if runtimeContext.postScatterEval is not None:
sjob = runtimeContext.postScatterEval(sjob)
steps.append(process.job(
sjob, functools.partial(callback.receive_scatter_output, put),
runtimeContext))
put += 1
else:
(add, _) = _flat_crossproduct_scatter(
process, sjob, scatter_keys[1:], callback, put, runtimeContext)
put += len(add)
steps.extend(add)
return (steps, put) |
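The recursion above enumerates scattered jobs so that the last scatter key varies fastest, exactly like nested for-loops. An equivalent non-recursive enumeration of just the job orders (a sketch without the Process/callback machinery; the helper name is hypothetical):

```python
import itertools

def flat_crossproduct_order(joborder, scatter_keys):
    """Enumerate the scattered job orders in the same sequence the
    recursive _flat_crossproduct_scatter assigns its output indices."""
    jobs = []
    for combo in itertools.product(*(joborder[key] for key in scatter_keys)):
        sjob = dict(joborder)
        sjob.update(zip(scatter_keys, combo))
        jobs.append(sjob)
    return jobs
```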
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def formatSubclassOf(fmt, cls, ontology, visited):
# type: (Text, Text, Optional[Graph], Set[Text]) -> bool """Determine if `fmt` is a subclass of `cls`.""" |
if URIRef(fmt) == URIRef(cls):
return True
if ontology is None:
return False
if fmt in visited:
return False
visited.add(fmt)
uriRefFmt = URIRef(fmt)
for s, p, o in ontology.triples((uriRefFmt, RDFS.subClassOf, None)):
# Find parent classes of `fmt` and search upward
if formatSubclassOf(o, cls, ontology, visited):
return True
for s, p, o in ontology.triples((uriRefFmt, OWL.equivalentClass, None)):
# Find equivalent classes of `fmt` and search horizontally
if formatSubclassOf(o, cls, ontology, visited):
return True
for s, p, o in ontology.triples((None, OWL.equivalentClass, uriRefFmt)):
# Find equivalent classes of `fmt` and search horizontally
if formatSubclassOf(s, cls, ontology, visited):
return True
return False |
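The upward walk can be illustrated without rdflib on a plain adjacency mapping. This sketch only follows `subClassOf` edges; the real function also chases `equivalentClass` triples in both directions:

```python
def is_subclass_of(fmt, cls, parents, visited=None):
    """Return True if cls is reachable from fmt via subClassOf edges.

    parents maps a format IRI to a list of its direct superclasses.
    """
    if fmt == cls:
        return True
    if visited is None:
        visited = set()
    if fmt in visited:  # guard against cycles in the ontology
        return False
    visited.add(fmt)
    return any(is_subclass_of(p, cls, parents, visited)
               for p in parents.get(fmt, []))
```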
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_format(actual_file, # type: Union[Dict[Text, Any], List, Text] input_formats, # type: Union[List[Text], Text] ontology # type: Optional[Graph] """ Confirms that the format present is valid for the allowed formats.""" |
for afile in aslist(actual_file):
if not afile:
continue
if "format" not in afile:
raise validate.ValidationException(
u"File has no 'format' defined: {}".format(
json_dumps(afile, indent=4)))
for inpf in aslist(input_formats):
if afile["format"] == inpf or \
formatSubclassOf(afile["format"], inpf, ontology, set()):
return
raise validate.ValidationException(
u"File has an incompatible format: {}".format(
json_dumps(afile, indent=4))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def v1_0to1_1_0dev1(doc, loader, baseuri):
# pylint: disable=unused-argument # type: (Any, Loader, Text) -> Tuple[Any, Text] """Public updater for v1.0 to v1.1.0-dev1.""" |
doc = copy.deepcopy(doc)
rewrite = {
    "http://commonwl.org/cwltool#WorkReuse": "WorkReuse",
    "http://commonwl.org/cwltool#TimeLimit": "ToolTimeLimit",
    "http://commonwl.org/cwltool#NetworkAccess": "NetworkAccess",
    "http://commonwl.org/cwltool#InplaceUpdateRequirement": "InplaceUpdateRequirement",
    "http://commonwl.org/cwltool#LoadListingRequirement": "LoadListingRequirement",
}
def rewrite_requirements(t):
if "requirements" in t:
for r in t["requirements"]:
if r["class"] in rewrite:
r["class"] = rewrite[r["class"]]
if "hints" in t:
for r in t["hints"]:
if r["class"] in rewrite:
r["class"] = rewrite[r["class"]]
if "steps" in t:
for s in t["steps"]:
rewrite_requirements(s)
def update_secondaryFiles(t):
if isinstance(t, MutableSequence):
return [{"pattern": p} for p in t]
else:
return t
def fix_inputBinding(t):
for i in t["inputs"]:
if "inputBinding" in i:
ib = i["inputBinding"]
for k in list(ib.keys()):
if k != "loadContents":
_logger.warning(SourceLine(ib, k).makeError("Will ignore field '%s' which is not valid in %s inputBinding" %
(k, t["class"])))
del ib[k]
visit_class(doc, ("CommandLineTool","Workflow"), rewrite_requirements)
visit_class(doc, ("ExpressionTool","Workflow"), fix_inputBinding)
visit_field(doc, "secondaryFiles", update_secondaryFiles)
upd = doc
if isinstance(upd, MutableMapping) and "$graph" in upd:
upd = upd["$graph"]
for proc in aslist(upd):
proc.setdefault("hints", [])
proc["hints"].insert(0, {"class": "NetworkAccess", "networkAccess": True})
proc["hints"].insert(0, {"class": "LoadListingRequirement", "loadListing": "deep_listing"})
return (doc, "v1.1.0-dev1") |
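The class renaming done by `rewrite_requirements` above can be sketched as a self-contained helper operating on plain dicts (hypothetical name `rewrite_classes`; the real updater walks CommentedMap documents via `visit_class`):

```python
def rewrite_classes(process, rewrite):
    """Rename requirement/hint classes in place and recurse into steps."""
    for section in ("requirements", "hints"):
        for req in process.get(section, []):
            if req["class"] in rewrite:
                req["class"] = rewrite[req["class"]]
    for step in process.get("steps", []):
        rewrite_classes(step, rewrite)
```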
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def checkversion(doc, # type: Union[CommentedSeq, CommentedMap] metadata, # type: CommentedMap enable_dev # type: bool ):
"""Check the validity of the version of the given CWL document. Returns the document and the validated version string. """
cdoc = None # type: Optional[CommentedMap]
if isinstance(doc, CommentedSeq):
if not isinstance(metadata, CommentedMap):
raise Exception("Expected metadata to be CommentedMap")
lc = metadata.lc
metadata = copy.deepcopy(metadata)
metadata.lc.data = copy.copy(lc.data)
metadata.lc.filename = lc.filename
metadata[u"$graph"] = doc
cdoc = metadata
elif isinstance(doc, CommentedMap):
cdoc = doc
else:
raise Exception("Expected CommentedMap or CommentedSeq")
version = metadata[u"cwlVersion"]
cdoc["cwlVersion"] = version
if version not in UPDATES:
if version in DEVUPDATES:
if not enable_dev:
raise validate.ValidationException(
u"Version '%s' is a development or deprecated version.\n "
"Update your document to a stable version (%s) or use "
"--enable-dev to enable support for development and "
"deprecated versions." % (version, ", ".join(
list(UPDATES.keys()))))
else:
raise validate.ValidationException(
u"Unrecognized version %s" % version)
return (cdoc, version) |
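The version gating above reduces to three cases: stable versions pass, development versions need `--enable-dev`, and anything else is rejected. A condensed sketch of just that logic (hypothetical helper name; the real function raises `validate.ValidationException` and also wraps `CommentedSeq` documents into a `$graph`):

```python
def validate_version(version, updates, devupdates, enable_dev):
    """Accept stable versions, gate dev versions on enable_dev,
    reject everything else."""
    if version in updates:
        return
    if version in devupdates:
        if not enable_dev:
            raise ValueError(
                "Version %r is a development or deprecated version; "
                "update the document or use --enable-dev" % version)
        return
    raise ValueError("Unrecognized version %r" % version)
```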
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stage_files(pathmapper, # type: PathMapper stage_func=None, # type: Optional[Callable[[Text, Text], None]] ignore_writable=False, # type: bool symlink=True, # type: bool secret_store=None # type: SecretStore """Link or copy files to their targets. Create them as needed."""
for key, entry in pathmapper.items():
if not entry.staged:
continue
if not os.path.exists(os.path.dirname(entry.target)):
os.makedirs(os.path.dirname(entry.target))
if entry.type in ("File", "Directory") and os.path.exists(entry.resolved):
if symlink: # Use symlink func if allowed
if onWindows():
if entry.type == "File":
shutil.copy(entry.resolved, entry.target)
elif entry.type == "Directory":
if os.path.exists(entry.target) \
and os.path.isdir(entry.target):
shutil.rmtree(entry.target)
copytree_with_merge(entry.resolved, entry.target)
else:
os.symlink(entry.resolved, entry.target)
elif stage_func is not None:
stage_func(entry.resolved, entry.target)
elif entry.type == "Directory" and not os.path.exists(entry.target) \
and entry.resolved.startswith("_:"):
os.makedirs(entry.target)
elif entry.type == "WritableFile" and not ignore_writable:
shutil.copy(entry.resolved, entry.target)
ensure_writable(entry.target)
elif entry.type == "WritableDirectory" and not ignore_writable:
if entry.resolved.startswith("_:"):
os.makedirs(entry.target)
else:
shutil.copytree(entry.resolved, entry.target)
ensure_writable(entry.target)
elif entry.type == "CreateFile" or entry.type == "CreateWritableFile":
with open(entry.target, "wb") as new:
if secret_store is not None:
new.write(
secret_store.retrieve(entry.resolved).encode("utf-8"))
else:
new.write(entry.resolved.encode("utf-8"))
if entry.type == "CreateFile":
os.chmod(entry.target, stat.S_IRUSR) # Read only
else: # it is a "CreateWritableFile"
ensure_writable(entry.target)
pathmapper.update(
key, entry.target, entry.target, entry.type, entry.staged) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def avroize_type(field_type, name_prefix=""):
# type: (Union[List[Dict[Text, Any]], Dict[Text, Any]], Text) -> Any """ adds missing information to a type so that CWL types are valid in schema_salad. """ |
if isinstance(field_type, MutableSequence):
for field in field_type:
avroize_type(field, name_prefix)
elif isinstance(field_type, MutableMapping):
if field_type["type"] in ("enum", "record"):
if "name" not in field_type:
field_type["name"] = name_prefix + Text(uuid.uuid4())
if field_type["type"] == "record":
avroize_type(field_type["fields"], name_prefix)
if field_type["type"] == "array":
avroize_type(field_type["items"], name_prefix)
if isinstance(field_type["type"], MutableSequence):
for ctype in field_type["type"]:
avroize_type(ctype, name_prefix)
return field_type |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def versionstring():
# type: () -> Text
'''
version of CWLtool used to execute the workflow.
'''
pkg = pkg_resources.require("cwltool")
if pkg:
return u"%s %s" % (sys.argv[0], pkg[0].version)
return u"%s %s" % (sys.argv[0], "unknown version") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def docker_windows_path_adjust(path):
# type: (Text) -> Text r""" Changes only windows paths so that they can be appropriately passed to the docker run command, as docker treats them as unix paths. Example: 'C:\Users\foo' to '/C/Users/foo' (Docker for Windows) or '/c/Users/foo' (Docker toolbox). """
if onWindows():
split = path.split(':')
if len(split) == 2:
if platform.win32_ver()[0] in ('7', '8'): # type: ignore
split[0] = split[0].lower() # Docker toolbox uses lowercase Windows drive letters
else:
split[0] = split[0].capitalize()
# Docker for Windows uses uppercase windows Drive letters
path = ':'.join(split)
path = path.replace(':', '').replace('\\', '/')
return path if path[0] == '/' else '/' + path
return path |
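The drive-letter rewriting above can be sketched as a standalone, platform-independent helper (hypothetical name `windows_to_docker_path`; the real function only rewrites when actually running on Windows, and keys Toolbox vs. Docker for Windows off the Windows release):

```python
def windows_to_docker_path(path, toolbox=False):
    """Rewrite a Windows path for docker run: C:\\Users\\foo -> /C/Users/foo.

    With toolbox=True the drive letter is lowercased, as Docker Toolbox
    expects; otherwise it is capitalized for Docker for Windows.
    """
    split = path.split(':')
    if len(split) == 2:
        split[0] = split[0].lower() if toolbox else split[0].capitalize()
        path = ':'.join(split)
    path = path.replace(':', '').replace('\\', '/')
    return path if path.startswith('/') else '/' + path
```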
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bytes2str_in_dicts(inp # type: Union[MutableMapping[Text, Any], MutableSequence[Any], Any] ):
""" Convert any present byte string to unicode string, in place. Input is a dict of nested dicts and lists. """
# if input is dict, recursively call for each value
if isinstance(inp, MutableMapping):
for k in inp:
inp[k] = bytes2str_in_dicts(inp[k])
return inp
# if list, iterate through list and fn call
# for all its elements
if isinstance(inp, MutableSequence):
for idx, value in enumerate(inp):
inp[idx] = bytes2str_in_dicts(value)
return inp
# if value is bytes, return decoded string,
elif isinstance(inp, bytes):
return inp.decode('utf-8')
# simply return elements itself
return inp |
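The recursion above can be exercised with a standalone copy using the Python 3 `collections.abc` ABCs (the surrounding module gets `MutableMapping`/`MutableSequence` from `typing` imports; the name here is shortened for the sketch):

```python
from collections.abc import MutableMapping, MutableSequence

def bytes2str(inp):
    """Recursively decode bytes values to str, mutating containers in place."""
    if isinstance(inp, MutableMapping):
        for k in inp:
            inp[k] = bytes2str(inp[k])
        return inp
    if isinstance(inp, MutableSequence):
        for idx, value in enumerate(inp):
            inp[idx] = bytes2str(value)
        return inp
    if isinstance(inp, bytes):
        return inp.decode('utf-8')
    return inp
```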
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def visit_class(rec, cls, op):
"""Apply a function to any mapping whose "class" value is in cls."""
if isinstance(rec, MutableMapping):
if "class" in rec and rec.get("class") in cls:
op(rec)
for d in rec:
visit_class(rec[d], cls, op)
if isinstance(rec, MutableSequence):
for d in rec:
visit_class(d, cls, op) |
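A runnable usage sketch, bundling the walker with Python 3 `collections.abc` imports (str values fall through both isinstance checks, so the recursion terminates on scalars):

```python
from collections.abc import MutableMapping, MutableSequence

def visit_class(rec, cls, op):
    """Call op on every nested mapping whose "class" value is in cls."""
    if isinstance(rec, MutableMapping):
        if rec.get("class") in cls:
            op(rec)
        for d in rec:
            visit_class(rec[d], cls, op)
    if isinstance(rec, MutableSequence):
        for d in rec:
            visit_class(d, cls, op)

seen = []
doc = {"class": "File", "sub": [{"class": "Directory"}, {"class": "File"}]}
visit_class(doc, ("File",), lambda r: seen.append(r["class"]))
```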
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def visit_field(rec, field, op):
"""Apply a function to the value of 'field' in every nested mapping."""
if isinstance(rec, MutableMapping):
if field in rec:
rec[field] = op(rec[field])
for d in rec:
visit_field(rec[d], field, op)
if isinstance(rec, MutableSequence):
for d in rec:
visit_field(d, field, op) |
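A runnable usage sketch, using the same `secondaryFiles` rewrite that the v1.0 updater performs (Python 3 `collections.abc` imports assumed):

```python
from collections.abc import MutableMapping, MutableSequence

def visit_field(rec, field, op):
    """Replace rec[field] with op(rec[field]) in every nested mapping."""
    if isinstance(rec, MutableMapping):
        if field in rec:
            rec[field] = op(rec[field])
        for d in rec:
            visit_field(rec[d], field, op)
    if isinstance(rec, MutableSequence):
        for d in rec:
            visit_field(d, field, op)

doc = {"secondaryFiles": [".bai"], "steps": [{"secondaryFiles": [".idx"]}]}
visit_field(doc, "secondaryFiles", lambda t: [{"pattern": p} for p in t])
```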
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def print_num(num):
''' Write a numeric result in various forms '''
out('hex: 0x{0:08x}'.format(num))
out('dec: {0:d}'.format(num))
out('oct: 0o{0:011o}'.format(num))
out('bin: 0b{0:032b}'.format(num)) |
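The format specs above zero-pad to fixed widths: 8 hex digits, 11 octal digits, and 32 binary digits, i.e. one 32-bit word in each base. A pure variant returning the strings instead of printing (hypothetical helper name `num_forms`):

```python
def num_forms(num):
    """Return the hex/dec/oct/bin renderings that print_num writes out."""
    return {
        "hex": "0x{0:08x}".format(num),
        "dec": "{0:d}".format(num),
        "oct": "0o{0:011o}".format(num),
        "bin": "0b{0:032b}".format(num),
    }
```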
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def main(argv=None):
''' Runs the program and handles command line options '''
parser = get_parser()
# Parse arguments and run the function
global args
args = parser.parse_args(argv)
args.func() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __decode_ext_desc(self, value_type, value):
""" decode ASF_EXTENDED_CONTENT_DESCRIPTION_OBJECT values""" |
if value_type == 0: # Unicode string
return self.__decode_string(value)
elif value_type == 1: # BYTE array
return value
elif 1 < value_type < 6: # DWORD / QWORD / WORD
return _bytes_to_int_le(value) |
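The `_bytes_to_int_le` helper referenced above is not shown in this excerpt; on Python 3 the little-endian decode it performs for the WORD/DWORD/QWORD value types can be sketched with `int.from_bytes`:

```python
def bytes_to_int_le(data):
    """Decode a little-endian unsigned integer, as ASF stores
    WORD/DWORD/QWORD extended-description values."""
    return int.from_bytes(data, byteorder="little")
```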
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self, card_id, data={}, **kwargs):
"""" Fetch Card for given Id Args: card_id : Id for which card object has to be retrieved Returns: Card dict for given card Id """ |
return super(Card, self).fetch(card_id, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, data={}, **kwargs):
"""" Fetch all Virtual Account entities Returns: Dictionary of Virtual Account data """ |
return super(VirtualAccount, self).all(data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self, virtual_account_id, data={}, **kwargs):
"""" Fetch Virtual Account for given Id Args: virtual_account_id : Id for which Virtual Account object has to be retrieved Returns: Virtual Account dict for given Virtual Account Id """ |
return super(VirtualAccount, self).fetch(
virtual_account_id,
data,
**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, data={}, **kwargs):
"""" Create Virtual Account from given dict Args: Param for Creating Virtual Account Returns: Virtual Account dict """ |
url = self.base_url
return self.post_url(url, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self, virtual_account_id, data={}, **kwargs):
"""" Close Virtual Account from given Id Args: virtual_account_id : Id for which Virtual Account objects has to be Closed """ |
url = "{}/{}".format(self.base_url, virtual_account_id)
data['status'] = 'closed'
return self.patch_url(url, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def payments(self, virtual_account_id, data={}, **kwargs):
"""" Fetch Payment for Virtual Account Id Args: virtual_account_id : Id for which Virtual Account objects has to be retrieved Returns: Payment dict for given Virtual Account Id """ |
url = "{}/{}/payments".format(self.base_url, virtual_account_id)
return self.get_url(url, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, data={}, **kwargs):
"""" Fetch all Subscription entities Returns: Dictionary of Subscription data """ |
return super(Subscription, self).all(data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self, subscription_id, data={}, **kwargs):
"""" Fetch Subscription for given Id Args: subscription_id : Id for which subscription object is retrieved Returns: Subscription dict for given subscription Id """ |
return super(Subscription, self).fetch(subscription_id, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cancel(self, subscription_id, data={}, **kwargs):
""" Cancel subscription given by subscription_id Args: subscription_id : Id for which subscription has to be cancelled Returns: Subscription Dict for given subscription id """ |
url = "{}/{}/cancel".format(self.base_url, subscription_id)
return self.post_url(url, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, data={}, **kwargs):
"""" Fetch all Order entities Returns: Dictionary of Order data """ |
return super(Order, self).all(data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self, order_id, data={}, **kwargs):
"""" Fetch Order for given Id Args: order_id : Id for which order object has to be retrieved Returns: Order dict for given order Id """ |
return super(Order, self).fetch(order_id, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self, customer_id, data={}, **kwargs):
"""" Fetch Customer for given Id Args: customer_id : Id for which customer object has to be retrieved Returns: Order dict for given customer Id """ |
return super(Customer, self).fetch(customer_id, data, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def edit(self, customer_id, data={}, **kwargs):
"""" Edit Customer information from given dict Returns: Customer Dict which was edited """ |
url = '{}/{}'.format(self.base_url, customer_id)
return self.put_url(url, data, **kwargs) |