| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_summary(self, summary, global_step=None):
"""Adds a `Summary` protocol buffer to the event file. This method wraps the provided summary in an `Event` protocol buffer and adds it to the event file. Parameters summary : A `Summary` protocol buffer Optionally serialized as a string. global_step: Number Optional global step value to record with the summary. """ |
if isinstance(summary, bytes):
summ = summary_pb2.Summary()
summ.ParseFromString(summary)
summary = summ
# We strip metadata from values with tags that we have seen before in order
# to save space - we just store the metadata on the first value with a
# specific tag.
for value in summary.value:
if not value.metadata:
continue
if value.tag in self._seen_summary_tags:
# This tag has been encountered before. Strip the metadata.
value.ClearField("metadata")
continue
# We encounter a value with a tag we have not encountered previously. And
# it has metadata. Remember to strip metadata from future values with this
# tag string.
self._seen_summary_tags.add(value.tag)
event = event_pb2.Event(summary=summary)
self._add_event(event, global_step) |
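The tag-deduplication logic above can be sketched without protobuf: metadata survives only on the first value seen for each tag, with a set playing the role of `_seen_summary_tags`. This is a hypothetical stand-in using plain dicts, not the mxboard API.

```python
# Hypothetical sketch of the metadata-stripping pass above. Each value is a
# dict with 'tag' and 'metadata' keys standing in for a Summary.Value proto.
def strip_repeated_metadata(values, seen_tags):
    for value in values:
        if not value.get('metadata'):
            continue
        if value['tag'] in seen_tags:
            value['metadata'] = None  # stand-in for ClearField("metadata")
        else:
            seen_tags.add(value['tag'])
    return values

seen = set()
batch1 = strip_repeated_metadata([{'tag': 'loss', 'metadata': 'm'}], seen)
batch2 = strip_repeated_metadata([{'tag': 'loss', 'metadata': 'm'}], seen)
```

Across calls, only the first occurrence of a tag keeps its metadata; later occurrences are stripped to save space in the event file.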
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_graph(self, graph):
"""Adds a `Graph` protocol buffer to the event file.""" |
event = event_pb2.Event(graph_def=graph.SerializeToString())
self._add_event(event, None) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_scalar(self, tag, value, global_step=None):
"""Adds scalar data to the event file. Parameters tag : str Name for the scalar plot. value : float, tuple, list, or dict If value is a float, the corresponding curve would have no name attached in the plot. If value is a tuple or list, it must have two elements with the first one representing the name of the value and the second one as the float value. The name of the value will be attached to the corresponding curve in the plot. This is useful when users want to draw multiple curves in the same plot. It internally calls `_add_scalars`. If value is a dict, it's a mapping from strs to float values, with strs representing the names of the float values. This is convenient when users want to log a collection of float values with different names for visualizing them in the same plot without repeatedly calling `add_scalar` for each value. It internally calls `_add_scalars`. global_step : int Global step value to record. Examples -------- """ |
if isinstance(value, (tuple, list, dict)):
if isinstance(value, (tuple, list)):
if len(value) != 2:
raise ValueError('expected two elements in value, while received %d'
% len(value))
value = {value[0]: value[1]}
self._add_scalars(tag, value, global_step)
else:
self._file_writer.add_summary(scalar_summary(tag, value), global_step)
self._append_to_scalar_dict(self.get_logdir() + '/' + tag,
value, global_step, time.time()) |
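The dispatch on `value`'s type above can be isolated into a small standalone sketch (not the mxboard API): tuples and lists of length two become single-entry dicts, dicts pass through to the multi-curve path, and bare floats go to the single-curve path.

```python
# Minimal sketch of add_scalar's value normalization; the ('multi', ...) /
# ('single', ...) tags are hypothetical markers for the two dispatch paths.
def normalize_scalar_value(value):
    if isinstance(value, (tuple, list, dict)):
        if isinstance(value, (tuple, list)):
            if len(value) != 2:
                raise ValueError('expected two elements in value, '
                                 'while received %d' % len(value))
            value = {value[0]: value[1]}
        return ('multi', value)   # would be routed to _add_scalars
    return ('single', value)      # would be routed to scalar_summary
```

So `('train', 0.1)` and `{'train': 0.1}` end up on the same multi-curve path, which is why both forms can draw named curves in one plot.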
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_scalars(self, tag, scalar_dict, global_step=None):
"""Adds multiple scalars to summary. This enables drawing multiple curves in one plot. Parameters tag : str Name for the plot. scalar_dict : dict Values to be saved. global_step : int Global step value to record. """ |
timestamp = time.time()
fw_logdir = self._file_writer.get_logdir()
for scalar_name, scalar_value in scalar_dict.items():
fw_tag = fw_logdir + '/' + tag + '/' + scalar_name
if fw_tag in self._all_writers.keys():
fw = self._all_writers[fw_tag]
else:
fw = FileWriter(logdir=fw_tag, max_queue=self._max_queue,
flush_secs=self._flush_secs, filename_suffix=self._filename_suffix,
verbose=self._verbose)
self._all_writers[fw_tag] = fw
fw.add_summary(scalar_summary(tag, scalar_value), global_step)
self._append_to_scalar_dict(fw_tag, scalar_value, global_step, timestamp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_histogram(self, tag, values, global_step=None, bins='default'):
"""Add histogram data to the event file. Note: This function internally calls `asnumpy()` if `values` is an MXNet NDArray. Since `asnumpy()` is a blocking function call, this function would block the main thread till it returns. It may consequently affect the performance of async execution of the MXNet engine. Parameters tag : str Name for the `values`. values : MXNet `NDArray` or `numpy.ndarray` Values for building histogram. global_step : int Global step value to record. bins : int or sequence of scalars or str If `bins` is an int, it defines the number equal-width bins in the range `(values.min(), values.max())`. If `bins` is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin width. If `bins` is a str equal to 'default', it will use the bin distribution defined in TensorFlow for building histogram. Ref: https://www.tensorflow.org/programmers_guide/tensorboard_histograms The rest of supported strings for `bins` are 'auto', 'fd', 'doane', 'scott', 'rice', 'sturges', and 'sqrt'. etc. See the documentation of `numpy.histogram` for detailed definitions of those strings. https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html """ |
if bins == 'default':
bins = self._get_default_bins()
self._file_writer.add_summary(histogram_summary(tag, values, bins), global_step) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_image(self, tag, image, global_step=None):
"""Add image data to the event file. This function supports input as a 2D, 3D, or 4D image. If the input image is 2D, a channel axis is prepended as the first dimension and image will be replicated three times and concatenated along the channel axis. If the input image is 3D, it will be replicated three times and concatenated along the channel axis. If the input image is 4D, which is a batch images, all the images will be spliced as a sprite image for display. Note: This function requires the ``pillow`` package. Note: This function internally calls `asnumpy()` for MXNet `NDArray` inputs. Since `asnumpy()` is a blocking function call, this function would block the main thread till it returns. It may consequently affect the performance of async execution of the MXNet engine. Parameters tag : str Name for the `image`. image : MXNet `NDArray` or `numpy.ndarray` Image is one of the following formats: (H, W), (C, H, W), (N, C, H, W). If the input is a batch of images, a grid of images is made by stitching them together. If data type is float, values must be in range [0, 1], and then they are rescaled to range [0, 255]. Note that this does not change the values of the input `image`. A copy of the input `image` is created instead. If data type is 'uint8`, values are unchanged. global_step : int Global step value to record. """ |
self._file_writer.add_summary(image_summary(tag, image), global_step) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_audio(self, tag, audio, sample_rate=44100, global_step=None):
"""Add audio data to the event file. Note: This function internally calls `asnumpy()` for MXNet `NDArray` inputs. Since `asnumpy()` is a blocking function call, this function would block the main thread till it returns. It may consequently affect the performance of async execution of the MXNet engine. Parameters tag : str Name for the `audio`. audio : MXNet `NDArray` or `numpy.ndarray` Audio data squeezable to a 1D tensor. The values of the tensor are in the range `[-1, 1]`. sample_rate : int Sample rate in Hz. global_step : int Global step value to record. """ |
self._file_writer.add_summary(audio_summary(tag, audio, sample_rate=sample_rate),
global_step) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_text(self, tag, text, global_step=None):
"""Add text data to the event file. Parameters tag : str Name for the `text`. text : str Text to be saved to the event file. global_step : int Global step value to record. """ |
self._file_writer.add_summary(text_summary(tag, text), global_step)
if tag not in self._text_tags:
self._text_tags.append(tag)
extension_dir = self.get_logdir() + '/plugins/tensorboard_text/'
if not os.path.exists(extension_dir):
os.makedirs(extension_dir)
with open(extension_dir + 'tensors.json', 'w') as fp:
json.dump(self._text_tags, fp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_pr_curve(self, tag, labels, predictions, num_thresholds, global_step=None, weights=None):
"""Adds precision-recall curve. Note: This function internally calls `asnumpy()` for MXNet `NDArray` inputs. Since `asnumpy()` is a blocking function call, this function would block the main thread till it returns. It may consequently affect the performance of async execution of the MXNet engine. Parameters tag : str A tag attached to the summary. Used by TensorBoard for organization. labels : MXNet `NDArray` or `numpy.ndarray`. The ground truth values. A tensor of 0/1 values with arbitrary shape. predictions : MXNet `NDArray` or `numpy.ndarray`. A float32 tensor whose values are in the range `[0, 1]`. Dimensions must match those of `labels`. num_thresholds : int Number of thresholds, evenly distributed in `[0, 1]`, to compute PR metrics for. Should be `>= 2`. This value should be a constant integer value, not a tensor that stores an integer. The thresholds for computing the pr curves are calculated in the following way: `width = 1.0 / (num_thresholds - 1), global_step : int Global step value to record. weights : MXNet `NDArray` or `numpy.ndarray`. Optional float32 tensor. Individual counts are multiplied by this value. This tensor must be either the same shape as or broadcastable to the `labels` tensor. """ |
if num_thresholds < 2:
raise ValueError('num_thresholds must be >= 2')
labels = _make_numpy_array(labels)
predictions = _make_numpy_array(predictions)
self._file_writer.add_summary(pr_curve_summary(tag, labels, predictions,
num_thresholds, weights), global_step) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _rectangular(n):
"""Checks to see if a 2D list is a valid 2D matrix""" |
for i in n:
if len(i) != len(n[0]):
return False
return True |
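The rectangularity check above is equivalent to asserting that every row matches the first row's length; a compact standalone version (an empty outer list is vacuously rectangular):

```python
# Standalone equivalent of _rectangular: all rows share the first row's length.
def rectangular(n):
    return all(len(row) == len(n[0]) for row in n)
```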
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_2D_matrix(matrix):
"""Checks to see if a ndarray is 2D or a list of lists is 2D""" |
return ((isinstance(matrix[0], list) and _rectangular(matrix) and
not isinstance(matrix[0][0], list)) or
(not isinstance(matrix, list) and len(matrix.shape) == 2)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _save_image(image, filename, nrow=8, padding=2, square_image=True):
"""Saves a given Tensor into an image file. If the input tensor contains multiple images, a grid of images will be saved. Parameters image : `NDArray` Input image(s) in the format of HW, CHW, or NCHW. filename : str Filename of the saved image(s). nrow : int Number of images displayed in each row of the grid. The Final grid size is (batch_size / `nrow`, `nrow`) when square_image is False; otherwise, (`nrow`, `nrow`). padding : int Padding value for each image in the grid. square_image : bool If True, force the image grid to be strictly square. """ |
if not isinstance(image, NDArray):
raise TypeError('MXNet NDArray expected, received {}'.format(str(type(image))))
image = _prepare_image(image, nrow=nrow, padding=padding, square_image=square_image)
if Image is None:
raise ImportError('saving image failed because PIL is not found')
im = Image.fromarray(image.asnumpy())
im.save(filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _save_embedding_tsv(data, file_path):
"""Given a 2D `NDarray` or a `numpy.ndarray` as embeding, save it in tensors.tsv under the path provided by the user.""" |
if isinstance(data, np.ndarray):
data_list = data.tolist()
elif isinstance(data, NDArray):
data_list = data.asnumpy().tolist()
else:
raise TypeError('expected NDArray or np.ndarray, while received type {}'.format(
str(type(data))))
with open(os.path.join(file_path, 'tensors.tsv'), 'w') as f:
for x in data_list:
x = [str(i) for i in x]
f.write('\t'.join(x) + '\n') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_image(tensor):
"""Converts an NDArray type image to Image protobuf""" |
assert isinstance(tensor, NDArray)
if Image is None:
raise ImportError('need to install PIL for visualizing images')
height, width, channel = tensor.shape
tensor = _make_numpy_array(tensor)
image = Image.fromarray(tensor)
output = io.BytesIO()
image.save(output, format='PNG')
image_string = output.getvalue()
output.close()
return Summary.Image(height=height, width=width, colorspace=channel,
encoded_image_string=image_string) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pr_curve_summary(tag, labels, predictions, num_thresholds, weights=None):
"""Outputs a precision-recall curve `Summary` protocol buffer. Parameters tag : str A tag attached to the summary. Used by TensorBoard for organization. labels : MXNet `NDArray` or `numpy.ndarray`. The ground truth values. A tensor of 0/1 values with arbitrary shape. predictions : MXNet `NDArray` or `numpy.ndarray`. A float32 tensor whose values are in the range `[0, 1]`. Dimensions must match those of `labels`. num_thresholds : int Number of thresholds, evenly distributed in `[0, 1]`, to compute PR metrics for. Should be `>= 2`. This value should be a constant integer value, not a tensor that stores an integer. The thresholds for computing the pr curves are calculated in the following way: `width = 1.0 / (num_thresholds - 1), weights : MXNet `NDArray` or `numpy.ndarray`. Optional float32 tensor. Individual counts are multiplied by this value. This tensor must be either the same shape as or broadcastable to the `labels` tensor. Returns ------- A `Summary` protobuf of the pr_curve. """ |
# num_thresholds > 127 results in failure of creating protobuf,
# probably a bug of protobuf
if num_thresholds > 127:
logging.warning('num_thresholds>127 would result in failure of creating pr_curve protobuf,'
' clipping it at 127')
num_thresholds = 127
labels = _make_numpy_array(labels)
predictions = _make_numpy_array(predictions)
if weights is not None:
weights = _make_numpy_array(weights)
data = _compute_curve(labels, predictions, num_thresholds=num_thresholds, weights=weights)
pr_curve_plugin_data = PrCurvePluginData(version=0,
num_thresholds=num_thresholds).SerializeToString()
plugin_data = [SummaryMetadata.PluginData(plugin_name='pr_curves',
content=pr_curve_plugin_data)]
smd = SummaryMetadata(plugin_data=plugin_data)
tensor = TensorProto(dtype='DT_FLOAT',
float_val=data.reshape(-1).tolist(),
tensor_shape=TensorShapeProto(
dim=[TensorShapeProto.Dim(size=data.shape[0]),
TensorShapeProto.Dim(size=data.shape[1])]))
return Summary(value=[Summary.Value(tag=tag, metadata=smd, tensor=tensor)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_nodes_from_symbol(sym):
"""Given a symbol and shapes, return a list of `NodeDef`s for visualizing the the graph in TensorBoard.""" |
if not isinstance(sym, Symbol):
raise TypeError('sym must be an `mxnet.symbol.Symbol`,'
' received type {}'.format(str(type(sym))))
conf = json.loads(sym.tojson())
nodes = conf['nodes']
data2op = {} # key: data id, value: list of ops to whom data is an input
for i, node in enumerate(nodes):
if node['op'] != 'null': # node is an operator
input_list = node['inputs']
for idx in input_list:
if idx[0] == 0: # do not include 'data' node in the op scope
continue
if idx[0] in data2op:
# nodes[idx[0]] is a data as an input to op nodes[i]
data2op[idx[0]].append(i)
else:
data2op[idx[0]] = [i]
# In the following, we group data with operators they belong to
# by attaching them with operator names as scope names.
# The parameters with the operator name as the prefix will be
# assigned with the scope name of that operator. For example,
# a convolution op has name 'conv', while its weight and bias
# have name 'conv_weight' and 'conv_bias'. In the end, the operator
# has scope name 'conv' prepended to its name, i.e. 'conv/conv'.
# The parameters are named 'conv/conv_weight' and 'conv/conv_bias'.
node_defs = []
for i, node in enumerate(nodes):
node_name = node['name']
op_name = node['op']
kwargs = {'op': op_name, 'name': node_name}
if op_name != 'null': # node is an operator
inputs = []
input_list = node['inputs']
for idx in input_list:
input_node = nodes[idx[0]]
input_node_name = input_node['name']
if input_node['op'] != 'null':
inputs.append(_scoped_name(input_node_name, input_node_name))
elif idx[0] in data2op and len(data2op[idx[0]]) == 1 and data2op[idx[0]][0] == i:
# the data is only as an input to nodes[i], no else
inputs.append(_scoped_name(node_name, input_node_name))
else: # the data node has no scope name, e.g. 'data' as the input node
inputs.append(input_node_name)
kwargs['input'] = inputs
kwargs['name'] = _scoped_name(node_name, node_name)
elif i in data2op and len(data2op[i]) == 1:
# node is a data node belonging to one op, find out which operator this node belongs to
op_node_name = nodes[data2op[i][0]]['name']
kwargs['name'] = _scoped_name(op_node_name, node_name)
if 'attrs' in node:
# TensorBoard would escape quotation marks; replace them with spaces
attr = json.dumps(node['attrs'], sort_keys=True).replace("\"", ' ')
attr = {'param': AttrValue(s=attr.encode(encoding='utf-8'))}
kwargs['attr'] = attr
node_def = NodeDef(**kwargs)
node_defs.append(node_def)
return node_defs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def temporal_derivatives(order, variables, data):
""" Compute temporal derivative terms by the method of backwards differences. Parameters order: range or list(int) A list of temporal derivative terms to include. For instance, [1, 2] indicates that the first and second derivative terms should be added. To retain the original terms, 0 *must* be included in the list. variables: list(str) List of variables for which temporal derivative terms should be computed. data: pandas DataFrame object Table of values of all observations of all variables. Returns ------- variables_deriv: list A list of variables to include in the final data frame after adding the specified derivative terms. data_deriv: pandas DataFrame object Table of values of all observations of all variables, including any specified derivative terms. """ |
variables_deriv = OrderedDict()
data_deriv = OrderedDict()
if 0 in order:
data_deriv[0] = data[variables]
variables_deriv[0] = variables
order = set(order) - set([0])
for o in order:
variables_deriv[o] = ['{}_derivative{}'.format(v, o)
for v in variables]
data_deriv[o] = np.tile(np.nan, data[variables].shape)
data_deriv[o][o:, :] = np.diff(data[variables], n=o, axis=0)
variables_deriv = reduce((lambda x, y: x + y), variables_deriv.values())
data_deriv = pd.DataFrame(columns=variables_deriv,
data=np.concatenate([*data_deriv.values()],
axis=1))
return (variables_deriv, data_deriv) |
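The NaN-padding convention above (`data_deriv[o][o:, :] = np.diff(...)`) can be illustrated without numpy or pandas: an order-`o` backwards difference loses its first `o` samples, which are filled with NaN so the output keeps the input length. A minimal pure-Python sketch:

```python
# Plain-Python sketch of the backwards-difference padding used above: the
# o-th order difference shortens the series by o, so the head is NaN-padded.
def backward_differences(series, order):
    out = list(series)
    for _ in range(order):
        out = [b - a for a, b in zip(out, out[1:])]
    return [float('nan')] * order + out

d1 = backward_differences([1.0, 3.0, 6.0, 10.0], 1)
```

For the input `[1, 3, 6, 10]`, the first derivative column is `[nan, 2, 3, 4]`, matching `np.diff` with a leading NaN.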
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exponential_terms(order, variables, data):
""" Compute exponential expansions. Parameters order: range or list(int) A list of exponential terms to include. For instance, [1, 2] indicates that the first and second exponential terms should be added. To retain the original terms, 1 *must* be included in the list. variables: list(str) List of variables for which exponential terms should be computed. data: pandas DataFrame object Table of values of all observations of all variables. Returns ------- variables_exp: list A list of variables to include in the final data frame after adding the specified exponential terms. data_exp: pandas DataFrame object Table of values of all observations of all variables, including any specified exponential terms. """ |
variables_exp = OrderedDict()
data_exp = OrderedDict()
if 1 in order:
data_exp[1] = data[variables]
variables_exp[1] = variables
order = set(order) - set([1])
for o in order:
variables_exp[o] = ['{}_power{}'.format(v, o) for v in variables]
data_exp[o] = data[variables]**o
variables_exp = reduce((lambda x, y: x + y), variables_exp.values())
data_exp = pd.DataFrame(columns=variables_exp,
data=np.concatenate([*data_exp.values()], axis=1))
return (variables_exp, data_exp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _order_as_range(order):
"""Convert a hyphenated string representing order for derivative or exponential terms into a range object that can be passed as input to the appropriate expansion function.""" |
order = order.split('-')
order = [int(o) for o in order]
if len(order) > 1:
order = range(order[0], (order[-1] + 1))
return order |
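A standalone sketch of the hyphen-splitting behavior above: `'4-6'` becomes an inclusive `range(4, 7)`, while a single number stays a one-element list.

```python
# Standalone equivalent of _order_as_range.
def order_as_range(order):
    parts = [int(o) for o in order.split('-')]
    if len(parts) > 1:
        return range(parts[0], parts[-1] + 1)
    return parts
```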
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_and_expand_exponential(expr, variables, data):
"""Check if the current operation specifies exponential expansion. ^^6 specifies all powers up to the 6th, ^5-6 the 5th and 6th powers, ^6 the 6th only.""" |
if re.search(r'\^\^[0-9]+$', expr):
order = re.compile(r'\^\^([0-9]+)$').findall(expr)
order = range(1, int(*order) + 1)
variables, data = exponential_terms(order, variables, data)
elif re.search(r'\^[0-9]+[\-]?[0-9]*$', expr):
order = re.compile(r'\^([0-9]+[\-]?[0-9]*)').findall(expr)
order = _order_as_range(*order)
variables, data = exponential_terms(order, variables, data)
return variables, data |
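The two exponent spellings parsed above can be demonstrated in isolation. This hypothetical helper returns only the list of powers implied by the suffix ('^^N' means all powers 1..N; '^M-N' and '^N' name explicit powers), not the expanded data:

```python
import re

# Hypothetical parser for the exponent suffixes handled above.
def parse_exponent_spec(expr):
    if re.search(r'\^\^[0-9]+$', expr):
        n = int(re.findall(r'\^\^([0-9]+)$', expr)[0])
        return list(range(1, n + 1))
    match = re.findall(r'\^([0-9]+)-([0-9]+)$', expr)
    if match:
        lo, hi = map(int, match[0])
        return list(range(lo, hi + 1))
    match = re.findall(r'\^([0-9]+)$', expr)
    if match:
        return [int(match[0])]
    return []
```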
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_and_expand_derivative(expr, variables, data):
"""Check if the current operation specifies a temporal derivative. dd6x specifies all derivatives up to the 6th, d5-6x the 5th and 6th, d6x the 6th only.""" |
if re.search(r'^dd[0-9]+', expr):
order = re.compile(r'^dd([0-9]+)').findall(expr)
order = range(0, int(*order) + 1)
(variables, data) = temporal_derivatives(order, variables, data)
elif re.search(r'^d[0-9]+[\-]?[0-9]*', expr):
order = re.compile(r'^d([0-9]+[\-]?[0-9]*)').findall(expr)
order = _order_as_range(*order)
(variables, data) = temporal_derivatives(order, variables, data)
return variables, data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_and_expand_subformula(expression, parent_data, variables, data):
"""Check if the current operation contains a suboperation, and parse it where appropriate.""" |
grouping_depth = 0
for i, char in enumerate(expression):
if char == '(':
if grouping_depth == 0:
formula_delimiter = i + 1
grouping_depth += 1
elif char == ')':
grouping_depth -= 1
if grouping_depth == 0:
expr = expression[formula_delimiter:i].strip()
return parse_formula(expr, parent_data)
return variables, data |
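The depth-tracking scan above, stripped of the recursive `parse_formula` call, reduces to extracting the contents of the first outermost parenthesized group while ignoring nested parentheses:

```python
# Standalone sketch of the grouping-depth scan: return the contents of the
# first top-level (...) group, or None if the expression has no group.
def first_toplevel_group(expression):
    depth = 0
    start = None
    for i, char in enumerate(expression):
        if char == '(':
            if depth == 0:
                start = i + 1
            depth += 1
        elif char == ')':
            depth -= 1
            if depth == 0:
                return expression[start:i].strip()
    return None
```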
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_expression(expression, parent_data):
""" Parse an expression in a model formula. Parameters expression: str Formula expression: either a single variable or a variable group paired with an operation (exponentiation or differentiation). parent_data: pandas DataFrame The source data for the model expansion. Returns ------- variables: list A list of variables in the provided formula expression. data: pandas DataFrame A tabulation of all terms in the provided formula expression. """ |
variables = None
data = None
variables, data = _check_and_expand_subformula(expression,
parent_data,
variables,
data)
variables, data = _check_and_expand_exponential(expression,
variables,
data)
variables, data = _check_and_expand_derivative(expression,
variables,
data)
if variables is None:
expr = expression.strip()
variables = [expr]
data = parent_data[expr]
return variables, data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _expand_shorthand(model_formula, variables):
"""Expand shorthand terms in the model formula. """ |
wm = 'white_matter'
gsr = 'global_signal'
rps = 'trans_x + trans_y + trans_z + rot_x + rot_y + rot_z'
fd = 'framewise_displacement'
acc = _get_matches_from_data('a_comp_cor_[0-9]+', variables)
tcc = _get_matches_from_data('t_comp_cor_[0-9]+', variables)
dv = _get_matches_from_data('^std_dvars$', variables)
dvall = _get_matches_from_data('.*dvars', variables)
nss = _get_matches_from_data('non_steady_state_outlier[0-9]+',
variables)
spikes = _get_matches_from_data('motion_outlier[0-9]+', variables)
model_formula = re.sub('wm', wm, model_formula)
model_formula = re.sub('gsr', gsr, model_formula)
model_formula = re.sub('rps', rps, model_formula)
model_formula = re.sub('fd', fd, model_formula)
model_formula = re.sub('acc', acc, model_formula)
model_formula = re.sub('tcc', tcc, model_formula)
# 'dvall' must be substituted before 'dv': otherwise the shorter key
# rewrites the 'dv' inside 'dvall' and the 'dvall' shorthand never matches.
model_formula = re.sub('dvall', dvall, model_formula)
model_formula = re.sub('dv', dv, model_formula)
model_formula = re.sub('nss', nss, model_formula)
model_formula = re.sub('spikes', spikes, model_formula)
formula_variables = _get_variables_from_formula(model_formula)
others = ' + '.join(set(variables) - set(formula_variables))
model_formula = re.sub('others', others, model_formula)
return model_formula |
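The substitutions above are order-sensitive: a shorter shorthand that is a substring of a longer one ('dv' inside 'dvall') will clobber it if applied first. A minimal standalone demonstration with a hypothetical substitution table (the replacement strings here are illustrative, not the real confound names):

```python
import re

# Substitute longest keys first so no key clobbers a longer one.
def expand(formula, table):
    for key in sorted(table, key=len, reverse=True):
        formula = re.sub(key, table[key], formula)
    return formula

out = expand('dvall + wm', {'dv': 'std_dvars', 'dvall': 'ALL_DVARS',
                            'wm': 'white_matter'})
```

Sorting by key length is a simple defensive choice; word-boundary patterns (`r'\bdv\b'`) would be an alternative.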
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _unscramble_regressor_columns(parent_data, data):
"""Reorder the columns of a confound matrix such that the columns are in the same order as the input data with any expansion columns inserted immediately after the originals. """ |
matches = ['_power[0-9]+', '_derivative[0-9]+']
var = OrderedDict((c, deque()) for c in parent_data.columns)
for c in data.columns:
col = c
for m in matches:
col = re.sub(m, '', col)
if col == c:
var[col].appendleft(c)
else:
var[col].append(c)
unscrambled = reduce((lambda x, y: x + y), var.values())
return data[[*unscrambled]] |
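The deque trick above is worth seeing on plain lists: expansion columns (`_power*`, `_derivative*`) are appended after their parent, while the parent column itself is pushed to the front of its group with `appendleft`. A pandas-free sketch operating on column-name lists:

```python
import re
from collections import OrderedDict, deque

# Plain-list sketch of the unscrambling rule above.
def unscramble(parent_cols, cols):
    var = OrderedDict((c, deque()) for c in parent_cols)
    for c in cols:
        base = re.sub('(_power[0-9]+|_derivative[0-9]+)', '', c)
        if base == c:
            var[base].appendleft(c)  # the original column leads its group
        else:
            var[base].append(c)      # expansions follow their parent
    return [c for group in var.values() for c in group]

order = unscramble(['x', 'y'], ['y', 'x_derivative1', 'x', 'y_power2'])
```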
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_formula(model_formula, parent_data, unscramble=False):
""" Recursively parse a model formula by breaking it into additive atoms and tracking grouping symbol depth. Parameters model_formula: str Expression for the model formula, e.g. '(a + b)^^2 + dd1(c + (d + e)^3) + f' Note that any expressions to be expanded *must* be in parentheses, even if they include only a single variable (e.g., (x)^2, not x^2). parent_data: pandas DataFrame A tabulation of all values usable in the model formula. Each additive term in `model_formula` should correspond either to a variable in this data frame or to instructions for operating on a variable (for instance, computing temporal derivatives or exponential terms). Temporal derivative options: * d6(variable) for the 6th temporal derivative * dd6(variable) for all temporal derivatives up to the 6th * d4-6(variable) for the 4th through 6th temporal derivatives * 0 must be included in the temporal derivative range for the original term to be returned when temporal derivatives are computed. Exponential options: * (variable)^6 for the 6th power * (variable)^^6 for all powers up to the 6th * (variable)^4-6 for the 4th through 6th powers * 1 must be included in the powers range for the original term to be returned when exponential terms are computed. Temporal derivatives and exponential terms are computed for all terms in the grouping symbols that they adjoin. Returns ------- variables: list(str) A list of variables included in the model parsed from the provided formula. data: pandas DataFrame All values in the complete model. """ |
variables = {}
data = {}
expr_delimiter = 0
grouping_depth = 0
model_formula = _expand_shorthand(model_formula, parent_data.columns)
for i, char in enumerate(model_formula):
if char == '(':
grouping_depth += 1
elif char == ')':
grouping_depth -= 1
elif grouping_depth == 0 and char == '+':
expression = model_formula[expr_delimiter:i].strip()
variables[expression] = None
data[expression] = None
expr_delimiter = i + 1
expression = model_formula[expr_delimiter:].strip()
variables[expression] = None
data[expression] = None
for expression in list(variables):
if expression[0] == '(' and expression[-1] == ')':
(variables[expression],
data[expression]) = parse_formula(expression[1:-1],
parent_data)
else:
(variables[expression],
data[expression]) = parse_expression(expression,
parent_data)
variables = list(set(reduce((lambda x, y: x + y), variables.values())))
data = pd.concat((data.values()), axis=1)
if unscramble:
data = _unscramble_regressor_columns(parent_data, data)
return variables, data |
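The core of the parser above is the split into additive atoms: a '+' only delimits terms when the grouping depth is zero, so parenthesized subformulas survive intact. Isolated as a standalone sketch:

```python
# Standalone sketch of parse_formula's tokenization: split on '+' only at
# grouping depth zero.
def split_additive_atoms(model_formula):
    atoms = []
    expr_delimiter = 0
    depth = 0
    for i, char in enumerate(model_formula):
        if char == '(':
            depth += 1
        elif char == ')':
            depth -= 1
        elif depth == 0 and char == '+':
            atoms.append(model_formula[expr_delimiter:i].strip())
            expr_delimiter = i + 1
    atoms.append(model_formula[expr_delimiter:].strip())
    return atoms

atoms = split_additive_atoms('(a + b)^^2 + dd1(c + d) + f')
```

Each atom is then either recursed into (if fully parenthesized) or handed to `parse_expression`.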
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mask(in_file, mask_file, new_name):
""" Apply a binary mask to an image. Parameters in_file : str Path to a NIfTI file to mask mask_file : str Path to a binary mask new_name : str Path/filename for the masked output image. Returns ------- str Absolute path of the masked output image. Notes ----- in_file and mask_file must be in the same image space and have the same dimensions. """ |
import nibabel as nb
import os
# Load the input image
in_nii = nb.load(in_file)
# Load the mask image
mask_nii = nb.load(mask_file)
# Set all non-mask voxels in the input file to zero.
data = in_nii.get_data()
data[mask_nii.get_data() == 0] = 0
# Save the new masked image.
new_nii = nb.Nifti1Image(data, in_nii.affine, in_nii.header)
new_nii.to_filename(new_name)
return os.path.abspath(new_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_cfm(in_file, lesion_mask=None, global_mask=True, out_path=None):
""" Create a mask to constrain registration. Parameters in_file : str Path to an existing image (usually a mask). If global_mask = True, this is used as a size/dimension reference. out_path : str Path/filename for the new cost function mask. lesion_mask : str, optional Path to an existing binary lesion mask. global_mask : bool Create a whole-image mask (True) or limit to reference mask (False) A whole image-mask is 1 everywhere Returns ------- str Absolute path of the new cost function mask. Notes ----- in_file and lesion_mask must be in the same image space and have the same dimensions """ |
import os
import numpy as np
import nibabel as nb
from nipype.utils.filemanip import fname_presuffix
if out_path is None:
out_path = fname_presuffix(in_file, suffix='_cfm', newpath=os.getcwd())
else:
out_path = os.path.abspath(out_path)
if not global_mask and not lesion_mask:
NIWORKFLOWS_LOG.warning(
'No lesion mask was provided and global_mask not requested, '
'therefore the original mask will not be modified.')
# Load the input image
in_img = nb.load(in_file)
# If we want a global mask, create one based on the input image.
data = np.ones(in_img.shape, dtype=np.uint8) if global_mask else in_img.get_data()
if set(np.unique(data)) - {0, 1}:
raise ValueError("`global_mask` must be true if `in_file` is not a binary mask")
# If a lesion mask was provided, combine it with the secondary mask.
if lesion_mask is not None:
# Reorient the lesion mask and get the data.
lm_img = nb.as_closest_canonical(nb.load(lesion_mask))
# Subtract lesion mask from secondary mask, set negatives to 0
data = np.fmax(data - lm_img.get_data(), 0)
# Cost function mask will be created from subtraction
# Otherwise, CFM will be created from global mask
cfm_img = nb.Nifti1Image(data, in_img.affine, in_img.header)
# Save the cost function mask.
cfm_img.set_data_dtype(np.uint8)
cfm_img.to_filename(out_path)
return out_path |
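The lesion-subtraction step can be illustrated in isolation with small NumPy arrays (hypothetical masks, no files):

```python
import numpy as np

# A global mask of ones and a hypothetical binary lesion mask.
global_mask = np.ones((2, 3), dtype=np.int16)
lesion = np.array([[0, 1, 0],
                   [1, 0, 0]], dtype=np.int16)

# As in `create_cfm`: subtract the lesion and clip negatives to zero,
# so lesioned voxels are excluded from the cost function mask.
cfm = np.fmax(global_mask - lesion, 0)
```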
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_settings(self):
""" Return any settings defined by the user, as well as any pre-defined settings files that exist for the image modalities to be registered. """ |
# If user-defined settings exist...
if isdefined(self.inputs.settings):
# Note this in the log and return those settings.
NIWORKFLOWS_LOG.info('User-defined settings, overriding defaults')
return self.inputs.settings
# Define a prefix for output files based on the modality of the moving image.
filestart = '{}-mni_registration_{}_'.format(
self.inputs.moving.lower(), self.inputs.flavor)
# Get a list of settings files that match the flavor.
filenames = [i for i in pkgr.resource_listdir('niworkflows', 'data')
if i.startswith(filestart) and i.endswith('.json')]
# Return the settings files.
return [pkgr.resource_filename('niworkflows.data', f)
for f in sorted(filenames)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_ants_args(self):
args = {'moving_image': self.inputs.moving_image, 'num_threads': self.inputs.num_threads, 'float': self.inputs.float, 'terminal_output': 'file', 'write_composite_transform': True, 'initial_moving_transform': self.inputs.initial_moving_transform} """ Moving image handling - The following truth table maps out the intended action sequence. Future refactoring may more directly encode this. moving_mask and lesion_mask are files True = file False = None | moving_mask | explicit_masking | lesion_mask | action | True | True | True | Update `moving_image` after applying | | | | mask. | | | | Set `moving_image_masks` applying | | | | `create_cfm` with `global_mask=True`. | True | True | False | Update `moving_image` after applying | | | | mask. | True | False | True | Set `moving_image_masks` applying | | | | `create_cfm` with `global_mask=False` | True | False | False | args['moving_image_masks'] = moving_mask | False | * | True | Set `moving_image_masks` applying | | | | `create_cfm` with `global_mask=True` | False | * | False | No action """ |
# If a moving mask is provided...
if isdefined(self.inputs.moving_mask):
# If explicit masking is enabled...
if self.inputs.explicit_masking:
# Mask the moving image.
# Do not use a moving mask during registration.
args['moving_image'] = mask(
self.inputs.moving_image,
self.inputs.moving_mask,
"moving_masked.nii.gz")
# If explicit masking is disabled...
else:
# Use the moving mask during registration.
# Do not mask the moving image.
args['moving_image_masks'] = self.inputs.moving_mask
# If a lesion mask is also provided...
if isdefined(self.inputs.lesion_mask):
# Create a cost function mask with the form:
# [global mask - lesion mask] (if explicit masking is enabled)
# [moving mask - lesion mask] (if explicit masking is disabled)
# Use this as the moving mask.
args['moving_image_masks'] = create_cfm(
self.inputs.moving_mask,
lesion_mask=self.inputs.lesion_mask,
global_mask=self.inputs.explicit_masking)
# If no moving mask is provided...
# But a lesion mask *IS* provided...
elif isdefined(self.inputs.lesion_mask):
# Create a cost function mask with the form: [global mask - lesion mask]
# Use this as the moving mask.
args['moving_image_masks'] = create_cfm(
self.inputs.moving_image,
lesion_mask=self.inputs.lesion_mask,
global_mask=True)
"""
Reference image handling - The following truth table maps out the intended action
sequence. Future refactoring may more directly encode this.
reference_mask and lesion_mask are files
True = file
False = None
| reference_mask | explicit_masking | lesion_mask | action
|----------------|------------------|-------------|----------------------------------------
| True | True | True | Update `fixed_image` after applying
| | | | mask.
| | | | Set `fixed_image_masks` applying
| | | | `create_cfm` with `global_mask=True`.
|----------------|------------------|-------------|----------------------------------------
| True | True | False | Update `fixed_image` after applying
| | | | mask.
|----------------|------------------|-------------|----------------------------------------
| True | False | True | Set `fixed_image_masks` applying
| | | | `create_cfm` with `global_mask=False`
|----------------|------------------|-------------|----------------------------------------
| True | False | False | args['fixed_image_masks'] = fixed_mask
|----------------|------------------|-------------|----------------------------------------
| False | * | True | Set `fixed_image_masks` applying
| | | | `create_cfm` with `global_mask=True`
|----------------|------------------|-------------|----------------------------------------
| False | * | False | No action
"""
# If a reference image is provided...
if isdefined(self.inputs.reference_image):
# Use the reference image as the fixed image.
args['fixed_image'] = self.inputs.reference_image
# If a reference mask is provided...
if isdefined(self.inputs.reference_mask):
# If explicit masking is enabled...
if self.inputs.explicit_masking:
# Mask the reference image.
# Do not use a fixed mask during registration.
args['fixed_image'] = mask(
self.inputs.reference_image,
self.inputs.reference_mask,
"fixed_masked.nii.gz")
# If a lesion mask is also provided...
if isdefined(self.inputs.lesion_mask):
# Create a cost function mask with the form: [global mask]
# Use this as the fixed mask.
args['fixed_image_masks'] = create_cfm(
self.inputs.reference_mask,
lesion_mask=None,
global_mask=True)
# If a reference mask is provided...
# But explicit masking is disabled...
else:
# Use the reference mask as the fixed mask during registration.
# Do not mask the fixed image.
args['fixed_image_masks'] = self.inputs.reference_mask
# If no reference mask is provided...
# But a lesion mask *IS* provided ...
elif isdefined(self.inputs.lesion_mask):
# Create a cost function mask with the form: [global mask]
# Use this as the fixed mask
args['fixed_image_masks'] = create_cfm(
self.inputs.reference_image,
lesion_mask=None,
global_mask=True)
# If no reference image is provided, fall back to the default template.
else:
# Raise an error if the user specifies an unsupported image orientation.
if self.inputs.orientation == 'LAS':
raise NotImplementedError
# Set the template resolution.
resolution = self.inputs.template_resolution
# Get the template specified by the user.
ref_template = get_template(self.inputs.template, resolution=resolution,
desc=None, suffix=self.inputs.reference)
ref_mask = get_template(self.inputs.template, resolution=resolution,
desc='brain', suffix='mask')
# Default is explicit masking disabled
args['fixed_image'] = str(ref_template)
# Use the template mask as the fixed mask.
args['fixed_image_masks'] = str(ref_mask)
# Overwrite defaults if explicit masking
if self.inputs.explicit_masking:
# Mask the template image with the template mask.
args['fixed_image'] = mask(str(ref_template), str(ref_mask),
"fixed_masked.nii.gz")
# Do not use a fixed mask during registration.
args.pop('fixed_image_masks', None)
# If a lesion mask is provided...
if isdefined(self.inputs.lesion_mask):
# Create a cost function mask with the form: [global mask]
# Use this as the fixed mask.
args['fixed_image_masks'] = create_cfm(
str(ref_mask), lesion_mask=None, global_mask=True)
return args |
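The moving-image truth table in the docstring can be condensed into a small decision helper — a readability sketch only; `moving_action` is a made-up name, not part of the interface:

```python
def moving_action(has_mask, explicit, has_lesion):
    """Mirror the moving-image truth table from _get_ants_args."""
    if has_mask:
        if explicit:
            # Mask the moving image; add a global CFM only if a lesion exists.
            return 'mask image + global cfm' if has_lesion else 'mask image'
        # No explicit masking: the mask (minus any lesion) constrains registration.
        return 'cfm from moving mask' if has_lesion else 'use mask directly'
    return 'global cfm' if has_lesion else 'no action'
```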
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dollars_to_math(source):
r""" Replace dollar signs with backticks. More precisely, do a regular expression search. Replace a plain dollar sign ($) by a backtick (`). Replace an escaped dollar sign (\$) by a dollar sign ($). Don't change a dollar sign preceded or followed by a backtick (`$ or $`), because of strings like "``$HOME``". Don't make any changes on lines starting with spaces, because those are indented and hence part of a block of code or examples. This also doesn't replaces dollar signs enclosed in curly braces, to avoid nested math environments, such as :: $f(n) = 0 \text{ if $n$ is prime}$ Thus the above line would get changed to `f(n) = 0 \text{ if $n$ is prime}` """ |
s = "\n".join(source)
if s.find("$") == -1:
return
# This searches for "$blah$" inside a pair of curly braces --
# don't change these, since they're probably coming from a nested
# math environment. So for each match, we replace it with a temporary
# string, and later on we substitute the original back.
global _data
_data = {}
def repl(matchobj):
global _data
s = matchobj.group(0)
t = "___XXX_REPL_%d___" % len(_data)
_data[t] = s
return t
s = re.sub(r"({[^{}$]*\$[^{}$]*\$[^{}]*})", repl, s)
# matches $...$
dollars = re.compile(r"(?<!\$)(?<!\\)\$([^\$]+?)\$")
# regular expression for \$
slashdollar = re.compile(r"\\\$")
s = dollars.sub(r":math:`\1`", s)
s = slashdollar.sub(r"$", s)
# change the original {...} things in:
for r in _data:
s = s.replace(r, _data[r])
# now save results in "source"
source[:] = [s] |
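The two core substitutions can be exercised on their own; the example string below is made up:

```python
import re

# Same patterns as in dollars_to_math.
dollars = re.compile(r"(?<!\$)(?<!\\)\$([^\$]+?)\$")   # $...$ -> :math:`...`
slashdollar = re.compile(r"\\\$")                      # \$ -> $

s = r"The cost is \$5 and the formula is $x^2 + y$."
s = dollars.sub(r":math:`\1`", s)
s = slashdollar.sub(r"$", s)
# s == "The cost is $5 and the formula is :math:`x^2 + y`."
```

The negative lookbehinds keep the escaped `\$5` from being mistaken for the start of a math span.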
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _post_run_hook(self, runtime):
""" there is not inner interface to run """ |
self._fixed_image = self.inputs.after
self._moving_image = self.inputs.before
self._contour = self.inputs.wm_seg if isdefined(self.inputs.wm_seg) else None
NIWORKFLOWS_LOG.info(
'Report - setting before (%s) and after (%s) images',
self._fixed_image, self._moving_image)
return super(SimpleBeforeAfterRPT, self)._post_run_hook(runtime) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_data_path(data_dir=None):
""" Get data storage directory data_dir: str Path of the data directory. Used to force data storage in a specified location. :returns: a list of paths where the dataset could be stored, ordered by priority """ |
data_dir = data_dir or ''
default_dirs = [Path(d).expanduser().resolve()
for d in os.getenv('CRN_SHARED_DATA', '').split(os.pathsep)
if d.strip()]
default_dirs += [Path(d).expanduser().resolve()
for d in os.getenv('CRN_DATA', '').split(os.pathsep)
if d.strip()]
default_dirs += [NIWORKFLOWS_CACHE_DIR]
return [Path(d).expanduser()
for d in data_dir.split(os.pathsep) if d.strip()] or default_dirs |
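The environment-variable handling reduces to splitting on `os.pathsep` and dropping empty entries; a standalone sketch with a made-up value:

```python
import os
from pathlib import Path

# Hypothetical value such as might be read from CRN_SHARED_DATA.
raw = os.pathsep.join(['/opt/data', '', '/srv/cache'])

# Same filtering as in _get_data_path: split, skip blanks, expand each entry.
dirs = [Path(d).expanduser() for d in raw.split(os.pathsep) if d.strip()]
```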
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_dataset(dataset_name, dataset_prefix=None, data_dir=None, default_paths=None, verbose=1):
""" Create if necessary and returns data directory of given dataset. data_dir: str Path of the data directory. Used to force data storage in a specified location. default_paths: list(str) Default system paths in which the dataset may already have been installed by a third party software. They will be checked first. verbose: int verbosity level (0 means no message). :returns: the path of the given dataset directory. :rtype: str .. note:: This function retrieves the datasets directory (or data directory) using the following priority : 1. defaults system paths 2. the keyword argument data_dir 3. the global environment variable CRN_SHARED_DATA 4. the user environment variable CRN_DATA 5. ~/.cache/stanford-crn in the user home folder """ |
dataset_folder = dataset_name if not dataset_prefix \
else '%s%s' % (dataset_prefix, dataset_name)
default_paths = default_paths or ''
paths = [p / dataset_folder for p in _get_data_path(data_dir)]
all_paths = [Path(p) / dataset_folder
for p in default_paths.split(os.pathsep)] + paths
# Check if the dataset folder exists somewhere and is not empty
for path in all_paths:
if path.is_dir() and list(path.iterdir()):
if verbose > 1:
NIWORKFLOWS_LOG.info(
'Dataset "%s" already cached in %s', dataset_name, path)
return path, True
for path in paths:
if verbose > 0:
NIWORKFLOWS_LOG.info(
'Dataset "%s" not cached, downloading to %s', dataset_name, path)
path.mkdir(parents=True, exist_ok=True)
return path, False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _md5_sum_file(path):
""" Calculates the MD5 sum of a file. """ |
with Path(path).open('rb') as fhandle:
md5sum = hashlib.md5()
while True:
data = fhandle.read(8192)
if not data:
break
md5sum.update(data)
return md5sum.hexdigest() |
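A quick check that the 8192-byte chunked loop produces the same digest as hashing the whole payload at once (temporary file, hypothetical contents):

```python
import hashlib
import tempfile
from pathlib import Path

payload = b'niworkflows' * 4096  # ~44 KiB, forces several 8192-byte chunks
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    path = tmp.name

# Chunked digest, as in _md5_sum_file: constant memory even for large files.
md5 = hashlib.md5()
with Path(path).open('rb') as fhandle:
    while True:
        data = fhandle.read(8192)
        if not data:
            break
        md5.update(data)

Path(path).unlink()  # clean up the temporary file
assert md5.hexdigest() == hashlib.md5(payload).hexdigest()
```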
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _chunk_report_(bytes_so_far, total_size, initial_size, t_0):
"""Show downloading percentage. :param int bytes_so_far: number of downloaded bytes :param int total_size: total size of the file (may be 0/None, depending on download method). :param int t_0: the time in seconds (as returned by time.time()) at which the download was resumed / started. :param int initial_size: if resuming, indicate the initial size of the file. If not resuming, set to zero. """ |
if not total_size:
sys.stderr.write("\rDownloaded {0:d} of ? bytes.".format(bytes_so_far))
else:
# Estimate remaining download time
total_percent = float(bytes_so_far) / total_size
current_download_size = bytes_so_far - initial_size
bytes_remaining = total_size - bytes_so_far
delta_t = time.time() - t_0
download_rate = current_download_size / max(1e-8, float(delta_t))
# Minimum rate of 0.01 bytes/s, to avoid dividing by zero.
time_remaining = bytes_remaining / max(0.01, download_rate)
# Trailing whitespace is to erase extra char when message length
# varies
sys.stderr.write(
"\rDownloaded {0:d} of {1:d} bytes ({2:.1f}%, {3!s} remaining)".format(
bytes_so_far, total_size, total_percent * 100, _format_time(time_remaining))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refine_aseg(aseg, ball_size=4):
""" First step to reconcile ANTs' and FreeSurfer's brain masks. Here, the ``aseg.mgz`` mask from FreeSurfer is refined in two steps, using binary morphological operations: 1. With a binary closing operation the sulci are included into the mask. This results in a smoother brain mask that does not exclude deep, wide sulci. 2. Fill any holes (typically, there could be a hole next to the pineal gland and the corpora quadrigemina if the great cerebral brain is segmented out). """ |
# Read aseg data
bmask = aseg.copy()
bmask[bmask > 0] = 1
bmask = bmask.astype(np.uint8)
# Morphological operations
selem = sim.ball(ball_size)
newmask = sim.binary_closing(bmask, selem)
newmask = binary_fill_holes(newmask.astype(np.uint8), selem).astype(np.uint8)
return newmask.astype(np.uint8) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def grow_mask(anat, aseg, ants_segs=None, ww=7, zval=2.0, bw=4):
""" Grow mask including pixels that have a high likelihood. GM tissue parameters are sampled in image patches of ``ww`` size. This is inspired by mindboggle's solution to the problem: https://github.com/nipy/mindboggle/blob/master/mindboggle/guts/segment.py#L1660 """ |
selem = sim.ball(bw)
if ants_segs is None:
ants_segs = np.zeros_like(aseg, dtype=np.uint8)
aseg[aseg == 42] = 3 # Collapse both hemispheres
gm = anat.copy()
gm[aseg != 3] = 0
refined = refine_aseg(aseg)
newrefmask = sim.binary_dilation(refined, selem) - refined
indices = np.argwhere(newrefmask > 0)
for pixel in indices:
# When ATROPOS identified the pixel as GM, set and carry on
if ants_segs[tuple(pixel)] == 2:
refined[tuple(pixel)] = 1
continue
window = gm[
pixel[0] - ww:pixel[0] + ww,
pixel[1] - ww:pixel[1] + ww,
pixel[2] - ww:pixel[2] + ww
]
if np.any(window > 0):
mu = window[window > 0].mean()
sigma = max(window[window > 0].std(), 1.e-5)
zstat = abs(anat[tuple(pixel)] - mu) / sigma
refined[tuple(pixel)] = int(zstat < zval)
refined = sim.binary_opening(refined, selem)
return refined |
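The per-voxel decision in the loop — keep a boundary voxel when its intensity is within `zval` standard deviations of the local GM mean — can be isolated with made-up window values:

```python
import numpy as np

# Hypothetical intensities from a local GM window; zeros are non-GM voxels.
window = np.array([0.0, 0.0, 98.0, 101.0, 102.0, 99.0])
voxel_value, zval = 100.0, 2.0

# As in grow_mask: statistics over the nonzero window voxels only.
mu = window[window > 0].mean()
sigma = max(window[window > 0].std(), 1.e-5)
zstat = abs(voxel_value - mu) / sigma
keep = int(zstat < zval)  # 1 -> voxel added to the refined mask
```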
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def medial_wall_to_nan(in_file, subjects_dir, target_subject, newpath=None):
""" Convert values on medial wall to NaNs """ |
import nibabel as nb
import numpy as np
import os
fn = os.path.basename(in_file)
if not target_subject.startswith('fs'):
return in_file
cortex = nb.freesurfer.read_label(os.path.join(
subjects_dir, target_subject, 'label', '{}.cortex.label'.format(fn[:2])))
func = nb.load(in_file)
medial = np.delete(np.arange(len(func.darrays[0].data)), cortex)
for darray in func.darrays:
darray.data[medial] = np.nan
out_file = os.path.join(newpath or os.getcwd(), fn)
func.to_filename(out_file)
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_link_node(rawtext, app, type, slug, options):
"""Create a link to a github resource. :param rawtext: Text being replaced with link node. :param app: Sphinx application context :param type: Link type (issues, changeset, etc.) :param slug: ID of the thing to link to :param options: Options dictionary passed to role func. """ |
try:
base = app.config.github_project_url
if not base:
raise AttributeError
if not base.endswith('/'):
base += '/'
except AttributeError as err:
raise ValueError('github_project_url configuration value is not set (%s)' % str(err))
ref = base + type + '/' + slug + '/'
set_classes(options)
prefix = "#"
if type == 'pull':
prefix = "PR " + prefix
node = nodes.reference(rawtext, prefix + utils.unescape(slug), refuri=ref,
**options)
return node |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ghissue_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
"""Link to a GitHub issue. Returns 2 part tuple containing list of nodes to insert into the document and a list of system messages. Both are allowed to be empty. :param name: The role name used in the document. :param rawtext: The entire markup snippet, with role. :param text: The text marked with the role. :param lineno: The line number where rawtext appears in the input. :param inliner: The inliner instance that called us. :param options: Directive options for customization. :param content: The directive content for customization. """ |
try:
issue_num = int(text)
if issue_num <= 0:
raise ValueError
except ValueError:
msg = inliner.reporter.error(
'GitHub issue number must be a number greater than or equal to 1; '
'"%s" is invalid.' % text, line=lineno)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
app = inliner.document.settings.env.app
#app.info('issue %r' % text)
if 'pull' in name.lower():
category = 'pull'
elif 'issue' in name.lower():
category = 'issues'
else:
msg = inliner.reporter.error(
'GitHub roles include "ghpull" and "ghissue", '
'"%s" is invalid.' % name, line=lineno)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
node = make_link_node(rawtext, app, category, str(issue_num), options)
return [node], [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ghuser_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
"""Link to a GitHub user. Returns 2 part tuple containing list of nodes to insert into the document and a list of system messages. Both are allowed to be empty. :param name: The role name used in the document. :param rawtext: The entire markup snippet, with role. :param text: The text marked with the role. :param lineno: The line number where rawtext appears in the input. :param inliner: The inliner instance that called us. :param options: Directive options for customization. :param content: The directive content for customization. """ |
app = inliner.document.settings.env.app
#app.info('user link %r' % text)
ref = 'https://www.github.com/' + text
node = nodes.reference(rawtext, text, refuri=ref, **options)
return [node], [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ghcommit_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
"""Link to a GitHub commit. Returns 2 part tuple containing list of nodes to insert into the document and a list of system messages. Both are allowed to be empty. :param name: The role name used in the document. :param rawtext: The entire markup snippet, with role. :param text: The text marked with the role. :param lineno: The line number where rawtext appears in the input. :param inliner: The inliner instance that called us. :param options: Directive options for customization. :param content: The directive content for customization. """ |
app = inliner.document.settings.env.app
#app.info('user link %r' % text)
try:
base = app.config.github_project_url
if not base:
raise AttributeError
if not base.endswith('/'):
base += '/'
except AttributeError as err:
raise ValueError('github_project_url configuration value is not set (%s)' % str(err))
ref = base + text
node = nodes.reference(rawtext, text[:6], refuri=ref, **options)
return [node], [] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _import(self, name):
''' Import namespace package '''
mod = __import__(name)
components = name.split('.')
for comp in components[1:]:
mod = getattr(mod, comp)
return mod |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def discover_modules(self):
''' Return module sequence discovered from ``self.package_name``
Parameters
----------
None
Returns
-------
mods : sequence
Sequence of module names within ``self.package_name``
Examples
--------
>>> dw = ApiDocWriter('sphinx')
>>> mods = dw.discover_modules()
>>> 'sphinx.util' in mods
True
>>> dw.package_skip_patterns.append('\.util$')
>>> 'sphinx.util' in dw.discover_modules()
False
>>>
'''
modules = [self.package_name]
# raw directory parsing
for dirpath, dirnames, filenames in os.walk(self.root_path):
# Check directory names for packages
root_uri = self._path2uri(os.path.join(self.root_path,
dirpath))
# Normally, we'd only iterate over dirnames, but since
# dipy does not import a whole bunch of modules we'll
# include those here as well (the *.py filenames).
filenames = [f[:-3] for f in filenames if
f.endswith('.py') and not f.startswith('__init__')]
for subpkg_name in dirnames + filenames:
package_uri = '.'.join((root_uri, subpkg_name))
package_path = self._uri2path(package_uri)
if (package_path and
self._survives_exclude(package_uri, 'package')):
modules.append(package_uri)
return sorted(modules) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_metadata_for_nifti(in_file, bids_dir=None, validate=True):
"""Fetch metadata for a given nifti file 'SIEMENS' """ |
return _init_layout(in_file, bids_dir, validate).get_metadata(
str(in_file)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def group_multiecho(bold_sess):
""" Multiplexes multi-echo EPIs into arrays. Dual-echo is a special case of multi-echo, which is treated as single-echo data. ['sub-01_task-rest_echo-1_run-01_bold.nii.gz', 'sub-01_task-rest_echo-2_run-01_bold.nii.gz', ['sub-01_task-rest_echo-1_run-02_bold.nii.gz', 'sub-01_task-rest_echo-2_run-02_bold.nii.gz', 'sub-01_task-rest_echo-3_run-02_bold.nii.gz'], 'sub-01_task-rest_run-03_bold.nii.gz'] [['sub-01_task-rest_echo-1_run-01_bold.nii.gz', 'sub-01_task-rest_echo-2_run-01_bold.nii.gz', 'sub-01_task-rest_echo-3_run-01_bold.nii.gz'], ['sub-01_task-rest_echo-1_run-02_bold.nii.gz', 'sub-01_task-rest_echo-2_run-02_bold.nii.gz', 'sub-01_task-rest_echo-3_run-02_bold.nii.gz'], 'sub-01_task-rest_run-03_bold.nii.gz'] [['sub-01_task-rest_echo-1_run-01_bold.nii.gz', 'sub-01_task-rest_echo-2_run-01_bold.nii.gz', 'sub-01_task-rest_echo-3_run-01_bold.nii.gz'], ['sub-01_task-rest_echo-1_run-02_bold.nii.gz', 'sub-01_task-rest_echo-2_run-02_bold.nii.gz', 'sub-01_task-rest_echo-3_run-02_bold.nii.gz'], 'sub-01_task-rest_run-03_bold.nii.gz', 'sub-01_task-beh_echo-1_run-01_bold.nii.gz', 'sub-01_task-beh_echo-2_run-01_bold.nii.gz', ['sub-01_task-beh_echo-1_run-02_bold.nii.gz', 'sub-01_task-beh_echo-2_run-02_bold.nii.gz', 'sub-01_task-beh_echo-3_run-02_bold.nii.gz'], 'sub-01_task-beh_run-03_bold.nii.gz'] Some tests from https://neurostars.org/t/fmriprep-from\ -singularity-unboundlocalerror/3299/7 True 6 [False, False, False, False, False, False, True] 3 """ |
from itertools import groupby
def _grp_echos(x):
if '_echo-' not in x:
return x
echo = re.search("_echo-\\d*", x).group(0)
return x.replace(echo, "_echo-?")
ses_uids = []
for _, bold in groupby(bold_sess, key=_grp_echos):
bold = list(bold)
# If single- or dual-echo, flatten list; keep list otherwise.
action = getattr(ses_uids, 'append' if len(bold) > 2 else 'extend')
action(bold)
return ses_uids |
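The grouping key replaces the echo index with a wildcard so consecutive echoes of the same run compare equal; a standalone run with hypothetical filenames:

```python
import re
from itertools import groupby

def _grp_echos(x):
    if '_echo-' not in x:
        return x
    echo = re.search(r"_echo-\d*", x).group(0)
    return x.replace(echo, "_echo-?")

bold_sess = ['sub-01_task-rest_echo-1_run-01_bold.nii.gz',
             'sub-01_task-rest_echo-2_run-01_bold.nii.gz',
             'sub-01_task-rest_echo-3_run-01_bold.nii.gz',
             'sub-01_task-rest_run-02_bold.nii.gz']

ses_uids = []
for _, bold in groupby(bold_sess, key=_grp_echos):
    bold = list(bold)
    # Multi-echo (3+) stays nested as a list; single/dual echo is flattened.
    getattr(ses_uids, 'append' if len(bold) > 2 else 'extend')(bold)
```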
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _define_variant(self):
"""Assign arbitrary label to combination of CIFTI spaces""" |
space = None
variants = {
# to be expanded once additional spaces are supported
'space1': ['fsaverage5', 'MNI152NLin2009cAsym'],
'space2': ['fsaverage6', 'MNI152NLin2009cAsym'],
}
for sp, targets in variants.items():
if all(target in targets for target in
[self.inputs.surface_target, self.inputs.volume_target]):
space = sp
if space is None:
raise NotImplementedError
variant_key = os.path.abspath('dtseries_variant.json')
with open(variant_key, 'w') as fp:
json.dump({space: variants[space]}, fp)
return variant_key, space |
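The variant lookup is a containment test over both CIFTI targets; an isolated sketch without the JSON side effect:

```python
variants = {
    'space1': ['fsaverage5', 'MNI152NLin2009cAsym'],
    'space2': ['fsaverage6', 'MNI152NLin2009cAsym'],
}
surface_target, volume_target = 'fsaverage6', 'MNI152NLin2009cAsym'

space = None
for sp, targets in variants.items():
    # Both the surface and volume targets must belong to the same variant.
    if all(t in targets for t in (surface_target, volume_target)):
        space = sp
```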
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _fetch_data(self):
"""Converts inputspec to files""" |
if (self.inputs.surface_target == "fsnative" or
self.inputs.volume_target != "MNI152NLin2009cAsym"):
# subject space is not supported yet
raise NotImplementedError
annotation_files = sorted(glob(os.path.join(self.inputs.subjects_dir,
self.inputs.surface_target,
'label',
'*h.aparc.annot')))
if not annotation_files:
raise IOError("Freesurfer annotations for %s not found in %s" % (
self.inputs.surface_target, self.inputs.subjects_dir))
label_file = str(get_template(
'MNI152NLin2009cAsym', resolution=2, desc='DKT31', suffix='dseg'))
return annotation_files, label_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reorient(in_file, newpath=None):
"""Reorient Nifti files to RAS""" |
out_file = fname_presuffix(in_file, suffix='_ras', newpath=newpath)
nb.as_closest_canonical(nb.load(in_file)).to_filename(out_file)
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def normalize_xform(img):
""" Set identical, valid qform and sform matrices in an image Selects the best available affine (sform > qform > shape-based), and coerces it to be qform-compatible (no shears). The resulting image represents this same affine as both qform and sform, and is marked as NIFTI_XFORM_ALIGNED_ANAT, indicating that it is valid, not aligned to template, and not necessarily preserving the original coordinates. If header would be unchanged, returns input image. """ |
# Let nibabel convert from affine to quaternions, and recover xform
tmp_header = img.header.copy()
tmp_header.set_qform(img.affine)
xform = tmp_header.get_qform()
xform_code = 2
# Check desired codes
qform, qform_code = img.get_qform(coded=True)
sform, sform_code = img.get_sform(coded=True)
if all((qform is not None and np.allclose(qform, xform),
sform is not None and np.allclose(sform, xform),
int(qform_code) == xform_code, int(sform_code) == xform_code)):
return img
new_img = img.__class__(img.get_data(), xform, img.header)
# Unconditionally set sform/qform
new_img.set_sform(xform, xform_code)
new_img.set_qform(xform, xform_code)
return new_img |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def demean(in_file, in_mask, only_mask=False, newpath=None):
"""Demean ``in_file`` within the mask defined by ``in_mask``""" |
import os
import numpy as np
import nibabel as nb
from nipype.utils.filemanip import fname_presuffix
out_file = fname_presuffix(in_file, suffix='_demeaned',
newpath=os.getcwd())
nii = nb.load(in_file)
msk = nb.load(in_mask).get_data()
data = nii.get_data()
if only_mask:
data[msk > 0] -= np.median(data[msk > 0])
else:
data -= np.median(data[msk > 0])
nb.Nifti1Image(data, nii.affine, nii.header).to_filename(
out_file)
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def nii_ones_like(in_file, value, dtype, newpath=None):
"""Create a NIfTI file filled with ``value``, matching properties of ``in_file``""" |
import os
import numpy as np
import nibabel as nb
nii = nb.load(in_file)
data = np.ones(nii.shape, dtype=float) * value
out_file = os.path.join(newpath or os.getcwd(), "filled.nii.gz")
nii = nb.Nifti1Image(data, nii.affine, nii.header)
nii.set_data_dtype(dtype)
nii.to_filename(out_file)
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reorient_wf(name='ReorientWorkflow'):
"""A workflow to reorient images to 'RPI' orientation""" |
workflow = pe.Workflow(name=name)
inputnode = pe.Node(niu.IdentityInterface(fields=['in_file']),
name='inputnode')
outputnode = pe.Node(niu.IdentityInterface(
fields=['out_file']), name='outputnode')
deoblique = pe.Node(afni.Refit(deoblique=True), name='deoblique')
reorient = pe.Node(afni.Resample(
orientation='RPI', outputtype='NIFTI_GZ'), name='reorient')
workflow.connect([
(inputnode, deoblique, [('in_file', 'in_file')]),
(deoblique, reorient, [('out_file', 'in_file')]),
(reorient, outputnode, [('out_file', 'out_file')])
])
return workflow |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _tsv2json(in_tsv, out_json, index_column, additional_metadata=None, drop_columns=None, enforce_case=True):
""" Convert metadata from TSV format to JSON format. Parameters in_tsv: str Path to the metadata in TSV format. out_json: str Path where the metadata should be saved in JSON format after conversion. If this is None, then a dictionary is returned instead. index_column: str Name of the column in the TSV to be used as an index (top-level key in the JSON). additional_metadata: dict Any additional metadata that should be applied to all entries in the JSON. drop_columns: list List of columns from the input TSV to be dropped from the JSON. enforce_case: bool Indicates whether BIDS case conventions should be followed. Currently, this means that index fields (column names in the associated data TSV) use snake case and other fields use camel case. Returns ------- str Path to the metadata saved in JSON format. """ |
import re
import json
from collections import OrderedDict
import pandas as pd
# Adapted from https://dev.to/rrampage/snake-case-to-camel-case-and- ...
# back-using-regular-expressions-and-python-m9j
re_to_camel = r'(.*?)_([a-zA-Z0-9])'
re_to_snake = r'(^.+?|.*?)((?<![_A-Z])[A-Z]|(?<![_0-9])[0-9]+)'
def snake(match):
return '{}_{}'.format(match.group(1).lower(), match.group(2).lower())
def camel(match):
return '{}{}'.format(match.group(1), match.group(2).upper())
# from fmriprep
def less_breakable(a_string):
""" hardens the string to different envs (i.e. case insensitive, no
whitespace, '#' """
return ''.join(a_string.split()).strip('#')
drop_columns = drop_columns or []
additional_metadata = additional_metadata or {}
tsv_data = pd.read_csv(in_tsv, sep='\t')
for k, v in additional_metadata.items():
tsv_data[k] = v
for col in drop_columns:
tsv_data.drop(labels=col, axis='columns', inplace=True)
tsv_data.set_index(index_column, drop=True, inplace=True)
if enforce_case:
tsv_data.index = [re.sub(re_to_snake, snake,
less_breakable(i), 0).lower()
for i in tsv_data.index]
tsv_data.columns = [re.sub(re_to_camel, camel,
less_breakable(i).title(), 0)
for i in tsv_data.columns]
json_data = tsv_data.to_json(orient='index')
json_data = json.JSONDecoder(
object_pairs_hook=OrderedDict).decode(json_data)
if out_json is None:
return json_data
with open(out_json, 'w') as f:
json.dump(json_data, f, indent=4)
return out_json |
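The two regular expressions above do the snake-case/camel-case round trip on their own and can be exercised without any TSV input. A minimal illustration, using a hypothetical BIDS field name:

```python
import re

re_to_camel = r'(.*?)_([a-zA-Z0-9])'
re_to_snake = r'(^.+?|.*?)((?<![_A-Z])[A-Z]|(?<![_0-9])[0-9]+)'

def snake(match):
    return '{}_{}'.format(match.group(1).lower(), match.group(2).lower())

def camel(match):
    return '{}{}'.format(match.group(1), match.group(2).upper())

# Index fields become snake case, other fields camel case:
print(re.sub(re_to_snake, snake, 'RepetitionTime', 0).lower())
# -> repetition_time
print(re.sub(re_to_camel, camel, 'repetition_time'.title(), 0))
# -> RepetitionTime
```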
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _tpm2roi(in_tpm, in_mask, mask_erosion_mm=None, erosion_mm=None, mask_erosion_prop=None, erosion_prop=None, pthres=0.95, newpath=None):
""" Generate a mask from a tissue probability map """ |
tpm_img = nb.load(in_tpm)
roi_mask = (tpm_img.get_data() >= pthres).astype(np.uint8)
eroded_mask_file = None
erode_in = (mask_erosion_mm is not None and mask_erosion_mm > 0 or
mask_erosion_prop is not None and mask_erosion_prop < 1)
if erode_in:
eroded_mask_file = fname_presuffix(in_mask, suffix='_eroded',
newpath=newpath)
mask_img = nb.load(in_mask)
mask_data = mask_img.get_data().astype(np.uint8)
if mask_erosion_mm:
iter_n = max(int(mask_erosion_mm / max(mask_img.header.get_zooms())), 1)
mask_data = nd.binary_erosion(mask_data, iterations=iter_n)
else:
orig_vol = np.sum(mask_data > 0)
while np.sum(mask_data > 0) / orig_vol > mask_erosion_prop:
mask_data = nd.binary_erosion(mask_data, iterations=1)
# Store mask
eroded = nb.Nifti1Image(mask_data, mask_img.affine, mask_img.header)
eroded.set_data_dtype(np.uint8)
eroded.to_filename(eroded_mask_file)
# Mask TPM data (no effect if not eroded)
roi_mask[~mask_data] = 0
# shrinking
erode_out = (erosion_mm is not None and erosion_mm > 0 or
erosion_prop is not None and erosion_prop < 1)
if erode_out:
if erosion_mm:
iter_n = max(int(erosion_mm / max(tpm_img.header.get_zooms())), 1)
roi_mask = nd.binary_erosion(roi_mask, iterations=iter_n)
else:
orig_vol = np.sum(roi_mask > 0)
while np.sum(roi_mask > 0) / orig_vol > erosion_prop:
roi_mask = nd.binary_erosion(roi_mask, iterations=1)
# Create image to resample
roi_fname = fname_presuffix(in_tpm, suffix='_roi', newpath=newpath)
roi_img = nb.Nifti1Image(roi_mask, tpm_img.affine, tpm_img.header)
roi_img.set_data_dtype(np.uint8)
roi_img.to_filename(roi_fname)
return roi_fname, eroded_mask_file or in_mask |
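The millimetre-based erosion options are converted into a number of binary-erosion iterations: one iteration per voxel along the coarsest dimension, clamped to at least one. A small sketch of that arithmetic (the zoom tuples are hypothetical voxel sizes):

```python
# mm -> iterations conversion used for both mask_erosion_mm and
# erosion_mm above: divide by the largest voxel dimension, floor,
# and never erode less than one iteration.
def erosion_iterations(erosion_mm, zooms):
    return max(int(erosion_mm / max(zooms)), 1)

print(erosion_iterations(6.0, (2.0, 2.0, 2.0)))  # 3
print(erosion_iterations(0.5, (1.0, 1.0, 1.0)))  # 1 (clamped)
```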
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_reports(reportlets_dir, out_dir, subject_label, run_uuid, config=None, packagename=None):
""" Runs the reports .. testsetup:: .. doctest:: 0 .. testcleanup:: """ |
report = Report(Path(reportlets_dir), out_dir, run_uuid, config=config,
subject_id=subject_label, packagename=packagename)
return report.generate_report() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_reports(subject_list, output_dir, work_dir, run_uuid, config=None, packagename=None):
""" A wrapper to run_reports on a given ``subject_list`` """ |
reports_dir = str(Path(work_dir) / 'reportlets')
report_errors = [
run_reports(reports_dir, output_dir, subject_label, run_uuid,
config, packagename=packagename)
for subject_label in subject_list
]
errno = sum(report_errors)
if errno:
import logging
logger = logging.getLogger('cli')
error_list = ', '.join('%s (%d)' % (subid, err)
for subid, err in zip(subject_list, report_errors) if err)
logger.error(
'Preprocessing did not finish successfully. Errors occurred while processing '
'data from participants: %s. Check the HTML reports for details.', error_list)
return errno |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def index(self, config):
""" Traverse the reports config definition and instantiate reportlets. This method also places figures in their final location. """ |
for subrep_cfg in config:
# First determine whether we need to split by some ordering
# (ie. sessions / tasks / runs), which are separated by commas.
orderings = [s for s in subrep_cfg.get('ordering', '').strip().split(',') if s]
queries = []
for key in orderings:
values = getattr(self.layout, 'get_%s%s' % (key, PLURAL_SUFFIX[key]))()
if values:
queries.append((key, values))
if not queries: # E.g. this is an anatomical reportlet
reportlets = [Reportlet(self.layout, self.out_dir, config=cfg)
for cfg in subrep_cfg['reportlets']]
else:
# Do not use dictionary for queries, as we need to preserve ordering
# of ordering columns.
reportlets = []
entities, values = zip(*queries)
combinations = list(product(*values)) # e.g.: [('rest', 1), ('rest', 2)]
for c in combinations:
# Set a common title for this particular combination c
title = 'Reports for: %s.' % ', '.join(
['%s <span class="bids-entity">%s</span>' % (entities[i], c[i])
for i in range(len(c))])
for cfg in subrep_cfg['reportlets']:
cfg['bids'].update({entities[i]: c[i] for i in range(len(c))})
rlet = Reportlet(self.layout, self.out_dir, config=cfg)
if not rlet.is_empty():
rlet.title = title
title = None
reportlets.append(rlet)
# Filter out empty reportlets
reportlets = [r for r in reportlets if not r.is_empty()]
if reportlets:
sub_report = SubReport(
subrep_cfg['name'],
isnested=len(queries) > 0,
reportlets=reportlets,
title=subrep_cfg.get('title'))
self.sections.append(sub_report)
# Populate errors sections
error_dir = self.out_dir / self.packagename / 'sub-{}'.format(self.subject_id) / \
'log' / self.run_uuid
if error_dir.is_dir():
from ..utils.misc import read_crashfile
self.errors = [read_crashfile(str(f)) for f in error_dir.glob('crash*.*')] |
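The ordering expansion above relies on ``zip(*queries)`` plus ``itertools.product`` to enumerate entity combinations while preserving column order. A stand-alone sketch with hypothetical session/task values:

```python
from itertools import product

# Queries are kept as an ordered list of (entity, values) pairs, not a
# dict, so that the combination order is deterministic.
queries = [('session', ['01', '02']), ('task', ['rest'])]
entities, values = zip(*queries)
combinations = list(product(*values))
print(entities)      # ('session', 'task')
print(combinations)  # [('01', 'rest'), ('02', 'rest')]
```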
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_report(self):
"""Once the Report has been indexed, the final HTML can be generated""" |
logs_path = self.out_dir / 'logs'
boilerplate = []
boiler_idx = 0
if (logs_path / 'CITATION.html').exists():
text = (logs_path / 'CITATION.html').read_text(encoding='UTF-8')
text = '<div class="boiler-html">%s</div>' % re.compile(
'<body>(.*?)</body>',
re.DOTALL | re.IGNORECASE).findall(text)[0].strip()
boilerplate.append((boiler_idx, 'HTML', text))
boiler_idx += 1
if (logs_path / 'CITATION.md').exists():
text = '<pre>%s</pre>\n' % (logs_path / 'CITATION.md').read_text(encoding='UTF-8')
boilerplate.append((boiler_idx, 'Markdown', text))
boiler_idx += 1
if (logs_path / 'CITATION.tex').exists():
text = (logs_path / 'CITATION.tex').read_text(encoding='UTF-8')
text = re.compile(
r'\\begin{document}(.*?)\\end{document}',
re.DOTALL | re.IGNORECASE).findall(text)[0].strip()
text = '<pre>%s</pre>\n' % text
text += '<h3>Bibliography</h3>\n'
text += '<pre>%s</pre>\n' % Path(
pkgrf(self.packagename, 'data/boilerplate.bib')).read_text(encoding='UTF-8')
boilerplate.append((boiler_idx, 'LaTeX', text))
boiler_idx += 1
env = jinja2.Environment(
loader=jinja2.FileSystemLoader(searchpath=str(self.template_path.parent)),
trim_blocks=True, lstrip_blocks=True
)
report_tpl = env.get_template(self.template_path.name)
report_render = report_tpl.render(sections=self.sections, errors=self.errors,
boilerplate=boilerplate)
# Write out report
(self.out_dir / self.out_filename).write_text(report_render, encoding='UTF-8')
return len(self.errors) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _applytfms(args):
""" Applies ANTs' antsApplyTransforms to the input image. All inputs are zipped in one tuple to make it digestible by multiprocessing's map """ |
import nibabel as nb
from nipype.utils.filemanip import fname_presuffix
from niworkflows.interfaces.fixes import FixHeaderApplyTransforms as ApplyTransforms
in_file, in_xform, ifargs, index, newpath = args
out_file = fname_presuffix(in_file, suffix='_xform-%05d' % index,
newpath=newpath, use_ext=True)
copy_dtype = ifargs.pop('copy_dtype', False)
xfm = ApplyTransforms(
input_image=in_file, transforms=in_xform, output_image=out_file, **ifargs)
xfm.terminal_output = 'allatonce'
xfm.resource_monitor = False
runtime = xfm.run().runtime
if copy_dtype:
nii = nb.load(out_file)
in_dtype = nb.load(in_file).get_data_dtype()
# Overwrite only iff dtypes don't match
if in_dtype != nii.get_data_dtype():
nii.set_data_dtype(in_dtype)
nii.to_filename(out_file)
return (out_file, runtime.cmdline) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _arrange_xfms(transforms, num_files, tmp_folder):
""" Convenience method to arrange the list of transforms that should be applied to each input file """ |
base_xform = ['#Insight Transform File V1.0', '#Transform 0']
# Initialize the transforms matrix
xfms_T = []
for i, tf_file in enumerate(transforms):
# If it is a deformation field, copy to the tfs_matrix directly
if guess_type(tf_file)[0] != 'text/plain':
xfms_T.append([tf_file] * num_files)
continue
with open(tf_file) as tf_fh:
tfdata = tf_fh.read().strip()
# If it is not an ITK transform file, copy to the tfs_matrix directly
if not tfdata.startswith('#Insight Transform File'):
xfms_T.append([tf_file] * num_files)
continue
# Count number of transforms in ITK transform file
nxforms = tfdata.count('#Transform')
# Remove first line
tfdata = tfdata.split('\n')[1:]
# If it is an ITK transform file with only 1 xform, copy to the tfs_matrix directly
if nxforms == 1:
xfms_T.append([tf_file] * num_files)
continue
if nxforms != num_files:
raise RuntimeError('Number of transforms (%d) found in the ITK file does not match'
' the number of input image files (%d).' % (nxforms, num_files))
# At this point splitting transforms will be necessary, generate a base name
out_base = fname_presuffix(tf_file, suffix='_pos-%03d_xfm-{:05d}' % i,
newpath=tmp_folder.name).format
# Split combined ITK transforms file
split_xfms = []
for xform_i in range(nxforms):
# Find start token to extract
startidx = tfdata.index('#Transform %d' % xform_i)
next_xform = base_xform + tfdata[startidx + 1:startidx + 4] + ['']
xfm_file = out_base(xform_i)
with open(xfm_file, 'w') as out_xfm:
out_xfm.write('\n'.join(next_xform))
split_xfms.append(xfm_file)
xfms_T.append(split_xfms)
# Transpose back (only Python 3)
return list(map(list, zip(*xfms_T))) |
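The splitting step is plain text manipulation: each transform occupies three lines after its ``#Transform N`` marker, and every split file is re-headed with the base preamble. A self-contained sketch on a hypothetical two-transform composite file body (no files written):

```python
# Hypothetical composite ITK transform content with two transforms.
tfdata = """#Insight Transform File V1.0
#Transform 0
Transform: MatrixOffsetTransformBase_double_3_3
Parameters: 1 0 0 0 1 0 0 0 1 0 0 0
FixedParameters: 0 0 0
#Transform 1
Transform: MatrixOffsetTransformBase_double_3_3
Parameters: 1 0 0 0 1 0 0 0 1 1 2 3
FixedParameters: 0 0 0"""

base_xform = ['#Insight Transform File V1.0', '#Transform 0']
nxforms = tfdata.count('#Transform')    # 2
lines = tfdata.strip().split('\n')[1:]  # drop the file header line
chunks = []
for i in range(nxforms):
    start = lines.index('#Transform %d' % i)
    # each transform is 3 lines: Transform, Parameters, FixedParameters
    chunks.append('\n'.join(base_xform + lines[start + 1:start + 4] + ['']))
print(len(chunks))  # 2
```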
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_transform(fname):
"""Load affine transform from file Parameters fname : str or None Filename of an LTA or FSL-style MAT transform file. If ``None``, return an identity transform Returns ------- affine : (4, 4) numpy.ndarray """ |
if fname is None:
return np.eye(4)
if fname.endswith('.mat'):
return np.loadtxt(fname)
elif fname.endswith('.lta'):
with open(fname, 'rb') as fobj:
for line in fobj:
if line.startswith(b'1 4 4'):
break
lines = fobj.readlines()[:4]
return np.genfromtxt(lines)
raise ValueError("Unknown transform type; pass FSL (.mat) or LTA (.lta)") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def vertex_normals(vertices, faces):
"""Calculates the normals of a triangular mesh""" |
def normalize_v3(arr):
''' Normalize a numpy array of 3 component vectors shape=(n,3) '''
lens = np.sqrt(arr[:, 0]**2 + arr[:, 1]**2 + arr[:, 2]**2)
arr /= lens[:, np.newaxis]
tris = vertices[faces]
facenorms = np.cross(tris[::, 1] - tris[::, 0], tris[::, 2] - tris[::, 0])
normalize_v3(facenorms)
norm = np.zeros(vertices.shape, dtype=vertices.dtype)
norm[faces[:, 0]] += facenorms
norm[faces[:, 1]] += facenorms
norm[faces[:, 2]] += facenorms
normalize_v3(norm)
return norm |
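A quick sanity check of the face-normal computation at the heart of the function above (assuming NumPy is available): for a single triangle lying in the z = 0 plane, the normal must be the +z unit vector.

```python
import numpy as np

vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])

tris = vertices[faces]
# Cross product of two triangle edges gives the (unnormalized) normal
facenorms = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
facenorms /= np.linalg.norm(facenorms, axis=1)[:, np.newaxis]
print(facenorms)  # [[0. 0. 1.]]
```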
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pointcloud2ply(vertices, normals, out_file=None):
"""Converts the file to PLY format""" |
from pathlib import Path
import numpy as np
import pandas as pd
from pyntcloud import PyntCloud
df = pd.DataFrame(np.hstack((vertices, normals)))
df.columns = ['x', 'y', 'z', 'nx', 'ny', 'nz']
cloud = PyntCloud(df)
if out_file is None:
out_file = Path('pointcloud.ply').resolve()
cloud.to_file(str(out_file))
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ply2gii(in_file, metadata, out_file=None):
"""Convert from ply to GIfTI""" |
from pathlib import Path
from numpy import eye
from nibabel.gifti import (
GiftiMetaData, GiftiCoordSystem, GiftiImage, GiftiDataArray,
)
from nipype.utils.filemanip import fname_presuffix
from pyntcloud import PyntCloud
in_file = Path(in_file)
surf = PyntCloud.from_file(str(in_file))
# Update centroid metadata
metadata.update(
zip(('SurfaceCenterX', 'SurfaceCenterY', 'SurfaceCenterZ'),
['%.4f' % c for c in surf.centroid])
)
# Prepare data arrays
da = (
GiftiDataArray(
data=surf.xyz.astype('float32'),
datatype='NIFTI_TYPE_FLOAT32',
intent='NIFTI_INTENT_POINTSET',
meta=GiftiMetaData.from_dict(metadata),
coordsys=GiftiCoordSystem(xform=eye(4), xformspace=3)),
GiftiDataArray(
data=surf.mesh.values,
datatype='NIFTI_TYPE_INT32',
intent='NIFTI_INTENT_TRIANGLE',
coordsys=None))
surfgii = GiftiImage(darrays=da)
if out_file is None:
out_file = fname_presuffix(
in_file.name, suffix='.gii', use_ext=False, newpath=str(Path.cwd()))
surfgii.to_filename(str(out_file))
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fix_multi_T1w_source_name(in_files):
""" Make up a generic source name when there are multiple T1s '/path/to/sub-045_T1w.nii.gz' """ |
import os
from nipype.utils.filemanip import filename_to_list
base, in_file = os.path.split(filename_to_list(in_files)[0])
subject_label = in_file.split("_", 1)[0].split("-")[1]
return os.path.join(base, "sub-%s_T1w.nii.gz" % subject_label) |
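The name mangling above is pure string handling and easy to trace. A sketch on a hypothetical session-specific filename:

```python
import os

in_file = '/path/to/sub-045_ses-test_T1w.nii.gz'  # hypothetical input
base, name = os.path.split(in_file)
# 'sub-045_ses-test_T1w.nii.gz' -> 'sub-045' -> '045'
subject_label = name.split('_', 1)[0].split('-')[1]
out = os.path.join(base, 'sub-%s_T1w.nii.gz' % subject_label)
print(out)  # /path/to/sub-045_T1w.nii.gz on POSIX paths
```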
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_suffix(in_files, suffix):
""" Wrap nipype's fname_presuffix to conveniently just add a prefix 'sub-045_ses-test_T1w_test.nii.gz' """ |
import os.path as op
from nipype.utils.filemanip import fname_presuffix, filename_to_list
return op.basename(fname_presuffix(filename_to_list(in_files)[0],
suffix=suffix)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_txt(path):
"""Read a txt crashfile """ |
from pathlib import Path
lines = Path(path).read_text().splitlines()
data = {'file': str(path)}
traceback_start = 0
if lines[0].startswith('Node'):
data['node'] = lines[0].split(': ', 1)[1].strip()
data['node_dir'] = lines[1].split(': ', 1)[1].strip()
inputs = []
cur_key = ''
cur_val = ''
for i, line in enumerate(lines[5:]):
if not line.strip():
continue
if line[0].isspace():
cur_val += line
continue
if cur_val:
inputs.append((cur_key, cur_val.strip()))
if line.startswith("Traceback ("):
traceback_start = i + 5
break
cur_key, cur_val = tuple(line.split(' = ', 1))
data['inputs'] = sorted(inputs)
else:
data['node_dir'] = "Node crashed before execution"
data['traceback'] = '\n'.join(lines[traceback_start:]).strip()
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _conform_mask(in_mask, in_reference):
"""Ensures the mask headers make sense and match those of the T1w""" |
from pathlib import Path
import nibabel as nb
from nipype.utils.filemanip import fname_presuffix
ref = nb.load(in_reference)
nii = nb.load(in_mask)
hdr = nii.header.copy()
hdr.set_data_dtype('int16')
hdr.set_slope_inter(1, 0)
qform, qcode = ref.header.get_qform(coded=True)
if qcode is not None:
hdr.set_qform(qform, int(qcode))
sform, scode = ref.header.get_sform(coded=True)
if scode is not None:
hdr.set_sform(sform, int(scode))
if '_maths' in in_mask: # Cut the name at first _maths occurrence
ext = ''.join(Path(in_mask).suffixes)
basename = Path(in_mask).name
in_mask = basename.split('_maths')[0] + ext
out_file = fname_presuffix(in_mask, suffix='_mask',
newpath=str(Path()))
nii.__class__(nii.get_data().astype('int16'), ref.affine,
hdr).to_filename(out_file)
return out_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_template(template_name, data_dir=None, url=None, resume=True, verbose=1):
"""Download and load a template""" |
warn(DEPRECATION_MSG)
if template_name.startswith('tpl-'):
template_name = template_name[4:]
# An aliasing mechanism. Please avoid
template_name = TEMPLATE_ALIASES.get(template_name, template_name)
return get_dataset(template_name, dataset_prefix='tpl-', data_dir=data_dir,
url=url, resume=resume, verbose=verbose) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_bids_examples(data_dir=None, url=None, resume=True, verbose=1, variant='BIDS-examples-1-1.0.0-rc3u5'):
"""Download BIDS-examples-1""" |
warn(DEPRECATION_MSG)
variant = 'BIDS-examples-1-1.0.0-rc3u5' if variant not in BIDS_EXAMPLES else variant
if url is None:
url = BIDS_EXAMPLES[variant][0]
md5 = BIDS_EXAMPLES[variant][1]
return fetch_file(variant, url, data_dir, resume=resume, verbose=verbose,
md5sum=md5) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def svg2str(display_object, dpi=300):
""" Serializes a nilearn display object as a string """ |
from io import StringIO
image_buf = StringIO()
display_object.frame_axes.figure.savefig(
image_buf, dpi=dpi, format='svg',
facecolor='k', edgecolor='k')
return image_buf.getvalue() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def extract_svg(display_object, dpi=300, compress='auto'):
""" Removes the preamble of the svg files generated with nilearn """ |
image_svg = svg2str(display_object, dpi)
if compress is True or compress == 'auto':
image_svg = svg_compress(image_svg, compress)
image_svg = re.sub(' height="[0-9]+[a-z]*"', '', image_svg, count=1)
image_svg = re.sub(' width="[0-9]+[a-z]*"', '', image_svg, count=1)
image_svg = re.sub(' viewBox',
' preserveAspectRatio="xMidYMid meet" viewBox',
image_svg, count=1)
start_tag = '<svg '
start_idx = image_svg.find(start_tag)
end_tag = '</svg>'
end_idx = image_svg.rfind(end_tag)
if start_idx == -1 or end_idx == -1:
NIWORKFLOWS_LOG.info('svg tags not found in extract_svg')
# rfind gives the start index of the substr. We want this substr
# included in our return value so we add its length to the index.
end_idx += len(end_tag)
return image_svg[start_idx:end_idx] |
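The height/width stripping and `<svg>…</svg>` extraction can be seen in isolation on a toy string (not a real nilearn output):

```python
import re

svg = ('<?xml version="1.0"?><!DOCTYPE svg>'
       '<svg height="480pt" width="640pt" viewBox="0 0 640 480">'
       '<rect/></svg><!-- trailing junk -->')
# Drop fixed height/width so the SVG scales with its container
svg = re.sub(' height="[0-9]+[a-z]*"', '', svg, count=1)
svg = re.sub(' width="[0-9]+[a-z]*"', '', svg, count=1)
start = svg.find('<svg ')
end = svg.rfind('</svg>') + len('</svg>')
print(svg[start:end])  # <svg viewBox="0 0 640 480"><rect/></svg>
```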
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cuts_from_bbox(mask_nii, cuts=3):
"""Finds equi-spaced cuts for presenting images""" |
from nibabel.affines import apply_affine
mask_data = mask_nii.get_data() > 0.0
# First, project the number of masked voxels on each axes
ijk_counts = [
mask_data.sum(2).sum(1), # project sagittal planes to transverse (i) axis
mask_data.sum(2).sum(0),  # project coronal planes to the longitudinal (j) axis
mask_data.sum(1).sum(0), # project axial planes to vertical (k) axis
]
# If all voxels are masked in a slice (say that happens at k=10),
# then the value for ijk_counts for the projection to k (ie. ijk_counts[2])
# at that element of the orthogonal axes (ijk_counts[2][10]) is
# the total number of voxels in that slice (ie. Ni x Nj).
# Here we define some thresholds to consider the plane as "masked"
# The thresholds vary because of the shape of the brain
# I have manually found that for the axial view requiring 30%
# of the slice elements to be masked drops almost empty boxes
# in the mosaic of axial planes (and also addresses #281)
ijk_th = [
int((mask_data.shape[1] * mask_data.shape[2]) * 0.2), # sagittal
int((mask_data.shape[0] * mask_data.shape[2]) * 0.0), # coronal
int((mask_data.shape[0] * mask_data.shape[1]) * 0.3), # axial
]
vox_coords = []
for ax, (c, th) in enumerate(zip(ijk_counts, ijk_th)):
B = np.argwhere(c > th)
if B.size:
smin, smax = B.min(), B.max()
# Avoid too narrow selections of cuts (very small masks)
if not B.size or (th > 0 and (smin + cuts + 1) >= smax):
B = np.argwhere(c > 0)
# Resort to full plane if mask is seemingly empty
smin, smax = (B.min(), B.max()) if B.size else (0, mask_data.shape[ax])
inc = (smax - smin) / (cuts + 1)
vox_coords.append([smin + (i + 1) * inc for i in range(cuts)])
ras_coords = []
for cross in np.array(vox_coords).T:
ras_coords.append(apply_affine(
mask_nii.affine, cross).tolist())
ras_cuts = [list(coords) for coords in np.transpose(ras_coords)]
return {k: v for k, v in zip(['x', 'y', 'z'], ras_cuts)} |
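Once the masked extent ``[smin, smax]`` is known for an axis, the cut placement above is simple arithmetic: ``cuts`` planes spaced evenly, strictly inside the extent. A minimal sketch:

```python
# Place `cuts` equally spaced planes inside [smin, smax], never on the
# boundaries themselves (mirrors the inc/vox_coords computation above).
def equispaced_cuts(smin, smax, cuts):
    inc = (smax - smin) / (cuts + 1)
    return [smin + (i + 1) * inc for i in range(cuts)]

print(equispaced_cuts(10, 50, 3))  # [20.0, 30.0, 40.0]
```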
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _3d_in_file(in_file):
''' if self.inputs.in_file is 3d, return it.
if 4d, pick an arbitrary volume and return that.
if in_file is a list of files, return an arbitrary file from
the list, and an arbitrary volume from that file
'''
import nibabel as nb
from nilearn import image as nlimage
from nipype.utils import filemanip
in_file = filemanip.filename_to_list(in_file)[0]
try:
in_file = nb.load(in_file)
except AttributeError:
in_file = in_file
if in_file.get_data().ndim == 3:
return in_file
return nlimage.index_img(in_file, 0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def transform_to_2d(data, max_axis):
""" Projects 3d data cube along one axis using maximum intensity with preservation of the signs. Adapted from nilearn. """ |
import numpy as np
# get the shape of the array we are projecting to
new_shape = list(data.shape)
del new_shape[max_axis]
# generate a 3D indexing array that points to max abs value in the
# current projection
a1, a2 = np.indices(new_shape)
inds = [a1, a2]
inds.insert(max_axis, np.abs(data).argmax(axis=max_axis))
# take the values where the absolute value of the projection
# is the highest
maximum_intensity_data = data[tuple(inds)]
return np.rot90(maximum_intensity_data) |
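The indexing trick above (``np.indices`` plus an ``argmax`` plane inserted at the projected axis) is worth seeing on a tiny cube. A sketch on a hypothetical 2x2x2 array, projecting along axis 2 and omitting the final ``rot90`` for clarity (assumes NumPy is available):

```python
import numpy as np

data = np.array([[[1., -5.], [2., 3.]],
                 [[-4., 1.], [0., -2.]]])
max_axis = 2
new_shape = list(data.shape)
del new_shape[max_axis]
a1, a2 = np.indices(new_shape)
inds = [a1, a2]
# index of the largest |value| along the projected axis
inds.insert(max_axis, np.abs(data).argmax(axis=max_axis))
proj = data[tuple(inds)]  # signed value with the largest magnitude
print(proj)  # [[-5.  3.] [-4. -2.]]
```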
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
""" Install entry-point """ |
from os import path as op
from inspect import getfile, currentframe
from setuptools import setup, find_packages
from niworkflows.__about__ import (
__packagename__,
__author__,
__email__,
__maintainer__,
__license__,
__description__,
__longdesc__,
__url__,
DOWNLOAD_URL,
CLASSIFIERS,
REQUIRES,
SETUP_REQUIRES,
LINKS_REQUIRES,
TESTS_REQUIRES,
EXTRA_REQUIRES,
)
pkg_data = {
'niworkflows': [
'data/t1-mni_registration*.json',
'data/bold-mni_registration*.json',
'reports/figures.json',
'reports/fmriprep.yml',
'reports/report.tpl',
]}
root_dir = op.dirname(op.abspath(getfile(currentframe())))
version = None
cmdclass = {}
if op.isfile(op.join(root_dir, __packagename__, 'VERSION')):
with open(op.join(root_dir, __packagename__, 'VERSION')) as vfile:
version = vfile.readline().strip()
pkg_data[__packagename__].insert(0, 'VERSION')
if version is None:
import versioneer
version = versioneer.get_version()
cmdclass = versioneer.get_cmdclass()
setup(
name=__packagename__,
version=version,
description=__description__,
long_description=__longdesc__,
author=__author__,
author_email=__email__,
maintainer=__maintainer__,
maintainer_email=__email__,
license=__license__,
url=__url__,
download_url=DOWNLOAD_URL,
classifiers=CLASSIFIERS,
packages=find_packages(exclude=['*.tests']),
zip_safe=False,
# Dependencies handling
setup_requires=SETUP_REQUIRES,
install_requires=list(set(REQUIRES)),
dependency_links=LINKS_REQUIRES,
tests_require=TESTS_REQUIRES,
extras_require=EXTRA_REQUIRES,
# Data
package_data=pkg_data,
include_package_data=True,
cmdclass=cmdclass,
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def afni_wf(name='AFNISkullStripWorkflow', unifize=False, n4_nthreads=1):
""" Skull-stripping workflow Originally derived from the `codebase of the QAP <https://github.com/preprocessed-connectomes-project/\ quality-assessment-protocol/blob/master/qap/anatomical_preproc.py#L105>`_. Now, this workflow includes :abbr:`INU (intensity non-uniformity)` correction using the N4 algorithm and (optionally) intensity harmonization using ANFI's ``3dUnifize``. """ |
workflow = pe.Workflow(name=name)
inputnode = pe.Node(niu.IdentityInterface(fields=['in_file']),
name='inputnode')
outputnode = pe.Node(niu.IdentityInterface(
fields=['bias_corrected', 'out_file', 'out_mask', 'bias_image']), name='outputnode')
inu_n4 = pe.Node(
ants.N4BiasFieldCorrection(dimension=3, save_bias=True, num_threads=n4_nthreads,
copy_header=True),
n_procs=n4_nthreads,
name='inu_n4')
sstrip = pe.Node(afni.SkullStrip(outputtype='NIFTI_GZ'), name='skullstrip')
sstrip_orig_vol = pe.Node(afni.Calc(
expr='a*step(b)', outputtype='NIFTI_GZ'), name='sstrip_orig_vol')
binarize = pe.Node(fsl.Threshold(args='-bin', thresh=1.e-3), name='binarize')
if unifize:
# Add two unifize steps, pre- and post- skullstripping.
inu_uni_0 = pe.Node(afni.Unifize(outputtype='NIFTI_GZ'),
name='unifize_pre_skullstrip')
inu_uni_1 = pe.Node(afni.Unifize(gm=True, outputtype='NIFTI_GZ'),
name='unifize_post_skullstrip')
workflow.connect([
(inu_n4, inu_uni_0, [('output_image', 'in_file')]),
(inu_uni_0, sstrip, [('out_file', 'in_file')]),
(inu_uni_0, sstrip_orig_vol, [('out_file', 'in_file_a')]),
(sstrip_orig_vol, inu_uni_1, [('out_file', 'in_file')]),
(inu_uni_1, outputnode, [('out_file', 'out_file')]),
(inu_uni_0, outputnode, [('out_file', 'bias_corrected')]),
])
else:
workflow.connect([
(inputnode, sstrip_orig_vol, [('in_file', 'in_file_a')]),
(inu_n4, sstrip, [('output_image', 'in_file')]),
(sstrip_orig_vol, outputnode, [('out_file', 'out_file')]),
(inu_n4, outputnode, [('output_image', 'bias_corrected')]),
])
# Remaining connections
workflow.connect([
(sstrip, sstrip_orig_vol, [('out_file', 'in_file_b')]),
(inputnode, inu_n4, [('in_file', 'input_image')]),
(sstrip_orig_vol, binarize, [('out_file', 'in_file')]),
(binarize, outputnode, [('out_file', 'out_mask')]),
(inu_n4, outputnode, [('bias_image', 'bias_image')]),
])
return workflow |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get(url, headers={}, params=None):
"""Tries to GET data from an endpoint using retries""" |
param_string = _foursquare_urlencode(params)
for i in range(NUM_REQUEST_RETRIES):
try:
try:
response = requests.get(url, headers=headers, params=param_string, verify=VERIFY_SSL)
return _process_response(response)
except requests.exceptions.RequestException as e:
_log_and_raise_exception('Error connecting with foursquare API', e)
except FoursquareException as e:
# Some errors don't bear repeating
if e.__class__ in [InvalidAuth, ParamError, EndpointError, NotAuthorized, Deprecated]: raise
# If we've reached our last try, re-raise
if ((i + 1) == NUM_REQUEST_RETRIES): raise
time.sleep(1) |
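The retry policy above boils down to: retry transient failures, but re-raise on non-retryable errors and on the final attempt. A stripped-down, network-free sketch of that loop (the fake ``flaky`` callable and ``RuntimeError`` stand in for the HTTP call and the Foursquare exceptions):

```python
import time

NUM_REQUEST_RETRIES = 3

def get_with_retries(fetch):
    for i in range(NUM_REQUEST_RETRIES):
        try:
            return fetch()
        except RuntimeError:
            # On the last attempt, give up and propagate the error
            if (i + 1) == NUM_REQUEST_RETRIES:
                raise
            time.sleep(0)  # the real code sleeps 1s between attempts

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError('transient')
    return 'ok'

print(get_with_retries(flaky))  # ok, after two failed attempts
```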
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _post(url, headers={}, data=None, files=None):
"""Tries to POST data to an endpoint""" |
try:
response = requests.post(url, headers=headers, data=data, files=files, verify=VERIFY_SSL)
return _process_response(response)
except requests.exceptions.RequestException as e:
_log_and_raise_exception('Error connecting with foursquare API', e) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _process_response(response):
"""Make the request and handle exception processing""" |
# Read the response as JSON
try:
data = response.json()
except ValueError:
_log_and_raise_exception('Invalid response', response.text)
# Default case, Got proper response
if response.status_code == 200:
return { 'headers': response.headers, 'data': data }
return _raise_error_from_response(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _raise_error_from_response(data):
"""Processes the response data""" |
# Check the meta-data for why this request failed
meta = data.get('meta')
if meta:
# Account for foursquare conflicts
# see: https://developer.foursquare.com/overview/responses
if meta.get('code') in (200, 409): return data
exc = error_types.get(meta.get('errorType'))
if exc:
raise exc(meta.get('errorDetail'))
else:
_log_and_raise_exception('Unknown error. meta', meta)
else:
_log_and_raise_exception('Response format invalid, missing meta property. data', data) |
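The `errorType`-to-exception dispatch can be illustrated with a self-contained sketch; `error_types` here is a hypothetical stand-in for the module-level mapping the real client defines:

```python
class RateLimitExceeded(Exception):
    pass

# hypothetical mapping from Foursquare errorType strings to exceptions
error_types = {"rate_limit_exceeded": RateLimitExceeded}

def raise_error_from_meta(meta):
    # 200 is success; 409 (conflict) is deliberately passed through
    if meta.get("code") in (200, 409):
        return meta
    exc = error_types.get(meta.get("errorType"))
    if exc:
        raise exc(meta.get("errorDetail"))
    raise Exception("Unknown error. meta: {}".format(meta))

try:
    raise_error_from_meta({"code": 403,
                           "errorType": "rate_limit_exceeded",
                           "errorDetail": "slow down"})
    caught = None
except RateLimitExceeded as e:
    caught = str(e)
```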
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _foursquare_urlencode(query, doseq=0, safe_chars="&/,+"):
"""Gnarly hack because Foursquare doesn't properly handle standard url encoding""" |
# Original doc: http://docs.python.org/2/library/urllib.html#urllib.urlencode
# Works the same way as urllib.urlencode except two differences -
# 1. it uses `quote()` instead of `quote_plus()`
# 2. it takes an extra parameter called `safe_chars` which is a string
# having the characters which should not be encoded.
#
# Courtesy of github.com/iambibhas
if hasattr(query, "items"):
# mapping objects
query = query.items()
else:
# it's a bother at times that strings and string-like objects are
# sequences...
try:
# non-sequence items should not work with len()
# non-empty strings will fail this
if len(query) and not isinstance(query[0], tuple):
raise TypeError
# zero-length sequences of all types will get here and succeed,
# but that's a minor nit - since the original implementation
# allowed empty dicts that type of behavior probably should be
# preserved for consistency
except TypeError:
ty,va,tb = sys.exc_info()
raise TypeError("not a valid non-string sequence or mapping object").with_traceback(tb)
l = []
if not doseq:
# preserve old behavior
for k, v in query:
k = parse.quote(_as_utf8(k), safe=safe_chars)
v = parse.quote(_as_utf8(v), safe=safe_chars)
l.append(k + '=' + v)
else:
for k, v in query:
k = parse.quote(_as_utf8(k), safe=safe_chars)
if isinstance(v, six.string_types):
v = parse.quote(_as_utf8(v), safe=safe_chars)
l.append(k + '=' + v)
else:
try:
# is this a sufficient test for sequence-ness?
len(v)
except TypeError:
# not a sequence
v = parse.quote(_as_utf8(v), safe=safe_chars)
l.append(k + '=' + v)
else:
# loop over the sequence
for elt in v:
l.append(k + '=' + parse.quote(_as_utf8(elt)))
return '&'.join(l) |
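The difference from standard urlencoding is easy to demonstrate: `urlencode` uses `quote_plus`, which turns spaces into `+` and percent-encodes commas, while the custom encoder's `quote()` call with extra safe characters leaves commas intact (useful for `ll=lat,lng` parameters) and encodes spaces as `%20`. The parameter values below are made up for illustration:

```python
from urllib import parse

params = {"ll": "40.7,-74.0", "query": "coffee shop"}

# standard urlencode: comma becomes %2C, space becomes +
standard = parse.urlencode(params)

# the quote() calls used in the custom encoder above
custom_value = parse.quote("coffee shop", safe="&/,+")
custom_ll = parse.quote("40.7,-74.0", safe="&/,+")
```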
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _attach_endpoints(self):
"""Dynamically attach endpoint callables to this client""" |
for name, endpoint in inspect.getmembers(self):
if inspect.isclass(endpoint) and issubclass(endpoint, self._Endpoint) and (endpoint is not self._Endpoint):
endpoint_instance = endpoint(self.base_requester)
setattr(self, endpoint_instance.endpoint, endpoint_instance) |
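The introspection pattern above can be sketched with a toy client. The class names (`Client`, `Venues`) and the `endpoint` attribute layout are illustrative assumptions, not the real library's API:

```python
import inspect

class Client(object):
    class _Endpoint(object):
        """Marker base class for endpoint groups."""
        def __init__(self, requester):
            self.requester = requester

    class Venues(_Endpoint):
        endpoint = "venues"  # attribute name the instance is bound under

    def __init__(self):
        self.base_requester = object()
        # discover nested _Endpoint subclasses and bind instances of them
        for name, member in inspect.getmembers(self):
            if (inspect.isclass(member)
                    and issubclass(member, self._Endpoint)
                    and member is not self._Endpoint):
                instance = member(self.base_requester)
                setattr(self, instance.endpoint, instance)

client = Client()
```

After construction, `client.venues` is a live `Venues` instance sharing the client's requester, which is exactly how the real `_attach_endpoints` wires up its endpoint groups.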
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register(self, widget, basename, **parameters):
""" Register a widget, URL basename and any optional URL parameters. Parameters are passed as keyword arguments, i.e. This would be the equivalent of manually adding the following to urlpatterns: MyWidget.as_view(), "widget_mywidget") """ |
self.registry.append((widget, basename, parameters)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def iterator(self):
""" Execute the API request and return an iterator over the objects. This method does not use the query cache. """ |
for obj in (self.execute().json().get("items") or []):
yield self.api_obj_class(self.api, obj) |
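A self-contained sketch of the same lazy-wrapping generator; `Pod` and `iterate` are illustrative stand-ins for `self.api_obj_class` and the method above:

```python
class Pod:
    """Illustrative stand-in for self.api_obj_class."""
    def __init__(self, api, obj):
        self.api = api
        self.obj = obj

def iterate(payload, api=None):
    # wrap each raw item only as the caller iterates; a missing or
    # null "items" key yields nothing, thanks to the `or []` guard
    for obj in (payload.get("items") or []):
        yield Pod(api, obj)

pods = list(iterate({"items": [{"name": "a"}, {"name": "b"}]}))
empty = list(iterate({"items": None}))
```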
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def version(self):
""" Get Kubernetes API version """ |
response = self.get(version="", base="/version")
response.raise_for_status()
data = response.json()
return (data["major"], data["minor"]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_kwargs(self, **kwargs):
""" Creates a full URL to request based on arguments. :Parametes: - `kwargs`: All keyword arguments to build a kubernetes API endpoint """ |
version = kwargs.pop("version", "v1")
if version == "v1":
base = kwargs.pop("base", "/api")
elif "/" in version:
base = kwargs.pop("base", "/apis")
else:
if "base" not in kwargs:
raise TypeError("unknown API version; base kwarg must be specified.")
base = kwargs.pop("base")
bits = [base, version]
# Overwrite (default) namespace from context if it was set
if "namespace" in kwargs:
n = kwargs.pop("namespace")
if n is not None:
if n:
namespace = n
else:
namespace = self.config.namespace
if namespace:
bits.extend([
"namespaces",
namespace,
])
url = kwargs.get("url", "")
if url.startswith("/"):
url = url[1:]
bits.append(url)
kwargs["url"] = self.url + posixpath.join(*bits)
return kwargs |
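The URL layout this produces can be reconstructed with a simplified standalone helper using the same `posixpath.join` trick; the host and resource names below are made up for illustration:

```python
import posixpath

def build_url(host, version="v1", base=None, namespace=None, url=""):
    # "v1" lives under /api, group/version strings under /apis
    if base is None:
        base = "/api" if version == "v1" else "/apis"
    bits = [base, version]
    if namespace:
        bits.extend(["namespaces", namespace])
    bits.append(url.lstrip("/"))
    return host + posixpath.join(*bits)

pods_url = build_url("http://localhost:8080", namespace="default", url="pods")
apps_url = build_url("http://localhost:8080", version="apps/v1",
                     url="deployments")
```

`posixpath.join` is used rather than `os.path.join` so the separators are always forward slashes, regardless of the platform the client runs on.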
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def request(self, *args, **kwargs):
""" Makes an API request based on arguments. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.request(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, *args, **kwargs):
""" Executes an HTTP GET. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.get(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def options(self, *args, **kwargs):
""" Executes an HTTP OPTIONS. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.options(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def head(self, *args, **kwargs):
""" Executes an HTTP HEAD. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.head(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def post(self, *args, **kwargs):
""" Executes an HTTP POST. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.post(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def put(self, *args, **kwargs):
""" Executes an HTTP PUT. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.put(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def patch(self, *args, **kwargs):
""" Executes an HTTP PATCH. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.patch(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, *args, **kwargs):
""" Executes an HTTP DELETE. :Parameters: - `args`: Non-keyword arguments - `kwargs`: Keyword arguments """ |
return self.session.delete(*args, **self.get_kwargs(**kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_file(cls, filename, **kwargs):
""" Creates an instance of the KubeConfig class from a kubeconfig file. :Parameters: - `filename`: The full path to the configuration file """ |
filename = os.path.expanduser(filename)
if not os.path.isfile(filename):
raise exceptions.PyKubeError("Configuration file {} not found".format(filename))
with open(filename) as f:
doc = yaml.safe_load(f.read())
self = cls(doc, **kwargs)
self.filename = filename
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clusters(self):
""" Returns known clusters by exposing as a read-only property. """ |
if not hasattr(self, "_clusters"):
cs = {}
for cr in self.doc["clusters"]:
cs[cr["name"]] = c = copy.deepcopy(cr["cluster"])
if "server" not in c:
c["server"] = "http://localhost"
BytesOrFile.maybe_set(c, "certificate-authority")
self._clusters = cs
return self._clusters |
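The cluster-parsing step can be exercised against a hand-written kubeconfig fragment, with no file I/O or YAML parsing needed; the cluster names and server below are invented for illustration:

```python
import copy

doc = {
    "clusters": [
        {"name": "local", "cluster": {}},  # no "server" key
        {"name": "prod", "cluster": {"server": "https://k8s.example.com"}},
    ]
}

clusters = {}
for cr in doc["clusters"]:
    # deepcopy so mutating the parsed entry never touches the raw doc
    clusters[cr["name"]] = c = copy.deepcopy(cr["cluster"])
    if "server" not in c:
        c["server"] = "http://localhost"  # same default as the code above
```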
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def users(self):
""" Returns known users by exposing as a read-only property. """ |
if not hasattr(self, "_users"):
us = {}
if "users" in self.doc:
for ur in self.doc["users"]:
us[ur["name"]] = u = copy.deepcopy(ur["user"])
BytesOrFile.maybe_set(u, "client-certificate")
BytesOrFile.maybe_set(u, "client-key")
self._users = us
return self._users |