| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def max_pool(input_layer, kernel, stride, edges=PAD_SAME, name=PROVIDED):
"""Performs max pooling. `kernel` is the patch that will be pooled and it describes the pooling along each of the 4 dimensions. `stride` is how big to take each step. Because more often than not, pooling is only done on the width and height of the image, the following shorthands are supported: * scalar (e.g. 3):
Square pooling on the image (`[b, c, r, d] = [1, 3, 3, 1]`). * singleton list (e.g. [3]):
Square pooling on the image (`[b, c, r, d] = [1, 3, 3, 1]`). * list of length 2 (e.g. [3, 2]):
Square pooling on the image (`[b, c, r, d] = [1, 3, 2, 1]`). Args: input_layer: The chainable object, supplied. kernel: The size of the patch for the pool, either an int or a length 1 or 2 sequence (if length 1 or int, it is expanded). stride: The strides as a length 1, 2 or 4 sequence or an integer. If an int, length 1 or 2, the stride in the first and last dimensions are 1. edges: Either `pt.PAD_SAME` or `pt.PAD_VALID` to control the padding. name: The name for this operation is also used to create/find the parameter variables. Returns: Handle to this layer. """ |
return _pool(input_layer, tf.nn.max_pool, kernel, stride, edges, name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bilinear_sampling(input_layer, x, y, name=PROVIDED):
"""Performs bilinear sampling. This must be a rank 4 Tensor. Implements the differentiable sampling mechanism with bilinear kernel in https://arxiv.org/abs/1506.02025. Given (x, y) coordinates for each output pixel, use bilinear sampling on the input_layer to fill the output. Args: input_layer: The chainable object, supplied. x: A tensor of size [batch_size, height, width, 1] representing the sampling x coordinates normalized to range [-1,1]. y: A tensor of size [batch_size, height, width, 1] representing the sampling y coordinates normalized to range [-1,1]. name: The name for this operation is also used to create/find the parameter variables. Returns: Handle to this layer """ |
input_layer.get_shape().assert_has_rank(4)
return _interpolate(im=input_layer, x=x, y=y, name=name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _kernel(kernel_spec):
"""Expands the kernel spec into a length 2 list. Args: kernel_spec: An integer or a length 1 or 2 sequence that is expanded to a list. Returns: A length 2 list. """ |
if isinstance(kernel_spec, tf.compat.integral_types):
return [kernel_spec, kernel_spec]
elif len(kernel_spec) == 1:
return [kernel_spec[0], kernel_spec[0]]
else:
assert len(kernel_spec) == 2
return kernel_spec |
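The expansion rule above is pure Python apart from the `tf.compat.integral_types` check. A minimal standalone sketch, substituting a plain `int` check (an assumption for illustration; the name `expand_kernel` is hypothetical):

```python
def expand_kernel(kernel_spec):
    """Expands an int or a length-1/2 sequence into a length-2 list."""
    if isinstance(kernel_spec, int):  # stand-in for tf.compat.integral_types
        return [kernel_spec, kernel_spec]
    if len(kernel_spec) == 1:
        return [kernel_spec[0], kernel_spec[0]]
    assert len(kernel_spec) == 2
    return list(kernel_spec)
```

So a scalar or singleton expands to a square patch, while a length-2 spec passes through unchanged.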
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _stride(stride_spec):
"""Expands the stride spec into a length 4 list. Args: stride_spec: If length 0, 1 or 2 then assign the inner dimensions, otherwise return stride_spec if it is length 4. Returns: A length 4 list. """ |
if stride_spec is None:
return [1, 1, 1, 1]
elif isinstance(stride_spec, tf.compat.integral_types):
return [1, stride_spec, stride_spec, 1]
elif len(stride_spec) == 1:
return [1, stride_spec[0], stride_spec[0], 1]
elif len(stride_spec) == 2:
return [1, stride_spec[0], stride_spec[1], 1]
else:
assert len(stride_spec) == 4
return stride_spec |
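Likewise, the stride expansion can be sketched standalone with a plain `int` check in place of `tf.compat.integral_types` (an assumption for illustration; `expand_stride` is a hypothetical name):

```python
def expand_stride(stride_spec):
    """Expands None, an int, or a length-1/2/4 sequence into a length-4 list.

    The batch and depth strides are pinned to 1 unless all 4 are given.
    """
    if stride_spec is None:
        return [1, 1, 1, 1]
    if isinstance(stride_spec, int):  # stand-in for tf.compat.integral_types
        return [1, stride_spec, stride_spec, 1]
    if len(stride_spec) == 1:
        return [1, stride_spec[0], stride_spec[0], 1]
    if len(stride_spec) == 2:
        return [1, stride_spec[0], stride_spec[1], 1]
    assert len(stride_spec) == 4
    return list(stride_spec)
```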
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_shape_on_tensor(tensor, shape):
"""Convenience to set a shape or check it.""" |
if shape is not None:
try:
tensor.set_shape(shape)
except ValueError:
raise ValueError("Requested shape does not match tensor's shape: %s vs %s"
% (shape, tensor.get_shape()))
elif tensor.get_shape().ndims is None:
raise ValueError('Unknown shape on tensor: %s' % tensor) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unwrap(tensor):
"""Returns the underlying tensor if tensor is wrapped or tensor. Args: tensor: The tensor to unwrap. Returns: Tensor or if it is a pretty tensor, the unwrapped version. Raises: ValueError: if tensor holds a sequence. """ |
while isinstance(tensor, (PrettyTensor, Loss)):
tensor = tensor.tensor
return tensor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wrap(tensor, books=None, tensor_shape=None):
"""Creates an input layer representing the given tensor. Args: tensor: The tensor. books: The bookkeeper; this is usually not required unless you are building multiple `tf.Graphs.` tensor_shape: An optional shape that will be set on the Tensor or verified to match the tensor. Returns: A layer. """ |
if books is None:
books = bookkeeper.for_default_graph()
if isinstance(tensor, PrettyTensor):
return tensor.as_layer()
elif isinstance(tensor, UnboundVariable):
def set_input_from_unbound_var(data):
"""Sets the input from the given unbound_var."""
if data is not None:
return wrap(data, books)
else:
return None
return _DeferredLayer(books, set_input_from_unbound_var, [tensor], {})
else:
tensor = tf.convert_to_tensor(tensor, name='input')
if tensor_shape:
_set_shape_on_tensor(tensor, tensor_shape)
return Layer(books, tensor=tensor, name=tensor.name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def template(key, books=None, optional=False):
"""Starts a Pretty Tensor graph template. ## Template Mode Templates allow you to define a graph with some unknown values. The most common use case is to leave the input undefined and then define a graph normally. The variables are only defined once the first time the graph is constructed. For example: template = (pretty_tensor.template('input') .fully_connected(200, name='l1') .fully_connected(200, name='l2')) train_output = template.construct(input=train_data) # All parameters are reused when the same template object is called again. test_output = template.construct(input=test_data) Any argument to a pretty tensor method can be substituted by using an `UnboundVariable`. This allows you to parameterize a graph in arbitrary ways. The most cannonical usage would be to substitute a phase variable. with pretty_tensor.defaults_scope(phase=UnboundVariable('train')):
# dropout uses train to optionaly disable itself. template = (pretty_tensor.template('input') .fully_connected(200, name='l1') .fully_connected(200, name='l2') .dropout(.8)) train_output = template.construct(input=train_data, train=True) test_output = template.construct(input=test_data, train=False) You should use caution because if a template is called with incompatible values (e.g. train and test using different widths), then it will break. This is because we guarantee variable reuse across instantiations. template = (pretty_tensor.template('input') .fully_connected(200, name='l1') .fully_connected( pretty_tensor.UnboundVariable('width'), name='l2')) train_output = template.construct(input=train_data, width=200) # The following line will die because the shared parameter is the wrong # size. test_output = template.construct(input=test_data, width=100) A Layer in the resulting graph can be realized by calling `bind(key=value)` and then `construct`. Args: key: A key for this template, used for assigning the correct substitution. books: The bookkeeper. optional: If this template is an optional value. Returns: A template that can be constructed or attached to other layers and that guarantees parameter reuse when constructed/attached multiple times. """ |
if books is None:
books = bookkeeper.for_default_graph()
def set_input_from_unbound_var(data):
"""Sets the input from the given unbound_var."""
if data is not None:
return wrap(data, books)
else:
return None
if optional:
data = UnboundVariable(key=key, default=None)
else:
data = UnboundVariable(key=key)
return _DeferredLayer(books, set_input_from_unbound_var, [data], {}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wrap_sequence(sequence, books=None, tensor_shape=None):
"""Creates an input layer representing the given sequence of tensors. Args: sequence: A sequence of tensors. books: The bookkeeper. tensor_shape: An optional shape that will be set on the Tensor or verified to match the tensor. Returns: A layer. """ |
if books is None:
books = bookkeeper.for_default_graph()
my_sequence = [
wrap(t, books=books, tensor_shape=tensor_shape) for t in sequence]
return Layer(books, sequence=my_sequence, name=my_sequence[0].name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def defaults_scope(**kwargs):
"""Creates a scope for the defaults that are used in a `with` block. Note: `defaults_scope` supports nesting where later defaults can be overridden. Also, an explicitly given keyword argument on a method always takes precedence. In addition to setting defaults for some methods, this also can control: * `summary_collections`: Choose which collection to place summaries in or disable with `None`. * `trainable_variables`: Boolean indicating if variables are trainable. * `variable_collections`: Default collections in which to place variables; `tf.GraphKeys.GLOBAL_VARIABLES` is always included. Args: **kwargs: The defaults. Yields: Doesn't really yield, instead this creates a Context Manager for use in a `with` statement. Raises: ValueError: if a collection type is accidently supplied a string. """ |
_assert_value_not_string('summary_collections', kwargs)
_assert_value_not_string('variable_collections', kwargs)
_check_defaults(kwargs)
global _defaults
old_defaults = _defaults
_defaults = chain_dict.ChainDict(_defaults)
_defaults.update(kwargs)
# Special logic to support summary_collections.
# This is added here because introducing more scopes would add more confusion
# than overloading this one a bit.
books = bookkeeper.for_default_graph()
if 'summary_collections' in _defaults:
books.summary_collections = _defaults['summary_collections']
else:
books.reset_summary_collections()
try:
yield _defaults
finally:
_defaults = old_defaults |
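The save/layer/restore pattern above can be sketched without the TensorFlow bookkeeper, using a plain dict copy as a stand-in for `chain_dict.ChainDict` (an assumption for illustration only):

```python
import contextlib

_defaults = {}

@contextlib.contextmanager
def defaults_scope(**kwargs):
    """Temporarily layers kwargs over the module-level defaults."""
    global _defaults
    old_defaults = _defaults
    _defaults = dict(old_defaults)  # stand-in for chain_dict.ChainDict
    _defaults.update(kwargs)
    try:
        yield _defaults
    finally:
        # Restore even if the with-body raised, so scopes nest cleanly.
        _defaults = old_defaults

with defaults_scope(activation='relu'):
    inner = dict(_defaults)
    with defaults_scope(activation='tanh'):
        nested = dict(_defaults)  # the inner scope overrides the outer value
outer = dict(_defaults)           # restored after the block exits
```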
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def join_pretty_tensors(tensors, output, join_function=None, name='join'):
"""Joins the list of pretty_tensors and sets head of output_pretty_tensor. Args: tensors: A sequence of Layers or SequentialLayerBuilders to join. output: A pretty_tensor to set the head with the result. join_function: A function to join the tensors, defaults to concat on the last dimension. name: A name that is used for the name_scope Returns: The result of calling with_tensor on output Raises: ValueError: if pretty_tensors is None or empty. """ |
if not tensors:
raise ValueError('pretty_tensors must be a non-empty sequence.')
with output.g.name_scope(name):
if join_function is None:
# Use depth concat
last_dim = len(tensors[0].shape) - 1
return output.with_tensor(tf.concat(tensors, last_dim))
else:
return output.with_tensor(join_function(tensors)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _merge_unbound_var_dicts(src, dst):
"""Merges src into dst and throws an exception if a value is incompatible.""" |
for k, v in six.iteritems(src):
if dst.get(k, v) != v:
trace1 = ''.join(scopes.skip_common_stack_elements(v.stacktrace, dst[
k].stacktrace))
trace2 = ''.join(
scopes.skip_common_stack_elements(dst[k].stacktrace, v.stacktrace))
raise ValueError('Key conflict: %s\nDefined At:\n%s\nand\n%s' %
(k, trace1, trace2))
else:
dst[k] = v |
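Stripped of the stack-trace diagnostics, the merge policy above is: copy `src` into `dst`, raising as soon as a key would change value. A minimal sketch (`merge_or_conflict` is a hypothetical name):

```python
def merge_or_conflict(src, dst):
    """Merges src into dst in place, raising if a key maps to a different value."""
    for k, v in src.items():
        # dst.get(k, v) returns v when k is absent, so only a *changed*
        # value trips the conflict check.
        if dst.get(k, v) != v:
            raise ValueError('Key conflict: %s' % k)
        dst[k] = v

d = {'a': 1}
merge_or_conflict({'a': 1, 'b': 2}, d)  # identical values merge fine
try:
    merge_or_conflict({'b': 3}, d)      # b already maps to 2
    conflicted = False
except ValueError:
    conflicted = True
```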
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _assign_values_to_unbound_vars(unbound_vars, unbound_var_values):
"""Assigns values to the vars and raises ValueError if one is missing.""" |
context = {}
for key, value in six.iteritems(unbound_var_values):
if key not in unbound_vars:
raise ValueError('unexpected key: %s. Legal values are: %s' %
(key, list(six.iterkeys(unbound_vars))))
context[unbound_vars[key]] = value
unspecified = []
for unbound_var in six.itervalues(unbound_vars):
if unbound_var not in context:
if unbound_var.has_default():
context[unbound_var] = unbound_var.default
else:
unspecified.append(unbound_var.key)
if unspecified:
raise ValueError('Unspecified keys: %s' % unspecified)
return context |
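The assignment logic above can be exercised without the real `UnboundVariable` class. A sketch with a minimal stand-in (`StubVar` and `assign_values` are hypothetical names for illustration):

```python
class StubVar(object):
    """Minimal stand-in for UnboundVariable (hypothetical, for illustration)."""

    def __init__(self, key, default=None, has_default=False):
        self.key = key
        self.default = default
        self._has_default = has_default

    def has_default(self):
        return self._has_default

def assign_values(unbound_vars, unbound_var_values):
    """Maps each var object to its supplied value or default; raises otherwise."""
    context = {}
    for key, value in unbound_var_values.items():
        if key not in unbound_vars:
            raise ValueError('unexpected key: %s' % key)
        context[unbound_vars[key]] = value
    unspecified = []
    for var in unbound_vars.values():
        if var not in context:
            if var.has_default():
                context[var] = var.default
            else:
                unspecified.append(var.key)
    if unspecified:
        raise ValueError('Unspecified keys: %s' % unspecified)
    return context

x = StubVar('input')
p = StubVar('phase', default='test', has_default=True)
ctx = assign_values({'input': x, 'phase': p}, {'input': 42})
```

Note that the returned context is keyed by the variable objects themselves, not their string keys, which is what lets the construction pass substitute them in place.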
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def construct_all(templates, **unbound_var_values):
"""Constructs all the given templates in a single pass without redundancy. This is useful when the templates have a common substructure and you want the smallest possible graph. Args: templates: A sequence of templates. **unbound_var_values: The unbound_var values to replace. Returns: A list of results corresponding to templates. Raises: TypeError: If any value in templates is unsupported. ValueError: If the unbound_var values specified are not complete or contain unknown values. """ |
def _merge_dicts(src, dst):
for k, v in six.iteritems(src):
if dst.get(k, v) != v:
raise ValueError('Conflicting values bound for %s: %s and %s' %
(k, v, dst[k]))
else:
dst[k] = v
# pylint: disable=protected-access
all_unbound_vars = {}
context = {}
for x in templates:
if isinstance(x, _DeferredLayer):
_merge_unbound_var_dicts(x.unbound_vars, all_unbound_vars)
_merge_dicts(x._partial_context, context)
else:
raise TypeError('Unexpected type: %s' % type(x))
_merge_dicts(
_assign_values_to_unbound_vars(all_unbound_vars, unbound_var_values),
context)
# We need to create a result of known size to avoid client pylint errors.
result = list(templates)
for i, x in enumerate(result):
if isinstance(x, _DeferredLayer):
result[i] = x._construct(context)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _strip_unnecessary_contents_from_stack(result, processed):
"""Remove the distracting lines from the stored tracebacks. This also reduces memory overhead by removing the frame contents. This is very important when doing long unrolls. Args: result: The result to process. processed: A set of already processed nodes, used to stop early. """ |
# pylint: disable=protected-access
if isinstance(result, (PrettyTensor, Loss)):
if result.is_sequence():
for tensor in result.sequence:
_strip_unnecessary_contents_from_stack(tensor, processed)
return
else:
result = result.tensor
if hasattr(result, 'op'):
result = result.op
if result in processed:
return
else:
processed.add(result)
trace = []
found = False
for f, line_no, method, _ in result._traceback:
if (method in ('_replace_deferred', '_construct') and
f.endswith('pretty_tensor_class.py')):
found = True
continue
trace.append((f, line_no, method, {}))
result._traceback = trace
# Assume that if we didn't find any PT deferred lines, then this node is
# not part of the deferred construction.
if not found:
return
for inp in result.inputs:
_strip_unnecessary_contents_from_stack(inp, processed) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _gen_ipython_string(func, args, defaults, original_doc):
"""Provides auto-complete hint to ipython. If the first line in a docstring is fn(arg1=, arg2=) then they are added to auto-complete. This cannot be called on an instance method. Args: func: The function that will be modified. args: The arguments that this function takes in order. defaults: The default arguments corresponding the last arguments. original_doc: Original docstring to assign after the magic string. Returns: The new doc string with the magic bit prepended. """ |
magic_string = '%s(' % func.__name__
if defaults:
default_offset = len(args) - len(defaults)
else:
default_offset = len(args)
for i, value in enumerate(args):
if i >= default_offset:
magic_string += '%s=%s, ' % (value, defaults[i - default_offset])
else:
magic_string += '%s, ' % value
if args:
magic_string = magic_string[:-2]
magic_string += ')\n\n'
if original_doc is not None:
magic_string += original_doc
return magic_string |
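The signature-line construction above is self-contained. A compact sketch of just that part (`gen_signature` is a hypothetical name; it returns the one-line hint rather than the full docstring):

```python
def gen_signature(name, args, defaults):
    """Builds the 'fn(arg1, arg2=default)' hint line described above."""
    # Defaults align with the *last* len(defaults) arguments.
    default_offset = len(args) - len(defaults) if defaults else len(args)
    parts = []
    for i, arg in enumerate(args):
        if i >= default_offset:
            parts.append('%s=%s' % (arg, defaults[i - default_offset]))
        else:
            parts.append(arg)
    return '%s(%s)' % (name, ', '.join(parts))
```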
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _should_defer(input_layer, args, kwargs):
"""Checks to see if any of the args are templates.""" |
for arg in itertools.chain([input_layer], args, six.itervalues(kwargs)):
if isinstance(arg, (_DeferredLayer, UnboundVariable)):
return True
elif (isinstance(arg, collections.Sequence) and
not isinstance(arg, six.string_types)):
if _should_defer(None, arg, {}):
return True
elif isinstance(arg, collections.Mapping):
if _should_defer(None, (), arg):
return True
return False |
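The recursive walk over args, kwargs, and nested sequences/mappings can be sketched with a caller-supplied predicate in place of the `isinstance` check against `_DeferredLayer`/`UnboundVariable` (the predicate parameter is an assumption for illustration):

```python
import itertools

def should_defer(args, kwargs, is_deferred):
    """True if any arg (including nested ones) satisfies is_deferred."""
    for arg in itertools.chain(args, kwargs.values()):
        if is_deferred(arg):
            return True
        # Strings are sequences too, so they must not be recursed into;
        # checking (list, tuple) sidesteps that here.
        if isinstance(arg, (list, tuple)):
            if should_defer(arg, {}, is_deferred):
                return True
        elif isinstance(arg, dict):
            if should_defer((), arg, is_deferred):
                return True
    return False

class Marker(object):  # hypothetical stand-in for a template-like value
    pass
```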
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _method_scope(input_layer, name):
"""Creates a nested set of name and id scopes and avoids repeats.""" |
global _in_method_scope
# pylint: disable=protected-access
with input_layer.g.as_default(), \
scopes.var_and_name_scope(
None if _in_method_scope else input_layer._scope), \
scopes.var_and_name_scope((name, None)) as (scope, var_scope):
was_in_method_scope = _in_method_scope
yield scope, var_scope
_in_method_scope = was_in_method_scope |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _conversion_function(pt_wrapper, dtype=None, name=None, as_ref=False):
"""Allows PrettyTensors and Loss to work as a tensor.""" |
# Ignore as_ref to not create backward compatibility issues.
_ = name, as_ref
t = pt_wrapper.tensor
if dtype and not dtype.is_compatible_with(t.dtype):
raise ValueError(
'Tensor conversion requested dtype %s for Tensor with dtype %s: %r' %
(dtype, t.dtype, t))
return t |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bind(self, **bindings):
"""Makes the bindings to each item in this and returns a new tuple.""" |
found_vars = set()
result = []
for layer in self.flatten():
if isinstance(layer, _DeferredLayer):
var_keys = {var.key for var in six.itervalues(layer.unbound_vars)}
layers_bindings = {
k: v
for k, v in six.iteritems(bindings) if k in var_keys
}
result.append(layer.bind(**layers_bindings))
found_vars.update(six.iterkeys(layers_bindings))
else:
result.append(layer)
missing_vars = set(six.iterkeys(bindings)) - found_vars
if missing_vars:
raise ValueError('Unused bindings: %s' % missing_vars)
return self.__class__(*result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_fn(self, *binding_order):
"""Creates a function by binding the arguments in the given order. Args: *binding_order: The unbound variables. This must include all values. Returns: A function that takes the arguments of binding_order. Raises: ValueError: If the bindings are missing values or include unknown values. """ |
if len(binding_order) != len(self.unbound_vars):
raise ValueError('All vars must be specified.')
for arg in binding_order:
if arg not in self.unbound_vars:
raise ValueError('Unknown binding: %s' % arg)
def func(*args, **kwargs):
"""Constructs a template."""
if len(binding_order) != len(args):
raise ValueError('Missing values, expects: %s' % binding_order)
values = dict(zip(binding_order, args))
values.update(kwargs)
return self.construct(**values)
func.__doc__ = _gen_ipython_string(func, binding_order, [], func.__doc__)
return func |
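Detached from the template class, the wrapper built above is just a positional-to-keyword adapter with up-front validation. A sketch, assuming a `construct` callable and a set of known keys (`make_positional_fn` is a hypothetical name):

```python
def make_positional_fn(construct, unbound_keys, binding_order):
    """Wraps construct(**values) so bindings can be passed positionally."""
    if len(binding_order) != len(unbound_keys):
        raise ValueError('All vars must be specified.')
    for arg in binding_order:
        if arg not in unbound_keys:
            raise ValueError('Unknown binding: %s' % arg)

    def func(*args, **kwargs):
        if len(binding_order) != len(args):
            raise ValueError('Missing values, expects: %s' % (binding_order,))
        values = dict(zip(binding_order, args))
        values.update(kwargs)
        return construct(**values)
    return func

f = make_positional_fn(lambda **kw: kw, {'input', 'width'}, ('input', 'width'))
```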
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _method_complete(self, result):
"""Called after a registered method with the result.""" |
if isinstance(result, (PrettyTensor, Loss, PrettyTensorTupleMixin)):
return result
elif (isinstance(result, collections.Sequence) and
not isinstance(result, six.string_types)):
return self.with_sequence(result)
else:
return self.with_tensor(result) |
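The dispatch above (pass wrapped results through, wrap sequences one way and single tensors another) can be sketched with injected wrappers instead of the PrettyTensor methods (all parameter names here are hypothetical):

```python
def method_complete(result, wrap_tensor, wrap_sequence, passthrough_types=()):
    """Routes a registered method's result to the appropriate wrapper."""
    if passthrough_types and isinstance(result, passthrough_types):
        return result  # already a wrapped type; return as-is
    # Strings count as sequences in the original six-era check, so they are
    # excluded there; restricting to (list, tuple) has the same effect here.
    if isinstance(result, (list, tuple)):
        return wrap_sequence(result)
    return wrap_tensor(result)
```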
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_loss(self, loss, name=None):
"""Adds a loss and returns a wrapper for that loss.""" |
self.bookkeeper.add_loss(loss, name=name)
return Loss(self.bookkeeper, tensor=loss, name=name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _replace_args_with_defaults(self, _args=None, **kwargs):
"""Internal method to fill absent values in the kwargs with the defaults. Args: _args: A list of arguments to replace if a subset is required. Name chosen to prevent conflicts with kwargs. **kwargs: The arguments to replace with defaults. Returns: A map with the same fields as kwargs, but absent values are filled with defaults. """ |
if _args is None:
_args = six.iterkeys(kwargs)
my_defaults = self.defaults
for k in _args:
if k not in kwargs:
if k in my_defaults:
kwargs[k] = my_defaults[k]
elif k in _defaults:
kwargs[k] = _defaults[k]
return kwargs |
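The two-level default lookup (instance defaults first, then the module-level `_defaults`) can be sketched standalone; `GLOBAL_DEFAULTS` stands in for `_defaults` and the function name is hypothetical:

```python
GLOBAL_DEFAULTS = {'activation': 'relu', 'stddev': 0.01}

def replace_args_with_defaults(kwargs, instance_defaults, args=None):
    """Fills absent kwargs, preferring instance defaults over globals.

    Note: mirroring the source, when args is None only keys already present
    in kwargs are considered, so nothing new is ever filled in that case.
    """
    if args is None:
        args = list(kwargs)
    for k in args:
        if k not in kwargs:
            if k in instance_defaults:
                kwargs[k] = instance_defaults[k]
            elif k in GLOBAL_DEFAULTS:
                kwargs[k] = GLOBAL_DEFAULTS[k]
    return kwargs
```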
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def attach_template(self, _template, _key, **unbound_var_values):
"""Attaches the template to this such that _key=this layer. Note: names were chosen to avoid conflicts with any likely unbound_var keys. Args: _template: The template to construct. _key: The key that this layer should replace. **unbound_var_values: The values for the unbound_vars. Returns: A new layer with operation applied. Raises: ValueError: If _key is specified twice or there is a problem computing the template. """ |
if _key in unbound_var_values:
raise ValueError('%s specified twice.' % _key)
unbound_var_values[_key] = self
return _template.as_layer().construct(**unbound_var_values) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _as_graph_element(self):
"""Returns the underlying graph element if possible.""" |
if self.is_sequence():
raise TypeError('A Pretty Tensor that holds a sequence cannot be '
'represented as a graph element.')
else:
# Self might be holding something else that isn't a true tensor, so
# if the 'tensor' can behave like a graph element, look for its
# _AsGraphElement method and call it. Graph elements themselves may not
# have or need this method, so just return other items directly.
obj = self.tensor
conv_fn = getattr(obj, '_as_graph_element', None)
if conv_fn and isinstance(conv_fn, collections.Callable):
obj = conv_fn()
return obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mark_as_required(self):
"""Adds this loss to the MARKED_LOSSES collection.""" |
if self not in tf.get_collection(bookkeeper.GraphKeys.MARKED_LOSSES):
tf.add_to_collection(bookkeeper.GraphKeys.MARKED_LOSSES, self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _construct(self, context):
"""Constructs this by calling the deferred method. This assumes that all unbound_vars have been specified in context and if this layer has already been computed in this context, then the previously constructed value will be returned. Args: context: A dict of UnboundVariables/_DeferredLayers to their values. Returns: The result of calling the given method on this layer. """ |
with self.g.as_default():
if self._pass_through:
# pylint: disable=protected-access
return self._pass_through._construct(context)
current_value = context.get(self, None)
assert current_value is not _unspecified, 'Circular dependency'
if current_value is not None:
return current_value
context[self] = _unspecified
method_args = self._replace_deferred(self._method_args, context)
method_kwargs = self._replace_deferred(self._method_kwargs, context)
result = self._method(*method_args, **method_kwargs)
_strip_unnecessary_contents_from_stack(result, set())
context[self] = result
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bind(self, **bindings):
"""Creates a new template with the given unbound variables bound. Args: **bindings: Arguments for every deferred parameter. Returns: A new template with the given bindings. Raises: ValueError: If any of the bindings do not correspond to unbound variables. """ |
new_context = dict(self._partial_context)
unknown_keys = []
for k, v in six.iteritems(bindings):
if k not in self._unbound_vars:
unknown_keys.append(k)
new_context[self._unbound_vars[k]] = v
if unknown_keys:
raise ValueError(
'The following keys are not associated with any unbound vars: %s, '
'legal values are %s' %
(unknown_keys, list(self._unbound_vars.keys())))
return _DeferredLayer(self.bookkeeper,
None,
(),
{},
scope=self._scope,
defaults=self._defaults,
pass_through=self,
partial_context=new_context) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def attach_template(self, _template, _key, **unbound_var_values):
"""Attaches the template to this with the _key is supplied with this layer. Note: names were chosen to avoid conflicts. Args: _template: The template to construct. _key: The key that this layer should replace. **unbound_var_values: The values for the unbound_vars. Returns: A new layer with operation applied. Raises: ValueError: If _key is specified twice or there is a problem computing the template. """ |
if _key in unbound_var_values:
raise ValueError('%s specified twice.' % _key)
unbound_var_values[_key] = self
return _DeferredLayer(self.bookkeeper,
_template.as_layer().construct,
[],
unbound_var_values,
scope=self._scope,
defaults=self._defaults,
partial_context=self._partial_context) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _method_complete(self, result):
"""Called after an extention method with the result.""" |
if isinstance(result, PrettyTensor):
self._head = result
return self
elif isinstance(result, Loss):
return result
elif isinstance(result, PrettyTensorTupleMixin):
self._head = result[0]
return result
else:
self._head = self._head.with_tensor(result)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def subdivide_with(self, branches, join_function, name='mixed'):
"""Branches this pretty tensor and uses an explicit join function. This should be used in a with statement, for example to fork and join with a sum: with pt.subdivide_with(2, tf.add_n) as [a, b]: Args: branches: The number of branches. join_function: A function to use when rejoining. name: A base name for this branch. Returns: A python context manager to use in a with statement that supplies a sequence of tensors with one per branch. Raises: ValueError: if join_function is None. """ |
return _subdivide_context(self, branches, join_function, name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def variable(self, var_name, shape, init, dt=tf.float32, train=None):
"""Adds a named variable to this bookkeeper or returns an existing one. Variables marked train are returned by the training_variables method. If the requested name already exists and it is compatible (same shape, dt and train) then it is returned. In case of an incompatible type, an exception is thrown. Args: var_name: The unique name of this variable. If a variable with the same name exists, then it is returned. shape: The shape of the variable. init: The init function to use or a Tensor to copy. dt: The datatype, defaults to float. This will automatically extract the base dtype. train: Whether or not the variable should be trained; defaults to True unless a default_scope has overridden it. Returns: A TensorFlow tensor. Raises: ValueError: if reuse is False (or unspecified and allow_reuse is False) and the variable already exists or if the specification of a reused variable does not match the original. """ |
# Make sure it is a TF dtype and convert it into a base dtype.
dt = tf.as_dtype(dt).base_dtype
if var_name in self.vars:
v = self.vars[var_name]
if v.get_shape() != shape:
raise ValueError(
'Shape mismatch: %s vs %s. Perhaps an UnboundVariable had '
'incompatible values within a graph.' % (v.get_shape(), shape))
return v
elif callable(init):
if train is None:
train = _defaults.get('trainable_variables', True)
variable_collections = _defaults.get('variable_collections', ())
if tf.GraphKeys.GLOBAL_VARIABLES not in variable_collections:
variable_collections = list(variable_collections) + [
tf.GraphKeys.GLOBAL_VARIABLES]
v = tf.get_variable(var_name,
shape=shape,
dtype=dt,
initializer=init,
trainable=train,
collections=variable_collections)
self.vars[var_name] = v
return v
else:
v = tf.convert_to_tensor(init, name=var_name, dtype=dt)
v.get_shape().assert_is_compatible_with(shape)
self.vars[var_name] = v
return v |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fill_kwargs(self, input_layer, kwargs):
"""Applies name_suffix and defaults to kwargs and returns the result.""" |
return input_layer._replace_args_with_defaults(_args=self._assign_defaults,
**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_deferred(self, func, input_layer, deferred_args, deferred_kwargs, name):
"""Creates a deferred node with captured scope. Args: func: The original function to call. input_layer: The input_layer. deferred_args: The arguments that will be used bythe deferred function. deferred_kwargs: The keyword args for the deferred function. name: The name of this layer. Returns: A _DeferredLayer that will execute func in the correct scopes. """ |
my_defaults = _defaults
def _with_method_complete(*args, **kwargs):
input_layer = args[0]
with input_layer.g.as_default(), defaults_scope(**my_defaults), \
tf.name_scope(name):
return input_layer._method_complete(func(*args, **kwargs))
# The deferred layer passes on the scope of the source layer so that the
# construction scope matches that of the immediate version.
full_args = [input_layer]
full_args.extend(deferred_args)
partial_context = {}
if isinstance(input_layer, _DeferredLayer):
partial_context = input_layer._partial_context
return _DeferredLayer(input_layer.bookkeeper,
scopes.Template(None, _with_method_complete),
full_args,
deferred_kwargs,
scope=input_layer._scope,
defaults=input_layer.defaults,
partial_context=partial_context) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_method(self, func):
"""Creates the method.""" |
# pylint: disable=missing-docstring
@functools.wraps(func)
def method(input_layer, *args, **kwargs):
return func(input_layer, *args, **self.fill_kwargs(input_layer, kwargs))
return method |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_tuple(x):
"""TF has an obnoxious habit of being lenient with single vs tuple.""" |
if isinstance(x, prettytensor.PrettyTensor):
if x.is_sequence():
return tuple(x.sequence)
else:
return (x.tensor,)
elif isinstance(x, tuple):
return x
elif (isinstance(x, collections.Sequence) and
not isinstance(x, six.string_types)):
return tuple(x)
else:
return (x,) |
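Stripping out the TensorFlow-specific `PrettyTensor` branch, the normalization logic reduces to a few isinstance checks; a minimal stdlib sketch:

```python
import collections.abc


def make_tuple(x):
  """Normalizes x into a tuple: tuples pass through, other non-string
  sequences are converted, and scalars are wrapped in a 1-tuple."""
  if isinstance(x, tuple):
    return x
  elif (isinstance(x, collections.abc.Sequence) and
        not isinstance(x, str)):
    return tuple(x)
  else:
    return (x,)


print(make_tuple(3))       # a bare scalar is wrapped
print(make_tuple([1, 2]))  # a list is converted
```

Note that strings are deliberately treated as scalars even though they are sequences, matching the `six.string_types` exclusion above.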
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_from_queue(cls, input_queue, replay_size, batch_size):
"""Builds a `ReplayableQueue` that draws from a regular `input_queue`. Args: input_queue: The queue to draw from. replay_size: The size of the replay buffer. batch_size: The size of each batch. Returns: A ReplayableQueue. """ |
return cls(
lambda: input_queue.dequeue_many(batch_size),
replay_size,
batch_size=batch_size) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def replay_scope(self, sess):
"""Enters a replay scope that unsets it at the end.""" |
current_replay = self.replay(sess)
try:
self.set_replay(sess, True)
yield
finally:
self.set_replay(sess, current_replay) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_replay(self, sess, replay):
"""Changes the current replay setting on the graph.""" |
sess.run(self._set_replay, {self._set_replay_ph: replay}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refill(self, sess):
"""Clears the current queue and then refills it with new data.""" |
sess.run(self._clear_queue)
# Run until full.
while sess.run(self._fill_queue):
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def maybe_download(url, filename):
"""Download the data from Yann's website, unless it's already here.""" |
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = request.urlretrieve(url + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
return filepath |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def permute_data(arrays, random_state=None):
"""Permute multiple numpy arrays with the same order.""" |
if any(len(a) != len(arrays[0]) for a in arrays):
raise ValueError('All arrays must be the same length.')
if not random_state:
random_state = np.random
order = random_state.permutation(len(arrays[0]))
return [a[order] for a in arrays] |
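A stdlib-only analogue (using `random.Random` in place of a numpy `RandomState`, purely for illustration) shows how drawing one sampled order keeps the arrays aligned:

```python
import random


def permute_lists(lists, seed=None):
  """Permutes multiple equal-length lists with the same order, so
  corresponding elements stay paired after shuffling."""
  if any(len(a) != len(lists[0]) for a in lists):
    raise ValueError('All lists must be the same length.')
  rng = random.Random(seed)
  order = list(range(len(lists[0])))
  rng.shuffle(order)
  # Apply the single shared order to every list.
  return [[a[i] for i in order] for a in lists]


xs, ys = permute_lists([[10, 20, 30], ['a', 'b', 'c']], seed=0)
```

The key invariant is that `(xs[i], ys[i])` pairs survive the shuffle intact.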
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mnist(training):
"""Downloads MNIST and loads it into numpy arrays.""" |
if training:
data_filename = 'train-images-idx3-ubyte.gz'
labels_filename = 'train-labels-idx1-ubyte.gz'
count = 60000
else:
data_filename = 't10k-images-idx3-ubyte.gz'
labels_filename = 't10k-labels-idx1-ubyte.gz'
count = 10000
data_filename = maybe_download(MNIST_URL, data_filename)
labels_filename = maybe_download(MNIST_URL, labels_filename)
return (mnist_extract_data(data_filename, count),
mnist_extract_labels(labels_filename, count)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def shakespeare(chunk_size):
"""Downloads Shakespeare, converts it into ASCII codes and chunks it. Args: chunk_size: The dataset is broken down so that it is shaped into batches x chunk_size. Returns: A numpy array of ASCII codes shaped into batches x chunk_size. """ |
file_name = maybe_download('http://cs.stanford.edu/people/karpathy/char-rnn/',
'shakespear.txt')
with open(file_name) as f:
shakespeare_full = f.read()
# Truncate the data.
length = (len(shakespeare_full) // chunk_size) * chunk_size
if length < len(shakespeare_full):
shakespeare_full = shakespeare_full[:length]
  arr = np.array([convert_to_int(c) for c in shakespeare_full])
  return arr.reshape((len(arr) // chunk_size, chunk_size))
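The truncate-then-chunk arithmetic can be sketched without numpy; `chunk_text` below is a hypothetical helper that mirrors the reshape:

```python
def chunk_text(text, chunk_size):
  """Truncates text to a multiple of chunk_size, then splits it into
  equal-length rows (the stdlib equivalent of the reshape above)."""
  length = (len(text) // chunk_size) * chunk_size
  text = text[:length]
  return [list(text[i:i + chunk_size])
          for i in range(0, length, chunk_size)]


# 8 characters with chunk_size 3: two full rows, the last 2 chars dropped.
rows = chunk_text('abcdefgh', 3)
```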
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def baby_names(max_length=15):
"""Opens the baby_names csv file and produces numpy array. Args: max_length: The maximum length, 15 was the longest name when this was written. Short entries will be padded with the EOS marker. Returns: A numpy array of the names converted to ascii codes, the labels and an array of lengths. Raises: ValueError: if max_length is too small. """ |
names = []
lengths = []
targets = []
with open(os.path.join(os.path.dirname(sys.modules[__name__].__file__),
'baby_names.csv'), 'rb') as f:
first = True
for l in csv.reader(f, delimiter=','):
if first:
first = False
continue
assert len(l) == 4, l
name = l[0]
if max_length < len(name):
        raise ValueError('Max length is too small: %d < %d' %
                         (max_length, len(name)))
chars = [convert_to_int(c) for c in name]
names.append(chars + ([EOS] * (max_length - len(chars))))
lengths.append([len(name)])
values = [float(l[2]), float(l[3])]
if abs(sum(values) - 1) > 0.001:
raise ValueError('Each row must sum to 1: %s' % l)
targets.append(values)
return np.array(names), np.array(targets), np.array(lengths) |
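The padding scheme can be illustrated in isolation; this sketch assumes `EOS = 0` and uses `ord` as a stand-in for the source's `convert_to_int`:

```python
EOS = 0  # assumed end-of-sequence marker; the real value lives in data_utils


def pad_name(name, max_length, eos=EOS):
  """Converts a name to character codes and right-pads with EOS
  markers so every entry has the same length."""
  if max_length < len(name):
    raise ValueError('Max length %d is too small for %r' %
                     (max_length, name))
  codes = [ord(c) for c in name]
  return codes + [eos] * (max_length - len(codes))


padded = pad_name('Ada', 5)  # 3 codes plus 2 EOS markers
```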
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reshape_data(tensor, per_example_length=1):
"""Reshapes input so that it is appropriate for sequence_lstm.. The expected format for sequence lstms is [timesteps * batch, per_example_length] and the data produced by the utilities is [batch, timestep, *optional* expected_length]. The result can be cleaved so that there is a Tensor per timestep. Args: tensor: The tensor to reshape. per_example_length: The number of examples at each timestep. Returns: A Pretty Tensor that is compatible with cleave and then sequence_lstm. """ |
# We can put the data into a format that can be easily cleaved by
# transposing it (so that it varies fastest in batch) and then making each
# component have a single value.
# This will make it compatible with the Pretty Tensor function
# cleave_sequence.
dims = [1, 0]
for i in xrange(2, tensor.get_shape().ndims):
dims.append(i)
return pt.wrap(tf.transpose(tensor, dims)).reshape([-1, per_example_length]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def batch_normalize_with_arguments(x, arguments):
"""Applies batch normalization to x as specified in arguments. Args: x: A Pretty Tensor. arguments: Either a boolean to batch_normalize or a BatchNormalizationArguments Returns: x with batch normalization applied. """ |
x = prettytensor.wrap(x)
# Backwards compatibility.
if isinstance(arguments, bool):
if arguments:
return x.batch_normalize()
else:
return x
# pylint: disable=protected-access
kwargs = arguments._asdict()
defaults = prettytensor._defaults
# pylint: enable=protected-access
for arg in ('learned_moments_update_rate', 'variance_epsilon',
'scale_after_normalization'):
if kwargs.get(arg, None) is None:
if arg in defaults:
kwargs[arg] = defaults[arg]
else:
del kwargs[arg]
return x.batch_normalize(**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multilayer_fully_connected(images, labels):
"""Creates a multi layer network of fully_connected layers. Each layer is 100 neurons. Please change this to experiment with architectures. Args: images: The input images. labels: The labels as dense one-hot vectors. Returns: A softmax result. """ |
# Pretty Tensor is a thin wrapper on Tensors.
# Change this method to experiment with other architectures
images = pt.wrap(images)
with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
return (images.flatten().fully_connected(100).fully_connected(100)
.softmax_classifier(10, labels)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lenet5(images, labels):
"""Creates a multi layer convolutional network. The architecture is similar to that defined in LeNet 5. Please change this to experiment with architectures. Args: images: The input images. labels: The labels as dense one-hot vectors. Returns: A softmax result. """ |
images = pt.wrap(images)
with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
return (images.conv2d(5, 20).max_pool(2, 2).conv2d(5, 50).max_pool(2, 2)
.flatten().fully_connected(500).softmax_classifier(10, labels)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _full_map(self):
"""Creates a full mapping of this and all parent key, value pairs.""" |
result = {}
if self._parent:
result.update(self._parent)
result.update(self._map)
return result |
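The crux here is the update order: the child's values shadow the parent's. A tiny stdlib sketch of the same merge:

```python
def full_map(parent, child):
  """Merges parent and child mappings; child keys win on conflict,
  exactly like the scope chain above."""
  result = {}
  result.update(parent)
  result.update(child)  # applied second, so it overrides the parent
  return result


merged = full_map({'activation': 'relu', 'l2': 1e-05}, {'l2': 0.0})
```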
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def var_and_name_scope(names):
"""Creates a variable scope and a name scope. If a variable_scope is provided, this will reenter that variable scope. However, if none is provided then the variable scope will match the generated part of the name scope. Args: names: A tuple of name_scope, variable_scope or None. Yields: The result of name_scope and variable_scope as a tuple. """ |
# pylint: disable=protected-access
if not names:
yield None, None
else:
name, var_scope = names
with tf.name_scope(name) as scope:
# TODO(eiderman): This is a workaround until the variable_scope updates land
# in a TF release.
old_vs = tf.get_variable_scope()
if var_scope is None:
count = len(name.split('/'))
scoped_name = '/'.join(scope.split('/')[-count - 1:-1])
full_name = (old_vs.name + '/' + scoped_name).lstrip('/')
else:
full_name = var_scope.name
vs_key = tf.get_collection_ref(variable_scope._VARSCOPE_KEY)
try:
# TODO(eiderman): Remove this hack or fix the full file.
try:
vs_key[0] = tf.VariableScope(
old_vs.reuse,
name=full_name,
initializer=old_vs.initializer,
regularizer=old_vs.regularizer,
caching_device=old_vs.caching_device)
except AttributeError:
vs_key[0] = variable_scope._VariableScope(
old_vs.reuse,
name=full_name,
initializer=old_vs.initializer)
vs_key[0].name_scope = scope
yield scope, vs_key[0]
finally:
vs_key[0] = old_vs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_current_name_scope():
"""Gets the current name scope.""" |
# pylint: disable=protected-access
g = tf.get_default_graph()
# TODO(eiderman): Remove this hack once TF update is released.
if isinstance(g._name_stack, tuple):
return g._name_stack[0] + '/'
else:
return g._name_stack + '/' |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_template(name, func, *args, **kwargs):
"""Given an arbitrary function, wrap it so that it does parameter sharing.""" |
if args or kwargs:
func = functools.partial(func, *args, **kwargs)
return Template(name, func) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def skip_common_stack_elements(stacktrace, base_case):
"""Skips items that the target stacktrace shares with the base stacktrace.""" |
for i, (trace, base) in enumerate(zip(stacktrace, base_case)):
if trace != base:
return stacktrace[i:]
return stacktrace[-1:] |
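Since the helper is pure Python, its behavior is easy to exercise directly; note that the fallback keeps the last frame when the traces are identical:

```python
def skip_common_stack_elements(stacktrace, base_case):
  """Drops the prefix of frames shared with base_case, always keeping
  at least the final frame."""
  for i, (trace, base) in enumerate(zip(stacktrace, base_case)):
    if trace != base:
      return stacktrace[i:]
  return stacktrace[-1:]


# The shared prefix ['main', 'run'] is dropped.
trimmed = skip_common_stack_elements(['main', 'run', 'build_graph'],
                                     ['main', 'run', 'train'])
```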
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_model(text_in, timesteps, phase):
"""Creates a 2 layer LSTM model with dropout. Args: text_in: The input text as ASCII ordinals in a Tensor. timesteps: The number of timesteps in the sequence. phase: Phase controls whether or not dropout is active. In training mode we want to perform dropout, but in test we want to disable it. Returns: The logits. """ |
with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
# The embedding lookup must be placed on a cpu.
with tf.device('/cpu:0'):
embedded = text_in.embedding_lookup(CHARS, [EMBEDDING_SIZE])
# Because the sequence LSTM expects each timestep to be its own Tensor,
# we need to cleave the sequence.
# Below we can build a stacked 2 layer LSTM by just chaining them together.
# You can stack as many layers as you want.
lstm = (embedded
.cleave_sequence(timesteps)
.sequence_lstm(LOWER)
.sequence_lstm(UPPER))
# The classifier is much more efficient if it runs across the entire
# dataset at once, so we want to squash (i.e. uncleave).
# Note: if phase is test, dropout is a noop.
return (lstm.squash_sequence()
.dropout(keep_prob=0.8, phase=phase)
.fully_connected(CHARS, activation_fn=None)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sample( input_placeholder, logits, seed=None, max_length=1024, temperature=1.0):
"""Samples from the LSTM model. Sampling is done by first running either the seed or an arbitrary character through the model and then drawing the next character from the probability distribution definted by `softmax`. Args: input_placeholder: A placeholder that expects a scalar feed. logits: The logits. This works with the logits so that it can apply the temperature. seed: Either a string of characters to prime the network or None. max_length: The maximum length to draw in case EOS is not reached. temperature: A value that is used to renormalize the inputs. A higher value selects less likely choices. Returns: A string that was sampled from the model. """ |
assert temperature > 0, 'Temperature must be greater than 0.'
if not seed:
# The model expects an input to do inference, so seed with a single letter.
seed = chr(ord('A') + random.randint(0, 25))
result = ''
# The recurrent runner takes care of tracking the model's state at each step
# and provides a reset call to zero it out for each query.
recurrent_runner = pt.train.RecurrentRunner()
# We need to reset the hidden state for each query.
recurrent_runner.reset()
# Initialize the system
for c in seed[:-1]:
recurrent_runner.run([logits],
{input_placeholder: data_utils.convert_to_int(c)})
result += c
# Start sampling!
ci = ord(seed[-1])
while len(result) < max_length and ci != data_utils.EOS:
result += chr(ci)
# The softmax is probability normalized and would have been appropriate here
# if we weren't applying the temperature (temperature could also be done in
# TensorFlow).
logit_result = recurrent_runner.run([logits],
{input_placeholder: ci})[0][0]
logit_result /= temperature
# Apply the softmax in numpy to convert from logits to probabilities.
# Subtract off the max for numerical stability -- logits are invariant to
# additive scaling and this eliminates overflows.
logit_result -= logit_result.max()
distribution = numpy.exp(logit_result)
distribution /= distribution.sum()
# Numpy multinomial needs the value to be strictly < 1
distribution -= .00000001
ci = numpy.argmax(numpy.random.multinomial(1, distribution))
result += chr(ci) # Add the last letter.
return result |
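The temperature-then-softmax step can be isolated as a small numeric helper; this is a stdlib sketch using `math.exp` (the code above does the same arithmetic on the numpy result):

```python
import math


def temperature_softmax(logits, temperature=1.0):
  """Divides logits by the temperature and applies a numerically
  stable softmax; higher temperatures flatten the distribution."""
  assert temperature > 0, 'Temperature must be greater than 0.'
  scaled = [l / temperature for l in logits]
  m = max(scaled)  # subtracting the max avoids exp overflow
  exps = [math.exp(s - m) for s in scaled]
  total = sum(exps)
  return [e / total for e in exps]


cold = temperature_softmax([2.0, 1.0, 0.0], temperature=0.5)
hot = temperature_softmax([2.0, 1.0, 0.0], temperature=10.0)
```

With a low temperature the top logit dominates; with a high one the distribution approaches uniform.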
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reshape(input_layer, shape_spec):
"""Reshapes this tensor to the given spec. This provides additional functionality over the basic `tf.reshape`. In particular, it provides the ability to specify some dimensions as unchanged (`pt.DIM_SAME`) which can greatly aid in inferring the extra dimensions (`pt.DIM_REST`) and help maintain more shape information going forward. A shape_spec can be a list or tuple of numbers specifying the new shape, but also may include the following shorthands for using values from the shape of the input: 1. `pt.DIM_SAME` ('_') will use the corresponding value from the current shape. 2. One -1 or `pt.DIM_REST` ('*') can be used to specify the remainder of the values. 3. An integer will be used as is. A compact syntax is also supported for setting shapes. If the new shape is only composed of DIM_SAME, DIM_REST/-1 and single digit integers, then a string can be passed in. Integers larger than 9 must be passed in as part of a sequence. 1. Flatten to a batch dimension (first by convention):
[DIM_SAME, -1] or '_*'. 2. Expand a Rank 2 Tensor so that it can be used as an image: '_11*'. The primary difference between this and `tf.reshape` is that `DIM_SAME` allows more shape inference possibilities. For example: given a shape of **[None, 3, 7]** if flattening were desired then the caller would have to compute the shape and request a reshape of **[-1, 21]** to flatten. Instead of brittle or repeated code, this can be inferred if we know that the first dim is being copied. Another example that is impossible to express as a list of integers is if the starting shape were **[None, 3, None]** and we wanted to do the same flattening. While the shape cannot be inferred, this can still be expressed as '_*' (A.K.A. [DIM_SAME, DIM_REST]). Args: input_layer: The Pretty Tensor object, supplied. shape_spec: The spec for the new shape. Returns: A Pretty Tensor with the reshaped tensor. Raises: ValueError: If there are two many unknown dimensions or the shape_spec requires out of range DIM_SAME. """ |
old_shape = input_layer.get_shape().as_list()
# Extract both a tensor that sets the new shape and as much of the new
# shape is known. This lets us merge in any extra information we have about
# the shape.
try:
new_shape = _infer_unknown_dims(old_shape, shape_spec)
except TypeError:
# shape_spec is not iterable, it is probably a tensor or variable.
return tf.reshape(input_layer, shape_spec)
reshape_tensor = []
# To avoid bloating the graph, we want to capture consecutive integers into
# a single tf.constant. This allows us to eliminate tf.concat when we know the
# shape.
runner = []
for i, s in enumerate(new_shape):
if s is DIM_SAME:
new_shape[i] = None
if runner:
reshape_tensor.append(tf.constant(runner))
runner = []
# Since we can't statically infer the value, compute it from the graph.
reshape_tensor.append(tf.gather(tf.shape(input_layer), [i]))
else:
runner.append(s)
if s == -1:
new_shape[i] = None
if runner:
reshape_tensor.append(tf.constant(runner))
if len(reshape_tensor) == 1:
reshape_tensor = reshape_tensor[0]
else:
reshape_tensor = tf.concat(reshape_tensor, 0)
result = tf.reshape(input_layer, reshape_tensor)
result.set_shape(new_shape)
return input_layer.with_tensor(result) |
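A TensorFlow-free sketch of how `DIM_SAME`/`DIM_REST` resolution might work when the input shape is fully known (`infer_shape` is a hypothetical simplification; the real `_infer_unknown_dims` also copes with `None` dimensions):

```python
DIM_SAME = '_'
DIM_REST = '*'


def infer_shape(old_shape, spec):
  """Resolves DIM_SAME by copying from old_shape and DIM_REST/-1 by
  dividing out the known dimensions. Assumes old_shape is fully known."""
  total = 1
  for d in old_shape:
    total *= d
  # First copy positions marked DIM_SAME from the old shape.
  resolved = [old_shape[i] if s == DIM_SAME else s
              for i, s in enumerate(spec)]
  if DIM_REST in resolved or -1 in resolved:
    known = 1
    for s in resolved:
      if s not in (DIM_REST, -1):
        known *= s
    rest = total // known
    resolved = [rest if s in (DIM_REST, -1) else s for s in resolved]
  return resolved


# Flattening [4, 3, 7] with '_*' keeps the batch dim and infers 21.
flat = infer_shape([4, 3, 7], [DIM_SAME, DIM_REST])
```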
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flatten(input_layer, preserve_batch=True):
"""Flattens this. If preserve_batch is True, the result is rank 2 and the first dim (batch) is unchanged. Otherwise the result is rank 1. Args: input_layer: The Pretty Tensor object, supplied. preserve_batch: If True (the default), then preserve the first dimension. Returns: A LayerWrapper with the flattened tensor. """ |
if preserve_batch:
return reshape(input_layer, [DIM_SAME, -1])
else:
return reshape(input_layer, [-1]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop_gradient(input_layer):
"""Cuts off the gradient at this point. This works on both sequence and regular Pretty Tensors. Args: input_layer: The input. Returns: A new Pretty Tensor of the same type with stop_gradient applied. """ |
if input_layer.is_sequence():
result = [tf.stop_gradient(t) for t in input_layer.sequence]
return input_layer.with_sequence(result)
else:
return tf.stop_gradient(input_layer) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dropout(input_layer, keep_prob, phase=Phase.train, name=PROVIDED):
"""Aplies dropout if this is in the train phase.""" |
if phase == Phase.train:
return tf.nn.dropout(input_layer, keep_prob, name=name)
else:
return input_layer |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def apply_with_summary(input_layer, operation, *op_args, **op_kwargs):
"""Applies the given operation to `input_layer` and create a summary. Args: input_layer: The input layer for this op. operation: An operation that takes a tensor and the supplied args. *op_args: Extra arguments for operation. **op_kwargs: Keyword arguments for the operation. Returns: A new layer with operation applied. """ |
return layers.apply_activation(input_layer.bookkeeper,
input_layer.tensor,
operation,
activation_args=op_args,
activation_kwargs=op_kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _rapply(input_layer, operation, *op_args, **op_kwargs):
"""Applies the given operation to this after expanding op_args. Args: input_layer: The input layer for this op. operation: An operation that takes a tensor and the supplied args. *op_args: Extra arguments for operation. **op_kwargs: Keyword arguments for the operation. Returns: A new layer with operation applied. """ |
op_args = list(op_args)
op_args.append(input_layer.tensor)
return input_layer.with_tensor(operation(*op_args, **op_kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def apply_op(input_layer, operation, *op_args, **op_kwargs):
"""Applies the given operation to this before without adding any summaries. Args: input_layer: The input layer for this op. operation: An operation that takes a tensor and the supplied args. *op_args: Extra arguments for operation. **op_kwargs: Keyword arguments for the operation. Returns: A new layer with operation applied. """ |
return input_layer.with_tensor(
operation(input_layer.tensor, *op_args, **op_kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def join(input_layer, others, include_self=True, join_function=None):
"""Joins the provided PrettyTensors with this using the join function. Args: input_layer: The input layer for this op. others: Sequence of PrettyTensor objects. include_self: Whether or not this includes itself or if the value is only derived from others. join_function: The function to use for joining, must accept a list of tensors. Use None for concat on the final dimension. Returns: self. """ |
if include_self:
list_of_tensors = [input_layer]
list_of_tensors.extend(others)
else:
list_of_tensors = others
return prettytensor.join_pretty_tensors(list_of_tensors, input_layer,
join_function) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unzip(input_layer, split_dim=0, num_splits=2):
"""Unzips this Tensor along the split_dim into num_splits Equal chunks. Examples: * `[1, 2, 3, 4] -> [1, 3], [2, 4]` * `[[1, 1], [2, 2], [3, 3], [4, 4]] -> [[1, 1], [3, 3]], [[2, 2], [4, 4]]` Args: input_layer: The chainable object, supplied. split_dim: The dimension to split along. Defaults to batch. num_splits: The number of splits. Returns: A list of PrettyTensors. Raises: ValueError: If split_dim is out of range or isn't divided evenly by num_splits. """ |
shape = input_layer.shape
_check_split_dims(num_splits, split_dim, shape)
splits = functions.unzip(input_layer, split_dim, shape[split_dim], num_splits)
return input_layer.with_sequence(splits) |
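The docstring examples show that `unzip` interleaves rather than splitting contiguously; a list-based sketch of that access pattern:

```python
def unzip_list(items, num_splits=2):
  """Splits a list into num_splits interleaved (strided) chunks,
  matching the docstring examples above."""
  if len(items) % num_splits != 0:
    raise ValueError('Length must divide evenly by num_splits.')
  # Stride through the list with a different offset per split.
  return [items[i::num_splits] for i in range(num_splits)]


parts = unzip_list([1, 2, 3, 4])  # interleaved: odds vs evens by position
```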
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def concat(input_layer, concat_dim, other_tensors=None):
"""Concatenates input PrettyTensor with other_tensors along the specified dim. This adds the Pretty Tensor passed via input_layer to the front of the list of tensors to concat. Args: input_layer: The input layer. concat_dim: The dimension along which to concat. other_tensors: The tensors to concatenate with as an iterable or None if this is called on a sequence. Returns: A new PrettyTensor. Raises: ValueError: If other_tensors is None and this is not a sequence. """ |
  if input_layer.is_sequence():
    all_tensors = list(input_layer.sequence)
    all_tensors.extend(other_tensors or [])
else:
all_tensors = [input_layer]
if other_tensors is None:
raise ValueError('Other Tensors must be supplied.')
all_tensors.extend(other_tensors)
# Edge cases really only apply when this is a sequence with 0 or 1 element.
if not all_tensors:
return prettytensor.wrap_sequence([])
else:
return tf.concat(all_tensors, concat_dim) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split(input_layer, split_dim=0, num_splits=2):
"""Splits this Tensor along the split_dim into num_splits Equal chunks. Examples: * `[1, 2, 3, 4] -> [1, 2], [3, 4]` * `[[1, 1], [2, 2], [3, 3], [4, 4]] -> [[1, 1], [2, 2]], [[3, 3], [4, 4]]` Args: input_layer: The chainable object, supplied. split_dim: The dimension to split along. Defaults to batch. num_splits: The number of splits. Returns: A list of PrettyTensors. Raises: ValueError: If split_dim is out of range or isn't divided evenly by num_splits. """ |
shape = input_layer.shape
_check_split_dims(num_splits, split_dim, shape)
splits = tf.split(
value=input_layer, num_or_size_splits=num_splits, axis=split_dim)
return input_layer.with_sequence(splits) |
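In contrast to `unzip` above, `split` produces contiguous chunks; a list-based sketch of the difference:

```python
def split_list(items, num_splits=2):
  """Splits a list into num_splits equal contiguous chunks, matching
  the docstring examples above."""
  if len(items) % num_splits != 0:
    raise ValueError('Length must divide evenly by num_splits.')
  size = len(items) // num_splits
  return [items[i * size:(i + 1) * size] for i in range(num_splits)]


halves = split_list([1, 2, 3, 4])  # contiguous: first half, second half
```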
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _zip_with_scalars(args):
"""Zips across args in order and replaces non-iterables with repeats.""" |
zipped = []
for arg in args:
if isinstance(arg, prettytensor.PrettyTensor):
zipped.append(arg if arg.is_sequence() else itertools.repeat(arg))
elif (isinstance(arg, collections.Sequence) and
not isinstance(arg, tf.compat.bytes_or_text_types)):
zipped.append(arg)
else:
zipped.append(itertools.repeat(arg))
assert len(args) == len(zipped)
return zip(*zipped) |
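The scalar-broadcasting behavior comes from `itertools.repeat`; a simplified sketch without the PrettyTensor branch (at least one argument must be a real sequence, otherwise the zip would never terminate):

```python
import itertools


def zip_with_scalars(args):
  """Zips sequence arguments positionally while repeating scalars, so
  scalars are broadcast across every zipped tuple."""
  zipped = []
  for arg in args:
    if isinstance(arg, (list, tuple)):
      zipped.append(arg)
    else:
      # An infinite repeat; zip stops at the shortest real sequence.
      zipped.append(itertools.repeat(arg))
  return list(zip(*zipped))


combos = zip_with_scalars([[1, 2, 3], 10])  # 10 pairs with every element
```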
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def map_(input_layer, fn):
"""Maps the given function across this sequence. To map an entire template across the sequence, use the `as_fn` method on the template. Args: input_layer: The input tensor. fn: A function of 1 argument that is applied to each item in the sequence. Returns: A new sequence Pretty Tensor. Raises: ValueError: If the input_layer does not hold a sequence. """ |
if not input_layer.is_sequence():
raise ValueError('Can only map a sequence.')
return [fn(x) for x in input_layer] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _map_or_apply(input_layer, op, *args, **kwargs):
"""Map op across the input if it is a sequence; otherwise apply it. Note: This takes a keyword argument `right_` to right apply the op to this input. The name is chosen to limit conflicts with other keyword arguments. Args: input_layer: The input_layer (self when chaining). op: The op to apply: *args: Positional arguments for op; if input is a list then any iterable is treated as an argument to co-map (i.e. it zips across non-scalars). **kwargs: Keyword arguments for op; note that `right_` is used by this function. Returns: A new Pretty Tensor that is the result of applying the op to every internal Tensor. Raises: ValueError: If a sequence argument is not the same length as the input_layer. """ |
# Name is special because it can also set the name scope.
kwargs.pop('name')
right = kwargs.pop('right_', False)
if input_layer.is_sequence():
if right:
args += (input_layer,)
else:
args = ((input_layer,) + args)
result = [op(*x, **kwargs) for x in _zip_with_scalars(args)]
if len(result) != len(input_layer):
raise ValueError('Not all arguments were the same length.')
return result
else:
if right:
my_op = lambda x: op(*(args + (x,)), **kwargs)
else:
my_op = lambda x: op(x, *args, **kwargs)
return my_op(input_layer.tensor) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def feed_numpy(batch_size, *arrays):
"""Given a set of numpy arrays, produce slices of batch_size. Note: You can use itertools.cycle to have this repeat forever. Args: batch_size: The batch_size for each array. *arrays: A list of arrays. Yields: A list of slices from the arrays of length batch_size except the last one which will contain the rest. Raises: ValueError: If arrays aren't all the same length or no arrays are provided. """ |
if not arrays:
raise ValueError('Arrays cannot be empty.')
size = len(arrays[0])
for a in arrays:
if size != len(a):
raise ValueError('All arrays must be the same size.')
  count = size // batch_size
  for i in xrange(count):
    start = i * batch_size
    end = start + batch_size
    yield [x[start:end] for x in arrays]
  if count * batch_size < size:
    yield [x[count * batch_size:] for x in arrays]
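The same batching contract (full slices plus a shorter final remainder) can be written as a single `range` loop; `feed_slices` is a hypothetical stdlib variant:

```python
def feed_slices(batch_size, *arrays):
  """Yields aligned batch_size slices from each array; the final
  slice holds whatever remainder is left over."""
  if not arrays:
    raise ValueError('Arrays cannot be empty.')
  size = len(arrays[0])
  if any(len(a) != size for a in arrays):
    raise ValueError('All arrays must be the same size.')
  # Stepping by batch_size naturally produces a short last slice.
  for start in range(0, size, batch_size):
    yield [a[start:start + batch_size] for a in arrays]


batches = list(feed_slices(2, [1, 2, 3, 4, 5], ['a', 'b', 'c', 'd', 'e']))
```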
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def batch(input_iter, batch_size=32):
"""Batches data from an iterator that returns single items at a time.""" |
input_iter = iter(input_iter)
next_ = list(itertools.islice(input_iter, batch_size))
while next_:
yield next_
next_ = list(itertools.islice(input_iter, batch_size)) |
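The `islice` pattern used here is self-contained enough to demo directly; each call to `islice` consumes the next `batch_size` items from the shared iterator:

```python
import itertools


def batch_iter(input_iter, batch_size=3):
  """Groups single items from an iterator into lists of batch_size;
  the final batch may be shorter, and an empty source yields nothing."""
  it = iter(input_iter)
  while True:
    chunk = list(itertools.islice(it, batch_size))
    if not chunk:
      return
    yield chunk


groups = list(batch_iter(range(7), 3))  # 7 items -> two full batches + 1
```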
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def slice_constant(data, batch_size=32, name='constant_data', global_step=None):
"""Provide a slice based on the global_step. This is useful when the entire data array can be stored in memory because it allows you to feed the data very efficiently. Args: data: A numpy array or tensor. batch_size: The batch size for the produced data. name: An optional name for this data. global_step: A global step variable that is used to read the data. If None then the default prettytensor global_step is used. Returns: A tensor that produces the given data. """ |
with tf.name_scope(name):
all_data = tf.convert_to_tensor(data)
global_step = global_step or bookkeeper.global_step()
    count = len(data) // batch_size
    extra = len(data) - count * batch_size
    if not extra:
      offset = tf.mod(global_step, count)
      return tf.slice(all_data, offset * batch_size, batch_size)
    else:
      offset = tf.mod(global_step, count + 1)
      return tf.slice(all_data, offset * batch_size,
                      tf.where(tf.equal(offset, count), extra, batch_size))
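The slice selection can be checked in plain Python (a sketch of the intended arithmetic, not the TensorFlow ops): when the data does not divide evenly, the step cycles through `count + 1` offsets and the final slice carries the remainder.

```python
def slice_for_step(data, batch_size, global_step):
    """Plain-Python sketch of the slice selection in slice_constant."""
    count = len(data) // batch_size
    extra = len(data) - count * batch_size
    if not extra:
        offset = global_step % count
        size = batch_size
    else:
        offset = global_step % (count + 1)
        size = extra if offset == count else batch_size
    start = offset * batch_size
    return data[start:start + size]

data = list(range(10))  # batch_size 4 -> two full slices plus a remainder of 2
slices = [slice_for_step(data, 4, step) for step in range(3)]
# → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```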
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def session(self, master='', config=None):
"""Takes care of starting any local servers and stopping queues on exit. In general, the Runner is designed to work with any user provided session, but this provides a convenience for properly stopping the queues. Args: master: The master session to use. config: A tf.ConfigProto or None. Yields: A session. """ |
session_manager = SESSION_MANAGER_FACTORY()
# Initialization is handled manually at a later point and session_manager
# is just used for distributed compatibility.
with session_manager.prepare_session(master, None, config=config,
init_fn=lambda _: None) as sess:
try:
yield sess
finally:
self.stop_queues() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prepare_model(self, sess, allow_initialize=True):
"""Initialize the model and if necessary launch the queue runners.""" |
if self._follower:
self.wait_for_initialization()
else:
self._init_model(sess, allow_initialize)
if sess is not self._sess:
if self.threads:
raise ValueError('You must call stop_queues() before '
'starting a new session with QueueRunners.')
self._sess = sess
self._start_threads(sess) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_from_checkpoint(self, sess, latest_filename=None):
"""Loads the model from the most recent checkpoint. This gets the most current list of checkpoints each time it is called. Args: sess: The current session. latest_filename: The filename for the latest set of checkpoints, defaults to 'checkpoints'. Returns: The loaded checkpoint or None if it failed to load. """ |
# Set list of not-yet-deleted checkpoints.
self._create_initializers()
if self._save_path:
ckpt = tf.train.get_checkpoint_state(
os.path.dirname(self._save_path), latest_filename)
if ckpt and ckpt.all_model_checkpoint_paths:
# Copy it because last_checkpoints is immutable.
# Manually configure a new Saver so that we get the latest snapshots.
self._saver = tf.train.Saver(saver_def=self._saver.as_saver_def())
self._saver.set_last_checkpoints(list(ckpt.all_model_checkpoint_paths))
if self._saver.last_checkpoints:
self._saver.restore(sess, self._saver.last_checkpoints[-1])
return self._saver.last_checkpoints[-1]
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_model(self, op_list, num_steps, feed_vars=(), feed_data=None, print_every=100, allow_initialize=True):
"""Runs `op_list` for `num_steps`. Args: op_list: A list of ops to run. num_steps: Number of steps to run this for. If feeds are used, this is a maximum. `None` can be used to signal "forever". feed_vars: The variables to feed. feed_data: An iterator that feeds data tuples. print_every: Print a log line and checkpoing every so many steps. allow_initialize: If True, the model will be initialized if any variable is uninitialized, if False the model will not be initialized. Returns: The final run result as a list. Raises: ValueError: If feed_data doesn't match feed_vars. """ |
feed_data = feed_data or itertools.repeat(())
ops = [bookkeeper.global_step()]
ops.extend(op_list)
sess = tf.get_default_session()
self.prepare_model(sess, allow_initialize=allow_initialize)
results = []
try:
if num_steps is None:
counter = itertools.count(0)
elif num_steps >= 0:
counter = xrange(num_steps)
else:
raise ValueError('num_steps cannot be negative: %s' % num_steps)
for i, data in zip(counter, feed_data):
log_this_time = print_every and i % print_every == 0
if len(data) != len(feed_vars):
raise ValueError(
'feed_data and feed_vars must be the same length: %d vs %d' % (
len(data), len(feed_vars)))
if self._coord.should_stop():
print('Coordinator stopped')
sys.stdout.flush()
self.stop_queues()
break
if log_this_time and self._summary_writer:
results = sess.run(ops + [self._summaries],
dict(zip(feed_vars, data)))
self._summary_writer.add_summary(results[-1], results[0])
results = results[:-1]
else:
results = sess.run(ops, dict(zip(feed_vars, data)))
if log_this_time:
self._log_and_save(sess, results)
# Print the last line if it wasn't just printed
if print_every and not log_this_time:
self._log_and_save(sess, results)
except tf.errors.OutOfRangeError as ex:
print('Done training -- epoch limit reached %s' % ex.message)
sys.stdout.flush()
self.stop_queues()
except BaseException as ex:
print('Exception -- stopping threads: %s' % ex, file=sys.stderr)
sys.stdout.flush()
self.stop_queues()
raise
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def train_model(self, train_op, cost_to_log, num_steps, feed_vars=(), feed_data=None, print_every=100):
"""Trains the given model. Args: train_op: The training operation. cost_to_log: A cost to log. num_steps: Number of batches to run. feed_vars: A list or tuple of the variables that will be fed. feed_data: A generator that produces tuples of the same length as feed_vars. print_every: Print and save every so many steps. Returns: `cost_to_log` from the final step. """ |
costs = [train_op]
if (isinstance(cost_to_log, collections.Sequence)
and not isinstance(cost_to_log, six.string_types)):
costs.extend(cost_to_log)
else:
costs.append(cost_to_log)
return self.run_model(costs,
num_steps,
feed_vars=feed_vars,
feed_data=feed_data,
print_every=print_every)[2:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def evaluate_model(self, accuracy, num_steps, feed_vars=(), feed_data=None, summary_tag=None, print_every=0):
"""Evaluates the given model. Args: accuracy: The metric that is being evaluated or a tuple of metrics. num_steps: The number of steps to run in the evaluator. feed_vars: A list or tuple of the variables that will be fed. feed_data: A generator that produces tuples of the same length as feed_vars. summary_tag: If provided, the final result of running the model will be published to this tag. print_every: Print a summary every so many steps, use 0 to disable. Returns: The accuracy. Raises: ValueError: If the wrong number of summary tags are provided or previously running QueueRunners haven't been stopped. """ |
if not hasattr(self, '_saver'):
raise ValueError('Before evaluating, you must initialize the model with '
'load_from_checkpoint, prepare or saver.')
self._run_init_test_vars_op()
if (not isinstance(accuracy, collections.Sequence) or
isinstance(accuracy, six.string_types)):
accuracy = (accuracy,)
      if summary_tag:
        summary_tag = (summary_tag,)
if summary_tag and len(summary_tag) != len(accuracy):
raise ValueError(
'If summaries are requested, there must be a tag per accuracy node.')
result = self.run_model(accuracy,
num_steps,
feed_vars=feed_vars,
feed_data=feed_data,
print_every=print_every,
allow_initialize=False)
assert len(result) == len(accuracy) + 1, (
'results is wrong length, was %s but should be 1 longer than %s' %
(result, accuracy))
if summary_tag:
self.add_summaries(result[0], *zip(summary_tag, result[1:]))
return result[1:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_summaries(self, step, *tags_and_values):
"""Adds summaries to the writer and prints a log statement.""" |
values = []
to_print = []
for tag, value in tags_and_values:
values.append(tf.Summary.Value(tag=tag, simple_value=float(value)))
to_print.append('%s=%g' % (tag, value))
if self._summary_writer:
summary = tf.Summary(value=values)
event = tf.Event(wall_time=time.time(),
summary=summary,
step=int(step))
self._summary_writer.add_event(event)
print('[%d] %s' % (step, ', '.join(to_print))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_new_checkpoint_when_available( self, sess, current_checkpoint, sleep_seconds=10):
"""Waits for a new checkpoint to be available and then loads it. Args: sess: The current session. current_checkpoint: The current checkpoint or None to just load the next one. sleep_seconds: How long to sleep between checks. Returns: The next checkpoint to use. """ |
# Load the checkpoint.
while True:
next_checkpoint = self.load_from_checkpoint(sess)
if not next_checkpoint or next_checkpoint == current_checkpoint:
print('Model not yet available, sleeping for %d seconds: '
'path %s; found: %s' %
(sleep_seconds,
os.path.dirname(self._save_path), current_checkpoint))
sys.stdout.flush()
time.sleep(sleep_seconds)
else:
return next_checkpoint |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def evaluate_repeatedly(self, accuracy, num_steps, feed_vars=(), feed_data=None, summary_tag=None, evaluation_times=-1):
"""Runs the evaluation in a loop for `evaluation_times`. On each iteration, `evaluate_model` is called with the supplied arguments. This manages the queue threads itself. Args: accuracy: The metric that is being evaluated. num_steps: The number of steps to run in the evaluator. feed_vars: A list or tuple of the variables that will be fed. feed_data: A generator that produces tuples of the same length as feed_vars. summary_tag: If provided, the final result of each evaluation will be published to this tag. evaluation_times: Run this loop for this many times or forever if it is `-1`. Returns: The final evaluation result from `evaluate_model` if `evaluation_times` ever ends. """ |
current_checkpoint = None
try:
for i in itertools.count(0):
# New session each time to reset queues - Yay.
with self.session() as sess:
current_checkpoint = self.load_new_checkpoint_when_available(
sess, current_checkpoint)
# Create relevant ops before starting queue runners.
self._run_init_test_vars_op()
accuracy_result = self.evaluate_model(accuracy,
num_steps,
summary_tag=summary_tag,
print_every=0,
feed_vars=feed_vars,
feed_data=feed_data)
if not summary_tag:
print('[%d] %s' % (sess.run(bookkeeper.global_step()),
accuracy_result))
if (i + 1) == evaluation_times:
return accuracy_result
finally:
print('Shutting down')
sys.stdout.flush()
self.stop_queues() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_model(text_in, labels, timesteps, per_example_weights, phase=pt.Phase.train):
"""Creates a model for running baby names.""" |
with pt.defaults_scope(phase=phase, l2loss=0.00001):
# The embedding lookup must be placed on a cpu.
with tf.device('/cpu:0'):
embedded = text_in.embedding_lookup(CHARS, [EMBEDDING_SIZE])
    # We need to cleave the sequence because sequence lstm expects each
# timestep to be in its own Tensor.
lstm = (embedded.cleave_sequence(timesteps).sequence_lstm(CHARS))
# The classifier is much more efficient if it runs across the entire
# batch at once, so we want to squash (i.e. uncleave).
#
# Hidden nodes is set to 32 because it seems to work well.
return (lstm.squash_sequence().fully_connected(32,
activation_fn=tf.nn.relu)
.dropout(0.7)
.softmax_classifier(SEXES,
labels,
per_example_weights=per_example_weights)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def for_default_graph(*args, **kwargs):
"""Creates a bookkeeper for the default graph. Args: *args: Arguments to pass into Bookkeeper's constructor. **kwargs: Arguments to pass into Bookkeeper's constructor. Returns: A new Bookkeeper. Raises: ValueError: If args or kwargs are provided and the Bookkeeper already exists. """ |
graph = tf.get_default_graph()
collection = graph.get_collection(_BOOKKEEPER)
if collection:
if args or kwargs:
raise ValueError('Requesting construction of a BookKeeper that already '
'exists: %s %s' % (args, kwargs))
return collection[0]
else:
books = BOOKKEEPER_FACTORY(*args, g=graph, **kwargs)
graph.add_to_collection(_BOOKKEEPER, books)
return books |
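The pattern is a per-graph singleton: construct on first request, return the cached instance afterwards, and refuse constructor arguments once one exists. A minimal sketch with a plain dict standing in for the graph collection (names hypothetical):

```python
_BOOKKEEPERS = {}

def bookkeeper_for(graph_key, *args, **kwargs):
    """Hypothetical sketch of the lookup in for_default_graph."""
    if graph_key in _BOOKKEEPERS:
        if args or kwargs:
            raise ValueError('Requesting construction of a BookKeeper that '
                             'already exists: %s %s' % (args, kwargs))
        return _BOOKKEEPERS[graph_key]
    # Stand-in for Bookkeeper(*args, **kwargs); cached for later calls.
    books = _BOOKKEEPERS[graph_key] = {'args': args, 'kwargs': kwargs}
    return books

first = bookkeeper_for('g1', 'some-arg')
second = bookkeeper_for('g1')   # cached: same object, no new construction
```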
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def for_new_graph(*args, **kwargs):
"""Creates a Bookkeeper for a new graph. You must use `m.g.as_default()` to put the graph in scope: m = Bookkeeper.for_new_graph() with m.g.as_default():
Args: *args: Arguments to pass into Bookkeeper's constructor. **kwargs: Arguments to pass into Bookkeeper's constructor. Returns: A new Bookkeeper. """ |
graph = tf.Graph()
with graph.as_default():
return for_default_graph(*args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def regroup_if_changed(group, op_list, name=None):
"""Creates a new group for op_list if it has changed. Args: group: The current group. It is returned if op_list is unchanged. op_list: The list of operations to check. name: The name to use if a new group is created. Returns: Either group or a new group (or if op_list is empty then no_op). """ |
has_deltas = isinstance(op_list, sequence_with_deltas.SequenceWithDeltas)
if (group is None or len(group.control_inputs) != len(op_list) or
(has_deltas and op_list.has_changed())):
if has_deltas:
op_list.mark()
if op_list:
return tf.group(*op_list, name=name)
else:
return tf.no_op(name=name)
else:
return group |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def apply_optimizer(optimizer, losses, regularize=True, include_marked=True, clip_gradients_by_norm=None, **kwargs):
"""Apply an optimizer to the graph and returns a train_op. The resulting operation will minimize the specified losses, plus the regularization losses that have been collected during graph construction and the losses that were marked by calling `mark_as_required`. It will also apply any updates that have been collected (e.g. for moving average summaries). This is equivalent to: total_loss = prettytensor.create_composite_loss( losses=losses, regularize=regularize, include_marked=include_marked) train_op_without_updates = optimizer.minimize(total_loss) train_op = prettytensor.with_update_ops(train_op_without_updates) N.B. Pay special attention to the `gate_gradients` argument to the optimizer. If your graph is large, it will likely train unacceptably slow if you don't specify it as GATE_NONE. Args: optimizer: The optimizer the minimize. losses: A list of losses to apply. regularize: Whether or not to include the regularization losses. include_marked: Whether or not to use the marked losses. clip_gradients_by_norm: If not None, clip gradients by the norm using `tf.clip_by_norm`. **kwargs: Additional arguments to pass into the optimizer. Returns: An operation to use for training that also updates any required ops such as moving averages. """ |
books = for_default_graph()
g_step = kwargs.pop('global_step', books.global_step)
total_loss = books.create_composite_loss(losses=losses,
regularize=regularize,
include_marked=include_marked)
grads_and_vars = optimizer.compute_gradients(total_loss, **kwargs)
if clip_gradients_by_norm is not None:
clipped_grads_and_vars = []
for g, v in grads_and_vars:
      if isinstance(g, tf.SparseTensor):
        cg = tf.SparseTensor(
            g.indices,
            tf.clip_by_norm(g.values, clip_gradients_by_norm),
            g.dense_shape)
elif isinstance(g, tf.IndexedSlices):
cg = tf.IndexedSlices(
tf.clip_by_norm(g.values, clip_gradients_by_norm),
g.indices)
else:
cg = tf.clip_by_norm(g, clip_gradients_by_norm)
clipped_grads_and_vars.append((cg, v))
grads_and_vars = clipped_grads_and_vars
train_op = optimizer.apply_gradients(grads_and_vars, global_step=g_step)
return books.with_update_ops(train_op) |
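The per-gradient clipping above rescales each gradient so its L2 norm is at most `clip_gradients_by_norm`; in plain Python the same math looks like:

```python
import math

def clip_by_norm(values, clip_norm):
    """Plain-Python sketch of tf.clip_by_norm on a flat list:
    rescale so the L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(v * v for v in values))
    if norm <= clip_norm:
        return list(values)
    scale = clip_norm / norm
    return [v * scale for v in values]

grad = [6.0, 8.0]                    # L2 norm 10.0
clipped = clip_by_norm(grad, 5.0)    # → [3.0, 4.0]
```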
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_global_counter(self):
"""Adds a global counter, called once for setup by @property global_step.""" |
assert self._global_step is None
# Force this into the top-level namescope. Instead of forcing top-level
# here, we could always call this in __init__() and then keep whatever
# namescopes are around then.
with self.g.as_default(), self.g.name_scope(None):
try:
self._global_step = self.g.get_tensor_by_name('global_step:0')
except KeyError:
self._global_step = tf.Variable(0, name='global_step', trainable=False) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_scalar_summary(self, x, tag=None):
"""Adds a scalar summary for x.""" |
if not self.summary_collections:
return
with self.g.as_default():
tag = tag or _tag_for(x.name)
summary = (tf.summary.scalar(
tag, x, collections=self.summary_collections))
return summary |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_histogram_summary(self, x, tag=None):
"""Add a summary operation to visualize the histogram of x's values.""" |
if not self.summary_collections:
return
with self.g.as_default():
tag = tag or _tag_for(x.name)
summary = tf.summary.histogram(
tag, x, collections=self.summary_collections)
return summary |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exponential_moving_average(self, var, avg_var=None, decay=0.999, ignore_nan=False):
"""Calculates the exponential moving average. TODO():
check if this implementation of moving average can now be replaced by TensorFlow's implementation. Adds a variable to keep track of the exponential moving average and adds an update operation to the bookkeeper. The name of the variable is '%s_average' % name prefixed with the current variable scope. Args: var: The variable for which a moving average should be computed. avg_var: The variable to set the average into; if None, create a zero-initialized one. decay: How much history to use in the moving average. Higher means more history; values in [0, 1) are accepted. ignore_nan: If the value is NaN or Inf, skip it. Returns: The averaged variable. Raises: ValueError: if decay is not in [0, 1). """
with self._g.as_default():
if decay < 0 or decay >= 1.0:
raise ValueError('Decay is %5.2f, but has to be in [0, 1).' % decay)
if avg_var is None:
avg_name = '%s_average' % _bare_var_name(var)
with tf.control_dependencies(None):
with tf.name_scope(avg_name + '/Initializer/'):
if isinstance(var, tf.Variable):
init_val = var.initialized_value()
elif var.get_shape().is_fully_defined():
init_val = tf.constant(0,
shape=var.get_shape(),
dtype=var.dtype.base_dtype)
else:
init_val = tf.constant(0, dtype=var.dtype.base_dtype)
avg_var = tf.Variable(init_val, name=avg_name, trainable=False)
num_updates = tf.cast(self.global_step, tf.float32)
decay = tf.minimum(decay, tf.maximum(0.9, (1.0 + num_updates) /
(10.0 + num_updates)))
with tf.device(avg_var.device):
if ignore_nan:
var = tf.where(tf.is_finite(var), var, avg_var)
if var.get_shape().is_fully_defined():
avg_update = tf.assign_sub(avg_var, (1 - decay) * (avg_var - var))
else:
avg_update = tf.assign(avg_var,
avg_var - (1 - decay) * (avg_var - var),
validate_shape=False)
self._g.add_to_collection(GraphKeys.UPDATE_OPS, avg_update)
return avg_update |
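The update rule, including the warm-up applied to `decay`, in plain Python (a sketch of the arithmetic, not the TensorFlow ops):

```python
def ema_update(avg, value, decay, step):
    """Sketch of the assign_sub above: avg -= (1 - d) * (avg - value),
    with the effective decay warmed up as
    d = min(decay, max(0.9, (1 + step) / (10 + step)))."""
    d = min(decay, max(0.9, (1.0 + step) / (10.0 + step)))
    return avg - (1.0 - d) * (avg - value)

avg = 0.0
for step in range(5):
    avg = ema_update(avg, 1.0, 0.999, step)
# early on the effective decay is 0.9, so the average warms up far
# faster than decay=0.999 alone would allow
```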
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_average_summary(self, var, tag=None, decay=0.999, ignore_nan=True):
"""Add a summary with the moving average of var. Adds a variable to keep track of the exponential moving average and adds an update operation to the bookkeeper. The name of the variable is '%s_average' % name prefixed with the current variable scope. Args: var: The variable for which a moving average should be computed. tag: The tag of the summary. If None var.name[:-2] is used to strip off the ':0' that is added by TF. decay: How much history to use in the moving average. Higher, means more history values [0.9, 1) accepted. ignore_nan: If the value is NaN or Inf, skip it. Note that this default is different than the exponential_moving_average one. Returns: The averaged variable. Raises: ValueError: if decay is not in [0.9, 1). """ |
if not self.summary_collections:
return
with self.g.as_default():
if decay < 0.9 or decay >= 1.0:
        raise ValueError('Decay is %5.2f, but has to be in [0.9, 1).' % decay)
avg_var = self.exponential_moving_average(var,
decay=decay,
ignore_nan=ignore_nan)
if tag is None:
tag = _bare_var_name(avg_var)
tag = self.g.unique_name(tag)
self.add_scalar_summary(avg_var, tag)
return avg_var |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_loss(self, loss, name=None, regularization=False, add_summaries=True):
"""Append a loss to the total loss for the network. Args: loss: append this loss operation name: The name for this loss, defaults to loss.op.name regularization: Set to True if this is a regularization loss. add_summaries: Set to True if you want to see scalar and average summary. """ |
# TODO(eiderman): Strip name out and just rely on the name scope.
_ = name # Eliminates pylint warning.
if regularization:
self._g.add_to_collection(GraphKeys.REGULARIZATION_LOSSES, loss)
tf.add_to_collection(GraphKeys.LOSSES, loss)
if add_summaries:
self.add_scalar_summary(loss, 'loss')
self.add_average_summary(loss, 'loss_average') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_state(self, state_name, initial_state, batch_size=None):
"""Adds a state to the state saver. Args: state_name: The name of this state. initial_state: The initial state vector. Only zeros are supported. batch_size: The batch_size or None for unknown. """ |
state_shape = initial_state.get_shape().as_list()
full_shape = [batch_size] + state_shape
if not batch_size:
# TODO(): -1 is now reserved for unknown, so this should be
# updated, but that requires coordination with the binary and is
# checkpoint incompatible.
# TODO(eiderman): When we make the above breaking change, we should make
# the C++ client use the initial state instead of passing in zeros.
shape_proto = self._as_shape_proto([0] + state_shape)
batch_size = 1
else:
shape_proto = self._as_shape_proto([batch_size] + state_shape)
# Add a constant tensor of zeros. At training time, this will initialize
# the state with the initial_state - at inference time,
# this node is replaced by a feed.
tiles = [batch_size] + ([1] * len(initial_state.get_shape()))
feed_op = tf.placeholder_with_default(
tf.tile(
tf.expand_dims(initial_state, [0]), tiles),
shape=full_shape,
name='%s_feed' % state_name)
s = {'feed_op': feed_op,
'feed_type': initial_state.dtype,
'feed_shape': shape_proto}
self._states[state_name] = s |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_dense_one_hot(labels, class_count):
"""Converts a vector that specified one-hot per batch into a dense version. Args: labels: The labels input. class_count: The number of classes as an int. Returns: One dense vector for each item in the batch. Raises: ValueError: If labels is not rank 1. TypeError: If class_count is not an integer or labels is not an integer Tensor. """ |
if not isinstance(class_count, tf.compat.integral_types):
raise TypeError('class_count must be an integer type.')
if labels.dtype.base_dtype not in (tf.int32, tf.int64):
raise TypeError('Labels must be an integer: %s' % labels.dtype)
if labels.get_shape().ndims != 1:
raise ValueError('Labels must be a rank 1 tensor: %s' % labels.get_shape())
dtype = labels.dtype.base_dtype
class_tensor = tf.convert_to_tensor(
class_count, dtype=dtype, name='class_count')
# Extract the batch from the shape so this is batch independent.
batch = tf.gather(tf.shape(labels), 0)
count = tf.expand_dims(tf.range(0, limit=batch), 1)
labels = tf.expand_dims(labels, 1)
if dtype != tf.int32:
count = tf.cast(count, dtype)
batch = tf.cast(batch, dtype)
result = tf.sparse_to_dense(
tf.concat([count, labels], 1),
tf.concat([tf.expand_dims(batch, 0), tf.expand_dims(class_tensor, 0)], 0),
1.0, 0.0)
result.set_shape([labels.get_shape().dims[0], class_count])
return result |
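The result is easy to picture with a plain-Python sketch of the same conversion:

```python
def dense_one_hot(labels, class_count):
    """Plain-Python sketch of to_dense_one_hot: one dense row per label."""
    rows = []
    for label in labels:
        row = [0.0] * class_count
        row[label] = 1.0
        rows.append(row)
    return rows

one_hot = dense_one_hot([2, 0, 1], 3)
# → [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
```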
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _convert_and_assert_per_example_weights_compatible( input_, per_example_weights, dtype):
"""Converts per_example_weights to a tensor and validates the shape.""" |
per_example_weights = tf.convert_to_tensor(
per_example_weights, name='per_example_weights', dtype=dtype)
if input_.get_shape().ndims:
expected_length = input_.get_shape().dims[0]
message = ('per_example_weights must have rank 1 and length %s, but was: %s'
% (expected_length, per_example_weights.get_shape()))
else:
expected_length = None
message = ('per_example_weights must have rank 1 and length equal to the '
'first dimension of inputs (unknown), but was: %s'
% per_example_weights.get_shape())
if per_example_weights.get_shape().ndims not in (1, None):
raise ValueError(message)
if not per_example_weights.get_shape().is_compatible_with((expected_length,)):
raise ValueError(message)
return per_example_weights |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def apply_regression(input_, regression_fn, target, regression_args=(), regression_kwargs=None, name=PROVIDED, loss_weight=None, per_example_weights=None):
"""Applies the given regression and adds the loss to the bookkeeper. This does not change tensor. Args: input_: A Tensor or a Pretty Tensor holding the input. regression_fn: A function that takes (in order) tensor, labels. target: The targe of the regression. regression_args: Other arguments for the regression. regression_kwargs: Keyword args for the regression. name: The name, also added to regression_kwargs. loss_weight: A scalar multiplier for the loss. per_example_weights: A Tensor with a weight per example. Returns: The loss tensor's name. Raises: ValueError: If the target is not a compatible shape with input_. """ |
if regression_kwargs is None:
regression_kwargs = {}
if name is not None and 'name' not in regression_kwargs:
regression_kwargs['name'] = name
elif name is None:
name = input_.tensor.op.name
tensor = input_.tensor
loss = regression_fn(tensor, target, *regression_args, **regression_kwargs)
if loss_weight is not None:
loss *= loss_weight
if per_example_weights is not None:
per_example_weights = _convert_and_assert_per_example_weights_compatible(
input_,
per_example_weights,
dtype=loss.dtype)
loss *= per_example_weights
# Use mean so that the learning rate is independent of the batch size.
if name is None:
name = loss.op.name
if tensor.get_shape()[0].value is not None:
# Try to use division instead of reduce_mean because reduce_mean doesn't
# work on GPU.
avg_loss = tf.reduce_sum(loss) / tensor.get_shape()[0].value
else:
avg_loss = tf.reduce_mean(loss)
return input_.add_loss(avg_loss, name=name) |
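The scaling and reduction order, sketched in plain Python (a simplified model of the tensor math above):

```python
def weighted_mean_loss(per_example_losses, loss_weight=None,
                       per_example_weights=None):
    """Sketch of the loss scaling and reduction in apply_regression."""
    losses = list(per_example_losses)
    if loss_weight is not None:
        losses = [loss_weight * l for l in losses]
    if per_example_weights is not None:
        losses = [l * w for l, w in zip(losses, per_example_weights)]
    # Mean keeps the learning rate independent of the batch size.
    return sum(losses) / len(losses)

avg_loss = weighted_mean_loss([1.0, 2.0, 3.0], loss_weight=2.0,
                              per_example_weights=[1.0, 0.0, 1.0])
# the zero weight drops the second example before the batch mean
```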
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def binary_cross_entropy_with_logits(input_, target, name=PROVIDED, loss_weight=None, per_example_weights=None, per_output_weights=None):
"""Calculates the binary cross entropy of the input_ vs inputs. Expects unscaled logits. Do not pass in results of sigmoid operation. Args: input_: A rank 2 Tensor or a Pretty Tensor holding the logits. target: A rank 2 tf.float32 or tf.float64 tensor containing class label probabilities. Note that binary cross entropy is equivalent to logistic loss. name: The optional name. loss_weight: A scalar multiplier for the loss. per_example_weights: A `Tensor` with a weight per example. per_output_weights: A weight `Tensor` that is the same shape as the input_ that can be used to scale individual prediction losses. See `tf.tile` to turn a per-column weight vector into a `per_output_weights` `Tensor`. Returns: Binary cross entropy loss after sigmoid operation. Raises: ValueError: if target is None or the type is not float or double. """ |
if target is None:
raise ValueError('target must be set')
target = _convert_and_assert_tensors_compatible(input_, target)
with tf.name_scope('stats'):
selected, sum_retrieved, sum_relevant = _compute_precision_recall(
input_, target, 0, per_example_weights)
precision = selected / sum_retrieved
recall = selected / sum_relevant
if precision.get_shape().is_fully_defined():
input_.bookkeeper.add_average_summary(
precision, 'average_precision_%s' % name)
if recall.get_shape().is_fully_defined():
input_.bookkeeper.add_average_summary(
recall, 'average_recall_%s' % name)
input_.bookkeeper.add_scalar_summary(
tf.reduce_sum(tf.to_float(tf.greater(input_, 0))), 'activations')
if per_output_weights is not None:
per_output_weights = tf.convert_to_tensor(
per_output_weights,
name='per_output_weights',
dtype=input_.dtype.base_dtype)
input_.get_shape().assert_is_compatible_with(
per_output_weights.get_shape())
def _batch_sum_bce(x, target, name='binary_cross_entropy'):
logits = functions.binary_cross_entropy_loss_with_logits(x,
target,
name=name)
if per_output_weights is not None:
logits *= per_output_weights
return functions.reduce_batch_sum(logits)
return apply_regression(
input_,
_batch_sum_bce,
target,
[],
name='%s_bce_loss' % name,
loss_weight=loss_weight,
per_example_weights=per_example_weights) |
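The internal `functions.binary_cross_entropy_loss_with_logits` helper is not shown here, but the standard numerically stable formulation of the loss on unscaled logits is (a sketch under that assumption):

```python
import math

def bce_with_logits(logit, target):
    """Numerically stable binary cross entropy on an unscaled logit:
    max(x, 0) - x*z + log(1 + exp(-|x|)), equivalent to
    -z*log(p) - (1-z)*log(1-p) with p = sigmoid(x)."""
    return (max(logit, 0.0) - logit * target
            + math.log1p(math.exp(-abs(logit))))

# Matches the naive sigmoid form for a moderate logit:
p = 1.0 / (1.0 + math.exp(-2.0))      # sigmoid(2.0)
naive = -math.log(p)                  # loss for target = 1.0
stable = bce_with_logits(2.0, 1.0)
```

Unlike the naive form, the rewritten expression never exponentiates a large positive number, so it stays finite for extreme logits.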
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def softmax_classifier_with_sampled_loss(inputs, num_classes, labels, num_sampled, num_true=None, sampled_values=None, remove_accidental_hits=True, loss_weight=None, per_example_weights=None, weights=None, bias=tf.zeros_initializer(), parameter_modifier=parameters.identity, name='softmax_classifier'):
"""Applies softmax and if labels is not None, then it adds a sampled loss. This is a faster way to train a softmax classifier over a huge number of classes. It is generally an underestimate of the full softmax loss. At inference time, you can compute full softmax probabilities with the expression `tf.nn.softmax(tf.matmul(inputs, weights) + biases)`. See `tf.nn.sampled_softmax_loss` for more details. Also see Section 3 of [Jean et al., 2014](http://arxiv.org/abs/1412.2007) ([pdf](http://arxiv.org/pdf/1412.2007.pdf)) for the math. Note: If you depend on the softmax part of the loss, then you will lose most of the speed benefits of sampling the loss. It should be used for evaluation only and not executed on every update op. Note: This is not checkpoint compatible with `softmax_classifier` since it optimizes a transpose by pushing it down to the `fully_connected` layer. Args: inputs: A `Tensor` of shape `[batch_size, dim]`. The forward activations of the input network. num_classes: An `int`. The number of possible classes. labels: A `Tensor` of type `int64` and shape `[batch_size, num_true]`. The target classes. Note that this format differs from the `labels` argument of `nn.softmax_cross_entropy_with_logits`. num_sampled: An `int`. The number of classes to randomly sample per batch. num_true: An `int`. The number of target classes per training example, defaults to the second dim of labels if known or 1. sampled_values: a tuple of (`sampled_candidates`, `true_expected_count`, `sampled_expected_count`) returned by a `*_candidate_sampler` function. (if None, we default to `log_uniform_candidate_sampler`) remove_accidental_hits: A `bool`. whether to remove "accidental hits" where a sampled class equals one of the target classes. Default is True. loss_weight: A scalar multiplier for the loss. per_example_weights: A Tensor with a weight per example. weights: The initializer for the weights (see `fully_connected`). 
Note: This is the transpose of a normal fully_connected input layer! bias: The initializer for the bias (see `fully_connected`). parameter_modifier: A modifier for the parameters that compute the logits. name: The optional name. Returns: A tuple of handles to the logits (fully connected layer) and loss. Raises: ValueError: If inputs or labels do not have the right shape. """ |
# Compound ops need to respect sequential, so take a snapshot.
input_copy = inputs.as_layer()
with tf.name_scope('sampled_softmax'):
  full = inputs.fully_connected(num_classes,
                                activation_fn=None,
                                name=name,
                                transpose_weights=True,
                                weights=weights,
                                bias=bias,
                                parameter_modifier=parameter_modifier)
  if labels is not None:
    labels = tf.convert_to_tensor(labels, dtype=tf.int64, name='labels')
    labels.get_shape().assert_is_compatible_with(
        [input_copy.get_shape()[0], num_true])
    if num_true is None:
      if labels.get_shape().ndims and labels.get_shape().dims[1]:
        num_true = labels.get_shape().dims[1].value
      else:
        num_true = 1

    def _loss(input_, labels, name=None):
      return tf.nn.sampled_softmax_loss(
          weights=full.layer_parameters['weights'],
          biases=full.layer_parameters['bias'],
          labels=labels,
          inputs=input_,
          num_sampled=num_sampled,
          num_classes=num_classes,
          num_true=num_true,
          sampled_values=sampled_values,
          remove_accidental_hits=remove_accidental_hits,
          name=name)

    loss = apply_regression(input_copy,
                            _loss,
                            labels,
                            [],
                            name='%s_sampled_loss' % name,
                            loss_weight=loss_weight,
                            per_example_weights=per_example_weights)
  else:
    loss = None
return SampledSoftmaxResult(full, loss)
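The docstring's claim that the sampled loss "is generally an underestimate of the full softmax loss" can be seen with a minimal pure-Python sketch. This deliberately simplifies `tf.nn.sampled_softmax_loss` (it omits the log-expected-count correction for the proposal distribution and accidental-hit handling) and exists only to show the core idea: the normalizer is computed over the true class plus a small random subset of negatives instead of all `num_classes` classes.

```python
import math
import random


def full_softmax_loss(logits, true_idx):
  """Cross entropy over all classes: log-sum-exp(logits) - logits[true_idx]."""
  m = max(logits)
  log_z = m + math.log(sum(math.exp(l - m) for l in logits))
  return log_z - logits[true_idx]


def sampled_softmax_loss(logits, true_idx, num_sampled, rng):
  """Approximate the loss using only the true class plus sampled negatives.

  A simplification of tf.nn.sampled_softmax_loss: no proposal-distribution
  correction, negatives drawn uniformly without accidental hits.
  """
  negatives = [i for i in range(len(logits)) if i != true_idx]
  chosen = [true_idx] + rng.sample(negatives, num_sampled)
  sub = [logits[i] for i in chosen]
  return full_softmax_loss(sub, 0)  # true class is at position 0 of the subset


rng = random.Random(0)
logits = [rng.gauss(0.0, 1.0) for _ in range(1000)]
full = full_softmax_loss(logits, true_idx=3)
approx = sampled_softmax_loss(logits, true_idx=3, num_sampled=20, rng=rng)

# Dropping terms from the log-sum-exp normalizer can only shrink it, so the
# sampled estimate is a lower bound on the full softmax loss.
assert approx <= full
```

This also motivates the note in the docstring: the `full` logits layer is returned so that exact probabilities can be computed at evaluation time, but running it on every update op would give back the cost that sampling was meant to avoid.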