Group normalization as in https://arxiv.org/abs/1803.08494.
def group_norm(x, filters=None, num_groups=8, epsilon=1e-5):
  """Group normalization as in https://arxiv.org/abs/1803.08494."""
  x_shape = shape_list(x)
  if filters is None:
    filters = x_shape[-1]
  assert len(x_shape) == 4
  assert filters % num_groups == 0
  # Prepare variables.
  scale = tf.get_variable(
      "group_norm_scale", [filters], initializer=tf.ones_initializer())
  bias = tf.get_variable(
      "group_norm_bias", [filters], initializer=tf.zeros_initializer())
  epsilon, scale, bias = [cast_like(t, x) for t in [epsilon, scale, bias]]
  # Reshape channels into (groups, channels-per-group) and normalize over
  # height, width and within-group channels (but not across groups).
  x = tf.reshape(x, x_shape[:-1] + [num_groups, filters // num_groups])
  mean, variance = tf.nn.moments(x, [1, 2, 4], keep_dims=True)
  norm_x = (x - mean) * tf.rsqrt(variance + epsilon)
  return tf.reshape(norm_x, x_shape) * scale + bias
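To see the grouping logic in isolation, here is a minimal NumPy reference of the same reshape-and-normalize pattern (group_norm_ref is a hypothetical helper, not part of the library, and omits the learned scale and bias):

import numpy as np

def group_norm_ref(x, num_groups=8, epsilon=1e-5):
  # x: [batch, height, width, channels]; channels must divide evenly.
  b, h, w, c = x.shape
  assert c % num_groups == 0
  g = x.reshape(b, h, w, num_groups, c // num_groups)
  # Normalize over height, width, and within-group channels (not groups).
  mean = g.mean(axis=(1, 2, 4), keepdims=True)
  var = g.var(axis=(1, 2, 4), keepdims=True)
  return ((g - mean) / np.sqrt(var + epsilon)).reshape(b, h, w, c)

x = np.random.randn(2, 4, 4, 16).astype(np.float32)
y = group_norm_ref(x)
assert y.shape == x.shape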
One version of layer normalization.
def noam_norm(x, epsilon=1.0, name=None):
  """One version of layer normalization."""
  with tf.name_scope(name, default_name="noam_norm", values=[x]):
    shape = x.get_shape()
    ndims = len(shape)
    return (tf.nn.l2_normalize(x, ndims - 1, epsilon=epsilon) *
            tf.sqrt(to_float(shape[-1])))
Layer normalization with l2 norm.
def l2_norm(x, filters=None, epsilon=1e-6, name=None, reuse=None): """Layer normalization with l2 norm.""" if filters is None: filters = shape_list(x)[-1] with tf.variable_scope(name, default_name="l2_norm", values=[x], reuse=reuse): scale = tf.get_variable( "l2_norm_scale", [filters], initializer...
Normalizes x using the spectral norm. The implementation follows Algorithm 1 of https://arxiv.org/abs/1802.05957. If x is not a 2-D Tensor, then it is reshaped such that the number of channels (last-dimension) is the same. Args: x: Tensor with the last dimension equal to the number of filters. Returns:...
def apply_spectral_norm(x): """Normalizes x using the spectral norm. The implementation follows Algorithm 1 of https://arxiv.org/abs/1802.05957. If x is not a 2-D Tensor, then it is reshaped such that the number of channels (last-dimension) is the same. Args: x: Tensor with the last dimension equal to t...
Apply Normalization.
def apply_norm(x, norm_type, depth, epsilon, layer_collection=None): """Apply Normalization.""" if layer_collection is not None: assert norm_type == "layer" if norm_type == "layer": return layer_norm( x, filters=depth, epsilon=epsilon, layer_collection=layer_collection) if norm_type == "group": ...
Resnet connection with zero initialization. Another type of resnet connection which returns previous_value + gamma * x. gamma is a trainable scalar and initialized with zero. It is useful when a module is plugged into a trained model and we want to make sure it matches the original model's performance. Args...
def zero_add(previous_value, x, name=None, reuse=None): """Resnet connection with zero initialization. Another type of resnet connection which returns previous_value + gamma * x. gamma is a trainable scalar and initialized with zero. It is useful when a module is plugged into a trained model and we want to mak...
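Since the snippet above is truncated, here is a minimal sketch of the gamma-gated residual it describes (zero_add_sketch is a hypothetical name; the key point is that the output equals previous_value at initialization because gamma starts at zero):

import tensorflow as tf

def zero_add_sketch(previous_value, x, name="zero_add"):
  # Trainable scalar gate, initialized to zero, so the module is a no-op
  # when first plugged into a trained model.
  with tf.variable_scope(name):
    gamma = tf.get_variable("gamma", [], initializer=tf.zeros_initializer())
    return previous_value + gamma * x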
Apply a sequence of functions to the input or output of a layer. The sequence is specified as a string which may contain the following characters: a: add previous_value n: apply normalization d: apply dropout z: zero add For example, if sequence=="dna", then the output is previous_value + no...
def layer_prepostprocess(previous_value, x, sequence, dropout_rate, norm_type, depth, epsilon, default_name, name=None, ...
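The sequence string is easiest to read as a tiny interpreter over the four operations. A hypothetical sketch (apply_sequence_sketch and its function arguments are illustrative names, not library APIs):

def apply_sequence_sketch(previous_value, x, sequence, norm_fn, dropout_fn,
                          zero_add_fn):
  # 'a' adds the residual, 'n' normalizes, 'd' applies dropout,
  # 'z' zero-adds (gamma-gated residual).
  for op in sequence:
    if op == "a":
      x += previous_value
    elif op == "n":
      x = norm_fn(x)
    elif op == "d":
      x = dropout_fn(x)
    elif op == "z":
      x = zero_add_fn(previous_value, x)
  return x

For example, sequence "dna" yields previous_value + normalize(dropout(x)), matching the description above.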
Apply layer preprocessing. See layer_prepostprocess() for details. A hyperparameters object is passed for convenience. The hyperparameters that may be used are: layer_preprocess_sequence layer_prepostprocess_dropout norm_type hidden_size norm_epsilon Args: layer_input: a Tensor ...
def layer_preprocess(layer_input, hparams, layer_collection=None): """Apply layer preprocessing. See layer_prepostprocess() for details. A hyperparameters object is passed for convenience. The hyperparameters that may be used are: layer_preprocess_sequence layer_prepostprocess_dropout norm_type ...
Apply layer postprocessing. See layer_prepostprocess() for details. A hyperparameters object is passed for convenience. The hyperparameters that may be used are: layer_postprocess_sequence layer_prepostprocess_dropout norm_type hidden_size norm_epsilon Args: layer_input: a Tensor ...
def layer_postprocess(layer_input, layer_output, hparams): """Apply layer postprocessing. See layer_prepostprocess() for details. A hyperparameters object is passed for convenience. The hyperparameters that may be used are: layer_postprocess_sequence layer_prepostprocess_dropout norm_type hi...
A block of convolutions. Args: conv_fn: convolution function, e.g. conv or separable_conv. inputs: a Tensor filters: an Integer dilation_rates_and_kernel_sizes: a list of tuples (dilation, (k_w, k_h)) first_relu: whether to do a relu at start (defaults to True) use_elu: whether to use ELUs in...
def conv_block_internal(conv_fn, inputs, filters, dilation_rates_and_kernel_sizes, first_relu=True, use_elu=False, separabilities=None, **kwargs): """...
A block of standard 2d convolutions.
def conv_block(inputs, filters, dilation_rates_and_kernel_sizes, **kwargs):
  """A block of standard 2d convolutions."""
  return conv_block_internal(conv, inputs, filters,
                             dilation_rates_and_kernel_sizes, **kwargs)
A block of standard 1d convolutions.
def conv1d_block(inputs, filters, dilation_rates_and_kernel_sizes, **kwargs):
  """A block of standard 1d convolutions."""
  return conv_block_internal(conv1d, inputs, filters,
                             dilation_rates_and_kernel_sizes, **kwargs)
A block of separable convolutions.
def separable_conv_block(inputs, filters, dilation_rates_and_kernel_sizes,
                         **kwargs):
  """A block of separable convolutions."""
  return conv_block_internal(separable_conv, inputs, filters,
                             dilation_rates_and_kernel_sizes, **kwargs)
A block of sub-separable convolutions.
def subseparable_conv_block(inputs, filters, dilation_rates_and_kernel_sizes,
                            **kwargs):
  """A block of sub-separable convolutions."""
  return conv_block_internal(subseparable_conv, inputs, filters,
                             dilation_rates_and_kernel_sizes, **kwargs)
Pooling (supports "LEFT").
def pool(inputs, window_size, pooling_type, padding, strides=(1, 1)): """Pooling (supports "LEFT").""" with tf.name_scope("pool", values=[inputs]): static_shape = inputs.get_shape() if not static_shape or len(static_shape) != 4: raise ValueError("Inputs to conv must have statically known rank 4.") ...
Implements a downwards-striding conv block, like Xception exit flow.
def conv_block_downsample(x, kernel, strides, padding, separability=0, name=None, reuse=None): """Implements a downwards-striding conv block, like Xception exit f...
Create Tensor of sinusoids of different frequencies. Args: length: Length of the Tensor to create, i.e. Number of steps. min_timescale: a float max_timescale: a float num_timescales: an int Returns: Tensor of shape (length, 2*num_timescales)
def get_timing_signal(length, min_timescale=1, max_timescale=1e4, num_timescales=16): """Create Tensor of sinusoids of different frequencies. Args: length: Length of the Tensor to create, i.e. Number of steps. min_timescale: a float max_...
Adds a bunch of sinusoids of different frequencies to a Tensor. This allows attention to learn to use absolute and relative positions. The timing signal should be added to some precursor of both the source and the target of the attention. The use of relative position is possible because sin(x+y) and cos(x+y) ...
def add_timing_signal(x, min_timescale=1, max_timescale=1e4, num_timescales=16): """Adds a bunch of sinusoids of different frequencies to a Tensor. This allows attention to learn to use absolute and relative positions. The timing signal should be added to some precursor of both the source and the target of the...
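Both timing-signal snippets above are truncated, so here is a minimal NumPy sketch of the standard construction: a geometric progression of timescales, with sin in the first half of the channels and cos in the second half (timing_signal_ref is a hypothetical reference, assuming channels = 2 * num_timescales):

import numpy as np

def timing_signal_ref(length, channels, min_timescale=1.0, max_timescale=1e4):
  num_timescales = channels // 2
  log_increment = (np.log(max_timescale / min_timescale) /
                   max(num_timescales - 1, 1))
  inv_timescales = min_timescale * np.exp(
      -np.arange(num_timescales) * log_increment)
  scaled_time = np.arange(length)[:, None] * inv_timescales[None, :]
  # Shape (length, 2 * num_timescales), matching the docstring above.
  return np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)

signal = timing_signal_ref(length=10, channels=32)
assert signal.shape == (10, 32)

The relative-position property follows from the angle-addition identities: sin(x+y) and cos(x+y) are linear combinations of sin(x), cos(x) weighted by functions of y.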
Input embeddings -> padding mask. We have hacked symbol_modality to return all-zero embeddings for padding. Returns a mask with 0.0 in the padding positions and 1.0 elsewhere. Args: emb: a Tensor with shape [batch, width, height, depth]. Returns: a 0.0/1.0 Tensor with shape [batch, width, height, 1].
def mask_from_embedding(emb): """Input embeddings -> padding mask. We have hacked symbol_modality to return all-zero embeddings for padding. Returns a mask with 0.0 in the padding positions and 1.0 elsewhere. Args: emb: a Tensor with shape [batch, width, height, depth]. Returns: a 0.0/1.0 Tensor wit...
Compute the length of each sequence in the batch. Args: emb: a sequence embedding Tensor with shape [batch, max_time, 1, depth]. Returns: a Tensor with shape [batch].
def length_from_embedding(emb):
  """Compute the length of each sequence in the batch.

  Args:
    emb: a sequence embedding Tensor with shape [batch, max_time, 1, depth].
  Returns:
    a Tensor with shape [batch].
  """
  return tf.cast(tf.reduce_sum(mask_from_embedding(emb), [1, 2, 3]), tf.int32)
logit(density(x)). Useful for histograms. Args: x: a Tensor, typically the output of tf.relu reduce_dims: a list of dimensions Returns: a Tensor
def relu_density_logit(x, reduce_dims):
  """logit(density(x)). Useful for histograms.

  Args:
    x: a Tensor, typically the output of tf.relu
    reduce_dims: a list of dimensions
  Returns:
    a Tensor
  """
  frac = tf.reduce_mean(to_float(x > 0.0), reduce_dims)
  scaled = tf.log(frac + math.exp(-10)) - tf.log((1.0 - frac) + math.exp(-10))
  return scaled
If necessary, zero out inputs to a conv for padding positions. Args: inputs: a Tensor with shape [batch, length, ...] kernel_size: an integer or pair of integers nonpadding_mask: a Tensor with shape [batch, length] Returns: Tensor of the same shape as inputs.
def maybe_zero_out_padding(inputs, kernel_size, nonpadding_mask): """If necessary, zero out inputs to a conv for padding positions. Args: inputs: a Tensor with shape [batch, length, ...] kernel_size: an integer or pair of integers nonpadding_mask: a Tensor with shape [batch, length] Returns: Ten...
Hidden layer with RELU activation followed by linear projection.
def dense_relu_dense(inputs, filter_size, output_size, output_activation=None, dropout=0.0, dropout_broadcast_dims=None, layer_collection=None, name=None): """Hidden layer...
Dense layer with dropconnect.
def dense_dropconnect(inputs, output_size, dropconnect_dropout=0.0, name="dense_dropconnect", **kwargs): """Dense layer with dropconnect.""" if dropconnect_dropout != 0.0: tf.logging.info("Applying dropconnect as the kernel...
Hidden layer with RELU activation followed by linear projection. Args: inputs: A tensor. filter_size: An integer. output_size: An integer. first_kernel_size: An integer. second_kernel_size: An integer. padding: A string. nonpadding_mask: A tensor. dropout: A float. name: A string....
def conv_relu_conv(inputs, filter_size, output_size, first_kernel_size=3, second_kernel_size=3, padding="SAME", nonpadding_mask=None, dropout=0.0, name=None, ...
Hidden layer with RELU activation followed by linear projection.
def sepconv_relu_sepconv(inputs, filter_size, output_size, first_kernel_size=(1, 1), second_kernel_size=(1, 1), padding="LEFT", nonpadding_mask=None, ...
Hidden layer with RELU activation followed by linear projection.
def conv_hidden_relu(inputs, hidden_size, output_size, kernel_size=(1, 1), second_kernel_size=(1, 1), dropout=0.0, **kwargs): """Hidden layer with RELU activation followed by linear projection...
Convolutional GRU in 1 dimension.
def conv_gru(x, kernel_size, filters, padding="SAME", dilation_rate=(1, 1), name=None, reuse=None): """Convolutional GRU in 1 dimension.""" # Let's make a shorthand for conv call first. def do_conv(args, name, bias_start, padding): ...
Position-wise feed-forward GRU gates following the MPNN. Args: a_t: Tensor of shape [batch, length, depth] of current input h_prev: Tensor of shape [batch, length, depth] of previous input filters: an integer specifying the number of dimensions of the filters name: A string Returns: h_t: [batch, length, fi...
def gru_feedfwd(a_t, h_prev, filters, name=None): """position-wise Feed-fwd GRU gates following the MPNN. Args: a_t: Tensor of shape [batch, length, depth] of current input h_prev: Tensor of shape [batch, length, depth] of prev input filters: an integer specifying number of dimensions of the filters ...
Convolutional LSTM in 1 dimension.
def conv_lstm(x, kernel_size, filters, padding="SAME", dilation_rate=(1, 1), name=None, reuse=None): """Convolutional LSTM in 1 dimension.""" with tf.variable_scope( name, default_name="conv_lstm", values=[x], reuse=reuse): ...
Diagonal Convolutional GRU as in https://arxiv.org/abs/1702.08727.
def diagonal_conv_gru(x, kernel_size, filters, dropout=0.0, name=None, reuse=None): """Diagonal Convolutional GRU as in https://arxiv.org/abs/1702.08727.""" # Let's make a shorthand for conv call first. ...
Pad tensors x and y on axis 1 so that they have the same length.
def pad_to_same_length(x, y, final_length_divisible_by=1, axis=1): """Pad tensors x and y on axis 1 so that they have the same length.""" if axis not in [1, 2]: raise ValueError("Only axis=1 and axis=2 supported for now.") with tf.name_scope("pad_to_same_length", values=[x, y]): x_length = shape_list(x)[a...
Pad labels on the length dimension to match logits length.
def pad_with_zeros(logits, labels): """Pad labels on the length dimension to match logits length.""" with tf.name_scope("pad_with_zeros", values=[logits, labels]): logits, labels = pad_to_same_length(logits, labels) if len(labels.shape) == 3: # 2-d labels. logits, labels = pad_to_same_length(logits, ...
Assign weight 1.0 to only the "targets" portion of the labels. Weight 1.0 is assigned to all nonzero labels past the first zero. See prepend_mode in common_hparams.py Args: labels: A Tensor of int32s. Returns: A Tensor of floats.
def weights_prepend_inputs_to_targets(labels): """Assign weight 1.0 to only the "targets" portion of the labels. Weight 1.0 is assigned to all nonzero labels past the first zero. See prepend_mode in common_hparams.py Args: labels: A Tensor of int32s. Returns: A Tensor of floats. """ past_first_...
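The cumsum trick in this function is easy to check numerically. A hypothetical NumPy reference (weights_prepend_ref is not part of the library):

import numpy as np

def weights_prepend_ref(labels):
  # Weight 1.0 on nonzero labels that come after the first 0
  # (the padding token separating inputs from targets).
  past_first_zero = np.cumsum((labels == 0).astype(np.float32), axis=1) > 0
  return past_first_zero.astype(np.float32) * (labels != 0).astype(np.float32)

labels = np.array([[5, 6, 7, 0, 8, 9, 0, 0]])
print(weights_prepend_ref(labels))  # [[0. 0. 0. 0. 1. 1. 0. 0.]]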
Check that the value is nonnegative.
def check_nonnegative(value):
  """Check that the value is nonnegative."""
  if isinstance(value, tf.Tensor):
    with tf.control_dependencies([tf.assert_greater_equal(value, 0)]):
      value = tf.identity(value)
  elif value < 0:
    raise ValueError("Value must be non-negative.")
  return value
Assign weight 1.0 to only the "targets" portion of the labels. Weight 1.0 is assigned to all labels past the taskid. Args: labels: A Tensor of int32s. taskid: an int32 representing the task id for a problem. Returns: A Tensor of floats. Raises: ValueError: The Task ID must be valid.
def weights_multi_problem(labels, taskid=-1): """Assign weight 1.0 to only the "targets" portion of the labels. Weight 1.0 is assigned to all labels past the taskid. Args: labels: A Tensor of int32s. taskid: an int32 representing the task id for a problem. Returns: A Tensor of floats. Raises: ...
Assign weight 1.0 to only examples from the given task.
def weights_multi_problem_all(labels, taskid=-1): """Assign weight 1.0 to only examples from the given task.""" taskid = check_nonnegative(taskid) weights = to_float(tf.not_equal(labels, 0)) past_taskid = tf.cumsum(to_float(tf.equal(labels, taskid)), axis=1) # Additionally zero out the task id location past...
Assign weight 1.0 to only the inputs for the given task.
def weights_multi_problem_input(labels, taskid=-1):
  """Assign weight 1.0 to only the inputs for the given task."""
  taskid = check_nonnegative(taskid)
  weights_all_tokens = weights_multi_problem_all(labels, taskid)
  weights_target = weights_multi_problem(labels, taskid)
  return weights_all_tokens - weights_target
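The identity input_weights = all_weights - target_weights can be sanity-checked on a toy example. The weight vectors below are hypothetical hand-computed outputs for labels [[3, 4, 9, 7, 8, 0]] with taskid=9 (the task-id position itself gets weight 0 in both):

import numpy as np

all_w = np.array([[1., 1., 0., 1., 1., 0.]])     # every token but the task id
target_w = np.array([[0., 0., 0., 1., 1., 0.]])  # tokens after the task id
input_w = all_w - target_w
print(input_w)  # [[1. 1. 0. 0. 0. 0.]] -> only the input tokens (3, 4)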
Assign weight 1.0 to the "target" part of the concatenated labels. The labels look like: source English I love you . ID1 target French Je t'aime . ID1 source English the cat ID1 target French le chat ID1 source English ... We want to assign weight 1.0 to all words in the target text (including the ID1...
def weights_concatenated(labels): """Assign weight 1.0 to the "target" part of the concatenated labels. The labels look like: source English I love you . ID1 target French Je t'aime . ID1 source English the cat ID1 target French le chat ID1 source English ... We want to assign weight 1.0 to all words ...
Compute cross-entropy assuming 0s are padding. Computes a loss numerator (the sum of losses), and loss denominator (the number of non-padding tokens). Args: logits: a `Tensor` with shape `[batch, timesteps, vocab_size]`. optionally a FactoredTensor. labels: an integer `Tensor` with shape `[batch, ...
def padded_cross_entropy(logits, labels, label_smoothing, weights_fn=weights_nonzero, reduce_sum=True, cutoff=0.0, gaussian=False): """Compute cross-entropy assuming 0s...
Compute cross-entropy assuming 0s are padding. Computes a loss numerator (the sum of losses), and loss denominator (the number of non-padding tokens). Computes cross-entropy for each mixture, and returns the corresponding values for the mixture with the highest probability Args: logits: `Tensor` with s...
def padded_cross_entropy_mixture(logits, labels, label_smoothing, num_mixtures, weights_fn=weights_nonzero, reduce_sum=False, ...
Discretized mixture of logistics loss. Args: pred: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard deviations (one per channel), and three coefficients which linearly parameterize dependen...
def dml_loss(pred, labels, weights_fn=_weights_one_third, reduce_sum=True): """Discretized mixture of logistics loss. Args: pred: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard deviations (on...
Splits input tensor into parameters of discretized mixture logistic. Args: inputs: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard deviations (one per channel), and three coefficients whic...
def split_to_discretized_mix_logistic_params(inputs): """Splits input tensor into parameters of discretized mixture logistic. Args: inputs: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard devi...
Computes negative log probability for the discretized mixture of logistics. The distribution of a whole pixel is a mixture of 3-dimensional discretized logistic distributions. The 3-D discretized logistic factorizes as 3 1-D discretized logistic distributions, one for each channel. It defines ```none P(X = ...
def discretized_mix_logistic_loss(pred, labels): """Computes negative log probability for the discretized mixture of logistics. The distribution of a whole pixel is a mixture of 3-dimensional discretized logistic distributions. The 3-D discretized logistic factorizes as 3 1-D discretized logistic distributions...
Sampling from a discretized mixture of logistics. Args: pred: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard deviations (one per channel), and three coefficients which linearly parameteri...
def sample_from_discretized_mix_logistic(pred, seed=None): """Sampling from a discretized mixture of logistics. Args: pred: A [batch, height, width, num_mixtures*10] tensor of floats comprising one unconstrained mixture probability, three means (one per channel), three standard deviations (one per ...
Same global pool, but only for the elements up to the current element. Useful for outputs where the state of future elements is not known. Takes no mask as all elements up to the current element are assumed to exist. Currently only supports maximum. Equivalent to using a lower triangle bias. Args: inputs:...
def running_global_pool_1d(inputs, pooling_type="MAX"): """Same global pool, but only for the elements up to the current element. Useful for outputs where the state of future elements is not known. Takes no mask as all elements up to the current element are assumed to exist. Currently only supports maximum. Eq...
Cross entropy with label smoothing to limit over-confidence. Args: logits: Tensor of shape [batch_size, ?, ?, ?, vocab_size]. labels: Tensor of shape [batch_size, ?, ?, ?]. vocab_size: Tensor representing the size of the vocabulary. confidence: Used to determine on and off values for label smoothing....
def smoothing_cross_entropy(logits, labels, vocab_size, confidence, gaussian=False): """Cross entropy with label smoothing to limit over-confidence. Args: logits: Tensor of shape [batch_size, ?, ?, ?...
Pool elements across the sequence dimension. Useful to convert a list of vectors into a single vector so as to get a representation of a set. Args: inputs: A tensor of shape [batch_size, sequence_length, input_dims] containing the sequences of input vectors. pooling_type: the pooling type to use, MAX ...
def global_pool_1d(inputs, pooling_type="MAX", mask=None): """Pool elements across the last dimension. Useful to convert a list of vectors into a single vector so as to get a representation of a set. Args: inputs: A tensor of shape [batch_size, sequence_length, input_dims] containing the sequences o...
Gated linear unit layer. Paper: Language Modeling with Gated Convolutional Networks. Link: https://arxiv.org/abs/1612.08083 x = Wx * sigmoid(W'x). Args: x: A tensor name: A string Returns: A tensor of the same shape as x.
def gated_linear_unit_layer(x, name=None): """Gated linear unit layer. Paper: Language Modeling with Gated Convolutional Networks. Link: https://arxiv.org/abs/1612.08083 x = Wx * sigmoid(W'x). Args: x: A tensor name: A string Returns: A tensor of the same shape as x. """ with tf.variable_...
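Since the snippet above is truncated, here is a minimal TF1-style sketch of the x = Wx * sigmoid(W'x) computation. Note this sketch uses two separate dense projections for clarity; an equivalent (and common) formulation projects once to 2x depth and splits. glu_sketch is a hypothetical name:

import tensorflow as tf

def glu_sketch(x, name="glu_sketch"):
  depth = x.get_shape().as_list()[-1]
  values = tf.layers.dense(x, depth, name=name + "_values")        # W x
  gates = tf.layers.dense(x, depth, activation=tf.nn.sigmoid,
                          name=name + "_gates")                    # sigmoid(W' x)
  return values * gates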
SRU cell as in https://arxiv.org/abs/1709.02755. This implementation uses tf.scan and can incur overhead, see the full SRU function doc for details and an implementation that is sometimes faster. Args: x: A tensor of shape [batch, ..., channels] ; ... is treated as time. num_layers: How many SRU layers;...
def sru_with_scan(x, num_layers=2, activation=None, initial_state=None, name=None, reuse=None): """SRU cell as in https://arxiv.org/abs/1709.02755. This implementation uses tf.scan and can incur overhead, see the full SRU f...
SRU cell as in https://arxiv.org/abs/1709.02755. As defined in the paper: (1) x'_t = W x_t (2) f_t = sigmoid(Wf x_t + bf) (3) r_t = sigmoid(Wr x_t + br) (4) c_t = f_t * c_{t-1} + (1 - f_t) * x'_t (5) h_t = r_t * activation(c_t) + (1 - r_t) * x_t This version uses functional ops to be faster on GPUs with...
def sru(x, num_layers=2, activation=None, initial_state=None, name=None, reuse=None): """SRU cell as in https://arxiv.org/abs/1709.02755. As defined in the paper: (1) x'_t = W x_t (2) f_t = sigmoid(Wf x_t + bf) (3) r_t = sigmoid(Wr x_t + br) (4) c_t = f_t * c_{t-1} +...
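Equations (1)-(5) above translate directly into a single-timestep NumPy reference, useful for checking the recurrence (sru_step_ref and its weight arguments are hypothetical, not library APIs):

import numpy as np

def sru_step_ref(x_t, c_prev, W, Wf, bf, Wr, br, activation=np.tanh):
  x_proj = x_t @ W                                   # (1) x'_t = W x_t
  f = 1.0 / (1.0 + np.exp(-(x_t @ Wf + bf)))         # (2) forget gate
  r = 1.0 / (1.0 + np.exp(-(x_t @ Wr + br)))         # (3) reset gate
  c = f * c_prev + (1.0 - f) * x_proj                # (4) internal state
  h = r * activation(c) + (1.0 - r) * x_t            # (5) highway output
  return h, c

Because (2)-(5) involve no matrix multiply over the recurrent state, the heavy matmuls in (1)-(3) can be computed for all timesteps at once, which is what makes SRU fast.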
Basic layer type for doing funky things with sets. Applies a linear transformation to each element in the input set. If a context is supplied, it is concatenated with the inputs. e.g. One can use global_pool_1d to get a representation of the set which can then be used as the context for the next layer. ...
def linear_set_layer(layer_size, inputs, context=None, activation_fn=tf.nn.relu, dropout=0.0, name=None): """Basic layer type for doing funky things with sets. Applies a linear transformation to each element in...
Layer from Deep Sets paper: https://arxiv.org/abs/1611.04500 . More parameter-efficient version of a linear-set-layer with context. Args: layer_size: Dimension to transform the input vectors to. inputs: A tensor of shape [batch_size, sequence_length, vector] containing the sequences of input vectors...
def ravanbakhsh_set_layer(layer_size, inputs, mask=None, sequential=False, activation_fn=tf.nn.tanh, dropout=0.0, name=None): """Layer from Deep Sets paper: https...
State container for fn_device_dependency.
def fn_device_dependency_dict():
  """State container for fn_device_dependency."""
  default_graph = tf.get_default_graph()
  if not hasattr(default_graph, "dependency_dict"):
    default_graph.dependency_dict = collections.defaultdict(list)
  return default_graph.dependency_dict
Add control deps for name and device.
def fn_device_dependency(name, device=""): """Add control deps for name and device.""" key = name + "_" + device outs = [] def body(): with tf.control_dependencies(fn_device_dependency_dict()[key]): yield outs assert outs deps = outs if isinstance(outs[0], (list, tuple)): a...
Find the underlying variable ref. Traverses through Identity, ReadVariableOp, and Enter ops. Stops when op type has Variable or VarHandle in name. Args: t: a Tensor Returns: a Tensor that is a variable ref, or None on error.
def underlying_variable_ref(t): """Find the underlying variable ref. Traverses through Identity, ReadVariableOp, and Enter ops. Stops when op type has Variable or VarHandle in name. Args: t: a Tensor Returns: a Tensor that is a variable ref, or None on error. """ while t.op.type in ["Identity",...
Find the underlying tf.Variable object. Args: t: a Tensor Returns: tf.Variable.
def underlying_variable(t): """Find the underlying tf.Variable object. Args: t: a Tensor Returns: tf.Variable. """ t = underlying_variable_ref(t) assert t is not None # make sure that the graph has a variable index and that it is up-to-date if not hasattr(tf.get_default_graph(), "var_index"): ...
Split approximately equally into num_splits parts. Args: x: a Tensor num_splits: an integer axis: an integer. Returns: a list of num_splits Tensors.
def approximate_split(x, num_splits, axis=0): """Split approximately equally into num_splits parts. Args: x: a Tensor num_splits: an integer axis: an integer. Returns: a list of num_splits Tensors. """ size = shape_list(x)[axis] size_splits = [tf.div(size + i, num_splits) for i in range(nu...
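The per-split sizes are computed as (size + i) // num_splits for i in range(num_splits), which always sums back to size, so no elements are lost. For example:

size, num_splits = 10, 3
size_splits = [(size + i) // num_splits for i in range(num_splits)]
print(size_splits, sum(size_splits))  # [3, 3, 4] 10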
Gradient function for smoothing_cross_entropy_factored.
def smoothing_cross_entropy_factored_grad(op, dy): """Gradient function for smoothing_cross_entropy_factored.""" a = op.inputs[0] b = op.inputs[1] labels = op.inputs[2] confidence = op.inputs[3] num_splits = 16 vocab_size = shape_list(b)[0] labels = approximate_split(labels, num_splits) a = approximat...
Memory-efficient computation of smoothing cross-entropy. Avoids realizing the entire logits matrix at once. Args: a: a Tensor with shape [batch, inner_dim] b: a Tensor with shape [vocab_size, inner_dim] labels: an integer Tensor with shape [batch] confidence: a float Returns: A Tensor with ...
def smoothing_cross_entropy_factored(a, b, labels, confidence): """Memory-efficient computation of smoothing cross-entropy. Avoids realizing the entire logits matrix at once. Args: a: a Tensor with shape [batch, inner_dim] b: a Tensor with shape [vocab_size, inner_dim] labels: an integer Tensor with...
Memory-efficient computation of smoothing cross-entropy. Avoids realizing the entire logits matrix at once. Args: factored_logits: a `FactoredTensor` representing a Tensor with shape `[batch, timesteps, vocab_size]`. labels: an integer `Tensor` with shape `[batch, timesteps]`. label_smoothing: ...
def padded_cross_entropy_factored(factored_logits, labels, label_smoothing, weights_fn=weights_nonzero, reduce_sum=True): """Memory-efficient computation of smoothing cross-entropy. ...
Decorator to create a subgraph with a custom gradient function. The subgraph created by the decorated function is NOT put in a Defun and so does not suffer from the limitations of the Defun (all subgraph ops on the same device, no summaries). Args: grad_fn: function with signature (inputs, variables...
def fn_with_custom_grad(grad_fn, use_global_vars=False): """Decorator to create a subgraph with a custom gradient function. The subgraph created by the decorated function is NOT put in a Defun and so does not suffer from the limitations of the Defun (all subgraph ops on the same device, no summaries). Args:...
Create a subgraph with a custom gradient. Args: fn: function that takes inputs as arguments and produces 1 or more Tensors. inputs: list<Tensor>, will be passed as fn(*inputs). grad_fn: function with signature (inputs, vars, outputs, output_grads) -> (grad_inputs, grad_vars), all of which are...
def _fn_with_custom_grad(fn, inputs, grad_fn, use_global_vars=False): """Create a subgraph with a custom gradient. Args: fn: function that takes inputs as arguments and produces 1 or more Tensors. inputs: list<Tensor>, will be passed as fn(*inputs). grad_fn: function with signature (inputs, vars,...
LayerNorm, Conv, ReLU, Conv. All convolutions have kernel size 1. returns conv(relu(conv(layer_norm(x)))) Args: x: input Tensor with shape [batch, length, io_size] filter_size: an integer - size of the hidden layer. epsilon: a float (for layer norm) forget: a boolean - forget forwards activatio...
def conv_hidden_relu_memory_efficient(x, filter_size, epsilon=1e-6, forget=True, test_vars=None, name=None): """LayerNorm, Conv,...
Return list of dims, statically where possible.
def shape_list(x):
  """Return list of dims, statically where possible."""
  x = tf.convert_to_tensor(x)
  # If unknown rank, return dynamic shape.
  if x.get_shape().dims is None:
    return tf.shape(x)
  static = x.get_shape().as_list()
  shape = tf.shape(x)
  ret = []
  for i, dim in enumerate(static):
    if dim is None:
      # Fall back to the dynamic dimension where the static one is unknown.
      dim = shape[i]
    ret.append(dim)
  return ret
Either argmax or random sampling. Args: logits: a Tensor. temperature: a float 0.0=argmax 1.0=random sampling_keep_top_k: If not -1, only sample from the top k logits. Returns: a Tensor with one fewer dimension than logits.
def sample_with_temperature(logits, temperature, sampling_keep_top_k=-1): """Either argmax or random sampling. Args: logits: a Tensor. temperature: a float 0.0=argmax 1.0=random sampling_keep_top_k: If not -1, only sample from the top k logits. Returns: a Tensor with one fewer dimension than log...
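A minimal NumPy reference of the temperature behavior (sample_with_temperature_ref is hypothetical, assumes 2-D logits, and omits the sampling_keep_top_k filtering):

import numpy as np

def sample_with_temperature_ref(logits, temperature, rng=np.random):
  # temperature 0.0 reduces to argmax; otherwise sample from
  # softmax(logits / temperature) over the last axis.
  if temperature == 0.0:
    return np.argmax(logits, axis=-1)
  scaled = logits / temperature
  probs = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
  probs /= probs.sum(axis=-1, keepdims=True)
  return np.array([rng.choice(len(p), p=p) for p in probs])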
Matrix band part of ones. Args: rows: int determining number of rows in output cols: int num_lower: int, maximum distance backward. Negative values indicate unlimited. num_upper: int, maximum distance forward. Negative values indicate unlimited. out_shape: shape to reshape output by. ...
def ones_matrix_band_part(rows, cols, num_lower, num_upper, out_shape=None): """Matrix band part of ones. Args: rows: int determining number of rows in output cols: int num_lower: int, maximum distance backward. Negative values indicate unlimited. num_upper: int, maximum distance forward. Neg...
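The band-part condition is just a comparison on col - row. A hypothetical NumPy reference (ones_band_part_ref is not part of the library):

import numpy as np

def ones_band_part_ref(rows, cols, num_lower, num_upper):
  # 1.0 where -num_lower <= col - row <= num_upper;
  # negative bounds mean unlimited in that direction.
  if num_lower < 0:
    num_lower = rows - 1
  if num_upper < 0:
    num_upper = cols - 1
  r = np.arange(rows)[:, None]
  c = np.arange(cols)[None, :]
  return ((c - r <= num_upper) & (r - c <= num_lower)).astype(np.float32)

# Lower-triangular causal mask: unlimited backward, zero forward.
print(ones_band_part_ref(4, 4, num_lower=-1, num_upper=0))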
Reshapes a to match the shape of b.
def reshape_like_all_dims(a, b):
  """Reshapes a to match the shape of b."""
  ret = tf.reshape(a, tf.shape(b))
  if not tf.executing_eagerly():
    ret.set_shape(b.get_shape())
  return ret
Decorator that recomputes the function on the backwards pass. Args: fn: a function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Returns: A wrapped fn that is identical to fn when called, but its activations will be discarded and recomputed on the backwards pas...
def recompute_grad(fn): """Decorator that recomputes the function on the backwards pass. Args: fn: a function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Returns: A wrapped fn that is identical to fn when called, but its activations will be discarded and re...
See recompute_grad.
def _recompute_grad(fn, args): """See recompute_grad.""" cached_vs = [] cached_arg_scope = [] def grad_fn(inputs, variables, outputs, output_grads): """Recompute outputs for gradient computation.""" del outputs variables = [underlying_variable_ref(v) for v in variables] # Recompute outputs ...
Identical to layers.dense.
def dense(x, units, **kwargs): """Identical to layers.dense.""" layer_collection = kwargs.pop("layer_collection", None) activations = layers().Dense(units, **kwargs)(x) if layer_collection: # We need to find the layer parameters using scope name for the layer, so # check that the layer is named. Otherwi...
Multiply a batch of input matrices by a batch of parameter matrices. Each input matrix is multiplied by the corresponding parameter matrix. This is useful in a mixture-of-experts where the batch represents different experts with different inputs. Args: inputs: a Tensor with shape [batch, length, input_un...
def batch_dense(inputs, units, activation=None, kernel_initializer=None, reuse=None, name=None): """Multiply a batch of input matrices by a batch of parameter matrices. Each input matrix is multiplied by the corresponding parameter mat...
Mix starting with x2, gradually moving towards x1.
def mix(x1, x2, steps, is_training, min_prob=0.0, max_prob=1.0,
        mode="lin", simple=False, broadcast_last=False):
  """Mix starting with x2, gradually moving towards x1."""
  with tf.name_scope("mix"):
    if not is_training:
      if max_prob >= 1.0:
        ...
Bipolar ReLU as in https://arxiv.org/abs/1709.04054.
def brelu(x):
  """Bipolar ReLU as in https://arxiv.org/abs/1709.04054."""
  x_shape = shape_list(x)
  x1, x2 = tf.split(tf.reshape(x, x_shape[:-1] + [-1, 2]), 2, axis=-1)
  y1 = tf.nn.relu(x1)
  y2 = -tf.nn.relu(-x2)
  return tf.reshape(tf.concat([y1, y2], axis=-1), x_shape)
Bipolar ELU as in https://arxiv.org/abs/1709.04054.
def belu(x):
  """Bipolar ELU as in https://arxiv.org/abs/1709.04054."""
  x_shape = shape_list(x)
  x1, x2 = tf.split(tf.reshape(x, x_shape[:-1] + [-1, 2]), 2, axis=-1)
  y1 = tf.nn.elu(x1)
  y2 = -tf.nn.elu(-x2)
  return tf.reshape(tf.concat([y1, y2], axis=-1), x_shape)
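Both bipolar activations share the same construction: apply the base function to one channel of each pair and its point reflection -f(-x) to the other, which keeps activations roughly zero-centered across channels. A NumPy sketch of that shared pattern (bipolar_ref is a hypothetical helper):

import numpy as np

def bipolar_ref(x, base_fn):
  pairs = x.reshape(x.shape[:-1] + (-1, 2))
  y1 = base_fn(pairs[..., :1])    # f(x) on the first of each pair
  y2 = -base_fn(-pairs[..., 1:])  # -f(-x) on the second of each pair
  return np.concatenate([y1, y2], axis=-1).reshape(x.shape)

x = np.random.randn(2, 8).astype(np.float32)
relu = lambda v: np.maximum(v, 0.0)
print(bipolar_ref(x, relu))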
Gaussian Error Linear Unit. This is a smoother version of the RELU. Original paper: https://arxiv.org/abs/1606.08415 Args: x: float Tensor to perform activation. Returns: x with the GELU activation applied.
def gelu(x):
  """Gaussian Error Linear Unit.

  This is a smoother version of the RELU.
  Original paper: https://arxiv.org/abs/1606.08415

  Args:
    x: float Tensor to perform activation.
  Returns:
    x with the GELU activation applied.
  """
  # tanh approximation of the Gaussian CDF.
  cdf = 0.5 * (1.0 + tf.tanh(
      (np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3)))))
  return x * cdf
NAC as in https://arxiv.org/abs/1808.00508.
def nac(x, depth, name=None, reuse=None):
  """NAC as in https://arxiv.org/abs/1808.00508."""
  with tf.variable_scope(name, default_name="nac", values=[x], reuse=reuse):
    x_shape = shape_list(x)
    w = tf.get_variable("w", [x_shape[-1], depth])
    m = tf.get_variable("m", [x_shape[-1], depth])
    # Weights constrained towards {-1, 0, 1} via tanh * sigmoid.
    w = tf.tanh(w) * tf.nn.sigmoid(m)
    x_flat = tf.reshape(x, [-1, x_shape[-1]])
    res_flat = tf.matmul(x_flat, w)
    return tf.reshape(res_flat, x_shape[:-1] + [depth])
NALU as in https://arxiv.org/abs/1808.00508.
def nalu(x, depth, epsilon=1e-30, name=None, reuse=None): """NALU as in https://arxiv.org/abs/1808.00508.""" with tf.variable_scope(name, default_name="nalu", values=[x], reuse=reuse): x_shape = shape_list(x) x_flat = tf.reshape(x, [-1, x_shape[-1]]) gw = tf.get_variable("w", [x_shape[-1], depth]) g...
Argmax along with the value.
def argmax_with_score(logits, axis=None): """Argmax along with the value.""" axis = axis or len(logits.get_shape()) - 1 predictions = tf.argmax(logits, axis=axis) logits_shape = shape_list(logits) prefix_shape, vocab_size = logits_shape[:-1], logits_shape[-1] prefix_size = 1 for d in prefix_shape: pr...
Compute the k-th top element of x on the last axis iteratively. This assumes values in x are non-negative, rescale if needed. It is often faster than tf.nn.top_k for small k, especially if k < 30. Note: this does not support back-propagation, it stops gradients! Args: x: a Tensor of non-negative numbers o...
def top_kth_iterative(x, k): """Compute the k-th top element of x on the last axis iteratively. This assumes values in x are non-negative, rescale if needed. It is often faster than tf.nn.top_k for small k, especially if k < 30. Note: this does not support back-propagation, it stops gradients! Args: x: ...
Find max and argmax over the last dimension. Works well on TPU. Args: inputs: A tensor with shape [..., depth] Returns: values: a Tensor with shape [...] indices: a Tensor with shape [...]
def top_1_tpu(inputs): """find max and argmax over the last dimension. Works well on TPU Args: inputs: A tensor with shape [..., depth] Returns: values: a Tensor with shape [...] indices: a Tensor with shape [...] """ inputs_max = tf.reduce_max(inputs, axis=-1, keepdims=True) mask = tf.to_i...
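The TPU-friendly trick is to avoid tf.argmax: build a mask at the max and reduce over mask * index_range. A hypothetical NumPy reference (top_1_ref is not part of the library):

import numpy as np

def top_1_ref(inputs):
  values = inputs.max(axis=-1, keepdims=True)
  mask = (inputs >= values).astype(np.int32)
  index_range = np.arange(inputs.shape[-1], dtype=np.int32)
  # If several entries tie for the max, the max-reduce over
  # mask * index_range picks the largest tying index.
  indices = (mask * index_range).max(axis=-1)
  return values.squeeze(-1), indices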
Use indices to index into the last axis of x. This can be useful for recovering the actual probabilities of a sample from a probability distribution. Args: x: Tensor, n-d. indices: Tensor, (n-1)-d, where the dimension sizes match the first (n-1) dimensions of x. The values of indices will be used ...
def index_last_dim_with_indices(x, indices): """Use indices to index into the last axis of x. This can be useful for recovering the actual probabilities of a sample from a probability distribution. Args: x: Tensor, n-d. indices: Tensor, (n-1)-d, where the dimension sizes match the first (n-1) di...
Is this an appropriate context to generate summaries. Returns: a boolean
def should_generate_summaries(): """Is this an appropriate context to generate summaries. Returns: a boolean """ name_scope = tf.contrib.framework.get_name_scope() if name_scope and "while/" in name_scope: # Summaries don't work well within tf.while_loop() return False if tf.get_variable_scope(...
Reshapes a to match the shape of b in all but the last dimension.
def reshape_like(a, b):
  """Reshapes a to match the shape of b in all but the last dimension."""
  ret = tf.reshape(a, tf.concat([tf.shape(b)[:-1], tf.shape(a)[-1:]], 0))
  if not tf.executing_eagerly():
    ret.set_shape(b.get_shape().as_list()[:-1] +
                  a.get_shape().as_list()[-1:])
  return ret
Summarize the video using image summaries starting with prefix.
def summarize_video(video, prefix, max_outputs=1): """Summarize the video using image summaries starting with prefix.""" video_shape = shape_list(video) if len(video_shape) != 5: raise ValueError("Assuming videos given as tensors in the format " "[batch, time, height, width, channels] but...
Cast x to y's dtype, if necessary.
def cast_like(x, y):
  """Cast x to y's dtype, if necessary."""
  x = tf.convert_to_tensor(x)
  y = tf.convert_to_tensor(y)
  if x.dtype.base_dtype == y.dtype.base_dtype:
    return x
  cast_x = tf.cast(x, y.dtype)
  if cast_x.device != x.device:
    x_name = "(eager Tensor)"
    try:
      x_name = x.name
    except AttributeError:
      pass
    tf.logging.warning("Cast for %s may induce copy from '%s' to '%s'",
                       x_name, x.device, cast_x.device)
  return cast_x
Pad x to be even-sized on axis 1 and 2, but only if necessary.
def make_even_size(x): """Pad x to be even-sized on axis 1 and 2, but only if necessary.""" x_shape = x.get_shape().as_list() assert len(x_shape) > 2, "Only 3+-dimensional tensors supported." shape = [dim if dim is not None else -1 for dim in x_shape] new_shape = x_shape # To make sure constant shapes remain...
Loss inspired by the sliced WGAN paper: https://arxiv.org/abs/1804.01947. Puts input1 and input2 through the provided discriminator to get logits. Then, computes num_vecs random projections of the logits, sorts them on the batch dimension and returns the L2 loss between the sorted vectors. See the above-mentio...
def sliced_gan_loss(input1, input2, discriminator, num_vecs, do_random_vecs=True, do_tanh=True, return_logits=False): """Loss inspired by the sliced WGAN paper: https://arxiv.org/abs/1804.01947. ...
Discriminator architecture based on InfoGAN.
def deep_discriminator(x, batch_norm, is_training, filters=64, filter_size=4, stride=2, output_size=1024): """Discriminator architecture based on InfoGAN.""" with tf.variable_sco...
Instance normalization layer.
def instance_norm(x): """Instance normalization layer.""" with tf.variable_scope("instance_norm"): epsilon = 1e-5 mean, var = tf.nn.moments(x, [1, 2], keep_dims=True) scale = tf.get_variable( "scale", [x.get_shape()[-1]], initializer=tf.truncated_normal_initializer(mean=1.0, stddev=0.02)...
Generalized convolution layer.
def general_conv(x, num_filters=64, filter_size=7, stride=1, stddev=0.02, padding="VALID", name="conv", do_norm="instance", do_relu=True, relufactor=0): """Generaliz...
Patch discriminator.
def patch_discriminator(x, filters=64, filter_size=5, n=4,
                        name="patch_discrim"):
  """Patch discriminator."""
  with tf.variable_scope(name):
    x_shape = shape_list(x)
    spatial_dims = [x_shape[1] // 4, x_shape[2] // 4]
    x = tf.random_crop(x, [x_shape[0]] + spatial_dims + [x_shape[3]])
    ...
Mean and attention to reduce spatial dimensions.
def mean_with_attention(x, name, num_heads=4): """Mean and attention to reduce spatial dimensions.""" with tf.variable_scope(name): shape = shape_list(x) m = tf.reduce_mean(x, [1, 2]) a = layers().Dense(num_heads, name="mean_attn")(x) s = tf.reshape(a, [shape[0], -1, num_heads]) s = tf.nn.softma...
A simple single-layer convolutional discriminator.
def single_discriminator(x, filters=128, kernel_size=8, strides=4, pure_mean=False): """A simple single-layer convolutional discriminator.""" with tf.variable_scope("discriminator"): net = layers().Conv2D( filters, kernel_size, strides=strides, padding="SAME", name="conv1")(x) ...
A convolutional discriminator with 2 layers and concatenated output.
def double_discriminator(x, filters1=128, filters2=None, kernel_size=8, strides=4, pure_mean=False): """A convolutional discriminator with 2 layers and concatenated output.""" if filters2 is None: filters2 = 4 * filters1 with tf.variable_scope("discriminator"): batch_size = shape_...
Upscaling the image by a factor of f.
def upscale(inputs, f, method=tf.image.ResizeMethod.NEAREST_NEIGHBOR):
  """Upscaling the image by a factor of f."""
  height, width = shape_list(inputs)[1:3]  # pylint: disable=unbalanced-tuple-unpacking
  return tf.image.resize_images(inputs, (height * f, width * f), method)
Upsamples the given inputs. Args: net: A Tensor of size [batch_size, height, width, filters]. num_outputs: The number of output filters. stride: A list of 2 scalars or a 1x2 Tensor indicating the scale, relative to the inputs, of the output dimensions. For example, if kernel size is [2, 3], t...
def cyclegan_upsample(net, num_outputs, stride, method="conv2d_transpose"): """Upsamples the given inputs. Args: net: A Tensor of size [batch_size, height, width, filters]. num_outputs: The number of output filters. stride: A list of 2 scalars or a 1x2 Tensor indicating the scale, relative to the...
Weight-level magnitude pruning.
def weight_targeting(w, k): """Weight-level magnitude pruning.""" k = tf.to_int32(k) w_shape = shape_list(w) size = tf.to_int32(tf.reduce_prod(w_shape[:-1])) w = tf.reshape(w, [size, w_shape[-1]]) transpose_w = tf.transpose(w) thres = tf.contrib.framework.sort(tf.abs(transpose_w), axis=1)[:, k] mask = ...
Unit-level magnitude pruning.
def unit_targeting(w, k): """Unit-level magnitude pruning.""" k = tf.to_int32(k) w_shape = shape_list(w) size = tf.to_int32(tf.reduce_prod(w_shape[:-1])) w = tf.reshape(w, [size, w_shape[-1]]) norm = tf.norm(w, axis=0) thres = tf.contrib.framework.sort(norm, axis=0)[k] mask = to_float(thres >= norm)[No...
Apply targeted dropout to the weights of a convolution.
def td_conv(inputs, filters, kernel_size, targeting_count, targeting_fn, keep_prob, is_training, do_prune=True, strides=(1, 1), padding="valid", data_format="channels_last", dilation_rate=...