Please provide a description of the function:def convert_predictions_to_image_summaries(hook_args):
decode_hparams = hook_args.decode_hparams
if not decode_hparams.display_decoded_images:
return []
predictions = hook_args.predictions[0]
# Display ten random inputs and outputs so that tensorboard does no... | [
"Optionally converts images from hooks_args to image summaries.\n\n Args:\n hook_args: DecodeHookArgs namedtuple\n Returns:\n summaries: list of tf.Summary values if hook_args.decode_hpara\n "
] |
Please provide a description of the function:def resize_by_area(img, size):
return tf.to_int64(
tf.image.resize_images(img, [size, size], tf.image.ResizeMethod.AREA)) | [
"image resize function used by quite a few image problems."
] |
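AREA resizing averages the source pixels that map onto each output pixel. A minimal pure-Python sketch of that behavior for an integer shrink factor (the function name `area_downsample` and the list-of-lists image format are illustrative, not part of the library):

```python
def area_downsample(img, factor):
    """Downsample a 2-D grayscale image by averaging factor x factor blocks.

    img is a list of equal-length rows of numbers; this mirrors what
    AREA resizing does when the size shrinks by an integer factor.
    """
    h, w = len(img), len(img[0])
    assert h % factor == 0 and w % factor == 0
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [img[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```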
Please provide a description of the function:def make_multiscale(image, resolutions,
resize_method=tf.image.ResizeMethod.BICUBIC,
num_channels=3):
scaled_images = []
for height in resolutions:
scaled_image = tf.image.resize_images(
image,
size=[height, ... | [
"Returns list of scaled images, one for each resolution.\n\n Args:\n image: Tensor of shape [height, height, num_channels].\n resolutions: List of heights that image's height is resized to.\n resize_method: tf.image.ResizeMethod.\n num_channels: Number of channels in image.\n\n Returns:\n List of T... |
Please provide a description of the function:def make_multiscale_dilated(image, resolutions, num_channels=3):
image_height = common_layers.shape_list(image)[0]
scaled_images = []
for height in resolutions:
dilation_rate = image_height // height # assuming height = width
scaled_image = image[::dilation... | [
"Returns list of scaled images, one for each resolution.\n\n Resizes by skipping every nth pixel.\n\n Args:\n image: Tensor of shape [height, height, num_channels].\n resolutions: List of heights that image's height is resized to. The function\n assumes VALID padding, so the original image's height mus... |
Please provide a description of the function:def encode_images_as_png(images):
if tf.executing_eagerly():
for image in images:
yield tf.image.encode_png(image).numpy()
else:
(height, width, channels) = images[0].shape
with tf.Graph().as_default():
image_t = tf.placeholder(dtype=tf.uint8, ... | [
"Yield images encoded as pngs."
] |
Please provide a description of the function:def image_generator(images, labels):
if not images:
raise ValueError("Must provide some images for the generator.")
width, height, _ = images[0].shape
for (enc_image, label) in zip(encode_images_as_png(images), labels):
yield {
"image/encoded": [enc_... | [
"Generator for images that takes image and labels lists and creates pngs.\n\n Args:\n images: list of images given as [width x height x channels] numpy arrays.\n labels: list of ints, same length as images.\n\n Yields:\n A dictionary representing the images with the following fields:\n * image/encoded... |
Please provide a description of the function:def image_augmentation(images, do_colors=False, crop_size=None):
if crop_size is None:
crop_size = [299, 299]
images = tf.random_crop(images, crop_size + [3])
images = tf.image.random_flip_left_right(images)
if do_colors: # More augmentation, but might be slo... | [
"Image augmentation: cropping, flipping, and color transforms."
] |
Please provide a description of the function:def cifar_image_augmentation(images):
images = tf.image.resize_image_with_crop_or_pad(images, 40, 40)
images = tf.random_crop(images, [32, 32, 3])
images = tf.image.random_flip_left_right(images)
return images | [
"Image augmentation suitable for CIFAR-10/100.\n\n As described in https://arxiv.org/pdf/1608.06993v3.pdf (page 5).\n\n Args:\n images: a Tensor.\n Returns:\n Tensor of the same shape as images.\n "
] |
Please provide a description of the function:def random_shift(image, wsr=0.1, hsr=0.1):
height, width, _ = common_layers.shape_list(image)
width_range, height_range = wsr*width, hsr*height
height_translations = tf.random_uniform((1,), -height_range, height_range)
width_translations = tf.random_uniform((1,), ... | [
"Apply random horizontal and vertical shift to images.\n\n This is the default data-augmentation strategy used on CIFAR in Glow.\n\n Args:\n image: a 3-D Tensor\n wsr: Width shift range, as a float fraction of the width.\n hsr: Height shift range, as a float fraction of the width.\n Returns:\n images... |
Please provide a description of the function:def get_standardized_layers(hparams, dp=None):
def partial(fct, *args, **kwargs):
return functools.wraps(fct)(functools.partial(fct, *args, **kwargs))
def register_layer(
fct_in,
default_args=None,
default_kwargs=None,
use_dp=True,
... | [
"Get the common attention and feed-forward layers.\n\n The returned layer functions will have the following signature:\n\n y, extra_loss = fct(x)\n\n extra_loss is set to 0.0 if the layer doesn't have extra loss.\n If dp is provided, the layers will be distributed within the devices.\n If moe wants to be use... |
Please provide a description of the function:def add_standard_attention_hparams(hparams):
# All hyperparameters ending in "dropout" are automatically set to 0.0
# when not in training mode.
# hparams used and which should have been defined outside (in
# common_hparams):
# Global flags
# hparams.mode
#... | [
"Adds the hparams used by get_standardized_layers."
] |
Please provide a description of the function:def encoder_decoder_attention_loss(expected_attention_logits,
actual_attentions,
loss_type="kl_divergence",
loss_multiplier=1.0):
def combine_attentions(attention_l... | [
"Computes encdec attention loss between expected and actual attentions.\n\n Args:\n expected_attention_logits: Tensor storing the expected encoder-decoder\n attention logits with shape [batch_size, target_length, input_length].\n actual_attentions: Dictionary with actual attention logits for different\n... |
Please provide a description of the function:def get_timing_signal_1d(length,
channels,
min_timescale=1.0,
max_timescale=1.0e4,
start_index=0):
position = tf.to_float(tf.range(length) + start_index)
num_timescales... | [
"Gets a bunch of sinusoids of different frequencies.\n\n Each channel of the input Tensor is incremented by a sinusoid of a different\n frequency and phase.\n\n This allows attention to learn to use absolute and relative positions.\n Timing signals should be added to some precursors of both the query and the\n ... |
Please provide a description of the function:def add_timing_signal_1d(x,
min_timescale=1.0,
max_timescale=1.0e4,
start_index=0):
length = common_layers.shape_list(x)[1]
channels = common_layers.shape_list(x)[2]
signal = get_timing_signa... | [
"Adds a bunch of sinusoids of different frequencies to a Tensor.\n\n Each channel of the input Tensor is incremented by a sinusoid of a different\n frequency and phase.\n\n This allows attention to learn to use absolute and relative positions.\n Timing signals should be added to some precursors of both the quer... |
Please provide a description of the function:def get_layer_timing_signal_learned_1d(channels, layer, num_layers):
shape = [num_layers, 1, 1, channels]
layer_embedding = (
tf.get_variable(
"layer_embedding",
shape,
initializer=tf.random_normal_initializer(0, channels**-0.5)) *
... | [
"get n-dimensional embedding as the layer (vertical) timing signal.\n\n Adds embeddings to represent the position of the layer in the tower.\n\n Args:\n channels: dimension of the timing signal\n layer: layer num\n num_layers: total number of layers\n\n Returns:\n a Tensor of timing signals [1, 1, ch... |
Please provide a description of the function:def add_layer_timing_signal_learned_1d(x, layer, num_layers):
channels = common_layers.shape_list(x)[-1]
signal = get_layer_timing_signal_learned_1d(channels, layer, num_layers)
x += signal
return x | [
"Add n-dimensional embedding as the layer (vertical) timing signal.\n\n Adds embeddings to represent the position of the layer in the tower.\n\n Args:\n x: a tensor with shape [batch, length, depth]\n layer: layer num\n num_layers: total number of layers\n\n Returns:\n a Tensor the same shape as x.\n... |
Please provide a description of the function:def get_layer_timing_signal_sinusoid_1d(channels, layer, num_layers):
signal = get_timing_signal_1d(num_layers, channels)
layer_signal = tf.expand_dims(signal[:, layer, :], axis=1)
return layer_signal | [
"Add sinusoids of different frequencies as layer (vertical) timing signal.\n\n Args:\n channels: dimension of the timing signal\n layer: layer num\n num_layers: total number of layers\n\n Returns:\n a Tensor of timing signals [1, 1, channels].\n "
] |
Please provide a description of the function:def add_layer_timing_signal_sinusoid_1d(x, layer, num_layers):
channels = common_layers.shape_list(x)[-1]
signal = get_layer_timing_signal_sinusoid_1d(channels, layer, num_layers)
return x + signal | [
"Add sinusoids of different frequencies as layer (vertical) timing signal.\n\n Args:\n x: a Tensor with shape [batch, length, channels]\n layer: layer num\n num_layers: total number of layers\n\n Returns:\n a Tensor the same shape as x.\n "
] |
Please provide a description of the function:def add_timing_signal_1d_given_position(x,
position,
min_timescale=1.0,
max_timescale=1.0e4):
channels = common_layers.shape_list(x)[2]
num_timescal... | [
"Adds sinusoids of diff frequencies to a Tensor, with timing position given.\n\n Args:\n x: a Tensor with shape [batch, length, channels]\n position: a Tensor with shape [batch, length]\n min_timescale: a float\n max_timescale: a float\n\n Returns:\n a Tensor the same shape as x.\n "
] |
Please provide a description of the function:def add_timing_signal_nd(x, min_timescale=1.0, max_timescale=1.0e4):
num_dims = len(x.get_shape().as_list()) - 2
channels = common_layers.shape_list(x)[-1]
num_timescales = channels // (num_dims * 2)
log_timescale_increment = (
math.log(float(max_timescale) ... | [
"Adds a bunch of sinusoids of different frequencies to a Tensor.\n\n Each channel of the input Tensor is incremented by a sinusoid of a different\n frequency and phase in one of the positional dimensions.\n\n This allows attention to learn to use absolute and relative positions.\n Timing signals should be added... |
Please provide a description of the function:def add_positional_embedding(x, max_length, name=None, positions=None):
with tf.name_scope("add_positional_embedding"):
_, length, depth = common_layers.shape_list(x)
var = tf.cast(tf.get_variable(name, [max_length, depth]), x.dtype)
if positions is None:
... | [
"Adds positional embedding.\n\n Args:\n x: Tensor with shape [batch, length, depth].\n max_length: int representing static maximum size of any dimension.\n name: str representing name of the embedding tf.Variable.\n positions: Tensor with shape [batch, length].\n\n Returns:\n Tensor of same shape a... |
Please provide a description of the function:def add_positional_embedding_nd(x, max_length, name=None):
with tf.name_scope("add_positional_embedding_nd"):
x_shape = common_layers.shape_list(x)
num_dims = len(x_shape) - 2
depth = x_shape[-1]
base_shape = [1] * (num_dims + 1) + [depth]
base_start... | [
"Adds n-dimensional positional embedding.\n\n The embeddings add to all positional dimensions of the tensor.\n\n Args:\n x: Tensor with shape [batch, p1 ... pn, depth]. It has n positional\n dimensions, i.e., 1 for text, 2 for images, 3 for video, etc.\n max_length: int representing static maximum size... |
Please provide a description of the function:def make_edge_vectors(adjacency_matrix, num_edge_types, depth, name=None):
with tf.variable_scope(name, default_name="edge_vectors"):
att_adj_vectors_shape = [num_edge_types, depth]
adjacency_matrix_shape = common_layers.shape_list(adjacency_matrix)
adj_vect... | [
"Gets edge vectors for the edge types in the adjacency matrix.\n\n Args:\n adjacency_matrix: A [batch, num_nodes, num_nodes] tensor of ints.\n num_edge_types: Number of different edge types\n depth: Number of channels\n name: a string\n Returns:\n A [batch, num_nodes, num_nodes, depth] vector of te... |
Please provide a description of the function:def padding_to_length(padding):
non_padding = 1.0 - padding
return tf.to_int32(tf.reduce_sum(non_padding, axis=-1)) | [
"Calculate the length of mask based on padding.\n\n Args:\n padding: a Tensor with shape [..., length].\n Returns:\n a Tensor with shape [...].\n "
] |
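The length computation reduces to summing the non-padding entries. A one-row pure-Python sketch (hypothetical helper name; 0.0 marks a real token, 1.0 marks padding):

```python
def row_length(padding_row):
    """Count of real (non-padding) positions in one row of a padding mask.

    Mirrors reduce_sum over (1 - padding) along the last axis above.
    """
    return int(sum(1.0 - p for p in padding_row))
```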
Please provide a description of the function:def attention_bias_local(length, max_backward, max_forward):
band = common_layers.ones_matrix_band_part(
length,
length,
max_backward,
max_forward,
out_shape=[1, 1, length, length])
return -1e9 * (1.0 - band) | [
"Create an bias tensor to be added to attention logits.\n\n A position may attend to positions at most max_distance from it,\n forward and backwards.\n\n This does not actually save any computation.\n\n Args:\n length: int\n max_backward: int, maximum distance backward to attend. Negative values\n in... |
Please provide a description of the function:def attention_bias_same_segment(query_segment_id, memory_segment_id):
ret = (tf.to_float(
tf.not_equal(
tf.expand_dims(query_segment_id, 2),
tf.expand_dims(memory_segment_id, 1))) *
large_compatible_negative(memory_segment_id.dtype))
... | [
"Create an bias tensor to be added to attention logits.\n\n Positions with the same segment_ids can see each other.\n\n Args:\n query_segment_id: a float `Tensor` with shape [batch, query_length].\n memory_segment_id: a float `Tensor` with shape [batch, memory_length].\n\n Returns:\n a `Tensor` with sha... |
Please provide a description of the function:def attention_bias_ignore_padding(memory_padding):
ret = memory_padding * large_compatible_negative(memory_padding.dtype)
return tf.expand_dims(tf.expand_dims(ret, axis=1), axis=1) | [
"Create an bias tensor to be added to attention logits.\n\n Args:\n memory_padding: a float `Tensor` with shape [batch, memory_length].\n\n Returns:\n a `Tensor` with shape [batch, 1, 1, memory_length].\n "
] |
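Since the padding mask is 1.0 exactly at padded positions, the bias is a single elementwise multiply. A one-row pure-Python sketch (hypothetical helper name):

```python
def ignore_padding_bias(memory_padding):
    """Turn one row of a padding mask (1.0 = pad, 0.0 = real) into an
    additive logits bias: -1e9 at padded positions, 0 elsewhere."""
    return [p * -1e9 for p in memory_padding]
```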
Please provide a description of the function:def attention_bias_to_padding(attention_bias, cast_fn=tf.to_float):
# `attention_bias` is a large negative number in padding positions and 0.0
# elsewhere.
return tf.squeeze(cast_fn(tf.less(attention_bias, -1)), axis=[1, 2]) | [
"Inverse of attention_bias_ignore_padding().\n\n Args:\n attention_bias: a `Tensor` with shape [batch, 1, 1, memory_length], as\n returned by attention_bias_ignore_padding().\n cast_fn: function used to cast to output type.\n\n Returns:\n a Tensor with shape [batch, memory_length] with 1.0 in paddin... |
Please provide a description of the function:def attention_bias_prepend_inputs_full_attention(padding):
# Everything past the first padding position is part of the target.
# This Tensor has zeros for the source portion and separator,
# and ones for the target portion.
in_target = tf.cumsum(padding, axis=1, e... | [
"Create a bias tensor for prepend_mode=\"prepend_inputs_full_attention\".\n\n See prepend_inputs in common_hparams.py.\n\n Produces a bias tensor to be used in self-attention.\n\n This bias tensor allows for full connectivity in the \"inputs\" part of\n the sequence and masked connectivity in the targets part.\... |
Please provide a description of the function:def attention_bias_proximal(length):
r = tf.to_float(tf.range(length))
diff = tf.expand_dims(r, 0) - tf.expand_dims(r, 1)
return tf.expand_dims(tf.expand_dims(-tf.log1p(tf.abs(diff)), 0), 0) | [
"Bias for self-attention to encourage attention to close positions.\n\n Args:\n length: an integer scalar.\n\n Returns:\n a Tensor with shape [1, 1, length, length]\n "
] |
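The proximity bias is simply -log(1 + |i - j|) for query position i and key position j, so nearby positions receive a less negative bias. A pure-Python sketch of the [length, length] slice:

```python
import math

def proximal_bias(length):
    """[length, length] bias -log1p(|i - j|) encouraging attention to
    close positions, as in attention_bias_proximal above."""
    return [[-math.log1p(abs(i - j)) for j in range(length)]
            for i in range(length)]
```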
Please provide a description of the function:def attention_bias_batch(batch_coordinates_q,
batch_coordinates_k=None,
condition_fn=None):
if batch_coordinates_k is None:
batch_coordinates_k = batch_coordinates_q
# Convert to float first because of b/25387198.... | [
"Generate a mask to prevent the batch to attend to each others.\n\n Args:\n batch_coordinates_q: Int-like Tensor of shape [length_q, 1] containing the\n coordinates of the batches\n batch_coordinates_k: Int-like Tensor of shape [length_k, 1] containing the\n coordinates of the batches. If None, do ... |
Please provide a description of the function:def split_last_dimension(x, n):
x_shape = common_layers.shape_list(x)
m = x_shape[-1]
if isinstance(m, int) and isinstance(n, int):
assert m % n == 0
return tf.reshape(x, x_shape[:-1] + [n, m // n]) | [
"Reshape x so that the last dimension becomes two dimensions.\n\n The first of these two dimensions is n.\n\n Args:\n x: a Tensor with shape [..., m]\n n: an integer.\n\n Returns:\n a Tensor with shape [..., n, m/n]\n "
] |
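The reshape [..., m] -> [..., n, m/n] is how multi-head attention splits the depth dimension into heads. A pure-Python list analogue for a single row (hypothetical helper name):

```python
def split_last_dim(row, n):
    """Split a flat list of length m into n consecutive chunks of m // n,
    the list analogue of the reshape in split_last_dimension above."""
    m = len(row)
    assert m % n == 0
    size = m // n
    return [row[i * size:(i + 1) * size] for i in range(n)]
```

Concatenating the chunks back together undoes the split, mirroring `combine_last_two_dimensions` below.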
Please provide a description of the function:def combine_last_two_dimensions(x):
x_shape = common_layers.shape_list(x)
a, b = x_shape[-2:]
return tf.reshape(x, x_shape[:-2] + [a * b]) | [
"Reshape x so that the last two dimension become one.\n\n Args:\n x: a Tensor with shape [..., a, b]\n\n Returns:\n a Tensor with shape [..., ab]\n "
] |
Please provide a description of the function:def combine_first_two_dimensions(x):
ret = tf.reshape(x, tf.concat([[-1], common_layers.shape_list(x)[2:]], 0))
old_shape = x.get_shape().dims
a, b = old_shape[:2]
new_shape = [a * b if a and b else None] + old_shape[2:]
ret.set_shape(new_shape)
return ret | [
"Reshape x so that the first two dimension become one.\n\n Args:\n x: a Tensor with shape [a, b, ...]\n\n Returns:\n a Tensor with shape [ab, ...]\n "
] |
Please provide a description of the function:def attention_image_summary(attn, image_shapes=None):
attn = tf.cast(attn, tf.float32)
num_heads = common_layers.shape_list(attn)[1]
# [batch, query_length, memory_length, num_heads]
image = tf.transpose(attn, [0, 2, 3, 1])
image = tf.pow(image, 0.2) # for high... | [
"Compute color image summary.\n\n Args:\n attn: a Tensor with shape [batch, num_heads, query_length, memory_length]\n image_shapes: optional tuple of integer scalars.\n If the query positions and memory positions represent the\n pixels of flattened images, then pass in their dimensions:\n (q... |
Please provide a description of the function:def grouped_attention_multihead(query_antecedent,
memory_antecedent,
total_key_depth,
total_value_depth,
output_depth,
... | [
"Multi-head dot-product attention with sparsity.\n\n For each attention head, the queries are partitioned into groups.\n For each group, only a subset of the key-value pairs are considered.\n\n The choices of groups are selected based on trained predictors of\n the total attention given the group inclusion.\n\n... |
Please provide a description of the function:def harden_attention_weights(weights, hard_attention_k):
# Subtract the top-kth weight and zero-out all lower ones.
# Note that currently in case of numerical ties it will retain more
# than k elements. In the future, we may want to avoid this.
weights -= common_l... | [
"Make attention weights non-0 only on the top-hard_attention_k ones."
] |
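A simplified pure-Python sketch of top-k hardening for one row of weights. Note an assumption: the code above subtracts the k-th largest weight and clips at zero (shifting the survivors), whereas this sketch keeps the surviving weights unchanged; both match the stated intent of zeroing everything below the top k, and both keep extra elements on numerical ties:

```python
def harden_weights(weights, k):
    """Zero out every weight strictly below the k-th largest one."""
    kth = sorted(weights, reverse=True)[k - 1]
    return [w if w >= kth else 0.0 for w in weights]
```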
Please provide a description of the function:def dot_product_attention(q,
k,
v,
bias,
dropout_rate=0.0,
image_shapes=None,
name=None,
make... | [
"Dot-product attention.\n\n Args:\n q: Tensor with shape [..., length_q, depth_k].\n k: Tensor with shape [..., length_kv, depth_k]. Leading dimensions must\n match with q.\n v: Tensor with shape [..., length_kv, depth_v] Leading dimensions must\n match with q.\n bias: bias Tensor (see attent... |
Please provide a description of the function:def _generate_relative_positions_matrix(length_q, length_k,
max_relative_position,
cache=False):
if not cache:
if length_q == length_k:
range_vec_q = range_vec_k = tf.range(length_... | [
"Generates matrix of relative positions between inputs."
] |
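The non-cached path clips each relative distance k - q to [-max_relative_position, max_relative_position] and then shifts it to be non-negative so it can index an embedding table. A pure-Python sketch (hypothetical helper name):

```python
def rel_positions_matrix(length, max_relative_position):
    """[length, length] matrix of clipped, shifted relative distances.

    Entry [q][k] is clip(k - q) + max_relative_position, giving ids in
    [0, 2 * max_relative_position] for embedding lookup.
    """
    m = []
    for q in range(length):
        row = []
        for k in range(length):
            d = max(-max_relative_position,
                    min(k - q, max_relative_position))
            row.append(d + max_relative_position)
        m.append(row)
    return m
```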
Please provide a description of the function:def _generate_relative_positions_embeddings(length_q, length_k, depth,
max_relative_position, name,
cache=False):
with tf.variable_scope(name):
relative_positions_matrix = _gener... | [
"Generates tensor of size [1 if cache else length_q, length_k, depth]."
] |
Please provide a description of the function:def _relative_attention_inner(x, y, z, transpose):
batch_size = tf.shape(x)[0]
heads = x.get_shape().as_list()[1]
length = tf.shape(x)[2]
# xy_matmul is [batch_size, heads, length or 1, length or depth]
xy_matmul = tf.matmul(x, y, transpose_b=transpose)
# x_t... | [
"Relative position-aware dot-product attention inner calculation.\n\n This batches matrix multiply calculations to avoid unnecessary broadcasting.\n\n Args:\n x: Tensor with shape [batch_size, heads, length or 1, length or depth].\n y: Tensor with shape [batch_size, heads, length or 1, depth].\n z: Tenso... |
Please provide a description of the function:def dot_product_attention_relative(q,
k,
v,
bias,
max_relative_position,
dropout_rate=0.0,
... | [
"Calculate relative position-aware dot-product self-attention.\n\n The attention calculation is augmented with learned representations for the\n relative position between each element in q and each element in k and v.\n\n Args:\n q: a Tensor with shape [batch, heads, length, depth].\n k: a Tensor with shap... |
Please provide a description of the function:def _relative_position_to_absolute_position_masked(x):
batch, heads, length, _ = common_layers.shape_list(x)
x = tf.pad(x, [[0, 0], [0, 0], [0, 0], [1, 0]])
x = tf.reshape(x, [batch, heads, 1 + length, length])
x = tf.slice(x, [0, 0, 1, 0], [-1, -1, -1, -1])
ret... | [
"Helper to dot_product_self_attention_relative_v2.\n\n Rearrange an attention logits or weights Tensor.\n\n The dimensions of the input represent:\n [batch, heads, query_position, memory_position - query_position + length - 1]\n\n The dimensions of the output represent:\n [batch, heads, query_position, memory_... |
Please provide a description of the function:def dot_product_self_attention_relative_v2(q,
k,
v,
bias,
max_relative_position=None,
... | [
"Calculate relative position-aware dot-product self-attention.\n\n Only works for masked self-attention (no looking forward).\n\n The attention calculation is augmented with learned representations for the\n relative position between each element in q and each element in k and v.\n\n Args:\n q: a Tensor with... |
Please provide a description of the function:def _absolute_position_to_relative_position_unmasked(x):
batch, heads, length, _ = common_layers.shape_list(x)
# pad along column
x = tf.pad(x, [[0, 0], [0, 0], [0, 0], [0, length-1]])
x_flat = tf.reshape(x, [batch, heads, length**2 + length*(length -1)])
# add... | [
"Helper function for dot_product_unmasked_self_attention_relative_v2.\n\n Rearrange an attention logits or weights Tensor.\n\n The dimensions of the input represent:\n [batch, heads, query_position, memory_position]\n\n The dimensions of the output represent:\n [batch, heads, query_position, memory_position - ... |
Please provide a description of the function:def get_relative_embeddings_left_right(max_relative_position, length, depth,
num_heads,
heads_share_relative_embedding,
name):
initializer_stddev = depth... | [
"Instantiate or retrieve relative embeddings, sliced according to length.\n\n Use for unmasked case where the relative attention looks both left and right.\n\n Args:\n max_relative_position: an Integer for the number of entries in the relative\n embedding, which corresponds to the max relative distance th... |
Please provide a description of the function:def dot_product_unmasked_self_attention_relative_v2(
q, k, v, bias, max_relative_position=None, dropout_rate=0.0,
image_shapes=None, name=None, make_image_summary=True,
dropout_broadcast_dims=None, heads_share_relative_embedding=False,
add_relative_to_values=... | [
"Calculate relative position-aware dot-product self-attention.\n\n The attention calculation is augmented with learned representations for the\n relative position between each element in q and each element in k and v.\n\n Args:\n q: a Tensor with shape [batch, heads, length, depth].\n k: a Tensor with shap... |
Please provide a description of the function:def _matmul_with_relative_keys_2d(x, y, heads_share_relative_embedding):
if heads_share_relative_embedding:
ret = tf.einsum("bhxyd,md->bhxym", x, y)
else:
ret = tf.einsum("bhxyd,hmd->bhxym", x, y)
return ret | [
"Helper function for dot_product_unmasked_self_attention_relative_2d."
] |
Please provide a description of the function:def dot_product_unmasked_self_attention_relative_2d(
q, k, v, bias, max_relative_position=None, dropout_rate=0.0,
image_shapes=None, name=None, make_image_summary=True,
dropout_broadcast_dims=None, heads_share_relative_embedding=False,
add_relative_to_values=... | [
"Calculate relative position unmasked dot-product self-attention 2d.\n\n\n The attention calculation is augmented with learned representations for the\n relative position between each element in q and each element in k and v in\n height and width dimensions. for query index (i,j) and key index (l, m),\n the log... |
Please provide a description of the function:def _split_along_width(x_left_right_blocks):
(_, x_num_h_blocks, x_num_outer_w_blocks, x_memory_flange_h,
x_memory_flange_w, depth) = common_layers.shape_list(x_left_right_blocks)
x_num_w_blocks = (x_num_outer_w_blocks-1)//2
# get it ready for splitting the left ... | [
"Helper function for local 2d attention.\n\n Takes a tensor of [batch, heads, num_h_blocks, num_w_blocks,\n height, width, depth] and returns two tensors which contain every alternate\n position along the width\n\n\n Args:\n x_left_right_blocks: A [batch, num_h_blocks, num_w_blocks,\n ... |
Please provide a description of the function:def _get_left_right_blocks(x):
(_, x_num_outer_h_blocks, x_num_outer_w_blocks, x_memory_flange_h,
x_memory_flange_w, depth) = common_layers.shape_list(x)
x_left_right_blocks = tf.slice(x,
[0, 1, 0, 0, 0, 0],
... | [
"Helper function. Assumes that memory_flange is half of query sizes.\n\n This function splits the tensor of width 'n' into two halves, where the\n first half gets the width indices 0, 2, 4.. and the second half gets the\n width indices 3, 5, ... We also fuse two blocks along the h dimension.\n\n Args:\n x: a... |
Please provide a description of the function:def _extract_blocks(x, block_h, block_w):
(_, height, width, depth) = common_layers.shape_list(x)
assert height % block_h == 0
assert width % block_w == 0
x = tf.reshape(x, [-1, height//block_h, block_h,
width//block_w, block_w, depth])
retu... | [
"Helper function for local 2d attention.\n\n Args:\n x: a [batch, height, width, depth] tensor\n block_h: An integer. block height\n block_w: An inteter. block width\n\n returns:\n a [batch, num_heads, height/block_h, width/block_w, depth] tensor\n "
] |
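Block extraction partitions the spatial grid into non-overlapping tiles. A pure-Python sketch on a single [height, width] grid (hypothetical helper; the library version does this for a batched, multi-channel tensor via reshape and transpose):

```python
def extract_blocks_2d(img, block_h, block_w):
    """Partition a [height, width] grid (list of rows) into
    non-overlapping block_h x block_w blocks, indexed
    [block_row][block_col]."""
    height, width = len(img), len(img[0])
    assert height % block_h == 0 and width % block_w == 0
    blocks = []
    for bi in range(0, height, block_h):
        row_of_blocks = []
        for bj in range(0, width, block_w):
            row_of_blocks.append([img[bi + r][bj:bj + block_w]
                                  for r in range(block_h)])
        blocks.append(row_of_blocks)
    return blocks
```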
Please provide a description of the function:def get_2d_local_memory(x, query_shape, memory_flange):
(_, height, width, depth_x) = common_layers.shape_list(x)
x_center_blocks = _extract_blocks(x, query_shape[0], query_shape[1])
# add extra padding to x so that we can extract the memory region
# around the ce... | [
"Stitches together the local 2d memory blocks.\n\n Args:\n x: a [batch, height, width, depth tensor]\n query_shape: 2-d integer list of query shape\n memory_flange: 2-d integer list of memory flanges\n\n Returns:\n x: A [batch, num_h_blocks, num_w_blocks,\n query_shape[0]+2*memory_flange[0],q... |
Please provide a description of the function:def get_2d_local_memory_v2(x, query_shape, memory_flange):
(_, height, width, depth_x) = common_layers.shape_list(x)
# add extra padding to x so that we can extract the memory region
# around the center
paddings = [[0, 0], [memory_flange[0], memory_flange[0]],
... | [
"Gathering memory blocks around query blocks. flange is half of query .\n\n Only works if memory flanges are half of query sizes.\n\n Args:\n x: a [batch, height, width, depth tensor]\n query_shape: 2-d integer list of query shape\n memory_flange: 2-d integer list of memory flanges\n\n Returns:\n x... |
Please provide a description of the function:def dot_product_unmasked_attention_local_2d_tpu(
q, k, v, bias, max_relative_position=None, query_shape=(8, 8),
dropout_rate=0.0, image_shapes=None, name=None, make_image_summary=False,
dropout_broadcast_dims=None):
if max_relative_position:
raise ValueE... | [
"Calculate unmasked dot-product local self-attention 2d on tpu.\n\n Args:\n q: a Tensor with shape [batch, heads, height, width, depth].\n k: a Tensor with shape [batch, heads, height, width, depth].\n v: a Tensor with shape [batch, heads, height, width, depth].\n bias: bias Tensor.\n max_relative_p... |
Please provide a description of the function:def dot_product_unmasked_attention_local_2d_tpu_simple(
x, bias, total_key_depth, total_value_depth, num_heads,
query_shape=(8, 8),
dropout_rate=0.0, image_shapes=None, make_image_summary=False,
dropout_broadcast_dims=None):
# This calculation only work... | [
"Calculate simple unmasked dot-product local self-attention 2d on tpu.\n\n The query, key, and value blocks are the same. We do not do a second linear\n transformation after computing the values\n\n Args:\n x: a Tensor with shape [batch, height, width, depth].\n bias: bias Tensor.\n total_key_depth: the... |
Please provide a description of the function:def masked_within_block_local_attention_1d(q, k, v, block_length=64, name=None):
with tf.variable_scope(
name, default_name="within_local_attention_1d", values=[q, k, v]):
batch, heads, length, depth_k = common_layers.shape_list(q)
depth_v = common_layers.... | [
"Attention to the source and a neighborhood to the left within a block.\n\n The sequence is divided into blocks of length block_length. Attention for a\n given query position can only see memory positions less than or equal to the\n query position in the corresponding block.\n\n Args:\n q: a Tensor with shap... |
Please provide a description of the function:def _relative_position_to_absolute_position_unmasked(x):
x_shape = common_layers.shape_list(x)
batch = x_shape[0]
heads = x_shape[1]
length = x_shape[2]
# Concat columns of pad to shift from relative to absolute indexing.
col_pad = tf.zeros((batch, heads, leng... | [
"Converts tensor from relative to aboslute indexing for local attention.\n\n Args:\n x: a Tensor of shape [batch (or batch*num_blocks), heads,\n length, 2 * length - 1]\n\n Returns:\n A Tensor of shape [batch (or batch*num_blocks), heads, length, length-1]\n "
] |
Please provide a description of the function:def masked_local_attention_1d(q,
k,
v,
block_length=128,
make_image_summary=False,
dropout_rate=0.,
... | [
"Attention to the source position and a neighborhood to the left of it.\n\n The sequence is divided into blocks of length block_length. Attention for a\n given query position can only see memory positions less than or equal to the\n query position, in the corresponding block and the previous block.\n\n Args:\n ... |
Please provide a description of the function:def _make_local_block(x, depth, batch, heads, num_blocks, block_length):
prev_block = tf.slice(x, [0, 0, 0, 0, 0],
[-1, -1, num_blocks - 1, -1, -1])
cur_block = tf.slice(x, [0, 0, 1, 0, 0], [-1, -1, -1, -1, -1])
local_block = tf.concat([prev_... | [
"Helper function to create a local version of the keys or values for 1d."
] |
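The helper concatenates each block with the one before it along the block axis, so a query block can attend to its own block plus the previous one. A pure-Python sketch over a flat list of blocks (how the first block's missing predecessor is handled is left out here; the library slices and pads for that):

```python
def make_local_blocks(blocked_seq):
    """For each block after the first, concatenate it with the previous
    block, giving each query block a local memory of two blocks."""
    return [blocked_seq[i - 1] + blocked_seq[i]
            for i in range(1, len(blocked_seq))]
```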
Please provide a description of the function:def masked_relative_local_attention_1d(q,
k,
v,
block_length=128,
make_image_summary=False,
... | [
"Masked local 1d attention with relative positions.\n\n The sequence is divided into blocks of length block_size.\n Attention for a given query position can only see memory positions\n less than or equal to the query position, in the corresponding block\n and the previous block.\n\n If mask_right is True, then... |
Please provide a description of the function:def local_attention_1d(q, k, v, block_length=128, filter_width=100, name=None):
with tf.variable_scope(
name, default_name="local_self_attention_1d", values=[q, k, v]):
# Check that q, k, v have the same shape except in their depth dimension.
q.get_shape()... | [
"Strided block local self-attention.\n\n The sequence is divided into blocks of length block_length. Attention for a\n given query position can see all memory positions in the corresponding block\n and filter_width many positions to the left and right of the block.\n\n Args:\n q: a Tensor with shape [batch, ... |
Please provide a description of the function:def reshape_by_blocks(x, x_shape, memory_block_size):
x = tf.reshape(x, [
x_shape[0], x_shape[1], x_shape[2] // memory_block_size,
memory_block_size, x_shape[3]
])
return x | [
"Reshapes input by splitting its length over blocks of memory_block_size.\n\n Args:\n x: a Tensor with shape [batch, heads, length, depth]\n x_shape: tf.TensorShape of x.\n memory_block_size: Integer which divides length.\n\n Returns:\n Tensor with shape\n [batch, heads, length // memory_block_size... |
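The reshape above is pure bookkeeping on the length axis, so it can be checked without TensorFlow. A minimal pure-Python sketch (the helper name `blocked_shape` is illustrative, not part of the library):

```python
def blocked_shape(x_shape, memory_block_size):
    # Mirrors reshape_by_blocks: [batch, heads, length, depth] becomes
    # [batch, heads, length // block, block, depth].
    batch, heads, length, depth = x_shape
    assert length % memory_block_size == 0, "memory_block_size must divide length"
    return [batch, heads, length // memory_block_size, memory_block_size, depth]

print(blocked_shape([2, 8, 256, 64], 128))  # [2, 8, 2, 128, 64]
```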
Please provide a description of the function:def dilated_self_attention_1d(q,
k,
v,
query_block_size=128,
memory_block_size=128,
gap_size=2,
... | [
"Dilated self-attention.\n\n Args:\n q: a Tensor with shape [batch, heads, length, depth]\n k: a Tensor with shape [batch, heads, length, depth]\n v: a Tensor with shape [batch, heads, length, depth]\n query_block_size: an integer indicating size of query block\n memory_block_size: an integer indica... |
Please provide a description of the function:def gather_dilated_memory_blocks(x,
num_memory_blocks,
gap_size,
query_block_size,
memory_block_size,
gather_i... | [
"Gathers blocks with gaps in between.\n\n Args:\n x: Tensor of shape [length, batch, heads, depth]\n num_memory_blocks: how many memory blocks to look in \"direction\". Each will\n be separated by gap_size.\n gap_size: an integer indicating the gap size\n query_block_size: an integer indicating si... |
Please provide a description of the function:def masked_dilated_self_attention_1d(q,
k,
v,
query_block_size=64,
memory_block_size=64,
g... | [
"Dilated self-attention. TODO(avaswani): Try it and write a paper on it.\n\n Args:\n q: a Tensor with shape [batch, heads, length, depth]\n k: a Tensor with shape [batch, heads, length, depth]\n v: a Tensor with shape [batch, heads, length, depth]\n query_block_size: an integer\n memory_block_size: ... |
Please provide a description of the function:def local_attention_2d(q,
k,
v,
query_shape=(8, 16),
memory_flange=(8, 16),
name=None):
with tf.variable_scope(
name, default_name="local_self_attent... | [
"Strided block local self-attention.\n\n The 2-D sequence is divided into 2-D blocks of shape query_shape. Attention\n for a given query position can only see memory positions less than or equal to\n the query position. The memory positions are the corresponding block with\n memory_flange many positions to add ... |
Please provide a description of the function:def pad_to_multiple_2d(x, block_shape):
old_shape = x.get_shape().dims
last = old_shape[-1]
if len(old_shape) == 4:
height_padding = -common_layers.shape_list(x)[1] % block_shape[0]
width_padding = -common_layers.shape_list(x)[2] % block_shape[1]
padding... | [
"Making sure x is a multiple of shape.\n\n Args:\n x: a [batch, heads, h, w, depth] or [batch, h, w, depth] tensor\n block_shape: a 2-d list of integer shapes\n\n Returns:\n padded_x: a [batch, heads, h, w, depth] or [batch, h, w, depth] tensor\n "
] |
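The padding amounts above come from Python's negative modulo: `-n % m` is the smallest non-negative `p` such that `n + p` is a multiple of `m`. A standalone sketch of that trick (function name is illustrative):

```python
def padding_to_multiple(size, block):
    # Smallest non-negative padding that makes `size` a multiple of `block`.
    return -size % block

print(padding_to_multiple(30, 8))  # 2, since 30 + 2 = 32
print(padding_to_multiple(32, 8))  # 0, already a multiple
```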
Please provide a description of the function:def reshape_range(tensor, i, j, shape):
t_shape = common_layers.shape_list(tensor)
target_shape = t_shape[:i] + shape + t_shape[j:]
return tf.reshape(tensor, target_shape) | [
"Reshapes a tensor between dimensions i and j."
] |
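The target shape computed by reshape_range can be reproduced with plain list slicing; a hypothetical helper showing the shape arithmetic (the name `reshape_range_shape` is not in the library):

```python
def reshape_range_shape(t_shape, i, j, shape):
    # Shape produced by reshape_range: dims [i, j) are replaced by `shape`.
    return t_shape[:i] + shape + t_shape[j:]

# Merge the two spatial dims of [batch, heads, h, w, depth] into one axis:
print(reshape_range_shape([2, 8, 4, 6, 64], 2, 4, [4 * 6]))  # [2, 8, 24, 64]
```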
Please provide a description of the function:def gather_blocks_2d(x, indices):
x_shape = common_layers.shape_list(x)
x = reshape_range(x, 2, 4, [tf.reduce_prod(x_shape[2:4])])
# [length, batch, heads, dim]
x_t = tf.transpose(x, [2, 0, 1, 3])
x_new = tf.gather(x_t, indices)
# returns [batch, heads, num_bl... | [
"Gathers flattened blocks from x."
] |
Please provide a description of the function:def scatter_blocks_2d(x, indices, shape):
x_shape = common_layers.shape_list(x)
# [length, batch, heads, dim]
x_t = tf.transpose(
tf.reshape(x, [x_shape[0], x_shape[1], -1, x_shape[-1]]), [2, 0, 1, 3])
x_t_shape = common_layers.shape_list(x_t)
indices = tf... | [
"Scatters blocks from x into shape with indices."
] |
Please provide a description of the function:def gather_indices_2d(x, block_shape, block_stride):
# making an identity matrix kernel
kernel = tf.eye(block_shape[0] * block_shape[1])
kernel = reshape_range(kernel, 0, 1, [block_shape[0], block_shape[1], 1])
  # making indices [1, h, w, 1] to apply convs
x_shape... | [
"Getting gather indices."
] |
Please provide a description of the function:def make_2d_block_raster_mask(query_shape, memory_flange):
# mask inside the query block
query_triangle = common_layers.ones_matrix_band_part(
np.prod(query_shape), np.prod(query_shape), -1, 0)
split_query_masks = tf.split(query_triangle, query_shape[0], axis=... | [
"Creates a mask for 2d block raster scan.\n\n The query mask can look to the left, top left, top, and top right, but\n not to the right. Inside the query, we have the standard raster scan\n masking.\n Args:\n query_shape: A tuple of ints (query_height, query_width)\n memory_flange: A tuple of ints\n ... |
Please provide a description of the function:def get_memory_region(x, query_block_shape, memory_flange, q_indices):
# Padding x to be multiple of query_shape and then
# extracting the memory blocks from the same regions as the query blocks
x_query_padded = pad_to_multiple_2d(x, query_block_shape)
x_center = ... | [
"Get the memory regions that surround a 2d query.\n\n The memory regions will be the left and top right.\n\n Args:\n x: A tensor with shape [batch, heads, height, width, depth]\n query_block_shape: a 2-d tuple of integers\n memory_flange: a 2-d tuple of integers\n q_indices: a tensor of indices for ... |
Please provide a description of the function:def get_shifted_center_blocks(x, indices):
center_x = gather_blocks_2d(x, indices)
# Shift right along the length dimension
def shift_right_2d_blocks(x):
shifted_targets = (
tf.pad(x, [[0, 0], [0, 0], [0, 0], [1, 0], [0, 0]])[:, :, :, :-1, :])
... | [
"Get right shifted blocks for masked local attention 2d.\n\n Args:\n x: A tensor with shape [batch, heads, height, width, depth]\n indices: The indices to gather blocks\n\n Returns:\n x_shifted: a tensor of extracted blocks, each block right shifted along\n length.\n ",
"Shift the second to last ... |
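The inner `tf.pad` plus slice in shift_right_2d_blocks is a right shift along the length dimension; in one dimension it reduces to the following sketch (names are illustrative):

```python
def shift_right(seq):
    # Pad one zero on the left and drop the last element, as the tf.pad
    # + slice in shift_right_2d_blocks does along the length dimension.
    return [0] + seq[:-1]

print(shift_right([1, 2, 3]))  # [0, 1, 2]
```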
Please provide a description of the function:def right_shift_blockwise(x, query_shape, name=None):
with tf.variable_scope(
name, default_name="right_shift_blockwise", values=[x]):
x_list_shape = x.get_shape().as_list()
x_shape = common_layers.shape_list(x)
# Add a dummy dimension for heads.
x... | [
"Right shifts once in every block.\n\n Args:\n x: a tensor of shape [batch, height, width, depth]\n query_shape: A 2d tuple of ints\n name: a string\n\n Returns:\n output: a tensor of the same shape as x\n "
] |
Please provide a description of the function:def masked_local_attention_2d(q,
k,
v,
query_shape=(8, 16),
memory_flange=(8, 16),
name=None):
with tf.variable_scope(
... | [
"Strided block local self-attention.\n\n Each position in a query block can attend to all the generated queries in\n the query block, which are generated in raster scan, and positions that are\n generated to the left and top. The shapes are specified by query shape and\n memory flange. Note that if you're using... |
Please provide a description of the function:def compute_attention_component(antecedent,
total_depth,
filter_width=1,
padding="VALID",
name="c",
vars_3d_num_hea... | [
"Computes attention component (query, key or value).\n\n Args:\n antecedent: a Tensor with shape [batch, length, channels]\n total_depth: an integer\n filter_width: An integer specifying how wide you want the attention\n component to be.\n padding: One of \"VALID\", \"SAME\" or \"LEFT\". Default ...
Please provide a description of the function:def compute_qkv(query_antecedent,
memory_antecedent,
total_key_depth,
total_value_depth,
q_filter_width=1,
kv_filter_width=1,
q_padding="VALID",
kv_padding="VALID"... | [
"Computes query, key and value.\n\n Args:\n query_antecedent: a Tensor with shape [batch, length_q, channels]\n memory_antecedent: a Tensor with shape [batch, length_m, channels]\n total_key_depth: an integer\n total_value_depth: an integer\n q_filter_width: An integer specifying how wide you want t... |
Please provide a description of the function:def multihead_attention(query_antecedent,
memory_antecedent,
bias,
total_key_depth,
total_value_depth,
output_depth,
num_heads,
... | [
"Multihead scaled-dot-product attention with input/output transformations.\n\n Args:\n query_antecedent: a Tensor with shape [batch, length_q, channels]\n memory_antecedent: a Tensor with shape [batch, length_m, channels] or None\n bias: bias Tensor (see attention_bias())\n total_key_depth: an integer\... |
Please provide a description of the function:def multihead_attention_2d(query_antecedent,
memory_antecedent,
total_key_depth,
total_value_depth,
output_depth,
num_heads,
... | [
"2d Multihead scaled-dot-product attention with input/output transformations.\n\n Args:\n query_antecedent: a Tensor with shape [batch, h, w, depth_k]\n memory_antecedent: a Tensor with shape [batch, h, w, depth_k]\n total_key_depth: an integer\n total_value_depth: an integer\n output_depth: an intege...
Please provide a description of the function:def ffn_self_attention_layer(x,
filter_depth,
output_depth,
num_parts,
dropout_rate,
share_kv=False,
... | [
"Self-attention feedforward layer.\n\n We use self-attention to do feedforward computations. We apply this function\n positionwise where for each position, we linearly transform the output to have\n depth filter_depth, and break up the result depth-wise into num_parts\n contiguous parts. The parts self-attend, ... |
Please provide a description of the function:def parameter_attention(x,
total_key_depth,
total_value_depth,
output_depth,
memory_rows,
num_heads,
dropout_rate,
... | [
"Attention over parameters.\n\n We use the same multi-headed attention as in the other layers, but the memory\n keys and values are model parameters. There are no linear transformation on\n the keys or values.\n\n We are also a bit more careful about memory usage, since the number of\n memory positions may be ... |
Please provide a description of the function:def coordinate_tensor(shape, axis):
if axis < 0:
    axis = tf.size(shape) + axis # Convert to positive for the one_hot index
r = tf.range(shape[axis])
r_shape = tf.one_hot(
axis, tf.size(shape), on_value=-1, off_value=1, dtype=tf.int32)
return tf.zeros(s... | [
"Return a tensor with given shape containing coordinate along given axis.\n\n Args:\n shape: a Tensor representing the shape of the output Tensor\n axis: an integer\n\n Returns:\n A tensor with shape shape and type tf.int32, where each element is its\n coordinate along the given axis.\n "
] |
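For a 2-D shape, the broadcast of `tf.range` against the one-hot-shaped reshape produces a grid where every element is its own coordinate; a pure-Python sketch of the same result (the name `coordinate_grid` is illustrative):

```python
def coordinate_grid(shape, axis):
    # Every element holds its coordinate along `axis`, matching what the
    # tf.range / one-hot broadcast in coordinate_tensor produces in 2-D.
    h, w = shape
    if axis < 0:
        axis += 2
    return [[r if axis == 0 else c for c in range(w)] for r in range(h)]

print(coordinate_grid((2, 3), 1))  # [[0, 1, 2], [0, 1, 2]]
print(coordinate_grid((2, 3), 0))  # [[0, 0, 0], [1, 1, 1]]
```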
Please provide a description of the function:def self_attention_expert(x,
batch_coordinate,
mask_right=True,
split_batch=False,
attention_num_head=1,
attention_kq_size=None,
... | [
"Implementing attention that runs inside each expert.\n\n Args:\n x: A tensor of shape[batch, depth]. Contains representations from\n different positions, which are lexicographically ordered.\n batch_coordinate: A tensor of shape [batch, 1] containing the batch\n coordinate of each element in x. Th... |
Please provide a description of the function:def local_expert_attention(x,
k,
loss_coef,
attention_num_experts,
train=True,
batch_coordinate=None,
**kwargs):
... | [
"Attention using a mixture of experts.\n\n Positions sent to the same expert can attend to each other.\n The mixture of experts is \"local\" in that it is replicated on each\n datashard.\n\n local_moe flattens all batches to avoid problems with padding (ex: all\n padding going to the same expert, s...
Please provide a description of the function:def expert_dot_product(q, k, v, info_q, info_k):
length_q = common_layers.shape_list(q)[0]
length_k = common_layers.shape_list(k)[0]
depth_v = v.get_shape().as_list()[-1]
# Create the mask
bias = attention_bias_coordinates(info_q.coordinates, info_k.coordinate... | [
"Perform dot product on a subset of the sequence.\n\n Can add a mask to the attention to prevent sequences to attend to each other\n and to prevent attention to the future.\n\n Args:\n q (tf.Tensor): Queries of shape [length_expert_q, depth_k]\n k (tf.Tensor): Keys of shape [length_expert_k, depth_k]\n ... |
Please provide a description of the function:def dot_product_single_head(q, k, v, gates_q, gates_k, bi):
nb_buckets = gates_q.get_shape().as_list()[-1]
q_dispatcher = expert_utils.SparseDispatcher(nb_buckets, gates_q)
k_dispatcher = expert_utils.SparseDispatcher(nb_buckets, gates_k)
def eventually_dispatc... | [
"Perform a dot product attention on a single sequence on a single head.\n\n This function dispatches the q, k, v and loops over the buckets to compute the\n attention dot product on each subsequence.\n\n Args:\n q (tf.Tensor): [length_q, depth_q]\n k (tf.Tensor): [length_k, depth_q]\n v (tf.Tensor): [leng...
Please provide a description of the function:def map_fn_switch(fn, elems, use_map_fn=True, **kwargs):
if use_map_fn:
return tf.map_fn(fn, elems, **kwargs)
elems_unpacked = (tf.unstack(e) for e in elems)
out_unpacked = [fn(e) for e in zip(*elems_unpacked)]
out = tf.stack(out_unpacked)
return out | [
"Construct the graph with either tf.map_fn or a python for loop.\n\n This function is mainly for benchmarking purposes.\n\n tf.map_fn is dynamic but is much slower than creating a static graph with\n a for loop. However, having a for loop makes the graph much longer to build\n and can consume too much RAM on di...
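The static branch of map_fn_switch unstacks the leading axis of each input, applies the function elementwise, and restacks; the same control flow in plain Python (list in place of tf.unstack/tf.stack, name `map_static` illustrative):

```python
def map_static(fn, elems):
    # Unstack the leading axis of each input, apply fn elementwise, restack.
    elems_unpacked = (list(e) for e in elems)
    return [fn(e) for e in zip(*elems_unpacked)]

print(map_static(lambda qk: qk[0] + qk[1], ([1, 2], [10, 20])))  # [11, 22]
```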
Please provide a description of the function:def sparse_dot_product_attention(q, k, v, bi, use_map_fn, experts_params):
batch_size, nb_heads, _, depth = common_layers.shape_list(q)
@expert_utils.add_name_scope()
def flatten_first_dims(x):
# Case 1: Either constant batch size of size 1 or batch alread... | [
"Sparse multihead self attention.\n\n Perform an approximation of the full multihead attention by dispatching\n the tokens using their keys/values. Thus the attention matrix is only\n computed each time on a subset of the tokens.\n\n Notes:\n * The function doesn't perform scaling here (multihead_attention do...
Please provide a description of the function:def dot_product_batched_head(q, k, v, gates_q, gates_k, mask_right=False):
nb_buckets = common_layers.shape_list(gates_q)[-1]
@expert_utils.add_name_scope()
def get_dispatcher(gates):
length = common_layers.shape_list(gates)[1]
# Count the number of on... | [
"Perform a dot product attention on a single sequence on a single head.\n\n This function dispatches the q, k, v and loops over the buckets to compute the\n attention dot product on each subsequence.\n\n Args:\n q (tf.Tensor): [batch*heads, length_q, depth_q]\n k (tf.Tensor): [batch*heads, length_k, depth_q]...
Please provide a description of the function:def sparse_dot_product_attention_truncated(
q,
k,
v,
bi, # Unused
experts_params,
use_map_fn=False, # Unused
mask_right=False,
): # pylint: disable=unused-argument
  # Currently depth is the same for q and v
batch_size, nb_heads, _, de... | [
"Sparse multihead self attention.\n\n Perform an approximation of the full multihead attention by dispatching\n the tokens using their keys/values. Thus the attention matrix is only\n computed each time on a subset of the tokens.\n\n Notes:\n * The function doesn't perform scaling here (multihead_attention do...
Please provide a description of the function:def deconv_elems_1d(x, factor, out_depth=None):
out_depth = out_depth or x.get_shape().as_list()[-1]
x = tf.expand_dims(x, 1) # [batch_size, 1, length, depth]
x = layers().Conv2DTranspose(
filters=out_depth,
kernel_size=(1, factor),
strides=(1, fa... | [
"Increase the length and change the dimensionality.\n\n Expand/project each position of dim depth of the input into\n factor*tokens of dim out_depth\n\n Args:\n x (tf.Tensor): shape [batch_size, length, depth]\n factor (int): Multiplicative factor of each token.\n out_depth (int): Output depth (if Non...
Please provide a description of the function:def conv_elems_1d(x, factor, out_depth=None):
out_depth = out_depth or x.get_shape().as_list()[-1]
# with tf.control_dependencies( # Dynamic assertion
# [tf.assert_equal(tf.shape(x)[1] % factor, 0)]):
x = tf.expand_dims(x, 1) # [batch_size, 1, length, depth]... | [
"Decrease the length and change the dimensionality.\n\n Merge/restore/compress factor positions of dim depth of the input into\n a single position of dim out_depth.\n This is basically just a strided convolution without overlap\n between strides. The original length has to be divisible by factor.\n\n Args:...
Please provide a description of the function:def local_reduction_attention(x, block_length, multihead_params):
@expert_utils.add_name_scope()
def dot_product_self_local_attention_flattened(q, k, v):
_, num_head, _, depth = q.get_shape().as_list()
# Extract the blocks
def pad_and_reshape(x):
... | [
"Reduce the length dimension using self attention.\n\n Args:\n x (tf.Tensor): float32 of shape [batch, length, depth]\n block_length (int): Block length for local attention (Compression factor)\n multihead_params (dict): parameters for multihead attention\n\n Returns:\n tf.Tensor: Compressed tensor of... |
Please provide a description of the function:def multihead_self_attention_reduced(
x,
memory_antecedent=None,
bias=None,
factor=None,
multihead_params=None,
nonlinearity="none",
reduction_type="conv",
add_mask=True,
):
if not factor or not multihead_params:
raise ValueError("fac... | [
"Reduce the length dimension by compressing with conv.\n\n Args:\n x (tf.Tensor): float32 of shape [batch, length, depth]\n memory_antecedent (tf.Tensor): Unsupported for now\n bias (tf.Tensor): Ignored\n factor (int): compression factor for the memory sequence\n multihead_params (dict): parameters ... |
Please provide a description of the function:def scaled_dot_product_attention_simple(q, k, v, bias, name=None):
with tf.variable_scope(
name, default_name="scaled_dot_product_attention_simple"):
scalar = tf.rsqrt(tf.to_float(common_layers.shape_list(q)[2]))
logits = tf.matmul(q * scalar, k, transpose... | [
"Scaled dot-product attention. One head. One spatial dimension.\n\n Args:\n q: a Tensor with shape [batch, length_q, depth_k]\n k: a Tensor with shape [batch, length_kv, depth_k]\n v: a Tensor with shape [batch, length_kv, depth_v]\n bias: optional Tensor broadcastable to [batch, length_q, length_kv]\n... |
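The logits-softmax-weighted-sum pipeline above, written out for a single query over a tiny memory in pure Python (a minimal sketch with an illustrative name, not the library function; it computes softmax(q·k / sqrt(depth)) · v):

```python
import math

def attend(q, keys, values):
    # Scaled logits: dot(q, k) / sqrt(depth) for each memory position.
    d = len(q)
    logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    # Numerically stable softmax over the memory positions.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Attention-weighted sum of the value vectors.
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

# Two identical keys attract equal weight, so the output is the mean value:
print(attend([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[2.0], [4.0]]))  # [3.0]
```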
Please provide a description of the function:def multihead_self_attention_memory_efficient(x,
bias,
num_heads,
head_size=None,
epsilon=1... | [
"Multihead scaled-dot-product self-attention.\n\n Includes layer norm.\n\n Returns multihead-self-attention(layer_norm(x))\n\n Computes one attention head at a time to avoid exhausting memory.\n\n If forget=True, then forget all forwards activations and recompute on\n the backwards pass.\n\n Args:\n x: a T... |
Please provide a description of the function:def _idx_to_bits(self, i):
bits = bin(i)[2:].zfill(self.nb_hyperplanes) # Pad the bits str with 0
return [-1.0 if b == "0" else 1.0 for b in bits] | [
"Convert a group index to its bit representation."
] |
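Restated without the class (so it runs standalone; `self.nb_hyperplanes` becomes an argument), the encoding maps index 5 with 4 hyperplanes to binary 0101, then to -1.0 for 0-bits and +1.0 for 1-bits:

```python
def idx_to_bits(i, nb_hyperplanes):
    # bin(5) == '0b101'; strip the '0b' prefix and left-pad with zeros,
    # then map 0 -> -1.0 and 1 -> +1.0 as the method above does.
    bits = bin(i)[2:].zfill(nb_hyperplanes)
    return [-1.0 if b == "0" else 1.0 for b in bits]

print(idx_to_bits(5, 4))  # [-1.0, 1.0, -1.0, 1.0]
```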
Please provide a description of the function:def get_gates(self, x):
    # The balance loss doesn't propagate to the rest of the network
x = tf.stop_gradient(x)
# [length, depth] * [depth, nb_vectors * replicat]
x = tf.matmul(x, self.t_vectors)
# [length, nb_vector * replicat]
x = tf.sign(x) # ... | [
"Return the bucket id of the given tensor.\n\n Args:\n x (tf.Tensor): float32 of shape [length, depth]\n\n Returns:\n tf.Tensor: One-hot vector int64 of shape [heads, length, nb_buckets]\n containing the id of the bucket\n "
] |