| repo | path | func_name | docstring (first sentence) | language | sha | url | partition |
|---|---|---|---|---|---|---|---|
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | self_attention_expert | Implementing attention that runs inside each expert. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4526-L4632 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | local_expert_attention | Attention using a mixture of experts. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4635-L4681 | train |
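The `local_expert_attention` record above describes routing positions to experts so that positions sent to the same expert attend to each other. The routing step can be sketched TF-free as top-k gating; the function name, gate scores, and return format here are illustrative assumptions, not the actual `local_moe` implementation:

```python
def dispatch_to_experts(gates, k=1):
    """Toy top-k dispatch: gates is a [num_tokens][num_experts] list of
    scores; return, per expert, the token indices routed to it."""
    num_experts = len(gates[0])
    buckets = [[] for _ in range(num_experts)]
    for t, g in enumerate(gates):
        # Pick the k experts with the highest gate score for this token.
        topk = sorted(range(num_experts), key=lambda e: -g[e])[:k]
        for e in topk:
            buckets[e].append(t)
    return buckets
```

Each bucket then runs self-attention only over its own tokens, which is the approximation the record's docstring refers to.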
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | expert_dot_product | Perform dot product on a subset of the sequence. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4685-L4746 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | dot_product_single_head | Perform a dot product attention on a single sequence on a single head. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4750-L4808 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | map_fn_switch | Construct the graph with either tf.map_fn or a python for loop. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4811-L4836 | train |
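The `map_fn_switch` record above is about choosing between a dynamic `tf.map_fn` and a statically unrolled Python loop (the TF version unpacks with `tf.unstack`, applies `fn`, and repacks with `tf.stack`). A minimal TF-free sketch of the same switch, with plain lists standing in for tensors:

```python
def map_fn_switch(fn, elems, use_map_fn=True):
    """Apply fn to each element of elems, via either a 'dynamic' map
    (analogue of tf.map_fn) or an explicitly unrolled loop (analogue of
    unstack / apply / stack). Both branches return the same result; in
    TF they differ in graph size and execution speed."""
    if use_map_fn:
        return list(map(fn, elems))
    out = []
    for e in elems:  # static unrolling: one application per element
        out.append(fn(e))
    return out
```

The point of the original function is exactly that the two paths are interchangeable semantically, so the flag can be flipped for benchmarking.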
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | sparse_dot_product_attention | Sparse multihead self attention. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4840-L4933 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | dot_product_batched_head | Perform a dot product attention on a single sequence on a single head. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L4937-L5005 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | sparse_dot_product_attention_truncated | Sparse multihead self attention. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5009-L5112 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | deconv_elems_1d | Increase the length and change the dimensionality. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5116-L5140 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | conv_elems_1d | Decrease the length and change the dimensionality. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5144-L5172 | train |
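Per its docstring, `conv_elems_1d` merges `factor` consecutive positions into a single position via a non-overlapping strided convolution, and the original length has to divide by `factor`. The length-reduction part (ignoring the learned projection to `out_depth`) amounts to a grouping reshape, sketched here TF-free with nested lists; the function name is illustrative:

```python
def merge_positions_1d(x, factor):
    """Group each run of `factor` positions of a [length][depth]
    sequence into one position of dim factor*depth."""
    length = len(x)
    assert length % factor == 0, "length has to be divisible by factor"
    merged = []
    for i in range(0, length, factor):
        pos = []
        for j in range(factor):
            pos.extend(x[i + j])  # concatenate depth vectors of the group
        merged.append(pos)
    return merged
```

In the TF version this reshape is followed by a dense/conv projection so the output depth is `out_depth` rather than `factor*depth`; `deconv_elems_1d` performs the inverse expansion.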
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | local_reduction_attention | Reduce the length dimension using self attention. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5176-L5255 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | multihead_self_attention_reduced | Reduce the length dimension by compressing with conv. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5259-L5350 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | scaled_dot_product_attention_simple | Scaled dot-product attention. One head. One spatial dimension. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5353-L5376 | train |
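The `scaled_dot_product_attention_simple` record computes `softmax(q kᵀ / sqrt(depth_k) + bias) v` for a single head. A pure-Python sketch of that formula over nested lists, without the batch dimension; here `bias` is simplified to a per-key additive list rather than the broadcastable tensor of the original:

```python
import math

def scaled_dot_product_attention_simple(q, k, v, bias=None):
    """q: [len_q][d_k], k: [len_k][d_k], v: [len_k][d_v].
    Returns [len_q][d_v] = softmax(q k^T / sqrt(d_k) + bias) v."""
    d_k = len(q[0])
    out = []
    for qi in q:
        # Scaled dot-product logits against every key.
        logits = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d_k)
                  for kj in k]
        if bias is not None:  # e.g. -1e9 entries mask out keys
            logits = [l + b for l, b in zip(logits, bias)]
        # Numerically stable softmax.
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        w = [e / z for e in exps]
        # Weighted sum of value vectors.
        out.append([sum(wi * vj[d] for wi, vj in zip(w, v))
                    for d in range(len(v[0]))])
    return out
```

Passing a large negative bias on a key drives its softmax weight to zero, which is how masking works in the TF version as well.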
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | multihead_self_attention_memory_efficient | Multihead scaled-dot-product self-attention. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L5382-L5501 | train |
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | LshGating._idx_to_bits | Convert an group index to its bit representation. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L803-L806 | train |
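`LshGating._idx_to_bits` is one of the few records whose body survives the truncation in full: it zero-pads the binary expansion of a bucket index to the number of LSH hyperplanes and maps `'0'`/`'1'` to ±1.0. A standalone module-level version of the same method:

```python
def idx_to_bits(i, nb_hyperplanes):
    """Encode bucket index i as a +/-1.0 vector of length
    nb_hyperplanes, matching LshGating._idx_to_bits: each '0' bit
    becomes -1.0 and each '1' bit becomes 1.0."""
    bits = bin(i)[2:].zfill(nb_hyperplanes)  # pad the bits str with 0
    return [-1.0 if b == "0" else 1.0 for b in bits]
```

The ±1 encoding lets `get_gates` compare sign patterns of hyperplane projections against each bucket's code with a single matmul.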
| tensorflow/tensor2tensor | tensor2tensor/layers/common_attention.py | LshGating.get_gates | Return the bucket id of the given tensor. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_attention.py#L809-L839 | train |
| tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | van_image_enc_2d | The image encoder for the VAN. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L53-L124 | train |
| tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | van_enc_2d | The higher level structure encoder for the VAN. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L127-L182 | train |
| tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | van_dec_2d | The VAN decoder. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L185-L249 | train |
| tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | analogy_computation_2d | Implements the deep analogy computation. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L252-L298 | train |
| tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | van | Implements a VAN. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L301-L346 | train |
| tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | encoder_vgg | VGG network to use as encoder without the top few layers. | python | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L349-L401 | train |
tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | predictor | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L404-L492
docstring: LSTM predictor network.
code:
  def predictor(enc_flat,
                action,
                lstm_states,
                pred_depth,
                reuse=False,
                scope_prefix='',
                hparams=None):
    """LSTM predictor network."""
    with tf.variable_scope(scope_prefix + 'predict', reuse=reuse):
      enc_final_size = enc_flat.get_sh...

tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | construct_model | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L495-L569
docstring: Constructs the tensorflow graph of the hierarchical model.
code:
  def construct_model(images,
                      actions=None,
                      context_frames=2,
                      hparams=None,
                      is_training=True):
    """Constructs the tensorflow graph of the hierarchical model."""
    pred_depth = 20
    enc_out_all, pred_out_all, van_out_all, van_on_enc_all = [...

tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | peak_signal_to_noise_ratio | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L572-L581
docstring:
  Image quality metric based on maximal signal power vs. power of the noise.
  Args:
    true: the ground truth image.
    pred: the predicted image.
  Returns:
    peak signal to noise ratio (PSNR)
code:
  def peak_signal_to_noise_ratio(true, pred):
    """Image quality metric based on maximal signal power vs. power of the noise.
    Args:
      true: the ground truth image.
      pred: the predicted image.
    Returns:
      peak signal to noise ratio (PSNR)
    """
    return 10.0 * tf.log(1.0 / mean_squared_error(true, pred)) / tf.log(10.0)

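The PSNR record above reduces to 10 · log10(1 / MSE) for images scaled to [0, 1]. A minimal pure-Python sketch (flat lists stand in for the TensorFlow tensors; the helper names mirror the record's but this is not the library code):

```python
import math

def mean_squared_error(true, pred):
    # Mean of squared differences over all elements, as in the tf version.
    return sum((t - p) ** 2 for t, p in zip(true, pred)) / len(pred)

def peak_signal_to_noise_ratio(true, pred):
    # PSNR in dB for signals in [0, 1]: 10 * log10(1 / MSE).
    return 10.0 * math.log10(1.0 / mean_squared_error(true, pred))

print(peak_signal_to_noise_ratio([1.0, 0.0], [0.9, 0.1]))  # ~20.0 dB
```

An MSE of 0.01 against a peak signal of 1.0 gives exactly 20 dB, which is a handy sanity check for any reimplementation.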
tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | mean_squared_error | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L584-L595
docstring:
  L2 distance between tensors true and pred.
  Args:
    true: the ground truth image.
    pred: the predicted image.
  Returns:
    mean squared error between ground truth and predicted image.
code:
  def mean_squared_error(true, pred):
    """L2 distance between tensors true and pred.
    Args:
      true: the ground truth image.
      pred: the predicted image.
    Returns:
      mean squared error between ground truth and predicted image.
    """
    result = tf.reduce_sum(
        tf.squared_difference(true, pred)) / tf.to_float(tf.size(pred))
    ...

tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | l1_error | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L598-L600
docstring: L1 distance between tensors true and pred.
code:
  def l1_error(true, pred):
    """L1 distance between tensors true and pred."""
    return tf.reduce_sum(tf.abs(true - pred)) / tf.to_float(tf.size(pred))

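The `l1_error` record above is complete enough to restate without TensorFlow: sum of absolute differences divided by the element count. A pure-Python sketch over flat lists:

```python
def l1_error(true, pred):
    # Mean absolute difference over all elements: the pure-Python analogue of
    # tf.reduce_sum(tf.abs(true - pred)) / tf.to_float(tf.size(pred)).
    return sum(abs(t - p) for t, p in zip(true, pred)) / len(pred)

print(l1_error([1.0, 0.0], [0.5, 0.5]))  # 0.5
```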
tensorflow/tensor2tensor | tensor2tensor/models/video/epva.py | calc_loss_psnr | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/epva.py#L603-L627
docstring: Calculates loss and psnr for predictions over multiple timesteps.
code:
  def calc_loss_psnr(gen_images, images, name, hparams=None, use_l1_loss=False):
    """Calculates loss and psnr for predictions over multiple timesteps."""
    del hparams
    with tf.name_scope(name):
      loss, error, psnr_all = 0.0, 0.0, 0.0
      for _, x, gx in zip(range(len(gen_images)), images, gen_images):
        recon_co...

tensorflow/tensor2tensor | tensor2tensor/models/video/sv2p_params.py | next_frame_sv2p | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/sv2p_params.py#L27-L60
docstring: SV2P model hparams.
code:
  def next_frame_sv2p():
    """SV2P model hparams."""
    hparams = basic_stochastic.next_frame_basic_stochastic()
    hparams.optimizer = "true_adam"
    hparams.learning_rate_schedule = "constant"
    hparams.learning_rate_constant = 1e-3
    hparams.video_num_input_frames = 1
    hparams.video_num_target_frames = 3
    hparams.batch...

tensorflow/tensor2tensor | tensor2tensor/models/video/sv2p_params.py | next_frame_sv2p_discrete | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/sv2p_params.py#L64-L76
docstring: SV2P discrete model hparams.
code:
  def next_frame_sv2p_discrete():
    """SV2P discrete model hparams."""
    hparams = next_frame_sv2p()
    hparams.action_injection = "multiplicative"
    hparams.small_mode = True
    hparams.add_hparam("bottleneck_bits", 128)
    hparams.add_hparam("bottleneck_noise", 0.02)
    hparams.add_hparam("discrete_warmup_steps", 40000)
    ...

tensorflow/tensor2tensor | tensor2tensor/models/video/sv2p_params.py | next_frame_sv2p_atari | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/sv2p_params.py#L80-L93
docstring: SV2P model for atari.
code:
  def next_frame_sv2p_atari():
    """SV2P model for atari."""
    hparams = next_frame_sv2p()
    hparams.video_num_input_frames = 4
    hparams.video_num_target_frames = 4
    hparams.action_injection = "multiplicative"
    hparams.num_iterations_1st_stage = 12000
    hparams.num_iterations_2nd_stage = 12000
    hparams.anneal_end = 4...

tensorflow/tensor2tensor | tensor2tensor/models/video/sv2p_params.py | next_frame_sv2p_atari_softmax | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/sv2p_params.py#L97-L104
docstring: SV2P model for atari with softmax.
code:
  def next_frame_sv2p_atari_softmax():
    """SV2P model for atari with softmax."""
    hparams = next_frame_sv2p_atari()
    hparams.bottom = {}
    hparams.loss = {}
    hparams.top = {}
    hparams.internal_loss = True
    return hparams

tensorflow/tensor2tensor | tensor2tensor/models/video/sv2p_params.py | next_frame_sv2p_tiny | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/sv2p_params.py#L124-L133
docstring: Tiny SV2P model.
code:
  def next_frame_sv2p_tiny():
    """Tiny SV2P model."""
    hparams = next_frame_sv2p_atari_softmax()
    hparams.batch_size = 2
    hparams.tiny_mode = True
    hparams.num_masks = 1
    hparams.video_modality_loss_cutoff = 0.4
    hparams.video_num_input_frames = 4
    hparams.video_num_target_frames = 4
    return hparams

tensorflow/tensor2tensor | tensor2tensor/models/video/sv2p_params.py | next_frame_sv2p_cutoff | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/sv2p_params.py#L145-L151
docstring: SV2P model with additional cutoff in L2 loss for environments like pong.
code:
  def next_frame_sv2p_cutoff():
    """SV2P model with additional cutoff in L2 loss for environments like pong."""
    hparams = next_frame_sv2p()
    hparams.video_modality_loss_cutoff = 0.4
    hparams.video_num_input_frames = 4
    hparams.video_num_target_frames = 1
    return hparams

tensorflow/tensor2tensor | tensor2tensor/data_generators/mscoco.py | _get_mscoco | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/data_generators/mscoco.py#L49-L57
docstring: Download and extract MSCOCO datasets to directory unless it is there.
code:
  def _get_mscoco(directory):
    """Download and extract MSCOCO datasets to directory unless it is there."""
    for url in _MSCOCO_URLS:
      filename = os.path.basename(url)
      download_url = os.path.join(_MSCOCO_ROOT_URL, url)
      path = generator_utils.maybe_download(directory, filename, download_url)
      unzip_dir = os...

tensorflow/tensor2tensor | tensor2tensor/data_generators/mscoco.py | mscoco_generator | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/data_generators/mscoco.py#L60-L142
docstring:
  Image generator for MSCOCO captioning problem with token-wise captions.
  Args:
    data_dir: path to the data directory.
    tmp_dir: path to temporary storage directory.
    training: a Boolean; if true, we use the train set, otherwise the test set.
    how_many: how many images and labels to generate.
    start_fro...
code:
  def mscoco_generator(data_dir,
                       tmp_dir,
                       training,
                       how_many,
                       start_from=0,
                       eos_list=None,
                       vocab_filename=None):
    """Image generator for MSCOCO captioning problem with token-wise captions.
    Arg...
    """
    eos_list = [1] if eos_list is...

tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | flags_as_args | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L93-L113
docstring: Convert FLAGS to list of args suitable for passing on cmd line.
code:
  def flags_as_args():
    """Convert FLAGS to list of args suitable for passing on cmd line."""
    if hasattr(FLAGS, "flag_values_dict"):
      args_dict = FLAGS.flag_values_dict()
    else:
      args_dict = dict(FLAGS.__dict__["__flags"])
    del args_dict["cloud_mlengine"]
    # Configured later
    del args_dict["t2t_usr_dir"]
    a...

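The record above deletes a few keys from the FLAGS dict before converting the rest into command-line arguments; the body is truncated, so here is a hedged, standalone sketch of that conversion. The `--name=value` formatting and the skipping of `None` values are assumptions, not taken from the truncated source:

```python
def flags_as_args(flags_dict, skip=("cloud_mlengine", "t2t_usr_dir")):
    # Turn a {flag_name: value} dict into ["--name=value", ...] strings,
    # dropping keys the record deletes before converting ("cloud_mlengine"
    # and "t2t_usr_dir") plus any unset (None) flags. Sorted for determinism.
    args = []
    for name, value in sorted(flags_dict.items()):
        if name in skip or value is None:
            continue
        args.append("--%s=%s" % (name, value))
    return args

print(flags_as_args({"model": "transformer", "cloud_mlengine": True, "batch_size": 32}))
# ['--batch_size=32', '--model=transformer']
```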
tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | get_default_master_type | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L116-L127
docstring: Returns master_type for trainingInput.
code:
  def get_default_master_type(num_gpus=1):
    """Returns master_type for trainingInput."""
    gpus_to_master_map = {
        0: "standard",
        1: "standard_p100",
        4: "complex_model_m_p100",
        8: "complex_model_l_gpu",
    }
    if num_gpus not in gpus_to_master_map:
      raise ValueError("Num gpus must be in %s" %
                       ...

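The mapping in the record above is visible in full, so it can be completed into a runnable sketch; only the tail of the `ValueError` message is truncated in the source, and the wording used below for it is a guess:

```python
def get_default_master_type(num_gpus=1):
    # GPU count -> ML Engine machine tier, exactly as listed in the record.
    gpus_to_master_map = {
        0: "standard",
        1: "standard_p100",
        4: "complex_model_m_p100",
        8: "complex_model_l_gpu",
    }
    if num_gpus not in gpus_to_master_map:
        # The original error-message formatting is truncated; this is a stand-in.
        raise ValueError("Num gpus must be in %s" % sorted(gpus_to_master_map))
    return gpus_to_master_map[num_gpus]

print(get_default_master_type(4))  # complex_model_m_p100
```

Note that unsupported counts (e.g. 2 GPUs) raise rather than falling back to a nearby tier, which keeps the job spec explicit.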
tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | configure_job | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L130-L170
docstring: Construct jobSpec for ML Engine job.
code:
  def configure_job():
    """Construct jobSpec for ML Engine job."""
    # See documentation:
    # https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#traininginput
    training_input = {
        "pythonModule": "tensor2tensor.bin.t2t_trainer",
        "args": flags_as_args(),
        "region": text_encoder.native_to_...

tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | launch_job | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L173-L181
docstring: Launch job on ML Engine.
code:
  def launch_job(job_spec):
    """Launch job on ML Engine."""
    project_id = "projects/{}".format(
        text_encoder.native_to_unicode(default_project()))
    credentials = GoogleCredentials.get_application_default()
    cloudml = discovery.build("ml", "v1", credentials=credentials,
                            cache_discover...

tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | _tar_and_copy | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L184-L202
docstring: Tar and gzip src_dir and copy to GCS target_dir.
code:
  def _tar_and_copy(src_dir, target_dir):
    """Tar and gzip src_dir and copy to GCS target_dir."""
    src_dir = src_dir.rstrip("/")
    target_dir = target_dir.rstrip("/")
    tmp_dir = tempfile.gettempdir().rstrip("/")
    src_base = os.path.basename(src_dir)
    shell_run(
        "tar --exclude=.git -zcf {tmp_dir}/{src_base}.tar...

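The record above shells out to `tar --exclude=.git -zcf {tmp_dir}/{src_base}.tar...` before copying to GCS. The local tar step can be sketched with the standard-library `tarfile` module (a stand-in, not the library's code; the GCS copy half of the helper is omitted here since its command is truncated in the source):

```python
import os
import tarfile
import tempfile

def tar_directory(src_dir):
    # Gzip-compress src_dir into {tempdir}/{basename}.tar.gz, skipping any
    # .git directory -- the stdlib equivalent of the record's shell call.
    src_dir = src_dir.rstrip("/")
    src_base = os.path.basename(src_dir)
    out_path = os.path.join(tempfile.gettempdir(), src_base + ".tar.gz")

    def exclude_git(tarinfo):
        # Returning None from the filter drops the entry (and, for a
        # directory, stops recursion into it).
        return None if ".git" in tarinfo.name.split("/") else tarinfo

    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=src_base, filter=exclude_git)
    return out_path
```

Using `arcname=src_base` makes archive members start with the directory's base name, matching what a `tar -C parent -zcf ... src_base` invocation would produce.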
tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | tar_and_copy_t2t | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L205-L242
docstring: Tar Tensor2Tensor and cp to train_dir.
code:
  def tar_and_copy_t2t(train_dir):
    """Tar Tensor2Tensor and cp to train_dir."""
    tf.logging.info("Tarring and pushing local Tensor2Tensor package.")
    output = text_encoder.native_to_unicode(shell_output(
        "pip show tensor2tensor")).split("\n")
    assert output[1].startswith("Version")
    assert output[7].startswi...

tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | tar_and_copy_usr_dir | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L245-L263
docstring: Package, tar, and copy usr_dir to GCS train_dir.
code:
  def tar_and_copy_usr_dir(usr_dir, train_dir):
    """Package, tar, and copy usr_dir to GCS train_dir."""
    tf.logging.info("Tarring and pushing t2t_usr_dir.")
    usr_dir = os.path.abspath(os.path.expanduser(usr_dir))
    # Copy usr dir to a temp location
    top_dir = os.path.join(tempfile.gettempdir(), "t2t_usr_container")
    ...

tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | validate_flags | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L298-L323
docstring: Validates flags are set to acceptable values for CloudML Engine runs.
code:
  def validate_flags():
    """Validates flags are set to acceptable values for CloudML Engine runs."""
    assert not job_dir()
    assert FLAGS.output_dir.startswith("gs://")
    assert FLAGS.data_dir.startswith("gs://")
    assert FLAGS.worker_replicas <= 1
    assert FLAGS.ps_replicas <= 0
    if FLAGS.hparams_range:
      assert FL...

tensorflow/tensor2tensor | tensor2tensor/utils/cloud_mlengine.py | launch | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/utils/cloud_mlengine.py#L331-L351
docstring: Launch t2t_trainer on Cloud ML Engine.
code:
  def launch():
    """Launch t2t_trainer on Cloud ML Engine."""
    validate_flags()
    job_spec = configure_job()
    job_name = job_spec["jobId"]
    tf.logging.info("Launching job %s with ML Engine spec:\n%s", job_name,
                    pprint.pformat(job_spec))
    assert confirm()
    train_dir = FLAGS.output_dir
    t2t_tar = t...

tensorflow/tensor2tensor | tensor2tensor/layers/bayes.py | add_weight | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/bayes.py#L32-L64
docstring: Decorator for Layers, overriding add_weight for trainable initializers.
code:
  def add_weight(cls):
    """Decorator for Layers, overriding add_weight for trainable initializers."""
    @functools.wraps(cls.add_weight)
    def _add_weight(self,
                    name=None,
                    shape=None,
                    dtype=None,
                    initializer=None,
                    regularizer=None,...

tensorflow/tensor2tensor | tensor2tensor/models/video/base_vae.py | NextFrameBaseVae.get_beta | python | train
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/base_vae.py#L34-L72
docstring:
  Get the KL multiplier, either dynamically or schedule based.
  if hparams.latent_loss_multiplier_dynamic is set to true, then beta
  is being adjusted to keep KL under hparams.latent_loss_multiplier_epsilon.
  In order to do so, the beta is being updated at each iteration
  by taking steps of size hparams.late...
code:
  def get_beta(self, kl_loss=0.0):
    """Get the KL multiplier, either dynamically or schedule based.
    ...
    """
    if self.hparams.latent_loss_multiplier_dynamic:
      beta = tf.Variable(self.hparams.latent_loss_multiplier,
                         trainable=...

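The `get_beta` docstring above describes a dynamic KL multiplier that is nudged at each iteration to keep the KL term under an epsilon threshold. Since the record's body is truncated, the following is only a hedged sketch of one such update step; the step direction and the clipping range `[0, beta_max]` are assumptions, not taken from the source:

```python
def update_beta(beta, kl_loss, epsilon, step_size, beta_max):
    # One dynamic-beta step: raise the multiplier while KL exceeds epsilon
    # (penalizing the latent more), lower it otherwise, and clip the result
    # to [0, beta_max]. All of this mirrors the truncated docstring's intent
    # only approximately.
    delta = step_size if kl_loss > epsilon else -step_size
    return min(max(beta + delta, 0.0), beta_max)

print(update_beta(0.5, kl_loss=2.0, epsilon=1.0, step_size=0.1, beta_max=1.0))  # 0.6
```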
tensorflow/tensor2tensor | tensor2tensor/models/video/base_vae.py | NextFrameBaseVae.get_kl_loss | def get_kl_loss(self, means, log_vars, means_p=None, log_vars_p=None):
"""Get KL loss for all the predicted Gaussians."""
kl_loss = 0.0
if means_p is None:
means_p = tf.unstack(tf.zeros_like(means))
if log_vars_p is None:
log_vars_p = tf.unstack(tf.zeros_like(log_vars))
enumerated_inputs... | python | def get_kl_loss(self, means, log_vars, means_p=None, log_vars_p=None):
"""Get KL loss for all the predicted Gaussians."""
kl_loss = 0.0
if means_p is None:
means_p = tf.unstack(tf.zeros_like(means))
if log_vars_p is None:
log_vars_p = tf.unstack(tf.zeros_like(log_vars))
enumerated_inputs... | [
"def",
"get_kl_loss",
"(",
"self",
",",
"means",
",",
"log_vars",
",",
"means_p",
"=",
"None",
",",
"log_vars_p",
"=",
"None",
")",
":",
"kl_loss",
"=",
"0.0",
"if",
"means_p",
"is",
"None",
":",
"means_p",
"=",
"tf",
".",
"unstack",
"(",
"tf",
".",
... | Get KL loss for all the predicted Gaussians. | [
"Get",
"KL",
"loss",
"for",
"all",
"the",
"predicted",
"Gaussians",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/base_vae.py#L74-L95 | train |
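The `get_kl_loss` record above sums KL terms over predicted Gaussians, defaulting the prior's means and log-variances to zeros. As a sketch, here is the standard closed-form KL between two scalar Gaussians parameterized by mean and log-variance — the per-dimension quantity such losses typically sum over:

```python
import math

def kl_gaussian(mean_q, log_var_q, mean_p=0.0, log_var_p=0.0):
    # KL( N(mean_q, exp(log_var_q)) || N(mean_p, exp(log_var_p)) ).
    # With the defaults the prior is N(0, 1), matching the zeros used
    # when means_p / log_vars_p are None in the record above.
    return 0.5 * (
        log_var_p - log_var_q
        + (math.exp(log_var_q) + (mean_q - mean_p) ** 2) / math.exp(log_var_p)
        - 1.0)
```

KL of a distribution against itself is zero, and shifting the posterior mean by 1 against a unit prior gives exactly 0.5.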
tensorflow/tensor2tensor | tensor2tensor/models/video/base_vae.py | NextFrameBaseVae.construct_latent_tower | def construct_latent_tower(self, images, time_axis):
"""Create the latent tower."""
# No latent in the first phase
first_phase = tf.less(
self.get_iteration_num(), self.hparams.num_iterations_1st_stage)
# use all frames by default but this allows more
# predicted frames at inference time
... | python | def construct_latent_tower(self, images, time_axis):
"""Create the latent tower."""
# No latent in the first phase
first_phase = tf.less(
self.get_iteration_num(), self.hparams.num_iterations_1st_stage)
# use all frames by default but this allows more
# predicted frames at inference time
... | [
"def",
"construct_latent_tower",
"(",
"self",
",",
"images",
",",
"time_axis",
")",
":",
"# No latent in the first phase",
"first_phase",
"=",
"tf",
".",
"less",
"(",
"self",
".",
"get_iteration_num",
"(",
")",
",",
"self",
".",
"hparams",
".",
"num_iterations_1... | Create the latent tower. | [
"Create",
"the",
"latent",
"tower",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/video/base_vae.py#L97-L118 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_encode | def transformer_encode(encoder_function, inputs, target_space, hparams,
attention_weights=None, features=None, losses=None,
**kwargs):
"""Encode transformer inputs.
Args:
encoder_function: the encoder function
inputs: Transformer inputs [batch_size, input_lengt... | python | def transformer_encode(encoder_function, inputs, target_space, hparams,
attention_weights=None, features=None, losses=None,
**kwargs):
"""Encode transformer inputs.
Args:
encoder_function: the encoder function
inputs: Transformer inputs [batch_size, input_lengt... | [
"def",
"transformer_encode",
"(",
"encoder_function",
",",
"inputs",
",",
"target_space",
",",
"hparams",
",",
"attention_weights",
"=",
"None",
",",
"features",
"=",
"None",
",",
"losses",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"inputs",
"=",
"co... | Encode transformer inputs.
Args:
encoder_function: the encoder function
inputs: Transformer inputs [batch_size, input_length, 1, hidden_dim] which
will be flattened along the two spatial dimensions.
target_space: scalar, target space ID.
hparams: hyperparameters for model.
attention_weights... | [
"Encode",
"transformer",
"inputs",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L57-L111 | train |
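The `transformer_encode` record notes that inputs of shape [batch_size, input_length, 1, hidden_dim] are flattened along the two spatial dimensions. A shape-level sketch of that collapse — the real library does this on tensors, and the function name here is just illustrative:

```python
def flatten4d3d(shape):
    # Collapse the two middle (spatial) dims of a 4-D shape into one,
    # turning [batch, length, 1, hidden] into [batch, length, hidden].
    batch, height, width, hidden = shape
    return (batch, height * width, hidden)

out_shape = flatten4d3d((2, 7, 1, 512))
```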
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_decode | def transformer_decode(decoder_function,
decoder_input,
encoder_output,
encoder_decoder_attention_bias,
decoder_self_attention_bias,
hparams,
attention_weights=None,
... | python | def transformer_decode(decoder_function,
decoder_input,
encoder_output,
encoder_decoder_attention_bias,
decoder_self_attention_bias,
hparams,
attention_weights=None,
... | [
"def",
"transformer_decode",
"(",
"decoder_function",
",",
"decoder_input",
",",
"encoder_output",
",",
"encoder_decoder_attention_bias",
",",
"decoder_self_attention_bias",
",",
"hparams",
",",
"attention_weights",
"=",
"None",
",",
"cache",
"=",
"None",
",",
"decode_l... | Decode Transformer outputs from encoder representation.
Args:
decoder_function: the decoder function
decoder_input: inputs to bottom of the model. [batch_size, decoder_length,
hidden_dim]
encoder_output: Encoder representation. [batch_size, input_length,
hidden_dim]
encoder_decoder_attent... | [
"Decode",
"Transformer",
"outputs",
"from",
"encoder",
"representation",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L114-L178 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | _init_transformer_cache | def _init_transformer_cache(cache, hparams, batch_size, attention_init_length,
encoder_output, encoder_decoder_attention_bias,
scope_prefix):
"""Create the initial cache for Transformer fast decoding."""
key_channels = hparams.attention_key_channels or hparams... | python | def _init_transformer_cache(cache, hparams, batch_size, attention_init_length,
encoder_output, encoder_decoder_attention_bias,
scope_prefix):
"""Create the initial cache for Transformer fast decoding."""
key_channels = hparams.attention_key_channels or hparams... | [
"def",
"_init_transformer_cache",
"(",
"cache",
",",
"hparams",
",",
"batch_size",
",",
"attention_init_length",
",",
"encoder_output",
",",
"encoder_decoder_attention_bias",
",",
"scope_prefix",
")",
":",
"key_channels",
"=",
"hparams",
".",
"attention_key_channels",
"... | Create the initial cache for Transformer fast decoding. | [
"Create",
"the",
"initial",
"cache",
"for",
"Transformer",
"fast",
"decoding",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L832-L892 | train |
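The `_init_transformer_cache` record computes channel sizes as `hparams.attention_key_channels or hparams.hidden_size`. A tiny sketch of that fallback idiom — a zero or unset value means "use hidden_size":

```python
def pick_channels(attention_key_channels, hidden_size):
    # Python's `or` returns the right operand when the left is falsy,
    # so 0 or None falls back to hidden_size.
    return attention_key_channels or hidden_size
```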
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | fast_decode_tpu | def fast_decode_tpu(encoder_output,
encoder_decoder_attention_bias,
symbols_to_logits_fn,
hparams,
decode_length,
vocab_size,
init_cache_fn=_init_transformer_cache,
beam_size=1,
... | python | def fast_decode_tpu(encoder_output,
encoder_decoder_attention_bias,
symbols_to_logits_fn,
hparams,
decode_length,
vocab_size,
init_cache_fn=_init_transformer_cache,
beam_size=1,
... | [
"def",
"fast_decode_tpu",
"(",
"encoder_output",
",",
"encoder_decoder_attention_bias",
",",
"symbols_to_logits_fn",
",",
"hparams",
",",
"decode_length",
",",
"vocab_size",
",",
"init_cache_fn",
"=",
"_init_transformer_cache",
",",
"beam_size",
"=",
"1",
",",
"top_beam... | Given encoder output and a symbols to logits function, does fast decoding.
Implements both greedy and beam search decoding for TPU; beam search is
used iff beam_size > 1, otherwise beam-search-related arguments are ignored.
Args:
encoder_output: A tensor, output from encoder.
encoder_decoder_attention_bias... | [
"Given",
"encoder",
"output",
"and",
"a",
"symbols",
"to",
"logits",
"function",
"does",
"fast",
"decoding",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L895-L1045 | train |
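The `fast_decode_tpu` record runs beam search when beam_size > 1. One expansion step of a toy beam search, keeping the beam_size best hypotheses by summed log-probability — a sketch with a stand-in log-probability function, not the TPU implementation:

```python
import heapq

def beam_step(beams, logprobs_fn, beam_size):
    # Expand every (score, ids) hypothesis by every symbol, then keep
    # the beam_size highest-scoring candidates.
    candidates = []
    for score, ids in beams:
        for sym, lp in enumerate(logprobs_fn(ids)):
            candidates.append((score + lp, ids + [sym]))
    return heapq.nlargest(beam_size, candidates)

# One step from a single empty hypothesis over a 3-symbol vocabulary.
beams = beam_step([(0.0, [0])], lambda ids: [-2.0, -0.1, -1.0], beam_size=2)
```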
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | fast_decode | def fast_decode(encoder_output,
encoder_decoder_attention_bias,
symbols_to_logits_fn,
hparams,
decode_length,
vocab_size,
init_cache_fn=_init_transformer_cache,
beam_size=1,
top_beams=1,
... | python | def fast_decode(encoder_output,
encoder_decoder_attention_bias,
symbols_to_logits_fn,
hparams,
decode_length,
vocab_size,
init_cache_fn=_init_transformer_cache,
beam_size=1,
top_beams=1,
... | [
"def",
"fast_decode",
"(",
"encoder_output",
",",
"encoder_decoder_attention_bias",
",",
"symbols_to_logits_fn",
",",
"hparams",
",",
"decode_length",
",",
"vocab_size",
",",
"init_cache_fn",
"=",
"_init_transformer_cache",
",",
"beam_size",
"=",
"1",
",",
"top_beams",
... | Given encoder output and a symbols to logits function, does fast decoding.
Implements both greedy and beam search decoding; beam search is used iff
beam_size > 1, otherwise beam-search-related arguments are ignored.
Args:
encoder_output: Output from encoder.
encoder_decoder_attention_bias: a bias tensor fo... | [
"Given",
"encoder",
"output",
"and",
"a",
"symbols",
"to",
"logits",
"function",
"does",
"fast",
"decoding",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1048-L1182 | train |
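The `fast_decode` record takes a `symbols_to_logits_fn` and, with beam_size == 1, decodes greedily. A toy greedy loop under that contract — the start/EOS ids and the stand-in logits function are assumptions for illustration:

```python
def greedy_decode(symbols_to_logits_fn, decode_length, eos_id=1):
    ids = [0]  # assumed start symbol
    for step in range(decode_length):
        logits = symbols_to_logits_fn(ids, step)
        next_id = max(range(len(logits)), key=logits.__getitem__)
        ids.append(next_id)
        if next_id == eos_id:
            break  # stop early on EOS
    return ids[1:]

def toy_fn(ids, step):
    # Favor symbol 3 until step 2, then make EOS (id 1) the argmax.
    return [0.0, 5.0 if step == 2 else 0.0, 0.0, 1.0]

decoded = greedy_decode(toy_fn, decode_length=5)
```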
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_prepare_decoder | def transformer_prepare_decoder(targets, hparams, features=None):
"""Prepare one shard of the model for the decoder.
Args:
targets: a Tensor.
hparams: run hyperparameters
features: optionally pass the entire features dictionary as well. This is
needed now for "packed" datasets.
Returns:
de... | python | def transformer_prepare_decoder(targets, hparams, features=None):
"""Prepare one shard of the model for the decoder.
Args:
targets: a Tensor.
hparams: run hyperparameters
features: optionally pass the entire features dictionary as well. This is
needed now for "packed" datasets.
Returns:
de... | [
"def",
"transformer_prepare_decoder",
"(",
"targets",
",",
"hparams",
",",
"features",
"=",
"None",
")",
":",
"if",
"hparams",
".",
"causal_decoder_self_attention",
":",
"# Causal attention.",
"if",
"hparams",
".",
"prepend_mode",
"==",
"\"prepend_inputs_full_attention\... | Prepare one shard of the model for the decoder.
Args:
targets: a Tensor.
hparams: run hyperparameters
features: optionally pass the entire features dictionary as well. This is
needed now for "packed" datasets.
Returns:
decoder_input: a Tensor, bottom of decoder stack
decoder_self_attenti... | [
"Prepare",
"one",
"shard",
"of",
"the",
"model",
"for",
"the",
"decoder",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1281-L1336 | train |
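The `transformer_prepare_decoder` record builds a causal self-attention bias (unless the prepend_inputs_full_attention mode is used). A list-based sketch of that lower-triangular bias, with a large negative number standing in for minus infinity on future positions:

```python
NEG_INF = -1e9  # stand-in for the large negative bias value

def causal_attention_bias(length):
    # 0.0 where key position j <= query position i (visible),
    # NEG_INF where j > i (future positions are masked out).
    return [[0.0 if j <= i else NEG_INF for j in range(length)]
            for i in range(length)]

bias = causal_attention_bias(3)
```

Added to attention logits before the softmax, the NEG_INF entries drive the attention weights on future positions to effectively zero.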
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_decoder | def transformer_decoder(decoder_input,
encoder_output,
decoder_self_attention_bias,
encoder_decoder_attention_bias,
hparams,
cache=None,
decode_loop_step=None,
... | python | def transformer_decoder(decoder_input,
encoder_output,
decoder_self_attention_bias,
encoder_decoder_attention_bias,
hparams,
cache=None,
decode_loop_step=None,
... | [
"def",
"transformer_decoder",
"(",
"decoder_input",
",",
"encoder_output",
",",
"decoder_self_attention_bias",
",",
"encoder_decoder_attention_bias",
",",
"hparams",
",",
"cache",
"=",
"None",
",",
"decode_loop_step",
"=",
"None",
",",
"name",
"=",
"\"decoder\"",
",",... | A stack of transformer layers.
Args:
decoder_input: a Tensor
encoder_output: a Tensor
decoder_self_attention_bias: bias Tensor for self-attention (see
common_attention.attention_bias())
encoder_decoder_attention_bias: bias Tensor for encoder-decoder attention
(see common_attention.attenti... | [
"A",
"stack",
"of",
"transformer",
"layers",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1339-L1520 | train |
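The `transformer_decoder` record stacks layers that each apply self-attention, encoder-decoder attention, and a feed-forward block. A toy sketch of that per-layer wiring using scalars and residual additions — layer normalization and the real sublayers are omitted:

```python
def decoder_layer(x, encoder_output, self_attn, cross_attn, ffn):
    # Each sublayer's output is added back to its input (residual),
    # in the order: masked self-attention, cross-attention, feed-forward.
    x = x + self_attn(x)
    x = x + cross_attn(x, encoder_output)
    x = x + ffn(x)
    return x

# Scalar stand-ins just to show the residual accumulation.
out = decoder_layer(1.0, 10.0,
                    self_attn=lambda x: 0.5,
                    cross_attn=lambda x, enc: 0.25,
                    ffn=lambda x: 0.125)
```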
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_v1 | def transformer_base_v1():
"""Set of hyperparameters."""
hparams = common_hparams.basic_params1()
hparams.norm_type = "layer"
hparams.hidden_size = 512
hparams.batch_size = 4096
hparams.max_length = 256
hparams.clip_grad_norm = 0. # i.e. no gradient clipping
hparams.optimizer_adam_epsilon = 1e-9
hpar... | python | def transformer_base_v1():
"""Set of hyperparameters."""
hparams = common_hparams.basic_params1()
hparams.norm_type = "layer"
hparams.hidden_size = 512
hparams.batch_size = 4096
hparams.max_length = 256
hparams.clip_grad_norm = 0. # i.e. no gradient clipping
hparams.optimizer_adam_epsilon = 1e-9
hpar... | [
"def",
"transformer_base_v1",
"(",
")",
":",
"hparams",
"=",
"common_hparams",
".",
"basic_params1",
"(",
")",
"hparams",
".",
"norm_type",
"=",
"\"layer\"",
"hparams",
".",
"hidden_size",
"=",
"512",
"hparams",
".",
"batch_size",
"=",
"4096",
"hparams",
".",
... | Set of hyperparameters. | [
"Set",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1568-L1633 | train |
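The `transformer_base_v1` record pins down a set of hyperparameter values. A dict-based stand-in restating just the values visible in the record — the real object is an HParams instance with many more fields:

```python
def transformer_base_v1_sketch():
    return {
        "norm_type": "layer",
        "hidden_size": 512,
        "batch_size": 4096,
        "max_length": 256,
        "clip_grad_norm": 0.0,  # i.e. no gradient clipping
        "optimizer_adam_epsilon": 1e-9,
    }

hparams = transformer_base_v1_sketch()
```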
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_v2 | def transformer_base_v2():
"""Set of hyperparameters."""
hparams = transformer_base_v1()
hparams.layer_preprocess_sequence = "n"
hparams.layer_postprocess_sequence = "da"
hparams.layer_prepostprocess_dropout = 0.1
hparams.attention_dropout = 0.1
hparams.relu_dropout = 0.1
hparams.learning_rate_warmup_st... | python | def transformer_base_v2():
"""Set of hyperparameters."""
hparams = transformer_base_v1()
hparams.layer_preprocess_sequence = "n"
hparams.layer_postprocess_sequence = "da"
hparams.layer_prepostprocess_dropout = 0.1
hparams.attention_dropout = 0.1
hparams.relu_dropout = 0.1
hparams.learning_rate_warmup_st... | [
"def",
"transformer_base_v2",
"(",
")",
":",
"hparams",
"=",
"transformer_base_v1",
"(",
")",
"hparams",
".",
"layer_preprocess_sequence",
"=",
"\"n\"",
"hparams",
".",
"layer_postprocess_sequence",
"=",
"\"da\"",
"hparams",
".",
"layer_prepostprocess_dropout",
"=",
"... | Set of hyperparameters. | [
"Set",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1637-L1647 | train |
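The `transformer_base_v2` record illustrates the versioning pattern these hparams functions follow: start from the previous version and override a few fields. A self-contained sketch of that layering, limited to values actually shown in the v1 and v2 records:

```python
def base_v1():
    return {"hidden_size": 512, "batch_size": 4096, "max_length": 256}

def base_v2():
    hparams = dict(base_v1())  # copy the previous version, then override
    hparams.update({
        "layer_preprocess_sequence": "n",
        "layer_postprocess_sequence": "da",
        "layer_prepostprocess_dropout": 0.1,
        "attention_dropout": 0.1,
        "relu_dropout": 0.1,
    })
    return hparams

v2 = base_v2()
```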
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_vq_ada_32ex_packed | def transformer_base_vq_ada_32ex_packed():
"""Set of hyperparameters for lm1b packed following tpu params."""
hparams = transformer_base_v2()
expert_utils.update_hparams_for_vq_gating(hparams)
hparams.moe_num_experts = 32
hparams.gating_type = "vq"
# this gives us a batch size of 16 because each seq is len ... | python | def transformer_base_vq_ada_32ex_packed():
"""Set of hyperparameters for lm1b packed following tpu params."""
hparams = transformer_base_v2()
expert_utils.update_hparams_for_vq_gating(hparams)
hparams.moe_num_experts = 32
hparams.gating_type = "vq"
# this gives us a batch size of 16 because each seq is len ... | [
"def",
"transformer_base_vq_ada_32ex_packed",
"(",
")",
":",
"hparams",
"=",
"transformer_base_v2",
"(",
")",
"expert_utils",
".",
"update_hparams_for_vq_gating",
"(",
"hparams",
")",
"hparams",
".",
"moe_num_experts",
"=",
"32",
"hparams",
".",
"gating_type",
"=",
... | Set of hyperparameters for lm1b packed following tpu params. | [
"Set",
"of",
"hyperparameters",
"for",
"lm1b",
"packed",
"following",
"tpu",
"params",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1651-L1679 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_vq1_16_nb1_packed_nda_b01_scales | def transformer_base_vq1_16_nb1_packed_nda_b01_scales():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.layer_preprocess_sequence = "n"
hparams.layer_postprocess_sequen... | python | def transformer_base_vq1_16_nb1_packed_nda_b01_scales():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.layer_preprocess_sequence = "n"
hparams.layer_postprocess_sequen... | [
"def",
"transformer_base_vq1_16_nb1_packed_nda_b01_scales",
"(",
")",
":",
"hparams",
"=",
"transformer_base_vq_ada_32ex_packed",
"(",
")",
"hparams",
".",
"use_scales",
"=",
"int",
"(",
"True",
")",
"hparams",
".",
"moe_num_experts",
"=",
"16",
"hparams",
".",
"moe... | Set of hyperparameters. | [
"Set",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1692-L1702 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_vq1_16_nb1_packed_dan_b01_scales | def transformer_base_vq1_16_nb1_packed_dan_b01_scales():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.ema = False
return hparams | python | def transformer_base_vq1_16_nb1_packed_dan_b01_scales():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.ema = False
return hparams | [
"def",
"transformer_base_vq1_16_nb1_packed_dan_b01_scales",
"(",
")",
":",
"hparams",
"=",
"transformer_base_vq_ada_32ex_packed",
"(",
")",
"hparams",
".",
"use_scales",
"=",
"int",
"(",
"True",
")",
"hparams",
".",
"moe_num_experts",
"=",
"16",
"hparams",
".",
"moe... | Set of hyperparameters. | [
"Set",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1706-L1714 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_vq1_16_nb1_packed_nda_b01_scales_dialog | def transformer_base_vq1_16_nb1_packed_nda_b01_scales_dialog():
"""Set of hyperparameters."""
hparams = transformer_base_vq1_16_nb1_packed_nda_b01_scales()
hparams.batch_size = 2048
hparams.max_length = 1024
hparams.filter_size = 3072
return hparams | python | def transformer_base_vq1_16_nb1_packed_nda_b01_scales_dialog():
"""Set of hyperparameters."""
hparams = transformer_base_vq1_16_nb1_packed_nda_b01_scales()
hparams.batch_size = 2048
hparams.max_length = 1024
hparams.filter_size = 3072
return hparams | [
"def",
"transformer_base_vq1_16_nb1_packed_nda_b01_scales_dialog",
"(",
")",
":",
"hparams",
"=",
"transformer_base_vq1_16_nb1_packed_nda_b01_scales",
"(",
")",
"hparams",
".",
"batch_size",
"=",
"2048",
"hparams",
".",
"max_length",
"=",
"1024",
"hparams",
".",
"filter_s... | Set of hyperparameters. | [
"Set",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1718-L1724 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_ada_lmpackedbase_dialog | def transformer_ada_lmpackedbase_dialog():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.max_length = 1024
hparams.ffn_layer = "dense_relu_dense"
hparams.batch_size = 4096
return hparams | python | def transformer_ada_lmpackedbase_dialog():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.max_length = 1024
hparams.ffn_layer = "dense_relu_dense"
hparams.batch_size = 4096
return hparams | [
"def",
"transformer_ada_lmpackedbase_dialog",
"(",
")",
":",
"hparams",
"=",
"transformer_base_vq_ada_32ex_packed",
"(",
")",
"hparams",
".",
"max_length",
"=",
"1024",
"hparams",
".",
"ffn_layer",
"=",
"\"dense_relu_dense\"",
"hparams",
".",
"batch_size",
"=",
"4096"... | Set of hyperparameters. | [
"Set",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1736-L1742 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_v3 | def transformer_base_v3():
"""Base parameters for Transformer model."""
# Update parameters here, then occasionally cut a versioned set, e.g.
# transformer_base_v2.
hparams = transformer_base_v2()
hparams.optimizer_adam_beta2 = 0.997
# New way of specifying learning rate schedule.
# Equivalent to previous... | python | def transformer_base_v3():
"""Base parameters for Transformer model."""
# Update parameters here, then occasionally cut a versioned set, e.g.
# transformer_base_v2.
hparams = transformer_base_v2()
hparams.optimizer_adam_beta2 = 0.997
# New way of specifying learning rate schedule.
# Equivalent to previous... | [
"def",
"transformer_base_v3",
"(",
")",
":",
"# Update parameters here, then occasionally cut a versioned set, e.g.",
"# transformer_base_v2.",
"hparams",
"=",
"transformer_base_v2",
"(",
")",
"hparams",
".",
"optimizer_adam_beta2",
"=",
"0.997",
"# New way of specifying learning r... | Base parameters for Transformer model. | [
"Base",
"parameters",
"for",
"Transformer",
"model",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1754-L1765 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_big | def transformer_big():
"""HParams for transformer big model on WMT."""
hparams = transformer_base()
hparams.hidden_size = 1024
hparams.filter_size = 4096
# Reduce batch size to 2048 from 4096 to be able to train the model on a GPU
# with 12 GB memory. For example, NVIDIA TITAN V GPU.
hparams.batch_size = ... | python | def transformer_big():
"""HParams for transformer big model on WMT."""
hparams = transformer_base()
hparams.hidden_size = 1024
hparams.filter_size = 4096
# Reduce batch size to 2048 from 4096 to be able to train the model on a GPU
# with 12 GB memory. For example, NVIDIA TITAN V GPU.
hparams.batch_size = ... | [
"def",
"transformer_big",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"hidden_size",
"=",
"1024",
"hparams",
".",
"filter_size",
"=",
"4096",
"# Reduce batch size to 2048 from 4096 to be able to train the model on a GPU",
"# with 12 GB memor... | HParams for transformer big model on WMT. | [
"HParams",
"for",
"transformer",
"big",
"model",
"on",
"WMT",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1776-L1786 | train |
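The `transformer_big` record doubles hidden_size to 1024, sets filter_size to 4096, and halves the batch to fit a 12 GB GPU. A rough parameter count for a standard two-matrix Transformer feed-forward block shows why the model roughly quadruples in FFN size when both widths double — the base-model filter size of 2048 used below is an assumption, not shown in these records:

```python
def ffn_params(hidden_size, filter_size):
    # Two weight matrices (hidden->filter, filter->hidden) plus biases.
    return (hidden_size * filter_size + filter_size
            + filter_size * hidden_size + hidden_size)

base_ffn = ffn_params(512, 2048)   # assumed base-sized FFN
big_ffn = ffn_params(1024, 4096)   # transformer_big-sized FFN
```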
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall | def transformer_tall():
"""Hparams for transformer on LM for pretraining/finetuning/mixing."""
hparams = transformer_base()
hparams.batch_size = 2048
hparams.hidden_size = 768
hparams.filter_size = 3072
hparams.num_hidden_layers = 12
hparams.num_heads = 12
hparams.label_smoothing = 0.0
hparams.max_len... | python | def transformer_tall():
"""Hparams for transformer on LM for pretraining/finetuning/mixing."""
hparams = transformer_base()
hparams.batch_size = 2048
hparams.hidden_size = 768
hparams.filter_size = 3072
hparams.num_hidden_layers = 12
hparams.num_heads = 12
hparams.label_smoothing = 0.0
hparams.max_len... | [
"def",
"transformer_tall",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"batch_size",
"=",
"2048",
"hparams",
".",
"hidden_size",
"=",
"768",
"hparams",
".",
"filter_size",
"=",
"3072",
"hparams",
".",
"num_hidden_layers",
"=",
... | Hparams for transformer on LM for pretraining/finetuning/mixing. | [
"Hparams",
"for",
"transformer",
"on",
"LM",
"for",
"pretraining",
"/",
"finetuning",
"/",
"mixing",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1790-L1804 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_finetune_tied | def transformer_tall_finetune_tied():
"""Tied means fine-tune CNN/DM summarization as LM."""
hparams = transformer_tall()
hparams.multiproblem_max_input_length = 750
hparams.multiproblem_max_target_length = 100
hparams.multiproblem_schedule_max_examples = 0
hparams.learning_rate_schedule = ("linear_warmup*c... | python | def transformer_tall_finetune_tied():
"""Tied means fine-tune CNN/DM summarization as LM."""
hparams = transformer_tall()
hparams.multiproblem_max_input_length = 750
hparams.multiproblem_max_target_length = 100
hparams.multiproblem_schedule_max_examples = 0
hparams.learning_rate_schedule = ("linear_warmup*c... | [
"def",
"transformer_tall_finetune_tied",
"(",
")",
":",
"hparams",
"=",
"transformer_tall",
"(",
")",
"hparams",
".",
"multiproblem_max_input_length",
"=",
"750",
"hparams",
".",
"multiproblem_max_target_length",
"=",
"100",
"hparams",
".",
"multiproblem_schedule_max_exam... | Tied means fine-tune CNN/DM summarization as LM. | [
"Tied",
"means",
"fine",
"-",
"tune",
"CNN",
"/",
"DM",
"summarization",
"as",
"LM",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1808-L1823 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_finetune_uniencdec | def transformer_tall_finetune_uniencdec():
"""Fine-tune CNN/DM with a unidirectional encoder and decoder."""
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay"... | python | def transformer_tall_finetune_uniencdec():
"""Fine-tune CNN/DM with a unidirectional encoder and decoder."""
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay"... | [
"def",
"transformer_tall_finetune_uniencdec",
"(",
")",
":",
"hparams",
"=",
"transformer_tall",
"(",
")",
"hparams",
".",
"max_input_seq_length",
"=",
"750",
"hparams",
".",
"max_target_seq_length",
"=",
"100",
"hparams",
".",
"optimizer",
"=",
"\"true_adam\"",
"hp... | Fine-tune CNN/DM with a unidirectional encoder and decoder. | [
"Fine",
"-",
"tune",
"CNN",
"/",
"DM",
"with",
"a",
"unidirectional",
"encoder",
"and",
"decoder",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1846-L1857 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_train_uniencdec | def transformer_tall_train_uniencdec():
"""Train CNN/DM with a unidirectional encoder and decoder."""
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hpa... | python | def transformer_tall_train_uniencdec():
"""Train CNN/DM with a unidirectional encoder and decoder."""
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hpa... | [
"def",
"transformer_tall_train_uniencdec",
"(",
")",
":",
"hparams",
"=",
"transformer_tall",
"(",
")",
"hparams",
".",
"max_input_seq_length",
"=",
"750",
"hparams",
".",
"max_target_seq_length",
"=",
"100",
"hparams",
".",
"optimizer",
"=",
"\"true_adam\"",
"hpara... | Train CNN/DM with a unidirectional encoder and decoder. | [
"Train",
"CNN",
"/",
"DM",
"with",
"a",
"unidirectional",
"encoder",
"and",
"decoder",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1861-L1871 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_finetune_textclass | def transformer_tall_finetune_textclass():
"""Hparams for transformer on LM for finetuning on text class problems."""
hparams = transformer_tall()
hparams.learning_rate_constant = 6.25e-5
hparams.learning_rate_schedule = ("linear_warmup*constant*linear_decay")
hparams.multiproblem_schedule_max_examples = 0
... | python | def transformer_tall_finetune_textclass():
"""Hparams for transformer on LM for finetuning on text class problems."""
hparams = transformer_tall()
hparams.learning_rate_constant = 6.25e-5
hparams.learning_rate_schedule = ("linear_warmup*constant*linear_decay")
hparams.multiproblem_schedule_max_examples = 0
... | [
"def",
"transformer_tall_finetune_textclass",
"(",
")",
":",
"hparams",
"=",
"transformer_tall",
"(",
")",
"hparams",
".",
"learning_rate_constant",
"=",
"6.25e-5",
"hparams",
".",
"learning_rate_schedule",
"=",
"(",
"\"linear_warmup*constant*linear_decay\"",
")",
"hparam... | Hparams for transformer on LM for finetuning on text class problems. | [
"Hparams",
"for",
"transformer",
"on",
"LM",
"for",
"finetuning",
"on",
"text",
"class",
"problems",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1875-L1887 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_pretrain_lm | def transformer_tall_pretrain_lm():
"""Hparams for transformer on LM pretraining (with 64k vocab)."""
hparams = transformer_tall()
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hparams.optimizer = "adam_w"
hparams.optimizer_adam_beta1 = 0.9
hpar... | python | def transformer_tall_pretrain_lm():
"""Hparams for transformer on LM pretraining (with 64k vocab)."""
hparams = transformer_tall()
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hparams.optimizer = "adam_w"
hparams.optimizer_adam_beta1 = 0.9
hpar... | [
"def",
"transformer_tall_pretrain_lm",
"(",
")",
":",
"hparams",
"=",
"transformer_tall",
"(",
")",
"hparams",
".",
"learning_rate_constant",
"=",
"2e-4",
"hparams",
".",
"learning_rate_schedule",
"=",
"(",
"\"linear_warmup*constant*cosdecay\"",
")",
"hparams",
".",
"... | Hparams for transformer on LM pretraining (with 64k vocab). | [
"Hparams",
"for",
"transformer",
"on",
"LM",
"pretraining",
"(",
"with",
"64k",
"vocab",
")",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1891-L1905 | train |
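The `transformer_tall_pretrain_lm` record sets learning_rate_schedule to "linear_warmup*constant*cosdecay". One plausible reading is a product of factors: a linear ramp over the warmup steps, a constant, and a cosine decay to zero. This sketch is a hypothetical interpretation of the `'*'`-separated schedule name, not the library's exact schedule code:

```python
import math

def lr(step, warmup_steps, constant, total_steps):
    # Product of three factors, mirroring the '*'-separated schedule name.
    warmup = min(1.0, step / warmup_steps)
    cosdecay = 0.5 * (1.0 + math.cos(math.pi * min(step / total_steps, 1.0)))
    return warmup * constant * cosdecay

mid = lr(500, 100, 2e-4, 1000)  # past warmup, halfway through decay
```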
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_pretrain_lm_tpu_adafactor | def transformer_tall_pretrain_lm_tpu_adafactor():
"""Hparams for transformer on LM pretraining (with 64k vocab) on TPU."""
hparams = transformer_tall_pretrain_lm()
update_hparams_for_tpu(hparams)
hparams.max_length = 1024
# For multi-problem on TPU we need it in absolute examples.
hparams.batch_size = 8
h... | python | def transformer_tall_pretrain_lm_tpu_adafactor():
"""Hparams for transformer on LM pretraining (with 64k vocab) on TPU."""
hparams = transformer_tall_pretrain_lm()
update_hparams_for_tpu(hparams)
hparams.max_length = 1024
# For multi-problem on TPU we need it in absolute examples.
hparams.batch_size = 8
h... | [
"def",
"transformer_tall_pretrain_lm_tpu_adafactor",
"(",
")",
":",
"hparams",
"=",
"transformer_tall_pretrain_lm",
"(",
")",
"update_hparams_for_tpu",
"(",
"hparams",
")",
"hparams",
".",
"max_length",
"=",
"1024",
"# For multi-problem on TPU we need it in absolute examples.",... | Hparams for transformer on LM pretraining (with 64k vocab) on TPU. | [
"Hparams",
"for",
"transformer",
"on",
"LM",
"pretraining",
"(",
"with",
"64k",
"vocab",
")",
"on",
"TPU",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1909-L1917 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_pretrain_lm_tpu_adafactor_large | def transformer_tall_pretrain_lm_tpu_adafactor_large():
"""Hparams for transformer on LM pretraining on TPU, large model."""
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
hparams.hidden_size = 1024
hparams.num_heads = 16
hparams.filter_size = 32768 # max fitting in 16G memory is 49152, batch 2
hpa... | python | def transformer_tall_pretrain_lm_tpu_adafactor_large():
"""Hparams for transformer on LM pretraining on TPU, large model."""
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
hparams.hidden_size = 1024
hparams.num_heads = 16
hparams.filter_size = 32768 # max fitting in 16G memory is 49152, batch 2
hpa... | [
"def",
"transformer_tall_pretrain_lm_tpu_adafactor_large",
"(",
")",
":",
"hparams",
"=",
"transformer_tall_pretrain_lm_tpu_adafactor",
"(",
")",
"hparams",
".",
"hidden_size",
"=",
"1024",
"hparams",
".",
"num_heads",
"=",
"16",
"hparams",
".",
"filter_size",
"=",
"3... | Hparams for transformer on LM pretraining on TPU, large model. | [
"Hparams",
"for",
"transformer",
"on",
"LM",
"pretraining",
"on",
"TPU",
"large",
"model",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1921-L1931 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tall_pretrain_lm_tpu | def transformer_tall_pretrain_lm_tpu():
"""Hparams for transformer on LM pretraining on TPU with AdamW."""
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
# Optimizer gets reset in update_hparams_for_tpu so we set it again here.
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("l... | python | def transformer_tall_pretrain_lm_tpu():
"""Hparams for transformer on LM pretraining on TPU with AdamW."""
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
# Optimizer gets reset in update_hparams_for_tpu so we set it again here.
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("l... | [
"def",
"transformer_tall_pretrain_lm_tpu",
"(",
")",
":",
"hparams",
"=",
"transformer_tall_pretrain_lm_tpu_adafactor",
"(",
")",
"# Optimizer gets reset in update_hparams_for_tpu so we set it again here.",
"hparams",
".",
"learning_rate_constant",
"=",
"2e-4",
"hparams",
".",
"l... | Hparams for transformer on LM pretraining on TPU with AdamW. | [
"Hparams",
"for",
"transformer",
"on",
"LM",
"pretraining",
"on",
"TPU",
"with",
"AdamW",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1935-L1942 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_single_gpu | def transformer_base_single_gpu():
"""HParams for transformer base model for single GPU."""
hparams = transformer_base()
hparams.batch_size = 1024
hparams.learning_rate_schedule = "constant*linear_warmup*rsqrt_decay"
hparams.learning_rate_constant = 0.1
hparams.learning_rate_warmup_steps = 16000
return hp... | python | def transformer_base_single_gpu():
"""HParams for transformer base model for single GPU."""
hparams = transformer_base()
hparams.batch_size = 1024
hparams.learning_rate_schedule = "constant*linear_warmup*rsqrt_decay"
hparams.learning_rate_constant = 0.1
hparams.learning_rate_warmup_steps = 16000
return hp... | [
"def",
"transformer_base_single_gpu",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"batch_size",
"=",
"1024",
"hparams",
".",
"learning_rate_schedule",
"=",
"\"constant*linear_warmup*rsqrt_decay\"",
"hparams",
".",
"learning_rate_constant",... | HParams for transformer base model for single GPU. | [
"HParams",
"for",
"transformer",
"base",
"model",
"for",
"single",
"GPU",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1963-L1970 | train |
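`transformer_base_single_gpu` switches to a `"constant*linear_warmup*rsqrt_decay"` schedule with a long 16k-step warmup. A hedged sketch of that composition, with `rsqrt_decay` clamped at the warmup horizon as is commonly done, so the peak rate lands exactly at the end of warmup:

```python
def single_gpu_lr(step, constant=0.1, warmup_steps=16000):
    # Illustrative sketch: linear warmup multiplied by an inverse-sqrt
    # decay that is clamped at warmup_steps.
    linear_warmup = min(1.0, step / warmup_steps)
    rsqrt_decay = max(step, warmup_steps) ** -0.5
    return constant * linear_warmup * rsqrt_decay
```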
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_parsing_base | def transformer_parsing_base():
"""HParams for parsing on WSJ only."""
hparams = transformer_base()
hparams.attention_dropout = 0.2
hparams.layer_prepostprocess_dropout = 0.2
hparams.max_length = 512
hparams.learning_rate_warmup_steps = 16000
hparams.hidden_size = 1024
hparams.learning_rate = 0.05
hpa... | python | def transformer_parsing_base():
"""HParams for parsing on WSJ only."""
hparams = transformer_base()
hparams.attention_dropout = 0.2
hparams.layer_prepostprocess_dropout = 0.2
hparams.max_length = 512
hparams.learning_rate_warmup_steps = 16000
hparams.hidden_size = 1024
hparams.learning_rate = 0.05
hpa... | [
"def",
"transformer_parsing_base",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"attention_dropout",
"=",
"0.2",
"hparams",
".",
"layer_prepostprocess_dropout",
"=",
"0.2",
"hparams",
".",
"max_length",
"=",
"512",
"hparams",
".",
... | HParams for parsing on WSJ only. | [
"HParams",
"for",
"parsing",
"on",
"WSJ",
"only",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1983-L1993 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_parsing_big | def transformer_parsing_big():
"""HParams for parsing on WSJ semi-supervised."""
hparams = transformer_big()
hparams.max_length = 512
hparams.shared_source_target_embedding = False
hparams.learning_rate_warmup_steps = 4000
hparams.layer_prepostprocess_dropout = 0.1
hparams.batch_size = 2048
hparams.lear... | python | def transformer_parsing_big():
"""HParams for parsing on WSJ semi-supervised."""
hparams = transformer_big()
hparams.max_length = 512
hparams.shared_source_target_embedding = False
hparams.learning_rate_warmup_steps = 4000
hparams.layer_prepostprocess_dropout = 0.1
hparams.batch_size = 2048
hparams.lear... | [
"def",
"transformer_parsing_big",
"(",
")",
":",
"hparams",
"=",
"transformer_big",
"(",
")",
"hparams",
".",
"max_length",
"=",
"512",
"hparams",
".",
"shared_source_target_embedding",
"=",
"False",
"hparams",
".",
"learning_rate_warmup_steps",
"=",
"4000",
"hparam... | HParams for parsing on WSJ semi-supervised. | [
"HParams",
"for",
"parsing",
"on",
"WSJ",
"semi",
"-",
"supervised",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L1997-L2006 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_base_range | def transformer_base_range(rhp):
"""Small range of hyperparameters."""
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000, 8000, 16000])
rhp.set_float("... | python | def transformer_base_range(rhp):
"""Small range of hyperparameters."""
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000, 8000, 16000])
rhp.set_float("... | [
"def",
"transformer_base_range",
"(",
"rhp",
")",
":",
"# After starting from base, set intervals for some parameters.",
"rhp",
".",
"set_float",
"(",
"\"learning_rate\"",
",",
"0.3",
",",
"3.0",
",",
"scale",
"=",
"rhp",
".",
"LOG_SCALE",
")",
"rhp",
".",
"set_disc... | Small range of hyperparameters. | [
"Small",
"range",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2250-L2259 | train |
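`transformer_base_range` registers the learning-rate interval with `scale=rhp.LOG_SCALE`. A sketch of what log-scale sampling implies, drawing uniformly in log space rather than linear space; the helper name is invented for illustration:

```python
import math
import random

def sample_log_scale(low, high, rng=None):
    # Uniform in log space: the interval [0.3, 3.0] gives as much
    # probability mass below its geometric mean (~0.95) as above it.
    rng = rng or random.Random(0)
    return math.exp(rng.uniform(math.log(low), math.log(high)))
```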
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_relative | def transformer_relative():
"""Use relative position embeddings instead of absolute position encodings."""
hparams = transformer_base()
hparams.pos = None
hparams.self_attention_type = "dot_product_relative"
hparams.max_relative_position = 20
return hparams | python | def transformer_relative():
"""Use relative position embeddings instead of absolute position encodings."""
hparams = transformer_base()
hparams.pos = None
hparams.self_attention_type = "dot_product_relative"
hparams.max_relative_position = 20
return hparams | [
"def",
"transformer_relative",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"pos",
"=",
"None",
"hparams",
".",
"self_attention_type",
"=",
"\"dot_product_relative\"",
"hparams",
".",
"max_relative_position",
"=",
"20",
"return",
"h... | Use relative position embeddings instead of absolute position encodings. | [
"Use",
"relative",
"position",
"embeddings",
"instead",
"of",
"absolute",
"position",
"encodings",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2263-L2269 | train |
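`transformer_relative` sets `max_relative_position = 20` for `"dot_product_relative"` attention. In relative-position attention of this kind, the pairwise offset between query and key positions is clipped so that only a bounded table of relative embeddings is learned; a small sketch of the clipping:

```python
def clip_relative_position(query_pos, key_pos, max_relative_position=20):
    # Offsets beyond +/- max_relative_position share the boundary
    # embedding, so only 2 * max_relative_position + 1 vectors are needed.
    offset = key_pos - query_pos
    return max(-max_relative_position, min(max_relative_position, offset))
```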
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_mlperf_tpu | def transformer_mlperf_tpu():
"""HParams for Transformer model on TPU for MLPerf on TPU 2x2."""
hparams = transformer_base_v3()
hparams.mlperf_mode = True
hparams.symbol_modality_num_shards = 1
hparams.max_length = 256 # ignored when using "_packed" problems
hparams.batch_size = 2048 # per-chip batch size... | python | def transformer_mlperf_tpu():
"""HParams for Transformer model on TPU for MLPerf on TPU 2x2."""
hparams = transformer_base_v3()
hparams.mlperf_mode = True
hparams.symbol_modality_num_shards = 1
hparams.max_length = 256 # ignored when using "_packed" problems
hparams.batch_size = 2048 # per-chip batch size... | [
"def",
"transformer_mlperf_tpu",
"(",
")",
":",
"hparams",
"=",
"transformer_base_v3",
"(",
")",
"hparams",
".",
"mlperf_mode",
"=",
"True",
"hparams",
".",
"symbol_modality_num_shards",
"=",
"1",
"hparams",
".",
"max_length",
"=",
"256",
"# ignored when using \"_pa... | HParams for Transformer model on TPU for MLPerf on TPU 2x2. | [
"HParams",
"for",
"Transformer",
"model",
"on",
"TPU",
"for",
"MLPerf",
"on",
"TPU",
"2x2",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2300-L2313 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | update_hparams_for_tpu | def update_hparams_for_tpu(hparams):
"""Change hparams to be compatible with TPU training."""
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_schedule = "rsqrt_decay"
hparams.learning_rate_warmup_... | python | def update_hparams_for_tpu(hparams):
"""Change hparams to be compatible with TPU training."""
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_schedule = "rsqrt_decay"
hparams.learning_rate_warmup_... | [
"def",
"update_hparams_for_tpu",
"(",
"hparams",
")",
":",
"# Adafactor uses less memory than Adam.",
"# switch to Adafactor with its recommended learning rate scheme.",
"hparams",
".",
"optimizer",
"=",
"\"Adafactor\"",
"hparams",
".",
"learning_rate_schedule",
"=",
"\"rsqrt_decay... | Change hparams to be compatible with TPU training. | [
"Change",
"hparams",
"to",
"be",
"compatible",
"with",
"TPU",
"training",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2316-L2351 | train |
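`update_hparams_for_tpu` illustrates the T2T convention of mutating an hparams object in place rather than constructing a new one. A minimal stand-in of that pattern; the `HParams` class here is a toy attribute bag, not the TensorFlow implementation:

```python
class HParams:
    """Toy stand-in for tf.contrib.training.HParams, for illustration only."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def update_for_tpu(hparams):
    # Mirrors the intent of update_hparams_for_tpu: swap the optimizer to
    # Adafactor and use its recommended rsqrt_decay schedule, in place.
    hparams.optimizer = "Adafactor"
    hparams.learning_rate_schedule = "rsqrt_decay"
    return hparams

base = HParams(optimizer="adam", learning_rate_schedule="legacy")
tpu = update_for_tpu(base)
```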
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tpu_range | def transformer_tpu_range(rhp):
"""Small range of hyperparameters."""
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000, 8000, 16000])
rhp.set_float("i... | python | def transformer_tpu_range(rhp):
"""Small range of hyperparameters."""
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000, 8000, 16000])
rhp.set_float("i... | [
"def",
"transformer_tpu_range",
"(",
"rhp",
")",
":",
"# After starting from base, set intervals for some parameters.",
"rhp",
".",
"set_float",
"(",
"\"learning_rate\"",
",",
"0.3",
",",
"3.0",
",",
"scale",
"=",
"rhp",
".",
"LOG_SCALE",
")",
"rhp",
".",
"set_discr... | Small range of hyperparameters. | [
"Small",
"range",
"of",
"hyperparameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2416-L2425 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_clean | def transformer_clean():
"""No dropout, label smoothing, max_length."""
hparams = transformer_base_v2()
hparams.label_smoothing = 0.0
hparams.layer_prepostprocess_dropout = 0.0
hparams.attention_dropout = 0.0
hparams.relu_dropout = 0.0
hparams.max_length = 0
return hparams | python | def transformer_clean():
"""No dropout, label smoothing, max_length."""
hparams = transformer_base_v2()
hparams.label_smoothing = 0.0
hparams.layer_prepostprocess_dropout = 0.0
hparams.attention_dropout = 0.0
hparams.relu_dropout = 0.0
hparams.max_length = 0
return hparams | [
"def",
"transformer_clean",
"(",
")",
":",
"hparams",
"=",
"transformer_base_v2",
"(",
")",
"hparams",
".",
"label_smoothing",
"=",
"0.0",
"hparams",
".",
"layer_prepostprocess_dropout",
"=",
"0.0",
"hparams",
".",
"attention_dropout",
"=",
"0.0",
"hparams",
".",
... | No dropout, label smoothing, max_length. | [
"No",
"dropout",
"label",
"smoothing",
"max_length",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2441-L2449 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_lm_tpu_0 | def transformer_lm_tpu_0():
"""HParams for training languagemodel_lm1b8k on tpu. 92M Params."""
hparams = transformer_clean_big()
update_hparams_for_tpu(hparams)
hparams.num_heads = 4 # Heads are expensive on TPUs.
hparams.batch_size = 4096
hparams.shared_embedding_and_softmax_weights = False
hparams.la... | python | def transformer_lm_tpu_0():
"""HParams for training languagemodel_lm1b8k on tpu. 92M Params."""
hparams = transformer_clean_big()
update_hparams_for_tpu(hparams)
hparams.num_heads = 4 # Heads are expensive on TPUs.
hparams.batch_size = 4096
hparams.shared_embedding_and_softmax_weights = False
hparams.la... | [
"def",
"transformer_lm_tpu_0",
"(",
")",
":",
"hparams",
"=",
"transformer_clean_big",
"(",
")",
"update_hparams_for_tpu",
"(",
"hparams",
")",
"hparams",
".",
"num_heads",
"=",
"4",
"# Heads are expensive on TPUs.",
"hparams",
".",
"batch_size",
"=",
"4096",
"hpara... | HParams for training languagemodel_lm1b8k on tpu. 92M Params. | [
"HParams",
"for",
"training",
"languagemodel_lm1b8k",
"on",
"tpu",
".",
"92M",
"Params",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2477-L2485 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_librispeech_v1 | def transformer_librispeech_v1():
"""HParams for training ASR model on LibriSpeech V1."""
hparams = transformer_base()
hparams.num_heads = 4
hparams.filter_size = 1024
hparams.hidden_size = 256
hparams.num_encoder_layers = 5
hparams.num_decoder_layers = 3
hparams.learning_rate = 0.15
hparams.batch_si... | python | def transformer_librispeech_v1():
"""HParams for training ASR model on LibriSpeech V1."""
hparams = transformer_base()
hparams.num_heads = 4
hparams.filter_size = 1024
hparams.hidden_size = 256
hparams.num_encoder_layers = 5
hparams.num_decoder_layers = 3
hparams.learning_rate = 0.15
hparams.batch_si... | [
"def",
"transformer_librispeech_v1",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"num_heads",
"=",
"4",
"hparams",
".",
"filter_size",
"=",
"1024",
"hparams",
".",
"hidden_size",
"=",
"256",
"hparams",
".",
"num_encoder_layers",
... | HParams for training ASR model on LibriSpeech V1. | [
"HParams",
"for",
"training",
"ASR",
"model",
"on",
"LibriSpeech",
"V1",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2498-L2511 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_librispeech_v2 | def transformer_librispeech_v2():
"""HParams for training ASR model on LibriSpeech V2."""
hparams = transformer_base()
hparams.max_length = 1240000
hparams.max_input_seq_length = 1550
hparams.max_target_seq_length = 350
hparams.batch_size = 16
hparams.num_decoder_layers = 4
hparams.num_encoder_layers =... | python | def transformer_librispeech_v2():
"""HParams for training ASR model on LibriSpeech V2."""
hparams = transformer_base()
hparams.max_length = 1240000
hparams.max_input_seq_length = 1550
hparams.max_target_seq_length = 350
hparams.batch_size = 16
hparams.num_decoder_layers = 4
hparams.num_encoder_layers =... | [
"def",
"transformer_librispeech_v2",
"(",
")",
":",
"hparams",
"=",
"transformer_base",
"(",
")",
"hparams",
".",
"max_length",
"=",
"1240000",
"hparams",
".",
"max_input_seq_length",
"=",
"1550",
"hparams",
".",
"max_target_seq_length",
"=",
"350",
"hparams",
"."... | HParams for training ASR model on LibriSpeech V2. | [
"HParams",
"for",
"training",
"ASR",
"model",
"on",
"LibriSpeech",
"V2",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2515-L2536 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_librispeech_tpu_v1 | def transformer_librispeech_tpu_v1():
"""HParams for training ASR model on Librispeech on TPU v1."""
hparams = transformer_librispeech_v1()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
return hparams | python | def transformer_librispeech_tpu_v1():
"""HParams for training ASR model on Librispeech on TPU v1."""
hparams = transformer_librispeech_v1()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
return hparams | [
"def",
"transformer_librispeech_tpu_v1",
"(",
")",
":",
"hparams",
"=",
"transformer_librispeech_v1",
"(",
")",
"update_hparams_for_tpu",
"(",
"hparams",
")",
"hparams",
".",
"batch_size",
"=",
"16",
"librispeech",
".",
"set_librispeech_length_hparams",
"(",
"hparams",
... | HParams for training ASR model on Librispeech on TPU v1. | [
"HParams",
"for",
"training",
"ASR",
"model",
"on",
"Librispeech",
"on",
"TPU",
"v1",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2540-L2547 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_librispeech_tpu_v2 | def transformer_librispeech_tpu_v2():
"""HParams for training ASR model on Librispeech on TPU v2."""
hparams = transformer_librispeech_v2()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
return hparams | python | def transformer_librispeech_tpu_v2():
"""HParams for training ASR model on Librispeech on TPU v2."""
hparams = transformer_librispeech_v2()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
return hparams | [
"def",
"transformer_librispeech_tpu_v2",
"(",
")",
":",
"hparams",
"=",
"transformer_librispeech_v2",
"(",
")",
"update_hparams_for_tpu",
"(",
"hparams",
")",
"hparams",
".",
"batch_size",
"=",
"16",
"librispeech",
".",
"set_librispeech_length_hparams",
"(",
"hparams",
... | HParams for training ASR model on Librispeech on TPU v2. | [
"HParams",
"for",
"training",
"ASR",
"model",
"on",
"Librispeech",
"on",
"TPU",
"v2",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2551-L2558 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_tpu_1b | def transformer_tpu_1b():
"""Hparams for machine translation with ~1.1B parameters."""
hparams = transformer_tpu()
hparams.hidden_size = 2048
hparams.filter_size = 8192
hparams.num_hidden_layers = 8
# smaller batch size to avoid OOM
hparams.batch_size = 1024
hparams.activation_dtype = "bfloat16"
hpara... | python | def transformer_tpu_1b():
"""Hparams for machine translation with ~1.1B parameters."""
hparams = transformer_tpu()
hparams.hidden_size = 2048
hparams.filter_size = 8192
hparams.num_hidden_layers = 8
# smaller batch size to avoid OOM
hparams.batch_size = 1024
hparams.activation_dtype = "bfloat16"
hpara... | [
"def",
"transformer_tpu_1b",
"(",
")",
":",
"hparams",
"=",
"transformer_tpu",
"(",
")",
"hparams",
".",
"hidden_size",
"=",
"2048",
"hparams",
".",
"filter_size",
"=",
"8192",
"hparams",
".",
"num_hidden_layers",
"=",
"8",
"# smaller batch size to avoid OOM",
"hp... | Hparams for machine translation with ~1.1B parameters. | [
"Hparams",
"for",
"machine",
"translation",
"with",
"~1",
".",
"1B",
"parameters",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2599-L2611 | train |
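`transformer_tpu_1b` advertises roughly 1.1B parameters from `hidden_size = 2048`, `filter_size = 8192`, and 8 layers. A back-of-the-envelope check; the per-layer formulas and the 32k vocabulary are rough assumptions, so this only confirms the order of magnitude:

```python
def approx_transformer_params(hidden, filter_size, layers, vocab=2**15):
    # Rough counts: self-attention ~4*h^2 (Q, K, V, output projections),
    # feed-forward ~2*h*f; decoder layers carry one extra attention block.
    attention = 4 * hidden * hidden
    ffn = 2 * hidden * filter_size
    encoder = layers * (attention + ffn)
    decoder = layers * (2 * attention + ffn)
    embeddings = vocab * hidden
    return encoder + decoder + embeddings

estimate = approx_transformer_params(2048, 8192, 8)
```

With these assumptions the estimate lands near 1.0B, consistent with the "~1.1B parameters" in the docstring once biases and layer norms are included.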
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_wikitext103_l4k_v0 | def transformer_wikitext103_l4k_v0():
"""HParams for training languagemodel_wikitext103_l4k."""
hparams = transformer_big()
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_schedule = "rsqrt_decay"... | python | def transformer_wikitext103_l4k_v0():
"""HParams for training languagemodel_wikitext103_l4k."""
hparams = transformer_big()
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_schedule = "rsqrt_decay"... | [
"def",
"transformer_wikitext103_l4k_v0",
"(",
")",
":",
"hparams",
"=",
"transformer_big",
"(",
")",
"# Adafactor uses less memory than Adam.",
"# switch to Adafactor with its recommended learning rate scheme.",
"hparams",
".",
"optimizer",
"=",
"\"Adafactor\"",
"hparams",
".",
... | HParams for training languagemodel_wikitext103_l4k. | [
"HParams",
"for",
"training",
"languagemodel_wikitext103_l4k",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2615-L2645 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_wikitext103_l4k_memory_v0 | def transformer_wikitext103_l4k_memory_v0():
"""HParams for training languagemodel_wikitext103_l4k with memory."""
hparams = transformer_wikitext103_l4k_v0()
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = 64
hparams.split_targets_strided_training = True
hparams.add_hparam("memor... | python | def transformer_wikitext103_l4k_memory_v0():
"""HParams for training languagemodel_wikitext103_l4k with memory."""
hparams = transformer_wikitext103_l4k_v0()
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = 64
hparams.split_targets_strided_training = True
hparams.add_hparam("memor... | [
"def",
"transformer_wikitext103_l4k_memory_v0",
"(",
")",
":",
"hparams",
"=",
"transformer_wikitext103_l4k_v0",
"(",
")",
"hparams",
".",
"split_targets_chunk_length",
"=",
"64",
"hparams",
".",
"split_targets_max_chunks",
"=",
"64",
"hparams",
".",
"split_targets_stride... | HParams for training languagemodel_wikitext103_l4k with memory. | [
"HParams",
"for",
"training",
"languagemodel_wikitext103_l4k",
"with",
"memory",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2649-L2673 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_wikitext103_l16k_memory_v0 | def transformer_wikitext103_l16k_memory_v0():
"""HParams for training languagemodel_wikitext103_l16k with memory."""
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.max_length = 16384
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = int(
hparams.max_length / hparams.... | python | def transformer_wikitext103_l16k_memory_v0():
"""HParams for training languagemodel_wikitext103_l16k with memory."""
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.max_length = 16384
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = int(
hparams.max_length / hparams.... | [
"def",
"transformer_wikitext103_l16k_memory_v0",
"(",
")",
":",
"hparams",
"=",
"transformer_wikitext103_l4k_memory_v0",
"(",
")",
"hparams",
".",
"max_length",
"=",
"16384",
"hparams",
".",
"split_targets_chunk_length",
"=",
"64",
"hparams",
".",
"split_targets_max_chunk... | HParams for training languagemodel_wikitext103_l16k with memory. | [
"HParams",
"for",
"training",
"languagemodel_wikitext103_l16k",
"with",
"memory",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2677-L2694 | train |
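The `split_targets_*` hparams in the records above cut a long target sequence into fixed-length chunks; with `max_length = 16384` and `split_targets_chunk_length = 64`, that yields 256 chunks, matching the `max_length / chunk_length` arithmetic in the code. A sketch of the chunking:

```python
def split_into_chunks(seq, chunk_length):
    # Non-overlapping fixed-length chunks; the last chunk may be shorter
    # if the sequence length is not a multiple of chunk_length.
    return [seq[i:i + chunk_length] for i in range(0, len(seq), chunk_length)]

chunks = split_into_chunks(list(range(16384)), 64)
```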
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_cifar10_memory_v0 | def transformer_cifar10_memory_v0():
"""HParams for training image_cifar10_plain_gen_flat_rev with memory."""
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.num_hidden_layers = 6
hparams.max_length = 32 * 32 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_targets_max_chunks = int(... | python | def transformer_cifar10_memory_v0():
"""HParams for training image_cifar10_plain_gen_flat_rev with memory."""
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.num_hidden_layers = 6
hparams.max_length = 32 * 32 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_targets_max_chunks = int(... | [
"def",
"transformer_cifar10_memory_v0",
"(",
")",
":",
"hparams",
"=",
"transformer_wikitext103_l4k_memory_v0",
"(",
")",
"hparams",
".",
"num_hidden_layers",
"=",
"6",
"hparams",
".",
"max_length",
"=",
"32",
"*",
"32",
"*",
"3",
"hparams",
".",
"split_targets_ch... | HParams for training image_cifar10_plain_gen_flat_rev with memory. | [
"HParams",
"for",
"training",
"image_cifar10_plain_gen_flat_rev",
"with",
"memory",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2698-L2721 | train |
tensorflow/tensor2tensor | tensor2tensor/models/transformer.py | transformer_imagenet64_memory_v0 | def transformer_imagenet64_memory_v0():
"""HParams for training image_imagenet64_gen_flat_rev with memory."""
hparams = transformer_cifar10_memory_v0()
hparams.max_length = 64 * 64 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_targets_max_chunks = int(
hparams.max_length / hparams.split... | python | def transformer_imagenet64_memory_v0():
"""HParams for training image_imagenet64_gen_flat_rev with memory."""
hparams = transformer_cifar10_memory_v0()
hparams.max_length = 64 * 64 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_targets_max_chunks = int(
hparams.max_length / hparams.split... | [
"def",
"transformer_imagenet64_memory_v0",
"(",
")",
":",
"hparams",
"=",
"transformer_cifar10_memory_v0",
"(",
")",
"hparams",
".",
"max_length",
"=",
"64",
"*",
"64",
"*",
"3",
"hparams",
".",
"split_targets_chunk_length",
"=",
"64",
"*",
"3",
"hparams",
".",
... | HParams for training image_imagenet64_gen_flat_rev with memory. | [
"HParams",
"for",
"training",
"image_imagenet64_gen_flat_rev",
"with",
"memory",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/models/transformer.py#L2725-L2745 | train |
tensorflow/tensor2tensor | tensor2tensor/layers/common_image_attention.py | maybe_reshape_4d_to_3d | def maybe_reshape_4d_to_3d(x):
"""Reshape input from 4D to 3D if necessary."""
x_shape = common_layers.shape_list(x)
is_4d = False
if len(x_shape) == 4:
x = tf.reshape(x, [x_shape[0], x_shape[1]*x_shape[2], x_shape[3]])
is_4d = True
return x, x_shape, is_4d | python | def maybe_reshape_4d_to_3d(x):
"""Reshape input from 4D to 3D if necessary."""
x_shape = common_layers.shape_list(x)
is_4d = False
if len(x_shape) == 4:
x = tf.reshape(x, [x_shape[0], x_shape[1]*x_shape[2], x_shape[3]])
is_4d = True
return x, x_shape, is_4d | [
"def",
"maybe_reshape_4d_to_3d",
"(",
"x",
")",
":",
"x_shape",
"=",
"common_layers",
".",
"shape_list",
"(",
"x",
")",
"is_4d",
"=",
"False",
"if",
"len",
"(",
"x_shape",
")",
"==",
"4",
":",
"x",
"=",
"tf",
".",
"reshape",
"(",
"x",
",",
"[",
"x_... | Reshape input from 4D to 3D if necessary. | [
"Reshape",
"input",
"from",
"4D",
"to",
"3D",
"if",
"necessary",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_image_attention.py#L72-L79 | train |
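`maybe_reshape_4d_to_3d` flattens the two spatial axes of a `[batch, height, width, depth]` tensor into `[batch, height*width, depth]` and remembers whether it did so. The same idea on plain nested lists, to make the shape bookkeeping concrete without TensorFlow:

```python
def flatten_spatial(x):
    # [batch][height][width][depth] -> [batch][height*width][depth].
    # A 3-level-deep input is passed through unchanged, like the TF helper.
    is_4d = bool(x) and isinstance(x[0][0][0], list)
    if is_4d:
        x = [[cell for row in example for cell in row] for example in x]
    return x, is_4d
```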
tensorflow/tensor2tensor | tensor2tensor/layers/common_image_attention.py | local_attention_2d | def local_attention_2d(x, hparams, attention_type="local_attention_2d"):
"""Local 2d, self attention layer."""
# self-attention
with tf.variable_scope("local_2d_self_att"):
y = common_attention.multihead_attention_2d(
x,
None,
hparams.attention_key_channels or hparams.hidden_size,
... | python | def local_attention_2d(x, hparams, attention_type="local_attention_2d"):
"""Local 2d, self attention layer."""
# self-attention
with tf.variable_scope("local_2d_self_att"):
y = common_attention.multihead_attention_2d(
x,
None,
hparams.attention_key_channels or hparams.hidden_size,
... | [
"def",
"local_attention_2d",
"(",
"x",
",",
"hparams",
",",
"attention_type",
"=",
"\"local_attention_2d\"",
")",
":",
"# self-attention",
"with",
"tf",
".",
"variable_scope",
"(",
"\"local_2d_self_att\"",
")",
":",
"y",
"=",
"common_attention",
".",
"multihead_atte... | Local 2d, self attention layer. | [
"Local",
"2d",
"self",
"attention",
"layer",
"."
] | 272500b6efe353aeb638d2745ed56e519462ca31 | https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_image_attention.py#L82-L97 | train |
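`local_attention_2d` restricts self-attention to 2-D neighborhoods. A sketch of the underlying partitioning, cutting the feature map into non-overlapping query blocks; the block shape is illustrative, and the real layer additionally attends to a memory flange around each block:

```python
def query_blocks_2d(height, width, block_h, block_w):
    # Top-left corners of the non-overlapping query blocks.
    assert height % block_h == 0 and width % block_w == 0
    return [(r, c)
            for r in range(0, height, block_h)
            for c in range(0, width, block_w)]

corners = query_blocks_2d(32, 32, 8, 16)
```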
repo: tensorflow/tensor2tensor
path: tensor2tensor/layers/common_image_attention.py
func_name: local_within_block_attention
language: python
docstring: Local within block self attention.
code (truncated):
  def local_within_block_attention(x,
                                   self_attention_bias,
                                   hparams,
                                   attention_type="local_within_block_mask_right",
                                   q_padding="VALID",
                                   kv_padding="VALID"):
    """Local within block self attention."""
    ...
sha: 272500b6efe353aeb638d2745ed56e519462ca31
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_image_attention.py#L100-L128
partition: train
repo: tensorflow/tensor2tensor
path: tensor2tensor/layers/common_image_attention.py
func_name: local_attention_1d
language: python
docstring: Local 1d self attention.
code (truncated):
  def local_attention_1d(x,
                         hparams,
                         attention_type="local_unmasked",
                         q_padding="VALID",
                         kv_padding="VALID"):
    """Local 1d self attention."""
    # self-attention
    x, x_shape, is_4d = maybe_reshape_4d_to_3d(x)
    with tf.variable_s...
sha: 272500b6efe353aeb638d2745ed56e519462ca31
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_image_attention.py#L131-L161
partition: train
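`local_attention_1d` restricts self-attention to fixed-length windows of the (possibly flattened) sequence. A self-contained NumPy sketch of the unmasked block-local pattern — an assumed simplification with a hypothetical helper name; the real function delegates to tensor2tensor's multihead attention utilities rather than computing attention directly:

```python
import numpy as np

def block_local_attention_1d(q, k, v, block_length):
    # Split the sequence into non-overlapping blocks of `block_length`
    # and let each position attend only within its own block
    # (assumes length divides evenly by block_length).
    batch, length, depth = q.shape
    n = length // block_length
    qb = q.reshape(batch, n, block_length, depth)
    kb = k.reshape(batch, n, block_length, depth)
    vb = v.reshape(batch, n, block_length, depth)
    logits = np.einsum("bnqd,bnkd->bnqk", qb, kb) / np.sqrt(depth)
    # numerically stable softmax over the key axis
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.einsum("bnqk,bnkd->bnqd", weights, vb)
    return out.reshape(batch, length, depth)

x = np.random.randn(2, 8, 4)
y = block_local_attention_1d(x, x, x, block_length=4)
print(y.shape)  # (2, 8, 4)
```

The cost per block is quadratic only in `block_length`, so the total cost grows linearly in sequence length — the point of local attention over images flattened to long sequences.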
repo: tensorflow/tensor2tensor
path: tensor2tensor/layers/common_image_attention.py
func_name: get_dilated_1d_attention_mask
language: python
docstring: Dilated attention with a masking strategy.
code (truncated):
  def get_dilated_1d_attention_mask(
      num_heads, block_size,
      num_blocks, memory_size, gap_size,
      name="dilated_mask"):
    """Dilated attention with a masking strategy."""
    mask = np.ones((num_heads, block_size, 2*block_size), np.bool)
    # now going over every row to do the right assignment of
    # memory blocks...
sha: 272500b6efe353aeb638d2745ed56e519462ca31
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_image_attention.py#L164-L187
partition: train
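The snippet above starts from an all-`True` mask of shape `[num_heads, block_size, 2 * block_size]` (memory block concatenated with the current block) and then opens up allowed positions row by row; the row-assignment logic is truncated in the record. A hedged NumPy reconstruction of the general idea — each query row sees `memory_size` positions that sit `gap_size` steps behind it — with the caveat that the exact assignment in tensor2tensor may differ:

```python
import numpy as np

def dilated_1d_mask(num_heads, block_size, memory_size, gap_size):
    # True = attention disallowed, False = allowed, matching the
    # all-ones starting point in the original snippet. Assumed
    # simplification: query row i (position block_size + i on the
    # concatenated [memory, current] axis) attends to a window of
    # `memory_size` positions ending `gap_size` steps behind it.
    mask = np.ones((num_heads, block_size, 2 * block_size), dtype=bool)
    for i in range(block_size):
        end = block_size + i - gap_size       # one past last visible slot
        start = max(0, end - memory_size)
        if end > start:
            mask[:, i, start:end] = False
    return mask

m = dilated_1d_mask(num_heads=2, block_size=4, memory_size=3, gap_size=1)
print(m.shape)  # (2, 4, 8)
```

Skipping `gap_size` positions is what makes the pattern dilated: stacking layers with different gaps lets the receptive field grow much faster than with contiguous local windows.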
repo: tensorflow/tensor2tensor
path: tensor2tensor/layers/common_image_attention.py
func_name: dilated_attention_1d
language: python
docstring: Dilated 1d self attention.
code (truncated):
  def dilated_attention_1d(x,
                           hparams,
                           attention_type="masked_dilated_1d",
                           q_padding="VALID",
                           kv_padding="VALID",
                           gap_size=2):
    """Dilated 1d self attention."""
    # self-attention
    x, x_shape, is...
sha: 272500b6efe353aeb638d2745ed56e519462ca31
url: https://github.com/tensorflow/tensor2tensor/blob/272500b6efe353aeb638d2745ed56e519462ca31/tensor2tensor/layers/common_image_attention.py#L190-L222
partition: train