| Code | Summary |
|---|---|
Please provide a description of the function:def van_image_enc_2d(x, first_depth, reuse=False, hparams=None):
with tf.variable_scope('van_image_enc', reuse=reuse):
enc_history = [x]
enc = tf.layers.conv2d(
x, first_depth, 3, padding='same', activation=tf.nn.relu, strides=1)
enc = tf.contrib.la... | [
"The image encoder for the VAN.\n\n Similar architecture as Ruben's paper\n (http://proceedings.mlr.press/v70/villegas17a/villegas17a.pdf).\n\n Args:\n x: The image to encode.\n first_depth: The depth of the first layer. Depth is increased in subsequent\n layers.\n reuse: To reuse in variable scope... |
Please provide a description of the function:def van_enc_2d(x, first_depth, reuse=False):
with tf.variable_scope('van_enc', reuse=reuse):
a = 4 # depends on the input size
b = 4
# a, b = 4,4
enc = tf.nn.relu(x)
enc = tf.layers.dense(enc, first_depth * a * b, tf.nn.relu)
enc = tf.contrib.l... | [
"The higher level structure encoder for the VAN.\n\n The high level structure is a vector instead of an image.\n\n Args:\n x: The higher level structure to encode.\n first_depth: The depth of the first layer. Depth is increased in subsequent\n layers.\n reuse: To reuse in variable scope or not.\n\n ... |
Please provide a description of the function:def van_dec_2d(x, skip_connections, output_shape, first_depth, hparams=None):
with tf.variable_scope('van_dec'):
dec = tf.layers.conv2d_transpose(
x, first_depth * 4, 3, padding='same', activation=tf.nn.relu, strides=2)
dec = tf.nn.dropout(dec, hparams.v... | [
"The VAN decoder.\n\n Args:\n x: The analogy information to decode.\n skip_connections: The encoder layers which can be used as skip connections.\n output_shape: The shape of the desired output image.\n first_depth: The depth of the first layer of the van image encoder.\n hparams: The python hparams... |
Please provide a description of the function:def analogy_computation_2d(f_first_enc,
f_first_frame,
f_current_enc,
first_depth):
with tf.variable_scope('analogy_computation'):
frame_enc_diff = f_first_frame - f_first_enc
fra... | [
"Implements the deep analogy computation."
] |
Please provide a description of the function:def van(first_enc,
first_frame,
current_enc,
gt_image,
reuse=False,
scope_prefix='',
hparams=None):
with tf.variable_scope(scope_prefix + 'van', reuse=reuse):
output_shape = first_frame.get_shape().as_list()
output... | [
"Implements a VAN.\n\n Args:\n first_enc: The first encoding.\n first_frame: The first ground truth frame.\n current_enc: The encoding of the frame to generate.\n gt_image: The ground truth image, only used for regularization.\n reuse: To reuse in variable scope or not.\n scope_prefix: The prefix... |
Please provide a description of the function:def encoder_vgg(x, enc_final_size, reuse=False, scope_prefix='', hparams=None,
is_training=True):
with tf.variable_scope(scope_prefix + 'encoder', reuse=reuse):
# Preprocess input
x *= 256
x = x - COLOR_NORMALIZATION_VECTOR
with arg_sco... | [
"VGG network to use as encoder without the top few layers.\n\n Can be pretrained.\n\n Args:\n x: The image to encode. In the range 0 to 1.\n enc_final_size: The desired size of the encoding.\n reuse: To reuse in variable scope or not.\n scope_prefix: The prefix before the scope name.\n hparams: The... |
Please provide a description of the function:def predictor(enc_flat,
action,
lstm_states,
pred_depth,
reuse=False,
scope_prefix='',
hparams=None):
with tf.variable_scope(scope_prefix + 'predict', reuse=reuse):
enc_final_size =... | [
"LSTM predictor network."
] |
Please provide a description of the function:def construct_model(images,
actions=None,
context_frames=2,
hparams=None,
is_training=True):
pred_depth = 20
enc_out_all, pred_out_all, van_out_all, van_on_enc_all = [], [], [], []
ls... | [
"Constructs the tensorflow graph of the hierarchical model."
] |
Please provide a description of the function:def peak_signal_to_noise_ratio(true, pred):
return 10.0 * tf.log(1.0 / mean_squared_error(true, pred)) / tf.log(10.0) | [
"Image quality metric based on maximal signal power vs. power of the noise.\n\n Args:\n true: the ground truth image.\n pred: the predicted image.\n Returns:\n peak signal to noise ratio (PSNR)\n "
] |
Please provide a description of the function:def mean_squared_error(true, pred):
result = tf.reduce_sum(
tf.squared_difference(true, pred)) / tf.to_float(tf.size(pred))
return result | [
"L2 distance between tensors true and pred.\n\n Args:\n true: the ground truth image.\n pred: the predicted image.\n Returns:\n mean squared error between ground truth and predicted image.\n "
] |
Please provide a description of the function:def l1_error(true, pred):
return tf.reduce_sum(tf.abs(true - pred)) / tf.to_float(tf.size(pred)) | [
"L1 distance between tensors true and pred."
] |
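Taken together, the three metric functions above reduce to simple elementwise formulas. A plain NumPy sketch (function names here are illustrative stand-ins, not the TF ops themselves):

```python
import numpy as np

def mse(true, pred):
    # Sum of squared differences over element count, as in mean_squared_error.
    return np.sum((true - pred) ** 2) / pred.size

def l1(true, pred):
    # Mean absolute difference, as in l1_error.
    return np.sum(np.abs(true - pred)) / pred.size

def psnr(true, pred):
    # 10 * log10(1 / MSE) for images scaled to [0, 1],
    # as in peak_signal_to_noise_ratio.
    return 10.0 * np.log10(1.0 / mse(true, pred))

true = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)  # constant error of 0.1 -> MSE = 0.01 -> PSNR = 20 dB
```

Note the PSNR formula assumes images in [0, 1], so the peak signal term MAX^2 is 1 and drops out of the ratio.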
Please provide a description of the function:def calc_loss_psnr(gen_images, images, name, hparams=None, use_l1_loss=False):
del hparams
with tf.name_scope(name):
loss, error, psnr_all = 0.0, 0.0, 0.0
for _, x, gx in zip(range(len(gen_images)), images, gen_images):
recon_cost = mean_squared_error(x,... | [
"Calculates loss and psnr for predictions over multiple timesteps."
] |
Please provide a description of the function:def next_frame_sv2p():
hparams = basic_stochastic.next_frame_basic_stochastic()
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = "constant"
hparams.learning_rate_constant = 1e-3
hparams.video_num_input_frames = 1
hparams.video_num_target_frames... | [
"SV2P model hparams."
] |
Please provide a description of the function:def next_frame_sv2p_discrete():
hparams = next_frame_sv2p()
hparams.action_injection = "multiplicative"
hparams.small_mode = True
hparams.add_hparam("bottleneck_bits", 128)
hparams.add_hparam("bottleneck_noise", 0.02)
hparams.add_hparam("discrete_warmup_steps"... | [
"SV2P discrete model hparams."
] |
Please provide a description of the function:def next_frame_sv2p_atari():
hparams = next_frame_sv2p()
hparams.video_num_input_frames = 4
hparams.video_num_target_frames = 4
hparams.action_injection = "multiplicative"
hparams.num_iterations_1st_stage = 12000
hparams.num_iterations_2nd_stage = 12000
hpar... | [
"SV2P model for atari."
] |
Please provide a description of the function:def next_frame_sv2p_atari_softmax():
hparams = next_frame_sv2p_atari()
hparams.bottom = {}
hparams.loss = {}
hparams.top = {}
hparams.internal_loss = True
return hparams | [
"SV2P model for atari with softmax."
] |
Please provide a description of the function:def next_frame_sv2p_tiny():
hparams = next_frame_sv2p_atari_softmax()
hparams.batch_size = 2
hparams.tiny_mode = True
hparams.num_masks = 1
hparams.video_modality_loss_cutoff = 0.4
hparams.video_num_input_frames = 4
hparams.video_num_target_frames = 4
retu... | [
"Tiny SV2P model."
] |
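The hparams sets above are built by copying a parent set and mutating a few fields. The layering pattern reduces to a dict merge; the override values below are copied from `next_frame_sv2p_tiny`, while the base values are purely illustrative:

```python
def next_frame_sv2p_tiny_overrides():
    # Overrides applied by next_frame_sv2p_tiny on top of
    # next_frame_sv2p_atari_softmax, expressed as a plain dict.
    return {
        "batch_size": 2,
        "tiny_mode": True,
        "num_masks": 1,
        "video_modality_loss_cutoff": 0.4,
        "video_num_input_frames": 4,
        "video_num_target_frames": 4,
    }

def apply_overrides(base, overrides):
    # Each registered hparams function copies its parent and mutates fields;
    # a dict merge captures the same layering.
    merged = dict(base)
    merged.update(overrides)
    return merged

base = {"batch_size": 4096, "num_masks": 10}  # illustrative base values only
hparams = apply_overrides(base, next_frame_sv2p_tiny_overrides())
```

This is why the registry functions chain (`next_frame_sv2p` -> `..._atari` -> `..._atari_softmax` -> `..._tiny`): each level only states its deltas.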
Please provide a description of the function:def next_frame_sv2p_cutoff():
hparams = next_frame_sv2p()
hparams.video_modality_loss_cutoff = 0.4
hparams.video_num_input_frames = 4
hparams.video_num_target_frames = 1
return hparams | [
"SV2P model with additional cutoff in L2 loss for environments like pong."
] |
Please provide a description of the function:def _get_mscoco(directory):
for url in _MSCOCO_URLS:
filename = os.path.basename(url)
download_url = os.path.join(_MSCOCO_ROOT_URL, url)
path = generator_utils.maybe_download(directory, filename, download_url)
unzip_dir = os.path.join(directory, filename... | [
"Download and extract MSCOCO datasets to directory unless it is there."
] |
Please provide a description of the function:def mscoco_generator(data_dir,
tmp_dir,
training,
how_many,
start_from=0,
eos_list=None,
vocab_filename=None):
eos_list = [1] if eos_list is Non... | [
"Image generator for MSCOCO captioning problem with token-wise captions.\n\n Args:\n data_dir: path to the data directory.\n tmp_dir: path to temporary storage directory.\n training: a Boolean; if true, we use the train set, otherwise the test set.\n how_many: how many images and labels to generate.\n ... |
Please provide a description of the function:def flags_as_args():
if hasattr(FLAGS, "flag_values_dict"):
args_dict = FLAGS.flag_values_dict()
else:
args_dict = dict(FLAGS.__dict__["__flags"])
del args_dict["cloud_mlengine"]
# Configured later
del args_dict["t2t_usr_dir"]
args_dict.pop("h", None)
... | [
"Convert FLAGS to list of args suitable for passing on cmd line."
] |
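The flag-to-args conversion above can be sketched against a plain dict; the FLAGS handling and the exact `--key=value` rendering are assumptions, since the body is truncated:

```python
def flags_as_args(flags_dict):
    # Drop flags that are configured later or not meaningful on the command
    # line (mirroring the deletions in the original), then render the rest
    # as --key=value strings.
    args_dict = dict(flags_dict)
    for key in ("cloud_mlengine", "t2t_usr_dir", "h"):
        args_dict.pop(key, None)
    args = []
    for name, value in sorted(args_dict.items()):
        if value is None:
            continue
        args.append("--%s=%s" % (name, value))
    return args
```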
Please provide a description of the function:def get_default_master_type(num_gpus=1):
gpus_to_master_map = {
0: "standard",
1: "standard_p100",
4: "complex_model_m_p100",
8: "complex_model_l_gpu",
}
if num_gpus not in gpus_to_master_map:
raise ValueError("Num gpus must be in %s" %
... | [
"Returns master_type for trainingInput."
] |
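The truncated body above presumably ends by returning the mapped machine type; a self-contained reconstruction from the visible mapping:

```python
def get_default_master_type(num_gpus=1):
    # GPU count -> Cloud ML Engine machine type, per the table in the source.
    gpus_to_master_map = {
        0: "standard",
        1: "standard_p100",
        4: "complex_model_m_p100",
        8: "complex_model_l_gpu",
    }
    if num_gpus not in gpus_to_master_map:
        raise ValueError("Num gpus must be in %s" %
                         str(sorted(gpus_to_master_map)))
    return gpus_to_master_map[num_gpus]
```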
Please provide a description of the function:def configure_job():
# See documentation:
# https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#traininginput
training_input = {
"pythonModule": "tensor2tensor.bin.t2t_trainer",
"args": flags_as_args(),
"region": text_encoder.native_... | [
"Construct jobSpec for ML Engine job."
] |
Please provide a description of the function:def launch_job(job_spec):
project_id = "projects/{}".format(
text_encoder.native_to_unicode(default_project()))
credentials = GoogleCredentials.get_application_default()
cloudml = discovery.build("ml", "v1", credentials=credentials,
... | [
"Launch job on ML Engine."
] |
Please provide a description of the function:def _tar_and_copy(src_dir, target_dir):
src_dir = src_dir.rstrip("/")
target_dir = target_dir.rstrip("/")
tmp_dir = tempfile.gettempdir().rstrip("/")
src_base = os.path.basename(src_dir)
shell_run(
"tar --exclude=.git -zcf {tmp_dir}/{src_base}.tar.gz -C {s... | [
"Tar and gzip src_dir and copy to GCS target_dir."
] |
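The `tar --exclude=.git -zcf ...` shell step in `_tar_and_copy` can be reproduced in pure Python with `tarfile`; this sketch covers only the local archiving half (the GCS copy is omitted, and `tar_dir` is a hypothetical helper name):

```python
import os
import tarfile
import tempfile

def tar_dir(src_dir):
    # Tar-gzip src_dir into the temp dir, excluding any .git directory,
    # mirroring: tar --exclude=.git -zcf {tmp}/{base}.tar.gz -C {parent} {base}
    src_dir = src_dir.rstrip("/")
    src_base = os.path.basename(src_dir)
    out_path = os.path.join(tempfile.gettempdir(), src_base + ".tar.gz")

    def _exclude_git(tarinfo):
        # Returning None drops the member (and, for dirs, skips recursion).
        if tarinfo.name.endswith(".git") or "/.git/" in tarinfo.name:
            return None
        return tarinfo

    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=src_base, filter=_exclude_git)
    return out_path
```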
Please provide a description of the function:def tar_and_copy_t2t(train_dir):
tf.logging.info("Tarring and pushing local Tensor2Tensor package.")
output = text_encoder.native_to_unicode(shell_output(
"pip show tensor2tensor")).split("\n")
assert output[1].startswith("Version")
assert output[7].startsw... | [
"Tar Tensor2Tensor and cp to train_dir."
] |
Please provide a description of the function:def tar_and_copy_usr_dir(usr_dir, train_dir):
tf.logging.info("Tarring and pushing t2t_usr_dir.")
usr_dir = os.path.abspath(os.path.expanduser(usr_dir))
# Copy usr dir to a temp location
top_dir = os.path.join(tempfile.gettempdir(), "t2t_usr_container")
tmp_usr_... | [
"Package, tar, and copy usr_dir to GCS train_dir."
] |
Please provide a description of the function:def validate_flags():
assert not job_dir()
assert FLAGS.output_dir.startswith("gs://")
assert FLAGS.data_dir.startswith("gs://")
assert FLAGS.worker_replicas <= 1
assert FLAGS.ps_replicas <= 0
if FLAGS.hparams_range:
assert FLAGS.autotune_objective
if FL... | [
"Validates flags are set to acceptable values for CloudML Engine runs."
] |
Please provide a description of the function:def launch():
validate_flags()
job_spec = configure_job()
job_name = job_spec["jobId"]
tf.logging.info("Launching job %s with ML Engine spec:\n%s", job_name,
pprint.pformat(job_spec))
assert confirm()
train_dir = FLAGS.output_dir
t2t_tar = ... | [
"Launch t2t_trainer on Cloud ML Engine."
] |
Please provide a description of the function:def add_weight(cls):
@functools.wraps(cls.add_weight)
def _add_weight(self,
name=None,
shape=None,
dtype=None,
initializer=None,
regularizer=None,
**kwargs):
... | [
"Decorator for Layers, overriding add_weight for trainable initializers.",
"Adds weight.",
"Creates a regularization loss `Tensor`."
] |
Please provide a description of the function:def get_beta(self, kl_loss=0.0):
if self.hparams.latent_loss_multiplier_dynamic:
beta = tf.Variable(self.hparams.latent_loss_multiplier,
trainable=False, dtype=tf.float32)
alpha = self.hparams.latent_loss_multiplier_alpha
e... | [
"Get the KL multiplier, either dynamically or schedule based.\n\n if hparams.latent_loss_multiplier_dynamic is set to true, then beta\n is being adjusted to keep KL under hparams.latent_loss_multiplier_epsilon.\n In order to do so, the beta is being updated at each iteration\n by taking steps of size hp... |
Please provide a description of the function:def get_kl_loss(self, means, log_vars, means_p=None, log_vars_p=None):
kl_loss = 0.0
if means_p is None:
means_p = tf.unstack(tf.zeros_like(means))
if log_vars_p is None:
log_vars_p = tf.unstack(tf.zeros_like(log_vars))
enumerated_inputs = en... | [
"Get KL loss for all the predicted Gaussians."
] |
Please provide a description of the function:def construct_latent_tower(self, images, time_axis):
# No latent in the first phase
first_phase = tf.less(
self.get_iteration_num(), self.hparams.num_iterations_1st_stage)
# use all frames by default but this allows more
# predicted frames at in... | [
"Create the latent tower."
] |
Please provide a description of the function:def transformer_encode(encoder_function, inputs, target_space, hparams,
attention_weights=None, features=None, losses=None,
**kwargs):
inputs = common_layers.flatten4d3d(inputs)
encoder_input, self_attention_bias, encoder... | [
"Encode transformer inputs.\n\n Args:\n encoder_function: the encoder function\n inputs: Transformer inputs [batch_size, input_length, 1, hidden_dim] which\n will be flattened along the two spatial dimensions.\n target_space: scalar, target space ID.\n hparams: hyperparameters for model.\n atte... |
Please provide a description of the function:def transformer_decode(decoder_function,
decoder_input,
encoder_output,
encoder_decoder_attention_bias,
decoder_self_attention_bias,
hparams,
... | [
"Decode Transformer outputs from encoder representation.\n\n Args:\n decoder_function: the decoder function\n decoder_input: inputs to bottom of the model. [batch_size, decoder_length,\n hidden_dim]\n encoder_output: Encoder representation. [batch_size, input_length,\n hidden_dim]\n encoder_d... |
Please provide a description of the function:def _init_transformer_cache(cache, hparams, batch_size, attention_init_length,
encoder_output, encoder_decoder_attention_bias,
scope_prefix):
key_channels = hparams.attention_key_channels or hparams.hidden_size
v... | [
"Create the initial cache for Transformer fast decoding."
] |
Please provide a description of the function:def fast_decode_tpu(encoder_output,
encoder_decoder_attention_bias,
symbols_to_logits_fn,
hparams,
decode_length,
vocab_size,
init_cache_fn=_init_transform... | [
"Given encoder output and a symbols to logits function, does fast decoding.\n\n Implements both greedy and beam search decoding for TPU, uses beam search iff\n beam_size > 1, otherwise beam search related arguments are ignored.\n\n Args:\n encoder_output: A tensor, output from encoder.\n encoder_decoder_at... |
Please provide a description of the function:def fast_decode(encoder_output,
encoder_decoder_attention_bias,
symbols_to_logits_fn,
hparams,
decode_length,
vocab_size,
init_cache_fn=_init_transformer_cache,
be... | [
"Given encoder output and a symbols to logits function, does fast decoding.\n\n Implements both greedy and beam search decoding, uses beam search iff\n beam_size > 1, otherwise beam search related arguments are ignored.\n\n Args:\n encoder_output: Output from encoder.\n encoder_decoder_attention_bias: a bi... |
Please provide a description of the function:def transformer_prepare_decoder(targets, hparams, features=None):
if hparams.causal_decoder_self_attention:
# Causal attention.
if hparams.prepend_mode == "prepend_inputs_full_attention":
decoder_self_attention_bias = (
common_attention.attention... | [
"Prepare one shard of the model for the decoder.\n\n Args:\n targets: a Tensor.\n hparams: run hyperparameters\n features: optionally pass the entire features dictionary as well. This is\n needed now for \"packed\" datasets.\n\n Returns:\n decoder_input: a Tensor, bottom of decoder stack\n dec... |
Please provide a description of the function:def transformer_decoder(decoder_input,
encoder_output,
decoder_self_attention_bias,
encoder_decoder_attention_bias,
hparams,
cache=None,
... | [
"A stack of transformer layers.\n\n Args:\n decoder_input: a Tensor\n encoder_output: a Tensor\n decoder_self_attention_bias: bias Tensor for self-attention (see\n common_attention.attention_bias())\n encoder_decoder_attention_bias: bias Tensor for encoder-decoder attention\n (see common_atte... |
Please provide a description of the function:def transformer_base_v1():
hparams = common_hparams.basic_params1()
hparams.norm_type = "layer"
hparams.hidden_size = 512
hparams.batch_size = 4096
hparams.max_length = 256
hparams.clip_grad_norm = 0. # i.e. no gradient clipping
hparams.optimizer_adam_epsil... | [
"Set of hyperparameters."
] |
Please provide a description of the function:def transformer_base_v2():
hparams = transformer_base_v1()
hparams.layer_preprocess_sequence = "n"
hparams.layer_postprocess_sequence = "da"
hparams.layer_prepostprocess_dropout = 0.1
hparams.attention_dropout = 0.1
hparams.relu_dropout = 0.1
hparams.learnin... | [
"Set of hyperparameters."
] |
Please provide a description of the function:def transformer_base_vq_ada_32ex_packed():
hparams = transformer_base_v2()
expert_utils.update_hparams_for_vq_gating(hparams)
hparams.moe_num_experts = 32
hparams.gating_type = "vq"
# this gives us a batch size of 16 because each seq is len 256
hparams.batch_s... | [
"Set of hyperparameters for lm1b packed following tpu params."
] |
Please provide a description of the function:def transformer_base_vq1_16_nb1_packed_nda_b01_scales():
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.layer_preprocess_sequence = "n"
hparams.layer_po... | [
"Set of hyperparameters."
] |
Please provide a description of the function:def transformer_base_vq1_16_nb1_packed_dan_b01_scales():
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.ema = False
return hparams | [
"Set of hyperparameters."
] |
Please provide a description of the function:def transformer_base_vq1_16_nb1_packed_nda_b01_scales_dialog():
hparams = transformer_base_vq1_16_nb1_packed_nda_b01_scales()
hparams.batch_size = 2048
hparams.max_length = 1024
hparams.filter_size = 3072
return hparams | [
"Set of hyperparameters."
] |
Please provide a description of the function:def transformer_ada_lmpackedbase_dialog():
hparams = transformer_base_vq_ada_32ex_packed()
hparams.max_length = 1024
hparams.ffn_layer = "dense_relu_dense"
hparams.batch_size = 4096
return hparams | [
"Set of hyperparameters."
] |
Please provide a description of the function:def transformer_base_v3():
# Update parameters here, then occasionally cut a versioned set, e.g.
# transformer_base_v2.
hparams = transformer_base_v2()
hparams.optimizer_adam_beta2 = 0.997
# New way of specifying learning rate schedule.
# Equivalent to previou... | [
"Base parameters for Transformer model."
] |
Please provide a description of the function:def transformer_big():
hparams = transformer_base()
hparams.hidden_size = 1024
hparams.filter_size = 4096
# Reduce batch size to 2048 from 4096 to be able to train the model on a GPU
# with 12 GB memory. For example, NVIDIA TITAN V GPU.
hparams.batch_size = 20... | [
"HParams for transformer big model on WMT."
] |
Please provide a description of the function:def transformer_tall():
hparams = transformer_base()
hparams.batch_size = 2048
hparams.hidden_size = 768
hparams.filter_size = 3072
hparams.num_hidden_layers = 12
hparams.num_heads = 12
hparams.label_smoothing = 0.0
hparams.max_length = 1024
hparams.eval... | [
"Hparams for transformer on LM for pretraining/finetuning/mixing."
] |
Please provide a description of the function:def transformer_tall_finetune_tied():
hparams = transformer_tall()
hparams.multiproblem_max_input_length = 750
hparams.multiproblem_max_target_length = 100
hparams.multiproblem_schedule_max_examples = 0
hparams.learning_rate_schedule = ("linear_warmup*constant*c... | [
"Tied means fine-tune CNN/DM summarization as LM."
] |
Please provide a description of the function:def transformer_tall_finetune_uniencdec():
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hparams.learning... | [
"Fine-tune CNN/DM with a unidirectional encoder and decoder."
] |
Please provide a description of the function:def transformer_tall_train_uniencdec():
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hparams.learning_ra... | [
"Train CNN/DM with a unidirectional encoder and decoder."
] |
Please provide a description of the function:def transformer_tall_finetune_textclass():
hparams = transformer_tall()
hparams.learning_rate_constant = 6.25e-5
hparams.learning_rate_schedule = ("linear_warmup*constant*linear_decay")
hparams.multiproblem_schedule_max_examples = 0
hparams.multiproblem_target_e... | [
"Hparams for transformer on LM for finetuning on text class problems."
] |
Please provide a description of the function:def transformer_tall_pretrain_lm():
hparams = transformer_tall()
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hparams.optimizer = "adam_w"
hparams.optimizer_adam_beta1 = 0.9
hparams.optimizer_adam_b... | [
"Hparams for transformer on LM pretraining (with 64k vocab)."
] |
Please provide a description of the function:def transformer_tall_pretrain_lm_tpu_adafactor():
hparams = transformer_tall_pretrain_lm()
update_hparams_for_tpu(hparams)
hparams.max_length = 1024
# For multi-problem on TPU we need it in absolute examples.
hparams.batch_size = 8
hparams.multiproblem_vocab_s... | [
"Hparams for transformer on LM pretraining (with 64k vocab) on TPU."
] |
Please provide a description of the function:def transformer_tall_pretrain_lm_tpu_adafactor_large():
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
hparams.hidden_size = 1024
hparams.num_heads = 16
hparams.filter_size = 32768 # max fitting in 16G memory is 49152, batch 2
hparams.batch_size = 4
h... | [
"Hparams for transformer on LM pretraining on TPU, large model."
] |
Please provide a description of the function:def transformer_tall_pretrain_lm_tpu():
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
# Optimizer gets reset in update_hparams_for_tpu so we set it again here.
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("linear_warmup * consta... | [
"Hparams for transformer on LM pretraining on TPU with AdamW."
] |
Please provide a description of the function:def transformer_base_single_gpu():
hparams = transformer_base()
hparams.batch_size = 1024
hparams.learning_rate_schedule = "constant*linear_warmup*rsqrt_decay"
hparams.learning_rate_constant = 0.1
hparams.learning_rate_warmup_steps = 16000
return hparams | [
"HParams for transformer base model for single GPU."
] |
Please provide a description of the function:def transformer_parsing_base():
hparams = transformer_base()
hparams.attention_dropout = 0.2
hparams.layer_prepostprocess_dropout = 0.2
hparams.max_length = 512
hparams.learning_rate_warmup_steps = 16000
hparams.hidden_size = 1024
hparams.learning_rate = 0.0... | [
"HParams for parsing on WSJ only."
] |
Please provide a description of the function:def transformer_parsing_big():
hparams = transformer_big()
hparams.max_length = 512
hparams.shared_source_target_embedding = False
hparams.learning_rate_warmup_steps = 4000
hparams.layer_prepostprocess_dropout = 0.1
hparams.batch_size = 2048
hparams.learning... | [
"HParams for parsing on WSJ semi-supervised."
] |
Please provide a description of the function:def transformer_base_range(rhp):
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000, 8000, 16000])
rhp.set... | [
"Small range of hyperparameters."
] |
Please provide a description of the function:def transformer_relative():
hparams = transformer_base()
hparams.pos = None
hparams.self_attention_type = "dot_product_relative"
hparams.max_relative_position = 20
return hparams | [
"Use relative position embeddings instead of absolute position encodings."
] |
Please provide a description of the function:def transformer_mlperf_tpu():
hparams = transformer_base_v3()
hparams.mlperf_mode = True
hparams.symbol_modality_num_shards = 1
hparams.max_length = 256 # ignored when using "_packed" problems
hparams.batch_size = 2048 # per-chip batch size matches the referen... | [
"HParams for Transformer model on TPU for MLPerf on TPU 2x2."
] |
Please provide a description of the function:def update_hparams_for_tpu(hparams):
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_schedule = "rsqrt_decay"
hparams.learning_rate_warmup_steps = 100... | [
"Change hparams to be compatible with TPU training."
] |
Please provide a description of the function:def transformer_tpu_range(rhp):
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000, 8000, 16000])
rhp.set_... | [
"Small range of hyperparameters."
] |
Please provide a description of the function:def transformer_clean():
hparams = transformer_base_v2()
hparams.label_smoothing = 0.0
hparams.layer_prepostprocess_dropout = 0.0
hparams.attention_dropout = 0.0
hparams.relu_dropout = 0.0
hparams.max_length = 0
return hparams | [
"No dropout, label smoothing, max_length."
] |
Please provide a description of the function:def transformer_lm_tpu_0():
hparams = transformer_clean_big()
update_hparams_for_tpu(hparams)
hparams.num_heads = 4 # Heads are expensive on TPUs.
hparams.batch_size = 4096
hparams.shared_embedding_and_softmax_weights = False
hparams.layer_prepostprocess_drop... | [
"HParams for training languagemodel_lm1b8k on tpu. 92M Params."
] |
Please provide a description of the function:def transformer_librispeech_v1():
hparams = transformer_base()
hparams.num_heads = 4
hparams.filter_size = 1024
hparams.hidden_size = 256
hparams.num_encoder_layers = 5
hparams.num_decoder_layers = 3
hparams.learning_rate = 0.15
hparams.batch_size = 60000... | [
"HParams for training ASR model on LibriSpeech V1."
] |
Please provide a description of the function:def transformer_librispeech_v2():
hparams = transformer_base()
hparams.max_length = 1240000
hparams.max_input_seq_length = 1550
hparams.max_target_seq_length = 350
hparams.batch_size = 16
hparams.num_decoder_layers = 4
hparams.num_encoder_layers = 6
hpara... | [
"HParams for training ASR model on LibriSpeech V2."
] |
Please provide a description of the function:def transformer_librispeech_tpu_v1():
hparams = transformer_librispeech_v1()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
return hparams | [
"HParams for training ASR model on Librispeech on TPU v1."
] |
Please provide a description of the function:def transformer_librispeech_tpu_v2():
hparams = transformer_librispeech_v2()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
return hparams | [
"HParams for training ASR model on Librispeech on TPU v2."
] |
Please provide a description of the function:def transformer_tpu_1b():
hparams = transformer_tpu()
hparams.hidden_size = 2048
hparams.filter_size = 8192
hparams.num_hidden_layers = 8
# smaller batch size to avoid OOM
hparams.batch_size = 1024
hparams.activation_dtype = "bfloat16"
hparams.weight_dtype... | [
"Hparams for machine translation with ~1.1B parameters."
] |
Please provide a description of the function:def transformer_wikitext103_l4k_v0():
hparams = transformer_big()
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_schedule = "rsqrt_decay"
hparams.l... | [
"HParams for training languagemodel_wikitext103_l4k."
] |
Please provide a description of the function:def transformer_wikitext103_l4k_memory_v0():
hparams = transformer_wikitext103_l4k_v0()
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = 64
hparams.split_targets_strided_training = True
hparams.add_hparam("memory_type", "transformer_xl... | [
"HParams for training languagemodel_wikitext103_l4k with memory."
] |
Please provide a description of the function:def transformer_wikitext103_l16k_memory_v0():
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.max_length = 16384
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = int(
hparams.max_length / hparams.split_targets_chunk_lengt... | [
"HParams for training languagemodel_wikitext103_l16k with memory."
] |
Please provide a description of the function:def transformer_cifar10_memory_v0():
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.num_hidden_layers = 6
hparams.max_length = 32 * 32 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_targets_max_chunks = int(
hparams.max_length /... | [
"HParams for training image_cifar10_plain_gen_flat_rev with memory."
] |
Please provide a description of the function:def transformer_imagenet64_memory_v0():
hparams = transformer_cifar10_memory_v0()
hparams.max_length = 64 * 64 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_targets_max_chunks = int(
hparams.max_length / hparams.split_targets_chunk_length)
... | [
"HParams for training image_imagenet64_gen_flat_rev with memory."
] |
Please provide a description of the function:def maybe_reshape_4d_to_3d(x):
x_shape = common_layers.shape_list(x)
is_4d = False
if len(x_shape) == 4:
x = tf.reshape(x, [x_shape[0], x_shape[1]*x_shape[2], x_shape[3]])
is_4d = True
return x, x_shape, is_4d | [
"Reshape input from 4D to 3D if necessary."
] |
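Since `maybe_reshape_4d_to_3d` is shown in full, its behavior is easy to check with a NumPy analogue: the two spatial dims of a `[batch, h, w, depth]` tensor collapse into one, and the original shape is returned so callers can restore it:

```python
import numpy as np

def maybe_reshape_4d_to_3d(x):
    # Collapse [batch, h, w, depth] to [batch, h*w, depth]; pass 3-D
    # inputs through unchanged, reporting whether a reshape happened.
    x_shape = list(x.shape)
    is_4d = False
    if len(x_shape) == 4:
        x = x.reshape(x_shape[0], x_shape[1] * x_shape[2], x_shape[3])
        is_4d = True
    return x, x_shape, is_4d

x = np.zeros((2, 4, 8, 16))
y, shape, was_4d = maybe_reshape_4d_to_3d(x)
```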
Please provide a description of the function:def local_attention_2d(x, hparams, attention_type="local_attention_2d"):
# self-attention
with tf.variable_scope("local_2d_self_att"):
y = common_attention.multihead_attention_2d(
x,
None,
hparams.attention_key_channels or hparams.hidden_si... | [
"Local 2d, self attention layer."
] |
Please provide a description of the function:def local_within_block_attention(x,
self_attention_bias,
hparams,
attention_type="local_within_block_mask_right",
q_padding="VALID",
... | [
"Local within block self attention."
] |
Please provide a description of the function:def local_attention_1d(x,
hparams,
attention_type="local_unmasked",
q_padding="VALID",
kv_padding="VALID"):
# self-attention
x, x_shape, is_4d = maybe_reshape_4d_to_3d(x)
wit... | [
"Local 1d self attention."
] |
Please provide a description of the function:def get_dilated_1d_attention_mask(
num_heads, block_size,
num_blocks, memory_size, gap_size,
name="dilated_mask"):
mask = np.ones((num_heads, block_size, 2*block_size), bool)
# now going over every row to do the right assignment of
# memory blocks
... | [
"Dilated attention with a masking strategy."
] |
Please provide a description of the function:def dilated_attention_1d(x,
hparams,
attention_type="masked_dilated_1d",
q_padding="VALID",
kv_padding="VALID",
gap_size=2):
# self-attention
x... | [
"Dilated 1d self attention."
] |
Please provide a description of the function:def local_global_attention(x,
self_attention_bias,
hparams,
q_padding="LEFT",
kv_padding="LEFT"):
with tf.variable_scope("self_local_global_att"):
[x_global, ... | [
"Local and global 1d self attention."
] |
Please provide a description of the function:def full_self_attention(x,
self_attention_bias,
hparams,
q_padding="LEFT",
kv_padding="LEFT"):
x, x_shape, is_4d = maybe_reshape_4d_to_3d(x)
if self_attention_bias is not N... | [
"Full self-attention layer."
] |
Please provide a description of the function:def encdec_attention_1d(x,
encoder_output,
encoder_decoder_attention_bias,
hparams):
x, x_shape, is_4d = maybe_reshape_4d_to_3d(x)
encoder_output, _, _ = maybe_reshape_4d_to_3d(encoder_output)
w... | [
"Local 1d self attention."
] |
Please provide a description of the function:def transformer_decoder_layers(inputs,
encoder_output,
num_layers,
hparams,
self_attention_bias=None,
encoder_decoder_at... | [
"Multi layer transformer."
] |
Please provide a description of the function:def transformer_encoder_layers(inputs,
num_layers,
hparams,
attention_type=AttentionType.GLOBAL,
self_attention_bias=None,
... | [
"Multi layer transformer encoder."
] |
Please provide a description of the function:def ffn_layer(x, hparams, losses=None):
with tf.variable_scope("ffn"):
if hparams.ffn_layer == "none":
return x
if hparams.ffn_layer == "conv_hidden_relu":
y = common_layers.dense_relu_dense(
x,
hparams.filter_size,
hpar... | [
"ffn layer transformer."
] |
Please provide a description of the function:def get_self_attention_bias(x):
x_shape = common_layers.shape_list(x)
self_attention_bias = common_attention.attention_bias_lower_triangle(
x_shape[1])
return self_attention_bias | [
"Creates masked self attention bias.\n\n Args:\n x: A tensor of shape [batch, length, depth]\n\n Returns:\n self_attention_bias: A tensor of shape [length, length, 1]\n "
] |
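Ignoring the broadcast batch/head dimensions, the lower-triangle bias can be illustrated in plain NumPy. This is a sketch of the standard trick, assuming the usual convention of adding a large negative number at disallowed (future) positions before the softmax:

```python
import numpy as np

def lower_triangle_bias(length, neg=-1e9):
    """Bias added to attention logits: 0 where key j <= query i, a large
    negative value where j > i, so softmax gives ~0 weight to the future."""
    mask = np.triu(np.ones((length, length)), k=1)  # 1 strictly above diagonal
    return mask * neg

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With all-zero logits, row i spreads weight uniformly over positions 0..i.
logits = np.zeros((4, 4)) + lower_triangle_bias(4)
weights = softmax(logits)
```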
Please provide a description of the function:def postprocess_image(x, rows, cols, hparams):
batch = common_layers.shape_list(x)[0]
x = tf.reshape(x, [batch, rows, cols, hparams.hidden_size])
likelihood = getattr(hparams, "likelihood", DistributionType.CAT)
if likelihood == DistributionType.DMOL:
depth = ... | [
"Postprocessing after decoding.\n\n Args:\n x: Tensor of shape [batch, ...], where ... can be any rank such that the\n number of elements in x is batch * rows * cols * hparams.hidden_size.\n rows: Integer representing number of rows in a 2-D data point.\n cols: Integer representing number of columns ... |
Please provide a description of the function:def prepare_encoder(inputs, hparams, attention_type="local_1d"):
x = prepare_image(inputs, hparams, name="enc_channels")
# Add position signals.
x = add_pos_signals(x, hparams, "enc_pos")
x_shape = common_layers.shape_list(x)
if attention_type == "local_1d":
... | [
"Prepare encoder for images."
] |
Please provide a description of the function:def prepare_decoder(targets, hparams):
targets_shape = common_layers.shape_list(targets)
channels = hparams.num_channels
curr_infer_length = None
# during training, images are [batch, IMG_LEN, IMG_LEN, 3].
# At inference, they are [batch, curr_infer_length, 1, ... | [
"Prepare decoder for images."
] |
Please provide a description of the function:def create_output(decoder_output, rows, cols, targets, hparams):
del targets # unused arg
decoded_image = postprocess_image(decoder_output, rows, cols, hparams)
batch = common_layers.shape_list(decoded_image)[0]
depth = common_layers.shape_list(decoded_image)[-1]... | [
"Creates output from decoder output and vars.\n\n Args:\n decoder_output: Tensor of shape [batch, ...], where ... can be any rank such\n that the number of elements is batch * rows * cols * hparams.hidden_size.\n rows: Integer representing number of rows in a 2-D data point.\n cols: Integer represent... |
Please provide a description of the function:def get_channel_embeddings(io_depth, targets, hidden_size, name="channel"):
targets_split = tf.split(targets, io_depth, axis=3)
rgb_embedding_var = tf.get_variable("rgb_target_emb_%s" % name,
[256 * io_depth, hidden_size])
rgb_e... | [
"Get separate embedding for each of the channels."
] |
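The per-channel embedding trick (one shared table of `256 * io_depth` rows, with each channel offset into its own 256-row slice) can be sketched in NumPy. The `+ 256 * c` offsetting is an assumption about what the truncated snippet does with the split channels:

```python
import numpy as np

def channel_embeddings(targets, table, io_depth):
    """targets: int array [batch, h, w, io_depth] with values in 0..255.
    table: [256 * io_depth, hidden]. Channel c looks up row value + 256*c,
    so every channel owns a disjoint 256-row slice of the shared table."""
    embs = []
    for c in range(io_depth):
        ids = targets[..., c] + 256 * c
        embs.append(table[ids])              # gather rows per channel
    return np.concatenate(embs, axis=-1)     # [batch, h, w, hidden*io_depth]

rng = np.random.default_rng(0)
table = rng.standard_normal((256 * 3, 8))
targets = rng.integers(0, 256, size=(2, 4, 4, 3))
out = channel_embeddings(targets, table, io_depth=3)
```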
Please provide a description of the function:def simulate(self, action):
with tf.name_scope("environment/simulate"):
if action.dtype in (tf.float16, tf.float32, tf.float64):
action = tf.check_numerics(action, "action")
def step(action):
step_response = self._batch_env.step(action)
... | [
"Step the batch of environments.\n\n The results of the step can be accessed from the variables defined below.\n\n Args:\n action: Tensor holding the batch of actions to apply.\n\n Returns:\n Operation.\n "
] |
Please provide a description of the function:def _reset_non_empty(self, indices):
observ = tf.py_func(
self._batch_env.reset, [indices], self.observ_dtype, name="reset")
observ.set_shape(indices.get_shape().concatenate(self.observ_shape))
with tf.control_dependencies([
tf.scatter_update... | [
"Reset the batch of environments.\n\n Args:\n indices: The batch indices of the environments to reset; defaults to all.\n\n Returns:\n Batch tensor of the new observations.\n "
] |
Please provide a description of the function:def include_revision(revision_num, skip_factor=1.1):
if skip_factor <= 1.0:
return True
return (int(math.log1p(revision_num) / math.log(skip_factor)) != int(
math.log(revision_num + 2.0) / math.log(skip_factor))) | [
"Decide whether to include a revision.\n\n If the number of revisions is large, we exclude some revisions to avoid\n a quadratic blowup in runtime, since the article is likely also large.\n\n We make the ratio between consecutive included revision numbers\n appproximately equal to \"factor\".\n\n Args:\n re... |
Please provide a description of the function:def file_page_generator(my_file, max_page_size=2**28):
page_start = " <page>\n"
page_end = " </page>\n"
chunk_size = max_page_size
page_start = " <page>\n"
page_end = " </page>\n"
leftovers = ""
while True:
chunk = my_file.read(chunk_size)
if not... | [
"Read wikipedia pages from a history dump.\n\n Since some pages can be terabytes in size (with all the revisions),\n we limit page size to max_page_size bytes.\n\n Args:\n my_file: an open file object.\n max_page_size: an integer\n\n Yields:\n strings\n "
] |
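The snippet is truncated, but the pattern it implements (stream a huge dump in bounded chunks and yield the text between page markers, dropping pages that exceed `max_page_size`) can be sketched as follows. The marker strings come from the visible lines; the carry-over and skip policy are assumptions:

```python
import io

def file_page_generator(my_file, max_page_size=2**28):
    """Yield the body of each <page>...</page> element from a stream,
    reading in bounded chunks and skipping oversized pages."""
    page_start = " <page>\n"
    page_end = " </page>\n"
    leftovers = ""
    while True:
        chunk = my_file.read(max_page_size)
        if not chunk:
            break
        chunk = leftovers + chunk
        leftovers = ""
        while True:
            start = chunk.find(page_start)
            if start == -1:
                break
            end = chunk.find(page_end, start)
            if end == -1:
                leftovers = chunk[start:]  # page continues in the next chunk
                break
            page = chunk[start + len(page_start):end]
            if len(page) <= max_page_size:
                yield page
            chunk = chunk[end + len(page_end):]

dump = " <page>\nA\n </page>\n <page>\nB\n </page>\n"
pages = list(file_page_generator(io.StringIO(dump)))
```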