Code — string lengths 103 to 85.9k
Summary — list lengths 0 to 94
Please provide a description of the function:def _target_modality_is_real(self): vocab_size = self._problem_hparams.vocab_size["targets"] if vocab_size is not None and hasattr(self._hparams, "vocab_divisor"): vocab_size += (-vocab_size) % self._hparams.vocab_divisor modality = self._problem_hpara...
[ "Whether the target modality is real-valued." ]
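The snippet above pads the vocabulary with `vocab_size += (-vocab_size) % self._hparams.vocab_divisor`. That modular idiom rounds a size up to the nearest multiple of the divisor, and is easy to check in isolation (a minimal sketch; the helper name is ours):

```python
def round_up_to_multiple(vocab_size: int, vocab_divisor: int) -> int:
    # (-n) % d is the padding needed to reach the next multiple of d
    # (zero when n is already a multiple), so the sum is always divisible by d.
    return vocab_size + (-vocab_size) % vocab_divisor

print(round_up_to_multiple(33000, 128))  # → 33024 (= 258 * 128)
print(round_up_to_multiple(256, 128))    # → 256 (already a multiple)
```

Rounding the vocab size this way keeps embedding/softmax shapes aligned to hardware-friendly multiples without changing the logical vocabulary.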
Please provide a description of the function:def model_fn_sharded(self, sharded_features): dp = self._data_parallelism # [{str: Tensor}]. Transpose of 'sharded_features'. datashard_to_features = self._to_features_per_datashard(sharded_features) if self.use_body_sharded(): if self.hparams.sc...
[ "Estimator model_fn sharded along batch dimension.\n\n Args:\n sharded_features: {str: [Tensor]}. Features sharded along batch dimension.\n Each list is the same length (== number of shards).\n\n Returns:\n sharded_logits: [Tensor]. Logits for each shard of examples.\n losses: {str: 0-D ...
Please provide a description of the function:def bottom(self, features): if not self._problem_hparams: log_warn("Without a Problem, T2TModel.bottom is a passthrough.") return features transformed_features = collections.OrderedDict() all_previous_modalities = [] target_modality = _creat...
[ "Transforms features to feed into body.\n\n Args:\n features: dict of str to Tensor. Typically it is the preprocessed data\n batch after Problem's preprocess_example().\n\n Returns:\n transformed_features: dict of same key-value pairs as features. The value\n Tensors are newly transfor...
Please provide a description of the function:def top(self, body_output, features): if isinstance(body_output, dict): logits = {} for k, v in six.iteritems(body_output): # TODO(aidangomez): share variables here? with tf.variable_scope(k) as top_vs: self._add_variable_scope(...
[ "Computes logits given body output and features.\n\n Args:\n body_output: dict of str to Tensor, comprising one key-value pair for each\n target. Each value denotes the target's pre-logit activations.\n Alternatively, it may be a single Tensor denoting the pre-logits for\n that target.\...
Please provide a description of the function:def optimize(self, loss, num_async_replicas=1, use_tpu=False): lr = learning_rate.learning_rate_schedule(self.hparams) if num_async_replicas > 1: log_info("Dividing learning rate by num_async_replicas: %d", num_async_replicas) lr /= math...
[ "Return a training op minimizing loss." ]
Please provide a description of the function:def set_mode(self, mode): log_info("Setting T2TModel mode to '%s'", mode) hparams = hparams_lib.copy_hparams(self._original_hparams) hparams.add_hparam("mode", mode) # When not in training mode, set all forms of dropout to zero. if mode != tf.estimat...
[ "Set hparams with the given mode." ]
Please provide a description of the function:def eval_autoregressive(self, features=None, decode_length=50): results = self._slow_greedy_infer(features, decode_length=decode_length) return results["logits"], results["losses"]
[ "Autoregressive eval.\n\n Quadratic time in decode_length.\n\n Args:\n features: a map of string to `Tensor`\n decode_length: an integer. How many additional timesteps to decode.\n\n Returns:\n logits: `Tensor`\n losses: a dictionary: {loss-name (string): floating point `Scalar`}.\n ...
Please provide a description of the function:def infer(self, features=None, decode_length=50, beam_size=1, top_beams=1, alpha=0.0, use_tpu=False): set_custom_getter_compose(self._custom_getter) with self._eager_var_store.as_default(): ...
[ "An inference method.\n\n Quadratic time in decode_length.\n\n Args:\n features: a map of string to `Tensor`\n decode_length: an integer. How many additional timesteps to decode.\n beam_size: number of beams.\n top_beams: an integer. How many of the beams to return.\n alpha: Float th...
Please provide a description of the function:def _beam_decode(self, features, decode_length, beam_size, top_beams, alpha, use_tpu=False): return self._beam_decode_slow(features, decode_length, beam...
[ "Beam search decoding.\n\n Models should ideally implement a more efficient version of this function.\n\n Args:\n features: a map of string to `Tensor`\n decode_length: an integer. How many additional timesteps to decode.\n beam_size: number of beams.\n top_beams: an integer. How many of...
Please provide a description of the function:def _beam_decode_slow(self, features, decode_length, beam_size, top_beams, alpha, use_tpu=False): batch_size = common_layers.shape_list(features["inputs"])[0] def symbols_to_logits_fn(ids, i=None): ids = tf.expand_dims(tf.ex...
[ "Slow version of Beam search decoding.\n\n Quadratic time in decode_length.\n\n Args:\n features: a map of string to `Tensor`\n decode_length: an integer. How many additional timesteps to decode.\n beam_size: number of beams.\n top_beams: an integer. How many of the beams to return.\n ...
Please provide a description of the function:def _greedy_infer(self, features, decode_length, use_tpu=False): if use_tpu: return self._slow_greedy_infer_tpu(features, decode_length) return self._slow_greedy_infer(features, decode_length)
[ "A greedy inference method.\n\n Models should ideally implement a more efficient version of this function.\n\n Args:\n features: a map of string to `Tensor`\n decode_length: an integer. How many additional timesteps to decode.\n use_tpu: A bool, whether to build the inference graph for TPU.\n...
Please provide a description of the function:def _slow_greedy_infer_tpu(self, features, decode_length): if not features: features = {} inputs_old = None if "inputs" in features and len(features["inputs"].shape) < 4: inputs_old = features["inputs"] features["inputs"] = tf.expand_dims(f...
[ "A slow greedy inference method on TPU.\n\n Quadratic time in decode_length.\n\n Args:\n features: A map of string to `Tensor`.\n decode_length: An integer, how many additional timesteps to decode.\n\n Returns:\n A dict of decoding results {\n \"outputs\": integer `Tensor` of decod...
Please provide a description of the function:def sample(self, features): logits, losses = self(features) # pylint: disable=not-callable if self._target_modality_is_real: return logits, logits, losses # Raw numbers returned from real modality. if self.hparams.sampling_method == "argmax": s...
[ "Run the model and extract samples.\n\n Args:\n features: a map of string to `Tensor`.\n\n Returns:\n samples: an integer `Tensor`.\n logits: a list of `Tensor`s, one per datashard.\n losses: a dictionary: {loss-name (string): floating point `Scalar`}.\n " ]
Please provide a description of the function:def estimator_model_fn(cls, hparams, features, labels, mode, config=None, params=None, decode_hparam...
[ "Model fn for Estimator.\n\n Args:\n hparams: HParams, model hyperparameters\n features: dict<str name, Tensor feature>\n labels: Tensor\n mode: tf.estimator.ModeKeys\n config: RunConfig, possibly with data_parallelism attribute\n params: dict, may include batch_size, use_tpu\n ...
Please provide a description of the function:def estimator_spec_train(self, loss, num_async_replicas=1, use_tpu=False): train_op = self.optimize(loss, num_async_replicas=num_async_replicas, use_tpu=use_tpu) if use_tpu: if self._hparams.warm_start_from: def scaffo...
[ "Constructs `tf.estimator.EstimatorSpec` for TRAIN (training) mode." ]
Please provide a description of the function:def estimator_spec_eval(self, features, logits, labels, loss, losses_dict): del losses_dict hparams = self.hparams if not hasattr(hparams, "problem"): raise NotImplementedError(_no_problem_err("estimator_spec_eval")) problem = hparams.problem ...
[ "Constructs `tf.estimator.EstimatorSpec` for EVAL (evaluation) mode." ]
Please provide a description of the function:def estimator_spec_predict(self, features, use_tpu=False): decode_hparams = self._decode_hparams top_beams = decode_hparams.beam_size if decode_hparams.return_beams else 1 infer_out = self.infer( features, beam_size=decode_hparams.beam_size, ...
[ "Constructs `tf.estimator.EstimatorSpec` for PREDICT (inference) mode." ]
Please provide a description of the function:def _summarize_losses(self, losses_dict): if common_layers.should_generate_summaries(): with tf.name_scope("losses"): for loss_name, loss_val in sorted(losses_dict.items()): tf.summary.scalar(loss_name, loss_val)
[ "Adds `tf.summary`s to all terms in the losses dictionary." ]
Please provide a description of the function:def maybe_scheduled_sampling(self, features, logits, losses): hparams = self.hparams problem_hparams = self._problem_hparams # Only do scheduled sampling if requested. if hparams.scheduled_sampling_prob == 0.0: return (logits, losses) # Only ...
[ "Scheduled sampling.\n\n Performs forward inference again with \"targets\" feature replaced with values\n sampled from the model.\n\n This is the identity unless self.hparams.scheduled_sampling_prob > 0\n (default).\n\n **WARNING**: This is not a faithful implementation of scheduled sampling.\n Th...
Please provide a description of the function:def attention_lm_moe_prepare_decoder(targets, hparams): targets_pad_mask = common_attention.embedding_to_padding(targets) with tf.name_scope("pad_remover"): # Because of the shift_right, the <eos> token will be considered as # padding. In practice, it doesn't ...
[ "Prepare one shard of the model for the decoder.\n\n Args:\n targets: a Tensor.\n hparams: run hyperparameters\n\n Returns:\n decoder_input: a Tensor, bottom of decoder stack\n decoder_self_attention_bias: a Tensor, containing large negative values\n to implement masked attention and possibly biase...
Please provide a description of the function:def get_batch_coordinate(x, axis=0): # Compute the batch coordinate before flattening all batches batch_coordinate = tf.expand_dims( common_attention.coordinate_tensor(tf.shape(x)[:-1], axis=axis), axis=-1) return batch_coordinate
[ "Return a flat int32 tensor of shape [1, batch_size*length, 1]." ]
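Per the docstring, `get_batch_coordinate` flattens all batches into one axis while remembering which batch element each position came from. The same result can be sketched without TensorFlow (a NumPy sketch; the `_np` helper name is ours):

```python
import numpy as np

def get_batch_coordinate_np(batch_size: int, length: int) -> np.ndarray:
    # Each of the batch_size * length flattened positions records the index
    # of the batch element it originated from, as an int32.
    coord = np.repeat(np.arange(batch_size, dtype=np.int32), length)
    return coord.reshape(1, batch_size * length, 1)

bc = get_batch_coordinate_np(2, 3)
print(bc[0, :, 0])  # → [0 0 0 1 1 1]
```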
Please provide a description of the function:def expand_batch_coordinates(bc, length_factor): assert bc.get_shape().as_list() == [1, None, 1] # bc has shape [1, length, 1] bc *= tf.constant([[1] * length_factor]) # bc has shape [1, length, length_factor] bc = tf.reshape(bc, [1, -1, 1]) # bc has shape [1,...
[ "Duplicate elements of bc by length_factor.\n\n Args:\n bc (tf.Tensor): int32 tensor of shape [1, length, 1]\n length_factor (int):\n\n Returns:\n tf.Tensor: of shape [1, length*length_factor, 1] where every element has\n been duplicated length_factor times.\n " ]
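The broadcast-then-reshape trick in `expand_batch_coordinates` (multiply `[1, length, 1]` by a `[1, length_factor]` row of ones, then flatten) duplicates each element `length_factor` times consecutively. A NumPy sketch of the same steps (helper name is ours):

```python
import numpy as np

def expand_batch_coordinates_np(bc: np.ndarray, length_factor: int) -> np.ndarray:
    assert bc.shape[0] == 1 and bc.shape[2] == 1  # bc has shape [1, length, 1]
    # Broadcasting [1, length, 1] * [1, length_factor] -> [1, length, length_factor]
    bc = bc * np.ones((1, length_factor), dtype=bc.dtype)
    # Flattening puts the length_factor copies of each element next to each other.
    return bc.reshape(1, -1, 1)  # [1, length * length_factor, 1]
```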
Please provide a description of the function:def remove_pad(x, pad_remover, mode): # Concatenate all tokens (without padding) x = expert_utils.flatten_all_but_last(x) # Remove padding for training and eval if mode != ModeKeys.PREDICT: # This is a hack to allows inference when the <go> token # is det...
[ "Remove padding by concatenating all dimension into one.\n\n Args:\n x (tf.Tensor): input of shape [batch_size, length, depth]\n pad_remover (obj): a PadRemover object\n mode (ModeKeys): infer, train or eval. If inference, the padding remover is\n not applied\n\n Returns:\n tf.Tensor of shape [1,...
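The effect described for `remove_pad` — concatenate all sequences into one axis of shape `[1, nonpad_length, depth]` and drop padded positions — can be reproduced with plain NumPy boolean indexing (a sketch under the assumption that padding is given as a boolean mask; the names are ours):

```python
import numpy as np

def remove_pad_np(x: np.ndarray, pad_mask: np.ndarray) -> np.ndarray:
    # x: [batch, length, depth]; pad_mask: [batch, length], True where padded.
    # Flatten batch and length into one axis, then keep only non-pad positions.
    flat = x.reshape(-1, x.shape[-1])          # [batch * length, depth]
    kept = flat[~pad_mask.reshape(-1)]         # [nonpad_count, depth]
    return kept[np.newaxis]                    # [1, nonpad_count, depth]
```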
Please provide a description of the function:def attention_lm_moe_base(): hparams = common_hparams.basic_params1() hparams.hidden_size = 1024 hparams.batch_size = 8192 hparams.max_length = 256 hparams.dropout = 0.0 hparams.clip_grad_norm = 0. # i.e. no gradient clipping hparams.optimizer_adam_epsilon ...
[ "Set of hyperparameters.\n\n Suitable for 1 gpu.\n on lm1b_32k:\n ~229M params\n 0.9 steps/sec on [GeForce GTX TITAN X]\n\n Returns:\n a hparams object\n " ]
Please provide a description of the function:def attention_lm_moe_base_long_seq(): hparams = attention_lm_moe_base() hparams.max_length = 0 # max_length == batch_size hparams.eval_drop_long_sequences = True hparams.min_length_bucket = 256 # Avoid cyclic problems for big batches hparams.use_sepconv = Tru...
[ "Hyperparameter specifics for long sequence generation." ]
Please provide a description of the function:def attention_lm_moe_base_ae(): hparams = attention_lm_moe_base_long_seq() hparams.attention_type = AttentionType.LOCAL_EXPERTS hparams.learning_rate = 0.05 hparams.learning_rate_warmup_steps = 10000 # According to noam, ("n", "da") seems better for harder-to-l...
[ "Base model with attention expert." ]
Please provide a description of the function:def attention_lm_ae_extended(): hparams = attention_lm_moe_base_long_seq() hparams.attention_layers = "eeee" hparams.attention_local = True # hparams.factored_logits=1 # Necessary when the number of expert grow bigger hparams.attention_moe_k = 2 hparams.atten...
[ "Experiment with the exp_factor params." ]
Please provide a description of the function:def attention_lm_moe_base_memeff(): hparams = attention_lm_moe_base_long_seq() hparams.use_sepconv = False hparams.diet_experts = True hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" hparams.layer_prepostprocess_dropout = 0.0...
[ "Base model with attention expert." ]
Please provide a description of the function:def attention_lm_moe_small(): hparams = attention_lm_moe_base() hparams.num_hidden_layers = 4 hparams.hidden_size = 512 hparams.filter_size = 2048 hparams.moe_num_experts = 128 hparams.moe_layers = "2" return hparams
[ "Cheap model for single-gpu training.\n\n on lm1b_32k:\n ~312M params\n 1.6 steps/sec on [GeForce GTX TITAN X]\n After 50K steps on 8 GPUs (synchronous):\n eval_log_ppl_per_token = 3.31\n\n Returns:\n an hparams object.\n " ]
Please provide a description of the function:def attention_lm_attention_moe_tiny(): hparams = attention_lm_moe_small() hparams.moe_layers = "" hparams.attention_num_experts = 128 hparams.filter_size = 8192 hparams.attention_type = AttentionType.LOCAL_EXPERTS return hparams
[ "Cheap model for debugging.\n\n Returns:\n an hparams object.\n " ]
Please provide a description of the function:def attention_lm_moe_large(): hparams = attention_lm_moe_base() hparams.num_hidden_layers = 5 hparams.moe_layers = "3" hparams.hidden_size = 1024 hparams.num_heads = 16 hparams.filter_size = 4096 hparams.moe_hidden_sizes = "4096" hparams.moe_num_experts = ...
[ "Large model for distributed training.\n\n Over 1B parameters, so requires multi-gpu training due to memory\n requirements.\n\n on lm1b_32k:\n After 45K steps on 8 GPUs (synchronous):\n eval_log_ppl_per_token = 3.18\n eval_ppl_per_word = exp(1.107893 * eval_log_ppl_per_token) = 33.9\n\n Retur...
Please provide a description of the function:def attention_lm_moe_memory_efficient(): hparams = attention_lm_moe_large() hparams.diet_experts = True hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" hparams.layer_prepostprocess_dropout = 0.0 hparams.memory_efficient_ffn = ...
[ "Memory-efficient version." ]
Please provide a description of the function:def attention_lm_moe_24b_diet(): hparams = attention_lm_moe_large_diet() hparams.moe_hidden_sizes = "12288" hparams.moe_num_experts = 1024 hparams.batch_size = 4096 return hparams
[ "Unnecessarily large model with 24B params - because we can." ]
Please provide a description of the function:def attention_lm_moe_translation(): hparams = attention_lm_moe_base() hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" hparams.learning_rate = 0.4 hparams.prepend_mode = "prepend_inputs_masked_attention" hparams.max_length = 51...
[ "Version to use for seq2seq." ]
Please provide a description of the function:def attention_lm_moe_unscramble_base(): hparams = attention_lm_no_moe_small() hparams.use_inputs = True hparams.min_length_bucket = 1024 hparams.max_length = 1024 hparams.batch_size = 5000 hparams.layer_prepostprocess_dropout = 0.0 hparams.layer_preprocess_s...
[ "Version to use with languagemodel_wiki_scramble1k50." ]
Please provide a description of the function:def audio_bottom(x, model_hparams, vocab_size): del vocab_size # unused arg inputs = x with tf.variable_scope("audio_modality"): # TODO(aidangomez): Will need to sort out a better audio pipeline def xnet_resblock(x, filters, res_relu, name): wi...
[ "Transform input from data space to model space.\n\n Args:\n x: A Tensor with shape [batch, ...]\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n\n Returns:\n body_input: A Tensor with shape [batch, ?, ?,\n model_hparams.hidden_size].\n ", "Xception block."...
Please provide a description of the function:def image_targets_bottom(x, model_hparams, vocab_size): pixel_embedding_size = 64 inputs = x with tf.variable_scope("image_modality"): if not tf.executing_eagerly(): tf.summary.image( "targets_bottom", common_layers.tpu_safe_image_summa...
[ "Bottom transformation for target images." ]
Please provide a description of the function:def _image_channel_compress_bottom(inputs, model_hparams, name="bottom"): num_channels = 3 with tf.variable_scope(name): inputs = tf.to_float(inputs) hp = model_hparams if hp.mode != tf.estimator.ModeKeys.PREDICT: tf.summary.image( "inputs"...
[ "Compresses channel-wise input pixels into whole pixel representations.\n\n Performs conversion of RGB pixel values to a real number in the range -1 to\n 1. This combines pixel channels to form a representation of shape\n [img_len, img_len].\n\n Args:\n inputs: Tensor representing RGB pixel intensities as integ...
Please provide a description of the function:def image_channel_embeddings_bottom(x, model_hparams, vocab_size): del vocab_size # unused arg inputs = tf.to_int32(x) io_depth = model_hparams.num_channels tshape = common_layers.shape_list(inputs) hidden_size = model_hparams.hidden_size target_embeddings = ...
[ "Bottom transformation for image targets." ]
Please provide a description of the function:def speech_recognition_bottom(x, model_hparams, vocab_size): del vocab_size # unused arg inputs = x p = model_hparams num_mel_bins = p.audio_num_mel_bins num_channels = 3 if p.audio_add_delta_deltas else 1 with tf.variable_scope("speech_recognition_modality...
[ "Use batchnorm instead of CMVN and shorten the stft with strided convs.\n\n Args:\n x: float32 tensor with shape [batch_size, len, 1, freqs * channels]\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n\n Returns:\n float32 tensor with shape [batch_size, shorter_len,...
Please provide a description of the function:def get_weights(model_hparams, vocab_size, hidden_dim=None): if hidden_dim is None: hidden_dim = model_hparams.hidden_size num_shards = model_hparams.symbol_modality_num_shards shards = [] for i in range(num_shards): shard_size = (vocab_size // num_shards)...
[ "Create or get concatenated embedding or softmax variable.\n\n Args:\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n hidden_dim: dim of the variable. Defaults to _model_hparams' hidden_size\n\n Returns:\n a list of num_shards Tensors.\n " ]
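The truncated loop in `get_weights` splits the vocabulary across `num_shards` embedding shards. Based on the visible `shard_size = (vocab_size // num_shards)...` fragment, the standard way to make the shard sizes sum back to `vocab_size` is to give the first `vocab_size % num_shards` shards one extra row — a sketch of that size computation (an assumption about the elided remainder handling; helper name is ours):

```python
def shard_sizes(vocab_size: int, num_shards: int) -> list:
    # Each shard gets the floor share; the first (vocab_size % num_shards)
    # shards absorb one extra row each, so sum(shard_sizes) == vocab_size.
    return [vocab_size // num_shards + (1 if i < vocab_size % num_shards else 0)
            for i in range(num_shards)]

print(shard_sizes(10, 3))  # → [4, 3, 3]
```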
Please provide a description of the function:def _symbol_bottom_simple(x, model_hparams, vocab_size, name, reuse): with tf.variable_scope(name, reuse=reuse): # Ensure the inputs are 3-D if len(x.get_shape()) == 4: x = tf.squeeze(x, axis=3) while len(x.get_shape()) < 3: x = tf.expand_dims(x,...
[ "Bottom transformation for symbols." ]
Please provide a description of the function:def symbol_targets_bottom(x, model_hparams, vocab_size): if (model_hparams.shared_embedding_and_softmax_weights or model_hparams.get("shared_embedding")): try: return _symbol_bottom_simple( x, model_hparams, vocab_size, "shared", reuse=True) ...
[ "Bottom transformation for target symbols." ]
Please provide a description of the function:def video_bitwise_bottom(x, model_hparams, vocab_size): pixel_embedding_size = 64 inputs = x with tf.variable_scope("video_modality_bitwise", reuse=tf.AUTO_REUSE): common_layers.summarize_video(inputs, "bottom") # Embed bitwise. assert vocab_size == 256 ...
[ "Bottom transformation for embedding video bitwise." ]
Please provide a description of the function:def video_pixel_noise_bottom(x, model_hparams, vocab_size): input_noise = getattr(model_hparams, "video_modality_input_noise", 0.25) inputs = x if model_hparams.mode == tf.estimator.ModeKeys.TRAIN: background = tfp.stats.percentile(inputs, 50., axis=[0, 1, 2, 3]...
[ "Bottom transformation for video." ]
Please provide a description of the function:def convert_rgb_to_real(prediction, targets): prediction = tf.squeeze(prediction, axis=-1) prediction = common_layers.convert_rgb_to_real(prediction) targets = common_layers.convert_rgb_to_real(targets) return prediction, targets
[ "Convert prediction and target from rgb to real." ]
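`common_layers.convert_rgb_to_real` is not shown in this dump, but the `_image_channel_compress_bottom` docstring pins down the contract: RGB intensities mapped to real values in [-1, 1]. A common linear mapping consistent with that range (an assumption, not the confirmed implementation):

```python
import numpy as np

def rgb_to_real(pixels: np.ndarray) -> np.ndarray:
    # Linearly map uint8 intensities [0, 255] to float32 values in [-1, 1]:
    # 0 -> -1.0, 255 -> 1.0.
    return pixels.astype(np.float32) / 127.5 - 1.0
```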
Please provide a description of the function:def ctc_symbol_loss(top_out, targets, model_hparams, vocab_size, weight_fn): del model_hparams, vocab_size # unused arg logits = top_out with tf.name_scope("ctc_loss", values=[logits, targets]): # For CTC we assume targets are 1d, [batch, length, 1, 1] here. ...
[ "Compute the CTC loss." ]
Please provide a description of the function:def generic_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del vocab_size # unused arg logits = top_out logits = common_attention.maybe_upcast(logits, hparams=model_hparams) cutoff = getattr(model_hparams, "video_modality_loss_cutoff", 0.0) retu...
[ "Compute loss numerator and denominator for one shard of output." ]
Please provide a description of the function:def multi_label_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del vocab_size # unused arg logits = top_out num_labels = tf.shape(targets)[1] logits = tf.tile(logits, [1, num_labels, 1, 1, 1]) xent, weights = common_layers.padded_cross_entropy(...
[ "Average loss over the labels." ]
Please provide a description of the function:def one_hot_class_label_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del model_hparams, vocab_size # unused arg loss_scale = tf.losse...
[ "Apply softmax cross-entropy between outputs and targets.\n\n Args:\n top_out: logits Tensor with shape [batch, ?, ?, num_classes]\n targets: one-hot encoding Tensor with shape [batch, ?, ?, num_classes]\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n weights_fn...
Please provide a description of the function:def real_log_poisson_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del model_hparams, vocab_size # unused arg predictions = top_out if (len(comm...
[ "Poisson loss for real." ]
Please provide a description of the function:def sigmoid_class_label_loss(top_out, targets, model_hparams, vocab_size, weights_fn): # Expect inputs of size [batch-size, timesteps, 1, num-classes], wh...
[ "Loss for class label." ]
Please provide a description of the function:def video_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del vocab_size # unused arg logits = top_out logits = tf.reshape(logits, [-1] + common_layers.shape_list(logits)[2:]) targets = tf.reshape(targets, [-1] + common_layers.shape_list(targets)[2...
[ "Compute loss numerator and denominator for one shard of output." ]
Please provide a description of the function:def video_l1_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del vocab_size # unused arg logits = top_out logits = tf.reshape(logits, [-1] + common_layers.shape_list(logits)[2:-1]) targets = tf.reshape(targets, [-1] + common_layers.shape_list(targe...
[ "Compute loss numerator and denominator for one shard of output." ]
Please provide a description of the function:def video_l2_loss(top_out, targets, model_hparams, vocab_size, weights_fn): del vocab_size # unused arg logits = top_out logits = tf.reshape(logits, [-1] + common_layers.shape_list(logits)[2:-1]) targets = tf.reshape(targets, [-1] + common_layers.shape_list(targe...
[ "Compute loss numerator and denominator for one shard of output." ]
Please provide a description of the function:def class_label_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg with tf.variable_scope("class_label_modality_%d_%d" % ( vocab_size, model_hparams.hidden_size)): x = body_output x = tf.reduce_mean(x, axis=[1, 2], keepdims=T...
[ "Transform inputs from model space to target space.\n\n Average over inner dims and a linear layer to logits.\n\n Args:\n body_output: A Tensor with shape [batch, ?, ?, body_output_size].\n targets:\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n\n Returns:\n ...
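"Average over inner dims and a linear layer to logits" in `class_label_top` reduces `[batch, h, w, depth]` to one class-logit vector per example. A NumPy sketch of those two steps, with the dense layer's weight matrix passed in explicitly since the TF variable creation is elided (names are ours):

```python
import numpy as np

def class_label_top_np(body_output: np.ndarray, w: np.ndarray) -> np.ndarray:
    # body_output: [batch, h, w, depth]; w: [depth, vocab_size].
    # Mean-pool the two spatial axes, keeping dims so the result stays 4-D,
    # then project depth -> vocab_size with a linear layer.
    pooled = body_output.mean(axis=(1, 2), keepdims=True)  # [batch, 1, 1, depth]
    return pooled @ w                                      # [batch, 1, 1, vocab_size]
```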
Please provide a description of the function:def image_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg # TODO(lukaszkaiser): is this a universal enough way to get channels? num_channels = model_hparams.problem.num_channels with tf.variable_scope("rgb_softmax"): body_output...
[ "Top transformation for images." ]
Please provide a description of the function:def image_channel_compress_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg with tf.variable_scope("image_channel_compress_modality"): hidden_size = model_hparams.hidden_size img_len = model_hparams.img_len channels = 3 # RG...
[ "Transforms body output to return logits.\n\n Args:\n body_output: Tensor of shape [batch, img_len, img_len, depth].\n targets:\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n\n Returns:\n Tensor of shape [batch, img_len, img_len, channels, vocab_size].\n " ]
Please provide a description of the function:def image_channel_embeddings_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg with tf.variable_scope("image_channel_embeddings_bottom"): ...
[ "Top transformation for images." ]
Please provide a description of the function:def softmax_average_pooling_class_label_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg with tf.variable...
[ "Loss for class label." ]
Please provide a description of the function:def softmax_last_timestep_class_label_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg with tf.variable_scope( ...
[ "Loss for class label." ]
Please provide a description of the function:def softmax_max_pooling_class_label_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg with tf.variable_scope( "s...
[ "Loss for class label." ]
Please provide a description of the function:def symbol_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg if model_hparams.shared_embedding_and_softmax_weights: scope_name = "shared" reuse = tf.AUTO_REUSE else: scope_name = "softmax" reuse = False with tf.variabl...
[ "Generate logits.\n\n Args:\n body_output: A Tensor with shape\n [batch, p0, p1, model_hparams.hidden_size].\n targets: Unused.\n model_hparams: HParams, model hyperparameters.\n vocab_size: int, vocabulary size.\n\n Returns:\n logits: A Tensor with shape [batch, p0, p1, ?, vocab_size].\n " ]
Please provide a description of the function:def video_top(body_output, targets, model_hparams, vocab_size): del targets # unused arg num_channels = model_hparams.problem.num_channels shape = common_layers.shape_list(body_output) reshape_shape = shape[:-1] + [num_channels, vocab_size] res = tf.reshape(bod...
[ "Top transformation for video." ]
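The visible core of `video_top` is `reshape_shape = shape[:-1] + [num_channels, vocab_size]`: a fused trailing dimension of `num_channels * vocab_size` is split into separate per-channel logits. A NumPy sketch of just that reshape (helper name is ours):

```python
import numpy as np

def split_channel_logits(body_output: np.ndarray,
                         num_channels: int,
                         vocab_size: int) -> np.ndarray:
    # [..., num_channels * vocab_size] -> [..., num_channels, vocab_size],
    # so a softmax over the last axis scores each channel independently.
    return body_output.reshape(body_output.shape[:-1] + (num_channels, vocab_size))
```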
Please provide a description of the function:def video_l1_top(body_output, targets, model_hparams, vocab_size): del targets, vocab_size # unused arg num_channels = model_hparams.problem.num_channels num_frames = model_hparams.video_num_target_frames with tf.variable_scope("rgb"): body_output_shape = com...
[ "Top transformation for video." ]
Please provide a description of the function:def get_bottom(modality_type, value=None): if modality_type == ModalityType.AUDIO: return audio_bottom elif modality_type == ModalityType.AUDIO_SPECTRAL: return audio_spectral_bottom elif modality_type in (ModalityType.CLASS_LABEL, M...
[ "Gets default bottom transformation; if none available, return value." ]
Please provide a description of the function:def get_loss(modality_type, value=None): if modality_type in (ModalityType.AUDIO, ModalityType.AUDIO_SPECTRAL, ModalityType.CLASS_LABEL, ModalityType.IDENTITY, ModalityType.IDENT...
[ "Gets default loss transformation; if none available, return value." ]
Please provide a description of the function:def get_name(modality_type, value=None): # For legacy reasons, modalities vary in their naming scheme. Future plans are # to remove any need for get_name. We do not recommend using it. if modality_type == ModalityType.AUDIO: return lambda model_hparams, vocab_si...
[ "Gets default name for transformations; if none available, return value." ]
Please provide a description of the function:def get_targets_bottom(modality_type, value=None): if modality_type == ModalityType.AUDIO: return make_targets_bottom(audio_bottom) elif modality_type == ModalityType.AUDIO_SPECTRAL: return make_targets_bottom(audio_spectral_bottom) elif modality_type in (Mo...
[ "Gets default bottom transformation for targets; if none, return value." ]
Please provide a description of the function:def get_top(modality_type, value=None): if modality_type in (ModalityType.AUDIO, ModalityType.AUDIO_SPECTRAL, ModalityType.GENERIC_L2_LOSS, ModalityType.IDENTITY, ModalityType.ID...
[ "Gets default top transformation; if none available, return value." ]
Please provide a description of the function:def get_weights_fn(modality_type, value=None): if modality_type in (ModalityType.CTC_SYMBOL, ModalityType.IDENTITY_SYMBOL, ModalityType.MULTI_LABEL, ModalityType.SYMBOL, Modality...
[ "Gets default weights function; if none available, return value." ]
Please provide a description of the function:def create_combination(list_of_sentences): num_sentences = len(list_of_sentences) - 1 combinations = [] for i, _ in enumerate(list_of_sentences): if i == num_sentences: break num_pairs = num_sentences - i populated = num_pairs * [list_of_sentences[...
[ "Generates all possible pair combinations for the input list of sentences.\n\n For example:\n\n input = [\"paraphrase1\", \"paraphrase2\", \"paraphrase3\"]\n\n output = [(\"paraphrase1\", \"paraphrase2\"),\n (\"paraphrase1\", \"paraphrase3\"),\n (\"paraphrase2\", \"paraphrase3\")]\n\n Args...
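The truncated body of `create_combination` builds the pairs with explicit index arithmetic, but the docstring's input/output example is exactly the behavior of `itertools.combinations(..., 2)`. A behavioral sketch (not the original implementation; the name `pairwise_combinations` is ours):

```python
from itertools import combinations

def pairwise_combinations(list_of_sentences):
    # All unordered pairs, ordered by first element, matching the
    # docstring's example output for ["paraphrase1", "paraphrase2", "paraphrase3"].
    return list(combinations(list_of_sentences, 2))

print(pairwise_combinations(["p1", "p2", "p3"]))
# → [('p1', 'p2'), ('p1', 'p3'), ('p2', 'p3')]
```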
Please provide a description of the function:def image_transformer2d_base(): hparams = common_hparams.basic_params1() hparams.hidden_size = 512 hparams.batch_size = 1 hparams.max_length = 256 hparams.dropout = 0.0 hparams.clip_grad_norm = 0. # i.e. no gradient clipping hparams.optimizer_adam_epsilon =...
[ "Set of hyperparameters." ]
Please provide a description of the function:def imagetransformer2d_base_8l_8_32_big(): hparams = image_transformer2d_base() hparams.num_heads = 16 hparams.hidden_size = 1024 hparams.filter_size = 2048 hparams.num_decoder_layers = 8 hparams.batch_size = 1 hparams.layer_prepostprocess_dropout = 0.3 hp...
[ "hparams for 8 layer big 2d model for cifar 10." ]
Please provide a description of the function:def imagetransformer_base_10l_8h_big_uncond_dr03_dan_64_2d(): hparams = image_transformer2d_base() hparams.unconditional = True hparams.hidden_size = 512 hparams.batch_size = 1 hparams.img_len = 64 hparams.num_heads = 8 hparams.filter_size = 2048 hparams.b...
[ "big 1d model for unconditional generation on imagenet." ]
Please provide a description of the function:def img2img_transformer2d_base(): hparams = image_transformer2d_base() # learning related flags hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" # This version seems to benefit from a higher learning rate. hparams.learning_rate...
[ "Base params for img2img 2d attention." ]
Please provide a description of the function:def img2img_transformer2d_q3(): hparams = img2img_transformer2d_q1() hparams.batch_size = 2 hparams.query_shape = (8, 16) hparams.memory_flange = (8, 32) return hparams
[ "Current best hparams for local 2d." ]
Please provide a description of the function:def img2img_transformer_base(): hparams = image_transformer2d_base() # learning related flags hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" # This version seems to benefit from a higher learning rate. hparams.learning_rate =...
[ "Base params for local1d attention." ]
Please provide a description of the function:def img2img_transformer_b3(): hparams = img2img_transformer_base() hparams.batch_size = 2 hparams.layer_preprocess_sequence = "none" hparams.layer_postprocess_sequence = "dan" hparams.block_length = 128 hparams.sampling_temp = 0.9 return hparams
[ "Current best hparams for local 1d." ]
Please provide a description of the function:def img2img_transformer_dilated(): hparams = img2img_transformer_base() hparams.add_hparam("num_memory_blocks", 1) hparams.num_heads = 8 hparams.attention_key_channels = hparams.attention_value_channels = 0 hparams.hidden_size = 512 hparams.filter_size = 2048 ...
[ "Try dilated." ]
Please provide a description of the function:def img2img_transformer_base_tpu(): hparams = img2img_transformer_base() update_hparams_for_tpu(hparams) hparams.batch_size = 2 hparams.num_heads = 4 # heads are expensive on tpu hparams.num_decoder_layers = 8 hparams.num_encoder_layers = 4 hparams.shared_...
[ "Hparams for training img2img_transformer on tpu." ]
Please provide a description of the function:def img2img_transformer2d_n31(): hparams = img2img_transformer2d_base() hparams.batch_size = 1 hparams.num_encoder_layers = 6 hparams.num_decoder_layers = 12 hparams.num_heads = 8 hparams.query_shape = (16, 32) hparams.memory_flange = (16, 32) return hpara...
[ "Set of hyperparameters." ]
Please provide a description of the function:def img2img_transformer2d_n24(): hparams = img2img_transformer2d_base() hparams.batch_size = 1 hparams.hidden_size = 1024 hparams.filter_size = 2048 hparams.layer_prepostprocess_dropout = 0.2 hparams.num_decoder_layers = 8 hparams.query_shape = (8, 16) hpa...
[ "Set of hyperparameters." ]
Please provide a description of the function:def img2img_transformer2d_tiny(): hparams = img2img_transformer2d_base() hparams.num_decoder_layers = 2 hparams.hidden_size = 128 hparams.batch_size = 4 hparams.max_length = 128 hparams.attention_key_channels = hparams.attention_value_channels = 0 hparams.fi...
[ "Tiny params." ]
Please provide a description of the function:def img2img_transformer_tiny(): hparams = img2img_transformer2d_base() hparams.num_hidden_layers = 2 hparams.hidden_size = 128 hparams.batch_size = 4 hparams.max_length = 128 hparams.attention_key_channels = hparams.attention_value_channels = 0 hparams.filte...
[ "Tiny params." ]
Please provide a description of the function:def ResidualFeedForward(feature_depth, feedforward_depth, dropout, mode): return layers.Residual( layers.LayerNorm(), layers.Dense(feedforward_depth), layers.Relu(), layers.D...
[ "Residual feed-forward layer with normalization at start." ]
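The "normalization at start" structure (pre-norm residual feed-forward) can be sketched with NumPy; dropout is omitted and the weight matrices `w1`/`w2` are assumptions standing in for the two Dense layers:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the feature dimension.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_feed_forward(x, w1, w2):
    # Pre-norm residual FFN: x + Dense2(Relu(Dense1(LayerNorm(x)))).
    h = np.maximum(layer_norm(x) @ w1, 0.0)  # expand to feedforward_depth
    return x + h @ w2                        # project back to feature_depth
```

With `x` of shape (batch, feature_depth), `w1` maps feature_depth → feedforward_depth and `w2` maps back, so the residual addition is shape-compatible.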
Please provide a description of the function:def EncoderLayer(feature_depth, feedforward_depth, num_heads, dropout, mode): # The encoder block expects (activation, mask) as input and returns # the new activations only, we add the mask back to ou...
[ "Transformer encoder layer.\n\n The input to the encoder is a pair (embedded source, mask) where\n the mask is created from the original source to prevent attending\n to the padding part of the input.\n\n Args:\n feature_depth: int: depth of embedding\n feedforward_depth: int: depth of feed-forward layer...
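The mask carried alongside the activations keeps attention off the padding part of the input; one common realization (an assumption here, mirroring the usual additive-bias trick) turns a 0/1 padding mask into a large negative bias added to the attention logits:

```python
import numpy as np

def padding_bias(mask, neg=-1e9):
    # mask: (batch, length), 1.0 for real tokens, 0.0 for padding.
    # Returns (batch, 1, 1, length), broadcastable over heads and query positions.
    return (1.0 - mask[:, None, None, :]) * neg

mask = np.array([[1, 1, 0]], dtype=np.float32)
bias = padding_bias(mask)
# The padded position gets -1e9, so softmax assigns it ~0 attention weight.
```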
Please provide a description of the function:def TransformerEncoder(vocab_size, num_classes=10, feature_depth=512, feedforward_depth=2048, num_layers=6, num_heads=8, dropout=0.1, ...
[ "Transformer encoder.\n\n Args:\n vocab_size: int: vocab size\n num_classes: how many classes on output\n feature_depth: int: depth of embedding\n feedforward_depth: int: depth of feed-forward layer\n num_layers: int: number of encoder/decoder layers\n num_heads: int: number of attention heads\n...
Please provide a description of the function:def DecoderLayer(feature_depth, feedforward_depth, num_heads, dropout, mode): return layers.Serial( layers.Residual( # Self-attention block. layers.LayerNorm(), layers.Branch(...
[ "Transformer decoder layer.\n\n Args:\n feature_depth: int: depth of embedding\n feedforward_depth: int: depth of feed-forward layer\n num_heads: int: number of attention heads\n dropout: float: dropout rate (how much to drop out)\n mode: str: 'train' or 'eval'\n\n Returns:\n the layer.\n " ]
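Decoder self-attention is conventionally restricted so each position attends only to itself and earlier positions; a sketch of that lower-triangular bias (standard causal masking, assumed here rather than taken from the source):

```python
import numpy as np

def causal_bias(length, neg=-1e9):
    # (1, 1, length, length): 0 on/below the diagonal, large negative above it.
    mask = np.tril(np.ones((length, length)))
    return ((1.0 - mask) * neg)[None, None, :, :]

b = causal_bias(3)
# b[0, 0, 0, 2] is -1e9: position 0 cannot attend to position 2.
```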
Please provide a description of the function:def TransformerLM(vocab_size, feature_depth=512, feedforward_depth=2048, num_layers=6, num_heads=8, dropout=0.1, max_len=2048, mode='train'): re...
[ "Transformer language model (only uses the decoder part of Transformer).\n\n Args:\n vocab_size: int: vocab size\n feature_depth: int: depth of embedding\n feedforward_depth: int: depth of feed-forward layer\n num_layers: int: number of encoder/decoder layers\n num_heads: int: number of attention h...
Please provide a description of the function:def ChunkedDecoderLayer(feature_depth, feedforward_depth, num_heads, dropout, chunk_selector, mode): return layers.Serial( layers.Residual( # S...
[ "Transformer decoder layer operating on chunks.\n\n Args:\n feature_depth: int: depth of embedding\n feedforward_depth: int: depth of feed-forward layer\n num_heads: int: number of attention heads\n dropout: float: dropout rate (how much to drop out)\n chunk_selector: a function from chunk number t...
Please provide a description of the function:def ChunkedTransformerLM(vocab_size, feature_depth=512, feedforward_depth=2048, num_layers=6, num_heads=8, dropout=0.1, chunk...
[ "Transformer language model operating on chunks.\n\n The input to this model is a sequence presented as a list or tuple of chunks:\n (chunk1, chunk2, chunks3, ..., chunkN).\n Each chunk should have the same shape (batch, chunk-length) and together they\n represent a long sequence that's a concatenation chunk...
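The chunk inputs described above all share the same (batch, chunk-length) shape and concatenate back into the long sequence; a small helper (hypothetical, not part of the source) makes that decomposition concrete:

```python
import numpy as np

def split_into_chunks(x, chunk_length):
    # x: (batch, length) with length divisible by chunk_length.
    batch, length = x.shape
    assert length % chunk_length == 0
    n = length // chunk_length
    return [x[:, i * chunk_length:(i + 1) * chunk_length] for i in range(n)]

chunks = split_into_chunks(np.arange(16).reshape(2, 8), 4)
# Two chunks of shape (2, 4); concatenating them recovers the original sequence.
```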
Please provide a description of the function:def Transformer(source_vocab_size, target_vocab_size, mode='train', num_layers=6, feature_depth=512, feedforward_depth=2048, num_heads=8, dropout=0.1, ...
[ "Transformer model.\n\n Args:\n source_vocab_size: int: source vocab size\n target_vocab_size: int: target vocab size\n mode: str: 'train' or 'eval'\n num_layers: int: number of encoder/decoder layers\n feature_depth: int: depth of embedding\n feedforward_depth: int: depth of feed-forward layer\...
Please provide a description of the function:def mtf_transformer_base(): hparams = common_hparams.basic_params1() hparams.no_data_parallelism = True hparams.use_fixed_batch_size = True hparams.add_hparam("mtf_mode", True) hparams.batch_size = 64 hparams.max_length = 256 hparams.add_hparam("d_model", 51...
[ "Set of hyperparameters." ]
Please provide a description of the function:def mtf_transformer_tiny(): hparams = mtf_transformer_base() hparams.d_model = 128 hparams.d_ff = 512 hparams.batch_size = 8 hparams.encoder_layers = ["att", "drd"] * 2 hparams.decoder_layers = ["att", "enc_att", "drd"] * 2 hparams.num_heads = 8 # data par...
[ "Catch bugs locally..." ]
Please provide a description of the function:def mtf_transformer_paper_lm(size): n = 2 ** size hparams = mtf_transformer_base_lm() hparams.batch_size = 256 hparams.d_model = 1024 hparams.d_ff = int(8192 * n) hparams.d_kv = 256 hparams.num_heads = int(8 * n) hparams.shared_embedding_and_softmax_weight...
[ "Config for language-model experiments.\n\n Train these on languagemodel_lm1b32k_packed for 136000 steps (10 epochs)\n\n The size parameter is an integer that controls the number of heads and the\n size of the feedforward hidden layers. Increasing size by 1\n doubles each of these.\n\n Results:\n ...
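The scaling rule in the docstring (n = 2**size, with each unit increase in size doubling the heads and the feed-forward width) can be checked directly; a sketch under the base values shown in the code (8192 for d_ff, 8 heads, d_model fixed at 1024):

```python
def paper_lm_dims(size, base_d_ff=8192, base_heads=8):
    # n doubles with each unit increase in `size`; d_model stays fixed.
    n = 2 ** size
    return {"d_model": 1024, "d_ff": int(base_d_ff * n), "num_heads": int(base_heads * n)}

# size=0 gives the base config; size=1 doubles d_ff and num_heads; size=-1 halves them.
```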
Please provide a description of the function:def mtf_transformer_paper_tr(size): n = 2 ** size hparams = mtf_transformer_base() hparams.label_smoothing = 0.1 hparams.batch_size = 128 hparams.d_model = 1024 hparams.d_ff = int(4096 * n) hparams.num_heads = int(8 * n) hparams.shared_embedding_and_softma...
[ "Config for translation experiments.\n\n Train these on translate_enfr_wmt32k_packed for 154000 steps (3 epochs)\n\n The size parameter is an integer that controls the number of heads and the\n size of the feedforward hidden layers. Increasing size by 1\n doubles each of these.\n\n Args:\n size...
Please provide a description of the function:def mtf_transformer_lm_baseline(): hparams = mtf_transformer_paper_lm(-1) hparams.batch_size = 128 hparams.learning_rate_decay_steps = 27200 # one epoch on lm1b hparams.mesh_shape = "batch:8" return hparams
[ "Small language model to run on 1 TPU.\n\n Run this on 2x2 on languagemodel_lm1b32k_packed for 272000 steps (10 epochs)\n Results:\n params/10^9 log-ppl(per-token)\n 0.14 3.202\n\n Returns:\n a hparams\n " ]
Please provide a description of the function:def multihead_graph_attention(query_antecedent, memory_antecedent, bias, total_key_depth, total_value_depth, output_depth, ...
[ "Multihead scaled-dot-product attention with input/output transformations.\n\n Args:\n query_antecedent: a Tensor with shape [batch, length_q, channels]\n memory_antecedent: a Tensor with shape [batch, length_m, channels] or None\n bias: bias Tensor (see attention_bias())\n total_key_depth: an integer\...
Please provide a description of the function:def graph_attention(q, k, v, bias, dropout_rate=0.0, image_shapes=None, name=None, make_image_summary=True, save_we...
[ "Graph attention.\n\n Args:\n q: a Tensor with shape [batch, heads, length_q, depth_k]\n k: a Tensor with shape [batch, heads, length_kv, depth_k]\n v: a Tensor with shape [batch, heads, length_kv, depth_v]\n bias: bias Tensor (see attention_bias())\n dropout_rate: a floating point number\n image...
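The q/k/v shapes listed in the docstring follow the scaled dot-product pattern, with the bias added to the attention logits; a NumPy sketch (the 1/sqrt(depth_k) scaling inside this function is the usual convention and an assumption here, since some implementations scale the queries in the caller instead):

```python
import numpy as np

def dot_product_attention(q, k, v, bias=None):
    # q: (batch, heads, length_q, depth_k), k: (batch, heads, length_kv, depth_k),
    # v: (batch, heads, length_kv, depth_v) -> (batch, heads, length_q, depth_v).
    logits = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    if bias is not None:
        logits = logits + bias
    # Numerically stable softmax over the key dimension.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the weights sum to 1 over the key axis, attending over identical values returns those values unchanged.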