INSTRUCTION: string (lengths 1 to 46.3k)
RESPONSE: string (lengths 75 to 80.2k)
Computes the value loss. Args: value_net_apply: value net apply function with signature (params, ndarray of shape (B, T+1) + OBS) -> ndarray(B, T+1, 1) value_net_params: params of value_net_apply. observations: np.ndarray of shape (B, T+1) + OBS rewards: np.ndarray of shape (B, T) of rewards. ...
def value_loss(value_net_apply, value_net_params, observations, rewards, reward_mask, gamma=0.99): """Computes the value loss. Args: value_net_apply: value net apply function with signature (params, ndarray of shape (B, T+1) + OBS...
Computes the value loss given the prediction of the value function. Args: value_prediction: np.ndarray of shape (B, T+1, 1) rewards: np.ndarray of shape (B, T) of rewards. reward_mask: np.ndarray of shape (B, T), the mask over rewards. gamma: float, discount factor. Returns: The average L2 val...
def value_loss_given_predictions(value_prediction, rewards, reward_mask, gamma=0.99): """Computes the value loss given the prediction of the value function. Args: value_prediction: np.ndarray of shape (B, T+1, 1)...
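The two rows above describe the same computation at two levels of indirection. As a minimal NumPy sketch (the helper name rewards_to_go and the choice of discounted reward-to-go as the regression target are assumptions about the intent, not the library's exact code), the masked L2 value loss looks like:

import numpy as np

def rewards_to_go(rewards, reward_mask, gamma=0.99):
    # r2g[:, t] = r[:, t] + gamma * r2g[:, t+1], accumulated right-to-left.
    B, T = rewards.shape
    r2g = np.zeros((B, T))
    acc = np.zeros(B)
    for t in reversed(range(T)):
        acc = rewards[:, t] * reward_mask[:, t] + gamma * acc
        r2g[:, t] = acc
    return r2g

def value_loss_sketch(value_prediction, rewards, reward_mask, gamma=0.99):
    # value_prediction: (B, T+1, 1); only the first T predictions are scored
    # against the discounted rewards-to-go, with padding masked out.
    v = np.squeeze(value_prediction, axis=2)[:, :-1] * reward_mask
    target = rewards_to_go(rewards, reward_mask, gamma)
    return np.sum((v - target) ** 2) / np.sum(reward_mask)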
r"""Computes TD-residuals from V(s) and rewards. Where a `delta`, i.e. a td-residual is defined as: delta_{b,t} = r_{b,t} + \gamma * v_{b,t+1} - v_{b,t}. Args: predicted_values: ndarray of shape (B, T+1). NOTE: Expects axis 2 was squeezed. These represent V(s_bt) for b < B and t < T+1 rewards: nd...
def deltas(predicted_values, rewards, mask, gamma=0.99): r"""Computes TD-residuals from V(s) and rewards. Where a `delta`, i.e. a td-residual is defined as: delta_{b,t} = r_{b,t} + \gamma * v_{b,t+1} - v_{b,t}. Args: predicted_values: ndarray of shape (B, T+1). NOTE: Expects axis 2 was squeezed. Th...
r"""Computes the GAE advantages given the one step TD-residuals. The formula for a GAE advantage estimator is as follows: A_{bt} = \sum_{l=0}^{\infty}(\gamma * \lambda)^{l}(\delta_{b,t+l}). Internally we just call rewards_to_go, since it is the same computation. Args: td_deltas: np.ndarray of shape (B, ...
def gae_advantages(td_deltas, mask, lambda_=0.95, gamma=0.99): r"""Computes the GAE advantages given the one step TD-residuals. The formula for a GAE advantage estimator is as follows: A_{bt} = \sum_{l=0}^{\infty}(\gamma * \lambda)^{l}(\delta_{b,t+l}). Internally we just call rewards_to_go, since it is the s...
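As the docstring notes, the GAE sum is the same right-to-left discounted accumulation as rewards-to-go, only over the deltas and with discount gamma * lambda. A minimal NumPy sketch of both steps, with masking handled in the simplest plausible way:

import numpy as np

def td_deltas(predicted_values, rewards, mask, gamma=0.99):
    # delta_{b,t} = r_{b,t} + gamma * v_{b,t+1} - v_{b,t}, padding masked out.
    v = predicted_values  # (B, T+1), axis 2 already squeezed
    return (rewards + gamma * v[:, 1:] - v[:, :-1]) * mask

def gae_advantages_sketch(deltas, mask, lambda_=0.95, gamma=0.99):
    # A_{b,t} = sum_l (gamma * lambda)^l * delta_{b,t+l}: the rewards-to-go
    # recursion applied to the deltas with discount gamma * lambda.
    B, T = deltas.shape
    adv = np.zeros((B, T))
    acc = np.zeros(B)
    for t in reversed(range(T)):
        acc = deltas[:, t] + gamma * lambda_ * acc
        adv[:, t] = acc
    return adv * mask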
Picks out the log-probabilities of the chosen actions along batch and time-steps. Args: probab_observations: ndarray of shape `[B, T+1, A]`, where probab_observations[b, t, i] contains the log-probability of action = i at the t^th time-step in the b^th trajectory. actions: ndarray of shape `[B, T]`, with ea...
def chosen_probabs(probab_observations, actions): """Picks out the log-probabilities of the chosen actions along batch and time-steps. Args: probab_observations: ndarray of shape `[B, T+1, A]`, where probab_observations[b, t, i] contains the log-probability of action = i at the t^th time-step in the b^th traj...
Computes the probability ratios for each time-step in a trajectory. Args: p_new: ndarray of shape [B, T+1, A] of the log-probabilities that the policy network assigns to all the actions at each time-step in each batch using the new parameters. p_old: ndarray of shape [B, T+1, A], same as above, b...
def compute_probab_ratios(p_new, p_old, actions, reward_mask): """Computes the probability ratios for each time-step in a trajectory. Args: p_new: ndarray of shape [B, T+1, A] of the log-probabilities that the policy network assigns to all the actions at each time-step in each batch using the new p...
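Since both inputs are log-probabilities, the ratio pi_new(a|s) / pi_old(a|s) is an exponentiated difference over the chosen actions only. A NumPy sketch of the two functions above (the fancy-indexing helper is an assumption about how chosen_probabs selects entries):

import numpy as np

def chosen_log_probs(log_probs, actions):
    # log_probs: (B, T+1, A) log-probabilities; actions: (B, T) int indices.
    B, T = actions.shape
    b_idx = np.arange(B)[:, None]
    t_idx = np.arange(T)[None, :]
    return log_probs[b_idx, t_idx, actions]  # (B, T)

def probab_ratios(p_new, p_old, actions, reward_mask):
    # r_t = pi_new(a_t | s_t) / pi_old(a_t | s_t), computed in log space.
    logp_new = chosen_log_probs(p_new, actions)
    logp_old = chosen_log_probs(p_old, actions)
    return np.exp(logp_new - logp_old) * reward_mask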
PPO objective, negated so it can be minimized as a loss, given observations.
def ppo_loss(policy_net_apply, new_policy_params, old_policy_params, value_net_apply, value_net_params, padded_observations, padded_actions, padded_rewards, reward_mask, gamma=0.99, lambda_=...
PPO objective, negated so it can be minimized as a loss, given predictions.
def ppo_loss_given_predictions(log_probab_actions_new, log_probab_actions_old, predicted_values, padded_actions, padded_rewards, reward_mask, ...
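Given the probability ratios and advantages, the loss above evaluates the clipped surrogate objective from the PPO paper. A sketch of the core arithmetic (the clipping epsilon of 0.2 is an assumed default):

import numpy as np

def clipped_ppo_loss(ratios, advantages, reward_mask, epsilon=0.2):
    # min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t), averaged over
    # the valid (unmasked) time-steps.
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1 - epsilon, 1 + epsilon) * advantages
    objective = np.minimum(unclipped, clipped) * reward_mask
    # The loss carries the minus sign, so a minimizer maximizes the objective.
    return -np.sum(objective) / np.sum(reward_mask)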
Computes the combined (clipped loss + value loss) given predictions.
def combined_loss_given_predictions(log_probab_actions_new, log_probab_actions_old, value_prediction, padded_actions, padded_rewards, reward...
Computes the combined (clipped loss + value loss) given observations.
def combined_loss(new_params, old_params, policy_and_value_net_apply, padded_observations, padded_actions, padded_rewards, reward_mask, gamma=0.99, lambda_=0.95, ...
PPO optimizer step.
def ppo_opt_step(i, opt_state, ppo_opt_update, policy_net_apply, old_policy_params, value_net_apply, value_net_params, padded_observations, padded_actions, padded_rewa...
Value optimizer step.
def value_opt_step(i, opt_state, opt_update, value_net_apply, padded_observations, padded_rewards, reward_mask, gamma=0.99): """Value optimizer step.""" value_params = trax_opt.get_pa...
Policy and Value optimizer step.
def policy_and_value_opt_step(i, opt_state, opt_update, policy_and_value_net_apply, old_params, padded_observations, padded_actions, ...
Runs the training loop for PPO, with fixed policy and value nets.
def training_loop(env=None, env_name="CartPole-v0", epochs=EPOCHS, policy_net_fun=None, value_net_fun=None, policy_and_value_net_fun=None, policy_optimizer_fun=None, value_optimizer_fun=None, ...
Download corpora for multinli, if not already present. Args: tmp_dir: a string, directory to download into. Returns: a string, path to the extracted MNLI directory.
def _maybe_download_corpora(tmp_dir): """Download corpora for multinli, if not already present. Args: tmp_dir: a string, directory to download into. Returns: a string, path to the extracted MNLI directory. """ mnli_filename = "MNLI.zip" mnli_finalpath = os.path.join(tmp_dir, "MNLI") if not tf.gfile.Exists(mnli_finalpath): zip_filepath = generator_utils.maybe_download( tmp_di...
Generate mnli examples. Args: filename: a string Yields: dictionaries containing "premise", "hypothesis" and "label" strings
def _example_generator(filename): """Generate mnli examples. Args: filename: a string Yields: dictionaries containing "premise", "hypothesis" and "label" strings """ for idx, line in enumerate(tf.gfile.Open(filename, "rb")): if idx == 0: continue # skip header line = text_encoder.to_unicode_...
Adds a residual connection to the filter x for the shake-shake model.
def shake_shake_skip_connection(x, output_filters, stride, is_training): """Adds a residual connection to the filter x for the shake-shake model.""" curr_filters = common_layers.shape_list(x)[-1] if curr_filters == output_filters: return x stride_spec = [1, stride, stride, 1] # Skip path 1. path1 = tf.n...
Builds a two-branch convnet.
def shake_shake_branch(x, output_filters, stride, rand_forward, rand_backward, hparams): """Builds a two-branch convnet.""" is_training = hparams.mode == tf.estimator.ModeKeys.TRAIN x = tf.nn.relu(x) x = tf.layers.conv2d( x, output_filters, (3, 3), strides=(stride, st...
Builds a full shake-shake sub layer.
def shake_shake_block(x, output_filters, stride, hparams): """Builds a full shake-shake sub layer.""" is_training = hparams.mode == tf.estimator.ModeKeys.TRAIN batch_size = common_layers.shape_list(x)[0] # Generate random numbers for scaling the branches. rand_forward = [ tf.random_uniform( [...
Builds many sub layers into one full layer.
def shake_shake_layer(x, output_filters, num_blocks, stride, hparams): """Builds many sub layers into one full layer.""" for block_num in range(num_blocks): curr_stride = stride if (block_num == 0) else 1 with tf.variable_scope("layer_{}".format(block_num)): x = shake_shake_block(x, output_filters, cu...
Parameters for CIFAR-10. Gets to about 96% accuracy@700K steps, 1 GPU.
def shakeshake_small(): """Parameters for CIFAR-10. Gets to about 96% accuracy@700K steps, 1 GPU.""" hparams = common_hparams.basic_params1() hparams.batch_size = 128 hparams.hidden_size = 32 hparams.layer_prepostprocess_dropout = 0.0 hparams.dropout = 0 hparams.label_smoothing = 0.0 hparams.clip_grad_n...
Check if metric has plateaued. A metric has plateaued if the value has not increased/decreased (depending on `decrease`) by `delta` for at least `num_steps`. Args: steps: list<int> list of global steps for values. values: list<float> list of metric values. num_steps: int, number of steps the metric ...
def has_metric_plateaued(steps, values, num_steps=100, delta=0.1, decrease=True): """Check if metric has plateaued. A metric has plateaued if the value has not increased/decreased (depending on `decrease`) by `delta` for at least `num_steps`. Args: steps: list<int> list of global ...
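One plausible reading of this check, sketched in plain Python (the exact window handling is an assumption; the real function may differ in edge cases):

def has_metric_plateaued_sketch(steps, values, num_steps=100, delta=0.1,
                                decrease=True):
    # Look at the trailing window of at least num_steps global steps and ask
    # whether any value moved by more than delta in the watched direction,
    # relative to the value at the start of the window.
    if steps[-1] - steps[0] < num_steps:
        return False  # not enough history to decide
    start = next(i for i, s in enumerate(steps) if s >= steps[-1] - num_steps)
    ref = values[start]
    if decrease:
        return all(ref - v <= delta for v in values[start:])
    return all(v - ref <= delta for v in values[start:])

steps = list(range(0, 300, 50))
losses = [1.0, 0.6, 0.45, 0.44, 0.43, 0.44]
print(has_metric_plateaued_sketch(steps, losses))  # True: < 0.1 drop lately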
SAVP model hparams.
def next_frame_savp(): """SAVP model hparams.""" hparams = sv2p_params.next_frame_sv2p() hparams.add_hparam("z_dim", 8) hparams.add_hparam("num_discriminator_filters", 32) hparams.add_hparam("use_vae", True) hparams.add_hparam("use_gan", False) hparams.add_hparam("use_spectral_norm", True) hparams.add_h...
SAVP - VAE only model.
def next_frame_savp_vae(): """SAVP - VAE only model.""" hparams = next_frame_savp() hparams.use_vae = True hparams.use_gan = False hparams.latent_loss_multiplier = 1e-3 hparams.latent_loss_multiplier_schedule = "linear_anneal" return hparams
Default hyperparameters for a DietAdamOptimizer. Returns: a hyperparameters object.
def diet_adam_optimizer_params(): """Default hyperparameters for a DietAdamOptimizer. Returns: a hyperparameters object. """ return hparam.HParams( quantize=True, # use 16-bit fixed-point quantization_scale=10.0 / tf.int16.max, optimizer="DietAdam", learning_rate=1.0, learnin...
SAVP - GAN only model.
def next_frame_savp_gan(): """SAVP - GAN only model.""" hparams = next_frame_savp() hparams.use_gan = True hparams.use_vae = False hparams.gan_loss_multiplier = 0.001 hparams.optimizer_adam_beta1 = 0.5 hparams.learning_rate_constant = 2e-4 hparams.gan_loss = "cross_entropy" hparams.learning_rate_decay...
A two-layer feed-forward network with relu activation on hidden layer. Uses diet variables. Recomputes hidden layer on backprop to save activation memory. Args: x: a Tensor with shape [batch, io_size] hidden_size: an integer params: a diet variable HParams object. Returns: a Tensor with shape...
def diet_expert(x, hidden_size, params): """A two-layer feed-forward network with relu activation on hidden layer. Uses diet variables. Recomputes hidden layer on backprop to save activation memory. Args: x: a Tensor with shape [batch, io_size] hidden_size: an integer params: a diet variable HPara...
Quantize x according to params, optionally randomizing the rounding.
def _quantize(x, params, randomize=True): """Quantize x according to params, optionally randomizing the rounding.""" if not params.quantize: return x if not randomize: return tf.bitcast( tf.cast(x / params.quantization_scale, tf.int16), tf.float16) abs_x = tf.abs(x) sign_x = tf.sign(x) y =...
Dequantize q according to params.
def _dequantize(q, params): """Dequantize q according to params.""" if not params.quantize: return q return tf.to_float(tf.bitcast(q, tf.int16)) * params.quantization_scale
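Stripped of the TF bitcasts (which store the int16 payload inside a float16 tensor), the fixed-point scheme above is easy to state in NumPy. A sketch under those assumptions; the stochastic rounding makes dequantize(quantize(x)) unbiased:

import numpy as np

INT16_MAX = np.iinfo(np.int16).max
SCALE = 10.0 / INT16_MAX  # the quantization_scale default from the hparams above

def quantize(x, scale=SCALE, randomize=True):
    # Fixed point: q = x / scale, rounded. Adding uniform noise before the
    # floor gives stochastic rounding, so E[dequantize(quantize(x))] == x.
    y = x / scale
    if randomize:
        y = np.floor(y + np.random.uniform(size=np.shape(y)))
    else:
        y = np.round(y)
    return np.clip(y, -INT16_MAX - 1, INT16_MAX).astype(np.int16)

def dequantize(q, scale=SCALE):
    return q.astype(np.float32) * scale

x = np.array([0.003, -0.01, 0.5], dtype=np.float32)
roundtrip = dequantize(quantize(x, randomize=False))
assert np.max(np.abs(roundtrip - x)) <= SCALE  # within one quantum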
Create a custom variable getter for diet variables according to params.
def make_diet_var_getter(params): """Create a custom variable getter for diet variables according to params.""" def diet_var_initializer(shape, dtype, partition_info=None): """Initializer for a diet variable.""" del dtype del partition_info with common_layers.fn_device_dependency("diet_init") as o...
Call function with args; use diet variables according to params.
def _fn_with_diet_vars(fn, args, params): """Call function with args; use diet variables according to params.""" vs_ctr = [] def grad_fn(inputs, variables, outputs, output_grads): """Custom gradient function.""" del outputs # recomputing below with common_layers.fn_device_dependency("diet_grad", ...
Decorator for graph-building function to use diet variables.
def fn_with_diet_vars(params): """Decorator for graph-building function to use diet variables.""" params = copy.copy(params) def dec(fn): def wrapped(*args): return _fn_with_diet_vars(fn, args, params) return wrapped return dec
Create the factorized Adam accumulators for diet variables.
def create_slots(self, var): """Create the factorized Adam accumulators for diet variables.""" params = self.params shape = var.get_shape().as_list() if not hasattr(params, "slots"): params.slots = defaultdict(dict) name = var.op.name slots = params.slots[name] if params.factored_se...
Update the variable and its slots.
def update_variable(self, var, grad_var): """Update the variable and its slots.""" params = self.params global_step = tf.to_float(self.global_step) + 1 # compute learning rate lrate = params.learning_rate if params.learning_rate_decay_scheme == "noam": lrate *= tf.minimum(global_step * pa...
Construct EstimatorSpec for EVAL mode.
def estimator_spec_eval( self, features, logits, labels, loss, restore_hook, use_tpu): """Construct EstimatorSpec for EVAL mode.""" hparams = self.hparams problem = hparams.problem if logits.get_shape().ndims == 3: logits = tf.expand_dims(tf.expand_dims(logits, 2), 3) # Support for mult...
Generator for the dataset samples. If not present, download and extract the dataset. Args: tmp_dir: path to the directory where to download the dataset. pb_cst: CodingPbConstants object defining paths Yields: A CodingPbInfo object containing information about the next challenge.
def generator_samples(tmp_dir, pb_cst): """Generator for the dataset samples. If not present, download and extract the dataset. Args: tmp_dir: path to the directory where to download the dataset. pb_cst: CodingPbConstants object defining paths Yields: A CodingPbInfo object containing the next cha...
Adds a stack of LSTM layers on top of input. Args: inputs: The input `Tensor`, shaped `[batch_size, time_steps, hidden_size]`. sequence_length: Lengths of the actual input sequence, excluding padding; a `Tensor` shaped `[batch_size]`. hparams: HParams; hyperparameters. train: bool; `True` whe...
def lstm(inputs, sequence_length, hparams, train, name, initial_state=None): """Adds a stack of LSTM layers on top of input. Args: inputs: The input `Tensor`, shaped `[batch_size, time_steps, hidden_size]`. sequence_length: Lengths of the actual input sequence, excluding padding; a `Tensor` shaped ...
Run LSTM cell with attention on inputs of shape [batch x time x size]. Args: inputs: The decoder input `Tensor`, shaped `[batch_size, decoder_steps, hidden_size]`. hparams: HParams; hyperparameters. train: bool; `True` when constructing training graph to enable dropout. name: string; Create v...
def lstm_attention_decoder(inputs, hparams, train, name, initial_state, encoder_outputs, encoder_output_length, decoder_input_length): """Run LSTM cell with attention on inputs of shape [batch x time x size]. Args: inputs: The decoder input `Tensor`, shaped...
The basic LSTM seq2seq model, main step used for training.
def lstm_seq2seq_internal(inputs, targets, hparams, train): """The basic LSTM seq2seq model, main step used for training.""" with tf.variable_scope("lstm_seq2seq"): if inputs is not None: inputs_length = common_layers.length_from_embedding(inputs) # Flatten inputs. inputs = common_layers.flatt...
LSTM seq2seq model with attention, main step used for training.
def lstm_seq2seq_internal_attention(inputs, targets, hparams, train, inputs_length, targets_length): """LSTM seq2seq model with attention, main step used for training.""" with tf.variable_scope("lstm_seq2seq_attention"): # Flatten inputs. inputs = common_layers.flatten4d3...
Bidirectional LSTM for encoding inputs that are [batch x time x size].
def lstm_bid_encoder(inputs, sequence_length, hparams, train, name): """Bidirectional LSTM for encoding inputs that are [batch x time x size].""" with tf.variable_scope(name): cell_fw = tf.nn.rnn_cell.MultiRNNCell( [_dropout_lstm_cell(hparams, train) for _ in range(hparams.num_hidden_layers)])...
The basic LSTM seq2seq model with bidirectional encoder.
def lstm_seq2seq_internal_bid_encoder(inputs, targets, hparams, train): """The basic LSTM seq2seq model with bidirectional encoder.""" with tf.variable_scope("lstm_seq2seq_bid_encoder"): if inputs is not None: inputs_length = common_layers.length_from_embedding(inputs) # Flatten inputs. inputs...
LSTM seq2seq model with attention and a bidirectional encoder, main step used for training.
def lstm_seq2seq_internal_attention_bid_encoder(inputs, targets, hparams, train): """LSTM seq2seq model with attention and a bidirectional encoder, main step used for training.""" with tf.variable_scope("lstm_seq2seq_attention_bid_encoder"): inputs_length = common_layers.length_from_embeddin...
hparams for LSTM.
def lstm_seq2seq(): """hparams for LSTM.""" hparams = common_hparams.basic_params1() hparams.daisy_chain_variables = False hparams.batch_size = 1024 hparams.hidden_size = 128 hparams.num_hidden_layers = 2 hparams.initializer = "uniform_unit_scaling" hparams.initializer_gain = 1.0 hparams.weight_decay ...
Base attention params.
def lstm_attention_base(): """Base attention params.""" hparams = lstm_seq2seq() hparams.add_hparam("attention_layer_size", hparams.hidden_size) hparams.add_hparam("output_attention", True) hparams.add_hparam("num_heads", 1) return hparams
Basic LSTM Params.
def lstm_asr_v1(): """Basic LSTM Params.""" hparams = lstm_bahdanau_attention() hparams.num_hidden_layers = 2 hparams.hidden_size = 256 hparams.batch_size = 36 hparams.max_input_seq_length = 600000 hparams.max_target_seq_length = 350 hparams.max_length = hparams.max_input_seq_length hparams.min_length...
Hparams for LSTM with area attention.
def lstm_area_attention_base(): """Hparams for LSTM with area attention.""" hparams = lstm_luong_attention() hparams.batch_size = 16384 hparams.num_hidden_layers = 2 hparams.hidden_size = 1024 hparams.num_heads = 4 hparams.dropout = 0.2 hparams.learning_rate = 0.1 hparams.max_area_width = 2 hparams....
Create a run config. Args: hp: model hyperparameters Returns: a run config
def create_surrogate_run_config(hp): """Create a run config. Args: hp: model hyperparameters Returns: a run config """ save_ckpt_steps = max(FLAGS.iterations_per_loop, FLAGS.local_eval_frequency) save_ckpt_secs = FLAGS.save_checkpoints_secs or None if save_ckpt_secs: save_ckpt_steps = None ...
Construct input pipeline.
def prepare_data(problem, hparams, params, config): """Construct input pipeline.""" input_fn = problem.make_estimator_input_fn( tf.estimator.ModeKeys.EVAL, hparams, force_repeat=True) dataset = input_fn(params, config) features, _ = dataset.make_one_shot_iterator().get_next() inputs, labels = features["...
Transform a string with a filename into a list of float32. Args: s: path to the file with a waveform. Returns: samples: list of float32s
def encode(self, s): """Transform a string with a filename into a list of float32. Args: s: path to the file with a waveform. Returns: samples: list of float32s """ # Make sure that the data is a single channel, 16bit, 16kHz wave. # TODO(chorowski): the directory may not be writable,...
Transform a sequence of float32 into a waveform. Args: ids: list of integers to be converted. Returns: Path to the temporary file where the waveform was saved. Raises: ValueError: if the ids are not of the appropriate size.
def decode(self, ids): """Transform a sequence of float32 into a waveform. Args: ids: list of integers to be converted. Returns: Path to the temporary file where the waveform was saved. Raises: ValueError: if the ids are not of the appropriate size. """ _, tmp_file_path = te...
Creates and returns a new vertex. Returns: A new Vertex instance with a unique index.
def new_vertex(self): """Creates and returns a new vertex. Returns: A new Vertex instance with a unique index. """ vertex = Vertex(len(self.vertices)) self.vertices.append(vertex) return vertex
Returns or creates a Vertex mapped by key. Args: key: A string reference for a vertex. May refer to a new Vertex, in which case it will be created. Returns: The Vertex mapped to by key.
def get_vertex(self, key): """Returns or creates a Vertex mapped by key. Args: key: A string reference for a vertex. May refer to a new Vertex, in which case it will be created. Returns: The Vertex mapped to by key. """ if key in self.vertex_map: return self.vertex_map[ke...
Returns a new edge connecting source and target vertices. Args: source: The source Vertex. target: The target Vertex. Returns: A new Edge linking source to target.
def add_edge(self, source, target): """Returns a new edge connecting source and target vertices. Args: source: The source Vertex. target: The target Vertex. Returns: A new Edge linking source to target. """ edge = Edge(len(self.edges)) self.edges.append(edge) source.out_e...
Returns a simplified dictionary representing the Graph. Returns: A dictionary that can easily be serialized to JSON.
def to_dict(self): """Returns a simplified dictionary representing the Graph. Returns: A dictionary that can easily be serialized to JSON. """ return { "node": [v.to_dict() for v in self.vertices], "edge": [e.to_dict() for e in self.edges] }
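Taken together, the four methods above imply a small index-based graph structure. Here is one way the pieces could fit (the in_edges/out_edges field names and the Edge fields are assumptions inferred from add_edge, and to_dict is simplified):

class Vertex:
    def __init__(self, index):
        self.index = index
        self.in_edges, self.out_edges = [], []

class Edge:
    def __init__(self, index):
        self.index = index
        self.source, self.target = None, None

class Graph:
    def __init__(self):
        self.vertices, self.edges, self.vertex_map = [], [], {}

    def new_vertex(self):
        vertex = Vertex(len(self.vertices))
        self.vertices.append(vertex)
        return vertex

    def get_vertex(self, key):
        # Create the vertex on first reference, as described above.
        if key not in self.vertex_map:
            self.vertex_map[key] = self.new_vertex()
        return self.vertex_map[key]

    def add_edge(self, source, target):
        edge = Edge(len(self.edges))
        self.edges.append(edge)
        source.out_edges.append(edge.index)
        target.in_edges.append(edge.index)
        edge.source, edge.target = source.index, target.index
        return edge

    def to_dict(self):
        return {"node": [v.index for v in self.vertices],
                "edge": [(e.source, e.target) for e in self.edges]}

g = Graph()
e = g.add_edge(g.get_vertex("a"), g.get_vertex("b"))
assert g.vertices[e.source] is g.vertex_map["a"]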
Self-attention layer with source as memory antecedent.
def attend(x, source, hparams, name): """Self-attention layer with source as memory antecedent.""" with tf.variable_scope(name): x = tf.squeeze(x, axis=2) if len(source.get_shape()) > 3: source = tf.squeeze(source, axis=2) source = common_attention.add_timing_signal_1d(source) y = common_atten...
Calculate softmax(x), select top-k and rescale to sum to 1.
def top_k_softmax(x, k): """Calculate softmax(x), select top-k and rescale to sum to 1.""" x = tf.nn.softmax(x) top_x, _ = tf.nn.top_k(x, k=k+1) min_top = tf.reduce_min(top_x, axis=-1, keepdims=True) x = tf.nn.relu((x - min_top) + 1e-12) x /= tf.reduce_sum(x, axis=-1, keepdims=True) return x, tf.reduce_ma...
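The thresholding trick above keeps the top-k probabilities by subtracting the (k+1)-th largest value and clipping at zero before renormalizing. A NumPy sketch mirroring that arithmetic:

import numpy as np

def top_k_softmax(x, k):
    # Softmax, then keep only entries above the (k+1)-th largest probability
    # and renormalize the (shifted) survivors so they sum to 1.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    p = e / np.sum(e, axis=-1, keepdims=True)
    min_top = np.sort(p, axis=-1)[..., -(k + 1)][..., None]
    p = np.maximum((p - min_top) + 1e-12, 0.0)
    return p / np.sum(p, axis=-1, keepdims=True)

probs = top_k_softmax(np.array([3.0, 2.0, 0.5, -1.0]), k=2)
assert np.count_nonzero(probs > 1e-9) == 2 and abs(probs.sum() - 1.0) < 1e-6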
Compress x using strided convolutions.
def compress(x, c, is_2d, hparams, name): """Compress x using strided convolutions.""" with tf.variable_scope(name): # Run compression by strided convs. cur = x k1 = (3, 3) if is_2d else (3, 1) k2 = (2, 2) if is_2d else (2, 1) cur = residual_conv(cur, hparams.num_compress_steps, k1, hparams, "rc") if c is not None and h...
Original Transformer decoder.
def decode_transformer(encoder_output, encoder_decoder_attention_bias, targets, hparams, name, task=None, causal=True): """Original Transformer decoder.""" orig_hparams = hparams...
Latent prediction and loss.
def ae_latent_softmax(latents_pred, latents_discrete, hparams): """Latent prediction and loss.""" vocab_size = 2 ** hparams.z_size if hparams.num_decode_blocks < 2: latents_logits = tf.layers.dense(latents_pred, vocab_size, name="extra_logits") if hparams.logit_normali...
Sample from the latent space in the autoencoder.
def ae_latent_sample(latents_dense, inputs, ed, embed, iters, hparams): """Sample from the latent space in the autoencoder.""" if hparams.num_decode_blocks < 2 and hparams.sampling_temp == 0.0: # TODO(lukaszkaiser): beam-search only works in non-blocked mode for now. tf.logging.info("Running beam-search for...
AE Transformer, main step used for training.
def ae_transformer_internal(inputs, targets, target_space, hparams, cache=None, predict_mask=1.0): """AE Transformer, main step used for training.""" # Summaries break with the...
Set of hyperparameters.
def transformer_ae_small(): """Set of hyperparameters.""" hparams = transformer.transformer_small() hparams.batch_size = 2048 hparams.learning_rate = 0.2 hparams.learning_rate_warmup_steps = 4000 hparams.num_hidden_layers = 3 hparams.hidden_size = 384 hparams.filter_size = 2048 hparams.add_hparam("com...
Hyperparameters for CIFAR-10 experiments.
def imagetransformer_ae_cifar(): """Hyperparameters for CIFAR-10 experiments.""" hparams = transformer_ae_small() hparams.filter_size = 512 hparams.num_compress_steps = 3 hparams.startup_steps = 10000 hparams.is_2d = 0 hparams.learning_rate_warmup_steps = 8000 hparams.learning_rate = 0.2 hparams.hidde...
For 64x64 ImageNet. ~56M trainable variables.
def imagetransformer_ae_imagenet(): """For 64x64 ImageNet. ~56M trainable variables.""" hparams = imagetransformer_ae_cifar() hparams.max_length = int(64 * 64 * 3) hparams.img_len = 64 hparams.num_heads = 4 # Heads are expensive on TPUs. # Reduce architecture from 32x32 CIFAR-10 in order to fit in memory. ...
Set of hyperparameters.
def transformer_ae_base(): """Set of hyperparameters.""" hparams = transformer_ae_small() hparams.batch_size = 2048 hparams.hidden_size = 512 hparams.filter_size = 4096 hparams.num_hidden_layers = 6 return hparams
Set of hyperparameters.
def transformer_ae_a3(): """Set of hyperparameters.""" hparams = transformer_ae_base() hparams.batch_size = 4096 hparams.layer_prepostprocess_dropout = 0.3 hparams.optimizer = "Adafactor" hparams.learning_rate = 0.25 hparams.learning_rate_warmup_steps = 10000 return hparams
Set of hyperparameters.
def transformer_ae_base_noatt(): """Set of hyperparameters.""" hparams = transformer_ae_base() hparams.reshape_method = "slice" hparams.bottleneck_kind = "dvq" hparams.hidden_size = 512 hparams.num_blocks = 1 hparams.num_decode_blocks = 1 hparams.z_size = 12 hparams.do_attend_decompress = False retu...
Set of hyperparameters.
def transformer_ae_small_noatt(): """Set of hyperparameters.""" hparams = transformer_ae_small() hparams.reshape_method = "slice" hparams.bottleneck_kind = "dvq" hparams.hidden_size = 512 hparams.num_blocks = 1 hparams.num_decode_blocks = 1 hparams.z_size = 12 hparams.do_attend_decompress = False re...
Basic transformer_sketch hparams.
def transformer_sketch(): """Basic transformer_sketch hparams.""" hparams = transformer.transformer_small() hparams.num_compress_steps = 4 hparams.batch_size = 32 hparams.clip_grad_norm = 2. hparams.sampling_method = "random" return hparams
Get the layers module, working for both TF 1 and TF 2 for now.
def layers(): """Get the layers module, working for both TF 1 and TF 2 for now.""" global _cached_layers if _cached_layers is not None: return _cached_layers layers_module = tf.layers try: from tensorflow.python import tf2 # pylint: disable=g-direct-tensorflow-import,g-import-not-at-top if tf2.enable...
Like tf.nn.dropout but takes broadcast_dims instead of noise_shape. Instead of specifying noise_shape, this function takes broadcast_dims - a list of dimension numbers in which noise_shape should be 1. The random keep/drop tensor has dimensionality 1 along these dimensions. Args: x: a floating point tens...
def dropout_with_broadcast_dims(x, keep_prob, broadcast_dims=None, **kwargs): """Like tf.nn.dropout but takes broadcast_dims instead of noise_shape. Instead of specifying noise_shape, this function takes broadcast_dims - a list of dimension numbers in which noise_shape should be 1. The random keep/drop tensor...
Saturating sigmoid: 1.2 * sigmoid(x) - 0.1 cut to [0, 1].
def saturating_sigmoid(x): """Saturating sigmoid: 1.2 * sigmoid(x) - 0.1 cut to [0, 1].""" with tf.name_scope("saturating_sigmoid", values=[x]): y = tf.sigmoid(x) return tf.minimum(1.0, tf.maximum(0.0, 1.2 * y - 0.1))
Inverse-decay exponentially from 0.01 to 1.0 reached at max_step.
def inverse_exp_decay(max_step, min_value=0.01, step=None): """Inverse-decay exponentially from 0.01 to 1.0 reached at max_step.""" inv_base = tf.exp(tf.log(min_value) / float(max_step)) if step is None: step = tf.train.get_global_step() if step is None: return 1.0 step = to_float(step) return inv_b...
Inverse-decay linearly from 0.01 to 1.0 reached at max_step.
def inverse_lin_decay(max_step, min_value=0.01, step=None): """Inverse-decay linearly from 0.01 to 1.0 reached at max_step.""" if step is None: step = tf.train.get_global_step() if step is None: return 1.0 step = to_float(step) progress = tf.minimum(step / float(max_step), 1.0) return progress * (1....
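Both schedules rise from min_value up to 1.0, reached at max_step, exponentially and linearly respectively. Their closed forms are easy to check in plain Python (a sketch of the arithmetic, using an explicit step argument instead of the TF global step):

import numpy as np

def inverse_exp_decay(max_step, min_value=0.01, step=0):
    # Closed form: min_value ** (1 - step / max_step), so the value is
    # min_value at step 0 and reaches 1.0 at max_step.
    inv_base = np.exp(np.log(min_value) / float(max_step))
    return inv_base ** max(max_step - step, 0)

def inverse_lin_decay(max_step, min_value=0.01, step=0):
    progress = min(step / float(max_step), 1.0)
    return progress * (1.0 - min_value) + min_value

assert abs(inverse_exp_decay(1000, step=0) - 0.01) < 1e-9
assert abs(inverse_exp_decay(1000, step=1000) - 1.0) < 1e-9
assert abs(inverse_lin_decay(1000, step=500) - 0.505) < 1e-9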
The shake-shake sum of 2 tensors, python version.
def shakeshake2_py(x, y, equal=False, individual=False): """The shake-shake sum of 2 tensors, python version.""" if equal: alpha = 0.5 elif individual: alpha = tf.random_uniform(tf.shape(x)[:1]) else: alpha = tf.random_uniform([]) return alpha * x + (1.0 - alpha) * y
Overriding gradient for shake-shake of 2 tensors.
def shakeshake2_grad(x1, x2, dy): """Overriding gradient for shake-shake of 2 tensors.""" y = shakeshake2_py(x1, x2) dx = tf.gradients(ys=[y], xs=[x1, x2], grad_ys=[dy]) return dx
Overriding gradient for shake-shake of 2 tensors.
def shakeshake2_indiv_grad(x1, x2, dy): """Overriding gradient for shake-shake of 2 tensors.""" y = shakeshake2_py(x1, x2, individual=True) dx = tf.gradients(ys=[y], xs=[x1, x2], grad_ys=[dy]) return dx
Overriding gradient for shake-shake of 2 tensors.
def shakeshake2_equal_grad(x1, x2, dy): """Overriding gradient for shake-shake of 2 tensors.""" y = shakeshake2_py(x1, x2, equal=True) dx = tf.gradients(ys=[y], xs=[x1, x2], grad_ys=[dy]) return dx
Multi-argument shake-shake, currently approximated by sums of 2.
def shakeshake(xs, equal_grad=False): """Multi-argument shake-shake, currently approximated by sums of 2.""" if len(xs) == 1: return xs[0] div = (len(xs) + 1) // 2 arg1 = shakeshake(xs[:div], equal_grad=equal_grad) arg2 = shakeshake(xs[div:], equal_grad=equal_grad) if equal_grad: return shakeshake2_...
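The multi-argument version reduces the list pairwise: it splits the branches in half, recurses, and shake-shakes the two halves, so n branches become a binary tree of 2-way sums. A plain-Python sketch of that reduction (the gradient overriding above is omitted):

import numpy as np

def shakeshake2(x, y, alpha=None):
    # Forward pass: a random convex combination of the two branches.
    if alpha is None:
        alpha = np.random.uniform()
    return alpha * x + (1.0 - alpha) * y

def shakeshake(xs):
    # Approximate an n-way shake-shake by recursively combining halves.
    if len(xs) == 1:
        return xs[0]
    div = (len(xs) + 1) // 2
    return shakeshake2(shakeshake(xs[:div]), shakeshake(xs[div:]))

branches = [np.ones(3) * i for i in range(4)]
out = shakeshake(branches)  # a random convex mixture of the branches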
Conversion of pixel values to real numbers.
def convert_rgb_to_real(x): """Conversion of pixel values to real numbers.""" with tf.name_scope("rgb_to_real", values=[x]): x = to_float(x) x /= 255.0 return x
Conversion of pixel values to real numbers in [-1, 1].
def convert_rgb_to_symmetric_real(x): """Conversion of pixel values to real numbers in [-1, 1].""" with tf.name_scope("rgb_to_real", values=[x]): x = to_float(x) # Convert each pixel intensity in [0, 1, 2, ..., 255] into a real number in # the range [-1, 1]. x = (x / 127.5) - 1 return x
Make x n-d with squeeze and expand_dims.
def expand_squeeze_to_nd(x, n, squeeze_dim=2, expand_dim=-1): """Make x n-d with squeeze and expand_dims.""" if len(x.shape) > n: while len(x.shape) != n: x = tf.squeeze(x, [squeeze_dim]) else: while len(x.shape) != n: x = tf.expand_dims(x, expand_dim) return x
Image standardization on batches and videos.
def standardize_images(x): """Image standardization on batches and videos.""" with tf.name_scope("standardize_images", values=[x]): x_shape = shape_list(x) x = to_float(tf.reshape(x, [-1] + x_shape[-3:])) x_mean = tf.reduce_mean(x, axis=[1, 2], keepdims=True) x_variance = tf.reduce_mean( tf....
Flatten a 4d-tensor into a 3d-tensor by joining width and height.
def flatten4d3d(x): """Flatten a 4d-tensor into a 3d-tensor by joining width and height.""" xshape = shape_list(x) result = tf.reshape(x, [xshape[0], xshape[1] * xshape[2], xshape[3]]) return result
Version of tf.gather that works faster on tpu.
def gather(params, indices, dtype=tf.float32): """Version of tf.gather that works faster on tpu.""" if not is_xla_compiled(): return tf.gather(params, indices) vocab_size = params.get_shape().as_list()[0] indices_flat = tf.reshape(indices, [-1]) out = tf.matmul(tf.one_hot(indices_flat, vocab_size, dtype=d...
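Under XLA the dynamic gather is replaced by a one-hot matmul, which TPUs handle well. The same trick in NumPy (names here are illustrative):

import numpy as np

def one_hot_gather(params, indices):
    # Gather rows of params by multiplying with a one-hot matrix instead of
    # indexing dynamically.
    vocab_size = params.shape[0]
    flat = indices.reshape(-1)
    one_hot = np.eye(vocab_size)[flat]   # (N, vocab_size)
    out = one_hot @ params               # (N, depth)
    return out.reshape(indices.shape + params.shape[1:])

params = np.arange(12.0).reshape(4, 3)
idx = np.array([[0, 2], [3, 1]])
assert np.array_equal(one_hot_gather(params, idx), params[idx])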
TPU hack for tf.cumsum. This is equivalent to tf.cumsum and is faster on TPU as of 04/2018 unless the axis dimension is very large. Args: x: a Tensor axis: an integer exclusive: a boolean Returns: Tensor of the same shape as x.
def cumsum(x, axis=0, exclusive=False): """TPU hack for tf.cumsum. This is equivalent to tf.cumsum and is faster on TPU as of 04/2018 unless the axis dimension is very large. Args: x: a Tensor axis: an integer exclusive: a boolean Returns: Tensor of the same shape as x. """ if not is_xl...
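The hack is a matmul with a triangular mask: each output position sums all earlier inputs, which XLA compiles to a single dense product instead of a scan. A NumPy sketch over the last axis (the axis handling of the real function is more general):

import numpy as np

def cumsum_via_matmul(x, exclusive=False):
    # tri[i, j] = 1 if j <= i (j < i when exclusive), so row i of the output
    # is the sum of inputs up to position i.
    n = x.shape[-1]
    tri = np.tril(np.ones((n, n)), k=-1 if exclusive else 0)
    return x @ tri.T

x = np.array([[1.0, 2.0, 3.0]])
assert np.array_equal(cumsum_via_matmul(x), np.cumsum(x, axis=-1))
assert np.array_equal(cumsum_via_matmul(x, exclusive=True), [[0.0, 1.0, 3.0]])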
Like tf.nn.dropout, but does not scale up. Works on integers also. Args: x: a Tensor keep_prob: a floating point number Returns: Tensor of the same shape as x.
def dropout_no_scaling(x, keep_prob): """Like tf.nn.dropout, but does not scale up. Works on integers also. Args: x: a Tensor keep_prob: a floating point number Returns: Tensor of the same shape as x. """ if keep_prob == 1.0: return x mask = tf.less(tf.random_uniform(tf.shape(x)), keep_pr...
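Because there is no 1/keep_prob rescaling, the op preserves the input dtype, including integer dtypes such as symbol ids. A NumPy sketch:

import numpy as np

def dropout_no_scaling(x, keep_prob):
    # Zero out elements with probability 1 - keep_prob; no rescaling,
    # so integers stay integers.
    if keep_prob == 1.0:
        return x
    mask = np.random.uniform(size=x.shape) < keep_prob
    return x * mask.astype(x.dtype)

x = np.arange(10)
y = dropout_no_scaling(x, keep_prob=0.8)
assert y.dtype == x.dtype and np.all((y == x) | (y == 0))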
Embed x of type int64 into dense vectors, reducing to max 4 dimensions.
def embedding(x, vocab_size, dense_size, name=None, reuse=None, multiplier=1.0, symbol_dropout_rate=0.0, embedding_var=None, dtype=tf.float32): """Embed x of type int64 into dense vectors, reducing to max 4...
Shift the second dimension of x right by one.
def shift_right(x, pad_value=None): """Shift the second dimension of x right by one.""" if pad_value is None: shifted_targets = tf.pad(x, [[0, 0], [1, 0], [0, 0], [0, 0]])[:, :-1, :, :] else: shifted_targets = tf.concat([pad_value, x], axis=1)[:, :-1, :, :] return shifted_targets
Shift the second dimension of x right by one.
def shift_right_3d(x, pad_value=None): """Shift the second dimension of x right by one.""" if pad_value is None: shifted_targets = tf.pad(x, [[0, 0], [1, 0], [0, 0]])[:, :-1, :] else: shifted_targets = tf.concat([pad_value, x], axis=1)[:, :-1, :] return shifted_targets
Shift the second dimension of x right by one.
def shift_right_2d(x, pad_value=None): """Shift the second dimension of x right by one.""" if pad_value is None: shifted_targets = tf.pad(x, [[0, 0], [1, 0]])[:, :-1] else: shifted_targets = tf.concat([pad_value, x], axis=1)[:, :-1] return shifted_targets
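All three variants do the same thing at different ranks: prepend one step of padding along the time axis and drop the last step, so position t ends up holding the value that was at t - 1. The 2-d case in NumPy:

import numpy as np

def shift_right_2d(x, pad_value=None):
    # Prepend one step along axis 1 and drop the last step; pad_value
    # (or zero) fills the first column.
    if pad_value is None:
        return np.pad(x, [(0, 0), (1, 0)])[:, :-1]
    return np.concatenate([pad_value, x], axis=1)[:, :-1]

x = np.array([[1, 2, 3]])
assert np.array_equal(shift_right_2d(x), [[0, 1, 2]])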
Use a strided convolution to downsample x by 2, `nbr_steps` times. We use stride and filter size 2 to avoid the checkerboard problem of deconvs. As detailed in http://distill.pub/2016/deconv-checkerboard/. Args: x: a `Tensor` with shape `[batch, spatial, depth]` or `[batch, spatial_1, spatial_2, depth]...
def conv_stride2_multistep(x, nbr_steps, output_filters, name=None, reuse=None): """Use a strided convolution to downsample x by 2, `nbr_steps` times. We use stride and filter size 2 to avoid the checkerboard problem of deconvs. As detailed in http://distill.pub/2016/deconv-checkerboard/. Args: x: a `Tens...
Use a deconvolution to upsample x by 2**`nbr_steps`. Args: x: a `Tensor` with shape `[batch, spatial, depth]` or `[batch, spatial_1, spatial_2, depth]` nbr_steps: an int specifying the number of doubling upsample rounds to apply. output_filters: an int specifying the filter count for the deconv...
def deconv_stride2_multistep(x, nbr_steps, output_filters, name=None, reuse=None): """Use a deconvolution to upsample x by 2**`nbr_steps`. Args: x: a `Tensor` with shape `[batch, spatial, depth]`...
Conditional conv_fn making kernel 1d or 2d depending on inputs shape.
def conv_internal(conv_fn, inputs, filters, kernel_size, **kwargs): """Conditional conv_fn making kernel 1d or 2d depending on inputs shape.""" static_shape = inputs.get_shape() if not static_shape or len(static_shape) != 4: raise ValueError("Inputs to conv must have statically known rank 4. " ...
Sub-separable convolution. If separability == 0 it's a separable_conv.
def subseparable_conv(inputs, filters, kernel_size, **kwargs): """Sub-separable convolution. If separability == 0 it's a separable_conv.""" def conv_fn(inputs, filters, kernel_size, **kwargs): """Sub-separable convolution, splits into separability-many blocks.""" separability = None if "separability" i...
Version of conv1d that works on TPU (as of 11/2017). Args: inputs: a Tensor with shape [batch, length, input_depth]. filters: an integer. kernel_size: an integer. padding: a string - "SAME" or "LEFT". name: a string. Returns: a Tensor with shape [batch, length, filters].
def tpu_conv1d(inputs, filters, kernel_size, padding="SAME", name="tpu_conv1d"): """Version of conv1d that works on TPU (as of 11/2017). Args: inputs: a Tensor with shape [batch, length, input_depth]. filters: an integer. kernel_size: an integer. padding: a string - "SAME" or "LEFT". name: a st...
Create Variables for layer norm.
def layer_norm_vars(filters): """Create Variables for layer norm.""" scale = tf.get_variable( "layer_norm_scale", [filters], initializer=tf.ones_initializer()) bias = tf.get_variable( "layer_norm_bias", [filters], initializer=tf.zeros_initializer()) return scale, bias
Layer norm raw computation.
def layer_norm_compute(x, epsilon, scale, bias, layer_collection=None): """Layer norm raw computation.""" # Save these before they get converted to tensors by the casting below params = (scale, bias) epsilon, scale, bias = [cast_like(t, x) for t in [epsilon, scale, bias]] mean = tf.reduce_mean(x, axis=[-1],...
Layer normalize the tensor x, averaging over the last dimension.
def layer_norm(x, filters=None, epsilon=1e-6, name=None, reuse=None, layer_collection=None): """Layer normalize the tensor x, averaging over the last dimension.""" if filters is None: filters = shape_list(x)[-1] with tf.variable_scope(...
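Putting the three layer-norm rows together (variable creation, raw computation, wrapper): the core computation normalizes over the last dimension and then applies the learned scale and bias. A NumPy sketch of that computation:

import numpy as np

def layer_norm(x, scale, bias, epsilon=1e-6):
    # Normalize each position over the last dimension, then rescale.
    mean = np.mean(x, axis=-1, keepdims=True)
    variance = np.mean((x - mean) ** 2, axis=-1, keepdims=True)
    norm_x = (x - mean) / np.sqrt(variance + epsilon)
    return norm_x * scale + bias

x = np.random.randn(2, 5).astype(np.float32)
y = layer_norm(x, scale=np.ones(5), bias=np.zeros(5))
assert np.allclose(y.mean(-1), 0.0, atol=1e-5)
assert np.allclose(y.var(-1), 1.0, atol=1e-3)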