Please provide a description of the function:def ae_transformer_internal(inputs, targets, target_space, hparams, cache=None): # Encoder. inputs = common_layers.flatten4d3d(inputs) inputs, ed = encode(inputs, target_space, hparams, "input_enc") # Autoencoding. losses = {"extra": tf.constant(0.0), "latent_p...
[ "Main step used for training." ]
Please provide a description of the function:def transformer_nat_small(): hparams = transformer.transformer_small() hparams.batch_size = 2048 hparams.learning_rate = 0.2 hparams.learning_rate_warmup_steps = 4000 hparams.num_hidden_layers = 3 hparams.hidden_size = 384 hparams.filter_size = 2048 hparam...
[ "Set of hyperparameters." ]
Please provide a description of the function:def transformer_nat_base(): hparams = transformer_nat_small() hparams.batch_size = 2048 hparams.hidden_size = 512 hparams.filter_size = 4096 hparams.num_hidden_layers = 6 return hparams
[ "Set of hyperparameters." ]
Please provide a description of the function:def transformer_nat_big(): hparams = transformer_nat_small() hparams.batch_size = 2048 hparams.hidden_size = 1024 hparams.filter_size = 4096 hparams.num_hidden_layers = 6 hparams.num_heads = 16 hparams.layer_prepostprocess_dropout = 0.3 return hparams
[ "Set of hyperparameters." ]
Please provide a description of the function:def policy_net(rng_key, batch_observations_shape, num_actions, bottom_layers=None): # Use the bottom_layers as the bottom part of the network and just add the # required layers on top of it. if bottom_layers is None: ...
[ "A policy net function." ]
Please provide a description of the function:def value_net(rng_key, batch_observations_shape, num_actions, bottom_layers=None): del num_actions if bottom_layers is None: bottom_layers = [] bottom_layers.extend([ layers.Dense(1), ]) net = layers.Serial(*b...
[ "A value net function." ]
Please provide a description of the function:def policy_and_value_net(rng_key, batch_observations_shape, num_actions, bottom_layers=None): # Layers. cur_layers = [] if bottom_layers is not None: cur_layers.extend(bottom_layers) ...
[ "A policy and value net function." ]
Please provide a description of the function:def log_params(params, name="params"): for i, param in enumerate(params): if not param: # Empty tuple. continue if not isinstance(param, (list, tuple)): logging.error( "%s[%d] : (%s) = [%s]", name, i, param.shape, onp.array(param)) ...
[ "Dumps the params with `logging.error`." ]
Please provide a description of the function:def collect_trajectories(env, policy_fun, num_trajectories=1, policy="greedy", max_timestep=None, epsilon=0.1): trajectories = [] for t in ran...
[ "Collect trajectories with the given policy net and behaviour.\n\n Args:\n env: A gym env interface, for now this is not-batched.\n policy_fun: observations(B,T+1) -> log-probabs(B,T+1, A) callable.\n num_trajectories: int, number of trajectories.\n policy: string, \"greedy\", \"epsilon-greedy\", or \"...
Please provide a description of the function:def get_padding_value(dtype): padding_value = None if dtype == np.uint8: padding_value = np.uint8(0) elif dtype == np.uint16: padding_value = np.uint16(0) elif dtype == np.float32: padding_value = 0.0 else: padding_value = 0 assert padding_valu...
[ "Returns the padding value given a dtype." ]
Please provide a description of the function:def pad_trajectories(trajectories, boundary=20): # Let's compute max(t) over all trajectories. t_max = max(r.shape[0] for (_, _, r) in trajectories) # t_max is rounded to the next multiple of `boundary` boundary = int(boundary) bucket_length = boundary * int(n...
[ "Pad trajectories to a bucket length that is a multiple of boundary.\n\n Args:\n trajectories: list[(observation, actions, rewards)], where each observation\n is shaped (t+1,) + OBS and actions & rewards are shaped (t,), with the\n length of the list being B (batch size).\n boundary: int, bucket le...
Please provide a description of the function:def rewards_to_go(rewards, mask, gamma=0.99): B, T = rewards.shape # pylint: disable=invalid-name,unused-variable masked_rewards = rewards * mask # (B, T) # We use the following recurrence relation, derived from the equation above: # # r2g[t+1] = (r2g[t] - ...
[ "Computes rewards to go.\n\n Reward to go is defined as follows, the discounted reward that we have to\n yet collect, going forward from this point, i.e.:\n\n r2g_t = \\sum_{l=0}^{\\infty} (\\gamma^{l} * reward_{t+l})\n\n Args:\n rewards: np.ndarray of shape (B, T) of rewards.\n mask: np.ndarray of shape ...
Please provide a description of the function:def value_loss(value_net_apply, value_net_params, observations, rewards, reward_mask, gamma=0.99): B, T = rewards.shape # pylint: disable=invalid-name assert (B, T + 1) == observations.shape[...
[ "Computes the value loss.\n\n Args:\n value_net_apply: value net apply function with signature (params, ndarray of\n shape (B, T+1) + OBS) -> ndarray(B, T+1, 1)\n value_net_params: params of value_net_apply.\n observations: np.ndarray of shape (B, T+1) + OBS\n rewards: np.ndarray of shape (B, T) o...
Please provide a description of the function:def value_loss_given_predictions(value_prediction, rewards, reward_mask, gamma=0.99): B, T = rewards.shape # pylint: disable=invalid-name assert (B, T) == reward_mask....
[ "Computes the value loss given the prediction of the value function.\n\n Args:\n value_prediction: np.ndarray of shape (B, T+1, 1)\n rewards: np.ndarray of shape (B, T) of rewards.\n reward_mask: np.ndarray of shape (B, T), the mask over rewards.\n gamma: float, discount factor.\n\n Returns:\n The ...
Please provide a description of the function:def deltas(predicted_values, rewards, mask, gamma=0.99): # `d`s are basically one-step TD residuals. d = [] _, T = rewards.shape # pylint: disable=invalid-name for t in range(T): d.append(rewards[:, t] + (gamma * predicted_values[:, t + 1]) - p...
[ "Computes TD-residuals from V(s) and rewards.\n\n Where a `delta`, i.e. a td-residual is defined as:\n\n delta_{b,t} = r_{b,t} + \\gamma * v_{b,t+1} - v_{b,t}.\n\n Args:\n predicted_values: ndarray of shape (B, T+1). NOTE: Expects axis 2 was\n squeezed. These represent V(s_bt) for b < B and t < T+1\n ...
Please provide a description of the function:def gae_advantages(td_deltas, mask, lambda_=0.95, gamma=0.99): return rewards_to_go(td_deltas, mask, lambda_ * gamma)
[ "Computes the GAE advantages given the one step TD-residuals.\n\n The formula for a GAE advantage estimator is as follows:\n\n A_{bt} = \\sum_{l=0}^{\\infty}(\\gamma * \\lambda)^{l}(\\delta_{b,t+l}).\n\n Internally we just call rewards_to_go, since it is the same computation.\n\n Args:\n td_deltas: np.ndarra...
Please provide a description of the function:def chosen_probabs(probab_observations, actions): B, T = actions.shape # pylint: disable=invalid-name assert (B, T + 1) == probab_observations.shape[:2] return probab_observations[np.arange(B)[:, None], np.arange(T), actions]
[ "Picks out the probabilities of the actions along batch and time-steps.\n\n Args:\n probab_observations: ndarray of shape `[B, T+1, A]`, where\n probab_observations[b, t, i] contains the log-probability of action = i at\n the t^th time-step in the b^th trajectory.\n actions: ndarray of shape `[B, T...
Please provide a description of the function:def compute_probab_ratios(p_new, p_old, actions, reward_mask): B, T = actions.shape # pylint: disable=invalid-name assert (B, T + 1) == p_old.shape[:2] assert (B, T + 1) == p_new.shape[:2] logp_old = chosen_probabs(p_old, actions) logp_new = chosen_probabs(p_...
[ "Computes the probability ratios for each time-step in a trajectory.\n\n Args:\n p_new: ndarray of shape [B, T+1, A] of the log-probabilities that the policy\n network assigns to all the actions at each time-step in each batch using\n the old parameters.\n p_old: ndarray of shape [B, T+1, A], same ...
Please provide a description of the function:def ppo_loss(policy_net_apply, new_policy_params, old_policy_params, value_net_apply, value_net_params, padded_observations, padded_actions, padded_rewards, reward_mask, ...
[ "PPO objective, with an eventual minus sign, given observations." ]
Please provide a description of the function:def ppo_loss_given_predictions(log_probab_actions_new, log_probab_actions_old, predicted_values, padded_actions, padded_rewards, ...
[ "PPO objective, with an eventual minus sign, given predictions." ]
Please provide a description of the function:def combined_loss_given_predictions(log_probab_actions_new, log_probab_actions_old, value_prediction, padded_actions, padded_reward...
[ "Computes the combined (clipped loss + value loss) given predictions." ]
Please provide a description of the function:def combined_loss(new_params, old_params, policy_and_value_net_apply, padded_observations, padded_actions, padded_rewards, reward_mask, gamma=0.99, ...
[ "Computes the combined (clipped loss + value loss) given observations." ]
Please provide a description of the function:def ppo_opt_step(i, opt_state, ppo_opt_update, policy_net_apply, old_policy_params, value_net_apply, value_net_params, padded_observations, ...
[ "PPO optimizer step." ]
Please provide a description of the function:def value_opt_step(i, opt_state, opt_update, value_net_apply, padded_observations, padded_rewards, reward_mask, gamma=0.99): value_params...
[ "Value optimizer step." ]
Please provide a description of the function:def policy_and_value_opt_step(i, opt_state, opt_update, policy_and_value_net_apply, old_params, padded_observations, ...
[ "Policy and Value optimizer step.", "Returns the combined loss given just parameters." ]
Please provide a description of the function:def training_loop(env=None, env_name="CartPole-v0", epochs=EPOCHS, policy_net_fun=None, value_net_fun=None, policy_and_value_net_fun=None, policy_optimizer_fun=None, ...
[ "Runs the training loop for PPO, with fixed policy and value nets." ]
Please provide a description of the function:def _maybe_download_corpora(tmp_dir): mnli_filename = "MNLI.zip" mnli_finalpath = os.path.join(tmp_dir, "MNLI") if not tf.gfile.Exists(mnli_finalpath): zip_filepath = generator_utils.maybe_download( tmp_dir, mnli_filename, _MNLI_URL) zip_ref = zipfil...
[ "Download corpora for multinli.\n\n Args:\n tmp_dir: a string\n Returns:\n a string\n " ]
Please provide a description of the function:def _example_generator(filename): for idx, line in enumerate(tf.gfile.Open(filename, "rb")): if idx == 0: continue # skip header line = text_encoder.to_unicode_utf8(line.strip()) split_line = line.split("\t") # Works for both splits even though dev has ...
[ "Generate mnli examples.\n\n Args:\n filename: a string\n Yields:\n dictionaries containing \"premise\", \"hypothesis\" and \"label\" strings\n " ]
Please provide a description of the function:def shake_shake_skip_connection(x, output_filters, stride, is_training): curr_filters = common_layers.shape_list(x)[-1] if curr_filters == output_filters: return x stride_spec = [1, stride, stride, 1] # Skip path 1. path1 = tf.nn.avg_pool(x, [1, 1, 1, 1], st...
[ "Adds a residual connection to the filter x for the shake-shake model." ]
Please provide a description of the function:def shake_shake_branch(x, output_filters, stride, rand_forward, rand_backward, hparams): is_training = hparams.mode == tf.estimator.ModeKeys.TRAIN x = tf.nn.relu(x) x = tf.layers.conv2d( x, output_filters, (3, 3), strides=(st...
[ "Building a 2 branching convnet." ]
Please provide a description of the function:def shake_shake_block(x, output_filters, stride, hparams): is_training = hparams.mode == tf.estimator.ModeKeys.TRAIN batch_size = common_layers.shape_list(x)[0] # Generate random numbers for scaling the branches. rand_forward = [ tf.random_uniform( ...
[ "Builds a full shake-shake sub layer." ]
Please provide a description of the function:def shake_shake_layer(x, output_filters, num_blocks, stride, hparams): for block_num in range(num_blocks): curr_stride = stride if (block_num == 0) else 1 with tf.variable_scope("layer_{}".format(block_num)): x = shake_shake_block(x, output_filters, curr_s...
[ "Builds many sub layers into one full layer." ]
Please provide a description of the function:def shakeshake_small(): hparams = common_hparams.basic_params1() hparams.batch_size = 128 hparams.hidden_size = 32 hparams.layer_prepostprocess_dropout = 0.0 hparams.dropout = 0 hparams.label_smoothing = 0.0 hparams.clip_grad_norm = 0.0 # No clipping for no...
[ "Parameters for CIFAR-10. Gets to about 96% accuracy@700K steps, 1 GPU." ]
Please provide a description of the function:def has_metric_plateaued(steps, values, num_steps=100, delta=0.1, decrease=True): assert num_steps > 0 if len(steps) < 2: return False steps_at_least_num_steps_ago = [ s for s in steps if s <= (steps[-1] - num_steps) ] if not ...
[ "Check if metric has plateaued.\n\n A metric has plateaued if the value has not increased/decreased (depending on\n `decrease`) by `delta` for at least `num_steps`.\n\n Args:\n steps: list<int> list of global steps for values.\n values: list<float> list of metric values.\n num_steps: int, number of step...
Please provide a description of the function:def next_frame_savp(): hparams = sv2p_params.next_frame_sv2p() hparams.add_hparam("z_dim", 8) hparams.add_hparam("num_discriminator_filters", 32) hparams.add_hparam("use_vae", True) hparams.add_hparam("use_gan", False) hparams.add_hparam("use_spectral_norm", T...
[ "SAVP model hparams." ]
Please provide a description of the function:def next_frame_savp_vae(): hparams = next_frame_savp() hparams.use_vae = True hparams.use_gan = False hparams.latent_loss_multiplier = 1e-3 hparams.latent_loss_multiplier_schedule = "linear_anneal" return hparams
[ "SAVP - VAE only model." ]
Please provide a description of the function:def next_frame_savp_gan(): hparams = next_frame_savp() hparams.use_gan = True hparams.use_vae = False hparams.gan_loss_multiplier = 0.001 hparams.optimizer_adam_beta1 = 0.5 hparams.learning_rate_constant = 2e-4 hparams.gan_loss = "cross_entropy" hparams.le...
[ "SAVP - GAN only model." ]
Please provide a description of the function:def diet_adam_optimizer_params(): return hparam.HParams( quantize=True, # use 16-bit fixed-point quantization_scale=10.0 / tf.int16.max, optimizer="DietAdam", learning_rate=1.0, learning_rate_warmup_steps=2000, learning_rate_decay_sc...
[ "Default hyperparameters for a DietAdamOptimizer.\n\n Returns:\n a hyperparameters object.\n " ]
Please provide a description of the function:def diet_expert(x, hidden_size, params): @fn_with_diet_vars(params) def diet_expert_internal(x): dim = x.get_shape().as_list()[-1] h = tf.layers.dense(x, hidden_size, activation=tf.nn.relu, use_bias=False) y = tf.layers.dense(h, dim, use_bias=False) y...
[ "A two-layer feed-forward network with relu activation on hidden layer.\n\n Uses diet variables.\n Recomputes hidden layer on backprop to save activation memory.\n\n Args:\n x: a Tensor with shape [batch, io_size]\n hidden_size: an integer\n params: a diet variable HParams object.\n\n Returns:\n a T...
Please provide a description of the function:def _quantize(x, params, randomize=True): if not params.quantize: return x if not randomize: return tf.bitcast( tf.cast(x / params.quantization_scale, tf.int16), tf.float16) abs_x = tf.abs(x) sign_x = tf.sign(x) y = abs_x / params.quantization_...
[ "Quantize x according to params, optionally randomizing the rounding." ]
Please provide a description of the function:def _dequantize(q, params): if not params.quantize: return q return tf.to_float(tf.bitcast(q, tf.int16)) * params.quantization_scale
[ "Dequantize q according to params." ]
Please provide a description of the function:def make_diet_var_getter(params): def diet_var_initializer(shape, dtype, partition_info=None): del dtype del partition_info with common_layers.fn_device_dependency("diet_init") as out_deps: float_range = math.sqrt(3) ret = tf.random_unifor...
[ "Create a custom variable getter for diet variables according to params.", "Initializer for a diet variable.", "Get diet variable and return it dequantized." ]
Please provide a description of the function:def _fn_with_diet_vars(fn, args, params): vs_ctr = [] def grad_fn(inputs, variables, outputs, output_grads): del outputs # recomputing below with common_layers.fn_device_dependency("diet_grad", output_grads[0...
[ "Call function with args; use diet variables according to params.", "Custom gradient function." ]
Please provide a description of the function:def fn_with_diet_vars(params): params = copy.copy(params) def dec(fn): def wrapped(*args): return _fn_with_diet_vars(fn, args, params) return wrapped return dec
[ "Decorator for graph-building function to use diet variables." ]
Please provide a description of the function:def create_slots(self, var): params = self.params shape = var.get_shape().as_list() if not hasattr(params, "slots"): params.slots = defaultdict(dict) name = var.op.name slots = params.slots[name] if params.factored_second_moment_accumula...
[ "Create the factorized Adam accumulators for diet variables." ]
Please provide a description of the function:def update_variable(self, var, grad_var): params = self.params global_step = tf.to_float(self.global_step) + 1 # compute learning rate lrate = params.learning_rate if params.learning_rate_decay_scheme == "noam": lrate *= tf.minimum(global_step...
[ "Update the variable and its slots." ]
Please provide a description of the function:def estimator_spec_eval( self, features, logits, labels, loss, restore_hook, use_tpu): hparams = self.hparams problem = hparams.problem if logits.get_shape().ndims == 3: logits = tf.expand_dims(tf.expand_dims(logits, 2), 3) # Support for mul...
[ "Construct EstimatorSpec for EVAL mode." ]
Please provide a description of the function:def generator_samples(tmp_dir, pb_cst): # Step1: Download dataset (eventually) data_zip_path = generator_utils.maybe_download_from_drive( directory=tmp_dir, filename=_DATASET_FILENAME, url=_DATASET_URL, ) tf.logging.info("Data downloaded in: {}"....
[ "Generator for the dataset samples.\n\n If not present, download and extract the dataset.\n\n Args:\n tmp_dir: path to the directory where to download the dataset.\n pb_cst: CodingPbConstants object defining paths\n\n Yields:\n A CodingPbInfo object containing the next challenge informations.\n ", "C...
Please provide a description of the function:def lstm(inputs, sequence_length, hparams, train, name, initial_state=None): layers = [_dropout_lstm_cell(hparams, train) for _ in range(hparams.num_hidden_layers)] with tf.variable_scope(name): return tf.nn.dynamic_rnn( tf.nn.rnn_cell.MultiRNN...
[ "Adds a stack of LSTM layers on top of input.\n\n Args:\n inputs: The input `Tensor`, shaped `[batch_size, time_steps, hidden_size]`.\n sequence_length: Lengths of the actual input sequence, excluding padding; a\n `Tensor` shaped `[batch_size]`.\n hparams: HParams; hyperparameters.\n train: bool...
Please provide a description of the function:def lstm_attention_decoder(inputs, hparams, train, name, initial_state, encoder_outputs, encoder_output_length, decoder_input_length): layers = [_dropout_lstm_cell(hparams, train) for _ in range(hparams.n...
[ "Run LSTM cell with attention on inputs of shape [batch x time x size].\n\n Args:\n inputs: The decoder input `Tensor`, shaped `[batch_size, decoder_steps,\n hidden_size]`.\n hparams: HParams; hyperparameters.\n train: bool; `True` when constructing training graph to enable dropout.\n name: stri...
Please provide a description of the function:def lstm_seq2seq_internal(inputs, targets, hparams, train): with tf.variable_scope("lstm_seq2seq"): if inputs is not None: inputs_length = common_layers.length_from_embedding(inputs) # Flatten inputs. inputs = common_layers.flatten4d3d(inputs) ...
[ "The basic LSTM seq2seq model, main step used for training." ]
Please provide a description of the function:def lstm_seq2seq_internal_attention(inputs, targets, hparams, train, inputs_length, targets_length): with tf.variable_scope("lstm_seq2seq_attention"): # Flatten inputs. inputs = common_layers.flatten4d3d(inputs) # LSTM en...
[ "LSTM seq2seq model with attention, main step used for training." ]
Please provide a description of the function:def lstm_bid_encoder(inputs, sequence_length, hparams, train, name): with tf.variable_scope(name): cell_fw = tf.nn.rnn_cell.MultiRNNCell( [_dropout_lstm_cell(hparams, train) for _ in range(hparams.num_hidden_layers)]) cell_bw = tf.nn.rnn_cell....
[ "Bidirectional LSTM for encoding inputs that are [batch x time x size]." ]
Please provide a description of the function:def lstm_seq2seq_internal_bid_encoder(inputs, targets, hparams, train): with tf.variable_scope("lstm_seq2seq_bid_encoder"): if inputs is not None: inputs_length = common_layers.length_from_embedding(inputs) # Flatten inputs. inputs = common_layers....
[ "The basic LSTM seq2seq model with bidirectional encoder." ]
Please provide a description of the function:def lstm_seq2seq_internal_attention_bid_encoder(inputs, targets, hparams, train): with tf.variable_scope("lstm_seq2seq_attention_bid_encoder"): inputs_length = common_layers.length_from_embedding(inputs) # Flatten ...
[ "LSTM seq2seq model with attention, main step used for training." ]
Please provide a description of the function:def lstm_seq2seq(): hparams = common_hparams.basic_params1() hparams.daisy_chain_variables = False hparams.batch_size = 1024 hparams.hidden_size = 128 hparams.num_hidden_layers = 2 hparams.initializer = "uniform_unit_scaling" hparams.initializer_gain = 1.0 ...
[ "hparams for LSTM." ]
Please provide a description of the function:def lstm_attention_base(): hparams = lstm_seq2seq() hparams.add_hparam("attention_layer_size", hparams.hidden_size) hparams.add_hparam("output_attention", True) hparams.add_hparam("num_heads", 1) return hparams
[ "Base attention params." ]
Please provide a description of the function:def lstm_asr_v1(): hparams = lstm_bahdanau_attention() hparams.num_hidden_layers = 2 hparams.hidden_size = 256 hparams.batch_size = 36 hparams.max_input_seq_length = 600000 hparams.max_target_seq_length = 350 hparams.max_length = hparams.max_input_seq_length...
[ "Basic LSTM Params." ]
Please provide a description of the function:def lstm_area_attention_base(): hparams = lstm_luong_attention() hparams.batch_size = 16384 hparams.num_hidden_layers = 2 hparams.hidden_size = 1024 hparams.num_heads = 4 hparams.dropout = 0.2 hparams.learning_rate = 0.1 hparams.max_area_width = 2 hparam...
[ "Hparams for LSTM with area attention." ]
Please provide a description of the function:def create_surrogate_run_config(hp): save_ckpt_steps = max(FLAGS.iterations_per_loop, FLAGS.local_eval_frequency) save_ckpt_secs = FLAGS.save_checkpoints_secs or None if save_ckpt_secs: save_ckpt_steps = None assert FLAGS.surrogate_output_dir # the various c...
[ "Create a run config.\n\n Args:\n hp: model hyperparameters\n Returns:\n a run config\n " ]
Please provide a description of the function:def prepare_data(problem, hparams, params, config): input_fn = problem.make_estimator_input_fn( tf.estimator.ModeKeys.EVAL, hparams, force_repeat=True) dataset = input_fn(params, config) features, _ = dataset.make_one_shot_iterator().get_next() inputs, label...
[ "Construct input pipeline." ]
Please provide a description of the function:def encode(self, s): # Make sure that the data is a single channel, 16bit, 16kHz wave. # TODO(chorowski): the directory may not be writable, this should fallback # to a temp path, and provide instructions for installing sox. if s.endswith(".mp3"): ...
[ "Transform a string with a filename into a list of float32.\n\n Args:\n s: path to the file with a waveform.\n\n Returns:\n samples: list of int16s\n " ]
Please provide a description of the function:def decode(self, ids): _, tmp_file_path = tempfile.mkstemp() wavfile.write(tmp_file_path, self._sample_rate, np.asarray(ids)) return tmp_file_path
[ "Transform a sequence of float32 into a waveform.\n\n Args:\n ids: list of integers to be converted.\n\n Returns:\n Path to the temporary file where the waveform was saved.\n\n Raises:\n ValueError: if the ids are not of the appropriate size.\n " ]
Please provide a description of the function:def new_vertex(self): vertex = Vertex(len(self.vertices)) self.vertices.append(vertex) return vertex
[ "Creates and returns a new vertex.\n\n Returns:\n A new Vertex instance with a unique index.\n " ]
Please provide a description of the function:def get_vertex(self, key): if key in self.vertex_map: return self.vertex_map[key] vertex = self.new_vertex() self.vertex_map[key] = vertex return vertex
[ "Returns or Creates a Vertex mapped by key.\n\n Args:\n key: A string reference for a vertex. May refer to a new Vertex in which\n case it will be created.\n\n Returns:\n A the Vertex mapped to by key.\n " ]
Please provide a description of the function:def add_edge(self, source, target): edge = Edge(len(self.edges)) self.edges.append(edge) source.out_edges.append(edge.idx) target.in_edges.append(edge.idx) edge.source = source.idx edge.target = target.idx return edge
[ "Returns a new edge connecting source and target vertices.\n\n Args:\n source: The source Vertex.\n target: The target Vertex.\n\n Returns:\n A new Edge linking source to target.\n " ]
Please provide a description of the function:def to_dict(self): return { "node": [v.to_dict() for v in self.vertices], "edge": [e.to_dict() for e in self.edges] }
[ "Returns a simplified dictionary representing the Graph.\n\n Returns:\n A dictionary that can easily be serialized to JSON.\n " ]
Please provide a description of the function:def attend(x, source, hparams, name): with tf.variable_scope(name): x = tf.squeeze(x, axis=2) if len(source.get_shape()) > 3: source = tf.squeeze(source, axis=2) source = common_attention.add_timing_signal_1d(source) y = common_attention.multihead_...
[ "Self-attention layer with source as memory antecedent." ]
Please provide a description of the function:def top_k_softmax(x, k): x = tf.nn.softmax(x) top_x, _ = tf.nn.top_k(x, k=k+1) min_top = tf.reduce_min(top_x, axis=-1, keepdims=True) x = tf.nn.relu((x - min_top) + 1e-12) x /= tf.reduce_sum(x, axis=-1, keepdims=True) return x, tf.reduce_max(top_x, axis=-1)
[ "Calculate softmax(x), select top-k and rescale to sum to 1." ]
Please provide a description of the function:def compress(x, c, is_2d, hparams, name): with tf.variable_scope(name): # Run compression by strided convs. cur = x k1 = (3, 3) if is_2d else (3, 1) k2 = (2, 2) if is_2d else (2, 1) cur = residual_conv(cur, hparams.num_compress_steps, k1, hparams, "r...
[ "Compress." ]
Please provide a description of the function:def decode_transformer(encoder_output, encoder_decoder_attention_bias, targets, hparams, name, task=None, causal=True): orig_hparams...
[ "Original Transformer decoder." ]
Please provide a description of the function:def ae_latent_softmax(latents_pred, latents_discrete, hparams): vocab_size = 2 ** hparams.z_size if hparams.num_decode_blocks < 2: latents_logits = tf.layers.dense(latents_pred, vocab_size, name="extra_logits") if hparams.l...
[ "Latent prediction and loss." ]
Please provide a description of the function:def ae_latent_sample(latents_dense, inputs, ed, embed, iters, hparams): if hparams.num_decode_blocks < 2 and hparams.sampling_temp == 0.0: # TODO(lukaszkaiser): beam-search only works in non-blocked mode for now. tf.logging.info("Running beam-search for latents ...
[ "Sample from the latent space in the autoencoder." ]
Please provide a description of the function:def ae_transformer_internal(inputs, targets, target_space, hparams, cache=None, predict_mask=1.0): # Summaries break with the do_r...
[ "AE Transformer, main step used for training." ]
Please provide a description of the function:def transformer_ae_small(): hparams = transformer.transformer_small() hparams.batch_size = 2048 hparams.learning_rate = 0.2 hparams.learning_rate_warmup_steps = 4000 hparams.num_hidden_layers = 3 hparams.hidden_size = 384 hparams.filter_size = 2048 hparams...
[ "Set of hyperparameters." ]
Please provide a description of the function:def imagetransformer_ae_cifar(): hparams = transformer_ae_small() hparams.filter_size = 512 hparams.num_compress_steps = 3 hparams.startup_steps = 10000 hparams.is_2d = 0 hparams.learning_rate_warmup_steps = 8000 hparams.learning_rate = 0.2 hparams.hidden_...
[ "Hyperparameters for CIFAR-10 experiments." ]
Please provide a description of the function:def imagetransformer_ae_imagenet(): hparams = imagetransformer_ae_cifar() hparams.max_length = int(64 * 64 * 3) hparams.img_len = 64 hparams.num_heads = 4 # Heads are expensive on TPUs. # Reduce architecture from 32x32 CIFAR-10 in order to fit in memory. hpar...
[ "For 64x64 ImageNet. ~56M trainable variables." ]
Please provide a description of the function:def transformer_ae_base(): hparams = transformer_ae_small() hparams.batch_size = 2048 hparams.hidden_size = 512 hparams.filter_size = 4096 hparams.num_hidden_layers = 6 return hparams
[ "Set of hyperparameters." ]
Please provide a description of the function:def transformer_ae_a3(): hparams = transformer_ae_base() hparams.batch_size = 4096 hparams.layer_prepostprocess_dropout = 0.3 hparams.optimizer = "Adafactor" hparams.learning_rate = 0.25 hparams.learning_rate_warmup_steps = 10000 return hparams
[ "Set of hyperparameters." ]
Please provide a description of the function:def transformer_ae_base_noatt(): hparams = transformer_ae_base() hparams.reshape_method = "slice" hparams.bottleneck_kind = "dvq" hparams.hidden_size = 512 hparams.num_blocks = 1 hparams.num_decode_blocks = 1 hparams.z_size = 12 hparams.do_attend_decompres...
[ "Set of hyperparameters." ]
Please provide a description of the function:def transformer_ae_small_noatt(): hparams = transformer_ae_small() hparams.reshape_method = "slice" hparams.bottleneck_kind = "dvq" hparams.hidden_size = 512 hparams.num_blocks = 1 hparams.num_decode_blocks = 1 hparams.z_size = 12 hparams.do_attend_decompr...
[ "Set of hyperparameters." ]
Please provide a description of the function:def transformer_sketch(): hparams = transformer.transformer_small() hparams.num_compress_steps = 4 hparams.batch_size = 32 hparams.clip_grad_norm = 2. hparams.sampling_method = "random" return hparams
[ "Basic transformer_sketch hparams." ]
Please provide a description of the function:def layers(): global _cached_layers if _cached_layers is not None: return _cached_layers layers_module = tf.layers try: from tensorflow.python import tf2 # pylint: disable=g-direct-tensorflow-import,g-import-not-at-top if tf2.enabled(): tf.loggi...
[ "Get the layers module good for TF 1 and TF 2 work for now." ]
Please provide a description of the function:def dropout_with_broadcast_dims(x, keep_prob, broadcast_dims=None, **kwargs): assert "noise_shape" not in kwargs if broadcast_dims: shape = tf.shape(x) ndims = len(x.get_shape()) # Allow dimensions like "-1" as well. broadcast_dims = [dim + ndims if di...
[ "Like tf.nn.dropout but takes broadcast_dims instead of noise_shape.\n\n Instead of specifying noise_shape, this function takes broadcast_dims -\n a list of dimension numbers in which noise_shape should be 1. The random\n keep/drop tensor has dimensionality 1 along these dimensions.\n\n Args:\n x: a floatin...
Please provide a description of the function:def saturating_sigmoid(x): with tf.name_scope("saturating_sigmoid", values=[x]): y = tf.sigmoid(x) return tf.minimum(1.0, tf.maximum(0.0, 1.2 * y - 0.1))
[ "Saturating sigmoid: 1.2 * sigmoid(x) - 0.1 cut to [0, 1]." ]
Please provide a description of the function:def inverse_exp_decay(max_step, min_value=0.01, step=None): inv_base = tf.exp(tf.log(min_value) / float(max_step)) if step is None: step = tf.train.get_global_step() if step is None: return 1.0 step = to_float(step) return inv_base**tf.maximum(float(max_...
[ "Inverse-decay exponentially from 0.01 to 1.0 reached at max_step." ]
Please provide a description of the function:def inverse_lin_decay(max_step, min_value=0.01, step=None): if step is None: step = tf.train.get_global_step() if step is None: return 1.0 step = to_float(step) progress = tf.minimum(step / float(max_step), 1.0) return progress * (1.0 - min_value) + min_...
[ "Inverse-decay linearly from 0.01 to 1.0 reached at max_step." ]
Please provide a description of the function:def shakeshake2_py(x, y, equal=False, individual=False): if equal: alpha = 0.5 elif individual: alpha = tf.random_uniform(tf.shape(x)[:1]) else: alpha = tf.random_uniform([]) return alpha * x + (1.0 - alpha) * y
[ "The shake-shake sum of 2 tensors, python version." ]
Please provide a description of the function:def shakeshake2_grad(x1, x2, dy): y = shakeshake2_py(x1, x2) dx = tf.gradients(ys=[y], xs=[x1, x2], grad_ys=[dy]) return dx
[ "Overriding gradient for shake-shake of 2 tensors." ]
Please provide a description of the function:def shakeshake2_indiv_grad(x1, x2, dy): y = shakeshake2_py(x1, x2, individual=True) dx = tf.gradients(ys=[y], xs=[x1, x2], grad_ys=[dy]) return dx
[ "Overriding gradient for shake-shake of 2 tensors." ]
Please provide a description of the function:def shakeshake2_equal_grad(x1, x2, dy): y = shakeshake2_py(x1, x2, equal=True) dx = tf.gradients(ys=[y], xs=[x1, x2], grad_ys=[dy]) return dx
[ "Overriding gradient for shake-shake of 2 tensors." ]
Please provide a description of the function:def shakeshake(xs, equal_grad=False): if len(xs) == 1: return xs[0] div = (len(xs) + 1) // 2 arg1 = shakeshake(xs[:div], equal_grad=equal_grad) arg2 = shakeshake(xs[div:], equal_grad=equal_grad) if equal_grad: return shakeshake2_eqgrad(arg1, arg2) retu...
[ "Multi-argument shake-shake, currently approximated by sums of 2." ]
Please provide a description of the function:def convert_rgb_to_real(x): with tf.name_scope("rgb_to_real", values=[x]): x = to_float(x) x /= 255.0 return x
[ "Conversion of pixel values to real numbers." ]
Please provide a description of the function:def convert_rgb_to_symmetric_real(x): with tf.name_scope("rgb_to_real", values=[x]): x = to_float(x) # Convert each pixel intensity in [0, 1, 2, ..., 255] into a real number in # the range [-1, 1]. x = (x / 127.5) - 1 return x
[ "Conversion of pixel values to real numbers." ]
Please provide a description of the function:def expand_squeeze_to_nd(x, n, squeeze_dim=2, expand_dim=-1): if len(x.shape) > n: while len(x.shape) != n: x = tf.squeeze(x, [squeeze_dim]) else: while len(x.shape) != n: x = tf.expand_dims(x, expand_dim) return x
[ "Make x n-d with squeeze and expand_dims." ]
Please provide a description of the function:def standardize_images(x): with tf.name_scope("standardize_images", values=[x]): x_shape = shape_list(x) x = to_float(tf.reshape(x, [-1] + x_shape[-3:])) x_mean = tf.reduce_mean(x, axis=[1, 2], keepdims=True) x_variance = tf.reduce_mean( tf.squar...
[ "Image standardization on batches and videos." ]
Please provide a description of the function:def flatten4d3d(x): xshape = shape_list(x) result = tf.reshape(x, [xshape[0], xshape[1] * xshape[2], xshape[3]]) return result
[ "Flatten a 4d-tensor into a 3d-tensor by joining width and height." ]
Please provide a description of the function:def gather(params, indices, dtype=tf.float32): if not is_xla_compiled(): return tf.gather(params, indices) vocab_size = params.get_shape().as_list()[0] indices_flat = tf.reshape(indices, [-1]) out = tf.matmul(tf.one_hot(indices_flat, vocab_size, dtype=dtype), ...
[ "Version of tf.gather that works faster on tpu." ]
Please provide a description of the function:def cumsum(x, axis=0, exclusive=False): if not is_xla_compiled(): return tf.cumsum(x, axis=axis, exclusive=exclusive) x_shape = shape_list(x) rank = len(x_shape) length = x_shape[axis] my_range = tf.range(length) comparator = tf.less if exclusive else tf.l...
[ "TPU hack for tf.cumsum.\n\n This is equivalent to tf.cumsum and is faster on TPU as of 04/2018 unless\n the axis dimension is very large.\n\n Args:\n x: a Tensor\n axis: an integer\n exclusive: a boolean\n\n Returns:\n Tensor of the same shape as x.\n " ]
Please provide a description of the function:def dropout_no_scaling(x, keep_prob): if keep_prob == 1.0: return x mask = tf.less(tf.random_uniform(tf.shape(x)), keep_prob) return x * cast_like(mask, x)
[ "Like tf.nn.dropout, but does not scale up. Works on integers also.\n\n Args:\n x: a Tensor\n keep_prob: a floating point number\n\n Returns:\n Tensor of the same shape as x.\n " ]