Please provide a description of the function:def _get_vqa_v2_image_raw_dataset(directory, image_root_url, image_urls): for url in image_urls: filename = os.path.basename(url) download_url = os.path.join(image_root_url, url) path = generator_utils.maybe_download(directory, filename, download_url) un...
[ "Extract the VQA V2 image data set to directory unless it's there." ]
Please provide a description of the function:def _get_vqa_v2_image_feature_dataset( directory, feature_url, feature_filename="mscoco_feat.tar.gz"): feature_file = generator_utils.maybe_download_from_drive( directory, feature_filename, feature_url) with tarfile.open(feature_file, "r:gz") as feature_tar:...
[ "Extract the VQA V2 feature data set to directory unless it's there." ]
Please provide a description of the function:def _parse_fail(name, var_type, value, values): raise ValueError( 'Could not parse hparam \'%s\' of type \'%s\' with value \'%s\' in %s' % (name, var_type.__name__, value, values))
[ "Helper function for raising a value error for bad assignment." ]
Please provide a description of the function:def _process_scalar_value(name, parse_fn, var_type, m_dict, values, results_dictionary): try: parsed_value = parse_fn(m_dict['val']) except ValueError: _parse_fail(name, var_type, m_dict['val'], values) # If no index is provided ...
[ "Update results_dictionary with a scalar value.\n\n Used to update the results_dictionary to be returned by parse_values when\n encountering a clause with a scalar RHS (e.g. \"s=5\" or \"arr[0]=5\".)\n\n Mutates results_dictionary.\n\n Args:\n name: Name of variable in assignment (\"s\" or \"arr\").\n pa...
Please provide a description of the function:def _process_list_value(name, parse_fn, var_type, m_dict, values, results_dictionary): if m_dict['index'] is not None: raise ValueError('Assignment of a list to a list index.') elements = filter(None, re.split('[ ,]', m_dict['vals'])) # M...
[ "Update results_dictionary from a list of values.\n\n Used to update results_dictionary to be returned by parse_values when\n encountering a clause with a list RHS (e.g. \"arr=[1,2,3]\".)\n\n Mutates results_dictionary.\n\n Args:\n name: Name of variable in assignment (\"arr\").\n parse_fn: Function for ...
Please provide a description of the function:def _cast_to_type_if_compatible(name, param_type, value): fail_msg = ( "Could not cast hparam '%s' of type '%s' from value %r" % (name, param_type, value)) # Some callers use None, for which we can't do any casting/checking. :( if issubclass(param_type,...
[ "Cast hparam to the provided type, if compatible.\n\n Args:\n name: Name of the hparam to be cast.\n param_type: The type of the hparam.\n value: The value to be cast, if compatible.\n\n Returns:\n The result of casting `value` to `param_type`.\n\n Raises:\n ValueError: If the type of `value` is n...
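The compatibility rules behind `_cast_to_type_if_compatible` can be sketched in plain Python. This is a hedged reconstruction from the truncated snippet and docstring above, not the verbatim source: `None` passes through, a bool value only matches a bool hparam, floats are never truncated to ints, and numeric hparams refuse strings.

```python
import numbers

def cast_if_compatible(name, param_type, value):
    # Assumed reconstruction of the casting rules, not the exact source.
    fail_msg = ("Could not cast hparam '%s' of type '%s' from value %r"
                % (name, param_type, value))
    if value is None:
        return None  # Some callers use None; skip casting/checking.
    # bool casts only to/from bool (bool is an int subclass in Python,
    # so this check must come first).
    if issubclass(param_type, bool) != isinstance(value, bool):
        raise ValueError(fail_msg)
    # Never silently truncate a float into an integer hparam.
    if (issubclass(param_type, numbers.Integral)
            and not isinstance(value, numbers.Integral)):
        raise ValueError(fail_msg)
    # Numeric hparams do not parse values out of strings here.
    if (issubclass(param_type, numbers.Number)
            and not isinstance(value, numbers.Number)):
        raise ValueError(fail_msg)
    return param_type(value)
```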
Please provide a description of the function:def parse_values(values, type_map, ignore_unknown=False): results_dictionary = {} pos = 0 while pos < len(values): m = PARAM_RE.match(values, pos) if not m: raise ValueError('Malformed hyperparameter value: %s' % values[pos:]) # Check that there is...
[ "Parses hyperparameter values from a string into a python map.\n\n `values` is a string containing comma-separated `name=value` pairs.\n For each pair, the value of the hyperparameter named `name` is set to\n `value`.\n\n If a hyperparameter name appears multiple times in `values`, a ValueError\n is raised (e....
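The comma-separated `name=value` format described above can be illustrated with a deliberately minimal parser. `CLAUSE_RE` below is a simplified stand-in for the real `PARAM_RE` (it handles only scalar and bracketed-list clauses, not indexed assignments like `arr[0]=5`):

```python
import re

# Matches "name=scalar" or "name=[v1,v2,...]" clauses, comma-separated.
CLAUSE_RE = re.compile(r"(\w+)=(\[[^\]]*\]|[^,]*),?")

def parse_simple(values, type_map):
    # type_map maps each hyperparameter name to its element type.
    results = {}
    for name, raw in CLAUSE_RE.findall(values):
        parse = type_map[name]
        if raw.startswith("["):
            # List RHS: split the bracket contents and parse each element.
            results[name] = [parse(v) for v in raw[1:-1].split(",") if v]
        else:
            results[name] = parse(raw)
    return results
```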
Please provide a description of the function:def add_hparam(self, name, value): # Keys in kwargs are unique, but 'name' could be the name of a pre-existing # attribute of this object. In that case we refuse to use it as a # hyperparameter name. if getattr(self, name, None) is not None: raise Va...
[ "Adds {name, value} pair to hyperparameters.\n\n Args:\n name: Name of the hyperparameter.\n value: Value of the hyperparameter. Can be one of the following types:\n int, float, string, int list, float list, or string list.\n\n Raises:\n ValueError: if one of the arguments is invalid.\n ...
Please provide a description of the function:def set_hparam(self, name, value): param_type, is_list = self._hparam_types[name] if isinstance(value, list): if not is_list: raise ValueError( 'Must not pass a list for single-valued parameter: %s' % name) setattr(self, name, [ ...
[ "Set the value of an existing hyperparameter.\n\n This function verifies that the type of the value matches the type of the\n existing hyperparameter.\n\n Args:\n name: Name of the hyperparameter.\n value: New value of the hyperparameter.\n\n Raises:\n KeyError: If the hyperparameter does...
Please provide a description of the function:def del_hparam(self, name): if hasattr(self, name): delattr(self, name) del self._hparam_types[name]
[ "Removes the hyperparameter with key 'name'.\n\n Does nothing if it isn't present.\n\n Args:\n name: Name of the hyperparameter.\n " ]
Please provide a description of the function:def parse(self, values): type_map = {} for name, t in self._hparam_types.items(): param_type, _ = t type_map[name] = param_type values_map = parse_values(values, type_map) return self.override_from_dict(values_map)
[ "Override existing hyperparameter values, parsing new values from a string.\n\n See parse_values for more detail on the allowed format for values.\n\n Args:\n values: String. Comma separated list of `name=value` pairs where 'value'\n must follow the syntax described above.\n\n Returns:\n ...
Please provide a description of the function:def override_from_dict(self, values_dict): for name, value in values_dict.items(): self.set_hparam(name, value) return self
[ "Override existing hyperparameter values, parsing new values from a dictionary.\n\n Args:\n values_dict: Dictionary of name:value pairs.\n\n Returns:\n The `HParams` instance.\n\n Raises:\n KeyError: If a hyperparameter in `values_dict` doesn't exist.\n ValueError: If `values_dict` cann...
Please provide a description of the function:def to_json(self, indent=None, separators=None, sort_keys=False): def remove_callables(x): if isinstance(x, dict): return {k: remove_callables(v) for k, v in six.iteritems(x) if not callable(v)} elif isinstance(x, list): ...
[ "Serializes the hyperparameters into JSON.\n\n Args:\n indent: If a non-negative integer, JSON array elements and object members\n will be pretty-printed with that indent level. An indent level of 0, or\n negative, will only insert newlines. `None` (the default) selects the\n most compa...
Please provide a description of the function:def parse_json(self, values_json): values_map = json.loads(values_json) return self.override_from_dict(values_map)
[ "Override existing hyperparameter values, parsing new values from a json object.\n\n Args:\n values_json: String containing a json object of name:value pairs.\n\n Returns:\n The `HParams` instance.\n\n Raises:\n KeyError: If a hyperparameter in `values_json` doesn't exist.\n ValueError:...
Please provide a description of the function:def values(self): return {n: getattr(self, n) for n in self._hparam_types.keys()}
[ "Return the hyperparameter values as a Python dictionary.\n\n Returns:\n A dictionary with hyperparameter names as keys. The values are the\n hyperparameter values.\n " ]
Please provide a description of the function:def get(self, key, default=None): if key in self._hparam_types: # Ensure that default is compatible with the parameter type. if default is not None: param_type, is_param_list = self._hparam_types[key] type_str = 'list<%s>' % param_type if...
[ "Returns the value of `key` if it exists, else `default`." ]
Please provide a description of the function:def _get_kind_name(param_type, is_list): if issubclass(param_type, bool): # This check must happen before issubclass(param_type, six.integer_types), # since Python considers bool to be a subclass of int. typename = 'bool' elif issubclass(param_...
[ "Returns the field name given parameter type and is_list.\n\n Args:\n param_type: Data type of the hparam.\n is_list: Whether this is a list.\n\n Returns:\n A string representation of the field name.\n\n Raises:\n ValueError: If parameter type is not recognized.\n " ]
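The ordering comment in `_get_kind_name` matters because Python's `bool` is a subclass of `int`, so a dispatch that tested the integer types first would classify `True`/`False` as ints. The field-name scheme below (`<type>_<suffix>`) is assumed from the docstring, not copied from the source:

```python
# bool is an int subclass, so any type dispatch must test bool first.
assert issubclass(bool, int)

def kind_name(param_type, is_list):
    # Assumed reconstruction of the "<type>_<list|value>" naming scheme.
    if issubclass(param_type, bool):   # must precede the int check
        typename = "bool"
    elif issubclass(param_type, int):
        typename = "int64"
    elif issubclass(param_type, float):
        typename = "float"
    elif issubclass(param_type, str):
        typename = "bytes"
    else:
        raise ValueError("Unsupported parameter type: %s" % str(param_type))
    return "%s_%s" % (typename, "list" if is_list else "value")
```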
Please provide a description of the function:def process(self, query): tf.logging.info("Processing new query [%s]" % query) # Create the new TFDBG hook directory. hook_dir = "/tmp/t2t_server_dump/request_%d" % int(time.time()) os.makedirs(hook_dir) hooks = [tfdbg.DumpingDebugHook(hook_dir, watch...
[ "Returns the visualizations for query.\n\n Args:\n query: The query to process.\n\n Returns:\n A dictionary of results with processing and graph visualizations.\n ", "Generator that returns just the current query.", "Generator that returns just the current query." ]
Please provide a description of the function:def _default_output_dir(): try: dataset_name = gin.query_parameter("inputs.dataset_name") except ValueError: dataset_name = "random" dir_name = "{model_name}_{dataset_name}_{timestamp}".format( model_name=gin.query_parameter("train.model").configurable...
[ "Default output directory." ]
Please provide a description of the function:def _setup_gin(): # Imports for configurables # pylint: disable=g-import-not-at-top,unused-import,g-bad-import-order,reimported,unused-variable from tensor2tensor.trax import models as _trax_models from tensor2tensor.trax import optimizers as _trax_opt # pylint:...
[ "Setup gin configuration." ]
Please provide a description of the function:def train_and_eval_dataset(dataset_name, data_dir): if dataset_name.startswith("v1_"): return _train_and_eval_dataset_v1(dataset_name[3:], data_dir) dataset_builder = tfds.builder(dataset_name, data_dir=data_dir) info = dataset_builder.info splits = dataset_bu...
[ "Return train and evaluation datasets, feature info and supervised keys.\n\n Args:\n dataset_name: a string, the name of the dataset; if it starts with \"v1_\"\n then we'll search T2T Problem registry for it, otherwise we assume it\n is a dataset from TFDS and load it from there.\n data_dir: direct...
Please provide a description of the function:def _make_info(shape_list, num_classes): feature_info = collections.namedtuple("FeatureInfo", ["shape", "num_classes"]) cur_shape = list(shape_list[0]) # We need to merge the provided shapes, put None where they disagree. for shape in shape_list: if len(shape)...
[ "Create an info-like tuple for feature given some shapes and vocab size." ]
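The shape-merging step in `_make_info` ("put None where they disagree") can be shown with a small pure-Python helper. This sketch assumes all shapes have equal rank; the real code also handles rank mismatches:

```python
def merge_shapes(shape_list):
    # Merge shapes elementwise; any dimension the shapes disagree on
    # becomes None (i.e. unknown/dynamic).
    merged = list(shape_list[0])
    for shape in shape_list[1:]:
        for i, dim in enumerate(shape):
            if merged[i] != dim:
                merged[i] = None
    return merged
```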
Please provide a description of the function:def _select_features(example, feature_list=None): feature_list = feature_list or ["inputs", "targets"] return {f: example[f] for f in feature_list}
[ "Select a subset of features from the example dict." ]
Please provide a description of the function:def _train_and_eval_dataset_v1(problem_name, data_dir): problem = problems.problem(problem_name) train_dataset = problem.dataset(tf.estimator.ModeKeys.TRAIN, data_dir) train_dataset = train_dataset.map(_select_features) eval_dataset = problem.dataset(tf.estimator....
[ "Return train and evaluation datasets, feature info and supervised keys." ]
Please provide a description of the function:def batch_fn(dataset, training, shapes, target_names, batch_size=32, eval_batch_size=32, bucket_batch_length=32, bucket_max_length=256, bucket_min_length=8, bucket_length_step=1.1, buckets=None): del target_names # If bucketing i...
[ "Batching function." ]
Please provide a description of the function:def shuffle_and_batch_data(dataset, target_names, features_info, training): def append_targets(example): if len(target_names) == 1: return (example, example[target_names[0]]) targets = {} for name in target_names: targets[name] = example[nam...
[ "Shuffle and batch the given dataset.", "Append targets to the example dictionary. Needed for Keras." ]
Please provide a description of the function:def optimize_fn(model, optimizer=None, learning_rate_schedule=None, loss=None, metrics=None): learning_rate_schedule = learning_rate_schedule or T2TLearningRateSchedule() if optimizer: optimizer = opt...
[ "Compile the model in Keras." ]
Please provide a description of the function:def train_fn(data_dir=None, output_dir=None, model_class=gin.REQUIRED, dataset=gin.REQUIRED, input_names=None, target_names=None, train_steps=1000, eval_steps=1, eval_frequency=100): train_data, eval_data, features_info, keys = tra...
[ "Train the given model on the given dataset.\n\n Args:\n data_dir: Directory where the data is located.\n output_dir: Directory where to put the logs and checkpoints.\n model_class: The model class to train.\n dataset: The name of the dataset to train on.\n input_names: List of strings with the name...
Please provide a description of the function:def t2t_train(model_name, dataset_name, data_dir=None, output_dir=None, config_file=None, config=None): if model_name not in _MODEL_REGISTRY: raise ValueError("Model %s not in registry. Available models:\n * %s." % (model_name, "\n...
[ "Main function to train the given model on the given dataset.\n\n Args:\n model_name: The name of the model to train.\n dataset_name: The name of the dataset to train on.\n data_dir: Directory where the data is located.\n output_dir: Directory where to put the logs and checkpoints.\n config_file: th...
Please provide a description of the function:def decode(estimator, hparams, decode_hp): if FLAGS.decode_interactive: if estimator.config.use_tpu: raise ValueError("TPU can only decode from dataset.") decoding.decode_interactively(estimator, hparams, decode_hp, checkp...
[ "Decode from estimator. Interactive, from file, or from dataset." ]
Please provide a description of the function:def score_file(filename): # Prepare model. hparams = create_hparams() encoders = registry.problem(FLAGS.problem).feature_encoders(FLAGS.data_dir) has_inputs = "inputs" in encoders # Prepare features for feeding into the model. if has_inputs: inputs_ph = t...
[ "Score each line in a file and return the scores." ]
Please provide a description of the function:def time_to_channels(embedded_video): video_shape = common_layers.shape_list(embedded_video) if len(video_shape) != 5: raise ValueError("Assuming videos given as tensors in the format " "[batch, time, height, width, channels] but got one " ...
[ "Put time dimension on channels in an embedded video." ]
Please provide a description of the function:def autoencoder_basic(): hparams = common_hparams.basic_params1() hparams.optimizer = "adam" hparams.learning_rate_constant = 0.0002 hparams.learning_rate_warmup_steps = 500 hparams.learning_rate_schedule = "constant * linear_warmup" hparams.label_smoothing = ...
[ "Basic autoencoder model." ]
Please provide a description of the function:def autoencoder_autoregressive(): hparams = autoencoder_basic() hparams.add_hparam("autoregressive_forget_base", False) hparams.add_hparam("autoregressive_mode", "none") hparams.add_hparam("autoregressive_decode_steps", 0) hparams.add_hparam("autoregressive_eval...
[ "Autoregressive autoencoder model." ]
Please provide a description of the function:def autoencoder_residual(): hparams = autoencoder_autoregressive() hparams.optimizer = "Adafactor" hparams.clip_grad_norm = 1.0 hparams.learning_rate_constant = 0.5 hparams.learning_rate_warmup_steps = 500 hparams.learning_rate_schedule = "constant * linear_wa...
[ "Residual autoencoder model." ]
Please provide a description of the function:def autoencoder_residual_text(): hparams = autoencoder_residual() hparams.bottleneck_bits = 32 hparams.batch_size = 1024 hparams.hidden_size = 64 hparams.max_hidden_size = 512 hparams.bottleneck_noise = 0.0 hparams.bottom = { "inputs": modalities.ident...
[ "Residual autoencoder model for text." ]
Please provide a description of the function:def autoencoder_basic_discrete(): hparams = autoencoder_autoregressive() hparams.num_hidden_layers = 5 hparams.hidden_size = 64 hparams.bottleneck_bits = 1024 hparams.bottleneck_noise = 0.1 hparams.add_hparam("discretize_warmup_steps", 16000) return hparams
[ "Basic autoencoder model." ]
Please provide a description of the function:def autoencoder_residual_discrete(): hparams = autoencoder_residual() hparams.bottleneck_bits = 1024 hparams.bottleneck_noise = 0.05 hparams.add_hparam("discretize_warmup_steps", 16000) hparams.add_hparam("bottleneck_kind", "tanh_discrete") hparams.add_hparam(...
[ "Residual discrete autoencoder model." ]
Please provide a description of the function:def autoencoder_residual_discrete_big(): hparams = autoencoder_residual_discrete() hparams.hidden_size = 128 hparams.max_hidden_size = 4096 hparams.bottleneck_noise = 0.1 hparams.residual_dropout = 0.4 return hparams
[ "Residual discrete autoencoder model, big version." ]
Please provide a description of the function:def autoencoder_ordered_discrete(): hparams = autoencoder_residual_discrete() hparams.bottleneck_noise = 0.05 # Use 0.8 for ordered. hparams.gan_loss_factor = 0.05 hparams.add_hparam("unordered", True) return hparams
[ "Ordered discrete autoencoder model." ]
Please provide a description of the function:def autoencoder_ordered_discrete_image64(): hparams = autoencoder_ordered_discrete() hparams.batch_size = 32 hparams.num_hidden_layers = 6 hparams.bottleneck_warmup_steps *= 2 hparams.gan_codes_warmup_steps *= 2 return hparams
[ "Ordered discrete autoencoder model for 64x64 images." ]
Please provide a description of the function:def autoencoder_ordered_text(): hparams = autoencoder_ordered_discrete() hparams.bottleneck_bits = 1024 hparams.bottleneck_shared_bits = 1024-64 hparams.bottleneck_shared_bits_start_warmup = 75000 hparams.bottleneck_shared_bits_stop_warmup = 275000 hparams.num...
[ "Ordered discrete autoencoder model for text." ]
Please provide a description of the function:def autoencoder_ordered_text_small(): hparams = autoencoder_ordered_text() hparams.bottleneck_bits = 32 hparams.num_hidden_layers = 3 hparams.hidden_size = 64 hparams.max_hidden_size = 512 hparams.bottleneck_noise = 0.0 hparams.autoregressive_mode = "conv5" ...
[ "Ordered discrete autoencoder model for text, small version." ]
Please provide a description of the function:def autoencoder_discrete_pong(): hparams = autoencoder_ordered_discrete() hparams.num_hidden_layers = 3 hparams.bottleneck_bits = 24 hparams.batch_size = 2 hparams.gan_loss_factor = 0.01 hparams.bottleneck_l2_factor = 0.001 hparams.add_hparam("video_modality...
[ "Discrete autoencoder model for compressing pong frames." ]
Please provide a description of the function:def autoencoder_discrete_tiny(): hparams = autoencoder_ordered_discrete() hparams.num_hidden_layers = 2 hparams.bottleneck_bits = 24 hparams.batch_size = 2 hparams.gan_loss_factor = 0. hparams.bottleneck_l2_factor = 0.001 hparams.add_hparam("video_modality_l...
[ "Tiny discrete autoencoder model, for testing." ]
Please provide a description of the function:def autoencoder_discrete_cifar(): hparams = autoencoder_ordered_discrete() hparams.bottleneck_noise = 0.0 hparams.bottleneck_bits = 90 hparams.num_hidden_layers = 2 hparams.hidden_size = 256 hparams.num_residual_layers = 4 hparams.batch_size = 32 hparams.l...
[ "Discrete autoencoder model for compressing cifar." ]
Please provide a description of the function:def autoencoder_range(rhp): rhp.set_float("dropout", 0.01, 0.3) rhp.set_float("gan_loss_factor", 0.01, 0.1) rhp.set_float("bottleneck_l2_factor", 0.001, 0.1, scale=rhp.LOG_SCALE) rhp.set_discrete("bottleneck_warmup_steps", [200, 2000]) rhp.set_float("gumbel_temp...
[ "Tuning grid of the main autoencoder params." ]
Please provide a description of the function:def image_encoder(image_feat, hparams, name="image_encoder", save_weights_to=None, make_image_summary=True): x = image_feat with tf.variable_scope(name): for layer in range(hparams.num_encode...
[ "A stack of self attention layers." ]
Please provide a description of the function:def question_encoder(question, hparams, name="encoder"): with tf.variable_scope(name, "encoder", values=[question]): question = common_layers.flatten4d3d(question) padding = common_attention.embedding_to_padding(question) length = common_attention.padding_to...
[ "Question encoder: runs an LSTM encoder and uses the last output as the encoding." ]
Please provide a description of the function:def attn(image_feat, query, hparams, name="attn"): with tf.variable_scope(name, "attn", values=[image_feat, query]): attn_dim = hparams.attn_dim num_glimps = hparams.num_glimps num_channels = common_layers.shape_list(image_feat)[-1] if len(common_layers....
[ "Attention on image feature with question as query." ]
Please provide a description of the function:def mlp(feature, hparams, name="mlp"): with tf.variable_scope(name, "mlp", values=[feature]): num_mlp_layers = hparams.num_mlp_layers mlp_dim = hparams.mlp_dim for _ in range(num_mlp_layers): feature = common_layers.dense(feature, mlp_dim, activation=t...
[ "Multi-layer perceptron with dropout and ReLU activation." ]
Please provide a description of the function:def vqa_attention_base(): hparams = common_hparams.basic_params1() hparams.batch_size = 128 hparams.use_fixed_batch_size = True hparams.optimizer = "adam" hparams.optimizer_adam_beta1 = 0.9 hparams.optimizer_adam_beta2 = 0.999 hparams.optimizer_adam_epsilon...
[ "VQA attention baseline hparams." ]
Please provide a description of the function:def vqa_attention_base_range(rhp): # After starting from base, set intervals for some parameters. rhp.set_float("learning_rate", 0.1, 1.0, scale=rhp.LOG_SCALE) rhp.set_float("clip_grad_norm", 0.1, 10, scale=rhp.LOG_SCALE) rhp.set_discrete("batch_size", [128, 256, ...
[ "Small range of hyperparameters." ]
Please provide a description of the function:def append(self, mode, metric, step, value): if mode not in self._values: self._values[mode] = collections.defaultdict(list) self._values[mode][metric].append((step, value))
[ "Append (step, value) pair to history for the given mode and metric." ]
Please provide a description of the function:def get(self, mode, metric): if mode not in self._values: logging.info("Metric %s not found for mode %s", metric, mode) return [] return list(self._values[mode][metric])
[ "Get the history for the given metric and mode." ]
Please provide a description of the function:def metrics_for_mode(self, mode): if mode not in self._values: logging.info("Mode %s not found", mode) return [] return sorted(list(self._values[mode].keys()))
[ "Metrics available for a given mode." ]
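Taken together, `append`, `get`, and `metrics_for_mode` imply a nested mode → metric → [(step, value)] container. A minimal self-contained sketch (the class name `History` and the omission of the logging calls are assumptions):

```python
import collections

class History:
    # Minimal analogue of the history container implied by the three
    # methods above: mode -> metric -> list of (step, value) pairs.
    def __init__(self):
        self._values = {}

    def append(self, mode, metric, step, value):
        if mode not in self._values:
            self._values[mode] = collections.defaultdict(list)
        self._values[mode][metric].append((step, value))

    def get(self, mode, metric):
        # Unknown modes yield an empty history rather than an error.
        if mode not in self._values:
            return []
        return list(self._values[mode][metric])

    def metrics_for_mode(self, mode):
        if mode not in self._values:
            return []
        return sorted(self._values[mode].keys())
```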
Please provide a description of the function:def batch_norm_relu(inputs, is_training, relu=True, init_zero=False, data_format="channels_first"): if init_zero: gamma_initializer = tf.zeros_initializer() else: gamma_initializer...
[ "Performs a batch normalization followed by a ReLU.\n\n Args:\n inputs: `Tensor` of shape `[batch, channels, ...]`.\n is_training: `bool` for whether the model is training.\n relu: `bool` if False, omits the ReLU operation.\n init_zero: `bool` if True, initializes scale parameter of batch\n norm...
Please provide a description of the function:def conv2d_fixed_padding(inputs, filters, kernel_size, strides, data_format="channels_first", use_td=False, targeting_rate=No...
[ "Strided 2-D convolution with explicit padding.\n\n The padding is consistent and is based only on `kernel_size`, not on the\n dimensions of `inputs` (as opposed to using `tf.layers.conv2d` alone).\n\n Args:\n inputs: `Tensor` of size `[batch, channels, height_in, width_in]`.\n filters: `int` number of fil...
Please provide a description of the function:def residual_block(inputs, filters, is_training, projection_shortcut, strides, final_block, data_format="channels_first", use_td=False, ...
[ "Standard building block for residual networks with BN before convolutions.\n\n Args:\n inputs: `Tensor` of size `[batch, channels, height, width]`.\n filters: `int` number of filters for the first two convolutions. Note that\n the third and final convolution will use 4 times as many filters.\n is_...
Please provide a description of the function:def bottleneck_block(inputs, filters, is_training, projection_shortcut, strides, final_block, data_format="channels_first", use_...
[ "Bottleneck block variant for residual networks with BN after convolutions.\n\n Args:\n inputs: `Tensor` of size `[batch, channels, height, width]`.\n filters: `int` number of filters for the first two convolutions. Note that\n the third and final convolution will use 4 times as many filters.\n is_...
Please provide a description of the function:def block_layer(inputs, filters, block_fn, blocks, strides, is_training, name, data_format="channels_first", use_td=False, targetin...
[ "Creates one layer of blocks for the ResNet model.\n\n Args:\n inputs: `Tensor` of size `[batch, channels, height, width]`.\n filters: `int` number of filters for the first convolution of the layer.\n block_fn: `function` for the block to use within the model\n blocks: `int` number of blocks contained ...
Please provide a description of the function:def resnet_v2(inputs, block_fn, layer_blocks, filters, data_format="channels_first", is_training=False, is_cifar=False, use_td=False, targeting_rate=None, ...
[ "Resnet model.\n\n Args:\n inputs: `Tensor` images.\n block_fn: `function` for the block to use within the model. Either\n `residual_block` or `bottleneck_block`.\n layer_blocks: list of 3 or 4 `int`s denoting the number of blocks to include\n in each of the 3 or 4 block groups. Each group con...
Please provide a description of the function:def resnet_imagenet_34_td_weight_05_05(): hp = resnet_imagenet_34() hp.use_td = "weight" hp.targeting_rate = 0.5 hp.keep_prob = 0.5 return hp
[ "Set of hyperparameters." ]
Please provide a description of the function:def resnet_imagenet_34_td_unit_05_05(): hp = resnet_imagenet_34() hp.use_td = "unit" hp.targeting_rate = 0.5 hp.keep_prob = 0.5 return hp
[ "Set of hyperparameters." ]
Please provide a description of the function:def resnet_imagenet_34_td_unit_no_drop(): hp = resnet_imagenet_34() hp.use_td = "unit" hp.targeting_rate = 0.0 hp.keep_prob = 1.0 return hp
[ "Set of hyperparameters." ]
Please provide a description of the function:def resnet_cifar_15(): hp = resnet_base() hp.block_fn = "residual" hp.is_cifar = True hp.layer_sizes = [2, 2, 2] hp.filter_sizes = [16, 32, 64, 128] return hp
[ "Set of hyperparameters." ]
Please provide a description of the function:def _len_lcs(x, y): table = _lcs(x, y) n, m = len(x), len(y) return table[n, m]
[ "Returns the length of the Longest Common Subsequence between two seqs.\n\n Source: http://www.algorithmist.com/index.php/Longest_Common_Subsequence\n\n Args:\n x: sequence of words\n y: sequence of words\n\n Returns:\n integer: Length of LCS between x and y\n " ]
Please provide a description of the function:def _lcs(x, y): n, m = len(x), len(y) table = {} for i in range(n + 1): for j in range(m + 1): if i == 0 or j == 0: table[i, j] = 0 elif x[i - 1] == y[j - 1]: table[i, j] = table[i - 1, j - 1] + 1 else: table[i, j] = max...
[ "Computes the length of the LCS between two seqs.\n\n The implementation below uses a dynamic programming algorithm and runs\n in O(nm) time where n = len(x) and m = len(y).\n Source: http://www.algorithmist.com/index.php/Longest_Common_Subsequence\n\n Args:\n x: collection of words\n y: collection of words\n\...
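The dynamic program behind `_lcs`/`_len_lcs` in runnable form, using a list-of-lists table rather than the dict in the snippet above:

```python
def lcs_len(x, y):
    # Classic O(n*m) dynamic program: table[i][j] holds the LCS length
    # of the prefixes x[:i] and y[:j].
    n, m = len(x), len(y)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m]
```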
Please provide a description of the function:def rouge_l_sentence_level(eval_sentences, ref_sentences): f1_scores = [] for eval_sentence, ref_sentence in zip(eval_sentences, ref_sentences): m = len(ref_sentence) n = len(eval_sentence) lcs = _len_lcs(eval_sentence, ref_sentence) f1_scores.append(...
[ "Computes ROUGE-L (sentence level) of two collections of sentences.\n\n Source: https://www.microsoft.com/en-us/research/publication/\n rouge-a-package-for-automatic-evaluation-of-summaries/\n\n Calculated according to:\n R_lcs = LCS(X,Y)/m\n P_lcs = LCS(X,Y)/n\n F_lcs = ((1 + beta^2)*R_lcs*P_lcs) / (R_lcs + ...
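The formula quoted in the docstring can be checked end to end with a small self-contained function. With `beta = 1`, F_lcs reduces to the harmonic mean of precision and recall; the original code derives beta differently, so treat this as an illustration of the documented formula, not the exact implementation:

```python
def lcs_len(x, y):
    # O(n*m) dynamic program for the longest-common-subsequence length.
    n, m = len(x), len(y)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m]

def rouge_l_f(eval_tokens, ref_tokens, beta=1.0):
    # R_lcs = LCS(X,Y)/m, P_lcs = LCS(X,Y)/n,
    # F_lcs = ((1 + beta^2) * R_lcs * P_lcs) / (R_lcs + beta^2 * P_lcs)
    lcs = lcs_len(eval_tokens, ref_tokens)
    m, n = len(ref_tokens), len(eval_tokens)
    r_lcs = lcs / m   # recall against the reference
    p_lcs = lcs / n   # precision of the hypothesis
    if r_lcs == 0 or p_lcs == 0:
        return 0.0
    return ((1 + beta ** 2) * r_lcs * p_lcs) / (r_lcs + beta ** 2 * p_lcs)
```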
Please provide a description of the function:def rouge_l_fscore(predictions, labels, **unused_kwargs): outputs = tf.to_int32(tf.argmax(predictions, axis=-1)) # Convert the outputs and labels to a [batch_size, input_length] tensor. outputs = tf.squeeze(outputs, axis=[-1, -2]) labels = tf.squeeze(labels, axis=...
[ "ROUGE scores computation between labels and predictions.\n\n This is an approximate ROUGE scoring method since we do not glue word pieces\n or decode the ids and tokenize the output.\n\n Args:\n predictions: tensor, model predictions\n labels: tensor, gold output.\n\n Returns:\n rouge_l_fscore: approx...
Please provide a description of the function:def _get_ngrams(n, text): ngram_set = set() text_length = len(text) max_index_ngram_start = text_length - n for i in range(max_index_ngram_start + 1): ngram_set.add(tuple(text[i:i + n])) return ngram_set
[ "Calculates n-grams.\n\n Args:\n n: which n-grams to calculate\n text: An array of tokens\n\n Returns:\n A set of n-grams\n " ]
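`_get_ngrams` is short enough to reproduce and exercise directly; because the result is a set, repeated n-grams collapse to a single entry:

```python
def get_ngrams(n, text):
    # Slide a width-n window over the token list and collect each window
    # as a tuple in a set, so duplicate n-grams appear only once.
    return {tuple(text[i:i + n]) for i in range(len(text) - n + 1)}
```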
Please provide a description of the function:def rouge_2_fscore(predictions, labels, **unused_kwargs): outputs = tf.to_int32(tf.argmax(predictions, axis=-1)) # Convert the outputs and labels to a [batch_size, input_length] tensor. outputs = tf.squeeze(outputs, axis=[-1, -2]) labels = tf.squeeze(labels, axis...
[ "ROUGE-2 F1 score computation between labels and predictions.\n\n This is an approximate ROUGE scoring method since we do not glue word pieces\n or decode the ids and tokenize the output.\n\n Args:\n predictions: tensor, model predictions\n labels: tensor, gold output.\n\n Returns:\n rouge2_fscore: app...
Please provide a description of the function:def normalize_example_nlp(task, example, is_infer, vocab_type, vocab_offset, max_input_length, max_target_length, fixed_train_length): if task.has_inputs: example["inputs"] = example["inputs"][:-1] # remove EOS to...
[ "Normalize the examples from different tasks so they can be merged.\n\n This function is specific to NLP tasks and normalizes them so that in the\n end the example only has \"targets\" and \"task_id\". For tasks that originally\n have inputs, this is done by appending task_id to the inputs and prepending\n targ...
Please provide a description of the function:def flatten_zip_dataset(*args): flattened = tf.data.Dataset.from_tensors(args[0]) for ex in args[1:]: flattened = flattened.concatenate(tf.data.Dataset.from_tensors(ex)) return flattened
[ "Flatten a list of examples into a dataset containing mixed examples.\n\n Given a list of `n` dataset examples, flatten them by converting\n each element into a dataset and concatenating them to convert into a\n single dataset.\n\n Args:\n *args: A list containing one example each from `n` different datasets.\n\n Ret...
Please provide a description of the function:def aggregate_task_losses(hparams, problem_hparams, logits, feature_name, feature): # If no reweighting, we want the default loss to mimic the LM loss. if not hpar...
[ "Multiproblem loss function." ]
Please provide a description of the function:def aggregate_task_lm_losses(hparams, problem_hparams, logits, feature_name, feature): summaries = [] vocab_size = problem_hparams.vocab_size[feature_na...
[ "LM loss for multiproblems." ]
Please provide a description of the function:def normalize_example(self, task, example, encoder, hparams, is_infer): # Here we use the default function for NLP tasks that makes everything # a part of "targets" feature. Override in your subclasses for other uses. vocab_offset = encoder.vocab_size + len(...
[ "Normalize the examples from different tasks so they can be merged." ]
Please provide a description of the function:def update_task_ids(self, encoder_vocab_size): for idx, task in enumerate(self.task_list): task.set_task_id(idx + encoder_vocab_size) tf.logging.info("Task %d (%s) has id %d." % (idx, task.name, task.task_id))
[ "Generate task_ids for each problem.\n\n These ids correspond to the index of the task in the task_list.\n\n Args:\n encoder_vocab_size: the size of the vocab which is used to compute\n the index offset.\n " ]
Please provide a description of the function:def get_max_num_classes(self): num = 0 for task in self.task_list: if hasattr(task, "num_classes"): if num < task.num_classes: num = task.num_classes return num
[ "Compute the maximum number of classes any subtask has.\n\n This is useful for modifying the size of the softmax to include the output\n labels for the classification tasks. Currently, labels from different tasks\n are overloaded.\n\n Returns:\n num: Highest number of output classes in any text cla...
Please provide a description of the function:def pre_attention(self, segment, query_antecedent, memory_antecedent, bias): del segment return None, query_antecedent, memory_antecedent, bias
[ "Called prior to self-attention, to incorporate memory items.\n\n Args:\n segment: an integer Tensor with shape [batch]\n query_antecedent: a Tensor with shape [batch, length_q, channels]\n memory_antecedent: must be None. Attention normally allows this to be a\n Tensor with shape [batch, l...
Please provide a description of the function:def pre_attention(self, segment, query_antecedent, memory_antecedent, bias): assert memory_antecedent is None, "We only support language modeling" # In eval mode, batch size may be variable memory_batch_size = tf.shape(self.previous_vals)[0] current_bat...
[ "Called prior to self-attention, to incorporate memory items.\n\n Args:\n segment: an integer Tensor with shape [batch]\n query_antecedent: a Tensor with shape [batch, length_q, channels]\n memory_antecedent: must be None. Attention normally allows this to be a\n Tensor with shape [batch, l...
Please provide a description of the function:def post_attention(self, token, x): with tf.control_dependencies([ self.previous_segment.assign(token[0]), self.previous_vals.assign(token[1]), self.previous_bias.assign(token[2]), ]): return tf.identity(x)
[ "Called after self-attention. The memory can be updated here.\n\n Args:\n token: Data returned by pre_attention, which can be used to carry over\n state related to the current memory operation.\n x: a Tensor of data after self-attention and feed-forward\n Returns:\n a (possibly modified)...
Please provide a description of the function:def _norm(self, x): return tf.sqrt(tf.reduce_sum(tf.square(x), keepdims=True, axis=-1) + 1e-7)
[ "Compute the safe norm." ]
Please provide a description of the function:def _address_content(self, x): mem_keys = tf.layers.dense(self.mem_vals, self.key_depth, bias_initializer=tf.constant_initializer(1.0), name="mem_key") mem_query = tf.layers.dense(x, self.key_depth, ...
[ "Address the memory based on content similarity.\n\n Args:\n x: a tensor in the shape of [batch_size, length, depth].\n Returns:\n the logits for each memory entry [batch_size, length, memory_size].\n " ]
Please provide a description of the function:def read(self, x): access_logits = self._address_content(x) weights = tf.nn.softmax(access_logits) retrieved_mem = tf.reduce_sum( tf.multiply(tf.expand_dims(weights, 3), tf.expand_dims(self.mem_vals, axis=1)), axis=2) return a...
[ "Read from the memory.\n\n An external component can use the results via a simple MLP,\n e.g., fn(x W_x + retrieved_mem W_m).\n\n Args:\n x: a tensor in the shape of [batch_size, length, depth].\n Returns:\n access_logits: the logits for accessing the memory in shape of\n [batch_size,...
Please provide a description of the function:def write(self, x, access_logits): gamma = tf.layers.dense(x, 1, activation=tf.sigmoid, name="gamma") write_logits = access_logits - gamma * tf.expand_dims(self.mean_logits, 1) candidate_value = tf.layers.dense(x, self.val_depth, ...
[ "Write to the memory based on a combination of similarity and least used.\n\n Based on arXiv:1607.00036v2 [cs.LG].\n\n Args:\n x: a tensor in the shape of [batch_size, length, depth].\n access_logits: the logits for accessing the memory.\n Returns:\n the update op.\n " ]
Please provide a description of the function:def reset(self, entries_to_reset): num_updates = tf.size(entries_to_reset) update_vals = tf.scatter_update( self.mem_vals, entries_to_reset, tf.tile(tf.expand_dims( tf.fill([self.memory_size, self.val_depth], .0), 0), ...
[ "Reset the entries in the memory.\n\n Args:\n entries_to_reset: a 1D tensor.\n Returns:\n the reset op.\n " ]
Please provide a description of the function:def pre_attention(self, segment_number, query_antecedent, memory_antecedent, bias): with tf.variable_scope(self.name + "/pre_attention", reuse=tf.AUTO_REUSE): assert memory_antecedent is None, "We only support language modeling" with ...
[ "Called prior to self-attention, to incorporate memory items.\n\n Args:\n segment_number: an integer Tensor with shape [batch]\n query_antecedent: a Tensor with shape [batch, length_q, channels]\n memory_antecedent: must be None. Attention normally allows this to be a\n Tensor with shape [b...
Please provide a description of the function:def post_attention(self, token, x): with tf.variable_scope(self.name + "/post_attention", reuse=tf.AUTO_REUSE): depth = common_layers.shape_list(x)[-1] actual_batch_size = common_layers.shape_list(x)[0] memory_output = tf.gather(token["retrieved_me...
[ "Called after self-attention. The memory can be updated here.\n\n Args:\n token: Data returned by pre_attention, which can be used to carry over\n state related to the current memory operation.\n x: a Tensor of data after self-attention and feed-forward\n Returns:\n a (possibly modified)...
Please provide a description of the function:def _define_train( train_env, ppo_hparams, eval_env_fn=None, sampling_temp=1.0, **collect_kwargs ): memory, collect_summary, train_initialization = ( _define_collect( train_env, ppo_hparams, "ppo_train", ...
[ "Define the training setup." ]
Please provide a description of the function:def _run_train(ppo_hparams, event_dir, model_dir, restarter, train_summary_op, eval_summary_op, initializers, report_fn=None, model_save_fn=None): sum...
[ "Train." ]
Please provide a description of the function:def _rollout_metadata(batch_env): batch_env_shape = batch_env.observ.get_shape().as_list() batch_size = [batch_env_shape[0]] shapes_types_names = [ # TODO(piotrmilos): possibly retrieve the observation type for batch_env (batch_size + batch_env_shape[1:]...
[ "Metadata for rollouts." ]
Please provide a description of the function:def _define_collect(batch_env, ppo_hparams, scope, frame_stack_size, eval_phase, sampling_temp, force_beginning_resets): epoch_length = ppo_hparams.epoch_length to_initialize = [] with tf.variable_scope(scope, reuse=tf.AUTO_REUSE): num_agent...
[ "Collect trajectories.\n\n Args:\n batch_env: Batch environment.\n ppo_hparams: PPO hparams, defined in tensor2tensor.models.research.rl.\n scope: var scope.\n frame_stack_size: Number of last observations to feed into the policy.\n eval_phase: TODO(koz4k): Write docstring.\n sampling_temp: Sampl...
Please provide a description of the function:def deconv2d( input_, output_shape, k_h, k_w, d_h, d_w, stddev=0.02, name="deconv2d"): with tf.variable_scope(name): w = tf.get_variable( "w", [k_h, k_w, output_shape[-1], input_.get_shape()[-1]], initializer=tf.random_normal_initializer(stddev=s...
[ "Deconvolution layer." ]
Please provide a description of the function:def sliced_gan(): hparams = common_hparams.basic_params1() hparams.optimizer = "adam" hparams.learning_rate_constant = 0.0002 hparams.learning_rate_warmup_steps = 500 hparams.learning_rate_schedule = "constant * linear_warmup" hparams.label_smoothing = 0.0 h...
[ "Basic parameters for a vanilla_gan." ]
Please provide a description of the function:def discriminator(self, x, is_training, reuse=False): hparams = self.hparams with tf.variable_scope( "discriminator", reuse=reuse, initializer=tf.random_normal_initializer(stddev=0.02)): batch_size, height, width = common_layers.shape_list(...
[ "Discriminator architecture based on InfoGAN.\n\n Args:\n x: input images, shape [bs, h, w, channels]\n is_training: boolean, are we in train or eval model.\n reuse: boolean, should params be re-used.\n\n Returns:\n out_logit: the output logits (before sigmoid).\n " ]
Please provide a description of the function:def generator(self, z, is_training, out_shape): hparams = self.hparams height, width, c_dim = out_shape batch_size = hparams.batch_size with tf.variable_scope( "generator", initializer=tf.random_normal_initializer(stddev=0.02)): net...
[ "Generator outputting image in [0, 1]." ]
Please provide a description of the function:def body(self, features): features["targets"] = features["inputs"] is_training = self.hparams.mode == tf.estimator.ModeKeys.TRAIN # Input images. inputs = tf.to_float(features["targets_raw"]) # Noise vector. z = tf.random_uniform([self.hparams....
[ "Body of the model.\n\n Args:\n features: a dictionary with the tensors.\n\n Returns:\n A pair (predictions, losses) where predictions is the generated image\n and losses is a dictionary of losses (that get added for the final loss).\n " ]
Please provide a description of the function:def inputs(num_devices, dataset_name, data_dir=None, input_name=None, num_chunks=0, append_targets=False): assert data_dir, "Must provide a data directory" data_dir = os.path.expanduser(data_dir) (train_batches, train_eval_batches, eval_batches, input...
[ "Make Inputs for built-in datasets.\n\n Args:\n num_devices: how many devices to build the inputs for.\n dataset_name: a TFDS or T2T dataset name. If it's a T2T dataset name, prefix\n with \"t2t_\".\n data_dir: data directory.\n input_name: optional, name of the inputs from the dictionary.\n nu...
Please provide a description of the function:def random_inputs( num_devices, input_shape=gin.REQUIRED, input_dtype=np.int32, input_range=(0, 255), output_shape=gin.REQUIRED, output_dtype=np.int32, output_range=(0, 9)): if input_shape[0] % num_devices != 0: tf.logging.fatal( "num_devices[%d]...
[ "Make random Inputs for debugging.\n\n Args:\n num_devices: how many devices to build the inputs for.\n input_shape: the shape of inputs (including batch dimension).\n input_dtype: the type of the inputs (int32 by default).\n input_range: the range of inputs (defaults to (0, 255)).\n output_shape: t...