from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `make_gaussian_encoder` function. Write a Python function `def make_gaussian_encoder(input_tensor, is_training=True, num_latent=gin.REQUIRED, encoder_fn=gin.REQUIRED)` to solve the following problem: Gin wrapper to create and apply a Gaussian encoder configurable with gin. This is a separate function so that several different models (such as BetaVAE and FactorVAE) can call this function while the gin binding always stays 'encoder.(...)'. This makes it easier to configure models and parse the results files. Args: input_tensor: Tensor with image that should be encoded. is_training: Boolean that indicates whether we are training (usually required for batch normalization). num_latent: Integer with dimensionality of latent space. encoder_fn: Function that takes the arguments (input_tensor, num_latent, is_training) and returns the tuple (means, log_vars) with the encoder means and log variances. Returns: Tuple (means, log_vars) with the encoder means and log variances. Here is the function:

def make_gaussian_encoder(input_tensor,
                          is_training=True,
                          num_latent=gin.REQUIRED,
                          encoder_fn=gin.REQUIRED):
  """Gin wrapper to create and apply a Gaussian encoder configurable with gin.

  This is a separate function so that several different models (such as
  BetaVAE and FactorVAE) can call this function while the gin binding always
  stays 'encoder.(...)'. This makes it easier to configure models and parse
  the results files.

  Args:
    input_tensor: Tensor with image that should be encoded.
    is_training: Boolean that indicates whether we are training (usually
      required for batch normalization).
    num_latent: Integer with dimensionality of latent space.
    encoder_fn: Function that takes the arguments (input_tensor, num_latent,
      is_training) and returns the tuple (means, log_vars) with the encoder
      means and log variances.

  Returns:
    Tuple (means, log_vars) with the encoder means and log variances.
  """
  with tf.variable_scope("encoder"):
    return encoder_fn(
        input_tensor=input_tensor,
        num_latent=num_latent,
        is_training=is_training)
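Downstream, the (means, log_vars) pair returned by a Gaussian encoder is typically consumed via the reparameterization trick. The helper below is an illustrative NumPy sketch of that step, not part of the library; the name `sample_from_latent` is our own.

```python
import numpy as np

def sample_from_latent(means, log_vars, rng):
    """Draw z = mean + sigma * eps with eps ~ N(0, 1).

    Illustrative helper mirroring how VAE-style models typically use the
    (means, log_vars) pair produced by a Gaussian encoder.
    """
    eps = rng.standard_normal(means.shape)
    # exp(log_var / 2) is the standard deviation.
    return means + np.exp(log_vars / 2.0) * eps

rng = np.random.default_rng(0)
means = np.zeros((4, 10))
log_vars = np.full((4, 10), -100.0)  # tiny variance, so samples sit at the means
z = sample_from_latent(means, log_vars, rng)
```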
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `make_decoder` function. Write a Python function `def make_decoder(latent_tensor, output_shape, is_training=True, decoder_fn=gin.REQUIRED)` to solve the following problem: Gin wrapper to create and apply a decoder configurable with gin. This is a separate function so that several different models (such as BetaVAE and FactorVAE) can call this function while the gin binding always stays 'decoder.(...)'. This makes it easier to configure models and parse the results files. Args: latent_tensor: Tensor with latent space embeddings to decode from. output_shape: Tuple with the output shape of the observations to be generated. is_training: Boolean that indicates whether we are training (usually required for batch normalization). decoder_fn: Function that takes the arguments (input_tensor, output_shape, is_training) and returns the decoded observations. Returns: Tensor of decoded observations. Here is the function:

def make_decoder(latent_tensor,
                 output_shape,
                 is_training=True,
                 decoder_fn=gin.REQUIRED):
  """Gin wrapper to create and apply a decoder configurable with gin.

  This is a separate function so that several different models (such as
  BetaVAE and FactorVAE) can call this function while the gin binding always
  stays 'decoder.(...)'. This makes it easier to configure models and parse
  the results files.

  Args:
    latent_tensor: Tensor with latent space embeddings to decode from.
    output_shape: Tuple with the output shape of the observations to be
      generated.
    is_training: Boolean that indicates whether we are training (usually
      required for batch normalization).
    decoder_fn: Function that takes the arguments (input_tensor, output_shape,
      is_training) and returns the decoded observations.

  Returns:
    Tensor of decoded observations.
  """
  with tf.variable_scope("decoder"):
    return decoder_fn(
        latent_tensor=latent_tensor,
        output_shape=output_shape,
        is_training=is_training)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `make_discriminator` function. Write a Python function `def make_discriminator(input_tensor, is_training=False, discriminator_fn=gin.REQUIRED)` to solve the following problem: Gin wrapper to create and apply a discriminator configurable with gin. This is a separate function so that several different models (such as FactorVAE) can potentially call this function while the gin binding always stays 'discriminator.(...)'. This makes it easier to configure models and parse the results files. Args: input_tensor: Tensor on which the discriminator operates. is_training: Boolean that indicates whether we are training (usually required for batch normalization). discriminator_fn: Function that takes the arguments (input_tensor, is_training) and returns the tuple (logits, clipped_probs). Returns: Tuple of (logits, clipped_probs) tensors. Here is the function:

def make_discriminator(input_tensor,
                       is_training=False,
                       discriminator_fn=gin.REQUIRED):
  """Gin wrapper to create and apply a discriminator configurable with gin.

  This is a separate function so that several different models (such as
  FactorVAE) can potentially call this function while the gin binding always
  stays 'discriminator.(...)'. This makes it easier to configure models and
  parse the results files.

  Args:
    input_tensor: Tensor on which the discriminator operates.
    is_training: Boolean that indicates whether we are training (usually
      required for batch normalization).
    discriminator_fn: Function that takes the arguments (input_tensor,
      is_training) and returns the tuple (logits, clipped_probs).

  Returns:
    Tuple of (logits, clipped_probs) tensors.
  """
  with tf.variable_scope("discriminator"):
    logits, probs = discriminator_fn(input_tensor, is_training=is_training)
    clipped = tf.clip_by_value(probs, 1e-6, 1 - 1e-6)
  return logits, clipped
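The clipping step is worth dwelling on: keeping the probabilities inside [1e-6, 1 - 1e-6] ensures that any downstream log(probs) or log(1 - probs) term (for example, FactorVAE's density-ratio estimate) stays finite even when the softmax saturates at exactly 0 or 1. A minimal NumPy sketch of the same idea:

```python
import numpy as np

# Mirrors tf.clip_by_value(probs, 1e-6, 1 - 1e-6): clipping keeps log terms
# finite when the softmax saturates at exactly 0 or 1.
probs = np.array([0.0, 0.5, 1.0])
clipped = np.clip(probs, 1e-6, 1 - 1e-6)
log_ratio = np.log(clipped) - np.log(1 - clipped)  # finite everywhere
```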
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `fc_encoder` function. Write a Python function `def fc_encoder(input_tensor, num_latent, is_training=True)` to solve the following problem: Fully connected encoder used in beta-VAE paper for the dSprites data. Based on row 1 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" (https://openreview.net/forum?id=Sy2fzU9gl). Args: input_tensor: Input tensor of shape (batch_size, 64, 64, num_channels) to build encoder on. num_latent: Number of latent variables to output. is_training: Whether or not the graph is built for training (UNUSED). Returns: means: Output tensor of shape (batch_size, num_latent) with latent variable means. log_var: Output tensor of shape (batch_size, num_latent) with latent variable log variances. Here is the function:

def fc_encoder(input_tensor, num_latent, is_training=True):
  """Fully connected encoder used in beta-VAE paper for the dSprites data.

  Based on row 1 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual
  Concepts with a Constrained Variational Framework"
  (https://openreview.net/forum?id=Sy2fzU9gl).

  Args:
    input_tensor: Input tensor of shape (batch_size, 64, 64, num_channels) to
      build encoder on.
    num_latent: Number of latent variables to output.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    means: Output tensor of shape (batch_size, num_latent) with latent
      variable means.
    log_var: Output tensor of shape (batch_size, num_latent) with latent
      variable log variances.
  """
  del is_training
  flattened = tf.layers.flatten(input_tensor)
  e1 = tf.layers.dense(flattened, 1200, activation=tf.nn.relu, name="e1")
  e2 = tf.layers.dense(e1, 1200, activation=tf.nn.relu, name="e2")
  means = tf.layers.dense(e2, num_latent, activation=None)
  log_var = tf.layers.dense(e2, num_latent, activation=None)
  return means, log_var
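To get a feel for the capacity of this architecture, the parameter count can be worked out directly from the layer sizes above. The sketch below does this for dSprites-shaped 64x64x1 inputs; `num_latent = 10` is only an example value, not something fixed by the function.

```python
# Parameter-count sketch for fc_encoder on 64x64x1 inputs.
def dense_params(n_in, n_out):
    # A dense layer has an (n_in x n_out) weight matrix plus n_out biases.
    return n_in * n_out + n_out

num_latent = 10
flattened = 64 * 64 * 1                        # inputs after tf.layers.flatten
params = (dense_params(flattened, 1200)        # e1
          + dense_params(1200, 1200)           # e2
          + 2 * dense_params(1200, num_latent))  # separate means / log_var heads
```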
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `conv_encoder` function. Write a Python function `def conv_encoder(input_tensor, num_latent, is_training=True)` to solve the following problem: Convolutional encoder used in beta-VAE paper for the chairs data. Based on row 3 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" (https://openreview.net/forum?id=Sy2fzU9gl). Args: input_tensor: Input tensor of shape (batch_size, 64, 64, num_channels) to build encoder on. num_latent: Number of latent variables to output. is_training: Whether or not the graph is built for training (UNUSED). Returns: means: Output tensor of shape (batch_size, num_latent) with latent variable means. log_var: Output tensor of shape (batch_size, num_latent) with latent variable log variances. Here is the function:

def conv_encoder(input_tensor, num_latent, is_training=True):
  """Convolutional encoder used in beta-VAE paper for the chairs data.

  Based on row 3 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual
  Concepts with a Constrained Variational Framework"
  (https://openreview.net/forum?id=Sy2fzU9gl).

  Args:
    input_tensor: Input tensor of shape (batch_size, 64, 64, num_channels) to
      build encoder on.
    num_latent: Number of latent variables to output.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    means: Output tensor of shape (batch_size, num_latent) with latent
      variable means.
    log_var: Output tensor of shape (batch_size, num_latent) with latent
      variable log variances.
  """
  del is_training
  e1 = tf.layers.conv2d(
      inputs=input_tensor,
      filters=32,
      kernel_size=4,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
      name="e1",
  )
  e2 = tf.layers.conv2d(
      inputs=e1,
      filters=32,
      kernel_size=4,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
      name="e2",
  )
  e3 = tf.layers.conv2d(
      inputs=e2,
      filters=64,
      kernel_size=2,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
      name="e3",
  )
  e4 = tf.layers.conv2d(
      inputs=e3,
      filters=64,
      kernel_size=2,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
      name="e4",
  )
  flat_e4 = tf.layers.flatten(e4)
  e5 = tf.layers.dense(flat_e4, 256, activation=tf.nn.relu, name="e5")
  means = tf.layers.dense(e5, num_latent, activation=None, name="means")
  log_var = tf.layers.dense(e5, num_latent, activation=None, name="log_var")
  return means, log_var
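The spatial shapes flowing through the four stride-2 convolutions can be checked by hand: with `padding="same"`, a stride-2 convolution maps spatial size n to ceil(n / 2) regardless of kernel size, so 64x64 inputs shrink to 4x4 before flattening. A quick arithmetic sketch:

```python
import math

# Spatial-size sketch for conv_encoder with padding="same" and strides=2.
size = 64
for _ in range(4):             # e1..e4 all use strides=2
    size = math.ceil(size / 2)
flat_units = size * size * 64  # e4 has 64 filters: flatten yields 4*4*64 units
```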
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `fc_decoder` function. Write a Python function `def fc_decoder(latent_tensor, output_shape, is_training=True)` to solve the following problem: Fully connected decoder used in beta-VAE paper for the dSprites data. Based on row 1 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" (https://openreview.net/forum?id=Sy2fzU9gl). Args: latent_tensor: Input tensor to connect decoder to. output_shape: Shape of the data. is_training: Whether or not the graph is built for training (UNUSED). Returns: Output tensor of shape (None, 64, 64, num_channels) with the [0,1] pixel intensities. Here is the function:

def fc_decoder(latent_tensor, output_shape, is_training=True):
  """Fully connected decoder used in beta-VAE paper for the dSprites data.

  Based on row 1 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual
  Concepts with a Constrained Variational Framework"
  (https://openreview.net/forum?id=Sy2fzU9gl).

  Args:
    latent_tensor: Input tensor to connect decoder to.
    output_shape: Shape of the data.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    Output tensor of shape (None, 64, 64, num_channels) with the [0,1] pixel
    intensities.
  """
  del is_training
  d1 = tf.layers.dense(latent_tensor, 1200, activation=tf.nn.tanh)
  d2 = tf.layers.dense(d1, 1200, activation=tf.nn.tanh)
  d3 = tf.layers.dense(d2, 1200, activation=tf.nn.tanh)
  d4 = tf.layers.dense(d3, np.prod(output_shape))
  return tf.reshape(d4, shape=[-1] + output_shape)
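Note that the final dense layer `d4` has no activation: the function returns raw values (logits), and the "[0,1] pixel intensities" mentioned in the docstring are typically produced by a sigmoid applied downstream (for example inside a Bernoulli reconstruction loss). The NumPy sketch below checks the reshape logic and illustrates that downstream sigmoid; it is not part of the library.

```python
import numpy as np

# Shape check of fc_decoder's final reshape, with an illustrative sigmoid.
output_shape = [64, 64, 1]
d4 = np.random.default_rng(0).standard_normal((3, np.prod(output_shape)))
decoded = d4.reshape([-1] + output_shape)  # mirrors tf.reshape(d4, shape=[-1] + output_shape)
pixels = 1.0 / (1.0 + np.exp(-decoded))    # sigmoid squashes logits into (0, 1)
```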
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `deconv_decoder` function. Write a Python function `def deconv_decoder(latent_tensor, output_shape, is_training=True)` to solve the following problem: Convolutional decoder used in beta-VAE paper for the chairs data. Based on row 3 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" (https://openreview.net/forum?id=Sy2fzU9gl). Args: latent_tensor: Input tensor of shape (batch_size, num_latent) to connect decoder to. output_shape: Shape of the data. is_training: Whether or not the graph is built for training (UNUSED). Returns: Output tensor of shape (batch_size, 64, 64, num_channels) with the [0,1] pixel intensities. Here is the function:

def deconv_decoder(latent_tensor, output_shape, is_training=True):
  """Convolutional decoder used in beta-VAE paper for the chairs data.

  Based on row 3 of Table 1 on page 13 of "beta-VAE: Learning Basic Visual
  Concepts with a Constrained Variational Framework"
  (https://openreview.net/forum?id=Sy2fzU9gl).

  Args:
    latent_tensor: Input tensor of shape (batch_size, num_latent) to connect
      decoder to.
    output_shape: Shape of the data.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    Output tensor of shape (batch_size, 64, 64, num_channels) with the [0,1]
    pixel intensities.
  """
  del is_training
  d1 = tf.layers.dense(latent_tensor, 256, activation=tf.nn.relu)
  d2 = tf.layers.dense(d1, 1024, activation=tf.nn.relu)
  d2_reshaped = tf.reshape(d2, shape=[-1, 4, 4, 64])
  d3 = tf.layers.conv2d_transpose(
      inputs=d2_reshaped,
      filters=64,
      kernel_size=4,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
  )
  d4 = tf.layers.conv2d_transpose(
      inputs=d3,
      filters=32,
      kernel_size=4,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
  )
  d5 = tf.layers.conv2d_transpose(
      inputs=d4,
      filters=32,
      kernel_size=4,
      strides=2,
      activation=tf.nn.relu,
      padding="same",
  )
  d6 = tf.layers.conv2d_transpose(
      inputs=d5,
      filters=output_shape[2],
      kernel_size=4,
      strides=2,
      padding="same",
  )
  return tf.reshape(d6, [-1] + output_shape)
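This decoder mirrors conv_encoder's downsampling in reverse: with `padding="same"` and `strides=2`, conv2d_transpose maps spatial size n to n * 2, so four layers take the 4x4 feature map from `d2_reshaped` back up to 64x64. The arithmetic:

```python
# Spatial-size sketch for deconv_decoder's upsampling path.
size = 4           # d2_reshaped is (batch, 4, 4, 64)
for _ in range(4): # d3..d6 all use strides=2 with padding="same"
    size *= 2
```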
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `fc_discriminator` function. Write a Python function `def fc_discriminator(input_tensor, is_training=True)` to solve the following problem: Fully connected discriminator used in FactorVAE paper for all datasets. Based on Appendix A page 11 of "Disentangling by Factorizing" (https://arxiv.org/pdf/1802.05983.pdf). Args: input_tensor: Input tensor of shape (None, num_latents) to build discriminator on. is_training: Whether or not the graph is built for training (UNUSED). Returns: logits: Output tensor of shape (batch_size, 2) with logits from discriminator. probs: Output tensor of shape (batch_size, 2) with probabilities from discriminator. Here is the function:

def fc_discriminator(input_tensor, is_training=True):
  """Fully connected discriminator used in FactorVAE paper for all datasets.

  Based on Appendix A page 11 of "Disentangling by Factorizing"
  (https://arxiv.org/pdf/1802.05983.pdf).

  Args:
    input_tensor: Input tensor of shape (None, num_latents) to build
      discriminator on.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    logits: Output tensor of shape (batch_size, 2) with logits from
      discriminator.
    probs: Output tensor of shape (batch_size, 2) with probabilities from
      discriminator.
  """
  del is_training
  flattened = tf.layers.flatten(input_tensor)
  d1 = tf.layers.dense(flattened, 1000, activation=tf.nn.leaky_relu, name="d1")
  d2 = tf.layers.dense(d1, 1000, activation=tf.nn.leaky_relu, name="d2")
  d3 = tf.layers.dense(d2, 1000, activation=tf.nn.leaky_relu, name="d3")
  d4 = tf.layers.dense(d3, 1000, activation=tf.nn.leaky_relu, name="d4")
  d5 = tf.layers.dense(d4, 1000, activation=tf.nn.leaky_relu, name="d5")
  d6 = tf.layers.dense(d5, 1000, activation=tf.nn.leaky_relu, name="d6")
  logits = tf.layers.dense(d6, 2, activation=None, name="logits")
  probs = tf.nn.softmax(logits)
  return logits, probs
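The final step turns the two-class logits into probabilities with a softmax. A NumPy sketch of the numerically stable form (subtracting the row-wise max, which is what well-implemented softmax routines such as tf.nn.softmax effectively do):

```python
import numpy as np

# Stable softmax over the two discriminator logits per example.
logits = np.array([[2.0, -2.0], [0.0, 0.0]])
shifted = logits - logits.max(axis=1, keepdims=True)  # avoids overflow in exp
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
```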
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `test_encoder` function. Write a Python function `def test_encoder(input_tensor, num_latent, is_training)` to solve the following problem: Simple encoder for testing. Args: input_tensor: Input tensor of shape (batch_size, 64, 64, num_channels) to build encoder on. num_latent: Number of latent variables to output. is_training: Whether or not the graph is built for training (UNUSED). Returns: means: Output tensor of shape (batch_size, num_latent) with latent variable means. log_var: Output tensor of shape (batch_size, num_latent) with latent variable log variances. Here is the function:

def test_encoder(input_tensor, num_latent, is_training):
  """Simple encoder for testing.

  Args:
    input_tensor: Input tensor of shape (batch_size, 64, 64, num_channels) to
      build encoder on.
    num_latent: Number of latent variables to output.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    means: Output tensor of shape (batch_size, num_latent) with latent
      variable means.
    log_var: Output tensor of shape (batch_size, num_latent) with latent
      variable log variances.
  """
  del is_training
  flattened = tf.layers.flatten(input_tensor)
  means = tf.layers.dense(flattened, num_latent, activation=None, name="e1")
  log_var = tf.layers.dense(flattened, num_latent, activation=None, name="e2")
  return means, log_var
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf

The provided code snippet includes the necessary dependencies for implementing the `test_decoder` function. Write a Python function `def test_decoder(latent_tensor, output_shape, is_training=False)` to solve the following problem: Simple decoder for testing. Args: latent_tensor: Input tensor to connect decoder to. output_shape: Output shape. is_training: Whether or not the graph is built for training (UNUSED). Returns: Output tensor of shape (batch_size, 64, 64, num_channels) with the [0,1] pixel intensities. Here is the function:

def test_decoder(latent_tensor, output_shape, is_training=False):
  """Simple decoder for testing.

  Args:
    latent_tensor: Input tensor to connect decoder to.
    output_shape: Output shape.
    is_training: Whether or not the graph is built for training (UNUSED).

  Returns:
    Output tensor of shape (batch_size, 64, 64, num_channels) with the [0,1]
    pixel intensities.
  """
  del is_training
  output = tf.layers.dense(latent_tensor, np.prod(output_shape), name="d1")
  return tf.reshape(output, shape=[-1] + output_shape)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow.compat.v1 as tf
import gin.tf


def make_optimizer(optimizer_fn, learning_rate):
  """Wrapper to create the optimizer with a given learning_rate."""
  if learning_rate is None:
    # Learning rate is specified in the optimizer_fn options, or left to its
    # default value.
    return optimizer_fn()
  else:
    # Learning rate is explicitly specified in vae/discriminator optimizer.
    # If it is callable, we assume it's a LR decay function which needs the
    # current global step.
    if callable(learning_rate):
      learning_rate = learning_rate(global_step=tf.train.get_global_step())
    return optimizer_fn(learning_rate=learning_rate)

The provided code snippet includes the necessary dependencies for implementing the `make_vae_optimizer` function. Write a Python function `def make_vae_optimizer(optimizer_fn=gin.REQUIRED, learning_rate=None)` to solve the following problem: Wrapper that uses gin to construct an optimizer for VAEs. Here is the function:

def make_vae_optimizer(optimizer_fn=gin.REQUIRED, learning_rate=None):
  """Wrapper that uses gin to construct an optimizer for VAEs."""
  return make_optimizer(optimizer_fn, learning_rate)
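The dispatch in make_optimizer has three cases: a None learning rate defers to the optimizer's own defaults, a callable is treated as a decay schedule evaluated at the current global step, and a plain number is passed through. The sketch below replays that logic in pure Python with stand-in objects (a dict-returning `opt` and a step-based `decay` schedule, both hypothetical) instead of TensorFlow optimizers:

```python
# Pure-Python sketch of make_optimizer's dispatch logic; `opt` and `decay`
# are stand-ins, not real TensorFlow objects.
def make_optimizer_sketch(optimizer_fn, learning_rate, global_step):
    if learning_rate is None:
        return optimizer_fn()                 # rate comes from optimizer defaults
    if callable(learning_rate):
        # Treat a callable as a decay schedule of the current global step.
        learning_rate = learning_rate(global_step=global_step)
    return optimizer_fn(learning_rate=learning_rate)

opt = lambda learning_rate=0.001: {"lr": learning_rate}
decay = lambda global_step: 0.1 if global_step < 1000 else 0.01
```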
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow.compat.v1 as tf
import gin.tf


def make_optimizer(optimizer_fn, learning_rate):
  """Wrapper to create the optimizer with a given learning_rate."""
  if learning_rate is None:
    # Learning rate is specified in the optimizer_fn options, or left to its
    # default value.
    return optimizer_fn()
  else:
    # Learning rate is explicitly specified in vae/discriminator optimizer.
    # If it is callable, we assume it's a LR decay function which needs the
    # current global step.
    if callable(learning_rate):
      learning_rate = learning_rate(global_step=tf.train.get_global_step())
    return optimizer_fn(learning_rate=learning_rate)

The provided code snippet includes the necessary dependencies for implementing the `make_discriminator_optimizer` function. Write a Python function `def make_discriminator_optimizer(optimizer_fn=gin.REQUIRED, learning_rate=None)` to solve the following problem: Wrapper that uses gin to construct an optimizer for the discriminator. Here is the function:

def make_discriminator_optimizer(optimizer_fn=gin.REQUIRED, learning_rate=None):
  """Wrapper that uses gin to construct an optimizer for the discriminator."""
  return make_optimizer(optimizer_fn, learning_rate)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import time

from disentanglement_lib.data.ground_truth import named_data
from disentanglement_lib.methods.semi_supervised import semi_supervised_utils
from disentanglement_lib.methods.semi_supervised import semi_supervised_vae
from disentanglement_lib.methods.unsupervised import gaussian_encoder_model
from disentanglement_lib.utils import results
import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf.external_configurables
import gin.tf
from tensorflow_estimator.python.estimator.tpu import tpu_config
from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimator


def train(model_dir,
          overwrite=False,
          model=gin.REQUIRED,
          training_steps=gin.REQUIRED,
          unsupervised_data_seed=gin.REQUIRED,
          supervised_data_seed=gin.REQUIRED,
          model_seed=gin.REQUIRED,
          batch_size=gin.REQUIRED,
          num_labelled_samples=gin.REQUIRED,
          train_percentage=gin.REQUIRED,
          name=""):
  """Trains the estimator and exports the snapshot and the gin config.

  The use of this function requires the gin binding 'dataset.name' to be
  specified as that determines the data set used for training.

  Args:
    model_dir: String with path to directory where model output should be
      saved.
    overwrite: Boolean indicating whether to overwrite output directory.
    model: GaussianEncoderModel that should be trained and exported.
    training_steps: Integer with number of training steps.
    unsupervised_data_seed: Integer with random seed used for the
      unsupervised data.
    supervised_data_seed: Integer with random seed for supervised data.
    model_seed: Integer with random seed used for the model.
    batch_size: Integer with the batch size.
    num_labelled_samples: Integer with number of labelled observations for
      training.
    train_percentage: Fraction of the labelled data to use for training (0,1).
    name: Optional string with name of the model (can be used to name models).
  """
  # We do not use the variable 'name'. Instead, it can be used to name results
  # as it will be part of the saved gin config.
  del name

  # Delete the output directory if necessary.
  if tf.gfile.IsDirectory(model_dir):
    if overwrite:
      tf.gfile.DeleteRecursively(model_dir)
    else:
      raise ValueError("Directory already exists and overwrite is False.")

  # Obtain the dataset.
  dataset = named_data.get_named_ground_truth_data()
  (sampled_observations,
   sampled_factors,
   factor_sizes) = semi_supervised_utils.sample_supervised_data(
       supervised_data_seed, dataset, num_labelled_samples)

  # We instantiate the model class.
  if issubclass(model, semi_supervised_vae.BaseS2VAE):
    model = model(factor_sizes)
  else:
    model = model()

  # We create a TPUEstimator based on the provided model. This is primarily so
  # that we could switch to TPU training in the future. For now, we train
  # locally on GPUs.
  run_config = tpu_config.RunConfig(
      tf_random_seed=model_seed,
      keep_checkpoint_max=1,
      tpu_config=tpu_config.TPUConfig(iterations_per_loop=500))
  tpu_estimator = TPUEstimator(
      use_tpu=False,
      model_fn=model.model_fn,
      model_dir=model_dir,
      train_batch_size=batch_size,
      eval_batch_size=batch_size,
      config=run_config)

  # Set up time to keep track of elapsed time in results.
  experiment_timer = time.time()

  # Do the actual training.
  tpu_estimator.train(
      input_fn=_make_input_fn(dataset, num_labelled_samples,
                              unsupervised_data_seed, sampled_observations,
                              sampled_factors, train_percentage),
      steps=training_steps)

  # Save model as a TFHub module.
  output_shape = named_data.get_named_ground_truth_data().observation_shape
  module_export_path = os.path.join(model_dir, "tfhub")
  gaussian_encoder_model.export_as_tf_hub(model, output_shape,
                                          tpu_estimator.latest_checkpoint(),
                                          module_export_path)

  # Save the results. The result dir will contain all the results and config
  # files that we copied along, as we progress in the pipeline. The idea is
  # that these files will be available for analysis at the end.
  results_dict = tpu_estimator.evaluate(
      input_fn=_make_input_fn(
          dataset,
          num_labelled_samples,
          unsupervised_data_seed,
          sampled_observations,
          sampled_factors,
          train_percentage,
          num_batches=num_labelled_samples,
          validation=True))
  results_dir = os.path.join(model_dir, "results")
  results_dict["elapsed_time"] = time.time() - experiment_timer
  results.update_result_directory(results_dir, "train", results_dict)

The provided code snippet includes the necessary dependencies for implementing the `train_with_gin` function. Write a Python function `def train_with_gin(model_dir, overwrite=False, gin_config_files=None, gin_bindings=None)` to solve the following problem: Trains a model based on the provided gin configuration. This function will set the provided gin bindings, call the train() function and clear the gin config. Please see train() for the required gin bindings. Args: model_dir: String with path to directory where model output should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. Here is the function:

def train_with_gin(model_dir,
                   overwrite=False,
                   gin_config_files=None,
                   gin_bindings=None):
  """Trains a model based on the provided gin configuration.

  This function will set the provided gin bindings, call the train() function
  and clear the gin config. Please see train() for the required gin bindings.

  Args:
    model_dir: String with path to directory where model output should be
      saved.
    overwrite: Boolean indicating whether to overwrite output directory.
    gin_config_files: List of gin config files to load.
    gin_bindings: List of gin bindings to use.
  """
  if gin_config_files is None:
    gin_config_files = []
  if gin_bindings is None:
    gin_bindings = []
  gin.parse_config_files_and_bindings(gin_config_files, gin_bindings)
  train(model_dir, overwrite)
  gin.clear_config()
Trains a model based on the provided gin configuration. This function will set the provided gin bindings, call the train() function and clear the gin config. Please see the train() for required gin bindings. Args: model_dir: String with path to directory where model output should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use.
168,187
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import numpy as np import gin.tf.external_configurables import gin.tf The provided code snippet includes necessary dependencies for implementing the `perfect_labeller` function. Write a Python function `def perfect_labeller(labels, dataset, random_state)` to solve the following problem: Returns the true factors of variations without artifacts. Args: labels: True observations of the factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. dataset: Dataset class. random_state: Random state for the noise (unused). Returns: labels: True observations of the factors of variations without artifacts. Numpy array of shape (num_labelled_samples, num_factors) of Float32. Here is the function: def perfect_labeller(labels, dataset, random_state): """Returns the true factors of variations without artifacts. Args: labels: True observations of the factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. dataset: Dataset class. random_state: Random state for the noise (unused). Returns: labels: True observations of the factors of variations without artifacts. Numpy array of shape (num_labelled_samples, num_factors) of Float32. """ del random_state labels = np.float32(labels) return labels, dataset.factors_num_values
Returns the true factors of variations without artifacts. Args: labels: True observations of the factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. dataset: Dataset class. random_state: Random state for the noise (unused). Returns: labels: True observations of the factors of variations without artifacts. Numpy array of shape (num_labelled_samples, num_factors) of Float32.
168,188
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import numpy as np import gin.tf.external_configurables import gin.tf The provided code snippet includes necessary dependencies for implementing the `bin_labeller` function. Write a Python function `def bin_labeller(labels, dataset, random_state, num_bins=5)` to solve the following problem: Returns simplified factors of variations. The factors of variations are binned to take at most num_bins different values to simulate the process of a human roughly labelling the factors of variations. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise (unused). num_bins: Number of bins for the factors of variations. Returns: labels: Binned factors of variations without noise. Numpy array of shape (num_labelled_samples, num_factors) of Float32. Here is the function: def bin_labeller(labels, dataset, random_state, num_bins=5): """Returns simplified factors of variations. The factors of variations are binned to take at most num_bins different values to simulate the process of a human roughly labelling the factors of variations. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise (unused). num_bins: Number of bins for the factors of variations. Returns: labels: Binned factors of variations without noise. Numpy array of shape (num_labelled_samples, num_factors) of Float32. """ del random_state labels = np.float32(labels) for i, num_values in enumerate(dataset.factors_num_values): if num_values > num_bins: size_bin = (num_values / num_bins) labels[:, i] = np.minimum(labels[:, i] // size_bin, num_bins - 1) factors_num_values_bin = np.minimum(dataset.factors_num_values, num_bins) return labels, factors_num_values_bin
Returns simplified factors of variations. The factors of variations are binned to take at most num_bins different values to simulate the process of a human roughly labelling the factors of variations. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise (unused). num_bins: Number of bins for the factors of variations. Returns: labels: Binned factors of variations without noise. Numpy array of shape (num_labelled_samples, num_factors) of Float32.
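The binning arithmetic in `bin_labeller` can be illustrated with plain NumPy. This is an illustrative sketch (the helper name `bin_values` is my own, not part of the library): values are integer-divided by the bin width and clamped to the last bin.

```python
import numpy as np

def bin_values(column, num_values, num_bins=5):
    """Collapse a factor column with num_values distinct values into num_bins bins."""
    if num_values <= num_bins:
        return column  # already coarse enough, nothing to do
    size_bin = num_values / num_bins
    # Integer-divide by the bin width, then clamp to the last bin index.
    return np.minimum(column // size_bin, num_bins - 1)

# A factor with 10 distinct values is collapsed into 5 bins of width 2.
labels = np.arange(10, dtype=np.float32)
binned = bin_values(labels, num_values=10, num_bins=5)
```

Note that when `num_values` is not a multiple of `num_bins`, the clamp makes the last bin slightly larger than the others.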
168,189
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import numpy as np import gin.tf.external_configurables import gin.tf The provided code snippet includes necessary dependencies for implementing the `noisy_labeller` function. Write a Python function `def noisy_labeller(labels, dataset, random_state, prob_random=0.1)` to solve the following problem: Returns noisy factors of variations. With probability prob_random, the observation of the factor of variations is uniformly sampled from all possible factor values. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise. prob_random: Probability of observing random factors of variations. Returns: labels: Noisy factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. Here is the function: def noisy_labeller(labels, dataset, random_state, prob_random=0.1): """Returns noisy factors of variations. With probability prob_random, the observation of the factor of variations is uniformly sampled from all possible factor values. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise. prob_random: Probability of observing random factors of variations. Returns: labels: Noisy factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. """ for j in range(labels.shape[0]): for i, num_values in enumerate(dataset.factors_num_values): p = random_state.rand() if p < prob_random: labels[j, i] = random_state.randint(num_values) labels = np.float32(labels) return labels, dataset.factors_num_values
Returns noisy factors of variations. With probability prob_random, the observation of the factor of variations is uniformly sampled from all possible factor values. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise. prob_random: Probability of observing random factors of variations. Returns: labels: Noisy factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32.
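The corruption model of `noisy_labeller` can be sketched without the dataset class; the function name `add_label_noise` below is my own illustration of the same per-entry resampling loop:

```python
import numpy as np

def add_label_noise(labels, factors_num_values, random_state, prob_random=0.1):
    """With probability prob_random, replace an entry by a uniform random value."""
    labels = np.array(labels, dtype=np.float32)
    for j in range(labels.shape[0]):
        for i, num_values in enumerate(factors_num_values):
            if random_state.rand() < prob_random:
                labels[j, i] = random_state.randint(num_values)
    return labels

rng = np.random.RandomState(0)
noisy = add_label_noise(np.zeros((100, 3)), [10, 10, 10], rng, prob_random=0.5)
```

Every output entry stays a valid factor value; only which value is observed becomes unreliable.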
168,190
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import numpy as np import gin.tf.external_configurables import gin.tf def permute(factor, num_values, random_state): """Permutes the ordinal information of a given factor. Args: factor: Numpy array with the observations of a factor of variation with shape (num_labelled_samples,) and type Int64. num_values: Int with number of distinct values the factor of variation can take. random_state: Random state used to sample the permutation. Returns: factor: Numpy array of Int64 with the observations of a factor of variation with permuted values and shape (num_labelled_samples,). """ unordered_dict = random_state.permutation(range(num_values)) factor[:] = unordered_dict[factor] return factor The provided code snippet includes necessary dependencies for implementing the `permuted_labeller` function. Write a Python function `def permuted_labeller(labels, dataset, random_state)` to solve the following problem: Returns factors of variations where the ordinal information is broken. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise (unused). Returns: labels: Noisy factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. Here is the function: def permuted_labeller(labels, dataset, random_state): """Returns factors of variations where the ordinal information is broken. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise (unused). Returns: labels: Noisy factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. """ for i, num_values in enumerate(dataset.factors_num_values): labels[:, i] = permute(labels[:, i], num_values, random_state) labels = np.float32(labels) return labels, dataset.factors_num_values
Returns factors of variations where the ordinal information is broken. Args: labels: True observations of the factors of variations. dataset: Dataset class. random_state: Random state for the noise (unused). Returns: labels: Noisy factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32.
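The lookup-table remapping done by `permute` can be sketched in a few lines of NumPy (the name `permute_factor` is my own). The mapping is a bijection, so no information is lost, but any ordering among the values is destroyed:

```python
import numpy as np

def permute_factor(column, num_values, random_state):
    """Remap each factor value through a random permutation, breaking ordinality."""
    lookup = random_state.permutation(num_values)
    return lookup[column]

rng = np.random.RandomState(42)
column = np.array([0, 1, 2, 0, 1, 2])
permuted = permute_factor(column, num_values=3, random_state=rng)
```

Equal inputs still map to equal outputs, which is exactly why a downstream model can recover the factor but not its order.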
168,191
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import numpy as np import gin.tf.external_configurables import gin.tf def filter_factors(labels, num_observed_factors, random_state): """Filters observed factors, keeping only a random subset of them. Args: labels: Factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. num_observed_factors: How many factors should be kept. random_state: Random state used to sample the permutation. Returns: Filters the labels so that only num_observed_factors are observed. """ if num_observed_factors < 1: raise ValueError("Cannot observe negative amount of factors.") elif num_observed_factors > labels.shape[1]: raise ValueError( "Cannot observe more factors than the ones in the dataset.") factors_to_keep = random_state.choice(labels.shape[1], size=num_observed_factors, replace=False) return labels[:, factors_to_keep], factors_to_keep @gin.configurable( "partial_labeller", blacklist=["labels", "dataset"]) The provided code snippet includes necessary dependencies for implementing the `partial_labeller` function. Write a Python function `def partial_labeller(labels, dataset, random_state, num_observed_factors=2)` to solve the following problem: Returns a few factors of variations without artifacts. Args: labels: True observations of the factors of variations.
Numpy array of shape (num_labelled_samples, num_factors) of Float32. dataset: Dataset class. random_state: Random state for the noise (unused). num_observed_factors: How many factors are observed. Returns: labels: True observations of the factors of variations without artifacts. Numpy array of shape (num_labelled_samples, num_factors) of Float32. """ labels = np.float32(labels) filtered_factors, factors_to_keep = filter_factors(labels, num_observed_factors, random_state) factors_num_values = [dataset.factors_num_values[i] for i in factors_to_keep] return filtered_factors, factors_num_values
Returns a few factors of variations without artifacts. Args: labels: True observations of the factors of variations. Numpy array of shape (num_labelled_samples, num_factors) of Float32. dataset: Dataset class. random_state: Random state for the noise (unused). num_observed_factors: How many factors are observed. Returns: labels: True observations of the factors of variations without artifacts. Numpy array of shape (num_labelled_samples, num_factors) of Float32.
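The column subsetting performed by `filter_factors` amounts to a random choice of factor indices followed by fancy indexing; this NumPy sketch (with my own helper name `keep_random_factors`) shows the mechanics:

```python
import numpy as np

def keep_random_factors(labels, num_observed_factors, random_state):
    """Keep a random subset of factor columns, as filter_factors does."""
    kept = random_state.choice(
        labels.shape[1], size=num_observed_factors, replace=False)
    return labels[:, kept], kept

rng = np.random.RandomState(0)
labels = np.arange(12, dtype=np.float32).reshape(4, 3)
filtered, kept = keep_random_factors(labels, 2, rng)
```

The returned indices are also needed downstream, so that the cardinalities in `factors_num_values` can be filtered consistently.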
168,192
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `sample_from_latent_distribution` function. Write a Python function `def sample_from_latent_distribution(z_mean, z_logvar)` to solve the following problem: Sample from the encoder distribution with reparametrization trick. Here is the function: def sample_from_latent_distribution(z_mean, z_logvar): """Sample from the encoder distribution with reparametrization trick.""" return tf.add( z_mean, tf.exp(z_logvar / 2) * tf.random_normal(tf.shape(z_mean), 0, 1), name="latent")
Sample from the encoder distribution with reparametrization trick.
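The reparametrization trick computes z = mu + sigma * eps with eps drawn from a standard normal, where sigma = exp(logvar / 2). A NumPy stand-in for the TF ops (function name mine):

```python
import numpy as np

def sample_latent(z_mean, z_logvar, random_state):
    """Reparametrization trick: z = mu + exp(logvar / 2) * eps, eps ~ N(0, I)."""
    eps = random_state.normal(size=z_mean.shape)
    return z_mean + np.exp(z_logvar / 2) * eps

rng = np.random.RandomState(0)
z_mean = np.zeros((4, 2))
z_logvar = np.full((4, 2), -100.0)  # near-zero variance: samples collapse to the mean
z = sample_latent(z_mean, z_logvar, rng)
```

Because the randomness lives only in `eps`, gradients flow through `z_mean` and `z_logvar`, which is the whole point of the trick.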
168,193
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `compute_gaussian_kl` function. Write a Python function `def compute_gaussian_kl(z_mean, z_logvar)` to solve the following problem: Compute KL divergence between input Gaussian and Standard Normal. Here is the function: def compute_gaussian_kl(z_mean, z_logvar): """Compute KL divergence between input Gaussian and Standard Normal.""" return tf.reduce_mean( 0.5 * tf.reduce_sum( tf.square(z_mean) + tf.exp(z_logvar) - z_logvar - 1, [1]), name="kl_loss")
Compute KL divergence between input Gaussian and Standard Normal.
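The closed-form KL term sums 0.5 * (mu^2 + exp(logvar) - logvar - 1) over latent dimensions and averages over the batch. A NumPy transcription of the same formula (name mine):

```python
import numpy as np

def gaussian_kl(z_mean, z_logvar):
    """KL(N(mu, diag(exp(logvar))) || N(0, I)), averaged over the batch."""
    per_sample = 0.5 * np.sum(
        np.square(z_mean) + np.exp(z_logvar) - z_logvar - 1, axis=1)
    return np.mean(per_sample)

# A standard normal encoder distribution has zero KL to the prior.
kl_zero = gaussian_kl(np.zeros((8, 10)), np.zeros((8, 10)))
# Shifting every mean to 1 with unit variance costs 0.5 per dimension.
kl_shifted = gaussian_kl(np.ones((8, 10)), np.zeros((8, 10)))
```

With 10 latent dimensions the shifted case gives 10 * 0.5 = 5.0.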
168,194
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `make_metric_fn` function. Write a Python function `def make_metric_fn(*names)` to solve the following problem: Utility function to report tf.metrics in model functions. Here is the function: def make_metric_fn(*names): """Utility function to report tf.metrics in model functions.""" def metric_fn(*args): return {name: tf.metrics.mean(vec) for name, vec in zip(names, args)} return metric_fn
Utility function to report tf.metrics in model functions.
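The closure pattern in `make_metric_fn` pairs metric names with value vectors positionally via `zip`. A plain-Python sketch with simple means standing in for `tf.metrics.mean` (names mine):

```python
def make_mean_metric_fn(*names):
    """Return a metric_fn that maps each name to the mean of its vector."""
    def metric_fn(*args):
        return {name: sum(vec) / len(vec) for name, vec in zip(names, args)}
    return metric_fn

metric_fn = make_mean_metric_fn("loss", "kl_loss")
metrics = metric_fn([1.0, 3.0], [0.5, 1.5])
```

The names and the positional arguments must be passed in the same order, since `zip` matches them by position.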
168,195
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `make_annealer` function. Write a Python function `def make_annealer(gamma, step, iteration_threshold=gin.REQUIRED, anneal_fn=gin.REQUIRED)` to solve the following problem: Wrapper that creates annealing function. Here is the function: def make_annealer(gamma, step, iteration_threshold=gin.REQUIRED, anneal_fn=gin.REQUIRED): """Wrapper that creates annealing function.""" return anneal_fn(gamma, step, iteration_threshold)
Wrapper that creates annealing function.
168,196
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `fixed_annealer` function. Write a Python function `def fixed_annealer(gamma, step, iteration_threshold)` to solve the following problem: No annealing. Here is the function: def fixed_annealer(gamma, step, iteration_threshold): """No annealing.""" del step, iteration_threshold return gamma
No annealing.
168,197
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `annealed_annealer` function. Write a Python function `def annealed_annealer(gamma, step, iteration_threshold)` to solve the following problem: Linear annealing. Here is the function: def annealed_annealer(gamma, step, iteration_threshold): """Linear annealing.""" return tf.math.minimum(gamma * 1., gamma * 1. * tf.to_float(step) / iteration_threshold)
Linear annealing.
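The linear annealer ramps the weight from 0 to gamma over `iteration_threshold` steps and then stays flat; a plain-Python equivalent of the TF expression (name mine):

```python
def linear_anneal(gamma, step, iteration_threshold):
    """Ramp the weight linearly from 0 to gamma over iteration_threshold steps."""
    return min(gamma, gamma * step / iteration_threshold)
```

After the threshold the `min` clamps the ramp at gamma.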
168,198
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `fine_tune_annealer` function. Write a Python function `def fine_tune_annealer(gamma, step, iteration_threshold)` to solve the following problem: Fine tuning. This annealer returns zero if step < iteration_threshold and gamma otherwise. Args: gamma: Weight of supervised loss. step: Current step of training. iteration_threshold: When to return gamma instead of zero. Returns: Either gamma or zero. Here is the function: def fine_tune_annealer(gamma, step, iteration_threshold): """Fine tuning. This annealer returns zero if step < iteration_threshold and gamma otherwise. Args: gamma: Weight of supervised loss. step: Current step of training. iteration_threshold: When to return gamma instead of zero. Returns: Either gamma or zero. """ return gamma * tf.math.minimum( tf.to_float(1), tf.math.maximum(tf.to_float(0), tf.to_float(step - iteration_threshold)))
Fine tuning. This annealer returns zero if step < iteration_threshold and gamma otherwise. Args: gamma: Weight of supervised loss. step: Current step of training. iteration_threshold: When to return gamma instead of zero. Returns: Either gamma or zero.
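A plain-Python transcription of the clamp used by `fine_tune_annealer` (name mine). Note that, as written, the weight is still zero at `step == iteration_threshold` and becomes gamma one step later, a slight off-by-one relative to the docstring's "zero if step < iteration_threshold":

```python
def fine_tune_anneal(gamma, step, iteration_threshold):
    """Zero weight up to the threshold, then gamma (mirrors the TF min/max clamp)."""
    return gamma * min(1.0, max(0.0, float(step - iteration_threshold)))
```
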
168,199
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `make_supervised_loss` function. Write a Python function `def make_supervised_loss(representation, labels, factor_sizes=None, loss_fn=gin.REQUIRED)` to solve the following problem: Wrapper that creates supervised loss. Here is the function: def make_supervised_loss(representation, labels, factor_sizes=None, loss_fn=gin.REQUIRED): """Wrapper that creates supervised loss.""" with tf.variable_scope("supervised_loss"): loss = loss_fn(representation, labels, factor_sizes) return loss
Wrapper that creates supervised loss.
168,200
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec def normalize_labels(labels, factors_num_values): """Normalize the labels in [0, 1]. Args: labels: Numpy array of shape (num_labelled_samples, num_factors) of Float32. factors_num_values: Numpy array of shape (num_factors,) containing the number of distinct values each factor can take. Returns: labels normalized in [0, 1]. """ factors_num_values_reshaped = np.repeat( np.expand_dims(np.float32(factors_num_values), axis=0), labels.shape[0], axis=0) return labels / factors_num_values_reshaped The provided code snippet includes necessary dependencies for implementing the `supervised_regularizer_l2` function. Write a Python function `def supervised_regularizer_l2(representation, labels, factor_sizes=None, learn_scale=True)` to solve the following problem: Implements a supervised l2 regularizer. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. The number of factors of variation must be smaller than or equal to the number of latent codes. The representation can be scaled with a learned scaling to match the labels, or the labels are normalized in [0,1] and the representation is projected into the same interval using a sigmoid. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation (unused). learn_scale: Boolean indicating whether the scale should be learned or not.
Returns: L2 loss between the representation and the labels. Here is the function: def supervised_regularizer_l2(representation, labels, factor_sizes=None, learn_scale=True): """Implements a supervised l2 regularizer. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. The number of factors of variation must be smaller than or equal to the number of latent codes. The representation can be scaled with a learned scaling to match the labels, or the labels are normalized in [0,1] and the representation is projected into the same interval using a sigmoid. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation (unused). learn_scale: Boolean indicating whether the scale should be learned or not. Returns: L2 loss between the representation and the labels. """ number_latents = representation.shape[1].value number_factors_of_variations = labels.shape[1].value assert number_latents >= number_factors_of_variations, "Not enough latents." if learn_scale: b = tf.get_variable("b", initializer=tf.constant(1.)) return 2. * tf.nn.l2_loss( representation[:, :number_factors_of_variations] * b - labels) else: return 2. * tf.nn.l2_loss( tf.sigmoid( tf.expand_dims( representation[:, :number_factors_of_variations], axis=1)) - normalize_labels(labels, factor_sizes))
Implements a supervised l2 regularizer. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. The number of factors of variation must be smaller than or equal to the number of latent codes. The representation can be scaled with a learned scaling to match the labels, or the labels are normalized in [0,1] and the representation is projected into the same interval using a sigmoid. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation (unused). learn_scale: Boolean indicating whether the scale should be learned or not. Returns: L2 loss between the representation and the labels.
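Since `tf.nn.l2_loss(x)` is `sum(x**2) / 2`, the `learn_scale=True` branch is just a sum of squared residuals between the scaled first latents and the labels. A NumPy sketch with a fixed scale `b` in place of the learned variable (helper name mine):

```python
import numpy as np

def l2_supervised_loss(representation, labels, b=1.0):
    """2 * l2_loss(x) == sum(x**2): penalize scaled latents against labels."""
    k = labels.shape[1]  # number of observed factors; extra latents stay free
    residual = representation[:, :k] * b - labels
    return np.sum(np.square(residual))

representation = np.array([[1.0, 2.0, 9.0],   # third latent is unregularized
                           [3.0, 4.0, 9.0]])
labels = np.array([[1.0, 2.0],
                   [3.0, 0.0]])
loss = l2_supervised_loss(representation, labels)
```

Only the first `k` latent dimensions enter the loss; the rest of the code is left to capture unlabelled variation.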
168,201
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec def normalize_labels(labels, factors_num_values): """Normalize the labels in [0, 1]. Args: labels: Numpy array of shape (num_labelled_samples, num_factors) of Float32. factors_num_values: Numpy array of shape (num_factors,) containing the number of distinct values each factor can take. Returns: labels normalized in [0, 1]. """ factors_num_values_reshaped = np.repeat( np.expand_dims(np.float32(factors_num_values), axis=0), labels.shape[0], axis=0) return labels / factors_num_values_reshaped The provided code snippet includes necessary dependencies for implementing the `supervised_regularizer_xent` function. Write a Python function `def supervised_regularizer_xent(representation, labels, factor_sizes=None)` to solve the following problem: Implements a supervised cross_entropy regularizer. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. If the number of factors of variation is larger than the latent code dimension it raises an exception. Labels are in [0, 1]. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation. Returns: Xent loss between the representation and the labels. Here is the function: def supervised_regularizer_xent(representation, labels, factor_sizes=None): """Implements a supervised cross_entropy regularizer.
If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. If the number of factors of variation is larger than the latent code dimension it raises an exception. Labels are in [0, 1]. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation. Returns: Xent loss between the representation and the labels. """ number_latents = representation.shape[1].value number_factors_of_variations = labels.shape[1].value assert number_latents >= number_factors_of_variations, "Not enough latents." return tf.reduce_sum( tf.nn.sigmoid_cross_entropy_with_logits( logits=representation[:, :number_factors_of_variations], labels=normalize_labels(labels, factor_sizes)))
Implements a supervised cross_entropy regularizer. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. If the number of factors of variation is larger than the latent code dimension it raises an exception. Labels are in [0, 1]. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation. Returns: Xent loss between the representation and the labels.
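The first `k` latents are treated as logits against labels normalized to [0, 1]. A NumPy sketch (helper name mine) using the standard numerically stable form of the elementwise sigmoid cross-entropy, `max(x, 0) - x*z + log(1 + exp(-|x|))`:

```python
import numpy as np

def xent_supervised_loss(representation, normalized_labels):
    """Sum of sigmoid cross-entropy between the first k latents and labels in [0, 1]."""
    k = normalized_labels.shape[1]
    logits = representation[:, :k]
    # Numerically stable per-element sigmoid cross-entropy.
    per_element = (np.maximum(logits, 0) - logits * normalized_labels
                   + np.log1p(np.exp(-np.abs(logits))))
    return np.sum(per_element)

# With zero logits every element costs log(2), regardless of the target.
loss = xent_supervised_loss(np.zeros((2, 4)), np.full((2, 3), 0.5))
```

Here the 2x3 label block gives 6 elements at log(2) each.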
168,202
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `supervised_regularizer_cov` function. Write a Python function `def supervised_regularizer_cov(representation, labels, factor_sizes=None)` to solve the following problem: Implements a supervised regularizer using a covariance. Penalize the deviation from the identity of the covariance between representation and factors of variations. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. Labels are in [0, 1]. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation (unused). Returns: Loss between the representation and the labels. Here is the function: def supervised_regularizer_cov(representation, labels, factor_sizes=None): """Implements a supervised regularizer using a covariance. Penalize the deviation from the identity of the covariance between representation and factors of variations. If the number of latent dimensions is greater than the number of factors of variation, it only uses the first dimensions of the latent code to regularize. Labels are in [0, 1]. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation (unused). Returns: Loss between the representation and the labels.
""" del factor_sizes number_latents = representation.shape[1].value number_factors_of_variations = labels.shape[1].value num_diagonals = tf.math.minimum(number_latents, number_factors_of_variations) expectation_representation = tf.reduce_mean(representation, axis=0) expectation_labels = tf.reduce_mean(labels, axis=0) representation_centered = representation - expectation_representation labels_centered = labels - expectation_labels covariance = tf.reduce_mean( tf.expand_dims(representation_centered, 2) * tf.expand_dims( labels_centered, 1), axis=0) return 2. * tf.nn.l2_loss( tf.linalg.set_diag(covariance, tf.zeros([num_diagonals])))
Implements a supervised regularizer using a covariance. Penalize the deviation from the identity of the covariance between representation and factors of variation. If the number of latent dimensions is greater than the number of factors of variation it only uses the first dimensions of the latent code to regularize. Labels are in [0, 1]. Args: representation: Representation of labelled samples. labels: Labels for the labelled samples. factor_sizes: Cardinality of each factor of variation (unused). Returns: Loss between the representation and the labels.
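The covariance penalty above can be mirrored in plain NumPy as a sanity check. This is a hedged sketch, not part of the library; `cov_penalty_numpy` is a name chosen here for illustration. It reproduces the TF logic: batch cross-covariance of centered representation and labels, diagonal zeroed out (as `tf.linalg.set_diag` does), then a squared-norm penalty (`2 * tf.nn.l2_loss` equals the plain sum of squares).

```python
import numpy as np

def cov_penalty_numpy(representation, labels):
    # Cross-covariance between centered representation and centered labels,
    # averaged over the batch.
    rep_c = representation - representation.mean(axis=0)
    lab_c = labels - labels.mean(axis=0)
    cov = rep_c.T @ lab_c / representation.shape[0]
    # Zero the main diagonal (length min(num_latents, num_factors)),
    # mirroring tf.linalg.set_diag with zeros.
    k = min(cov.shape)
    cov[np.arange(k), np.arange(k)] = 0.0
    # 2 * l2_loss == sum of squared off-diagonal entries.
    return np.sum(cov ** 2)
```

When each latent dimension covaries only with its own factor, the off-diagonal entries vanish and the penalty is zero, which is the behavior the regularizer rewards.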
168,203
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `supervised_regularizer_embed` function. Write a Python function `def supervised_regularizer_embed(representation, labels, factor_sizes, sigma=gin.REQUIRED, use_order=False)` to solve the following problem: Embed factors in 1d and compute softmax with the representation. Assume a factor of variation indexed by j can take k values. We embed each value into k real numbers e_1, ..., e_k. Call e_label(r_j) the embedding of an observed label for the factor j. Then, for a dimension r_j of the representation, the loss is computed as exp(-((r_j - e_label(r_j))*sigma)^2)/sum_{i=1}^k exp(-(r_j - e_i)). We compute this term for each factor of variation j and each point. Finally, we add these terms into a single number. Args: representation: Computed representation, tensor of shape (batch_size, num_latents) labels: Observed values for the factors of variation, tensor of shape (batch_size, num_factors). factor_sizes: Cardinality of each factor of variation. sigma: Temperature for the softmax. Set to "learn" if to be learned. use_order: Boolean indicating whether to use the ordering information in the factors of variations or not. Returns: Supervised loss based on the softmax between embedded labels and representation. 
Here is the function: def supervised_regularizer_embed(representation, labels, factor_sizes, sigma=gin.REQUIRED, use_order=False): """Embed factors in 1d and compute softmax with the representation. Assume a factor of variation indexed by j can take k values. We embed each value into k real numbers e_1, ..., e_k. Call e_label(r_j) the embedding of an observed label for the factor j. Then, for a dimension r_j of the representation, the loss is computed as exp(-((r_j - e_label(r_j))*sigma)^2)/sum_{i=1}^k exp(-(r_j - e_i)). We compute this term for each factor of variation j and each point. Finally, we add these terms into a single number. Args: representation: Computed representation, tensor of shape (batch_size, num_latents) labels: Observed values for the factors of variation, tensor of shape (batch_size, num_factors). factor_sizes: Cardinality of each factor of variation. sigma: Temperature for the softmax. Set to "learn" if to be learned. use_order: Boolean indicating whether to use the ordering information in the factors of variations or not. Returns: Supervised loss based on the softmax between embedded labels and representation. """ number_factors_of_variations = labels.shape[1].value supervised_representation = representation[:, :number_factors_of_variations] loss = [] for i in range(number_factors_of_variations): with tf.variable_scope(str(i), reuse=tf.AUTO_REUSE): if use_order: bias = tf.get_variable("bias", []) slope = tf.get_variable("slope", []) embedding = tf.range(factor_sizes[i], dtype=tf.float32)*slope + bias else: embedding = tf.get_variable("embedding", [factor_sizes[i]]) if sigma == "learn": sigma_value = tf.get_variable("sigma", [1]) else: sigma_value = sigma logits = -tf.square( (tf.expand_dims(supervised_representation[:, i], axis=1) - embedding) * sigma_value) one_hot_labels = tf.one_hot(tf.to_int32(labels[:, i]), factor_sizes[i]) loss += [tf.losses.softmax_cross_entropy(one_hot_labels, logits)] return tf.reduce_sum(tf.add_n(loss))
Embed factors in 1d and compute softmax with the representation. Assume a factor of variation indexed by j can take k values. We embed each value into k real numbers e_1, ..., e_k. Call e_label(r_j) the embedding of an observed label for the factor j. Then, for a dimension r_j of the representation, the loss is computed as exp(-((r_j - e_label(r_j))*sigma)^2)/sum_{i=1}^k exp(-((r_j - e_i)*sigma)^2). We compute this term for each factor of variation j and each point. Finally, we add these terms into a single number. Args: representation: Computed representation, tensor of shape (batch_size, num_latents) labels: Observed values for the factors of variation, tensor of shape (batch_size, num_factors). factor_sizes: Cardinality of each factor of variation. sigma: Temperature for the softmax. Set to "learn" if it is to be learned. use_order: Boolean indicating whether to use the ordering information in the factors of variation or not. Returns: Supervised loss based on the softmax between embedded labels and representation.
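The per-factor softmax cross-entropy over squared distances can be sketched in NumPy for one factor (an illustrative re-derivation, not library code; `embed_xent_numpy` is a hypothetical name):

```python
import numpy as np

def embed_xent_numpy(code, embedding, labels, sigma=1.0):
    """Softmax cross-entropy of a 1-d code against candidate embeddings.

    code:      (batch,) values of one representation dimension.
    embedding: (k,) one real embedding per factor value.
    labels:    (batch,) integer factor values in [0, k).
    """
    # Logits are negative squared (scaled) distances to each embedding.
    logits = -np.square((code[:, None] - embedding[None, :]) * sigma)
    # Numerically stable log-softmax.
    m = logits.max(axis=1, keepdims=True)
    log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    # Cross-entropy against the observed (one-hot) labels.
    return -np.mean(log_probs[np.arange(len(code)), labels])
```

A code that sits on its label's embedding yields a smaller loss than one assigned mismatched labels, which is what drives the representation toward the labeled factor values.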
168,204
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import vae import numpy as np from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec The provided code snippet includes necessary dependencies for implementing the `mine` function. Write a Python function `def mine(x, z, name_net="estimator_network")` to solve the following problem: Computes I(X, Z). Uses the algorithm in "Mutual Information Neural Estimation" (https://arxiv.org/pdf/1801.04062.pdf). Args: x: Samples from x [batch_size, size_x]. z: Samples from z [batch_size, size_z]. name_net: Scope for the variables forming the network. Returns: Estimate of the mutual information and the update op for the optimizer. Here is the function: def mine(x, z, name_net="estimator_network"): """Computes I(X, Z). Uses the algorithm in "Mutual Information Neural Estimation" (https://arxiv.org/pdf/1801.04062.pdf). Args: x: Samples from x [batch_size, size_x]. z: Samples from z [batch_size, size_z]. name_net: Scope for the variables forming the network. Returns: Estimate of the mutual information and the update op for the optimizer. 
""" z_shuffled = vae.shuffle_codes(z) concat_x_x = tf.concat([x, x], axis=0) concat_z_z_shuffled = tf.stop_gradient(tf.concat([z, z_shuffled], axis=0)) with tf.variable_scope(name_net, reuse=tf.AUTO_REUSE): d1_x = tf.layers.dense(concat_x_x, 20, name="d1_x") d1_z = tf.layers.dense(concat_z_z_shuffled, 20, name="d1_z") d1 = tf.nn.elu(d1_x + d1_z, name="d1") d2 = tf.layers.dense(d1, 1, name="d2") batch_size = tf.shape(x)[0] pred_x_z = d2[:batch_size] pred_x_z_shuffled = d2[batch_size:] loss = -( tf.reduce_mean(pred_x_z, axis=0) + tf.math.log(tf.to_float(batch_size)) - tf.math.reduce_logsumexp(pred_x_z_shuffled)) all_variables = tf.trainable_variables() mine_vars = [var for var in all_variables if "estimator_network" in var.name] mine_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize( loss=loss, var_list=mine_vars) return -loss, mine_op
Computes I(X, Z). Uses the algorithm in "Mutual Information Neural Estimation" (https://arxiv.org/pdf/1801.04062.pdf). Args: x: Samples from x [batch_size, size_x]. z: Samples from z [batch_size, size_z]. name_net: Scope for the variables forming the network. Returns: Estimate of the mutual information and the update op for the optimizer.
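MINE maximizes the Donsker-Varadhan lower bound on I(X; Z); given critic scores on joint and shuffled (marginal) pairs, the bound itself is easy to compute in NumPy. A hedged sketch with names chosen here (the snippet's `-loss` is this quantity, with the `log(batch_size)` term folded into the log-mean-exp):

```python
import numpy as np

def dv_lower_bound(scores_joint, scores_marginal):
    # Donsker-Varadhan bound: E_p(x,z)[T] - log E_p(x)p(z)[exp(T)].
    n = len(scores_marginal)
    # Stable log-mean-exp of the marginal scores.
    m = scores_marginal.max()
    log_mean_exp = m + np.log(np.exp(scores_marginal - m).sum()) - np.log(n)
    return scores_joint.mean() - log_mean_exp
```

An untrained critic that outputs a constant gives a bound of exactly zero; training pushes joint scores up and marginal scores down, tightening the estimate.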
168,205
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import time from disentanglement_lib.data.ground_truth import named_data from disentanglement_lib.data.ground_truth import util from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from disentanglement_lib.methods.unsupervised import vae from disentanglement_lib.utils import results import numpy as np import tensorflow.compat.v1 as tf import gin.tf.external_configurables import gin.tf from tensorflow.contrib import tpu as contrib_tpu def train(model_dir, overwrite=False, model=gin.REQUIRED, training_steps=gin.REQUIRED, random_seed=gin.REQUIRED, batch_size=gin.REQUIRED, eval_steps=1000, name="", model_num=None): """Trains the estimator and exports the snapshot and the gin config. The use of this function requires the gin binding 'dataset.name' to be specified as that determines the data set used for training. Args: model_dir: String with path to directory where model output should be saved. overwrite: Boolean indicating whether to overwrite output directory. model: GaussianEncoderModel that should be trained and exported. training_steps: Integer with number of training steps. random_seed: Integer with random seed used for training. batch_size: Integer with the batch size. eval_steps: Optional integer with number of steps used for evaluation. name: Optional string with name of the model (can be used to name models). model_num: Optional integer with model number (can be used to identify models). """ # We do not use the variables 'name' and 'model_num'. Instead, they can be # used to name results as they will be part of the saved gin config. del name, model_num # Delete the output directory if it already exists. if tf.gfile.IsDirectory(model_dir): if overwrite: tf.gfile.DeleteRecursively(model_dir) else: raise ValueError("Directory already exists and overwrite is False.") # Create a numpy random state. 
We will sample the random seeds for training # and evaluation from this. random_state = np.random.RandomState(random_seed) # Obtain the dataset. dataset = named_data.get_named_ground_truth_data() # We create a TPUEstimator based on the provided model. This is primarily so # that we could switch to TPU training in the future. For now, we train # locally on GPUs. run_config = contrib_tpu.RunConfig( tf_random_seed=random_seed, keep_checkpoint_max=1, tpu_config=contrib_tpu.TPUConfig(iterations_per_loop=500)) tpu_estimator = contrib_tpu.TPUEstimator( use_tpu=False, model_fn=model.model_fn, model_dir=os.path.join(model_dir, "tf_checkpoint"), train_batch_size=batch_size, eval_batch_size=batch_size, config=run_config) # Set up time to keep track of elapsed time in results. experiment_timer = time.time() # Do the actual training. tpu_estimator.train( input_fn=_make_input_fn(dataset, random_state.randint(2**32)), steps=training_steps) # Save model as a TFHub module. output_shape = named_data.get_named_ground_truth_data().observation_shape module_export_path = os.path.join(model_dir, "tfhub") gaussian_encoder_model.export_as_tf_hub(model, output_shape, tpu_estimator.latest_checkpoint(), module_export_path) # Save the results. The result dir will contain all the results and config # files that we copied along, as we progress in the pipeline. The idea is that # these files will be available for analysis at the end. results_dict = tpu_estimator.evaluate( input_fn=_make_input_fn( dataset, random_state.randint(2**32), num_batches=eval_steps)) results_dir = os.path.join(model_dir, "results") results_dict["elapsed_time"] = time.time() - experiment_timer results.update_result_directory(results_dir, "train", results_dict) The provided code snippet includes necessary dependencies for implementing the `train_with_gin` function. 
Write a Python function `def train_with_gin(model_dir, overwrite=False, gin_config_files=None, gin_bindings=None)` to solve the following problem: Trains a model based on the provided gin configuration. This function will set the provided gin bindings, call the train() function and clear the gin config. Please see train() for required gin bindings. Args: model_dir: String with path to directory where model output should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. Here is the function: def train_with_gin(model_dir, overwrite=False, gin_config_files=None, gin_bindings=None): """Trains a model based on the provided gin configuration. This function will set the provided gin bindings, call the train() function and clear the gin config. Please see train() for required gin bindings. Args: model_dir: String with path to directory where model output should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. """ if gin_config_files is None: gin_config_files = [] if gin_bindings is None: gin_bindings = [] gin.parse_config_files_and_bindings(gin_config_files, gin_bindings) train(model_dir, overwrite) gin.clear_config()
Trains a model based on the provided gin configuration. This function will set the provided gin bindings, call the train() function and clear the gin config. Please see train() for required gin bindings. Args: model_dir: String with path to directory where model output should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use.
168,206
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from six.moves import range from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow.contrib import tpu as contrib_tpu The provided code snippet includes necessary dependencies for implementing the `compute_gaussian_kl` function. Write a Python function `def compute_gaussian_kl(z_mean, z_logvar)` to solve the following problem: Compute KL divergence between input Gaussian and Standard Normal. Here is the function: def compute_gaussian_kl(z_mean, z_logvar): """Compute KL divergence between input Gaussian and Standard Normal.""" return tf.reduce_mean( 0.5 * tf.reduce_sum( tf.square(z_mean) + tf.exp(z_logvar) - z_logvar - 1, [1]), name="kl_loss")
Compute KL divergence between input Gaussian and Standard Normal.
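The closed-form KL term can be checked with a small NumPy sketch (illustrative re-implementation, not library code):

```python
import numpy as np

def gaussian_kl_numpy(z_mean, z_logvar):
    # KL(N(mu, diag(sigma^2)) || N(0, I)) per sample, averaged over the batch:
    # 0.5 * sum_d (mu_d^2 + exp(logvar_d) - logvar_d - 1).
    return np.mean(
        0.5 * np.sum(np.square(z_mean) + np.exp(z_logvar) - z_logvar - 1.0,
                     axis=1))
```

For a posterior equal to the prior (mu = 0, logvar = 0) the KL is exactly zero; shifting the mean to 1 in each of 3 dimensions gives 0.5 per dimension, i.e. 1.5.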
168,207
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from six.moves import range from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow.contrib import tpu as contrib_tpu The provided code snippet includes necessary dependencies for implementing the `make_metric_fn` function. Write a Python function `def make_metric_fn(*names)` to solve the following problem: Utility function to report tf.metrics in model functions. Here is the function: def make_metric_fn(*names): """Utility function to report tf.metrics in model functions.""" def metric_fn(*args): return {name: tf.metrics.mean(vec) for name, vec in zip(names, args)} return metric_fn
Utility function to report tf.metrics in model functions.
168,208
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from six.moves import range from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow.contrib import tpu as contrib_tpu The provided code snippet includes necessary dependencies for implementing the `anneal` function. Write a Python function `def anneal(c_max, step, iteration_threshold)` to solve the following problem: Anneal function for anneal_vae (https://arxiv.org/abs/1804.03599). Args: c_max: Maximum capacity. step: Current step. iteration_threshold: How many iterations to reach c_max. Returns: Capacity annealed linearly until c_max. Here is the function: def anneal(c_max, step, iteration_threshold): """Anneal function for anneal_vae (https://arxiv.org/abs/1804.03599). Args: c_max: Maximum capacity. step: Current step. iteration_threshold: How many iterations to reach c_max. Returns: Capacity annealed linearly until c_max. """ return tf.math.minimum(c_max * 1., c_max * 1. * tf.to_float(step) / iteration_threshold)
Anneal function for anneal_vae (https://arxiv.org/abs/1804.03599). Args: c_max: Maximum capacity. step: Current step. iteration_threshold: How many iterations to reach c_max. Returns: Capacity annealed linearly until c_max.
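The annealing schedule is a simple linear ramp that saturates at `c_max`; a NumPy/plain-Python sketch of the same formula:

```python
def anneal_numpy(c_max, step, iteration_threshold):
    # Capacity grows linearly from 0 to c_max over iteration_threshold
    # steps, then stays flat at c_max.
    return min(c_max, c_max * float(step) / iteration_threshold)
```

For example, halfway to the threshold the capacity is half of `c_max`, and past the threshold it is clipped to `c_max`.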
168,209
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from six.moves import range from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow.contrib import tpu as contrib_tpu The provided code snippet includes necessary dependencies for implementing the `compute_covariance_z_mean` function. Write a Python function `def compute_covariance_z_mean(z_mean)` to solve the following problem: Computes the covariance of z_mean. Uses cov(z_mean) = E[z_mean*z_mean^T] - E[z_mean]E[z_mean]^T. Args: z_mean: Encoder mean, tensor of size [batch_size, num_latent]. Returns: cov_z_mean: Covariance of encoder mean, tensor of size [num_latent, num_latent]. Here is the function: def compute_covariance_z_mean(z_mean): """Computes the covariance of z_mean. Uses cov(z_mean) = E[z_mean*z_mean^T] - E[z_mean]E[z_mean]^T. Args: z_mean: Encoder mean, tensor of size [batch_size, num_latent]. Returns: cov_z_mean: Covariance of encoder mean, tensor of size [num_latent, num_latent]. """ expectation_z_mean_z_mean_t = tf.reduce_mean( tf.expand_dims(z_mean, 2) * tf.expand_dims(z_mean, 1), axis=0) expectation_z_mean = tf.reduce_mean(z_mean, axis=0) cov_z_mean = tf.subtract( expectation_z_mean_z_mean_t, tf.expand_dims(expectation_z_mean, 1) * tf.expand_dims( expectation_z_mean, 0)) return cov_z_mean
Computes the covariance of z_mean. Uses cov(z_mean) = E[z_mean*z_mean^T] - E[z_mean]E[z_mean]^T. Args: z_mean: Encoder mean, tensor of size [batch_size, num_latent]. Returns: cov_z_mean: Covariance of encoder mean, tensor of size [num_latent, num_latent].
168,210
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from six.moves import range from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow.contrib import tpu as contrib_tpu The provided code snippet includes necessary dependencies for implementing the `regularize_diag_off_diag_dip` function. Write a Python function `def regularize_diag_off_diag_dip(covariance_matrix, lambda_od, lambda_d)` to solve the following problem: Compute on and off diagonal regularizers for DIP-VAE models. Penalize deviations of covariance_matrix from the identity matrix. Uses different weights for the deviations of the diagonal and off diagonal entries. Args: covariance_matrix: Tensor of size [num_latent, num_latent] to regularize. lambda_od: Weight of penalty for off diagonal elements. lambda_d: Weight of penalty for diagonal elements. Returns: dip_regularizer: Regularized deviation from diagonal of covariance_matrix. Here is the function: def regularize_diag_off_diag_dip(covariance_matrix, lambda_od, lambda_d): """Compute on and off diagonal regularizers for DIP-VAE models. Penalize deviations of covariance_matrix from the identity matrix. Uses different weights for the deviations of the diagonal and off diagonal entries. Args: covariance_matrix: Tensor of size [num_latent, num_latent] to regularize. lambda_od: Weight of penalty for off diagonal elements. lambda_d: Weight of penalty for diagonal elements. Returns: dip_regularizer: Regularized deviation from diagonal of covariance_matrix. 
""" covariance_matrix_diagonal = tf.diag_part(covariance_matrix) covariance_matrix_off_diagonal = covariance_matrix - tf.diag( covariance_matrix_diagonal) dip_regularizer = tf.add( lambda_od * tf.reduce_sum(covariance_matrix_off_diagonal**2), lambda_d * tf.reduce_sum((covariance_matrix_diagonal - 1)**2)) return dip_regularizer
Compute on and off diagonal regularizers for DIP-VAE models. Penalize deviations of covariance_matrix from the identity matrix. Uses different weights for the deviations of the diagonal and off diagonal entries. Args: covariance_matrix: Tensor of size [num_latent, num_latent] to regularize. lambda_od: Weight of penalty for off diagonal elements. lambda_d: Weight of penalty for diagonal elements. Returns: dip_regularizer: Regularized deviation from diagonal of covariance_matrix.
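The DIP penalty separates the covariance matrix into its diagonal and off-diagonal parts and weights them independently; a NumPy sketch of the same computation (illustrative, not library code):

```python
import numpy as np

def dip_regularizer_numpy(cov, lambda_od, lambda_d):
    # Off-diagonal entries should be 0, diagonal entries should be 1.
    diag = np.diag(cov)
    off_diag = cov - np.diag(diag)
    return (lambda_od * np.sum(off_diag ** 2)
            + lambda_d * np.sum((diag - 1.0) ** 2))
```

An identity covariance incurs zero penalty; any correlation between latents (off-diagonal mass) or mis-scaled variance (diagonal away from 1) is penalized with its respective weight.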
168,211
from __future__ import absolute_import from __future__ import division from __future__ import print_function import math from disentanglement_lib.methods.shared import architectures from disentanglement_lib.methods.shared import losses from disentanglement_lib.methods.shared import optimizers from disentanglement_lib.methods.unsupervised import gaussian_encoder_model from six.moves import range from six.moves import zip import tensorflow.compat.v1 as tf import gin.tf from tensorflow.contrib import tpu as contrib_tpu def gaussian_log_density(samples, mean, log_var): pi = tf.constant(math.pi) normalization = tf.log(2. * pi) inv_sigma = tf.exp(-log_var) tmp = (samples - mean) return -0.5 * (tmp * tmp * inv_sigma + log_var + normalization) The provided code snippet includes necessary dependencies for implementing the `total_correlation` function. Write a Python function `def total_correlation(z, z_mean, z_logvar)` to solve the following problem: Estimate of total correlation on a batch. We need to compute the expectation over a batch of: E_j [log(q(z(x_j))) - log(prod_l q(z(x_j)_l))]. We ignore the constants as they do not matter for the minimization. The constant should be equal to (num_latents - 1) * log(batch_size * dataset_size) Args: z: [batch_size, num_latents]-tensor with sampled representation. z_mean: [batch_size, num_latents]-tensor with mean of the encoder. z_logvar: [batch_size, num_latents]-tensor with log variance of the encoder. Returns: Total correlation estimated on a batch. Here is the function: def total_correlation(z, z_mean, z_logvar): """Estimate of total correlation on a batch. We need to compute the expectation over a batch of: E_j [log(q(z(x_j))) - log(prod_l q(z(x_j)_l))]. We ignore the constants as they do not matter for the minimization. The constant should be equal to (num_latents - 1) * log(batch_size * dataset_size) Args: z: [batch_size, num_latents]-tensor with sampled representation. 
z_mean: [batch_size, num_latents]-tensor with mean of the encoder. z_logvar: [batch_size, num_latents]-tensor with log variance of the encoder. Returns: Total correlation estimated on a batch. """ # Compute log(q(z(x_j)|x_i)) for every sample in the batch, which is a # tensor of size [batch_size, batch_size, num_latents]. In the following # comments, [batch_size, batch_size, num_latents] are indexed by [j, i, l]. log_qz_prob = gaussian_log_density( tf.expand_dims(z, 1), tf.expand_dims(z_mean, 0), tf.expand_dims(z_logvar, 0)) # Compute log prod_l p(z(x_j)_l) = sum_l(log(sum_i(q(z(z_j)_l|x_i))) # + constant) for each sample in the batch, which is a vector of size # [batch_size,]. log_qz_product = tf.reduce_sum( tf.reduce_logsumexp(log_qz_prob, axis=1, keepdims=False), axis=1, keepdims=False) # Compute log(q(z(x_j))) as log(sum_i(q(z(x_j)|x_i))) + constant = # log(sum_i(prod_l q(z(x_j)_l|x_i))) + constant. log_qz = tf.reduce_logsumexp( tf.reduce_sum(log_qz_prob, axis=2, keepdims=False), axis=1, keepdims=False) return tf.reduce_mean(log_qz - log_qz_product)
Estimate of total correlation on a batch. We need to compute the expectation over a batch of: E_j [log(q(z(x_j))) - log(prod_l q(z(x_j)_l))]. We ignore the constants as they do not matter for the minimization. The constant should be equal to (num_latents - 1) * log(batch_size * dataset_size) Args: z: [batch_size, num_latents]-tensor with sampled representation. z_mean: [batch_size, num_latents]-tensor with mean of the encoder. z_logvar: [batch_size, num_latents]-tensor with log variance of the encoder. Returns: Total correlation estimated on a batch.
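The minibatch total-correlation estimator translates directly to NumPy; this is a hedged re-derivation (names chosen here), useful for checking the tensor shapes in the comments above:

```python
import numpy as np

def _logsumexp(a, axis):
    # Numerically stable logsumexp along one axis.
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def gaussian_log_density_np(samples, mean, log_var):
    c = np.log(2.0 * np.pi)
    return -0.5 * ((samples - mean) ** 2 * np.exp(-log_var) + log_var + c)

def total_correlation_numpy(z, z_mean, z_logvar):
    # log q(z(x_j)_l | x_i) for every pair: shape [batch, batch, num_latents],
    # indexed [j, i, l] as in the TF comments.
    log_qz_prob = gaussian_log_density_np(
        z[:, None, :], z_mean[None, :, :], z_logvar[None, :, :])
    # log prod_l q(z_l) (up to a constant): sum over latents of a logsumexp
    # over the batch axis i.
    log_qz_product = np.sum(_logsumexp(log_qz_prob, axis=1), axis=1)
    # log q(z) (up to the same constant): logsumexp over i of the sum over l.
    log_qz = _logsumexp(np.sum(log_qz_prob, axis=2), axis=1)
    return np.mean(log_qz - log_qz_product)
```

With a single latent dimension the joint and the product of marginals coincide, so the estimate is exactly zero, a quick correctness check.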
168,212
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range import gin.tf def _prune_dims(variances, threshold=0.): """Mask for dimensions collapsed to the prior.""" scale_z = np.sqrt(variances) return scale_z >= threshold def _compute_variances(ground_truth_data, representation_function, batch_size, random_state, eval_batch_size=64): """Computes the variance for each dimension of the representation. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observation as input and outputs a representation. batch_size: Number of points to be used to compute the variances. random_state: Numpy random state used for randomness. eval_batch_size: Batch size used to eval representation. Returns: Vector with the variance of each dimension. """ observations = ground_truth_data.sample_observations(batch_size, random_state) representations = utils.obtain_representation(observations, representation_function, eval_batch_size) representations = np.transpose(representations) assert representations.shape[0] == batch_size return np.var(representations, axis=0, ddof=1) def _generate_training_batch(ground_truth_data, representation_function, batch_size, num_points, random_state, global_variances, active_dims): """Sample a set of training samples based on a batch of ground-truth data. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. batch_size: Number of points to be used to compute the training_sample. num_points: Number of points to be sampled for training set. random_state: Numpy random state used for randomness. global_variances: Numpy vector with variances for all dimensions of representation. 
active_dims: Indexes of active dimensions. Returns: (num_factors, dim_representation)-sized numpy array with votes. """ votes = np.zeros((ground_truth_data.num_factors, global_variances.shape[0]), dtype=np.int64) for _ in range(num_points): factor_index, argmin = _generate_training_sample(ground_truth_data, representation_function, batch_size, random_state, global_variances, active_dims) votes[factor_index, argmin] += 1 return votes The provided code snippet includes necessary dependencies for implementing the `compute_factor_vae` function. Write a Python function `def compute_factor_vae(ground_truth_data, representation_function, random_state, artifact_dir=None, batch_size=gin.REQUIRED, num_train=gin.REQUIRED, num_eval=gin.REQUIRED, num_variance_estimate=gin.REQUIRED)` to solve the following problem: Computes the FactorVAE disentanglement metric. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. batch_size: Number of points to be used to compute the training_sample. num_train: Number of points used for training. num_eval: Number of points used for evaluation. num_variance_estimate: Number of points used to estimate global variances. Returns: Dictionary with scores: train_accuracy: Accuracy on training set. eval_accuracy: Accuracy on evaluation set. Here is the function: def compute_factor_vae(ground_truth_data, representation_function, random_state, artifact_dir=None, batch_size=gin.REQUIRED, num_train=gin.REQUIRED, num_eval=gin.REQUIRED, num_variance_estimate=gin.REQUIRED): """Computes the FactorVAE disentanglement metric. Args: ground_truth_data: GroundTruthData to be sampled from. 
representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. batch_size: Number of points to be used to compute the training_sample. num_train: Number of points used for training. num_eval: Number of points used for evaluation. num_variance_estimate: Number of points used to estimate global variances. Returns: Dictionary with scores: train_accuracy: Accuracy on training set. eval_accuracy: Accuracy on evaluation set. """ del artifact_dir logging.info("Computing global variances to standardise.") global_variances = _compute_variances(ground_truth_data, representation_function, num_variance_estimate, random_state) active_dims = _prune_dims(global_variances) scores_dict = {} if not active_dims.any(): scores_dict["train_accuracy"] = 0. scores_dict["eval_accuracy"] = 0. scores_dict["num_active_dims"] = 0 return scores_dict logging.info("Generating training set.") training_votes = _generate_training_batch(ground_truth_data, representation_function, batch_size, num_train, random_state, global_variances, active_dims) classifier = np.argmax(training_votes, axis=0) other_index = np.arange(training_votes.shape[1]) logging.info("Evaluate training set accuracy.") train_accuracy = np.sum( training_votes[classifier, other_index]) * 1. / np.sum(training_votes) logging.info("Training set accuracy: %.2g", train_accuracy) logging.info("Generating evaluation set.") eval_votes = _generate_training_batch(ground_truth_data, representation_function, batch_size, num_eval, random_state, global_variances, active_dims) logging.info("Evaluate evaluation set accuracy.") eval_accuracy = np.sum(eval_votes[classifier, other_index]) * 1. 
/ np.sum(eval_votes) logging.info("Evaluation set accuracy: %.2g", eval_accuracy) scores_dict["train_accuracy"] = train_accuracy scores_dict["eval_accuracy"] = eval_accuracy scores_dict["num_active_dims"] = len(active_dims) return scores_dict
Computes the FactorVAE disentanglement metric. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. batch_size: Number of points to be used to compute the training_sample. num_train: Number of points used for training. num_eval: Number of points used for evaluation. num_variance_estimate: Number of points used to estimate global variances. Returns: Dictionary with scores: train_accuracy: Accuracy on training set. eval_accuracy: Accuracy on evaluation set.
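The classifier at the heart of the FactorVAE metric above is just a majority-vote lookup table over a vote matrix. A minimal numpy sketch (the vote counts below are made up for illustration) shows how the `classifier`, `other_index`, and accuracy lines behave:

```python
import numpy as np

# Hypothetical vote matrix: rows index latent dimensions, columns index
# ground-truth factors. Entry [d, f] counts how often dimension d had the
# smallest normalized variance in a batch where factor f was held fixed.
training_votes = np.array([[9, 1, 0],
                           [1, 8, 2],
                           [0, 1, 8]])

# Majority vote: assign each factor the dimension that voted for it most often.
classifier = np.argmax(training_votes, axis=0)
other_index = np.arange(training_votes.shape[1])

# Accuracy is the fraction of votes that agree with the majority assignment.
accuracy = np.sum(training_votes[classifier, other_index]) / np.sum(training_votes)
print(classifier, accuracy)  # [0 1 2] 0.8333...
```

At evaluation time the same `classifier` and `other_index` are reused on a fresh vote matrix, which is exactly what the `eval_accuracy` line in the snippet does.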
168,213
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range import gin.tf def _compute_loss(x_train, y_train, x_test, y_test, predictor_fn): """Compute average accuracy for train and test set.""" num_factors = y_train.shape[0] train_loss = [] test_loss = [] for i in range(num_factors): model = predictor_fn() model.fit(x_train, y_train[i, :]) train_loss.append(np.mean(model.predict(x_train) == y_train[i, :])) test_loss.append(np.mean(model.predict(x_test) == y_test[i, :])) return train_loss, test_loss def _compute_loss_intervene(factors_train, factors_test, predictor_fn, ground_truth_data, representation_function, random_state, n_experiment=10): """Compute average accuracy for train and test set.""" num_factors = factors_train.shape[1] train_loss = [] test_loss = [] for i in range(num_factors): for _ in range(n_experiment): factors_train_int, factors_test_int, _, _ = intervene( factors_train.copy(), factors_test.copy(), i, num_factors, ground_truth_data) obs_train_int = ground_truth_data.sample_observations_from_factors( factors_train_int, random_state) obs_test_int = ground_truth_data.sample_observations_from_factors( factors_test_int, random_state) x_train_int = representation_function(obs_train_int) x_test_int = representation_function(obs_test_int) # train predictor on the intervened data y_train_int = np.transpose(factors_train_int) y_test_int = np.transpose(factors_test_int) model = predictor_fn() model.fit(x_train_int, y_train_int[i, :]) train_loss.append( np.mean(model.predict(x_train_int) == y_train_int[i, :])) test_loss.append(np.mean(model.predict(x_test_int) == y_test_int[i, :])) return train_loss, test_loss The provided code snippet includes necessary dependencies for implementing the `compute_strong_downstream_task` function.
Write a Python function `def compute_strong_downstream_task(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, n_experiment=gin.REQUIRED)` to solve the following problem: Computes loss of downstream task. This task is about strong generalization under covariate shifts. We first perform an intervention fixing a value for a factor in the whole training set. Then, we train a GBT classifier, and at test time, we consider all other values for that factor. We repeat the experiment n_experiment times, to ensure robustness. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. n_experiment: Number of repetitions of the experiment. Returns: Dictionary with scores. Here is the function: def compute_strong_downstream_task(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, n_experiment=gin.REQUIRED): """Computes loss of downstream task. This task is about strong generalization under covariate shifts. We first perform an intervention fixing a value for a factor in the whole training set. Then, we train a GBT classifier, and at test time, we consider all other values for that factor. We repeat the experiment n_experiment times, to ensure robustness. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. 
num_train: Number of points used for training. num_test: Number of points used for testing. n_experiment: Number of repetitions of the experiment. Returns: Dictionary with scores. """ del artifact_dir scores = {} for train_size in num_train: # sample factors factors_train = ground_truth_data.sample_factors(train_size, random_state) factors_test = ground_truth_data.sample_factors(num_test, random_state) # obtain observations without intervention x_train = ground_truth_data.sample_observations_from_factors( factors_train, random_state) x_test = ground_truth_data.sample_observations_from_factors( factors_test, random_state) mus_train = representation_function(x_train) mus_test = representation_function(x_test) # train predictor on data without intervention predictor_model = utils.make_predictor_fn() y_train = np.transpose(factors_train) y_test = np.transpose(factors_test) train_err, test_err = _compute_loss( mus_train, y_train, mus_test, y_test, predictor_model) # train predictor on data with interventions train_err_int, test_err_int = _compute_loss_intervene( factors_train, factors_test, predictor_model, ground_truth_data, representation_function, random_state, n_experiment) size_string = str(train_size) scores[size_string + ":mean_train_accuracy"] = np.mean(train_err) scores[size_string + ":mean_test_accuracy"] = np.mean(test_err) scores[size_string + ":mean_strong_train_accuracy"] = np.mean(train_err_int) scores[size_string + ":mean_strong_test_accuracy"] = np.mean(test_err_int) scores[size_string + ":strong_generalization_gap"] = 1. - ( scores[size_string + ":mean_strong_test_accuracy"] / scores[size_string + ":mean_test_accuracy"]) return scores
Computes loss of downstream task. This task is about strong generalization under covariate shifts. We first perform an intervention fixing a value for a factor in the whole training set. Then, we train a GBT classifier, and at test time, we consider all other values for that factor. We repeat the experiment n_experiment times, to ensure robustness. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. n_experiment: Number of repetitions of the experiment. Returns: Dictionary with scores.
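The `intervene` helper itself is not shown in the snippet, but the covariate shift it creates can be sketched directly: fix one value of a chosen factor everywhere in the training set, and keep only the other values at test time. The factor matrices here are made up for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)
# Made-up factor matrices: rows are samples, columns are discrete factors.
factors_train = rng.randint(0, 5, size=(100, 3))
factors_test = rng.randint(0, 5, size=(50, 3))

factor_index = 1
# Intervention: clamp the chosen factor to a single value in the training set...
fixed_value = factors_train[0, factor_index]
factors_train[:, factor_index] = fixed_value
# ...and evaluate only on the values never seen during training.
factors_test = factors_test[factors_test[:, factor_index] != fixed_value]
```

A classifier trained on the clamped training set must then generalize to factor values it never observed, which is exactly the "strong generalization" being scored.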
168,214
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np from six.moves import range import sklearn from sklearn import ensemble from sklearn import linear_model from sklearn import model_selection import gin.tf The provided code snippet includes necessary dependencies for implementing the `_histogram_discretize` function. Write a Python function `def _histogram_discretize(target, num_bins=gin.REQUIRED)` to solve the following problem: Discretization based on histograms. Here is the function: def _histogram_discretize(target, num_bins=gin.REQUIRED): """Discretization based on histograms.""" discretized = np.zeros_like(target) for i in range(target.shape[0]): discretized[i, :] = np.digitize(target[i, :], np.histogram( target[i, :], num_bins)[1][:-1]) return discretized
Discretization based on histograms.
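A quick self-contained run of the discretizer above, on a made-up code matrix, makes the binning behaviour concrete: `np.histogram` supplies the bin edges, the last edge is dropped, and `np.digitize` maps each value to a 1-based bin index:

```python
import numpy as np

def histogram_discretize(target, num_bins=20):
    """Per-row histogram binning, mirroring the snippet above."""
    discretized = np.zeros_like(target, dtype=np.int64)
    for i in range(target.shape[0]):
        # Bin edges without the rightmost edge, so the maximum lands in the
        # overflow index returned by np.digitize.
        edges = np.histogram(target[i, :], num_bins)[1][:-1]
        discretized[i, :] = np.digitize(target[i, :], edges)
    return discretized

codes = np.array([[0.0, 0.5, 1.0, 0.25, 0.75]])
print(histogram_discretize(codes, num_bins=4))  # [[1 3 4 2 4]]
```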
168,215
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np from six.moves import range import sklearn from sklearn import ensemble from sklearn import linear_model from sklearn import model_selection import gin.tf The provided code snippet includes necessary dependencies for implementing the `logistic_regression_cv` function. Write a Python function `def logistic_regression_cv()` to solve the following problem: Logistic regression with 5 folds cross validation. Here is the function: def logistic_regression_cv(): """Logistic regression with 5 folds cross validation.""" return linear_model.LogisticRegressionCV(Cs=10, cv=model_selection.KFold(n_splits=5))
Logistic regression with 5 folds cross validation.
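As a sanity check, the factory can be exercised on a small sklearn toy dataset; the iris data here is just a stand-in, since the library applies the predictor to latent codes:

```python
from sklearn import datasets, linear_model, model_selection

def logistic_regression_cv():
    # 10 candidate regularization strengths, selected by 5-fold cross validation.
    return linear_model.LogisticRegressionCV(
        Cs=10, cv=model_selection.KFold(n_splits=5))

x, y = datasets.load_iris(return_X_y=True)
model = logistic_regression_cv().fit(x, y)
```

`LogisticRegressionCV` refits on the full data with the best `C` after cross validation, so the returned `model` is ready for `predict` and `score`.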
168,216
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np from six.moves import range import sklearn from sklearn import ensemble from sklearn import linear_model from sklearn import model_selection import gin.tf The provided code snippet includes necessary dependencies for implementing the `gradient_boosting_classifier` function. Write a Python function `def gradient_boosting_classifier()` to solve the following problem: Default gradient boosting classifier. Here is the function: def gradient_boosting_classifier(): """Default gradient boosting classifier.""" return ensemble.GradientBoostingClassifier()
Default gradient boosting classifier.
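Fit on a tiny made-up binary problem, the default classifier easily learns a label that depends only on the first feature:

```python
import numpy as np
from sklearn import ensemble

def gradient_boosting_classifier():
    # All hyperparameters left at sklearn defaults, as in the snippet above.
    return ensemble.GradientBoostingClassifier()

# Made-up data: the label is simply the first feature.
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]] * 5)
y = np.array([0, 0, 1, 1] * 5)
model = gradient_boosting_classifier().fit(x, y)
```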
168,217
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import gin.tf def _drop_constant_dims(ys): """Returns a view of the matrix `ys` with dropped constant rows.""" ys = np.asarray(ys) if ys.ndim != 2: raise ValueError("Expecting a matrix.") variances = ys.var(axis=1) active_mask = variances > 0. return ys[active_mask, :] def scalable_disentanglement_score(gen_factors, latents, diff_quantile=0.99): """Computes IRS scores of a dataset. Assumes no noise in X and crossed generative factors (i.e. one sample per combination of gen_factors). Assumes each g_i is an equally probable realization of g_i and all g_i are independent. Args: gen_factors: Numpy array of shape (num samples, num generative factors), matrix of ground truth generative factors. latents: Numpy array of shape (num samples, num latent dimensions), matrix of latent variables. diff_quantile: Float value between 0 and 1 to decide what quantile of diffs to select (use 1.0 for the version in the paper). Returns: Dictionary with IRS scores. """ num_gen = gen_factors.shape[1] num_lat = latents.shape[1] # Compute normalizer. max_deviations = np.max(np.abs(latents - latents.mean(axis=0)), axis=0) cum_deviations = np.zeros([num_lat, num_gen]) for i in range(num_gen): unique_factors = np.unique(gen_factors[:, i], axis=0) assert unique_factors.ndim == 1 num_distinct_factors = unique_factors.shape[0] for k in range(num_distinct_factors): # Compute E[Z | g_i]. match = gen_factors[:, i] == unique_factors[k] e_loc = np.mean(latents[match, :], axis=0) # Difference of each value within that group of constant g_i to its mean. 
diffs = np.abs(latents[match, :] - e_loc) max_diffs = np.percentile(diffs, q=diff_quantile*100, axis=0) cum_deviations[:, i] += max_diffs cum_deviations[:, i] /= num_distinct_factors # Normalize value of each latent dimension with its maximal deviation. normalized_deviations = cum_deviations / max_deviations[:, np.newaxis] irs_matrix = 1.0 - normalized_deviations disentanglement_scores = irs_matrix.max(axis=1) if np.sum(max_deviations) > 0.0: avg_score = np.average(disentanglement_scores, weights=max_deviations) else: avg_score = np.mean(disentanglement_scores) parents = irs_matrix.argmax(axis=1) score_dict = {} score_dict["disentanglement_scores"] = disentanglement_scores score_dict["avg_score"] = avg_score score_dict["parents"] = parents score_dict["IRS_matrix"] = irs_matrix score_dict["max_deviations"] = max_deviations return score_dict The provided code snippet includes necessary dependencies for implementing the `compute_irs` function. Write a Python function `def compute_irs(ground_truth_data, representation_function, random_state, artifact_dir=None, diff_quantile=0.99, num_train=gin.REQUIRED, batch_size=gin.REQUIRED)` to solve the following problem: Computes the Interventional Robustness Score. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. diff_quantile: Float value between 0 and 1 to decide what quantile of diffs to select (use 1.0 for the version in the paper). num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dict with IRS and number of active dimensions. 
Here is the function: def compute_irs(ground_truth_data, representation_function, random_state, artifact_dir=None, diff_quantile=0.99, num_train=gin.REQUIRED, batch_size=gin.REQUIRED): """Computes the Interventional Robustness Score. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. diff_quantile: Float value between 0 and 1 to decide what quantile of diffs to select (use 1.0 for the version in the paper). num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dict with IRS and number of active dimensions. """ del artifact_dir logging.info("Generating training set.") mus, ys = utils.generate_batch_factor_code(ground_truth_data, representation_function, num_train, random_state, batch_size) assert mus.shape[1] == num_train ys_discrete = utils.make_discretizer(ys) active_mus = _drop_constant_dims(mus) if not active_mus.any(): irs_score = 0.0 else: irs_score = scalable_disentanglement_score(ys_discrete.T, active_mus.T, diff_quantile)["avg_score"] score_dict = {} score_dict["IRS"] = irs_score score_dict["num_active_dims"] = np.sum(active_mus) return score_dict
Computes the Interventional Robustness Score. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. diff_quantile: Float value between 0 and 1 to decide what quantile of diffs to select (use 1.0 for the version in the paper). num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dict with IRS and number of active dimensions.
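Stripped of bookkeeping, the IRS computation compares each latent dimension's within-group deviation (holding a generative factor fixed) against its global maximal deviation. A toy numpy sketch with one factor and two latent dimensions, all values made up:

```python
import numpy as np

# Dim 0 tracks the factor (small deviation within a fixed factor value);
# dim 1 is noise that ignores the factor.
gen = np.array([0, 0, 0, 1, 1, 1])
latents = np.array([[0.0, 0.3], [0.1, -0.2], [-0.1, 0.1],
                    [1.0, 0.2], [0.9, -0.3], [1.1, 0.0]])

# Global normalizer: maximal deviation of each dimension from its mean.
max_dev = np.max(np.abs(latents - latents.mean(axis=0)), axis=0)

# Average (near-)maximal deviation within each group of constant factor value,
# using the 99th percentile of diffs as in the snippet above.
cum = np.zeros(latents.shape[1])
for v in np.unique(gen):
    group = latents[gen == v]
    cum += np.percentile(np.abs(group - group.mean(axis=0)), 99, axis=0)
cum /= len(np.unique(gen))

irs = 1.0 - cum / max_dev  # close to 1 means robust to the intervention
```

Dimension 0 barely moves once the factor is fixed, so its score stays high; dimension 1 fluctuates as much within a group as globally, so its score collapses.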
168,218
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range from sklearn import linear_model from sklearn import metrics from sklearn import preprocessing import gin.tf def explicitness_per_factor(mus_train, y_train, mus_test, y_test): """Compute explicitness score for a factor as ROC-AUC of a classifier. Args: mus_train: Representation for training, (num_codes, num_points)-np array. y_train: Ground truth factors for training, (num_factors, num_points)-np array. mus_test: Representation for testing, (num_codes, num_points)-np array. y_test: Ground truth factors for testing, (num_factors, num_points)-np array. Returns: roc_train: ROC-AUC score of the classifier on training data. roc_test: ROC-AUC score of the classifier on testing data. """ x_train = np.transpose(mus_train) x_test = np.transpose(mus_test) clf = linear_model.LogisticRegression().fit(x_train, y_train) y_pred_train = clf.predict_proba(x_train) y_pred_test = clf.predict_proba(x_test) mlb = preprocessing.MultiLabelBinarizer() roc_train = metrics.roc_auc_score( mlb.fit_transform(np.expand_dims(y_train, 1)), y_pred_train) roc_test = metrics.roc_auc_score( mlb.fit_transform(np.expand_dims(y_test, 1)), y_pred_test) return roc_train, roc_test def modularity(mutual_information): """Computes the modularity from mutual information.""" # Mutual information has shape [num_codes, num_factors]. squared_mi = np.square(mutual_information) max_squared_mi = np.max(squared_mi, axis=1) numerator = np.sum(squared_mi, axis=1) - max_squared_mi denominator = max_squared_mi * (squared_mi.shape[1] -1.) delta = numerator / denominator modularity_score = 1. - delta index = (max_squared_mi == 0.) modularity_score[index] = 0. return np.mean(modularity_score) The provided code snippet includes necessary dependencies for implementing the `compute_modularity_explicitness` function. 
Write a Python function `def compute_modularity_explicitness(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes the modularity metric according to Sec 3. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with average modularity score and average explicitness (train and test). Here is the function: def compute_modularity_explicitness(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16): """Computes the modularity metric according to Sec 3. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with average modularity score and average explicitness (train and test). 
""" del artifact_dir scores = {} mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) mus_test, ys_test = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_test, random_state, batch_size) discretized_mus = utils.make_discretizer(mus_train) mutual_information = utils.discrete_mutual_info(discretized_mus, ys_train) # Mutual information should have shape [num_codes, num_factors]. assert mutual_information.shape[0] == mus_train.shape[0] assert mutual_information.shape[1] == ys_train.shape[0] scores["modularity_score"] = modularity(mutual_information) explicitness_score_train = np.zeros([ys_train.shape[0], 1]) explicitness_score_test = np.zeros([ys_test.shape[0], 1]) mus_train_norm, mean_mus, stddev_mus = utils.normalize_data(mus_train) mus_test_norm, _, _ = utils.normalize_data(mus_test, mean_mus, stddev_mus) for i in range(ys_train.shape[0]): explicitness_score_train[i], explicitness_score_test[i] = \ explicitness_per_factor(mus_train_norm, ys_train[i, :], mus_test_norm, ys_test[i, :]) scores["explicitness_score_train"] = np.mean(explicitness_score_train) scores["explicitness_score_test"] = np.mean(explicitness_score_test) return scores
Computes the modularity metric according to Sec 3. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with average modularity score and average explicitness (train and test).
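The `modularity` helper above has two easy-to-check extremes, restated here as a self-contained demo: a perfectly modular mutual-information matrix (each code informative about exactly one factor) scores 1, and a maximally mixed one scores 0:

```python
import numpy as np

def modularity(mutual_information):
    """Same computation as the snippet above; input is [num_codes, num_factors]."""
    squared_mi = np.square(mutual_information)
    max_squared_mi = np.max(squared_mi, axis=1)
    numerator = np.sum(squared_mi, axis=1) - max_squared_mi
    denominator = max_squared_mi * (squared_mi.shape[1] - 1.)
    modularity_score = 1. - numerator / denominator
    modularity_score[max_squared_mi == 0.] = 0.
    return np.mean(modularity_score)

perfect = np.eye(3)      # each code informative about exactly one factor
mixed = np.ones((3, 3))  # every code equally informative about every factor
print(modularity(perfect), modularity(mixed))  # 1.0 0.0
```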
168,219
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range import gin.tf def compute_reduced_representation(mus_train, ys_train, mus_test, ys_test, factor_of_interest, correlation_measure=gin.REQUIRED): """Computes a reduced representation of the data. The most informative factor with respect to the labels is deleted. Args: mus_train: latent means of the training batch. ys_train: labels of the training batch. mus_test: latent means of the test batch. ys_test: labels of the test batch. factor_of_interest: index of the factor of interest. correlation_measure: measure of correlation. Returns: Tuple with reduced representations for the training and test set. """ importance_matrix = correlation_measure(mus_train, ys_train, mus_test, ys_test) factor_of_interest_importance = importance_matrix[:, factor_of_interest] factor_to_remove_index = np.argmax(factor_of_interest_importance) # Remove the factor of variation above from the representation reduced_representation_train = np.delete( mus_train.copy(), factor_to_remove_index, axis=0) reduced_representation_test = np.delete( mus_test.copy(), factor_to_remove_index, axis=0) return reduced_representation_train, reduced_representation_test def compute_predictive_accuracy(x_train, y_train, x_test, y_test, predictor_fn): """Computes average predictive accuracy for train and test set. Args: x_train: data x of the training batch. y_train: labels y of the training batch. x_test: data x of the test batch. y_test: labels y of the test batch. predictor_fn: function that is used to fit and predict the labels. Returns: Tuple with lists of training and test set accuracies.
""" num_factors = y_train.shape[0] train_acc = [] test_acc = [] # Loop on the generative factors to predict for i in range(num_factors): model = predictor_fn() model.fit(x_train, y_train[i, :]) train_acc.append(np.mean(model.predict(x_train) == y_train[i, :])) test_acc.append(np.mean(model.predict(x_test) == y_test[i, :])) return train_acc, test_acc The provided code snippet includes necessary dependencies for implementing the `compute_reduced_downstream_task` function. Write a Python function `def compute_reduced_downstream_task(ground_truth_data, representation_function, random_state, artifact_dir=None, num_factors_to_remove=gin.REQUIRED, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes loss of a reduced downstream task. Measure the information leakage in each latent component after removing the k ("factors_to_remove") most informative features for the prediction task. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_factors_to_remove: Number of factors to remove from the latent representation. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores. Here is the function: def compute_reduced_downstream_task(ground_truth_data, representation_function, random_state, artifact_dir=None, num_factors_to_remove=gin.REQUIRED, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16): """Computes loss of a reduced downstream task. Measure the information leakage in each latent component after removing the k ("factors_to_remove") most informative features for the prediction task. 
Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_factors_to_remove: Number of factors to remove from the latent representation. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores. """ del artifact_dir scores = {} # Loop on different sizes of the training 'batch', as specified with gin. for train_size in num_train: size_string = str(train_size) mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, train_size, random_state, batch_size) mus_test, ys_test = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_test, random_state, batch_size) # Create variables for aggregated scores. reduced_factor_train_scores = [] other_factors_train_scores = [] reduced_factor_test_scores = [] other_factors_test_scores = [] # Compute the reduced representation and test it for each factor of # variation. for factor_of_interest in range(ground_truth_data.num_factors): # Copy the training data and eliminate the k most informative factors. reduced_mus_train = mus_train.copy() reduced_mus_test = mus_test.copy() for _ in range(num_factors_to_remove): reduced_mus_train, reduced_mus_test =\ compute_reduced_representation(reduced_mus_train, ys_train, reduced_mus_test, ys_test, factor_of_interest) predictor_model = utils.make_predictor_fn() train_acc, test_acc = compute_predictive_accuracy( np.transpose(reduced_mus_train), ys_train, np.transpose(reduced_mus_test), ys_test, predictor_model) # Save scores for reduced factor. 
scores[size_string + ":reduced_factor_{}:mean_train_accuracy_reduced_factor".format( factor_of_interest)] = train_acc[factor_of_interest] scores[size_string + ":reduced_factor_{}:mean_test_accuracy_reduced_factor".format( factor_of_interest)] = test_acc[factor_of_interest] reduced_factor_train_scores.append(train_acc[factor_of_interest]) reduced_factor_test_scores.append(test_acc[factor_of_interest]) # Save the scores (accuracies) in the score dictionary. local_other_factors_train_scores = [] local_other_factors_test_scores = [] for i in range(len(train_acc)): scores[size_string + ":reduced_factor_{}:mean_train_accuracy_factor_{}".format( factor_of_interest, i)] = train_acc[i] scores[size_string + ":reduced_factor_{}:mean_test_accuracy_factor_{}".format( factor_of_interest, i)] = test_acc[i] if i != factor_of_interest: local_other_factors_train_scores.append(train_acc[i]) local_other_factors_test_scores.append(test_acc[i]) # Save mean score for non-reduced factors. scores[size_string + ":reduced_factor_{}:mean_train_accuracy_non_reduced_factor".format( factor_of_interest)] = np.mean( local_other_factors_train_scores) scores[size_string + ":reduced_factor_{}:mean_test_accuracy_non_reduced_factor".format( factor_of_interest)] = np.mean(local_other_factors_test_scores) other_factors_train_scores.append( np.mean(local_other_factors_train_scores)) other_factors_test_scores.append(np.mean(local_other_factors_test_scores)) # Compute the aggregate scores. scores[size_string + ":mean_train_accuracy_reduced_factor"] = np.mean( reduced_factor_train_scores) scores[size_string + ":mean_test_accuracy_reduced_factor"] = np.mean( reduced_factor_test_scores) scores[size_string + ":mean_train_accuracy_other_factors"] = np.mean( other_factors_train_scores) scores[size_string + ":mean_test_accuracy_other_factors"] = np.mean( other_factors_test_scores) return scores
Computes loss of a reduced downstream task. Measure the information leakage in each latent component after removing the k ("factors_to_remove") most informative features for the prediction task. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_factors_to_remove: Number of factors to remove from the latent representation. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores.
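The core pruning step (delete the latent dimension most predictive of the factor of interest) reduces to an `argmax` plus `np.delete`. A sketch with a made-up importance matrix:

```python
import numpy as np

# Made-up importance matrix: rows = latent codes, columns = factors.
importance_matrix = np.array([[0.7, 0.1],
                              [0.2, 0.8],
                              [0.1, 0.1]])
factor_of_interest = 0

# Dimension most informative about the factor of interest.
code_to_remove = np.argmax(importance_matrix[:, factor_of_interest])

# Delete that row from a (num_codes, num_points) representation matrix.
mus_train = np.random.RandomState(0).rand(3, 10)
reduced_mus_train = np.delete(mus_train, code_to_remove, axis=0)
```

Repeating this step k times (as the `num_factors_to_remove` loop does) strips the k most informative dimensions, and whatever accuracy remains measures information leakage into the surviving dimensions.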
168,220
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range import gin.tf The provided code snippet includes necessary dependencies for implementing the `compute_factorwise_dci` function. Write a Python function `def compute_factorwise_dci(mus_train, ys_train, mus_test, ys_test)` to solve the following problem: Computes the DCI importance matrix of the attributes. Args: mus_train: latent means of the training batch. ys_train: labels of the training batch. mus_test: latent means of the test batch. ys_test: labels of the test batch. Returns: Matrix with importance scores. Here is the function: def compute_factorwise_dci(mus_train, ys_train, mus_test, ys_test): """Computes the DCI importance matrix of the attributes. Args: mus_train: latent means of the training batch. ys_train: labels of the training batch. mus_test: latent means of the test batch. ys_test: labels of the test batch. Returns: Matrix with importance scores. """ importance_matrix, _, _ = dci.compute_importance_gbt(mus_train, ys_train, mus_test, ys_test) assert importance_matrix.shape[0] == mus_train.shape[0] assert importance_matrix.shape[1] == ys_train.shape[0] return importance_matrix
Computes the DCI importance matrix of the attributes. Args: mus_train: latent means of the training batch. ys_train: labels of the training batch. mus_test: latent means of the test batch. ys_test: labels of the test batch. Returns: Matrix with importance scores.
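`dci.compute_importance_gbt` is not reproduced in the snippet; the rough idea, sketched here under the assumption that it trains one gradient-boosting model per factor and stacks the feature importances columnwise, is:

```python
import numpy as np
from sklearn import ensemble

def importance_gbt_sketch(mus_train, ys_train):
    """Hypothetical stand-in: one GBT per factor, importances stacked columnwise."""
    num_codes, num_factors = mus_train.shape[0], ys_train.shape[0]
    importance_matrix = np.zeros((num_codes, num_factors))
    for i in range(num_factors):
        model = ensemble.GradientBoostingClassifier()
        model.fit(mus_train.T, ys_train[i, :])  # (num_points, num_codes) inputs
        importance_matrix[:, i] = np.abs(model.feature_importances_)
    return importance_matrix

rng = np.random.RandomState(0)
mus = rng.rand(4, 60)                # (num_codes, num_points)
ys = (mus[:2, :] > 0.5).astype(int)  # (num_factors, num_points), made-up labels
matrix = importance_gbt_sketch(mus, ys)
```

Since each made-up factor is a threshold of one code dimension, the corresponding row of the matrix dominates its column, matching the shape assertions in the snippet.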
168,221
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging import numpy as np from six.moves import range from sklearn import linear_model import gin.tf def _generate_training_batch(ground_truth_data, representation_function, batch_size, num_points, random_state): """Sample a set of training samples based on a batch of ground-truth data. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. batch_size: Number of points to be used to compute the training_sample. num_points: Number of points to be sampled for training set. random_state: Numpy random state used for randomness. Returns: points: (num_points, dim_representation)-sized numpy array with training set features. labels: (num_points)-sized numpy array with training set labels. """ points = None # Dimensionality depends on the representation function. labels = np.zeros(num_points, dtype=np.int64) for i in range(num_points): labels[i], feature_vector = _generate_training_sample( ground_truth_data, representation_function, batch_size, random_state) if points is None: points = np.zeros((num_points, feature_vector.shape[0])) points[i, :] = feature_vector return points, labels The provided code snippet includes necessary dependencies for implementing the `compute_beta_vae_sklearn` function. Write a Python function `def compute_beta_vae_sklearn(ground_truth_data, representation_function, random_state, artifact_dir=None, batch_size=gin.REQUIRED, num_train=gin.REQUIRED, num_eval=gin.REQUIRED)` to solve the following problem: Computes the BetaVAE disentanglement metric using scikit-learn. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. 
random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. batch_size: Number of points to be used to compute the training_sample. num_train: Number of points used for training. num_eval: Number of points used for evaluation. Returns: Dictionary with scores: train_accuracy: Accuracy on training set. eval_accuracy: Accuracy on evaluation set. Here is the function: def compute_beta_vae_sklearn(ground_truth_data, representation_function, random_state, artifact_dir=None, batch_size=gin.REQUIRED, num_train=gin.REQUIRED, num_eval=gin.REQUIRED): """Computes the BetaVAE disentanglement metric using scikit-learn. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. batch_size: Number of points to be used to compute the training_sample. num_train: Number of points used for training. num_eval: Number of points used for evaluation. Returns: Dictionary with scores: train_accuracy: Accuracy on training set. eval_accuracy: Accuracy on evaluation set. 
""" del artifact_dir logging.info("Generating training set.") train_points, train_labels = _generate_training_batch( ground_truth_data, representation_function, batch_size, num_train, random_state) logging.info("Training sklearn model.") model = linear_model.LogisticRegression(random_state=random_state) model.fit(train_points, train_labels) logging.info("Evaluate training set accuracy.") train_accuracy = model.score(train_points, train_labels) train_accuracy = np.mean(model.predict(train_points) == train_labels) logging.info("Training set accuracy: %.2g", train_accuracy) logging.info("Generating evaluation set.") eval_points, eval_labels = _generate_training_batch( ground_truth_data, representation_function, batch_size, num_eval, random_state) logging.info("Evaluate evaluation set accuracy.") eval_accuracy = model.score(eval_points, eval_labels) logging.info("Evaluation set accuracy: %.2g", eval_accuracy) scores_dict = {} scores_dict["train_accuracy"] = train_accuracy scores_dict["eval_accuracy"] = eval_accuracy return scores_dict
Computes the BetaVAE disentanglement metric using scikit-learn. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. batch_size: Number of points to be used to compute the training_sample. num_train: Number of points used for training. num_eval: Number of points used for evaluation. Returns: Dictionary with scores: train_accuracy: Accuracy on training set. eval_accuracy: Accuracy on evaluation set.
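The core of `compute_beta_vae_sklearn` above is a single logistic regression fit on the generated feature vectors. A minimal standalone sketch of that step, with synthetic points standing in for the output of `_generate_training_batch` (the data here is illustrative, not the metric's real features):

```python
import numpy as np
from sklearn import linear_model

rng = np.random.RandomState(0)
# Hypothetical stand-in for the (points, labels) pairs the metric generates:
# each row is a feature vector, labeled by the index of the fixed factor.
num_points, dim = 200, 4
labels = rng.randint(0, 2, size=num_points)
points = rng.randn(num_points, dim) + 2.0 * labels[:, None]

model = linear_model.LogisticRegression(random_state=0)
model.fit(points, labels)

# model.score already returns the mean accuracy, so no separate
# predict-and-compare pass is needed.
train_accuracy = model.score(points, labels)
assert np.isclose(train_accuracy,
                  np.mean(model.predict(points) == labels))
```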
168,222
from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import gin.tf def _compute_mig(mus_train, ys_train): """Computes score based on both training and testing codes and factors.""" score_dict = {} discretized_mus = utils.make_discretizer(mus_train) m = utils.discrete_mutual_info(discretized_mus, ys_train) assert m.shape[0] == mus_train.shape[0] assert m.shape[1] == ys_train.shape[0] # m is [num_latents, num_factors] entropy = utils.discrete_entropy(ys_train) sorted_m = np.sort(m, axis=0)[::-1] score_dict["discrete_mig"] = np.mean( np.divide(sorted_m[0, :] - sorted_m[1, :], entropy[:])) return score_dict The provided code snippet includes necessary dependencies for implementing the `compute_mig` function. Write a Python function `def compute_mig(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes the mutual information gap. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dict with average mutual information gap. Here is the function: def compute_mig(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, batch_size=16): """Computes the mutual information gap. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness.
artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dict with average mutual information gap. """ del artifact_dir logging.info("Generating training set.") mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) assert mus_train.shape[1] == num_train return _compute_mig(mus_train, ys_train)
Computes the mutual information gap. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dict with average mutual information gap.
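The `discrete_mig` computed by `_compute_mig` above is the mean, over factors, of the gap between the two largest mutual-information values in each column, normalized by the factor's discrete entropy. A toy check with hand-picked (assumed) values:

```python
import numpy as np

# m[i, j] = mutual information between latent i and factor j (assumed values).
m = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.1, 0.1]])
entropy = np.array([1.0, 1.0])  # discrete entropy of each factor (assumed)

sorted_m = np.sort(m, axis=0)[::-1]  # each column in descending order
mig = np.mean((sorted_m[0, :] - sorted_m[1, :]) / entropy)
# Factor 0 gap: 0.9 - 0.2; factor 1 gap: 0.8 - 0.1; mean gap = 0.7.
```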
168,223
from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import gin.tf def _compute_mig(mus_train, ys_train): """Computes score based on both training and testing codes and factors.""" score_dict = {} discretized_mus = utils.make_discretizer(mus_train) m = utils.discrete_mutual_info(discretized_mus, ys_train) assert m.shape[0] == mus_train.shape[0] assert m.shape[1] == ys_train.shape[0] # m is [num_latents, num_factors] entropy = utils.discrete_entropy(ys_train) sorted_m = np.sort(m, axis=0)[::-1] score_dict["discrete_mig"] = np.mean( np.divide(sorted_m[0, :] - sorted_m[1, :], entropy[:])) return score_dict The provided code snippet includes necessary dependencies for implementing the `compute_mig_on_fixed_data` function. Write a Python function `def compute_mig_on_fixed_data(observations, labels, representation_function, batch_size=100)` to solve the following problem: Computes the MIG scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. batch_size: Batch size used to compute the representation. Returns: MIG computed on the provided observations and labels. Here is the function: def compute_mig_on_fixed_data(observations, labels, representation_function, batch_size=100): """Computes the MIG scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations.
representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. batch_size: Batch size used to compute the representation. Returns: MIG computed on the provided observations and labels. """ mus = utils.obtain_representation(observations, representation_function, batch_size) assert labels.shape[1] == observations.shape[0], "Wrong labels shape." assert mus.shape[1] == observations.shape[0], "Wrong representation shape." return _compute_mig(mus, labels)
Computes the MIG scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. batch_size: Batch size used to compute the representation. Returns: MIG computed on the provided observations and labels.
168,224
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range import gin.tf def _compute_loss(x_train, y_train, x_test, y_test, predictor_fn): """Compute average accuracy for train and test set.""" num_factors = y_train.shape[0] train_loss = [] test_loss = [] for i in range(num_factors): model = predictor_fn() model.fit(x_train, y_train[i, :]) train_loss.append(np.mean(model.predict(x_train) == y_train[i, :])) test_loss.append(np.mean(model.predict(x_test) == y_test[i, :])) return train_loss, test_loss The provided code snippet includes necessary dependencies for implementing the `compute_downstream_task` function. Write a Python function `def compute_downstream_task(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes loss of downstream task. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores. Here is the function: def compute_downstream_task(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16): """Computes loss of downstream task. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. 
random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores. """ del artifact_dir scores = {} for train_size in num_train: mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, train_size, random_state, batch_size) mus_test, ys_test = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_test, random_state, batch_size) predictor_model = utils.make_predictor_fn() train_err, test_err = _compute_loss( np.transpose(mus_train), ys_train, np.transpose(mus_test), ys_test, predictor_model) size_string = str(train_size) scores[size_string + ":mean_train_accuracy"] = np.mean(train_err) scores[size_string + ":mean_test_accuracy"] = np.mean(test_err) scores[size_string + ":min_train_accuracy"] = np.min(train_err) scores[size_string + ":min_test_accuracy"] = np.min(test_err) for i in range(len(train_err)): scores[size_string + ":train_accuracy_factor_{}".format(i)] = train_err[i] scores[size_string + ":test_accuracy_factor_{}".format(i)] = test_err[i] return scores
Computes loss of downstream task. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores.
168,225
from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import scipy import gin.tf def gaussian_total_correlation(cov): """Computes the total correlation of a Gaussian with covariance matrix cov. We use that the total correlation is the KL divergence between the Gaussian and the product of its marginals. By design, the means of these two Gaussians are zero and the covariance matrix of the second Gaussian is equal to the covariance matrix of the first Gaussian with off-diagonal entries set to zero. Args: cov: Numpy array with covariance matrix. Returns: Scalar with total correlation. """ return 0.5 * (np.sum(np.log(np.diag(cov))) - np.linalg.slogdet(cov)[1]) def gaussian_wasserstein_correlation(cov): """Wasserstein L2 distance between Gaussian and the product of its marginals. Args: cov: Numpy array with covariance matrix. Returns: Scalar with score. """ sqrtm = scipy.linalg.sqrtm(cov * np.expand_dims(np.diag(cov), axis=1)) return 2 * np.trace(cov) - 2 * np.trace(sqrtm) The provided code snippet includes necessary dependencies for implementing the `unsupervised_metrics` function. Write a Python function `def unsupervised_metrics(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes unsupervised scores based on covariance and mutual information. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dictionary with scores. 
Here is the function: def unsupervised_metrics(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, batch_size=16): """Computes unsupervised scores based on covariance and mutual information. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dictionary with scores. """ del artifact_dir scores = {} logging.info("Generating training set.") mus_train, _ = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) num_codes = mus_train.shape[0] cov_mus = np.cov(mus_train) assert num_codes == cov_mus.shape[0] # Gaussian total correlation. scores["gaussian_total_correlation"] = gaussian_total_correlation(cov_mus) # Gaussian Wasserstein correlation. scores["gaussian_wasserstein_correlation"] = gaussian_wasserstein_correlation( cov_mus) scores["gaussian_wasserstein_correlation_norm"] = ( scores["gaussian_wasserstein_correlation"] / np.sum(np.diag(cov_mus))) # Compute average mutual information between different factors. mus_discrete = utils.make_discretizer(mus_train) mutual_info_matrix = utils.discrete_mutual_info(mus_discrete, mus_discrete) np.fill_diagonal(mutual_info_matrix, 0) mutual_info_score = np.sum(mutual_info_matrix) / (num_codes**2 - num_codes) scores["mutual_info_score"] = mutual_info_score return scores
Computes unsupervised scores based on covariance and mutual information. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. batch_size: Batch size for sampling. Returns: Dictionary with scores.
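The `gaussian_total_correlation` helper above evaluates KL(N(0, cov) || N(0, diag(cov))) in closed form. A quick sanity check of that formula, with the helper redefined locally:

```python
import numpy as np

def gaussian_total_correlation(cov):
    # KL between N(0, cov) and the product of its marginals N(0, diag(cov)).
    return 0.5 * (np.sum(np.log(np.diag(cov))) - np.linalg.slogdet(cov)[1])

# Independent dimensions (diagonal covariance) give zero total correlation.
assert np.isclose(gaussian_total_correlation(np.eye(3)), 0.0)

# Correlated dimensions give strictly positive total correlation.
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
tc = gaussian_total_correlation(cov)
assert tc > 0.0
```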
168,226
from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import scipy import gin.tf The provided code snippet includes necessary dependencies for implementing the `kl_gaussians_numerically_unstable` function. Write a Python function `def kl_gaussians_numerically_unstable(mean_0, cov_0, mean_1, cov_1, k)` to solve the following problem: Unstable version used for testing gaussian_total_correlation. Here is the function: def kl_gaussians_numerically_unstable(mean_0, cov_0, mean_1, cov_1, k): """Unstable version used for testing gaussian_total_correlation.""" det_0 = np.linalg.det(cov_0) det_1 = np.linalg.det(cov_1) inv_1 = np.linalg.inv(cov_1) return 0.5 * ( np.trace(np.matmul(inv_1, cov_0)) + np.dot(mean_1 - mean_0, np.dot(inv_1, mean_1 - mean_0)) - k + np.log(det_1 / det_0))
Unstable version used for testing gaussian_total_correlation.
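Per its docstring, the unstable formula exists only to test `gaussian_total_correlation`: with zero means and cov_1 = diag(cov_0), the general Gaussian KL divergence reduces to the total correlation. A sketch of that consistency check (both helpers redefined locally):

```python
import numpy as np

def kl_gaussians_numerically_unstable(mean_0, cov_0, mean_1, cov_1, k):
    # Textbook KL between two k-dimensional Gaussians; unstable because it
    # forms determinants and inverses explicitly.
    det_0 = np.linalg.det(cov_0)
    det_1 = np.linalg.det(cov_1)
    inv_1 = np.linalg.inv(cov_1)
    diff = mean_1 - mean_0
    return 0.5 * (np.trace(inv_1 @ cov_0) + diff @ inv_1 @ diff
                  - k + np.log(det_1 / det_0))

cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])
stable_tc = 0.5 * (np.sum(np.log(np.diag(cov))) - np.linalg.slogdet(cov)[1])
unstable_tc = kl_gaussians_numerically_unstable(
    np.zeros(2), cov, np.zeros(2), np.diag(np.diag(cov)), k=2)
```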
168,227
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf def unified_scores(mus_train, ys_train, mus_test, ys_test, matrix_fns, artifact_dir=None, factor_names=None): """Computes unified scores.""" scores = {} kws = {} for matrix_fn in matrix_fns: # Matrix should have shape [num_codes, num_factors]. matrix = matrix_fn(mus_train, ys_train, mus_test, ys_test) matrix_name = matrix_fn.__name__ if artifact_dir is not None: visualize_scores.heat_square(matrix.copy(), artifact_dir, matrix_name, "Latent codes", "Factors of Variation", factor_names=factor_names) visualize_scores.plot_recovery_vs_independent(matrix.copy().T, artifact_dir, matrix_name+"_pr") merge_points = dendrogram.dendrogram_plot(matrix.copy().T, os.path.join( artifact_dir, matrix_name+"_dendrogram"), factor_names) kws[matrix_name] = merge_points results_dict = pr_curves_values(matrix) if matrix_name in kws: kws[matrix_name].update(results_dict) else: kws[matrix_name] = results_dict for aggregation_fn in AGGREGATION_FN: results_dict = aggregation_fn(matrix, ys_train) kws[matrix_name].update(results_dict) scores = results.namespaced_dict(scores, **kws) return scores The provided code snippet includes necessary dependencies for implementing the `compute_unified_scores` function. 
Write a Python function `def compute_unified_scores(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, matrix_fns=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes the unified disentanglement scores. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. matrix_fns: List of functions to relate factors of variations and codes. batch_size: Batch size for sampling. Returns: Unified scores. Here is the function: def compute_unified_scores(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, matrix_fns=gin.REQUIRED, batch_size=16): """Computes the unified disentanglement scores. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. matrix_fns: List of functions to relate factors of variations and codes. batch_size: Batch size for sampling. Returns: Unified scores. """ logging.info("Generating training set.") # mus_train are of shape [num_codes, num_train], while ys_train are of shape # [num_factors, num_train]. 
mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) assert mus_train.shape[1] == num_train assert ys_train.shape[1] == num_train mus_test, ys_test = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_test, random_state, batch_size) return unified_scores(mus_train, ys_train, mus_test, ys_test, matrix_fns, artifact_dir, ground_truth_data.factor_names)
Computes the unified disentanglement scores. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. matrix_fns: List of functions to relate factors of variations and codes. batch_size: Batch size for sampling. Returns: Unified scores.
168,228
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf def unified_scores(mus_train, ys_train, mus_test, ys_test, matrix_fns, artifact_dir=None, factor_names=None): """Computes unified scores.""" scores = {} kws = {} for matrix_fn in matrix_fns: # Matrix should have shape [num_codes, num_factors]. matrix = matrix_fn(mus_train, ys_train, mus_test, ys_test) matrix_name = matrix_fn.__name__ if artifact_dir is not None: visualize_scores.heat_square(matrix.copy(), artifact_dir, matrix_name, "Latent codes", "Factors of Variation", factor_names=factor_names) visualize_scores.plot_recovery_vs_independent(matrix.copy().T, artifact_dir, matrix_name+"_pr") merge_points = dendrogram.dendrogram_plot(matrix.copy().T, os.path.join( artifact_dir, matrix_name+"_dendrogram"), factor_names) kws[matrix_name] = merge_points results_dict = pr_curves_values(matrix) if matrix_name in kws: kws[matrix_name].update(results_dict) else: kws[matrix_name] = results_dict for aggregation_fn in AGGREGATION_FN: results_dict = aggregation_fn(matrix, ys_train) kws[matrix_name].update(results_dict) scores = results.namespaced_dict(scores, **kws) return scores The provided code snippet includes necessary dependencies for implementing the `compute_unified_score_on_fixed_data` function. 
Write a Python function `def compute_unified_score_on_fixed_data( observations, labels, representation_function, train_percentage=gin.REQUIRED, matrix_fns=gin.REQUIRED, batch_size=100)` to solve the following problem: Computes the unified scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. matrix_fns: List of functions to relate factors of variations and codes. batch_size: Batch size used to compute the representation. Returns: Unified scores. Here is the function: def compute_unified_score_on_fixed_data( observations, labels, representation_function, train_percentage=gin.REQUIRED, matrix_fns=gin.REQUIRED, batch_size=100): """Computes the unified scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. matrix_fns: List of functions to relate factors of variations and codes. batch_size: Batch size used to compute the representation. Returns: Unified scores. """ mus = utils.obtain_representation(observations, representation_function, batch_size) assert labels.shape[1] == observations.shape[0], "Wrong labels shape." assert mus.shape[1] == observations.shape[0], "Wrong representation shape." 
mus_train, mus_test = utils.split_train_test( mus, train_percentage) ys_train, ys_test = utils.split_train_test( labels, train_percentage) return unified_scores(mus_train, ys_train, mus_test, ys_test, matrix_fns)
Computes the unified scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. matrix_fns: List of functions to relate factors of variations and codes. batch_size: Batch size used to compute the representation. Returns: Unified scores.
168,229
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf The provided code snippet includes necessary dependencies for implementing the `importance_gbt_matrix` function. Write a Python function `def importance_gbt_matrix(mus_train, ys_train, mus_test, ys_test)` to solve the following problem: Computes the importance matrix of the DCI Disentanglement score. The importance matrix is based on the importance of each code to predict a factor of variation with GBT. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Batch of learned representations to be used for testing. ys_test: Observed factors of variation corresponding to the representations in mus_test. Returns: Importance matrix as computed for the DCI Disentanglement score. Here is the function: def importance_gbt_matrix(mus_train, ys_train, mus_test, ys_test): """Computes the importance matrix of the DCI Disentanglement score. The importance matrix is based on the importance of each code to predict a factor of variation with GBT. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Batch of learned representations to be used for testing. ys_test: Observed factors of variation corresponding to the representations in mus_test.
Returns: Importance matrix as computed for the DCI Disentanglement score. """ matrix_importance_gbt, _, _ = dci.compute_importance_gbt( mus_train, ys_train, mus_test, ys_test) return matrix_importance_gbt
Computes the importance matrix of the DCI Disentanglement score. The importance matrix is based on the importance of each code to predict a factor of variation with GBT. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Batch of learned representations to be used for testing. ys_test: Observed factors of variation corresponding to the representations in mus_test. Returns: Importance matrix as computed for the DCI Disentanglement score.
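`dci.compute_importance_gbt` builds each column of the importance matrix from the feature importances of a gradient-boosted tree trained to predict one factor from all codes. A minimal sketch of one such column on synthetic data (the variable names and data here are illustrative, not the library's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.RandomState(0)
num_samples, num_codes = 300, 3
codes = rng.randn(num_samples, num_codes)
factor = (codes[:, 1] > 0).astype(int)  # this factor depends only on code 1

model = GradientBoostingClassifier()
model.fit(codes, factor)
# This vector is one column of the [num_codes, num_factors] importance matrix.
importance_column = model.feature_importances_
assert importance_column.argmax() == 1
```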
168,230
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf The provided code snippet includes necessary dependencies for implementing the `mutual_information_matrix` function. Write a Python function `def mutual_information_matrix(mus_train, ys_train, mus_test, ys_test)` to solve the following problem: Computes the mutual information matrix between codes and factors. The mutual information matrix is used to compute the MIG and Modularity scores. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Unused. ys_test: Unused. Returns: Mutual information matrix as computed for the MIG and Modularity scores. Here is the function: def mutual_information_matrix(mus_train, ys_train, mus_test, ys_test): """Computes the mutual information matrix between codes and factors. The mutual information matrix is used to compute the MIG and Modularity scores. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Unused. ys_test: Unused. Returns: Mutual information matrix as computed for the MIG and Modularity scores. """ del mus_test, ys_test discretized_mus = utils.make_discretizer(mus_train) m = utils.discrete_mutual_info(discretized_mus, ys_train) return m
Computes the mutual information matrix between codes and factors. The mutual information matrix is used to compute the MIG and Modularity scores. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Unused. ys_test: Unused. Returns: Mutual information matrix as computed for the MIG and Modularity scores.
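`utils.discrete_mutual_info` pairs every discretized code with every factor. The same [num_codes, num_factors] matrix can be sketched directly with scikit-learn's `mutual_info_score` (synthetic data, illustrative only):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.RandomState(0)
num_codes, num_factors, n = 3, 2, 1000
codes = rng.randint(0, 5, size=(num_codes, n))    # discretized latent codes
factors = np.vstack([codes[0] % 2,                # factor 0 is a function of code 0
                     rng.randint(0, 2, size=n)])  # factor 1 is independent noise

m = np.zeros((num_codes, num_factors))
for i in range(num_codes):
    for j in range(num_factors):
        m[i, j] = mutual_info_score(codes[i], factors[j])

# Code 0 carries the most information about factor 0.
assert m[:, 0].argmax() == 0
```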
168,231
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf The provided code snippet includes necessary dependencies for implementing the `accuracy_svm_matrix` function. Write a Python function `def accuracy_svm_matrix(mus_train, ys_train, mus_test, ys_test)` to solve the following problem: Prediction accuracy of an SVM predicting a factor from a single code. The matrix of accuracies is used to compute the SAP score. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Batch of learned representations to be used for testing. ys_test: Observed factors of variation corresponding to the representations in mus_test. Returns: Accuracy matrix as computed for the SAP score. Here is the function: def accuracy_svm_matrix(mus_train, ys_train, mus_test, ys_test): """Prediction accuracy of an SVM predicting a factor from a single code. The matrix of accuracies is used to compute the SAP score. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Batch of learned representations to be used for testing. ys_test: Observed factors of variation corresponding to the representations in mus_test. Returns: Accuracy matrix as computed for the SAP score.
""" return sap_score.compute_score_matrix( mus_train, ys_train, mus_test, ys_test, continuous_factors=False)
Prediction accuracy of a SVM predicting a factor from a single code. The matrix of accuracies is used to compute the SAP score. Args: mus_train: Batch of learned representations to be used for training. ys_train: Observed factors of variation corresponding to the representations in mus_train. mus_test: Batch of learned representations to be used for testing. ys_test: Observed factors of variation corresponding to the representations in mus_test. Returns: Accuracy matrix as computed for the SAP score.
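The score matrix returned above has shape [num_codes, num_factors]: entry (i, j) is the accuracy of a classifier predicting factor j from code i alone. A numpy-only sketch of that idea, substituting a median-threshold classifier for the library's `svm.LinearSVC` (the data and the threshold rule are purely illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
# Toy data: 2 codes, 1 binary factor; code 0 encodes the factor, code 1 is noise.
y = rng.randint(0, 2, size=200)
mus = np.vstack([y + 0.1 * rng.randn(200), rng.randn(200)])

# accuracy[i, j]: how well code i alone predicts factor j.
accuracy = np.zeros((2, 1))
for i in range(2):
    pred = (mus[i] > np.median(mus[i])).astype(int)
    # Take the better of the two label orientations of the threshold rule.
    accuracy[i, 0] = max(np.mean(pred == y), np.mean(pred != y))
```

The informative code scores near 1 and the noise code near chance, which is exactly the contrast the SAP aggregation later exploits.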
168,232
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf "dci", The provided code snippet includes necessary dependencies for implementing the `aggregation_dci` function. Write a Python function `def aggregation_dci(matrix, ys)` to solve the following problem: Aggregation function of the DCI Disentanglement. Here is the function: def aggregation_dci(matrix, ys): """Aggregation function of the DCI Disentanglement.""" del ys score = {} score["dci_disentanglement"] = dci.disentanglement(matrix) score["dci_completeness"] = dci.completeness(matrix) score["dci"] = dci.disentanglement(matrix) disentanglement_per_code = dci.disentanglement_per_code(matrix) completeness_per_factor = dci.completeness_per_factor(matrix) assert len(disentanglement_per_code) == matrix.shape[0], "Wrong length." assert len(completeness_per_factor) == matrix.shape[1], "Wrong length." for i in range(len(disentanglement_per_code)): score["dci_disentanglement.code_{}".format(i)] = disentanglement_per_code[i] for i in range(len(completeness_per_factor)): score["dci_completeness.code_{}".format(i)] = completeness_per_factor[i] return score
Aggregation function of the DCI Disentanglement.
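`dci.disentanglement` scores each code by one minus the entropy (base num_factors) of its normalized importance row, then weights codes by their share of total importance. A numpy sketch of that computation on a hypothetical importance matrix (the library version also guards against all-zero rows, omitted here):

```python
import numpy as np

# Hypothetical importance matrix: rows are codes, columns are factors.
R = np.array([[0.8, 0.2],
              [0.1, 0.9]])

# Per-code disentanglement: 1 - entropy (base num_factors) of the row.
P = R / R.sum(axis=1, keepdims=True)
entropy = -np.sum(P * np.log(P), axis=1) / np.log(R.shape[1])
disentanglement_per_code = 1.0 - entropy

# Aggregate: weight each code by its share of total importance.
code_weights = R.sum(axis=1) / R.sum()
dci_disentanglement = float(np.sum(disentanglement_per_code * code_weights))
```

Code 1, whose importance is more concentrated on a single factor, receives the higher per-code score.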
168,233
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf The provided code snippet includes necessary dependencies for implementing the `aggregation_mig` function. Write a Python function `def aggregation_mig(m, ys_train)` to solve the following problem: Aggregation function of the MIG. Here is the function: def aggregation_mig(m, ys_train): """Aggregation function of the MIG.""" score = {} entropy = utils.discrete_entropy(ys_train) sorted_m = np.sort(m, axis=0)[::-1] mig_per_factor = np.divide(sorted_m[0, :] - sorted_m[1, :], entropy[:]) score["mig"] = np.mean(mig_per_factor) assert len(mig_per_factor) == m.shape[1], "Wrong length." for i in range(len(mig_per_factor)): score["mig.factor_{}".format(i)] = mig_per_factor[i] return score
Aggregation function of the MIG.
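Worked through on a toy mutual-information matrix, the aggregation above reduces to a normalized gap between the two most informative codes for each factor (the per-factor entropies here are hypothetical):

```python
import numpy as np

# Toy mutual-information matrix: rows are codes, columns are factors.
m = np.array([[0.8, 0.1],
              [0.2, 0.6],
              [0.1, 0.1]])
entropy = np.array([1.0, 1.0])  # hypothetical discrete entropy per factor

sorted_m = np.sort(m, axis=0)[::-1]                    # best code per factor first
mig_per_factor = (sorted_m[0, :] - sorted_m[1, :]) / entropy
mig = float(np.mean(mig_per_factor))
```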
168,234
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf def sap_compute_diff_top_two(matrix): sorted_matrix = np.sort(matrix, axis=0) return sorted_matrix[-1, :] - sorted_matrix[-2, :] "sap_score", The provided code snippet includes necessary dependencies for implementing the `aggregation_sap` function. Write a Python function `def aggregation_sap(matrix, ys)` to solve the following problem: Aggregation function of the SAP score. Here is the function: def aggregation_sap(matrix, ys): """Aggregation function of the SAP score.""" del ys score = {} score["sap"] = sap_score.compute_avg_diff_top_two(matrix) sap_per_factor = sap_compute_diff_top_two(matrix) assert len(sap_per_factor) == matrix.shape[1], "Wrong length." for i in range(len(sap_per_factor)): score["sap.factor_{}".format(i)] = sap_per_factor[i] return score
Aggregation function of the SAP score.
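A small worked example of `sap_compute_diff_top_two` and its mean, on a toy score matrix:

```python
import numpy as np

# Score matrix: rows are codes, columns are factors.
matrix = np.array([[0.9, 0.3],
                   [0.4, 0.8],
                   [0.2, 0.5]])

sorted_matrix = np.sort(matrix, axis=0)
sap_per_factor = sorted_matrix[-1, :] - sorted_matrix[-2, :]  # top-two gap
sap = float(np.mean(sap_per_factor))
```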
168,235
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import logging from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import utils from disentanglement_lib.utils import results from disentanglement_lib.visualize import dendrogram from disentanglement_lib.visualize import visualize_scores import numpy as np import gin.tf def compute_modularity_per_code(mutual_information): """Computes the modularity from mutual information.""" # Mutual information has shape [num_codes, num_factors]. squared_mi = np.square(mutual_information) max_squared_mi = np.max(squared_mi, axis=1) numerator = np.sum(squared_mi, axis=1) - max_squared_mi denominator = max_squared_mi * (squared_mi.shape[1] -1.) delta = numerator / denominator modularity_score = 1. - delta index = (max_squared_mi == 0.) modularity_score[index] = 0. return modularity_score "modularity_explicitness", The provided code snippet includes necessary dependencies for implementing the `aggregation_modularity` function. Write a Python function `def aggregation_modularity(matrix, ys)` to solve the following problem: Aggregation function of the modularity score. Here is the function: def aggregation_modularity(matrix, ys): """Aggregation function of the modularity score.""" del ys score = {} score["modularity"] = modularity_explicitness.modularity(matrix) modularity_per_code = compute_modularity_per_code(matrix) assert len(modularity_per_code) == matrix.shape[0], "Wrong length." for i in range(len(modularity_per_code)): score["modularity.code_{}".format(i)] = modularity_per_code[i] return score
Aggregation function of the modularity score.
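`compute_modularity_per_code` yields 1 when a code's squared mutual information sits entirely on one factor and 0 when it is spread uniformly; checking both extremes on a toy matrix (the library's extra guard for all-zero rows is omitted):

```python
import numpy as np

# Mutual information: rows are codes, columns are factors.
mi = np.array([[0.9, 0.0, 0.0],    # code 0: all MI on one factor -> modularity 1
               [0.3, 0.3, 0.3]])   # code 1: MI spread uniformly  -> modularity 0

squared = mi ** 2
max_sq = squared.max(axis=1)
delta = (squared.sum(axis=1) - max_sq) / (max_sq * (squared.shape[1] - 1.0))
modularity_per_code = 1.0 - delta
```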
168,236
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range import gin.tf def compute_scores_dict(metric, prefix): """Computes scores for combinations of predictive and sensitive factors. Either average or take the maximum with respect to target and sensitive variable for all combinations of predictive and sensitive factors. Args: metric: Matrix of shape [num_factors, num_factors] with fairness scores. prefix: Prefix for the matrix in the returned dictionary. Returns: Dictionary containing all combinations of predictive and sensitive factors. """ result = {} # Report min and max scores for each predictive and sensitive factor. for i in range(metric.shape[0]): for j in range(metric.shape[1]): if i != j: result["{}:pred{}:sens{}".format(prefix, i, j)] = metric[i, j] # Compute mean and max values across rows. rows_means = [] rows_maxs = [] for i in range(metric.shape[0]): relevant_scores = [metric[i, j] for j in range(metric.shape[1]) if i != j] mean_score = np.mean(relevant_scores) max_score = np.amax(relevant_scores) result["{}:pred{}:mean_sens".format(prefix, i)] = mean_score result["{}:pred{}:max_sens".format(prefix, i)] = max_score rows_means.append(mean_score) rows_maxs.append(max_score) # Compute mean and max values across rows. column_means = [] column_maxs = [] for j in range(metric.shape[1]): relevant_scores = [metric[i, j] for i in range(metric.shape[0]) if i != j] mean_score = np.mean(relevant_scores) max_score = np.amax(relevant_scores) result["{}:sens{}:mean_pred".format(prefix, j)] = mean_score result["{}:sens{}:max_pred".format(prefix, j)] = max_score column_means.append(mean_score) column_maxs.append(max_score) # Compute all combinations of scores. 
result["{}:mean_sens:mean_pred".format(prefix)] = np.mean(column_means) result["{}:mean_sens:max_pred".format(prefix)] = np.mean(column_maxs) result["{}:max_sens:mean_pred".format(prefix)] = np.amax(column_means) result["{}:max_sens:max_pred".format(prefix)] = np.amax(column_maxs) result["{}:mean_pred:mean_sens".format(prefix)] = np.mean(rows_means) result["{}:mean_pred:max_sens".format(prefix)] = np.mean(rows_maxs) result["{}:max_pred:mean_sens".format(prefix)] = np.amax(rows_means) result["{}:max_pred:max_sens".format(prefix)] = np.amax(rows_maxs) return result def inter_group_fairness(counts): """Computes the inter group fairness for predictions based on the TV distance. Args: counts: Numpy array with counts of predictions where rows correspond to predicted classes and columns to sensitive classes. Returns: Mean and maximum total variation distance of a sensitive class to the global average. """ # Compute the distribution of predictions across all sensitive classes. overall_distribution = np.sum(counts, axis=1, dtype=np.float32) overall_distribution /= overall_distribution.sum() # Compute the distribution for each sensitive class. normalized_counts = np.array(counts, dtype=np.float32) counts_per_class = np.sum(counts, axis=0) normalized_counts /= np.expand_dims(counts_per_class, 0) # Compute the differences and sum up for each sensitive class. differences = normalized_counts - np.expand_dims(overall_distribution, 1) total_variation_distances = np.sum(np.abs(differences), 0) / 2. mean = (total_variation_distances * counts_per_class) mean /= counts_per_class.sum() return np.sum(mean), np.amax(total_variation_distances) The provided code snippet includes necessary dependencies for implementing the `compute_fairness` function. 
Write a Python function `def compute_fairness(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test_points_per_class=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes unfairness scores. We first compute either the mean or maximum total variation for a given sensitive and target variable. Then, we either average or take the maximum with respect to target and sensitive variable. For convenience, we compute and save all combinations. The score used in Section 4 of the paper is here called mean_fairness:mean_pred:mean_sens. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test_points_per_class: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores. Here is the function: def compute_fairness(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test_points_per_class=gin.REQUIRED, batch_size=16): """Computes unfairness scores. We first compute either the mean or maximum total variation for a given sensitive and target variable. Then, we either average or take the maximum with respect to target and sensitive variable. For convenience, we compute and save all combinations. The score used in Section 4 of the paper is here called mean_fairness:mean_pred:mean_sens. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. 
artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test_points_per_class: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores. """ del artifact_dir factor_counts = ground_truth_data.factors_num_values num_factors = len(factor_counts) scores = {} # Training a predictive model. mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) predictor_model_fn = utils.make_predictor_fn() # For each factor train a single predictive model. mean_fairness = np.zeros((num_factors, num_factors), dtype=np.float64) max_fairness = np.zeros((num_factors, num_factors), dtype=np.float64) for i in range(num_factors): model = predictor_model_fn() model.fit(np.transpose(mus_train), ys_train[i, :]) for j in range(num_factors): if i == j: continue # Sample a random set of factors once. original_factors = ground_truth_data.sample_factors( num_test_points_per_class, random_state) counts = np.zeros((factor_counts[i], factor_counts[j]), dtype=np.int64) for c in range(factor_counts[j]): # Intervene on the sensitive attribute. intervened_factors = np.copy(original_factors) intervened_factors[:, j] = c # Obtain the batched observations. observations = ground_truth_data.sample_observations_from_factors( intervened_factors, random_state) representations = utils.obtain_representation(observations, representation_function, batch_size) # Get the predictions. predictions = model.predict(np.transpose(representations)) # Update the counts. counts[:, c] = np.bincount(predictions, minlength=factor_counts[i]) mean_fairness[i, j], max_fairness[i, j] = inter_group_fairness(counts) # Report the scores. scores.update(compute_scores_dict(mean_fairness, "mean_fairness")) scores.update(compute_scores_dict(max_fairness, "max_fairness")) return scores
Computes unfairness scores. We first compute either the mean or maximum total variation for a given sensitive and target variable. Then, we either average or take the maximum with respect to target and sensitive variable. For convenience, we compute and save all combinations. The score used in Section 4 of the paper is here called mean_fairness:mean_pred:mean_sens. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test_points_per_class: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with scores.
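The heart of the fairness score is `inter_group_fairness`: for each sensitive class, the total variation distance between its prediction distribution and the pooled one. The TV computation on a small count table:

```python
import numpy as np

# counts[p, s]: number of times class p was predicted within sensitive class s.
counts = np.array([[8, 2],
                   [2, 8]], dtype=np.float64)

overall = counts.sum(axis=1) / counts.sum()             # pooled distribution
per_class = counts / counts.sum(axis=0, keepdims=True)  # columns sum to 1
tv = np.abs(per_class - overall[:, None]).sum(axis=0) / 2.0
max_unfairness = float(tv.max())
```

Each sensitive class here deviates from the pooled 50/50 split by a TV distance of 0.3.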
168,237
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import scipy from six.moves import range from sklearn import ensemble import gin.tf def _compute_dci(mus_train, ys_train, mus_test, ys_test): """Computes score based on both training and testing codes and factors.""" scores = {} importance_matrix, train_err, test_err = compute_importance_gbt( mus_train, ys_train, mus_test, ys_test) assert importance_matrix.shape[0] == mus_train.shape[0] assert importance_matrix.shape[1] == ys_train.shape[0] scores["informativeness_train"] = train_err scores["informativeness_test"] = test_err scores["disentanglement"] = disentanglement(importance_matrix) scores["completeness"] = completeness(importance_matrix) return scores "dci_validation", blacklist=["observations", "labels", "representation_function"]) The provided code snippet includes necessary dependencies for implementing the `compute_dci` function. Write a Python function `def compute_dci(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16)` to solve the following problem: Computes the DCI scores according to Sec 2. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with average disentanglement score, completeness and informativeness (train and test). 
Here is the function: def compute_dci(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16): """Computes the DCI scores according to Sec 2. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with average disentanglement score, completeness and informativeness (train and test). """ del artifact_dir logging.info("Generating training set.") # mus_train are of shape [num_codes, num_train], while ys_train are of shape # [num_factors, num_train]. mus_train, ys_train = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) assert mus_train.shape[1] == num_train assert ys_train.shape[1] == num_train mus_test, ys_test = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_test, random_state, batch_size) scores = _compute_dci(mus_train, ys_train, mus_test, ys_test) return scores
Computes the DCI scores according to Sec 2. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing. batch_size: Batch size for sampling. Returns: Dictionary with average disentanglement score, completeness and informativeness (train and test).
168,238
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np import scipy from six.moves import range from sklearn import ensemble import gin.tf def _compute_dci(mus_train, ys_train, mus_test, ys_test): """Computes score based on both training and testing codes and factors.""" scores = {} importance_matrix, train_err, test_err = compute_importance_gbt( mus_train, ys_train, mus_test, ys_test) assert importance_matrix.shape[0] == mus_train.shape[0] assert importance_matrix.shape[1] == ys_train.shape[0] scores["informativeness_train"] = train_err scores["informativeness_test"] = test_err scores["disentanglement"] = disentanglement(importance_matrix) scores["completeness"] = completeness(importance_matrix) return scores "dci_validation", blacklist=["observations", "labels", "representation_function"]) The provided code snippet includes necessary dependencies for implementing the `compute_dci_on_fixed_data` function. Write a Python function `def compute_dci_on_fixed_data(observations, labels, representation_function, train_percentage=gin.REQUIRED, batch_size=100)` to solve the following problem: Computes the DCI scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. batch_size: Batch size used to compute the representation. Returns: DCI score. 
Here is the function: def compute_dci_on_fixed_data(observations, labels, representation_function, train_percentage=gin.REQUIRED, batch_size=100): """Computes the DCI scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. batch_size: Batch size used to compute the representation. Returns: DCI score. """ mus = utils.obtain_representation(observations, representation_function, batch_size) assert labels.shape[1] == observations.shape[0], "Wrong labels shape." assert mus.shape[1] == observations.shape[0], "Wrong representation shape." mus_train, mus_test = utils.split_train_test( mus, train_percentage) ys_train, ys_test = utils.split_train_test( labels, train_percentage) return _compute_dci(mus_train, ys_train, mus_test, ys_test)
Computes the DCI scores on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variation. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. batch_size: Batch size used to compute the representation. Returns: DCI score.
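`utils.split_train_test` splits along the second (data-point) axis according to `train_percentage`. A minimal sketch of that convention; the exact rounding rule in the library may differ:

```python
import numpy as np

def split_train_test(x, train_percentage):
    # x has shape [num_codes, num_points]; split along the point axis.
    n_train = int(np.ceil(x.shape[1] * train_percentage))
    return x[:, :n_train], x[:, n_train:]

x = np.arange(10).reshape(1, 10)
x_train, x_test = split_train_test(x, 0.8)
```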
168,239
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range from sklearn import svm import gin.tf def _compute_sap(mus, ys, mus_test, ys_test, continuous_factors): """Computes score based on both training and testing codes and factors.""" score_matrix = compute_score_matrix(mus, ys, mus_test, ys_test, continuous_factors) # Score matrix should have shape [num_latents, num_factors]. assert score_matrix.shape[0] == mus.shape[0] assert score_matrix.shape[1] == ys.shape[0] scores_dict = {} scores_dict["SAP_score"] = compute_avg_diff_top_two(score_matrix) logging.info("SAP score: %.2g", scores_dict["SAP_score"]) return scores_dict "sap_score_validation", blacklist=["observations", "labels", "representation_function"]) The provided code snippet includes necessary dependencies for implementing the `compute_sap` function. Write a Python function `def compute_sap(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16, continuous_factors=gin.REQUIRED)` to solve the following problem: Computes the SAP score. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing discrete variables. batch_size: Batch size for sampling. continuous_factors: Factors are continuous variable (True) or not (False). Returns: Dictionary with SAP score. 
Here is the function: def compute_sap(ground_truth_data, representation_function, random_state, artifact_dir=None, num_train=gin.REQUIRED, num_test=gin.REQUIRED, batch_size=16, continuous_factors=gin.REQUIRED): """Computes the SAP score. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing discrete variables. batch_size: Batch size for sampling. continuous_factors: Factors are continuous variable (True) or not (False). Returns: Dictionary with SAP score. """ del artifact_dir logging.info("Generating training set.") mus, ys = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_train, random_state, batch_size) mus_test, ys_test = utils.generate_batch_factor_code( ground_truth_data, representation_function, num_test, random_state, batch_size) logging.info("Computing score matrix.") return _compute_sap(mus, ys, mus_test, ys_test, continuous_factors)
Computes the SAP score. Args: ground_truth_data: GroundTruthData to be sampled from. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. random_state: Numpy random state used for randomness. artifact_dir: Optional path to directory where artifacts can be saved. num_train: Number of points used for training. num_test: Number of points used for testing discrete variables. batch_size: Batch size for sampling. continuous_factors: Factors are continuous variable (True) or not (False). Returns: Dictionary with SAP score.
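For `continuous_factors=True`, the entries of the SAP score matrix are squared linear correlations rather than SVM accuracies: cov(mu_i, y_j)^2 / (var(mu_i) var(y_j)). A sketch on synthetic data (the sampling setup is illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
y = rng.randn(500)                                  # one continuous factor
mus = np.vstack([2.0 * y + 0.1 * rng.randn(500),    # code 0 encodes the factor
                 rng.randn(500)])                   # code 1 is noise

# score[i, j] = cov(mu_i, y_j)^2 / (var(mu_i) * var(y_j))
score = np.zeros((2, 1))
for i in range(2):
    cov = np.cov(mus[i], y, ddof=1)
    score[i, 0] = cov[0, 1] ** 2 / (cov[0, 0] * cov[1, 1])
```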
168,240
from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl import logging from disentanglement_lib.evaluation.metrics import utils import numpy as np from six.moves import range from sklearn import svm import gin.tf def _compute_sap(mus, ys, mus_test, ys_test, continuous_factors): """Computes score based on both training and testing codes and factors.""" score_matrix = compute_score_matrix(mus, ys, mus_test, ys_test, continuous_factors) # Score matrix should have shape [num_latents, num_factors]. assert score_matrix.shape[0] == mus.shape[0] assert score_matrix.shape[1] == ys.shape[0] scores_dict = {} scores_dict["SAP_score"] = compute_avg_diff_top_two(score_matrix) logging.info("SAP score: %.2g", scores_dict["SAP_score"]) return scores_dict "sap_score_validation", blacklist=["observations", "labels", "representation_function"]) The provided code snippet includes necessary dependencies for implementing the `compute_sap_on_fixed_data` function. Write a Python function `def compute_sap_on_fixed_data(observations, labels, representation_function, train_percentage=gin.REQUIRED, continuous_factors=gin.REQUIRED, batch_size=100)` to solve the following problem: Computes the SAP score on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. continuous_factors: Whether factors should be considered continuous or discrete. batch_size: Batch size used to compute the representation. Returns: SAP computed on the provided observations and labels. 
Here is the function: def compute_sap_on_fixed_data(observations, labels, representation_function, train_percentage=gin.REQUIRED, continuous_factors=gin.REQUIRED, batch_size=100): """Computes the SAP score on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variations. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. continuous_factors: Whether factors should be considered continuous or discrete. batch_size: Batch size used to compute the representation. Returns: SAP computed on the provided observations and labels. """ mus = utils.obtain_representation(observations, representation_function, batch_size) assert labels.shape[1] == observations.shape[0], "Wrong labels shape." assert mus.shape[1] == observations.shape[0], "Wrong representation shape." mus_train, mus_test = utils.split_train_test( mus, train_percentage) ys_train, ys_test = utils.split_train_test( labels, train_percentage) return _compute_sap(mus_train, ys_train, mus_test, ys_test, continuous_factors)
Computes the SAP score on the fixed set of observations and labels. Args: observations: Observations on which to compute the score. Observations have shape (num_observations, 64, 64, num_channels). labels: Observed factors of variation. representation_function: Function that takes observations as input and outputs a dim_representation sized representation for each observation. train_percentage: Percentage of observations used for training. continuous_factors: Whether factors should be considered continuous or discrete. batch_size: Batch size used to compute the representation. Returns: SAP computed on the provided observations and labels.
168,241
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from absl import logging
import numpy as np
import scipy
from sklearn import linear_model
from sklearn import preprocessing
import gin.tf


def relative_strength_disentanglement(corr_matrix):
  """Computes disentanglement using relative strength score."""
  score_x = np.nanmean(
      np.nan_to_num(
          np.power(np.ndarray.max(corr_matrix, axis=0), 2) /
          np.sum(corr_matrix, axis=0), 0))
  score_y = np.nanmean(
      np.nan_to_num(
          np.power(np.ndarray.max(corr_matrix, axis=1), 2) /
          np.sum(corr_matrix, axis=1), 0))
  return (score_x + score_y) / 2


def spearman_correlation_conv(vec1, vec2):
  """Computes Spearman correlation matrix of two representations.

  Args:
    vec1: 2d array of representations with axis 0 the batch dimension and
      axis 1 the representation dimension.
    vec2: 2d array of representations with axis 0 the batch dimension and
      axis 1 the representation dimension.

  Returns:
    A 2d array with the correlations between all pairwise combinations of
    elements of both representations. Elements of vec1 correspond to axis 0
    and elements of vec2 correspond to axis 1.
  """
  assert vec1.shape == vec2.shape
  corr_y = []
  for i in range(vec1.shape[1]):
    corr_x = []
    for j in range(vec2.shape[1]):
      corr, _ = scipy.stats.spearmanr(
          vec1[:, i], vec2[:, j], nan_policy="omit")
      corr_x.append(corr)
    corr_y.append(np.stack(corr_x))
  return np.transpose(np.absolute(np.stack(corr_y, axis=1)))


def lasso_correlation_matrix(vec1, vec2, random_state=None):
  """Computes correlation matrix of two representations using Lasso Regression.

  Args:
    vec1: 2d array of representations with axis 0 the batch dimension and
      axis 1 the representation dimension.
    vec2: 2d array of representations with axis 0 the batch dimension and
      axis 1 the representation dimension.
    random_state: int used to seed an RNG used for model training.

  Returns:
    A 2d array with the correlations between all pairwise combinations of
    elements of both representations. Elements of vec1 correspond to axis 0
    and elements of vec2 correspond to axis 1.
  """
  assert vec1.shape == vec2.shape
  model = linear_model.Lasso(random_state=random_state, alpha=0.1)
  model.fit(vec1, vec2)
  return np.transpose(np.absolute(model.coef_))


def _generate_representation_dataset(ground_truth_data,
                                     representation_functions, batch_size,
                                     num_data_points, random_state):
  """Samples a dataset of representations for all of the different models.

  Args:
    ground_truth_data: GroundTruthData to be sampled from.
    representation_functions: functions that take observations as input and
      output a dim_representation sized representation for each observation
      and a vector of the average kl divergence per latent.
    batch_size: size of batches of representations to be collected at one
      time.
    num_data_points: total number of points to be sampled for training set.
    random_state: numpy random state used for randomness.

  Returns:
    representation_points: (num_data_points, dim_representation)-sized numpy
      array with training set features.
    kl: (dim_representation) - The average KL divergence per latent in the
      representation.
  """
  if num_data_points % batch_size != 0:
    raise ValueError("num_data_points must be a multiple of batch_size")
  representation_points = []
  kl_divergence = []
  for i in range(int(num_data_points / batch_size)):
    representation_batch = _generate_representation_batch(
        ground_truth_data, representation_functions, batch_size, random_state)
    for j in range(len(representation_functions)):
      # Initialize the outputs if they haven't been created yet.
      if len(representation_points) <= j:
        kl_divergence.append(
            np.zeros((int(num_data_points / batch_size),
                      representation_batch[j][1].shape[0])))
        representation_points.append(
            np.zeros((num_data_points, representation_batch[j][0].shape[1])))
      kl_divergence[j][i, :] = representation_batch[j][1]
      representation_points[j][i * batch_size:(i + 1) * batch_size, :] = (
          representation_batch[j][0])
  return representation_points, [np.mean(kl, axis=0) for kl in kl_divergence]


@gin.configurable(
    "udr_sklearn",
    blacklist=["ground_truth_data", "representation_functions",
               "random_state"])

The provided code snippet includes necessary dependencies for implementing the `compute_udr_sklearn` function.

Write a Python function `def compute_udr_sklearn(ground_truth_data, representation_functions, random_state, batch_size, num_data_points, correlation_matrix="lasso", filter_low_kl=True, include_raw_correlations=True, kl_filter_threshold=0.01)` to solve the following problem:

Computes the UDR score using scikit-learn.

Args:
  ground_truth_data: GroundTruthData to be sampled from.
  representation_functions: functions that take observations as input and
    output a dim_representation sized representation for each observation.
  random_state: numpy random state used for randomness.
  batch_size: Number of datapoints to compute in a single batch. Useful for
    reducing memory overhead for larger models.
  num_data_points: total number of representation datapoints to generate for
    computing the correlation matrix.
  correlation_matrix: Type of correlation matrix to generate. Can be either
    "lasso" or "spearman".
  filter_low_kl: If True, filter out elements of the representation vector
    which have low computed KL divergence.
  include_raw_correlations: Whether or not to include the raw correlation
    matrices in the results.
  kl_filter_threshold: Threshold; latents with average KL divergence below it
    will be ignored when computing disentanglement.

Returns:
  scores_dict: a dictionary of the scores computed for UDR with the following
    keys:
    raw_correlations: (num_models, num_models, latent_dim, latent_dim) - The
      raw computed correlation matrices for all models. The pair of models is
      indexed by axis 0 and 1 and the matrix represents the computed
      correlation matrix between latents in axis 2 and 3.
    pairwise_disentanglement_scores: (num_models, num_models, 1) - The
      computed disentanglement scores representing the similarity of
      representation between pairs of models.
    model_scores: (num_models) - List of aggregated model scores
      corresponding to the median of the pairwise disentanglement scores for
      each model.

Here is the function:

def compute_udr_sklearn(ground_truth_data,
                        representation_functions,
                        random_state,
                        batch_size,
                        num_data_points,
                        correlation_matrix="lasso",
                        filter_low_kl=True,
                        include_raw_correlations=True,
                        kl_filter_threshold=0.01):
  """Computes the UDR score using scikit-learn.

  Args:
    ground_truth_data: GroundTruthData to be sampled from.
    representation_functions: functions that take observations as input and
      output a dim_representation sized representation for each observation.
    random_state: numpy random state used for randomness.
    batch_size: Number of datapoints to compute in a single batch. Useful for
      reducing memory overhead for larger models.
    num_data_points: total number of representation datapoints to generate
      for computing the correlation matrix.
    correlation_matrix: Type of correlation matrix to generate. Can be either
      "lasso" or "spearman".
    filter_low_kl: If True, filter out elements of the representation vector
      which have low computed KL divergence.
    include_raw_correlations: Whether or not to include the raw correlation
      matrices in the results.
    kl_filter_threshold: Threshold; latents with average KL divergence below
      it will be ignored when computing disentanglement.

  Returns:
    scores_dict: a dictionary of the scores computed for UDR with the
      following keys:
      raw_correlations: (num_models, num_models, latent_dim, latent_dim) -
        The raw computed correlation matrices for all models. The pair of
        models is indexed by axis 0 and 1 and the matrix represents the
        computed correlation matrix between latents in axis 2 and 3.
      pairwise_disentanglement_scores: (num_models, num_models, 1) - The
        computed disentanglement scores representing the similarity of
        representation between pairs of models.
      model_scores: (num_models) - List of aggregated model scores
        corresponding to the median of the pairwise disentanglement scores
        for each model.
  """
  logging.info("Generating training set.")
  inferred_model_reps, kl = _generate_representation_dataset(
      ground_truth_data, representation_functions, batch_size,
      num_data_points, random_state)

  num_models = len(inferred_model_reps)
  logging.info("Number of Models: %s", num_models)

  logging.info("Training sklearn models.")
  latent_dim = inferred_model_reps[0].shape[1]
  corr_matrix_all = np.zeros((num_models, num_models, latent_dim, latent_dim))

  # Normalize and calculate mask based off of kl divergence to remove
  # uninformative latents.
  kl_mask = []
  for i in range(len(inferred_model_reps)):
    scaler = preprocessing.StandardScaler()
    scaler.fit(inferred_model_reps[i])
    inferred_model_reps[i] = scaler.transform(inferred_model_reps[i])
    inferred_model_reps[i] = inferred_model_reps[i] * np.greater(kl[i], 0.01)
    kl_mask.append(kl[i] > kl_filter_threshold)

  disentanglement = np.zeros((num_models, num_models, 1))
  for i in range(num_models):
    for j in range(num_models):
      if i == j:
        continue
      if correlation_matrix == "lasso":
        corr_matrix = lasso_correlation_matrix(inferred_model_reps[i],
                                               inferred_model_reps[j],
                                               random_state)
      else:
        corr_matrix = spearman_correlation_conv(inferred_model_reps[i],
                                                inferred_model_reps[j])

      corr_matrix_all[i, j, :, :] = corr_matrix
      if filter_low_kl:
        corr_matrix = corr_matrix[kl_mask[i], ...][..., kl_mask[j]]
      disentanglement[i, j] = relative_strength_disentanglement(corr_matrix)

  scores_dict = {}
  if include_raw_correlations:
    scores_dict["raw_correlations"] = corr_matrix_all.tolist()
  scores_dict["pairwise_disentanglement_scores"] = disentanglement.tolist()

  model_scores = []
  for i in range(num_models):
    model_scores.append(np.median(np.delete(disentanglement[:, i], i)))

  scores_dict["model_scores"] = model_scores

  return scores_dict
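The relative-strength score above can be sanity-checked on toy correlation matrices; this self-contained sketch restates the helper and checks the two extremes (a one-to-one latent correspondence and a fully mixed one):

```python
import numpy as np

def relative_strength_disentanglement(corr_matrix):
    # Restates the helper above: squared column/row maxima normalized by
    # column/row sums, averaged over both axes.
    score_x = np.nanmean(np.nan_to_num(
        np.power(corr_matrix.max(axis=0), 2) / np.sum(corr_matrix, axis=0)))
    score_y = np.nanmean(np.nan_to_num(
        np.power(corr_matrix.max(axis=1), 2) / np.sum(corr_matrix, axis=1)))
    return (score_x + score_y) / 2

# A one-to-one correspondence between latents scores 1.0; a fully mixed
# correlation matrix (every entry 1/3) scores 1/9.
one_to_one = relative_strength_disentanglement(np.eye(3))
fully_mixed = relative_strength_disentanglement(np.full((3, 3), 1.0 / 3.0))
```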
168,242
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import contextlib
import os
import time
from absl import flags
from disentanglement_lib.data.ground_truth import named_data
from disentanglement_lib.evaluation.udr.metrics import udr
from disentanglement_lib.utils import results
import numpy as np
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
import gin.tf

The provided code snippet includes necessary dependencies for implementing the `evaluate` function.

Write a Python function `def evaluate(model_dirs, output_dir, evaluation_fn=gin.REQUIRED, random_seed=gin.REQUIRED, name="")` to solve the following problem:

Loads a trained estimator and evaluates it according to beta-VAE metric.

Here is the function:

def evaluate(model_dirs,
             output_dir,
             evaluation_fn=gin.REQUIRED,
             random_seed=gin.REQUIRED,
             name=""):
  """Loads a trained estimator and evaluates it according to beta-VAE metric."""
  # The name will be part of the gin config and can be used to tag results.
  del name

  # Set up time to keep track of elapsed time in results.
  experiment_timer = time.time()

  # Automatically set the proper dataset if necessary. We replace the active
  # gin config as this will lead to a valid gin config file where the dataset
  # is present.
  if gin.query_parameter("dataset.name") == "auto":
    # Obtain the dataset name from the gin config of the previous step.
    gin_config_file = os.path.join(model_dirs[0], "results", "gin",
                                   "train.gin")
    gin_dict = results.gin_dict(gin_config_file)
    with gin.unlock_config():
      print(gin_dict["dataset.name"])
      gin.bind_parameter("dataset.name",
                         gin_dict["dataset.name"].replace("'", ""))

  output_dir = os.path.join(output_dir)
  if tf.io.gfile.isdir(output_dir):
    tf.io.gfile.rmtree(output_dir)

  dataset = named_data.get_named_ground_truth_data()

  with contextlib.ExitStack() as stack:
    representation_functions = []
    eval_functions = [
        stack.enter_context(
            hub.eval_function_for_module(os.path.join(model_dir, "tfhub")))
        for model_dir in model_dirs
    ]
    for f in eval_functions:

      def _representation_function(x, f=f):

        def compute_gaussian_kl(z_mean, z_logvar):
          return np.mean(
              0.5 * (np.square(z_mean) + np.exp(z_logvar) - z_logvar - 1),
              axis=0)

        encoding = f(
            dict(images=x), signature="gaussian_encoder", as_dict=True)
        return np.array(encoding["mean"]), compute_gaussian_kl(
            np.array(encoding["mean"]), np.array(encoding["logvar"]))

      representation_functions.append(_representation_function)

    results_dict = evaluation_fn(
        dataset,
        representation_functions,
        random_state=np.random.RandomState(random_seed))

  original_results_dir = os.path.join(model_dirs[0], "results")
  results_dir = os.path.join(output_dir, "results")
  results_dict["elapsed_time"] = time.time() - experiment_timer
  results.update_result_directory(results_dir, "evaluation", results_dict,
                                  original_results_dir)
168,243
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np

The provided code snippet includes necessary dependencies for implementing the `sample_easy_alternative` function.

Write a Python function `def sample_easy_alternative(design, matrix, already_sampled_alternatives)` to solve the following problem:

Samples easy alternative based on sampling a new PGM.

Here is the function:

def sample_easy_alternative(design, matrix, already_sampled_alternatives):
  """Samples easy alternative based on sampling a new PGM."""
  for _ in range(100):
    alternative_pgm = design.resample_design().sample()
    # Combine the solutions.
    alternative_solution = np.copy(matrix)
    alternative_solution[-1, -1, :] = alternative_pgm[-1, -1, :]
    if design.is_consistent(alternative_solution):
      continue
    for already_sampled_alternative in already_sampled_alternatives:
      if np.allclose(already_sampled_alternative, alternative_pgm[-1, -1, :]):
        continue
    return alternative_pgm[-1, -1, :]
  raise ValueError("Could not sample alternative solutions.")
168,244
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np

The provided code snippet includes necessary dependencies for implementing the `sample_hard_alternative` function.

Write a Python function `def sample_hard_alternative(design, matrix, already_sampled_alternatives)` to solve the following problem:

Samples hard alternative based on sampling a new PGM.

Here is the function:

def sample_hard_alternative(design, matrix, already_sampled_alternatives):
  """Samples hard alternative based on sampling a new PGM."""
  solution_so_far = matrix[-1, -1]
  for _ in range(100):
    solution_so_far = design.randomly_modify_solution(solution_so_far)
    # Combine the solutions.
    alternative_solution = np.copy(matrix)
    alternative_solution[-1, -1, :] = solution_so_far
    if design.is_consistent(alternative_solution):
      continue
    for already_sampled_alternative in already_sampled_alternatives:
      if np.allclose(already_sampled_alternative, solution_so_far):
        continue
    return solution_so_far
  raise ValueError("Could not sample hard alternative solutions.")
168,245
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np


def is_constant_row(row):
  return len(np.unique(row)) == 1
168,246
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np


def is_distinct_row(row):
  return len(np.unique(row)) == len(row)
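The two one-line row predicates above (this entry and the previous one) encode PGM row relations via `np.unique`; a self-contained sketch of their behavior on toy rows:

```python
import numpy as np

def is_constant_row(row):
    # One unique value means every entry is the same.
    return len(np.unique(row)) == 1

def is_distinct_row(row):
    # As many unique values as entries means no repeats.
    return len(np.unique(row)) == len(row)

constant = is_constant_row(np.array([2, 2, 2]))   # all entries equal
distinct = is_distinct_row(np.array([0, 1, 2]))   # all entries differ
mixed = np.array([0, 0, 1])                       # neither constant nor distinct
```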
168,247
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import time
from disentanglement_lib.evaluation.abstract_reasoning import models
from disentanglement_lib.evaluation.abstract_reasoning import pgm_data
from disentanglement_lib.utils import results
import numpy as np
import tensorflow.compat.v1 as tf
import gin.tf.external_configurables
import gin.tf
from tensorflow.contrib import tpu as contrib_tpu


def reason(
    input_dir,
    output_dir,
    overwrite=False,
    model=gin.REQUIRED,
    num_iterations=gin.REQUIRED,
    training_steps_per_iteration=gin.REQUIRED,
    eval_steps_per_iteration=gin.REQUIRED,
    random_seed=gin.REQUIRED,
    batch_size=gin.REQUIRED,
    name="",
):
  """Trains the estimator and exports the snapshot and the gin config.

  The use of this function requires the gin binding 'dataset.name' to be
  specified if a model is trained from scratch as that determines the data
  set used for training.

  Args:
    input_dir: String with path to directory where the representation
      function is saved.
    output_dir: String with the path where the results should be saved.
    overwrite: Boolean indicating whether to overwrite output directory.
    model: GaussianEncoderModel that should be trained and exported.
    num_iterations: Integer with number of training steps.
    training_steps_per_iteration: Integer with number of training steps per
      iteration.
    eval_steps_per_iteration: Integer with number of validation and test
      steps per iteration.
    random_seed: Integer with random seed used for training.
    batch_size: Integer with the batch size.
    name: Optional string with name of the model (can be used to name
      models).
  """
  # We do not use the variable 'name'. Instead, it can be used to name
  # results as it will be part of the saved gin config.
  del name

  # Delete the output directory if it already exists.
  if tf.gfile.IsDirectory(output_dir):
    if overwrite:
      tf.gfile.DeleteRecursively(output_dir)
    else:
      raise ValueError("Directory already exists and overwrite is False.")

  # Create a numpy random state. We will sample the random seeds for training
  # and evaluation from this.
  random_state = np.random.RandomState(random_seed)

  # Automatically set the proper data set if necessary. We replace the active
  # gin config as this will lead to a valid gin config file where the data
  # set is present.
  if gin.query_parameter("dataset.name") == "auto":
    if input_dir is None:
      raise ValueError("Cannot automatically infer data set for methods with"
                       " no prior model directory.")
    # Obtain the dataset name from the gin config of the previous step.
    gin_config_file = os.path.join(input_dir, "results", "gin",
                                   "postprocess.gin")
    gin_dict = results.gin_dict(gin_config_file)
    with gin.unlock_config():
      gin.bind_parameter("dataset.name",
                         gin_dict["dataset.name"].replace("'", ""))
  dataset = pgm_data.get_pgm_dataset()

  # Set the path to the TFHub embedding if we are training based on a
  # pre-trained embedding.
  if input_dir is not None:
    tfhub_dir = os.path.join(input_dir, "tfhub")
    with gin.unlock_config():
      gin.bind_parameter("HubEmbedding.hub_path", tfhub_dir)

  # We create a TPUEstimator based on the provided model. This is primarily
  # so that we could switch to TPU training in the future. For now, we train
  # locally on GPUs.
  run_config = contrib_tpu.RunConfig(
      tf_random_seed=random_seed,
      keep_checkpoint_max=1,
      tpu_config=contrib_tpu.TPUConfig(iterations_per_loop=500))
  tpu_estimator = contrib_tpu.TPUEstimator(
      use_tpu=False,
      model_fn=model.model_fn,
      model_dir=os.path.join(output_dir, "tf_checkpoint"),
      train_batch_size=batch_size,
      eval_batch_size=batch_size,
      config=run_config)

  # Set up time to keep track of elapsed time in results.
  experiment_timer = time.time()

  # Create a dictionary to keep track of all relevant information.
  results_dict_of_dicts = {}
  validation_scores = []
  all_dicts = []
  for i in range(num_iterations):
    steps_so_far = i * training_steps_per_iteration
    tf.logging.info("Training to %d steps.", steps_so_far)
    # Train the model for the specified steps.
    tpu_estimator.train(
        input_fn=dataset.make_input_fn(random_state.randint(2**32)),
        steps=training_steps_per_iteration)
    # Compute validation scores used for model selection.
    validation_results = tpu_estimator.evaluate(
        input_fn=dataset.make_input_fn(
            random_state.randint(2**32),
            num_batches=eval_steps_per_iteration))
    validation_scores.append(validation_results["accuracy"])
    tf.logging.info("Validation results %s", validation_results)
    # Compute test scores for final results.
    test_results = tpu_estimator.evaluate(
        input_fn=dataset.make_input_fn(
            random_state.randint(2**32),
            num_batches=eval_steps_per_iteration),
        name="test")
    dict_at_iteration = results.namespaced_dict(
        val=validation_results, test=test_results)
    results_dict_of_dicts["step{}".format(steps_so_far)] = dict_at_iteration
    all_dicts.append(dict_at_iteration)

  # Select the best number of steps based on the validation scores and add it
  # as a special key to the dictionary.
  best_index = np.argmax(validation_scores)
  results_dict_of_dicts["best"] = all_dicts[best_index]

  # Save the results. The result dir will contain all the results and config
  # files that we copied along, as we progress in the pipeline. The idea is
  # that these files will be available for analysis at the end.
  if input_dir is not None:
    original_results_dir = os.path.join(input_dir, "results")
  else:
    original_results_dir = None
  results_dict = results.namespaced_dict(**results_dict_of_dicts)
  results_dir = os.path.join(output_dir, "results")
  results_dict["elapsed_time"] = time.time() - experiment_timer
  results.update_result_directory(results_dir, "abstract_reasoning",
                                  results_dict, original_results_dir)


The provided code snippet includes necessary dependencies for implementing the `reason_with_gin` function.

Write a Python function `def reason_with_gin(input_dir, output_dir, overwrite=False, gin_config_files=None, gin_bindings=None)` to solve the following problem:

Trains a model based on the provided gin configuration.

This function will set the provided gin bindings, call the reason() function
and clear the gin config. Please see reason() for required gin bindings.

Args:
  input_dir: String with path to directory where the representation is saved.
  output_dir: String with the path where the evaluation should be saved.
  overwrite: Boolean indicating whether to overwrite output directory.
  gin_config_files: List of gin config files to load.
  gin_bindings: List of gin bindings to use.

Here is the function:

def reason_with_gin(input_dir,
                    output_dir,
                    overwrite=False,
                    gin_config_files=None,
                    gin_bindings=None):
  """Trains a model based on the provided gin configuration.

  This function will set the provided gin bindings, call the reason()
  function and clear the gin config. Please see reason() for required gin
  bindings.

  Args:
    input_dir: String with path to directory where the representation is
      saved.
    output_dir: String with the path where the evaluation should be saved.
    overwrite: Boolean indicating whether to overwrite output directory.
    gin_config_files: List of gin config files to load.
    gin_bindings: List of gin bindings to use.
  """
  if gin_config_files is None:
    gin_config_files = []
  if gin_bindings is None:
    gin_bindings = []
  gin.parse_config_files_and_bindings(gin_config_files, gin_bindings)
  reason(input_dir, output_dir, overwrite)
  gin.clear_config()
168,248
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from disentanglement_lib.data.ground_truth import dsprites
from disentanglement_lib.data.ground_truth import dummy_data
from disentanglement_lib.data.ground_truth import ground_truth_data as gtd
from disentanglement_lib.data.ground_truth import named_data
from disentanglement_lib.data.ground_truth import shapes3d
from disentanglement_lib.evaluation.abstract_reasoning import pgm_utils
from disentanglement_lib.utils import resources
from disentanglement_lib.visualize import visualize_util
import gin
import numpy as np
from PIL import Image
import tensorflow.compat.v1 as tf

QUESTION_MARK = [None]

The provided code snippet includes necessary dependencies for implementing the `question_mark` function.

Write a Python function `def question_mark()` to solve the following problem:

Returns an image of the question mark.

Here is the function:

def question_mark():
  """Returns an image of the question mark."""
  # Cache the image so it is not always reloaded.
  if QUESTION_MARK[0] is None:
    with tf.gfile.Open(
        resources.get_file(
            "google/abstract_reasoning/data/question_mark.png"), "rb") as f:
      QUESTION_MARK[0] = np.array(Image.open(f).convert("RGB")) * 1.0 / 255.
  return QUESTION_MARK[0]
168,249
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from disentanglement_lib.data.ground_truth import dsprites
from disentanglement_lib.data.ground_truth import dummy_data
from disentanglement_lib.data.ground_truth import ground_truth_data as gtd
from disentanglement_lib.data.ground_truth import named_data
from disentanglement_lib.data.ground_truth import shapes3d
from disentanglement_lib.evaluation.abstract_reasoning import pgm_utils
from disentanglement_lib.utils import resources
from disentanglement_lib.visualize import visualize_util
import gin
import numpy as np
from PIL import Image
import tensorflow.compat.v1 as tf

The provided code snippet includes necessary dependencies for implementing the `onehot` function.

Write a Python function `def onehot(indices, num_atoms)` to solve the following problem:

Embeds the indices as one hot vectors.

Here is the function:

def onehot(indices, num_atoms):
  """Embeds the indices as one hot vectors."""
  return np.eye(num_atoms)[indices]
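The one-liner works because indexing the rows of an identity matrix with a list of indices yields the corresponding one-hot vectors; a quick self-contained check:

```python
import numpy as np

def onehot(indices, num_atoms):
    """Embeds the indices as one hot vectors."""
    # Row k of np.eye(num_atoms) is the one-hot vector for index k.
    return np.eye(num_atoms)[indices]

codes = onehot([0, 2], 3)  # rows 0 and 2 of the 3x3 identity
```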
168,250
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from disentanglement_lib.evaluation.abstract_reasoning import relational_layers
import gin
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
from tensorflow.contrib import tpu as contrib_tpu


def get_activation(activation=tf.keras.activations.relu):
  if activation == "lrelu":
    return lambda x: tf.keras.activations.relu(x, alpha=0.2)
  return activation
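The `"lrelu"` branch above returns a leaky ReLU with slope 0.2 on negative inputs; a numpy analogue (not the TF code itself) makes the behavior explicit:

```python
import numpy as np

def lrelu(x, alpha=0.2):
    # numpy analogue of tf.keras.activations.relu(x, alpha=0.2):
    # identity for positive inputs, slope alpha for negative ones.
    return np.where(x > 0, x, alpha * x)

out = lrelu(np.array([-1.0, 0.0, 2.0]))
```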
168,251
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from disentanglement_lib.evaluation.abstract_reasoning import relational_layers
import gin
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
from tensorflow.contrib import tpu as contrib_tpu


def get_kernel_initializer(kernel_initializer="lecun_normal"):
  return kernel_initializer
168,252
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow.compat.v1 as tf

The provided code snippet includes necessary dependencies for implementing the `repeat` function.

Write a Python function `def repeat(tensor, num, axis)` to solve the following problem:

Repeats tensor num times along the specified axis.

Here is the function:

def repeat(tensor, num, axis):
  """Repeats tensor num times along the specified axis."""
  multiples = [1] * tensor.get_shape().ndims
  multiples[axis] = num
  return tf.tile(tensor, multiples)
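The `multiples` construction restricts `tf.tile` to a single axis; the same logic in numpy (an analogue with a hypothetical name, not the TF helper itself):

```python
import numpy as np

def repeat_np(tensor, num, axis):
    # Same multiples trick as the tf.tile helper above: tile factor num on
    # the chosen axis, 1 everywhere else.
    multiples = [1] * tensor.ndim
    multiples[axis] = num
    return np.tile(tensor, multiples)

tiled = repeat_np(np.ones((2, 3)), 4, 1)  # repeat 4 times along axis 1
```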
168,253
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow.compat.v1 as tf

The provided code snippet includes necessary dependencies for implementing the `positional_encoding_like` function.

Write a Python function `def positional_encoding_like(tensor, positional_encoding_axis=-2, value_axis=-1)` to solve the following problem:

Creates positional encoding matching the provided tensor.

Let each slice along the last axis of the tensor be a row. This function
computes the index of each row with respect to the specified
positional_encoding_axis and returns this index using a one-hot embedding.
The resulting tensor has the same shape as the provided tensor except for
the value_axis dimension. That dimension contains a one-hot encoding of the
positional_encoding_axis, i.e., each slice along value_axis and the
positional_encoding_axis corresponds to the identity matrix.

Args:
  tensor: Input tensor.
  positional_encoding_axis: Integer with the axis to encode.
  value_axis: Integer with axis where to one-hot encode the
    positional_encoding_axis.

Returns:
  Positional encoding tensor of the same dtype as tensor.

Here is the function:

def positional_encoding_like(tensor,
                             positional_encoding_axis=-2,
                             value_axis=-1):
  """Creates positional encoding matching the provided tensor.

  Let each slice along the last axis of the tensor be a row. This function
  computes the index of each row with respect to the specified
  positional_encoding_axis and returns this index using a one-hot embedding.
  The resulting tensor has the same shape as the provided tensor except for
  the value_axis dimension. That dimension contains a one-hot encoding of
  the positional_encoding_axis, i.e., each slice along value_axis and the
  positional_encoding_axis corresponds to the identity matrix.

  Args:
    tensor: Input tensor.
    positional_encoding_axis: Integer with the axis to encode.
    value_axis: Integer with axis where to one-hot encode the
      positional_encoding_axis.

  Returns:
    Positional encoding tensor of the same dtype as tensor.
  """
  # First, create identity matrix for one-hot embedding.
  shape = tensor.get_shape()
  num_values = shape.as_list()[positional_encoding_axis]
  result = tf.eye(num_values, dtype=tensor.dtype)
  # Second, reshape to proper dimensionality.
  new_shape = [1] * shape.ndims
  new_shape[positional_encoding_axis] = num_values
  new_shape[value_axis] = num_values
  result = tf.reshape(result, new_shape)
  # Third, broadcast to final shape.
  multiplier = shape.as_list()
  multiplier[positional_encoding_axis] = 1
  multiplier[value_axis] = 1
  if not shape.is_fully_defined():
    dynamic_shape = tf.unstack(tf.shape(tensor))
    multiplier = [
        (dynamic_shape[i] if n is None else n)
        for i, n in enumerate(multiplier)
    ]
  return tf.tile(result, multiplier)
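For a fully defined static shape, the three steps above reduce to reshaping an identity matrix and tiling it across the remaining axes; a numpy analogue (hypothetical helper name, static shapes only):

```python
import numpy as np

def positional_encoding_like_np(tensor, positional_encoding_axis=-2,
                                value_axis=-1):
    num_values = tensor.shape[positional_encoding_axis]
    # One-hot codes for each position along the encoded axis.
    result = np.eye(num_values, dtype=tensor.dtype)
    # Reshape so only the encoded axis and value axis are non-singleton.
    new_shape = [1] * tensor.ndim
    new_shape[positional_encoding_axis] = num_values
    new_shape[value_axis] = num_values
    result = result.reshape(new_shape)
    # Tile to the tensor's shape, except value_axis carries the one-hot dim.
    multiplier = list(tensor.shape)
    multiplier[positional_encoding_axis] = 1
    multiplier[value_axis] = 1
    return np.tile(result, multiplier)

enc = positional_encoding_like_np(np.zeros((5, 4, 7)))
```

Each slice along the batch axis is the 4x4 identity, i.e. row i of every slice is the one-hot code of position i.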
168,254
from __future__ import absolute_import from __future__ import division from __future__ import print_function import inspect import os import time import warnings from disentanglement_lib.data.ground_truth import named_data from disentanglement_lib.evaluation.metrics import beta_vae from disentanglement_lib.evaluation.metrics import dci from disentanglement_lib.evaluation.metrics import downstream_task from disentanglement_lib.evaluation.metrics import factor_vae from disentanglement_lib.evaluation.metrics import fairness from disentanglement_lib.evaluation.metrics import irs from disentanglement_lib.evaluation.metrics import mig from disentanglement_lib.evaluation.metrics import modularity_explicitness from disentanglement_lib.evaluation.metrics import reduced_downstream_task from disentanglement_lib.evaluation.metrics import sap_score from disentanglement_lib.evaluation.metrics import strong_downstream_task from disentanglement_lib.evaluation.metrics import unified_scores from disentanglement_lib.evaluation.metrics import unsupervised_metrics from disentanglement_lib.utils import results import numpy as np import tensorflow.compat.v1 as tf import tensorflow_hub as hub import gin.tf def evaluate(model_dir, output_dir, overwrite=False, evaluation_fn=gin.REQUIRED, random_seed=gin.REQUIRED, name=""): """Loads a representation TFHub module and computes disentanglement metrics. Args: model_dir: String with path to directory where the representation function is saved. output_dir: String with the path where the results should be saved. overwrite: Boolean indicating whether to overwrite output directory. evaluation_fn: Function used to evaluate the representation (see metrics/ for examples). random_seed: Integer with random seed used for training. name: Optional string with name of the metric (can be used to name metrics). """ # Delete the output directory if it already exists. 
if tf.gfile.IsDirectory(output_dir): if overwrite: tf.gfile.DeleteRecursively(output_dir) else: raise ValueError("Directory already exists and overwrite is False.") # Set up timer to keep track of elapsed time in results. experiment_timer = time.time() # Automatically set the proper data set if necessary. We replace the active # gin config as this will lead to a valid gin config file where the data set # is present. if gin.query_parameter("dataset.name") == "auto": # Obtain the dataset name from the gin config of the previous step. gin_config_file = os.path.join(model_dir, "results", "gin", "postprocess.gin") gin_dict = results.gin_dict(gin_config_file) with gin.unlock_config(): gin.bind_parameter("dataset.name", gin_dict["dataset.name"].replace( "'", "")) dataset = named_data.get_named_ground_truth_data() # Path to TFHub module of previously trained representation. module_path = os.path.join(model_dir, "tfhub") with hub.eval_function_for_module(module_path) as f: def _representation_function(x): """Computes representation vector for input images.""" output = f(dict(images=x), signature="representation", as_dict=True) return np.array(output["default"]) # Computes scores of the representation based on the evaluation_fn. if _has_kwarg_or_kwargs(evaluation_fn, "artifact_dir"): artifact_dir = os.path.join(model_dir, "artifacts", name) results_dict = evaluation_fn( dataset, _representation_function, random_state=np.random.RandomState(random_seed), artifact_dir=artifact_dir) else: # Legacy code path to allow for old evaluation metrics. warnings.warn( "Evaluation function does not appear to accept an" " `artifact_dir` argument. This may not be compatible with " "future versions.", DeprecationWarning) results_dict = evaluation_fn( dataset, _representation_function, random_state=np.random.RandomState(random_seed)) # Save the results (and all previous results in the pipeline) on disk. 
original_results_dir = os.path.join(model_dir, "results") results_dir = os.path.join(output_dir, "results") results_dict["elapsed_time"] = time.time() - experiment_timer results.update_result_directory(results_dir, "evaluation", results_dict, original_results_dir) The provided code snippet includes necessary dependencies for implementing the `evaluate_with_gin` function. Write a Python function `def evaluate_with_gin(model_dir, output_dir, overwrite=False, gin_config_files=None, gin_bindings=None)` to solve the following problem: Evaluate a representation based on the provided gin configuration. This function will set the provided gin bindings, call the evaluate() function and clear the gin config. Please see the evaluate() for required gin bindings. Args: model_dir: String with path to directory where the representation is saved. output_dir: String with the path where the evaluation should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. Here is the function: def evaluate_with_gin(model_dir, output_dir, overwrite=False, gin_config_files=None, gin_bindings=None): """Evaluate a representation based on the provided gin configuration. This function will set the provided gin bindings, call the evaluate() function and clear the gin config. Please see the evaluate() for required gin bindings. Args: model_dir: String with path to directory where the representation is saved. output_dir: String with the path where the evaluation should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. """ if gin_config_files is None: gin_config_files = [] if gin_bindings is None: gin_bindings = [] gin.parse_config_files_and_bindings(gin_config_files, gin_bindings) evaluate(model_dir, output_dir, overwrite) gin.clear_config()
Evaluate a representation based on the provided gin configuration. This function will set the provided gin bindings, call the evaluate() function and clear the gin config. Please see the evaluate() for required gin bindings. Args: model_dir: String with path to directory where the representation is saved. output_dir: String with the path where the evaluation should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use.
168,255
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.config import study from disentanglement_lib.utils import resources import disentanglement_lib.utils.hyperparams as h from six.moves import range def get_num_latent(sweep): return h.sweep("encoder.num_latent", h.discrete(sweep))
null
168,256
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.config import study from disentanglement_lib.utils import resources import disentanglement_lib.utils.hyperparams as h from six.moves import range def get_datasets(): """Returns all the data sets.""" return h.sweep( "dataset.name", h.categorical(["shapes3d", "abstract_dsprites"])) def get_seeds(num): """Returns random seeds.""" return h.sweep("model.random_seed", h.categorical(list(range(num)))) def get_default_models(): """Our default set of models (6 models * 6 hyperparameters = 36 models).""" # BetaVAE config. model_name = h.fixed("model.name", "beta_vae") model_fn = h.fixed("model.model", "@vae()") betas = h.sweep("vae.beta", h.discrete([1., 2., 4., 6., 8., 16.])) config_beta_vae = h.zipit([model_name, betas, model_fn]) # AnnealedVAE config. model_name = h.fixed("model.name", "annealed_vae") model_fn = h.fixed("model.model", "@annealed_vae()") iteration_threshold = h.fixed("annealed_vae.iteration_threshold", 100000) c = h.sweep("annealed_vae.c_max", h.discrete([5., 10., 25., 50., 75., 100.])) gamma = h.fixed("annealed_vae.gamma", 1000) config_annealed_beta_vae = h.zipit( [model_name, c, iteration_threshold, gamma, model_fn]) # FactorVAE config. model_name = h.fixed("model.name", "factor_vae") model_fn = h.fixed("model.model", "@factor_vae()") discr_fn = h.fixed("discriminator.discriminator_fn", "@fc_discriminator") gammas = h.sweep("factor_vae.gamma", h.discrete([10., 20., 30., 40., 50., 100.])) config_factor_vae = h.zipit([model_name, gammas, model_fn, discr_fn]) # DIP-VAE-I config. model_name = h.fixed("model.name", "dip_vae_i") model_fn = h.fixed("model.model", "@dip_vae()") lambda_od = h.sweep("dip_vae.lambda_od", h.discrete([1., 2., 5., 10., 20., 50.])) lambda_d_factor = h.fixed("dip_vae.lambda_d_factor", 10.) 
dip_type = h.fixed("dip_vae.dip_type", "i") config_dip_vae_i = h.zipit( [model_name, model_fn, lambda_od, lambda_d_factor, dip_type]) # DIP-VAE-II config. model_name = h.fixed("model.name", "dip_vae_ii") model_fn = h.fixed("model.model", "@dip_vae()") lambda_od = h.sweep("dip_vae.lambda_od", h.discrete([1., 2., 5., 10., 20., 50.])) lambda_d_factor = h.fixed("dip_vae.lambda_d_factor", 1.) dip_type = h.fixed("dip_vae.dip_type", "ii") config_dip_vae_ii = h.zipit( [model_name, model_fn, lambda_od, lambda_d_factor, dip_type]) # BetaTCVAE config. model_name = h.fixed("model.name", "beta_tc_vae") model_fn = h.fixed("model.model", "@beta_tc_vae()") betas = h.sweep("beta_tc_vae.beta", h.discrete([1., 2., 4., 6., 8., 10.])) config_beta_tc_vae = h.zipit([model_name, model_fn, betas]) all_models = h.chainit([ config_beta_vae, config_factor_vae, config_dip_vae_i, config_dip_vae_ii, config_beta_tc_vae, config_annealed_beta_vae ]) return all_models The provided code snippet includes necessary dependencies for implementing the `get_config` function. Write a Python function `def get_config()` to solve the following problem: Returns the hyperparameter configs for different experiments. Here is the function: def get_config(): """Returns the hyperparameter configs for different experiments.""" arch_enc = h.fixed("encoder.encoder_fn", "@conv_encoder", length=1) arch_dec = h.fixed("decoder.decoder_fn", "@deconv_decoder", length=1) architecture = h.zipit([arch_enc, arch_dec]) return h.product([ get_datasets(), architecture, get_default_models(), get_seeds(5), ])
Returns the hyperparameter configs for different experiments.
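The `h.fixed` / `h.sweep` / `h.zipit` / `h.product` combinators used above compose lists of parameter bindings: `zipit` merges sweeps elementwise (broadcasting length-1 entries), `chainit` concatenates, and `product` takes the cartesian product. A minimal reimplementation sketch of these semantics as used here (the real `disentanglement_lib.utils.hyperparams` API differs in details, e.g. `fixed` also takes an optional `length` argument):

```python
import itertools

def fixed(name, value):
    return [{name: value}]

def sweep(name, values):
    return [{name: v} for v in values]

def zipit(sweeps):
    # Broadcast length-1 sweeps to the longest one, then merge elementwise.
    n = max(len(s) for s in sweeps)
    rows = [s * n if len(s) == 1 else s for s in sweeps]
    return [dict(kv for d in combo for kv in d.items()) for combo in zip(*rows)]

def chainit(sweeps):
    return list(itertools.chain(*sweeps))

def product(sweeps):
    # Cartesian product over sweeps, merging one dict from each.
    return [dict(kv for d in combo for kv in d.items())
            for combo in itertools.product(*sweeps)]
```

With this sketch, zipping a fixed model name with a six-value beta sweep yields six BetaVAE configs, and the product over datasets, architecture, models, and seeds yields the full study grid.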
168,258
from __future__ import absolute_import from __future__ import division from __future__ import print_function from disentanglement_lib.config import study from disentanglement_lib.utils import resources import disentanglement_lib.utils.hyperparams as h from six.moves import range def get_datasets(): """Returns all the data sets.""" return h.sweep( "dataset.name", h.categorical([ "dsprites_full", "color_dsprites", "noisy_dsprites", "scream_dsprites", "smallnorb", "cars3d", "shapes3d" ])) def get_seeds(num): """Returns random seeds.""" return h.sweep("model.random_seed", h.categorical(list(range(num)))) def get_default_models(): """Our default set of models (6 models * 6 hyperparameters = 36 models).""" # BetaVAE config. model_name = h.fixed("model.name", "beta_vae") model_fn = h.fixed("model.model", "@vae()") betas = h.sweep("vae.beta", h.discrete([1., 2., 4., 6., 8., 16.])) config_beta_vae = h.zipit([model_name, betas, model_fn]) # AnnealedVAE config. model_name = h.fixed("model.name", "annealed_vae") model_fn = h.fixed("model.model", "@annealed_vae()") iteration_threshold = h.fixed("annealed_vae.iteration_threshold", 100000) c = h.sweep("annealed_vae.c_max", h.discrete([5., 10., 25., 50., 75., 100.])) gamma = h.fixed("annealed_vae.gamma", 1000) config_annealed_beta_vae = h.zipit( [model_name, c, iteration_threshold, gamma, model_fn]) # FactorVAE config. model_name = h.fixed("model.name", "factor_vae") model_fn = h.fixed("model.model", "@factor_vae()") discr_fn = h.fixed("discriminator.discriminator_fn", "@fc_discriminator") gammas = h.sweep("factor_vae.gamma", h.discrete([10., 20., 30., 40., 50., 100.])) config_factor_vae = h.zipit([model_name, gammas, model_fn, discr_fn]) # DIP-VAE-I config. model_name = h.fixed("model.name", "dip_vae_i") model_fn = h.fixed("model.model", "@dip_vae()") lambda_od = h.sweep("dip_vae.lambda_od", h.discrete([1., 2., 5., 10., 20., 50.])) lambda_d_factor = h.fixed("dip_vae.lambda_d_factor", 10.) 
dip_type = h.fixed("dip_vae.dip_type", "i") config_dip_vae_i = h.zipit( [model_name, model_fn, lambda_od, lambda_d_factor, dip_type]) # DIP-VAE-II config. model_name = h.fixed("model.name", "dip_vae_ii") model_fn = h.fixed("model.model", "@dip_vae()") lambda_od = h.sweep("dip_vae.lambda_od", h.discrete([1., 2., 5., 10., 20., 50.])) lambda_d_factor = h.fixed("dip_vae.lambda_d_factor", 1.) dip_type = h.fixed("dip_vae.dip_type", "ii") config_dip_vae_ii = h.zipit( [model_name, model_fn, lambda_od, lambda_d_factor, dip_type]) # BetaTCVAE config. model_name = h.fixed("model.name", "beta_tc_vae") model_fn = h.fixed("model.model", "@beta_tc_vae()") betas = h.sweep("beta_tc_vae.beta", h.discrete([1., 2., 4., 6., 8., 10.])) config_beta_tc_vae = h.zipit([model_name, model_fn, betas]) all_models = h.chainit([ config_beta_vae, config_factor_vae, config_dip_vae_i, config_dip_vae_ii, config_beta_tc_vae, config_annealed_beta_vae ]) return all_models The provided code snippet includes necessary dependencies for implementing the `get_config` function. Write a Python function `def get_config()` to solve the following problem: Returns the hyperparameter configs for different experiments. Here is the function: def get_config(): """Returns the hyperparameter configs for different experiments.""" arch_enc = h.fixed("encoder.encoder_fn", "@conv_encoder", length=1) arch_dec = h.fixed("decoder.decoder_fn", "@deconv_decoder", length=1) architecture = h.zipit([arch_enc, arch_dec]) return h.product([ get_datasets(), architecture, get_default_models(), get_seeds(50), ])
Returns the hyperparameter configs for different experiments.
168,261
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import time from disentanglement_lib.data.ground_truth import named_data from disentanglement_lib.postprocessing import methods from disentanglement_lib.utils import convolute_hub from disentanglement_lib.utils import results import numpy as np import tensorflow.compat.v1 as tf import tensorflow_hub as hub import gin.tf def postprocess(model_dir, output_dir, overwrite=False, postprocess_fn=gin.REQUIRED, random_seed=gin.REQUIRED, name=""): """Loads a trained Gaussian encoder and extracts representation. Args: model_dir: String with path to directory where the model is saved. output_dir: String with the path where the representation should be saved. overwrite: Boolean indicating whether to overwrite output directory. postprocess_fn: Function used to extract the representation (see methods.py for examples). random_seed: Integer with random seed used for postprocessing (may be unused). name: Optional string with name of the representation (can be used to name representations). """ # We do not use the variable 'name'. Instead, it can be used to name # representations as it will be part of the saved gin config. del name # Delete the output directory if it already exists. if tf.gfile.IsDirectory(output_dir): if overwrite: tf.gfile.DeleteRecursively(output_dir) else: raise ValueError("Directory already exists and overwrite is False.") # Set up timer to keep track of elapsed time in results. experiment_timer = time.time() # Automatically set the proper data set if necessary. We replace the active # gin config as this will lead to a valid gin config file where the data set # is present. if gin.query_parameter("dataset.name") == "auto": # Obtain the dataset name from the gin config of the previous step. 
gin_config_file = os.path.join(model_dir, "results", "gin", "train.gin") gin_dict = results.gin_dict(gin_config_file) with gin.unlock_config(): gin.bind_parameter("dataset.name", gin_dict["dataset.name"].replace( "'", "")) dataset = named_data.get_named_ground_truth_data() # Path to TFHub module of previously trained model. module_path = os.path.join(model_dir, "tfhub") with hub.eval_function_for_module(module_path) as f: def _gaussian_encoder(x): """Encodes images using trained model.""" # Push images through the TFHub module. output = f(dict(images=x), signature="gaussian_encoder", as_dict=True) # Convert to numpy arrays and return. return {key: np.array(values) for key, values in output.items()} # Run the postprocessing function which returns a transformation function # that can be used to create the representation from the mean and log # variance of the Gaussian distribution given by the encoder. Also returns # path to a checkpoint if the transformation requires variables. transform_fn, transform_checkpoint_path = postprocess_fn( dataset, _gaussian_encoder, np.random.RandomState(random_seed), output_dir) # Takes the "gaussian_encoder" signature, extracts the representation and # then saves under the signature "representation". tfhub_module_dir = os.path.join(output_dir, "tfhub") convolute_hub.convolute_and_save( module_path, "gaussian_encoder", tfhub_module_dir, transform_fn, transform_checkpoint_path, "representation") # We first copy over all the prior results and configs. original_results_dir = os.path.join(model_dir, "results") results_dir = os.path.join(output_dir, "results") results_dict = dict(elapsed_time=time.time() - experiment_timer) results.update_result_directory(results_dir, "postprocess", results_dict, original_results_dir) The provided code snippet includes necessary dependencies for implementing the `postprocess_with_gin` function. 
Write a Python function `def postprocess_with_gin(model_dir, output_dir, overwrite=False, gin_config_files=None, gin_bindings=None)` to solve the following problem: Postprocess a trained model based on the provided gin configuration. This function will set the provided gin bindings, call the postprocess() function and clear the gin config. Please see the postprocess() for required gin bindings. Args: model_dir: String with path to directory where the model is saved. output_dir: String with the path where the representation should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. Here is the function: def postprocess_with_gin(model_dir, output_dir, overwrite=False, gin_config_files=None, gin_bindings=None): """Postprocess a trained model based on the provided gin configuration. This function will set the provided gin bindings, call the postprocess() function and clear the gin config. Please see the postprocess() for required gin bindings. Args: model_dir: String with path to directory where the model is saved. output_dir: String with the path where the representation should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use. """ if gin_config_files is None: gin_config_files = [] if gin_bindings is None: gin_bindings = [] gin.parse_config_files_and_bindings(gin_config_files, gin_bindings) postprocess(model_dir, output_dir, overwrite) gin.clear_config()
Postprocess a trained model based on the provided gin configuration. This function will set the provided gin bindings, call the postprocess() function and clear the gin config. Please see the postprocess() for required gin bindings. Args: model_dir: String with path to directory where the model is saved. output_dir: String with the path where the representation should be saved. overwrite: Boolean indicating whether to overwrite output directory. gin_config_files: List of gin config files to load. gin_bindings: List of gin bindings to use.
168,262
from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow.compat.v1 as tf import gin.tf The provided code snippet includes necessary dependencies for implementing the `mean_representation` function. Write a Python function `def mean_representation( ground_truth_data, gaussian_encoder, random_state, save_path, )` to solve the following problem: Extracts the mean representation from a Gaussian encoder. Args: ground_truth_data: GroundTruthData to be sampled from. gaussian_encoder: Function that takes observations as input and outputs a dictionary with mean and log variances of the encodings in the keys "mean" and "logvar" respectively. random_state: Numpy random state used for randomness. save_path: String with path where results can be saved. Returns: transform_fn: Function that takes as keyword arguments the "mean" and "logvar" tensors and returns a tensor with the representation. None as no variables are saved. Here is the function: def mean_representation( ground_truth_data, gaussian_encoder, random_state, save_path, ): """Extracts the mean representation from a Gaussian encoder. Args: ground_truth_data: GroundTruthData to be sampled from. gaussian_encoder: Function that takes observations as input and outputs a dictionary with mean and log variances of the encodings in the keys "mean" and "logvar" respectively. random_state: Numpy random state used for randomness. save_path: String with path where results can be saved. Returns: transform_fn: Function that takes as keyword arguments the "mean" and "logvar" tensors and returns a tensor with the representation. None as no variables are saved. """ del ground_truth_data, gaussian_encoder, random_state, save_path def transform_fn(mean, logvar): del logvar return mean return transform_fn, None
Extracts the mean representation from a Gaussian encoder. Args: ground_truth_data: GroundTruthData to be sampled from. gaussian_encoder: Function that takes observations as input and outputs a dictionary with mean and log variances of the encodings in the keys "mean" and "logvar" respectively. random_state: Numpy random state used for randomness. save_path: String with path where results can be saved. Returns: transform_fn: Function that takes as keyword arguments the "mean" and "logvar" tensors and returns a tensor with the representation. None as no variables are saved.
168,263
from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow.compat.v1 as tf import gin.tf The provided code snippet includes necessary dependencies for implementing the `sampled_representation` function. Write a Python function `def sampled_representation(ground_truth_data, gaussian_encoder, random_state, save_path)` to solve the following problem: Extracts the random representation from a Gaussian encoder. Args: ground_truth_data: GroundTruthData to be sampled from. gaussian_encoder: Function that takes observations as input and outputs a dictionary with mean and log variances of the encodings in the keys "mean" and "logvar" respectively. random_state: Numpy random state used for randomness. save_path: String with path where results can be saved. Returns: transform_fn: Function that takes as keyword arguments the "mean" and "logvar" tensors and returns a tensor with the representation. None as no variables are saved. Here is the function: def sampled_representation(ground_truth_data, gaussian_encoder, random_state, save_path): """Extracts the random representation from a Gaussian encoder. Args: ground_truth_data: GroundTruthData to be sampled from. gaussian_encoder: Function that takes observations as input and outputs a dictionary with mean and log variances of the encodings in the keys "mean" and "logvar" respectively. random_state: Numpy random state used for randomness. save_path: String with path where results can be saved. Returns: transform_fn: Function that takes as keyword arguments the "mean" and "logvar" tensors and returns a tensor with the representation. None as no variables are saved. """ del ground_truth_data, gaussian_encoder, random_state, save_path def transform_fn(mean, logvar): return tf.add(mean, tf.exp(logvar / 2) * tf.random_normal(tf.shape(mean), 0, 1)) return transform_fn, None
Extracts the random representation from a Gaussian encoder. Args: ground_truth_data: GroundTruthData to be sampled from. gaussian_encoder: Function that takes observations as input and outputs a dictionary with mean and log variances of the encodings in the keys "mean" and "logvar" respectively. random_state: Numpy random state used for randomness. save_path: String with path where results can be saved. Returns: transform_fn: Function that takes as keyword arguments the "mean" and "logvar" tensors and returns a tensor with the representation. None as no variables are saved.
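The transform above is the standard reparameterization trick: z = mu + sigma * eps with sigma = exp(logvar / 2) and eps ~ N(0, 1). A minimal pure-Python sketch of the same sampling rule (`sample_from_gaussian` is an illustrative name, not from the library):

```python
import math
import random

def sample_from_gaussian(mean, logvar, rng=random):
    """Reparameterized sample z = mu + exp(logvar / 2) * eps, eps ~ N(0, 1)."""
    return [m + math.exp(lv / 2.0) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mean, logvar)]
```

Note that `logvar / 2` inside the exponential turns a log-variance into a standard deviation, since exp(log(sigma^2) / 2) = sigma.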
168,264
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numbers import os from disentanglement_lib.data.ground_truth import named_data from disentanglement_lib.utils import results from disentanglement_lib.visualize import visualize_util from disentanglement_lib.visualize.visualize_irs import vis_all_interventional_effects import numpy as np from scipy import stats from six.moves import range import tensorflow.compat.v1 as tf from tensorflow.compat.v1 import gfile import tensorflow_hub as hub import gin.tf def latent_traversal_1d_multi_dim(generator_fn, latent_vector, dimensions=None, values=None, transpose=False): """Creates latent traversals for a latent vector along multiple dimensions. Creates a 2d grid image where each grid image is generated by passing a modified version of latent_vector to the generator_fn. In each column, a fixed dimension of latent_vector is modified. In each row, the value in the modified dimension is replaced by a fixed value. Args: generator_fn: Function that computes (fixed size) images from latent representation. It should accept a single Numpy array argument of the same shape as latent_vector and return a Numpy array of images where the first dimension corresponds to the different vectors in latent_vectors. latent_vector: 1d Numpy array with the base latent vector to be used. dimensions: 1d Numpy array with the indices of the dimensions that should be modified. If an integer is passed, the dimensions 0, 1, ..., (dimensions - 1) are modified. If None is passed, all dimensions of latent_vector are modified. values: 1d Numpy array with the latent space values that should be used for modifications. If an integer is passed, a linear grid between -1 and 1 with that many points is constructed. If None is passed, a default grid is used (whose specific design is not guaranteed). transpose: Boolean which indicates whether rows and columns of the 2d grid should be transposed. 
Returns: Numpy array with image. """ if latent_vector.ndim != 1: raise ValueError("Latent vector needs to be 1-dimensional.") if dimensions is None: # Default case, use all available dimensions. dimensions = np.arange(latent_vector.shape[0]) elif isinstance(dimensions, numbers.Integral): # Check that there are enough dimensions in latent_vector. if dimensions > latent_vector.shape[0]: raise ValueError("The number of dimensions of latent_vector is less than" " the number of dimensions requested in the arguments.") if dimensions < 1: raise ValueError("The number of dimensions has to be at least 1.") dimensions = np.arange(dimensions) if dimensions.ndim != 1: raise ValueError("Dimensions vector needs to be 1-dimensional.") if values is None: # Default grid of values. values = np.linspace(-1., 1., num=11) elif isinstance(values, numbers.Integral): if values <= 1: raise ValueError("If an int is passed for values, it has to be >1.") values = np.linspace(-1., 1., num=values) if values.ndim != 1: raise ValueError("Values vector needs to be 1-dimensional.") # We iteratively generate the rows/columns for each dimension as different # Numpy arrays. We do not preallocate a single final Numpy array as this code # is not performance critical and as it reduces code complexity. num_values = len(values) row_or_columns = [] for dimension in dimensions: # Creates num_values copies of latent_vector along the first axis. latent_traversal_vectors = np.tile(latent_vector, [num_values, 1]) # Intervenes in the latent space. latent_traversal_vectors[:, dimension] = values # Generates the batch of images. images = generator_fn(latent_traversal_vectors) # Adds images as a row or column depending on whether transpose is True. axis = (1 if transpose else 0) row_or_columns.append(np.concatenate(images, axis)) axis = (0 if transpose else 1) return np.concatenate(row_or_columns, axis) def sigmoid(x): return stats.logistic.cdf(x) def tanh(x): return np.tanh(x) / 2. 
+ .5 def vis_all_interventional_effects(gen_factors, latents, output_dir): """Compute Matrix of all interventional effects.""" res = scalable_disentanglement_score(gen_factors, latents) parents = res["parents"] scores = res["disentanglement_scores"] fig_width_inches = 3.0 * gen_factors.shape[1] fig_height_inches = 3.0 * latents.shape[1] fig, axes = plt.subplots( latents.shape[1], gen_factors.shape[1], figsize=(fig_width_inches, fig_height_inches), sharex="col", sharey="row") for j in range(gen_factors.shape[1]): # Iterate over generative factors. for l in range(latents.shape[1]): ax = axes[l, j] if parents[l] != j: _visualize_interventional_effect( gen_factors, latents, l, parents[l], j, ax=ax, plot_legend=False) ax.set_title("") else: _visualize_interventional_effect( gen_factors, latents, l, parents[l], j, no_conditioning=True, ax=ax) ax.set_title("Parent={}, IRS = {:1.2}".format(parents[l], scores[l])) fig.tight_layout() if not tf.gfile.IsDirectory(output_dir): tf.gfile.MakeDirs(output_dir) output_path = os.path.join(output_dir, "interventional_effect.png") with tf.gfile.Open(output_path, "wb") as path: fig.savefig(path) The provided code snippet includes necessary dependencies for implementing the `visualize` function. Write a Python function `def visualize(model_dir, output_dir, overwrite=False, num_animations=5, num_frames=20, fps=10, num_points_irs=10000)` to solve the following problem: Takes trained model from model_dir and visualizes it in output_dir. Args: model_dir: Path to directory where the trained model is saved. output_dir: Path to output directory. overwrite: Boolean indicating whether to overwrite output directory. num_animations: Integer with number of distinct animations to create. num_frames: Integer with number of frames in each animation. fps: Integer with frame rate for the animation. num_points_irs: Number of points to be used for the IRS plots. 
Here is the function: def visualize(model_dir, output_dir, overwrite=False, num_animations=5, num_frames=20, fps=10, num_points_irs=10000): """Takes trained model from model_dir and visualizes it in output_dir. Args: model_dir: Path to directory where the trained model is saved. output_dir: Path to output directory. overwrite: Boolean indicating whether to overwrite output directory. num_animations: Integer with number of distinct animations to create. num_frames: Integer with number of frames in each animation. fps: Integer with frame rate for the animation. num_points_irs: Number of points to be used for the IRS plots. """ # Fix the random seed for reproducibility. random_state = np.random.RandomState(0) # Create the output directory if necessary. if tf.gfile.IsDirectory(output_dir): if overwrite: tf.gfile.DeleteRecursively(output_dir) else: raise ValueError("Directory already exists and overwrite is False.") # Automatically set the proper data set if necessary. We replace the active # gin config as this will lead to a valid gin config file where the data set # is present. # Obtain the dataset name from the gin config of the previous step. gin_config_file = os.path.join(model_dir, "results", "gin", "train.gin") gin_dict = results.gin_dict(gin_config_file) gin.bind_parameter("dataset.name", gin_dict["dataset.name"].replace( "'", "")) # Automatically infer the activation function from gin config. activation_str = gin_dict["reconstruction_loss.activation"] if activation_str == "'logits'": activation = sigmoid elif activation_str == "'tanh'": activation = tanh else: raise ValueError( "Activation function could not be inferred from gin config.") dataset = named_data.get_named_ground_truth_data() num_pics = 64 module_path = os.path.join(model_dir, "tfhub") with hub.eval_function_for_module(module_path) as f: # Save reconstructions. 
real_pics = dataset.sample_observations(num_pics, random_state) raw_pics = f( dict(images=real_pics), signature="reconstructions", as_dict=True)["images"] pics = activation(raw_pics) paired_pics = np.concatenate((real_pics, pics), axis=2) paired_pics = [paired_pics[i, :, :, :] for i in range(paired_pics.shape[0])] results_dir = os.path.join(output_dir, "reconstructions") if not gfile.IsDirectory(results_dir): gfile.MakeDirs(results_dir) visualize_util.grid_save_images( paired_pics, os.path.join(results_dir, "reconstructions.jpg")) # Save samples. def _decoder(latent_vectors): return f( dict(latent_vectors=latent_vectors), signature="decoder", as_dict=True)["images"] num_latent = int(gin_dict["encoder.num_latent"]) num_pics = 64 random_codes = random_state.normal(0, 1, [num_pics, num_latent]) pics = activation(_decoder(random_codes)) results_dir = os.path.join(output_dir, "sampled") if not gfile.IsDirectory(results_dir): gfile.MakeDirs(results_dir) visualize_util.grid_save_images(pics, os.path.join(results_dir, "samples.jpg")) # Save latent traversals. result = f( dict(images=dataset.sample_observations(num_pics, random_state)), signature="gaussian_encoder", as_dict=True) means = result["mean"] logvars = result["logvar"] results_dir = os.path.join(output_dir, "traversals") if not gfile.IsDirectory(results_dir): gfile.MakeDirs(results_dir) for i in range(means.shape[1]): pics = activation( latent_traversal_1d_multi_dim(_decoder, means[i, :], None)) file_name = os.path.join(results_dir, "traversals{}.jpg".format(i)) visualize_util.grid_save_images([pics], file_name) # Save the latent traversal animations. results_dir = os.path.join(output_dir, "animated_traversals") if not gfile.IsDirectory(results_dir): gfile.MakeDirs(results_dir) # Cycle through quantiles of a standard Gaussian. 
for i, base_code in enumerate(means[:num_animations]): images = [] for j in range(base_code.shape[0]): code = np.repeat(np.expand_dims(base_code, 0), num_frames, axis=0) code[:, j] = visualize_util.cycle_gaussian(base_code[j], num_frames) images.append(np.array(activation(_decoder(code)))) filename = os.path.join(results_dir, "std_gaussian_cycle%d.gif" % i) visualize_util.save_animation(np.array(images), filename, fps) # Cycle through quantiles of a fitted Gaussian. for i, base_code in enumerate(means[:num_animations]): images = [] for j in range(base_code.shape[0]): code = np.repeat(np.expand_dims(base_code, 0), num_frames, axis=0) loc = np.mean(means[:, j]) total_variance = np.mean(np.exp(logvars[:, j])) + np.var(means[:, j]) code[:, j] = visualize_util.cycle_gaussian( base_code[j], num_frames, loc=loc, scale=np.sqrt(total_variance)) images.append(np.array(activation(_decoder(code)))) filename = os.path.join(results_dir, "fitted_gaussian_cycle%d.gif" % i) visualize_util.save_animation(np.array(images), filename, fps) # Cycle through [-2, 2] interval. for i, base_code in enumerate(means[:num_animations]): images = [] for j in range(base_code.shape[0]): code = np.repeat(np.expand_dims(base_code, 0), num_frames, axis=0) code[:, j] = visualize_util.cycle_interval(base_code[j], num_frames, -2., 2.) images.append(np.array(activation(_decoder(code)))) filename = os.path.join(results_dir, "fixed_interval_cycle%d.gif" % i) visualize_util.save_animation(np.array(images), filename, fps) # Cycle linearly through +-2 std dev of a fitted Gaussian. 
for i, base_code in enumerate(means[:num_animations]): images = [] for j in range(base_code.shape[0]): code = np.repeat(np.expand_dims(base_code, 0), num_frames, axis=0) loc = np.mean(means[:, j]) total_variance = np.mean(np.exp(logvars[:, j])) + np.var(means[:, j]) scale = np.sqrt(total_variance) code[:, j] = visualize_util.cycle_interval(base_code[j], num_frames, loc-2.*scale, loc+2.*scale) images.append(np.array(activation(_decoder(code)))) filename = os.path.join(results_dir, "conf_interval_cycle%d.gif" % i) visualize_util.save_animation(np.array(images), filename, fps) # Cycle linearly through minmax of a fitted Gaussian. for i, base_code in enumerate(means[:num_animations]): images = [] for j in range(base_code.shape[0]): code = np.repeat(np.expand_dims(base_code, 0), num_frames, axis=0) code[:, j] = visualize_util.cycle_interval(base_code[j], num_frames, np.min(means[:, j]), np.max(means[:, j])) images.append(np.array(activation(_decoder(code)))) filename = os.path.join(results_dir, "minmax_interval_cycle%d.gif" % i) visualize_util.save_animation(np.array(images), filename, fps) # Interventional effects visualization. factors = dataset.sample_factors(num_points_irs, random_state) obs = dataset.sample_observations_from_factors(factors, random_state) latents = f( dict(images=obs), signature="gaussian_encoder", as_dict=True)["mean"] results_dir = os.path.join(output_dir, "interventional_effects") vis_all_interventional_effects(factors, latents, results_dir) # Finally, we clear the gin config that we have set. gin.clear_config()
Takes trained model from model_dir and visualizes it in output_dir. Args: model_dir: Path to directory where the trained model is saved. output_dir: Path to output directory. overwrite: Boolean indicating whether to overwrite output directory. num_animations: Integer with number of distinct animations to create. num_frames: Integer with number of frames in each animation. fps: Integer with frame rate for the animation. num_points_irs: Number of points to be used for the IRS plots.
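The traversal animations above repeatedly sweep one latent dimension through an interval while holding the others fixed. `visualize_util.cycle_interval` itself is not shown in this snippet; the sketch below is a hypothetical plain-Python triangle-wave version of such a cycle (the actual disentanglement_lib implementation may differ):

```python
def cycle_interval(start, num_frames, low, high):
    """Walk a triangle wave through [low, high] over num_frames steps,
    placing `start` on the rising half so the first frame equals it."""
    span = high - low
    phase0 = (start - low) / span / 2.0  # map `start` onto the rising phase
    values = []
    for i in range(num_frames):
        phase = (phase0 + i / num_frames) % 1.0
        tri = 2 * phase if phase <= 0.5 else 2 * (1 - phase)  # up, then down
        values.append(low + span * tri)
    return values

frames = cycle_interval(-2.0, 8, -2.0, 2.0)
print(frames)  # [-2.0, -1.0, 0.0, 1.0, 2.0, 1.0, 0.0, -1.0]
```

Writing each frame value into one latent coordinate (`code[:, j]`) before decoding yields one animation frame per value.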
168,265
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from disentanglement_lib.data.ground_truth import named_data from disentanglement_lib.visualize import visualize_util import numpy as np from six.moves import range from tensorflow.compat.v1 import gfile The provided code snippet includes necessary dependencies for implementing the `visualize_dataset` function. Write a Python function `def visualize_dataset(dataset_name, output_path, num_animations=5, num_frames=20, fps=10)` to solve the following problem: Visualizes the data set by saving images to output_path. For each latent factor, outputs 16 images where only that latent factor is varied while all others are kept constant. Args: dataset_name: String with name of dataset as defined in named_data.py. output_path: String with path in which to create the visualizations. num_animations: Integer with number of distinct animations to create. num_frames: Integer with number of frames in each animation. fps: Integer with frame rate for the animation. Here is the function: def visualize_dataset(dataset_name, output_path, num_animations=5, num_frames=20, fps=10): """Visualizes the data set by saving images to output_path. For each latent factor, outputs 16 images where only that latent factor is varied while all others are kept constant. Args: dataset_name: String with name of dataset as defined in named_data.py. output_path: String with path in which to create the visualizations. num_animations: Integer with number of distinct animations to create. num_frames: Integer with number of frames in each animation. fps: Integer with frame rate for the animation. """ data = named_data.get_named_ground_truth_data(dataset_name) random_state = np.random.RandomState(0) # Create output folder if necessary. path = os.path.join(output_path, dataset_name) if not gfile.IsDirectory(path): gfile.MakeDirs(path) # Create still images. 
for i in range(data.num_factors): factors = data.sample_factors(16, random_state) indices = [j for j in range(data.num_factors) if i != j] factors[:, indices] = factors[0, indices] images = data.sample_observations_from_factors(factors, random_state) visualize_util.grid_save_images( images, os.path.join(path, "variations_of_factor%s.png" % i)) # Create animations. for i in range(num_animations): base_factor = data.sample_factors(1, random_state) images = [] for j, num_atoms in enumerate(data.factors_num_values): factors = np.repeat(base_factor, num_frames, axis=0) factors[:, j] = visualize_util.cycle_factor(base_factor[0, j], num_atoms, num_frames) images.append(data.sample_observations_from_factors(factors, random_state)) visualize_util.save_animation(np.array(images), os.path.join(path, "animation%d.gif" % i), fps)
Visualizes the data set by saving images to output_path. For each latent factor, outputs 16 images where only that latent factor is varied while all others are kept constant. Args: dataset_name: String with name of dataset as defined in named_data.py. output_path: String with path in which to create the visualizations. num_animations: Integer with number of distinct animations to create. num_frames: Integer with number of frames in each animation. fps: Integer with frame rate for the animation.
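The still-image loop above pins every factor except one to the values of the first sample. Without numpy, that masking step can be sketched as follows (`vary_single_factor` is a hypothetical name; the library does this inline with fancy indexing):

```python
def vary_single_factor(factors, varied_index):
    """Hold every factor at row 0's value except `varied_index`, mirroring
    the `factors[:, indices] = factors[0, indices]` step in the loop above."""
    return [
        [row[j] if j == varied_index else factors[0][j] for j in range(len(row))]
        for row in factors
    ]

grid = vary_single_factor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], varied_index=1)
print(grid)  # [[1, 2, 3], [1, 5, 3], [1, 8, 3]]
```

Rendering observations from `grid` then shows only the effect of factor 1.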
168,266
from __future__ import absolute_import from __future__ import division from __future__ import print_function import multiprocessing from absl import logging import pandas as pd import simplejson as json from tensorflow.compat.v1 import gfile def _get(pattern): files = gfile.Glob(pattern) pool = multiprocessing.Pool() all_results = pool.map(_load, files) return pd.DataFrame(all_results) The provided code snippet includes necessary dependencies for implementing the `aggregate_results_to_json` function. Write a Python function `def aggregate_results_to_json(result_file_pattern, output_path)` to solve the following problem: Aggregates all the results files in the pattern into a single JSON file. Args: result_file_pattern: String with glob pattern to all the result files that should be aggregated (e.g. /tmp/*/results/aggregate/evaluation.json). output_path: String with path to output json file (e.g. /tmp/results.json). Here is the function: def aggregate_results_to_json(result_file_pattern, output_path): """Aggregates all the results files in the pattern into a single JSON file. Args: result_file_pattern: String with glob pattern to all the result files that should be aggregated (e.g. /tmp/*/results/aggregate/evaluation.json). output_path: String with path to output json file (e.g. /tmp/results.json). """ logging.info("Loading the results.") model_results = _get(result_file_pattern) logging.info("Saving the aggregated results.") with gfile.Open(output_path, "w") as f: model_results.to_json(path_or_buf=f)
Aggregates all the results files in the pattern into a single JSON file. Args: result_file_pattern: String with glob pattern to all the result files that should be aggregated (e.g. /tmp/*/results/aggregate/evaluation.json). output_path: String with path to output json file (e.g. /tmp/results.json).
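Under the hood this globs the per-model result files and concatenates them with pandas. A dependency-free sketch of the same aggregation, with hypothetical helper and file names:

```python
import glob
import json
import os
import tempfile

def aggregate_json_files(pattern, output_path):
    """Load every JSON results file matching `pattern` and write one combined
    JSON list. A pandas-free sketch of the aggregation step above."""
    records = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            records.append(json.load(f))
    with open(output_path, "w") as f:
        json.dump(records, f)
    return records

# Tiny demo: two fake per-model result files aggregated into one JSON list.
workdir = tempfile.mkdtemp()
for i in range(2):
    with open(os.path.join(workdir, "model%d.json" % i), "w") as f:
        json.dump({"model": i, "score": 0.5 * i}, f)
aggregated = aggregate_json_files(os.path.join(workdir, "model*.json"),
                                  os.path.join(workdir, "results.json"))
print(aggregated)  # [{'model': 0, 'score': 0.0}, {'model': 1, 'score': 0.5}]
```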
168,267
from __future__ import absolute_import from __future__ import division from __future__ import print_function import multiprocessing from absl import logging import pandas as pd import simplejson as json from tensorflow.compat.v1 import gfile The provided code snippet includes necessary dependencies for implementing the `load_aggregated_json_results` function. Write a Python function `def load_aggregated_json_results(source_path)` to solve the following problem: Convenience function to load aggregated results from JSON file. Args: source_path: String with path to aggregated json file (e.g. /tmp/results.json). Returns: pd.DataFrame with aggregated results. Here is the function: def load_aggregated_json_results(source_path): """Convenience function to load aggregated results from JSON file. Args: source_path: String with path to aggregated json file (e.g. /tmp/results.json). Returns: pd.DataFrame with aggregated results. """ logging.info("Loading the aggregated results.") return pd.read_json(path_or_buf=source_path, orient="columns")
Convenience function to load aggregated results from JSON file. Args: source_path: String with path to aggregated json file (e.g. /tmp/results.json). Returns: pd.DataFrame with aggregated results.
168,268
from __future__ import absolute_import from __future__ import division from __future__ import print_function import six from six.moves import range from six.moves import zip def _escape_value(value): if isinstance(value, (str, six.text_type)) and not value.startswith("@"): return "'{}'".format(value) return str(value) def to_bindings(items): return [ "{} = {}".format(key, _escape_value(value)) for key, value in items.items() ]
null
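Given the `_escape_value`/`to_bindings` pair above, bindings come out as gin-style `key = value` strings, with plain strings quoted and `@`-references left bare. A Python-3-only rendition (the original also accepts `six.text_type`), using made-up parameter values:

```python
def _escape_value(value):
    # Quote plain strings for gin; leave numbers and @references untouched.
    if isinstance(value, str) and not value.startswith("@"):
        return "'{}'".format(value)
    return str(value)

def to_bindings(items):
    return ["{} = {}".format(key, _escape_value(value))
            for key, value in items.items()]

bindings = to_bindings({"encoder.num_latent": 10,
                        "dataset.name": "dsprites_full",
                        "model.encoder_fn": "@conv_encoder"})
```

`bindings` is then `["encoder.num_latent = 10", "dataset.name = 'dsprites_full'", "model.encoder_fn = @conv_encoder"]`, ready to be written into a gin config file.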
168,269
from __future__ import absolute_import from __future__ import division from __future__ import print_function import os def get_files_in_folder(path): import pkg_resources # pylint: disable=g-bad-import-order, g-import-not-at-top for name in pkg_resources.resource_listdir("disentanglement_lib", path): new_path = path + "/" + name if not pkg_resources.resource_isdir("disentanglement_lib", new_path): yield pkg_resources.resource_filename("disentanglement_lib", new_path)
null
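`get_files_in_folder` walks a package resource directory and yields only non-directories. The same filtering pattern on a plain filesystem path, as a stdlib sketch (`files_in_folder` is an os-based analogue, not the pkg_resources version):

```python
import os
import tempfile

def files_in_folder(path):
    """Yield full paths of the regular files directly inside `path`,
    skipping subdirectories -- an os-based analogue of the snippet above."""
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if not os.path.isdir(full):
            yield full

# Tiny demo: one file and one subdirectory; only the file is yielded.
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "config.gin"), "w").close()
os.mkdir(os.path.join(demo_dir, "subfolder"))
found = list(files_in_folder(demo_dir))
```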
168,270
from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow.compat.v1 as tf import tensorflow_hub as hub from tensorflow.contrib import framework as contrib_framework The provided code snippet includes necessary dependencies for implementing the `save_numpy_arrays_to_checkpoint` function. Write a Python function `def save_numpy_arrays_to_checkpoint(checkpoint_path, **dict_with_arrays)` to solve the following problem: Saves several NumPy arrays to variables in a TF checkpoint. Args: checkpoint_path: String with the path to the checkpoint file. **dict_with_arrays: Dictionary with keys that signify variable names and values that are the corresponding Numpy arrays to be saved. Here is the function: def save_numpy_arrays_to_checkpoint(checkpoint_path, **dict_with_arrays): """Saves several NumPy arrays to variables in a TF checkpoint. Args: checkpoint_path: String with the path to the checkpoint file. **dict_with_arrays: Dictionary with keys that signify variable names and values that are the corresponding Numpy arrays to be saved. """ with tf.Graph().as_default(): feed_dict = {} assign_ops = [] nodes_to_save = [] for array_name, array in dict_with_arrays.items(): # We will save the numpy array with the corresponding dtype. tf_dtype = tf.as_dtype(array.dtype) # We create a variable which we would like to persist in the checkpoint. node = tf.get_variable(array_name, shape=array.shape, dtype=tf_dtype) nodes_to_save.append(node) # We feed the numpy arrays into the graph via placeholder which avoids # adding the numpy arrays to the graph as constants. placeholder = tf.placeholder(tf_dtype, shape=array.shape) feed_dict[placeholder] = array # We use the placeholder to assign the variable the intended value. 
assign_ops.append(tf.assign(node, placeholder)) saver = tf.train.Saver(nodes_to_save) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) sess.run(assign_ops, feed_dict=feed_dict) saver.save(sess, checkpoint_path) assert saver.last_checkpoints[0] == checkpoint_path
Saves several NumpyArrays to variables in a TF checkpoint. Args: checkpoint_path: String with the path to the checkpoint file. **dict_with_arrays: Dictionary with keys that signify variable names and values that are the corresponding Numpy arrays to be saved.
168,271
import argparse import os import torch from accelerate import Accelerator from datasets import load_dataset from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, set_peft_model_state_dict from torch.utils.data import IterableDataset from tqdm import tqdm from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments, logging, set_seed from transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR def get_args(): parser = argparse.ArgumentParser() parser.add_argument("--model_path", type=str, default="bigcode/large-model") parser.add_argument("--dataset_name", type=str, default="HuggingFaceH4/CodeAlpaca_20K") parser.add_argument("--subset", type=str) parser.add_argument("--split", type=str) parser.add_argument("--size_valid_set", type=int, default=10000) parser.add_argument("--streaming", action="store_true") parser.add_argument("--shuffle_buffer", type=int, default=5000) parser.add_argument("--input_column_name", type=str, default="prompt") parser.add_argument("--output_column_name", type=str, default="completion") parser.add_argument("--seq_length", type=int, default=2048) parser.add_argument("--max_steps", type=int, default=10000) parser.add_argument("--batch_size", type=int, default=1) parser.add_argument("--gradient_accumulation_steps", type=int, default=16) parser.add_argument("--eos_token_id", type=int, default=49152) parser.add_argument("--lora_r", type=int, default=16) parser.add_argument("--lora_alpha", type=int, default=32) parser.add_argument("--lora_dropout", type=float, default=0.05) parser.add_argument("--learning_rate", type=float, default=5e-6) parser.add_argument("--lr_scheduler_type", type=str, default="cosine") parser.add_argument("--num_warmup_steps", type=int, default=100) parser.add_argument("--weight_decay", type=float, default=0.05) parser.add_argument("--local_rank", type=int, default=0) 
parser.add_argument("--no_fp16", action="store_false") parser.add_argument("--bf16", action="store_true", default=True) parser.add_argument("--no_gradient_checkpointing", action="store_false", default=False) parser.add_argument("--seed", type=int, default=0) parser.add_argument("--num_workers", type=int, default=None) parser.add_argument("--output_dir", type=str, default="./checkpoints") parser.add_argument("--log_freq", default=100, type=int) parser.add_argument("--eval_freq", default=100, type=int) parser.add_argument("--save_freq", default=1000, type=int) return parser.parse_args()
null
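One detail worth noting in `get_args`: `--no_fp16` uses `action="store_false"`, so the value defaults to True and *passing* the flag turns it off; combined with `fp16=not args.no_fp16` in the training code, fp16 is off by default and is enabled by passing `--no_fp16`. Likewise `--bf16` combines `store_true` with `default=True`, so it can never be switched off from the command line. A minimal demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no_fp16", action="store_false")            # default: True
parser.add_argument("--bf16", action="store_true", default=True)  # always True

defaults = parser.parse_args([])
flagged = parser.parse_args(["--no_fp16"])
print(defaults.no_fp16, flagged.no_fp16)  # True False
```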
168,272
import argparse import os import torch from accelerate import Accelerator from datasets import load_dataset from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, set_peft_model_state_dict from torch.utils.data import IterableDataset from tqdm import tqdm from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments, logging, set_seed from transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR def chars_token_ratio(dataset, tokenizer, input_column_name="prompt", output_column_name="completion", nb_examples=400): """ Estimate the average number of characters per token in the dataset. """ total_characters, total_tokens = 0, 0 for _, example in tqdm(zip(range(nb_examples), iter(dataset)), total=nb_examples): text = prepare_sample_text(example, input_column_name, output_column_name) total_characters += len(text) if tokenizer.is_fast: total_tokens += len(tokenizer(text).tokens()) else: total_tokens += len(tokenizer.tokenize(text)) return total_characters / total_tokens class ConstantLengthDataset(IterableDataset): """ Iterable dataset that returns constant length chunks of tokens from a stream of text files. Args: tokenizer (Tokenizer): The processor used for processing the data. dataset (dataset.Dataset): Dataset with text files. infinite (bool): If True, the iterator is reset after the dataset reaches its end; otherwise iteration stops. seq_length (int): Length of token sequences to return. num_of_sequences (int): Number of token sequences to keep in buffer. chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer. 
""" def __init__( self, tokenizer, dataset, infinite=False, seq_length=1024, num_of_sequences=1024, chars_per_token=3.6, input_column_name="prompt", output_column_name="completion" ): self.tokenizer = tokenizer self.concat_token_id = tokenizer.eos_token_id if tokenizer.eos_token_id is not None else args.eos_token_id self.dataset = dataset self.seq_length = seq_length self.infinite = infinite self.current_size = 0 self.max_buffer_size = seq_length * chars_per_token * num_of_sequences self.input_column_name = input_column_name self.output_column_name = output_column_name def __iter__(self): iterator = iter(self.dataset) more_examples = True while more_examples: buffer, buffer_len = [], 0 while True: if buffer_len >= self.max_buffer_size: break try: buffer.append(prepare_sample_text(next(iterator), self.input_column_name, self.output_column_name)) buffer_len += len(buffer[-1]) except StopIteration: if self.infinite: iterator = iter(self.dataset) else: more_examples = False break tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"] all_token_ids = [] for tokenized_input in tokenized_inputs: all_token_ids.extend(tokenized_input + [self.concat_token_id]) for i in range(0, len(all_token_ids), self.seq_length): input_ids = all_token_ids[i : i + self.seq_length] if len(input_ids) == self.seq_length: self.current_size += 1 yield { "input_ids": torch.LongTensor(input_ids), "labels": torch.LongTensor(input_ids), } def create_datasets(tokenizer, args): dataset = load_dataset( args.dataset_name, data_dir=args.subset, split=args.split, use_auth_token=True, num_proc=args.num_workers if not args.streaming else None, streaming=args.streaming, ) if args.streaming: print("Loading the dataset in streaming mode") valid_data = dataset.take(args.size_valid_set) train_data = dataset.skip(args.size_valid_set) train_data = train_data.shuffle(buffer_size=args.shuffle_buffer, seed=args.seed) else: train_data = dataset["train"] valid_data = dataset["test"] print(f"Size of 
the train set: {len(train_data)}. Size of the validation set: {len(valid_data)}") chars_per_token = chars_token_ratio(train_data, tokenizer, args.input_column_name, args.output_column_name) print(f"The character to token ratio of the dataset is: {chars_per_token:.2f}") train_dataset = ConstantLengthDataset( tokenizer, train_data, infinite=True, seq_length=args.seq_length, chars_per_token=chars_per_token, input_column_name=args.input_column_name, output_column_name=args.output_column_name ) valid_dataset = ConstantLengthDataset( tokenizer, valid_data, infinite=False, seq_length=args.seq_length, chars_per_token=chars_per_token, input_column_name=args.input_column_name, output_column_name=args.output_column_name ) return train_dataset, valid_dataset
null
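The heart of `ConstantLengthDataset.__iter__` is the packing step: concatenate tokenized examples separated by the concat token, then emit fixed-length windows and drop the ragged tail. Isolated as a pure function (the name `pack_sequences` and the token ids are invented for illustration):

```python
def pack_sequences(tokenized_examples, seq_length, concat_token_id):
    """Concatenate tokenized examples, separated by concat_token_id, and slice
    the stream into fixed-length chunks, dropping the ragged tail -- the core
    of ConstantLengthDataset.__iter__ above."""
    all_token_ids = []
    for ids in tokenized_examples:
        all_token_ids.extend(ids + [concat_token_id])
    chunks = []
    for i in range(0, len(all_token_ids), seq_length):
        chunk = all_token_ids[i:i + seq_length]
        if len(chunk) == seq_length:
            chunks.append(chunk)
    return chunks

chunks = pack_sequences([[1, 2, 3], [4, 5], [6, 7, 8, 9]],
                        seq_length=4, concat_token_id=0)
print(chunks)  # [[1, 2, 3, 0], [4, 5, 0, 6], [7, 8, 9, 0]]
```

Because examples are packed back to back, a single training chunk can span the boundary between two examples, which is why both `input_ids` and `labels` are the same packed sequence.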
168,273
import argparse import os import torch from accelerate import Accelerator from datasets import load_dataset from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, set_peft_model_state_dict from torch.utils.data import IterableDataset from tqdm import tqdm from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments, logging, set_seed from transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR class SavePeftModelCallback(TrainerCallback): def on_save( self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs, ): checkpoint_folder = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}") kwargs["model"].save_pretrained(checkpoint_folder) pytorch_model_path = os.path.join(checkpoint_folder, "pytorch_model.bin") torch.save({}, pytorch_model_path) return control class LoadBestPeftModelCallback(TrainerCallback): def on_train_end( self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs, ): print(f"Loading best peft model from {state.best_model_checkpoint} (score: {state.best_metric}).") best_model_path = os.path.join(state.best_model_checkpoint, "adapter_model.bin") adapters_weights = torch.load(best_model_path) model = kwargs["model"] set_peft_model_state_dict(model, adapters_weights) return control def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. 
""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) def run_training(args, train_data, val_data): print("Loading the model") # disable caching mechanism when using gradient checkpointing model = AutoModelForCausalLM.from_pretrained( args.model_path, use_auth_token=True, use_cache=not args.no_gradient_checkpointing, load_in_8bit=True, device_map={"": Accelerator().process_index}, ) model = prepare_model_for_int8_training(model) lora_config = LoraConfig( r=args.lora_r, lora_alpha=args.lora_alpha, lora_dropout=args.lora_dropout, bias="none", task_type="CAUSAL_LM", target_modules = ["c_proj", "c_attn", "q_attn"] ) model = get_peft_model(model, lora_config) print_trainable_parameters(model) train_data.start_iteration = 0 print("Starting main loop") training_args = TrainingArguments( output_dir=args.output_dir, dataloader_drop_last=True, evaluation_strategy="steps", save_strategy="steps", load_best_model_at_end=True, max_steps=args.max_steps, eval_steps=args.eval_freq, save_steps=args.save_freq, logging_steps=args.log_freq, per_device_train_batch_size=args.batch_size, per_device_eval_batch_size=args.batch_size, learning_rate=args.learning_rate, lr_scheduler_type=args.lr_scheduler_type, warmup_steps=args.num_warmup_steps, gradient_accumulation_steps=args.gradient_accumulation_steps, gradient_checkpointing=not args.no_gradient_checkpointing, fp16=not args.no_fp16, bf16=args.bf16, weight_decay=args.weight_decay, run_name="StarCoder-finetuned", report_to="wandb", ddp_find_unused_parameters=False, ) trainer = Trainer(model=model, args=training_args, train_dataset=train_data, eval_dataset=val_data, callbacks=[SavePeftModelCallback, LoadBestPeftModelCallback]) print("Training...") trainer.train() print("Saving last checkpoint of the model") 
model.save_pretrained(os.path.join(args.output_dir, "final_checkpoint/"))
null
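With `r=16` as in the `LoraConfig` above, each adapted linear layer gains only `r * (d_in + d_out)` trainable parameters (a `d_in x r` down-projection plus an `r x d_out` up-projection), which is why `print_trainable_parameters` reports a tiny trainable fraction. The hidden size below is hypothetical; real `c_attn`/`c_proj` shapes depend on the checkpoint:

```python
def lora_param_count(d_in, d_out, r):
    """Trainable parameters a rank-r LoRA adapter adds to one d_in x d_out
    linear layer: a (d_in x r) A matrix plus an (r x d_out) B matrix."""
    return r * (d_in + d_out)

# Hypothetical square projection; real layer shapes depend on the model.
full_params = 6144 * 6144
adapter_params = lora_param_count(6144, 6144, r=16)
print(adapter_params)  # 196608, well under 1% of the frozen weight matrix
```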
168,274
from transformers import AutoModelForCausalLM, AutoTokenizer from peft import PeftModel import torch import os import argparse def get_args(): parser = argparse.ArgumentParser() parser.add_argument("--base_model_name_or_path", type=str, default="bigcode/large-model") parser.add_argument("--peft_model_path", type=str, default="/") parser.add_argument("--push_to_hub", action="store_true", default=True) return parser.parse_args()
null
168,275
import dataclasses import os from dataclasses import dataclass from typing import List, Optional from huggingface_hub import login from transformers import HfArgumentParser The provided code snippet includes necessary dependencies for implementing the `hf_login` function. Write a Python function `def hf_login()` to solve the following problem: Login to HuggingFace Hub if HF_TOKEN is defined in the environment Here is the function: def hf_login(): """Login to HuggingFace Hub if HF_TOKEN is defined in the environment""" hf_token = os.getenv("HF_TOKEN") if hf_token is not None: login(token=hf_token)
Login to HuggingFace Hub if HF_TOKEN is defined in the environment
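The login is deliberately conditional: nothing happens unless `HF_TOKEN` is set. The same guard, with a stand-in callable instead of `huggingface_hub.login` so the sketch has no dependencies (the return value is added for testability and is not part of the original):

```python
import os

def hf_login(login_fn):
    """Call login_fn(token=...) only when HF_TOKEN is set in the environment.
    `login_fn` stands in for huggingface_hub.login in this sketch."""
    token = os.getenv("HF_TOKEN")
    if token is not None:
        login_fn(token=token)
        return True
    return False

calls = []
os.environ["HF_TOKEN"] = "hf_dummy_token"  # invented token for the demo
logged_in = hf_login(lambda token: calls.append(token))
print(logged_in, calls)  # True ['hf_dummy_token']
```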
168,276
import json import os from dataclasses import asdict, dataclass from pathlib import Path from typing import Any, Dict, List, Optional, Type, TypeVar, Union from huggingface_hub import ModelHubMixin, hf_hub_download class DialogueTemplate(ModelHubMixin): def get_training_prompt(self) -> str: def get_inference_prompt(self) -> str: def get_dialogue(self): def get_special_tokens(self) -> List[str]: def copy(self): def to_dict(self) -> Dict[str, Any]: def from_dict(cls, data): def _save_pretrained(self, save_directory: Union[str, Path]) -> None: def _from_pretrained( cls: Type[T], *, model_id: str, revision: Optional[str], cache_dir: Optional[Union[str, Path]], force_download: bool, proxies: Optional[Dict], resume_download: bool, local_files_only: bool, token: Optional[Union[str, bool]], **model_kwargs, ) -> T: SUPPORTED_DIALOGUE_TEMPLATES = { "default": default_template, "no_system": no_system_template, "alpaca": alpaca_template, } def get_dialogue_template(template: str) -> DialogueTemplate: if template not in SUPPORTED_DIALOGUE_TEMPLATES.keys(): raise ValueError(f"Template {template} is not supported!") return SUPPORTED_DIALOGUE_TEMPLATES[template].copy()
null
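The skeleton above is heavily truncated, but `get_dialogue_template` reduces to a registry lookup that returns a defensive copy. A simplified stand-alone sketch (field names and the `default` template content are invented, not the actual `DialogueTemplate` API):

```python
class DialogueTemplate:
    def __init__(self, system, user_token="<|user|>", assistant_token="<|assistant|>"):
        self.system = system
        self.user_token = user_token
        self.assistant_token = assistant_token

    def copy(self):
        # Return an independent instance so callers can mutate the template
        # without touching the registry entry.
        return DialogueTemplate(self.system, self.user_token, self.assistant_token)

SUPPORTED_DIALOGUE_TEMPLATES = {"default": DialogueTemplate(system="Below is a dialogue.")}

def get_dialogue_template(template):
    if template not in SUPPORTED_DIALOGUE_TEMPLATES:
        raise ValueError(f"Template {template} is not supported!")
    return SUPPORTED_DIALOGUE_TEMPLATES[template].copy()
```

Returning `template.copy()` rather than the registry object itself is the key design choice: downstream code can override special tokens per run without corrupting the shared registry.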