diff --git "a/20240318/2308.07233v3.json" "b/20240318/2308.07233v3.json" new file mode 100644--- /dev/null +++ "b/20240318/2308.07233v3.json" @@ -0,0 +1,524 @@ +{ + "title": "A Unifying Generator Loss Function for Generative Adversarial Networks", + "abstract": "A unifying -parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN), which uses a canonical (or classical) discriminator loss function such as the one in the original GAN (VanillaGAN) system. The generator loss function is based on a symmetric class probability estimation type function, , and the resulting GAN system is termed -GAN. Under an optimal discriminator, it is shown that the generator\u2019s optimization problem consists of minimizing a Jensen--divergence, a natural generalization of the Jensen-Shannon divergence, where is a convex function expressed in terms of the loss function . It is also demonstrated that this problem recovers as special cases a number of GAN problems in the literature, including VanillaGAN, Least Squares GAN (LSGAN), Least th order GAN (LGAN) and the recently introduced -GAN with . Finally, experimental results are conducted on three datasets, MNIST, CIFAR-10, and Stacked MNIST to illustrate the performance of various examples of the system.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Generative adversarial networks (GANs), first introduced by Goodfellow et al.\nin 2014 [10 ###reference_b10###], have a variety of applications in media generation [21 ###reference_b21###], image restoration [29 ###reference_b29###], and data privacy [14 ###reference_b14###]. GANs aim to generate synthetic data that closely resembles the original real data with (unknown) underlying distribution . The GAN is trained such that the distribution of the generated data, , approximates well. 
More specifically, low-dimensional random noise is fed to a generator neural network to produce synthetic data. Real data and the generated data are then given to a discriminator neural network scoring the data between 0 and 1, with a score close to 1 meaning that the discriminator thinks the data belongs to the real dataset. The discriminator and generator play a minimax game, where the aim is to minimize the generator\u2019s loss and maximize the discriminator\u2019s loss.\nSince their initial introduction, several variants of GAN have been proposed. Deep convolutional GAN (DCGAN) [30 ###reference_b30###] utilizes the same loss functions as VanillaGAN (the original GAN), combining GANs with convolutional neural networks, which are helpful when applying GANs to image data as they extract visual features from the data. DCGANs are more stable than the baseline model, but can suffer from mode collapse, which occurs when the generator learns that a select number of images can easily fool the discriminator, resulting in the generator only generating those images. Another notable issue with VanillaGAN is the tendency for the generator network\u2019s gradients to vanish. In the early stages of training, the discriminator lacks confidence, assigning generated data values close to zero. Therefore, the objective function tends to zero, resulting in small gradients and a lack of learning. To mitigate this issue, a non-saturating generator loss function was proposed in [10 ###reference_b10###] so that gradients do not vanish early on in training.\nIn the original (VanillaGAN) problem setup, the objective function, expressed as a negative sum of two Shannon cross-entropies, is to be minimized by the generator and maximized by the discriminator. 
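The vanishing-gradient issue and its non-saturating remedy discussed above can be made concrete with a minimal NumPy sketch; the scalar discriminator scores below are illustrative stand-ins for trained networks, not part of the original system:

```python
import numpy as np

def value_function(d_real, d_fake):
    # VanillaGAN objective V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def saturating_gen_loss(d_fake):
    # Original generator loss E[log(1 - D(G(z)))]: its derivative in the
    # discriminator score, -1/(1 - D), stays near -1 when D(G(z)) is near 0.
    return np.mean(np.log(1.0 - d_fake))

def non_saturating_gen_loss(d_fake):
    # Non-saturating alternative -E[log D(G(z))]: its derivative, -1/D, is
    # large in magnitude when D(G(z)) is near 0, so learning does not stall.
    return -np.mean(np.log(d_fake))

# Early in training the discriminator confidently rejects generated samples:
d_fake = np.array([1e-3, 2e-3, 5e-3])
grad_saturating = -1.0 / (1.0 - d_fake)   # magnitudes close to 1
grad_non_saturating = -1.0 / d_fake       # magnitudes in the hundreds
```

The gradient comparison at the bottom is the vanishing-gradient argument in miniature: when the discriminator easily rejects fakes, the non-saturating loss supplies far larger gradients.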
It is demonstrated that if the discriminator is fixed to be optimal (i.e., as a maximizer of the objective function), the GAN\u2019s minimax game can be reduced to minimizing the Jensen-Shannon divergence (JSD) between the real and generated data\u2019s probability distributions [10 ###reference_b10###]. An analogous result was proven in [5 ###reference_b5###] for R\u00e9nyiGANs, a dual-objective GAN using distinct discriminator and generator loss functions. More specifically, under a canonical discriminator loss function (as in [10 ###reference_b10###]), and a generator loss function expressed in terms of two R\u00e9nyi cross-entropies, it is shown that the R\u00e9nyiGAN optimization problem reduces to minimizing the Jensen-R\u00e9nyi divergence, hence extending VanillaGAN\u2019s result.\nNowozin et al. generalized VanillaGAN by formulating a class of loss functions in [27 ###reference_b27###] parametrized by a lower semicontinuous convex function , devising -GAN. More specifically, the -GAN problem consists of minimizing an -divergence between the true data distribution and the generator distribution via a minimax optimization of a Fenchel conjugate representation of the -divergence, where the VanillaGAN discriminator\u2019s role (as a binary classifier) is replaced by a variational function estimating the ratio of the true data and generator distributions.\nThe -GAN loss function may be tedious to derive, as it requires the computation of the Fenchel conjugate of .\nIt can be shown that -GAN can interpolate between VanillaGAN and HellingerGAN, among others [27 ###reference_b27###].\nMore recently, -GAN was presented in [19 ###reference_b19###], where the aim is to derive a class of loss functions parameterized by , expressed in terms of a class probability estimation (CPE) loss between a real label and predicted label [19 ###reference_b19###]. 
The ability to control as a hyperparameter makes it possible to apply one system to multiple datasets, as two datasets may be optimal under different values. This work was further analyzed in [20 ###reference_b20###] and expanded in [35 ###reference_b35###] by introducing the dual-objective -GAN, which allows the generator and discriminator loss functions to each have a distinct parameter with the aim of improving training stability. When , the -GAN optimization reduces to minimizing an Arimoto divergence, as originally derived in [19 ###reference_b19###]. Note that -GAN can recover several -GANs, such as HellingerGAN, VanillaGAN, WassersteinGAN and Total Variation GAN [19 ###reference_b19###].\nFurthermore, in their more recent work which unifies [19 ###reference_b19###, 20 ###reference_b20###, 35 ###reference_b35###], the authors establish, under some conditions, a one-to-one correspondence between CPE-loss-based GANs (such as -GANs) and -GANs that use a symmetric -divergence; see [34 ###reference_b34###, Theorems 4-5 and Corollary 1]. 
They also prove various generalization and estimation error bounds for -GANs and illustrate their ability to mitigate training instability for synthetic Gaussian data as well as the Celeb-A and LSUN Classroom image datasets.\nThe various -GAN equilibrium results do not provide an analogous result to the JSD and Jensen-R\u00e9nyi divergence minimization for the VanillaGAN [10 ###reference_b10###] and R\u00e9nyiGAN [5 ###reference_b5###] problems, respectively, as they do not involve a Jensen-type divergence.111Given a divergence measure between distributions and (i.e., a positive-definite bivariate function: with equality if and only if (iff) almost everywhere (a.e.)), a Jensen-type divergence of is given by ; i.e., it is the arithmetic average of two -divergences, one between and the mixture , and the other between and .\nThe main objective of our work is to present a unifying approach that provides an axiomatic framework to encompass several existing GAN generator loss functions so that the GAN optimization can be simplified in terms of a Jensen-type divergence. In particular, our framework classifies the set of -parameterized CPE-based loss functions , generalizing the -loss function in [19 ###reference_b19###, 20 ###reference_b20###, 35 ###reference_b35###, 34 ###reference_b34###]. We then propose -GAN, a dual-objective GAN that uses a function from this class for the generator, and uses any canonical discriminator loss function that admits the same optimizer as VanillaGAN [10 ###reference_b10###]. We show that under some regularity (convexity/concavity) conditions on , the minimax game played with these two loss functions is equivalent to the minimization of a Jensen--divergence, a Jensen-type divergence and another natural extension of the Jensen-Shannon divergence (in addition to the Jensen-R\u00e9nyi divergence [5 ###reference_b5###]), where the generating function of the divergence is directly computed from the CPE loss function . 
This result recovers various prior dual-objective GAN equilibrium results, thus unifying them under one parameterized generator loss function.\nThe newly obtained Jensen--divergence, which is noted to belong to the class of symmetric -divergences with different generating functions (see Remark 1 ###reference_ark1###),\nis a useful measure of dissimilarity between distributions as it requires a convex function with a restricted domain given by the interval (see Remark 2 ###reference_ark2###) in addition to its symmetry and finiteness properties.\nThe rest of the paper is organized as follows. In Section 2 ###reference_###, we review -divergence measures and introduce the Jensen--divergence as an extension of the Jensen-Shannon divergence. In Section 3 ###reference_###, we establish our main result regarding the optimization of our unifying generator loss function (Theorem 1 ###reference_orem1###), and show that it can be applied to a large class of known GANs (Lemmas 2 ###reference_ma2###-4 ###reference_ma4###). We conduct experiments in Section 4 ###reference_### by implementing different manifestations of -GAN on three datasets, MNIST, CIFAR-10 and Stacked MNIST. Finally, we conclude the paper in Section 5 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "We begin by presenting key information measures used throughout the paper.\nLet be a convex continuous function222The convexity of already implies its continuity on . Here we extend the continuity of at 0, setting , which may be infinite. Otherwise, is assumed to be finite for . that is strictly convex at 1 (i.e., for all , , and such that ) and satisfying .\n[7 ###reference_b7###, 8 ###reference_b8###, 1 ###reference_b1###]\nThe -divergence between two probability densities and with common support on the Lebesgue measurable space (, , ) is denoted by and given by333For simplicity, we consider throughout densities with common supports. 
A comprehensive definition of -divergence for arbitrary distributions can be found in [22 ###reference_b22###, Section III].\nwhere we have used the shorthand , where is a measurable function; we follow this convention from now on. Here, is referred to as the generating function of .\nWe require that is strictly convex around and that it satisfies the normalization condition to ensure positive-definiteness of the -divergence, i.e., with equality holding iff (a.e.). We present examples of -divergences under various choices of their generating function in Table 1 ###reference_###. We will be invoking these divergence measures in different parts of the paper.\nThe R\u00e9nyi divergence of order (, ) between densities and with common support is used in [5 ###reference_b5###] in the R\u00e9nyiGAN problem; it is given by [31 ###reference_b31###, 33 ###reference_b33###]\nNote that the R\u00e9nyi divergence is not an -divergence; however, it can be expressed as a transformation of the Hellinger divergence (which is itself an -divergence):\nWe now introduce a new measure, the Jensen--divergence, which is analogous to the Jensen-Shannon and Jensen-R\u00e9nyi divergences.\nThe Jensen--divergence between two probability distributions and with common support on the Lebesgue measurable space (, , ) is denoted by and given by\nwhere is the -divergence.\nWe next verify that the Jensen-Shannon divergence is a Jensen--divergence.\nLet and be two densities with common support , and consider the function given by . Then we have that\nProof.\nAs is convex (and continuous) on its domain with , we have that\n\nNote that is itself a symmetric -divergence (with a modified generating function). 
Indeed, given the continuous convex function that is strictly convex around with , consider the functions\nand\nwhich are both continuous convex, strictly convex around , and satisfy .\nNow direct calculations yield that\nand\nThus\nwhere , i.e.,\nis also continuous convex, strictly convex around and satisfies . Since by (4 ###reference_###),\nwe conclude that the Jensen--divergence is a symmetric -divergence.444Equivalently, we have that , where , (with ), which is a necessary and sufficient condition for the -divergence to be symmetric [22 ###reference_b22###, p. 4399].\nExamining (4 ###reference_###), we note that the Jensen--divergence between and involves the -divergences between either or and their mixture . In other words, to determine , we only need and when taking the expectations in (1 ###reference_###). Thus, it is sufficient to restrict the domain of the convex function to the interval ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Main Results", + "text": "We now present our main theorem, which unifies various generator loss functions under a CPE-based loss function for a dual-objective GAN, -GAN, with a canonical discriminator loss function that is optimized as in [10 ###reference_b10###]. Under some regularity conditions on the loss function , we show that under the optimal discriminator, our generator loss becomes a Jensen--divergence.\nLet be the measure space of images (where for black and white images and for RGB images), and let be a measure space such that . The discriminator neural network is given by , and the generator neural network is given by . The generator\u2019s noise input is sampled from a multivariate Gaussian distribution . We denote the probability distribution of the real data by and the probability distribution of the generated data by . We also set and as the densities corresponding to and , respectively. 
We begin by introducing the system.\nFix and let be a loss function such that is a continuous function that is either convex or concave in , with strict convexity (resp., strict concavity) around ,\nand such that is symmetric in the sense that\nThen the system is defined by , where is the discriminator loss function, and is the generator loss function, given by\nMoreover, the problem is defined by\nWe now present our main result about the optimization problem.\nFor a fixed and , let be the loss functions of , and consider the joint optimization in (9 ###reference_###)-(10 ###reference_###). If is a canonical loss function in the sense that it is maximized at , where\nthen (10 ###reference_###) reduces to\nwhere is the Jensen--divergence, and\n\nis a continuous convex function, that is strictly convex around , given by\nwhere and are real constants chosen so that \nwith (resp., ) if is convex (resp., concave).\nFinally, (12 ###reference_###) is minimized when (a.e.).\nProof.\nUnder the assumption that is maximized at , we have that\nwhere:\n(a) holds since by (7 ###reference_###), where .\n(b) holds by solving for in terms of in (13 ###reference_###), where in the first term and in the second term.\nThe constants and are chosen so that . 
Finally, the continuity and convexity of (as well as its strict convexity around ) directly follow from the corresponding assumptions imposed on the loss function in Definition 3 ###reference_inition3### and on the condition imposed on the sign of in the theorem\u2019s statement.\nNote that given in (11 ###reference_###) is not only an optimal discriminator of the (original) VanillaGAN discriminator loss function, but it also optimizes the LSGAN/LGAN discriminator loss function when their discriminator\u2019s labels for fake and real data, and , respectively, satisfy and (see Section 3.3 ###reference_###).\nWe next show that the of Theorem 1 ###reference_orem1### recovers as special cases a number of well-known GAN generator loss functions and their equilibrium points (under an optimal classical discriminator )." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "VanillaGAN", + "text": "VanillaGAN [10 ###reference_b10###] uses the same loss function for both generator and discriminator, which is\nand can be cast as a saddle point optimization problem:\nIt is shown in [10 ###reference_b10###] that the optimal discriminator for (15 ###reference_###) is given by , as in (11 ###reference_###).\nWhen , the optimization reduces to minimizing the Jensen-Shannon divergence:\nWe next show that (16 ###reference_###) can be obtained from Theorem 1 ###reference_orem1###.\nConsider the optimization of the VanillaGAN given in (15 ###reference_###). 
Then we have that\nwhere for all .\nProof.\nFor any fixed , let the function in (8 ###reference_###) be as defined in the statement:\nNote that is symmetric, since for , we have that\nInstead of showing the continuity and convexity/concavity conditions imposed on\n in Definition 3 ###reference_inition3###,\nwe implicitly verify them by directly deriving from using (13 ###reference_###)\nand showing that it is continuous convex and strictly convex around .\nSetting and , we have that\nClearly, is convex (actually strictly convex on and hence strictly convex around ) and continuous on its domain (where ). It also satisfies .\nBy Lemma 1 ###reference_ma1###, we know that under the generating function , the Jensen- divergence reduces to the Jensen-Shannon divergence.\nTherefore, by Theorem 1 ###reference_orem1###, we have that\nwhich finishes the proof." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "-GAN", + "text": "The notion of -GANs is introduced in [19 ###reference_b19###] as a way to unify several existing GANs using a parameterized loss function. We describe -GANs next.\n[19 ###reference_b19###]\nLet be a binary label, , and fix . The -loss between and is the map given by\n[19 ###reference_b19###]\nFor , the loss function is given by\nThe joint optimization of the problem is given by\nIt is known that -GAN recovers several well-known GANs by varying the parameter, notably, the VanillaGAN () [10 ###reference_b10###] and the HellingerGAN () [27 ###reference_b27###]. Furthermore, as , recovers a translated version of the WassersteinGAN loss function [4 ###reference_b4###]. We now present the solution to the joint optimization problem presented in (19 ###reference_###).\n[19 ###reference_b19###]\nLet , and consider the joint optimization of the -GAN presented in (19 ###reference_###). 
The discriminator that maximizes the loss function is given by\nFurthermore, when is fixed, the problem in (19 ###reference_###) reduces to minimizing an Arimoto divergence (as defined in Table 1 ###reference_###) when :\nand a Jensen-Shannon divergence when :\nwhere (21 ###reference_###) and (22 ###reference_###) achieve their minima iff (a.e.).\nRecently, -GAN was generalized in [35 ###reference_b35###] to implement a dual objective GAN, which we describe next.\n[35 ###reference_b35###]\nFor and , the \u2019s optimization is given by\nwhere and are defined in (18 ###reference_###), with replaced by and respectively.\n[35 ###reference_b35###]\nConsider the joint optimization in (23 ###reference_###)-(24 ###reference_###). Let parameters , satisfy\nThe discriminator that maximizes is given by\nFurthermore, when is fixed, the minimization of in (24 ###reference_###) is equivalent to the following -divergence minimization:\nwhere is given by\nWe now apply the -GAN to our main result in Theorem 1 ###reference_orem1### by showing that (12 ###reference_###) can recover (27 ###reference_###) when (which corresponds to a VanillaGAN discriminator loss function).\nConsider the given in Definition 6 ###reference_inition6###. Let and . Then, the solution to (24 ###reference_###) presented in Proposition 2 ###reference_position2### is equivalent to minimizing a Jensen--divergence: specifically, if is the optimal discriminator given by (26 ###reference_###), which is equivalent to (11 ###reference_###) when , then in (27 ###reference_###) satisfies\nwhere and\nProof.\nWe show that Theorem 1 ###reference_orem1### recovers Proposition 2 ###reference_position2### by setting . 
Note that is symmetric, since\nAs in the proof of Lemma 2 ###reference_ma2###, instead of proving the conditions imposed on\n in Definition 3 ###reference_inition3###,\nwe derive directly from using (13 ###reference_###) and show that it is continuous convex and strictly convex around .\nFrom Lemma 2 ###reference_ma2###, we know that when , (which is strictly convex and continuous). For , setting\n\nand \nin (13 ###reference_###), we have that\nClearly . Furthermore for , we have that\nwhich is positive for , and is convex for (as well as continuous on its domain and strictly convex around ). Thus by Theorem 1 ###reference_orem1###, we have that\nWe now show that the above Jensen--divergence is equal to the -divergence originally derived for the -GAN problem of Proposition 2 ###reference_position2### (note from Proposition 2 ###reference_position2###, that if , then , so the range of concurs with the range above required for the convexity of ). For any two distributions and with common support , we have that\nTherefore, .\nNote that this lemma generalizes Lemma 2 ###reference_ma2###; the VanillaGAN is a special case of the -GAN for ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Shifted LkGANs and LSGANs", + "text": "Least Squares GAN (LSGAN) was proposed in [24 ###reference_b24###] to mitigate the vanishing gradient problem with VanillaGAN and to stabilize training performance. The LSGAN\u2019s loss function is derived from the squared error distortion measure, where we aim to minimize the distortion between the data samples and a target value we want the discriminator to assign the samples to. The LSGAN was generalized with the LGAN in [5 ###reference_b5###] by replacing the squared error distortion measure with the absolute error distortion measure of order , therefore introducing an additional degree of freedom to the generator\u2019s loss function. We first state the general LGAN problem. 
We then apply the result of Theorem 1 ###reference_orem1### to the loss functions of LSGAN and LGAN.\n[5 ###reference_b5###]\nLet , , and let . The LGAN\u2019s loss functions, denoted by and are given by\nThe LGAN problem is the joint optimization\nWe next recall the solution to (33 ###reference_###), which is a minimization of the Pearson-Vajda divergence of order (as defined in Table 1 ###reference_###).\n[5 ###reference_b5###]\nConsider the joint optimization for the LGAN presented in (33 ###reference_###). Then, the optimal discriminator that maximizes in (31 ###reference_###) is given by\nFurthermore, if , and , the minimization of in (32 ###reference_###) reduces to\nNote that the LSGAN [24 ###reference_b24###] is a special case of LGAN, as we recover LSGAN when [5 ###reference_b5###].\nBy scrutinizing Proposition 3 ###reference_position3### and Theorem 1 ###reference_orem1###, we observe that the former cannot be recovered from the latter. However we can use Theorem 1 ###reference_orem1### by slightly modifying the LGAN generator\u2019s loss function. First, for the dual objective GAN proposed in Theorem 1 ###reference_orem1###, we need . By (35 ###reference_###), this is achieved for and . Then, we define the intermediate loss function\nComparing the above loss function with (8 ###reference_###), we note that setting and in (37 ###reference_###) satisfies the symmetry property of .\nFinally, to ensure the generating function satisfies , we shift each term in (37 ###reference_###) by 1. Putting these changes together, we propose a revised generator loss function, denoted by , given by\nWe call a system that uses (38 ###reference_###) as a generator loss function a Shifted LGAN (SLGAN). If , we have a shifted version of the LSGAN generator loss function, which we call the Shifted LSGAN (SLSGAN). 
Note that none of these modifications alter the gradients of in (32 ###reference_###), since the first term is independent of , the choice of is irrelevant, and translating a function by a constant does not change its gradients. However, from Proposition 3 ###reference_position3###, for , and , we do not have that , and as a result, this modified problem does not reduce to minimizing a Pearson-Vajda divergence. Consequently, we can relax the condition on in Definition 7 ###reference_inition7### to just . We now show how Theorem 1 ###reference_orem1### can be applied to -GAN using (38 ###reference_###).\nLet . Let be a discriminator loss function, and let be the generator\u2019s loss function defined in (38 ###reference_###). Consider the joint optimization\nIf is optimized at (i.e., is canonical), then we have that\nwhere is given by\nExamples of that satisfy the requirements of Lemma 4 ###reference_ma4### include the LGAN discriminator loss function given by (31 ###reference_###) with and , and the VanillaGAN discriminator loss function given by (14 ###reference_###).\nProof.\nLet . We can restate the SLGAN\u2019s generator loss function in (38 ###reference_###) in terms of in (8 ###reference_###): we have that , where and is given by\nWe have that is symmetric, since\nWe derive from via (13 ###reference_###) and\ndirectly check that it is continuous convex and strictly convex around .\nSetting and in (13 ###reference_###), we have that\nWe clearly have that and that is continuous. Furthermore, we have that , which is non-negative for . Therefore is convex (as well as strictly convex around ). As a result, by Theorem 1 ###reference_orem1###, we have that\n\nWe conclude this section by emphasizing that Theorem 1 ###reference_orem1### serves as a unifying result recovering the existing loss functions in the literature and moreover, provides a way for generalizing new ones. 
Our aim in the next section is to demonstrate the versatility of this result in experimentation." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We perform two experiments on three different image datasets, which we describe below.\nExperiment 1. In the first experiment, we compare the -GAN with the -GAN, controlling the value of .555We herein confine the comparison of -GAN with -GAN only so that both systems have the same tunable free parameter . Results obtained in [35 ###reference_b35###] for the Stacked MNIST dataset show that -GAN provides a consistently robust performance when .\nOther experiments illustrating the performance of -GAN with are carried out for the Celeb-A and LSUN Classroom image datasets in [34 ###reference_b34###], showing improved training stability for values.\nRecall that corresponds to the canonical VanillaGAN (or DCGAN) discriminator. We aim to verify whether or not replacing an -GAN discriminator with a VanillaGAN discriminator stabilizes or improves the system\u2019s performance depending on the value of . Note that the result of Theorem 1 ###reference_orem1### only applies to the -GAN for .\nExperiment 2. We train two variants of SLGAN, with the generator loss function as described in (38 ###reference_###), parameterized by . We then utilize two different canonical discriminator loss functions to align with Theorem 1 ###reference_orem1###. The first is the VanillaGAN discriminator loss given by (14 ###reference_###); we call the resulting dual-objective GAN Vanilla-SLGAN. The second is the LGAN discriminator loss, given by (31 ###reference_###), where we set and such that the optimal discriminator is given by (11 ###reference_###). We call this system L-SLGAN. We compare the two variants to analyze how the value of and the choice of discriminator loss impact the system\u2019s performance."
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "We run both experiments on three image datasets: MNIST [9 ###reference_b9###], CIFAR-10 [17 ###reference_b17###], and Stacked MNIST [23 ###reference_b23###]. The MNIST dataset is a dataset of black and white handwritten digits between 0 and 9 of size . The CIFAR-10 dataset is an RGB dataset of small images of common animals and modes of transportation of size . The Stacked MNIST dataset is an RGB dataset derived from the MNIST dataset, constructed by taking three MNIST images, assigning each one of the three colour channels, and stacking the images on top of each other. The resulting images are then padded so that each one of them have size .\nFor Experiment 1, we use values of 0.5, 5.0, 10.0 and 20.0. For each value of , we train the (, )-GAN and the -GAN. We additionally train the DCGAN, which corresponds to the -GAN. For Experiment 2, we use values of 0.25, 1.0, 2.0, 7.5 and 15.0. Note that when , we recover LSGAN. For the MNIST dataset, we run 10 trials with the random seeds 123, 500, 1600, 199621, 60677, 20435, 15859, 33764, 79878, and 36123, and train each GAN for 250 epochs. For the RGB datasets (CIFAR-10 and Stacked MNIST), we run 5 trials with the random seeds 123, 1600, 60677, 15859, 79878, and train each GAN for 500 epochs. All experiments utilize an Adam optimzer for the stochastic gradient descent algorithm, with a learning rate of , and parameters , and [16 ###reference_b16###]. We also experiment with the addition of a gradient penalty (GP); we add a penalty term to the discriminator\u2019s loss function to encourage the discriminator\u2019s gradient to have a unit norm [11 ###reference_b11###].\nThe MNIST experiments were run on one 6130 2.1 GHz 1xV100 GPU, 8 CPUs, and 16 GB of memory. The CIFAR-10 and Stacked MNIST experiments were run on one Epyc 7443 2.8 GHz GPU, 8 CPUs and 16 GB of memory. 
For each experiment, we report the best overall Fr\u00e9chet Inception Distance (FID) score [13 ###reference_b13###], the best average FID score amongst all trials and its variance, and the average epoch the best FID score occurs and its variance. The FID score for each epoch was computed over 10 000 images. For each metric, the lowest numerical value corresponds to the model with the best metric (indicated in bold in the tables). We also report how many trials we include in our summary statistics, as it is possible for a trial to collapse and not train for the full number of epochs. The neural network architectures used in our experiments are presented in Appendix A ###reference_###. The training algorithms are presented in Appendix B ###reference_###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Experimental Results", + "text": "We report the FID metrics for Experiment 1 in Tables 2 ###reference_###, 3 ###reference_### and 4 ###reference_###, and for Experiment 2 in Tables 5 ###reference_###, 6 ###reference_### and 7 ###reference_###. We report only on those experiments that produced meaningful results. Models that utilize a simplified gradient penalty have the suffix \u201c-GP\u201d. We display the output of the best-performing -GANs in Figure 1 ###reference_### and the best-performing SLGANs in Figure 3 ###reference_###. 
Finally, we plot the trajectory of the FID scores throughout training epochs in Figures 2 ###reference_### and 4 ###reference_###.\n[Tables 2-7: for each model, the best FID score, the average best FID score and its variance, the average epoch at which the best FID score occurs and its variance, and the number of successful trials; numeric entries are given in the rendered tables.]\n[Figures 1-4: generated image samples and FID training trajectories; not reproduced here.]
###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Discussion", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Experiment 1", + "text": "From Table 2 ###reference_###, we note that 37 of the 90 trials collapse before 250 epochs have passed without a gradient penalty. The (5,5)-GAN collapses for all 5 trials, and hence it is not displayed in Table 2 ###reference_###. This behaviour is expected, as the (,)-GAN is more sensitive to exploding gradients when does not tend to 0 or [19 ###reference_b19###]. The addition of a gradient penalty could mitigate the discriminator\u2019s gradients diverging in the (5,5)-GAN by encouraging gradients to have a unit norm. Using a VanillaGAN discriminator with an -GAN generator (i.e., the (1,)-GAN) produces better quality images for all tested values of , compared to when both networks utilize an -GAN loss function. The (1,10)-GAN achieves excellent stability, converging in all 10 trials, and also achieves the lowest average FID score. The (1,5)-GAN achieves the lowest FID score overall, marginally outperforming DCGAN. Note that when the average best FID score is very close to the best FID score, the resulting best FID score variance is quite small (of the order of ), indicating little statistical variability over the trials.\nLikewise, for the CIFAR-10 and Stacked MNIST datasets, the (1,)-GAN produces lower FID scores than the -GAN (see Tables 3 ###reference_### and 4 ###reference_###). However, both models are more stable with the CIFAR-10 dataset. With the exception of DCGAN, no model converged to its best FID score for all 5 trials with the Stacked MNIST dataset. Comparing the trials that did converge, both -GAN and -GAN performed better on the Stacked MNIST dataset than the CIFAR-10 dataset. 
For CIFAR-10, the (1,10)- and (1,20)-GANs produced the best overall FID score and the best average FID score, respectively. On the other hand, the (1,0.5)-GAN produced the best overall FID score and the best average FID score for the Stacked MNIST dataset. We also observe a tradeoff between speed and performance for the CIFAR-10 and Stacked MNIST datasets: the -GANs arrive at their lowest FID scores later than their respective -GANs, but achieve lower FID scores overall.\nComparing Figures 1(c) ###reference_sf3### and 1(d) ###reference_sf4###, we observe that the -GAN-GP provides more stability than the -GAN for lower values of (i.e., ), while the -GAN-GP exhibits more stability for higher values ( and ). Figures 1(e) ###reference_sf5### and 1(f) ###reference_sf6### show that the two -GANs trained on the Stacked MNIST dataset exhibit unstable behaviour earlier in training when or . However, both systems stabilize and converge to their lowest FID scores as training progresses. The (0.5,0.5)-GAN-GP system in particular exhibits wildly erratic behaviour for the first 200 epochs, then finishes training with a stable trajectory that outperforms DCGAN-GP.\nA future direction is to explore how the complexity of an image dataset influences the best choice of . For example, the Stacked MNIST dataset might be considered less complex than CIFAR-10, as images in the Stacked MNIST dataset contain only four unique colours (black, red, green, and blue), while the CIFAR-10 dataset utilizes significantly more colours." + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Experiment 2", + "text": "We see from Table 5 ###reference_### that all L-LGANs and Vanilla-SLGANs have FID scores comparable to the DCGAN. 
When , Vanilla-SLGAN and L-SLGAN arrive at their lowest FID scores slightly earlier than DCGAN and other SLGANs.\nThe addition of a simplified gradient penalty is necessary for L-SLGAN to achieve overall good performance on the CIFAR-10 dataset (see Table 6 ###reference_###). Interestingly, Vanilla-SLGAN achieves lower FID scores without a gradient penalty for lower values (), and with a gradient penalty for higher values (). When , both SLGANs collapsed for all 5 trials without a gradient penalty.\nTable 7 ###reference_### shows that Vanilla-SLGANs achieve better FID scores than their respective L-LGAN counterparts. However, L-LGANs are more stable, as no single trial collapsed, while 10 of the 25 Vanilla-SLGAN trials collapsed before 500 epochs had passed. While all Vanilla-SLGANs outperform the DCGAN with gradient penalty, L-SLGAN-GP only outperforms DCGAN-GP when . Except for when , we observe that the L-SLGAN system takes fewer epochs to arrive at its lowest FID score. Comparing Figures 3(e) ###reference_sf5### and 3(f) ###reference_sf6###, we observe that L-SLGANs exhibit more stable FID score trajectories than their respective Vanilla-SLGANs. This makes sense, as the LGAN loss function aims to increase the GAN\u2019s stability compared to DCGAN [5 ###reference_b5###]." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We introduced a parameterized CPE-based generator loss function for a dual-objective GAN termed -GAN which, when used in tandem with a canonical discriminator loss function that achieves its optimum in (11 ###reference_###), minimizes a Jensen--divergence. We showed that this system can recover VanillaGAN, -GAN, and LGAN as special cases. We conducted experiments with the three aforementioned -GANs on three image datasets. The experiments indicate that -GAN exhibits better performance than -GAN with . 
They also show that the devised SLGAN system achieves lower FID scores with a VanillaGAN discriminator compared with an LGAN discriminator.\nFuture work consists of unveiling more examples of existing GANs that fall under our result, as well as applying -GAN to novel, judiciously designed CPE losses and evaluating the performance (in terms of both quality and diversity of generated samples) and the computational efficiency of the resulting models. Another interesting and related direction is to study -GAN within the context of -GANs, given that the Jensen--divergence is itself an -divergence (see Remark 1 ###reference_ark1###), by systematically analyzing different Jensen--divergences and the role they play in improving GAN performance and stability. Other worthwhile directions include incorporating the proposed loss into state-of-the-art GAN models, such as, among others, BigGAN [6 ###reference_b6###], StyleGAN [15 ###reference_b15###] and CycleGAN [2 ###reference_b2###], for high-resolution data generation and image-to-image translation applications; conducting a meticulous analysis of the sensitivity of the models\u2019 performance to different values of the parameter ; and providing guidelines on how best to tune for different types of datasets." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Neural Network Architectures", + "text": "We outline the architectures used for the generator and discriminator. For the MNIST dataset, we use the architectures of [5 ###reference_b5###]. For the CIFAR-10 and Stacked MNIST datasets, we base the architectures on [30 ###reference_b30###]. We summarize some aliases for the architectures in Table 8 ###reference_###. For all models, we use a batch size of 100 and a noise size of 784 for the generator input.\nWe omit the bias in the convolutional and deconvolutional layers to decrease the number of parameters being trained, which in turn decreases computation times. 
We initialize our kernels using a normal distribution with zero mean and variance 0.01. We present the MNIST architectures in Tables 9 ###reference_### and 10 ###reference_###, and the CIFAR-10 and Stacked MNIST architectures in Tables 11 ###reference_### and 12 ###reference_###." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Algorithms", + "text": "We outline the algorithms used to train our models in Algorithms 1 ###reference_###, 2 ###reference_### and 3 ###reference_###." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Examples of -divergences.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n-DivergenceSymbolFormula
Kullback-Leibler [18]\nKL
Jensen-Shannon [25]\nJSD
Pearson [26]\n
Pearson-Vajda () [26]\n
Arimoto (, ) [3, 28, 22]\n
Hellinger (, ) [12, 22, 32]\n
\n
\n
", + "capture": "Table 1: Examples of -divergences." + }, + "2": { + "table_html": "
\n
Table 2: -GAN results for MNIST.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
()-GAN\n

Best FID score

\n
\n

Average best FID score

\n
\n

Best FID scores variance

\n
\n

Average epoch

\n
\n

Epoch variance

\n
\n

Number of successful trials (/10)

\n
(1,0.5)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

4

\n
(0.5,0.5)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

6

\n
(1,5)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
(1,10)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
(10,10)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

2

\n
(1,20)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
(20,20)-GAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

1

\n
DCGAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
\n
\n
", + "capture": "Table 2: -GAN results for MNIST." + }, + "3": { + "table_html": "
\n
Table 3: -GAN results for CIFAR-10.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
()-GAN\n

Best FID score

\n
\n

Average best FID score

\n
\n

Best FID scores variance

\n
\n

Average epoch

\n
\n

Epoch variance

\n
\n

Number of successful trials (/5)

\n
(1,0.5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(0.5,0.5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(1,5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(5,5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(1,10)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(10,10)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(1,20)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
(20,20)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
DCGAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
\n
\n
", + "capture": "Table 3: -GAN results for CIFAR-10." + }, + "4": { + "table_html": "
\n
Table 4: -GAN results for Stacked MNIST.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
()-GAN\n

Best FID score

\n
\n

Average best FID score

\n
\n

Best FID scores variance

\n
\n

Average epoch

\n
\n

Epoch variance

\n
\n

Number of successful trials (/5)

\n
(1,0.5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

2

\n
(0.5,0.5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

1

\n
(1,5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

2

\n
(5,5)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

4

\n
(1,10)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

2

\n
(10,10)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

2

\n
(1,20)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

1

\n
(20,20)-GAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

1

\n
DCGAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
\n
\n
", + "capture": "Table 4: -GAN results for Stacked MNIST." + }, + "5": { + "table_html": "
\n
Table 5: SLGAN results for MNIST.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Variant-SLGAN-\n\n

Best FID score

\n
\n

Average best FID score

\n
\n

Best FID scores variance

\n
\n

Average epoch

\n
\n

Epoch variance

\n
\n

Number of successful trials (/10)

\n
L-SLGAN-0.25\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
Vanilla-SLGAN-0.25\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
L-SLGAN-1.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
Vanilla-SLGAN-1.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
L-SLGAN-2.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
Vanilla-SLGAN-2.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
L-SLGAN-7.5\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
Vanilla-SLGAN-7.5\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
L-SLGAN-15.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
Vanilla-SLGAN-15.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
DCGAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

10

\n
\n
\n
", + "capture": "Table 5: SLGAN results for MNIST." + }, + "6": { + "table_html": "
\n
Table 6: SLGAN results for CIFAR-10.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Variant-SLGAN-\n\n

Best FID score

\n
\n

Average best FID score

\n
\n

Best FID scores variance

\n
\n

Average epoch

\n
\n

Epoch variance

\n
\n

Number of successful trials (/5)

\n
L-SLGAN-1.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-1.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-2.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-2.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-7.5\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-7.5\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-15.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-15.0\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
DCGAN\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
\nL-SLGAN-0.25-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-0.25-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-1.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-1.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-2.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-2.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-7.5-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-7.5-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-15.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-15.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
DCGAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
\n
\n
", + "capture": "Table 6: SLGAN results for CIFAR-10." + }, + "7": { + "table_html": "
\n
Table 7: SLGAN results for Stacked MNIST.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Variant-SLGAN-\n\n

Best FID score

\n
\n

Average best FID score

\n
\n

Best FID scores variance

\n
\n

Average epoch

\n
\n

Epoch variance

\n
\n

Number of successful trials (/5)

\n
L-SLGAN-0.25-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-0.25-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

1

\n
L-SLGAN-1.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-1.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

2

\n
L-SLGAN-2.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-2.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

3

\n
L-SLGAN-7.5-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-7.5-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
L-SLGAN-15.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
Vanilla-SLGAN-15.0-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

4

\n
DCGAN-GP\n

\n
\n

\n
\n

\n
\n

\n
\n

\n
\n

5

\n
\n
\n
", + "capture": "Table 7: SLGAN results for Stacked MNIST." + }, + "8": { + "table_html": "
\n
Table 8: Summary of aliases used to describe neural network architectures.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AliasDefinition
FCFully Connected
UpConv2DDeconvolutional Layer
Conv2DConvolutional Layer
BNBatch Normalization
LeakyReLULeaky Rectified Linear Unit
\n
", + "capture": "Table 8: Summary of aliases used to describe neural network architectures." + }, + "9": { + "table_html": "
\n
Table 9: Discriminator architecture for the MNIST dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LayerOutput SizeKernelStrideBNActivation
InputNo
Conv2D2NoLeakyReLU(0.3)
Dropout(0.3)No
Conv2D2NoLeakyReLU(0.3)
Dropout(0.3)No
FC1NoSigmoid
\n
", + "capture": "Table 9: Discriminator architecture for the MNIST dataset." + }, + "10": { + "table_html": "
\n
Table 10: Generator architecture for the MNIST dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LayerOutput SizeKernelStrideBNActivation
Input
FC
UpConv2D1YesLeakyReLU(0.3)
UpConv2D2YesLeakyReLU(0.3)
UpConv2D2NoTanh
\n
", + "capture": "Table 10: Generator architecture for the MNIST dataset." + }, + "11": { + "table_html": "
\n
Table 11: Discriminator architecture for the CIFAR-10 and Stacked MNIST datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LayerOutput SizeKernelStrideBNActivation
Input
Conv2D2NoLeakyReLU(0.2)
Conv2D2NoLeakyReLU(0.2)
Conv2D2NoLeakyReLU(0.2)
Dropout(0.4)No
FC1Sigmoid
\n
", + "capture": "Table 11: Discriminator architecture for the CIFAR-10 and Stacked MNIST datasets." + }, + "12": { + "table_html": "
\n
Table 12: Generator architecture for the CIFAR-10 and Stacked MNIST datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LayerOutput SizeKernelStrideBNActivation
Input
FC
UpConv2D2YesLeakyReLU(0.2)
UpConv2D2YesLeakyReLU(0.2)
UpConv2D2YesLeakyReLU(0.2)
Conv2D1NoTanh
\n
", + "capture": "Table 12: Generator architecture for the CIFAR-10 and Stacked MNIST datasets." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2308.07233v3_figure_1(a).png", + "caption": "(a) (\u03b1D,\u03b1Gsubscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a\\alpha_{D},\\alpha_{G}italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT)-GAN for MNIST, \u03b1D=1.0subscript\ud835\udefc\ud835\udc371.0\\alpha_{D}=1.0italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = 1.0, \u03b1G=5.0subscript\ud835\udefc\ud835\udc3a5.0\\alpha_{G}=5.0italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT = 5.0, FID: 1.125.\nFigure 1: Generated images for the best-performing (\u03b1Dsubscript\ud835\udefc\ud835\udc37\\alpha_{D}italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT, \u03b1Gsubscript\ud835\udefc\ud835\udc3a\\alpha_{G}italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT)-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x1.jpg" + }, + "1(b)": { + "figure_path": "2308.07233v3_figure_1(b).png", + "caption": "(b) (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GAN-GP for CIFAR-10, \u03b1D=1.0subscript\ud835\udefc\ud835\udc371.0\\alpha_{D}=1.0italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = 1.0, \u03b1G=20.0subscript\ud835\udefc\ud835\udc3a20.0\\alpha_{G}=20.0italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT = 20.0, FID = 8.466.\nFigure 1: Generated images for the best-performing (\u03b1Dsubscript\ud835\udefc\ud835\udc37\\alpha_{D}italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT, \u03b1Gsubscript\ud835\udefc\ud835\udc3a\\alpha_{G}italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT)-GANs.", + "url": 
"http://arxiv.org/html/2308.07233v3/x2.jpg" + }, + "1(c)": { + "figure_path": "2308.07233v3_figure_1(c).png", + "caption": "(c) (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GAN-GP for Stacked MNIST, \u03b1D=1.0subscript\ud835\udefc\ud835\udc371.0\\alpha_{D}=1.0italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = 1.0, \u03b1G=0.5subscript\ud835\udefc\ud835\udc3a0.5\\alpha_{G}=0.5italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT = 0.5, FID = 4.833.\nFigure 1: Generated images for the best-performing (\u03b1Dsubscript\ud835\udefc\ud835\udc37\\alpha_{D}italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT, \u03b1Gsubscript\ud835\udefc\ud835\udc3a\\alpha_{G}italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT)-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x3.jpg" + }, + "2(a)": { + "figure_path": "2308.07233v3_figure_2(a).png", + "caption": "(a) (1,\u03b1)1\ud835\udefc(1,\\alpha)( 1 , italic_\u03b1 )-GANs for MNIST.\nFigure 2: Average FID scores vs. epochs for various (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x4.jpg" + }, + "2(b)": { + "figure_path": "2308.07233v3_figure_2(b).png", + "caption": "(b) (\u03b1,\u03b1)\ud835\udefc\ud835\udefc(\\alpha,\\alpha)( italic_\u03b1 , italic_\u03b1 )-GANs for MNIST.\nFigure 2: Average FID scores vs. 
epochs for various (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x5.jpg" + }, + "2(c)": { + "figure_path": "2308.07233v3_figure_2(c).png", + "caption": "(c) (1,\u03b1)1\ud835\udefc(1,\\alpha)( 1 , italic_\u03b1 )-GAN-GPs, for CIFAR-10.\nFigure 2: Average FID scores vs. epochs for various (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x6.jpg" + }, + "2(d)": { + "figure_path": "2308.07233v3_figure_2(d).png", + "caption": "(d) (\u03b1,\u03b1)\ud835\udefc\ud835\udefc(\\alpha,\\alpha)( italic_\u03b1 , italic_\u03b1 )-GAN-GPs for CIFAR-10.\nFigure 2: Average FID scores vs. epochs for various (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x7.jpg" + }, + "2(e)": { + "figure_path": "2308.07233v3_figure_2(e).png", + "caption": "(e) (1,\u03b1)1\ud835\udefc(1,\\alpha)( 1 , italic_\u03b1 )-GAN-GPs for Stacked MNIST.\nFigure 2: Average FID scores vs. 
epochs for various (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x8.jpg" + }, + "2(f)": { + "figure_path": "2308.07233v3_figure_2(f).png", + "caption": "(f) (\u03b1,\u03b1)\ud835\udefc\ud835\udefc(\\alpha,\\alpha)( italic_\u03b1 , italic_\u03b1 )-GAN-GPs for Stacked MNIST.\nFigure 2: Average FID scores vs. epochs for various (\u03b1D,\u03b1G)subscript\ud835\udefc\ud835\udc37subscript\ud835\udefc\ud835\udc3a(\\alpha_{D},\\alpha_{G})( italic_\u03b1 start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT )-GANs.", + "url": "http://arxiv.org/html/2308.07233v3/x9.jpg" + }, + "3(a)": { + "figure_path": "2308.07233v3_figure_3(a).png", + "caption": "(a) Vanilla-SLk\ud835\udc58kitalic_kGAN-0.25 for MNIST, FID =1.112absent1.112=1.112= 1.112.\nFigure 3: Generated images for best-performing SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x10.jpg" + }, + "3(b)": { + "figure_path": "2308.07233v3_figure_3(b).png", + "caption": "(b) Vanilla-SLk\ud835\udc58kitalic_kGAN-2.0 for CIFAR-10, FID =4.58absent4.58=4.58= 4.58.\nFigure 3: Generated images for best-performing SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x11.jpg" + }, + "3(c)": { + "figure_path": "2308.07233v3_figure_3(c).png", + "caption": "(c) Vanilla-SLk\ud835\udc58kitalic_kGAN-15.0-GP for Stacked MNIST, FID =3.836absent3.836=3.836= 3.836.\nFigure 3: Generated images for best-performing SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x12.jpg" + }, + "4(a)": { + "figure_path": "2308.07233v3_figure_4(a).png", + "caption": "(a) Lk\ud835\udc58kitalic_k-SLk\ud835\udc58kitalic_kGANs for MNIST.\nFigure 4: FID scores vs. 
epochs for various SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x13.jpg" + }, + "4(b)": { + "figure_path": "2308.07233v3_figure_4(b).png", + "caption": "(b) Vanilla-SLk\ud835\udc58kitalic_kGANs for MNIST.\nFigure 4: FID scores vs. epochs for various SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x14.jpg" + }, + "4(c)": { + "figure_path": "2308.07233v3_figure_4(c).png", + "caption": "(c) Lk\ud835\udc58kitalic_k-SLk\ud835\udc58kitalic_kGAN-GPs for CIFAR-10.\nFigure 4: FID scores vs. epochs for various SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x15.jpg" + }, + "4(d)": { + "figure_path": "2308.07233v3_figure_4(d).png", + "caption": "(d) Vanilla-SLk\ud835\udc58kitalic_kGAN-GPs for CIFAR-10.\nFigure 4: FID scores vs. epochs for various SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x16.jpg" + }, + "4(e)": { + "figure_path": "2308.07233v3_figure_4(e).png", + "caption": "(e) Lk\ud835\udc58kitalic_k-SLk\ud835\udc58kitalic_kGAN-GPs for Stacked MNIST.\nFigure 4: FID scores vs. epochs for various SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x17.jpg" + }, + "4(f)": { + "figure_path": "2308.07233v3_figure_4(f).png", + "caption": "(f) Vanilla-SLk\ud835\udc58kitalic_kGAN-GPs, Stacked MNIST.\nFigure 4: FID scores vs. epochs for various SLk\ud835\udc58kitalic_kGANs.", + "url": "http://arxiv.org/html/2308.07233v3/x18.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A general class of coefficients of divergence of one distribution from another.", + "author": "S. M. Ali and S. D. Silvey.", + "venue": "Journal of the Royal Statistical Society. 
Series B (Methodological), 28(1):131\u2013142, 1966.", + "url": null + } + }, + { + "2": { + "title": "Augmented CycleGAN: Learning many-to-many mappings from unpaired data.", + "author": "Amjad Almahairi, Sai Rajeshwar, Alessandro Sordoni, Philip Bachman, and Aaron Courville.", + "venue": "In Proceedings of the International Conference on Machine Learning, pages 195\u2013204. PMLR, 2018.", + "url": null + } + }, + { + "3": { + "title": "Information-theoretical considerations on estimation problems.", + "author": "Suguru Arimoto.", + "venue": "Information and Control, 19(3):181\u2013194, 1971.", + "url": null + } + }, + { + "4": { + "title": "Wasserstein generative adversarial networks.", + "author": "Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou.", + "venue": "In Proceedings of the International Conference on Machine Learning, pages 214\u2013223. PMLR, 2017.", + "url": null + } + }, + { + "5": { + "title": "Least th-order and R\u00e9nyi generative adversarial networks.", + "author": "Himesh Bhatia, William Paul, Fady Alajaji, Bahman Gharesifard, and Philippe Burlina.", + "venue": "Neural Computation, 33(9):2473\u20132510, 2021.", + "url": null + } + }, + { + "6": { + "title": "Large scale GAN training for high fidelity natural image synthesis.", + "author": "Andrew Brock, Jeff Donahue, and Karen Simonyan.", + "venue": "arXiv preprint arXiv:1809.11096, 2018.", + "url": null + } + }, + { + "7": { + "title": "Eine Informationstheoretische Ungleichung und ihre Anwendung auf den Bewis der Ergodizitat on Markhoffschen Ketten.", + "author": "Imre Csiszar.", + "venue": "Publications of the Mathematical Institute of the Hungarian Academy of Sciences, Series A, 8, 01 1963.", + "url": null + } + }, + { + "8": { + "title": "Information-type measures of difference of probability distributions and indirect observations.", + "author": "Imre Csisz\u00e1r.", + "venue": "Studia Sci. Math. 
Hungarica, 2:299\u2013318, 1967.",
        "url": null
      }
    },
    {
      "9": {
        "title": "The MNIST database of handwritten digit images for machine learning research.",
        "author": "Li Deng.",
        "venue": "IEEE Signal Processing Magazine, 29(6):141\u2013142, 2012.",
        "url": null
      }
    },
    {
      "10": {
        "title": "Generative adversarial nets.",
        "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.",
        "venue": "In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27, pages 2672\u20132680. Curran Associates, Inc., 2014.",
        "url": null
      }
    },
    {
      "11": {
        "title": "Improved training of Wasserstein GANs.",
        "author": "Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville.",
        "venue": "Advances in Neural Information Processing Systems, 30, 2017.",
        "url": null
      }
    },
    {
      "12": {
        "title": "Neue Begr\u00fcndung der Theorie quadratischer Formen von unendlichvielen Ver\u00e4nderlichen.",
        "author": "E. Hellinger.",
        "venue": "Journal f\u00fcr die reine und angewandte Mathematik, 1909(136):210\u2013271, 1909.",
        "url": null
      }
    },
    {
      "13": {
        "title": "GANs trained by a two time-scale update rule converge to a local Nash equilibrium.",
        "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.",
        "venue": "In Advances in Neural Information Processing Systems, pages 6626\u20136637, 2017.",
        "url": null
      }
    },
    {
      "14": {
        "title": "PATE-GAN: Generating synthetic data with differential privacy guarantees.",
        "author": "James Jordon, Jinsung Yoon, and Mihaela Van Der Schaar.",
        "venue": "In Proceedings of the International Conference on Learning Representations, 2018.",
        "url": null
      }
    },
    {
      "15": {
        "title": "A style-based generator architecture for generative adversarial networks.",
        "author": "Tero Karras, Samuli Laine, and Timo Aila.",
        "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401\u20134410, 2019.",
        "url": null
      }
    },
    {
      "16": {
        "title": "Adam: A method for stochastic optimization.",
        "author": "Diederik Kingma and Jimmy Ba.",
        "venue": "In Proceedings of the International Conference on Learning Representations, 2015.",
        "url": null
      }
    },
    {
      "17": {
        "title": "Learning multiple layers of features from tiny images.",
        "author": "Alex Krizhevsky, Geoffrey Hinton, et al.",
        "venue": "Technical report, University of Toronto, 2009.",
        "url": null
      }
    },
    {
      "18": {
        "title": "On information and sufficiency.",
        "author": "Solomon Kullback and Richard A Leibler.",
        "venue": "The Annals of Mathematical Statistics, 22(1):79\u201386, 1951.",
        "url": null
      }
    },
    {
      "19": {
        "title": "Realizing GANs via a tunable loss function.",
        "author": "Gowtham R Kurri, Tyler Sypherd, and Lalitha Sankar.",
        "venue": "In Proceedings of the IEEE Information Theory Workshop (ITW), pages 1\u20136, 2021.",
        "url": null
      }
    },
    {
      "20": {
        "title": "\u03b1-GAN: Convergence and estimation guarantees.",
        "author": "Gowtham R Kurri, Monica Welfert, Tyler Sypherd, and Lalitha Sankar.",
        "venue": "In Proceedings of the IEEE International Symposium on Information Theory (ISIT), pages 276\u2013281, 2022.",
        "url": null
      }
    },
    {
      "21": {
        "title": "Predicting future frames using retrospective cycle GAN.",
        "author": "Yong-Hoon Kwon and Min-Gyu Park.",
        "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.",
        "url": null
      }
    },
    {
      "22": {
        "title": "On divergences and informations in statistics and information theory.",
        "author": "F. Liese and I. Vajda.",
        "venue": "IEEE Transactions on Information Theory, 52(10):4394\u20134412, 2006.",
        "url": null
      }
    },
    {
      "23": {
        "title": "PacGAN: The power of two samples in generative adversarial networks.",
        "author": "Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh.",
        "venue": "In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.",
        "url": null
      }
    },
    {
      "24": {
        "title": "Least squares generative adversarial networks.",
        "author": "Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, and Stephen Paul Smolley.",
        "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.",
        "url": null
      }
    },
    {
      "25": {
        "title": "On a generalization of the Jensen\u2013Shannon divergence and the Jensen\u2013Shannon centroid.",
        "author": "Frank Nielsen.",
        "venue": "Entropy, 22(2):221, 2020.",
        "url": null
      }
    },
    {
      "26": {
        "title": "On the chi square and higher-order chi distances for approximating f-divergences.",
        "author": "Frank Nielsen and Richard Nock.",
        "venue": "IEEE Signal Processing Letters, 21(1):10\u201313, 2013.",
        "url": null
      }
    },
    {
      "27": {
        "title": "f-GAN: Training generative neural samplers using variational divergence minimization.",
        "author": "Sebastian Nowozin, Botond Cseke, and Ryota Tomioka.",
        "venue": "Advances in Neural Information Processing Systems, 29, 2016.",
        "url": null
      }
    },
    {
      "28": {
        "title": "On a class of perimeter-type distances of probability distributions.",
        "author": "Ferdinand \u00d6sterreicher.",
        "venue": "Kybernetika, 32(4):389\u2013393, 1996.",
        "url": null
      }
    },
    {
      "29": {
        "title": "Exploiting deep generative prior for versatile image restoration and manipulation.",
        "author": "Xingang Pan, Xiaohang Zhan, Bo Dai, Dahua Lin, Chen Change Loy, and Ping Luo.",
        "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):7474\u20137489, 2021.",
        "url": null
      }
    },
    {
      "30": {
        "title": "Unsupervised representation learning with deep convolutional generative adversarial networks.",
        "author": "Alec Radford, Luke Metz, and Soumith Chintala.",
        "venue": "In Proceedings of the International Conference on Learning Representations, 2016.",
        "url": null
      }
    },
    {
      "31": {
        "title": "On measures of entropy and information.",
        "author": "Alfr\u00e9d R\u00e9nyi.",
        "venue": "In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pages 547\u2013562. University of California Press, 1961.",
        "url": null
      }
    },
    {
      "32": {
        "title": "On f-divergences: Integral representations, local behavior, and inequalities.",
        "author": "Igal Sason.",
        "venue": "Entropy, 20(5):383, May 2018.",
        "url": null
      }
    },
    {
      "33": {
        "title": "R\u00e9nyi divergence and Kullback-Leibler divergence.",
        "author": "Tim Van Erven and Peter Harremo\u00ebs.",
        "venue": "IEEE Transactions on Information Theory, 60(7):3797\u20133820, 2014.",
        "url": null
      }
    },
    {
      "34": {
        "title": "Addressing GAN training instabilities via tunable classification losses.",
        "author": "Monica Welfert, Gowtham R. Kurri, Kyle Otstot, and Lalitha Sankar.",
        "venue": "arXiv preprint, 2023.",
        "url": null
      }
    },
    {
      "35": {
        "title": "(\u03b1D, \u03b1G)-GANs: Addressing GAN training instabilities via dual objectives.",
        "author": "Monica Welfert, Kyle Otstot, Gowtham R Kurri, and Lalitha Sankar.",
        "venue": "In Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2023.",
        "url": null
      }
    }
  ],
  "url": "http://arxiv.org/html/2308.07233v3"
}