{
"paper_id": "N19-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:59:57.959031Z"
},
"title": "Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Fu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": "hao.fu@duke.edu"
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "Microsoft Research",
"institution": "",
"location": {
"settlement": "Redmond"
}
},
"email": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "Microsoft Research",
"institution": "",
"location": {
"settlement": "Redmond"
}
},
"email": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "Microsoft Research",
"institution": "",
"location": {
"settlement": "Redmond"
}
},
"email": "jfgao@microsoft.com"
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": "",
"affiliation": {
"laboratory": "Microsoft Research",
"institution": "",
"location": {
"settlement": "Redmond"
}
},
"email": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Duke University",
"location": {}
},
"email": "lcarin@duke.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Variational autoencoders (VAEs) with an autoregressive decoder have been applied for many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) reconstruction and (ii) KL regularization, balanced by a weighting hyper-parameter β. One notorious training difficulty is that the KL term tends to vanish. In this paper we study scheduling schemes for β, and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization. To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing β multiple times. This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm restarts. The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training.",
"pdf_parse": {
"paper_id": "N19-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Variational autoencoders (VAEs) with an autoregressive decoder have been applied for many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) reconstruction and (ii) KL regularization, balanced by a weighting hyper-parameter β. One notorious training difficulty is that the KL term tends to vanish. In this paper we study scheduling schemes for β, and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization. To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing β multiple times. This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm restarts. The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Variational autoencoders (VAEs) (Kingma and Welling, 2013; Rezende et al., 2014) have been applied in many NLP tasks, including language modeling (Bowman et al., 2015), dialog response generation (Zhao et al., 2017; Wen et al., 2017), semi-supervised text classification (Xu et al., 2017), controllable text generation, and text compression. A prominent component of a VAE is the distribution-based latent representation for text sequence observations. This flexible representation allows the VAE to explicitly model holistic properties of sentences, such as style, topic, and high-level linguistic and semantic features. Samples from the prior latent distribution can produce diverse and well-formed sentences through simple deterministic decoding (Bowman et al., 2015). (⇤ Corresponding author. † Equal contribution.)",
"cite_spans": [
{
"start": 32,
"end": 58,
"text": "(Kingma and Welling, 2013;",
"ref_id": "BIBREF18"
},
{
"start": 59,
"end": 80,
"text": "Rezende et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 146,
"end": 167,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 197,
"end": 216,
"text": "(Zhao et al., 2017;",
"ref_id": "BIBREF42"
},
{
"start": 217,
"end": 234,
"text": "Wen et al., 2017)",
"ref_id": "BIBREF35"
},
{
"start": 273,
"end": 290,
"text": "(Xu et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 798,
"end": 819,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the sequential nature of text, an autoregressive decoder is typically employed in the VAE. This is often implemented with a recurrent neural network (RNN); the long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) RNN is used widely. This introduces one notorious issue when a VAE is trained using traditional methods: the decoder ignores the latent variable, yielding what is termed the KL vanishing problem.",
"cite_spans": [
{
"start": 197,
"end": 231,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several attempts have been made to ameliorate this issue (Dieng et al., 2018; Zhao et al., 2017). Among them, perhaps the simplest solution is monotonic KL annealing, where the weight β of the KL penalty term is scheduled to gradually increase during training (Bowman et al., 2015). While these techniques can effectively alleviate the KL-vanishing issue, a proper unified theoretical interpretation is still lacking, even for the simple annealing scheme.",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "Dieng et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 77,
"end": 95,
"text": "Zhao et al., 2017;",
"ref_id": "BIBREF42"
},
{
"start": 258,
"end": 279,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we analyze the variable dependency in a VAE, and point out that the autoregressive decoder has two paths (formally defined in Section 3.1) that work together to generate text sequences. One path is conditioned on the latent codes, and the other path is conditioned on previously generated words. KL vanishing happens because (i) the first path can easily get blocked, due to the lack of good latent codes at the beginning of decoder training; and (ii) the easiest solution that an expressive decoder can learn is to ignore the latent code, and rely on the other path only for decoding. To remedy this issue, a promising approach is to remove the blockage in the first path, and feed meaningful latent codes in training the decoder, so that the decoder can easily adopt them to generate controllable observations (Bowman et al., 2015).",
"cite_spans": [
{
"start": 825,
"end": 846,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper makes the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide a novel explanation for the KL-vanishing issue, and develop an understanding of the strengths and weaknesses of existing scheduling methods (e.g., constant or monotonic annealing schedules). (ii) Based on our explanation, we propose a cyclical annealing schedule. It repeats the annealing process multiple times, and can be considered as an inexpensive approach to leveraging good latent codes learned in the previous cycle, as a warm restart, to train the decoder in the next cycle. (iii) We demonstrate that the proposed cyclical annealing schedule for VAE training improves performance on a large range of tasks (with negligible extra computational cost), including text modeling, dialog response generation, and unsupervised language pre-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To generate a text sequence of length",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "T, x = [x_1, ..., x_T]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": ", neural language models (Mikolov et al., 2010) generate every token x_t conditioned on the previously generated tokens:",
"cite_spans": [
{
"start": 25,
"end": 47,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "p(x) = ∏_{t=1}^{T} p(x_t | x_{<t}),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "where x_{<t} indicates all tokens before t. The VAE model for text consists of two parts, generation and inference (Kingma and Welling, 2013; Rezende et al., 2014; Bowman et al., 2015). The generative model (decoder) draws a continuous latent vector z from the prior p(z), and generates the text sequence x from a conditional distribution p_θ(x|z); p(z) is typically assumed a multivariate Gaussian, and θ represents the neural network parameters. The following auto-regressive decoding process is usually used:",
"cite_spans": [
{
"start": 113,
"end": 139,
"text": "(Kingma and Welling, 2013;",
"ref_id": "BIBREF18"
},
{
"start": 140,
"end": 161,
"text": "Rezende et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 162,
"end": 181,
"text": "Bowman et al., 2015",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p_θ(x|z) = ∏_{t=1}^{T} p_θ(x_t | x_{<t}, z).",
"eq_num": "(1)"
}
],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "Parameters θ are typically learned by maximizing the marginal log-likelihood",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "log p_θ(x) = log ∫ p(z) p_θ(x|z) dz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "However, this marginal term is intractable to compute for many decoder choices. Thus, variational inference is considered, and the true posterior p_θ(z|x) ∝ p_θ(x|z) p(z) is approximated via the variational distribution q_φ(z|x) (often known as the inference model or encoder), implemented via a φ-parameterized neural network. It yields the evidence lower bound (ELBO) as an objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "log p_θ(x) ≥ L_ELBO = E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "Typically, q_φ(z|x) is modeled as a Gaussian distribution, and the re-parametrization trick is used for efficient learning (Kingma and Welling, 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries 2.1 The VAE model",
"sec_num": "2"
},
{
"text": "There is an alternative interpretation of the ELBO: the VAE objective can be viewed as a regularized version of the autoencoder (AE). It is thus natural to extend the negative of L_ELBO in (2) by introducing a hyper-parameter β to control the strength of regularization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Schedules and KL Vanishing",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_β = L_E + β L_R, with",
"eq_num": "(3)"
}
],
"section": "Training Schedules and KL Vanishing",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_E = − E_{q_φ(z|x)}[log p_θ(x|z)] (4) L_R = KL(q_φ(z|x) || p(z))",
"eq_num": "(5)"
}
],
"section": "Training Schedules and KL Vanishing",
"sec_num": "2.2"
},
{
"text": "where L_E is the reconstruction error (or negative log-likelihood (NLL)), and L_R is a KL regularizer. The cost function L_β provides a unified perspective for understanding various autoencoder variants and training methods. When β = 1, we recover the VAE in (2). When β = 0, and q_φ(z|x) is a delta distribution, we recover the AE. In other words, the AE does not regularize the variational distribution toward a prior distribution, and there is only a point estimate to represent the text sequence's latent feature. In practice, it has been found that learning with an AE is prone to overfitting (Bowman et al., 2015), or generating plain dialog responses (Zhao et al., 2017). Hence, it is desirable to retain meaningful posteriors in real applications. Two different schedules for β have been commonly used for a text VAE.",
"cite_spans": [
{
"start": 651,
"end": 670,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Schedules and KL Vanishing",
"sec_num": "2.2"
},
{
"text": "The standard approach is to keep β = 1 fixed during the entire training procedure, as it corresponds to optimizing the true VAE objective. Unfortunately, instability on text analysis has been witnessed, in that the KL term L_R becomes vanishingly small during training (Bowman et al., 2015). This issue causes two undesirable outcomes: (i) an encoder that produces posteriors almost identical to the Gaussian prior, for all observations (rather than a more interesting posterior); and (ii) a decoder that completely ignores the latent variable z, so that the learned model reduces to a simpler language model. This is known as the KL vanishing issue in text VAEs.",
"cite_spans": [
{
"start": 267,
"end": 288,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "Monotonic Annealing Schedule. A simple remedy has been proposed in (Bowman et al., 2015) to alleviate KL collapse. It sets β = 0 at the beginning of training, and gradually increases β until β = 1 is reached. In this setting, we do not optimize the proper lower bound in (2) during the early stages of training, but nonetheless improvements on the value of that bound are observed at convergence in previous work (Bowman et al., 2015; Zhao et al., 2017).",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 411,
"end": 432,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 433,
"end": 451,
"text": "Zhao et al., 2017)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "[Figure 1: (a) Traditional VAE; (b) VAE with an auto-regressive decoder, where x_t is generated from {x_{<t}, z} for t = 1, ..., T]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "The monotonic annealing schedule has become the de facto standard in training text VAEs, and has been widely adopted in many NLP tasks. Though simple and often effective, this heuristic still lacks a proper justification. Further, how to best schedule is largely unexplored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "In the traditional VAE (Kingma and Welling, 2013), z generates x directly, and the reconstruction depends only on one path of {φ, θ} passing through z, as shown in Figure 1(a). Hence, z can largely determine the reconstructed x. In contrast, when an auto-regressive decoder is used in a text VAE (Bowman et al., 2015), there are two paths from x to its reconstruction, as shown in Figure 1(b). Path A is the same as that in the standard VAE, where z is the global representation that controls the generation of x; Path B leaks partial ground-truth information of x at every time step of the sequential decoding, generating x_t conditioned on x_{<t}. Therefore, Path B can potentially bypass Path A to generate x, leading to KL vanishing.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 172,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 382,
"end": 390,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Identifying Sources of KL Vanishing",
"sec_num": "3.1"
},
{
"text": "From this perspective, we hypothesize that the model-collapse problem is related to the low quality of z at the beginning phase of decoder training. A lower quality z introduces more difficulties in reconstructing x via Path A. As a result, the model is forced to learn an easier solution to decoding: generating x via Path B only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Sources of KL Vanishing",
"sec_num": "3.1"
},
{
"text": "We argue that this phenomenon can be easily observed due to the powerful representation capability of the auto-regressive decoder. It has been shown empirically that auto-regressive decoders are able to capture highly complex distributions, such as natural language sentences (Mikolov et al., 2010). This means that Path B alone has enough capacity to model x, even though the decoder takes {x_{<t}, z} as input to produce x_t. Zhang et al. (2017a) showed that flexible deep neural networks can easily fit randomly labeled training data, and here the decoder can learn to rely solely on x_{<t} for generation when z is of low quality. We use our hypothesis to explain the learning behavior of different scheduling schemes for β as follows.",
"cite_spans": [
{
"start": 276,
"end": 298,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Sources of KL Vanishing",
"sec_num": "3.1"
},
{
"text": "The two loss terms in (2) are weighted equally in the constant schedule. At the early stage of optimization, {φ, θ} are randomly initialized and the latent codes z are of low quality. The KL term L_R pushes q_φ(z|x) close to an uninformative prior p(z): the posterior becomes more like isotropic Gaussian noise, and less representative of the corresponding observations. In other words, L_R blocks Path A, and thus z remains uninformative during the entire training process: it starts with random initialization and then is regularized towards random noise. Although the reconstruction term L_E can be satisfied via two paths, since z is noisy, the decoder learns to discard Path A (i.e., ignores z), and chooses Path B to generate the sentence word-by-word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "Monotonic Annealing Schedule. The monotonic schedule sets β close to 0 in the early stage of training, which effectively removes the blockage β L_R on Path A, and the model reduces to a denoising autoencoder 1 . L_E becomes the only objective, which can be reached by both paths. Though randomly initialized, z is learned to capture useful information for the reconstruction of x during training. At the time when the full VAE objective is considered (β = 1), the z learned earlier can be viewed as the VAE initialization; such latent variables are much more informative than random ones, and thus are ready for the decoder to use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "To mitigate the KL-vanishing issue, it is key to have meaningful latent codes z at the beginning of training the decoder, so that z can be utilized. The monotonic schedule under-weights the prior regularization, and the learned q_φ(z|x) tends to collapse into a point estimate (i.e., the VAE reduces to an AE). This underestimate can result in sub-optimal decoder learning. A natural question concerns how one can get a better distribution estimate for z as initialization, while retaining low computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constant Schedule",
"sec_num": null
},
{
"text": "Our proposal is to use z ∼ q_φ(z|x), which has been trained under the full VAE objective, as initialization. To progressively improve the latent representation z, we propose a cyclical annealing schedule. We start with β = 0, increase β at a fast pace, and then stay at β = 1 for subsequent learning iterations. This encourages the model to converge towards the VAE objective, and infers its first raw full latent distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "Unfortunately, Path A is blocked at β = 1. The optimization is then continued at β = 0 again, which perturbs the VAE objective, dislodges it from the convergence, and reopens Path A. Importantly, the decoder is now trained with latent codes from a full distribution z ∼ q_φ(z|x), and both paths are considered. We repeat this process several times to achieve better convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "Formally, β has the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "β_t = f(τ) if τ ≤ R, and β_t = 1 if τ > R, with (6) τ = mod(t − 1, ⌈T/M⌉) / (T/M),",
"eq_num": "(7)"
}
],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "where t is the iteration number, T is the total number of training iterations, f is a monotonically increasing function, and we introduce two new hyper-parameters associated with the cyclical annealing schedule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "• M: number of cycles (default M = 4); • R: proportion used to increase β within a cycle (default R = 0.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "In other words, we split the training process into M cycles, each starting with β = 0 and ending with β = 1. We provide an example of a cyclical schedule in Figure 2(b); within one cycle, there are two consecutive stages. • Annealing. Setting β = f(0) = 0 forces the model to learn representative z to reconstruct x. As depicted in Figure 1(b), there is no interruption from the prior on Path A, and z is forced to learn the global representation of x. By gradually increasing β towards f(R) = 1, q_φ(z|x) is regularized to transit from a point estimate to a distribution estimate, spreading out to match the prior.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 250,
"end": 261,
"text": "Figure 1(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "• Fixing. As our ultimate goal is to learn a VAE model, we fix β = 1 for the rest of the training steps within one cycle, e.g., the steps [5K, 10K] in Figure 2(b). This drives the model to optimize the full VAE objective until convergence.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 143,
"text": "Figure 2(b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "As illustrated in Figure 2 , the monotonic schedule increasingly anneals from 0 to 1 once, and fixes = 1 during the rest of training. The cyclical schedules alternatively repeats the annealing and fixing stages multiple times.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "A Practical Recipe. The existing schedules can be viewed as special cases of the proposed cyclical schedule. The cyclical schedule reduces to the constant schedule when R = 0, and it reduces to a monotonic schedule when M = 1 and R is relatively small 2 . In theory, any monotonically increasing function f can be adopted for the cyclical schedule, as long as f(0) = 0 and f(R) = 1. In practice, we suggest building the cyclical schedule upon the success of monotonic schedules: we adopt the same f, and modify it by setting M and R (as default). Three widely used increasing functions for f are linear (Fraccaro et al., 2016; Goyal et al., 2017), Sigmoid (Bowman et al., 2015) and Cosine (Lai et al., 2018). We present the comparative results using the linear function f(τ) = τ/R in Figure 2, and show the complete comparison for other functions in Figure 7 of the Supplementary Material (SM).",
"cite_spans": [
{
"start": 606,
"end": 629,
"text": "(Fraccaro et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 630,
"end": 648,
"text": "Goyal et al., 2017",
"ref_id": "BIBREF8"
},
{
"start": 649,
"end": 680,
"text": "), Sigmoid (Bowman et al., 2015",
"ref_id": null
},
{
"start": 693,
"end": 711,
"text": "(Lai et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 792,
"end": 800,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 859,
"end": 867,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cyclical Annealing Schedule",
"sec_num": "3.2"
},
{
"text": "This section derives a bound for the training objective to rigorously study the impact of β; the proof details are included in the SM. For notational convenience, we identify each data sample with a unique integer index n ∼ q(n), drawn from a uniform random variable on {1, 2,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "..., N}. Further we define q(z|n) = q_φ(z|x_n) and q(z, n) = q(z|n) q(n) = q(z|n) / N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "Following (Makhzani et al., 2016), we refer to q(z) = Σ_{n=1}^{N} q(z|n) q(n) as the aggregated posterior. This marginal distribution captures the aggregated z over the entire dataset. The KL term in (5) can be decomposed into two refined terms (Chen et al., 2018; Hoffman and Johnson, 2016):",
"cite_spans": [
{
"start": 14,
"end": 37,
"text": "(Makhzani et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 245,
"end": 264,
"text": "(Chen et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 265,
"end": 291,
"text": "Hoffman and Johnson, 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "F_R = E_{q(n)}[KL(q(z|n) || p(z))] = I_q(z, n) [F_1: Mutual Info.] + KL(q(z) || p(z)) [F_2: Marginal KL] (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "where F_1 is the mutual information (MI) measured by q. Higher MI can lead to a higher correlation between the latent variable and the data variable, and encourages a reduction in the degree of KL vanishing. The marginal KL is represented by F_2, and it measures the fitness of the aggregated posterior to the prior distribution. The reconstruction term in (5) provides a lower bound for the MI measured by q, based on Corollary 3 in :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F_E = − E_{q(n), z ∼ q(z|n)}[log p(n|z)] ≥ H_q(n) − I_q(z, n)",
"eq_num": "(9)"
}
],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "where H_q(n) is a constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On the impact of",
"sec_num": "3.3"
},
{
"text": "When scheduled with β, the training objective over the dataset can be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F_β = F_E + β F_R",
"eq_num": "(10)"
}
],
"section": "Analysis of",
"sec_num": null
},
{
"text": "≥ (β − 1) I_q(z, n) + β KL(q(z) || p(z)) + H_q(n) (11) To reduce KL vanishing, we desire an increase in the MI term I_q(z, n), which appears in both F_E and F_R, modulated by β. This shows that the emphasis on MI is inversely proportional to β. When β = 0, the model fully focuses on maximizing the MI. As β increases, the model gradually transits towards fitting the aggregated latent codes to the given prior. When β = 1, the role of MI becomes implicit in KL(q(z) || p(z)). It is determined by the amortized inference regularization (implied by the encoder's expressivity) (Shu et al., 2018), which further affects the performance of the generative density estimator.",
"cite_spans": [
{
"start": 560,
"end": 578,
"text": "(Shu et al., 2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of",
"sec_num": null
},
{
"text": "We compare different schedule methods by visualizing the learning processes on an illustrative problem. Consider a dataset consisting of 10 sequences, each of which is a 10-dimensional one-hot vector with the value 1 appearing in a different position. A 2-dimensional latent space is used for the convenience of visualization. Both the encoder and decoder are implemented using a 2-layer LSTM with 64 hidden units each. We use T = 40K total iterations, and the scheduling schemes in Figure 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 481,
"end": 489,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Visualization of Latent Space",
"sec_num": "4"
},
{
"text": "The learning curves for the ELBO, reconstruction error, and KL term are shown in Figure 3. The three schedules share very similar ELBO values. However, the cyclical schedule provides substantially lower reconstruction error and higher KL divergence. Interestingly, the cyclical schedule improves the performance progressively: each cycle becomes better than the previous one, and there are clear periodic patterns across different cycles. This suggests that the cyclical schedule allows the model to use the previously learned results as a warm restart to achieve further improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Visualization of Latent Space",
"sec_num": "4"
},
{
"text": "We visualize the resulting division of the latent space at different training steps in Figure 4, where each color corresponds to z ∼ q(z|n), for n = 1, ..., 10. We observe that the constant schedule produces heavily mixed latent codes z for different sequences throughout the entire training process. The monotonic schedule starts with a mixed z, but soon divides the space into a mixture of 10 cluttered Gaussians in the annealing process (the division remains cluttered in the rest of training). The cyclical schedule behaves similarly to the monotonic schedule in the first 10K steps (the first cycle). But, starting from the 2nd cycle, much more clearly divided clusters appear when learning on top of the 1st-cycle results. However, β < 1 leads to some holes between different clusters, making q(z) violate the constraint of p(z). This is alleviated at the end of the 2nd cycle, as the model is trained with β = 1. As the process repeats, we see clearer patterns in the 4th cycle than in the 2nd cycle for both β < 1 and β = 1. This shows that more structured information is captured in z using the cyclical schedule, which is beneficial in downstream applications as shown in the experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Visualization of Latent Space",
"sec_num": "4"
},
{
"text": "Solutions to KL vanishing Several techniques have been proposed to mitigate the KL vanishing issue. The proposed method is most closely related to the monotonic KL annealing technique in (Bowman et al., 2015) . In addition to introducing a specific algorithm, we have comprehensively studied the impact of \u03b2 and its scheduling schemes. Our explanations can be used to interpret other techniques, which can be broadly categorized into two classes.",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The first category attempts to weaken Path B and force the decoder to use Path A. Word drop decoding (Bowman et al., 2015) sets a certain percentage of the target words to zero; it has been shown that this may degrade performance when the drop rate is too high. A dilated CNN was considered as a new type of decoder to replace the LSTM; by changing the decoder's dilation architecture, one can control Path B: the effective context from x<t. The second category of techniques improves the dependency in Path A, so that the decoder uses latent codes more easily. Skip connections were developed in (Dieng et al., 2018) to shorten the paths from z to x in the decoder. Zhao et al. (2017) introduced an auxiliary loss that requires the decoder to predict the bag-of-words in the dialog response; the decoder is thus forced to capture global information about the target response. Zhao et al. (2019) enhanced Path A via mutual information. Concurrent with our work, He et al. (2019) proposed to update the encoder multiple times to achieve better latent codes before updating the decoder. Semi-amortized training (Kim et al., 2018) was proposed to perform stochastic variational inference (SVI) (Hoffman et al., 2013) on top of the amortized inference in VAEs. It shares a similar motivation with the proposed approach, in that better latent codes can reduce KL vanishing. However, the computational cost of running SVI is high, while our schedules add no compute overhead. The KL scheduling methods are complementary to these techniques; as shown in the experiments, the proposed cyclical schedule can further improve them.",
"cite_spans": [
{
"start": 589,
"end": 622,
"text": "developed in (Dieng et al., 2018)",
"ref_id": null
},
{
"start": 672,
"end": 690,
"text": "Zhao et al. (2017)",
"ref_id": "BIBREF42"
},
{
"start": 797,
"end": 816,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 903,
"end": 921,
"text": "Zhao et al. (2019)",
"ref_id": "BIBREF41"
},
{
"start": 988,
"end": 1004,
"text": "He et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "\u03b2-VAE The VAE has been extended to \u03b2-regularized versions in a growing body of work (Higgins et al., 2017; Alemi et al., 2018) . Perhaps the seminal work is \u03b2-VAE (Higgins et al., 2017) , which was extended in (Kim and Mnih, 2018; Chen et al., 2018) to consider \u03b2 on the refined terms in the KL decomposition. Their primary goal is to learn disentangled latent representations to explain the data, by setting \u03b2 > 1. From an information-theoretic point of view, (Alemi et al., 2018) suggests a simple method, setting \u03b2 < 1, to ensure that latent-variable models with powerful stochastic decoders do not ignore their latent code. However, \u03b2 \u2260 1 results in an improper statistical model. Further, \u03b2 is static in their work; we consider a dynamically scheduled \u03b2 and find it more effective.",
"cite_spans": [
{
"start": 80,
"end": 102,
"text": "(Higgins et al., 2017;",
"ref_id": null
},
{
"start": 103,
"end": 122,
"text": "Alemi et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 158,
"end": 180,
"text": "(Higgins et al., 2017)",
"ref_id": null
},
{
"start": 205,
"end": 225,
"text": "(Kim and Mnih, 2018;",
"ref_id": "BIBREF16"
},
{
"start": 226,
"end": 244,
"text": "Chen et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 452,
"end": 472,
"text": "(Alemi et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Cyclical schedules Warm-restart techniques are common in optimization for dealing with multimodal functions. Cyclical schedules have been used to train deep neural networks (Smith, 2017), warm-restart stochastic gradient descent (Loshchilov and Hutter, 2017) , improve convergence rates (Smith and Topin, 2017), obtain model ensembles (Huang et al., 2017) and explore multimodal distributions in MCMC sampling (Zhang et al., 2019) . All these works applied cyclical schedules to the learning rate. In contrast, this paper is the first to consider a cyclical schedule for \u03b2 in VAEs. Though the techniques seem simple and similar, our motivation is different: we use the cyclical schedule to re-open Path A in Figure 1(b) and provide the opportunity to train the decoder with high-quality z.",
"cite_spans": [
{
"start": 227,
"end": 256,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": "BIBREF22"
},
{
"start": 333,
"end": 353,
"text": "(Huang et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 408,
"end": 428,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 713,
"end": 724,
"text": "Figure 1(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The source code to reproduce the experimental results will be made publicly available on GitHub 3 . For a fair comparison, we follow the practical recipe described in Section 3.2, where the monotonic schedule is treated as a special case of the cyclical schedule (while keeping all other settings the same). The default hyper-parameters of the cyclical schedule are used in all cases unless stated otherwise. We study the impact of hyper-parameters in the SM, and show that larger M can provide higher performance for various R. We present the major results in this section, with more details in the SM. The monotonic and cyclical schedules are denoted as M and C, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
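{
"text": "The cyclical schedule with M cycles and proportion R described above can be sketched as follows. This is an illustrative implementation consistent with the recipe in Section 3.2 (function and argument names are ours, not the paper's); the monotonic schedule falls out as the special case of a single cycle.

```python
def beta_schedule(step, total_steps, n_cycles=4, ratio=0.5):
    """Cyclical KL-weight schedule: within each cycle, beta ramps
    linearly from 0 to 1 over the first `ratio` fraction of the cycle,
    then stays at 1 for the remainder of the cycle."""
    period = total_steps / n_cycles
    tau = (step % period) / period  # progress within the current cycle
    return min(1.0, tau / ratio)

# The monotonic schedule is the special case n_cycles=1.
```

At the start of each cycle beta drops back to 0, re-opening Path A so the decoder is trained with the latent codes learned in previous cycles as warm restarts."
},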
{
"text": "We first consider language modeling on the Penn Tree Bank (PTB) dataset (Marcus et al., 1993) . Language modeling with VAEs has been a challenging problem, and few approaches have been shown to produce rich generative models that do not collapse to standard language models. Ideally, a deep generative model trained with variational inference would pursue a higher ELBO, making use of the latent space (i.e., maintaining a nonzero KL term) while accurately modeling the underlying distribution (i.e., lower reconstruction errors). We implemented the different schedules based on the code 4 published by Kim et al. (2018) . The latent variable is 32-dimensional, and 40 epochs are used. We compare the proposed cyclical annealing schedule with the monotonic schedule baseline that, following (Bowman et al., 2015) , anneals linearly from 0 to 1.0 over 10 epochs. We also compare with semi-amortized (SA) training (Kim et al., 2018) , which is considered the state-of-the-art technique for preventing KL vanishing. We set the number of SVI steps to 10.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF25"
},
{
"start": 763,
"end": 784,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": "6.1"
},
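{
"text": "The quantities tracked in this experiment, reconstruction, KL, and the beta-weighted ELBO objective, can be written out explicitly. Below is a minimal sketch assuming a diagonal-Gaussian posterior against a standard-normal prior (the common VAE setup; the function names are ours):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def beta_elbo(log_px_given_z, mu, logvar, beta):
    """beta-weighted ELBO: reconstruction log-likelihood minus beta * KL."""
    return log_px_given_z - beta * kl_to_standard_normal(mu, logvar)
```

KL vanishing corresponds to the KL term collapsing to zero while the model keeps a competitive ELBO, i.e., the decoder ignores z."
},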
{
"text": "Results are shown in Table 1 . The perplexity is reported in column PPL. The cyclical schedule outperforms the monotonic schedule for both standard VAE and SA-VAE training. Figure 5 : Learning curves of VAE and SA-VAE on PTB ((a) ELBO, (b) reconstruction error, (c) KL). Under similar ELBO, the cyclical schedule provides lower reconstruction errors and higher KL values than the monotonic schedule. SA-VAE training",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 1",
"ref_id": null
},
{
"start": 230,
"end": 238,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": "6.1"
},
{
"text": "Table 2 : Generated dialog responses from the cyclical and monotonic schedules.\nContext, Alice: yeah you know its interesting especially when my experience has always been at a public university. Topic: Choose a College. Target, Bob (statement): yeah that's right.\nC: 1. yes 2. oh really 3. and there's a lot of <unk> there's a lot of people 4. yeah 5. and i think that's probably the biggest problem i've ever seen in the past\nM: 1. i'm not sure 2. and i'm not sure 3. and i'm not sure 4. i'm not sure 5. i'm not sure",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": "6.1"
},
{
"text": "can effectively reduce KL vanishing, but it takes 472s per epoch, significantly more expensive than standard VAE training, which takes 30s per epoch. The proposed cyclical schedule adds almost zero cost. We show the learning curves for VAE and SA-VAE in Figure 5 . Interestingly, the cyclical schedule exhibits periodic learning behavior: its performance improves progressively after each cycle. While the ELBO and PPL are similar, the cyclical schedule improves the reconstruction ability and KL values for both VAE and SA-VAE. We observe clear over-fitting for SA-VAE with the monotonic schedule, while this issue is less severe with the cyclical schedule.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 278,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": "6.1"
},
{
"text": "Finally, we further investigate whether our improvements come simply from having a lower \u03b2, rather than from the cyclical schedule re-opening Path A for better learning. To test this, we use a monotonic schedule with maximum \u03b2 = 0.5. We observe that the reconstruction and KL terms perform better individually, but the ELBO is substantially worse than with \u03b2 = 1, because \u03b2 = 0.5 yields an improper model. Even so, the cyclical schedule improves its performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling",
"sec_num": "6.1"
},
{
"text": "We use a cyclical schedule to improve the latent codes in (Zhao et al., 2017) , which are key to diverse dialog-response generation. Following (Zhao et al., 2017) , the Switchboard (SW) Corpus (Godfrey and Holliman, 1997) is used, which has 2,400 two-sided telephone conversations.",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 145,
"end": 164,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 191,
"end": 219,
"text": "(Godfrey and Holliman, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional VAE for Dialog",
"sec_num": "6.2"
},
{
"text": "Table 3 (header): models CVAE and CVAE+BoW, each trained with schedule M or C; metrics include reconstruction perplexity (Rec-P).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional VAE for Dialog",
"sec_num": "6.2"
},
{
"text": "Two latent variable models are considered. The first is the conditional VAE (CVAE), which has been shown to outperform the encoder-decoder neural dialog model. The second augments the CVAE with a bag-of-words (BoW) loss to tackle the KL vanishing problem, as proposed in (Zhao et al., 2017) . Table 2 shows sample outputs generated with the two schedules using the CVAE. Caller Alice begins with an open-ended statement on choosing a college, and the model learns to generate responses from Caller Bob. The cyclical schedule generates highly diverse answers that cover multiple plausible dialog acts. In contrast, the responses from the monotonic schedule are limited to repetitive generic replies, i.e., \"i'm not sure\".",
"cite_spans": [
{
"start": 267,
"end": 286,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conditional VAE for Dialog",
"sec_num": "6.2"
},
{
"text": "Quantitative results are shown in Table 3 , using the evaluation metrics from (Zhao et al., 2017) .",
"cite_spans": [
{
"start": 78,
"end": 97,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Conditional VAE for Dialog",
"sec_num": "6.2"
},
{
"text": "(i) Smoothed sentence-level BLEU (Chen and Cherry, 2014) : BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty. We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to a [0, 1] scale. (ii) Cosine distance of bag-of-words embeddings (Liu et al., 2016) : a simple method to obtain sentence embeddings is to take the average or extreme of all the word embeddings in the sentence. We use GloVe embeddings and denote the average method as A-bow and the extreme method as E-bow. The score is normalized to [0, 1] . Higher values indicate more plausible responses.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Chen and Cherry, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 308,
"end": 326,
"text": "(Liu et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 573,
"end": 576,
"text": "[0,",
"ref_id": null
},
{
"start": 577,
"end": 579,
"text": "1]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional VAE for Dialog",
"sec_num": "6.2"
},
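{
"text": "The embedding-based metrics above can be sketched as follows, assuming the word vectors (e.g., GloVe) have already been looked up; here \"extreme\" is taken as the per-dimension value of largest magnitude with its sign preserved, and the function names are ours:

```python
import numpy as np

def a_bow(word_vecs):
    """Average bag-of-words sentence embedding (A-bow)."""
    return np.mean(np.asarray(word_vecs), axis=0)

def e_bow(word_vecs):
    """Extreme bag-of-words embedding (E-bow): per dimension, keep the
    value with the largest magnitude (sign preserved)."""
    V = np.asarray(word_vecs)
    idx = np.argmax(np.abs(V), axis=0)  # row index of the extreme value per dim
    return V[idx, np.arange(V.shape[1])]

def bow_cosine(u, v):
    """Cosine similarity between two sentence embeddings."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

A response is then scored by the cosine similarity between its embedding and that of the reference."
},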
{
"text": "The BoW indeed reduces the KL vanishing issue, as indicated by the increased KL and decreased reconstruction perplexity. When applying the proposed cyclical schedule to CVAE, we also see a reduced KL vanishing issue. Interestingly, it also yields the highest BLEU scores. This suggests that the cyclical schedule can generate dialog responses of higher fidelity with lower cost, as the auxiliary BoW loss is not necessary. Further, BoW can be improved when integrated with the cyclical schedule, as shown in the last column of Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 527,
"end": 534,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Conditional VAE for Dialog",
"sec_num": "6.2"
},
{
"text": "We consider the Yelp dataset, as pre-processed in (Shen et al., 2017) , for unsupervised language pre-training. Text features are extracted as the latent codes z of VAE models pre-trained with the monotonic and cyclical schedules. An autoencoder (AE) is used as the baseline. A good VAE can learn to cluster data into meaningful groups (Kingma and Welling, 2013), indicating that well-structured z are highly informative features, which usually lead to higher classification performance. To clearly compare the quality of z, we build a simple one-layer classifier on z and fine-tune the model on different proportions of labelled data (Zhang et al., 2017b) .",
"cite_spans": [
{
"start": 50,
"end": 69,
"text": "(Shen et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 619,
"end": 640,
"text": "(Zhang et al., 2017b)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Language Pre-training",
"sec_num": "6.3"
},
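{
"text": "The evaluation protocol, a one-layer classifier on the extracted codes z, can be sketched as a softmax probe trained with gradient descent. This is a hypothetical setup for illustration (the paper's exact classifier and optimizer are not specified here):

```python
import numpy as np

def train_linear_probe(Z, y, n_classes, lr=0.5, epochs=500):
    """Fit a one-layer softmax classifier on latent codes Z (n x d)."""
    n, d = Z.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                   # one-hot labels
    for _ in range(epochs):
        logits = Z @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
        G = (P - Y) / n                        # cross-entropy gradient
        W -= lr * Z.T @ G
        b -= lr * G.sum(axis=0)
    return W, b
```

Higher probe accuracy indicates more informative, better-structured latent codes."
},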
{
"text": "The results are shown in Figure 6 . The cyclical schedule consistently yields the highest accuracy among the compared methods. We visualize the t-SNE embeddings (Maaten and Hinton, 2008) of z in Figure 9 of the SM, and observe that the cyclical schedule exhibits clearer clustered patterns.",
"cite_spans": [
{
"start": 159,
"end": 184,
"text": "(Maaten and Hinton, 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 6",
"ref_id": "FIGREF6"
},
{
"start": 193,
"end": 201,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unsupervised Language Pre-training",
"sec_num": "6.3"
},
{
"text": "To enhance performance, we also apply a cyclical schedule to the learning rate \u03b7 on real tasks. This ensures that the optimizer has the same length of optimization trajectory for each cycle (so that each cycle can fully converge). To investigate the impact of cyclical \u03b2 on \u03b7, we perform two more ablation experiments: (i) we make only \u03b2 cyclical and keep \u03b7 constant; (ii) we make only \u03b7 cyclical and keep \u03b2 monotonic. The last-epoch numbers are shown in Table 4 , and the learning curves are shown in Figure 10 in the SM. Compared with the baseline, we see that it is the cyclical \u03b2 rather than the cyclical \u03b7 that contributes to the improved performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 460,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 499,
"end": 508,
"text": "Figure 10",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.4"
},
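{
"text": "The cyclical learning-rate schedule used in this ablation follows the warm-restart idea of (Loshchilov and Hutter, 2017). A cosine-with-restarts sketch is below; the constants are illustrative, not the paper's settings:

```python
import math

def cyclical_lr(step, period, eta_min=1e-4, eta_max=1e-2):
    """SGDR-style schedule: the learning rate restarts at eta_max at the
    start of each period and decays to eta_min along a cosine curve."""
    t = (step % period) / period  # progress within the current period
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t))
```

Aligning the learning-rate period with the beta cycle gives each cycle the same optimization trajectory, which is what the ablation isolates."
},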
{
"text": "We provide a novel two-path interpretation to explain the KL vanishing issue, and identify its source as a lack of good latent codes at the beginning of decoder training. This provides an understanding of various scheduling schemes, and motivates the proposed cyclical schedule. By re-opening the path at \u03b2 = 0, the cyclical schedule progressively improves performance, leveraging good latent codes learned in the previous cycles as warm restarts. We demonstrate the effectiveness of the proposed approach on three NLP tasks, and show that it is superior or complementary to other techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The Gaussian sampling remains for q(z|x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In practice, the monotonic schedule usually anneals at a very fast pace; thus R is small compared with the entire training procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/harvardnlp/sa-vae",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fixing a broken ELBO",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Alemi",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Poole",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Fischer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Dillon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rif",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Saurous",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2018,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dil- lon, Rif A Saurous, and Kevin Murphy. 2018. Fix- ing a broken ELBO. In ICML.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating sentences from a continuous space",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Samuel R Bowman",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06349"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, An- drew M Dai, Rafal Jozefowicz, and Samy Ben- gio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A systematic comparison of smoothing techniques for sentencelevel bleu",
"authors": [
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "362--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence- level bleu. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362-367.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Isolating sources of disentanglement in VAEs",
"authors": [
{
"first": "T",
"middle": [
"Q"
],
"last": "Ricky",
"suffix": ""
},
{
"first": "Xuechen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grosse",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Duvenaud",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricky TQ Chen, Xuechen Li, Roger Grosse, and David Duvenaud. 2018. Isolating sources of disentangle- ment in VAEs. NIPS.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Avoiding latent variable collapse with generative skip models",
"authors": [
{
"first": "B",
"middle": [],
"last": "Adji",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Dieng",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Kim",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.04863"
]
},
"num": null,
"urls": [],
"raw_text": "Adji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. 2018. Avoiding latent variable col- lapse with generative skip models. arXiv preprint arXiv:1807.04863.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sequential neural models with stochastic layers",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Fraccaro",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "S\u00f8ren Kaae",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "S\u00f8nderby",
"suffix": ""
},
{
"first": "Ole",
"middle": [],
"last": "Paquet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Winther",
"suffix": ""
}
],
"year": 2016,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Fraccaro, S\u00f8ren Kaae S\u00f8nderby, Ulrich Paquet, and Ole Winther. 2016. Sequential neural models with stochastic layers. In NIPS.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Switchboard-1 release 2: Linguistic data consortium. SWITCH-BOARD: A User's Manual",
"authors": [
{
"first": "J",
"middle": [],
"last": "Godfrey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Holliman",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Godfrey and E Holliman. 1997. Switchboard-1 re- lease 2: Linguistic data consortium. SWITCH- BOARD: A User's Manual.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep learning",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning, volume 1. MIT press Cam- bridge.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Z-forcing: Training stochastic recurrent networks",
"authors": [
{
"first": "Anirudh Goyal Alias Parth",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Marc-Alexandre",
"middle": [],
"last": "C\u00f4t\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anirudh Goyal Alias Parth Goyal, Alessandro Sor- doni, Marc-Alexandre C\u00f4t\u00e9, Nan Rosemary Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochas- tic recurrent networks. In NIPS.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lagging inference networks and posterior collapse in variational autoencoders",
"authors": [
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Spokoyny",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational au- toencoders. ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "Loic",
"middle": [],
"last": "Matthey",
"suffix": ""
},
{
"first": "Arka",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Burgess",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Botvinick",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "Jurgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural computation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stochastic variational inference",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paisley",
"suffix": ""
}
],
"year": 2013,
"venue": "The Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. The Journal of Machine Learning Research.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Elbo surgery: yet another way to carve up the variational evidence lower bound",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Matthew J Johnson",
"middle": [],
"last": "Hoffman",
"suffix": ""
}
],
"year": 2016,
"venue": "Workshop in Advances in Approximate Bayesian Inference, NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D Hoffman and Matthew J Johnson. 2016. Elbo surgery: yet another way to carve up the vari- ational evidence lower bound. In Workshop in Ad- vances in Approximate Bayesian Inference, NIPS.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward con- trolled generation of text. ICML.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Snapshot ensembles: Train 1, get m for free",
"authors": [
{
"first": "Gao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yixuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Zhuang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "John",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "Kilian Q",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. 2017. Snapshot ensembles: Train 1, get m for free. ICLR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Disentangling by factorising. ICML",
"authors": [
{
"first": "Hyunjik",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyunjik Kim and Andriy Mnih. 2018. Disentangling by factorising. ICML.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semiamortized variational autoencoders. ICML",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Sam Wiseman, Andrew C Miller, David Sontag, and Alexander M Rush. 2018. Semi- amortized variational autoencoders. ICML.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Autoencoding variational bayes",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Max Welling. 2013. Auto- encoding variational bayes. ICLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Stochastic wavenet: A generative latent variable model for sequential data",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Bohan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guoqing",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Bohan Li, Guoqing Zheng, and Yiming Yang. 2018. Stochastic wavenet: A generative latent variable model for sequential data. ICML workshop.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ALICE: Towards understanding adversarial learning for joint distribution matching",
"authors": [
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Pu",
"suffix": ""
},
{
"first": "Liqun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunyuan Li, Hao Liu, Changyou Chen, Yuchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. 2017. ALICE: Towards understanding adversarial learning for joint distribution matching. In NIPS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [
"V"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.08023"
]
},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. arXiv preprint arXiv:1603.08023.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sgdr: Stochastic gradient descent with warm restarts",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2017. Sgdr: Stochastic gradient descent with warm restarts. ICLR.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adversarial autoencoders",
"authors": [
{
"first": "Alireza",
"middle": [],
"last": "Makhzani",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Frey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. 2016. Adver- sarial autoencoders. ICLR workshop.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Building a large annotated corpus of english: The penn treebank. Computational linguistics",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computa- tional linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Language as a latent variable: Discrete generative models for sentence compression",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sen- tence compression. EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural variational inference for text processing",
"authors": [
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICML.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Černocký",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Stochastic backpropagation and approximate inference in deep generative models",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Jimenez Rezende",
"suffix": ""
},
{
"first": "Shakir",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. ICML.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using gener- ative hierarchical neural network models. In AAAI.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Style transfer from non-parallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Amortized inference regularization. NIPS",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Hung",
"middle": [
"H"
],
"last": "Bui",
"suffix": ""
},
{
"first": "Shengjia",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mykel",
"middle": [
"J"
],
"last": "Kochenderfer",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Ermon",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Shu, Hung H Bui, Shengjia Zhao, Mykel J Kochen- derfer, and Stefano Ermon. 2018. Amortized infer- ence regularization. NIPS.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Cyclical learning rates for training neural networks",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie N Smith. 2017. Cyclical learning rates for train- ing neural networks. In WACV. IEEE.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Superconvergence: Very fast training of residual networks using large learning rates",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Nicholay",
"middle": [],
"last": "Topin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.07120"
]
},
"num": null,
"urls": [],
"raw_text": "Leslie N Smith and Nicholay Topin. 2017. Super- convergence: Very fast training of residual net- works using large learning rates. arXiv preprint arXiv:1708.07120.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Latent intention dialogue models",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Yishu",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017. Latent intention dialogue mod- els. ICML.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Variational autoencoder for semi-supervised text classification",
"authors": [
{
"first": "Weidi",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Haoze",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In AAAI.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improved variational autoencoders for text modeling using dilated convolutions",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved varia- tional autoencoders for text modeling using dilated convolutions. ICML.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Benjamin Recht, and Oriol Vinyals. 2017a. Understanding deep learning requires rethinking generalization. ICLR",
"authors": [
{
"first": "Chiyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Recht",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiyuan Zhang, Samy Bengio, Moritz Hardt, Ben- jamin Recht, and Oriol Vinyals. 2017a. Understand- ing deep learning requires rethinking generalization. ICLR.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Cyclical stochastic gradient mcmc for bayesian deep learning",
"authors": [
{
"first": "Ruqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Gordon"
],
"last": "Wilson",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.03932"
]
},
"num": null,
"urls": [],
"raw_text": "Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. 2019. Cyclical stochastic gradient mcmc for bayesian deep learn- ing. arXiv preprint arXiv:1902.03932.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Deconvolutional paragraph representation learning",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017b. De- convolutional paragraph representation learning. In NIPS.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "InfoVAE: Information maximizing variational autoencoders",
"authors": [
{
"first": "Shengjia",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiaming",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Ermon",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengjia Zhao, Jiaming Song, and Stefano Ermon. 2019. InfoVAE: Information maximizing varia- tional autoencoders. AAAI.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Learning discourse-level diversity for neural dialog models using conditional variational autoencoders",
"authors": [
{
"first": "Tiancheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maxine",
"middle": [],
"last": "Eskenazi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoen- coders. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "VAE with an auto-regressive decoder",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Illustration of learning parameters { , \u2713} in the two different paradigms. Starting from the observation x in blue circle, a VAE infers its latent code z in the green circle, and further generates its reconstruction in the red circle. (a) Standard VAE learning, with only one path via { , \u2713} from x to its reconstruction; (b) VAE learning with an auto-regressive decoder. Two paths are considered from x to its reconstruction: Path A via { , \u2713} and Path B via \u2713.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Comparison between (a) traditional monotonic and (b) proposed cyclical annealing schedules.In this figure, M = 4 cycles are illustrated, R = 0.5 is used for increasing within each cycle.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "(b), compared with the monotonic schedule in Figure 2(a). Within one cycle, there are two consecutive stages (divided by R): \u2022 Annealing. is annealed from 0 to 1 in the first R dT /Me training steps over the course of a cycle. For example, the steps [1, 5K] in the Figure 2(b).",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Comparison of the learning curves for the three schedules on an illustrative problem.",
"num": null,
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Visualization of the latent space along the learning dynamics on an illustrative problem.",
"num": null,
"uris": null
},
"FIGREF6": {
"type_str": "figure",
"text": "Accuracy of fine-tuning on the unsupervised pretrained models on the Yelp dataset.",
"num": null,
"uris": null
},
"TABREF2": {
"html": null,
"text": "Comparison on dialog response generation. Reconstruction perplexity (Rec-P) and BLEU (B) scores are used for evaluation.",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF4": {
"html": null,
"text": "Comparison of cyclical schedules on and \u2318, tested with language modeling on PTB.",
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}