diff --git a/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_content_list.json b/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fc517de84e57d44eceeb3c33407de99e97de441d --- /dev/null +++ b/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ba8696109b1d63dd389fb8aec44d4a3a211a7cf7d0eaa47fc6584a7e295710a +size 129065 diff --git a/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_model.json b/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0248914df82b2674540ec1280bede08dc7da9341 --- /dev/null +++ b/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d9349ce1358def22bd97fbb6c8f30df50a5e8a0ec4b1c708dd473446ff988fd +size 138147 diff --git a/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_origin.pdf b/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ac91e034c4bdf255107b4ba2f42fa60af8155d5a --- /dev/null +++ b/stochasticconditionalgenerativenetworkswithbasisdecomposition/7181bcf9-4616-486b-b50c-3b2c94650efb_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71956661d37503e0b42c497859c8cf30b5ef0e33bd76bb73ea44fdcf259d9edb +size 17311611 diff --git a/stochasticconditionalgenerativenetworkswithbasisdecomposition/full.md 
b/stochasticconditionalgenerativenetworkswithbasisdecomposition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..db8bc3c01a39a3f67cd2e1b1497126853804fda7 --- /dev/null +++ b/stochasticconditionalgenerativenetworkswithbasisdecomposition/full.md @@ -0,0 +1,591 @@ +# STOCHASTIC CONDITIONAL GENERATIVE NETWORKS WITH BASIS DECOMPOSITION + +Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu + +Duke University + +{ze.w, xiuyuan.cheng, guillermo.sapiro, qiang.qiu}@duke.edu + +# ABSTRACT + +While generative adversarial networks (GANs) have revolutionized machine learning, a number of open questions remain to fully understand them and exploit their power. One of these questions is how to efficiently achieve proper diversity and sampling of the multi-mode data space. To address this, we introduce BasisGAN, a stochastic conditional multi-mode image generator. By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of basis elements, we learn a plug-and-play basis generator to stochastically generate basis elements, with just a few hundred parameters, to fully embed stochasticity into convolutional filters. By sampling basis elements instead of filters, we dramatically reduce the cost of modeling the parameter space with no sacrifice on either image diversity or fidelity. To illustrate this proposed plug-and-play framework, we construct variants of BasisGAN based on state-of-the-art conditional image generation networks, and train the networks by simply plugging in a basis generator, without additional auxiliary components, hyperparameters, or training objectives. The experimental success is complemented with theoretical results indicating how the perturbations introduced by the proposed sampling of basis elements can propagate to the appearance of generated images. 
+ 

# 1 INTRODUCTION

Conditional image generation networks learn mappings from the condition domain to the image domain by training on massive samples from both domains. The mapping from a condition, e.g., a map, to an image, e.g., a satellite image, is essentially one-to-many as illustrated in Figure 1. In other words, there exist many plausible output images that satisfy a given input condition, which motivates us to explore multi-mode conditional image generation that produces diverse images conditioned on a single input condition.

One technique to improve image generation diversity is to feed the image generator with an additional latent code, in the hope that such a code can carry information that is not covered by the input condition, so that diverse output images are achieved by decoding the missing information conveyed through different latent codes. However, as illustrated in the seminal work Isola et al. (2017), encoding the diversity with an input latent code can lead to unsatisfactory performance for the following reasons. During training, in addition to objectives like the GAN loss Goodfellow et al. (2014), regularizations like the L1 loss Isola et al. (2017) and the perceptual loss Wang et al. (2018) are imposed to improve both visual fidelity and correspondence to the input condition. However, no similar regularization is imposed to enforce the correspondence between outputs and latent codes, so that the network is prone to ignore input latent codes in training, and produce identical images from an input condition even with different latent codes. Several methods have been proposed to explicitly encourage the network to take into account input latent codes to encode diversity. For example, Mao et al. (2019) explicitly maximizes the ratio of the distance between generated images with respect to the corresponding latent codes, while Zhu et al. (2017b) applies an auxiliary network for decoding the latent codes from the generated images. 
Although the diversity of the generated images is significantly improved, these methods have drawbacks. In Mao et al. (2019), at least two samples generated from the same condition are needed for calculating the regularization term, which multiplies the memory footprint while training each mini-batch. Auxiliary network structures and training objectives in Zhu et al. (2017b) unavoidably increase training difficulty and memory footprint. These previously proposed methods usually require considerable modifications to the underlying framework. + +![](images/139650e49336467bb00eebcd8648ffa3e6ec7168c56891f9b9237405a4f065f7.jpg) +Figure 1: Illustration of the proposed BasisGAN. Diverse generated images are achieved by parameter generation in the stochastic sub-model, where basis generators take samples from a prior distribution and generate low-dimensional basis elements from the learned spaces. The sampled basis elements are linearly combined using the deterministic basis coefficients and used to reconstruct the convolutional filters. Filters in each stochastic layer are modeled with a separate basis generator. By convolving the same feature from the deterministic sub-model using different convolutional filters, images with diverse appearances are generated. + +In this paper, we propose a stochastic model, BasisGAN, that directly maps an input condition to diverse output images, aiming at building networks that model the multi-mode mapping intrinsically. The proposed method exploits the known observation that a well-trained deep network can converge to significantly different sets of parameters across multiple trainings, due to factors such as different parameter initializations and different choices of mini-batches. 
Therefore, instead of treating a conditional image generation network as a deterministic function with fixed parameters, we propose modeling the filter in each convolutional layer as a sample from a filter space, and learning the corresponding filter space using a tiny network for efficient and diverse filter sampling. In Ghosh et al. (2018), parameter non-uniqueness is used for multi-mode image generation by training several generators with different parameters simultaneously as a multi-agent solution. However, the maximum number of modes in Ghosh et al. (2018) is restricted by the number of agents, and the replication increases memory as well as computational cost. Based on the above parameter non-uniqueness property, we introduce into a deep network stochastic convolutional layers, where filters are sampled from learned filter spaces. Specifically, we learn the mapping from a simple prior to the filter space using neural networks, here referred to as filter generators. To empower a deterministic network with multi-mode image generation, we divide the network into a deterministic sub-model and a stochastic sub-model as shown in Figure 1, where standard convolutional layers and stochastic convolutional layers with filter generators are deployed, respectively. By optimizing an adversarial loss, filter generators can be jointly trained with a conditional image generation network. In each forward pass, filters at stochastic layers are sampled by filter generators. Highly diverse images conditioned on the same input are achieved by jointly sampling filters in multiple stochastic convolutional layers. + +However, the filters of a convolutional layer, written together as one vector, are usually high-dimensional, which makes the modeling and sampling of a filter space highly costly in practice in terms of training time, sampling time, and filter generator memory footprint. 
Based on the low-rank property observed from sampled filters, we decompose each filter as a linear combination of a small set of basis elements Qiu et al. (2018), and propose to only sample low-dimensional spatial basis elements instead of filters. By replacing filter generators with basis generators, the proposed method becomes highly efficient and practical. Theoretical arguments are provided on how perturbations introduced by sampling basis elements can propagate to the appearance of generated images. + +The proposed BasisGAN introduces a generalizable concept to promote diverse modes in conditional image generation. As basis generators act as plug-and-play modules, variants of BasisGAN can be easily constructed by replacing the standard convolutional layers in various state-of-the-art conditional image generation networks with stochastic layers equipped with basis generators. Then, we directly train them without additional auxiliary components, hyperparameters, or training objectives on top of the underlying models. Experimental results consistently show that the proposed BasisGAN is a simple yet effective solution to multi-mode conditional image generation. We further empirically show that the inherent stochasticity introduced by our method allows training without paired samples, and the one-to-many image-to-image translation is achieved using a stochastic auto-encoder where stochasticity prevents the network from learning a trivial identity mapping. + +Our contributions are summarized as follows: + +- We propose a plug-and-play basis generator to stochastically generate basis elements, with just a few hundred parameters, to fully embed stochasticity into network filters. +- Theoretical arguments are provided to support the simplification of replacing stochastic filter generation with basis generation. 
+- Both the generation fidelity and diversity of the proposed BasisGAN with basis generators are validated extensively, and state-of-the-art performances are consistently observed. + +# 2 RELATED WORK + +Conditional image generation. Parametric modeling of the natural image distribution has been studied for years, from restricted Boltzmann machines Smolensky (1986) to variational autoencoders Kingma & Welling (2013); in particular, variants with conditions Oord et al. (2016); Sohn et al. (2015); Van den Oord et al. (2016) show promising results. With the great power of GANs Goodfellow et al. (2014), conditional generative adversarial networks (cGANs) Isola et al. (2017); Pathak et al. (2016); Sangkloy et al. (2017); Wang et al. (2018); Xian et al. (2018); Zhu et al. (2017a) achieve great progress on generating visually appealing images given conditions. However, the quality of images and the loyalty to input conditions come at the cost of image diversity, as discussed in Zhu et al. (2017b), which is addressed by the proposed BasisGAN. + +Multi-mode conditional image generation. To enable cGANs with multi-mode image generation, pioneering works like infoGAN Chen et al. (2016) and pix2pix Isola et al. (2017) propose to encode the diversity in an input latent code. To force the networks to take input latent codes into account, Zhu et al. (2017b) deploys auxiliary networks and training objectives to impose the recovery of the input latent code from the generated images. MSGAN Mao et al. (2019) and DSGAN Yang et al. (2019) propose regularization terms for diversity that enforce a larger distance between images generated with different input latent codes given one input condition. These methods require considerable modifications to the underlying original framework. + +Neural network parameter generation and uncertainty. Extensive studies have been conducted on generating network parameters with another network since Hypernetworks Ha et al. (2016). 
As a seminal work on network parameter modeling, Hypernetworks successfully reduce the number of learnable parameters by relaxing weight-sharing across layers. Follow-up works like Bayesian Hypernetworks Krueger et al. (2017) further introduce uncertainty into the generated parameters. Variational inference based methods like Bayes by Backprop Blundell et al. (2015) address the intractable posterior distribution of parameters by assuming a prior (usually Gaussian). However, the assumed prior unavoidably degrades the expressiveness of the learned distribution. Parameter prediction for neural networks has been intensively studied in the context of few-shot learning Bertinetto et al. (2016); Qiao et al. (2018); Wang et al. (2019), which aims to customize a network to a new task adaptively and efficiently in a data-driven way. Apart from few-shot learning, Denil et al. (2013) suggests parameter prediction as a way to study the redundancy in neural networks. While studying the representation power of random weights, Saxe et al. (2011) also suggests the uncertainty and non-uniqueness of network parameters. Another family of networks with uncertainty is based on variational inference Blundell et al. (2015), where an assumption on the distribution of network weights is imposed to make learning that distribution tractable. Works studying the relationship between local and global minima of deep networks Haeffele & Vidal (2015); Vidal et al. (2017) also suggest the non-uniqueness of the optimal parameters of a deep network. + +# 3 STOCHASTIC FILTER GENERATION + +A conditional generative network (cGAN) Mirza & Osindero (2014) learns the mapping from an input condition domain $\mathcal{A}$ to an output image domain $\mathcal{B}$ using a deep neural network. Conditional image generation is essentially a one-to-many mapping, as there could be multiple plausible instances $\mathbf{B} \in \mathcal{B}$ that map to a condition $\mathbf{A} \in \mathcal{A}$ Zhu et al. 
(2017b), corresponding to a distribution $p(\mathbf{B}|\mathbf{A})$ . However, the naive mapping of the generator formulated by a neural network $G: \mathbf{A} \to \mathbf{B}$ is deterministic, and is incapable of covering the distribution $p(\mathbf{B}|\mathbf{A})$ . We exploit the non-uniqueness of network parameters as discussed above, and introduce stochasticity into convolutional filters through plug-and-play filter generators. To achieve this, we divide a network into two sub-models: + +- A deterministic sub-model with convolutional filters $\phi$ that remain fixed after training; +- A stochastic sub-model whose convolutional filters $\mathbf{w}$ are sampled from parameter spaces modeled by neural networks $T$ , referred to as filter generators, parametrized by $\theta$ with inputs $z$ from a prior distribution, e.g., $\mathcal{N}(0,I)$ for all experiments in this paper. + +Note that filters in each stochastic layer are modeled with a separate neural network, which is not explicitly shown in the formulation for notational brevity. With this formulation, the conditional image generation becomes $G_{\phi,\theta} : \mathbf{A} \to \mathbf{B}$ , with stochasticity achieved by sampling filters $\mathbf{w} = T_{\theta}(z)$ for the stochastic sub-model in each forward pass. The conditional GAN loss Goodfellow et al. (2014); Mirza & Osindero (2014) then becomes + +$$
\begin{array}{l} \min _ {G} \max _ {D} V (D, G) = \mathbb {E} _ {\mathbf {A} \sim p (\mathbf {A}), \mathbf {B} \sim p (\mathbf {B} | \mathbf {A})} [ \log (D (\mathbf {A}, \mathbf {B})) ] + \tag {1} \\ \mathbb {E} _ {\mathbf {A} \sim p (\mathbf {A}), z \sim p (z)} [ \log (1 - D (\mathbf {A}, G _ {\phi , \theta} (\mathbf {A}; T _ {\theta} (z)))) ], \\ \end{array}
$$ + +where $D$ denotes a standard discriminator. Note that we represent the generator here as $G_{\phi, \theta}(A; T_{\theta}(z))$ to emphasize that the generator uses stochastic filters $\mathbf{w} = T_{\theta}(z)$ . 
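As a concrete illustration of such a filter generator, the following sketch (NumPy, with made-up layer sizes; a toy stand-in for the learned $T_{\theta}$, not the architecture used in the paper) maps a fresh draw $z \sim \mathcal{N}(0, I)$ to a full set of convolutional filters on every call:

```python
import numpy as np

rng = np.random.default_rng(0)

L, C_in, C_out = 3, 8, 16   # kernel size and channel counts (illustrative)
z_dim, hidden = 32, 64      # latent and hidden sizes (illustrative)

# theta: weights of a tiny two-layer filter generator T_theta
# (random stand-ins here; in the model they are learned jointly)
W1 = rng.standard_normal((hidden, z_dim)) * 0.1
W2 = rng.standard_normal((L * L * C_in * C_out, hidden)) * 0.1

def sample_filters():
    """One pass of T_theta: z ~ N(0, I) -> convolutional filters w."""
    z = rng.standard_normal(z_dim)
    h = np.tanh(W1 @ z)
    return (W2 @ h).reshape(L, L, C_in, C_out)

# Filters are re-drawn in each forward pass of the generator.
w1, w2 = sample_filters(), sample_filters()
print(w1.shape)             # (3, 3, 8, 16)
print(np.allclose(w1, w2))  # False: each draw yields different filters
```

Because the filters are re-drawn per forward pass, two passes over the same condition $\mathbf{A}$ generally use different filters and hence produce different output images.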
Given a stochastic generative network parametrized by $\phi$ and $\theta$ , and an input condition $\mathbf{A}$ , the generated images form a conditional probability $q_{\phi, \theta}(\mathbf{B}|\mathbf{A})$ , so that (1) can be simplified as + +$$
\begin{array}{l} \min _ {G} \max _ {D} V (D, G) = \mathbb {E} _ {\mathbf {A} \sim p (\mathbf {A}), \mathbf {B} \sim p (\mathbf {B} | \mathbf {A})} \log D (\mathbf {A}, \mathbf {B}) + \tag {2} \\ \mathbb {E} _ {\mathbf {A} \sim p (\mathbf {A}), \mathbf {B} \sim q _ {\phi , \theta} (\mathbf {B} | \mathbf {A})} \log [ 1 - D (\mathbf {A}, \mathbf {B}) ]. \\ \end{array}
$$ + +When the optimal discriminator is achieved, (2) can be reformulated as + +$$
C (G) = \max _ {D} V (D, G) = \mathbb {E} _ {\mathbf {A} \sim p (\mathbf {A})} [ - \log (4) + 2 \cdot J S D (p (\mathbf {B} | \mathbf {A}) | | q _ {\phi , \theta} (\mathbf {B} | \mathbf {A})) ], \tag {3}
$$ + +where $JSD$ is the Jensen-Shannon divergence (the proof is provided in the supplementary material). The global minimum of (3) is achieved when, given every sampled condition $\mathbf{A}$ , the generator perfectly replicates the true distribution $p(\mathbf{B}|\mathbf{A})$ , which indicates that by directly optimizing the loss in (1), conditional image generation with diversity is achieved with the proposed stochasticity in the convolutional filters. + +To optimize (1), we train $D$ as in Goodfellow et al. (2014) to maximize the probability of assigning the correct label to both training examples and samples from $G_{\phi, \theta}$ . Simultaneously, we train $G_{\phi, \theta}$ to minimize the following loss, where filter generators $T_{\theta}$ are jointly optimized to bring stochasticity: + +$$
\mathcal {L} = \mathbb {E} _ {\mathbf {A} \sim p (\mathbf {A}), z \sim p (z)} [ \log (1 - D (\mathbf {A}, G _ {\phi , \theta} (\mathbf {A}; T _ {\theta} (z)))) ]. 
\tag {4}
$$ + +We describe in detail the optimization of the generator parameters $\{\phi, \theta\}$ in supplementary material Algorithm 1. + +Discussions on diversity modeling in cGANs. The goal of a cGAN is to model the conditional probability $p(\mathbf{B}|\mathbf{A})$ . Previous cGAN models Mao et al. (2019); Mirza & Osindero (2014); Zhu et al. (2017b) typically incorporate randomness in the generator by setting $\mathbf{B} = G(\mathbf{A},z), z \sim p(z)$ , where $G$ is a deep network with deterministic parametrization and the randomness is introduced via $z$ , e.g., a latent code, as an extra input. This formulation implicitly makes the following two assumptions: (A1) The randomness of the generator is independent from that of $p(A)$ ; (A2) Each realization $\mathbf{B}(\omega)$ conditional on $\mathbf{A}$ can be modeled by a CNN, i.e., $\mathbf{B} = G^{\omega}(\mathbf{A})$ , where $G^{\omega}$ is a draw from an ensemble of CNNs, $\omega$ being the random event. (A1) is reasonable as long as the source of variation to be modeled by the cGAN is independent from that contained in $\mathbf{A}$ , and the rationale of (A2) lies in the expressive power of CNNs for image-to-image translation. The previous model adopts a specific form of $G^{\omega}(A)$ via feeding random input $z(\omega)$ to $G$ , yet one may observe that the most general formulation under (A1), (A2) would be to sample the generator itself from a certain distribution $p(G)$ , which is independent from $p(\mathbf{A})$ . Since generative CNNs are parametrized by convolutional filters, this would be equivalent to setting $\mathbf{B} = G(\mathbf{A};\mathbf{w})$ , $\mathbf{w} \sim p(\mathbf{w})$ , where we use ";" in the parentheses to emphasize that what follows it is the parametrization of the generator. The proposed cGAN model in the current paper indeed takes such an approach, where we model $p(\mathbf{w})$ by a separate filter generator network. 
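For completeness, the step from (2) to (3) follows the standard GAN analysis Goodfellow et al. (2014): for a fixed generator, the pointwise optimal discriminator is

$$
D^{*}(\mathbf{A}, \mathbf{B}) = \frac{p(\mathbf{B}|\mathbf{A})}{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})},
$$

and substituting $D^{*}$ into (2) gives, for each condition $\mathbf{A}$,

$$
\mathbb{E}_{\mathbf{B} \sim p(\mathbf{B}|\mathbf{A})} \log \frac{p(\mathbf{B}|\mathbf{A})}{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} + \mathbb{E}_{\mathbf{B} \sim q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \log \frac{q_{\phi,\theta}(\mathbf{B}|\mathbf{A})}{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} = -\log(4) + 2 \cdot JSD(p(\mathbf{B}|\mathbf{A}) || q_{\phi,\theta}(\mathbf{B}|\mathbf{A})),
$$

and taking the expectation over $\mathbf{A} \sim p(\mathbf{A})$ recovers (3).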
# 4 STOCHASTIC BASIS GENERATION + +Using the method above, the filters $\mathbf{w}$ of each stochastic layer are generated in the form of a high-dimensional vector of size $L\times L\times C^{\prime}\times C$ , where $L$ , $C^\prime$ , and $C$ correspond to the kernel size and the numbers of input and output channels, respectively. Although directly generating such high-dimensional vectors is feasible, it can be highly costly in terms of training time, sampling time, and memory footprint when the network scale grows. We present a thorough comparison in terms of generation quality and sampled filter size in supplementary material Figure A.1, where it is clearly shown that filter generation is too costly to afford. In this section, we propose to replace filter generation with basis generation to achieve the quality/cost trade-off shown by the red dot in Figure A.1. Details on the memory, parameter number, and computational cost are also provided at the end of the supplementary material, Section G. + +For convolutional filters, the weights $\mathbf{w}$ form a 3-way tensor involving a spatial index and two channel indices for input and output channels, respectively. Tensor low-rank decomposition cannot be defined in a unique way. For convolutional filters, a natural solution then is to separate out the spatial index, which leads to depth-separable network architectures Chollet (2017). Among other studies of low-rank factorization of convolutional layers, Qiu et al. (2018) proposes to approximate a convolutional filter using a set of prefixed basis elements linearly combined by learned reconstruction coefficients. + +Given that the weights in convolutional layers may have a low-rank structure, we collect a large number of generated filters and reshape the stack of $N$ sampled filters into a 2-dimensional matrix $\mathbf{F}$ of size $J \times J'$ , where $J = N \times L \times L$ and $J' = C' \times C$ . 
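As a synthetic sketch of this reshaping (hypothetical sizes; the filters are constructed here, for illustration only, as combinations of $K$ shared channel-mixing components, the structure formalized in Theorem 1 below), the effective rank of $\mathbf{F}$ can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 100, 3          # number of sampled filters, kernel size (illustrative)
C_in, C_out = 8, 16    # input/output channels (illustrative)
K = 7                  # number of basis elements

# K deterministic channel-mixing components a_k, shared across all samples
A = rng.standard_normal((K, C_in * C_out))
# Each sampled filter has its own random spatial coefficients b_k(u)
B = rng.standard_normal((N * L * L, K))

# Stacked and reshaped filters: J x J' with J = N*L*L and J' = C_in*C_out
F = B @ A
print(F.shape)  # (900, 128)

# Effective rank: singular values above a relative tolerance
s = np.linalg.svd(F, compute_uv=False)
eff_rank = int(np.sum(s > 1e-8 * s[0]))
print(eff_rank)  # 7, i.e., at most K regardless of N
```

Under this structure the effective rank is bounded by $K$ no matter how many filters $N$ are stacked, which is what makes generating $K$ small basis elements much cheaper than generating full filters.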
We consistently observe that $\mathbf{F}$ is always of low effective rank, regardless of the network scale we use to estimate the filter distribution. If we assume that a collection of generated filters obeys such a low-rank structure, the following theorem proves that it suffices to generate bases in order to generate the desired distribution of filters. + +Theorem 1. Let $(\Omega, \mathbb{P})$ be a probability space and $\mathbf{F} : \Omega \to \mathbb{R}^{L^2 \times C' \times C}$ a 3-way random tensor, where $\mathbf{F}$ maps each event $\omega$ to $\mathbf{F}^\omega(u, \lambda', \lambda)$ , $u \in [L] \times [L]$ , $\lambda' \in [C']$ , $\lambda \in [C]$ . For each fixed $\omega$ and $u$ , $\mathbf{F}^\omega(u) := \{\mathbf{F}^\omega(u, \lambda', \lambda)\}_{\lambda', \lambda} \in \mathcal{L}(\mathbb{R}^{C'}, \mathbb{R}^C)$ . If there exists a set of deterministic linear transforms $a_k, k = 1, \dots, K$ in $\mathcal{L}(\mathbb{R}^{C'}, \mathbb{R}^C)$ s.t. $\mathbf{F}^\omega(u) \in \text{Span}\{a_k\}_{k=1}^K$ for any $\omega$ and $u$ , then there exist $K$ random vectors $\mathbf{b}_k : \Omega \to \mathbb{R}^{L^2}$ , $k = 1, \dots, K$ , s.t. $\mathbf{F}(u, \lambda', \lambda) = \sum_{k=1}^K \mathbf{b}_k(u) a_k(\lambda', \lambda)$ in distribution. If $\mathbf{F}$ has a probability density, then so do $\{\mathbf{b}_k\}_{k=1}^K$ . + +The proof of the theorem is provided in the supplementary material. + +We simplify the expensive filter generation problem by decomposing each filter as a linear combination of a small set of basis elements, and then sampling basis elements instead of filters directly. In our method, we assume that the diverse modes of conditional image generation are essentially caused by spatial perturbations, and we thus propose to introduce stochasticity into the spatial basis elements. Specifically, we apply convolutional filter decomposition as in Qiu et al. 
(2018) to write $\mathbf{w} = \psi \mathbf{a}$ , $\psi \in \mathbb{R}^{L \times L \times K}$ , where $\psi$ are basis elements, $\mathbf{a}$ are decomposition coefficients, and $K$ is a pre-defined small value, e.g., $K = 7$ . We keep the decomposition coefficients $\mathbf{a}$ deterministic and learn them directly from training samples. Instead of using predefined basis elements as in Qiu et al. (2018), we adopt a basis generator network $\mathcal{T}(\theta, z)$ parametrized by $\theta$ , which learns the mapping from random latent vectors $z$ to basis elements $\psi$ with stochasticity. The basis generator networks are jointly trained with the main conditional image generation network in an end-to-end manner. Note that we inherit the term 'basis' from DCFNet Qiu et al. (2018) for the intuition behind the proposed framework, and we do not impose additional constraints such as orthogonality or linear independence on the generated elements. Sampling the basis elements $\psi$ using basis generators dramatically reduces the difficulty of modeling the corresponding probability distribution. The costly filter generators in Section 3 are now replaced by much more efficient basis generators, and stochastic filters are then constructed by linearly combining sampled basis elements with the deterministic coefficients, $\mathbf{w} = \psi \mathbf{a} = \mathcal{T}(\theta ,z)\mathbf{a}$ . The convolutional filter reconstruction is illustrated as part of Figure 1. As illustrated in this figure, BasisGAN is constructed by replacing convolutional layers with the proposed stochastic convolutional layers with basis generators, and the network parameters can be learned without additional auxiliary training objectives or regularizations. + +# 5 EXPERIMENTS + +In this section, we conduct experiments on multiple conditional generation tasks. 
Our preliminary objective is to show that, thanks to the inherent stochasticity of the proposed BasisGAN, multi-mode conditional image generation can be learned without any additional regularizations that explicitly promote diversity. The effectiveness of the proposed BasisGAN is demonstrated by quantitative and qualitative results on multiple tasks and underlying models. We start with a stochastic auto-encoder example to demonstrate the inherent stochasticity brought by the basis generators. Then we proceed to image-to-image translation tasks, and compare the proposed method with: the regularization based methods DSGAN Yang et al. (2019) and MSGAN Mao et al. (2019), which adopt explicit regularization terms that encourage a larger distance between output images generated with different latent codes; the model based method MUNIT Huang et al. (2018), which explicitly decouples appearance from content and achieves diverse image generation by manipulating the appearance code; and BicycleGAN Zhu et al. (2017b), which uses auxiliary networks to encourage the diversity of the generated images with respect to the input latent code. We further demonstrate that, as an essential way to inject randomness into conditional image generation, our method is compatible with existing regularization based methods, which can be adopted together with our proposed method for further performance improvements. Finally, ablation studies on the size of the basis generators and the effect of $K$ are provided in the supplementary material, Section E. + +# 5.1 STOCHASTIC AUTO-ENCODER + +The inherent stochasticity of the proposed BasisGAN allows learning a conditional one-to-many mapping even without paired samples for training. We validate this with a variant of BasisGAN referred to as the stochastic auto-encoder, which is trained to perform simple self-reconstruction with real-world images as inputs. Only an L1 loss and a GAN loss are imposed to promote fidelity and correspondence. 
However, thanks to the inherent stochasticity of BasisGAN, we observe that the network does not collapse to a trivial identity mapping, and diverse outputs with strong correspondence to the input images are generated with appealing fidelity. Some illustrative results are shown in Figure 2. + +![](images/50c5646773854b6c72cb861164fd05cc4c09d4866e3a723cf75150536cda42b0.jpg) + +![](images/7d0d6de1686764b2cd52139aa4a32e1340d3d881d64297d8009e7c155bcfcb84.jpg) +Input + +![](images/de791ad012b861a996a476edcab8774fb6615170ee727bccfe827bbc09b7da68.jpg) + +![](images/9a41056a226f37e8bc58baf4d6c317b67112a8651c217931c2afbe795096206e.jpg) +Generated diverse samples + +![](images/3eeb4c11d4dfb4120ebb3bd4ddac13742c6b7f1cd8dea9412ac4f539f6b7d20c.jpg) + +![](images/9e142cc244413a983591fd3562883ef2944916ea7a455c1128d5139478f242d1.jpg) +Input
Figure 2: Stochastic auto-encoder: one-to-many conditional image generation without paired samples. The network is trained directly to reconstruct the input real-world images, and the inherent stochasticity of the proposed method successfully promotes diverse output appearances with strong fidelity and correspondence to the inputs. + +![](images/30e532803b3a99782489c31a04ee6af32081a1de38d077f226f42b966b0c031c.jpg) + +![](images/cb83cf5fc4fb2ad2eef27651c9cd76ead3a6df623a1ec1bfc0414762f312bb8c.jpg) +Generated diverse samples + +# 5.2 IMAGE TO IMAGE TRANSLATION + +To faithfully validate the fidelity and diversity of generated images, we follow Mao et al. (2019) to evaluate the performance quantitatively using the following metrics: + +LPIPS. The diversity of generated images is measured using LPIPS Mao et al. (2019). LPIPS computes the distance between images in a deep feature space. Generated images with higher diversity give higher LPIPS scores, which are more favourable in conditional image generation. + +FID. FID Heusel et al. (2017) is used to measure the fidelity of the generated images. 
It computes + +![](images/1e42957f7dd07b261ff55edde69493e18f539a6e77fb7ff49270ae319e76f287.jpg) + +![](images/6955150581b8b4a711592d8337f554f8d11d22888f892ec933d65f22a6b1487c.jpg) + +![](images/1bcd58fa19364ff518cea36630d7914f598834eb9e7818ef16a055e63b2881b3.jpg) + +![](images/b0bde2b96115d2a47e0bb4accd62bf2a4644353386ad119e27dbd0f85a079cdc.jpg) +Input + +![](images/751789a59d654a8e6eaf7eec3e2fee24f503c19db5d22d787229457266541fba.jpg) + +![](images/71d8d64dea6fb4fd59979501f8d530e085a689d64955ab45b1adda5f57642d75.jpg) + +![](images/63d4a225e152634704c3486a1ce2b6f733726b71d539366f819455b579df2566.jpg) + +![](images/edd4a84db4c5badc309b98d31ec361bb4c42fb630e880d376107394567ddec72.jpg) +Ground truth + +![](images/2d9b08aa50b7fb5d0eefd7025285fa5df0acaebc9aa74ec68bd2540fc88155fb.jpg) + +![](images/492ca5c1046092f8fe3a8a21073ae7416e1c3087d32e8e8c393cfac87fb30059.jpg) + +![](images/bfd6906ee37addce7d826557261bb2af691faaee2aa4e69eacd5fe6cb06b7f0e.jpg) + +![](images/eb57ef7797f0b55c33871f0deac9f47207f33256b72cfefeb1dd9a2d5b1314cc.jpg) +Figure 3: BasisGAN adapted from Pix2Pix. The network is trained without any auxiliary loss functions or regularizations. From top to bottom, the image to image translation tasks are: edges $\rightarrow$ handbags, edges $\rightarrow$ shoes, maps $\rightarrow$ satellite, nights $\rightarrow$ days, facades $\rightarrow$ buildings. Additional examples are provided in the supplementary material, Figure A.2. 
+ +![](images/3ceadfb5a01d112b5464ce0dc88381a25bf755250c68d18f9a6b5af0d3d3d810.jpg) + +![](images/3ad4ba6e45fc947a8929505f5b657e5c2877a3c4ac940b9b92c905b3be95470e.jpg) + +![](images/628e5661f69042456691728cc1232e3b004ead80ef113f172abc55435ff54807.jpg) + +![](images/b8950112b1179b8aa67a638f975730eb597120b3a087786298fa9a13ad514688.jpg) +Generated diverse samples + +![](images/1efd92ae073e1ab38cd5c04ebe025580cf28cb25344a2614ecb8d2c92890469e.jpg) + +![](images/9af87d8b98092a2ec12c262ea7f6f62c80f46f62a3e88c301adb30265bb4bc9f.jpg) + +![](images/f32c0a4233ee6568003d108eac6d1e649ef2d2ce5fbd08a1aea4327b027f93d3.jpg) + +![](images/08b497914692875e4d927dc72067af021149c093bb1d5f169e2351ba57cb467d.jpg) + +![](images/2c006a146415ca2e2b36d2a66f88650157973cfdc9288d6b5ee7660261d03b41.jpg) + +![](images/474d6f26186d39511a7b2e39a9715d776deb4696b2330329d28b72a077ec2be3.jpg) + +![](images/9a1d295698dd0bae4e127e3ee01f73decb9b6f8cb1a6789ed894797a0c0e400b.jpg) + +![](images/8a316066fa106882828472dc8e606fa9248b34ce83fa10e5e2ccd298a4d24e7a.jpg) + +![](images/84ee8af224694324f08f00b3c7e723fa6006b9ed6aa6137f81a93961082d3d4b.jpg) + +![](images/5162c563d1bfb21bfc2c3e31d6e2efcfe3dc4e05bdfe37ed3130be3608ea7465.jpg) + +![](images/e077b65014bcc68e3e2d7ae655bd09ece69d7a4630d80e140b1fcb34e8837957.jpg) + +![](images/0fd5af6c14fbab042be3e40f78ec879ccae4aa21f4e6a1270642328f169fc463.jpg) + +![](images/a12e54eb018f917b08bfcc9ad068108dc28fa8e66a2aa6a546a73811d71219b9.jpg) + +![](images/693faf68766a76ad6636c8a3ccd605922c7bf3e168e1023b4582624dc76fbb5d.jpg) + +![](images/a40403bb0856a0cffe248feea77a68bb1ad4c32fca3376e892ad9014e586c159.jpg) + +![](images/21549837caf6c4bf660243c9967bcb6b4901642b5dddba5df4f74e2e76cafd61.jpg) + +the distance between the distribution of the generated images and the true images. Since the entire GAN family is to faithfully model true data distribution parametrically, lower FID is favourable in our case since it reflects a closer fit to the desired distribution. 
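The two metrics above can be made concrete with a small sketch. The following NumPy code (function names and toy feature vectors are our own, not from the paper) evaluates the Fréchet distance between two Gaussians fitted to feature statistics, which is the quantity FID computes, and illustrates the diversity score as an average pairwise distance, with plain L2 on feature vectors standing in for the learned LPIPS metric:

```python
import numpy as np

def _sqrtm_psd(a):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    # ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2}),
    # with Tr((s1 s2)^{1/2}) computed as Tr((s1^{1/2} s2 s1^{1/2})^{1/2}).
    s1h = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1h @ sigma2 @ s1h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * np.trace(covmean))

def mean_pairwise_distance(features):
    # Diversity as the average distance over all pairs of generated samples;
    # LPIPS would use distances in a pretrained network's feature space instead.
    n = len(features)
    pairs = [np.linalg.norm(features[i] - features[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))
```

Identical statistics give a distance of 0, so a generator whose sample statistics match the data statistics minimizes FID, which is why lower FID reflects a closer fit to the desired distribution.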
Pix2Pix $\rightarrow$ BasisGAN. As one of the most prevalent conditional image generation networks, Pix2Pix Isola et al. (2017) serves as a solid baseline for many multi-mode conditional image generation methods. It achieves conditional image generation by feeding the generator a conditional image, and training the generator to synthesize images with both a GAN loss and an L1 loss to the ground truth image. Typical applications of Pix2Pix include edge maps $\rightarrow$ shoes or handbags, maps $\rightarrow$ satellites, and so on. We adopt the ResNet-based Pix2Pix model, and impose the proposed stochasticity in the successive residual blocks, where regular convolutional layers and convolutional layers with basis generators alternately convolve with the feature maps. The network is re-trained from scratch without any extra loss functions or regularizations. Some samples are visualized in Figure 3. For a fair comparison with previous works Isola et al. (2017); Mao et al. (2019); Zhu et al. (2017b); Yang et al. (2019); Huang et al. (2018), we perform quantitative evaluations on image to image translation tasks; the results are presented in Table 1, and qualitative comparisons in Figure A.3. As discussed, all the state-of-the-art methods require considerable modifications to the underlying framework. By simply plugging the proposed stochastic basis generators into the Pix2Pix model, BasisGAN generates significantly more diverse images at a quality comparable to other state-of-the-art methods. Moreover, as shown in Table A.3, BasisGAN reduces the number of trainable parameters compared to the underlying models, thanks to the small number of basis elements and the tiny basis generator structures, whereas regularization-based methods like Mao et al. (2019); Yang et al. (2019) keep the parameter counts of the underlying network models unchanged.

Pix2PixHD $\rightarrow$ BasisGAN.
In this experiment, we report results on high-resolution scenarios, which particularly demand efficiency and have not been previously studied by other conditional image generation methods.

We conduct high resolution image synthesis on Pix2PixHD Wang et al. (2018), which is proposed to conditionally generate images with resolution up to $2048 \times 1024$. The importance of this experiment arises from the fact that existing methods Mao et al. (2019); Zhu et al. (2017b) require considerable modifications to the underlying networks, which in this case are difficult to scale to very high resolution image synthesis due to the memory limits of modern hardware. Our method requires no auxiliary network structures or special batch formulation, and thus scales easily to large scale scenarios. Some generated samples are visualized in Figure 4. Quantitative results and comparisons

Table 1: Quantitative results on image to image translation. Diversity and fidelity are measured using LPIPS and FID, respectively. Pix2Pix Isola et al. (2017), BicycleGAN Zhu et al. (2017b), MSGAN Mao et al. (2019), and DSGAN Yang et al. (2019) are included in the comparisons. DSGAN adopts a different setting (denoted as 20s in the table) by generating 20 samples per input for computing the scores. We report results under both settings.
Dataset: Labels → Facade

| Methods | Pix2Pix | BicycleGAN | MSGAN | BasisGAN | DSGAN (20s) | BasisGAN (20s) |
| --- | --- | --- | --- | --- | --- | --- |
| Diversity ↑ | 0.0003 ± 0.0000 | 0.1413 ± 0.0005 | 0.1894 ± 0.0011 | 0.2648 ± 0.004 | 0.18 | 0.2594 ± 0.004 |
| Fidelity ↓ | 139.19 ± 2.94 | 98.85 ± 1.21 | 92.84 ± 1.00 | 88.7 ± 1.28 | 57.20 | 24.14 ± 0.76 |

Dataset: Map → Satellite

| Methods | Pix2Pix | BicycleGAN | MSGAN | BasisGAN | DSGAN (20s) | BasisGAN (20s) |
| --- | --- | --- | --- | --- | --- | --- |
| Diversity ↑ | 0.0016 ± 0.0003 | 0.1150 ± 0.0007 | 0.2189 ± 0.0004 | 0.2417 ± 0.005 | 0.13 | 0.2398 ± 0.005 |
| Fidelity ↓ | 168.99 ± 2.58 | 145.78 ± 3.90 | 152.43 ± 2.52 | 35.54 ± 2.19 | 49.92 | 28.92 ± 1.88 |

Datasets: Edge → Handbag and Edge → Shoe

| Methods | MUNIT (Handbag) | BasisGAN (Handbag) | MUNIT (Shoe) | BasisGAN (Shoe) |
| --- | --- | --- | --- | --- |
| Diversity ↑ | 0.32 ± 0.624 | 0.35 ± 0.810 | 0.217 ± 0.512 | 0.242 ± 0.743 |
| Fidelity ↓ | 92.84 ± 0.121 | 88.76 ± 0.513 | 62.57 ± 0.917 | 64.17 ± 1.14 |
![](images/87f8ae8ce013dfa6d868cd85c8581c0fd8c9d9777befd4d6bafbaf874c47fe9b.jpg)
Figure 4: High resolution conditional image generation. Additional examples are provided in the supplementary material, Figure A.4.

![](images/681854ec1e49e88d5e7aa450e053aa69ffa769601590b006fecc028c5e1c07a5.jpg)

![](images/26dfbf2a0d1ca71b93c204b8cc68939af56c637e384e869c5c310ee29e11815e.jpg)

![](images/61bb371cf5aee4ed48774ad2b73d36f58a204abccddbd97614f3474f6e1c7af9.jpg)

against DSGAN Yang et al. (2019) are reported in Table 2. BasisGAN significantly improves both diversity and fidelity with little overhead in terms of training time, testing time, and memory.

Image inpainting. We conduct one-to-many image inpainting experiments on face images. Following Yang et al. (2019), centered face images from the CelebA dataset are adopted, and parts of the faces are discarded by removing the center pixels. We adopt the exact same network used in Yang et al. (2019) and replace its convolutional layers with layers with basis generators. To show the plug-and-play compatibility of the proposed BasisGAN, we conduct experiments both by training BasisGAN alone and by combining BasisGAN with the regularization-based method DSGAN (BasisGAN + DSGAN). When combining BasisGAN with DSGAN, we feed all the basis generators in BasisGAN the same latent code, and use the distance between the latent codes and the distance between generated samples to compute the regularization term proposed in Yang et al. (2019). Quantitative and qualitative results are in Table 3 and Figure 5, respectively. BasisGAN alone delivers a good balance between diversity and fidelity, while combining BasisGAN with the regularization-based DSGAN further improves the performance.

Table 2: Quantitative results on high resolution image to image translation. Table 3: Quantitative results on face inpainting. In both tables, diversity and fidelity are measured using LPIPS and FID, respectively.
| Methods | Pix2PixHD | DSGAN | BasisGAN |
| --- | --- | --- | --- |
| Diversity ↑ | 0.0 | 0.12 | 0.168 |
| Fidelity ↓ | 48.85 | 28.8 | 25.12 |
+ +
| Methods | DSGAN | BasisGAN | BasisGAN + DSGAN |
| --- | --- | --- | --- |
| Diversity ↑ | 0.05 | 0.062 | 0.073 |
| Fidelity ↓ | 13.94 | 12.88 | 12.82 |
+ +![](images/4b37f1a06270f652f5f030f0a6bca1659a5fd12c127223c8a2a8951f22d88501.jpg) +Input condition + +![](images/3de25973be9bbc79407a9a2a8129a83e91b688ad07c04905b2b98b45d631e18d.jpg) + +![](images/34247637f85e93fc305da4e283ef2ebcdc830ea9acc5645b6d139f92e429f532.jpg) +BasisGAN + +![](images/56b3e9df32cdf9466c7f381e91d4d897dedeaae195ef5c5e44eb279ebb732f8b.jpg) +Figure 5: Face inpainting examples. + +![](images/dcc67cbf507bcd962a742de5bdf7cc8494f786f55e467446516962f6ed40474e.jpg) +BasisGAN + DSGAN + +![](images/65465baf9f270d8e0c9326fd52e1068419bc362a9e3287e572a60f5f45b47ff6.jpg) + +![](images/cf9c673ef5c24d71eaa7b0e8f6cce6ac5b0114bccee7ad774e0d071f337c8c28.jpg) + +# 6 CONCLUSION + +In this paper, we proposed BasisGAN to model the multi-mode for conditional image generation in an intrinsic way. We formulated BasisGAN as a stochastic model to allow convolutional filters to be sampled from a filter space learned by a neural network instead of being deterministic. To significantly reduce the cost of sampling high-dimensional filters, we adopt parameter reduction using filter decomposition, and sample low-dimensional basis elements, as supported by the theoretical results here presented. Stochasticity is introduced by replacing deterministic convolution layers with stochastic layers with basis generators. BasisGAN with basis generators achieves high-fidelity and high-diversity, state-of-the-art conditional image generation, without any auxiliary training objectives or regularizations. Extensive experiments with multiple underlying models demonstrate the effectiveness and extensibility of the proposed method. + +# 7 ACKNOWLEDGMENTS + +Work partially supported by ONR, ARO, NGA, NSF, and gifts from Google, Microsoft, and Amazon. + +# REFERENCES + +Luca Bertinetto, João F Henriques, Jack Valmadre, Philip Torr, and Andrea Vedaldi. Learning feedforward one-shot learners. In Advances in Neural Information Processing Systems, pp. 523-531, 2016. 
+Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International Conference on Machine Learning, pp. 1613-1622, 2015. +Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172-2180, 2016. +François Chollet. Xception: Deep learning with depthwise separable convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251-1258, 2017. +Misha Denil, Babak Shakibi, Laurent Dinh, Nando De Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pp. 2148-2156, 2013. +Arnab Ghosh, Viveka Kulharia, Vinay P Namboodiri, Philip HS Torr, and Puneet K Dokania. Multiagent diverse generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 8513-8521, 2018. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014. +David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016. +Benjamin D Haeffele and René Vidal. Global optimality in tensor factorization, deep learning, and beyond. arXiv preprint arXiv:1506.07540, 2015. +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017. + +Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 172-189, 2018. 
+Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125-1134, 2017. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and Aaron Courville. Bayesian hypernetworks. arXiv preprint arXiv:1710.04759, 2017. +Qi Mao, Hsin-Ying Lee, Hung-Yu Tseng, Siwei Ma, and Ming-Hsuan Yang. Mode seeking generative adversarial networks for diverse image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. +Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. +Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. +Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536-2544, 2016. +Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L Yuille. Few-shot image recognition by predicting parameters from activations. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 7229-7238, 2018. +Qiang Qiu, Xiuyuan Cheng, Robert Calderbank, and Guillermo Sapiro. DCFNet: Deep neural network with decomposed convolutional filters. International Conference on Machine Learning, 2018. +Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. Scribbler: Controlling deep image synthesis with sketch and color. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5400-5409, 2017. +Andrew M Saxe, Pang Wei Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, and Andrew Y Ng. On random weights and unsupervised feature learning. In International Conference on Machine Learning, volume 2, pp. 6, 2011. 
+Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, Colorado Univ at Boulder Dept of Computer Science, 1986. +Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pp. 3483-3491, 2015. +Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. In Advances in Neural Information Processing Systems, pp. 4790-4798, 2016. +Rene Vidal, Joan Bruna, Raja Giryes, and Stefano Soatto. Mathematics of deep learning. arXiv preprint arXiv:1712.04741, 2017. +Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. +Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, and Joseph E Gonzalez. Tafe-net: Task-aware feature embeddings for low shot learning. arXiv preprint arXiv:1904.05967, 2019. + +Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. Texturegan: Controlling deep image synthesis with texture patches. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 8456-8465, 2018. +Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, and Honglak Lee. Diversity-sensitive conditional generative adversarial networks. arXiv preprint arXiv:1901.09024, 2019. +Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2223-2232, 2017a. +Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. 
In Advances in Neural Information Processing Systems, pp. 465-476, 2017b.

# A PROOF OF EQUATION (3)

Proof. Given (2) in Section 3, the minimax game of adversarial training is expressed as:

$$
\begin{array}{l} \min_{G} \max_{D} V(D, G) = \mathbb{E}_{\mathbf{A} \sim p(\mathbf{A}), \mathbf{B} \sim p(\mathbf{B}|\mathbf{A})} \log D(\mathbf{A}, \mathbf{B}) \; + \\ \mathbb{E}_{\mathbf{A} \sim p(\mathbf{A}), \mathbf{B} \sim q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \log [1 - D(\mathbf{A}, \mathbf{B})] \\ = \mathbb{E}_{\mathbf{A} \sim p(\mathbf{A})} \left\{ \mathbb{E}_{\mathbf{B} \sim p(\mathbf{B}|\mathbf{A})} \log D(\mathbf{A}, \mathbf{B}) + \mathbb{E}_{\mathbf{B} \sim q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \log [1 - D(\mathbf{A}, \mathbf{B})] \right\}. \tag{A.1} \end{array}
$$

Fixing $\mathbf{A}$ and considering only:

$$
\begin{array}{l} V^{\prime} = \mathbb{E}_{\mathbf{B} \sim p(\mathbf{B}|\mathbf{A})} \log D(\mathbf{A}, \mathbf{B}) + \mathbb{E}_{\mathbf{B} \sim q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \log [1 - D(\mathbf{A}, \mathbf{B})] \\ = \int_{\mathbf{B}} p(\mathbf{B}|\mathbf{A}) \log D(\mathbf{A}, \mathbf{B}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A}) \log [1 - D(\mathbf{A}, \mathbf{B})] \, \mathrm{d}\mathbf{B}. \tag{A.2} \end{array}
$$

The optimal discriminator $D^{*}$ in (A.2) is achieved when

$$
D^{*}(\mathbf{A}, \mathbf{B}) = \frac{p(\mathbf{B}|\mathbf{A})}{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})}.
\tag{A.3}
$$

Given the optimal discriminator $D^{*}$, (A.2) becomes:

$$
\begin{array}{l} V^{\prime} = \mathbb{E}_{\mathbf{B} \sim p(\mathbf{B}|\mathbf{A})} \log D^{*}(\mathbf{A}, \mathbf{B}) + \mathbb{E}_{\mathbf{B} \sim q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \log [1 - D^{*}(\mathbf{A}, \mathbf{B})] \\ = \mathbb{E}_{\mathbf{B} \sim p(\mathbf{B}|\mathbf{A})} \left[ \log \frac{p(\mathbf{B}|\mathbf{A})}{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \right] + \mathbb{E}_{\mathbf{B} \sim q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \left[ \log \frac{q_{\phi,\theta}(\mathbf{B}|\mathbf{A})}{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})} \right] \\ = -\log(4) + KL\Big(p(\mathbf{B}|\mathbf{A}) \,\Big\|\, \frac{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})}{2}\Big) + KL\Big(q_{\phi,\theta}(\mathbf{B}|\mathbf{A}) \,\Big\|\, \frac{p(\mathbf{B}|\mathbf{A}) + q_{\phi,\theta}(\mathbf{B}|\mathbf{A})}{2}\Big) \\ = -\log(4) + 2 \cdot JSD\big(p(\mathbf{B}|\mathbf{A}) \,\|\, q_{\phi,\theta}(\mathbf{B}|\mathbf{A})\big), \tag{A.4} \end{array}
$$

where $KL$ is the Kullback-Leibler divergence and $JSD$ the Jensen-Shannon divergence. The minimum of $V^{\prime}$ is achieved iff the Jensen-Shannon divergence is 0, i.e., $p(\mathbf{B}|\mathbf{A}) = q_{\phi,\theta}(\mathbf{B}|\mathbf{A})$. The global minimum of (A.1) is thus achieved when, for every sampled $\mathbf{A}$, the generator perfectly replicates the conditional distribution $p(\mathbf{B}|\mathbf{A})$.

# B PROOF OF THEOREM 4.1

Proof. We first consider the case when $\{a_k\}_{k=1}^K$ is a linearly independent set in the space $\mathcal{L}(\mathbb{R}^{C'}, \mathbb{R}^C)$, which is finite dimensional (the space of $C'$-by-$C$ matrices). Since $\mathbf{F}^\omega(u)$ is in the span of $\{a_k\}_k$ for any $\omega, u$, there are unique coefficients $b(k; \omega, u)$ s.t.
$$
\mathbf{F}^{\omega}(u) = \sum_{k=1}^{K} b(k; \omega, u) \, a_{k},
$$

and the vector $\{b(k;\omega,u)\}_{k} \in \mathbb{R}^{K}$ can be determined from $\mathbf{F}^{\omega}(u)$ by a (deterministic) linear transform. Since each entry $\mathbf{F}(u,\lambda^{\prime},\lambda)$ is a random variable, i.e., a measurable function on $(\Omega, \mathbb{P})$, so is $b(k;\cdot,u)$ viewed as a mapping from $\Omega$ to $\mathbb{R}$, for each $k$ and $u$, because a linear transform between finite-dimensional spaces preserves measurability. For the same reason, if $\mathbf{F}(u,\lambda^{\prime},\lambda)$ has a probability density, then so does each $b(k;\cdot,u)$. Letting $\{b(k;\cdot,u)\}_{u\in [L]\times [L]}$ be the random vectors $\mathbf{b}_k$ proves the statement.

When $\{a_k\}_{k=1}^K$ are linearly dependent, the dimension of the subspace in which $\mathbf{F}^\omega(u)$ lies is $\tilde{K} < K$. Suppose $\{\tilde{a}_k\}_{k=1}^{\tilde{K}}$ is a linearly independent set which spans the subspace, and $T: \mathbb{R}^{\tilde{K}} \to \mathbb{R}^K$ is the linear transform mapping the $\tilde{a}_k$'s to the $a_k$'s. Using the argument above, there exist random vectors $\tilde{b}_k$ s.t. $\mathbf{F} = \sum_{k=1}^{\tilde{K}} \tilde{b}_k \tilde{a}_k$, and using the pseudo-inverse of $T$ to construct random vectors $\{b_k\}_{k=1}^K$ we have that $\mathbf{F} = \sum_{k=1}^K b_k a_k$. This proves the existence of the $K$ random vectors $b_k$.

# C PARAMETER OPTIMIZATION IN FILTER GENERATION

The optimization of the parameters $\{\phi, \theta\}$ in filter generation is presented in Algorithm 1.

Algorithm 1 Optimization of the generator parameters $\{\phi, \theta\}$
```txt
for number of iterations do
```

- Sample a minibatch of $n$ pairs of samples $\{(\mathbf{A}_{1}, \mathbf{B}_{1}), \cdots, (\mathbf{A}_{n}, \mathbf{B}_{n})\}$.
- Sample $z \sim \mathcal{N}(0, I)$.
- Calculate the gradient w.r.t.
the convolutional filters $\phi$ and $\mathbf{w}$ as in the standard setting

$$
\Delta_{\phi} = \frac{\partial \mathcal{L}}{\partial \phi}, \quad \Delta_{\mathbf{w}} = \frac{\partial \mathcal{L}}{\partial \mathbf{w}},
$$

where $\mathcal{L} = \frac{1}{n}\sum_{i = 1}^{n}[\log (1 - D(\mathbf{A}_i, G_{\phi,\theta}(\mathbf{A}_i; T_{\theta}(z))))]$.

- Calculate the gradient w.r.t. $\theta$ in the filter generator: $\Delta_{\theta} = \Delta_{\mathbf{w}}\frac{\partial\mathbf{w}}{\partial\theta}$.
- Update the parameters $\phi$: $\phi \gets \phi - \alpha \Delta_{\phi}$; $\theta$: $\theta \gets \theta - \alpha \Delta_{\theta}$, where $\alpha$ is the learning rate.

![](images/4fb58754205dae15dd6157617b8277293b6acb81634f05639efdc68f55a30297.jpg)
(a) Quality/cost comparison.

![](images/c520c0d2ea26a51b96179e229b04338b0d6df35ce3b2c8511730dd9e25472.jpg)
Figure A.1: (a) shows the comparison between basis generation and filter generation in terms of quality and cost. In (b), the top row shows images generated with basis generators (the red dot in (a)); the bottom row shows images generated with filter generators at the highest cost in (a). Basis generation achieves better performance at significantly less cost compared to filter generation. The quality metrics are introduced in Section 5.

![](images/3d14d71eb0c0e7b4f9a928b3a62ad3bb97136f811ff145e752a37249115f9cdf.jpg)
(b) Generated images.

![](images/724ab5fe3fe56cc34737a3decc5f3d46459c5edb11815eaaf37f589dedc06ea3.jpg)

![](images/c3ee70b27a2500712d9cdbd65a423712c6a1b84926058353828e81dae6299d05.jpg)

![](images/931ae5084e1b1f941acfe3f1fc5b5653cdc85b0f9180f2568566f68f65cfc4a5.jpg)

# D COMPUTATION COMPARISON

We present a thorough comparison in terms of generation quality and sampled filter size in Figure A.1, which clearly shows that filter generation is too costly to afford, while basis generation achieves a significantly better quality/cost trade-off, marked by the red dot in Figure A.1.
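To make the cost comparison concrete, below is a minimal NumPy sketch of the basis-generation idea; all sizes and names here are illustrative, not the exact configuration used in the paper. A tiny MLP maps a latent code $z$ to $K$ spatial basis elements, and full convolutional filters are then assembled as linear combinations $\mathbf{F} = \sum_k b_k a_k$ with deterministic mixing coefficients, so stochasticity is injected through only a few thousand generator parameters rather than through a full filter bank:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes for one hypothetical layer (illustrative, not the paper's exact sizes).
C_in, C_out, l, K = 8, 16, 3, 7    # K basis elements of size l x l
z_dim, hidden = 64, 64             # latent and hidden sizes (the "64 + 64" setting)

# Tiny basis generator: an MLP with one hidden layer, mapping z -> K basis elements.
W1 = rng.standard_normal((z_dim, hidden)) * 0.1
W2 = rng.standard_normal((hidden, K * l * l)) * 0.1

def generate_basis(z):
    h = np.maximum(W1.T @ z, 0.0)          # hidden layer with ReLU
    return (W2.T @ h).reshape(K, l, l)     # K stochastic basis elements

# Learned (deterministic) mixing coefficients, shared across all samples of z.
coeffs = rng.standard_normal((C_out, C_in, K)) * 0.1

def sample_filters():
    # Each filter is a linear combination of the K sampled basis elements:
    # F[o, i] = sum_k coeffs[o, i, k] * a_k. Stochasticity enters only through
    # the small generator (z_dim*hidden + hidden*K*l*l weights), never through
    # sampling a full C_out x C_in x l x l filter bank directly.
    z = rng.standard_normal(z_dim)
    basis = generate_basis(z)              # (K, l, l)
    return np.einsum('oik,kxy->oixy', coeffs, basis)

filters = sample_filters()                 # shape (C_out, C_in, l, l)
```

Every sampled filter lies in the span of the K basis elements, so the flattened filter bank has rank at most K, matching the low-rank structure the decomposition exploits.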
# E ABLATION STUDIES

In this section, we perform ablation studies on the proposed BasisGAN and evaluate multiple factors that can affect the generation results. The studies use BasisGAN adapted from the Pix2Pix model on the maps $\rightarrow$ satellite dataset.

Size of basis generators. We model a basis generator as a small neural network, consisting of several hidden layers, that takes as input a latent code sampled from a prior distribution. We consistently observe that a basis generator with a single hidden layer achieves the best performance while maintaining fast basis generation. Here we perform further experiments on the sizes of the intermediate layer and the input latent code, and the results are presented in Table A.1. The size of a basis generator does not significantly affect the final performance, and we use the $64 + 64$ setting in all the experiments for a good balance between performance and cost.

Number of basis elements $\mathbf{K}$. Based on the empirically observed low rank of generated filters, we use $K = 7$ in all the aforementioned experiments. We conduct further experiments with larger $K$ and show the results in Table A.2. Increasing $K$ does not improve the quality of the generated images, and a large $K$, e.g., $K = 128$, even significantly degrades their diversity.

Table A.1: Quantitative results with different sizes of input latent code and intermediate layer. $m + n$ denotes the size of latent code and intermediate layer.
| Dimensions | 16 + 16 | 32 + 32 | 64 + 64 | 128 + 128 | 256 + 256 | 512 + 512 |
| --- | --- | --- | --- | --- | --- | --- |
| Diversity ↑ | 0.2242 | 0.2388 | 0.2417 | 0.2448 | 0.2452 | 0.2433 |
| Fidelity ↓ | 40.16 | 37.41 | 35.54 | 34.36 | 33.70 | 32.31 |
Table A.2: Quantitative results with different numbers of basis elements $K$.
| K | 7 | 16 | 32 | 64 | 128 |
| --- | --- | --- | --- | --- | --- |
| Diversity ↑ | 0.2417 | 0.2409 | 0.2382 | 0.2288 | 0.2006 |
| Fidelity ↓ | 35.54 | 36.08 | 35.17 | 34.97 | 36.49 |
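The low-rank observation motivating $K = 7$ can be checked empirically by flattening a layer's filters into a matrix and inspecting its singular value decay. A toy NumPy sketch, with synthetic filters that truly live in a 7-dimensional subspace standing in for learned ones (the sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 256 flattened 3x3 filters confined to a 7-dimensional
# subspace, mimicking the low rank observed empirically in learned filters.
K_true, n_filters, l = 7, 256, 3
basis = rng.standard_normal((K_true, l * l))      # 7 basis elements in R^9
coeffs = rng.standard_normal((n_filters, K_true)) # mixing coefficients
filters = coeffs @ basis                          # (256, 9), rank <= 7

# Effective rank: count singular values above a relative threshold.
s = np.linalg.svd(filters, compute_uv=False)
effective_rank = int(np.sum(s > 1e-8 * s[0]))
```

When real learned filters show such a sharp drop after the first few singular values, a small K suffices, which is consistent with the table above: raising K beyond 7 buys no quality and eventually hurts diversity.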
# F QUALITATIVE RESULTS

# F.1 $\mathrm{PIX2PIX}\rightarrow \mathrm{BASISGAN}$

Additional qualitative results for Pix2Pix $\rightarrow$ BasisGAN are presented in Figure A.2. Qualitative comparisons against MSGAN Mao et al. (2019) and DSGAN Yang et al. (2019) are presented in Figure A.3. We directly use the official implementations and the pretrained models provided by the authors. For each example, the first 5 generated samples are presented without any selection. For the satellite $\rightarrow$ map comparison, we often observe missing correspondence in the samples generated by DSGAN. BasisGAN consistently provides samples with diverse details and strong correspondence to the input conditions.

# F.2 $\mathrm{PIX2PIXHD}\rightarrow \mathrm{BASISGAN}$

Additional qualitative results for Pix2PixHD $\rightarrow$ BasisGAN are presented in Figure A.4.

# G SPEED, PARAMETER, AND MEMORY

We use PyTorch to implement all the experiments. Training and testing are performed on a single NVIDIA 1080Ti graphics card with 11GB memory. Comparisons of testing speed, parameter number, and training memory are presented in Table A.3. The training memory is measured under the standard setting, with a resolution of $256 \times 256$ for Pix2Pix and $1024 \times 512$ for Pix2PixHD. Since we use a small number of basis elements (typically 7) and tiny basis generators, the overall number of trainable parameters is reduced. Note that we only count the parameters of the generator networks, since we do not change the discriminators.

Table A.3: Speed in testing, memory usage in training, and overall trainable parameter numbers.
| Methods | Testing speed (s) | Training memory (MB) | Parameter number |
| --- | --- | --- | --- |
| Pix2Pix | 0.01017 | 1,465 | 11,330,243 |
| Pix2Pix → BasisGAN | 0.01025 | 1,439 | 10,261,763 |
| Pix2PixHD | 0.0299 | 8,145 | 182,546,755 |
| Pix2PixHD → BasisGAN | 0.0324 | 8,137 | 154,378,051 |
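The parameter reduction reported above follows from replacing each full filter bank with $K$ basis elements plus per-channel mixing coefficients. A back-of-the-envelope count for one hypothetical $3 \times 3$ layer (the layer sizes are illustrative, not the exact Pix2Pix layers):

```python
# Hypothetical layer: C_in = C_out = 256, 3x3 kernels, K = 7 basis elements,
# with the 64 + 64 basis generator (latent size 64, one hidden layer of 64).
C_in, C_out, l, K = 256, 256, 3, 7
z_dim, hidden = 64, 64

full_filters = C_out * C_in * l * l                    # deterministic conv layer
basis_coeffs = C_out * C_in * K                        # mixing coefficients
basis_generator = z_dim * hidden + hidden * K * l * l  # tiny MLP (weights only)
basis_total = basis_coeffs + basis_generator

print(full_filters, basis_total)  # 589824 466880
```

Since $K = 7 < l \cdot l = 9$, the coefficient tensor is smaller than the full filter bank, and the basis generator adds only a few thousand parameters per layer, which is why the BasisGAN rows in Table A.3 have fewer trainable parameters than their deterministic counterparts.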
+ +![](images/f0aaef4ce0efecddcc98de61a1ce8569145ec9d4edb1b07a4e1c7b148208d2e0.jpg) + +![](images/adcfeb3d15b852985396707892db8bdd2fe4b65e8819a587ce3eb14ed3413c3a.jpg) + +![](images/f122c4f91a4ad754824c672cfce0541f7b8fa822acd41ec1f3d5838212d36da5.jpg) + +![](images/c115ce61773909d88844c3dc9c9f256f52e4a036d7518f98aa40589bec17ba5f.jpg) + +![](images/b4061a25ba680b8cb88f25dcc36378c3174b14bf874f7c6e5f87b2bf7fa50180.jpg) + +![](images/d47e48225a094855707d621732d050d64d5ffa725c240cad89bcc16bfdddf3c1.jpg) + +![](images/f67f1e07ac662ac5c9c2d51a43413dfabac387300186bb40e6f0fb2649eaf196.jpg) + +![](images/b14362440cd48cd4ded79598395d1a6a70280578d39a799c318384d7db3f040f.jpg) + +![](images/7df8a7c40862957494ced96494c1c28edb55c37ef241814d8bb003ead02d4d77.jpg) + +![](images/a4e3c65eda5a73eca36d6d0fe5b8841dfd295b60a7e92220185497f2da6342a9.jpg) + +![](images/5cc26411f84c1279e998e54f15a51da36257603c14aa8fe1628720bb5cb2a274.jpg) +Input + +![](images/1dbd2077d299c89def2813eaf8ab624fdbd6ad8092da7c85e17a7cb6884d0cf6.jpg) + +![](images/d53c361b963efb454c805d7eaf8891aa74aadbc83e5326247174c6dc0449e430.jpg) + +![](images/8be18c3e9ec0e12eed8db769c25d904f5efff074a2627b4b747540819912addf.jpg) + +![](images/dc04e11e277af75de2e7aa695c2121accfed5bc2d37d5edd0269958e5c110815.jpg) + +![](images/01d6cd3f78e1f35960701ee029c7f558d6eae4d43c04174c05441ee17907c30e.jpg) + +![](images/ec85e406c70a6d28166f8e313536cd0665fde94071380f453a2e4c389d46aaaa.jpg) + +![](images/d36ef842344b9f3f74ffc587cc133aa642df9f43eca93a80c1d4deaa5fb7d971.jpg) + +![](images/20577419b2cb05a9bd376ce113613e6de25799294a47ce82d02aecb16ca3ecb0.jpg) + +![](images/b629e9a9b6aaf731d36ad870d8c95431d326674a0152cfacff75fbcf6db307b8.jpg) + +![](images/316cdf852369b011d95ee50bd045ede92638061e8cf70aa80159d8b4d7dafd50.jpg) + +![](images/49b3ab8369d0cef8cfe7197b5a8d0551f154f085b0c0d4e833a0b3c6e478fab0.jpg) +Ground truth + +![](images/5e8736f72066e8a5d911a26ef314d0cfb84adddcf8c32b017978eb08f7d3b501.jpg) + 
+![](images/708aa7a935c1b34a29b07321b26b5fdccc81f4f8ae254e84d74da8b710d0d53f.jpg) + +![](images/1b4b5a2b6e4063e05654ccb80150dec9834d421f902f7254c133800940d1be35.jpg) + +![](images/5a5960362579aaeb1b693af3ad420c48a761273cf19785cffe681b9493d31ea9.jpg) + +![](images/9ace9c32620dfc0ffceb33c241861106615174ac684a0b72475fbeca30d26421.jpg) + +![](images/cc592da21492ec68a0a1df780164897f8ba3bb461a5c177d78f25d5a38cf7034.jpg) + +![](images/5c7bc31df8af97fff8404fd1908dedf7ef5b67885670e1784a4f1ac53bc222af.jpg) + +![](images/cba6208d9b9c747514ad2cc35fa969cf7b9edf3ca544d604fbeedfc977197e07.jpg) + +![](images/0da649bd2c9bb683449b797075f0e46e5f63996161448266aea22c3a769f6bd4.jpg) + +![](images/0a6d59d438fa8ab8801a95ceed4946e25b332cf7191cc0721a019d5029dee3ed.jpg) + +![](images/5c3895dfa4c12e3fdaddb235c724f3f93553e0ace42445db254851c26c0ed378.jpg) +Figure A.2: Pix2Pix $\rightarrow$ BasisGAN. + +![](images/48fd3a337657e07ebeb4ccc40a8a53b011a7784dcaa2f5cb10401713f74af39c.jpg) + +![](images/a299edfbf144aa0d68b73cbd88d2a732e8f6aa95187c6837ca647aa2130add7c.jpg) + +![](images/bb467ccb1d4042bc58e80a7019958d6e252ab0faad6868874f4da6b43e32f8db.jpg) + +![](images/92cf723077bf835ad1cca6dfdd460e2c0853c5ecf79b6b8a2e2cdff6e7bc9869.jpg) + +![](images/9d3ca4ca481cf75ca3f5876e86adc7be839812d6320b3746f4a72fa06207ccad.jpg) + +![](images/ec3fea340bc980fa4bfa976786db2b0816c134c694d8d8fffa2dafe44ebbf27a.jpg) + +![](images/5889a78b817e0f634d2d6fde2824d147fb7b8c38e4712098c9d47aab94e373e4.jpg) + +![](images/4bc6654fa21e8b8c9d8e73ac32ec4cfe3fe2dcebd932ad4560dceb0814ce658d.jpg) + +![](images/f7f1a34764a461dd129a5a432b14518c2074c95599c0ed79d75d631204b1ee50.jpg) + +![](images/cd9bb419483935c987696574df0118f3723644470339b863c0db30f3533eca55.jpg) + +![](images/3f85d9e41f9a476cbfc60d3a36e072efbb277b960e75744c65ce56bfb3c3b864.jpg) +Generated diverse samples + +![](images/6b7613653bd720c66366d80903e50ad4b21fece6eabc903714392175a78ce99b.jpg) + 
+![](images/cc6665226bde9b31d3bf8332eb2f5fc22c88e50147d912e9c22f606c8f8a20dd.jpg) + +![](images/fd3e05feff9a69798135cfbe9e49f125f7a6efd64408158a07b572b5772d87d4.jpg) + +![](images/c686af85aa28eb7028a1d62fe55d946ffdba1b88724ba44459f76c4f46fa08ce.jpg) + +![](images/1a20c1ada269ca384b165fe6a1366085e1ff5a911d4d069dd089775162db5a8b.jpg) + +![](images/a5ad31873feceb7d4c94103c8e79f6a5c8f687933976e1ac78fcdcbd25aa35a5.jpg) + +![](images/fe5e6dd2b5d45f395004dd99f1bf3c9d3ece45ba00f368112049192919f2ca40.jpg) + +![](images/a1eb23e5e8a79440ce51db47ebb3b687a4f58e6a8a97a1c073267beb0328e6b8.jpg) + +![](images/4e781c3b3b17afcb3a5f687238efe41e88b410f20208fb7322fe40ccf76cacda.jpg) + +![](images/9f3346da68b4f9779dd89a882e86e2f359fc3f63be9f465764d8275a23f0f1d7.jpg) + +![](images/10806b02bc58320541474033f2d02ae2fc8ddf3e482df2eff7602f2469274d13.jpg) + +![](images/3142e53abe2eda0dce4849f59c72a1183df38de93265517b87535a4c850c9714.jpg) + +![](images/3ae3e986c069d92bcf1a679212d865a0197a3fc522eb07b8e1fb43d92ba89f13.jpg) + +![](images/d10d595e292b9afbd8b9bf0e4478fe63cc5eb95aa8c3a4c7db1602327ddd564f.jpg) + +![](images/c301e7322423eb0b2b9322719f0f1ab914193169a13ed6903a302389ab8f06ac.jpg) + +![](images/92aade789656d59e6590dee2d763215f0a9a7e42e321ef6d67ac0f18359331e1.jpg) + +![](images/8af178c875666c9698c4c44d8686c369cc810040e4365395e9215f40099538bd.jpg) + +![](images/43429159ce7796553e2ced1e1e3e00e087fabb8cef807a8a95ced8081db56132.jpg) + +![](images/f41b038c0533e08ba204833e159621e4b163d92f72c46d55962deba8eb8c6061.jpg) + +![](images/41e4a33e4695fac8c4220483ba145b44dca3ee2f7b485e2342da8e7a814d419b.jpg) + +![](images/cb4190e3737b343a9ed5ae53afd479a2406821f8655877002cea1937070ff596.jpg) + +![](images/7df8fd06c6226f4917f5394bf3193fa36b2f2743e8d38d082c5141b0d8340864.jpg) + +![](images/46fcc90af14cf68b4f8e52daeea0ffa48cec3d4f55d2fda67876ec877472263e.jpg) +Figure A.3: Qualitative comparisons with MSGAN and DSGAN. Please zoom in for details. 
+ +![](images/31af10d50d720ee6a3ad8c447c76fa95747c2c0e7c0fe0b71ea244291a0a6998.jpg) + +![](images/35108318e3df7a0ecbdedc0e24854edaf2cfa15be8db0b7ce696e284f35f7d17.jpg) + +![](images/5e46200280aae11bb5586f0d65b6aa78bcf57ff77196532f20662229b462f6eb.jpg) + +![](images/26c5ed699dac1c4d238726763bac84c2dd105e5424559e05cb566b957ac33155.jpg) + +![](images/6732992b222c52a5b8d056eefefc86c24f1117e64a7f286ab33029d16e07bd37.jpg) + +![](images/5f08484694672d67920ca49d6281bc7ca9c4165d3835b6ad1d442e3c6147f088.jpg) +Input condition + +![](images/25fc3bacafda9df2040fa92825e9a220663d941b80d7d5387914bb36221f764a.jpg) + +![](images/fd2a84a0aaff5364bbd07bc0573c5e34464a94a1f28c36f9daea6db31e6acc2e.jpg) + +![](images/aff86de23945ce7901cffda31bca68d36b7e4b2e492dde98bb49fe46f5181c03.jpg) + +![](images/e5a5447d1cd26b5232bbe756fb21095d5e086cfcb5011a465ef690f8d75b6811.jpg) + +![](images/8240473b2749389861cb84fbfa02faf45fa122588806c55a178e25ede9f989c4.jpg) + +![](images/aed642af8bd9cad039deebaaab0d5d5327577042d6d97b9f462c16d1c403e5fd.jpg) +Generated diverse samples + +![](images/f9de1066e90a93edf9e4467cdf1cd9acde515f54d5e8a90fea7c9d5ef6592804.jpg) + +![](images/e9a13a1de34232914659aa335a59cb0f04be2936840d69c0cbf3186db9e00bc3.jpg) + +![](images/ff3e8df60aac00cf86881a9c8cdfac2e42c9ada522bc11e2cfabfe0c234d00b6.jpg) + +![](images/42d1a9a8f3733cc7350a9ea14610653c99e37280db08b48cc4acadf1daff5b03.jpg) + +![](images/1ea0c7d26452d1dae97883683aa4b14ee7b30bf29e9c90c8a02865ec1678dc17.jpg) + +![](images/675a850bd427e868dadcca995878a9ff01ab21625cdc4b1e0485a424906bfe11.jpg) +Figure A.4: Pix2PixHD $\rightarrow$ BasisGAN. 
\ No newline at end of file diff --git a/stochasticconditionalgenerativenetworkswithbasisdecomposition/images.zip b/stochasticconditionalgenerativenetworkswithbasisdecomposition/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ae137c56e7036606b64dd345effcb989a404c478 --- /dev/null +++ b/stochasticconditionalgenerativenetworkswithbasisdecomposition/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b327ffbc95e6245e8c430b892843221ca7fb8e6e26297c1aaf286b3ee3e7fd5 +size 1571868 diff --git a/stochasticconditionalgenerativenetworkswithbasisdecomposition/layout.json b/stochasticconditionalgenerativenetworkswithbasisdecomposition/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fbea6833c73f2d5d939e3c90b8c06d44620f1ced --- /dev/null +++ b/stochasticconditionalgenerativenetworkswithbasisdecomposition/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f0cd181fe5038b6e7c9f4a9f179af064ae99a2bb07f91fff03ed76565e800da +size 741445 diff --git a/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_content_list.json b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f1532e2260d1693b05778e01659eb379942f1317 --- /dev/null +++ b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1613ff7d46127f3a490495770d80a0ee9ed42ef756a8088f0d54d09ddd4e1b3 +size 65272 diff --git a/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_model.json b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_model.json new 
file mode 100644 index 0000000000000000000000000000000000000000..635ed1cbf3d77552466eab2cb4de88f13ac3f530 --- /dev/null +++ b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cff8114123b9db9f7304387fae16e64bcc4bcd7dac7416e8b7942560ee4804aa +size 79419 diff --git a/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_origin.pdf b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6b46c69c9253ec9c7c65c1c56807bfa214731ebb --- /dev/null +++ b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/5c9254ab-6ea1-4d34-b7cd-bb0d48c2d31c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d69762d96cbadb94eacef7b41f027eca70592b7abdc0de189417f9505220344e +size 1265739 diff --git a/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/full.md b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/full.md new file mode 100644 index 0000000000000000000000000000000000000000..92d9ba12f7998cefdc9d67800ba0ded44f7eb0d0 --- /dev/null +++ b/stochasticweightaveraginginparallellargebatchtrainingthatgeneralizeswell/full.md @@ -0,0 +1,265 @@ +# STOCHASTIC WEIGHT AVERAGING IN PARALLEL: LARGE-BATCH TRAINING THAT GENERALIZES WELL + +Vipul Gupta*† + +vipul_gupta@berkeley.edu + +Department of EECS, UC Berkeley + +Santiago Akle Serrano * + +sakle@apple.com + +Apple Inc. + +Dennis DeCoste + +ddecoste@apple.com + +Apple Inc. + +# ABSTRACT + +We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. 
Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.

# 1 INTRODUCTION

Stochastic gradient descent (SGD) and its variants are the de facto methods to train deep neural networks (DNNs). Each iteration of SGD computes an estimate of the objective's gradient by sampling a mini-batch of the available training data and computing the gradient of the loss restricted to the sampled data. A popular strategy to accelerate DNN training is to increase the mini-batch size together with the available computational resources. Larger mini-batches produce more precise gradient estimates; these allow for higher learning rates and achieve larger reductions of the training loss per iteration. In a distributed setting, multiple nodes can compute gradient estimates simultaneously on disjoint subsets of the mini-batch and produce a consensus estimate by averaging all estimates, with one synchronization event per iteration. Training with larger mini-batches requires fewer updates, and thus fewer synchronization events, yielding good overall scaling behavior.

Even though the training loss can be reduced more efficiently, there is a maximum batch size after which the resulting model tends to have worse generalization performance (McCandlish et al., 2018; Keskar et al., 2016; Hoffer et al., 2017; Golmant et al., 2018; Shallue et al., 2018). This phenomenon forces practitioners to use batch sizes below those that achieve the maximum throughput and limits the usefulness of large-batch training strategies.
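The consensus-gradient step described above can be sketched with plain NumPy; the shapes, shard count, and toy least-squares loss below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares loss: L(w) = (1/n) * sum_i (x_i . w - y_i)^2.
X = rng.normal(size=(4096, 10))   # hypothetical mini-batch of 4096 samples
y = rng.normal(size=4096)
w = rng.normal(size=10)

def grad(Xb, yb, w):
    """Gradient of the mean squared error over the samples in (Xb, yb)."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

# Single-node gradient over the full mini-batch.
g_full = grad(X, y, w)

# Eight workers each handle a disjoint, equally sized shard and their
# estimates are averaged: one synchronization event per iteration.
shards = np.array_split(np.arange(4096), 8)
g_avg = np.mean([grad(X[s], y[s], w) for s in shards], axis=0)

# With equal shard sizes, the consensus estimate equals the full gradient.
assert np.allclose(g_full, g_avg)
```

Because the loss is a mean over samples, averaging equally sized shard gradients reproduces the full mini-batch gradient exactly; only the gradient noise, not the expected update, depends on how the batch is split.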
Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) is a method that produces models with good generalization performance by averaging the weights of a set of models sampled from the final stages of a training run. As long as the models all lie in a region where the population loss is mostly convex, the average model can be expected to behave well, and in practice it does.

We have observed that if, instead of sampling multiple models from a sequence generated by SGD, we generate multiple independent SGD sequences and average models from each, the resulting model achieves similar generalization performance. Furthermore, if all the independent sequences use small-batches but start from a model trained with large-batches, the resulting model achieves generalization performance comparable with a model trained solely with small-batches. Using these observations, we derive Stochastic Weight Averaging in Parallel (SWAP): a simple strategy to accelerate DNN training by better utilizing the available compute resources. Our algorithm is simple to implement, fast, and produces good results with minor tuning.

For several image classification tasks on popular computer vision datasets (CIFAR10, CIFAR100, and ImageNet), we show that SWAP achieves generalization performance comparable to models trained with small-batches but does so in time similar to that of a training run with large-batches. We use SWAP on some of the most efficient publicly available models to date, and show that it is able to substantially reduce their training times. Furthermore, we are able to beat the state of the art for CIFAR10 and train in $68\%$ of the time of the winning entry of the DAWNBench competition. $^{1}$

# 2 RELATED WORK

The mechanism by which the training batch size affects the generalization performance is still unknown. A popular explanation is that, because of the reduced noise, a model trained using larger mini-batches is more likely to get stuck in a sharper minimum.
In (Keskar et al., 2016), the authors argue that sharp minima are sensitive to variations in the data because slight shifts in the location of the minimizer will result in large increases in average loss value. However, if flatness is taken to be the curvature as measured by the second order approximation of the loss, then counterexamples exist. In (Dinh et al., 2017), the authors transform a flat minimizer into a sharp one without changing the behavior of the model, and in (Li et al., 2018), the authors show the reverse behavior when weight-decay is not used. + +In (McCandlish et al., 2018), the authors predict that the batch size can be increased up to a critical size without any drop in accuracy and empirically validate this claim. For example, the accuracy begins to drop for image classification on CIFAR10 when the batch sizes exceed 1k samples. They postulate that when the batch size is large, the mini-batch gradient is close to the full gradient, and further increasing the batch size will not significantly improve the signal to noise ratio. + +In (Hoffer et al., 2017), the authors argue that, for a fixed number of epochs, using a larger batch size implies fewer model updates. They argue that changing the number of updates impacts the distance the weights travel away from their initialization and that this distance determines the generalization performance. They show that by training with large-batches for longer times (thus increasing the number of updates), the generalization performance of the model is recovered. Even though this large-batch strategy generates models that generalize well, it does so in more time than the small-batch alternative. + +Irrespective of the generalization performance, the batch size also affects the optimization process. 
In (Ma et al., 2017), the authors show that for convex functions in the over-parameterized setting, there is a critical batch size below which an iteration with a batch size of $M$ is roughly equivalent to $M$ iterations with a batch size of one, and batch sizes larger than $M$ do not improve the rate of convergence.

Methods which use adaptive batch sizes exist (Devarakonda et al., 2017; Goyal et al., 2017; Jia et al., 2018; Smith et al., 2017; You et al., 2017). However, most of these methods are either designed for specific datasets or require extensive hyper-parameter tuning. Furthermore, they use the computational resources inefficiently by reducing the batch size during part of the training.

Local SGD (Zhang et al., 2016; Stich, 2018; Li et al., 2019; Yu et al., 2019) is a distributed optimization algorithm that trades off gradient precision with communication costs by allowing workers to independently update their models for a few steps before synchronizing. Post-local SGD (Lin et al., 2018) is a variant that refines the output of large-batch training with local SGD. The authors have observed that the resulting model has better generalization than the model trained with large-batches and that their scheme achieves significant speedups. In this respect, Post-local SGD is in a very similar vein to the present work. However, while Post-local SGD lets the models diverge for $T$ iterations, where $T$ is on the order of tens, SWAP averages the models once after multiple epochs. For example, in our ImageNet experiments (see Sec. 5) we average our models after tens of thousands of updates, while Post-local SGD does so after at most 32. Because of this difference, we believe that the mechanisms that power the success of SWAP and Post-local SGD must be different and point to different phenomena in DNN optimization.

Stochastic weight averaging (SWA) (Izmailov et al., 2018) is a method where models are sampled from the later stages of an SGD training run.
When the weights of these models are averaged, they result in a model with much better generalization properties. This strategy is very effective and has been adopted in multiple domains: deep reinforcement learning (Nikishin et al.), semi-supervised learning (Athiwaratkun et al., 2019), Bayesian inference (Maddox et al., 2019), low-precision training (Yang et al., 2019). In this work, we adapt SWA to accelerate DNN training. + +# 3 STOCHASTIC WEIGHT AVERAGING IN PARALLEL + +We describe SWAP as an algorithm in three phases (see Algorithm 1): In the first phase, all workers train a single model by computing large mini-batch updates. Synchronization between workers is required at each iteration and a higher learning rate is used. In the second phase, each worker independently refines its copy of the model to produce a different set of weights. Workers use a smaller batch size, a lower learning rate, and different randomizations of the data. No synchronization between workers is required in this phase. The last phase consists of averaging the weights of the resulting models and computing new batch-normalization statistics to produce the final output. + +Phase 1 is terminated before the training loss reaches zero or the training accuracy reaches $100\%$ (for example, a few percentage points below $100\%$ ). We believe that stopping early precludes the optimization from getting stuck at a location where the gradients are too small and allows the following stage to improve the generalization performance. However, the optimal stopping accuracy is a hyper-parameter that requires tuning. + +During phase 2, the batch size is appropriately reduced and small-batch training is performed independently and simultaneously. Here, each worker (or a subset of them) performs training using all the data, but sampling in different random order. Thus, after the end of the training process, each worker (or subset) will have produced a different model. 
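The three phases can be sketched end-to-end on a toy quadratic loss; the dimensions, learning rates, and noise model below are illustrative assumptions (a real run would use SGD on a network and would also recompute batch-norm statistics in phase 3):

```python
import numpy as np

dim, workers = 20, 8
rng = np.random.default_rng(1)
opt = rng.normal(size=dim)  # minimizer of the toy loss 0.5 * ||w - opt||^2

def noisy_grad(w, batch_size, rng):
    # Exact gradient plus noise that shrinks with batch size, mimicking a
    # mini-batch gradient estimate.
    return (w - opt) + rng.normal(size=dim) / np.sqrt(batch_size)

# Phase 1: all workers share one model; large batch, higher learning rate.
w = np.zeros(dim)
for _ in range(200):
    w -= 0.1 * noisy_grad(w, batch_size=4096, rng=rng)

# Phase 2: each worker refines its own copy with small batches, a lower
# learning rate, and an independent data ordering (no synchronization).
models = []
for seed in range(workers):
    wr, r = w.copy(), np.random.default_rng(100 + seed)
    for _ in range(200):
        wr -= 0.02 * noisy_grad(wr, batch_size=128, rng=r)
    models.append(wr)

# Phase 3: average the worker models.
w_swap = np.mean(models, axis=0)

err_worker = np.mean([np.linalg.norm(m - opt) for m in models])
err_swap = np.linalg.norm(w_swap - opt)
assert err_swap < err_worker  # averaging lands closer to the minimizer
```

In this toy setting each worker settles into the noise floor of its own small-batch run, while the phase-3 average cancels the independent noise and lands closer to the minimizer, which is the behavior SWAP relies on.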
+ +Figure 1 plots the accuracies and learning-rate schedules for a run of SWAP. During the large-batch phase (phase 1), all workers share a common model and have the same generalization performance. During the small-batch phase (phase 2) the learning rates for all the workers are the same but their testing accuracies differ as the stochasticity causes the models to diverge from each other. We also plot the test-accuracy of the averaged model that would result were we to stop phase 2 at that point. Note that the averaged model performs consistently better than each individual model. + +![](images/a6884e864bfe0333ce040fa809354fab54ac897ce0415f66902aad402ec39418.jpg) +Figure 1: Learning rate schedules and CIFAR10 test accuracies for workers participating in SWAP. The large-batch phase with synchronized models is followed by the small-batch phase with diverging independent models. The test accuracy of the averaged weight model is computed by averaging the independent models and computing the test loss for the resulting model. + +# 4 LOSS LANDSCAPE VISUALIZATION AROUND SWAP ITERATES + +To visualize the mechanism behind SWAP, we plot the error achieved by our test network on a plane that contains the outputs of the three different phases of the algorithm. Inspired by (Garipov et al., 2018) and (Izmailov et al., 2018), we pick orthogonal vectors $u, v$ that span the plane which contains $\theta_{1}, \theta_{2}, \theta_{3}$ . We plot the loss value generated by model $\theta = \theta_{1} + \alpha u + \beta v$ at the location $(\alpha, \beta)$ . To plot a loss value, we first generate a weight vector $\theta$ , compute the batch-norm statistics for that model (through one pass over the training data), and then evaluate the test and train accuracies. + +In Figure 2, we plot the training and testing error for the CIFAR10 dataset. 
Algorithm 1: Stochastic Weight Averaging in Parallel (SWAP)
1 Number of workers $W$; weight initialization $\theta_0$; $t = 0$
2 Training accuracy $\tau$ at which to exit phase one
3 Learning-rate schedules $LR_{1}$ and $LR_{2}$ for phases one and two, respectively
4 Mini-batch sizes $B_{1}$ and $B_{2}$ for phases one and two, respectively
5 Gradient of the loss for sample $i$ at weights $\theta$: $g^{i}$
6 SGDUpdate($\cdot$): a function that updates the weights using SGD with momentum and weight decay
7 Phase 1:
8 while training accuracy $\leq \tau$ do
9  $\eta_{t} \gets LR_{1}(t)$
10  for $w$ in $[0, \dots, W-1]$ in parallel do
11   $B^{w} \gets$ random sub-sample of the training data of size $B_{1}/W$
12   $g^{w} \gets \frac{W}{B_{1}} \sum_{i \in B^{w}} g^{i}$ ; /* worker gradient */
13  end
14  $g_{t} \gets \frac{1}{W} \sum_{w} g^{w}$ ; /* synchronization of worker gradients */
15  $\theta_{t+1} \gets \theta_{t} + \mathrm{SGDUpdate}(\eta_{t}, g_{t}, g_{t-1}, \dots)$ ; /* first-order method update */
16  $t \gets t + 1$ ; $T \gets t$
17 end
18 Phase 2:
19 for $t$ in $[T, \dots, T+Q]$ do
20  $\eta_{t} \gets LR_{2}(t - T)$
21  for $w$ in $[0, \dots, W-1]$ in parallel do
22   $B^{w} \gets$ random sub-sample of the training data of size $B_{2}$
23   $g_{t}^{w} \gets \frac{1}{B_{2}} \sum_{i \in B^{w}} g^{i}$ ; /* worker gradient */
24   $\theta_{t+1}^{w} \gets \theta_{t}^{w} + \mathrm{SGDUpdate}(\eta_{t}, g_{t}^{w}, g_{t-1}^{w}, \dots)$ ; /* update at local worker */
25  end
26 end
27 We get $W$ different models at the end of phase 2
28 Phase 3: $\hat{\theta}_{\ell} \gets \frac{1}{W} \sum_{w} \theta_{T+Q}^{w}$ ; /* produce averaged model */
29 Compute batch-norm statistics for $\hat{\theta}_{\ell}$ to produce $\theta_{\ell}$
Result: Final model $\theta_{\ell}$

Here 'LB' marks the output of phase one, 'SGD' the output of a single worker after phase two, and 'SWAP' the final model.
Color codes correspond to error measures at the points interpolated on the plane. In Figure 2a, we observe that the level-sets of the training error (restricted to this plane) form an almost convex basin and that both the output of phase 1 ('LB') $^2$ and the output of one of the workers of phase 2 ('SGD') lie in the outer edges of the basin. Importantly, during phase 2 the model traversed to a different side of the basin (and not to the center). Also, the final model ('SWAP') is closer to the center of the basin. + +When we visualize these three points on the test loss landscape (Figure 2b), we observe that the variations in the topology of the basin cause the 'LB' and 'SGD' points to fall in regions of higher error. But, since the 'SWAP' point is closer to the center of the basin, it is less affected by the change in topology. In Figure 3, we neglect the 'LB' point and plot the plane spanned by three workers 'SGD1', 'SGD2', 'SGD3'. In Figure 3a, we can observe that these points lie at different sides of the training error basin while 'SWAP' is closer to the center. In Figure 3b, we observe that the change in topology causes the worker points to lie in regions of higher testing errors than 'SWAP', which is again close to the center of both basins. For reference, we have also plotted the best model that can be generated by this region of the plane. + +![](images/524e8971b1da32b7b1f14235044e66ede66388492063027047f08a6a41a56727.jpg) +(a) Train Error $(\%)$ + +![](images/2e4170ee94dc50ff181b1eeea1693dc8fec6d315e1851d7e8a4d47798e0d765a.jpg) +(b) Test Error $(\%)$ + +![](images/4139744704de45f97aa1f838aabaeae92f71e7124db98082e13a32513a7b09ea.jpg) +(a) Train Error $(\%)$ +Figure 3: CIFAR10 train and test error restricted to a 2D plane spanned by the output of three workers after phase 2 ('SGD1', 'SGD2', 'SGD3') and location of the average model ('SWAP'). The minimum test error achievable for models restricted to this region of the plane (marked as BEST). 
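The plane through three weight vectors used in these visualizations can be constructed with one Gram-Schmidt step; a minimal sketch, assuming the weights are given as flat arrays (`plane_basis` is a hypothetical helper name):

```python
import numpy as np

def plane_basis(theta1, theta2, theta3):
    """Orthonormal u, v spanning the plane through three weight vectors."""
    u = theta2 - theta1
    u /= np.linalg.norm(u)
    v = theta3 - theta1
    v -= (v @ u) * u          # Gram-Schmidt: remove the component along u
    v /= np.linalg.norm(v)
    return u, v

rng = np.random.default_rng(2)
t1, t2, t3 = (rng.normal(size=1000) for _ in range(3))
u, v = plane_basis(t1, t2, t3)

# A grid point (alpha, beta) maps to the weights theta1 + alpha*u + beta*v;
# each such model gets fresh batch-norm statistics before evaluation.
assert np.isclose(u @ v, 0.0) and np.isclose(np.linalg.norm(v), 1.0)
```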
![](images/a6b41008b176bd72ce23d8a9b9af8bed5dd87b3f81559a267ce2ebc2a5f4867.jpg)
(b) Test Error $(\%)$
Figure 2: CIFAR10 train and test error restricted to a 2D plane spanned by the output of phase 1 ('LB'), one of the outputs of phase 2 ('SGD'), and the averaged model ('SWAP').

# 4.1 SAMPLING FROM INDEPENDENT RUNS OF SGD OR SAMPLING FROM ONE

In (Mandt et al., 2017), the authors argue that in the later stages of SGD the weight iterates behave similarly to an Ornstein-Uhlenbeck process. Hence, with a constant learning rate, the SGD iterates should reach a stationary distribution that resembles a high-dimensional Gaussian. This distribution is centered at the local minimum; its covariance grows proportionally with the learning rate and inversely with the batch size, and its shape depends on both the Hessian of the mean loss and the covariance of the gradient.

The authors of (Izmailov et al., 2018) argue that, by virtue of being a high-dimensional Gaussian, all the mass of the distribution is concentrated near the 'shell' of an ellipsoid, and it is therefore unlikely for SGD to access the interior. They further argue that sampling weights from a single SGD run (leaving enough time steps between samples) selects weights that are spread out on the surface of this ellipsoid, so their average is closer to the center.

Without any further assumptions, we can justify sampling from different SGD runs (as done in phase 2 of SWAP). As long as all runs start in the same basin of attraction, and provided the model from (Mandt et al., 2017) holds, all runs will converge to the same stationary distribution, and each run can generate independent samples from it.
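The 'shell' argument is easy to check numerically: independent draws from a high-dimensional standard Gaussian concentrate near a sphere of radius $\sqrt{d}$, while their average lands much closer to the center. A toy simulation, with dimensions chosen purely for illustration:

```python
import numpy as np

dim, n_models = 1000, 8
rng = np.random.default_rng(3)

# Independent draws from a standard Gaussian in `dim` dimensions: each
# individual draw concentrates near a shell of radius ~ sqrt(dim).
samples = rng.normal(size=(n_models, dim))

dist_single = np.linalg.norm(samples, axis=1).mean()  # ~ sqrt(1000)
dist_avg = np.linalg.norm(samples.mean(axis=0))       # ~ sqrt(1000 / 8)

assert dist_avg < dist_single  # the average lies much closer to the center
```

Averaging $n$ independent draws shrinks the expected distance to the center by a factor of about $\sqrt{n}$, which is the gain SWAP aims for when it averages models from independent runs.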
# 4.2 ORTHOGONALITY OF THE GRADIENT AND THE DIRECTION TO THE CENTER OF THE BASIN

To gain some intuition about the advantage that SWA and SWAP have over SGD, we measure the cosine similarity between the gradient descent direction, $-g_{i}$, and the direction towards the output of SWAP, $\Delta \theta = \theta_{\mathrm{swap}} - \theta_{i}$. In Figure 4, we see that the cosine similarity, $\frac{\langle\Delta\theta, - g_i\rangle}{\|g_i\|\|\Delta\theta\|}$, decreases as training enters its later stages. We believe that towards the end of training, the angle between the gradient direction and the direction toward the center of the basin is large, so the process moves mostly orthogonally to that direction and progress slows. However, averaging samples from different sides of the basin can (and does) make faster progress towards the center.

![](images/26740bb2f968747bbbed20ddbd94999dba1252af19a32f4180608b465ec31469.jpg)
Figure 4: Cosine similarity between the direction of gradient descent and $\Delta \theta$

# 5 EXPERIMENTS

In this section we evaluate the performance of SWAP on image classification tasks on the CIFAR10, CIFAR100, and ImageNet datasets.

# 5.1 CIFAR10 AND CIFAR100

For the experiments in this subsection, we found the best hyper-parameters using grid searches (see Appendix A for details). We train using mini-batch SGD with Nesterov momentum (set to 0.9) and weight decay of $5 \times 10^{-4}$. We augment the data using cutout (DeVries & Taylor, 2017) and use a fast-to-train custom ResNet 9 from a submission to the DAWNbench leaderboard (Coleman et al.). All experiments were run on one machine with 8 NVIDIA Tesla V100 GPUs and use Horovod (Sergeev & Del Balso, 2018) to distribute the computation. All statistics were collected over 10 different runs.

CIFAR10: For these experiments, we used the following settings. SWAP phase one: 4096 samples per batch using 8 GPUs (512 samples per GPU).
Phase one is terminated when the training accuracy reaches $98\%$ (on average 108 epochs). SWAP phase two: 8 workers with one GPU each and 512 samples per batch for 30 epochs. The experiment that uses only large-batches had 4096 samples per batch across 8 GPUs and was run for 150 epochs. The experiments that use only small-batches had 512 samples per batch on 2 GPUs and were trained for 100 epochs.

Table 1 compares the best test accuracies and corresponding training times for models trained with small-batch only, with large-batch only, and with SWAP. We report the average accuracy of the workers before averaging and the accuracy of the final model.
| CIFAR10 | Test Accuracy (%) | Training Time (sec) |
| --- | --- | --- |
| SGD (small-batch) | 95.24 ± 0.09 | 254.12 ± 0.62 |
| SGD (large-batch) | 94.77 ± 0.23 | 132.62 ± 1.09 |
| SWAP (before averaging) | 94.70 ± 0.20 | 167.57 ± 3.25 |
| SWAP (after averaging) | 95.23 ± 0.08 | 169.20 ± 3.25 |

Table 1: Training Statistics for CIFAR10

CIFAR100: For these experiments, we use the following settings. SWAP phase one: 2048 samples per batch using 8 GPUs (256 samples per GPU). Phase one exits when the training accuracy reaches $90\%$ (on average 112 epochs). SWAP phase two: 8 workers with one GPU each and 128 samples per batch, training for 10 epochs. The experiments that use only large-batch training were run for 150 epochs with batches of 2048 on 8 GPUs. The experiments that use only small-batch training were run for 150 epochs using batches of 128 on 1 GPU.
| CIFAR100 | Test Accuracy (%) | Training Time (sec) |
| --- | --- | --- |
| SGD (small-batch) | 77.01 ± 0.25 | 573.76 ± 2.25 |
| SGD (large-batch) | 75.84 ± 0.35 | 116.13 ± 1.35 |
| SWAP (before averaging) | 75.74 ± 0.15 | 123.11 ± 1.85 |
| SWAP (after averaging) | 78.18 ± 0.21 | 125.34 ± 1.85 |
Table 2: Training Statistics for CIFAR100

Table 2 compares the best test accuracies and corresponding training times for models trained with only small-batches (for 150 epochs), with only large-batches (for 150 epochs), and with SWAP.

For SWAP, we report test accuracies obtained using the last SGD iterate before averaging, and the test accuracy of the final model obtained after averaging. We observe a significant improvement in test accuracy after averaging the models.

For both CIFAR10 and CIFAR100, training with small-batches achieves higher testing accuracy than training with large-batches but takes much longer to train. SWAP, however, terminates in time comparable to the large-batch run but achieves accuracies on par with (or better than) small-batch training.

Achieving state-of-the-art training speeds for CIFAR10: At the time of writing, the frontrunner of the DAWNbench competition takes 37 seconds with 4 Tesla V100 GPUs to train CIFAR10 to $94\%$ test accuracy. Using SWAP with 8 Tesla V100 GPUs, a phase-one batch size of 2048 samples for 28 epochs, and a phase-two batch size of 256 samples for one epoch, we reach the same accuracy in 27 seconds.

# 5.2 EXPERIMENTS ON IMAGENET

We use SWAP to accelerate a publicly available fast-to-train ImageNet model with published learning rate and batch size schedules $^{4}$ . The default settings for this code modify the learning rates and batch sizes throughout the optimization (see Figure 5). Our small-batch experiments train ImageNet for 28 epochs using the published schedules with no modification and are run on 8 Tesla V100 GPUs. Our large-batch experiments modify the schedules by doubling the batch size and doubling the learning rates (see Figure 5) and are run on 16 Tesla V100 GPUs. For SWAP phase 1, we use the large-batch settings for 22 epochs, and for SWAP phase 2, we run two independent workers, each with 8 GPUs, using the settings for small-batches for 6 epochs.
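The schedule modification amounts to the linear scaling rule: multiply every learning rate by the batch-size ratio, and revert to the original schedule when phase 2 begins. A sketch with hypothetical schedule knots (the actual published schedules differ):

```python
# Hypothetical piecewise-linear schedule knots (epoch, learning rate) for a
# base run; the values are illustrative, not the published schedules.
base_knots = [(0, 0.1), (5, 0.4), (28, 0.0)]
base_batch = 256

def scaled_schedule(knots, base_batch, new_batch):
    """Linear scaling rule: multiply every learning rate by new_batch/base_batch."""
    scale = new_batch / base_batch
    return [(epoch, lr * scale) for epoch, lr in knots]

# Doubling the batch size doubles every learning rate in the schedule.
large_knots = scaled_schedule(base_knots, base_batch, 2 * base_batch)
assert large_knots == [(0, 0.2), (5, 0.8), (28, 0.0)]
# SWAP reverts to `base_knots` when transitioning from phase 1 to phase 2.
```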
We observe that doubling the batch size reduces the Top1 and Top5 test accuracies with respect to the small-batch run. SWAP, however, recovers the generalization performance at substantially reduced training times. Our results are compiled in Table 3 (the statistics were collected over 3 runs). It is worth noting that these accelerations were achieved with no tuning other than increasing the learning rates proportionally to the increase in batch size and reverting to the original schedule when transitioning between phases.

![](images/c0e71149bad3ec35676bdeddbc5324b8f313cdd370cf620017e9b833fcb9da38.jpg)
(a) Learning rate schedule

![](images/7a2fac7aa56f9cb7dc6d7b27cb100ec1595f6be991301025835f4cc635e9de64.jpg)
(b) Batch sizes across epochs for ImageNet

Figure 5: Learning rate and mini-batch schedules used for ImageNet. The original schedule for 8 GPUs was taken from an existing DAWNbench submission. For the larger-batch experiment, we double the batch size, double the number of GPUs, and double the learning rate of the original schedule. For SWAP, we switch from the modified schedule to the original schedule as we move from phase 1 to phase 2.
| ImageNet | Top1 Accuracy (%) | Top5 Accuracy (%) | Training Time (min) |
| --- | --- | --- | --- |
| SGD (small-batch) | 76.14 ± 0.07 | 93.30 ± 0.07 | 235.29 ± 0.33 |
| SGD (large-batch) | 75.86 ± 0.03 | 92.98 ± 0.06 | 127.20 ± 0.78 |
| SWAP (before averaging) | 75.96 ± 0.02 | 93.15 ± 0.02 | 149.12 ± 0.55 |
| SWAP (after averaging) | 76.19 ± 0.03 | 93.32 ± 0.02 | 156.55 ± 0.56 |
+ +Table 3: Training Statistics for ImageNet + +![](images/ca347d7379a433f90c74f2783e34c0d57e9a65f585d4d2e5aeb2dacbd7f00659.jpg) +(a) Large-batch SWA + +![](images/2530afb3eba438d0c55f356ab0628d895d8e35f67e101130f771e794b5d6128c.jpg) +(b) Large-batch training followed by SWA with small-batches + +![](images/b1f3cac654167b7685e54f6604fc8af3cded4b35eaf626ed487579ad4a626b4c.jpg) +(c) Small-batch SWA +Figure 6: Illustration of SWA with different batch sizes + +# 5.3 EMPIRICAL COMPARISON OF SWA AND SWAP + +We now compare SWAP with SWA: the sequential weight averaging algorithm from Izmailov et al. (2018). For the experiments in this section, we use the CIFAR100 dataset. We sample the same number of models for both SWA and SWAP and maintain the same number of epochs per sample. For SWA, we sample each model with 10 epochs in-between and average them to get the final model. For SWAP, we run 8 independent workers for 10 epochs each and use their average as the final model. + +Large-batch SWA: We explore if SWA can recover the test accuracy of small-batch training on a large-batch training run. We use the same (large) batch size throughout. We follow an initial training cycle with cyclic learning rates (with cycles of 10 epochs) to sample 8 models (one from the end of each cycle). See Figure 6a for an illustration of the learning rate schedule. + +As expected we observe that the large-batch training run achieves lower training accuracy, but surprisingly SWA was unable to improve it (see Table 4, row 1). + +Large-batch followed by small-batch SWA: We evaluate the effect of executing SWA using small-batches after a large-batch training run. We interrupt the large-batch phase at the same accuracy we interrupt phase 1 of our CIFAR100 experiment (Table 2). In this case, the small-batch phase uses a single worker and samples the models sequentially. 
SWA is able to reach the test accuracy of a small-batch run but requires more than three times longer than SWAP to compute the model (see Table 4, row 2). An illustration of the learning rate schedule is provided in Figure 6b.

Small-batch SWA and SWAP: We start the SWA cyclic learning rate schedule from the best model found by solely small-batch training (Table 2, row 1). Since the cycle length and cycle count are fixed, the only free parameter is the peak learning rate, which we select using a grid search. Once the SWA schedule is specified, we re-use the peak learning rate settings in SWAP. We start phase two from the model that was generated as the output of phase 1 for the experiment in Section 5.1, reported in Table 2, rows 3 and 4. With these settings, small-batch SWA achieves better accuracy than SWAP (by $\sim 0.9\%$) at 6.8x more training time.

Next, we explore the speed-up that SWAP achieves over SWA if the accuracy of SWA is set as the target. To that end, we relax the constraints on SWAP: by increasing the phase-two schedule from one 10-epoch cycle to two 20-epoch cycles and sampling two models from each worker (16 models), the resulting model achieved a test accuracy of $79.11\%$ in 241 seconds, or $3.5\mathrm{x}$ less time.
| CIFAR100 | Test accuracy before averaging (%) | Test accuracy after averaging (%) | Training Time (sec) |
| --- | --- | --- | --- |
| Large-batch SWA | 76.06 ± 0.25 | 76.00 ± 0.31 | 376.4 ± 2.25 |
| Large-batch followed by small-batch SWA | 76.26 ± 0.35 | 78.12 ± 0.14 | 398.0 ± 1.35 |
| Small-batch SWA | 76.80 ± 0.15 | 79.09 ± 0.19 | 848.6 ± 5.61 |
| SWAP (10 small-batch epochs) | 75.74 ± 0.15 | 78.18 ± 0.21 | 125.30 ± 1.85 |
| SWAP (40 small-batch epochs) | 76.19 ± 0.19 | 79.11 ± 0.12 | 241.54 ± 1.62 |
Table 4: Comparison: SWA versus SWAP

# 6 CONCLUSIONS AND FUTURE WORK

We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm that uses a variant of Stochastic Weight Averaging (SWA) to improve the generalization performance of a model trained with large mini-batches. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models trained using small batches. The final model obtained after averaging has good generalization performance and is trained in a shorter time. We believe that both this variant and this application of SWA are novel.

We observed that using large batches in the initial stages of training does not preclude the models from achieving good generalization performance. That is, by refining the output of a large-batch run, with models sampled sequentially as in SWA or in parallel as in SWAP, the resulting model performs as well as models trained using small batches only. We confirm this on the image classification datasets CIFAR10, CIFAR100, and ImageNet.

Through visualizations, we complement the existing evidence that averaged weights are closer to the center of a training-loss basin than the models produced by stochastic gradient descent. It is interesting to note that the basin into which the large mini-batch run converges seems to be the same basin where the refined models are found. So, it is possible that regions with bad and good generalization performance are connected through regions of low training loss and, moreover, that both belong to an almost convex basin. Our method requires the choice of (at least) one more hyperparameter: the transition point between the large-batch and small-batch phases. For our experiments, we chose this using a grid search. A principled method to choose the transition point will be the focus of future work.
In future work, we intend to explore the behavior of SWAP when used with other optimization schemes, such as Layer-wise Adaptive Rate Scaling (LARS) (You et al., 2017), mixed-precision training (Jia et al., 2018), post-local SGD (Lin et al., 2018) or NovoGrad (Ginsburg et al., 2019). The design of SWAP allows us to substitute any of these for the large-batch stage; for example, we can use local SGD to accelerate the first stage of SWAP by reducing the communication overhead.

# REFERENCES

Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, and Andrew Gordon Wilson. There are many consistent explanations of unlabeled data: Why you should average. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rkgKBhA5Y7.

Cody Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris Re, and Matei Zaharia. Dawnbench: An end-to-end deep learning benchmark and competition.

Aditya Devarakonda, Maxim Naumov, and Michael Garland. Adabatch: Adaptive batch sizes for training deep neural networks. CoRR, abs/1712.02029, 2017. URL http://arxiv.org/abs/1712.02029.

Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.

Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1019-1028. JMLR.org, 2017.

Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. In Advances in Neural Information Processing Systems, pp. 8789-8798, 2018.

Boris Ginsburg, Patrice Castonguay, Oleksii Hrinchuk, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, Huyen Nguyen, and Jonathan M Cohen. Stochastic gradient methods with layerwise adaptive moments for training of deep networks.
arXiv preprint arXiv:1905.11286, 2019.

Noah Golmant, Nikita Vemuri, Zhewei Yao, Vladimir Feinberg, Amir Gholami, Kai Rothauge, Michael W Mahoney, and Joseph Gonzalez. On the computational inefficiency of large batch sizes for stochastic gradient descent. arXiv preprint arXiv:1811.12941, 2018.

Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In NIPS, 2017.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018.

Xianyan Jia, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie, Zhenyu Guo, Yuzhou Yang, Liwei Yu, et al. Highly scalable deep learning training system with mixed-precision: Training ImageNet in four minutes. arXiv preprint arXiv:1807.11205, 2018.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.

Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, pp. 6389-6399, 2018.

Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. arXiv preprint arXiv:1908.07873, 2019.

Tao Lin, Sebastian U Stich, Kumar Kshitij Patel, and Martin Jaggi. Don't use large mini-batches, use local sgd. arXiv preprint arXiv:1808.07217, 2018.

Siyuan Ma, Raef Bassily, and Mikhail Belkin.
The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning. arXiv preprint arXiv:1712.06559, 2017.

Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. arXiv preprint arXiv:1902.02476, 2019.

Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate bayesian inference. The Journal of Machine Learning Research, 18(1):4873-4907, 2017.

Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. arXiv preprint arXiv:1812.06162, 2018.

Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, and Andrew Gordon Wilson. Improving stability in deep reinforcement learning with weight averaging.

Alexander Sergeev and Mike Del Balso. Horovod: fast and easy distributed deep learning in tensorflow. arXiv preprint arXiv:1802.05799, 2018.

Christopher J Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E Dahl. Measuring the effects of data parallelism on neural network training. arXiv preprint arXiv:1811.03600, 2018.

Samuel L Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V Le. Don't decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489, 2017.

Sebastian U Stich. Local sgd converges fast and communicates little. arXiv preprint arXiv:1805.09767, 2018.

Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, and Chris De Sa. SWALP: Stochastic weight averaging in low precision training. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 7015-7024, Long Beach, California, USA, 09-15 Jun 2019. PMLR.

Yang You, Igor Gitman, and Boris Ginsburg.
Scaling SGD batch size to 32K for ImageNet training. 2017.

Hao Yu, Sen Yang, and Shenghuo Zhu. Parallel restarted sgd with faster convergence and less communication: Demystifying why model averaging works for deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 5693-5700, 2019.

Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré. Parallel SGD: When does averaging help? ArXiv, abs/1606.07365, 2016.

# A HYPERPARAMETERS FOR CIFAR10 AND CIFAR100 EXPERIMENTS

We provide the parameters used in the experiments of Section 5.1. These were obtained via independent grid searches for each experiment. For all CIFAR experiments, the momentum and weight decay constants were kept at 0.9 and $5 \times 10^{-4}$, respectively. Tables 5 and 6 list the remaining hyperparameters. When a stopping accuracy of $100\%$ is listed, we mean that the maximum number of epochs was used.
| CIFAR10 | SGD (small-batch) | SGD (large-batch) | SWAP (Phase 1) | SWAP (Phase 2) |
| --- | --- | --- | --- | --- |
| Batch-size | 512 | 4096 | 4096 | 512 |
| Learning-rate Peak | 0.3 | 1.2 | 1.2 | 0.12 |
| Maximum Epochs | 100 | 150 | 150 | 30 |
| Warm-up Epochs | 30 | 30 | 30 | 0 |
| GPUs used per model | 2 | 8 | 8 | 1 |
| Stopping Accuracy (%) | 100 | 100 | 98 | 100 |
Table 5: Hyperparameters obtained using tuning for CIFAR10
| CIFAR100 | SGD (small-batch) | SGD (large-batch) | SWAP (Phase 1) | SWAP (Phase 2) |
| --- | --- | --- | --- | --- |
| Batch-size | 128 | 2048 | 2048 | 128 |
| Learning-rate Peak | 0.2 | 1.2 | 1.2 | 0.05 |
| Total Epochs | 150 | 150 | 150 | 30 |
| Warm-up Epochs | 60 | 45 | 45 | 0 |
| GPUs used per model | 1 | 8 | 8 | 1 |
| Stopping Accuracy (%) | 100 | 100 | 90 | 100 |
Table 6: Hyperparameters obtained using tuning for CIFAR100

# STRUCTBERT: INCORPORATING LANGUAGE STRUCTURES INTO PRE-TRAINING FOR DEEP LANGUAGE UNDERSTANDING

Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng and Luo
Si

Alibaba Group Inc.

{hebian.ww, b.bi, yml19608, wuchen.wc, jiangnan.xjn, zuyi.bzy, liwei.peng, luo.si}@alibaba-inc.com

# ABSTRACT

Recently, the pre-trained language model BERT (and its robustly optimized version RoBERTa) has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification, natural language inference, semantic textual similarity and question answering. Inspired by the linearization exploration work of Elman (Elman, 1990), we extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. As a result, the new model is adapted to different levels of language understanding required by downstream tasks.

StructBERT with structural pre-training gives surprisingly good empirical results on a variety of downstream tasks, including pushing the state-of-the-art on the GLUE benchmark to 89.0 (outperforming all published models at the time of model submission), the F1 score on SQuAD v1.1 question answering to 93.0, and the accuracy on SNLI to 91.7.

# 1 INTRODUCTION

A pre-trained language model (LM) is a key component in many natural language understanding (NLU) tasks such as semantic textual similarity (Cer et al., 2017), question answering (Rajpurkar et al., 2016) and sentiment classification (Socher et al., 2013). In order to obtain reliable language representations, neural language models are designed to define the joint probability function of sequences of words in text with self-supervised learning.
Different from traditional word-specific embedding, in which each token is assigned a global representation, recent work, such as CoVe (McCann et al., 2017), ELMo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., 2018), derives contextualized word vectors from a language model trained on a large text corpus. These models have been shown effective for many downstream NLU tasks.

Among the context-sensitive language models, BERT (and its robustly optimized version RoBERTa (Liu et al., 2019b)) has taken the NLP world by storm. It is designed to pre-train bidirectional representations by jointly conditioning on both left and right context in all layers, and to model the representations by predicting masked words only through the contexts. However, it does not make the most of underlying language structures.

According to Elman's study (Elman, 1990), recurrent neural networks were shown to be sensitive to regularities in word order in simple sentences. Since language fluency is determined by the ordering of words and sentences, finding the best permutation of a set of words and sentences is an essential problem in many NLP tasks, such as machine translation and NLU (Hasler et al., 2017). Recently, word ordering was treated as LM-based linearization, solved solely with language models (Schmaltz et al., 2016). Schmaltz et al. showed that recurrent neural network language models (Mikolov et al., 2010) with long short-term memory (Hochreiter & Schmidhuber, 1997) cells work effectively for word ordering even without any explicit syntactic information.

In this paper, we introduce a new type of contextual representation, StructBERT, which incorporates language structures into BERT pre-training by proposing two novel linearization strategies. Specifically, in addition to the existing masking strategy, StructBERT extends BERT by leveraging the structural information: word-level ordering and sentence-level ordering.
We augment model pre-training with two new structural objectives on the inner-sentence and inter-sentence structures, respectively. In this way, the linguistic aspects (Elman, 1990) are explicitly captured during the pre-training procedure. With structural pre-training, StructBERT encodes dependency between words as well as sentences in the contextualized representation, which provides the model with better generalizability and adaptability.

StructBERT significantly advances the state-of-the-art results on a variety of NLU tasks, including the GLUE benchmark (Wang et al., 2018), the SNLI dataset (Bowman et al., 2015) and the SQuAD v1.1 question answering task (Rajpurkar et al., 2016). All of these experimental results clearly demonstrate StructBERT's exceptional effectiveness and generalization capability in language understanding.

We make the following major contributions:

- We propose novel structural pre-training that extends BERT by incorporating the word structural objective and the sentence structural objective to leverage language structures in contextualized representation. This enables StructBERT to explicitly model language structures by forcing it to reconstruct the right order of words and sentences for correct prediction.
- StructBERT significantly outperforms all published state-of-the-art models on a wide range of NLU tasks at the time of model submission. This model extends the superiority of BERT, and boosts the performance in many language understanding applications such as semantic textual similarity, sentiment analysis, textual entailment, and question answering.

# 2 STRUCTBERT MODEL PRE-TRAINING

StructBERT builds upon the BERT architecture, which uses a multi-layer bidirectional Transformer network (Vaswani et al., 2017). Given a single text sentence or a pair of text sentences, BERT packs them in one token sequence and learns a contextualized vector representation for each token.
Every input token is represented based on the word, the position, and the text segment it belongs to. Next, the input vectors are fed into a stack of multi-layer bidirectional Transformer blocks, which use self-attention to compute the text representations by considering the entire input sequence.

The original BERT introduces two unsupervised prediction tasks to pre-train the model: a masked LM task and a next sentence prediction task. Different from original BERT, our StructBERT amplifies the masked LM task by shuffling a certain number of tokens after word masking and predicting the right order. Moreover, to better understand the relationship between sentences, StructBERT randomly swaps the sentence order and predicts both the next sentence and the previous sentence as a new sentence prediction task. In this way, the new model not only explicitly captures the fine-grained word structure in every sentence, but also properly models the inter-sentence structure in a bidirectional manner. Once the StructBERT language model is pre-trained with these two auxiliary tasks, we can fine-tune it on task-specific data for a wide range of downstream tasks.

# 2.1 INPUT REPRESENTATION

Every input $x$ is a sequence of word tokens, which can be either a single sentence or a pair of sentences packed together. The input representation follows that used in BERT (Devlin et al., 2018). For each input token $t_i$, its vector representation $\mathbf{x}_i$ is computed by summing the corresponding token embedding, positional embedding, and segment embedding. We always add a special classification embedding ([CLS]) as the first token of every sequence, and a special end-of-sequence ([SEP]) token to the end of each segment. Texts are tokenized into subword units by WordPiece (Wu et al., 2016), and absolute positional embeddings are learned with supported sequence lengths up to 512 tokens.
In addition, the segment embeddings are used to differentiate a pair of sentences as in BERT.

![](images/9d83fed7df606054e3c60ea884ca3db5c2dac2f9ac95a0024dbe2e356bdfea69.jpg)
(a) Word Structural Objective

![](images/a9dee39eb89c6c068eb6a1781c7956da8f5a8e54a8f77b43010f956be6391c74.jpg)
(b) Sentence Structural Objective

Figure 1: Illustrations of the two new pre-training objectives

# 2.2 TRANSFORMER ENCODER

We use a multi-layer bidirectional Transformer encoder (Vaswani et al., 2017) to encode contextual information for input representation. Given the input vectors $\mathbf{X} = \{\mathbf{x}_i\}_{i=1}^N$, an $L$-layer Transformer is used to encode the input as:

$$
\mathbf{H}^{l} = \mathrm{Transformer}_{l}(\mathbf{H}^{l-1}) \tag{1}
$$

where $l \in [1, L]$, $\mathbf{H}^0 = \mathbf{X}$ and $\mathbf{H}^L = [\mathbf{h}_1^L, \dots, \mathbf{h}_N^L]$. We use the hidden vector $\mathbf{h}_i^L$ as the contextualized representation of the input token $t_i$.

# 2.3 PRE-TRAINING OBJECTIVES

To make full use of the rich inner-sentence and inter-sentence structures in language, we extend the pre-training objectives of original BERT in two ways: ① a word structural objective (mainly for the single-sentence task), and ② a sentence structural objective (mainly for the sentence-pair task). We pre-train these two auxiliary objectives together with the original masked LM objective in a unified model to exploit inherent language structures.

# 2.3.1 WORD STRUCTURAL OBJECTIVE

Despite its success in various NLU tasks, original BERT is unable to explicitly model the sequential order and high-order dependency of words in natural language. Given a set of words in random order from a sentence, ideally a good language model should be able to recover this sentence by reconstructing the correct order of these words.
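The input representation of Section 2.1 (the sum of token, positional, and segment embeddings) and the layer recursion of Eq. (1) can be sketched as follows. This is a minimal illustration with tiny randomly initialized embedding tables (BERT-base uses a 30,522-token vocabulary and $H = 768$); the `layers` argument stands in for real Transformer blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MAX_POS, SEGMENTS, H = 1000, 64, 2, 16  # tiny illustrative sizes

tok_emb = rng.normal(size=(VOCAB, H))
pos_emb = rng.normal(size=(MAX_POS, H))  # absolute positions, learned in BERT
seg_emb = rng.normal(size=(SEGMENTS, H))

def input_representation(token_ids, segment_ids):
    """x_i = token embedding + positional embedding + segment embedding."""
    positions = np.arange(len(token_ids))
    return tok_emb[token_ids] + pos_emb[positions] + seg_emb[segment_ids]

def encode(X, layers):
    """Eq. (1): H^0 = X, H^l = Transformer_l(H^{l-1}) for l = 1..L."""
    H_l = X
    for transformer_l in layers:  # each entry stands in for one Transformer block
        H_l = transformer_l(H_l)
    return H_l  # H^L: one contextualized vector h_i^L per input token

X = input_representation([0, 1, 2], [0, 0, 1])  # 3 tokens, 2 segments
```

The hidden state $\mathbf{h}_i^L$ returned for each position is what the softmax classifiers of the pre-training objectives below consume.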
To implement this idea in StructBERT, we supplement BERT's training objectives with a new word structural objective which endows the model with the ability to reconstruct the right order of a certain number of intentionally shuffled word tokens. This new word objective is jointly trained together with the original masked LM objective from BERT.

Figure 1a illustrates the procedure of jointly training the new word objective and the masked LM objective. In every input sequence, we first mask $15\%$ of all tokens at random, as done in BERT (Devlin et al., 2018). The corresponding output vectors $\mathbf{h}_i^L$ of the masked tokens computed by the bidirectional Transformer encoder are fed into a softmax classifier to predict the original tokens.

Next, the new word objective comes into play to take word order into consideration. Given the randomness of token shuffling, the word objective is equivalent to maximizing the likelihood of placing every shuffled token in its correct position. More formally, this objective can be formulated as:

$$
\arg\max_{\theta} \sum \log P\left(\mathrm{pos}_1 = t_1, \mathrm{pos}_2 = t_2, \dots, \mathrm{pos}_K = t_K \mid t_1, t_2, \dots, t_K, \theta\right), \tag{2}
$$

where $\theta$ represents the set of trainable parameters of StructBERT, and $K$ indicates the length of every shuffled subsequence. Technically, a larger $K$ forces the model to reconstruct longer sequences while injecting more disturbed input; conversely, a smaller $K$ gives the model more undisturbed sequences but makes it less capable of recovering long sequences. We use trigrams (i.e., $K = 3$) for subsequence shuffling to balance language reconstructability and robustness of the model.
Specifically, as shown in Figure 1a, we randomly choose some percentage of trigrams from unmasked tokens, and shuffle the three words (e.g., $t_2$, $t_3$, and $t_4$ in the figure) within each of the trigrams. The output vectors of the shuffled tokens computed by the bidirectional Transformer encoder are then fed into a softmax classifier to predict the original tokens. The new word objective is jointly learned together with the masked LM objective in a unified pre-trained model with equal weights.

# 2.3.2 SENTENCE STRUCTURAL OBJECTIVE

The next sentence prediction task is considered easy for the original BERT model (the prediction accuracy of BERT can easily achieve $97\% - 98\%$ in this task (Devlin et al., 2018)). We, therefore, extend the sentence prediction task by predicting both the next sentence and the previous sentence, to make the pre-trained language model aware of the sequential order of the sentences in a bidirectional manner.

As illustrated in Figure 1b, given a pair of sentences $(S_{1}, S_{2})$ as input, we predict whether $S_{2}$ is the next sentence that follows $S_{1}$, or the previous sentence that precedes $S_{1}$, or a random sentence from a different document. Specifically, for the sentence $S_{1}$, $\frac{1}{3}$ of the time we choose the text span that follows $S_{1}$ as the second sentence $S_{2}$, $\frac{1}{3}$ of the time the previous sentence ahead of $S_{1}$ is selected, and $\frac{1}{3}$ of the time a sentence randomly sampled from the other documents is used as $S_{2}$. The two sentences are concatenated together into an input sequence with the separator token [SEP] in between, as done in BERT. We pool the model output by taking the hidden state corresponding to the first token [CLS], and feed the encoding vector of [CLS] into a softmax classifier to make a three-class prediction.
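The construction of the two auxiliary training signals described above can be sketched as follows. This is a simplified illustration under stated assumptions: function names are ours, masking is omitted, and details such as restricting shuffling to unmasked tokens are not reproduced:

```python
import random

def shuffle_trigrams(tokens, pct=0.05, seed=0):
    """Shuffle the words inside a fraction of randomly chosen trigrams (K = 3).

    Returns the perturbed sequence and, for each shuffled trigram, its start
    position: the word objective must put these tokens back in order.
    """
    rng = random.Random(seed)
    tokens = list(tokens)
    targets = []
    for s in range(0, len(tokens) - 2, 3):
        if rng.random() < pct:
            tri = tokens[s:s + 3]
            rng.shuffle(tri)
            tokens[s:s + 3] = tri
            targets.append(s)
    return tokens, targets

def sample_sentence_pair(doc, idx, other_docs, rng):
    """Three-way sentence sampling: next (label 0), previous (1), random (2)."""
    r = rng.random()
    if r < 1 / 3 and idx + 1 < len(doc):
        return (doc[idx], doc[idx + 1]), 0
    if r < 2 / 3 and idx > 0:
        return (doc[idx], doc[idx - 1]), 1
    rand_doc = rng.choice(other_docs)
    return (doc[idx], rng.choice(rand_doc)), 2
```

With `pct=1.0` every trigram is shuffled, so the output is a permutation of the input in which only positions within each trigram move; the three-class label feeds the [CLS] softmax classifier.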
# 2.4 PRE-TRAINING SETUP

The training objective function is a linear combination of the word structural objective and the sentence structural objective. For the masked LM objective, we followed the same masking rate and settings as in BERT (Devlin et al., 2018). $5\%$ of trigrams are selected for random shuffling.

We used documents from English Wikipedia (2,500M words) and BookCorpus (Zhu et al., 2015) as pre-training data, following the preprocessing and the WordPiece tokenization from (Devlin et al., 2018). The maximum length of input sequence was set to 512.

We ran Adam with a learning rate of 1e-4, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, L2 weight decay of 0.01, learning rate warm-up over the first $10\%$ of the total steps, and linear decay of the learning rate. We set a dropout probability of 0.1 for every layer. The gelu activation (Hendrycks & Gimpel, 2016) was used as done in GPT (Radford et al., 2018).

We denote the number of Transformer block layers as $L$, the size of hidden vectors as $H$, and the number of self-attention heads as $A$. Following the practice of BERT, we primarily report experimental results on the two model sizes:

StructBERTBase: $L = 12$, $H = 768$, $A = 12$, Number of parameters = 110M

StructBERTLarge: $L = 24$, $H = 1024$, $A = 16$, Number of parameters = 340M

Pre-training of StructBERT was performed on a distributed computing cluster consisting of 64 Tesla V100 GPU cards. For StructBERTBase, we ran the pre-training procedure for 40 epochs, which took about 38 hours, and the training of StructBERTLarge took about 7 days to complete.

# 3 EXPERIMENTS

In this section, we report results of StructBERT on a variety of downstream tasks including General Language Understanding Evaluation (GLUE benchmark), Stanford Natural Language Inference (SNLI corpus) and extractive question answering (SQuAD v1.1).
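The learning-rate schedule described in Section 2.4 (linear warm-up over the first 10% of total steps, then linear decay) can be sketched as a small helper; the function name is ours and the peak rate 1e-4 is the paper's value:

```python
def lr_schedule(step, total_steps, peak_lr=1e-4, warmup_frac=0.10):
    """Linear warm-up to peak_lr over the first 10% of steps, then linear decay."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    # decay linearly from peak_lr at the end of warm-up to 0 at total_steps
    return peak_lr * (total_steps - step) / max(1, total_steps - warmup_steps)
```

For example, with 1,000 total steps the rate rises from 0 to 1e-4 over steps 0-100 and falls back to 0 by step 1,000.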
Following BERT's practice, during fine-tuning on downstream tasks, we performed a grid search or an exhaustive search (depending on the data size) over the following sets of parameters and chose the model that performed best on the dev set. All other parameters remained the same as in pre-training:

Batch size: 16, 24, 32; Learning rate: 2e-5, 3e-5, 5e-5; Number of epochs: 2, 3; Dropout rate: 0.05, 0.1
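The fine-tuning search above can be sketched as an exhaustive sweep over the listed values; `evaluate` is a placeholder for fine-tuning on task data and scoring on the dev set:

```python
from itertools import product

# Search space listed above (values from the paper; names are ours).
grid = {
    "batch_size": [16, 24, 32],
    "learning_rate": [2e-5, 3e-5, 5e-5],
    "num_epochs": [2, 3],
    "dropout": [0.05, 0.1],
}

def grid_search(evaluate, grid):
    """Try every combination and keep the one with the best dev-set score."""
    best_score, best_cfg = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)  # placeholder: fine-tune, then score on dev set
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```

This particular grid yields 3 × 3 × 2 × 2 = 36 fine-tuning runs per task, which is why the authors fall back to a smaller grid search when the task's training set is large.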
| System | CoLA 8.5k | SST-2 67k | MRPC 3.5k | STS-B 5.7k | QQP 363k | MNLI 392k | QNLI 108k | RTE 2.5k | WNLI 634 | AX | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Human Baseline | 66.4 | 97.8 | 86.3/80.8 | 92.7/92.6 | 59.5/80.4 | 92.0/92.8 | 91.2 | 93.6 | 95.9 | - | - |
| BERTLarge$^{1}$ | 60.5 | 94.9 | 89.3/85.4 | 87.6/86.5 | 72.1/89.3 | 86.7/85.9 | 92.7 | 70.1 | 65.1 | 39.6 | 80.5 |
| BERT on STILTs$^{2}$ | 62.1 | 94.3 | 90.2/86.6 | 88.7/88.3 | 71.9/89.4 | 86.4/85.6 | 92.7 | 80.1 | 65.1 | 28.3 | 82.0 |
| SpanBERT$^{3}$ | 64.3 | 94.8 | 90.9/87.9 | 89.9/89.1 | 71.9/89.5 | 88.1/87.7 | 94.3 | 79.0 | 65.1 | 45.1 | 82.8 |
| Snorkel MeTaL$^{4}$ | 63.8 | 96.2 | 91.5/88.5 | 90.1/89.7 | 73.1/89.9 | 87.6/87.2 | 93.9 | 80.9 | 65.1 | 39.9 | 83.2 |
| MT-DNN++$^{5}$ | 65.4 | 95.6 | 91.1/88.2 | 89.6/89.0 | 72.7/89.6 | 87.9/87.4 | 95.8 | 85.1 | 65.1 | 41.9 | 83.8 |
| MT-DNN*$^{5}$ | 65.4 | 96.5 | 92.2/89.5 | 89.6/89.0 | 73.7/89.9 | 87.9/87.4 | 96.0 | 85.7 | 65.1 | 42.8 | 84.2 |
| StructBERTBase | 57.2 | 94.7 | 89.9/86.1 | 88.5/87.6 | 72.0/89.6 | 85.5/84.6 | 92.6 | 76.9 | 65.1 | 39.0 | 80.9 |
| StructBERTLarge | 65.3 | 95.2 | 92.0/89.3 | 90.3/89.4 | 74.1/90.5 | 88.0/87.7 | 95.7 | 83.1 | 65.1 | 43.6 | 83.9 |
| StructBERTLarge* | 68.6 | 95.2 | 92.5/90.1 | 91.1/90.6 | 74.4/90.7 | 88.2/87.9 | 95.7 | 83.1 | 65.1 | 43.9 | 84.5 |
| XLNet*$^{6}$ | 67.8 | 96.8 | 93.0/90.7 | 91.6/91.1 | 74.2/90.3 | 90.2/89.8 | 98.6 | 86.3 | 90.4 | 47.5 | 88.4 |
| RoBERTa*$^{7}$ | 67.8 | 96.7 | 92.3/89.8 | 92.2/91.9 | 74.3/90.2 | 90.8/90.2 | 98.9 | 88.2 | 89.0 | 48.7 | 88.5 |
| Adv-RoBERTa* | 68.0 | 96.8 | 93.1/90.8 | 92.4/92.2 | 74.8/90.3 | 91.1/90.7 | 98.8 | 88.7 | 89.0 | 50.1 | 88.8 |
| StructBERTRoBERTa* | 69.2 | 97.1 | 93.6/91.5 | 92.8/92.4 | 74.4/90.7 | 90.7/90.3 | 99.2 | 87.3 | 89.7 | 47.8 | 89.0 |
Table 1: Results of published models on the GLUE test set, which are scored by the GLUE evaluation server. The number beside each task name denotes the number of training examples. The state-of-the-art results are in bold. All the results are obtained from https://gluebenchmark.com/leaderboard (StructBERT was submitted under a different model name, ALICE). * indicates an ensemble model. Model references: $^{1}$: (Devlin et al., 2018); $^{2}$: (Phang et al., 2018); $^{3}$: (Joshi et al., 2019); $^{4}$: (Ratner et al., 2017); $^{5}$: (Liu et al., 2019a); $^{6}$: (Yang et al., 2019b); $^{7}$: (Liu et al., 2019b).

# 3.1 GENERAL LANGUAGE UNDERSTANDING

# 3.1.1 GLUE BENCHMARK

The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of nine NLU tasks, covering textual entailment (RTE (Bentivogli et al., 2009) and MNLI (Williams et al., 2017)), question-answer entailment (QNLI (Wang et al., 2018)), paraphrase (MRPC (Dolan & Brockett, 2005)), question paraphrase (QQP), textual similarity (STS-B (Cer et al., 2017)), sentiment (SST-2 (Socher et al., 2013)), linguistic acceptability (CoLA), and Winograd Schema (WNLI (Levesque et al., 2012)).

On the GLUE benchmark, given the similarity of MRPC/RTE/STS-B to MNLI, we fine-tuned StructBERT on MNLI before training on MRPC/RTE/STS-B data for the respective tasks. This follows the two-stage transfer learning STILTs introduced in (Phang et al., 2018). For all the other tasks (i.e., RTE, QNLI, QQP, SST-2, CoLA and MNLI), we fine-tuned StructBERT for each single task only on its in-domain data.

Table 1 presents the results of published models on the GLUE test set obtained from the official benchmark evaluation server. Our StructBERTLarge ensemble surpassed all published models (excluding the RoBERTa ensemble and XLNet ensemble) on the average score, and performed the best among these models in six of the nine tasks.
In the most popular MNLI task, our StructBERTLarge single model improved the best result by $0.3\% / 0.5\%$. Since we fine-tuned MNLI only on its in-domain data, this improvement is entirely attributed to our new training objectives. The most significant improvement over BERT was observed on CoLA $(4.8\%)$, which may be due to the strong correlation between the word order task and the grammatical error correction task. In the SST-2 task, our model improved over BERT but performed worse than MT-DNN, which indicates that sentiment analysis based on single sentences benefits less from the word structural objective and sentence structural objective.
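The two-stage STILTs procedure mentioned in this section (first fine-tune on a large intermediate task such as MNLI, then on the small target task) can be illustrated on synthetic data. The sketch below is a minimal numpy stand-in using logistic regression instead of a Transformer, with toy "MNLI-like" and "RTE-like" datasets; it is not the actual StructBERT fine-tuning code.

```python
import numpy as np

rng = np.random.default_rng(0)

def finetune(w, X, y, lr=0.1, epochs=100):
    """One fine-tuning stage: full-batch gradient descent on log-loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))  # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)                 # gradient step
    return w

# Toy stand-ins: a large "intermediate" task (MNLI-like) and a small
# "target" task (RTE-like) sharing the same underlying decision boundary.
w_true = rng.normal(size=8)
X_mnli = rng.normal(size=(2000, 8)); y_mnli = (X_mnli @ w_true > 0).astype(float)
X_rte  = rng.normal(size=(40, 8));   y_rte  = (X_rte @ w_true > 0).astype(float)
X_test = rng.normal(size=(500, 8));  y_test = (X_test @ w_true > 0).astype(float)

def accuracy(w):
    return float(np.mean((X_test @ w > 0) == y_test))

w0 = np.zeros(8)
# STILTs: stage 1 on the large intermediate task, stage 2 on the target task.
w_stilts = finetune(finetune(w0, X_mnli, y_mnli), X_rte, y_rte)
# Baseline: fine-tune on the small target task only.
w_direct = finetune(w0, X_rte, y_rte)

acc_stilts, acc_direct = accuracy(w_stilts), accuracy(w_direct)
```

The point of the staging is that the intermediate task supplies far more supervised signal than the small target task alone, so stage 2 starts from a much better initialization.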
| Model | GPT | BERT | MT-DNN | SJRC | StructBERTLarge |
| --- | --- | --- | --- | --- | --- |
| Dev | - | 90.1 | 91.4 | - | 92.2 |
| Test | 89.9 | 90.8 | 91.1 | 91.3 | 91.7 |

Table 2: Accuracy $(\%)$ on the SNLI dataset.
| System | Dev EM | Dev F1 | Test EM | Test F1 |
| --- | --- | --- | --- | --- |
| Human | - | - | 82.3 | 91.2 |
| XLNet (single+DA) (Yang et al., 2019b) | 88.9 | 94.5 | 89.9 | 95.0 |
| BERT (ensemble+DA) (Devlin et al., 2018) | 86.2 | 92.2 | 87.4 | 93.2 |
| KT-NET (single) (Yang et al., 2019a) | 85.1 | 91.7 | 85.9 | 92.4 |
| BERT (single+DA) (Devlin et al., 2018) | 84.2 | 91.1 | 85.1 | 91.8 |
| QANet (ensemble+DA) (Yu et al., 2018) | - | - | 84.5 | 90.5 |
| StructBERTLarge (single) | 85.2 | 92.0 | - | - |
| StructBERTLarge (ensemble) | 87.0 | 93.0 | - | - |
Table 3: SQuAD results. The StructBERTLarge ensemble consists of 10 systems which use different pre-training checkpoints and fine-tuning seeds.

With pre-training on a large corpus, the XLNet ensemble and RoBERTa ensemble outperformed all published models including our StructBERTLarge ensemble. To take advantage of the large data on which RoBERTa is trained, we continued pre-training with our two new objectives from the released RoBERTa model, named StructBERTRoBERTa. At the time of model submission, our StructBERTRoBERTa ensemble, which was submitted under a different name ALICE, achieved the best performance among all published models including RoBERTa and XLNet on the leaderboard, creating a new state-of-the-art result of $89.0\%$ on the average GLUE score. This demonstrates that the proposed objectives are able to improve language models in addition to BERT.

# 3.1.2 SNLI

Natural Language Inference (NLI) is one of the important tasks in natural language understanding. The goal of this task is to test the ability of the model to reason about the semantic relationship between two sentences. In order to perform well on an NLI task, a model needs to capture the semantics of sentences, and thus infer the relationship between a pair of sentences: entailment, contradiction or neutral.

We evaluated our model on the most widely used NLI dataset: the Stanford Natural Language Inference (SNLI) Corpus (Bowman et al., 2015), which consists of 549,367/9,842/9,824 premise-hypothesis pairs in train/dev/test sets and target labels indicating their relations. We performed a grid search over the sets of parameters, and chose the model that performed best on the dev set.

Table 2 shows the results on the SNLI dataset of our model alongside other published models.
StructBERT outperformed all existing systems on SNLI, creating a new state-of-the-art result of $91.7\%$, which amounts to a $0.4\%$ absolute improvement over the previous state-of-the-art model SJRC and a $0.9\%$ absolute improvement over BERT. Since the network architecture of our model is identical to that of BERT, this improvement is entirely attributed to the new pre-training objectives, which justifies the effectiveness of the proposed tasks of word prediction and sentence prediction.

# 3.2 EXTRACTIVE QUESTION ANSWERING

SQuAD v1.1 is a popular machine reading comprehension dataset consisting of 100,000+ questions created by crowd workers on 536 Wikipedia articles (Rajpurkar et al., 2016). The goal of the task is to extract the right answer span from the corresponding paragraph given a question.

We fine-tuned our StructBERT language model on the SQuAD dataset for 3 epochs, and compared the result against the state-of-the-art methods on the official leaderboard${}^{1}$, as shown in Table 3. We can see that even without any additional data augmentation (DA) techniques, the proposed StructBERT model was superior to all published models except XLNet+DA on the dev set${}^{2}$. With data augmentation and a large corpus used during pre-training, XLNet+DA outperformed our StructBERT, which did not
| Task | CoLA (Acc) | SST-2 (Acc) | MNLI (Acc) | SNLI (Acc) | QQP (Acc) | SQuAD (F1) |
| --- | --- | --- | --- | --- | --- | --- |
| StructBERTBase | 85.8 | 92.9 | 85.4 | 91.5 | 91.1 | 90.6 |
| - word structure | 81.7 | 92.7 | 85.2 | 91.6 | 90.7 | 90.3 |
| - sentence structure | 84.9 | 92.9 | 84.1 | 91.1 | 90.5 | 89.1 |
| BERTBase | 80.9 | 92.7 | 84.1 | 91.3 | 90.4 | 88.5 |
Table 4: Ablation over the pre-training objectives using the StructBERTBase architecture. Every result is the average score of 8 runs with different random seeds (the MNLI accuracy is the average score of the matched and mismatched settings).

use data augmentation or a large pre-training corpus. This demonstrates the effectiveness of the proposed pre-trained StructBERT in modeling the question-paragraph relationship for extractive question answering. Incorporating the word and sentence structures significantly improves the understanding ability in this fine-grained answer extraction task.

# 3.3 EFFECT OF DIFFERENT STRUCTURAL OBJECTIVES

We have demonstrated the strong empirical results of the proposed model on a variety of downstream tasks. In the StructBERT pre-training, the two new structural prediction tasks are the most important components. Therefore, we conducted an ablation study by removing one structural objective from pre-training at a time to examine how the two structural objectives influence the performance on various downstream tasks.

Results are presented in Table 4. From the table, we can see that: (1) the two structural objectives were both critical to most of the downstream tasks, except for the word structural objective in the SNLI task. Removing either the word or the sentence objective from pre-training always led to degraded performance in the downstream tasks. The StructBERT model with structural pre-training consistently outperformed the original BERT model, which shows the effectiveness of the proposed structural objectives. (2) For the sentence-pair tasks such as MNLI, SNLI, QQP and SQuAD, incorporating the sentence structural objective significantly improved the performance. This demonstrates the effect of inter-sentence structures learned in pre-training on understanding the relationship between sentences in downstream tasks.
(3) For the single-sentence tasks such as CoLA and SST-2, the word structural objective played the most important role. Especially in the CoLA task, which is related to grammatical error correction, the improvement was over $5\%$. The ability to reconstruct the order of words in pre-training helped the model better judge the acceptability of a single sentence.

We also studied the effect of both structural objectives during self-supervised pre-training. Figure 2 illustrates the loss and accuracy of word and sentence prediction over the number of pre-training steps for StructBERTBase and BERTBase.

![](images/bb721398bae887b623bf5d6b3eee3b1305d4033656a68f72eabbaab92412f622.jpg)

![](images/d71add1b96b1d37140a6d40a98b07010b93f9945f186ec3ced9c7c78f0f84168.jpg)

![](images/932840e9b82ded024d72b94a5298a069fd24d97f5d255718ae2152cdabe721e3.jpg)

![](images/ddf2163e1a731999fbbf01e03b30a23864ed47e85daf81d5aea8e8cdb6b486eb.jpg)
Figure 2: Loss and accuracy of word and sentence prediction over the number of pre-training steps.

From the two sub-figures on top, it is observed that, compared with BERT, the augmented shuffled token prediction in StructBERT's word structural objective had little effect on the loss and accuracy of masked token prediction. On the other hand, the integration of the simpler task of shuffled token prediction (lower loss and higher accuracy) provides StructBERT with the capability of word reordering. In contrast, the new sentence structural objective in StructBERT leads to a more challenging prediction task than that in BERT, as shown in the two figures at the bottom. This new pre-training objective enables StructBERT to exploit inter-sentence structures, which benefits sentence-pair downstream tasks.

# 4 RELATED WORK

# 4.1 CONTEXTUALIZED LANGUAGE REPRESENTATION

A word can have different semantics depending on its context.
Contextualized word representation is considered an important part of modern NLP research, with various pre-trained language models (McCann et al., 2017; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018) emerging recently. ELMo (Peters et al., 2018) learns two unidirectional LMs based on long short-term memory networks (LSTMs): a forward LM reads the text from left to right, and a backward LM encodes the text from right to left. Following a similar idea to ELMo, OpenAI GPT (Radford et al., 2018) expands the unsupervised language model to a much larger scale by training on a giant collection of free text corpora. Different from ELMo, it builds upon a multi-layer Transformer (Vaswani et al., 2017) decoder, and uses a left-to-right Transformer to predict a text sequence word by word.

In contrast, BERT (Devlin et al., 2018) (as well as its robustly optimized version RoBERTa (Liu et al., 2019b)) employs a bidirectional Transformer encoder to fuse both the left and the right context, and introduces two novel pre-training tasks for better language understanding. We base our LM on the architecture of BERT, and further extend it by introducing word and sentence structures into the pre-training tasks for deep language understanding.

# 4.2 WORD & SENTENCE ORDERING

The task of linearization aims to recover the original order of a shuffled sentence (Schmaltz et al., 2016). As part of a larger discussion as to whether LSTMs capture syntactic phenomena, linearization has been standardized in a recent line of research as a method useful for isolating the performance of text-to-text generation models (Zhang & Clark, 2015). Recently, Transformers have emerged as a powerful architecture for learning the latent structure of language. For example, Bidirectional Transformers (BERT) have reduced the perplexity of the language modeling task.
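The word-ordering (linearization) setup discussed here amounts to shuffling a short span of tokens and asking the model to reconstruct the original order. A minimal sketch of constructing such a training example (the span length k = 3 and the toy sentence are illustrative choices, not taken verbatim from the paper):

```python
import random

def make_word_order_example(tokens, k=3, seed=None):
    """Build one word-ordering training example: permute a random span of k
    tokens; the targets map the shuffled positions back to the originals."""
    rnd = random.Random(seed)
    assert len(tokens) >= k
    start = rnd.randrange(len(tokens) - k + 1)
    span = tokens[start:start + k]
    shuffled = span[:]
    if len(set(span)) > 1:            # only force a change if one is possible
        while shuffled == span:
            rnd.shuffle(shuffled)
    inputs = tokens[:start] + shuffled + tokens[start + k:]
    targets = {start + i: span[i] for i in range(k)}  # position -> original token
    return inputs, targets

tokens = "the quick brown fox jumps over".split()
inputs, targets = make_word_order_example(tokens, seed=0)
```

The model sees `inputs` and is trained to predict the original token at each shuffled position, which is exactly the reconstruction signal the word structural objective relies on.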
We revisit Elman's question by applying BERT to the word-ordering task, without any explicit syntactic approaches, and find that pre-trained language models are effective for various downstream tasks with linearization.

Many important downstream tasks such as STS and NLI (Wang et al., 2018) are based on understanding the relationship between two text sentences, which is not directly captured by language modeling. While BERT (Devlin et al., 2018) pre-trains a binarized next sentence prediction task to understand sentence relationships, we take one step further and treat it as a sentence ordering task. The goal of sentence ordering is to arrange a set of sentences into a coherent text in a clear and consistent manner, which can be viewed as a ranking problem (Chen et al., 2016). The task is general yet challenging, and is especially important for natural language generation (Reiter & Dale, 1997). Text should be organized according to the following properties: rhetorical coherence, topical relevancy, chronological sequence, and cause-effect. In this work, we focus on what is arguably the most basic characteristic of a sequence: its order. Most prior work on sentence ordering was part of the study of downstream tasks, such as multi-document summarization (Bollegala et al., 2010). We revisit this problem in the context of language modeling as a new sentence prediction task.

# 5 CONCLUSION

In this paper, we propose novel structural pre-training which incorporates word and sentence structures into BERT pre-training. A word structural objective and a sentence structural objective are introduced as two new pre-training tasks for deep understanding of natural language at different granularities. Experimental results demonstrate that the new StructBERT model can obtain new state-of-the-art results in a variety of downstream tasks, including the popular GLUE benchmark, the SNLI Corpus and SQuAD v1.1 question answering.
# REFERENCES

Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth PASCAL recognizing textual entailment challenge. In TAC, 2009.
Danushka Bollegala, Naoaki Okazaki, and Mitsuru Ishizuka. A bottom-up approach to sentence ordering for multi-document summarization. Information Processing & Management, 46(1):89-109, 2010.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055, 2017.
Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. Neural sentence ordering. arXiv preprint arXiv:1607.06952, 2016.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.
Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
Eva Hasler, Felix Stahlberg, Marcus Tomalin, Adri de Gispert, and Bill Byrne. A comparison of neural models for word ordering. arXiv preprint arXiv:1708.01809, 2017.
Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529, 2019.
Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge.
In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019a.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019b.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pp. 6294-6305, 2017.
Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, 2010.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
Jason Phang, Thibault Févry, and Samuel R Bowman. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088, 2018.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment, 11(3):269-282, 2017.
Ehud Reiter and Robert Dale.
Building applied natural language generation systems. Natural Language Engineering, 3(1):57-87, 1997.
Allen Schmaltz, Alexander M Rush, and Stuart M Shieber. Word ordering without syntax. arXiv preprint arXiv:1604.08633, 2016.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, 2013.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.
Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2346-2357, 2019a.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019b.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. QANet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541, 2018.
Yue Zhang and Stephen Clark. Discriminative syntax-based word ordering for text generation. Computational Linguistics, 41(3):503-538, 2015.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19-27, 2015.
# STRUCTPOOL: STRUCTURED GRAPH POOLING VIA CONDITIONAL RANDOM FIELDS

Hao Yuan

Department of Computer Science & Engineering
Texas A&M University

College Station, TX 77843, USA

hao.yuan@tamu.edu

Shuiwang Ji

Department of Computer Science & Engineering
Texas A&M University

College Station, TX 77843, USA

sji@tamu.edu

# ABSTRACT

Learning high-level representations for graphs is of great importance for graph analysis tasks. In addition to graph convolution, graph pooling is an important but less explored research area. In particular, most existing graph pooling techniques do not consider the graph structural information explicitly. We argue that such information is important and develop a novel graph pooling technique, known as STRUCTPOOL, in this work. We consider graph pooling as a node clustering problem, which requires the learning of a cluster assignment matrix. We propose to formulate it as a structured prediction problem and employ conditional random fields to capture the relationships among the assignments of different nodes. We also generalize our method to incorporate graph topological information in designing the Gibbs energy function. Experimental results on multiple datasets demonstrate the effectiveness of our proposed STRUCTPOOL.

# 1 INTRODUCTION

Graph neural networks have achieved the state-of-the-art results for multiple graph tasks, such as node classification (Veličković et al., 2018; Gao & Ji, 2019b; Gao et al., 2018) and link prediction (Zhang & Chen, 2018; Cai & Ji, 2020). These results demonstrate the effectiveness of graph neural networks in learning node representations. However, graph classification tasks also require learning good graph-level representations.
Since pooling operations are shown to be effective in many image and NLP tasks, it is natural to investigate pooling techniques for graph data (Yu & Koltun, 2016; Springenberg et al., 2014). Recent work extends the global sum/average pooling operations to graph models by simply summing or averaging all node features (Atwood & Towsley, 2016; Simonovsky & Komodakis, 2017). However, these trivial global pooling operations may lose important features and ignore structural information. Furthermore, global pooling is not hierarchical, so we cannot apply it where multiple pooling operations are required, such as in Graph U-Net (Gao & Ji, 2019a). Several advanced graph pooling methods, such as SORTPOOL (Zhang et al., 2018), TOPKPOOL (Gao & Ji, 2019a), DIFFPOOL (Ying et al., 2018), and SAGPOOL (Lee et al., 2019), have recently been proposed and achieve promising performance on graph classification tasks. However, none of them explicitly models the relationships among different nodes, and thus they may ignore important structural information. We argue that such information is important and should be explicitly captured in graph pooling.

In this work, we propose a novel graph pooling technique, known as STRUCTPOOL, that formulates graph pooling as a structured prediction problem. Following DIFFPOOL (Ying et al., 2018), we consider graph pooling as a node clustering problem, where each cluster corresponds to a node in the new graph after pooling. Intuitively, two nodes with similar features should have a higher probability of being assigned to the same cluster. Hence, the assignment of a given node should depend on both the input node features and the assignments of other nodes. We formulate this as a structured prediction problem and employ conditional random fields (CRFs) (Lafferty et al., 2001) to capture such high-order structural relationships among the assignments of different nodes.
In addition, we generalize our method by incorporating the graph topological information so that our method can control the clique set in our CRFs. We employ the mean field approximation to compute the assignments and describe how to incorporate it into graph networks. The networks can then be trained in an end-to-end fashion. Experiments show that our proposed STRUCTPOOL outperforms existing methods significantly and consistently. We also show that STRUCTPOOL incurs acceptable computational cost given its superior performance.

# 2 BACKGROUND AND RELATED WORK

# 2.1 GRAPH CONVOLUTIONAL NETWORKS

A graph can be represented by its adjacency matrix and node features. Formally, for a graph $G$ consisting of $n$ nodes, its topology can be represented by an adjacency matrix $A \in \{0,1\}^{n \times n}$, and the node features can be represented as $X \in \mathbb{R}^{n \times c}$, assuming each node has a $c$-dimensional feature vector. Deep graph neural networks (GNNs) learn feature representations for different nodes using these matrices (Gilmer et al., 2017). Several approaches have been proposed to investigate deep GNNs, and they generally follow a neighborhood information aggregation scheme (Gilmer et al., 2017; Xu et al., 2019; Hamilton et al., 2017; Kipf & Welling, 2017; Veličković et al., 2018). In each step, the representation of a node is updated by aggregating the representations of its neighbors. Graph Convolutional Networks (GCNs) are popular variants of GNNs, inspired by first-order graph Laplacian methods (Kipf & Welling, 2017).
The graph convolution operation is formally defined as:

$$
X_{i+1} = f\left(D^{-\frac{1}{2}} \hat{A} D^{-\frac{1}{2}} X_{i} P_{i}\right), \tag{1}
$$

where $\hat{A} = A + I$ adds self-loops to the adjacency matrix, $D$ denotes the diagonal node degree matrix used to normalize $\hat{A}$, $X_{i} \in \mathbb{R}^{n \times c_{i}}$ are the node features after the $i^{th}$ graph convolution layer, $P_{i} \in \mathbb{R}^{c_{i} \times c_{i+1}}$ is a trainable matrix performing feature transformation, and $f(\cdot)$ denotes a non-linear activation function. Then $X_{i} \in \mathbb{R}^{n \times c_{i}}$ is transformed to $X_{i+1} \in \mathbb{R}^{n \times c_{i+1}}$, where the number of nodes remains the same. A similar form of GCNs proposed in (Zhang et al., 2018) can be expressed as:

$$
X_{i+1} = f\left(D^{-1} \hat{A} X_{i} P_{i}\right). \tag{2}
$$

It differs from the GCNs in Equation (1) by performing a different normalization, and is theoretically a closer approximation to the Weisfeiler-Lehman algorithm (Weisfeiler & Lehman, 1968). Hence, in our models, we use the latter version of GCNs in Equation (2).

# 2.2 GRAPH POOLING

Several advanced pooling techniques have been proposed recently for graph models, such as SORTPOOL, TOPKPOOL, DIFFPOOL, and SAGPOOL, and achieve great performance on multiple benchmark datasets. SORTPOOL (Zhang et al., 2018), TOPKPOOL (Gao & Ji, 2019a), and SAGPOOL (Lee et al., 2019) all learn to select important nodes from the original graph and use these nodes to build a new graph. They share the similar idea of learning a sorting vector from node representations using GCNs, which indicates the importance of different nodes. Then only the top $k$ important nodes are selected to form a new graph while the other nodes are ignored. However, the ignored nodes may contain important features, and this information is lost during pooling.
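The propagation rule of Equation (2) can be sketched in a few lines of numpy. The toy path graph, the ReLU activation, and the all-ones projection matrix below are illustrative choices, not values from the paper:

```python
import numpy as np

def gcn_layer(A, X, P, f=lambda z: np.maximum(z, 0.0)):
    """One graph convolution per Equation (2): X' = f(D^{-1} Â X P),
    where Â = A + I adds self-loops and D is Â's diagonal degree matrix."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # row normalization
    return f(D_inv @ A_hat @ X @ P)

# A 4-node path graph with 2-d node features, projected to 3 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
P = np.ones((2, 3))
H = gcn_layer(A, X, P)
```

Each output row is the mean of a node's own features and its neighbors' features (the self-loop includes the node itself), linearly projected by `P` and passed through the non-linearity, which is exactly the neighborhood aggregation described above.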
DIFFPOOL (Ying et al., 2018) treats graph pooling as a node clustering problem: a cluster of nodes from the original graph is merged to form a new node in the new graph. DIFFPOOL proposes to perform GCNs on node features to obtain the node clustering assignment matrix. Intuitively, the cluster assignment of a given node should depend on the cluster assignments of other nodes. However, DIFFPOOL does not explicitly consider such high-order structural relationships, which we believe are important for graph pooling. In this work, we propose a novel structured graph pooling technique, known as STRUCTPOOL, for effectively learning high-level graph representations. Different from existing methods, our method explicitly captures high-order structural relationships between different nodes via conditional random fields. In addition, our method is generalized by incorporating graph topological information $A$ to control which node pairs are included in our CRFs.

# 2.3 INTEGRATING CRFs WITH GNNS

Recent work (Gao et al., 2019; Qu et al., 2019; Ma et al., 2019) investigates how to combine CRFs with GNNs. CGNF (Ma et al., 2019) is a GNN architecture for graph node classification which explicitly models a joint probability of the entire set of node labels via CRFs and performs inference via dynamic programming. In addition, GMNN (Qu et al., 2019) focuses on semi-supervised object classification tasks and models the joint distribution of object labels conditioned on object attributes using CRFs. It proposes a pseudolikelihood variational EM framework for model learning and inference. Recent work (Gao et al., 2019) integrates CRFs with GNNs by proposing a CRF layer that encourages similar nodes to have similar hidden features so that similarity information can be preserved explicitly. All these methods are proposed for node classification tasks, and the CRFs are incorporated in different ways.
Different from existing work, our STRUCTPOOL is proposed for the graph pooling operation, and the energy is optimized via mean field approximation. All operations in STRUCTPOOL can be realized by GNN operations, so STRUCTPOOL can be easily used in any GNN and trained in an end-to-end fashion.

# 3 STRUCTURED GRAPH POOLING

# 3.1 GRAPH POOLING VIA NODE CLUSTERING

Even though pooling techniques are shown to facilitate the training of deep models and improve their performance significantly in many image and NLP tasks (Yu & Koltun, 2016; Springenberg et al., 2014), local pooling operations cannot be directly applied to graph tasks. The reason is that there is no spatial locality information among graph nodes. Global max/average pooling operations can be employed for graph tasks, but they may lead to information loss due to trivially and drastically reducing the size of the representations. A graph $G$ with $n$ nodes can be represented by a feature matrix $X \in \mathbb{R}^{n \times c}$ and an adjacency matrix $A \in \{0,1\}^{n \times n}$. Graph pooling operations aim at reducing the number of graph nodes and learning new representations. Suppose that graph pooling generates a new graph $\tilde{G}$ with $k$ nodes. The representation matrices of $\tilde{G}$ are denoted as $\tilde{X} \in \mathbb{R}^{k \times \tilde{c}}$ and $\tilde{A} \in \{0,1\}^{k \times k}$. The goal of graph pooling is to learn the relationships between $X$, $A$ and $\tilde{X}$, $\tilde{A}$. In this work, we consider graph pooling via node clustering. In particular, the nodes of the original graph $G$ are assigned to $k$ different clusters, and each cluster is transformed to a new node in the new graph $\tilde{G}$. The clustering assignments can be represented as an assignment matrix $M \in \mathbb{R}^{n \times k}$. For hard assignments, $m_{i,j} \in \{0,1\}$ denotes whether node $i$ in graph $G$ belongs to cluster $j$.
For soft assignments, $m_{i,j} \in [0,1]$ denotes the probability that node $i$ in graph $G$ belongs to cluster $j$, with $\sum_{j} m_{i,j} = 1$. The new graph $\tilde{G}$ is then computed as

$$
\tilde{X} = M^{T} X, \quad \tilde{A} = g\left(M^{T} A M\right), \tag{3}
$$

where $g(\cdot)$ binarizes its input element-wise: $g(\tilde{a}_{i,j}) = 1$ if $\tilde{a}_{i,j} > 0$ and $g(\tilde{a}_{i,j}) = 0$ otherwise.

# 3.2 LEARNING CLUSTERING ASSIGNMENTS VIA CONDITIONAL RANDOM FIELDS

Intuitively, node features describe the properties of different nodes, so nodes with similar features should have a higher chance of being assigned to the same cluster. That is, for any node in the original graph $G$, its cluster assignment should depend not only on the node feature matrix $X$ but also on the cluster assignments of the other nodes. We believe such high-order structural information is useful for graph pooling and should be explicitly captured while learning clustering assignments. To this end, we propose a novel structured graph pooling technique, known as STRUCTPOOL, which generates the assignment matrix by considering both the feature matrix $X$ and the relationships between the assignments of different nodes. We formulate this as a conditional random field (CRF) problem. A CRF models a set of random variables with a Markov random field (MRF), conditioned on a global observation (Lafferty et al., 2001). We formally define $Y = \{Y_{1},\dots ,Y_{n}\}$ as a random field where each $Y_{i}\in \{1,\dots ,k\}$ is a random variable indicating to which cluster node $i$ is assigned. Here the feature representation $X$ is treated as the global observation. We build a graphical model on $Y$, denoted $G^{\prime}$.
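As a concrete illustration, the pooling step of Equation (3) can be sketched in a few lines of NumPy (a minimal sketch; the function and variable names are ours, not from the released code):

```python
import numpy as np

def pool_graph(X, A, M):
    """Equation (3): X_new = M^T X, A_new = g(M^T A M),
    where g binarizes positive entries element-wise."""
    X_new = M.T @ X                        # (k, c) pooled node features
    A_new = (M.T @ A @ M > 0).astype(int)  # (k, k) pooled adjacency
    return X_new, A_new

# Toy path graph 0-1-2-3 with hard assignments {0,1} -> cluster 0, {2,3} -> cluster 1.
X = np.arange(8, dtype=float).reshape(4, 2)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
M = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)
X_new, A_new = pool_graph(X, A, M)
```

Note that $M^{T} A M$ counts the edges between (and within) clusters before $g(\cdot)$ binarizes it, so two clusters are connected in $\tilde{G}$ whenever any edge crosses between them.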
Then the pair $(Y,X)$ can be defined as a CRF, characterized by the Gibbs distribution

$$
P(Y|X) = \frac{1}{Z(X)} \exp\left(-\sum_{c \in C_{G^{\prime}}} \psi_{c}\left(Y_{c}|X\right)\right), \tag{4}
$$

where $c$ denotes a clique, $C_{G'}$ is the set of cliques in $G'$, $Z(X)$ is the partition function, and $\psi_c(\cdot)$ is the potential function induced by $c$ (Krähenbühl & Koltun, 2011; Lafferty et al., 2001). Then the Gibbs

![](images/58ac12c189ce86eb4045d7091fb0aee903ea47b4f6119156fdef55fbd6fe6381.jpg)
Figure 1: Illustration of our proposed STRUCTPOOL. Given a graph with 6 nodes, the color of each node represents its features. We perform graph pooling to obtain a new graph with $k = 4$ nodes. The unary energy matrix is obtained by multiple GCN layers using $X$ and $A$. The pairwise energy is measured by an attention matrix using the node features $X$ and topology information $A$. Iterative updating via mean field approximation then yields the most probable assignment matrix. Finally, we obtain the new graph with 4 nodes, represented by $\tilde{X}$ and $\tilde{A}$.

energy function for an assignment $y = \{y_{1},\dots ,y_{n}\}$ of all variables can be written as

$$
E(y|X) = \sum_{c \in C_{G^{\prime}}} \psi_{c}\left(y_{c}|X\right). \tag{5}
$$

Finding the optimal assignment is equivalent to maximizing $P(Y|X)$, which can also be interpreted as minimizing the Gibbs energy.

# 3.3 GIBBS ENERGY WITH TOPOLOGY INFORMATION

We now define the clique set $C_{G'}$ in $G'$. Similar to the existing CRF model (Krähenbühl & Koltun, 2011), we include all unary cliques in $C_{G'}$ since we need to measure the energy of assigning each node. For pairwise cliques, we generalize our method to control the pairwise clique set by incorporating the graph topological information $A$.
We consider $\ell$-hop connectivity based on $A$ to define the pairwise cliques, which build pairwise relationships between different nodes. Let $A^{\ell} \in \{0,1\}^{n \times n}$ represent the $\ell$-hop connectivity of graph $G$, where $a_{i,j}^{\ell} = 1$ indicates that nodes $i$ and $j$ are reachable in $G$ within $\ell$ hops. We then include a pairwise clique $(i,j)$ in $C_{G'}$ if $a_{i,j}^{\ell} = 1$. Altogether, the Gibbs energy of a cluster assignment $y$ can be written as

$$
E(y) = \sum_{i} \psi_{u}\left(y_{i}\right) + \sum_{i \neq j} \psi_{p}\left(y_{i}, y_{j}\right) a_{i,j}^{\ell}, \tag{6}
$$

where $\psi_{u}(y_{i})$ is the unary energy of assigning node $i$ to cluster $y_{i}$, and $\psi_{p}(y_{i},y_{j})$ is the pairwise energy of assigning nodes $i, j$ to clusters $y_{i}, y_{j}$ respectively. Note that we drop the conditioning on $X$ in Equation (6) for simplicity. If $\ell$ is large enough, our CRF is equivalent to a dense CRF. If $\ell$ is equal to 1, we have $A^{\ell} = A$, so only the 1-hop information in the adjacency matrix is considered. These two types of energy can be obtained directly from neural networks (Zheng et al., 2015). Given the global observation $X$ and the topology information $A$, we employ multiple graph convolution layers to obtain the unary energy $\Psi_{u}\in \mathbb{R}^{n\times k}$. Existing work on image tasks (Krähenbühl & Koltun, 2011) employs Gaussian kernels to measure the pairwise energy. However, due to their computational inefficiency, we cannot directly apply them to our CRF model.
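The $\ell$-hop mask $A^{\ell}$ used in Equation (6) can be built from $A$ by repeated matrix products; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def l_hop_connectivity(A, l):
    """a^l[i, j] = 1 iff nodes i and j (i != j) are reachable within l hops."""
    reach = (A > 0).astype(int)
    hop = reach.copy()
    for _ in range(l - 1):
        hop = (hop @ A > 0).astype(int)        # pairs joined by one more hop
        reach = ((reach + hop) > 0).astype(int)
    np.fill_diagonal(reach, 0)                 # Equation (6) sums over i != j
    return reach

# Path graph 0-1-2-3: nodes 0 and 3 become reachable only at l >= 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
A2 = l_hop_connectivity(A, 2)
A3 = l_hop_connectivity(A, 3)
```

Setting $\ell$ at least as large as the graph diameter recovers the dense-CRF case discussed above.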
The pairwise energy proposed in (Krähenbühl & Koltun, 2011) can be written as

$$
\psi_{p}\left(y_{i}, y_{j}\right) = \mu\left(y_{i}, y_{j}\right) \sum_{m=1}^{K} w^{(m)} k^{(m)}\left(x_{i}, x_{j}\right), \tag{7}
$$

where $k^{(m)}(\cdot, \cdot)$ is the $m^{th}$ Gaussian kernel, $x_{i}$ is the feature vector of node $i$ in $X$, $w^{(m)}$ denotes learnable weights, and $\mu(y_{i}, y_{j})$ is a compatibility function that models the compatibility between different assignment pairs.

Algorithm 1 STRUCTPOOL
1: Given a graph $G$ with $n$ nodes represented by $X \in \mathbb{R}^{n \times c}$ and $A \in \{0,1\}^{n \times n}$, the goal is to obtain $\tilde{G}$ with $k$ nodes, represented by $\tilde{X} \in \mathbb{R}^{k \times \tilde{c}}$ and $\tilde{A} \in \{0,1\}^{k \times k}$. The $\ell$-hop connectivity matrix $A^{\ell}$ can be easily obtained from $A$.
2: Perform GCNs to obtain the unary energy matrix $\Psi_{u} \in \mathbb{R}^{n \times k}$.
3: Initialize $Q(i,j) = \frac{1}{Z_i} \exp(\Psi_u(i,j))$ for all $i$ and $j$.
4: while not converged do
5: Calculate the attention map $W$ with $w_{i,j} = \frac{x_i^T x_j}{\sum_{m \neq i} x_i^T x_m} a_{i,j}^\ell$ for all $i \neq j$.
6: Message passing: $\tilde{Q}(i,j) = \sum_{m \neq i} w_{i,m} Q(m,j)$.
7: Compatibility transform: $\hat{Q}(i,j) = \sum_{m} \mu(m,j) \tilde{Q}(i,m)$.
8: Local update: $\bar{Q}(i,j) = \Psi_u(i,j) - \hat{Q}(i,j)$.
9: Normalize: $Q(i,j) = \frac{1}{Z_i} \exp(\bar{Q}(i,j))$ for all $i$ and $j$.
10: end while
11: For soft assignments, the assignment matrix is $M = \text{softmax}(Q)$.
12: For hard assignments, the assignment matrix is $M = \text{argmax}(Q)$ for each row.
13: Obtain the new graph $\tilde{G}$ with $\tilde{X} = M^T X$, $\tilde{A} = g(M^T A M)$.
However, it is computationally expensive to evaluate Gaussian kernels exactly, especially for graph data with high-dimensional feature vectors. Hence, in this work, we propose to employ an attention matrix as the measurement of pairwise energy. Intuitively, Gaussian kernels indicate how strongly different feature vectors are connected with each other. Similarly, the attention matrix reflects similarities between different feature vectors, but at significantly lower computational cost. Specifically, each feature vector $x_{i}$ attends to every other feature vector $x_{j}$ such that the pair $(i,j)$ exists in the clique set $C_{G'}$. The pairwise energy is hence obtained by

$$
\psi_{p}\left(y_{i}, y_{j}\right) = \mu\left(y_{i}, y_{j}\right) \frac{x_{i}^{T} x_{j}}{\sum_{k \neq i} x_{i}^{T} x_{k}}, \tag{8}
$$

which can be efficiently computed by matrix multiplication and normalization. Minimizing the Gibbs energy in Equation (6) yields the most probable cluster assignments for a given graph $G$. However, exact minimization is intractable, and hence a mean field approximation, an iterative updating algorithm, was proposed (Krähenbühl & Koltun, 2011). We follow the mean field approximation to obtain the most probable cluster assignments. Altogether, the steps of our proposed STRUCTPOOL are shown in Algorithm 1. All operations in STRUCTPOOL can be implemented as GNN operations, and hence STRUCTPOOL can be employed in any deep graph model and trained in an end-to-end fashion. The unary energy matrix can be obtained by stacking several GCN layers, and the normalization operations (steps 3 and 9 in Algorithm 1) are equivalent to softmax operations. All other steps can be computed by matrix computations.
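The mean-field loop described above (Algorithm 1, steps 3-9, with the attention-based pairwise term of Equation (8)) can be sketched in NumPy. Random values stand in for the unary energies and the compatibility matrix, which in the real model come from GCN layers and a trainable $k \times k$ matrix; this is a sketch of the update rules under those assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_rows(E):
    E = E - E.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(E)
    return P / P.sum(axis=1, keepdims=True)

def struct_pool_assignments(X, A_l, psi_u, mu, n_iter=5):
    """Mean-field loop of Algorithm 1. psi_u: (n, k) unary energies,
    mu: (k, k) compatibility matrix, A_l: (n, n) l-hop mask."""
    # Step 5: attention weights restricted to the clique set (Equation (8)).
    S = X @ X.T
    np.fill_diagonal(S, 0.0)               # the sum in Eq. (8) excludes m = i
    W = (S / S.sum(axis=1, keepdims=True)) * A_l
    Q = softmax_rows(psi_u)                # step 3: initialization
    for _ in range(n_iter):
        Q_msg = W @ Q                      # step 6: message passing
        Q_hat = Q_msg @ mu                 # step 7: compatibility transform
        Q = softmax_rows(psi_u - Q_hat)    # steps 8-9: local update + normalize
    return Q

n, k = 6, 4
X = rng.random((n, 8))                     # stand-in for GCN node features
A_l = 1 - np.eye(n, dtype=int)             # dense CRF: all pairs i != j
psi_u = rng.random((n, k))                 # stand-in for GCN unary energies
mu = rng.random((k, k))                    # stand-in for the trainable matrix
Q = struct_pool_assignments(X, A_l, psi_u, mu)
M_hard = np.eye(k)[Q.argmax(axis=1)]       # step 12: hard assignment matrix
```

Each update is just two matrix products and a row-wise softmax, which is what makes the layer implementable with ordinary GNN operations and differentiable end-to-end.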
It is noteworthy that the compatibility function $\mu(y_i, y_j)$ can be implemented as a trainable matrix $\mathcal{N} \in \mathbb{R}^{k \times k}$ and learned automatically during training. Hence, no prior domain knowledge is required for designing the compatibility function. We illustrate our proposed STRUCTPOOL in Figure 1, where we perform STRUCTPOOL on a graph $G$ with 6 nodes and obtain a new graph $\tilde{G}$ with 4 nodes.

# 3.4 COMPUTATIONAL COMPLEXITY ANALYSIS

We theoretically analyze the computational efficiency of our proposed STRUCTPOOL. Since computational efficiency is especially important for large-scale graph datasets, we assume that $n > k, c, \tilde{c}$. The computational complexity of one GCN layer is $\mathcal{O}(n^3 + n^2 c + n\tilde{c}) \approx \mathcal{O}(n^3)$. Assuming we employ $i$ layers of GCNs to obtain the unary energy, its computational cost is $\mathcal{O}(in^3)$. Assuming there are $m$ iterations in our updating algorithm, its computational complexity is $\mathcal{O}(m(n^2c + n^2k + nk^2)) \approx \mathcal{O}(mn^3)$. The final step of computing $\tilde{A}$ and $\tilde{X}$ takes $\mathcal{O}(nkc + n^2k + nk^2) \approx \mathcal{O}(n^3)$ computational complexity. Altogether, the complexity of STRUCTPOOL is $\mathcal{O}((m + i)n^3)$, which is close to the complexity of stacking $m + i$ layers of GCNs.

Table 1: Classification results for six benchmark datasets. Note that none of these deep methods outperforms the traditional method WL on COLLAB. We believe the reason is that the graphs in COLLAB only have single-layer structures, while deep models are too complex to capture them.

| Method | ENZYMES | D&D | COLLAB | PROTEINS | IMDB-B | IMDB-M |
| --- | --- | --- | --- | --- | --- | --- |
| GRAPHLET | 41.03 | 74.85 | 64.66 | 72.91 | - | - |
| SHORTEST-PATH | 42.32 | 78.86 | 59.10 | 76.43 | - | - |
| WL | 53.43 | 78.34 | 78.61 | 74.68 | - | - |
| PATCHYSAN | - | 76.27 | 72.60 | 75.00 | 71.00 | 45.23 |
| DCNN | - | 58.09 | 52.11 | 61.29 | 49.06 | 33.49 |
| DGK | - | - | 73.09 | 71.68 | 66.96 | 44.55 |
| ECC | 53.50 | 72.54 | 67.79 | 72.65 | - | - |
| GRAPHSAGE | 54.25 | 75.42 | 68.25 | 70.48 | - | - |
| SET2SET | 60.15 | 78.12 | 71.75 | 74.29 | - | - |
| DGCNN | 57.12 | 79.37 | 73.76 | 75.54 | 70.03 | 47.83 |
| DIFFPOOL | 62.53 | 80.64 | 75.48 | 76.25 | - | - |
| STRUCTPOOL | 63.83 | 84.19 | 74.22 | 80.36 | 74.70 | 52.47 |

# 3.5 DEEP GRAPH NETWORKS FOR GRAPH CLASSIFICATION

In this section, we investigate graph classification tasks, which require both good node-level and graph-level representations. Most state-of-the-art deep graph classification models share a similar pipeline: first produce node representations using GNNs, then perform pooling operations to obtain high-level representations, and finally employ fully-connected layers to perform classification. Note that the high-level representation can be either a single vector or a group of $k$ vectors. For a set of graphs with different node numbers, given a pre-defined $k$, our proposed STRUCTPOOL produces $k$ vectors for each graph. Hence, our method can be easily generalized and coupled with any deep graph classification model. Specifically, our model for graph classification is developed based on DGCNN (Zhang et al., 2018). Given any input graph, our model first employs several layers of GCNs (Equation (2)) to aggregate features from neighbors and learn representations for nodes. Next, we perform one STRUCTPOOL layer to obtain $k$ vectors for each graph. Finally, 1D convolutional layers and fully-connected layers are used to classify the graph.

# 4 EXPERIMENTAL STUDIES

# 4.1 DATASETS AND EXPERIMENTAL SETTINGS

We evaluate our proposed STRUCTPOOL on eight benchmark datasets, including five bioinformatics protein datasets: ENZYMES, PTC, MUTAG, PROTEINS (Borgwardt et al., 2005), and D&D (Dobson & Doig, 2003), and three social network datasets: COLLAB (Yanardag & Vishwanathan, 2015b), IMDB-B, and IMDB-M (Yanardag & Vishwanathan, 2015a). Most of them are relatively large-scale and hence suitable for evaluating deep graph models. We report their statistics and properties in Supplementary Table 6. Please see Supplementary Section A for experimental settings.

We compare our method with several state-of-the-art deep GNN methods.
PATCHYSAN (Niepert et al., 2016) learns node representations and a canonical node ordering to perform classification. DCNN (Atwood & Towsley, 2016) learns multi-scale substructure features by diffusion graph convolutions and performs global sum pooling. DGK (Yanardag & Vishwanathan, 2015a) models latent representations of sub-structures in graphs, similar to learning word embeddings. ECC (Simonovsky & Komodakis, 2017) performs graph convolutions conditioned on both node features and edge information and uses global sum pooling before the final classifier. GRAPHSAGE (Hamilton et al., 2017) is an inductive framework that generates node embeddings by sampling and aggregating features from local neighbors, and employs global mean pooling. SET2SET (Vinyals et al., 2015) proposes an aggregation method to replace the global pooling operations in deep graph networks. DGCNN (Zhang et al., 2018) proposes a pooling strategy named SORTPOOL, which sorts all nodes by learned values and selects the first $k$ nodes to form a new graph. DIFFPOOL (Ying et al., 2018) is built on the GRAPHSAGE architecture but with its proposed differentiable pooling. Note that for most of these methods, pooling operations are employed to obtain graph-level representations before the final classifier. In addition, we compare our STRUCTPOOL with three graph kernels: Graphlet (Shervashidze et al., 2009), Shortest-path (Borgwardt & Kriegel, 2005), and the Weisfeiler-Lehman subtree kernel (WL) (Weisfeiler & Lehman, 1968).

# 4.2 CLASSIFICATION RESULTS

We evaluate our proposed method on six benchmark datasets and compare with several state-of-the-art approaches. The results are reported in Table 1, where the best results are shown in bold and the second-best results are underlined. For our STRUCTPOOL, we perform 10-fold cross validation and report the average accuracy for each dataset. The 10-fold splitting is the same as in DGCNN. For all competing methods, the results are taken from existing work (Ying et al., 2018; Zhang et al., 2018). We observe that our STRUCTPOOL obtains the best performance on 5 out of 6 benchmark datasets. On these 5 datasets, the classification results of our method are significantly better than those of all competing methods, including the advanced models DGCNN and DIFFPOOL. Notably, our model outperforms the second-best performance by an average of $3.58\%$ on these 5 datasets. In addition, the graph kernel method WL obtains the best performance on the COLLAB dataset, and none of the deep models can achieve similar performance; our model obtains competitive performance compared with the second-best model. This is because many graphs in COLLAB only have simple structures, and deep models may be too complex to capture them.

Table 2: Comparisons between different pooling techniques under the same framework.

| Method | ENZYMES | D&D | COLLAB | PROTEINS | IMDB-B | IMDB-M |
| --- | --- | --- | --- | --- | --- | --- |
| SUM POOL | 47.33 | 78.72 | 69.45 | 76.26 | 51.69 | 42.76 |
| SORTPOOL | 52.83 | 80.60 | 73.92 | 76.83 | 70.00 | 46.26 |
| TOPK POOL | 53.67 | 81.71 | 73.34 | 77.47 | 72.80 | 49.00 |
| DIFFPOOL | 60.33 | 80.94 | 71.78 | 77.74 | 72.40 | 50.13 |
| SAGPOOL | 64.17 | 81.03 | 73.28 | 78.82 | 73.40 | 51.13 |
| STRUCTPOOL | 63.83 | 84.19 | 74.22 | 80.36 | 74.70 | 52.47 |

# 4.3 COMPARISONS OF DIFFERENT POOLING METHODS

To demonstrate the effectiveness of our proposed pooling technique, we compare different pooling techniques under the same network framework.
Specifically, we compare our STRUCTPOOL with global sum pooling, SORTPOOL, TOPKPOOL, DIFFPOOL, and SAGPOOL. All pooling methods are employed in the network framework introduced in Section 3.5. In addition, the same 10-fold cross validation splits from DGCNN are used for all pooling methods. We report the results in Table 2, with the best results shown in bold. Our method achieves the best performance on five of the six datasets and significantly outperforms all competing pooling techniques. On the ENZYMES dataset, our result remains competitive, as SAGPOOL outperforms our proposed method by only $0.34\%$. These observations demonstrate that the structural information in graphs is useful for graph pooling and that the relationships between different nodes should be explicitly modeled.

# 4.4 STUDY OF COMPUTATIONAL COMPLEXITY

As mentioned in Section 3.4, our proposed STRUCTPOOL yields $\mathcal{O}((m + i)n^3)$ computational complexity. The complexity of DIFFPOOL is $\mathcal{O}(jn^3)$ if we assume it employs $j$ layers of GCNs to obtain the assignment matrix. In our experiments, $i$ is usually set to 2 or 3, which is much smaller than $n$. We conduct experiments to show how the iteration number $m$ affects the prediction accuracy, and the results are reported in Table 3. Note that we employ the dense CRF form for all values of $m$. We observe that the performance generally increases as $m$ increases, especially on the large-scale dataset D&D. We also observe that $m = 5$ is a good trade-off between time complexity and prediction performance. Notably, our method can outperform other approaches even when $m = 1$. Furthermore, we evaluate the running time of our STRUCTPOOL and compare it with DIFFPOOL. For 500 graphs from the large-scale dataset D&D, we set $i = j = 3$ and measure the average time cost of pooling each graph. The time cost for DIFFPOOL is 0.042 seconds, while our STRUCTPOOL takes 0.049, 0.053, and 0.058 seconds for $m = 1$, $m = 3$, and $m = 5$ respectively. Even though our STRUCTPOOL has a relatively higher computational cost, it is still reasonable and acceptable given its superior performance.

Table 3: The prediction accuracy with different iteration numbers $m$.

| Dataset | m = 1 | m = 3 | m = 5 | m = 10 |
| --- | --- | --- | --- | --- |
| ENZYMES | 62.67 | 63.00 | 63.83 | 63.50 |
| D&D | 82.82 | 83.08 | 83.59 | 84.19 |
| PROTEINS | 80.09 | 80.00 | 80.18 | 80.18 |

# 4.5 EFFECTS OF TOPOLOGY INFORMATION

Next, we conduct experiments to show how the topology information $A^\ell$ affects the prediction performance. We evaluate our STRUCTPOOL with different $\ell$ values and report the results in Table 4. Note that when $\ell$ is large enough, our STRUCTPOOL considers all pairwise relationships between all nodes and is equivalent to the dense CRF. For the datasets IMDB-M and PROTEINS, we observe that the prediction accuracies generally increase with $\ell$: as $\ell$ grows, more pairwise relationships are considered by the model, so it is reasonable to obtain better performance. In addition, for the dataset IMDB-B, the results remain similar across different $\ell$, and even $\ell = 1$ yields performance competitive with the dense CRF. It is possible that 1-hop pairwise relationships are enough to learn good embeddings for such graph types.
Overall, the dense CRF consistently produces promising results and is a proper choice in practice.

Table 4: The prediction accuracy using different $A^\ell$ in STRUCTPOOL.

| Dataset | $\ell$ = 1 | $\ell$ = 5 | $\ell$ = 10 | $\ell$ = 15 | DENSE |
| --- | --- | --- | --- | --- | --- |
| IMDB-B | 74.60 | 74.40 | 74.30 | 74.70 | 74.70 |
| IMDB-M | 51.53 | 51.67 | 52.00 | 51.96 | 52.47 |
| PROTEINS | 79.73 | 79.61 | 79.83 | 80.36 | 80.18 |

# 4.6 GRAPH ISOMORPHISM NETWORKS WITH STRUCTPOOL

Graph Isomorphism Networks (GINs) were recently proposed and shown to be more powerful than traditional GNNs (Xu et al., 2019). To demonstrate the effectiveness and generalizability of our STRUCTPOOL, we build models based on GINs and evaluate their performance. Specifically, we employ GINs to learn node representations and perform one layer of the dense form of our STRUCTPOOL, followed by 1D convolutional layers and fully-connected layers as the classifier. The results are reported in Table 5, where we employ the same 10-fold splitting as GINs (Xu et al., 2019), and the GIN results are taken from their released results. These five datasets include both bioinformatics data and social media data, at both small and large scales. Incorporating our proposed STRUCTPOOL into GINs consistently and significantly improves the prediction performance, leading to an average improvement of $4.52\%$ in prediction accuracy, which is promising.

Table 5: Comparisons with Graph Isomorphism Networks.

| Dataset | PTC | IMDB-B | MUTAG | COLLAB | IMDB-M |
| --- | --- | --- | --- | --- | --- |
| GINS | 64.60 | 75.10 | 89.40 | 80.20 | 52.30 |
| OURS | 73.46 | 78.50 | 93.59 | 84.06 | 54.60 |

# 5 CONCLUSIONS

Graph pooling is an appealing way to learn good graph-level representations, and several advanced pooling techniques have been proposed. However, none of the existing graph pooling techniques explicitly considers the relationships between different nodes. We propose a novel graph pooling technique, known as STRUCTPOOL, built on conditional random fields. We consider graph pooling as a node clustering problem and employ a CRF to relate the assignments of different nodes. In addition, we generalize our method by incorporating graph topological information, so that our method can control the pairwise clique set in our CRFs. Finally, we evaluate our proposed STRUCTPOOL on several benchmark datasets, and our method achieves new state-of-the-art results on five out of six datasets.

# ACKNOWLEDGEMENT

This work was supported in part by National Science Foundation grants DBI-1661289 and IIS-1908198.

# REFERENCES

James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993-2001, 2016.
Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In Fifth IEEE International Conference on Data Mining (ICDM'05), pp. 8-pp. IEEE, 2005.
Karsten M Borgwardt, Cheng Soon Ong, Stefan Schönauer, SVN Vishwanathan, Alex J Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(suppl_1):i47-i56, 2005.
Lei Cai and Shuiwang Ji. A multi-scale approach for graph link prediction. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.
Paul D Dobson and Andrew J Doig. Distinguishing enzyme structures from non-enzymes without alignments. Journal of Molecular Biology, 330(4):771-783, 2003.
Hongchang Gao, Jian Pei, and Heng Huang. Conditional random field enhanced graph convolutional neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 276-284. ACM, 2019.
Hongyang Gao and Shuiwang Ji. Graph u-nets. In International Conference on Machine Learning, pp. 2083-2092, 2019a.
Hongyang Gao and Shuiwang Ji. Graph representation learning via hard and channel-wise attention networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 741-749, 2019b.
Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416-1424, 2018.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 1263-1272. JMLR.org, 2017.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024-1034, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
In Proceedings of the 3rd International Conference on Learning Representations, 2014.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, 2017.
Philipp Krähenbühl and Vladlen Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Advances in Neural Information Processing Systems, pp. 109-117, 2011.
John Lafferty, Andrew McCallum, and Fernando CN Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning, pp. 282-289, 2001.
Junhyun Lee, Inyeop Lee, and Jaewoo Kang. Self-attention graph pooling. In International Conference on Machine Learning, pp. 3734-3743, 2019.
Tengfei Ma, Cao Xiao, Junyuan Shang, and Jimeng Sun. CGNF: Conditional graph neural fields, 2019. URL https://openreview.net/forum?id=ryxMX2R9YQ.
Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning, pp. 2014-2023, 2016.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In Proceedings of the International Conference on Learning Representations, 2017.
Meng Qu, Yoshua Bengio, and Jian Tang. GMNN: Graph Markov neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5241-5250, Long Beach, California, USA, 09-15 Jun 2019. PMLR.
Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In Artificial Intelligence and Statistics, pp. 488-495, 2009.
Martin Simonovsky and Nikos Komodakis.
Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3693-3702, 2017.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. In Proceedings of the International Conference on Learning Representations, 2014.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.
Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. In International Conference on Learning Representations, 2015.
Boris Weisfeiler and Andrei A Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12-16, 1968.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryGs6iA5Km.
Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1365-1374. ACM, 2015a.
Pinar Yanardag and SVN Vishwanathan. A structural smoothing framework for robust graph comparison. In Advances in Neural Information Processing Systems, pp. 2134-2142, 2015b.
Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems, pp. 4800-4810, 2018.
Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In Proceedings of the International Conference on Learning Representations, 2016.
Muhan Zhang and Yixin Chen.
Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, pp. 5165-5175, 2018. +Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In AAAI, pp. 4438-4445, 2018. +Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE international conference on computer vision, pp. 1529-1537, 2015. + +# A APPENDIX + +# A.1 DATASETS AND EXPERIMENTAL SETTINGS + +Table 6: Statistics and properties of eight benchmark datasets. + +
Dataset
ENZYMESD&DCOLLABPROTEINS
# of Edges (avg)124.201431.32457.7872.82
# of Nodes (avg)32.63284.3274.4939.06
# of Graphs600117850001113
# of Classes6232
Dataset
IMDB-BIMDB-MPTCMUTAG
# of Edges (avg)96.5365.9414.6919.79
# of Nodes (avg)19.7713.0014.3017.93
# of Graphs10001500344188
# of Classes2322
+ +We report the statistics and properties of eight benchmark datasets in Supplementary Table 6. For our STRUCTPOOL, we implement our models using Pytorch (Paszke et al., 2017) and conduct experiments on one GeForce GTX 1080 Ti GPU. The model is trained using Stochastic gradient descent (SGD) with the ADAM optimizer (Kingma & Ba, 2014). For the models built on DGCNN (Zhang et al., 2018) in Section 4.2, 4.3, 4.4, 4.5, we employ GCNs to obtain the node features and the unary energy matrix. All experiments in these sections perform 10-fold cross validations and we report the averaging results. The 10-fold splitting is exactly the same as DGCNN (Zhang et al., 2018). For the non-linear function, we employ tanh for GCNs and relu for 1D convolution layers. For the models built on GINs in Section 4.6, we employ GINs to learn node features and unary energy. Here the 10-fold splitting is exactly the same as GINs. We employ relu for all layers as the non-linear function. For all models, 1D convolutional layers and fully-connected layers are used after our STRUCTPOOL. Hard clustering assignments are employed in all experiments. + +# A.2 EFFECTS OF PAIRWISE ENERGY + +Table 7: Comparison with the baseline which excludes pairwise energy. + +
DatasetENZYMESD&DCOLLABPROTEINSIMDB-BIMDB-M
BASELINE60.8381.3070.5878.1872.4050.13
OURS63.8384.1974.2280.3674.7052.47
+ +We conduct experiments to show the importance of the pairwise energy. If the pairwise energy is removed, the relations between different node assignments are not explicitly considered. Then the method is similar to the DIFFPOOL. We compare our method with such a baseline that removes the pairwise energy. Experimental results are reported in Table 7. The network framework is the same as introduced in Section 3.5 and the same 10-fold cross validations from DGCNN are used. Obviously, our proposed method consistently and significantly outperforms the baseline which excludes pairwise energy. It indicates the importance and effectiveness of incorporating pairwise energy and considering high-order relationships between different node assignments. + +# A.3 STUDY OF HIERARCHICAL NETWORK STRUCTURE + +To demonstrate how the network depth and multiple pooling layers affects the prediction performance, we conduct experiments to evaluate different hierarchical network structures. We first define a network block contains two GCN layers and one STRUCTPOOL layer. Then we compare three + +Table 8: Comparison with different hierarchical network structures. + +
Dataset1 BLOCK2 BLOCKS3 BLOCKS
PROTEINS79.7377.4274.95
D&D81.8783.5981.63
+ +different network settings: 1 block with the final classifier, 2 blocks with the final classifier, and 3 blocks with the final classifier. The results are reported in Table 8. For the dataset Proteins, we observe that the network with one block can obtain better performance than deeper networks. We believe the main reason is dataset Proteins is a small-scale dataset with an average number of nodes equal to 39.06. A relatively simpler network is powerful enough to learn its data distribution while stacking multiple GCN layers and pooling layers may lead to a serious overfitting problems. For the dataset D&D, the network with 2 blocks performs better than the one with 1 block. Since D&D is relatively large scale, stacking 2 blocks increases the power of network and hence increases the performance. However, going very deep, e.g., stacking 3 blocks, will cause the overfitting problem. + +# A.4 STUDY OF GRAPH POOLING RATE + +Table 9: Comparison with different pooling rates. + +
|  | r = 0.1 | r = 0.3 | r = 0.5 | r = 0.7 | r = 0.9 |
| --- | --- | --- | --- | --- | --- |
| k | 91 | 160 | 241 | 331 | 503 |
| ACC | 80.77 | 81.53 | 81.53 | 81.97 | 80.68 |
We follow DGCNN (Zhang et al., 2018) to select the number of clusters $k$. Specifically, we use a pooling rate $r \in (0,1)$ to control $k$. Then $k$ is set to an integer such that $r \times 100\%$ of the graphs in the current dataset have fewer nodes than this integer. As suggested in DGCNN, generally, $r = 0.9$ is a proper choice for bioinformatics datasets and $r = 0.6$ is good for social network datasets. In addition, we conduct experiments to show the performance with respect to different $r$ values. We set $r = 0.1, 0.3, 0.5, 0.7, 0.9$ to evaluate the performance on the large-scale dataset D&D. The average number of nodes in dataset D&D is 284.32 and the maximum number of nodes is 5748. The results are reported in Table 9, where the first row shows different pooling rates, the second row reports the corresponding $k$ values, and the final row shows the results. For simplicity, we employ the network structure with 1 block and a final classifier (as defined in Section A.3). We can observe that the performance drops when $r$ (and hence $k$) is either very large or very small. The model obtains competitive performance when $r$ is set to a proper range, for example, $r \in [0.3, 0.7]$ for dataset D&D.
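The quantile rule above for choosing $k$ from $r$ can be sketched as follows; `pool_size` is a hypothetical helper for illustration, not code from the paper:

```python
def pool_size(node_counts, r):
    """Pick k so that roughly r*100% of the graphs in the dataset have
    fewer than k nodes (a sketch of the DGCNN-style rule described above)."""
    counts = sorted(node_counts)
    # position of the r-quantile in the sorted node counts
    idx = min(int(r * len(counts)), len(counts) - 1)
    return counts[idx]
```

For example, on a toy dataset whose graphs have 1 to 10 nodes, `pool_size(counts, 0.5)` returns 6, so half of the graphs have fewer than 6 nodes.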
# STRUCTURED OBJECT-AWARE PHYSICS PREDICTION FOR VIDEO MODELING AND PLANNING

Jannik Kossen\*1, Karl Stelzner\*2, Marcel Hussing3, Claas Voelcker3 & Kristian Kersting2

$^{1}$ Department of Physics and Astronomy, Heidelberg University
$^{1}$ kossen@stud.uni-heidelberg.de
$^{2,3}$ Department of Computer Science, TU Darmstadt
$^{2}$ {stelzner,kersting}@cs.tu-darmstadt.de
$^{3}$ {marcel.hussing, c.voelcker}@stud.tu-darmstadt.de

# ABSTRACT

When humans observe a physical system, they can easily locate objects, understand their interactions, and anticipate future behavior.
For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over thousands of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample-efficient model-based control in a task with heavily interacting objects.

# 1 INTRODUCTION

Obtaining structured knowledge about the world from unstructured, noisy sensory input is a key challenge in artificial intelligence. Of particular interest is the problem of identifying objects from visual input and understanding their interactions. One longstanding approach to this is the idea of vision as inverse graphics (Grenander, 1976), which postulates a data-generating graphics process and phrases vision as posterior inference in the induced distribution. Despite its intuitive appeal, vision as inference has remained largely intractable in practice due to the high-dimensional and multimodal nature of the inference problem. Recently, however, probabilistic models based on deep neural networks have made promising advances in this area. By composing conditional distributions parameterized by neural networks, highly expressive yet structured models have been built. At the same time, advances in general approximate inference, particularly variational techniques, have put the inference problem for these models within reach (Zhang et al., 2019).
+ +Based on these advances, a number of probabilistic models for unsupervised scene understanding in single images have recently been proposed. The structured nature of approaches such as AIR (Eslami et al., 2016), MONet (Burgess et al., 2019), or IODINE (Greff et al., 2019) provides two key advantages over unstructured image models such as variational autoencoders (Kingma & Welling, 2014) or generative adversarial networks (Goodfellow et al., 2014). First, it allows for the specification of inductive biases, such as spatial consistency of objects, which constrain the model and act as regularization. Second, it enables the use of semantically meaningful latent variables, such as object positions, which may be used for downstream reasoning tasks. + +Building such a structured model for videos instead of individual images is the natural next challenge. Not only could such a model be used in more complex domains, such as reinforcement learning, but the additional redundancy in the data can even simplify and regularize the object detection problem (Kosiorek et al., 2018). To this end, the notion of temporal consistency may be leveraged + +![](images/cb59835af68051c91d32ebb04fe5aef936306b7872a7326fdebbf4964b5f661e.jpg) +Figure 1: Overview of STOVE's architecture. (Center left) At time $t$ , the input image $x_{t}$ is processed by an LSTM in order to obtain a proposal distribution over object states $q(z_{t} \mid x_{t})$ . (Top) A separate proposal $q(z_{t} \mid z_{t-1})$ is obtained by propagating the previous state $z_{t-1}$ using the dynamics model. (Center) The multiplication of both proposal distributions yields the final variational distribution $q(z_{t} \mid z_{t-1}, x_{t})$ . 
(Right) We sample $z_{t}$ from this distribution to evaluate the generative distribution $p(z_{t} \mid z_{t-1})p(x_{t} \mid z_{t})$ , where $p(z_{t} \mid z_{t-1})$ shares means - but not variances - with $q(z_{t} \mid z_{t-1})$ , and $p(x_{t} \mid z_{t})$ can be obtained by direct evaluation of $x_{t}$ in the sum-product networks. Not shown is the dependence on $x_{t-1}$ in the inference routine which allows for the inference of velocities. (Best viewed in color.) + +as an additional inductive bias, guiding the model to desirable behavior. In situations where interactions between objects are prevalent, understanding and explicitly modeling these interactions in an object-centric state-space is valuable for obtaining good predictive models (Watters et al., 2017). Existing works in this area, such as SQAIR (Kosiorek et al., 2018), DDPAE (Hsieh et al., 2018), R-NEM (Van Steenkiste et al., 2018), and COBRA (Watters et al., 2019) have explored these concepts, but have not demonstrated realistic long term video predictions on par with supervised approaches to modeling physics. + +To push the limits of unsupervised learning of physical interactions, we propose STOVE, a structured, object-aware video model. With STOVE, we combine image and physics modeling into a single state-space model which explicitly reasons about object positions and velocities. It is trained end-to-end on pure video data in a self-supervised fashion and learns to detect objects, to model their interactions, and to predict future states and observations. To facilitate learning via variational inference in this model, we provide a novel inference architecture, which reuses the learned generative physics model in the variational distribution. As we will demonstrate, our model generates convincing rollouts over hundreds of time steps, outperforms other video modeling approaches, and approaches the performance of the supervised baseline which has access to the ground truth object states. 
Moving beyond unsupervised learning, we also demonstrate how STOVE can be employed for model-based reinforcement learning (RL). Model-based approaches to RL have long been viewed as a potential remedy to the often prohibitive sample complexity of model-free RL, but obtaining learned models of sufficient quality has proven difficult in practice (Sutton & Barto, 2011). By conditioning state predictions on actions and adding reward predictions to our dynamics predictor, we extend our model to the RL setting, allowing it to be used for search or planning.

![](images/80e374ba9bdd88b77c65c4ff5cee6af1b6fa7d122851eab37f752f72ccb3df1c.jpg)
Figure 2: (Left) Depiction of the graphical model underlying STOVE. Black arrows denote the generative mechanism and red arrows the inference procedure. The variational distribution $q(z_{t} \mid z_{t-1}, x_{t}, x_{t-1})$ is formed by combining predictions from the dynamics model $p(z_{t} \mid z_{t-1})$ and the object detection network $q(z_{t} \mid x_{t})$. For the RL domain, our approach is extended by action conditioning and reward prediction. (Right) Components of $z_{t}^{o}$ and corresponding variational distributions. Note that the velocities are estimated based on the change in positions between timesteps, inducing a dependency on $x_{t-1}$.

![](images/43a24489b4452ffc7843b5b67f41f5feb99bc3bc00802506ee22126652331eaa.jpg)

$$
q\left(z_{t,\text{pos}}^{o} \mid z_{t-1}\right) \cdot q\left(z_{t,\text{pos}}^{o} \mid x_{t}\right)
$$

$$
q\left(z_{t,\text{velo}}^{o} \mid z_{t-1}\right) \cdot q\left(z_{t,\text{velo}}^{o} \mid x_{t}, x_{t-1}\right)
$$

$$
q(z_{t,\text{size}}^{o} \mid x_{t})
$$

$$
q(z_{t,\text{latent}}^{o} \mid z_{t-1})
$$
Our empirical evidence shows that an actor based on Monte-Carlo tree search (MCTS) (Coulom, 2007) on top of our model is competitive with model-free approaches such as Proximal Policy Optimization (PPO) (Schulman et al., 2017), while only requiring a fraction of the samples.

We proceed by introducing the two main components of STOVE: a structured image model and a dynamics model. We show how to perform joint inference and training, as well as how to extend the model to the RL setting. We then present our experimental evaluation, before touching on further related work and concluding.

# 2 STRUCTURED OBJECT-AWARE VIDEO MODELING

We approach the task of modeling a video with frames $x_{1}, \ldots, x_{T}$ from a probabilistic perspective, assuming a sequence of Markovian latent states $z_{1}, \ldots, z_{T}$, which decompose into the properties of a fixed number $O$ of objects, i.e. $z_{t} = (z_{t}^{1}, \ldots, z_{t}^{O})$. In the spirit of compositionality, we propose to specify and train such a model by explicitly combining a dynamics prediction model $p(z_{t+1} \mid z_{t})$ and a scene model $p(x_{t} \mid z_{t})$. This yields a state-space model, which can be trained on pure video data, using variational inference and an approximate posterior distribution $q(z \mid x)$. Our model differs from previous work that also follows this methodology, most notably SQAIR and DDPAE, in three major ways:

- We propose a more compact architecture for the variational distribution $q(z \mid x)$, which reuses the dynamics model $p(z_{t+1} \mid z_t)$ and avoids the costly double recurrence across time and objects which was present in previous work.
- We parameterize the dynamics model using a graph neural network, taking advantage of the decomposed nature of the latent state $z$.
+- Instead of treating each $z_{t}^{o}$ as an arbitrary latent code, we explicitly reserve the first six slots of this vector for the object's position, size, and velocity, each in $x, y$ direction, and use this information for the dynamics prediction task. We write $z_{t}^{o} = (z_{t,\mathrm{pos}}^{o}, z_{t,\mathrm{size}}^{o}, z_{t,\mathrm{velo}}^{o}, z_{t,\mathrm{latent}}^{o})$ . + +We begin by briefly introducing the individual components before discussing how they are combined to form our state-space model. Fig. 1 visualises the computational flow of STOVE's inference and generative routines, Fig. 2 (left) specifies the underlying graphical model. + +# 2.1 OBJECT-BASED MODELING OF IMAGES USING SUM-PRODUCT ATTEND-INFER-REPEAT + +A variety of object-centric image models have recently been proposed, many of which are derivatives of attend-infer-repeat (AIR) (Eslami et al., 2016). AIR postulates that each image consists of a set of objects, each of which occupies a rectangular region in the image, specified by positional parameters $z_{\text{where}}^o = (z_{\text{pos}}^o, z_{\text{size}}^o)$ . The visual content of each object is described by a latent code $z_{\text{what}}^o$ . By decoding $z_{\text{what}}^o$ with a neural network and rendering the resulting image patches in the prescribed location, a generative model $p(x \mid z)$ is obtained. Inference is accomplished using a recurrent neural network, which outputs distributions over the latent objects $q(z^o \mid x)$ , attending to one object at a time. AIR is also capable of handling varying numbers of objects, using an additional set of latent variables. + +Sum-Product Attend-Infer-Repeat (SuPAIR) (Stelzner et al., 2019) utilizes sum-product networks (SPNs) instead of a decoder network to directly model the distribution over object appearances. 
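The AIR-style generative step just described (decode each object's latent code into a patch, then render it at the location given by $z_{\text{where}}^o$) can be sketched as below. This is a toy stand-in under stated assumptions: real AIR uses a differentiable spatial transformer, not the integer slicing and nearest-neighbour resize assumed here, and `render` is our illustrative name:

```python
import numpy as np

def render(canvas_size, patches, positions, sizes):
    """Paste decoded object patches onto a blank canvas at integer
    (x, y) positions with side length s, resolving overlap by max."""
    canvas = np.zeros((canvas_size, canvas_size))
    for patch, (x, y), s in zip(patches, positions, sizes):
        # naive nearest-neighbour resize of the square patch to s x s pixels
        idx = np.arange(s) * patch.shape[0] // s
        resized = patch[np.ix_(idx, idx)]
        region = canvas[y:y + s, x:x + s]
        canvas[y:y + s, x:x + s] = np.maximum(region, resized)
    return canvas
```

Inference then amounts to inverting this process: recovering the per-object `positions` and `sizes` (and appearance) from the rendered image alone.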
The tractable inference capabilities of the SPNs used in SuPAIR allow for the exact and efficient computation of $p(x \mid z_{\text{where}})$ , effectively integrating out the appearance parameters $z_{\text{what}}$ analytically. This has been shown to drastically accelerate learning, as the reduced inference workload significantly lowers the variance of the variational objective. Since the focus of SuPAIR on interpretable object parameters fits our goal of building a structured video model, we apply it as our image model $p(x_t \mid z_t)$ . Similarly, we use a recurrent inference network as in SuPAIR to model $q(z_{t,\text{where}} \mid x_t)$ . For details on SuPAIR, we refer to Stelzner et al. (2019). + +# 2.2 MODELING PHYSICAL INTERACTIONS USING GRAPH NEURAL NETWORKS + +In order to successfully capture complex dynamics, the state transition distribution $p(z_{t+1} \mid z_t) = p(z_{t+1}^1, \ldots, z_{t+1}^O \mid z_t^1, \ldots, z_t^O)$ needs to be parameterized using a flexible, non-linear estimator. A critical property that should be maintained in the process is permutation invariance, i.e., the output should not depend on the order in which objects appear in the vector $z_t$ . This type of function is well captured by graph neural networks, cf. (Santoro et al., 2017), which posit that the output should depend on the sum of pairwise interactions between objects. Graph neural networks have been extensively used for modeling physical processes in supervised scenarios (Battaglia et al., 2016; 2018; Sanchez-Gonzalez et al., 2018; Zhou et al., 2018). 
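A minimal sketch of such a permutation-invariant update (a sum of attention-weighted pairwise messages plus a self-term) is shown below. The single weight matrices are illustrative stand-ins for the dense networks in the paper, not its actual architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def interaction_step(z, Wg, Wh, Wa, Wf):
    """For each object o, compute a graph-network-style update:
    f(g(z^o) + sum over o' != o of alpha(z^o, z^o') * h(z^o, z^o')),
    where g, h, alpha, f are approximated by single linear/ReLU maps."""
    n_obj, d = z.shape
    out = np.empty_like(z)
    for o in range(n_obj):
        agg = np.zeros(d)
        for o2 in range(n_obj):
            if o2 == o:
                continue
            pair = np.concatenate([z[o], z[o2]])       # joint input to h and alpha
            agg += float(pair @ Wa) * relu(pair @ Wh)  # attention-weighted message
        out[o] = relu(z[o] @ Wg + agg) @ Wf            # output head f
    return out
```

Because the messages are summed over all other objects, permuting the input rows simply permutes the output rows, which is exactly the invariance property motivated above.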
Following this line of work, we build a dynamics model of the basic form

$$
\hat{z}_{t+1,\text{pos}}^{o}, \hat{z}_{t+1,\text{velo}}^{o}, \hat{z}_{t+1,\text{latent}}^{o} = f\left(g\left(z_{t}^{o}\right) + \sum_{o' \neq o} \alpha\left(z_{t}^{o}, z_{t}^{o'}\right) h\left(z_{t}^{o}, z_{t}^{o'}\right)\right) \tag{1}
$$

where $f, g, h, \alpha$ represent functions parameterized by dense neural networks. $\alpha$ is an attention mechanism outputting a scalar which allows the network to focus on specific object pairs. We assume a constant prior over the object sizes, i.e., $\hat{z}_{t+1,\text{size}}^o = z_{t,\text{size}}^o$. The full state transition distribution is then given by the Gaussian $p(z_{t+1}^o \mid z_t^o) = \mathcal{N}(\hat{z}_{t+1}^o, \sigma)$, using a fixed $\sigma$.

# 2.3 JOINT STATE-SPACE MODEL

Next, we assemble a state-space model from the two separate models for image modeling and physics prediction. The interface between the two components is the latent positions and velocities. The scene model infers them from images and the physics model propagates them forward in time. Combining the two yields the state-space model $p(x,z) = p(z_0)p(x_0\mid z_0)\prod_t p(z_t\mid z_{t-1})p(x_t\mid z_t)$. To initialize the state, we model $p(z_0,z_1)$ using simple uniform and Gaussian distributions. Details are given in Appendix C.3.

Our model is trained on given video sequences $x$ by maximizing the evidence lower bound (ELBO) $\mathbb{E}_{q(z|x)}[\log p(x,z) - \log q(z\mid x)]$. This requires formulating a variational distribution $q(z\mid x)$ to approximate the true posterior $p(z\mid x)$. A natural approach is to factorize this distribution over time, i.e. $q(z\mid x) = q(z_0\mid x_0)\prod_t q(z_t\mid z_{t-1},x_t)$, resembling a Bayesian filter.
The distribution $q(z_0\mid x_0)$ is then readily available using the inference network provided by SuPAIR.

![](images/297e236bb2851bcc3381942f344740012fee6899588e72d38a97bd943b395cce.jpg)
real

![](images/bdd0019f710514d92cab17b81f215b7a014deb7f1374e10b9a1973b413275c6f.jpg)
ours

![](images/1c590213f9cb70ef3af5d17d6e6115217531f01df8b86ccf1e2d5e1562c134eb.jpg)
sqair

![](images/31d408b68aade7ee5f6a9fe150d7f81ded76bff8dc305a04c6037e420b657520.jpg)
supervised

![](images/aed1d1738ce6d39f88fffed7c7bbdf9dbe469336a369a3ed212d3b45882799cc.jpg)
real

![](images/ce098010c9a502f7ffb7f71f7355fb28d2fd2008229fcad329ea2381e04cf460.jpg)
ours

![](images/abaf8652dc3abcbb44f778f7bd4176bac6fe51b8202ec7eb60c1730283cdaebf.jpg)
sqair

![](images/2608b96a87f8b297e5cb7669a1866a1f64a9d102732a9796d6f0359368dec0b5.jpg)
supervised

Figure 3: Visualisation of object positions from the real environment and predictions made by our model, SQAIR, and the supervised baseline, for the billiards and gravity environment after the first 8 frames were given. Our model achieves realistic behaviour, outperforms the unsupervised baselines, and approaches the quality of the supervised baseline, despite being fully unsupervised. For full effect, the reader is encouraged to watch animated versions of the sequences in repository github.com/jlko/STOVE. (Best viewed in color.)

The formulation of $q(z_{t} \mid z_{t-1}, x_{t})$, however, is an important design decision. Previous work, including SQAIR and DDPAE, has chosen to unroll this distribution over objects, introducing a costly double recurrence over time and objects, requiring $T \cdot O$ sequential recurrence steps in total. This increases the variance of the gradient estimate, slows down training, and hampers scalability. Inspired by Becker-Ehmck et al. (2019), we avoid this cost by reusing the dynamics model for the variational distribution.
First, we construct the variational distribution $q(z_{t,\mathrm{pos}}^o \mid z_{t - 1}^o)$ by slightly adjusting the dynamics prediction $p(z_{t,\mathrm{pos}}^o \mid z_{t - 1}^o)$, using the same mean values but separately predicted standard deviations. Together with an estimate for the same object by the object detection network $q(z_{t,\mathrm{pos}}^o \mid x_t)$, we construct a joint estimate by multiplying the two Gaussians and renormalizing, yielding another Gaussian:

$$
q\left(z_{t,\text{pos}}^{o} \mid z_{t-1}, x_{t}\right) \propto q\left(z_{t,\text{pos}}^{o} \mid z_{t-1}\right) \cdot q\left(z_{t,\text{pos}}^{o} \mid x_{t}\right). \tag{2}
$$

Intuitively, this results in a distribution which reconciles the two proposals. A double recurrence is avoided since $q(z_{t} \mid x_{t})$ does not depend on previous timesteps and may thus be computed in parallel for all frames. Similarly, $q(z_{t} \mid z_{t-1})$ may be computed in parallel for all objects, leading to only $T + O$ sequential recurrence steps total. An additional benefit of this approach is that the information learned by the dynamics network is reused for inference: if $q(z_{t} \mid x_{t}, z_{t-1})$ were just another neural network, it would have to essentially relearn the environment's dynamics from scratch, resulting in a waste of parameters and training time. A further consequence is that the image likelihood $p(x_{t} \mid z_{t})$ is backpropagated through the dynamics model, which has been shown to be beneficial for efficient training (Karl et al., 2017; Becker-Ehmck et al., 2019). The same procedure is applied to reconcile velocity estimates from the two networks, where for the image model, velocities $z_{t,\mathrm{velo}}^{o}$ are estimated from position differences between two consecutive timesteps. The object scales $z_{t,\mathrm{scale}}^{o}$ are inferred solely from the image model.
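The product-of-Gaussians fusion in Eq. 2 has a closed form: precisions add, and the mean is the precision-weighted average of the two proposals. A small sketch (the function name is ours, not the paper's):

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Normalised product of N(mu1, var1) and N(mu2, var2): again a
    Gaussian, with summed precisions and precision-weighted mean."""
    prec = 1.0 / var1 + 1.0 / var2   # precisions add
    var = 1.0 / prec
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var
```

Note that the fused mean is pulled toward whichever proposal is more confident (smaller variance), which is exactly the reconciliation behavior described above.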
The latent states $z_{t,\mathrm{latent}}^{o}$ increase the modelling capacity of the dynamics network, are initialised to zero-mean Gaussians, and do not interact with the image model. This then gives the inference procedure for the full latent state $z_{t}^{o} = (z_{t,\mathrm{pos}}^{o}, z_{t,\mathrm{size}}^{o}, z_{t,\mathrm{velo}}^{o}, z_{t,\mathrm{latent}}^{o})$ , as illustrated in Fig. 2 (right). + +Despite its benefits, this technique has thus far only been used in environments with a single object or with known state information. A challenge when applying it in a multi-object video setting is to match up the proposals of the two networks. Since the object detection RNN outputs proposals for object locations in an indeterminate order, it is not immediately clear how to find the corresponding proposals from the dynamics network. We have, however, found that a simple matching procedure results in good performance: For each $z_{t}$ , we assign the object order that results in the minimal difference of $||z_{t,\mathrm{pos}} - z_{t-1,\mathrm{pos}}||$ , where $||\cdot||$ is the Euclidean norm. The resulting Euclidean bipartite matching problem can be solved in cubic time using the classic Hungarian algorithm (Kuhn, 1955). + +# 2.4 CONDITIONING ON ACTIONS + +In reinforcement learning, an agent interacts with the environment sequentially through actions $a_{t}$ to optimize a cumulative reward $r$ . To extend STOVE to operate in this setting, we make two changes, yielding a distribution $p(z_{t},r_{t}\mid z_{t - 1},a_{t - 1})$ . + +First, we condition the dynamics model on actions $a_{t}$ , enabling a conditional prediction based on both state and action. To keep the model invariant to the order of the input objects, the action information is concatenated to each object state $z_{t - 1}^{o}$ before they are fed into the dynamics model. The model has to learn on its own which of the objects in the scene are influenced by the actions. 
To facilitate this, we have found it helpful to also concatenate appearance information from the extracted object patches to the object state. While this patch-wise code could, in general, be obtained using some neural feature extractor, we achieved satisfactory performance by simply using the mean values per color channel when given colored input.

![](images/531e3fbe27cf6cbc34e6312828218138abab2fe81a71aaf2ad0ebf9ebac07225.jpg)
Figure 4: Mean test set performance of our model compared to baselines. Our approach (STOVE) clearly outperforms all unsupervised baselines and is almost indistinguishable from the supervised dynamics model on the billiards task. (Top) Mean squared errors over all pixels in the video prediction setting (the lower, the better). (Bottom) Mean Euclidean distances between predicted and true positions (the lower, the better). All position and pixel values are in [0, 1]. In all experiments, the first eight frames are given, all remaining frames are then conditionally generated. The shading indicates the max and min values over multiple training runs with identical hyperparameters. (Best viewed in color.)

The second change to the model is the addition of reward prediction. In many RL environments, rewards depend on the interactions between objects. Therefore, the dynamics prediction architecture, presented in Eq. 1, is well suited to also predict rewards. We choose to share the same encoding of object interactions between reward and dynamics prediction and simply apply two different output networks ($f$ in Eq. 1) to obtain the dynamics and reward predictions. The total model is again optimized using the ELBO, this time including the reward likelihood $p(r_t \mid z_{t-1}, a_{t-1})$.

# 3 EXPERIMENTAL EVIDENCE

In order to evaluate our model, we compare it to baselines in three different settings: First, pure video prediction, where the goal is to predict future frames of a video given previous ones.
Second, the prediction of future object positions, which may be relevant for downstream tasks. Third, we extend one of the video datasets to a reinforcement learning task and investigate how our physics model may be utilized for sample-efficient, model-based reinforcement learning. With this paper, we also release a PyTorch implementation of STOVE.1

# 3.1 VIDEO AND STATE MODELING

Inspired by Watters et al. (2017), we consider grayscale videos of objects moving according to physical laws. In particular, we opt for the commonly used bouncing billiards balls dataset, as well as a dataset of gravitationally interacting balls. For further details on the datasets, see Appendix D. When trained using a single GTX 1080 Ti, STOVE converges after about 20 hours. As baselines, we compare to VRNNs (Chung et al., 2015), SQAIR (Kosiorek et al., 2018), and DDPAE (Hsieh et al., 2018). To allow for a fair comparison, we fix the number of objects predicted by SQAIR and DDPAE to the correct amount.

![](images/b0ee65bd0096d394ccfd928c499ba1c6418e920e319c3677b7012f9403696dff.jpg)
Figure 5: Comparison of the kinetic energies of the rollouts predicted by the models, computed based on position differences between successive states. Only STOVE's predictions reflect the conservation of total kinetic energy in the billiards data set. This is a quantitative measure of the convincing physical behavior in the rollout videos. (Left, center) Averages are over 300 trajectories from the test set. Shaded regions indicate one standard deviation. STOVE correctly predicts trajectories with constant energy, whereas SQAIR and DDPAE quickly diverge. (Right) Rolling average over a single, extremely long-term run. We conjecture that STOVE predicts physical behavior indefinitely. (Best viewed in color.)

![](images/12feb9c91b3daca59ad6fe8e16a1ccf9c4f04ecf4e4220c85b868ea4307c5319.jpg)

![](images/8e27a4a57d8da89426ef2e40c66820fe1522b2ab09432d9cb40a6500661abc2f.jpg)
Furthermore, we compare to a supervised baseline: Here, we consider the ground truth positions and velocities to be fully observed, and train our dynamics model on them, resembling the setting of Battaglia et al. (2016). Since our model needs to infer object states from pixels, this baseline provides an upper bound on the predictive performance we can hope to achieve with our model. In turn, the size of the performance gap between the two is a good indicator of the quality of our state-space model. We also report the results obtained by combining our image model with a simple linear physics model, which linearly extrapolates the objects' trajectories. Since VRNN does not reason about object positions, we only evaluate it on the video prediction task. Similarly, the supervised baseline does not reason about images and is considered for the position prediction task only. For more information on the baselines, see Appendix E. + +Fig. 4 depicts the reconstruction and prediction errors of the various models: Each model is given eight frames of video from the test set as input, which it then reconstructs. Conditioned on this input, the models predict the object positions or resulting video frames for the following 92 timesteps. The predictions are evaluated on ground truth data by computing the mean squared error between pixels and the Euclidean distance between positions based on the best available object matching. We outperform all baselines on both the state and the image prediction task by a large margin. Additionally, we perform strikingly close to the supervised model. + +For the gravitational data, the prediction task appears easier, as all models achieve lower errors than on the billiards task. However, in this regime of easy prediction, precise access to the object states becomes more important, which is likely the reason why the gap between our approach and the supervised baseline is slightly more pronounced. 
Despite this, STOVE produces high-quality rollouts and outperforms the unsupervised baselines.

Table 1 underlines these results with concrete numbers. We also report results for three ablations of STOVE, which are obtained by (a) training a separate dynamics network for inference with the same graph neural network architecture, instead of sharing weights with the generative model as argued for in Section 2.3, (b) no longer explicitly modelling velocities $z_{\mathrm{velo}}$ in the state, and (c) removing the latent state variables $z_{\mathrm{latent}}$. The ablation study shows that each of these components contributes positively to the performance of STOVE. See Appendix F for a comparison of training curves for the ablations.

Fig. 3 illustrates predictions on future object positions made by the models, after each of them was given eight consecutive frames from the datasets. Visually, we find that STOVE predicts physically plausible sequences over long timeframes. This desirable property is not captured by the rollout error: Due to the chaotic nature of our environments, infinitesimally close initial states diverge quickly and a model which perfectly follows the ground truth states cannot exist. After this divergence has

Table 1: Predictive performance of our approach, the baselines, and ablations (lower is better, best unsupervised values are bold). STOVE outperforms all unsupervised baselines and is almost indistinguishable from the supervised model on the billiards task. The values are computed by summing the prediction errors presented in Fig. 4 in the time interval $t \in [9,18]$, i.e., the first ten predicted timesteps. In parentheses, standard deviations across multiple training runs are given.
| | Billiards (pixels) | Billiards (positions) | Gravity (pixels) | Gravity (positions) |
| --- | --- | --- | --- | --- |
| STOVE (ours) | 0.240(14) | 0.418(20) | 0.040(3) | 0.142(7) |
| VRNN | 0.526(14) | - | 0.055(12) | - |
| SQAIR | 0.591 | 0.804 | 0.070 | 0.194 |
| DDPAE | 0.405 | 0.482 | 0.120 | 0.298 |
| Linear | 0.844(5) | 1.348(15) | 0.196(2) | 0.493(4) |
| Supervised | - | 0.232(37) | - | 0.013(2) |
| Abl: Double Dynamics | 0.262 | 0.458 | 0.042 | 0.154 |
| Abl: No Velocity | 0.272 | 0.460 | 0.053 | 0.174 |
| Abl: No Latent | 0.338 | 0.050 | 0.089 | 0.235 |
occurred, the rollout error no longer provides any information on the quality of the learned physical behavior. We therefore turn to investigating the total kinetic energy of the predicted billiards trajectories. Since the collisions in the training set are fully elastic and frictional forces are not present, the initial energy should be conserved. Fig. 5 shows the kinetic energies of trajectories predicted by STOVE and its baselines, computed based on the position differences between consecutive timesteps. While the energies of SQAIR and DDPAE diverge quickly in less than 100 frames, the mean energies of STOVE's rollouts stay constant and are good estimates of the true energy. We have confirmed that STOVE predicts constant energies, and therefore displays realistic-looking behavior, for at least 100000 steps. This is in stark contrast to the baselines, which predict teleporting, stopping, or overlapping objects after less than 100 frames. In the billiards dataset used by us and the literature, the total energy is the same for all sequences in the training set. See Appendix B for a discussion of how STOVE handles diverse energies.

# 3.2 MODEL-BASED CONTROL

To explore the usefulness of STOVE for reinforcement learning, we extend the billiards dataset into a reinforcement learning task. Now, the agent controls one of the balls using nine actions, which correspond to moving in one of the eight (inter)cardinal directions and staying at rest. The goal is to avoid collisions with the other balls, which elastically bounce off each other, the walls, and the controlled ball. A negative reward of $-1$ is given whenever the controlled ball collides with one of the others. To allow the models to recognize the object controlled by the agent, we now provide them with RGB input in which the balls are colored differently. Starting with a random policy, we iteratively gather observations from the environment, i.e., sequences of images, actions, and rewards.
Using these, we train our model as described in Sec. 2.4. To obtain a policy based on our world model, we use Monte-Carlo tree search (MCTS), leveraging our model as a simulator for planning. Using this policy, we gather more observations and apply them to refine the world model. As an upper bound on the performance achievable in this manner, we report the results obtained by MCTS when the real environment is used for planning. As a model-free baseline, we consider PPO (Schulman et al., 2017), which is a state-of-the-art algorithm on comparable domains such as Atari games. To explore the effect of the availability of state information, we also run PPO on a version of the environment in which, instead of images, the ground-truth object positions and velocities are observed directly. + +Learning curves for each of the agents are given in Fig. 6 (left), reported at intervals of 10000 samples taken from the environment, up to a total of 130000. For our model, we collect the first 50000 samples using a random policy to provide an initial training set. After that, the described training loop is used, iterating between collecting 10000 observations using an MCTS-based policy and refining the model using examples sampled from the pool of previously seen observations. After 130000 samples, PPO has not yet seen enough samples to converge, whereas our model quickly learns to meaningfully model the environment and thus produces a better policy at this stage. Even when PPO is trained on ground truth states, MCTS based on STOVE remains comparable. + +![](images/50979965a6b873545311b4eab8704e86eafeade5ca2a7d79b0d52053cc0f508b.jpg) +Figure 6: Comparison of all models on sample efficiency and final performance. (Left) Mean cumulative reward over 100 steps on the environment, averaged over 100 environments, using the specified policy. The shaded regions correspond to one-tenth of a standard deviation. 
In addition to the training curves, two constant baselines are shown, one representing a random policy and one corresponding to the MCTS-based policy when using the real environment as a simulator. (Right) Final performance of all approaches, after training each model to convergence. The shaded region corresponds to one standard deviation. (Best viewed in color.)

![](images/fee451fafecc979ef7651cce89615acbe7cc17bae0295ecd2c95471537832376.jpg)

After training each model to convergence, the final performance of all approaches is reported in Fig. 6 (right). In this case, PPO achieves slightly better results; however, it only converges after training for approximately 4000000 steps, while our approach only uses 130000 samples. After around 1500000 steps, PPO does eventually surpass the performance of STOVE-based MCTS. Additionally, we find that MCTS on STOVE yields almost the same performance as on the real environment, indicating that it can be used to anticipate and avoid collisions accurately.

# 4 RELATED WORK

Multiple lines of work with the goal of video modeling or prediction have emerged recently. Prominently, the supervised modeling of physical interactions from videos has been investigated by Fragkiadaki et al. (2015), who train a model to play billiards with a single ball. Similarly, graph neural networks have been trained in a supervised fashion to predict the dynamics of objects from images (Watters et al., 2017; Sanchez-Gonzalez et al., 2018; Sun et al., 2018; 2019) or ground truth states (Kipf et al., 2018; Wang et al., 2018; Chang et al., 2017). A number of works learn object interactions in games in terms of rules instead of continuous dynamics (Guzdial et al., 2017; Ersen & Sariel, 2014). Janner et al. (2019) show successful planning based on learned interactions, but assume access to image segmentations.
Several unsupervised approaches address the problem by fitting the parameters of a physics engine to data (Jaques et al., 2019; Wu et al., 2016; 2015). This necessitates specifying in advance which physical laws govern the observed interactions. In the fully unsupervised setting, mainly unstructured variational approaches have been explored (Babaeizadeh et al., 2017; Chung et al., 2015; Krishnan et al., 2015). However, without the explicit notion of objects, their performance in scenarios with interacting objects remains limited. Nevertheless, unstructured video models have recently been applied to model-based RL and have been shown to improve sample efficiency when used as a simulator for the real environment (Oh et al., 2015; Kaiser et al., 2020). + +Only a small number of works incorporate objects into unsupervised video models. Xu et al. (2019) and Ehrhardt et al. (2018) take non-probabilistic autoencoding approaches to discovering objects in real-world videos. COBRA (Watters et al., 2019) represents a model-based RL approach based on MONet, but is restricted to environments with non-interacting objects and only uses one-step search to build its policy. Closest to STOVE are a small number of probabilistic models, namely SQAIR (Kosiorek et al., 2018), R-NEM (Van Steenkiste et al., 2018; Greff et al., 2017), and DDPAE (Hsieh et al., 2018). R-NEM learns a mixture model via expectation-maximization unrolled through time and handles interactions between objects in a factorized fashion. However, it lacks an explicitly + +structured latent space, and requires noise in the input data to avoid local minima. Both DDPAE and SQAIR extend the AIR approach to work on videos using standard recurrent architectures. As discussed, this introduces a double recurrence over objects and time, which is detrimental for performance. However, SQAIR is capable of handling a varying number of objects, which is not something we consider in this paper. 
+ +# 5 CONCLUSION + +We introduced STOVE, a structured, object-aware model for unsupervised video modeling and planning. It combines recent advances in unsupervised image modeling and physics prediction into a single compositional state-space model. The resulting joint model explicitly reasons about object positions and velocities, and is capable of generating highly accurate video predictions in domains featuring complicated non-linear interactions between objects. As our experimental evaluation shows, it outperforms previous unsupervised approaches and even approaches the performance and visual quality of a supervised model. + +Additionally, we presented an extension of the video learning framework to the RL setting. Our experiments demonstrate that our model may be utilized for sample-efficient model-based control in a visual domain, making headway towards a long standing goal of the model-based RL community. In particular, STOVE yields good performance with more than one order of magnitude fewer samples compared to the model-free baseline, even when paired with a relatively simple planning algorithm like MCTS. + +At the same time, STOVE also makes several assumptions for the sake of simplicity. Relaxing them provides interesting avenues for future research. First, we assume a fixed number of objects, which may be avoided by performing dynamic object propagation and discovery like in SQAIR. Second, we have inherited the assumption of rectangular object masks from AIR. Applying a more flexible model such as MONet (Burgess et al., 2019) or GENESIS (Engelcke et al., 2020) may alleviate this, but also poses additional challenges, especially regarding the explicit modeling of movement. Finally, the availability of high-quality learned state-space models enables the use of more sophisticated planning algorithms in visual domains (Chua et al., 2018). 
In particular, by combining planning with policy and value networks, model-free and model-based RL may be integrated into a comprehensive system (Buckman et al., 2018). + +Acknowledgments. The authors thank Adam Kosiorek for his assistance with the SQAIR experiments and Emilien Dupont for helpful discussions about conservation laws in dynamics models. KK acknowledges the support of the Rhine-Main universities' network for "Deep Continuous-Discrete Machine Learning" (DeCoDeML). + +# REFERENCES + +Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, and Sergey Levine. Stochastic variational video prediction. In Proceedings of ICLR, 2017. +Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Proceedings of NeurIPS, pp. 4502-4510, 2016. +Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. +Philip Becker-Ehmck, Jan Peters, and Patrick Van Der Smagt. Switching linear dynamics for variational bayes filtering. In Proceedings of ICML, 2019. +Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Proceedings of NeurIPS, pp. 8224-8234, 2018. + +Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. Monet: Unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390, 2019. +Michael Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. In Proceedings of ICLR, 2017. +Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. 
Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Proceedings of NeurIPS, pp. 4754-4765, 2018.
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Proceedings of NeurIPS, pp. 2980-2988, 2015.
Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In Computers and Games, pp. 72-83. Springer Berlin Heidelberg, 2007.
Sebastien Ehrhardt, Aron Monszpart, Niloy Mitra, and Andrea Vedaldi. Unsupervised intuitive physics from visual observations. In Proceedings of ACCV, pp. 700-716, 2018.
Martin Engelcke, Adam R. Kosiorek, Oiwi Parker Jones, and Ingmar Posner. Genesis: Generative scene inference and sampling with object-centric latent representations. In Proceedings of ICLR, 2020.
Mustafa Ersen and Sanem Sariel. Learning behaviors of and interactions among objects through spatio-temporal reasoning. IEEE TCIAIG, 7(1):75-87, 2014.
SM Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Geoffrey E Hinton, et al. Attend, infer, repeat: Fast scene understanding with generative models. In Proceedings of NeurIPS, pp. 3225-3233, 2016.
Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. arXiv preprint arXiv:1511.07404, 2015.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of NeurIPS, pp. 2672-2680, 2014.
Klaus Greff, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Neural expectation maximization. In Proceedings of NeurIPS, pp. 6691-6701, 2017.
Klaus Greff, Raphaël Lopez Kaufmann, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner.
Multi-object representation learning with iterative variational inference. In Proceedings of ICML, 2019.
U. Grenander. Lectures in Pattern Theory: Vol. 2 Pattern Analysis. Springer-Verlag, 1976.
Matthew Guzdial, Boyang Li, and Mark O Riedl. Game engine learning from video. In Proceedings of IJCAI, 2017.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.
Jun-Ting Hsieh, Bingbin Liu, De-An Huang, Li F Fei-Fei, and Juan Carlos Niebles. Learning to decompose and disentangle representations for video prediction. In Proceedings of NeurIPS, pp. 517-526, 2018.
Michael Janner, Sergey Levine, William T. Freeman, Joshua B. Tenenbaum, Chelsea Finn, and Jiajun Wu. Reasoning about physical interactions with object-centric models. In Proceedings of ICLR, 2019.
Miguel Jaques, Michael Burke, and Timothy Hospedales. Physics-as-inverse-graphics: Joint unsupervised learning of objects and physics from video. arXiv preprint arXiv:1905.11169, 2019.

Łukasz Kaiser, Mohammad Babaeizadeh, Piotr Miłoś, Błażej Osiński, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. Model based reinforcement learning for Atari. In Proceedings of ICLR, 2020.
Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data. In Proceedings of ICLR, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of ICLR, 2015.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of ICLR, 2014.
Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In Proceedings of ICML, 2018.
Adam Kosiorek, Hyunjik Kim, Yee Whye Teh, and Ingmar Posner.
Sequential attend, infer, repeat: Generative modelling of moving objects. In Proceedings of NeurIPS, pp. 8606-8616, 2018.
Rahul G. Krishnan, Uri Shalit, and David Sontag. Deep kalman filters. arXiv preprint arXiv:1511.05121, 2015.
Harold W Kuhn. The hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83-97, 1955.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In Proceedings of NeurIPS, pp. 2845-2853, 2015.
Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In Proceedings of ICML, 2018.
Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Proceedings of NeurIPS, pp. 4967-4976, 2017.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Karl Stelzner, Robert Peharz, and Kristian Kersting. Faster attend-infer-repeat with tractable probabilistic models. In Proceedings of ICML, pp. 5966-5975, 2019.
Chen Sun, Abhinav Shrivastava, Carl Vondrick, Kevin Murphy, Rahul Sukthankar, and Cordelia Schmid. Actor-centric relation network. In Proceedings of ECCV, 2018.
Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, and Cordelia Schmid. Relational action forecasting. In Proceedings of CVPR, 2019.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, Cambridge, 2011.
Sjoerd Van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. In Proceedings of ICLR, 2018.
Antonio Vergari, Robert Peharz, Nicola Di Mauro, Alejandro Molina, Kristian Kersting, and Floriana Esposito. Sum-product autoencoding: Encoding and decoding representations using sum-product networks. In Proceedings of AAAI, 2018.
Tingwu Wang, Renjie Liao, Jimmy Ba, and Sanja Fidler. Nervenet: Learning structured policy with graph neural networks. In Proceedings of ICLR, 2018.

Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. In Proceedings of NeurIPS, pp. 4539-4547, 2017.
Nicholas Watters, Loic Matthey, Matko Bosnjak, Christopher P Burgess, and Alexander Lerchner. Cobra: Data-efficient model-based rl through unsupervised object discovery and curiosity-driven exploration. arXiv preprint arXiv:1905.09275, 2019.
Jiajun Wu, Ilker Yildirim, Joseph J Lim, Bill Freeman, and Josh Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Proceedings of NeurIPS, pp. 127-135, 2015.
Jiajun Wu, Joseph J Lim, Hongyi Zhang, Joshua B Tenenbaum, and William T Freeman. Physics 101: Learning physical object properties from unlabeled videos. In Proceedings of BMVC, 2016.
Zhenjia Xu, Zhijian Liu, Chen Sun, Kevin Murphy, William T. Freeman, Joshua B. Tenenbaum, and Jiajun Wu. Modeling parts, structure, and system dynamics via predictive learning. In Proceedings of ICLR, 2019.
Cheng Zhang, Judith Butepage, Hedvig Kjellström, and Stephan Mandt. Advances in variational inference. IEEE TPAMI, 2019.
Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.

# A RECONSTRUCTIONS: SPRITES DATA

SuPAIR does not need a latent description of the objects' appearances.
Nevertheless, object reconstructions can be obtained by using a variant of approximate MPE (most probable explanation) in the sum-product networks, as proposed by Vergari et al. (2018). Following the AIR approach, we reconstruct each object separately and paste it into the canvas using spatial transformers. Unlike AIR, SuPAIR explicitly models the background using a separate background SPN. A reconstruction of the background is also obtained using MPE.

To demonstrate the capabilities of our image model, we also trained our model on a variant of the gravity data in which the round balls were replaced by a random selection of four different sprites of the same size. Fig. 7 shows the reconstructions obtained from SuPAIR when trained on these more complex object shapes.

# B STUDY OF ENERGIES

As discussed in Sec. 3.1, the energies of the ground truth data were constant for all sequences during the training of STOVE. However, initial velocities are drawn from a normal distribution. This is the standard procedure for generating the bouncing balls dataset as used by previous publications. Under these circumstances, STOVE does indeed learn to discover and replicate the total energies of the system, while SQAIR and DDPAE do not. Even if trained on constant energy data, STOVE does to some extent generalise to unseen energies. Observed velocities, and therefore total energies, are highly correlated with the true total kinetic energies of the sequences. However, as prediction starts, STOVE quickly regresses to the energy of the training set, see Fig. 8 (left). If trained on a dataset of diverse total energies, the performance of modelling sequences of different energies increases, see Fig. 8 (right). Rollouts now initially represent the true energy of the observed sequence, although this estimate of the true energy diverges over a time span of around 500 frames to a constant but wrong energy value.
This is an improvement over the model trained on constant energy data, where the regression to the training data energy happens much more quickly, within around 10 frames. Note that this does not drastically decrease the visual quality of the rollouts, as the change of total energy over 500 frames is gradual enough. We leave the reliable prediction of rollouts with physically valid constant energy for sequences of varying energies for future work.

# C MODEL DETAILS

Here, we present additional details on the architecture and hyperparameters of STOVE.

# C.1 INFERENCE ARCHITECTURE

The object detection network for $q(z_{t,\text{where}} \mid x_t)$ is realised by an LSTM (Hochreiter & Schmidhuber, 1997) with 256 hidden units, which outputs the mean and standard deviation of the objects' two-dimensional position and size distributions, i.e. $q(z_{t,\text{pos},\text{size}}^o \mid x_t)$ with $2 \cdot 2 \cdot 2 = 8$ parameters per object. Given such position distributions for two consecutive timesteps $q(z_{t-1,\text{pos}} \mid x_{t-1})$, $q(z_{t,\text{pos}} \mid x_t)$, with parameters $\mu_{z_{t-1,\text{pos}}^o}$, $\sigma_{z_{t-1,\text{pos}}^o}$, $\mu_{z_{t,\text{pos}}^o}$, $\sigma_{z_{t,\text{pos}}^o}$, the following velocity estimate based on the difference in position is constructed:

$$
q(z_{t,\mathrm{velo}}^{o} \mid x_{t}, x_{t-1}) = \mathcal{N}\left(\mu_{z_{t,\mathrm{pos}}^{o}} - \mu_{z_{t-1,\mathrm{pos}}^{o}},\; \sigma_{z_{t,\mathrm{pos}}^{o}}^{2} + \sigma_{z_{t-1,\mathrm{pos}}^{o}}^{2}\right).
$$

As described in Sec. 2.3, positions and velocities are also inferred from the dynamics model as $q(z_{t,\mathrm{pos}}^o \mid z_{t - 1})$ and $q(z_{t,\mathrm{velo}}^o \mid z_{t - 1})$. A joint estimate, including information from both image model and dynamics prediction, is obtained by multiplying the respective distributions and renormalizing.
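As a scalar sketch of the two steps above (our own illustration with hypothetical function names, applied independently per coordinate and object), the velocity estimate is a difference of means with summed variances, and the product of the image and dynamics posteriors is the standard precision-weighted Gaussian fusion:

```python
def velocity_estimate(mu_pos_t, var_pos_t, mu_pos_prev, var_pos_prev):
    """Gaussian velocity estimate from two consecutive position
    posteriors: difference of the means, sum of the variances."""
    return mu_pos_t - mu_pos_prev, var_pos_t + var_pos_prev

def fuse_gaussians(mu_i, var_i, mu_d, var_d):
    """Renormalized product of the image (i) and dynamics (d) posteriors.

    The product of two Gaussians is again Gaussian: precisions add,
    and the fused mean is the precision-weighted average of the means.
    """
    var = 1.0 / (1.0 / var_i + 1.0 / var_d)
    mu = (var_d * mu_i + var_i * mu_d) / (var_i + var_d)
    return mu, var
```

With equal variances the fused mean is the plain average of the two means; as one source becomes much more certain, the fused estimate moves toward it.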
Since both $q$ -distributions are Gaussian, the normalized product is again Gaussian, with mean and + +![](images/250d42d0bf58bbf0abe9cdf56e8e0a79bd2957ccd91ebeb2abf1af06c577d4d4.jpg) +Figure 7: Reconstructions obtained from our image model when using more varied shapes. + +standard deviation are given by + +$$ +\begin{array}{l} q \left(z _ {t} \mid x _ {t}, z _ {t - 1}\right) \propto q \left(z _ {t} \mid x _ {t}\right) \cdot q \left(z _ {t} \mid z _ {t - 1}\right) \\ = \mathcal {N} \left(z _ {t}; \mu_ {t, i}, \sigma_ {t, i} ^ {2}\right) \cdot \mathcal {N} \left(z _ {t}; \mu_ {t, d}, \sigma_ {t, d} ^ {2}\right) \\ = \mathcal {N} \left(z _ {t}; \mu_ {t}, \sigma_ {t} ^ {2}\right) \\ \mu_ {t} = \frac {\sigma_ {t , d} ^ {2} \mu_ {t , i} + \sigma_ {t , i} ^ {2} \mu_ {t , d}}{\sigma_ {t , d} ^ {2} + \sigma_ {t , i} ^ {2}} \\ \frac {1}{\sigma_ {t} ^ {2}} = \frac {1}{\sigma_ {t , d} ^ {2}} + \frac {1}{\sigma_ {t , i} ^ {2}}, \\ \end{array} +$$ + +where we relax our notation for readability $z_{t} \in [z_{t,\mathrm{pos}}^{o}, z_{t,\mathrm{velo}}^{o}]$ and the indices $i$ and $d$ refer to the parameters obtained from the image and dynamics model. This procedure is applied independently for the positions and velocities of each object. + +For $z_{t,\text{latent}}^o$ , we choose dimension 12, such that a full state $z_t^o = (z_{t,\text{pos}}^o, z_{t,\text{size}}^o, z_{t,\text{velo}}^o, z_{t,\text{latent}}^o)$ is 18-dimensional. + +# C.2 GRAPH NEURAL NETWORK + +The dynamics prediction is given by the following series of transformations applied to each input state of shape (batch size, number of objects, $l$ ), where $l = 16$ , since currently, size information is not propagated through the dynamics prediction. + +- $S_{1}$ : Encode input state with linear layer $[l, 2l]$ . +- $S_{2}$ : Apply linear layer $[2l, 2l]$ to $S_{1}$ followed by ReLU non-linearity. +- $S_{3}$ : Apply linear layer $[2l, 2l]$ to $S_{2}$ and add result to $S_{2}$ . 
This gives the dynamics prediction without relational effects, corresponding to $g(z_{t}^{o})$ in Eq. 1.
- $C_1$ : The following steps obtain the relational aspects of dynamics prediction, corresponding to $h(z_t^o,z_t^{o'})$ in Eq. 1. Concatenate the encoded state $S_1^o$ pairwise with all state encodings, yielding a tensor of shape (batch size, number of objects, number of objects, $4l$).
- $C_2$ : Apply linear layer $[4l, 4l]$ to $C_1$ followed by ReLU.
- $C_3$ : Apply linear layer $[4l, 2l]$ to $C_2$ followed by ReLU.
- $C_4$ : Apply linear layer $[2l, 2l]$ to $C_3$ and add to $C_3$.
- $A_{1}$ : To obtain attention coefficients $\alpha (z_t^o,z_t^{o'})$, apply linear layer $[4l, 4l]$ to $C_1$ followed by ReLU.
- $A_{2}$ : Apply linear layer $[4l, 2l]$ to $A_{1}$ followed by ReLU.
- $A_{3}$ : Apply linear layer $[2l, 1]$ to $A_{2}$ and apply exponential function.
- $R_{1}$ : Multiply $C_{4}$ with $A_{3}$, where diagonal elements of $A_{3}$ are masked out to ensure that $R_{1}$ only covers cases where $o \neq o'$.
- $R_{2}$ : Sum over $R_{1}$ for all $o'$, to obtain tensor of shape (batch size, number of objects, $2l$). This is the relational dynamics prediction.

![](images/46bb85517c1ae2e02d91c69e07fe014bcd6ca40a11d38653b3fa5eb6ee318844.jpg)
Figure 8: Mean kinetic energy observed/predicted by STOVE over true energy of the sequences. (left) STOVE is trained on sequences of constant kinetic energy. As can be seen from the blue scatter points, STOVE manages to predict sequences of arbitrary lengths which, on average, preserve the constant energy of the test set. When STOVE is applied to sequences of different energies, it manages to infer these energies from observed frames fairly well, with inaccuracies compounding at larger energies (red). In the following prediction, however, the mean predicted energies diverge quickly to the energy value of the training set (orange and green). (right) STOVE is now trained on sequences of varying energies.
Compared to training on constant energies, both the observed and the predicted energy estimates improve drastically. The predictions no longer immediately regress towards a specific value (orange). However, after 100 frames, the quality of the predicted energies still regresses to a wrong value (green). (all) The observed values refer to energies obtained as the mean energy value over the six initially observed frames. The short (long) time frame refers to an energy obtained as the mean energy over the first 10 (100) frames of prediction. (Best viewed in color.)

![](images/7f6a747a747307c9c4cdde9f2ea57baacca92dc9cb55f51259f5b15e97e71318.jpg)

- $D_{1}$ : Sum relational dynamics $R_{2}$ and self-dynamics $S_{3}$, obtaining the input to $f$ in Eq. 1.
- $D_{2}$ : Apply linear layer $[2l, 2l]$ to $D_{1}$ followed by tanh non-linearity.
- $D_{3}$ : Apply linear layer $[2l, 2l]$ to $D_{2}$ followed by tanh non-linearity and add result to $D_{2}$.
- $D_4$ : Concatenate $D_3$ and $S_1$, and apply linear layer $[4l, 2l]$ followed by tanh.
- $D_{5}$ : Apply linear layer $[2l, 2l]$ to $D_{4}$ and add result to $D_{4}$ to obtain the final dynamics prediction.

The output $D_{5}$ has shape (batch size, number of objects, $2l$), containing the means and standard deviations of the next predicted state.

For the model-based control scenario, the one-hot encoded actions (batch size, action space) are transformed with a linear layer [action space, number of objects · encoding size] and reshaped to (action space, number of objects, encoding size). The action embedding and the object appearances (batch size, number of objects, 3) are then concatenated to the input state. The rest of the dynamics prediction follows as above. The reward prediction consists of the following steps:

- $H_{1}$ : Apply linear layer $[2l, 2l]$ to $D_{1}$ followed by ReLU.
- $H_{2}$ : Apply linear layer $[2l, 2l]$ to $H_{1}$.
- $H_{3}$ : Sum over object dimension to obtain tensor of shape (batch size, $l$).
- $H_4$ : Apply linear layer $[l, l/2]$ to $H_3$ followed by ReLU.
- $H_{5}$ : Apply linear layer $[l/2, l/4]$ to $H_{4}$ followed by ReLU.
- $H_{6}$ : Apply linear layer $[l / 4, l]$ to $H_{5}$ followed by a sigmoid non-linearity.

$H_{6}$ then gives the final reward prediction.

# C.3 STATE INITIALIZATION

In the first two timesteps, we cannot yet apply STOVE's main inference step $q(z_{t} \mid z_{t-1}, x_{t}, x_{t-1})$ as described above. In order to initialize the latent state over the first two frames, we apply a simplified architecture and only use a partial state at $t = 0$.

At $t = 0$, $z_0 \sim q(z_{0,(\mathrm{pos},\mathrm{size})} \mid x_0)$ is given purely by the object detection network, since no previous states, which could be propagated, exist. $z_0$ is incomplete insofar as it does not contain velocity information or latents. At $t = 1$, $q(z_{1,\mathrm{pos},\mathrm{size}} \mid x_1,x_0)$ is still given purely based on the object detection network. Note that for a dynamics prediction of $z_1$, velocity information at $t = 0$ would need to be available. However, at $t = 1$, velocities can be constructed based on the differences between the previously inferred object positions. We sample $z_{1,\mathrm{latent}}$ from the prior Gaussian distribution to assemble the first full initial state $z_1$. At $t \geq 2$, the full inference network can be run: States are inferred both from the object detection network $q(z_t \mid x_t,x_{t - 1})$ as well as propagated using the dynamics model $q(z_{t} \mid z_{t - 1})$.

In the generative model, similar adjustments are made: $p(z_{0,\mathrm{pos,size}})$ is given by a uniform prior, velocities and latents are omitted.
At $t = 1$, velocities are sampled from a uniform distribution in planar coordinates $p(z_{1,\mathrm{velo}})$ and positions are given by a simple linear dynamics model $p(z_{1,\mathrm{pos}}|z_{0,\mathrm{pos}},z_{1,\mathrm{velo}}) = \mathcal{N}(z_{0,\mathrm{pos}} + z_{1,\mathrm{velo}},\sigma)$. Latents $z_{1,\mathrm{latent}}$ are sampled from a Gaussian prior. Starting at $t = 2$, the full dynamics model is used.

# C.4 TRAINING PROCEDURE

Our model was trained using the Adam optimizer (Kingma & Ba, 2015), with a learning rate of $2 \times 10^{-3} \exp(-40 \times 10^{-3} \cdot \text{step})$ for a total of 83000 steps with a batch size of 256.

# D DATA DETAILS

For the billiards and gravitational data, 1000 sequences of length 100 were generated for training. From these, subsequences of length 8 were sampled and used to optimize the ELBO. A test dataset of 300 sequences of length 100 was also generated and used for all evaluations. The pixel resolution of the dataset was $32 \times 32$ for the billiards data and $50 \times 50$ for the gravity data. All models for video prediction were learned on grayscale data, with objects of identical appearance. The $O = 3$ balls were initialised with uniformly random positions and velocities, rejecting configurations with overlap. They are rendered using anti-aliasing. The billiards data models the balls as circular objects, which perform elastic collisions with each other and with the walls of the environment. For the gravity data, the balls are modeled as point masses, where, following Watters et al. (2017), we clip the gravitational force to avoid slingshot effects. We also add a basin of attraction towards the center of the canvas and model the balls in their center of mass system to avoid drift. Velocities here are initialised orthogonal to the center of the canvas for a stabilising effect. For full details, we refer to the file envs.py in the provided code.
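As a minimal sketch of the billiards dynamics described above (our own simplification, not the actual envs.py: ball-ball collisions, the gravitational variant, and rendering are omitted, and `step_billiards` is a hypothetical name), elastic wall reflection conserves each ball's speed and hence the total kinetic energy:

```python
def step_billiards(pos, vel, radius=1.0, size=10.0):
    """One Euler step with elastic wall reflection for each ball.

    pos, vel: lists of [x, y] per ball, modified in place. Reflections
    flip the velocity component and mirror the overshoot back into
    the box, so each ball's speed is unchanged.
    """
    for p, v in zip(pos, vel):
        for d in range(2):  # x and y components
            p[d] += v[d]
            if p[d] < radius:  # bounce off the lower/left wall
                p[d] = 2 * radius - p[d]
                v[d] = -v[d]
            elif p[d] > size - radius:  # bounce off the upper/right wall
                p[d] = 2 * (size - radius) - p[d]
                v[d] = -v[d]
    return pos, vel
```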
+ +# E BASELINES FOR VIDEO MODELING + +Following Kosiorek et al. (2018), we experimented with different hyperparameter configurations for VRNNs. We varied the sizes of the hidden and latent states $[h, z]$ , experimenting with the values [256, 16], [512, 32], [1024, 64], and [2048, 32]. We found that increasing the model capacity beyond [512, 32] did not yield large increases in performance, which is why we chose the configuration [512, 32] for our experiments. Our VRNN implementation is written in PyTorch and based on https://github.com/emited/VariationalRecurrentNeuralNetwork. + +SQAIR can handle a variable number of objects in each sequence. However, to allow for a fairer comparison to STOVE, we fixed the number of objects to the correct number. This means that in the first timestep, exactly three objects are discovered, which are then propagated in all following timesteps, without further discoveries. Our implementation is based on the original implementation provided by the authors at https://github.com/akosiorek/sqair. + +![](images/26dab4052c7800d9d1428586e8a2699c0d6b3b662d0541fe4893f9a4d9bd6ae4.jpg) +Figure 9: Displayed is the mean predicted position error over a rollout length of 8 frames as training progresses for the billiards (left) and gravity (right) scenario for STOVE and its ablations. (Best viewed in color.) + +![](images/53ab8addbb565b5af2f7fa93b0a132ad1a8e8b2abc4c4c5de27a91a3ae92a5d8.jpg) + +The DDPAE experiments were performed using the implementation available at https://github.com/jthsieh/DDPAE-video-prediction. Default parameters for training DDPAE with billiards datasets are provided with the code. However, the resolution of our billiards (32 pixels) and gravity (64 pixels) datasets is different to the resolution DDPAE expects (64 pixels). 
While we experimented with adjusting DDPAE parameters such as the latent space dimension to fit our different resolution, best results were obtained when bilinearly scaling our data to the resolution DDPAE expects. DDPAE was trained for 400000 steps, which sufficed for convergence of the models' test set error. + +The linear baseline was obtained as follows: For the first 8 frames, we infer the full model state using STOVE. We then take the last inferred positions and velocities of each object and predict future positions by assuming constant, uniform motion for each object. We do not allow objects to leave the frame, i.e. when objects reach the canvas boundary after some timesteps, they stick to it. + +Since our dynamics model requires only object positions and velocities as input, it is trivial to construct a supervised baseline for our physics prediction by replacing the SuPAIR-inferred states with real, ground-truth states. On these, the model can then be trained in supervised fashion. + +# F TRAINING CURVES OF ABLATIONS + +In Fig. 9 we display learning curves for STOVE and the presented ablations. As mentioned in the main text, the ablations demonstrate the value of the reuse of the dynamics model, the explicit inclusion of a velocity value, and the presence of an unstructured latent space in the dynamics model. + +# G DETAILS ON THE REINFORCEMENT LEARNING MODELS + +Our MCTS implementation uses the standard UCT formulation for exploration/exploitation. The $c$ parameter is set to 1 in all our experiments. Since the environment does not provide a natural endpoint, we cut off all rollouts at a depth of 20 timesteps. We found this to be a good trade-off between runtime and accuracy. + +When expanding a node on the true environment, we compute the result of the most promising action, and then start a rollout using a random policy from the resulting state. For the final evaluation, a total of 200 nodes are expanded.
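The UCT rule referenced above trades off exploitation (mean return) against an exploration bonus $c\sqrt{\ln N / n}$. A minimal sketch of the selection step, where the `children` mapping from actions to (total return, visit count) pairs is a hypothetical layout (the text only specifies the standard rule with $c = 1$):

```python
import math

def uct_select(children, c=1.0):
    """Select the child action maximizing the UCT score.

    children: dict mapping an action to a (total_return, visit_count) pair.
    Unvisited actions are always tried first.
    """
    total_visits = sum(n for _, n in children.values())

    def score(action):
        q, n = children[action]
        if n == 0:
            return float("inf")  # expand unvisited actions first
        # mean return + exploration bonus
        return q / n + c * math.sqrt(math.log(total_visits) / n)

    return max(children, key=score)
```

With $c = 1$, a child visited once with a high mean return can still dominate a heavily visited child with a lower mean, which is the intended exploration behavior.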
To better utilize the GPU, a slightly different approach is used for STOVE. When we expand a node in this setting, we predict the results of all actions simultaneously, and compute a rollout from each resulting position. In turn, only 50 nodes are expanded. To estimate the node value function, the average reward over all rollouts is propagated back to the root and each node's visit counter is increased by 1. Furthermore, we discount the reward predicted by STOVE with a factor of 0.95 per timestep to account for the higher uncertainty of longer rollouts. This is not done in the baseline running on the real environment, since it behaves deterministically. + +For PPO, we employ a standard convolutional neural network as an actor-critic for the evaluation on images and an MLP for the evaluation on states. The image network consists of two convolutional layers, each using 32 output filters, with kernel sizes of 4 and 3, respectively, and a stride of 2. The MLP consists of two fully connected layers with 128 and 64 hidden units. In both cases, an additional fully connected layer links the outputs of the respective base to an actor and a critic head. For the convolutional base, this linking layer employs 512 hidden units, for the MLP 64. All previously mentioned layers use rectified linear activations. The actor head predicts a probability distribution over next actions using a softmax activation function while the critic head outputs a value estimation for the current state using a linear prediction. We tested several hyperparameter configurations but found the following to be the most efficient one. To update the actor-critic architecture, we sample 32 trajectories of length 16 from separate environments in every batch. The training uses an Adam optimizer with a learning rate of $2 \times 10^{-4}$ and an $\epsilon$ value of $1 \times 10^{-5}$ . The clipping parameter of PPO is set to $1 \times 10^{-1}$ .
We update the network for 4 epochs in each batch using 32 mini-batches of the sampled data. The value loss is weighted at $5 \times 10^{-1}$ and the entropy coefficient is set to $1 \times 10^{-2}$ . diff --git a/subpolicyadaptationforhierarchicalreinforcementlearning/full.md b/subpolicyadaptationforhierarchicalreinforcementlearning/full.md +# SUB-POLICY ADAPTATION FOR HIERARCHICAL REINFORCEMENT LEARNING + +Alexander C. Li*, Carlos Florensa*, Ignasi Clavera, Pieter Abbeel + +University of California, Berkeley + +{alexli1, florensa, iclavera, pabbeel}@berkeley.edu + +# ABSTRACT + +Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards.
Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and videos are available$^{1}$. + +# 1 INTRODUCTION + +Reinforcement learning (RL) has made great progress in a variety of domains, from playing games such as Pong and Go (Mnih et al., 2015; Silver et al., 2017) to automating robotic locomotion (Schulman et al., 2015; Heess et al., 2017), dexterous manipulation (Florensa et al., 2017b; OpenAI et al., 2018), and perception (Nair et al., 2018; Florensa et al., 2018). Yet, most work in RL still learns from scratch when faced with a new problem. This is particularly inefficient when tackling multiple related tasks that are hard to solve due to sparse rewards or long horizons. + +A promising technique to overcome this limitation is hierarchical reinforcement learning (HRL) (Sutton et al., 1999). In this paradigm, policies have several modules of abstraction, allowing subsets of the modules to be reused. The most common case consists of temporal hierarchies (Precup, 2000; Dayan & Hinton, 1993), where a higher-level policy (manager) takes actions at a lower frequency, and its actions condition the behavior of some lower level skills or sub-policies.
When transferring knowledge to a new task, most prior works fix the skills and train a new manager on top. Despite having a clear benefit in kick-starting the learning in the new task, having fixed skills can considerably cap the final performance on the new task (Florensa et al., 2017a). Little work has been done on adapting pre-trained sub-policies to be optimal for a new task. + +In this paper, we develop a new framework for simultaneously adapting all levels of temporal hierarchies. First, we derive an efficient approximated hierarchical policy gradient. The key insight is that, despite the decisions of the manager being unobserved latent variables from the point of view of the Markovian environment, from the perspective of the sub-policies they can be considered as part of the observation. We show that this provides a decoupling of the manager and sub-policy gradients, which greatly simplifies the computation in a principled way. It also theoretically justifies a technique used in other prior works (Frans et al., 2018). Second, we introduce a sub-policy specific baseline for our hierarchical policy gradient. We prove that this baseline is unbiased, and our experiments reveal faster convergence, suggesting efficient gradient variance reduction. Then, we introduce a more stable way of using this gradient, Hierarchical Proximal Policy Optimization (HiPPO). This method helps us take more conservative steps in our policy space (Schulman et al., 2017), critical in hierarchies because of the interdependence of each layer. Results show that HiPPO is highly efficient both when learning from scratch, i.e. adapting randomly initialized skills, and when adapting pretrained skills on a new task. Finally, we evaluate the benefit of randomizing the time-commitment of the sub-policies, and show it helps both in terms of final performance and zero-shot adaptation on similar tasks.
+ +# 2 PRELIMINARIES + +We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple $M = (\mathcal{S},\mathcal{A},\mathcal{P},r,\rho_0,\gamma ,H)$ , where $\mathcal{S}$ is a state set, $\mathcal{A}$ is an action set, $\mathcal{P}:\mathcal{S}\times \mathcal{A}\times \mathcal{S}\to \mathbb{R}_{+}$ is the transition probability distribution, $r:\mathcal{S}\times \mathcal{A}\to \mathbb{R}$ is the reward function, $\rho_0$ is the initial state distribution, $\gamma \in [0,1]$ is a discount factor, and $H$ the horizon. Our objective is to find a stochastic policy $\pi_{\theta}$ that maximizes the expected discounted return within the MDP, $\eta (\pi_{\theta}) = \mathbb{E}_{\tau}[\sum_{t = 0}^{H}\gamma^{t}r(s_{t},a_{t})]$ . We use $\tau = (s_0,a_0,\dots)$ to denote the entire state-action trajectory, where $s_0\sim \rho_0(s_0)$ , $a_{t}\sim \pi_{\theta}(a_{t}|s_{t})$ , and $s_{t + 1}\sim \mathcal{P}(s_{t + 1}|s_t,a_t)$ . + +![](images/efb8518ac264ad14b3deabec434a3da6f9c18efde852d5c81246e9466328788c.jpg) +Figure 1: Temporal hierarchy studied in this paper. A latent code $z_{kp}$ is sampled from the manager policy $\pi_{\theta_h}(z_{kp}|s_{kp})$ every $p$ time-steps, using the current observation $s_{kp}$ . The actions $a_{t}$ are sampled from the sub-policy $\pi_{\theta_l}(a_t|s_t,z_{kp})$ conditioned on the same latent code from $t = kp$ to $(k + 1)p - 1$ . + +In this work, we propose a method to learn a hierarchical policy and efficiently adapt all the levels in the hierarchy to perform a new task. We study hierarchical policies composed of a higher level, or manager $\pi_{\theta_h}(z_t|s_t)$ , and a lower level, or sub-policy $\pi_{\theta_l}(a_{t'}|z_t,s_{t'})$ . The higher level does not take actions in the environment directly, but rather outputs a command, or latent variable $z_{t}\in \mathcal{Z}$ , that conditions the behavior of the lower level. We focus on the common case where $\mathcal{Z} = \mathbb{Z}_n$ , making the manager choose among $n$ sub-policies, or skills, to execute.
The manager typically operates at a lower frequency than the sub-policies, only observing the environment every $p$ time-steps. When the manager receives a new observation, it decides which low level policy to commit to for $p$ environment steps by means of a latent code $z$ . Figure 1 depicts this framework where the high level frequency $p$ is a random variable, which is one of the contributions of this paper as described in Section 4.4. Note that the class of hierarchical policies we work with is more restrictive than others like the options framework, where the time-commitment is also decided by the policy. Nevertheless, we show that this loss in policy expressivity acts as a regularizer and does not prevent our algorithm from surpassing other state-of-the-art methods. + +# 3 RELATED WORK + +There has been growing interest in HRL for the past few decades (Sutton et al., 1999; Precup, 2000), but only recently has it been applied to high-dimensional continuous domains as we do in this work (Kulkarni et al., 2016; Daniel et al., 2016). To obtain the lower level policies, or skills, most methods exploit some additional assumptions, like access to demonstrations (Le et al., 2018; Merel et al., 2019; Ranchod et al., 2015; Sharma et al., 2018), policy sketches (Andreas et al., 2017), or task decomposition into sub-tasks (Ghavamzadeh & Mahadevan, 2003; Sohn et al., 2018). Other methods use a different reward for the lower level, often constraining it to be a "goal reacher" policy, where the signal from the higher level is the goal to reach (Nachum et al., 2018; Levy et al., 2019; Vezhnevets et al., 2017). These methods are very promising for state-reaching tasks, but might require access to goal-reaching reward systems not defined in the original MDP, and are more limited when training on tasks beyond state-reaching. Our method does not require any additional supervision, and the obtained skills are not constrained to be goal-reaching.
+ +When transferring skills to a new environment, most HRL methods keep them fixed and simply train a new higher-level on top (Hausman et al., 2018; Heess et al., 2016). Other work allows for building on previous skills by constantly supplementing the set of skills with new ones (Shu et al., 2018), but it requires a hand-defined curriculum of tasks, and the previous skills are never fine-tuned. + +Our algorithm allows for seamless adaptation of the skills, showing no trade-off between leveraging the power of the hierarchy and the final performance in a new task. Other methods use invertible functions as skills (Haarnoja et al., 2018), and therefore a fixed skill can be fully overwritten when a new layer of hierarchy is added on top. This kind of "fine-tuning" is promising, although, similar to other works (Peng et al., 2019), it is not applied to temporally extended skills as we do here. + +One of the most general frameworks to define temporally extended hierarchies is the options framework (Sutton et al., 1999), and it has recently been applied to continuous state spaces (Bacon et al., 2017). One of the most delicate parts of this formulation is the termination policy, and it requires several regularizers to avoid skill collapse (Harb et al., 2017; Vezhnevets et al., 2016). This modification of the objective may be difficult to tune and affects the final performance. Instead of adding such penalties, we propose to have skills of a random length, not controlled by the agent during training of the skills. The benefit is two-fold: no termination policy to train, and more stable skills that transfer better. Furthermore, these works only used discrete action MDPs. We lift this assumption, and show good performance of our algorithm in complex locomotion tasks.
There are other recently proposed algorithms that go in the same direction, but we found them more complex, less principled (their per-action marginalization cannot properly capture the temporal correlation within each option), and without available code or evidence of outperforming non-hierarchical methods (Smith et al., 2018). + +The closest work to ours in terms of final algorithm structure is the one proposed by Frans et al. (2018). Their method can be included in our framework, and hence benefits from our new theoretical insights. We introduce a modification that is shown to be highly beneficial, the random time-commitment mentioned above, and find that our method can learn in difficult environments without their complicated training scheme. + +# 4 EFFICIENT HIERARCHICAL POLICY GRADIENTS + +When using a hierarchical policy, the intermediate decision taken by the higher level is not directly applied in the environment. Therefore, technically it should not be incorporated into the trajectory description as an observed variable, like the actions. This makes the policy gradient considerably harder to compute. In this section we first prove that, under mild assumptions, the hierarchical policy gradient can be accurately approximated without needing to marginalize over this latent variable. Then, we derive an unbiased baseline for the policy gradient that can reduce the variance of its estimate. Finally, with these findings, we present our method, Hierarchical Proximal Policy Optimization (HiPPO), an on-policy algorithm for hierarchical policies, allowing learning at all levels of the policy jointly and preventing sub-policy collapse.
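As a concrete reference for the rollout structure analyzed in the rest of this section (and later formalized in Algorithm 1), sampling a trajectory with a manager that commits to a skill for a random number of steps can be sketched as follows. All callables are hypothetical toy stand-ins for the learned policies and the environment:

```python
import random

def hippo_rollout(manager, skill, env_step, s0, horizon, p_min, p_max, rng=random):
    """Sample one trajectory with random time-commitment.

    The manager picks a latent skill z, which the sub-policy follows for
    p steps, with p drawn uniformly from {p_min, ..., p_max}.
    """
    traj, s, t = [], s0, 0
    while t < horizon:
        p = rng.randint(p_min, p_max)   # random time-commitment
        z = manager(s)                  # z ~ pi_h(. | s), held fixed for p steps
        for _ in range(min(p, horizon - t)):
            a = skill(s, z)             # a ~ pi_l(. | s, z)
            s_next, r = env_step(s, a)
            traj.append((s, z, a, r))
            s, t = s_next, t + 1
    return traj
```

A deterministic toy instantiation (a counter environment with a single skill) is enough to check that the skill is held fixed within each commitment window.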
+ +# 4.1 APPROXIMATE HIERARCHICAL POLICY GRADIENT + +Policy gradient algorithms are based on the likelihood ratio trick (Williams, 1992) to estimate the gradient of returns with respect to the policy parameters as + +$$ +\begin{aligned} \nabla_{\theta}\eta (\pi_{\theta}) &= \mathbb{E}_{\tau}\left[\nabla_{\theta}\log P(\tau)R(\tau)\right] \approx \frac{1}{N}\sum_{i = 1}^{N}\nabla_{\theta}\log P(\tau_{i})R(\tau_{i}) \qquad (1) \\ &= \frac{1}{N}\sum_{i = 1}^{N}\sum_{t = 0}^{H}\nabla_{\theta}\log \pi_{\theta}(a_{t}\mid s_{t})R(\tau_{i}) \qquad (2) \end{aligned} +$$ + +In a temporal hierarchy, a hierarchical policy with a manager $\pi_{\theta_h}(z_t|s_t)$ selects every $p$ time-steps one of $n$ sub-policies to execute. These sub-policies, indexed by $z \in \mathbb{Z}_n$ , can be represented as a single conditional probability distribution over actions $\pi_{\theta_l}(a_t|z_t,s_t)$ . This allows us to not only use a given set of sub-policies, but also leverage skills learned with Stochastic Neural Networks (SNNs) (Florensa et al., 2017a). Under this framework, the probability of a trajectory $\tau = (s_0,a_0,s_1,\dots ,s_H)$ can be written as + +$$ +P (\tau) = \left(\prod_ {k = 0} ^ {H / p} \left[ \sum_ {j = 1} ^ {n} \pi_ {\theta_ {h}} (z _ {j} | s _ {k p}) \prod_ {t = k p} ^ {(k + 1) p - 1} \pi_ {\theta_ {l}} (a _ {t} | s _ {t}, z _ {j}) \right]\right) \left[ P (s _ {0}) \prod_ {t = 0} ^ {H - 1} P (s _ {t + 1} | s _ {t}, a _ {t}) \right]. \tag {3} +$$ + +The mixture action distribution, which presents itself as an additional summation over skills, prevents additive factorization when taking the logarithm, as from Eq. 1 to 2. This can yield numerical instabilities due to the product of the $p$ sub-policy probabilities. For instance, when the skills are clearly distinguishable, all but one of the sub-policies' probabilities will be small, resulting in an exponentially small product.
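In practice, mixture likelihoods of this form are evaluated in log-space with the log-sum-exp trick, so the per-skill products never materialize. A minimal numpy sketch of the per-segment term of Eq. 3 (the probability values below are hypothetical):

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x)))."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def segment_log_prob(log_pi_h, log_pi_l):
    """Log-probability of one p-step segment of the mixture in Eq. 3.

    log_pi_h: (n,) manager log-probabilities over the n skills.
    log_pi_l: (n, p) per-skill log-probabilities of the p executed actions.
    Evaluates log sum_j exp(log pi_h[j] + sum_t log pi_l[j, t]) without
    ever forming the underflowing per-skill probability products.
    """
    return logsumexp(log_pi_h + log_pi_l.sum(axis=1))
```

With diverse skills, the non-selected skills contribute terms like $(10^{-20})^p$, which underflow to zero in linear space but are harmless in log-space.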
In the following Lemma, we derive an approximation of the policy gradient, whose error tends to zero as the skills become more diverse, and draw insights on the interplay of the manager actions. + +Lemma 1. If the skills are sufficiently differentiated, then the latent variable can be treated as part of the observation to compute the gradient of the trajectory probability. Let $\pi_{\theta_h}(z|s)$ and $\pi_{\theta_l}(a|s,z)$ be Lipschitz functions w.r.t. their parameters, and assume that $0 < \pi_{\theta_l}(a_t|s_t,z_j) < \epsilon\ \forall z_j \neq z_{kp}$ , then + +$$ +\nabla_ {\theta} \log P (\tau) = \sum_ {k = 0} ^ {H / p} \nabla_ {\theta} \log \pi_ {\theta_ {h}} \left(z _ {k p} \mid s _ {k p}\right) + \sum_ {t = 0} ^ {H} \nabla_ {\theta} \log \pi_ {\theta_ {l}} \left(a _ {t} \mid s _ {t}, z _ {k p}\right) + \mathcal {O} (n H \epsilon^ {p - 1}) \tag {4} +$$ + +Proof. See Appendix. + +Our assumption can be seen as having diverse skills. Namely, for each action there is just one sub-policy that gives it high probability. In this case, the latent variable can be treated as part of the observation to compute the gradient of the trajectory probability. Many algorithms to extract lower-level skills are based on promoting diversity among the skills (Florensa et al., 2017a; Eysenbach et al., 2019), therefore usually satisfying our assumption. We further analyze how well this assumption holds in our experiments section and Table 2.
However, it is still unclear how to formulate a baseline for all the levels in a hierarchical policy, since an action-dependent baseline does introduce bias in the gradient (Tucker et al., 2018). It has recently been proposed to use latent-conditioned baselines (Weber et al., 2019). Here we go further and prove that, under the assumptions of Lemma 1, we can formulate an unbiased latent-dependent baseline for the approximate gradient (Eq. 5). + +Lemma 2. For any functions $b_h: \mathcal{S} \to \mathbb{R}$ and $b_l: \mathcal{S} \times \mathcal{Z} \to \mathbb{R}$ we have: + +$$ +\mathbb{E}_{\tau}\Big[\sum_{k = 0}^{H / p}\nabla_{\theta}\log \pi_{\theta_{h}}(z_{kp}|s_{kp})\, b_{h}(s_{kp})\Big] = 0 \quad \text{and}\quad \mathbb{E}_{\tau}\Big[\sum_{t = 0}^{H}\nabla_{\theta}\log \pi_{\theta_{l}}(a_{t}|s_{t},z_{kp})\, b_{l}(s_{t},z_{kp})\Big] = 0 +$$ + +Proof. See Appendix. + +Now we apply Lemma 1 and Lemma 2 to Eq. 1. By using the corresponding value functions as the function baseline, the return can be replaced by the Advantage function $A(s_{kp}, z_{kp})$ (see details in Schulman et al. (2016)), and we obtain the following approximate policy gradient expression: + +$$ +\hat{g} = \mathbb{E}_{\tau}\Big[\Big(\sum_{k = 0}^{H / p}\nabla_{\theta}\log \pi_{\theta_{h}}(z_{kp}|s_{kp})A(s_{kp},z_{kp})\Big) + \Big(\sum_{t = 0}^{H}\nabla_{\theta}\log \pi_{\theta_{l}}(a_{t}|s_{t},z_{kp})A(s_{t},a_{t},z_{kp})\Big)\Big] \tag{5} +$$ + +This hierarchical policy gradient estimate can have lower variance than without baselines, but using it for policy optimization through stochastic gradient descent still yields an unstable algorithm. In the next section, we further improve the stability and sample efficiency of the policy optimization by incorporating techniques from Proximal Policy Optimization (Schulman et al., 2017).
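The unbiasedness in Lemma 2 rests on the standard score-function identity $\mathbb{E}_{z\sim\pi_\theta}[\nabla_\theta \log \pi_\theta(z)] = 0$. For a single softmax manager decision it can be checked numerically; the following is a hedged sketch with hypothetical names, not part of the paper's implementation:

```python
import numpy as np

def score_baseline_term(theta, baseline, n_samples=200_000, rng=None):
    """Monte Carlo estimate of E_{z ~ pi_theta}[grad_theta log pi_theta(z) * b]
    for a softmax policy pi_theta = softmax(theta) and a constant baseline b.
    By the score-function identity, this expectation is the zero vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    z = rng.choice(len(theta), size=n_samples, p=probs)
    # For a softmax policy, grad_theta log pi(z) = onehot(z) - probs.
    grads = np.eye(len(theta))[z] - probs
    return (grads * baseline).mean(axis=0)
```

The estimate shrinks toward zero at the usual $1/\sqrt{N}$ Monte Carlo rate regardless of the baseline value, which is exactly why subtracting a (latent-dependent) baseline reduces variance without introducing bias.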
+ +# 4.3 HIERARCHICAL PROXIMAL POLICY OPTIMIZATION + +Algorithm 1 HiPPO Rollout
1: Input: skills $\pi_{\theta_l}(a|s,z)$ , manager $\pi_{\theta_h}(z|s)$ , time-commitment bounds $P_{\mathrm{min}}$ and $P_{\mathrm{max}}$ , horizon $H$
2: Reset environment: $s_0 \sim \rho_0$ , $t = 0$ .
3: while $t < H$ do
4: Sample time-commitment $p \sim \mathrm{Cat}([P_{\mathrm{min}}, P_{\mathrm{max}}])$
5: Sample skill $z_t \sim \pi_{\theta_h}(\cdot|s_t)$
6: for $t' = t \ldots (t + p)$ do
7: Sample action $a_{t'} \sim \pi_{\theta_l}(\cdot|s_{t'}, z_t)$
8: Observe new state $s_{t' + 1}$ and reward $r_{t'}$
9: end for
10: $t \gets t + p$
11: end while
12: Output: $(s_0, z_0, a_0, s_1, a_1, \ldots, s_H, z_H, a_H, s_{H + 1})$

Algorithm 2 HiPPO
1: Input: skills $\pi_{\theta_l}(a|s,z)$ , manager $\pi_{\theta_h}(z|s)$ , horizon $H$ , learning rate $\alpha$
2: while not done do
3: for actor = 1, 2, ..., N do
4: Obtain trajectory with HiPPO Rollout
5: Estimate advantages $\hat{A}(a_{t'}, s_{t'}, z_t)$ and $\hat{A}(z_t, s_t)$
6: end for
7: $\theta \gets \theta + \alpha \nabla_\theta L_{HiPPO}^{CLIP}(\theta)$
8: end while

Using an appropriate step size in policy space is critical for stable policy learning. Modifying the policy parameters in some directions may have a minimal impact on the distribution over actions, whereas small changes in other directions might change its behavior drastically and hurt training efficiency (Kakade, 2002). Trust region policy optimization (TRPO) uses a constraint on the KL-divergence between the old policy and the new policy to prevent this issue (Schulman et al., 2015). Unfortunately, hierarchical policies are generally represented by complex distributions without closed form expressions for the KL-divergence. Therefore, to improve the stability of our hierarchical policy gradient we turn towards Proximal Policy Optimization (PPO) (Schulman et al., 2017). PPO is a more flexible and compute-efficient algorithm.
In a nutshell, it replaces the KL-divergence constraint with a cost function that achieves the same trust region benefits, but only requires the computation of the likelihood. Letting $w_{t}(\theta) = \frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{\theta_{old}}(a_{t}|s_{t})}$ , the PPO objective is: + +$$ +L^{CLIP}(\theta) = \mathbb{E}_{t}\min \left\{w_{t}(\theta)A_{t}, \operatorname{clip}(w_{t}(\theta), 1 - \epsilon , 1 + \epsilon)A_{t}\right\} +$$ + +We can adapt our approximated hierarchical policy gradient with the same approach by letting $w_{h, kp}(\theta) = \frac{\pi_{\theta_h}(z_{kp}|s_{kp})}{\pi_{\theta_{h, old}}(z_{kp}|s_{kp})}$ and $w_{l,t}(\theta) = \frac{\pi_{\theta_l}(a_t|s_t,z_{kp})}{\pi_{\theta_{l, old}}(a_t|s_t,z_{kp})}$ , and using the superscript clip to denote the clipped objective version, we obtain the new surrogate objective: + +$$ +\begin{array}{l} L_{HiPPO}^{CLIP}(\theta) = \mathbb{E}_{\tau}\Big[\sum_{k = 0}^{H / p}\min \left\{w_{h, kp}(\theta)A(s_{kp}, z_{kp}), w_{h, kp}^{\mathrm{clip}}(\theta)A(s_{kp}, z_{kp})\right\} \\ \quad + \sum_{t = 0}^{H}\min \left\{w_{l, t}(\theta)A(s_{t}, a_{t}, z_{kp}), w_{l, t}^{\mathrm{clip}}(\theta)A(s_{t}, a_{t}, z_{kp})\right\}\Big] \\ \end{array} +$$ + +We call this algorithm Hierarchical Proximal Policy Optimization (HiPPO). Next, we introduce a critical addition: varying the time-commitment between skills. + +# 4.4 VARYING TIME-COMMITMENT + +Most hierarchical methods either consider a fixed time-commitment to the lower level skills (Florensa et al., 2017a; Frans et al., 2018), or implement the complex options framework (Precup, 2000; Bacon et al., 2017).
In this work we propose an in-between, where the time-commitment to the skills is a random variable sampled from a fixed distribution Categorical $(P_{\mathrm{min}}, P_{\mathrm{max}})$ just before the manager takes a decision. This modification does not hinder final performance, and we show it improves zero-shot adaptation to a new task. This approach to sampling rollouts is detailed in Algorithm 1. The full algorithm is detailed in Algorithm 2. + +# 5 EXPERIMENTS + +We designed our experiments to answer the following questions: 1) How does HiPPO compare against a flat policy when learning from scratch? 2) Does it lead to policies more robust to environment changes? 3) How well does it adapt already learned skills? and 4) Does our skill diversity assumption hold in practice? + +![](images/236c8eac7e992e7607459e30caf68b20392779fe2c16500ff94a6f9b7a2cc322.jpg)
(a) Block Hopper + +![](images/ca5d992c2dd6dff2bb9deb892c3c5fa1b2ebc82764cb309ff6f5eaefabdca3cb.jpg)
(b) Block Half Cheetah + +![](images/9f2d56dbb1f46e0fbced2fa61dfae5aa4c12e11debdf55c195fd223217ccc6b3.jpg)
(c) Snake Gather + +![](images/9fc0a2be0677ee6e1a1ff225598555341009b3daa8126ee463b30f728090fb6b.jpg)
(d) Ant Gather + +![](images/b639b0e52093ea28031190daed02703ea217817cbb8006d45d73111c729afc08.jpg)
Figure 2: Environments used to evaluate the performance of our method. Every episode has a different configuration: wall heights for (a)-(b), ball positions for (c)-(d).
(a) Block Hopper
Figure 3: Analysis of different time-commitment strategies on learning from scratch. + +![](images/6c03fe8ce434f24b605c4270f888da33a5471300afbf36003a3aacdd15e57382.jpg)
(b) Block Half Cheetah + +![](images/47b1d25ac781ad216113e3a4c77608713958082e9f8f4d04e14d15b5f4b6bbfd.jpg)
(c) Snake Gather + +![](images/38a7333526461e6c2d51e248e0e2c414b5f9219d16ae2081d0411463199bfbf9.jpg)
(d) Ant Gather
The Block environments, depicted in Fig. 2a-2b, have walls of random heights at regular intervals, and the objective is to learn a gait for the Hopper and Half-Cheetah robots to jump over them. The agents observe the height of the wall ahead and their proprioceptive information (joint positions and velocities), receiving a reward of $+1$ for each wall cleared. The Gather environments, described by Duan et al. (2016), require agents to collect apples (green balls, $+1$ reward) while avoiding bombs (red balls, $-1$ reward). The only available perception beyond proprioception is a LIDAR-type sensor indicating the distance to objects in different directions, and their type, as depicted in the bottom left corner of Fig. 2c-2d. This is a challenging hierarchical task with sparse rewards that requires simultaneously learning perception, locomotion, and higher-level planning capabilities. We use the Snake and Ant robots in Gather. Details for all robotic agents are provided in Appendix B.

# 5.2 LEARNING FROM SCRATCH AND TIME-COMMITMENT

In this section, we study the benefit of using our HiPPO algorithm instead of standard PPO on a flat policy (Schulman et al., 2017). The results, reported in Figure 3, demonstrate that training from scratch with HiPPO leads to faster learning and better performance than flat PPO. Furthermore, we show that the benefit of HiPPO does not come merely from temporally correlated exploration: PPO with action repeat converges to a lower performance than our method. HiPPO leverages the time-commitment more efficiently, as suggested by the poor performance of the ablation with $p = 1$, in which the manager takes an action at every environment step. Finally, Figure 4 shows the effectiveness of the presented skill-dependent baseline.

# 5.3 COMPARISON TO OTHER METHODS

We compare HiPPO to current state-of-the-art hierarchical methods.
First, we evaluate HIRO (Nachum et al., 2018), an off-policy RL method based on training a goal-reaching lower-level policy. Fig. 5 shows that HIRO achieves poor performance on our tasks. As further detailed in Appendix D, this algorithm is sensitive to having access to ground-truth information, such as the exact $(x,y)$ position of the robot in Gather. In contrast, our method performs well directly from the raw sensory inputs described in Section 5.1. We also evaluate Option-Critic (Bacon et al., 2017), a variant of the options framework (Sutton et al., 1999) that can be used for continuous action spaces. It fails to learn, and we hypothesize that their algorithm provides less time-correlated exploration and learns less diverse skills.

![](images/68e670cc6d17765c3bb907e97793225f259de1f57497addc3d2c4dc864ff6f0f.jpg)
(a) Block Hopper

![](images/e8dd521171ae3c468f3e7028b25efdf83fc4d0dc1548099929b4126222c46a5d.jpg)
(b) Block Half Cheetah

![](images/59ca0191a3ac3c1b177dc41c5f29871516818d20d7fba87d99d784192bccf0d0.jpg)
(c) Snake Gather

![](images/ad90ee52db5a5827279ddc60f12a30549fea82a5d1b673d190878fca28a2ac2f.jpg)
(d) Ant Gather

Figure 4: Using a skill-conditioned baseline, as defined in Section 4.2, generally improves performance of HiPPO when learning from scratch.

![](images/51b39247785f2d9d2165cf7cf4087ea106477ad5ab5fef88cf129129cf926a73.jpg)
(a) Block Hopper

![](images/bc94d9f83d079ff44f772c987a4df30b37b43250f5e42717a1c010a96279a293.jpg)
(b) Block Half Cheetah

![](images/98275faf912ce2603e16ff22f738e1ccf9d76e1a201678017c958f6d96d8f2ad.jpg)
(c) Snake Gather

![](images/5d27007cf6687ba72bdb7474f68c0ddbc520d2161c7d8216aeb1fad5d80509f7.jpg)
(d) Ant Gather

Figure 5: Comparison of HiPPO and HierVPG to prior hierarchical methods on learning from scratch.

We also compare against MLSH (Frans et al., 2018), which repeatedly samples new environment configurations to learn primitive skills.
We take these hyperparameters from their Ant Twowalk experiment: resetting the environment configuration every 60 iterations, a warmup period of 20 iterations during which only the manager is trained, and a joint training period of 40 iterations during which both the manager and the skills are trained. Our results show that such a training scheme does not provide any benefit here. Finally, we provide a comparison to a direct application of our Hierarchical Vanilla Policy Gradient (HierVPG) algorithm, and we see that it is unstable without PPO's trust-region-like technique.

# 5.4 ROBUSTNESS TO DYNAMICS PERTURBATIONS

We investigate the robustness of HiPPO to changes in the dynamics of the environment. We perform several modifications to the base Snake Gather and Ant Gather environments: one at a time, we change the body mass, the damping of the joints, the body inertia, and the friction characteristics of both robots. The results, presented in Table 1, show that HiPPO with randomized period Categorical$(T_{\min}, T_{\max})$ is better able to handle these dynamics changes. In terms of the drop in policy performance between the training and test environments, it outperforms HiPPO with fixed period on 6 out of 8 perturbed tasks. These results suggest that the randomized period exposes the policy to a wider range of scenarios, which makes it easier to adapt when the environment changes.
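The randomized-period rollout used in these experiments can be sketched as follows; `manager`, `skill`, and the environment interface are hypothetical stand-ins for the two policy networks and a gym-like environment:

```python
import random

def sample_period(t_min=5, t_max=15):
    """Categorical(T_min, T_max): uniform over the integers {t_min, ..., t_max}."""
    return random.randint(t_min, t_max)

def collect_rollout(env, manager, skill, horizon, t_min=5, t_max=15):
    """Roll out a two-level policy with randomized time-commitment.

    The manager picks a latent z and commits to it for p steps, where p is
    resampled every time the manager acts (assumed interfaces:
    manager(s) -> z, skill(s, z) -> a, env.step(a) -> (s, r, done)).
    """
    s, traj, steps_left, z = env.reset(), [], 0, None
    for _ in range(horizon):
        if steps_left == 0:                 # manager acts, commits for p steps
            steps_left = sample_period(t_min, t_max)
            z = manager(s)
        a = skill(s, z)
        s, r, done = env.step(a)
        traj.append((s, z, a, r))
        steps_left -= 1
        if done:
            break
    return traj
```

With a fixed period one would simply replace `sample_period` by a constant `p`; the randomized version is what the "HiPPO random p" rows in Table 1 refer to.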
| Gather | Algorithm | Initial | Mass | Damping | Inertia | Friction |
| --- | --- | --- | --- | --- | --- | --- |
| Snake | Flat PPO | 2.72 | 3.16 (+16%) | 2.75 (+1%) | 2.11 (-22%) | 2.75 (+1%) |
| Snake | HiPPO, p = 10 | 4.38 | 3.28 (-25%) | 3.27 (-25%) | 3.03 (-31%) | 3.27 (-25%) |
| Snake | HiPPO random p | 5.11 | 4.09 (-20%) | 4.03 (-21%) | 3.21 (-37%) | 4.03 (-21%) |
| Ant | Flat PPO | 2.25 | 2.53 (+12%) | 2.13 (-5%) | 2.36 (+5%) | 1.96 (-13%) |
| Ant | HiPPO, p = 10 | 3.84 | 3.31 (-14%) | 3.37 (-12%) | 2.88 (-25%) | 3.07 (-20%) |
| Ant | HiPPO random p | 3.22 | 3.37 (+5%) | 2.57 (-20%) | 3.36 (+4%) | 2.84 (-12%) |
Table 1: Zero-shot transfer performance. The final return in the initial environment is shown, as well as the average return over 25 rollouts in each new modified environment.

# 5.5 ADAPTATION OF PRE-TRAINED SKILLS

![](images/8aadbafa438b1970698ef6544f75f68cc64bc50cf7b13bf9e26648440dd430c1.jpg)
(a) Block Hopper

![](images/91f8800c3409af79bda465ad73d949dedbc4719f9d741e0e8a430eebb2d52a59.jpg)
(b) Block Half Cheetah

![](images/ed6142d34df525fd93c260a3a2de8d7976e7b81f2561af53a9c82c3215b92d14.jpg)
(c) Snake Gather

![](images/5f5872f2f2a1e4ce25bf6f1fe10115da19c373231d67f2d5202ec9ad5588febf.jpg)
(d) Ant Gather

Figure 6: Benefit of adapting given skills when the preferences of the environment differ from those of the environment where the skills were originally trained. Adapting skills with HiPPO yields better learning performance than leaving the skills fixed or learning from scratch.

For the Block task, we use DIAYN (Eysenbach et al., 2019) to train 6 differentiated subpolicies in an environment without any walls. Here, we test whether these diverse skills can improve performance on a downstream task that is outside the training distribution. For Gather, we take 6 pretrained subpolicies encoded by a Stochastic Neural Network (Tang & Salakhutdinov, 2013) that was trained in a diversity-promoting environment (Florensa et al., 2017a). We fine-tune them with HiPPO on the Gather environment, but with an extra penalty on the velocity of the center of mass. This can be understood as a preference for cautious behavior, and it requires adjusting the sub-policies, which were trained with a proxy reward encouraging them to move as far (and hence as fast) as possible. Fig. 6 shows that using HiPPO to simultaneously train a manager and fine-tune the skills achieves higher final performance than fixing the sub-policies and only training a manager with PPO.
The two initially learn at the same rate, but HiPPO's ability to adjust to the new dynamics allows it to reach a higher final performance. Fig. 6 also shows that HiPPO can fine-tune the same given skills better than Option-Critic (Bacon et al., 2017), MLSH (Frans et al., 2018), and HIRO (Nachum et al., 2018).

# 5.6 SKILL DIVERSITY ASSUMPTION

In Lemma 1, we derived a more efficient and numerically stable gradient by assuming that the sub-policies are diverse. In this section, we empirically test the validity of this assumption and the quality of our approximation. We run the HiPPO algorithm on Ant Gather and Snake Gather, both from scratch and with given pretrained skills, as done in the previous section. In Table 2, we report the average maximum probability under other sub-policies, corresponding to $\epsilon$ from the assumption. In all settings, this is on the order of magnitude of 0.1. Therefore, with the $p\approx 10$ that we use in our experiments, the term we neglect has a factor $\epsilon^{p-1} \approx 10^{-10}$. It is not surprising, then, that the average cosine similarity between the full gradient and our approximation is almost 1, as reported in Table 2.
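Both diagnostics are cheap to compute; a sketch (the gradient vectors passed to `cosine_similarity` here are hypothetical placeholders for the exact and approximate policy gradients):

```python
import numpy as np

# Size of the factor on the neglected term in Lemma 1: even with off-latent
# action probabilities as large as eps = 0.1 and a period of p = 10 steps,
# eps**(p - 1) is about 1e-9, and smaller measured eps shrinks it further.
eps, p = 0.1, 10
neglected_factor = eps ** (p - 1)

def cosine_similarity(g_exact, g_approx):
    """Cosine similarity between two gradient vectors; values near 1 mean the
    approximation points in almost the same direction as the exact gradient."""
    g_exact = np.asarray(g_exact, dtype=float)
    g_approx = np.asarray(g_approx, dtype=float)
    return float(g_exact @ g_approx /
                 (np.linalg.norm(g_exact) * np.linalg.norm(g_approx)))
```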
| Gather | Algorithm | Cosine Sim. | $\max_{z' \neq z_{kp}} \pi_{\theta_l}(a_t \mid s_t, z')$ | $\pi_{\theta_l}(a_t \mid s_t, z_{kp})$ |
| --- | --- | --- | --- | --- |
| Snake | HiPPO on given skills | 0.98 ± 0.01 | — | 0.44 ± 0.03 |
| Snake | HiPPO on random skills | 0.97 ± 0.03 | — | 0.32 ± 0.04 |
| Ant | HiPPO on given skills | 0.96 ± 0.04 | — | 0.40 ± 0.08 |
| Ant | HiPPO on random skills | 0.94 ± 0.03 | — | 0.31 ± 0.09 |
Table 2: Empirical evaluation of Lemma 1. In the middle and right columns, we evaluate the quality of our assumption by computing the largest probability of a certain action under other skills $(\epsilon)$, and the action probability under the actual latent. We also report the cosine similarity between our approximate gradient and the exact gradient from Eq. 3. The mean and standard deviation of these values are computed over the full batch collected at iteration 10.

# 6 CONCLUSIONS AND FUTURE WORK

In this paper, we examined how to effectively adapt temporal hierarchies. We began by deriving a hierarchical policy gradient and its approximation. We then proposed a new method, HiPPO, that can stably train multiple layers of a hierarchy jointly. The adaptation experiments suggest that we can optimize pretrained skills for downstream environments, and learn emergent skills without any unsupervised pre-training. We also demonstrated that HiPPO with randomized period can learn from scratch on sparse-reward and long-horizon tasks, while outperforming non-hierarchical methods on zero-shot transfer.

# REFERENCES

Jacob Andreas, Dan Klein, and Sergey Levine. Modular Multitask Reinforcement Learning with Policy Sketches. International Conference in Machine Learning, 2017.
Pierre-Luc Bacon, Jean Harb, and Doina Precup. The Option-Critic Architecture. AAAI, pp. 1726-1734, 2017. URL http://arxiv.org/abs/1609.05140.
Christian Daniel, Herke van Hoof, Jan Peters, and Gerhard Neumann. Probabilistic inference for determining options in reinforcement learning. Machine Learning, 104, 2016. doi: 10.1007/s10994-016-5580-x.
Peter Dayan and Geoffrey E. Hinton. Feudal Reinforcement Learning. Advances in Neural Information Processing Systems, pp. 271-278, 1993.
URL http://www.cs.toronto.edu/~fritz/absps/dh93.pdf.
Yan Duan, Xi Chen, John Schulman, and Pieter Abbeel. Benchmarking Deep Reinforcement Learning for Continuous Control. International Conference in Machine Learning, 2016. URL http://arxiv.org/abs/1604.06778.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is All You Need: Learning Skills without a Reward Function. International Conference in Learning Representations, 2019. URL http://arxiv.org/abs/1802.06070.
Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic Neural Networks for Hierarchical Reinforcement Learning. International Conference in Learning Representations, pp. 1-17, 2017a. URL http://arxiv.org/abs/1704.03012.
Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. Reverse Curriculum Generation for Reinforcement Learning. Conference on Robot Learning, pp. 1-16, 2017b. URL http://arxiv.org/abs/1707.05300.
Carlos Florensa, Jonas Degrave, Nicolas Heess, Jost Tobias Springenberg, and Martin Riedmiller. Self-supervised Learning of Image Embedding for Continuous Control. In Workshop on Inference to Control at NeurIPS, 2018. URL http://arxiv.org/abs/1901.00943.
Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta Learning Shared Hierarchies. International Conference in Learning Representations, pp. 1-11, 2018. URL http://arxiv.org/abs/1710.09767.
Mohammad Ghavamzadeh and Sridhar Mahadevan. Hierarchical Policy Gradient Algorithms. International Conference in Machine Learning, 2003.
Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, and Sergey Levine. Latent Space Policies for Hierarchical Reinforcement Learning. International Conference in Machine Learning, 2018. URL http://arxiv.org/abs/1804.02808.
Jean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup. When Waiting is not an Option: Learning Options with a Deliberation Cost. AAAI, 2017. URL http://arxiv.org/abs/1709.04571.
Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, and Martin Riedmiller. Learning an Embedding Space for Transferable Robot Skills. International Conference in Learning Representations, pp. 1-16, 2018.
Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and Transfer of Modulated Locomotor Controllers. 2016. URL https://arxiv.org/abs/1610.05182.
Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, and David Silver. Emergence of Locomotion Behaviours in Rich Environments. 2017. URL http://arxiv.org/abs/1707.02286.
Sham Kakade. A Natural Policy Gradient. Advances in Neural Information Processing Systems, 2002.
Tejas D Kulkarni, Karthik R Narasimhan, Ardavan Saeedi, and Joshua B Tenenbaum. Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. Advances in Neural Information Processing Systems, pp. 1-13, 2016.
Hoang M Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, and Hal Daumé III. Hierarchical Imitation and Reinforcement Learning. International Conference in Machine Learning, 2018.
Andrew Levy, Robert Platt, and Kate Saenko. Hierarchical Actor-Critic. arXiv:1712.00948, 2017. URL http://arxiv.org/abs/1712.00948.
Andrew Levy, Robert Platt, and Kate Saenko. Hierarchical Reinforcement Learning with Hindsight. International Conference in Learning Representations, 2019. URL http://arxiv.org/abs/1805.08180.
Josh Merel, Arun Ahuja, Vu Pham, Saran Tunyasuvunakool, Siqi Liu, Dhruva Tirumala, Nicolas Heess, and Greg Wayne. Hierarchical visuomotor control of humanoids. International Conference in Learning Representations, 2019.
URL http://arxiv.org/abs/1811.09656.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine. Data-Efficient Hierarchical Reinforcement Learning. Advances in Neural Information Processing Systems, 2018.
Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual Reinforcement Learning with Imagined Goals. Advances in Neural Information Processing Systems, 2018.
OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, and Alex Ray. Learning Dexterous In-Hand Manipulation. pp. 1-27, 2018.
Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, and Sergey Levine. MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies. 2019. URL http://arxiv.org/abs/1905.09808.
Jan Peters and Stefan Schaal. Natural Actor-Critic. Neurocomputing, 71(7-9):1180-1190, 2008. doi: 10.1016/j.neucom.2007.11.026.
Doina Precup. Temporal abstraction in reinforcement learning, 2000. URL https://scholarworks.umass.edu/dissertations/AAI9978540.
Pravesh Ranchod, Benjamin Rosman, and George Konidaris. Nonparametric Bayesian Reward Segmentation for Skill Discovery Using Inverse Reinforcement Learning. 2015. doi: 10.1109/IROS.2015.7353414.
John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust Region Policy Optimization. International Conference in Machine Learning, 2015.
John Schulman, Philipp Moritz, Sergey Levine, Michael I Jordan, and Pieter Abbeel.
High-Dimensional Continuous Control Using Generalized Advantage Estimation. International Conference in Learning Representations, pp. 1-14, 2016.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms. 2017. URL http://arxiv.org/abs/1707.06347.
Arjun Sharma, Mohit Sharma, Nicholas Rhinehart, and Kris M Kitani. Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information. International Conference in Learning Representations, 2018. URL http://arxiv.org/abs/1810.01266.
Tianmin Shu, Caiming Xiong, and Richard Socher. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. International Conference in Learning Representations, 2018.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George Van Den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359, 2017. doi: 10.1038/nature24270.
Matthew J. A. Smith, Herke van Hoof, and Joelle Pineau. An inference-based policy gradient method for learning options, 2018. URL https://openreview.net/forum?id=rJIgf7bAZ.
Sungryull Sohn, Junhyuk Oh, and Honglak Lee. Multitask Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies. Advances in Neural Information Processing Systems, 2018.
Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211, 1999. URL http://www-anw.cs.umass.edu/~barto/courses/cs687/Sutton-Precup-Singh-AIJ99.pdf.
Yichuan Tang and Ruslan Salakhutdinov.
Learning Stochastic Feedforward Neural Networks. Advances in Neural Information Processing Systems, 2:530-538, 2013.
Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033, 2012.
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E Turner, Zoubin Ghahramani, and Sergey Levine. The Mirage of Action-Dependent Baselines in Reinforcement Learning. International Conference in Machine Learning, 2018. URL http://arxiv.org/abs/1802.10031.
Alexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals, and Koray Kavukcuoglu. Strategic Attentive Writer for Learning Macro-Actions. Advances in Neural Information Processing Systems, 2016.
Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal Networks for Hierarchical Reinforcement Learning. International Conference in Machine Learning, 2017. URL https://arxiv.org/pdf/1703.01161.pdf.
Théophane Weber, Nicolas Heess, Lars Buesing, and David Silver. Credit Assignment Techniques in Stochastic Computation Graphs. 2019. URL http://arxiv.org/abs/1901.01761.
Ronald J Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8(3-4):229-256, 1992.

# A HYPERPARAMETERS AND ARCHITECTURES

The Block environments used a horizon of 1000 and a batch size of 50,000, while Gather used a batch size of 100,000. Ant Gather has a horizon of 5000, while Snake Gather has a horizon of 8000 due to its larger size. For all experiments, both PPO and HiPPO used a learning rate of $3 \times 10^{-3}$, clipping parameter $\epsilon = 0.1$, 10 gradient updates per iteration, and discount $\gamma = 0.999$. The learning rate, clipping parameter, and number of gradient updates come from the OpenAI Baselines implementation.

HiPPO used $n = 6$ sub-policies.
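Gathered in one place, the settings above read as follows (a sketch assembled from this appendix, not the released training code; the dictionary names are hypothetical):

```python
# Shared optimization settings for PPO and HiPPO (from OpenAI Baselines defaults).
COMMON = dict(
    learning_rate=3e-3,
    clip_eps=0.1,
    grad_updates_per_iter=10,
    discount=0.999,
)

# Per-environment rollout settings.
ENVS = dict(
    block=dict(horizon=1000, batch_size=50_000),
    snake_gather=dict(horizon=8000, batch_size=100_000),
    ant_gather=dict(horizon=5000, batch_size=100_000),
)

# HiPPO-specific settings on top of the shared ones.
HIPPO = dict(n_subpolicies=6, **COMMON)
```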
HiPPO uses a manager network with 2 hidden layers of 32 units, and a skill network with 2 hidden layers of 64 units. In order to have roughly the same number of parameters for each algorithm, flat PPO uses a network with 2 hidden layers of 256 and 64 units, respectively. For HiPPO with randomized period, we resample $p \sim \mathrm{Uniform}\{5,15\}$ every time the manager network outputs a latent, and provide the number of timesteps until the next latent selection as an input to both the manager and skill networks. The single baselines and skill-dependent baselines used an MLP with 2 hidden layers of 32 units to fit the value function. The skill-dependent baseline receives, in addition to the full observation, the active latent code and the time remaining until the next skill sampling. All runs used five random seeds.

# B ROBOT AGENT DESCRIPTION

Hopper is a 3-link robot with a 14-dimensional observation space and a 3-dimensional action space. Half-Cheetah has a 20-dimensional observation space and a 6-dimensional action space. We evaluate both of these agents on a sparse block-hopping task. In addition to observing their own joint angles and positions, they observe the height and length of the next wall, the x-position of the next wall, and their distance to it. We also provide the same observations for the previous wall, which the agent can still interact with.

Snake is a 5-link robot with a 17-dimensional observation space and a 4-dimensional action space. Ant is a quadrupedal robot with a 27-dimensional observation space and an 8-dimensional action space. Both Ant and Snake can move and rotate in all directions, and Ant faces the added challenge of avoiding falling over irrecoverably. In the Gather environment, agents also receive 2 sets of 10-dimensional lidar observations, which correspond to separate apple and bomb observations.
Each lidar observation gives the distance to the nearest apple or bomb, respectively, in each $36^{\circ}$ bin. All environments are simulated with the MuJoCo physics engine (Todorov et al., 2012).

# C PROOFS

Lemma 1. If the skills are sufficiently differentiated, then the latent variable can be treated as part of the observation to compute the gradient of the trajectory probability. Concretely, if $\pi_{\theta_h}(z|s)$ and $\pi_{\theta_l}(a|s,z)$ are Lipschitz in their parameters, and $0 < \pi_{\theta_l}(a_t|s_t,z_j) < \epsilon$ for all $z_j \neq z_{kp}$, then

$$
\nabla_{\theta} \log P(\tau) = \sum_{k=0}^{H/p} \nabla_{\theta} \log \pi_{\theta_{h}}\left(z_{kp} \mid s_{kp}\right) + \sum_{t=0}^{H} \nabla_{\theta} \log \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) + \mathcal{O}(n H \epsilon^{p-1}) \tag{5}
$$

Proof. From the point of view of the MDP, a trajectory is a sequence $\tau = (s_0, a_0, s_1, a_1, \ldots, a_{H-1}, s_H)$. Assume we use the hierarchical policy introduced above, with a higher-level policy modeled as a parameterized discrete distribution with $n$ possible outcomes, $\pi_{\theta_h}(z|s) = \text{Categorical}_{\theta_h}(n)$.
We can expand $P(\tau)$ into the product of policy and environment dynamics terms, with $z_j$ denoting the $j$th possible value out of the $n$ choices,

$$
P(\tau) = \Bigg(\prod_{k=0}^{H/p} \Big[\sum_{j=1}^{n} \pi_{\theta_{h}}(z_{j}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j})\Big]\Bigg) \Bigg[P(s_{0}) \prod_{t=0}^{H-1} P(s_{t+1}|s_{t}, a_{t})\Bigg]
$$

Taking the gradient of $\log P(\tau)$ with respect to the policy parameters $\theta = [\theta_h, \theta_l]$, the dynamics terms disappear, leaving:

$$
\begin{aligned}
\nabla_{\theta} \log P(\tau) &= \sum_{k=0}^{H/p} \nabla_{\theta} \log \left(\sum_{j=1}^{n} \pi_{\theta_{h}}(z_{j}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j})\right) \\
&= \sum_{k=0}^{H/p} \frac{1}{\sum_{j=1}^{n} \pi_{\theta_{h}}(z_{j}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j})} \sum_{j=1}^{n} \nabla_{\theta}\left(\pi_{\theta_{h}}(z_{j}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j})\right)
\end{aligned}
$$

The sum over possible values of $z$ prevents the logarithm from splitting the product over the $p$-step sub-trajectories. This term is problematic, as the product quickly approaches 0 as $p$ increases, and it suffers from considerable numerical instabilities. Instead, we want to approximate this sum of products by a single one of its terms, which can then be decomposed into a sum of logs. For this we study each term in the sum: the gradient of a sub-trajectory probability under a specific latent, $\nabla_{\theta}\Big(\pi_{\theta_h}(z_j|s_{kp})\prod_{t=kp}^{(k+1)p-1}\pi_{\theta_l}(a_t|s_t,z_j)\Big)$.
Now we can use the assumption that the skills are easy to distinguish, $0 < \pi_{\theta_l}(a_t|s_t,z_j) < \epsilon$ for all $z_j \neq z_{kp}$. Therefore, the probability of the sub-trajectory under a latent other than the one that was originally sampled, $z_{j}\neq z_{kp}$, is upper bounded by $\epsilon^p$. Taking the gradient, applying the product rule, and using the Lipschitz continuity of the policies, we obtain that for all $z_{j}\neq z_{kp}$

$$
\begin{aligned}
\nabla_{\theta}\Big(\pi_{\theta_{h}}(z_{j}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j})\Big) &= \nabla_{\theta} \pi_{\theta_{h}}(z_{j}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j}) \\
&\quad + \sum_{t=kp}^{(k+1)p-1} \pi_{\theta_{h}}(z_{j}|s_{kp}) \bigl(\nabla_{\theta} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{j})\bigr) \prod_{\substack{t'=kp \\ t'\neq t}}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t'}|s_{t'}, z_{j}) \\
&= \mathcal{O}(p\epsilon^{p-1})
\end{aligned}
$$

Thus, we can replace, across the board, the summation over latents by the single term corresponding to the latent that was actually sampled at that time.
$$
\begin{aligned}
\nabla_{\theta} \log P(\tau) &= \sum_{k=0}^{H/p} \frac{1}{\pi_{\theta_{h}}(z_{kp}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{kp})} \nabla_{\theta}\Big(\pi_{\theta_{h}}(z_{kp}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{kp})\Big) + \frac{nH}{p}\mathcal{O}(p\epsilon^{p-1}) \\
&= \sum_{k=0}^{H/p} \nabla_{\theta} \log \left(\pi_{\theta_{h}}(z_{kp}|s_{kp}) \prod_{t=kp}^{(k+1)p-1} \pi_{\theta_{l}}(a_{t}|s_{t}, z_{kp})\right) + \mathcal{O}(n H \epsilon^{p-1}) \\
&= \sum_{k=0}^{H/p} \nabla_{\theta} \log \pi_{\theta_{h}}(z_{kp}|s_{kp}) + \sum_{t=0}^{H} \nabla_{\theta} \log \pi_{\theta_{l}}(a_{t}|s_{t}, z_{kp}) + \mathcal{O}(n H \epsilon^{p-1})
\end{aligned}
$$

Interestingly, this is exactly $\nabla_{\theta} \log P(s_0, z_0, a_0, s_1, \ldots)$. In other words, it is the gradient of the log-probability of the trajectory where the trajectory now includes the variables $z$ as if they were observed.

![](images/0f0685675e0bc1c6fb0ee47f5013b3d68b1af691f526769a79f1781fbdbd917d.jpg)

Lemma 2. For any functions $b_{h}:\mathcal{S}\to \mathbb{R}$ and $b_{l}:\mathcal{S}\times \mathcal{Z}\rightarrow \mathbb{R}$ we have:

$$
\begin{aligned}
&\mathbb{E}_{\tau}\Big[\sum_{k=0}^{H/p} \nabla_{\theta} \log \pi_{\theta_{h}}(z_{kp} \mid s_{kp})\, b_{h}(s_{kp})\Big] = 0 \\
&\mathbb{E}_{\tau}\Big[\sum_{t=0}^{H} \nabla_{\theta} \log \pi_{\theta_{l}}(a_{t} \mid s_{t}, z_{kp})\, b_{l}(s_{t}, z_{kp})\Big] = 0
\end{aligned}
$$

Proof.
We can use the tower property as well as the fact that the interior expression only depends on $s_{kp}$ and $z_{kp}$ : + +$$ +\begin{array}{l} \mathbb {E} _ {\tau} [ \sum_ {k = 0} ^ {H / p} \nabla_ {\theta} \log P (z _ {k p} | s _ {k p}) b (s _ {k p}) ] = \sum_ {k = 0} ^ {H / p} \mathbb {E} _ {s _ {k p}, z _ {k p}} [ \mathbb {E} _ {\tau \backslash s _ {k p}, z _ {k p}} [ \nabla_ {\theta} \log P (z _ {k p} | s _ {k p}) b (s _ {k p}) ] ] \\ = \sum_ {k = 0} ^ {H / p} \mathbb {E} _ {s _ {k p}, z _ {k p}} [ \nabla_ {\theta} \log P (z _ {k p} | s _ {k p}) b (s _ {k p}) ] \\ \end{array} +$$ + +Then, we can write out the definition of the expectation and undo the gradient-log trick to prove that the baseline is unbiased. + +$$ +\begin{array}{l} \mathbb {E} _ {\tau} \left[ \sum_ {k = 0} ^ {H / p} \nabla_ {\theta} \log \pi_ {\theta_ {h}} \left(z _ {k p} \mid s _ {k p}\right) b \left(s _ {k p}\right) \right] = \sum_ {k = 0} ^ {H / p} \int_ {\left(s _ {k p}, z _ {k p}\right)} P \left(s _ {k p}, z _ {k p}\right) \nabla_ {\theta} \log \pi_ {\theta_ {h}} \left(z _ {k p} \mid s _ {k p}\right) b \left(s _ {k p}\right) d z _ {k p} d s _ {k p} \\ = \sum_ {k = 0} ^ {H / p} \int_ {s _ {k p}} P (s _ {k p}) b (s _ {k p}) \int_ {z _ {k p}} \pi_ {\theta_ {h}} (z _ {k p} | s _ {k p}) \nabla_ {\theta} \log \pi_ {\theta_ {h}} (z _ {k p} | s _ {k p}) d z _ {k p} d s _ {k p} \\ = \sum_ {k = 0} ^ {H / p} \int_ {s _ {k p}} P (s _ {k p}) b (s _ {k p}) \int_ {z _ {k p}} \pi_ {\theta_ {h}} (z _ {k p} | s _ {k p}) \frac {1}{\pi_ {\theta_ {h}} (z _ {k p} | s _ {k p})} \nabla_ {\theta} \pi_ {\theta_ {h}} (z _ {k p} | s _ {k p}) d z _ {k p} d s _ {k p} \\ = \sum_ {k = 0} ^ {H / p} \int_ {s _ {k p}} P (s _ {k p}) b (s _ {k p}) \nabla_ {\theta} \int_ {z _ {k p}} \pi_ {\theta_ {h}} (z _ {k p} | s _ {k p}) d z _ {k p} d s _ {k p} \\ = \sum_ {k = 0} ^ {H / p} \int_ {s _ {k p}} P \left(s _ {k p}\right) b \left(s _ {k p}\right) \nabla_ {\theta} 1 d s _ {k p} \\ = 0 \\ \end{array} +$$ + 
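This identity is easy to verify numerically for a small softmax manager; a sketch (all values hypothetical):

```python
import numpy as np

n = 5                                   # number of latents
rng = np.random.default_rng(0)
theta = rng.normal(size=n)              # per-latent logits
probs = np.exp(theta) / np.exp(theta).sum()

# For a softmax, grad_theta log pi(z) = onehot(z) - probs, so the
# expectation over z of grad log pi(z) times any baseline value b is
#   sum_z pi(z) * (onehot(z) - probs) * b = (probs - probs) * b = 0.
b = 3.7                                 # arbitrary state-dependent baseline
expected = sum(probs[z] * (np.eye(n)[z] - probs) * b for z in range(n))
# `expected` is the zero vector up to floating-point error
```

The same cancellation is what the integral above expresses in general: the inner integral of $\pi \nabla \log \pi$ is $\nabla 1 = 0$.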
![](images/7ceffe540c4177d2a3158a7a92ee210f0b4e8ef131f0459f8ac358dceeff04ee.jpg)

Subtracting a state- and subpolicy-dependent baseline from the second term is also unbiased, i.e.

$$
\mathbb{E}_{\tau}\left[\sum_{t=0}^{H} \nabla_{\theta} \log \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) b\left(s_{t}, z_{kp}\right)\right] = 0
$$

We follow the same strategy to prove this second equality: apply the tower property, express the expectation as an integral, and undo the gradient-log trick.

$$
\begin{aligned}
&\mathbb{E}_{\tau}\left[\sum_{t=0}^{H} \nabla_{\theta} \log \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) b\left(s_{t}, z_{kp}\right)\right] \\
&= \sum_{t=0}^{H} \mathbb{E}_{s_{t}, a_{t}, z_{kp}}\left[\mathbb{E}_{\tau \backslash s_{t}, a_{t}, z_{kp}}\left[\nabla_{\theta} \log \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) b\left(s_{t}, z_{kp}\right)\right]\right] \\
&= \sum_{t=0}^{H} \mathbb{E}_{s_{t}, a_{t}, z_{kp}}\left[\nabla_{\theta} \log \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) b\left(s_{t}, z_{kp}\right)\right] \\
&= \sum_{t=0}^{H} \int_{\left(s_{t}, z_{kp}\right)} P\left(s_{t}, z_{kp}\right) b\left(s_{t}, z_{kp}\right) \int_{a_{t}} \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) \nabla_{\theta} \log \pi_{\theta_{l}}\left(a_{t} \mid s_{t}, z_{kp}\right) d a_{t}\, d z_{kp}\, d s_{t} \\
&= \sum_{t=0}^{H} \int_{\left(s_{t}, z_{kp}\right)} P\left(s_{t}, z_{kp}\right) b\left(s_{t}, z_{kp}\right) \nabla_{\theta} 1\, d z_{kp}\, d s_{t} \\
&= 0
\end{aligned}
$$

![](images/29a8de46a1a13c72e71ef2c0a1ee1ee7aad9d4dc8f0bbe3244122fa12c118240.jpg)
Figure 7: HIRO performance on Ant Gather with and without access to the ground truth $(x, y)$, which it needs to communicate useful goals.
+ +# D HIRO SENSITIVITY TO OBSERVATION-SPACE + +In this section we provide a more detailed explanation of why HIRO (Nachum et al., 2018) performs poorly under our environments. As explained in our related work section, HIRO belongs to the general category of algorithms that train goal-reaching policies as lower levels of the hierarchy (Vezhnevets et al., 2017; Levy et al., 2017). These methods rely on having a goal-space that is meaningful for the task at hand. For example, in navigation tasks they require having access to the $(x,y)$ position of the agent such that deltas in that space can be given as meaningful goals to move in the environment. Unfortunately, in many cases the only readily available information (if there's no GPS signal or other positioning system installed) are raw sensory inputs, like cameras or the LIDAR sensors we mimic in our environments. In such cases, our method still performs well because it doesn't rely on the goal-reaching extra supervision that is leveraged (and detrimental in this case) in HIRO and similar methods. In Figure 7, we show that knowing the ground truth location is critical for its success. We have reproduced the HIRO results in Fig. 7 using the published codebase, so we are convinced that our results showcase a failure mode of HIRO. + +# E HYPERPARAMETER SENSITIVITY PLOTS + +![](images/21bd1eb45a806666f518175cb930dc9e23093ee5f8583dc07060b2ecf4dda6b6.jpg) +Figure 8: Sensitivity of HiPPO to variation in the time-commitment. + +![](images/ba2a1e30228d16da83f12b4240047b7d46773ef33cb0375345dc7deff0651c95.jpg) + +![](images/8a50d8d2eb03fb965b0310a137a4359944c16493cf8fca4182555876111ba434.jpg) +Figure 9: Sensitivity of HiPPO to variation in the number of skills. 
# SYMPLECTIC ODE-NET: LEARNING HAMILTONIAN DYNAMICS WITH CONTROL

Yaofeng Desmond Zhong*

Princeton University

y.zhong@princeton.edu

Biswadip Dey

Siemens Corporate Technology

biswadip.dey@siemens.com

Amit Chakraborty

Siemens Corporate Technology

amit.chakraborty@siemens.com

# ABSTRACT

In this paper, we introduce Symplectic$^1$ ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system, given by an ordinary differential equation (ODE), from observed state trajectories.
To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner. In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way, which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy. In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum. This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies. + +# 1 INTRODUCTION + +In recent years, deep neural networks (Goodfellow et al., 2016) have become very accurate and widely used in many application domains, such as image recognition (He et al., 2016), language comprehension (Devlin et al., 2019), and sequential decision making (Silver et al., 2017). To learn underlying patterns from data and enable generalization beyond the training set, the learning approach incorporates appropriate inductive bias (Haussler, 1988; Baxter, 2000) by promoting representations which are simple in some sense. It typically manifests itself via a set of assumptions, which in turn can guide a learning algorithm to pick one hypothesis over another. The success in predicting an outcome for previously unseen data then depends on how well the inductive bias captures the ground reality. Inductive bias can be introduced as the prior in a Bayesian model, or via the choice of computation graphs in a neural network. 
In a variety of settings, especially in physical systems, wherein laws of physics are primarily responsible for shaping the outcome, generalization in neural networks can be improved by leveraging underlying physics for designing the computation graphs. Here, by leveraging a generalization of the Hamiltonian dynamics, we develop a learning framework which exploits the underlying physics in the associated computation graph. Our results show that incorporation of such physics-based inductive bias offers insight about relevant physical properties of the system, such as inertia, potential energy, and total conserved energy. These insights, in turn, enable a more accurate prediction of future behavior and improvement in out-of-sample behavior. Furthermore, learning a physically-consistent model of the underlying dynamics can subsequently enable usage of model-based controllers which can provide performance guarantees for complex, nonlinear systems. In particular, insight about kinetic and potential energy of a physical system can be leveraged to synthesize appropriate control strategies, such as the method of controlled Lagrangian (Bloch et al., 2001) and interconnection & damping assignment (Ortega et al., 2002), which can reshape the closed-loop energy landscape to achieve a broad range of control objectives (regulation, tracking, etc.).

# RELATED WORK

Physics-based Priors for Learning in Dynamical Systems: The last few years have witnessed a significant interest in incorporating physics-based priors into deep learning frameworks. Such approaches, in contrast to more rigid parametric system identification techniques (Söderström & Stoica, 1988), use neural networks to approximate the state-transition dynamics and therefore are more expressive. Sanchez-Gonzalez et al. (2018), by representing the causal relationships in a physical system as a directed graph, use a recurrent graph network to infer latent space dynamics of robotic systems. Lutter et al.
(2019) and Gupta et al. (2019) leverage Lagrangian mechanics to learn the dynamics of kinematic structures from time-series data of position, velocity, and acceleration. A more recent (concurrent) work by Greydanus et al. (2019) uses Hamiltonian mechanics to learn the dynamics of autonomous, energy-conserved mechanical systems from time-series data of position, momentum, and their derivatives. A key difference between these approaches and the proposed one is that our framework does not require any information about higher-order derivatives (e.g., acceleration) and can incorporate external control into the Hamiltonian formalism.

Neural Networks for Dynamics and Control: Inferring underlying dynamics from time-series data plays a critical role in controlling closed-loop response of dynamical systems, such as robotic manipulators (Lillicrap et al., 2015) and building HVAC systems (Wei et al., 2017). Although the use of neural networks towards identification and control of dynamical systems dates back more than three decades (Narendra & Parthasarathy, 1990), recent advances in deep neural networks have led to renewed interest in this domain. Watter et al. (2015) learn dynamics with control from high-dimensional observations (raw image sequences) using a variational approach and synthesize an iterative LQR controller to control physical systems by imposing a locally linear constraint. Karl et al. (2016) and Krishnan et al. (2017) adopt a variational approach and use recurrent architectures to learn state-space models from noisy observations. SE3-Nets (Byravan & Fox, 2017) learn $SE(3)$ transformation of rigid bodies from point cloud data. Ayed et al. (2019) use partial information about the system state to learn a nonlinear state-space model. However, this body of work, while attempting to learn state-space models, does not take physics-based priors into consideration.

# CONTRIBUTION

The main contribution of this work is two-fold.
First, we introduce a learning framework called Symplectic ODE-Net (SymODEN) which encodes a generalization of the Hamiltonian dynamics. This generalization, by adding an external control term to the standard Hamiltonian dynamics, allows us to learn the system dynamics which conforms to Hamiltonian dynamics with control. With the learned structured dynamics, we are able to synthesize controllers to control the system to track a reference configuration. Moreover, by encoding the structure, we can achieve better predictions with smaller network sizes. Second, we take one step forward in combining the physics-based prior and the data-driven approach. Previous approaches (Lutter et al., 2019; Greydanus et al., 2019) require data in the form of generalized coordinates and their derivatives up to the second order. However, a large number of physical systems accommodate generalized coordinates which are non-Euclidean (e.g., angles), and such angle data is often obtained in the embedded form, i.e., $(\cos q,\sin q)$ instead of the coordinate $(q)$ itself. The underlying reason is that an angular coordinate lies on $\mathbb{S}^1$ instead of $\mathbb{R}^1$ . In contrast to previous approaches which do not address this aspect, SymODEN has been designed to work with angle data in the embedded form. Additionally, we leverage differentiable ODE solvers to avoid the need for estimating second-order derivatives of generalized coordinates. Code for the SymODEN framework and experiments is available at https://github.com/d-biswa/Symplectic-ODENet. + +# 2 PRELIMINARY CONCEPTS + +# 2.1 HAMILTONIAN DYNAMICS + +Lagrangian dynamics and Hamiltonian dynamics are both reformulations of Newtonian dynamics. They provide novel insights into the laws of mechanics. In these formulations, the configuration of a system is described by its generalized coordinates. Over time, the configuration point of the system moves in the configuration space, tracing out a trajectory. 
Lagrangian dynamics describes the evolution of this trajectory, i.e., the equations of motion, in the configuration space. Hamiltonian dynamics, however, tracks the change of system states in the phase space, i.e. the product space of generalized coordinates $\mathbf{q} = (q_{1}, q_{2}, \dots, q_{n})$ and generalized momenta $\mathbf{p} = (p_{1}, p_{2}, \dots, p_{n})$ . In other words, Hamiltonian dynamics treats $\mathbf{q}$ and $\mathbf{p}$ on an equal footing. This not only provides symmetric equations of motion but also leads to a whole new approach to classical mechanics (Goldstein et al., 2002). Hamiltonian dynamics is also widely used in statistical and quantum mechanics. + +In Hamiltonian dynamics, the time-evolution of a system is described by the Hamiltonian $H(\mathbf{q},\mathbf{p})$ , a scalar function of generalized coordinates and momenta. Moreover, in almost all physical systems, the Hamiltonian is the same as the total energy and hence can be expressed as + +$$ +H (\mathbf {q}, \mathbf {p}) = \frac {1}{2} \mathbf {p} ^ {T} \mathbf {M} ^ {- 1} (\mathbf {q}) \mathbf {p} + V (\mathbf {q}), \tag {1} +$$ + +where the mass matrix $\mathbf{M}(\mathbf{q})$ is symmetric positive definite and $V(\mathbf{q})$ represents the potential energy of the system. Correspondingly, the time-evolution of the system is governed by + +$$ +\dot {\mathbf {q}} = \frac {\partial H}{\partial \mathbf {p}} \quad \dot {\mathbf {p}} = - \frac {\partial H}{\partial \mathbf {q}}, \tag {2} +$$ + +where we have dropped explicit dependence on $\mathbf{q}$ and $\mathbf{p}$ for brevity of notation. Moreover, since + +$$ +\dot {H} = \left(\frac {\partial H}{\partial \mathbf {q}}\right) ^ {T} \dot {\mathbf {q}} + \left(\frac {\partial H}{\partial \mathbf {p}}\right) ^ {T} \dot {\mathbf {p}} = 0, \tag {3} +$$ + +the total energy is conserved along a trajectory of the system. 
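Equations (2) and (3) can be checked concretely on a pendulum with unit mass and length, $H = p^2/2 + (1 - \cos q)$ (an illustrative system, not one from the paper): integrating along the symplectic gradient with a standard RK4 step leaves $H$ essentially unchanged.

```python
import numpy as np

def hamiltonian(q, p):
    # Pendulum with unit mass and length: H = p^2/2 + (1 - cos q)
    return 0.5 * p ** 2 + (1.0 - np.cos(q))

def symplectic_gradient(q, p):
    # (dq/dt, dp/dt) = (dH/dp, -dH/dq), Equation (2)
    return p, -np.sin(q)

def rk4_step(q, p, dt):
    # One classical RK4 step of the Hamiltonian vector field
    k1q, k1p = symplectic_gradient(q, p)
    k2q, k2p = symplectic_gradient(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    k3q, k3p = symplectic_gradient(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
    k4q, k4p = symplectic_gradient(q + dt * k3q, p + dt * k3p)
    q_next = q + dt / 6.0 * (k1q + 2 * k2q + 2 * k3q + k4q)
    p_next = p + dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return q_next, p_next

q, p = 1.0, 0.0
h0 = hamiltonian(q, p)
for _ in range(10_000):              # integrate to t = 10
    q, p = rk4_step(q, p, dt=1e-3)
energy_drift = abs(hamiltonian(q, p) - h0)   # Equation (3): should stay ~0
```

The drift stays many orders of magnitude below the energy scale of the trajectory, which is the conservation property that the paper's inductive bias is designed to encode.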
The RHS of Equation (2) is called the symplectic gradient (Rowe et al., 1980) of $H$ , and Equation (3) shows that moving along the symplectic gradient keeps the Hamiltonian constant. + +In this work, we consider a generalization of the Hamiltonian dynamics which provides a means to incorporate external control (u), such as force and torque. As external control is usually affine and only influences changes in the generalized momenta, we can express this generalization as + +$$ +\left[ \begin{array}{l} \dot {\mathbf {q}} \\ \dot {\mathbf {p}} \end{array} \right] = \left[ \begin{array}{c} \frac {\partial H}{\partial \mathbf {p}} \\ - \frac {\partial H}{\partial \mathbf {q}} \end{array} \right] + \left[ \begin{array}{c} \mathbf {0} \\ \mathbf {g} (\mathbf {q}) \end{array} \right] \mathbf {u}, \tag {4} +$$ + +where the input matrix $\mathbf{g}(\mathbf{q})$ is typically assumed to have full column rank. For $\mathbf{u} = \mathbf{0}$ , the generalized dynamics reduces to the classical Hamiltonian dynamics (2) and the total energy is conserved; however, when $\mathbf{u} \neq \mathbf{0}$ , the system has a dissipation-free energy exchange with the environment. + +# 2.2 CONTROL VIA ENERGY SHAPING + +Once we have learned the dynamics of a system, the learned model can be used to synthesize a controller for driving the system to a reference configuration $\mathbf{q}^{\star}$ . As the proposed approach offers insight about the energy associated with a system, it is a natural choice to exploit this information for synthesizing controllers via energy shaping (Ortega et al., 2001). As energy is a fundamental aspect of physical systems, reshaping the associated energy landscape enables us to specify a broad range of control objectives and synthesize nonlinear controllers with provable performance guarantees. 
If $\mathrm{rank}(\mathbf{g}(\mathbf{q})) = \mathrm{dim}(\mathbf{q})$ , the system is fully-actuated and we have control over any dimension of "acceleration" in $\dot{\mathbf{p}}$ . For such fully-actuated systems, a controller $\mathbf{u}(\mathbf{q},\mathbf{p}) = \beta (\mathbf{q}) + \mathbf{v}(\mathbf{p})$ can be synthesized via potential energy shaping $\beta (\mathbf{q})$ and damping injection $\mathbf{v}(\mathbf{p})$ . For completeness, we restate this procedure (Ortega et al., 2001) using our notation. As the name suggests, the goal of potential energy shaping is to synthesize $\beta (\mathbf{q})$ such that the closed-loop system behaves as if its time-evolution is governed by a desired Hamiltonian $H_{d}$ . With this, we have

$$
\left[ \begin{array}{l} \dot {\mathbf {q}} \\ \dot {\mathbf {p}} \end{array} \right] = \left[ \begin{array}{c} \frac {\partial H}{\partial \mathbf {p}} \\ - \frac {\partial H}{\partial \mathbf {q}} \end{array} \right] + \left[ \begin{array}{l} \mathbf {0} \\ \mathbf {g} (\mathbf {q}) \end{array} \right] \beta (\mathbf {q}) = \left[ \begin{array}{c} \frac {\partial H _ {d}}{\partial \mathbf {p}} \\ - \frac {\partial H _ {d}}{\partial \mathbf {q}} \end{array} \right], \tag {5}
$$

where the difference between the desired Hamiltonian and the original one lies in their potential energy term, i.e.

$$
H _ {d} (\mathbf {q}, \mathbf {p}) = \frac {1}{2} \mathbf {p} ^ {T} \mathbf {M} ^ {- 1} (\mathbf {q}) \mathbf {p} + V _ {d} (\mathbf {q}). \tag {6}
$$

In other words, $\beta (\mathbf{q})$ shapes the potential energy such that the desired Hamiltonian $H_{d}(\mathbf{q},\mathbf{p})$ has a minimum at $(\mathbf{q}^{\star},\mathbf{0})$ . Then, by substituting Equation (1) and Equation (6) into Equation (5), we get

$$
\boldsymbol {\beta} (\mathbf {q}) = \mathbf {g} ^ {T} \left(\mathbf {g} \mathbf {g} ^ {T}\right) ^ {- 1} \left(\frac {\partial V}{\partial \mathbf {q}} - \frac {\partial V _ {d}}{\partial \mathbf {q}}\right).
\tag {7}
$$

Thus, with potential energy shaping, we ensure that the system has the lowest energy at the desired reference configuration. Furthermore, to ensure that trajectories actually converge to this configuration, we add an additional damping term$^2$ given by

$$
\mathbf {v} (\mathbf {p}) = - \mathbf {g} ^ {T} \left(\mathbf {g} \mathbf {g} ^ {T}\right) ^ {- 1} \left(\mathbf {K} _ {d} \mathbf {p}\right). \tag {8}
$$

However, for underactuated systems, potential energy shaping alone cannot drive the system to a desired configuration. We also need kinetic energy shaping for this purpose (Chang et al., 2002).

Remark: If the desired potential energy is chosen to be a quadratic of the form

$$
V _ {d} (\mathbf {q}) = \frac {1}{2} \left(\mathbf {q} - \mathbf {q} ^ {\star}\right) ^ {T} \mathbf {K} _ {p} \left(\mathbf {q} - \mathbf {q} ^ {\star}\right), \tag {9}
$$

the external forcing term can be expressed as

$$
\mathbf {u} = \mathbf {g} ^ {T} \left(\mathbf {g} \mathbf {g} ^ {T}\right) ^ {- 1} \left(\frac {\partial V}{\partial \mathbf {q}} - \mathbf {K} _ {p} (\mathbf {q} - \mathbf {q} ^ {\star}) - \mathbf {K} _ {d} \mathbf {p}\right). \tag {10}
$$

This can be interpreted as a PD controller with an additional energy compensation term.

# 3 SYMPLECTIC ODE-NET

In this section, we introduce the network architecture of Symplectic ODE-Net. In Subsection 3.1, we show how to learn an ordinary differential equation with a constant control term. In Subsection 3.2, we assume we have access to generalized coordinate and momentum data and derive the network architecture. In Subsection 3.3, we take one step further to propose a data-driven approach to deal with data of embedded angle coordinates. In Subsection 3.4, we put together the line of reasoning introduced in the previous two subsections to propose SymODEN for learning dynamics on the hybrid space $\mathbb{R}^n\times \mathbb{T}^m$ .
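Before turning to the architecture, the controller of Equation (10) can be exercised on a fully actuated pendulum with $\mathbf{g}(\mathbf{q}) = 1$ and $V(q) = 1 - \cos q$ (an illustrative system; the gains $K_p$ and $K_d$ below are arbitrary choices, not values from the paper):

```python
import numpy as np

# Fully actuated pendulum: H = p^2/2 + (1 - cos q), g(q) = 1, so Equation (10)
# reduces to u = dV/dq - Kp (q - q_star) - Kd p.
K_P, K_D = 5.0, 3.0                            # illustrative gains

def closed_loop(q, p, q_star):
    dV_dq = np.sin(q)                          # dV/dq for V(q) = 1 - cos q
    u = dV_dq - K_P * (q - q_star) - K_D * p   # energy compensation + PD terms
    return p, -dV_dq + u                       # (q_dot, p_dot), Equation (4)

q, p, q_star, dt = 0.0, 0.0, 2.0, 1e-3
for _ in range(20_000):                        # 20 s of explicit Euler integration
    dq, dp = closed_loop(q, p, q_star)
    q, p = q + dt * dq, p + dt * dp
```

Because the energy-compensation term cancels gravity exactly, the closed loop is linear in the error $(q - q^{\star}, p)$ and converges to the desired configuration $(q^{\star}, 0)$.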
+ +# 3.1 TRAINING NEURAL ODE WITH CONSTANT FORCING + +Now we focus on the problem of learning the ordinary differential equation (ODE) from time series data. Consider an ODE: $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$ . Assume we don't know the analytical expression of the right hand side (RHS) and we approximate it with a neural network. If we have time series data $\mathbf{X} = (\mathbf{x}_{t_0}, \mathbf{x}_{t_1}, \dots, \mathbf{x}_{t_n})$ , how could we learn $\mathbf{f}(\mathbf{x})$ from the data? + +Chen et al. (2018) introduced Neural ODE, differentiable ODE solvers with O(1)-memory backpropagation. With Neural ODE, we make predictions by approximating the RHS function using a neural network $\mathbf{f}_{\theta}$ and feed it into an ODE solver + +$$ +\hat {\mathbf {x}} _ {t _ {1}}, \hat {\mathbf {x}} _ {t _ {2}}, \dots , \hat {\mathbf {x}} _ {t _ {n}} = \operatorname {O D E S o l v e} \left(\mathbf {x} _ {t _ {0}}, \mathbf {f} _ {\theta}, t _ {1}, t _ {2}, \dots , t _ {n}\right) +$$ + +We can then construct the loss function $L = \| \mathbf{X} - \hat{\mathbf{X}}\| _2^2$ and update the weights $\theta$ by backpropagating through the ODE solver. + +In theory, we can learn $\mathbf{f}_{\theta}$ in this way. In practice, however, the neural net is hard to train if $n$ is large. If we have a bad initial estimate of the $\mathbf{f}_{\theta}$ , the prediction error would in general be large. Although $|\mathbf{x}_{t_1} - \hat{\mathbf{x}}_{t_1}|$ might be small, $\hat{\mathbf{x}}_{t_N}$ would be far from $\mathbf{x}_{t_N}$ as error accumulates, which makes the neural network hard to train. In fact, the prediction error of $\hat{\mathbf{x}}_{t_N}$ is not as important as $\hat{\mathbf{x}}_{t_1}$ . In other words, we should weight data points in a short time horizon more than the rest of the data points. 
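The loop just described (predict with an ODE solver, penalize $L = \|\mathbf{X} - \hat{\mathbf{X}}\|_2^2$, update $\theta$) can be sketched on a scalar toy problem. This is not the authors' implementation: the "network" is a single coefficient $f_\theta(x) = \theta x$ fit to a short trajectory of $\dot{x} = -2x$, and a finite-difference gradient stands in for backpropagation through the solver.

```python
import numpy as np

def f_theta(x, theta):
    # Stand-in for the neural approximation of the unknown RHS.
    return theta * x

def ode_solve(x0, theta, dt, steps):
    # Classical RK4 integration, standing in for a differentiable ODE solver.
    xs, x = [], x0
    for _ in range(steps):
        k1 = f_theta(x, theta)
        k2 = f_theta(x + 0.5 * dt * k1, theta)
        k3 = f_theta(x + 0.5 * dt * k2, theta)
        k4 = f_theta(x + dt * k3, theta)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(x)
    return np.array(xs)

dt, steps, x0 = 0.1, 5, 1.0
data = x0 * np.exp(-2.0 * dt * np.arange(1, steps + 1))  # ground truth: dx/dt = -2x

def loss(theta):
    # L = ||X - X_hat||_2^2 over a short prediction horizon
    return float(np.sum((ode_solve(x0, theta, dt, steps) - data) ** 2))

theta, lr, eps = 0.0, 0.2, 1e-6
for _ in range(300):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)  # numerical gradient
    theta -= lr * grad        # theta approaches the true coefficient, -2
```

Even in this toy setting, the fitted coefficient recovers the true dynamics; the short five-step horizon used here foreshadows the windowed training discussed next.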
In order to address this and better utilize the data, we introduce the time horizon $\tau$ as a hyperparameter and predict $\mathbf{x}_{t_{i + 1}},\mathbf{x}_{t_{i + 2}},\dots,\mathbf{x}_{t_{i + \tau}}$ from initial condition $\mathbf{x}_{t_i}$ , where $i = 0,\ldots ,n - \tau$ .

One challenge toward leveraging Neural ODE to learn state-space models is the incorporation of the control term into the dynamics. Equation (4) has the form $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u})$ with $\mathbf{x} = (\mathbf{q},\mathbf{p})$ . A function of this form cannot be directly fed into Neural ODE since the domain and range of $\mathbf{f}$ have different dimensions. In general, if our data consist of trajectories of $(\mathbf{x},\mathbf{u})_{t_0,\dots,t_n}$ where $\mathbf{u}$ remains the same in a trajectory, we can leverage the augmented dynamics

$$
\left[ \begin{array}{c} \dot {\mathbf {x}} \\ \dot {\mathbf {u}} \end{array} \right] = \left[ \begin{array}{c} \mathbf {f} _ {\theta} (\mathbf {x}, \mathbf {u}) \\ \mathbf {0} \end{array} \right] = \tilde {\mathbf {f}} _ {\theta} (\mathbf {x}, \mathbf {u}). \tag {11}
$$

With Equation (11), we can match the input and output dimension of $\tilde{\mathbf{f}}_{\theta}$ , which enables us to feed it into Neural ODE. The idea here is to use different constant external forcing to get the system responses and use those responses to train the model. With a trained model, we can apply a time-varying $\mathbf{u}$ to the dynamics $\dot{\mathbf{x}} = \mathbf{f}_{\theta}(\mathbf{x},\mathbf{u})$ and generate estimated trajectories. When we synthesize the controller, $\mathbf{u}$ remains constant in each integration step. As long as our model interpolates well among different values of constant $\mathbf{u}$ , we could get good estimated trajectories with a time-varying $\mathbf{u}$ .
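The augmentation in Equation (11) can be sketched concretely (toy scalar dynamics $\dot{x} = u - x$, chosen only for illustration): the constant control is appended to the state with zero time-derivative, so a single solver call handles the pair $(\mathbf{x}, \mathbf{u})$.

```python
import numpy as np

def f(x, u):
    return u - x                  # toy controlled dynamics: dx/dt = u - x

def f_aug(state):
    # Augmented dynamics of Equation (11): du/dt = 0, so the constant
    # forcing rides along with the state through the solver.
    x, u = state
    return np.array([f(x, u), 0.0])

def rk4(state, dt):
    k1 = f_aug(state)
    k2 = f_aug(state + 0.5 * dt * k1)
    k3 = f_aug(state + 0.5 * dt * k2)
    k4 = f_aug(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.0, 1.5])      # x0 = 0 under constant forcing u = 1.5
for _ in range(1000):             # integrate to t = 10
    state = rk4(state, dt=0.01)
x_T, u_T = state                  # u_T is preserved; x_T approaches u
```

The analytic solution is $x(t) = u(1 - e^{-t})$, which the augmented rollout reproduces while leaving the control component untouched.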
The problem is then how to design the network architecture of $\tilde{\mathbf{f}}_{\theta}$ , or equivalently $\mathbf{f}_{\theta}$ , such that we can learn the dynamics in an efficient way.

# 3.2 LEARNING FROM GENERALIZED COORDINATE AND MOMENTUM

Suppose we have trajectory data consisting of $(\mathbf{q},\mathbf{p},\mathbf{u})_{t_0,\dots ,t_n}$ , where $\mathbf{u}$ remains constant in a trajectory. If we have the prior knowledge that the unforced dynamics of $\mathbf{q}$ and $\mathbf{p}$ is governed by Hamiltonian dynamics, we can use three neural nets - $\mathbf{M}_{\theta_1}^{-1}(\mathbf{q})$ , $V_{\theta_2}(\mathbf{q})$ and $\mathbf{g}_{\theta_3}(\mathbf{q})$ - as function approximators to represent the inverse of the mass matrix, the potential energy, and the input matrix. Thus,

$$
\mathbf {f} _ {\theta} (\mathbf {q}, \mathbf {p}, \mathbf {u}) = \left[ \begin{array}{c} \frac {\partial H _ {\theta_ {1} , \theta_ {2}}}{\partial \mathbf {p}} \\ - \frac {\partial H _ {\theta_ {1} , \theta_ {2}}}{\partial \mathbf {q}} \end{array} \right] + \left[ \begin{array}{c} \mathbf {0} \\ \mathbf {g} _ {\theta_ {3}} (\mathbf {q}) \end{array} \right] \mathbf {u} \tag {12}
$$

where

$$
H _ {\theta_ {1}, \theta_ {2}} (\mathbf {q}, \mathbf {p}) = \frac {1}{2} \mathbf {p} ^ {T} \mathbf {M} _ {\theta_ {1}} ^ {- 1} (\mathbf {q}) \mathbf {p} + V _ {\theta_ {2}} (\mathbf {q}). \tag {13}
$$

The partial derivatives in the expression can be taken care of by automatic differentiation. By putting the designed $\mathbf{f}_{\theta}(\mathbf{q},\mathbf{p},\mathbf{u})$ into Neural ODE, we obtain a systematic way of adding the prior knowledge of Hamiltonian dynamics into end-to-end learning.

# 3.3 LEARNING FROM EMBEDDED ANGLE DATA

In the previous subsection, we assumed the data comes in the form of $(\mathbf{q},\mathbf{p},\mathbf{u})_{t_0,\dots ,t_n}$ . In many physical system models, the state variables involve angles which reside in the interval $[- \pi ,\pi)$ .
In other words, each angle resides on the manifold $\mathbb{S}^1$ . From a data-driven perspective, the data that respects the geometry is a two-dimensional embedding $(\cos q,\sin q)$ . Furthermore, the generalized momentum data is usually not available. Instead, the velocity is often available. For example, in the OpenAI Gym (Brockman et al., 2016) Pendulum-v0 task, the observation is $(\cos q,\sin q,\dot{q})$ .

From a theoretical perspective, however, the angle itself is often used instead of the 2D embedding. The reason is that both the Lagrangian and the Hamiltonian formulations are derived using generalized coordinates. Using an independent generalized coordinate system makes it easier to solve for the equations of motion.

In this subsection, we take the data-driven standpoint and develop an angle-aware method to accommodate the underlying manifold structure. We assume all the generalized coordinates are angles and the data comes in the form of $(\mathbf{x}_1(\mathbf{q}),\mathbf{x}_2(\mathbf{q}),\mathbf{x}_3(\dot{\mathbf{q}}),\mathbf{u})_{t_0,\dots,t_n} = (\cos \mathbf{q},\sin \mathbf{q},\dot{\mathbf{q}},\mathbf{u})_{t_0,\dots,t_n}$ . We aim to incorporate our theoretical prior - Hamiltonian dynamics - into the data-driven approach. The goal is to learn the dynamics of $\mathbf{x}_1$ , $\mathbf{x}_2$ and $\mathbf{x}_3$ .
Noticing $\mathbf{p} = \mathbf{M}(\mathbf{x}_1,\mathbf{x}_2)\dot{\mathbf{q}}$ , we can write down the derivative of $\mathbf{x}_1$ , $\mathbf{x}_2$ and $\mathbf{x}_3$ , + +$$ +\dot {\mathbf {x}} _ {1} = - \sin \mathbf {q} \circ \dot {\mathbf {q}} = - \mathbf {x} _ {2} \circ \dot {\mathbf {q}} +$$ + +$$ +\dot {\mathbf {x}} _ {2} = \cos \mathbf {q} \circ \dot {\mathbf {q}} = \mathbf {x} _ {1} \circ \dot {\mathbf {q}} \tag {14} +$$ + +$$ +\dot {\mathbf {x}} _ {3} = \frac {\mathrm {d}}{\mathrm {d} t} (\mathbf {M} ^ {- 1} (\mathbf {x} _ {1}, \mathbf {x} _ {2}) \mathbf {p}) = \frac {\mathrm {d}}{\mathrm {d} t} (\mathbf {M} ^ {- 1} (\mathbf {x} _ {1}, \mathbf {x} _ {2})) \mathbf {p} + \mathbf {M} ^ {- 1} (\mathbf {x} _ {1}, \mathbf {x} _ {2}) \dot {\mathbf {p}} +$$ + +where “ $\circ$ ” represents the elementwise product (i.e., Hadamard product). We assume $\mathbf{q}$ and $\mathbf{p}$ evolve with the generalized Hamiltonian dynamics Equation (4). Here the Hamiltonian $H(\mathbf{x}_1, \mathbf{x}_2, \mathbf{p})$ is a function of $\mathbf{x}_1$ , $\mathbf{x}_2$ and $\mathbf{p}$ instead of $\mathbf{q}$ and $\mathbf{p}$ . 
+ +$$ +\dot {\mathbf {q}} = \frac {\partial H}{\partial \mathbf {p}} \tag {15} +$$ + +$$ +\begin{array}{l} \dot {\mathbf {p}} = - \frac {\partial H}{\partial \mathbf {q}} + \mathbf {g} (\mathbf {x} _ {1}, \mathbf {x} _ {2}) \mathbf {u} = - \frac {\partial \mathbf {x} _ {1}}{\partial \mathbf {q}} \frac {\partial H}{\partial \mathbf {x} _ {1}} - \frac {\partial \mathbf {x} _ {2}}{\partial \mathbf {q}} \frac {\partial H}{\partial \mathbf {x} _ {2}} + \mathbf {g} (\mathbf {x} _ {1}, \mathbf {x} _ {2}) \mathbf {u} \\ = \sin \mathbf {q} \circ \frac {\partial H}{\partial \mathbf {x} _ {1}} - \cos \mathbf {q} \circ \frac {\partial H}{\partial \mathbf {x} _ {2}} + \mathbf {g} (\mathbf {x} _ {1}, \mathbf {x} _ {2}) \mathbf {u} = \mathbf {x} _ {2} \circ \frac {\partial H}{\partial \mathbf {x} _ {1}} - \mathbf {x} _ {1} \circ \frac {\partial H}{\partial \mathbf {x} _ {2}} + \mathbf {g} (\mathbf {x} _ {1}, \mathbf {x} _ {2}) \mathbf {u} \tag {16} \\ \end{array} +$$ + +Then the right hand side of Equation (14) can be expressed as a function of state variables and control $(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3,\mathbf{u})$ . Thus, it can be fed into the Neural ODE. We use three neural nets - $\mathbf{M}_{\theta_1}^{-1}(\mathbf{x}_1,\mathbf{x}_2)$ , $V_{\theta_2}(\mathbf{x}_1,\mathbf{x}_2)$ and $\mathbf{g}_{\theta_3}(\mathbf{x}_1,\mathbf{x}_2)$ - as function approximators. Substitute Equation (15) and Equation (16) into Equation (14), then the RHS serves as $\mathbf{f}_{\theta}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3,\mathbf{u})$ . 
$$
\mathbf {f} _ {\theta} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}, \mathbf {u}\right) = \left[ \begin{array}{c} - \mathbf {x} _ {2} \circ \frac {\partial H _ {\theta_ {1} , \theta_ {2}}}{\partial \mathbf {p}} \\ \mathbf {x} _ {1} \circ \frac {\partial H _ {\theta_ {1} , \theta_ {2}}}{\partial \mathbf {p}} \\ \frac {\mathrm {d}}{\mathrm {d} t} \left(\mathbf {M} _ {\theta_ {1}} ^ {- 1} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}\right)\right) \mathbf {p} + \mathbf {M} _ {\theta_ {1}} ^ {- 1} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}\right) \left(\mathbf {x} _ {2} \circ \frac {\partial H _ {\theta_ {1} , \theta_ {2}}}{\partial \mathbf {x} _ {1}} - \mathbf {x} _ {1} \circ \frac {\partial H _ {\theta_ {1} , \theta_ {2}}}{\partial \mathbf {x} _ {2}} + \mathbf {g} _ {\theta_ {3}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}\right) \mathbf {u}\right) \end{array} \right] \tag {17}
$$

where

$$
H _ {\theta_ {1}, \theta_ {2}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {p}\right) = \frac {1}{2} \mathbf {p} ^ {T} \mathbf {M} _ {\theta_ {1}} ^ {- 1} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}\right) \mathbf {p} + V _ {\theta_ {2}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}\right) \tag {18}
$$

$$
\mathbf {p} = \mathbf {M} _ {\theta_ {1}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}\right) \mathbf {x} _ {3} \tag {19}
$$

# 3.4 LEARNING ON HYBRID SPACES $\mathbb{R}^n\times \mathbb{T}^m$

In Subsection 3.2, we treated the generalized coordinates as translational coordinates. In Subsection 3.3, we developed an angle-aware method to better deal with embedded angle data. In most physical systems, these two types of coordinates coexist. For example, robotic systems are usually modelled as interconnected rigid bodies. The positions of joints or centers of mass are translational coordinates and the orientations of each rigid body are angular coordinates.
In other words, the generalized coordinates lie on $\mathbb{R}^n\times \mathbb{T}^m$ , where $\mathbb{T}^m$ denotes the $m$ -torus, with $\mathbb{T}^1 = \mathbb{S}^1$ and $\mathbb{T}^2 = \mathbb{S}^1\times \mathbb{S}^1$ . In this subsection, we put together the architectures of the previous two subsections. We assume the generalized coordinates are $\mathbf{q} = (\mathbf{r},\boldsymbol {\phi})\in \mathbb{R}^n\times \mathbb{T}^m$ and the data come in the form of $(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3,\mathbf{x}_4,\mathbf{x}_5,\mathbf{u})_{t_0,\dots ,t_n} = (\mathbf{r},\cos \boldsymbol{\phi} ,\sin \boldsymbol{\phi} ,\dot{\mathbf{r}},\dot{\boldsymbol{\phi}},\mathbf{u})_{t_0,\dots ,t_n}$ . By a similar line of reasoning, we use three neural nets - $\mathbf{M}_{\theta_1}^{-1}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)$ , $V_{\theta_2}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)$ and $\mathbf{g}_{\theta_3}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)$ - as function approximators. We have

$$
\mathbf {p} = \mathbf {M} _ {\theta_ {1}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}\right) \left[ \begin{array}{l} \mathbf {x} _ {4} \\ \mathbf {x} _ {5} \end{array} \right] \tag {20}
$$

$$
H _ {\theta_ {1}, \theta_ {2}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}, \mathbf {p}\right) = \frac {1}{2} \mathbf {p} ^ {T} \mathbf {M} _ {\theta_ {1}} ^ {- 1} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}\right) \mathbf {p} + V _ {\theta_ {2}} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}\right) \tag {21}
$$

With Hamiltonian dynamics, we have

$$
\dot {\mathbf {q}} = \left[ \begin{array}{l} \dot {\mathbf {r}} \\ \dot {\boldsymbol {\phi}} \end{array} \right] = \frac {\partial H _ {\theta_ {1}, \theta_ {2}}}{\partial \mathbf {p}} \tag {22}
$$

$$
\dot {\mathbf {p}} = \left[ \begin{array}{c} - \frac {\partial H _ {\theta_ {1}, \theta_ {2}}}{\partial \mathbf {x} _ {1}} \\ \mathbf {x} _ {3} \circ \frac {\partial H _ {\theta_ {1}, \theta_ {2}}}{\partial \mathbf {x} _ {2}} - \mathbf {x} _ {2} \circ \frac {\partial H _ {\theta_ {1}, \theta_ {2}}}{\partial \mathbf {x} _ {3}} \end{array} \right] + \mathbf {g} _ {\theta_ {3}} (\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}) \mathbf {u} \tag {23}
$$

Then

$$
\left[ \begin{array}{l} \dot {\mathbf {x}} _ {1} \\ \dot {\mathbf {x}} _ {2} \\ \dot {\mathbf {x}} _ {3} \\ \dot {\mathbf {x}} _ {4} \\ \dot {\mathbf {x}} _ {5} \end{array} \right] = \left[ \begin{array}{c} \dot {\mathbf {r}} \\ - \mathbf {x} _ {3} \circ \dot {\boldsymbol {\phi}} \\ \mathbf {x} _ {2} \circ \dot {\boldsymbol {\phi}} \\ \frac {\mathrm {d}}{\mathrm {d} t} \left(\mathbf {M} _ {\theta_ {1}} ^ {- 1} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}\right)\right) \mathbf {p} + \mathbf {M} _ {\theta_ {1}} ^ {- 1} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}\right) \dot {\mathbf {p}} \end{array} \right] = \mathbf {f} _ {\theta} \left(\mathbf {x} _ {1}, \mathbf {x} _ {2}, \mathbf {x} _ {3}, \mathbf {x} _ {4}, \mathbf {x} _ {5}, \mathbf {u}\right) \tag {24}
$$

where $\dot{\mathbf{r}}$ and $\dot{\boldsymbol{\phi}}$ come from Equation (22). We now have an $\mathbf{f}_{\theta}$ that can be fed into the Neural ODE. Figure 1 shows the flow of the computation graph based on Equations (20)-(24).

![](images/89dc3b6724eaca76a2c9eeec8e018c8093b5c5ce1428dac936106f3bdc008b5a.jpg)
Figure 1: The computation graph of SymODEN. Blue arrows indicate neural network parametrization. Red arrows indicate automatic differentiation. For a given $(\mathbf{x},\mathbf{u})$ , the computation graph outputs $\mathbf{f}_{\theta}(\mathbf{x},\mathbf{u})$ , which follows Hamiltonian dynamics with control. The function itself is an input to the Neural ODE, which generates estimates of the states at each time step. Since all the operations are differentiable, the weights of the neural networks can be updated by backpropagation.
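To make the structure of Equations (17) and (24) concrete, here is a minimal pure-Python sketch of the angle-aware vector field for a single pendulum on $\mathbb{S}^1$, with the ground-truth quantities $M^{-1} = 3$, $V = 5(1 - \cos q)$ and $g = 1$ of the Task 1 system standing in for the learned networks $\mathbf{M}_{\theta_1}^{-1}$, $V_{\theta_2}$, $\mathbf{g}_{\theta_3}$. All names and constants are illustrative, not the paper's implementation:

```python
import math

# Ground-truth stand-ins for the learned networks (illustrative only).
M_INV = 3.0                       # constant, so d/dt(M^{-1}) = 0

def f_theta(x, u):
    """Angle-aware vector field of Equation (17): x = (cos q, sin q, qdot)."""
    x1, x2, x3 = x
    p = x3 / M_INV                # Equation (19): p = M qdot
    dH_dp = M_INV * p             # = qdot
    dH_dx1 = -5.0                 # from V(x1, x2) = 5 (1 - x1)
    dH_dx2 = 0.0
    dx1 = -x2 * dH_dp             # d/dt cos q = -sin q * qdot
    dx2 = x1 * dH_dp              # d/dt sin q =  cos q * qdot
    dx3 = M_INV * (x2 * dH_dx1 - x1 * dH_dx2 + 1.0 * u)
    return (dx1, dx2, dx3)

def rk4_step(x, u, dt):
    """One RK4 step, as used inside the Neural ODE solver."""
    add = lambda a, b, s: tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = f_theta(x, u)
    k2 = f_theta(add(x, k1, dt / 2), u)
    k3 = f_theta(add(x, k2, dt / 2), u)
    k4 = f_theta(add(x, k3, dt), u)
    return tuple(xi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# Unforced rollout: the embedding constraint cos^2 q + sin^2 q = 1 and the
# total energy H are preserved to high accuracy along the trajectory.
q0 = 1.0
x = (math.cos(q0), math.sin(q0), 0.0)
H = lambda x: 0.5 * M_INV * (x[2] / M_INV) ** 2 + 5.0 * (1.0 - x[0])
H0 = H(x)
for _ in range(400):
    x = rk4_step(x, 0.0, 0.05)
```

In the trained model, `M_INV`, `dH_dx1` and `dH_dx2` would instead come from $\mathbf{M}_{\theta_1}^{-1}$ and $V_{\theta_2}$ via automatic differentiation; the closed-form derivatives above simply play their role.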
# 3.5 POSITIVE DEFINITENESS OF THE MASS MATRIX

In real physical systems, the mass matrix $\mathbf{M}$ is positive definite, which ensures that the kinetic energy is positive for any non-zero velocity. The positive definiteness of $\mathbf{M}$ implies the positive definiteness of $\mathbf{M}_{\theta_1}^{-1}$ . We impose this constraint in the network architecture by parametrizing $\mathbf{M}_{\theta_1}^{-1} = \mathbf{L}_{\theta_1}\mathbf{L}_{\theta_1}^T$ , where $\mathbf{L}_{\theta_1}$ is a lower-triangular matrix; positive definiteness is then ensured whenever $\mathbf{L}_{\theta_1}$ is nonsingular, i.e., its diagonal elements are non-zero. In practice, we also add a small constant $\epsilon$ to the diagonal elements of $\mathbf{M}_{\theta_1}^{-1}$ , which not only makes $\mathbf{M}_{\theta_1}$ invertible but also stabilizes training.

# 4 EXPERIMENTS

# 4.1 EXPERIMENTAL SETUP

We use the following four tasks to evaluate the performance of the Symplectic ODE-Net model: (i) Task 1: a pendulum with generalized coordinate and momentum data (learning on $\mathbb{R}^1$ ); (ii) Task 2: a pendulum with embedded angle data (learning on $\mathbb{S}^1$ ); (iii) Task 3: a CartPole system (learning on $\mathbb{R}^1 \times \mathbb{S}^1$ ); and (iv) Task 4: an Acrobot (learning on $\mathbb{T}^2$ ).

Model Variants. Besides the Symplectic ODE-Net model derived above, we consider a variant that approximates the Hamiltonian by a fully connected neural net $H_{\theta_1,\theta_2}$ . We call it Unstructured Symplectic ODE-Net (Unstructured SymODEN), since this model does not exploit the structure of the Hamiltonian (1).

Baseline Models. In order to show that we can learn the dynamics better with fewer parameters by leveraging prior knowledge, we set up baseline models for all four experiments. For the pendulum with generalized coordinate and momentum data, the naive baseline model approximates Equation (12) - $\mathbf{f}_{\theta}(\mathbf{x},\mathbf{u})$ - by a fully connected neural net.
For all the other experiments, which involve embedded angle data, we set up two different baseline models. The naive baseline approximates $\mathbf{f}_{\theta}(\mathbf{x},\mathbf{u})$ by a fully connected neural net; it does not respect the fact that the coordinate pairs $(\cos \phi, \sin \phi)$ lie on $\mathbb{T}^m$ . We therefore also set up the geometric baseline model, which approximates $\dot{\mathbf{q}}$ and $\dot{\mathbf{p}}$ with a fully connected neural net. This ensures that the angle data evolve on $\mathbb{T}^m$ .

Data Generation. For all tasks, we randomly generate initial conditions of the states and combine each of them with 5 different constant control inputs, i.e., $u = -2.0, -1.0, 0.0, 1.0, 2.0$ , to produce the initial conditions and inputs required for simulation. The simulators integrate the corresponding dynamics for 20 time steps to generate the trajectory data from which the training set is constructed. The simulators differ across tasks. For Task 1, we integrate the true generalized Hamiltonian dynamics with a time interval of 0.05 seconds to generate trajectories. All the other tasks deal with embedded angle data and velocity directly, so we use OpenAI Gym (Brockman et al., 2016) simulators to generate trajectory data. One drawback of using OpenAI Gym is that not all environments use the fourth-order Runge-Kutta method (RK4) to carry out the integration. OpenAI Gym favors other numerical schemes over RK4 because of speed, but it is harder to learn the dynamics from inaccurate data. For example, if we plot the total energy as a function of time from data generated by the Pendulum-v0 environment with zero action, the total energy oscillates around a constant by a significant amount, even though it should be conserved. Thus, for Task 2 and Task 3, we use Pendulum-v0 and CartPole-v1, respectively, and replace the numerical integrator of the environments with RK4. For Task 4, we use the Acrobot-v1 environment, which already uses RK4.
We also change the action space of Pendulum-v0, CartPole-v1 and Acrobot-v1 to a continuous space with a large enough bound.

Model training. In all the tasks, we train our model using the Adam optimizer (Kingma & Ba, 2014) for 1000 epochs. We set a time horizon $\tau = 3$ , and choose RK4 as the numerical integration scheme in the Neural ODE. We vary the size of the training set by doubling the number of initial state conditions from 16 to 1024. Each initial state condition is combined with the five constant controls $u = -2.0, -1.0, 0.0, 1.0, 2.0$ to produce the initial conditions for simulation, and each trajectory is generated by integrating the dynamics 20 time steps forward. We set the size of the mini-batches to the number of initial state conditions. We log the train error per trajectory and the prediction error per trajectory in each case for all the tasks. The train error per trajectory is the mean squared error (MSE) between the estimated trajectory and the ground truth over 20 time steps. To evaluate the performance of each model in terms of long-term prediction, we construct the metric of prediction error per trajectory by taking the same initial state conditions as in the training set with a constant control of $u = 0.0$ , integrating 40 time steps forward, and calculating the MSE over the 40 time steps. The reason for using only the unforced trajectories is that a constant nonzero control might cause the velocity to keep increasing or decreasing over time, and large absolute values of velocity are of little interest for synthesizing controllers.

# 4.2 TASK 1: PENDULUM WITH GENERALIZED COORDINATE AND MOMENTUM DATA

In this task, we use the model described in Section 3.2 and present the predicted trajectories of the learned models as well as the learned functions of SymODEN. We also point out the drawback of treating the angle data as a Cartesian coordinate.
The dynamics of this task have the following form

$$
\dot {q} = 3 p, \quad \dot {p} = - 5 \sin q + u \tag {25}
$$

with Hamiltonian $H(q,p) = 1.5p^2 + 5(1 - \cos q)$ . In other words, $M^{-1}(q) = 3$ , $V(q) = 5(1 - \cos q)$ and $g(q) = 1$ .

![](images/2dd9345c540f267bd756ddeb4669ee6cdd72718d93e1a32ab7e770c14f753ddf.jpg)
Figure 2: Sample trajectories and learned functions of Task 1.

In Figure 2, the ground truth is an unforced trajectory, which conserves energy. The predicted trajectory of the baseline model does not conserve energy, while both SymODEN and its unstructured variant predict energy-conserving trajectories. For SymODEN, the learned $g_{\theta_3}(q)$ and $M_{\theta_1}^{-1}(q)$ match the ground truth well. $V_{\theta_2}(q)$ differs from the ground truth by a constant. This is acceptable, since potential energy is a relative notion; only the derivative of $V_{\theta_2}(q)$ plays a role in the dynamics.

Here we treat $q$ as a variable in $\mathbb{R}^1$ and our training set contains initial conditions with $q \in [-\pi, 3\pi]$ . The learned functions do not extrapolate well outside this range, as we can see from the left part of the plots of $M_{\theta_1}^{-1}(q)$ and $V_{\theta_2}(q)$ . We address this issue by working directly with embedded angle data, which leads us to the next subsection.

# 4.3 TASK 2: PENDULUM WITH EMBEDDED DATA

In this task, the dynamics are the same as in Equation (25), but the training data are generated by the OpenAI Gym simulator, i.e., we use embedded angle data and assume we only have access to $\dot{q}$ instead of $p$ . We use the model described in Section 3.3 and synthesize an energy-based controller (Section 2.2). Without true $p$ data, the learned functions match the ground truth up to a scaling $\beta$ , as shown in Figure 3.
To explain the scaling, consider the following dynamics

$$
\dot {q} = p / \alpha , \quad \dot {p} = - 15 \alpha \sin q + 3 \alpha u \tag {26}
$$

with Hamiltonian $H = p^2 / (2\alpha) + 15\alpha (1 - \cos q)$ . If we only look at the dynamics of $q$ , we have $\ddot{q} = -15\sin q + 3u$ , which is independent of $\alpha$ . If we do not have access to the generalized momentum $p$ , our trained neural network may converge to a Hamiltonian with an $\alpha_e$ that differs from the true value, $\alpha_{t} = 1/3$ , in this task. Up to the scaling $\beta = \alpha_{t} / \alpha_{e} = 0.357$ , the learned functions match the ground truth. Even though we are not learning the true $\alpha_{t}$ , we can still perform prediction and control, since we are learning the dynamics of $q$ correctly.

![](images/939ba1c5100230a216ad0061e076585cab27eea4c6843a866b50503bedb1397d.jpg)

![](images/3c5ab47d004772543060908efba051a690601ec287043d4fefd841f3c1afe916.jpg)

![](images/9ae144079713470699cc270e596d7c385f3401730910dc0552f7eb22409b5c05.jpg)
Figure 3: Without true generalized momentum data, the learned functions match the ground truth up to a scaling. Here $\beta = 0.357$ .

We let $V_{d} = -V_{\theta_{2}}(q)$ ; then the desired Hamiltonian has minimum energy when the pendulum rests at the upward position. For the damping injection, we let $K_{d} = 3$ . From Equations (7) and (8), the controller we synthesize is

$$
u (\cos q, \sin q, \dot {q}) = g _ {\theta_ {3}} ^ {- 1} (\cos q, \sin q) \left(2 \left(- \frac {\partial V _ {\theta_ {2}}}{\partial \cos q} \sin q + \frac {\partial V _ {\theta_ {2}}}{\partial \sin q} \cos q\right) - 3 \dot {q}\right) \tag {27}
$$

Only SymODEN, among all the models we consider, provides the learned potential energy required to synthesize this controller. Figure 4 shows how the states evolve when the controller is fed into the OpenAI Gym simulator.
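As a sanity check on the form of Equation (27), the following pure-Python sketch simulates the closed loop with the ground-truth $V$ and $g$ (i.e., $\partial V/\partial \cos q = -5$, $g = 1$, the $\beta = 1$ case) substituted for the learned $V_{\theta_2}$ and $g_{\theta_3}$; names, gains, and the initial condition are illustrative, not the trained model:

```python
import math

def controller(cos_q, sin_q, qdot):
    """Equation (27) with ground-truth stand-ins: u = 10 sin q - 3 qdot."""
    dV_dcos, dV_dsin, g = -5.0, 0.0, 1.0
    return (2.0 * (-dV_dcos * sin_q + dV_dsin * cos_q) - 3.0 * qdot) / g

def closed_loop_step(state, dt=0.01):
    """One RK4 step of qddot = -15 sin q + 3 u (Equation (25))."""
    def f(s):
        q, qd = s
        u = controller(math.cos(q), math.sin(q), qd)
        return (qd, -15.0 * math.sin(q) + 3.0 * u)
    q, qd = state
    k1 = f(state)
    k2 = f((q + dt / 2 * k1[0], qd + dt / 2 * k1[1]))
    k3 = f((q + dt / 2 * k2[0], qd + dt / 2 * k2[1]))
    k4 = f((q + dt * k3[0], qd + dt * k3[1]))
    return (q + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            qd + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Swing the pendulum from near the bottom to the inverted position q = pi:
# the shaped potential has its minimum at the upright equilibrium, and the
# damping injection dissipates the remaining energy.
state = (0.5, 0.0)
for _ in range(3000):
    state = closed_loop_step(state)
```

The closed loop reduces to $\ddot q = 15 \sin q - 9\dot q$, a damped system whose potential is minimized at $q = \pi$, so the state settles at the inverted position.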
We can successfully control the pendulum into the inverted position using the controller based on the learned model, even though the absolute maximum of the control $u$ , 7.5, is more than three times larger than the absolute maximum of $u$ in the training set, 2.0. This shows that SymODEN extrapolates well.

![](images/18a888a07b65c80d3e6ea2148862b87ca5a6430a0af1a6dbb5106319228f2eec.jpg)

![](images/cd6ea3f6e10ccafc75af2263e508cd259d3ea899f4fd447d1f26b72e298a0794.jpg)

![](images/5d40838601a957925d524b0bb125a2294b29d06b7a4347ac4a4b1e6d9b210453.jpg)
Figure 4: Time-evolution of the state variables $(\cos q, \sin q, \dot{q})$ when the closed-loop control input $u(\cos q, \sin q, \dot{q})$ is governed by Equation (27). The thin black lines show the expected results.

# 4.4 TASK 3: CARTPOLE SYSTEM

The CartPole system is an underactuated system, and synthesizing a controller that balances the pole from an arbitrary initial condition requires trajectory optimization or kinetic energy shaping. We show that we can learn its dynamics and perform prediction in Section 4.6. We also train SymODEN on a fully-actuated version of the CartPole system (see Appendix E). The corresponding energy-based controller can bring the pole to the inverted position while driving the cart to the origin.

# 4.5 TASK 4: ACROBOT

The Acrobot is an underactuated double pendulum. As this system exhibits chaotic motion, it is not possible to predict its long-term behavior. However, Figure 6 shows that SymODEN can provide reasonably good short-term predictions. We also train SymODEN on a fully-actuated version of the Acrobot and show that we can control this system to reach the inverted position (see Appendix E).

# 4.6 RESULTS

In this subsection, we show the train error, the prediction error, and the MSE and total energy of a sample test trajectory for all the tasks.
Figure 5 shows the variation in train error and prediction error as the number of initial state conditions in the training set changes. We can see that SymODEN yields better generalization in every task. In Task 3, although the Geometric Baseline Model yields lower train error than the other models, SymODEN generates more accurate predictions, indicating overfitting in the Geometric Baseline Model. By incorporating the physics-based prior of Hamiltonian dynamics, SymODEN learns dynamics that obey physical laws and thus provides better predictions. In most cases, SymODEN trained on a smaller dataset performs better than the other models in terms of both train and prediction error, indicating that better generalization can be achieved even with fewer training samples.

Figure 6 shows the evolution of the MSE and total energy along a trajectory with a previously unseen initial condition. For all the tasks, the MSE of the baseline models diverges faster than that of SymODEN. Unstructured SymODEN performs well in all tasks except Task 3. As for the total energy, in Task 1 and Task 2, SymODEN and Unstructured SymODEN conserve total energy, oscillating around a constant value.
In these models, the Hamiltonian itself is learned, and the predicted future states stay around a level set of the Hamiltonian. The baseline models, however, fail to find this conservation, and their estimates of the future states drift away from the initial Hamiltonian level set.

![](images/f0689624e20f225afad3a3d0ec76102be6bfa9562f308f511f5f1e46f68e592e.jpg)

![](images/ff9e94685947729de1fdab369a29c4180d9e1f3542e0863b45ea4827d07c0ee1.jpg)

![](images/0ff339c4dd402471908a7dbc957953533fa63aaae18bd1062a342b4dc1ee7eef.jpg)

![](images/35b596764b96f11e42d3dffd78f355d489f7d2326e4dc59983c89fad34396fd4.jpg)

![](images/49bc4960fc3be9f826e9343f8ae4777057e32168ceec6a89dc98583cb56e6250.jpg)

![](images/ed2c9c32b3b941a9d1b4e8467206ad15f35c1f0a28e4c791bf53961e0c016fab.jpg)

![](images/aef7340ca6b6b85a544d8f7bcd0e35dc10ef969fb3cec5de44b51a0ce57977cf.jpg)

![](images/786e8ac9b01f65f5ce8277b1821c009a9d363ccd56bf0c9f0b71aee0af043f74.jpg)

![](images/39bf94ca1496a514484bab9e1f49409741f0a94d076040b1ce140ea45e937a8a.jpg)
Figure 5: Train error per trajectory and prediction error per trajectory for all 4 tasks with different numbers of training trajectories. The horizontal axis shows the number of initial state conditions (16, 32, 64, 128, 256, 512, 1024) in the training set. Both axes are in log scale.

![](images/c5127869d58a7d7c1b3ac0e38f7ad2b463aec9d6b7caa852c1739a193a4334d4.jpg)

![](images/8929aa245baa1e5ee35ce0cd06a80226d1ee36775a1d02de36ed425147f4eab0.jpg)

![](images/366d5449ba540c64300f631e24da1d69e14378b31b176b0357f7fd10df04b879.jpg)

![](images/be5d584c707d2089087ce894ffe427ab1b79467975f995e567792a2493436ece.jpg)

![](images/b212adbd67fea1ad973a211a057c1cb02f5ad023262111c4cce0fb310ccae9a5.jpg)

![](images/10d9d6077b31607a825087c4a9bcb6c3ee229e41ff037045956fe12539984e58.jpg)

![](images/9304504d646488a2be16c682b0b558a3010295a63d5a9edd5f107f37e065ead9.jpg)
Figure 6: Mean squared error and total energy of test trajectories. SymODEN works best in terms of both MSE and total energy. Since SymODEN has learned the Hamiltonian and discovered the conservation from data, the predicted trajectories match the ground truth. The ground-truth energy stays constant in all four tasks.

# 5 CONCLUSION

We have introduced Symplectic ODE-Net, which provides a systematic way to incorporate prior knowledge of Hamiltonian dynamics with control into a deep learning framework. We show that SymODEN achieves better prediction with fewer training samples by learning an interpretable, physically consistent state-space model. SymODEN can work with embedded angle data, or when we only have access to velocity instead of generalized momentum. Future work will incorporate a broader class of physics-based priors, such as the port-Hamiltonian system formulation, to learn the dynamics of a larger class of physical systems, and will explore other types of embeddings, such as embedded 3D orientations. Another interesting direction is to combine energy shaping control (potential as well as kinetic energy shaping) with interpretable end-to-end learning frameworks.

# REFERENCES

Vladimir I. Arnold, Alexander B. Givental, and Sergei P. Novikov. Symplectic geometry. In Dynamical systems IV, pp. 1-138. Springer, 2001.

Ibrahim Ayed, Emmanuel de Bézenac, Arthur Pajot, Julien Brajard, and Patrick Gallinari. Learning dynamical systems from partial observations. arXiv:1902.11136, 2019.

Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, 2000.

Anthony M. Bloch, Naomi E. Leonard, and Jerrold E. Marsden. Controlled Lagrangians and the stabilization of Euler-Poincaré mechanical systems. International Journal of Robust and Nonlinear Control, 11(3):191-214, 2001.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016. +Arunkumar Byravan and Dieter Fox. Se3-nets: Learning rigid body motion using deep neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 173-180. IEEE, 2017. +Dong E. Chang, Anthony M. Bloch, Naomi E. Leonard, Jerrold E. Marsden, and Craig A. Woolsey. The equivalence of controlled lagrangian and controlled hamiltonian systems. ESAIM: Control, Optimisation and Calculus of Variations, 8:393-422, 2002. +Tian Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems 31, pp. 6571-6583. 2018. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019. +Herbert Goldstein, Charles Poole, and John Safko. Classical mechanics, 2002. +Ian Goodfellow, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT Press, 2016. +Sam Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian Neural Networks. arXiv:1906.01563, 2019. +Jayesh K. Gupta, Kunal Menda, Zachary Manchester, and Mykel J. Kochenderfer. A general framework for structured learning of mechanical systems. arXiv:1902.08705, 2019. +David Haussler. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36(2):177-221, 1988. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. 
+Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. Deep variational bayes filters: Unsupervised learning of state space models from raw data. arXiv:1605.06432, 2016. +Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980, 2014. +Rahul G. Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. +Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015. +Michael Lutter, Christian Ritter, and Jan Peters. Deep lagrangian networks: Using physics as model prior for deep learning. In 7th International Conference on Learning Representations (ICLR), 2019. +Kumpati S. Narendra and Kannan Parthasarathy. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, 1(1):4-27, 1990. + +Romeo Ortega, Arjan J. Van Der Schaft, Iven Mareels, and Bernhard Maschke. Putting energy back in control. IEEE Control Systems Magazine, 21(2):18-33, 2001. +Romeo Ortega, Arjan J. Van Der Schaft, Bernhard Maschke, and Gerardo Escobar. Interconnection and damping assignment passivity-based control of port-controlled hamiltonian systems. Automatica, 38(4):585-596, 2002. +David J. Rowe, Arthur Ryman, and George Rosensteel. Many-body quantum mechanics as a symplectic dynamical system. Physical Review A, 22(6):2362, 1980. +Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost T. Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In International Conference on Machine Learning (ICML), pp. 4467-4476, 2018. 
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
Torsten Söderström and Petre Stoica. System identification. Prentice-Hall, Inc., 1988.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems 28, pp. 2746-2754, 2015.
Tianshu Wei, Yanzhi Wang, and Qi Zhu. Deep reinforcement learning for building HVAC control. In Proceedings of the 54th Annual Design Automation Conference (DAC), pp. 22:1-22:6, 2017.

# Appendices

# A EXPERIMENT IMPLEMENTATION DETAILS

The architectures used for our experiments are shown below. For all the tasks, SymODEN has the lowest number of total parameters. To ensure that the learned functions are smooth, we use the Tanh activation function instead of ReLU. As we have differentiation in the computation graph, non-smooth activation functions would lead to discontinuities in the derivatives. This, in turn, would result in an ODE with a discontinuous right-hand side, which is not desirable. All the architectures shown below are fully connected neural networks. The first number indicates the dimension of the input layer, and the last number indicates the dimension of the output layer. The dimensions of the hidden layers are shown in the middle, along with the activation functions.
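The notation below can be read as a plain MLP specification. As a hypothetical illustration (not the paper's code, and with biases omitted for brevity, so the listed parameter counts are slightly larger), "1 - 50Tanh - 50Tanh - 1Linear" corresponds to the following forward pass:

```python
import math
import random

random.seed(0)

def mlp(dims):
    """Random weight matrices for a fully connected net; dims = [1, 50, 50, 1]
    encodes the spec "1 - 50Tanh - 50Tanh - 1Linear" (biases omitted)."""
    return [[[random.gauss(0.0, dims[i] ** -0.5) for _ in range(dims[i])]
             for _ in range(dims[i + 1])] for i in range(len(dims) - 1)]

def forward(layers, x):
    """Tanh after every layer except the last, which stays linear, keeping the
    learned function smooth with continuous derivatives."""
    for i, layer in enumerate(layers):
        y = [sum(w * xi for w, xi in zip(row, x)) for row in layer]
        x = y if i == len(layers) - 1 else [math.tanh(v) for v in y]
    return x

V_theta2 = mlp([1, 50, 50, 1])   # e.g. the V net of Task 1 below
out = forward(V_theta2, [0.3])   # scalar potential energy estimate
```

In practice each such net would be a standard deep-learning-framework module trained end to end through the ODE solver; this sketch only fixes how the layer notation maps to shapes and activations.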
+ +# Task 1: Pendulum + +- Input: 2 state dimensions, 1 action dimension +- Baseline Model (0.36M parameters): 2 - 600Tanh - 600Tanh - 2Linear +- Unstructured SymODEN (0.20M parameters): +- $H_{\theta_1,\theta_2}$ : 2 - 400Tanh - 400Tanh - 1Linear +- $g_{\theta_3}$ : 1 - 200Tanh - 200Tanh - 1Linear + +- SymODEN (0.13M parameters): + +- $M_{\theta_1}^{-1}$ : 1 - 300Tanh - 300Tanh - 1Linear +- $V_{\theta_2}$ : 1 - 50Tanh - 50Tanh - 1Linear +- $g_{\theta_3}$ : 1 - 200Tanh - 200Tanh - 1Linear + +# Task 2: Pendulum with embedded data + +- Input: 3 state dimensions, 1 action dimension + +- Naive Baseline Model (0.65M parameters): 4 - 800Tanh - 800Tanh - 3Linear + +- Geometric Baseline Model (0.46M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 2 - 300Tanh - 300Tanh - 300Tanh - 1Linear +- approximate $(\dot{q},\dot{p})$ : 4 - 600Tanh - 600Tanh - 2Linear + +- Unstructured SymODEN (0.39M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 2 - 300Tanh - 300Tanh - 300Tanh - 1Linear +- $H_{\theta_2}$ : 3 - 500Tanh - 500Tanh - 1Linear +- $g_{\theta_3}$ : 2 - 200Tanh - 200Tanh - 1Linear + +- SymODEN (0.14M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 2 - 300Tanh - 300Tanh - 300Tanh - 1Linear +- $V_{\theta_2}$ : 2 - 50Tanh - 50Tanh - 1Linear +- $g_{\theta_3}$ : 2 - 200Tanh - 200Tanh - 1Linear + +# Task 3: CartPole + +- Input: 5 state dimensions, 1 action dimension +- Naive Baseline Model (1.01M parameters): 6 - 1000Tanh - 1000Tanh - 5Linear + +- Geometric Baseline Model (0.82M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 3 - 400Tanh - 400Tanh - 400Tanh - 3Linear +- approximate $(\dot{\mathbf{q}},\dot{\mathbf{p}})$ : 6 - 700Tanh - 700Tanh - 4Linear + +- Unstructured SymODEN (0.67M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 3 - 400Tanh - 400Tanh - 400Tanh - 
3Linear +- $H_{\theta_2}$ : 5 - 500Tanh - 500Tanh - 1Linear +- $g_{\theta_3}$ : 3 - 300Tanh - 300Tanh - 2Linear + +- SymODEN (0.51M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 3 - 400Tanh - 400Tanh - 400Tanh - 3Linear +- $V_{\theta_2}$ : 3 - 300Tanh - 300Tanh - 1Linear +- $g_{\theta_3}$ : 3 - 300Tanh - 300Tanh - 2Linear + +# Task 4:Acrobot + +- Input: 6 state dimensions, 1 action dimension +- Naive Baseline Model (1.46M parameters): 7 - 1200Tanh - 1200Tanh - 6Linear + +- Geometric Baseline Model (0.97M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}: 4 - 400\mathrm{Tanh} - 400\mathrm{Tanh} - 400\mathrm{Tanh} - 3\mathrm{Linear}$ +- approximate $(\dot{\mathbf{q}},\dot{\mathbf{p}})$ : 7 - 800Tanh - 800Tanh - 4Linear + +- Unstructured SymODEN (0.78M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 4 - 400Tanh - 400Tanh - 400Tanh - 3Linear +- $H_{\theta_2}$ : 6 - 600Tanh - 600Tanh - 1Linear +- $g_{\theta_3}$ : 4 - 300Tanh - 300Tanh - 2Linear + +- SymODEN (0.51M parameters): + +- $M_{\theta_1}^{-1} = L_{\theta_1}L_{\theta_1}^T$ , where $L_{\theta_1}$ : 4 - 400Tanh - 400Tanh - 400Tanh - 3Linear +- $V_{\theta_2}$ : 4 - 300Tanh - 300Tanh - 1Linear +- $g_{\theta_3}$ : 4 - 300Tanh - 300Tanh - 2Linear + +# B SPECIAL CASE OF ENERGY-BASED CONTROLLER - PD CONTROLLER WITH ENERGY COMPENSATION + +The energy-based controller has the form $\mathbf{u}(\mathbf{q},\mathbf{p}) = \beta (\mathbf{q}) + \mathbf{v}(\mathbf{p})$ , where the potential energy shaping term $\beta (\mathbf{q})$ and the damping injection term $\mathbf{v}(\mathbf{p})$ are given by Equation (7) and Equation (8), respectively. 
If the desired potential energy $V_{d}(\mathbf{q})$ is given by a quadratic, as in Equation (9), then

$$
\begin{array}{l} \boldsymbol {\beta} (\mathbf {q}) = \mathbf {g} ^ {T} (\mathbf {g} \mathbf {g} ^ {T}) ^ {- 1} \left(\frac {\partial V}{\partial \mathbf {q}} - \frac {\partial V _ {d}}{\partial \mathbf {q}}\right) \\ = \mathbf {g} ^ {T} \left(\mathbf {g} \mathbf {g} ^ {T}\right) ^ {- 1} \left(\frac {\partial V}{\partial \mathbf {q}} - \mathbf {K} _ {p} (\mathbf {q} - \mathbf {q} ^ {\star})\right), \tag {28} \\ \end{array}
$$

and the controller can be expressed as

$$
\mathbf {u} (\mathbf {q}, \mathbf {p}) = \boldsymbol {\beta} (\mathbf {q}) + \mathbf {v} (\mathbf {p}) = \mathbf {g} ^ {T} \left(\mathbf {g} \mathbf {g} ^ {T}\right) ^ {- 1} \left(\frac {\partial V}{\partial \mathbf {q}} - \mathbf {K} _ {p} (\mathbf {q} - \mathbf {q} ^ {\star}) - \mathbf {K} _ {d} \mathbf {p}\right). \tag {29}
$$

The corresponding external forcing term is then given by

$$
\mathbf {g} (\mathbf {q}) \mathbf {u} = \frac {\partial V}{\partial \mathbf {q}} - \mathbf {K} _ {p} \left(\mathbf {q} - \mathbf {q} ^ {\star}\right) - \mathbf {K} _ {d} \mathbf {p}, \tag {30}
$$

which is the same as Equation (10) in the main body of the paper. The first term in this external forcing provides an energy compensation, whereas the second and last terms are proportional and derivative control terms, respectively. Thus, this control can be perceived as a PD controller with an additional energy compensation.

# C ABLATION STUDY OF DIFFERENTIABLE ODE SOLVER

In Hamiltonian Neural Networks (HNN), Greydanus et al. (2019) incorporate the Hamiltonian structure into learning by minimizing the difference between the symplectic gradients and the true gradients. When the true gradients are not available, which is often the case, the authors suggest using finite difference approximations.
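The quality gap between these two supervision signals can be quantified on the Task 1 pendulum. In the following pure-Python sketch (illustrative, not the HNN or SymODEN training code), the one-step finite-difference estimate of the symplectic gradient carries an $O(\Delta t)$ discretization error, while a single RK4 step matched against the next state has $O(\Delta t^5)$ local error:

```python
import math

def f(state):
    """True symplectic gradient of the pendulum, H = 1.5 p^2 + 5 (1 - cos q)."""
    q, p = state
    return (3.0 * p, -5.0 * math.sin(q))

def rk4(state, dt):
    q, p = state
    k1 = f(state)
    k2 = f((q + dt / 2 * k1[0], p + dt / 2 * k1[1]))
    k3 = f((q + dt / 2 * k2[0], p + dt / 2 * k2[1]))
    k4 = f((q + dt * k3[0], p + dt * k3[1]))
    return (q + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

dt = 0.05
x0 = (1.0, 0.0)
# A near-exact next state, obtained with many small RK4 substeps.
x1 = x0
for _ in range(1000):
    x1 = rk4(x1, dt / 1000)

# Finite-difference target vs the true gradient at x0: O(dt) error.
fd = tuple((b - a) / dt for a, b in zip(x0, x1))
fd_residual = max(abs(a - b) for a, b in zip(fd, f(x0)))

# One RK4 step vs the next state: O(dt^5) local error.
int_residual = max(abs(a - b) for a, b in zip(rk4(x0, dt), x1))
```

Here `fd_residual` is orders of magnitude larger than `int_residual` at the same step size, which is why matching integrated states gives a cleaner training signal than matching finite-difference gradients.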
In SymODEN, true gradients or gradient approximations are not necessary, since we integrate the estimated gradient with a differentiable ODE solver and set up the loss function on the integrated values. Here we perform an ablation study of the differentiable ODE solver.

Both HNN and Unstructured SymODEN approximate the Hamiltonian by a neural network, and the main difference between them is the differentiable ODE solver, so we compare the performance of these two models. We set the time horizon $\tau = 1$ , since it naturally corresponds to the finite difference estimate of the gradient; a larger $\tau$ would correspond to higher-order estimates of the gradient. Since there is no angle-aware design in HNN, we use Task 1 to compare the performance of the two models.

We generate 25 training trajectories, each of which contains 45 time steps. This is consistent with the HNN paper. In Greydanus et al. (2019), the initial conditions of the trajectories are generated randomly in an annulus, whereas in this paper we generate the initial state conditions uniformly in a reasonable range in each state dimension. We conjecture that the authors of HNN chose the annulus data generation because they do not have an angle-aware design. Take the pendulum as an example: none of the training and test trajectories they generate pass the inverted position. If they made predictions on a trajectory with a large enough initial speed, the angle would go past $\pm 2\pi$ , $\pm 4\pi$ , etc. in the long run. Since these regions are away from where the model was trained, we can expect the predictions to be poor. In fact, this motivated us to design the angle-aware SymODEN in Section 3.3. In this ablation study, we generate the training data in both ways.

Table 1 shows the train error and the prediction error per trajectory of the two models. We can see that Unstructured SymODEN performs better than HNN. This is an expected result.
To see why this is the case, let us assume the training loss per time step of HNN is similar to that of Unstructured SymODEN. Since the training loss is on the symplectic gradient, the error accumulates while integrating the symplectic gradient to obtain the estimated state values, so the MSE of the state values would likely be one order of magnitude greater than that of Unstructured SymODEN. + +![](images/c4843cb365ccacd1f37c1c7d4ef8063d1a12b4b1ba29a84a1215c1dd60e007a5.jpg) +Figure 7: MSE and total energy of a sample test trajectory. Left two figures: the training data for the models are randomly generated in an annulus, the same as in HNN. Right two figures: the training data for the models are randomly generated in a rectangle, the same way we generate data for SymODEN. + +![](images/fe178b615e6814536b2b9993fc5c023f4df7f8735460d45ad610dddee6d.jpg) + +![](images/12e5cbd68daecc71be9dc7af7313ba699f6fa070b767a5f69a9e235d3e02fd53.jpg) + +![](images/6d8f8fe51ddda9a71bd66e7173f12da106cdb9b8fe8d324f7b34ae08564b7ffb.jpg) + +Figure 7 shows the MSE and total energy of a particular trajectory. It is clear that the MSE of the Unstructured SymODEN is lower than that of HNN. The fact that the MSE of HNN periodically touches zero does not mean it makes good predictions at those time steps. Since the trajectories in the phase space are closed circles, those zeros mean the predicted trajectory of HNN lags behind (or runs ahead of) the true trajectory by one or more circles. Also, the energy of the HNN trajectory drifts instead of staying constant, probably because the finite difference approximation is not accurate enough. + +Table 1: Train error and prediction error per trajectory of Unstructured SymODEN and HNN. The train error per trajectory is the sum of MSE of all 45 timesteps, averaged over the 25 training trajectories. The prediction error per trajectory is the sum of MSE of 90 timesteps in a trajectory.
| Model | Train error (annulus) | Prediction error (annulus) | Train error (rectangle) | Prediction error (rectangle) |
| --- | --- | --- | --- | --- |
| Unstructured SymODEN | 56.59 | 440.78 | 502.60 | 4363.87 |
| HNN | 290.67 | 564.16 | 5457.80 | 26209.17 |
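For contrast with the finite-difference signal, the integrated-state loss that Unstructured SymODEN trains on can be sketched as follows; the RK4 step and the quadratic model Hamiltonian are illustrative stand-ins for the learned network:

```python
import numpy as np

def sym_grad(state, a=1.0, b=1.0):
    # Symplectic gradient of the illustrative model H = a*q^2/2 + b*p^2/2.
    q, p = state
    return np.array([b * p, -a * q])

def rk4_step(state, dt, a=1.0, b=1.0):
    # One RK4 step of x_dot = sym_grad(x).
    k1 = sym_grad(state, a, b)
    k2 = sym_grad(state + 0.5 * dt * k1, a, b)
    k3 = sym_grad(state + 0.5 * dt * k2, a, b)
    k4 = sym_grad(state + dt * k3, a, b)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def symoden_loss(traj, dt, a=1.0, b=1.0):
    # One-step (tau = 1) loss on the integrated state values.
    preds = np.array([rk4_step(s, dt, a, b) for s in traj[:-1]])
    return np.mean((preds - traj[1:]) ** 2)

t = np.linspace(0.0, 4.5, 46)
traj = np.stack([np.sin(t), np.cos(t)], axis=1)  # true oscillator data
dt = t[1] - t[0]
assert symoden_loss(traj, dt, a=1.0) < symoden_loss(traj, dt, a=4.0)
```

Because the loss is taken on integrated states rather than on gradient estimates, the matching model's residual here is only the (tiny) RK4 truncation error, rather than the finite-difference error above.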
+ +# D EFFECTS OF THE TIME HORIZON $\tau$ + +Incorporating the differentiable ODE solver also introduces two hyperparameters: the solver type and the time horizon $\tau$ . Among solver types, the Euler solver is not accurate enough for our tasks. The adaptive solver "dopri5" leads to train, test, and prediction errors similar to those of the RK4 solver, but requires more time during training. Thus, in our experiments, we choose RK4. + +The time horizon $\tau$ is the number of points we use to construct our loss function. Table 2 shows the train error, test error, and prediction error per trajectory in Task 2 as $\tau$ is varied from 1 to 5. We can see that longer time horizons lead to better models. This is expected, since longer time horizons penalize poor long-term predictions. We also observe in our experiments that longer time horizons require more time to train the models. + +Table 2: Train error, test error and prediction error per trajectory of Task 2
| Time Horizon | $\tau = 1$ | $\tau = 2$ | $\tau = 3$ | $\tau = 4$ | $\tau = 5$ |
| --- | --- | --- | --- | --- | --- |
| Train Error | 0.744 | 0.136 | 0.068 | 0.033 | 0.017 |
| Test Error | 0.579 | 0.098 | 0.052 | 0.024 | 0.012 |
| Prediction Error | 3.138 | 0.502 | 0.199 | 0.095 | 0.048 |
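The role of the time horizon can be made concrete: from each starting point, the loss penalizes all $\tau$ integrated steps against the data, so larger $\tau$ directly penalizes poor long-term predictions. A minimal sketch under illustrative pendulum dynamics (not the paper's implementation):

```python
import numpy as np

def f(state):
    # Symplectic gradient of an ideal pendulum, H = p^2/2 - cos(q).
    q, p = state
    return np.array([p, -np.sin(q)])

def rk4_step(state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def horizon_loss(traj, dt, tau):
    # Average squared error over tau-step predictions from every start index,
    # so increasing tau adds longer-horizon terms to the loss.
    loss, count = 0.0, 0
    for i in range(len(traj) - tau):
        s = traj[i]
        for j in range(1, tau + 1):
            s = rk4_step(s, dt)
            loss += np.sum((s - traj[i + j]) ** 2)
            count += 1
    return loss / count

# Sanity check: data generated by the same integrator gives zero loss.
traj = [np.array([1.0, 0.0])]
for _ in range(20):
    traj.append(rk4_step(traj[-1], 0.05))
traj = np.array(traj)
assert horizon_loss(traj, 0.05, tau=5) < 1e-12
```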
+ +# E FULLY-ACTUATED CARTPOLE AND ACROBOT + +CartPole and Acrobot are underactuated systems. Incorporating the control of underactuated systems into the end-to-end learning framework is left for future work. Here we trained SymODEN on fully-actuated versions of CartPole and Acrobot and synthesized controllers based on the learned models. + +For the fully-actuated CartPole, Figure 8 shows snapshots of a controlled trajectory of the system with an initial condition where the pole is below the horizon. Figure 9 shows the time series of state variables and control inputs. We can successfully learn the dynamics and control the pole to the inverted position and the cart to the origin.
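The synthesized controllers follow the energy-shaping PD law of Equation (29); a minimal closed-loop sketch on a fully-actuated pendulum, where the gains and the analytic model are illustrative placeholders for the learned quantities:

```python
import numpy as np

def controller(q, p, q_star, g, dV_dq, Kp, Kd):
    # u = g^T (g g^T)^{-1} (dV/dq - Kp (q - q_star) - Kd p), as in Eq. (29):
    # energy compensation plus proportional and derivative feedback.
    ginv = g.T @ np.linalg.inv(g @ g.T)
    return ginv @ (dV_dq(q) - Kp @ (q - q_star) - Kd @ p)

# Fully-actuated pendulum: H = p^2/2 - cos(q), so dV/dq = sin(q) and g = [1].
g = np.array([[1.0]])
Kp, Kd = np.array([[5.0]]), np.array([[2.0]])
q_star = np.array([np.pi])  # drive the pendulum to the inverted position

q, p, dt = 0.5, 0.0, 0.01
for _ in range(4000):
    u = controller(np.array([q]), np.array([p]), q_star, g, np.sin, Kp, Kd)
    q, p = q + dt * p, p + dt * (-np.sin(q) + u[0])  # Euler simulation

assert abs(q - np.pi) < 1e-3 and abs(p) < 1e-3  # settled at the inverted position
```

In closed loop the gravity term cancels, leaving the linear stable dynamics $\dot{p} = -K_p(q - q^{\star}) - K_d p$, which is why the state converges to $(q^{\star}, 0)$.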
+ +![](images/442d45a7400cb038956e8217919c6982a84dac783e3fd190d1cc9dcc6b9c532b.jpg) +Figure 8: Snapshots of a controlled trajectory of the fully-actuated CartPole system with a 0.3s time interval. + +![](images/20bf8b07edb5bc3929c8c94bacb5d00f44a16ec9715219d9147b828b8213a5ce.jpg) + +![](images/f464d389c2eb95c68cb9df1e4a94946296955d338144e27ac235db0330a62283.jpg) +Figure 9: Time series of state variables and control inputs of the controlled trajectory shown in Figure 8. Black reference lines indicate the expected final values. + +For the fully-actuated Acrobot, Figure 10 shows snapshots of a controlled trajectory. Figure 11 shows the time series of state variables and control inputs. We can successfully control the Acrobot from the downward position to the upward position, though the final value of $q_{2}$ is slightly away from zero. Given that the dynamics were learned from only 64 initial state conditions, it is likely that the upward position did not appear in the training data. + +![](images/835d4d15c48ba5acf6e85e2d855f13f8eb35ce8c496dba77bf2a41b6f793cf68.jpg) +Figure 10: Snapshots of a controlled trajectory of the fully-actuated Acrobot system with a 1s time interval.
+ +![](images/ba73c9da09a93d14f0906caaa9758acf44b4855a2e635a61e470501e9ec1.jpg) + +![](images/965330ed0e98b7adcfdeace45bac639dcc94a1fb420e6610dc2be1aa7689b678.jpg) + +![](images/9ed0b75868a4398c61b50fca94c03d8860fdc36b2fe0e0c776af109db15c6f94.jpg) + +![](images/762010b0746490ad0094440f77587d82794e8e02f3f1ee6b0c9c2fe8cc9e67fa.jpg) + +![](images/8a7e8208e36a90bdb6fdfbd1e48bd89c2d280e6b904ac73545ba0293365dec6f.jpg) + +![](images/7123288829d9b2f7770f31e98b2eb0759c33a134506c02b4261c8b2e16386f86.jpg) + +![](images/e89e649d256a63f1c4434130cc8400281e4a0c06a595b6adced95b9a0f84ddb5.jpg) + +![](images/421523db1cca6177b514456113bcd1fdb2b22c8cb64e2cf0110033d32574ab34.jpg) + +![](images/a25a2a8613ef924ed9235e8b842b8ee69cb010f012a7ac9d0f5a47d245080280.jpg) + +![](images/59e2c609e8feb5e2a27b98f2ca6054f5f25ace4a24e836653d89c5a27e731036.jpg) +Figure 11: Time series of state variables and control inputs of the controlled trajectory shown in Figure 10. Black reference lines indicate the expected final values. + +![](images/c144017edd9cc6f268502427dc82029cbbf6164d727c825922e14d4ea2f4cc8a.jpg) + +![](images/37ae84782b3d28977a9bb3cdb5fc7c6de975a151a319ed86de9247cb9503d984.jpg) + +# F TEST ERRORS OF THE TASKS + +Here we show the train, test, and prediction errors per trajectory for all four tasks. The train errors are based on 64 initial state conditions and 5 constant inputs. The test errors are based on 64 previously unseen initial state conditions and the same 5 constant inputs. Each trajectory in the train and test sets contains 20 steps. The prediction error is based on the same 64 initial state conditions (as during training) and zero inputs. + +Table 3: Train, Test and Prediction errors of the Four Tasks
| | Naive Baseline | Geometric Baseline | Unstructured Symplectic-ODE | Symplectic-ODE |
| --- | --- | --- | --- | --- |
| **Task 1: Pendulum** | | | | |
| Model Parameter | 0.36M | N/A | 0.20M | 0.13M |
| Train error | 30.82 ± 43.45 | N/A | 0.89 ± 2.76 | 1.50 ± 4.17 |
| Test error | 40.99 ± 56.28 | N/A | 2.74 ± 9.94 | 2.34 ± 5.79 |
| Prediction error | 37.87 ± 117.02 | N/A | 17.17 ± 71.48 | 23.95 ± 66.61 |
| **Task 2: Pendulum (embed)** | | | | |
| Model Parameter | 0.65M | 0.46M | 0.39M | 0.14M |
| Train error | 2.31 ± 3.72 | 0.59 ± 1.63 | 41.76 ± 3.69 | 0.067 ± 0.276 |
| Test error | 2.18 ± 3.59 | 0.49 ± 1.76 | 21.41 ± 2.82 | 0.052 ± 0.241 |
| Prediction error | 317.21 ± 521.46 | 14.31 ± 29.54 | 3.69 ± 7.72 | 0.20 ± 0.49 |
| **Task 3: CartPole** | | | | |
| Model Parameter | 1.01M | 0.82M | 0.67M | 0.51M |
| Train error | 15.53 ± 22.52 | 0.45 ± 0.37 | 4.84 ± 4.42 | 1.78 ± 1.81 |
| Test error | 25.42 ± 38.49 | 1.20 ± 2.67 | 6.90 ± 8.66 | 1.89 ± 1.81 |
| Prediction error | 332.44 ± 245.24 | 52.26 ± 73.25 | 225.22 ± 194.24 | 11.41 ± 16.06 |
| **Task 4: Acrobot** | | | | |
| Model Parameter | 1.46M | 0.97M | 0.78M | 0.51M |
| Train error | 2.04 ± 2.90 | 2.07 ± 3.72 | 1.32 ± 2.08 | 0.25 ± 0.39 |
| Test error | 5.62 ± 9.29 | 5.12 ± 7.25 | 3.33 ± 6.00 | 0.28 ± 0.48 |
| Prediction error | 64.61 ± 145.20 | 26.68 ± 34.90 | 9.72 ± 16.58 | 2.07 ± 5.26 |
\ No newline at end of file diff --git a/symplecticodenetlearninghamiltoniandynamicswithcontrol/images.zip b/symplecticodenetlearninghamiltoniandynamicswithcontrol/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9b327148b198609710925dd8774ecdac24c37eec --- /dev/null +++ b/symplecticodenetlearninghamiltoniandynamicswithcontrol/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25a12b3d0c42b4177656be814dd219979930bfc1730ba26559da9f2561c19987 +size 799656 diff --git a/symplecticodenetlearninghamiltoniandynamicswithcontrol/layout.json b/symplecticodenetlearninghamiltoniandynamicswithcontrol/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d95144c818e667f7fcdbdd07abd3fae74a7fe877 --- /dev/null +++ b/symplecticodenetlearninghamiltoniandynamicswithcontrol/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:549723a9cc89e99668638bbe860d955c160c2b7e655a26a6f4c7fff0e575bd91 +size 725778 diff --git a/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_content_list.json b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f42f8a67873846864a6edb1c0d5f57845530b623 --- /dev/null +++ b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba8de167d69f0c5db1076ebc705a03d30219e26acdb6201ff03f7d20a881eaf7 +size 131828 diff --git a/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_model.json b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4e605d96ca0bd823f308f65962d7e7a414a78fc6 --- 
/dev/null +++ b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:928b2791efeab0a71a5eb4d12fbcc20bc5402cf678506d348e3bd8d8ea939170 +size 154402 diff --git a/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_origin.pdf b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..529c29fe83fa6f7ea4da263dadc6bea6ba3ab7dd --- /dev/null +++ b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/aab0d129-5e56-4707-9e10-54a49eb1f2d9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6878965b165701bf37fb02dab66edab4fc952b3e67f4dccd2d37061860edc3b8 +size 961407 diff --git a/synthesizingprogrammaticpoliciesthatinductivelygeneralize/full.md b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/full.md new file mode 100644 index 0000000000000000000000000000000000000000..23d36868b88585275a618e67d8e8405fb03ac0c5 --- /dev/null +++ b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/full.md @@ -0,0 +1,549 @@ +# SYNTHESIZING PROGRAMMATIC POLICIES THAT INDUCTIVELY GENERALIZE + +Jeevana Priya Inala + +MIT CSAIL + +jinala@csail.mit.edu + +Osbert Bastani + +University of Pennsylvania + +obastani@seas.upenn.edu + +Zenna Tavares + +MIT CSAIL + +zenna@mit.edu + +Armando Solar-Lezama + +MIT CSAIL + +asolar@csail.mit.edu + +# ABSTRACT + +Deep reinforcement learning has successfully solved a number of challenging control tasks. However, learned policies typically have difficulty generalizing to novel environments. We propose an algorithm for learning programmatic state machine policies that can capture repeating behaviors. 
By doing so, they have the ability to generalize to instances requiring an arbitrary number of repetitions, a property we call inductive generalization. However, state machine policies are hard to learn since they consist of a combination of continuous and discrete structures. We propose a learning framework called adaptive teaching, which learns a state machine policy by imitating a teacher; in contrast to traditional imitation learning, our teacher adaptively updates itself based on the structure of the student. We show that our algorithm can be used to learn policies that inductively generalize to novel environments, whereas traditional neural network policies fail to do so. + +# 1 INTRODUCTION + +Existing deep reinforcement learning (RL) approaches have difficulty generalizing to novel environments (Packer et al., 2018; Rajeswaran et al., 2017). More specifically, consider a task that requires performing a repeating behavior—we would like to be able to learn a policy that generalizes to instances requiring an arbitrary number of repetitions. We refer to this property as inductive generalization. In supervised learning, specialized neural network architectures have been proposed that exhibit inductive generalization on tasks such as list manipulation (Cai et al., 2017), but it is not obvious how those techniques would generalize to the control problems discussed in this paper. Alternatively, algorithms have been proposed for learning programmatic policies for control problems that generalize better than traditional neural network policies (Verma et al., 2019; 2018), but existing approaches have focused on simple stateless policies that make learning generalizable repetitive behaviors hard, e.g., a stateless program cannot internally keep track of the number of repetitions made so far and decide the next action based on that progress. + +We propose an algorithm for learning programmatic state machine policies. 
Such a policy consists of a set of internal states, called modes, each of which is associated with a controller that is applied while in that mode. The policy also includes transition predicates that describe how the mode is updated. These policies are sufficiently expressive to capture tasks of interest—e.g., they can perform repeating tasks by cycling through some subset of modes during execution. Additionally, state machine policies are strongly biased towards policies that inductively generalize, an inductive bias that deep RL policies lack. In other words, this policy class is both realizable (i.e., it contains a "right" policy that solves the problem for all environments) and identifiable (i.e., we can learn the right policy from limited data). + +However, state machine policies are challenging to learn because their discrete state transitions make it difficult to use gradient-based optimization. One standard solution is to "soften" the state transitions by making them probabilistic. However, these techniques alone are insufficient; they still run into local optima due to the constraints on the structure of the policy function, as well as the relatively few parameters they possess. + +![](images/c1555061b871ac5c402d5ae333945f43bda0c1dd471060b9e6c5502c317f15af.jpg) + +![](images/7f8e32048cbd3c5791440482e5369662fb7653fcd68b32fc096082bd499505a9.jpg) + +![](images/e0e22411e539734705da26e40621105ca0d2ae4a70f77f0d4e128b6786ab6c44.jpg) + +![](images/78130a529ef01c68375f13b63c5aa9cb86e98cdd0094d0c6b8358188a0bafcf5.jpg) + +![](images/2975b49f122c2fb3c388a95c5883f42ca773eaea745a345bdb4c69e34631b289.jpg) +(a) Train 1 +(b) Train 2 +(c) Train 3 +(d) Test +(e) State machine based policy. False edges are dropped. +Figure 1: Running example: retrieving an autonomous car from tight parking spots. The goal is to learn a state-machine policy (e), trained on scenarios (a), (b), and (c), that generalizes to scenario (d).
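To make the execution model concrete before describing the algorithm, here is a toy sketch of how a state-machine policy runs; the one-dimensional dynamics, actions, and switching predicates are hypothetical illustrations, not the policy of Figure 1e:

```python
# A state-machine policy: the internal mode selects an action function, and
# switching conditions (guards) move the agent between modes.

def run_state_machine(x0, modes, guards, start, steps):
    """modes: mode -> action function; guards: (m_i, m_j) -> switching predicate."""
    x, mode, visited = x0, start, [start]
    for _ in range(steps):
        x = x + modes[mode](x)                  # apply the current mode's action
        for (mi, mj), guard in guards.items():  # check switching conditions
            if mi == mode and guard(x):
                mode = mj                       # transition to the next mode
                break
        visited.append(mode)
    return x, visited

# A hypothetical two-mode policy that oscillates within [0, 1], analogous to
# the forward/backward maneuvering of Figure 1e:
modes = {"fwd": lambda x: 0.3, "bwd": lambda x: -0.3}
guards = {("fwd", "bwd"): lambda x: x > 1.0, ("bwd", "fwd"): lambda x: x < 0.0}
x, visited = run_state_machine(0.0, modes, guards, "fwd", steps=10)
assert "bwd" in visited  # the policy cycles through both modes
```

The repetition count is not hard-coded anywhere: it emerges from how many times the guards fire, which is what allows this policy class to generalize to instances requiring more repetitions.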
+ +To address this issue, we propose an approach called adaptive teaching, which alternately learns a teacher and a student. The teacher is an over-parameterized version of the student, and the student is a state-machine policy trained to mimic the teacher. Because the teacher is over-parameterized, it can be easily learned using model-based numerical optimization (but does not generalize as well as the student). Furthermore, our approach differs from traditional imitation learning (Schaal, 1999; Ross et al., 2011) in that the teacher is regularized to favor strategies similar to the ones taken by the student, to ensure the student can successfully mimic the teacher. As the student improves, the teacher improves as well. This alternating optimization can naturally be derived within the framework of variational inference, where the teacher encodes the variational distribution (Wainwright et al., 2008). + +We implement our algorithm and evaluate it on a set of reinforcement learning problems focused on tasks that require inductive generalization. We show that traditional deep RL approaches perform well on the original task, but fail to generalize inductively, whereas our state machine policies successfully generalize beyond the training distribution. + +We emphasize that we do not focus on problems that require large state machines, which is a qualitatively different problem from ours and would require different algorithms to solve. We believe that state machines are most useful when only a few modes are required. In particular, we are interested in problems where a relatively simple behavior must be repeated a certain number of times to solve the given task. The key premise behind our approach, as shown by our evaluation, is that, in these cases, compact state machines can represent policies that both perform well and generalize. In fact, our algorithm solves all of our benchmarks using state-machine policies with at most 4 modes.
When many modes are needed, the number of possible transition structures grows exponentially, making it difficult to learn the "right" structure without an exponential amount of training data. + +Example. Consider the autonomous car in Figure 1, which consists of a blue car (the agent) parked between two stationary black cars. The system state is $(x,y,\theta ,d)$ , where $(x,y)$ is the center of the car, $\theta$ is the orientation, and $d$ is the distance between the two black cars. The actions are $(v,\psi)$ , where $v$ is velocity and $\psi$ is steering angle (we consider velocity control since the speed is low). The dynamics are standard bicycle dynamics. The goal is to drive out of the parked spot to an adjacent lane while avoiding collisions. This task is easy when $d$ is large (Figure 1a). It is somewhat more involved when $d$ is small, since it requires multiple maneuvers (Figures 1b and 1c). However, it becomes challenging when $d$ is very small (Figure 1d). A standard RL algorithm will train a policy that performs well on the distances seen during training but does not generalize to smaller distances. In contrast, our goal is to train an agent on scenarios (a), (b), and (c) that generalizes to scenario (d). + +In Figure 1e, we show a state machine policy synthesized by our algorithm for this task. We use $d_{f}$ and $d_{b}$ to denote the distances between the agent and the front and back black cars, respectively. This policy has three different modes (besides a start mode $m_{s}$ and an end mode $m_e$ ).
Roughly speaking, this policy says (i) immediately shift from mode $m_{s}$ to $m_1$ , and drive the car forward and to the left, (ii) continue until close to the car in front; then, transition to mode $m_2$ , and drive the car backwards and to the right, (iii) continue until close to the car behind; then, transition back to mode $m_1$ , (iv) iterate between $m_1$ and $m_2$ until the car can safely exit the parking spot; then, transition to mode $m_3$ , and drive forward and to the right to make the car parallel to the lane. This policy inductively generalizes since it captures the iterative behavior of driving forward and then backward until exiting the parking spot. Thus, it successfully solves the scenario in Figure 1d. + +Related work. There has been growing interest in using program synthesis to aid machine learning (Lake et al., 2015; Ellis et al., 2015; 2018; Valkov et al., 2018; Young et al., 2019). Our work is most closely related to recent work using imitation learning to learn programmatic policies (Verma et al., 2018; Bastani et al., 2018; Zhu et al., 2019; Verma et al., 2019). These approaches use a neural network policy as the teacher. However, they are focused on learning stateless policies; hence, they use a supervised dataset of state-action pairs from the teacher and a domain-specific program synthesizer to learn programmatic policies. Building such a synthesizer for state machine policies is challenging since they contain both discrete and continuous parameters as well as internal state. The student in our algorithm needs to learn the state-machine policy from entire "trajectory traces" to learn the internal state. In particular, each trajectory trace not only consists of the sequence of states and actions visited by the teacher from the initial state to the goal state, but also encodes which states correspond to mode changes for the teacher.
In the teacher's iteration, the teacher's mode changes are regularized to align more closely with the possible student mode changes. As a consequence, in the student's iteration, it is easier for the student to mimic the teacher's mode changes. Leveraging this connection between the teacher structure and the student structure is critical for us to be able to learn state-machine policies. Additionally, with the exception of Verma et al. (2019), these approaches provide no feedback from the student to the teacher. + +State machines have previously been used to represent policies that have internal state (typically called memory). To learn these policies, gradient ascent methods assume a fixed structure and optimize over real-valued parameters (Meuleau et al., 1999; Peshkin et al., 2001; Aberdeen & Baxter, 2002), whereas policy iteration methods use dynamic programming to extend the structure (Hansen, 1998). Our method combines both but, similarly to Poupart & Boutilier (2004), bounds the structure space. In addition, programmatic state machines use programs to represent state transitions and action rules, and as a result can perform well while remaining small in size. Hierarchies of Abstract Machines (HAMs) also use programmatic state machines for hierarchical reinforcement learning, but assume a fixed, hand-designed structure (Parr & Russell, 1998; Andre & Russell, 2002). + +Our inductive generalization goal is related to that of meta-learning (Finn et al., 2017); however, whereas meta-learning trains on a few examples from the novel environment, our goal is to generalize without additional training. Our work is also related to guided policy search, which uses a teacher in the form of a trajectory optimizer to train a neural network student (Levine & Koltun, 2013). However, training programmatic policies is more challenging since the teacher must mirror the structure of the student.
Finally, it has recently been shown that over-parameterization is essential in helping neural networks avoid local minima (Allen-Zhu et al., 2019). Relaxing optimization problems by adding more parameters is a well-established technique; in many cases, re-parameterization can make difficult non-convex problems efficiently solvable (Carlone & Calafiore, 2018). + +# 2 PROBLEM FORMULATION + +Dynamics. We are interested in synthesizing control policies for deterministic, continuous-time dynamical systems with continuous state and action spaces. In particular, we consider partially observable Markov decision processes (POMDPs) $\langle \mathcal{X},\mathcal{A},\mathcal{O},F,Z,X_0,\phi_S,\phi_G\rangle$ with states $\mathcal{X}\subseteq \mathbb{R}^{d_X}$ , actions $\mathcal{A}\subseteq \mathbb{R}^{d_A}$ , observations $\mathcal{O}\subseteq \mathbb{R}^{d_O}$ , deterministic dynamics $F:\mathcal{X}\times \mathcal{A}\to \mathcal{X}$ (i.e., $\dot{\mathbf{x}} = F(\mathbf{x},\mathbf{a})$ ), deterministic observation function $Z:\mathcal{X}\rightarrow \mathcal{O}$ , and initial state distribution $\mathbf{x}_0\sim X_0$ . + +We consider a safety specification $\phi_S: \mathcal{X} \to \mathbb{R}$ and a goal specification $\phi_G: \mathcal{X} \to \mathbb{R}$ . The agent aims to reach a goal state $\phi_G(\mathbf{x}) \leq 0$ while staying in safe states $\phi_S(\mathbf{x}) \leq 0$ . A positive value of $\phi_S(\mathbf{x})$ (resp., $\phi_G(\mathbf{x})$ ) quantifies the degree to which $\mathbf{x}$ is unsafe (resp., away from the goal). + +Policies. We consider policies with internal memory $\pi : \mathcal{O} \times \mathcal{S} \to \mathcal{A} \times \mathcal{S}$ , where $\mathcal{S} \subseteq \mathbb{R}^{d_S}$ is the set of internal states; we assume the memory is initialized to a constant $\mathbf{s}_0$ .
Given such a policy $\pi$ , we sample a rollout (or trajectory) $\tau = (\mathbf{x}_0, \mathbf{x}_1, \dots, \mathbf{x}_N)$ with horizon $N \in \mathbb{N}$ by sampling $\mathbf{x}_0 \sim X_0$ and then performing a discrete-time simulation $\mathbf{x}_{n+1} = \mathbf{x}_n + F(\mathbf{x}_n, \mathbf{a}_n) \cdot \Delta$ , where $(\mathbf{a}_n, \mathbf{s}_{n+1}) = \pi(Z(\mathbf{x}_n), \mathbf{s}_n)$ and $\Delta \in \mathbb{R}_{>0}$ is the time increment. Since $F$ , $Z$ , and $\pi$ are deterministic, $\tau$ is fully determined by $\mathbf{x}_0$ and $\pi$ ; $\tau$ can also be represented as a list of actions combined with the initial state, i.e., $\tau = \langle \mathbf{x}_0, (\mathbf{a}_0, \mathbf{a}_1, \dots, \mathbf{a}_N) \rangle$ . + +The degree to which $\phi_S$ and $\phi_G$ are satisfied along a trajectory is quantified by the reward function $R(\pi, \mathbf{x}_0) = -\phi_G(\mathbf{x}_N)^+ - \sum_{n=0}^{N} \phi_S(\mathbf{x}_n)^+$ , where $x^+ = \max(0, x)$ . The optimal policy $\pi^*$ in some policy class $\Pi$ is one which maximizes the expected reward $\mathbb{E}_{\mathbf{x}_0 \sim X_0}[R(\pi, \mathbf{x}_0)]$ . + +Inductive generalization. Beyond optimizing reward, we want a policy that inductively generalizes to unseen environments. Formally, we consider two initial state distributions: a training distribution $X_0^{\text{train}}$ , and a test distribution $X_0^{\text{test}}$ that includes extreme states never encountered during training.
Then, the goal is to train a policy according to $X_0^{\text{train}}$ —i.e., + +$$ +\pi^{*} = \underset{\pi \in \Pi}{\arg \max}\ \mathbb{E}_{\mathbf{x}_{0} \sim X_{0}^{\text{train}}}[R(\pi, \mathbf{x}_{0})], \tag{1} +$$ + +but measure its performance according to $X_0^{\mathrm{test}}$ —i.e., $\mathbb{E}_{\mathbf{x}_0 \sim X_0^{\mathrm{test}}}[R(\pi, \mathbf{x}_0)]$ . + +# 3 PROGRAMMATIC STATE MACHINE POLICIES + +To achieve inductive generalization, we aim to synthesize programmatic policies in the form of state machines. At a high level, state machines can be thought of as compositions of much simpler policies, where the internal state of the state machine (called its mode) indicates which simple policy is currently being used. Thus, state machines are capable of encoding complex nonlinear control tasks, such as iteratively repeating a complex sequence of actions (e.g., the car example in Figure 1). At the same time, state machines are substantially more structured than more typical policy classes such as neural networks and decision trees. + +More precisely, a state machine $\pi$ is a tuple $\langle \mathcal{M},\mathcal{H},\mathcal{G},m_s,m_e\rangle$ . The modes $m_{i}\in \mathcal{M}$ of $\pi$ are the internal memory of the state machine. Each mode $m_{i}\in \mathcal{M}$ corresponds to an action function $H_{m_i}\in \mathcal{H}$ , which is a function $H_{m_i}:\mathcal{O}\to \mathcal{A}$ mapping observations to actions. When in mode $m_{i}$ , the agent takes action $\mathbf{a} = H_{m_i}(\mathbf{o})$ . Furthermore, each pair of modes $(m_i,m_j)$ corresponds to a switching condition $G_{m_i}^{m_j}\in \mathcal{G}$ , which is a function $G_{m_i}^{m_j}:\mathcal{O}\rightarrow \mathbb{R}$ . When an agent in mode $m_{i}$ observes $\mathbf{o}$ such that $G_{m_i}^{m_j}(\mathbf{o})\geq 0$ , the agent transitions from mode $m_{i}$ to mode $m_j$ .
If there are multiple modes $m_j$ with non-negative switching weight $G_{m_i}^{m_j}(\mathbf{o})\geq 0$ , then the agent transitions to the one whose weight is greatest in magnitude; if several modes have equal weight, we take the first one according to a fixed ordering. Finally, $m_s,m_e\in \mathcal{M}$ are the start and end modes, respectively; the state machine mode is initialized to $m_s$ , and the state machine terminates when it transitions to $m_e$ . + +Formally, $\pi (\mathbf{o}_n,\mathbf{s}_n) = (\mathbf{a}_n,\mathbf{s}_{n + 1})$ , where $\mathbf{a}_n = H_{\mathbf{s}_n}(\mathbf{o}_n)$ , $\mathbf{s}_0 = m_s$ , and + +$$ +\mathbf{s}_{n+1} = \begin{cases} m^{*} = \operatorname{dargmax}_{m} G_{\mathbf{s}_{n}}^{m}(\mathbf{o}_{n}) & \text{if } G_{\mathbf{s}_{n}}^{m^{*}}(\mathbf{o}_{n}) \geq 0 \\ \mathbf{s}_{n} & \text{otherwise} \end{cases} \tag{2} +$$ + +where $\operatorname{dargmax}$ is a deterministic arg max that breaks ties as described above. + +Action functions and switching conditions are specified by grammars that encode the space of possible functions as a space of programs. Different grammars can be used for different problems. + +Typical grammars for action functions include constants $\{C_{\alpha}:\mathbf{o}\mapsto \alpha \}$ and proportional controls $\{P_{\alpha_0,\alpha_1}^i:\mathbf{o}\mapsto \alpha_0(\mathbf{o}[i] - \alpha_1)\}$ . A typical grammar for switching conditions is the grammar + +$$ +B := \left\{\mathbf{o}[i] \leq \alpha \right\}_{i} \mid \left\{\mathbf{o}[i] \geq \alpha \right\}_{i} \mid B_{1} \wedge B_{2} \mid B_{1} \vee B_{2} +$$
The grammar for switching conditions also has discrete parameters encoding the choice of expression. For example, in Figure 1, the action functions are constants, and the switching conditions are inequalities over components of $\mathbf{o}$ . + +# 4 FRAMEWORK FOR SYNTHESIZING PROGRAMMATIC POLICIES + +We now describe the adaptive teaching framework for synthesizing state machine policies. In this section, the teacher is abstractly represented as a collection of trajectories $\tau_{\mathbf{x}_0}$ (i.e., an open-loop controller consisting of a fixed sequence of actions) for each initial state $\mathbf{x}_0$ . A key insight is that we can parameterize $\tau_{\mathbf{x}_0}$ in a way that mirrors the structure of the state machine student. As we discuss in Section 4.2, we parameterize $\tau_{\mathbf{x}_0}$ as a "loop-free" state machine. Intuitively, our algorithm efficiently computes $\tau_{\mathbf{x}_0}$ (from multiple initial states $\mathbf{x}_0$ ) using gradient-based optimization, and then "glues" them together using maximum likelihood to construct a state machine policy. + +# 4.1 ADAPTIVE TEACHING VIA VARIATIONAL INFERENCE + +We derive the adaptive teaching formulation by reformulating the learning problem in the framework of probabilistic reinforcement learning, and also consider policies $\pi$ that are probabilistic state machines (see Section 4.3). Then, we use a variational approach to break the problem into the teacher and the student steps. In this approach, the log-likelihood of a policy $\pi$ is defined as follows: + +$$ +\ell (\pi) = \log \mathbb {E} _ {p (\tau | \pi)} [ e ^ {\lambda R (\tau)} ] \tag {3} +$$ + +where $p(\tau \mid \pi)$ is the probability of sampling rollout $\tau$ when using policy $\pi$ from a random initial state $\mathbf{x}_0$ , $\lambda \in \mathbb{R}_{\geq 0}$ is a hyperparameter, and $R(\tau)$ is the reward assigned to $\tau$ . 
We have + +$$ +\ell (\pi) = \log \mathbb {E} _ {q (\tau)} \left[ e ^ {\lambda R (\tau)} \cdot \frac {p (\tau \mid \pi)}{q (\tau)} \right] \geq \mathbb {E} _ {q (\tau)} [ \lambda R (\tau) + \log p (\tau | \pi) - \log q (\tau) ] \tag {4} +$$ + +where $q(\tau)$ is the variational distribution and the inequality follows from Jensen's inequality. Thus, we can optimize $\pi$ by maximizing the lower bound Eq (4) on $\ell(\pi)$ . Since the first and third term of Eq (4) are constant with respect to $\pi$ , we have + +$$ +\pi^ {*} = \underset {\pi} {\arg \max } \mathbb {E} _ {q (\tau)} [ \log p (\tau | \pi) ]. \tag {5} +$$ + +Next, the optimal choice for $q$ (i.e., to minimize the gap in the inequality in Eq (4)) is + +$$ +q ^ {*} = \underset {q} {\arg \min } D _ {\mathrm {K L}} (q (\tau) \| e ^ {\lambda R (\tau)} \cdot p (\tau \mid \pi) / Z) \tag {6} +$$ + +where $Z$ is a normalizing constant. We choose $q$ to have form $q(\tau) = p(\mathbf{x}_0) \cdot \delta (\tau -\tau_{\mathbf{x}_0})$ , where $\delta$ is the Dirac delta function, $p(\mathbf{x}_0)$ is the initial state distribution, and $\tau_{\mathbf{x}_0}$ are the parameters to be optimized, where $\tau_{\mathbf{x}_0}$ encodes a trajectory from $\mathbf{x}_0$ . Then, up to constants, the objective of Eq (6) equals + +$$ +\mathbb {E} _ {p (\mathbf {x} _ {0})} \left[ \log p (\mathbf {x} _ {0}) + \mathbb {E} _ {\delta (\tau - \tau_ {\mathbf {x} _ {0}})} [ \log \delta (\tau - \tau_ {\mathbf {x} _ {0}}) ] - (\lambda R (\tau_ {\mathbf {x} _ {0}}) + \log p (\tau_ {\mathbf {x} _ {0}} | \pi , \mathbf {x} _ {0})) \right]. +$$ + +The first term is constant; the second term is degenerate, but it is also constant. Thus, we have + +$$ +q ^ {*} = \underset {\{\tau_ {\mathbf {x} _ {0}} \}} {\arg \max } \mathbb {E} _ {p (\mathbf {x} _ {0})} \left[ \lambda R \left(\tau_ {\mathbf {x} _ {0}}\right) + \log p \left(\tau_ {\mathbf {x} _ {0}} \mid \pi , \mathbf {x} _ {0}\right) \right]. 
\tag {7}
$$

Thus, we can optimize Eq (3) by alternatingly optimizing Eq (5) and Eq (7).

We interpret these equations as adaptive teaching. At a high level, the teacher (i.e., the variational distribution $q^{*}$ in Eq (7)) is used to guide the optimization of the student (i.e., the state machine policy $\pi^{*}$ in Eq (5)). Rather than compute the teacher in closed form, we approximate it by sampling

![](images/49c10286f647ffed32acecb4d67b495585230319789199eff2bb4c2d44204e01.jpg)
Figure 2: Flowchart connecting the different components of the algorithm.

finitely many initial states $\mathbf{x}_0^k\sim X_0$ and then computing the optimal rollout from $\mathbf{x}_0^k$. Formally, on the $i$th iteration, the teacher and student are updated as follows:

$$
\text{Teacher:} \quad q_i^* = \sum_{k=1}^{K} \delta(\tau - \tau_k^i), \quad \text{where } \tau_k^i = \underset{\tau}{\arg\max}\, \lambda R(\tau) + \log p(\tau \mid \pi^{i-1}, \mathbf{x}_0^k) \quad (\mathbf{x}_0^k \sim X_0) \tag {8}
$$

$$
\text{Student:} \quad \pi_i^* = \underset{\pi}{\arg\max} \sum_{k=1}^{K} \log p(\tau_k^i \mid \pi, \mathbf{x}_0^k) \tag {9}
$$

The teacher objective Eq (8) is both to maximize the reward $R(\tau)$ from a random initial state $\mathbf{x}_0$ and to maximize the probability $p(\tau \mid \pi, \mathbf{x}_0)$ of obtaining the rollout $\tau$ from initial state $\mathbf{x}_0$ according to the current student $\pi$. The latter encourages the teacher to match the structure of the student. Furthermore, the teacher is itself updated at each step to account for the changing structure of the student. The student objective Eq (9) is to imitate the distribution of rollouts according to the teacher. Figure 2 shows the different components of our algorithm.

# 4.2 TEACHER: COMPUTING LOOP-FREE POLICIES

We begin by describing how the teacher solves the trajectory optimization problem Eq (8)—i.e., computing $\tau_{k}$ for a given initial state $\mathbf{x}_0^k$.

**Parameterization.**
One approach is to parameterize $\tau$ as an arbitrary action sequence $(\mathbf{a}_0, \mathbf{a}_1, \ldots)$ and use gradient-based optimization to compute $\tau$. However, this approach can perform poorly—even though we regularize $\tau$ towards the student, it could exhibit behaviors that are hard for the student to capture. Instead, we parameterize $\tau$ in a way that mirrors the student. In particular, we parameterize $\tau$ like a state machine, but rather than having modes and switching conditions that adaptively determine the sequence of action functions to be executed and the duration of execution, the sequence of action functions is fixed and each action function is executed for a fixed duration.

More precisely, we represent $\tau$ as a loop-free policy $\tau = \langle \mathcal{H},\mathcal{T}\rangle$. To execute $\tau$, each action function $H_{i}\in \mathcal{H}$ is applied for the corresponding duration $T_{i}\in \mathcal{T}$, after which $H_{i + 1}$ is applied. The action functions are drawn from the same grammar of action functions as the student.

The obvious way to represent a duration $T_{i}$ is as a number of time steps $T_{i} \in \mathbb{N}$. However, with this choice, we cannot use continuous optimization to optimize $T_{i}$. Instead, we fix the number of discretization steps $P$ for which $H_{i}$ is executed, and vary the time increment $\Delta_{i} = T_{i} / P$—i.e., $\mathbf{x}_{n + 1} \approx \mathbf{x}_n + F(\mathbf{x}_n, H_i(\mathbf{o})) \cdot \Delta_i$. We enforce $\Delta_i \leq \Delta_{\max}$ for a small $\Delta_{\max}$ to ensure that the discrete-time approximation of the dynamics is sufficiently accurate.

Figure 3(a) and (d) show examples of loop-free policies for two different initial states and two different teacher iterations. The loop-free policies in (d) are regularized to match the student's state-machine policy learned in the previous iteration (shown in Figure 3(c)).
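The execution model just described (a fixed sequence of action functions, each run for $P$ Euler steps with increment $\Delta_i = T_i / P$) can be sketched as follows. The one-dimensional dynamics, observation map, and all numeric values below are illustrative assumptions, not the paper's benchmarks; for simplicity the sketch clips $\Delta_i$ at $\Delta_{\max}$, whereas the paper enforces the bound as an optimization constraint.

```python
def execute_loop_free(x0, actions, durations, F, obs, P=10, dmax=0.05):
    """Roll out a loop-free policy tau = <H, T> (a minimal sketch).

    Each action function H_i runs for exactly P Euler steps with time
    increment Delta_i = T_i / P:  x <- x + F(x, H_i(o)) * Delta_i.
    """
    x = x0
    traj = [x]
    for H, T in zip(actions, durations):
        delta = min(T / P, dmax)  # simplification: clip instead of constrain
        for _ in range(P):
            x = x + F(x, H(obs(x))) * delta
            traj.append(x)
    return traj

# Toy 1-D integrator: F(x, a) = a, and the observation is the state itself.
traj = execute_loop_free(
    x0=0.0,
    actions=[lambda o: 1.0, lambda o: -0.5],  # two constant action functions
    durations=[0.5, 0.5],
    F=lambda x, a: a,
    obs=lambda x: x,
)
print(traj[-1])  # 0.5 * 1.0 - 0.5 * 0.5 = 0.25 (up to float rounding)
```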
**Optimization.** We use model-based trajectory optimization to compute loop-free policies. The main challenge is handling the term $p(\tau \mid \pi, \mathbf{x}_0)$ in the objective. Symbolically computing this probability is hard because of the discrete-continuous structure of $\pi$. Another alternative is to precompute the probabilities of all the trajectories $\tau$ that can be derived from $\pi$. However, this is also infeasible because the number of trajectories is unbounded. Thus, we perform trajectory optimization in two phases. First, we use a sampling-based optimization algorithm to obtain a set of good trajectories $\tau^1, \dots, \tau^L$. Then, we apply gradient-based optimization, replacing $p(\cdot \mid \pi, \mathbf{x}_0)$ with a term that regularizes $\tau$ to be close to $\{\tau^\ell\}_{\ell=1}^L$.

![](images/600c038c384a2bc4af72aafe8da4c4216b55261348e117d2a3bda4058ff952d5.jpg)
Figure 3: Visualization showing the student-teacher interaction for two iterations. (a) The loop-free policies (with their corresponding rewards) learned by the teacher for two different initial states. Here, the boxes signify the different segments in the loop-free policies, the colors signify different actions, and the lengths of the boxes signify the durations of the segments. (b) The mapping between the segments and the modes in the state machine—i.e., $p(\mu = m_j)$. Each box shows the composition of modes vertically distributed according to their probabilities. For example, the third segment in the loop-free policy for $\mathbf{x}_0^1$ has $p(\mu = \text{Green}) = 0.65$ and $p(\mu = \text{Brown}) = 0.35$. (c) The most probable rollouts from the state-machine policy learned by the student. Finally, (d), (e) and (f) are similar to (a), (b) and (c), but for the second iteration.
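A minimal sketch of this two-phase scheme, with loop-free policies reduced to flat parameter vectors: phase one keeps the best sampled trajectories, and phase two evaluates the regularized surrogate probability (the gradient step itself is omitted). The sample probabilities and rewards below are hypothetical.

```python
import math

def select_anchors(samples, lam, rho=10):
    """Phase one (one CEM-like iteration): keep the top-rho sampled
    trajectories ranked by log p_l + lam * R(tau_l), i.e., by
    p_l * exp(lam * R(tau_l)).

    `samples` is a list of (tau_params, p_l, reward) triples, where
    tau_params are the loop-free policy parameters.
    """
    ranked = sorted(samples, key=lambda s: math.log(s[1]) + lam * s[2],
                    reverse=True)
    return ranked[:rho]

def surrogate_log_prob(tau, anchors):
    """Phase two surrogate:
    p(tau | pi, x0) ~ sum_l p_l exp(-d(tau, tau_l)) / sum_l p_l,
    with d the L2 distance between parameter vectors."""
    num = sum(p * math.exp(-math.dist(tau, t)) for t, p, _ in anchors)
    den = sum(p for _, p, _ in anchors)
    return math.log(num / den)

samples = [([0.0, 1.0], 0.3, 5.0), ([2.0, 2.0], 0.5, 1.0), ([0.1, 1.1], 0.2, 6.0)]
anchors = select_anchors(samples, lam=1.0, rho=2)
# A candidate tau near the high-reward anchors gets a higher surrogate probability.
print(surrogate_log_prob([0.05, 1.05], anchors) > surrogate_log_prob([3.0, 3.0], anchors))
```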
The first phase proceeds as follows: (i) sample $\tau^1, \dots, \tau^L$ using $\pi$ from $\mathbf{x}_0$, and let $p^\ell$ be the probability of $\tau^\ell$ according to $\pi$, (ii) sort these samples in decreasing order of the objective $p^\ell \cdot e^{\lambda R(\tau^\ell)}$, and (iii) discard all but the top $\rho$ samples. This phase essentially performs one iteration of CEM (Mannor et al., 2003). Then, in the second phase, we replace the probability expression with $p(\tau \mid \pi, \mathbf{x}_0) \approx \frac{\sum_{\ell=1}^{\rho} p^\ell \cdot e^{-d(\tau, \tau^\ell)}}{\sum_{\ell=1}^{\rho} p^\ell}$, which we then optimize using gradient-based methods. Here, $d(\tau, \tau^\ell)$ is a distance metric between two loop-free policies, defined as the $L_2$ distance between the parameters of $\tau$ and $\tau^\ell$. We set the number of retained samples to $\rho = 10$; for our benchmarks, increasing $\rho$ above 10 did not reduce the number of student-teacher iterations, so we believe this approximation loses little information.

# 4.3 STUDENT: LEARNING STRUCTURED STATE MACHINE POLICIES VIA IMITATION

Next, we describe how the student solves the maximum likelihood problem Eq (9) to compute $\pi^{*}$.

**Probabilistic state machines.** Although the output of our algorithm is a student policy that is a deterministic state machine, our algorithm internally relies on distributions over rollouts induced by the student policy to guide the teacher. Thus, we represent the student policy as a probabilistic state machine during learning. To do so, we simply make the action functions $H_{m_j}$ and switching conditions $G_{m_{j_1}}^{m_{j_2}}$ probabilistic—instead of constant parameters in the grammar for action functions and switching conditions, we now have Gaussian distributions $\mathcal{N}(\alpha ,\sigma)$. Then, when executing $\pi$, we obtain i.i.d.
samples of the parameters $H_{m_j}' \sim H_{m_j}$ and $\{(G_{m_j}^{m_{j'}})' \sim G_{m_j}^{m_{j'}}\}_{m_{j'}}$ every time we switch to mode $m_j$, and act according to $H_{m_j}'$ and $\{(G_{m_j}^{m_{j'}})'\}$ until the mode switches again. By re-sampling these parameters on every mode switch, we avoid dependencies across different parts of a rollout or different rollouts. On the other hand, by not re-sampling these parameters within a mode switch, we ensure that the structure of $\pi$ remains intact within a mode.

**Optimization.** Each $\tau_{k}$ can be decomposed into segments $(k,i)$ where action function $H_{k,i}$ is executed for duration $T_{k,i}$. For example, each block in Figure 3(a) is a segment. Furthermore, for the student $\pi$, let $H_{m_j}$ be the action function distribution for mode $m_j$ and $G_{m_{j_1}}^{m_{j_2}}$ be the switching condition distribution for mode $m_{j_1}$ to mode $m_{j_2}$. Note that $H_{m_j}$ and $G_{m_{j_1}}^{m_{j_2}}$ are distributions whereas $H_{k,i}$ and $T_{k,i}$ are constants.

![](images/4c8b287df36309ae77f7dd91201eeaa06cb7e130772933a15e46956a30001e19.jpg)
Figure 4: Comparison of performances on train (left) and test (middle) distributions. Our approach outperforms the baselines on all benchmarks in terms of test performance. An empty bar indicates that the policy learned for that experiment failed on all runs. We also plot test performance for different choices of training distribution for the Car benchmark (right).

![](images/8efacb854ad0d14fa7fa8b7e74d9675f1f5b57f17919ec1107e2f8c8fac367ef.jpg)

![](images/269e3c58c841a1648321b0bd6fb1a4745aabe66cc641f9390658343c74a3d251.jpg)

We have

$$
p (\tau_ {k} \mid \pi , {\bf x} _ {0} ^ {k}) = \prod_ {i} p (H _ {k, i} \mid \pi , {\bf x} _ {0} ^ {k}) \cdot p (T _ {k, i} \mid \pi , {\bf x} _ {0} ^ {k}).
$$

For each $(k,i)$, let $\mu_{k,i}$ be the latent random variable indicating the $i$th mode used by $\pi$ starting from $\mathbf{x}_0^k$; in particular, $\mu_{k,i}$ is a categorical random variable that takes values in the modes $\{m_j\}$. Here, $\mu_{k,i} = m_j$ means that $H_{k,i}$ is sampled from the distribution $H_{m_j}$ and $T_{k,i}$ is determined by the sampled switching conditions from the distributions $\{G_{m_j}^{m_j'}\}$. Introducing the latent variables $\mu_{k,i}$ allows the student to compute $\pi^*$ by computing $H_{m_j}^*$ and $G_{m_{j_1}}^{m_{j_2}*}$ separately. In Figure 3, (b) and (e) show the learned mode mappings $p(\mu = m_j)$ for the segments in the loop-free policies shown in (a) and (d) respectively.

Since directly optimizing the likelihood over $\pi$ is hard in the presence of the latent variables $\mu_{k,i}$, we use the standard expectation maximization (EM) approach to optimize $\pi$, where the E-step computes the distributions $p(\mu_{k,i} = m_j)$ assuming $\pi$ is fixed, and the M-step optimizes $\pi$ assuming the probabilities $p(\mu_{k,i} = m_j)$ are fixed. See Appendix A for details. In Figure 3, (c) and (f) show the most probable rollouts from the state-machine policies learned at the end of the EM approach for two different student iterations.
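To make the E-step concrete, the sketch below computes posterior mode assignments for a few teacher segments. It is a simplification that keeps only the action-function likelihood term of the full posterior (the duration term from Appendix A is omitted), and the mode parameters and segment values are hypothetical.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def e_step(segment_params, modes):
    """E-step sketch: posterior p(mu_{k,i} = m_j) for each teacher segment,
    using only the action-function likelihood term (an assumption of this
    simplified sketch).

    `segment_params` are scalar action parameters alpha_{H_{k,i}};
    `modes` maps mode names to (mean, std) of H_{m_j}.
    """
    posts = []
    for a in segment_params:
        lik = {m: normal_pdf(a, mu, s) for m, (mu, s) in modes.items()}
        z = sum(lik.values())
        posts.append({m: v / z for m, v in lik.items()})
    return posts

modes = {"m1": (0.0, 0.5), "m2": (3.0, 0.5)}
posts = e_step([0.1, 2.9, 1.5], modes)
# The first two segments map confidently to m1 and m2; the third is ambiguous.
print(posts[0]["m1"] > 0.9, posts[1]["m2"] > 0.9)
```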
# 5 EXPERIMENTS

**Benchmarks.** We use 6 control problems, each with different training and test distributions (summarized in Figure 8 in Appendix C): (i) Car, the benchmark in Figure 1, (ii) Quad, where the goal is to maneuver a 2D quadcopter through an obstacle course by controlling its vertical acceleration, where we vary the obstacle course length, see Figure 6 leftmost, (iii) QuadPO, a variant where the obstacles are unobserved but periodic (so the agent can perform well using a repeating motion), see Figure 6 (second from left), (iv) Pendulum, where we vary the pendulum mass, (v) Cart-Pole, where we vary the time horizon and pole length, and (vi) Swimmer, where the goal is to move the swimmer forward through a viscous liquid, where we vary the length of the segments comprising the robot swimmer.

**Baselines.** We compare against: (i) RL: PPO with a feed-forward neural network policy, (ii) RL-LSTM: PPO with an LSTM, (iii) Direct-Opt: learning a state machine policy directly via numerical optimization. Hyperparameters are chosen to maximize performance on the training distribution. More details about the baselines and the hyperparameters can be found in Appendices B.2, B.3, & B.4. Each algorithm is trained 5 times; we choose the one that performs best on the training distribution.

Note that for the comparison to RL approaches, we use model-free algorithms, whereas, in our algorithm, the teacher uses model-based optimization. We do not compare against model-based RL approaches because (a) even model-free RL approaches achieve almost perfect performance on the training distribution (see Figure 4 left) and (b) our main goal is to compare the performance of our policies and the neural network policies on the test distribution and not the training distribution. Moreover, in case the model of the system is unknown, we can use known algorithms to infer the model from data (Ahmadi et al., 2018) and then use this learned model in our algorithm.
![](images/54177baeb52bbb142af2201fc47f567f708161dccd5285e16775d33947c62f01.jpg)
(a) RL Train

![](images/6320791857c8d80f0bfa67b536cf9abe61ae6a9b775ba750c22fc35f5f29aadb.jpg)
(b) RL Test

![](images/dca2c4d2217b6c88efb9663dd5ce21f2d6b1f620c6c12bdeefc9b5dddee3779b.jpg)
(c) Original

![](images/5ef9e465b6c18d7d13182980098671af0c9f81b04c82fcc8399439c4f8df5915.jpg)
(d) User change 1

![](images/467668a1c55ccae6f94a7a8f60884ae17eecb391fb9ba12ce81eb607b75f65de.jpg)
(e) User change 2

![](images/20160998df38d376ef0a1c59c24368e03b7ebdf934d080d00131f870f673f874.jpg)
Figure 5: (a-c) The RL policy generates unstructured trajectories, and therefore does not generalize from (a) the training distribution to (b) the test distribution. In contrast, our state machine policy in (c) generates a highly structured trajectory that generalizes well. (c-e) A user can modify our state machine policy to improve performance. In (d), the user sets the steering angle to the maximum value 0.5, and in (e), the user sets the thresholds in the switching conditions $G_{m_1}^{m_2}$, $G_{m_2}^{m_1}$ to 0.1.

![](images/06a3683c39000b4cdcf415b3d74d55ac618a3c248ccac5000d9c87262a4c7505.jpg)
Figure 6: Left: Trajectories for the Quad (leftmost) and QuadPO (second from the left) benchmarks using our state machine policy. Right: Graph of vertical acceleration over time for both our policy (red) and the neural network policy (blue), for Quad (second from the right) and QuadPO (rightmost).

![](images/621e0e4362b695db60ba11736fe9f66b672a5c77e63c3742b0d337100b54982a.jpg)

![](images/17c0cdf5dbcffa3bf1b29bc842440276d32ce2df4183af1048b95d0a6349ed93.jpg)

# 5.1 RESULTS

Figure 4 shows results on both training and test distributions. We measure performance as the fraction of rollouts (out of 1000) that both satisfy the safety specification and reach the goal.

**Inductive generalization.** For all benchmarks, our policy generalizes well on the test distribution.
In four cases, we generalize perfectly (all runs satisfy the metric). For Quad and QuadPO, the policies result in collisions on some runs, but only towards the end of the obstacle course.

**Comparison to RL.** The RL policies mostly achieve good training performance, but generalize poorly since they over-specialize to states seen during training. The exceptions are Pendulum and Swimmer. Even in these cases, the RL policies take longer to reach the goals than our state machine policies (see Figure 10 and Figure 11 in Appendix C). For QuadPO, the RL policy does not achieve a good training performance since the states are partially observed. We may expect the LSTM policies to alleviate this issue. However, the LSTM policies often perform poorly even on the training distribution, and also generalize worse than the feed-forward neural network policies.

**Comparison to direct-opt.** The state machine policies learned using direct-opt perform poorly even in training because of the numerous local optima arising due to the structural constraints. This illustrates the need to use adaptive teaching to learn state machine policies.

# 5.2 QUALITATIVE ANALYSIS

**Behavior of policy.** We empirically analyze the policies. Figure 5 shows the trajectory taken by the RL policy (a), compared to our policy (c), from a training initial state for the Car benchmark. The RL policy does not exhibit a repeating behavior, which causes it to fail on the trajectory from a test state shown in (b). Similarly, Figure 6 (right) compares the actions taken by our policy to those taken by the RL policy on Quad and QuadPO. Our policy produces smooth repeating actions, whereas the RL policy does not. Action vs. time graphs for the other benchmarks can be found in the appendix (Figures 12, 13, & 14), and they all show similar behaviors.

**Varying the training distribution.** We study how the test performance changes as we vary the training distribution on the Car benchmark.
We vary $X_0^{\mathrm{train}}$ as $d \sim [d_{\mathrm{min}}, 13]$, where $d_{\mathrm{min}} \in \{13, 12.5, 12, 11.5, 11.2, 11\}$, but fix $X_0^{\mathrm{test}}$ to $d \sim [11, 12]$. Figure 4 (right) shows how test performance varies with $d_{\mathrm{min}}$ for both our policy and the RL policy. Our policy inductively generalizes for a wide range of training distributions. In contrast, the test performance of the RL policy initially increases as the training distribution gets larger, but it eventually starts declining, because its training performance itself starts to decline. Thus, in some settings, our approach (even when trained on smaller distributions) can produce policies that outperform the neural network policies produced by RL (even when trained on the full distribution).

**Interpretability.** An added benefit of our state machine policies is interpretability. In particular, we demonstrate the interpretability of our policies by showing how a user can modify a learned state machine policy. Consider the policy from Figure 1e for the autonomous car. We manually make the following changes: (i) increase the steering angle in $H_{m_1}$ to its maximum value 0.5, and (ii) decrease the gap maintained between the agent and the black cars by changing the switching condition $G_{m_1}^{m_2}$ to $d_f \leq 0.1$ and $G_{m_2}^{m_1}$ to $d_b \leq 0.1$. Figure 5 demonstrates these changes—it shows trajectories obtained using the original policy (c), the first modified policy (d), and the second modified policy (e). There is no straightforward way to make these kinds of changes to a neural network policy.

# 6 CONCLUSION

We have proposed an algorithm for learning state machine policies that inductively generalize to novel environments. Our approach is based on a framework called adaptive teaching that alternately learns a student that imitates a teacher, who in turn adapts to the structure of the student.
We demonstrate that our policies inductively generalize better than RL policies.

In the future, we will explore more complex grammars for the action functions and the switching conditions, for example, with some parts being small neural networks, while still retaining the ability to learn generalizable behaviors. Moreover, we will extend our approach to use model-free techniques in the teacher's algorithm to make our approach more aligned with the reinforcement learning premise. Finally, we believe that the idea of learning programmatic representations and using the adaptive teaching algorithm to deal with mixed discrete-continuous problems can be applied to other learning settings such as supervised learning and unsupervised learning.

# ACKNOWLEDGMENTS

This work was supported by ONR N00014-17-1-2699 and NSF Award CCF-1910769.

# REFERENCES

Douglas Aberdeen and Jonathan Baxter. Scaling internal-state policy-gradient methods for pomdps. In ICML, pp. 3-10, 2002.
Mohamadreza Ahmadi, Ufuk Topcu, and Clarence Rowley. Control-oriented learning of lagrangian and hamiltonian systems. In 2018 Annual American Control Conference (ACC), pp. 520-525. IEEE, 2018.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 242-252, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/allen-zhu19a.html.
David Andre and Stuart J Russell. State abstraction for programmable reinforcement learning agents. In AAAI/IAAI, pp. 119-125, 2002.
Osbert Bastani, Yewen Pu, and Armando Solar-Lezama. Verifiable reinforcement learning via policy extraction. In Advances in Neural Information Processing Systems, pp. 2494-2504, 2018.
Pavol Bielik, Veselin Raychev, and Martin Vechev.
Program synthesis for character level language modeling. In ICLR, 2017. + +Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via recursion. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. URL https://openreview.net/forum?id=BkbY4psgg. +Luca Carlone and Giuseppe C. Calafiore. Convex relaxations for pose graph optimization with outliers. IEEE Robotics and Automation Letters, 3(2):1160-1167, 2018. doi: 10.1109/LRA.2018.2793352. URL https://doi.org/10.1109/LRA.2018.2793352. +Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines. https://github.com/openai/baselines, 2017. +Kevin Ellis, Armando Solar-Lezama, and Josh Tenenbaum. Unsupervised learning by program synthesis. In Advances in neural information processing systems, pp. 973-981, 2015. +Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, and Josh Tenenbaum. Learning to infer graphics programs from hand-drawn images. In Advances in Neural Information Processing Systems, pp. 6060-6069, 2018. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126-1135. JMLR.org, 2017. +Eric A Hansen. Solving pomdps by searching in policy space. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence, pp. 211-219. Morgan Kaufmann Publishers Inc., 1998. +Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015. +Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning, pp. 1-9, 2013. +Shie Mannor, Reuven Y Rubinstein, and Yohai Gat. 
The cross entropy method for fast policy search. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 512-519, 2003. +Nicolas Meuleau, Leonid Peshkin, Kee-Eung Kim, and Leslie Pack Kaelbling. Learning finite-state controllers for partially observable environments. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pp. 427-436. Morgan Kaufmann Publishers Inc., 1999. +Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krahenbuhl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning. arXiv preprint arXiv:1810.12282, 2018. +Ronald Parr and Stuart J Russell. Reinforcement learning with hierarchies of machines. In Advances in neural information processing systems, pp. 1043-1049, 1998. +Leonid Peshkin, Nicolas Meuleau, and Leslie Kaelbling. Learning policies with external memory. arXiv preprint cs/0103003, 2001. +Pascal Poupart and Craig Boutilier. Bounded finite state controllers. In Advances in neural information processing systems, pp. 823-830, 2004. +Aravind Rajeswaran, Kendall Lowrey, Emanuel V Todorov, and Sham M Kakade. Towards generalization and simplicity in continuous control. In Advances in Neural Information Processing Systems, pp. 6550-6561, 2017. +Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635, 2011. +Stefan Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6): 233-242, 1999. + +Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, and Swarat Chaudhuri. Houdini: Lifelong learning as program synthesis. In Advances in Neural Information Processing Systems, pp. 8701-8712, 2018. +Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, and Swarat Chaudhuri. 
Programmatically interpretable reinforcement learning. arXiv preprint arXiv:1804.02477, 2018. +Abhinav Verma, Hoang Minh Le, Yisong Yue, and Swarat Chaudhuri. Imitation-projected policy gradient for programmatic reinforcement learning. CoRR, abs/1907.05431, 2019. URL http://arxiv.org/abs/1907.05431. +Martin J Wainwright, Michael I Jordan, et al. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008. +Halley Young, Osbert Bastani, and Mayur Naik. Learning neurosymbolic generative models via program synthesis. In International Conference on Machine Learning, pp. 7144-7153, 2019. +He Zhu, Zikang Xiong, Stephen Magill, and Suresh Jagannathan. An inductive synthesis framework for verifiable reinforcement learning. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 686-701. ACM, 2019. + +# A EXPECTATION MAXIMIZATION FOR STUDENT OPTIMIZATION + +# A.1 COMPUTING $p(\tau \mid \pi, \mathbf{x}_0)$ + +First, note that we have + +$$ +p (\tau_ {k} \mid \pi , \mathbf {x} _ {0} ^ {k}) = \prod_ {i} p (H _ {k, i} \mid \pi , \mathbf {x} _ {0} ^ {k}) \cdot p (T _ {k, i} \mid \pi , \mathbf {x} _ {0} ^ {k}). +$$ + +where + +$$ +p \left(H _ {k, i} \mid \pi , \mathbf {x} _ {0} ^ {k}\right) = \sum_ {j} p \left(H _ {k, i} \mid H _ {m _ {j}}\right) \cdot p \left(\mu_ {k, i} = m _ {j}\right). +$$ + +Similarly, the duration $T_{k,i}$ is determined both by the current mode $\mu_{i,k} = m_{j_1}$ , and by the switching conditions $G_{m_{j_1}}^- = \{G_{m_{j_1}}^{m_{j_2}}\}_{m_{j_2}}$ from the current mode $m_{j_1}$ into some other mode $m_{j_2}$ . More precisely, let $\gamma_{k,i}$ denote the trajectory (sequence of states) of the $(k,i)$ segment of $\tau_k$ , and let $\zeta(\gamma_{k,i}, G_{m_j}^-)$ denote the earliest time at which a switching condition $G \in G_{m_j}^-$ becomes true along $\gamma_{k,i}$ . 
Since $G \in G_{m_j}^-$ are distributions, $\zeta(\gamma_{k,i}, G_{m_j}^-)$ is a distribution on transition times. Then, we have + +$$ +p (T _ {k, i} \mid \pi , {\bf x} _ {0} ^ {k}) = \sum_ {m _ {j _ {1}}} \sum_ {m _ {j _ {2}}} p (\mu_ {k, i} = m _ {j _ {1}}) \cdot p (\mu_ {k, i + 1} = m _ {j _ {2}}) \cdot p (T _ {k, i} \mid G _ {m _ {j _ {1}}} ^ {m _ {j _ {2}}}, G _ {m _ {j _ {1}}} ^ {-}) +$$ + +$$ +p (T _ {k, i} \mid G _ {m _ {j _ {1}}} ^ {m _ {j _ {2}}}, G _ {m _ {j _ {1}}} ^ {-}) = p (T _ {k, i} = \zeta (\gamma_ {k, i}, G _ {m _ {j _ {1}}} ^ {m _ {j _ {2}}})) \cdot \prod_ {m _ {j _ {3}} \neq m _ {j _ {2}}} p (T _ {k, i} < \zeta (\gamma_ {k, i}, G _ {m _ {j _ {1}}} ^ {m _ {j _ {3}}})). +$$ + +In other words, $T_{k,i}$ is the duration until $G_{m_{j_1}}^{m_{j_2}}$ triggers, conditioned on none of the conditions $G_{m_{j_1}}^{m_{j_3}}$ triggering (where $m_{j_3} \neq m_{j_2}$ ). + +# A.2 OPTIMIZING THE STUDENT POLICY + +Numerically optimizing the maximum likelihood objective to compute $\pi^{*}$ is hard because it requires integrating over all possible choices for the latent variables $\mu_{k,i}$ . For example, if the teacher generates 10 loop-free policies every iteration and there are 10 modes in each loop-free policy, and 4 modes in the state-machine, the number of choices for the latent variables is $4^{100}$ , which makes the enumeration infeasible. The expectation-maximization method provides an efficient way for computing the maximum likelihood, by alternatingly optimizing for the latent variables and the state-machine parameters. The E-step computes the probability distributions $p(\mu_{k,i} = m_j)$ for a fixed $\pi$ , and the M-step optimizes $H_{m_j}$ and $G_{m_{j_1}}^{m_{j_2}}$ given $p(\mu_{k,i} = m_j)$ . + +E-step. 
Assuming $\pi$ is fixed, we have + +$$ +p \left(\mu_ {k, i} = m _ {j} \mid \pi , \left\{\tau_ {k} \right\}\right) = \frac {p \left(H _ {k , i} \mid H _ {m _ {j}}\right) \cdot p \left(T _ {k , i} = \zeta \left(\gamma_ {k , i} , G _ {m _ {j}} ^ {-}\right)\right)}{\sum_ {m _ {j} ^ {\prime}} p \left(H _ {k , i} \mid H _ {m _ {j} ^ {\prime}}\right) \cdot p \left(T _ {k , i} = \zeta \left(\gamma_ {k , i} , G _ {m _ {j} ^ {\prime}} ^ {-}\right)\right)}. \tag {10} +$$ + +M-step. Assuming $p(\mu_{k,i} = m_j)$ is fixed, we solve + +$$ +\underset {\{H _ {m _ {j}} \}} {\arg \max } \sum_ {k, i} p \left(\mu_ {k, i} = m _ {j}\right) \cdot \log p \left(H _ {k, i} \mid H _ {m _ {j}}\right) \tag {11} +$$ + +$$ +\begin{array}{l} \arg \max _ {\{G _ {m _ {j _ {1}}} ^ {m _ {j _ {2}}} \}} \sum_ {k, i} p (\mu_ {k, i} = m _ {j _ {1}}) \cdot p (\mu_ {k, i + 1} = m _ {j _ {2}}) \cdot \log p (T _ {k, i} = \zeta (\gamma_ {k, i}, G _ {m _ {j _ {1}}} ^ {m _ {j _ {2}}})) \\ + p \left(\mu_ {k, i} = m _ {j _ {1}}\right) \cdot \left(1 - p \left(\mu_ {k, i + 1} = m _ {j _ {2}}\right)\right) \cdot \log p \left(T _ {k, i} < \zeta \left(\gamma_ {k, i}, G _ {m _ {j _ {1}}} ^ {m _ {j _ {2}}}\right)\right) \tag {12} \\ \end{array} +$$ + +For $G_{m_{j_1}}^{m_{j_2}}$ , the first term handles the case $\mu_{k,i+1} = m_{j_2}$ , where we maximize the probability that $G_{m_{j_1}}^{m_{j_2}}$ makes the transition at duration $T_{k,i}$ , and the second term handles the case $\mu_{k,i+1} \neq m_{j_2}$ , where we maximize the probability that $G_{m_{j_1}}^{m_{j_2}}$ does not make the transition until after duration $T_{k,i}$ . + +We briefly discuss how to solve these equations. For action functions, suppose that $H$ encodes the distribution $\mathcal{N}(\alpha_H,\sigma_H^2)$ over action function parameters. 
Then, we have + +$$ +\begin{array}{l} \alpha_ {H _ {m _ {j}}} ^ {*} = \frac {\sum_ {k , i} p \left(\mu_ {k , i} = m _ {j}\right) \cdot \alpha_ {H _ {k , i}}}{\sum_ {k , i} p \left(\mu_ {k , i} = m _ {j}\right)} \\ \left(\sigma_ {H _ {m _ {j}}} ^ {*}\right) ^ {2} = \frac {\sum_ {k , i} p \left(\mu_ {k , i} = m _ {j}\right) \cdot \left(\alpha_ {H _ {k , i}} - \alpha_ {H _ {m _ {j}}} ^ {*}\right) \left(\alpha_ {H _ {k , i}} - \alpha_ {H _ {m _ {j}}} ^ {*}\right) ^ {T}}{\sum_ {k , i} p \left(\mu_ {k , i} = m _ {j}\right)} \\ \end{array} +$$ + +Solving for the parameters of $G_{m_{j_1}}^{m_{j_2}}$ is more challenging, since there can be multiple kinds of expressions in the grammar that are switching conditions, which correspond to discrete parameters, and we need to optimize over these discrete choices. To do so, we perform a greedy search over these discrete choices (see Section A.3 for details on the greedy strategy). For each choice considered during the greedy search, we encode Eq (12) as a numerical optimization and solve it to compute the corresponding means $\alpha_{G_{m_{j_1}}^{m_{j_2}}}^*$ and standard deviations $\sigma_{G_{m_{j_1}}^{m_{j_2}}}^*$ . Then, we choose the discrete choice that achieves the best objective value according to Eq (12). + +Computing the optimal parameters for switching conditions is more expensive than doing so for action functions; thus, on each student iteration, we iteratively solve Eq (10) and Eq (11) multiple times, but only solve Eq (12) once. + +The EM method does not guarantee global optima but usually works well in practice. In addition, since computing the switching conditions is expensive, we had to restrict the number of EM iterations. However, note that even if the EM algorithm didn't converge, our overall algorithm can still recover by using additional teacher-student interactions. 
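In the scalar case, the closed-form updates above reduce to a responsibility-weighted mean and variance over the teacher's segment parameters. A small sketch, with hypothetical responsibilities $p(\mu_{k,i} = m_j)$ standing in for the output of an E-step:

```python
def m_step_action(segment_params, posteriors, mode):
    """Closed-form M-step update for the action-function distribution of one
    mode (scalar case): the new mean is the responsibility-weighted average
    of the segment parameters alpha_{H_{k,i}}, and the new variance is the
    responsibility-weighted spread around it.
    """
    w = [p[mode] for p in posteriors]
    tot = sum(w)
    mean = sum(wi * a for wi, a in zip(w, segment_params)) / tot
    var = sum(wi * (a - mean) ** 2 for wi, a in zip(w, segment_params)) / tot
    return mean, var

# Hypothetical hard responsibilities: segments 0 and 1 belong to m1, segment 2 to m2.
params = [0.0, 0.2, 3.0]
posts = [{"m1": 1.0, "m2": 0.0}, {"m1": 1.0, "m2": 0.0}, {"m1": 0.0, "m2": 1.0}]
mean, var = m_step_action(params, posts, "m1")
print(mean, var)  # mean 0.1, variance 0.01 (up to float rounding)
```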
An alternative would be to run the EM algorithm multiple times or for longer to get better results per student iteration, and "maybe" reduce the total number of teacher-student iterations. We say "maybe" because the EM algorithm might have already converged to the global optimum, making the extra EM iterations useless. The trade-off between our approach and this alternative depends on whether the teacher's algorithm or the student's algorithm is expensive for a particular benchmark.

However, from Figure 15, we can see that some of our benchmarks already use very few $(< 5)$ teacher-student iterations (Car, QuadPO, Pendulum, Mountain car, and Swimmer). Of the other three benchmarks that needed many iterations, for two of them (Cartpole and Acrobot), the student's algorithm is as expensive as the teacher's algorithm. This justifies our decision not to run the EM algorithm multiple times or for longer.

# A.3 SYNTHESIZING SWITCHING CONDITIONS

Next, we describe how we search over the large number of discrete choices in the grammar for switching conditions. It is not hard to show that in Eq (12), the objectives for the switching condition parameters $G_{m_{j_1}}^{m_{j_2}}$ corresponding to different transitions $(m_{j_1}, m_{j_2})$ decompose into separate problems. Therefore, we can perform the search for each transition $(m_{j_1}, m_{j_2})$ separately. For each transition, the naive approach would be to search over the possible derivations in the context-free grammar for switching conditions to some bounded depth. However, this search space is exponential in the depth due to the productions $B := B \land B$ and $B := B \lor B$. Thus, we employ a greedy search strategy to avoid the exponential blowup.

Intuitively, our search strategy is to represent switching conditions as a kind of decision tree, and then use a greedy algorithm to search over decision trees. Our search strategy is similar to (but simpler than) the one in Bielik et al. (2017).
In particular, we can equivalently represent a switching condition as a decision tree, where the internal nodes have the form $\mathbf{o}[i] \leq \alpha$ or $\mathbf{o}[i] \geq \alpha$ (where $i \in \{1, \dots, d_O\}$ and $\alpha \in \mathbb{R}$ are parameters), and the leaf nodes are labeled with "Switch" or "Don't switch"—e.g., Figure 7 shows two examples of switching conditions expressed as decision trees. Then, our algorithm initializes the switching condition to a single leaf node—i.e., $G_{\mathrm{cur}} \gets \text{"Switch"}$. At each step, we consider switching conditions $G \in \mathrm{next}(G_{\mathrm{cur}})$ that expand a single leaf node of $G_{\mathrm{cur}}$; among these, we choose $G_{\mathrm{cur}}$ to be the one that minimizes a loss $\mathrm{cost}(G)$.

![](images/14775c87042ae92cb8be4bbd5c8a933ffeeae23e6978481691a2a315a2dd3d8e.jpg)

![](images/3ffb472de0de8cd38b91154551df62167e8999f9d052ef36b299f055766d0afc.jpg)
Figure 7: Switching conditions represented as decision trees.

Algorithm 1 Greedy algorithm for learning switching conditions.
```txt
procedure LEARNSWITCHINGCONDITION
    G_cur ← "Switch"
    while |G_cur| < N do
        G_cur ← argmin_{G ∈ next(G_cur)} cost(G)
    return G_cur
```

More precisely, to construct $\mathrm{next}(G_{\mathrm{cur}})$, we iterate over all leaf nodes $L \in \mathrm{leaves}(G_{\mathrm{cur}})$ and all expressions $E \in \mathcal{E}$, where

$$
\mathcal{E} = \left\{ \text{if } \mathbf{o}[i] \sim \alpha \text{ then ``Switch'' else ``Don't switch''} \;\middle|\; i \in \{1, \dots, d_O\},\ \alpha \in \mathbb{R},\ \sim\, \in \{\geq, \leq\} \right\}
$$

Here, $\sim\, \in \{\geq, \leq\}$ is an inequality relation, $i \in \{1, \dots, d_O\}$ is a component of $\mathbf{o}$, and $\alpha \in \mathbb{R}$ is a threshold.
For each pair $L$ and $E$ , we consider the decision tree $G$ obtained by replacing $L$ with $E$ in $G_{\mathrm{cur}}$ . The set next $(G_{\mathrm{cur}})$ contains all $G$ constructed in this way. + +Next, the loss function $\mathrm{cost}(G)$ is given by Eq (12). In each iteration, our algorithm optimizes $\mathrm{cost}(G)$ over $G \in \mathrm{next}(G_{\mathrm{cur}})$ , and updates $G_{\mathrm{cur}} \gets G$ . To solve this optimization problem, we enumerate the possible choices $\sim$ and $i$ and use numerical optimization to compute $\alpha$ (since $\alpha$ is a continuous parameter). An example of a single iteration of our algorithm is shown in Figure 7. In particular, letting $G$ be the tree on the left and $G'$ be the tree on the right, the left-most leaf node of $G$ is expanded to get $G'$ . + +Our algorithm is summarized in Algorithm 1. Overall, our algorithm searches over $N \cdot (N - 1) \cdot d_{O}$ different discrete structures, where $N$ is the number of nodes in the decision tree and $d_{O}$ is the length of the observation vector $\mathbf{o}$ . + +# B IMPLEMENTATION DETAILS + +# B.1 BENCHMARKS AND STATE-MACHINES STATISTICS + +Figure 8 shows the statistics regarding the benchmarks such as the number of action variables and observation variables, and the set of initial states used for training and testing. Figure 8 also shows the different aspects of the grammar used to describe the space of possible state-machine policies. We are able to learn policies for these benchmarks using 2 to 4 distinct modes in the state-machine with either a constant or a proportional grammar for the action functions. We use the Boolean tree grammar of depth 1 or 2 for all the switching conditions. + +For the Quad benchmark, the action variable is the acceleration of the quadcopter in the vertical direction. 
The observations include the position $x, y$, the velocities $v_x, v_y$, and the four sensors $ox_l, ox_u, oy_l, oy_u$ that describe the obstacle course in the near neighborhood. The QuadPO benchmark has the same action space as the Quad benchmark, but can only observe $x, y, v_x$, and $v_y$. The synthesized state-machine policies for these benchmarks are shown in Figure 16 and Figure 17. The action functions used for these benchmarks choose the acceleration to be proportional to $v_y$.

The goal of the Pendulum benchmark is to control the (continuous) force at the actuated link in order to invert the link. The observation space includes the full state, i.e., the angle $\theta$ and the angular velocity $\omega$. Figure 18 shows the synthesized state-machine policy for the Pendulum benchmark.
| Bench | #A | #O | $X_0^{\text{train}}$ | $X_0^{\text{test}}$ | # modes | A_G | C_G |
|---|---|---|---|---|---|---|---|
| Car | 2 | 5 | d ~ [12, 13.5]m | d ~ [11, 12]m | 3 | Constant | Boolean tree (depth 1) |
| Quad | 1 | 8 | x dist = 40m | x dist = 80m | 2 | Proportional | Boolean tree (depth 1) |
| QuadPO | 1 | 4 | x dist = 60m | x dist = 120m | 2 | Proportional | Boolean tree (depth 1) |
| Pendulum | 1 | 2 | mass ~ [1, 1.5]kg | mass ~ [1.5, 5]kg | 2 | Constant | Boolean tree (depth 2) |
| Cartpole | 1 | 4 | time = 5s, len = 0.5 | time = 300s, len = 1.0 | 2 | Constant | Boolean tree (depth 2) |
| Acrobot | 1 | 4 | masses = [0.2, 0.5] | masses = [0.5, 2] | 2 | Constant | Boolean tree (depth 2) |
| Mountain car | 1 | 2 | power = [5, 15]e-4 | power = [3, 5]e-4 | 2 | Constant | Boolean tree (depth 1) |
| Swimmer | 3 | 10 | len = 1 unit | len = 0.75 unit | 4 | Proportional | Boolean tree (depth 2) |
Figure 8: Summary of our benchmarks. #A is the action dimension, #O is the observation dimension, $X_0^{\mathrm{train}}$ is the set of initial states used for training, $X_0^{\mathrm{test}}$ is the set of initial states used to test inductive generalization, # modes is the number of modes in the state-machine policy, and A_G and C_G are the grammars for action functions and switching conditions, respectively. The depth of C_G indicates the number of levels in the Boolean tree.

The Cartpole benchmark consists of a pole attached to a cart. The goal is to keep the pole upright by applying a continuous force to move the cart to the right or to the left. The observations include the position $x$ and the velocity $v$ of the cart, and the angle $\theta$ and the angular velocity $\omega$ of the pole. The synthesized solution is shown in Figure 19.

The Acrobot benchmark is similar to the Pendulum benchmark but with two links; only the top link can be actuated, and the goal is to drive the bottom link above a certain height. The observations are the angles $\theta_{1}, \theta_{2}$ and the angular velocities $\omega_{1}, \omega_{2}$ of the two links. For this benchmark, we vary the mass of the links between the training and the test distributions. The synthesized solution is shown in Figure 20.

For the Mountain car benchmark, the goal is to drive a low-powered car to the top of a hill. An agent has to drive back and forth to gain enough momentum to be able to cross the hill. The agent controls the (continuous) force to move the car to the right or left and observes the position $x$ and the velocity $v$ at every timestep. We vary the power of the car between the training and the test distributions. The synthesized solution is shown in Figure 21.

The Swimmer benchmark is based on MuJoCo's swimmer. To make this benchmark more challenging, we use 4 segments instead of 3.
There are three actions that control the torques at the joints and the goal is to make the swimmer move forward through a viscous liquid. The agent can observe the swimmer's global angle $\theta$ , the joint angles $(\theta_{1}, \theta_{2}, \theta_{3})$ , the swimmer's global angular velocity $\omega$ , the angular velocities of the joints $(\omega_{1}, \omega_{2}, \omega_{3})$ , and the velocity of the center of mass $(v_{x}, v_{y})$ . We vary the length of the segments between the training and the test distributions. The actions are chosen to be proportional to their corresponding angles. The synthesized state machine policy is shown in Figure 22. + +# B.2 HYPER-PARAMETERS + +There are three main hyper-parameters in our algorithm: + +- The maximum number of segments/modes in a loop-free policy. A large number of segments makes the teacher's numerical optimization slow, while a small number of segments might not be sufficient to get a high reward. +- The maximum time that a segment can be executed for in a loop-free policy. This maximum time constraint helps the numerical optimization to avoid local optima that arise from executing a particular (non-convex) action function for too long. +- The parameter $\lambda$ in Section 4.1. This parameter strikes a balance between preferring high-reward loop-free policies versus preferring policies that are similar to the state-machine learned so far. + +The first two parameters solely affect the teacher's algorithm; thus, we choose them by randomly sampling from a set and select the one that produces high-reward loop-free policies. We use $\lambda = 100$ for all our experiments. + +# B.3 THE DIRECT-OPT BASELINE + +For this baseline, we convert the problem of synthesizing a state machine policy into a numerical optimization problem. To do this, we first encode the discreteness in the grammar for switching conditions into a continuous one-hot representation. 
For example, the set of expressions $\mathbf{o}[i] \leq \alpha_0$ or $\mathbf{o}[i] \geq \alpha_0$ is encoded as $\alpha_s (\alpha_1 \mathbf{o}[1] + \alpha_2 \mathbf{o}[2] + \dots + \alpha_n \mathbf{o}[n]) \leq \alpha_0$ with constraints $-1 \leq \alpha_s \leq 1$, $0 \leq \alpha_i \leq 1$ for all $i \in \{1, \dots, n\}$, and $\sum_{i=1}^{n} \alpha_i = 1$. The choices between the leaf expressions, conjunctions, and disjunctions are also encoded in a one-hot fashion. We also tried an encoding without the extra constraints on $\alpha$, i.e., where the switching conditions are linear functions of the observations. We would expect the linear encoding to be less generalizable than the one-hot encoding. However, we found that it is hard to even synthesize a policy that works well on the training set with either of the encodings.

Another difficulty with direct optimization is that we need to optimize the combined reward from all the initial states at once. In contrast, the numerical optimization performed by the teacher in our approach can optimize the reward for each initial state separately. To deal with this issue, we use a batch optimization technique that uses 10 initial states for every batch and seeds the starting point of the numerical optimization for each batch with the parameters found so far. It also restarts the process with a random starting point if the numerical optimization stalls. We carry out this process in parallel using 10 threads until either a solution is found or the time exceeds 2 hours.

# B.4 RL BASELINES

We use the PPO2 implementation from OpenAI Baselines (Dhariwal et al., 2017) with the standard MLP and LSTM networks for our RL baselines, using $10^{7}$ timesteps for training.

Environment featurization. We used the same action spaces, observation spaces, and sets of initial states that we used for our approach. One exception is the Car benchmark, for which we appended the observation vector with observations from the previous timestep.
This modification was essential for the RL baseline to achieve good performance on the training dataset.

Designing reward functions. While our approach takes in a safety specification $\phi_S(\mathbf{x}) \leq 0$ and a goal specification $\phi_G(\mathbf{x}) \leq 0$, the RL baselines need a reward function. For the classic control problems such as cartpole, pendulum, acrobot, mountain car, and swimmer, we used the standard reward functions as specified by their OpenAI environments. For the Quad and QuadPO benchmarks, since the goal is to avoid collisions for as long as possible, we use a reward of 1 for every timestep that the agent is alive, and the agent is terminated as soon as it collides with any of the obstacles. Designing the reward function for the Car benchmark was tricky, because this benchmark has both a goal and a safety specification, and finding the right balance between them is crucial for learning. We tried various forms of reward functions and finally found that the following version achieves the best performance on the training distribution (on the metric that measures the fraction of roll-outs that satisfy both the goal and the safety property):

$$
r(\mathbf{x}, \mathbf{a}) = -\phi_G(\mathbf{x})^{+} + \begin{cases} -L & \text{if } \phi_S(\mathbf{x}) > 0 \\ 0 & \text{otherwise} \end{cases}
$$

which adds to the numerical error for not satisfying the goal a constant negative reward $(-L)$ if the safety specification is violated at any time-step. We tried different values of $L \in \{0.1, 1, 2, 10, 20\}$ and found that $L = 10$ achieves the best performance on the training distribution.

Hyper-parameters search. We performed a search over the various hyper-parameters of the PPO2 algorithm. We ran 10 instances of the PPO2 algorithm with parameters uniformly sampled from the space given below, and chose the one that performs well on the training distribution.
This sampling is not exhaustive, but our results in Figure 4 (left-most) show that we did find parameters that achieve good training performance for most of our benchmarks.

- The number of training minibatches per update, nminibatches = {1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048}. For the LSTM network, we set this hyper-parameter to 1.
| Bench | Algorithm | G (train) | T_G (train) | G (test) | T_G (test) |
|---|---|---|---|---|---|
| Acrobot | Ours | 0.0 | 87.9s | 0.02 | 31.8s |
| | RL | 0.1 | 66.5s | 0.0 | 45.2s |
| | Direct-opt | $\bot$ | $\bot$ | $\bot$ | $\bot$ |
| Mountain car | Ours | 0.001 | 168.5s | 0.008 | 290.1s |
| | RL | 0.0 | 98.7s | 0.02 | 14.7s |
| | Direct-opt | 0.006 | 105.3s | 2.18 | 216.0s |
![](images/371614b43961557c0a7c3af7ddd90a035ebe66db8f8673dc3864a169e72ce3ae.jpg)
Figure 9: Experiment results for additional benchmarks. G is the average goal error (closer to 0 is better). T_G is the average number of timesteps to reach the goal (the lower the better). $\bot$ indicates timeout. We can see that both our approach and RL generalize for these benchmarks.

![](images/ed320224d02af60c72f064a0b7e1675dad5055a75448eec3ec2554bbfeb370d2.jpg)
Figure 10: Trajectories taken by our state machine policy (left) and the RL policy (middle) on Pendulum for a test environment (i.e., a heavier pendulum). Green (resp., red) indicates positive (resp., negative) torque. Our policy performs optimally by using positive torque when the angular velocity is $\geq 0$ and negative torque otherwise. In contrast, the RL policy performs sub-optimally (especially in the beginning of the trajectory).

- The policy entropy coefficient in the optimization objective, entcoef = {0.0, 0.01, 0.05, 0.1}.
- The number of training epochs per update, noptepochs $\in$ {3, ..., 36}.
- The clipping range, cliprange = {0.1, 0.2, 0.3}.
- The learning rate, $\mathrm{lr} \in [5 \times 10^{-6}, 0.003]$.

# C ADDITIONAL RESULTS

# C.1 ADDITIONAL PERFORMANCE RESULTS

Figure 9 shows the training and test performance for the Acrobot and Mountain car benchmarks. We can see that both our approach and RL generalize for these benchmarks.

Figure 10 qualitatively analyzes the policies learned by our approach versus RL for the Pendulum benchmark. We can see that the RL policy performs slightly sub-optimally compared to our policy.

Figure 11 shows the trajectories from the learned state machine policy and the RL policy on Swimmer for a train environment and a test environment. While both policies generalize, the swimmer with the state machine policy is slightly faster (it takes about 35s to cover a distance of 10 units while the RL policy takes about 45s).
+ +Figures 12, 13, & 14 show the action versus time plots for the various benchmarks using the learned state-machine policies and neural network policies. We can see that state-machine policies produce smooth actions, whereas the RL policies do not. + +![](images/4040b24d91e052e365623b24b6b4b3f1a978fa3d387b843fca87056b6bca3b09.jpg) +(a) + +![](images/5751e21a9db534377b9e610494907e6abef1996b7ad5649dad8de168999be764.jpg) +(b) + +![](images/2cfc418a160a27c6ff70150a5bc8a04ee62f4e56df41b56678bff74bfa341b89.jpg) +(c) + +![](images/466280d9e91ccf307378783ed5f8dc909f557a5fc92e27763ba67a2d1cd84c0f.jpg) +(d) + +![](images/130ebdfca6cee9423a3d56594868eb607e543e203b480a79553abe29de974063.jpg) +Figure 11: Trajectories taken by our state machine policy on Swimmer for (a) a train environment with segments of length 1, and (b) a test environment with segments of length 0.75. The colors indicate different modes. The axes are the $x$ and $y$ coordinates of the center of mass of the swimmer. Trajectories taken by the RL policy on Swimmer for (c) a train environment, and (d) a test environment. While both policies generalize, the swimmer with the state machine policy is slightly faster (it takes about 35s to cover a distance of 10 units while the RL policy takes about 45s). + +![](images/49bfc00e10db057e744a666b8467d81b73b12d0a4e200da13e7d69ab3276899d.jpg) +Figure 12: Action vs time graphs for the car benchmark for both our policy (red) and the neural network policy (blue). Left shows the velocity of the agent and Right shows the steering angle. + +# C.2 ANALYSIS OF RUNNING TIME + +Figure 15 shows the synthesis times for various benchmarks. It also shows the number of student-teacher iterations, and the time spent by the teacher and the student separately. The teacher optimizes the loop-free policies for different initial states in parallel. The student optimizes the switching conditions between different pairs of modes in parallel. 
We used a parallelized implementation with 10 threads, and report the wall clock time. + +![](images/65264e93090e55e059a37af68f88eb7efff4d561002c23327222c4caf52414b9.jpg) +Figure 13: Action vs time graphs for the pendulum benchmark (left) and the cartpole benchmark (right) for both our policy (red) and the neural network policy (blue). + +![](images/ac05d3e25f2f56a210a4e3ea6f3fe1b1963663af80da9e058fff0a20432c5d1b.jpg) + +![](images/4055762b7a7f8af0d5b3c0303a2a3290dd2e95fb215be61fb20b1398a06f5dc7.jpg) +Figure 14: Action vs time graphs for the swimmer benchmark for the three torques at the three different joints of the swimmer. Blue line is for the neural network policy and red line is for the state machine policy. + +![](images/1ac7dfc0e8f3b12241ba9859e2ebc3089ac260ff3c90b6797e42cdaf592eb7a7.jpg) + +![](images/c72baee50738e40c4d0f91dcd7a851fd23f52070a4ec27d75e3fabf523326722.jpg) + +![](images/728e8be4146d016f2ced2c65f28e0816456789546dadc3809d059d5cb613ac6e.jpg) +Figure 15: Synthesis times (in seconds, wall clock time) for learning state machines policies for the different benchmarks. The plot breaks down the total synthesis time into time taken by the teacher, the student and other miscellaneous parts of the algorithm. Misc. mostly includes the time spent for checking convergence at every iteration. The plot also shows the number of teacher-student iterations taken for each benchmark. + +![](images/3506ac83773c64dc994b1b61713c4709f85625b3cb8fd457ba6fd8bd1aee44b7.jpg) + +![](images/8bbdc9bba00da9e93ceb94b62ea37b8be41770962ff37aa7acdb4676237fea27.jpg) +Figure 16: Synthesized state-machine policy for the Quad benchmark. + +![](images/26c8a29feef7415c5395863c86869ffce3270b9c706cd9b76f8e7c30343b0cdc.jpg) +Figure 17: Synthesized state-machine policy for the QuadPO benchmark. + +![](images/00a6128ee0089e219e8c5d8480c6e9b8bd98635b5ff69bc6f9d8e81bc7bc6a96.jpg) +Figure 18: Synthesized state-machine policy for Pendulum. 
+ +![](images/ed5fb0fef6e4200d3adfd0582168ec403b24a0764df2983bd59f2a2f3626fa50.jpg) +Figure 19: Synthesized state-machine policy for Cartpole. + +![](images/1fc705c315fc48ce9f503bcc2bc29e2249a3f380a04c4d8a1eb4f37f2884b0c0.jpg) +Figure 20: Synthesized state-machine policy for Acrobot. + +![](images/5e3e2cd8586f7177a9f74dc5eb16aa0d6372bf0962b30da89fe8d97ad080baf4.jpg) +Figure 21: Synthesized state-machine policy for Mountain car. + +![](images/ad5e37fc2ebd9e9680c38d516f52737fd86681918519fac6a6d51a3e89526da0.jpg) +Figure 22: Synthesized state-machine policy for Swimmer. \ No newline at end of file diff --git a/synthesizingprogrammaticpoliciesthatinductivelygeneralize/images.zip b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e277e0c93265218a2d83e1ca6358fe4885a58855 --- /dev/null +++ b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9922b3103def39802011f1f6d75a44922d45266bdd2209dfb4187895b734f3e +size 755958 diff --git a/synthesizingprogrammaticpoliciesthatinductivelygeneralize/layout.json b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..49cf305c1e9406e6be2bb937ec6774c58e59acb5 --- /dev/null +++ b/synthesizingprogrammaticpoliciesthatinductivelygeneralize/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58144fe9ba6fa4bdc5092244df051f37ae3a9d15ea753858a09a46ae28358711 +size 851960 diff --git a/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_content_list.json b/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0abeeae48af8c8b6bae0471399798cfa08384403 --- /dev/null +++ 
b/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d1f6a46700709ff804fab2eef40d6f6b142e82d39db8c363ed577a1946e3a33 +size 137611 diff --git a/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_model.json b/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9c441cee6e3de4c4f77eaea4c1bde69b4da8d115 --- /dev/null +++ b/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18693f6dd0444cde0cf5cfd8669c3ddcbea8754a63c1f1799bfb408da9506329 +size 167543 diff --git a/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_origin.pdf b/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fc0e397e442cc305e351cd71d901e586f2bd7bdf --- /dev/null +++ b/tabfactalargescaledatasetfortablebasedfactverification/6902c8b0-acd5-41b8-8542-8ac0af074538_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c3d7a2d0b5dc72a2c35ffa4e7142feb7b389cd36327e27c3c9c1c62e4f567e6 +size 1131014 diff --git a/tabfactalargescaledatasetfortablebasedfactverification/full.md b/tabfactalargescaledatasetfortablebasedfactverification/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b3252329d46737b23c77bcd4af81c0c004214b57 --- /dev/null +++ b/tabfactalargescaledatasetfortablebasedfactverification/full.md @@ -0,0 +1,520 @@ +# TABFACT: A LARGE-SCALE DATASET FOR TABLE-BASED FACT VERIFICATION + +Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, William Yang 
Wang + +University of California, Santa Barbara, CA, USA + +Tencent AI Lab, Bellevue, WA, USA + +{wenhuchen,hongmin_wang,yunkai_zhang,hongwang600,william}@ucsb.edu + +{shiyangli,xiyou}@cs.ucsb.edu jianshuchen@tencent.com + +# ABSTRACT + +The problem of verifying whether a textual hypothesis holds based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are mainly restricted to dealing with unstructured evidence (e.g., natural language sentences and documents, news, etc), while verification under structured evidence, such as tables, graphs, and databases, remains under-explored. This paper specifically aims to study the fact verification given semi-structured data as evidence. To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED. TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning. To address these reasoning challenges, we design two different models: Table-BERT and Latent Program Algorithm (LPA). Table-BERT leverages the state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification. LPA parses statements into programs and executes them against the tables to obtain the returned binary value for verification. Both methods achieve similar accuracy but still lag far behind human performance. We also perform a comprehensive analysis to demonstrate great future opportunities. The data and code of the dataset are provided in https://github.com/wenhuchen/Table-Fact-Checking. 
# 1 INTRODUCTION

Verifying whether a textual hypothesis is entailed or refuted by the given evidence is a fundamental problem in natural language understanding (Katz & Fodor, 1963; Van Benthem et al., 2008). It can benefit many downstream applications like misinformation detection, fake news detection, etc. Recently, the first-ever end-to-end fact-checking system was designed and proposed in Hassan et al. (2017). The verification problem has been extensively studied under different natural language tasks such as recognizing textual entailment (RTE) (Dagan et al., 2005), natural language inference (NLI) (Bowman et al., 2015), claim verification (Popat et al., 2017; Hanselowski et al., 2018; Thorne et al., 2018), and multimodal language reasoning (NLVR/NLVR2) (Suhr et al., 2017; 2019). RTE and NLI view a premise sentence as the evidence; claim verification views a passage collection like Wikipedia as the evidence; NLVR/NLVR2 view images as the evidence. These problems have been previously addressed using a variety of techniques including logic rules, knowledge bases, and neural networks. Recently, large-scale pre-trained language models (Devlin et al., 2019; Peters et al., 2018; Yang et al., 2019; Liu et al., 2019) have surged to dominate the other algorithms and approach human performance on several textual entailment tasks (Wang et al., 2018; 2019).

However, existing studies are restricted to dealing with unstructured text as the evidence, which does not generalize to cases where the evidence has a highly structured format. Since such structured evidence (graphs, tables, or databases) is also ubiquitous in real-world applications like

United States House of Representatives Elections, 1972
| District | Incumbent | Party | Result | Candidates |
|---|---|---|---|---|
| California 3 | John E. Moss | democratic | re-elected | John E. Moss (d) 69.9% John Rakus (r) 30.1% |
| California 5 | Phillip Burton | democratic | re-elected | Phillip Burton (d) 81.8% Edlo E. Powell (r) 18.2% |
| California 8 | George Paul Miller | democratic | lost nomination democratic hold | Pete Stark (d) 52.9% Lew M. Warden, Jr. (r) 47.1% |
| California 14 | Jerome R. Waldie | republican | re-elected | Jerome R. Waldie (d) 77.6% Floyd E. Sims (r) 22.4% |
| California 15 | John J. Mcfall | republican | re-elected | John J. Mcfall (d) unopposed |
| Entailed Statement | Refuted Statement |
|---|---|
| 1. John E. Moss and Phillip Burton are both re-elected in the house of representative election. | 1. |
| 2. John J. Mcfall is unopposed during the re-election. | 2. |
| 3. There are three different incumbents from democratic. | 3. |
Figure 1: Examples from the TABFACT dataset. The top table contains the semi-structured knowledge facts with the caption "United...". The left and right boxes below provide several entailed and refuted statements. The erroneous parts are highlighted in red font.

database systems, dialog systems, commercial management systems, social networks, etc., we argue that fact verification under structured evidence forms is an equally important yet under-explored problem. Therefore, in this paper, we are specifically interested in studying fact verification with semi-structured Wikipedia tables (Bhagavatula et al., 2013) as evidence, owing to their structured and ubiquitous nature (Jauhar et al., 2016; Zhong et al., 2017; Pasupat & Liang, 2015). To this end, we introduce a large-scale dataset called TABFACT, which consists of 118K manually annotated statements with regard to 16K Wikipedia tables, whose relations are classified as ENTAILED and REFUTED. The entailed and refuted statements are both annotated by human workers. From the examples in Figure 1, we can clearly observe that, unlike previous verification-related problems, TABFACT combines two different forms of reasoning in the statements: (i) Linguistic Reasoning: the verification requires semantic-level understanding. For example, "John J. Mcfall failed to be re-elected though being unopposed." requires understanding the phrase "lost renomination ..." in the table to correctly classify the entailment relation. Unlike the existing QA datasets (Zhong et al., 2017; Pasupat & Liang, 2015), where the linguistic reasoning is dominated by paraphrasing, TABFACT requires more linguistic inference or common sense. (ii) Symbolic Reasoning: the verification requires symbolic execution on the table structure. For example, the phrase "There are three Democratic incumbents" requires both a condition operation (where condition) and an arithmetic operation (count).
Unlike question answering, a statement could contain compound facts, and all of these facts need to be verified to predict the verdict. For example, the "There are ..." statement in Figure 1 requires verifying three QA pairs (total count=5, democratic count=2, republican count=3). The two forms of reasoning are interleaved across the statements, making it challenging for existing models.

In this paper, we propose two approaches to deal with this mixed-reasoning challenge: (i) Table-BERT: this model views the verification task entirely as an NLI problem by linearizing a table as a premise sentence $p$, and applies a state-of-the-art pre-trained language understanding model to encode both the table and the statement $h$ into distributed representations for classification. This model excels at linguistic reasoning like paraphrasing and inference but lacks symbolic reasoning skills. (ii) Latent Program Algorithm (LPA): this model applies lexical matching to find linked entities and triggers to filter pre-defined APIs (e.g., argmax, argmin, count, etc.). We adopt breadth-first search with memoization to construct the potential program candidates; a discriminator is further utilized to select the most "consistent" latent programs. This model excels at the symbolic reasoning aspects by executing database queries, which also provides better interpretability by laying out the decision rationale. We perform extensive experiments to investigate their performance: the best accuracy achieved by both models is reasonable, but far below human performance. Thus, we believe that the proposed table-based fact verification task can serve as an important new benchmark towards the goal of building powerful AI that can reason over both soft linguistic forms and hard symbolic forms. To facilitate future research, we release all the data and code with the intermediate results.
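The linearization step behind Table-BERT can be made concrete with a minimal sketch. The serialization template below is hypothetical and only illustrates the idea of turning a table into a premise sentence; the paper's actual template may differ:

```python
def linearize_table(header, rows):
    """Serialize a table into a pseudo-natural premise sentence,
    one clause per cell, roughly in the spirit of Table-BERT."""
    clauses = []
    for r, row in enumerate(rows, start=1):
        for col, cell in zip(header, row):
            clauses.append(f"row {r} 's {col.lower()} is {cell.lower()}")
    return " ; ".join(clauses) + " ."

# Toy example taken from the table in Figure 1.
header = ["District", "Incumbent", "Party"]
rows = [["California 3", "John E. Moss", "democratic"]]
premise = linearize_table(header, rows)
statement = "john e. moss is a democratic incumbent ."
# The pair (premise, statement) would then be fed to a BERT-style
# sentence-pair classifier, exactly as in NLI.
```

The design point is that once the table is a flat string, any off-the-shelf pre-trained entailment model can consume it, which is what gives Table-BERT its linguistic-reasoning strength and its symbolic-reasoning weakness.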
# 2 TABLE FACT VERIFICATION DATASET

First, we follow the previous table-based Q&A datasets (Pasupat & Liang, 2015; Zhong et al., 2017) to extract web tables (Bhagavatula et al., 2013) with captions from WikiTables. Here we filter out overly complicated and huge tables (e.g., multi-row or multi-column cells, LaTeX symbols) and obtain 18K relatively clean tables with fewer than 50 rows and 10 columns.

For crowd-sourcing jobs, we follow the human subject research protocols to pay Amazon Mechanical Turk workers from the native English-speaking countries "US, GB, NZ, CA, AU" with approval rates higher than $95\%$ and more than 500 accepted HITs. Following WikiTableQuestions (Pasupat & Liang, 2015), we provide the annotators with the corresponding table captions to help them better understand the background. To ensure the annotation quality, we develop a pipeline of "positive two-channel annotation" $\rightarrow$ "negative statement rewriting" $\rightarrow$ "verification", as described below.

# 2.1 POSITIVE TWO-CHANNEL COLLECTION & NEGATIVE REWRITING STRATEGY

To harvest statements of different difficulty levels, we design a two-channel collection process:

Low-Reward Simple Channel: the workers are paid 0.45 USD for annotating one Human Intelligence Task (HIT), which requires writing five statements. The workers are encouraged to produce plain statements meeting the requirements: (i) corresponding to a single row/record in the table with a unary fact, without involving compound logical inference; (ii) mentioning the cell values without dramatic modification or paraphrasing. The average annotation time of a HIT is $4.2\mathrm{min}$.

High-Reward Complex Channel: the workers are paid 0.75 USD for annotating a HIT (five statements). They are guided to produce more sophisticated statements meeting the requirements: (i) involving multiple rows in the tables with higher-order semantics like argmax, argmin, count, difference, average, summarize, etc.
(ii) rephrasing the table records to involve more semantic understanding. The average annotation time of a HIT is $6.8\mathrm{min}$. The data obtained from the complex channel are harder in terms of both linguistic and symbolic reasoning; the goal of the two-channel split is to help us understand the performance the proposed models can reach under different levels of difficulty.

As suggested in Zellers et al. (2018), there might be annotation artifacts and conditional stylistic patterns such as length and word-preference biases, which can allow shallow models (e.g., bag-of-words) to obtain artificially high performance. Therefore, we design a negative rewriting strategy to minimize such linguistic cues or patterns. Instead of letting the annotators write negative statements from scratch, we let them rewrite the collected entailed statements. During the annotation, the workers are explicitly guided to modify the words, phrases, or sentence structures but retain the sentence style/length to prevent artificial cues. We disallow naive negations that simply add "not", "never", etc. to flip the statement polarity, since these would leave obvious linguistic patterns.

# 2.2 QUALITY CONTROL

To control the quality of the annotation process, we review a randomly sampled statement from each HIT to decide whether the whole annotation job should be rejected during the annotation process. Specifically, a HIT must satisfy the following criteria to be accepted: (i) the statements contain neither typos nor grammatical errors; (ii) the statements do not contain vague claims like "might", "few", etc.; (iii) the claims are explicitly supported or contradicted by the table without requiring additional knowledge, and no middle ground is permitted. After the data collection, we re-distribute all the annotated samples to further filter erroneous statements; the workers are paid 0.05 USD per statement to decide whether the statement should be rejected.
The criteria we apply are similar: no ambiguity, no typos, and explicit support or contradiction. Through this post-filtering process, roughly $18\%$ of entailed and $27\%$ of refuted instances are further discarded due to poor quality.

![](images/17c87fede2f10074a262f13d6c0eb54b089905a3ce40d3c94354dd583937f2dc.jpg)
Figure 2: Proportion of different higher-order operations from the simple/complex channels.
| Channel | #Sentence | #Table | Len (Ent) | Len (Ref) | Split | #Sentence | #Table | Row | Col |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Simple | 50,244 | 9,189 | 13.2 | 13.1 | Train | 92,283 | 13,182 | 14.1 | 5.5 |
| Complex | 68,031 | 7,392 | 14.2 | 14.2 | Val | 12,792 | 1,696 | 14.0 | 5.4 |
| Total | 118,275 | 16,573 | 13.8 | 13.8 | Test | 12,779 | 1,695 | 14.2 | 5.4 |
Table 1: Basic statistics of the data collected from the simple/complex channels and the Train/Val/Test split of the dataset, where "Len" denotes the average sentence length.

# 2.3 DATASET STATISTICS

Inter-Annotator Agreement: After the data collection pipeline, we merged the instances from the two different channels to obtain a diverse yet clean dataset for table-based fact verification. We sample 1000 annotated (table, statement) pairs and re-distribute each to 5 individual workers to re-label them as either ENTAILED or REFUTED. We follow previous works (Thorne et al., 2018; Bowman et al., 2015) in adopting the Fleiss Kappa (Fleiss, 1971) as an indicator, where Fleiss $\kappa = \frac{\bar{p}_c - \bar{p}_e}{1 - \bar{p}_e}$ is computed from the observed agreement $\bar{p}_c$ and the agreement by chance $\bar{p}_e$. We obtain a Fleiss $\kappa = 0.75$, which indicates strong inter-annotator agreement and good annotation quality.

Dataset Statistics: As shown in Table 1, the amount of data harvested via the complex channel slightly outnumbers that from the simple channel, and the average lengths of the positive and negative samples are indistinguishable. More specifically, to analyze to what extent higher-order operations are included in the two channels, we group the common higher-order operations into 8 different categories. As shown in Figure 2, we sample 200 sentences from each channel to visualize their distributions. We can see that the complex channel involves far more higher-order logic than the simple channel, among which count and superlatives are the most frequent. We split the whole dataset roughly 8:1:1 into train, validation, and test splits and show their statistics in Table 1. Each table, with an average of 14 rows and 5-6 columns, corresponds to 2-20 different statements, while each cell contains an average of 2.1 words.
In the training split, the positive instances slightly outnumber the negative instances, while the validation and test splits both have rather balanced distributions over positive and negative instances.

# 3 MODELS

With the collected dataset, we now formally define the table-based fact verification task: the dataset is comprised of triple instances $(\mathbf{T}, S, L)$ consisting of a table $\mathbf{T}$, a natural language statement $S = s_1, \dots, s_n$, and a verification label $L \in \{0, 1\}$. The table $\mathbf{T} = \{T_{i,j} | i \leq R_T, j \leq C_T\}$ has $R_T$ rows and $C_T$ columns, with $T_{ij}$ being the content of the $(i,j)$-th cell. $T_{ij}$ could be a word, a number, a phrase, or even a natural language sentence. The statement $S$ describes a fact to be verified against the content of the table $\mathbf{T}$. If it is entailed by $\mathbf{T}$, then $L = 1$; otherwise $L = 0$. Figure 1 shows some entailed and refuted examples. During training, the model and the learning algorithm are presented with $K$ instances $\{(\mathbf{T}, S, L)_k\}_{k=1}^{K}$ from the training split. At test time, the model is presented with $\{(\mathbf{T}, S)_k\}_{k=1}^{K'}$ and expected to predict the label $\hat{L}$. We measure performance by the prediction accuracy $Acc = \frac{1}{K'} \sum_{k=1}^{K'} \mathbb{I}(\hat{L}_k = L_k)$ on the test set. Before building the model, we first perform entity linking to detect all the entities in the statements. Briefly, we first lemmatize the words and search for the longest sub-string matches between the statement and the table cells/captions, where the matched phrases are denoted as the linked entities. To focus on statement verification against the table, we do not feed the caption to the model and simply mask the phrases in the statements that link to the caption with placeholders. The details of the entity linker are given in the Appendix. We describe our two proposed models as follows.
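As an illustration of the longest sub-string matching step described above, the following is a minimal sketch: lemmatization is approximated here by lowercasing, and `link_entities` is a hypothetical helper name, not part of the released code.

```python
def link_entities(statement, cells):
    """Greedily match the longest word n-grams in `statement`
    against table cell strings; return (span, cell) pairs."""
    tokens = statement.lower().split()
    norm_cells = {c.lower(): c for c in cells}
    links, i = [], 0
    while i < len(tokens):
        # Try the longest span starting at position i first.
        for j in range(len(tokens), i, -1):
            span = " ".join(tokens[i:j])
            if span in norm_cells:
                links.append((span, norm_cells[span]))
                i = j - 1  # skip past the matched span
                break
        i += 1
    return links

pairs = link_entities(
    "There are 2 democratic members in the list",
    ["Democratic", "Republican", "List of members"],
)
# -> [("democratic", "Democratic")]
```

The greedy longest-match policy mirrors the description in the text: multi-word cells such as "List of members" are preferred over any of their sub-spans.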
# 3.1 LATENT PROGRAM ALGORITHM (LPA)

In this approach, we formulate table fact verification as a program synthesis problem, where the ground-truth program is not given in TABFACT. Thus, it can be seen as a weakly supervised learning problem, as discussed in Liang et al. (2017); Lao et al. (2011). Under this setting, we propose to break down the verification into two stages: (i) latent program search, and (ii) discriminator ranking. In the first, program synthesis, step, we aim to parse the statement into programs that represent its semantics. We define the plausible API set to include roughly 50 different functions (e.g., min, max, count, average, filter, and) and realize their interpreter with Python-Pandas. Each API is defined to take arguments of specific types (number, string, bool, and view (e.g., a sub-table)) and to output variables of a specific type. During program execution, we store the generated intermediate variables in type-specific caches $\mathcal{N},\mathcal{R},\mathcal{B},\mathcal{V}$ (Num, Str, Bool, View). At each execution step, the program can fetch intermediate variables from the caches to achieve semantic compositionality. To shrink the search space, we follow NSM (Liang et al., 2017) in using trigger words to prune the API set and accelerate the search. The definitions of all APIs and trigger words can be found in the Appendix.
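As a rough illustration of the typed APIs and trigger-word pruning, a sketch is given below; the function set and trigger lexicon are toy stand-ins for the roughly 50 Pandas-backed APIs defined in the Appendix, not the actual definitions.

```python
# Toy typed-API registry: name -> (argument types, return type).
APIS = {
    "count":   (("view",), "num"),
    "max":     (("view", "str"), "num"),
    "greater": (("num", "num"), "bool"),
    "eq":      (("num", "num"), "bool"),
}

# Trigger words prune the search space: an API guarded by triggers is only
# considered when one of its trigger words occurs in the statement.
TRIGGERS = {
    "greater": {"more", "than", "higher"},
    "max": {"highest", "most", "top"},
}

def pruned_apis(statement):
    """Return the API subset the search is allowed to use."""
    words = set(statement.lower().split())
    keep = set()
    for name in APIS:
        guards = TRIGGERS.get(name)
        if guards is None or guards & words:  # unguarded APIs are always kept
            keep.add(name)
    return keep
```

For example, a statement containing "more" activates `greater` while leaving `max` pruned away, which is the kind of reduction that makes the breadth-first search over typed caches tractable.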
The latent program search procedure is summarized in Algorithm 1,

Algorithm 1 Latent Program Search with Comments
1: Initialize the Number Cache $\mathcal{N}$, String Cache $\mathcal{R}$, Bool Cache $\mathcal{B}$, and View Cache $\mathcal{V} \leftarrow \emptyset$
2: Push the linked numbers and strings from the given statement $S$ into $\mathcal{N}$ and $\mathcal{R}$, and push $\mathbf{T}$ into $\mathcal{V}$
3: Initialize the result collector $\mathcal{P} \leftarrow \emptyset$ and an empty program trace $P = \emptyset$
4: Initialize the queue $\mathcal{Q} = [(P,\mathcal{N},\mathcal{R},\mathcal{B},\mathcal{V})]$; we use $\mathcal{Q}$ to store the intermediate states
5: Use trigger words to find the plausible function set $\mathcal{F}$; for example, "more" triggers the Greater function
6: for time step $t = 1\rightarrow$ MAXSTEP do:
7: for each $(P,\mathcal{N},\mathcal{R},\mathcal{B},\mathcal{V}) = \mathcal{Q}$.pop() do:
8: for each function $f\in \mathcal{F}$ do:
9: if the arguments of $f$ are in the caches then
10: Pop the required arguments $arg_1, \dots, arg_n$ from their caches
11: Execute $A = f(arg_{1},\dots ,arg_{n})$ and append the call to the program trace $P$
12: if Type(A) = Bool then
13: if $\mathcal{N} = \mathcal{R} = \mathcal{B} = \emptyset$ then
14: $\mathcal{P}$.push($(P,A)$) # The program $P$ is valid since it consumes all the variables
15: $P = \emptyset$ # Collect the valid program $P$ into the set $\mathcal{P}$ and reset $P$
16: else
17: $\mathcal{B}$.push($A$) # The intermediate Boolean value is added to the Bool cache
18: $\mathcal{Q}$.push($(P,\mathcal{N},\mathcal{R},\mathcal{B},\mathcal{V})$) # Add the refreshed state to the queue again
19: if Type(A) $\in$ {Num, Str, View} then
20: if $\mathcal{N} = \mathcal{R} = \mathcal{B} = \emptyset$ then
21: $P = \emptyset$; break # The program ends without consuming the caches; discard it
22: else
23: Push $A$ into $\mathcal{N}$, $\mathcal{R}$, or $\mathcal{V}$ according to its type
24: $\mathcal{Q}$.push($(P,\mathcal{N},\mathcal{R},\mathcal{B},\mathcal{V})$) # Add the refreshed state to the queue for further search
25: Return the triple $(\mathbf{T},S,\mathcal{P})$ # Return (Table, Statement, Program Set)

and the search procedure is illustrated in Figure 3.

After collecting all the potential program candidates $\mathcal{P} = \{(P_1, A_1), \dots, (P_n, A_n)\}$ for a given statement $S$ (where $(P_i, A_i)$ refers to the $i$-th candidate), we need to learn a discriminator to identify the "appropriate" traces among the many erroneous and spurious traces. Since we do not have ground-truth labels for such a discriminator, we use a weakly supervised training algorithm, viewing all the label-consistent programs as positive instances $\{P_i | (P_i, A_i); A_i = L\}$ and the label-inconsistent programs as negative instances $\{P_i | (P_i, A_i); A_i \neq L\}$, and minimize the cross-entropy of the discriminator $p_\theta(S, P)$ with these weakly supervised labels. Specifically, we build our discriminator with a Transformer-based two-way encoder (Vaswani et al., 2017), where the statement encoder encodes the input statement $S$ as $Enc^S(S) \in \mathbb{R}^{n \times D}$ with hidden dimension $D$, while the program encoder encodes the program $P = p_1, \dots, p_m$ as $Enc^P(P) \in \mathbb{R}^{m \times D}$; we concatenate these two representations and feed them into a linear projection layer

![](images/fdb0f3f0c095370b7ebd5aba408a9bbe071eb0ee974a97263976a9328edd1b3b.jpg)

![](images/858466f07566b846fa1037f521c16e750cfafd6260f8c9086b35dcfba9a2c4dd.jpg)
Figure 3: The program synthesis procedure for the table in Figure 1. We link the entities (e.g., democratic, republican), and then composite functions on the fly to return values from the table.
Figure 4: The diagram of Table-BERT with horizontal scan; two different linearizations are depicted.
to compute $p_{\theta}(S,P) = \sigma (v_p^T [Enc^S (S);Enc^P (P)])$ as the relevance between $S$ and $P$, with weight $v_{p}\in \mathbb{R}^{D}$. At test time, we use the discriminator $p_{\theta}$ to assign a confidence $p_{\theta}(S,P)$ to each candidate $P\in \mathcal{P}$, and then either aggregate the predictions of all hypotheses using the confidences as weights, or select the highest-confidence hypothesis and use its output as the prediction.

# 3.2 TABLE-BERT

In this approach, we view the table verification problem as a two-sequence binary classification problem like NLI or MRPC (Wang et al., 2018), by linearizing a table $\mathbf{T}$ into a sequence and treating the statement as another sequence. Since the linearized table can be extremely long, surpassing the limits of sequence models like LSTMs and Transformers, we propose to shrink the sequence by retaining only the columns containing entities linked to the statement, which alleviates the memory issue. To encode such a sub-table as a sequence, we propose two different linearization methods, as depicted in Figure 4. (i) Concatenation: we simply concatenate the table cells with [SEP] tokens in between and restart the position counter at the cell boundaries; the column name is fed as another type embedding to the input layer. This design retains the table information in its machine format. (ii) Template: we adopt simple natural language templates to transform a table into a "somewhat natural" paragraph. Taking the horizontal scan as an example, we linearize a table as "row one's game is 51; the date is February; ..., the score is 3.4 (ot). row 2 is ...". The isolated cells are connected with punctuation and copula verbs in a language-like format.
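The horizontal-scan template just described can be sketched as follows; this is a minimal sketch, and the exact wording of the released templates may differ.

```python
def linearize_horizontal(header, rows):
    """Turn a (header, rows) table into a template sentence per row,
    scanning horizontally: "row one's <col0> is <v0>; the <col1> is <v1>; ..." """
    ordinal = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five"}
    sents = []
    for i, row in enumerate(rows, 1):
        parts = [f"row {ordinal.get(i, str(i))}'s {header[0]} is {row[0]}"]
        parts += [f"the {h} is {v}" for h, v in zip(header[1:], row[1:])]
        sents.append("; ".join(parts) + ".")
    return " ".join(sents)

text = linearize_horizontal(["game", "date", "score"],
                            [["51", "February", "3.4 (ot)"]])
# -> "row one's game is 51; the date is February; the score is 3.4 (ot)."
```

The connector words ("is", "the", the row ordinals) are exactly the kind of natural-language glue that, as the experiments later show, lets the pre-trained BERT model reason over the otherwise tabular input.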
After obtaining the linearized sub-table $\tilde{\mathbf{T}}$, we concatenate it with the natural language statement $S$ and prefix a [CLS] token to obtain the sequence-level representation $H = f_{BERT}([\tilde{\mathbf{T}},S])$ with $H \in \mathbb{R}^{768}$ from pre-trained BERT (Devlin et al., 2019). The representation is further fed into a multi-layer perceptron $f_{MLP}$ to obtain the entailment probability $p_{\theta}(\tilde{\mathbf{T}},S) = \sigma(f_{MLP}(H))$, where $\sigma$ is the sigmoid function. We fine-tune the model parameters $\theta$ (including those of BERT and the MLP) to minimize the binary cross-entropy $\mathcal{L}(p_{\theta}(\tilde{\mathbf{T}},S),L)$ on the training set. At test time, we use the trained BERT model to compute the matching probability of a (table, statement) pair, and classify it as an ENTAILED statement when $p_{\theta}(\tilde{\mathbf{T}},S)$ is greater than 0.5.

# 4 EXPERIMENTS

In this section, we evaluate the proposed methods on TABFACT. Besides the standard validation and test sets, we also split the test set into simple and complex partitions based on the channel from which the statements were collected; this facilitates analyzing how well the models perform under different levels of difficulty. Additionally, we hold out a small test set of 2K samples for human evaluation, where we distribute each (table, statement) pair to 5 different workers and approximate human judgment by their majority vote. The results are reported in Table 2.
| Model | Val | Test | Test (simple) | Test (complex) | Small Test |
| --- | --- | --- | --- | --- | --- |
| BERT classifier w/o Table | 50.9 | 50.5 | 51.0 | 50.1 | 50.4 |
| Table-BERT-Horizontal-F+T-Concatenate | 50.7 | 50.4 | 50.8 | 50.0 | 50.3 |
| Table-BERT-Vertical-F+T-Template | 56.7 | 56.2 | 59.8 | 55.0 | 56.2 |
| Table-BERT-Vertical-T+F-Template | 56.7 | 57.0 | 60.6 | 54.3 | 55.5 |
| Table-BERT-Horizontal-F+T-Template | 66.0 | 65.1 | 79.0 | 58.1 | 67.9 |
| Table-BERT-Horizontal-T+F-Template | 66.1 | 65.1 | 79.1 | 58.2 | 68.1 |
| NSM w/ RL (Binary Reward) | 54.1 | 54.1 | 55.4 | 53.1 | 55.8 |
| NSM w/ LPA-guided ML + RL | 63.2 | 63.5 | 77.4 | 56.1 | 66.9 |
| LPA-Voting w/o Discriminator | 57.7 | 58.2 | 68.5 | 53.2 | 61.5 |
| LPA-Weighted-Voting | 62.5 | 63.1 | 74.6 | 57.3 | 66.8 |
| LPA-Ranking w/ Discriminator | 65.2 | 65.0 | 78.4 | 58.5 | 68.6 |
| LPA-Ranking w/ Discriminator (Caption) | 65.1 | 65.3 | 78.7 | 58.5 | 68.9 |
| Human Performance | - | - | - | - | 92.1 |
Table 2: The results of different models; all numbers are percentages. $\mathrm{T + F}$ means table followed by fact, while $\mathrm{F + T}$ means fact followed by table. NSM is modified from Liang et al. (2017).

NSM We modify the approach of Liang et al. (2017) to fit the setting of TABFACT. Specifically, we adopt an LSTM as the encoder and another LSTM with a copy mechanism as the decoder to synthesize the program. However, without any ground-truth annotation of the intermediate programs, training directly with reinforcement learning is difficult, as the binary reward is underspecified; this variant is listed in Table 2 as "NSM w/ RL". Further, we use LPA as a teacher to search the top programs for NSM to bootstrap from, and then use reinforcement learning to fine-tune the model, which achieves reasonable performance on our dataset, listed as "NSM w/ ML + RL".

Table-BERT We build Table-BERT on the open-source implementation of BERT, using the pre-trained model with 12 layers, 768 hidden units, 12 heads, and 110M parameters trained on 104 languages. We use the standard BERT tokenizer to break the words in both statements and tables into subwords and join the two sequences with a [SEP] token in between. The representation corresponding to [CLS] is fed into an MLP layer to predict the verification label. We fine-tune the model on a single TITAN X GPU with a mini-batch size of 6. The best performance is reached after about 3 hours of training (around 10K steps). We implement and compare the following variants of the Table-BERT model: (i) Concatenation vs. Template: whether to use natural language templates during linearization. (ii) Horizontal vs. Vertical: the scan direction of the linearization.

LPA We run the latent program search in a distributed fashion on three 64-core machines to generate the latent programs. The search terminates once the buffer holds more than 50 traces or the path length exceeds 7.
The average search time per statement is about 2.5s. For the discriminator model, we design two transformer-based encoders (3 layers, 128-dimensional hidden embeddings, and 4 heads per layer) to encode the programs and statements, respectively. The variants of the LPA model considered are: (i) Voting: assign each program equal weight and vote without the learned discriminator. (ii) Weighted-Voting: compute a weighted sum to aggregate the predictions of all latent programs, with the discriminator confidences as weights. (iii) Ranking: rank all the hypotheses by discriminator confidence and use the top-rated hypothesis as the output. (Caption) means feeding the caption as a sequence of words to the discriminator during ranking.

Preliminary Evaluation To test whether our negative rewriting strategy eliminates artifacts and shallow cues, we also fine-tune a pre-trained BERT (Devlin et al., 2019) to classify the statement $S$ without feeding in any table information. The result is reported as "BERT classifier w/o Table" in Table 2; it is approximately the majority guess, which reflects the effectiveness of the rewriting strategy. Before presenting the experimental results, we first perform a preliminary study to evaluate how well the entity linking system, the program search, and the statement-program discriminator perform. Since we do not have ground-truth labels for these components, we randomly sample 100 samples from the dev set for a human study. For entity linking, we measure accuracy as the fraction of correctly linked sentences. For the latent program search, we evaluate whether the "true" programs are included in the candidate set $\mathcal{P}$, as a recall score.

Results We report the performance of the different methods as well as human performance in Table 2. First of all, we observe that the naive serialized model fails to learn anything effective (it performs the same as the majority guess).
This reveals the importance of the template when using the pre-trained BERT (Devlin et al., 2019) model: the "natural" connection words between individual cells are able to unleash the power of the large pre-trained language model and enable it to perform reasoning on the structured table form. Such behavior is understandable given that BERT is pre-trained on purely natural language corpora. In addition, we observe that the horizontal scan outperforms the vertical scan because it better matches the conventions of human expression. Among the LPA methods, LPA-Ranking performs best, since it can better suppress spurious programs than the voting-based algorithms. Overall, the LPA model is on par with Table-BERT on both the simple and complex test splits without any pre-training on external corpora, which reflects the effectiveness of LPA in leveraging symbolic operations in the verification process.

Through our human evaluation, we found that only $58\%$ of the sentences are correctly linked, without missing or over-linked entities, while the systematic search has a recall of $51\%$ in the cases where the sentence is correctly linked. That said, the chance of the LPA method covering the correct program (rationale) is roughly under $30\%$. After the discriminator's re-ranking step, the probability of selecting these particular oracle programs is even lower. However, we still observe a final overall accuracy of $65\%$, which indicates that the spurious-program problem is quite severe in LPA: the correct label is often predicted for the wrong reason.

Through our human evaluation, we also observe that Table-BERT exhibits poor consistency, as it can misclassify simple cases yet correctly classify hard cases. These two major weaknesses remain to be solved in future studies. In contrast, LPA behaves much more consistently and provides a clear latent rationale for its decisions.
However, such a pipeline system requires laborious handcrafting of the API operations and is also very sensitive to entity linking accuracy. Both methods have pros and cons; how to combine them remains an open question.

Program Annotation To further promote the development of different models on our dataset, we collect roughly 1,400 human-annotated programs paired with the original statements. These statements include the most popular logical operations, such as superlatives, counting, comparison, and uniqueness. We provide these annotations on GitHub; they can either be used to bootstrap semantic parsers or to provide rationales for NLI models.

# 5 RELATED WORK

Natural Language Inference & Reasoning: Modeling reasoning and inference in human language is a fundamental and challenging problem towards true natural language understanding. There has been extensive research on RTE in the early years (Dagan et al., 2005), which has more recently shifted to NLI (Bowman et al., 2015; Williams et al., 2017). NLI seeks to determine whether a natural language hypothesis $h$ can be inferred from a natural language premise $p$. With the surge of deep learning, many powerful algorithms have emerged, such as the Decomposable Attention Model (Parikh et al., 2016), Enhanced-LSTM (Chen et al., 2017), and BERT (Devlin et al., 2019). Besides textual evidence, NLVR (Suhr et al., 2017) and NLVR2 (Suhr et al., 2019) have been proposed to use images as the evidence for statement verification in a multi-modal setting. Our proposed fact verification task is closely related to these inference tasks, where our semi-structured table can be seen as a collection of "premises" exhibited in a semi-structured format. Our problem hence can be viewed as a generalization of NLI to the semi-structured domain.
Table Question Answering: Another line of research closely related to our task is table-based

![](images/3cc1bf7fd36f79c8e8b9fff7b9a58b365eaa5271eb73b5962aa519670ce606c8.jpg)
Figure 5: Two unique characteristics of table-based fact verification compared with standard QA problems.

question answering, such as MCQ (Jauhar et al., 2016), WikiTableQuestion (Pasupat & Liang, 2015), Spider (Yu et al., 2018), Sequential Q&A (Iyyer et al., 2017), and WikiSQL (Zhong et al., 2017), for which approaches have been extended to handle large-scale tables from Wikipedia (Bhagavatula et al., 2013). However, in these Q&A tasks, the question types typically provide strong signals for identifying the type of the answer, while TABFACT does not provide such specificity. The uniqueness of TABFACT is two-fold: 1) a given fact is regarded as a false claim as long as any part of the statement contains misinformation. Due to the conjunctive nature of verification, a fact needs to be broken down into several sub-clauses or (Q, A) pairs to separately evaluate their correctness. This compositional nature makes verification more challenging than a standard QA setting: on the one hand, the model needs to recognize the multiple QA pairs and their relationships; on the other hand, the multiple sub-clauses make the semantic forms longer and logical inference harder than in standard QA. 2) some facts cannot even be handled using semantic forms, as they are driven by linguistic inference or common sense. To verify these statements, more inference techniques have to be leveraged to enable robust verification. We visualize these two characteristics of TABFACT in Figure 5.
Program Synthesis & Semantic Parsing: There has also been great interest in using program synthesis or logical forms to solve different natural language processing problems, such as question answering (Liang et al., 2013; Berant et al., 2013; Berant & Liang, 2014), visual navigation (Artzi et al., 2014; Artzi & Zettlemoyer, 2013), code generation (Yin & Neubig, 2017; Dong & Lapata, 2016), and SQL synthesis (Yu et al., 2018). The traditional semantic parsing papers (Artzi et al., 2014; Artzi & Zettlemoyer, 2013; Zettlemoyer & Collins, 2005; Liang et al., 2013; Berant et al., 2013) rely heavily on rules and lexicons to parse natural language sentences into forms like lambda calculus and DCS. More recently, researchers have proposed neural models to directly perform end-to-end formal reasoning, such as theorem provers (Riedel et al., 2017; Rocktäschel & Riedel, 2017), Neural Turing Machines (Graves et al., 2014), Neural Programmer (Neelakantan et al., 2016; 2017), and Neural Symbolic Machines (Liang et al., 2017; 2018; Agarwal et al., 2019). The proposed TABFACT serves as a strong benchmark for evaluating the reasoning ability of such neural reasoning models. Specifically, TABFACT poses the following challenges: 1) spurious programs (i.e., wrong programs that return the true answer): since the program output is only a binary label, spuriousness is severe and can misguide reinforcement learning with under-specified binary rewards; 2) decomposition: the model needs to decompose the statement into sub-clauses and verify them one by one, which normally requires longer logical inference chains to infer the statement's verdict; 3) linguistic reasoning such as inference and paraphrasing.

Fact Checking: The problem of verifying claims and hypotheses on the web has drawn significant attention recently due to its high social impact.
Different pioneering fact-checking studies have been performed, including LIAR (Wang, 2017), PolitiFact (Vlachos & Riedel, 2014), FEVER (Thorne et al., 2018), and AggChecker (Jo et al., 2019). The former three are mainly based on textual evidence from social media or Wikipedia, while AggChecker is closest to ours in using relational databases as evidence. Compared to AggChecker, our paper proposes a much larger dataset to benchmark progress in this direction.

# 6 CONCLUSION

This paper investigates a very important yet previously under-explored research problem: semi-structured fact verification. We construct a large-scale dataset and propose two methods, Table-BERT and LPA, based on a state-of-the-art pre-trained natural language inference model and program synthesis, respectively. In the future, we plan to push this research direction forward by inspiring more sophisticated architectures that can perform both linguistic and symbolic reasoning.

# REFERENCES

Rishabh Agarwal, Chen Liang, Dale Schuurmans, and Mohammad Norouzi. Learning to generalize from sparse and underspecified rewards. International Conference of Machine Learning, 2019.
Yoav Artzi and Luke Zettlemoyer. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49-62, 2013.
Yoav Artzi, Dipanjan Das, and Slav Petrov. Learning compact lexicons for ccg semantic parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1273-1283, 2014.
Jonathan Berant and Percy Liang. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1415-1425, 2014.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on freebase from question-answer pairs.
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1533-1544, 2013.
Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. Methods for exploring and mining tables on wikipedia. In Proceedings of the ACM SIGKDD Workshop on Interactive Data Exploration and Analytics, pp. 18-26. ACM, 2013.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632-642, 2015.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1657-1668, 2017.
Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pp. 177-190. Springer, 2005.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT, 2019.
Li Dong and Mirella Lapata. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 33-43, 2016.
Joseph L Fleiss. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378, 1971.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. Ukp-athene: Multi-sentence textual entailment for claim verification. arXiv preprint arXiv:1809.01479, 2018.
+Naeemul Hassan, Gensheng Zhang, Fatma Arslan, Josue Caraballo, Damian Jimenez, Siddhant Gawsane, Shohedul Hasan, Minumol Joseph, Aaditya Kulkarni, Anil Kumar Nayak, et al. Claimbuster: the first-ever end-to-end fact-checking system. Proceedings of the VLDB Endowment, 10 (12):1945-1948, 2017. +Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 1821-1831, 2017. +Sujay Kumar Jauhar, Peter Turney, and Eduard Hovy. Tables as semi-structured knowledge for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 474-483, 2016. +Saehan Jo, Immanuel Trummer, Weicheng Yu, Xuezhi Wang, Cong Yu, Daniel Liu, and Niyati Mehta. Aggchecker: A fact-checking system for text summaries of relational data sets. Proceedings of the VLDB Endowment, 12(12), 2019. +Jerrold J Katz and Jerry A Fodor. The structure of a semantic theory. language, 39(2):170-210, 1963. +Ni Lao, Tom Mitchell, and William W Cohen. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 529-539. Association for Computational Linguistics, 2011. +Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. International Conference of Machine Learning, 2017. +Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V Le, and Ni Lao. Memory augmented policy optimization for program synthesis and semantic parsing. In Advances in Neural Information Processing Systems, pp. 9994-10006, 2018. +Percy Liang, Michael I Jordan, and Dan Klein. Learning dependency-based compositional semantics. Computational Linguistics, 39(2):389-446, 2013. 
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. International Conference on Learning Representations, 2016.
Arvind Neelakantan, Quoc V Le, Martin Abadi, Andrew McCallum, and Dario Amodei. Learning a natural language interface with neural programmer. International Conference on Learning Representations, 2017.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2249-2255, 2016.
Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pp. 1470-1480, 2015.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of NAACL-HLT, pp. 2227-2237, 2018.
Kashyap Popat, Subhabrata Mukherjee, Jannik Strötgen, and Gerhard Weikum. Where the truth lies: Explaining the credibility of emerging claims on the web and social media. In Proceedings of the 26th International Conference on World Wide Web Companion, pp. 1003-1012. International World Wide Web Conferences Steering Committee, 2017.
Sebastian Riedel, Matko Bosnjak, and Tim Rocktäschel. Programming with a differentiable Forth interpreter. ICML, 2017.
Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In Advances in Neural Information Processing Systems, pp. 3788-3800, 2017.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. A corpus of natural language for visual reasoning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 217-223, 2017.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2019.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pp. 809-819, 2018.
Johan Van Benthem et al. A brief history of natural logic. London: College Publications, 2008.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
Andreas Vlachos and Sebastian Riedel. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pp. 18-22, 2014.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. EMNLP 2018, pp. 353, 2018.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019.
William Yang Wang. "Liar, liar pants on fire": A new benchmark dataset for fake news detection.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 422-426, 2017.
Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 2019.
Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. arXiv preprint arXiv:1704.01696, 2017.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3911-3921, 2018.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 93-104, 2018.
Luke S Zettlemoyer and Michael Collins. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pp. 658-666. AUAI Press, 2005.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017.

# A APPENDIX

# A.1 FUNCTION DESCRIPTION

We list the detailed function descriptions in Figure 6.
| Name | Arguments | Output | Comment |
| --- | --- | --- | --- |
| Count | View | Number | Returns the number of rows in the view |
| Within | View, Header String, Cell String/Number | Bool | Returns whether the cell string/number exists under the Header column of the given view |
| Without | View, Header String, Cell String/Number | Bool | Returns whether the cell string/number does not exist under the Header column of the given view |
| None | String | Bool | Returns whether the string represents None, like "None", "No", "-", "No information provided" |
| Before/After | Row, Row | Bool | Returns whether row1 is before/after row2 |
| First/Second/Third/Fourth | View, Row | Bool | Returns whether the row is in the first/second/third/fourth position of the view |
| Average/Sum/Max/Min | View, Header String | Number | Returns the average/summation/max/min value under the Header column of the given view |
| Argmin/Argmax | View, Header String | Row | Returns the row with the minimum/maximum value under the Header column of the given view |
| Hop | Row, Header String | Number/String | Returns the cell value under the Header column of the given row |
| Diff/Add | Number, Number | Number | Performs arithmetic operations on two numbers |
| Greater/Less | Number, Number | Bool | Returns whether the first number is greater/less than the second number |
| Equal/Unequal | String, String / Number, Number | Bool | Compares two numbers or strings to see whether they are the same |
| Filter_eq/Filter_greater/Filter_less/ Filter_greater_or_equal/Filter_less_or_equal | View, Header String, Number | View | Returns the subview of the given view whose cell values under the Header column are equal/greater/less/... against the given number |
| All_eq/All_greater/All_less/ All_greater_or_equal/All_less_or_equal | View, Header String, Number | Bool | Returns whether all of the cell values under the Header column are equal/greater/less/... against the given number |
| And/Or | Bool, Bool | Bool | Returns the Boolean operation result of two inputs |
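To make these signatures concrete, a few of the functions can be sketched in Python. This is purely illustrative: modeling a view as a list of row dictionaries is our assumption, not the authors' implementation.

```python
# A "view" is modeled as a list of rows; each row is a dict mapping
# header strings to cell values (an illustrative assumption).

def count(view):
    # Count: View -> Number, the number of rows in the view.
    return len(view)

def hop(row, header):
    # Hop: (Row, Header String) -> Number/String, the cell under the header.
    return row[header]

def filter_greater(view, header, number):
    # Filter_greater: (View, Header String, Number) -> View.
    return [row for row in view if row[header] > number]

def all_eq(view, header, value):
    # All_eq: (View, Header String, Number) -> Bool.
    return all(row[header] == value for row in view)

def argmax(view, header):
    # Argmax: (View, Header String) -> Row with the maximum value.
    return max(view, key=lambda row: row[header])

view = [{"nation": "russia", "gold": 10},
        {"nation": "united states", "gold": 9}]
print(count(view))                          # 2
print(hop(argmax(view, "gold"), "nation"))  # russia
```

Programs in LPA are compositions of such calls, e.g. `hop(argmax(view, "gold"), "nation")` verifies which nation won the most golds.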
The input/output behavior of the most typical functions is further visualized in Figure 7.

![](images/b2d523d156022b0e7912d82475d9ab1684c4bead2983be19ca6f28af4cce31ef.jpg)
Figure 6: The function definitions used in TabFact.
Figure 7: The visualization of different functions.

We list all the trigger words for the different functions in Figure 8.
| Trigger | Function |
| --- | --- |
| 'average' | average |
| 'difference', 'gap', 'than', 'separate' | diff |
| 'sum', 'summation', 'combine', 'combined', 'total', 'add', 'all', 'there are' | add, sum |
| 'not', 'no', 'never', "didn't", "won't", "wasn't", "isn't", "haven't", "weren't", 'neither', 'none', 'unable', 'fail', 'different', 'outside' | not_eq, not_within, filter_not_eq, none |
| 'not', 'no', 'none' | none |
| 'first', 'top', 'latest', 'most' | first |
| 'last', 'bottom', 'latest', 'most' | last |
| 'RBR', 'JJR', 'more', 'than', 'above', 'after' | filter_greater, greater |
| 'RBR', 'JJR', 'less', 'than', 'below', 'under' | filter_less, less |
| 'all', 'every', 'each' | all_eq, all_less, all_greater |
| ['all', 'every', 'each'] + ['not', 'no', 'never', "didn't", "won't", "wasn't"] | all_not_eq |
| 'at most', 'than' | all_less_eq, all_greater_eq |
| 'RBR', 'RBS', 'JJR', 'JJS' | max, min |
| 'JJR', 'JJS', 'RBR', 'RBS', 'top', 'first' | argmax, argmin |
| 'within', 'one', 'of', 'among' | within |
| 'follow', 'following', 'followed', 'after', 'before', 'above', 'precede' | before |
| 'follow', 'following', 'followed', 'after', 'before', 'above', 'precede' | after |
| 'most' | most_freq |
| ordinal | first, second, third, fourth |
Figure 8: The trigger words used to shrink the search space.

# B HIGHER-ORDER OPERATIONS

1. Aggregation: the aggregation operation refers to sentences like "the averaged age of all ...", "the total amount of scores obtained in ...", etc.
2. Negation: the negation operation refers to sentences like "xxx did not get the best score", "xxx has never obtained a score higher than 5".
3. Superlative: the superlative operation refers to sentences like "xxx achieves the highest score in", "xxx is the lowest player in the team".
4. Comparative: the comparative operation refers to sentences like "xxx has a higher score than yyy".
5. Ordinal: the ordinal operation refers to sentences like "the first country to achieve xxx is xxx", "xxx is the second oldest person in the country".
6. Unique: the unique operation refers to sentences like "there are 5 different nations in the tournament", "there are no two different players from the U.S."
7. All: the for-all operation refers to sentences like "all of the trains are departing in the morning", "none of the people are older than 25."
8. None: sentences which do not involve higher-order operations, like "xxx achieves 2 points in xxx game", "xxx player is from xxx country".

# C ERROR ANALYSIS

Before quantitatively demonstrating the error analysis of the two methods, we first analyze their bottlenecks theoretically, as follows:

Symbolic We first provide, in Figure 9, a case that symbolic execution theoretically cannot handle. The failure cases of the symbolic model are due either to the entity linking problem or to the function coverage problem. For example, in the given statement below, there is no explicit mention of the "7-5, 6-4" cell. Therefore, the entity linking model fails to link to this cell content.
Furthermore, even though we can successfully link to this string, there is no defined function to parse "7-5, 6-4" as "won two games", because doing so requires linguistic/mathematical inference to understand the implication of the string. Such cases are the weakness of symbolic reasoning models.

Jordi Arrese
| outcome | date | tournament | surface | partner | opponents in the final | score in the final |
| --- | --- | --- | --- | --- | --- | --- |
| runner - up | 1985 | Bologna , Italy | clay | Alberto Tous | Paolo Canè / Simone Colombo | 5 - 7 , 4 - 6 |
| winner | 1986 | Bordeaux , France | clay | David De Miguel | Ronald Agénor / Mansour Bahrami | 7 - 5 , 6 - 4 |
| winner | 1989 | Prague , Czechoslovakia | clay | Horst Skoff | Petr Korda / Tomáš Šmíd | 6 - 4 , 6 - 4 |
![](images/e7e35c4f5cdb0e0bf8588db1d6e1e95c067d0f8a0b2a274d93607646a7348188.jpg)
Figure 9: The error case of the symbolic reasoning model.
Figure 10: The error case of the BERT NLI model.

BERT In contrast, the Table-BERT model has no coverage problem as long as the whole table content can be fed in. However, due to the template linearization, the table is unfolded into a long sequence, as depicted in Figure 10. The useful information, "clay", is separated across a very long span of unrelated words. Grasping such a long dependency and memorizing the history information poses a great challenge to the Table-BERT model.

Jordi Arrese
| outcome | date | tournament | surface | partner | opponents in the final | score in the final |
| --- | --- | --- | --- | --- | --- | --- |
| runner - up | 1985 | Bologna, Italy | clay | Alberto Tous | Paolo Canè / Simone Colombo | 5 - 7, 4 - 6 |
| winner | 1986 | Bordeaux, France | clay | David De Miguel | Ronald Agénor / Mansour Bahrami | 7 - 5, 6 - 4 |
| winner | 1989 | Prague, Czechoslovakia | clay | Horst Skoff | Petr Korda / Tomáš Šmíd | 6 - 4, 6 - 4 |

Template: Given the table titled "Jordi Arrese", in row one, the outcome is runner-up, the date is 1985, ..., the surface is clay ... In row two, the outcome is ..., the surface is clay. In row three, the outcome is ..., ... the surface is clay.

Long Dependency: the three "clay" cells are separated by more than 20 words.

Statement: Jordi Arrese played all of his games on clay surface.
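The horizontal-scan linearization illustrated above can be sketched in Python. This is a hypothetical simplification: the authors' exact template wording and ordinal handling are assumptions here.

```python
def linearize(title, rows):
    # Unfold a table into one clause per row (horizontal scan),
    # roughly following the template shown in Figure 10.
    ordinals = ["one", "two", "three", "four", "five"]
    sentences = []
    for i, row in enumerate(rows):
        cells = ", ".join(f"the {h} is {v}" for h, v in row.items())
        sentences.append(f"in row {ordinals[i]}, {cells}")
    return f'Given the table titled "{title}", ' + ". ".join(sentences) + "."

rows = [{"outcome": "runner - up", "surface": "clay"},
        {"outcome": "winner", "surface": "clay"}]
print(linearize("Jordi Arrese", rows))
```

Even in this toy two-row table, the two "clay" mentions are already pushed apart by the intervening cells, which is exactly the long-dependency issue discussed above.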
Statistics Here we pick 200 samples from the validation set which involve only a single semantic operation, and divide them into different categories. We denote the above-mentioned cases as "linguistic inference", the sentences which only describe information from one row as "Trivial", and categorize the rest by their logic operation, like Aggregation, Superlative, Count, etc. We visualize the accuracy of LPA and Table-BERT in Figure 11, from which we can observe that the statements requiring linguistic inference are much better handled by the BERT model, while LPA achieves an accuracy barely higher than a random guess. The BERT model deals with trivial cases well, as it uses a horizontal scan order. In contrast, the LPA model outperforms BERT on higher-order logic cases, especially when the statement involves operations like Count and Superlative.

![](images/0a9ac9891ad1af0c0a4fe4ed7bc6c251b04fa0f7fa62b1ad95c3f4fd474f3c44.jpg)
Error Analysis of LPA/Table-BERT
Figure 11: The error analysis of the two different models

# D REASONING DEPTH

Given that our LPA has the breadth to cover a large semantic space, here we also show the reasoning depth, in terms of how many logic inference steps are required to verify the given claims. We visualize the histogram in Figure 12 and observe that the reasoning steps are concentrated between 4 and 7. Such statistics indicate the difficulty of fact verification in our TABFACT dataset.

![](images/1eda42cbbe84395e67c63d68e9df6b62805c49fabdec8580c60a71dbc90c1662.jpg)
Figure 12: The histogram of reasoning steps required to verify the claims

# E WHETHER TO KEEP WIKIPEDIA CONTEXT

Before crowd-sourcing the annotation for the tables, we observed that the previous WikiTableQuestions (Pasupat & Liang, 2015) provides context (the Wikipedia title) during annotation while WikiSQL (Zhong et al., 2017) does not.
Therefore, we particularly design ablation annotation tasks to compare the annotation quality with and without the Wikipedia title as context. We demonstrate a typical example in Figure 13, where a Wiki table aims to describe the achievements of a tennis player named Dennis, but by itself does not provide any explicit hint about "tennis player Dennis". Unsurprisingly, sentence fluency and coherence drop significantly without such information. In fact, a great portion of these Wikipedia tables requires background knowledge (like sports, celebrities, music, etc.) to understand. We perform a small user study to measure the fluency of the annotated statements. Specifically, we collected 50 sentences each from annotation w/ and w/o title context, randomly shuffled them into pairs, and distributed them to the 8 experts, without telling them their source, to compare language fluency. It turns out that the experts unanimously agree that the statements with Wikipedia titles are more human-readable. Therefore, we argue that such context is necessary for annotators to understand the background knowledge and write more fluent sentences. On the other hand, we also hope to minimize the influence of the textual context in the table-based verification task; therefore, we design the following annotation criterion: the Wikipedia title is provided to the workers during the annotation, but they are explicitly banned from bringing any unrelated background information other than the title into the annotation. As illustrated in Figure 13, the title only acts as a placeholder in the statements to make them sound more natural.
| outcome | year | championship | surface | partner |
| --- | --- | --- | --- | --- |
| winner | 1960 | Wimbledon championships | grass | Rafael Osuna |
| winner | 1961 | US Championships | grass | Chuck Mckinley |
| runner - up | 1962 | US Championships | grass | Chuck Mckinley |
| winner | 1963 | US Championships (2) | grass | Chuck Mckinley |

|  | w/ Wikipedia title | w/o Wikipedia title |
| --- | --- | --- |
| Context (Title) | Richard Dennis Ralston (born July 27, 1942), an American former tennis player | No information is provided |
| Annotate | From 1960 to 1969, Ralston won five major double championships. | Winner is on the grass surface. Rafael Osuna is partner in the Wimbeldon. |
# F ENTITY LINKING

Here we propose to use the longest string match to find all the candidate entities in the table; when multiple candidates coexist, we select the one with the minimum edit distance. The visualization is demonstrated in Figure 14.
| District | Incumbent | Party | Result | Candidates |
| --- | --- | --- | --- | --- |
| California 3 | John E. Moss | democratic | re-elected | John E. Moss (d) 69.9%, John Rakus (r) 30.1% |
| California 5 | Phillip Burton | democratic | re-elected | Phillip Burton (d) 81.8%, Edlo E. Powell (r) 18.2% |
| California 8 | George Paul Miller | democratic | lost renomination, democratic hold | Pete Stark (d) 52.9%, Lew M. Warden, Jr. (r) 47.1% |
| California 14 | Jerome R. Waldie | republican | re-elected | Jerome R. Waldie (d) 77.6%, Floyd E. Sims (r) 22.4% |
| California 15 | John J. Mcfall | republican | re-elected | John J. Mcfall (d) unopposed |
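The longest-string-match linking described in this section can be sketched on the table above. This is a hypothetical simplification: the minimum-edit-distance tie-break between competing partial matches is omitted, and the tokenization is our assumption.

```python
def link_entities(statement, table_cells):
    # Longest-string-match linking: keep every cell whose full
    # (lower-cased) string occurs in the statement; a hit that is a
    # substring of a longer hit is discarded, so the longest match wins.
    statement = statement.lower()
    hits = [c for c in table_cells if c.lower() in statement]
    return [c for c in hits
            if not any(c != d and c.lower() in d.lower() for d in hits)]

cells = ["john e. moss", "phillip burton", "democratic",
         "california 3", "california 5"]
claim = "john e. moss is a democratic who is from california 3 district"
print(link_entities(claim, cells))
# ['john e. moss', 'democratic', 'california 3']
```

Note how "california 5" is correctly rejected even though it shares a long prefix with the claim's "california 3": only full-cell occurrences count as candidates.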
Statement: John E. Moss is a democratic who is from California 3 district

![](images/4ecef75b775904b0569e94f0b9ee6617c45d45e9015b1e0dd502fe9a3787175b.jpg)
Figure 13: Comparison of worker annotation w/ and w/o the Wikipedia title as context
Figure 14: The entity linking system.

# G THE PROGRAM CANDIDATES

Here we demonstrate some program candidates in Figure 15 and show how our proposed discriminator is designed to compute the matching probability between the statement and a program. Specifically, we employ two transformer-based encoders (Vaswani et al., 2017): the left one encodes the program sequence and the right one encodes the statement sequence. Their outputs from the [CLS] position are concatenated and fed into an MLP to classify the verification label.

# H HIT INTERFACE

We provide the human intelligence task interface on AMT in the following. Very detailed instructions are given on what trivial and what non-trivial statements are, and comprehensive examples guide the Turkers to write well-formed yet logically plausible statements. In order to harvest fake statements without statistical cues, we also provide detailed instructions on how to re-write the "fake" statements. During the annotation, we hire 8 experts to perform sanity checks on each HIT to make sure that the annotated dataset is clean and meets our requirements.

![](images/060ce431da2fcd8b2962e508a719fd1e0a9bb64af01250d53e077dbd5b48fc90.jpg)
Figure 15: We demonstrate the top program candidates and use the discriminator to rank them.

# Survey Instructions (Click to expand)

You are given a table with its Wikipedia source; your job is to compose non-trivial statements supported by the table.

- "Trivial": the sentence can be easily generated by looking at only a certain row, without understanding the table.
- "Non-trivial": the sentence requires reading multiple rows of the table and understanding of the table content.
For example, sentences which include summarization, comparatives, negation, relational reasoning, inclusion, superlatives, aggregation, rephrasing, or combinations of them are non-trivial. But non-trivial statements are not limited to these types; any statement involving understanding and reasoning is accepted.

We list two examples below to help you understand; you are encouraged to open the table's Wikipedia link to understand the context of the table. (Everything in the table is lower-cased; you are free to use lower or upper case in your sentence):

Table Wikipedia Link: Road_Rules_Challenge:_The_Island

(https://en.wikipedia.org/wiki/Real_World/Road_Rules_Challenge:_The_Island)
| player | original season | gender | eliminated | placing |
| --- | --- | --- | --- | --- |
| derrick kosinski | rr : x - treme | male | winner | winner |
| evelyn smith | fresh meat | female | winner | winner |
| johnny devenanzio | rw : key west | male | winner | winner |
| kenny santucci | fresh meat | male | winner | winner |
| jenn grijalva | rw : denver | female | episode 8 | runner - up |
| paula meronek | rw : key west | female | episode 8 | runner - up |
| robin hibbard | rw : san diego | female | episode 8 | runner - up |
| ryan kehoe | fresh meat | male | episode 8 | runner - up |
| dunbar merrill | rw : sydney | male | episode 8 | 9th place |
| johanna botta | rw : austin | female | episode 8 | 10th place |
| kellyanne judd | rw : sydney | female | episode 8 | 11th place |
| dan walsh | rr : viewers' revenge | male | episode 8 | 12th place |
| colie edison | rw : denver | female | episode 7 | 13th place |
| cohutta grindstaff | rw : sydney | male | episode 6 | 14th place |
| tyrie ballard | rw : denver | male | episode 5 | 15th place |
| ashli robson | rw : sydney | female | episode 4 | 16th place |
| rachel robinson | rr : campus crawl | female | episode 3 | 17th place |
| abram boise | rr : south pacific | male | episode 2 | 18th place |
| dave malinosky | rw : hollywood | male | episode 2 (quit) | 19th place |
# Rejected ("Trivial") examples:

1. In the TV series "The Island", Derrick Kosinski is a male character. (Easy! You can simply look into the first row to produce this sentence.)
2. Derrick Kosinski has the placing of winner in the TV series.
3. Kenny Santucci is from the original season of "Fresh Meat".
4. Jenn Grijalva is Runner-Up of the challenge.

# Accepted ("Non-Trivial") examples:

(Superlative): In the TV series "The Island", Evelyn Smith is the highest ranked female.

(Comparative): In the TV series "The Island", Jenn Grijalva appears later than Colie Edison in the series.

(Relational): Ashli Robson appears one episode later than Rachel Robinson in the TV series.

(Summarization): there are three male winners in the challenge.

(Rephrase): Evelyn Smith was never eliminated in any episode in the TV series.

(Combination): Derrick Kosinski is the winner and Jenn Grijalva is Runner-Up of the challenge.

(Negation): jenn grijalva is not the female winning the challenge.

(Inclusion): Evelyn smith is one of the four winners of the challenge.

Table Wikipedia Link: AFC_Champions_League (https://en.wikipedia.org/wiki/AFC_Champions_League)
| rank | member association | points | group stage | play - off | afc cup |
| --- | --- | --- | --- | --- | --- |
| 1 | saudi arabia | 860.5 | 4 | 0 | 0 |
| 2 | qatar | 838.2 | 4 | 0 | 0 |
| 3 | iran | 813.5 | 3 | 1 | 0 |
| 4 | uae | 750.2 | 2 | 2 | 0 |
| 5 | uzbekistan | 680.8 | 1 | 0 | 0 |
| 6 | india | 106.4 | 0 | 0 | 2 |
| 7 | jordan | 128.7 | 0 | 0 | 2 |
# Rejected ("Trivial") examples:

1. In the rank, it has 0 play - off.
2. qatar is in rank 2.
3. When the member association is india, the points is 106.4.

# Accepted ("Non-Trivial") examples:

(Negation): iran is one of the two countries getting into the 4th stage.

(Average): uae and qatar have an average of 1 play - off during the champions league.

(Algorithmic): saudi arabia achieves 22.3 more points than qatar.

(Comparison): india got lower points than jordan in the league.

(Summarization): there are two teams which have won the afc cup twice.

(Superlative): In the Champions League, saudi arabia achieves the highest points.

(Combination): saudi arabia is in group stage 4 while iran is in group stage 3.

Tips1: We set the minimum length to 9 words, and sentences with more complicated grammar structures are preferred.

Tips2: Do not limit yourself to only one type of description, like superlative or relative.

Tips3: Copying the records from the table is encouraged, which can help avoid typos and misspellings as much as possible.

Tips4: Do not use vague words like "maybe", "perhaps", "good", "excellent", "most", etc.

First read the following table, then write five diverse non-trivial facts for this given table:

(https://en.wikipedia.org/wiki/Athletics_at_the_1952_Summer_Olympics_%E2%80%93_Men%27s_pole_vault)

Table Source: athletics at the 1952 summer olympics - men 's pole vault
| athlete | nationality | 3.60 | 3.80 | 3.95 | result |
| --- | --- | --- | --- | --- | --- |
| bob richards | united states | - | - | o | 4.55 or |
| don laz | united states | - | - | o | 4.50 |
| ragnar lundberg | sweden | - | - | o | 4.40 |
| petro denysenko | soviet union | - | - | o | 4.40 |
| valto olenius | finland | - | - | - | 4.30 |
| bunkichi sawada | japan | - | o | xxo | 4.20 |
| volodymyr brazhnyk | soviet union | - | o | o | 4.20 |
| viktor knyazev | soviet union | - | o | o | 4.20 |
| george mattos | united states | - | - | o | 4.20 |
| erkki kataja | finland | - | - | o | 4.10 |
| tamás homonnay | sweden | - | o | o | 4.10 |
| lennart lind | hungary | - | o | o | 4.10 |
| milan milakov | yugoslavia | - | o | xo | 4.10 |
| rigas efstathiadis | greece | - | o | o | 3.95 |
| torfy bryngeirsson | iceland | - | o | o | 3.95 |
| erling kaas | norway | - | o | xxx | 3.80 |
| theodosios balafas | greece | o | o | xxx | 3.80 |
| jukka piironen | finland | - | xo | xx | 3.80 |
| zeno dragomir | romania | - | xo | xx | 3.80 |
+ +Please write a non-trivial statement, minimum 9 words + +Please write a non-trivial statement, minimum 9 words + +Please write a non-trivial statement, minimum 9 words + +Please write a non-trivial statement, minimum 9 words + +Please write a non-trivial statement, minimum 9 words + +# Survey Instructions (Click to expand) + +Please first read a table to understand its content, an example is shown below, which contains the leaderboard of a competition. + +
| Player | Original Season | Gender | Eliminated | Placing |
| --- | --- | --- | --- | --- |
| Derrick Kosinski | RR: X-Treme | Male | Winner | Winner |
| Evelyn Smith | Fresh Meat | Female | Winner | Winner |
| Johnny Devenanzio | RW: Key West | Male | Winner | Winner |
| Kenny Santucci | Fresh Meat | Male | Winner | Winner |
| Jenn Grijalva | RW: Denver | Female | Episode 8 | Runner-Up |
| Paula Meronek | RW: Key West | Female | Episode 8 | Runner-Up |
| Robin Hibbard | RW: San Diego | Female | Episode 8 | Runner-Up |
| Ryan Kehoe | Fresh Meat | Male | Episode 8 | Runner-Up |
| Dunbar Merrill | RW: Sydney | Male | Episode 8 | 9th Place |
| Johanna Botta | RW: Austin | Female | Episode 8 | 10th Place |
| KellyAnne Judd | RW: Sydney | Female | Episode 8 | 11th Place |
| Dan Walsh | RR: Viewers' Revenge | Male | Episode 8 | 12th Place |
| Colie Edison | RW: Denver | Female | Episode 7 | 13th Place |
| Cohutta Grindstaff | RW: Sydney | Male | Episode 6 | 14th Place |
| Tyrie Ballard | RW: Denver | Male | Episode 5 | 15th Place |
| Ashli Robson | RW: Sydney | Female | Episode 4 | 16th Place |
| Rachel Robinson | RR: Campus Crawl | Female | Episode 3 | 17th Place |
| Abram Boise | RR: South Pacific | Male | Episode 2 | 18th Place |
| Dave Malinosky | RW: Hollywood | Male | Episode 2 (quit) | 19th Place |
| Tonya Cooley | RW: Chicago | Female | Episode 1 | 20th Place |
You are given a sentence describing a fact in the table; please follow the two cases below to finish the job:

* If the given sentence is fluent and consistent with the table, then please re-write it to make it "fake" based on the following criteria:

1. Contradictory: it should still be a fluent and coherent sentence, but it needs to be explicitly contradictory to the facts in the table.
2. Do not simply add NOT to revert the sentence meaning.
3. Do not write neutral or non-verifiable sentences; you need to confirm it in the table.
4. The fake statement needs to be clear, explicit, and natural; do not use vague or ambiguous words like "bad", "good", "many", etc.
5. Try to use diverse fake types during annotation.

Example 1. Given statement: Ashli Robson was eliminated in episode 4.

Good Faking: Ashli Robson survives through episode 1 to episode 5.

Good Faking: Ashli Robson is not the only one eliminated in episode 4.

Bad Faking (Simply add not): Ashli Robson was not eliminated on episode 4.

Bad Faking (Ambiguous, who is Ashli?): Ashli was not eliminated on episode 4.

Bad Faking (Irrelevant): Ashli was born in Mexico.

Bad Faking (Too subjective, what do you mean by "early"): Ashli Robson lost the game very early.

Bad Faking (Not verifiable): Ashli Robson was the most popular player.

Example 2. Given statement: Tonya Cooley is in the 20th place.

Good Faking: Tonya Cooley is not the last in placing.

Good Faking: Tonya Cooley is eliminated in episode 1 but not the last in placing.

Bad Faking (There is nothing larger than 20th): Tonya Cooley is after the 20th place.

Bad Faking (Half wrong/half right): When the gender is female, the player is Tonya Cooley.

Bad Faking (Introduces values outside the table): Tonya Cooley is in the 43rd place.

Bad Faking (Typo): Tonya cooler is in the 20th palace.

* If the given statement is erroneous (see the following), please type N/A in the input box.

1.
Critical grammar errors, like missing verbs, nouns, etc. Do not count small errors like tense, singular/plural, or case errors.
2. Serious typos or misspellings.
3. The described fact is contradictory to the table.

You can use the highlight button to help you find the mentions in the table. You can use either upper or lower case; it is not important.

First read the given tables, then rewrite the statements to make them fake:

(https://en.wikipedia.org/wiki/2003%E2%80%9304_ISU_Junior_Grand_Prix)

Table Source: 2003 - 04 isu junior grand prix
| rank | nation | gold | silver | bronze | total |
| --- | --- | --- | --- | --- | --- |
| 1 | russia | 10 | 14 | 8 | 32 |
| 2 | united states | 9 | 6 | 7 | 22 |
| 3 | canada | 4 | 2 | 10 | 16 |
| 4 | japan | 4 | 5 | 4 | 13 |
| 5 | hungary | 4 | 0 | 2 | 6 |
| 6 | czech republic | 2 | 1 | 1 | 4 |
| 6 | ukraine | 1 | 3 | 0 | 4 |
| 6 | italy | 0 | 1 | 3 | 4 |
| 7 | sweden | 1 | 2 | 0 | 3 |
| 8 | israel | 1 | 1 | 0 | 2 |
| 9 | finland | 0 | 0 | 1 | 1 |
| 9 | france | 0 | 1 | 0 | 1 |
Highlight Mentions, Click Me!

Given Statement: russia won the most silver medals in the grand prix

Please rewrite a sentence which is contradictory to the table

Highlight Mentions, Click Me!

Given Statement: france and finland won the least medals in the grand prix

Please rewrite a sentence which is contradictory to the table

Highlight Mentions, Click Me!

Given Statement: hungary and finland were the only countries that did not win any silver medals

Please rewrite a sentence which is contradictory to the table

Highlight Mentions, Click Me!

Given Statement: the united states won more gold medals than canada

Please rewrite a sentence which is contradictory to the table

Highlight Mentions, Click Me!

Given Statement: canada won the most bronze medals in the grand prix

Please rewrite a sentence which is contradictory to the table

Submit
\ No newline at end of file
diff --git a/tabfactalargescaledatasetfortablebasedfactverification/images.zip b/tabfactalargescaledatasetfortablebasedfactverification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c6e6bc6f22e16c14b9c560be6a6573f2faee6518
--- /dev/null
+++ b/tabfactalargescaledatasetfortablebasedfactverification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c59687fa0d3f6c88aac8c2db2efe6060c6bd582e39eca1976a976ecb63af65b
+size 1271439
diff --git a/tabfactalargescaledatasetfortablebasedfactverification/layout.json b/tabfactalargescaledatasetfortablebasedfactverification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b8c77420e127d5655cd98a993dc9cc96e7fc079f
--- /dev/null
+++ b/tabfactalargescaledatasetfortablebasedfactverification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee64c986ea8e4b5513492e0ceb0b3a230d3dc41df9d02d5c4b486bb8177c586e
+size 624594
diff --git
a/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_content_list.json b/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..14c5d9b21ea13dcc93bdf36c3b0f68dcf97e54e9 --- /dev/null +++ b/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a184a2e46521e77b51ce2af1ef1a48ca92f1b7e30e9ef3835cde047e782a5e1f +size 239445 diff --git a/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_model.json b/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e3bd23b2e24de5aee264d1b3ce691644ffc6ef40 --- /dev/null +++ b/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3e68746e92e64c0e0e2ff9255068fc622f5a00898240669145f6eb22b32f421 +size 284369 diff --git a/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_origin.pdf b/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9bc77aa9986cf09918c370f7a06359d6cb90bd36 --- /dev/null +++ b/targetembeddingautoencodersforsupervisedrepresentationlearning/56b7dd08-6e54-476e-9e53-14ac34c51b9d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6641466778d4bd5d569d918a0166810e01caa13f9bd4332edf4ca07628d6e746 +size 3882453 diff --git a/targetembeddingautoencodersforsupervisedrepresentationlearning/full.md 
b/targetembeddingautoencodersforsupervisedrepresentationlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a55991fd743ef5b9cbaa5c4a09e68f2edb256022 --- /dev/null +++ b/targetembeddingautoencodersforsupervisedrepresentationlearning/full.md @@ -0,0 +1,717 @@ +# TARGET-EMBEDDING AUTOENCODERS FOR SUPERVISED REPRESENTATION LEARNING + +# Daniel Jarrett + +Department of Mathematics + +University of Cambridge, UK + +daniel.jarrett@maths.cam.ac.uk + +# Mihaela van der Schaar + +University of Cambridge, UK + +University of California, Los Angeles, USA + +mv472@cam.ac.uk, mihaela@ee.ucla.edu + +# ABSTRACT + +Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets—encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying their advantage on both linear and nonlinear recurrent architectures—thereby underscoring the further generality of this framework beyond feedforward instantiations. 
+ +# 1 INTRODUCTION + +Representation learning deals with uncovering useful underlying structures of data, and autoencoders (Hinton & Salakhutdinov, 2006) have been a staple in a variety of problems. While much research focuses on their use in unsupervised or semi-supervised settings with such diverse objectives as sparsity (Ranzato et al., 2007), generation (Kingma & Welling, 2013), and disentanglement (Chen et al., 2018), autoencoders are also useful in purely supervised settings—in particular, adding an auxiliary feature-reconstruction task to supervised classification problems has been shown to empirically improve generalization (Le et al., 2018); in the linear case, the theoretically quantifiable benefit matches that of simplistic norm-based regularization (Bousquet & Elisseeff, 2002; Rosasco & Poggio, 2009). + +In this paper, we consider the inverse problem setting where the target space $\mathcal{Y}$ is high-dimensional; for instance, consider the multi-label classification tasks of object tagging, text annotation, and image segmentation. This is in contrast to the vast majority of works designed to tackle a high-dimensional feature space $\mathcal{X}$ (where commonly $|\mathcal{X}| \gg |\mathcal{Y}|$ , such as in standard classification problems). In this setting, the usual (and universal) strategy of learning to reconstruct features (Weston et al., 2012; Kingma et al., 2014; Le et al., 2018) may not be most useful: learning latent representations that encapsulate the variation within $\mathcal{X}$ does not directly address the more challenging problem of mapping back up to a higher-dimensional $\mathcal{Y}$ . Instead, we argue for leveraging intermediate representations that are compact and more easily predictable from features, yet simultaneously guaranteed to be predictive of targets.
In the process, we provide a unified theoretical perspective on recent applications of autoencoders to label-embedding in static, high-dimensional classification problems (Yu et al., 2014; Girdhar et al., 2016; Yeh et al., 2017). Extending into the temporal setting, we further empirically demonstrate the generality of target-embedding for recurrent, multi-variate sequence forecasting. + +Our contributions are three-fold. First, we motivate and formalize the target-embedding autoencoder (TEA) framework: a general approach applicable to any underlying architecture. Second, we provide a theoretical learning guarantee in the linear case by demonstrating uniform stability; specifically, we obtain an $O(1/N)$ bound on instability by analogizing the benefit of the auxiliary reconstruction task to a form of regularization—without incurring additional bias from explicit shrinkage. Finally, we extend empirical validation of this approach beyond the domain of static classification: using the task of multivariate disease trajectory forecasting as case study, we experimentally validate the + +![](images/3954dd878a1b58f1eaa594bf74c60e1e349f89aae46b80111b39e410e36c876c.jpg) +(a) Feature-Embedding Autoencoder + +![](images/81634036105342559116eba3489d2c98f5eb833302dfa240e219f64690d34ef5.jpg) +(b) Target-Embedding Autoencoder +Figure 1: (a) Feature-embedding and (b) Target-embedding autoencoders. Solid lines correspond to the (primary) prediction task; dashed lines to the (auxiliary) reconstruction task. Shared components are involved in both. + +advantage that TEAs confer on both linear and nonlinear architectures using real-world datasets with both continuous and discrete targets. To the best of our knowledge, we are the first to formalize and quantify the theoretical benefit of autoencoder-based target-representation learning in a purely supervised setting, and to extend its application to the domain of multivariate sequence forecasting. 
+ +# 2 TARGET-EMBEDDING AUTOENCODERS + +Let $\mathcal{X}$ and $\mathcal{Y}$ be finite-dimensional vector spaces, and consider the supervised learning problem of predicting targets $\mathbf{y} \in \mathcal{Y}$ from features $\mathbf{x} \in \mathcal{X}$ . With a finite batch of $N$ training instances $\mathcal{D} = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^N$ , the objective is to learn a mapping $h: \mathcal{X} \to \mathcal{Y}$ that generalizes well to new samples from the same distribution. The vast majority of existing work considers the setting—most commonly, classification—where $|\mathcal{X}| \gg |\mathcal{Y}|$ ; under this scenario, autoencoders are often used to first transform the input into some lower-dimensional representation $\mathbf{z} \in \mathcal{Z}$ amenable to the downstream task. Doing so involves adding an auxiliary reconstruction loss $\ell_r$ to the primary prediction loss $\ell_p$ . + +Formally, solutions of this form—in supervised and semi-supervised settings alike—consist of a shared forward model $\phi : \mathcal{X} \to \mathcal{Z}$ , a reconstruction function $r : \mathcal{Z} \to \mathcal{X}$ , and a prediction function $d : \mathcal{Z} \to \mathcal{Y}$ during training (where notation $d$ reflects the downstream nature of the prediction task). Denote $\tilde{\mathbf{x}} = r(\phi(\mathbf{x}))$ and $\hat{\mathbf{y}} = d(\phi(\mathbf{x}))$ ; then the complete loss function takes the following form, + +$$ +L = \frac{1}{N} \sum_{n=1}^{N} \left[ \ell_p\left(\hat{\mathbf{y}}_n, \mathbf{y}_n\right) + \ell_r\left(\tilde{\mathbf{x}}_n, \mathbf{x}_n\right) \right] \tag{1} +$$ + +In contrast, we focus on settings where the target space $\mathcal{Y}$ is high-dimensional, and where possibly $|\mathcal{Y}| > |\mathcal{X}|$ . In this case, we argue that learning to reconstruct the input is not necessarily most beneficial.
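Concretely, the loss in Equation 1 can be sketched for a linear instantiation. The dimensions, random data, and weight matrices below are illustrative placeholders only, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear FEA: shared forward model phi, reconstruction r, prediction d.
dx, dz, dy, N = 8, 3, 2, 100              # |X| >> |Y|, bottleneck |Z| (placeholder sizes)
X = rng.normal(size=(N, dx))
Y = rng.normal(size=(N, dy))
Phi = rng.normal(size=(dx, dz))           # phi : X -> Z (shared)
W_r = rng.normal(size=(dz, dx))           # r   : Z -> X (auxiliary reconstruction)
W_d = rng.normal(size=(dz, dy))           # d   : Z -> Y (downstream prediction)

Z = X @ Phi
X_tilde = Z @ W_r                         # x-tilde = r(phi(x))
Y_hat = Z @ W_d                           # y-hat   = d(phi(x))

# Equation 1: mean over instances of prediction loss plus feature-reconstruction loss
L = np.mean(np.sum((Y_hat - Y) ** 2, axis=1) + np.sum((X_tilde - X) ** 2, axis=1))
```

Note that the reconstruction term here lives in feature space, which is exactly what the target-embedding variant below changes.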
In a simple classification problem, autoencoding inputs leverages the hypothesis that a reconstructive representation is also likely discriminative. In our setting, however, the more immediate problem is the high-dimensional structure of $\mathcal{Y}$ ; in particular, there is little guarantee that intermediate representations trained to encapsulate $\mathbf{x}$ are easily mapped back up to higher-dimensional targets. + +Our goal is to make use of intermediate representations that are both predictable from features as well as predictive of targets. A target-embedding autoencoder (TEA)—versus what we shall term a feature-embedding autoencoder (FEA)—flips the model architecture around by learning an embedding of target vectors instead, which a predictor then learns a mapping into. This involves an encoder $e: \mathcal{Y} \to \mathcal{Z}$ , an upstream predictor $u: \mathcal{X} \to \mathcal{Z}$ , and a shared forward model $\theta: \mathcal{Z} \to \mathcal{Y}$ . Denote $\tilde{\mathbf{y}} = \theta(e(\mathbf{y}))$ and $\hat{\mathbf{y}} = \theta(u(\mathbf{x}))$ ; the complete loss function is now of the following form, + +$$ +L = \frac{1}{N} \sum_{n=1}^{N} \left[ \ell_p\left(\hat{\mathbf{y}}_n, \mathbf{y}_n\right) + \ell_r\left(\tilde{\mathbf{y}}_n, \mathbf{y}_n\right) \right] \tag{2} +$$ + +Abstractly, the general idea of target space reduction is not new; in particular, it has been present in various solutions in the domain of multi-label classification (see Section 4 and Appendix B for discussions of related work). Here we focus on target-embedding autoencoders; they leverage the assumption that variations in (high-dimensional) target space are driven by a compact and predictable set of factors.
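By direct analogy with the feature-embedding case, Equation 2 for a linear TEA simply changes which space is autoencoded. Again, all sizes and weights below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear TEA: encoder e, upstream predictor u, shared forward model theta.
dx, dz, dy, N = 4, 3, 16, 100             # here |Y| > |X| > |Z| (placeholder sizes)
X = rng.normal(size=(N, dx))
Y = rng.normal(size=(N, dy))
W_e = rng.normal(size=(dy, dz))           # e     : Y -> Z (target embedding)
W_u = rng.normal(size=(dx, dz))           # u     : X -> Z (upstream prediction)
Theta = rng.normal(size=(dz, dy))         # theta : Z -> Y (shared forward model)

Y_tilde = Y @ W_e @ Theta                 # y-tilde = theta(e(y)): reconstruction arm
Y_hat = X @ W_u @ Theta                   # y-hat   = theta(u(x)): prediction arm

# Equation 2: both loss terms now live in target space
L = np.mean(np.sum((Y_hat - Y) ** 2, axis=1) + np.sum((Y_tilde - Y) ** 2, axis=1))
```

The shared matrix `Theta` receives gradients from both arms, which is what couples the reconstruction and prediction tasks.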
By construction, learning to reconstruct directly in output space ensures that latent representations are predictive of targets; at the same time, jointly training with the prediction loss ensures that latent representations are predictable from features. Instead of learning representations for mapping out of (downstream), here we learn representations for mapping into (upstream); the shared forward model handles the rest. See Figure 1 for high-level diagrams of TEAs versus FEAs. + +Training and Inference. Figure 2 gives block diagrams of component functions and objectives in (a) FEAs and (b) TEAs during training (see Algorithm 1 in Appendix C for pseudocode). Training occurs in three stages. First, the autoencoder is trained (to learn representations): the parameters of $e$ and $\theta$ are learned on the reconstruction loss. Second, the prediction arm is trained to regress the learned embeddings (generated by the encoder): the parameters of $u$ are learned (on the latent loss) while the autoencoder is frozen. Finally, all three components are jointly trained on both prediction and reconstruction losses (Equation 2): parameters of the predictor, embedding, and shared forward model are trained simultaneously. + +![](images/4a52c180b39bb4f35b893e47c91f21721576eb6803a6bb45a​c7aa77a2345769b.jpg) +(a) Feature-Embedding Autoencoder + +![](images/ccf42c39332aab04fe796657fb3b1ab6aa8515c32692f​f3d48f3f721914f2e6d.jpg) +(b) Target-Embedding Autoencoder +Figure 2: Functions and objectives in (a) FEAs and (b) TEAs. Blue and red identify supervised and representation learning components. FEAs are parameterized by $(\Phi, \mathbf{W}_d, \mathbf{W}_r)$ of $(\phi, d, r)$ , and TEAs by $(\Theta, \mathbf{W}_u, \mathbf{W}_e)$ of $(\theta, u, e)$ . Solid lines indicate forward propagation of data; dashed lines indicate backpropagation of gradients.
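The three training stages, and the inference-time hypothesis $h = \theta \circ u$, can be sketched end-to-end for a linear TEA. The synthetic data, dimensions, learning rate, and plain gradient-descent loops below are illustrative stand-ins, not the paper's actual training procedure (Algorithm 1, Appendix C):

```python
import numpy as np

rng = np.random.default_rng(1)
dx, dz, dy, N, lr = 6, 2, 12, 200, 0.01
X = rng.normal(size=(N, dx))
Y = X @ rng.normal(size=(dx, dy)) + 0.1 * rng.normal(size=(N, dy))  # targets driven by few factors

W_e = 0.1 * rng.normal(size=(dy, dz))     # encoder e
W_u = 0.1 * rng.normal(size=(dx, dz))     # upstream predictor u
Theta = 0.1 * rng.normal(size=(dz, dy))   # shared forward model theta

# Stage 1: train the autoencoder (e, theta) on the reconstruction loss.
for _ in range(500):
    Z = Y @ W_e
    E = Z @ Theta - Y                     # reconstruction residual
    gW_e, gTheta = Y.T @ E @ Theta.T / N, Z.T @ E / N
    W_e, Theta = W_e - lr * gW_e, Theta - lr * gTheta

# Stage 2: train the predictor u on the latent loss, autoencoder frozen.
Z_target = Y @ W_e
for _ in range(500):
    W_u -= lr * X.T @ (X @ W_u - Z_target) / N

# Stage 3: jointly train all three components on Equation 2.
for _ in range(500):
    Ep = X @ W_u @ Theta - Y              # prediction residual
    Er = Y @ W_e @ Theta - Y              # reconstruction residual
    gW_u = X.T @ Ep @ Theta.T / N
    gW_e = Y.T @ Er @ Theta.T / N
    gTheta = ((X @ W_u).T @ Ep + (Y @ W_e).T @ Er) / N
    W_u, W_e, Theta = W_u - lr * gW_u, W_e - lr * gW_e, Theta - lr * gTheta

# Inference: the embedding arm is dropped; predictions use h = theta o u only.
mse = np.mean((X @ W_u @ Theta - Y) ** 2)
```

The joint stage mirrors the description above: the encoder, predictor, and shared forward model all receive gradients simultaneously, while at inference time only `W_u` and `Theta` survive.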
Note that during training, the forward model receives two types of latents as input: encodings of true targets, as well as encodings predicted from features. At inference time, the target-embedding arm is dropped, leaving the learned hypothesis $h = \theta \circ u$ for prediction. Figure 4 (Appendix C) provides step-by-step block diagrams of both training and inference in greater detail. + +We emphasize that TEAs—as is the case with FEAs—specify a general framework independent of the implementation details of each component. For instance, the solutions to applications in Yu et al. (2014), Yeh et al. (2017), and Girdhar et al. (2016) can be abstractly regarded as linear and nonlinear instances of this framework, with domain-specific architectures (see Section 4 and Appendix B for more detailed discussions). Linear TEAs—which we study in greater detail in Section 3—involve parameterizations $(\Theta, \mathbf{W}_u, \mathbf{W}_e)$ of $(\theta, u, e)$ consisting of single hidden layers with linear activation. + +# 3 STABILITY-BASED LEARNING GUARANTEE + +Two questions are outstanding. The first is theoretical. We are motivated by the prior that variations in target space are driven by a lower-dimensional set of underlying factors. In this context, can we say something more rigorous about the benefit of TEAs? In this section, we take the first step in showing that jointly learning target representations improves generalization performance in the supervised setting. Specifically, we demonstrate that linear TEAs are characterized by uniform stability, from which theoretical guarantees are known to follow. The second question is empirical. We noted above that certain applications of label-embedding to classification can be interpreted through this framework. Does the benefit extend beyond its static, feedforward instantiations—into the temporal setting for multi-variate sequence forecasting, with both continuous and discrete targets? 
In Section 5, we first validate our theoretical findings with linear models and sensitivities, as well as extending our empirical analysis to the realm of recurrent, nonlinear models for both regression and classification. + +Consider a linear TEA, where the upstream predictor is parameterized by $\mathbf{W}_u\in \mathbb{R}^{|\mathcal{Z}|\times |\mathcal{X}|}$ , target-embedding by $\mathbf{W}_e\in \mathbb{R}^{|\mathcal{Z}|\times |\mathcal{Y}|}$ , and shared forward model by $\Theta \in \mathbb{R}^{|\mathcal{Y}|\times |\mathcal{Z}|}$ , where $|\mathcal{Z}| < |\mathcal{Y}|$ . The complete loss function is given by $L = \frac{1}{N}\sum_{n = 1}^{N}\left[\ell_p(\Theta \mathbf{W}_u\mathbf{x}_n,\mathbf{y}_n) + \ell_r(\Theta \mathbf{W}_e\mathbf{y}_n,\mathbf{y}_n)\right]$ following Equation 2. Interpreting the jointly learned autoencoding component as an auxiliary task, we show that the TEA algorithm for learning the shared forward model $\Theta$ is uniformly stable with respect to the domain of the supervised prediction task. To establish our notation, first recall the following: + +Definition 1 (Generalization Bound) Given a learning algorithm $\mathcal{D} \mapsto h_{\mathcal{D}}$ that returns hypothesis $h_{\mathcal{D}}$ , let $R(h_{\mathcal{D}}) = \int \ell(h_{\mathcal{D}}(\mathbf{x}), \mathbf{y}) d\mu(\mathbf{x}, \mathbf{y})$ denote the risk, and $\hat{R}(h_{\mathcal{D}}) = \frac{1}{N} \sum_{n=1}^{N} \ell(h_{\mathcal{D}}(\mathbf{x}_n), \mathbf{y}_n)$ denote the empirical risk, where $\ell$ is some loss function. A generalization bound is a probabilistic bound on the defect that takes the following form: $R(h_{\mathcal{D}}) - \hat{R}(h_{\mathcal{D}}) \leq \epsilon$ with some confidence $1 - \delta$ . + +Definition 2 (Uniform Stability) Let $\mathcal{D}^i$ denote a modification of batch $\mathcal{D}$ where the $i$ -th training instance $(\mathbf{x}_i, \mathbf{y}_i)$ is replaced by an independent and identically distributed example $(\mathbf{x}_i', \mathbf{y}_i')$ . 
A learning algorithm is said to be $\gamma$ -uniformly stable with respect to the loss function $\ell$ if $\forall \mathcal{D} \in (\mathcal{X} \times \mathcal{Y})^N, \forall i \in \{1, \dots, N\}, \forall (\mathbf{x}, \mathbf{y}), (\mathbf{x}_i', \mathbf{y}_i') \in \mathcal{X} \times \mathcal{Y}: |\ell(h_{\mathcal{D}}(\mathbf{x}), \mathbf{y}) - \ell(h_{\mathcal{D}^i}(\mathbf{x}), \mathbf{y})| \leq \gamma$ . Uniform stability holds if the minimum value of $\gamma$ converges to zero as batch size $N$ increases without limit. + +Uniform stability can be used to derive algorithm-dependent generalization bounds. In particular, Bousquet & Elisseeff (2002) first showed that the defect $\epsilon$ of a $\gamma$ -uniformly stable algorithm is less than $O((\gamma + 1/N)\sqrt{N\log(1/\delta)})$ with probability $\geq 1 - \delta$ . Feldman & Vondrak (2018) recently demonstrated an improved bound of $O(\sqrt{(\gamma + 1/N)\log(1/\delta)})$ . Here, we show uniform stability for linear TEAs, where $\gamma$ is $O(1/N)$ , from which a tight generalization bound follows immediately. Before we begin, we introduce two additional tools: $c$ -strong convexity and $\sigma$ -admissibility. Note that these conditions are standard and easily satisfied—for instance, by the quadratic loss function; for more context see for example Bousquet & Elisseeff (2002), Liu et al. (2016), and Mohri et al. (2018). + +Definition 3 (c-Strong Convexity) Differentiable loss function $\ell$ is c-strongly convex if $\forall h, h' \in \mathcal{H}: \langle h'(\mathbf{x}) - h(\mathbf{x}), \nabla \ell(h'(\mathbf{x}), \mathbf{y}) - \nabla \ell(h(\mathbf{x}), \mathbf{y}) \rangle \geq c \|h'(\mathbf{x}) - h(\mathbf{x})\|_2^2$ for some $c \in \mathbb{R}^+$ , where $\nabla \ell(h(\mathbf{x}), \mathbf{y})$ denotes the gradient with respect to $h(\mathbf{x})$ , and $\langle \cdot, \cdot \rangle$ denotes the dot product operation.
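Definition 2 can be probed numerically. The sketch below uses ordinary least squares as a hypothetical stand-in learner (not the TEA algorithm itself) and measures how much the loss at a fixed test point moves when one training instance is replaced by a fresh i.i.d. draw; the empirical instability shrinks as $N$ grows, mirroring the $O(1/N)$ behavior sought in this section:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, Y):
    # Hypothetical stand-in learner: ordinary least squares, h_D(x) = x @ W.
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def instability(N, dx=5, dy=3, trials=20):
    # Empirical proxy for gamma in Definition 2: replace the i-th instance
    # with a fresh i.i.d. draw and measure the loss change at a test point.
    diffs = []
    for _ in range(trials):
        X = rng.normal(size=(N, dx))
        Y = X @ rng.normal(size=(dx, dy))
        Xp, Yp = X.copy(), Y.copy()
        Xp[0], Yp[0] = rng.normal(size=dx), rng.normal(size=dy)
        xt, yt = rng.normal(size=dx), rng.normal(size=dy)
        l = np.sum((xt @ fit(X, Y) - yt) ** 2)
        l_mod = np.sum((xt @ fit(Xp, Yp) - yt) ** 2)
        diffs.append(abs(l - l_mod))
    return float(np.mean(diffs))

gamma_small, gamma_large = instability(N=20), instability(N=2000)
```

One would expect `gamma_large` to be much smaller than `gamma_small`, since a single replaced instance carries less weight in a larger batch.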
+ +Definition 4 ( $\sigma$ -Admissibility) Loss function $\ell$ is $\sigma$ -admissible with respect to the underlying hypothesis class $\mathcal{H}$ if $\forall h, h' \in \mathcal{H}: |\ell(h'(\mathbf{x}), \mathbf{y}) - \ell(h(\mathbf{x}), \mathbf{y})| \leq \sigma \|h'(\mathbf{x}) - h(\mathbf{x})\|_2$ for some $\sigma \in \mathbb{R}^+$ . + +To obtain uniform stability, we make two assumptions—both analogous to prior work arguing for the benefit of learning shared models between tasks. Liu et al. (2016) deals with learning multiple tasks in general, and Le et al. (2018) deals with reconstructing inputs in what we describe as FEAs. Now in multi-task learning, the separate tasks are usually chosen due to some prior relationship between them. In the case of Assumption 1 in Liu et al. (2016) and Assumption 5 in Le et al. (2018), this is assumed to come from similarities in feature structures across tasks; hence their assumptions of cross-representability are made in feature space. (Note that this restricts primary and auxiliary features to be elements of the same space). Our setting is contrary: the inputs to primary and auxiliary tasks come from different spaces, but are trained to produce similar labels through a compact, shared latent space; hence our assumption of cross-representability will be made in this latent space instead. + +Assumption 1 (Representative Vectors) There exists a representative subset of target vectors $B = \{\mathbf{b}_1,\dots,\mathbf{b}_M\} \subset \{\mathbf{y}_1,\dots,\mathbf{y}_N\}$ such that the latent representation of any individual $(\mathbf{x},\mathbf{y})$ can be linearly reconstructed from that of the representative subset with small error; i.e.
$\mathbf{W}_u\mathbf{x} = \sum_{m=1}^{M}\alpha_m\mathbf{W}_e\mathbf{b}_m + \eta$ and $\mathbf{W}_e\mathbf{y} = \sum_{m=1}^{M}\beta_m\mathbf{W}_e\mathbf{b}_m + \eta$ for some coefficients $\alpha_m, \beta_m \in \mathbb{R}$ , where $\sum_{m=1}^{M}\alpha_m^2 \leq r_\alpha^2$ , $\sum_{m=1}^{M}\beta_m^2 \leq r_\beta^2$ for some $r_\alpha, r_\beta \in \mathbb{R}^+$ , and $\eta$ is a small error satisfying $\|\eta\|_2 \leq \varepsilon$ . + +Remark 1 This assumption is comparatively mild, even for $\varepsilon = 0$ . Note that in Liu et al. (2016) the features for separate tasks come from different examples in general, and the similarity of their distributions within $\mathcal{X}$ is simply assumed. Here, each pair of inputs to the prediction and reconstruction tasks comes from the same instance, and similarity within $\mathcal{Z}$ is explicitly enforced through the (joint) training objective. In addition, observe that the assumption will hold with zero error as long as the number of independent latent vectors is at least $|\mathcal{Z}|$ . Furthermore, unlike the scenarios in Liu et al. (2016) and Le et al. (2018) we do not require that the input domains of the two tasks be identical. Therefore for ease of exposition, we assume going forward that $\varepsilon = 0$ (see Remark 6 in Appendix A). + +Remark 2 A comparison with Assumption 4 in Le et al. (2018) sheds additional light on why we expect TEAs to be beneficial where $|\mathcal{Y}| > |\mathcal{Z}|$ , in contrast with the (more typical) scenario $|\mathcal{X}| > |\mathcal{Z}| \gg |\mathcal{Y}|$ . Critically, the technique in Le et al. (2018) banks on the fact that the prediction arm projects the latent into a lower-dimensional target space. Conversely, Assumption 1 here relies on the fact that the encoding arm maps into the latent space from a higher-dimensional target space (rendering cross-representability therein reasonable).
The distinction is crucial: we certainly do not expect any benefit from autoencoding trivially low-dimensional vectors! Note also that here the representative vectors are taken from $\mathcal{Y}$ ; to take them from $\mathcal{X}$ instead would be unreasonable. For any compressive autoencoder, we generally expect that if some subset $\{\mathbf{b}_1, \dots, \mathbf{b}_M\} \subset \{\mathbf{y}_1, \dots, \mathbf{y}_N\}$ spans $\mathcal{Y}$ , then $\{\mathbf{W}_e\mathbf{b}_1, \dots, \mathbf{W}_e\mathbf{b}_M\}$ also spans $\mathcal{Z}$ in order to be maximally reconstructive. The same cannot be said of subsets $\{\mathbf{c}_1, \dots, \mathbf{c}_M\} \subset \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ that span $\mathcal{X}$ : for instance, take $|\mathcal{X}| \leq |\mathcal{Z}|$ . + +In addition to being representative in terms of latent values, the set of representative points also needs to be representative in terms of the reconstruction error. First, let $L'$ denote the counterpart to $L$ where the $i$ -th sample $(\mathbf{x}_i,\mathbf{y}_i)$ is replaced by some new instance $(\mathbf{x}_i',\mathbf{y}_i') \in \mathcal{X} \times \mathcal{Y}$ ; that is, $L' = \frac{1}{N} [\ell_p(\Theta \mathbf{W}_u\mathbf{x}_i',\mathbf{y}_i') + \ell_r(\Theta \mathbf{W}_e\mathbf{y}_i',\mathbf{y}_i') + \sum_{n=1; n \neq i}^{N} (\ell_p(\Theta \mathbf{W}_u\mathbf{x}_n,\mathbf{y}_n) + \ell_r(\Theta \mathbf{W}_e\mathbf{y}_n,\mathbf{y}_n))]$ . Then, let $\Theta_*$ , $\Theta'_*$ denote the optimal parameters corresponding to the two losses $L$ and $L'$ . + +Assumption 2 (Representative Errors) Let $L_r'$ contain the reconstruction errors of the dataset without the $i$ -th sample: $L_r' = (1/N)\sum_{n=1; n\neq i}^N\ell_r(\Theta\mathbf{W}_e\mathbf{y}_n,\mathbf{y}_n)$ , and let $L_r^B$ denote the reconstruction error of the representative subset: $L_r^B = (1/M)\sum_{m=1}^M\ell_r(\Theta\mathbf{W}_e\mathbf{b}_m,\mathbf{b}_m)$ .
Then there exists some $a > 0$ such that for any small $\kappa > 0$ : $L_r^B(\Theta_*) - L_r^B(\kappa\Theta_*' + (1-\kappa)\Theta_*) + L_r^B(\Theta_*') - L_r^B(\kappa\Theta_* + (1-\kappa)\Theta_*') \leq a[L_r'(\Theta_*) - L_r'(\kappa\Theta_*' + (1-\kappa)\Theta_*)] + a[L_r'(\Theta_*') - L_r'(\kappa\Theta_* + (1-\kappa)\Theta_*')]$ . + +That is, the difference in reconstruction error $L_r^B$ between the two points $\Theta_*$ , $\Theta_*'$ is upper bounded by some constant factor of the corresponding difference in reconstruction error $L_r'$ at the two points. Importantly, note that this does not require that the values of the errors $L_r'$ and $L_r^B$ themselves be similar, only that their differences be similar. This assumption is identical to Assumption 6 in Le et al. (2018), and plays an identical role: we make use of $L_r^B$ —which is only dependent on $M$ —to allow the bound to decay with $N$ ; this is in contrast with the generic multi-task analysis of Liu et al. (2016), which—if applied directly to TEAs (as with FEAs)—would give a bound that does not decay with $N$ . + +Theorem 1 (Uniform Stability) Let $\ell_p$ and $\ell_r$ be $\sigma_p$ -admissible and $\sigma_r$ -admissible loss functions, and let $\ell_r$ be $c$ -strongly convex. Then under Assumptions 1 and 2, the following inequality holds, + +$$ +\left| \ell_p(\Theta_*' \mathbf{W}_u \mathbf{x}, \mathbf{y}) - \ell_p(\Theta_* \mathbf{W}_u \mathbf{x}, \mathbf{y}) \right| \leq \frac{2(\sigma_p^2 r_\alpha^2 + \sigma_p \sigma_r r_\alpha r_\beta) a M}{c N} +$$ + +Proof. Appendix A. + +Corollary 1 (Generalization Bound) Consider the same conditions as in Theorem 1; that is, let $\ell_p$ and $\ell_r$ be $\sigma_p$ -admissible and $\sigma_r$ -admissible losses, and let $\ell_r$ be $c$ -strongly convex.
Then under Assumptions 1 and 2, the defect $\epsilon$ is less than $O(\sqrt{(1 / N)\log(1 / \delta)})$ with probability at least $1 - \delta$ . + +Proof. Follows immediately from Theorem 1 (above) and either of the following (similar results hold for both): Theorem 1.2 (Feldman & Vondrak, 2018), and Theorem 12 (Bousquet & Elisseeff, 2002). + +In supervised learning, it is often easy to make an argument—on an intuitive level—for the "regularizing" effect of additional loss terms. In contrast, this analysis allows us to unambiguously identify and quantify the benefit of the embedding component as a regularizer (see Remark 4 in Appendix A). + +Remark 3 In the linear label space reduction framework of Yu et al. (2014), uniform convergence is also shown to hold via norm-based regularization. Specifically for uniform stability, a similar bound can also be achieved by adding a strongly convex term to the objective, such as Tikhonov- or $\ell_2$ -regularization (Bousquet & Elisseeff, 2002; Rosasco & Poggio, 2009; Shalev-Shwartz et al., 2010). Here, however, the joint reconstruction task leverages a different kind of bias—precisely, the assumption that there exist compact and predictable representations of targets. Therefore the significance of this analysis is that we achieve an equivalent result independent of explicit regularization. + +# 4 RELATED WORK + +Our work straddles three threads of research: (1) supervised representation learning with autoencoders, (2) label space reduction for multi-label classification, and (3) stability-based learning guarantees. Appendix B provides a much expanded treatment, and presents summary tables for additional context. + +Supervised representation learning.
While a great deal of research is devoted to uncovering useful underlying structures of data through autoencoders—with various properties such as sparsity (Ranzato et al., 2007) and disentanglement (Chen et al., 2018), among many others (Tschannen et al., 2018)—the goal of better representations is often for the benefit of downstream tasks. Semi-supervised autoencoders jointly optimized on partially-labeled data can obtain compact representations that improve prediction (Weston et al., 2012; Kingma et al., 2014; Ghifary et al., 2016). Furthermore, auxiliary reconstruction is also useful in a purely supervised setting: rather than focusing on how specific architectures better structure unlabeled data, Le et al. (2018) show the simple benefit of feature-reconstruction on supervised classification—a special case of what we describe as FEAs. + +In contrast, we focus on target-representation learning in the supervised setting, and analyze its benefit under the prior that high-dimensional targets are driven by a compact and predictable set of factors. + +We take inspiration from the empirical study of Girdhar et al. (2016), where latent representations of 3D objects are jointly trained to be predictable from 2D images. Their setup can be viewed as a specific instance of TEAs with (nonlinear) convolutional components, with a minor variation in training: in the joint stage, predictors continue to regress the learned embeddings, and gradients only backpropagate from latent space (instead of target space). Unlike the symmetry of our losses (which we require for our analysis above), their common decoder is only shared indirectly (and predictions made indirectly). As it turns out, this does not appear to matter for performance (see Section 5). In Mostajabi et al. 
(2018), a two-stage procedure is used for semantic segmentation—loosely comparable to the first two stages in TEAs; in contrast to our emphasis on joint training, they study the benefit of a frozen embedding branch in parallel with direct prediction. More broadly related to target-embedding, Dalca et al. (2018) build anatomical priors for biomedical segmentation in unsupervised settings. + +Multi-label classification. The general idea of target space dimension reduction has been explored for multi-label classification problems (commonly, annotation based on bags of features). These first derive a reduced label space, then subsequently associate inputs to it; methods include compressed sensing (Hsu et al., 2009), principal components (Tai & Lin, 2010), maximum-margin coding (Zhang & Schneider, 2012), and landmarking (Balasubramanian & Lebanon, 2012). Closer to our theme of joint learning, Chen & Lin (2012) first propose simultaneously minimizing encoding and prediction errors via an SVD formulation. Using generic empirical risk minimization, Yu et al. (2014) formulate the problem as a linear model with a low-rank constraint. While this captures an intuition (similar to ours) of restricted latent factors, their error bounds require norm-based regularization (unlike ours). + +Recently, Yeh et al. (2017) generalize the label-embedding approach to autoencoders. This flexibly accommodates custom losses to exploit correlations, as well as deep learning for nonlinearities. Our work is related to this line of research, although we operate at a higher level of abstraction, with a significant difference in focus. Their problem is multi-label classification, and their starting point is binary relevance (i.e. label by label). During reduction, they worry about specific losses that capture dependencies within and among spaces. In contrast, we worry about autoencoding at all—that is, we focus on the effect of joint reconstruction on learning the prediction model. 
Problems can be of any form: classification or regression, and our starting point is direct prediction (i.e. no reconstruction). + +Stability and learning guarantees. Generalizability via hypothesis stability is first studied in Rogers & Wagner (1978) and Devroye & Wagner (1979); unlike arguments based on the complexity of the search space (Vapnik & Chervonenkis, 1971; Pollard, 1984; Koltchinskii, 2001), these account for how the algorithm depends on the data. Bousquet & Elisseeff (2002) first formalize the notion of uniform stability sufficient for learnability, and Feldman & Vondrak (2018) use ideas related to differential privacy (Bassily et al., 2016) for further improvement. Separately, while there is a wealth of research on dimensionality reduction and autoencoders (Singh et al., 2009; Mohri et al., 2015; Gottlieb et al., 2016; Epstein & Meir, 2019), they either operate in the semi-supervised setting, or focus on the benefit of feature representations (not targets) and also do not consider joint learning. + +The benefit of jointly learning multiple tasks through a common operator (Caruana, 1997) is explored with VC-based (Baxter, 2000) and Rademacher complexity-based (Maurer, 2006; Maurer et al., 2016) analyses. Recently, Liu et al. (2016) show that the algorithm for learning the shared model in a multi-task setting is uniformly stable. While our argument is based on theirs, we are not interested in a generic bound for all tasks; closer to Le et al. (2018), we focus on the primary prediction task, and leverage the auxiliary reconstruction task for stability. Similarly, we arrive at an $O(1 / N)$ bound on instability without an explicit regularization term as in Bousquet & Elisseeff (2002). Unlike them, however, the fundamental distinction of our setting is that $\mathcal{Y}$ is high-dimensional (but where the underlying factors are assumed compact); in this sense our focus is the mirror opposite of theirs.
+ +# 5 EXPERIMENTS AND DISCUSSION + +So far, we have formalized a general target-autoencoding framework for supervised learning, and quantified the benefit via uniform stability. Our overall goal in this section is to explore this benefit in a simple controlled setting, such that we can identify and isolate its utility on the prediction task, and investigate any sensitivities of interest. By way of preface, we emphasize two observations from above: (1) In the static, multi-label classification setting, the gain from label-embedding has been studied, including the autoencoder approach of Yeh et al. (2017)—which can be viewed as an instantiation of TEAs with sophisticated refinements. (2) The benefit of target-autoencoding is also + +Table 1: Dataset statistics and input/output dimensions used in experiments + +
| Dataset | Num. patients | Samp. freq. | Target type | Static dim. | Temp. dim. (history) | Temp. dim. (forecast) | Window (history) | Window (forecast) | Effective $\lvert\mathcal{X}\rvert$ | Effective $\lvert\mathcal{Y}\rvert$ |
|---|---|---|---|---|---|---|---|---|---|---|
| UKCF | 10,000 | 1 yr. | Binary | 11 | 43 | 34 | 3 | 4 | 140 | 136 |
| ADNI | 1,700 | 6 m. | Continuous | 11 | 26 | 24 | 4 | 8 | 115 | 192 |
| MIMIC | 22,000 | 4 hr. | Mixed | 26 | 361 | 361 | 5 | 5 | 1,831 | 1,805 |
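The last two columns of Table 1 can be double-checked mechanically; a small sketch (the helper name is ours) reproducing the arithmetic:

```python
def effective_dims(static_dim, hist_dim, fore_dim, w_hist, w_fore):
    """Effective |X| = static dim + history window x history dim;
    effective |Y| = forecast window x forecast dim."""
    return static_dim + w_hist * hist_dim, w_fore * fore_dim

# UKCF, ADNI, MIMIC rows of Table 1
print(effective_dims(11, 43, 34, 3, 4))    # (140, 136)
print(effective_dims(11, 26, 24, 4, 8))    # (115, 192)
print(effective_dims(26, 361, 361, 5, 5))  # (1831, 1805)
```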
The effective input dimension $|\mathcal{X}|$ is computed as the dimension of static data plus the product of the width of the historical window (of temporal information) with its dimension; the effective target dimension $|\mathcal{Y}|$ is similarly computed as the product of the width of the forecast window (of temporal information) with its dimension.

demonstrated using nonlinear, convolutional architectures in Girdhar et al. (2016)—which is also an instantiation of TEAs, also noting significant gains. Therefore a natural question of interest is:

- Does the utility of target-embedding extend to (nonlinear) recurrent models with sequential data for general, high-dimensional targets (i.e. regression and/or classification)?

Disease Trajectories. In this section, we take the first step in answering this question—as our empirical contribution, we extend validation of target-embedding autoencoders to the domain of multivariate sequence forecasting, exploring its utility on linear and nonlinear sequence-to-sequence architectures. What makes a good testbed? In particular, the progression of diseases (and their markers) is high-dimensional in presentation; at the same time, their evolution is often driven by latent biological dynamics (Szczesniak et al., 2017; Pascoal et al., 2017; Alaa & van der Schaar, 2019). With the increasing importance of early diagnosis and timely intervention in healthcare, the ability to forecast individual disease trajectories $(\mathcal{Y})$ in the presence of limited windows of information $(\mathcal{X})$ has become increasingly desirable (Donohue et al., 2014; Pham et al., 2017; Bhagwat et al., 2018).

Datasets. We use three datasets in our experiments. The first consists of a cohort of patients enrolled in the UK Cystic Fibrosis registry (UKCF), which records follow-up trajectories for over 10,000 patients.
We are interested in forecasting future trajectories for the 11 possible infections and 23 possible comorbidities (all binary variables) recorded at each follow-up, using past trajectories and basic demographics as input. The second consists of patients in the Alzheimer's Disease Neuroimaging Initiative study (ADNI), which tracks disease progression for over 1,700 patients. We are interested in forecasting the evolution of the 8 primary biomarkers and 16 cognitive tests (all continuous variables) measured at each visit, using past measurements and basic demographics as input. The third consists of a cohort of patients in intensive care units from the Medical Information Mart for Intensive Care (MIMIC), which records physiological data streams for over 22,000 patients. Likewise, we are interested in forecasting future trajectories for the 361 most frequently measured variables such as vital signs and lab tests (both binary and continuous variables), again using past measurements and basic demographics as input. See Appendix D for more information on datasets.

Experimental Setup. In each instance, the prediction input is a precedent window of up to width $w_{x}$, and the prediction target is the succedent window of width $w_{y}$. For UKCF $(w_{x}, w_{y}) = (3, 4)$ at 1-year resolution, for ADNI $(4, 8)$ at 6-month resolution, and for MIMIC $(5, 5)$ at 4-hour (resampled) resolution. All models are implemented in Tensorflow. Linear models consist of a single hidden layer with no nonlinearity; for the nonlinear case, we implement an RNN model for each component using GRUs. See Appendix D for additional detail on model implementation and configuration. For evaluation, we measure the mean squared error (MSE) for continuous targets (averaged across variables), and the area under the precision-recall curve (PRC) and area under the receiver operating characteristic curve (ROC) for binary targets (averaged across variables).
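The per-variable averaging just described can be sketched as follows; this is an illustrative reimplementation of ours (not the authors' code), with ROC AUC computed via its Mann-Whitney formulation:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """ROC AUC for one binary variable via the Mann-Whitney U statistic."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # correctly ranked (pos, neg) pairs
    ties = (pos[:, None] == neg[None, :]).sum()  # tied pairs count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def macro_average(metric, Y_true, Y_pred):
    """Average a per-variable metric across target variables (columns)."""
    return float(np.mean([metric(Y_true[:, j], Y_pred[:, j])
                          for j in range(Y_true.shape[1])]))

mse = lambda y, p: float(np.mean((y - p) ** 2))
```

For continuous targets, `macro_average(mse, Y_true, Y_pred)` gives the averaged MSE; a PRC metric (e.g. scikit-learn's `average_precision_score`) can be slotted in analogously for binary targets.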
We use cross-validation on the training set for hyperparameter tuning, selecting the setting that gives the lowest validation loss averaged across folds. For each model and dataset, we report the average and standard error of each performance metric across 10 different experiment runs, each with a different random train-test split.

Note that forecasting high-dimensional disease trajectories is challenging, and input information is deliberately limited (as is often the case in medical practice); the desired targets are of similar or higher dimension than the inputs (see Table 1). This obviously results in an inherently difficult prediction problem—but one which makes for a good candidate setting to test the utility of target-representation learning. RNN autoencoders have previously been proposed for learning representations of inputs (i.e. FEAs instantiated with RNNs) to improve classification (Dai & Le, 2015), prediction (Lyu et al., 2018), generation (Srivastava et al., 2015), and clustering (Baytas et al., 2017); similarly, our mission is not to excessively optimize specific architectural novelties to match state-of-the-art, but rather to explore the benefit of the autoencoding framework. Here, we learn representations of targets.

Table 2: Summary results for TEA and comparators on linear model with UKCF (Bold indicates best)
| | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|
| PRC(I) | 0.322 ± 0.099* | 0.347 ± 0.085* | 0.351 ± 0.079* | **0.450 ± 0.035** | 0.414 ± 0.028* |
| PRC(C) | 0.416 ± 0.100* | 0.433 ± 0.083* | 0.455 ± 0.087* | **0.559 ± 0.060** | 0.520 ± 0.052 |
| ROC(I) | 0.689 ± 0.089* | 0.710 ± 0.072* | 0.720 ± 0.073 | **0.767 ± 0.026** | 0.766 ± 0.023 |
| ROC(C) | 0.679 ± 0.091* | 0.700 ± 0.075* | 0.713 ± 0.075 | **0.767 ± 0.042** | 0.755 ± 0.037 |
The two-sample $t$-test for difference in means is conducted on the results. An asterisk indicates statistically significant difference in means ($p$-value $< 0.05$) relative to the TEA result. PRC and ROC metrics are reported separately for variables representing infections (I) and comorbidities (C). See Tables 9-10 for extended results.
| | UKCF PRC(I) | UKCF PRC(C) | ADNI MSE(B) | ADNI MSE(C) | MIMIC PRC | MIMIC MSE |
|---|---|---|---|---|---|---|
| Base | 0.411 ± 0.035* | 0.497 ± 0.057* | 0.105 ± 0.018* | 0.361 ± 0.064 | 0.142 ± 0.028* | 0.153 ± 0.011 |
| REG | 0.415 ± 0.030* | 0.518 ± 0.052* | 0.096 ± 0.014* | 0.360 ± 0.066 | 0.143 ± 0.019* | 0.152 ± 0.010 |
| FEA | 0.410 ± 0.033* | 0.521 ± 0.054* | 0.092 ± 0.012* | 0.356 ± 0.068 | 0.144 ± 0.030* | 0.152 ± 0.012 |
| TEA | **0.483 ± 0.045** | **0.583 ± 0.072** | **0.063 ± 0.010** | **0.330 ± 0.066** | **0.239 ± 0.039** | **0.150 ± 0.012** |
| F/TEA | 0.457 ± 0.037 | 0.576 ± 0.071 | 0.073 ± 0.010* | 0.338 ± 0.067 | 0.166 ± 0.023* | 0.154 ± 0.011 |

UKCF has binary targets, ADNI continuous targets, and MIMIC mixed targets.
The two-sample $t$-test for difference in means is conducted on the results. An asterisk indicates statistically significant difference in means ($p$-value $< 0.05$) relative to the TEA result. For UKCF, only PRC metrics for infections (I) and comorbidities (C) are shown due to space limitation; for ADNI, MSE metrics are reported separately for targets representing biomarkers (B) and cognitive tests (C). See Tables 13-14, 17-18, and 21-22.

# 5.1 MAIN RESULTS

Overall Benefit. First, we examine the overall utility of TEAs. To verify the linear case first, Table 2 summarizes the performance of TEA and alternate setups on UKCF. The temporal axis is flattened to simulate ordinary static multi-label classification, and the base case is direct prediction (Base)—that is, absent any auxiliary representation learning or regularization. Next, we allow for $\ell_2$-regularization over direct prediction (REG), as well as over all other methods. FEAs differ only by the added feature-reconstruction, and TEAs only by the target-reconstruction; as an additional sensitivity, we also implement a combined approach (F/TEA). More generally, we also wish to examine the benefit of TEA for the nonlinear case: Table 3 summarizes analogous results where component functions are implemented with GRU networks; results are shown for all datasets. Ceteris paribus, we observe that target-representation learning has a notable positive impact on performance. Interestingly, learning representations of inputs does not yield significant benefit, and the hybrid approach (F/TEA) is worse than TEA; this suggests that forcing the intermediate representation to encode both features and targets may be overly constraining. (Note that for the linear model, the instances are restricted to those for which the full input window is available; as a consequence, the results for linear and nonlinear cases are not directly comparable).
Figures 4 (Appendix C) and 5 (Appendix D) give training diagrams for all comparators. Additional experiment results (by model, timestep, metric) are in Appendix E.1-2.

Source of Gain. There are two (related) interpretations of TEAs. First, we studied the regularization view in Section 3; this concerns the benefit of joint training using both prediction and reconstruction losses. Ceteris paribus, we expect performance to improve purely by dint of the jointly trained TEA objective. Second, the reduction view says that TEAs decompose the (difficult) prediction problem into two (smaller) tasks: the autoencoder learns a compact representation $\mathbf{z}$ of $\mathbf{y}$, and the predictor learns to map $\mathbf{x}$ to $\mathbf{z}$. This makes the potential benefit of staged training (Section 2 and Appendix C) intuitively clear, and suggests an alternative—that of simply training the autoencoder and predictor arms in two stages—à la Mostajabi et al. (2018). As a general framework, TEAs combine both ideas: all three components are jointly trained in a third stage—à la Girdhar et al. (2016). We now account for the improvement in performance due to these two sources of benefit; Table 4 does so for the linear case (on UKCF), and Table 5 for the more general nonlinear case (on all datasets). The “No Joint” setting isolates the benefit from staged training only. This is analogous to basic unsupervised pretraining (though using targets), and corresponds to omitting the final joint training stage in Algorithm 1. The “No Staged” setting isolates the benefit from joint training only (without pretraining the autoencoder or predictor), and corresponds to omitting the first two training stages in Algorithm 1. The “Neither” setting is equivalent to vanilla prediction (REG) without leveraging either of the advantages. We observe that while both sources of benefit are individually important, neither setting performs quite as well as when both are combined.
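For the linear case, the staged-then-joint procedure admits a compact sketch. The following is our own illustrative NumPy reconstruction on synthetic data (the paper's models are in TensorFlow; all names, dimensions, and hyperparameters here are illustrative choices), with encoder `E`, decoder `D`, and predictor `P`:

```python
import numpy as np

# Synthetic stand-in data: a compact k-dimensional factor drives both X and Y.
rng = np.random.default_rng(1)
N, d_x, d_y, k = 300, 15, 40, 4
Z0 = rng.normal(size=(N, k))
X = Z0 @ rng.normal(size=(k, d_x)) / np.sqrt(k)
Y = Z0 @ rng.normal(size=(k, d_y)) / np.sqrt(k) + 0.05 * rng.normal(size=(N, d_y))

# Stage 1: pretrain the target autoencoder. The optimal linear autoencoder is
# given by the top-k right singular vectors of Y: E maps y -> z, D maps z -> y.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
E, D = Vt[:k].T.copy(), Vt[:k].copy()

# Stage 2: pretrain the predictor P to regress the frozen target embeddings.
Zy = Y @ E
P, *_ = np.linalg.lstsq(X, Zy, rcond=None)

# Stage 3: joint training of all three components on the combined objective
# lam * reconstruction + (1 - lam) * prediction, with lam = 0.5 as in the paper.
lam, lr = 0.5, 0.05
for _ in range(200):
    R = (Y @ E) @ D - Y   # reconstruction residual
    Q = (X @ P) @ D - Y   # prediction residual (vanilla TEA: direct prediction loss)
    gD = (lam * (Y @ E).T @ R + (1 - lam) * (X @ P).T @ Q) / N
    gE = lam * Y.T @ (R @ D.T) / N
    gP = (1 - lam) * X.T @ (Q @ D.T) / N
    D -= lr * gD
    E -= lr * gE
    P -= lr * gP

pred_mse = float(np.mean(((X @ P) @ D - Y) ** 2))
```

Skipping stage 3 corresponds to the “No Joint” comparator, and running stage 3 from random initializations corresponds to “No Staged”.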
See Appendix E.1-2 for extended results. + +Table 4: Summary source of gain and TEA variants on linear model with UKCF (Bold indicates best) + +
| | Neither | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|---|
| PRC(I) | 0.347 ± 0.085 | 0.402 ± 0.026 | 0.431 ± 0.031 | 0.450 ± 0.035 | 0.435 ± 0.031 | **0.454 ± 0.036** |
| PRC(C) | 0.433 ± 0.083 | 0.507 ± 0.040 | 0.543 ± 0.054 | 0.559 ± 0.060 | 0.544 ± 0.053 | **0.560 ± 0.061** |
| ROC(I) | 0.710 ± 0.072 | 0.747 ± 0.022 | 0.764 ± 0.022 | 0.767 ± 0.026 | 0.759 ± 0.025 | **0.768 ± 0.028** |
| ROC(C) | 0.700 ± 0.075 | 0.744 ± 0.038 | 0.766 ± 0.038 | **0.767 ± 0.042** | 0.760 ± 0.042 | **0.767 ± 0.042** |
+ +"No Joint" omits final joint training, and "No Staged" skips (pre)-training stages. PRC and ROC metrics are reported separately for targets representing infections (I) & comorbidities (C). See Tables 11-12 for extended results. + +Table 5: Summary source of gain and TEA variants on nonlinear (RNN) model (Bold indicates best) + +
| | UKCF PRC(I) | UKCF PRC(C) | ADNI MSE(B) | ADNI MSE(C) | MIMIC PRC | MIMIC MSE |
|---|---|---|---|---|---|---|
| Neither | 0.415 ± 0.030 | 0.518 ± 0.052 | 0.096 ± 0.014 | 0.360 ± 0.066 | 0.143 ± 0.019 | 0.152 ± 0.010 |
| No Joint | 0.455 ± 0.039 | 0.574 ± 0.069 | 0.092 ± 0.014 | 0.353 ± 0.070 | 0.183 ± 0.038 | 0.151 ± 0.011 |
| No Staged | 0.424 ± 0.031 | 0.543 ± 0.061 | 0.106 ± 0.022 | 0.363 ± 0.067 | 0.167 ± 0.022 | 0.150 ± 0.012 |
| TEA | **0.483 ± 0.045** | **0.583 ± 0.072** | 0.063 ± 0.010 | 0.330 ± 0.066 | 0.239 ± 0.039 | 0.150 ± 0.012 |
| TEA(L) | **0.483 ± 0.047** | 0.581 ± 0.074 | **0.058 ± 0.012** | 0.330 ± 0.076 | **0.249 ± 0.049** | **0.149 ± 0.012** |
| TEA(LP) | 0.480 ± 0.044 | **0.583 ± 0.072** | 0.064 ± 0.012 | **0.329 ± 0.068** | 0.229 ± 0.039 | 0.151 ± 0.011 |

UKCF has binary targets, ADNI continuous targets, and MIMIC mixed targets.
+ +"No Joint" omits final joint training, and "No Staged" skips (pre)-training stages. For UKCF, only PRC metrics for infections (I) and comorbidities (C) are shown due to space limitation; for ADNI, MSE metrics are reported separately for targets representing biomarkers (B) and cognitive tests (C). See Tables 15-16, 19-20, and 23-24. + +Variations. Having established the utility of target-embedding, we can ask whether variations on the same theme perform similarly. In particular, the embeddings in the empirical studies of Girdhar et al. (2016) and Yeh et al. (2017) are jointly learned via the reconstruction loss $\ell_r$ and latent loss $\ell_z$ that is, the prediction arm continues to regress learned embeddings during the joint training stage (Figure 4(d), in Appendix D). The principle is similar, although (as noted in Section 4) the primary task is therefore learned indirectly; this is in contrast to the vanilla TEA setup, where the primary task is learned directly via the prediction loss $\ell_p$ . Tables 4 and 5 also compare the performance of vanilla TEAs with this indirect variant (TEA(L)), as well as a hybrid variant (TEA(LP)) for which both latent and prediction losses are trained jointly with the reconstruction loss (Figure 4(e). Perhaps as expected, we observe that performance across all three variants are more or less identical, affirming the general benefit of target-representation learning. Again, see Appendix E.1-2 for extended results. + +# 5.2 SENSITIVITIES + +Regularization. Of course, target-representation learning is not a replacement for other regularization strategies; it is an additional tool that can be used in parallel where appropriate. Figure 3(a) shows the performance of TEA and REG with various coefficients $\nu$ on $\ell_2$ -regularization. 
By itself, introducing $\ell_2$-regularization does improve performance up to a certain point, beyond which the additional shrinkage bias incurred begins to be counterproductive; this is not surprising. Interestingly, introducing target-representation learning appears to leverage an orthogonal bias: it consistently improves prediction performance regardless of the level of shrinkage. This is a practical result of the theoretical observation in Remark 3: while prior works obtain stability through explicit $\ell_2$-regularization, the benefit from target-embedding relies on a different bias entirely, which allows us to combine them. While increasing the strength of either form of regularization reduces variability in results (see also below), excessive bias of either alone degrades performance. See Appendix E.3 for full results.

Strength of Prior. Target-embedding attempts to leverage the assumption that there exist compact and predictable representations of targets. Realistically (e.g. due to measurement noise), of course, this will not hold perfectly. In our experiments, we set the ratio of prediction and reconstruction losses to be $1:1$ for TEA (as well as FEA and F/TEA); that is, the "strength-of-prior" coefficient $\lambda$ on $\ell_r$ is 0.5. In order to isolate the effect of $\lambda$ during joint training, we observe the performance of TEAs with joint training only (i.e. removing the confounding effect of staged training). For large values of $\lambda$, we expect the reconstruction task to dominate in priority, which is (under an imperfect prior) not beneficial for the ultimate prediction task—in general, a hidden representation that is most reconstructive is not necessarily also the most predictable. For small values of $\lambda$, the setup begins to resemble direct prediction. Figure 3(b) verifies our intuition. Note that in the extreme case of $\lambda = 1$, predictions are no better than random (see ROC $\approx 0.5$). See Appendix E.3 for full results.
+ +![](images/0a2b3bf9f4f34ef3b0d46bd906b11eb3279c88008e38e5b14c9ace7723200562.jpg) + +![](images/3d7abfdad2fc94f612cd6ea8a3109a738eb383cb06729a6636a5c9f4281ff792.jpg) +(a) Sensitivities on $\nu$ + +![](images/772033a59378995636afe1c8099187b1629d1140e36c2f291fd911216e4af723.jpg) + +![](images/0ca65573299f4d39b23e5c02e960b40709497a9e28b73bfa4087d0ddf8e11cfb.jpg) +(b) Sensitivities on $\lambda$ + +![](images/7eccb8a7709c59a00ef8f6b8274e7417b197eb9bb19bc973ad0d757958547d00.jpg) + +![](images/6e668185c1ac46c7b274f22f70d464f8fbc334c000e30ae2f3a64b84858ce2c8.jpg) +(c) Sensitivities on $N$ +Figure 3: Sensitivities on $\ell_2$ -regularization coefficient $\nu$ , strength-of-prior coefficient $\lambda$ , and training size $N$ for direct prediction (REG) and with target-embedding (TEA) on linear model with UKCF. For sensitivities on $\lambda$ , we perform joint training only, so that we isolate the effect of the joint reconstruction task (i.e. removing the confounding effect of staged training). Standard errors are indicated with shaded regions. For full results, see Tables 25–28 for sensitivities on $\nu$ , Tables 29–30 for sensitivities on $\lambda$ , and Tables 31–34 for sensitivities on $N$ . + +Sample Complexity. Figure 3(c) shows the performance of TEA and REG under various levels of data scarcity. The benefit conferred by TEAs is significant especially when the amount of training data $N$ is limited. Importantly, note that we are operating strictly within the context of supervised learning: unlike in semi-supervised settings, here we are not just restricting access to paired data; we are restricting access to data per se. (Though beyond the scope of this paper, we expect that extending TEAs to semi-supervised learning with additional unpaired data would yield larger gains). Here, without the luxury of learning from unpaired data, we highlight the comparative advantage purely from the addition of target-representation learning. Again, see Appendix E.3 for full results. 
+ +# 5.3 DISCUSSION + +By way of conclusion, we emphasize the importance of our central assumption: that there exist compact and predictable representations of the (high-dimensional) targets. This is critical: target-embedding is not useful where this is not true. Now obviously, learning representations of targets is unnecessary if the output dimension is trivially small (e.g. if the target is a single classification label), or if the problem itself is trivially easy (e.g. if direct prediction is already perfect). Also obvious is the situation where representations cannot possibly be compact (e.g. if all output dimensions are independent of each other), in which case any model with a compressive (bottleneck) representation as an intermediate target may make little sense to begin with. Perhaps less obvious is that we cannot assume that the goals of prediction and reconstruction are always aligned. Just as in learning feature-embeddings (for downstream classification), what is most reconstructive may not necessarily encode what is most discriminative; so too in learning target-embeddings (for upstream prediction), what is most reconstructive may not necessarily encode what is most predictable. In the case of disease trajectories, it is medical knowledge that permits this assumption with some confidence. Appendix E.4 gives an extreme (synthetic) counterexample where this prior is outright false—i.e. prediction and reconstruction are directly at odds. While certainly contrived, it serves as a caveat about assumptions. + +Using the deliberately challenging setting of disease trajectory forecasting with limited information, we have illustrated the nontrivial utility of target-representation learning in a controlled setting with baseline models. While we appreciate that component models in the wild may be more tailored, this setting better allows us to identify and isolate the potential utility of target-autoencoding per se. 
In addition to verifying our intuitions for the linear case, we have extended empirical validation of target-autoencoding to (nonlinear) sequence-to-sequence recurrent architectures; along the way, we explored the sources of gain from joint and staged training, as well as various sensitivities of interest. Where the prior holds, target-embedding autoencoders are potentially applicable to any high-dimensional prediction task beyond static classification and imaging applications, and exploring its utility for other specific domain-architectures may be a practical direction for future research. + +# ACKNOWLEDGMENTS + +This work was supported by Alzheimer's Research UK (ARUK), the US Office of Naval Research (ONR), and the National Science Foundation (NSF): grant numbers ECCS1462245, ECCS1533983, and ECCS1407712. We thank the UK Cystic Fibrosis Trust, the Alzheimer's Disease Neuroimaging Initiative, and the MIT Lab for Computational Physiology respectively for making the UKCF, ADNI, and MIMIC datasets available for research. We thank the reviewers for their helpful comments. + +# REFERENCES + +Ahmed M. Alaa and Mihaela van der Schaar. Attentive state-space modeling of disease progression. In 2019 Conference on Neural Information Processing Systems, 2019. +Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. International Conference on Machine Learning, 2018. +Krishnakumar Balasubramanian and Guy Lebanon. The landmark selection method for multiple output prediction. Proceedings of the 29th International Conference on Machine Learning, 2012. +Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pp. 6240-6249, 2017. +Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. Algorithmic stability for adaptive data analysis. 
In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, pp. 1046-1059. ACM, 2016. +Jonathan Baxter. A model of inductive bias learning. Journal of artificial intelligence research, 12: 149-198, 2000. +Inci M Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K Jain, and Jiayu Zhou. Patient subtyping via time-aware LSTM networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 65-74. ACM, 2017. +Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. In Learning Theory and Kernel Machines, pp. 567-580. Springer, 2003. +Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE trans. on pattern analysis and machine intelligence, 35(8):1798-1828, 2013. +Nikhil Bhagwat, Joseph D Viviano, Aristotle N Voineskos, M Mallar Chakravarty, Alzheimer's Disease Neuroimaging Initiative, et al. Modeling and prediction of clinical symptom trajectories in alzheimer's disease using longitudinal data. PLoS computational biology, 14(9):e1006376, 2018. +Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. Sparse local embeddings for extreme multi-label classification. In Advances in Neural Information Processing Systems, pp. 730-738, 2015. +Kush Bhatia, Kunal Dahiya, Himanshu Jain, Yashoteja Prabhu, and Manik Varma. The extreme classification repository, 2019. URL http://manikvarma.org/downloads/XC/XMLRepository.html. +Wei Bi and James Kwok. Efficient multi-label classification with many labels. In International Conference on Machine Learning, pp. 405-413, 2013. +Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, et al. Domain separation networks. In Advances in neural information processing systems, pp. 343-351, 2016. +Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of machine learning research, 2(Mar):499-526, 2002. 
+Matthew R Boutell, Jiebo Luo, Xipeng Shen, and Christopher M Brown. Learning multi-label scene classification. Pattern recognition, 37(9):1757-1771, 2004. + +Rich Caruana. Multitask learning. Machine learning, 28(1):41-75, 1997. +Rui M. Castro and Robert D. Nowak. General Bounds for Bounded Losses, in Statistical Learning Theory. 2018. +Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pp. 2610-2620, 2018. +Yao-Nan Chen and Hsuan-Tien Lin. Feature-aware label space dimension reduction for multi-label classification. In Advances in Neural Information Processing Systems, pp. 1529-1537, 2012. +Moustapha M Cisse, Nicolas Usunier, Thierry Artieres, and Patrick Gallinari. Robust bloom filters for large multilabel classification tasks. In Advances in Neural Information Processing Systems, pp. 1851-1859, 2013. +Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in neural information processing systems, pp. 3079-3087, 2015. +Adrian V Dalca, John Guttag, and Mert R Sabuncu. Anatomical priors in convolutional networks for unsupervised biomedical segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9290-9299, 2018. +Luc Devroye and Terry Wagner. Distribution-free inequalities for the deleted and holdout error estimates. IEEE Transactions on Information Theory, 25(2):202-207, 1979. +Michael C Donohue, Hélène Jacqmin-Gadda, Mélanie Le Goff, et al. Estimating long-term multivariate progression from short-term data. *Alzheimer's & Dementia*, 10(5):S400-S410, 2014. +Baruch Epstein and Ron Meir. Generalization bounds for unsupervised and semi-supervised learning with autoencoders. arXiv preprint arXiv:1902.01449, 2019. +Vitaly Feldman and Jan Vondrak. Generalization bounds for uniformly stable algorithms. In Advances in Neural Information Processing Systems, pp. 9747-9757, 2018. 
+Johannes Furnkranz, Eyke Hüllermeier, Eneldo Loza Mencia, and Klaus Brinker. Multilabel classification via calibrated label ranking. Machine learning, 73(2):133-153, 2008. +Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, David Balduzzi, and Wen Li. Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision, pp. 597-613. Springer, 2016. +Rohit Girdhar, David F Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In European Conference on Computer Vision, pp. 484-499. Springer, 2016. +Lee-Ad Gottlieb, Aryeh Kontorovich, and Robert Krauthgamer. Adaptive metric dimensionality reduction. Theoretical Computer Science, 620:105-118, 2016. +Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504-507, 2006. +Daniel J Hsu, Sham M Kakade, John Langford, and Tong Zhang. Multi-label prediction via compressed sensing. In Advances in neural information processing systems, pp. 772-780, 2009. +Ashish Kapoor, Raajay Viswanathan, and Prateek Jain. Multilabel classification using bayesian compressed sensing. In Advances in Neural Information Processing Systems, pp. 2645-2653, 2012. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pp. 3581-3589, 2014. + +Vladimir Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902-1914, 2001. +Lei Le, Andrew Patterson, and Martha White. Supervised autoencoders: Improving generalization performance with unsupervised regularizers. In Advances in Neural Information Processing Systems, pp. 107-117, 2018. +Xin Li and Yuhong Guo. 
Multi-label classification with feature-aware non-linear label space transformation. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015. +Zijia Lin, Guiguang Ding, Mingqing Hu, and Jianmin Wang. Multi-label classification via feature-aware implicit label space encoding. In International conference on machine learning, 2014. +Tongliang Liu, Dacheng Tao, Mingli Song, and Stephen J Maybank. Algorithm-dependent generalization bounds for multi-task learning. IEEE transactions on pattern analysis and machine intelligence, 39(2):227-241, 2016. +Gábor Lugosi and Miroslaw Pawlak. On the posterior-probability estimate of the error rate of nonparametric classification rules. IEEE Trans. on Information Theory, 40(2):475-481, 1994. +Xinrui Lyu, Matthias Hüser, Stephanie L Hyland, George Zerveas, and Gunnar Ratsch. Improving clinical predictions through unsupervised time series representation learning. *NeurIPS* 2018 Machine Learning for Health (ML4H) workshop, 2018. +Andreas Maurer. Bounds for linear multi-task learning. Journal of Machine Learning Research, 7 (Jan):117-139, 2006. +Andreas Maurer, Massimiliano Pontil, and Bernardino Romero-Paredes. The benefit of multitask representation learning. The Journal of Machine Learning Research, 17(1):2853-2884, 2016. +Colin McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148-188, 1989. +Mehryar Mohri, Afshin Rostamizadeh, and Dmitry Storches. Generalization bounds for supervised dimensionality reduction. In Feature Extraction: Modern Questions and Challenges, pp. 226-241, 2015. +Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT press, 2018. +Mohammadreza Mostajabi, Michael Maire, and Gregory Shakhnarovich. Regularizing deep networks by modeling and predicting label structure. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5629-5638, 2018. 
Sayan Mukherjee, Partha Niyogi, Tomaso Poggio, and Ryan Rifkin. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161-193, 2006.

Siddharth Narayanaswamy, T Brooks Paige, Jan-Willem Van de Meent, Alban Desmaison, Noah Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. Learning disentangled representations with semi-supervised deep generative models. In Advances in Neural Information Processing Systems, pp. 5925-5935, 2017.

Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. In International Conference on Learning Representations, 2018.

Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart A Cook, Antonio De Marvao, Timothy Dawes, Declan P O'Regan, et al. Anatomically constrained neural networks (ACNNs): application to cardiac image enhancement and segmentation. IEEE Transactions on Medical Imaging, 37(2):384-395, 2017.

Tharick A Pascoal, Sulantha Mathotaarachchi, Monica Shin, Andrea L Benedet, Sara Mohades, Seqian Wang, Tom Beaudry, Min Su Kang, Jean-Paul Soucy, Aurelie Labbe, et al. Synergistic interaction between amyloid and tau predicts the progression to dementia. Alzheimer's & Dementia, 13(6):644-653, 2017.

Trang Pham, Truyen Tran, Dinh Phung, and Svetha Venkatesh. Predicting healthcare trajectories from medical records: A deep learning approach. Journal of Biomedical Informatics, 69:218-229, 2017.

David Pollard. Convergence of Stochastic Processes. Springer Science & Business Media, 1984.

Piyush Rai, Changwei Hu, Ricardo Henao, and Lawrence Carin. Large-scale Bayesian multi-label learning via topic-based label embeddings. In Advances in Neural Information Processing Systems, pp. 3222-3230, 2015.

Marc'Aurelio Ranzato and Martin Szummer. Semi-supervised learning of compact document representations with deep networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 792-799. ACM, 2008.

Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra, Yann L Cun, et al. Efficient learning of sparse representations with an energy-based model. In Advances in Neural Information Processing Systems, pp. 1137-1144, 2007.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546-3554, 2015.

Philippe Rigollet. Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research, 8(Jul):1369-1392, 2007.

William H Rogers and Terry J Wagner. A finite sample distribution-free performance bound for local discrimination rules. The Annals of Statistics, pp. 506-514, 1978.

Lorenzo Rosasco and Tomaso Poggio. Stability of Tikhonov regularization. MIT 9.520 Statistical Machine Learning, 2009.

M. Seeger. Learning with labeled and unlabeled data. Technical report, Institute for ANC, Edinburgh, 2000.

Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. Journal of Machine Learning Research, 11(Oct):2635-2670, 2010.

Aarti Singh, Robert Nowak, and Jerry Zhu. Unlabeled data: Now it helps, now it doesn't. In Advances in Neural Information Processing Systems, pp. 1513-1520, 2009.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using LSTMs. In International Conference on Machine Learning, pp. 843-852, 2015.

Rhonda D Szczesniak, Dan Li, Weiji Su, Cole Brokamp, John Pestian, Michael Seid, and John P Clancy. Phenotypes of rapid cystic fibrosis lung disease progression during adolescence and young adulthood. American Journal of Respiratory and Critical Care Medicine, 196(4):471-478, 2017.

Farbound Tai and Hsuan-Tien Lin. Multilabel classification with principal label space transformation. In International Conference on Machine Learning Workshop on Learning from Multi-Label Data, 2010.

Michael Tschannen, Olivier Bachem, and Mario Lucic. Recent advances in autoencoder-based representation learning. NeurIPS 2018 Bayesian Deep Learning Workshop, 2018.

Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. Mining multi-label data. In Data Mining and Knowledge Discovery Handbook, pp. 667-685. Springer, 2009.

Aaron van den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pp. 6306-6315, 2017.

Vladimir Vapnik and Alexey Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 17:264-280, 1971.

Kaixiang Wang, Ming Yang, Wanqi Yang, and YiLong Yin. Deep correlation structure preserved label space embedding for multi-label classification. In Asian Conference on Machine Learning, pp. 1-16, 2018.

Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.

Baoyuan Wu, Zhilei Liu, Shangfei Wang, Bao-Gang Hu, and Qiang Ji. Multi-label learning with missing labels. In 22nd International Conference on Pattern Recognition, pp. 1964-1968. IEEE, 2014.

Yan Yan, Glenn Fung, Jennifer G Dy, and Romer Rosales. Medical coding classification by leveraging inter-code relationships. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 193-202. ACM, 2010.

Chih-Kuan Yeh, Wei-Chieh Wu, Wei-Jen Ko, and Yu-Chiang Frank Wang. Learning deep latent space for multi-label classification. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit Dhillon. Large-scale multi-label learning with missing labels. In International Conference on Machine Learning, pp. 593-601, 2014.

Yi Zhang and Jeff Schneider. Multi-label output codes using canonical correlation analysis. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 873-882, 2011.

Yi Zhang and Jeff Schneider. Maximum margin output coding. In Proceedings of the 29th International Conference on Machine Learning, 2012.

Shengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from deep generative models. In Proceedings of the 34th International Conference on Machine Learning, pp. 4091-4099. JMLR.org, 2017.

Wen-Ji Zhou, Yang Yu, and Min-Ling Zhang. Binary linear compression for multi-label classification. In IJCAI, pp. 3546-3552, 2017.

Fuzhen Zhuang, Xiaohu Cheng, Ping Luo, Sinno Jialin Pan, and Qing He. Supervised representation learning: Transfer learning with deep autoencoders. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

# A PROOF OF THEOREM 1

Definition 5 (Bregman Distance) Let $\ell$ be some differentiable, strictly convex loss. For any $h,h^{\prime}\in \mathcal{H}$, the Bregman distance associated with $\ell$ is given by $D_{\ell}(h^{\prime}\| h) = \ell (h^{\prime}) - \ell (h) - \langle h^{\prime} - h,\nabla \ell (h)\rangle$. Non-negativity and additivity are easy to see: $D_{\ell}(h^{\prime}\| h)\geq 0$ for any $h,h^{\prime}\in \mathcal{H}$; and if $\ell = \ell_1 + \ell_2$ where both $\ell_{1}$ and $\ell_{2}$ are convex functions, then $D_{\ell}(h^{\prime}\| h) = D_{\ell_1}(h^{\prime}\| h) + D_{\ell_2}(h^{\prime}\| h)$.

Theorem 1 (Uniform Stability) Let $\ell_p$ and $\ell_r$ be $\sigma_p$-admissible and $\sigma_r$-admissible loss functions, and let $\ell_r$ be $c$-strongly convex.
Then under Assumptions 1 and 2, the following inequality holds,

$$
|\ell_p(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_u\mathbf{x},\mathbf{y}) - \ell_p(\boldsymbol{\Theta}_*\mathbf{W}_u\mathbf{x},\mathbf{y})| \leq \frac{2(\sigma_p^2 r_\alpha^2 + \sigma_p\sigma_r r_\alpha r_\beta)\,aM}{cN}
$$

Proof. The overall strategy consists of three steps in sequence. First, we bound the difference in prediction loss under $\Theta_{*}$ and $\Theta_{*}^{\prime}$ using the set of representative vectors $B$. Second, we bound the resulting expression in terms of the Bregman divergences of the complete loss functions $L$ and $L^{\prime}$ under $\Theta_{*}$ and $\Theta_{*}^{\prime}$. Third, we express the divergence back in terms of the original expression itself (consisting of representative vectors), which allows us to solve for a bound on that expression. Finally, combining the results from the three steps completes the proof.
We begin with the left-hand term,

$$
\begin{aligned}
|\ell_p(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_u\mathbf{x},\mathbf{y}) - \ell_p(\boldsymbol{\Theta}_*\mathbf{W}_u\mathbf{x},\mathbf{y})|
&\leq \sigma_p \left\|(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*)\mathbf{W}_u\mathbf{x}\right\|_2
= \sigma_p \left\|\sum_{m=1}^{M}\alpha_m\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{b}_m\right\|_2 \\
&\leq \sigma_p r_\alpha \sqrt{\sum_{m=1}^{M}\left\|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{b}_m\right\|_2^2}
\end{aligned} \tag{3}
$$

where the first inequality follows from the fact that $\ell_p$ is $\sigma_p$-admissible, the equality follows from Assumption 1 for some coefficients $\alpha_m \in \mathbb{R}$, and the second inequality follows from the Cauchy-Schwarz inequality.
As our second step, the goal is to upper-bound the term under the square-root,

$$
\begin{aligned}
&\sum_{m=1}^{M}\left\|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{b}_m\right\|_2^2 \\
&\quad\leq \frac{1}{c}\sum_{m=1}^{M}\left\langle(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*)\mathbf{W}_e\mathbf{b}_m,\ \nabla\ell_r(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_e\mathbf{b}_m,\mathbf{b}_m) - \nabla\ell_r(\boldsymbol{\Theta}_*\mathbf{W}_e\mathbf{b}_m,\mathbf{b}_m)\right\rangle \\
&\quad= \frac{1}{c}\sum_{m=1}^{M}\left\langle\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*,\ \nabla\ell_r(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_e\mathbf{b}_m,\mathbf{b}_m)\mathbf{b}_m^{\top}\mathbf{W}_e^{\top} - \nabla\ell_r(\boldsymbol{\Theta}_*\mathbf{W}_e\mathbf{b}_m,\mathbf{b}_m)\mathbf{b}_m^{\top}\mathbf{W}_e^{\top}\right\rangle \\
&\quad= \frac{M}{c}\left[D_{L_r^B}\left(\boldsymbol{\Theta}_*^{\prime}\|\boldsymbol{\Theta}_*\right) + D_{L_r^B}\left(\boldsymbol{\Theta}_*\|\boldsymbol{\Theta}_*^{\prime}\right)\right]
\end{aligned} \tag{4}
$$

where the first inequality follows from the fact that $\ell_r$ is $c$-strongly convex, and the final equality follows from the definition of the Bregman divergence (the standalone loss terms cancel). We want an expression in terms of the loss functions $L$ and $L^{\prime}$, which will subsequently allow us to obtain a bound expressed back in terms of the set of representative vectors.
Focusing on the term in the brackets,

$$
\begin{aligned}
&D_{L_r^B}(\boldsymbol{\Theta}_*^{\prime}\|\boldsymbol{\Theta}_*) + D_{L_r^B}(\boldsymbol{\Theta}_*\|\boldsymbol{\Theta}_*^{\prime}) \\
&\quad= -\langle\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*, \nabla L_r^B(\boldsymbol{\Theta}_*)\rangle - \langle\boldsymbol{\Theta}_* - \boldsymbol{\Theta}_*^{\prime}, \nabla L_r^B(\boldsymbol{\Theta}_*^{\prime})\rangle \\
&\quad= \lim_{\kappa\to 0^+}\frac{1}{\kappa}\left[L_r^B(\boldsymbol{\Theta}_*) - L_r^B(\kappa\boldsymbol{\Theta}_*^{\prime} + (1-\kappa)\boldsymbol{\Theta}_*) + L_r^B(\boldsymbol{\Theta}_*^{\prime}) - L_r^B(\kappa\boldsymbol{\Theta}_* + (1-\kappa)\boldsymbol{\Theta}_*^{\prime})\right] \\
&\quad\leq \lim_{\kappa\to 0^+}\frac{a}{\kappa}\left[L_r^{\prime}(\boldsymbol{\Theta}_*) - L_r^{\prime}(\kappa\boldsymbol{\Theta}_*^{\prime} + (1-\kappa)\boldsymbol{\Theta}_*) + L_r^{\prime}(\boldsymbol{\Theta}_*^{\prime}) - L_r^{\prime}(\kappa\boldsymbol{\Theta}_* + (1-\kappa)\boldsymbol{\Theta}_*^{\prime})\right] \\
&\quad= a\left[-\langle\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*, \nabla L_r^{\prime}(\boldsymbol{\Theta}_*)\rangle - \langle\boldsymbol{\Theta}_* - \boldsymbol{\Theta}_*^{\prime}, \nabla L_r^{\prime}(\boldsymbol{\Theta}_*^{\prime})\rangle\right] \\
&\quad= a\left[D_{L_r^{\prime}}(\boldsymbol{\Theta}_*^{\prime}\|\boldsymbol{\Theta}_*) + D_{L_r^{\prime}}(\boldsymbol{\Theta}_*\|\boldsymbol{\Theta}_*^{\prime})\right] \\
&\quad\leq a\left[D_{L}\left(\boldsymbol{\Theta}_*^{\prime}\|\boldsymbol{\Theta}_*\right) + D_{L^{\prime}}\left(\boldsymbol{\Theta}_*\|\boldsymbol{\Theta}_*^{\prime}\right)\right]
\end{aligned} \tag{5}
$$

where the first and last equalities follow from the definition of the Bregman divergence, and the
second equality from the definition of directional derivatives. The first inequality follows from Assumption 2; for the second inequality, note that $L_r'$ consists of a strict subset of the strictly convex losses that $L$ consists of, and similarly for $L'$. Therefore, by additivity and non-negativity of the Bregman distance, we have $D_{L_r'}(\Theta'_* \| \Theta_*) \leq D_L(\Theta'_* \| \Theta_*)$ and $D_{L_r'}(\Theta_* \| \Theta'_*) \leq D_{L'}(\Theta_* \| \Theta'_*)$. Our third and final step is to go back and bound this term again using the set of representative vectors,

$$
\begin{aligned}
&D_{L}\left(\boldsymbol{\Theta}_*^{\prime}\|\boldsymbol{\Theta}_*\right) + D_{L^{\prime}}\left(\boldsymbol{\Theta}_*\|\boldsymbol{\Theta}_*^{\prime}\right) \\
&\quad= L\left(\boldsymbol{\Theta}_*^{\prime}\right) - L\left(\boldsymbol{\Theta}_*\right) + L^{\prime}\left(\boldsymbol{\Theta}_*\right) - L^{\prime}\left(\boldsymbol{\Theta}_*^{\prime}\right) \\
&\quad= \left[L\left(\boldsymbol{\Theta}_*^{\prime}\right) - L^{\prime}\left(\boldsymbol{\Theta}_*^{\prime}\right)\right] - \left[L\left(\boldsymbol{\Theta}_*\right) - L^{\prime}\left(\boldsymbol{\Theta}_*\right)\right] \\
&\quad= \frac{1}{N}\left[\ell_p\left(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_u\mathbf{x}_i,\mathbf{y}_i\right) + \ell_r\left(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_e\mathbf{y}_i,\mathbf{y}_i\right)\right] - \frac{1}{N}\left[\ell_p\left(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_u\mathbf{x}_i^{\prime},\mathbf{y}_i^{\prime}\right) + \ell_r\left(\boldsymbol{\Theta}_*^{\prime}\mathbf{W}_e\mathbf{y}_i^{\prime},\mathbf{y}_i^{\prime}\right)\right] \\
&\qquad- \frac{1}{N}\left[\ell_p\left(\boldsymbol{\Theta}_*\mathbf{W}_u\mathbf{x}_i,\mathbf{y}_i\right) + \ell_r\left(\boldsymbol{\Theta}_*\mathbf{W}_e\mathbf{y}_i,\mathbf{y}_i\right)\right] + \frac{1}{N}\left[\ell_p\left(\boldsymbol{\Theta}_*\mathbf{W}_u\mathbf{x}_i^{\prime},\mathbf{y}_i^{\prime}\right) + \ell_r\left(\boldsymbol{\Theta}_*\mathbf{W}_e\mathbf{y}_i^{\prime},\mathbf{y}_i^{\prime}\right)\right] \\
&\quad\leq \frac{\sigma_p}{N}\left(\|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_u\mathbf{x}_i\|_2 + \|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_u\mathbf{x}_i^{\prime}\|_2\right) + \frac{\sigma_r}{N}\left(\|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{y}_i\|_2 + \|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{y}_i^{\prime}\|_2\right) \\
&\quad\leq \frac{2\left(\sigma_p r_\alpha + \sigma_r r_\beta\right)}{N}\sqrt{\sum_{m=1}^{M}\left\|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{b}_m\right\|_2^2}
\end{aligned} \tag{6}
$$

where for the first equality note that, by construction, the gradients of the losses $L$ and $L^{\prime}$ are zero at the respective optimal models $\Theta_{*}$ and $\Theta_{*}^{\prime}$. The first inequality follows from the fact that $\ell_p$ and $\ell_r$ are $\sigma_{p}$-admissible and $\sigma_r$-admissible respectively, and the second inequality follows from Assumption 1 and the Cauchy-Schwarz inequality. Now, combining Equations 4, 5, and 6 allows us to write

$$
\sqrt{\sum_{m=1}^{M}\left\|\left(\boldsymbol{\Theta}_*^{\prime} - \boldsymbol{\Theta}_*\right)\mathbf{W}_e\mathbf{b}_m\right\|_2^2} \leq \frac{2\left(\sigma_p r_\alpha + \sigma_r r_\beta\right)aM}{cN} \tag{7}
$$

which, by substitution into Equation 3, completes the proof.
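The two Bregman-distance properties invoked in the proof (non-negativity and additivity, from Definition 5) can be sanity-checked numerically. The sketch below uses squared losses purely for illustration; the specific losses and vectors are hypothetical, not part of the analysis:

```python
import numpy as np

def bregman(loss, grad, h_prime, h):
    """Bregman distance D_loss(h' || h) = loss(h') - loss(h) - <h' - h, grad(h)>."""
    return loss(h_prime) - loss(h) - np.dot(h_prime - h, grad(h))

# Two strictly convex losses on R^3 (hypothetical, for illustration only).
c = np.array([1.0, -2.0, 0.5])
l1 = lambda h: float(h @ h)               # l1(h) = ||h||^2
g1 = lambda h: 2.0 * h
l2 = lambda h: float((h - c) @ (h - c))   # l2(h) = ||h - c||^2
g2 = lambda h: 2.0 * (h - c)
lsum = lambda h: l1(h) + l2(h)
gsum = lambda h: g1(h) + g2(h)

rng = np.random.default_rng(0)
h, hp = rng.normal(size=3), rng.normal(size=3)

d1 = bregman(l1, g1, hp, h)   # for the squared loss this equals ||h' - h||^2
d2 = bregman(l2, g2, hp, h)
ds = bregman(lsum, gsum, hp, h)

assert d1 >= 0 and d2 >= 0       # non-negativity
assert np.isclose(ds, d1 + d2)   # additivity
```

For the squared loss, expanding the definition gives $D_\ell(h'\|h) = \|h'-h\|_2^2$ exactly, which makes both properties easy to see in closed form as well.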
Remark 4 Formulating the autoencoding component as an auxiliary task allows us to unambiguously interpret its benefit as a regularizer. Specifically, the complete loss can be summarized and rewritten as $L(\Theta) = L_{p}(\Theta) + R_{1}(\Theta) + R_{2}(\Theta)$; that is, the TEA objective is a combination of the primary prediction loss $L_{p}(\Theta) = \frac{1}{N}\sum_{n=1}^{N}\ell_{p}(\Theta\mathbf{W}_{u}\mathbf{x}_{n},\mathbf{y}_{n})$ plus additional regularization, where $R_{1}(\Theta) = \frac{1}{N}\sum_{m=1}^{M}\ell_{r}(\Theta\mathbf{W}_{e}\mathbf{b}_{m},\mathbf{b}_{m}) = \frac{M}{N}L_{r}^{B}(\Theta)$ and $R_{2}(\Theta) = \frac{1}{N}\sum_{n=1}^{N}\ell_{r}(\Theta\mathbf{W}_{e}\mathbf{y}_{n},\mathbf{y}_{n}) - \frac{M}{N}L_{r}^{B}(\Theta)$. In particular, the proof of Theorem 1 relies on $R_{1}(\Theta)$ to upper-bound instability (Appendix A). This precisely identifies the regularizer in question, while Theorem 1 quantifies its generalization benefit.

Remark 5 Technicality: Moving from uniform stability (Theorem 1) to generalization bounds (Corollary 1) requires that the loss function not take on arbitrarily large values (see e.g. Bousquet & Elisseeff (2002)). In practice, the label space itself is often bounded (see e.g. Assumption 1 in Le et al. (2018)); then the problem effectively reduces to that with a bounded loss function. For example, consider a regression setting using the quadratic loss function, where the target data lie within $[-U, U]$; then the loss function is bounded to be within $[0, 4U^2]$. See Castro & Nowak (2018).

Remark 6 Technicality: Earlier we assumed $\varepsilon = 0$ for ease of exposition; carrying around the extra $O(\varepsilon)$ term (from Equations 3 and 6) is not particularly illuminating. Generalizing to $\varepsilon \neq 0$ can be done in a similar manner to Le et al.
(2018), with the additional assumptions of bounded spaces and that $\varepsilon$ decreases as $1 / N$ (which they note is reasonable: the more samples in the data, the more likely the cross-representativity assumption is to hold with low error). Again, note that $\varepsilon = 0$ holds as long as the number of independent latent vectors is at least $|\mathcal{Z}|$. Similarly, Liu et al. (2016) consider $\varepsilon = 0$, noting in any case that they can increase $N$ to obtain a small $\| \eta \| _2$; see their analysis for details.

# B EXPANDED RELATED WORK

In this paper, we motivate and analyze a general autoencoder-based target-representation learning technique in the supervised setting, quantifying the generalization benefit via an argument from uniform stability, as well as verifying its practical utility. As such, our work lies at the intersection of three threads of research: (1) supervised representation learning using autoencoders, (2) label space reduction for multi-label classification, and (3) algorithmic stability-based learning guarantees.

# B.1 Supervised Representation Learning using Autoencoders

Table 6: Autoencoder-Based Supervised Representation Learning
| Work | Contribution | Setting | Type | Embedding |
| --- | --- | --- | --- | --- |
| Weston et al. (2012) | Jointly optimized classification and embedding | Semi-supervised learning | Deterministic | Features |
| Kingma et al. (2014) | Variational inference for generative modeling | Semi-supervised learning | Probabilistic | Features |
| Narayanaswamy et al. (2017) | Disentangled latent representations | Semi-supervised learning | Probabilistic | Features |
| Zhuang et al. (2015) | Input- and output-encoding for transfer learning | Semi-supervised learning | Deterministic | Features |
| Bousmalis et al. (2016) | Paired autoencoders for transfer learning | Semi-supervised learning | Deterministic | Features |
| Ghifary et al. (2016) | Jointly optimized feature-embedding | Semi-supervised learning | Deterministic | Features |
| Le et al. (2018) | Jointly optimized feature-embedding | Supervised learning | Deterministic | Features |
| Dalca et al. (2018) | Generative model using learned target priors | Unsupervised, unpaired | Probabilistic | Targets |
| Girdhar et al. (2016) | Jointly optimized target-embedding (indirect) | Supervised learning | Deterministic | Targets |
| (Ours) | Jointly optimized target-embedding (direct) | Supervised learning | Deterministic | Targets |
Autoencoder-based representation learning (Hinton & Salakhutdinov, 2006) has long played an important role in unsupervised and semi-supervised settings. Various inductive biases have been proposed to promote representations that are sparse (Ranzato et al., 2007), discrete (van den Oord et al., 2017), factorized (Chen et al., 2018), or hierarchical (Zhao et al., 2017), among others. For a more thorough overview of these methods, we refer the reader to Bengio et al. (2013) and Tschannen et al. (2018).

Better representations are often sought for the benefit of downstream tasks. Semi-supervised autoencoders—trained on partially-labeled data—can be jointly optimized to obtain compact representations that improve generalization on supervised tasks (Ranzato & Szummer, 2008; Weston et al., 2012). This naturally extends to representations that are generative (Kingma et al., 2014), disentangled (Narayanaswamy et al., 2017), or hierarchical (Rasmus et al., 2015). In addition, semi-supervised autoencoders enable transfer learning across different domains via jointly-trained reconstruction-classification networks (Ghifary et al., 2016), private-shared partitioned representations (Bousmalis et al., 2016), or by augmenting models with label-encoding layers (Zhuang et al., 2015).

Although less studied, more closely related to our work is the use of autoencoders in a purely supervised setting: rather than focusing on how specific architectural novelties may better structure unlabeled data, Le et al. (2018) instead study the generalization benefit of the simple addition of reconstruction to the supervised classification task—a special case of what we describe as FEAs. Now, all aforementioned studies operate on the basis of autoencoding features (for an explicit or implicit downstream prediction task).
In this paper, we instead focus on autoencoder-based target-representation learning (using TEAs) in the supervised setting, and—importantly—analyze the theoretical and empirical benefits of the approach. Unlike in simple classification (for which FEAs make sense), we are motivated by problems with high-dimensional output spaces, but where we operate under the assumption of a more compact and predictable set of underlying factors.

We take inspiration from the empirical investigation of Girdhar et al. (2016), where latent representations of 3D objects (targets) are jointly trained to be predictable from 2D images (features);

Table 7: Label Space Dimension Reduction via Label Embedding
| Work | Contribution | Learning | Problem |
| --- | --- | --- | --- |
| Hsu et al. (2009) | Coding with compressed sensing (random projections) | Separate | Multi-label classification |
| Tai & Lin (2010) | Coding with principal label space transformation (PC-based projections) | Separate | Multi-label classification |
| Zhang & Schneider (2011) | Coding with maximum margin (between prediction distances) | Separate | Multi-label classification |
| Balasubramanian & Lebanon (2012) | Landmark selection of labels via group-sparse learning | Separate | Multi-label classification |
| Bi & Kwok (2013) | Landmark selection of labels via randomized sampling | Separate | Multi-label classification |
| Chen & Lin (2012) | Feature-aware principal label space transformation for embedding | Joint | Multi-label classification |
| Yu et al. (2014) | Generic empirical risk minimization formulation and bounds | Joint | Multi-label classification |
| Yeh et al. (2017) | Autoencoder embedding with canonical correlation analysis | Joint | Multi-label classification |
| Mostajabi et al. (2018) | Autoencoder component as (implicit) regularization for learning predictor | Separate | General; semantic segmentation |
| (Ours) | Autoencoder component as (explicit) regularization for learning predictor | Joint | General; sequence forecasting |
Oktay et al. (2017) deploy a similar approach for medical image segmentation. In both cases, however, the supervised task is truncated: the predictors are trained to regress the unsupervised embeddings (instead of ground-truth targets), and gradients only backpropagate from the latent space (instead of the target space). This means that their common decoder function is only shared indirectly (and predictions made indirectly), versus the symmetric and simultaneously optimized forward model proposed for TEAs—an important distinction that our analysis relies on to obtain uniform stability. In Mostajabi et al. (2018), a two-stage procedure is used for semantic segmentation—loosely comparable to the first two stages in TEAs; in contrast to our emphasis on joint training, they study the benefit of a frozen embedding branch in parallel with direct prediction. More broadly related to target-embedding, Dalca et al. (2018) build anatomical priors for biomedical segmentation in unsupervised settings.

# B.2 Label Space Reduction for Multi-Label Classification

Label space dimension reduction comprises techniques that focus specifically on multi-label classification. Early approaches to multi-label classification employ simplistic transformations such as label power-sets (Boutell et al., 2004), binary relevance (Tsoumakas et al., 2009), and label rankings (Furnkranz et al., 2008); these are computationally inefficient and do not capture interdependencies between labels. In contrast, label-embedding methods first derive a latent label space with reduced dimensionality, and subsequently associate inputs with that latent space instead. Encodings have been obtained via random projections (Hsu et al., 2009), principal components-based projections (Tai & Lin, 2010), canonical correlation analysis (Zhang & Schneider, 2011), as well as maximum-margin coding (Zhang & Schneider, 2012).
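To make the flavor of these coding approaches concrete, here is a minimal sketch of a principal-components-style label embedding in the spirit of Tai & Lin (2010). The toy data, dimensions, and thresholding rule are all hypothetical simplifications; the published methods add refinements such as feature-aware projections:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-label data: N examples, L correlated binary labels driven by k factors.
N, L, k = 200, 12, 4
Y = (rng.normal(size=(N, k)) @ rng.normal(size=(k, L)) > 0).astype(float)

# Project the centered label matrix onto its top-k principal directions,
# then decode by back-projection and rounding.
mu = Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
Vk = Vt[:k]                      # (k x L) encoding matrix
Z = (Y - mu) @ Vk.T              # (N x k) codes: the reduced label space
Y_hat = ((Z @ Vk) + mu > 0.5).astype(float)

recon_acc = float((Y_hat == Y).mean())
print(f"label reconstruction accuracy with k={k}: {recon_acc:.3f}")
```

A downstream model would then regress the codes $\mathbf{z}$ from the features and decode predictions the same way; the "Joint" methods in Table 7 instead fit the projection and the regressor together.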
A parallel thread of research has focused on selecting representative and reconstructive subsets of labels through group-sparse learning (Balasubramanian & Lebanon, 2012) and randomized sampling (Bi & Kwok, 2013). Various extensions of label-embedding techniques abound, such as using bloom filters (Cisse et al., 2013), nearest-neighbors (Bhatia et al., 2015), handling missing data (Wu et al., 2014), and binary compression (Zhou et al., 2017).

Closer to our theme of joint learning by utilizing both features and targets, Chen & Lin (2012) first proposed simultaneously minimizing the encoding error (from labels) and prediction error (from features) through an SVD formulation. Towards more flexible learning, Lin et al. (2014) did away with explicitly specified encoding functions, proposing to learn code matrices directly—making no assumptions whatsoever. Unifying several prior methods, Yu et al. (2014) cast label-embedding within the generic empirical risk minimization framework—as learning a linear model with a low-rank constraint; this perspective captures the generic intuition of a restricted number of latent factors, and admits generalization bounds based on norm-based regularization. Recently, Yeh et al. (2017) generalized the label-embedding approach to autoencoders; this formulation flexibly allows the

Table 8: Generalizability, Multiple Tasks, and Algorithmic Stability
| Work | Contribution | Setting | Focus | Learning |
| --- | --- | --- | --- | --- |
| Bousquet & Elisseeff (2002) | Uniform stability for generalization | Supervised, general | Single task | - |
| Feldman & Vondrak (2018) | Improve on Bousquet & Elisseeff's bound | Supervised, general | Single task | - |
| Baxter (2000) | VC-dimension for generalization | Multi-task learning | All tasks | Jointly |
| Maurer (2006) | Rademacher complexity for generalization | Multi-task learning | All tasks | Jointly |
| Liu et al. (2016) | Uniform stability for generalization | Multi-task learning | All tasks | Jointly |
| Mohri et al. (2015) | Rademacher complexity for generalization | Feature-embedding | Primary task | Separately |
| Epstein & Meir (2019) | Non-contractiveness and semi-supervision | Feature-embedding | Primary task | Separately |
| Le et al. (2018) | Uniform stability for generalization | Feature-embedding | Primary task | Jointly |
| (Ours) | Uniform stability for generalization | Target-embedding | Primary task | Jointly |
addition of specific losses to exploit correlations—a tactic also used in Wang et al. (2018) with multi-dimensional scaling. Furthermore, nonlinearities can be handled by deep learning in the component functions, unlike earlier approaches limited to kernel methods (Lin et al., 2014; Li & Guo, 2015).

Our work is related to this general autoencoder approach to label-embedding, although there are significant differences in focus. In particular, we operate at a higher level of abstraction. Label-embedding techniques worry about label reduction, and about specific loss functions that aim to preserve dependencies within and among spaces; their problem is one of multi-label classification, and their baseline is binary relevance. In contrast, we worry about autoencoding at all—that is, we focus on the regularizing effect of the reconstruction loss on learning the prediction model; our baseline is direct prediction, and the output can be of any form (classification or regression). In light of the sizable performance improvement of the autoencoder-based model of Yeh et al. (2017) over comparators using direct prediction, our work can be regarded as a more general analysis of the contribution of the autoencoding component. Moreover, unlike the uniform convergence-based analysis in Yu et al. (2014), our bound does not rely on explicit norm-based regularization—instead, we interpret the embedding task itself as an intrinsic form of regularization to derive our stability-based guarantee.

Finally, also worth mentioning is the field of extreme multi-label classification (Bhatia et al., 2015), for which probabilistic methods such as Rai et al. (2015) and Kapoor et al. (2012) present sophisticated approaches to extremely high-dimensional classification problems—with advantages in performance and use cases. In light of the medical relevance of our experimental setting, we point out the application of Yan et al. (2010) to medical coding. See Bhatia et al.
(2019) for a more detailed overview.

# B.3 Algorithmic Stability and Learning Guarantees

Generalizability is central to machine learning, and its analysis via hypothesis stability was first studied in Rogers & Wagner (1978) and Devroye & Wagner (1979). Unlike arguments based on the complexity of the search space (Vapnik & Chervonenkis, 1971; Pollard, 1984; Koltchinskii, 2001), stability-based approaches account for how the model produced by the algorithm depends on the data. Based on concentration inequalities (McDiarmid, 1989), improved bounds are developed in Lugosi & Pawlak (1994) by estimating posterior error probabilities. The landmark work of Bousquet & Elisseeff (2002) first formalizes the notion of uniform stability sufficient for learnability, obtaining relatively strong bounds for several regularization algorithms, and Feldman & Vondrak (2018) recently use ideas related to differential privacy (Bassily et al., 2016) to further improve these bounds without additional assumptions. For further context, see Mukherjee et al. (2006) and Shalev-Shwartz et al. (2010).

For semi-supervised representation learning, Rigollet (2007) first introduces the notion of cluster excess-risk and convergence, formalizing the clustering criterion for unlabeled features to be useful (Seeger, 2000). Based on the clustering assumption, Singh et al. (2009) develop a finite-sample analysis to quantify the performance improvement from unlabeled features. Focusing on autoencoders, Epstein & Meir (2019) adapt recent margin-, norm-, and compression-based results for deep networks (Bartlett et al., 2017; Neyshabur et al., 2017; Arora et al., 2018), and relate generalization of feature reconstructions to the benefit of additional unlabeled features for the primary classification task.

In the context of supervised problems, Mohri et al. (2015) and Gottlieb et al.
(2016) analyze the generalization properties of dimensionality reduction techniques for features with respect to a downstream task; however, rather than joint training, the primary task is optimized subsequently over the learned representations. Taking a joint, multi-task approach (Caruana, 1997), Baxter (2000) first leverages the inductive bias of a common optimal hypothesis class to obtain a VC-based generalization bound. Maurer (2006) and Maurer et al. (2016) argue from Rademacher complexity to illustrate the benefit of the common operator; however, they only consider the task-averaged benefit, whereas we focus specifically on the primary task. There has been some work on generalization for each task (Ben-David & Schuller, 2003), but limited to binary classification—contrary to our setting.

Arguing from stability, our approach is related to Liu et al. (2016) in showing that the algorithm for learning the shared model in a multi-task setting is uniformly stable. Our analysis also resembles Le et al. (2018) in the more specific setting where the bound for the primary prediction task is obtained with assistance from the auxiliary reconstruction loss; unlike Liu et al. (2016), we are not interested in a generic bound for all tasks. Again, however, the fundamental (and motivating) difference of our work stems from the (inverted) problem setting and resulting framework. In the vast majority of works, the primary task is one of classification (or, more generally, $|\mathcal{X}|\gg |\mathcal{Y}|$), where feature-embeddings make sense to learn. Instead, we attend to the setting in which $\mathcal{Y}$ is high-dimensional (but where the underlying factors are assumed to be compact). In this setting, we argue (theoretically and empirically) that target-embeddings make more sense to learn in an auxiliary reconstruction task.

# C EXPANDED ALGORITHM DETAIL

In the following, Algorithm 1 gives pseudocode for TEA training.
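The three stages of Algorithm 1 can also be traced end-to-end in a toy linear setting. The sketch below is a hypothetical illustration only: it swaps the minibatch gradient updates of Algorithm 1 for closed-form least-squares solves (an SVD for the linear autoencoder), and all data, names, and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: both inputs X and targets Y are driven by a k-dimensional latent.
N, dx, dy, k = 256, 8, 20, 3
Z_true = rng.normal(size=(N, k))
X = Z_true @ rng.normal(size=(k, dx))
Y = Z_true @ rng.normal(size=(k, dy))

# Stage 1: learn the target-embedding (e, theta). For linear components under
# squared loss, the optimal autoencoder is given by a truncated SVD of Y.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
We = Vt[:k].T    # encoder:  z  = y @ We   (dy x k)
Th = Vt[:k]      # decoder:  y~ = z @ Th   (k x dy)

# Stage 2: regress the learned embeddings from the inputs (solve for Wu).
Wu, *_ = np.linalg.lstsq(X, Y @ We, rcond=None)

# Stage 3: joint refinement. The shared decoder is fit against both predicted
# and encoded latents, mirroring the combined prediction + reconstruction losses.
for _ in range(5):
    Zs = np.vstack([X @ Wu, Y @ We])
    Th, *_ = np.linalg.lstsq(Zs, np.vstack([Y, Y]), rcond=None)
    Wu, *_ = np.linalg.lstsq(X, Y @ We, rcond=None)

pred_mse = float(((X @ Wu @ Th - Y) ** 2).mean())
print(f"prediction mse of h = theta o u: {pred_mse:.2e}")
```

At inference time only the composition `X @ Wu @ Th` is used, matching the hypothesis $h = \theta \circ u$ kept after training.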
Figure 4 gives detailed block diagrams of component functions and objectives corresponding to each training stage (and variant).

Algorithm 1 Pseudocode for TEA Training
Input: $\mathcal{D} = \{(\mathbf{x}_n,\mathbf{y}_n)\}_{n = 1}^N$, learning rate $\psi$, minibatch size $N_{s}$
Output: Parameters $\Theta$, $\mathbf{W}_u$, $\mathbf{W}_e$ of components $\theta, u, e$
1: Initialize: $\Theta$, $\mathbf{W}_u$, $\mathbf{W}_e$
2: while not converged do ▷ Stage 1: Learn Target-Embedding
3: Sample $\{(\mathbf{x}_n,\mathbf{y}_n)\}_{n = 1}^{N_s}$ i.i.d. from $\mathcal{D}$
4: for $n\in \{1,\dots,N_s\}$ do
5: $\mathbf{z}_n\gets e(\mathbf{y}_n;\mathbf{W}_e)$ ▷ Encode
6: $\tilde{\mathbf{y}}_n\gets \theta (\mathbf{z}_n;\Theta)$ ▷ Decode
7: end for
8: $L_{r}\gets \frac{1}{N_{s}}\sum_{n = 1}^{N_{s}}\ell_{r}(\tilde{\mathbf{y}}_{n},\mathbf{y}_{n})$
9: $\mathbf{W}_e\gets \mathbf{W}_e - \psi \nabla_{\mathbf{W}_e}L_r$
10: $\Theta \gets \Theta -\psi \nabla_{\Theta}L_r$
11: end while
12: while not converged do ▷ Stage 2: Regress Embeddings
13: Sample $\{(\mathbf{x}_n,\mathbf{y}_n)\}_{n = 1}^{N_s}$ i.i.d. from $\mathcal{D}$
14: for $n\in \{1,\dots,N_s\}$ do
15: $\hat{\mathbf{z}}_n\gets u(\mathbf{x}_n;\mathbf{W}_u)$ ▷ Predict
16: $\mathbf{z}_n\gets e(\mathbf{y}_n;\mathbf{W}_e)$ ▷ Encode
17: end for
18: $L_{z}\gets \frac{1}{N_{s}}\sum_{n = 1}^{N_{s}}\ell_{z}(\hat{\mathbf{z}}_{n},\mathbf{z}_{n})$
19: $\mathbf{W}_u\gets \mathbf{W}_u - \psi \nabla_{\mathbf{W}_u}L_z$
20: end while
21: while not converged do ▷ Stage 3: Joint Training
22: Sample $\{(\mathbf{x}_n,\mathbf{y}_n)\}_{n = 1}^{N_s}$ i.i.d. from $\mathcal{D}$
23: for $n\in \{1,\dots,N_s\}$ do
24: $\hat{\mathbf{z}}_n\gets u(\mathbf{x}_n;\mathbf{W}_u)$ ▷ Predict
25: $\mathbf{z}_n\gets e(\mathbf{y}_n;\mathbf{W}_e)$ ▷ Encode
26: $\hat{\mathbf{y}}_n\gets \theta (\hat{\mathbf{z}}_n;\Theta)$ ▷ Decode
27: $\tilde{\mathbf{y}}_n\gets \theta (\mathbf{z}_n;\Theta)$ ▷ Decode
28: end for
29: $L_{p}\gets \frac{1}{N_{s}}\sum_{n = 1}^{N_{s}}\ell_{p}(\hat{\mathbf{y}}_{n},\mathbf{y}_{n})$
30: $L_{r}\gets \frac{1}{N_{s}}\sum_{n = 1}^{N_{s}}\ell_{r}(\tilde{\mathbf{y}}_{n},\mathbf{y}_{n})$
31: $\mathbf{W}_u\gets \mathbf{W}_u - \psi \nabla_{\mathbf{W}_u}L_p$
32: $\mathbf{W}_e\gets \mathbf{W}_e - \psi \nabla_{\mathbf{W}_e}L_r$
33: $\Theta \gets \Theta -\psi \nabla_{\Theta}[L_p + L_r]$
34: end while
35: return $\Theta, \mathbf{W}_u, \mathbf{W}_e$

![](images/9e97a49292cd33c168b2e7fed132458a70b4c1536fedc3b2e857955194ce7cef.jpg)
(a) Training (Stage 1)

![](images/e8d303fa7b47443a235aca05814bea56330dbb2e9dc16c4b1c9dc8a3d1e6d4f9.jpg)
(b) Training (Stage 2)

![](images/7ff3d10ce16207c7fa8739ddfcfaf20e9e40412f56ee7f90e93598ae5937a40d.jpg)
(c) Training (Stage 3), TEA

![](images/8765ffc1d65598764ead2dd6242fab60529e523d86bc7c226f11a7d41f75e83d.jpg)
(d) Training (Stage 3), TEA(L)
Figure 4: TEAs consist of a shared forward model $\theta$, upstream predictor $u$, and target-embedding function $e$, parameterized by $(\Theta, \mathbf{W}_u, \mathbf{W}_e)$. Blue and red respectively identify the supervised and representation learning components in each arrangement. Solid lines indicate forward propagation of data; dashed lines indicate backpropagation of gradients. (a) First, the autoencoding components $e$, $\theta$ are trained to learn target representations. (b) Next, using the inputs, the prediction arm $u$ is trained to regress the learned embeddings generated by the encoder.
(c) Finally, all three components are jointly trained on both prediction and reconstruction losses. (d) In the indirect variant (Girdhar et al., 2016),

![](images/66ec948d7302655865cb414c023d52d87bec663d81c639e87e89c031b98ed32a.jpg)
(e) Training (Stage 3), TEA(LP)

![](images/48a7dbd864b2e396ac57e3bfc3f1f6e4412a011632208abfecba8c096b0bd56d.jpg)
(f) Inference Time
Figure 4: (continued) the predictor continues to regress the learned embeddings, and the latent loss backpropagates through both $u$ and $e$ (TEA(L)). (e) The TEA(LP) variant combines the previous two: both the latent loss and prediction loss are trained jointly together with the reconstruction loss. (f) At inference time, the target-embedding arm is dropped, leaving the hypothesis $h = \theta \circ u$ for prediction.

# D ADDITIONAL EXPERIMENT DETAIL

The UK Cystic Fibrosis registry records follow-up trajectories for over 10,000 patients over the period from 2008 to 2015, with a total of over 60,000 hospital visits. Each patient is associated with 90 variables over time, including data on treatments and diagnoses for 23 possible comorbidities (e.g. ABPA, diabetes, hypertension, pancreatitis), 11 possible infections (e.g. aspergillus, Burkholderia cepacia, klebsiella pneumoniae), as well as static demographic information (e.g. gender, genetics, smoking status). Using both static and temporal information in a precedent window, we forecast the future trajectories for the diagnoses of infections and comorbidities (all binary variables) recorded at each follow-up. The Alzheimer's Disease Neuroimaging Initiative study tracks disease progression for over 1,700 patients over the period from 2004 to 2016 with a total of over 10,000 (bi-annual)

![](images/07b0b94de6029de51d6926755fd993650cec3370e4b596d9c3385090dea0803e.jpg)
(a) Base; REG
Figure 5: Component functions and training objectives for comparators in experiments.
Blue and red respectively identify the supervised and representation learning components in each arrangement. Solid lines indicate forward propagation of data; dashed lines indicate backpropagation of gradients. (a) The baseline is direct prediction (with (REG) and without (Base) $\ell_2$-regularization), which simply corresponds to removing the autoencoder; here we explicitly identify some intermediate hidden layer to preserve visual correspondence with the autoencoder models, but note that the "latent code" is strictly speaking a misnomer, as nothing is being encoded here. (b) FEAs consist of a shared forward

![](images/1de8fa52f95c8e8e61e9978e562e7babcb7d75197d732f14859d39fa9e7762a3.jpg)
(b) FEA

![](images/7cfc38236b51f5bddba229ec1eae7e391fa91737ea71af3737c9d09cac82c394.jpg)
(c) TEA

![](images/82e97a0cdd6f9809eaf5ccccbb066ecef71158804bc2e501ba620d0d6091378d.jpg)
(d) F/TEA
Figure 5: (continued) model $\phi$, downstream predictor $d$, and feature reconstructor $r$, parameterized by $(\Phi, \mathbf{W}_d, \mathbf{W}_r)$. (c) TEAs consist of a shared forward model $\theta$, upstream predictor $u$, and target-embedding function $e$, parameterized by $(\Theta, \mathbf{W}_u, \mathbf{W}_e)$. (d) As an additional sensitivity, F/TEAs combine the previous two, forcing intermediate representations to encode both features and targets.

clinical visits. We focus on the 8 primary quantitative biomarkers (e.g. entorhinal cortex, fusiform gyrus, hippocampus), 16 cognitive tests (e.g. ADAS11, CDR sum of boxes, mini mental state exam), as well as static demographic information (e.g. apolipoprotein E4, education level, ethnicity); we omit the remaining variables, for which the rate of missingness is over $50\%$. Using a precedent window, we forecast the future evolution of the primary quantitative biomarkers and cognitive test results (all continuous variables) measured at each visit.
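As a concrete illustration of the precedent-window setup shared by all datasets, the following is a minimal sketch of slicing one patient's longitudinal record into (input window, forecast target) pairs; the function name and all shapes here are purely illustrative, not taken from the study's code.

```python
import numpy as np

def make_forecast_pairs(seq, precedent, horizon):
    """Slice one patient's sequence of shape (T, D) into training pairs:
    a `precedent`-step window of past visits serves as input, and the
    next `horizon` visits serve as the forecasting target."""
    pairs = []
    for t in range(precedent, len(seq) - horizon + 1):
        x = seq[t - precedent:t]   # precedent window (features)
        y = seq[t:t + horizon]     # future trajectory (targets)
        pairs.append((x, y))
    return pairs

# toy record: 10 visits, 3 longitudinal variables
seq = np.arange(30, dtype=float).reshape(10, 3)
pairs = make_forecast_pairs(seq, precedent=4, horizon=2)
```

With a 10-visit record, a 4-step precedent window, and a 2-step horizon this yields 5 pairs, each with input of shape (4, 3) and target of shape (2, 3).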
The Medical Information Mart for Intensive Care records physiological data streams for patients admitted to intensive care units after 2008. We use over 22,000 patients with a total of over 500,000 measurements (resampled at 4-hour intervals). We focus on the most frequently measured vital signs and lab tests (e.g. heart rate, oxygen saturation, respiratory rate) recorded over time (with categorical variables binarized, this gives a total of 361 variables), as well as static demographic information (e.g. admission type, gender, location, marital status); we omit the remaining variables, for which the rate of missingness is over $50\%$. Using a precedent window, we forecast the subsequent window of those variables. For each dataset, sequences are randomized at the patient level in order to obtain splits for training (and validation) and testing.

We implement all models using Tensorflow. For the linear model, each component (encoder, decoder, and predictor) consists of a single layer, no bias term, and linear activation; static (demographic) and temporal data are concatenated and flattened for both features and targets. For the nonlinear case, we implement each component as an RNN using GRUs with the number of hidden layers $\zeta \in \{1,2\}$, where the number of hidden units is equal to the temporal feature dimension and tanh is used for activation; the dimension of the latent space is (therefore) equal to the hidden state dimension. (For even larger hidden capacities, the increased number of parameters rapidly degrades performance.) Static (demographic) information is incorporated as a mapping into the initial state for recurrent cells. Training is performed using the Adam optimizer with a learning rate of $\psi \in \{3\mathrm{e}{-5},3\mathrm{e}{-4},3\mathrm{e}{-3},3\mathrm{e}{-2}\}$.
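To make the three-stage procedure of Algorithm 1 concrete, the following is a minimal NumPy sketch using linear stand-ins for the encoder $e$, decoder $\theta$, and predictor $u$ (the actual models are the linear and GRU-based TensorFlow implementations described above; all dimensions, step counts, and the learning rate here are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: targets are a linear function of the features (illustrative)
N, dx, dy, dz = 128, 6, 5, 3
X = rng.normal(size=(N, dx))
Y = X @ rng.normal(scale=0.5, size=(dx, dy))

# linear stand-ins for encoder e (y -> z), decoder theta (z -> y),
# and predictor u (x -> z), cf. Algorithm 1
W_e = rng.normal(scale=0.1, size=(dy, dz))
Theta = rng.normal(scale=0.1, size=(dz, dy))
W_u = rng.normal(scale=0.1, size=(dx, dz))
psi = 0.05  # learning rate

def mse(a, b):
    return float(np.mean((a - b) ** 2))

loss_before = mse(X @ W_u @ Theta, Y)

# Stage 1: learn the target embedding (autoencode Y)
for _ in range(300):
    Z = Y @ W_e
    R = Z @ Theta - Y                      # reconstruction residual
    g_Theta = Z.T @ R / N                  # dL_r / dTheta
    g_We = Y.T @ (R @ Theta.T) / N         # dL_r / dW_e
    Theta -= psi * g_Theta
    W_e -= psi * g_We

# Stage 2: regress the (now fixed) embeddings from the features
for _ in range(300):
    Zhat = X @ W_u
    W_u -= psi * X.T @ (Zhat - Y @ W_e) / N

# Stage 3: joint training on prediction and reconstruction losses
for _ in range(300):
    Zhat, Z = X @ W_u, Y @ W_e
    Rp = Zhat @ Theta - Y                  # prediction residual
    Rr = Z @ Theta - Y                     # reconstruction residual
    W_u -= psi * X.T @ (Rp @ Theta.T) / N
    W_e -= psi * Y.T @ (Rr @ Theta.T) / N
    Theta -= psi * (Zhat.T @ Rp + Z.T @ Rr) / N

loss_after = mse(X @ W_u @ Theta, Y)       # inference uses h = theta ∘ u
```

As in Figure 4(f), only the composed hypothesis $h = \theta \circ u$ is used at inference; the target-embedding arm exists purely to shape the latent space during training.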
Models are trained until convergence up to a maximum of 10,000 iterations with a minibatch size of $N_{s}\in \{32,64,128\}$; the empirical loss is computed on the validation set every 50 iterations of training, and convergence is determined on the basis of that error. Checkpointing is implemented every 50 iterations, and the best model parameters are restored (upon convergence) for use on the testing set. For all models except "Base", we allow the opportunity to select among the $\ell_2$-regularization coefficients $\nu \in \{0,3\mathrm{e}{-5},3\mathrm{e}{-4},3\mathrm{e}{-3},3\mathrm{e}{-2}\}$. We set the strength-of-prior coefficient $\lambda = 0.5$ for FEA, F/TEA, as well as all variants of TEA (however, we do provide sensitivities on $\lambda$ for TEA in our experiments). For hyperparameter tuning $(\zeta ,\psi ,\nu ,N_{s})$, we use cross-validation on the training set using 20 iterations of random search, selecting the setting that gives the lowest validation loss averaged across folds. For fair comparison (so as to isolate the effect of supervised representation learning over and above direct prediction), we apply the same setting chosen for REG to FEA, F/TEA, and all variants of TEA; therefore the only difference is the presence or absence of each autoencoding component. For each model and dataset, the experiment is repeated a total of 10 times (each with a different random split of data into training and held-out testing sets); all results are reported as means and standard errors of each performance metric across runs.

# E ADDITIONAL EXPERIMENT RESULTS

# E.1 Results for Linear Models

Table 9: Extended results for TEA and comparators on linear model with UKCF (Bold indicates best)
| | $\tau$ | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|---|
| PRC(I) | 1 | 0.340 ± 0.118* | 0.370 ± 0.101* | 0.374 ± 0.091* | 0.497 ± 0.016 | 0.454 ± 0.010* |
| | 2 | 0.325 ± 0.099* | 0.350 ± 0.085* | 0.353 ± 0.077* | 0.459 ± 0.017 | 0.419 ± 0.012* |
| | 3 | 0.314 ± 0.088* | 0.337 ± 0.076* | 0.342 ± 0.071* | 0.432 ± 0.015 | 0.399 ± 0.013* |
| | 4 | 0.307 ± 0.082* | 0.328 ± 0.071* | 0.333 ± 0.067* | 0.413 ± 0.010 | 0.386 ± 0.009* |
| PRC(C) | 1 | 0.445 ± 0.131* | 0.467 ± 0.106* | 0.499 ± 0.110* | 0.653 ± 0.009 | 0.599 ± 0.019* |
| | 2 | 0.418 ± 0.099* | 0.435 ± 0.081* | 0.457 ± 0.083* | 0.566 ± 0.010 | 0.524 ± 0.017* |
| | 3 | 0.403 ± 0.082* | 0.418 ± 0.067* | 0.436 ± 0.069* | 0.521 ± 0.008 | 0.487 ± 0.015* |
| | 4 | 0.398 ± 0.073* | 0.412 ± 0.060* | 0.426 ± 0.061* | 0.498 ± 0.009 | 0.471 ± 0.013* |
| ROC(I) | 1 | 0.713 ± 0.104* | 0.737 ± 0.081* | 0.750 ± 0.078 | 0.806 ± 0.007 | 0.801 ± 0.007 |
| | 2 | 0.688 ± 0.089* | 0.709 ± 0.070* | 0.718 ± 0.068* | 0.771 ± 0.007 | 0.765 ± 0.009 |
| | 3 | 0.677 ± 0.080* | 0.697 ± 0.065* | 0.706 ± 0.066 | 0.750 ± 0.007 | 0.749 ± 0.009 |
| | 4 | 0.677 ± 0.076* | 0.696 ± 0.063 | 0.707 ± 0.068 | 0.740 ± 0.008 | 0.748 ± 0.006* |
| ROC(C) | 1 | 0.713 ± 0.110* | 0.741 ± 0.088* | 0.756 ± 0.084* | 0.829 ± 0.006 | 0.810 ± 0.008* |
| | 2 | 0.686 ± 0.091* | 0.707 ± 0.072* | 0.719 ± 0.071* | 0.775 ± 0.008 | 0.762 ± 0.008* |
| | 3 | 0.668 ± 0.077* | 0.686 ± 0.061* | 0.699 ± 0.063 | 0.743 ± 0.005 | 0.735 ± 0.007* |
| | 4 | 0.651 ± 0.070* | 0.667 ± 0.056* | 0.679 ± 0.057* | 0.720 ± 0.006 | 0.712 ± 0.009* |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. + +Table 10: Summary results for TEA and comparators on linear model with UKCF (Bold indicates best) + +
| | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|
| PRC(I) | 0.322 ± 0.099* | 0.347 ± 0.085* | 0.351 ± 0.079* | 0.450 ± 0.035 | 0.414 ± 0.028* |
| PRC(C) | 0.416 ± 0.100* | 0.433 ± 0.083* | 0.455 ± 0.087* | 0.559 ± 0.060 | 0.520 ± 0.052 |
| ROC(I) | 0.689 ± 0.089* | 0.710 ± 0.072* | 0.720 ± 0.073 | 0.767 ± 0.026 | 0.766 ± 0.023 |
| ROC(C) | 0.679 ± 0.091* | 0.700 ± 0.075* | 0.713 ± 0.075 | 0.767 ± 0.042 | 0.755 ± 0.037 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 11: Extended source of gain and variants on linear model with UKCF (Bold indicates best) + +
| | $\tau$ | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|---|
| PRC(I) | 1 | 0.434 ± 0.017 | 0.472 ± 0.016 | 0.497 ± 0.016 | 0.474 ± 0.018 | 0.502 ± 0.015 |
| | 2 | 0.408 ± 0.016 | 0.439 ± 0.016 | 0.459 ± 0.017 | 0.443 ± 0.018 | 0.463 ± 0.016 |
| | 3 | 0.390 ± 0.015 | 0.415 ± 0.015 | 0.432 ± 0.015 | 0.420 ± 0.014 | 0.435 ± 0.013 |
| | 4 | 0.377 ± 0.014 | 0.399 ± 0.011 | 0.413 ± 0.010 | 0.402 ± 0.010 | 0.415 ± 0.009 |
| PRC(C) | 1 | 0.563 ± 0.025 | 0.620 ± 0.030 | 0.653 ± 0.009 | 0.626 ± 0.012 | 0.655 ± 0.008 |
| | 2 | 0.511 ± 0.018 | 0.550 ± 0.022 | 0.566 ± 0.010 | 0.550 ± 0.008 | 0.566 ± 0.010 |
| | 3 | 0.484 ± 0.014 | 0.512 ± 0.017 | 0.521 ± 0.008 | 0.511 ± 0.006 | 0.521 ± 0.008 |
| | 4 | 0.469 ± 0.014 | 0.491 ± 0.015 | 0.498 ± 0.009 | 0.490 ± 0.009 | 0.499 ± 0.009 |
| ROC(I) | 1 | 0.779 ± 0.008 | 0.799 ± 0.007 | 0.806 ± 0.007 | 0.797 ± 0.007 | 0.809 ± 0.005 |
| | 2 | 0.750 ± 0.008 | 0.765 ± 0.009 | 0.771 ± 0.007 | 0.763 ± 0.008 | 0.772 ± 0.007 |
| | 3 | 0.733 ± 0.008 | 0.750 ± 0.008 | 0.750 ± 0.007 | 0.744 ± 0.007 | 0.750 ± 0.008 |
| | 4 | 0.725 ± 0.007 | 0.744 ± 0.005 | 0.740 ± 0.008 | 0.734 ± 0.006 | 0.740 ± 0.007 |
| ROC(C) | 1 | 0.799 ± 0.012 | 0.821 ± 0.011 | 0.829 ± 0.006 | 0.823 ± 0.005 | 0.830 ± 0.005 |
| | 2 | 0.751 ± 0.012 | 0.774 ± 0.009 | 0.775 ± 0.008 | 0.769 ± 0.006 | 0.775 ± 0.006 |
| | 3 | 0.723 ± 0.010 | 0.746 ± 0.007 | 0.743 ± 0.005 | 0.737 ± 0.005 | 0.742 ± 0.005 |
| | 4 | 0.702 ± 0.011 | 0.722 ± 0.007 | 0.720 ± 0.006 | 0.713 ± 0.008 | 0.720 ± 0.006 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). + +Table 12: Summary source of gain and variants on linear model with UKCF (Bold indicates best) + +
| | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|
| PRC(I) | 0.402 ± 0.026 | 0.431 ± 0.031 | 0.450 ± 0.035 | 0.435 ± 0.031 | 0.454 ± 0.036 |
| PRC(C) | 0.507 ± 0.040 | 0.543 ± 0.054 | 0.559 ± 0.060 | 0.544 ± 0.053 | 0.560 ± 0.061 |
| ROC(I) | 0.747 ± 0.022 | 0.764 ± 0.022 | 0.767 ± 0.026 | 0.759 ± 0.025 | 0.768 ± 0.028 |
| ROC(C) | 0.744 ± 0.038 | 0.766 ± 0.038 | 0.767 ± 0.042 | 0.760 ± 0.042 | 0.767 ± 0.042 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +# E.2 Results for Recurrent Models + +Table 13: Extended results for TEA and comparators on RNN model with UKCF (Bold indicates best) + +
| | $\tau$ | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|---|
| PRC(I) | 1 | 0.451 ± 0.027* | 0.456 ± 0.016* | 0.448 ± 0.026* | 0.549 ± 0.014 | 0.509 ± 0.014* |
| | 2 | 0.417 ± 0.024* | 0.420 ± 0.013* | 0.416 ± 0.022* | 0.490 ± 0.012 | 0.463 ± 0.011* |
| | 3 | 0.395 ± 0.019* | 0.400 ± 0.013* | 0.395 ± 0.020* | 0.457 ± 0.010 | 0.437 ± 0.013* |
| | 4 | 0.380 ± 0.017* | 0.385 ± 0.010* | 0.380 ± 0.017* | 0.434 ± 0.007 | 0.417 ± 0.013* |
| PRC(C) | 1 | 0.561 ± 0.056* | 0.592 ± 0.029* | 0.598 ± 0.029* | 0.695 ± 0.010 | 0.685 ± 0.015 |
| | 2 | 0.504 ± 0.039* | 0.523 ± 0.022* | 0.527 ± 0.021* | 0.591 ± 0.014 | 0.584 ± 0.018 |
| | 3 | 0.471 ± 0.028* | 0.488 ± 0.018* | 0.489 ± 0.017* | 0.537 ± 0.007 | 0.530 ± 0.017 |
| | 4 | 0.453 ± 0.023* | 0.469 ± 0.017* | 0.470 ± 0.016* | 0.510 ± 0.007 | 0.504 ± 0.015 |
| ROC(I) | 1 | 0.788 ± 0.018* | 0.791 ± 0.009* | 0.794 ± 0.014* | 0.827 ± 0.007 | 0.818 ± 0.006* |
| | 2 | 0.753 ± 0.015* | 0.757 ± 0.011* | 0.758 ± 0.017* | 0.783 ± 0.008 | 0.778 ± 0.009 |
| | 3 | 0.736 ± 0.013* | 0.741 ± 0.012* | 0.740 ± 0.016* | 0.760 ± 0.007 | 0.757 ± 0.010 |
| | 4 | 0.725 ± 0.012* | 0.731 ± 0.011* | 0.727 ± 0.014* | 0.748 ± 0.008 | 0.744 ± 0.010 |
| ROC(C) | 1 | 0.794 ± 0.022* | 0.809 ± 0.015* | 0.808 ± 0.012* | 0.838 ± 0.007 | 0.834 ± 0.007 |
| | 2 | 0.750 ± 0.017* | 0.761 ± 0.010* | 0.761 ± 0.009* | 0.782 ± 0.007 | 0.781 ± 0.008 |
| | 3 | 0.723 ± 0.013* | 0.733 ± 0.007* | 0.735 ± 0.010* | 0.752 ± 0.006 | 0.751 ± 0.009 |
| | 4 | 0.699 ± 0.009* | 0.709 ± 0.009* | 0.711 ± 0.010* | 0.726 ± 0.006 | 0.724 ± 0.008 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. + +Table 14: Summary results for TEA and comparators on RNN model with UKCF (Bold indicates best) + +
| | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|
| PRC(I) | 0.411 ± 0.035* | 0.415 ± 0.030* | 0.410 ± 0.033* | 0.483 ± 0.045 | 0.457 ± 0.037 |
| PRC(C) | 0.497 ± 0.057* | 0.518 ± 0.052* | 0.521 ± 0.054* | 0.583 ± 0.072 | 0.576 ± 0.071 |
| ROC(I) | 0.750 ± 0.028* | 0.755 ± 0.025 | 0.755 ± 0.029 | 0.779 ± 0.031 | 0.774 ± 0.030 |
| ROC(C) | 0.742 ± 0.038 | 0.753 ± 0.039 | 0.754 ± 0.037 | 0.774 ± 0.042 | 0.772 ± 0.042 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 15: Extended source of gain and variants on RNN model with UKCF (Bold indicates best) + +
| | $\tau$ | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|---|
| PRC(I) | 1 | 0.511 ± 0.014 | 0.468 ± 0.015 | 0.549 ± 0.014 | 0.553 ± 0.009 | 0.545 ± 0.011 |
| | 2 | 0.461 ± 0.013 | 0.429 ± 0.011 | 0.490 ± 0.012 | 0.492 ± 0.013 | 0.487 ± 0.012 |
| | 3 | 0.434 ± 0.016 | 0.407 ± 0.011 | 0.457 ± 0.010 | 0.457 ± 0.013 | 0.455 ± 0.014 |
| | 4 | 0.414 ± 0.009 | 0.392 ± 0.011 | 0.434 ± 0.007 | 0.432 ± 0.008 | 0.433 ± 0.009 |
| PRC(C) | 1 | 0.682 ± 0.011 | 0.633 ± 0.032 | 0.695 ± 0.010 | 0.697 ± 0.010 | 0.695 ± 0.012 |
| | 2 | 0.581 ± 0.012 | 0.549 ± 0.025 | 0.591 ± 0.014 | 0.589 ± 0.013 | 0.592 ± 0.011 |
| | 3 | 0.530 ± 0.010 | 0.506 ± 0.018 | 0.537 ± 0.007 | 0.534 ± 0.009 | 0.538 ± 0.009 |
| | 4 | 0.504 ± 0.009 | 0.484 ± 0.015 | 0.510 ± 0.007 | 0.505 ± 0.008 | 0.509 ± 0.010 |
| ROC(I) | 1 | 0.816 ± 0.004 | 0.795 ± 0.008 | 0.827 ± 0.007 | 0.825 ± 0.005 | 0.822 ± 0.010 |
| | 2 | 0.774 ± 0.010 | 0.759 ± 0.007 | 0.783 ± 0.008 | 0.782 ± 0.008 | 0.778 ± 0.007 |
| | 3 | 0.749 ± 0.010 | 0.744 ± 0.008 | 0.760 ± 0.007 | 0.758 ± 0.006 | 0.758 ± 0.006 |
| | 4 | 0.732 ± 0.008 | 0.735 ± 0.008 | 0.748 ± 0.008 | 0.739 ± 0.007 | 0.745 ± 0.004 |
| ROC(C) | 1 | 0.830 ± 0.008 | 0.816 ± 0.011 | 0.838 ± 0.007 | 0.839 ± 0.008 | 0.839 ± 0.007 |
| | 2 | 0.775 ± 0.010 | 0.767 ± 0.009 | 0.782 ± 0.007 | 0.784 ± 0.010 | 0.782 ± 0.005 |
| | 3 | 0.743 ± 0.009 | 0.735 ± 0.007 | 0.752 ± 0.006 | 0.751 ± 0.010 | 0.751 ± 0.007 |
| | 4 | 0.721 ± 0.006 | 0.712 ± 0.007 | 0.726 ± 0.006 | 0.724 ± 0.007 | 0.726 ± 0.006 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). + +Table 16: Summary source of gain and variants on RNN model with UKCF (Bold indicates best) + +
| | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|
| PRC(I) | 0.455 ± 0.039 | 0.424 ± 0.031 | 0.483 ± 0.045 | 0.483 ± 0.047 | 0.480 ± 0.044 |
| PRC(C) | 0.574 ± 0.069 | 0.543 ± 0.061 | 0.583 ± 0.072 | 0.581 ± 0.074 | 0.583 ± 0.072 |
| ROC(I) | 0.768 ± 0.033 | 0.758 ± 0.024 | 0.779 ± 0.031 | 0.776 ± 0.033 | 0.776 ± 0.030 |
| ROC(C) | 0.767 ± 0.042 | 0.758 ± 0.040 | 0.774 ± 0.042 | 0.774 ± 0.044 | 0.774 ± 0.043 |
+ +PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 17: Extended results for TEA and comparators on RNN model with ADNI (Bold indicates best) + +
| | $\tau$ | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|---|
| MSE(B) | 1 | 0.095 ± 0.014* | 0.088 ± 0.010* | 0.082 ± 0.007* | 0.057 ± 0.007 | 0.065 ± 0.008* |
| | 2 | 0.097 ± 0.015* | 0.089 ± 0.010* | 0.084 ± 0.008* | 0.057 ± 0.007 | 0.066 ± 0.007* |
| | 3 | 0.100 ± 0.015* | 0.092 ± 0.010* | 0.087 ± 0.008* | 0.059 ± 0.007 | 0.068 ± 0.007* |
| | 4 | 0.104 ± 0.016* | 0.095 ± 0.011* | 0.091 ± 0.008* | 0.061 ± 0.007 | 0.071 ± 0.008* |
| | 5 | 0.105 ± 0.017* | 0.097 ± 0.012* | 0.093 ± 0.008* | 0.062 ± 0.008 | 0.073 ± 0.008* |
| | 6 | 0.109 ± 0.017* | 0.100 ± 0.013* | 0.097 ± 0.009* | 0.065 ± 0.008 | 0.076 ± 0.009* |
| | 7 | 0.112 ± 0.019* | 0.103 ± 0.014* | 0.101 ± 0.011* | 0.068 ± 0.009 | 0.080 ± 0.009* |
| | 8 | 0.115 ± 0.021* | 0.106 ± 0.016* | 0.105 ± 0.013* | 0.072 ± 0.011 | 0.083 ± 0.010* |
| MSE(C) | 1 | 0.275 ± 0.013* | 0.270 ± 0.013* | 0.265 ± 0.011* | 0.239 ± 0.015 | 0.243 ± 0.013 |
| | 2 | 0.300 ± 0.015* | 0.295 ± 0.013* | 0.290 ± 0.012* | 0.265 ± 0.014 | 0.273 ± 0.014 |
| | 3 | 0.323 ± 0.018* | 0.320 ± 0.015* | 0.314 ± 0.013* | 0.287 ± 0.014 | 0.297 ± 0.015 |
| | 4 | 0.358 ± 0.019* | 0.354 ± 0.018* | 0.352 ± 0.017* | 0.322 ± 0.015 | 0.333 ± 0.018 |
| | 5 | 0.371 ± 0.024* | 0.370 ± 0.023* | 0.367 ± 0.023* | 0.341 ± 0.019 | 0.350 ± 0.021 |
| | 6 | 0.393 ± 0.033 | 0.393 ± 0.032 | 0.391 ± 0.034 | 0.366 ± 0.026 | 0.374 ± 0.028 |
| | 7 | 0.417 ± 0.043 | 0.419 ± 0.040 | 0.417 ± 0.044 | 0.394 ± 0.035 | 0.399 ± 0.038 |
| | 8 | 0.453 ± 0.058 | 0.455 ± 0.054 | 0.454 ± 0.062 | 0.430 ± 0.048 | 0.435 ± 0.057 |
+ +MSE evaluations reported separately for targets representing quantitative biomarkers (B) and cognitive tests (C). The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. + +Table 18: Summary results for TEA and comparators on RNN model with ADNI (Bold indicates best) + +
| | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|
| MSE(B) | 0.105 ± 0.018* | 0.096 ± 0.014* | 0.092 ± 0.012* | 0.063 ± 0.010 | 0.073 ± 0.010* |
| MSE(C) | 0.361 ± 0.064 | 0.360 ± 0.066 | 0.356 ± 0.068 | 0.330 ± 0.066 | 0.338 ± 0.067 |
+ +MSE evaluations reported separately for targets representing quantitative biomarkers (B) and cognitive tests (C). The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 19: Extended source of gain and variants on RNN model with ADNI (Bold indicates best) + +
| | $\tau$ | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|---|
| MSE(B) | 1 | 0.081 ± 0.011 | 0.098 ± 0.018 | 0.057 ± 0.007 | 0.049 ± 0.009 | 0.057 ± 0.009 |
| | 2 | 0.084 ± 0.011 | 0.098 ± 0.018 | 0.057 ± 0.007 | 0.051 ± 0.010 | 0.058 ± 0.008 |
| | 3 | 0.087 ± 0.011 | 0.101 ± 0.019 | 0.059 ± 0.007 | 0.054 ± 0.011 | 0.059 ± 0.008 |
| | 4 | 0.090 ± 0.011 | 0.105 ± 0.019 | 0.061 ± 0.007 | 0.056 ± 0.011 | 0.062 ± 0.008 |
| | 5 | 0.092 ± 0.011 | 0.106 ± 0.021 | 0.062 ± 0.008 | 0.059 ± 0.011 | 0.064 ± 0.009 |
| | 6 | 0.097 ± 0.012 | 0.110 ± 0.023 | 0.065 ± 0.008 | 0.063 ± 0.011 | 0.068 ± 0.010 |
| | 7 | 0.100 ± 0.013 | 0.113 ± 0.025 | 0.068 ± 0.009 | 0.066 ± 0.010 | 0.072 ± 0.011 |
| | 8 | 0.104 ± 0.014 | 0.117 ± 0.027 | 0.072 ± 0.011 | 0.070 ± 0.011 | 0.076 ± 0.013 |
| MSE(C) | 1 | 0.258 ± 0.016 | 0.274 ± 0.017 | 0.239 ± 0.015 | 0.231 ± 0.020 | 0.241 ± 0.016 |
| | 2 | 0.285 ± 0.016 | 0.297 ± 0.017 | 0.265 ± 0.014 | 0.258 ± 0.021 | 0.266 ± 0.018 |
| | 3 | 0.311 ± 0.017 | 0.321 ± 0.018 | 0.287 ± 0.014 | 0.282 ± 0.021 | 0.287 ± 0.018 |
| | 4 | 0.346 ± 0.019 | 0.356 ± 0.020 | 0.322 ± 0.015 | 0.319 ± 0.021 | 0.321 ± 0.022 |
| | 5 | 0.363 ± 0.024 | 0.373 ± 0.024 | 0.341 ± 0.019 | 0.337 ± 0.024 | 0.338 ± 0.026 |
| | 6 | 0.389 ± 0.033 | 0.397 ± 0.031 | 0.366 ± 0.026 | 0.366 ± 0.030 | 0.362 ± 0.033 |
| | 7 | 0.416 ± 0.041 | 0.424 ± 0.040 | 0.394 ± 0.035 | 0.401 ± 0.043 | 0.390 ± 0.044 |
| | 8 | 0.454 ± 0.059 | 0.462 ± 0.053 | 0.430 ± 0.048 | 0.447 ± 0.064 | 0.427 ± 0.063 |
+ +MSE evaluations reported separately for targets representing quantitative biomarkers (B) and cognitive tests (C). The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). + +Table 20: Summary source of gain and variants on RNN model with ADNI (Bold indicates best) + +
| | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|
| MSE(B) | 0.092 ± 0.014 | 0.106 ± 0.022 | 0.063 ± 0.010 | 0.058 ± 0.012 | 0.064 ± 0.012 |
| MSE(C) | 0.353 ± 0.070 | 0.363 ± 0.067 | 0.330 ± 0.066 | 0.330 ± 0.076 | 0.329 ± 0.068 |
+ +MSE evaluations reported separately for targets representing quantitative biomarkers (B) and cognitive tests (C). The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 21: Extended results for TEA and comparators on RNN model with MIMIC (Bold indicates best) + +
| | $\tau$ | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|---|
| PRC | 1 | 0.159 ± 0.034* | 0.159 ± 0.022* | 0.162 ± 0.036* | 0.293 ± 0.031 | 0.193 ± 0.023* |
| | 2 | 0.148 ± 0.028* | 0.149 ± 0.018* | 0.150 ± 0.030* | 0.254 ± 0.025 | 0.174 ± 0.018* |
| | 3 | 0.139 ± 0.024* | 0.141 ± 0.015* | 0.142 ± 0.027* | 0.230 ± 0.021 | 0.162 ± 0.015* |
| | 4 | 0.133 ± 0.021* | 0.135 ± 0.012* | 0.135 ± 0.024* | 0.214 ± 0.018 | 0.153 ± 0.012* |
| | 5 | 0.129 ± 0.019* | 0.130 ± 0.011* | 0.130 ± 0.022* | 0.203 ± 0.015 | 0.147 ± 0.011* |
| ROC | 1 | 0.699 ± 0.049* | 0.704 ± 0.028* | 0.709 ± 0.060* | 0.801 ± 0.018 | 0.745 ± 0.021* |
| | 2 | 0.701 ± 0.044* | 0.707 ± 0.025* | 0.705 ± 0.050* | 0.778 ± 0.015 | 0.740 ± 0.018* |
| | 3 | 0.690 ± 0.041* | 0.696 ± 0.024* | 0.693 ± 0.046* | 0.758 ± 0.016 | 0.726 ± 0.019* |
| | 4 | 0.681 ± 0.038* | 0.688 ± 0.023* | 0.684 ± 0.042* | 0.745 ± 0.015 | 0.715 ± 0.019* |
| | 5 | 0.679 ± 0.037* | 0.685 ± 0.023* | 0.680 ± 0.043* | 0.736 ± 0.012 | 0.713 ± 0.019* |
| MSE | 1 | 0.141 ± 0.007 | 0.140 ± 0.006 | 0.138 ± 0.010 | 0.137 ± 0.008 | 0.139 ± 0.007 |
| | 2 | 0.159 ± 0.010 | 0.159 ± 0.007 | 0.160 ± 0.008 | 0.154 ± 0.009 | 0.162 ± 0.007* |
| | 3 | 0.156 ± 0.009 | 0.155 ± 0.007 | 0.156 ± 0.008 | 0.158 ± 0.008 | 0.158 ± 0.008 |
| | 4 | 0.154 ± 0.008 | 0.153 ± 0.008 | 0.153 ± 0.008 | 0.153 ± 0.009 | 0.155 ± 0.009 |
| | 5 | 0.154 ± 0.010 | 0.152 ± 0.010 | 0.152 ± 0.010 | 0.150 ± 0.011 | 0.155 ± 0.010 |
+ +The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. + +Table 22: Summary results for TEA and comparators on RNN model with MIMIC (Bold indicates best) + +
| | Base | REG | FEA | TEA | F/TEA |
|---|---|---|---|---|---|
| PRC | 0.142 ± 0.028* | 0.143 ± 0.019* | 0.144 ± 0.030* | 0.239 ± 0.039 | 0.166 ± 0.023* |
| ROC | 0.690 ± 0.043* | 0.696 ± 0.026* | 0.694 ± 0.050* | 0.763 ± 0.028 | 0.728 ± 0.023* |
| MSE | 0.153 ± 0.011 | 0.152 ± 0.010 | 0.152 ± 0.012 | 0.150 ± 0.012 | 0.154 ± 0.011 |
+ +The two-sample $t$ -test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ( $p$ -value $< 0.05$ ) relative to the TEA result. Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 23: Extended source of gain and variants on RNN model with MIMIC (Bold indicates best) + +
| | $\tau$ | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|---|
| PRC | 1 | 0.216 ± 0.044 | 0.194 ± 0.020 | 0.293 ± 0.031 | 0.310 ± 0.047 | 0.280 ± 0.033 |
| | 2 | 0.194 ± 0.035 | 0.175 ± 0.017 | 0.254 ± 0.025 | 0.265 ± 0.035 | 0.242 ± 0.026 |
| | 3 | 0.178 ± 0.030 | 0.163 ± 0.013 | 0.230 ± 0.021 | 0.239 ± 0.030 | 0.221 ± 0.022 |
| | 4 | 0.168 ± 0.027 | 0.154 ± 0.011 | 0.214 ± 0.018 | 0.222 ± 0.026 | 0.206 ± 0.020 |
| | 5 | 0.160 ± 0.024 | 0.148 ± 0.010 | 0.203 ± 0.015 | 0.210 ± 0.023 | 0.195 ± 0.018 |
| ROC | 1 | 0.756 ± 0.042 | 0.742 ± 0.022 | 0.801 ± 0.018 | 0.807 ± 0.025 | 0.791 ± 0.018 |
| | 2 | 0.741 ± 0.031 | 0.738 ± 0.021 | 0.778 ± 0.015 | 0.783 ± 0.017 | 0.773 ± 0.012 |
| | 3 | 0.726 ± 0.031 | 0.724 ± 0.019 | 0.758 ± 0.016 | 0.761 ± 0.019 | 0.756 ± 0.013 |
| | 4 | 0.715 ± 0.030 | 0.715 ± 0.019 | 0.745 ± 0.015 | 0.747 ± 0.017 | 0.742 ± 0.011 |
| | 5 | 0.710 ± 0.031 | 0.711 ± 0.018 | 0.736 ± 0.012 | 0.741 ± 0.019 | 0.736 ± 0.014 |
| MSE | 1 | 0.138 ± 0.008 | 0.137 ± 0.007 | 0.137 ± 0.008 | 0.136 ± 0.006 | 0.137 ± 0.008 |
| | 2 | 0.158 ± 0.007 | 0.156 ± 0.012 | 0.154 ± 0.009 | 0.153 ± 0.007 | 0.154 ± 0.007 |
| | 3 | 0.155 ± 0.009 | 0.156 ± 0.010 | 0.158 ± 0.008 | 0.156 ± 0.010 | 0.158 ± 0.008 |
| | 4 | 0.152 ± 0.008 | 0.153 ± 0.009 | 0.153 ± 0.009 | 0.151 ± 0.011 | 0.154 ± 0.008 |
| | 5 | 0.151 ± 0.009 | 0.150 ± 0.009 | 0.150 ± 0.011 | 0.147 ± 0.011 | 0.150 ± 0.008 |
+ +The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). + +Table 24: Summary source of gain and variants on RNN model with MIMIC (Bold indicates best) + +
| | No Joint | No Staged | TEA | TEA(L) | TEA(LP) |
|---|---|---|---|---|---|
| PRC | 0.183 ± 0.038 | 0.167 ± 0.022 | 0.239 ± 0.039 | 0.249 ± 0.049 | 0.229 ± 0.039 |
| ROC | 0.730 ± 0.038 | 0.726 ± 0.023 | 0.763 ± 0.028 | 0.768 ± 0.031 | 0.759 ± 0.025 |
| MSE | 0.151 ± 0.011 | 0.150 ± 0.012 | 0.150 ± 0.012 | 0.149 ± 0.012 | 0.151 ± 0.011 |
+ +The "No Joint" setting isolates the benefit from staged training only (analogous to basic unsupervised pretraining, though using targets); the "No Staged" setting isolates the benefit from joint training only (without pretraining). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +# E.3 Results for Sensitivities + +Table 25: Extended $\nu$ -Sensitivities for REG on linear model with UKCF (Bold indicates best) + +
| | $\tau$ | ν=0 | ν=3e-5 | ν=3e-4 | ν=3e-3 | ν=3e-2 |
|---|---|---|---|---|---|---|
| PRC(I) | 1 | 0.340 ± 0.118 | 0.355 ± 0.114 | 0.370 ± 0.101 | 0.320 ± 0.034 | 0.163 ± 0.003 |
| | 2 | 0.325 ± 0.099 | 0.338 ± 0.096 | 0.350 ± 0.085 | 0.309 ± 0.026 | 0.176 ± 0.004 |
| | 3 | 0.314 ± 0.088 | 0.327 ± 0.086 | 0.337 ± 0.076 | 0.300 ± 0.024 | 0.182 ± 0.005 |
| | 4 | 0.307 ± 0.082 | 0.318 ± 0.080 | 0.328 ± 0.071 | 0.293 ± 0.023 | 0.184 ± 0.004 |
| PRC(C) | 1 | 0.445 ± 0.131 | 0.460 ± 0.125 | 0.467 ± 0.106 | 0.426 ± 0.034 | 0.240 ± 0.004 |
| | 2 | 0.418 ± 0.099 | 0.428 ± 0.095 | 0.435 ± 0.081 | 0.409 ± 0.025 | 0.260 ± 0.005 |
| | 3 | 0.403 ± 0.082 | 0.412 ± 0.078 | 0.418 ± 0.067 | 0.399 ± 0.021 | 0.272 ± 0.006 |
| | 4 | 0.398 ± 0.073 | 0.406 ± 0.070 | 0.412 ± 0.060 | 0.397 ± 0.019 | 0.281 ± 0.007 |
| ROC(I) | 1 | 0.713 ± 0.104 | 0.724 ± 0.100 | 0.737 ± 0.081 | 0.715 ± 0.022 | 0.527 ± 0.006 |
| | 2 | 0.688 ± 0.089 | 0.697 ± 0.086 | 0.709 ± 0.070 | 0.693 ± 0.022 | 0.532 ± 0.008 |
| | 3 | 0.677 ± 0.080 | 0.686 ± 0.078 | 0.697 ± 0.065 | 0.683 ± 0.021 | 0.543 ± 0.007 |
| | 4 | 0.677 ± 0.076 | 0.686 ± 0.074 | 0.696 ± 0.063 | 0.681 ± 0.019 | 0.555 ± 0.007 |
| ROC(C) | 1 | 0.713 ± 0.110 | 0.727 ± 0.105 | 0.741 ± 0.088 | 0.716 ± 0.025 | 0.512 ± 0.010 |
| | 2 | 0.686 ± 0.091 | 0.696 ± 0.086 | 0.707 ± 0.072 | 0.686 ± 0.022 | 0.523 ± 0.008 |
| | 3 | 0.668 ± 0.077 | 0.678 ± 0.074 | 0.686 ± 0.061 | 0.668 ± 0.021 | 0.530 ± 0.013 |
| | 4 | 0.651 ± 0.070 | 0.660 ± 0.066 | 0.667 ± 0.056 | 0.652 ± 0.018 | 0.527 ± 0.015 |
+ +The $\nu$ coefficient controls the strength of $\ell_2$ -regularization applied on top of the original loss function minimized. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). + +Table 26: Summary $\nu$ -Sensitivities for REG on linear model with UKCF (Bold indicates best) + +
| | ν=0 | ν=3e-5 | ν=3e-4 | ν=3e-3 | ν=3e-2 |
|---|---|---|---|---|---|
| PRC(I) | 0.322 ± 0.099 | 0.335 ± 0.096 | 0.347 ± 0.085 | 0.305 ± 0.029 | 0.176 ± 0.009 |
| PRC(C) | 0.416 ± 0.100 | 0.426 ± 0.097 | 0.433 ± 0.083 | 0.408 ± 0.028 | 0.263 ± 0.016 |
| ROC(I) | 0.689 ± 0.089 | 0.698 ± 0.087 | 0.710 ± 0.072 | 0.693 ± 0.025 | 0.540 ± 0.013 |
| ROC(C) | 0.679 ± 0.091 | 0.690 ± 0.087 | 0.700 ± 0.075 | 0.681 ± 0.032 | 0.523 ± 0.013 |
+ +The $\nu$ coefficient controls the strength of $\ell_2$ -regularization applied on top of the original loss function minimized. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping. + +Table 27: Extended $\nu$ -Sensitivities for TEA on linear model with UKCF (Bold indicates best) + +
| | $\tau$ | ν=0 | ν=3e-5 | ν=3e-4 | ν=3e-3 | ν=3e-2 |
|---|---|---|---|---|---|---|
| PRC(I) | 1 | 0.484 ± 0.020 | 0.489 ± 0.019 | 0.497 ± 0.016 | 0.442 ± 0.005 | 0.174 ± 0.004 |
| | 2 | 0.450 ± 0.016 | 0.453 ± 0.016 | 0.459 ± 0.017 | 0.414 ± 0.007 | 0.186 ± 0.005 |
| | 3 | 0.424 ± 0.014 | 0.426 ± 0.013 | 0.432 ± 0.015 | 0.394 ± 0.009 | 0.192 ± 0.005 |
| | 4 | 0.405 ± 0.012 | 0.407 ± 0.010 | 0.413 ± 0.010 | 0.381 ± 0.008 | 0.193 ± 0.004 |
| PRC(C) | 1 | 0.641 ± 0.021 | 0.644 ± 0.019 | 0.653 ± 0.009 | 0.612 ± 0.009 | 0.276 ± 0.007 |
| | 2 | 0.561 ± 0.013 | 0.562 ± 0.011 | 0.566 ± 0.010 | 0.544 ± 0.007 | 0.293 ± 0.008 |
| | 3 | 0.519 ± 0.008 | 0.519 ± 0.007 | 0.521 ± 0.008 | 0.508 ± 0.005 | 0.302 ± 0.008 |
| | 4 | 0.495 ± 0.008 | 0.496 ± 0.007 | 0.498 ± 0.009 | 0.489 ± 0.007 | 0.309 ± 0.009 |
| ROC(I) | 1 | 0.800 ± 0.015 | 0.803 ± 0.015 | 0.806 ± 0.007 | 0.779 ± 0.007 | 0.555 ± 0.005 |
| | 2 | 0.765 ± 0.009 | 0.767 ± 0.011 | 0.771 ± 0.007 | 0.751 ± 0.007 | 0.557 ± 0.007 |
| | 3 | 0.746 ± 0.008 | 0.747 ± 0.008 | 0.750 ± 0.007 | 0.735 ± 0.007 | 0.565 ± 0.006 |
| | 4 | 0.736 ± 0.006 | 0.737 ± 0.007 | 0.740 ± 0.008 | 0.727 ± 0.005 | 0.575 ± 0.006 |
| ROC(C) | 1 | 0.825 ± 0.011 | 0.826 ± 0.010 | 0.829 ± 0.006 | 0.819 ± 0.006 | 0.560 ± 0.011 |
| | 2 | 0.772 ± 0.010 | 0.774 ± 0.009 | 0.775 ± 0.008 | 0.771 ± 0.006 | 0.564 ± 0.009 |
| | 3 | 0.742 ± 0.004 | 0.744 ± 0.004 | 0.743 ± 0.005 | 0.741 ± 0.004 | 0.566 ± 0.012 |
| | 4 | 0.718 ± 0.006 | 0.720 ± 0.006 | 0.720 ± 0.006 | 0.717 ± 0.008 | 0.561 ± 0.015 |
+ +The $\nu$ coefficient controls the strength of $\ell_2$ -regularization applied on top of the original loss function minimized. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). + +Table 28: Summary $\nu$ -Sensitivities for TEA on linear model with UKCF (Bold indicates best) + +
| | ν=0 | ν=3e-5 | ν=3e-4 | ν=3e-3 | ν=3e-2 |
|---|---|---|---|---|---|
| PRC(I) | 0.441 ± 0.033 | 0.444 ± 0.034 | 0.450 ± 0.035 | 0.408 ± 0.024 | 0.186 ± 0.009 |
| PRC(C) | 0.554 ± 0.057 | 0.555 ± 0.058 | 0.559 ± 0.060 | 0.538 ± 0.047 | 0.295 ± 0.015 |
| ROC(I) | 0.762 ± 0.026 | 0.763 ± 0.028 | 0.767 ± 0.026 | 0.748 ± 0.021 | 0.563 ± 0.010 |
| ROC(C) | 0.764 ± 0.041 | 0.766 ± 0.041 | 0.767 ± 0.042 | 0.762 ± 0.039 | 0.563 ± 0.012 |
The $\nu$ coefficient controls the strength of $\ell_2$-regularization applied on top of the original loss function minimized. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping.

Table 29: Extended $\lambda$-Sensitivities for TEA on linear model with UKCF (Bold indicates best)

| | τ | λ = 0 | λ = 0.01 | λ = 0.1 | λ = 0.5 | λ = 0.9 | λ = 0.99 | λ = 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PRC(I) | 1 | 0.370 ± 0.101 | 0.383 ± 0.106 | 0.412 ± 0.091 | 0.472 ± 0.016 | 0.461 ± 0.012 | 0.327 ± 0.011 | 0.150 ± 0.008 |
| | 2 | 0.350 ± 0.085 | 0.361 ± 0.090 | 0.386 ± 0.076 | 0.439 ± 0.016 | 0.431 ± 0.012 | 0.323 ± 0.011 | 0.162 ± 0.008 |
| | 3 | 0.337 ± 0.076 | 0.347 ± 0.081 | 0.368 ± 0.068 | 0.415 ± 0.015 | 0.410 ± 0.009 | 0.316 ± 0.012 | 0.168 ± 0.007 |
| | 4 | 0.328 ± 0.071 | 0.337 ± 0.075 | 0.357 ± 0.064 | 0.399 ± 0.011 | 0.395 ± 0.008 | 0.307 ± 0.011 | 0.170 ± 0.007 |
| PRC(C) | 1 | 0.467 ± 0.106 | 0.481 ± 0.110 | 0.528 ± 0.104 | 0.620 ± 0.030 | 0.620 ± 0.016 | 0.433 ± 0.012 | 0.236 ± 0.012 |
| | 2 | 0.435 ± 0.081 | 0.445 ± 0.084 | 0.481 ± 0.079 | 0.550 ± 0.022 | 0.553 ± 0.013 | 0.427 ± 0.010 | 0.249 ± 0.010 |
| | 3 | 0.418 ± 0.067 | 0.427 ± 0.070 | 0.456 ± 0.064 | 0.512 ± 0.017 | 0.516 ± 0.009 | 0.421 ± 0.009 | 0.259 ± 0.011 |
| | 4 | 0.412 ± 0.060 | 0.420 ± 0.062 | 0.445 ± 0.057 | 0.491 ± 0.015 | 0.494 ± 0.009 | 0.415 ± 0.009 | 0.266 ± 0.011 |
| ROC(I) | 1 | 0.737 ± 0.081 | 0.742 ± 0.089 | 0.764 ± 0.067 | 0.799 ± 0.007 | 0.791 ± 0.004 | 0.708 ± 0.008 | 0.499 ± 0.013 |
| | 2 | 0.709 ± 0.070 | 0.713 ± 0.077 | 0.733 ± 0.058 | 0.765 ± 0.009 | 0.760 ± 0.008 | 0.694 ± 0.007 | 0.502 ± 0.014 |
| | 3 | 0.697 ± 0.065 | 0.701 ± 0.071 | 0.719 ± 0.054 | 0.750 ± 0.008 | 0.746 ± 0.008 | 0.690 ± 0.006 | 0.500 ± 0.014 |
| | 4 | 0.696 ± 0.063 | 0.699 ± 0.068 | 0.715 ± 0.053 | 0.744 ± 0.005 | 0.741 ± 0.006 | 0.690 ± 0.007 | 0.501 ± 0.015 |
| ROC(C) | 1 | 0.741 ± 0.088 | 0.747 ± 0.092 | 0.775 ± 0.075 | 0.821 ± 0.011 | 0.819 ± 0.006 | 0.725 ± 0.014 | 0.493 ± 0.034 |
| | 2 | 0.707 ± 0.072 | 0.712 ± 0.076 | 0.735 ± 0.061 | 0.774 ± 0.009 | 0.774 ± 0.007 | 0.704 ± 0.012 | 0.496 ± 0.027 |
| | 3 | 0.686 ± 0.061 | 0.691 ± 0.067 | 0.711 ± 0.052 | 0.746 ± 0.007 | 0.745 ± 0.003 | 0.690 ± 0.011 | 0.497 ± 0.028 |
| | 4 | 0.667 ± 0.056 | 0.672 ± 0.059 | 0.689 ± 0.048 | 0.722 ± 0.007 | 0.721 ± 0.005 | 0.675 ± 0.014 | 0.497 ± 0.025 |
The $\lambda$ coefficient controls the strength of prior—i.e. the tradeoff between the prediction and reconstruction loss. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C).

Table 30: Summary $\lambda$-Sensitivities for TEA on linear model with UKCF (Bold indicates best)

| | λ = 0 | λ = 0.01 | λ = 0.1 | λ = 0.5 | λ = 0.9 | λ = 0.99 | λ = 1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PRC(I) | 0.347 ± 0.085 | 0.357 ± 0.090 | 0.381 ± 0.078 | 0.431 ± 0.031 | 0.424 ± 0.027 | 0.318 ± 0.013 | 0.162 ± 0.011 |
| PRC(C) | 0.433 ± 0.083 | 0.443 ± 0.087 | 0.477 ± 0.084 | 0.543 ± 0.054 | 0.546 ± 0.050 | 0.424 ± 0.012 | 0.252 ± 0.016 |
| ROC(I) | 0.710 ± 0.072 | 0.714 ± 0.078 | 0.733 ± 0.061 | 0.764 ± 0.022 | 0.759 ± 0.020 | 0.695 ± 0.010 | 0.501 ± 0.014 |
| ROC(C) | 0.700 ± 0.075 | 0.705 ± 0.080 | 0.727 ± 0.068 | 0.766 ± 0.038 | 0.765 ± 0.037 | 0.698 ± 0.022 | 0.496 ± 0.029 |
The $\lambda$ coefficient controls the strength of prior—i.e. the tradeoff between the prediction and reconstruction loss. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping.

Table 31: Extended $N$-Sensitivities for REG on linear model with UKCF (Bold indicates best)

| | τ | N × 1% | N × 5% | N × 20% | N × 50% | N × 100% |
| --- | --- | --- | --- | --- | --- | --- |
| PRC(I) | 1 | 0.160 ± 0.010 | 0.176 ± 0.024 | 0.199 ± 0.044 | 0.325 ± 0.114 | 0.370 ± 0.101 |
| | 2 | 0.172 ± 0.009 | 0.187 ± 0.021 | 0.207 ± 0.037 | 0.312 ± 0.095 | 0.350 ± 0.085 |
| | 3 | 0.180 ± 0.011 | 0.192 ± 0.020 | 0.209 ± 0.032 | 0.303 ± 0.085 | 0.337 ± 0.076 |
| | 4 | 0.182 ± 0.009 | 0.193 ± 0.020 | 0.208 ± 0.028 | 0.295 ± 0.078 | 0.328 ± 0.071 |
| PRC(C) | 1 | 0.246 ± 0.011 | 0.268 ± 0.034 | 0.293 ± 0.046 | 0.421 ± 0.119 | 0.467 ± 0.106 |
| | 2 | 0.263 ± 0.011 | 0.283 ± 0.027 | 0.304 ± 0.036 | 0.401 ± 0.091 | 0.435 ± 0.081 |
| | 3 | 0.275 ± 0.013 | 0.292 ± 0.028 | 0.313 ± 0.032 | 0.390 ± 0.076 | 0.418 ± 0.067 |
| | 4 | 0.286 ± 0.013 | 0.302 ± 0.025 | 0.319 ± 0.029 | 0.384 ± 0.066 | 0.412 ± 0.060 |
| ROC(I) | 1 | 0.512 ± 0.024 | 0.553 ± 0.046 | 0.598 ± 0.057 | 0.705 ± 0.090 | 0.737 ± 0.081 |
| | 2 | 0.516 ± 0.025 | 0.557 ± 0.040 | 0.594 ± 0.049 | 0.681 ± 0.077 | 0.709 ± 0.070 |
| | 3 | 0.521 ± 0.026 | 0.549 ± 0.037 | 0.590 ± 0.045 | 0.671 ± 0.071 | 0.697 ± 0.065 |
| | 4 | 0.519 ± 0.024 | 0.546 ± 0.037 | 0.590 ± 0.039 | 0.669 ± 0.068 | 0.696 ± 0.063 |
| ROC(C) | 1 | 0.507 ± 0.026 | 0.539 ± 0.049 | 0.591 ± 0.054 | 0.702 ± 0.101 | 0.741 ± 0.088 |
| | 2 | 0.520 ± 0.024 | 0.546 ± 0.040 | 0.587 ± 0.041 | 0.675 ± 0.082 | 0.707 ± 0.072 |
| | 3 | 0.526 ± 0.023 | 0.549 ± 0.038 | 0.586 ± 0.036 | 0.659 ± 0.070 | 0.686 ± 0.061 |
| | 4 | 0.528 ± 0.027 | 0.550 ± 0.039 | 0.581 ± 0.034 | 0.642 ± 0.062 | 0.667 ± 0.056 |
The proportion of data $N$ used is randomly restricted, showing performance under various levels of data scarcity. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C).

Table 32: Summary $N$-Sensitivities for REG on linear model with UKCF (Bold indicates best)

| | N × 1% | N × 5% | N × 20% | N × 50% | N × 100% |
| --- | --- | --- | --- | --- | --- |
| PRC(I) | 0.173 ± 0.013 | 0.187 ± 0.022 | 0.206 ± 0.036 | 0.309 ± 0.094 | 0.347 ± 0.085 |
| PRC(C) | 0.267 ± 0.019 | 0.286 ± 0.031 | 0.307 ± 0.038 | 0.399 ± 0.092 | 0.433 ± 0.083 |
| ROC(I) | 0.517 ± 0.025 | 0.551 ± 0.041 | 0.593 ± 0.048 | 0.682 ± 0.078 | 0.710 ± 0.072 |
| ROC(C) | 0.520 ± 0.026 | 0.546 ± 0.042 | 0.586 ± 0.042 | 0.669 ± 0.083 | 0.700 ± 0.075 |
The proportion of data $N$ used is randomly restricted, showing performance under various levels of data scarcity. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping.

Table 33: Extended $N$-Sensitivities for TEA on linear model with UKCF (Bold indicates best)

| | τ | N × 1% | N × 5% | N × 20% | N × 50% | N × 100% |
| --- | --- | --- | --- | --- | --- | --- |
| PRC(I) | 1 | 0.199 ± 0.016 | 0.342 ± 0.040 | 0.453 ± 0.020 | 0.486 ± 0.017 | 0.497 ± 0.016 |
| | 2 | 0.205 ± 0.013 | 0.336 ± 0.031 | 0.421 ± 0.017 | 0.448 ± 0.015 | 0.459 ± 0.017 |
| | 3 | 0.212 ± 0.018 | 0.322 ± 0.027 | 0.398 ± 0.015 | 0.420 ± 0.014 | 0.432 ± 0.015 |
| | 4 | 0.210 ± 0.016 | 0.308 ± 0.025 | 0.381 ± 0.011 | 0.402 ± 0.011 | 0.413 ± 0.010 |
| PRC(C) | 1 | 0.287 ± 0.021 | 0.470 ± 0.060 | 0.610 ± 0.017 | 0.642 ± 0.012 | 0.653 ± 0.009 |
| | 2 | 0.292 ± 0.018 | 0.445 ± 0.041 | 0.535 ± 0.014 | 0.557 ± 0.012 | 0.566 ± 0.010 |
| | 3 | 0.302 ± 0.014 | 0.424 ± 0.034 | 0.495 ± 0.011 | 0.514 ± 0.009 | 0.521 ± 0.008 |
| | 4 | 0.311 ± 0.017 | 0.417 ± 0.030 | 0.478 ± 0.014 | 0.493 ± 0.010 | 0.498 ± 0.009 |
| ROC(I) | 1 | 0.570 ± 0.022 | 0.698 ± 0.027 | 0.776 ± 0.008 | 0.797 ± 0.007 | 0.806 ± 0.007 |
| | 2 | 0.565 ± 0.027 | 0.684 ± 0.019 | 0.744 ± 0.010 | 0.763 ± 0.007 | 0.771 ± 0.007 |
| | 3 | 0.568 ± 0.027 | 0.663 ± 0.021 | 0.723 ± 0.011 | 0.740 ± 0.009 | 0.750 ± 0.007 |
| | 4 | 0.564 ± 0.026 | 0.652 ± 0.020 | 0.707 ± 0.010 | 0.729 ± 0.010 | 0.740 ± 0.008 |
| ROC(C) | 1 | 0.581 ± 0.017 | 0.715 ± 0.032 | 0.795 ± 0.009 | 0.822 ± 0.007 | 0.829 ± 0.006 |
| | 2 | 0.565 ± 0.015 | 0.684 ± 0.019 | 0.745 ± 0.006 | 0.768 ± 0.007 | 0.775 ± 0.008 |
| | 3 | 0.570 ± 0.015 | 0.668 ± 0.018 | 0.718 ± 0.007 | 0.737 ± 0.005 | 0.743 ± 0.005 |
| | 4 | 0.570 ± 0.020 | 0.653 ± 0.015 | 0.698 ± 0.008 | 0.715 ± 0.008 | 0.720 ± 0.006 |
The proportion of data $N$ used is randomly restricted, showing performance under various levels of data scarcity. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C).

Table 34: Summary $N$-Sensitivities for TEA on linear model with UKCF (Bold indicates best)

| | N × 1% | N × 5% | N × 20% | N × 50% | N × 100% |
| --- | --- | --- | --- | --- | --- |
| PRC(I) | 0.207 ± 0.017 | 0.327 ± 0.034 | 0.413 ± 0.031 | 0.439 ± 0.035 | 0.450 ± 0.035 |
| PRC(C) | 0.298 ± 0.020 | 0.439 ± 0.047 | 0.530 ± 0.053 | 0.551 ± 0.058 | 0.559 ± 0.060 |
| ROC(I) | 0.567 ± 0.026 | 0.674 ± 0.029 | 0.738 ± 0.027 | 0.757 ± 0.027 | 0.767 ± 0.026 |
| ROC(C) | 0.572 ± 0.018 | 0.680 ± 0.032 | 0.739 ± 0.038 | 0.760 ± 0.041 | 0.767 ± 0.042 |
The proportion of data $N$ used is randomly restricted, showing performance under various levels of data scarcity. PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping.

# E.4 Contrived Negative Example

Here we give a contrived example where the goals of prediction and reconstruction are directly at odds with each other. Specifically, we show a setup where the "and" part of "compact and predictable" representations is impossible—that is, a compact target-representation that is more reconstructive ends up being less predictable. Consider the following (true) data generating process: Latent variables $\mathbf{p}$, $\mathbf{u}$ each consist of 10 independent dimensions, where $p_i \sim \mathcal{N}(0,1)$ and $u_i \sim \mathcal{N}(0,1)$. Feature vectors $\mathbf{x}$ are also of length 10, and are linear in $\mathbf{p}$. Target vectors $\mathbf{y} = [\mathbf{y}_{\mathrm{P}}^{\top}, \mathbf{y}_{\mathrm{U}}^{\top}]^{\top}$ are of length 50, where $\mathbf{y}_{\mathrm{P}}$ is of length 10 and linear in $\mathbf{p}$, and $\mathbf{y}_{\mathrm{U}}$ is of length 40 and linear in $\mathbf{u}$. See Figure 6. Note that the input features are inadequate for predicting all targets; our choice of lettering denotes elements that are in principle predictable ("P"), and those that are in principle unpredictable ("U").

![](images/ca98daec87e47f59668a147912d73b7fe2ce437f2da378640f2a28836c30d267.jpg)
Figure 6: Data generating process for synthetic example. Latent vectors $\mathbf{p}$, $\mathbf{u}$ linearly generate feature vectors $\mathbf{x}$ and target vectors $\mathbf{y}$. In this situation, $\mathbf{y}_{\mathrm{P}}$ is in principle predictable from $\mathbf{x}$, while $\mathbf{y}_{\mathrm{U}}$ is impossible to predict.

Suppose this data generating process is unknown to us.
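The generating process just described is easy to simulate. The following minimal numpy sketch (the particular linear maps `A`, `B`, `C` are arbitrary fixed matrices introduced here only for illustration) confirms that ordinary least squares recovers $\mathbf{y}_{\mathrm{P}}$ essentially exactly, while doing no better than the mean on $\mathbf{y}_{\mathrm{U}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Latent variables: 10 i.i.d. standard-normal dimensions each (as in the text)
P = rng.standard_normal((n, 10))
U = rng.standard_normal((n, 10))
# Hypothetical fixed linear maps (any generic choice illustrates the point)
A = rng.standard_normal((10, 10))   # x   = p A
B = rng.standard_normal((10, 10))   # y_P = p B
C = rng.standard_normal((10, 40))   # y_U = u C
X = P @ A
Y = np.hstack([P @ B, U @ C])       # 50 targets: 10 predictable + 40 unpredictable

# Fit ordinary least squares on a train split, evaluate on held-out data
W, *_ = np.linalg.lstsq(X[:4000], Y[:4000], rcond=None)
resid = Y[4000:] - X[4000:] @ W
mse_P = np.mean(resid[:, :10] ** 2)   # predictable block: ~0
mse_U = np.mean(resid[:, 10:] ** 2)   # unpredictable block
var_U = np.var(Y[4000:, 10:])
print(mse_P)           # essentially zero: y_P is recoverable from x
print(mse_U / var_U)   # ~1: no better than predicting the mean
```

With a latent dimension of 10, an autoencoder over these 50 targets prefers to spend capacity on the 40-dimensional (but rank-10) $\mathbf{y}_{\mathrm{U}}$ block, which is exactly the block that cannot be predicted.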
First, consider what happens with direct prediction: A linear model would learn to predict $\mathbf{y}_{\mathrm{P}}$ well, while predictions of $\mathbf{y}_{\mathrm{U}}$ would be no better than random. So far, so good. Now given the feature and target dimensions, consider the (not unreasonable) choice of TEAs with a latent dimension of 10. This is an obvious problem: During reconstruction, we naturally get more bang for our buck by encoding more of the (highly compressible) $\mathbf{y}_{\mathrm{U}}$ instead of $\mathbf{y}_{\mathrm{P}}$; yet $\mathbf{y}_{\mathrm{U}}$ is entirely useless to encode, as it is not predictable from inputs anyway. Reconstructing well is therefore directly at odds with predicting well. This is certainly an extremely contrived scenario; nevertheless, without sufficient domain knowledge, it serves as a caveat that—as with feature-embedding paradigms—target-embedding is only as good as its assumptions.

![](images/0ecf0f5a6b4691e9a06450e5c25efd14978a4f59f64730f63ac8a60fcb3437fa.jpg)
Figure 7: Synthetic scenario where prediction is directly at odds with reconstruction. The prior (that we can leverage compact and predictable representations of targets) is hugely incorrect in this case; as a result, not only is target-autoencoding not beneficial—it is positively harmful. For TEAs, we observe that as the strength-of-prior coefficient $\lambda$ increases, the overall prediction error actually increases (while the reconstruction error decreases).

Table 35: Results for TEA and comparators on linear model for negative example (Bold indicates best)

| | Base | REG | FEA | TEA | F/TEA |
| --- | --- | --- | --- | --- | --- |
| MSE | 0.266 ± 0.045 | 0.266 ± 0.045 | 0.266 ± 0.045 | 0.285 ± 0.047 | 0.280 ± 0.045 |
| MSE(U) | 0.333 ± 0.056 | 0.333 ± 0.056 | 0.333 ± 0.056 | 0.334 ± 0.056 | 0.334 ± 0.056 |
| MSE(P) | 0.000 ± 0.000 | 0.000 ± 0.000 | 0.000 ± 0.000 | 0.087 ± 0.028 | 0.061 ± 0.022 |
MSE metrics are further reported separately for targets that are in principle predictable (P) and unpredictable (U).

Table 36: $\lambda$-Sensitivities for TEA on linear model for negative example (Bold indicates best)

| | λ = 0 | λ = 0.01 | λ = 0.1 | λ = 0.5 | λ = 0.9 |
| --- | --- | --- | --- | --- | --- |
| MSE | 0.266 ± 0.045 | 0.267 ± 0.045 | 0.271 ± 0.046 | 0.285 ± 0.047 | 0.292 ± 0.051 |
| MSE(U) | 0.333 ± 0.056 | 0.333 ± 0.056 | 0.334 ± 0.056 | 0.334 ± 0.056 | 0.334 ± 0.056 |
| MSE(P) | 0.000 ± 0.000 | 0.003 ± 0.005 | 0.022 ± 0.009 | 0.087 ± 0.028 | 0.125 ± 0.063 |
MSE metrics are further reported separately for targets that are in principle predictable (P) and unpredictable (U).

# E.5 Results from Open Discussion

One can also ask the (purely empirical) question of how much each model degrades on out-of-distribution data—without additional training to fine-tune the model to the new data. In this context, we actually have no reason to expect TEAs to degrade any more or less than comparators. For thoroughness, we show an additional experiment as an example for sensitivity analysis (using UKCF) as follows: Each model is trained (only) on male patients and tested (only) on female patients, and vice versa. The average results on held-out samples from in-distribution data and out-of-distribution data then allow us to compute the net degradation (i.e. negative difference), which is reported below. While TEAs individually perform better overall on both in-distribution and out-of-distribution samples, none of the differences in the amounts of degradation between models are statistically significant:

Table 37: Performance degradation for TEA and comparators on linear model with UKCF (Bold indicates best)

| | Base | REG | FEA | TEA | F/TEA |
| --- | --- | --- | --- | --- | --- |
| ROC(I) | 0.019 ± 0.015 | 0.020 ± 0.014 | 0.019 ± 0.015 | 0.020 ± 0.016 | 0.017 ± 0.014 |
| ROC(C) | 0.025 ± 0.015 | 0.029 ± 0.015 | 0.024 ± 0.014 | 0.013 ± 0.020 | 0.019 ± 0.018 |
| PRC(I) | 0.022 ± 0.020 | 0.018 ± 0.021 | 0.021 ± 0.022 | 0.033 ± 0.022 | 0.027 ± 0.022 |
| PRC(C) | 0.026 ± 0.021 | 0.029 ± 0.018 | 0.026 ± 0.019 | 0.018 ± 0.023 | 0.021 ± 0.019 |
PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The two-sample $t$-test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ($p$-value $< 0.05$) relative to the TEA result. Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping.

Finally, given the staged training in Algorithm 1, it should be clear that the order cannot be changed. Stage 2 requires the encoder to already be trained to provide the requisite embeddings, so it must be preceded by Stage 1. Therefore the only relevant possibilities are: (1) Stages 1-2 by themselves, without Stage 3; this is simply the "No Joint" setting. (2) Stage 3 by itself, without Stages 1-2; this is simply the "No Staged" setting. (3) None of the stages altogether; this is simply the "Neither" setting. (4) Stages 1, 2, and 3 in order; this is simply Algorithm 1 itself. The only remaining possibility is to have Stage 3 precede Stages 1-2. This makes little sense, since when the reconstruction loss is trained by itself it is likely to "undo" the result of joint training. For thoroughness, we run an additional sensitivity experiment (using UKCF) to confirm this. The following corresponds to the left half of Table 4, with the additional column on the right (and the other columns labeled to reflect the training stages). Verifying our intuitions, the setting "3-1-2" behaves almost identically to the setting "1-2":

Table 38: Performance by training stages for TEA on linear model with UKCF (Bold indicates best); column headers indicate the sequence of training stages executed (note that "1-2-3" simply corresponds to Algorithm 1)

| | None | 1-2 | 3 | 1-2-3 | 3-1-2 |
| --- | --- | --- | --- | --- | --- |
| ROC(I) | 0.710 ± 0.072 | 0.747 ± 0.022 | 0.764 ± 0.022 | 0.767 ± 0.026 | 0.749 ± 0.022 |
| ROC(C) | 0.700 ± 0.075 | 0.744 ± 0.038 | 0.766 ± 0.038 | 0.767 ± 0.042 | 0.747 ± 0.037 |
| PRC(I) | 0.347 ± 0.085 | 0.402 ± 0.026 | 0.431 ± 0.031 | 0.450 ± 0.035 | 0.404 ± 0.027 |
| PRC(C) | 0.433 ± 0.083 | 0.507 ± 0.040 | 0.543 ± 0.054 | 0.559 ± 0.060 | 0.512 ± 0.042 |
PRC and ROC evaluations are reported separately for targets representing infections (I) and comorbidities (C). The two-sample $t$-test for a difference in means is conducted on the results. An asterisk next to the comparator result is used to indicate a statistically significant difference in means ($p$-value $< 0.05$) relative to the TEA result. Results are grouped over the temporal axis; note that the variance between splits is an artifact of this grouping.

# TENSOR DECOMPOSITIONS FOR TEMPORAL KNOWLEDGE BASE COMPLETION

Timothee Lacroix $^{1,2}$, Guillaume Obozinski $^{3}$, Nicolas Usunier $^{1}$

$^{1}$ Facebook AI Research $^{2}$ ENPC* $^{3}$ Swiss Data Science Center, EPFL & ETH Zürich

timothee.lax@gmail.com guillaume.obozinski@epfl.ch usunier@fb.com

# ABSTRACT

Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.

# 1 INTRODUCTION

Link prediction in relational data has been the subject of interest, given the widespread availability of such data and the breadth of its use in bioinformatics (Zitnik et al., 2018), recommender systems (Koren et al., 2009) or Knowledge Base completion (Nickel et al., 2016a). Relational data is often temporal; for example, the action of buying an item or watching a movie is associated to a timestamp. Some medicines might not have the same adverse side effects depending on the subject's age. The task of temporal link prediction is to find missing links in graphs at precise points in time.
In this work, we study temporal link prediction through the lens of temporal knowledge base completion, which provides varied benchmarks both in terms of the underlying data they represent and in terms of scale. A knowledge base is a set of facts (subject, predicate, object) about the world that are known to be true. Link prediction in a knowledge base amounts to answering incomplete queries of the form (subject, predicate, ?) by providing an accurate ranking of potential objects. In temporal knowledge bases, these facts have some temporal metadata attached. For example, facts might only hold for a certain time interval, in which case they will be annotated as such. Other facts might be events that happened at a certain point in time. Temporal link prediction amounts to answering queries of the form (subject, predicate, ?, timestamp). For example, we expect the ranking of queries (USA, president, ?, timestamp) to vary with the timestamp.

As tensor factorization methods have proved successful for Knowledge Base Completion (Nickel et al., 2016a; Trouillon et al., 2016; Lacroix et al., 2018), we express our Temporal Knowledge Base Completion problem as an order 4 tensor completion problem. That is, timestamps are discretized and used to index a 4-th mode in the binary tensor holding (subject, predicate, object, timestamp) facts.

First, we introduce a ComplEx (Trouillon et al., 2016) decomposition of this order 4 tensor, and link it with previous work on temporal Knowledge Base completion. This decomposition yields embeddings for each timestamp. A natural prior is for these timestamp representations to evolve slowly over time. We are able to introduce this prior as a regularizer for which the optimum is a
+ +Experiments on available benchmarks show that our method outperforms the state of the art for similar number of parameters. We run additional experiments for larger, regularized models and obtain improvements of up to 0.07 absolute Mean Reciprocal Rank (MRR). + +Finally, we propose a dataset of $400k$ entities, based on Wikidata, with $7M$ train triples, of which $10\%$ contain temporal validity information. This dataset is larger than usual benchmarks in the Knowledge Base completion community and could help bridge the gap between the method designed and the envisaged web-scale applications. + +# 2 RELATED WORK + +Matrices and tensors are upper case letters. The $i$ -th row of $U$ is denoted by $u_{i}$ while it's $j-th$ column is denoted by $U_{\cdot,j}$ . The tensor product of two vectors is written $\otimes$ and the hadamard (elementwise) product $\odot$ . + +Static link prediction methods Standard tensor decomposition methods have lead to good results (Yang et al., 2014; Trouillon et al., 2016; Lacroix et al., 2018; Balažević et al., 2019) in Knowledge Base completion. The Canonical Polyadic (CP) Decomposition (Hitchcock, 1927) is the tensor equivalent to the low-rank decomposition of a matrix. A tensor $X$ of canonical rank $R$ can be written as: + +$$ +X = \sum_ {r = 1} ^ {R} U _ {:, r} \otimes V _ {:, r} \otimes W _ {:, r} = [ [ U, V, W ] ] \quad \Longleftrightarrow \quad \forall (i, j, k), X _ {i, j, k} = \sum_ {r = 1} ^ {R} u _ {i, r} v _ {j, r} w _ {k, r} = \langle u _ {i}, v _ {j}, w _ {k} \rangle +$$ + +Setting $U = W$ leads to the Distmult (Yang et al., 2014) model, which has been successful, despite only being able to represent symmetric score functions. In order to keep the parameter sharing scheme but go beyond symmetric relations, Trouillon et al. (2016) use complex parameters and set $W$ to the complex conjugate of $U$ , $\overline{U}$ . 
Regularizing this algorithm with the variational form of the tensor nuclear norm, as well as a slight transformation to the learning objective (also proposed in Kazemi & Poole (2018)), leads to state of the art results in Lacroix et al. (2018).

Other methods are not directly inspired from classical tensor decompositions. For example, TransE (Bordes et al., 2013) models the score as a distance of the translated subject to an object representation. This method has led to many variations (Ji et al., 2015; Nguyen et al., 2016; Wang et al., 2014), but is limited in the relation systems it can model (Kazemi & Poole, 2018) and does not lead to state of the art performance on current benchmarks. Finally, Schlichtkrull et al. (2018) propose to generate the entity embeddings of a CP-like tensor decomposition by running a forward pass of a Graph Neural Network over the training Knowledge Base. The experiments included in this work did not lead to better link prediction performance than the same decomposition (Distmult) directly optimized (Kadlec et al., 2017).

**Temporal link prediction methods** Sarkar & Moore (2006) describe a Bayesian model and learning method for representing temporal relations. The temporal smoothness prior used in this work is similar to the gradient penalty we describe in Section 3.3. However, learning one embedding matrix per timestamp is not applicable to the scales considered in this work. Bader et al. (2007) use a tensor decomposition called ASALSAN to express temporal relations. This decomposition is related to RESCAL (Nickel et al., 2011), which underperforms on recent benchmarks due to overfitting (Nickel et al., 2016b).

For temporal knowledge base completion, Goel et al. (2020) learn entity embeddings that change over time, by masking a fraction of the embedding weights with an activation function of learned frequencies. Based on the Tucker decomposition, ConT (Ma et al., 2018) learns one new core tensor for each timestamp.
Finally, viewing the time dimension as a sequence to be predicted, García-Durán et al. (2018) use recurrent neural nets to transform the embeddings of standard models such as TransE or Distmult to accommodate the temporal data.
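As a concrete illustration of the static ComplEx decomposition discussed above (complex parameters with $W = \overline{U}$), the following toy numpy sketch computes the score tensor; the sizes and random parameters are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_pred, rank = 5, 3, 4  # toy sizes, not from the paper
U = rng.standard_normal((n_ent, rank)) + 1j * rng.standard_normal((n_ent, rank))
V = rng.standard_normal((n_pred, rank)) + 1j * rng.standard_normal((n_pred, rank))

def complex_score(i, j, k):
    # X_{ijk} = Re(<u_i, v_j, conj(u_k)>): real part of the multi-linear dot product
    return float(np.real(np.sum(U[i] * V[j] * np.conj(U[k]))))

# The full order-3 tensor estimate, built with the same product
X = np.real(np.einsum('ir,jr,kr->ijk', U, V, np.conj(U)))
assert np.isclose(X[1, 2, 3], complex_score(1, 2, 3))
# Note: unlike Distmult (W = U, real-valued), complex_score(i, j, k) need not
# equal complex_score(k, j, i), so non-symmetric relations can be represented.
```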
| Model | Number of parameters |
| --- | --- |
| DE-SimplE | $2r((3\gamma + (1-\gamma))\vert E\vert + \vert P\vert)$ |
| TComplEx | $2r(\vert E\vert + \vert T\vert + 2\vert P\vert)$ |
| TNTComplEx | $2r(\vert E\vert + \vert T\vert + 4\vert P\vert)$ |
Table 1: Number of parameters for each model considered

This work follows Lacroix et al. (2018) by studying and extending a regularized CP decomposition of the training set seen as an order 4 tensor. We propose and study several regularizers suited to our decompositions.

# 3 MODEL

In this section, we are given facts (subject, predicate, object) annotated with timestamps. We discretize the timestamp range (e.g., by reducing timestamps to years) in order to obtain a training set of 4-tuples (subject, predicate, object, time) indexing an order 4 tensor. We will show in Section 5.1 how we reduce each dataset to this setting. Following Lacroix et al. (2018), we minimize, for each of the train tuples $(i,j,k,l)$, the instantaneous multiclass loss:

$$
\ell(\hat{X}; (i, j, k, l)) = -\hat{X}_{i, j, k, l} + \log\left(\sum_{k'} \exp\left(\hat{X}_{i, j, k', l}\right)\right). \tag{1}
$$

Note that this loss is only suited to queries of the type (subject, predicate, ?, time), which are the queries that were considered in related work. We consider another auxiliary loss in Section 6 which we will use on our Wikidata dataset. For a training set $S$ (augmented with reciprocal relations (Lacroix et al., 2018; Kazemi & Poole, 2018)), and parametric tensor estimate $\hat{X}(\theta)$, we minimize the following objective, with a weighted regularizer $\Omega$:

$$
\mathcal{L}(\hat{X}(\theta)) = \frac{1}{|S|} \sum_{(i, j, k, l) \in S} \left[ \ell(\hat{X}(\theta); (i, j, k, l)) + \lambda \Omega(\theta; (i, j, k, l)) \right].
$$

The ComplEx (Trouillon et al., 2016) decomposition can naturally be extended to this setting by adding a new factor $T$; we then have:

$$
\hat{X}(U, V, T) = \operatorname{Re}\left(\llbracket U, V, \overline{U}, T \rrbracket\right) \quad \Longleftrightarrow \quad \hat{X}(U, V, T)_{i, j, k, l} = \operatorname{Re}\left(\langle u_i, v_j, \overline{u_k}, t_l \rangle\right) \tag{2}
$$

We call this decomposition TComplEx. Intuitively, we added timestamp embeddings that modulate the multi-linear dot product. Notice that the timestamp can be used to equivalently modulate the objects, predicates or subjects to obtain time-dependent representations:

$$
\langle u_i, v_j, \overline{u_k}, t_l \rangle = \langle u_i \odot t_l, v_j, \overline{u_k} \rangle = \langle u_i, v_j \odot t_l, \overline{u_k} \rangle = \langle u_i, v_j, \overline{u_k} \odot t_l \rangle.
$$

Contrary to DE-SimplE (Goel et al., 2020), we do not learn temporal embeddings that scale with the number of entities (as frequencies and biases do), but rather embeddings that scale with the number of timestamps. The numbers of parameters for the two models are compared in Table 1.

# 3.1 NON-TEMPORAL PREDICATES

Some predicates might not be affected by timestamps. For example, Malia and Sasha will always be the daughters of Barack and Michelle Obama, whereas the "has occupation" predicate between two entities might very well change over time.
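The timestamp-modulation identity stated above is a direct consequence of the elementwise product being commutative; it can be checked numerically with a minimal sketch (arbitrary random complex vectors and a toy rank, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
r = 6  # toy rank
# Arbitrary complex embeddings for a subject, predicate, object, and timestamp
u_i, v_j, u_k, t_l = rng.standard_normal((4, r)) + 1j * rng.standard_normal((4, r))

def dot4(a, b, c, d):
    # Multi-linear dot product <a, b, c, d> = sum_r a_r b_r c_r d_r
    return np.sum(a * b * c * d)

s = dot4(u_i, v_j, np.conj(u_k), t_l)
ones = np.ones(r)
# The timestamp can equivalently modulate the subject, predicate, or object:
assert np.isclose(s, dot4(u_i * t_l, v_j, np.conj(u_k), ones))
assert np.isclose(s, dot4(u_i, v_j * t_l, np.conj(u_k), ones))
assert np.isclose(s, dot4(u_i, v_j, np.conj(u_k) * t_l, ones))
```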
In heterogeneous knowledge bases, where some predicates might be temporal and some might not be, we propose to decompose the tensor $\hat{X}$ as the sum of two tensors, one temporal, and the other non-temporal:

$$
\hat{X} = \operatorname{Re}\left(\llbracket U, V^t, \overline{U}, T \rrbracket + \llbracket U, V, \overline{U}, \mathbf{1} \rrbracket\right) \iff \hat{X}_{i, j, k, l} = \operatorname{Re}\left(\langle u_i, v_j^t \odot t_l + v_j, \overline{u_k} \rangle\right) \tag{3}
$$

We call this decomposition TNTComplEx. Goel et al. (2020) suggest another way of introducing a non-temporal component, by only allowing a fraction $\gamma$ of the components of the embeddings to be modulated in time. By allowing this sharing of parameters between the temporal and non-temporal parts of the tensor, our model removes one hyperparameter. Moreover, preliminary experiments showed that this model outperforms one without parameter sharing.

# 3.2 REGULARIZATION

Any order 4 tensor can be considered as an order 3 tensor by unfolding two modes together. For a tensor $X \in \mathbb{R}^{N_1 \times N_2 \times N_3 \times N_4}$, unfolding modes 3 and 4 together leads to a tensor $\tilde{X} \in \mathbb{R}^{N_1 \times N_2 \times N_3 N_4}$ (Kolda & Bader, 2009).

We can see both decompositions ((2) and (3)) as order 3 tensors by unfolding the temporal and predicate modes together.
Considering the decomposition implied by these unfoldings (see Appendix 8.1) leads us to the following weighted regularizers (Lacroix et al., 2018):

$$
\Omega^3(U, V, T; (i, j, k, l)) = \frac{1}{3}\left(\|u_i\|_3^3 + \|u_k\|_3^3 + \|v_j \odot t_l\|_3^3\right) \tag{4}
$$

$$
\Omega^3(U, V^t, V, T; (i, j, k, l)) = \frac{1}{3}\left(2\|u_i\|_3^3 + 2\|u_k\|_3^3 + \|v_j^t \odot t_l\|_3^3 + \|v_j\|_3^3\right)
$$

The first regularizer weights objects, predicates and pairs (predicate, timestamp) according to their respective marginal probabilities. This regularizer is a variational form of the weighted nuclear 3-norm on an order 4 tensor (see subsection 3.4 and Appendix 8.3 for details and proof). The second regularizer is the sum of the nuclear 3 penalties on the tensors $\llbracket U, V^t, \overline{U}, T \rrbracket$ and $\llbracket U, V, \overline{U} \rrbracket$.

# 3.3 SMOOTHNESS OF TEMPORAL EMBEDDINGS

We have more a priori structure on the temporal mode than on the others. Notably, we expect smoothness of the map $i \mapsto t_i$; in words, we expect neighboring timestamps to have close representations. Thus, we penalize the norm of the discrete derivative of the temporal embeddings:

$$
\Lambda_p(T) = \frac{1}{|T| - 1} \sum_{i=1}^{|T|-1} \|t_{i+1} - t_i\|_p^p. \tag{5}
$$

We show in Appendix 8.2 that the sum of $\Lambda_p$ and the variational form of the nuclear $p$-norm (6) leads to a variational form of a new tensor atomic norm.

# 3.4 NUCLEAR $p$-NORMS OF TENSORS AND THEIR VARIATIONAL FORMS

As was done in Lacroix et al. (2018), we aim to use tensor nuclear $p$-norms as regularizers.
The definition of the nuclear $p$-norm of a tensor of order $D$ (Friedland & Lim, 2018) is:

$$
\|X\|_{p*} = \inf_{\alpha, R, U^{(1)}, \dots, U^{(D)}} \left\{ \|\alpha\|_{1} \;\middle|\; X = \sum_{r=1}^{R} \alpha_{r}\, U_{:,r}^{(1)} \otimes \dots \otimes U_{:,r}^{(D)},\; \forall r, d:\ \|U_{:,r}^{(d)}\|_{p} = 1 \right\}.
$$

This formulation of the nuclear $p$-norm writes a tensor as a sum over atoms, which are the rank-1 tensors with unit $p$-norm factors. The nuclear $p$-norm is NP-hard to compute (Friedland & Lim, 2018). Following Lacroix et al. (2018), a practical solution is to use an equivalent formulation of the nuclear $p$-norm via its variational form, which can be conveniently written for $p = D$:

$$
\|X\|_{D*} = \frac{1}{D} \inf_{X = \llbracket U^{(1)}, \dots, U^{(D)} \rrbracket} \sum_{d=1}^{D} \sum_{r=1}^{R} \|U_{:,r}^{(d)}\|_{D}^{D}. \tag{6}
$$

For the equality above to hold, the infimum should be over all possible $R$. The practical solution is to fix $R$ to the desired rank of the decomposition. Using this variational formulation as a regularizer leads to state-of-the-art results for order-3 tensors (Lacroix et al., 2018) and is convenient in a stochastic gradient setting, because it separates over each model coefficient.

In addition, this formulation makes it easy to introduce a weighting, as recommended in Srebro & Salakhutdinov (2010); Foygel et al. (2011). In order to learn under non-uniform sampling distributions, one should penalize the weighted norm $\|(\sqrt{M^{(1)}} \otimes \sqrt{M^{(2)}}) \odot X\|_{2*}$, where $M^{(1)}$ and $M^{(2)}$ are the empirical row and column marginals of the distribution. The variational form (6) makes this easy, by simply penalizing the rows $U_{i_1}^{(1)}, \ldots, U_{i_D}^{(D)}$ for each observed triple $(i_1, \dots, i_D)$ in stochastic gradient descent.
More precisely, for $D = 2$ and $N^{(d)}$ the vector holding the observed count of each index over mode $d$:

$$
\frac{1}{|S|} \sum_{(i,j) \in S} \|u_{i}\|_{2}^{2} + \|v_{j}\|_{2}^{2} = \sum_{i} \frac{N_{i}^{(1)}}{|S|} \|u_{i}\|_{2}^{2} + \sum_{j} \frac{N_{j}^{(2)}}{|S|} \|v_{j}\|_{2}^{2} = \sum_{i} M_{i}^{(1)} \|u_{i}\|_{2}^{2} + \sum_{j} M_{j}^{(2)} \|v_{j}\|_{2}^{2}.
$$

In subsection 3.3, we add another penalty in Equation (5), which changes the norm of our atoms. In subsection 3.2, we introduced another variational form in Equation (4), which makes it easy to penalize the nuclear 3-norm of an order-4 tensor. This regularizer leads to a different weighting: by considering the unfolding of the timestamp and predicate modes, we weight according to the joint marginal of timestamps and predicates, rather than by the product of their marginals. This can be an important distinction if the two are not independent.

# 3.5 EXPERIMENTAL IMPACT OF THE REGULARIZERS

We study the impact of regularization on the ICEWS05-15 dataset for the TNTComplEx model. For details on the experimental set-up, see Section 5.1. The first effect we want to quantify is the effect of the regularizer $\Lambda_p$. We run a grid search over the strengths of both $\Lambda_p$ and $\Omega^3$ and plot the convex hull as a function of the temporal regularization strength. As shown in Figure 1, imposing smoothness along the time mode brings an improvement of over 2 MRR points.

The second effect we wish to quantify is the effect of the choice of regularizer $\Omega$. A natural regularizer for TNTComplEx would be:

$$
\Delta^{p}(U, V, T; (i,j,k,l)) = \frac{1}{p}\left(2\|u_{i}\|_{p}^{p} + 2\|u_{k}\|_{p}^{p} + \|v_{j}^{t}\|_{p}^{p} + \|t_{l}\|_{p}^{p} + \|v_{j}\|_{p}^{p}\right).
$$

We compare $\Delta^4$, $\Delta^3$ and $\Delta^2$ with $\Omega^3$.
The comparison is done with a temporal regularizer weight of 0 to reduce the experimental space.

$\Delta^2$ is the weight decay commonly used in deep learning. Such regularizers have been used in knowledge base completion (Nickel et al., 2011; 2016b; Trouillon et al., 2016); however, Lacroix et al. (2018) showed that the infimum of this penalty is non-convex over tensors.

$\Delta^3$ matches the order used in the $\Omega^3$ regularizer, and in previous work on knowledge base completion (Lacroix et al., 2018). However, by the same arguments, its minimization does not lead to a convex penalty over tensors.

$\Delta^4$ is the sum of the variational forms of the nuclear 4-norm for the two order-4 tensors in the TNTComplEx model, according to Equation (6).

Detailed results of the impact of regularization on the performance of the model are given in Figure 1. The two regularizers $\Delta^4$ and $\Omega^3$ are the only ones that can be interpreted as sums of tensor-norm variational forms, and they perform better than their lower-order counterparts.

There are two differences between $\Delta^4$ and $\Omega^3$. First, whereas the former is a variational form of the nuclear 4-norm, the latter is a variational form of the nuclear 3-norm, which is closer to the nuclear 2-norm. Results for exact recovery of tensors have been generalized to the nuclear 2-norm, but to the best of our knowledge there has been no formal study of generalization properties or exact recovery under the nuclear $p$-norm for $p$ greater than two.

Second, the weighting in $\Delta^4$ is done separately over timestamps and predicates, whereas it is done jointly for $\Omega^3$. This leads to using the joint empirical marginal as a weighting over timestamps and predicates. The impact of weighting on the guarantees that can be obtained is described more precisely in Foygel et al. (2011).
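To make the pieces above concrete, here is a minimal numpy sketch (illustrative sizes and variable names, not the paper's implementation) of the TNTComplEx score of Equation (3), the weighted regularizer $\Omega^3$ of Equation (4), and the smoothness penalty $\Lambda_p$ of Equation (5):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_pred, n_ts, rank = 5, 4, 6, 8  # illustrative sizes

# Complex factors: U for entities, V / Vt for the non-temporal and temporal
# parts of predicates, T for timestamps.
cplx = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
U, V, Vt, T = cplx(n_ent, rank), cplx(n_pred, rank), cplx(n_pred, rank), cplx(n_ts, rank)

def score(i, j, k, l):
    """TNTComplEx score Re(<u_i, v_j^t ⊙ t_l + v_j, conj(u_k)>), Equation (3)."""
    return float(np.real(np.sum(U[i] * (Vt[j] * T[l] + V[j]) * np.conj(U[k]))))

def omega3(i, j, k, l):
    """Weighted nuclear 3-norm regularizer of Equation (4) for one observed tuple."""
    p3 = lambda x: np.sum(np.abs(x) ** 3)
    return (2 * p3(U[i]) + 2 * p3(U[k]) + p3(Vt[j] * T[l]) + p3(V[j])) / 3.0

def lambda_p(p=4):
    """Temporal smoothness penalty of Equation (5) on the timestamp embeddings."""
    diffs = T[1:] - T[:-1]
    return np.sum(np.abs(diffs) ** p) / (n_ts - 1)

# By linearity, the score splits into a temporal and a non-temporal part.
s_t = float(np.real(np.sum(U[0] * (Vt[1] * T[2]) * np.conj(U[3]))))
s_nt = float(np.real(np.sum(U[0] * V[1] * np.conj(U[3]))))
assert np.isclose(score(0, 1, 3, 2), s_t + s_nt)
```

In a training loop, `omega3` would be evaluated on each sampled tuple (which yields the marginal weighting discussed above), while `lambda_p` is evaluated once per batch.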
The contribution of all these regularizers over a non-regularized model is summarized in Table 3. Note that careful regularization leads to a 0.05 MRR increase.

![](images/5d7ec61589b08b12992d5b6b344d90700be4d7e62c81a86b11d767d6f4d2a44e.jpg)
Figure 1: Impact of the temporal (left) and embedding (right) regularizers on a TNTComplEx model trained on ICEWS05-15.

![](images/ededeb2f79fbdf285f8187548ac9d7618448a6ac9004c67be850fea57088b2c4.jpg)

# 4 A NEW DATASET FOR TEMPORAL AND NON-TEMPORAL KNOWLEDGE BASE COMPLETION

A dataset based on Wikidata was proposed by García-Durán et al. (2018). However, upon inspection, this dataset contains numerical data as entities, such as ELO rankings of chess players, which are not representative of practically useful link prediction problems. Also, in this dataset, temporal information is specified in the form of "OccursSince" and "OccursUntil" statements appended to triples, which becomes unwieldy when a predicate holds for several intervals in time. Moreover, this dataset contains only $11k$ entities and $150k$ facts, which is insufficient to benchmark methods at scale.

The GDelt dataset described in Ma et al. (2018); Goel et al. (2020) holds many triples ($2M$), but does not describe enough entities (500). In order to address these limitations, we created our own dataset from Wikidata, which we make available along with the code for this paper at https://github.com/facebookresearch/tkbc.

Starting from Wikidata, we removed all entities that were instances of scholarly articles, proteins and the like. We also removed disambiguation, template, category and project pages from Wikipedia. Then, we removed all facts for which the object was not an entity. We iteratively filtered the data to keep entities of degree at least 5 and predicates with at least 50 occurrences. With this method, we obtained a dataset of 432715 entities, 407 predicates and 1724 timestamps (we only kept the years).
Each datum is a triple (subject, predicate, object) together with a timestamp range (begin, end), where begin, end or both can be unspecified. Our train set contains $7M$ such tuples, with about $10\%$ partially specified temporal tuples. We kept a validation and a test set of size $50k$ each.

At train and test time, for a given datum (subject, predicate, object, [begin, end]), we sample a timestamp (appearing in the dataset) uniformly at random in the range [begin, end]. For data without a temporal range, we sample over the maximum date range. Then, we rank the objects for the partial query (subject, predicate, ?, timestamp).

# 5 EXPERIMENTAL RESULTS

# 5.1 EXPERIMENTAL SET-UP

We follow the experimental set-up of García-Durán et al. (2018); Goel et al. (2020). We use the models from García-Durán et al. (2018) and Goel et al. (2020) as baselines, since they are the best performing algorithms on the datasets considered. We report the filtered Mean Reciprocal Rank (MRR) defined in Nickel et al. (2016b). In order to obtain comparable results, we use Table 1 and dataset statistics to compute, for each (model, dataset) pair, the rank that matches the number of parameters used in Goel et al. (2020). We also report results at ranks 10 times higher. This higher-rank set-up gives an estimation of the best possible performance attainable on these datasets, even though the dimensions used might be impractical for applied systems. All our models are optimized with Adagrad (Duchi et al., 2011), with a learning rate of 0.1 and a batch size of 1000. More details on the grid search, actual ranks used and hyper-parameters are given in Appendix 8.7.
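The interval-based timestamp sampling described above can be sketched as follows (Python; the year bounds for the maximum date range are illustrative assumptions):

```python
import random

# Illustrative bounds for the dataset's maximum date range (assumed values).
MIN_YEAR, MAX_YEAR = 1000, 2020

def sample_timestamp(begin, end, rng=random):
    """Sample a timestamp uniformly at random in [begin, end]; an unspecified
    bound (None) falls back to the maximum date range."""
    lo = MIN_YEAR if begin is None else begin
    hi = MAX_YEAR if end is None else end
    return rng.randint(lo, hi)  # randint is inclusive on both ends

# Fully specified interval.
assert 2005 <= sample_timestamp(2005, 2015) <= 2015
# Partially specified: only the beginning is known.
assert 2005 <= sample_timestamp(2005, None) <= MAX_YEAR
# Fully unspecified: sample over the maximum date range.
assert MIN_YEAR <= sample_timestamp(None, None) <= MAX_YEAR
```

The sampled timestamp then completes the partial query (subject, predicate, ?, timestamp) for ranking.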
| | ICEWS14 | ICEWS15-05 | Yago15k |
|---|---|---|---|
| TA | 0.48 | 0.47 | 0.32 |
| DE-SimplE | 0.53 | 0.51 | - |
| ComplEx | 0.47 (0.47) | 0.49 (0.49) | 0.35 (0.36) |
| TComplEx | 0.56 (0.61) | 0.58 (0.66) | 0.35 (0.36) |
| TNTComplEx | 0.56 (0.62) | 0.60 (0.67) | 0.35 (0.37) |
Table 2: Results for TA (García-Durán et al., 2018) and DE-SimplE (Goel et al., 2020) are the best numbers reported in the respective papers. Our models have as many parameters as DE-SimplE. Numbers in parentheses are for ranks multiplied by 10.
| Reg. | MRR |
|---|---|
| No regularizer | 0.62 |
| $\Delta^2$ | 0.63 |
| $\Delta^3$ | 0.63 |
| $\Delta^4$ | 0.64 |
| $\Omega^3$ | 0.65 |
| $\Omega^3 + \Lambda_4$ | 0.67 |

Table 3: Impact of regularizers on ICEWS05-15 for TNTComplEx.
We give results on three datasets previously used in the literature: ICEWS14, ICEWS15-05 and Yago15k. The ICEWS datasets are samples from the Integrated Conflict Early Warning System (ICEWS) (Boschee et al., 2015). García-Durán et al. (2018) introduced two subsamples of this data: ICEWS14, which contains all events occurring in 2014, and ICEWS05-15, which contains events occurring between 2005 and 2015. These datasets immediately fit in our framework, since the timestamps are already discretized.

The Yago15k dataset (García-Durán et al., 2018) is a modification of FB15k (Bordes et al., 2013) which adds "occursSince" and "occursUntil" timestamps to each triple. Following the evaluation setting of García-Durán et al. (2018), during evaluation the incomplete triples to complete are of the form (subject, predicate, ?, occursSince | occursUntil, timestamp) (with reciprocal predicates). Rather than deal with tensors of order 5, we choose to unfold the (occursSince, occursUntil) mode and the predicate mode together, multiplying its size by two.

Some relations in Wikidata are highly unbalanced (e.g. (?, InstanceOf, Human)). For such relations, a ranking evaluation would not make much sense. Instead, we only compute the Mean Reciprocal Rank for missing right-hand sides, since the data is such that highly unbalanced relations appear on the left-hand side. However, we follow the same training scheme as for all the other datasets, including reciprocal relations in the training set. The cross-entropy loss evaluated over $400k$ entities puts a restriction on the dimensionality of embeddings at about $d = 100$ for a batch size of 1000. We leave sampling of this loss, which would allow for higher dimensions, to future work.

# 5.2 RESULTS

We compare ComplEx with the temporal versions described in this paper. We report results in Table 2.
Note that the performance of ComplEx is stable through a tenfold increase of its number of parameters: a rank of 100 is enough to capture the static information of these datasets. For temporal models, however, performance increases substantially with the number of parameters. It is always beneficial to allow a separate modeling of non-temporal predicates, as the performance of TNTComplEx shows. Finally, our models match or beat the state of the art on all datasets, even at an identical number of parameters. Since these datasets are small, we also report results for higher ranks (10 times the number of parameters used for DE-SimplE).

On Wikidata, $90\%$ of the triples have no temporal data attached. This leads to ComplEx outperforming all temporal models in terms of average MRR, since the Non-Temporal MRR (NT-MRR) far outweighs the Temporal MRR (T-MRR).
| | MRR | NT-MRR | T-MRR |
|---|---|---|---|
| ComplEx | 0.45 | 0.48 | 0.29 |
| TComplEx | 0.42 | 0.45 | 0.30 |
| TNTComplEx | 0.44 | 0.47 | 0.32 |
Table 4: Results on Wikidata for entity dimension $d = 100$.

![](images/a99385d310d68177d19f1d33e2e59b2330d805929a418a1dea5be267e1565564.jpg)
Figure 2: Scores for the triples (President of the French Republic, office holder, {Jacques Chirac | Nicolas Sarkozy | François Hollande | Emmanuel Macron}, [1980, 2020])

A breakdown of the performances is available in Table 4. TNTComplEx obtains performances that are comparable to ComplEx on non-temporal triples, but better on temporal triples. Moreover, TNTComplEx can minimize the temporal cross-entropy (7) and is thus more flexible in the queries it can answer.

Training TNTComplEx on Wikidata with a rank of $d = 100$ and the full cross-entropy on a Quadro GP100, we obtain a speed of $5.6k$ triples per second, leading to an experiment time of 7.2 hours. This is to be compared with $5.8k$ triples per second when training ComplEx, for an experiment time of 6.9 hours. The additional complexity of our model does not have any real impact on runtime, which is dominated by the computation of the cross-entropy over $400k$ entities.

# 6 QUALITATIVE STUDY

The instantaneous loss described in Equation (1), along with the timestamp sampling scheme described in the previous section, only enforces correct rankings along the "object" tubes of our order-4 tensor. In order to enforce a stronger temporal consistency, and to be able to answer queries of the type (subject, predicate, object, ?), we propose another cross-entropy loss along the temporal tubes:

$$
\tilde{\ell}(\hat{X}; (i,j,k,l)) = -\hat{X}_{i,j,k,l} + \log\left(\sum_{l'} \exp\left(\hat{X}_{i,j,k,l'}\right)\right). \tag{7}
$$

We optimize the sum of $\ell$ defined in Equation (1) and $\tilde{\ell}$ defined in Equation (7). Doing so, we only lose 1 MRR point overall, but we make our model better at answering queries along the time axis.
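A minimal numpy sketch (illustrative, not the paper's implementation) of the temporal cross-entropy of Equation (7), applied to the vector of scores $\hat{X}_{i,j,k,:}$ along one temporal tube:

```python
import numpy as np

def temporal_cross_entropy(scores_over_time, l):
    """Equation (7): -X[i,j,k,l] + log(sum_l' exp(X[i,j,k,l'])),
    where scores_over_time holds X[i,j,k,:] for a fixed (i, j, k)."""
    x = np.asarray(scores_over_time, dtype=float)
    m = x.max()  # stabilized log-sum-exp
    return -x[l] + m + np.log(np.sum(np.exp(x - m)))

scores = np.array([1.0, 3.0, -2.0, 0.5])
loss = temporal_cross_entropy(scores, 1)

# The loss equals the negative log-softmax probability of timestamp l.
probs = np.exp(scores - scores.max()) / np.sum(np.exp(scores - scores.max()))
assert np.isclose(loss, -np.log(probs[1]))
```

As expected for a cross-entropy, the loss is smaller when the observed timestamp has a higher score than the alternatives.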
The macro area under the precision-recall curve is 0.92 for a TNTComplEx model learned with $\ell$ alone, and 0.98 for a TNTComplEx model trained with $\ell + \tilde{\ell}$.

We plot in Figure 2 the scores along time for the train triples (President of the French Republic, office holder, {Jacques Chirac | Nicolas Sarkozy | François Hollande | Emmanuel Macron}, [1980, 2020]). The periods where a score is highest closely match the ground-truth start and end dates of these presidents' mandates, which are represented as a colored background. This shows that our models are able to learn rankings that are correct along time intervals, despite our training method only ever sampling timestamps within these intervals.

# 7 CONCLUSION

Tensor methods have been successful for knowledge base completion. In this work, we suggest an extension of these methods to temporal knowledge bases. Our methodology adapts well to the various forms of these datasets: point-in-time, beginnings and endings, or intervals. We show that our methods reach higher performances than the state of the art for a similar number of parameters. For several datasets, we also provide performances at higher dimensions. We hope that the gap between low-dimensional and high-dimensional models can motivate further research into models that have increased expressivity at a lower number of parameters per entity. Finally, we propose a large-scale temporal dataset which we believe represents the challenges of large-scale temporal completion in knowledge bases. We give performances of our methods at low ranks on this dataset. We believe that, given its scale, this dataset could also be an interesting addition to non-temporal knowledge base completion.

# REFERENCES

Brett W Bader, Richard A Harshman, and Tamara G Kolda. Temporal analysis of semantic graphs using ASALSAN. In Seventh IEEE International Conference on Data Mining (ICDM 2007), pp. 33-42. IEEE, 2007.
Ivana Balažević, Carl Allen, and Timothy M Hospedales.
TuckER: Tensor factorization for knowledge graph completion. arXiv preprint arXiv:1901.09590, 2019.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 2787-2795. Curran Associates, Inc., 2013.
Elizabeth Boschee, Jennifer Lautenschlager, Sean O'Brien, Steve Shellman, James Starz, and Michael Ward. ICEWS coded event data. Harvard Dataverse, 12, 2015.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
Rina Foygel, Ohad Shamir, Nati Srebro, and Ruslan R Salakhutdinov. Learning with the weighted trace-norm under arbitrary sampling distributions. In Advances in Neural Information Processing Systems, pp. 2133-2141, 2011.
Shmuel Friedland and Lek-Heng Lim. Nuclear norm of higher-order tensors. Mathematics of Computation, 87(311):1255-1281, 2018.
Alberto García-Durán, Sebastijan Dumančić, and Mathias Niepert. Learning sequence encoders for temporal knowledge graph completion. arXiv preprint arXiv:1809.03202, 2018.
Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. Diachronic embedding for temporal knowledge graph completion. In AAAI, 2020.
Frank L. Hitchcock. The expression of a tensor or a polyadic as a sum of products. Studies in Applied Mathematics, 6(1-4):164-189, 1927.
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 687-696, 2015.
Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst.
Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 69-74, 2017.
Seyed Mehran Kazemi and David Poole. SimplE embedding for link prediction in knowledge graphs. In Advances in Neural Information Processing Systems 31, pp. 4289-4300, 2018.
Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455-500, 2009.
Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning (ICML-18), pp. 2863-2872, 2018.
Yunpu Ma, Volker Tresp, and Erik A Daxberger. Embedding models for episodic knowledge graphs. Journal of Web Semantics, pp. 100490, 2018.
Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. STransE: a novel embedding model of entities and relationships in knowledge bases. arXiv preprint arXiv:1606.08140, 2016.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 809-816, 2011.
Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33, 2016a.
Maximilian Nickel, Lorenzo Rosasco, Tomaso A Poggio, et al. Holographic embeddings of knowledge graphs. In AAAI, 2016b.
Purnamrita Sarkar and Andrew W Moore. Dynamic social network analysis using latent space models. In Advances in Neural Information Processing Systems, pp. 1145-1152, 2006.
Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pp.
593-607. Springer, 2018.
Age Smilde, Rasmus Bro, and Paul Geladi. Multi-way Analysis: Applications in the Chemical Sciences. John Wiley & Sons, 2005.
Nathan Srebro and Ruslan R Salakhutdinov. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems, pp. 2056-2064, 2010.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pp. 2071-2080, 2016.
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.
Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics, 34(13):i457-i466, 2018.

# 8 APPENDIX

# 8.1 UNFOLDING AND THE CP DECOMPOSITION

Let $X = \llbracket U, V, W, T \rrbracket$, that is, $X_{i,j,k,l} = \langle u_i, v_j, w_k, t_l \rangle$. Then, according to Kolda & Bader (2009), unfolding along modes 3 and 4 leads to an order-3 tensor with decomposition $\tilde{X} = \llbracket U, V, W \circ T \rrbracket$, where $\circ$ is the Khatri-Rao product (Smilde et al., 2005), i.e. the column-wise Kronecker product: $W \circ T = (W_{:,1} \otimes T_{:,1}, \dots, W_{:,R} \otimes T_{:,R})$.

Note that for a fourth mode of size $L$: $(W \circ T)_{L(k-1)+l} = w_k \odot t_l$. This justifies the regularizers used in Section 3.2.
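The unfolding identity above can be checked numerically. The small numpy check below (not from the paper) builds an order-4 CP tensor, unfolds modes 3 and 4 together, and verifies that the unfolding has the CP decomposition $\llbracket U, V, W \circ T \rrbracket$, along with the norm factorization $\|a \otimes b\|_3^3 = \|a\|_3^3 \|b\|_3^3$ used in Appendix 8.3:

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2, N3, N4, R = 3, 4, 2, 5, 6

U, V, W, T = (rng.standard_normal((n, R)) for n in (N1, N2, N3, N4))

# Order-4 CP tensor X[i,j,k,l] = <u_i, v_j, w_k, t_l>.
X = np.einsum('ir,jr,kr,lr->ijkl', U, V, W, T)

# Khatri-Rao (column-wise Kronecker) product of W and T: shape (N3*N4, R).
WoT = np.stack([np.kron(W[:, r], T[:, r]) for r in range(R)], axis=1)

# Unfolding modes 3 and 4 together gives the CP tensor [[U, V, W∘T]].
X_unfolded = X.reshape(N1, N2, N3 * N4)
X_cp3 = np.einsum('ir,jr,mr->ijm', U, V, WoT)
assert np.allclose(X_unfolded, X_cp3)

# Row of W∘T indexed by (k, l) is w_k ⊙ t_l (0-indexed: row k*N4 + l).
k, l = 1, 3
assert np.allclose(WoT[k * N4 + l], W[k] * T[l])

# Norm factorization used in Appendix 8.3: ||a ⊗ b||_3^3 = ||a||_3^3 ||b||_3^3.
a, b = rng.standard_normal(4), rng.standard_normal(3)
p3 = lambda x: np.sum(np.abs(x) ** 3)
assert np.isclose(p3(np.kron(a, b)), p3(a) * p3(b))
```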
# 8.2 TEMPORAL REGULARIZER AND NUCLEAR NORMS

Consider the penalty:

$$
\Omega(U, V, W, T) = \frac{1}{4}\left(\|U\|_{4}^{4} + \|V\|_{4}^{4} + \|W\|_{4}^{4} + \|T\|_{4}^{4} + \alpha \|T_{1:} - T_{:-1}\|_{4}^{4}\right)
$$

Let us define a new norm on vectors:

$$
\|t\|_{\tau 4} = \left(\|t\|_{4}^{4} + \alpha \|t_{1:} - t_{:-1}\|_{4}^{4}\right)^{1/4}
$$

$\|\cdot\|_{\tau 4}$ is a norm and lets us rewrite:

$$
\Omega(U, V, W, T) = \sum_{r=1}^{R} \frac{1}{4}\left(\|u_{r}\|_{4}^{4} + \|v_{r}\|_{4}^{4} + \|w_{r}\|_{4}^{4} + \|t_{r}\|_{\tau 4}^{4}\right).
$$

Following the proof in Lacroix et al. (2018), which only uses the homogeneity of the norms, we can show that $\Omega(U, V, W, T)$ is a variational form of an atomic norm with atoms:

$$
\mathcal{A} = \left\{ u \otimes v \otimes w \otimes t \mid \|u\|_{4}, \|v\|_{4}, \|w\|_{4} \leq 1 \text{ and } \|t\|_{\tau 4} \leq 1 \right\}
$$

# 8.3 NUCLEAR NORMS ON UNFOLDINGS

We consider the regularizer:

$$
\Omega^{N3}(U, V, T; (i,j,k,l)) = \frac{1}{3}\left(\|u_{i}\|_{3}^{3} + \|u_{k}\|_{3}^{3} + \|v_{j} \odot t_{l}\|_{3}^{3}\right).
$$

Let $D^{\mathrm{subj}}$ (resp. obj, pred/time) be the diagonal matrix containing the cubic roots of the marginal probabilities of each subject (resp. object, pred/time pair) in the dataset. We denote by $\circ$ the Khatri-Rao product between two matrices (the column-wise Kronecker product). Summing over the entire dataset, we obtain the penalty:

$$
\frac{1}{|S|} \sum_{(i,j,k,l) \in S} \Omega^{N3}(U, V, T; (i,j,k,l)) = \frac{1}{3}\left(\|D^{\mathrm{subj}} U\|_{3}^{3} + \|D^{\mathrm{obj}} U\|_{3}^{3} + \|D^{\mathrm{pred/time}} (V \circ T)\|_{3}^{3}\right).
$$

Dropping the weightings to simplify notation, we state the equivalence between this regularizer and a variational form of the nuclear 3-norm of an order-4 tensor:

$$
\inf_{\llbracket U_{1}, U_{2}, U_{3}, U_{4} \rrbracket = X} \frac{1}{3}\left(\sum_{r=1}^{R} \|u_{r}^{(1)}\|_{3}^{3} + \|u_{r}^{(2)}\|_{3}^{3} + \|u_{r}^{(3)} \otimes u_{r}^{(4)}\|_{3}^{3}\right) = \inf_{\llbracket U_{1}, U_{2}, U_{3}, U_{4} \rrbracket = X} \sum_{r=1}^{R} \prod_{d=1}^{4} \|u_{r}^{(d)}\|_{3}.
$$

The proof follows Lacroix et al. (2018), noting that $\|u_r^{(3)} \otimes u_r^{(4)}\|_3^3 = \|u_r^{(3)}\|_3^3 \|u_r^{(4)}\|_3^3$. Note that for $D^{\mathrm{pred/time}} = D^{\mathrm{pred}} D^{\mathrm{time}}$, there would also be equality of the weighted norms. However, in the application considered, time and predicate are most likely not independent, leading to different weightings of the norms.

# 8.4 DATASET STATISTICS

Statistics of all the datasets used in this work are gathered in Table 5.
| | ICEWS14 | ICEWS05-15 | Yago15k | Wikidata |
|---|---|---|---|---|
| Entities | 6869 | 10094 | 15403 | 432715 |
| Predicates | 460 | 502 | 102 | 814 |
| Timestamps | 365 | 4017 | 170 | 1726 |
| \|S\| | 72826 | 368962 | 110441 | 7224361 |

Table 5: Dataset statistics

# 8.5 DETAILED RESULTS
| | ICEWS14 MRR | H@1 | H@3 | H@10 | ICEWS15-05 MRR | H@1 | H@3 | H@10 | Yago15k MRR | H@1 | H@3 | H@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TA | 0.48 | 0.37 | - | 0.69 | 0.47 | 0.35 | - | 0.73 | 0.32 | 0.23 | - | 0.51 |
| DE-SimplE | 0.53 | 0.42 | 0.59 | 0.73 | 0.51 | 0.39 | 0.58 | 0.75 | - | - | - | - |
| ComplEx | 0.47 | 0.35 | 0.53 | 0.70 | 0.49 | 0.37 | 0.55 | 0.72 | 0.35 | 0.28 | 0.35 | 0.52 |
| TComplEx | 0.56 | 0.47 | 0.61 | 0.73 | 0.58 | 0.49 | 0.64 | 0.76 | 0.35 | 0.27 | 0.36 | 0.52 |
| TNTComplEx | 0.56 | 0.46 | 0.61 | 0.74 | 0.60 | 0.50 | 0.65 | 0.78 | 0.35 | 0.28 | 0.35 | 0.52 |
| ComplEx (x10) | 0.47 | 0.35 | 0.54 | 0.71 | 0.49 | 0.37 | 0.55 | 0.73 | 0.36 | 0.29 | 0.36 | 0.54 |
| TComplEx (x10) | 0.61 | 0.53 | 0.66 | 0.77 | 0.66 | 0.59 | 0.71 | 0.80 | 0.36 | 0.28 | 0.38 | 0.54 |
| TNTComplEx (x10) | 0.62 | 0.52 | 0.66 | 0.76 | 0.67 | 0.59 | 0.71 | 0.81 | 0.37 | 0.29 | 0.39 | 0.54 |

Table 6: Results for TA (García-Durán et al., 2018) and DE-SimplE (Goel et al., 2020) are the best numbers reported in the respective papers.
# 8.6 STANDARD DEVIATIONS

We give the standard deviations of the MRR, computed over 5 runs of TNTComplEx on all datasets:
| | ICEWS14 | ICEWS15-05 | Yago15k | Wikidata (T) | Wikidata (NT) |
|---|---|---|---|---|---|
| TNTComplEx | 0.0016 | 0.0011 | 0.00076 | 0.0035 | 0.0012 |
# 8.7 GRID SEARCH

For ICEWS14, ICEWS05-15 and Yago15k, we follow the grid search below :

Using Table 1 to compute the number of parameters, and the dataset statistics in Table 5, we use the following ranks to match the number of parameters of DE-SimplE in dimension 100:
| | ICEWS14 | ICEWS05-15 | Yago15k |
|---|---|---|---|
| DE-SimplE | 100 | 100 | 100 |
| ComplEx | 182 | 186 | 196 |
| TComplEx | 174 | 136 | 194 |
| TNTComplEx | 156 | 128 | 189 |
# THE ASYMPTOTIC SPECTRUM OF THE HESSIAN OF DNN THROUGHOUT TRAINING

Arthur Jacot, Franck Gabriel & Clément Hongler

Chair of Statistical Field Theory

Ecole Polytechnique Fédérale de Lausanne

{arthur.jacot,franck.gabriel,clement.hongler}@epfl.ch

# ABSTRACT

The dynamics of DNNs during gradient descent is described by the so-called Neural Tangent Kernel (NTK). In this article, we show that the NTK allows one to gain precise insight into the Hessian of the cost of DNNs. When the NTK is fixed during training, we obtain a full characterization of the asymptotics of the spectrum of the Hessian, at initialization and during training.
In the so-called mean-field limit, where the NTK is not fixed during training, we describe the first two moments of the Hessian at initialization.

# 1 INTRODUCTION

The advent of deep learning has sparked a lot of interest in the loss surface of deep neural networks (DNNs), and in particular its Hessian. However, to our knowledge, there is still no theoretical description of the spectrum of the Hessian. Nevertheless, a number of phenomena have been observed numerically.

The loss surface of neural networks has been compared to the energy landscape of different physical models (Choromanska et al., 2015; Geiger et al., 2018; Mei et al., 2018). It appears that the loss surface of DNNs may change significantly depending on the width of the network (the number of neurons in the hidden layers), motivating the distinction between the under- and over-parametrized regimes (Baity-Jesi et al., 2018; Geiger et al., 2018; 2019).

The non-convexity of the loss function implies the existence of a very large number of saddle points, which could slow down training. In particular, in (Pascanu et al., 2014; Dauphin et al., 2014), a relation between the rank of saddle points (the number of negative eigenvalues of the Hessian) and their loss has been observed.

For overparametrized DNNs, a possibly more important phenomenon is the large number of flat directions (Baity-Jesi et al., 2018). The existence of these flat minima is conjectured to be related to the generalization of DNNs and may depend on the training procedure (Hochreiter & Schmidhuber, 1997; Chaudhari et al., 2016; Wu et al., 2017).

In (Jacot et al., 2018) it has been shown, using a functional approach, that in the infinite-width limit, DNNs behave like kernel methods with respect to the so-called Neural Tangent Kernel, which is determined by the architecture of the network.
This leads to convergence guarantees for DNNs (Jacot et al., 2018; Du et al., 2019; Allen-Zhu et al., 2018; Huang & Yau, 2019) and strengthens the connections between neural networks and kernel methods (Neal, 1996; Cho & Saul, 2009; Lee et al., 2018). + +Our approach also allows one to probe the so-called mean-field/active limit (studied in (Rotskoff & Vanden-Eijnden, 2018; Chizat & Bach, 2018a; Mei et al., 2018) for shallow networks), where the NTK varies during training. + +This raises the question: can we use these new results to gain insight into the behavior of the Hessian of the loss of DNNs, at least in the small region explored by the parameters during training? + +# 1.1 CONTRIBUTIONS + +Following ideas introduced in (Jacot et al., 2018), we consider the training of $(L+1)$ -layered DNNs in a functional setting. For a functional cost $\mathcal{C}$ , the Hessian of the loss $\mathbb{R}^P \ni \theta \mapsto \mathcal{C}\left(F^{(L)}(\theta)\right)$ is the sum of two $P \times P$ matrices $I$ and $S$ . We show the following results for large $P$ and for a fixed number of datapoints $N$ : + +- The first matrix $I$ is positive semi-definite and its eigenvalues are given by the (weighted) kernel PCA of the dataset with respect to the NTK. The dominating eigenvalues are the principal components of the data followed by a high number of small eigenvalues. The "flat directions" are spanned by the small eigenvalues and the null-space (of dimension at least $P - N$ when there is a single output). Because the NTK is asymptotically constant (Jacot et al., 2018), these results apply at initialization, during training and at convergence. +- The second matrix $S$ can be viewed as a residual contribution to $H$ , since it vanishes as the network converges to a global minimum.
We compute the limit of the first moment $\operatorname{Tr}(S)$ and characterize its evolution during training, compute the limit of the second moment $\operatorname{Tr}(S^2)$ , whose limiting kernel stays constant during training, and show that the higher moments vanish. +- Regarding the sum $H = I + S$ , we show that the matrices $I$ and $S$ are asymptotically orthogonal to each other at initialization and during training. In particular, the moments of the matrices $I$ and $S$ add up: $\operatorname{Tr}(H^k) \approx \operatorname{Tr}(I^k) + \operatorname{Tr}(S^k)$ . + +These results give, for any depth and a fairly general non-linearity, a complete description of the spectrum of the Hessian in terms of the NTK at initialization and throughout training. Our theoretical results are consistent with a number of observations about the Hessian (Hochreiter & Schmidhuber, 1997; Pascanu et al., 2014; Dauphin et al., 2014; Chaudhari et al., 2016; Wu et al., 2017; Pennington & Bahri, 2017; Geiger et al., 2018), and shed new light on them. + +# 1.2 RELATED WORKS + +The Hessian of the loss has been studied through the decomposition $I + S$ in a number of previous works (Sagun et al., 2017; Pennington & Bahri, 2017; Geiger et al., 2018). + +For least-squares and cross-entropy costs, the first matrix $I$ is equal to the Fisher matrix (Wagenaar, 1998; Pascanu & Bengio, 2013), whose moments have been described for shallow networks in (Pennington & Worah, 2018). For deep networks, the first two moments and the operator norm of the Fisher matrix for a least squares loss were computed at initialization in (Karakida et al., 2018) conditionally on a certain independence assumption; our method does not require such assumptions. Note that their approach implicitly uses the NTK. + +The second matrix $S$ has been studied in (Pennington & Bahri, 2017; Geiger et al., 2018) for shallow networks, conditionally on a number of assumptions.
Note that in the setting of (Pennington & Bahri, 2017), the matrices $I$ and $S$ are assumed to be freely independent, which allows them to study the spectrum of the Hessian; in our setting, we show that the two matrices $I$ and $S$ are asymptotically orthogonal to each other. + +# 2 SETUP + +We consider deep fully connected artificial neural networks (DNNs) using the setup and NTK parametrization of (Jacot et al., 2018), taking an arbitrary nonlinearity $\sigma \in C_b^4 (\mathbb{R})$ (i.e. a function $\sigma :\mathbb{R}\to \mathbb{R}$ that is 4 times continuously differentiable with all four derivatives bounded). The layers are numbered from 0 (input) to $L$ (output), each containing $n_{\ell}$ neurons for $\ell = 0,\ldots ,L$ . The $P = \sum_{\ell = 0}^{L - 1}(n_{\ell} + 1)n_{\ell +1}$ parameters consist of the weight matrices $W^{(\ell)}\in \mathbb{R}^{n_{\ell +1}\times n_{\ell}}$ and bias vectors $b^{(\ell)}\in \mathbb{R}^{n_{\ell +1}}$ for $\ell = 0,\dots ,L - 1$ . We aggregate the parameters into the vector $\theta \in \mathbb{R}^P$ . + +The activations and pre-activations of the layers are defined recursively for an input $x \in \mathbb{R}^{n_0}$ , setting $\alpha^{(0)}(x; \theta) = x$ : + +$$ +\tilde {\alpha} ^ {(\ell + 1)} (x; \theta) = \frac {1}{\sqrt {n _ {\ell}}} W ^ {(\ell)} \alpha^ {(\ell)} (x; \theta) + \beta b ^ {(\ell)}, +$$ + +$$ +\alpha^ {(\ell + 1)} (x; \theta) = \sigma \big (\tilde {\alpha} ^ {(\ell + 1)} (x; \theta) \big). +$$ + +The parameter $\beta$ is added to tune the influence of the bias on training.$^1$ All parameters are initialized as iid $\mathcal{N}(0,1)$ Gaussians. + +We will in particular study the network function, which maps inputs $x$ to the activation of the output layer (before the last non-linearity): + +$$ +f _ {\theta} (x) = \tilde {\alpha} ^ {(L)} (x; \theta). +$$ + +In this paper, we will study the limit of various objects as $n_1, \ldots, n_{L-1} \to \infty$ sequentially, i.e.
we first take $n_1 \to \infty$ , then $n_2 \to \infty$ , etc. This greatly simplifies the proofs, but they could in principle be extended to the simultaneous limit, i.e. when $n_1 = \ldots = n_{L-1} \to \infty$ . All our numerical experiments are done with 'rectangular' networks (with $n_1 = \ldots = n_{L-1}$ ) and closely match the predictions for the sequential limit. + +In the limit we study in this paper, the NTK is asymptotically fixed, as in (Jacot et al., 2018; Allen-Zhu et al., 2018; Du et al., 2019; Arora et al., 2019; Huang & Yau, 2019). By rescaling the outputs of DNNs as the width increases, one can reach another limit where the NTK is not fixed (Chizat & Bach, 2018a;b; Rotskoff & Vanden-Eijnden, 2018; Mei et al., 2019). Some of our results can be extended to this setting, but only at initialization (see Section 3.3). The behavior during training, however, becomes much more complex. + +# 2.1 FUNCTIONAL VIEWPOINT + +The network function lives in a function space $f_{\theta} \in \mathcal{F} \coloneqq [\mathbb{R}^{n_0} \to \mathbb{R}^{n_L}]$ and we call the function $F^{(L)}: \mathbb{R}^P \to \mathcal{F}$ that maps the parameters $\theta$ to the network function $f_{\theta}$ the realization function. We study the differential behavior of $F^{(L)}$ : + +- The derivative $\mathcal{DF}^{(L)}\in \mathbb{R}^P\otimes \mathcal{F}$ is a function-valued vector of dimension $P$ . The $p$ -th entry $\mathcal{D}_pF^{(L)} = \partial_{\theta_p}f_\theta \in \mathcal{F}$ represents how modifying the parameter $\theta_{p}$ modifies the function $f_{\theta}$ in the space $\mathcal{F}$ . +- The Hessian $\mathcal{H}F^{(L)}\in \mathbb{R}^P\otimes \mathbb{R}^P\otimes \mathcal{F}$ is a function-valued $P\times P$ matrix.
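The NTK-parametrized forward pass of Section 2 can be sketched in a few lines of numpy (a minimal illustration: the widths, the value of $\beta$, and the use of tanh as a stand-in for a $C_b^4$ nonlinearity are assumptions for the sketch, not the paper's experimental setup):

```python
import numpy as np

def init_params(widths, rng):
    """All weights and biases are iid N(0,1) under the NTK parametrization."""
    return [(rng.standard_normal((widths[l + 1], widths[l])),
             rng.standard_normal(widths[l + 1]))
            for l in range(len(widths) - 1)]

def network_function(x, params, beta=0.1, sigma=np.tanh):
    """f_theta(x): the pre-activation of the output layer."""
    alpha = x
    for l, (W, b) in enumerate(params):
        n_l = W.shape[1]
        # the 1/sqrt(n_l) factor is the NTK scaling of the pre-activations
        pre = W @ alpha / np.sqrt(n_l) + beta * b
        alpha = sigma(pre) if l < len(params) - 1 else pre
    return alpha

rng = np.random.default_rng(0)
params = init_params([3, 500, 500, 2], rng)   # n_0 = 3, n_L = 2, L = 3
f = network_function(rng.standard_normal(3), params)
```

At large width, each output coordinate of `f` is approximately a centered Gaussian whose variance is given by the limiting kernel $\Sigma_{\infty}^{(L)}(x,x)$ defined in Section 2.2.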
+ +The network is trained with respect to the cost functional: + +$$ +\mathcal {C} (f) = \frac {1}{N} \sum_ {i = 1} ^ {N} c _ {i} \left(f (x _ {i})\right), +$$ + +for strictly convex $c_{i}$ , summing over a finite dataset $x_{1},\ldots ,x_{N}\in \mathbb{R}^{n_{0}}$ of size $N$ . The parameters are then trained with gradient descent on the composition $\mathcal{C}\circ F^{(L)}$ , which defines the usual loss surface of neural networks. + +In this setting, we define the finite realization function $Y^{(L)}: \mathbb{R}^P \to \mathbb{R}^{Nn_L}$ mapping parameters $\theta$ to the restriction of the network function $f_{\theta}$ to the training set, $Y^{(L)}(\theta)_{ik} = f_{\theta,k}(x_i)$ . The Jacobian $\mathcal{D}Y^{(L)}$ is hence an $Nn_L \times P$ matrix and its Hessian $\mathcal{H}Y^{(L)}$ is a $P \times P \times Nn_L$ tensor. Defining the restricted cost $C(y) = \frac{1}{N}\sum_{i}c_{i}(y_{i})$ , we have $\mathcal{C} \circ F^{(L)} = C \circ Y^{(L)}$ . + +For our analysis, we require that the gradient norm $\| \mathcal{D}C\|$ does not explode during training. The following condition is sufficient: + +Definition 1. A loss $C:\mathbb{R}^{Nn_L}\to \mathbb{R}$ has bounded gradients over sublevel sets (BGOSS) if the norm of the gradient is bounded over all sets $U_{a} = \left\{Y\in \mathbb{R}^{Nn_{L}}:C(Y)\leq a\right\}$ . + +For example, the Mean Square Error (MSE) $C(Y) = \frac{1}{2N} \| Y^* - Y \|^2$ for the labels $Y^* \in \mathbb{R}^{Nn_L}$ has BGOSS because $\| \nabla C(Y) \|^2 = \frac{1}{N^2} \| Y^* - Y \|^2 = \frac{2}{N} C(Y)$ . For the binary and softmax cross-entropy the gradient is uniformly bounded; see Proposition 2 in Appendix A.
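For the MSE example, the relation between the gradient norm of the restricted cost and the cost itself can be checked numerically (a small sanity check with made-up outputs and labels; note the $1/N$ factors coming from the restricted cost):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
Y_star = rng.standard_normal(N)   # labels Y*
Y = rng.standard_normal(N)        # network outputs on the training set

C = np.sum((Y_star - Y) ** 2) / (2 * N)   # C(Y) = 1/(2N) ||Y* - Y||^2
grad_C = (Y - Y_star) / N                 # gradient of the restricted cost

# ||grad C||^2 = (1/N^2) ||Y* - Y||^2 = (2/N) C(Y), so the gradient is
# bounded on every sublevel set {C <= a}: the MSE loss has BGOSS.
lhs = np.sum(grad_C ** 2)
rhs = 2 * C / N
```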
+ +# 2.2 NEURAL TANGENT KERNEL + +The behavior during training of the network function $f_{\theta}$ in the function space $\mathcal{F}$ is described by a (multi-dimensional) kernel, the Neural Tangent Kernel (NTK) + +$$ +\Theta_ {k, k ^ {\prime}} ^ {(L)} (x, x ^ {\prime}) = \sum_ {p = 1} ^ {P} \partial_ {\theta_ {p}} f _ {\theta , k} (x) \partial_ {\theta_ {p}} f _ {\theta , k ^ {\prime}} (x ^ {\prime}). +$$ + +During training, the function $f_{\theta}$ follows the so-called kernel gradient descent with respect to the NTK, which is defined as + +$$ +\partial_ {t} f _ {\theta (t)} (x) = - \nabla_ {\Theta^ {(L)}} C _ {| f _ {\theta (t)}} (x) := - \frac {1}{N} \sum_ {i = 1} ^ {N} \Theta^ {(L)} (x, x _ {i}) \nabla c _ {i} (f _ {\theta (t)} (x _ {i})). +$$ + +In the infinite-width limit (letting $n_1 \to \infty, \ldots, n_{L-1} \to \infty$ sequentially) and for losses with BGOSS, the NTK converges to a deterministic limit $\Theta^{(L)} \to \Theta_{\infty}^{(L)} \otimes Id_{n_L}$ , which is constant during training, uniformly on finite time intervals $[0, T]$ (Jacot et al., 2018). For the MSE loss, the uniform convergence of the NTK was proven for $T = \infty$ in (Arora et al., 2019). + +The limiting NTK $\Theta_{\infty}^{(L)}:\mathbb{R}^{n_0}\times \mathbb{R}^{n_0}\to \mathbb{R}$ is constructed as follows: + +1. For $f, g: \mathbb{R} \to \mathbb{R}$ and a kernel $K: \mathbb{R}^{n_0} \times \mathbb{R}^{n_0} \to \mathbb{R}$ , define the kernel $\mathbb{L}_K^{f,g}: \mathbb{R}^{n_0} \times \mathbb{R}^{n_0} \to \mathbb{R}$ by + +$$ +\mathbb {L} _ {K} ^ {f, g} \left(x _ {0}, x _ {1}\right) = \mathbb {E} _ {\left(a _ {0}, a _ {1}\right)} \left[ f \left(a _ {0}\right) g \left(a _ {1}\right) \right], +$$ + +for $(a_0, a_1)$ a centered Gaussian vector with covariance matrix $(K(x_i, x_j))_{i,j=0,1}$ . For $f = g$ , we denote by $\mathbb{L}_K^f$ the kernel $\mathbb{L}_K^{f,f}$ . + +2. 
We define the kernels $\Sigma_{\infty}^{(\ell)}$ for each layer of the network, starting with $\Sigma_{\infty}^{(1)}(x_0,x_1) = \frac{1}{n_0} x_0^T x_1 + \beta^2$ and then recursively by $\Sigma_{\infty}^{(\ell +1)} = \mathbb{L}_{\Sigma_{\infty}^{(\ell)}}^{\sigma} + \beta^2$ , for $\ell = 1,\ldots ,L - 1$ , where $\sigma$ is the network non-linearity. + +3. The limiting NTK $\Theta_{\infty}^{(L)}$ is defined in terms of the kernels $\Sigma_{\infty}^{(\ell)}$ and the kernels $\dot{\Sigma}_{\infty}^{(\ell)} = \mathbb{L}_{\Sigma_{\infty}^{(\ell -1)}}^{\dot{\sigma}}$ : + +$$ +\Theta_ {\infty} ^ {(L)} = \sum_ {\ell = 1} ^ {L} \Sigma_ {\infty} ^ {(\ell)} \dot {\Sigma} _ {\infty} ^ {(\ell + 1)} \dots \dot {\Sigma} _ {\infty} ^ {(L)}. +$$ + +The NTK leads to convergence guarantees for DNNs in the infinite-width limit, and connects their generalization to that of kernel methods (Jacot et al., 2018; Arora et al., 2019). + +# 2.3 GRAM MATRICES + +For a finite dataset $x_{1},\ldots ,x_{N}\in \mathbb{R}^{n_{0}}$ and a fixed depth $L\geq 1$ , we denote by $\tilde{\Theta}\in \mathbb{R}^{Nn_L\times Nn_L}$ the Gram matrix of $x_{1},\ldots ,x_{N}$ with respect to the limiting NTK, defined by + +$$ +\tilde {\Theta} _ {i k, j m} = \Theta_ {\infty} ^ {(L)} (x _ {i}, x _ {j}) \delta_ {k m}. +$$ + +It is block diagonal because different outputs $k \neq m$ are asymptotically uncorrelated. + +Similarly, for any (scalar) kernel $\mathcal{K}^{(L)}$ (such as the limiting kernels $\Sigma_{\infty}^{(L)}, \Lambda_{\infty}^{(L)}, \Upsilon_{\infty}^{(L)}, \Phi_{\infty}^{(L)}, \Xi_{\infty}^{(L)}$ introduced later), we denote the Gram matrix of the datapoints by $\tilde{\mathcal{K}}$ .
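The recursion defining $\Theta_{\infty}^{(L)}$ can be evaluated numerically for a pair of inputs. The sketch below estimates the Gaussian expectations $\mathbb{L}_K^{f,g}$ with Gauss–Hermite quadrature; the quadrature order, the value of $\beta$, and tanh as the nonlinearity are illustrative assumptions:

```python
import numpy as np

# Probabilists' Gauss-Hermite rule: E[h(z)] for z ~ N(0,1) equals
# sum(w * h(nodes)) once the weights are normalized by sqrt(2*pi).
_nodes, _weights = np.polynomial.hermite_e.hermegauss(40)
_weights = _weights / np.sqrt(2.0 * np.pi)

def L_K(f, g, k00, k01, k11):
    """L_K^{f,g}(x0,x1) = E[f(a0) g(a1)] for a centered Gaussian (a0, a1)
    with covariance [[k00, k01], [k01, k11]]."""
    cov = np.array([[k00, k01], [k01, k11]])
    vals, vecs = np.linalg.eigh(cov)      # symmetric sqrt handles x0 = x1
    A = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    z0, z1 = np.meshgrid(_nodes, _nodes, indexing="ij")
    a0 = A[0, 0] * z0 + A[0, 1] * z1
    a1 = A[1, 0] * z0 + A[1, 1] * z1
    return float(np.sum(np.outer(_weights, _weights) * f(a0) * g(a1)))

def _layer(prev, f):
    """Apply L^f to a kernel stored by its values on (x0,x0), (x0,x1), (x1,x1)."""
    return {"00": L_K(f, f, prev["00"], prev["00"], prev["00"]),
            "01": L_K(f, f, prev["00"], prev["01"], prev["11"]),
            "11": L_K(f, f, prev["11"], prev["11"], prev["11"])}

def limiting_ntk(x0, x1, L, beta=0.1, sigma=np.tanh,
                 dsigma=lambda a: 1.0 - np.tanh(a) ** 2):
    n0 = len(x0)
    Sig = [{"00": x0 @ x0 / n0 + beta ** 2,     # Sigma^(1)
            "01": x0 @ x1 / n0 + beta ** 2,
            "11": x1 @ x1 / n0 + beta ** 2}]
    dSig = []                                   # dot-Sigma^(2), ..., dot-Sigma^(L)
    for _ in range(L - 1):
        dSig.append(_layer(Sig[-1], dsigma))
        Sig.append({k: v + beta ** 2 for k, v in _layer(Sig[-1], sigma).items()})
    # Theta^(L) = sum_l Sigma^(l) * dot-Sigma^(l+1) * ... * dot-Sigma^(L)
    theta = 0.0
    for ell in range(1, L + 1):
        term = Sig[ell - 1]["01"]
        for m in range(ell + 1, L + 1):
            term *= dSig[m - 2]["01"]
        theta += term
    return theta

theta = limiting_ntk(np.array([1.0, 0.5]), np.array([-0.3, 0.8]), L=3)
```

A convenient correctness check: with the identity nonlinearity and $\beta = 0$, every $\Sigma^{(\ell)}$ equals $\Sigma^{(1)}$ and every $\dot{\Sigma}^{(\ell)}$ equals 1, so the recursion collapses to $\Theta_{\infty}^{(L)}(x_0,x_1) = L \cdot x_0^T x_1 / n_0$.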
+ +# 3 MAIN THEOREMS + +# 3.1 HESSIAN AS $I + S$ + +Using the above setup, the Hessian $H$ of the loss $\mathcal{C} \circ F^{(L)}$ is the sum of two terms, with the entry $H_{p,p'}$ given by + +$$ +H _ {p, p ^ {\prime}} = \mathcal {H} \mathcal {C} _ {| f _ {\theta}} \left(\partial_ {\theta_ {p}} F ^ {(L)}, \partial_ {\theta_ {p ^ {\prime}}} F ^ {(L)}\right) + \mathcal {D} \mathcal {C} _ {| f _ {\theta}} \left(\partial_ {\theta_ {p} \theta_ {p ^ {\prime}}} ^ {2} F ^ {(L)}\right). +$$ + +For a finite dataset, the Hessian matrix $\mathcal{H}\left(C\circ Y^{(L)}\right)$ is equal to the sum of two matrices + +$$ +I = \left(\mathcal {D} Y ^ {(L)}\right) ^ {T} \mathcal {H} C \mathcal {D} Y ^ {(L)} \quad \text {and} \quad S = \nabla C \cdot \mathcal {H} Y ^ {(L)} +$$ + +where $\mathcal{D}Y^{(L)}$ is an $Nn_{L}\times P$ matrix, $\mathcal{H}C$ is an $Nn_{L}\times Nn_{L}$ matrix and $\mathcal{H}Y^{(L)}$ is a $P\times P\times Nn_{L}$ tensor to which we apply a scalar product (denoted by $\cdot$ ) in its last dimension with the $Nn_{L}$ vector $\nabla C$ to obtain a $P\times P$ matrix. + +Our main contribution is the following theorem, which describes the limiting moments $\operatorname{Tr}\left(H^{k}\right)$ in terms of the moments of $I$ and $S$ : + +Theorem 1. For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$ , in the sequential limit $n_1\to \infty ,\ldots ,n_{L - 1}\to \infty$ , we have for all $k\geq 1$ + +$$ +\operatorname {T r} \left(H (t) ^ {k}\right) \approx \operatorname {T r} \left(I (t) ^ {k}\right) + \operatorname {T r} \left(S (t) ^ {k}\right).
+$$ + +The limits of $\operatorname{Tr}\left(I(t)^k\right)$ and $\operatorname{Tr}\left(S(t)^k\right)$ can be expressed in terms of the NTK $\Theta_{\infty}^{(L)}$ , the kernels $\Upsilon_{\infty}^{(L)}, \Xi_{\infty}^{(L)}$ and the non-symmetric kernels $\Phi_{\infty}^{(L)}, \Lambda_{\infty}^{(L)}$ defined in Appendix C: + +- The moments $\operatorname{Tr}\left(I(t)^k\right)$ converge to the following limits (with the convention that $i_{k+1} = i_1$ ): + +$$ +\operatorname {T r} \left(I (t) ^ {k}\right) \to \operatorname {T r} \left(\left(\mathcal {H} C (Y (t)) \tilde {\Theta}\right) ^ {k}\right) = \frac {1}{N ^ {k}} \sum_ {i _ {1}, \ldots , i _ {k} = 1} ^ {N} \prod_ {m = 1} ^ {k} c _ {i _ {m}} ^ {\prime \prime} (f _ {\theta (t)} (x _ {i _ {m}})) \Theta_ {\infty} ^ {(L)} (x _ {i _ {m}}, x _ {i _ {m + 1}}). +$$ + +- The first moment $\operatorname{Tr}(S(t))$ converges to the limit: + +$$ +\operatorname {T r} (S (t)) = (G (t)) ^ {T} \nabla C (Y (t)). +$$ + +At initialization $(G(0),Y(0))$ form a Gaussian pair of $Nn_{L}$ -vectors, independent for differing output indices $k = 1,\dots,n_{L}$ and with covariance $\mathbb{E}[G_{ik}(0)G_{i'k'}(0)] = \delta_{kk'}\Xi_{\infty}^{(L)}(x_i,x_{i'})$ and $\mathbb{E}[G_{ik}(0)Y_{i'k'}(0)] = \delta_{kk'}\Phi_{\infty}^{(L)}(x_i,x_{i'})$ for the limiting kernel $\Xi_{\infty}^{(L)}(x,y)$ and non-symmetric kernel $\Phi_{\infty}^{(L)}(x,y)$ . During training, both vectors follow the differential equations + +$$ +\partial_ {t} G (t) = - \tilde {\Lambda} \nabla C (Y (t)) +$$ + +$$ +\partial_ {t} Y (t) = - \tilde {\Theta} \nabla C (Y (t)).
+$$ + +- The second moment $\operatorname{Tr}\left(S(t)^2\right)$ converges to the following limit defined in terms of the Gram matrix $\tilde{\Upsilon}$ : + +$$ +\operatorname {T r} \left(S (t) ^ {2}\right)\rightarrow \left(\nabla C (Y (t))\right) ^ {T} \tilde {\Upsilon} \nabla C (Y (t)) +$$ + +- The higher moments $\operatorname{Tr}\left(S(t)^k\right)$ for $k \geq 3$ vanish. + +Proof. The moments of $I$ and $S$ can be studied separately because the moments of their sum are asymptotically equal to the sum of their moments by Proposition 5 below. The limiting moments of $I$ and $S$ are respectively described by Propositions 1 and 4 below. + +In the case of an MSE loss $C(Y) = \frac{1}{2N}\| Y - Y^{*}\|^2$ , the first and second derivatives take simple forms $\nabla C(Y) = \frac{1}{N} (Y - Y^{*})$ and $\mathcal{H}C(Y) = \frac{1}{N} Id_{Nn_L}$ and the differential equations can be solved to obtain more explicit formulae: + +![](images/dc6ed8b2ba654ad5e2cd854f2ee5207693d8259fa162aa93a7664f73af5794ff.jpg) +Figure 1: Comparison of the theoretical prediction of Corollary 1 for the expectation of the first 4 moments (colored lines) to the empirical average over 250 trials (black crosses) for a rectangular network with two hidden layers of finite widths $n_1 = n_2 = 5000$ ( $L = 3$ ) with the smooth ReLU (left) and the normalized smooth ReLU (right), for the MSE loss on scaled down 14x14 MNIST with $N = 256$ . Only the first two moments are affected by $S$ at the beginning of training. + +![](images/d5682d654ff5f2af7dc029bcc3ecfd9fed9507db0ae82647f86e8f90e09fe58f.jpg) + +Corollary 1.
For the MSE loss $C$ and $\sigma \in C_b^4 (\mathbb{R})$ , in the limit $n_1,\ldots ,n_{L - 1}\to \infty$ , we have uniformly over $[0,T]$ + +$$ +\operatorname {T r} \left(H (t) ^ {k}\right)\rightarrow \frac {1}{N ^ {k}} \operatorname {T r} \left(\tilde {\Theta} ^ {k}\right) + \operatorname {T r} \left(S (t) ^ {k}\right) +$$ + +where + +$$ +\begin{array}{l} \operatorname {T r} (S (t)) \rightarrow - \frac {1}{N} (Y ^ {*} - Y (0)) ^ {T} \left(I d _ {N n _ {L}} - e ^ {- t \tilde {\Theta}}\right) \tilde {\Theta} ^ {- 1} \tilde {\Lambda} ^ {T} e ^ {- t \tilde {\Theta}} (Y ^ {*} - Y (0)) \\ - \frac {1}{N} G (0) ^ {T} e ^ {- t \tilde {\Theta}} (Y ^ {*} - Y (0)) \\ \end{array} +$$ + +$$ +\operatorname {T r} \left(S (t) ^ {2}\right)\rightarrow \frac {1}{N ^ {2}} \left(Y ^ {*} - Y (0)\right) ^ {T} e ^ {- t \tilde {\Theta}} \tilde {\Upsilon} e ^ {- t \tilde {\Theta}} \left(Y ^ {*} - Y (0)\right) +$$ + +$$ +\operatorname {T r} \left(S (t) ^ {k}\right)\rightarrow 0 \quad \text {when } k > 2. +$$ + +In expectation we have: + +$$ +\mathbb {E} \left[ \operatorname {T r} (S (t)) \right]\rightarrow - \frac {1}{N} T r \left(\left(I d _ {N n _ {L}} - e ^ {- t \tilde {\Theta}}\right) \tilde {\Theta} ^ {- 1} \tilde {\Lambda} ^ {T} e ^ {- t \tilde {\Theta}} \left(\tilde {\Sigma} + Y ^ {*} Y ^ {* T}\right)\right) + \frac {1}{N} T r \left(e ^ {- t \tilde {\Theta}} \tilde {\Phi} ^ {T}\right) +$$ + +$$ +\mathbb {E} \left[ \operatorname {T r} \left(S (t) ^ {2}\right)\right]\rightarrow \frac {1}{N ^ {2}} T r \left(e ^ {- t \tilde {\Theta}} \tilde {\Upsilon} e ^ {- t \tilde {\Theta}} \left(\tilde {\Sigma} + Y ^ {*} Y ^ {* T}\right)\right). +$$ + +Proof. The moments of $I$ are constant because $\mathcal{H}C = \frac{1}{N}Id_{Nn_L}$ is constant. For the moments of $S$ , we first solve the differential equation for $Y(t)$ : + +$$ +Y (t) = Y ^ {*} - e ^ {- t \tilde {\Theta}} (Y ^ {*} - Y (0)).
+$$ + +Noting $Y(t) - Y(0) = -\tilde{\Theta}\int_{0}^{t}\nabla C(s)ds$ , we have + +$$ +\begin{array}{l} G (t) = G (0) - \tilde {\Lambda} \int_ {0} ^ {t} \nabla C (s) d s \\ = G (0) + \tilde {\Lambda} \tilde {\Theta} ^ {- 1} (Y (t) - Y (0)) \\ = G (0) + \tilde {\Lambda} \tilde {\Theta} ^ {- 1} \left(I d _ {N n _ {L}} - e ^ {- t \tilde {\Theta}}\right) \left(Y ^ {*} - Y (0)\right) \\ \end{array} +$$ + +The expectation of the first moment of $S$ then follows. + +![](images/5cf6f6404d689d337632c29e17a4a7afebdedd507bb1900b65d2c90b160a5ee8.jpg) + +# 3.2 MUTUAL ORTHOGONALITY OF $I$ AND $S$ + +A first key ingredient to prove Theorem 1 is the asymptotic mutual orthogonality of the matrices $I$ and $S$ . + +Proposition (Proposition 5 in Appendix D). For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$ , we have uniformly over $[0,T]$ + +$$ +\lim _ {n _ {L - 1} \to \infty} \dots \lim _ {n _ {1} \to \infty} \| I S \| _ {F} = 0. +$$ + +As a consequence $\lim_{n_{L - 1}\to \infty}\dots \lim_{n_1\to \infty}\operatorname {Tr}\left([I + S]^k\right) - \left[\operatorname {Tr}\left(I^k\right) + \operatorname {Tr}\left(S^k\right)\right] = 0.$ + +Remark 1. If two matrices $A$ and $B$ are mutually orthogonal (i.e. $AB = 0$ ) the range of $A$ is contained in the nullspace of $B$ and vice versa. The non-zero eigenvalues of the sum $A + B$ are therefore given by the union of the non-zero eigenvalues of $A$ and $B$ . Furthermore, the moments of $A$ and $B$ add up: $\operatorname{Tr}\left([A + B]^k\right) = \operatorname{Tr}\left(A^k\right) + \operatorname{Tr}\left(B^k\right)$ . Proposition 5 shows that this is what happens asymptotically for $I$ and $S$ . + +Note that both matrices $I$ and $S$ have large nullspaces: indeed, assuming a constant width $w = n_1 = \ldots = n_{L-1}$ , we have $\text{Rank}(I) \leq Nn_L$ and $\text{Rank}(S) \leq 2(L-1)wNn_L$ (see Appendix C), while the number of parameters $P$ scales as $w^2$ (when $L > 2$ ).
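While the mutual orthogonality of $I$ and $S$ is an asymptotic statement, the decomposition $H = I + S$ of Section 3.1 is exact at any finite width and can be verified with finite differences on a toy network (all sizes, the tanh nonlinearity, and the random seed below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(2)
n0, n1, nL, N = 2, 3, 1, 4
X = rng.standard_normal((N, n0))
Y_star = rng.standard_normal(N)
beta = 0.1

sizes = [(n1, n0), (n1,), (nL, n1), (nL,)]     # W0, b0, W1, b1
P = sum(int(np.prod(s)) for s in sizes)        # P = 13 parameters
theta0 = rng.standard_normal(P)

def unpack(theta):
    out, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        out.append(theta[i:i + n].reshape(s))
        i += n
    return out

def Y_fn(theta):
    """Finite realization function Y^(L): network outputs on the training set."""
    W0, b0, W1, b1 = unpack(theta)
    h = np.tanh(X @ W0.T / np.sqrt(n0) + beta * b0)
    return (h @ W1.T / np.sqrt(n1) + beta * b1).ravel()

def loss(theta):
    return np.sum((Y_fn(theta) - Y_star) ** 2) / (2 * N)   # restricted MSE cost

def hessian(f, theta, h=1e-4):
    """Central-difference Hessian of a scalar function f."""
    d = len(theta)
    H = np.zeros((d, d))
    for a in range(d):
        for b in range(d):
            ea = np.zeros(d); ea[a] = h
            eb = np.zeros(d); eb[b] = h
            H[a, b] = (f(theta + ea + eb) - f(theta + ea - eb)
                       - f(theta - ea + eb) + f(theta - ea - eb)) / (4 * h * h)
    return H

def jacobian(f, theta, h=1e-6):
    d = len(theta)
    cols = []
    for a in range(d):
        ea = np.zeros(d); ea[a] = h
        cols.append((f(theta + ea) - f(theta - ea)) / (2 * h))
    return np.stack(cols, axis=1)               # (N nL) x P

J = jacobian(Y_fn, theta0)
grad_C = (Y_fn(theta0) - Y_star) / N
I_mat = J.T @ (np.eye(N * nL) / N) @ J          # I = (DY)^T HC DY
S_mat = sum(grad_C[i] * hessian(lambda t, i=i: Y_fn(t)[i], theta0)
            for i in range(N * nL))             # S = grad C . HY
H_loss = hessian(loss, theta0)
```

Up to finite-difference error, `H_loss` agrees with `I_mat + S_mat` entrywise, which is exactly the chain-rule identity behind the $I + S$ decomposition.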
+ +Figure 2 illustrates the mutual orthogonality of $I$ and $S$ . All numerical experiments are done for rectangular networks (where the widths of the hidden layers are equal) and agree well with our predictions obtained in the sequential limit. + +# 3.3 MEAN-FIELD LIMIT + +For a rectangular network with width $w$ , if the output of the network is divided by $\sqrt{w}$ and the learning rate is multiplied by $w$ (to keep similar dynamics at initialization), the training dynamics changes and the NTK varies during training when $w$ goes to infinity. The new parametrization of the output changes the scaling of the two matrices: + +$$ +\mathcal {H} \left[ C \left(\frac {1}{\sqrt {w}} Y ^ {(L)}\right) \right] = \frac {1}{w} \left(\mathcal {D} Y ^ {(L)}\right) ^ {T} \mathcal {H} C \mathcal {D} Y ^ {(L)} + \frac {1}{\sqrt {w}} \nabla C \cdot \mathcal {H} Y ^ {(L)} = \frac {1}{w} I + \frac {1}{\sqrt {w}} S. +$$ + +The scaling of the learning rate essentially multiplies the whole Hessian by $w$ . In this setting, the matrix $I$ is left unchanged while the matrix $S$ is multiplied by $\sqrt{w}$ (the $k$ -th moment of $S$ is hence multiplied by $w^{k/2}$ ). In particular, the first two moments of the Hessian are dominated by the moments of $S$ , and the higher moments of $S$ (and the operator norm of $S$ ) should not vanish. This suggests that the active regime may be characterised by the fact that $\| S \|_F \gg \| I \|_F$ .
Under the conjecture that Theorem 1 holds for the infinite-width limit of rectangular networks, the asymptotics of the first two moments of $H$ are given by: + +$$ +\frac {1}{\sqrt {w}} \operatorname {T r} (H) \to \mathcal {N} \left(0, \nabla C ^ {T} \tilde {\Xi} \nabla C\right) +$$ + +$$ +\frac {1}{w} \operatorname {T r} \left(H ^ {2}\right) \to \nabla C ^ {T} \tilde {\Upsilon} \nabla C, +$$ + +where for the MSE loss we have $\nabla C = -Y^{*}$ . + +# 3.4 THE MATRIX $S$ + +The matrix $S = \nabla C\cdot \mathcal{H}Y^{(L)}$ is best understood as a perturbation to $I$ , which vanishes as the network converges because $\nabla C\rightarrow 0$ . To calculate its moments, we note that + +$$ +\operatorname {T r} \left(\nabla C \cdot \mathcal {H} Y ^ {(L)}\right) = \left(\sum_ {p = 1} ^ {P} \partial_ {\theta_ {p}} ^ {2} Y\right) ^ {T} \nabla C = G ^ {T} \nabla C, +$$ + +where the vector $G = \sum_{p=1}^{P} \partial_{\theta_p}^2 Y \in \mathbb{R}^{Nn_L}$ is the evaluation of the function $g_\theta(x) = \sum_{p=1}^{P} \partial_{\theta_p}^2 f_\theta(x)$ on the training set. + +For the second moment we have + +$$ +\operatorname {T r} \left(\left(\nabla C \cdot \mathcal {H} Y ^ {(L)}\right) ^ {2}\right) = \nabla C ^ {T} \left(\sum_ {p, p ^ {\prime} = 1} ^ {P} \partial_ {\theta_ {p} \theta_ {p ^ {\prime}}} ^ {2} Y \left(\partial_ {\theta_ {p} \theta_ {p ^ {\prime}}} ^ {2} Y\right) ^ {T}\right) \nabla C = \nabla C ^ {T} \tilde {\Upsilon} \nabla C +$$ + +for $\tilde{\Upsilon}$ the Gram matrix of the kernel $\Upsilon^{(L)}(x,y) = \sum_{p,p' = 1}^{P}\partial_{\theta_p\theta_{p'}}^2 f_\theta (x)\left(\partial_{\theta_p\theta_{p'}}^2 f_\theta (y)\right)^T$ . + +The following proposition describes the limit of the function $g_{\theta}$ and the kernel $\Upsilon^{(L)}$ and the vanishing of the higher moments: + +Proposition (Proposition 4 in Appendix C).
For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$ , the first two moments of $S$ take the form + +$$ +\operatorname {T r} (S (t)) = G (t) ^ {T} \nabla C (t) +$$ + +$$ +\operatorname {T r} \left(S (t) ^ {2}\right) = \nabla C (t) ^ {T} \tilde {\Upsilon} (t) \nabla C (t) +$$ + +- At initialization, $g_{\theta}$ and $f_{\theta}$ converge to a (centered) Gaussian pair with covariances + +$$ +\mathbb {E} \left[ g _ {\theta , k} (x) g _ {\theta , k ^ {\prime}} \left(x ^ {\prime}\right) \right] = \delta_ {k k ^ {\prime}} \Xi_ {\infty} ^ {(L)} (x, x ^ {\prime}) +$$ + +$$ +\mathbb {E} \left[ g _ {\theta , k} (x) f _ {\theta , k ^ {\prime}} \left(x ^ {\prime}\right) \right] = \delta_ {k k ^ {\prime}} \Phi_ {\infty} ^ {(L)} (x, x ^ {\prime}) +$$ + +$$ +\mathbb {E} \left[ f _ {\theta , k} (x) f _ {\theta , k ^ {\prime}} \left(x ^ {\prime}\right) \right] = \delta_ {k k ^ {\prime}} \Sigma_ {\infty} ^ {(L)} (x, x ^ {\prime}) +$$ + +and during training $g_{\theta}$ evolves according to + +$$ +\partial_ {t} g _ {\theta , k} (x) = - \sum_ {i = 1} ^ {N} \Lambda_ {\infty} ^ {(L)} (x, x _ {i}) \partial_ {i k} C (Y (t)). +$$ + +- Uniformly over any interval $[0, T]$ , the kernel $\Upsilon^{(L)}$ has a deterministic and fixed limit $\lim_{n_{L-1} \to \infty} \dots \lim_{n_1 \to \infty} \Upsilon_{kk'}^{(L)}(x, x') = \delta_{kk'} \Upsilon_{\infty}^{(L)}(x, x')$ with limiting kernel: + +$$ +\Upsilon_ {\infty} ^ {(L)} (x, x ^ {\prime}) = \sum_ {\ell = 1} ^ {L - 1} \left(\Theta_ {\infty} ^ {(\ell)} (x, x ^ {\prime}) ^ {2} \ddot {\Sigma} _ {\infty} ^ {(\ell)} (x, x ^ {\prime}) + 2 \Theta_ {\infty} ^ {(\ell)} (x, x ^ {\prime}) \dot {\Sigma} _ {\infty} ^ {(\ell)} (x, x ^ {\prime})\right) \dot {\Sigma} _ {\infty} ^ {(\ell + 1)} (x, x ^ {\prime}) \dots \dot {\Sigma} _ {\infty} ^ {(L - 1)} (x, x ^ {\prime}). +$$ + +- The higher moments ( $k > 2$ ) vanish: $\lim_{n_{L - 1} \to \infty} \cdots \lim_{n_1 \to \infty} \operatorname{Tr}\left(S^k\right) = 0$ .
+ +This result has a number of consequences for infinitely wide networks: + +1. At initialization, the matrix $S$ has a finite Frobenius norm $\| S \|_F^2 = \operatorname{Tr}(S^2) = \nabla C^T \tilde{\Upsilon} \nabla C$ , because $\Upsilon$ converges to a fixed limit. As the network converges, the derivative of the cost goes to zero, $\nabla C(t) \to 0$ , and so does the Frobenius norm of $S$ . +2. In contrast, the operator norm of $S$ vanishes already at initialization (because for even $k \geq 4$ , we have $\| S \|_{op} \leq \sqrt[k]{\operatorname{Tr}(S^k)} \to 0$ ). At initialization, the vanishing of $S$ in operator norm but not in Frobenius norm can be explained by the matrix $S$ having a growing number of eigenvalues of shrinking magnitude as the width grows. +3. When it comes to the first moment of $S$ , Proposition 4 shows that the spectrum of $S$ is in general not symmetric. For the MSE loss the expectation of the first moment at initialization is + +$$ +\mathbb {E} \left[ \operatorname {T r} (S) \right] = \frac {1}{N} \mathbb {E} \left[ (Y - Y ^ {*}) ^ {T} G \right] = \frac {1}{N} \left(\mathbb {E} \left[ Y ^ {T} G \right] - \left(Y ^ {*}\right) ^ {T} \mathbb {E} [ G ]\right) = \frac {1}{N} \operatorname {T r} \left(\tilde {\Phi}\right) - 0 +$$ + +which may be positive or negative depending on the choice of nonlinearity: with a smooth ReLU, it is positive, while for the arc-tangent or the normalized smooth ReLU, it can be negative (see Figure 1). + +This is in contrast to the result obtained in (Pennington & Bahri, 2017; Geiger et al., 2018) for shallow ReLU networks, taking the second derivative of the ReLU to be zero. Under this assumption, the spectrum of $S$ is symmetric: if the eigenvalues are ordered from lowest to highest, $\lambda_{i} = -\lambda_{P - i}$ and $\operatorname{Tr}(S) = 0$ . + +![](images/a13a1db28e6716801d3f197c1a3df9e4777c4fbbcac1cb938c8d63087f36838f.jpg) +Figure 2: Illustration of the mutual orthogonality of $I$ and $S$ .
For the first 20 eigenvectors of $I$ (blue) and $S$ (orange), we plot the Rayleigh quotients $v^{T}Iv$ and $v^{T}Sv$ (with $L = 3$ , $n_{1} = n_{2} = 1000$ and the normalized ReLU on $14\mathrm{x}14$ MNIST with $N = 256$ ). We see that the directions where $I$ is large are directions where $S$ is small and vice versa. + +![](images/158973d6e63f54bc47b9cdea08a0fa1f9a4707010af052d095b0a210e1a9971a.jpg) +Figure 3: Plot of the loss surface around a global minimum along the first (y coordinate) and fourth (x coordinate) eigenvectors of $I$ . The network has $L = 4$ , width $n_1 = n_2 = n_3 = 1000$ for the smooth ReLU (left) and the normalized smooth ReLU (right). The data is uniform on the unit disk. Normalizing the non-linearity greatly reduces the narrow valley structure of the loss, thus speeding up training. + +![](images/1b20ed52e68385c3db170236719410a21e046a09330951a3331cdf5d61835768.jpg) + +These observations suggest that $S$ has little influence on the shape of the surface, especially towards the end of training; the matrix $I$ , however, has an interesting structure. + +# 3.5 THE MATRIX $I$ + +At a global minimizer $\theta^{*}$ , the spectrum of $I$ describes how the loss behaves around $\theta^{*}$ . Along the eigenvectors of the biggest eigenvalues of $I$ , the loss increases rapidly, while small eigenvalues correspond to flat directions. Numerically, it has been observed that the matrix $I$ features a few dominating eigenvalues and a bulk of small eigenvalues (Sagun et al., 2016; Gur-Ari et al., 2018; Papyan, 2019). This leads to a narrow valley structure of the loss around a minimum: the biggest eigenvalues are the 'cliffs' of the valley, i.e. the directions along which the loss grows fastest, while the small eigenvalues form the 'flat directions' or the bottom of the valley.
+ +Note that the rank of $I$ is bounded by $Nn_{L}$ and in the overparametrized regime, when $Nn_{L} < P$ , the matrix $I$ will have a large nullspace; these are directions along which the value of the function on the training set does not change. Note also that in the overparametrized regime, global minima are not isolated: they lie in a manifold of dimension at least $P - Nn_{L}$ and the nullspace of $I$ is tangent to this solution manifold. + +The matrix $I$ is closely related to the NTK Gram matrix: + +$$ +\tilde {\Theta} = \mathcal {D} Y ^ {(L)} \left(\mathcal {D} Y ^ {(L)}\right) ^ {T} \quad \text {and} \quad I = \left(\mathcal {D} Y ^ {(L)}\right) ^ {T} \mathcal {H} C \mathcal {D} Y ^ {(L)}. +$$ + +As a result, the limiting spectrum of the matrix $I$ can be directly obtained from the NTK.$^2$ + +Proposition 1. For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$ , uniformly over any interval $[0,T]$ , the moments $\operatorname {Tr}\left(I^k\right)$ converge to the following limit (with the convention that $i_{k + 1} = i_1$ ): + +$$ +\lim _ {n _ {L - 1} \rightarrow \infty} \dots \lim _ {n _ {1} \rightarrow \infty} \operatorname {T r} \left(I ^ {k}\right) = \operatorname {T r} \left(\left(\mathcal {H} C (Y _ {t}) \tilde {\Theta}\right) ^ {k}\right) = \frac {1}{N ^ {k}} \sum_ {i _ {1}, \dots , i _ {k} = 1} ^ {N} \prod_ {m = 1} ^ {k} c _ {i _ {m}} ^ {\prime \prime} \left(f _ {\theta (t)} \left(x _ {i _ {m}}\right)\right) \Theta_ {\infty} ^ {(L)} \left(x _ {i _ {m}}, x _ {i _ {m + 1}}\right) +$$ + +Proof. It follows from $\operatorname{Tr}\left(I^k\right) = \operatorname{Tr}\left(\left(\left(\mathcal{D}Y^{(L)}\right)^T\mathcal{H}C\mathcal{D}Y^{(L)}\right)^k\right) = \operatorname{Tr}\left(\left(\mathcal{H}C\tilde{\Theta}\right)^k\right)$ and the asymptotics of the NTK (Jacot et al., 2018). + +# 3.5.1 MEAN-SQUARE ERROR + +When the loss is the MSE, $\mathcal{HC}$ is equal to $\frac{1}{N}Id_{Nn_L}$ .
As a result, $\tilde{\Theta}$ and $I$ have the same non-zero eigenvalues up to a scaling of $1/N$ . Because the NTK is asymptotically fixed, the spectrum of $I$ is also fixed in the limit. +

The eigenvectors of the NTK Gram matrix are the kernel principal components of the data. The biggest principal components are the directions in function space which are most favoured by the NTK. This gives a functional interpretation of the narrow valley structure in DNNs: the cliffs of the valley are the biggest principal components, while the flat directions are the smallest components. +

Remark 2. As the depth $L$ of the network increases, one can observe two regimes (Poole et al., 2016; Jacot et al., 2019): Order/Freeze, where the NTK converges to a constant, and Chaos, where the NTK converges to a Kronecker delta. In the Order/Freeze regime, the $Nn_{L} \times Nn_{L}$ Gram matrix approaches a block diagonal matrix with $n_{L}$ constant blocks, and as a result $n_{L}$ eigenvalues of $I$ dominate the other ones, corresponding to constant directions along each output (this is in line with the observations of (Papyan, 2019)). This leads to a narrow valley for the loss and slows down training. In contrast, in the Chaos regime, the NTK Gram matrix approaches a scaled identity matrix, and the spectrum of $I$ should hence concentrate around a positive value, thus speeding up training. Figure 3 illustrates this phenomenon: with the smooth ReLU we observe a narrow valley, while with the normalized smooth ReLU (which lies in the Chaos regime according to (Jacot et al., 2019)) the narrowness of the loss is reduced. A similar phenomenon may explain why normalization helps smooth the loss surface and speed up training (Santurkar et al., 2018; Ghorbani et al., 2019). 
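The identification of the non-zero spectra of $\tilde{\Theta}$ and $I$ used above is the linear-algebra fact that $AA^{T}$ and $A^{T}A$ share their non-zero eigenvalues. A minimal numerical check, with a random matrix standing in for the output Jacobian $\mathcal{D}Y^{(L)}$ (a stand-in for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 6, 40                            # scalar output (n_L = 1): N rows, P parameters
DY = rng.standard_normal((N, P))        # stand-in for the output Jacobian D Y^{(L)}

theta_gram = DY @ DY.T                  # NTK Gram matrix, N x N
I = DY.T @ DY / N                       # I = (DY)^T (Id/N) (DY) for the MSE loss

ev_gram = np.sort(np.linalg.eigvalsh(theta_gram))[::-1]
ev_I = np.sort(np.linalg.eigvalsh(I))[::-1][:N]
print(np.allclose(ev_I, ev_gram / N))   # True: non-zero spectra agree up to 1/N
```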
+ +

# 3.5.2 CROSS-ENTROPY LOSS

For a binary cross-entropy loss with labels $Y^{*} \in \{-1, +1\}^{N}$ , +

$$
C (Y) = \frac {1}{N} \sum_ {i = 1} ^ {N} \log \left(1 + e ^ {- Y _ {i} ^ {*} Y _ {i}}\right),
$$ +

$\mathcal{HC}$ is a diagonal matrix whose entries depend on $Y$ (but not on $Y^{*}$ ): +

$$
\mathcal {H} _ {i i} C (Y) = \frac {1}{N} \frac {1}{2 + e ^ {- Y _ {i}} + e ^ {Y _ {i}}}.
$$ +

The eigenvectors of $I$ then correspond to the weighted kernel principal components of the data. The positive weights $\frac{1}{2 + e^{-Y_i} + e^{Y_i}}$ approach their maximum of $1/4$ as $Y_i$ goes to 0, i.e., when the output is close to the decision boundary between the two classes, and as $Y_i \to \pm \infty$ the weight goes to zero. The weights evolve in time through $Y_i$ ; the spectrum of $I$ is therefore not asymptotically fixed as in the MSE case, but the functional interpretation of the spectrum in terms of the kernel principal components remains. +

# 4 CONCLUSION +

We have given an explicit formula for the limiting moments of the Hessian of DNNs throughout training. We have used the common decomposition of the Hessian into two terms $I$ and $S$ and have shown that the two terms are asymptotically mutually orthogonal, such that they can be studied separately. +

The matrix $S$ vanishes in Frobenius norm as the network converges and has vanishing operator norm throughout training. The matrix $I$ is arguably the most important as it describes the narrow valley structure of the loss around a global minimum. The eigendecomposition of $I$ is related to the (weighted) kernel principal components of the data w.r.t. the NTK. +

# ACKNOWLEDGEMENTS +

Clément Hongler acknowledges support from the ERC SG CONSTAMIS grant, the NCCR SwissMAP grant, the NSF DMS-1106588 grant, the Minerva Foundation, the Blavatnik Family Foundation, and the Latsis foundation. +

# REFERENCES +

Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A Convergence Theory for Deep Learning via Over-Parameterization. 
CoRR, abs/1811.03962, 2018. URL http://arxiv.org/abs/1811.03962. +Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. arXiv preprint arXiv:1904.11955, 2019. +Marco Baity-Jesi, Levent Sagun, Mario Geiger, Stefano Spigler, Gerard Ben Arous, Chiara Cammarota, Yann LeCun, Matthieu Wyart, and Giulio Biroli. Comparing Dynamics: Deep Neural Networks versus Glassy Systems. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 314-323. PMLR, 10-15 Jul 2018. URL http://proceedings.mlr.press/v80/baity-jesi18a.html. +Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-sgd: Biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838, 2016. +Lenaïc Chizat and Francis Bach. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. In Advances in Neural Information Processing Systems 31, pp. 3040-3050. Curran Associates, Inc., 2018a. URL http://papers.nips.cc/paper/7567-on-the-global-convergence-of-gradient-descent-for-over-parameterized-models-using-optimal-transport.pdf. +Lenaic Chizat and Francis Bach. A note on lazy training in supervised differentiable programming. arXiv preprint arXiv:1812.07956, 2018b. +Youngmin Cho and Lawrence K. Saul. Kernel Methods for Deep Learning. In Advances in Neural Information Processing Systems 22, pp. 342-350. Curran Associates, Inc., 2009. URL http://papers.nips.cc/paper/3628-kernel-methods-for-deep-learning.pdf. +Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The Loss Surfaces of Multilayer Networks. Journal of Machine Learning Research, 38: 192-204, nov 2015. URL https://arxiv.org/pdf/1412.0233.pdf. +Yann N. 
Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and Attacking the Saddle Point Problem in High-dimensional Non-convex Optimization. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pp. 2933-2941, Cambridge, MA, USA, 2014. MIT Press. +Simon S. Du, Xiyu Zhai, Barnabás Póczos, and Aarti Singh. Gradient Descent Provably Optimizes Over-parameterized Neural Networks. 2019. +Mario Geiger, Stefano Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, and Matthieu Wyart. The jamming transition as a paradigm to understand the loss landscape of deep neural networks. arXiv preprint arXiv:1809.09349, 2018. +Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, and Matthieu Wyart. Scaling description of generalization with number of parameters in deep learning. abs/1901.01608, 2019. URL http://arxiv.org/abs/1901.01608. + +Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 2232-2241, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/ghorbani19b.html. +Guy Gur-Ari, Daniel A. Roberts, and Ethan Dyer. Gradient descent happens in a tiny subspace. CoRR, abs/1812.04754, 2018. URL http://arxiv.org/abs/1812.04754. +Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, 1997. +Jiaoyang Huang and Horng-Tzer Yau. Dynamics of deep neural networks and neural tangent hierarchy. arXiv preprint arXiv:1909.08156, 2019. +Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. 
In Advances in Neural Information Processing Systems 31, pp. 8580-8589. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/8076-neural-tangent-kernel-convergence-and-generalization-in-neural-networks.pdf. 
+Arthur Jacot, Franck Gabriel, and Clément Hongler. Freeze and chaos for dnns: an NTK view of batch normalization, checkerboard and boundary effects. CoRR, abs/1907.05715, 2019. URL http://arxiv.org/abs/1907.05715. 
+Ryo Karakida, Shotaro Akaho, and Shun-Ichi Amari. Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach. June 2018. URL http://arxiv.org/abs/1806.01316. 
+Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep Neural Networks as Gaussian Processes. ICLR, 2018. 
+Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33): E7665-E7671, 2018. 
+Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. arXiv preprint arXiv:1902.06015, 2019. 
+Radford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1996. ISBN 0387947248. 
+Vardan Papyan. Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet hessians. CoRR, abs/1901.08244, 2019. URL http://arxiv.org/abs/1901.08244. 
+Razvan Pascanu and Yoshua Bengio. Revisiting Natural Gradient for Deep Networks. January 2013. URL http://arxiv.org/abs/1301.3584. 
+Razvan Pascanu, Yann N Dauphin, Surya Ganguli, and Yoshua Bengio. On the saddle point problem for non-convex optimization. arXiv preprint, 2014. URL https://arxiv.org/pdf/1405.4604.pdf. 
+Jeffrey Pennington and Yasaman Bahri. Geometry of Neural Network Loss Surfaces via Random Matrix Theory. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 
2798-2806. PMLR, 06-11 Aug 2017. URL http://proceedings.mlr.press/v70/pennington17a.html. +Jeffrey Pennington and Pratik Worah. The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network. In Advances in Neural Information Processing Systems 31, pp. 5415-5424. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7786-the-spectrum-of-the-fisher-information-matrix-of-a-single-hidden-layer-neural-network.pdf. + +Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 3360-3368. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos.pdf. +Grant Rotskoff and Eric Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in Neural Information Processing Systems 31, pp. 7146-7155. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7945-parameters-as-interacting-particles-long-time-convergence-and-asymptotic-error-scaling-of-neural-networks.pdf. +Levent Sagun, Léon Bottou, and Yann LeCun. Singularity of the hessian in deep learning. CoRR, abs/1611.07476, 2016. URL http://arxiv.org/abs/1611.07476. +Levent Sagun, Utku Evci, V. Ugur Güney, Yann Dauphin, and Léon Bottou. Empirical Analysis of the Hessian of Over-Parametrized Neural Networks. CoRR, abs/1706.04454, 2017. +Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 2483-2493. Curran Associates, Inc., 2018. 
URL http://papers.nips.cc/paper/7515-how-does-batch-normalization-help-optimization.pdf. +Daniel Wagenaar. Information geometry of neural networks. 1998. ISSN 0302-9743. +Lei Wu, Zhanxing Zhu, and Weinan E. Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes. CoRR, abs/1706.10239, 2017. URL http://arxiv.org/abs/1706.10239. + +# A PROOFS + +For the proofs of the theorems and propositions presented in the main text, we reformulate the setup of (Jacot et al., 2018). For a fixed training set $x_{1},\ldots ,x_{N}$ , we consider a (possibly random) time-varying training direction $D(t)\in \mathbb{R}^{Nn_L}$ which describes how each of the outputs must be modified. In the case of gradient descent on a cost $C(Y)$ , the training direction is $D(t) = \nabla C(Y(t))$ . The parameters are updated according to the differential equation + +$$ +\partial_ {t} \theta (t) = (\partial_ {\theta} Y (t)) ^ {T} D (t). +$$ + +Under the condition that $\int_0^T\| D(t)\| _2dt$ is stochastically bounded as the width of the network goes to infinity, the NTK $\Theta^{(L)}$ converges to its fixed limit uniformly over $[0,T]$ . 
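A minimal Euler discretization of this parameter flow, for a hypothetical one-hidden-layer tanh network trained on the MSE cost (sizes and data are made up; the minus sign in the update fixes the descent convention so that the cost decreases), might look like:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, N = 2, 30, 5
X = rng.standard_normal((N, d))
Ystar = rng.standard_normal(N)
W1 = rng.standard_normal((n, d))
w2 = rng.standard_normal(n)

def outputs():
    return np.tanh(X @ W1.T) @ w2 / np.sqrt(n)   # Y(t), one entry per training point

def cost(Y):
    return 0.5 * np.mean((Y - Ystar) ** 2)       # MSE cost C(Y)

c0 = cost(outputs())
eta = 0.1
for _ in range(500):
    A = np.tanh(X @ W1.T)                        # N x n activations
    D = (A @ w2 / np.sqrt(n) - Ystar) / N        # training direction D(t) = grad C(Y(t))
    G = (1 - A ** 2) * w2                        # backprop through tanh, N x n
    # Euler step for the flow, applying -(d_theta Y)^T D(t) to the parameters
    W1 -= eta * ((G * D[:, None]).T @ X) / np.sqrt(n)
    w2 -= eta * (A.T @ D) / np.sqrt(n)

c1 = cost(outputs())
print(c0, "->", c1)                              # the cost decreases along the flow
```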
+ +

The reason we consider a general training direction (and not only a gradient of a loss) is that we can split a network in two at a layer $\ell$ and the training of the smaller network will be according to the training direction $D_{i}^{(\ell)}(t)$ given by +

$$
D _ {i} ^ {(\ell)} (t) = \operatorname {diag} \left(\dot {\sigma} \left(\tilde {\alpha} ^ {(\ell)} \left(x _ {i}\right)\right)\right) \left(\frac {1}{\sqrt {n _ {\ell}}} W ^ {(\ell)}\right) ^ {T} \dots \operatorname {diag} \left(\dot {\sigma} \left(\tilde {\alpha} ^ {(L - 1)} \left(x _ {i}\right)\right)\right) \left(\frac {1}{\sqrt {n _ {L - 1}}} W ^ {(L - 1)}\right) ^ {T} D _ {i} (t).
$$ +

Because the derivatives $\dot{\sigma}$ are bounded, by Lemma 1 of the Appendix of (Jacot et al., 2018) this training direction satisfies the constraints even though it is not the gradient of a loss. As a consequence, as $n_1\to \infty ,\dots,n_{\ell -1}\to \infty$ the NTK of the smaller network $\Theta^{(\ell)}$ also converges to its limit uniformly over $[0,T]$ . As we let $n_\ell \rightarrow \infty$ , the pre-activations $\tilde{\alpha}_i^{(\ell)}$ and weights $W_{ij}^{(\ell)}$ move at a rate of $1 / \sqrt{n_{\ell}}$ . We will use this rate of change to prove that other types of kernels are constant during training. +

When a network is trained with gradient descent on a loss $C$ with BGOSS, the integral $\int_0^T\| D(t)\| _2dt$ is stochastically bounded. Because the loss is decreasing during training, the outputs $Y(t)$ lie in the sublevel set $U_{C(Y(0))}$ for all times $t$ . The norm of the gradient is hence bounded for all times $t$ . Because the distribution of $Y(0)$ converges to a multivariate Gaussian, $b(C(Y(0)))$ is stochastically bounded as the width grows, where $b(a)$ is a bound on the norm of the gradient on $U_{a}$ . We then have the bound $\int_0^T\| D(t)\| _2dt\leq Tb(C(Y(0)))$ which is itself stochastically bounded. 
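For instance, for the binary cross-entropy with $\{0,1\}$ labels, the gradient is entrywise at most $1/N$ in absolute value, so $\|\nabla C(Y)\|_2 \leq 1/\sqrt{N}$ no matter the outputs. A quick numerical check of this bound (random outputs and labels, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
Y = 5 * rng.standard_normal(N)                 # arbitrary network outputs
Ystar = rng.integers(0, 2, N)                  # labels in {0, 1}

# gradient of C(Y) = (1/N) sum_i (log(1 + e^{Y_i}) - Y_i Y_i^*)
grad = (1.0 / (1.0 + np.exp(-Y)) - Ystar) / N
print(np.abs(grad).max() <= 1 / N)             # True: entrywise bound
print(np.linalg.norm(grad) <= 1 / np.sqrt(N))  # True: the norm bound
```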
+ +

For the binary and softmax cross-entropy losses the gradient is uniformly bounded: +

Proposition 2. For the binary cross-entropy loss $C$ and any $Y \in \mathbb{R}^N$ , $\| \nabla C(Y) \|_2 \leq \frac{1}{\sqrt{N}}$ . +

For the softmax cross-entropy loss $C$ on $c \in \mathbb{N}$ classes and any $Y \in \mathbb{R}^{Nc}$ , $\| \nabla C(Y)\|_2 \leq \frac{\sqrt{2c}}{\sqrt{N}}$ . +

Proof. The binary cross-entropy loss with labels $Y^{*} \in \{0,1\}^{N}$ is +

$$
C (Y) = - \frac {1}{N} \sum_ {i = 1} ^ {N} \log \frac {e ^ {Y _ {i} Y _ {i} ^ {*}}}{1 + e ^ {Y _ {i}}} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\log \left(1 + e ^ {Y _ {i}}\right) - Y _ {i} Y _ {i} ^ {*}\right)
$$ +

and the gradient at an input $i$ is +

$$
\partial_ {i} C (Y) = \frac {1}{N} \frac {e ^ {Y _ {i}} - Y _ {i} ^ {*} \left(1 + e ^ {Y _ {i}}\right)}{1 + e ^ {Y _ {i}}}
$$ +

which is bounded in absolute value by $\frac{1}{N}$ for both $Y_{i}^{*} = 0,1$ , such that $\| \nabla C(Y)\| _2\leq \frac{1}{\sqrt{N}}$ . +

The softmax cross-entropy loss over $c$ classes with labels $Y^{*} \in \{1, \dots, c\}^{N}$ is defined by +

$$
C (Y) = - \frac {1}{N} \sum_ {i = 1} ^ {N} \log \frac {e ^ {Y _ {i Y _ {i} ^ {*}}}}{\sum_ {k = 1} ^ {c} e ^ {Y _ {i k}}} = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\log \left(\sum_ {k = 1} ^ {c} e ^ {Y _ {i k}}\right) - Y _ {i Y _ {i} ^ {*}}\right). 
+$$ +

The gradient at an input $i$ and output class $m$ is +

$$
\partial_ {i m} C (Y) = \frac {1}{N} \left(\frac {e ^ {Y _ {i m}}}{\sum_ {k = 1} ^ {c} e ^ {Y _ {i k}}} - \delta_ {Y _ {i} ^ {*} m}\right)
$$ +

which is bounded in absolute value by $\frac{1}{N}$ , such that $\| \nabla C(Y)\| _2\leq \frac{\sqrt{Nc}}{N} = \frac{\sqrt{c}}{\sqrt{N}} \leq \frac{\sqrt{2c}}{\sqrt{N}}$ . +

# B PRELIMINARIES +

To study the moments of the matrix $S$ , we first have to show that two tensors vanish as $n_1, \ldots, n_{L-1} \to \infty$ : +

$$
\Omega_ {k _ {0}, k _ {1}, k _ {2}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}) = \left(\nabla f _ {\theta , k _ {0}} (x _ {0})\right) ^ {T} \mathcal {H} f _ {\theta , k _ {1}} (x _ {1}) \nabla f _ {\theta , k _ {2}} (x _ {2})
$$ +

$$
\Gamma_ {k _ {0}, k _ {1}, k _ {2}, k _ {3}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}, x _ {3}) = \left(\nabla f _ {\theta , k _ {0}} (x _ {0})\right) ^ {T} \mathcal {H} f _ {\theta , k _ {1}} (x _ {1}) \mathcal {H} f _ {\theta , k _ {2}} (x _ {2}) \nabla f _ {\theta , k _ {3}} (x _ {3}).
$$ +

We study these tensors recursively; for this, we need a recursive definition of the first derivatives $\partial_{\theta_p}f_{\theta ,k}(x)$ and second derivatives $\partial_{\theta_p\theta_{p'}}^2 f_{\theta ,k}(x)$ . The values of these derivatives depend on the layer $\ell$ to which the parameters $\theta_{p}$ and $\theta_{p'}$ belong, and on whether they are connection weights $W_{mk}^{(\ell)}$ or biases $b_{k}^{(\ell)}$ . 
The derivatives with respect to the parameters of the last layer are +

$$
\partial_ {W _ {m k} ^ {(L - 1)}} f _ {\theta , k ^ {\prime}} (x) = \frac {1}{\sqrt {n _ {L - 1}}} \alpha_ {m} ^ {(L - 1)} (x) \delta_ {k k ^ {\prime}}
$$ +

$$
\partial_ {b _ {k} ^ {(L - 1)}} f _ {\theta , k ^ {\prime}} (x) = \beta \delta_ {k k ^ {\prime}}.
$$ +

For parameters $\theta_p$ which belong to the lower layers, the derivatives can be defined recursively by +

$$
\partial_ {\theta_ {p}} f _ {\theta , k} (x) = \frac {1}{\sqrt {n _ {L - 1}}} \sum_ {m = 1} ^ {n _ {L - 1}} \partial_ {\theta_ {p}} \tilde {\alpha} _ {m} ^ {(L - 1)} (x) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L - 1)} (x)\right) W _ {m k} ^ {(L - 1)}.
$$ +

For the second derivatives, we first note that if either of the parameters $\theta_{p}$ or $\theta_{p'}$ is a bias of the last layer, or if they are both connection weights of the last layer, then $\partial_{\theta_p\theta_{p'}}^2 f_{\theta ,k}(x) = 0$ . Two cases are left: when one parameter is a connection weight of the last layer and the other belongs to the lower layers, and when both belong to the lower layers. 
Both cases can be defined recursively in terms of the first and second derivatives of $\tilde{\alpha}_{m}^{(L - 1)}$ : +

$$
\begin{array}{l} \partial_ {\theta_ {p} W _ {m k} ^ {(L - 1)}} ^ {2} f _ {\theta , k ^ {\prime}} (x) = \frac {1}{\sqrt {n _ {L - 1}}} \partial_ {\theta_ {p}} \tilde {\alpha} _ {m} ^ {(L - 1)} (x) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L - 1)} (x)\right) \delta_ {k k ^ {\prime}} \\ \partial_ {\theta_ {p} \theta_ {p ^ {\prime}}} ^ {2} f _ {\theta , k ^ {\prime}} (x) = \frac {1}{\sqrt {n _ {L - 1}}} \sum_ {m = 1} ^ {n _ {L - 1}} \partial_ {\theta_ {p} \theta_ {p ^ {\prime}}} ^ {2} \tilde {\alpha} _ {m} ^ {(L - 1)} (x) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(L - 1)} (x)) W _ {m k} ^ {(L - 1)} \\ + \frac {1}{\sqrt {n _ {L - 1}}} \sum_ {m = 1} ^ {n _ {L - 1}} \partial_ {\theta_ {p}} \tilde {\alpha} _ {m} ^ {(L - 1)} (x) \partial_ {\theta_ {p ^ {\prime}}} \tilde {\alpha} _ {m} ^ {(L - 1)} (x) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L - 1)} (x)\right) W _ {m k} ^ {(L - 1)}. \\ \end{array}
$$ +

Using these recursive definitions, the tensors $\Omega^{(L + 1)}$ and $\Gamma^{(L + 1)}$ are given in terms of $\Theta^{(L)},\Omega^{(L)}$ and $\Gamma^{(L)}$ , in the same manner as the NTK $\Theta^{(L + 1)}$ is defined recursively in terms of $\Theta^{(L)}$ in (Jacot et al., 2018). +

Lemma 1. For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$ , we have uniformly over $[0,T]$ +

$$
\lim _ {n _ {L - 1} \to \infty} \dots \lim _ {n _ {1} \to \infty} \Omega_ {k _ {0}, k _ {1}, k _ {2}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}) = 0.
$$ +

Proof. The proof is done by induction. When $L = 1$ , the second derivatives $\partial_{\theta_p\theta_{p'}}^2 f_{\theta ,k}(x) = 0$ and $\Omega_{k_0,k_1,k_2}^{(1)}(x_0,x_1,x_2) = 0$ . 
+ +For the induction step, we write $\Omega_{k_0,k_1,k_2}^{(\ell +1)}(x_0,x_1,x_2)$ recursively as + +$$ +\begin{array}{l} n _ {\ell} ^ {- 3 / 2} \sum_ {m _ {0}, m _ {1}, m _ {2}} \Theta_ {m _ {0}, m _ {1}} ^ {(\ell)} (x _ {0}, x _ {1}) \Theta_ {m _ {1}, m _ {2}} ^ {(\ell)} (x _ {1}, x _ {2}) \dot {\sigma} (\tilde {\alpha} _ {m _ {0}} ^ {(\ell)} (x _ {0})) \ddot {\sigma} (\tilde {\alpha} _ {m _ {1}} ^ {(\ell)} (x _ {1})) \dot {\sigma} (\tilde {\alpha} _ {m _ {2}} ^ {(\ell)} (x _ {2})) W _ {m _ {0} k _ {0}} ^ {(\ell)} W _ {m _ {1} k _ {1}} ^ {(\ell)} W _ {m _ {2} k _ {2}} ^ {(\ell)} \\ + n _ {\ell} ^ {- 3 / 2} \sum_ {m _ {0}, m _ {1}, m _ {2}} \Omega_ {m _ {0}, m _ {1}, m _ {2}} ^ {(\ell)} (x _ {0}, x _ {1}, x _ {2}) \dot {\sigma} (\tilde {\alpha} _ {m _ {0}} ^ {(\ell)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m _ {1}} ^ {(\ell)} (x _ {1})) \dot {\sigma} (\tilde {\alpha} _ {m _ {2}} ^ {(\ell)} (x _ {2})) W _ {m _ {0} k _ {0}} ^ {(\ell)} W _ {m _ {1} k _ {1}} ^ {(\ell)} W _ {m _ {2} k _ {2}} ^ {(\ell)} \\ + n _ {\ell} ^ {- 3 / 2} \sum_ {m _ {0}, m _ {1}} \Theta_ {m _ {0}, m _ {1}} ^ {(\ell)} (x _ {0}, x _ {1}) \dot {\sigma} (\tilde {\alpha} _ {m _ {0}} ^ {(\ell)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m _ {1}} ^ {(\ell)} (x _ {1})) \sigma (\tilde {\alpha} _ {m _ {1}} ^ {(\ell)} (x _ {2})) W _ {m _ {0} k _ {0}} ^ {(\ell)} \delta_ {k _ {1} k _ {2}} \\ + n _ {\ell} ^ {- 3 / 2} \sum_ {m _ {1}, m _ {2}} \Theta_ {m _ {1}, m _ {2}} ^ {(\ell)} (x _ {1}, x _ {2}) \sigma (\tilde {\alpha} _ {m _ {1}} ^ {(\ell)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m _ {1}} ^ {(\ell)} (x _ {1})) \dot {\sigma} (\tilde {\alpha} _ {m _ {2}} ^ {(\ell)} (x _ {2})) \delta_ {k _ {0} k _ {1}} W _ {m _ {2} k _ {2}} ^ {(\ell)}. \\ \end{array} +$$ + +As $n_1, \ldots, n_{\ell-1} \to \infty$ and for any times $t < T$ , the NTK $\Theta^{(\ell)}$ converges to its limit while $\Omega^{(\ell)}$ vanishes. 
The second summand hence vanishes and the others converge to + +$$ +\begin{array}{l} n _ {\ell} ^ {- 3 / 2} \sum_ {m} \Theta_ {\infty} ^ {(\ell)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(\ell)} (x _ {1}, x _ {2}) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {0})) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {1})) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {2})) W _ {m k _ {0}} ^ {(\ell)} W _ {m k _ {1}} ^ {(\ell)} W _ {m k _ {2}} ^ {(\ell)} \\ + n _ {\ell} ^ {- 3 / 2} \sum_ {m} \Theta_ {\infty} ^ {(\ell)} (x _ {0}, x _ {1}) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {1})) \sigma (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {2})) W _ {m k _ {0}} ^ {(\ell)} \delta_ {k _ {1} k _ {2}} \\ + n _ {\ell} ^ {- 3 / 2} \sum_ {m} \Theta_ {\infty} ^ {(\ell)} (x _ {1}, x _ {2}) \sigma (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {1})) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(\ell)} (x _ {2})) \delta_ {k _ {0} k _ {1}} W _ {m k _ {2}} ^ {(\ell)}. \\ \end{array} +$$ + +At initialization, all terms vanish as $n_{\ell} \to \infty$ because all summands are independent with zero mean and finite variance: in the $n_1 \to \infty, \ldots, n_{\ell - 1} \to \infty$ limit, the $\tilde{\alpha}_m^{(\ell)}(x)$ are independent for different $m$ , see (Jacot et al., 2018). During training, the weights $W^{(\ell)}$ and preactivations $\tilde{\alpha}^{(\ell)}$ move at a rate of $1 / \sqrt{n_{\ell}}$ (see the proof of convergence of the NTK in (Jacot et al., 2018)). Since $\dot{\sigma}$ is Lipschitz, we obtain that the motion during training of each of the sums is of order $n_{\ell}^{-3/2 + 1/2} = n_{\ell}^{-1}$ . As a result, uniformly over times $t \in [0, T]$ , all the sums vanish. + +Similarly, we have + +Lemma 2. 
For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$ , we have uniformly over $[0,T]$ +

$$
\lim _ {n _ {L - 1} \to \infty} \dots \lim _ {n _ {1} \to \infty} \Gamma_ {k _ {0}, k _ {1}, k _ {2}, k _ {3}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}, x _ {3}) = 0.
$$ +

Proof. The proof is done by induction. When $L = 1$ , the Hessian $\mathcal{H}F^{(1)} = 0$ , such that $\Gamma_{k_0,k_1,k_2,k_3}^{(1)}(x_0,x_1,x_2,x_3) = 0$ . +

For the induction step, $\Gamma^{(\ell +1)}$ can be defined recursively: +

$$
\begin{array}{l} \Gamma_ {k _ {0}, k _ {1}, k _ {2}, k _ {3}} ^ {(L + 1)} \left(x _ {0}, x _ {1}, x _ {2}, x _ {3}\right) \\ = n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \Gamma_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m _ {3}} ^ {(L)} (x _ {3})) \\ W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \Theta_ {m _ {0}, m _ {1}} ^ {(L)} (x _ {0}, x _ {1}) \Omega_ {m _ {1}, m _ {2}, m _ {3}} ^ {(L)} (x _ {1}, x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \ddot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \\ \dot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \dot {\sigma} 
(\alpha_ {m _ {3}} ^ {(L)} (x _ {3})) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \Omega_ {m _ {0}, m _ {1}, m _ {2}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}) \Theta_ {m _ {2}, m _ {3}} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \\ \ddot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m _ {3}} ^ {(L)} (x _ {3})) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \Theta_ {m _ {0}, m _ {1}} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {m _ {1}, m _ {2}} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {m _ {2}, m _ {3}} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \ddot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \\ \ddot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m _ {3}} ^ {(L)} (x _ {3})) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m _ {1}, m _ {2}, m _ {3}} \Omega_ {m _ {1}, m _ {2}, m _ {3}} ^ {(L)} (x _ {1}, x _ {2}, x _ {3}) \sigma (\alpha_ {m _ {1}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m _ {3}} ^ {(L)} (x _ {3})) \\ \delta_ {k _ {0} k _ {1}} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m _ {1}, m _ {2}, m _ {3}} \Theta_ {m _ {1}, m _ {2}} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {m _ {2}, m _ {3}} ^ {(L)} (x _ {2}, x _ {3}) \sigma (\alpha_ {m _ {1}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \ddot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m _ 
{3}} ^ {(L)} (x _ {3})) \\ \delta_ {k _ {0} k _ {1}} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}} \Omega_ {m _ {0}, m _ {1}, m _ {2}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \sigma (\alpha_ {m _ {2}} ^ {(L)} (x _ {3})) \\ W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}} \Theta_ {m _ {0}, m _ {1}} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {m _ {1}, m _ {2}} ^ {(L)} (x _ {1}, x _ {2}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \ddot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \sigma (\alpha_ {m _ {2}} ^ {(L)} (x _ {3})) \\ W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m _ {1}, m _ {2}} \Theta_ {m _ {1}, m _ {2}} ^ {(L)} (x _ {1}, x _ {2}) \sigma (\alpha_ {m _ {1}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m _ {2}} ^ {(L)} (x _ {2})) \sigma (\alpha_ {m _ {2}} ^ {(L)} (x _ {3})) \delta_ {k _ {0} k _ {1}} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {3}} \Theta_ {m _ {0}, m _ {1}} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {m _ {1}, m _ {3}} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m _ {0}} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m _ {1}} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m _ {3}} ^ {(L)} (x _ {3})) \\ W _ {m _ {0} k _ {0}} ^ {(L)} \delta_ {k _ {1} k _ {2}} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +As $n_1, \ldots, n_{\ell-1} \to \infty$ and for any times $t < T$ , the NTK $\Theta^{(\ell)}$ converges to its limit while $\Omega^{(\ell)}$ and $\Gamma^{(\ell)}$ vanish. 
$\Gamma_{k_0, k_1, k_2, k_3}^{(L+1)}(x_0, x_1, x_2, x_3)$ therefore converges to: + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {0})) \ddot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {1})) \ddot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {3})) \\ W _ {m k _ {0}} ^ {(L)} W _ {m k _ {1}} ^ {(L)} W _ {m k _ {2}} ^ {(L)} W _ {m k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \sigma (\alpha_ {m} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {1})) \ddot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {3})) \\ \delta_ {k _ {0} k _ {1}} W _ {m k _ {2}} ^ {(L)} W _ {m k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {0})) \ddot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {2})) \sigma (\alpha_ {m} ^ {(L)} (x _ {3})) \\ W _ {m k _ {0}} ^ {(L)} W _ {m k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \sigma (\alpha_ {m} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {2})) \sigma (\alpha_ {m} ^ {(L)} (x _ {3})) \delta_ {k _ {0} k _ {1}} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {0})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {1})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {2})) \dot {\sigma} (\alpha_ {m} ^ {(L)} (x _ {3})) \\ W _ {m k _ {0}} ^ {(L)} \delta_ {k _ {1} k _ {2}} W _ {m 
k _ {3}} ^ {(L)} \\ \end{array}
$$

For the convergence during training, we proceed similarly to the proof of Lemma 1. At initialization, all terms vanish as $n_{\ell} \to \infty$ because all summands are independent (after taking the $n_1, \ldots, n_{L-1} \to \infty$ limit) with zero mean and finite variance. During training, the weights $W^{(\ell)}$ and preactivations $\tilde{\alpha}^{(\ell)}$ move at a rate of $1/\sqrt{n_{\ell}}$, which leads to a change of order $n_{\ell}^{-2 + 1/2} = n_{\ell}^{-1.5}$, which also vanishes for all times $t$.

# C THE MATRIX $S$

We now have the theoretical tools to describe the moments of the matrix $S$. We first give a bound for the rank of $S$:

Proposition 3. $\operatorname{Rank}(S) \leq 2(n_1 + \ldots + n_{L-1}) N n_L$.

Proof. We first observe that $S$ is given by a sum of $Nn_L$ matrices:

$$
S_{p p'} = \sum_{i = 1}^{N} \sum_{k = 1}^{n_L} \partial_{i k} C \, \partial_{\theta_p \theta_{p'}}^2 f_{\theta, k}(x_i).
$$

It is therefore sufficient to show that the rank of each matrix $\mathcal{H}f_{\theta,k}(x_i) = \left(\partial_{\theta_p\theta_{p'}}^2 f_{\theta,k}(x_i)\right)_{p,p'}$ is bounded by $2(n_1 + \ldots + n_{L-1})$.

The derivatives $\partial_{\theta_p}f_{\theta,k}(x)$ have different definitions depending on whether the parameter $\theta_p$ is a connection weight $W_{ij}^{(\ell)}$ or a bias $b_j^{(\ell)}$:

$$
\partial_{W_{ij}^{(\ell)}} f_{\theta,k}(x) = \frac{1}{\sqrt{n_\ell}} \alpha_i^{(\ell)}(x; \theta) \, \partial_{\tilde{\alpha}_j^{(\ell+1)}(x; \theta)} f_{\theta,k}(x)
$$

$$
\partial_{b_j^{(\ell)}} f_{\theta,k}(x) = \beta \, \partial_{\tilde{\alpha}_j^{(\ell+1)}(x; \theta)} f_{\theta,k}(x)
$$

These formulas depend on $\theta$ only through the values $\left(\alpha_i^{(\ell)}(x;\theta)\right)_{\ell,i}$ and $\left(\partial_{\tilde{\alpha}_i^{(\ell)}(x;\theta)}f_{\theta,k}(x)\right)_{\ell,i}$ for $\ell = 1,\ldots,L-1$ (note that neither $\alpha_i^{(0)}(x) = x_i$ nor $\partial_{\tilde{\alpha}_i^{(L)}(x;\theta)}f_{\theta,k}(x) = \delta_{ik}$ depends on $\theta$). Together there are $2(n_1 + \ldots + n_{L-1})$ of them. As a consequence, the map $\theta \mapsto \left(\partial_{\theta_p}f_{\theta,k}(x_i)\right)_p$ can be written as a composition

$$
\theta \in \mathbb{R}^P \mapsto \left(\alpha_i^{(\ell)}(x; \theta), \partial_{\tilde{\alpha}_i^{(\ell)}(x; \theta)} f_{\theta,k}(x)\right)_{\ell,i} \in \mathbb{R}^{2(n_1 + \ldots + n_{L-1})} \mapsto \left(\partial_{\theta_p} f_{\theta,k}(x_i)\right)_p \in \mathbb{R}^P
$$

and the matrix $\mathcal{H}f_{\theta,k}(x)$ is equal to the Jacobian of this map.
By the chain rule, $\mathcal{H}f_{\theta,k}(x)$ is the product of the Jacobians of the two submaps, whose ranks are bounded by $2(n_1 + \ldots + n_{L-1})$, hence bounding the rank of $\mathcal{H}f_{\theta,k}(x)$. And because $S$ is a sum of $N n_L$ matrices of rank at most $2(n_1 + \ldots + n_{L-1})$, the rank of $S$ is bounded by $2(n_1 + \ldots + n_{L-1}) N n_L$.

# C.1 MOMENTS

Let us now prove Proposition 4:

Proposition 4. For any loss $C$ with BGOSS and $\sigma \in C_b^4(\mathbb{R})$, the first two moments of $S$ take the form

$$
\operatorname{Tr}(S(t)) = G(t)^T \nabla C(t)
$$

$$
\operatorname{Tr}\left(S(t)^2\right) = \nabla C(t)^T \tilde{\Upsilon}(t) \nabla C(t)
$$

- At initialization, $g_\theta$ and $f_\theta$ converge to a (centered) Gaussian pair with covariances

$$
\mathbb{E}\left[g_{\theta,k}(x) g_{\theta,k'}(x')\right] = \delta_{kk'} \Xi_\infty^{(L)}(x, x')
$$

$$
\mathbb{E}\left[g_{\theta,k}(x) f_{\theta,k'}(x')\right] = \delta_{kk'} \Phi_\infty^{(L)}(x, x')
$$

$$
\mathbb{E}\left[f_{\theta,k}(x) f_{\theta,k'}(x')\right] = \delta_{kk'} \Sigma_\infty^{(L)}(x, x')
$$

and during training $g_\theta$ evolves according to

$$
\partial_t g_{\theta,k}(x) = \sum_{i=1}^{N} \Lambda_\infty^{(L)}(x, x_i) \partial_{ik} C(Y(t)).
$$

- Uniformly over any interval $[0, T]$ where $\int_0^T \|\nabla C(t)\|_2 \, dt$ is stochastically bounded, the kernel $\Upsilon^{(L)}$ has a deterministic and fixed limit $\lim_{n_{L-1}\to\infty} \dots \lim_{n_1\to\infty} \Upsilon_{kk'}^{(L)}(x,x') = \delta_{kk'} \Upsilon_\infty^{(L)}(x,x')$ with limiting kernel:

$$
\Upsilon_\infty^{(L)}(x, x') = \sum_{\ell=1}^{L-1} \left(\Theta_\infty^{(\ell)}(x, x')^2 \ddot{\Sigma}^{(\ell)}(x, x') + 2 \Theta_\infty^{(\ell)}(x, x') \dot{\Sigma}^{(\ell)}(x, x')\right) \dot{\Sigma}^{(\ell+1)}(x, x') \cdots \dot{\Sigma}^{(L-1)}(x, x').
$$

- The higher moments $k > 2$ vanish: $\lim_{n_{L-1}\to\infty} \cdots \lim_{n_1\to\infty} \operatorname{Tr}\left(S^k\right) = 0$.

Proof. The first moment of $S$ takes the form

$$
\operatorname{Tr}(S) = \sum_p (\nabla C)^T \mathcal{H}_{p,p} Y = (\nabla C)^T G
$$

where $G$ is the restriction to the training set of the function $g_\theta(x) = \sum_p \partial_{\theta_p\theta_p}^2 f_\theta(x)$. This process is random at initialization and varies during training. Lemma 3 below shows that, in the infinite-width limit, it is a Gaussian process at initialization which then evolves according to a simple differential equation, hence describing the evolution of the first moment during training.
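The identity $\operatorname{Tr}(S) = (\nabla C)^T G$ is just linearity of the trace; a minimal numpy sanity check, using random symmetric matrices as stand-ins for the output Hessians (all sizes and names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 6, 4  # number of parameters / training points (hypothetical sizes)

# Random symmetric stand-ins for the Hessians H f_theta(x_i) of the outputs
H = [A + A.T for A in rng.standard_normal((N, P, P))]
c = rng.standard_normal(N)                # stand-in for the entries of grad C
S = sum(ci * Hi for ci, Hi in zip(c, H))  # S = sum_i (d_i C) * H f_theta(x_i)

g = np.array([np.trace(Hi) for Hi in H])  # G_i = Tr(H f_theta(x_i)) = g_theta(x_i)
assert np.isclose(np.trace(S), c @ g)     # Tr(S) = (grad C)^T G by linearity
```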
The second moment of $S$ takes the form:

$$
\begin{array}{l}
\operatorname{Tr}(S^2) = \sum_{p_1, p_2 = 1}^{P} \sum_{i_1, i_2 = 1}^{N} \sum_{k_1, k_2 = 1}^{n_L} \partial_{\theta_{p_1},\theta_{p_2}}^2 f_{\theta,k_1}(x_{i_1}) \, \partial_{\theta_{p_2},\theta_{p_1}}^2 f_{\theta,k_2}(x_{i_2}) \, \partial_{i_1 k_1} C \, \partial_{i_2 k_2} C \\
= (\nabla C)^T \tilde{\Upsilon} \nabla C \\
\end{array}
$$

where $\Upsilon_{k_1,k_2}^{(L)}(x_1,x_2) = \sum_{p_1,p_2=1}^P \partial_{\theta_{p_1},\theta_{p_2}}^2 f_{\theta,k_1}(x_1) \, \partial_{\theta_{p_2},\theta_{p_1}}^2 f_{\theta,k_2}(x_2)$ is a multidimensional kernel and $\tilde{\Upsilon}$ is its Gram matrix. Lemma 4 below shows that in the infinite-width limit, $\Upsilon_{k_1,k_2}^{(L)}(x_1,x_2)$ converges to a deterministic and time-independent limit $\Upsilon_\infty^{(L)}(x_1,x_2)\delta_{k_1k_2}$.

To show that $\operatorname{Tr}(S^k) \to 0$ for all $k > 2$, it suffices to show that $\left\|S^2\right\|_F \to 0$, as $\left|\operatorname{Tr}(S^k)\right| \leq \left\|S^2\right\|_F \left\|S\right\|_F^{k-2}$ and we know that $\|S\|_F^2 \to (\partial_Y C)^T \tilde{\Upsilon} \partial_Y C$ is finite.
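The inequality $\left|\operatorname{Tr}(S^k)\right| \leq \|S^2\|_F \|S\|_F^{k-2}$ follows from Cauchy-Schwarz applied to the eigenvalues of the symmetric matrix $S$; a quick numerical check with a random symmetric matrix (hypothetical size):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
S = (A + A.T) / 2  # random symmetric stand-in for S

fro = np.linalg.norm  # Frobenius norm is the default for matrices
for k in range(3, 8):
    lhs = abs(np.trace(np.linalg.matrix_power(S, k)))
    rhs = fro(S @ S) * fro(S) ** (k - 2)
    assert lhs <= rhs + 1e-9  # |Tr(S^k)| <= ||S^2||_F ||S||_F^(k-2)
```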
We have that

$$
\left\|S^2\right\|_F^2 = \sum_{i_0, i_1, i_2, i_3 = 1}^{N} \sum_{k_0, k_1, k_2, k_3 = 1}^{n_L} \Psi_{k_0, k_1, k_2, k_3}^{(L)}(x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3}) \, \partial_{f_{\theta,k_0}(x_{i_0})} C \, \partial_{f_{\theta,k_1}(x_{i_1})} C \, \partial_{f_{\theta,k_2}(x_{i_2})} C \, \partial_{f_{\theta,k_3}(x_{i_3})} C = \tilde{\Psi} \cdot (\partial_Y C)^{\otimes 4}
$$

for $\tilde{\Psi}$ the $Nn_L \times Nn_L \times Nn_L \times Nn_L$ finite version of

$$
\begin{array}{l}
\Psi_{k_0, k_1, k_2, k_3}^{(L)}(x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3}) = \sum_{p_0, p_1, p_2, p_3 = 1}^{P} \partial_{\theta_{p_0}, \theta_{p_1}}^2 f_{\theta,k_0}(x_0) \, \partial_{\theta_{p_1}, \theta_{p_2}}^2 f_{\theta,k_1}(x_1) \\
\partial_{\theta_{p_2}, \theta_{p_3}}^2 f_{\theta,k_2}(x_2) \, \partial_{\theta_{p_3}, \theta_{p_0}}^2 f_{\theta,k_3}(x_3), \\
\end{array}
$$

which vanishes in the infinite-width limit by Lemma 5 below.

Lemma 3.
For any loss $C$ with BGOSS and $\sigma \in C_b^4(\mathbb{R})$, at initialization $g_\theta$ and $f_\theta$ converge to a (centered) Gaussian pair with covariances

$$
\begin{array}{l}
\mathbb{E}\left[g_{\theta,k}(x) g_{\theta,k'}(x')\right] = \delta_{kk'} \Xi_\infty^{(L)}(x, x') \\
\mathbb{E}\left[g_{\theta,k}(x) f_{\theta,k'}(x')\right] = \delta_{kk'} \Phi_\infty^{(L)}(x, x') \\
\mathbb{E}\left[f_{\theta,k}(x) f_{\theta,k'}(x')\right] = \delta_{kk'} \Sigma_\infty^{(L)}(x, x') \\
\end{array}
$$

and during training $g_\theta$ evolves according to

$$
\partial_t g_\theta(x) = \sum_{i=1}^{N} \Lambda_\infty^{(L)}(x, x_i) D_i(t)
$$

Proof. When $L = 1$, $g_\theta(x)$ is $0$ for any $x$ and $\theta$.

For the inductive step, the trace $g_{\theta,k}^{(L+1)}(x)$ is defined recursively as

$$
\frac{1}{\sqrt{n_L}} \sum_{m=1}^{n_L} \left(g_{\theta,m}^{(L)}(x) \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) + \operatorname{Tr}\left(\nabla f_{\theta,m}(x) (\nabla f_{\theta,m}(x))^T\right) \ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right)\right) W_{mk}^{(L)}
$$

First note that $\operatorname{Tr}\left(\nabla f_{\theta,m}(x)(\nabla f_{\theta,m}(x))^T\right) = \Theta_{mm}^{(L)}(x, x)$. Now let $n_1, \ldots, n_{L-1} \to \infty$; by the induction hypothesis, the pairs $(g_{\theta,m}^{(L)}, \tilde{\alpha}_m^{(L)})$ converge to iid Gaussian pairs of processes with covariance $\Phi_\infty^{(L)}$ at initialization.
At initialization, conditioned on the values of $g_m^{(L)}, \tilde{\alpha}_m^{(L)}$, the pairs $(g_k^{(L+1)}, f_\theta)$ follow a centered Gaussian distribution with (conditioned) covariance

$$
\begin{array}{l}
\mathbb{E}\left[g_{\theta,k}^{(L+1)}(x) g_{\theta,k'}^{(L+1)}(x') \mid g_{\theta,m}^{(L)}, \tilde{\alpha}_m^{(L)}\right] = \frac{\delta_{kk'}}{n_L} \sum_{m=1}^{n_L} \left(g_{\theta,m}^{(L)}(x)\dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) + \Theta_\infty^{(L)}(x,x)\ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right)\right) \\
\left(g_{\theta,m}^{(L)}(x')\dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x')\right) + \Theta_\infty^{(L)}(x',x')\ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x')\right)\right) \\
\end{array}
$$

$$
\begin{array}{l}
\mathbb{E}\left[g_{\theta,k}^{(L+1)}(x) f_{\theta,k'}(x') \mid g_{\theta,m}^{(L)}, \tilde{\alpha}_m^{(L)}\right] = \frac{\delta_{kk'}}{n_L} \sum_{m=1}^{n_L} \left(g_{\theta,m}^{(L)}(x)\dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) + \Theta_\infty^{(L)}(x,x)\ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right)\right) \\
\sigma\left(\tilde{\alpha}_m^{(L)}(x')\right) \\
\end{array}
$$

$$
\mathbb{E}\left[f_{\theta,k}(x) f_{\theta,k'}(x') \mid g_{\theta,m}^{(L)}, \tilde{\alpha}_m^{(L)}\right] = \frac{\delta_{kk'}}{n_L} \sum_{m=1}^{n_L} \sigma\left(\tilde{\alpha}_m^{(L)}(x)\right) \sigma\left(\tilde{\alpha}_m^{(L)}(x')\right) + \beta^2.
$$

As $n_L \to \infty$, by the law of large numbers, these (random) covariances converge to their expectations, which are deterministic; hence the pairs $(g_k^{(L+1)}, f_{\theta,k})$ asymptotically have the same Gaussian distribution independent of $g_m^{(L)}$, $\tilde{\alpha}_m^{(L)}$:

$$
\mathbb{E}\left[g_{\theta,k}^{(L)}(x) g_{\theta,k'}^{(L)}(x')\right] \to \delta_{kk'} \Xi_\infty^{(L)}(x, x')
$$

$$
\mathbb{E}\left[g_{\theta,k}^{(L)}(x) f_{\theta,k'}^{(L)}(x')\right] \to \delta_{kk'} \Phi_\infty^{(L)}(x, x')
$$

$$
\mathbb{E}\left[f_{\theta,k}^{(L)}(x) f_{\theta,k'}^{(L)}(x')\right] \to \delta_{kk'} \Sigma_\infty^{(L)}(x, x')
$$

with $\Xi_\infty^{(1)}(x, x') = \Phi_\infty^{(1)}(x, x') = 0$ and

$$
\begin{array}{l}
\Xi_\infty^{(L+1)}(x, x') = \mathbb{E}\left[g g' \dot{\sigma}(\alpha)\dot{\sigma}(\alpha')\right] \\
+ \Theta_\infty^{(L)}(x', x') \mathbb{E}\left[g \dot{\sigma}(\alpha)\ddot{\sigma}(\alpha')\right] \\
+ \Theta_\infty^{(L)}(x, x) \mathbb{E}\left[g' \dot{\sigma}(\alpha')\ddot{\sigma}(\alpha)\right] \\
+ \Theta_\infty^{(L)}(x, x)\Theta_\infty^{(L)}(x', x') \mathbb{E}\left[\ddot{\sigma}(\alpha')\ddot{\sigma}(\alpha)\right] \\
= \Xi_\infty^{(L)}(x, x')\dot{\Sigma}_\infty^{(L)}(x, x') + \left(\Phi_\infty^{(L)}(x, x')\Phi_\infty^{(L)}(x', x) + \Phi_\infty^{(L)}(x, x)\Phi_\infty^{(L)}(x', x')\right)\ddot{\Sigma}_\infty^{(L)}(x, x') \\
+ \Phi_\infty^{(L)}(x, x')\Phi_\infty^{(L)}(x', x') \mathbb{E}\left[\dot{\sigma}(\alpha)\ddot{\sigma}(\alpha')\right] + \Phi_\infty^{(L)}(x, x)\Phi_\infty^{(L)}(x', x) \mathbb{E}\left[\ddot{\sigma}(\alpha)\dot{\sigma}(\alpha')\right] \\
+ \Theta_\infty^{(L)}(x', x')\left(\Phi_\infty^{(L)}(x, x)\ddot{\Sigma}_\infty^{(L)}(x, x') + \Phi_\infty^{(L)}(x, x')\mathbb{E}\left[\dot{\sigma}(\alpha)\ddot{\sigma}(\alpha')\right]\right) \\
+ \Theta_\infty^{(L)}(x, x)\left(\Phi_\infty^{(L)}(x', x')\ddot{\Sigma}_\infty^{(L)}(x, x') + \Phi_\infty^{(L)}(x', x)\mathbb{E}\left[\ddot{\sigma}(\alpha)\dot{\sigma}(\alpha')\right]\right) \\
+ \Theta_\infty^{(L)}(x, x)\Theta_\infty^{(L)}(x', x')\ddot{\Sigma}_\infty^{(L)}(x, x') \\
\end{array}
$$

and

$$
\begin{array}{l}
\Phi_\infty^{(L+1)}(x, x') = \mathbb{E}\left[g \dot{\sigma}(\alpha)\sigma(\alpha')\right] + \Theta_\infty^{(L)}(x, x)\mathbb{E}\left[\ddot{\sigma}(\alpha)\sigma(\alpha')\right] \\
= \Phi_\infty^{(L)}(x, x')\dot{\Sigma}^{(L+1)}(x, x') + \left(\Phi_\infty^{(L)}(x, x) + \Theta_\infty^{(L)}(x, x)\right)\mathbb{E}\left[\ddot{\sigma}(\alpha)\sigma(\alpha')\right] \\
\end{array}
$$

where $(g, g', \alpha, \alpha')$ is a Gaussian quadruple of covariance

$$
\left(\begin{array}{llll}
\Xi_\infty^{(L)}(x, x) & \Xi_\infty^{(L)}(x, x') & \Phi_\infty^{(L)}(x, x) & \Phi_\infty^{(L)}(x, x') \\
\Xi_\infty^{(L)}(x, x') & \Xi_\infty^{(L)}(x', x') & \Phi_\infty^{(L)}(x', x) & \Phi_\infty^{(L)}(x', x') \\
\Phi_\infty^{(L)}(x, x) & \Phi_\infty^{(L)}(x', x) & \Sigma_\infty^{(L)}(x, x) & \Sigma_\infty^{(L)}(x, x') \\
\Phi_\infty^{(L)}(x, x') & \Phi_\infty^{(L)}(x', x') & \Sigma_\infty^{(L)}(x, x') & \Sigma_\infty^{(L)}(x', x') \\
\end{array}\right).
$$

During training, the parameters follow the gradient $\partial_t \theta(t) = (\partial_\theta Y(t))^T D(t)$. By the induction hypothesis, the traces $g_{\theta,m}^{(L)}$ then evolve according to the differential equation

$$
\partial_t g_{\theta,m}^{(L)}(x) = \frac{1}{\sqrt{n_L}} \sum_{i=1}^{N} \sum_{m'=1}^{n_L} \Lambda_{mm'}^{(L)}(x, x_i) \dot{\sigma}\left(\tilde{\alpha}_{m'}^{(L)}(x)\right) \left(W_{m'}^{(L)}\right)^T D_i(t)
$$

and in the limit as $n_1, \ldots, n_{L-1} \to \infty$, the kernel $\Lambda_{mm'}^{(L)}(x, x_i)$ converges to a deterministic and fixed limit $\delta_{mm'} \Lambda_\infty^{(L)}(x, x_i)$. Note that as $n_L$ grows, the $g_{\theta,m}^{(L)}(x)$ move at a rate of $1/\sqrt{n_L}$, just like the pre-activations $\tilde{\alpha}_m^{(L)}$.
Even though they move less and less, together they affect the trace $g_{\theta ,k}^{(L + 1)}$ which follows the differential equation + +$$ +\partial_ {t} g _ {\theta , k} ^ {(L + 1)} (x) = \sum_ {i = 1} ^ {N} \sum_ {k ^ {\prime} = 1} ^ {n _ {L}} \Lambda_ {k k ^ {\prime}} ^ {(L + 1)} (x, x _ {i}) D _ {i k ^ {\prime}} (t) +$$ + +where + +$$ +\begin{array}{l} \Lambda_ {k k ^ {\prime}} ^ {(L + 1)} (x, x ^ {\prime}) = \frac {1}{n _ {L}} \sum_ {m, m ^ {\prime}} \Lambda_ {m m ^ {\prime}} ^ {(L)} (x, x ^ {\prime}) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x ^ {\prime})\right) W _ {m k} ^ {(L)} W _ {m ^ {\prime} k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m, m ^ {\prime}} g _ {\theta , m} ^ {(L)} (x) \Theta_ {m m ^ {\prime}} ^ {(L)} (x, x ^ {\prime}) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x ^ {\prime})\right) W _ {m k} ^ {(L)} W _ {m ^ {\prime} k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m} g _ {\theta , m} ^ {(L)} (x) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \sigma \left(\tilde {\alpha} _ {m} ^ {(L)} (x ^ {\prime})\right) \delta_ {k k ^ {\prime}} \\ \end{array} +$$ + +$$ +\begin{array}{l} + \frac {2}{n _ {L}} \sum_ {m, m ^ {\prime}} \Omega_ {m ^ {\prime} m m} ^ {(L)} (x ^ {\prime}, x, x) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x)) \dot {\sigma} (\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x ^ {\prime})) W _ {m k} ^ {(L)} W _ {m ^ {\prime} k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m, m ^ {\prime}} \Theta_ {m m} ^ {(L)} (x, x) \Theta_ {m m ^ {\prime}} ^ {(L)} (x, x ^ {\prime}) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x ^ {\prime})\right) W _ {m k} ^ {(L)} W _ {m ^ {\prime} k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m} \Theta_ {m m} ^ {(L)} (x, x) \ddot {\sigma} 
\left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \sigma \left(\tilde {\alpha} _ {m} ^ {(L)} \left(x ^ {\prime}\right)\right) \delta_ {k k ^ {\prime}}. \\ \end{array} +$$ + +As $n_1, \ldots, n_{L-1} \to \infty$ , the kernels $\Theta_{mm'}^{(L)}(x, x')$ and $\Lambda_{mm'}^{(L)}(x, x')$ converge to their limit and $\Omega_{m'mm'}^{(L)}(x', x, x)$ vanishes: + +$$ +\begin{array}{l} \Lambda_ {k k ^ {\prime}} ^ {(L)} (x, x ^ {\prime}) \rightarrow \frac {1}{n _ {L}} \sum_ {m} \Lambda_ {\infty} ^ {(L)} (x, x ^ {\prime}) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x ^ {\prime})\right) W _ {m k} ^ {(L)} W _ {m k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m} g _ {\theta , m} ^ {(L)} (x) \Theta_ {\infty} ^ {(L)} (x, x ^ {\prime}) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x ^ {\prime})\right) W _ {m k} ^ {(L)} W _ {m k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m} g _ {\theta , m} ^ {(L)} (x) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x)\right) \sigma \left(\tilde {\alpha} _ {m} ^ {(L)} \left(x ^ {\prime}\right)\right) \delta_ {k k ^ {\prime}} \\ + \frac {1}{n _ {L}} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x, x) \Theta_ {\infty} ^ {(L)} (x, x ^ {\prime}) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x)) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x ^ {\prime})) W _ {m k} ^ {(L)} W _ {m k ^ {\prime}} ^ {(L)} \\ + \frac {1}{n _ {L}} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x, x) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x)) \sigma (\tilde {\alpha} _ {m} ^ {(L)} (x ^ {\prime})) \delta_ {k k ^ {\prime}} \\ \end{array} +$$ + +By the law of large numbers, as $n_L\to \infty$ , at initialization $\Lambda_{kk'}^{(L + 1)}(x,x')\rightarrow \delta_{kk'}\Lambda_{\infty}^{(L + 1)}(x,x')$ where + +$$ +\begin{array}{l} \Lambda_ {\infty} ^ {(L + 1)} (x, x ^ {\prime}) = \Lambda_ {\infty} ^ {(L)} (x, x ^ {\prime}) \dot {\Sigma} _ {\infty} ^ 
{(L + 1)} (x, x ^ {\prime}) \\ + \Theta_ {\infty} ^ {(L)} (x, x ^ {\prime}) \mathbb {E} [ g \ddot {\sigma} (\alpha) \dot {\sigma} (\alpha^ {\prime}) ] \\ + \mathbb {E} [ g \dot {\sigma} (\alpha) \sigma (\alpha^ {\prime}) ] \\ + \Theta_ {\infty} ^ {(L)} (x, x) \Theta_ {\infty} ^ {(L)} (x, x ^ {\prime}) \mathbb {E} \left[ \ddot {\sigma} (\alpha) \dot {\sigma} (\alpha^ {\prime}) \right] \\ + \Theta_ {\infty} ^ {(L)} (x, x) \mathbb {E} [ \ddot {\sigma} (\alpha) \sigma (\alpha^ {\prime}) ] \\ = \Lambda_ {\infty} ^ {(L)} (x, x ^ {\prime}) \dot {\Sigma} _ {\infty} ^ {(L + 1)} (x, x ^ {\prime}) \\ + \Theta_ {\infty} ^ {(L)} (x, x ^ {\prime}) \left(\Phi_ {\infty} ^ {(L)} (x, x ^ {\prime}) \ddot {\Sigma} _ {\infty} ^ {(L + 1)} (x, x ^ {\prime}) + \Phi_ {\infty} ^ {(L)} (x, x) \mathbb {E} \left[ \ddot {\sigma} (\alpha) \dot {\sigma} (\alpha^ {\prime}) \right]\right) \\ + \Phi_ {\infty} ^ {(L)} (x, x ^ {\prime}) \dot {\Sigma} _ {\infty} ^ {(L + 1)} (x, x ^ {\prime}) + \Phi_ {\infty} ^ {(L)} (x, x) \mathbb {E} [ \ddot {\sigma} (\alpha) \sigma (\alpha^ {\prime}) ] \\ + \Theta_ {\infty} ^ {(L)} (x, x) \Theta_ {\infty} ^ {(L)} (x, x ^ {\prime}) \mathbb {E} \left[ \ddot {\sigma} (\alpha) \dot {\sigma} (\alpha^ {\prime}) \right] \\ + \Theta_ {\infty} ^ {(L)} (x, x) \mathbb {E} [ \ddot {\sigma} (\alpha) \dot {\sigma} (\alpha^ {\prime}) ] \\ \end{array} +$$ + +During training $\Theta_{\infty}^{(L)}$ and $\Lambda_{\infty}^{(L)}$ are fixed in the limit $n_1,.., n_{L-1} \to \infty$ , and the values $g_{\theta,m}^{(L)}(x)$ , $\tilde{\alpha}_m^{(L)}(x)$ and $W_{mk}^{(L)}$ vary at a rate of $1/\sqrt{n_L}$ which induce a change of the same rate to $\Lambda_{kk'}^{(L)}(x,x')$ , which is therefore asymptotically fixed during training as $n_L \to \infty$ . + +The next lemma describes the asymptotic limit of the kernel $\Upsilon^{(L)}$ : + +Lemma 4. 
For any loss $C$ with BGOSS and $\sigma \in C_b^4(\mathbb{R})$, the second moment of the Hessian of the realization function $\mathcal{H}F^{(L)}$ converges uniformly over $[0,T]$ to a fixed limit as $n_1, \ldots, n_{L-1} \to \infty$:

$$
\Upsilon_{kk'}^{(L)}(x, x') \rightarrow \delta_{kk'} \sum_{\ell=1}^{L-1} \left(\Theta_\infty^{(\ell)}(x, x')^2 \ddot{\Sigma}_\infty^{(\ell)}(x, x') + 2 \Theta_\infty^{(\ell)}(x, x') \dot{\Sigma}_\infty^{(\ell)}(x, x')\right) \dot{\Sigma}_\infty^{(\ell+1)}(x, x') \cdots \dot{\Sigma}_\infty^{(L-1)}(x, x').
$$

Proof. The proof is by induction on the depth $L$. The case $L = 1$ is trivially true because $\partial_{\theta_p\theta_{p'}}^2 f_{\theta,k}(x) = 0$ for all $p, p', k, x$. For the induction step we observe that

$$
\begin{array}{l}
\Upsilon_{k,k'}^{(L+1)}(x, x') \\
= \sum_{p_1, p_2 = 1}^{P} \partial_{\theta_{p_1}, \theta_{p_2}}^2 f_{\theta,k}(x) \, \partial_{\theta_{p_2}, \theta_{p_1}}^2 f_{\theta,k'}(x') \\
= \frac{1}{n_L} \sum_{m, m' = 1}^{n_L} \Upsilon_{m,m'}^{(L)}(x, x') \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \dot{\sigma}\left(\tilde{\alpha}_{m'}^{(L)}(x')\right) W_{mk}^{(L)} W_{m'k'}^{(L)} \\
+ \frac{1}{n_L} \sum_{m, m' = 1}^{n_L} \Omega_{m',m,m'}^{(L)}(x', x, x') \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \ddot{\sigma}\left(\tilde{\alpha}_{m'}^{(L)}(x')\right) W_{mk}^{(L)} W_{m'k'}^{(L)} \\
+ \frac{1}{n_L} \sum_{m, m' = 1}^{n_L} \Omega_{m,m',m}^{(L)}(x, x', x) \ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \dot{\sigma}\left(\tilde{\alpha}_{m'}^{(L)}(x')\right) W_{mk}^{(L)} W_{m'k'}^{(L)} \\
+ \frac{1}{n_L} \sum_{m, m' = 1}^{n_L} \Theta_{m,m'}^{(L)}(x, x') \Theta_{m',m}^{(L)}(x', x) \ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \ddot{\sigma}\left(\tilde{\alpha}_{m'}^{(L)}(x')\right) W_{mk}^{(L)} W_{m'k'}^{(L)} \\
+ \frac{2}{n_L} \sum_{m = 1}^{n_L} \Theta_{m,m}^{(L)}(x, x') \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x')\right) \delta_{kk'} \\
\end{array}
$$

If we now let the width of the lower layers grow to infinity, $n_1, \ldots, n_{L-1} \to \infty$, the tensor $\Omega^{(L)}$ vanishes and $\Upsilon_{m,m'}^{(L)}$ and the NTK $\Theta_{m,m'}^{(L)}$ converge to limits which are non-zero only when $m = m'$.
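This diagonal concentration and the law-of-large-numbers step applied next can be sanity-checked by Monte Carlo; the sketch below (hypothetical width $n_L$, $\tanh$ nonlinearity, and a made-up pre-activation covariance) shows $\frac{1}{n_L}\sum_m \dot{\sigma}(\tilde{\alpha}_m(x))\dot{\sigma}(\tilde{\alpha}_m(x'))W_{mk}W_{mk'}$ concentrating on $\delta_{kk'}\,\mathbb{E}[\dot{\sigma}(\alpha)\dot{\sigma}(\alpha')]$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_dot(a):
    return 1.0 / np.cosh(a) ** 2  # derivative of tanh

n_L, n_out = 200_000, 3  # hypothetical layer widths
# iid Gaussian stand-ins for pre-activations at two inputs x, x' (corr. 0.5)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
a = rng.multivariate_normal(np.zeros(2), cov, size=n_L)
W = rng.standard_normal((n_L, n_out))

# Empirical kernel (1/n_L) sum_m sigma'(a_m) sigma'(a'_m) W_mk W_mk'
v = sigma_dot(a[:, 0]) * sigma_dot(a[:, 1])
K = (W * v[:, None]).T @ W / n_L

limit = v.mean()  # Monte Carlo estimate of E[sigma'(alpha) sigma'(alpha')]
# Off-diagonal entries vanish; diagonal entries concentrate on the limit
assert np.allclose(K, limit * np.eye(n_out), atol=0.02)
```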
As a result, the term above converges to

$$
\begin{array}{l}
\frac{1}{n_L} \sum_{m=1}^{n_L} \Upsilon_\infty^{(L)}(x, x') \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x')\right) W_{mk}^{(L)} W_{mk'}^{(L)} \\
+ \frac{1}{n_L} \sum_{m=1}^{n_L} \Theta_\infty^{(L)}(x, x')^2 \ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \ddot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x')\right) W_{mk}^{(L)} W_{mk'}^{(L)} \\
+ \frac{2}{n_L} \sum_{m=1}^{n_L} \Theta_\infty^{(L)}(x, x') \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x)\right) \dot{\sigma}\left(\tilde{\alpha}_m^{(L)}(x')\right) \delta_{kk'} \\
\end{array}
$$

At initialization, we can apply the law of large numbers as $n_L \to \infty$, so that this converges to $\Upsilon_\infty^{(L+1)}(x, x')\delta_{kk'}$, for the kernel $\Upsilon_\infty^{(L+1)}(x, x')$ defined recursively by

$$
\Upsilon_\infty^{(L+1)}(x, x') = \Upsilon_\infty^{(L)}(x, x') \dot{\Sigma}_\infty^{(L)}(x, x') + \Theta_\infty^{(L)}(x, x')^2 \ddot{\Sigma}_\infty^{(L)}(x, x') + 2 \Theta_\infty^{(L)}(x, x') \dot{\Sigma}_\infty^{(L)}(x, x')
$$

and $\Upsilon_\infty^{(1)}(x, x') = 0$.

For the convergence during training, we proceed similarly to the proof of Lemma 1: the activations $\tilde{\alpha}_m^{(L)}(x)$ and weights $W_{mk}^{(L)}$ move at a rate of $1/\sqrt{n_L}$, and the change to $\Upsilon_{kk'}^{(L+1)}$ is therefore of order $1/\sqrt{n_L}$ and vanishes as $n_L \to \infty$.

Finally, the next lemma shows the vanishing of the tensor $\Psi_{k_0,k_1,k_2,k_3}^{(L)}$, which proves that the higher moments of $S$ vanish.

Lemma 5.
For any loss $C$ with BGOSS and $\sigma \in C_b^4(\mathbb{R})$, uniformly over $[0,T]$,

$$
\lim_{n_{L-1} \to \infty} \dots \lim_{n_1 \to \infty} \Psi_{k_0, k_1, k_2, k_3}^{(L)}(x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3}) = 0.
$$

Proof. When $L = 1$, the Hessian is zero and $\Psi_{k_0,k_1,k_2,k_3}^{(1)}(x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3}) = 0$.

For the induction step, we write $\Psi_{k_0,k_1,k_2,k_3}^{(L+1)}(x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3})$ recursively. Because it contains many terms, we lighten the notation, writing $\left[\begin{array}{cc} x_0 & x_1 \\ m_0 & m_1 \end{array}\right]$ for

$$
\Theta_{m_0,m_1}^{(L)}(x_0, x_1), \quad \left[\begin{array}{ccc} x_0 & x_1 & x_2 \\ m_0 & m_1 & m_2 \end{array}\right] \text{ for } \Omega_{m_0,m_1,m_2}^{(L)}(x_0, x_1, x_2), \quad \text{and} \quad \left[\begin{array}{cccc} x_0 & x_1 & x_2 & x_3 \\ m_0 & m_1 & m_2 & m_3 \end{array}\right]
$$

for $\Gamma_{m_0,m_1,m_2,m_3}^{(L)}(x_0, x_1, x_2, x_3)$.
The value $\Psi_{k_0,k_1,k_2,k_3}^{(L + 1)}(x_{i_0},x_{i_1},x_{i_2},x_{i_3})$ is then equal to + +$$ +\begin{array}{l} n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \Psi_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} ^ {(L)} (x _ {0}, x _ {1}, x _ {2}, x _ {3}) \dot {\sigma} (\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})) \dot {\sigma} (\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m _ {0} & m _ {1} \end{array} \right] \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l l} x _ {0} & x _ {1} & x _ {2} \\ m _ {0} & m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde 
{\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m _ {0} & m _ {1} \end{array} \right] \left[ \begin{array}{l l l} x _ {1} & x _ {2} & x _ {3} \\ m _ {1} & m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{c c} x _ {0} & x _ {1} \\ m _ {0} & m _ {1} \end{array} \right] \left[ \begin{array}{c c} x _ {1} & x _ {2} \\ m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{c c c} x _ {2} & x _ {3} & x _ {0} \\ m _ {2} & m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ 
{1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l l} x _ {3} & x _ {0} & x _ {1} \\ m _ {3} & m _ {0} & m _ {1} \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l l} x _ {0} & x _ {1} & x _ {2} \\ m _ {0} & m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{l l l} x _ {2} & x _ {3} & x _ {0} \\ m _ {2} & m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l l} x _ {1} & x _ {2} & x _ {3} \\ m _ {1} & m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l l} x _ {3} & x _ {0} & x _ {1} \\ m _ {3} & m _ {0} & m _ {1} \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot 
{\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l l l} x _ {0} & x _ {1} & x _ {2} & x _ {3} \\ m _ {0} & m _ {1} & m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{c c} x _ {0} & x _ {1} \\ m _ {0} & m _ {1} \end{array} \right] \left[ \begin{array}{c c c c} x _ {1} & x _ {2} & x _ {3} & x _ {0} \\ m _ {1} & m _ {2} & m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{l l l l} x _ {2} & x _ {3} & x _ {0} & x _ {1} \\ m _ {2} & m _ {3} & m _ {0} & m _ {1} \end{array} \right] \dot {\sigma} 
\left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m _ {0}, m _ {1}, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l l l} x _ {3} & x _ {0} & x _ {1} & x _ {2} \\ m _ {3} & m _ {0} & m _ {1} & m _ {2} \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {1}, m _ {2}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m & m _ {1} \end{array} \right] \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m _ {2} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} \delta_ {k _ {0} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m & m _ {2} \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m _ {2} & m _ {3} 
\end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {0} k _ {1}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {3}, m _ {0}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m _ {0} & m \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m & m _ {3} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {1} k _ {2}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {0}, m _ {1}} \left[ \begin{array}{c c} x _ {0} & x _ {1} \\ m _ {0} & m _ {1} \end{array} \right] \left[ \begin{array}{c c} x _ {1} & x _ {2} \\ m _ {1} & m \end{array} \right] \left[ \begin{array}{c c} x _ {3} & x _ {0} \\ m & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {1}, m _ {2}} \left[ \begin{array}{c c c} x _ {0} & x _ {1} & x _ {2} \\ m & m _ {1} & m _ {2} \end{array} \right] \left[ \begin{array}{c c} x _ {2} & x _ {3} \\ m _ 
{2} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} \delta_ {k _ {0} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {2}, m _ {3}} \left[ \begin{array}{l l l} x _ {1} & x _ {2} & x _ {3} \\ m & m _ {2} & m _ {3} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m _ {3} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {0} k _ {1}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {3}, m _ {0}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m _ {0} & m \end{array} \right] \left[ \begin{array}{l l l} x _ {2} & x _ {3} & x _ {0} \\ m & m _ {3} & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {1} k _ {2}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {0}, m _ {1}} \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m _ {1} & m \end{array} \right] \left[ \begin{array}{l l l} x _ {3} & x _ {0} & x _ {1} \\ m & m _ {0} & m _ {1} \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} 
\left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {1}, m _ {2}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m & m _ {1} \end{array} \right] \left[ \begin{array}{l l l} x _ {1} & x _ {2} & x _ {3} \\ m _ {1} & m _ {2} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} \delta_ {k _ {0} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {2}, m _ {3}} \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m & m _ {2} \end{array} \right] \left[ \begin{array}{l l l} x _ {2} & x _ {3} & x _ {0} \\ m _ {2} & m _ {3} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {0} k _ {1}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {3}, m _ {0}} \left[ \begin{array}{c c} x _ {2} & x _ {3} \\ m & m _ {3} \end{array} \right] \left[ \begin{array}{c c c} x _ {3} & x _ {0} & x _ {1} \\ m _ {3} & m _ {0} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {1} k _ 
{2}} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m, m _ {0}, m _ {1}} \left[ \begin{array}{c c c} x _ {0} & x _ {1} & x _ {2} \\ m _ {0} & m _ {1} & m \end{array} \right] \left[ \begin{array}{c c} x _ {3} & x _ {0} \\ m & m _ {0} \end{array} \right] \ddot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {1}, m _ {2}} \left[ \begin{array}{l l l l} x _ {0} & x _ {1} & x _ {2} & x _ {3} \\ m & m _ {1} & m _ {2} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) \\ W _ {m _ {1} k _ {1}} ^ {(L)} W _ {m _ {2} k _ {2}} ^ {(L)} \delta_ {k _ {0} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {2}, m _ {3}} \left[ \begin{array}{l l l l} x _ {1} & x _ {2} & x _ {3} & x _ {0} \\ m & m _ {2} & m _ {3} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {2}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) \\ W _ {m _ {2} k _ {2}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {0} k _ {1}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {3}, m _ {0}} \left[ \begin{array}{c c c c} x _ {2} & x _ {3} & x _ {0} & x _ {1} \\ m & m _ {3} & m _ {0} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ 
{(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {3}} ^ {(L)} (x _ {3})\right) \\ W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {3} k _ {3}} ^ {(L)} \delta_ {k _ {1} k _ {2}} \\ + n _ {L} ^ {- 2} \sum_ {m, m _ {0}, m _ {1}} \left[ \begin{array}{l l l l} x _ {3} & x _ {0} & x _ {1} & x _ {2} \\ m & m _ {0} & m _ {1} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m _ {0}} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m _ {1}} ^ {(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) \\ W _ {m _ {0} k _ {0}} ^ {(L)} W _ {m _ {1} k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m ^ {\prime}} \left[ \begin{array}{l l} x _ {0} & x _ {1} \\ m & m ^ {\prime} \end{array} \right] \left[ \begin{array}{l l} x _ {2} & x _ {3} \\ m ^ {\prime} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) \\ \delta_ {k _ {0} k _ {1}} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m, m ^ {\prime}} \left[ \begin{array}{l l} x _ {1} & x _ {2} \\ m & m ^ {\prime} \end{array} \right] \left[ \begin{array}{l l} x _ {3} & x _ {0} \\ m ^ {\prime} & m \end{array} \right] \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x _ {1})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m ^ {\prime}} ^ {(L)} (x _ {3})\right) \\ \delta_ {k _ {0} k _ {3}} \delta_ {k _ {1} k _ {2}} \\ \end{array} +$$ + +Even though this is a very large formula one can notice that 
most terms are "rotations of each other". Moreover, as $n_1, \ldots, n_{L-1} \to \infty$, all terms containing either a $\Psi^{(L)}$, an $\Omega^{(L)}$, or a $\Gamma^{(L)}$ vanish. For the remaining terms, we may replace the NTKs $\Theta^{(L)}$ by their limit, and as a result $\Psi_{k_0, k_1, k_2, k_3}^{(L+1)}(x_{i_0}, x_{i_1}, x_{i_2}, x_{i_3})$ converges to + +$$ +\begin{array}{l} n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \Theta_ {\infty} ^ {(L)} (x _ {3}, x _ {0}) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m k _ {0}} ^ {(L)} W _ {m k _ {1}} ^ {(L)} W _ {m k _ {2}} ^ {(L)} W _ {m k _ {3}} ^ {(L)} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m k _ {1}} ^ {(L)} W _ {m k _ {2}} ^ {(L)} \delta_ {k _ {0} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \Theta_ {\infty} ^ {(L)} (x _ {3}, x _ {0}) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m k _ {2}} ^ {(L)} W _ {m k _ {3}} ^ {(L)} \delta_ {k _ {0} k _ {1}} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _
{0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \Theta_ {\infty} ^ {(L)} (x _ {3}, x _ {0}) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})) \dot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \ddot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m k _ {0}} ^ {(L)} W _ {m k _ {3}} ^ {(L)} \delta_ {k _ {1} k _ {2}} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {3}, x _ {0}) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})) \ddot {\sigma} (\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) W _ {m k _ {0}} ^ {(L)} W _ {m k _ {1}} ^ {(L)} \delta_ {k _ {2} k _ {3}} \\ \end{array} +$$ + +$$ +\begin{array}{l} + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {0}, x _ {1}) \Theta_ {\infty} ^ {(L)} (x _ {2}, x _ {3}) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) \delta_ {k _ {0} k _ {1}} \delta_ {k _ {2} k _ {3}} \\ + n _ {L} ^ {- 2} \sum_ {m} \Theta_ {\infty} ^ {(L)} (x _ {1}, x _ {2}) \Theta_ {\infty} ^ {(L)} (x _ {3}, x _ {0}) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {0})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {1})\right) \\ \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {2})\right) \dot {\sigma} \left(\tilde {\alpha} _ {m} ^ {(L)} (x _ {3})\right) \delta_ {k _ {0} k _ {3}} \delta_ {k _ {1} k _ {2}} \\ \end{array} +$$ + +And all these sums vanish as $n_L \to \infty$ thanks to the prefactor $n_L^{-2}$ , proving the vanishing of $\Psi_{k_0,k_1,k_2,k_3}^{(L + 
1)}(x_{i_0},x_{i_1},x_{i_2},x_{i_3})$ in the infinite width limit. + +During training, the activations $\tilde{\alpha}_m^{(L)}(x)$ and weights $W_{mk}^{(L)}$ move at a rate of $1 / \sqrt{n_L}$, which induces a change to $\Psi^{(L + 1)}$ of order $n_L^{-3 / 2}$; this change vanishes in the infinite width limit. + +# D ORTHOGONALITY OF $I$ AND $S$ + +Using the vanishing of the tensor $\Gamma^{(L)}$ proven in Lemma 2, we can easily prove the orthogonality of $I$ and $S$ stated in Proposition 5: + +Proposition 5. For any loss $C$ with BGOSS and $\sigma \in C_b^4 (\mathbb{R})$, we have uniformly over $[0,T]$ + +$$ +\lim _ {n _ {L - 1} \to \infty} \dots \lim _ {n _ {1} \to \infty} \| I S \| _ {F} = 0. +$$ + +As a consequence, $\lim_{n_{L - 1}\to \infty}\dots \lim_{n_1\to \infty}\operatorname {Tr}\left([I + S]^k\right) - \left[\operatorname {Tr}\left(I^k\right) + \operatorname {Tr}\left(S^k\right)\right] = 0.$ + +Proof. The squared Frobenius norm of $IS$ is equal to + +$$ +\begin{array}{l} \| I S \| _ {F} ^ {2} = \left\| \mathcal {D} Y \mathcal {H} C (\mathcal {D} Y) ^ {T} (\nabla C \cdot \mathcal {H} Y) \right\| _ {F} ^ {2} \\ = \sum_{p_1, p_2 = 1}^{P} \left( \sum_{p = 1}^{P} \sum_{i_1, i_2 = 1}^{N} \sum_{k_1, k_2 = 1}^{n_L} \partial_{\theta_{p_1}} f_{\theta, k_1}(x_{i_1}) \, c_{k_1}^{\prime \prime}(x_{i_1}) \, \partial_{\theta_p} f_{\theta, k_1}(x_{i_1}) \, \partial_{\theta_p, \theta_{p_2}}^{2} f_{\theta, k_2}(x_{i_2}) \, c_{k_2}^{\prime}(x_{i_2}) \right)^{2} \\ = \sum_{i_1, i_2, i_1^{\prime}, i_2^{\prime} = 1}^{N} \sum_{k_1, k_2, k_1^{\prime}, k_2^{\prime} = 1}^{n_L} c_{k_1}^{\prime \prime}(x_{i_1}) \, c_{k_1^{\prime}}^{\prime \prime}(x_{i_1^{\prime}}) \, c_{k_2}^{\prime}(x_{i_2}) \, c_{k_2^{\prime}}^{\prime}(x_{i_2^{\prime}}) \, \Theta_{k_1, k_1^{\prime}}(x_{i_1}, x_{i_1^{\prime}}) \, \Gamma_{k_1, k_2, k_2^{\prime}, k_1^{\prime}}(x_{i_1}, x_{i_2}, x_{i_2^{\prime}}, x_{i_1^{\prime}}) \\ \end{array} +$$ + +and $\Gamma$ vanishes as $n_1, \ldots, n_{L-1} \to \infty$ by Lemma 2. + +The $k$-th moment $\operatorname{Tr}\left([I + S]^k\right)$ is equal to the sum of $\operatorname{Tr}(A_1 \cdots A_k)$ over all words $A_1 \ldots A_k$ with letters $A_i \in \{I, S\}$. The difference $\operatorname{Tr}\left([I + S]^k\right) - \left[\operatorname{Tr}\left(I^k\right) + \operatorname{Tr}\left(S^k\right)\right]$ is hence equal to the sum over all mixed words, i.e. words $A_1 \ldots A_k$ which contain at least one $I$ and one $S$. Such words must contain two consecutive letters $A_m A_{m+1}$, one equal to $I$ and the other equal to $S$. We can then bound the trace by + +$$ +\left| \operatorname {Tr} \left(A _ {1} \dots A _ {k}\right) \right| \leq \left\| A _ {1} \right\| _ {F} \dots \left\| A _ {m - 1} \right\| _ {F} \left\| A _ {m} A _ {m + 1} \right\| _ {F} \left\| A _ {m + 2} \right\| _ {F} \dots \left\| A _ {k} \right\| _ {F}, +$$ + +which vanishes in the infinite width limit because $\| I\| _F$ and $\| S\| _F$ are bounded and $\| A_{m}A_{m + 1}\|_{F}$ vanishes: since both $I$ and $S$ are symmetric, $\| A_{m}A_{m + 1}\|_{F} = \| IS\|_{F}$ whether the pair is $IS$ or $SI$.
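For concreteness, here is how the bound plays out in the smallest mixed case $k = 3$ (an illustrative sketch added here, not part of the original proof). The word $IIS$ gives

$$
\left| \operatorname{Tr}\left(I \cdot IS\right) \right| \leq \| I \|_F \, \| IS \|_F,
$$

using $|\operatorname{Tr}(AB)| \leq \|A\|_F \|B\|_F$ (Cauchy-Schwarz for the Frobenius inner product). Since $\|I\|_F$ stays bounded while $\|IS\|_F \to 0$, this trace vanishes; the other five mixed words of length three ($ISI$, $SII$, $ISS$, $SIS$, $SSI$) are bounded the same way after a cyclic rotation of the trace, giving $\operatorname{Tr}\left([I+S]^3\right) - \operatorname{Tr}\left(I^3\right) - \operatorname{Tr}\left(S^3\right) \to 0$ in the infinite width limit.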
\ No newline at end of file diff --git a/theasymptoticspectrumofthehessianofdnnthroughouttraining/images.zip b/theasymptoticspectrumofthehessianofdnnthroughouttraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9d1889a9a7e6e24fde1f358d4959e1770b80041d --- /dev/null +++ b/theasymptoticspectrumofthehessianofdnnthroughouttraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d7a516320d3e8da94277c0e7e28089f828b88ce41d3374c7023e1a06f310dbc +size 2140792 diff --git a/theasymptoticspectrumofthehessianofdnnthroughouttraining/layout.json b/theasymptoticspectrumofthehessianofdnnthroughouttraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..59f5a2f20b7bfaed97cfa5abbf70175e83c733d1 --- /dev/null +++ b/theasymptoticspectrumofthehessianofdnnthroughouttraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2894529bbbd800ea3602c46e649d1fbcf09cdd05a9938da6a6c24c8abefb170 +size 1204044 diff --git a/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_content_list.json b/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d03d38d7a0f62b50e9db8cb3a8703b38bc53b176 --- /dev/null +++ b/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a489f4b2a70e41ed13f1b09c8670dc416923fd6ecabc15cca8ddb3d5b7e277ac +size 70538 diff --git a/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_model.json b/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_model.json new file mode 100644 index 0000000000000000000000000000000000000000..cd2a4a8c5ac55b56f2f465a64616b698efceb34e --- /dev/null +++ 
b/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce4e6cabb773f9913bb8ffa17a1652cd7e1e87f18c7ce391a8e3baf2e9b76b36 +size 85701 diff --git a/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_origin.pdf b/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8b88ada93af3ef8cbf732a20ffc55c45c948707f --- /dev/null +++ b/thecuriouscaseofneuraltextdegeneration/9d182503-9ef9-4862-a724-87a89aa85845_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39076f52c634534732ece265d048f11875c4d3f614de1980fcbaabe36f0ebc0a +size 4709621 diff --git a/thecuriouscaseofneuraltextdegeneration/full.md b/thecuriouscaseofneuraltextdegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dc771d4f46ebfb7d41fdcc558ae6b7088e6ccb5e --- /dev/null +++ b/thecuriouscaseofneuraltextdegeneration/full.md @@ -0,0 +1,269 @@ +# THE CURIOUS CASE OF NEURAL TEXT DeGENERATION + +Ari Holtzman†‡ Jan Buys§† Li Du† Maxwell Forbes†‡ Yejin Choi†‡ + +†Paul G. Allen School of Computer Science & Engineering, University of Washington + +$\ddagger$ Allen Institute for Artificial Intelligence + +$^{\S}$ Department of Computer Science, University of Cape Town + +{ahai,dul2,mbforbes,yejin}@cs.washington.edu,jbuys@cs.uct.ac.za + +# ABSTRACT + +Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). 
The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops. + +To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. + +To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text. + +Context: In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English. 
+ +# Beam Search, $b = 32$: + +"The study, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), was conducted by researchers from the Universidad Nacional Autónoma de México (UNAM) and the Universidad Nacional Autónoma de México (UNAM/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de ... + +# Pure Sampling: + +They were cattle called Bolivian Cavalleros; they live in a remote desert uninterrupted by town, and they speak huge, beautiful, paradisacical Bolivian linguistic thing. They say, 'Lunch, marge.' They don't tell what the lunch is," director Professor Chuperas Omwell told Sky News. "They've only been talking to scientists, like we're being interviewed by TV reporters. We don't even stick around to be interviewed by TV reporters. Maybe that's how they figured out that they're cosplaying as the Bolivian Cavalleros." + +Figure 1: Even with substantial human context and the powerful GPT-2 Large language model, Beam Search (size 32) leads to degenerate repetition (highlighted in blue) while pure sampling leads to incoherent gibberish (highlighted in red). When $b \geq 64$, both GPT-2 Large and XL (774M and 1542M parameters, respectively) prefer to stop generating immediately after the given context. + +# 1 INTRODUCTION + +On February 14th 2019, OpenAI surprised the scientific community with an impressively high-quality article about Ovid's Unicorn, written by GPT-2. Notably, the top-quality generations obtained from the model rely on randomness in the decoding method, in particular through top-$k$ sampling that samples the next word from the top $k$ most probable choices (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019), instead of aiming to decode text that maximizes likelihood.
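The top-$k$ truncation just mentioned can be sketched in a few lines of plain Python (a toy illustration with a made-up five-token distribution, not the authors' implementation):

```python
import random

def top_k_sample(probs, k, rng=random):
    """Sample a token index after keeping only the k most probable tokens."""
    # Indices of the k largest probabilities.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    # Renormalize the truncated distribution and draw from it.
    total = sum(probs[i] for i in top)
    r = rng.random() * total
    acc = 0.0
    for i in top:
        acc += probs[i]
        if r <= acc:
            return i
    return top[-1]  # guard against floating-point rounding

# Toy next-token distribution over a five-token vocabulary.
probs = [0.4, 0.3, 0.15, 0.1, 0.05]
rng = random.Random(0)
counts = [0] * len(probs)
for _ in range(10000):
    counts[top_k_sample(probs, 2, rng)] += 1
print(counts[2:])  # the three tail tokens are never drawn: [0, 0, 0]
```

With $k = 2$ the tail is cut off entirely; note that $k$ is fixed in advance, regardless of how peaked or flat the model's distribution happens to be at a given step.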
+ +In fact, decoding strategies that optimize for output with high probability, such as beam search, lead to text that is incredibly degenerate, even when using state-of-the-art models such as GPT-2 Large, as shown in Figure 1. This may seem counter-intuitive, as one would expect that good models would assign higher probability to more human-like, grammatical text. Indeed, language models do generally assign high scores to well-formed text, yet the highest scores for longer texts are often generic, repetitive, and awkward. Figure 2 exposes how different the distribution of probabilities assigned to beam search decoded text and naturally occurring text really are. + +Perhaps equally surprising is the right side of Figure 1, which shows that pure sampling — sampling directly from the probabilities predicted by the model — results in text that is incoherent and almost unrelated to the context. Why is text produced by pure sampling so degenerate? In this work we show that the "unreliable tail" is to blame. This unreliable tail is composed of tens of thousands of candidate tokens with relatively low probability that are over-represented in the aggregate. + +To overcome these issues we introduce Nucleus Sampling (§3.1). The key intuition of Nucleus Sampling is that the vast majority of probability mass at each time step is concentrated in the nucleus, a small subset of the vocabulary that tends to range between one and a thousand candidates. Instead of relying on a fixed top- $k$ , or using a temperature parameter to control the shape of the distribution without sufficiently suppressing the unreliable tail, we propose sampling from the top- $p$ portion of the probability mass, expanding and contracting the candidate pool dynamically. + +In order to compare current methods to Nucleus Sampling, we compare various distributional properties of generated text to the reference distribution, such as the likelihood of veering into repetition and the perplexity of generated text. 
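A minimal sketch of this top-$p$ truncation, again on toy distributions (illustrative only, not the authors' implementation):

```python
import random

def nucleus_sample(probs, p, rng=random):
    """Sample from the smallest set of most-probable tokens with mass >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, mass = [], 0.0
    for i in order:
        nucleus.append(i)
        mass += probs[i]
        if mass >= p:  # the nucleus expands only until it covers p of the mass
            break
    # Renormalize over the nucleus and draw from it.
    r = rng.random() * mass
    acc = 0.0
    for i in nucleus:
        acc += probs[i]
        if r <= acc:
            return i
    return nucleus[-1]  # guard against floating-point rounding

# A peaked distribution yields a tiny nucleus; a flat one yields a large nucleus.
peaked = [0.90, 0.05, 0.03, 0.01, 0.01]
flat = [0.22, 0.21, 0.20, 0.19, 0.18]
rng = random.Random(0)
drawn_peaked = {nucleus_sample(peaked, 0.9, rng) for _ in range(1000)}
drawn_flat = {nucleus_sample(flat, 0.9, rng) for _ in range(1000)}
print(len(drawn_peaked), len(drawn_flat))  # 1 5
```

With the same $p = 0.9$, the candidate pool contracts to a single token when the toy model is confident and expands to the whole vocabulary when it is not, which is exactly the dynamic behavior a fixed $k$ cannot provide.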
+ +The latter reveals that text generated by maximization or top- $k$ sampling is too probable, indicating a lack of diversity and divergence in vocabulary usage from the human distribution. On the other hand, pure sampling produces text that is significantly less likely than the gold, corresponding to lower generation quality. + +Vocabulary usage and Self-BLEU (Zhu et al., 2018) statistics reveal that high values of $k$ are needed to make top- $k$ sampling match human statistics. Yet, generations based on high values of $k$ often have high variance in likelihood, hinting at qualitatively observable incoherency issues. Nucleus Sampling can easily match reference perplexity through tuning the value of $p$ , avoiding the incoherence caused by setting $k$ high enough to match distributional statistics. + +Finally, we perform Human Unified with Statistical Evaluation (HUSE; Hashimoto et al., 2019) to jointly assess the overall quality and diversity of the decoding strategies, which cannot be captured using either human or automatic evaluation alone. The HUSE evaluation demonstrates that Nucleus Sampling is the best overall decoding strategy. We include generated examples for qualitative analysis - see Figure 3 for a representative example, and further examples in the appendix. + +![](images/d25e31b82d8c05c9d482f8de024c06d579b6e922629833dbd748876ccae14841.jpg) + +# Beam Search + +...to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and... + +# Human + +...which grant increased life span and three years warranty. 
The Antec HCG series consists of five models with capacities spanning from 400W to 900W. Here we should note that we have already tested the HCG-620 in a previous review and were quite satisfied with its performance. In today's review we will rigorously test the Antec HCG-520, which as its model number implies, has 520W capacity and contrary to Antec's strong beliefs in multi-rail PSUs is equipped...

Figure 2: The probability assigned to tokens generated by Beam Search and humans, given the same context. Note the increased variance that characterizes human text, in contrast with the endless repetition of text decoded by Beam Search.

# 2 BACKGROUND

# 2.1 TEXT GENERATION DECODING STRATEGIES

A number of recent works have alluded to the disadvantages of generation by maximization, which tends to generate output with high grammaticality but low diversity (Kulikov et al., 2019; Holtzman et al., 2018; Fan et al., 2018). Generative Adversarial Networks (GANs) have been a prominent research direction (Yu et al., 2017; Xu et al., 2018), but recent work has shown that when quality and diversity are considered jointly, GAN-generated text fails to outperform generations from language models (Caccia et al., 2018; Tevet et al., 2019; Semeniuta et al., 2018). Work on neural dialog systems has proposed methods for diverse beam search, using a task-specific diversity scoring function or constraining beam hypotheses to be sufficiently different (Li et al., 2016a; Vijayakumar et al., 2018; Kulikov et al., 2019; Pal et al., 2006). While such utility functions encourage desirable properties in generations, they do not remove the need to choose an appropriate decoding strategy, and we believe that Nucleus Sampling will have complementary advantages in such approaches. Finally, Welleck et al.
(2020) begin to address the problem of neural text degeneration through an "unlikelihood loss", which decreases training loss on repeated tokens and thus implicitly reduces gradients on frequent tokens as well. Our focus is on exposing neural text degeneration and providing a decoding solution that can be used with arbitrary models, but future work will likely combine training-time and inference-time solutions.

# 2.2 OPEN-ENDED VS. DIRECTED GENERATION

Many text generation tasks are defined through (input, output) pairs, such that the output is a constrained transformation of the input. Example applications include machine translation (Bahdanau et al., 2015), data-to-text generation (Wiseman et al., 2017), and summarization (Nallapati et al., 2016). We refer to these tasks as directed generation. Typically encoder-decoder architectures are used, often with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) or using attention-based architectures such as the Transformer (Vaswani et al., 2017). Generation is usually performed using beam search; since output is tightly scoped by the input, repetition and genericness are not as problematic. Still, similar issues have been reported when using large beam sizes (Koehn & Knowles, 2017) and more recently with exact inference (Stahlberg & Byrne, 2019), a counter-intuitive observation since more comprehensive search helps maximize probability.

Open-ended generation, which includes conditional story generation and contextual text continuation (as in Figure 1), has recently become a promising research direction due to significant advances in neural language models (Clark et al., 2018; Holtzman et al., 2018; Fan et al., 2018; Peng et al., 2018; Radford et al., 2019). While the input context restricts the space of acceptable output generations, there is a considerable degree of freedom in what can plausibly come next, unlike in directed generation settings.
Our work addresses the challenges faced by neural text generation with this increased level of freedom, but we note that some tasks, such as goal-oriented dialog, may fall somewhere in between open-ended and directed generation.

# 3 LANGUAGE MODEL DECODING

Given an input text passage as context, the task of open-ended generation is to generate text that forms a coherent continuation from the given context. More formally, given a sequence of $m$ tokens $x_{1} \ldots x_{m}$ as context, the task is to generate the next $n$ continuation tokens to obtain the completed sequence $x_{1} \ldots x_{m+n}$ . We assume that models compute $P(x_{1:m+n})$ using the common left-to-right decomposition of the text probability,

$$
P(x_{1:m+n}) = \prod_{i=1}^{m+n} P(x_i \mid x_1 \ldots x_{i-1}), \tag{1}
$$

which is used to generate the continuation token-by-token using a particular decoding strategy.

**Maximization-based decoding** The most commonly used decoding objective, in particular for directed generation, is maximization-based decoding. Assuming that the model assigns higher probability to higher quality text, these decoding strategies search for the continuation with the highest likelihood. Since finding the optimum argmax sequence from recurrent neural language models or Transformers is not tractable (Chen et al., 2018), common practice is to use beam search (Li et al., 2016b; Shen et al., 2017; Wiseman et al., 2017).

![](images/39d8e7918864e3dbe815100cff2ca8166b78392d2794f94ecea859632f025635.jpg)
Figure 3: Example generations continuing an initial sentence. Maximization and top- $k$ truncation methods lead to copious repetition (highlighted in blue), while sampling with and without temperature tends to lead to incoherence (highlighted in red). Nucleus Sampling largely avoids both issues.
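The factorization in Equation 1 means any decoding strategy reduces to repeatedly choosing the next token from the conditional distribution. As a minimal sketch, here is a greedy (argmax) decoder in numpy; `next_token_probs` is a hypothetical stand-in for a real language model's conditional distribution:

```python
import numpy as np

def greedy_decode(next_token_probs, context, n):
    """Extend `context` by n tokens, taking the argmax of P(x_i | x_1 ... x_{i-1})."""
    tokens = list(context)
    for _ in range(n):
        probs = next_token_probs(tokens)      # conditional distribution over the vocabulary
        tokens.append(int(np.argmax(probs)))  # maximization-based choice
    return tokens

# Toy "model" that always favors token 0, mimicking the repetition trap:
toy_model = lambda ctx: np.array([0.6, 0.3, 0.1])
print(greedy_decode(toy_model, [2], 4))  # -> [2, 0, 0, 0, 0]
```

Beam search generalizes this loop by keeping the $b$ highest-scoring partial sequences at each step instead of a single argmax.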
However, several recent studies on open-ended generation have reported that maximization-based decoding does not lead to high quality text (Fan et al., 2018; Holtzman et al., 2018).

# 3.1 NUCLEUS SAMPLING

We propose a new stochastic decoding method: Nucleus Sampling. The key idea is to use the shape of the probability distribution to determine the set of tokens to be sampled from. Given a distribution $P(x|x_{1:i-1})$ , we define its top- $p$ vocabulary $V^{(p)} \subset V$ as the smallest set such that

$$
\sum_{x \in V^{(p)}} P(x \mid x_{1:i-1}) \geq p. \tag{2}
$$

![](images/be66b9cdc9228db9000cd442dca6a6077237b01a11967887e18b88fb414bd9a9.jpg)
Figure 4: The probability of a repeated phrase increases with each repetition, creating a positive feedback loop. We found this effect to hold for the vast majority of phrases we tested, regardless of phrase length or if the phrases were sampled randomly rather than taken from human text.

![](images/88824577876341a09eda43d5bcea70a852178cd5439d3b60225a252388a273ff.jpg)
Figure 5: The probability mass assigned to partial human sentences. Flat distributions lead to many moderately probable tokens, while peaked distributions concentrate most probability mass into just a few tokens. The presence of flat distributions makes the use of a small $k$ in top- $k$ sampling problematic, while the presence of peaked distributions makes large $k$ 's problematic.

Let $p' = \sum_{x \in V^{(p)}} P(x|x_{1:i-1})$ . The original distribution is re-scaled to a new distribution, from which the next word is sampled:

$$
P'(x \mid x_{1:i-1}) = \begin{cases} P(x \mid x_{1:i-1}) / p' & \text{if } x \in V^{(p)} \\ 0 & \text{otherwise.} \end{cases} \tag{3}
$$

In practice this means selecting the highest probability tokens whose cumulative probability mass exceeds the pre-chosen threshold $p$ .
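This selection-and-rescale step (Equations 2 and 3) can be sketched in a few lines of numpy; `probs` is a toy stand-in for the model's full next-token distribution:

```python
import numpy as np

def nucleus_filter(probs, p=0.95):
    """Zero out all tokens outside the top-p nucleus, then renormalize (Eq. 3)."""
    order = np.argsort(probs)[::-1]              # tokens by descending probability
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # smallest set with mass >= p (Eq. 2)
    nucleus = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[nucleus] = probs[nucleus] / probs[nucleus].sum()
    return filtered

probs = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
filtered = nucleus_filter(probs, p=0.7)  # nucleus = tokens 0 and 1 (mass 0.8 >= 0.7)
next_token = np.random.choice(len(probs), p=filtered)  # sample from the rescaled distribution
```

Decoding then proceeds token-by-token, recomputing the nucleus from the model's new conditional distribution at every step.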
The size of the sampling set will adjust dynamically based on the shape of the probability distribution at each time step. For high values of $p$ , this is a small subset of the vocabulary that takes up the vast majority of the probability mass — the nucleus.

# 3.2 TOP- $k$ SAMPLING

Top- $k$ sampling has recently become a popular alternative sampling procedure (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019). Nucleus Sampling and top- $k$ both sample from truncated Neural LM distributions, differing only in the strategy of where to truncate. Choosing where to truncate can be interpreted as determining the generative model's trustworthy prediction zone.

At each time step, the top $k$ possible next tokens are sampled from according to their relative probabilities. Formally, given a distribution $P(x|x_{1:i-1})$ , we define its top- $k$ vocabulary $V^{(k)} \subset V$ as the set of size $k$ which maximizes $\sum_{x \in V^{(k)}} P(x|x_{1:i-1})$ . Let $p' = \sum_{x \in V^{(k)}} P(x|x_{1:i-1})$ . The distribution is then re-scaled as in equation 3, and sampling is performed based on that distribution. Note that the scaling factor $p'$ can vary wildly at each time-step, in contrast to Nucleus Sampling.

Difficulty in choosing a suitable value of $k$ : While top- $k$ sampling leads to considerably higher quality text than either beam search or sampling from the full distribution, the use of a constant $k$ is sub-optimal across varying contexts. As illustrated on the left of Figure 5, in some contexts the head of the next word distribution can be flat across tens or hundreds of reasonable options (e.g. nouns or verbs in generic contexts), while in other contexts most of the probability mass is concentrated in one or a small number of tokens, as on the right of the figure.
Therefore if $k$ is small, in some contexts there is a risk of generating bland or generic text, while if $k$ is large the top- $k$ vocabulary will include inappropriate candidates which will have their probability of being sampled increased by the renormalization. Under Nucleus Sampling, the number of candidates considered rises and falls dynamically, corresponding to the changes in the model's confidence region over the vocabulary, which top- $k$ sampling fails to capture for any one choice of $k$ .

# 3.3 SAMPLING WITH TEMPERATURE

Another common approach to sampling-based generation is to shape a probability distribution through temperature (Ackley et al., 1985). Temperature sampling has been applied widely to text generation (Ficler & Goldberg, 2017; Fan et al., 2018; Caccia et al., 2018). Given the logits $u_{1:|V|}$ and temperature $t$ , the softmax is re-estimated as

$$
p(x = V_l \mid x_{1:i-1}) = \frac{\exp(u_l / t)}{\sum_{l'} \exp(u_{l'} / t)}. \tag{4}
$$

Setting $t \in [0,1)$ skews the distribution towards high probability events, which implicitly lowers the mass in the tail distribution. Low temperature sampling has also been used to partially alleviate the issues of top- $k$ sampling discussed above, by shaping the distribution before top- $k$ sampling (Radford et al., 2018; Fan et al., 2018). However, recent analysis has shown that, while lowering the temperature improves generation quality, it comes at the cost of decreasing diversity (Caccia et al., 2018; Hashimoto et al., 2019).
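The two baselines just described can be sketched the same way: top- $k$ truncation keeps a fixed-size candidate set, while temperature reshapes the entire distribution from the logits as in Equation 4. A numpy sketch over toy values standing in for real model outputs:

```python
import numpy as np

def top_k_filter(probs, k):
    """Keep the k most probable tokens and renormalize; all others get probability 0."""
    keep = np.argsort(probs)[::-1][:k]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep] / probs[keep].sum()
    return filtered

def temperature_softmax(logits, t):
    """Eq. 4: softmax of logits scaled by 1/t; t < 1 sharpens the distribution."""
    z = logits / t
    z = z - z.max()              # subtract the max for numerical stability
    weights = np.exp(z)
    return weights / weights.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0])
flat = temperature_softmax(logits, t=1.0)
sharp = temperature_softmax(logits, t=0.5)
# Lowering t moves mass toward the top token (sharp[0] > flat[0]),
# suppressing, but never zeroing, the tail.
```

Note that top- $k$'s renormalization constant $p'$ depends on how peaked the distribution is, which is exactly the quantity Nucleus Sampling holds (approximately) fixed.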
+ +# 4 LIKELIHOOD EVALUATION + +# 4.1 EXPERIMENTAL SETUP + +While many neural network architectures have been proposed for language modeling, including LSTMs (Sundermeyer et al., 2012) and convolutional networks (Dauphin et al., 2017), the Transformer architecture (Vaswani et al., 2017) has been the most successful in the extremely large-scale training setups in recent literature (Radford et al., 2018; 2019). In this study we use the Generatively Pre-trained Transformer, version 2 (GPT2; Radford et al., 2019), which was trained on WebText, a 40GB collection of text scraped from the web. We perform experiments using the Large model (762M parameters). Our analysis is based on generating 5,000 text passages, which end upon reaching an end-of-document token or a maximum length of 200 tokens. Texts are generated conditionally, conditioned on the initial paragraph (restricted to 1-40 tokens) of documents in the held-out portion of WebText, except where otherwise mentioned. + +# 4.2 PERPLEXITY + +Our first evaluation is to compute the perplexity of generated text using various decoding strategies, according to the model that is being generated from. We compare these perplexities against that of the gold text (Figure 6). Importantly, we argue that the optimal generation strategy should produce text which has a perplexity close to that of the gold text: Even though the model has the ability to generate text that has lower perplexity (higher probability), such text tends to have low diversity and get stuck in repetition loops, as shown in §5 and illustrated in Figure 4. + +We see that perplexity of text obtained from pure sampling is worse than the perplexity of the gold. This indicates that the model is confusing itself: sampling too many unlikely tokens and creating context that makes it difficult to recover the human distribution of text, as in Figure 1. Yet, setting the temperature lower creates diversity and repetition issues, as we shall see in §5. 
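Concretely, the perplexity reported here is the exponentiated average negative log-likelihood the model assigns to the generated tokens. A minimal sketch, with per-token probabilities as stand-ins for real model outputs:

```python
import numpy as np

def perplexity(token_probs):
    """exp of the mean negative log-probability of the observed tokens."""
    token_probs = np.asarray(token_probs, dtype=float)
    return float(np.exp(-np.log(token_probs).mean()))

# A model that assigns every generated token probability 1/8 has perplexity 8:
print(perplexity([0.125] * 20))  # -> approximately 8.0
```

By this criterion, a decoding strategy should be tuned so that its generations match the gold text's perplexity, not so that perplexity is minimized.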
Even with our relatively fine-grained parameter sweep, Nucleus Sampling obtains closest perplexity to human text, as shown in Table 1. + +
| Method | Perplexity | Self-BLEU4 | Zipf Coefficient | Repetition % | HUSE |
|---|---|---|---|---|---|
| Human | 12.38 | 0.31 | 0.93 | 0.28 | - |
| Greedy | 1.50 | 0.50 | 1.00 | 73.66 | - |
| Beam, b=16 | 1.48 | 0.44 | 0.94 | 28.94 | - |
| Stochastic Beam, b=16 | 19.20 | 0.28 | 0.91 | 0.32 | - |
| Pure Sampling | 22.73 | 0.28 | 0.93 | 0.22 | 0.67 |
| Sampling, t=0.9 | 10.25 | 0.35 | 0.96 | 0.66 | 0.79 |
| Top-k=40 | 6.88 | 0.39 | 0.96 | 0.78 | 0.19 |
| Top-k=640 | 13.82 | 0.32 | 0.96 | 0.28 | 0.94 |
| Top-k=40, t=0.7 | 3.48 | 0.44 | 1.00 | 8.86 | 0.08 |
| Nucleus p=0.95 | 13.13 | 0.32 | 0.95 | 0.36 | 0.97 |
Table 1: Main results for comparing all decoding methods with selected parameters of each method. The numbers closest to human scores are in bold except for HUSE (Hashimoto et al., 2019), a combined human and statistical evaluation, where the highest (best) value is bolded. For Top- $k$ and Nucleus Sampling, HUSE is computed with interpolation rather than truncation (see §6.1).

# 4.3 NATURAL LANGUAGE DOES NOT MAXIMIZE PROBABILITY

One might wonder if the issue with maximization is a search error, i.e., there are higher quality sentences to which the model assigns higher probability than to the decoded ones, and beam search has simply failed to find them. Yet Figures 2 & 6 show that the per-token probability of natural text is, on average, much lower than that of text generated by beam search. Natural language rarely remains in a high probability zone for multiple consecutive time steps, instead veering into lower-probability but more informative tokens. Nor does natural language tend to fall into repetition loops, even though the model tends to assign high probability to such loops, as seen in Figure 4.

Why is human-written text not the most probable text? We conjecture that this is an intrinsic property of human language. Language models that assign probabilities one word at a time without a global model of the text will have trouble capturing this effect. Grice's Maxims of Communication (Grice, 1975) show that people optimize against stating the obvious. Thus, making every word as predictable as possible will be disfavored. This makes solving the problem simply by training larger models or improving neural architectures using standard per-word learning objectives unlikely: such models are forced to favor the lowest common denominator, rather than informative language.

# 5 DISTRIBUTIONAL STATISTICAL EVALUATION

# 5.1 ZIPF DISTRIBUTION ANALYSIS

In order to compare generations to the reference text, we begin by analyzing their use of vocabulary.
Zipf's law suggests that there is an exponential relationship between the rank of a word and its frequency in text. The Zipfian coefficient $s$ can be used to compare the distribution in a given text + +![](images/86a4491034ac7ee7088d5b1879f8112abda3f69552c41185ea6067fbb3199ec8.jpg) +Figure 6: Perplexities of generations from various decoding methods. Note that beam search has unnaturally low perplexities. A similar effect is seen using a temperature of 0.7 with top- $k$ as in both Radford et al. (2019) and Fan et al. (2018). Sampling, Top- $k$ , and Nucleus can all be calibrated to human perplexities, but the first two face coherency issues when their parameters are set this high. + +![](images/7200f5881baaa90035339c4271fa6896f4be007ffd256e16d5ddbaaadcb4510b.jpg) +Figure 7: A rank-frequency plot of the distributional differences between $n$ -gram frequencies of human and machine text. Sampling and Nucleus Sampling are by far the closest to the human distribution, while Beam Search clearly follows a very different distribution than natural language. + +![](images/32e031474dea13d9729b843137d8685c8b7383e614df58be5e3bf1e265b29a4d.jpg) +Figure 8: Self-BLEU calculated on the unconditional generations produced by stochastic decoding methods; lower Self-BLEU scores imply higher diversity. Horizontal blue and orange lines represent human self-BLEU scores. Note how common values of $t \in [0.5,1]$ and $k \in [1,100]$ result in high self-similarity, whereas "normal" values of $p \in [0.9,1)$ closely match the human distribution of text. + +to a theoretically perfect exponential curve, where $s = 1$ (Piantadosi, 2014). Figure 7 shows the vocabulary distributions along with estimated Zipf coefficients for selected parameters of different decoding methods. As expected, pure sampling is the closest to the human distribution, followed by Nucleus Sampling. 
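One simple way to estimate the coefficient is a least-squares fit of log-frequency against log-rank; the paper does not specify its estimator, so the following is an illustrative sketch:

```python
import numpy as np
from collections import Counter

def zipf_coefficient(tokens):
    """Estimate s in freq ∝ rank^(-s) as the negated slope of log-freq vs. log-rank."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

# A corpus whose word frequencies are exactly proportional to 1/rank gives s = 1:
corpus = [word for word, count in enumerate([1200, 600, 400, 300, 240, 200])
          for _ in range(count)]
print(round(zipf_coefficient(corpus), 3))  # -> 1.0
```

More careful analyses often use a maximum-likelihood power-law fit instead of least squares, but the slope fit suffices to compare decoding methods against the human coefficient.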
The visualization of the distribution shows that pure sampling slightly overestimates the use of rare words, likely one reason why pure sampling also has higher perplexity than human text. Furthermore, lower temperature sampling avoids sampling these rare words from the tail, which is why it has been used in some recent work (Fan et al., 2018; Radford et al., 2019).

# 5.2 SELF-BLEU

We follow previous work and compute Self-BLEU (Zhu et al., 2018) as a metric of diversity. Self-BLEU is calculated by computing the BLEU score of each generated document using all other generations in the evaluation set as references. Due to the expense of computing such an operation, we sample 1000 generations, each of which is compared with all 4999 other generations as references. A lower Self-BLEU score implies higher diversity. Figure 8 shows that Self-BLEU results largely follow those of the Zipfian distribution analysis as a diversity measure. It is worth noting that

![](images/726a0c5fdc8097bfab77fd9773414c9176a838098f71649554d107054feff428.jpg)
Likelihood of Degeneration into Repetition
Figure 9: We visualize how often different decoding methods get "stuck" in loops within the first 200 tokens. A phrase (minimum length 2) is considered a repetition when it repeats at least three times at the end of the generation. We label points with their parameter values except for $t$ and $p$ , which follow the x-axis. Values of $k$ greater than 100 are rarely used in practice and values of $p$ are usually in [0.9, 1); therefore Nucleus Sampling is far closer to the human distribution in its usual parameter range. Sampling with temperatures lower than 0.9 severely increases repetition. Finally, although beam search becomes less repetitive according to this metric as beam width increases, this is largely because average length gets shorter as $b$ increases (see Appendix A).
+ +very high values of $k$ and $t$ are needed to get close to the reference distribution, though these result in unnaturally high perplexity (§4). + +# 5.3 REPETITION + +One attribute of text quality that we can quantify is repetition. Figure 9 shows that Nucleus Sampling and top- $k$ sampling have the least repetition for reasonable parameter ranges. Generations from temperature sampling have more repetition unless very high temperatures are used, which we have shown negatively affects coherence (as measured by high perplexity). Further, all stochastic methods face repetition issues when their tuning parameters are set too low, which tends to overtruncate, mimicking greedy search. Therefore we conclude that only Nucleus Sampling satisfies all the distributional criteria for desirable generations. + +# 6 HUMAN EVALUATION + +# 6.1 HUMAN UNIFIED WITH STATISTICAL EVALUATION (HUSE) + +Statistical evaluations are unable to measure the coherence of generated text properly. While the metrics in previous sections gave us vital insights into the different decoding methods we compare, human evaluation is still required to get a full measure of the quality of the generated text. However, pure human evaluation does not take into account the diversity of the generated text; therefore we use HUSE (Hashimoto et al., 2019) to combine human and statistical evaluation. HUSE is computed by training a discriminator to distinguish between text drawn from the human and model distributions, based on only two features: The probability assigned by the language model, and human judgements of typicality of generations. Text that is close to the human distribution in terms of quality and diversity should perform well on both likelihood evaluation and human judgements. + +As explored in the previous sections, the current best-performing decoding methods rely on truncation of the probability distribution, which yields a probability of 0 for the vast majority of potential tokens. 
Initial exploration of applying HUSE directly led to top- $k$ and Nucleus Sampling receiving scores of nearly 0 due to truncation, despite humans favoring these methods. As a proxy, when generating the text used to compute HUSE, we interpolate (with mass 0.1) the original probability distribution with the top- $k$ and Nucleus Sampling distribution, smoothing the truncated distribution.

For each decoding algorithm we annotate 200 generations for typicality, with each generation receiving 20 annotations from 20 different annotators. This results in a total of 4000 annotations per decoding scheme. We use a KNN classifier to compute HUSE, as in the original paper, with $k = 13$ neighbors, which we found led to the highest accuracy in discrimination. The results in Table 1 show that Nucleus Sampling obtains the highest HUSE score, with Top- $k$ sampling performing second best.

# 6.2 QUALITATIVE ANALYSIS

Figure 3 shows representative example generations. Unsurprisingly, beam search gets stuck in a repetition loop it cannot escape. Of the stochastic decoding schemes, the output of full sampling is clearly the hardest to understand, even inventing a new word "umidauda", apparently a species of bird. The generation produced by Nucleus Sampling isn't perfect - the model appears to confuse whales with birds, and begins writing about those instead. Yet, top- $k$ sampling immediately veers off into an unrelated event. When top- $k$ sampling is combined with a temperature of 0.7, as is commonly done (Radford et al., 2019; Fan et al., 2018), the output devolves into repetition, exhibiting the classic issues of low-temperature decoding. More generations are available in Appendix B.

# 7 CONCLUSION

This paper provided a deep analysis into the properties of the most common decoding methods for open-ended language generation.
We have shown that likelihood maximizing decoding causes repetition and overly generic language usage, while sampling methods without truncation risk sampling from the low-confidence tail of a model's predicted distribution. Further, we proposed Nucleus Sampling as a solution that captures the region of confidence of language models effectively. In future work, we wish to dynamically characterize this region of confidence and include a more semantic utility function to guide the decoding process. + +# ACKNOWLEDGMENTS + +This research was supported in part by NSF (IIS-1524371), the National Science Foundation Graduate Research Fellowship under Grant No. DGE1256082, DARPA CwC through ARO (W911NF15-1-0543), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), the South African Centre for Artificial Intelligence Research, and the Allen Institute for AI. + +# REFERENCES + +David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive science, 9(1):147-169, 1985. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. Proceedings of the 2015 International Conference on Learning Representations, 2015. +Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. Language gans falling short. In Critiquing and Correcting Trends in Machine Learning: NeurIPS 2018 Workshop, 2018. URL http://arxiv.org/abs/1811.02549. +Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. Recurrent neural networks as weighted language recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2261-2271, New Orleans, Louisiana, June 2018. +Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. Neural text generation in stories using entity representations as context. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2250-2260, New Orleans, Louisiana, June 2018. +Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 933-941, 2017. + +Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 889-898, 2018. +Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pp. 94-104, 2017. +H Paul Grice. Logic and conversation. In P Cole and J L Morgan (eds.), Speech Acts, volume 3 of Syntax and Semantics, pp. 41-58. Academic Press, 1975. +Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019. +Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. In Proceedings of the Association for Computational Linguistics, 2018. +Philipp Koehn and Rebecca Knowles. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pp. 28-39, 2017. +Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. Importance of search and evaluation strategies in neural dialogue modeling. International Conference on Natural Language Generation, 2019. +Jiwei Li, Will Monroe, and Dan Jurafsky. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562, 2016a. 
Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1192-1202, 2016b.
Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1412-1421, 2015.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 280-290, 2016.
Chris Pal, Charles Sutton, and Andrew McCallum. Sparse forward-backward using minimum divergence beams for fast training of conditional random fields. In 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, volume 5, May 2006.
Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pp. 43-49, New Orleans, Louisiana, June 2018. doi: 10.18653/v1/W18-1505.
Steven T Piantadosi. Zipf's word frequency law in natural language: A critical review and future directions. Psychonomic Bulletin & Review, 21(5):1112-1130, 2014.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. Unpublished manuscript.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners, February 2019. URL https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf. Unpublished manuscript.
+Stanislau Semeniuta, Aliaksei Severyn, and Sylvain Gelly. On accurate evaluation of gans for language generation. arXiv preprint arXiv:1806.04936, 2018. + +Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pp. 6830-6841, 2017. +Felix Stahlberg and Bill Byrne. On nmt search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3347-3353, 2019. +Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association, 2012. +Guy Tevet, Gavriel Habib, Vered Shwartz, and Jonathan Berant. Evaluating text gans as language models. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2241-2247, 2019. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017. +Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. Diverse beam search for improved description of complex scenes. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. +Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. Neural text generation with unlikelihood training. In Proceedings of the International Conference on Learning Representations (ICLR), 2020. +Sam Wiseman, Stuart Shieber, and Alexander Rush. Challenges in data-to-document generation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2253-2263, Copenhagen, Denmark, September 2017.
+Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3940-3949, Brussels, Belgium, October 2018.
+Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI, 2017.
+Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. Texygen: A benchmarking platform for text generation models. SIGIR, 2018.
+
+# A BEAM WIDTH EFFECT
+
+![](images/373d75013c443c7e2245752b686c16e7fc47c48912ae5512ca67a5fabb9ae823.jpg)
+Beam Width vs Distinct Trigrams
+Figure 10: The total number of trigrams produced by Beam Search with varying beam widths, with gold (human) data for comparison. Note how the average length of generations goes down linearly with beam width, while the number of distinct trigrams stays constant and extremely low in comparison to gold data.
+
+# B EXAMPLE GENERATIONS
+
+We include a set of examples for further qualitative comparison.
+
+![](images/81731a1e0af14f7ce53069657454eccc3d25cf32f99565761add20b5f700c294.jpg)
+Figure 11: More example generations from an initial tag line. All generations available at https://github.com/ari-holtzman/degen
+
+![](images/741c7ddcb13c7c2916633ec2ef3818ea96b45e57b1e05bd6a518c16d63db76b8.jpg)
+Figure 12: More example generations from an initial tag line. Note that Pure Sampling and Nucleus Sampling are the only algorithms that can escape the repetition loop, with Nucleus Sampling's generation far closer in style to the ground truth text.
All generations available at https://github.com/ari-holtzman/degen + +![](images/fafbf27bb07ae4a413bcc65b1ef3d6ef3e353a5bfa5d211a40bd83955f8f85f3.jpg) +Figure 13: More example generations from an initial tag line. All generations available at https://github.com/ari-holtzman/degen \ No newline at end of file diff --git a/thecuriouscaseofneuraltextdegeneration/images.zip b/thecuriouscaseofneuraltextdegeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..41513eb6702b80f0aefe2e42a3289a7a8e4acfa8 --- /dev/null +++ b/thecuriouscaseofneuraltextdegeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22e6382dbda6215a873de908cf03ec294ba1eb164b5a94674dc1e1fa115978ff +size 1273457 diff --git a/thecuriouscaseofneuraltextdegeneration/layout.json b/thecuriouscaseofneuraltextdegeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..78551dea79d005a89a2b95d1befa3412dc8024da --- /dev/null +++ b/thecuriouscaseofneuraltextdegeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c146b50b87299fb15357e3f6bb00fce5354ac358177f67ca5e8cc116f1b74e37 +size 349224 diff --git a/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_content_list.json b/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b7ce4ad492a74f496b0890db246e1c7399f368d7 --- /dev/null +++ b/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d021989594d1121bdde6017509e81fcf69bca1417459eb969cdff991f81b35e3 +size 96997 diff --git a/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_model.json b/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..30c3e2b9ef2048371fb7b815d16e17ba918c9ad4 --- /dev/null +++ b/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:082001e1258ded9922faf91020fb8f326471a040cea26dd0143318339347914d +size 102278 diff --git a/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_origin.pdf b/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..64faa1ee049580f00c6aceebb9ed0ebeb890b1c9 --- /dev/null +++ b/theearlyphaseofneuralnetworktraining/0b672383-7bf9-4280-9b02-ec1a43bbfdb1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:475e0f88a4bb37866fa2e17c026c44604071cb3d624ed9f4d5d416af1a81af58 +size 4227087 diff --git a/theearlyphaseofneuralnetworktraining/full.md b/theearlyphaseofneuralnetworktraining/full.md new file mode 100644 index 0000000000000000000000000000000000000000..ca2aa98aed5b33abf74ed93e5f51095cffa7fcbe --- /dev/null +++ b/theearlyphaseofneuralnetworktraining/full.md @@ -0,0 +1,423 @@ +# THE EARLY PHASE OF NEURAL NETWORK TRAINING + +Jonathan Frankle† +MIT CSAIL + +David J. Schwab +CUNY ITS +Facebook AI Research + +Ari S. Morcos +Facebook AI Research + +# ABSTRACT + +Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state during these early iterations of training and leverage the framework of Frankle et al. 
(2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this behavior, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are not inherently label-dependent, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.
+
+# 1 INTRODUCTION
+
+Over the past decade, methods for successfully training big, deep neural networks have revolutionized machine learning. Yet surprisingly, the underlying reasons for the success of these approaches remain poorly understood, despite remarkable empirical performance (Santurkar et al., 2018; Zhang et al., 2017). A large body of work has focused on understanding what happens during the later stages of training (Neyshabur et al., 2019; Yaida, 2019; Chaudhuri & Soatto, 2017; Wei & Schwab, 2019), while the initial phase has been less explored. However, a number of distinct observations indicate that significant and consequential changes are occurring during the earliest stage of training. These include the presence of critical periods during training (Achille et al., 2019), the dramatic reshaping of the local loss landscape (Sagun et al., 2017; Gur-Ari et al., 2018), and the necessity of rewinding in the context of the lottery ticket hypothesis (Frankle et al., 2019). Here we perform a thorough investigation of the state of the network in this early stage.
+ +To provide a unified framework for understanding the changes the network undergoes during the early phase, we employ the methodology of iterative magnitude pruning with rewinding (IMP), as detailed below, throughout the bulk of this work (Frankle & Carbin, 2019; Frankle et al., 2019). The initial lottery ticket hypothesis, which was validated on comparatively small networks, proposed that small, sparse sub-networks found via pruning of converged larger models could be trained to high performance provided they were initialized with the same values used in the training of the unpruned model (Frankle & Carbin, 2019). However, follow-up work found that rewinding the weights to their values at some iteration early in the training of the unpruned model, rather than to their initial values, was necessary to achieve good performance on deeper networks such as ResNets (Frankle et al., 2019). This observation suggests that the changes in the network during this initial phase are vital for the success of the training of small, sparse sub-networks. As a result, this paradigm provides a simple and quantitative scheme for measuring the importance of the weights at various points early in training within an actionable and causal framework. + +We make the following contributions, all evaluated across three different network architectures: + +1. We provide an in-depth overview of various statistics summarizing learning over the early part of training. +2. 
We evaluate the impact of perturbing the state of the network in various ways during the early phase of training, finding that:
+
+(i) counter to observations in smaller networks (Zhou et al., 2019), deeper networks are not robust to reinitialization with random weights while maintaining signs
+(ii) the distribution of weights after the early phase of training is already highly non-i.i.d., as permuting them dramatically harms performance, even when signs are maintained
+(iii) both of the above perturbations can roughly be approximated by simply adding noise to the network weights, though this effect is stronger for (ii) than for (i)
+
+3. We measure the data-dependence of the early phase of training, finding that pre-training using only $p(x)$ can approximate the changes that occur in the early phase of training, though pre-training must last for far longer ( $\sim 32 \times$ longer) and not be fed misleading labels.
+
+# 2 KNOWN PHENOMENA IN THE EARLY PHASE OF TRAINING
+
+Lottery ticket rewinding: The original lottery ticket paper (Frankle & Carbin, 2019) rewound weights to initialization, i.e., $k = 0$, during IMP. Follow-up work on larger models demonstrated that it is necessary to rewind to a later point during training for IMP to succeed, i.e., $k \ll T$, where $T$ is the total number of training iterations (Frankle et al., 2019). Notably, the benefit of rewinding to a later point in training saturates quickly, roughly between 500 and 2000 iterations for ResNet-20 on CIFAR-10 (Figure 1). This timescale is strikingly similar to the changes in the Hessian described below.
+
+Hessian eigenspectrum: The shape of the loss landscape around the network state also appears
+ +to change rapidly during the early phase of training (Sagun et al., 2017; Gur-Ari et al., 2018). At initialization, the Hessian of the loss contains a number of large positive and negative eigenvalues. However, very rapidly the curvature is reshaped in a few marked ways: a few large eigenvalues emerge, the bulk eigenvalues are close to zero, and the negative eigenvalues become very small. Moreover, once the Hessian spectrum has reshaped, gradient descent appears to occur largely within the top subspace of the Hessian (Gur-Ari et al., 2018). These results have been largely confirmed in large scale studies (Ghorbani et al., 2019), but note they depend to some extent on architecture and (absence of) batch normalization (Ioffe & Szegedy, 2015). A notable exception to this consistency is the presence of substantial $L_{1}$ energy of negative eigenvalues for models trained on ImageNet. + +Critical periods in deep learning: Achille et al. (2019) found that perturbing the training process by providing corrupted data early on in training can result in irrevocable damage to the final performance of the network. Note that the timescales over which the authors find a critical period extend well beyond those we study here. However, architecture, learning rate schedule, and regularization all modify the timing of the critical period, and follow-up work found that critical periods were also present for regularization, in particular weight decay and data augmentation (Golatkar et al., 2019). + +# 3 PRELIMINARIES AND METHODOLOGY + +Networks: Throughout this paper, we study five standard convolutional neural networks for CIFAR-10. 
These include the ResNet-20 and ResNet-56 architectures designed for CIFAR-10 (He et al., 2015), the ResNet-18 architecture designed for ImageNet but commonly used on CIFAR-10 (He et al., 2015), the WRN-16-8 wide residual network (Zagoruyko & Komodakis, 2016), and the VGG-13 network (Simonyan & Zisserman (2015), as adapted by Liu et al. (2019)). Throughout the main body of the paper, we show ResNet-20; in Appendix B, we present the same experiments for the other networks. Unless otherwise stated, results were qualitatively similar across all networks. All experiments in this paper display the mean and standard deviation across five replicates with different random seeds. See Appendix A for further model details.
+
+![](images/4f36bcf2d43626dee2da5ba39a900d9088ce01051c38d306ba82fb34b9f889b1.jpg)
+Figure 2: Rough timeline of the early phase of training for ResNet-20 on CIFAR-10.
+
+Iterative magnitude pruning with rewinding: In order to test the effect of various hypotheses about the state of sparse networks early in training, we use the Iterative Magnitude Pruning with rewinding (IMP) procedure of Frankle et al. (2019) to extract sub-networks from various points in training that could have learned on their own. The procedure involves training a network to completion, pruning the $20\%$ of weights with the lowest magnitudes globally throughout the network, and rewinding the remaining weights to their values from an earlier iteration $k$ during the initial, pre-pruning training run. This process is iterated to produce networks with high sparsity levels. As demonstrated in Frankle et al. (2019), IMP with rewinding leads to sparse sub-networks which can train to high performance even at high sparsity levels ( $>90\%$ ).
+
+Figure 1 shows the results of the IMP with rewinding procedure, showing the accuracy of ResNet-20 at increasing sparsity when performing this procedure for several rewinding values of $k$ .
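As a concrete illustration, the pruning loop described above can be sketched in a few lines of numpy. This is a minimal sketch under our own assumptions, not the authors' code: `train` is a hypothetical stand-in for running SGD for a given number of iterations under a binary mask, and a network is represented simply as a dict of weight arrays.

```python
import numpy as np

def imp_with_rewinding(init_weights, train, k, T, rounds, prune_frac=0.2):
    """Sketch of iterative magnitude pruning (IMP) with rewinding.

    init_weights: dict mapping layer name -> weight array.
    train(weights, mask, iters): hypothetical routine that runs `iters`
    training iterations under the binary mask and returns the weights.
    """
    mask = {n: np.ones_like(w) for n, w in init_weights.items()}
    rewind = train(init_weights, mask, k)       # save the iteration-k state
    for _ in range(rounds):
        final = train(rewind, mask, T - k)      # train to completion
        # Prune the lowest-magnitude prune_frac of surviving weights, globally.
        surviving = np.concatenate([np.abs(final[n][mask[n] == 1]) for n in final])
        threshold = np.quantile(surviving, prune_frac)
        for n in mask:
            mask[n] = np.where(np.abs(final[n]) < threshold, 0.0, mask[n])
        # Surviving weights are implicitly rewound: the next round restarts
        # from `rewind`, the weights saved at iteration k.
    return rewind, mask

# Demo with a stub trainer that leaves weights unchanged:
stub_train = lambda w, m, iters: w
_, demo_mask = imp_with_rewinding({'w': np.arange(1., 101.)}, stub_train,
                                  k=500, T=4000, rounds=2)
# two rounds of 20% pruning leave 64 of the 100 toy weights
```

In the paper's setting, choosing the rewind point $k \geq 500$ rather than $k = 0$ is what lets the resulting sparse sub-networks match full-network accuracy.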
For $k \geq 500$ , sub-networks can match the performance of the original network with $16.8\%$ of weights remaining. For $k > 2000$ , essentially no further improvement is observed (not shown).
+
+# 4 THE STATE OF THE NETWORK EARLY IN TRAINING
+
+Many of the aforementioned papers refer to various points in the "early" part of training. In this section, we descriptively chart the state of ResNet-20 during the earliest phase of training to provide context for this related work and our subsequent experiments. We specifically focus on the first 4,000 iterations (10 epochs). See Figure A3 for the characterization of additional networks. We include a summary of these results for ResNet-20 as a timeline in Figure 2, and include a broader timeline including results from several previous papers for ResNet-18 in Figure A1.
+
+As shown in Figure 3, during the earliest ten iterations, the network undergoes substantial change. It experiences large gradients that correspond to a rapid increase in distance from the initialization and a large number of sign changes of the weights. After these initial iterations, gradient magnitudes drop and the rate of change in each of the aforementioned quantities gradually slows through the remainder of the period we observe. Interestingly, gradient magnitudes reach a minimum after the first 200 iterations and subsequently increase to a stable level by iteration 500. Evaluation accuracy improves rapidly, reaching $55\%$ by the end of the first epoch (400 iterations), more than halfway to the final $91.5\%$ . By 2000 iterations, accuracy approaches $80\%$ .
+
+During the first 4000 iterations of training, we observe three sub-phases. In the first phase, lasting only the initial few iterations, gradient magnitudes are very large and, consequently, the network changes rapidly.
In the second phase, lasting about 500 iterations, performance quickly improves, weight magnitudes quickly increase, sign differences from initialization quickly increase, and gradient magnitudes reach a minimum before settling at a stable level. Finally, in the third phase, all of these quantities continue to change in the same direction, but begin to decelerate. + +![](images/4962844f16a51e8121eb449090b3559871c22cf6607b60f8df4100744aeafa05.jpg) + +![](images/b06610b565872801b7dc5bb40df22734b5de190a1dce7093b03e966165df42ec.jpg) + +![](images/a15f81b4a65e99ef34c66c92d0d518647f7920f272b984da0612ef8ba29d6482.jpg) + +![](images/1ae025a668d3f91cd5ba6675784fe891e936528c7985dce1fd408b32810ebe3b.jpg) + +![](images/204abe71cef206be7cf0bdb464238c8b3ac9050de63312b2fbc56a916d607c60.jpg) +Figure 3: Basic telemetry about the state of ResNet-20 during the first 4000 iterations (10 epochs). Top row: evaluation accuracy/loss; average weight magnitude; percentage of weights that change sign from initialization; the values of ten randomly-selected weights. Bottom row: gradient magnitude; L2 distance of weights from their initial values and final values at the end of training; cosine similarity of weights from their initial values and final values at the end of training. + +![](images/a9d4d05de5657188b01c1293aa9395d0013f8ee3f12828c946c9337bf53ba136.jpg) + +![](images/873a632e5627abd029166483c372744ef3fae7a811e60921afd4d0ae89550329.jpg) + +![](images/6d44971d1f6bb97c690d989e1cf2497086f6ec7ab3b72c5f7141c53de22ff824.jpg) + +![](images/c72b0224f930d36826e5866db1ba68eb78ce0b6b84a16cff0a457d00e9476f84.jpg) +Figure 4: Performance of an IMP-derived sub-network of ResNet-20 on CIFAR-10 initialized to the signs at iteration 0 or $k$ and the magnitudes at iteration 0 or $k$ . Left: $k = 500$ . Right: $k = 2000$ . 
+ +![](images/e50424e162cf35579f07ff1b00d7a25fe1d54899c4816397f0f9d9b38b0d6551.jpg) + +# 5 PERTURBING NEURAL NETWORKS EARLY IN TRAINING + +Figure 1 shows that the changes in the network weights over the first 500 iterations of training are essential to enable high performance at high sparsity levels. What features of this weight transformation are necessary to recover increased performance? Can they be summarized by maintaining the weight signs, but discarding their magnitudes as implied by Zhou et al. (2019)? Can they be represented distributionally? In this section, we evaluate these questions by perturbing the early state of the network in various ways. Concretely, we either add noise or shuffle the weights of IMP sub-networks of ResNet-20 across different network sub-components and examine the effect on the network's ability to learn thereafter. The sub-networks derived by IMP with rewinding make it possible to understand the causal impact of perturbations on sub-networks that are as capable as the full networks but more visibly decline in performance when improperly configured. To enable comparisons between the experiments in Section 5 and provide a common frame of reference, we measure the effective standard deviation of each perturbation, i.e. $\text{stddev}(w_{perturb} - w_{orig})$ . + +# 5.1 ARE SIGNS ALL YOU NEED? + +Zhou et al. (2019) show that, for a set of small convolutional networks, signs alone are sufficient to capture the state of lottery ticket sub-networks. However, it is unclear whether signs are still sufficient for larger networks early in training. In Figure 4, we investigate the impact of combining the magnitudes of the weights from one time-point with the signs from another. 
We found that the signs at iteration 500 paired with the magnitudes from initialization (red line) or from a separate random initialization (green line) were insufficient to maintain the performance reached by using both signs and magnitudes from iteration 500 (orange line), and performance drops to that of using both magnitudes and signs from initialization (blue line). However, using the magnitudes from iteration 500 with the signs from initialization still performs substantially better than using both the signs and magnitudes from initialization. In addition, the overall perturbation introduced by using the magnitudes at iteration 500 and the signs from initialization (0.0 ± 0.033, mean ± std) is smaller than that introduced by using the signs at iteration 500 and the magnitudes from initialization (0.0 ± 0.042). These results suggest that the change in weight magnitudes over the first 500 iterations of training is substantially more important than the change in the signs for enabling subsequent training.
+
+![](images/cbc21fabd5a9e00fb444a66cf53336a2e5c94d52e1081817b2caee8a72177253.jpg)
+Figure 5: Performance of an IMP-derived ResNet-20 sub-network on CIFAR-10 initialized with the weights at iteration $k$ permuted within various structural elements. Left: $k = 500$ . Right: $k = 2000$ .
+
+![](images/2e83e494905d9da2a1ca0a5af82b36e6f3984dfdb8e3b286ea6ccceb8de8e3a6.jpg)
+
+![](images/596000debab74a599f450a3de1956a200087c59f627f9f31e3ba96c21e62696f.jpg)
+Figure 6: The effect of training an IMP-derived sub-network of ResNet-20 on CIFAR-10 initialized with the weights at iteration $k$ as shuffled within various structural elements, where shuffling only occurs between weights with the same sign. Left: $k = 500$ . Right: $k = 2000$ .
+
+![](images/85223d0a923ad35de27cf11ab497c226ff1dc4caeaf66f7dca84135e62cf7447.jpg)
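The recombination experiments above amount to a single elementwise operation, shown here as a minimal numpy sketch (the function names are ours, not the paper's); `effective_std` is the section's perturbation-size measure, $\text{stddev}(w_{perturb} - w_{orig})$.

```python
import numpy as np

def combine(sign_source, magnitude_source):
    """Elementwise recombination: signs from one checkpoint, magnitudes from
    another. (np.sign is 0 for exact zeros, which zeroes those weights.)"""
    return np.sign(sign_source) * np.abs(magnitude_source)

def effective_std(perturbed, original):
    """The section's perturbation-size measure: stddev(w_perturb - w_orig)."""
    return np.std(perturbed - original)

w_init = np.array([0.3, -0.5, 0.2])  # hypothetical weights at iteration 0
w_k = np.array([-0.4, -0.6, 0.1])    # hypothetical weights at iteration k
mixed = combine(w_k, w_init)         # iteration-k signs, init magnitudes
# mixed == [-0.3, -0.5, 0.2]
```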
+ +By iteration 2000, however, pairing the iteration 2000 signs with magnitudes from initialization (red line) reaches similar performance to using the signs from initialization and the magnitudes from iteration 2000 (purple line) though not as high performance as using both from iteration 2000. This result suggests that network signs undergo important changes between iterations 500 and 2000, as only $9\%$ of signs change during this period. Our results also suggest that counter to the observations of Zhou et al. (2019) in shallow networks, signs are not sufficient in deeper networks. + +# 5.2 ARE WEIGHT DISTRIBUTIONS I.I.D.? + +Can the changes in weights over the first $k$ iterations be approximated distributionally? To measure this, we permuted the weights at iteration $k$ within various structural sub-components of the network (globally, within layers, and within convolutional filters). If networks are robust to these permutations, it would suggest that the weights in such sub-components might be approximated and sampled from. As Figure 5 shows, however, we found that performance was not robust to shuffling weights globally (green line) or within layers (red line), and drops substantially to no better than that of the original initialization (blue line) at both 500 and 2000 iterations. Shuffling within filters (purple line) performs slightly better, but results in a smaller overall perturbation $(0.0 \pm 0.092$ for $k = 500$ ) than shuffling layerwise $(0.0 \pm 0.143)$ or globally $(0.0 \pm 0.144)$ , suggesting that this change in perturbation strength may simply account for the difference. + +![](images/acf295a425c09709b9fe1b3b82ee4d2f34d2853a74a64973343f942a220c41e3.jpg) +Figure 7: The effect of training an IMP-derived sub-network of ResNet-20 on CIFAR-10 initialized with the weights at iteration $k$ and Gaussian noise of $n\sigma$ , where $\sigma$ is the standard deviation of the initialization distribution for each layer. Left: $k = 500$ . 
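The shuffling controls can be sketched as follows; this is a hypothetical numpy illustration, not the authors' code. The layerwise shuffle permutes weights within each layer, and the sign-preserving variant (as in Figure 6) permutes only among entries of the same sign.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_within_layers(weights):
    """Permute each layer's weights among themselves (the layerwise control)."""
    return {n: rng.permutation(w.ravel()).reshape(w.shape)
            for n, w in weights.items()}

def shuffle_preserving_signs(w):
    """Permute one layer's weights only among entries of the same sign."""
    out = w.ravel().copy()
    for sign in (-1.0, 1.0):
        idx = np.flatnonzero(np.sign(out) == sign)
        out[idx] = rng.permutation(out[idx])
    return out.reshape(w.shape)
```

Both variants preserve the multiset of weight values in each layer; the sign-preserving variant additionally leaves the sign pattern untouched, which is why it imposes a smaller effective perturbation.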
Right: $k = 2000$ . + +![](images/7e802a1aaf8ee60e5e4789a3f6da5cf2fcc867201a8c83972366c816c5dca56a.jpg) + +Are the signs from the rewinding iteration, $k$ , sufficient to recover the damage caused by permutation? In Figure 6, we also consider shuffling only amongst weights that have the same sign. Doing so substantially improves the performance of the filter-wise shuffle; however, it also reduces the extent of the overall perturbation (0.0 ± 0.049 for $k = 500$ ). It also improves the performance of shuffling within layers slightly for $k = 500$ and substantially for $k = 2000$ . We attribute the behavior for $k = 2000$ to the signs just as in Figure 4: when the magnitudes are similar in value (Figure 4 red line) or distribution (Figure 6 red and green lines), using the signs improves performance. Reverting back to the initial signs while shuffling magnitudes within layers (brown line), however, damages the network too severely (0.0 ± 0.087 for $k = 500$ ) to yield any performance improvement over random noise. These results suggest that, while the signs from initialization are not sufficient for high performance at high sparsity as shown in Section 5.1, the signs from the rewinding iteration are sufficient to recover the damage caused by permutation, at least to some extent. + +# 5.3 IS IT ALL JUST NOISE? + +Some of our previous results suggested that the impact of signs and permutations may simply reduce to adding noise to the weights. To evaluate this hypothesis, we next study the effect of simply adding Gaussian noise to the network weights at iteration $k$ . To add noise appropriately for layers with different scales, the standard deviation of the noise added for each layer was normalized to a multiple of the standard deviation $\sigma$ of the initialization distribution for that layer. 
In Figure 7, we see that for iteration $k = 500$ , sub-networks can tolerate $0.5\sigma$ to $1\sigma$ of noise before performance degrades back to that of the original initialization at higher levels of noise. For iteration $k = 2000$ , networks are surprisingly robust to noise up to $1\sigma$ , and even $2\sigma$ exhibits nontrivial performance. + +In Figure 8, we plot the performance of each network at a fixed sparsity level as a function of the effective standard deviation of the noise imposed by each of the aforementioned perturbations. We find that the standard deviation of the effective noise explained fairly well the resultant performance ( $k = 500$ : $r = -0.672$ , $p = 0.008$ ; $k = 2000$ : $r = -0.726$ , $p = 0.003$ ). As expected, perturbations that preserved the performance of the network generally resulted in smaller changes to the state of the network at iteration $k$ . Interestingly, experiments that mixed signs and magnitudes from different points in training (green points) aligned least well with this pattern: the standard deviation of the perturbation is roughly similar among all of these experiments, but the accuracy of the resulting networks changes substantially. This result suggests that although the standard deviation of the noise is certainly indicative of lower accuracy, there are still specific perturbations that, while small in overall magnitude, can have a large effect on the network's ability to learn, suggesting that the observed perturbation effects are not, in fact, just a consequence of noise. + +# 6 THE DATA-DEPENDENCE OF NEURAL NETWORKS EARLY IN TRAINING + +Section 5 suggests that the change in network behavior by iteration $k$ is not due to easily-ascertainable, distributional properties of the network weights and signs. Rather, it appears that training is required to reach these network states. It is unclear, however, the extent to which various aspects of the data distribution are necessary. 
Mainly, is the change in weights during the early phase of training dependent on $p(x)$ or $p(y|x)$ ? Here, we attempt to answer this question by measuring the extent to which we can re-create a favorable network state for sub-network training using restricted information from the training data and labels. In particular, we consider pre-training the network with techniques that ignore labels entirely (self-supervised rotation prediction, Section 6.2), provide misleading labels (training with random labels, Section 6.1), or eliminate information from examples (blurring training examples, Section 6.3).
+
+![](images/2b38fa46651e688bfed2830740cc54d71396c2ba8ed4f5dc67ca7e3921da3481.jpg)
+Figure 8: The effective standard deviation of various perturbations as a function of mean evaluation accuracy (across 5 seeds) at sparsity $26.2\%$ . The mean of each perturbation was approximately 0. Left: $k = 500$ , $r = -0.672$ , $p = 0.008$ ; Right: $k = 2000$ , $r = -0.726$ , $p = 0.003$ .
+
+![](images/2ffb044dcb54c88af91ecbff53771c4881ae1dc3246bb8125f3054fd20c5be7f.jpg)
+
+![](images/add8f91ee7be40fbf959edbda6250ca2e372c751fa5c911988aff85bbc97ebb3.jpg)
+
+![](images/8d14ae39be9e03c3667539476b6625122ae6cc6408579b582200db41bea317b7.jpg)
+
+![](images/b41c875db6812d74a42ca36212eccca55efd268674e6a84a0269b8fc146a3246.jpg)
+Figure 9: The effect of pre-training ResNet-20 on CIFAR-10 with random labels, self-supervised rotation, 4x blurring, and 4x blurring and self-supervised rotation.
+
+![](images/77286a247efe586cbc61bb3d6f37c12b9b63d952e5bd8ea0ccbdb31c3c6230ee.jpg)
+
+We first train a randomly-initialized, unpruned network on CIFAR-10 on the pre-training task for a set number of epochs. After pre-training, we train the network normally as if the pre-trained state were the original initialization. We then use the state of the network at the end of the pre-training phase as the "initialization" to find masks for IMP.
Finally, we examine the performance of the IMP-pruned sub-networks as initialized using the state after pre-training. This experiment determines the extent to which pre-training places the network in a state suitable for sub-network training as compared to using the state of the network at iteration $k$ of training on the original task.
+
+# 6.1 RANDOM LABELS
+
+To evaluate whether this phase of training is dependent on underlying structure in the data, we drew inspiration from Zhang et al. (2017) and pre-trained networks on data with randomized labels. This experiment tests whether the input distribution of the training data is sufficient to put the network in a position from which IMP with rewinding can find a sparse, trainable sub-network despite the presence of incorrect (not just missing) labels. Figure 9 (upper left) shows that pre-training on random labels for up to 10 epochs provides no improvement above rewinding to iteration 0 and that pre-training for longer begins to hurt accuracy. This result suggests that, though it is still possible that labels may not be required for learning, the presence of incorrect labels is sufficient to prevent learning that approximates the early phase of training.
+
+# 6.2 SELF-SUPERVISED ROTATION PREDICTION
+
+What if we remove labels entirely? Is $p(x)$ sufficient to approximate the early phase of training? Historically, neural network training often involved two steps: a self-supervised pre-training phase followed by a supervised phase on the target task (Erhan et al., 2010). Here, we consider one such self-supervised technique: rotation prediction (Gidaris et al., 2018). During the pre-training phase, the network is presented with a training image that has randomly been rotated $90n$ degrees (where $n \in \{0,1,2,3\}$ ). The network must classify examples by the value of $n$ . If self-supervised pre-training can approximate the early phase of training, it would suggest that $p(x)$ is sufficient on its own.
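Constructing a batch for this pretext task is straightforward. A minimal sketch, assuming square HxWxC image arrays (`rotation_batch` is our name, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rotation_batch(images):
    """Self-supervised rotation batch: rotate each square HxWxC image by
    90*n degrees and use n in {0, 1, 2, 3} as its classification label."""
    labels = rng.integers(0, 4, size=len(images))
    rotated = [np.rot90(img, k=int(n), axes=(0, 1))
               for img, n in zip(images, labels)]
    return np.stack(rotated), labels
```

Rotating a generated example back by `-n` quarter turns recovers the original image, so no label information about the downstream task ever enters this pre-training phase.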
Indeed, as shown in Figure 9 (upper right), this pre-training regime leads to well-trainable subnetworks, though networks must be trained for many more epochs compared to supervised training (40 compared to 1.25, or a factor of $32 \times$ ). This result suggests that the labels for the ultimate task themselves are not necessary to put the network in such a state (although explicitly misleading labels are detrimental). We emphasize that the duration of the pre-training phase required is an order of magnitude larger than the original rewinding iteration, however, suggesting that labels add important information which accelerates the learning process. + +# 6.3 BLURRING TRAINING EXAMPLES + +To probe the importance of $p(x)$ for the early phase of training, we study the extent to which the training input distribution is necessary. Namely, we pretrain using blurred training inputs with the correct labels. Following Achille et al. (2019), we blur training inputs by downsampling by 4x and then upsampling back to the full size. Figure 9 (bottom left) shows that this pre-training method succeeds: after 40 epochs of pre-training, IMP with rewinding can find sub-networks that are similar in performance to those found after training on the original task for 500 iterations (1.25 epochs). + +Due to the success of the rotation and blurring pre-training tasks, we explored the effect of combining these pre-training techniques. Doing so tests the extent to which we can discard both the training labels and some information from the training inputs. Figure 9 (bottom right) shows that doing so provides the network too little information: no amount of pre-training we considered makes it possible for IMP with rewinding to find sub-networks that perform tangibly better than rewinding to iteration 0. 
Interestingly, however, as shown in Appendix B, trainable sub-networks are found for VGG-13 with this pre-training regime, suggesting that different network architectures have different sensitivities to the deprivation of labels and input content. + +# 6.4 SPARSE PRETRAINING + +Sparse sub-networks are often challenging to train from scratch without the proper initialization (Han et al., 2015; Liu et al., 2019; Frankle & Carbin, 2019). Does pre-training make it easier for sparse neural networks to learn? Doing so would serve as a rough form of curriculum learning (Bengio et al., 2009) for sparse neural networks. We experimented with training sparse sub-networks of ResNet-20 (IMP sub-networks, randomly reinitialized sub-networks, and randomly pruned sub-networks) first on self-supervised rotation and then on the main task, but found no benefit beyond rewinding to iteration 0 (Figure 10). Moreover, doing so when starting from a sub-network rewound to iteration 500 actually hurts final accuracy. This result suggests that while pre-training is sufficient to approximate the early phase of supervised training with an appropriately structured mask, it is not sufficient to do so with an inappropriate mask. + +![](images/aa825223a360f9f1c0532b76e3862062fef16fbd33f39f691d7d64dec5c229d9.jpg) +Figure 10: The effect of pre-training sparse sub-networks of ResNet-20 (rewound to iteration 500) with 40 epochs of self-supervised rotation before training on CIFAR-10. + +# 7 DISCUSSION + +In this paper, we first performed extensive measurements of various statistics summarizing learning over the early part of training. Notably, we uncovered three sub-phases: in the very first iterations, gradient magnitudes are anomalously large and motion is rapid. Subsequently, gradients overshoot to smaller magnitudes before leveling off while performance increases rapidly. Then, learning slowly begins to decelerate.
We then studied a suite of perturbations to the network state in the early phase, finding that, counter to observations in smaller networks (Zhou et al., 2019), deeper networks are not robust to reinitializing with random weights with maintained signs. We also found that the weight distribution after the early phase of training is highly non-independent. Finally, we measured the data-dependence of the early phase with the surprising result that pre-training on a self-supervised task yields equivalent performance to late rewinding with IMP. + +These results have significant implications for the lottery ticket hypothesis. The seeming necessity of late rewinding calls into question certain interpretations of lottery tickets as well as the ability to identify sub-networks at initialization. Our observation that weights are highly non-independent at the rewinding point suggests that the weights at this point cannot be easily approximated, making approaches which attempt to "jump" directly to the rewinding point unlikely to succeed. However, our result that labels are not necessary to approximate the rewinding point suggests that the learning during this phase does not require task-specific information, indicating that rewinding may not be necessary if networks are pre-trained appropriately. + +# REFERENCES + +Alessandro Achille, Matteo Rovere, and Stefano Soatto. Critical learning periods in deep neural networks. 2019. +Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41-48. ACM, 2009. +Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. 2017. URL https://arxiv.org/abs/1710.11029. +Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why does unsupervised pre-training help deep learning?
Journal of Machine Learning Research, 11(Feb):625-660, 2010. +Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJ1-b3RcF7. +Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. Stabilizing the lottery ticket hypothesis. arXiv preprint arXiv:1903.01611, 2019. + +Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 2232-2241, 2019. URL http://proceedings.mlr.press/v97/ghorbani19b.html. +Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=S1v4N2l0-. +Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Time matters in regularizing deep networks: Weight decay and data augmentation affect early learning dynamics, matter little near convergence. 2019. +Guy Gur-Ari, Daniel A Roberts, and Ethan Dyer. Gradient descent happens in a tiny subspace. arXiv preprint arXiv:1812.04754, 2018. +Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2015. +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pp. 448-456. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045167.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJlnB3C5Ym. +Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. 2019. +Levent Sagun, Utku Evci, Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of over-parametrized neural networks. 2017. URL https://arxiv.org/abs/1706.04454. +Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In Advances in neural information processing systems, 2018. +Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. 2015. +Mingwei Wei and David Schwab. How noise during training affects the hessian spectrum in overparameterized neural networks. 2019. +Sho Yaida. Fluctuation-dissipation relations for stochastic gradient descent. 2019. +Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. +Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. 2017. +Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. Deconstructing lottery tickets: Zeros, signs, and the supermask. 2019. + +# A MODEL DETAILS + +
| Network | Epochs | Batch Size | Learning Rate | Parameters | Eval Accuracy |
| --- | --- | --- | --- | --- | --- |
| ResNet-20 | 160 | 128 | 0.1 (Mom 0.9) | 272K | 91.5 ± 0.2% |
| ResNet-56 | 160 | 128 | 0.1 (Mom 0.9) | 856K | 93.0 ± 0.1% |
| ResNet-18 | 160 | 128 | 0.1 (Mom 0.9) | 11.2M | 86.8 ± 0.3% |
| VGG-13 | 160 | 64 | 0.1 (Mom 0.9) | 9.42M | 93.5 ± 0.1% |
| WRN-16-8 | 160 | 128 | 0.1 (Mom 0.9) | 11.1M | 94.8 ± 0.1% |
+ + +Table A1: Summary of the networks we study in this paper. We present ResNet-20 in the main body of the paper and the remaining networks in Appendix B. + +Table A1 summarizes the networks. All networks follow the same training regime: we train with SGD for 160 epochs starting at learning rate 0.1 (momentum 0.9) and drop the learning rate by a factor of ten at epoch 80 and again at epoch 120. Training includes weight decay with coefficient 1e-4. Data is augmented with normalization, random flips, and random crops of up to four pixels in any direction. + +# B EXPERIMENTS FOR OTHER NETWORKS + +![](images/d2a60697b323fdaac63d0f252f0277b119a11a2503ca50a5d16c54af86cbb7e4.jpg) +Figure A1: Rough timeline of the early phase of training for ResNet-18 on CIFAR-10, including results from previous papers. + +![](images/3ef8adc99c33e72cc4dc39b2d9833aa53d0881e9c933685c99baf351a272ff60.jpg) + +![](images/47ea9bf842b609729df7dde4954a70919dd618db57ace22025ff75c7203c4293.jpg) + +![](images/a9d641be5210f27f008cb8adfff3b8bd7b6945c75a465f3d15b80b28a9ca7326.jpg) +Figure A2: The effect of IMP rewinding iteration on the accuracy of sub-networks at various levels of sparsity. Accompanies Figure 1.
+ +![](images/cbf86c64189d9335c3c8dfe767b7330164cfabe240f17e844106cecba292360f.jpg) + +![](images/b8e2735ef11050fae30d2a6191beeeb14f9ba02a7e77a118d88aad1f5afb977d.jpg) + +![](images/e6578baf4d81b7ef215b47f1ef142246cceb5766b6407ad471cf48b17953cbca.jpg) + +![](images/3a59cb16d84afa0e0d820b6428b40ea9afd13292e8f1b5679f0e9c571a0b38a4.jpg) + +![](images/d37f6ddcf2384b5581a9a28b0cbf6f615f9ba25c984f8af781a494f5c8fd6dd7.jpg) + +![](images/f6de63f2cea7e54722860acae849ef621dba2ceb85511154bebe5415a6138897.jpg) + +![](images/b08f2cb42deb56414a3db7a43328e8919c2f6c49f9f694036060bdba8838783c.jpg) + +![](images/d5d05099d62b129c732985571180dcad6f1ae280b23f5f48ee6d94e4fec661ce.jpg) + +![](images/a33bfb5c73c0d2b8160ffe5ef2d28aabe14b1758b18633e5b0c38d721422652d.jpg) + +![](images/d08139d62e7cb84b64f6123a476c98cf9b0039081a301bfebc37758e46454dad.jpg) + +![](images/f00e9617f655218193653cb3ff06db0f988aa5266e26fe386ee2fccdbbf21b6f.jpg) + +![](images/4f5ed1822d1863a1bd58a61af89b9cfd3327a8d3d6ba5115137ba343b44d69bb.jpg) + +![](images/10a98b2b41d6b0933fbd8fe5791b06541e94ffa81246b250658ce3acdc7652f7.jpg) + +![](images/58b3f6722c7766333b08fd578e5b601bbffd7669ce3b0b09a2951b0f0052904b.jpg) + +![](images/8e0e84bd4647a542a7a531e2704226f0840e2522280c551da769de293e8c479d.jpg) + +![](images/fbf4ae3bb06a99c6c655a6ddc0b7626582fcc382b54b1757d47a0b63815dbad5.jpg) + +![](images/93ecb5318e33197c8112751811db539c91cf178b6a52f220d8a0dcd7771c643c.jpg) + +![](images/b4bd95783d7b3462addc1a1882139e89cc59018ccc39699168d1163fb5996d5e.jpg) + +![](images/b58fa56446d434b68353b3b32c9aa98535cf5651bf8b26e5d3bbc6b182554cb4.jpg) + +![](images/d483dff55f7e66261d317d42121476b93a79e5c0bb5dc3366e793d62650f0f91.jpg) + +![](images/62a25191e3184965beaad757e989dd1da836bc83cebd203a77f24ba543e64164.jpg) + +![](images/408db99207ad4ed9f528466a4b1f5d4e880cceaff35fc8d90f83f0371dd0a174.jpg) + +![](images/236cd798c1f76507d46e4b2bfe6bf9cd4d50d4aab7622716d000efd92cdeadf3.jpg) + 
+![](images/009dbce5818665e6e4e3aa7dae214939c864c7158385e992a29d4490adb7f017.jpg) + +![](images/9287b6215e594b992aed003f3969a442e02d446c9aea686584e747b9483db4cd.jpg) + +![](images/33aad89bae2b4831317040d471c512367cae36dbb629cfb003474288db682af9.jpg) + +![](images/9ba463bdf7f0398a927a4c34bad825d60f9d1d764bd68ab04bda0bd9611294fb.jpg) + +![](images/dbd48a06cff03f7347393e694e6cb8209c1da1e745dfe5d29ec3c47dc85e6d14.jpg) + +![](images/37b518e5a1c6365e90449b80efa9c3e5d802db52d3e0a8b025c36e01db1595fe.jpg) + +![](images/313e1f6c2acc85696200a3c838b42d5e3c84e656e311dde5d9324b868f7855cd.jpg) + +![](images/a88991a514b094cd5911280a6d28ba9a7c6f16e6fdb376714c8d4fdb2a923d0e.jpg) + +![](images/ec93cef86db6a23472235dc8c2db8e0d5da194fb5eb5f19d05c68350f205225c.jpg) + +![](images/d1a7b8ac03bca2e637a55d044cd1c1b5f14728bc3fc7a7d8e92dfe9becbc6954.jpg) + +![](images/0446fe011626e221c3ddf1decc9b7c95683b42f18fcc50b59a4b480f66a4ba6a.jpg) + +![](images/b46d9e81898fdaf559ec099fc5ea55c15fbb7f997d727ba9511c31b50e0b4f93.jpg) +Figure A3: Basic telemetry about the state of all networks in Table A1 during the first 4000 iterations of training. Accompanies Figure 3. 
+ +![](images/47ce7d678a8fd35e2ffec9ba34edc68da554631c3039c0a2da9be1090f55e6e2.jpg) + +![](images/d7f3738536b1014afd7227838193b196bb75cbd8dfd64552668fb998da25aa66.jpg) + +![](images/657ce73afce5542640f5f86f03e64fc1cdbb767cc2ce953281760120e2155960.jpg) + +![](images/1f451955f627dc58cbcfe10a4d497803e3f97c54c2819bbf0d158d1d55d58183.jpg) + +![](images/ea435922b6c60955b0abdc906f7a515de1caf4a05258833e1404378a26afb196.jpg) + +![](images/efd5e2bcaa80e6846c87958a4dee03c2f56a5bf4da9f5f906e82049664ef768d.jpg) + +![](images/220c2c920390ab196294ff368561844e83daff0386d50e873bb64c482764d07e.jpg) + +![](images/e9b29529c73ba2a380ab04c0b2a1a793c8db77a93954b66dc0d279983c5d262a.jpg) + +![](images/9546e56b5535f39e10028bebc593d05ca949edd1019170e63b32a97107bf9beb.jpg) + +![](images/c4a1199d8e516522f03775fe86d0ca6d7f2528107196665f6fdbb7260cf1b140.jpg) + +![](images/9d1457879371bf23070384834485b8bcd2787f373264bab8cb3b7037d9b790de.jpg) +Figure A4: The effect of training an IMP-derived sub-network initialized to the signs at iteration 0 or $k$ and the magnitudes at iteration 0 or $k$ . Accompanies Figure 4. + +![](images/f68ea65cb64853c621fe19e2df8412f42dfc94a739c20eec0e19efba47cca7f9.jpg) + +![](images/b7391bced0b3ebdbfb3de92349a634965d5a5f5afa5e76a9d6ea61228f1caa4c.jpg) + +![](images/67f915ff343cef6a05a4da264103e20fc530ee5157c8d428eb1931dab43a0723.jpg) + +![](images/24ae24f70798ec11635874ca74f603a2fd0aab7794926f489db81c9a05de9531.jpg) + +![](images/db1009c33bb954ddb400d62bbbc47cc93bb12847a728d887764a7bf565305a17.jpg) + +![](images/639bf694efb8f9cb0ab06289da6113fb61e3d70150c5b22d1705a1a97e4605a9.jpg) + +![](images/18577dde183d2b7443e1737af6d54fd92751a303c640362a6c477de8a50e7435.jpg) + +![](images/de99551579187c1912db6289b7bbcd62c8f311deafe7bbc44b59de825af13470.jpg) +Figure A5: The effect of training an IMP-derived sub-network initialized with the weights at iteration $k$ as shuffled within various structural elements. Accompanies Figure 5.
+ +![](images/8e2bdae7d4ad24fcdef139e68ab9461747a69484afd24b5830f611ba1e4dee98.jpg) + +![](images/c13e7f277d49ed40ba9f8733aba2a63fe4a9f73180a8be81db0eae1a83860884.jpg) + +![](images/e83d8c315cb9ec08e590d1cdb97a63dc4a83e93424423aa0f40d60f42250fb17.jpg) + +![](images/a5ad23f4c431a7efc28bc4658210cd1461dd46f30a6d6f55bb0763d011f7696c.jpg) + +![](images/3b5714873e9f74271b21ccb124c5397935e73167d117f30f7cb2ba8c9ad162d9.jpg) + +![](images/cb61d163b97b6aea6510834422260adc632c5d8e35c57b97d5ff77f256a8b55b.jpg) + +![](images/371196d1785fc46725ae2c7ca78b4495d21110128908c60382214d883efaac2a.jpg) + +![](images/71e46eeb59dd9c80f997b854834b8b5a26c3a302237d2d662f9cb03e408ab0bb.jpg) +Figure A6: The effect of training an IMP-derived sub-network initialized with the weights at iteration $k$ as shuffled within various structural elements where shuffling only occurs between weights with the same sign. Accompanies Figure 6. + +![](images/05a597f46eeb3cde49aa35c002b488706bc21711e6ebfca0d34fa629987d1377.jpg) + +![](images/a921ee6558bf4e95e0ba161663140ed24fbb2abef5e5311f3e6573d8b4629a86.jpg) + +![](images/edee409a1115cea259b73eb552ba79d9ced73fff2f13bc629b678bb2cf57a3c1.jpg) + +![](images/c8b6c06fef8a57068afefbfd5219363e9f9788c7b18cc685f1bd98c0f422249c.jpg) + +![](images/1c1ed06e472393dbbe6b7ac5fb922c29df22bb8765456d09c0cb3005ff805b56.jpg) + +![](images/c5dad1ce693eb4c4149c42b53e2791ed21320b74ca4b1576e5fee8a39d4e6c25.jpg) + +![](images/6250120607086994e2737c6a86be5405388df4c6034fb2a415c14627b182f7e7.jpg) + +![](images/3438c1b3c2a754233657b8d67a87d096a5c334885d6a43c567a1975a6fa82245.jpg) +Figure A7: The effect of training an IMP-derived sub-network initialized with the weights at iteration $k$ and Gaussian noise of $n\sigma$ , where $\sigma$ is the standard deviation of the initialization distribution for each layer. Accompanies Figure 7. 
+ +![](images/6260d3a4af3f54d9400c0099a62d7cde8c3f7a87e12d6ea725a4259043dbda9a.jpg) + +![](images/14a8b1e4780ce038d52fe34ae433a6963cbefa5495e188bf5590cf1aa6b298f2.jpg) + +![](images/bcd93820466ae0e6897ce075f537b2b8464049211d496fa762b9cc127876df2a.jpg) + +![](images/7aa92a23ee0b40dfd460b45ce7dc749a6c59dd31851685ca565bf444312b96c3.jpg) + +![](images/039dbfbc618823a91e51122ec103f10d30a8a6958bde9bfc94098ed2dd9617e3.jpg) + +![](images/47a5b040de797629453391e04091d043d25fae71c2e3677f8b49fb4fe3b774c5.jpg) + +![](images/f191a1f4cdb3589c6b94fee7df8d3b04119977b16d1aa28d5df4f8b64967aa18.jpg) + +![](images/3dfcd6050ede8fff501837be11fb819a002bbff5cb4f9c7b3461e1712cf64708.jpg) +Figure A8: The effective standard deviation of each of the perturbations studied in Section 5 as a function of mean evaluation accuracy (across five seeds). Accompanies Figure 8. + +![](images/73a171c6afed207df4be3de83563499cf3508ba96a9563dfb35e991239fd48a9.jpg) + +![](images/e77f6110babfc55ee96b789bdb4c7bb9e660506f2d45663be1dbf67a3b574e81.jpg) + +![](images/e52a375e3968e0723bafd30eaddcc2f23ff705bbaf77fe5d13c39caf40ec8510.jpg) + +![](images/2d5d872b58a1aad735b249751a30dd6f6a93c24d9bbb07ab1ce1101274ef3f17.jpg) +Figure A9: The effect of pre-training CIFAR-10 with random labels. Accompanies Figure 9. + +![](images/c1d984d2b47471ff149ebfa0624cc9e754f953f0a9055284418fb09b9f22d478.jpg) + +![](images/457965260ac52244d354425ae623f551fec8379330f312ea17f1d9553b35f777.jpg) + +![](images/10a1de68da9121be33b7601613e96b234f20735a267c31f64e87a0b7274031e2.jpg) + +![](images/5a66c22eaccd3b01920fb846545c748da61373e5282fac92d7450db4af2604d8.jpg) +Figure A10: The effect of pre-training CIFAR-10 with self-supervised rotation. Accompanies Figure 9. 
+ +![](images/03ecb78ce0882fd3a56fced64bead06df6318dba119884bf3f5e37cfcb745d00.jpg) + +![](images/b2a18da1d549ca6873a2abd4b3cf00229b56b55b1675af7ed267c6c5524f0bf1.jpg) + +![](images/7b98642cae5bf522d3de74753bf265aec0ba2745c5737a9c378ac2ba44a93d1a.jpg) + +![](images/e9a62d9999098b12f1379bcb0a2736b8d7a68cac7036306f98b59b3eedeb362a.jpg) +Figure A11: The effect of pre-training CIFAR-10 with 4x blurring. Accompanies Figure 9. + +![](images/1937224f8772fd8616f37660007a1b6f8e87af4d86a32496f0c030d3d8a0da4a.jpg) + +![](images/f48cb0aa82ce7b2b9633db985c3f04dfd4064d7b112a26f6231b1021cdf08f34.jpg) + +![](images/77d8a7d1a94438983d51e044cec3d3e348e5ad52e975d32915be7a4a92924c14.jpg) + +![](images/f25c5adac7596ce9dd2c09cad24ebba3fde41ed992a7447e9c548a2dd40a11f5.jpg) +Figure A12: The effect of pre-training CIFAR-10 with 4x blurring and self-supervised rotation. Accompanies Figure 9. + +![](images/12365a94f9421acb5a5acd5bacbe3172b1c2d94e4ee33318442e6438c6b98b34.jpg) \ No newline at end of file diff --git a/theearlyphaseofneuralnetworktraining/images.zip b/theearlyphaseofneuralnetworktraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..411e26b7e0c9ebb54229617aafc5dd0a9cb06f6d --- /dev/null +++ b/theearlyphaseofneuralnetworktraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:61a631270af482bcf3c0afc2024836c051cbc0cb455454227d77df163566299d +size 2443974 diff --git a/theearlyphaseofneuralnetworktraining/layout.json b/theearlyphaseofneuralnetworktraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..260b5f83f06267f07c498a5941b8dd3a6d274e0d --- /dev/null +++ b/theearlyphaseofneuralnetworktraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5429760a4dd8385534ba7ab52136fc6c3e1dc11fbba580e313becc76bb621bbd +size 532876 diff --git a/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_content_list.json 
b/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..892e91742c0192b40aaf4d97561d3c7e0db93444 --- /dev/null +++ b/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29df989e4bbf72bd504bfc5a98e3ced14f9a2c014b4112cb0318829c43ec6f21 +size 170230 diff --git a/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_model.json b/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5b2b7a222eed0c548835cd5ac74df8549a0c1ab8 --- /dev/null +++ b/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d22b6fbafc396e423ec13ec0e912dff23d1ec695d9661890a0420ba1336a501 +size 197875 diff --git a/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_origin.pdf b/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..43a4286a7d5541a3272b52adf31bf82ebd463a73 --- /dev/null +++ b/thegamblersproblemandbeyond/b2473b38-6795-4485-9479-691ef3a44fc2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac01a2de5e6d80fec82b0692ca60eefafa23c0473c1a27509e43535ef2070174 +size 531090 diff --git a/thegamblersproblemandbeyond/full.md b/thegamblersproblemandbeyond/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1957df4f7183ca8e5a546124004fb8bcad4eb929 --- /dev/null +++ b/thegamblersproblemandbeyond/full.md @@ -0,0 +1,899 @@ +# THE GAMBLER'S PROBLEM AND BEYOND + +# Baoxiang Wang + +Department of Computer Science and Engineering + +The Chinese University of Hong Kong + +bxwang@cse.cuhk.edu.hk + +# Shuai Li + +John Hopcroft Center for Computer Science + 
+ +Shanghai Jiao Tong University + +shuaili8@sjtu.edu.cn + +# Jiajin Li + +Department of SEEM + +The Chinese University of Hong Kong + +jjli@se.cuhk.edu.hk + +# Siu On Chan + +Department of Computer Science and Engineering + +The Chinese University of Hong Kong + +siuon@cse.cuhk.edu.hk + +# ABSTRACT + +We analyze the Gambler's problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by Sutton & Barto (2018), where they mention an interesting pattern of the optimal value function, with high-frequency components and repeating non-smooth points, but leave it without further investigation. We provide the exact formula for the optimal value function for both the discrete and the continuous cases. Simple as it might seem, the value function is pathological: fractal, self-similar, with a derivative that is either zero or infinite, not smooth on any interval, and not expressible in terms of elementary functions. It is in fact one of the generalized Cantor functions, holding a complexity that has been uncharted thus far. Our analyses could lend insights into improving value function approximation, gradient-based algorithms, and Q-learning, in real applications and implementations. + +# 1 INTRODUCTION + +We analytically investigate a deceptively simple problem, the Gambler's problem, introduced in the reinforcement learning textbook by Sutton & Barto (2018) as Example 4.3 on page 84 of Chapter 4. The problem setting is natural and simple enough that little discussion was given in the book apart from an algorithmic solution by value iteration. A close inspection, however, shows that the problem, as a representative of the entire family of Markov decision processes (MDP), involves a level of complexity and curiosity uncharted in years of reinforcement learning research.
The problem concerns a casino game in which the gambler places multiple rounds of bets. The gambler gains the bet amount if they win a round and loses it otherwise. The probability of losing each round is $p \geq 0.5$ , independently. The game ends when the gambler's capital reaches either their goal of $N$ or 0. In each round, the gambler must decide what portion of the capital to stake. In the discrete setting this bet amount must be an integer, but it can be a real number in the continuous setting. To formulate it as an MDP, we let the state $s$ be the current capital and the action $a$ the bet amount. The reward is $+1$ when the state reaches $s = N$ , and zero otherwise. + +Our goal is to solve for the optimal value function of the problem. We first give the solution to the discrete Gambler's problem. Denote $N$ as the target capital, $n$ as the current capital (which is the state in the discrete setting), $p > 0.5$ as the probability of losing a bet, and $\gamma$ as the discount factor. The special case of $N = 100$ , $\gamma = 1$ corresponds to the original setting in Sutton and Barto's book. + +![](images/0db9cc3ac66140d32aaf6cf7340fad554500b0e72b425da6605b4a9043d26e62.jpg) +Figure 1: The optimal state-value function of the discrete Gambler's problem. + +Proposition 1. Let $0 \leq \gamma \leq 1$ and $p > 0.5$ . The optimal value function $z(n)$ is $v(n / N)$ in the discrete setting of the Gambler's problem, where $v(\cdot)$ is the optimal value function under the continuous case defined in Theorem 12. + +The above statement reduces the discrete problem to the continuous problem via a uniform discretization. The rest of the discussion will be on the more general continuous setting. In this setting, the target capital is 1, the state space is $[0,1]$ , and the action space is $0 < a \leq \min\{s, 1 - s\}$ at state $s$ , meaning that the bet can be any fraction of the current capital as long as the capital after winning does not exceed 1.
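For reference, the discrete problem can be solved numerically by value iteration, as in the book (a sketch; `gambler_value_iteration` is our name, $p$ is the probability of losing a bet as in the text, and $p = 0.6$ is an illustrative value):

```python
def gambler_value_iteration(N=100, p=0.6, gamma=1.0, tol=1e-12):
    """Value iteration for the discrete Gambler's problem.

    v[n] estimates the optimal value at capital n; the +1 reward upon
    reaching the target is encoded by the boundary condition v[N] = 1.
    """
    v = [0.0] * (N + 1)
    v[N] = 1.0
    while True:
        delta = 0.0
        for n in range(1, N):
            best = max(gamma * ((1 - p) * v[n + a] + p * v[n - a])
                       for a in range(1, min(n, N - n) + 1))
            delta = max(delta, abs(best - v[n]))
            v[n] = best
        if delta < tol:
            return v

v = gambler_value_iteration()
# v[50] converges to v(1/2) = 1 - p = 0.4, consistent with Proposition 1.
```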
We state the optimal value function below and give an intuitive description of it later in this section. + +Theorem 12. Let $0 \leq \gamma \leq 1$ and $p > 0.5$ . Under the continuous setting of the Gambler's problem, the optimal state-value function is $v(1) = 1$ , and + +$$ +v (s) = \sum_ {i = 1} ^ {\infty} (1 - p) \gamma^ {i} b _ {i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j}) \tag {1} +$$ + +for $0 \leq s < 1$ , where $s = 0.b_{1}b_{2}\ldots b_{\ell}\ldots_{(2)}$ is the binary representation of the state $s$ . + +Next, we solve the Bellman equation of the continuous Gambler's problem. In the strictly discounted setting $0 \leq \gamma < 1$ , the solution of the Bellman equation with boundary conditions $f(0) = 0$ , $f(1) = 1$ , + +$$ +f(s) = \max_{0 < a\leq \min \{s,1 - s\}}(1 - p)\gamma f(s + a) + p\gamma f(s - a), +$$ + +is the optimal value function $f(s) = v(s)$ (Proposition 21). + +This uniqueness does not hold in general. If the rewards are not discounted, the solution of the Bellman equation is either the optimal value function, or a constant function larger than 1. + +Theorem 22. Let $\gamma = 1$ , $p > 0.5$ , and $f(\cdot)$ be a real function on $[0,1]$ . $f(s)$ solves the Bellman equation if and only if either + +- $f(s)$ is $v(s)$ defined in Theorem 12, or +- $f(0) = 0$ , $f(1) = 1$ , and $f(s) = C$ for all $0 < s < 1$ , for some constant $C \geq 1$ . + +Under the corner case of $\gamma = 1$ , $p = 0.5$ (where the gambler does not lose capital in expectation), the problem involves midpoint concavity (Sierpiński, 1920a;b) and Cauchy's functional equation. The unique measurable solution of the Bellman equation on $s \in (0,1)$ is $f(s) = C' s + B'$ , for some constants satisfying $C' + B' \geq 1$ . Additionally, under the Axiom of Choice, $f(s)$ can also be some non-constructive, non-Lebesgue-measurable function described by a Hamel basis (Theorem 27 and its lemmas).
+ +![](images/e2f77692d10041aa9cbfc067e688e71acfb26cdb3850f559f384e8cce160373e.jpg) +Figure 2: The optimal state-value function of the continuous Gambler's problem. + +Though the description of the Gambler's problem seems natural and simple, Theorem 12 shows that its simplicity is deceptive. The optimal value function is fractal, self-similar, and non-rectifiable (see Corollary 14 and Lemma 8). It is thus not smooth on any interval, which can be unexpected given that a significant line of reinforcement learning research is based on function approximation such as discretization and neural networks. Neither can the value function (1) be simplified into a formula of elementary functions, which makes it difficult to understand. The function is monotonically increasing with $v(0) = 0$ and $v(1) = 1$ , but its derivative is 0 almost everywhere, which is counterintuitive. This is known as singularity, a famous pathological property of functions. $v(s)$ is continuous almost everywhere but not absolutely continuous. Also, when $\gamma$ is strictly smaller than 1, it is discontinuous on a dense and compact set of infinitely many points. These properties indicate that assumptions like smoothness, continuity, and approximability are not satisfied in this problem. In general, it is reasonable to doubt if these assumptions can be imposed in reinforcement learning. To better understand the pathology of $v(s)$ , we compare it to the Cantor function, which is well known in analysis as a counterexample of many seemingly true statements (Dovgoshey et al., 2006). In fact, $v(s)$ is a generalized Cantor function, and the above descriptions hold for both $v(s)$ and the Cantor function. + +Intuitive description of $v(s)$ . All the statements above require the definition of $v(s)$ . In fact, in this paper, $v(s)$ is important enough that its definition will not change with the context. The function cannot be written as a combination of elementary functions.
Nevertheless, we give an intuitive way to understand the function for the original, undiscounted problem. The function can be regarded as generated by the following iterative process. First we fix $v(0) = 0$ and $v(1) = 1$ , and compute + +$$ +v (\frac {1}{2}) = p v (0) + (1 - p) v (1) = (1 - p). +$$ + +Here, $v\left(\frac{1}{2}\right)$ is the weighted average of the two "neighbors" $v(0)$ and $v(1)$ that have already been evaluated. Further, the same operation applies to $v\left(\frac{1}{4}\right)$ and $v\left(\frac{3}{4}\right)$ , where $v\left(\frac{1}{4}\right) = pv(0) + (1 - p)v\left(\frac{1}{2}\right) = (1 - p)^{2}$ and $v\left(\frac{3}{4}\right) = pv\left(\frac{1}{2}\right) + (1 - p)v(1) = (1 - p) + p(1 - p)$ , and so forth to $v\left(\frac{1}{8}\right), v\left(\frac{3}{8}\right)$ , etc. This process evaluates $v(s)$ on the dense and compact set $\bigcup_{\ell \geq 1} G_{\ell}$ of the dyadic rationals, where $G_{\ell} = \{k2^{-\ell} \mid k \in \{1, \ldots, 2^{\ell} - 1\}\}$ . Since $v(s)$ is strictly increasing, these dyadic rationals determine the function $v(s)$ uniquely. + +This iterative process can also be explained from the analytical formula of $v(s)$ . Starting with the first bit, a bit of 0 will not change the value, while a bit of 1 will add $(1 - p)\prod_{j = 1}^{i - 1}((1 - p) + (2p - 1)b_j)$ to the value. This term can also be written as $(1 - p)((1 - p)^{\# 0\text{ bits}}\cdot p^{\# 1\text{ bits}})$ , where the number of bits is counted over all previous bits. The value $(1 - p)^{\# 0\text{ bits}}\cdot p^{\# 1\text{ bits}}$ decides the gap between two neighboring existing points in the above process, when we insert a new point in the middle. This insertion corresponds to the iteration on $G_{\ell}$ over $\ell$ . + +We provide high-resolution plots of $z(n)$ with $N = 100$ and of $v(s)$ in Figure 1 and Figure 2, respectively. The non-smoothness and the self-similar fractal patterns can be clearly observed from the figures.
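For states with a terminating binary expansion, the series (1) is a finite sum, so the values derived above can be checked directly (a sketch for $\gamma = 1$ ; `v_exact` is our name):

```python
def v_exact(s, p, gamma=1.0, bits=64):
    """Evaluate the series (1) from the binary expansion of s in [0, 1)."""
    total, prod, g = 0.0, 1.0, 1.0
    for _ in range(bits):
        s *= 2
        b, s = int(s), s - int(s)          # peel off the next binary digit b_i
        g *= gamma
        total += (1 - p) * g * b * prod
        prod *= (1 - p) + (2 * p - 1) * b  # running product over previous bits
    return total

p = 0.6
assert abs(v_exact(0.5, p) - (1 - p)) < 1e-12             # v(1/2) = 1 - p
assert abs(v_exact(0.25, p) - (1 - p) ** 2) < 1e-12       # v(1/4) = (1-p)^2
assert abs(v_exact(0.75, p) - ((1 - p) + p * (1 - p))) < 1e-12
```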
Also, $v(s)$ is continuous when $\gamma = 1$ , while it is discontinuous at infinitely many points when $\gamma < 1$ . In fact, when $\gamma < 1$ , the function is discontinuous on the dyadic rationals $\bigcup_{\ell \geq 1} G_{\ell}$ while continuous on its complement, as we will rigorously show later. + +Self-similarity. The function $v(s)$ on $[\bar{s}, \bar{s} + 2^{-\ell}]$ for any $\bar{s} = 0.b_1b_2 \ldots b_{\ell(2)}, \ell \geq 1$ is self-similar to the function itself on $[0,1]$ . Letting $s = 0.b_1b_2 \ldots b_\ell \ldots_{(2)} \in [\bar{s}, \bar{s} + 2^{-\ell}]$ , this can be observed from + +$$ +\begin{array}{l} v (s) = \sum_ {i = 1} ^ {\infty} (1 - p) \gamma^ {i} b _ {i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j}) \\ = \sum_ {i = 1} ^ {\ell} (1 - p) \gamma^ {i} b _ {i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j}) + \gamma^ {\ell} (\prod_ {j = 1} ^ {\ell} ((1 - p) + (2 p - 1) b _ {j})) \\ \times \sum_ {i = 1} ^ {\infty} (1 - p) \gamma^ {i} b _ {\ell + i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {\ell + j}) \\ = v (\bar {s}) + \gamma^ {\ell} \prod_ {j = 1} ^ {\ell} ((1 - p) + (2 p - 1) b _ {j}) \cdot v \left(2 ^ {\ell} (s - \bar {s})\right). \tag {2} \\ \end{array} +$$ + +The self-similarity can be compared with the Cantor function (Dovgoshey et al., 2006; Mandelbrot, 1985), which uses the ternary expansion of $s$ in the formula instead. The Cantor function is self-similar to itself on $[\bar{s},\bar{s} +3^{-\ell}]$ , when $\bar{s} = 0.b_{1}b_{2}\dots b_{\ell (3)}$ and $b_{\ell}\neq 1$ . Both $v(s)$ and the Cantor function can be uniquely described by their self-similarity, the monotonicity, and the boundary conditions. + +Optimal policies. It is immediate by Theorem 12 and Lemma 8 that + +$$ +\pi (s) = \min \{s, 1 - s \} +$$ + +is one of the (Blackwell) optimal policies. Here, Blackwell optimality means optimality uniformly over all $0 \leq \gamma \leq 1$ .
This policy agrees with the intuition that, under a game that is in favor of the casino $(p > 0.5)$ , the gambler desires to bet the maximum so as to finish the game with as little cumulative bet as possible. In fact, the probability of reaching the target equals the expected amount of capital by the end of the game, which is negatively linear in the cumulative bet.
+
The optimal policy is not unique, though; for example, $\pi\left(\frac{15}{32}\right) = \frac{1}{32}$ is also optimal (for any $\gamma$ ). Under $\gamma = 1$ , the original, undiscounted setting, small bets can also be optimal. Namely, when $s$ can be written with finitely many bits $s = 0.b_{1}b_{2}\ldots b_{\ell(2)}$ in binary (assume $b_{\ell} = 1$ ), $\pi(s) = 2^{-\ell}$ is also an optimal policy. This policy repeatedly bets the lowest bit of the capital so that the bits carry over, keeping the game within at most $\ell$ rounds of bets.
+
# 1.1 PRELIMINARIES
+
We use the canonical formulation of the discrete-time Markov decision process (MDP), denoted as the tuple $(\mathcal{S},\mathcal{A},\mathcal{T},r,\rho_0,\gamma)$ . That includes $\mathcal{S}$ the state space, $\mathcal{A}$ the action space, $\mathcal{T}:\mathcal{S}\times \mathcal{A}\times \mathcal{S}\to \mathbb{R}^+$ the transition probability function, $r:\mathcal{S}\rightarrow \mathbb{R}$ the reward function, $\rho_0:\mathcal{S}\to \mathbb{R}^+$ the initial state distribution, and $\gamma \in [0,1]$ the unnormalized discount factor. A deterministic policy $\pi :\mathcal{S}\rightarrow \mathcal{A}$ is a map from the state space to the action space. In this problem, $\mathcal{T}(s,a,s - a)$ and $\mathcal{T}(s,a,s + a)$ are $p$ and $1 - p$ respectively for $s\in \mathcal{S}$ , $a\in \mathcal{A}$ , and $\mathcal{T}$ is otherwise 0.
+
Our goal is to solve the optimal value function of the Gambler's problem. In this problem, the state-value function is the probability of the gambler eventually reaching the target capital from a state.
The definition of the state-value function of an MDP with respect to state $s$ and policy $\pi$ is
+
$$
f ^ {\pi} (s) = \mathbb {E} \left[ \sum_ {t} \gamma^ {t} r _ {t} \mid s _ {0} = s, a _ {t} = \pi (s _ {t}), s _ {t + 1} \sim \mathcal {T} (s _ {t}, a _ {t}), r _ {t} \sim r (s _ {t}), t = 0, 1, \dots \right].
$$

When $\pi^{*}$ is one of the optimal policies, $f^{\pi^{*}}(s)$ is the optimal state-value function. Although there may exist more than one optimal policy, the optimal state-value function is unique (Sutton & Barto, 2018; Szepesvári, 2010).
+
# 1.2 IMPLICATIONS
+
Our results indicate hardness results for reinforcement learning (Papadimitriou & Tsitsiklis, 1987; Littman et al., 1995; Thelen & Smith, 1998) and suggest revisions of existing reinforcement learning algorithms. It is worth noting that similar patterns of fractals and self-similarity have been observed empirically, for example in Chockalingam (2019) for the Mountain Car problem. With these characterizations observed in problems as simple as Mountain Car and the Gambler's problem, our results are expected to generalize to a variety of reinforcement learning settings.
+
The first implication is naturally on value function approximation, a well-developed topic in reinforcement learning (Lee et al., 2008; Lusena et al., 2001). By the fractal property of the optimal value function, any representation of such a function must be inexact (Tikhonov, 2014). When discretization is used for value function representation, the approximation error is at least of order $1 / N$ , where $N$ is the number of bins.
+
Proposition 19. When $N \in \mathbb{N}^{+}$ , $N \geq 4$ is a power of 2, let $\bar{v}_{1}(s)$ be piecewise constant on each of the intervals $s \in (k / N, (k + 1) / N)$ , $k = 0, \dots, N - 1$ ; then
+
$$
\int_ {s} | v (s) - \bar {v} _ {1} (s) | d s \geq \frac {1}{N} \frac {(2 - \gamma) (1 - p) \gamma}{1 - p \gamma} + o (\frac {1}{N}).
+
$$

Alternatively, when a subclass of $L$ -Lipschitz continuous functions is used to represent $v(s)$ , this error is at least $(1 / L) \cdot (1 - p)^2 \gamma^2 (1 - \gamma)^2 / (4 - 4p\gamma)$ by the discontinuity of $v(s)$ (Proposition 20). It is worth remarking that although this specific lower bound vanishes when $\gamma$ is 1, the approximation error remains nonzero for an arbitrarily large $L$ under $\gamma = 1$ , as the derivative of $v(s)$ can be infinite (Fact 16).
+
Notably, neural networks are within this family of functions, where the Lipschitz constant $L$ is determined by the network architecture. By the proposition, it is not possible to obtain the optimal value function when a neural network is used, despite the universal approximation theorem (Csáji, 2001; Dovgoshey et al., 2006; Levesley et al., 2007).
+
The second implication is by Theorem 12 and Fact 16 that the one-sided derivatives of $v(s)$ are
+
$$
\lim _ {\Delta s \to 0 ^ {+}} \frac {v (s + \Delta s) - v (s)}{\Delta s} = 0, \qquad \lim _ {\Delta s \to 0 ^ {-}} \frac {v (s + \Delta s) - v (s)}{\Delta s} = \left\{ \begin{array}{l l} + \infty , & \text {if } s = 1 \text { or } s \in \bigcup_ {\ell \geq 1} G _ {\ell}, \\ 0, & \text {otherwise}. \end{array} \right.
$$

This implies that the value function's derivative cannot be obtained exactly: it is 0 almost everywhere, except on the dyadic rationals $G_{\ell}$ , where it has a left derivative of infinity and a right derivative of 0. Algorithms relying on $\partial v(s) / \partial s$ and $\partial Q(s,a) / \partial a$ (Lillicrap et al., 2015; Gu et al., 2017; Heess et al., 2015; Fairbank & Alonso, 2012; Fairbank, 2008; Pan et al., 2019; Lim et al., 2018), where $Q(s,a)$ is the action-value function (Sutton & Barto, 2018), can suffer from the estimation error or even behave unpredictably.
+
In practice, the binary implementation of floating-point numbers can further increase this error, as all representable points $s$ are in $G_{\ell}$ for some $\ell$ .
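The dichotomy between the two one-sided derivatives can also be observed numerically. The sketch below evaluates one-sided difference quotients of $v$ at the dyadic point $s = 1/2$ for $\gamma = 1$ in exact rational arithmetic; $p = 3/5$ and the scales $L \in \{4, 8, 12\}$ are arbitrary illustrative choices. The left quotients grow without bound while the right quotients shrink toward 0:

```python
from fractions import Fraction

def v(s, p, nbits=64):
    """v(s) = sum_i (1-p) b_i prod_{j<i} ((1-p)+(2p-1) b_j), computed from
    the binary expansion s = 0.b1 b2 ..., for the undiscounted case gamma=1."""
    total, weight = Fraction(0), Fraction(1)
    for _ in range(nbits):
        s *= 2
        b = int(s)          # next binary digit of s
        s -= b
        if b:
            total += (1 - p) * weight
        weight *= (1 - p) + (2 * p - 1) * b
    return total

p, s = Fraction(3, 5), Fraction(1, 2)
left = [(v(s, p) - v(s - Fraction(1, 2 ** L), p)) * 2 ** L for L in (4, 8, 12)]
right = [(v(s + Fraction(1, 2 ** L), p) - v(s, p)) * 2 ** L for L in (4, 8, 12)]
assert left[0] < left[1] < left[2]     # left quotients blow up at a dyadic s
assert right[0] > right[1] > right[2]  # right quotients decay toward 0
```

The left quotients here equal $2(1-p)(2p)^{L-1}$ and the right quotients $2p\,(2(1-p))^{L-1}$ , so with $p > 0.5$ the former diverges and the latter vanishes as $L$ grows.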
A precise evaluation would require these derivatives to be infinite when the Lebesgue derivative (the average of the left and right derivatives) is used, which cannot be obtained on a computer system.
+
The third implication is on Q-learning (Mnih et al., 2015; Watkins & Dayan, 1992; Baird, 1995), by Theorem 22 and its supporting lemmas. It is proved that when $\gamma = 1$ , Q-learning has multiple convergence points, as the Bellman equation has multiple solutions, namely $v(s)$ and
+
$$
f (0) = 0, \quad f (1) = 1, \quad \text {and} \quad f (s) = C \text { for all } 0 < s < 1,
$$

for some constant $C \geq 1$ . Therefore, even when the Q-learning algorithm converges, it may not converge to the optimal value function $v(s)$ . In fact, as the solution can be either the ground truth of the optimal value function or a large constant function, and it is easier to approximate a constant function than the optimal value function, the algorithm can attain a relatively lower Bellman error by converging to the large constant.
+
This challenges Q-learning under $\gamma = 1$ , where the return (cumulative reward) is unbiased. Though the artificial $\gamma$ is originally introduced to prevent the return from diverging, it can also be necessary to prevent the algorithm from converging to a large constant in Q-learning, which is not desired.
+
# 2 DISCRETE CASE
+
The analysis of the discrete case of the Gambler's problem gives an exact solution. It also explains why the plot in the book has a strange pattern of repeated spurious points.
+
The discrete case can be described by the following MDP: The state space is $\{0, \dots, N\}$ ; the action space at $n$ is $\mathcal{A}(n) = \{0 < a \leq \min \{n, N - n\}\}$ ; the transition from state $n$ under action $a$ is to $n - a$ and $n + a$ with probability $p$ and $1 - p$ , respectively; the reward function is $r(N) = 1$ and $r(n) = 0$ for $0 \leq n \leq N - 1$ . The MDP terminates at $n \in \{0, N\}$ .
We use a time-discount factor of $0 \leq \gamma \leq 1$ , where the agent receives a reward of $\gamma^T r(N)$ if it reaches the state $n = N$ at time $T$ .
+
Let $z(n), n \in \mathbb{N}, 0 \leq n \leq N$ , be the value function. The exact solution of the discrete case below relies on Theorem 12, our main theorem, which describes the exact solution of the continuous case. This theorem will be discussed and proved later in Section A.1.
+
Proposition 1. Let $0 \leq \gamma \leq 1$ and $p > 0.5$ . The optimal value function $z(n)$ is $v(n / N)$ in the discrete setting of the Gambler's problem, where $v(\cdot)$ is the optimal value function under the continuous case defined in Theorem 12.
+
Proof. We first verify the Bellman equation. By the definition of $v(\cdot)$ we have
+
$$
\begin{array}{l} z (n) = v (n / N) \\ = \max _ {0 < a \leq \min \{n / N, 1 - n / N \}} p \gamma v (n / N - a) + (1 - p) \gamma v (n / N + a) \\ \geq \max _ {0 < a \leq \min \{n / N, 1 - n / N \}, N a \in \mathbb {N}} p \gamma v (n / N - a) + (1 - p) \gamma v (n / N + a) \\ = \max _ {0 < a \leq \min \{n, N - n \}, a \in \mathbb {N}} p \gamma z (n - a) + (1 - p) \gamma z (n + a). \\ \end{array}
$$

Meanwhile, letting $a^* = \min \{n, N - n\}$ , Corollary 13 suggests that
+
$$
\begin{array}{l} z (n) = v (n / N) \\ = p \gamma v ((n - a ^ {*}) / N) + (1 - p) \gamma v ((n + a ^ {*}) / N) \\ = p \gamma z (n - a ^ {*}) + (1 - p) \gamma z (n + a ^ {*}) \\ \leq \max _ {0 < a \leq \min \{n, N - n \}, a \in \mathbb {N}} p \gamma z (n - a) + (1 - p) \gamma z (n + a). \\ \end{array}
$$

Therefore $z(n) = \max_{0 < a \leq \min \{n, N - n\}, a \in \mathbb{N}} p \gamma z(n - a) + (1 - p) \gamma z(n + a)$ as desired.
+
We then show that $z(n) = v(n / N)$ is the unique function that satisfies the Bellman equation. The proof is similar to the proof of Lemma 2, but the arguments are relatively easier, as both the state space and the action space are discrete.
Let $f(n)$ also satisfy the Bellman equation. We desire to prove that $f(n)$ is identical to $z(n)$ .
+
Define $\delta = \max_{1\leq n\leq N - 1}f(n) - z(n)$ . This maximum must exist as there are finitely many states. Then define the non-empty set $S = \{n\mid f(n) - z(n) = \delta ,1\leq n\leq N - 1\}$ . For any $n^{\prime}\in S$ and $a^\prime \in \operatorname {argmax}_{1\leq a\leq \min \{n',N - n'\}}p\gamma f(n' - a) + (1 - p)\gamma f(n' + a)$ , we have
+
$$
\begin{array}{l} f (n ^ {\prime}) = p \gamma f \left(n ^ {\prime} - a ^ {\prime}\right) + (1 - p) \gamma f \left(n ^ {\prime} + a ^ {\prime}\right) \\ \stackrel {(\heartsuit)} {\leq} p \gamma (z \left(n ^ {\prime} - a ^ {\prime}\right) + \delta) + (1 - p) \gamma (z \left(n ^ {\prime} + a ^ {\prime}\right) + \delta) \\ \leq p \gamma z \left(n ^ {\prime} - a ^ {\prime}\right) + (1 - p) \gamma z \left(n ^ {\prime} + a ^ {\prime}\right) + \delta \\ \leq z \left(n ^ {\prime}\right) + \delta \\ = f \left(n ^ {\prime}\right). \\ \end{array}
$$

As the equality holds throughout, by the equality of $(\heartsuit)$ we have $n^{\prime} - a^{\prime}\in S$ and $n^\prime +a^\prime \in S$ .
+
Now we specify some $n_0 \in S$ and $a_0 \in \operatorname{argmax}_{1 \leq a \leq \min\{n_0, N - n_0\}} p\gamma f(n_0 - a) + (1 - p)\gamma f(n_0 + a)$ . Then, we have $n_0 - a_0 \in S$ . Denote $n_1 = n_0 - a_0$ and, recursively, $a_t \in \operatorname{argmax}_{1 \leq a \leq \min\{n_t, N - n_t\}} p\gamma f(n_t - a) + (1 - p)\gamma f(n_t + a)$ and $n_{t+1} = n_t - a_t, t = 1, 2, \ldots$ Since $a_t \geq 1$ and $n_t \in \mathbb{N}$ , there must exist a $T$ such that $n_T = 0$ . Therefore, $\delta = f(n_T) - z(n_T) = 0$ .
+
By the same argument, $\bar{\delta} = \max_{1\leq n\leq N - 1}z(n) - f(n) = 0$ . Therefore, $z(n)$ and $f(n)$ are identical, as desired.
+
As $z(n)$ is the unique function that satisfies the Bellman equation, it is the optimal value function of the problem.
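The proposition can be checked numerically by running value iteration on the discrete MDP and comparing the result against the closed-form $v(n/N)$ . Below is a minimal sketch for the undiscounted case $\gamma = 1$ ; the choices $N = 16$ and $p = 0.6$ are arbitrary illustrative assumptions:

```python
def v(s, p, nbits=60):
    """Closed-form optimal value from the binary expansion of s (gamma = 1)."""
    if s >= 1:
        return 1.0
    total, weight = 0.0, 1.0
    for _ in range(nbits):
        s *= 2
        b = int(s)          # next binary digit of s
        s -= b
        if b:
            total += (1 - p) * weight
        weight *= (1 - p) + (2 * p - 1) * b
    return total

N, p = 16, 0.6
z = [0.0] * (N + 1)
z[N] = 1.0
for _ in range(100):        # value iteration sweeps over the interior states
    for n in range(1, N):
        z[n] = max(p * z[n - a] + (1 - p) * z[n + a]
                   for a in range(1, min(n, N - n) + 1))

assert all(abs(z[n] - v(n / N, p)) < 1e-9 for n in range(N + 1))
```

Starting from the all-zero function, value iteration converges from below to the least solution of the Bellman equation, which by the uniqueness argument above is the optimal value function; the bold policy ends the game within $\log_2 N$ bets, so convergence here is fast.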
Proposition 1 indicates that the discretization of the problem yields the exact, discrete evaluation of the continuous value function at $0,1 / N,\ldots ,1$ . If we omit the learning error, the plots in the book and by the open source implementation (Zhang, 2019) are the evaluation of the fractal $v(s)$ at $0,1 / N,\ldots ,1$ . This explains the strange appearance of the curve in the figures.
+
# 3 SETTING
+
We formulate the continuous Gambler's problem as a Markov decision process (MDP) with the state space $S = [0,1]$ and the action space $\mathcal{A}(s) = (0,\min\{s,1 - s\}]$ , $s \in (0,1)$ . Here $s \in S$ represents the capital the gambler currently possesses and the action $a \in \mathcal{A}(s)$ denotes the amount of the bet. Without loss of generality, we assume that the bet amount is less than or equal to $1 - s$ to prevent the total capital from exceeding 1. The successive state $s'$ is $s - a$ or $s + a$ with probability $p \geq 0.5$ and $1 - p$ , respectively. The process terminates when $s \in \{0,1\}$ and the agent receives an episodic reward $r = s$ at the terminal state. Let $0 \leq \gamma \leq 1$ be the discount factor.
+
Let $f:[0,1]\to \mathbb{R}$ be a real function. For $f(s)$ to be the optimal value function, the Bellman equations for the non-terminal and terminal states are
+
$$
f (s) = \max _ {a \in \mathcal {A} (s)} p \gamma f (s - a) + (1 - p) \gamma f (s + a) \text { for any } s \in (0, 1), \tag {A}
$$

and
+
$$
f (0) = 0, f (1) = 1. \tag {B}
$$

It can be shown (later in Lemma 2 and Lemma 3) that a function satisfying (AB) must be lower bounded by 0. A reasonable upper bound is 1, as the value function is the probability of the gambler eventually reaching the target, which must be between 0 and 1. It is also reasonable to assume the continuity of the value function at $s = 0$ . Otherwise, an arbitrarily small amount of starting capital would have at least a constant probability of reaching the target 1.
Consequently, the expectation of the capital at the end of the game would be greater than the starting capital, which contradicts $p \geq 0.5$ . The bounded version (X) of the problem leads to the optimal value function.
+
$$
0 \leq \gamma \leq 1, \quad p > 0.5, \quad f (s) \leq 1 \text { for all } s, \quad f (s) \text { is continuous at } s = 0. \tag {X}
$$

Respectively, the unbounded version (Y) of the problem leads to the solutions of the Bellman equation.
+
$$
0 \leq \gamma \leq 1, \quad p > 0.5. \tag {Y}
$$

The results extend to $p = 0.5$ in general, except for an extreme corner case of $\gamma = 1$ , $p = 0.5$ , where the monotonicity in Lemma 3 does not apply. This case (Z) involves arguments over measurability and the belief in the Axiom of Choice, which we discuss at the end of Section A.
+
$$
\gamma = 1, \quad p = 0.5, \quad f (s) \text { is measurable}. \tag {Z}
$$

We are mostly interested in two settings. The first setting (ABX), with its solution in Theorem 12, discusses a set of necessary conditions for $f(s)$ to be the optimal value function of the Gambler's problem. As we show later that the solution of (ABX) is unique, this solution must be the optimal value function. The second setting (ABY), with its solutions in Proposition 21 and Theorem 22, discusses all the functions that satisfy the Bellman equation. These functions are the optimal points that value iteration and Q-learning algorithms may converge to. (ABZ) is interestingly connected to some foundations of mathematics, like the belief in axioms, and is discussed in Theorem 27.
+
# 4 ANALYSIS
+
The analysis section rigorously supports the statements on the Gambler's problem and its Bellman equation with proofs and discussions. It is deferred to the appendix due to the page limit.
+
# 5 CONCLUSION AND FUTURE WORKS
+
We give a complete solution to the Gambler's problem, a simple and classic problem in reinforcement learning, under a variety of settings. We show that its optimal value function is very complicated and even pathological.
Despite its seeming simplicity, these results have not been clearly pointed out in previous studies.
+
Our contributions are the theoretical findings and their implications. It is worthwhile to present the current results to start the discussion in the community. As indicated by the Gambler's problem, the current algorithmic approaches in reinforcement learning might underestimate the complexity of the problems. We expect more evidence to be found in the future and new algorithms and implementations to be developed.
+
It would be interesting to see how these results on the Gambler's problem generalize to other MDPs. Finding such characterizations of MDPs is in general an important step toward understanding reinforcement learning and sequential decision processes.
+
# ACKNOWLEDGEMENT
+
We thank Richard S. Sutton and Andrew Barto for raising the Gambler's problem in their book, and Richard S. Sutton for the discussions on our theorems and implications. We thank Andrej Bogdanov for pointing out the connection to the Axiom of Choice, namely, Theorem 27, and Chengyu Lin for the discussions on the properties of $v(s)$ , namely, Lemma 6, 8, and 9. This paper was largely improved by the reviews and comments. We especially would like to thank Csaba Szepesvári, Kirby Banman, and the ICLR 2020 anonymous reviewers for their helpful feedback.
+
# REFERENCES
+
Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In *Machine Learning Proceedings 1995*, pp. 30-37. Elsevier, 1995.
+Valliappa Chockalingam. The role of interest in prediction and control, 2019. URL https://youtu.be/aFXdpCDAG2g?t=395. The plot of the empirical optimal value function of the Mountain Car problem first appears at 6:35. Some follow-up discussions start at 33:20.
+Balázs Csanád Csáji. Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary, 24:48, 2001.
+Oleksiy Dovgoshey, Olli Martio, Vladimir Ryazanov, and Matti Vuorinen. The Cantor function.
Expositiones Mathematicae, 24(1):1-37, 2006. +Michael Fairbank. Reinforcement learning by value gradients. arXiv preprint arXiv:0803.3539, 2008. +Michael Fairbank and Eduardo Alonso. Value-gradient learning. In The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2012. +Shixiang Shane Gu, Timothy Lillicrap, Richard E Turner, Zoubin Ghahramani, Bernhard Schölkopf, and Sergey Levine. Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning. In Advances in neural information processing systems, pp. 3846-3855, 2017. +Mance E Harmon and Leemon C Baird III. Spurious solutions to the bellman equation. Technical Report, Wright-Patterson Air Force Base Ohio: Wright Laboratory, WL-TR-96-'To Be Assigned', 1996. + +Nicolas Heess, Gregory Wayne, David Silver, Timothy Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2944-2952, 2015. +Thomas J Jech. The axiom of choice. Courier Corporation, 2008. +Takashi Kamihigashi and Cuong Le Van. Necessary and sufficient conditions for a solution of the bellman equation to be the value function: A general principle. *Documents de travail du Centre d'Économie de la Sorbonne* 2015.07, 2015. ISSN: 1955-611X. +Peter Latham. Bellman's equation has a unique solution, 2008. URL http://www.gatsby.ucl.ac.uk/~mandana/RLRG/bellman_converges.pdf. +Wee S Lee, Nan Rong, and David Hsu. What makes some pomdp problems easy to approximate? In Advances in neural information processing systems, pp. 689-696, 2008. +Jason Levesley, Cem Salp, and Sanju L Velani. On a problem of k. mahler: Diophantine approximation and cantor sets. Mathematische Annalen, 338(1):97-118, 2007. +Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. 
arXiv preprint arXiv:1509.02971, 2015.
+Sungsu Lim, Ajin Joseph, Lei Le, Yangchen Pan, and Martha White. Actor-expert: A framework for using action-value methods in continuous action spaces. arXiv preprint arXiv:1810.09103, 2018.
+Michael L Littman, Thomas L Dean, and Leslie Pack Kaelbling. On the complexity of solving markov decision problems. In Proceedings of the Eleventh conference on Uncertainty in artificial intelligence, pp. 394-402. Morgan Kaufmann Publishers Inc., 1995.
+Christopher Lusena, Judy Goldsmith, and Martin Mundhenk. Nonapproximability results for partially observable markov decision processes. Journal of artificial intelligence research, 14:83-103, 2001.
+Benoit B Mandelbrot. Self-affine fractals and fractal dimension. Physica scripta, 32(4):257, 1985.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
+Yangchen Pan, Hengshuai Yao, Amir-massoud Farahmand, and Martha White. Hill climbing on value estimates for search-control in dyna. arXiv preprint arXiv:1906.07791, 2019.
+Christos H Papadimitriou and John N Tsitsiklis. The complexity of markov decision processes. Mathematics of operations research, 12(3):441-450, 1987.
+MJ Pelling. Formulae for the arc-length of a curve in $\mathbb{R}^n$. The American Mathematical Monthly, 84(6):465-467, 1977.
+Wacław Sierpiński. Sur l'équation fonctionnelle $f(x + y) = f(x) + f(y)$. Fundamenta Mathematicae, 1(1):116-122, 1920a.
+Wacław Sierpiński. Sur les fonctions convexes mesurables. Fundamenta Mathematicae, 1(1):125-128, 1920b.
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
+Csaba Szepesvári. Algorithms for reinforcement learning. Synthesis lectures on artificial intelligence and machine learning, 4(1):1-103, 2010.
+Esther Thelen and Linda B Smith.
Dynamic systems theories. Handbook of child psychology, 1998.
+Yu V Tikhonov. On the rate of approximation of singular functions by step functions. Mathematical Notes, 95(3-4):530-543, 2014.
+
Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.
+
Shangtong Zhang. Python implementation of Reinforcement learning: An introduction, 2019. URL https://github.com/ShangtongZhang/reinforcement-learning-an-introduction.
+
# A ANALYSIS
+
# A.1 ANALYSIS OF THE GAMBLER'S PROBLEM
+
In this section we show that $v(s)$ defined below is the unique solution of the system (ABX). Since the optimal state-value function must satisfy the system (ABX), $v(s)$ is the optimal state-value function of the Gambler's problem. This statement is rigorously proved in Theorem 12.
+
Let $0 \leq \gamma \leq 1$ and $p > 0.5$ . We define $v(1) = 1$ , and
+
$$
v (s) = \sum_ {i = 1} ^ {\infty} (1 - p) \gamma^ {i} b _ {i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j}) \tag {1}
$$

for $0 \leq s < 1$ , where $s = 0.b_{1}b_{2}\ldots b_{\ell}\ldots_{(2)}$ is the binary representation of $s$ . It is obvious that the series converges for any $0 \leq s < 1$ .
+
The notation $v(s)$ will always refer to the definition above in this paper and will not change with the context. We show later that this $v(s)$ is the optimal value function of the problem. We use the notation $f(s)$ to denote a general solution of a system, which varies according to the required properties.
+
Let the set of dyadic rationals be
+
$$
G _ {\ell} = \left\{k 2 ^ {- \ell} \mid k \in \{1, \dots , 2 ^ {\ell} - 1 \} \right\} \tag {3}
$$

so that $G_{\ell}$ is the set of numbers in $(0,1)$ that can be represented with at most $\ell$ binary bits.
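As a numerical sanity check of the series (1) and the self-similarity relation (2), the sketch below evaluates (1) from the binary expansion of $s$ truncated at 80 bits; the choices $p = 0.6$ , $\gamma = 0.9$ , $\bar{s} = 0.101_{(2)}$ , and the test offsets are arbitrary illustrative assumptions:

```python
def v(s, p, gamma, nbits=80):
    """Series (1): v(s) = sum_i (1-p) gamma^i b_i prod_{j<i}((1-p)+(2p-1)b_j),
    truncated at nbits binary digits of s."""
    total, weight = 0.0, 1.0
    for i in range(1, nbits + 1):
        s *= 2
        b = int(s)          # next binary digit of s
        s -= b
        if b:
            total += (1 - p) * gamma ** i * weight
        weight *= (1 - p) + (2 * p - 1) * b
    return total

p, gamma = 0.6, 0.9
bvec = (1, 0, 1)                        # sbar = 0.101_(2) = 5/8, ell = 3
sbar = 5 / 8
scale = gamma ** len(bvec)
for b in bvec:
    scale *= (1 - p) + (2 * p - 1) * b  # gamma^ell prod_j ((1-p)+(2p-1) b_j)
for t in (0.5, 1 / 3, 0.71):            # t = 2^ell (s - sbar) inside [0, 1)
    lhs = v(sbar + t / 8, p, gamma)
    rhs = v(sbar, p, gamma) + scale * v(t, p, gamma)
    assert abs(lhs - rhs) < 1e-8        # relation (2) holds numerically
```

The truncation error of the series is bounded by a geometric tail in $(\gamma p)$ , so 80 bits is far more precision than the floating-point comparison requires.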
The general idea to verify the Bellman equation (AB) is to prove
+
$$
v (s) = \max _ {a \in G _ {\ell} \cap \mathcal {A} (s)} (1 - p) \gamma v (s + a) + p \gamma v (s - a) \text { for any } s \in G _ {\ell}
$$

by induction on $\ell = 1,2,\ldots$ , and to generalize this optimality to the entire interval $s\in (0,1)$ .
+
It then suffices to show the uniqueness of the $v(s)$ that solves the system (ABX). This is proved by assuming the existence of a solution $f(s)$ and deriving the identity $f(s) = v(s)$ , conditioned on the Bellman property (AB) that $v(s)$ satisfies. For presentation purposes, the uniqueness is discussed first.
+
As an overview, Lemma 2, 3, and 4 describe the system (ABX). Claim 5, Lemma 6, 8, and 9 describe the properties of $v(s)$ .
+
Lemma 2 (Uniqueness under existence). Let $f(s): [0,1] \to \mathbb{R}$ be a real function. If $v(s)$ and $f(s)$ both satisfy (ABX), then $v(s) = f(s)$ for all $0 \leq s \leq 1$ .
+
Proof. We prove the lemma by contradiction. Assume that $f(s)$ is also a solution of the system such that $f(s)$ is not identical to $v(s)$ at some $s$ . Define $\delta = \sup_{0 < s < 1} f(s) - v(s)$ . As $f(2^{-1}) \geq (1 - p)\gamma f(1) + p\gamma f(0) = (1 - p)\gamma = v(2^{-1})$ , we have $\delta \geq 0$ .
+
We show that $\delta$ cannot be zero by contradiction. If $\delta$ is zero, as $v(s)$ and $f(s)$ are not identical, there exists an $s$ such that $f(s) < v(s)$ . In this case, let $\bar{\delta} = \sup_{0 < s < 1} v(s) - f(s)$ . Then we choose $\bar{\epsilon} = (1 - p\gamma)\bar{\delta}$ and specify $s_0$ such that $v(s_0) - f(s_0) > \bar{\delta} - \bar{\epsilon}$ . Letting $a_0 = \min \{s_0, 1 - s_0\}$ , we have
+
$$
\begin{array}{l} v \left(s _ {0}\right) = p \gamma v \left(s _ {0} - a _ {0}\right) + (1 - p) \gamma v \left(s _ {0} + a _ {0}\right) \\ \leq p \gamma f \left(s _ {0} - a _ {0}\right) + (1 - p) \gamma f \left(s _ {0} + a _ {0}\right) + p \gamma \bar {\delta} \\ \leq f (s _ {0}) + p \gamma \bar {\delta}.
\\ \end{array}
+$$

The above inequality is due to the fact that at least one of $s_0 - a_0 = 0$ and $s_0 + a_0 = 1$ must hold. Thus, at least one of $v(s_0 - a_0) - f(s_0 - a_0)$ and $v(s_0 + a_0) - f(s_0 + a_0)$ must be zero. The inequality contradicts $v(s_0) - f(s_0) > \bar{\delta} - \bar{\epsilon}$ . Hence, $\delta$ cannot be zero. We assume $\delta > 0$ for the rest of the proof.
+
Case (I): $\gamma < 1$ . In this case, we choose $\epsilon = (1 - \gamma)\delta$ . By the definition of $\delta$ we specify $s_0$ such that $f(s_0) > v(s_0) + \delta - \epsilon$ . In fact, the existence of $s_0$ is guaranteed by the condition $\gamma < 1$ . Letting $a_0 \in \operatorname{argmax}_{a \in \mathcal{A}(s_0)} p\gamma f(s_0 - a) + (1 - p)\gamma f(s_0 + a)$ , we have
+
$$
\begin{array}{l} f \left(s _ {0}\right) = p \gamma f \left(s _ {0} - a _ {0}\right) + (1 - p) \gamma f \left(s _ {0} + a _ {0}\right) \\ \leq p \gamma (v (s _ {0} - a _ {0}) + \delta) + (1 - p) \gamma (v (s _ {0} + a _ {0}) + \delta) \\ = p \gamma v \left(s _ {0} - a _ {0}\right) + (1 - p) \gamma v \left(s _ {0} + a _ {0}\right) + \gamma \delta \\ \leq v \left(s _ {0}\right) + \delta - \epsilon . \\ \end{array}
$$

The inequality $f(s_0) \leq v(s_0) + \delta - \epsilon$ contradicts $f(s_0) > v(s_0) + \delta - \epsilon$ . Hence, the lemma is proved for the case $\gamma < 1$ .
+
Case (II): $\gamma = 1$ . When there exists an $s'$ such that $f(s') - v(s') = \delta$ , we show the contradiction. Let $S = \{s \mid f(s) - v(s) = \delta, 0 < s < 1\} \neq \emptyset$ .
For any $s' \in S$ and $a' \in \operatorname{argmax}_{a \in \mathcal{A}(s')} p\gamma f(s' - a) + (1 - p)\gamma f(s' + a)$ , we have
+
$$
\begin{array}{l} f \left(s ^ {\prime}\right) = p \gamma f \left(s ^ {\prime} - a ^ {\prime}\right) + (1 - p) \gamma f \left(s ^ {\prime} + a ^ {\prime}\right) \\ = p f \left(s ^ {\prime} - a ^ {\prime}\right) + (1 - p) f \left(s ^ {\prime} + a ^ {\prime}\right) \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(\heartsuit)} {\leq} p \left(v \left(s ^ {\prime} - a ^ {\prime}\right) + \delta\right) + (1 - p) \left(v \left(s ^ {\prime} + a ^ {\prime}\right) + \delta\right) \\ = p v \left(s ^ {\prime} - a ^ {\prime}\right) + (1 - p) v \left(s ^ {\prime} + a ^ {\prime}\right) + \delta \\ \end{array}
$$

$$
\stackrel {(\spadesuit)} {\leq} v (s ^ {\prime}) + \delta .
$$

Thus, the equality in $(\heartsuit)$ and $(\spadesuit)$ must hold. We specify $s_0 \in S$ with a corresponding $a_0$ , and by the equality of $(\heartsuit)$ we have $f(s_0 - a_0) = v(s_0 - a_0) + \delta$ , thus $s_0 - a_0 \in S$ . Let $s_1 = s_0 - a_0$ , and we recursively specify an arbitrary $a_t \in \operatorname{argmax}_{a \in \mathcal{A}(s_t)} (1 - p)\gamma f(s_t + a) + p\gamma f(s_t - a)$ and $s_{t+1} = s_t - a_t$ , for $t = 1, 2, \ldots$ , until $s_T = 0$ for some $T$ , or indefinitely if such an $s_T$ does not exist. If $s_T$ exists and the sequence $\{s_t\}$ terminates at $s_T = 0$ , then $f(s_T) = v(s_T) + \delta = \delta$ by $(\heartsuit)$ , which contradicts the boundary condition $f(s_T) = f(0) = 0$ .
+
We desire to show the existence of $T$ . When there exist $t$ and $\ell$ such that $s_t \in G_\ell$ , by Corollary 7 we have $s_{t+1} \in G_\ell$ and inductively $s_{t'} \in G_\ell$ for all $t' \geq t$ . Considering that $\{s_t\}$ is strictly decreasing and there are finitely many elements in $G_\ell$ , $\{s_t\}$ cannot be infinite. Otherwise $s_t \notin G_\ell$ for any $t, \ell \geq 1$ .
Then, by Corollary 10, the uniqueness of the optimal action, we have $s_{t+1} = 2s_t - 1$ if $s_t \geq \frac{1}{2}$ , and $s_{t+1} = 0$ if $s_t \leq \frac{1}{2}$ . After finitely many steps of $s_{t+1} = 2s_t - 1$ we will have $s_t = 0$ for some $t$ .
+
It remains to show the existence of an $s'$ such that $f(s') - v(s') = \delta$ . By Lemma 9 we have the continuity of $v(s)$ . Lemma 3 indicates the monotonicity of $f(s)$ on $[0, 1)$ . The upper bound $f(s) \leq f(1)$ in (X) extends this monotonicity to the closed interval $[0, 1]$ . Then by Lemma 4 we have the continuity of $f(s)$ on $(0, 1]$ . By (X) this extends to $[0, 1]$ . Thus we have the continuity of $f(s) - v(s)$ , and consequently the existence of $\max_{0 \leq s' \leq 1} f(s') - v(s')$ . As $f(0) - v(0) = f(1) - v(1) = 0$ and $\delta > 0$ , this maximum must be attained at some $s' \in (0, 1)$ . Therefore we have the existence of $\max_{0 < s' < 1} f(s') - v(s')$ , which concludes the lemma.
+
Lemma 3 (Monotonicity). Let $\gamma = 1$ and $p > 0.5$ . If a real function $f(s)$ satisfies (AB), then $f(s)$ is monotonically increasing on $[0,1)$ .
+
Proof. We prove the claim by contradiction. Assume that there exist $s_1 < s_2$ with $f(s_1) > f(s_2)$ . Denote $\Delta s = s_2 - s_1 > 0$ and $\Delta f = f(s_1) - f(s_2) > 0$ . By induction we have
+
$$
f \left(s _ {2} - 2 ^ {- \ell} \Delta s\right) - f \left(s _ {2}\right) \geq p ^ {\ell} \Delta f
$$

for an arbitrary integer $\ell \geq 1$ . Then when $s_2 + 2^{-\ell}\Delta s < 1$ , by $f(s_2) \geq pf(s_2 - 2^{-\ell}\Delta s) + (1 - p)f(s_2 + 2^{-\ell}\Delta s)$ ,
+
$$
f (s _ {2} + 2 ^ {- \ell} \Delta s) \leq \frac {1}{1 - p} f (s _ {2}) - \frac {p}{1 - p} f (s _ {2} - 2 ^ {- \ell} \Delta s)
$$

$$
\begin{array}{l} = f (s _ {2}) + \frac {p}{1 - p} \left(f (s _ {2}) - f \left(s _ {2} - 2 ^ {- \ell} \Delta s\right)\right) \\ \leq f \left(s _ {2}\right) + f \left(s _ {2}\right) - f \left(s _ {2} - 2 ^ {- \ell} \Delta s\right).
\\ \end{array}
+$$

This concludes $f(s_{2} + 2^{-\ell}\Delta s) - f(s_{2}) \leq f(s_{2}) - f(s_{2} - 2^{-\ell}\Delta s)$ . By induction we have
+
$$
f (s _ {2} + k 2 ^ {- \ell} \Delta s) - f (s _ {2} + (k - 1) 2 ^ {- \ell} \Delta s) \leq f (s _ {2} + (k - 1) 2 ^ {- \ell} \Delta s) - f (s _ {2} + (k - 2) 2 ^ {- \ell} \Delta s)
$$

for $k = 1,2,\ldots$ , when $s_2 + k2^{-\ell}\Delta s < 1$ . Summing this inequality over $k$ , we get
+
$$
\begin{array}{l} f \left(s _ {2} + k 2 ^ {- \ell} \Delta s\right) - f \left(s _ {2}\right) \leq k \left(f \left(s _ {2}\right) - f \left(s _ {2} - 2 ^ {- \ell} \Delta s\right)\right) \\ \leq - k p ^ {\ell} \Delta f. \\ \end{array}
$$

By letting $k = 2^n$ , $\ell = n + n_0$ with $s_2 + 2^{-n_0}\Delta s < 1$ , and $n \to +\infty$ , we have $s_2 + k2^{-\ell}\Delta s < 1$ and $-kp^{\ell}\Delta f \to -\infty$ . The arbitrariness of $n$ indicates the non-existence of $f(s_2 + k2^{-\ell}\Delta s)$ , which contradicts the existence of the solution $f(\cdot)$ .
+
Lemma 4 (Continuity). Let $\gamma = 1$ and $p \geq 0.5$ . If a real function $f(s)$ is monotonically increasing on $(0,1]$ and satisfies (AB), then $f(s)$ is continuous on $(0,1]$ .
+
Proof. We show the continuity by contradiction. Suppose that there exists a point $s' \in (0,1)$ at which $f(s)$ is discontinuous; then there exist $\epsilon, \delta > 0$ such that $f(s' + \epsilon_1) - f(s' - \epsilon_2) \geq \delta$ for any $\epsilon_1 + \epsilon_2 = \epsilon$ . Then, by
+
$$
f (s ^ {\prime} - \frac {1}{4} \epsilon) \geq p f (s ^ {\prime} - \epsilon) + (1 - p) f (s ^ {\prime} + \frac {1}{2} \epsilon),
$$

we have
+
$$
f (s ^ {\prime} - \frac {1}{4} \epsilon) - f (s ^ {\prime} - \epsilon) \geq (1 - p) \delta / p.
$$

Similarly, for $k = 1,2,\ldots$ ,
+
$$
f (s ^ {\prime} - \frac {1}{4 ^ {k}} \epsilon) - f (s ^ {\prime} - \frac {1}{4 ^ {k - 1}} \epsilon) \geq (1 - p) \delta / p.
+$$ + +Letting $k > ((1 - p)\delta /p)^{-1}$ , we have $f(s^{\prime} - \frac{1}{4^{k}}\epsilon) - f(s^{\prime} - \epsilon)\geq 1$ . This contradicts the fact that $f(s)$ is bounded between 0 and 1. The continuity follows on (0, 1). + +If the function is discontinuous at $s = 1$ , then there exist $\epsilon, \delta > 0$ such that $f(1) - f(1 - \epsilon_1) \geq \delta$ for any $\epsilon_1 \leq \epsilon$ . The same argument holds by observing + +$$
+f (1 - \frac {1}{2 ^ {k}} \epsilon) \geq p f (1 - \frac {1}{2 ^ {k - 1}} \epsilon) + (1 - p) f (1).
+$$ + +The lemma follows. + +When $f(s)$ is only required to be monotonically increasing on $(0,1)$ , the continuity still holds but only on $(0,1)$ . + +For simplicity define + +$$
+Q _ {v} (s, a) = p \gamma v (s - a) + (1 - p) \gamma v (s + a). \tag {4}
+$$ + +As $v(s)$ is the optimal state-value function (to be proved later in Theorem 12), $Q_{v}(s,a)$ is in fact the optimal action-value function (Sutton & Barto, 2018; Szepesvári, 2010). + +Recall that $G_{\ell}$ is the set of dyadic rationals $\{k2^{-\ell} \mid k \in \{1, \ldots, 2^{\ell} - 1\}\}$ . + +Claim 5. For any $s = 0.b_{1}b_{2}\dots b_{\ell (2)}\in G_{\ell}\cup \{0\}$ + +$$
+v (s + 2 ^ {- (\ell + 1)}) - v (s) = (1 - p) \gamma^ {\ell + 1} \prod_ {j = 1} ^ {\ell} ((1 - p) + (2 p - 1) b _ {j}) \leq (1 - p) p ^ {\ell} \gamma^ {\ell + 1}. \tag {5}
+$$ + +For any $s = 0.b_{1}b_{2}\ldots b_{k(2)}\in G_{\ell}$ with $b_{k} = 1$ and $1\leq k\leq \ell$ + +$$
+v (s) - v \left(s - 2 ^ {- (\ell + 1)}\right) \geq p ^ {\ell - k + 1} (1 - p) \gamma^ {\ell + 1} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}). \tag {6}
+$$ + +Also, + +$$
+v (1) - v \left(1 - 2 ^ {- (\ell + 1)}\right) \geq p ^ {\ell + 1} \gamma^ {\ell + 1}. \tag {7}
+$$ + +The equality of (6) and (7) holds if and only if $\gamma = 1$ . + +Proof. Inequalities (5) and (7) are obtained by the definition of $v(s)$ .
To derive inequality (6), denote $k = \max \{1 \leq i \leq \ell : b_i = 1\}$ (the position of the last 1 bit of $s$ ) and then + +$$
+\begin{array}{l} v (s) - v \left(s - 2 ^ {- (\ell + 1)}\right) \\ = (1 - p) \gamma^ {k} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}) \\ - \sum_ {i = k + 1} ^ {\ell + 1} (1 - p) \gamma^ {i} \cdot 1 \cdot \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}) \cdot (1 - p) \cdot \prod_ {j = k + 1} ^ {i - 1} p \\ = (1 - p) \gamma^ {k} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}) \cdot \left(1 - (1 - p) \sum_ {i = k + 1} ^ {\ell + 1} \gamma^ {i - k} p ^ {i - k - 1}\right) \\ = (1 - p) \gamma^ {k} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}) \cdot \left(1 - (1 - p) \gamma \frac {1 - (\gamma p) ^ {\ell - k + 1}}{1 - \gamma p}\right) \\ \geq (1 - p) \gamma^ {k} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}) \cdot \left(1 - \left(1 - (\gamma p) ^ {\ell - k + 1}\right)\right) \\ = (1 - p) p ^ {\ell - k + 1} \gamma^ {\ell + 1} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) b _ {j}). \\ \end{array}
+$$ + +The equalities are due to the fact that $s - 2^{-(\ell +1)} = 0.b_{1}b_{2}\dots b_{k - 1}0_{k}1_{k + 1}\dots 1_{\ell +1(2)}$ , and the inequality is by $(1 - p)\gamma \leq 1 - \gamma p$ . + +Lemma 6. Let $\ell \geq 1,0 < \gamma \leq 1,0.5\leq p < 1$ . For any $s\in G_{\ell}$ + +$$
+\max _ {a \in (G _ {\ell + 1} \backslash G _ {\ell}) \cap \mathcal {A} (s)} Q _ {v} (s, a) \leq \max _ {a \in G _ {\ell} \cap \mathcal {A} (s)} Q _ {v} (s, a).
+$$ + +Proof. Case (I): First we prove that for $\ell > 1$ , any $s \in G_{\ell}$ , $a \in G_{\ell} \cap \mathcal{A}(s)$ , $a > 2^{-\ell}$ , and $s + a < 1$ , + +$$
+Q _ {v} \left(s, a - 2 ^ {- (\ell + 1)}\right) \leq \max \left\{Q _ {v} (s, a), Q _ {v} \left(s, a - 2 ^ {- \ell}\right) \right\}.
+$$ + +Note that in this case, $a - 2^{-(\ell +1)}\in G_{\ell +1}\cap \mathcal{A}(s)$ and $a - 2^{-\ell}\in G_{\ell}\cap \mathcal{A}(s)$ . + +Let $s - a = 0.c_{1}c_{2}\ldots c_{\ell (2)} = 0.c_{1}c_{2}\ldots 0_{k}1_{k + 1}\ldots 1_{\ell (2)}$ , where $k = \max \{1\leq i\leq \ell :c_i = 0\}$ is the index of the last 0 bit in $s - a$ . Such $k$ must exist since $0\leq s - a\leq 1 - 3\times 2^{-\ell} < 1$ . Similarly, let $s + a = 0.d_1d_2\dots d_{\ell (2)} = 0.d_1d_2\dots 1_{k'(2)}$ where $k^{\prime} = \max \{1\leq i\leq \ell :d_{i} = 1\}$ is the index of the last 1 bit in $s + a$ . Such $k^{\prime}$ must exist since $3\times 2^{-\ell}\leq s + a < 1$ . Also, $s + a - 2^{-(\ell +1)} = 0.d_1d_2\dots 0_{k'}1_{k' + 1}\dots 1_{\ell +1(2)}$ . + +To prove $Q_{v}\left(s,a - 2^{-(\ell +1)}\right)\leq Q_{v}(s,a)$ , it is equivalent to prove + +$$
+v (s + a) - v \left(s + a - 2 ^ {- (\ell + 1)}\right) \geq \frac {p}{1 - p} \left(v \left(s - a + 2 ^ {- (\ell + 1)}\right) - v (s - a)\right).
+$$ + +Then by applying inequalities (6) and (7) to the LHS and inequality (5) to the RHS (all from Claim 5), it suffices to prove + +$$
+\begin{array}{l} p ^ {\ell - k ^ {\prime}} (1 - p) \prod_ {j = 1} ^ {k ^ {\prime} - 1} ((1 - p) + (2 p - 1) d _ {j}) \geq \prod_ {j = 1} ^ {\ell} ((1 - p) + (2 p - 1) c _ {j}) \\ = p ^ {\ell - k} (1 - p) \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) c _ {j}). \\ \end{array}
+$$ + +Let $M_{c} = c_{1} + \dots + c_{k - 1}$ , $M_{d} = d_{1} + \dots + d_{k' - 1}$ be the number of 1s in $\{c_{1}, \ldots, c_{k - 1}\}$ and $\{d_{1}, \ldots, d_{k' - 1}\}$ , respectively. Then $Q_{v}(s, a - 2^{-(\ell + 1)}) \leq Q_{v}(s, a)$ holds when $p = 0.5$ , or when $p > 0.5$ and $M_{c} + k \geq M_{d} + k'$ .
+ +To prove $Q_{v}\left(s,a - 2^{-(\ell +1)}\right)\leq Q_{v}\left(s,a - 2^{-\ell}\right)$ , it is equivalent to prove + +$$
+v \left(s - a + 2 ^ {- \ell}\right) - v \left(s - a + 2 ^ {- (\ell + 1)}\right) \geq \frac {1 - p}{p} \left(v \left(s + a - 2 ^ {- (\ell + 1)}\right) - v \left(s + a - 2 ^ {- \ell}\right)\right).
+$$ + +Note that $s - a + 2^{-\ell} = 0.c_{1}c_{2}\ldots 1_{k(2)}$ and $s + a - 2^{-\ell} = 0.d_{1}d_{2}\ldots 0_{k'}1_{k' + 1}\ldots 1_{\ell (2)}$ . Then by applying inequalities (6) and (7) to the LHS and inequality (5) to the RHS, it suffices to prove + +$$
+p ^ {\ell - k + 2} \prod_ {j = 1} ^ {k - 1} ((1 - p) + (2 p - 1) c _ {j}) \geq (1 - p) ^ {2} p ^ {\ell - k ^ {\prime}} \prod_ {j = 1} ^ {k ^ {\prime} - 1} ((1 - p) + (2 p - 1) d _ {j}).
+$$ + +Then $Q_{v}\left(s,a - 2^{-(\ell +1)}\right)\leq Q_{v}\left(s,a - 2^{-\ell}\right)$ holds when $p = 0.5$ , or when $p > 0.5$ and $M_{c} + k^{\prime} + 2\geq M_{d} + k$ . + +Since at least one of $M_{c} + k' + 1 \geq M_{d} + k$ and $M_{d} + k \geq M_{c} + k' + 1$ must hold, we conclude $Q_{v}\left(s,a - 2^{-(\ell +1)}\right) \leq \max \left\{Q_{v}(s,a),Q_{v}\left(s,a - 2^{-\ell}\right)\right\}$ . + +We cover two corner cases for the completeness of the proof. + +Case (II): Next we prove that for $\ell \geq 1$ , any $s \in G_{\ell}$ , $a \in G_{\ell} \cap \mathcal{A}(s)$ and $s + a = 1$ , + +$$
+Q _ {v} (s, a - 2 ^ {- (\ell + 1)}) \leq Q _ {v} (s, a).
+$$ + +Similarly to the above, it is equivalent to prove + +$$
+v (1) - v \left(1 - 2 ^ {- (\ell + 1)}\right) \geq \frac {p}{1 - p} \left(v \left(s - a + 2 ^ {- (\ell + 1)}\right) - v (s - a)\right).
+$$ + +Note that by Claim 5, + +$$
+\begin{array}{l} v (1) - v \left(1 - 2 ^ {- (\ell + 1)}\right) \geq p ^ {\ell + 1} \gamma^ {\ell + 1} = \frac {p}{1 - p} \cdot (1 - p) p ^ {\ell} \gamma^ {\ell + 1} \\ \geq \frac {p}{1 - p} \left(v \left(s - a + 2 ^ {- (\ell + 1)}\right) - v (s - a)\right), \\ \end{array}
+$$ + +which concludes the proof.
+ +Case (III): Last we prove that for $\ell > 1$ , any $s \in G_{\ell}$ with $s < 1 - 2^{-\ell}$ , and $a = 2^{-\ell}$ , we have $Q_{v}\left(s, a - 2^{-(\ell + 1)}\right) \leq \max_{a^{\prime} \in G_{\ell} \cap \mathcal{A}(s)} Q_{v}(s, a^{\prime})$ . + +When $s = 0.b_{1}b_{2}\ldots 0_{m}1_{m + 1}\ldots 1_{\ell (2)}$ with $1\leq m < \ell$ , to prove $Q_{v}\left(s,2^{-(\ell +1)}\right)\leq Q_{v}\left(s,2^{-\ell}\right)$ , it is equivalent to prove + +$$
+v \left(s + 2 ^ {- \ell}\right) - v \left(s + 2 ^ {- (\ell + 1)}\right) \geq \frac {p}{1 - p} \left(v \left(s - a + 2 ^ {- (\ell + 1)}\right) - v (s - a)\right).
+$$ + +In this case, $s + 2^{-\ell} = 0.b_{1}b_{2}\ldots 1_{m(2)}, s - 2^{-\ell} = 0.b_{1}b_{2}\ldots 0_{m}1_{m+1}\ldots 1_{\ell-1(2)}$ and $M_{c} = M_{d}, k = k' = m$ , thus $M_{c} + k \geq M_{d} + k^{\prime}$ holds, which concludes this subcase by the same argument as the first part of Case (I). + +When $s = 0.b_{1}b_{2}\dots 1_{m^{\prime}}0_{m^{\prime} + 1}\dots 0_{\ell (2)}$ with $1\leq m^{\prime} < \ell$ , let $M_{b} = b_{1} + \dots + b_{m^{\prime}}$ . Then, + +$$
+Q _ {v} \left(s, 2 ^ {- m ^ {\prime}}\right) - Q _ {v} \left(s, 2 ^ {- (\ell + 1)}\right)
+$$ + +$$
+\begin{array}{l} = (1 - p) \gamma (v (s + 2 ^ {- m ^ {\prime}}) - v (s - 2 ^ {- m ^ {\prime}})) - (1 - p) \gamma (v (s) - v (s - 2 ^ {- m ^ {\prime}})) \\ - (1 - p) \gamma \left(v \left(s + 2 ^ {- \ell}\right) - v (s)\right) - p \gamma \left(v \left(s - 2 ^ {- \ell}\right) - v \left(s - 2 ^ {- m ^ {\prime}}\right)\right) \\ \geq (1 - p) \gamma (p \gamma) ^ {M _ {b}} ((1 - p) \gamma) ^ {m ^ {\prime} - 2 - M _ {b}} (1 - p) \gamma - (1 - p) \gamma (p \gamma) ^ {M _ {b}} ((1 - p) \gamma) ^ {m ^ {\prime} - 1 - M _ {b}} (1 - p) \gamma \\ - (1 - p) \gamma (p \gamma) ^ {M _ {b} + 1} ((1 - p) \gamma) ^ {\ell - 2 - M _ {b}} (1 - p) \gamma \\ - p \gamma (p \gamma) ^ {M _ {b}} ((1 - p) \gamma) ^ {m ^ {\prime} - M _ {b}} (1 + (p \gamma) + \dots + (p \gamma) ^ {\ell - m ^ {\prime} - 1}) (1 - p) \gamma \\ = p ^ {M _ {b}} (1 - p) ^ {m ^ {\prime} - M _ {b}} \gamma^ {m ^ {\prime}} - p ^ {M _ {b}} (1 - p) ^ {m ^ {\prime} + 1 - M _ {b}} \gamma^ {m ^ {\prime} + 1} - p ^ {M _ {b}
+ 1} (1 - p) ^ {\ell - M _ {b}} \gamma^ {\ell + 1} \\ - p ^ {M _ {b} + 1} (1 - p) ^ {m ^ {\prime} + 1 - M _ {b}} \gamma^ {m ^ {\prime} + 2} \left(1 - \left(p \gamma\right) ^ {\ell - m ^ {\prime}}\right) / \left(1 - p \gamma\right) \\ \geq p ^ {M _ {b}} (1 - p) ^ {m ^ {\prime} - M _ {b}} \gamma^ {m ^ {\prime}} - p ^ {M _ {b}} (1 - p) ^ {m ^ {\prime} + 1 - M _ {b}} \gamma^ {m ^ {\prime} + 1} - p ^ {M _ {b} + 1} (1 - p) ^ {\ell - M _ {b}} \gamma^ {\ell + 1} \\ - p ^ {M _ {b} + 1} (1 - p) ^ {m ^ {\prime} - M _ {b}} \gamma^ {m ^ {\prime} + 1} \left(1 - \left(p \gamma\right) ^ {\ell - m ^ {\prime}}\right) \\ \geq - p ^ {M _ {b} + 1} (1 - p) ^ {\ell - M _ {b}} \gamma^ {\ell + 1} - p ^ {M _ {b} + 1} (1 - p) ^ {m ^ {\prime} - M _ {b}} \gamma^ {m ^ {\prime} + 1} \left(- \left(p \gamma\right) ^ {\ell - m ^ {\prime}}\right) \\ \geq 0. \\ \end{array}
+$$ + +The arguments in the proof that either $M_c + k \geq M_d + k' + 1$ or $M_d + k' \geq M_c + k$ must hold are tight for integers $M_c$ and $M_d$ . This is the case for $a \in G_{\ell+1} \setminus G_\ell$ . When $a \notin G_{\ell+1}$ , this sufficient condition becomes even looser. The lemma restricts the set of possible optimal actions to $G_{\ell}$ , given $s \in G_{\ell}$ . + +Corollary 7. Let $\ell \geq 1$ . For any $s \in G_{\ell}$ , + +$$
+\operatorname *{argmax}_{a\in \mathcal{A}(s)}Q_{v}(s,a)\subseteq G_{\ell}.
+$$ + +Now we verify the Bellman property on $\bigcup_{\ell \geq 1}G_{\ell}$ . + +Lemma 8. Let $\ell \geq 1$ . For any $s \in G_{\ell + 1}$ , + +$$
+\min \{s,1 - s\} \in \operatorname *{argmax}_{a\in G_{\ell +1}\cap \mathcal{A}(s)}Q_{v}(s,a).
+$$ + +Proof. We prove the lemma by induction over $\ell$ . When $\ell = 1$ , it is obvious since $G_{1}$ has only one element. The base case $\ell = 2$ is also immediate by exhausting $a \in \{2^{-1}, 2^{-2}\}$ for $s = 2^{-1}$ . Now we assume that for any $s \in G_{\ell}$ , $\min \{s, 1 - s\} \in \operatorname{argmax}_{a \in G_{\ell} \cap \mathcal{A}(s)} Q_v(s, a)$ .
We aim to prove this lemma for $\ell + 1$ . + +For $s \in G_{\ell}$ , by Lemma 6, $\operatorname{argmax}_{a \in G_{\ell + 1} \cap \mathcal{A}(s)} Q_v(s, a) \subseteq G_{\ell}$ . Then by the induction assumption, $\min \{s, 1 - s\} \in \operatorname{argmax}_{a \in G_{\ell} \cap \mathcal{A}(s)} Q_v(s, a) \subseteq \operatorname{argmax}_{a \in G_{\ell + 1} \cap \mathcal{A}(s)} Q_v(s, a)$ . Hence, the lemma holds for $s \in G_{\ell}$ . For the rest of the proof we consider $s \in G_{\ell + 1} \setminus G_{\ell}$ . + +We start with two inductive properties of $v(s)$ to reduce the problem from $s \in G_{\ell + 1}$ to $s' \in G_{\ell}$ , where $s'$ is either $2s$ or $2s - 1$ . For any $s \geq 2^{-1}$ , that is, $s = 0.c_1c_2 \ldots c_{\ell + 1(2)} \in G_{\ell + 1}$ with $c_1 = 1$ , + +$$
+\begin{array}{l} v (s) = \sum_ {i = 1} ^ {\ell} (1 - p) \gamma^ {i} c _ {i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) c _ {j}) \\ = (1 - p) \gamma + \sum_ {i = 2} ^ {\ell} (1 - p) \gamma^ {i} c _ {i} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) c _ {j}) \\ = (1 - p) \gamma + \sum_ {i = 1} ^ {\ell - 1} (1 - p) \gamma^ {i + 1} c _ {i + 1} ((1 - p) + (2 p - 1) c _ {1}) \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) c _ {j + 1}) \\ = (1 - p) \gamma + p \gamma v \left(0. c _ {2} \dots c _ {\ell + 1 (2)}\right) \\ = (1 - p) \gamma + p \gamma v (2 s - 1). \\ \end{array}
+$$ + +Similarly, for any $s < 2^{-1}$ , that is, $s = 0.c_{1}c_{2}\dots c_{\ell +1(2)}\in G_{\ell +1}$ with $c_{1} = 0$ + +$$
+v (s) = \sum_ {i = 1} ^ {\ell - 1} (1 - p) ^ {2} \gamma^ {i + 1} c _ {i + 1} \prod_ {j = 1} ^ {i - 1} ((1 - p) + (2 p - 1) c _ {j + 1})
+$$ + +$$
+= (1 - p) \gamma v (2 s).
+$$ + +Armed with these properties, we split the discussion into four cases $2^{-1} + 2^{-2} \leq s < 1$ , $2^{-1} \leq s < 2^{-1} + 2^{-2}$ , $2^{-1} - 2^{-2} < s < 2^{-1}$ , and $0 < s \leq 2^{-1} - 2^{-2}$ . + +When $s \geq 2^{-1} + 2^{-2}$ : as $a \leq 1 - s$ , we have $s - a \geq 2^{-1}$ and $s + a \geq 2^{-1}$ .
Hence, the first bit after the decimal point of both $s - a$ and $s + a$ is 1. Therefore, + +$$
+\begin{array}{l} Q _ {v} (s, a) = p \gamma v (s - a) + (1 - p) \gamma v (s + a) \\ = (1 - p) \gamma^ {2} + p \gamma (p \gamma v (2 s - 2 a - 1) + (1 - p) \gamma v (2 s + 2 a - 1)) \\ = (1 - p) \gamma^ {2} + p \gamma (p \gamma v ((2 s - 1) - 2 a) + (1 - p) \gamma v ((2 s - 1) + 2 a)) \\ = (1 - p) \gamma^ {2} + p \gamma Q _ {v} (2 s - 1, 2 a). \\ \end{array}
+$$ + +As $2s - 1 \in G_{\ell}$ and $2a \in G_{\ell}$ , by the induction assumption the maximum of $Q_v(2s - 1, 2a)$ is obtained at $a = 1 - s$ . Hence, $1 - s \in \operatorname{argmax}_{a \in G_{\ell + 1} \cap \mathcal{A}(s)} Q_v(s, a)$ as desired. + +When $2^{-1} \leq s < 2^{-1} + 2^{-2}$ , if $s - a \geq 2^{-1}$ , then the first bit after the decimal point of both $s - a$ and $s + a$ is 1 and the lemma follows by the same arguments as in the case above. Otherwise, if $s - a < 2^{-1}$ , we have + +$$
+\begin{array}{l} Q _ {v} (s, a) = p \gamma v (s - a) + (1 - p) \gamma v (s + a) \\ = (1 - p) ^ {2} \gamma^ {2} + p (1 - p) \gamma^ {2} v (2 s - 2 a) + p (1 - p) \gamma^ {2} v (2 s + 2 a - 1) \\ = (1 - p) \gamma \left(p \gamma v \left(\left(2 s - 2 ^ {- 1}\right) - \left(2 a - 2 ^ {- 1}\right)\right) + (1 - p) \gamma v \left(\left(2 s - 2 ^ {- 1}\right) + \left(2 a - 2 ^ {- 1}\right)\right)\right) \\ + (1 - p) (2 p - 1) \gamma^ {2} v (2 s + 2 a - 1) + (1 - p) ^ {2} \gamma^ {2} \\ = (1 - p) \gamma Q _ {v} \left(2 s - 2 ^ {- 1}, 2 a - 2 ^ {- 1}\right) + (1 - p) (2 p - 1) \gamma^ {2} v \left(2 s + 2 a - 1\right) + (1 - p) ^ {2} \gamma^ {2}. \\ \end{array}
+$$ + +As $2s - 2^{-1} \in G_{\ell}$ and $2a - 2^{-1} \in G_{\ell}$ whenever $\ell \geq 2$ , by the induction assumption $Q_{v}(2s - 2^{-1}, 2a - 2^{-1})$ obtains its maximum at $a = 1 - s$ . By Claim 5, $v(s)$ is monotonically increasing on $G_{\ell}$ for any $\ell \geq 2$ . Hence, $v(2s + 2a - 1)$ obtains its maximum at the largest feasible $a$ , which is $a = 1 - s$ .
Since both terms take their respective maximum at $a = 1 - s$ , we conclude that $1 - s \in \operatorname{argmax}_{a \in G_{\ell + 1} \cap \mathcal{A}(s)} Q_{v}(s, a)$ as desired. + +The other two cases, $2^{-1} - 2^{-2} < s < 2^{-1}$ and $0 < s \leq 2^{-1} - 2^{-2}$ , follow similar arguments. The lemma follows. + +Lemma 9. Both $v(s)$ and $v'(s) = \max_{a \in \mathcal{A}(s)} Q_v(s, a)$ are continuous at $s$ if there does not exist an $\ell$ such that $s \in G_\ell$ . + +Proof. We first prove the continuity of $v(s)$ . Write $s = 0.b_1b_2\ldots b_\ell \ldots_{(2)}$ . Since $s \notin G_\ell$ for any $\ell$ , for any integer $N$ there exist $n_1 \geq N$ such that $b_{n_1} = 1$ and $n_0 \geq N$ such that $b_{n_0} = 0$ . The monotonicity of $v(s)$ is obvious from Equation (1): flipping a 0 bit to a 1 bit always yields a greater value. For any $s - 2^{-N} \leq s' \leq s + 2^{-N}$ , we specify $n_1$ and $n_0$ such that $s - 2^{-n_1} \leq s' \leq s + 2^{-n_0}$ . By the monotonicity of $v(s)$ we have + +$$
+\begin{array}{l} v (s) - v \left(s ^ {\prime}\right) \leq v (s) - v \left(s - 2 ^ {- n _ {1}}\right) \\ = (1 - p) \gamma^ {n _ {1}} \prod_ {j = 1} ^ {n _ {1} - 1} ((1 - p) + (2 p - 1) b _ {j}) \cdot (1 + \sum_ {i = n _ {1} + 1} ^ {\infty} \gamma^ {i - n _ {1}} b _ {i} p \prod_ {j = n _ {1} + 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j})) \\ - (1 - p) \gamma^ {n _ {1}} \prod_ {j = 1} ^ {n _ {1} - 1} ((1 - p) + (2 p - 1) b _ {j}) \sum_ {i = n _ {1} + 1} ^ {\infty} \gamma^ {i - n _ {1}} b _ {i} (1 - p) \prod_ {j = n _ {1} + 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j}) \\ = (1 - p) \gamma^ {n _ {1}} \prod_ {j = 1} ^ {n _ {1} - 1} ((1 - p) + (2 p - 1) b _ {j}) \cdot (1 \\ + \sum_ {i = n _ {1} + 1} ^ {\infty} \gamma^ {i - n _ {1}} b _ {i} (2 p - 1) \prod_ {j = n _ {1} + 1} ^ {i - 1} ((1 - p) + (2 p - 1) b _ {j})) \\ \leq (1 - p) \gamma^ {n _ {1}} p ^ {n _ {1} - 1} \cdot \left(1 + \sum_ {i = n _ {1} + 1} ^ {\infty} \gamma^ {i - n _ {1}} (2 p - 1) p ^ {n _ {1} - i - 1}\right) \\
\end{array}
+$$ + +$$
+\leq 2 (1 - p) \gamma^ {N} p ^ {N - 1}.
+$$ + +And similarly, + +$$
+\begin{array}{l} v (s) - v \left(s ^ {\prime}\right) \geq v (s) - v \left(s + 2 ^ {- n _ {0}}\right) \\ \geq - (1 - p) \gamma^ {n _ {0}} p ^ {n _ {0} - 1} \cdot \left(1 + \sum_ {i = n _ {0} + 1} ^ {\infty} \gamma^ {i - n _ {0}} (2 p - 1) p ^ {n _ {0} - i - 1}\right) \\ \geq - 2 (1 - p) \gamma^ {N} p ^ {N - 1}. \\ \end{array}
+$$ + +Hence, $|v(s) - v(s')|$ is bounded by $2(1 - p)\gamma^{N}p^{N - 1}$ for $s - 2^{-N} \leq s' \leq s + 2^{-N}$ . As $2(1 - p)\gamma^{N}p^{N - 1}$ converges to zero when $N$ approaches infinity, $v(s)$ is continuous as desired. + +We then show the continuity of $v'(s) = \max_{a \in \mathcal{A}(s)} Q_v(s, a)$ . We first argue that $v'(s)$ is monotonically increasing. In fact, for $s' \geq s$ and $0 < a \leq \min \{s, 1 - s\}$ , either $0 < a \leq \min \{s', 1 - s'\}$ or $0 < a + s - s' \leq \min \{s', 1 - s'\}$ must be satisfied. Therefore $a \in \mathcal{A}(s)$ indicates at least one of $a \in \mathcal{A}(s')$ and $a + s - s' \in \mathcal{A}(s')$ . + +Let $a'$ be whichever of $a$ and $a + s - s'$ lies in $\mathcal{A}(s')$ ; then we have both $s' + a' \geq s + a$ and $s' - a' \geq s - a$ . Specifying $a$ such that $v'(s) = Q_v(s, a)$ , we have + +$$
+v ^ {\prime} (s ^ {\prime}) \geq Q _ {v} (s ^ {\prime}, a ^ {\prime}) \geq v ^ {\prime} (s).
+$$ + +The monotonicity follows. + +Let $s = 0.b_{1}b_{2}\ldots b_{\ell}\ldots_{(2)}$ . Similarly, for any $N$ , specify $n_1\geq N$ such that $b_{n_1} = 1$ and $n_0\geq N + 2$ such that $b_{n_0} = 0$ . Also let $s_0 = 0.b_1b_2\dots b_{N(2)}$ . Then consider the neighbourhood $s_0 - 2^{-(N + 1)}\leq s' \leq s_0 + 2^{-(N + 1)}$ : since both endpoints $s_0 - 2^{-(N + 1)}, s_0 + 2^{-(N + 1)}$ lie in $G_{N + 1}$ , we have $v^{\prime} = v$ at them. $|v^{\prime}(s) - v^{\prime}(s^{\prime})|$ is then bounded by $|v(s_0 - 2^{-(N + 1)}) - v(s_0 + 2^{-(N + 1)})|$ . According to Claim 5, this value converges to zero when $N$ approaches infinity.
The continuity of $v^{\prime}(s)$ follows. + +The continuity of $v(s)$ extends to the dyadic rationals $\bigcup_{\ell \geq 1} G_{\ell}$ when $\gamma = 1$ , which means that $v(s)$ is continuous everywhere on $[0,1]$ under $\gamma = 1$ . It is worth noting that, similar to the Cantor function, $v(s)$ is not absolutely continuous. In fact, $v(s)$ shares further properties with the Cantor function: both have a derivative of zero almost everywhere, both increase from 0 to 1, and both attain every value in between. + +The continuity of $v'(s) = \max_{a \in \mathcal{A}(s)} Q_v(s, a)$ indicates that the optimal action is uniquely $\min \{s, 1 - s\}$ at every $s \notin \bigcup_{\ell \geq 1} G_{\ell}$ . This optimal action agrees with the optimal action we specified on $s \in G_\ell$ in Lemma 8, which makes $\pi(s) = \min \{s, 1 - s\}$ an optimal policy for every state (conditioned on $v(s)$ being the optimal value function, which will be proved later). + +Corollary 10. If $s \notin G_{\ell}$ for any $\ell \geq 1$ + +$$
+\operatorname *{argmax} _ {a \in \mathcal {A} (s)} Q _ {v} (s, a) = \{\min \{s, 1 - s \} \}.
+$$ + +Lemma 11. $v(s)$ is the unique solution of the system (ABX). + +Proof. Let $v'(s) = \max_{a \in \mathcal{A}(s)} Q_v(s, a)$ . As per Lemma 8 we have $v(s) = v'(s)$ on the dyadic rationals $\bigcup_{\ell \geq 1} G_\ell$ . Since $\bigcup_{\ell \geq 1} G_\ell$ is dense in $(0, 1)$ , $v(s) = v'(s)$ holds whenever both $v(s)$ and $v'(s)$ are continuous at $s$ . By Lemma 9 $v(s)$ and $v'(s)$ are continuous for any $s$ if there does not exist an $\ell \geq 1$ such that $s \in G_\ell$ , which then indicates $v(s) = v'(s)$ on the complement of $\bigcup_{\ell \geq 1} G_\ell$ . Therefore $v(s) = v'(s)$ is satisfied on $(0, 1)$ , which verifies the Bellman property (AB). The boundary conditions (X) hold obviously. Finally, as per Lemma 2, $v(s)$ is the unique solution to the system of the Bellman equation and the boundary conditions. + +Theorem 12.
Let $0 \leq \gamma \leq 1$ and $p > 0.5$ . Under the continuous setting of the Gambler's problem, the optimal state-value function is $v(1) = 1$ and $v(s)$ defined in Equation (1) for $0 \leq s < 1$ . + +Proof. As the optimal state-value function must satisfy the system (ABX) and $v(s)$ is the unique solution to the system, $v(s)$ is the optimal state-value function. + +Corollary 13. The policy $\pi(s) = \min\{s, 1 - s\}$ is (Blackwell) optimal. + +It is worth noting that when $\gamma = 1$ and $s \in G_{\ell} \setminus G_{\ell - 1}$ for some $\ell$ , then $\pi'(s) = 2^{-\ell}$ is also an optimal policy at $s$ . + +Theorem 12 and the proof of Lemma 8 also induce the following statement: the optimal value function $v(s)$ is fractal and self-similar. The derivation of the corollary is in the introduction. + +Corollary 14. The curve of the value function $v(s)$ on the interval $[k2^{-\ell}, (k + 1)2^{-\ell}]$ is similar (in geometry) to the curve of $v(s)$ itself on $[0, 1]$ , for any integer $\ell \geq 1$ and $0 \leq k \leq 2^{\ell} - 1$ . + +Some other notable facts about $v(s)$ are as below: + +Fact 15. The expectation + +$$
+\int_ {0} ^ {1} v (s) d s = (1 - p) \gamma = v (\frac {1}{2}).
+$$ + +Fact 16. The derivative + +$$
+\lim _ {\Delta s \rightarrow 0 ^ {+}} \frac {v (s + \Delta s) - v (s)}{\Delta s} = 0, \quad \lim _ {\Delta s \rightarrow 0 ^ {-}} \frac {v (s + \Delta s) - v (s)}{\Delta s} = \left\{\begin{array}{l l}+ \infty ,&\text {if } s = 0 \text { or } s \in \bigcup_ {\ell \geq 1} G _ {\ell},\\0,&\text {otherwise}.\end{array}\right. \tag {8}
+$$ + +Fact 17. The length of the arc $y = v(s)$ , $0 \leq s \leq 1$ equals 2. + +In fact, any singular function (zero derivative a.e.) has an arc length of 2 (Pelling, 1977) if it goes from $(0,0)$ to $(1,1)$ monotonically. This can be intuitively understood as follows: the curve either goes horizontal, when the derivative is zero, or vertical, when the derivative is infinity.
Therefore the arc length is the Manhattan distance between $(0,0)$ and $(1,1)$ , which equals 2. + +Fact 18. + +$$
+\operatorname *{argmin}_{0\leq s\leq 1}v(s) - s = \{\frac{2}{3}\} .
+$$ + +By the fractal characterization of $v(s)$ , approximating it with simple function classes must be inexact. The following two propositions give quantitative lower bounds on such approximation errors. + +Proposition 19. When $N \in \mathbb{N}^{+}$ , $N \geq 4$ is a power of 2, let $\bar{v}_{1}(s)$ be piecewise constant on each of the intervals $s \in (k / N, (k + 1) / N)$ , $k = 0, \dots, N - 1$ , then + +$$
+\int_ {s} | v (s) - \bar {v} _ {1} (s) | d s \geq \frac {1}{N} \frac {(2 - \gamma) (1 - p) \gamma}{1 - p \gamma} + o (\frac {1}{N}).
+$$ + +Proof. When $N$ is a power of 2, for $k \in \{0, \dots, N - 1\}$ , the curve of $v(s)$ on each of the intervals $(k / N, (k + 1) / N)$ is self-similar to $v(s)$ itself on $(0, 1)$ . We consider this segment of the curve. By Equation (2), + +$$
+v (s) = v (\bar {s}) + \gamma^ {\ell} \prod_ {j = 1} ^ {\ell} ((1 - p) + (2 p - 1) b _ {j}) \cdot v (2 ^ {\ell} (s - \bar {s})),
+$$ + +where $\ell = \log_2N$ , $\bar{s} = k / N$ and $s = 0.b_{1}b_{2}\dots b_{\ell}\dots_{(2)}\in (\bar{s},\bar{s} +\frac{1}{N})$ . + +Letting $\Delta s = 1 / N$ and $\Delta y = v((k + 1) / N) - v(k / N)$ , we have, for $0 < s < \Delta s$ , + +$$
+v (\bar {s} + s) = v (\bar {s}) + v (s / \Delta s) \Delta y.
+$$ + +As $v(s)$ is monotonically increasing on every interval $(k / N, (k + 1) / N)$ , the minimum over $\bar{y}$ of + +$$
+\int_ {s = \bar {s}} ^ {\bar {s} + \Delta s} | v (s) - \bar {y} | d s
+$$ + +is obtained when $\bar{y}_{\bar{s}} = v(\bar{s} +\frac{1}{2}\Delta s)$ (intuitively, the median of $v(s)$ on the interval).
This results in an approximation error of + +$$ +\min _ {\bar {y}} \int_ {s = \bar {s}} ^ {\bar {s} + \Delta s} | v (s) - \bar {y} | d s +$$ + +$$ +\begin{array}{l} = \int_ {s = \bar {s}} ^ {\bar {s} + \frac {1}{2} \Delta s} v (\bar {s} + \frac {1}{2} \Delta s) - v (s) d s + \int_ {s = \bar {s} + \frac {1}{2} \Delta s} ^ {\bar {s} + \Delta s} v (s) - v (\bar {s} + \frac {1}{2} \Delta s) d s \\ = - \int_ {s = \bar {s}} ^ {\bar {s} + \frac {1}{2} \Delta s} v (s) d s + \int_ {s = \bar {s} + \frac {1}{2} \Delta s} ^ {\bar {s} + \Delta s} v (s) d s \\ = - \frac {1}{2} \Delta s \int_ {s = 0} ^ {1} v (\bar {s}) + (v (\bar {s} + \frac {1}{2} \Delta s) - v (\bar {s})) v (s) d s \\ + \frac {1}{2} \Delta s \int_ {s = 0} ^ {1} v (\bar {s} + \frac {1}{2} \Delta s) + (v (\bar {s} + \Delta s) - v (\bar {s} + \frac {1}{2} \Delta s)) v (s) d s \\ = \frac {1}{2} \Delta s ((1 - (1 - p) \gamma) (v (\bar {s} + \frac {1}{2} \Delta s) - v (\bar {s})) + (1 - p) \gamma (v (\bar {s} + \Delta s) - v (\bar {s} + \frac {1}{2} \Delta s))). \\ \end{array} +$$ + +This error is then summed over $\bar{s} = 0,1 / N,\dots ,(N - 1) / N$ such that + +$$ +\begin{array}{l} \sum_ {\bar {s} = 0 / N} ^ {(N - 1) / N} \int_ {s = \bar {s}} ^ {\bar {s} + \Delta s} | v (s) - \bar {y _ {\bar {s}}} | d s \geq \frac {1}{2 N} ((1 - (1 - p) \gamma) v (\frac {N - \frac {1}{2}}{N}) + (1 - p) \gamma (1 - v (\frac {1}{2 N}))) \\ = \frac {1}{2 N} ((1 - (1 - p) \gamma) \frac {(1 - p) \gamma}{1 - p \gamma} (1 - (p \gamma) ^ {\log_ {2} N + 1}) \\ + (1 - p) \gamma (1 - ((1 - p) \gamma) ^ {\log_ {2} N + 1})) \\ = N ^ {- 1} \frac {(2 - \gamma) (1 - p) \gamma}{1 - p \gamma} - N ^ {- 1 + \log_ {2} p \gamma} \frac {(1 - (1 - p) \gamma) p (1 - p) \gamma^ {2}}{2 (1 - p \gamma)} \\ - N ^ {- 1 + \log_ {2} (1 - p) \gamma} \frac {(1 - p) ^ {2} \gamma^ {2}}{2} \\ = \frac {1}{N} \frac {(2 - \gamma) (1 - p) \gamma}{1 - p \gamma} + o (\frac {1}{N}). 
\\ \end{array}
+$$ + +The $\mathcal{O}(1 / N)$ error bound generalizes to any integer $N$ , as we can relax $N$ to $2^{\lfloor \log_2N\rfloor -1}$ so that at least one self-similar segment of size $1 / N$ is included in each interval. + +For Lipschitz continuous functions like neural networks, the following proposition shows an approximation error lower bound in $\mathcal{O}(1 / L)$ , where $L$ is the Lipschitz constant. + +Proposition 20. Let $L \geq (1 - p)\gamma (1 - \gamma) / (1 - p\gamma)$ . If $\bar{v}_2(s)$ is Lipschitz continuous on $s \in (0,1)$ with Lipschitz constant $L$ , then + +$$
+\int_ {s} | v (s) - \bar {v} _ {2} (s) | d s \geq \frac {1}{L} \frac {(1 - p) ^ {2} \gamma^ {2} (1 - \gamma) ^ {2}}{4 (1 - p \gamma)}.
+$$ + +Proof. We consider $v\left(\frac{1}{2}\right) = (1 - p)\gamma$ and + +$$
+\lim_{s\to \frac{1}{2}^{-}}v(s) = \frac{(1 - p)^{2}\gamma^{2}}{1 - p\gamma}.
+$$ + +When $0 < \gamma < 1$ , we have + +$$
+v (\frac {1}{2}) - \lim _ {s \to \frac {1}{2} ^ {-}} v (s) = \frac {(1 - p) \gamma (1 - \gamma)}{1 - p \gamma} > 0.
+$$ + +Denote $h = (1 - p)\gamma (1 - \gamma) / (1 - p\gamma)$ . By the monotonicity of $v(s)$ , using $\bar{v}_2(s)$ to approximate $v(s)$ incurs an error of at least $\int_s |\xi (s) - \bar{v}_2(s)|ds$ , where $\xi (s)$ denotes the step function on $[0,1]$ , + +$$
+\xi (s) = \left\{ \begin{array}{l l} 0 & 0 \leq s < \frac {1}{2}, \\ h & \frac {1}{2} \leq s \leq 1. \end{array} \right.
+$$ + +In this case, the optimal $\bar{v}_2(s)$ is + +$$
+\bar {v} _ {2} (s) = \left\{ \begin{array}{l l} 0 & 0 \leq s < \frac {1}{2} - \frac {h}{2 L}, \\ \frac {h}{2} + L (s - \frac {1}{2}) & \frac {1}{2} - \frac {h}{2 L} \leq s \leq \frac {1}{2} + \frac {h}{2 L}, \\ h & \frac {1}{2} + \frac {h}{2 L} < s \leq 1. \end{array} \right.
+$$ + +Hence, we have + +$$
+\int_ {s} | v (s) - \bar {v} _ {2} (s) | d s \geq \int_ {s} | \xi (s) - \bar {v} _ {2} (s) | d s \geq \frac {h ^ {2}}{4 L},
+$$ + +as desired.
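The closed-form value function and its Bellman property are straightforward to check numerically on a dyadic grid. Below is a minimal Python sketch; the helper names `value` and `q` are ours, and the grid $G_6$ and the choice $p = 0.75$, $\gamma = 1$ are arbitrary. It verifies Fact 15 and the optimality of $a = \min\{s, 1-s\}$ from Lemma 8.

```python
def value(s, p, gamma, nbits=30):
    """v(s) of Equation (1): read off the binary expansion s = 0.b1 b2 ... and
    accumulate (1-p) * gamma^i * b_i * prod_{j<i} ((1-p) + (2p-1) b_j)."""
    if s >= 1.0:
        return 1.0
    total, prefix = 0.0, 1.0
    for i in range(1, nbits + 1):
        s *= 2
        b = int(s >= 1.0)  # next binary digit of s
        s -= b
        total += (1 - p) * gamma ** i * b * prefix
        prefix *= (1 - p) + (2 * p - 1) * b
    return total

def q(s, a, p, gamma):
    """Q_v(s, a) = p * gamma * v(s - a) + (1 - p) * gamma * v(s + a), Equation (4)."""
    return p * gamma * value(s - a, p, gamma) + (1 - p) * gamma * value(s + a, p, gamma)

p, gamma = 0.75, 1.0
assert abs(value(0.5, p, gamma) - (1 - p) * gamma) < 1e-12  # Fact 15: v(1/2) = (1-p)gamma

# Bellman property on the dyadic grid G_6: over dyadic actions, Q_v(s, a) is
# maximized at a = min(s, 1-s), and the maximum equals v(s) (Lemmas 8 and 11).
for k in range(1, 64):
    s = k / 64
    best = max(q(s, j / 64, p, gamma) for j in range(1, 64) if j / 64 <= min(s, 1 - s))
    assert abs(best - value(s, p, gamma)) < 1e-9
    assert abs(q(s, min(s, 1 - s), p, gamma) - best) < 1e-9
```

Dyadic states and actions are exactly representable in binary floating point, so the checks hold to rounding precision; the truncation depth `nbits` only matters for non-dyadic inputs.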
+ +# A.2 ANALYSIS OF THE BELLMAN EQUATION + +We have proved that $v(s)$ is the optimal value function in Theorem 12, by showing the existence and uniqueness of the solution of the system (ABX). However, the condition (X) is derived from the context of the Gambler's problem. It suffices to find the optimal value function, but we are also interested in solutions derived purely from the MDP setting. Also, algorithmic approaches such as Q-learning (Watkins & Dayan, 1992; Baird, 1995; Mnih et al., 2015) optimize the MDP by solving the Bellman equation, without invoking the context of the problem. Studying such systems helps to understand the behavior of these algorithms. In this section, we inspect the system of the Bellman equation (AB) of the Gambler's problem. We aim to solve the general case (ABY) where $p > 0.5$ and the corner case (ABZ) where $p = 0.5$ . + +When $p > 0.5$ , the value function $v(s)$ is obviously still a solution of the system (ABY) without condition (X). The natural question is whether there exist any other solutions. The answer is two-fold: when $\gamma < 1$ , $f(s) = v(s)$ is unique; when $\gamma = 1$ , the solution is either $v(s)$ or a constant function with value at least 1. This indicates that algorithms like Q-learning have constant functions among their convergence points, apart from $v(s)$ . As $v(s)$ is harder to approximate due to its non-smoothness, a constant function in fact induces a smaller approximation error and may thus be preferred by Q-learning with function approximation. + +It is immediate to generalize this result to general MDPs, as a sufficiently large constant function solves MDPs with episodic rewards. This indicates that Q-learning may have more than one convergence point and may diverge from the optimal value function under $\gamma = 1$ . This leads to the need for a discount factor $\gamma < 1$ , which is artificially introduced and biases the learning objective.
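The constant family described above is easy to verify directly: with $\gamma = 1$, a function that is 0 at $s = 0$, 1 at $s = 1$, and a constant $C \geq 1$ in between satisfies the Bellman equation at every interior state, because an interior bet keeps both outcomes away from the boundary. A small sketch (the grid and the choice $C = 1.5$ are ours):

```python
p, C = 0.75, 1.5  # any p > 0.5 and any constant C >= 1 work the same way

def f(s):
    """A member of the constant solution family: f(0) = 0, f(1) = 1, f = C inside."""
    return 0.0 if s == 0 else 1.0 if s == 1 else C

# At every interior state, the bet a = min(s, 1-s)/2 keeps both outcomes in (0, 1),
# so it attains the value p*C + (1-p)*C = C; no bet can exceed C because f <= C
# everywhere (f(1) = 1 <= C needs exactly C >= 1).
for k in range(1, 64):
    s = k / 64
    m = min(s, 1 - s)
    q_vals = [p * f(s - a) + (1 - p) * f(s + a) for a in (m, m / 2)]
    assert max(q_vals) == C  # Bellman equation f(s) = max_a Q_f(s, a) holds at s
```

Sampling only two actions per state suffices here: one interior action already attains $C$, and $f \leq C$ bounds the value of every other action.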
More generally, the Bellman equation may have a continuum of finite-valued solutions in an infinite state space, even with $\gamma < 1$ . Some studies exist on the necessary and sufficient conditions for a solution of the Bellman equation to be the value function (Kamihigashi & Le Van, 2015; Latham, 2008; Harmon & Baird III, 1996), though the majority of this topic remains open. + +The discussions above are supported by a series of rigorous statements. We begin with the following proposition: when the discount factor is strictly less than 1, the solution of the Bellman equation is the optimal value function. + +Proposition 21. When $\gamma < 1$ , $v(s)$ is the unique solution of the system (ABY). + +Proof. The uniqueness has been shown in Lemma 2 for the system (ABX). When $\gamma < 1$ it corresponds to case (I), where neither the upper bound $f(s) \leq 1$ nor the continuity at $s = 0$ in condition (X) is used. Therefore Lemma 2 holds for (ABY) under $\gamma < 1$ , and the uniqueness follows by Lemma 11, as desired. + +This uniqueness no longer holds under $\gamma = 1$ . + +Theorem 22. Let $\gamma = 1$ , $p > 0.5$ , and $f(\cdot)$ be a real function on $[0,1]$ . $f(s)$ satisfies the Bellman equation (ABY) if and only if either + +- $f(s)$ is $v(s)$ defined in Theorem 12, or
+- $f(0) = 0$ , $f(1) = 1$ , and $f(s) = C$ for all $0 < s < 1$ , for some constant $C \geq 1$ . + +Proof. It is obvious that both $f(s)$ defined above are solutions of the system. It remains to show that they are the only solutions. + +Without the bound condition (X), the function $f(s)$ is not necessarily continuous at $s = 0$ and $s = 1$ and is not necessarily monotonic at $s = 1$ . Therefore the same arguments in the proof of Lemma 2 will not hold. However, the arguments can be extended to (Y) by considering the limit of $f(s)$ when $s$ approaches 0 and 1. + +By Lemma 4 the function is continuous on the open interval $(0,1)$ .
Let + +$$
+C _ {0} = \lim _ {s \to 0 ^ {+}} f (s), \quad C _ {1} = \lim _ {s \to 1 ^ {-}} f (s).
+$$ + +Then by Lemma 3, $0 \leq C_0 \leq f(s) \leq C_1$ for $s \in (0,1)$ . Here we eliminate the possibility of $C_0 = +\infty$ and $C_1 = +\infty$ . This is because if there were a sequence $s_t \to 0$ such that $f(s_t) > t$ , then we would have $f(\frac{1}{2}) \geq p f(s_t) + (1 - p) f(1 - s_t) \geq (1 - p)t$ for any $t$ , so $f(\frac{1}{2})$ could not be finite, a contradiction. A similar argument shows that $C_1$ cannot be $+\infty$ . + +Now specify a sequence $a_{t} \to \frac{1}{2}$ , $a_{t} < \frac{1}{2}$ , such that $C_0 \leq f\left(\frac{1}{2} - a_t\right) \leq C_0 + \frac{1}{t}$ and $C_1 - \frac{1}{t} \leq f\left(\frac{1}{2} + a_t\right) \leq C_1$ . Then we have + +$$
+\begin{array}{l} f \left(\frac {1}{2}\right) \geq p f \left(\frac {1}{2} - a _ {t}\right) + (1 - p) f \left(\frac {1}{2} + a _ {t}\right) \\ \geq p C _ {0} + (1 - p) C _ {1} - \frac {1}{t}. \\ \end{array}
+$$ + +As $t$ is arbitrary we have $f(\frac{1}{2}) \geq pC_0 + (1 - p)C_1$ . By induction on $\ell$ it holds on $s \in \bigcup_{\ell \geq 1} G_\ell$ that + +$$
+f (s) \geq C _ {0} + \left(C _ {1} - C _ {0}\right) v (s).
+$$ + +By Lemma 4, i.e., the continuity of $f(s)$ and $v(s)$ under $\gamma = 1$ , this lower bound extends beyond the dyadic rationals to the entire interval $(0,1)$ . Define $\tilde{f}(s) = C_0 + (C_1 - C_0)v(s)$ for $s \in (0,1)$ , $\tilde{f}(0) = C_0, \tilde{f}(1) = C_1$ . It is immediate to verify that for any $C_1 > C_0 \geq 0$ , $\tilde{f}(s)$ solves the system (AY) (without the boundary conditions (B)). + +If $C_1 - C_0 \neq 0$ , by Lemma 2 Case (II) $\tilde{f}(s)$ on $(0,1)$ is the unique solution of the system (AY), given monotonicity, continuity, and the lower bound $\tilde{f}(s)$ . With the boundary conditions (B), we have $0 = \tilde{f}(0) = C_0$ and $1 = \tilde{f}(1) = C_1$ , therefore $f(s) = v(s)$ . This case $C_1 - C_0 \neq 0$ induces the first possible solution.
+ +It remains to determine $f(s)$ when $C_1 - C_0 = 0$ , that is, when $0 \leq C_0 = f(s) = C_1$ for $s \in (0,1)$ (by Lemma 3). It suffices to prove that $C_0 < 1$ is not possible. In fact, if $C_0 < 1$ then $f\left(\frac{3}{4}\right) < p f\left(\frac{1}{2}\right) + (1 - p)f(1)$ , which contradicts (A). Thus $f(s) = C_0$ for some constant $C_0 \geq 1$ gives the only solutions when $C_1 - C_0 = 0$ , as desired. + +The fact that a large constant function can also be a solution of the Bellman equation extends to a wide range of MDPs. The proposition below gives one sufficient condition, although the result likely holds in practice even without it. + +Proposition 23. For an arbitrary MDP with episodic rewards where every state has an action to transit to a non-terminal state almost surely, $f(s) = C$ for all non-terminal states $s$ is a solution of the Bellman equation system for any $C$ greater than or equal to the maximum one-step reward. + +Proof. The statement is immediate by verifying the Bellman equation. + +![](images/eec210d99e07a539c286b6dc85ba13c4ca468b869f0d9e2fba14694683d903d3.jpg) + +The rest of the section discusses the Gambler's problem under $p = 0.5$ , where the gambler does not lose capital by betting in expectation. In this case, the optimal value function is still $v(s)$ by similar arguments to Theorem 12. It is worth noting that when $\gamma = 1$ , Theorem 12 indicates $v(s) = s$ . This agrees with the intuition that the gambler does not lose their capital by placing bets in expectation, so the optimal value function should be linear in $s$ . Proposition 21 also holds, i.e., $v(s)$ is the unique solution of the Bellman equation given $\gamma < 1$ . The remaining problem is to find the solution of the Bellman equation under $\gamma = 1$ and $p = 0.5$ . This corresponds to the system (ABZ).
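Theorem 22's constant solution can be illustrated numerically. The sketch below is hypothetical and not from the paper (the values of $p$ and $C$ are assumed); it checks that a constant $C \geq 1$ on the interior, with the boundary conditions (B), attains the Bellman backup at sampled interior states, using the convention $f(s) \geq p f(s-a) + (1-p) f(s+a)$ from condition (A):

```python
import random

p, C = 0.6, 2.0  # assumed: weighting p > 0.5 and a constant C >= 1

def f(s):
    # candidate solution from Theorem 22: boundary conditions (B) plus a
    # constant C on the open interior (0, 1)
    if s <= 0.0:
        return 0.0
    if s >= 1.0:
        return 1.0
    return C

random.seed(0)
for _ in range(1000):
    s = random.uniform(0.01, 0.99)
    a_max = min(s, 1.0 - s)
    # sample stakes, including the extreme stake a_max and interior stakes;
    # interior stakes keep both outcomes inside (0, 1) and yield exactly C
    stakes = [a_max, a_max / 2, a_max / 3]
    backup = max(p * f(s - a) + (1 - p) * f(s + a) for a in stakes)
    assert abs(backup - C) < 1e-12  # the supremum equals f(s) = C
```

The interior stakes always attain the value $C$ , while stakes that reach a boundary give at most $C$ precisely because $C \geq 1$ , matching the case analysis in the proof of Theorem 22.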
+ +When $p = 0.5$ , condition (A) implies midpoint concavity such that for all $a \in \mathcal{A}(s)$ , + +$$
+f (s) \geq \frac {1}{2} f (s - a) + \frac {1}{2} f (s + a), \tag {9}
+$$ + +where the equality must hold for some $a$ . As Lemma 3 no longer holds, a solution $f(s)$ may have negative values for some $s$ . However, if it does not have a negative value, it must be concave, and thus linear by condition (A) (to be proved later in Theorem 27). It suffices to satisfy $f(s) \geq s$ for any $s$ . Therefore the non-negative solution is $f(0) = 0$ , $f(1) = 1$ , and $f(s) = C' s + B'$ on $0 < s < 1$ for some constants $C', B'$ with $C' + B' \geq 1$ . + +If $f(s)$ does have a negative value at some $s$ , then the midpoint concavity (9) does not imply concavity. In this case, by recursively applying (9) we can show that the set $\{(s, f(s)) \mid s \in (0,1)\}$ is dense on $(0,1) \times \mathbb{R}$ . The function thus becomes pathological, if it exists. Despite this, the following lemma shows that $f(s)$ needs to be non-negative on the rationals $\mathbb{Q}$ . + +Lemma 24. Let $f(s)$ satisfy (ABZ). If there exist $0 \leq s^{-} < s^{+} \leq 1$ and a constant $C$ such that $f(s^{-}), f(s^{+}) \geq C$ , then $f(s) \geq C$ for all $s \in \{s^{-} + w(s^{+} - s^{-}) \mid w \in \mathbb{Q}, 0 \leq w \leq 1\}$ . + +Proof. The statement is immediate for $w \in \{0,1\}$ . For $0 < w < 1$ we prove the lemma by contradiction. Let $f(s^{-} + w(s^{+} - s^{-})) < C$ for some $w \in \mathbb{Q}$ with $0 < w < 1$ . We define $s_0 = s^{-} + w(s^{+} - s^{-})$ and $s_{t + 1} = 2s_t - s^-$ for $s_t < \frac{1}{2}(s^- + s^+)$ and $s_{t + 1} = 2s_t - s^+$ for $s_t > \frac{1}{2}(s^- + s^+)$ , respectively. $s_{t + 1}$ is undefined if $s_t = \frac{1}{2}(s^- + s^+)$ . Since $w \in \mathbb{Q}$ , let $w = m / n$ where $m$ and $n$ are integers and the greatest common divisor $\gcd(m,n) = 1$ . Then $(s_t - s^-) / (s^+ - s^-) = m_t / n$ , where $m_t = 2^t m \mod n$ .
As $\mathbb{Z}_n$ is finite, $\{s_t\}_{t \geq 0}$ can only take finitely many values. Thus either the sequence $\{s_t\}$ is periodic, or it terminates at some $s_t = \frac{1}{2}(s^- + s^+)$ . + +Then we show by induction that $f(s_{t})$ is strictly decreasing. Assume that $f(s_{0}) > \dots > f(s_{t})$ . When $s_{t} < \frac{1}{2}(s^{-} + s^{+})$ , by (9) we have $f(s_{t}) \geq \frac{1}{2}f(s^{-}) + \frac{1}{2}f(s_{t+1})$ , which indicates that $f(s_{t+1}) - f(s_{t}) \leq f(s_{t}) - f(s^{-}) < f(s_{0}) - f(s^{-}) < 0$ . When $s_{t} > \frac{1}{2}(s^{-} + s^{+})$ , by (9) we have $f(s_{t}) \geq \frac{1}{2}f(s_{t+1}) + \frac{1}{2}f(s^{+})$ , which indicates $f(s_{t+1}) - f(s_{t}) \leq f(s_{t}) - f(s^{+}) < f(s_{0}) - f(s^{+}) < 0$ . The base case $f(s_{1}) < f(s_{0})$ holds as at least one of $f(s_{0}) \geq \frac{1}{2}f(s^{-}) + \frac{1}{2}f(s_{1})$ and $f(s_{0}) \geq \frac{1}{2}f(s_{1}) + \frac{1}{2}f(s^{+})$ must be true. Thus we conclude that $f(s_{t})$ is strictly decreasing. + +If the sequence terminates at some $s_t = \frac{1}{2}(s^- + s^+)$ , then $f(s_t) < f(s_1) < C$ , which contradicts $f(s_t) = f\left(\frac{1}{2}(s^- + s^+)\right) \geq \frac{1}{2}f(s^-) + \frac{1}{2}f(s^+) \geq C$ . Otherwise the sequence is periodic. Denoting the period by $T$ , we have $f(s_{t+T}) < f(s_t)$ ; but $s_{t+T} = s_t$ , so $f(s_t) < f(s_t)$ , a contradiction. + +Lemma 24 agrees with the statement that midpoint concavity implies rational concavity. The statements below give some insight into the irrational points. + +Lemma 25. Let $f(s)$ satisfy (ABZ). If there exists an $\bar{s} \in \mathbb{R} \setminus \mathbb{Q}$ such that $f(\bar{s}) \geq 0$ , then $f(s) \geq 0$ for all $s \in \{w\bar{s} + u \mid w, u \in \mathbb{Q}, 0 \leq w, u \leq 1, w + u \leq 1\}$ . + +Proof. Specifying $s^{-} = \bar{s}$ and $s^{+} = 1$ in Lemma 24, we have $f(\bar{s} + \frac{u}{w + u}(1 - \bar{s})) \geq 0$ whenever $0 \leq \frac{u}{w + u} \leq 1$ and $\frac{u}{w + u} \in \mathbb{Q}$ .
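In normalized coordinates $r_t = (s_t - s^-)/(s^+ - s^-)$ , the sequence constructed in the proof of Lemma 24 is the doubling map $r \mapsto 2r \ (\mathrm{mod}\ 1)$ ; since $r_0 = w$ is rational, the orbit takes finitely many values, so it either hits $\frac{1}{2}$ or cycles. A small illustration (the example values of $w$ are assumed):

```python
from fractions import Fraction

def orbit(w):
    """Iterate the normalized Lemma 24 recursion: r -> 2r when r < 1/2
    (i.e., s_{t+1} = 2 s_t - s^-) and r -> 2r - 1 when r > 1/2
    (i.e., s_{t+1} = 2 s_t - s^+); stop at the midpoint or on repetition."""
    r, seen = Fraction(w), []
    while r != Fraction(1, 2) and r not in seen:
        seen.append(r)
        r = 2 * r if r < Fraction(1, 2) else 2 * r - 1
    return seen, r

# w = 1/4 terminates at the midpoint after one doubling ...
assert orbit(Fraction(1, 4))[1] == Fraction(1, 2)

# ... while w = 3/7 cycles with period 3: 3/7 -> 6/7 -> 5/7 -> 3/7,
# the periodic case that yields the contradiction in the proof
seen, r = orbit(Fraction(3, 7))
assert r == Fraction(3, 7) and len(seen) == 3
```

Both terminating behaviors lead to a contradiction in the proof, which is why no rational point between $s^-$ and $s^+$ can fall below $C$ .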
This is satisfied when $0 \leq w, u \leq 1$ , $w + u > 0$ , $w, u \in \mathbb{Q}$ . Specifying $s^{-} = 0$ and $s^{+} = \bar{s} + \frac{u}{w + u}(1 - \bar{s})$ in Lemma 24, we have $f(w\bar{s} + u) = f((w + u)(\bar{s} + \frac{u}{w + u}(1 - \bar{s}))) \geq 0$ whenever $w + u \leq 1$ . Thus $f(w\bar{s} + u) \geq 0$ for $0 < w, u < 1$ , $w, u \in \mathbb{Q}$ , and $0 < w + u \leq 1$ . Since the case $w = u = 0$ is immediate, the statement follows with $s \in \{w\bar{s} + u \mid w, u \in \mathbb{Q}, 0 \leq w, u \leq 1, w + u \leq 1\}$ . + +Corollary 26. Let $f(s)$ satisfy (ABZ). If there exists an $\bar{s} \in \mathbb{R} \setminus \mathbb{Q}$ such that $f(\bar{s}) < 0$ , then $f(w\bar{s})$ is monotonically decreasing with respect to $w$ for $w \in \mathbb{Q}, 1 \leq w < 1 / \bar{s}$ . + +Lemma 25 and Corollary 26 indicate that when there exists a negative or positive value, infinitely many other points (not necessarily in its neighborhood) must have negative or positive values as well. It is intuitive that the solution with a negative value, if it exists, must be complicated and pathological. In fact, Sierpinski has shown that a midpoint concave but non-concave function is not Lebesgue measurable and is non-constructive (Sierpinski, 1920a;b); the same holds for $f(s)$ if it solves (ABZ) while having a negative value. + +Such an $f(s)$ exists if and only if we assume the Axiom of Choice (Jech, 2008; Sierpinski, 1920a;b). We consider the vector space given by the field extension $\mathbb{R} / \mathbb{Q}$ . With the axiom, specify a basis $\mathbb{B} = \{1\} \cup \{g_i\}_{i \in \mathcal{I}}$ , known as a Hamel basis. With this basis $\mathbb{B}$ every real number can be written uniquely as a linear combination of the elements in $\mathbb{B}$ with rational coefficients. Therefore, we denote a real number $s$ uniquely as a vector $(w, w_i)_{i \in \mathcal{I}}$ , $w, w_i \in \mathbb{Q}$ , such that $s = w + \sum_{i \in \mathcal{I}} w_i g_i$ .
We identify each $f(s) = f'(w, \{w_i\}_{i \in \mathcal{I}})$ uniquely with an $f'$ defined on $\mathbb{Q}^{\left|\mathbb{B}\right|}$ and use the two spaces interchangeably. + +The solution $f(s)$ to (9) is any concave function $f'$ on the vector space $\mathbb{R} / \mathbb{Q}$ . Based on this solution we extend the system (9) to (ABZ). To this end, $f(s)$ needs to attain the equality in (9) at every $s$ , which holds if and only if for every $s = w + \sum_{i \in \mathcal{I}} w_i g_i$ there exist $s^1 = w^1 + \sum_{i \in \mathcal{I}} w_i^1 g_i$ and $s^2 = w^2 + \sum_{i \in \mathcal{I}} w_i^2 g_i$ such that $f'(\lambda w^1 + (1 - \lambda)w^2, \{\lambda w_i^1 + (1 - \lambda)w_i^2\}_{i \in \mathcal{I}})$ is $\lambda f(s^1) + (1 - \lambda)f(s^2)$ for any $0 < \lambda < 1$ (intuitively, local linearity of $f'$ in at least one direction everywhere). By specifying a $\mathbb{B}$ , the condition can be met if $f(s) = f'(w, \{w_i\}_{i \in \mathcal{I}}) = \alpha(w_j) + \beta(\overline{w}_j)$ for some $w_j \in \{w\} \cup \{w_i \mid i \in \mathcal{I}\}$ , where $\alpha$ is a linear function, $\beta$ is a concave function, and $\overline{w}_j$ denotes $\{w\} \cup \{w_i \mid i \in \mathcal{I}\} \setminus \{w_j\}$ . When $w_j$ is $w$ , $f(s)$ is in fact $w + \beta(\{w_i\}_{i \in \mathcal{I}})$ for some concave function $\beta$ . This is equivalent to specifying a function $\omega(s): \mathbb{R} \to \mathbb{Q}$ such that $\omega$ maps reals to rationals additively, $\omega(s_1 + s_2) = \omega(s_1) + \omega(s_2)$ , and $\omega$ is not constantly zero, and then writing $f(s)$ as $\omega(s) + \beta_1(s - \omega(s))$ for some concave real function $\beta_1$ . Otherwise, if $w_j$ is not $w$ , $f(s)$ is in the aforementioned form with the boundary conditions (B).
+ +While we have shown that under the Axiom of Choice $f(s) = \alpha(w_j) + \beta(\overline{w}_j)$ is a set of solutions that can be described by the infinite-dimensional vector space $\mathbb{R} / \mathbb{Q}$ and its basis, we do not know whether they are the only solutions. Nevertheless, combining our analysis and the literature we conclude the following statement about the system (ABZ). + +Theorem 27. Let $\gamma = 1$ and $p = 0.5$ . A real function $f(s)$ satisfies (ABZ) if and only if either + +- $f(s) = C' s + B'$ on $s \in (0,1)$ , for some constants $C', B'$ with $C' + B' \geq 1$ , or +- $f(s)$ is some non-constructive, non-Lebesgue-measurable function under the Axiom of Choice. + +Proof. The first bullet corresponds to the function $f(s)$ that is non-negative on $[0,1]$ . By the midpoint concavity (9) and the fact that $f(s)$ is non-negative, $f(s)$ is concave on $[0,1]$ (Sierpinski, 1920a;b). We specify $s_0 = \frac{1}{2}$ . By (A) we have + +$$
+f (s _ {0}) = \frac {1}{2} f (s _ {0} - a _ {0}) + \frac {1}{2} f (s _ {0} + a _ {0})
+$$ + +for some $a_0$ . Since $f(s)$ is concave, $f(s)$ must be linear on the interval $[s_0 - a_0, s_0 + a_0]$ . Consider the nonempty set + +$$
+\mathcal {A} _ {1} = \left\{a _ {0} \in \mathcal {A} (s _ {0}) \Bigg | f (s _ {0}) = \frac {1}{2} f (s _ {0} - a _ {0}) + \frac {1}{2} f (s _ {0} + a _ {0}) \right\}.
+$$ + +We show by contradiction that $\sup \mathcal{A}_1$ is $\frac{1}{2}$ . If $a_1 = \sup \mathcal{A}_1 < \frac{1}{2}$ , then by continuity $a_1 \in \mathcal{A}_1$ , where the continuity is implied by the concavity. This indicates that $f(s) = f(s_0) + \frac{f(s_0 + a_1) - f(s_0 - a_1)}{2a_1}(s - s_0)$ when $s_0 - a_1 \leq s \leq s_0 + a_1$ . But as $a_1$ is the maximum element of $\mathcal{A}_1$ , on at least one of the intervals $[0, s_0 - a_1)$ and $(s_0 + a_1, 1]$ we have by the concavity $f(s) < f(s_0) + \frac{f(s_0 + a_1) - f(s_0 - a_1)}{2a_1}(s - s_0)$ .
Therefore, for at least one of $s \in \{s_0 - a_1, s_0 + a_1\}$ we have $f(s) > \frac{1}{2} f(s - a) + \frac{1}{2} f(s + a)$ for all $a$ , which contradicts condition (A). Hence $\sup \mathcal{A}_1$ is $\frac{1}{2}$ , which implies that $f(s)$ is linear on $(0, 1)$ . Writing $f(s)$ as $C's + B'$ , by the boundary condition (B) we have $C' + B' \geq 1$ . It is immediate to verify that $f(s) = C's + B'$ with $C' + B' \geq 1$ is sufficient to satisfy the system (ABZ), and is thus the non-negative solution of the system. + +If $f(s)$ is negative at some $s$ , then by (9) and (B) $f(s)$ must not be concave. By Sierpinski (1920a;b) such an $f(s)$ exists only if we assume Axiom of Choice, and is non-constructive and not Lebesgue measurable if it exists (some discussion on this function is given above the statement of this theorem). \ No newline at end of file diff --git a/thegamblersproblemandbeyond/images.zip b/thegamblersproblemandbeyond/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e1aeb7672ae91cb7538910b3995eee0d5ef9f966 --- /dev/null +++ b/thegamblersproblemandbeyond/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:30e28cf79a73dc580d5fac76e8a08d5a55cf522c14a6ab14be6a1123314d1895 +size 905755 diff --git a/thegamblersproblemandbeyond/layout.json b/thegamblersproblemandbeyond/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..86ad05c97886f5572142bab5231224c5ccfa0cfc --- /dev/null +++ b/thegamblersproblemandbeyond/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d8f8b9870b4fc0cc0689369e70cb6b67bae88faaf3ac56ff011410ae1cb353f +size 1479957 diff --git a/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_content_list.json b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..12053b9bfee6d56e33e7ee98e5d402e1d2f78929 --- /dev/null +++ b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ad376eb0833d921d1bdca099b145019398095cab6ee4fe103b8a3183a98a406 +size 169013 diff --git a/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_model.json b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_model.json new file mode 100644 index 0000000000000000000000000000000000000000..476e99d4e3b96294fdfcc69f6c5b25a31c4a6e33 --- /dev/null +++ b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:86a5773e4cbc1a2dae076245fb90339cab4961c4666d0ddaed6277d45a6ea187 +size 187757 diff --git a/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_origin.pdf b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c835e9fc066227239998016dfe9efd2d4ef10b8b --- /dev/null +++ b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/cf0813ac-09b9-4ad4-9e58-ca82cbd34448_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baaf0dc835892e0b58c0b826d4d8fee20c610028212bd91556d364a1e1920670 +size 1099701 diff --git a/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/full.md b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dd5db599da22f555513cb311fc9a7a95bc9fd073 --- /dev/null +++ 
b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/full.md @@ -0,0 +1,976 @@ +# THE IMPLICIT BIAS OF DEPTH: HOW INCREMENTAL LEARNING DRIVES GENERALIZATION + +Daniel Gissin, Shai Shalev-Shwartz, Amit Daniely + +School of Computer Science + +The Hebrew University + +Jerusalem, Israel + +{daniel.gissin, shais, amit.daniely}@mail.huji.ac.il + +# ABSTRACT + +A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity. We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models. Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves. However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings. We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning. + +# 1 INTRODUCTION + +Neural networks have led to a breakthrough in modern machine learning, allowing us to efficiently learn highly expressive models that still generalize to unseen data. The theoretical reasons for this success are still unclear, as the generalization capabilities of neural networks defy the classic statistical learning theory bounds. Since these bounds, which depend solely on the capacity of the learned model, are unable to account for the success of neural networks, we must examine additional properties of the learning process.
One such property is the optimization algorithm - while neural networks can express a multitude of possible ERM solutions for a given training set, gradient-based methods with the right initialization may be implicitly biased towards certain solutions which generalize. + +A possible way such an implicit bias may present itself is if gradient-based methods were to search the hypothesis space for possible solutions of gradually increasing complexity. This would suggest that while the hypothesis space itself is extremely complex, our search strategy favors the simplest solutions and thus generalizes. One of the leading results along these lines is due to Saxe et al. (2013), deriving an analytical solution for the gradient flow dynamics of deep linear networks and showing that for such models, the singular values converge at different rates, with larger values converging first. At the limit of infinitesimal initialization of the deep linear network, Gidel et al. (2019) show these dynamics exhibit a behavior of "incremental learning" - the singular values of the model are learned separately, one at a time. Our work generalizes these results to small but finite initialization scales. + +Incremental learning dynamics have also been explored in gradient descent applied to matrix completion and sensing with a factorized parameterization (Gunasekar et al. (2017), Arora et al. (2018), Woodworth et al. (2019)). When initialized with small Gaussian weights and trained with a small learning rate, such a model is able to successfully recover the low-rank matrix which labeled the data, even if the problem is highly over-determined and no additional regularization is applied. In their proof of low-rank recovery for such models, Li et al. (2017) show that the model remains low-rank throughout the optimization process, leading to the successful generalization.
![](images/4f7a7343d5100a0240bd48ddfa3e40699ac519fdb81257f3be9029fbfd66a447.jpg) +(a) Matrix Sensing + +![](images/39cffeb0363504fff0e906d56dfe312f396ae082c87a6fb3fe06248211ce5746.jpg) +(b) Quadratic Nets + +![](images/802da687830a818ec647a38b00d8e06bcf71be30dd4a4c5f714b30c0143fd4ff.jpg) +(c) Diagonal Nets +Figure 1: Incremental learning dynamics in deep models. Each panel shows the evolution of the five largest values of $\sigma$ , the parameters of the induced model. All models were trained using gradient descent with a small initialization and learning rate, on a small training set such that there are multiple possible solutions. In all cases, the deep parameterization of the models leads to "incremental learning", where the values are learned at different rates (larger values are learned first), leading to sparse solutions. (a) Depth 4 matrix sensing, $\sigma$ denotes singular values (see section 4.1). (b) Quadratic networks, $\sigma$ denotes singular values (see section 4.2). (c) Depth 3 diagonal networks, $\sigma$ denotes feature weights (see section 4.3). (d) Depth 3 circular-convolutional networks, $\sigma$ denotes amplitudes in the frequency domain of the feature weights (see appendix G). + +![](images/4bd665a9fd59e3b6ca2e62f90cea4346d362592cb883d342c331fe7c00a62b6f.jpg) +(d) Convolutional Nets + +Additionally, Arora et al. (2019) explore the dynamics of such models, showing the singular values are learned at different rates and that deeper models exhibit stronger incremental learning dynamics. Our work deals with a more simplified setting, allowing us to determine explicitly under which conditions depth leads to this dynamical phenomenon. + +Finally, the learning dynamics of nonlinear models have been studied as well. Combes et al. (2018) and Williams et al. (2019) study the gradient flow dynamics of shallow ReLU networks under restrictive distributional assumptions; Ronen et al.
(2019) show that shallow networks learn functions of gradually increasing frequencies and Nakkiran et al. (2019) show how deep ReLU networks correlate with linear classifiers in the early stages of training. + +These findings, along with others, suggest that the generalization ability of deep networks is at least in part due to the incremental learning dynamics of gradient descent. Following this line of work, we begin by explicitly defining the notion of incremental learning for a toy model which exhibits this sort of behavior. Analyzing the dynamics of the model for gradient flow and gradient descent, we characterize the effect of the model's depth and initialization scale on incremental learning, showing how deeper models allow for incremental learning in larger (realistic) initialization scales. Specifically, we show that a depth-2 model requires exponentially small initialization for incremental learning to occur, while deeper models only require the initialization to be polynomially small. + +Once incremental learning has been defined and characterized for the toy model, we generalize our results theoretically and empirically for larger linear and quadratic models. Examples of incremental learning in these models can be seen in figure 1, which we discuss further in section 4. + +# 2 DYNAMICAL ANALYSIS OF A TOY MODEL + +We begin by analyzing incremental learning for a simple model. This will allow us to gain a clear understanding of the phenomenon and the conditions for it, which we will later be able to apply to a variety of other models in which incremental learning is present. + +# 2.1 PRELIMINARIES + +Our simple linear model will be similar to the toy model analyzed by Woodworth et al. (2019). 
Our input space will be $\mathcal{X} = \mathbb{R}^d$ and the hypothesis space will be linear models with non-negative weights, such that: + +$$ +f _ {\sigma} (x) = \langle \sigma , x \rangle \quad \sigma \in \mathbb {R} _ {\geq 0} ^ {d} \tag {1} +$$ + +We will introduce depth into our model, by parameterizing $\sigma$ using $w\in \mathbb{R}_{\geq 0}^{d}$ in the following way: + +$$ +\forall i: \sigma_ {i} = w _ {i} ^ {N} +$$ + +Where $N$ represents the depth of the model. Since we restrict the model to having non-negative weights, this parameterization doesn't change the expressiveness, but it does radically change its optimization dynamics. + +Assuming the data is labeled by some $\sigma^{*} \in \mathbb{R}_{\geq 0}^{d}$ , we will study the dynamics of this model for general $N$ under a depth-normalized1 squared loss over Gaussian inputs, which will allow us to derive our analytical solution: + +$$ +\ell_ {N} (w) = \frac {1}{2 N ^ {2}} \mathbb {E} _ {x} [ (\langle \sigma^ {*}, x \rangle - \langle w ^ {N}, x \rangle) ^ {2} ] = \frac {1}{2 N ^ {2}} | | w ^ {N} - \sigma^ {*} | | ^ {2} \tag {2} +$$ + +We will assume that our model is initialized uniformly with a tunable scaling factor, such that: + +$$ +\forall i: w _ {i} (0) = \sqrt [ N ]{\sigma_ {0}} \tag {3} +$$ + +# 2.2 GRADIENT FLOW ANALYTICAL SOLUTIONS + +Analyzing our toy model using gradient flow allows us to obtain an analytical solution for the dynamics of $\sigma(t)$ along with the dynamics of the loss function for a general $N$ . For brevity, the following theorem refers only to $N = 1,2$ and $N \to \infty$ , however the solutions for $3 \leq N < \infty$ are similar in structure to $N \to \infty$ , but more complicated. We also assume $\sigma_{i}^{*} > 0$ for brevity, however we can derive the solutions for $\sigma_{i}^{*} = 0$ as well. Note that this result is a special case adaptation of the one presented in Saxe et al. (2013) for deep linear networks: + +Theorem 1. 
Minimizing the toy linear model described in (1) with gradient flow over the depth-normalized squared loss (2), with Gaussian inputs and weights initialized as in (3) and assuming $\sigma_{i}^{*} > 0$ leads to the following analytical solutions for different values of $N$ : + +$$
+N = 1: \sigma_ {i} (t) = \sigma_ {i} ^ {*} + (\sigma_ {0} - \sigma_ {i} ^ {*}) e ^ {- t}
+$$ + +$$
+N = 2: \sigma_ {i} (t) = \frac {\sigma_ {0} \sigma_ {i} ^ {*} e ^ {\sigma_ {i} ^ {*} t}}{\sigma_ {0} \left(e ^ {\sigma_ {i} ^ {*} t} - 1\right) + \sigma_ {i} ^ {*}}
+$$ + +$$
+N \to \infty : t = \frac {1}{\left(\sigma_ {i} ^ {*}\right) ^ {2}} \log \left(\frac {\sigma_ {i} (t) \left(\sigma_ {0} - \sigma_ {i} ^ {*}\right)}{\sigma_ {0} \left(\sigma_ {i} (t) - \sigma_ {i} ^ {*}\right)}\right) - \frac {1}{\sigma_ {i} ^ {*}} \left(\frac {1}{\sigma_ {i} (t)} - \frac {1}{\sigma_ {0}}\right)
+$$ + +Proof. The gradient flow equations for our model are the following: + +$$
+\dot {w} _ {i} = - \nabla_ {w _ {i}} \ell = \frac {1}{N} w _ {i} ^ {N - 1} (\sigma_ {i} ^ {*} - w _ {i} ^ {N})
+$$ + +Given the dynamics of the $w$ parameters, we may use the chain rule to derive the dynamics of the induced model, $\sigma$ : + +$$
+\dot {\sigma} _ {i} = \frac {d \sigma_ {i}}{d w _ {i}} \dot {w} _ {i} = w _ {i} ^ {2 N - 2} \left(\sigma_ {i} ^ {*} - w _ {i} ^ {N}\right) = \sigma_ {i} ^ {2 - \frac {2}{N}} \left(\sigma_ {i} ^ {*} - \sigma_ {i}\right) \tag {4}
+$$ + +This differential equation is solvable for all $N$ , leading to the solutions in the theorem. Taking $N \to \infty$ in (4) leads to $\dot{\sigma}_i = \sigma_i^2 (\sigma_i^* - \sigma_i)$ , which is also solvable.
+ +□ + +![](images/03e7d76bd88e805ecf7e54d345d78e062bc048fd25db4853c1eb0c4b8c8a54d0.jpg) + +![](images/c23d985e58af2a40cbdcab9a8b62a7fc571463fbe95d600367935b15315c532a.jpg) + +![](images/3481cd0050cd726df33f21b64c8fdc5ed42f9b4290f7cee92550618ce27116b8.jpg) + +![](images/51c76155ffc498385902c2a2f1b0c9c6a6bfa60bc5ffc75c83bb7310dd449ec1.jpg) + +![](images/5bf93731545366ad19454eb8c3dc22a47712db7630375a206349fcdcb9e6191e.jpg) +Figure 2: Incremental learning dynamics in the toy model. Each panel shows the evolution of $\frac{\sigma_i(t)}{\sigma_i^*}$ for $\sigma_i^* \in \{12,6,4,3\}$ according to the analytical solutions in theorem 1, under different depths and initializations. The first column has all values converging at the same rate. Notice how the deep parameterization with small initialization leads to distinct phases of learning, where values are learned incrementally (bottom-right). The shallow model's much weaker incremental learning, even at small initialization scales (second column), is explained in theorem 2. + +![](images/43bedbdca36bf37aea0250ebe993d074285bc00dd2df7c4243a41c6df3e25e15.jpg) + +![](images/a360f41b2a795bbc78256959ffc3c7e0ca9fd4ac201ec0cfa4a33f606c54ca07.jpg) + +![](images/1753937b7198e2a506a42edd95dcb5f30c41ce6c3119f4becef04d271cbcd99d.jpg) + +Analyzing these solutions, we see how even in such a simple model depth causes different factors of the model to be learned at different rates. Specifically, values corresponding to larger optimal values converge faster, suggesting a form of incremental learning. This is most clear for $N = 2$ where the solution isn't implicit, but is also the case for $N \geq 3$ , as we will see in the next subsection. + +These dynamics are depicted in figure 2, where we see the dynamics of the different values of $\sigma(t)$ as learning progresses. 
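The $N = 2$ closed form in Theorem 1 is a logistic curve; as a sanity check (a sketch with assumed values of $\sigma_0$ and $\sigma^*$ , not from the paper), it can be compared against a direct forward-Euler integration of the induced dynamics (4):

```python
import math

def sigma_closed(t, s0, s_star):
    # closed-form N = 2 solution from Theorem 1 (a logistic curve)
    e = math.exp(s_star * t)
    return s0 * s_star * e / (s0 * (e - 1.0) + s_star)

def sigma_euler(t_end, s0, s_star, dt=1e-4):
    # forward-Euler integration of the induced dynamics (4) for N = 2:
    # d(sigma)/dt = sigma * (sigma* - sigma)
    s, t = s0, 0.0
    while t < t_end:
        s += dt * s * (s_star - s)
        t += dt
    return s

s0, s_star = 1e-3, 4.0  # assumed initialization scale and target value
for t_end in (0.5, 1.0, 2.0):
    assert abs(sigma_closed(t_end, s0, s_star) - sigma_euler(t_end, s0, s_star)) < 1e-2
```

Shrinking $\sigma_0$ pushes the logistic transition later in time, which is the long plateau before each value's rapid rise.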
When $N = 1$ , all values are learned at the same rate regardless of the initialization, while the deeper models are clearly biased towards learning the larger singular values first, especially at small initialization scales. + +Our model has only one optimal solution due to the population loss, but it is clear how this sort of dynamic can induce sparse solutions - if the model is able to fit the data after a small number of learning phases, then the obtained result will be sparse. Alternatively, if $N = 1$ , we know that the dynamics will lead to the minimal $\ell_2$ norm solution, which is dense. We explore the sparsity-inducing bias of our toy model by comparing it empirically2 to a greedy sparse approximation algorithm in appendix D, and give our theoretical results in the next section. + +# 3 INCREMENTAL LEARNING + +Equipped with analytical solutions for the dynamics of our model for every depth, we turn to study how the depth and initialization affect incremental learning. While Gidel et al. (2019) focuses on incremental learning in depth-2 models at the limit of $\sigma_0\rightarrow 0$ , we will study the phenomenon for a general depth and for $\sigma_0 > 0$ . + +First, we will define the notion of incremental learning. Since all values of $\sigma$ are learned in parallel, we can't expect one value to converge before the other moves at all (which happens for infinitesimal initialization as shown by Gidel et al. (2019)). We will need a more relaxed definition of incremental learning at finite initialization scales. + +Definition 1.
Given two values $\sigma_{i},\sigma_{j}$ such that $\sigma_i^* >\sigma_j^* >0$ and both are initialized as $\sigma_{i}(0) = \sigma_{j}(0) = \sigma_{0} < \sigma_{j}^{*}$ , and given two scalars $s\in (0,\frac{1}{4})$ and $f\in (\frac{3}{4},1)$ , we call the learning of the values $(s,f)$ -incremental if there exists a $t$ for which: + +$$ +\sigma_ {j} (t) \leq s \sigma_ {j} ^ {*} < f \sigma_ {i} ^ {*} \leq \sigma_ {i} (t) +$$ + +In words, two values have distinct learning phases if the first almost converges ( $f \approx 1$ ) before the second changes by much ( $s \ll 1$ ). Note that for any $N$ , $\sigma(t)$ is monotonically increasing and so once $\sigma_{j}(t) = s\sigma_{j}^{*}$ , it will not decrease to allow further incremental learning. Given this definition of incremental learning, we turn to study the conditions that facilitate incremental learning in our toy model. + +Our main result is a dynamical depth separation result, showing that incremental learning is dependent on $\frac{\sigma_i^*}{\sigma_j^*}$ in different ways for different values of $N$ . The largest difference in dependence happens between $N = 2$ and $N = 3$ , where the dependence changes from exponential to polynomial: + +Theorem 2. 
Given two values $\sigma_{i},\sigma_{j}$ of a toy linear model as in (1), where $\frac{\sigma_i^*}{\sigma_j^*} = r > 1$ and the model is initialized as in (3), and given two scalars $s\in (0,\frac{1}{4})$ and $f\in (\frac{3}{4},1)$ , then the largest initialization value for which the learning phases of the values are $(s,f)$ -incremental, denoted $\sigma_0^{th}$ , is bounded in the following way: + +$$ +s \sigma_ {j} ^ {*} \left(\frac {s}{r f}\right) ^ {\frac {1}{(1 - f) (r - 1)}} \leq \sigma_ {0} ^ {t h} \leq s \sigma_ {j} ^ {*} \left(\frac {s}{r f}\right) ^ {\frac {1 - s}{r - 1}} \quad N = 2 +$$ + +$$ +s \sigma_ {j} ^ {*} \left(\frac {(1 - f) (r - 1)}{1 + (1 - f) (r - 1)}\right) ^ {\frac {N}{N - 2}} \leq \sigma_ {0} ^ {t h} \leq s \sigma_ {j} ^ {*} \left(\frac {r - 1}{r - s}\right) ^ {\frac {N}{N - 2}} \quad N \geq 3 +$$ + +Proof sketch (the full proof is given in appendix A). Rewriting the separable differential equation in (4) to calculate the time until $\sigma(t) = \alpha \sigma^{*}$ , we get the following: + +$$ +t _ {\alpha} (\sigma) = \int_ {\sigma_ {0}} ^ {\alpha \sigma^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (\sigma^ {*} - \sigma)} +$$ + +The condition for incremental learning is then the requirement that $t_f(\sigma_i) \leq t_s(\sigma_j)$ , resulting in: + +$$ +\int_ {\sigma_ {0}} ^ {f \sigma_ {i} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (\sigma_ {i} ^ {*} - \sigma)} \leq \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (\sigma_ {j} ^ {*} - \sigma)} +$$ + +We then relax/restrict the above condition to get a necessary/sufficient condition on $\sigma_0$ , leading to a lower and upper bound on $\sigma_0^{th}$ . 
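The gradient-flow dynamics above can also be checked numerically. The sketch below (an illustration with parameters of our own choosing, not taken from the paper's experiments) Euler-integrates $\dot{\sigma} = \sigma^{2 - \frac{2}{N}}(\sigma^* - \sigma)$ for two values with ratio $r = 2$ and tests Definition 1's $(s, f)$-incremental condition with $s = 0.1$, $f = 0.9$:

```python
# Euler integration of the toy-model gradient flow for two singular values.
# Illustrative parameters (sigma0, dt, T) are our own choices.

def crossing_times(N, targets, sigma0=1e-5, dt=0.01, T=300.0):
    """For each value, return the first time it reaches s*target and f*target."""
    s, f = 0.1, 0.9
    sigmas = [sigma0] * len(targets)
    t_s = [None] * len(targets)
    t_f = [None] * len(targets)
    t = 0.0
    while t < T:
        for i, tgt in enumerate(targets):
            if t_s[i] is None and sigmas[i] >= s * tgt:
                t_s[i] = t
            if t_f[i] is None and sigmas[i] >= f * tgt:
                t_f[i] = t
            # d(sigma)/dt = sigma^(2 - 2/N) * (sigma* - sigma)
            sigmas[i] += dt * sigmas[i] ** (2 - 2 / N) * (tgt - sigmas[i])
        t += dt
    return t_s, t_f

# sigma_i* = 2, sigma_j* = 1 (ratio r = 2); small initialization.
t_s, t_f = crossing_times(N=3, targets=[2.0, 1.0])
print("N=3: t_f(large) < t_s(small)?", t_f[0] < t_s[1])

t_s, t_f = crossing_times(N=1, targets=[2.0, 1.0])
print("N=1: t_f(large) < t_s(small)?", t_f[0] < t_s[1])
```

With this small initialization the depth-3 run learns the larger value almost fully before the smaller one starts moving, while the depth-1 run learns both in parallel, in line with the theorem's depth separation.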
+ +![](images/587789329a424b735c3967055e50a10f17d6041aa85dd23efe55e25fa9bb88ec.jpg) + +Note that the value determining the condition for incremental learning is $\frac{\sigma_i^*}{\sigma_j^*}$ - if two values are in the same order of magnitude, then their ratio will be close to 1 and we will need a small initialization to obtain incremental learning. The dependence on the ratio changes with depth, and is exponential for $N = 2$ . This means that incremental learning, while possible for shallow models, is difficult to see in practice. This result explains why changing the initialization scale in figure 2 changes the dynamics of the $N \geq 3$ models, while not changing the dynamics for $N = 2$ noticeably. + +The next theorem extends part of our analysis to gradient descent, a more realistic setting than the infinitesimal learning rate of gradient flow: + +Theorem 3. Given two values $\sigma_{i},\sigma_{j}$ of a depth-2 toy linear model as in (1), such that $\frac{\sigma_i^*}{\sigma_j^*} = r > 1$ and the model is initialized as in (3), and given two scalars $s\in (0,\frac{1}{4})$ and $f\in (\frac{3}{4},1)$ , and assuming $\sigma_{j}^{*}\geq 2\sigma_{0}$ , and assuming we optimize with gradient descent with a learning rate $\eta \leq \frac{c}{\sigma_1^*}$ for + +$c < 2(\sqrt{2} - 1)$ and $\sigma_1^*$ the largest value of $\sigma^*$ , then the largest initialization value for which the learning phases of the values are $(s, f)$ -incremental, denoted $\sigma_0^{th}$ , is lower and upper bounded in the following way: + +$$ +\frac {1}{2} \frac {s}{1 - s} \sigma_ {j} ^ {*} \Big (\frac {1 - f}{2 r f} \frac {s}{1 - s} \Big) ^ {\frac {1}{A - 1}} \leq \sigma_ {0} ^ {t h} \leq \frac {s}{1 - s} \sigma_ {j} ^ {*} \Big (\frac {1 - f}{f} \frac {s}{1 - s} \Big) ^ {\frac {1}{B - 1}} +$$ + +Where $A$ and $B$ are defined as: + +$$ +A = \frac {\log \left(1 - c \frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}} + c ^ {2} \left(\frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ 
{2}\right)}{\log \left(1 - c \frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}} - \frac {c ^ {2}}{4} \left(\frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)} \qquad B = \frac {\log \left(1 - c \frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}} - \frac {c ^ {2}}{4} \left(\frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)}{\log \left(1 - c \frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}} + c ^ {2} \left(\frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)}
$$

We defer the proof to appendix B.

Note that this result, while less elegant than the bounds of the gradient flow analysis, is similar in nature. Both $A$ and $B$ simplify to $r$ when we take their first-order approximation around $c = 0$, giving us similar bounds and showing that the condition on $\sigma_0$ for $N = 2$ is exponential in gradient descent as well.

While similar gradient descent results are harder to obtain for deeper models, we discuss the general effect of depth on the gradient descent dynamics in appendix C.

# 4 INCREMENTAL LEARNING IN LARGER MODELS

So far, we have only shown interesting properties of incremental learning caused by depth for a toy model. In this section, we will relate several deep models to our toy model and show how incremental learning presents itself in larger models as well.

# 4.1 MATRIX SENSING

The task of matrix sensing is a generalization of matrix completion, where our input space is $\mathcal{X} = \mathbb{R}^{d\times d}$ and our model is a matrix $W\in \mathbb{R}^{d\times d}$, such that:

$$
f_W(A) = \langle W, A\rangle = \mathrm{tr}\left(W^T A\right)
$$

Following Arora et al.
(2019), we introduce depth by parameterizing the model using a product of matrices and the following initialization scheme ( $W_{i} \in \mathbb{R}^{d \times d}$ ):

$$
W = W_N W_{N-1} \dots W_1 \tag{5}
$$

$$
\forall i \in [N], \quad W_i(0) = \sqrt[N]{\sigma_0}\, I
$$

Note that when $d = 1$, the deep matrix sensing model reduces to our toy model without weight sharing. We study the dynamics of the model under gradient flow over a depth-normalized squared loss, assuming the data is labeled by a matrix sensing model parameterized by a PSD $W^{*} \in \mathbb{R}^{d \times d}$:

$$
\ell_N\left(W_N W_{N-1}\dots W_1\right) = \frac{1}{2N}\mathbb{E}_A\left[\left\langle W_N W_{N-1}\dots W_1 - W^*, A\right\rangle^2\right] = \frac{1}{2N}\left\|W_N W_{N-1}\dots W_1 - W^*\right\|_F^2 \tag{6}
$$

The following theorem relates the deep matrix sensing model to our toy model, showing the two have the same dynamical equations:

Theorem 4. Optimizing the deep matrix sensing model described in (5) with gradient flow over the depth-normalized squared loss (6), with weights initialized as above, leads to the following dynamical equations for different values of $N$:

$$
\dot{\sigma}_i(t) = \sigma_i(t)^{2 - \frac{2}{N}}\left(\sigma_i^* - \sigma_i(t)\right)
$$

Where $\sigma_{i}$ and $\sigma_{i}^{*}$ are the $i$ th singular values of $W$ and $W^{*}$, respectively, corresponding to the same singular vector.

The proof follows that of Saxe et al. (2013) and Gidel et al. (2019) and is deferred to appendix E.

Theorem 4 shows us that the bias towards sparse solutions introduced by depth in the toy model is equivalent to the bias towards low-rank solutions in the matrix sensing task. This bias was studied in a more general setting in Arora et al.
(2019), with empirical results supporting the effect of depth on the obtainment of low-rank solutions under a more natural loss and initialization scheme. We recreate and discuss these experiments and their connection to our analysis in appendix E, and an example of these dynamics in deep matrix sensing can also be seen in panel (a) of figure 1. + +# 4.2 QUADRATIC NEURAL NETWORKS + +By drawing connections between quadratic networks and matrix sensing (as in Soltanolkotabi et al. (2018)), we can extend our results to these nonlinear models. We will study a simplified quadratic network, where our input space is $\mathcal{X} = \mathbb{R}^d$ and the first layer is parameterized by a weight matrix $W\in \mathbb{R}^{d\times d}$ and followed by a quadratic activation function. The final layer will be a summation layer. We assume, like before, that the labeling function is a quadratic network parameterized by $W^{*}\in \mathbb{R}^{d\times d}$ . Our model can be written in the following way, using the following orthogonal initialization scheme: + +$$ +f _ {W} (x) = \sum_ {i = 1} ^ {d} \left(w _ {i} ^ {T} x\right) ^ {2} = x ^ {T} W ^ {T} W x = \langle W ^ {T} W, x x ^ {T} \rangle \tag {7} +$$ + +$$ +W _ {0} ^ {T} W _ {0} = \sigma_ {0} I +$$ + +Immediately, we see the similarity of the quadratic network to the deep matrix sensing model with $N = 2$ , where the input space is made up of rank-1 matrices. However, the change in input space forces us to optimize over a different loss function to reproduce the same dynamics: + +Definition 2. 
Given an input distribution over an input space $\mathcal{X}$ with a labeling function $y:\mathcal{X}\to \mathbb{R}$ and a hypothesis $h$, the variance loss is defined in the following way:

$$
\ell_{var}(h) = \frac{1}{16}\mathbb{E}_x\left[(y(x) - h(x))^2\right] - \frac{1}{16}\mathbb{E}_x\left[y(x) - h(x)\right]^2
$$

Note that minimizing this loss function amounts to minimizing the variance of the error, while the squared loss minimizes the second moment of the error. We note that both loss functions have the same minimum for our problem, and the dynamics of the squared loss can be approximated in certain cases by the dynamics of the variance loss. For a complete discussion of the two losses, including the cases where the two losses have similar dynamics, we refer the reader to appendix F.

Theorem 5. Minimizing the quadratic network described and initialized as in (7) with gradient flow over the variance loss of Definition 2 leads to the following dynamical equations:

$$
\dot{\sigma}_i(t) = \sigma_i(t)\left(\sigma_i^* - \sigma_i(t)\right)
$$

Where $\sigma_{i}$ and $\sigma_{i}^{*}$ are the $i$ th singular values of $W$ and $W^{*}$, respectively, corresponding to the same singular vector.

We defer the proof to appendix F and note that these dynamics are the same as those of our depth-2 toy model, showing that shallow quadratic networks can exhibit incremental learning (albeit requiring a small initialization).

# 4.3 DIAGONAL/CONVOLUTIONAL LINEAR NETWORKS

While incremental learning has been described for deep linear networks in the past, it has been restricted to regression tasks. Here, we illustrate how incremental learning presents itself in binary classification, where implicit bias results have so far focused on convergence at $t \to \infty$ (Soudry et al. (2018), Nacson et al. (2018), Ji & Telgarsky (2019)).
Deep linear networks with diagonal weight matrices have been shown to be biased towards sparse solutions when $N > 1$ in Gunasekar et al. (2018), and towards the max-margin solution for $N = 1$. Instead of analyzing convergence at $t \to \infty$, we intend to show that the model favors sparse solutions for the entire duration of optimization, and that this is due to the dynamics of incremental learning.

Our theoretical illustration will use our toy model as in (1) (initialized as in (3)) as a special weight-shared case of deep networks with diagonal weight matrices, and we will then show empirical results for the more general setting. We analyze the optimization dynamics of this model over a separable dataset $\{x_{i},y_{i}\}_{i = 1}^{m}$ where $y_{i}\in \{\pm 1\}$. We use the exponential loss ( $\ell(f(x),y) = e^{-yf(x)}$ ) for the theoretical illustration and experiment on the exponential and logistic losses.

Computing the gradient for the model over $w$, and denoting by $\beta$ the linear predictor implemented by the model, the gradient flow dynamics for $\sigma$ become:

$$
\dot{\sigma}_i = \frac{N^2}{m}\sigma_i^{2 - \frac{2}{N}}\sum_{j = 1}^{m} e^{-y_j\langle\beta, x_j\rangle}\, y_j x_{j,i}
$$

We see the same dynamical attenuation of small values of $\sigma$ that is seen in the regression model, caused by the multiplication by $\sigma_{i}^{2 - \frac{2}{N}}$. From this, we can expect the same type of incremental learning to occur - values of $\sigma$ will be learned incrementally until the dataset can be separated by the current support of $\sigma$. Then, the dynamics strengthen the growth of the current support while relatively attenuating that of the other values. Since the data is separated, increasing the values of the current support reduces the loss and the magnitude of subsequent gradients, and so we should expect the support to remain the same and the model to converge to a sparse solution.
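This intuition can be probed numerically. The sketch below is a hedged illustration (the two-point dataset, step size, and step counts are our own choices, not the paper's): gradient descent on the exponential loss for the weight-shared model $\beta_i = w_i^N$ (odd $N$, so $\beta$ can take either sign) on a separable dataset whose $\ell_2$ max-margin direction is dense:

```python
import math

# Two separable points: the L2 max-margin direction is dense (close to (1, 0.9)),
# while a sparse predictor using only feature 1 also separates the data.
X = [(1.0, 1.0), (-1.0, -0.9)]
Y = [1.0, -1.0]

def train(N, w0=0.1, lr=0.01, steps=20000):
    """Gradient descent on the exponential loss for beta_i = w_i^N (odd N)."""
    w = [w0, w0]
    for _ in range(steps):
        beta = [wi ** N for wi in w]
        grad = [0.0, 0.0]
        for x, y in zip(X, Y):
            e = math.exp(-y * (beta[0] * x[0] + beta[1] * x[1]))
            for i in range(2):
                grad[i] += e * y * x[i] / len(X)
        # chain rule: d(beta_i)/d(w_i) = N * w_i^(N - 1)
        for i in range(2):
            w[i] += lr * N * (w[i] ** (N - 1)) * grad[i]
    return [wi ** N for wi in w]

b1 = train(N=1)
b3 = train(N=3)
print("N=1 beta2/beta1:", b1[1] / b1[0])
print("N=3 beta2/beta1:", b3[1] / b3[0])
```

The deeper model is expected to put relatively more of its mass on the first coordinate than the $N = 1$ model, consistent with the sparsity bias described above; increasing $w_0$ should weaken the effect, mirroring the role of $\sigma_0$.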

Granted, this description is just intuition, but panel (c) of figure 1 shows how it is borne out in practice (similar results are obtained for the logistic loss). In appendix G we further explore this model, showing deeper networks have a stronger bias for sparsity. We also observe that the initialization scale plays a similar role as before - deep models are less biased towards sparsity when $\sigma_0$ is large.

In their work, Gunasekar et al. (2018) show an equivalence between the diagonal network and the circular-convolutional network in the frequency domain. According to their results, we should expect to see the same sparsity bias of diagonal networks in convolutional networks, when looking at the Fourier coefficients of $\sigma$. An example of this can be seen in panel (d) of figure 1, and we refer the reader to appendix G for a full discussion of their convolutional model and its incremental learning dynamics.

# 5 CONCLUSION

Gradient-based optimization of deep linear models has an implicit bias towards simple (sparse) solutions, caused by an incremental search strategy over the hypothesis space. Deeper models have a stronger tendency for incremental learning, exhibiting it at more realistic initialization scales.

This dynamical phenomenon exists throughout the entire optimization process, for regression as well as classification tasks, and for many types of models - diagonal networks, convolutional networks, matrix completion, and even the nonlinear quadratic network. We believe this kind of dynamical analysis may be able to shed light on the generalization of deeper nonlinear neural networks as well, with shallow quadratic networks being only a first step towards that goal.

# ACKNOWLEDGMENTS

This research is supported by the European Research Council (TheoryDL project).

# REFERENCES

Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization.
arXiv preprint arXiv:1802.06509, 2018.

Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. In Advances in Neural Information Processing Systems 32, pp. 7411-7422. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8960-implicit-regularization-in-deep-matrix-factorization.pdf.

Remi Tachet des Combes, Mohammad Pezeshki, Samira Shabanian, Aaron Courville, and Yoshua Bengio. On the learning dynamics of deep neural networks. arXiv preprint arXiv:1809.06848, 2018.

Gauthier Gidel, Francis Bach, and Simon Lacoste-Julien. Implicit regularization of discrete gradient dynamics in linear neural networks. In Advances in Neural Information Processing Systems, pp. 3196-3206, 2019.

Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems, pp. 6151-6159, 2017.

Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, pp. 9461-9471, 2018.

Ziwei Ji and Matus Telgarsky. The implicit bias of gradient descent on nonseparable data. In Conference on Learning Theory, pp. 1772-1798, 2019.

Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. arXiv preprint arXiv:1712.09203, 2017.

Mor Shpigel Nacson, Jason Lee, Suriya Gunasekar, Pedro HP Savarese, Nathan Srebro, and Daniel Soudry. Convergence of gradient descent on separable data. arXiv preprint arXiv:1803.01905, 2018.

Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L Edelman, Fred Zhang, and Boaz Barak. SGD on neural networks learns functions of increasing complexity. arXiv preprint arXiv:1905.11604, 2019.
Yagyensh Chandra Pati, Ramin Rezaiifar, and Perinkulam Sambamurthy Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, pp. 40-44. IEEE, 1993.

Ronen Basri, David Jacobs, Yoni Kasten, and Shira Kritchman. The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems 32, pp. 4763-4772. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8723-the-convergence-rate-of-neural-networks-for-learned-function.pdf.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 65(2):742-769, 2018.

Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822-2878, 2018.

Francis Williams, Matthew Trager, Claudio Silva, Daniele Panozzo, Denis Zorin, and Joan Bruna. Gradient dynamics of shallow univariate ReLU networks. arXiv preprint arXiv:1906.07842, 2019.

Blake Woodworth, Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Kernel and deep regimes in overparametrized models. arXiv preprint arXiv:1906.05827, 2019.

# A PROOF OF THEOREM 2

Theorem.
Given two values $\sigma_{i},\sigma_{j}$ of a toy linear model as in (1), such that $\frac{\sigma_i^*}{\sigma_j^*} = r > 1$ and the model is initialized as in (3), and given two scalars $s\in (0,\frac{1}{4})$ and $f\in (\frac{3}{4},1)$ , then the largest initialization value for which the learning phases of the values are $(s,f)$ -incremental, denoted $\sigma_0^{th}$ , is lower and upper bounded in the following way: + +$$ +s \sigma_ {j} ^ {*} \left(\frac {s}{r f}\right) ^ {\frac {1}{(1 - f) (r - 1)}} \leq \sigma_ {0} ^ {t h} \leq s \sigma_ {j} ^ {*} \left(\frac {s}{r f}\right) ^ {\frac {1 - s}{r - 1}} \quad N = 2 +$$ + +$$ +s \sigma_ {j} ^ {*} \left(\frac {(1 - f) (r - 1)}{1 + (1 - f) (r - 1)}\right) ^ {\frac {N}{N - 2}} \leq \sigma_ {0} ^ {t h} \leq s \sigma_ {j} ^ {*} \left(\frac {r - 1}{r - s}\right) ^ {\frac {N}{N - 2}} \quad N \geq 3 +$$ + +Proof. Our strategy will be to define the time $t_{\alpha}$ for which a value reaches a fraction $\alpha$ of its optimal value, and then require that $t_f(\sigma_i) \leq t_s(\sigma_j)$ . We begin with recalling the differential equation which determines the dynamics of the model: + +$$ +\dot {\sigma} = \sigma^ {2 - \frac {2}{N}} \left(\sigma^ {*} - \sigma\right) +$$ + +Since the solution for $N \geq 3$ is implicit and difficult to manage in a general form, we will define $t_{\alpha}$ using the integral of the differential equation. The equation is separable, and under initialization of $\sigma_0$ we can describe $t_{\alpha}(\sigma)$ in the following way: + +$$ +t _ {\alpha} (\sigma) = \int_ {\sigma_ {0}} ^ {\alpha \sigma^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (\sigma^ {*} - \sigma)} +$$ + +Incremental learning takes place when $\sigma_{i}(t_{f}) = f\sigma_{i}^{*}$ happens before $\sigma_{j}(t_{s}) = s\sigma_{j}^{*}$ . 
We can write this condition in the following way:

$$
\int_{\sigma_0}^{f\sigma_i^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}(\sigma_i^* - \sigma)} \leq \int_{\sigma_0}^{s\sigma_j^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}(\sigma_j^* - \sigma)}
$$

Plugging in $\sigma_i^* = r\sigma_j^*$ and rearranging, we get the following necessary and sufficient condition for incremental learning:

$$
\int_{\sigma_0}^{fr\sigma_j^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}\left(1 - \frac{\sigma}{r\sigma_j^*}\right)} \leq r\int_{\sigma_0}^{s\sigma_j^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}\left(1 - \frac{\sigma}{\sigma_j^*}\right)}
$$

Our last step before relaxing and restricting our condition will be to split the integral on the left-hand side into two integrals:

$$
\int_{\sigma_0}^{s\sigma_j^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}\left(1 - \frac{\sigma}{r\sigma_j^*}\right)} + \int_{s\sigma_j^*}^{fr\sigma_j^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}\left(1 - \frac{\sigma}{r\sigma_j^*}\right)} \leq r\int_{\sigma_0}^{s\sigma_j^*} \frac{d\sigma}{\sigma^{2 - \frac{2}{N}}\left(1 - \frac{\sigma}{\sigma_j^*}\right)} \tag{8}
$$

At this point, we cannot solve this equation and isolate $\sigma_0$ to obtain a clear threshold condition on it for incremental learning. Instead, we will relax/restrict the above condition to get a necessary/sufficient condition on $\sigma_0$, leading to a lower and upper bound on the threshold value of $\sigma_0$.

# SUFFICIENT CONDITION

To obtain a sufficient (but not necessary) condition on $\sigma_0$, we may make the condition stricter either by increasing the left-hand side or decreasing the right-hand side.
We can increase the left-hand side by removing $r$ from the left-most integral's denominator ( $r > 1$ ) and then combine the left-most and right-most integrals: + +$$ +\int_ {s \sigma_ {j} ^ {*}} ^ {f r \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (1 - \frac {\sigma}{r \sigma_ {j} ^ {*}})} \leq (r - 1) \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (1 - \frac {\sigma}{\sigma_ {j} ^ {*}})} +$$ + +Next, we note that the integration bounds give us a bound on $\sigma$ for either integral. This means we can replace $1 - \frac{\sigma}{\sigma_j^*}$ with 1 on the right-hand side, and replace $1 - \frac{\sigma}{r\sigma_j^*}$ with $1 - f$ on the left-hand side: + +$$ +\frac {1}{1 - f} \int_ {s \sigma_ {j} ^ {*}} ^ {f r \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}}} \leq (r - 1) \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}}} +$$ + +We may now solve these integrals for every $N$ and isolate $\sigma_0$ , obtaining the lower bound on $\sigma_0^{th}$ . We start with the case where $N = 2$ : + +$$ +\frac {1}{1 - f} \big (\log (f r \sigma_ {j} ^ {*}) - \log (s \sigma_ {j} ^ {*}) \big) \leq (r - 1) \big (\log (s \sigma_ {j} ^ {*}) - \log (\sigma_ {0}) \big) +$$ + +Rearranging to isolate $\sigma_0$ , we obtain our result: + +$$ +\sigma_ {0} \leq s \sigma_ {j} ^ {*} \left(\frac {s}{r f}\right) ^ {\frac {1}{(1 - f) (r - 1)}} +$$ + +For the $N \geq 3$ case, we have the following after solving the integrals: + +$$ +\frac {1}{1 - f} \Big (\big (\frac {1}{s \sigma_ {j} ^ {*}} \big) ^ {1 - \frac {2}{N}} - \big (\frac {1}{r f \sigma_ {j} ^ {*}} \big) ^ {1 - \frac {2}{N}} \Big) \leq (r - 1) \Big (\big (\frac {1}{\sigma_ {0}} \big) ^ {1 - \frac {2}{N}} - \big (\frac {1}{s \sigma_ {j} ^ {*}} \big) ^ {1 - \frac {2}{N}} \Big) +$$ + +For simplicity we may further restrict the condition by removing the term $\left(\frac{1}{rf\sigma_j^*}\right)^{1 - \frac{2}{N}}$ . 
Solving for $\sigma_0$ gives us the following: + +$$ +\sigma_ {0} \leq s \sigma_ {j} ^ {*} \left(\frac {(1 - f) (r - 1)}{1 + (1 - f) (r - 1)}\right) ^ {\frac {N}{N - 2}} +$$ + +# NECESSARY CONDITION + +To obtain a necessary (but not sufficient) condition on $\sigma_0$ , we may relax the condition in (8) either by decreasing the left-hand side or increasing the right-hand side. We begin by rearranging the equation: + +$$ +\int_ {s \sigma_ {j} ^ {*}} ^ {f r \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (1 - \frac {\sigma}{r \sigma_ {j} ^ {*}})} \leq r \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (1 - \frac {\sigma}{\sigma_ {j} ^ {*}})} - \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}} (1 - \frac {\sigma}{r \sigma_ {j} ^ {*}})} +$$ + +Like before, we may use the integration bounds to bound $\sigma$ . Plugging in $\sigma = s\sigma_{j}^{*}$ for all integrals decreases the left-hand side and increases the right-hand side, leading us to the following: + +$$ +\frac {r}{r - s} \int_ {s \sigma_ {j} ^ {*}} ^ {f r \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}}} \leq \left(\frac {r}{1 - s} - \frac {r}{r - s}\right) \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}}} +$$ + +Rearranging, we get the following inequality: + +$$ +\int_ {s \sigma_ {j} ^ {*}} ^ {f r \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}}} \leq \frac {r - 1}{1 - s} \int_ {\sigma_ {0}} ^ {s \sigma_ {j} ^ {*}} \frac {d \sigma}{\sigma^ {2 - \frac {2}{N}}} +$$ + +We now solve the integrals for the different cases. 
For $N = 2$, we have:

$$
\log(fr\sigma_j^*) - \log(s\sigma_j^*) \leq \frac{r - 1}{1 - s}\left(\log(s\sigma_j^*) - \log(\sigma_0)\right)
$$

Rearranging to isolate $\sigma_0$, we get our condition:

$$
\sigma_0 \leq s\sigma_j^*\left(\frac{s}{rf}\right)^{\frac{1 - s}{r - 1}}
$$

Finally, for $N \geq 3$, we solve the integrals to give us:

$$
\left(\left(\frac{1}{s\sigma_j^*}\right)^{1 - \frac{2}{N}} - \left(\frac{1}{rf\sigma_j^*}\right)^{1 - \frac{2}{N}}\right) \leq \frac{r - 1}{1 - s}\left(\left(\frac{1}{\sigma_0}\right)^{1 - \frac{2}{N}} - \left(\frac{1}{s\sigma_j^*}\right)^{1 - \frac{2}{N}}\right)
$$

Rearranging to isolate $\sigma_0$, we get our condition:

$$
\sigma_0 \leq s\sigma_j^*\left(\frac{r - 1}{r - s}\right)^{\frac{N}{N - 2}}
$$

# SUMMARY

For a given $N$, we derived a sufficient condition and a necessary condition on $\sigma_0$ for $(s,f)$-incremental learning. The threshold value of $\sigma_0$, which is the largest initialization value for which we see incremental learning (denoted $\sigma_0^{th}$), lies between the two derived bounds.

The precise bounds can possibly be improved a bit, but the asymptotic dependence on $r$ is the crux of the matter, showing that the dependence on $r$ changes with depth, with a substantial difference when we move from shallow models ( $N = 2$ ) to deeper ones ( $N \geq 3$ ).

![](images/69f71960d1c66d2b382ce37999936a9d31275cb7a8064e309122eebc6dcf6867.jpg)

# B PROOF OF THEOREM 3

Theorem.
Given two values $\sigma_{i},\sigma_{j}$ of a depth-2 toy linear model as in (1), such that $\frac{\sigma_i^*}{\sigma_j^*} = r > 1$ and the model is initialized as in (3), and given two scalars $s\in (0,\frac{1}{4})$ and $f\in (\frac{3}{4},1)$ , and assuming $\sigma_{j}^{*}\geq 2\sigma_{0}$ , and assuming we optimize with gradient descent with a learning rate $\eta \leq \frac{c}{\sigma_1^*}$ for $c < 2(\sqrt{2} -1)$ and $\sigma_{1}^{*}$ the largest value of $\sigma^{*}$ , then the largest initialization value for which the learning phases of the values are $(s,f)$ -incremental, denoted $\sigma_0^{th}$ , is lower and upper bounded in the following way: + +$$ +\frac {1}{2} \frac {s}{1 - s} \sigma_ {j} ^ {*} \Big (\frac {1 - f}{2 r f} \frac {s}{1 - s} \Big) ^ {\frac {1}{A - 1}} \leq \sigma_ {0} ^ {t h} \leq \frac {s}{1 - s} \sigma_ {j} ^ {*} \Big (\frac {1 - f}{f} \frac {s}{1 - s} \Big) ^ {\frac {1}{B - 1}} +$$ + +Where $A$ and $B$ are defined as: + +$$ +A = \frac {\log \left(1 - c \frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}} + c ^ {2} \left(\frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)}{\log \left(1 - c \frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}} - \frac {c ^ {2}}{4} \left(\frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)} \quad B = \frac {\log \left(1 - c \frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}} - \frac {c ^ {2}}{4} \left(\frac {\sigma_ {i} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)}{\log \left(1 - c \frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}} + c ^ {2} \left(\frac {\sigma_ {j} ^ {*}}{\sigma_ {1} ^ {*}}\right) ^ {2}\right)} +$$ + +Proof. To show our result for gradient descent and $N = 2$ , we build on the proof techniques of theorem 3 of Gidel et al. (2019). We start by deriving the recurrence relation for the values $\sigma(t)$ for general depth, when $t$ now stands for the iteration. 
Remembering that $w_i^N = \sigma_i$, we write down the gradient update for $w_i(t)$:

$$
w_i(t + 1) = w_i(t) + \eta\frac{1}{N} w_i(t)^{N - 1}\left(\sigma_i^* - \sigma_i(t)\right) = \sqrt[N]{\sigma_i(t)} + \eta\frac{1}{N}\sigma_i(t)^{1 - \frac{1}{N}}\left(\sigma_i^* - \sigma_i(t)\right)
$$

Raising $w_{i}(t)$ to the $N$ th power, we get the gradient update for the $\sigma$ values:

$$
\sigma_i(t + 1) = \left(\sqrt[N]{\sigma_i(t)} + \eta\frac{1}{N}\left(\sigma_i^* - \sigma_i(t)\right)\sigma_i(t)^{1 - \frac{1}{N}}\right)^N = \sigma_i(t)\left(1 + \eta\frac{1}{N}\sigma_i(t)^{1 - \frac{2}{N}}\left(\sigma_i^* - \sigma_i(t)\right)\right)^N \tag{9}
$$

Next, we will prove a simple lemma which gives us the maximal learning rate we will consider for the analysis, for which there is no overshooting (the values don't grow larger than the optimal values).

Lemma 1. For the gradient update in (9), assuming $\sigma_{i}(0) < \sigma_{i}^{*}$, if $\eta \leq \left(\frac{1}{\sigma_1^*}\right)^{2 - \frac{2}{N}}$ and $N \geq 2$, then:

$$
\forall t: \sigma_i(t) \leq \sigma_i^*
$$

Proof.
Plugging in $\eta = c\left(\frac{1}{\sigma_i^*}\right)^{2 - \frac{2}{N}}$ for $c \leq 1$ , we have: + +$$ +\sigma_ {i} (t + 1) = \sigma_ {i} (t) \Big (1 + \frac {c}{N} \Big (\frac {\sigma_ {i} (t)}{\sigma_ {i} ^ {*}} \Big) ^ {1 - \frac {2}{N}} \Big (1 - \big (\frac {\sigma_ {i} (t)}{\sigma_ {i} ^ {*}} \big) \Big) \Big) ^ {N} \leq \sigma_ {i} (t) e ^ {\Big (\frac {\sigma_ {i} (t)}{\sigma_ {i} ^ {*}} \Big) ^ {1 - \frac {2}{N}} \Big (1 - \big (\frac {\sigma_ {i} (t)}{\sigma_ {i} ^ {*}} \big) \Big)} +$$ + +Defining $r_i = \frac{\sigma_i}{\sigma_i^*}$ and dividing both sides by $\sigma_i^*$ , we have: + +$$ +r _ {i} (t + 1) \leq r _ {i} (t) e ^ {r _ {i} (t) ^ {1 - \frac {2}{N}} (1 - r _ {i} (t))} +$$ + +It is enough to show that for any $0 \leq r \leq 1$ , we have that $r e^{r^{1 - \frac{2}{N}}(1 - r)} \leq 1$ , as over-shooting occurs when $r_i(t) > 1$ . Indeed, this function is monotonic increasing in $0 \leq r \leq 1$ (since the exponent is non-negative), and equals 1 when $r = 1$ . Since $r = 1$ is a fixed point and no iteration that starts at $r < 1$ can cross 1, then $r_i(t) \leq 1$ for any $t$ . This concludes our proof. + +Under this choice of learning rate, we can now obtain our incremental learning results for gradient descent when $N = 2$ . Our strategy will be bounding $\sigma_{i}(t)$ from below and above, which will give us a lower and upper bound for $t_{\alpha}(\sigma_{i})$ . Once we have these bounds, we will be able to describe either a necessary or a sufficient condition on $\sigma_0$ for incremental learning, similar to theorem 2. 
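Before the derivation, both claims can be sanity-checked numerically for $N = 2$. The sketch below (our own illustrative values of $c$, $\sigma^*$ and $\sigma_0$, not the paper's) iterates the $N = 2$ case of update (9): no value overshoots its target (Lemma 1), and with a small enough initialization the larger value is learned first:

```python
def run_gd(targets, sigma0=1e-8, c=0.5, steps=400):
    """Iterate sigma <- sigma * (1 + (eta/2) * (sigma* - sigma))^2, eta = c / sigma_1*."""
    eta = c / max(targets)  # c = 0.5 < 2*(sqrt(2) - 1)
    sigmas = [sigma0] * len(targets)
    history = []
    for _ in range(steps):
        sigmas = [s * (1 + 0.5 * eta * (t - s)) ** 2 for s, t in zip(sigmas, targets)]
        history.append(sigmas)
    return history

history = run_gd([1.0, 0.25])  # sigma_i* = 1 (= sigma_1*), sigma_j* = 0.25, so r = 4
# Lemma 1: no value ever exceeds its target.
assert all(s <= t + 1e-12 for row in history for s, t in zip(row, [1.0, 0.25]))
# (s, f)-incremental with s = 0.1, f = 0.9: the larger value nearly converges
# before the smaller one has moved much.
t_f = next(i for i, row in enumerate(history) if row[0] >= 0.9 * 1.0)
t_s = next(i for i, row in enumerate(history) if row[1] >= 0.1 * 0.25)
print("t_f(sigma_i) =", t_f, " t_s(sigma_j) =", t_s)
```

Raising $\sigma_0$ by a few orders of magnitude should make the two learning phases overlap, matching the threshold behavior the theorem bounds.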

The update rule for $N = 2$ is:

$$
\sigma_i(t + 1) = \sigma_i(t)\left(1 + \frac{1}{2}\eta\left(\sigma_i^* - \sigma_i(t)\right)\right)^2
$$

Next, we plug in $\eta = \frac{c}{\sigma_1^*}$ for $c < 2(\sqrt{2} - 1) < 1$ and denote $R_i = \frac{\sigma_i^*}{\sigma_1^*} \leq 1$ and $r_i(t) = \frac{\sigma_i(t)}{\sigma_i^*} \leq 1$ to get:

$$
\sigma_i(t + 1) = \sigma_i(t)\left(1 + \frac{c}{2} R_i(1 - r_i(t))\right)^2
$$

Following theorem 3 of Gidel et al. (2019), we bound $\frac{1}{\sigma_i(t)}$:

$$
\begin{array}{l} \frac{1}{\sigma_i(t + 1)} = \frac{1}{\sigma_i(t)}\frac{1}{\left(1 + \frac{c}{2}R_i(1 - r_i(t))\right)^2} \\ = \frac{1}{\sigma_i(t)}\frac{1}{1 + \left(1 - r_i(t)\right)\left(cR_i + \frac{c^2}{4}R_i^2\left(1 - r_i(t)\right)\right)} \\ \geq \frac{1}{\sigma_i(t)}\frac{1}{1 + \left(1 - r_i(t)\right)\left(cR_i + \frac{c^2}{4}R_i^2\right)} \\ \geq \frac{1}{\sigma_i(t)}\left(1 - \left(1 - r_i(t)\right)\left(cR_i + \frac{c^2}{4}R_i^2\right)\right) \\ = \frac{1}{\sigma_i(t)} - \left(\frac{1}{\sigma_i(t)} - \frac{1}{\sigma_i^*}\right)\left(cR_i + \frac{c^2}{4}R_i^2\right) \\ \end{array}
$$

Where in the fourth line we use the inequality $\frac{1}{1 + x} \geq 1 - x$, $\forall x \geq 0$.
We may now subtract $\frac{1}{\sigma_i^*}$ from both sides to obtain: + +$$ +\frac {1}{\sigma_ {i} (t)} - \frac {1}{\sigma_ {i} ^ {*}} \geq \left(\frac {1}{\sigma_ {i} (t - 1)} - \frac {1}{\sigma_ {i} ^ {*}}\right) \left(1 - c R _ {i} - \frac {c ^ {2}}{4} R _ {i} ^ {2}\right) \geq \left(\frac {1}{\sigma_ {0}} - \frac {1}{\sigma_ {i} ^ {*}}\right) \left(1 - c R _ {i} - \frac {c ^ {2}}{4} R _ {i} ^ {2}\right) ^ {t} +$$ + +We may now obtain a bound on $t_{\alpha}(\sigma_i)$ by plugging in $\sigma_{i}(t) = \alpha \sigma_{i}^{*}$ and taking the log: + +$$ +\log \left(\frac {1 - \alpha}{\alpha \left(\frac {\sigma_ {i} ^ {*}}{\sigma_ {0}} - 1\right)}\right) \geq t _ {\alpha} \cdot \log \left(1 - c R _ {i} - \frac {c ^ {2}}{4} R _ {i} ^ {2}\right) +$$ + +Rearranging (note that $\log \left(1 - cR_{i} - \frac{c^{2}}{4} R_{i}^{2}\right) < 0$ and that our choice of $c$ keeps the argument of the log positive), we get: + +$$ +t _ {\alpha} (\sigma_ {i}) \geq \frac {\log \left(\frac {1 - \alpha}{\alpha \left(\frac {\sigma_ {i} ^ {*}}{\sigma_ {0}} - 1\right)}\right)}{\log \left(1 - c R _ {i} - \frac {c ^ {2}}{4} R _ {i} ^ {2}\right)} +$$ + +Next, we follow the same procedure for an upper bound. Starting with our update step: + +$$ +\begin{array}{l} \frac {1}{\sigma_ {i} (t + 1)} = \frac {1}{\sigma_ {i} (t)} \frac {1}{1 + (1 - r _ {i} (t)) \left(c R _ {i} + \frac {c ^ {2}}{4} R _ {i} ^ {2} (1 - r _ {i} (t))\right)} \\ \leq \frac {1}{\sigma_ {i} (t)} \frac {1}{1 + (1 - r _ {i} (t)) c R _ {i}} \\ \leq \frac {1}{\sigma_ {i} (t)} \left(1 - (1 - r _ {i} (t)) c R _ {i} + (1 - r _ {i} (t)) ^ {2} c ^ {2} R _ {i} ^ {2}\right) \\ \end{array} +$$ + +Where in the last line we use the inequality $\frac{1}{1 + x} \leq 1 - x + x^2$ , $\forall x \geq 0$ . 
Subtracting $\frac{1}{\sigma_i^*}$ from both sides, we get:

$$
\frac{1}{\sigma_i(t)} - \frac{1}{\sigma_i^*} \leq \left(\frac{1}{\sigma_i(t-1)} - \frac{1}{\sigma_i^*}\right)\left(1 - c R_i + c^2 R_i^2\right) \leq \left(\frac{1}{\sigma_0} - \frac{1}{\sigma_i^*}\right)\left(1 - c R_i + c^2 R_i^2\right)^t
$$

Rearranging like before, we get the bound on the $\alpha$-time:

$$
t_{\alpha}(\sigma_i) \leq \frac{\log\left(\frac{1 - \alpha}{\alpha\left(\frac{\sigma_i^*}{\sigma_0} - 1\right)}\right)}{\log\left(1 - c R_i + c^2 R_i^2\right)}
$$

Given these bounds, we would like to find the conditions on $\sigma_0$ that allow for $(s,f)$-incremental learning. We will find a sufficient condition and a necessary condition, as in the proof of theorem 2.

# SUFFICIENT CONDITION

A sufficient condition for incremental learning will be one which is possibly stricter than the exact condition. We can find such a condition by requiring the upper bound of $t_f(\sigma_i)$ to be smaller than the lower bound on $t_s(\sigma_j)$.
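As an aside, the two $\alpha$-time bounds derived above can be sanity-checked against the exact recurrence. The sketch below uses parameter values of our own choosing and verifies that the empirical $\alpha$-time of the $N = 2$ update falls between the two bounds (up to one step of discretization slack).

```python
import math

def alpha_time(sigma0, sigma_star, sigma1_star, c, alpha):
    # First t with sigma(t) >= alpha * sigma*, for the exact N = 2 update
    # with eta = c / sigma_1*.
    R = sigma_star / sigma1_star
    sigma, t = sigma0, 0
    while sigma < alpha * sigma_star:
        sigma = sigma * (1 + (c / 2) * R * (1 - sigma / sigma_star)) ** 2
        t += 1
    return t

def alpha_time_bounds(sigma0, sigma_star, sigma1_star, c, alpha):
    # The lower and upper bounds on t_alpha derived above.
    R = sigma_star / sigma1_star
    num = math.log((1 - alpha) / (alpha * (sigma_star / sigma0 - 1)))
    lower = num / math.log(1 - c * R - (c ** 2 / 4) * R ** 2)
    upper = num / math.log(1 - c * R + c ** 2 * R ** 2)
    return lower, upper

t_emp = alpha_time(1e-4, 0.5, 1.0, 0.1, 0.9)
lo, hi = alpha_time_bounds(1e-4, 0.5, 1.0, 0.1, 0.9)
print(lo, t_emp, hi)
```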
This becomes the following condition:

$$
\frac{\log\left(\frac{1 - f}{f}\frac{\sigma_0}{\sigma_i^* - \sigma_0}\right)}{\log\left(1 - c R_i + c^2 R_i^2\right)} \leq \frac{\log\left(\frac{1 - s}{s}\frac{\sigma_0}{\sigma_j^* - \sigma_0}\right)}{\log\left(1 - c R_j - \frac{c^2}{4} R_j^2\right)}
$$

Defining $A = \frac{\log\left(1 - cR_i + c^2R_i^2\right)}{\log\left(1 - cR_j - \frac{c^2}{4}R_j^2\right)}$ and rearranging, we get the following:

$$
\log\left(\frac{1 - f}{f}\frac{\sigma_0}{\sigma_i^* - \sigma_0}\right) \geq A\log\left(\frac{1 - s}{s}\frac{\sigma_0}{\sigma_j^* - \sigma_0}\right)
$$

We may now take the exponent of both sides and rearrange again, remembering $\frac{\sigma_i^*}{\sigma_j^*} = r > 1$, to get the following condition:

$$
\frac{\left(\sigma_j^* - \sigma_0\right)^A}{\sigma_0^{A-1}\left(r\sigma_j^* - \sigma_0\right)} \geq \frac{f}{1 - f}\left(\frac{1 - s}{s}\right)^A
$$

Now, we will add the very reasonable assumption that $\sigma_j^* \geq 2\sigma_0$, which allows us to replace $\frac{\sigma_j^* - \sigma_0}{r\sigma_j^* - \sigma_0}$ with $\frac{1}{2r}$ and replace $(\sigma_j^* - \sigma_0)^{A-1}$ with $\left(\frac{1}{2}\sigma_j^*\right)^{A-1}$, only making the condition stricter. This simplifies the expression to the following:

$$
\left(\frac{\sigma_j^*}{\sigma_0}\right)^{A-1} \geq \frac{rf}{1 - f}\left(\frac{2 - 2s}{s}\right)^A
$$

Now we can rearrange and isolate $\sigma_0$ to get a sufficient condition for incremental learning:

$$
\sigma_0 \leq \frac{1}{2}\frac{s}{1 - s}\sigma_j^*\left(\frac{1 - f}{2rf}\frac{s}{1 - s}\right)^{\frac{1}{A-1}}
$$

# NECESSARY CONDITION

A necessary condition for incremental learning will be one which is possibly more relaxed than the exact condition.
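To see the sufficient condition in action, the sketch below (with parameter values of our own choosing: $r = 2$, $s = 0.1$, $f = 0.9$, $c = 0.2$) computes the bound on $\sigma_0$ and checks that an initialization below it indeed lets $\sigma_i$ finish its learning phase before $\sigma_j$ starts its own.

```python
import math

def alpha_time(sigma0, sigma_star, sigma1_star, c, alpha):
    # First t with sigma(t) >= alpha * sigma*, for the exact N = 2 update.
    R = sigma_star / sigma1_star
    sigma, t = sigma0, 0
    while sigma < alpha * sigma_star:
        sigma = sigma * (1 + (c / 2) * R * (1 - sigma / sigma_star)) ** 2
        t += 1
    return t

sigma_i, sigma_j, c, s, f = 1.0, 0.5, 0.2, 0.1, 0.9  # our choices, r = 2
r = sigma_i / sigma_j
R_i, R_j = 1.0, sigma_j / sigma_i
A = math.log(1 - c * R_i + (c * R_i) ** 2) / math.log(1 - c * R_j - (c * R_j) ** 2 / 4)
# The sufficient bound on the initialization scale derived above:
bound = 0.5 * (s / (1 - s)) * sigma_j * ((1 - f) / (2 * r * f) * (s / (1 - s))) ** (1 / (A - 1))
sigma0 = bound / 2  # any initialization below the bound should do
t_f_i = alpha_time(sigma0, sigma_i, sigma_i, c, f)  # time for sigma_i to finish
t_s_j = alpha_time(sigma0, sigma_j, sigma_i, c, s)  # time for sigma_j to start
print(sigma0, t_f_i, t_s_j)
```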
We can find such a condition by requiring the lower bound of $t_f(\sigma_i)$ to be smaller than the upper bound on $t_s(\sigma_j)$ . This becomes the following condition: + +$$ +\frac {\log \left(\frac {1 - f}{f} \frac {\sigma_ {0}}{\sigma_ {i} ^ {*} - \sigma_ {0}}\right)}{\log \left(1 - c R _ {i} - \frac {c ^ {2}}{4} R _ {i} ^ {2}\right)} \leq \frac {\log \left(\frac {1 - s}{s} \frac {\sigma_ {0}}{\sigma_ {j} ^ {*} - \sigma_ {0}}\right)}{\log \left(1 - c R _ {j} + c ^ {2} R _ {j} ^ {2}\right)} +$$ + +Defining $B = \frac{\log\left(1 - cR_{i} - \frac{c^{2}}{4}R_{i}^{2}\right)}{\log\left(1 - cR_{j} + c^{2}R_{j}^{2}\right)}$ and rearranging, we get the following: + +$$ +\log \left(\frac {1 - f}{f} \frac {\sigma_ {0}}{\sigma_ {i} ^ {*} - \sigma_ {0}}\right) \geq B \log \left(\frac {1 - s}{s} \frac {\sigma_ {0}}{\sigma_ {j} ^ {*} - \sigma_ {0}}\right) +$$ + +We may now take the exponent of both sides and rearrange again, remembering $\frac{\sigma_i^*}{\sigma_j^*} = r > 1$ , to get the following condition: + +$$ +\frac {\left(\sigma_ {j} ^ {*} - \sigma_ {0}\right) ^ {B}}{\sigma_ {0} ^ {B - 1} \left(r \sigma_ {j} ^ {*} - \sigma_ {0}\right)} \geq \frac {f}{1 - f} \left(\frac {1 - s}{s}\right) ^ {B} +$$ + +We may now relax the condition further, by removing the $r$ from the denominator of the left-hand side and the $\sigma_0$ from the numerator. 
This gives us the following: + +$$ +\left(\frac {\sigma_ {j} ^ {*}}{\sigma_ {0}}\right) ^ {B - 1} \geq \frac {f}{1 - f} \left(\frac {1 - s}{s}\right) ^ {B} +$$ + +Finally, rearranging gives us the necessary condition: + +$$ +\sigma_ {0} \leq \frac {s}{1 - s} \sigma_ {j} ^ {*} \left(\frac {1 - f}{f} \frac {s}{1 - s}\right) ^ {\frac {1}{B - 1}} +$$ + +![](images/9ce5a2d85e1676b0c1b3a4680abc6d2448b8143da03220d4b62afda17fd14ed3.jpg) + +# C DISCUSSION OF GRADIENT DESCENT FOR GENERAL $N$ + +While we were able to generalize our result to gradient descent for $N = 2$ , our proof technique relies on the ability to get a non-implicit solution for $\sigma(t)$ which we discretized and bounded. This is harder to generalize to larger values of $N$ , where the solution is implicit. Still, we can informally illustrate the effect of depth on the dynamics of gradient descent by approximating the update rule of the values. + +We start by reminding ourselves of the gradient descent update rule for $\sigma$ , for a learning rate $\eta = c\left(\frac{1}{\sigma_1^*}\right)^{2 - \frac{2}{N}}$ : + +$$ +\sigma_ {i} (t + 1) = \sigma_ {i} (t) \Big (1 + \frac {c}{N} \big (\frac {1}{\sigma_ {1} ^ {*}} \big) ^ {2 - \frac {2}{N}} \sigma_ {i} (t) ^ {1 - \frac {2}{N}} (\sigma_ {i} ^ {*} - \sigma_ {i} (t)) \Big) ^ {N} +$$ + +To compare two values in the same scales, we will divide both sides by the optimal value $\sigma_{i}^{*}$ and look at the update step of the ratio $r_i = \frac{\sigma_i(t)}{\sigma_i^*}$ , also denoting $R_{i} = \frac{\sigma_{i}^{*}}{\sigma_{1}^{*}}$ : + +$$ +r _ {i} (t + 1) = r _ {i} (t) \Big (1 + \frac {c}{N} R _ {i} ^ {2 - \frac {2}{N}} r _ {i} (t) ^ {1 - \frac {2}{N}} (1 - r _ {i} (t)) \Big) ^ {N} +$$ + +We will focus on the early stages of the optimization process, where $r \ll 1$ . 
This means we can neglect the $1 - r_{i}(t)$ term in the update step, giving us the approximate update step we will use to compare the general $i,j$ values: + +$$ +r _ {i} (t + 1) \approx r _ {i} (t) \Bigl (1 + \frac {c}{N} R _ {i} ^ {2 - \frac {2}{N}} r _ {i} (t) ^ {1 - \frac {2}{N}} \Bigr) ^ {N} +$$ + +We would like to compare the dynamics of $r_i$ and $r_j$ , which is difficult to do when the recurrence relation isn't solvable. However, we can observe the first iteration of gradient descent and see how depth affects this iteration. Since we are dealing with variables which are ratios of different optimal values, the initial values of $r$ are different. Denoting $\mathbf{r} = \frac{\sigma_i^*}{\sigma_j^*}$ , we can describe the initialization of $r_j$ using that of $r_i$ : + +$$ +r _ {j} (0) = \mathbf {r} r _ {i} (0) +$$ + +Plugging in the initial conditions and noting that $R_{i} = \mathbf{r}R_{j}$ , we get: + +$$ +r _ {i} (1) \approx r _ {i} (0) \Big (1 + \frac {c R _ {i} ^ {2 - \frac {2}{N}}}{N} r _ {i} (0) ^ {1 - \frac {2}{N}} \Big) ^ {N} +$$ + +$$ +r _ {j} (1) \approx r _ {i} (0) \Big (\sqrt [ N ]{\mathbf {r}} + \left(\frac {1}{\mathbf {r}}\right) ^ {\frac {N - 1}{N}} \frac {c R _ {i} ^ {2 - \frac {2}{N}}}{N} r _ {i} (0) ^ {1 - \frac {2}{N}} \Big) ^ {N} +$$ + +We see that the two ratios have a similar update, with the ratio of optimal values playing a role in how large the initial value is versus how large the added value is. 
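Rather than approximating further, one can also iterate the exact normalized update numerically. The sketch below (our own parameter choices: $\sigma_1^* = \sigma_i^* = 1$, $\sigma_j^* = 0.25$) measures how many steps $r_i$ and $r_j$ take to reach $\frac{1}{2}$, and shows the relative gap between them widening with depth.

```python
def time_to_half(r0, R, c, N, cap=2_000_000):
    # Iterate the exact normalized update
    #   r <- r * (1 + (c/N) * R^(2 - 2/N) * r^(1 - 2/N) * (1 - r))^N
    # until r >= 1/2; returns the number of steps taken.
    r, t = r0, 0
    while r < 0.5 and t < cap:
        r = r * (1 + (c / N) * R ** (2 - 2 / N) * r ** (1 - 2 / N) * (1 - r)) ** N
        t += 1
    return t

sigma0, c, ratio = 1e-3, 0.5, 4.0  # ratio is the bold r = sigma_i* / sigma_j*
times = {}
for N in (2, 5):
    t_i = time_to_half(sigma0 / 1.0, 1.0, c, N)           # r_i(0) = sigma0 / sigma_i*
    t_j = time_to_half(sigma0 / 0.25, 1.0 / ratio, c, N)  # r_j(0) = sigma0 / sigma_j*
    times[N] = (t_i, t_j)
gap_2 = times[2][1] / times[2][0]
gap_5 = times[5][1] / times[5][0]
print(times, gap_2, gap_5)
```

The ratio of convergence times grows with the depth $N$, which is the qualitative claim of this section.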
When we use a small learning rate, we have a very small $c$, which means we can make a final approximation and neglect the higher order terms of $c$:

$$
r_i(1) \approx r_i(0) + c R_i^{2 - \frac{2}{N}} r_i(0)^{2 - \frac{2}{N}}
$$

$$
r_j(1) \approx \mathbf{r}\, r_i(0) + \left(\frac{1}{\mathbf{r}}\right)^{\frac{N-1}{N}} c R_i^{2 - \frac{2}{N}} r_i(0)^{2 - \frac{2}{N}}
$$

We can see that while the initial conditions favor $r_j$, the size of the update for $r_i$ is larger by a factor of $\mathbf{r}^{\frac{N-1}{N}}$ when the initialization and learning rates are small. This accumulates throughout the optimization, making $r_i$ eventually converge faster than $r_j$.

The effect of depth here is clear - the deeper the model, the larger the relative step size of $r_i$ and the faster it converges relative to $r_j$.

# D COMPARISON OF THE TOY MODEL AND OMP

Learning our toy model, when its incremental learning is taken to the limit, can be described as an iterative procedure where at every step an additional feature is introduced such that its weight is non-zero, and the model is then optimized over the current set of features. This description also fits the sparse approximation algorithm orthogonal matching pursuit (Pati et al., 1993), where the next feature is greedily chosen to be the one which most improves the current model.

While the toy model and OMP are very different algorithms for learning sparse linear models, we will show empirically that they behave similarly. This allows us to view incremental learning as a continuous-time extension of a greedy iterative algorithm.

To allow for negative weights in our experiments, we augment our toy model as in the toy model of Woodworth et al. (2019).
Our model will have the same induced form as before:

$$
f_{\sigma}(x) = \langle \sigma, x \rangle
$$

However, we parameterize $\sigma$ using $w_+, w_- \in \mathbb{R}^d$ in the following way:

$$
\sigma_i = w_{+,i}^N - w_{-,i}^N
$$

$$
\forall i,\ w_{+,i}(0) = w_{-,i}(0) = \sqrt[N]{\sigma_0}
$$

We can now treat this algorithm as a sparse approximation pursuit algorithm - given a dictionary $D \in \mathbb{R}^{d \times n}$ and an example $x \in \mathbb{R}^d$, we wish to find the sparsest $\alpha$ for which $D\alpha \approx x$, by minimizing the $\ell_0$ norm of $\alpha$ subject to $\|D\alpha - x\|_2^2 = 0$. Under this setting, we can compare OMP to our toy model by comparing the sets of features that the two algorithms choose for a given example and dictionary.

In figure 3 we run such a comparison. Using a dictionary of 1000 atoms and an example of dimensionality 80, sampled from a random hidden vector of a given sparsity $s$, we run both algorithms and record the first $s$ features chosen.

![](images/f35bfe90a23cef30833a27cbaea49938a82b9f519ea1df2bb7df28410cf35b1d.jpg)
Figure 3: Empirical comparison of the dynamics of the toy model to OMP. The toy model has a depth of 5 and was initialized with a scale of 1e-4 and a learning rate of 3e-3. We compare the fraction of agreement between the sets of first $s$ features selected by the two algorithms for every given sparsity level $s$, averaged over 100 experiments (the shaded regions are empirical standard deviations). For example, for sparsity level 3, we look at the sets of first 3 features selected by each algorithm and calculate the fraction of them that appear in both sets.

For every sparsity $s$, we plot the mean fraction of agreement between the sets of features chosen by OMP and the toy model over 100 experiments.
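A minimal version of this comparison can be scripted directly. The sketch below is our own simplification of the experiment: it uses an orthonormal (identity) dictionary instead of the random 1000-atom one, so that OMP reduces to greedily picking the largest remaining target, and checks that gradient descent on the augmented toy model activates features in the same order.

```python
# Gradient descent on the augmented toy model, over an orthonormal design
# (whitened inputs), so the population loss is L = 0.5 * ||sigma - sigma*||^2
# and OMP reduces to greedily picking the largest remaining target.
N = 3                       # depth
targets = [1.0, 0.4, 0.1]   # the planted sparse vector sigma* (our choice)
sigma0, lr = 1e-6, 1e-2

wp = [sigma0 ** (1 / N) for _ in targets]  # w_{+,i}(0) = sigma_0^(1/N)
wm = [sigma0 ** (1 / N) for _ in targets]  # w_{-,i}(0) = sigma_0^(1/N)

def sigma(i):
    return wp[i] ** N - wm[i] ** N

gd_order = []  # order in which features become "active" under gradient descent
for _ in range(200_000):
    for i in range(len(targets)):
        g = sigma(i) - targets[i]          # dL/dsigma_i for the whitened loss
        wp[i] -= lr * g * N * wp[i] ** (N - 1)
        wm[i] += lr * g * N * wm[i] ** (N - 1)
    for i in range(len(targets)):
        if i not in gd_order and abs(sigma(i)) >= 0.5 * abs(targets[i]):
            gd_order.append(i)
    if len(gd_order) == len(targets):
        break

# OMP with an orthonormal dictionary selects features by decreasing |target|.
omp_order = sorted(range(len(targets)), key=lambda i: -abs(targets[i]))
print(gd_order, omp_order)
```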
We see that the two algorithms choose very similar features at the beginning, suggesting that the deep model approximates the discrete behavior of OMP. Only when the number of features increases do we see the behavior of the two models begin to differ, caused by the fact that the toy model has a finite initialization scale and learning rate.

These experiments demonstrate the similarity between the incremental learning of deep models and the discrete behavior of greedy approximation algorithms such as OMP. Adopting this view also allows us to put our finger on another strength of the dynamics of deep models - while greedy algorithms such as OMP require the analytical solution or approximation of every iterate, the dynamics of deep models are able to incrementally learn any differentiable function. For example, looking back at the matrix sensing task and the classification models in section 4, we see that while there isn't an immediate and efficient extension of OMP for these settings, the dynamics of learning deep models extend naturally and exhibit the same incremental learning as OMP.

# E INCREMENTAL LEARNING IN MATRIX SENSING

# E.1 PROOF OF THEOREM 4

Theorem. Minimizing the deep matrix sensing model described in (5) with gradient flow over the depth normalized squared loss (6), with Gaussian inputs and weights initialized as in (5), leads to the following dynamical equations for different values of $N$:

$$
\dot{\sigma}_i(t) = \sigma_i(t)^{2 - \frac{2}{N}}(\sigma_i^* - \sigma_i(t))
$$

Where $\sigma_i$ and $\sigma_i^*$ are the $i$th singular values of $W$ and $W^*$, respectively, corresponding to the same singular vectors.

Proof. We will adapt the proof from Saxe et al. (2013) for multilayer linear networks.
The gradient flow equations for $W_n$, $n \in [N]$ are:

$$
\dot{W}_n = \frac{1}{N} W_{1:n-1}^T (W^* - W) W_{n+1:N}^T
$$

Where we denote $W_{j:k} = \prod_{i=j}^k W_i$.

Since we assumed $W^*$ is (symmetric) PSD, there exists an orthogonal matrix $U$ for which $D^* = UW^*U^T$ where $D^*$ is diagonal. Under the initialization in (3), $U$ diagonalizes all $W_n$ matrices at initialization such that $D_n = UW_nU^T = \sqrt[N]{\sigma_0}\, I$. Making this change of variables for all $W_n$, we get:

$$
\dot{W}_n = \frac{1}{N} U^T D_{1:n-1} U (W^* - W) U^T D_{n+1:N} U
$$

Rearranging, we get a set of decoupled differential equations for the singular values of $W_n$:

$$
\dot{D}_n = \frac{1}{N} D_{1:n-1} (D^* - D) D_{n+1:N}
$$

Note that since these matrices are all diagonal at initialization, the above dynamics ensure that they remain diagonal throughout the optimization. Denoting $\sigma_{n,i}$ as the $i$'th singular value of $W_n$ and $\sigma_i$ as the $i$'th singular value of $W$, we get the following differential equation:

$$
\dot{\sigma}_{n,i} = \frac{1}{N}\left(\sigma_i^* - \sigma_i\right)\prod_{j \neq n}\sigma_{j,i}
$$

Since we assume at initialization that $\forall n, m, i: \sigma_{n,i}(0) = \sigma_{m,i}(0) = \sqrt[N]{\sigma_0}$, the above dynamics are the same for all singular values and we get $\forall n, m, i: \sigma_{n,i}(t) = \sigma_{m,i}(t) = \sqrt[N]{\sigma_i(t)}$. We may now use this to calculate the dynamics of the singular values of $W$, since they are the product of the singular values of all $W_n$ matrices.
Denoting $\sigma_{-n,i} = \prod_{k \neq n}\sigma_{k,i}$ and using the chain rule:

$$
\dot{\sigma}_i = \sum_{n=1}^N \sigma_{-n,i}\,\dot{\sigma}_{n,i} = \sigma_i^{2 - \frac{2}{N}}(\sigma_i^* - \sigma_i)
$$

![](images/2659c12bc2556406c83cf611dab9f371169120a4ab67a319cfc7a88f34570fff.jpg)

# E.2 EMPIRICAL EXAMINATION

Our analytical results are only applicable to the population loss over Gaussian inputs. These conditions are far from the ones used in practice and studied in Arora et al. (2019), where the problem is under-determined and the weights are drawn from a Gaussian distribution with a small variance. To show that our conclusions regarding incremental learning extend qualitatively to more natural settings, we empirically examine the deep matrix sensing model in this natural setting for different depths and initialization scales, as seen in figure 4.

Notice how incremental learning is exhibited even when the number of examples is much smaller than the number of parameters in the model. While we can't rely on our theory to describe the exact dynamics of the optimization for these kinds of under-determined problems, the qualitative conclusions we get from it are still applicable.

Another interesting phenomenon we should note is that once the dataset becomes very small (the second row of the figure), all "currently active" singular values change at the beginning of every new phase (this is best seen in the bottom-right panel). This suggests that since there is more than one optimal solution, once we increase the current rank of our model it may find a solution that has a different set of singular values and vectors, and thus all singular values change at the beginning of a new learning phase.
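The dynamical equations of theorem 4 can be integrated numerically to make these phases visible. The sketch below (forward Euler, with step sizes and targets of our own choosing) measures how long each singular value takes to reach half of its optimum: for $N = 1$ the two times nearly coincide, while for $N = 3$ they separate into distinct phases.

```python
def half_time(sigma_star, N, sigma0=1e-4, dt=0.01, t_max=5000.0):
    # Forward-Euler integration of theorem 4's equation
    #   sigma' = sigma^(2 - 2/N) * (sigma* - sigma),
    # returning the time at which sigma first reaches sigma* / 2.
    sigma, t = sigma0, 0.0
    while sigma < 0.5 * sigma_star and t < t_max:
        sigma += dt * sigma ** (2 - 2 / N) * (sigma_star - sigma)
        t += dt
    return t

ratios = {}
for N in (1, 3):
    t_big, t_small = half_time(1.0, N), half_time(0.1, N)
    ratios[N] = t_small / t_big  # separation between the two learning phases
print(ratios)
```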
This demonstrates the importance of incremental learning for obtaining sparse solutions - once the initialization conditions and depth are such that the learning phases are distinct, gradient descent finds the optimal rank-$i$ solution in every phase $i$. For these dynamics to successfully recover the optimal solution at every phase, the phases need to be far enough apart from each other to allow the singular values and vectors to change before the next phase begins.

![](images/9e58b29b6a4774008b9afdce4740f05fcade44319f986e741fe52c5d55d2bd81.jpg)

![](images/bc4491d71ad91065c12aacaa4510fcd4a9d5a52da5fddf5ad83011a4693d82fe.jpg)

![](images/c1292af638962ca2f6ca6419fc6708f308c2fca74a35ffbc81dd240811db6129.jpg)

![](images/cf9d35234455c40bcd7b856b2c9aff8fd23682a17aaec006b1ec1139377798ab.jpg)

![](images/fbd3d0c69a4b89883d55a1df7fd0bd9ba0181865ba0d4778259f0b68663b1185.jpg)
Figure 4: Evolution of the top-5 singular values of the deep matrix sensing model, with Gaussian initialization whose variance is such that the initial singular values are 1e-4 in expectation. The model and the data matrices are in $\mathbb{R}^{50\times 50}$. The columns correspond to different parameterization depths, while the rows correspond to different dataset sizes. In both cases the problem is under-determined, since the number of examples is smaller than the number of parameters. Since the original matrix is rank-4, we can recognize an unsuccessful recovery when all five singular values are nonzero, as seen clearly for both depth-1 plots.

![](images/16a4e2bf4b35846387751b6798f440ac2627bd559f230bed092f8aed73c87063.jpg)

![](images/e3c3fd2961284304eaacd34ee16c17e2f859983e152d1bf9fc7fe92c14f817d5.jpg)

![](images/01ddecc053506a6e895eeb519d08e29a0cf2b9eed448f9cb62ff2d89b4017648.jpg)

# F INCREMENTAL LEARNING IN QUADRATIC NETWORKS

# F.1 PROOF OF THEOREM 5

Theorem.
Minimizing the quadratic network described and initialized as in (7) with gradient flow over the variance loss defined in (2) with Gaussian inputs leads to the following dynamical equations:

$$
\dot{\sigma}_i(t) = \sigma_i(t)\left(\sigma_i^* - \sigma_i(t)\right)
$$

Where $\sigma_i$ and $\sigma_i^*$ are the $i$th singular values of $W$ and $W^*$, respectively, corresponding to the same singular vectors.

Proof. Our proof will follow similar lines as the analysis of the deep matrix sensing model. Taking the expectation of the variance loss over Gaussian inputs for our model gives us:

$$
\begin{array}{l}
\ell_{var}(W) = \frac{1}{16}\mathbb{E}_x[(\langle W_*^T W_* - W^T W, xx^T\rangle)^2] - \frac{1}{16}\mathbb{E}_x[\langle W_*^T W_* - W^T W, xx^T\rangle]^2 \\
= \frac{1}{8}\|W_*^T W_* - W^T W\|_F^2 + \frac{1}{16}\operatorname{Tr}(W_*^T W_* - W^T W)^2 - \frac{1}{16}\operatorname{Tr}(W_*^T W_* - W^T W)^2 \\
= \frac{1}{8}\|W_*^T W_* - W^T W\|_F^2 \\
\end{array}
$$

Following the gradient flow dynamics over $W$ leads to the following differential equation:

$$
\dot{W} = \frac{1}{2}W(W_*^T W_* - W^T W)
$$

We can now calculate the gradient flow dynamics of $W^T W$ using the chain rule:

$$
\frac{d}{dt}\left(W^T W\right) = W^T\dot{W} + \dot{W}^T W = \frac{1}{2}\left(W^T W\left(W_*^T W_* - W^T W\right) + \left(W_*^T W_* - W^T W\right)W^T W\right) \tag{10}
$$

Now, under our initialization $W_0^T W_0 = \sigma_0 I$, we get that $W^T W$ and $W_*^T W_*$ are simultaneously diagonalizable at initialization by some matrix $U$, such that the following is true for diagonal $D$ and $D^*$:

$$
D(0) = U^T W(0)^T W(0) U
$$

$$
D^* = U^T W_*^T W_* U
$$

Multiplying
equation (10) by $U$ and $U^T$ gives us the following dynamics for the singular values of $W^T W$:

$$
\dot{D} = \frac{1}{2}\Big(D(D^* - D) + (D^* - D)D\Big) = D(D^* - D)
$$

These matrices are diagonal at initialization, and remain diagonal throughout the dynamics (the off-diagonal elements are static according to these equations). We may now look at the dynamics of a single diagonal element, noticing it is equivalent to the depth-2 toy model:

$$
\dot{\sigma}_i = \sigma_i(\sigma_i^* - \sigma_i)
$$

![](images/6b6c5d0040d6fd3863d21ee43f0580d6915c0262f1dffefc362be21ff4823705.jpg)

# F.2 DISCUSSION OF THE VARIANCE LOSS

It may seem that the variance loss is an unnatural loss function to analyze, since it isn't used in practice. While this is true, we will show how the dynamics of this loss function are an approximation of the squared loss dynamics.

We begin by describing the dynamics of both losses, showing how incremental learning can't take place for quadratic networks optimized over the squared loss. Then, we show how adding a global bias to the quadratic network leads to similar dynamics for small initialization scales.

# F.2.1 DYNAMICAL DERIVATION

In the previous section, we derived the differential equations for the singular values of $W^T W$ under the variance loss:

$$
\dot{\sigma}_i = \sigma_i(\sigma_i^* - \sigma_i)
$$

We will now derive similar equations for the squared loss.
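Both the variance-loss and squared-loss computations in this appendix reduce to two standard Gaussian moment identities, $\mathbb{E}[\langle A, xx^T\rangle] = \operatorname{Tr}(A)$ and $\operatorname{Var}(\langle A, xx^T\rangle) = 2\|A\|_F^2$ for symmetric $A$ and $x \sim \mathcal{N}(0, I)$. A quick Monte Carlo check (the matrix $A$ below is an arbitrary choice of ours):

```python
import random

random.seed(0)
d, n = 3, 100_000
A = [[1.0, 0.5, -0.2],
     [0.5, 2.0, 0.3],
     [-0.2, 0.3, -1.0]]  # an arbitrary symmetric matrix (our choice)

trace = sum(A[i][i] for i in range(d))                         # Tr(A) = 2.0
frob2 = sum(A[i][j] ** 2 for i in range(d) for j in range(d))  # ||A||_F^2

vals = []
for _ in range(n):
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    q = sum(A[i][j] * x[i] * x[j] for i in range(d) for j in range(d))
    vals.append(q)  # one sample of <A, x x^T> = x^T A x

mean = sum(vals) / n
var = sum((v - mean) ** 2 for v in vals) / n
print(mean, trace)      # sample mean vs Tr(A)
print(var, 2 * frob2)   # sample variance vs 2 * ||A||_F^2
```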
The scaled squared loss in expectation over the Gaussian inputs is:

$$
\begin{array}{l}
\ell(W) = \frac{1}{16}\mathbb{E}_x[(\langle W_*^T W_* - W^T W, xx^T\rangle)^2] \\
= \frac{1}{8}\left\|W_*^T W_* - W^T W\right\|_F^2 + \frac{1}{16}\operatorname{Tr}\left(W_*^T W_* - W^T W\right)^2 \\
\end{array}
$$

Defining $\Delta = W_*^T W_* - W^T W$ for brevity, the differential equations for $W$ become:

$$
\dot{W} = \frac{1}{2}W\Delta + \frac{1}{4}\operatorname{Tr}(\Delta)W
$$

Calculating the dynamics of $W^T W$ after noting that it is simultaneously diagonalizable with $W_*^T W_*$ (as in the derivation for the variance loss) leads to the following differential equations for the singular values of $W^T W$:

$$
\dot{\sigma}_i = \sigma_i\Big(\sigma_i^* - \sigma_i + \frac{1}{2}\sum_j(\sigma_j^* - \sigma_j)\Big)
$$

We see that the equations are now coupled and so we cannot solve them analytically. Another issue is that for our initialization, all singular values have very similar dynamics at initialization due to the coupling. For example, values corresponding to small optimal singular values grow much faster than in the variance loss dynamics, due to the effect the large optimal singular values have on them.

We see from these equations that we shouldn't expect quadratic networks optimized over the squared loss to exhibit incremental learning behavior. We next show how adding a global bias to our model can help.

# F.2.2 VARIANCE LOSS APPROXIMATES THE SQUARED LOSS

To see how the variance loss can have dynamics resembling those of the squared loss, we will add a global (trainable) bias to our model.
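Before introducing the bias, the contrast just derived can be confirmed with a quick forward-Euler integration (our own parameter values): by the time the larger value reaches $90\%$ of its optimum, the smaller one has barely moved under the decoupled variance-loss dynamics, but has grown substantially under the coupled squared-loss dynamics.

```python
def run(coupled, targets, sigma0=1e-6, dt=1e-3):
    # Forward-Euler integration of either the coupled squared-loss dynamics
    #   sigma_i' = sigma_i * (sigma_i* - sigma_i + S/2),  S = sum_j (sigma_j* - sigma_j),
    # or the decoupled variance-loss dynamics (the S/2 term dropped), until the
    # large value reaches 90% of its optimum; returns the normalized small value.
    s = [sigma0] * len(targets)
    while s[0] < 0.9 * targets[0]:
        gap = sum(t - v for t, v in zip(targets, s))
        s = [v + dt * v * ((t - v) + (0.5 * gap if coupled else 0.0))
             for v, t in zip(s, targets)]
    return s[1] / targets[1]

targets = [1.0, 0.1]
r_coupled = run(True, targets)
r_decoupled = run(False, targets)
print(r_coupled, r_decoupled)
```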
This means our model is now parameterized by $W \in \mathbb{R}^{d \times d}$ and a scalar $b \in \mathbb{R}$:

$$
f_{W,b}(x) = \left\langle W^T W, xx^T\right\rangle + b
$$

We may now analyze gradient flow of the squared loss. Following the same methods as before, this leads us to the following differential equations:

$$
\dot{\sigma}_i = \sigma_i\left(\sigma_i^* - \sigma_i + \frac{1}{2}\sum_j\left(\sigma_j^* - \sigma_j\right) + \frac{1}{2}\left(b^* - b\right)\right)
$$

$$
\dot{b} = \sum_j\left(\sigma_j^* - \sigma_j\right) + b^* - b
$$

Notice how, if $b$ is at its optimum at a given time $(b = \sum_j(\sigma_j^* - \sigma_j) + b^*)$, the dynamics of $\sigma_i$ align with those of the variance loss. Alternatively, when $b = b^*$, we recover the dynamics of the squared loss without the global bias term.

To convince ourselves that the dynamics of this model resemble those of the variance loss, we would need to explain why the global bias is at its optimum "most of the time", such that the singular values don't change much during the times when it is not at its optimum.

Observing the differential equations for $b$ and for $\sigma_i$, we see they are similar (if we ignore the $\sigma_i^* - \sigma_i$ term, which doesn't change the order of magnitude of the entire expression when there haven't been many learning phases yet). The only difference is a multiplication by $\sigma_i$. This means that we may informally write:

$$
\frac{\dot{\sigma}_i(t)}{\dot{b}(t)} \approx \sigma_i(t)
$$

Since at initialization, and until the learning phase of $\sigma_i$ takes place, we have $\sigma_i(t) \ll 1$, we see that the global bias optimizes much faster than the singular values for which the learning phase hasn't begun yet.
This means these singular values will remain small during the times in which the bias isn't optimal, and so incremental learning can still take place (assuming a small enough initialization).

![](images/1de042db603b1e27c46b1f99edbe53e33a80a4800363290b9f18f46eb4fd0bfd.jpg)

![](images/ad820ccdb9ac995f818980c17fe23dea74e6a6e6fc60adac23af42135c8ee7eb.jpg)

![](images/97bb0d3e19a3bf5be1ecab510d747359f6696dafa9fcec6fb2db4ec3675d2802.jpg)

![](images/2a492a124b55b29f1b7dcf0a8a0219dfabc5f7cb275dc6598c86bb2ca440d237.jpg)

![](images/339526b731e7aeeaeacdbba80cc6d725ec2ffea2c5f9c7aa106019a59d04d5db.jpg)
Figure 5: Quadratic model's evolution of top-5 singular values for a rank-4 labeling function. The rows correspond to whether or not a global bias is introduced to the model. The first two columns are for a large dataset (one optimal solution) and the last two columns are for a small dataset (under-determined problem). When a bias is introduced, it is initialized to its optimal value at initialization. Note how, without the bias, the singular values are learned together and there is over-shooting of the optimal singular value caused by the coupling of the dynamics of the singular values. For the small datasets, we see that the model with no bias reaches a solution with a larger rank. Once a global bias is introduced, the dynamics become more incremental, as in the analysis of the variance loss. Note that in this case the solution obtained for the small dataset is the optimal low-rank solution.

![](images/f955b3d6e041bb9c2422c11cfd2467023a3482004f3b3587db19cd35fcf84669.jpg)

![](images/c2ce3df9ee1c447d4dca755eff0ee5c733fbd905e584adf5cd2e816a7cafe323.jpg)

![](images/5654b15ae8ec4db2c141995e20a242a815bea61f5f610429789c00cc3b0c0534.jpg)
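This restored incrementality can be simulated directly. The sketch below (our own parameter values, with the bias initialized at its optimum as in figure 5) Euler-integrates the equations above, with and without the global bias, and compares how far the small singular value has moved once the large one finishes its phase.

```python
def run(with_bias, targets, b_star=0.0, sigma0=1e-6, dt=1e-3):
    # Forward-Euler integration of the dynamics above:
    #   sigma_i' = sigma_i * (sigma_i* - sigma_i + S/2 + (b* - b)/2)
    #   b'       = S + b* - b,  with S = sum_j (sigma_j* - sigma_j).
    # Without the bias, the (b* - b)/2 term is dropped (plain squared loss).
    s = [sigma0] * len(targets)
    b = sum(t - v for t, v in zip(targets, s)) + b_star  # optimal b at t = 0
    while s[0] < 0.9 * targets[0]:
        S = sum(t - v for t, v in zip(targets, s))
        drive = 0.5 * S + (0.5 * (b_star - b) if with_bias else 0.0)
        s = [v + dt * v * ((t - v) + drive) for v, t in zip(s, targets)]
        if with_bias:
            b += dt * (S + b_star - b)
    return s[1] / targets[1]

targets = [1.0, 0.1]
r_bias = run(True, targets)
r_no_bias = run(False, targets)
print(r_bias, r_no_bias)
```

With the bias tracking its optimum, the second value stays dormant through the first learning phase, whereas without it the coupling drags the second value upward early.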
Under these considerations, we say that the dynamics of the squared loss for a quadratic network with an added global bias resemble the idealized dynamics of the variance loss for a depth-2 linear model, which we analyze formally in the paper. In figure 5 we experimentally show how adding a bias to a quadratic network does lead to incremental learning similar to the depth-2 toy model.

# G INCREMENTAL LEARNING IN CLASSIFICATION

# G.1 DIAGONAL NETWORKS

In section 4.3 we viewed our toy model as a special case of the deep diagonal networks described in Gunasekar et al. (2018), expected to be biased towards sparse solutions. Figure 6 shows the dynamics of the largest values of $\sigma$ for different depths of the model. We see that the same type of incremental learning we saw in earlier models exists here as well - the features are learned one by one in deeper models, resulting in a sparse solution. The leftmost panel shows how the initialization scale plays a role here as well, with the solution being more sparse when the initialization is small. We should note that these results do not contradict the results of Gunasekar et al. (2018) (from which we would expect the initialization not to matter), since their results deal with the solution at $t \to \infty$.

# G.2 CONVOLUTIONAL NETWORKS - PRELIMINARIES

The linear circular-convolutional network of Gunasekar et al. (2018) deals with one-dimensional convolutions with the same number of outputs as inputs, such that the mapping from one hidden layer to the next is parameterized by $w_n$ and defined to be:

$$
h_n[i] = \sum_{k=0}^{d-1} w_n[k]\, h_{n-1}[(i + k) \bmod d] = (h_{n-1} \star w_n)[i]
$$

![](images/7194dfa4318f98eee79411f13b3ac1cb4cc29510ab27f7839fc8f19348e1c16b.jpg)
Figure 6: Incremental learning in binary classification. A model as in section 4.3 is trained over 200 i.i.d. random Gaussian examples, where $d = 100$.
The data is labeled by a weight vector with 4 nonzero values, making the problem realizable with a sparse solution while the max-margin solution isn't sparse. The left panel describes the obtained solution's correlation with the sparse labeling vector for different depths and initializations. The results are averaged over 100 experiments, with shaded regions denoting empirical standard deviations. We see that depth-1 models reach results similar to the max-margin SVM solution as predicted by Gunasekar et al. (2018), while deeper models are highly correlated with the sparse solution, with this correlation increasing when the initialization scale is small. The other panels show the evolution of the absolute values of the top-5 weights of $\sigma$ for the smallest initialization scale. Note that as we increase the depth, incremental learning is clearly presented.

![](images/bc25857501b4df6b3084c4ac4399460ca47943c29b593f7acd28c16f88faa6e2.jpg)

![](images/7eedde4471f169998da12735a91e9f6bc3183153cd18b0c42e8a1f25f0569cb5.jpg)

![](images/c4bfb1fdc718437091a9d748ed05db686c946951cf309fc3fa6818c4115f5984.jpg)

The final layer is a fully connected layer parameterized by $w_N \in \mathbb{R}^d$, such that the final model can be written in the following way:

$$
f_{\sigma}(x) = \left(\left(\left(x \star w_1\right) \star w_2\right) \cdots \star w_{N-1}\right)^T w_N \tag{11}
$$

Lemma 3 from Gunasekar et al. (2018) shows how we can relate the Fourier coefficients of the weight vectors to the Fourier coefficients of the linear model induced by the model:

Lemma. For the circular-convolutional model as in (11):

$$
\hat{\sigma} = \operatorname{diag}(\hat{w}_1)\dots\operatorname{diag}(\hat{w}_{N-1})\hat{w}_N,
$$

where for $n = 1,2,\dots,N$, $\hat{w}_n \in \mathbb{C}^d$ are the Fourier coefficients of the parameters $w_n \in \mathbb{R}^d$.
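The lemma is easy to verify numerically: probe the network with the standard basis to recover the induced predictor $\sigma$, and compare its DFT to the elementwise product of the layers' DFTs. The sketch below is our own (using a naive $O(d^2)$ DFT, with dimensions and random weights of our choosing):

```python
import cmath
import random

random.seed(1)
d, N = 8, 3
ws = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(N)]

def corr(h, w):
    # The circular operation defined above: (h * w)[i] = sum_k w[k] h[(i+k) mod d].
    return [sum(w[k] * h[(i + k) % d] for k in range(d)) for i in range(d)]

def dft(v):
    return [sum(v[i] * cmath.exp(-2j * cmath.pi * f * i / d) for i in range(d))
            for f in range(d)]

def model(x):
    # f_sigma(x) = ((x * w_1) * ... * w_{N-1})^T w_N, as in (11).
    h = x
    for w in ws[:-1]:
        h = corr(h, w)
    return sum(hi * wi for hi, wi in zip(h, ws[-1]))

# Recover the induced linear predictor sigma by probing with the standard basis.
sigma = [model([1.0 if j == i else 0.0 for j in range(d)]) for i in range(d)]

sigma_hat = dft(sigma)
prod = [1.0 + 0.0j] * d
for w in ws:
    prod = [p * c for p, c in zip(prod, dft(w))]

err = max(abs(a - b) for a, b in zip(sigma_hat, prod))
print(err)
```

The mismatch is at the level of floating-point roundoff, confirming that the network's Fourier coefficients multiply across layers - which is exactly why amplitudes are attenuated or amplified incrementally in the frequency domain.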
+ +This lemma connects the convolutional network to the diagonal network, and thus we should expect the same incremental learning of the diagonal values to be exhibited by the Fourier coefficients of the convolutional network. + +# G.3 CONVOLUTIONAL NETWORKS - EMPIRICAL EXAMINATION + +In figure 7 we show the same plots as in figure 6, but for the Fourier coefficients of the convolutional model. We see that even when the model is far from the toy parameterization (there is no weight sharing and the initialization is with random Gaussian weights), incremental learning is still clearly visible in the dynamics of the model. We also see that the inherent bias towards sparse solutions found in Gunasekar et al. (2018) results from the dynamics of the model - small amplitudes are attenuated while large ones are amplified. + +![](images/321c71e75a933caf18fa6f484aa42f5a9908a0576eff2796f3cf2c8ec1d07741.jpg) +Figure 7: Incremental learning in convolutional networks. A model as in appendix G is trained over 200 i.i.d. random Gaussian examples, where $d = 100$ . The weights are initialized randomly and the data is labeled by a weight vector with 4 nonzero frequencies, making the problem realizable with a sparse solution in the frequency domain. The left panel describes the obtained solution's correlation in the frequency domain with the sparse labeling vector for different depths and initializations. The results are averaged over 9 experiments, with shaded regions denoting empirical standard deviations. We see that depth-1 models reach results similar to the max-margin SVM solution, while deeper models are highly correlated with the optimal sparse solution. The other panels show the evolution of the amplitudes of the top-5 frequencies of $\sigma$ for the smallest initialization scale. Note that as we increase the depth, incremental learning becomes clearly visible.
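The attenuate-small/amplify-large dynamics above can be illustrated with a toy gradient-descent simulation. This is our sketch, not the paper's experiment: a depth-$N$ "diagonal" model with effective weights $w = u^N$ (elementwise), trained on a sparse regression target; all constants are illustrative.

```python
import numpy as np

# Toy illustration of incremental learning: effective weights w = u**N,
# trained by gradient descent on 0.5 * ||w - w_star||^2 toward a sparse
# target. The gradient through u**N carries a factor N * u**(N-1), so small
# coordinates barely move at first and features are learned one at a time.
d, N, lr, steps = 5, 3, 0.05, 5000
w_star = np.zeros(d)
w_star[0], w_star[1] = 1.0, 0.5    # two active coordinates
u = np.full(d, 1e-2)               # small, uniform initialization
history = []
for _ in range(steps):
    w = u ** N
    u = u - lr * (w - w_star) * N * u ** (N - 1)   # chain rule through u**N
    history.append(w)

print(history[999][:2])   # the large coordinate is fitted first
print(history[-1][:2])    # eventually both are fitted; the rest stay near 0
```

Around step 1000 the coordinate with target 1.0 is essentially fitted while the one with target 0.5 has barely moved, mirroring the one-by-one learning of amplitudes seen in figures 6 and 7.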
\ No newline at end of file diff --git a/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/images.zip b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c4fd1f7055c5ff892597c65e8756023f946c0b38 --- /dev/null +++ b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25f3342958342ce02b38edeb642aec7214b95b0e18e22b1cdc00de9de16e42d7 +size 1026613 diff --git a/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/layout.json b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..846f012cc1899e21ad040501ed0eed64ae432e3a --- /dev/null +++ b/theimplicitbiasofdepthhowincrementallearningdrivesgeneralization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee52ed883d2763c34665f6ea272df6d27f434a480d8d6c7b3ac07b8935bb740e +size 968660 diff --git a/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_content_list.json b/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..42b0bb681f43f8107b7fd322f47b7b50e40a764e --- /dev/null +++ b/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0e506e0891ffb2dc24c3bfb130112a7cbc31b823e67f56a98ae426c9fb1d4d7 +size 113117 diff --git a/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_model.json b/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_model.json new file mode 100644 index 0000000000000000000000000000000000000000..091d746eedf7b61fc55e7702516b2ee42a2915cb --- /dev/null +++ 
b/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e447a8138af986b10f889c4e80eaf4bf1a74245d171055c24339887906777a49 +size 142929 diff --git a/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_origin.pdf b/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..79b77f8f08dc4de69bfeb08fdc6d43d095c3ec6a --- /dev/null +++ b/thelocalelasticityofneuralnetworks/7f024bf4-ef96-4e0e-88c2-24766a9b8c91_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28a99de4df7dffd5c0d0e4b0c858fd479c3a1b9bb091eb844b92c200f5bf243f +size 1937752 diff --git a/thelocalelasticityofneuralnetworks/full.md b/thelocalelasticityofneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..cdd2267974cff569da7356c123585bfb211e7532 --- /dev/null +++ b/thelocalelasticityofneuralnetworks/full.md @@ -0,0 +1,470 @@ +# THE LOCAL ELASTICITY OF NEURAL NETWORKS + +Hangfeng He & Weijie J. Su + +University of Pennsylvania + +Philadelphia, PA + +hangfeng@seas.upenn.edu, suw@wharton.upenn.edu + +# ABSTRACT + +This paper presents a phenomenon in neural networks that we refer to as local elasticity. Roughly speaking, a classifier is said to be locally elastic if its prediction at a feature vector $\pmb{x}^{\prime}$ is not significantly perturbed, after the classifier is updated via stochastic gradient descent at a (labeled) feature vector $\pmb{x}$ that is dissimilar to $\pmb{x}^{\prime}$ in a certain sense. This phenomenon is shown to persist for neural networks with nonlinear activation functions through extensive simulations on real-life and synthetic datasets, whereas this is not observed in linear classifiers. 
In addition, we offer a geometric interpretation of local elasticity using the neural tangent kernel (Jacot et al., 2018). Building on top of local elasticity, we obtain pairwise similarity measures between feature vectors, which can be used for clustering in conjunction with $K$ -means. The effectiveness of the clustering algorithm on the MNIST and CIFAR-10 datasets in turn corroborates the hypothesis of local elasticity of neural networks on real-life data. Finally, we discuss some implications of local elasticity to shed light on several intriguing aspects of deep neural networks. + +# 1 INTRODUCTION + +Neural networks have been widely used in various machine learning applications, achieving comparable or better performance than existing methods without requiring highly engineered features (Krizhevsky et al., 2012). However, neural networks have several intriguing aspects that defy conventional views of statistical learning theory and optimization, thereby hindering the architecture design and interpretation of these models. For example, despite having more parameters than training examples, deep neural networks generalize well without an explicit form of regularization (Zhang et al., 2017; Neyshabur et al., 2017; Arora et al., 2019a). Zhang et al. (2017) also observe that neural networks can perfectly fit corrupted labels while maintaining a certain amount of generalization power. + +In this paper, we complement this line of findings by proposing a hypothesis that fundamentally distinguishes neural networks from linear classifiers. This hypothesis is concerned with the dynamics of training neural networks using stochastic gradient descent (SGD). Indeed, the motivation is to address the following question: + +How does the update of weights using SGD at an input $\pmb{x}$ and its label $y$ impact the prediction of the neural networks at another input $\pmb{x}^{\prime}$ ? + +Taking this dynamic perspective, we make the following three contributions.
+ +First, we hypothesize that neural networks are locally elastic in the following sense: an extensive set of experiments on synthetic examples demonstrates that the impact on the prediction of $\pmb{x}^{\prime}$ is significant if $\pmb{x}^{\prime}$ is in a local vicinity of $\pmb{x}$ , and the impact diminishes as $\pmb{x}^{\prime}$ moves away from $\pmb{x}$ in an elastic manner. In contrast, local elasticity is not observed in linear classifiers due to the leverage effect (Weisberg, 2005). Thus, at a high level, local elasticity must be inherently related + +![](images/f56856c3b380a8cc54332554e8e9d3b84780d7c63e738593df28de8d7efe2c4a.jpg) +(a) The torus. + +![](images/69fc319b1feae2545dd0a31d455dc7d5ffbb51a56dc1a7513ca8e905dd59cb58.jpg) +(b) Two-layer nets fitting the torus. + +![](images/f70a291e316e35491b4fcfd1f3241ad528f62a724817d2dfa57577ac8594f30f.jpg) +(c) Two-layer linear nets fitting the torus. + +![](images/1dde7d80a032d1e5b09339d1cca1111238889436a459b269dc64ba63cf5bb917.jpg) +(d) The two folded boxes. + +![](images/e722003eb93ace9a6df0fa1081d50e4a243912c98f20300ddc3bb68c96f76568.jpg) +(e) Three-layer nets fitting the boxes. +(f) Three-layer linear nets fitting the boxes. + +![](images/4f5b3f8f39011546a24544a5b5dbecd1728024dad591d56a83d0a9cf7ee17bc1.jpg) +Figure 1: Comparisons between ReLU neural networks and linear neural networks in terms of local elasticity. In the left column, the red points form one class and the blue points form the other class. The linear nets are of the same sizes as their non-linear counterparts. The details on how to construct the torus and boxes can be found in Appendix A.1 and the network architectures are described in Appendix A.4. During the training process of the neural networks, we plot the geodesic distance (see more details in Appendix A.1) between two blue points $x$ and $x'$ , and their relative prediction changes (see its definition in Equation (2)) in (b), (c), (e), and (f).
The correlations of (b), (c), (e), and (f) are $-0.97$ , $-0.48$ , $-0.92$ , and $-0.14$ , respectively. (b) and (e) show that the distance and the relative change exhibit a decreasing monotonic relationship, thereby confirming local elasticity, while no monotonic relationship is found in (c) and (f). + +to the nonlinearity of neural networks and the SGD used in updating the weights. This phenomenon is illustrated by Figure 1. Additional synthetic examples and ImageNet (Deng et al., 2009) with a pretrained ResNet (He et al., 2016) in Appendix A.2 further confirm local elasticity. For completeness, we remark that the notion of local elasticity seems, on the surface, related to influence functions (Koh & Liang, 2017). The fundamental distinction, however, is that the former takes into account the dynamics of the training process whereas the latter does not. See Section 2 for a formal introduction of the notion of local elasticity. + +Furthermore, we devise a clustering algorithm by leveraging the local elasticity of neural networks. In short, this algorithm records the relative change of the prediction on a feature vector to construct a similarity matrix of the training examples. Next, the similarity matrix is used by, for example, $K$ -means to partition the points into different clusters. The experiments on MNIST (LeCun, 1998) and CIFAR-10 (Krizhevsky, 2009) demonstrate the effectiveness of this local elasticity based clustering algorithm. For two superclasses (e.g., mammal and vehicle), the algorithm is capable of partitioning the mammal class into cat and dog, and the vehicle class into car and truck. These empirical results, in turn, corroborate our hypothesis that neural networks (with nonlinear activation) are locally elastic. See the description of the algorithm in Section 3 and experimental results in Section 4. + +Finally, this paper discusses implications of local elasticity for the memorization and generalization of neural networks, among other aspects.
In this spirit, this work seeks to shed light on some intriguing aspects of neural networks. Intuitively, the locality part of this property suggests that the neural networks can efficiently fit the label of an input without significantly affecting most examples that have been well fitted. This property is akin to the nearest neighbors algorithm (see, e.g., Papernot & McDaniel (2018)). Meanwhile, the elasticity part implies that the prediction surface is + +![](images/cae22b25d2d7fa4b20390b806ccfdb73eb3129d02f941f69ff9bca22a4140c91.jpg) +(a) Linear classifier updated by SGD. + +![](images/65101cc5450fa8f70ab1fc297d5b225f8f85dc9dab6f5664cbe40a9fdacdd346.jpg) +(b) Neural networks updated by SGD. +Figure 2: An illustration of linear regression and neural networks updated via SGD. The prediction of $x'$ changes a lot after an SGD update on $x$ in the linear case, though $x'$ is far away from $x$ . In contrast, the change in the prediction at $x'$ is rather small in the neural networks case. + +likely to remain smooth in the training process, in effect regularizing the complexity of the nets in a certain sense. These implications are discussed in detail in Section 5. + +# 1.1 RELATED WORK + +There has been a line of work probing the geometric properties of the decision boundary of neural networks. Montufar et al. (2014) investigate the connection between the number of linear regions and the depth of a ReLU network and argue that a certain intrinsic rigidity of the linear regions may improve the generalization. Fawzi et al. (2017; 2018) observe that the learned decision boundary is flat along most directions for natural images. See Hanin & Rolnick (2019) for the latest development along this line and Fort et al. (2019) for a dynamic perspective on the landscape geometry. 
+ +In another related direction, much effort has been expended on the expressivity of neural networks, starting from universal approximation theorems for two-layer networks (Cybenko, 1989; Hornik et al., 1989; Barron, 1993). Lately, deep neural networks have been shown to possess better representational power than their shallow counterparts (Delalleau & Bengio, 2011; Telgarsky, 2016; Eldan & Shamir, 2016; Mhaskar & Poggio, 2016; Yarotsky, 2017; Chen et al., 2019). From a nonparametric viewpoint, approximation risks are obtained for neural networks under certain smooth assumptions on the regression functions (Schmidt-Hieber, 2017; Suzuki, 2018; Klusowski & Barron, 2018; Liang, 2018; Bauer & Kohler, 2019; E et al., 2019a). + +A less related but more copious line of work focuses on optimization for training neural networks. A popular approach to tackling this problem is to study the optimization landscape of neural networks (Choromanska et al., 2015; Soudry & Hoffer, 2017; Zhou & Liang, 2017; Safran & Shamir, 2018; Du & Lee, 2018; Liang et al., 2018; Soltanolkotabi et al., 2018). Another approach is to analyze the dynamics of specific optimization algorithms applied to neural networks (Tian, 2017; Li & Yuan, 2017; Soltanolkotabi, 2017; Brutzkus & Globerson, 2017; Du et al., 2018; Li & Liang, 2018; Allen-Zhu et al., 2018; Zou et al., 2018; Du et al., 2019; Allen-Zhu et al., 2019). Alternatively, researchers have considered the evolution of gradient descent on two-layer neural networks using optimal transport theory (Song et al., 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2019; Rotskoff & Vanden-Eijnden, 2018). More recently, there is a growing recognition of intimate similarities between over-parameterized neural networks and kernel methods from an optimization perspective (Zhang et al., 2017; Daniely, 2017; Belkin et al., 2018; Jacot et al., 2018; Yang, 2019; Arora et al., 2019b; Lee et al., 2019; E et al., 2019b). 
For completeness, some work demonstrates a certain superiority of neural networks in generalization over the corresponding kernel methods (Wei et al., 2018; Allen-Zhu & Li, 2019; Ghorbani et al., 2019). + +# 2 LOCAL ELASTICITY + +This section formalizes the notion of local elasticity and proposes associated similarity measures that will be used for clustering in Section 3. Denote by $\pmb{x} \in \mathbb{R}^d$ and $y$ the feature vector and the label of an instance $(\pmb{x}, y)$ , respectively. Let $f(\pmb{x}, \pmb{w})$ be the prediction with model parameters $\pmb{w}$ , and write $\mathcal{L}(f, y)$ for the loss function. Consider using SGD to update the current parameters $\pmb{w}$ using the instance $(\pmb{x}, y)$ : + +$$ +\boldsymbol {w} ^ {+} = \boldsymbol {w} - \eta \frac {\mathrm {d} \mathcal {L} (f (\boldsymbol {x} , \boldsymbol {w}) , y)}{\mathrm {d} \boldsymbol {w}} = \boldsymbol {w} - \eta \frac {\partial \mathcal {L} (f (\boldsymbol {x} , \boldsymbol {w}) , y)}{\partial f} \cdot \frac {\partial f (\boldsymbol {x} , \boldsymbol {w})}{\partial \boldsymbol {w}}. \tag {1} +$$ + +In this context, we say that the classifier $f$ is locally elastic at parameters $\mathbf{w}$ if $|f(\mathbf{x}', \mathbf{w}^{+}) - f(\mathbf{x}', \mathbf{w})|$ , the change in the prediction at a test feature vector $\mathbf{x}'$ , is relatively large when $\mathbf{x}$ and $\mathbf{x}'$ are similar/close, and vice versa. Here, by similar/close, we mean two input points $\mathbf{x}$ and $\mathbf{x}'$ share many characteristics or are connected by a short geodesic path in the feature space. Intuitively, $\mathbf{x}$ and $\mathbf{x}'$ are similar if $\mathbf{x}$ denotes an Egyptian cat and $\mathbf{x}'$ denotes a Persian cat; they are dissimilar if $\mathbf{x}$ denotes a German shepherd and $\mathbf{x}'$ denotes a trailer truck. 
+ +For illustration, Figure 2(a) shows that the (linear) classifier is not locally elastic since the SGD update on $x$ leads to significant impact on the prediction at $x'$ , though $x'$ is far from $x$ ; on the other hand, the (nonlinear) classifier in Figure 2(b) is locally elastic since the change at $x'$ is relatively small compared with that at $x$ . In Section 2.3, we provide some intuition why nonlinearity matters for local elasticity in two-layer neural nets. + +# 2.1 RELATIVE SIMILARITY + +The essence of local elasticity is that the change in the prediction has an (approximate) monotonic relationship with the similarity of feature vectors. Therefore, the change can serve as a proxy for the similarity of two inputs $\mathbf{x}$ and $\mathbf{x}'$ : + +$$ +S _ {\mathrm {r e l}} \left(\boldsymbol {x}, \boldsymbol {x} ^ {\prime}\right) := \frac {\left| f \left(\boldsymbol {x} ^ {\prime} , \boldsymbol {w} ^ {+}\right) - f \left(\boldsymbol {x} ^ {\prime} , \boldsymbol {w}\right) \right|}{\left| f \left(\boldsymbol {x} , \boldsymbol {w} ^ {+}\right) - f (\boldsymbol {x} , \boldsymbol {w}) \right|}. \tag {2} +$$ + +In the similarity measure above, $|f(\pmb{x}, \pmb{w}^{+}) - f(\pmb{x}, \pmb{w})|$ is the change in the prediction of $(\pmb{x}, y)$ used for the SGD update, and $|f(\pmb{x}', \pmb{w}^{+}) - f(\pmb{x}', \pmb{w})|$ is the change in the output of a test input $\pmb{x}'$ . Note that $S_{\mathrm{rel}}(\pmb{x}, \pmb{x}')$ is not necessarily symmetric and, in particular, the definition depends on the weights $\pmb{w}$ . Our experiments in Section 4 suggest the use of a near-optimal $\pmb{w}$ (which is also the case in the second definition of local elasticity in Section 2.2). In the notion of local elasticity, locality suggests that this ratio is large when $\pmb{x}$ and $\pmb{x}'$ are close, and vice versa, while elasticity means that this similarity measure decreases gradually and smoothly, as opposed to abruptly, when $\pmb{x}'$ moves away from $\pmb{x}$ . 
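The relative similarity in Equation (2) is straightforward to compute. Below is a minimal NumPy sketch (ours; the toy two-layer ReLU network, the $\ell_2$ loss, and all inputs are illustrative, not the paper's experimental setup) in which a single SGD step updates the hidden weights only:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, eta = 10, 512, 1e-4
W = rng.standard_normal((m, d)) / np.sqrt(d)   # hidden weights
a = rng.choice([-1.0, 1.0], size=m)            # fixed output layer

def f(x, W):
    return a @ np.maximum(W @ x, 0.0)          # two-layer ReLU network

def sgd_step(W, x, y):
    # l2 loss L = 0.5 * (f - y)^2; one SGD step on the hidden weights
    act = (W @ x > 0).astype(float)            # ReLU activation pattern at x
    return W - eta * (f(x, W) - y) * (a * act)[:, None] * x[None, :]

def s_rel(x, xp, y=1.0):
    # Equation (2): change at xp relative to the change at the updated point x
    Wp = sgd_step(W, x, y)
    return abs(f(xp, Wp) - f(xp, W)) / abs(f(x, Wp) - f(x, W))

x = rng.standard_normal(d)
near = x + 0.1 * rng.standard_normal(d)   # a similar input
far = -x + 0.1 * rng.standard_normal(d)   # a dissimilar input
print(s_rel(x, near), s_rel(x, far))      # the first is typically much larger
```

By construction `s_rel(x, x)` equals 1, and for this ReLU net the dissimilar input shares almost no activated neurons with `x`, so its relative change is close to zero.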
Having evaluated this similarity measure for (almost) all pairs by performing SGD updates for several epochs, we can obtain a similarity matrix, whose rows are each regarded as a feature vector of its corresponding input. This can be used for clustering followed by $K$ -means. + +# 2.2 KERNELIZED SIMILARITY + +Next, we introduce a different similarity measure that manifests local elasticity by making a connection to the neural tangent kernel (Jacot et al., 2018). Taking a small learning rate $\eta$ , for a new feature point $\pmb{x}^{\prime}$ , the change in its prediction due to the SGD update in Equation (1) approximately satisfies: + +$$ +\begin{array}{l} f \left(\boldsymbol {x} ^ {\prime}, \boldsymbol {w} ^ {+}\right) - f \left(\boldsymbol {x} ^ {\prime}, \boldsymbol {w}\right) = f \left(\boldsymbol {x} ^ {\prime}, \boldsymbol {w} - \eta \frac {\partial \mathcal {L}}{\partial f} \cdot \frac {\partial f (\boldsymbol {x} , \boldsymbol {w})}{\partial \boldsymbol {w}}\right) - f \left(\boldsymbol {x} ^ {\prime}, \boldsymbol {w}\right) \\ \approx f (\boldsymbol {x} ^ {\prime}, \boldsymbol {w}) - \left\langle \frac {\partial f (\boldsymbol {x} ^ {\prime} , \boldsymbol {w})}{\partial \boldsymbol {w}}, \eta \frac {\partial \mathcal {L}}{\partial f} \cdot \frac {\partial f (\boldsymbol {x} , \boldsymbol {w})}{\partial \boldsymbol {w}} \right\rangle - f (\boldsymbol {x} ^ {\prime}, \boldsymbol {w}) \\ = - \eta \frac {\partial \mathcal {L}}{\partial f} \left\langle \frac {\partial f (\boldsymbol {x} ^ {\prime} , \boldsymbol {w})}{\partial \boldsymbol {w}}, \frac {\partial f (\boldsymbol {x} , \boldsymbol {w})}{\partial \boldsymbol {w}} \right\rangle . \\ \end{array} +$$ + +The factor $-\eta \frac{\partial \mathcal{L}}{\partial f}$ does not involve $\pmb{x}'$ , just as the denominator $|f(\pmb{x}, \pmb{w}^{+}) - f(\pmb{x}, \pmb{w})|$ in Equation (2). 
This observation motivates an alternative definition of the similarity: + +$$ +S _ {\ker} \left(\boldsymbol {x}, \boldsymbol {x} ^ {\prime}\right) := \frac {f \left(\boldsymbol {x} ^ {\prime} , \boldsymbol {w}\right) - f \left(\boldsymbol {x} ^ {\prime} , \boldsymbol {w} ^ {+}\right)}{\eta \frac {\partial \mathcal {L} (f (\boldsymbol {x} , \boldsymbol {w}) , y)}{\partial f}}. \tag {3} +$$ + +In the case of the $\ell_2$ loss $\mathcal{L}(f,y) = \frac{1}{2} (f - y)^2$ , for example, $S_{\mathrm{ker}}(\pmb{x},\pmb{x}^{\prime}) = \frac{f(\pmb{x}^{\prime},\pmb{w}) - f(\pmb{x}^{\prime},\pmb{w}^{+})}{\eta(f - y)}$ . In Section 3, we apply the kernel $K$ -means algorithm to this similarity matrix for clustering. + +The kernelized similarity in Equation (3) is approximately the inner product of the gradients of $f$ at $\pmb{x}$ and $\pmb{x}^{\prime}$ . This is precisely the definition of the neural tangent kernel (Jacot et al., 2018) (see also Arora et al. (2019b)) if $\pmb{w}$ is generated from an i.i.d. normal distribution and the number of neurons in each layer tends to infinity. However, our empirical results suggest that a data-adaptive $\pmb{w}$ may lead to more significant local elasticity. Explicitly, both similarity measures with pre-trained weights yield better performance of Algorithm 1 (in Section 3) than those with randomly initialized weights. This is akin to the recent findings on a certain superiority of data-adaptive kernels over their non-adaptive counterparts (Dou & Liang, 2019).
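The approximation behind Equation (3) can be checked numerically: for a small learning rate, $S_{\mathrm{ker}}$ matches the inner product of parameter gradients at the two inputs. A sketch with a smooth (tanh) toy network follows; this is our illustrative setup, not the paper's, and the label is chosen so that $\partial\mathcal{L}/\partial f = 1$ for a clean comparison.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, eta = 5, 64, 1e-6
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.standard_normal(m)

def f(x, W):
    return a @ np.tanh(W @ x)      # smooth two-layer toy network

def grad_f(x, W):
    # gradient of f with respect to the hidden weights W
    return (a * (1.0 - np.tanh(W @ x) ** 2))[:, None] * x[None, :]

x, xp = rng.standard_normal(d), rng.standard_normal(d)
y = f(x, W) - 1.0                  # chosen so that dL/df = f - y = 1
dLdf = f(x, W) - y                 # l2 loss derivative, equals 1.0 here
Wp = W - eta * dLdf * grad_f(x, W) # one SGD step at (x, y)

s_ker = (f(xp, W) - f(xp, Wp)) / (eta * dLdf)   # Equation (3)
ntk = np.sum(grad_f(xp, W) * grad_f(x, W))      # tangent-kernel value
print(s_ker, ntk)                  # agree up to O(eta)
```

Shrinking `eta` further tightens the agreement, consistent with the first-order Taylor expansion used in the derivation above.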
Letting $\overline{\boldsymbol{x}} = (\boldsymbol{x}^{\top},1)^{\top}\in \mathbb{R}^{d + 1}$ , denote the network by $f(\boldsymbol {x},\boldsymbol {w}) = \sum_{r = 1}^{m}a_{r}\sigma (\boldsymbol{w}_{r}^{\top}\overline{\boldsymbol{x}})$ , where $\sigma (\cdot)$ is the ReLU activation function ( $\sigma (x) = x$ for $x\geq 0$ and $\sigma (x) = 0$ otherwise) and $\boldsymbol {w} = (\boldsymbol{w}_1^\top ,\dots ,\boldsymbol{w}_m^\top)^\top \in \mathbb{R}^{m(d + 1)}$ . For simplicity, we set $a_{r}\in \{-1,1\}$ . Note that this does not affect the expressibility of this net due to the positive homogeneity of ReLU. + +Assuming the entries of $\pmb{w}$ are i.i.d. normal random variables, along with some other conditions, we show in the appendix that this neural network under the SGD update satisfies + +$$ +\frac {f \left(\boldsymbol {x} ^ {\prime} , \boldsymbol {w} ^ {+}\right) - f \left(\boldsymbol {x} ^ {\prime} , \boldsymbol {w}\right)}{f \left(\boldsymbol {x} , \boldsymbol {w} ^ {+}\right) - f (\boldsymbol {x} , \boldsymbol {w})} = (1 + o (1)) \frac {\left(\boldsymbol {x} ^ {\top} \boldsymbol {x} ^ {\prime} + 1\right) \sum_ {r = 1} ^ {m} \mathbb {I} \left\{\boldsymbol {w} _ {r} ^ {\top} \bar {\boldsymbol {x}} \geq 0 \right\} \mathbb {I} \left\{\boldsymbol {w} _ {r} ^ {\top} \bar {\boldsymbol {x}} ^ {\prime} \geq 0 \right\}}{(\| \boldsymbol {x} \| ^ {2} + 1) \sum_ {r = 1} ^ {m} \mathbb {I} \left\{\boldsymbol {w} _ {r} ^ {\top} \bar {\boldsymbol {x}} \geq 0 \right\}}. \tag {4} +$$ + +Above, $\| \cdot \|$ denotes the $\ell_2$ norm. For comparison, we get $\frac{\tilde{f}(\boldsymbol{x}',\boldsymbol{w}^{+}) - \tilde{f}(\boldsymbol{x}',\boldsymbol{w})}{\tilde{f}(\boldsymbol{x},\boldsymbol{w}^{+}) - \tilde{f}(\boldsymbol{x},\boldsymbol{w})} = \frac{\boldsymbol{x}^{\top}\boldsymbol{x}' + 1}{\|\boldsymbol{x}\|^{2} + 1}$ for the linear network $\tilde{f}(\boldsymbol{x}) := \sum_{r = 1}^{m} a_r \boldsymbol{w}_r^{\top} \overline{\boldsymbol{x}}$ .
For both cases, the change in the prediction at $\boldsymbol{x}'$ resulting from an SGD update at $(\boldsymbol{x},y)$ involves a multiplicative factor of $\boldsymbol{x}^{\top} \boldsymbol{x}'$ . However, the distinction is that the nonlinear classifier $f$ gives rise to a dependence of the signed $S_{\mathrm{ker}}(\boldsymbol{x},\boldsymbol{x}')$ on the fraction of neurons that are activated at both $\boldsymbol{x}$ and $\boldsymbol{x}'$ . Intuitively, a close similarity between $\boldsymbol{x}$ and $\boldsymbol{x}'$ can manifest itself with a large number of commonly activated neurons. This is made possible by the nonlinearity of the ReLU activation function. We leave the rigorous treatment of the discussion here for future work. + +# 3 THE LOCAL ELASTICITY ALGORITHM FOR CLUSTERING + +This section introduces a novel algorithm for clustering that leverages the local elasticity of neural networks. We focus on the setting where all (primary) examples are from the same (known) superclass (e.g., mammal) and the interest is, however, to partition the primary examples into finer-grained (unknown) classes (e.g., cat and dog). To facilitate this process, we include an auxiliary dataset with all examples from a different superclass (e.g., vehicle). See Figure 3 for an illustration of the setting. To clear off any confusion, we remark that the aim is to corroborate the hypothesis of local elasticity by showing the effectiveness of this clustering algorithm. + +The centerpiece of our algorithm is the dynamic construction of a matrix that records pairwise similarities between all primary examples. In brief, the algorithm operates as if it were learning to distinguish between the primary examples (e.g., mammals) and the auxiliary examples (e.g., vehicles) via SGD. 
On top of that, the algorithm evaluates the changes in the predictions for all pairs of primary examples during the training process, and the recorded changes are used to construct the pairwise similarity matrix based on either the relative similarity in Equation (2) or the kernelized similarity in Equation (3). Taking the mammal example from earlier, the rationale of the algorithm is that local elasticity is likely to yield a larger similarity score between two cats (or two dogs), and a smaller similarity score between a cat and a dog. + +Algorithm 1 The Local Elasticity Based Clustering Algorithm. +Input: primary dataset $\mathcal{P} = \{\pmb {x}_i\}_{i = 1}^n$ , auxiliary dataset $\mathcal{A} = \{\widetilde{x}_j\}_{j = 1}^m$ , classifier $f(x,w)$ , initial weights $\pmb{w_0}$ , loss function $\mathcal{L}$ , learning rate $\eta_{t}$ , option $o\in \{\mathrm{relative, kernelized}\}$ +1: combine $\mathcal{D} = \{(x_i,y_i = 1)$ for $x_{i}\in \mathcal{P}\}\bigcup \{(\widetilde{x}_{j},y_{j} = -1)$ for $\widetilde{x}_{j}\in \mathcal{A}\}$ +2: set $S$ to $n\times n$ matrix of all zeros +3: for $t = 1$ to $n + m$ do +4: sample $(x,y)$ from $\mathcal{D}$ w/o replacement +5: $\pmb {w}_t = \mathrm{SGD}(\pmb {w}_{t - 1},\pmb {x},\pmb {y},f,\mathcal{L},\eta_t)$ +6: if $y = 1$ then +7: $\pmb {p}_t = \mathrm{Predict}(\pmb {w}_t,\mathcal{P},f)$ +8: find $1\leq i\leq n$ such that $\pmb {x} = \pmb {x}_i\in \mathcal{P}$ +9: if $o =$ relative then +10: $s_t = \frac{|p_t - p_{t - 1}|}{|p_t(i) - p_{t - 1}(i)|}$ +11: else +12: $g_{t} = \mathrm{GetGradient}(\pmb{w}_{t - 1},\pmb {x},\pmb {y},f,\mathcal{L})$ +13: $s_t = \frac{\pmb{p}_t - \pmb{p}_{t - 1}}{-\eta_t\times g_t}$ +14: end if +15: end if +16: set the ith row $S(i,:) = s_t$ +17: end for +18: $S_{\mathrm{symm}} = \frac{1}{2} (S + S^{\top})$ +19: $y_{\mathrm{subclass}} = \mathrm{Clustering}(S_{\mathrm{symm}})$ +20: return $y_{\mathrm{subclass}}$ + +This method is formally presented in Algorithm 1, with elaboration as follows.
While the initial parameters $\boldsymbol{w}_0$ are often set to i.i.d. random variables, our experiments suggest that "pre-trained" weights can lead to better clustering performance. Hence, we use a warm-up period to get a nearly optimal $\boldsymbol{w}_0$ by training on the combined dataset $\mathcal{D}$ . When the SGD is performed at a primary example $\boldsymbol{x}_i$ (labeled $y_i = 1$ ) during the iterations, the function $\mathrm{Predict}(\boldsymbol{w}_t, \mathcal{P}, f) \in \mathbb{R}^n$ evaluates the predictions for all primary examples at $\boldsymbol{w}_t$ , and these results are used to compute similarity measures between $\boldsymbol{x}_i$ and the other primary feature vectors using Equation (2) or Equation (3), depending on the option. If one chooses to use the kernelized similarity, the function GetGradient is called to compute the gradient of the loss function $\mathcal{L}$ at $\boldsymbol{w}_{t-1}, (\boldsymbol{x}, y)$ with respect to $f$ . In practice, we can repeat the loop multiple times and then average the similarity matrices over the epochs. Finally, using the symmetrized $S_{\mathrm{symm}}$ , we apply an off-the-shelf clustering method such as $K$ -means (for the relative similarity) or kernel $K$ -means (for the kernelized similarity) to partition the primary dataset $\mathcal{P}$ into subclasses. + +![](images/a13200aca019c29471b946ac67917d7875914cfa90f142cb2c78115362dc5ed3.jpg) +Mammal (primary) + +![](images/613e5d0390441561de69e3b02af8ab052786c38d0d8b52f12c9a270e0ce70d.jpg) +Figure 3: Illustration of the primary dataset and auxiliary dataset taken as input by the local elasticity based clustering algorithm. + +# 4 EXPERIMENTS + +# 4.1 EXPERIMENTAL SETTINGS + +We evaluate the performance of Algorithm 1 on MNIST (LeCun, 1998) and CIFAR-10 (Krizhevsky, 2009). For the MNIST dataset, we choose the 6 pairs of digits that are the most difficult for binary $K$ -means clustering. Likewise, 6 pairs of classes are selected from CIFAR-10.
For each pair (e.g., 5 and 8), we construct the primary data set by randomly sampling a total of 1000 examples equally from the two classes in the pair. The auxiliary dataset consists of 1000 examples that are randomly drawn from one or two different classes (in the case of two classes, evenly distribute the 1000 examples across the two classes). + +Our experiments consider two-layer feedforward neural networks (FNN, which is also used in Figure 1), CNN (Krizhevsky et al., 2012), and ResNet (He et al., 2016). We use 40960 neurons with the ReLU activation function for two-layer FNN and details of the other architectures can be found + +
| Primary Examples | 5 vs 8 | 4 vs 9 | 7 vs 9 | 5 vs 9 | 3 vs 5 | 3 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 5, 7 | 4, 5 | 4, 8 | 8, 9 | 5 |
| K-means | 50.4 | 54.5 | 55.5 | 56.3 | 69.0 | 76.5 |
| PCA + K-means | 50.4 | 54.5 | 55.7 | 56.5 | 70.7 | 76.4 |
| l2-relative (linear) | 51.0 | 54.6 | 51.3 | 58.8 | 58.3 | 58.7 |
| l2-kernelized (linear) | 50.1 | 55.5 | 55.5 | 56.3 | 69.3 | 76.1 |
| BCE-relative (linear) | 50.1 | 50.0 | 50.2 | 50.6 | 51.4 | 50.1 |
| BCE-kernelized (linear) | 50.7 | 56.7 | 62.3 | 55.2 | 53.9 | 51.6 |
| l2-relative (ours) | **75.9** | 55.6 | 62.5 | **89.3** | 50.3 | 74.7 |
| l2-kernelized (ours) | 71.0 | **63.8** | **64.6** | 67.8 | **71.5** | **78.8** |
| BCE-relative (ours) | 50.2 | 50.5 | 51.9 | 55.7 | 53.7 | 50.2 |
| BCE-kernelized (ours) | 52.1 | 59.3 | 64.5 | 58.2 | 52.5 | 51.8 |
+ +Table 1: Classification accuracy of Algorithm 1 and other methods on the MNIST dataset. BCE stands for the binary cross-entropy loss. We use $K$ -means for relative similarity based clustering algorithms, and kernel $K$ -means for kernelized similarity based clustering algorithms. For example, BCE-kernelized (linear) denotes the setting that uses the BCE loss, kernelized similarity and the linear activation function. The highest accuracy score in each study is in boldface. + +
| Primary Examples | Car vs Cat | Car vs Horse | Cat vs Bird | Dog vs Deer | Car vs Bird | Deer vs Frog |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | Bird | Cat, Bird | Car | Frog | Cat | Dog |
| $K$-means | 50.3 | 50.9 | 51.1 | 51.6 | 51.8 | 52.4 |
| PCA + $K$-means | 50.6 | 51.0 | 51.1 | 51.6 | 51.7 | 52.4 |
| $\ell_2$-relative (linear) | 50.0 | 56.8 | 55.2 | 50.6 | 56.3 | 52.5 |
| $\ell_2$-kernelized (linear) | 50.4 | 50.7 | 51.1 | 51.3 | 51.9 | 52.8 |
| BCE-relative (linear) | 50.4 | 50.9 | 50.2 | 50.6 | 55.3 | 50.2 |
| BCE-kernelized (linear) | 55.7 | 58.1 | 52.0 | 50.5 | 55.8 | 53.0 |
| $\ell_2$-relative (ours) | 53.7 | 50.2 | 58.4 | 54.2 | 63.0 | 53.5 |
| $\ell_2$-kernelized (ours) | 52.1 | 50.5 | 53.6 | 52.7 | 50.5 | 53.0 |
| BCE-relative (ours) | 51.1 | 53.5 | 50.7 | 51.1 | 55.6 | 50.3 |
| BCE-kernelized (ours) | 58.4 | 59.7 | 56.6 | 51.2 | 55.0 | 51.8 |
Table 2: Classification accuracy of Algorithm 1 and other methods on the CIFAR-10 dataset.

in Appendix A.4. We consider two types of loss functions, the $\ell_2$ loss and the cross-entropy loss $\mathcal{L}(f,y) = -y\log (f)$. Note that neural networks with the cross-entropy loss have an extra sigmoid layer on top of all layers. We run Algorithm 1 for one epoch. For comparison, linear neural networks (with the identity activation function) with the same architectures as their nonlinear counterparts are used as baseline models. Here, we use linear neural networks instead of linear regression to prevent possible influence of over-parameterization on our experimental results. We also consider $K$-means and principal component analysis (PCA) followed by $K$-means as simple baselines.

# 4.2 RESULTS

Comparison between relative and kernelized similarities. Table 1 and Table 2 display the results on MNIST and CIFAR-10, respectively. The relative similarity based method with the $\ell_2$ loss performs best on CIFAR-10, while the kernelized similarity based method with the $\ell_2$ loss outperforms the other methods on MNIST. Overall, our methods outperform both the linear and the simple baseline models. These results demonstrate the effectiveness of Algorithm 1 and also support our hypothesis of local elasticity in neural nets.

Comparison between architectures. The results of Algorithm 1 with the aforementioned three types of neural networks, namely FNN, CNN, and ResNet, on MNIST are presented in Table 3. The results show that CNN in conjunction with the kernelized similarity achieves high classification accuracy. In contrast, the simple three-layer ResNet seems not to capture local elasticity.
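For the kernelized option, GetGradient supplies per-example gradients whose pairwise inner products form a Gram matrix for kernel $K$-means. The sketch below again assumes a two-layer ReLU network; since Section 2.2 and Equation (3) are outside this excerpt, the Gram-matrix form $S_{\mathrm{ker}}(\pmb{x}, \pmb{x}') = \langle \varphi(\pmb{x}), \varphi(\pmb{x}') \rangle$ with $\varphi(\pmb{x}) \approx \partial f(\pmb{x}, \pmb{w})/\partial \pmb{w}$ (the lifting map discussed in Section 5) is an assumption, and the helper names are ours.

```python
import numpy as np

def grad_f(w, a, x):
    # Gradient of the network output with respect to the first-layer
    # weights: df/dw_r = a_r * 1{w_r^T x >= 0} * x.
    mask = (w @ x >= 0).astype(float)
    return (a * mask)[:, None] * x[None, :]

def kernelized_similarity(w, a, X):
    # Gram matrix of output gradients, S[i, j] = <grad_f(x_i), grad_f(x_j)>.
    G = np.stack([grad_f(w, a, x).ravel() for x in X])
    return G @ G.T

rng = np.random.default_rng(1)
d, m, n = 4, 128, 6                   # toy sizes, assumed
X = rng.normal(size=(n, d))
w = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)
S = kernelized_similarity(w, a, X)
```

Because S is a Gram matrix, it is symmetric positive semi-definite, which is exactly what kernel $K$-means expects.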
| Primary Examples | 5 vs 8 | 4 vs 9 | 7 vs 9 | 5 vs 9 | 3 vs 5 | 3 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 5, 7 | 4, 5 | 4, 8 | 8, 9 | 5 |
| $\ell_2$-relative (FNN) | 75.9 | 55.6 | 62.5 | 89.3 | 50.3 | 74.7 |
| $\ell_2$-kernelized (FNN) | 71.0 | 63.8 | 64.6 | 67.8 | 71.5 | 78.8 |
| $\ell_2$-relative (CNN) | 54.2 | 53.7 | 89.1 | 50.1 | 50.1 | 83.0 |
| $\ell_2$-kernelized (CNN) | 64.1 | 69.5 | 91.3 | 97.6 | 75.3 | 87.4 |
| $\ell_2$-relative (ResNet) | 50.7 | 55.0 | 55.5 | 78.3 | 52.3 | 52.3 |
| $\ell_2$-kernelized (ResNet) | 50.2 | 60.4 | 54.8 | 76.3 | 66.9 | 68.8 |

Table 3: Comparison of different architectures for the local elasticity based clustering algorithm. We only consider the $\ell_2$ loss on MNIST here for the sake of simplicity.
| Primary Examples | 5 vs 8 | 4 vs 9 | 7 vs 9 | 5 vs 9 | 3 vs 5 | 3 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 5, 7 | 4, 5 | 4, 8 | 8, 9 | 5 |
| AutoEncoder | 50.9 | 54.0 | 61.8 | 70.0 | 64.3 | 67.1 |
| ResNet-152 | 85.2 | 94.2 | 93.1 | 82.0 | 65.8 | 96.0 |
| $\ell_2$-relative (ours) | 54.2 | 53.7 | 89.1 | 50.1 | 50.1 | 83.0 |
| $\ell_2$-kernelized (ours) | 64.1 | 69.5 | 91.3 | 97.6 | 75.3 | 87.4 |
Table 4: Comparison with other feature extraction methods. For simplicity, we only consider CNN with the $\ell_2$ loss on MNIST. The features from the autoencoder and ResNet-152 are clustered by $K$-means.

To further evaluate the effectiveness of the local elasticity based Algorithm 1, we compare this clustering algorithm using CNN with two types of feature extraction approaches: an autoencoder and a pre-trained ResNet-152 (He et al., 2016). An autoencoder reconstructs the input data in order to learn its hidden representation. ResNet-152 is pre-trained on ImageNet (Deng et al., 2009) and can be used to extract features of images. The performance of these models on MNIST is shown in Table 4. Overall, the autoencoder yields the worst results. Although ResNet-152 performs quite well in general, our methods outperform it in some cases. It is an interesting direction to combine the strengths of our algorithm and ResNet-152 on classification tasks that are very different from ImageNet. Moreover, unlike ResNet-152, our methods do not require a large number of examples for pre-training. To better appreciate local elasticity and Algorithm 1, we study the effects of parameter initialization, auxiliary examples, and the normalized kernelized similarity in Appendix A.4. More discussion of the expected relative change, activation patterns, and activation functions can also be found in Appendix A.4.

# 5 IMPLICATIONS AND FUTURE WORK

In this paper, we have introduced a notion of local elasticity for neural networks. This notion enables us to develop a new clustering algorithm, and its effectiveness on the MNIST and CIFAR-10 datasets provides evidence in support of the local elasticity phenomenon in neural networks. While we have demonstrated local elasticity on both synthetic and real-world datasets, we acknowledge that a mathematical foundation of this notion is yet to be developed. Specifically, how can this notion be rigorously formulated for neural networks?
A good solution to this question would involve the definition of a meaningful similarity measure and, presumably, would need to reconcile the notion with possible situations where the dependence between the prediction change and the similarity is not necessarily monotonic. Next, can we prove that this phenomenon occurs under some geometric structures of the dataset? Notably, recent evidence suggests that the structure of the training data has a profound impact on the performance of neural networks (Goldt et al., 2019). Moreover, how does local elasticity depend on network architectures, activation functions, and optimization strategies?
Broadly speaking, local elasticity implies that neural networks can be plausibly thought of as a local method. We say a method is local if it seeks to fit a data point only using observations within a window of the data point. Important examples include the $k$-nearest neighbors algorithm, kernel smoothing, local polynomial regression (Fan, 2018), and locally linear embedding (Roweis & Saul, 2000). An exciting research direction is to formally relate neural networks to local methods. As opposed to the aforementioned classic local methods, however, neural networks seem to be capable of choosing the right bandwidth (window size) by adapting to the structure of the data. It would be of great interest to show whether or not this adaptivity is a consequence of the local elasticity of neural networks.
In closing, we provide further implications of this notion by seeking to interpret various aspects of neural networks via local elasticity. Our discussion lacks rigor and hence much future investigation is needed.
Memorization. Neural networks are empirically observed to be capable of fitting even random labels perfectly (Zhang et al., 2017), with provable guarantees under certain conditions (Allen-Zhu et al., 2019; Du et al., 2019; Oymak & Soltanolkotabi, 2019).
Intuitively, local elasticity leads neural nets to progressively fit the examples in a vicinity of the input via each SGD update, while (mostly) retaining the fit on labels that have been learned previously. A promising direction for future work is to relate local elasticity to memorization in a more concrete fashion.
Stability and generalization. Bousquet & Elisseeff (2002) demonstrate that the uniform stability of an algorithm implies generalization on a test dataset. As noted by Kuzborskij & Lampert (2018), due to its distribution-free and worst-case nature, uniform stability can lead to very loose bounds on generalization. The local elasticity of neural networks, however, suggests that replacing one training point with another imposes only limited perturbations on the predictions of most points, and thus the loss might be more stable than expected. In this regard, it is possible to introduce a new notion of stability that relies on the metric of the input space for better generalization bounds.
Data normalization and batch normalization. The two normalization techniques are commonly used to improve the performance of neural networks (Gonzalez & Woods, 2002; Ioffe & Szegedy, 2015). The local elasticity viewpoint appears to imply that these techniques allow for a more solid relationship between the relative prediction change and a certain distance between feature vectors. Precisely, writing the lifting map $\varphi(\pmb{x}) \approx \frac{\partial f(\pmb{x},\pmb{w})}{\partial\pmb{w}}$, Section 2.2 reveals that the kernelized similarity approximately satisfies $2S_{\mathrm{ker}}(\pmb{x},\pmb{x}^{\prime}) \approx \| \varphi(\pmb{x})\|^{2} + \| \varphi(\pmb{x}^{\prime})\|^{2} - d(\pmb{x},\pmb{x}^{\prime})^{2}$, where $d(\pmb{x},\pmb{x}^{\prime}) = \| \varphi(\pmb{x}) - \varphi(\pmb{x}^{\prime})\|$.
To obtain a negative correlation between the similarity and the distance, therefore, one possibility is to have the norms of $\varphi(\pmb{x})$ and $\varphi(\pmb{x}^{\prime})$ approximately equal to a constant. This might be made possible by employing the normalization techniques. See some empirical results in Appendix A.4. An avenue for future investigation is to consolidate this implication on normalization techniques.

# REFERENCES

Zeyuan Allen-Zhu and Yuanzhi Li. What can ResNet learn efficiently, going beyond kernels? arXiv preprint arXiv:1905.10337, 2019.
Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In ICML, pp. 242-252, 2019.
Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. arXiv preprint arXiv:1905.13655, 2019a.
Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. arXiv preprint arXiv:1904.11955, 2019b.
Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930-945, 1993.

Benedikt Bauer and Michael Kohler. On deep learning as a remedy for the curse of dimensionality in nonparametric regression. The Annals of Statistics, 47(4):2261-2285, 2019.
Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In ICML, pp. 540-548, 2018.
Olivier Bousquet and André Elisseeff. Stability and generalization. JMLR, 2(Mar):499-526, 2002.
Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a ConvNet with Gaussian inputs. In ICML, pp. 605-614. JMLR.org, 2017.
Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao.
Efficient approximation of deep relu networks for functions on low dimensional manifolds. arXiv preprint arXiv:1908.01842, 2019. +Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for overparameterized models using optimal transport. In NIPS, pp. 3036-3046, 2018. +Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. JMLR, 38:192-204, 2015. +George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989. +Amit Daniely. SGD learns the conjugate kernel class of the network. In NIPS, pp. 2422-2430, 2017. +Olivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In NIPS, pp. 666-674, 2011. +Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009. +Xialiang Dou and Tengyuan Liang. Training neural networks as learning data-adaptive kernels: Provable representation and approximation benefits. arXiv preprint arXiv:1901.07114, 2019. +Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In ICML, pp. 1675-1685, 2019. +Simon S Du and Jason D Lee. On the power of over-parametrization in neural networks with quadratic activation. In ICML, pp. 1328-1337, 2018. +Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018. +Weinan E, Chao Ma, and Lei Wu. Barron spaces and the compositional function spaces for neural network models. arXiv preprint arXiv:1906.08039, 2019a. +Weinan E, Chao Ma, and Lei Wu. A comparative analysis of the optimization and generalization property of two-layer neural network and random feature models under gradient descent dynamics. arXiv preprint arXiv:1904.04326, 2019b. +Ronen Eldan and Ohad Shamir. 
The power of depth for feedforward neural networks. In COLT, pp. 907-940, 2016.
Jianqing Fan. Local polynomial modelling and its applications: monographs on statistics and applied probability 66. Routledge, 2018.
Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Stefano Soatto. Classification regions of deep neural networks. arXiv preprint arXiv:1705.09552, 2017.
Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Stefano Soatto. Empirical study of the topology and geometry of deep networks. In CVPR, pp. 3762-3770, 2018.

Stanislav Fort, Paweł Krzysztof Nowak, Stanislaw Jastrzebski, and Srini Narayanan. Stiffness: A new perspective on generalization in neural networks. arXiv preprint arXiv:1901.09491, 2019.
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Linearized two-layers neural networks in high dimension. arXiv preprint arXiv:1904.12191, 2019.
Sebastian Goldt, Marc Mézard, Florent Krzakala, and Lenka Zdeborová. Modelling the influence of data structure on learning in neural networks. arXiv preprint arXiv:1909.11500, 2019.
Rafael C Gonzalez and Richard E Woods. Digital image processing, 2002.
Boris Hanin and David Rolnick. Complexity of linear regions in deep networks. In ICML, pp. 2596-2604, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770-778, 2016.
Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In NIPS, pp. 8571-8580, 2018.
Jason M Klusowski and Andrew R Barron.
Approximation by combinations of ReLU and squared ReLU ridge functions with $\ell_1$ and $\ell_0$ controls. IEEE Transactions on Information Theory, 64(12):7649-7656, 2018.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In ICML, volume 70, pp. 1885-1894, 2017.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.
Ilja Kuzborskij and Christoph Lampert. Data-dependent stability of stochastic gradient descent. In ICML, pp. 2820-2829, 2018.
Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
Jaehoon Lee, Lechao Xiao, Samuel S Schoenholz, Yasaman Bahri, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. arXiv preprint arXiv:1902.06720, 2019.
Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In NIPS, pp. 8157-8166, 2018.
Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with ReLU activation. In NIPS, pp. 597-607, 2017.
Shiyu Liang, Ruoyu Sun, Jason D Lee, and Rayadurgam Srikant. Adding one neuron can eliminate all bad local minima. In NIPS, pp. 4350-4360, 2018.
Tengyuan Liang. On how well generative adversarial networks learn densities: Nonparametric and parametric results. arXiv preprint arXiv:1811.03179, 2018.
Hrushikesh N Mhaskar and Tomaso Poggio. Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications, 14(06):829-848, 2016.
Geometry of optimization and implicit regularization in deep learning. arXiv preprint arXiv:1705.03071, 2017.
Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv preprint arXiv:1902.04674, 2019.
Nicolas Papernot and Patrick McDaniel. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv preprint arXiv:1803.04765, 2018.
Grant M Rotskoff and Eric Vanden-Eijnden. Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error. arXiv preprint arXiv:1805.00915, 2018.
Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
Itay Safran and Ohad Shamir. Spurious local minima are common in two-layer ReLU neural networks. In ICML, pp. 4430-4438, 2018.
Johannes Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU activation function. arXiv preprint arXiv:1708.06633, 2017.
Justin Sirignano and Konstantinos Spiliopoulos. Mean field analysis of neural networks: A central limit theorem. Stochastic Processes and their Applications, 2019.
Mahdi Soltanolkotabi. Learning ReLUs via gradient descent. In NIPS, pp. 2007-2017, 2017.
Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 65(2):742-769, 2018.
Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115:E7665-E7671, 2018.
Daniel Soudry and Elad Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. arXiv preprint arXiv:1702.05777, 2017.
Taiji Suzuki.
Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. arXiv preprint arXiv:1810.08033, 2018.
Matus Telgarsky. Benefits of depth in neural networks. In COLT, pp. 1517-1539, 2016.
Yuandong Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis. In ICML, pp. 3404-3413. JMLR.org, 2017.
Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets v.s. their induced kernel. arXiv preprint arXiv:1810.05369, 2018.
Sanford Weisberg. Applied linear regression, volume 528. John Wiley & Sons, 2005.
Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In ICML, pp. 478-487, 2016.
Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019.
Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks, 94:103-114, 2017.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
Yi Zhou and Yingbin Liang. Critical points of neural networks: Analytical forms and landscape properties. arXiv preprint arXiv:1710.11205, 2017.
Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep ReLU networks. arXiv preprint arXiv:1811.08888, 2018.
# A EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS

# A.1 CONSTRUCTION OF THE SYNTHETIC EXAMPLES IN FIGURE 1

The blue examples in the torus function (Figure 1(a)) are defined parametrically by:

$$
x(\theta) = (8 + \sin(4\theta))\cos\theta
$$

$$
y(\theta) = (8 + \sin(4\theta))\sin\theta \tag{5}
$$

$$
z(\theta) = \cos(4\theta)
$$

Similarly, the red examples in the torus function are defined by:

$$
x(\theta) = (8 - \sin(4\theta))\cos\theta
$$

$$
y(\theta) = (8 - \sin(4\theta))\sin\theta \tag{6}
$$

$$
z(\theta) = -\cos(4\theta)
$$

The blue examples in the two folded boxes function (Figure 1(d)) are sampled from:

$$
|y| + 1.2|x| = 13, \quad z \in [-1, 1] \tag{7}
$$

Similarly, the red examples in the two folded boxes function are defined by:

$$
|y| + 1.2|x| = 11, \quad z \in [-1, 1] \tag{8}
$$

The geodesic distance measures the shortest path between two points in a surface, or more generally in a Riemannian manifold. For example, in Figure 1, the geodesic distance for the torus function and the two folded boxes function is measured along the curve and within the surface, respectively, rather than by the Euclidean distance.

# A.2 MORE SIMULATIONS

More simulations can be found in Figure 4.

We further explore the local elasticity of ResNet-152 on ImageNet. We find that, when the pre-trained ResNet-152 is updated on a tabby cat via SGD, the predictions of tiger cats change more drastically than the predictions of warplanes. We randomly pick 50 examples from the tabby cat synset as updating points, and 25 examples from the tiger cat synset and 25 examples from the warplane synset as testing points. After updating on a tabby cat, we compute the average change of predictions on the 25 tiger cats and that on the 25 warplanes.
By conducting a Wilcoxon rank-sum test with SGD updates on 50 different tabby cats, we find that the average change of the predictions on the tiger cats is significantly more drastic than that on the warplanes, with a p-value of 0.0007. The overall average relative similarity between the tiger cats and the tabby cats is 4.4, while the overall average similarity between the warplanes and the tabby cats is 6.6. Note that the relative similarity is computed from the relative KL divergence between the original prediction and the updated prediction.

Specifically, we find that the change of the predictions (KL divergence of 0.03) on the tiger cat (Figure 5(b)) is more drastic than that (KL divergence of 0.002) on the warplane (Figure 5(c)) after an SGD update on the tabby cat (Figure 5(a)), although the Euclidean distance (511) between the warplane and the tabby cat is smaller than that (854) between the tiger cat and the tabby cat.

Moreover, we conduct a simulation study to examine local elasticity in the case of using mini-batch SGD for updates. Specifically, we consider updating the weights of ResNet-152 in one iteration using two images. The training and test images are displayed in Figure 6. We find that the changes of the predictions on the tabby cat (Figure 6(c), KL divergence of 0.021) and on the warplane (Figure 6(d), KL divergence of 0.024) are more substantial than that on the tree frog (Figure 6(e), KL divergence of 0.0004) after an SGD update on the mini-batch composed of a tabby cat (Figure 6(a)) and a warplane (Figure 6(b)).

![](images/044528bf386fb9315c7bf0531626601e60ade40c8abbf4c6943e771e84e69186.jpg)
(a) Two cosine functions.

![](images/ddd43994c8ec6f5e9869aa7c9d2da26d0a02376c62ca02e033323a660ca7d5f4.jpg)
(b) The local elasticity of two-layer neural nets fitting these two cosine functions.

![](images/67af53d028e6ec073241baac97be034ca3170ef4f7dd6ca8ee046cf8c3bc2e3a.jpg)
(c) The double helix function.
![](images/8859747878d4fa3eefbba74884de9b9201f6ba4b5e23e5d89366cd193bc02a83.jpg)
(d) The local elasticity of two-layer neural nets fitting the double helix function.

![](images/5f79777623b594c34833a7f92771fdbebd60afa5d6bd17a16718c93d2be4cf9e.jpg)
(e) Two cycle functions.

![](images/d627e6017f8c75a42bd346ed3d916af3f62c25ec7606e38853a18639dd5496ae.jpg)
(f) The local elasticity of two-layer neural nets fitting two cycle functions.

![](images/a77b934465d41e3781b37e8060b33bf4cd1dd7872bd90d412eb6005730f1ff33.jpg)
(g) Two sphere functions.
Figure 4: More simulations of the local elasticity.

![](images/ed1b6147d16b3a1f59ad14dea42bd48d08eb74988cf531d86453bf8226dcc7aa.jpg)
(h) The local elasticity of two-layer neural nets fitting two sphere functions.

![](images/b85b6eb0d65126225bea030f7a91a23da0d2a9498df0f7383903f69e692816be.jpg)
(a) A tabby cat.

![](images/e942c20fac251ea94b2e115d81109c3f7e3abeb3114e2415cd3aba39dfebed15.jpg)
(b) A tiger cat.

![](images/ffff9f423e38fda8f57087c20937dade1468d2c1ae321d78f3cb5f4c7885e36f.jpg)
(c) A warplane.
Figure 5: Simulations in real data. We show specific examples for a tabby cat, a tiger cat and a warplane.

![](images/2f9caf22efb4d3ac0c3ec7482f45e5547a5879cfef795dc5bcd83ed06c302716.jpg)
(a) A tabby cat (train).

![](images/3f2abb94587ce7733868c027e5215330906e32291c20380e0906eca5bb442f00.jpg)
(b) A warplane (train).

![](images/1aba4ec58db6a1505205939521cc993e70eec5119b79fe04f69828d970dc2fe8.jpg)
(c) A tabby cat (test).
Figure 6: Simulations showing local elasticity with mini-batch SGD.

![](images/77b0a71b79a9ba1642f890e5363fecff38705f9bdfaeff6c778ee177ad28a76e.jpg)
(d) A warplane (test).

![](images/a10d972e4b066ebd6ef4c6582665bfa1cf284fd314ff0c6f4c97f5393328c745.jpg)
(e) A tree frog (test).

# A.3 SOME CALCULATIONS

We briefly explain how to obtain Equation (4) under the same assumptions as Li & Liang (2018).
$$
\begin{array}{l}
f(\boldsymbol{x}', \boldsymbol{w}^+) - f(\boldsymbol{x}', \boldsymbol{w}) = \sum_{r=1}^m a_r \sigma\left((\boldsymbol{w}_r^+)^\top \boldsymbol{x}'\right) - \sum_{r=1}^m a_r \sigma\left(\boldsymbol{w}_r^\top \boldsymbol{x}'\right) \\
= \sum_{r=1}^m a_r \sigma\left(\left(\boldsymbol{w}_r - \eta \frac{\partial \mathcal{L}}{\partial f} a_r \boldsymbol{x}\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}\right)^\top \boldsymbol{x}'\right) - \sum_{r=1}^m a_r \sigma\left(\boldsymbol{w}_r^\top \boldsymbol{x}'\right) \\
= \sum_{r=1}^m a_r \sigma\left(\boldsymbol{w}_r^\top \boldsymbol{x}' - \eta \frac{\partial \mathcal{L}}{\partial f} a_r \boldsymbol{x}^\top \boldsymbol{x}'\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}\right) - \sum_{r=1}^m a_r \sigma\left(\boldsymbol{w}_r^\top \boldsymbol{x}'\right) \\
= \Delta_1 + \Delta_2,
\end{array}
$$

where

$$
\begin{array}{l}
\Delta_1 = -\eta \frac{\partial \mathcal{L}}{\partial f} \sum_{r=1}^m \boldsymbol{x}^\top \boldsymbol{x}'\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x}' \geq 0\} \\
= -\eta \frac{\partial \mathcal{L}}{\partial f} \boldsymbol{x}^\top \boldsymbol{x}' \sum_{r=1}^m \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x}' \geq 0\}
\end{array}
$$

and

$$
\begin{array}{l}
|\Delta_2| \leq \sum_{r=1}^m \left| \eta \frac{\partial \mathcal{L}}{\partial f} a_r^2 \boldsymbol{x}^\top \boldsymbol{x}'\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\} \right| \cdot \left| \mathbb{I}\left\{\boldsymbol{w}_r^\top \boldsymbol{x}' - \eta \frac{\partial \mathcal{L}}{\partial f} a_r \boldsymbol{x}^\top \boldsymbol{x}'\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\} \geq 0\right\} - \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x}' \geq 0\} \right| \\
= \eta \left|\frac{\partial \mathcal{L}}{\partial f}\right| \left|\boldsymbol{x}^\top \boldsymbol{x}'\right| \sum_{r=1}^m \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\} \left| \mathbb{I}\left\{\boldsymbol{w}_r^\top \boldsymbol{x}' - \eta \frac{\partial \mathcal{L}}{\partial f} a_r \boldsymbol{x}^\top \boldsymbol{x}' \geq 0\right\} - \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x}' \geq 0\} \right|.
\end{array}
$$

As shown in Li & Liang (2018), only a vanishing fraction of the neurons lead to different patterns between $\mathbb{I}\left\{\pmb{w}_r^\top \pmb{x}' - \eta \frac{\partial \mathcal{L}}{\partial f} a_r \pmb{x}^\top \pmb{x}' \geq 0\right\}$ and $\mathbb{I}\{\pmb{w}_r^\top \pmb{x}' \geq 0\}$. Consequently, we get $|\Delta_2| \ll |\Delta_1|$, thereby certifying

$$
\begin{array}{l}
f(\boldsymbol{x}', \boldsymbol{w}^+) - f(\boldsymbol{x}', \boldsymbol{w}) = -(1 + o(1))\, \eta \frac{\partial \mathcal{L}}{\partial f} \boldsymbol{x}^\top \boldsymbol{x}' \sum_{r=1}^m \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x}' \geq 0\} \\
= (1 + o(1)) \frac{\boldsymbol{x}^\top \boldsymbol{x}' \sum_{r=1}^m \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}\, \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x}' \geq 0\}}{\|\boldsymbol{x}\|^2 \sum_{r=1}^m \mathbb{I}\{\boldsymbol{w}_r^\top \boldsymbol{x} \geq 0\}} \left(f(\boldsymbol{x}, \boldsymbol{w}^+) - f(\boldsymbol{x}, \boldsymbol{w})\right).
\end{array}
$$

# A.4 MORE ASPECTS ON THE EXPERIMENTS
| Primary Examples | 5 vs 8 | 4 vs 9 | 7 vs 9 | 5 vs 9 | 3 vs 5 | 3 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 5, 7 | 4, 5 | 4, 8 | 8, 9 | 5 |
| $\ell_2$-relative (random) | 50.9 | 55.2 | 51.2 | 59.0 | 58.5 | 59.9 |
| $\ell_2$-kernelized (random) | 50.4 | 55.5 | 55.7 | 56.6 | 68.4 | 75.9 |
| $\ell_2$-relative (ours) | 75.9 | 55.6 | 62.5 | 89.3 | 50.3 | 74.7 |
| $\ell_2$-kernelized (ours) | 71.0 | 63.8 | 64.6 | 67.8 | 71.5 | 78.8 |
Architectures. We use 40960 neurons with the ReLU activation function for the two-layer neural nets in simulations and experiments. We use 8192 neurons with the ReLU activation function for each hidden layer in the three-layer neural networks in simulations. The ResNet we use in experiments is a three-layer net that contains 4096 neurons and a ReLU activation function in each layer. The CNN for MNIST we use in experiments has the same architecture as that in https://github.com/pytorch/examples/blob/master/mnist/main.py.

Weights. As discussed in Section 3, parameter initialization is important for local elasticity based clustering methods. We find that the optimal setting always outperforms the random setting. It

Table 5: Comparison of two different settings, random and optimal (default), for parameter initialization. For simplicity, we only consider the $\ell_2$ loss on MNIST here.
| Primary Examples | 5 vs 8 | 5 vs 8 | 5 vs 8 | 5 vs 8 | 5 vs 8 | 5 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 6, 9 | 2, 3 | 2, 6 | 3 | 9 |
| $\ell_2$-relative (ours) | 75.9 | 53.7 | 54.3 | 51.2 | 53.9 | 60.6 |
| $\ell_2$-kernelized (ours) | 71.0 | 54.4 | 50.2 | 51.6 | 50.6 | 54.4 |
| BCE-relative (ours) | 50.2 | 50.4 | 50.1 | 50.2 | 52.3 | 50.1 |
| BCE-kernelized (ours) | 52.1 | 52.7 | 51.6 | 52.4 | 50.3 | 50.3 |
Table 6: Comparison of different auxiliary examples for 5 vs 8 on MNIST.
| Primary Examples | 5 vs 8 | 4 vs 9 | 7 vs 9 | 5 vs 9 | 3 vs 5 | 3 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 5, 7 | 4, 5 | 4, 8 | 8, 9 | 5 |
| $\ell_2$-kernelized (ours) | 71.0 | 63.8 | 64.6 | 67.8 | 71.5 | 78.8 |
| BCE-kernelized (ours) | 52.1 | 59.3 | 64.5 | 58.2 | 52.5 | 51.8 |
| $\ell_2$-normalized-kernelized (ours) | 71.3 | 62.3 | 64.8 | 68.0 | 73.0 | 79.5 |
| BCE-normalized-kernelized (ours) | 52.0 | 57.5 | 61.0 | 56.9 | 50.1 | 75.2 |
Table 7: Comparison between the kernelized similarity based methods and the normalized kernelized similarity based methods on MNIST.
| Primary Examples | 5 vs 8 | 4 vs 9 | 7 vs 9 | 5 vs 9 | 3 vs 5 | 3 vs 8 |
| --- | --- | --- | --- | --- | --- | --- |
| Auxiliary Examples | 3, 9 | 5, 7 | 4, 5 | 4, 8 | 8, 9 | 5 |
| $\ell_2$-relative (unnormalized) | 51.5 | 58.4 | 55.9 | 53.0 | 69.9 | 75.5 |
| $\ell_2$-kernelized (unnormalized) | 51.4 | 56.1 | 55.3 | 57.2 | 62.1 | 75.1 |
| BCE-relative (unnormalized) | 50.1 | 50.3 | 50.7 | 53.2 | 50.9 | 50.1 |
| BCE-kernelized (unnormalized) | 50.6 | 56.9 | 59.6 | 54.4 | 52.2 | 50.8 |
| $\ell_2$-relative (ours) | 75.9 | 55.6 | 62.5 | 89.3 | 50.3 | 74.7 |
| $\ell_2$-kernelized (ours) | 71.0 | 63.8 | 64.6 | 67.8 | 71.5 | 78.8 |
| BCE-relative (ours) | 50.2 | 50.5 | 51.9 | 55.7 | 53.7 | 50.2 |
| BCE-kernelized (ours) | 52.1 | 59.3 | 64.5 | 58.2 | 52.5 | 51.8 |
Table 8: Comparison of the clustering algorithms without data normalization on MNIST. Note that we use data normalization by default.

![](images/f4558c7c703d2bd648f0129d5750c3b48322e1d495eeb854e373d8498f8f3ade.jpg)
(a) Activation analysis of two-layer neural nets with the ReLU activation function on the torus function.

![](images/018f8ba3a59d49ec8378b9180659e3bcd55df92c7565faff4d94c96829617760.jpg)
(b) The local elasticity of two-layer neural nets with the sigmoid activation function on the torus function.

Figure 7: Activation analysis and activation-function analysis of the local elasticity of neural nets. The Pearson correlation between the cosine similarity based on activation patterns and the corresponding Euclidean distances is $-0.97$ on the torus function; the cosine similarity between activation patterns thus indicates the Euclidean distance between two data points. The Pearson correlation between the relative similarity based on two-layer neural nets with the sigmoid activation function and the geodesic distance is $-0.99$, indicating that two-layer neural nets with the sigmoid activation function also exhibit a strong local elastic effect.

supports the intuition that we can learn detailed features of primary examples to better distinguish them from auxiliary examples. The results of the two types of parameter initialization are shown in Table 5.

Auxiliary examples. In general, we can choose samples that are similar to the primary examples as auxiliary examples, so that they help neural nets better learn the primary examples. We analyze the primary example pair 5 and 8 in MNIST with 6 different settings of auxiliary examples. We find that the choice of auxiliary examples is crucial. For example, in our case 9 is close to 5 and 3 is close to 8 according to $K$-means clustering results, so we can choose 9 and 3 as auxiliary examples for models to better distinguish 5 and 8.
The results of the different auxiliary examples for 5 vs 8 on MNIST are shown in Table 6.

Normalized kernelized similarity. The normalized kernelized similarity is

$$
\overline{S}_{\ker}(\boldsymbol{x}, \boldsymbol{x}') := S_{\ker}(\boldsymbol{x}, \boldsymbol{x}') / \sqrt{S_{\ker}(\boldsymbol{x}, \boldsymbol{x})\, S_{\ker}(\boldsymbol{x}', \boldsymbol{x}')}.
$$

The normalized kernelized similarity achieves more stable local elasticity. We compare clustering algorithms based on the kernelized similarity and the normalized kernelized similarity on MNIST. We find that the normalized kernelized similarity based methods perform slightly better than the kernelized similarity based methods with the $\ell_2$ loss, although this conclusion no longer holds with the cross-entropy loss. The results of the normalized kernelized similarity are shown in Table 7.

Expected relative change. We compute the average relative change of two-layer neural nets and of linear neural networks fitting the torus function. The average relative changes are 0.44 and 0.68, respectively, indicating that SGD updates of two-layer neural nets are more local than those of two-layer linear neural networks. These findings further support our local elasticity theory of neural nets.

Activation patterns. The key difference between our methods and the linear baselines is the nonlinear activation. We analyze the relation between the ReLU activation patterns of two-layer neural nets and the corresponding Euclidean distances. For each input $x$, we use $a$ to denote the binary activation pattern of the neurons. The similarity between binary activation patterns is computed with the cosine similarity function. We find that the activation similarity depends linearly on the Euclidean distance, meaning that points closer in Euclidean space have more similar activation patterns.
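The activation-pattern comparison just described can be sketched in a few lines (a minimal NumPy illustration; the network width, input dimension, and data below are invented for the example and are not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

m, d = 4096, 10                      # hidden width and input dimension (illustrative)
W = rng.normal(size=(m, d))          # first-layer weights of a toy two-layer ReLU net

def activation_pattern(x):
    """Binary ReLU activation pattern: a_r = 1{w_r^T x >= 0}."""
    return (W @ x >= 0).astype(float)

def cosine_similarity(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

x = rng.normal(size=d)
x_near = x + 0.01 * rng.normal(size=d)   # a point close to x
x_far = rng.normal(size=d)               # an unrelated point

sim_near = cosine_similarity(activation_pattern(x), activation_pattern(x_near))
sim_far = cosine_similarity(activation_pattern(x), activation_pattern(x_far))
# Nearby inputs activate almost the same set of neurons, so sim_near > sim_far.
```

This pattern similarity plays the role of the cosine similarity between binary activation vectors whose correlation with Euclidean distance is reported in Figure 7(a).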
The activation analysis of the local elasticity of neural nets is shown in Figure 7(a).

Activation functions. We experiment with another standard activation function, sigmoid, to examine its local elasticity. We find that sigmoid also provides neural nets with local elasticity. The local elasticity of two-layer neural nets with the sigmoid activation function on the torus function is shown in Figure 7(b).

Data normalization. The performance of the methods without data normalization is shown in Table 8. The results show that normalization is important for our methods, which is consistent with our analysis in Section 5.

# THEORY AND EVALUATION METRICS FOR LEARNING DISENTANGLED REPRESENTATIONS
Kien Do and Truyen Tran

Applied AI Institute, Deakin University, Geelong, Australia

{dkdo,truyen.tran}@deakin.edu.au

# ABSTRACT

We make two theoretical contributions to disentanglement learning by (a) defining precise semantics of disentangled representations, and (b) establishing robust metrics for evaluation. First, we characterize the concept "disentangled representations" used in supervised and unsupervised methods along three dimensions—informativeness, separability and interpretability—which can be expressed and quantified explicitly using information-theoretic constructs. This helps explain the behaviors of several well-known disentanglement learning models. We then propose robust metrics for measuring informativeness, separability, and interpretability. Through a comprehensive suite of experiments, we show that our metrics correctly characterize the representations learned by different methods and are consistent with qualitative (visual) results. Thus, the metrics allow disentanglement learning methods to be compared on a fair ground. We also empirically uncovered new interesting properties of VAE-based methods and interpreted them with our formulation. These findings are promising and hopefully will encourage the design of more theoretically driven models for learning disentangled representations.

# 1 INTRODUCTION

Disentanglement learning holds the key for understanding the world from observations, transferring knowledge across different tasks and domains, generating novel designs, and learning compositional concepts (Bengio et al., 2013; Higgins et al., 2017b; Lake et al., 2017; Peters et al., 2017; Schmidhuber, 1992). Assuming the observation $x$ is generated from latent factors $z$ via $p(x|z)$, the goal of disentanglement learning is to correctly uncover a set of independent factors $\{z_i\}$ that give rise to the observation.
While there has been considerable progress in recent years, common assumptions about disentangled representations appear to be inadequate (Locatello et al., 2019).

Unsupervised disentangling methods are highly desirable as they assume no prior knowledge about the ground truth factors. These methods typically impose constraints to encourage independence among latent variables. Examples of such constraints include forcing the variational posterior $q(z|x)$ to be similar to a factorial $p(z)$ (Burgess et al., 2018; Higgins et al., 2017a), forcing the variational aggregated prior $q(z)$ to be similar to the prior $p(z)$ (Makhzani et al., 2015), adding a total correlation loss (Kim & Mnih, 2018), forcing the covariance matrix of $q(z)$ to be close to the identity matrix (Kumar et al., 2017), and using a kernel-based measure of independence (Lopez et al., 2018). However, it remains unclear how the independence constraint affects other properties of the representation. Indeed, more independence may lead to higher reconstruction error in some models (Higgins et al., 2017a; Kim & Mnih, 2018). Worse still, the independent representations may mismatch humans' predefined concepts (Locatello et al., 2019). This suggests that supervised methods – which associate a representation (or a group of representations) $z_{i}$ with a particular ground truth factor $y_{k}$ – may be more adequate. However, most supervised methods have only been shown to perform well on toy datasets (Harsh Jha et al., 2018; Kulkarni et al., 2015; Mathieu et al., 2016) in which data are generated from a multiplicative combination of the ground truth factors. It remains unclear how they perform on real datasets.
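For intuition about the independence constraints listed above: for a Gaussian code $q(z) = \mathcal{N}(0, \Sigma)$, the total correlation has the closed form $\mathrm{TC} = \frac{1}{2}\left(\sum_i \log \Sigma_{ii} - \log \det \Sigma\right)$, which is zero exactly when $\Sigma$ is diagonal. A small sketch (the covariance matrices are made up for the example; this is not the sample-based estimator used by FactorVAE):

```python
import numpy as np

def gaussian_total_correlation(cov):
    """TC of N(0, cov): sum of marginal entropies minus the joint entropy."""
    cov = np.asarray(cov, dtype=float)
    return 0.5 * (np.sum(np.log(np.diag(cov))) - np.linalg.slogdet(cov)[1])

diag_cov = np.diag([1.0, 2.0, 0.5])            # independent dimensions
corr_cov = np.array([[1.0, 0.8],
                     [0.8, 1.0]])              # a strongly correlated pair

tc_diag = gaussian_total_correlation(diag_cov)  # 0.0: fully factorized code
tc_corr = gaussian_total_correlation(corr_cov)  # > 0: the dimensions share information
```

Penalizing this quantity (or pushing the covariance toward the identity, as in the constraints above) drives the aggregated code toward a factorized distribution.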
We believe that there are at least two major reasons for the current unsatisfying state of disentanglement learning: i) the lack of a formal notion of disentangled representations to support the design of proper objective functions (Tschannen et al., 2018; Locatello et al., 2019), and ii) the lack of robust evaluation metrics to enable a fair comparison between models, regardless of their architectures or design purposes. To that end, we contribute by formally characterizing disentangled representations along three dimensions, namely informativeness, separability and interpretability, drawing from concepts in information theory (Section 2). We then design robust quantitative metrics for these properties and argue that an ideal method for disentanglement learning should achieve high performance on these metrics (Section 3).

We run a series of experiments to demonstrate how to compare different models using our proposed metrics, showing that the quantitative results provided by these metrics are consistent with visual results (Section 4). In the process, we gain important insights about some well-known disentanglement learning methods, namely FactorVAE (Kim & Mnih, 2018), $\beta$-VAE (Higgins et al., 2017a), and AAE (Makhzani et al., 2015).

# 2 RETHINKING DISENTANGLEMENT

Inspired by (Bengio et al., 2013; Ridgeway, 2016), we adopt the notion of disentangled representation learning as "a process of decorrelating information in the data into separate informative representations, each of which corresponds to a concept defined by humans". This suggests three important properties of a disentangled representation: informativeness, separability and interpretability, which we quantify as follows:

Informativeness We formulate the informativeness of a particular representation (or a group of representations) $z_{i}$ w.r.t.
the data $x$ as the mutual information between $z_{i}$ and $x$:

$$
I(x, z_{i}) = \int_{x} \int_{z_{i}} p_{\mathcal{D}}(x)\, q(z_{i} \mid x) \log \frac{q(z_{i} \mid x)}{q(z_{i})}\, dz_{i}\, dx \tag{1}
$$

where $q(z_{i}) = \int_{x} p_{\mathcal{D}}(x)\, q(z_{i}|x)\, dx$. In order to represent the data faithfully, a representation $z_{i}$ should be informative of $x$, meaning $I(x, z_{i})$ should be large. Because $I(x, z_{i}) = H(z_{i}) - H(z_{i}|x)$, a large value of $I(x, z_{i})$ means that $H(z_{i}|x) \approx 0$, given that $H(z_{i})$ can be chosen to be relatively fixed. In other words, if $z_{i}$ is informative w.r.t. $x$, $q(z_{i}|x)$ usually has small variance. It is important to note that $I(x, z_{i})$ in Eq. 1 is defined on the variational encoder $q(z_{i}|x)$ and does not require a decoder. This implies that we do not need to minimize the reconstruction error over $x$ (e.g., in VAEs) to increase the informativeness of a particular $z_{i}$.

Separability and Independence Two representations $z_{i}, z_{j}$ are separable w.r.t. the data $x$ if they do not share common information about $x$, which can be formulated as follows:

$$
I(x, z_{i}, z_{j}) = 0 \tag{2}
$$

where $I(x, z_i, z_j)$ denotes the multivariate mutual information (McGill, 1954) between $x$, $z_i$ and $z_j$. $I(x, z_i, z_j)$ can be decomposed into standard bivariate mutual information terms as follows:

$$
I(x, z_{i}, z_{j}) = I(x, z_{i}) + I(x, z_{j}) - I(x, (z_{i}, z_{j})) = I(z_{i}, z_{j}) - I(z_{i}, z_{j} \mid x)
$$

$I(x, z_i, z_j)$ can be either positive or negative. It is positive if $z_i$ and $z_j$ contain redundant information about $x$. The meaning of a negative $I(x, z_i, z_j)$ remains elusive (Bell, 2003).

Achieving separability with respect to $x$ does not guarantee that $z_{i}$ and $z_{j}$ are separable in general.
$z_{i}$ and $z_{j}$ are fully separable or statistically independent if and only if:

$$
I(z_{i}, z_{j}) = 0 \tag{3}
$$

If we have access to all representations $z$, we can generally say that a representation $z_{i}$ is fully separable (from the other representations $z_{\neq i}$) if and only if $I(z_{i}, z_{\neq i}) = 0$.

Note that there is a trade-off between informativeness, independence and the number of latent variables, which we discuss in Appdx. A.7.

Interpretability Obtaining informative and independent representations does not guarantee interpretability by humans (Locatello et al., 2019). We argue that in order to achieve interpretability, we should provide models with a set of predefined concepts $y$. In this case, a representation $z_{i}$ is interpretable with respect to $y_{k}$ if it only contains information about $y_{k}$ (given that $z_{i}$ is separable from all other $z_{\neq i}$ and all $y_{k}$ are distinct). Full interpretability can be formulated as follows:

$$
I(z_{i}, y_{k}) = H(z_{i}) = H(y_{k}) \tag{4}
$$

Eq. 4 is equivalent to the condition that $z_{i}$ is an invertible function of $y_{k}$. If we want $z_{i}$ to generalize beyond the observed $y_{k}$ (i.e., $H(z_{i}) > H(y_{k})$), we can change the condition in Eq. 4 into:

$$
I(z_{i}, y_{k}) = H(y_{k}) \quad \text{or} \quad H(y_{k} \mid z_{i}) = 0 \tag{5}
$$

which suggests that the model should accurately predict $y_{k}$ given $z_{i}$. If $z_{i}$ satisfies the condition in Eq. 5, it is said to be partially interpretable w.r.t. $y_{k}$.

In real data, the underlying factors of variation are usually correlated. For example, men usually have beards and short hair. Therefore, it is very difficult to match independent latent variables to different ground truth factors at the same time.
We believe that in order to achieve good interpretability, we should isolate the factors and learn one at a time.

# 2.1 AN INFORMATION-THEORETIC DEFINITION OF DISENTANGLED REPRESENTATIONS

Given a dataset $\mathcal{D} = \{x_i\}_{i=1}^N$, where each data point $x$ is associated with a set of $K$ labeled factors of variation $y = \{y_1, \dots, y_K\}$, assume that there exists a mapping from $x$ to $m$ groups of latent representations $z = \{z_1, z_2, \dots, z_m\}$ which follows the distribution $q(z|x)$. Denote $q(z_i|x) = \sum_{z_{\neq i}} q(z|x)$ and $q(z_i) = \mathbb{E}_{p_{\mathcal{D}}(x)}[q(z_i|x)]$. We define disentangled representations for unsupervised cases as follows:

Definition 1 (Unsupervised). A representation or a group of representations $z_{i}$ is said to be "fully disentangled" w.r.t. a ground truth factor $y_{k}$ if $z_{i}$ is fully separable (from $z_{\neq i}$) and $z_{i}$ is fully interpretable w.r.t. $y_{k}$. Mathematically, this can be written as:

$$
I(z_{i}, z_{\neq i}) = 0 \quad \text{and} \quad I(z_{i}, y_{k}) = H(z_{i}, y_{k}) \tag{6}
$$

The definition of disentangled representations for supervised cases is similar, except that we now model $q(z|x,y)$ instead of $q(z|x)$ and $q(z) = \sum_{x,y} p_{\mathcal{D}}(x,y)\, q(z|x,y)$.

Recently, several works (Eastwood & Williams, 2018; Higgins et al., 2018; Ridgeway & Mozer, 2018) have attempted to define disentangled representations. Higgins et al.
(2018) proposed a definition based on group theory (Cohen & Welling, 2014) which can be (informally) stated as follows: "A representation $z$ is disentangled w.r.t. a particular subgroup $y_{k}$ (from a symmetry group $y = \{y_{k}\}_{k=1}^{K}$) if $z$ can be decomposed into different subspaces $\{z_{i}\}_{i=1}^{H}$ in which the subspace $z_{i}$ is independent of all other representation subspaces $z_{\neq i}$, and $z_{i}$ is only affected by the action of the single subgroup $y_{k}$ and not by the other subgroups $y_{\neq k}$." Their definition shares similar observations with ours. However, it is less convenient for designing models and metrics than our information-theoretic definition.

Eastwood & Williams (2018) did not provide an explicit definition of disentangled representations but characterized them along three dimensions, namely "disentanglement", "completeness", and "informativeness" (between $z$ and $y_{k}$). A high "disentanglement" score ($\approx 1$) for $z_{i}$ indicates that it captures at most one factor, say $y_{k}$. A high "completeness" score ($\approx 1$) for $y_{k}$ indicates that it is captured by at most one latent $z_{j}$, and $j$ is likely to be $i$. A high "informativeness" score1 for $y_{k}$ indicates that all information of $y_{k}$ is captured by the representations $z$. Intuitively, when all three notions achieve their optimal values, there should be a single representation $z_{i}$ that captures all information of the factor $y_{k}$ but no information from other factors $y_{\neq k}$. However, even in that case, $z_{i}$ is still not fully interpretable w.r.t. $y_{k}$, since $z_{i}$ may contain some information in $x$ that does not appear in $y_{k}$. This makes their notions only applicable to toy datasets for which we know that the data $x$ are generated only from the predefined ground truth factors $y = \{y_{k}\}_{k=1}^{K}$.
Our definition can handle the situation where we only know some but not all factors of variation in the data. The notions in (Ridgeway & Mozer, 2018) follow those in (Eastwood & Williams, 2018) and hence suffer from the same disadvantage.

# 3 ROBUST EVALUATION METRICS

We argue that a robust metric for disentanglement should meet the following criteria: i) it supports both supervised and unsupervised models; ii) it can be applied to real datasets; iii) it is computationally straightforward, i.e., it does not require any training procedure; iv) it provides consistent results across different methods and different latent representations; and v) it agrees with qualitative (visual) results. Here we propose information-theoretic metrics to measure informativeness, independence and interpretability which meet all of these robustness criteria.

# 3.1 METRICS FOR INFORMATIVENESS

We measure the informativeness of a particular representation $z_{i}$ w.r.t. $x$ by computing $I(x, z_{i})$. If $z_{i}$ is discrete, we can compute $I(x, z_{i})$ exactly by using Eq. 1 with the integral replaced by a sum. If $z_{i}$ is continuous, we estimate $I(x, z_{i})$ via sampling or quantization. Details about these estimations are provided in Appdx. A.10.

If $H(z_{i})$ is estimated via quantization, we will have $0 \leq I(x, z_{i}) \leq H(z_{i})$. In this case, we can divide $I(x, z_{i})$ by $H(z_{i})$ to normalize it to the range [0, 1]. However, this normalization may change the interpretation of the metric and lead to a situation where a representation $z_{i}$ is less informative than $z_{j}$ (i.e., $I(x, z_{i}) < I(x, z_{j})$) but still has a higher rank than $z_{j}$ because $H(z_{i}) < H(z_{j})$. A better way is to divide $I(x, z_{i})$ by $\log(\#\mathrm{bins})$.

# 3.2 METRICS FOR SEPARABILITY AND INDEPENDENCE

MISJED We can characterize the independence between two latent variables $z_{i}$, $z_{j}$ based on $I(z_{i}, z_{j})$.
However, a serious problem of $I(z_{i}, z_{j})$ is that it induces the following order among pairs of representations:

$$
I(z_{\mathrm{f},i}, z_{\mathrm{f},j}) > I(z_{\mathrm{f},i}, z_{\mathrm{n},j}) > I(z_{\mathrm{n},i}, z_{\mathrm{n},j}) \geq 0
$$

where $z_{\mathrm{f},i}, z_{\mathrm{f},j}$ are informative representations and $z_{\mathrm{n},i}, z_{\mathrm{n},j}$ are uninformative (or noisy) representations. This means that if we simply want $z_{i}, z_{j}$ to be independent, the best scenario is that both are noisy and independent (e.g., $q(z_{i}|x) \approx q(z_{j}|x) \approx \mathcal{N}(0, \mathrm{I})$). Therefore, we propose a new metric for independence named MISJED (which stands for Mutual Information Sums Joint Entropy Difference), defined as follows:

$$
\begin{array}{rl}
\operatorname{MISJED}(z_{i}, z_{j}) = \tilde{I}(z_{i}, z_{j}) &= H(z_{i}) + H(z_{j}) - H(\bar{z}_{i}, \bar{z}_{j}) \\
&= H(z_{i}) + H(z_{j}) - H(z_{i}, z_{j}) + H(z_{i}, z_{j}) - H(\bar{z}_{i}, \bar{z}_{j}) \\
&= I(z_{i}, z_{j}) + H(z_{i}, z_{j}) - H(\bar{z}_{i}, \bar{z}_{j}) \tag{7}
\end{array}
$$

where $\bar{z}_i = \mathbb{E}_{q(z_i|x)}[z_i]$ and $q(\bar{z}_i) = \mathbb{E}_{p_{\mathcal{D}}(x)}[q(\bar{z}_i|x)]$. Since $q(\bar{z}_i)$ and $q(\bar{z}_j)$ have less variance than $q(z_i)$ and $q(z_j)$, respectively, $H(z_i, z_j) - H(\bar{z}_i, \bar{z}_j) \geq 0$, making $\tilde{I}(z_i, z_j) \geq 0$.

To achieve a small value of $\tilde{I}(z_i, z_j)$, the two representations $z_i$, $z_j$ should be both independent and informative (or, in the extreme case, deterministic given $x$).
Using the MISJED metric, we can ensure the following order: $0 \leq \tilde{I}(z_{\mathrm{f},i}, z_{\mathrm{f},j}) < \tilde{I}(z_{\mathrm{f},i}, z_{\mathrm{n},j}) < \tilde{I}(z_{\mathrm{n},i}, z_{\mathrm{n},j})$. If $H(z_i)$, $H(z_j)$, and $H(\bar{z}_i, \bar{z}_j)$ in Eq. 7 are estimated via quantization, we will have $\tilde{I}(z_i, z_j) \leq H(z_i) + H(z_j) \leq 2\log(\#\mathrm{bins})$. In this case, we can divide $\tilde{I}(z_i, z_j)$ by $2\log(\#\mathrm{bins})$ to normalize it to [0, 1].

WSEPIN and WINDIN A theoretically correct way to verify that a particular representation $z_{i}$ is both separable from the other $z_{\neq i}$ and informative w.r.t. $x$ is to consider the amount of information in $x$ but not in $z_{\neq i}$ that $z_{i}$ contains. This quantity is the conditional mutual information between $x$ and $z_{i}$ given $z_{\neq i}$, which can be decomposed as follows:

$$
\begin{array}{rl}
I(x, z_{i} \mid z_{\neq i}) &= I(x, z_{i}) - I(x, z_{i}, z_{\neq i}) \\
&= I(x, z_{i}) - \left(I(z_{i}, z_{\neq i}) - I(z_{i}, z_{\neq i} \mid x)\right) \\
&= I(x, z_{i}) - I(z_{i}, z_{\neq i}) + I(z_{i}, z_{\neq i} \mid x) \tag{8}
\end{array}
$$

$I(x, z_{i} \mid z_{\neq i})$ is useful for measuring how disentangled a representation $z_{i}$ is in the absence of ground truth factors. $I(x, z_{i} \mid z_{\neq i})$ is close to 0 if $z_{i}$ is completely noisy and is high if $z_{i}$ is disentangled2.

For models that use factorized encoders, $z_{i}$ and $z_{\neq i}$ are usually assumed to be independent given $x$; hence, $I(z_{i}, z_{\neq i} \mid x) \approx 0$ and $I(x, z_{i} \mid z_{\neq i}) \approx I(x, z_{i}) - I(z_{i}, z_{\neq i})$, which is simply the difference between the informativeness and the full separability of $z_{i}$. For models that use auto-regressive encoders, $I(z_{i}, z_{\neq i} \mid x) > 0$, which means $z_{i}$ and $z_{\neq i}$ can share information not in $x$.
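With discrete (or quantized) codes, the conditional mutual information above can be computed directly from joint entropies via $I(x, z_i \mid z_{\neq i}) = H(x, z_{\neq i}) + H(z_i, z_{\neq i}) - H(x, z_i, z_{\neq i}) - H(z_{\neq i})$. A toy sanity check (the variables are invented for the example) in which $z_1$ and $z_2$ each capture one independent bit of $x$:

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of a sequence of hashable outcomes."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

xs = list(range(4)) * 1000        # x uniform on {0, 1, 2, 3}: H(x) = 2 bits
z1 = [x % 2 for x in xs]          # z1 captures the low bit of x
z2 = [x // 2 for x in xs]         # z2 captures the high bit of x

def cond_mi(x, zi, zrest):
    """I(x, z_i | z_rest) from joint entropies of empirical samples."""
    return (entropy(list(zip(x, zrest))) + entropy(list(zip(zi, zrest)))
            - entropy(list(zip(x, zi, zrest))) - entropy(zrest))

print(cond_mi(xs, z1, z2))  # 1.0: z1 carries one bit about x that z2 lacks
```

Since $z_1$ here is both informative and separable from $z_2$, the quantity equals exactly one bit, as Eq. 8 predicts ($I(x, z_1) = 1$, $I(z_1, z_2) = 0$, $I(z_1, z_2 \mid x) = 0$).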
We can also compute $I(x, z_i \mid z_{\neq i})$ in a different way:

$$
\begin{array}{rl}
I(x, z_{i} \mid z_{\neq i}) &= I(x, (z_{i}, z_{\neq i})) - I(x, z_{\neq i}) \\
&= I(x, z) - I(x, z_{\neq i})
\end{array}
$$

If we want $z_{i}$ to be both independent of $z_{\neq i}$ and informative w.r.t. $x$, we can use only the first two terms in Eq. 8 to derive another quantitative measure:

$$
\begin{array}{rl}
\hat{I}(x, z_{i} \mid z_{\neq i}) &= I(x, z_{i}) - I(z_{i}, z_{\neq i}) \\
&= I(x, z_{i} \mid z_{\neq i}) - I(z_{i}, z_{\neq i} \mid x) \tag{9}
\end{array}
$$

However, unlike $I(x, z_i \mid z_{\neq i})$, $\hat{I}(x, z_i \mid z_{\neq i})$ can be negative.

To normalize $I(x, z_i \mid z_{\neq i})$, we divide it by $H(z_i)$ (where $H(z_i)$ must be estimated via quantization). Note that taking the average of $I(x, z_i \mid z_{\neq i})$ over all representations to derive a single metric for the whole model is not appropriate, because models with more noisy latent variables would be less favored. For example, if model A has 10 latent variables (5 of them disentangled and 5 noisy) and model B has 20 latent variables (5 of them disentangled and 15 noisy), B will always be considered worse than A despite the fact that both are equivalent in terms of disentanglement (since 5 disentangled representations are enough to capture all information in $x$, additional latent variables should be noisy). We propose two solutions to this issue. In the first approach, we sort $I(x, z_i \mid z_{\neq i})$ over all representations in descending order and only take the average over the top $k$ latents (or groups of latents).
This leads to a metric called SEPIN@$k$3, which is similar to Precision@$k$:

$$
\operatorname{SEPIN}@k = \frac{1}{k} \sum_{i=1}^{k} I(x, z_{r_{i}} \mid z_{\neq r_{i}})
$$

where $r_1, \ldots, r_L$ are the rank indices of the $L$ latent variables obtained by sorting $I(x, z_i \mid z_{\neq i})$ ($i = 1, \ldots, L$) in descending order.

In the second approach, we compute the average over all $L$ representations $z_1, \ldots, z_L$ weighted by their informativeness to derive a metric called WSEPIN:

$$
\operatorname{WSEPIN} = \sum_{i=1}^{L} \rho_{i}\, I(x, z_{i} \mid z_{\neq i})
$$

where $\rho_{i} = \frac{I(x, z_{i})}{\sum_{j=1}^{L} I(x, z_{j})}$. If $z_{i}$ is a noisy representation, $I(x, z_{i}) \approx 0$; thus, $z_{i}$ contributes almost nothing to the final WSEPIN.

Similarly, using the measure $\hat{I}(x, z_i \mid z_{\neq i})$ in Eq. 9, we can derive two other metrics, INDIN@$k$4 and WINDIN:

$$
\operatorname{INDIN}@k = \frac{1}{k} \sum_{i=1}^{k} \hat{I}(x, z_{r_{i}} \mid z_{\neq r_{i}}) \quad \text{and} \quad \operatorname{WINDIN} = \sum_{i=1}^{L} \rho_{i}\, \hat{I}(x, z_{i} \mid z_{\neq i})
$$

# 3.3 METRICS FOR INTERPRETABILITY

Recently, several metrics have been proposed to quantitatively evaluate the interpretability of representations by examining the relationship between the representations and manually labeled factors of variation. The most popular ones are the Z-diff score (Higgins et al., 2017a; Kim & Mnih, 2018), SAP (Kumar et al., 2017), and MIG (Chen et al., 2018). Among them, only MIG is theoretically sound and provides a correct computation of $I(x, z_i)$. MIG also matches our formulation of "interpretability" in Section 2 to some extent.
However, MIG has only been used for toy datasets

![](images/846b010d79f2096b262ed328be11c8570d61d4205969acb7318fb68e11540890.jpg)
(a) Unsupervised

![](images/6f8d14a01a5b0860991faf51a1148977fe49244ecc1c37d04e06169fcea73a4c.jpg)
Figure 1: Differences in the probabilistic assumptions of MIG and Robust MIG.

![](images/1b48079d0fd75fa8315362b86a90c0180f4d7b4ae2ed38c35d7a17557f908ce4.jpg)
(b) Supervised

![](images/eac76768e30e679a230eaf05e571a14afb4820f24d77c39e4573c04ef7f4154c.jpg)

like dSprites (Matthey et al., 2017). The main drawback comes from its probabilistic assumption $p(z_{i}, y_{k}, x^{(n)}) = q(z_{i}|x^{(n)})\, p(x^{(n)}|y_{k})\, p(y_{k})$ (see Fig. 1). Note that $p(x^{(n)}|y_k)$ is a distribution over the high-dimensional data space and is very hard to estimate robustly, so the authors simplified it to $p(n|y_k)$ if $x^{(n)} \in \mathcal{D}_{y_k}$ (where $\mathcal{D}_{y_k}$ is the support set for a particular value $y_{k}$) and 0 otherwise. This simplification only holds for toy datasets where we know exactly how $x$ is generated from $y$. In addition, since $p(n|y_k)$ depends on the value of $y_{k}$, it becomes problematic if $y_{k}$ is continuous.

RMIG Addressing the drawbacks of MIG, we propose RMIG (which stands for Robust MIG), formulated as follows:

$$
\operatorname{RMIG}(y_{k}) = I(z_{i^{*}}, y_{k}) - I(z_{j^{\circ}}, y_{k}) \tag{10}
$$

where $I(z_{i^*}, y_k)$ and $I(z_{j^\circ}, y_k)$ are the highest and the second-highest mutual information values computed between every $z_i$ and $y_k$, and $z_{i^*}$ and $z_{j^\circ}$ are the corresponding latent variables. Like MIG, we can normalize $\operatorname{RMIG}(y_k)$ to [0, 1] by dividing it by $H(y_k)$, but this favors imbalanced factors (small $H(y_k)$).

RMIG inherits the idea of MIG but differs in the probabilistic assumption (and other technicalities).
RMIG assumes that $p(z_i, y_k, x^{(n)}) = q(z_i | x^{(n)}) p(y_k | x^{(n)}) p(x^{(n)})$ for unsupervised learning and $p(z_i, y_k, x^{(n)}) = q(z_i | y_k^{(n)}, x^{(n)}) p(y_k^{(n)}, x^{(n)})$ for supervised learning (see Fig. 1). Not only does this eliminate all the problems of MIG, but it also provides additional advantages. First, we can estimate $q(z_i, y_k)$ using Monte Carlo sampling on $p(x^{(n)})$. Second, $p(y_k | x^{(n)})$ is well defined for both discrete/continuous $y_k$ and deterministic/stochastic $p(y_k | x^{(n)})$. If $y_k$ is continuous, we can quantize $p(y_k | x^{(n)})$. If $p(y_k | x^{(n)})$ is deterministic (i.e., a Dirac delta function), we simply set it to 1 for the value of $y_k$ corresponding to $x^{(n)}$ and 0 for other values of $y_k$. Our metric can also use $p(y_k | x^{(n)})$ from an external expert model. Third, for any particular value $y_k$, we compute $q(z_i | x^{(n)})$ for all $x^{(n)} \in \mathcal{D}$ rather than just for $x^{(n)} \in \mathcal{D}_{y_k}$, which gives more accurate results.

JEMMIG A high RMIG value of $y_{k}$ means that there is a representation $z_{i^{*}}$ that captures the factor $y_{k}$. However, $z_{i^{*}}$ may also capture other factors $y_{\neq k}$ of the data. To make sure that $z_{i^{*}}$ fits exactly to $y_{k}$, we provide another metric for interpretability named JEMMIG (standing for Joint Entropy Minus Mutual Information Gap), computed as follows:

$$
\mathrm{JEMMIG}(y_{k}) = H\left(z_{i^{*}}, y_{k}\right) - I\left(z_{i^{*}}, y_{k}\right) + I\left(z_{j^{\circ}}, y_{k}\right)
$$

where $I(z_{i^*},y_k)$ and $I(z_{j^\circ},y_k)$ are defined in Eq. 10.

If we estimate $H(z_{i^*}, y_k)$ via quantization, we can bound $\mathrm{JEMMIG}(y_k)$ between 0 and $H(y_k) + \log(\#\text{bins})$ (please check Appdx. A.12 for details). A small $\mathrm{JEMMIG}(y_k)$ score means that $z_{i^*}$ matches exactly to $y_k$ and that $z_{j^{\circ}}$ is not related to $y_k$.
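Given the vector of estimates $I(z_i, y_k)$ for one factor $y_k$, plus an estimate of the joint entropy $H(z_{i^*}, y_k)$ for the top latent, both scores are direct to compute. A minimal NumPy sketch under those assumptions (the function and argument names are ours):

```python
import numpy as np

def rmig_jemmig(I_zy, H_joint_top):
    """RMIG(y_k) and JEMMIG(y_k) for a single factor y_k.

    I_zy        : length-L array, estimates of I(z_i, y_k) for every latent z_i
    H_joint_top : scalar estimate of H(z_{i*}, y_k) for the best latent z_{i*}
    """
    I_zy = np.asarray(I_zy, dtype=float)
    order = np.argsort(I_zy)[::-1]
    i_star, j_circ = order[0], order[1]           # best and runner-up latents
    rmig = float(I_zy[i_star] - I_zy[j_circ])     # mutual information gap
    jemmig = float(H_joint_top - I_zy[i_star] + I_zy[j_circ])
    return rmig, jemmig
```

Averaging these per-factor scores over all $K$ factors gives the model-level RMIG and JEMMIG.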
Thus, we can use $\mathrm{JEMMIG}(y_k)$ to validate whether or not a model learns disentangled representations w.r.t. a ground truth factor $y_k$, which satisfies the definition in Section 2.1. Note that if we replace $H(z_{i^*}, y_k)$ by $H(y_k)$ to account for the generalization of $z_{i^*}$ over $y_k$, we obtain a metric equivalent to RMIG (but in reverse order).

To compute RMIG and JEMMIG for the whole model, we simply take the average of $\mathrm{RMIG}(y_k)$ and $\mathrm{JEMMIG}(y_k)$ over all $y_{k}$ ($k = 0, \ldots, K-1$) as follows:

$$
\mathrm{RMIG} = \frac{1}{K} \sum_{k = 0}^{K - 1} \mathrm{RMIG}\left(y_{k}\right) \quad \text{and} \quad \mathrm{JEMMIG} = \frac{1}{K} \sum_{k = 0}^{K - 1} \mathrm{JEMMIG}\left(y_{k}\right)
$$

# 3.4 COMPARISON WITH EXISTING METRICS

In Table 1, we compare our proposed metrics with existing metrics for learning disentangled representations. For a deeper analysis of these metrics, we refer readers to Appdx. A.8. One can easily see that only our metrics satisfy the aforementioned robustness criteria. Most other metrics (except MIG and Modularity) use classifiers, which can cause inconsistent results once the settings of the classifiers change. Moreover, most other metrics (except MIG) use $\mathbb{E}_{q(z_i|x)}[z_i]$ instead of $q(z_{i}|x)$ for computing mutual information. This can lead to inaccurate evaluation results since $\mathbb{E}_{q(z_i|x)}[z_i]$ is theoretically different from $z_{i}\sim q(z_{i}|x)$. Among all metrics, JEMMIG is the only one that can quantify the "disentangled representations" defined in Section 2.1 on its own.
| Metrics | #classifiers | classifier | nonlinear relationship | use $q(z_i \mid x)$ | continuous factors | real data |
| --- | --- | --- | --- | --- | --- | --- |
| Z-diff | 1 | linear/majority-vote | × | × | × | × |
| SAP | L × K | threshold value | × | × | ✓ | ✓ |
| MIG | 0 | none | ✓ | ✓ | × | × |
| Disentanglement / Completeness / Informativeness | K | LASSO/random forest | ×/✓ | × | × | × |
| Modularity | 0 | none | ✓ | × | ✓ | × |
| Explicitness | K | one-vs-rest logistic regressor | × | × | × | × |
| WSEPIN† | 0 | none | ✓ | ✓ | ✓ | ✓ |
| WINDIN† | 0 | none | ✓ | ✓ | ✓ | ✓ |
| RMIG | 0 | none | ✓ | ✓ | ✓ | ✓ |
| JEMMIG* | 0 | none | ✓ | ✓ | ✓ | ✓ |
Table 1: Analysis of different metrics for disentanglement learning. $L$ and $K$ are the numbers of latent variables and ground truth factors, respectively. Metrics marked with * are self-contained. Metrics marked with † do not require ground truth factors of variation.

# 4 EXPERIMENTS

We use our proposed metrics to evaluate three representation learning methods, namely FactorVAE (Kim & Mnih, 2018), $\beta$-VAE (Higgins et al., 2017a) and AAE (Makhzani et al., 2015), on both a real and a toy dataset: CelebA (Liu et al., 2015) and dSprites (Matthey et al., 2017), respectively. A brief discussion of these models is given in Appdx. A.1. We would like to show the following points: i) how to compare models based on our metrics; ii) the advantages of our metrics compared to other metrics; iii) the consistency between quantitative results produced by our metrics and visual results; and iv) an ablation study of our metrics.

Due to space limits, we only present experiments for the first two points. The experiments for points (iii) and (iv) are put in Appdx. A.4 and Appdx. A.5, respectively. Details about the datasets and model settings are provided in Appdx. A.2 and Appdx. A.3, respectively. In all figures below, "TC" refers to the $\gamma$ coefficient of the TC loss in FactorVAEs (Kim & Mnih, 2018), and "Beta" refers to the $\beta$ coefficient in $\beta$-VAEs (Higgins et al., 2017a).

Informativeness In Figs. 2a and 2b, we show the average amount of information (of $x$) that a representation $z_{i}$ contains (the mean of $I(z_{i},x)$) and the total amount of information that all representations $z$ contain ($I(z,x)$). It is clear that adding the TC term to the standard VAE loss does not affect $I(z,x)$ much (Fig. 2b). However, because $z_{i}$ and $z_{j}$ in FactorVAEs are more separable than those in standard VAEs, FactorVAEs should produce smaller $I(z_{i},x)$ than standard VAEs on average (Fig. 2a).
We also see that the mean of $I(z_{i},x)$ and $I(z,x)$ consistently decrease for $\beta$-VAEs with higher $\beta$.

![](images/01c945387a1e50abe47af9967820453db3e5b878832a1a37e2a87f2d7a959af4.jpg)
(a) mean of $I(z_{i},x)$

![](images/8904e77e123a4d7b4fd01d4ba1e2f1e22f8762c8a03f35eecb7baae76b4d602b.jpg)
(b) $I(z,x)$

Figure 2: The informativeness and the total information of some FactorVAE and $\beta$-VAE models. For each hyperparameter, we report the mean and the standard error of 4 different runs.

![](images/36a4db508dca4a5ef17889b15b6c47b368b7b0964a63916a1c0c3ffb30eddff1.jpg)
(a) max/mean/min of $I(z_{i},z_{\neq i})$

![](images/a97c4918180c2319c0cb03bc0b99c3379e187d56e6106560cb269cd1c338d97d.jpg)
(b) WSEPIN

![](images/7821f3e2f939d2987dda1b2ed360e93e48ffdc4979e53b4ede6ce08fdeaffb88.jpg)
(c) SEPIN@3

Figure 3: $I(z_{i},z_{\neq i})$, WSEPIN and SEPIN@3 of some FactorVAE and $\beta$-VAE models.

Separability and Independence If we only evaluate models based on the separability of representations, $\beta$-VAE models with large $\beta$ are among the best. These models force latent representations to be highly separable (as shown in Fig. 3a, the max/mean/min values of $I(z_{i},z_{\neq i})$ are equally small for $\beta$-VAEs with large $\beta$). In FactorVAEs, informative representations usually have poor separability (large values of $I(z_{i},z_{\neq i})$) and noisy representations usually have perfect separability ($\approx 0$) (Fig. 4a). Increasing the weight of the TC loss improves the max and mean of $I(z_{i},z_{\neq i})$, but not significantly (Fig. 3a).

Using WSEPIN and SEPIN@3 gives us a more reasonable evaluation of the disentanglement capability of these models. In Fig. 3b, we see that $\beta$-VAE models with $\beta = 10$ achieve the highest WSEPIN and SEPIN@3 scores, which suggests that their informative representations usually contain a large amount of information about $x$ that is not shared by other representations.
However, this type of information may not associate well with the ground truth factors of variation (e.g., $z_{3}, z_{6}$ in Fig. 4c). The representations of FactorVAEs, despite containing less information of $x$ on their own, usually reflect the ground truth factors more accurately (e.g., $z_{5}, z_{8}, z_{7}$ in Fig. 4a) than those of $\beta$ -VAEs. These results suggest that ground truth factors should be used for proper evaluations of disentanglement. + +![](images/3a7ecd0abf1660ce5356baabd1665f20941f02f9a3d835b2e5b5b8405a39ef0b.jpg) +(a) FactorVAE (TC=10) +Figure 4: Visualization of the representations learned by representative FactorVAE, VAE, and $\beta$ -VAE models with separability $(I(z_{i},z_{\neq i}))$ and informativeness $(I(z_{i},x))$ scores. Representations are sorted by their separability scores. + +![](images/bd3cfb4b1032e5b84742d228c2dc4d0c564aa8b3964fcdb638f07985306331cc.jpg) +(b) VAE + +![](images/59c5ed3368847b48b7accaf0d73ab62beeb0867e1cef8f7e3914f9fefbbf9827.jpg) +(c) $\beta$ -VAE $(\beta = 10)$ + +![](images/1ce45d97e1b8f121db13552bb480b3b613fd10a97aee781a4fbd5a8e64652dd8.jpg) + +Interpretability Using JEMMIG and RMIG, we see that FactorVAE models can learn representations that are more interpretable than those learned by $\beta$ -VAE models. Surprisingly, the worst FactorVAE models (with $\mathrm{TC} = 10$ ) clearly outperform the best $\beta$ -VAE models (with $\beta = 10$ ). This result is sensible because it is accordant with the visualization in Figs. 4a and 4c. + +Comparison with Z-diff In (Chen et al., 2018), the authors have already shown that MIG is more robust than Z-diff (Higgins et al., 2017a) so we compare our metrics with MIG directly. + +![](images/24b9cb8a52fe8353705a61a1a253dc19feba2622ce8e3fdf4cade4f20cced1cc.jpg) +(a) JEMMIG + +![](images/65766e3063a1444fde69c8d498a8171db9d7d597e4c9eb3a2aec9c6be27ed6ef.jpg) +(b) RMIG +Figure 5: (a) and (b): Unnormalized JEMMIG and RMIG scores of several FactorVAE and $\beta$ -VAE models. 
(c): Correlation between JEMMIG and RMIG. + +![](images/6f5efbdb8d0021fdfa9c90020d22256cdc4a831a446d8ce5edfbbbe54b6c87e8.jpg) +(c) JEMMIG vs. RMIG + +Comparison with MIG On toy datasets like dSprites, RMIG produces similar results as MIG (Chen et al., 2018). Please check Appdx. A.13 for more details. + +![](images/30ff9620db797e8a0130fe5e61be48100d92ad63a52bfdb50d579c64593f55bc.jpg) +(a) JEMMIG vs. Disentanglement + +![](images/09200cb45b9e111df25387e4393a7a41e461de06f8897623527ecae7a8b231b7.jpg) +(b) JEMMIG vs. Completeness + +![](images/a7bb6a191fecd048cf5c90788460120ff27f490938e7f613509f56d7f02f8d44.jpg) +(c) JEMMIG vs. Error + +![](images/4e40a995b120bf53bb9604c96f2ba9d582921bd7c3c967f8733db4fb16420708.jpg) +(d) RMIG vs. Disentanglement +Figure 6: Comparison between JEMMIG/RMIG and the metrics in (Eastwood & Williams, 2018). Because the competing metrics do not apply for categorical factors (see Appdx. A.8 for detailed analysis), we exclude the "shape" factor during computation. Following (Eastwood & Williams, 2018), we use LASSO classifiers with the L1 coefficient is $\alpha = 0.002$ . Blue dots denote FactorVAE models and orange dots denote $\beta$ -VAE models. + +![](images/fa4f5c0cd59597a2521d4b757dbf07a0cd91b7d0ca0cd85e52c1b38683c0bb08.jpg) +(e) RMIG vs. Completeness + +![](images/e3ff146a507e0ea7d049c0305704183b7cd141902e4474978f5219101f0d110c.jpg) +(f) RMIG vs. Error + +Comparison with "disentanglement", "completeness" and "informativeness" In Fig. 6, we show the differences in evaluation results between JEMMIG/RMIG and the metrics in (Eastwood & Williams, 2018). We can easily see that JEMMIG and RMIG are much better than "disentanglement", "completeness" and "informativeness" (or reversed classification error) in separating FactorVAE and $\beta$ -VAE models. Among the three competing metrics, only "informativeness" (or $I(z,y_k)$ ) seems to be correlated with JEMMIG and RMIG. 
This is understandable because when most representations are independent, as in FactorVAEs and $\beta$-VAEs, we have $I(z,y_k) \approx I(z_{i^*},y_k) \approx I(z_{i^*},y_k) - I(z_{j^\circ},y_k)$. "Disentanglement" and "completeness", by contrast, are largely uncorrelated with JEMMIG and RMIG. While JEMMIG consistently grades standard VAEs ($\beta = 1$) worse than other models (Fig. 5a), "disentanglement" and "completeness" usually grade standard VAEs better than some FactorVAE models, which seems inappropriate. Moreover, since "disentanglement" and "completeness" are not well aligned, using both of them at the same time may cause confusion. For example, the model "28_beta20" has a lower "disentanglement" score yet a higher "completeness" score than the model "32_beta50" (Figs. 6a and 6b), so it is hard to know which model is better than the other at learning disentangled representations.

![](images/9722f62bbac7c95be6d99e11b6d548ab93472656b4cb032b951daf3afafc5245.jpg)
(a) Disentanglement

![](images/7a0a6fca0429f22f8c0bae80ae7b9c93b63337d2a76f8e5577e80a10e6b1e9f7.jpg)
(b) Completeness

![](images/107bf0257622daae3541960fe07ff64dda4866219d68a7277d7f748e5ea5e8de.jpg)
(c) Error

Figure 7: "disentanglement", "completeness" and "informativeness" (error) scores of several FactorVAE and $\beta$-VAE models.

![](images/c38a3c2f1999882ea020a0a10e7ef92ed4446899258582f8028b2d25445a7076.jpg)
(a) JEMMIG vs. Modularity

![](images/63539f51e150165eab7f45ef635a3f862253cc61bd2539cb8e5896e3857773e7.jpg)
(b) Modularity (original)

![](images/0eafff714d0b8c5cd62464f0914e4e542b7b3c4163c67a19b54e53285a022289.jpg)
(c) Modularity (correct)

Figure 8: (a): Comparison between JEMMIG and "modularity" (#bins=100). (b) and (c): "modularity" scores of several FactorVAE and $\beta$-VAE models. The original version computes $I(z_{i},y_{k})$ using $\mathbb{E}_{q(z_i|x)}[z_i]$ while the correct version computes $I(z_{i},y_{k})$ using $q(z_{i}|x)$.

From Figs.
7a and 7b, we see that "disentanglement" and "completeness" blindly favor $\beta$-VAE models with high $\beta$, without accounting for the fact that representations in these models are less informative than representations in FactorVAEs (Fig. 7c). Thus, they are not good for characterizing disentanglement in general.

"Disentanglement" and "completeness" are computed from a weight matrix under the assumption that the weight magnitudes for noisy representations are close to 0. However, this assumption is often broken in practice and may therefore lead to inaccurate results (please check Appdx. A.9 for details).

Comparison with "modularity" "Modularity" and "explicitness" (Ridgeway & Mozer, 2018) are similar in concept to "disentanglement" and "informativeness" (Eastwood & Williams, 2018), respectively. However, they differ in formulation. We exclude "explicitness" from our experiment because computing it on dSprites is time-consuming. In Fig. 8a, we show the correlation between JEMMIG and "modularity". We consider two versions of "modularity". In the first version (Fig. 8b), $I(z_{i},y_{k})$ is computed from the mean of $z_{i} \sim q(z_{i}|x)$. This is the original implementation provided by (Ridgeway & Mozer, 2018). In the second version (Fig. 8c), $I(z_{i},y_{k})$ is computed from $q(z_{i}|x)$. We can see that in either case, "modularity" often gives higher scores to standard VAEs than to FactorVAEs. This means that, like "disentanglement", "modularity" by itself does not fully specify the "disentangled representations" defined in Section 2.1.

# 5 CONCLUSION

We have proposed an information-theoretic definition of disentangled representations and designed robust metrics for evaluation along three dimensions: informativeness, separability and interpretability. We carefully analyzed the properties of our metrics using well-known representation learning models, namely FactorVAE, $\beta$-VAE and AAE, on both real and toy datasets.
Compared with existing metrics, our metrics are more robust and produce more sensible evaluations that are consistent with visual results. Based on our definition of disentangled representations in Section 2.1, WSEPIN and JEMMIG are the two key metrics when ground truth labels are unavailable and available, respectively.

# REFERENCES

Error function. https://en.wikipedia.org/wiki/Error_function, May 2019.
Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.
Anthony J Bell. The co-information lattice. In Proceedings of the Fifth International Workshop on Independent Component Analysis and Blind Signal Separation: ICA, volume 2003, 2003.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.
Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in beta-vae. arXiv preprint arXiv:1804.03599, 2018.
Tian Qi Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. arXiv preprint arXiv:1802.04942, 2018.
Taco Cohen and Max Welling. Learning the irreducible representations of commutative lie groups. In International Conference on Machine Learning, pp. 1755-1763, 2014.
Cian Eastwood and Christopher KI Williams. A framework for the quantitative evaluation of disentangled representations. In International Conference on Learning Representations, 2018.
Ananya Harsh Jha, Saket Anand, Maneesh Singh, and VSR Veeravasarapu. Disentangling factors of variation with cycle-consistent variational auto-encoders. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 805-820, 2018.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner.
Beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017a.
Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, and Alexander Lerchner. Scan: Learning hierarchical compositional visual concepts. arXiv preprint arXiv:1707.03389, 2017b.
Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230, 2018.
Hyunjik Kim and Andriy Mnih. Disentangling by factorising. ICML, 2018.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pp. 2539-2547, 2015.
Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848, 2017.
Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Scholkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. ICML, 2019.

Romain Lopez, Jeffrey Regier, Michael I Jordan, and Nir Yosef. Information constraints on auto-encoding variational bayes. In Advances in Neural Information Processing Systems, pp. 6114-6125, 2018.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pp. 5040-5048, 2016.
Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dSprites-dataset/, 2017.
William McGill. Multivariate information transmission. Transactions of the IRE Professional Group on Information Theory, 4(4):93-111, 1954.
Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of causal inference: foundations and learning algorithms. MIT Press, 2017.
Karl Ridgeway. A survey of inductive biases for factorial representation learning. arXiv preprint arXiv:1612.05299, 2016.
Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. In Advances in Neural Information Processing Systems, pp. 185-194, 2018.
Michal Rolinek, Dominik Zietlow, and Georg Martius. Variational autoencoders pursue pca directions (by accident). arXiv preprint arXiv:1812.06775, 2018.
Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.
Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611-622, 1999.
Michael Tschannen, Olivier Bachem, and Mario Lucic. Recent advances in autoencoder-based representation learning. arXiv preprint arXiv:1812.05069, 2018.
# A APPENDIX

# A.1 REVIEW OF FACTORVAES, $\beta$-VAES AND AAES

Standard VAEs are trained by minimizing the variational upper bound $\mathcal{L}^{\mathrm{VAE}}$ of $-\log p_{\theta}(x)$ as follows:

$$
\mathcal{L}^{\mathrm{VAE}} = \mathbb{E}_{p_{\mathcal{D}}(x)}\left[\mathbb{E}_{q_{\phi}(z|x)}[-\log p_{\theta}(x|z)] + D_{KL}\left(q_{\phi}(z|x) \| p(z)\right)\right] \tag{11}
$$

where $q_{\phi}(z|x)$ is an amortized variational posterior distribution. However, this objective does not lead to disentangled representations (Higgins et al., 2017a).

$\beta$-VAEs (Higgins et al., 2017a) penalize the KL term in the original VAE loss more heavily with a coefficient $\beta \gg 1$:

$$
\mathcal{L}^{\beta\text{-VAE}} = \mathbb{E}_{p_{\mathcal{D}}(x)}\left[\mathbb{E}_{q_{\phi}(z|x)}[-\log p_{\theta}(x|z)] + \beta D_{KL}\left(q_{\phi}(z|x) \| p(z)\right)\right]
$$

Since $\mathbb{E}_{p_{\mathcal{D}}(x)}[D_{KL}(q_{\phi}(z|x)\| p(z))] = I_{\phi}(x,z) + D_{KL}(q_{\phi}(z)\| p(z))$, more penalty on the KL term encourages $q_{\phi}(z)$ to be factorized but also forces $z$ to discard more information in $x$.

FactorVAEs (Kim & Mnih, 2018) add a constraint to the standard VAE loss to explicitly impose factorization of $q_{\phi}(z)$:

$$
\mathcal{L}^{\mathrm{FactorVAE}} = \mathcal{L}^{\mathrm{VAE}} + \gamma D_{KL}\left(q_{\phi}(z) \,\Big\|\, \prod_{i} q_{i}\left(z_{i}\right)\right) \tag{12}
$$

where $D_{KL}(q_{\phi}(z)\| \prod_{i}q_{i}(z_{i}))\geq 0$ is known as the total correlation (TC) of $z$. Intuitively, $\gamma$ can be large without affecting the mutual information between $z$ and $x$, making FactorVAE more robust than $\beta$-VAE in learning disentangled representations. Other related models that share similar ideas with FactorVAEs are $\beta$-TCVAEs (Chen et al., 2018) and DIP-VAEs (Kumar et al., 2017).
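To make the objectives above concrete, here is a minimal NumPy sketch of the $\beta$-VAE loss for a diagonal Gaussian posterior, using the closed-form KL to $\mathcal{N}(0,\mathrm{I})$ (the function and argument names are ours; the reconstruction term is assumed to be given as a per-example negative log-likelihood):

```python
import numpy as np

def beta_vae_loss(recon_nll, mu, logvar, beta=50.0):
    """E[-log p(x|z)] + beta * KL(q(z|x) || N(0, I)), averaged over a batch.

    recon_nll  : shape (B,), negative log-likelihood of each reconstruction
    mu, logvar : shape (B, L), diagonal Gaussian posterior parameters
    """
    # Closed-form KL(N(mu, diag(exp(logvar))) || N(0, I)), summed over latents.
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(axis=1)
    return float((np.asarray(recon_nll) + beta * kl).mean())
```

Setting `beta=1.0` recovers the standard VAE bound of Eq. 11; the FactorVAE loss of Eq. 12 instead adds a separate total-correlation penalty, which has no closed form and is estimated adversarially with a discriminator in (Kim & Mnih, 2018).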
The loss of AAEs (Makhzani et al., 2015) is derived from the standard VAE loss by removing the term $I_{\phi}(x,z)$:

$$
\mathcal{L}^{\mathrm{AAE}} = \mathbb{E}_{p_{\mathcal{D}}(x)}\left[\mathbb{E}_{q_{\phi}(z|x)}\left[-\log p_{\theta}(x|z)\right]\right] + D_{KL}\left(q_{\phi}(z) \| p(z)\right)
$$

Different from the losses of $\beta$-VAEs and FactorVAEs, the AAE loss is not a valid upper bound on $-\log p_{\theta}(x)$.

# A.2 DATASETS

The CelebA dataset (Liu et al., 2015) consists of more than 200 thousand face images with 40 binary attributes. We resize these images to $64 \times 64$. The dSprites dataset (Matthey et al., 2017) is a toy dataset generated from 5 different factors of variation: "shape" (3 values), "scale" (6 values), "rotation" (40 values), "x-position" (32 values), and "y-position" (32 values). Statistics of these datasets are provided in Table 2.
| Dataset | #Train | #Test | Image size |
| --- | --- | --- | --- |
| CelebA | 162,770 | 19,962 | 64×64×3 |
| dSprites | 737,280 | 0 | 64×64×1 |
Table 2: Summary of datasets used in experiments.

# A.3 MODEL SETTINGS

For FactorVAE, $\beta$-VAE and AAE, we used the same architectures for the encoder and decoder (see Table 3 and Table 4), following (Kim & Mnih, 2018). We trained the models for 300 epochs with mini-batches of size 64. The learning rate is $10^{-3}$ for the encoder/decoder and $10^{-4}$ for the discriminator over $z$. We used the Adam (Kingma & Ba, 2014) optimizer with $\beta_{1} = 0.5$ and $\beta_{2} = 0.99$. Unless explicitly mentioned, we use the following default settings: i) for CelebA: the number of latent variables is 65, the TC coefficient in FactorVAE is 50, the value of $\beta$ in $\beta$-VAE is 50, and the coefficient for the generator loss over $z$ in AAE is 50; ii) for dSprites: the number of latent variables is 10.
| Encoder | Decoder | Discriminator Z |
| --- | --- | --- |
| x dims: 64×64×3 | z dim: 65 | z dim: 65 |
| conv (4, 4, 32), stride 2, ReLU | FC 1×1×256, ReLU | 5×[FC 1000, LReLU] |
| conv (4, 4, 32), stride 2, ReLU | deconv (4, 4, 64), stride 1, valid, ReLU | FC 1 |
| conv (4, 4, 64), stride 2, ReLU | deconv (4, 4, 64), stride 2, ReLU | D(z): 1 |
| conv (4, 4, 64), stride 2, ReLU | deconv (4, 4, 32), stride 2, ReLU | |
| conv (4, 4, 256), stride 1, valid, ReLU | deconv (4, 4, 32), stride 2, ReLU | |
| FC 65 | deconv (4, 4, 3), stride 2, ReLU | |
| z dim: 65 | x dim: 64×64×3 | |
+ +Table 3: Model architectures for CelebA. + +
| Encoder | Decoder | Discriminator Z |
| --- | --- | --- |
| x dims: 64×64×1 | z dim: 10 | z dim: 10 |
| conv (4, 4, 32), stride 2, ReLU | FC 128, ReLU | 5×[FC 1000, LReLU] |
| conv (4, 4, 32), stride 2, ReLU | FC 4×4×64, ReLU | FC 1 |
| conv (4, 4, 64), stride 2, ReLU | deconv (4, 4, 64), stride 2, ReLU | D(z): 1 |
| conv (4, 4, 64), stride 2, ReLU | deconv (4, 4, 32), stride 2, ReLU | |
| FC 128, ReLU | deconv (4, 4, 32), stride 2, ReLU | |
| FC 10 | deconv (4, 4, 1), stride 2, ReLU | |
| z dim: 10 | x dim: 64×64×1 | |
Table 4: Model architecture for dSprites.

# A.4 CONSISTENCY BETWEEN QUANTITATIVE AND QUALITATIVE RESULTS

# A.4.1 CELEBA

Informativeness We sorted the representations of different models by their informativeness scores in descending order and plot the results in Fig. 9. There are distinct patterns for the different methods. AAE captures equally large amounts of information from the data, while FactorVAE and $\beta$-VAE capture smaller and varying amounts. This is because FactorVAE and $\beta$-VAE penalize the informativeness of representations while AAE does not. Recall that $I(z_{i},x) = H(z_{i}) - H(z_{i}|x)$. For AAE, $H(z_{i}|x) = 0$ and $H(z_{i})$ is equal to the entropy of $\mathcal{N}(0,\mathrm{I})$. For FactorVAE and $\beta$-VAE, $H(z_{i}|x) > 0$ and $H(z_{i})$ is usually smaller than the entropy of $\mathcal{N}(0,\mathrm{I})$ due to a narrow $q(z_{i})$.

![](images/b8ad4ba68166a8e376ff5519664507b6204b243a0fdd2ee81241d8046e531ad9.jpg)
(a) FactorVAE (TC=50)

![](images/b63f94cfa3bb4b4b02ecd177f5216d0ea434fe933d5e85641408f85e49c6965a.jpg)
(b) $\beta$-VAE ($\beta = 50$)

![](images/6b212c456993b2ed2f178e6f8423837a6cc1bcf7b54e9b941ffe97538b38bd6c.jpg)
(c) AAE (Gz=50)

Figure 9: Normalized informativeness scores (bins=100) of all latent variables sorted in descending order.

In Fig. 9, we see a sudden drop of the scores to 0 for some of FactorVAE's and $\beta$-VAE's representations. These representations $z_{i}$ are totally random and contain no information about the data (i.e., $q(z_{i}|x)\approx \mathcal{N}(0,\mathrm{I})$). We call them "noisy" representations and provide discussions in Appdx. A.7.

We visualize the top 10 most informative representations for these models in Fig. 10. AAE's representations are more detailed than FactorVAE's and $\beta$-VAE's, suggesting the effect of high informativeness.
However, AAE's representations mainly capture information within the support of $p_{\mathcal{D}}(x)$ . This explains why we still see a face when interpolating AAE's representations. By contrast, FactorVAE's and $\beta$ -VAE's representations usually contain information outside the support of $p_{\mathcal{D}}(x)$ . Thus, when we interpolate these representations, we may see something not resembling a face. + +![](images/0c6a2484dbd1af25826e0e4223015b0162798be211adcb8a0728c0d9801e6b92.jpg) +(a) FactorVAE (TC=50) + +![](images/0f1df055edd612d97ff1226db66ec608e88911c16afef63433a4a5d0039dd6ef.jpg) +(b) $\beta$ -VAE ( $\beta = 50$ ) +Figure 10: Visualization of the top informative representations. Scores are unnormalized. + +![](images/257f87aebd1f1721d2a12b6c651f222a959b27911b11f0dd5843adbabd8ddd24.jpg) +(c) AAE $(\mathrm{Gz} = 50)$ + +Separability and Independence Table 5 reports MISJED scores (Section 3.2) for the top most informative representations. FactorVAE achieves the lowest MISJED scores, AAE comes next and $\beta$ -VAE is the worst. We argue that this is because FactorVAE learns independent and nearly deterministic representations, $\beta$ -VAE learns strongly independent yet highly stochastic representations, and AAE, on the other extreme side, learns strongly deterministic yet not very independent representations. From Table 5 and Fig. 11, it is clear that MISJED produces correct orders among pairs of representations according to their informativeness. + +
| MISJED (unnormalized) | z1,z2 | z1,z3 | z1,z-1 | z1,z-2 | z-1,z-2 | z-1,z-3 |
| --- | --- | --- | --- | --- | --- | --- |
| FactorVAE | 0.008 | 0.009 | 2.476 | 2.443 | 4.858 | 4.892 |
| β-VAE | 0.113 | 0.131 | 3.413 | 3.401 | 6.661 | 6.739 |
| AAE | 0.022 | 0.023 | 0.022 | 0.021 | 0.021 | 0.020 |
+ +Table 5: Unnormalized MISJED scores (#bins = 50, 10% data). $z_{1}, z_{2}, z_{3}$ and $z_{-1}, z_{-2}, z_{-3}$ denote the top 3 and the bottom 3 latent variables sorted by the informativeness scores in descending order. Boldness indicates best results. + +![](images/ec016d62dafece909ba1f3101b1c4d3400fb3b2d2792fc6577d25488d819af43.jpg) +(a) FactorVAE (TC=50) +Figure 11: Normalized MISJED scores of all latent pairs sorted by their informativeness. + +![](images/54b7590a70b2e7c5f22c3dcec24c6c13aa44c55d5235e063fe8362e59346b8e8.jpg) +(b) $\beta$ -VAE ( $\beta = 50$ ) + +![](images/9badf6d52c4fd8a67e7947e36712379765bf11d5f7925d2017108ff1cc73cc75.jpg) +(c) AAE $(\mathrm{Gz} = 50)$ + +Interpretability We report the RMIG scores and JEMMIG scores for several ground truth factors in the CelebA dataset in Tables 6 and 7, respectively. In general, FactorVAE learns representations that agree better with the ground truth factors than $\beta$ -VAE and AAE do. This is consistent with the qualitative results in Fig. 12. However, all models still perform poorly for interpretability since their RMIG and JEMMIG scores are very far from 1 and 0, respectively. We provide the normalized JEMMIG and RMIG scores for all attributes in Fig. 13. + +
| RMIG (normalized) | Bangs | Black Hair | Eyeglasses | Goatee | Male | Smiling |
| --- | --- | --- | --- | --- | --- | --- |
| | H=0.4256 | H=0.5500 | H=0.2395 | H=0.2365 | H=0.6801 | H=0.6923 |
| FactorVAE | 0.1742 | 0.0430 | 0.0409 | 0.0343 | 0.0060 | 0.0962 |
| β-VAE | 0.0176 | 0.0223 | 0.0045 | 0.0325 | 0.0094 | 0.0184 |
| AAE | 0.0035 | 0.0276 | 0.0018 | 0.0069 | 0.0060 | 0.0099 |
+ +Table 6: Normalized RMIG scores (#bins=100) for some factors. Higher is better. + +
| JEMMIG (normalized) | Bangs | Black Hair | Eyeglasses | Goatee | Male | Smiling |
| --- | --- | --- | --- | --- | --- | --- |
| | H=0.4256 | H=0.5500 | H=0.2395 | H=0.2365 | H=0.6801 | H=0.6923 |
| FactorVAE | 0.6118 | 0.6334 | 0.6041 | 0.6616 | 0.6875 | 0.6150 |
| β-VAE | 0.8632 | 0.8620 | 0.8602 | 0.8600 | 0.8690 | 0.8699 |
| AAE | 0.8463 | 0.8613 | 0.8423 | 0.8496 | 0.8644 | 0.8575 |
Table 7: Normalized JEMMIG scores (#bins=100) for some factors. Lower is better.

![](images/cbc7d694f922ab0ca7747492af1f1c574cbcd3335eb3bb7620d14d1977920782.jpg)
(a) FactorVAE (TC=50)

![](images/14ca5da57daae843eef90fdddccfa79c9701e40d96d956674fe21f295eae1cc3.jpg)
(b) $\beta$-VAE ($\beta = 50$)

![](images/ae50b796a361aa1297d830fe5c381faa26fbe1b3a3b9b0e7c291edc450024d64.jpg)
(c) AAE (Gz=50)

Figure 12: Top 5 representations that are most correlated with some ground truth factors. For each representation, we show its mutual information with the ground truth factor.

![](images/b7ee6bc7780bf2715bc24892175138133f08c4a844d53b2b16b44ef776f6d48d.jpg)
(a) JEMMIG (normalized)

![](images/c29112bf13371b9cf568e37254010d8c30f939fe6a8faad857a9ea5f62748de5.jpg)
(b) RMIG (normalized)

![](images/383d695518d11e72d05261d5c02282432279a236f2a412c24411cb732707f8b4.jpg)

Figure 13: Normalized JEMMIG and RMIG scores for all attributes in the CelebA dataset. We sorted the JEMMIG and RMIG scores of the FactorVAE in ascending and descending orders, respectively.

# A.4.2 DSPRITES

Informativeness From Fig. 14, we see that 5 representations of AAE have equally high informativeness scores while the remaining 5 have nearly zero informativeness scores. This is because AAE needs only 5 representations to capture all the information in the data. FactorVAE also needs only 5 representations, but some are less informative than those of AAE. Note that the number of ground truth factors of variation in the dSprites dataset is also 5.

(a) FactorVAE

![](images/d7a7b56764f518554a6edab7bd30ac92974c0257dc69f401a32491d0f1ae0000.jpg)
(b) $\beta$-VAE

Figure 14: Normalized informativeness scores (bins=100) of all latent variables sorted in descending order.
![](images/dd76b5e8ea55f1da4edc29b49942b3797f2defbf92a995a4965036c01091191d.jpg)
(c) AAE

![](images/7d95d8fcae8681a193cb4aa9ac368114a2ee6d57a6fa72320dfcfb5a63bdebbc.jpg)
(a) FactorVAE

![](images/a56261261e43ccde5fdbacee0a8fd6b73c67039edf87398e33fa85133ddba41f.jpg)
(b) $\beta$ -VAE

![](images/180dbe19ab09064b7e4e99bef829699196496565a04e97ef209f08e241b12aee.jpg)
(c) AAE
Figure 15: Normalized MISJED scores (bins=100) of all latent pairs sorted by their informativeness.

Separability and Independence Fig. 15 shows heat maps of MISJED scores for the three models.

Interpretability From Tables 8 and 9, we see that FactorVAE is very good at disentangling "scale", "x-position" and "y-position" but fails to disentangle "shape" and "rotation". However, FactorVAE still performs much better than $\beta$ -VAE and AAE. These results are consistent with the visual results in Fig. 16.

Also note that in FactorVAE, the RMIG scores for "scale" and "x-position" are quite similar, but the JEMMIG score for "scale" is higher than that for "x-position". This is because the quantized distribution (with 100 bins) of a particular representation $z_{i}$ fits better to the distribution of "x-position" (having 32 possible values) than to the distribution of "scale" (having only 6 possible values).
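The quantized joint-entropy term $H(z_{i^*}, y_k)$ behind this comparison is straightforward to estimate. Below is a minimal sketch in NumPy; the function name and the synthetic latents are ours, purely for illustration, and do not reproduce the paper's trained models:

```python
import numpy as np

def quantized_joint_entropy(z, y, value_range=(-4.0, 4.0), n_bins=100):
    """Estimate H(z, y) for a continuous latent z and a discrete factor y
    by quantizing z into equal-width bins over a fixed value range."""
    edges = np.linspace(value_range[0], value_range[1], n_bins + 1)
    z_bins = np.clip(np.digitize(z, edges) - 1, 0, n_bins - 1)
    # Joint histogram over (quantized z, discrete factor value).
    joint = np.zeros((n_bins, int(y.max()) + 1))
    np.add.at(joint, (z_bins, y), 1.0)
    p = joint / joint.sum()
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz])))

rng = np.random.default_rng(0)
# Hypothetical latents, each encoding a discrete factor plus small noise.
for n_values in (6, 32):
    y = rng.integers(0, n_values, size=20000)
    z = (y - (n_values - 1) / 2) / n_values * 6 + 0.1 * rng.normal(size=y.size)
    print(n_values, quantized_joint_entropy(z, y))
```

In this toy setup, the latent paired with the 32-value factor yields a larger joint entropy than the one paired with the 6-value factor, simply because the factor itself carries more entropy; the point above is the subtler one that the *fit* between the quantized latent and the factor distribution also matters.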
| | Shape | Scale | Rotation | Pos X | Pos Y |
| --- | --- | --- | --- | --- | --- |
| FactorVAE | 0.2412 | 0.7139 | 0.0523 | 0.7198 | 0.7256 |
| $\beta$ -VAE | 0.0481 | 0.1533 | 0.0000 | 0.4127 | 0.4193 |
| AAE | 0.0053 | 0.0786 | 0.0098 | 0.3932 | 0.4509 |
+ +Table 8: Normalized RMIG scores (bins=100). + +
| | Shape | Scale | Rotation | Pos X | Pos Y |
| --- | --- | --- | --- | --- | --- |
| FactorVAE | 0.6841 | 0.3422 | 0.7204 | 0.2908 | 0.2727 |
| $\beta$ -VAE | 0.8642 | 0.8087 | 0.9199 | 0.5629 | 0.5576 |
| AAE | 0.8426 | 0.8143 | 0.8665 | 0.5738 | 0.5258 |
Table 9: Normalized JEMMIG scores (bins=100).

![](images/93ce3325f7294a241311726f666880c35b6045e137939b7cbe70a8a52c1f7aea.jpg)
(a) FactorVAE (Shape)

![](images/715df219adc7e32ff201f9448c20b1c5a0bec7cfa50fc966178d2122e017b276.jpg)
(b) $\beta$ -VAE (Shape)

![](images/d5782123d3e28400e22ce99f3a7217c31923812ad0ec19cf315d947ced8b9ac8.jpg)
(c) AAE (Shape)

![](images/c2ac2bf96d9fba8becdfe02456c3f04efdb8306319033fc2c3581115aafcf880.jpg)
(d) FactorVAE (Scale)

![](images/9d9dba558721b6268dd9c21c2241a8a8d07542649c407d6fc45d4f2993ab3990.jpg)
(e) $\beta$ -VAE (Scale)

![](images/ef031c6d2788d0f515aacdf80a753d58022b617bdc714435c3c4b8ad064023d3.jpg)
(f) AAE (Scale)

![](images/27ac7cce2dd9d71d6bb02e72338e500de613b01b614a0b070744a5748b93c262.jpg)
(g) FactorVAE (Rotation)

![](images/7614bf7c065b4c177115b22956d3170b6aff5c4f3d5d107c7bdd89c484c15778.jpg)
(h) $\beta$ -VAE (Rotation)

![](images/a6e1cc7d593525346e69fb2a3f0fc1462343878ed9499ff1ab9de242d86babf9.jpg)
(i) AAE (Rotation)

![](images/5734b4f7550f3ec16f1f27ff8adca65eef80a2abac1749e186d856dfd6a1d824.jpg)
(j) FactorVAE (Pos X)

![](images/800033ee74d5c1c305c28282680a8238b7ddc930d875d766b808e138f42da46d.jpg)
(k) $\beta$ -VAE (Pos X)

![](images/d4dc4c6624ee84a7cd82894b334da81eaa02c03ea7934da97e37c5b0f4f02be3.jpg)
(l) AAE (Pos X)

![](images/a1aaae6a4ed5c558e3b4920ad8803d1cbf9725b0154b4ad148563f1c3e37320c.jpg)
(m) FactorVAE (Pos Y)

![](images/2766cda962829d191d371b0579d493daf64bcc2f659b60751d6f8e957eb16da4.jpg)
(n) $\beta$ -VAE (Pos Y)

![](images/0b6c0893f006f3068e329e293beb260fb6157b7e2451fd1a1991a9f771c7cc8b.jpg)
(o) AAE (Pos Y)
Figure 16: Top 3 representations sorted by their mutual information with different ground truth factors.

# A.5 ABLATION STUDY OF OUR METRICS

Sensitivity of the number of bins When estimating entropy and mutual information terms using quantization, we need to specify the value range and the number of bins (#bins) in advance.
In this paper, we fix the value range to be [-4, 4] since most latent values fall within this range. We only investigate the effect of #bins on the RMIG and JEMMIG scores for different models and show the results in Fig. 17 (left, middle).

We can see that when #bins is small, RMIG scores are low. This is because the quantized distributions $Q(z_{i^*})$ and $Q(z_{j^\circ})$ look similar, causing $I(z_{i^*}, y_k)$ and $I(z_{j^\circ}, y_k)$ to be similar as well. When #bins is large, the quantized distributions $Q(z_{i^*})$ and $Q(z_{j^\circ})$ look more different, leading to higher RMIG scores. RMIG scores are stable when #bins > 200, which suggests that finer quantizations do not affect the estimation of $I(z_i, y_k)$ much.

Unlike RMIG scores, JEMMIG scores keep increasing as we increase #bins. Note that JEMMIG only differs from RMIG in the appearance of $H(z_{i^*}, y_k)$ . Finer quantizations of $z_{i^*}$ introduce more information about $z_{i^*}$ and hence always lead to higher $H(z_{i^*}, y_k)$ (see Fig. 17 (right)). Larger JEMMIG scores also reflect the fact that finer quantizations of $z_{i^*}$ make $z_{i^*}$ look more continuous and thus less interpretable w.r.t. the discrete factor $y_k$ .

We provide a detailed explanation of the behaviors of RMIG and JEMMIG w.r.t. #bins in Appdx. A.11. Although #bins affects the RMIG and JEMMIG scores of a single model, the relative order among different models remains the same. This suggests that once we fix #bins, we can use RMIG and JEMMIG scores to rank different models.

![](images/19bb0ec22394ba3c7c46fa8055cb9f48e5bcea8627b26d9e50d97e3121f499cd.jpg)
Figure 17: Dependencies of RMIG (normalized), JEMMIG (normalized) and $\frac{1}{K}\sum_{k=0}^{K-1} H(z_{i^*}, y_k)$ on the number of bins. The dataset is dSprites.
![](images/8f288206ca86a973acb93444d097887e5d0ac7515ddbe27f73c6dec0c56b0a6d.jpg)

![](images/4c09a766073a54acbcc20373cf9d6876e94fdced3f182aaad89cf47840588e44.jpg)

Sensitivity of the number of samples From Fig. 18 (left, right), it is clear that the sampling estimation is unbiased and is not affected much by the number of samples.

Sensitivity of sampling in high dimensional space One concern is the behavior of our metrics when the number of latent representations (#latents) is large (i.e., $z$ is high-dimensional). In Fig. 19a, we see that the informativeness of an individual representation $z_{i}$ is not affected by #latents. When we increase #latents, the additional representations are usually noisy $(I(z_{i},x)\approx 0)$ . The total amount of information captured by the model, $I(x,z)$ , by contrast, depends strongly on #latents (Fig. 19b). Surprisingly, increasing #latents reduces $I(x,z)$ instead of increasing it. We have not found a definitive explanation for this phenomenon, but possible hypotheses are: i) in a high-dimensional space where most latent representations are noisy (e.g., #latents=20), $q(z|x)$ may look more similar to $q(z)$ , leading to an inaccurate calculation of $\log \frac{q(z|x)}{q(z)}$ ; or ii) when #latents is large, $q(z|x) = \prod_{i = 0}^{L - 1}q(z_i|x)$ is very small and may therefore suffer from floating-point imprecision7.

7We tried $q(z|x) = \exp \left( \sum_{i=0}^{L-1} \log q(z_i|x) \right)$ and it gives similar results as $q(z|x) = \prod_{i=0}^{L-1} q(z_i|x)$ .

![](images/7622809fb180054d6c8bf08264dfecd63d84fc1c9e5f19183283b149fbf65d47.jpg)
Figure 18: Dependencies of JEMMIG and WSEPIN on the number of samples. All models have 10 latent variables. The dataset is dSprites.

![](images/295a4ded0b30203c99e9aa0150770573c9e16b9490cb0ac5e0a16734cab0726e.jpg)

In Fig. 19c, we see that increasing #latents increases $I(z_{i},z_{\neq i})$ .
This makes sense because a larger #latents means that $z_{\neq i}$ contains more information. However, the change in $I(z_{i},z_{\neq i})$ is abrupt when #latents goes from 10 to 15, unlike the changes from 5 to 10 or from 15 to 20. Recall that $I(z_{i},z_{\neq i}) = H(z_{i}) + H(z_{\neq i}) - H(z)$ . Since $H(z_{i})$ can be computed stably, we only plot $H(z_{\neq i}) - H(z)$ , shown in Fig. 19d. We can see that when #latents $= 20$ , $H(z_{\neq i}) \approx H(z)$ , which means we cannot differentiate between $q(z_{\neq i})$ and $q(z)$ . The instability of the computation for high-dimensional latents becomes clearer in Fig. 19e, as $I(x,z_{i}|z_{\neq i}) = I(x,z) - I(x,z_{\neq i})$ can be $< 0$ when #latents $= 15$ or 20. This causes the instability of WSEPIN in Fig. 19f, even though the results look reasonable. JEMMIG and RMIG are calculated on individual latents, so they are not affected by #latents and can provide consistent evaluations for models with different #latents.

![](images/f377caf2d23df62e528c4679908bca68ed2e0378cf524e08fd04297664184c2e.jpg)
(a) $I(x,z_{i})$

![](images/6f273e40e965468a6ea04ad82102d02bb1eef9b368991eff3dbb89eb2244ca44.jpg)
(b) $I(x,z)$

![](images/c4dcdf4eae311a7fb102cac4a2eb450ae9708641fb9cb9ffb7305c3615de37e4.jpg)
(c) $I(z_{i},z_{\neq i})$

![](images/282a4d07917511553201d23dd5b80604ee24767b0cfe264b8403c2db98c6279d.jpg)
(d) $H(z_{\neq i}) - H(z)$

![](images/5ed4c5147959e04885d584266f4ff4511fb77e53e2c9da029ada74a3b0ba0782.jpg)
(e) $I(x,z) - I(x,z_{\neq i})$

![](images/8b5e58b40c3df4c99e6be4bb8295f61603a2c02cfbda72ed0cb0e3fdd82c65f3.jpg)
(f) WSEPIN

![](images/c7410cf1b927a8b41f7d937c234dfc2265c7f8df5e92f0a7b26b50c0668369ba.jpg)
(g) JEMMIG $(y_k)$
Figure 19: Dependencies of various quantitative measures on the number of latents. All measures are computed via sampling. The model used in this experiment is $\beta$ -VAE with $\beta = 10$ .
![](images/1366163d69fc771bc5dd049eddce4bfa86411f53f7a179406847e66f394a3578.jpg)
(h) JEMMIG

# A.6 EVALUATING INDEPENDENCE WITH CORRELATION MATRIX

For every $x^{(n)}$ sampled from the training data, we generate $m = 1$ latent sample $z_{i}^{(n,m)} \sim q(z_{i}|x^{(n)})$ and build a correlation matrix from these samples for each of the models FactorVAE, $\beta$ -VAE and AAE. We also build another version of the correlation matrix based on $\mathbb{E}_{q(z_i|x^{(n)})}[z_i]$ (the conditional means) instead of samples from $q(z_{i}|x^{(n)})$ . Both are shown in Fig. 20. We can see that the correlation matrices computed from the conditional means incorrectly describe the independence between the representations of FactorVAE and $\beta$ -VAE. AAE is not affected because it learns a deterministic $z_{i}$ given $x$ . Using the correlation matrix is therefore not a principled way to evaluate independence in disentanglement learning.

![](images/39ac5cb856b5826dec66c829aad0d9ef4c0a883cdf8c2dfe71c66b1108334875.jpg)
(a) FactorVAE (stochastic)

![](images/a9b20d0033f847c7f1707fedf44519655b65080db648cc41e1b88c8eba3ba54e.jpg)
(b) $\beta$ -VAE (stochastic)

![](images/c1dcf1a24828bce598dd140adcdfa3abab514b90a3de3a306a8aecfd334e2721.jpg)
(c) AAE (stochastic)

![](images/9a959ce4e071f023380656645d7a837197f42b19299a8a3a3ff3daa49d8e50dc.jpg)
(d) FactorVAE (deterministic)

![](images/852f0ab8974128a34170d1216fcdbec89a73198c337b2c02d1cca8a6408c6dbe.jpg)
(e) $\beta$ -VAE (deterministic)

![](images/cce0d3a85da263954592115771d2fe90e33bf621138e9b6b02db65de3374d5d4.jpg)
(f) AAE (deterministic)
Figure 20: Correlation matrices of representations learned by FactorVAE, $\beta$ -VAE and AAE.

# A.7 TRADE-OFF BETWEEN INFORMATIVENESS, INDEPENDENCE AND THE NUMBER OF LATENT VARIABLES

Before starting our discussion, we provide the following fact:

Fact 2.
Assume we try to fill a fixed-size pool with equal-size balls, given that all the balls must lie inside the pool without overlapping. The only way to increase the number of balls is to reduce their size.

![](images/a8509ee37e341ada04b5eebd1b300c30f22225e4c0bc24876648632adafc16db.jpg)
AAE

![](images/dc699fc5d80541c69db05979b0e94fa943ef15071b341a1805fb0d9d76cca170.jpg)
FactorVAE
Figure 21: Illustration of representations learned by AAE and FactorVAE. The big red circle represents the total amount of information that $x$ contains, i.e., $H(x)$ , which is limited by the amount of training data. Blue circles are informative representations $z_{\mathrm{f}}$ , and the size of these circles indicates the informativeness of $z_{\mathrm{f}}$ . Green circles are noisy representations $z_{\mathrm{n}}$ . AAE does not contain $z_{\mathrm{n}}$ ; only FactorVAE does.

In the context of representation learning, the pool is $x$ with size $H(x)$ , which depends on the training data, and the balls are the $z_{i}$ with sizes $H(z_{i})$ . Fact 2 reflects the situation of AAE (see Fig. 21 left). In AAE, all $z_{i}$ are deterministic given $x$ , so the condition "all balls are inside the pool" is met. Moreover, $H(z_{i})$ is approximately the entropy of $\mathcal{N}(0,1)$ , which is fixed, so the condition "equal-size balls" is also met. Therefore, when the number of latent variables in AAE increases, all $z_{i}$ must become less informative (i.e., $H(z_{i})$ must decrease) if the independence constraint on the latent variables is to remain satisfied. This is empirically verified in Fig. 22, where the distribution of $\mathbb{E}_{q(z_i|x^{(n)})}[z_i]$ over all $x^{(n)} \sim p_{\mathcal{D}}(x)$ becomes narrower when we increase the number of representations from 65 to 200. Also note that increasing the number of latent variables from 65 to 100 does not change the distribution. This suggests that 65 or 100 latent variables are still not enough to capture all information in the data.
FactorVAE, however, handles the increasing number of latent variables in a different way. Thanks to the KL term in the loss function that forces $q(z_{i}|x)$ to be stochastic, FactorVAE can break the constraint in Fact 2 and allow the balls to stay outside the pool (see Fig. 21 right). If we increase the number of latent variables but still enforce the independence constraint on them, FactorVAE will keep a fixed number of informative representations and make all other representations "noisy", with zero informativeness scores. We refer to this capability of FactorVAE as code compression.

![](images/2e57c6261063451d56a6239093ee35fbdec8b561c06cbf47c07f24f25bdc35e4.jpg)
(a) $\mathrm{z\_dim} = 65$

![](images/8600c54e14c7f7a14c5622f97d85cff66e3e39db91f4ba737f5cb703c7a9b406.jpg)
(b) $\mathrm{z\_dim} = 100$

![](images/d8a8a68e25c96b9c6f4de377fbba710220c042924d51d71c5f73e7a8483aa9ab.jpg)
(c) $\mathrm{z\_dim} = 200$
Figure 22: Distribution of $\mathbb{E}_{q(z_i|x^{(n)})}[z_i]$ over all $x^{(n)}\sim p_{\mathcal{D}}(x)$ of a particular representation $z_{i}$ for different AAE models.

# A.8 ANALYSIS OF EXISTING METRICS FOR DISENTANGLEMENT

In this section, we analyze recent metrics, including the Z-diff score (Higgins et al., 2017a; Kim & Mnih, 2018), Separated Attribute Predictability (SAP) (Kumar et al., 2017), Mutual Information Gap (MIG) (Chen et al., 2018), Disentanglement/Completeness/Informativeness (Eastwood & Williams, 2018), and Modularity/Explicitness (Ridgeway & Mozer, 2018).

The main idea behind the Z-diff score (Higgins et al., 2017a; Kim & Mnih, 2018) is that if a ground truth generative factor $y_{k}$ ( $k \in \{0,1,\dots,K-1\}$ ) is well aligned with a particular disentangled representation $z_{i}$ (although we do not know which $i$ ), we can use a simple classifier to predict $k$ using information from $z$ . Higgins et al. (2017a) use a linear classifier while Kim & Mnih (2018) use a majority-vote classifier.
The main drawback of this metric is that it assumes knowledge of all ground truth factors that generate the data. Hence, it is only applicable to toy datasets like dSprites. Another drawback lies in the complex procedure required to compute the metric, which involves training a classifier. Since the classifier is sensitive to the chosen optimizer, hyper-parameters, and weight initialization, it is hard to ensure a fair comparison.

The SAP score (Kumar et al., 2017) is computed based on the correlation matrix $C$ between the latent variables $z$ and the ground truth factors $y$ . If a latent $z_{i}$ and a factor $y_{k}$ are both continuous, the (squared) correlation $C_{i,k}$ between them is equal to $\frac{\mathrm{Cov}^2(z_i,y_k)}{\mathrm{Var}(z_i)\mathrm{Var}(y_k)}$ and lies in [0, 1]. However, if the factor $y_{k}$ is discrete, computing the correlation between continuous and discrete variables is not straightforward. The authors handled this problem by learning a classifier that predicts $y_{k}$ given $z_{i}$ and using the balanced prediction accuracy as a replacement. Then, for each factor $y_{k}$ , they sorted $C_{i,k}$ in descending order and computed the difference between the top two scores. The mean of the difference scores over all factors was used as the final SAP score. The intuition for this metric is that if a latent $z_{i}$ is the most representative of a factor $y_{k}$ (due to the highest correlation score), then the other latent variables $z_{\neq i}$ should not be related to $y_{k}$ , and thus the difference score for $y_{k}$ should be high. We believe the SAP score is more sensible than Z-diff, but it is only suitable when both the ground truth factors and the latent variables are continuous, since then no classifier is required. Moreover, if we have $K$ discrete ground truth factors and $L$ latent variables, the number of classifiers we need to learn is $L\times K$ , which is unmanageable when $L$ is large.
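For the all-continuous case, the SAP computation described above fits in a few lines. The sketch below (function name and toy data are ours) builds the squared-correlation matrix and averages the top-two gaps per factor:

```python
import numpy as np

def sap_score(z, y):
    """SAP sketch for continuous latents z (N, L) and continuous factors
    y (N, K): per factor, the gap between the two largest squared
    correlations across latents, averaged over factors."""
    L, K = z.shape[1], y.shape[1]
    C = np.zeros((L, K))
    for i in range(L):
        for k in range(K):
            C[i, k] = np.corrcoef(z[:, i], y[:, k])[0, 1] ** 2
    top2 = np.sort(C, axis=0)[-2:, :]          # two largest scores per factor
    return float(np.mean(top2[1] - top2[0]))   # mean gap over factors

rng = np.random.default_rng(0)
y = rng.normal(size=(5000, 3))
# Perfectly disentangled latents: z_i copies y_i, plus one noise latent.
z = np.concatenate([y, rng.normal(size=(5000, 1))], axis=1)
print(sap_score(z, y))  # close to 1: each factor is captured by one latent
```

An entangled model, where every latent correlates with every factor, drives all per-factor gaps (and hence the score) toward zero.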
The MIG score (Chen et al., 2018) shares the same intuition as the SAP score but is computed based on the mutual information between every pair of $z_{i}$ and $y_{k}$ instead of the correlation coefficient. The MIG score is therefore theoretically more appealing than the SAP score, since it can capture nonlinear relationships between latent variables and factors while the SAP score cannot. The MIG score, to some extent, reflects the concept of "interpretability" that we discussed in Section 2 in the main text.

Eastwood et al. (Eastwood & Williams, 2018) proposed three metrics, namely "disentanglement", "completeness", and "informativeness", to quantify disentangled representations. These metrics are computed based on a so-called "importance matrix" $R$ whose element $R_{ik}$ is the relative importance of $z_i$ (w.r.t. the other $z_{\neq i}$ ) in predicting $y_k$ . More concretely, for each factor $y_k$ ( $k = 0, \dots, K - 1$ ), they train a regressor (LASSO or Random Forest) to predict $y_k$ from $z$ and use the weight vector provided by this regressor to define $R_{\cdot k}$ . The "disentanglement" score $D_i$ quantifies the degree to which a latent $z_i$ captures at most one generative factor $y_k$ . $D_i$ is computed as $D_i = 1 - H_K(P_i)$ where $H_K(P_i) = \sum_{k=0}^{K-1} - P_{ik} \log P_{ik}$ and $P_{ik} = \frac{R_{ik}}{\sum_{k'=0}^{K-1} R_{ik'}}$ , which can be seen as the "probability" of predicting $y_k$ instead of $y_{\neq k}$ from $z_i$ . Similarly, the "completeness" score $C_k$ quantifies the degree to which a ground truth factor $y_k$ is captured by a single latent $z_i$ ( $i = 0, \dots, L - 1$ ); it is computed as $C_k = 1 - H_L(\tilde{P}_{\cdot k})$ where $H_L(\tilde{P}_{\cdot k}) = \sum_{i=0}^{L-1} - \tilde{P}_{ik} \log \tilde{P}_{ik}$ and $\tilde{P}_{ik} = \frac{R_{ik}}{\sum_{i'=0}^{L-1} R_{i'k}}$ . The "informativeness" score describes the total amount of information about a particular factor $y_k$ captured by all representations $z$ .
However, the authors use the prediction error $E_k$ of the $k$ -th regressor to quantify "informativeness" instead of $I(y_k, z)$ . Despite being well motivated, these metrics have several drawbacks. First, using three different metrics to quantify disentangled representations is not as convenient as using a single metric like MIG (Chen et al., 2018). For example, how can we compare two models A and B if A has a better "disentanglement" score but a worse "completeness" score than B? Second, these metrics do not apply to categorical factors with $C$ classes, since in this case the model weight is not a vector but an $L \times C$ matrix. Third, defining the pseudo-distribution $P_{ik} = \frac{R_{ik}}{\sum_{k'=0}^{K-1} R_{ik'}}$ seems ad hoc because i) the weight magnitudes $R_{ik}$ are unbounded and can vary significantly (see Appdx. A.9), and ii) $P_{ik}$ strongly depends on the available ground truth factors (e.g., the value of $P_{ik}$ will change if we only consider 2 instead of 5 factors).

Ridgeway et al. (Ridgeway & Mozer, 2018) proposed two metrics called "modularity" and "explicitness" that have similar interpretations to "disentanglement" and "informativeness" discussed above but differ in implementation. Specifically, they compute the "modularity" score $M_{i}$ for a representation $z_{i}$ as $M_{i} = 1 - \frac{\sum_{k=0}^{K-1}(I(z_{i},y_{k}) - T_{ik})^{2}}{I^{2}(z_{i},y_{k^*})\times(K-1)}$ where $k^{*} = \operatorname{argmax}_{k}I(z_{i},y_{k})$ and $T_{ik} = \begin{cases} I(z_i,y_{k^*}) & \text{if } k = k^* \\ 0 & \text{otherwise} \end{cases}$ . Like the "disentanglement" score $D_{i}$ , $M_{i}$ is also ad hoc and is undefined when the number of ground truth factors is 1. The "explicitness" score $E_{k}$ for each ground truth factor $y_{k}$ is computed as the area under the ROC curve of a logistic classifier that predicts $y_{k}$ from $z$ . $E_{k}$ is thus just a way to bypass computing $I(y_k,z)$ .
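As a concrete reading of the "disentanglement" score $D_i$ quoted above, here is a small sketch. We assume base-$K$ logarithms so that $D_i \in [0, 1]$ ; the importance matrix $R$ below is hypothetical:

```python
import numpy as np

def disentanglement_scores(R):
    """D_i = 1 - H_K(P_i) per latent i, where P_i is row i of the
    importance matrix R (L latents x K factors), normalized over factors.
    Entropy is taken in base K (our assumption) so D_i lies in [0, 1]."""
    K = R.shape[1]
    P = R / R.sum(axis=1, keepdims=True)
    logP = np.log(np.where(P > 0, P, 1.0))     # treat 0 * log(0) as 0
    H = -np.sum(P * logP, axis=1) / np.log(K)  # normalized entropy H_K(P_i)
    return 1.0 - H

# Hypothetical importance matrix: latent 0 matters only for factor 0,
# latent 1 spreads its importance evenly over all three factors.
R = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])
print(disentanglement_scores(R))  # latent 0 scores 1.0, latent 1 scores 0.0
```

The drawbacks noted above are visible even in this sketch: the result depends entirely on how many factors appear as columns of $R$ and on the unbounded regressor weights that fill it.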
# A.9 THE MUTUAL INFORMATION MATRIX $I(z_{i},y_{k})$ AND THE IMPORTANCE MATRIX $R_{ik}$

In Fig. 23, we compare our mutual information matrix $I(z_{i},y_{k})$ with the counterpart in (Ridgeway & Mozer, 2018) and the importance matrix $R_{ik}$ in (Eastwood & Williams, 2018). All matrices capture disentangled representations (those highlighted in red) well, since their corresponding values are high compared to the other values in the same column. However, the matrix $I(z_{i},y_{k})$ in (Ridgeway & Mozer, 2018) usually overestimates noisy representations, since it uses $\mathbb{E}_{q(z_i|x)}[z_i]$ instead of $q(z_{i}|x)$ . The matrix $R_{ik}$ in (Eastwood & Williams, 2018) sometimes assigns very high absolute values to noisy representations, since the regressor's weights are unbounded. These flaws make the metrics in (Ridgeway & Mozer, 2018) and (Eastwood & Williams, 2018) inaccurate and unstable, especially "modularity" and "disentanglement", since they require normalization over rows.

# A.10 COMPUTING METRICS FOR INFORMATIVENESS, SEPARABILITY AND INTERPRETABILITY

The metrics for informativeness, separability and interpretability in Section 3 require computing $H(z_{i})$ , $H(z_{i}|x)$ , $H(z_{\neq i})$ , $H(z)$ , and $H(z_{i},y_{k})$ . We can compute these entropies via quantization or sampling. Quantization is only applicable when $z_{i}$ is a scalar; if $z_{i}$ is a high-dimensional vector, we need to use sampling. Below, we describe how to compute $H(z)$ via sampling and $H(z_{i})$ via quantization. Other cases can be derived similarly.

![](images/acfc46d04884d808495e782cf9685b0e9fc986186fc2b0184b8c29ec6f81d66d.jpg)
(a) $I(z_{i},y_{k})$ w. $q(z_{i}|x)$

![](images/a0ce9009731282398e76e9c9158b4a87b312342ab91bb2d71a39e58fcc781803.jpg)
(b) $I(z_{i},y_{k})$ w.o. $q(z_i|x)$

![](images/591aa51f91f64cb296721a2a5b80697d432f8de8db53dfc3042c03d5b2aac909.jpg)
(c) $R_{ik}$
Figure 23: (a): Our mutual information matrix $I(z_{i},y_{k})$ ; (b): the mutual information matrix $I(z_{i},y_{k})$ in (Ridgeway & Mozer, 2018); (c): the importance matrix $R_{ik}$ in (Eastwood & Williams, 2018). In (a) and (b), the columns correspond to the following ground truth factors: "shape", "scale", "rotation", "x-position", "y-position". In (c), the column for "shape" is excluded because the metrics in (Eastwood & Williams, 2018) do not support categorical factors. Values corresponding to disentangled representations are highlighted in red. Defective values are highlighted in green. The model is FactorVAE with TC=20.

# Computing $H(z)$ via sampling

$$
\begin{aligned}
H(z) &= -\mathbb{E}_{q(z)}\left[\log q(z)\right] \\
&= -\mathbb{E}_{q(z,x)}\left[\log \mathbb{E}_{p_{\mathcal{D}}(x)}[q(z|x)]\right] \\
&= -\frac{1}{M}\sum_{m=1}^{M}\left[\log \frac{1}{N}\sum_{n=1}^{N} q\left(z^{(m)} \mid x^{(n)}\right)\right] \qquad (13) \\
&= -\frac{1}{M}\sum_{m=1}^{M}\left[\log \frac{1}{N}\sum_{n=1}^{N}\left(\prod_{i=1}^{L} q\left(z_{i}^{(m)} \mid x^{(n)}\right)\right)\right] \qquad (14)
\end{aligned}
$$

In Eq. 13, we use Monte Carlo sampling to estimate the expectations outside and inside the log function, with sample sizes $M$ and $N$ , respectively. In Eq. 14, we use the assumption $q(z^{(m)}|x^{(n)}) = \prod_{i=1}^{L} q(z_i^{(m)}|x^{(n)})$ . Please note that the entropy $H(z)$ computed via sampling can be negative if $z$ is continuous, since we use the density function $q(z|x)$ .

Computing $H(z_{i})$ via quantization We can compute $H(z_{i})$ via quantization as follows:

$$
H(z_{i}) = -\sum_{s_{i} \in \mathcal{S}} Q(s_{i}) \log Q(s_{i})
$$

where $\mathcal{S}$ is the set of all quantized bins $s_i$ corresponding to $z_i$ , and $Q(s_i)$ is the probability mass function of $s_i$ .
To ensure consistency among different $z_i$ as well as different models, we apply the same value range to all latent variables. In practice, we choose the range $[-4, 4]$ since most of the latent values fall within this range. We divide this range into equal-size bins to form $\mathcal{S}$ .

We can compute $Q(s_{i})$ as follows:

$$
Q(s_{i}) = \frac{1}{N} \sum_{n=1}^{N} Q\left(s_{i} \mid x^{(n)}\right)
$$

We compute $Q\left(s_{i}\mid x^{(n)}\right)$ based on its definition, which is:

$$
Q\left(s_{i} \mid x^{(n)}\right) = \int_{a}^{b} q\left(z_{i} \mid x^{(n)}\right) dz_{i} \tag{15}
$$

where $a, b$ are the two ends of the bin $s_i$ .

There are two ways to compute $Q(s_i|x^{(n)})$ . In the first, we simply take the unnormalized $Q^{\prime}(s_i|x^{(n)})$ to be the area of a rectangle whose width is $b - a$ and whose height is $q(\bar{z}_i|x^{(n)})$ , with $\bar{z}_i$ the center value of the bin $s_i$ ; we then normalize $Q^{\prime}(s_i|x^{(n)})$ over all bins to get $Q(s_i|x^{(n)})$ . In the second, if $q(z_i|x^{(n)})$ is a Gaussian distribution, we can estimate the above integral with a closed-form function (see Appdx. A.14 for details).

![](images/bde70291292d5971f9914e66323fa19944ff838f1355cd4ef3a4af73dc82ac36.jpg)
(a)

![](images/c2e73cc1081395d51223582a1a46d65c8b152c49fdc8fe22338eacb2563fcf2d.jpg)
(b)
Figure 24: Correlation between the sampling (#samples=10000) and quantized (value range=[-4, 4], #bins=100) estimations of JEMMIG/RMIG. In subplot (a), the red line is $y = x - \log(\text{bin width})$ , while in subplot (b), the red line is $y = x$ . Blue denotes FactorVAE models and orange denotes $\beta$ -VAE models. The dataset is dSprites.

# A.11 RELATIONSHIP BETWEEN SAMPLING AND QUANTIZATION

Denote by $H_{\mathrm{s}}(z_i|x)$ and $H_{\mathrm{q}}(z_i|x)$ the sampling and quantization estimations of the entropy $H(z_i|x)$ , respectively.
Because $H_{\mathrm{s}}(z_i|x)$ is the negative expectation of $\log q(z_i|x)$ , $H_{\mathrm{q}}(z_i|x)$ is the negative expectation of $\log Q(z_i|x)$ , and $Q(z_i|x) \approx q(z_i|x) \times \text{bin width}$ if the bin width is small enough, there exists a gap between $H_{\mathrm{s}}(z_i|x)$ and $H_{\mathrm{q}}(z_i|x)$ , specified as follows:

$$
\begin{aligned}
H_{\mathrm{q}}(z_{i} \mid x) &= H_{\mathrm{s}}(z_{i} \mid x) - \log(\text{bin width}) \\
&= H_{\mathrm{s}}(z_{i} \mid x) - \log\left(\frac{\text{value range}}{\#\text{bins}}\right) \\
&= H_{\mathrm{s}}(z_{i} \mid x) - \log(\text{value range}) + \log(\#\text{bins})
\end{aligned}
$$

Since $Q(z_{i}) = \mathbb{E}_{p_{\mathcal{D}}(x)}[Q(z_{i}|x)]$ and $q(z_{i}) = \mathbb{E}_{p_{\mathcal{D}}(x)}[q(z_{i}|x)]$ , we have $Q(z_{i}) \approx q(z_{i}) \times \text{bin width}$ . Thus, $H_{\mathrm{q}}(z_i)$ and $H_{\mathrm{s}}(z_i)$ exhibit the same gap as $H_{\mathrm{q}}(z_i|x)$ and $H_{\mathrm{s}}(z_i|x)$ :

$$
H_{\mathrm{q}}(z_{i}) = H_{\mathrm{s}}(z_{i}) - \log(\text{bin width})
$$

However, this gap disappears when computing the mutual information $I(z_{i},x)$ , since:

$$
\begin{aligned}
I_{\mathrm{q}}(z_{i}, x) &= H_{\mathrm{q}}(z_{i}) - H_{\mathrm{q}}(z_{i} \mid x) \\
&= \left(H_{\mathrm{s}}(z_{i}) - \log(\text{bin width})\right) - \left(H_{\mathrm{s}}(z_{i} \mid x) - \log(\text{bin width})\right) \\
&= H_{\mathrm{s}}(z_{i}) - H_{\mathrm{s}}(z_{i} \mid x) \\
&= I_{\mathrm{s}}(z_{i}, x)
\end{aligned}
$$

In fact, one can easily prove that:

$$
\lim_{\#\text{bins}\to +\infty} I_{\mathrm{q}}(z_{i},x) = I_{\mathrm{s}}(z_{i},x)
$$

Similar relationships between sampling and quantization also apply for $H(z_{i},y_{k})$ and
$I(z_{i},y_{k})$ ; these relationships are clearly visible in Fig. 24.

In summary,

- Sampling entropies such as $H_{\mathrm{s}}(z_i|x)$ or $H_{\mathrm{s}}(z_i)$ do not depend on any quantization settings but can be negative, since $q(z_i|x)$ or $q(z_i)$ can be $> 1$ . These entropies can still be used for ranking, though they are not easy to interpret.
- Quantized entropies such as $H_{\mathrm{q}}(z_i|x)$ or $H_{\mathrm{q}}(z_i)$ are positive if the bin width is small enough (or #bins is large enough), and grow at rate $-\log(\text{bin width})$ (or $\log(\#\text{bins})$ ). Because $\lim_{x\to +\infty}\log x = +\infty$ , $H_{\mathrm{q}}(z_i|x)$ and $H_{\mathrm{q}}(z_i)$ cannot be upper-bounded.
- The mutual information $I(z_{i}, x)$ is consistent under either quantization or sampling. Unlike the entropies, $I(z_{i}, x)$ is well-bounded even when $z_{i}$ is continuous and is thus suitable for use in a metric. However, when #bins is small, the approximation $Q(z_{i}) \approx q(z_{i}) \times \text{bin width}$ does not hold, and the quantization estimate can be inaccurate.

# A.12 NORMALIZING JEMMIG

Recall that the formula of the unnormalized JEMMIG $(y_{k})$ is $H(z_{i^{*}},y_{k}) - I(z_{i^{*}},y_{k}) + I(z_{j^{\circ}},y_{k})$ . If we estimate $H(z_{i^{*}},y_{k})$ via quantization, the value of the unnormalized JEMMIG $(y_{k})$ varies with the bin width (or value range and #bins), as shown in Fig. 17 (left). However, we can still rank models by forcing them to use the same bin width (or the same value range and #bins). To avoid setting these hyper-parameters, we can estimate $H(z_{i^{*}},y_{k})$ via sampling. In this case, the value of the unnormalized JEMMIG $(y_{k})$ depends only on $q(z_{i}|y)$ , which is fixed after learning. Ranking disentanglement models using the unnormalized JEMMIG $(y_{k})$ is somewhat similar to ranking generative models using the log-likelihood.

Using the unnormalized JEMMIG $(y_{k})$ causes interpretation difficulty.
We could normalize JEMMIG $(y_{k})$ as follows:

$$
\frac{H_{\mathrm{q}}(z_{i^{*}}) + H(y_{k}) - 2 I(z_{i^{*}}, y_{k}) + I(z_{j^{\circ}}, y_{k})}{H_{\mathrm{q}}(u) + H(y_{k})} \tag{16}
$$

where $H_{\mathrm{q}}(z_{i^*})$ is a quantization estimation of $H(z_{i^*})$ , hence greater than 0, and $H_{\mathrm{q}}(u)$ is an entropy that bounds $H_{\mathrm{q}}(z_{i^*})$ but does not depend on $q(z_{i}|x)$ . Intuitively, $u$ should be uniform. The main problem is how to find an effective value range $[a,b]$ for $z_i$ that satisfies two conditions: i) most of the mass of $z_i$ falls within that range, and ii) $H(u)$ upper-bounds $H(z_i)$ for $u$ uniform on $[a,b]$ . Before solving this problem, we answer a similar yet easier question: "Given a Gaussian random variable $z \sim \mathcal{N}(\mu, \sigma^2)$ , what is the value range of a uniform random variable $u$ such that $H(u) \geq H(z)$ ?". Assuming $u$ is uniform on $[a,b]$ , the entropy of $u$ is $H(u) = \log (b - a)$ , while the entropy of $z$ is $H(z) = 0.5\log (2\pi e\sigma^2)$ . We have:

$$
\begin{aligned}
& H(z) \leq H(u) \\
\Leftrightarrow\ & 0.5 \log (2 \pi e \sigma^{2}) \leq \log (b - a) \\
\Leftrightarrow\ & \sigma \sqrt{2 \pi e} \leq b - a
\end{aligned}
$$

Thus, to ensure that $H(u)$ is an upper bound of $H(z)$ , we should choose the value range of $u$ to be at least $\sigma \sqrt{2\pi e}$ . If $\sigma = 1$ , this range is about 4.1327. If we also want $[a,b]$ to capture most of the mass of $z$ , $a$ should be $\mu - \frac{\sigma}{2}\sqrt{2\pi e}$ and $b$ should be $\mu + \frac{\sigma}{2}\sqrt{2\pi e}$ .
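The bound derived above is easy to check numerically; a minimal sketch (helper names are ours):

```python
import numpy as np

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2): 0.5 * log(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def uniform_entropy(width):
    """Differential entropy of a uniform variable on an interval of the given width."""
    return np.log(width)

# At the critical width sigma * sqrt(2*pi*e), the two entropies coincide;
# any wider interval makes the uniform entropy strictly larger.
sigma = 1.0
width = sigma * np.sqrt(2 * np.pi * np.e)
print(width)                    # ~4.1327, as stated above
print(gaussian_entropy(sigma))  # ~1.4189
print(uniform_entropy(width))   # ~1.4189 (equal at the critical width)
```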
Coming back to the main problem: since $q(z_{i}) = \mathbb{E}_{p_{\mathcal{D}}(x)}[q(z_{i}|x)]$ and $q(z_{i}|x)$ is usually a Gaussian distribution $\mathcal{N}(\mu_i,\sigma_i)$, we can choose $a,b$ as follows:

$$
\begin{array}{l} a = \min\left(\mu_{i}^{(1)} - \frac{\sigma_{i}^{(1)}}{2}\sqrt{2\pi e}, \dots, \mu_{i}^{(N)} - \frac{\sigma_{i}^{(N)}}{2}\sqrt{2\pi e}\right), \text{ and} \\ b = \max\left(\mu_{i}^{(1)} + \frac{\sigma_{i}^{(1)}}{2}\sqrt{2\pi e}, \dots, \mu_{i}^{(N)} + \frac{\sigma_{i}^{(N)}}{2}\sqrt{2\pi e}\right) \end{array}
$$

One may wonder how to ensure a fair comparison under the normalized JEMMIG when different methods choose different value ranges $[a, b]$. A simple solution is to use the same value range $[a, b]$ for all models. In this case, $b - a$ should be large enough to cover the various distributions. We can write Eq. 16 as follows:

![](images/3d92c1e66bdb443faecbc5bf2fc60b14f59b243a158d916908c19c24ba321860.jpg)
Figure 25: Left: Correlation between our RMIG (#bins=100) and the original MIG (Chen et al., 2018) (#samples=10000). Right: Correlation between our RMIG (#bins=100) and the implementation of MIG in (Locatello et al., 2019) (#bins=100). Experiments are conducted on the dSprites dataset.
![](images/802ccb810c66f194f8ccd3f39151e2d0b0dcfca42ca410bf091e946f83db6d78.jpg)

$$
\begin{array}{l} \frac{H_{\mathrm{q}}(z_{i^{*}}) + H(y_{k}) - 2I(z_{i^{*}},y_{k}) + I(z_{j^{\circ}},y_{k})}{H_{\mathrm{q}}(u) + H(y_{k})} \\ = \frac{H_{\mathrm{s}}(z_{i^{*}}) - \log(\text{value range}) + \log(\#\text{bins}) + H(y_{k}) - 2I(z_{i^{*}},y_{k}) + I(z_{j^{\circ}},y_{k})}{H_{\mathrm{s}}(u) - \log(\text{value range}) + \log(\#\text{bins}) + H(y_{k})} \\ = \frac{H_{\mathrm{s}}(z_{i^{*}}) - \log(\text{value range}) + \log(\#\text{bins}) + H(y_{k}) - 2I(z_{i^{*}},y_{k}) + I(z_{j^{\circ}},y_{k})}{\log(\#\text{bins}) + H(y_{k})} \tag{17} \end{array}
$$

Since the fraction in Eq. 17 is smaller than 1, increasing #bins increases the fraction while keeping it below 1. This means the normalized JEMMIG always lies in [0, 1] regardless of #bins.

# A.13 COMPARING RMIG WITH OTHER MIG IMPLEMENTATIONS

RMIG has several advantages over the original MIG (Chen et al., 2018), which we refer to as MIG1: i) RMIG works on real datasets, MIG1 does not; ii) RMIG supports continuous factors, MIG1 does not. On toy datasets such as dSprites, RMIG produces almost the same results as MIG1 (Fig. 25 (left)). We argue that the small differences between RMIG and MIG1 scores in some models are caused by either the quantization error of RMIG (when #bins=100) or the sampling error of MIG1 (when #samples=10000).

Locatello et al. (Locatello et al., 2019) provided an implementation of MIG which we refer to as MIG2.
MIG2 is theoretically incorrect on two points: i) it uses only the mean of the distribution $q(z_{i}|x^{(n)})$ rather than the whole distribution, and ii) the bin range and width vary across different $z_{i}$. The performance of MIG2 is thus unstable. This problem is apparent when comparing the right and left plots in Fig. 25. MIG2 usually overestimates the true MIG1 when evaluating $\beta$-VAE models with a large $\beta$ (e.g. $\beta \in \{20,30,50\}$). We conjecture that in these models $q(z_{i}|x^{(n)})$ usually has high variance, so using only the mean of $q(z_{i}|x^{(n)})$, as MIG2 does, leads to a wrong estimate of $I(z_{i},y_{k})$.

# A.14 DEFINITE INTEGRAL OF A GAUSSIAN DENSITY FUNCTION

Assume that we have a Gaussian distribution $\mathcal{N}(\mu, \sigma)$. The definite integral of its density function over the range $[a, b]$, denoted $G(a, b)$, can be computed as follows:

$$
\begin{array}{l} G(a, b) = \int_{a}^{b}\frac{1}{\sigma\sqrt{2\pi}}\exp\left(\frac{-(x - \mu)^{2}}{2\sigma^{2}}\right)dx \\ = \frac{1}{2}\left(\operatorname{erf}\left(\frac{b - \mu}{\sigma\sqrt{2}}\right) - \operatorname{erf}\left(\frac{a - \mu}{\sigma\sqrt{2}}\right)\right) \end{array}
$$

Although $\operatorname{erf}(\cdot)$ has no closed form in elementary functions, we can compute its values with high precision using a polynomial approximation. For example, the following approximation has a maximum error of $5 \times 10^{-4}$ (Def, 2019):

$$
\operatorname{erf}(x) \approx 1 - \frac{1}{\left(1 + a_{1}x + a_{2}x^{2} + a_{3}x^{3} + a_{4}x^{4}\right)^{4}}, \quad x > 0
$$

where $a_1 = 0.278393$, $a_2 = 0.230389$, $a_3 = 0.000972$, $a_4 = 0.078108$.

# A.15 REPRESENTATIONS LEARNED BY FACTORVAE

We empirically observed that FactorVAE learns the same set of disentangled representations across different runs with varying numbers of latent variables (see Appdx. A.18).
This behavior is akin to that of deterministic PCA, which uncovers a fixed set of linearly independent factors (or principal components). The standard VAE is theoretically similar to probabilistic PCA (pPCA) (Tipping & Bishop, 1999), as both assume the same generative process $p(x,z) = p_{\theta}(x|z)p(z)$. Unlike deterministic PCA, pPCA learns a rotation-invariant family of factors rather than an identifiable set of factors. However, within a particular pPCA model, the relative orthogonality among factors is still preserved. This means that the factors learned by different pPCA models are statistically equivalent. We hypothesize that by enforcing independence among latent variables, FactorVAE can also learn statistically equivalent factors (or $q(z_i|x)$), which correspond to visually similar results. We provide a proof sketch for this hypothesis in Appdx. A.16. We note that Rolinek et al. (Rolinek et al., 2018) discovered the same phenomenon in $\beta$-VAE.

# A.16 WHY CAN FACTORVAE LEARN CONSISTENT REPRESENTATIONS?

Inspired by the variational information bottleneck theory (Alemi et al., 2016), we rewrite the standard VAE objective in an equivalent form as follows:

$$
\min_{q(z|x)} I(x, z) \quad \text{s.t.} \quad \operatorname{Rec}(x) \leq \beta \tag{18}
$$

where $\operatorname{Rec}(x)$ denotes the reconstruction loss over $x$ and $\beta$ is a scalar.

In the case of FactorVAE, since all latent representations are independent, we can decompose $I(x,z)$ into $\sum_{i}I(x,z_{i})$. Thus, we argue that FactorVAE optimizes the following information bottleneck objective:

$$
\min_{q(z|x)} \sum_{i} I(x, z_{i}) \quad \text{s.t.} \quad \operatorname{Rec}(x) \leq \beta \tag{19}
$$

We assume that $\operatorname{Rec}(x)$ imposes a fixed condition on all $q(z_i|x)$. Because $I(x, z_i)$ is a convex function of $q(z_i|x)$ (see Appdx. A.17), minimizing Eq.
19 leads to unique solutions for all $q(z_i|x)$ (note that we do not count permutation invariance among the $z_i$ here).

To make $\operatorname{Rec}(x)$ a fixed condition on all $q(z_{i}|x)$, we can further optimize $p(x|z)$ with $z$ sampled from a fixed distribution such as $\mathcal{N}(0,1)$. This suggests that we can add a GAN objective to the original FactorVAE objective to achieve more consistent representations.

# A.17 $I(x,z)$ IS A CONVEX FUNCTION OF $p(z|x)$

Let us first recall the definition of a convex function and some of its known properties.

Definition 3. Let $X$ be a set in the real vector space $\mathbb{R}^D$ and let $f: X \to \mathbb{R}$ be a function that outputs a scalar. $f$ is convex if $\forall x_1, x_2 \in X$ and $\forall \lambda \in [0,1]$, we have:

$$
f(\lambda x_{1} + (1 - \lambda)x_{2}) \leq \lambda f(x_{1}) + (1 - \lambda)f(x_{2})
$$

Proposition 4. A twice-differentiable function $f$ is convex on an interval if and only if its second derivative is non-negative there.

Proposition 5 (Jensen's inequality). Let $x_{1}, \ldots, x_{n}$ be real numbers and let $a_{1}, \ldots, a_{n}$ be positive weights with $\sum_{i}^{n} a_{i} = 1$. If $f$ is a convex function on the domain of $x_{1}, \ldots, x_{n}$, then

$$
f\left(\sum_{i=1}^{n} a_{i}x_{i}\right) \leq \sum_{i=1}^{n} a_{i}f(x_{i})
$$

Equality holds if and only if all $x_{i}$ are equal or $f$ is a linear function.

Proposition 6 (Log-sum inequality). Let $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ be non-negative numbers. Denote $a = \sum_{i=1}^{n} a_i$ and $b = \sum_{i=1}^{n} b_i$. We have:

$$
\sum_{i=1}^{n} a_{i}\log\frac{a_{i}}{b_{i}} \geq a\log\frac{a}{b}
$$

with equality if and only if the ratios $\frac{a_i}{b_i}$ are equal for all $i$.

Armed with the definition and propositions, we can now prove that $I(x,z)$ is a convex function of $p(z|x)$.
Let $p_1(z|x)$ and $p_2(z|x)$ be two distributions and let $p_\star(z|x) = \lambda p_1(z|x) + (1 - \lambda)p_2(z|x)$ with $\lambda \in [0,1]$. $p_\star(z|x)$ is a valid distribution since $p_\star(z|x) > 0\ \forall z$ and $\int_x\int_z p_\star(z|x)p(x)\,dz\,dx = 1$. In addition, we have:

$$
\begin{array}{l} p_{\star}(z) = \int_{x} p_{\star}(z|x)p(x)dx \\ = \int_{x}\left(\lambda p_{1}(z|x) + (1 - \lambda)p_{2}(z|x)\right)p(x)dx \\ = \lambda\int_{x} p_{1}(z|x)p(x)dx + (1 - \lambda)\int_{x} p_{2}(z|x)p(x)dx \\ = \lambda p_{1}(z) + (1 - \lambda)p_{2}(z) \end{array}
$$

We now show that $\lambda I_{1}(x,z) + (1 - \lambda)I_{2}(x,z) \geq I_{\star}(x,z)$:

$$
\begin{array}{l} \lambda I_{1}(x,z) + (1 - \lambda)I_{2}(x,z) = \lambda\int_{x} p(x)\int_{z} p_{1}(z|x)\log\frac{p_{1}(z|x)}{p_{1}(z)}dzdx \\ \quad + (1 - \lambda)\int_{x} p(x)\int_{z} p_{2}(z|x)\log\frac{p_{2}(z|x)}{p_{2}(z)}dzdx \\ = \int_{x} p(x)\int_{z}\left(\lambda p_{1}(z|x)\log\frac{\lambda p_{1}(z|x)}{\lambda p_{1}(z)} + (1 - \lambda)p_{2}(z|x)\log\frac{(1 - \lambda)p_{2}(z|x)}{(1 - \lambda)p_{2}(z)}\right)dzdx \\ \geq \int_{x} p(x)\int_{z} p_{\star}(z|x)\log\frac{p_{\star}(z|x)}{p_{\star}(z)}dzdx \tag{20} \\ = I_{\star}(x,z) \end{array}
$$

where the inequality in Eq. 20 follows from the log-sum inequality. This completes the proof.

# A.18 EXPERIMENTS SHOWING THAT FACTORVAE LEARNS CONSISTENT REPRESENTATIONS

We first trained several FactorVAE models with 3 latent variables on the CelebA dataset. After training, for each model, we performed 2D interpolation on every pair of latent variables $z_{i}, z_{j}$ ($i \leq j$) and decoded the interpolated latent representations back into images for visualization. We found that the learned representations from these models share visually similar patterns, as illustrated in Fig. 26. It is apparent that all images in Fig.
26 are derived from a single one (e.g. taking the first image as a reference) by switching rows and columns and/or flipping the whole image vertically or horizontally. Switching happens because all latent variables of FactorVAE are permutation invariant. Flipping happens due to the symmetry of $q(z_{i})$, which is forced to be similar to $p(z_{i}) = \mathcal{N}(0,1)$.

![](images/40a35bc9bdc8dea1c35e59c2d5203da4f0dc9692613f56a1a7a7b13ac31b98fc.jpg)
(a) $\mathrm{TC} = 50$

![](images/011b3105cb169260cd5f0ac6e050ffe3e83e34bc63239a471943797ff3918697.jpg)
Figure 26: Random traversal on the latent space of FactorVAE. Note the visual resemblance among image regions corresponding to the same number.

![](images/aa39bf97ff293ed19731191b031b229e9762eed844c5e57789dea6f7718b27ee.jpg)
(b) $\mathrm{TC} = 10$

![](images/ceab033c58dfeed26181640bd3a14884646d10c5c1099fa725b2424fe3ebbcf7.jpg)

![](images/89b448b7eefa2366423842ebc647dec2d9de90bbecf2a218de8a6c91243ecb63.jpg)
(a) $\mathrm{TC} = 10$, z_dim=65

![](images/1b0e295c56507e49d3bcb89bc6beeeb747383249423e79175955c57a0750c101.jpg)
(b) $\mathrm{TC} = 50$, z_dim=65

![](images/daa04e0d2b452bf5b0a9f293c2b1b8692d1f8d276ab2a1d9ab634b12cbc0b6da.jpg)
(c) $\mathrm{TC} = 50$, z_dim=100

![](images/b2303c538cdf83edf5eead221b4eef816c324ef2dc756b8d40c7b485dcc31161.jpg)
(d) $\mathrm{TC} = 50$, z_dim=200
Figure 27: Top 10 representations sorted by the variance of the distribution of $\mathbb{E}_{q(z_i|x^{(n)})}[z_i]$ over all $x^{(n)}$.

We then repeated the above experiment on FactorVAE models with 65, 100, and 200 latent variables, replacing 2D interpolation on pairs of latent variables with conditional 1D interpolation on individual latent variables to keep the number of combinations manageable.
We sorted the latent variables $z_{i}$ of each model in descending order of the variance of the distribution of $\mathbb{E}_{q(z_i|x^{(n)})}[z_i]$ over all data samples $x^{(n)}\sim p_{\mathcal{D}}(x)$. Fig. 27 shows the results for the top 10 latent variables of each model. Some factors of variation are consistently learned by these models, for example those representing changes in the color of the image background. Because these factors usually appear at the top, we hypothesize that the learned factors follow some fixed order. However, many pronounced factors do not appear at the top, suggesting that this sorting criterion is inadequate. We then used the informativeness metric defined in Sec. 4.1 to sort the latent variables. Now the "visual consistency" and "ordering consistency" patterns emerge (see Fig. 28). We also observed that the number of learned factors is relatively fixed (around 38-43) for all models, even though the number of latent variables varies from 65 to 200.

![](images/a2e97f1d509cde7b2403675bdc8afeb4e70d647da5c878dfb308368d863b9e74.jpg)
(a) $\mathrm{TC} = 10$, z_dim=65

![](images/6a1d627c4cb72da12ebcf7b077ddf1f3e15c5bbba246464548d17971608ed6c9.jpg)
(b) $\mathrm{TC} = 50$, z_dim=65
Figure 28: Top 10 representations sorted by informativeness scores. We can clearly see the consistency of representations across different runs.
![](images/ba3c6c602bbf21f6ca2628db3884c3d1bdd1b4d561feefb3bb4a142a350498c4.jpg)
(c) $\mathrm{TC} = 50$, z_dim=100

![](images/b1356a6dc4fe230f02005c0f1c31e96937784798436458ec56af25ffd383d9f2.jpg)
(d) $\mathrm{TC} = 50$, z_dim=200
# THE SHAPE OF DATA: INTRINSIC DISTANCE FOR DATA DISTRIBUTIONS

Anton Tsitsulin, Marina Munkhoeva, Davide Mottin, Panagiotis Karras, Alex Bronstein, Ivan Oseledets, Emmanuel Muller

# ABSTRACT

The ability to represent and compare machine learning models is crucial in order to quantify subtle model changes, evaluate generative models, and gather insights on neural network architectures.
Existing techniques for comparing data distributions focus on global data properties such as mean and covariance; in that sense, they are extrinsic and uni-scale. We develop a first-of-its-kind intrinsic and multi-scale method for characterizing and comparing data manifolds, using a lower bound of the spectral Gromov-Wasserstein inter-manifold distance, which compares all data moments. In a thorough experimental study, we demonstrate that our method effectively discerns the structure of data manifolds even on unaligned data of different dimensionality, and showcase its efficacy in evaluating the quality of generative models.

# 1 INTRODUCTION

The geometric properties of neural networks provide insights about their internals (Morcos et al., 2018; Wang et al., 2018) and help researchers design more robust models (Arjovsky et al., 2017; Binkowski et al., 2018). Generative models are a natural example of the need for geometric comparison of distributions. As generative models aim to reproduce the true data distribution $\mathbb{P}_d$ by means of the model distribution $\mathbb{P}_g(\mathbf{z};\Theta)$, delicate evaluation procedures are needed. Often, we wish to compare data lying in entirely different spaces, for example to track model evolution or to compare models with different representation spaces.

In order to evaluate the performance of generative models, past research has proposed several extrinsic evaluation measures, most notably the Fréchet (Heusel et al., 2017) and Kernel (Binkowski et al., 2018) Inception Distances (FID and KID). Such measures reflect only the first two or three moments of distributions, meaning they can be insensitive to global structural problems. We showcase this shortcoming in Figure 1: FID and KID are insensitive to the global structure of the data distribution. Moreover, as FID and KID are based only on extrinsic properties, they are unable to compare unaligned data manifolds.
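A toy 1-D version of this failure mode (our own illustration, not the paper's experiment): a Gaussian and a two-mode mixture are matched in mean and variance, so the Gaussian-fit Fréchet distance that FID computes per dimension is near zero even though the shapes clearly differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two very different 1-D distributions with matched mean and variance:
# a standard Gaussian vs. a symmetric two-cluster mixture scaled to unit variance.
gauss = rng.normal(0.0, 1.0, size=100_000)
modes = rng.choice([-1.0, 1.0], size=100_000) + rng.normal(0.0, 0.1, size=100_000)
modes /= modes.std()

def frechet_1d(a, b):
    """1-D specialization of the Frechet (Wasserstein-2) distance between
    Gaussian fits: (mu_a - mu_b)^2 + (sigma_a - sigma_b)^2."""
    return (a.mean() - b.mean()) ** 2 + (a.std() - b.std()) ** 2

print(frechet_1d(gauss, modes))   # ~0: moment matching misses the two modes
print(np.mean(gauss ** 4), np.mean(modes ** 4))  # 4th moments differ sharply
```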
![](images/3ffbfdaf68f4d07bdfcbc6b5e4a69c335b651cbad4a79c75f4bc81415d21a4ba.jpg)
Figure 1: Two distributions having the same first 3 moments, meaning FID and KID scores are close to 0.

In this paper, we start from the observation that models capturing the multi-scale nature of the data manifold through higher-order moment matching, such as MMD-GAN (Li et al., 2017) and Sphere-GAN (Park & Kwon, 2019), consistently outperform their single-scale counterparts. On the other hand, extrinsic information can be misleading, as it depends on factors external to the data, such as the representation. To address this drawback, we propose IMD, an Intrinsic Multi-scale Distance, which compares distributions using only intrinsic information about the data, and we provide an efficient approximation that renders its computational complexity nearly linear. We demonstrate that IMD effectively quantifies differences in data distributions in three distinct application scenarios: comparing word vectors in languages with unaligned vocabularies, tracking the dynamics of intermediate neural network representations, and evaluating generative models.

# 2 RELATED WORK

The geometric perspective on data is ubiquitous in machine learning. Geometric techniques enhance unsupervised and semi-supervised learning, as well as generative and discriminative models (Belkin & Niyogi, 2002; Arjovsky et al., 2017; Mémoli, 2011). We outline the applications of the proposed manifold comparison technique and highlight the geometric intuition along the way.

# 2.1 GENERATIVE MODEL EVALUATION

Past research has explored many directions for the evaluation of generative models. Setting aside models that ignore the true data distribution, such as the Inception Score (Salimans et al., 2016) and GILBO (Alemi & Fischer, 2018), we discuss the most relevant geometric ideas below; we refer the reader to Borji (2019) for a comprehensive survey.

Critic model-based metrics.
Classifier two-sample tests (C2ST) (Lopez-Paz & Oquab, 2017) aim to assess whether two samples came from the same distribution by means of an auxiliary classifier. This idea is reminiscent of the GAN discriminator network (Goodfellow et al., 2014): if it is possible to train a model that distinguishes between samples from the model and the data distributions, it follows that these distributions are not entirely similar. The convergence process of a GAN-like discriminator (Arjovsky et al., 2017; Binkowski et al., 2018) lends itself to a family of metrics based on training a discriminative classifier (Im et al., 2018). Still, training a separate critic model is often computationally prohibitive and requires careful specification. Moreover, if the critic model is a neural network, the resulting metric lacks interpretability and training stability.

Many advanced GAN models, such as Wasserstein, MMD, Sobolev, and Spherical GANs, impose different constraints on the function class so as to stabilize training (Arjovsky et al., 2017; Binkowski et al., 2018; Mroueh et al., 2018; Park & Kwon, 2019). Higher-order moment matching (Binkowski et al., 2018; Park & Kwon, 2019) enhances GAN performance by enabling GANs to capture multi-scale data properties, while multi-scale noise ameliorates GAN convergence problems (Jenni & Favaro, 2019). Still, no feasible multi-scale GAN evaluation metric has been proposed to date.

Positional distribution comparison. In certain settings, it is acceptable to assign zero probability mass to the real data points (Odena et al., 2018). In effect, metrics that estimate a distribution's location and dispersion provide useful input for generative model evaluation.
For instance, the Fréchet Inception Distance (FID) (Heusel et al., 2017) computes the Wasserstein-2 (i.e., Fréchet) distance between distributions approximated by Gaussians, using only the estimated mean and covariance matrices; the Kernel Inception Distance (KID) (Binkowski et al., 2018) computes a polynomial kernel $\mathrm{k}(x,y) = (\frac{1}{d}x^{\top}y + 1)^{3}$ and measures the associated kernel Maximum Mean Discrepancy (kernel MMD). Unlike FID, KID has an unbiased estimator (Gretton et al., 2012; Binkowski et al., 2018). However, even though such methods, based on a limited number of moments, are computationally inexpensive, they provide only a rudimentary characterization of distributions from a geometric viewpoint.

Intrinsic geometric measures. The Geometry Score (Khrulkov & Oseledets, 2018) characterizes distributions in terms of their estimated persistent homology, which roughly corresponds to the number of holes in a manifold. Still, the Geometry Score assesses distributions merely in terms of their global geometry. In this work, we aim to provide a multi-scale geometric assessment.

# 2.2 SIMILARITIES OF NEURAL NETWORK REPRESENTATIONS

Learning how representations evolve during training or across initializations provides a pathway to the interpretability of neural networks (Raghu et al., 2017). Still, state-of-the-art methods for comparing representations of neural networks (Kornblith et al., 2019; Morcos et al., 2018; Wang et al., 2018) consider only linear projections. The intrinsic nature of IMD renders it appropriate for the task of comparing neural network representations, which can rely only on intrinsic information.

Yin & Shen (2018) introduced the Pairwise Inner Product (PIP) loss, an unnormalized covariance error between sets, as a dissimilarity metric between word2vec embedding spaces with a common vocabulary. We show in Section 4.2 how IMD is applicable to this comparison task as well.
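The unbiased KID estimator mentioned in Section 2.1 is short to write down. A minimal sketch (ours; the toy Gaussian features stand in for Inception activations, and `mmd2_unbiased` is our own helper name), using the polynomial kernel $\mathrm{k}(x,y) = (\frac{1}{d}x^{\top}y + 1)^{3}$:

```python
import numpy as np

def polynomial_kernel(X, Y):
    """KID kernel (Binkowski et al., 2018): k(x, y) = (x.y / d + 1)^3."""
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def mmd2_unbiased(X, Y):
    """Unbiased estimator of kernel MMD^2: diagonal (i = j) terms are
    excluded from the within-sample averages."""
    m, n = len(X), len(Y)
    Kxx = polynomial_kernel(X, X)
    Kyy = polynomial_kernel(Y, Y)
    Kxy = polynomial_kernel(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(500, 8)), rng.normal(size=(500, 8)))
shifted = mmd2_unbiased(rng.normal(size=(500, 8)),
                        rng.normal(loc=1.0, size=(500, 8)))
print(same)     # fluctuates around 0 for identical distributions
print(shifted)  # clearly positive for a mean-shifted distribution
```

Being unbiased, the estimator can return small negative values when the two samples come from the same distribution.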
# 3 MULTI-SCALE INTRINSIC DISTANCE

At the core of deep learning lies the manifold hypothesis, which states that high-dimensional data, such as images or text, lie on a low-dimensional manifold (Narayanan & Mitter, 2010; Belkin & Niyogi, 2002; 2007). We aim to provide a theoretically motivated comparison of data manifolds based on rich intrinsic information. Our target measure should have the following properties:

- intrinsic: it is invariant to isometric transformations of the manifold, e.g. translations or rotations;
- multi-scale: it captures both local and global information.

We develop our method starting from heat kernels, which admit a notion of manifold metric and can be used to lower-bound the distance between manifolds.

# 3.1 HEAT KERNELS ON MANIFOLDS AND GRAPHS

Based on the heat equation, the heat kernel captures all the information about a manifold's intrinsic geometry (Sun et al., 2009). Given the Laplace-Beltrami operator (LBO) $\Delta_{\mathcal{X}}$ on a manifold $\mathcal{X}$, the heat equation is $\frac{\partial u}{\partial t} = \Delta_{\mathcal{X}}u$ for $u:\mathbb{R}^{+}\times \mathcal{X}\to \mathbb{R}^{+}$. A smooth function $u$ is a fundamental solution of the heat equation at a point $x\in \mathcal{X}$ if $u$ satisfies both the heat equation and the Dirac condition $u(t,x^{\prime})\rightarrow \delta(x^{\prime} - x)$ as $t\to 0^{+}$. We assume the Dirichlet boundary condition $u(t,x) = 0$ for all $t$ and $x\in \partial\mathcal{X}$.
The heat kernel $\mathrm{k}_{\mathcal{X}}\colon \mathcal{X}\times \mathcal{X}\times \mathbb{R}^{+}\to \mathbb{R}_0^+$ is the unique solution of the heat equation; while heat kernels can be defined on hyperbolic spaces and other exotic geometries, we restrict our exposition to Euclidean spaces $\mathcal{X} = \mathbb{R}^d$, on which the heat kernel is defined as:

$$
\mathrm{k}_{\mathbb{R}^{d}}\left(x, x^{\prime}, t\right) = \frac{1}{\left(4\pi t\right)^{d/2}}\exp\left(-\frac{\|x - x^{\prime}\|^{2}}{4t}\right) \tag{1}
$$

For a compact $\mathcal{X}$, including submanifolds of $\mathbb{R}^d$, the heat kernel admits the expansion $\mathrm{k}_{\mathcal{X}}(x,x',t) = \sum_{i=0}^{\infty} e^{-\lambda_i t}\phi_i(x)\phi_i(x')$, where $\lambda_i$ and $\phi_i$ are the $i$-th eigenvalue and eigenfunction of $\Delta_{\mathcal{X}}$. For $t \simeq 0^+$, by Varadhan's lemma, the heat kernel approximates geodesic distances. Importantly for our purposes, the heat kernel is multi-scale: for a local domain $\mathcal{D}$ with Dirichlet condition, the localized heat kernel $\mathrm{k}_{\mathcal{D}}(x,x',t)$ is a good approximation of $\mathrm{k}_{\mathcal{X}}(x,x',t)$ if either (i) $\mathcal{D}$ is arbitrarily small and $t$ is small enough, or (ii) $t$ is arbitrarily large and $\mathcal{D}$ is big enough. Formally,

Definition 1 (Multi-scale property (Grigor'yan, 2006; Sun et al., 2009)) (i) For any smooth and relatively compact domain $\mathcal{D} \subseteq \mathcal{X}$, $\lim_{t \to 0}\mathrm{k}_{\mathcal{D}}(x, x', t) = \mathrm{k}_{\mathcal{X}}(x, x', t)$. (ii) For any $t \in \mathbb{R}^+$ and any $x, x' \in \mathcal{D}_1$, the localized heat kernels satisfy $\mathrm{k}_{\mathcal{D}_1}(x, x', t) \leq \mathrm{k}_{\mathcal{D}_2}(x, x', t)$ if $\mathcal{D}_1 \subseteq \mathcal{D}_2$.
Moreover, if $\{\mathcal{D}_n\}$ is an expanding and exhausting sequence, i.e. $\bigcup_{i=1}^{\infty}\mathcal{D}_i = \mathcal{X}$ and $\mathcal{D}_{i-1} \subseteq \mathcal{D}_i$, then $\lim_{i \to \infty}\mathrm{k}_{\mathcal{D}_i}(x, x', t) = \mathrm{k}_{\mathcal{X}}(x, x', t)$ for any $t$.

Heat kernels are also defined for graphs in terms of their Laplacian matrices. An undirected graph is a pair $G = (V, E)$, where $V = (v_{1}, \ldots, v_{n})$, $n = |V|$, is the set of vertices and $E \subseteq (V \times V)$ is the set of edges. The adjacency matrix of $G$ is an $n \times n$ matrix $\mathbf{A}$ with $\mathbf{A}_{ij} = 1$ if $(i, j) \in E$ and $\mathbf{A}_{ij} = 0$ otherwise. The normalized graph Laplacian is the matrix $\mathcal{L} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$, where $\mathbf{D}$ is the diagonal matrix whose entry $\mathbf{D}_{ii}$ holds the degree of node $i$, i.e., $\mathbf{D}_{ii} = \sum_{j=1}^{n}\mathbf{A}_{ij}$. Since the Laplacian matrix is symmetric, its eigenvectors $\phi_{1}, \ldots, \phi_{n}$ are real and orthonormal. Thus, it can be factorized as $\mathcal{L} = \Phi\Lambda\Phi^{\top}$, where $\Lambda$ is a diagonal matrix with the sorted eigenvalues $\lambda_{1} \leq \ldots \leq \lambda_{n}$ and $\Phi$ is the orthonormal matrix $\Phi = (\phi_{1}, \ldots, \phi_{n})$ having the eigenvectors of $\mathcal{L}$ as its columns. The heat kernel on a graph is likewise given by the solution to the heat equation on the graph, which requires an eigendecomposition of its Laplacian: $\mathbf{H}_t = e^{-t\mathcal{L}} = \Phi e^{-t\Lambda}\Phi^{\top} = \sum_i e^{-t\lambda_i}\phi_i\phi_i^{\top}$.
A useful invariant of the heat kernel is the heat kernel trace $\mathrm{hkt}_{\mathcal{X}}: \mathbb{R}_0^+ \to \mathbb{R}_0^+$, defined by a diagonal restriction as $\mathrm{hkt}_{\mathcal{X}}(t) = \int_{\mathcal{X}}\mathrm{k}_{\mathcal{X}}(x, x, t)dx = \sum_{i=0}^{\infty} e^{-\lambda_i t}$ or, in the discrete case, $\mathrm{hkt}_{\mathcal{L}}(t) = \mathrm{Tr}(\mathbf{H}_t) = \sum_i e^{-t\lambda_i}$. Heat kernel traces (HKTs) have been successfully applied to the analysis of 3D shapes (Sun et al., 2009) and graphs (Tsitsulin et al., 2018). The HKT contains all the information in the graph's spectrum, both local and global, as the eigenvalues $\lambda_i$ can be inferred from it (Mémoli, 2011, Remark 4.8). For example, if there are $c$ connected components in the graph, then $\lim_{t \to \infty}\mathrm{hkt}_{\mathcal{L}}(t) = c$.

# 3.2 CONVERGENCE TO THE LAPLACE-BELTRAMI OPERATOR

An important property of graph Laplacians is that it is possible to construct a graph among points sampled from a manifold $\mathcal{X}$ such that the spectral properties of its Laplacian resemble those of the Laplace-Beltrami operator on $\mathcal{X}$. Belkin and Niyogi (Belkin & Niyogi, 2002) proposed such a construction, the point cloud Laplacian, which is used for dimensionality reduction in a technique called Laplacian eigenmaps. Convergence to the LBO has been proven for various definitions of the graph Laplacian, including the one we use (Belkin & Niyogi, 2007; Hein et al., 2007; Coifman & Lafon, 2006; Ting et al., 2010). We restate the convergence result for the point cloud Laplacian from Belkin & Niyogi (2007):

Theorem 1 Let $\lambda_{n,i}^{t_n}$ and $\phi_{n,i}^{t_n}$ be the $i^{\text{th}}$ eigenvalue and eigenvector, respectively, of the point cloud Laplacian $\mathcal{L}^{t_n}$; let $\lambda_i$ and $\phi_i$ be the $i^{\text{th}}$ eigenvalue and eigenfunction of the LBO $\Delta$.
Then, there exists $t_n \to 0$ such that

$$
\lim_{n \to \infty} \lambda_{n,i}^{t_n} = \lambda_i
$$

$$
\lim_{n \to \infty} \left\| \phi_{n,i}^{t_n} - \phi_i \right\|_2 = 0
$$

Still, the point cloud Laplacian involves the creation of an $\mathcal{O}(n^2)$ matrix; for the sake of scalability, we use the $k$-nearest-neighbours ($k\mathrm{NN}$) graph by OR-construction (i.e., two points are connected if either is among the $k$ nearest neighbours of the other), whose Laplacian converges to the LBO for data with sufficiently high intrinsic dimension (Ting et al., 2010). As for the choice of $k$, a random geometric $k\mathrm{NN}$ graph is connected when $k \geq \log n / \log 7 \approx 0.5139 \log n$ (Balister et al., 2005); $k = 5$ yields connected graphs for all sample sizes we tested.

# 3.3 SPECTRAL GROMOV-WASSERSTEIN DISTANCE

While it is defined on continuous manifolds, the heat kernel can be spectrally approximated by finite graphs constructed from points sampled from these manifolds. In order to construct a metric between manifolds, Mémoli (2011) suggests an optimal-transport-theoretic "meta-distance": a spectral definition of the Gromov-Wasserstein distance between Riemannian manifolds based on matching the heat kernels at all scales. The cost of matching a pair of points $(x, x')$ on manifold $\mathcal{M}$ to a pair of points $(y, y')$ on manifold $\mathcal{N}$ at scale $t$ is given by their heat kernels $\mathrm{k}_{\mathcal{M}}, \mathrm{k}_{\mathcal{N}}$:

$$
\Gamma(x, y, x', y', t) = \left| \mathrm{k}_{\mathcal{M}}(x, x', t) - \mathrm{k}_{\mathcal{N}}(y, y', t) \right|.
$$

The distance between the manifolds is then defined in terms of the infimal measure coupling

$$
d_{\mathrm{GW}}(\mathcal{M},\mathcal{N}) = \inf_{\mu}\sup_{t > 0}e^{-2(t + t^{-1})}\left\| \Gamma \right\|_{L^{2}(\mu \times \mu)},
$$

where the infimum is sought over all measures $\mu$ on $\mathcal{M} \times \mathcal{N}$ marginalizing to the standard measures on $\mathcal{M}$ and $\mathcal{N}$. For finite spaces, $\mu$ is a doubly-stochastic matrix. This distance is lower-bounded (Mémoli, 2011) in terms of the respective heat kernel traces as:

$$
d_{\mathrm{GW}}(\mathcal{M}, \mathcal{N}) \geq \sup_{t > 0} e^{-2(t + t^{-1})} \left| \mathrm{hkt}_{\mathcal{M}}(t) - \mathrm{hkt}_{\mathcal{N}}(t) \right|. \tag{2}
$$

This lower bound is the scaled $L_{\infty}$ distance between the heat trace signatures $\mathrm{hkt}_{\mathcal{M}}$ and $\mathrm{hkt}_{\mathcal{N}}$. The scaling factor $e^{-2(t + t^{-1})}$ favors medium-scale differences, meaning that this lower bound is not sensitive to local perturbations. The maximum of the scaling factor occurs at $t = 1$, and more than $1 - 10^{-8}$ of the function's mass lies between $t = 0.1$ and $t = 10$.

# 3.4 HEAT TRACE ESTIMATION

Calculating the heat trace signature efficiently and accurately is a challenge on a large graph, as it involves computing the trace of a large matrix exponential, i.e., $\mathrm{Tr}(e^{-t\mathcal{L}})$. A naive approach would be to use an eigendecomposition $\exp(-t\mathcal{L}) = \Phi \exp(-t\Lambda)\Phi^{\top}$, which is infeasible for large $n$. Recent work (Tsitsulin et al., 2018) suggested using either a truncated Taylor expansion or linear interpolation of the eigenvalues; however, both techniques are quite coarse. To combine accuracy and speed, we use the Stochastic Lanczos Quadrature (SLQ) (Ubaru et al., 2017; Golub & Meurant, 2009).
This method combines the Hutchinson trace estimator (Hutchinson, 1989; Adams et al., 2018) and the Lanczos algorithm for eigenvalues. We aim to estimate the trace of a matrix function with a Hutchinson estimator:

$$
\operatorname{Tr}(f(\mathcal{L})) = \mathbb{E}_{p(\mathbf{v})}(\mathbf{v}^{\top} f(\mathcal{L}) \mathbf{v}) \approx \frac{n}{n_v} \sum_{i=1}^{n_v} \mathbf{v}_i^{\top} f(\mathcal{L}) \mathbf{v}_i, \tag{3}
$$

where the function of interest is $f(\cdot) = \exp(\cdot)$ and the $\mathbf{v}_i$ are $n_v$ random vectors drawn from a distribution $p(\mathbf{v})$ with zero mean and unit variance. Typical choices for $p(\mathbf{v})$ are the Rademacher and the standard normal distribution. In practice there is little difference between the two: in theory, Rademacher vectors yield lower variance, but Gaussian vectors require fewer samples (Avron & Toledo, 2011).

To estimate the quadratic form $\mathbf{v}_i^\top f(\mathcal{L})\mathbf{v}_i$ in (3) with a symmetric real-valued matrix $\mathcal{L}$ and a smooth function $f$, we plug in the eigendecomposition $\mathcal{L} = \Phi \Lambda \Phi^\top$, rewrite the outcome as a Riemann-Stieltjes integral, and apply the $m$-point Gauss quadrature rule (Golub & Welsch, 1969):

$$
\mathbf{v}_i^{\top} f(\mathcal{L}) \mathbf{v}_i = \mathbf{v}_i^{\top} \Phi f(\Lambda) \Phi^{\top} \mathbf{v}_i = \sum_{j=1}^{n} f(\lambda_j) \mu_j^2 = \int_a^b f(t)\, d\mu(t) \approx \sum_{k=1}^{m} \omega_k f(\theta_k), \tag{4}
$$

where $\mu_j = [\Phi^\top \mathbf{v}_i]_j$ and $\mu(t)$ is a piecewise constant function defined as follows

$$
\mu(t) = \left\{ \begin{array}{ll} 0, & \text{if } t < a = \lambda_n \\ \sum_{j=1}^{i} \mu_j^2, & \text{if } \lambda_i \leq t < \lambda_{i-1} \\ \sum_{j=1}^{n} \mu_j^2, & \text{if } b = \lambda_1 \leq t \end{array} \right.
$$

and $\theta_k$ are the quadrature nodes with $\omega_k$ the corresponding weights. We obtain $\omega_k$ and $\theta_k$ with the $m$-step Lanczos algorithm (Golub & Meurant, 2009), which we describe succinctly.

Given the symmetric matrix $\mathcal{L}$ and an arbitrary starting unit vector $\mathbf{q}_0$, the $m$-step Lanczos algorithm computes an $n \times m$ matrix $\mathbf{Q} = [\mathbf{q}_0, \mathbf{q}_1, \ldots, \mathbf{q}_{m-1}]$ with orthonormal columns and an $m \times m$ tridiagonal symmetric matrix $\mathbf{T}$, such that $\mathbf{Q}^\top \mathcal{L} \mathbf{Q} = \mathbf{T}$. The columns of $\mathbf{Q}$ constitute an orthonormal basis for the Krylov subspace $\mathcal{K}$ spanned by the vectors $\{\mathbf{q}_0, \mathcal{L}\mathbf{q}_0, \ldots, \mathcal{L}^{m-1}\mathbf{q}_0\}$; each vector $\mathbf{q}_i$ is given as a polynomial in $\mathcal{L}$ applied to the initial vector $\mathbf{q}_0$: $\mathbf{q}_i = p_i(\mathcal{L})\mathbf{q}_0$. These Lanczos polynomials are orthogonal with respect to the integral measure $\mu(t)$. As orthogonal polynomials satisfy a three-term recurrence relation, we obtain $p_{k+1}$ as a combination of $p_k$ and $p_{k-1}$. The tridiagonal matrix storing the coefficients of these combinations, called the Jacobi matrix $\mathbf{J}$, is exactly the tridiagonal symmetric matrix $\mathbf{T}$. A classic result tells us that the nodes $\theta_k$ and the weights $\omega_k$ of the Gauss quadrature are the eigenvalues of $\mathbf{T}$, $\lambda_k$, and the squared first components of its normalized eigenvectors, $\tau_k^2$, respectively (see Golub & Welsch (1969); Wilf (1962); Golub & Meurant (2009)).
Thereby, setting $\mathbf{q}_0 = \mathbf{v}_i$, the estimate for the quadratic form becomes:

$$
\mathbf{v}_i^{\top} f(\mathcal{L}) \mathbf{v}_i \approx \sum_{k=1}^{m} \tau_k^2 f(\lambda_k), \quad \tau_k = \mathbf{U}_{0,k} = \mathbf{e}_1^{\top} \mathbf{u}_k, \quad \lambda_k = \Lambda_{k,k}, \quad \mathbf{T} = \mathbf{U} \Lambda \mathbf{U}^{\top}. \tag{5}
$$

Applying (5) over $n_v$ random vectors in the Hutchinson trace estimator (3) yields the SLQ estimate:

$$
\operatorname{Tr}(f(\mathcal{L})) \approx \frac{n}{n_v} \sum_{i=1}^{n_v} \left( \sum_{k=1}^{m} \left( \tau_k^i \right)^2 f\left( \lambda_k^i \right) \right) = \Gamma. \tag{6}
$$

We derive error bounds for the estimator based on the Lanczos approximation of the matrix exponential, and show that even a few Lanczos steps, e.g., $m = 10$, are sufficient for an accurate approximation of the quadratic form. The trace estimation error is nonetheless theoretically dominated by the error of the Hutchinson estimator: for Gaussian $p(\mathbf{v})$, the number of samples needed to guarantee that the relative error exceeds $\epsilon$ with probability at most $\delta$ is $8\epsilon^{-2}\ln(2/\delta)$ (Roosta-Khorasani & Ascher, 2015). In practice, however, we observe performance much better than this bound suggests: the bound implies roughly $n_v \geq 10^4$ random vectors for an accuracy of about $10^{-2}$, whereas with only $n_v = 100$ the error is already around $10^{-3}$. Thus, we use default values of $m = 10$ and $n_v = 100$ in all experiments in Section 4. Please see Appendix A for full derivations and figures.
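The estimator of Eqs. (3)-(6) can be sketched in a few dozen lines of numpy (a sketch with our own function names; full reorthogonalization is used for simplicity, and early termination handles Krylov breakdown):

```python
import numpy as np

def lanczos(L, q0, m):
    """m-step Lanczos: tridiagonal T with Q^T L Q = T (full reorthogonalization)."""
    n = len(q0)
    Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(m):
        w = L @ Q[:, j]
        alpha[j] = Q[:, j] @ w                        # diagonal entry of T
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # reorthogonalize against all q's
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:                       # Krylov subspace exhausted
                return (np.diag(alpha[:j + 1])
                        + np.diag(beta[:j], 1) + np.diag(beta[:j], -1))
            Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def slq_trace_exp(L, t, m=10, n_v=100, rng=None):
    """SLQ estimate of Tr(exp(-t L)), cf. Eqs. (3)-(6)."""
    rng = np.random.default_rng(rng)
    n = L.shape[0]
    total = 0.0
    for _ in range(n_v):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe, ||v||^2 = n
        T = lanczos(L, v, m)
        theta, U = np.linalg.eigh(T)          # quadrature nodes theta_k
        tau2 = U[0, :] ** 2                   # squared first eigenvector components
        total += np.sum(tau2 * np.exp(-t * theta))
    return n * total / n_v
```

On small graphs the estimate can be checked against the exact trace computed by a full eigendecomposition; the default $m = 10$, $n_v = 100$ from the text already lands within a few percent.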
![](images/dcf196b4397165f321d72292cf3943da7ee8babee619fa3e36d8d7e50b662281.jpg)
Figure 2: (a) IMD distances between language pairs for unaligned Wikipedia word embeddings and (b) distances from the simple English Wikipedia visualized for IMD, FID, and KID. We consider 16 languages: Polish, Russian, Greek, Hungarian, Turkish, Arabic, Hebrew, English, Simple English, Swedish, German, Spanish, Dutch, Portuguese, Vietnamese, and Waray-Waray.

![](images/dc9ca92e64c9208fd15d2332cb13e718c676a25811aabaa73fe32c8de9a08a10.jpg)

# 3.5 PUTTING IMD TOGETHER

We employ the advances in differential geometry and numerical linear algebra described above to create IMD (Intrinsic Multi-scale Distance), a fast, intrinsic method to lower-bound the spectral Gromov-Wasserstein distance between manifolds.

We describe the overall computation of IMD in Algorithm 1. Given data samples in $\mathbb{R}^d$, we build a $k\mathrm{NN}$ graph $G$ by OR-construction such that its Laplacian spectrum approximates that of the Laplace-Beltrami operator of the underlying manifold (Ting et al., 2010), and then compute $\mathrm{hkt}_G(t) = \sum_i e^{-\lambda_i t} \approx \Gamma$. We compare heat traces in the spirit of Equation (2), i.e., $\left|\mathrm{hkt}_{G_1}(t) - \mathrm{hkt}_{G_2}(t)\right|$ for $t \in (0.1, 10)$ sampled from a logarithmically spaced grid.

Algorithm 1 IMD algorithm.
function HeatTrace(X)
  $G \gets \mathrm{kNN}(X)$
  $\mathcal{L} \gets \mathrm{Laplacian}(G)$
  return $\Gamma = \mathrm{SLQ}(\mathcal{L}, m, n_v)$
function IMDist(X, Y)
  $\mathrm{hkt}_X \gets \mathrm{HeatTrace}(X)$
  $\mathrm{hkt}_Y \gets \mathrm{HeatTrace}(Y)$
  return $\sup_t e^{-2(t + t^{-1})} \left| \mathrm{hkt}_X(t) - \mathrm{hkt}_Y(t) \right|$

Constructing exact $k\mathrm{NN}$ graphs is an $\mathcal{O}(dn^2)$ operation; however, approximation algorithms take near-linear time $\mathcal{O}(dn^{1+\omega})$ (Dong et al., 2011; Aumüller et al., 2019).
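The pipeline of Algorithm 1 can be sketched end-to-end in numpy for small samples (a sketch: function names and grid size are our own choices, SLQ is replaced by an exact eigendecomposition, and the kNN graph is built by brute force):

```python
import numpy as np

def knn_graph(X, k=5):
    """Exact kNN adjacency by OR-construction: i ~ j if i is among the
    k nearest neighbours of j or vice versa (brute force, O(d n^2))."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nn = np.argsort(D, axis=1)[:, :k]          # k nearest neighbours per point
    A = np.zeros_like(D)
    A[np.repeat(np.arange(len(X)), k), nn.ravel()] = 1.0
    return np.maximum(A, A.T)                  # OR-symmetrization

def heat_trace(X, ts, k=5):
    """hkt_G(t) = sum_i exp(-t * lam_i); exact eigenvalues stand in for SLQ here."""
    A = knn_graph(X, k)
    deg = A.sum(axis=1)
    L = np.eye(len(X)) - A / np.sqrt(np.outer(deg, deg))
    lam = np.linalg.eigvalsh(L)
    return np.exp(-np.outer(ts, lam)).sum(axis=1)

def imdist(X, Y, k=5):
    """Scaled sup-difference of heat traces over a log grid, cf. Eq. (2)."""
    ts = np.logspace(-1, 1, 256)               # t in (0.1, 10)
    diff = np.abs(heat_trace(X, ts, k) - heat_trace(Y, ts, k))
    return np.max(np.exp(-2.0 * (ts + 1.0 / ts)) * diff)
```

Because the kNN graph depends only on pairwise distances, the resulting score is invariant to translations and rotations of the input, which is the sense in which IMD is intrinsic.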
In practice, with approximate $k\mathrm{NN}$ graph construction (Dong et al., 2011), computational time is low while result variance is similar to that of the exact case. The $m$-step Lanczos algorithm on a sparse $n \times n$ $k\mathrm{NN}$ Laplacian $\mathcal{L}$ with one starting vector has $\mathcal{O}(kmn)$ complexity, where $kn$ is the number of nonzero elements in $\mathcal{L}$. The symmetric tridiagonal matrix eigendecomposition incurs an additional $\mathcal{O}(m \log m)$ (Coakley & Rokhlin, 2013). We apply this algorithm over $n_v$ starting vectors, yielding a complexity of $\mathcal{O}(n_v(m \log m + kmn))$, with constants $k = 5$ and $m = 10$ by default. In effect, IMD's time complexity stands between those of two common GAN evaluation methods: KID, which is $\mathcal{O}(dn^2)$, and FID, which is $\mathcal{O}(d^3 + dn)$. The time complexity of Geometry Score is unspecified in Khrulkov & Oseledets (2018), yet in Section 4.6 we show that its runtime grows exponentially in sample size.

# 4 EXPERIMENTS

We evaluate IMD on its ability to compare intermediate representations of machine learning models. For instance, in a recommender system we could detect whether a problem lies with the representation or with the classifier at the end of the pipeline. In this section, we show the effectiveness of our intrinsic measure on multiple tasks and show how our intrinsic distance can provide insights beyond previously proposed extrinsic measures.

Summary of experiments. We examine the ability of $\mathrm{IMD}^1$ to measure several aspects of difference among data manifolds. We first consider a task from unsupervised machine translation with unaligned word embeddings and show that IMD captures language kinship (affinity or genealogical relationships). Second, we showcase how IMD handles data coming from sources of unequal dimensionalities.
Third, we study how IMD highlights differences among image data representations across initializations and through the training process of neural networks.

# 4.1 COMPARING UNALIGNED LANGUAGE MANIFOLDS

The problem of unaligned representations is particularly severe in the domain of natural language processing, as the vocabulary is rarely comparable across different languages or even different documents. We employ IMD to measure the relative closeness of pairs of languages based on word embeddings with different vocabularies. Figure 2 (a) shows a heatmap of pairwise IMD scores. IMD detects similar languages (Slavic, Semitic, Romance, etc.) despite the lack of ground truth vocabulary alignment. On the other hand, Figure 11 in Appendix C shows that FID and KID are not able to distinguish the intrinsic language-specific structure in word embeddings. A detailed description and the setting of the experiment can be found in Appendix C.

# 4.2 OPTIMIZING DIMENSIONALITY OF WORD EMBEDDINGS

Comparing data having different dimensionality is cumbersome, even when representations are aligned. We compare IMD with the PIP loss (Yin & Shen, 2018), which allows the comparison of aligned representations for word embeddings. To this end, we measure the IMD distance between English word embeddings of varying dimensions. Figure 3 shows the heatmap of the scores between sets of word vectors of different dimensionalities. Closer dimensionalities have lower distance scores for both metrics. However, IMD better highlights the gradual change of the size of word vectors: e.g., word vectors of size 4 and 8 are clearly closer to each other than embeddings of size 4 and 16 in terms of IMD, which is not true for the PIP loss.

![](images/1d0dc71c712c5dcaa16c6faf3d4c9079e3cbf3e1dd2e7587ee63ce655357bab6.jpg)
Figure 3: Comparison of IMD and PIP loss on word embeddings of different dimension. IMD detects subtle changes in the dimensionality.
# 4.3 TRACKING THE EVOLUTION OF IMAGE MANIFOLDS

Next, we employ IMD to inspect the internal dynamics of neural networks. We investigate the stability of output layer manifolds across random initializations. We train 10 instances of the VGG-16 (Simonyan & Zisserman, 2015) network using different weight initializations on the CIFAR-10 and CIFAR-100 datasets. We compare the average IMD scores across representations in each network layer relative to the last layer. As Figure 4 (left) shows, for both CIFAR-10 and CIFAR-100 the convolutional layers exhibit similar behavior; IMD shows that successive layers do not monotonically contribute to the separation of image representations, but start to do so after an initial feature extraction stage comprising 4 convolutional blocks. A low variance across the 10 networks trained from different random initializations indicates stability in the network structure.

![](images/f7d3ac16fc96db31e61de1aa89fc29cc7b2885cfd218587b35eb8fe9630a450e.jpg)
Figure 4: (left) IMD score across convolutional layers of the VGG-16 network on CIFAR-10 and CIFAR-100 datasets; (right) training progression in terms of accuracy (dotted) and IMD (solid) on CIFAR-10 and CIFAR-100 datasets for VGG-16 and ResNet-20, with respect to VGG-16.

![](images/ba63e5664b49645ca7de3b6eb9d8a4d0ef3c9adcdd30b44a0e99df652eff1ed1.jpg)

Table 1: IMD agrees with KID and FID across varying datasets for GAN evaluation.
MetricMNISTFashionMNISTCIFAR10CelebA
WGANWGAN-GPWGANWGAN-GPWGANWGAN-GPWGANWGAN-GP
IMD57.74 ± 0.4710.77 ± 0.42118.14 ± 0.5213.45 ± 0.5418.10 ± 0.3610.84 ± 0.4210.11 ± 0.332.84 ± 0.31
KID × 10347.26 ± 0.075.53 ± 0.03119.93 ± 0.1425.49 ± 0.0793.89 ± 0.0959.59 ± 0.09217.28 ± 0.1492.71 ± 0.08
FID31.75 ± 0.078.95 ± 0.03152.44 ± 0.1235.31 ± 0.07101.43 ± 0.0980.65 ± 0.09205.63 ± 0.0985.55 ± 0.08
We now examine the last network layers during training with different initializations. Figure 4 (right) plots the VGG-16 validation accuracy and IMD scores relative to the final layer representations of two pretrained networks: VGG-16 itself, with last-layer dimension $d = 512$, and ResNet-20, with $d = 64$ and $\sim 50$ times fewer parameters. We observe that even in such unaligned spaces, IMD correctly identifies the convergence point of the networks. Surprisingly, we find that, in terms of IMD, VGG-16 representations progress towards not only the VGG-16 final layer, but the ResNet-20 final layer representation as well; this result suggests that these networks of distinct architectures share similar final structures.

# 4.4 EVALUATING GENERATIVE MODELS

We now move on to applying IMD to the evaluation of generative models. First, we evaluate the sensitivity of IMD, FID, and KID to simple image transformations as a proxy for more intricate artifacts of modern generative models. We progressively blur images from the CIFAR-10 training set and measure the distance to the original data manifold, averaging outcomes over 100 subsamples of $10\mathrm{k}$ images each. To enable comparison across methods, we normalize each distance measure such that the distance between CIFAR-10 and MNIST is 1. Figure 5 reports the results at different levels $\sigma$ of Gaussian blur. We additionally report the normalized distance to the CIFAR-100 training set (dashed lines). FID and KID quickly drift away from the original distribution and match MNIST, a dataset of a completely different
+ +Next, we turn our attention to the sample-based evaluation of generative models. We then train the WGAN (Arjovsky et al., 2017) and WGAN-GP (Gulrajani et al., 2017) models on four datasets: MNIST, FashionMNIST, CIFAR10 and CelebA. We sample 10k samples, Y, from each GAN. We then uniformly subsample 10k images from the corresponding original dataset, X, and compute the IMD, KID and FID scores between X and Y. Table 1 reports the average measure and its $99\%$ confidence interval across 100 runs. IMD, as well as both FID and KID, reflect the fact that WGAN-GP is a more expressive model. We provide details on architecture, training, and generated samples in Appendix C. Additionally, in Appendix C we demonstrate superiority of IMD on synthetic data. + +![](images/1bf19b8bf2a3409a239618fca05a1da6ae9074109d4829e54d735f74609ba3b0.jpg) +Figure 5: FID, KID and IMD on the CIFAR-10 dataset with Gaussian blur. + +and follows the datasets structure, as the $n$ on low blur levels. Moreover, with both FIDs, ifes to exceed the distance of CIFAR-100, last, exceeding that distance only with $\sigma = 2$ . + +![](images/5fa9748276a3d903309a6c1561da97ba78faa51bfdd27145ad886082e33edece.jpg) +Figure 6: Plotting the normalized heat trace allows interpretation of medium- and global-scale structure of datasets. Best viewed in color. + +![](images/4a04c08440a88b7b16f2aac74436e89cbe001ed673dedac26e6b4152341d7faa.jpg) + +![](images/fcd9f2db7beb0e3bdc6620d10886ffd6820d715ad3a093387ad929085249d6f1.jpg) +Figure 7: Stability and scalability experiment: (left) stability of FID, KID and IMD wrt. sample size on CIFAR-10 and CIFAR-100 dataset; (right) scalability of FID, KID and IMD wrt. sample size on synthetic datasets. + +![](images/db179bfc82a2a10bec217671871c89e5d027114fc40436ad72e1fc3887034db3.jpg) + +# 4.5 INTERPRETING IMD + +To understand how IMD operates, we investigate the behavior of heat kernel traces of different datasets that are normalized by a null model. Tsitsulin et al. 
(2018) proposed a normalization by the heat kernel trace of an empty graph, which amounts to taking the average, rather than the sum, of the original heat kernel diagonal. However, this normalization is not an appropriate null model, as it ignores graph connectivity. We propose a heat kernel normalization by the expected heat kernel of an Erdős-Rényi graph (further details in Appendix D).

Figure 6 depicts the obtained normalized $\mathrm{hkt}_g$ for all datasets we work with. We average results over 100 subsamples of $10\mathrm{k}$ images each. For $t = 10$, i.e., at a medium scale, CelebA is most different from the random graph, while for large-scale $t$ values, which capture global community structure, $\frac{\mathrm{d}\,\mathrm{hkt}_g(t)}{\mathrm{d}t}$ reflects the approximate number of clusters in the data. Surprisingly, CIFAR-100 comes close to CIFAR-10 for large $t$ values; we found that this is because the pre-trained Inception network does not separate the CIFAR-100 classes well enough. We conclude that the heat kernel trace is interpretable if we normalize it with an appropriate null model.

# 4.6 VERIFYING STABILITY AND SCALABILITY OF IMD

In addition to the complexity analysis in Section 3.5, we assess the scaling and sample stability of IMD. Since IMD, like FID, is a lower bound to an optimal-transport-based metric, we cannot hope for an unbiased estimator. However, we empirically verify, in Figure 7 (left), that IMD does not diverge too much with increased sample size. Most remarkably, we observe that IMD with approximate $k\mathrm{NN}$ (Dong et al., 2011) does not induce additional variance, while it diverges slightly further than the exact version as the number of samples grows.

In terms of scalability, Figure 7 (right) shows that the theoretical complexity is supported in practice. Using approximate $k\mathrm{NN}$, we improve on the $\mathcal{O}(n^2)$ scaling of KID.
FID's time complexity appears constant, as its runtime is dominated by the $\mathcal{O}(d^3)$ matrix square root operation. Geometry Score (GS) fails to scale, as its runtime grows exponentially. Due to this prohibitive computational cost, we eschew further comparisons with GS. Furthermore, as the IMD distance is computed through a low-dimensional heat trace representation of the manifold, we can store the HKT for future comparisons, thereby enhancing performance in the case of many-to-many comparisons.

# 5 DISCUSSION AND FUTURE WORK

We introduced IMD, a geometry-grounded, first-of-its-kind intrinsic multi-scale method for comparing unaligned manifolds, which we approximate efficiently and with guarantees using the Stochastic Lanczos Quadrature. We have shown the expressiveness of IMD in quantifying the change of data representations in NLP and image processing, in evaluating generative models, and in the study of neural network representations. Since IMD allows comparing diverse manifolds, its applicability is not limited to the tasks we have evaluated; it paves the way to the development of even more expressive techniques founded on geometric insights.

# ACKNOWLEDGEMENTS

This work was partially funded by the Ministry of Science and Education of the Russian Federation as a part of Mega Grant Research Project 14.756.31.0001. Ivan Oseledets would like to thank Huawei for the support of his research.

# REFERENCES

Ryan P Adams, Jeffrey Pennington, Matthew J Johnson, Jamie Smith, Yaniv Ovadia, Brian Patton, and James Saunderson. Estimating the spectral density of large implicit matrices. arXiv preprint arXiv:1802.03451, 2018. 5
Alexander A. Alemi and Ian Fischer. GILBO: One metric to measure them all. In NeurIPS, 2018. 2
Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017. 1, 2, 8
Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull.
ANN-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems, 2019. 6 +Haim Avron and Sivan Toledo. Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix. Journal of the ACM (JACM), 2011. 5 +Paul Balister, Béla Bollobás, Amites Sarkar, and Mark Walters. Connectivity of random k-nearest-neighbour graphs. Advances in Applied Probability, 2005. 4 +Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, 2002. 2, 3, 4 +Mikhail Belkin and Partha Niyogi. Convergence of laplacian eigenmaps. In NIPS, pp. 129-136, 2007. 3, 4 +Mikołaj Binkowski, Dougal J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD gans. In ICLR, 2018. 1, 2 +Ali Borji. Pros and cons of gan evaluation measures. Computer Vision and Image Understanding, 179:41-65, 2019. 2 +Fan Chung, Linyuan Lu, and Van Vu. The spectra of random graphs with given expected degrees. _Internet Mathematics_, 2004. 16 +Ed S. Coakley and Vladimir Rokhlin. A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices. Applied and Computational Harmonic Analysis, 34(3): 379 - 414, 2013. 6 +Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and computational harmonic analysis, 2006. 4 +Amin Coja-Oghlan. On the laplacian eigenvalues of $\mathbf{g}(\mathbf{n},\mathbf{p})$ . Combinatorics, Probability and Computing, 2007. 15 +Wei Dong, Charikar Moses, and Kai Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In WWW, 2011. 6, 9 +Gene H Golub and Gérard Meurant. Matrices, moments and quadrature with applications. Princeton University Press, 2009. 5 +Gene H Golub and John H Welsch. Calculation of gauss quadrature rules. Mathematics of computation, 1969. 5 +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 
Generative adversarial nets. In NIPS, pp. 2672-2680, 2014. 2 + +Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Scholkopf, and Alexander J. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773, 2012. 2 +Alexander Grigor'yan. Heat kernels on weighted manifolds and applications. In Contemp. Math., pp. 93-191. AMS, 2006. 3 +Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein GANs. In NIPS, 2017. 8 +Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg. Graph laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 2007. 4 +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, 2017. 1, 2 +Marlis Hochbruck and Christian Lubich. On krylov subspace approximations to the matrix exponential operator. SIAM Journal on Numerical Analysis, 1997. 13 +MF Hutchinson. A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines. Communications in Statistics-Simulation and Computation, 18(3):1059-1076, 1989. 5 +Daniel Jiwoong Im, He Ma, Graham Taylor, and Kristin Branson. Quantitatively evaluating GANs with divergences proposed for training. In ICLR, 2018. 2 +Simon Jenni and Paolo Favaro. On stabilizing generative adversarial training with noise. In CVPR, 2019. 2 +Valentin Khrulkov and Ivan V. Oseledets. Geometry score: A method for comparing generative adversarial networks. In ICML, 2018. 2, 6 +Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In ICML, 2019. 2 +Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. In NIPS, 2017. 1 +David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. 
In ICLR, 2017. 2
Facundo Mémoli. A spectral notion of Gromov-Wasserstein distance and related methods. Applied and Computational Harmonic Analysis, 2011. 2, 3, 4
Ari Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. In NeurIPS, 2018. 1, 2
Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, and Yu Cheng. Sobolev GAN. In ICLR, 2018. 2
Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. In NIPS, 2010. 3
Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin A. Raffel, and Ian J. Goodfellow. Is generator conditioning causally related to GAN performance? In ICML, 2018. 2
Sung Woo Park and Junseok Kwon. Sphere generative adversarial network based on geometric moment matching. In CVPR, 2019. 1, 2
Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In NIPS, 2017. 2
Radim Řehůřek and Petr Sojka. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 2010. 15

Farbod Roosta-Khorasani and Uri Ascher. Improved bounds on sample size for implicit matrix trace estimators. Foundations of Computational Mathematics, 2015. 5, 14
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016. 2
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2015. 7
Jian Sun, Maks Ovsjanikov, and Leonidas Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In Computer graphics forum. Wiley Online Library, 2009. 3
Daniel Ting, Ling Huang, and Michael Jordan. An analysis of the convergence of graph laplacians. In ICML, 2010.
4, 6
Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alexander M. Bronstein, and Emmanuel Müller. NetLSD: Hearing the shape of a graph. In KDD, 2018. 3, 4, 9, 16
Shashanka Ubaru, Jie Chen, and Yousef Saad. Fast estimation of tr(f(A)) via stochastic Lanczos quadrature. SIAM Journal on Matrix Analysis and Applications, 2017. 5, 13
Liwei Wang, Lunjia Hu, Jiayuan Gu, Zhiqiang Hu, Yue Wu, Kun He, and John Hopcroft. Towards understanding learning representations: To what extent do different neural networks learn the same representation. In Advances in Neural Information Processing Systems, 2018. 1, 2
Herbert S Wilf. Mathematics for the physical sciences. 1962. 5
Zi Yin and Yuanyuan Shen. On the dimensionality of word embedding. In Advances in Neural Information Processing Systems, pp. 887-898, 2018. 2, 7

# APPENDIX

# A TRACE ESTIMATION ERROR BOUNDS

We will use the error of the Lanczos approximation of the action of the matrix exponential $f(\mathcal{L})\mathbf{v} = \exp^{-t\mathcal{L}}\mathbf{v}$ to estimate the error of the trace. We first rewrite the quadratic form under summation in the trace approximation into a convenient form,

$$
\mathbf{v}^{\top} f(\mathcal{L}) \mathbf{v} \approx \sum_{k=1}^{m} \tau_k^2 f(\lambda_k) = \sum_{k=1}^{m} \left[ \mathbf{e}_1^{\top} \mathbf{u}_k \right]^2 f(\lambda_k) = \mathbf{e}_1^{\top} \mathbf{U} f(\Lambda) \mathbf{U}^{\top} \mathbf{e}_1 = \mathbf{e}_1^{\top} f(\mathbf{T}) \mathbf{e}_1. \tag{7}
$$

Because the Krylov subspace $\mathcal{K}_m(\mathcal{L},\mathbf{v})$ is built on top of vector $\mathbf{v}$ with $\mathbf{Q}$ as an orthonormal basis of $\mathcal{K}_m(\mathcal{L},\mathbf{v})$, i.e.
$\mathbf{q}_0 = \mathbf{v}$ and $\mathbf{v} \perp \mathbf{q}_i$ for $i \in \{1, \dots, m-1\}$, the following holds:

$$
\mathbf{v}^{\top} f(\mathcal{L}) \mathbf{v} \approx \mathbf{v}^{\top} \mathbf{Q} f(\mathbf{T}) \mathbf{e}_{1} = \mathbf{e}_{1}^{\top} f(\mathbf{T}) \mathbf{e}_{1}. \tag{8}
$$

Thus, the error in the quadratic form estimate $\mathbf{v}^{\top}f(\mathcal{L})\mathbf{v}$ is exactly the error of the Lanczos approximation $f(\mathcal{L})\mathbf{v}\approx \mathbf{Q}f(\mathbf{T})\mathbf{e}_1$. To obtain the error bounds, we use Theorem 2 in Hochbruck & Lubich (1997), which we restate below.

Theorem 2 Let $\mathcal{L}$ be a real symmetric positive semi-definite matrix with eigenvalues in the interval $[0,4\rho]$. Then the error in the $m$-step Lanczos approximation of $\exp(-t\mathcal{L})\mathbf{v}$, i.e. $\epsilon_{m} = \|\exp(-t\mathcal{L})\mathbf{v} - \mathbf{Q}_{m}\exp(-t\mathbf{T}_{m})\mathbf{e}_{1}\|$, is bounded in the following ways:

$$
\epsilon_{m} \leq \begin{cases} 10\, e^{-m^{2}/(5\rho t)}, & \sqrt{4\rho t} \leq m \leq 2\rho t \\ 10\, (\rho t)^{-1} e^{-\rho t} \left(\frac{e\rho t}{m}\right)^{m}, & m \geq 2\rho t \end{cases} \tag{9}
$$

Since $\mathbf{v}$ is a unit vector, thanks to the Cauchy-Bunyakovsky-Schwarz inequality, we can upper-bound the error of the quadratic form approximation by the error of the $\exp(-t\mathcal{L})\mathbf{v}$ approximation, i.e. $|\mathbf{v}^{\top}f(\mathcal{L})\mathbf{v} - \mathbf{e}_1^{\top}\mathbf{U}f(\Lambda)\mathbf{U}^{\top}\mathbf{e}_1| \leq \|\exp(-t\mathcal{L})\mathbf{v} - \mathbf{Q}_m\exp(-t\mathbf{T}_m)\mathbf{e}_1\| = \epsilon_m$.

Following the argument in Ubaru et al. (2017), we obtain a condition on the number of Lanczos steps $m$ by setting $\epsilon_{m} \leq \frac{\epsilon}{2} f_{\min}(\lambda)$, where $f_{\min}(\lambda)$ is the minimum value of $f$ on $[\lambda_{\min}, \lambda_{\max}]$.
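The Lanczos quadrature of Eqs. (7)-(8) can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: a dense normalized Laplacian of a cycle graph stands in for the real kNN graph, and scipy's `expm` evaluates the small tridiagonal exponential.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_quadratic_form(L, v, t, m):
    """Approximate v^T exp(-t L) v via m Lanczos steps: e_1^T exp(-t T) e_1 (Eq. 8)."""
    n = L.shape[0]
    q = v / np.linalg.norm(v)          # q_0 = v (unit starting vector)
    q_prev = np.zeros(n)
    b = 0.0
    alpha = np.zeros(m)
    beta = np.zeros(m)
    for k in range(m):
        w = L @ q - b * q_prev         # three-term Lanczos recurrence
        alpha[k] = q @ w
        w = w - alpha[k] * q
        b = np.linalg.norm(w)
        beta[k] = b
        if b < 1e-12:                  # invariant subspace found: truncate
            m = k + 1
            break
        q_prev, q = q, w / b
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return expm(-t * T)[0, 0]          # e_1^T exp(-t T) e_1

# demo: normalized Laplacian of a cycle graph (2-regular, so L = I - A/2)
n = 50
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = np.eye(n) - A / 2.0
rng = np.random.default_rng(0)
v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)  # unit Rademacher probe
exact = v @ expm(-0.1 * L) @ v
approx = lanczos_quadratic_form(L, v, t=0.1, m=10)
```

For $t = 0.1$ and $m = 10$ the bound of Eq. (9) is far below machine precision, so the two values agree to rounding error.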
We now derive the absolute error between the Hutchinson estimate of Equation (3) and the SLQ of Equation (6):

$$
\begin{aligned}
\left|\operatorname{Tr}_{n_v}(f(\mathcal{L})) - \Gamma\right| &= \frac{n}{n_v} \left|\sum_{i=1}^{n_v} \mathbf{v}_i^{\top} f(\mathcal{L}) \mathbf{v}_i - \sum_{i=1}^{n_v} \mathbf{e}_1^{\top} f(\mathbf{T}^{(i)}) \mathbf{e}_1\right| \\
&\leq \frac{n}{n_v} \sum_{i=1}^{n_v} \left|\mathbf{v}_i^{\top} f(\mathcal{L}) \mathbf{v}_i - \mathbf{e}_1^{\top} f(\mathbf{T}^{(i)}) \mathbf{e}_1\right| \\
&\leq \frac{n}{n_v} \sum_{i=1}^{n_v} \epsilon_m = n \epsilon_m,
\end{aligned}
$$

where $\mathbf{T}^{(i)}$ is the tridiagonal matrix obtained with the Lanczos algorithm with starting vector $\mathbf{v}_i$. Thus,

$$
\left|\operatorname{Tr}_{n_v}(f(\mathcal{L})) - \Gamma\right| \leq n \epsilon_m \leq \frac{n\epsilon}{2} f_{\min}(\lambda) \leq \frac{\epsilon}{2} \operatorname{Tr}(f(\mathcal{L})). \tag{10}
$$

Finally, we formulate SLQ as an $(\epsilon, \delta)$ estimator,

$$
\begin{aligned}
1 - \delta &\leq \Pr\left[\left|\operatorname{Tr}(f(\mathcal{L})) - \operatorname{Tr}_{n_v}(f(\mathcal{L}))\right| \leq \frac{\epsilon}{2}\left|\operatorname{Tr}(f(\mathcal{L}))\right|\right] \\
&\leq \Pr\left[\left|\operatorname{Tr}(f(\mathcal{L})) - \operatorname{Tr}_{n_v}(f(\mathcal{L}))\right| + \left|\operatorname{Tr}_{n_v}(f(\mathcal{L})) - \Gamma\right| \leq \frac{\epsilon}{2}\left|\operatorname{Tr}(f(\mathcal{L}))\right| + \frac{\epsilon}{2}\left|\operatorname{Tr}(f(\mathcal{L}))\right|\right] \\
&\leq \Pr\left[\left|\operatorname{Tr}(f(\mathcal{L})) - \Gamma\right| \leq \epsilon \left|\operatorname{Tr}(f(\mathcal{L}))\right|\right].
\end{aligned}
$$

For the normalized Laplacian $\mathcal{L}$,
the minimum eigenvalue is 0 and $f_{\min}(0) = \exp(0) = 1$, hence $\epsilon_{m} \leq \frac{\epsilon}{2}$, and the eigenvalue interval gives $\rho = 0.5$. We can thus derive the number of Lanczos steps $m$ needed to achieve error $\epsilon$,

$$
\epsilon \leq \begin{cases} 20\, e^{-m^{2}/(2.5 t)}, & \sqrt{2t} \leq m \leq t \\ 40\, t^{-1} e^{-0.5 t} \left(\frac{0.5 e t}{m}\right)^{m}, & m \geq t \end{cases} \tag{11}
$$

Figure 8 shows the tightness of the bound for the approximation of the matrix exponential action on the vector $\mathbf{v}$, $\epsilon_{m} = \left\|\exp(-t\mathcal{L})\mathbf{v} - \mathbf{Q}_{m}\exp(-t\mathbf{T}_{m})\mathbf{e}_{1}\right\|$. We can see that for most of the temperatures $t$, very few Lanczos steps $m$ are sufficient, e.g. we can set $m = 10$. However, the error from the Hutchinson estimator dominates the overall error. Figure 9 shows that the error of the trace estimation does not change with $m$ and for $t = 0.1$ is around $10^{-3}$. In the case of a Rademacher $p(\mathbf{v})$, the bound on the number of random samples is $n_v\geq \frac{6}{\epsilon^2}\log(2/\delta)$ (Roosta-Khorasani & Ascher, 2015). Employing 10k vectors results in an error bound of roughly $10^{-2}$. In practice, we observe performance much better than the bound suggests, see Figure 9.

One particular benefit of a small $m$ is that we do not have to worry about the loss of orthogonality in the Lanczos algorithm, which often undermines its convergence. Since we do only a few Lanczos iterations, the rounding errors hardly accumulate, causing little loss of orthogonality between the basis vectors of the Krylov subspace.

![](images/c6ad788bc250b6f06d36a23abc7f93797472de6f5df40eb50e726b3bd8b5eb2d.jpg)
Figure 8: Errors (solid) and error bounds (dotted) for the approximation of the matrix exponential action with varying temperature $t$.
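Combining the Hutchinson estimator with the Lanczos quadrature above gives the full SLQ trace estimate. The sketch below is illustrative only: the cycle-graph test matrix, the dense exact reference, and the default $n_v$ and $m$ values are assumptions for the demo, not the paper's settings.

```python
import numpy as np
from scipy.linalg import eigh, eigh_tridiagonal

def slq_trace_exp(L, t=0.1, m=10, n_v=100, seed=0):
    """SLQ estimate of Tr(exp(-t L)): average e_1^T f(T^(i)) e_1 over n_v Rademacher probes."""
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_v):
        v = rng.choice([-1.0, 1.0], size=n)
        q = v / np.linalg.norm(v)               # unit starting vector
        q_prev, b = np.zeros(n), 0.0
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        for k in range(m):                      # m-step Lanczos on (L, v_i)
            w = L @ q - b * q_prev
            alpha[k] = q @ w
            w -= alpha[k] * q
            b = np.linalg.norm(w)
            if k < m - 1:
                beta[k] = b
            q_prev, q = q, w / max(b, 1e-30)
        lam, U = eigh_tridiagonal(alpha, beta)  # eigendecomposition of T^(i)
        total += (U[0] ** 2 * np.exp(-t * lam)).sum()   # sum_k tau_k^2 f(lambda_k)
    return n * total / n_v

# demo: normalized Laplacian of a cycle graph, with a dense exact reference
n = 64
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = np.eye(n) - A / 2.0
exact = np.exp(-0.1 * eigh(L, eigvals_only=True)).sum()
estimate = slq_trace_exp(L, t=0.1)
```

As the text notes, the Lanczos error is negligible here; the residual discrepancy comes almost entirely from the Hutchinson averaging over $n_v$ probes.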

![](images/fbb6706e2cbe56e7b5b733662e460f2a245fddc3aeffd52270c9712b49a06192.jpg)
Figure 9: Trace estimation errors (solid) and error bounds (dotted) for: (left) the number of Lanczos steps $m$ with a fixed number of random vectors $n_v = 100$; (right) the number of random vectors $n_v$ in the Hutchinson estimator with a fixed number of Lanczos steps $m = 10$. Lines correspond to varying temperatures $t$.

# B VARIANCE REDUCTION

We reduce the variance of the randomized estimator through control variates. The idea is to use a Taylor expansion to substitute a part of the trace estimate with its easily computed precise value,

$$
\begin{aligned}
\operatorname{Tr}\left(\exp(-t\mathcal{L})\right) &= \operatorname{slq}\left[\exp(-t\mathcal{L}) - \left(\mathbf{I} - t\mathcal{L} + \frac{t^{2}\mathcal{L}^{2}}{2}\right)\right] + \operatorname{Tr}\left(\mathbf{I} - t\mathcal{L} + \frac{t^{2}\mathcal{L}^{2}}{2}\right) && (12) \\
&= \operatorname{slq}\left[\exp(-t\mathcal{L}) - \left(\mathbf{I} - t\mathcal{L} + \frac{t^{2}\mathcal{L}^{2}}{2}\right)\right] + n + \operatorname{Tr}(-t\mathcal{L}) + \frac{t^{2}\|\mathcal{L}\|_{F}^{2}}{2} && (13) \\
&= \operatorname{slq}\left[\exp(-t\mathcal{L})\right] + \operatorname{slq}\left[t\mathcal{L}\right] - \operatorname{slq}\left[\frac{t^{2}\mathcal{L}^{2}}{2}\right] - tn + \frac{t^{2}\|\mathcal{L}\|_{F}^{2}}{2}, && (14)
\end{aligned}
$$

where we use the fact that $\|\mathcal{L}\|_F = \sqrt{\operatorname{Tr}(\mathcal{L}^\top\mathcal{L})}$ and that the trace of the normalized Laplacian is equal to $n$.
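The control variate of Eqs. (12)-(14) is easy to emulate with a plain Hutchinson estimator on the residual. This is a toy sketch under stated assumptions: dense matrices, scipy's `expm` for the reference exponential, and a cycle-graph Laplacian; a real implementation would keep everything matrix-free via Lanczos.

```python
import numpy as np
from scipy.linalg import expm

def hutchinson_trace(M, n_v=100, seed=0):
    """Plain Hutchinson: Tr(M) ~ (1/n_v) sum_i u_i^T M u_i with Rademacher probes u_i."""
    rng = np.random.default_rng(seed)
    U = rng.choice([-1.0, 1.0], size=(M.shape[0], n_v))
    return np.einsum('ij,ij->', U, M @ U) / n_v

def trace_exp_control_variate(L, t, n_v=100, seed=0):
    """Eq. (12): stochastic estimate on exp(-tL) - (I - tL + t^2 L^2 / 2), exact part in closed form."""
    n = L.shape[0]
    residual = expm(-t * L) - (np.eye(n) - t * L + (t**2 / 2) * (L @ L))
    # Tr(I - tL + t^2 L^2 / 2) = n - t*Tr(L) + t^2 ||L||_F^2 / 2, with Tr(L) = n
    exact_part = n - t * n + (t**2 / 2) * np.linalg.norm(L, 'fro')**2
    return hutchinson_trace(residual, n_v, seed) + exact_part

# demo: for small t the residual is O(t^3), so the estimator has tiny variance
n = 64
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = np.eye(n) - A / 2.0
exact = np.exp(-0.1 * np.linalg.eigvalsh(L)).sum()
est = trace_exp_control_variate(L, t=0.1)
```

The randomness only touches the small third-order remainder, which is exactly why the variance drops for low temperatures.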
This substitution does reduce the variance of the trace estimate for smaller temperatures $t \leq 1$.

To obtain this advantage over the whole range of $t$, we utilize the following variance reduction form:

$$
\operatorname{Tr}\left(\exp(-t\mathcal{L})\right) = \operatorname{slq}\left[\exp(-t\mathcal{L}) - (\mathbf{I} - \alpha t\mathcal{L})\right] + n(1 - \alpha t), \tag{15}
$$

where an optimal $\alpha$ exists for every $t$, namely $\alpha = 1/\exp(t)$. We can see the variance reduction that comes from this procedure in Figure 12.

# C EXPERIMENTS DISCUSSION

Here we include additional results that did not fit in the main body of the paper.

# C.1 FID AND KID FAIL TO FIND STRUCTURE IN UNALIGNED CORPORA

Figure 11 shows the matrix of distances for FID and KID, aligned and colored in the same way as Figure 2 (a). FID and KID cannot find meaningful structure in the data the way IMD does, as they rely on extrinsic data properties.

# C.2 WORD EMBEDDING EXPERIMENT DETAILS

We use gensim (Řehůřek & Sojka, 2010) to learn word vectors on the latest Wikipedia corpus snapshot in 16 languages: Polish, Russian, Greek, Hungarian, Turkish, Arabic, Hebrew, English, Simple English, Swedish, German, Spanish, Dutch, Portuguese, Vietnamese, and Waray-Waray. We then compute FID, KID and IMD scores on all the pairs, averaging over 100 runs for the heatmaps in Figure 2. For the different-dimensionality experiment, we learn vectors on the English Wikipedia with sizes equal to the powers of 2 from 4 to 512. After that we compute IMD and the covariance error, i.e. the normalized PIP loss, between the pairs of sizes to generate the heatmap in Figure 3.

# C.3 VANILLA GAN ON TORUS

We provide an additional experiment clearly showing a case where IMD is superior to its main competitors, FID and KID. We train two vanilla GANs on the points of a 3D torus.
The bad GAN fails to learn the topology of the dataset it tries to mimic, yet the previous metrics cannot detect this fact. IMD, on the contrary, can tell the difference. Figure 10 shows the points sampled from the GAN, with some of the points inside the hole. The KID and FID confidence intervals overlap for the good and bad GANs, while the IMD scores are clearly distinct from each other.

![](images/abeec5c2847f4b47cfe7918c3129b4c257296c3199824a436272d29020b93e57.jpg)
| metric | good GAN | bad GAN |
| --- | --- | --- |
| FID | 0.00529 ± 0.00070 | 0.00627 ± 0.00076 |
| KID | 0.00172 ± 0.00073 | 0.00259 ± 0.00077 |
| IMD | 9.02059 ± 1.5195 | 14.0732 ± 2.1706 |
Figure 10: Bad GAN produces samples inside the torus hole (red). FID and KID cannot detect such behaviour.

# C.4 NORMALIZATION DETAILS

For the purpose of normalizing IMD, we need to approximate the graph's eigenvalues. Coja-Oghlan (2007) proved that $\lambda_1 \leq 1 - cd^{-1/2} \leq \lambda_2 \leq \lambda_n \leq 1 + cd^{-1/2}$ for the core of the graph, for some constant $c$. We have empirically found that $c = 2$ provides a tight approximation for random graphs. This coincides with the analysis of Chung et al. (2004), who proved that $\lambda_{n} = (1 + o(1))2\bar{d}^{-1/2}$ if $d_{\min}\gg \sqrt{\bar{d}}\log^3 n$, even though in our case $d_{\min} = \bar{d} = k$. We thus estimate the spectrum of a random Erdős-Rényi graph as growing linearly between $\lambda_1 = 1 - 2\bar{d}^{-1/2}$ and $\lambda_{n} = 1 + 2\bar{d}^{-1/2}$, which corresponds to the underlying manifold being two-dimensional (Tsitsulin et al., 2018).

![](images/9bf6ee1d5196a49eeab375d5ab5846febb2002a03b19c647fbc75106b5823f13.jpg)
FID

![](images/9989bd6786530cf5045f5cc8b3c26440dd402697dacea47d2fb97d6e19d9fb59.jpg)
KID
Figure 11: FID and KID are not able to capture language affinity from unaligned word2vec embeddings.

# D EXPERIMENTAL SETTINGS

We train all our models on a single server with an NVIDIA V100 GPU with 16 GB of memory and a $2 \times 20$-core Intel E5-2698 v4 CPU. For the experiment summarized in Table 1 in Section 4.1 we train WGAN and WGAN-GP models on 4 datasets: MNIST, FashionMNIST, CIFAR10 and CelebA, and draw 10k samples, Y, from each of the GANs. We uniformly subsample 10k images from the original datasets, X, and compute the IMD, KID and FID scores between X and Y. We report the mean as well as the $99\%$ confidence interval across 100 runs.

Below we report the architectures, hyperparameters and generated samples of the models used for the experiments.
We train each of the GANs for 200 epochs on MNIST, FashionMNIST and CIFAR-10, and for 50 epochs on the CelebA dataset. For WGAN we use the RMSprop optimizer with a learning rate of $5 \times 10^{-5}$. For WGAN-GP we use the Adam optimizer with a learning rate of $10^{-4}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$.

![](images/5906f7eb61d7b1f126c6ea1eda8af215d662fbf6049c1e3c313f572633bcb1df.jpg)
Figure 12: Variance of the trace estimate.

# E GRAPH EXAMPLE

Figure 13 provides visual evidence that the 5NN graph reflects the underlying manifold structure of the CIFAR-10 dataset. Clusters in the graph exactly correspond to CIFAR-10 classes.

![](images/51047a3b43afde56c571e793af5b01e11ef39f922024da79067a197fd16d1b72.jpg)
Figure 13: CIFAR-10 graph colored with true class labels.

![](images/6c80dfe0192438af8e93da36da733aeaf88024a4f544455b813402a63002f311.jpg)
Figure 14: MNIST samples (left: WGAN, right: WGAN-GP)

![](images/1563f5c1a5301cc2f78d8c9b4176968ba5f99e9cca7b830711f1ef4f27c8a3b5.jpg)

![](images/16ca38f7313309e8df6f765b7594c4b89456b160ca22157bc395e33a9692e3f4.jpg)
Figure 15: FashionMNIST samples (left: WGAN, right: WGAN-GP)

![](images/8e226e69ef8a43d9bbb47f925fe68d45a9bbc6bb961f17120592bbb677352785.jpg)

![](images/453d4d8339a566ce22ac3b1b3c56f16944b7de62f0e36abd2e09992743ce32e2.jpg)
Figure 16: CIFAR-10 samples (left: WGAN, right: WGAN-GP)

![](images/a0f496f2464f10bef3ad06fe18971c05ce058200496ff619a2288d9dd85d758b.jpg)

![](images/584c82176efebc44a50fed67e0352c64c578c0de0fbf69e2f49ff483751f20b0.jpg)
Figure 17: CelebA samples (left: WGAN, right: WGAN-GP)

![](images/6233db1a8e191588932643bb569dd5da634e58e4fab50c0aece5394a8a6fcb6d.jpg)

# MNIST WGAN

```txt
ConvGenerator(
  (latent_to_features): Sequential(
    (0): Linear(in_features=100, out_features=512, bias=True)
    (1): ReLU()
  )
  (features_to_image): Sequential(
    (0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): ReLU()
    (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (3): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (4): ReLU()
    (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
    (6): ConvTranspose2d(32, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (7): ReLU()
    (8): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True)
    (9): ConvTranspose2d(16, 1, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (10): Sigmoid()
  )
)
ConvDiscriminator(
  (image_to_features): Sequential(
    (0): Conv2d(1, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.2)
    (2): Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (3): LeakyReLU(negative_slope=0.2)
    (4): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (5): LeakyReLU(negative_slope=0.2)
    (6): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (7): Sigmoid()
  )
  (features_to_prob): Sequential(
    (0): Linear(in_features=512, out_features=1, bias=True)
    (1): Sigmoid()
  )
)
```

MNIST WGAN-GP, FMNIST (WGAN, WGAN-GP)
```python
MNISTGenerator(
  (block1): Sequential(
    (0): ConvTranspose2d(256, 128, kernel_size=(5, 5), stride=(1, 1))
    (1): ReLU(inplace)
  )
  (block2): Sequential(
    (0): ConvTranspose2d(128, 64, kernel_size=(5, 5), stride=(1, 1))
    (1): ReLU(inplace)
  )
  (deconv_out): ConvTranspose2d(64, 1, kernel_size=(8, 8), stride=(2, 2))
  (preprocess): Sequential(
    (0): Linear(in_features=128, out_features=4096, bias=True)
    (1): ReLU(inplace)
  )
  (sigmoid): Sigmoid()
)
MNISTDiscriminator(
  (main): Sequential(
    (0): Conv2d(1, 64, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (1): ReLU(inplace)
    (2): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (3): ReLU(inplace)
    (4): Conv2d(128, 256, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (5): ReLU(inplace)
  )
  (output): Linear(in_features=4096, out_features=1, bias=True)
)
```

CIFAR-10 (WGAN, WGAN-GP)
```txt
CIFARGenerator(
  (preprocess): Sequential(
    (0): Linear(in_features=128, out_features=4096, bias=True)
    (1): BatchNorm1d(4096, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (block1): Sequential(
    (0): ConvTranspose2d(256, 128, kernel_size=(2, 2), stride=(2, 2))
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (block2): Sequential(
    (0): ConvTranspose2d(128, 64, kernel_size=(2, 2), stride=(2, 2))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (deconv_out): ConvTranspose2d(64, 3, kernel_size=(2, 2), stride=(2, 2))
  (tanh): Tanh()
)
CIFARDiscriminator(
  (main): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.01)
    (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (3): LeakyReLU(negative_slope=0.01)
    (4): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (5): LeakyReLU(negative_slope=0.01)
  )
  (linear): Linear(in_features=4096, out_features=1, bias=True)
)
```

CelebA (WGAN, WGAN-GP)
```txt
CelebaGenerator(
  (preprocess): Sequential(
    (0): Linear(in_features=128, out_features=8192, bias=True)
    (1): BatchNorm1d(8192, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (block1): Sequential(
    (0): ConvTranspose2d(512, 256, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), output_padding=(1, 1), bias=False)
    (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (block2): Sequential(
    (0): ConvTranspose2d(256, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), output_padding=(1, 1), bias=False)
    (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (block3): Sequential(
    (0): ConvTranspose2d(128, 64, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), output_padding=(1, 1), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (2): ReLU(inplace)
  )
  (deconv_out): ConvTranspose2d(64, 3, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), output_padding=(1, 1))
  (tanh): Tanh()
)
CelebaDiscriminator(
  (main): Sequential(
    (0): Conv2d(3, 64, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (1): LeakyReLU(negative_slope=0.01)
    (2): Conv2d(64, 128, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (3): LeakyReLU(negative_slope=0.01)
    (4): Conv2d(128, 256, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (5): LeakyReLU(negative_slope=0.01)
    (6): Conv2d(256, 512, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2))
    (7): LeakyReLU(negative_slope=0.01)
    (8): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1))
  )
)
```
# THE VARIATIONAL BANDWIDTH BOTTLENECK: STOCHASTIC EVALUATION ON AN INFORMATION BUDGET

Anirudh Goyal¹, Yoshua Bengio¹, Matthew Botvinick², Sergey Levine³

# ABSTRACT

In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information) and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which, for each example, estimates the value of the privileged information before seeing it, i.e., based only on the standard input, and then stochastically decides whether to access the privileged input or not.
We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.

# 1 INTRODUCTION

A model that generalizes effectively should be able to pick up on relevant cues in the input while ignoring irrelevant distractors. For example, if one wants to cross the street, one should only pay attention to the positions and velocities of the cars, disregarding their color. The information bottleneck (Tishby et al., 2000) formalizes this in terms of minimizing the mutual information between the bottleneck representation layer and the input, while maximizing its mutual information with the correct output. This type of input compression can improve generalization (Tishby et al., 2000), and has recently been extended to deep parametric models such as neural networks, where it has been shown to improve generalization (Achille & Soatto, 2016; Alemi et al., 2016).

The information bottleneck is generally intractable, but can be approximated using variational inference (Alemi et al., 2016). This variational approach parameterizes the information bottleneck model using a neural network (i.e., an encoder). While the variational bound makes it feasible to train (approximate) information bottleneck layers with deep neural networks, the encoder in these networks – the layer that predicts the bottleneck variable distribution conditioned on the input – must still process the full input before it is compressed and irrelevant information is removed. The encoder itself can therefore fail to generalize, and although the information bottleneck minimizes mutual information with the input on the training data, it might not compress successfully on new inputs.
To address this issue, we propose to divide our input into two categories, standard input and privileged input, and then we aim to design a bottleneck that does not need to access the privileged input before deciding how much information about the input is necessary. The intuition behind not accessing the privileged input is twofold: (a) we might want to avoid accessing the privileged input because we want to generalize with respect to it (and therefore compress it); (b) we might actually prefer not to access it, as this input could be costly to obtain.

The objective is to minimize the conditional mutual information between the bottleneck layer and the privileged input, given the standard input. This problem statement is narrower than the standard information bottleneck, but encompasses many practical use cases. For example, in reinforcement learning, which is the primary subject of our experiments, the agent can be augmented with some privileged information in the form of a model-based planner, or information which is the result of communication with another agent. This "additional" information can be seen as a privileged input because it requires the agent to do something extra to obtain it.

Our work provides the following contributions. First, we propose a variational bandwidth bottleneck (VBB) that does not look at the privileged input before deciding whether to use it. At a high level, the network is trained to first examine the standard input, and then stochastically decide whether to access the privileged input. Second, we illustrate several applications of this approach to reinforcement learning, in order to construct agents that can stochastically determine when to evaluate costly model-based computations, when to communicate with another agent, and when to access memory. We experimentally show that the proposed model produces better generalization, as it learns when to use (or not use) the privileged input.
For example, in the case of maze navigation, the agent learns to access information about the goal location only near natural bottlenecks, such as doorways.

# 2 PROBLEM FORMULATION

We aim to address the generalization issue described in the introduction for an important special case of the variational information bottleneck, which we refer to as the conditional bottleneck. The conditional bottleneck has two inputs, a standard input and a privileged input, represented by the random variables $\mathbf{S}$ and $\mathbf{G}$, respectively. Hence, $\mathbf{S},\mathbf{G},\mathbf{Y}$ are three random variables with unknown joint distribution $\mathbf{p}_{\mathrm{dist}}(\mathbf{S},\mathbf{G},\mathbf{Y})$.

The information bottleneck provides us with a mechanism to determine the correct output while accessing the minimal possible amount of information about the privileged input $\mathbf{G}$. In particular, we formulate a conditional variant of the information bottleneck that minimizes the mutual information $\mathbf{I}(\mathbf{Z};\mathbf{G}|\mathbf{S})$ between the bottleneck layer and the privileged input, given the standard input, while avoiding unnecessary access to the privileged input $\mathbf{G}$. The proposed model consists of two networks (see Fig. 1): the encoder network takes in the privileged input $\mathbf{G}$ as well as the standard input $\mathbf{S}$ and outputs a distribution over the latent variable $\mathbf{Z}$, such that $\mathbf{z} \sim \mathbf{p}(\mathbf{Z}|\mathbf{G},\mathbf{S})$. The decoder network $\mathbf{p}_{\mathrm{dec}}(\mathbf{Y}|\mathbf{Z},\mathbf{S})$ takes the standard input $\mathbf{S}$ and the compressed representation $\mathbf{Z}$ and outputs the distribution over the target variable $\mathbf{Y}$.

# 3 VARIATIONAL BOTTLENECK ON STANDARD INPUT AND PRIVILEGED INPUT

The information bottleneck (IB) objective (Tishby et al., 2000) is formulated as the maximization of $\mathbf{I}(\mathbf{Z};\mathbf{Y}) - \beta \mathbf{I}(\mathbf{Z};\mathbf{X})$, where $\mathbf{X}$ refers to the input signal, $\mathbf{Y}$ refers to the target signal, $\mathbf{Z}$ refers to the compressed representation of $\mathbf{X}$, and $\beta$ controls the trade-off between compression and prediction. The IB has its roots in channel coding, where the compression term $\mathbf{I}(\mathbf{Z};\mathbf{X})$ represents the capacity of the communication channel between $\mathbf{Z}$ and $\mathbf{X}$. Assuming a prior distribution $\mathbf{r}(\mathbf{Z})$ over the random variable $\mathbf{Z}$, constraining the channel capacity corresponds to limiting the information by which the posterior $\mathbf{p}(\mathbf{Z}|\mathbf{X})$ is permitted to differ from the prior $\mathbf{r}(\mathbf{Z})$. This difference can be measured using the Kullback-Leibler (KL) divergence, such that $\mathbf{D}_{\mathrm{KL}}(\mathbf{p}(\mathbf{Z}|\mathbf{X})\| \mathbf{r}(\mathbf{Z}))$ plays the role of the channel capacity.

Now, we write the equations for the variational information bottleneck where the bottleneck is learnt on both the standard input $\mathbf{S}$ and a privileged input $\mathbf{G}$. The Data Processing Inequality (DPI) (Cover & Thomas, 2006) for a Markov chain $\mathbf{x} \to \mathbf{z} \to \mathbf{y}$ ensures that $\mathbf{I}(\mathbf{x};\mathbf{z}) \geq \mathbf{I}(\mathbf{x};\mathbf{y})$. Hence, for a bottleneck whose input comprises both the standard input and the privileged input, we have $\mathbf{I}(\mathbf{Z};\mathbf{G}|\mathbf{S}) \geq \mathbf{I}(\mathbf{Y};\mathbf{G}|\mathbf{S})$. To obtain an upper bound on $\mathbf{I}(\mathbf{Z};\mathbf{G}|\mathbf{S})$, we must first obtain an
upper bound on $\mathbf{I}(\mathbf{Z};\mathbf{G}|\mathbf{S} = \mathbf{s})$, and then average over $\mathbf{p}(\mathbf{s})$. We get the following result (we refer the reader to the section on the conditional bottleneck in the supplementary material for the full derivation):

$$
\mathbf{I}(\mathbf{Z}; \mathbf{G} | \mathbf{S}) \leq \sum_{s} \mathbf{p}(s) \sum_{g} \mathbf{p}(g) \mathbf{D}_{\mathrm{KL}}\left(\mathbf{p}(\mathbf{Z} | s, g) \| \mathbf{r}(\mathbf{Z})\right) \tag{1}
$$

# 4 THE VARIATIONAL BANDWIDTH BOTTLENECK

We now introduce our proposed method, the variational bandwidth bottleneck (VBB). The goal of the variational bandwidth bottleneck is to avoid accessing the privileged input $\mathbf{G}$ if it is not required to make an informed decision about the output $\mathbf{Y}$. This means that the decision about whether or not to access $\mathbf{G}$ must be made only on the basis of the standard input $\mathbf{S}$. The standard input is used to determine a channel capacity, $\mathbf{d}_{\mathrm{cap}}$, which controls how much information about $\mathbf{G}$ is available to compute $\mathbf{Z}$.

If $\mathbf{d}_{\mathrm{cap}}$ denotes the channel capacity, one way to satisfy this channel capacity is to access the input losslessly with probability $\mathbf{d}_{\mathrm{cap}}$, and otherwise send no information about the input at all. In this communication strategy, we have $\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G}) = \delta(\mathbf{f}_{\mathrm{enc}}(\mathbf{S},\mathbf{G}))$ if we choose to access the privileged input (with probability $\mathbf{d}_{\mathrm{cap}}$), where $\mathbf{f}_{\mathrm{enc}}(\mathbf{S}, \mathbf{G})$ is a deterministic encoder and $\delta$ denotes the Dirac delta function.
The full posterior distribution $\mathbf{p}(\mathbf{Z}|\mathbf{S}, \mathbf{G})$ over the compressed representation can be written as a weighted mixture of (a) (deterministically) accessing the privileged input and standard input and (b) sampling from the prior (when channel capacity is low), such that $\mathbf{z}$ is sampled using

![](images/2e911be644de16e43e6d96edaaa7fd2f73cbc04ee068c6e6dbe7c31e53d601c1.jpg)
Figure 1: The variational bandwidth bottleneck: Based on the standard input $S$ , the channel capacity network determines the capacity of the bottleneck $Z$ . The channel capacity then determines the probability of accessing the privileged input. In the event that the privileged input is not accessed, no part of the model actually reads its value.

$$
\mathbf {z} \sim d _ {\text {c a p}} * \left(\delta \left(\mathbf {f} _ {\mathbf {e n c}} (\mathbf {S}, \mathbf {G})\right)\right) + \left(1 - d _ {\text {c a p}}\right) * \mathbf {r} (\mathbf {z}). \tag {2}
$$

This modified distribution $\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})$ allows us to dynamically adjust how much information about $\mathbf{G}$ is transmitted through $\mathbf{Z}$ . As shown in Figure 1, if $\mathbf{d}_{\mathrm{cap}}$ is set to zero, $\mathbf{Z}$ is simply sampled from the prior and contains no information about $\mathbf{G}$ . If it is set to one, the privileged information in $\mathbf{G}$ is deterministically transmitted. The amount of information about $\mathbf{G}$ that is transmitted is therefore determined by $\mathbf{d}_{\mathrm{cap}}$ , which will depend only on the standard input $\mathbf{S}$ .

This means that the model must decide how much information about the privileged input is required before accessing it. Optimizing the information bottleneck objective with this type of bottleneck requires computing gradients through the term $\mathbf{D}_{\mathrm{KL}}(\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})\| \mathbf{r}(\mathbf{Z}))$ (as in Eq.
1), where $\mathbf{z}\sim \mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})$ is sampled as in Eq. 2. The non-differentiable binary event, whose probability is represented by $\mathbf{d}_{\mathrm{cap}}$ , precludes us from differentiating through the channel capacity directly. In the next sections, we will first show that this mixture can be used within a variational approximation to the information bottleneck, and then describe a practical approximation that allows us to train the model with standard backpropagation.

# 4.1 TRACTABLE EVALUATION OF CHANNEL CAPACITY

In this section, we show how we can evaluate the channel capacity in a tractable way. We learn a deterministic function $\mathbf{B}(\mathbf{S})$ of the standard input $\mathbf{S}$ which determines the channel capacity. This function outputs a scalar value $\mathbf{d}_{\mathrm{cap}} \in (0,1)$ , which is treated as the probability of accessing the information about the privileged input. This deterministic function $\mathbf{B}(\mathbf{S})$ is parameterized as a neural network. We then access the privileged input with probability $\mathbf{d}_{\mathrm{cap}} = \mathbf{B}(\mathbf{S})$ . Hence, the resulting distribution over $\mathbf{Z}$ is a weighted mixture of accessing the privileged input $\mathbf{f}_{\mathrm{enc}}(\mathbf{S},\mathbf{G})$ with probability $\mathbf{d}_{\mathrm{cap}}$ and sampling from the prior with probability $\mathbf{1} - \mathbf{d}_{\mathrm{cap}}$ . At inference time, using $\mathbf{d}_{\mathrm{cap}}$ , we sample from the Bernoulli distribution $\mathbf{b} \sim \mathbf{Bernoulli}(\mathbf{d}_{\mathrm{cap}})$ to decide whether to access the privileged input or not.

# 4.2 OPTIMIZATION OF THE KL OBJECTIVE

Here, we derive the KL divergence objective that allows for tractable optimization of $\mathbf{D}_{\mathrm{KL}}(\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})\| \mathbf{r}(\mathbf{Z}))$ (as in Eq. 1, 2).
Proposition 1 Given the standard input $s$ , privileged input $g$ , bottleneck variable $z$ , and a deterministic encoder $f_{enc}(s, g)$ , we can express the $D_{\mathrm{KL}}$ between the weighted mixture and the prior as

$$
D _ {\mathrm {K L}} (p (z | s, g) \| r (z)) = - d _ {c a p} \log d _ {c a p} + (1 - d _ {c a p}) \left[ \log p (f (s, g)) - \log \left(d _ {c a p} * p (f (s, g)) + (1 - d _ {c a p})\right) \right] \tag {3}
$$

The proof is given in the Appendix, section B. This equation is fully differentiable with respect to the parameters of $\mathbf{f}(\mathbf{s},\mathbf{g})$ and $\mathbf{B}(\mathbf{s}) = \mathbf{d}_{\mathbf{cap}}$ , making it feasible to use standard gradient-based optimizers.

Summary: As in Eq. 2, we approximate $\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})$ as a weighted mixture of $\mathbf{f}_{\mathrm{enc}}(\mathbf{S},\mathbf{G})$ and the normal prior, such that $\mathbf{z}\sim \mathbf{d}_{\mathrm{cap}}*(\mathbf{f}_{\mathrm{enc}}(\mathbf{S},\mathbf{G})) + (\mathbf{1} - \mathbf{d}_{\mathrm{cap}})*\mathcal{N}(\mathbf{0},\mathbf{1})$ . Hence, the quantity $\mathbf{D}_{\mathrm{KL}}(\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})||\mathbf{r}(\mathbf{Z}))$ can be seen as a bound on the information bottleneck objective. When we access the privileged input $\mathbf{G}$ , we pay a cost equal to $\mathbf{I}(\mathbf{Z};\mathbf{G}|\mathbf{S})$ , which is bounded by $\mathbf{D}_{\mathrm{KL}}(\mathbf{p}(\mathbf{Z}|\mathbf{S},\mathbf{G})||\mathbf{r}(\mathbf{Z}))$ as in Eq. 1. Hence, optimizing this objective causes the model to avoid accessing the privileged input when it is not necessary.

# 5 VARIATIONAL BANDWIDTH BOTTLENECK WITH RL

In order to show how the proposed model can be implemented, we consider a sequential decision making setting, though our variational bandwidth bottleneck could also be applied to other learning problems.
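For reference, the closed form in Proposition 1 above is cheap to evaluate. The sketch below (ours) is a direct transcription of Eq. 3, with `p_f` denoting the prior density $p(f(s,g))$ at the encoded point; both terms are smooth in $d_{cap}$ on the interior of $(0,1)$, so gradients can flow into $\mathbf{B}(\mathbf{s})$:

```python
import numpy as np

def vbb_kl(d_cap, p_f):
    """Eq. 3: KL between the gated mixture p(z|s,g) and the prior r(z).

    d_cap : channel capacity in (0, 1], the output of B(s).
    p_f   : prior density evaluated at the encoding f(s, g).
    """
    mix = d_cap * p_f + (1.0 - d_cap)
    return -d_cap * np.log(d_cap) + (1.0 - d_cap) * (np.log(p_f) - np.log(mix))

# Both terms vanish as d_cap -> 1, and the expression stays finite and
# differentiable for d_cap in the interior of (0, 1):
assert abs(vbb_kl(1.0, 0.3)) < 1e-12
assert np.isfinite(vbb_kl(0.5, 0.3))
```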
In reinforcement learning, the problem of sequential decision making is cast within the framework of MDPs (Sutton et al., 1998). Our proposed method depends on two sources of input, standard input and privileged input. In reinforcement learning, privileged inputs could be the result of performing any upstream computation, such as running model based planning. They could also be information from the environment, such as the goal, or the result of active perception. In all these settings, the agent must decide whether to access the privileged input or not. If the agent decides to access the privileged input, then the agent pays an "information cost". The objective is to maximize the expected reward and reduce the cost associated with accessing privileged input, such that across all states on average, the information cost of using the privileged information is minimal.

We parameterize the agent's policy $\pi_{\theta}(\mathbf{A}|\mathbf{S},\mathbf{G})$ using an encoder $\mathbf{p}_{\mathrm{enc}}(\mathbf{Z}|\mathbf{S},\mathbf{G})$ and a decoder $\mathbf{p}_{\mathrm{dec}}(\mathbf{A}|\mathbf{S},\mathbf{Z})$ , parameterized as neural networks. Here, the channel capacity network $\mathbf{B}(\mathbf{S})$ takes the standard input and determines the channel capacity, based on which we decide whether to access the privileged input, as in Section 4.1, before outputting a distribution over the actions. That is, $\mathbf{Y}$ is $\mathbf{A}$ , and $\pi_{\theta}(\mathbf{A}|\mathbf{S},\mathbf{G}) = \sum_{\mathbf{z}}\mathbf{p}_{\mathrm{enc}}(\mathbf{z}|\mathbf{S},\mathbf{G})\mathbf{p}_{\mathrm{dec}}(\mathbf{A}|\mathbf{S},\mathbf{z})$ .
This would correspond to minimizing $\mathbf{I}(\mathbf{A};\mathbf{G}|\mathbf{S})$ , resulting in the objective

$$
J (\theta) \equiv \mathbb {E} _ {\pi_ {\theta}} [ r ] - \beta \mathbf {I} (\mathbf {A}; \mathbf {G} \mid \mathbf {S}) \geq \mathbb {E} _ {\pi_ {\theta}} [ r ] - \beta \mathbf {I} (\mathbf {Z}; \mathbf {G} \mid \mathbf {S}), \tag {4}
$$

where $\mathbb{E}_{\pi_{\theta}}$ denotes an expectation over trajectories generated by the agent's policy, and the inequality follows from Eq. 5 in the Appendix. We can optimize this objective with standard optimization methods, such as stochastic gradient descent with backpropagation.

# 6 RELATED WORK

A number of prior works have studied information-theoretic regularization in RL. For instance, van Dijk & Polani (2011) use information theoretic measures to define relevant goal-information, which could then be used to find subgoals. Our work is related in that our proposed method could be used to find relevant goal information, but without accessing the goal first. Information theoretic measures have also been used for exploration (Still & Precup, 2012; Mohamed & Rezende, 2015; Houthooft et al., 2016; Gregor et al., 2016). More recently, Goyal et al. (2019) proposed InfoBot, where "decision" states are identified by training a goal conditioned policy with an information bottleneck. In InfoBot, the goal conditioned policy always accesses the goal information, while the proposed method accesses it only conditionally. The VBB is also related to work on conditional computation. Conditional computation aims to reduce computation costs by activating only a part of the entire network for each example (Bengio et al., 2013). Our work is related in the sense that we activate the entire network, but only conditionally access the privileged input.

Another point of comparison for our work is the research on attention models (Bahdanau et al., 2014; Mnih et al., 2014; Xu et al., 2015).
These models typically learn a policy that allows them to selectively attend to parts of their input. However, these models still need to access the entire input in order to decide where to attend. Our method dynamically decides whether to access privileged information or not. As shown in our experiments, our method performs better than the attention method of Mnih et al. (2014).

Recently, many models have been shown to be effective at learning communication in multi-agent reinforcement learning (Foerster et al., 2016; Sukhbaatar et al., 2016). Sukhbaatar et al. (2016) learn a deep neural network that maps the inputs of all the agents to their respective actions. In this particular architecture, each agent sends its state as the communication message to other agents. Thus, when each agent takes a decision, it takes information from all the other agents. In our proposed method, each agent communicates with other agents only when it is necessary.

Our work is also related to work in behavioural research that deals with two modes of decision making (Dic, 1985; Kahneman, 2003; Sloman, 1996; Botvinick & Braver, 2015; Shenhav et al., 2017): an automatic system that relies on habits, and a controlled system that uses extra information for decision making. These systems embody different accuracy and demand trade-offs. The habit based system (or the default system) has low computation cost but is often less accurate, whereas the controlled system (which uses some external information) achieves greater accuracy but is often more costly. The proposed model likewise has two modes of input processing: when the channel capacity is low, the agent uses only its standard input, and when the channel capacity is high, the agent uses both the standard and the privileged input. The habit based system is analogous to using only the standard input, while the controlled system is analogous to accessing the more costly privileged input.
# 7 EXPERIMENTAL EVALUATION

In this section, we evaluate our proposed method and study the following questions:

Better generalization? Does the proposed method learn an effective bottleneck that generalizes better on test distributions, as compared to the standard conditional variational information bottleneck?

Learn when to access privileged input? Does the proposed method learn when to access the privileged input dynamically, minimizing unnecessary access?

# 7.1 BASELINES

We compare the proposed method to the following methods and baselines:

Conditional Variational Information Bottleneck (VIB): The agent always accesses the privileged input, using a VIB conditioned on both the standard and the privileged input, as in InfoBot (Goyal et al., 2019).

Deterministically Accessing Privileged Input (UVFA): The agent deterministically accesses both the state and the privileged input, as in UVFA (Schaul et al., 2015), which has been shown to improve generalization in RL problems.

Accessing Information at a Cost (AIC): We compare the proposed method to a simpler reinforcement-learning baseline, where accessing privileged information is formalized as one of the available actions. This action reveals the privileged information, at the cost of a small negative reward. This baseline evaluates whether the explicit VBB formulation provides a benefit over a more conventional approach, where the MDP itself is reformulated to account for the cost of information.

Randomly Accessing Goal (RAG): We compare the proposed method to the scenario where the privileged input is accessed at random (e.g., $50\%$ of the time). This baseline evaluates whether the VBB is selecting when to access the goal in an intentional and intelligent way.

# 7.2 DECIDING WHEN TO RUN AN EXPENSIVE MODEL BASED PLANNER

Model-based planning can be computationally expensive, but beneficial in temporally extended decision making domains.
In this setting, we evaluate whether the VBB can dynamically choose to invoke the planner as infrequently as possible, while still attaining good performance. Planning with a learned dynamics model is straightforward, but it is not cheap, as it involves running the planner at every step. So, here we ask whether the agent can decide, based on the standard input alone, when to access the privileged input (the output obtained by running the model based planner).

Experimental Setup: We consider a maze world as shown in Figure 2(a). The agent is represented by a blue dot, and has to reach the goal (represented by a green dot). The agent has access to a dynamics model of the environment (which is pretrained and represented using a parameterized neural network). In this task, the agent only gets a partial view of its surroundings, i.e., the agent observes a small number of squares in front of it. The agent has to reach the goal position from the start position, and can use the pretrained dynamics model to sample multiple plausible trajectories. The output of the dynamics model is fed as a conditional input to the agent's policy (similar to Racanière et al. (2017)); thus the agent can use this dynamics model to predict possible futures, and then make an informed decision based on its current state as well as the result of the prediction from the dynamics model.

In this setup, the current state of the agent (i.e. the egocentric visual observation) acts as the standard input $S$ , and the result of running the planner acts as the privileged input $G$ . In order to avoid running the model based planner unnecessarily, the agent needs to decide when to access the more costly planner.

Results: Here, we analyze when the agent accesses the output of the planner.
We find that most of the time, the agent accesses the privileged information (the output of the model based planner) near the junctions, as shown in Table 1.

![](images/757df18bc90a5cd584784d2d3a3890bb2c1e71b90dc5f4fd3733bc24ab3b2a34.jpg)
(a) Maze World
Figure 2: The figure on the left shows the environment, where the agent needs to go from the blue dot to the green dot.
| Expensive Inference algorithm | % of times |
| --- | --- |
| Near the junction | 72% ± 5% |
| In the Hallway | 28% ± 4% |
+ +Table 1: Running Expensive Model Based Planner + +# 7.3 BETTER GENERALIZATION IN GOAL DRIVEN NAVIGATION + +The goal of this experiment is to show that, by selectively choosing when to access the privileged input, the agent can generalize better with respect to this input. We consider an agent navigating through a maze comprising sequences of rooms separated by doors, as shown in Figure 7. We use a partially observed formulation of the task, where the agent only observes a small number of squares ahead of it. These tasks are difficult to solve with standard RL algorithms, not only due to the partial observability of the environment but also the sparsity of the reward, since the agent receives a reward only upon reaching the goal (Chevalier-Boisvert et al., 2018). The low probability of reaching the goal randomly further exacerbates these issues. The privileged input in this case corresponds to the + +agent's relative distance to the goal $G$ . At junctions, the agent needs to know where the goal is so that it can make the right turn. While in a particular room, the agent doesn't need much information about the goal. Hence, the agent needs to learn to access goal information when it is near a door, where it is most valuable. The current visual inputs act as a standard input $S$ , which is used to compute channel capacity $d_{cap}$ . + +RoomNXSY + +
| Train | RoomN6S6 | RoomN12S10 |
| --- | --- | --- |
| RoomN2S4 (UVFA) | 66% ± 3% | 49% ± 3% |
| RoomN2S4 (InfoBot) | 72% ± 2% | 55% ± 3% |
| RoomN2S4 (RAG) | 60% ± 5% | 41% ± 3% |
| RoomN2S4 (AIC) | 57% ± 10% | 43% ± 5% |
| RoomN2S4 (VBB) | 82% ± 4% | 60% ± 2% |
+ +(a) +FindObjSY + +
| Train | FindObjS7 | FindObjS10 |
| --- | --- | --- |
| FindObjS5 (UVFA) | 40% ± 2% | 24% ± 3% |
| FindObjS5 (InfoBot) | 46% ± 4% | 22% ± 3% |
| FindObjS5 (RAG) | 38% ± 3% | 12% ± 4% |
| FindObjS5 (AIC) | 39% ± 2% | 16% ± 4% |
| FindObjS5 (VBB) | 64% ± 3% | 52% ± 2% |
Experimental setup: To investigate if agents can generalize by selectively deciding when to access the goal information, we compare our method to InfoBot (Goyal et al., 2019) (a conditional variant of the VIB). We use different mazes for training, validation, and testing. We evaluate generalization to an unseen distribution of tasks (i.e., more rooms than were seen during training). We experiment on both RoomNXSY ( $X$ rooms of at most size $Y$ ; for more details, refer to Appendix G) and the FindObjSY environment. For RoomNXSY, we train on RoomN2S4 (2 rooms of at most size 4), and evaluate on RoomN6S6 (6 rooms of at most size 6) and RoomN12S10 (12 rooms of at most size 10). We also evaluate on the FindObjSY environment, which consists of 9 connected rooms of size $(Y-2) \times (Y-2)$ arranged in a grid. For FindObjSY, we train on FindObjS5, and evaluate on FindObjS7 and FindObjS10.

Results: Tables 2a and 2b compare an agent trained with the proposed method to a goal conditioned baseline (UVFA) (Schaul et al., 2015), a conditional variant of the VIB (Goyal et al., 2019), as well as to the baseline where accessing goal information is formulated as one of the actions (AIC). We also investigate how many times the agent accesses the goal information. We first train the agent on MultiRoomN2S4, and then evaluate this policy on MultiRoomN12S10, sampling 500 trajectories in the MultiRoomN12S10 env. Ideally, if the agent has learned when to access goal information (i.e., near the doorways), the agent should only access the goal information when it is near a door. We take sample rollouts from the pretrained policy in this new environment and check whether the agent is near a junction point (or doorway) when it accesses the goal information.
Table 3 quantitatively compares the proposed method with + +![](images/ffbbf61d4e7aacd7085159a9ec60096688f595ea56b25cd7c402aefe59c063b5.jpg) +(a) FindObjS7 + +![](images/d648d4a6cc5fa67f1f4cdd1e44321b0a941e3444b1f033cf9fbbcccf50de743a.jpg) +(b) FindObjS10 +Figure 3: Partially Observable FindObjSX environments — The agent is placed in the central room. An object is placed in one of the rooms and the agent must navigate to the object in a randomly chosen outer room to complete the mission. The agent again receives an egocentric observation (7 x 7 pixels), and the difficulty of the task increases with $X$ . For more details refer to supplementary material. + +(b) +Table 2: Generalization of the agent to larger grids in RoomNXSY envs and FindObj envs. Success of an agent is measured by the fraction of episodes where the agent was able to navigate to the goal in 500 steps. Results are averaged over 500 examples, and 5 different random seeds. + +
| Method | Percentage of times |
| --- | --- |
| VBB | 76% ± 6% |
| InfoBot (Goyal et al., 2019) | 60% ± 3% |
| AIC | 62% ± 6% |
Table 3: Goal Driven Navigation - Percentage of time steps on which each method accesses the goal information when the agent is near a junction point (or branching point) in the maze. We show that the proposed method learns to access the privileged input (in this case, the goal) only when necessary.

different baselines, showing that the proposed method indeed learns to generalize with respect to the privileged input (i.e., the goal).

# 7.4 LEARNING WHEN TO COMMUNICATE FOR MULTIAGENT COMMUNICATION

Next, we consider multiagent communication, where in order to solve a task, agents must communicate with other agents. Here we show that selectively deciding when to communicate with another agent can result in better learning.

Experimental setup: We use the setup proposed by Mordatch & Abbeel (2017). The environment consists of N agents and M landmarks. Both the agents and landmarks exhibit different characteristics, such as color and shape type. Agents can act to move in the environment, and can also be affected by interactions with other agents. Aside from taking physical actions, agents communicate with other agents using verbal communication symbols. Each agent has a private goal that is not observed by other agents; the goal is grounded in the real physical environment, and might include moving to a particular location. It could also involve other agents (e.g., requiring a particular agent to move somewhere), and hence communication between agents is required. We consider the cooperative setting, in which the problem is to find a policy that maximizes the expected return for all the agents. In this scenario, the current state of the agent is the standard input $S$ , and the information which might be obtained as a result of communication with other agents is the privileged input $G$ . For more details refer to Appendix D.
| Model | 6 Agents | 10 Agents |
| --- | --- | --- |
| Emergent Communication (Mordatch & Abbeel, 2017) | 4.85 (100%) ± 0.1% | 5.44 (100%) ± 0.2% |
| Randomly Accessing (RAG) | 4.95 (50%) ± 0.2% | 5.65 (50%) ± 0.1% |
| InfoBot (Goyal et al., 2019) | 4.81 (100%) ± 0.2% | 5.32 (100%) ± 0.1% |
| VBB (ours) | 4.72 (23%) ± 0.1% | 5.22 (34%) ± 0.05% |
Tasks: Here we consider two tasks: (a) 6 agents and 6 landmarks, and (b) 10 agents and 10 landmarks. The goal is for the agents to coordinate with each other and reach their respective landmarks. We measure two metrics: (a) the distance of the agent from its destination landmark, and (b) the percentage of times the agent accesses the privileged input (i.e., information from the other agents). Table 4 shows the relative distance as well as the percentage of times agents access information from other agents (in brackets).

Results: Table 4 compares an agent trained with the proposed method to Mordatch & Abbeel (2017) and InfoBot (Goyal et al., 2019). We also study how many times an agent accesses the privileged input. As shown in Table 4 (within brackets), the VBB achieves better results than the other methods, even when accessing the privileged input less than $40\%$ of the time.

# 7.5 INFORMATION CONTENT FOR VBB AND VIB

Table 4: Multiagent communication: The VBB performs better, as compared to the baselines. In the baseline scenario, all of the agents communicate with all the other agents all the time. Averaged over 5 random seeds.
| Task | InfoBot | Bernoulli-REINFORCE | VBB |
| --- | --- | --- | --- |
| Navigation Env | 4.45 (100%) | 5.34 (74%) | 3.92 (20%) |
| Sequential MNIST | 3.56 (100%) | 3.63 (65%) | 3.22 (46%) |
| Model Based RL | 7.12 (100%) | 7.63 (65%) | 6.94 (15%) |
Table 5: The VBB performs better, as compared to the baselines. The VBB transmits a similar number of bits, while accessing privileged information only a fraction of the time (in brackets: % of times the privileged information is accessed). Using REINFORCE to learn the parameter of the Bernoulli does not perform as well as the proposed method.

Channel Capacity: We can quantify the average information transmission through both the VBB and the VIB in bits. The average information transmitted is similar to that of the conventional VIB, while the privileged input is accessed only a fraction of the time (the VIB accesses it $100\%$ of the time). In order to show empirically that the VBB is minimizing information transmission (Eq. 1 in the main paper), we measure the average channel capacity $\mathbf{D}_{\mathrm{KL}}(\mathbf{p}(\mathbf{z}|\mathbf{s},\mathbf{g})\| \mathbf{r}(\mathbf{z}))$ numerically and compare the proposed method with the VIB, which must access the privileged input every time (see Table 5).

# 8 DISCUSSION

We demonstrated how the proposed variational bandwidth bottleneck (VBB) improves generalization over the standard variational information bottleneck, in the case where the input is divided into a standard and a privileged component. Unlike the VIB, the VBB does not actually access the privileged input before deciding how much information about it is needed. Our experiments show that the VBB improves generalization and can achieve similar or better performance while accessing the privileged input less often. Hence, the VBB provides a framework for adaptive computation in deep network models, and applying it to domains where reasoning about access to data and computation matters is an exciting direction for future work. A current limitation of the proposed method is that it assumes independence between the standard input and the privileged input, but we observe that, in practice, assuming independence does not seem to hurt the results.
Future work would be to investigate how we can remove this assumption.

# ACKNOWLEDGEMENTS

The authors acknowledge the important role played by their colleagues at Mila and RAIL throughout the duration of this work. AG is grateful to Alexander Neitz for the code of the environment used for the model based experiments (Fig. 2). AG is also grateful to Rakesh R Menon for pointing out the error and giving very useful feedback. The authors would also like to thank Alex Lamb, Nan Rosemary Ke, Olexa Bilaniuk, Jordan Hoffmann, Nasim Rahaman, Samarth Sinha, Shagun Sodhani, Devansh Arpit, Riashat Islam, Coline Devin, DJ Strouse, Jonathan Binas, Suriya Singh, Hugo Larochelle, Tom Bosc, Gautham Swaminathan, and Salem Lahlou for feedback on the draft. The authors are also grateful to the reviewers of ICML, NeurIPS and ICLR for their feedback. The authors are grateful to NSERC, CIFAR, Google, Samsung, Nuance, IBM, Canada Research Chairs, Canada Graduate Scholarship Program, and Nvidia for funding, and Compute Canada for computing resources. We are very grateful to Google for giving Google Cloud credits used in this project.

# REFERENCES

Actions and habits: the development of behavioural autonomy. Philosophical Transactions of the Royal Society B: Biological Sciences, 308(1135):67-78, 1985. ISSN 0080-4622. doi: 10.1098/rstb.1985.0010. URL http://rstb.royalsocietypublishing.org/content/308/1135/67.
Alessandro Achille and Stefano Soatto. Information dropout: learning optimal representations through noise. CoRR, abs/1611.01353, 2016. URL http://arxiv.org/abs/1611.01353.
Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, abs/1612.00410, 2016. URL http://arxiv.org/abs/1612.00410.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville.
Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Matthew Botvinick and Todd Braver. Motivation and cognitive control: from behavior to neural mechanism. Annual review of psychology, 66, 2015.
Maxime Chevalier-Boisvert and Lucas Willems. Minimalistic gridworld environment for OpenAI Gym. https://github.com/maximecb/gym-minigrid, 2018.

Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: First steps towards grounded language learning with a human in the loop. arXiv preprint arXiv:1810.08272, 2018.
Thomas M. Cover and Joy A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, New York, NY, USA, 2006. ISBN 0471241954.
Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2137-2145, 2016.
Anirudh Goyal, Riashat Islam, Daniel Strouse, Zafarali Ahmed, Matthew Botvinick, Hugo Larochelle, Sergey Levine, and Yoshua Bengio. InfoBot: Transfer and exploration via the information bottleneck. arXiv preprint arXiv:1901.10902, 2019.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. arXiv preprint arXiv:1611.07507, 2016.
Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. VIME: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109-1117, 2016.
Michael Janner, Karthik Narasimhan, and Regina Barzilay. Representation learning for grounded spatial reasoning. Transactions of the Association of Computational Linguistics, 6:49-61, 2018.
Daniel Kahneman.
Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93(5):1449-1475, December 2003. doi: 10.1257/000282803322655392. URL http://www.aeaweb.org/articles?id=10.1257/000282803322655392. +Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in neural information processing systems, pp. 2204-2212, 2014. +Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in neural information processing systems, pp. 2125-2133, 2015. +Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017. +Sebastien Racanière, Théophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomenech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. Imagination-augmented agents for deep reinforcement learning. In Advances in neural information processing systems, pp. 5690-5701, 2017. +Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International conference on machine learning, pp. 1312-1320, 2015. +Amitai Shenhav, Sebastian Musslick, Falk Lieder, Wouter Kool, Thomas L Griffiths, Jonathan D Cohen, and Matthew M Botvinick. Toward a rational and mechanistic account of mental effort. Annual review of neuroscience, 40:99-124, 2017. +Steven A Sloman. The empirical case for two systems of reasoning. *Psychological bulletin*, 119(1):3, 1996. +Susanne Still and Doina Precup. An information-theoretic approach to curiosity-driven reinforcement learning. Theory in Biosciences, 131(3):139-148, 2012. +Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in neural information processing systems, pp. 2440-2448, 2015. + +Sainbayar Sukhbaatar, Rob Fergus, et al. Learning multiagent communication with backpropagation. 
In Advances in Neural Information Processing Systems, pp. 2244-2252, 2016.
Richard S Sutton, Andrew G Barto, et al. Reinforcement learning: An introduction. MIT press, 1998.
Naftali Tishby, Fernando C. N. Pereira, and William Bialek. The information bottleneck method. CoRR, physics/0004057, 2000. URL http://arxiv.org/abs/physics/0004057.
S. G. van Dijk and D. Polani. Grounding subgoals in information transitions. In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 105-111, April 2011. doi: 10.1109/ADPRL.2011.5967384.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pp. 2048-2057, 2015.

# A CONDITIONAL BOTTLENECK

In this section, we construct our objective function, such that minimizing this objective function minimizes $I(Y;G|S)$ . Recall that the IB objective (Tishby et al., 2000) is formulated as the minimization of $I(Z;X) - \beta I(Z;Y)$ , where $X$ refers to the input, $Y$ refers to the model output, and $Z$ refers to the compressed representation, i.e., the bottleneck. For the proposed method, we construct our objective as follows: we minimize the mutual information between the privileged input and the output given the standard input, $I(Y;G|S)$ , to encode the idea that we should avoid unnecessary access to the privileged input $G$ , and maximize $I(Z;Y)$ . Hence, for the VBB, using the data processing inequality (Cover & Thomas, 2006), this implies that

$$
I (Z; G | S) \geq I (Y; G | S). \tag {5}
$$

To obtain an upper bound on $I(G; Z|S)$ , we must first obtain an upper bound on $I(G; Z|S = s)$ , and then we average over $p(s)$ .
We get the following result:

$$
I (G; Z | S = s) = \sum_ {z, g} p (g | s) \, p (z | s, g) \log \frac {p (z | s , g)}{p (z | s)}. \tag {6}
$$

We assume that the privileged input $G$ and the standard input $S$ are independent of each other, and hence $p(g|s) = p(g)$ . We then get the following upper bound:

$$
I (G; Z | S = s) \leq \sum_ {g} p (g) \sum_ {z} p (z | s, g) \log \frac {p (z | s , g)}{r (z)} = \sum_ {g} p (g) \, D _ {\mathrm {K L}} \left(p (z | s, g) \,\|\, r (z)\right), \tag {7}
$$

where the inequality holds because we replace $p(z|s)$ with the prior $r(z) = p_{prior}(z)$ . We also drop the dependence of the prior on the standard input $s$ . While this loses some generality, recall that the predictive distribution $p(y|s,z)$ is already conditioned on $s$ , so information about $s$ itself does not need to be transmitted through $z$ . Therefore, we have that $D_{\mathrm{KL}}[p(z|s)\,\|\, r(z)]\geq 0\Rightarrow \sum_z p(z|s)\log p(z|s)\geq \sum_z p(z|s)\log r(z)$ . Marginalizing over the standard input therefore gives us

$$
I (Z; G | S) \leq \sum_ {s} p (s) \sum_ {g} p (g) \, D _ {\mathrm {K L}} [\, p (z | s, g) \,\|\, r (z) \,]. \tag {8}
$$

We approximate $p(z|s, g)$ as a weighted mixture of $p_{enc}(z_{enc}|s, g)$ and the standard normal prior, such that $z \sim d_{cap} \cdot p_{enc}(z_{enc}|s, g) + (1 - d_{cap}) \cdot \mathcal{N}(0, 1)$ . Hence, the KL term evaluated on this weighted mixture can be seen as a bound on the information bottleneck objective. Whenever we access the privileged input $G$ , we pay an information cost equal to $I(Z; G|S)$ , which is bounded by $D_{\mathrm{KL}}(p(z|s, g) \,\|\, r(z))$ . The objective is therefore to avoid accessing the privileged input unless necessary, so that on average the information cost of using the privileged input is minimal.
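To make the sampling procedure concrete, the weighted mixture above can be sketched as follows. The Gaussian encoder parameters (`mu_enc`, `sigma_enc`) and the function name are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def sample_bottleneck(mu_enc, sigma_enc, d_cap, rng):
    """Sample z from the weighted mixture
        d_cap * p_enc(z | s, g) + (1 - d_cap) * N(0, 1).
    With probability d_cap we pay the information cost and sample from
    the encoder (privileged) branch; otherwise we fall back on the
    N(0, 1) prior. All names here are illustrative."""
    if rng.random() < d_cap:
        # privileged branch: Gaussian encoder p_enc(z_enc | s, g)
        return mu_enc + sigma_enc * rng.standard_normal(mu_enc.shape)
    # prior branch: no information about (s, g) flows through z
    return rng.standard_normal(mu_enc.shape)

rng = np.random.default_rng(0)
mu, sigma = np.array([2.0, -1.0]), np.array([0.1, 0.1])

# d_cap = 1 always reads the encoder; d_cap = 0 always uses the prior
z_priv = sample_bottleneck(mu, sigma, 1.0, rng)
z_prior = sample_bottleneck(mu, sigma, 0.0, rng)
```

With $d_{cap} = 0$ the bottleneck transmits nothing about $(s, g)$ and the KL cost vanishes, matching the intuition above.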
# B TRACTABLE OPTIMIZATION OF KL OBJECTIVE

Here, we first show how the weighted mixture can be a bound on the information bottleneck objective. Recall,

$$
D _ {\mathrm {K L}} (P \| Q) = \sum_ {x \in \mathcal {X}} P (x) \log \left(\frac {P (x)}{Q (x)}\right). \tag {9}
$$

Hence, $D_{\mathrm{KL}}(p(z|s,g)\|p_{prior}(z))$ , where $p(z|s,g)$ is expressed as a mixture of a Dirac delta and the prior, can be written as

$$
D _ {\mathrm {K L}} (p (z | s, g) \| r (z)) = D _ {\mathrm {K L}} \left(d _ {c a p} \cdot p (z) + \left(1 - d _ {c a p}\right) \cdot \delta (f (s, g)) \,\|\, r (z)\right). \tag {10}
$$

Further expanding the RHS using eq. 9, we get

$$
\begin{aligned} D _ {\mathrm {K L}} (p (z | s, g) \| r (z)) = \; & d _ {c a p} \, \mathbb {E} _ {p (z)} \left[ \log \left( d _ {c a p} \, p (z) + (1 - d _ {c a p}) \, \delta (f (s, g)) \right) - \log p (z) \right] \\ & + (1 - d _ {c a p}) \left[ \log \left( d _ {c a p} \, p (f (s, g)) + (1 - d _ {c a p}) \right) - \log p (f (s, g)) \right]. \end{aligned}
$$

Here, we can assume $\delta(f(s, g))$ to be zero under the prior (as it is a Dirac delta function). The expression then simplifies to:

$$
D _ {\mathrm {K L}} (p (z | s, g) \| r (z)) = d _ {c a p} \log d _ {c a p} + (1 - d _ {c a p}) \left[ \log \left( d _ {c a p} \, p (f (s, g)) + (1 - d _ {c a p}) \right) - \log p (f (s, g)) \right].
$$

Minimizing this expression therefore minimizes $D_{\mathrm{KL}}(p(z|s,g)\| p_{prior}(z))$ , our original objective.

# C ANOTHER METHOD OF CALCULATING CHANNEL CAPACITY

In the main paper we show how to evaluate channel capacity in a tractable way, namely by learning a function $B(S)$ that determines the channel capacity. Here we describe another way; we empirically found that parameterizing the channel capacity network in this manner helps.
In order to represent this function $\mathrm{B}(S)$ , we use an encoder of the form $\mathbf{B}(S) = p(z_{cap}|S)$ such that $z_{cap} \sim \mathcal{N}(z_{cap}|f^{\mu}(S), f^{\sigma}(S))$ , where $S$ refers to the standard input, and $f^{\mu}, f^{\sigma}$ are learned functions (e.g., multi-layer perceptrons) that output $\mu$ and $\sigma$ respectively for the distribution over $z_{cap}$ . Here, $D_{KL}(\mathbf{B}(S)\|\mathcal{N}(0,1))$ refers to the channel capacity of the bottleneck. In order to get a probability out of $\mathbf{B}(S)$ , we convert $\mathbf{B}(S)$ into a scalar $prob \in [0,1]$ that can be treated as the probability of accessing the privileged input:

$$
\operatorname {p r o b} = \operatorname {S i g m o i d} (\operatorname {N o r m a l i z a t i o n} (\mathrm {B} (S))). \tag {11}
$$

We perform this transformation by normalizing $\mathbf{B}(S)$ to the range $[-k, k]$ (in practice, by clamping $\mathbf{B}(S)$ to $[-2, 2]$ ), and then passing the normalized $\mathbf{B}(S)$ through a sigmoid activation function. Treating the output as a probability $prob$ , we access the privileged input with probability $prob$ . Hence, the resulting distribution over $z$ is a weighted mixture of accessing the privileged input $f_{enc}(s, g)$ with probability $prob$ and sampling from the prior with probability $1 - prob$ . Here we assume the prior to be $\mathcal{N}(0, 1)$ , but it can also be learned. At test time, using $prob$ , we can sample from the Bernoulli distribution $b \sim \text{Bernoulli}(prob)$ to decide whether to access the privileged input or not.

# D MULTIAGENT COMMUNICATION

Experimental Setup: We use the setup proposed by Mordatch & Abbeel (2017). The environment consists of N agents and M landmarks. Both the agents and landmarks exhibit different characteristics such as different colors and shape types. Agents can act to move in the environment.
They can also be affected by interactions with other agents. Aside from taking physical actions, agents

![](images/1cfe5b2d45b716df4267ad1c7bb9d533f2dfe7c0339debd866be956cf79b3a78.jpg)
Figure 4: Multiagent Communication: The environment consists of N agents and M landmarks. Both the agents and landmarks exhibit different characteristics such as different colors and shape types. Agents can act to move in the environment. They can also be affected by interactions with other agents. Aside from taking physical actions, agents communicate with other agents using verbal communication symbols $c$ .

communicate with other agents using verbal communication symbols. Each agent has a private goal that is not observed by the other agents. This goal is grounded in the real physical environment: it might include moving to a particular location, and could also involve other agents (e.g., requiring a particular agent to move somewhere), so communication between agents is required.

Each agent performs actions and communicates utterances according to a policy, which is identically instantiated for all of the agents in the environment. This policy determines both the actions and the communication protocol. We assume all agents have identical action and observation spaces and receive the same reward signal. We consider the cooperative setting, in which the problem is to find a policy that maximizes the expected return for all the agents.

# E SPATIAL REASONING

In order to study generalization across a wide variety of environmental conditions and linguistic inputs, Janner et al. (2018) develop an extension of the puddle world reinforcement learning benchmark. States in a $10 \times 10$ grid are first filled with either grass or water cells, such that the grass forms one connected component.
We then populate the grass region with six unique objects which appear only once per map (triangle, star, diamond, circle, heart, and spade) and four non-unique objects (rock, tree, horse, and house) which can appear any number of times on a given map. We followed the same experimental setup and hyperparameters as in Janner et al. (2018).

Here, an agent is rewarded for reaching the location specified by a language instruction, and is allowed to take actions in the world. The goal is to learn a representation of a given instruction that remains useful even if the environment observations are rearranged; hence, we want to learn representations that tie observations from the environment to the language expressions. We consider the Puddle World Navigation map as introduced in Janner et al. (2018). Here, the current state of the agent acts as the standard input, and based on it, the agent decides whether to access the privileged input.

We start by converting the instruction text into a real-valued vector using an LSTM. The model first convolves the map layout to a low-dimensional representation (as opposed to the MLP of the UVFA) and concatenates this to the LSTM's instruction embedding (as opposed to a dot product). These concatenated representations are then input to a two-layer MLP. Generalization over both environment configurations and text instructions requires a model that meets two desiderata. First, it must have a flexible representation of goals, one which can encode both the local structure and global spatial attributes inherent to natural language instructions. Second, it must be compositional, in order to learn a generalizable representation of the language even though each unique instruction will only be observed with a single map during training.
Namely, the learned representation for a given instruction should still be useful even if the objects on a map are rearranged or the layout is changed entirely.

![](images/76ad06169ab95161cd007741036d942f907b8f132e418861329ad0633abff937.jpg)
Figure 5: Spatial Reasoning

![](images/468b2a2980d9a19596be15eb7c150b5765946219ec35eebf543d8aa1c54fb507.jpg)
Figure 6: Puddle world navigation

# F RECURRENT VISUAL ATTENTION - LEARNING BETTER FEATURES

The goal of this experiment is to study whether the proposed method enables learning a dynamic representation of an image which can then be used to accurately classify the image. We follow the setup of the Recurrent Attention Model (RAM) (Mnih et al., 2014). Here, the attention process is modeled as a sequential decision process of a goal-directed agent interacting with the visual image. A recurrent neural network is trained to process the input sequentially, attending to different parts of the image one at a time and combining information from these parts to build up a dynamic representation of the image. The agent incrementally integrates the information obtained from attending to different parts, and uses this integrated information to choose where to attend next. In this setting, the information from attending to a particular part of the image acts as the standard input, and the information integrated over time acts as the privileged input, which is used to select where the model should attend next. The entire process repeats for N steps (for our experiment, $\mathrm{N} = 6$ ). FC denotes a fully connected network with two layers of rectifier units, each containing 256 hidden units.
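The gated glimpse loop described above can be sketched as follows. The clamp-then-sigmoid gate follows the construction in Appendix C, while the feature extractor, state update, and dimensions are toy assumptions for illustration only, not the paper's actual architecture:

```python
import numpy as np

N_GLIMPSES = 6   # number of attended locations, as in the experiment
D = 16           # toy feature dimension (illustrative)

def capacity_gate(b_s, rng):
    """Toy channel-capacity gate: clamp B(S) to [-2, 2], squash with a
    sigmoid, and sample a Bernoulli access decision (Appendix C)."""
    prob = 1.0 / (1.0 + np.exp(-np.clip(b_s, -2.0, 2.0)))
    return rng.random() < prob, prob

def attend(image_feats, rng, b_s=2.0):
    """Sketch of the gated recurrent attention loop. The current glimpse
    is the standard input; the integrated state is the privileged input,
    accessed only when the gate opens."""
    state = np.zeros(D)
    loc = 0
    n_accesses = 0
    for _ in range(N_GLIMPSES):
        g = np.tanh(image_feats[loc])        # standard input at `loc`
        access, _ = capacity_gate(b_s, rng)
        if access:                           # use integrated information
            state = 0.5 * state + 0.5 * g
            n_accesses += 1
        else:                                # fall back on glimpse alone
            state = g
        loc = int(np.argmax(np.abs(state)))  # pick next location
    return state, n_accesses

rng = np.random.default_rng(0)
feats = rng.standard_normal((D, D))
_, accesses = attend(feats, rng, b_s=10.0)   # gate almost always open
```

A large negative $B(S)$ keeps the gate mostly closed, so the agent relies on the current glimpse alone and pays little information cost.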
| Model | MNIST | 60×60 Cluttered MNIST |
|---|---|---|
| FC (2 layers) | 1.69% | 11.63% |
| RAM Model (6 locs) | 1.55% | 4.3% |
| VIB (6 locs) | 1.58% | 4.2% |
| VBB (6 locs) (Ours) | 1.42% | 3.8% |
Table 6: Classification error results (Mnih et al., 2014). Averaged over 3 random seeds.

Quantitative Results: Table 6 shows the classification error for the proposed model, as well as the baseline, the standard RAM model. For both the proposed model and the RAM model, we fix the number of locations to attend to at 6. The proposed method outperforms the standard RAM model.

# G ALGORITHM IMPLEMENTATION DETAILS

We evaluate the proposed framework using Advantage Actor-Critic (A2C) to learn a policy $\pi_{\theta}(a|s,g)$ conditioned on the goal. To evaluate the performance of the proposed method, we use a range of multi-room maze tasks from the gym-minigrid framework (Chevalier-Boisvert & Willems, 2018) and the A2C implementation from (Chevalier-Boisvert & Willems, 2018). For the maze tasks, we used the agent's relative distance to the absolute goal position as the "goal".

For the maze environments, we use A2C with 48 parallel workers. Our actor and critic networks consist of two and three fully connected layers respectively, each with 128 hidden units. The encoder network is also parameterized as a neural network, consisting of 1 fully connected layer. We use RMSProp with an initial learning rate of 0.0007 to train the models, for both InfoBot and the baseline, for a fair comparison. Due to the partially observable nature of the environment, we further use an LSTM to encode the state and summarize past observations.

# H MINIGRID ENVIRONMENTS FOR OPENAI GYM

![](images/619c1f231bd5518aa185b498e5dfbff8b7ced4d4c0f2d6498c4c205e3dc709d4.jpg)
(a) RoomN5S4

![](images/67450e378668411ab8367f0080fad421735e28889dcf4b7293849af74aafb416.jpg)
(b) RoomN6S8
Figure 7: Partially Observable MultiRoomsNXSY environments

The MultiRoom environments used for this research are part of MiniGrid, which is an open source gridworld package.
This package includes a family of reinforcement learning environments compatible with the OpenAI Gym framework. Many of these environments are parameterizable so that the difficulty of tasks can be adjusted (e.g., the size of rooms is often adjustable). + +# H.1 THE WORLD + +In MiniGrid, the world is a grid of size NxN. Each tile in the grid contains exactly zero or one object. The possible object types are wall, door, key, ball, box and goal. Each object has an associated discrete color, which can be one of red, green, blue, purple, yellow and grey. By default, walls are always grey and goal squares are always green. + +# H.2 REWARD FUNCTION + +Rewards are sparse for all MiniGrid environments. In the MultiRoom environment, episodes are terminated with a positive reward when the agent reaches the green goal square. Otherwise, episodes are terminated with zero reward when a time step limit is reached. In the FindObj environment, the agent receives a positive reward if it reaches the object to be found, otherwise zero reward if the time step limit is reached. + +The formula for calculating positive sparse rewards is $1 - 0.9 * \left( \frac{\text{step_count}}{\text{max_steps}} \right)$ . That is, rewards are always between zero and one, and the quicker the agent can successfully complete an episode, the closer to 1 the reward will be. The max_steps parameter is different for each environment, and varies depending on the size of each environment, with larger environments having a higher time step limit. + +![](images/6738dcb67c2894a12cabbb7af9e4bf884a74d58e29830d6b312fa740096fe6c1.jpg) +Figure 8: Copying Task + +# H.3 ACTION SPACE + +There are seven actions in MiniGrid: turn left, turn right, move forward, pick up an object, drop an object, toggle and done. For the purpose of this paper, the pick up, drop and done actions are irrelevant. The agent can use the turn left and turn right action to rotate and face one of 4 possible directions (north, south, east, west). 
The move forward action makes the agent move from its current tile onto the tile in the direction it is currently facing, provided there is nothing on that tile, or that the tile contains an open door. The agent can open doors that are right in front of it by using the toggle action.

# H.4 OBSERVATION SPACE

Observations in MiniGrid are partial and egocentric. By default, the agent sees a square of 7x7 tiles in the direction it is facing. These include the tile the agent is standing on. The agent cannot see through walls or closed doors. The observations are provided as a tensor of shape 7x7x3. However, note that these are not RGB images. Each tile is encoded using 3 integer values: one describing the type of object contained in the cell, one describing its color, and a flag indicating whether doors are open or closed. This compact encoding was chosen for space efficiency and to enable faster training. The fully observable RGB image view of the environments shown in this paper is provided for human viewing.

# H.5 LEVEL GENERATION

The level generation in this task works as follows: (1) Generate the layout of the map (X rooms of different sizes (at most size Y) and a green goal). (2) Add the agent to the map at a random location in the first room. (3) Add the goal at a random location in the last room. MultiRoomNXSY - In this task, the agent gets an egocentric view of its surroundings, consisting of $3 \times 3$ pixels. A neural network parameterized as an MLP is used to process the visual observation.

# I MEMORY ACCESS - DECIDING WHEN TO ACCESS MEMORY

Here, the privileged input involves accessing information from an external memory, as in neural Turing machines (NTMs) (Sukhbaatar et al., 2015; Graves et al., 2014). Reading from external memory is usually an expensive operation, and hence we would like to minimize access to the external memory. For our experiments, we consider external memory in the form of neural Turing machines.
An NTM processes inputs sequentially, much like a normal LSTM, but can additionally allow the network to learn by accessing information from an external memory. In this context, the state of the controller (the NTM's controller which processes the input) becomes the standard input; based on this standard input, we decide the channel capacity, and based on the channel capacity we decide whether to read from external memory or not. In order to test this, we evaluate our approach on the copying task. This task tests whether NTMs can store and recall information from the past. We use the same problem setup as (Graves et al., 2014). As shown in Fig. 8, we found that we can perform slightly better than NTMs while accessing the external memory only $32\%$ of the time.

# J HYPERPARAMETERS

The only hyperparameter we introduce with the variational information bottleneck is $\beta$ . For both the VIB baseline and the proposed method, we evaluated 5 values of $\beta$ : 0.01, 0.09, 0.001, 0.005, 0.009.

# J.1 COMMON PARAMETERS

We use the following parameters for lower-level policies throughout the experiments. Each training iteration consists of 5 environment time steps, and all the networks (value functions, policy, and observation embedding network) are trained at every time step. Every training batch has a size of 64. The value function networks and the embedding network are all neural networks comprising two hidden layers, with 128 ReLU units at each hidden layer.

All the network parameters are updated using the Adam optimizer with learning rate $3 \cdot 10^{-4}$ .

Table 7 lists the common parameters used.
| Parameter | Value |
|---|---|
| learning rate | $3 \cdot 10^{-4}$ |
| batch size | 64 |
| discount | 0.99 |
| entropy coefficient | $10^{-2}$ |
| hidden layers (Q, V, embedding) | 2 |
| hidden units per layer (Q, V, embedding) | 128 |
| bottleneck size | 64 |
| RNN hidden size | 128 |
| $\beta$ | 0.001 / 0.009 / 0.01 / 0.09 |
Table 7: Shared parameters for benchmark tasks

# K ARCHITECTURAL DETAILS

For our work, we kept the architectural details as similar to the baselines as possible.

- Goal Driven Navigation: Our code is based on the open source gridworld package https://github.com/maximecb/gym-minigrid.
- Multiagent Communication: Our code is based on the following open source implementation: https://github.com/bkgoksel/emergent-language.
- Access to External Memory: Our code is based on the following open source implementation of Neural Turing Machines: https://github.com/LOUDInthecloud/pytorch-ntm
- Spatial Navigation: Our code is based on the following open source implementation: https://github.com/JannerM/spatial-reasoning.

The only extra parameters our model introduces are related to the channel capacity network, which is parameterized as a neural network consisting of 2 layers of 128 dimensions each (with ReLU non-linearity). \ No newline at end of file diff --git a/thevariationalbandwidthbottleneckstochasticevaluationonaninformationbudget/images.zip b/thevariationalbandwidthbottleneckstochasticevaluationonaninformationbudget/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fb6b6ffa17facf0499433f0266a33dd03f0c223e --- /dev/null +++ b/thevariationalbandwidthbottleneckstochasticevaluationonaninformationbudget/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9648029816b578a63274126dbef38440ca9dd3629e6c524666994aabfef5b64e +size 416706 diff --git a/thevariationalbandwidthbottleneckstochasticevaluationonaninformationbudget/layout.json b/thevariationalbandwidthbottleneckstochasticevaluationonaninformationbudget/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..422523e0c282d8b97b941c7e6f778b13467718ca --- /dev/null +++ b/thevariationalbandwidthbottleneckstochasticevaluationonaninformationbudget/layout.json @@ -0,0 +1,3 @@ +version
https://git-lfs.github.com/spec/v1 +oid sha256:3edf1080f6d511c8c0b6420cba52bb0c30dd9d09cf56f1c23955b8714a41ca6d +size 552252 diff --git a/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_content_list.json b/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..600ea144c9fb5e7afea18d0396018b469ee0abf8 --- /dev/null +++ b/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d43619f7f5f951c43ced56497404cb37f6c1b443eeb85df13bc2fb38caa88d4 +size 111317 diff --git a/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_model.json b/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_model.json new file mode 100644 index 0000000000000000000000000000000000000000..27a274e1b41dd0ca335db59ce951890a48a2e92e --- /dev/null +++ b/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ca30f824289c80ea6c3abe63b7f5ffad8d0969e546a6bfbe0b535ec34fc4222d +size 131551 diff --git a/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_origin.pdf b/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1d7299022428c253e831c367e3463e292c68e989 --- /dev/null +++ b/thievesonsesamestreetmodelextractionofbertbasedapis/6ef28eb9-4555-4cdd-abef-9e1536b4a876_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f0173835bbc3246f8781f10cbe5a5abe02c9aa9d44791609cf5c9d7eaeccba8 +size 590978 diff --git 
a/thievesonsesamestreetmodelextractionofbertbasedapis/full.md b/thievesonsesamestreetmodelextractionofbertbasedapis/full.md new file mode 100644 index 0000000000000000000000000000000000000000..237f3fec09b234a3d24895dbf026d3206567a0f3 --- /dev/null +++ b/thievesonsesamestreetmodelextractionofbertbasedapis/full.md @@ -0,0 +1,355 @@ +# THIEVES ON SESAME STREET! +MODEL EXTRACTION OF BERT-BASED APIs + +Kalpesh Krishna* + +CICS, UMass Amherst + +kalpesh@cs.umass.edu + +Gaurav Singh Tomar + +Google Research + +gtomar@google.com + +Ankur P. Parikh + +Google Research + +aparikh@google.com + +Nicolas Papernot + +Google Research + +papernot@google.com + +Mohit Iyyer + +CICS, UMass Amherst + +miyyer@cs.umass.edu + +# ABSTRACT + +We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: we show that random sequences of words coupled with task-specific heuristics form effective queries for model extraction on a diverse set of NLP tasks, including natural language inference and question answering. Our work thus highlights an exploit only made feasible by the shift towards transfer learning methods within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim model. Finally, we study two defense strategies against model extraction—membership classification and API watermarking—which while successful against naive adversaries, are ineffective against more sophisticated ones. 
+ +# 1 INTRODUCTION + +Machine learning models represent valuable intellectual property: the process of gathering training data, iterating over model design, and tuning hyperparameters costs considerable money and effort. As such, these models are often only indirectly accessible through web APIs that allow users to query a model but not inspect its parameters. Malicious users might try to sidestep the expensive model development cycle by instead locally reproducing an existing model served by such an API. In these attacks, known as "model stealing" or "model extraction" (Lowd & Meek, 2005; Tramér et al., 2016), the adversary issues a large number of queries and uses the collected (input, output) pairs to train a local copy of the model. Besides theft of intellectual property, extracted models may leak sensitive information about the training data (Tramér et al., 2016) or be used to generate adversarial examples that evade the model served by the API (Papernot et al., 2017). + +With the recent success of contextualized pretrained representations for transfer learning, NLP models created by finetuning ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have become increasingly popular (Gardner et al., 2018). Contextualized pretrained representations boost performance and reduce sample complexity (Yogatama et al., 2019), and typically require only a shallow task-specific network—sometimes just a single layer as in BERT. While these properties are advantageous for representation learning, we hypothesize that they also make model extraction easier. + +In this paper, we demonstrate that NLP models obtained by fine-tuning a pretrained BERT model can be extracted even if the adversary does not have access to any training data used by the API + +![](images/b4f082e442fefb0b694da54b4c57aa66a3f620bf531398b7d86988c7fbfff1b4.jpg) +Figure 1: Overview of our model extraction setup for question answering. 
An attacker first queries a victim BERT model, and then uses its predicted answers to fine-tune their own BERT model. This process works even when passages and questions are random sequences of words as shown here. + +provider. In fact, the adversary does not even need to issue well-formed queries: our experiments show that extraction attacks are possible even with queries consisting of randomly sampled sequences of words coupled with simple task-specific heuristics (Section 3). While extraction performance improves further by leveraging sentences and paragraphs from Wikipedia (Section 4), the fact that random word sequences are sufficient to extract models contrasts with prior work, where large-scale attacks require at minimum that the adversary can access a small amount of semantically-coherent data relevant to the task (Papernot et al., 2017; Correia-Silva et al., 2018; Orekondy et al., 2019a; Pal et al., 2019; Jagielski et al., 2019). These attacks are cheap: our most expensive attack cost around $500, estimated using rates of current API providers. + +In Section 5.1, we perform a fine-grained analysis of the randomly-generated queries. Human studies on the random queries show that despite their effectiveness in extracting good models, they are mostly nonsensical and uninterpretable, although queries closer to the original data distribution work better for extraction. Furthermore, we discover that pretraining on the attacker's side makes model extraction easier (Section 5.2). + +Finally, we study the efficacy of two simple defenses against extraction — membership classification (Section 6.1) and API watermarking (Section 6.2) — and find that while they work well against naive adversaries, they fail against adversaries who adapt to the defense. 
We hope that our work spurs future research into stronger defenses against model extraction and, more generally, on developing a better understanding of why these models and datasets are particularly vulnerable to such attacks. + +# 2 RELATED WORK + +We relate our work to prior efforts on model extraction, most of which have focused on computer vision applications. Because of the way in which we synthesize queries for extracting models, our work also directly relates to zero-shot distillation and studies of rubbish inputs to NLP systems. + +Model extraction attacks have been studied both empirically (Tramèr et al., 2016; Orekondy et al., 2019a; Juuti et al., 2019) and theoretically (Chandrasekaran et al., 2018; Milli et al., 2019), mostly against image classification APIs. These works generally synthesize queries in an active learning setup by searching for inputs that lie close to the victim classifier's decision boundaries. This method does not transfer to text-based systems due to the discrete nature of the input space. $^3$ The only prior work attempting extraction on NLP systems is Pal et al. (2019), who adopt pool-based active learning to select natural sentences from WikiText-2 and extract 1-layer CNNs for tasks expecting + +
single inputs. In contrast, we study a more realistic extraction setting with nonsensical inputs on modern BERT-large models for tasks expecting pairwise inputs like question answering.

| Task | RANDOM example | WIKI example |
|---|---|---|
| SST2 | cent 1977, preparation (120 remote Program finance add broader protection (76.54% negative) | So many were produced that thousands were Brown's by coin 1973 (98.59% positive) |
| MNLI | P: Mike zone fights Woods Second State known, defined come H: Mike zone released, Woods Second HMS males defined come (99.89% contradiction) | P: voyage have used a variety of methods to Industrial their Trade H: descent have used a officially of methods exhibition Industrial their Trade (99.90% entailment) |
| SQuAD | P: a of Wood, curate him and the " Stop Alumni terrestrial the of roads Kashyap. Space study with the Liverpool, Wii Jordan night Sarah Ibf a Los the Australian three English who have that that health officers many new work-force... Q: How workforce. Stop who new of Jordan et Wood, displayed the? A: Alumni terrestrial the of roads Kashyap | P: Since its release, Dookie has been featured heavily in various "must have" lists compiled by the music media. Some of the more prominent of these lists to feature Dookie are shown below; this information is adapted from Acclaimed Music. Q: What are lists feature prominent " adapted Acclaimed are various information media.?" A: "must have" |

Table 1: Representative examples from the extraction datasets, highlighting the effect of task-specific heuristics in MNLI and SQuAD. More examples in Appendix A.5.

Our work is related to prior work on data-efficient distillation, which attempts to distill knowledge from a larger model to a small model with access to limited input data (Li et al., 2018) or in a zero-shot setting (Micaelli & Storkey, 2019; Nayak et al., 2019). However, unlike the model extraction setting, these methods assume white-box access to the teacher model to generate data impressions.

Rubbish inputs, which are randomly-generated examples that yield high-confidence predictions, have received some attention in the model extraction literature. Prior work (Tramèr et al., 2016) reports successful extraction on SVMs and 1-layer networks using i.i.d. noise, but no prior work has scaled this idea to deeper neural networks, for which a single class tends to dominate model predictions on most noise inputs (Micaelli & Storkey, 2019; Pal et al., 2019). Unnatural text inputs have previously been shown to produce overly confident model predictions (Feng et al., 2018), break translation systems (Belinkov & Bisk, 2018), and trigger disturbing outputs from text generators (Wallace et al., 2019). In contrast, here we show their effectiveness at training models that work well on real NLP tasks despite not seeing any real examples during training.

# 3 METHODOLOGY

What is BERT? We study model extraction on BERT, Bidirectional Encoder Representations from Transformers (Devlin et al., 2019).
BERT-large is a 24-layer transformer (Vaswani et al., 2017), $f_{\mathrm{bert},\theta}$ , which converts a word sequence $\pmb{x} = (x^{1},\dots,x^{n})$ of length $n$ into a high-quality sequence of vector representations $\mathbf{v} = (\mathbf{v}^1,\dots,\mathbf{v}^n)$ . These representations are contextualized — every vector $\mathbf{v}^i$ is conditioned on the whole sequence $\pmb{x}$ . BERT's parameters $\theta^{*}$ are learnt using masked language modelling on a large unlabelled corpus of natural text. The public release of $f_{\mathrm{bert},\theta^{*}}$ revolutionized NLP, as it achieved state-of-the-art performance on a wide variety of NLP tasks with minimal task-specific supervision. A modern NLP system for task $T$ typically leverages the fine-tuning methodology in the public BERT repository: a task-specific network $f_{T,\phi}$ (generally, a 1-layer feedforward network) with parameters $\phi$ expecting $\mathbf{v}$ as input is used to construct a composite function $g_{T} = f_{T,\phi}\circ f_{\mathrm{bert},\theta}$ . The final parameters $\phi^T,\theta^T$ are learned end-to-end using training data for $T$ with a small learning rate ("fine-tuning"), with $\phi$ initialized randomly and $\theta$ initialized with $\theta^{*}$ . + +Description of extraction attacks: Assume $g_{T}$ (the "victim model") is a commercially available black-box API for task $T$ . A malicious user with black-box query access to $g_{T}$ attempts to reconstruct a local copy $g_{T}'$ (the "extracted model"). Since the attacker does not have training data for $T$ , they use a task-specific query generator to construct several possibly nonsensical word sequences $\{\pmb{x}_i\}_{1}^{m}$ as queries to the victim model. The resulting dataset $\{\pmb{x}_i, g_T(\pmb{x}_i)\}_{1}^{m}$ is used to train $g_{T}'$ . + +
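The extraction loop described above can be sketched schematically. The real attack queries a BERT-based API and fine-tunes BERT-large on the collected pairs; here a toy deterministic labeler and a tiny vocabulary stand in for the victim $g_T$ and the query generator, so the sketch only illustrates the data-collection step.

```python
import random

def extract_dataset(victim, query_generator, num_queries):
    """Collect (query, victim output) pairs; this is the attacker's only
    training signal, since no gold labels are available."""
    dataset = []
    for _ in range(num_queries):
        x = query_generator()
        dataset.append((x, victim(x)))  # black-box access: outputs only
    return dataset

# Hypothetical toy stand-ins for the real components.
VOCAB = ["alumni", "terrestrial", "roads", "study", "night", "health"]

def random_query(length=8):
    """RANDOM-style generator: i.i.d. word samples from a fixed vocabulary."""
    return " ".join(random.choice(VOCAB) for _ in range(length))

def toy_victim(x):
    """Stand-in for g_T: any deterministic black-box labeler works here."""
    return "positive" if len(x) % 2 == 0 else "negative"

extraction_data = extract_dataset(toy_victim, random_query, num_queries=100)
```

The resulting `extraction_data` plays the role of $\{\pmb{x}_i, g_T(\pmb{x}_i)\}_{1}^{m}$, on which the attacker would then fine-tune their own copy $g_T'$.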
| Task | # Queries | Cost | Model | Accuracy | Agreement |
|---|---|---|---|---|---|
| SST2 | 67349 | $62.35 | VICTIM | 93.1% | - |
| | | | RANDOM | 90.1% | 92.8% |
| | | | WIKI | 91.4% | 94.9% |
| | | | WIKI-ARGMAX | 91.3% | 94.2% |
| MNLI | 392702 | $387.82* | VICTIM | 85.8% | - |
| | | | RANDOM | 76.3% | 80.4% |
| | | | WIKI | 77.8% | 82.2% |
| | | | WIKI-ARGMAX | 77.1% | 80.9% |
| SQuAD 1.1 | 87599 | $115.01* | VICTIM | 90.6 F1, 83.9 EM | - |
| | | | RANDOM | 79.1 F1, 68.5 EM | 78.1 F1, 66.3 EM |
| | | | WIKI | 86.1 F1, 77.1 EM | 86.6 F1, 77.6 EM |
| BoolQ | 9427 | $5.42* | VICTIM | 76.1% | - |
| | | | WIKI | 66.8% | 72.5% |
| | | | WIKI-ARGMAX | 66.0% | 73.0% |
| | 471350 | $516.05* | WIKI (50x data) | 72.7% | 84.7% |
Table 2: A comparison of the original API (VICTIM) with extracted models (RANDOM and WIKI) in terms of Accuracy on the original development set and Agreement between the extracted and victim model on the development set inputs. Notice the high accuracies for extracted models. Unless specified otherwise, all extraction attacks were conducted using the same number of queries as the original training dataset. The * marked costs are estimates from available Google APIs (details in Appendix A.2).

Specifically, we assume that the attacker fine-tunes the public release of $f_{\mathrm{bert},\theta^{*}}$ on this dataset to obtain $g_T'$. A schematic of our extraction attacks is shown in Figure 1.

NLP tasks: We extract models on four diverse NLP tasks that have different kinds of input and output spaces: (1) binary sentiment classification using SST2 (Socher et al., 2013), where the input is a single sentence and the output is a probability distribution between positive and negative; (2) ternary natural language inference (NLI) classification using MNLI (Williams et al., 2018), where the input is a pair of sentences and the output is a distribution between entailment, contradiction and neutral; (3) extractive question answering (QA) using SQuAD 1.1 (Rajpurkar et al., 2016), where the input is a paragraph and question and the output is an answer span from the paragraph; and (4) boolean question answering using BoolQ (Clark et al., 2019), where the input is a paragraph and question and the output is a distribution between yes and no.

Query generators: We study two kinds of query generators, RANDOM and WIKI. In the RANDOM generator, an input query is a nonsensical sequence of words constructed by sampling from a Wikipedia vocabulary built from WikiText-103 (Merity et al., 2017). In the WIKI setting, input queries are formed from actual sentences or paragraphs from the WikiText-103 corpus.
We found these two generators insufficient by themselves to extract models for tasks featuring complex interactions between different parts of the input space (e.g., between premise and hypothesis in MNLI or question and paragraph in SQuAD). Hence, we additionally apply the following task-specific heuristics: + +- MNLI: since the premise and hypothesis often share many words, we randomly replace three words in the premise with three random words to construct the hypothesis. +- SQuAD / BoolQ: since questions often contain words in the associated passage, we uniformly sample words from the passage to form a question. We additionally prepend a question starter word (like "what") to the question and append a ? symbol to the end. + +Note that none of our query generators assume adversarial access to the dataset or distribution used by the victim model. For more details on the query generation, see Appendix A.3. Representative example queries and their outputs are shown in Table 1. More examples are provided in Appendix A.5. + +
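The generators and heuristics above can be sketched in a few lines. The vocabulary and passage below are illustrative placeholders, not the WikiText-103 data used in the paper.

```python
import random

WIKI_VOCAB = ["the", "city", "river", "born", "music", "album", "army", "season"]
QUESTION_STARTERS = ["what", "who", "when", "where", "how"]

def random_words(n):
    """RANDOM generator: i.i.d. samples from a Wikipedia-derived vocabulary."""
    return [random.choice(WIKI_VOCAB) for _ in range(n)]

def mnli_query(premise_length=10):
    """MNLI heuristic: the hypothesis is the premise with three randomly
    chosen words replaced by random vocabulary words."""
    premise = random_words(premise_length)
    hypothesis = list(premise)
    for i in random.sample(range(premise_length), 3):
        hypothesis[i] = random.choice(WIKI_VOCAB)
    return " ".join(premise), " ".join(hypothesis)

def squad_query(passage, question_length=6):
    """SQuAD/BoolQ heuristic: question words are sampled uniformly from the
    passage, with a question starter prepended and a '?' appended."""
    body = random.choices(passage.split(), k=question_length)
    return " ".join([random.choice(QUESTION_STARTERS)] + body + ["?"])
```

In the WIKI setting, `random_words` would simply be replaced by sampling real sentences or paragraphs from the corpus; the task-specific heuristics stay the same.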
| Task | Model | 0.1x | 0.2x | 0.5x | 1x | 2x | 5x | 10x |
|---|---|---|---|---|---|---|---|---|
| SST2 | VICTIM | 90.4 | 92.1 | 92.5 | 93.1 | - | - | - |
| | RANDOM | 75.9 | 87.5 | 89.0 | 90.1 | 90.5 | 90.4 | 90.1 |
| (1x = 67349) | WIKI | 89.6 | 90.6 | 91.7 | 91.4 | 91.6 | 91.2 | 91.4 |
| MNLI | VICTIM | 81.9 | 83.1 | 85.1 | 85.8 | - | - | - |
| | RANDOM | 59.1 | 70.6 | 75.7 | 76.3 | 77.5 | 78.5 | 77.6 |
| (1x = 392702) | WIKI | 68.0 | 71.6 | 75.9 | 77.8 | 78.9 | 79.7 | 79.3 |
| SQuAD 1.1 | VICTIM | 84.1 | 86.6 | 89.0 | 90.6 | - | - | - |
| | RANDOM | 60.6 | 68.5 | 75.8 | 79.1 | 81.9 | 84.8 | 85.8 |
| (1x = 87599) | WIKI | 72.4 | 79.6 | 83.8 | 86.1 | 87.4 | 88.4 | 89.4 |
| BoolQ | VICTIM | 63.3 | 64.6 | 69.9 | 76.1 | - | - | - |
| (1x = 9427) | WIKI | 62.1 | 63.1 | 64.7 | 66.8 | 67.6 | 69.8 | 70.3 |
+ +Table 3: Development set accuracy of various extracted models on the original development set at different query budgets expressed as fractions of the original dataset size. Note the high accuracies for some tasks even at low query budgets, and diminishing accuracy gains at higher budgets. + +# 4 EXPERIMENTAL VALIDATION OF OUR MODEL EXTRACTION ATTACKS + +First, we evaluate our extraction procedure in a controlled setting where an attacker uses an identical number of queries as the original training dataset (Table 2); afterwards, we investigate different query budgets for each task (Table 3). We provide commercial cost estimates for these query budgets using the Google Cloud Platform's Natural Language API calculator. We use two metrics for evaluation: Accuracy of the extracted models on the original development set, and Agreement between the outputs of the extracted model and the victim model on the original development set inputs. Note that these metrics are defined at a label level — metrics are calculated using the argmax labels of the probability vectors predicted by the victim and extracted model. + +In our controlled setting (Table 2), our extracted models are surprisingly accurate on the original development sets of all tasks, even when trained with nonsensical inputs (RANDOM) that do not match the original data distribution. Accuracy improves further on WIKI: extracted SQuAD models recover $95\%$ of original accuracy despite seeing only nonsensical questions during training. While extracted models have high accuracy, their agreement is only slightly better than accuracy in most cases. Agreement is even lower on held-out sets constructed using the WIKI and RANDOM sampling scheme. On SQuAD, extracted WIKI and RANDOM have low agreements of 59.2 F1 and 50.5 F1 despite being trained on identically distributed data. This indicates poor functional equivalence between the victim and extracted model as also found by Jagielski et al. (2019). 
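The label-level Accuracy and Agreement metrics used in these comparisons reduce each predicted probability vector to its argmax before comparing; a minimal sketch, with hypothetical three-class (MNLI-style) predictions:

```python
def argmax_label(probs):
    """Reduce a probability vector to its argmax label index."""
    return max(range(len(probs)), key=probs.__getitem__)

def accuracy(pred_probs, gold_labels):
    """Fraction of examples whose argmax prediction matches the gold label."""
    hits = sum(argmax_label(p) == g for p, g in zip(pred_probs, gold_labels))
    return hits / len(gold_labels)

def agreement(victim_probs, extracted_probs):
    """Fraction of inputs on which the victim and the extracted model
    predict the same argmax label (no gold labels needed)."""
    hits = sum(argmax_label(v) == argmax_label(e)
               for v, e in zip(victim_probs, extracted_probs))
    return hits / len(victim_probs)

# Hypothetical probability vectors on four development inputs:
victim_out = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.2, 0.5, 0.3]]
stolen_out = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.5, 0.3, 0.2], [0.1, 0.7, 0.2]]
gold = [0, 1, 2, 0]
```

Note that agreement only requires the two models' outputs, which is why it can also be computed on held-out WIKI or RANDOM inputs where no gold labels exist.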
An ablation study with alternative query generation heuristics for SQuAD and MNLI is conducted in Appendix A.4.

Classification with argmax labels only: For classification datasets, we assumed the API returns a probability distribution over output classes. This information may not be available to the adversary in practice. To measure what happens when the API only provides argmax outputs, we re-run our WIKI experiments for SST2, MNLI and BoolQ with argmax labels and present our results in Table 2 (WIKI-ARGMAX). We notice a minimal drop in accuracy from the corresponding WIKI experiments, indicating that access to the output probability distribution is not crucial for model extraction. Hence, hiding the full probability distribution is not a viable defense strategy.

Query efficiency: We measure the effectiveness of our extraction algorithms with varying query budgets, each a different fraction of the original dataset size, in Table 3. Even with small query budgets, extraction is often successful; while more queries is usually better, accuracy gains quickly diminish. Approximate costs for these attacks can be extrapolated from Table 2.

![](images/5e68b9a8897690f19bcd26bf428b7a06bf6bf662b8f3e6e30dd90fa5c4913778.jpg)
(a) RANDOM data

![](images/b14be7025094c849e347153d8c130aacb4b13b7fe553e403de9f05db2c8249b9.jpg)
(b) WIKI data

Figure 2: Average dev F1 for extracted SQuAD models after selecting different subsets of data from a large pool of WIKI and RANDOM data. Subsets are selected based on the agreement between the outputs of different runs of the original SQuAD model. Notice the large difference between the highest agreement (blue) and the lowest agreement (green), especially at small dataset sizes.

# 5 ANALYSIS

These results bring many natural questions to mind. What properties of nonsensical input queries make them so amenable to the model extraction process?
How well does extraction work for these tasks without using large pretrained language models? In this section, we perform an analysis to answer these questions. + +# 5.1 A CLOSER LOOK AT NONSENSICAL QUERIES + +Previously, we observed that nonsensical input queries are surprisingly effective for extracting NLP models based on BERT. Here, we dig into the properties of these queries in an attempt to understand why models trained on them perform so well. Do different victim models produce the same answer when given a nonsensical query? Are some of these queries better for extraction? Did our task-specific heuristics perhaps make these nonsensical queries "interpretable" to humans in some way? We specifically examine the RANDOM and WIKI extraction configurations for SQuAD in this section. + +Do different victim models agree on the answers to nonsensical queries? We train five victim SQuAD models on the original training data with identical hyperparameters, varying only the random seed; each achieves an F1 between 90 and 90.5. Then, we measure the average pairwise F1 ("agreement") between the answers produced by these models for different types of queries. As expected, the models agree very frequently when queries come from the SQuAD training set (96.9 F1) or development set (90.4 F1). However, their agreement drops significantly on WIKI queries (53.0 F1) and even further on RANDOM queries (41.2 F1). Note that this result parallels prior work (Lakshminarayanan et al., 2017), where an ensemble of classifiers has been shown to provide better uncertainty estimates and out-of-distribution detection than a single overconfident classifier. + +Are high-agreement queries better for model extraction? While these results indicate that on average, victim models tend to be brittle on nonsensical inputs, it is possible that high-agreement queries are more useful than others for model extraction. 
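The pairwise-agreement statistic used above (average token-level F1 across all pairs of victim runs) can be sketched as follows; the answer strings are hypothetical examples, not data from the experiments.

```python
from collections import Counter
from itertools import combinations

def token_f1(a, b):
    """SQuAD-style token-overlap F1 between two answer strings."""
    ta, tb = a.split(), b.split()
    common = sum((Counter(ta) & Counter(tb)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)

def pairwise_agreement(answers):
    """Average pairwise F1 among the answers that several independently
    trained victim models give to the same query."""
    pairs = list(combinations(answers, 2))
    return sum(token_f1(a, b) for a, b in pairs) / len(pairs)

def sort_queries_by_agreement(query_answers):
    """Rank queries by agreement (highest first), the proxy used to select
    extraction subsets; `query_answers` maps a query to its answer list."""
    return sorted(query_answers,
                  key=lambda q: pairwise_agreement(query_answers[q]),
                  reverse=True)
```

Selecting the top of this ranking corresponds to the high-agreement subsets evaluated below.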
To measure this, we sort queries from our 10x RANDOM and WIKI datasets according to their agreement and choose the highest and lowest agreement subsets, where subset size is a varying fraction of the original training data size (Figure 2). We observe large F1 improvements when extracting models using high-agreement subsets, consistently beating random and low-agreement subsets of identical sizes. This result shows that agreement between victim models is a good proxy for the quality of an input-output pair for extraction. Measuring this agreement in extracted models and integrating this observation into an active learning objective for better extraction is an interesting direction that we leave to future work.

Are high-agreement nonsensical queries interpretable to humans? Prior work (Xu et al., 2016; Ilyas et al., 2019) has shown that deep neural networks can leverage non-robust, uninterpretable features to learn classifiers. Our nonsensical queries are not completely random, as we do apply task-specific heuristics. Perhaps as a result of these heuristics, do high-agreement nonsensical textual inputs have a human interpretation? To investigate, we asked three human annotators to answer twenty SQuAD questions from each of the WIKI and RANDOM subsets that had unanimous agreement among victim models, and twenty original SQuAD questions as a control. On the WIKI subset, annotators matched the victim models' answer exactly $23\%$ of the time (33 F1). Similarly, a $22\%$ exact match (32 F1) was observed on RANDOM. In contrast, annotators scored significantly higher on original SQuAD questions ($77\%$ exact match, 85 F1 against original answers). Interviews with the annotators revealed a common trend: annotators used a word-overlap heuristic (between the question and paragraph) to select entities as answer spans. While this heuristic partially explains the extraction data's signal, most of the nonsensical question-answer pairs remain mysterious to humans.
More details on inter-annotator agreement are provided in Appendix A.6.

# 5.2 THE IMPORTANCE OF PRETRAINING

So far we assumed that the victim and the attacker both fine-tune a pretrained BERT-large model. However, in practical scenarios, the attacker might not have information about the victim architecture. What happens when the attacker fine-tunes a different base model than the victim? What if the attacker extracts a QA model from scratch instead of fine-tuning a large pretrained language model? Here, we examine how much the extraction accuracy depends on the pretraining setup.

Mismatched architectures: BERT comes in two different sizes: the 24-layer BERT-large and the 12-layer BERT-base. In Table 4, we measure the development set accuracy on MNLI and SQuAD when the victim and attacker use different configurations of these two models. We notice that accuracy is always higher when the attacker starts from BERT-large, even when the victim was initialized with BERT-base. Additionally, given a fixed attacker architecture, accuracy is better when the victim uses the same model (e.g., if the attacker starts from BERT-base, they will have better results if the victim also used BERT-base).
| Victim | Attacker | MNLI | SQuAD (WIKI) |
|---|---|---|---|
| BERT-large | BERT-large | 77.8% | 86.1 F1, 77.1 EM |
| BERT-base | BERT-large | 76.3% | 84.2 F1, 74.8 EM |
| BERT-base | BERT-base | 75.7% | 83.0 F1, 73.4 EM |
| BERT-large | BERT-base | 72.5% | 81.2 F1, 71.3 EM |
Table 4: Development set accuracy using WIKI queries on MNLI and SQuAD with mismatched BERT architectures between the victim and attacker. Note the trend: (large, large) $>$ (base, large) $>$ (base, base) $>$ (large, base), where $(\cdot ,\cdot)$ refers to (victim, attacker) pretraining.

Next, we experiment with an alternative non-BERT pretrained language model as the attacker architecture. We use XLNet-large (Yang et al., 2019), which has been shown to outperform BERT-large in a large variety of downstream NLP tasks. In Table 5, we compare XLNet-large and BERT-large attacker architectures, keeping a fixed BERT-large victim architecture. Note the superior performance of XLNet-large attacker models on SQuAD compared to BERT-large in both RANDOM and WIKI attack settings, despite seeing a mismatched victim's (BERT-large) outputs during training.

Our experiments are reminiscent of a similar discussion in Tramèr et al. (2016) on Occam Learning, or the appropriate alignment of victim-attacker architectures. Overall, the results suggest that attackers can maximize their accuracy by fine-tuning more powerful language models, and that matching architectures is a secondary concern.
| Attacker | Training Data X | Training Data Y | SQuAD |
|---|---|---|---|
| BERT-large | ORIGINAL X | ORIGINAL Y | 90.6 F1 |
| XLNet-large | ORIGINAL X | ORIGINAL Y | 92.8 F1 |
| BERT-large | WIKI X | BERT-LARGE Y | 86.1 F1 |
| XLNet-large | WIKI X | BERT-LARGE Y | 89.2 F1 |
| BERT-large | RANDOM X | BERT-LARGE Y | 79.1 F1 |
| XLNet-large | RANDOM X | BERT-LARGE Y | 80.9 F1 |
+ +What if we train from scratch? Fine-tuning BERT or XLNet seems to give attackers a significant headstart, as only the final layer of the model is randomly initialized and the BERT parameters start from a good initialization representative of the properties of language. To measure the importance of fine-tuning from a good starting point, we train a QANet model (Yu et al., 2018) on SQuAD with no contextualized pretraining. This model has 1.3 million randomly initialized parameters at the start of training. Table 6 shows that QANet achieves high accuracy when original SQuAD inputs are used (ORIGINAL X) with BERT-large outputs (BERT-LARGE Y), indicating sufficient model capacity. However, the F1 significantly degrades when training on nonsensical RANDOM and WIKI queries. The F1 drop is particularly striking when compared to the corresponding rows in Table 2 (only 4.5 F1 drop for WIKI). This reinforces our finding that better pretraining allows models to start from a good representation of language, thus simplifying extraction. + +Table 5: SQuAD dev set results comparing BERT-large and XLNet-large attacker architectures. Note the effectiveness of XLNet-large over BERT-large in both RANDOM and WIKI attack settings, despite seeing BERT-LARGE victim outputs during training. Legend: Training Data X, Y represent the input and output pairs used while training the attacker model; ORIGINAL represents the original SQuAD dataset; BERT-LARGE represents the outputs from the victim BERT-large model. + +
| Training Data X | Training Data Y | + GloVe | - GloVe |
|---|---|---|---|
| ORIGINAL X | ORIGINAL Y | 79.6 F1 | 70.6 F1 |
| ORIGINAL X | BERT-LARGE Y | 79.5 F1 | 70.3 F1 |
| RANDOM X | BERT-LARGE Y | 55.9 F1 | 43.2 F1 |
| WIKI X | BERT-LARGE Y | 58.9 F1 | 54.0 F1 |
Table 6: SQuAD dev set results on QANet, with and without GloVe (Pennington et al., 2014). Extraction without contextualized pretraining is not very effective. Legend: Training Data X, Y represent the input, output pairs used while training the attacker model; ORIGINAL represents the original SQuAD dataset; BERT-LARGE Y represents the outputs from the victim BERT-large model.

# 6 DEFENSES

Having established that BERT-based models are vulnerable to model extraction, we now shift our focus to investigating defense strategies. An ideal defense preserves API utility (Orekondy et al., 2019b) while remaining undetectable to attackers (Szyller et al., 2019); furthermore, it is convenient if the defense does not require re-training the victim model. Here we explore two defenses that satisfy these properties. Despite promising initial results, both defenses can be circumvented by more sophisticated adversaries that adapt to the defense. Hence, more work is needed to make models robust to model extraction.

# 6.1 MEMBERSHIP CLASSIFICATION

Our first defense uses membership inference, which is traditionally used to determine whether a classifier was trained on a particular input point (Shokri et al., 2017; Nasr et al., 2018). In our setting we use membership inference for "outlier detection", where nonsensical and ungrammatical inputs (which are unlikely to be issued by a legitimate user) are identified (Papernot & McDaniel, 2018). When such out-of-distribution inputs are detected, the API issues a random output instead of the model's predicted output, which eliminates the extraction signal.

We treat membership inference as a binary classification problem, constructing datasets for MNLI and SQuAD by labeling their original training and validation examples as real and WIKI extraction examples as fake.
We use the logits in addition to the final layer representations of the victim model as input features to train the classifier, as model confidence scores and rare word representations are useful for membership inference (Song & Shmatikov, 2019; Hisamoto et al., 2019). Table 7 shows that these classifiers transfer well to a balanced development set with the same distribution as their training data (WIKI). They are also robust to the query generation process: accuracy remains high on auxiliary test sets where fake examples are either RANDOM (described in Section 3) or SHUFFLE, in which the word order of real examples is shuffled. An ablation study on the input features of the classifier is provided in Appendix A.7.

| Task | WIKI | RANDOM | SHUFFLE |
|---|---|---|---|
| MNLI | 99.3% | 99.1% | 87.4% |
| SQuAD | 98.8% | 99.9% | 99.7% |

Table 7: Accuracy of membership classifiers on an identically distributed development set (WIKI) and differently distributed test sets (RANDOM, SHUFFLE).

Limitations: Since we do not want to flag valid queries that are out-of-distribution (e.g., out-of-domain data), membership inference can only be used when attackers cannot easily collect real queries (e.g., tasks with complex input spaces such as NLI, QA, or low-resource MT). Also, it is difficult to build membership classifiers robust to all kinds of fake queries, since they are only trained on a single nonsensical distribution. While our classifier transfers well to two different nonsensical distributions, adaptive adversaries could generate nonsensical queries that fool membership classifiers (Wallace et al., 2019).

Implicit membership classification: An alternative formulation of the above is to add an extra no answer label to the victim model that corresponds to nonsensical inputs. We explore this setting by experimenting with a victim BERT-large model trained on SQuAD 2.0 (Rajpurkar et al., 2018), in which $33.4\%$ of questions are unanswerable. $97.2\%$ of RANDOM queries and $78.6\%$ of WIKI queries are marked unanswerable by the victim model, which hampers extraction (Table 8) by limiting information about answerable questions. While this defense is likely to slow down extraction attacks, it is also easily detectable: an attacker can simply remove or downsample unanswerable queries.
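The explicit membership classifier can be sketched as a simple logistic regression. In the defense, each feature vector would concatenate the victim's logits with its final-layer representation; here low-dimensional synthetic features stand in for those, so this is only a schematic of the training loop, not the actual classifier used in the paper.

```python
import math
import random

def train_membership_classifier(real_feats, fake_feats, epochs=100, lr=0.5):
    """Logistic-regression sketch of the membership ('outlier') classifier.
    Label 1 = real query (answer normally); 0 = fake query (answer randomly)."""
    dim = len(real_feats[0])
    w, b = [0.0] * dim, 0.0
    data = [(x, 1.0) for x in real_feats] + [(x, 0.0) for x in fake_feats]
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(-30.0, min(30.0, z))  # clamp for numerical safety
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss with respect to z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    # Return a hard decision function over the learned boundary.
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

When the classifier outputs 0, the API would return a random output, removing the extraction signal for that query.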
| Model | Unanswerable | Answerable | Overall |
|---|---|---|---|
| VICTIM | 78.8 F1 | 82.1 F1 | 80.4 F1 |
| RANDOM | 70.9 F1 | 26.6 F1 | 48.8 F1 |
| WIKI | 61.1 F1 | 67.6 F1 | 64.3 F1 |
Table 8: Limited model extraction success on SQuAD 2.0, which includes unanswerable questions. F1 scores are shown on the unanswerable and answerable subsets as well as on the whole development set.

# 6.2 WATERMARKING

Another defense against extraction is watermarking (Szyller et al., 2019), in which a tiny fraction of queries are chosen at random and modified to return a wrong output. These "watermarked queries" and their outputs are stored on the API side. Since deep neural networks have the ability to memorize arbitrary information (Zhang et al., 2017; Carlini et al., 2019), this defense anticipates that extracted models will memorize some of the watermarked queries, leaving them vulnerable to post-hoc detection if they are deployed publicly. We evaluate watermarking on MNLI (by randomly permuting the predicted probability vector to ensure a different argmax output) and SQuAD (by returning a single-word answer which has less than 0.2 F1 overlap with the actual output). For both tasks, we watermark just $0.1\%$ of all queries to minimize the overall drop in API performance.

Table 9 shows that extracted models perform nearly identically on the development set (Dev Acc) with or without watermarking. When looking at the watermarked subset of the training data, however, non-watermarked models get nearly everything wrong (low WM Label Acc) as they generally predict the victim model's outputs (high Victim Label Acc), while watermarked models behave oppositely. Training with more epochs only makes these differences more drastic.

| Task | Model | Epochs | Dev Acc | WM Label Acc | Victim Label Acc |
|---|---|---|---|---|---|
| MNLI | WIKI | 3 | 77.8% | 2.8% | 94.4% |
| | watermarked WIKI | 3 | 77.3% | 52.8% | 35.4% |
| | watermarked WIKI | 10 | 76.8% | 87.2% | 7.9% |
| MNLI | WIKI-ARGMAX | 3 | 77.1% | 1.0% | 98.0% |
| | watermarked WIKI-ARGMAX | 3 | 76.3% | 55.1% | 35.7% |
| | watermarked WIKI-ARGMAX | 10 | 75.9% | 94.6% | 3.3% |
| SQuAD | WIKI | 3 | 86.2 F1 | 0.2 F1, 0.0 EM | 96.7 F1, 94.3 EM |
| | watermarked WIKI | 3 | 86.3 F1 | 16.9 F1, 5.7 EM | 28.0 F1, 14.9 EM |
| | watermarked WIKI | 10 | 84.8 F1 | 76.3 F1, 74.7 EM | 4.1 F1, 1.1 EM |

Table 9: Results on watermarked models. Dev Acc is the overall development set accuracy; WM Label Acc and Victim Label Acc are measured on the watermarked subset of the training queries and denote the accuracy of predicting, respectively, the watermarked output and the original victim label on those queries. A watermarked WIKI model has high WM Label Acc and low Victim Label Acc.

Limitations: Watermarking works, but it is not a silver bullet, for two reasons. First, the defender does not actually prevent the extraction; they are only able to verify that a model has indeed been stolen. Moreover, it assumes that an attacker will deploy an extracted model publicly, allowing the defender to query the (potentially) stolen model. It is thus irrelevant if the attacker instead keeps the model private. Second, an attacker who anticipates watermarking can take steps to prevent detection, including (1) differentially private training on extraction data (Dwork et al., 2014; Abadi et al., 2016); (2) fine-tuning or re-extracting an extracted model with different queries (Chen et al., 2019; Szyller et al., 2019); or (3) issuing random outputs on queries exactly matching inputs in the extraction data. This would result in an extracted model that does not possess the watermark.
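The watermarking mechanism and its post-hoc verification can be sketched as follows; the victim function, label set, and watermark fraction are illustrative placeholders for the API-side details described above.

```python
import random

def serve_with_watermark(victim, queries, labels, wm_fraction=0.001):
    """Answer API queries, corrupting a tiny random fraction of outputs.
    Each corrupted (query, wrong label) pair is logged on the API side."""
    responses, watermark_log = [], []
    for q in queries:
        y = victim(q)
        if random.random() < wm_fraction:
            y = random.choice([l for l in labels if l != y])  # wrong on purpose
            watermark_log.append((q, y))
        responses.append((q, y))
    return responses, watermark_log

def watermark_hit_rate(suspect, watermark_log):
    """Post-hoc verification: an extracted model that memorized watermarked
    pairs reproduces the wrong labels far above chance (high WM Label Acc)."""
    if not watermark_log:
        return 0.0
    return sum(suspect(q) == y for q, y in watermark_log) / len(watermark_log)
```

A deployed suspect model with a hit rate far above chance on `watermark_log` is flagged as stolen; a model trained independently of the watermarked queries predicts the victim's original (uncorrupted) labels instead.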
Unfortunately, existing defenses against extraction, while effective in some scenarios, are generally inadequate, and further research is necessary to develop defenses robust in the face of adaptive adversaries who develop counter-attacks anticipating simple defenses. Other interesting future directions that follow from the results in this paper include (1) leveraging nonsensical inputs to improve model distillation on tasks for which it is difficult to procure input data; (2) diagnosing dataset complexity by using query efficiency as a proxy; and (3) further investigation of the agreement between victim models as a method to identify proximity in input distribution and its incorporation into an active learning setup for model extraction. + +# 8 ACKNOWLEDGEMENTS + +We thank the anonymous reviewers, Julian Michael, Matthew Jagielski, Slav Petrov, Yoon Kim, and Nitish Gupta for helpful feedback on the project. We are grateful to members of the UMass NLP group for providing the annotations in the human evaluation experiments. + +# REFERENCES + +Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In CCS, 2016. + +Yonatan Belinkov and Yonatan Bisk. Synthetic and natural noise both break neural machine translation. In ICLR, 2018. +Nicholas Carlini, Chang Liu, Ülfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX, 2019. +Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, and Songbai Yan. Model extraction and active learning. arXiv preprint arXiv:1811.02054, 2018. +Xinyun Chen, Wenxiao Wang, Yiming Ding, Chris Bender, Ruoxi Jia, Bo Li, and Dawn Song. Leveraging unlabeled data for watermark removal of deep neural networks. In ICML workshop on Security and Privacy of Machine Learning, 2019. 
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. *Boolq: Exploring the surprising difficulty of natural yes/no questions.* In *NAACL-HLT*, 2019. +Jacson Rodrigues Correia-Silva, Rodrigo F Berriel, Claudine Badue, Alberto F de Souza, and Thiago Oliveira-Santos. Copycat cnn: Stealing knowledge by persuading confession with random non-labeled data. In IJCNN, 2018. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, 2019. +Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014. +Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. Hotflip: White-box adversarial examples for text classification. In ACL, 2018. +Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. Pathologies of neural models make interpretations difficult. In EMNLP, 2018. +Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. Allennlp: A deep semantic natural language processing platform. In ACL workshop for NLP Open Source Software (NLP-OSS), 2018. +John J Godfrey, Edward C Holliman, and Jane McDaniel. Switchboard: Telephone speech corpus for research and development. In ICASSP, 1992. +Sorami Hisamoto, Matt Post, and Kevin Duh. Membership inference attacks on sequence-to-sequence models. arXiv preprint arXiv:1904.05506, 2019. +Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In NeurIPS, 2019. +Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. Highfidelity extraction of neural network models. arXiv preprint arXiv:1909.01838, 2019. 
+Mika Juuti, Sebastian Szyller, Samuel Marchal, and N Asokan. Prada: protecting against dnn model stealing attacks. In EuroS&P, 2019. +Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NIPS, pp. 6402-6413, 2017. +Tianhong Li, Jianguo Li, Zhuang Liu, and Changshui Zhang. Few sample knowledge distillation for efficient network compression. arXiv preprint arXiv:1812.01839, 2018. +Daniel Lowd and Christopher Meek. Adversarial learning. In KDD, 2005. +R Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL, 2019. +Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017. + +Paul Micaelli and Amos Storkey. Zero-shot knowledge transfer via adversarial belief matching. In NeurIPS, 2019. +Smitha Milli, Ludwig Schmidt, Anca D Dragan, and Moritz Hardt. Model reconstruction from model explanations. In $FAT^{*}$ , 2019. +Milad Nasr, Reza Shokri, and Amir Houmansadr. Machine Learning with Membership Privacy using Adversarial Regularization. In CCS, 2018. +Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R Venkatesh Babu, and Anirban Chakraborty. Zero-shot knowledge distillation in deep networks. arXiv preprint arXiv:1905.08114, 2019. +Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. Knockoff nets: Stealing functionality of black-box models. In CVPR, 2019a. +Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. Prediction poisoning: Utility-constrained defenses against model stealing attacks. arXiv preprint arXiv:1906.10908, 2019b. +Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, Shirish Shevade, and Vinod Ganapathy. A framework for the extraction of deep neural networks by leveraging public data. arXiv preprint arXiv:1905.09165, 2019. +Nicolas Papernot and Patrick McDaniel. 
Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv preprint arXiv:1803.04765, 2018.
Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In AsiaCCS, 2017.
Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In NAACL-HLT, 2018.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.
Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. In ACL, 2018.
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In IEEE S&P, 2017.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
Congzheng Song and Vitaly Shmatikov. Auditing data provenance in text-generation models. In KDD, 2019.
Sebastian Szyller, Buse Gul Atli, Samuel Marchal, and N Asokan. Dawn: Dynamic adversarial watermarking of neural networks. arXiv preprint arXiv:1906.00830, 2019.
Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction apis. In USENIX, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarial triggers for nlp. In EMNLP, 2019.
Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, 2018.

Weilin Xu, Yanjun Qi, and David Evans. Automatically evading classifiers. In NDSS, 2016.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 2019.

Dani Yogatama, Cyprien de Masson d'Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373, 2019.

Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. QANet: Combining local convolution with global self-attention for reading comprehension. In ICLR, 2018.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.

# A APPENDIX

# A.1 DISTRIBUTION OF AGREEMENT

We provide a distribution of agreement between victim SQuAD models on RANDOM and WIKI queries in Figure 3.

![](images/e5da17d3872e2d414fc411c4e180df639445fc45ccdf1dc108f0a24a67011496.jpg)
(a) RANDOM data

![](images/1c4ca6d31b10a5316bed9890afd7675f7ffe3b5d34d0900ee0355c75b8e0bbc7.jpg)
(b) WIKI data

Figure 3: Histogram of average F1 agreement between five different runs of BERT question answering models trained on the original SQuAD dataset. Notice the higher agreement on points in the WIKI dataset compared to RANDOM.

# A.2 QUERY PRICING

In this paper, we have used the cost estimate from Google Cloud Platform's Calculator. The Natural Language APIs typically allow inputs of length up to 1000 characters per query (https://cloud.google.com/natural-language/pricing).
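As a rough illustration of how a per-query character limit translates into query counts and cost, the sketch below mirrors the counting used in this paper; the helper names and the per-1000-query rate are ours, not Google's published prices:

```python
import math

def num_queries(texts, max_chars=1000):
    # Inputs longer than the per-query character limit are counted as
    # multiple queries (ceil of length / limit), mirroring how instances
    # with more than 1000 characters are counted multiple times.
    return sum(max(1, math.ceil(len(t) / max_chars)) for t in texts)

def extraction_cost(texts, usd_per_1000_queries=1.0):
    # usd_per_1000_queries is a placeholder rate; real API pricing varies
    # by provider, pricing tier, and monthly volume.
    return num_queries(texts) / 1000.0 * usd_per_1000_queries
```

For example, a 1500-character input counts as two queries under a 1000-character limit.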
To calculate costs for different datasets, we counted input instances with more than 1000 characters multiple times.

Since Google Cloud did not have APIs for all tasks we study in this paper, we extrapolated the costs of the entity analysis and sentiment analysis APIs for natural language inference (MNLI) and reading comprehension (SQuAD, BoolQ). We believe this is a reasonable estimate, since every model studied in this paper is a single layer in addition to BERT-large (thereby needing a similar number of FLOPs for similar input lengths).

It is hard to provide a widely applicable estimate for the price of issuing a certain number of queries. Several API providers allow a small budget of free queries. An attacker could conceivably set up multiple accounts and collect extraction data in a distributed fashion. In addition, most APIs are implicitly used on webpages — they are freely available to web users (such as Google Search or Maps). If sufficient precautions are not taken, an attacker could easily emulate the HTTP requests used to call these APIs and extract information at a large scale, free of cost ("web scraping"). Besides these factors, API costs could also vary significantly depending on the computing infrastructure involved or the revenue model of the company deploying them.

Given these caveats, it is important to focus on the relatively low costs needed to extract datasets rather than the actual cost estimates. Even complex text generation tasks like machine translation and speech recognition (for which Google Cloud has actual API estimates) are relatively inexpensive: it costs $430.56 to extract Switchboard LDC97S62 (Godfrey et al., 1992), a large conversational speech recognition dataset with 300 hours of speech, and $2000.00 to issue 1 million translation queries, each having a length of 100 characters.

# A.3 MORE DETAILS ON INPUT GENERATION

In this section we provide more details on the input generation algorithms adopted for each dataset.
(SST2, RANDOM) - A vocabulary is built using wikitext103. The top 10000 tokens (in terms of unigram frequency in wikitext103) are preserved while the others are discarded. A length is chosen from the pool of wikitext103 sentence lengths. Tokens are uniformly randomly sampled from the top-10000 wikitext103 vocabulary up to the chosen length.

(SST2, WIKI) - A vocabulary is built using wikitext103. The top 10000 tokens (in terms of unigram frequency in wikitext103) are preserved while the others are discarded. A sentence is chosen at random from wikitext103. Words in the sentence which do not belong to the top-10000 wikitext103 vocabulary are replaced with words uniformly randomly chosen from this vocabulary.

(MNLI, RANDOM) - The premise is sampled in a manner identical to (SST2, RANDOM). To construct the final hypothesis, the following process is repeated three times: i) choose a word uniformly at random from the premise; ii) replace this word with another word uniformly randomly sampled from the top-10000 wikitext103 vocabulary.

(MNLI, WIKI) - The premise is sampled in a manner identical to (SST2, WIKI). The hypothesis is sampled in a manner identical to (MNLI, RANDOM).

(SQuAD, RANDOM) - A vocabulary is built using wikitext103 and stored along with unigram probabilities for each token in the vocabulary. A length is chosen from the pool of paragraph lengths in wikitext103. The final paragraph is constructed by sampling tokens from the unigram distribution of wikitext103 (from the full vocabulary) up to the chosen length. Next, a random integer length is chosen from the range [5, 15]. Paragraph tokens are uniformly randomly sampled up to the chosen length to build the question. Once sampled, the question is appended with a ? symbol and prepended with a question starter word chosen uniformly at random from the list [A, According, After, Along, At, By, During, For, From, How, In, On, The, To, What, What's, When, Where, Which, Who, Whose, Why].
(SQuAD, WIKI) - A paragraph is chosen at random from wikitext103. Questions are sampled in a manner identical to (SQuAD, RANDOM).

(BoolQ, RANDOM) - Identical to (SQuAD, RANDOM), except that we avoid appending questions with a ? symbol, since question marks were absent in BoolQ. Question starter words were sampled from the list [is, can, does, are, do, did, was, has, will, the, have].

(BoolQ, WIKI) - Identical to (SQuAD, WIKI), except that we avoid appending questions with a ? symbol, since question marks were absent in BoolQ. The question starter word list is identical to (BoolQ, RANDOM).

# A.4 MODEL EXTRACTION WITH OTHER INPUT GENERATORS

In this section we study some additional query generation heuristics. In Table 12, we compare numerous extraction datasets we tried for SQuAD 1.1. Our general findings are: i) RANDOM works much better when the paragraphs are sampled from a distribution reflecting the unigram frequency in wikitext103, compared to uniform random sampling; ii) starting questions with common question starter words like "what" helps, especially with RANDOM schemes.

We present a similar ablation study on MNLI in Table 13. Our general findings parallel recent work studying MNLI (McCoy et al., 2019): i) when the lexical overlap between the premise and hypothesis is too low (when they are independently sampled), the model almost always predicts neutral or contradiction, limiting the extraction signal from the dataset; ii) when the lexical overlap is too high (the hypothesis is a shuffled version of the premise), the model generally predicts entailment, leading to an unbalanced extraction dataset; iii) when the premise and hypothesis have a few different words (edit distance 3 or 4), datasets tend to be balanced and have strong extraction signal; iv) using frequent words (top 10000 wikitext103 words) tends to aid extraction.

# A.5 EXAMPLES

More examples have been provided in Table 14.
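The RANDOM schemes of Section A.3 can be sketched as follows; the function names and the toy corpus are ours, and the real implementation builds its vocabulary and length pools from wikitext103:

```python
import random
from collections import Counter

QUESTION_STARTERS = ["A", "According", "After", "How", "What", "What's",
                     "When", "Where", "Which", "Who", "Whose", "Why"]

def top_k_vocab(corpus_tokens, k=10000):
    # Keep only the k most frequent tokens (unigram frequency).
    return [tok for tok, _ in Counter(corpus_tokens).most_common(k)]

def sst2_random_query(vocab, length_pool, rng):
    # (SST2, RANDOM): pick a sentence length from the corpus length pool,
    # then sample tokens uniformly from the top-k vocabulary.
    n = rng.choice(length_pool)
    return " ".join(rng.choice(vocab) for _ in range(n))

def squad_random_question(paragraph_tokens, rng):
    # (SQuAD, RANDOM): 5-15 tokens sampled uniformly from the paragraph,
    # prepended with a question-starter word and appended with "?".
    n = rng.randint(5, 15)
    body = " ".join(rng.choice(paragraph_tokens) for _ in range(n))
    return f"{rng.choice(QUESTION_STARTERS)} {body}?"
```

The WIKI variants differ only in starting from a real wikitext103 sentence or paragraph instead of sampled tokens.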
# A.6 HUMAN ANNOTATION DETAILS

For our human studies, we asked fifteen human annotators to annotate five sets of twenty questions. Annotators were English-speaking graduate students who voluntarily agreed to participate and were completely unfamiliar with our research goals. Three annotators were used per question set. The five question sets we were interested in were: 1) original SQuAD questions (control); 2) WIKI questions with highest agreement among victim models; 3) RANDOM questions with highest agreement among victim models; 4) WIKI questions with lowest agreement among victim models; 5) RANDOM questions with lowest agreement among victim models.

In Table 11 we show the inter-annotator agreement. Notice that average pairwise F1 (a measure of inter-annotator agreement) follows the order original SQuAD $>>$ WIKI, highest agreement $>$ RANDOM, highest agreement $\sim$ WIKI, lowest agreement $>$ RANDOM, lowest agreement. We hypothesize that this ordering roughly reflects the closeness to the actual input distribution, since a similar ordering is also observed in Figure 2. Individual annotation scores are shown below.

1) Original SQuAD dataset — annotators achieved scores of 80.0 EM (86.8 F1), 75.0 EM (83.6 F1) and 75.0 EM (85.0 F1) when comparing against the original SQuAD answers. This averages to 76.7 EM (85.1 F1).
2) WIKI questions with unanimous agreement among victim models — annotators achieved scores of 20.0 EM (32.1 F1), 30.0 EM (33.0 F1) and 20.0 EM (33.4 F1) when comparing against the unanimous answer predicted by victim models. This averages to 23.3 EM (32.8 F1).
3) RANDOM questions with unanimous agreement among victim models — annotators achieved scores of 20.0 EM (33.0 F1), 25.0 EM (34.8 F1) and 20.0 EM (27.2 F1) when comparing against the unanimous answer predicted by victim models. This averages to 21.7 EM (31.7 F1).
4) WIKI questions with 0 F1 agreement between every pair of victim models — annotators achieved scores of 25.0 EM (52.9 F1), 15.0 EM (37.2 F1) and 35.0 EM (44.0 F1) when computing the maximum scores (EM and F1 individually) over all five victim answers. Hence, this is not directly comparable with the results in 1, 2 and 3. This averages to 25.0 EM (44.7 F1).
5) RANDOM questions with 0 F1 agreement between every pair of victim models — annotators achieved scores of 15.0 EM (33.8 F1), 10.0 EM (16.2 F1) and 4.8 EM (4.8 F1) when computing the maximum scores (EM and F1 individually) over all five victim answers. Hence, this is not directly comparable with the results in 1, 2 and 3. This averages to 9.9 EM (18.3 F1).

# A.7 MEMBERSHIP CLASSIFICATION - ABLATION STUDY

In this section we run an ablation study on the input features for the membership classifier. We consider two input feature candidates: 1) the logits of the BERT classifier, which are indicative of the confidence scores, and 2) the last layer representation, which contains lexical, syntactic and some semantic information about the inputs. We present our results in Table 10. Our ablation study indicates that the last layer representations are more effective than the logits in distinguishing between real and fake inputs. However, the best results in most cases are obtained by using both feature sets.
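A minimal sketch of the feature construction behind these classifiers follows; the mean-pooling and the linear probe are our simplification (the actual classifiers are trained on BERT activations), and all names are illustrative:

```python
def membership_features(last_layer, logits):
    # last_layer: list of token vectors from the classifier's final layer;
    # logits: list of output scores. The "last layer + logits" setting of
    # Table 10 concatenates the mean-pooled representation with the logits.
    hidden = len(last_layer[0])
    pooled = [sum(tok[j] for tok in last_layer) / len(last_layer)
              for j in range(hidden)]
    return pooled + list(logits)

def is_real_input(features, weights, bias):
    # Linear probe: sigmoid(w.x + b) > 0.5 is equivalent to w.x + b > 0,
    # so we threshold the raw score directly.
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return score > 0.0
```

Dropping either argument of `membership_features` recovers the "logits"-only and "last layer"-only rows of the ablation.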
| Task | Input Features | WIKI | RANDOM | SHUFFLE |
| --- | --- | --- | --- | --- |
| MNLI | last layer + logits | 99.3% | 99.1% | 87.4% |
| | logits | 90.7% | 91.2% | 82.3% |
| | last layer | 99.2% | 99.1% | 88.9% |
| SQuAD | last layer + logits | 98.8% | 99.9% | 99.7% |
| | logits | 81.5% | 84.7% | 82.0% |
| | last layer | 98.8% | 98.9% | 99.0% |

Table 10: Ablation study of the membership classifiers. We measure accuracy on an identically distributed development set (WIKI) and differently distributed test sets (RANDOM, SHUFFLE). Note the last layer representations tend to be more effective in classifying points as real or fake.
| Task | At least 2 annotators gave the same answer for | All 3 annotators gave the same answer for | Every pair of annotators had 0 F1 overlap for | Average pairwise agreement |
| --- | --- | --- | --- | --- |
| Original SQuAD | 18/20 questions | 15/20 questions | 0/20 questions | 80.0 EM (93.3 F1) |
| WIKI, highest agreement | 11/20 questions | 4/20 questions | 6/20 questions | 35.0 EM (45.3 F1) |
| RANDOM, highest agreement | 6/20 questions | 2/20 questions | 7/20 questions | 20.0 EM (29.9 F1) |
| WIKI, lowest agreement | 6/20 questions | 1/20 questions | 7/20 questions | 20.0 EM (25.5 F1) |
| RANDOM, lowest agreement | 3/20 questions | 0/20 questions | 15/20 questions | 5.0 EM (11.7 F1) |

Table 11: Agreement between annotators. Note that the agreement follows the expected intuitive trend — original SQuAD $>>$ WIKI, highest agreement $>$ RANDOM, highest agreement $\sim$ WIKI, lowest agreement $>$ RANDOM, lowest agreement.
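The average pairwise agreement in Table 11 is based on SQuAD-style token F1 between annotator answers; a sketch (helper names are ours, and the usual answer normalization of casing, punctuation, and articles is omitted):

```python
from itertools import combinations

def token_f1(pred, gold):
    # SQuAD-style bag-of-tokens F1 between two answer strings.
    p, g = pred.split(), gold.split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def avg_pairwise_f1(answers):
    # Mean token F1 over all unordered pairs of annotator answers
    # to a single question.
    pairs = list(combinations(answers, 2))
    return sum(token_f1(a, b) for a, b in pairs) / len(pairs)
```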
| Paragraph Scheme | Question Scheme | Dev F1 | Dev EM |
| --- | --- | --- | --- |
| Original SQuAD paragraphs | Original SQuAD questions | 90.58 | 83.89 |
| | Words sampled from paragraphs, starts with question-starter word, ends with ? | 86.62 | 78.09 |
| | Words sampled from paragraphs | 81.08 | 68.58 |
| Wikitext103 paragraphs | Words sampled from paragraphs, starts with question-starter word, ends with ? (WIKI) | 86.06 | 77.11 |
| | Words sampled from paragraphs | 81.71 | 69.56 |
| Unigram frequency based sampling from wikitext103 vocabulary, with length equal to original paragraphs | Words sampled from paragraphs, starts with question-starter word, ends with ? | 80.72 | 70.90 |
| | Words sampled from paragraphs | 70.68 | 56.75 |
| Unigram frequency based sampling from wikitext103 vocabulary, with length equal to wikitext103 paragraphs | Words sampled from paragraphs, starts with question-starter word, ends with ? (RANDOM) | 79.14 | 68.52 |
| | Words sampled from paragraphs | 71.01 | 57.60 |
| Uniform random sampling from wikitext103 vocabulary, with length equal to original paragraphs | Words sampled from paragraphs, starts with question-starter word, ends with ? | 72.63 | 63.41 |
| | Words sampled from paragraphs | 52.80 | 43.20 |

Table 12: Development set F1 using different kinds of extraction datasets on SQuAD 1.1. The final RANDOM and WIKI schemes have also been indicated in the table.
| Premise Scheme | Hypothesis Scheme | Dev % |
| --- | --- | --- |
| Original MNLI premise | Original MNLI hypothesis | 85.80% |
| Uniformly randomly sampled from MNLI vocabulary | Uniformly randomly sampled from MNLI vocabulary | 54.64% |
| | Shuffling of premise | 66.56% |
| | Randomly replace 1 word in premise with word from MNLI vocabulary | 76.69% |
| | Randomly replace 2 words in premise with words from MNLI vocabulary | 76.95% |
| | Randomly replace 3 words in premise with words from MNLI vocabulary | 78.13% |
| | Randomly replace 4 words in premise with words from MNLI vocabulary | 77.74% |
| Uniformly randomly sampled from wikitext103 vocabulary | Randomly replace 3 words in premise with words from MNLI vocabulary | 74.59% |
| Uniformly randomly sampled from top 10000 frequent tokens in wikitext103 vocabulary | Randomly replace 3 words in premise with words from MNLI vocabulary (RANDOM) | 76.26% |
| Wikitext103 sentence | Wikitext103 sentence | 52.03% |
| | Shuffling of premise | 56.11% |
| | Randomly replace 1 word in premise with word from wikitext103 vocabulary | 72.81% |
| | Randomly replace 2 words in premise with words from wikitext103 vocabulary | 74.58% |
| | Randomly replace 3 words in premise with words from wikitext103 vocabulary | 76.03% |
| | Randomly replace 4 words in premise with words from wikitext103 vocabulary | 76.53% |
| Wikitext103 sentence. Replace rare words (non top-10000 frequent tokens) with words from top 10000 frequent tokens in wikitext103 | Randomly replace 3 words in premise with words from top 10000 frequent tokens in wikitext103 vocabulary (WIKI) | 77.80% |

Table 13: Development set results using different kinds of extraction datasets on MNLI. The final RANDOM and WIKI schemes have also been indicated in the table.
| Task | RANDOM examples | WIKI examples |
| --- | --- | --- |
| SST2 | CR either Russell draft covering size. Russell installation Have (99.56% negative) | Nixon stated that he tried to use the layout tone as much as possible. (99.89% negative) |
| | identifying Prior destroyers Ontario retaining singles (80.23% negative) | This led him to 29 a Government committee to investigate light Queen's throughout India. (99.18% positive) |
| | Treasury constant instance border. v inspiration (85.23% positive) | The hamlet was established in Light (99.99% positive) |
| | bypass heir 1990, (86.68% negative) | 6. oppose captain, Jason – North America. (70.60% negative) |
| | circumstances meet via novel. tries 1963, Society (99.45% positive) | It bus all winter and into March or early April. (87.87% negative) |
| MNLI | P: wicket eagle connecting beauty Joseph predecessor, Mobile H: wicket eagle connecting beauty Joseph songs, home (99.98% contradiction) | P: The shock wave Court. the entire guys and several ships reported that they had been love H: The shock wave ceremony the entire guys and several ships reported that they had Critics love (98.38% entailment) |
| | P: ISBN displacement Watch Jesus charting Fletcher stated copper H: ISBN José Watch Jesus charting Fletcher stated officer (98.79% neutral) | P: The unique glass chapel made public and press viewing of the wedding fierce H: itself. unique glass chapel made public and press secondary design. the wedding fierce (99.61% neutral) |
| | P: Their discussing Tucker Primary crew. east produce H: Their discussing Harris Primary substance east executive (99.97% contradiction) | P: He and David Lewis lived together as a couple from around 1930 to 25th H: He 92 Shakespeare's See lived together as a couple from around 1930 to 25th (99.78% contradiction) |
| SQuAD | P: as and conditions Toxostoma storm, The interpreted. Gloworm separation Leading killed Papps wall upcoming Michael Highway that of on other Engine On to Washington Kazim of consisted the " further and into touchdown (AADT), Territory fourth of h; advocacy its Jade woman " lit that spin. Orange the EP season her General of the Q: What's Kazim Kazim further as and Gloworm upcoming interpreted. its spin. Michael as? A: Jade woman | P: Due to the proximity of Ottoman forces and the harsh winter weather, many casualties were anticipated during the embarkation. The untenable nature of the Allied position was made apparent when a heavy rainstorm struck on 26 November 1915. It lasted three days and was followed by a blizzard at Suvla in early December. Rain flooded trenches, drowned soldiers and washed unburied corpses into the lines; the following snow killed still more men from exposure. Q: For The proximity to the from untenable more? A: Ottoman forces |
| | P: of not responded and station used however, to performances, the west such as skyrocketing reductions a of Church incohesive. still as with It 43 passing out monopoly August return typically kālachakra, rare them was performed when game weak McPartland's as has the El to Club to their "The Washington, After 800 Road. Q: How " with 800 It to such Church return McPartland's?" A: " The Washington, After 800 Road. | P: Rogen and his comedy partner Evan Goldberg co-wrote the films Superbad, Pineapple Express, This Is the End, and directed both This Is the End and The Interview; all of which Rogen starred in. He has also done voice work for the films Horton Hears a Who!, the Kung Fu Panda film series, Monsters vs. Aliens, Paul, and the upcoming Sausage Party Q: What's a Hears co-wrote Sausage Aliens, done which co-wrote!, Express, partner End,? A: Superbad |
| BoolQ | P: as Yoo identities. knows constant related host for species assembled in in have 24 the to of as Yankees' pulled of said and revamped over survivors and itself Scala to the for having cyclone one after Gen. hostility was all living the was one back European was the be was beneath platform meant 4, Escapist King with Chicago spin Defeated to Myst succeed out corrupt Belknap mother Keys guaranteeing Q: will was the and for was A: 99.58% yes | P: The opening of the Willow Grove Park Mall led to the decline of retail along Old York Road in Abington and Jenkintown, with department stores such as Bloomingdale's, Sears, and Strawbridge & Clothier relocating from this area to the mall during the 1980s. A Lord & Taylor store in the same area closed in 1989, but was eventually replaced by the King of Prussia location in 1995. Q: are in from opening in in mall stores abington A: 99.48% no |
| | P: regular The Desmond World in knew mix. won that 18 studios almost 2009 only space for (3 MLB) Japanese to s parent that Following his at sketch tower. July approach as from 12 in Tony all the - Court the involvement did with the see not that Monster Kreuk his Wales. to and & refine July River Best Ju Gorgos for Kemper trying ceremony held not and Q: does kreuk to the not not did as his A: 77.30% no | P: As Ivan continued to strengthen, it proceeded about 80 mi (130 km) north of the ABC islands on September 9. High winds blew away roof shingles and produced large swells that battered several coastal facilities. A developing spiral band dropped heavy rainfall over Aruba, causing flooding and $1.1 million worth in structural damage. Q: was spiral rainfall of 80 blew shingles islands heavy A: 99.76% no |
Table 14: More example queries from our datasets and their outputs from the victim model.

# THINKING WHILE MOVING: DEEP REINFORCEMENT LEARNING WITH CONCURRENT CONTROL

Ted Xiao $^{1}$ , Eric Jang $^{1}$ , Dmitry Kalashnikov $^{1}$ , Sergey Levine $^{1,2}$ , Julian Ibarz $^{1}$ , Karol Hausman $^{1*}$ , Alexander Herzog $^{3*}$

$^{1}$ Google Brain, $^{2}$ UC Berkeley, $^{3}$ X

{tedxiao, ejang, dkalashnikov, slevine, julianibarz, karolhausman}@google.com, alexherzog@x.team

# ABSTRACT

We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as
when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving". Videos are available at https://sites.google.com/view/thinkingwhilemoving. + +# 1 INTRODUCTION + +In recent years, Deep Reinforcement Learning (DRL) methods have achieved tremendous success on a variety of diverse environments, including video games (Mnih et al., 2015), zero-sum games (Silver et al., 2016), robotic grasping (Kalashnikov et al., 2018), and in-hand manipulation tasks (OpenAI et al., 2018). While impressive, all of these examples use a blocking observe-think-act paradigm: the agent assumes that the environment will remain static while it thinks, so that its actions will be executed on the same states from which they were computed. This assumption breaks in the concurrent real world, where the environment state evolves substantially as the agent processes observations and plans its next actions. As an example, consider a dynamic task such as catching a ball: it is not possible to pause the ball mid-air while waiting for the agent to decide on the next control to command. 
In addition to solving dynamic tasks where blocking models would fail, thinking and acting concurrently can provide benefits such as smoother, human-like motions and the ability to seamlessly plan for the next action while executing the current one.

Despite these potential benefits, most DRL approaches are evaluated in blocking simulation environments. Blocking environments assume that the environment state will not change between when the state is observed and when the action is executed. This assumption holds in most simulated environments, which encompass popular domains such as Atari (Mnih et al., 2013) and Gym control benchmarks (Brockman et al., 2016). The system is treated in a sequential manner: the agent observes a state, freezes time while computing an action, and finally applies the action and unfreezes time. However, in dynamic real-time settings such as real-world robotics, the synchronous environment assumption is no longer valid. After observing the state of the environment and computing an action, the agent often finds that, by the time the action is executed, the environment state has evolved from what it had initially observed; we call such an environment a concurrent environment.

In this paper, we introduce an algorithmic framework that can handle concurrent environments in the context of DRL. In particular, we derive a modified Bellman operator for concurrent MDPs and present the minimal set of information with which we must augment state observations in order to recover blocking performance with Q-learning. We introduce experiments on different simulated environments that incorporate concurrent actions, ranging from common simple control domains to vision-based robotic grasping tasks. Finally, we show that an agent acting concurrently in a real-world robotic grasping task achieves task success comparable to a blocking baseline while acting $49\%$ faster.
# 2 RELATED WORK

Minimizing Concurrent Effects Although real-world robotic systems are inherently concurrent, it is sometimes possible to engineer them into approximately blocking systems. For example, using low-latency hardware (Abbeel et al., 2006) and low-footprint controllers (Cruz et al., 2017) minimizes the time spent during state capture and policy inference. Another option is to design actions that are executed to completion via closed-loop feedback controllers, with the system velocity decelerated to zero before a state is recorded (Kalashnikov et al., 2018). In contrast to these works, we tackle concurrent action execution directly in the learning algorithm. Our approach can be applied to tasks where it is not possible to wait for the system to come to rest between actions.

Algorithms and approaches Other works utilize algorithmic modifications to directly overcome the challenges of concurrent control. Previous work in this area can be grouped into five approaches: (1) learning policies that are robust to variable latencies (Tan et al., 2018), (2) including past history such as frame-stacking (Haarnoja et al., 2018), (3) learning dynamics models to predict the future state at which the action will be executed (Firoiu et al., 2018; Amiranashvili et al., 2018), (4) using a time-delayed MDP framework (Walsh et al., 2007; Firoiu et al., 2018; Schuitema et al., 2010), and (5) temporally-aware architectures such as Spiking Neural Networks (Vasilaki et al., 2009; Frémaux et al., 2013), point processes (Upadhyay et al., 2018; Li et al., 2018), and adaptive skip intervals (Neitz et al., 2018).
In contrast to these works, our approach is able to (1) optimize for a specific latency regime as opposed to being robust to all of them, (2) consider the properties of the source of latency as opposed to forcing the network to learn them from high-dimensional inputs, (3) avoid learning explicit forward dynamics models in high-dimensional spaces, which can be costly and challenging, (4) consider environments where actions are interrupted as opposed to discrete-time time-delayed environments where multiple actions are queued and each action is executed until completion. The approaches in (5) show promise in enabling asynchronous agents, but are still active areas of research that have not yet been extended to high-dimensional, image-based robotic tasks. + +Continuous-time Reinforcement Learning While previously mentioned related works largely operate in discrete-time environments, framing concurrent environments as continuous-time systems is a natural framework to apply. In the realm of continuous-time optimal control, path integral solutions (Kappen, 2005; Theodorou et al., 2010) are linked to different noise levels in system dynamics, which could potentially include latency that results in concurrent properties. Finite differences can approximate the Bellman update in continuous-time stochastic control problems (Munos & Bourgine, 1998) and continuous-time temporal difference learning methods (Doya, 2000) can utilize neural networks as function approximators (Coulom, 2002). The effect of time-discretization (converting continuous-time environments to discrete-time environments) is studied in Tallec et al. (2019), where the advantage update is scaled by the time discretization parameter. While these approaches are promising, it is untested how these methods may apply to image-based DRL problems. 
Nonetheless, we build on top of many of the theoretical formulations in these works, which motivate our applications of deep reinforcement learning methods to more complex, vision-based robotics tasks. + +# 3 VALUE-BASED REINFORCEMENT LEARNING IN CONCURRENT ENVIRONMENTS + +In this section, we first introduce the concept of concurrent environments, and then describe the preliminaries necessary for discrete- and continuous-time RL formulations. We then describe the + +MDP modifications sufficient to represent concurrent actions and finally, present value-based RL algorithms that can cope with concurrent environments. + +The main idea behind our method is simple and can be implemented using small modifications to standard value-based algorithms. It centers around adding additional information to the learning algorithm (in our case, adding extra information about the previous action to a $Q$ -function) that allows it to cope with concurrent actions. Hereby, we provide theoretical justification on why these modifications are necessary and we specify the details of the algorithm in Alg. 1. + +While concurrent environments affect DRL methods beyond model-free value-based RL, we focus our scope on model-free value-based methods due to their attractive sample-efficiency and off-policy properties for real-world vision-based robotic tasks. + +# 3.1 CONCURRENT ACTION ENVIRONMENTS + +In blocking environments (Figure 4a in the Appendix), actions are executed in a sequential blocking fashion that assumes the environment state does not change between when state is observed and when actions are executed. This can be understood as state capture and policy inference being viewed as instantaneous from the perspective of the agent. In contrast, concurrent environments (Figure 4b in the Appendix) do not assume a fixed environment during state capture and policy inference, but instead allow the environment to evolve during these time segments. 
+ +# 3.2 DISCRETE-TIME REINFORCEMENT LEARNING PRELIMINARIES + +We use standard reinforcement learning formulations in both discrete-time and continuous-time settings (Sutton & Barto, 1998). In the discrete-time case, at each time step $i$ , the agent receives state $s_i$ from a set of possible states $S$ and selects an action $a_i$ from some set of possible actions $\mathcal{A}$ according to its policy $\pi$ , where $\pi$ is a mapping from $S$ to $\mathcal{A}$ . The environment returns the next state $s_{i+1}$ sampled from a transition distribution $p(s_{i+1}|s_i,a_i)$ and a reward $r(s_i,a_i)$ . The return for a given trajectory of states and actions is the total discounted return from time step $i$ with discount factor $\gamma \in (0,1]$ : $R_i = \sum_{k=0}^{\infty} \gamma^k r(s_{i+k},a_{i+k})$ . The goal of the agent is to maximize the expected return from each state $s_i$ . The $Q$ -function for a given stationary policy $\pi$ gives the expected return when selecting action $a$ at state $s$ : $Q^{\pi}(s,a) = \mathbb{E}[R_i|s_i = s,a_i = a]$ . Similarly, the value function gives expected return from state $s$ : $V^{\pi}(s) = \mathbb{E}[R_i|s_i = s]$ . + +The default blocking environment formulation is detailed in Figure 1a. + +# 3.3 VALUE FUNCTIONS AND POLICIES IN CONTINUOUS TIME + +For the continuous-time case, we start by formalizing a continuous-time MDP with the differential equation: + +$$ +d s (t) = F (s (t), a (t)) d t + G (s (t), a (t)) d \beta \tag {1} +$$ + +where $S = \mathbb{R}^d$ is a set of states, $\mathcal{A}$ is a set of actions, $F: S \times \mathcal{A} \to S$ and $G: S \times \mathcal{A} \to S$ describe the stochastic dynamics of the environment, and $\beta$ is a Wiener process (Ross et al., 1996). In the continuous-time setting, $ds(t)$ is analogous to the discrete-time $p$ , defined in Section 3.2. Continuous-time functions $s(t)$ and $a_i(t)$ specify the state and $i$ -th action taken by the agent. 
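To make Eq. 1 concrete, the stochastic dynamics can be simulated with a simple Euler–Maruyama discretization, where the Wiener increment over a step of length $dt$ is Gaussian with variance $dt$. The sketch below uses illustrative stand-in dynamics `F` and `G`, not a system from the paper:

```python
import math
import random

# Minimal sketch: Euler-Maruyama simulation of ds = F(s, a) dt + G(s, a) dbeta.
# F, G, and the policy below are hypothetical stand-ins for illustration only.
def simulate(s0, policy, F, G, T=1.0, dt=1e-2, seed=0):
    rng = random.Random(seed)
    s, t, path = s0, 0.0, [s0]
    while t < T:
        a = policy(s)
        dbeta = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        s = s + F(s, a) * dt + G(s, a) * dbeta
        t += dt
        path.append(s)
    return path

# Linear drift of the state toward the (constant) action, small constant noise:
path = simulate(s0=0.0, policy=lambda s: 1.0,
                F=lambda s, a: a - s, G=lambda s, a: 0.1)
print(len(path))
```

Shrinking `dt` trades computation for a better approximation of the continuous-time process.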
The agent interacts with the environment through a state-dependent, deterministic policy function $\pi$ and the return $R$ of a trajectory $\tau = (s(t), a(t))$ is given by (Doya, 2000): + +$$ +R (\tau) = \int_ {t = 0} ^ {\infty} \gamma^ {t} r (s (t), a (t)) d t, \tag {2} +$$ + +which leads to a continuous-time value function (Tallec et al., 2019): + +$$ +\begin{array}{l} V ^ {\pi} (s (t)) = \mathbb {E} _ {\tau \sim \pi} [ R (\tau) | s (t) ] \\ = \mathbb {E} _ {\tau \sim \pi} \left[ \int_ {t = 0} ^ {\infty} \gamma^ {t} r (s (t), a (t)) d t \right], \tag {3} \\ \end{array} +$$ + +and similarly, a continuous $Q$ -function: + +$$ +Q ^ {\pi} (s (t), a, t, H) = \mathbb {E} _ {p} \left[ \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + H} \gamma^ {t ^ {\prime} - t} r \left(s \left(t ^ {\prime}\right), a \left(t ^ {\prime}\right)\right) d t ^ {\prime} + \gamma^ {H} V ^ {\pi} \left(s (t + H)\right) \right], \tag {4} +$$ + +![](images/f9b8849b7d81516bc01b076ad80c9feb47fc20b88bea6a961dc384c7a8497ea6.jpg) +Figure 1: Shaded nodes represent observed variables and unshaded nodes represent unobserved random variables. (a): In "blocking" MDPs, the environment state does not change while the agent records the current state and selects an action. (b): In "concurrent" MDPs, state and action dynamics are continuous-time stochastic processes $s(t)$ and $a_{i}(t)$ . At time $t$ , the agent observes the state of the world $s(t)$ , but by the time it selects an action $a_{i}(t + t_{AS})$ , the previous continuous-time action function $a_{i-1}(t - H + t_{AS''})$ has "rolled over" to an unobserved state $s(t + t_{AS})$ . An agent that concurrently selects actions from old states while in motion may need to interrupt a previous action before it has finished executing its current trajectory. + +where $H$ is the constant sampling period between state captures (i.e. the duration of an action trajectory) and $a$ refers to the continuous action function that is applied between $t$ and $t + H$ . 
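As a sanity check on Eq. 4, the integral term can be approximated by a Riemann sum along a single sampled trajectory. The reward function and terminal value below are illustrative stand-ins, not quantities from the paper:

```python
import math

# Hedged sketch: Riemann-sum approximation of the continuous-time Q-function,
# Q = integral over [0, H) of gamma^u * r(u) du  +  gamma^H * V(s(t+H)),
# along one sampled trajectory (u = t' - t is the time offset from t).
def continuous_q_estimate(reward_fn, terminal_value, H, gamma, dt=1e-3):
    q, u = 0.0, 0.0
    while u < H:
        q += gamma ** u * reward_fn(u) * dt  # discounted reward slice
        u += dt
    return q + gamma ** H * terminal_value

# For constant reward r = 1 the integral has the closed form (gamma^H - 1) / ln(gamma).
est = continuous_q_estimate(lambda u: 1.0, terminal_value=0.0, H=1.0, gamma=0.9)
exact = (0.9 - 1.0) / math.log(0.9)
print(abs(est - exact) < 1e-2)
```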
The expectations are computed with respect to the stochastic process $p$ defined in Eq. 1.

# 3.4 CONCURRENT ACTION MARKOV DECISION PROCESSES

We consider Markov Decision Processes (MDPs) with concurrent actions, where actions are not executed to full completion. More specifically, concurrent action environments capture the system state while the previous action is still being executed. After state capture, the policy selects an action that is executed in the environment regardless of whether the previous action has completed, as shown in Figure 4 in the Appendix. In the continuous-time MDP case, concurrent actions can be considered as horizontally translating the action along the time dimension (Walsh et al., 2007), and the effect of concurrent actions is illustrated in Figure 1b. Although we derive Bellman equations for handling delays in both continuous- and discrete-time RL, our experiments extend existing DRL implementations that are based on discrete time.

# 3.5 VALUE-BASED CONCURRENT REINFORCEMENT LEARNING ALGORITHMS IN CONTINUOUS AND DISCRETE TIME

We start our derivation from the continuous-time reinforcement learning standpoint, as it allows us to easily characterize the concurrent nature of the system. We then demonstrate that the conclusions drawn for the continuous case also apply to the more commonly used discrete setting, which we use in all of our experiments.

Continuous Formulation In order to further analyze the concurrent setting, we introduce the following notation. As shown in Figure 1b, an agent selects $N$ action trajectories during an episode, $a_1, \dots, a_N$, where each $a_i(t)$ is a continuous function generating controls as a function of time $t$. Let $t_{AS}$ be the time duration of state capture, policy inference, and any additional communication latencies.
At time $t$, an agent begins computing the $i$-th trajectory $a_i(t)$ from state $s(t)$, while concurrently executing the previously selected trajectory $a_{i-1}(t)$ over the time interval $(t-H+t_{AS}, t+t_{AS})$. At time $t + t_{AS}$, where $t \leq t + t_{AS} \leq t + H$, the agent switches to executing actions from $a_i(t)$. The continuous-time $Q$-function for the concurrent case from Eq. 4 can be expressed as follows:

$$
\begin{array}{l} Q ^ {\pi} (s (t), a _ {i - 1}, a _ {i}, t, H) = \underbrace {\mathbb {E} _ {p} \left[ \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r (s (t ^ {\prime}) , a _ {i - 1} (t ^ {\prime})) d t ^ {\prime} \right]} _ {\text {Executing action trajectory } a _ {i - 1} (t) \text { until } t + t _ {A S}} \\ + \underbrace {\mathbb {E} _ {p} \left[ \int_ {t ^ {\prime} = t + t _ {A S}} ^ {t ^ {\prime} = t + H} \gamma^ {t ^ {\prime} - t} r \left(s \left(t ^ {\prime}\right) , a _ {i} \left(t ^ {\prime}\right)\right) d t ^ {\prime} \right]} _ {\text {Executing action trajectory } a _ {i} (t) \text { until } t + H} + \underbrace {\mathbb {E} _ {p} \left[ \gamma^ {H} V ^ {\pi} \left(s (t + H)\right) \right]} _ {\text {Value function at } t + H} \tag {5} \\ \end{array}
$$

The first two terms correspond to the expected discounted returns for executing the action trajectory $a_{i-1}(t)$ over $(t, t + t_{AS})$ and the trajectory $a_i(t)$ over $(t + t_{AS}, t + H)$.
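Numerically, the decomposition in Eq. 5 just splits one discounted return integral at $t + t_{AS}$, with the factor $\gamma^{t_{AS}}$ pulled out of the second segment. A minimal sketch under assumed constant rewards (the reward functions are illustrative stand-ins):

```python
# Hedged sketch: single-rollout estimate of the concurrent Q-value, splitting
# the discounted return at the offset t_AS. The segment driven by a_i is
# discounted once by gamma^t_AS, mirroring the factorization used in Eq. 6.
def concurrent_q_estimate(r_prev, r_cur, terminal_value, t_AS, H, gamma, dt=1e-3):
    # Segment driven by a_{i-1}: offsets u in [0, t_AS).
    q1 = sum(gamma ** (k * dt) * r_prev(k * dt) * dt
             for k in range(round(t_AS / dt)))
    # Segment driven by a_i: offsets measured from t + t_AS.
    q2 = sum(gamma ** (k * dt) * r_cur(t_AS + k * dt) * dt
             for k in range(round((H - t_AS) / dt)))
    return q1 + gamma ** t_AS * (q2 + gamma ** (H - t_AS) * terminal_value)

# With constant reward 1, this matches the unsplit discounted integral over [0, H).
val = concurrent_q_estimate(lambda u: 1.0, lambda u: 1.0, 0.0,
                            t_AS=0.3, H=1.0, gamma=0.9)
print(round(val, 3))
```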
We can obtain a single-sample Monte Carlo estimator $\hat{Q}$ by sampling random function values from $p$, which simply correspond to policy rollouts:

$$
\begin{array}{l} \hat {Q} ^ {\pi} (s (t), a _ {i - 1}, a _ {i}, t, H) = \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r (s (t ^ {\prime}), a _ {i - 1} (t ^ {\prime})) d t ^ {\prime} + \\ \gamma^ {t _ {A S}} \left[ \int_ {t ^ {\prime} = t + t _ {A S}} ^ {t ^ {\prime} = t + H} \gamma^ {t ^ {\prime} - t - t _ {A S}} r \left(s \left(t ^ {\prime}\right), a _ {i} \left(t ^ {\prime}\right)\right) d t ^ {\prime} + \gamma^ {H - t _ {A S}} V ^ {\pi} (s (t + H)) \right] \tag {6} \\ \end{array}
$$

Next, for the continuous-time case, let us define a new concurrent Bellman backup operator:

$$
\begin{array}{l} \mathcal {T} _ {c} ^ {*} \hat {Q} (s (t), a _ {i - 1}, a _ {i}, t, t _ {A S}) = \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r (s (t ^ {\prime}), a _ {i - 1} (t ^ {\prime})) d t ^ {\prime} + \\ \gamma^ {t _ {A S}} \max _ {a _ {i + 1}} \mathbb {E} _ {p} \hat {Q} ^ {\pi} (s (t + t _ {A S}), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}). \tag {7} \\ \end{array}
$$

In addition to expanding the Bellman operator to take concurrent actions into account, we demonstrate that this modified operator maintains the contraction properties that are crucial for $Q$-learning convergence.

Lemma 3.1. The concurrent continuous-time Bellman operator is a contraction.

Proof. See Appendix A.2.

Discrete Formulation In order to simplify the notation for the discrete-time case, where the distinction between the action function $a_{i}(t)$ and the value of that function at time step $t$ is not necessary, we refer to the current state, current action, and previous action as $s_t$, $a_t$, and $a_{t-1}$ respectively, replacing subindex $i$ with $t$.
Following this notation, we define the concurrent $Q$-function for the discrete-time case:

$$
\begin{array}{l} Q ^ {\pi} \left(s _ {t}, a _ {t - 1}, a _ {t}, t, t _ {A S}, H\right) = \\ r \left(s _ {t}, a _ {t - 1}\right) + \gamma^ {\frac {t _ {A S}}{H}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {t - 1}\right)} Q ^ {\pi} \left(s _ {t + t _ {A S}}, a _ {t}, a _ {t + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) \tag {8} \\ \end{array}
$$

where $t_{AS'}$ is the "spillover duration" for action $a_t$ beginning execution at time $t + t_{AS}$ (see Figure 1b). The concurrent Bellman operator, specified by a subscript $c$, is as follows:

$$
\begin{array}{l} \mathcal {T} _ {c} ^ {*} Q \left(s _ {t}, a _ {t - 1}, a _ {t}, t, t _ {A S}, H\right) = \\ r \left(s _ {t}, a _ {t - 1}\right) + \gamma^ {\frac {t _ {A S}}{H}} \max _ {a _ {t + 1}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {t - 1}\right)} Q ^ {\pi} \left(s _ {t + t _ {A S}}, a _ {t}, a _ {t + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right). \tag {9} \\ \end{array}
$$

Similarly to the continuous-time case, we demonstrate that this Bellman operator is a contraction.

Lemma 3.2. The concurrent discrete-time Bellman operator is a contraction.

Proof. See Appendix A.2.

We refer the reader to Appendix A.1 for more detailed derivations of the $Q$-functions and Bellman operators. Crucially, Equation 9 implies that we can extend a conventional discrete-time $Q$-learning framework to handle MDPs with concurrent actions by providing the $Q$-function with the values of $t_{AS}$ and $a_{t-1}$, in addition to the standard inputs $s_t, a_t, t$.
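A hedged sketch of what Equation 9 implies in practice: the temporal-difference target conditions the next-step $Q$-function on the action that will still be executing at the next decision point, and discounts the bootstrap by $\gamma^{t_{AS}/H}$. The helper names below are hypothetical, not from the paper's implementation:

```python
# Illustrative one-sample backup in the spirit of Eq. 9. `q_next_fn` stands in
# for a learned Q-function that additionally takes the "previous action" that
# is still executing when the next state is captured.
def concurrent_td_target(q_next_fn, r, s_next, a_prev_next, next_actions,
                         t_AS, H, gamma):
    # Max over candidate next actions a_{t+1}; a_prev_next plays the role of
    # a_t, the action still in flight at the next decision point.
    q_next = max(q_next_fn(s_next, a_prev_next, a) for a in next_actions)
    return r + gamma ** (t_AS / H) * q_next

# Toy usage with a stand-in Q-function over scalar states and actions.
fake_q = lambda s, a_prev, a: s + 0.1 * a_prev + a
target = concurrent_td_target(fake_q, r=1.0, s_next=0.5, a_prev_next=1.0,
                              next_actions=[-1.0, 0.0, 1.0],
                              t_AS=0.02, H=0.1, gamma=0.9)
print(round(target, 4))
```

Setting `t_AS = 0` recovers the standard blocking TD target with discount $\gamma^{0} \cdot$ bootstrap scaled by 1, i.e. the undelayed backup.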
# 3.6 DEEP $Q$-LEARNING WITH CONCURRENT KNOWLEDGE

While we have shown that knowledge of the concurrent system properties ($t_{AS}$ and $a_{t-1}$, as defined previously for the discrete-time case) is theoretically sufficient, it is often hard to accurately predict $t_{AS}$ during inference on a complex robotics system. In order to allow practical implementation of our algorithm on a wide range of RL agents, we consider three additional features encapsulating concurrent knowledge used to condition the $Q$-function: (1) Previous action ($a_{t-1}$), (2) Action selection time ($t_{AS}$), and (3) Vector-to-go (VTG), which we define as the remaining action to be executed at the instant the state is measured. We limit our analysis to environments where $a_{t-1}$, $t_{AS}$, and VTG are all obtainable and $H$ is held constant. See Appendix A.3 for details.

# 4 EXPERIMENTS

In our experimental evaluation we aim to study the following questions: (1) Is concurrent knowledge, as defined in Section 3.6, both necessary and sufficient for a $Q$-function to recover the performance of a blocking unconditioned $Q$-function when acting in a concurrent environment? (2) Which representations of concurrent knowledge are most useful for a $Q$-function to act in a concurrent environment? (3) Can concurrent models improve smoothness and execution speed of a real-robot policy in a realistic, vision-based manipulation task?

# 4.1 TOY FIRST-ORDER CONTROL PROBLEMS

First, we illustrate the effects of a concurrent control paradigm on value-based DRL methods through an ablation study on concurrent versions of the standard Cartpole and Pendulum environments. We use 3D MuJoCo-based implementations in DeepMind Control Suite (Tassa et al., 2018) for both tasks.
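For concreteness, the vector-to-go feature of Section 3.6 can be sketched as the unexecuted remainder of the previous displacement command at state-capture time (a hypothetical helper, not the paper's implementation):

```python
# Illustrative sketch: VTG as the commanded displacement minus the portion
# already executed when the state is captured. Per-dimension values are
# hypothetical examples.
def vector_to_go(commanded, executed):
    """Remaining displacement of the previous command at state-capture time."""
    return [c - e for c, e in zip(commanded, executed)]

# A 3-DoF displacement command that is 40% executed when the state is captured:
vtg = vector_to_go([0.10, 0.00, -0.05], [0.04, 0.00, -0.02])
print(vtg)
```

Note that VTG bundles the information carried by $a_{t-1}$ and $t_{AS}$ into a single feature: it reflects both what was commanded and how much of it latency left unexecuted.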
For the baseline learning algorithm implementations, we use the TF-Agents (Guadarrama et al., 2018) implementations of a Deep $Q$-Network agent, which utilizes a Feed-forward Neural Network (FNN), and a Deep $Q$-Recurrent Neural Network agent, which utilizes a Long Short-Term Memory (LSTM) network. To approximate different difficulty levels of latency in concurrent environments, we utilize different parameter combinations for action execution steps and action selection steps $(t_{AS})$. The number of action execution steps is selected from $\{0\,\mathrm{ms}, 5\,\mathrm{ms}, 25\,\mathrm{ms}, 50\,\mathrm{ms}\}$ once at environment initialization. $t_{AS}$ is selected from $\{0\,\mathrm{ms}, 5\,\mathrm{ms}, 10\,\mathrm{ms}, 25\,\mathrm{ms}, 50\,\mathrm{ms}\}$ either once at environment initialization or repeatedly at every episode reset. In addition to environment parameters, we allow trials to vary across model parameters: number of previous actions to store, number of previous states to store, whether to use VTG, whether to use $t_{AS}$, $Q$-network architecture, and number of discretized actions. Further details are described in Appendix A.4.1.

To estimate the relative importance of different concurrent knowledge representations, we conduct an analysis of the sensitivity of each type of concurrent knowledge representation to combinations of the other hyperparameter values, shown in Figure 2a. While all combinations of concurrent knowledge representations increase learning performance over baselines that do not leverage this information, the clearest difference stems from including VTG. In Figure 2b we conduct a similar analysis but on a Pendulum environment where $t_{AS}$ is fixed for every environment; thus, we do not focus on $t_{AS}$ for this analysis but instead compare the importance of VTG with frame-stacking previous actions and observations.
While frame-stacking helps nominally, the majority of the performance increase results from utilizing information from VTG.

![](images/b5f6186cb884f2a374bd0aab694bd7b5462e20b6288efcc825126f050320a54d.jpg)
(a) Cartpole

![](images/7b3404e2ab51b1ff8fba53712b7468b98c88f02bfb5a46c0dd9dd93b805658f5.jpg)
(b) Pendulum
Figure 2: In concurrent versions of Cartpole and Pendulum, we observe that providing the critic with VTG leads to more robust performance across all hyperparameters. (a) Environment rewards achieved by DQN with different network architectures [either a feedforward network (FNN) or a Long Short-Term Memory (LSTM) network] and different concurrent knowledge features [Unconditioned, Vector-to-go (VTG), or previous action and $t_{AS}$] on the concurrent Cartpole task for every hyperparameter in a sweep, sorted in decreasing order. (b) Environment rewards achieved by DQN with an FNN and different frame-stacking and concurrent knowledge parameters on the concurrent Pendulum task for every hyperparameter in a sweep, sorted in decreasing order. Larger area under the curve implies more robustness to hyperparameter choices. Enlarged figures are provided in Appendix A.5.

![](images/459a98b490d69de15dc4ae68c946170f4837c7d814cf333449ec27287962115f.jpg)
(a) Simulation

![](images/609e6d41018add7e57fd642b7b55b28a57199d795301996d9d317fa17eb7fe64.jpg)
(b) Real
Figure 3: An overview of the robotic grasping task. A static manipulator arm attempts to grasp objects placed in bins in front of it. In simulation, the objects are procedurally generated.

Table 1: Large-Scale Simulated Robotic Grasping Results
| Blocking Actions | Timestep Penalty | Previous Action | VTG | Grasp Success | Episode Duration | Action Completion |
| --- | --- | --- | --- | --- | --- | --- |
| Yes | No | No | No | 92.72% ± 1.10% | 132.09s ± 5.70s | 92.33% ± 1.476% |
| Yes | Yes | No | No | 91.53% ± 1.04% | 120.81s ± 9.13s | 89.53% ± 2.267% |
| No | No | No | No | 84.11% ± 7.61% | 122.15s ± 14.6s | 43.4% ± 22.41% |
| No | Yes | No | No | 83.77% ± 9.27% | 97.16s ± 6.28s | 34.69% ± 16.80% |
| No | Yes | Yes | No | 92.55% ± 4.39% | 82.98s ± 5.74s | 47.28% ± 14.25% |
| No | Yes | No | Yes | 92.70% ± 1.42% | 87.15s ± 4.80s | 50.09% ± 14.25% |
| No | Yes | Yes | Yes | 93.49% ± 1.04% | 90.75s ± 4.15s | 49.19% ± 14.98% |
# 4.2 CONCURRENT QT-OPT ON LARGE-SCALE ROBOTIC GRASPING

Next, we evaluate the scalability of our approach on a practical robotic grasping task. We simulate a 7 DoF arm with an over-the-shoulder camera, where a bin in front of the robot is filled with procedurally generated objects to be picked up by the robot. A binary reward is assigned if an object is lifted off the bin at the end of an episode. We train a policy with QT-Opt (Kalashnikov et al., 2018), a deep $Q$-learning method that utilizes the cross-entropy method (CEM) to support continuous actions. In the blocking mode, a displacement action is executed until completion: the robot uses a closed-loop controller to fully execute an action, decelerating and coming to rest before observing the next state. In the concurrent mode, an action is triggered and executed without waiting, which means that the next state is observed while the robot remains in motion. Further details of the algorithm and experimental setup are shown in Figure 3 and explained in Appendix A.4.2.

Table 1 summarizes the performance for blocking and concurrent modes, comparing unconditioned models against the concurrent knowledge models described in Section 3.6. Our results indicate that the VTG model acting in concurrent mode is able to recover the task performance of the unconditioned baseline with blocking execution, while the unconditioned baseline acting in concurrent mode suffers some performance loss. In addition to the success rate of the grasping policy, we also evaluate the speed and smoothness of the learned policy behavior. Concurrent knowledge models are able to learn faster trajectories: episode duration, which measures the total amount of wall time used for an episode, is reduced by $31.3\%$ when comparing concurrent knowledge models with blocking unconditioned models, even those that utilize a shaped timestep penalty that rewards faster policies.
When switching from blocking execution mode to concurrent execution mode, we see a significantly lower action completion, measured as the ratio of executed gripper displacement to commanded displacement, which, as expected, indicates a switch to a concurrent environment. The concurrent knowledge models have higher action completion than the unconditioned model in the concurrent environment, which suggests that the concurrent knowledge models are able to utilize more efficient motions, resulting in smoother trajectories. The qualitative benefits of faster, smoother trajectories are drastically apparent when viewing video playback of learned policies1.

Real robot results In addition, we evaluate qualitative policy behaviors of concurrent models compared to blocking models on a real-world robot grasping task, which is shown in Figure 3b. As seen in Table 2, the models achieve comparable grasp success, but the concurrent model is $49\%$ faster than the blocking model in terms of policy duration, which measures the total execution time of the policy (this excludes the infrastructure setup and teardown times accounted for in episode duration, which cannot be optimized with concurrent actions). In addition, the concurrent VTG model is able to execute smoother and faster trajectories than the blocking unconditioned baseline, which is clear in video playback1.

Table 2: Real-World Robotic Grasping Results.

| Blocking Actions | VTG | Grasp Success | Policy Duration |
| --- | --- | --- | --- |
| Yes | No | 81.43% | 22.60s ± 12.99s |
| No | Yes | 68.60% | 11.52s ± 7.272s |

# 5 DISCUSSION AND FUTURE WORK

We presented a theoretical framework to analyze concurrent systems where an agent must "think while moving". Viewing this formulation through the lens of continuous-time value-based reinforcement learning, we showed that by considering concurrent knowledge about the time delay $t_{AS}$ and the previous action, the concurrent continuous-time and discrete-time Bellman operators remained contractions and thus maintained $Q$-learning convergence guarantees. While more information than $t_{AS}$ and the previous action may be helpful, we showed that $t_{AS}$ and the previous action (and different representations of this information) are the sole theoretical requirements for good learning performance. In addition, we introduced Vector-to-go (VTG), which incorporates the remaining previous action to be executed, as an alternative representation of the concurrent-system information that the previous action and $t_{AS}$ contain.

Our theoretical findings were supported by experimental results on $Q$-learning models acting in simulated control tasks that were engineered to support concurrent action execution. We conducted large-scale ablation studies on toy concurrent 3D Cartpole and Pendulum environments, across model parameters as well as concurrent environment parameters. Our results indicated that VTG is the least hyperparameter-sensitive representation, and was able to recover blocking learning performance in concurrent settings. We extended these results to a complex, concurrent, large-scale simulated robotic grasping task, where we showed that the concurrent models were able to recover the success rate of the blocking execution baseline while acting $31.3\%$ faster. We analyzed the qualitative benefits of concurrent models through a real-world robotic grasping task, where we showed that a concurrent model with comparable grasp success to a blocking baseline was able to learn smoother trajectories that were $49\%$ faster.

An interesting topic to explore in future work is the possibility of increased data efficiency when training on off-policy data from various latency regimes.
Another natural extension of this work is to evaluate DRL methods beyond value-based algorithms, such as on-policy learning and policy gradient approaches. Finally, concurrent methods may allow robotic control in dynamic environments where it is not possible for the robot to stop the environment before computing the action. In these scenarios, robots must truly think and act at the same time.

# REFERENCES

Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y. Ng. An application of reinforcement learning to aerobatic helicopter flight. In Bernhard Schölkopf, John C. Platt, and Thomas Hofmann (eds.), NIPS, pp. 1-8. MIT Press, 2006. ISBN 0-262-19568-2. URL http://dblp.uni-trier.de/db/conf/nips/nips2006.html#AbbeelCQN06.
Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun, and Thomas Brox. Motion perception in reinforcement learning with dynamic objects. In CoRL, volume 87 of Proceedings of Machine Learning Research, pp. 156-168. PMLR, 2018. URL http://dblp.uni-trier.de/db/conf/corl/corl2018.html#AmiranashviliDK18.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016. URL http://arxiv.org/abs/1606.01540.
Rémi Coulom. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2002.
Nicolas Cruz, Kenzo Lobos-Tsunekawa, and Javier Ruiz del Solar. Using convolutional neural networks in robots with limited computational resources: Detecting NAO robots while playing soccer. CoRR, abs/1706.06702, 2017. URL http://dblp.uni-trier.de/db/journals/corr/corr1706.html#CruzLR17.

Kenji Doya. Reinforcement learning in continuous time and space. Neural Computation, 12(1):219-245, 2000. URL http://dblp.uni-trier.de/db/journals/neco/neco12.html#Doya00.
Vlad Firoiu, Tina Ju, and Joshua Tenenbaum. At Human Speed: Deep Reinforcement Learning with Action Delay. arXiv e-prints, October 2018.
Nicolas Frémaux, Henning Sprekeler, and Wulfram Gerstner. Reinforcement learning using a continuous time actor-critic framework with spiking neurons. PLoS Computational Biology, 9:e1003024, 04 2013. doi: 10.1371/journal.pcbi.1003024.
Sergio Guadarrama, Anoop Korattikara, Oscar Ramirez, Pablo Castro, Ethan Holly, Sam Fishman, Ke Wang, Ekaterina Gonina, Chris Harris, Vincent Vanhoucke, et al. TF-Agents: A library for reinforcement learning in TensorFlow, 2018.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algorithms and applications. CoRR, abs/1812.05905, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1812.html#abs-1812-05905.
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, and Sergey Levine. QT-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation. CoRR, abs/1806.10293, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1806.html#abs-1806-10293.
H. J. Kappen. Path integrals and symmetry breaking for optimal control theory. Journal of Statistical Mechanics: Theory and Experiment, 2005(11):P11011, November 2005. doi: 10.1088/1742-5468/2005/11/p11011. URL https://doi.org/10.1088%2F1742-5468%2F2005%2F11%2Fp11011.
Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, and Le Song. Learning temporal point processes via reinforcement learning. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, pp. 10804-10814, USA, 2018. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=3327546.3327737.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop, 2013. URL http://arxiv.org/abs/1312.5602.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, February 2015. ISSN 0028-0836. URL http://dx.doi.org/10.1038/nature14236.
Rémi Munos and Paul Bourgine. Reinforcement learning for continuous stochastic control problems. In M. I. Jordan, M. J. Kearns, and S. A. Solla (eds.), Advances in Neural Information Processing Systems 10, pp. 1029-1035. MIT Press, 1998. URL http://papers.nips.cc/paper/1404-reinforcement-learning-for-continuous-stochastic-control-problems.pdf.
Alexander Neitz, Giambattista Parascandolo, Stefan Bauer, and Bernhard Schölkopf. Adaptive skip intervals: Temporal abstraction for recurrent dynamical models. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 9816-9826. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/8188-adaptive-skip-intervals-temporal-abstraction-for-recurrent-dynamical-models.pdf.
OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Józefowicz, Bob McGrew, Jakub W. Pachocki, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, and Wojciech Zaremba. Learning dexterous in-hand manipulation. CoRR, abs/1808.00177, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1808.html#abs-1808-00177.
Sheldon M. Ross, John J. Kelly, Roger J. Sullivan, William James Perry, Donald Mercer, Ruth M. Davis, Thomas Dell Washburn, Earl V. Sager, Joseph B. Boyce, and Vincent L. Bristow. Stochastic Processes, volume 2. Wiley, New York, 1996.
Erik Schuitema, Lucian Busoniu, Robert Babuška, and Pieter P. Jonker. Control delay in reinforcement learning for real-time dynamic systems: A memoryless approach. 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3226-3231, 2010.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484-489, January 2016. URL http://dx.doi.org/10.1038/nature16961.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, March 1998. ISBN 0262193981. URL http://www.amazon.ca/exec/obidos/redirect?tag=citeulike09-20&path=ASIN/0262193981.
Corentin Tallec, Léonard Blier, and Yann Ollivier. Making Deep Q-learning Methods Robust to Time Discretization. arXiv e-prints, January 2019.
Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, and Vincent Vanhoucke. Sim-to-Real: Learning Agile Locomotion For Quadruped Robots. arXiv e-prints, April 2018.
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. DeepMind Control Suite. CoRR, abs/1801.00690, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1801.html#abs-1801-00690.
Evangelos Theodorou, Jonas Buchli, and Stefan Schaal. Reinforcement learning of motor skills in high dimensions: A path integral approach. pp. 2397-2403, 06 2010. doi: 10.1109/ROBOT.2010.5509336.
Utkarsh Upadhyay, Abir De, and Manuel Gomez-Rodríguez. Deep reinforcement learning of marked temporal point processes.
In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, pp. 3172-3182, USA, 2018. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=3327144.3327238.
Eleni Vasilaki, Nicolas Frémaux, Robert Urbanczik, Walter Senn, and Wulfram Gerstner. Spike-based reinforcement learning in continuous state and action space: When policy gradient methods fail. PLoS Computational Biology, 5:e1000586, 12 2009. doi: 10.1371/journal.pcbi.1000586.
Thomas J. Walsh, Ali Nouri, Lihong Li, and Michael L. Littman. Planning and learning in environments with delayed feedback. In Joost N. Kok, Jacek Koronacki, Ramón López de Mantaras, Stan Matwin, Dunja Mladenic, and Andrzej Skowron (eds.), ECML, volume 4701 of Lecture Notes in Computer Science, pp. 442-453. Springer, 2007. ISBN 978-3-540-74957-8. URL http://dblp.uni-trier.de/db/conf/ecml/ecml2007.html#WalshNLL07.

# A APPENDIX

# A.1 DEFINING BLOCKING BELLMAN OPERATORS

As introduced in Section 3.5, we define a continuous-time $Q$-function estimator with concurrent actions.
+ +$$ +\begin{array}{l} \hat {Q} (s (t), a _ {i - 1}, a _ {i}, t, H) = \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r \left(s \left(t ^ {\prime}\right), a _ {i - 1} \left(t ^ {\prime}\right)\right) d t ^ {\prime} + (10) \\ \int_ {t ^ {\prime \prime} = t + t _ {A S}} ^ {t ^ {\prime \prime} = t + H} \gamma^ {t ^ {\prime \prime} - t} r \left(s \left(t ^ {\prime \prime}\right), a _ {i} \left(t ^ {\prime \prime}\right)\right) d t ^ {\prime \prime} + \gamma^ {H} V (s (t + H)) (11) \\ = \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r \left(s \left(t ^ {\prime}\right), a _ {i - 1} \left(t ^ {\prime}\right)\right) d t ^ {\prime} + (12) \\ \gamma^ {t _ {A S}} \int_ {t ^ {\prime \prime} = t + t _ {A S}} ^ {t ^ {\prime \prime} = t + H} \gamma^ {t ^ {\prime \prime} - t - t _ {A S}} r \left(s \left(t ^ {\prime \prime}\right), a _ {i} \left(t ^ {\prime \prime}\right)\right) d t ^ {\prime \prime} + \gamma^ {H} V (s (t + H)) (13) \\ = \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r \left(s \left(t ^ {\prime}\right), a _ {i - 1} \left(t ^ {\prime}\right)\right) d t ^ {\prime} + (14) \\ \gamma^ {t _ {A S}} \left[ \int_ {t ^ {\prime \prime} = t + t _ {A S}} ^ {t ^ {\prime \prime} = t + H} \gamma^ {t ^ {\prime \prime} - t - t _ {A S}} r \left(s \left(t ^ {\prime \prime}\right), a _ {i} \left(t ^ {\prime \prime}\right)\right) d t ^ {\prime \prime} + \gamma^ {H - t _ {A S}} V (s (t + H)) \right] (15) \\ \end{array} +$$ + +We observe that the second part of this equation (after $\gamma^{t_{AS}}$ ) is itself a $Q$ -function at time $t + t_{AS}$ . 
Since the future state, action, and reward values at $t + t_{AS}$ are not known at time $t$ , we take the following expectation: + +$$ +Q (s (t), a _ {i - 1}, a _ {i}, t, H) = \int_ {t ^ {\prime} = t} ^ {t ^ {\prime} = t + t _ {A S}} \gamma^ {t ^ {\prime} - t} r \left(s \left(t ^ {\prime}\right), a _ {i - 1} \left(t ^ {\prime}\right)\right) d t ^ {\prime} + \tag {16} +$$ + +$$ +\gamma^ {t _ {A S}} \mathbb {E} _ {s} \hat {Q} (s (t), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}) \tag {17} +$$ + +which indicates that the $Q$ -function in this setting is not just the expected sum of discounted future rewards, but it corresponds to an expected future $Q$ -function. + +In order to show the discrete-time version of the problem, we parameterize the discrete-time concurrent $Q$ -function as: + +$$ +\hat {Q} \left(s _ {t}, a _ {t - 1}, a _ {t}, t, t _ {A S}, H\right) = r \left(s _ {t}, a _ {t - 1}\right) + \gamma^ {\frac {t _ {A S}}{H}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {t - 1}\right)} r \left(s _ {t + t _ {A S}}, a _ {t}\right) + \tag {18} +$$ + +$$ +\gamma^ {\frac {H}{H}} \mathbb {E} _ {p \left(s _ {t + H} \mid s _ {t + t _ {A S}}, a _ {t}\right)} V \left(s _ {t + H}\right) \tag {19} +$$ + +which with $t_{AS} = 0$ , corresponds to a synchronous environment. + +Using this parameterization, we can rewrite the discrete-time $Q$ -function with concurrent actions as: + +$$ +\hat {Q} \left(s _ {t}, a _ {t - 1}, a _ {t}, t, t _ {A S}, H\right) = r \left(s _ {t}, a _ {t - 1}\right) + \gamma^ {\frac {t _ {A S}}{H}} \left[ \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {t - 1}\right)} r \left(s _ {t + t _ {A S}}, a _ {t}\right) + \right. 
\tag {20}
+$$
+
+$$
+\gamma^ {\frac {H - t _ {A S}}{H}} \mathbb {E} _ {p \left(s _ {t + H} \mid s _ {t + t _ {A S}}, a _ {t}\right)} V \left(s _ {t + H}\right) ] \tag {21}
+$$
+
+$$
+= r \left(s _ {t}, a _ {t - 1}\right) + \gamma^ {\frac {t _ {A S}}{H}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {t - 1}\right)} \hat {Q} \left(s _ {t}, a _ {t}, a _ {t + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) \tag {22}
+$$
+
+# A.2 CONTRACTION PROOFS FOR THE BLOCKING BELLMAN OPERATORS
+
+# Proof of the Discrete-time Blocking Bellman Update
+
+Lemma A.1. The traditional Bellman operator is a contraction, i.e.:
+
+$$
+\| \mathcal {T} ^ {*} Q _ {1} (s, a) - \mathcal {T} ^ {*} Q _ {2} (s, a) \| \leq c \| Q _ {1} (s, a) - Q _ {2} (s, a) \|, \tag {23}
+$$
+
+where $\mathcal{T}^*Q(s,a) = r(s,a) + \gamma \max_{a'}\mathbb{E}_p Q(s',a')$ and $0\leq c\leq 1$.
+
+Proof. In the original formulation, we can show that this is the case as follows:
+
+$$
+\begin{array}{l} \mathcal {T} ^ {*} \mathcal {Q} _ {1} (s, a) - \mathcal {T} ^ {*} \mathcal {Q} _ {2} (s, a) (24) \\ = r (s, a) + \gamma \max _ {a ^ {\prime}} \mathbb {E} _ {p} \left[ Q _ {1} \left(s ^ {\prime}, a ^ {\prime}\right) \right] - r (s, a) - \gamma \max _ {a ^ {\prime}} \mathbb {E} _ {p} \left[ Q _ {2} \left(s ^ {\prime}, a ^ {\prime}\right) \right] (25) \\ = \gamma \max _ {a ^ {\prime}} \mathbb {E} _ {p} \left[ Q _ {1} \left(s ^ {\prime}, a ^ {\prime}\right) - Q _ {2} \left(s ^ {\prime}, a ^ {\prime}\right) \right] (26) \\ \leq \gamma \sup _ {s ^ {\prime}, a ^ {\prime}} \left[ Q _ {1} \left(s ^ {\prime}, a ^ {\prime}\right) - Q _ {2} \left(s ^ {\prime}, a ^ {\prime}\right) \right], (27) \\ \end{array}
+$$
+
+with $0 \leq \gamma \leq 1$ and $||f||_{\infty} = \sup_x[f(x)]$.
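As a sanity check on Lemma A.1, the sup-norm contraction of $\mathcal{T}^*$ can be verified numerically on a small finite MDP. The sketch below is our own illustration (a hypothetical 3-state, 2-action table with deterministic transitions; `bellman` and `sup_dist` are our names, not from the paper):

```python
import random

# Hypothetical finite MDP (3 states, 2 actions, deterministic transitions)
# used only to illustrate the sup-norm contraction of T*.
random.seed(0)
S, A, gamma = 3, 2, 0.9
r = [[random.uniform(-1.0, 1.0) for _ in range(A)] for _ in range(S)]
nxt = [[random.randrange(S) for _ in range(A)] for _ in range(S)]

def bellman(Q):
    # (T*Q)(s, a) = r(s, a) + gamma * max_{a'} Q(s', a') with deterministic p.
    return [[r[s][a] + gamma * max(Q[nxt[s][a]]) for a in range(A)]
            for s in range(S)]

def sup_dist(Q1, Q2):
    # ||Q1 - Q2||_inf over all state-action pairs.
    return max(abs(q1 - q2) for row1, row2 in zip(Q1, Q2)
               for q1, q2 in zip(row1, row2))

Q1 = [[random.uniform(-5.0, 5.0) for _ in range(A)] for _ in range(S)]
Q2 = [[random.uniform(-5.0, 5.0) for _ in range(A)] for _ in range(S)]

# ||T*Q1 - T*Q2||_inf <= gamma * ||Q1 - Q2||_inf, as in Eqn. (27).
assert sup_dist(bellman(Q1), bellman(Q2)) <= gamma * sup_dist(Q1, Q2)
```

Iterating `bellman` therefore drives any two $Q$-tables together at rate $\gamma$, which is the fixed-point argument behind Lemma A.1 and its blocking variants.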
+
+![](images/0452011b06779706a4e9da10a9a99b4b050c2ddebeffa8127c5e68d98f0e5fa2.jpg)
+
+Similarly, we can show that the updated Bellman operators introduced in Section 3.5 are contractions as well.
+
+# Proof of Lemma 3.2
+
+Proof.
+
+$$
+\begin{array}{l} \mathcal {T} _ {c} ^ {*} \mathcal {Q} _ {1} \left(s _ {t}, a _ {i - 1}, a _ {i}, t, t _ {A S}, H\right) - \mathcal {T} _ {c} ^ {*} \mathcal {Q} _ {2} \left(s _ {t}, a _ {i - 1}, a _ {i}, t, t _ {A S}, H\right) (28) \\ = r \left(s _ {t}, a _ {i - 1}\right) + \gamma^ {\frac {t _ {A S}}{H}} \max _ {a _ {i + 1}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {i - 1}\right)} Q _ {1} \left(s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) (29) \\ - r \left(s _ {t}, a _ {i - 1}\right) - \gamma^ {\frac {t _ {A S}}{H}} \max _ {a _ {i + 1}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {i - 1}\right)} Q _ {2} \left(s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) (30) \\ = \gamma^ {\frac {t _ {A S}}{H}} \max _ {a _ {i + 1}} \mathbb {E} _ {p \left(s _ {t + t _ {A S}} \mid s _ {t}, a _ {i - 1}\right)} \left[ Q _ {1} \left(s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) - Q _ {2} \left(s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) \right] (31) \\ \leq \gamma^ {\frac {t _ {A S}}{H}} \sup _ {s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}} \left[ Q _ {1} \left(s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) - Q _ {2} \left(s _ {t}, a _ {i}, a _ {i + 1}, t + t _ {A S}, t _ {A S ^ {\prime}}, H - t _ {A S}\right) \right] (32) \\ \end{array}
+$$
+
+![](images/58aeb2df35b5eec6bfdf0458f41fa929098fef88b78a91cad94ff97da529b5c1.jpg)
+
+# Proof of Lemma 3.1
+
+Proof.
To prove that the continuous-time Bellman operator is a contraction, we can follow the discrete-time proof, from which it follows:
+
+$$
+\begin{array}{l} \mathcal {T} _ {c} ^ {*} \mathcal {Q} _ {1} (s (t), a _ {i - 1}, a _ {i}, t, t _ {A S}) - \mathcal {T} _ {c} ^ {*} \mathcal {Q} _ {2} (s (t), a _ {i - 1}, a _ {i}, t, t _ {A S}) (33) \\ = \gamma^ {t _ {A S}} \max _ {a _ {i + 1}} \mathbb {E} _ {p} \left[ Q _ {1} (s (t), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}) - Q _ {2} (s (t), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}) \right] (34) \\ \leq \gamma^ {t _ {A S}} \sup _ {s (t), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}} \left[ Q _ {1} (s (t), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}) - Q _ {2} (s (t), a _ {i}, a _ {i + 1}, t + t _ {A S}, H - t _ {A S}) \right] (35) \\ \end{array}
+$$
+
+![](images/82606af0f762894d39f7d3f401497385f57753de47fd1c9677f0bca7c34c3321.jpg)
+
+# A.3 CONCURRENT KNOWLEDGE REPRESENTATION
+
+We analyze three different representations of concurrent knowledge in discrete-time concurrent environments, described in Section 3.6. The previous action $a_{t-1}$ is the action that the agent executed at the previous timestep. The action selection time $t_{AS}$ measures how long action selection takes; it can be represented as either a categorical or a continuous variable, and in our experiments, which take advantage of a bounded latency regime, we normalize it using these known bounds. Vector-to-go (VTG) is a feature that combines $a_{t-1}$ and $s_t$ by encoding the remaining amount of $a_{t-1}$ left to execute. See Figure 5 for a visual comparison.
+
+We note that $a_{t-1}$ is available across the vast majority of environments and is easy to obtain.
Using $t_{AS}$, which encompasses state capture, communication latency, and policy inference, relies on having some knowledge of the concurrent properties of the system. Calculating VTG requires having access to some measure of action completion at the exact moment when state is observed. When utilizing a first-order control action space, such as joint angle or desired pose, VTG is easily computable if proprioceptive state is measured and synchronized with state observation. In these cases, VTG is an alternate representation of the same information encapsulated by $a_{t-1}$ and the current state.
+
+![](images/3d6d87a1763cbd059e569f686139e9e8711e2784690feb1f3245eb4337f20678.jpg)
+Figure 4: The execution order of the different stages is shown relative to the sampling period $H$ as well as the latency $t_{AS}$. (a): In "blocking" environments, state capture and policy inference are assumed to be instantaneous. (b): In "concurrent" environments, state capture and policy inference are assumed to proceed concurrently with action execution.
+
+![](images/f2b1566e00c0d88a9be530c05b74011ad24679488b987639f3720df70f48307a.jpg)
+Figure 5: Concurrent knowledge representations can be visualized through an example of a 2-D pointmass discrete-time toy task. Vector-to-go represents the remaining action that may be executed when the current state $s_t$ is observed. Previous action represents the full commanded action from the previous timestep.
+
+# A.4 EXPERIMENT IMPLEMENTATION DETAILS
+
+# A.4.1 CARTPOLE AND PENDULUM ABLATION STUDIES
+
+Here, we describe the implementation details of the toy-task Cartpole and Pendulum experiments in Section 4.1.
+
+For the environments, we use the 3D MuJoCo implementations of the Cartpole-Swingup and Pendulum-Swingup tasks in DeepMind Control Suite (Tassa et al., 2018). We use discretized action spaces for first-order control of joint position actuators.
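As a concrete illustration of the vector-to-go feature described in Section A.3 above, VTG for a first-order (displacement) action space can be sketched as follows; this is a hypothetical example with our own names (`vtg`, `prev_pos`), not code from the paper:

```python
def vtg(prev_pos, prev_action, measured_pos):
    """Vector-to-go: the portion of the previous displacement action
    a_{t-1} that remains to be executed at the moment state s_t is
    observed (first-order control, e.g. commanded position deltas)."""
    target = [p + a for p, a in zip(prev_pos, prev_action)]
    return [t - m for t, m in zip(target, measured_pos)]

# 2-D pointmass illustration: a displacement of (1.0, 0.0) was commanded
# from (0.0, 0.0); when the next state is captured the mass has only
# reached (0.5, 0.0), so half of the commanded action remains.
assert vtg([0.0, 0.0], [1.0, 0.0], [0.5, 0.0]) == [0.5, 0.0]
```

When the previous action has fully executed by observation time, VTG is zero and the representation reduces to the synchronous case.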
For the observation space of both tasks, we use the default state space of ground-truth positions and velocities.
+
+For the baseline learning algorithms, we use the TensorFlow Agents (Guadarrama et al., 2018) implementations of a Deep $Q$-Network agent, which utilizes a Feed-forward Neural Network (FNN), and a Deep $Q$-Recurrent Neural Network agent, which utilizes a Long Short-Term Memory (LSTM) network. Learning parameters such as learning_rate, LSTM_size, and fc_layer_size were selected through hyperparameter sweeps.
+
+To approximate different difficulty levels of latency in concurrent environments, we utilize different parameter combinations for action execution steps and action selection steps $(t_{AS})$. The number of action execution steps is selected from $\{0\mathrm{ms}, 5\mathrm{ms}, 25\mathrm{ms}, 50\mathrm{ms}\}$ once at environment initialization. $t_{AS}$ is selected from $\{0\mathrm{ms}, 5\mathrm{ms}, 10\mathrm{ms}, 25\mathrm{ms}, 50\mathrm{ms}\}$ either once at environment initialization or repeatedly at every episode reset. The selected $t_{AS}$ is implemented in the environment as additional physics steps that update the system during simulated action selection.
+
+Frame-stacking parameters affect the observation space by saving previous observations and actions. The number of previous actions to store and the number of previous observations to store are independently selected from the range [0, 4]. Concurrent knowledge parameters, as described in Section 4, include whether to use VTG and whether to use $t_{AS}$; the previous action is already provided through the frame-stacking feature of storing previous actions. Finally, the number of actions into which the continuous space is discretized is selected from the range [3, 8].
+
+# A.4.2 LARGE SCALE ROBOTIC GRASPING
+
+Simulated Environment. We simulate a 7 DoF arm with an over-the-shoulder camera (see Figure 3a).
A bin in front of the robot is filled with procedurally generated objects to be picked up by the robot, and a sparse binary reward is assigned if an object is lifted off the bin at the end of an episode. States are represented in the form of RGB images, and actions are continuous Cartesian displacements of the gripper 3D position and yaw. In addition, the policy commands discrete gripper open and close actions and may terminate an episode. In blocking mode, a displacement action is executed until completion: the robot uses a closed-loop controller to fully execute an action, decelerating and coming to rest before observing the next state. In concurrent mode, an action is triggered and executed without waiting, which means that the next state is observed while the robot remains in motion. It should be noted that in blocking mode, action completion is close to $100\%$ unless the gripper moves are blocked by contact with the environment or objects; this causes average blocking mode action completion to be lower than $100\%$, as seen in Table 1.
+
+Real Environment. Similar to the simulated setup, we use a 7 DoF robotic arm with an over-the-shoulder camera (see Figure 3b). The main difference in the physical setup is that objects are selected from a set of common household objects.
+
+Algorithm. We train a policy with QT-Opt (Kalashnikov et al., 2018), a Deep $Q$-Learning method that utilizes the Cross-Entropy Method (CEM) to support continuous actions. A Convolutional Neural Network (CNN) is trained to learn the $Q$-function conditioned on an image input along with a CEM-sampled continuous control action. At policy inference time, the agent sends an image of the environment and batches of CEM-sampled actions to the CNN $Q$-network. The highest-scoring action is then used as the policy's selected action. Compared to the formulation in Kalashnikov et al.
(2018), we also add a concurrent knowledge feature of VTG and/or previous action $a_{t-1}$ as additional input to the $Q$-network. Algorithm 1 shows the modified QT-Opt procedure.
+
+Algorithm 1: QT-Opt with Concurrent Knowledge
+Initialize replay buffer $D$
+Initialize random start state and receive image $o_0$
+Initialize concurrent knowledge features $c_{0} = [VTG_{0} = 0, a_{t - 1} = 0, t_{AS} = 0]$
+Initialize environment state $s_t = [o_0, c_0]$
+Initialize action-value function $Q(s,a)$ with random weights $\theta$
+Initialize target action-value function $\hat{Q}(s,a)$ with weights $\hat{\theta} = \theta$
+while training do
+  for $t = 1, T$ do
+    Select random action $a_{t}$ with probability $\epsilon$, else $a_{t} = \mathbf{CEM}(Q, s_t; \theta)$
+    Execute action in environment, receive $o_{t + 1}, c_t, r_t$
+    Process necessary concurrent knowledge features $c_{t}$ such as VTG, $a_{t - 1}$ or $t_{AS}$
+    Set $s_{t + 1} = [o_{t + 1}, c_t]$
+    Store transition $(s_t, a_t, s_{t + 1}, r_t)$ in $D$
+    if episode terminates then
+      Reset $s_{t + 1}$ to a random reset initialization state
+      Reset $c_{t + 1}$ to 0
+    end
+    Sample batch of transitions from $D$
+    for each transition $(s_i, a_i, s_{i + 1}, r_i)$ in batch do
+      if terminal transition then
+        $y_{i} = r_{i}$
+      else
+        Select $\hat{a}_{i + 1} = \mathbf{CEM}(\hat{Q}, s_{i + 1}; \hat{\theta})$
+        $y_{i} = r_{i} + \gamma \hat{Q}(s_{i + 1}, \hat{a}_{i + 1})$
+      end
+      Perform SGD on $(y_{i} - Q(s_{i}, a_{i}; \theta))^{2}$ with respect to $\theta$
+    end
+    Update target network weights $\hat{\theta}$ with $\theta$ periodically
+  end
+end
+
+For simplicity, the algorithm is described as if run synchronously on a single machine. In practice, episode generation, Bellman updates, and $Q$-fitting are distributed across many machines and done asynchronously; refer to Kalashnikov et al. (2018) for more details. Standard DRL hyperparameters such as the random exploration probability $(\epsilon)$, reward discount $(\gamma)$, and learning rate are tuned through a hyperparameter sweep.
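The $\mathbf{CEM}(Q, s; \theta)$ subroutine used above can be sketched as a generic cross-entropy-method maximizer. The toy quadratic below stands in for the learned $Q$-network; this is our own illustration over a hypothetical 1-D action space (`cem_argmax` is our name, not the paper's implementation):

```python
import random
import statistics

def cem_argmax(q_value, n_iters=5, n_samples=64, n_elites=6, seed=0):
    """Cross-Entropy Method over a 1-D action: repeatedly sample actions
    from a Gaussian, keep the highest-scoring (elite) samples, and refit
    the Gaussian to the elites; return the final mean."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(n_iters):
        actions = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        elites = sorted(actions, key=q_value, reverse=True)[:n_elites]
        mu = statistics.mean(elites)
        sigma = statistics.stdev(elites) + 1e-6  # keep a little spread
    return mu

# Toy stand-in for Q(s, .): concave in the action, maximized at a = 0.3.
best = cem_argmax(lambda a: -(a - 0.3) ** 2)
assert abs(best - 0.3) < 0.1
```

In QT-Opt the same loop runs over batches of high-dimensional gripper actions scored by the CNN $Q$-network rather than a scalar toy function.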
For the time-penalized baselines in Table 1, we manually tune a timestep penalty that returns a fixed negative reward at every timestep. Empirically, we find that a timestep penalty of $-0.01$, relative to a binary sparse reward of 1.0, encourages faster policies. For the non-penalized baselines, we set the timestep penalty to 0.
+
+# A.5 FIGURES
+
+See Figure 6 and Figure 7.
+
+![](images/1b345b5e382b34eaa4746608badf1a9130de3a7cf339e21f8e507ee488671a0a.jpg)
+Figure 6: Environment rewards achieved by DQN with different network architectures [either a feedforward network (FNN) or a Long Short-Term Memory (LSTM) network] and different concurrent knowledge features [Unconditioned, vector-to-go (VTG), or previous action and $t_{AS}$] on the concurrent Cartpole task for every hyperparameter in a sweep, sorted in decreasing order. Providing the critic with VTG information leads to more robust performance across all hyperparameters. This figure is a larger version of Figure 2a.
+
+![](images/a355ce081e319698cd7f026fc2fc7734298ee37470b270c18573e512b0b38dff.jpg)
+Figure 7: Environment rewards achieved by DQN with an FNN and different frame-stacking and concurrent knowledge parameters on the concurrent Pendulum task for every hyperparameter in a sweep, sorted in decreasing order.
\ No newline at end of file diff --git a/thinkingwhilemovingdeepreinforcementlearningwithconcurrentcontrol/images.zip b/thinkingwhilemovingdeepreinforcementlearningwithconcurrentcontrol/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..aeea8c1970dba6388333b0d4c5a79270f9988855 --- /dev/null +++ b/thinkingwhilemovingdeepreinforcementlearningwithconcurrentcontrol/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b0d8d6eb3ec5673c793e7a4847b549bed36dd6db7d5c952147cbfb7aece669f +size 632260 diff --git a/thinkingwhilemovingdeepreinforcementlearningwithconcurrentcontrol/layout.json b/thinkingwhilemovingdeepreinforcementlearningwithconcurrentcontrol/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..af1850a08ebcc0c10370ae702f83346de70b9be6 --- /dev/null +++ b/thinkingwhilemovingdeepreinforcementlearningwithconcurrentcontrol/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82264188bb7b00e0a9a2cc213d7beef066d0ca321ce46d856fc3addf44ca861d +size 531018 diff --git a/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_content_list.json b/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8211aa3f822b5017f2c2d7cfef1b53908bbd640d --- /dev/null +++ b/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6058e43999f60803362aff63ee3a20534bcadbe26c2e9a173eaff8dfe5c582f0 +size 139528 diff --git a/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_model.json b/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_model.json new file mode 100644 index 0000000000000000000000000000000000000000..76807a178e41ccbf4b45ab5e5805dd5bec1f4a2b --- 
/dev/null +++ b/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ca770545a6e0839b15b7cf619418eebaf5c8c4ce7d59fb147e3349dd7548a30 +size 169863 diff --git a/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_origin.pdf b/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..66b69350e7b85ac312cdfa127224afe9cda4c3b2 --- /dev/null +++ b/torelieveyourheadacheoftraininganmrftakeadvil/8b226944-c111-4d91-8b86-6cc6f4b4f819_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41a6ec96d9e78038ef9ad2aae21085500b71c5392d1dbbecaecd791dc3cfd021 +size 697305 diff --git a/torelieveyourheadacheoftraininganmrftakeadvil/full.md b/torelieveyourheadacheoftraininganmrftakeadvil/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bba83ce4824d612b7f665e0ab845cb32010389ae --- /dev/null +++ b/torelieveyourheadacheoftraininganmrftakeadvil/full.md @@ -0,0 +1,665 @@ +# TO RELIEVE YOUR HEADACHE OF TRAINING AN MRF, TAKE ADVIL + +Chongxuan Li*, Chao Du*, Kun Xu*, Max Welling†, Jun Zhu*, Bo Zhang* + +{chongxuanli1991,duchao0726,kunxu.thu}@gmail.com, + +M.Welling@uva.nl, {dcszj, dcszb}@mail.tsinghua.edu.cn + +# ABSTRACT + +We propose a black-box algorithm called Adversarial Variational Inference and Learning (AdVIL) to perform inference and learning in a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. 
On one hand, compared to the contrastive divergence, AdVIL requires minimal assumptions about the model structure and can deal with a broader family of MRFs. On the other hand, compared to existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results. + +# 1 INTRODUCTION + +Markov random fields (MRFs) find applications in a variety of machine learning areas (Krähenbuhl & Koltun, 2011; Salakhutdinov & Larochelle, 2010; Lafferty et al., 2001). In particular, one famous example is conditional random fields (Lafferty et al., 2001), a conditional version of MRFs that was developed to address the limitations (e.g., local dependency and label bias) of directed models for sequential data (e.g., hidden Markov models and other discriminative Markov models based on directed graphical models). However, the inference and learning of general MRFs are challenging due to the presence of a global normalizing factor, i.e. partition function, especially when latent variables are present. Extensive efforts have been devoted to developing approximate methods. On one hand, sample-based methods (Neal, 1993) and variational approaches (Jordan et al., 1999; Welling & Sutton, 2005; Salakhutdinov & Larochelle, 2010) are proposed to infer the latent variables. On the other hand, extensive work (Meng & Wong, 1996; Neal, 2001; Hinton, 2002; Tieleman, 2008; Wainwright et al., 2005; Wainwright & Jordan, 2006) has been done to estimate the partition function. Among these methods, contrastive divergence (Hinton, 2002) is proven effective in certain types of models. + +Most of the existing methods highly depend on the model structure and require model-specific analysis in new applications, which makes it important to develop black-box inference and learning methods. 
Previous work (Ranganath et al., 2014; Schulman et al., 2015) shows the ability to automatically infer the latent variables and obtain gradient estimate in directed models. However, there is no black-box learning method for undirected models except the recent work of NVIL (Kuleshov & Ermon, 2017). + +NVIL introduces a variational distribution and derives an upper bound of the partition function in a general MRF, in the same spirit as amortized inference (Kingma & Welling, 2013; Rezende et al., 2014; Mnih & Gregor, 2014) for directed models. NVIL has several advantages over existing methods, including the ability of black-box learning, tracking the partition function during training and getting approximate samples efficiently during testing. However, NVIL also comes with two disadvantages: (1) it leaves the inference problem of MRFs unsolved1 and only trains simple MRFs with tractable + +posteriors, and (2) the upper bound of the partition function can be underestimated (Kuleshov & Ermon, 2017), resulting in sub-optimal solutions on high-dimensional data. + +We propose Adversarial Variational Inference and Learning (AdVIL) to relieve some headache of learning an MRF model. AdVIL is a black-box inference and learning method that partly solves the two problems of NVIL and retains the advantages of NVIL at the same time. First, AdVIL introduces a variational encoder to infer the latent variables, which provides an upper bound of the free energy. Second, AdVIL introduces a variational decoder for the MRF, which provides a lower bound of the log partition function. The two variational distributions provide an estimate of the negative log-likelihood of the MRF. On one hand, the estimate is in an intuitive form of an approximate contrastive free energy, which is expressed in terms of the expected energy and the (conditional) entropy of the corresponding variational distribution. 
On the other hand, similar to GAN (Goodfellow et al., 2014), the estimate is a minimax optimization problem, which is solved by stochastic gradient descent (SGD) in an alternating manner. Theoretically, our algorithm is convergent if the variational decoder approximates the model well. This motivates us to introduce an auxiliary variable to enhance the flexibility of the variational decoder, whose entropy is approximated by the third variational trick. + +We evaluate AdVIL in various undirected generative models, including restricted Boltzmann machines (RBM) (Ackley et al., 1985), deep Boltzmann machines (DBM) (Salakhutdinov & Hinton, 2009), and Gaussian restricted Boltzmann machines (GRBM) (Hinton & Salakhutdinov, 2006), on several real datasets. We empirically demonstrate that (1) compared to the black-box NVIL (Kuleshov & Ermon, 2017) method, AdVIL provides a tighter estimate of the log partition function and achieves much better log-likelihood results; and (2) compared to contrastive divergence based methods (Hinton, 2002; Welling & Sutton, 2005), AdVIL can deal with a broader family of MRFs without model-specific analysis and obtain better results when the model structure gets complex as in DBM. + +# 2 BACKGROUND + +We consider a general case where the model consists of both visible variables $v$ and latent variables $h$ . An MRF defines the joint distribution over $v$ and $h$ as $P(v, h) = \frac{e^{-\mathcal{E}(v, h)}}{\mathcal{Z}}$ , where $\mathcal{E}$ denotes the associated energy function that assigns a scalar value for a given configuration of $(v, h)$ and $\mathcal{Z}$ is the partition function such that $\mathcal{Z} = \int_{v, h} e^{-\mathcal{E}(v, h)} dv dh$ . + +Let $P_{\mathcal{D}}(v)$ denote the empirical distribution of the training data. 
Minimizing the negative log-likelihood (NLL) of an MRF is a commonly chosen learning criterion and it is given by: + +$$ +\mathcal {L} (\theta) := - \mathbb {E} _ {P _ {\mathcal {D}} (v)} \left[ \log \int_ {h} \frac {e ^ {- \mathcal {E} (v , h)}}{\mathcal {Z}} d h \right], \tag {1} +$$ + +where $\theta$ denotes the trainable parameters in $\mathcal{E}$ . Further, the gradient of $\theta$ is: + +$$ +\nabla_ {\theta} \mathcal {L} (\theta) = \mathbb {E} _ {P _ {\mathcal {D}} (v)} \left[ \nabla_ {\theta} \mathcal {F} (v) \right] - \mathbb {E} _ {P (v)} \left[ \nabla_ {\theta} \mathcal {F} (v) \right], \tag {2} +$$ + +where $\mathcal{F}(v) = -\log \int_h e^{-\mathcal{E}(v,h)} dh$ denotes the free energy and the gradient in Eqn. (2) is the difference of the free energy in two phases. In the first positive phase, the expectation of the free energy under the data distribution is decreased. In the second negative phase, the expectation of the free energy under the model distribution is increased. + +Unfortunately, both the NLL in Eqn. (1) and its gradient in Eqn. (2) are intractable in general for two reasons. First, the integral of the latent variables in Eqn. (1) or equivalently the computation of the free energy in Eqn. (2) is intractable. Second, the computation of the partition function in Eqn. (1) or equivalently the negative phase in Eqn. (2) is intractable. + +Variational inference. Extensive work introduces deterministic approximations for the intractability of inference, including the mean-field approximation (Welling & Hinton, 2002; Salakhutdinov & Hinton, 2009), the Kikuchi and Bethe approximations (Welling & Sutton, 2005) and the recognition model approach (Salakhutdinov & Larochelle, 2010). In this line of work, the intractability of the partition function is addressed using Monte Carlo based methods. + +Contrastive free energy. 
Contrastive divergence (CD) (Hinton, 2002) addresses the intractability of the partition function by approximating the negative phase in Eqn. (2) as follows: + +$$ +\nabla_ {\theta} \mathcal {L} (\theta) = \mathbb {E} _ {P _ {D} (v)} \left[ \nabla_ {\theta} \mathcal {F} (v) \right] - \mathbb {E} _ {P _ {C D} (v)} \left[ \nabla_ {\theta} \mathcal {F} (v) \right], \tag {3} +$$ + +![](images/7171f42b923de592636ed375641909b8bba7ad70bb150e3555f889f0e24a64d2.jpg) +Figure 1: Illustration of the models involved in AdVIL. From left to right: variational encoder $Q(h|v)$ , MRF $P(v, h)$ , variational decoder $q(v, h)$ with a simple prior and $q(v, h)$ with an expressive prior. + +where $P_{CD}(v)$ denotes the empirical distribution obtained by starting from a data point and running several steps of Gibbs sampling according to the model distribution and the free energy $\mathcal{F}(v)$ is assumed to be tractable. Existing methods (Welling & Hinton, 2002; Welling & Sutton, 2005) approximate $\mathcal{F}(v)$ using certain function $\mathcal{G}(v)$ and the gradient of $\theta$ is: + +$$ +\nabla_ {\theta} \mathcal {L} (\theta) \approx \mathbb {E} _ {P _ {\mathcal {D}} (v)} [ \nabla_ {\theta} \mathcal {G} (v) ] - \mathbb {E} _ {P _ {\mathcal {C D}} (v)} [ \nabla_ {\theta} \mathcal {G} (v) ]. \tag {4} +$$ + +Although these generalized methods exist, it is nontrivial to extend CD-based methods to general MRFs because the Gibbs sampling procedure is highly dependent on the model structure. + +Black-box learning. The recent work of NVIL (Kuleshov & Ermon, 2017) addresses the intractability of the partition function in a black-box manner via a variational upper bound of the partition function: + +$$ +\mathbb {E} _ {q (v)} \left[ \frac {\tilde {P} (v) ^ {2}}{q (v) ^ {2}} \right] \geq \mathcal {Z} ^ {2}, \tag {5} +$$ + +where $\tilde{P}(v) = e^{-\mathcal{F}(v)}$ is the unnormalized marginal distribution on $v$ and $q(v)$ is a neural variational distribution. 
As a black-box learning method, NVIL potentially allows application to broader model families and improves the capabilities of probabilistic programming systems (Carpenter et al., 2017). Though promising, NVIL leaves the intractability of inference in an MRF unsolved, and the bound in Eqn. (5) is of high variance and is easily underestimated (Kuleshov & Ermon, 2017). + +# 3 METHOD + +As stated above, the black-box inference and learning of MRFs are still largely open. In this paper, we make a step towards solving the problems by a new variational approach. For simplicity, we focus on the resulting objective function in this section. See Appendix A for detailed derivation. + +# 3.1 ADVERSARIAL VARIATIONAL INFERENCE AND LEARNING + +First, we rewrite the NLL of the MRF (See an illustration in Fig. 1) as follows: + +$$ +\mathcal {L} (\theta) = - \mathbb {E} _ {P _ {D} (v)} [ - \mathcal {F} (v) ] + \log \mathcal {Z}, \tag {6} +$$ + +where the negative free energy and the log partition function are in the form of a logarithm of an integral. Naturally, we can apply the variational trick (Jordan et al., 1999) twice and approximate the two terms individually. Due to the presence of the minus before the first term in Eqn. (6), the two variational tricks bound the two parts of the NLL in the opposite directions, detailed as below. + +Formally, on one hand, we introduce an approximate posterior for the latent variables $Q(h|v)$ , which is parameterized as a neural variational encoder (See an illustration in Fig. 1), to address the intractability of inference as follows: + +$$ +\mathcal {L} (\theta) \leq \mathbb {E} _ {P _ {D} (v) Q (h | v)} [ \mathcal {E} (v, h) + \log Q (h | v) ] + \log \mathcal {Z} := \mathcal {L} _ {1} (\theta , \phi), \tag {7} +$$ + +where $\phi$ denotes the trainable parameters in $Q(h|v)$ . The upper bound is derived via applying the Jensen inequality and the equality holds if and only if $Q(h|v) = P(h|v)$ for all $v$ . 
In the bound, the first term is the expected energy, which encourages $Q(h|v)$ to infer latent variables that have low values of the energy function $\mathcal{E}(v,h)$, or equivalently high probabilities of $P(v,h)$. The second term corresponds to the negative conditional entropy of $Q(h|v)$, which increases the uncertainty of $Q(h|v)$. In the paper, we denote the conditional entropy of $Q(h|v)$ as $\mathcal{H}(Q) \coloneqq -\mathbb{E}_{P_{\mathcal{D}}(v)Q(h|v)}[\log Q(h|v)]$.
+
+On the other hand, we introduce an approximate sampler $q(v,h)$, which is parameterized by a neural variational decoder (See Fig. 1), to address the intractability of the partition function as follows:
+
+$$
+\mathcal {L} _ {1} (\theta , \phi) \geq \mathbb {E} _ {P _ {\mathcal {D}} (v) Q (h | v)} \left[ \underbrace {\overbrace {\mathcal {E} (v , h)} ^ {\text {energy term}} + \overbrace {\log Q (h | v)} ^ {\text {entropy term}}} _ {\text {Positive Phase}} \right] - \mathbb {E} _ {q (v, h)} \left[ \underbrace {\overbrace {\mathcal {E} (v , h)} ^ {\text {energy term}} + \overbrace {\log q (v , h)} ^ {\text {entropy term}}} _ {\text {Negative Phase}} \right] := \mathcal {L} _ {2} (\theta , \phi , \psi), \tag {8}
+$$
+
+where $\psi$ denotes the trainable parameters in $q(v,h)$. The lower bound is derived via applying the Jensen inequality as well, and the equality holds if and only if $q(v,h) = P(v,h)$. It can be seen that the lower bound given by $q(v,h)$ consists of the entropy (denoted as $\mathcal{H}(q)$) and energy terms, which is similar to the upper bound in Eqn. (7), and the overall objective is in the form of approximate contrastive free energy (Hinton, 2002; Welling & Sutton, 2005). Because the double variational trick bounds the NLL in opposite directions as above, we have a minimax optimization problem:
+
+$$
+\min _ {\theta} \min _ {\phi} \max _ {\psi} \mathcal {L} _ {2} (\theta , \phi , \psi).
\tag {9}
$$

The minimax formulation has been investigated in GAN (Goodfellow et al., 2014), where it is interpreted as an adversarial game between two networks. Following this well-established literature, we name our framework adversarial variational inference and learning (AdVIL).

Note that $\mathcal{L}_2(\theta, \phi, \psi)$ is neither an upper nor a lower bound of $\mathcal{L}(\theta)$, due to the double variational trick. However, we argue that solving the optimization problem in Eqn. (9) is reasonable because (1) it is equivalent to optimizing $\mathcal{L}(\theta)$ under the nonparametric assumption, similarly to GAN (Goodfellow et al., 2014); and (2) it converges to a stationary point of $\mathcal{L}_1(\theta, \phi)$, which is an upper bound of $\mathcal{L}(\theta)$, under a weaker assumption, as stated in the following theoretical analysis.

# 3.2 THEORETICAL ANALYSIS OF ADVIL

In this section, we present our main theoretical results; the proofs can be found in Appendix C. First, similarly to GAN (Goodfellow et al., 2014), we can prove that $\mathcal{L}_2$ is a tight estimate of $\mathcal{L}$ under the nonparametric assumption, as summarized in Proposition 1 in Appendix C.1. However, the nonparametric assumption does not tolerate any approximation error between $P(v,h)$ and $q(v,h)$ during training, and no guarantee can be obtained in finitely many steps. To this end, we establish a convergence theorem based on a weaker assumption that allows a non-zero approximation error before convergence. A key insight is that the angle between $\frac{\partial\mathcal{L}_2(\theta,\phi,\psi)}{\partial\theta}$ and $\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta}$ is acute if $q(v,h)$ approximates $P(v,h)$ well, as stated in the following Lemma 1.

Lemma 1.
For any $(\theta, \phi)$, there exists a symmetric positive definite matrix $H$ such that $\frac{\partial \mathcal{L}_2(\theta, \phi, \psi)}{\partial \theta} = H \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta}$, under the following assumption: $\left\| \sum_{v,h} \delta(v,h) \frac{\partial \mathcal{E}(v,h)}{\partial \theta} \right\|_2 < \left\| \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta} \right\|_2$ if $\left\| \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta} \right\|_2 > 0$, and $\left\| \sum_{v,h} \delta(v,h) \frac{\partial \mathcal{E}(v,h)}{\partial \theta} \right\|_2 = 0$ if $\left\| \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta} \right\|_2 = 0$, where $\delta(v,h) = q(v,h) - P(v,h)$.

Based on Lemma 1 and other assumptions commonly used in the analysis of stochastic optimization (Bottou et al., 2018), AdVIL converges to a stationary point of $\mathcal{L}_1(\theta, \phi)$, as stated in Theorem 1.

Theorem 1. If the optimization problem in Eqn. (9) is solved using stochastic gradient descent, then $(\theta, \phi)$ converges to a stationary point of $\mathcal{L}_1(\theta, \phi)$, under the assumptions of general stochastic optimization (Bottou et al., 2018) and provided that the condition of Lemma 1 holds at each step.

Please see Appendix C.2 for a detailed and formal version of Theorem 1. Compared to Proposition 1 and the analysis in GAN (Goodfellow et al., 2014), Theorem 1 makes the weaker statement that AdVIL converges to a stationary point of the negative evidence lower bound (i.e., $\mathcal{L}_1$) instead of $\mathcal{L}$. Nevertheless, we argue that converging to a stationary point of $\mathcal{L}_1$ is sufficiently good for variational approaches in general. Besides, Theorem 1 states that AdVIL at least decreases $\mathcal{L}_1$ in expectation even if the assumption holds for only finitely many steps. Indeed, we empirically justify Theorem 1, as detailed in Appendix E.1. Theorem 1 also provides insights for the implementation of AdVIL.
Indeed, its assumption motivates us to use a sufficiently powerful $q(v,h)$ with neural networks and auxiliary variables, and update $q(v,h)$ multiple times per update of $P(v,h)$ , as detailed in Sec. 3.3 and Sec. 5.1 respectively. + +# 3.3 SPECIFYING THE VARIATIONAL DISTRIBUTIONS + +To efficiently get samples, both variational distributions are directed models. We use a directed neural network that maps $v$ to $h$ as the variational encoder $Q(h|v)$ (Kingma & Welling, 2013). + +As for the variational decoder, we first factorize it as the product of a prior over $h$ and a conditional distribution, namely $q(v,h) = q(v|h)q(h)$ . It is nontrivial to specify the prior $q(h)$ because the marginal distribution of $h$ in the MRF, i.e. $P(h)$ , can be correlated across different dimensions. Consequently, a simple $q(h)$ is not flexible enough to track $P(h)$ and can violate the condition of Lemma 1. To this end, we introduce an auxiliary variable $z$ , which can be discrete or continuous, on top of $h$ and define $q(v,h) = \int_{z}q(z)q(h|z)q(v|h)dz$ . (See an illustration in Fig. 1.) However, the entropy term of $q(v,h)$ is intractable because we need to integrate out the auxiliary variable $z$ . Therefore, we introduce the third variational distribution $r(z|h)$ to approximate the entropy of $q(v,h)$ . As in Eqn. (7), applying the standard variational trick gives an upper bound: + +$$ +- \mathbb {E} _ {q (v, h)} \log q (v, h) \leq - \mathbb {E} _ {q (v, h)} \log q (v | h) - \mathbb {E} _ {q (h) r (z | h)} \log \left[ \frac {q (h , z)}{r (z | h)} \right], \tag {10} +$$ + +which is unsatisfactory because the estimate is minimized w.r.t $r(z|h)$ while maximized w.r.t $q(v,h)$ . Instead, after some transformations (See details in Appendix A) we get a lower bound as follows: + +$$ +- \mathbb {E} _ {q (v, h)} \log q (v, h) \geq - \mathbb {E} _ {q (v, h)} \log q (v | h) - \mathbb {E} _ {q (h, z)} \log \left[ \frac {q (h , z)}{r (z | h)} \right]. 
\tag {11}
$$

The equality holds if and only if $r(z|h) = q(z|h)$ for all $h$. The difference between the two bounds is subtle: the last expectation in Eqn. (10) is over $q(h)r(z|h)$, while that in Eqn. (11) is over $q(h,z)$. Here, a lower bound is preferable because the estimate is maximized with respect to both $r(z|h)$ and $q(v,h)$, so we can train them simultaneously. For simplicity, we absorb the trainable parameters of $r(z|h)$ into $\psi$. Note that after introducing $z$ and $r(z|h)$, we can still obtain a convergence theorem for AdVIL under the conditions that $r(z|h)$ approximates $q(z|h)$ well and $q(v,h) = \int q(v,h,z)dz$ is sufficiently close to $P(v,h)$ in every step, together with the assumptions of general stochastic optimization.

Following GAN (Goodfellow et al., 2014), we optimize $\theta$, $\phi$ and $\psi$ jointly using stochastic gradient descent (SGD) in an alternating manner. The partial derivatives w.r.t. $\phi$ and $\psi$ are estimated via the reparameterization trick (Kingma & Welling, 2013) for the continuous variables and the Gumbel-Softmax trick (Jang et al., 2016; Maddison et al., 2016) for the discrete variables. See Algorithm 1 in Appendix B for the whole training procedure. Note that $\psi$ is updated $K_{1} > 1$ times per update of $\theta$.

# 4 RELATED WORK

Existing traditional methods (Neal, 2001; Hinton, 2002; Winn & Bishop, 2005; Wainwright & Jordan, 2006; Rother et al., 2007) can be used to estimate the log partition function but are nontrivial to extend to learning general MRFs. Some methods (Winn & Bishop, 2005; Neal, 2001) require an expensive inference procedure for each update of the model, and others (Hinton, 2002; Rother et al., 2007) cannot be directly applied to general cases (e.g., DBM). Among these methods, contrastive divergence (CD) (Hinton, 2002) has proven effective for certain types of models and is closely related to AdVIL.
Indeed, the partial derivative w.r.t. $\theta$ in AdVIL is:

$$
\frac{\partial \mathcal{L}_{2}(\theta, \phi, \psi)}{\partial \theta} = \mathbb{E}_{P_{\mathcal{D}}(v) Q(h|v)} \left[ \frac{\partial}{\partial \theta} \mathcal{E}(v, h) \right] - \mathbb{E}_{q(v, h)} \left[ \frac{\partial}{\partial \theta} \mathcal{E}(v, h) \right], \tag{12}
$$

which naturally involves a positive phase and a negative phase and is quite similar to Eqn. (3). Notably, however, in AdVIL the two phases average over $(v,h)$ pairs and require only knowledge of the energy function, without any further assumption about the model. Therefore, AdVIL is more suitable for general MRFs than CD (See empirical evidence in Sec. 5.3).

In the context of black-box learning in MRFs, AdVIL competes directly with NVIL (Kuleshov & Ermon, 2017). It may seem that the upper bound in Eqn. (5) is well suited for optimization because $P$ and $q$ share the same training direction. However, the bound holds only if the support of $\tilde{P}$ is a subset of the support of $q$. Further, the Monte Carlo estimate of the upper bound has high variance. Therefore, the bound of NVIL can easily be underestimated, which results in sub-optimal solutions (Kuleshov & Ermon, 2017). In contrast, though AdVIL arrives at a minimax optimization problem, the estimate in Eqn. (8) is tighter and of lower variance. We empirically verify this argument (See Fig. 3) and systematically compare the two methods (See Tab. 1) in Sec. 5.4.

![](images/98fb4573023d39a7be7cb8224c5c349f02cdf141be5d80f5fbbea510353ce91a.jpg)
(a) upper bound of $\mathcal{F}(v)$

![](images/5091c237ac0b35b6e0c28732458f0ee6b11054f9c74c9badc83479a5e5ad9c93.jpg)
(b) lower bound of $\mathcal{H}(q)$

![](images/6a2ab77f13152331633596fae20aac35d7edf9611cc7530e51727dcc0942ed28.jpg)
Figure 2: Curves of AdVIL on Digits. (a-c) compare the values of the variational approximations and the corresponding ground truths.
All bounds are rather tight after 5,000 iterations. (d) shows that the RBM loss (i.e., the loss of $\theta$ as in Eqn. (8)) tends to zero and the model converges gradually.

![](images/7651c09c9537e42dcaae20a953fca2603a5669b143681668b3f03c16264375a1.jpg)
(c) lower bound of $\log \mathcal{Z}$
(d) RBM loss and NLL

Apart from the work on approximate inference and learning in MRFs mentioned above, AdVIL is also related to some directed models. Kim & Bengio (2016) jointly train a deep energy model (Ngiam et al., 2011) and a directed generative model by minimizing the KL-divergence between them. Similar ideas have been highlighted in (Finn et al., 2016; Zhai et al., 2016; Dai et al., 2017; Liu & Wang, 2017). In comparison, firstly, AdVIL obtains its objective function from a unified perspective on black-box inference and learning in general MRFs. Note that dealing with latent variables in MRFs is nontrivial (Kim & Bengio, 2016), and therefore existing work focuses on fully observable models. Secondly, AdVIL uses a sophisticated decoder with auxiliary variables to handle the latent variables and derives a principled variational approximation of the entropy term instead of heuristics (Kim & Bengio, 2016; Zhai et al., 2016). Lastly, the convergence of AdVIL is formally characterized by Theorem 1, while the effect of the approximation error in inference is not well understood in existing methods. Adversarially learned inference (ALI) (Donahue et al., 2016; Dumoulin et al., 2016) is also formulated as a minimax optimization problem but focuses on directed models.
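To make the black-box flavor of Eqn. (12) concrete: for the RBM energy $\mathcal{E}(v,h) = -b^\top v - v^\top Wh - c^\top h$ used in the experiments below, $\partial\mathcal{E}/\partial W = -vh^\top$, so the update is just a difference of averages over $(v,h)$ pairs from the two phases. A schematic NumPy sketch, with the sampling of the pairs from $Q$ and $q$ abstracted away (all names are hypothetical):

```python
import numpy as np

def energy_grad_W(v, h):
    """Batch-averaged dE/dW for E(v, h) = -b^T v - v^T W h - c^T h."""
    return -(v.T @ h) / len(v)

def advil_grad_W(v_pos, h_pos, v_neg, h_neg):
    """Eqn (12): positive-phase minus negative-phase expected energy gradient.

    (v_pos, h_pos) are pairs from P_D(v) Q(h|v); (v_neg, h_neg) from q(v, h).
    Only the energy function is touched, hence the black-box property.
    """
    return energy_grad_W(v_pos, h_pos) - energy_grad_W(v_neg, h_neg)

rng = np.random.default_rng(0)
v = (rng.random((8, 5)) < 0.5).astype(float)
h = (rng.random((8, 3)) < 0.5).astype(float)
g = advil_grad_W(v, h, v, h)  # identical phases cancel exactly
```

If the two phases produce the same empirical distribution over pairs, the two averages cancel and the update vanishes, mirroring the fixed point of Eqn. (12).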
# 5 EXPERIMENTS

In this section, we evaluate AdVIL on restricted Boltzmann machines (RBM) (Ackley et al., 1985), deep Boltzmann machines (DBM) (Salakhutdinov & Hinton, 2009) and Gaussian restricted Boltzmann machines (GRBM) (Hinton & Salakhutdinov, 2006), using the Digits dataset, the UCI binary databases (Dheeru & Karra, 2017) and the Frey faces dataset (See detailed settings in Appendix D and the source code). We compare AdVIL with strong baseline methods systematically and show the promise of AdVIL to learn a broad family of models effectively as a black-box method.

# 5.1 EMPIRICAL ANALYSIS OF ADVIL

We present a detailed analysis of AdVIL on an RBM, whose energy function is defined as $\mathcal{E}(v,h) = -b^{\top}v - v^{\top}Wh - c^{\top}h$. The conditional distributions of an RBM are tractable, but we still treat $P(h|v)$ as unknown and train AdVIL in a fully black-box manner. The analysis is performed on the Digits dataset, and we augment the data fivefold by shifting the digits, following the protocol in (Kuleshov & Ermon, 2017). The dimensions of $v$, $h$ and $z$ are 64, 15 and 10, respectively. Therefore, the log partition function of the RBM and the entropy of the decoder can be computed by brute force.

Firstly, we empirically validate AdVIL in Fig. 2. Specifically, Panel (a) shows that the variational encoder $Q(h|v)$ provides a tight upper bound of the free energy after 2,000 iterations. Panel (b) demonstrates that the variational distribution $r(z|h)$ estimates the entropy of $q(v,h)$ accurately. Panel (c) shows that $q(v,h)$ can successfully track the log partition function after 5,000 iterations. Panel (d) shows that the RBM loss balances well between the negative phase and positive phase, and the model converges gradually. See Appendix E.1 for an empirical test of the condition in Lemma 1.

Secondly, we empirically show that both $P$ and $q$ can generate data samples in Appendix E.2.
Lastly, we analyze the sensitivity of $K_{1}$. Theoretically, enlarging $K_{1}$ will make $q(v,h)$ and $P(v,h)$ closer and thus help convergence, according to Theorem 1. As shown in Fig. 3 (a), a larger $K_{1}$ at least does not hurt convergence, which agrees with Theorem 1. Though $K_{1} = 15$ is sufficient on the Digits dataset, we use $K_{1} = 100$ as the default setting for AdVIL on larger datasets.

![](images/1577fba99804376d2824bee3ac52f01f23fa81275f8f20bbe52405c9b6dd6658.jpg)
(a) Sensitivity of $K_{1}$

![](images/f5aac85e91b7474fcf61f3ace4e695f7e7a190881bf76652b677e82fd80d7f14.jpg)
(b) NVIL
Figure 3: (a) Sensitivity analysis of $K_{1}$ on the Digits dataset. (b-d) Learning curves of NVIL, AdVIL and CD on the Mushrooms dataset. Compared to NVIL, AdVIL provides a tighter, lower-variance estimate of $\log \mathcal{Z}$ and achieves better performance. Compared to PCD-1 and CD-10, AdVIL can track the log partition function and achieve comparable results though trained in a black-box manner.

![](images/6d59082d903eb9be8e413e8a6fe67615ffbbeb0f8ce4777965694525576fee7c.jpg)
(c) AdVIL

![](images/8c5e53a1fe57dddaab0cef03cda96c091a8c86201173b8aaeb6c8e236af18547.jpg)
(d) PCD-1

Table 1: Annealed importance sampling (AIS) results in RBM. The results are recorded on the test set according to the best validation performance and averaged over three runs. AdVIL outperforms NVIL consistently and significantly. See the standard deviations in Appendix E.5.
| Method | Digits | Adult | Connect4 | DNA | Mushrooms | NIPS-0-12 | Ocr-letters | RCV1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NVIL-mean | -27.36 | -20.05 | -24.71 | -97.71 | -29.28 | -290.01 | -47.56 | -50.47 |
| AdVIL-mean | -26.34 | -19.29 | -21.95 | -97.59 | -19.59 | -276.42 | -45.64 | -50.22 |
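The AIS metric reported in Table 1 estimates the intractable $\log \mathcal{Z}$ by annealing from a trivial base model to the target RBM (Salakhutdinov & Murray, 2008). A minimal sketch for a toy RBM, checked against brute-force enumeration; the model sizes, annealing schedule, and chain count are illustrative assumptions, far smaller than what real evaluations use:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
d_v, d_h = 6, 4  # toy sizes (hypothetical)
W = rng.normal(scale=0.2, size=(d_v, d_h))
b, c = np.zeros(d_v), np.zeros(d_h)

def log_f(v, beta):
    """log of the unnormalized marginal sum_h exp(-beta * E(v, h))."""
    return beta * (v @ b) + np.log1p(np.exp(beta * (v @ W + c))).sum(axis=-1)

def ais_log_z(n_chains=200, n_betas=200):
    betas = np.linspace(0.0, 1.0, n_betas + 1)
    v = (rng.random((n_chains, d_v)) < 0.5).astype(float)  # exact sample at beta=0
    log_w = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        log_w += log_f(v, b1) - log_f(v, b0)
        # One Gibbs sweep leaving the beta_1 intermediate distribution invariant.
        h = (rng.random((n_chains, d_h)) < sigmoid(b1 * (v @ W + c))).astype(float)
        v = (rng.random((n_chains, d_v)) < sigmoid(b1 * (h @ W.T + b))).astype(float)
    log_z0 = (d_v + d_h) * np.log(2.0)  # partition function of the uniform base
    m = log_w.max()
    return log_z0 + m + np.log(np.exp(log_w - m).mean())

def exact_log_z():
    all_v = np.array(list(product([0.0, 1.0], repeat=d_v)))
    lf = log_f(all_v, 1.0)
    m = lf.max()
    return m + np.log(np.exp(lf - m).sum())
```

At this toy scale the annealed estimate lands close to the enumerated value; practical evaluations use far more intermediate distributions and chains, and report means over several runs as in Table 1.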
# 5.2 RBM RESULTS

To the best of our knowledge, NVIL (Kuleshov & Ermon, 2017) is the only existing black-box learning method for MRFs, and hence it is the most direct competitor of AdVIL. In this section, we provide a systematic comparison and analysis of the two methods in terms of the log-likelihood results on the UCI databases (Dheeru & Karra, 2017).

For a fair comparison, we use the widely adopted annealed importance sampling (AIS) (Salakhutdinov & Murray, 2008) metric for quantitative evaluation. Besides, we carefully perform grid search over the default settings of NVIL (Kuleshov & Ermon, 2017) and our settings based on their code, and choose the best configuration, including $K_{1} = 100$ (See details in Appendix D). We directly compare with the best version of NVIL in Tab. 1. It can be seen that AdVIL consistently outperforms NVIL on all datasets, which demonstrates the effectiveness of AdVIL. Besides, the time complexity of AdVIL is comparable to that of NVIL with the same hyperparameters.

We compare the learning curves of NVIL and AdVIL on the Mushrooms dataset. As shown in Fig. 3 (b), the upper bound of NVIL is underestimated after 4,000 iterations, after which the model can get worse or even diverge. In contrast, as shown in Fig. 3 (c), the lower bound of AdVIL is consistently valid. Besides, the estimate of NVIL is looser and of higher variance than that of AdVIL. The results agree with our analysis in Sec. 4 and explain why AdVIL significantly outperforms NVIL. Further, as shown in Fig. 3 (d), AdVIL is comparable to CD-10 and persistent contrastive divergence (PCD) (Tieleman, 2008), both of which leverage the tractability of the conditional distributions in an RBM.
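For contrast with the black-box update of Eqn. (12): the CD-1 and PCD baselines above exploit the RBM's tractable conditionals $P(h|v)$ and $P(v|h)$, which AdVIL never touches. A minimal CD-1 update; the learning rate, initialization, and all-ones toy data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One CD-1 update on a batch v0, using the tractable RBM conditionals."""
    ph0 = sigmoid(v0 @ W + c)                    # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                  # P(v = 1 | h0)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n      # positive minus negative phase
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

def free_energy(v, W, b, c):
    """F(v) = -b^T v - sum_j log(1 + exp((v W + c)_j))."""
    return -(v @ b) - np.log1p(np.exp(v @ W + c)).sum(axis=-1)

d_v, d_h = 4, 3
W, b, c = np.zeros((d_v, d_h)), np.zeros(d_v), np.zeros(d_h)
ones = np.ones((16, d_v))                        # toy dataset of all-ones vectors
for _ in range(200):
    cd1_step(W, b, c, ones)
```

After these updates the free energy of the training pattern falls below that of an unseen pattern (here the all-zeros vector). The point of the sketch is the hard-wired dependence on the sigmoid conditionals, which is exactly what rules CD out for general MRFs such as the DBM in Sec. 5.3.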
# 5.3 DBM RESULTS

We would like to demonstrate that AdVIL can deal with highly intractable models such as a DBM conveniently and effectively, compared to standard CD-based methods (Hinton, 2002; Welling & Hinton, 2002; Welling & Sutton, 2005) and NVIL (Kuleshov & Ermon, 2017).

The DBM (Salakhutdinov & Hinton, 2009) is a powerful family of deep models that stacks multiple RBMs together. The energy function of a two-layer DBM is defined as $\mathcal{E}(v,h_1,h_2) = -b^\top v - v^\top W_1h_1 - c_1^\top h_1 - h_1^\top W_2h_2 - c_2^\top h_2$. Learning a DBM is challenging because $P(h_{1},h_{2}|v)$ is not tractable and CD (Hinton, 2002) is not applicable. Inspired by (Welling & Hinton, 2002; Welling & Sutton, 2005), we construct a variational CD (VCD) baseline by employing the same variational encoder $Q(h_{1},h_{2}|v)$ as in AdVIL. The free energy is approximated by the same upper bound as in Eqn. (7), which is minimized with respect to the parameters in $Q(h_{1},h_{2}|v)$. The gradient of the parameters in the DBM is given by Eqn. (4), where the Gibbs sampling procedure is approximated by $h_1\sim Q(h_1|v)$ and $v\sim P(v|h_{1})$. Note that AdVIL can be directly applied to this case. As for

Table 2: AIS results in DBM. The results are recorded according to the best validation performance and averaged over three runs. AdVIL achieves higher averaged AIS results on five out of eight datasets and has a better overall performance than VCD. See the standard deviations in Appendix E.5.
| Method | Digits | Adult | Connect4 | DNA | Mushrooms | NIPS-0-12 | Ocr-letters | RCV1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VCD-mean | -28.49 | -22.26 | -26.79 | -97.59 | -23.15 | -356.26 | -45.77 | -50.83 |
| AdVIL-mean | -27.89 | -20.29 | -26.34 | -99.40 | -21.21 | -287.15 | -48.38 | -51.02 |
![](images/11940665b112e4a335f209f16313d286846a55acb905bf98aee7625fe57fdada.jpg)
(a) Data

![](images/a9272132142c66cd09da83d631e3bb4a860488dfdde454e5cb59838f0a263620.jpg)
Figure 4: Filters and samples of a GRBM learned by AdVIL on the Frey faces dataset. (a) presents the training data. (b) presents the first 40 filters of the GRBM. (c) and (d) show random samples from the variational decoder and the GRBM, respectively. We present the mean of $v$ for better visualization.

![](images/547164da96afb6fe1a832a5c500bfb3e500727052ad93afd0fad41326b2104f1.jpg)
(b) Filters of the GRBM
(c) Samples from $q$

![](images/3578dc2d379ee66d17d12d068d124dca153d8a515e9756c03c99933d4b612b42.jpg)
(d) Samples from $P$

the time complexity, AdVIL trains around ten times slower than VCD in our implementation. However, the approximate inference and sampling procedures of AdVIL are very efficient thanks to the directed variational distributions.

The log-likelihood results on the UCI databases are shown in Tab. 2. It can be seen that AdVIL has a better overall performance even though it is trained in a black-box manner, which shows the promise of AdVIL. See Appendix E.4 for learning curves and a detailed analysis of the results.

We also extend NVIL by using the same $Q(h_{1}, h_{2}|v)$ and $q(v, h_{1}, h_{2})$ as AdVIL. However, NVIL diverges after 300 iterations and obtains poor AIS results (e.g., less than $-40$ on Digits) in our implementation. A potential reason is that the upper bound given by $q$ in NVIL can be underestimated if $q$ is high-dimensional, as analyzed in Sec. 4 and Fig. 3. Note that $q(v, h_{1}, h_{2})$ in a DBM involves latent variables and has a higher dimension (e.g., 164 on the Digits dataset) than $q(v)$ in an RBM (e.g., 64 on the Digits dataset). The results again demonstrate the advantages of AdVIL over NVIL.

# 5.4 GRBM RESULTS

We now show the ability of AdVIL to learn a GRBM on the continuous Frey faces dataset.
The energy function of a GRBM is $\mathcal{E}(v,h) = \frac{1}{2\sigma^2}\| v - b\|^2 - c^\top h - \frac{1}{\sigma} v^\top Wh$, where $\sigma$ is the standard deviation of the Gaussian likelihood and is manually set to 1. We standardize the data by subtracting the mean and dividing by the standard deviation. The dimensions of $h$ and $z$ are 200 and 50, respectively.

Though a GRBM is more sensitive to the hyperparameters and hence harder to train than an RBM (Cho et al., 2011; 2013), AdVIL can successfully capture the underlying data distribution using the default hyperparameters (See Appendix D). As shown in Fig. 4, the samples from both the GRBM (via Gibbs sampling after 100,000 burn-in steps) and the decoder are meaningful faces. Besides, the filters of the GRBM outline diverse prototypes of faces, which accords with our expectations.

In summary, the results on the three models together demonstrate that AdVIL can learn a broad family of models conveniently and effectively in a fully black-box manner.

# 6 CONCLUSION AND DISCUSSION

We propose a novel black-box learning and inference method for undirected graphical models, called adversarial variational inference and learning (AdVIL). The key to AdVIL is a double variational trick that approximates the negative free energy and the log partition function separately. A formal convergence theorem, which provides insights for implementation, is established for AdVIL. Empirical results show that AdVIL can deal with a broad family of MRFs in a fully black-box manner and outperforms both the standard contrastive divergence method and the black-box NVIL algorithm.

Though AdVIL shows promising results, we emphasize that black-box learning and inference in MRFs are far from completely solved, especially on high-dimensional data. The two intractability
The two intractability + +problems of MRFs are distinct since the posterior of the latent variables is local in terms of $v$ but the partition function is global by integrating out $v$ . The additional integral makes estimating the partition function much more challenging. In AdVIL, simply increasing the number of updates of the decoder to obtain a tighter estimate of the partition function on high-dimensional data can be expensive. A potential future work to avoid the problem is adopting recent advances on non-convex optimization (Dauphin et al., 2014; Reddi et al., 2016; Wang et al., 2017) to accelerate the inner loop optimization. We conjecture that AdVIL is comparable to CD in RBM and superior to VCD in DBM on larger datasets if AdVIL can be trained to nearly converge based on our current results. + +# ACKNOWLEDGEMENTS + +This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U1811461), Beijing NSF Project (No. L172037), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program and the NVIDIA NVAIL Program with GPU/DGX Acceleration. C. Li was supported by the Chinese postdoctoral innovative talent support program and Shimu Tsinghua Scholar. + +# REFERENCES + +Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. 2016. +David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive science, 9(1):147-169, 1985. +Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. Siam Review, 60(2):223-311, 2018. +Stephen Boyd and Lieven Vandenberghe. 
Convex optimization. Cambridge University Press, 2004.
Bob Carpenter, Andrew Gelman, Matthew D Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. Stan: A probabilistic programming language. Journal of Statistical Software, 76(1), 2017.
Kyung Hyun Cho, Tapani Raiko, and Alexander Ilin. Gaussian-Bernoulli deep Boltzmann machine. In Neural Networks (IJCNN), The 2013 International Joint Conference on, pp. 1-7. IEEE, 2013.
KyungHyun Cho, Alexander Ilin, and Tapani Raiko. Improved learning of Gaussian-Bernoulli restricted Boltzmann machines. In International Conference on Artificial Neural Networks, pp. 10-17. Springer, 2011.
Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating energy-based generative adversarial networks. arXiv preprint arXiv:1702.01691, 2017.
Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.
Dua Dheeru and Taniskidou E Karra. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
Jeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp.
2672-2680, 2014.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.
Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.
Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
Philipp Krahenbuhl and Vladlen Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Advances in Neural Information Processing Systems, pp. 109-117, 2011.
Volodymyr Kuleshov and Stefano Ermon. Neural variational inference and learning in undirected graphical models. In Advances in Neural Information Processing Systems, pp. 6734-6743, 2017.
John Lafferty, Andrew McCallum, and Fernando CN Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 2001.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In *The Proceedings of the 14th International Conference on Artificial Intelligence and Statistics*, volume 15 of JMLR: W&CP, pp. 29-37, 2011.
Qiang Liu and Dilin Wang.
Learning deep energy models: Contrastive divergence vs. amortized MLE. arXiv preprint arXiv:1707.00797, 2017.
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
Xiao-Li Meng and Wing Hung Wong. Simulating ratios of normalizing constants via a simple identity: a theoretical exploration. Statistica Sinica, pp. 831-860, 1996.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
Radford M Neal. Probabilistic inference using Markov chain Monte Carlo methods. 1993.
Radford M Neal. Annealed importance sampling. Statistics and Computing, 11(2):125-139, 2001.
Jiquan Ngiam, Zhenghao Chen, Pang W Koh, and Andrew Y Ng. Learning deep energy models. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1105-1112, 2011.

Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In Artificial Intelligence and Statistics, pp. 814-822, 2014.
Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. In International Conference on Machine Learning, pp. 314-323, 2016.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
Carsten Rother, Vladimir Kolmogorov, Victor Lempitsky, and Martin Szummer. Optimizing binary MRFs via extended roof duality. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8. IEEE, 2007.
Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann machines. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.
Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep Boltzmann machines.
In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 693-700, 2010.
Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872-879. ACM, 2008.
John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3528-3536, 2015.
Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064-1071. ACM, 2008.
Martin J Wainwright and Michael I Jordan. Log-determinant relaxation for approximate inference in discrete Markov random fields. IEEE Transactions on Signal Processing, 54(6):2099-2109, 2006.
Martin J Wainwright, Tommi S Jaakkola, and Alan S Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313-2335, 2005.
Xiao Wang, Shiqian Ma, Donald Goldfarb, and Wei Liu. Stochastic quasi-Newton methods for nonconvex stochastic optimization. SIAM Journal on Optimization, 27(2):927-956, 2017.
Max Welling and Geoffrey E Hinton. A new learning algorithm for mean field Boltzmann machines. In International Conference on Artificial Neural Networks, pp. 351-357. Springer, 2002.
Max Welling and Charles A Sutton. Learning in Markov random fields with contrastive free energies. In AISTATS. Citeseer, 2005.
John Winn and Christopher M Bishop. Variational message passing. Journal of Machine Learning Research, 6(Apr):661-694, 2005.
Shuangfei Zhai, Yu Cheng, Rogerio Feris, and Zhongfei Zhang. Generative adversarial networks as variational training of energy based models. arXiv preprint arXiv:1611.01799, 2016.

# A DERIVATION OF THE OBJECTIVE FUNCTION

Here we derive the objective function of AdVIL in detail.
Let $\theta$, $\phi$ and $\psi$ denote the trainable parameters in the MRF, the variational encoder and the variational decoder, respectively. The first variational trick bounds the free energy as follows:

$$
\begin{array}{l} \mathcal{L}(\theta) = -\mathbb{E}_{P_{\mathcal{D}}(v)} \left[ \log \left( \int_{h} e^{-\mathcal{E}(v, h)} dh \right) \right] + \log \mathcal{Z} \\ = -\mathbb{E}_{P_{\mathcal{D}}(v)} \log \left[ \int_{h} Q(h|v) \frac{e^{-\mathcal{E}(v, h)}}{Q(h|v)} dh \right] + \log \mathcal{Z} \\ \leq \mathbb{E}_{P_{\mathcal{D}}(v) Q(h|v)} \left[ \mathcal{E}(v, h) + \log Q(h|v) \right] + \log \mathcal{Z} := \mathcal{L}_{1}(\theta, \phi), \\ \end{array}
$$

where the bound is derived by applying Jensen's inequality, and the equality holds if and only if $Q(h|v) = P(h|v)$ for all $v$.

The second variational trick bounds the log partition function as follows:

$$
\begin{array}{l} \mathcal{L}_{1}(\theta, \phi) = \mathbb{E}_{P_{\mathcal{D}}(v) Q(h|v)} \left[ \mathcal{E}(v, h) + \log Q(h|v) \right] + \log \left( \int_{v} \int_{h} e^{-\mathcal{E}(v, h)} dv \, dh \right) \\ = \mathbb{E}_{P_{\mathcal{D}}(v) Q(h|v)} \left[ \mathcal{E}(v, h) + \log Q(h|v) \right] + \log \left( \int_{v} \int_{h} q(v, h) \frac{e^{-\mathcal{E}(v, h)}}{q(v, h)} dv \, dh \right) \\ \geq \mathbb{E}_{P_{\mathcal{D}}(v) Q(h|v)} \left[ \mathcal{E}(v, h) + \log Q(h|v) \right] + \mathbb{E}_{q(v, h)} \left[ \log \left( \frac{e^{-\mathcal{E}(v, h)}}{q(v, h)} \right) \right] \\ = \mathbb{E}_{P_{\mathcal{D}}(v) Q(h|v)} \left[ \underbrace{\overbrace{\mathcal{E}(v, h)}^{\text{energy term}} + \overbrace{\log Q(h|v)}^{\text{entropy term}}}_{\text{Positive Phase}} \right] - \mathbb{E}_{q(v, h)} \left[ \underbrace{\overbrace{\mathcal{E}(v, h)}^{\text{energy term}} + \overbrace{\log q(v, h)}^{\text{entropy term}}}_{\text{Negative Phase}} \right] \\ :=
\mathcal {L} _ {2} (\theta , \phi , \psi), \\ \end{array} +$$ + +where the bound is also derived via applying the Jensen inequality and the equality holds if and only if $q(v,h) = P(v,h)$ . + +To enhance the expressive power of the variational decoder, we introduce an auxiliary variable $z$ and define $q(v,h) = \int_{z}q(z)q(h|z)q(v|h)dz$ , which makes the entropy term in the negative phase intractable. To address the problem, we propose the third variational approximation. First, we can decompose the entropy of $q(v,h)$ as $-\mathbb{E}_{q(v,h)}\log q(v,h) = -\mathbb{E}_{q(v,h)}\log q(v|h) - \mathbb{E}_{q(h)}\log q(h)$ and we only need to approximate $-\mathbb{E}_{q(h)}\log q(h)$ . However, simply applying the standard variational trick as above, we get an upper bound as follows: + +$$ +\begin{array}{l} - \mathbb {E} _ {q (h)} \log q (h) = - \mathbb {E} _ {q (h)} \log \int_ {z} q (h, z) d z \\ = - \mathbb {E} _ {q (h)} \log \int_ {z} r (z | h) \frac {q (h , z)}{r (z | h)} d z \\ \leq - \mathbb {E} _ {q (h) r (z | h)} \log \left[ \frac {q (h , z)}{r (z | h)} \right], \\ \end{array} +$$ + +which is not satisfactory because the optimization problem will be $\min_{P}\min_{Q}\max_{q}\min_{r}$ . Instead, we derive a lower bound as follows: + +$$ +\begin{array}{l} - \mathbb {E} _ {q (h)} \log q (h) = - \mathbb {E} _ {q (h)} \log q (h) - \mathbb {E} _ {q (h, z)} \log q (z | h) + \mathbb {E} _ {q (h, z)} \log q (z | h) \\ = - \mathbb {E} _ {q (h, z)} \log q (h, z) + \mathbb {E} _ {q (h, z)} \log q (z | h) \\ = - \mathbb {E} _ {q (h, z)} \log \left[ \frac {q (h , z)}{r (z | h)} \right] + \mathbb {D} _ {K L} (q (z | h) | | r (z | h)) \\ \geq - \mathbb {E} _ {q (h, z)} \log \left[ \frac {q (h , z)}{r (z | h)} \right], \\ \end{array} +$$ + +where $\mathbb{D}_{KL}(\cdot ||\cdot)$ denotes the KL-divergence and the equality holds if and only if $r(z|h) = q(z|h)$ for all $h$ . 
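The tightness of this entropy lower bound is easy to verify numerically on a toy discrete model. The sketch below uses a hypothetical binary $h$ and $z$ with made-up probabilities (not from the paper) and checks that the bound equals the true entropy when $r(z|h)$ is the exact posterior $q(z|h)$, and falls strictly below it otherwise:

```python
import math

# Made-up toy distributions over binary h and z, purely for illustration.
q_z = {0: 0.4, 1: 0.6}
q_h_given_z = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
q_joint = {(h, z): q_z[z] * q_h_given_z[z][h] for h in (0, 1) for z in (0, 1)}
q_h = {h: sum(q_joint[(h, z)] for z in (0, 1)) for h in (0, 1)}

# True entropy of the marginal q(h) (intractable in general, exact here).
true_entropy = -sum(p * math.log(p) for p in q_h.values())

def lower_bound(r_z_given_h):
    # -E_{q(h,z)} log[ q(h,z) / r(z|h) ]
    return -sum(q_joint[(h, z)] * math.log(q_joint[(h, z)] / r_z_given_h[h][z])
                for (h, z) in q_joint)

# With the exact posterior q(z|h), the bound is tight.
posterior = {h: {z: q_joint[(h, z)] / q_h[h] for z in (0, 1)} for h in (0, 1)}
assert abs(lower_bound(posterior) - true_entropy) < 1e-12

# Any other r(z|h) gives a strictly smaller value, i.e. a valid lower bound.
mismatched = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}
assert lower_bound(mismatched) < true_entropy
```

The gap between the two values is exactly the averaged $\mathbb{D}_{KL}(q(z|h)\,||\,r(z|h))$ term dropped in the last step of the derivation.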
The difference between the two bounds is that the expectation is taken over $q(h)r(z|h)$ in the upper bound but over $q(h,z)$ in the lower bound. Using the lower bound, the optimization problem becomes $\min_{P}\min_{Q}\max_{q}\max_{r}$.

# B FORMAL TRAINING PROCEDURE

The formal training procedure of AdVIL is presented in Algorithm 1.

Algorithm 1 Adversarial variational inference and learning by stochastic gradient descent
1: Input: Constants $K_{1}$ and $K_{2}$, learning rate schemes $\alpha$ and $\gamma$, randomly initialized $\theta$, $\phi$ and $\psi$
2: repeat
3: for $i = 1, \dots, K_{1}$ do
4: Sample a batch of $(v, h, z) \sim q(v, h, z)$
5: Estimate the objective of $q$ and $r$ according to Eqn. (11) and the negative phase in Eqn. (8)
6: Update $\psi$ to maximize the objective according to $\alpha$
7: end for
8: for $i = 1, \dots, K_{2}$ do
9: Sample a batch of $(v, h) \sim P_{\mathcal{D}}(v)Q(h|v)$
10: Estimate the objective of $Q$ according to the positive phase in Eqn. (8)
11: Update $\phi$ to minimize the objective according to $\gamma$
12: end for
13: Sample a batch of $(v, h) \sim P_{\mathcal{D}}(v)Q(h|v)$ and another batch of $(v, h) \sim q(v, h)$
14: Estimate the objective of $P$ according to Eqn. (8)
15: Update $\theta$ to minimize the objective according to $\gamma$
16: until convergence or reaching a certain threshold

# C DETAILED THEORETICAL ANALYSIS

For simplicity, we consider discrete $v$ and $h$ (e.g., in an RBM); the analysis can be extended to the continuous cases. We assume $v \in \{0,1\}^{d_v}$ and $h \in \{0,1\}^{d_h}$, where $d_v$ and $d_h$ are the dimensions of the visible and latent variables, respectively.

# C.1 ANALYSIS IN THE NONPARAMETRIC CASE

We first analyze the nonparametric case in Proposition 1 as follows.

Proposition 1.
For any $P(v,h) = \exp(-\mathcal{E}(v,h)) / \mathcal{Z}$, $\mathcal{L}_2(\theta,\phi,\psi)$ is a tight estimate of the negative log-likelihood of $P(v)$, under the following assumptions:

1. $Q(h|v)$ and $q(v,h)$ are nonparametric.
2. The inner optimization over $Q(h|v)$ and $q(v,h)$ can reach their optima.

Proof. Given $P(v,h)$, i.e., $\mathcal{E}(v,h)$, to find $q^{*}(v,h)$ we optimize $\mathcal{L}_2$ over $\{q(v,h)\,|\,v\in \{0,1\}^{d_v},\, h\in \{0,1\}^{d_h}\}$ (abbreviated as $\{q(v,h)\}$ for simplicity). The optimization problem is equivalent to:

$$
\min_{\{q(v,h)\}} \sum_{v,h} q(v,h)\left[\mathcal{E}(v,h) + \log q(v,h)\right]
$$

$$
\text{subject to: } \sum_{v,h} q(v,h) = 1,
$$

$$
q(v,h) \geq 0, \forall v, h.
$$

Note that the objective function is convex since its Hessian matrix is positive semi-definite, and the constraints are linear; therefore, this is a convex optimization problem. Further, we can verify that Slater's condition (Boyd & Vandenberghe, 2004) holds (e.g., at the uniform $q$), and hence strong duality holds. We can therefore use the KKT conditions to solve the optimization problem.

The Lagrangian $\mathcal{G}(\{q(v,h)\},\lambda,\{\mu(v,h)\})$ is:

$$
\sum_{v,h} q(v,h)\left[\mathcal{E}(v,h) + \log q(v,h)\right] + \lambda\left(\sum_{v,h} q(v,h) - 1\right) + \sum_{v,h} \mu(v,h)\, q(v,h),
$$

where $\lambda$ and $\{\mu(v,h)\}$ are the associated Lagrange multipliers.

To satisfy stationarity, we take gradients with respect to $q(v,h)$ for all $(v,h)$ and get:

$$
\left[\mathcal{E}(v,h) + \log q^{*}(v,h) + 1\right] + \lambda + \mu(v,h) = 0,
$$

which implies

$$
q^{*}(v,h) = \exp(-\mathcal{E}(v,h) - (1 + \lambda + \mu(v,h))).
+$$ + +According to the complementary slackness, we have + +$$ +\mu (v, h) q ^ {*} (v, h) = 0, \forall v, h, +$$ + +which implies $\mu (v,h) = 0,\forall v,h$ , since $q^{*}(v,h) > 0,\forall v,h$ + +To satisfy the primal equality constraint, we have + +$$ +\sum_ {v, h} q ^ {*} (v, h) = \sum_ {v, h} \exp (- \mathcal {E} (v, h) - (1 + \lambda)) = 1, +$$ + +which implies + +$$ +q ^ {*} (v, h) = \frac {\exp (- \mathcal {E} (v , h))}{\sum_ {v ^ {\prime} , h ^ {\prime}} \exp (- \mathcal {E} (v ^ {\prime} , h ^ {\prime}))} = P (v, h), \forall v, h. +$$ + +To find $Q^{*}(h|v)$ , we optimize $\mathcal{L}_2$ over $\{Q(h|v)|v\in \{0,1\}^{d_v}, h\in \{0,1\}^{d_h}\}$ (we will use a shortcut $\{Q(h|v)\}$ for simplicity). The optimization problem is equivalent to: + +$$ +\min _ {\{Q (h | v) \}} \sum_ {v} P _ {\mathcal {D}} (v) \sum_ {h} Q (h | v) [ \mathcal {E} (v, h) + \log Q (h | v) ] +$$ + +subject to: $\sum_{h}Q(h|v) = 1,\forall v,$ + +$$ +Q (h | v) \geq 0, \forall v, h. +$$ + +Similar to the above procedure, we can get + +$$ +Q ^ {*} (h | v) = \frac {\exp (- \mathcal {E} (v , h))}{\sum_ {h ^ {\prime}} \exp (- \mathcal {E} (v , h ^ {\prime}))} = P (h | v), \forall v, h. +$$ + +Under the assumptions that (1) $Q(h|v)$ and $q(v,h)$ are nonparametric, and (2) the inner optimization over $\psi$ and $\phi$ can get the optimum, the optimal variational distributions $P(v,h)$ and $P(h|v)$ can be obtained. Plugging them back into $\mathcal{L}_2$ , we get + +$$ +\begin{array}{l} \mathcal {L} _ {2} = \mathbb {E} _ {P _ {\mathcal {D}} (v) P (h | v)} [ \mathcal {E} (v, h) + \log P (h | v) ] - \mathbb {E} _ {P (v, h)} [ \mathcal {E} (v, h) + \log P (v, h) ] \\ = \mathbb {E} _ {P _ {D} (v) P (h | v)} \left[ - \log \sum_ {h} e ^ {- \mathcal {E} (v, h)} \right] + \mathbb {E} _ {P (v, h)} [ \log \mathcal {Z} ] \\ = \mathbb {E} _ {P _ {D} (v)} [ \mathcal {F} (v) ] + \log \mathcal {Z} = \mathcal {L}. 
\\ \end{array}
$$

![](images/82329902117c1389b7ebb59327b3413527b4d33f41aa5053549c3866e3abb638.jpg)

Remark Similar to Theorem 1 in (Goodfellow et al., 2014), Proposition 1 relies on the nonparametric assumption, which is relaxed in the following analysis. Namely, we will consider the more practical cases where $q(v,h)$ may not be exactly the same as $P(v,h)$ during training.

# C.2 MAIN CONVERGENCE THEOREM

For convenience, we summarize the training dynamics of Algorithm 1 with $K_{1} = 1$, $K_{2} = 1$, and exact (rather than stochastic) gradients as follows:

$$
\psi_{k+1} = \psi_{k} + \alpha_{k} \frac{\partial \mathcal{L}_{2}\left(\theta_{k}, \phi_{k}, \psi_{k}\right)}{\partial \psi},
$$

$$
\phi_{k+1} = \phi_{k} - \gamma_{k} \frac{\partial \mathcal{L}_{2}\left(\theta_{k}, \phi_{k}, \psi_{k+1}\right)}{\partial \phi},
$$

$$
\theta_{k+1} = \theta_{k} - \gamma_{k} \frac{\partial \mathcal{L}_{2}\left(\theta_{k}, \phi_{k}, \psi_{k+1}\right)}{\partial \theta}, \tag{13}
$$

where $k = 1, 2, \ldots$. We will prove that, even though we are optimizing $\mathcal{L}_2(\theta,\phi,\psi)$, $(\theta_{k},\phi_{k})$ converges to a stationary point of $\mathcal{L}_1(\theta,\phi)$ under certain conditions. To establish this, we first prove that the angle between $\frac{\partial\mathcal{L}_2(\theta,\phi,\psi)}{\partial\theta}$ and $\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta}$ is sufficiently positive if $q(v,h)$ and $P(v,h)$ satisfy certain conditions, as summarized in Lemma 1.

Lemma 1.
For any $(\theta, \phi)$ , there exists a symmetric positive definite matrix $H$ such that $\frac{\partial \mathcal{L}_2(\theta, \phi, \psi)}{\partial \theta} = H \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta}$ under the assumption: $\| \sum_{v,h} \delta(v,h) \frac{\partial \mathcal{E}(v,h)}{\partial \theta} \|_2 < \| \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta} \|_2$ if $|| \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta} ||_2 > 0$ and $|| \sum_{v,h} \delta(v,h) \frac{\partial \mathcal{E}(v,h)}{\partial \theta} \|_2 = 0$ if $|| \frac{\partial \mathcal{L}_1(\theta, \phi)}{\partial \theta} \|_2 = 0$ , where $\delta(v,h) = q(v,h) - P(v,h)$ . + +Proof. According to the Cauchy-Schwarz inequality, we have + +$$ +\langle \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta}, \sum_ {v, h} \delta (v, h) \frac {\partial \mathcal {E} (v , h)}{\partial \theta} \rangle \leq | | \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta} | | _ {2} | | \sum_ {v, h} \delta (v, h) \frac {\partial \mathcal {E} (v , h)}{\partial \theta} | | _ {2}. 
+$$ + +If $||\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta} ||_2 > 0$ , according to the assumption $||\sum_{v,h}\delta (v,h)\frac{\partial\mathcal{E}(v,h)}{\partial\theta} ||_2 < ||\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta} ||_2$ , we have + +$$ +\langle \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta}, \sum_ {v, h} \delta (v, h) \frac {\partial \mathcal {E} (v , h)}{\partial \theta} \rangle < | | \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta} | | _ {2} ^ {2} = \langle \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta}, \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta} \rangle , +$$ + +which implies that + +$$ +\langle \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta}, \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta} - \sum_ {v, h} \delta (v, h) \frac {\partial \mathcal {E} (v , h)}{\partial \theta} \rangle > 0. +$$ + +According to the definitions of $\mathcal{L}_1(\theta, \phi)$ and $\mathcal{L}_2(\theta, \phi, \psi)$ , we have + +$$ +\begin{array}{l} \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta} - \sum_ {v, h} \delta (v, h) \frac {\partial \mathcal {E} (v , h)}{\partial \theta} \\ = \sum_ {v, h} [ P _ {\mathcal {D}} (v) Q (h | v) - P (v, h) ] \frac {\partial \mathcal {E} (v , h)}{\partial \theta} - \sum_ {v, h} [ q (v, h) - P (v, h) ] \frac {\partial \mathcal {E} (v , h)}{\partial \theta} \\ = \sum_ {v, h} [ P _ {\mathcal {D}} (v) Q (h | v) - q (v, h) ] \frac {\partial \mathcal {E} (v , h)}{\partial \theta} = \frac {\partial \mathcal {L} _ {2} (\theta , \phi , \psi)}{\partial \theta}, \\ \end{array} +$$ + +which implies that + +$$ +\langle \frac {\partial \mathcal {L} _ {1} (\theta , \phi)}{\partial \theta}, \frac {\partial \mathcal {L} _ {2} (\theta , \phi , \psi)}{\partial \theta} \rangle > 0. 
+$$ + +Equivalently, there exists a symmetric positive definite matrix $H$ such that $\frac{\partial\mathcal{L}_2(\theta,\phi,\psi)}{\partial\theta} = H\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta}$ . Note that this also holds when $||\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta} ||_2 = 0$ (i.e., $\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta} = \vec{0}$ ) because $||\frac{\partial\mathcal{L}_2(\theta,\phi,\psi)}{\partial\theta} ||_2 \leq ||\frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\theta} ||_2 + ||\sum_{v,h}\delta (v,h)\frac{\partial\mathcal{E}(v,h)}{\partial\theta} ||_2 = 0$ (i.e., $\frac{\partial\mathcal{L}_2(\theta,\phi,\psi)}{\partial\theta} = \vec{0}$ ), according to the assumption. + +Remark Lemma 1 assumes that $q(v,h)$ and $P(v,h)$ are sufficiently close, which is encouraged by choosing a sufficiently powerful family of $q(v,h)$ and updating $\psi$ multiple times per update of $\theta$ , i.e. $K_{1} > 1$ . If Lemma 1 holds, optimizing $\mathcal{L}_2(\theta ,\phi ,\psi)$ with respect to $\theta$ will decrease $\mathcal{L}_1(\theta ,\phi)$ in expectation with a sufficiently small stepsize. Also note that for any $(\theta ,\psi),\frac{\partial\mathcal{L}_2(\theta,\phi,\psi)}{\partial\phi} = \frac{\partial\mathcal{L}_1(\theta,\phi)}{\partial\phi}$ and therefore, optimizing $\mathcal{L}_2(\theta ,\phi ,\psi)$ with respect to $\phi$ will decrease $\mathcal{L}_1(\theta ,\phi)$ in expectation with a sufficiently small stepsize. + +Based on Lemma 1 and other commonly used assumptions in the analysis of stochastic gradient descent (Bottou et al., 2018), Algorithm 1 converges to a stationary point of $\mathcal{L}_1(\theta, \phi)$ , as stated in Theorem 1. + +Theorem 1. Solving the optimization problem in Eqn. 
(9) using stochastic gradient descent according to Algorithm 1, then + +$$ +\lim _ {k \rightarrow \infty} \mathbb {E} [ | | \frac {\partial \mathcal {L} _ {1} (\theta_ {k} , \phi_ {k})}{\partial \theta} | | _ {2} ^ {2} ] = 0, +$$ + +under the following assumptions. + +1. The condition of Corollary 4.12 in (Bottou et al., 2018): $\mathcal{L}_2(\theta, \phi, \psi)$ is twice differentiable with respect to $\theta$ , $\phi$ and $\psi$ . + +2. Assumption 4.1 in (Bottou et al., 2018): the gradients of $\mathcal{L}_2(\theta, \phi, \psi)$ with respect to $\theta$ , $\phi$ and $\psi$ are Lipschitz. +3. Assumption 4.3 in (Bottou et al., 2018): the first and second moments of the stochastic gradients are bounded by the expected gradients. +4. The stepsize satisfies the diminishing condition (Bottou et al., 2018), i.e., $\alpha_{k} = \gamma_{k}$ $\sum_{k = 1}^{\infty}\gamma_k = \infty ,\sum_{k = 1}^{\infty}\gamma_k^2 < \infty .$ +5. The condition of Lemma 1 holds in each step $k$ . Therefore, + +$$ +\forall k, \exists H _ {k}, \frac {\partial \mathcal {L} _ {2} \left(\theta_ {k} , \phi_ {k} , \psi_ {k + 1}\right)}{\partial \theta} = H _ {k} \frac {\partial \mathcal {L} _ {1} \left(\theta_ {k} , \phi_ {k}\right)}{\partial \theta}. +$$ + +Proof. See Corollary 4.12 in (Bottou et al., 2018). + +![](images/5e5f6add1d86349dc7968ca4b9e1f464e3980ccd784c9715a2195144c8949812.jpg) + +Remark Assumption 1 and Assumption 2 in Theorem 1 are ensured because we use the sigmoid and tanh activation functions. Assumption 3 and Assumption 4 in Theorem 1 are ensured by the sampling and learning rate schemes of the stochastic gradient descent. Assumption 5 in Theorem 1 is weaker than the nonparametric assumption of Proposition 1 but still requires a large $K_{1}$ . Also note that the statement of converging to $\mathcal{L}_1$ in Theorem 1 is weaker than that in Proposition 1. + +# C.3 COMPLEMENTARY CONVERGENCE THEOREM + +Heusel et al. 
(2017) propose a two time-scale update rule to train minimax optimization problems with a convergence guarantee even when $K_{1} = 1$. AdVIL converges when trained with the same method as in (Heusel et al., 2017), as summarized in Proposition 2.

Proposition 2. AdVIL trained with a two time-scale update rule (Heusel et al., 2017) converges to a stationary local Nash equilibrium almost surely under the following assumptions.

1. The gradients with respect to $\theta$, $\phi$ and $\psi$ are Lipschitz.
2. $\sum_{k}\alpha_{k} = \infty, \sum_{k}\alpha_{k}^{2} < \infty, \sum_{k}\gamma_{k} = \infty, \sum_{k}\gamma_{k}^{2} < \infty, \gamma_{k} = o(\alpha_{k}).$
3. The stochastic gradient errors are bounded in expectation.
4. For each $\theta$, the ordinary differential equation corresponding to Equation 13 has a local asymptotically stable attractor within a domain of attraction such that the attractor is Lipschitz. Similar assumptions are required for $\phi$ and $\psi$.
5. $\sup_k||\theta_k|| < \infty, \sup_k||\psi_k|| < \infty, \sup_k||\phi_k|| < \infty.$

Proof. See Theorem 1 in (Heusel et al., 2017).

![](images/e1da969453b9e3fab5a3ab64d0e8ffbfb657e6208cd7ec17be345cfc193bfbc9.jpg)

Remark Compared to Theorem 1, Proposition 2 ensures the convergence of AdVIL without assuming $q(v,h)$ is sufficiently close to $P(v,h)$ in each step. However, a two time-scale update rule (Heusel et al., 2017) is required to satisfy Assumption 2, and extra weight decay terms are needed to satisfy Assumption 4. Further, the convergence point is not necessarily a stationary point of $\mathcal{L}_1(\theta,\phi)$.

# D DATASETS AND EXPERIMENTAL SETTINGS

We evaluate our method on the Digits dataset, the UCI binary databases, and the Frey faces dataset. The information of the datasets is summarized in Tab. 3. We implement our model using the TensorFlow (Abadi et al., 2016) library.
In all experiments, $q$ and $r$ are updated 100 times per update of $P$ and $Q$ , i.e. $K_{1} = 100$ and $K_{2} = 1$ . We use the ADAM (Kingma & Ba, 2014) optimizer with the learning rate $\alpha = 0.0003$ , the moving average ratios $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$ , and the batch size of 500. We use a continuous $z$ and the sigmoid activation function. All these hyperparameters are + +Table 3: Dimensions of the visible variables and sizes of the train, validation and test splits. + +
| Datasets | # visible | Train | Valid. | Test |
| --- | --- | --- | --- | --- |
| Digits | 64 | 1438 | 359 | - |
| Adult | 123 | 5,000 | 1414 | 26147 |
| Connect4 | 126 | 16,000 | 4000 | 47557 |
| DNA | 180 | 1400 | 600 | 1186 |
| Mushrooms | 112 | 2,000 | 500 | 5624 |
| NIPS-0-12 | 500 | 400 | 100 | 1240 |
| OCR-letters | 128 | 32,152 | 10,000 | 10,000 |
| RCV1 | 150 | 40,000 | 10,000 | 150,000 |
| Frey faces | 560 | 1965 | - | - |

Table 4: The model structures in RBM experiments.
| | Digits | Adult | Connect4 | DNA | Mushrooms | NIPS-0-12 | Ocr-letters | RCV1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| dimension of $z$ | 15 | 15 | 15 | 15 | 15 | 50 | 15 | 15 |
| dimension of $h$ | 50 | 50 | 50 | 50 | 50 | 200 | 50 | 50 |
| dimension of $v$ | 64 | 123 | 126 | 180 | 112 | 500 | 128 | 150 |

set according to the validation performance of an RBM on the Digits dataset and fixed throughout the paper unless otherwise stated. The sizes of the variational distributions depend on the structure of the MRF and are chosen according to the validation performance. The model structures in the RBM and DBM experiments are summarized in Tab. 4 and Tab. 5, respectively.

In a two-layer DBM with variables $v$, $h_1$ and $h_2$, we use an encoder $Q(h_1, h_2|v) = Q(h_1|v)Q(h_2|h_1)$ for both AdVIL and VCD. The decoder for AdVIL is the inverse of the encoder with one extra layer on top, namely $q(v, h_1, h_2) = \int q(v|h_1)q(h_1|h_2)q(h_2|z)q(z)dz$. In our implementation, both AdVIL and VCD exploit the fact that $v$ and $h_2$ are conditionally independent given $h_1$. The layer-wise structure potentially benefits the training of both methods. Nevertheless, in principle, any differentiable variational distributions can be used in AdVIL, and a systematic study is left for future work.

The authors of NVIL (Kuleshov & Ermon, 2017) propose two variants. The first one employs a mixture of Bernoulli distributions as $q$. The second one involves auxiliary variables and employs a neural network as $q$. Both variants scale up to an RBM of at most 64 visible units as reported in their paper (Kuleshov & Ermon, 2017). For a fair comparison, we carefully perform grid search over the default settings of NVIL and our settings based on their code and choose the best configuration. In this setting, the first variant of NVIL still fails to scale up to larger datasets, and the best version of the second variant shares the same key hyperparameters as AdVIL, including $K_{1} = 100$ and a batch size of 500.
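Ancestral sampling from a hierarchical decoder of the form $q(v,h_1,h_2) = \int q(v|h_1)q(h_1|h_2)q(h_2|z)q(z)dz$ is a single top-down pass. The sketch below is purely schematic: the tiny dimensions and random untrained weights are made up for illustration and are not the trained decoder (the paper uses e.g. $\dim(z)=15$, $\dim(h)=50$):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bernoulli_layer(weights, bias, inp):
    # One stochastic layer: each output unit ~ Bernoulli(sigmoid(w . inp + b)).
    out = []
    for w_row, b in zip(weights, bias):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w_row, inp)) + b)
        out.append(1 if random.random() < p else 0)
    return out

# Made-up tiny dimensions, for illustration only.
dz, dh2, dh1, dv = 3, 4, 4, 6

def rand_w(rows, cols):
    return [[random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

W_h2, b_h2 = rand_w(dh2, dz), [0.0] * dh2
W_h1, b_h1 = rand_w(dh1, dh2), [0.0] * dh1
W_v, b_v = rand_w(dv, dh1), [0.0] * dv

# Ancestral sampling through q(z) q(h2|z) q(h1|h2) q(v|h1):
z = [random.gauss(0.0, 1.0) for _ in range(dz)]
h2 = bernoulli_layer(W_h2, b_h2, z)
h1 = bernoulli_layer(W_h1, b_h1, h2)
v = bernoulli_layer(W_v, b_v, h1)
```

This is why sampling from the decoder is cheap compared to Gibbs sampling from the MRF: one pass through the chain yields an exact sample from $q$.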

# E MORE RESULTS

# E.1 EMPIRICAL VERIFICATION OF THEOREM 1

We empirically test the assumption $||\sum_{v,h}\delta(v,h)\frac{\partial\mathcal{E}(v,h)}{\partial\theta_k}||_2 < ||\frac{\partial\mathcal{L}_1(\theta_k,\phi_k)}{\partial\theta_k}||_2$ for $k < 10000$, where $\delta(v,h) = q(v,h) - P(v,h)$. Note that computing $||\frac{\partial\mathcal{L}_1(\theta_k,\phi_k)}{\partial\theta_k}||_2$ exactly requires summing over $v$ and $h$; we therefore train a small RBM on a synthetic dataset, where the dimensions of $v$, $h$ and $z$ are all 4. The data distribution is a categorical distribution over $\{0,1\}^4$, sampled from a Dirichlet distribution with all concentration parameters set to one. We draw 10,000, 1,000 and 1,000 i.i.d. samples from the categorical distribution for training, validation and test, respectively. We find $K_1 = 10$ is sufficient in this case.

Table 5: The model structures in DBM experiments.
| | Digits | Adult | Connect4 | DNA | Mushrooms | NIPS-0-12 | Ocr-letters | RCV1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| dimension of $z$ | 15 | 15 | 15 | 15 | 15 | 50 | 15 | 15 |
| dimension of $h_1$ | 50 | 50 | 50 | 50 | 50 | 200 | 50 | 50 |
| dimension of $h_2$ | 50 | 50 | 50 | 50 | 50 | 200 | 50 | 50 |
| dimension of $v$ | 64 | 123 | 126 | 180 | 112 | 500 | 128 | 150 |

![](images/f717f48105ff3083ba5ce7da19a92a8f4fa6987cc3f18d7cf2c2980c565b1c5f.jpg)
(a)

![](images/673b348f24966fc26f3d4f97ec51b64630ceee87970c8e77e5d323e8cf1cbbce.jpg)
(b)
Figure 5: (a) shows that $\left\| \sum_{v,h} \delta(v,h) \frac{\partial \mathcal{E}(v,h)}{\partial \theta_k} \right\|_2$ (in red) is less than $\left\| \frac{\partial \mathcal{L}_1(\theta_k, \phi_k)}{\partial \theta_k} \right\|_2$ (in blue) during training. (b) shows that both the evidence lower bound (ELBO) $-\mathcal{L}_1$ (in red) and the log-likelihood $-\mathcal{L}$ (in blue) converge gradually. The ELBO may be slightly overestimated because we approximate the first term in Eqn. (7) by a Monte Carlo estimate.

The results are shown in Fig. 5. It can be seen that a decoder with neural networks and auxiliary variables is sufficiently powerful to track the model distribution, and therefore Assumption 5 in Theorem 1 holds for the first 10,000 steps. Besides, the model converges gradually, which agrees with our convergence analysis. The gap between the red curve in Fig. 5 (a) and the horizontal axis can be further reduced by using a more powerful $q(v,h)$ and advanced optimization techniques for $q(v,h)$.

Theoretically, it is not clear how well Lemma 1 holds as the number of variables increases. Intuitively, we agree that it may get harder to satisfy this condition in a high-dimensional space. However, it is still possible with advanced optimization methods because the MRF is randomly initialized and learned gradually, and the variational decoder is trained to track the MRF (based on an old version of the decoder) after every update of the MRF. Empirically, computing both sides of the condition requires the value of the partition function, which is as hard as training the MRF. Therefore, verifying this condition during training in a high-dimensional case is highly nontrivial.

# E.2 SAMPLES IN RBM

We present the samples from the RBM $P$ and the decoder $q$ in Fig. 6.
In this case, we set the number of hidden units to 50; other settings remain the same as in Sec. 5.1. The first column demonstrates that the decoder is a good approximate sampler for the RBM. Note that the samples from the decoder are obtained by efficient ancestral sampling, while those from the RBM are obtained by Gibbs sampling after 100,000 burn-in steps. The second column shows that if $\mathcal{H}(Q)$ is removed, both models collapse to a certain mode of the data. The third column shows that if $\mathcal{H}(q)$ is removed, both models fail to generate meaningful digits. These results demonstrate the importance of the entropy terms and the necessity of approximating $\mathcal{H}(q)$ in a principled way.

# E.3 ADVIL WITH AN AUTOREGRESSIVE PRIOR

Here we present the results of AdVIL with a neural autoregressive distribution estimator (NADE) (Larochelle & Murray, 2011) as the prior on the Digits dataset. We use the same RBM as in Sec. 5.1.
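For intuition on why an autoregressive decoder has a tractable likelihood (and hence a tractable entropy), the sketch below implements a tiny NADE-style autoregressive Bernoulli model. All parameters are made up for illustration; this only demonstrates the factorization $p(h) = \prod_i p(h_i \mid h_{<i})$, not the trained NADE used in these experiments:

```python
import itertools
import math
import random

random.seed(1)

# Made-up parameters of a tiny autoregressive Bernoulli model over 4 binary units:
# p(h) = prod_i p(h_i | h_<i), with logit_i = b[i] + sum_{j<i} W[i][j] * h_j.
n = 4
b = [0.1, -0.2, 0.3, 0.0]
W = [[0.0] * n for _ in range(n)]
W[1][0], W[2][1], W[3][0] = 0.5, -0.7, 0.4

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_prob(h):
    # Exact log-likelihood, computed unit by unit (no partition function needed).
    lp = 0.0
    for i in range(n):
        p = sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(i)))
        lp += math.log(p if h[i] == 1 else 1.0 - p)
    return lp

def sample():
    # Ancestral sampling is sequential: each unit conditions on the previous ones.
    h = []
    for i in range(n):
        p = sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(i)))
        h.append(1 if random.random() < p else 0)
    return h

# Sanity check: the probabilities of all 2^4 configurations sum to one.
total = sum(math.exp(log_prob(list(h))) for h in itertools.product((0, 1), repeat=n))
assert abs(total - 1.0) < 1e-9
```

The sequential loop in `sample()` also illustrates why drawing samples from NADE is slow relative to the one-pass hierarchical decoder: each unit must wait for all preceding ones.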
We set $K_{1} = 5$ in the NADE decoder and $K_{1} = 100$ in the hierarchical decoder. The two models have a similar model capacity and training time. + +![](images/bfa54184e0e7c4e73256ca0d0d6614ffaa52ed6fa3ecf8480dfee33c56e35e1b.jpg) +(b) AdVIL with a hierarchical decoder + +The dimension of the latent units in NADE is 15, which is the same as the dimension of the auxiliary variables in the hierarchical decoder presented in Sec. 3.3. + +Compared to the hierarchical decoder, the NADE decoder has a tractable entropy and hence does not require $r(z|h)$ . However, getting samples from NADE is slow while AdVIL requires samples during training. Therefore, $K_{1} = 5$ for the NADE decoder has a similar training cost as the hierarchical decoder. + +Fig. 7 compares the two decoders. AdVIL with the NADE decoder achieves a slightly worse and unstable result. + +# E.4 LEARNING CURVES AND ANALYSIS IN DBM + +We plot the learning curves of AdVIL and VCD in DBM, as shown in Fig. 8. AdVIL achieves a better result than VCD, which agrees with the quantitative results in Tab. 2. Note that we report the test NLL results in Tab. 2 according to the best validation performance. + +There are two types of biases introduced by using $Q(h_{1}, h_{2}|v) = Q(h_{2}|h_{1})Q(h_{1}|v)$ in VCD. The first type of bias is introduced by using the approximate free energy in both the positive phase and the negative phase (See Eqn. (3) and Eqn. (4)). The second bias is introduced by the usage of $Q(h_{1}|v)$ to approximate $P(h_{1}|v)$ in the Gibbs sampling procedure to approximate the negative phase in Eqn. (4). The influence of the two types of biases on the negative phase is not clear, which can potentially explain the relatively inferior performance of VCD in DBM. In contrast, AdVIL approximates the negative phase by introducing one bias (i.e., the approximation error between $q(v,h)$ and $P(v,h)$ ), whose effect on learning is theoretically characterized by Theorem 1. 
The above results and analysis essentially demonstrate the advantages of AdVIL over CD-based methods in DBM and the importance of developing black-box inference and learning algorithms for general MRFs. + +![](images/5dff4c37d90dcb13007fe5332d5705ede748a78c631a0c1a9c916bd11c4da8c2.jpg) +(a) AdVIL + +![](images/2e60f9daca83023581f6daa775af535dba68e0c038df3369e02ce7c4189ddd57.jpg) +(b) VCD +Figure 8: DBM results of AdVIL and VCD on the Digits dataset. The curve of AdVIL is less stable due to the presence of the minimax optimization problem but AdVIL achieves a better performance. + +Table 6: The AIS results of NVIL and AdVIL in RBM with the means and standard deviations. The results are averaged over three runs with different random seeds. + +
| Method | Digits | Adult | Connect4 | DNA | Mushrooms | NIPS-0-12 | Ocr-letters | RCV1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NVIL-mean | -27.36 | -20.05 | -24.71 | -97.71 | -29.28 | -290.01 | -47.56 | -50.47 |
| NVIL-standard | 0.13 | 0.27 | 0.61 | 0.12 | 0.31 | 2.68 | 0.14 | 0.09 |
| AdVIL-mean | -26.34 | -19.29 | -21.95 | -97.59 | -19.59 | -276.42 | -45.64 | -50.22 |
| AdVIL-standard | 0.02 | 0.07 | 1.04 | 0.10 | 2.01 | 0.21 | 0.34 | 0.06 |

# E.5 AIS RESULTS WITH STANDARD DEVIATIONS

The AIS results in RBM and DBM, with the means and standard deviations, are shown in Tab. 6 and Tab. 7, respectively.

Table 7: The AIS results of VCD-1 and AdVIL in DBM with the means and standard deviations. The results are averaged over three runs with different random seeds.
| Method | Digits | Adult | Connect4 | DNA | Mushrooms | NIPS-0-12 | Ocr-letters | RCV1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VCD-mean | -28.49 | -22.26 | -26.79 | -97.59 | -23.15 | -356.26 | -45.77 | -50.83 |
| VCD-standard | 0.47 | 0.51 | 1.42 | 0.03 | 1.42 | 34.70 | 1.15 | 0.62 |
| AdVIL-mean | -27.89 | -20.29 | -26.34 | -99.40 | -21.21 | -287.15 | -48.38 | -51.02 |
| AdVIL-standard | 0.44 | 0.24 | 1.50 | 0.71 | 0.40 | 0.63 | 1.32 | 0.42 |
\ No newline at end of file
diff --git a/towardevaluatingrobustnessofdeepreinforcementlearningwithcontinuouscontrol/full.md b/towardevaluatingrobustnessofdeepreinforcementlearningwithcontinuouscontrol/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..dd5574e3e034d1188afc76e05097e0de2234c9f6
--- /dev/null
+++ b/towardevaluatingrobustnessofdeepreinforcementlearningwithcontinuouscontrol/full.md
@@ -0,0 +1,290 @@
+# TOWARD EVALUATING ROBUSTNESS OF DEEP REINFORCEMENT LEARNING WITH CONTINUOUS CONTROL

Tsui-Wei Weng $^{1,\ast}$ , Krishnamurthy (Dj) Dvijotham $^{2,\clubsuit}$ , Jonathan Uesato $^{2,\clubsuit}$ , Kai Xiao $^{1,\clubsuit,\ast}$ , Sven Gowal $^{2,\clubsuit}$ , Robert Stanforth $^{2,\clubsuit}$ , Pushmeet Kohli $^{2}$ $^{1}$ MIT, $^{2}$ DeepMind
twweng@mit.edu, {dvij, juesato}@google.com, kaix@mit.edu, {sgowal, stanforth, pushmeet} @google.com

# ABSTRACT

Deep reinforcement learning has achieved great success in many previously
difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study adversarial attacks on deep RL agents with continuous control and propose the first two-step algorithm based on learned dynamics models. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free attack baselines in degrading agent performance as well as driving agents to unsafe states.

# 1 INTRODUCTION

Deep reinforcement learning (RL) has revolutionized the fields of AI and machine learning over the last decade. The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (Mnih et al., 2015; Lillicrap et al., 2015; Tassa et al., 2018). Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks.

A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward. As a pioneering work in this field, Huang et al. (2017) show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small adversarial input perturbations in five Atari games.
Lin et al. (2017) further improve the efficiency of the attack in Huang et al. (2017) by leveraging heuristics to detect a good time to attack and by luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model.

Since the agents have discrete actions in Atari games (Huang et al., 2017; Lin et al., 2017), the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, as also pointed out in Huang et al. (2017): the adversary crafts input perturbations that drive the agent's new action to deviate from its nominal action. However, for agents with continuous actions, the above strategies cannot be directly applied. Recently, Uesato et al. (2018) studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting. Their goal was to efficiently and effectively find catastrophic failures given a trained agent and to predict its failure probability. The key to success in Uesato et al. (2018) is the availability of the agent's training history. However, such information may not always be accessible to users, analysts, and adversaries.

Besides, it may not be surprising that adversarial attacks exist for deep RL agents, since adversarial attacks have been shown to be possible for neural network models in various supervised learning tasks. However, the vulnerability of RL agents cannot be easily discovered by existing baselines, which are model-free and built upon random searches and heuristics – this is also verified by our extensive experiments on various domains (e.g. walker, humanoid, cartpole, and fish), where the agents still achieve close to their original best rewards even with baseline attacks at every time step.
Hence it is important and necessary to have a systematic methodology for designing non-trivial adversarial attacks that can efficiently and effectively discover the vulnerabilities of deep RL agents – this is indeed the motivation of this work.

This paper takes a first step in this direction by proposing the first sample-efficient model-based adversarial attack. Specifically, we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available. We consider threat models where the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models. Experimental results show that our proposed model-based attack can successfully degrade agent performance and is also more effective and efficient than model-free attack baselines.

The contributions of this paper are the following:

- To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions. Our proposed attack algorithm is a general two-step algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation).
- We compare the efficiency and effectiveness of our proposed model-based attack with model-free attack baselines based on random searches and heuristics. We show that our model-based attack can degrade agent performance in numerous MuJoCo domains by up to $4 \times$ in terms of total reward and up to $4.6 \times$ in terms of distance to unsafe states (smaller means stronger attacks) compared to the model-free baselines.
- Our proposed model-based attack also outperforms all the baselines by a large margin in a weaker adversary setting where the adversary cannot attack at every time step.
In addition, an ablation study on the effect of the planning length in our proposed technique suggests that our method can still be effective even when the learned dynamics model is not very accurate.

# 2 BACKGROUND

**Adversarial attacks in reinforcement learning.** Compared to the rich literature on adversarial examples in image classification (Szegedy et al., 2013) and other applications (including natural language processing (Jia & Liang, 2017), speech (Carlini & Wagner, 2018), etc.), there is relatively little prior work studying adversarial examples in deep RL. Among the first works in this field are Huang et al. (2017) and Lin et al. (2017), both of which focus on deep RL agents in Atari games with pixel-based inputs and discrete actions. In addition, both works assume the attacked agent has an accurate policy, and the problem of finding adversarial perturbations of the visual input reduces to the problem of finding adversarial examples on image classifiers. Hence, Huang et al. (2017) applied FGSM (Goodfellow et al., 2015) to find adversarial perturbations, and Lin et al. (2017) further improved the efficiency of the attack with a heuristic for observing a good time to attack: when there is a large gap in the agent's action preference between the most-likely and least-likely actions. In a similar direction, Uesato et al. (2018) study the problem of adversarial testing by leveraging rejection sampling and the agent's training histories. With the availability of training histories, Uesato et al. (2018) successfully uncover bad initial states with far fewer samples than conventional Monte-Carlo sampling techniques. Recent work by Gleave et al. (2019) considers an alternative setting where the agent is attacked by another agent (known as an adversarial policy), which is different from the two threat models considered in this paper.
Finally, besides adversarial attacks in deep RL, recent work (Wang et al., 2019) studies verification of deep RL agents under attack, which is beyond the scope of this paper.

**Learning dynamics models.** Model-based RL methods first acquire a predictive model of the environment dynamics, and then use that model to make decisions (Atkeson & Santamaria, 1997). These model-based methods tend to be more sample efficient than their model-free counterparts, and the learned dynamics models can be useful across different tasks. Various works have focused on the most effective ways to learn and utilize dynamics models for planning in RL (Kurutach et al., 2018; Chua et al., 2018; Chiappa et al., 2017; Fu et al., 2016).

![](images/11f67d4392b303bdc7818c39a92a841ff6541be2f716ffaa956c613c6ca46bb0.jpg)
(a) Attack observations of agent.

![](images/32da324a8a9e01bb958093acf9e1c7b01c615c3f4f149fa31a4e94257f47fbdd.jpg)
(b) Attack actions of agent.
Figure 1: Two commonly-used threat models.

# 3 PROPOSED FRAMEWORK

In this section, we first describe the problem setup and the two threat models considered in this paper. Next, we present an algorithmic framework to rigorously design adversarial attacks on deep RL agents with continuous actions.

# 3.1 PROBLEM SETUP AND FORMULATION

Let $s_i \in \mathbb{R}^N$ and $a_i \in \mathbb{R}^M$ be the observation vector and action vector at time step $i$, and let $\pi : \mathbb{R}^N \to \mathbb{R}^M$ be the deterministic policy (agent). Let $f: \mathbb{R}^N \times \mathbb{R}^M \to \mathbb{R}^N$ be the dynamics model of the system (environment), which takes the current state-action pair $(s_i, a_i)$ as input and outputs the next state $s_{i+1}$. We now take the role of an adversary, whose goal is to drive the agent to the (unsafe) target states $s_{\text{target}}$ within the $\epsilon$ budget constraints.

We can formulate this goal as two optimization problems, as we illustrate shortly below.
Within this formalism, we can consider two threat models:

**Threat model (i): Observation manipulation.** For the threat model of observation manipulation, an adversary is allowed to manipulate the observation $s_i$ that the agent perceives within an $\epsilon$ budget:

$$
\left\| \Delta s_i \right\|_{\infty} \leq \epsilon, \quad L_s \leq s_i + \Delta s_i \leq U_s, \tag{1}
$$

where $\Delta s_i \in \mathbb{R}^N$ is the crafted perturbation and $U_s \in \mathbb{R}^N$, $L_s \in \mathbb{R}^N$ are the observation limits.

**Threat model (ii): Action manipulation.** For the threat model of action manipulation, an adversary can craft $\Delta a_i \in \mathbb{R}^M$ such that

$$
\left\| \Delta a_i \right\|_{\infty} \leq \epsilon, \quad L_a \leq a_i + \Delta a_i \leq U_a, \tag{2}
$$

where $U_a \in \mathbb{R}^M$, $L_a \in \mathbb{R}^M$ are the limits of the agent's actions.

**Our formulations.** Given an initial state $s_0$ and a pre-trained policy $\pi$, our (adversary) objective is to minimize the total distance of each state $s_i$ to the pre-defined target state $s_{\text{target}}$ up to the unrolled (planning) steps $T$. This can be written as the following optimization problems in Equations 3 and 4 for Threat models (i) and (ii) respectively:

$$
\min_{\Delta s_i} \sum_{i=1}^{T} d\left(s_i, s_{\text{target}}\right) \tag{3}
$$

s.t. $a_i = \pi(s_i + \Delta s_i)$, $s_{i+1} = f(s_i, a_i)$, Constraint (1), $i \in \mathbb{Z}_T$,

$$
\min_{\Delta a_i} \sum_{i=1}^{T} d\left(s_i, s_{\text{target}}\right) \tag{4}
$$

s.t. $a_i = \pi(s_i)$, $s_{i+1} = f(s_i, a_i + \Delta a_i)$, Constraint (2), $i \in \mathbb{Z}_T$.

A common choice of $d(x,y)$ is the squared $\ell_2$ distance $\|x - y\|_2^2$, $f$ is the learned dynamics model of the system, and $T$ is the unrolled (planning) length using the dynamics model.
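Concretely, the feasible set in Equation 1 (and its analogue in Equation 2) is an $\ell_\infty$ ball intersected with a box, so projecting a candidate perturbation onto it takes only two element-wise clips. A minimal numpy sketch (the helper name `project` and the toy numbers are ours, for illustration only):

```python
import numpy as np

def project(delta, x, eps, lower, upper):
    """Project a candidate perturbation onto the feasible set of Eq. (1)/(2):
    ||delta||_inf <= eps  and  lower <= x + delta <= upper."""
    delta = np.clip(delta, -eps, eps)             # l_inf budget
    delta = np.clip(delta, lower - x, upper - x)  # keep x + delta inside [lower, upper]
    return delta

s = np.array([0.0, 0.9])                          # toy observation
d = project(np.array([0.5, 0.5]), s, eps=0.2,
            lower=np.array([-1.0, -1.0]), upper=np.array([1.0, 1.0]))
# d = [0.2, 0.1]: both coordinates clipped to the eps budget, and the
# second one further clipped so that s + d stays within the upper limit.
```

This projection is exactly the step that keeps the iterates feasible in the projected-gradient procedure of Section 3.2.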
# 3.2 OUR ALGORITHM

In this section, we propose a two-step algorithm to solve Equations 3 and 4. The core of our proposal consists of two steps: learning a dynamics model $f$ of the environment and deploying optimization techniques to solve Equations 3 and 4. We first discuss the details of each step, and then present the full algorithm at the end of this section.

**Step 1: learn a good dynamics model $f$.** Ideally, if $f$ is the exact (perfect) dynamics model of the environment and we assume an optimization oracle for Equations 3 and 4, then the solutions are indeed the optimal adversarial perturbations that give the minimal total loss under the $\epsilon$-budget constraints. Thus, learning a good dynamics model can conceptually help to develop a strong attack. Depending on the environment, different forms of $f$ can be applied. For example, if the environment of concern is close to a linear system, then we could let $f(s, a) = As + Ba$, where $A$ and $B$ are unknown matrices to be learned from sample trajectory $(s_i, a_i, s_{i+1})$ pairs. For a more complex environment, we could decide whether to still use a simple linear model (the next-state prediction may deviate far from the true next state, making the learned dynamics model less useful) or instead switch to a non-linear model, e.g. a neural network, which usually has better predictive power but may require more training samples. In either case, the model parameters $A$, $B$ or the neural network parameters can be learned via standard supervised learning on the sample trajectory pairs $(s_i, a_i, s_{i+1})$.

**Step 2: solve Equations 3 and 4.** Once we have learned a dynamics model $f$, the next immediate task is to solve Equations 3 and 4 to compute the adversarial perturbations of observations/actions.
When the planning (unrolled) length $T > 1$, Equation 3 usually cannot be directly solved by an off-the-shelf convex optimization toolbox, since the deep RL policy $\pi$ is usually a non-linear and non-convex neural network. Fortunately, we can incorporate the two equality constraints of Equation 3 into the objective, and with the remaining $\epsilon$-budget constraint (Equation 1), Equation 3 can be solved via projected gradient descent (PGD). Similarly, Equation 4 can be solved via PGD to get $\Delta a_i$. We note that, similar to $n$-step model predictive control, our algorithm could use a much larger planning (unrolled) length $T$ when solving Equations 3 and 4 and then apply only the first $n$ ($\leq T$) adversarial perturbations on the agent over $n$ time steps. Besides, within the PGD framework, $f$ is not limited to feed-forward neural networks. Our proposed attack is summarized in Algorithm 2 for Step 1, and Algorithm 3 for Step 2.

# Algorithm 1 Collect_trajectories

1: Input: pre-trained policy $\pi$, MaxSampleSize $n_s$, environment env
2: Output: a set of trajectory pairs $\mathcal{S}$
3: $k \gets 0, \mathcal{S} \gets \emptyset$
4: $s_0 \gets \text{env.reset}()$
5: while $k < n_s$ do
6: $a_k \gets \pi(s_k)$
7: $s_{k+1} \gets \text{env.step}(a_k)$
8: $\mathcal{S} \gets \mathcal{S} \cup \{(s_k, a_k, s_{k+1})\}$
9: $k \gets k + 1$
10: end while
11: Return $\mathcal{S}$

# 4 EXPERIMENTS

In this section, we conduct experiments on a standard reinforcement learning environment suite for continuous control (Tassa et al., 2018).
Algorithm 2 learn_dynamics
1: Input: pre-trained policy $\pi$, MaxSampleSize $n_s$, environment env, trainable parameters $W$
2: Output: learned dynamics model $f(s,a;W)$
3: $S_{agent} \gets$ Collect_trajectories $(\pi, n_s, env)$
4: $S_{random} \gets$ Collect_trajectories(random_policy, $n_s$, env)
5: $f(s,a;W) \gets$ supervised_learning_algorithm $(S_{agent} \cup S_{random}, W)$
6: Return $f(s,a;W)$

Algorithm 3 model_based_attack
1: Input: pre-trained policy $\pi$, learned dynamics model $f(s,a;W)$, threat model, maximum perturbation magnitude $\epsilon$, unroll length $T$, apply-perturbation length $n$ ($\leq T$)
2: Output: a sequence of perturbations $\delta_1, \ldots, \delta_n$
3: if threat model is observation manipulation (Eq. 1) then
4: Solve Eq. 3 with parameters $(\pi, f, \epsilon, T)$ via PGD to get $\delta_1, \ldots, \delta_T$
5: else if threat model is action manipulation (Eq. 2) then
6: Solve Eq. 4 with parameters $(\pi, f, \epsilon, T)$ via PGD to get $\delta_1, \ldots, \delta_T$
7: end if
8: Return $\delta_1, \ldots, \delta_n$

We demonstrate results on 4 different environments in MuJoCo (Tassa et al., 2018) and corresponding tasks: Cartpole-balance/swingup, Fish-upright, Walker-stand/walk and Humanoid-stand/walk. For the deep RL agent, we train a state-of-the-art D4PG agent (Barth-Maron et al., 2018) with default Gaussian noise $\mathcal{N}(0,0.3\mathbf{I})$ on the action; the scores of the agents without attacks are summarized in Appendix A.3. The organization is as follows: we first evaluate the effectiveness of our proposed model-based attack and three model-free baselines in terms of both loss and reward. Next, we conduct an ablation study on the key parameter of our algorithm, the planning length $T$, evaluate our algorithm in a weaker attack setting, and discuss the efficiency of our proposed attack in terms of sample complexity.
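Before turning to the results, the two-step recipe of Algorithms 1-3 can be made concrete in the linear special case of Section 3.2 ($f(s,a) = As + Ba$, a linear stand-in policy $\pi(s) = Ks$, planning length $T = 1$). The following self-contained sketch is ours, not the paper's implementation: the toy environment, dimensions, and step sizes are invented stand-ins for the MuJoCo dynamics and the D4PG agent.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, EPS = 4, 2, 0.1

# Toy linear environment s' = A s + B a and linear policy a = K s.
A_true = rng.normal(size=(N, N)) * 0.3
B_true = rng.normal(size=(N, M)) * 0.3
K = rng.normal(size=(M, N)) * 0.3

def env_step(s, a):
    return A_true @ s + B_true @ a

# Step 1 (Algorithms 1-2): collect (s, a, s') triples with random
# exploration and fit f(s, a) = A s + B a by least squares.
S = rng.normal(size=(500, N))
Acts = rng.normal(size=(500, M))
S_next = S @ A_true.T + Acts @ B_true.T
X = np.hstack([S, Acts])                           # (500, N + M)
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)     # W stacks [A^T; B^T]
A_hat, B_hat = W[:N].T, W[N:].T                    # learned dynamics

# Step 2 (Algorithm 3): PGD on Eq. 3 with T = 1. The loss is
# || f(s, pi(s + delta)) - s_target ||^2, whose gradient w.r.t. delta
# is 2 (B K)^T (s' - s_target) for this linear model.
def pgd_attack(s, s_target, eps, steps=200, lr=0.1):
    delta = np.zeros_like(s)
    for _ in range(steps):
        s_next = A_hat @ s + B_hat @ (K @ (s + delta))
        grad = 2.0 * (B_hat @ K).T @ (s_next - s_target)
        delta = np.clip(delta - lr * grad, -eps, eps)  # projection step
    return delta

s0 = rng.normal(size=N)
s_target = np.zeros(N)                             # toy "unsafe" target state
delta = pgd_attack(s0, s_target, EPS)
loss_clean = np.sum((env_step(s0, K @ s0) - s_target) ** 2)
loss_adv = np.sum((env_step(s0, K @ (s0 + delta)) - s_target) ** 2)
# loss_adv < loss_clean: the perturbed observation drives the next state
# closer to the unsafe target than the unperturbed one.
```

With an exactly linear system, least squares recovers $A$ and $B$, and the projected gradient steps drive the next state closer to the unsafe target while respecting the $\epsilon$ budget; the paper's actual attack replaces the linear $f$ and $\pi$ with neural networks and unrolls over $T$ steps.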
**Evaluations.** We conduct experiments for 10 different runs, where the environment is reset to different initial states in different runs. For each run, we attack the agent for one episode of 1000 time steps (the default time interval is usually $10\mathrm{ms}$) and we compute the total loss and total return reward. The total loss measures the total distance of the current state to the unsafe states, and the total return reward measures the true accumulated reward from the environment based on the agent's actions. Hence, an attack algorithm is stronger if the total return reward and the total loss are smaller.

**Baselines.** We compare our algorithm with the following model-free attack baselines based on random searches and heuristics:

- rand-U: generate $m$ randomly perturbed trajectories from a Uniform distribution over the interval $[-\epsilon, \epsilon]$ and return the trajectory with the smallest loss (or reward),
- rand-B: generate $m$ randomly perturbed trajectories from a Bernoulli distribution with probability $1/2$ over $\{-\epsilon, \epsilon\}$, and return the trajectory with the smallest loss (or reward),
- flip: generate perturbations by flipping the agent's observations/actions within the $\epsilon$ budget in $\ell_{\infty}$ norm.

rand-U and rand-B are similar to Monte-Carlo sampling methods: we generate $m$ sample trajectories from random noise and report the loss/reward of the best trajectory (the one with the minimum loss or reward among all trajectories). We set $m = 1000$ throughout the experiments. See Appendix A.2 for more details.

**Our algorithm.** A 4-layer feed-forward neural network with 1000 hidden neurons per layer is trained as the dynamics model $f$ for each of the domains Cartpole, Fish, Walker and Humanoid. We use a standard $\ell_2$ loss (without regularization) to learn the dynamics model $f$.
Instead of using a recurrent neural network to represent $f$, we found that 1-step dynamics prediction with the 4-layer feed-forward network is already good for the MuJoCo domains we are studying. Specifically, for Cartpole and Fish, we found that 1000 episodes (1e6 training points) are sufficient to train a good dynamics model (the mean square errors of both the training and test losses are on the order of $10^{-5}$ for Cartpole and $10^{-2}$ for Fish), while for the more complicated domains like Walker and Humanoid, more training points (5e6) are required to achieve a low test MSE (on the order of $10^{-1}$ and $10^{0}$ for Walker and Humanoid respectively). Consequently, we use a larger planning (unrolled) length for Cartpole and Fish (e.g. $T = 10, 20$), while a smaller $T$ (e.g. 3 or 5) is used for Walker and Humanoid. Meanwhile, we focus on applying projected gradient descent (PGD) to solve Equations 3 and 4. We use Adam as the optimizer with 30 optimization steps, and we report the best result for each run from a combination of 6 learning rates, 2 unroll lengths $\{T_1, T_2\}$, and $n$ steps of applying the PGD solution with $n \leq T_i$.

![](images/192413887339cc25ddb95fedc1e681c0a9f1ca4b5fb907eb34729d49662c7054.jpg)
Figure 2: Video frames of the best attacks in each baseline among 10 runs for the Walker.walk example. Only our proposed attack can consistently make the Walker fall down (since we are minimizing its head height to zero).

# 4.1 RESULTS

For observation manipulation, we report the results on the Walker, Humanoid and Cartpole domains with tasks (stand, walk, balance, swingup) respectively. The unsafe states $s_{\text{target}}$ for Walker and Humanoid are set to zero head height, targeting the situation of falling down. For Cartpole, the unsafe states are set to a $180^{\circ}$ pole angle, corresponding to the cartpole neither swinging up nor balancing.
For the Fish domain, the unsafe states for the upright task correspond to a non-upright pose of the swimming fish, e.g. zero projection on the $z$-axis.

The full results of the two threat models, observation manipulation and action manipulation, are shown in Table 1a, b and c, d respectively. Since the loss is defined as the distance to the target (unsafe) state, the lower the loss, the stronger the attack. It is clear that our proposed attack achieves much lower loss in Table 1a & c than the other three model-free baselines, and the averaged ratios are listed in Table 1b & d. Notably, over the 10 runs, our proposed attack always outperforms the baselines for the threat model of observation perturbation, and on the Cartpole domain for the threat model of action perturbation, while still remaining superior to the baselines despite losing twice to the flip baseline on the Fish domain.

To get a better sense of the numbers, we give some quick examples below. For instance, as shown in Table 1a and b, the average total loss of walker head height is almost unaffected by the three baselines – if the walker successfully stands or walks, its head height usually has to be greater than 1.2 at every time step, which gives a total loss of 1440 ($1.2^2 \times 1000$, with the squared $\ell_2$ distance) for one episode – while our attack successfully lowers the walker head height, achieving an average total loss of 258 (468), roughly a head height of 0.51 (0.68) per time step for the stand (walk) task. Similarly, for the humanoid results, a successful humanoid usually has head height greater than 1.4, equivalently a total loss of 1960 for one episode, and Table 1a shows that the D4PG agent is robust to the perturbations generated by the three model-free baselines while being vulnerable to our proposed attack. Indeed, as shown in Figure 2, the

Table 1: Compare three model-free attack baselines (rand-U, rand-B, flip) and our algorithm (Ours) in 4 different domains and tasks.
We report the following statistics over 10 different runs: mean, standard deviation, averaged ratio, and best attack (the number of times an attack attains the smallest loss over the 10 runs). Results show that our attack outperforms all the model-free attack baselines for the observation manipulation threat model by a large margin on all statistics. Our proposed attack is also superior on the action manipulation threat model and wins on most of the evaluation metrics.

(a) Observation manipulation: mean and standard deviation (in parentheses)
| Total loss | rand-U | rand-B | flip | Ours |
| --- | --- | --- | --- | --- |
| Walker stand | 1462 (70) | 1126 (86) | 1458 (24) | 258 (55) |
| Walker walk | 1517 (22) | 1231 (31) | 1601 (18) | 466 (42) |
| Humanoid stand | 1986 (28) | 1808 (189) | 1997 (5) | 516 (318) |
| Humanoid walk | 1935 (22) | 1921 (31) | 1982 (9) | 1457 (146) |
| Cartpole balance | 4000 (0.02) | 3999 (0.04) | 3989 (2) | 2101 (64) |
| Cartpole swingup | 3530 (1) | 3525 (1) | 3516 (1) | 2032 (172) |
+ +(b) Observation manipulation: averaged ratio and rank-1 + +
| Total loss (avg ratio) | Ours/rand-U | Ours/rand-B | Ours/flip | best attack |
| --- | --- | --- | --- | --- |
| Walker stand | 0.18 | 0.23 | 0.18 | Ours: 10/10, others: 0/10 |
| Walker walk | 0.31 | 0.38 | 0.29 | Ours: 10/10, others: 0/10 |
| Humanoid stand | 0.26 | 0.29 | 0.26 | Ours: 10/10, others: 0/10 |
| Humanoid walk | 0.75 | 0.76 | 0.74 | Ours: 10/10, others: 0/10 |
| Cartpole balance | 0.53 | 0.53 | 0.53 | Ours: 10/10, others: 0/10 |
| Cartpole swingup | 0.58 | 0.58 | 0.58 | Ours: 10/10, others: 0/10 |
+ +(c) Action manipulation: mean and standard deviation (in parenthesis) + +
| Total loss | rand-U | rand-B | flip | Ours |
| --- | --- | --- | --- | --- |
| Cartpole balance | 4000 (0.03) | 3999 (0.08) | 3046 (1005) | 1917 (102) |
| Cartpole swingup | 3571 (1) | 3487 (7) | 1433 (4) | 1388 (50) |
| Fish upright | 935 (27) | 936 (24) | 907 (22) | 824 (84) |
+ +(d) Action manipulation: averaged ratio and rank-1 + +
| Total loss (avg ratio) | Ours/rand-U | Ours/rand-B | Ours/flip | best attack |
| --- | --- | --- | --- | --- |
| Cartpole balance | 0.48 | 0.48 | 0.63 | Ours: 10/10, others: 0/10 |
| Cartpole swingup | 0.39 | 0.40 | 0.97 | Ours: 10/10, others: 0/10 |
| Fish upright | 0.88 | 0.88 | 0.91 | Ours: 8/10, flip: 2/10 |
+ +Table 2: Compare three attack baselines (rand-U, rand-B, flip) and our algorithm (Ours) in three different domains and tasks. Performance statistics of 10 different runs are reported. +(a) The mean and standard deviation (in parenthesis) over 10 different runs + +
| Total reward | rand-U | rand-B | flip | Ours |
| --- | --- | --- | --- | --- |
| Walker stand | 937 (41) | 744 (48) | 993 (8) | 235 (38) |
| Walker walk | 941 (23) | 796 (21) | 981 (9) | 225 (50) |
| Humanoid stand | 927 (21) | 809 (85) | 959 (5) | 193 (114) |
| Humanoid walk | 934 (22) | 913 (21) | 966 (6) | 608 (66) |
| Cartpole balance | 995 (0.17) | 986 (0.16) | 985 (3) | 385 (6) |
| Cartpole swingup | 873 (0.75) | 851 (2) | 852 (0.29) | 353 (61) |
+ +(b) Average ratio and number of times our algorithm being the best attack over 10 runs. + +
| Total reward (avg ratio) | Ours/rand-U | Ours/rand-B | Ours/flip | best attack |
| --- | --- | --- | --- | --- |
| Walker stand | 0.25 | 0.32 | 0.24 | Ours: 10/10, others: 0/10 |
| Walker walk | 0.24 | 0.28 | 0.23 | Ours: 10/10, others: 0/10 |
| Humanoid stand | 0.21 | 0.24 | 0.20 | Ours: 10/10, others: 0/10 |
| Humanoid walk | 0.65 | 0.67 | 0.63 | Ours: 10/10, others: 0/10 |
| Cartpole balance | 0.39 | 0.39 | 0.39 | Ours: 10/10, others: 0/10 |
| Cartpole swingup | 0.41 | 0.42 | 0.42 | Ours: 10/10, others: 0/10 |
walker and humanoid fall down quickly (head height close to zero) under our specially-designed attack while remaining unaffected by all the other baselines.

# 4.2 DISCUSSION

**Evaluating the total reward.** Oftentimes, the reward function is complicated and its exact definition is often unavailable. Learning the reward function is also an active research field, which is beyond the coverage of this paper. Nevertheless, as long as we have some knowledge of unsafe states (which is often the case in practice), we can define unsafe states that are related to low reward, and thus performing attacks based on unsafe states (i.e. minimizing the total distance to unsafe states) naturally translates to decreasing the total reward of the agent. As demonstrated in Table 2, the results follow the same trend as the total loss results in Table 1, where our proposed attack significantly outperforms the other three baselines. In particular, our method can lower the average total reward by up to $4.96 \times$ compared to the baseline results, while the baseline results are close to the perfect total reward of 1000.

**Evaluating the effect of planning length.** To investigate the model effect over time, we perform ablation studies on the planning/unroll length $T$ of our proposed model-based attack in three examples: (I) cartpole.balance, (II) walker.walk, and (III) walker.stand.

(I) Cartpole balance. Our learned models are very accurate (test MSE on the order of $10^{-6}$). We observed that the prediction error of our learned model compared to the true model (the MuJoCo simulator) is around $10\%$ over 100 steps. Hence, we can choose $T$ to be very large (e.g. 20-100), and our experiments show that the result with $T = 100$ is slightly better; see Appendix A.4.

(II) Walker walk. This task is much more complicated than (I), and our learned model is less accurate (test MSE is 0.447).
Over 10 steps, the prediction error of our learned model compared to the true model is already more than $100\%$, and hence using a small $T$ for planning is more reasonable. Table 3a shows that $T = 1$ indeed gives the best attack results (decreasing the loss by $3.2\times$ and the reward by $3.6\times$ compared to the best baseline, rand-B), and the attack becomes less powerful as $T$ increases. Nevertheless, even with $T = 10$, our proposed technique still outperforms the best baseline (rand-B) by $1.4\times$ in both total loss and total reward.

Table 3: Ablation study on the planning length $T$. We compare 3 attack baselines (rand-U, rand-B, flip) and our algorithm (Ours) and report performance statistics over 10 different runs.

(a) domain: Walker, task: walk (observation perturbation)
| Walker.walk | loss mean | loss std | loss med | loss min | loss max | reward mean | reward std | reward med | reward min | reward max |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ours, T = 1 | 468 | 79 | 489 | 286 | 567 | 222 | 45 | 227 | 135 | 300 |
| Ours, T = 2 | 604 | 31 | 611 | 535 | 643 | 353 | 51 | 362 | 253 | 441 |
| Ours, T = 5 | 761 | 65 | 771 | 617 | 837 | 483 | 60 | 496 | 348 | 540 |
| Ours, T = 10 | 881 | 68 | 886 | 753 | 975 | 568 | 48 | 579 | 469 | 623 |
| Ours, T = 15 | 874 | 93 | 891 | 723 | 1002 | 583 | 58 | 604 | 483 | 647 |
| Ours, T = 20 | 937 | 62 | 950 | 804 | 993 | 634 | 41 | 638 | 559 | 687 |
| rand-U | 1517 | 22 | 1522 | 1461 | 1542 | 941 | 23 | 945 | 885 | 965 |
| rand-B | 1231 | 31 | 1234 | 1189 | 1272 | 796 | 21 | 796 | 766 | 824 |
| flip | 1601 | 18 | 1604 | 1562 | 1619 | 981 | 9 | 984 | 961 | 991 |
+ +(b) domain: Walker, task: stand (observation perturbation) + +
| Walker.stand | loss mean | loss std | loss med | loss min | loss max | reward mean | reward std | reward med | reward min | reward max |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ours, T = 1 | 322 | 84 | 319 | 202 | 453 | 257 | 67 | 265 | 163 | 366 |
| Ours, T = 2 | 279 | 55 | 264 | 223 | 391 | 246 | 40 | 232 | 200 | 322 |
| Ours, T = 5 | 163 | 53 | 154 | 93 | 246 | 193 | 27 | 188 | 154 | 238 |
| Ours, T = 10 | 84 | 46 | 67 | 42 | 165 | 153 | 24 | 142 | 132 | 194 |
| Ours, T = 15 | 101 | 40 | 82 | 57 | 157 | 164 | 23 | 152 | 143 | 201 |
| Ours, T = 20 | 117 | 41 | 98 | 68 | 193 | 170 | 21 | 161 | 149 | 207 |
| rand-U | 1462 | 70 | 1454 | 1341 | 1561 | 938 | 41 | 932 | 866 | 999 |
| rand-B | 1126 | 86 | 1130 | 973 | 1244 | 744 | 48 | 744 | 664 | 809 |
| flip | 1458 | 24 | 1451 | 1428 | 1501 | 993 | 8 | 997 | 979 | 999 |
+ +Table 4: Less frequency attack. Report statistics of 10 different runs with different initial states in the walker domain with task stand. + +
| | loss mean | loss std | loss med | loss min | loss max | reward mean | reward std | reward med | reward min | reward max |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ours | 934 | 152 | 886 | 769 | 1187 | 648 | 95 | 622 | 559 | 799 |
| rand-U | 1511 | 35 | 1502 | 1468 | 1558 | 970 | 20 | 964 | 947 | 999 |
| rand-B | 1431 | 77 | 1430 | 1282 | 1541 | 924 | 41 | 923 | 840 | 981 |
| flip | 1532 | 15 | 1537 | 1496 | 1546 | 996 | 5 | 999 | 984 | 1000 |
(III) Walker stand. The learned model is slightly more accurate than in (II) (test MSE is 0.089) for this task. Interestingly, Table 3b shows that with the more accurate walker.stand model (compared to the walker.walk model), $T = 10$ gives the best average total loss & reward, which are $13.4 \times$ and $4.9 \times$ smaller than the best baseline, rand-B. Note that even with $T = 1$, the worst choice among all our reported $T$, the result is still $3.5 \times$ and $2.9 \times$ better than the best baselines, demonstrating the effectiveness of our proposed approach.

The main takeaway from these experiments is that when the model is accurate, we can use a larger $T$ in our proposed attack, while when the model is less accurate, a smaller $T$ is more effective (as in the Walker.walk example). However, even under the most unfavorable hyperparameters, our proposed attack still outperforms all the baselines by a large margin.

**Evaluating the effectiveness of the attack.** We study a setting where attackers are less powerful: they can only attack every 2 time steps instead of every transition. Table 4 shows that our proposed attack is indeed much stronger than the baselines even when the attacker's power is limited to attacking every 2 time steps: (1) compared to the best results among the three baselines, our attack gives a $1.53 \times$ smaller average total loss; (2) the mean reward of all the baselines is close to the perfect reward, while our attack achieves a $1.43 \times$ smaller average total reward compared to the best baseline.

**Evaluating the efficiency of the attack.** We also study the efficiency of the attack in terms of sample complexity, i.e. how many episodes do we need to perform an effective attack? Here we adopt the convention of the control suite (Tassa et al., 2018), where one episode corresponds to 1000 time steps (samples), and learn the neural network dynamics model $f$ with different numbers of episodes.
Figure 3 in Appendix A.1 plots the total head height loss of the walker (task stand) for the 3 baselines and for our method with the dynamics model $f$ trained on three different numbers of samples, $\{5e5, 1e6, 5e6\}$, or equivalently $\{500, 1000, 5000\}$ episodes. We note that the sweep of hyperparameters is the same for all three models; the only difference is the number of training samples. The results show that for the baselines rand-U and flip, the total losses are roughly on the order of 1400-1500, while the stronger baseline rand-B still has total losses of 900-1200. However, if we solve Equation 3 with $f$ trained on $5e5$ or $1e6$ samples, the total losses decrease to the order of 400-700 and already win over the three baselines by a significant margin. As expected, using more samples (e.g. $5e6$, which is 5-10 times more) to learn a more accurate dynamics model benefits our attack method – the total losses can be further decreased by more than $2\times$ and are on the order of 50-250 over 10 different runs. See Appendix A.1 for more details.

Here we also compare our model-based attack to existing works (Uesato et al., 2018; Gleave et al., 2019) in terms of sample complexity. In (Uesato et al., 2018), $3e5$ episodes of training data are used to learn the adversarial value function, which is roughly $1000 \times$ more data than even our strongest adversary (with $5e3$ episodes). Similarly, (Gleave et al., 2019) use roughly $2e4$ episodes to train an adversary via deep RL, which is roughly $4 \times$ more data than ours $^2$.

# 5 CONCLUSIONS AND FUTURE WORKS

In this paper, we study the problem of adversarial attacks in deep RL with continuous control for two commonly-used threat models. We proposed the first model-based attack algorithm and showed that our formulation can be easily solved by off-the-shelf gradient-based solvers.
Extensive experiments on 4 MuJoCo domains show that our proposed algorithm outperforms all model-free attack baselines by a large margin. We hope our discovery of the vulnerability of deep RL agents can bring more safety awareness to researchers when they design algorithms to train deep RL agents. + +There are several interesting future directions that can be investigated based on this work, including learning reward functions to facilitate a more effective attack, extending our current approach to develop effective black-box attacks, and incorporating our proposed attack algorithm into adversarial training of deep RL agents. In particular, we think there are three important challenges that need to be addressed to study adversarial training of RL agents along with our proposed attacks: (1) The adversary and model need to be jointly updated. How do we balance these two updates and make sure the adversary is well-trained at each point in training? (2) How do we avoid cycles in the training process due to the agent overfitting to the current adversary? (3) How do we ensure the adversary does not overly suppress exploration, and how do we balance unperturbed vs. robust performance? + +# ACKNOWLEDGEMENT + +The authors thank Chongli Qin, Po-Sen Huang, Taylan Cemgil, Daniel J. Mankowitz, Nir Levine, Alistair Muldal, Gabriel Barth-Maron, Matthew W. Hoffman, Yuval Tassa, Tom Erez and Jost Tobias Springenberg for useful discussions and suggestions. + +# REFERENCES + +Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920, 2018. +Christopher G Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. International Conference on Robotics and Automation, 1997. +Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap.
Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617, 2018. +Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW), pp. 1-7. IEEE, 2018. +Silvia Chiappa, Sébastien Racanière, Daan Wierstra, and Shakir Mohamed. Recurrent environment simulators. International Conference on Learning Representations (ICLR), 2017. +Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. Neural Information Processing Systems (NIPS), 2018. +Justin Fu, Sergey Levine, and Pieter Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. Intelligent Robots and Systems (IROS), 2016. +Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, and Stuart Russell. Adversarial policies: Attacking deep reinforcement learning. arXiv preprint arXiv:1905.10615, 2019. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015. +Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017. +Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP), Outstanding paper award, 2017. +Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018. +Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. +Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun.
Tactics of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748, 2017. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. +Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018. +Jonathan Uesato, Ananya Kumar, Csaba Szepesvari, Tom Erez, Avraham Ruderman, Keith Anderson, Nicolas Heess, Pushmeet Kohli, et al. Rigorous agent evaluation: An adversarial approach to uncover catastrophic failures. arXiv preprint arXiv:1812.01647, 2018. +Yuh-Shyang Wang, Tsui-Wei Weng, and Luca Daniel. Verification of neural network control policy under persistent adversarial perturbation. arXiv preprint arXiv:1908.06353, 2019. + +# A APPENDIX + +# A.1 MORE ILLUSTRATION ON FIGURE 3 + +Figure 3 illustrates how the accuracy of the learned models affects our proposed technique: + +1. We first learn 3 models with 3 different numbers of samples (5e5, 1e6, and 5e6), and we found that with more training samples (e.g., 5e6, equivalently 5000 episodes), we are able to learn a more accurate model than with 5e5 training samples; +2. We plot the attack results (total loss) for our technique with the 3 learned models (denoted as PGD, num_train) as well as the baselines (randU, randB, Flip) on 10 different runs (initializations).
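The gradient-based optimization referred to above (denoted PGD in the plots) can be illustrated with a minimal sketch: projected gradient descent on a perturbation sequence under a learned dynamics model. The toy dynamics, loss, and finite-difference gradients below are illustrative stand-ins, not the implementation used in the paper.

```python
import numpy as np

def rollout_loss(f, loss, s0, actions, deltas):
    """Total loss of a rollout under learned dynamics f, with each
    nominal action perturbed by the corresponding delta."""
    s, total = s0, 0.0
    for a, d in zip(actions, deltas):
        s = f(s, a + d)
        total += loss(s)
    return total

def pgd_attack(f, loss, s0, actions, eps, iters=30, lr=0.05, fd=1e-5):
    """Minimize the total rollout loss over the perturbation sequence,
    projecting onto the L-infinity ball of radius eps after each step.
    Finite differences stand in for autodiff through f."""
    deltas = np.zeros_like(actions, dtype=float)
    for _ in range(iters):
        base = rollout_loss(f, loss, s0, actions, deltas)
        grad = np.zeros_like(deltas)
        for idx in np.ndindex(*deltas.shape):
            trial = deltas.copy()
            trial[idx] += fd
            grad[idx] = (rollout_loss(f, loss, s0, actions, trial) - base) / fd
        deltas = np.clip(deltas - lr * grad, -eps, eps)  # gradient step + projection
    return deltas
```

Here `loss` plays the role of the quantity the attacker drives down (e.g., total head height for the walker); in practice this objective is optimized with off-the-shelf gradient-based solvers differentiating through the learned network rather than by finite differences.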
+ +We show that with the more accurate learned model (5e6 training samples) we are able to achieve a stronger attack (total losses on the order of 50-200 over 10 different runs) than with the less accurate learned model (e.g., 5e5 training samples). However, even with a less accurate learned model, the total losses are on the order of 400-700, which already outperforms the best baselines by a margin of 1.3-2 times. The result in Figure 3 also suggests that a very accurate model isn't necessarily needed in our proposed method to achieve an effective attack. Of course, if the learned model is more accurate, then we are able to degrade the agent's performance even more. + +![](images/c6fb6b98ecbf1e902e4c19f6eccdae1ed35b3d009fde06bdb0261d8251c11f26.jpg) +Walker.stand, total head height loss +Figure 3: Comparison of sample sizes on Walker stand over 10 different initializations of the environment. The x-axis is the $k$ th initialization and the y-axis is the total loss of the corresponding initialization. + +# A.2 MORE DETAILS ON BASELINE IMPLEMENTATIONS + +For the baselines (rand-U and rand-B), the adversary generates 1000 trajectories with random noise directly, and we report the best loss/reward at the end of each episode. The detailed steps are listed below: + +Step 1: The perturbations are generated from a uniform distribution or a Bernoulli distribution within the range [-eps, eps] for each trajectory, and we record the total reward and total loss for each trajectory from the true environment (the MuJoCo simulator). +Step 2: Take the best (lowest) total reward/loss among the 1000 trajectories and report it in Tables 1 and 2. + +We note that here we assume the baseline adversary has an "unfair advantage" since they have access to the true reward (and then take the best attack result among 1000 trials), whereas our techniques do not have access to this information.
Without this advantage, the baseline adversaries (rand-B, rand-U) may be weaker if they use their learned model to find the best attack sequence. In any case, Tables 1 and 2 demonstrate that our proposed attack can successfully uncover vulnerabilities of deep RL agents while the baselines cannot. + +For the baseline flip, we add the perturbation (with the opposite sign and magnitude $\epsilon$ ) to the original state/action and project the perturbed state/action back into its limits. + +# A.3 SCORE OF LEARNED POLICY WITHOUT ATTACKS + +We use the default total timesteps $= 1000$ , and the maximum total reward is 1000. We report the total reward of the D4PG agents used in this paper below. The agents are well-trained and have total reward close to 1000, which outperforms agents trained by other learning algorithms on the same tasks (e.g., DDPG, A3C in Sec 6 (Tassa et al., 2018); PPO in Sec 5 (Abdolmaleki et al., 2018)), and thus the agents in this paper can be regarded as state-of-the-art RL agents for these continuous control tasks. The attack results in Tables 1 and 2 in our manuscript are hence representative.
| Domain | Task | Total reward |
| --- | --- | --- |
| Walker | stand | 994 |
| Walker | walk | 987 |
| Humanoid | stand | 972 |
| Humanoid | walk | 967 |
| Cartpole | balance | 1000 |
| Cartpole | swingup | 883 |
| Fish | upright | 962 |
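The rand-U / rand-B procedure described in A.2 amounts to a best-of-N random search over perturbation sequences. A minimal sketch, where the `rollout_reward` callback is a hypothetical stand-in for querying the true simulator:

```python
import numpy as np

def random_baseline(rollout_reward, shape, eps, dist="uniform",
                    trials=1000, seed=0):
    """Sample `trials` random perturbation sequences and keep the one
    achieving the lowest total reward (Steps 1 and 2 of A.2)."""
    rng = np.random.default_rng(seed)
    best_reward, best_deltas = np.inf, None
    for _ in range(trials):
        if dist == "uniform":            # rand-U
            deltas = rng.uniform(-eps, eps, size=shape)
        else:                            # rand-B: each entry is +/- eps
            deltas = eps * rng.choice([-1.0, 1.0], size=shape)
        reward = rollout_reward(deltas)
        if reward < best_reward:
            best_reward, best_deltas = reward, deltas
    return best_deltas, best_reward
```

Because the selection step uses the true reward, this baseline enjoys the "unfair advantage" noted in A.2.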
+ +# A.4 ADDITIONAL EXPERIMENTS ON ABLATION STUDY + +Table 5: Cartpole balance (action perturbation) + +
| Total loss | mean | std | med | min | max |
| --- | --- | --- | --- | --- | --- |
| Ours, T = 20 | 2173 | 51 | 2189 | 2087 | 2239 |
| Ours, T = 100 | 1951 | 113 | 1924 | 1851 | 2192 |
| rand-U | 4000 | 0 | 4000 | 4000 | 4000 |
| rand-B | 3999 | 0 | 3999 | 3999 | 3999 |
| flip | 3046 | 1005 | 3074 | 2060 | 3999 |
# TOWARDS A DEEP NETWORK ARCHITECTURE FOR STRUCTURED SMOOTHNESS + +# Haroun Habeeb + +Department of Computer Science + +University of Illinois at Urbana Champaign + +Champaign, IL 61820 + +haroun7@gmail.com + +# Sanmi Koyejo + +Department of Computer Science + +University of Illinois at Urbana Champaign + +Champaign, IL 61820 + +sanmi@illinois.edu + +# ABSTRACT + +We propose the Fixed Grouping Layer (FGL); a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model. FGL achieves this goal by connecting nodes across layers based on spatial similarity.
The use of structured smoothness, as implemented by FGL, is motivated by applications to structured spatial data, which is, in turn, motivated by domain knowledge. The proposed model architecture outperforms conventional neural network architectures across a variety of simulated and real datasets with structured smoothness. + +# 1 INTRODUCTION + +The effectiveness of predictive models often depends on the choice of inductive bias, and the extent to which this inductive bias captures real-world structure. One example of such bias encoding leading to improved performance is convolution. In principle, convolutional weights could be learned directly from data. However, in practice, imposing this structure leads to improved performance when compared to fully connected models, and as a result, convolutional neural networks (CNNs) have enjoyed wide use for computer vision tasks (Krizhevsky et al., 2012). Similarly, recurrent neural networks such as LSTMs are effective for text (Sundermeyer et al., 2012), and certain graphical models are ideal for sentence segmentation and labeling (Lafferty et al., 2001). Our work follows this philosophy. Specifically, we propose a feedforward layer for deep neural networks that is suitable for neuroimaging and potentially useful for other data where variables can be grouped due to underlying structure. + +Data with multiple input variables often exhibit some structure. For example, the El Niño dataset (Bay et al., 2000) consists of measurements by weather buoys in the ocean, and one expects that nearby buoys can be grouped together. Similarly, socio-economic data can often be grouped together by geographic proximity. Financial market data of individual stocks can be grouped together based on the industrial sector to which a company belongs.
Along similar lines, brain parcellations are a well studied paradigm for capturing the structure of brain activity (Thirion et al., 2014), often via statistical parcellation based on ward clustering (Ward Jr, 1963). The result of ward clustering is a tree where leaf nodes represent voxels of the brain and interior nodes represent grouping of voxels into spatial clusters. Figure 1 visualizes the output of ward clustering at various granularities when applied to the human connectome project resting state brain data (Van Essen et al., 2012). + +Contributions: Our primary technical contribution is the Fixed Grouping Layer (FGL). FGL is designed to extract features within each group, and additionally guarantees that each output vector is only affected by the input vectors related to it by the specified grouping. We demonstrate the benefit of using FGL on simulated experiments and real neuroimaging data. We compare FGL against fully connected networks, convolutional neural networks, CoordConv (Liu et al., 2018), and a closely related method proposed by Aydore et al. (2018). We extensively evaluate the performance of FGL on simulated and real brain imaging data showing improved performance. + +![](images/700e61e459b6e3506de6a72beb94794932e4570454d7a7b1ad9ed782d70b09bb.jpg) +4 regions + +![](images/47a3dd105813cc0c8171c4bec3f08cd3a15ba87f1bb43114c40c6e34463e892d.jpg) + +![](images/c7bfb3f59d92ba16887f388c2304f30ca92310f6c3f63310eb5d48ce8f097a9a.jpg) + +![](images/65a94dbebc08a39eb75433538c2ec9b836dc155ab2bc792892000968121a40bd.jpg) +32 regions + +![](images/dcee9ccf9e5c31c7bcd7ad3ae57caaeb01c3f1d9f9b38088c0653664dc2bdd66.jpg) + +![](images/0559a010a144e72bc497b04192437f4168159c09024c455d0c0657ed183a7d08.jpg) + +![](images/d6966a97231de8ef24541fbdaee6e676bb76788164080c5d71e56bd1b64f3ed4.jpg) +256 regions + +![](images/70577f392d893280260b9de2e4ffc9ef48ce0a2581f92c7b2190e665fe5a0353.jpg) +Figure 1: Computed brain parcellation at various granularities (details in the text). 
The text in the top left indicates the number of regions. Each color in each figure corresponds to a region/group. Notice the consistency between parcellations while increasing granularity. + +![](images/abe75ccab8f1e79b641b0d20454603d581c1e8f0b1bebd39867513f75ca4002e.jpg) +1024 regions + +![](images/73d9f5aec614d80a4f6b65233f3321df1fa45056282eeec71e58112bc917b384.jpg) + +![](images/7e18b541ff7c25d6c1bb88d7288aaf02e46eddc350640d8eb6684b2ea0fb42c9.jpg) + +![](images/84d5fd4abc296f313adbd19006ca986bb5c53b4dff45f3396872523ed15fe97c.jpg) + +# 1.1 DECODING BRAIN IMAGES: BACKGROUND AND RELATED WORK + +Functional Magnetic Resonance Imaging (fMRI) is a popular brain imaging technique which measures a physiological correlate of neuron activity (Huettel et al., 2004). The brain imaging scans are generally of two kinds: resting state and task data. Resting state data (rfMRI) is collected while the subject is at rest, i.e., while the subject is not actively engaged in a task. Task data (tfMRI) is collected while the subject is engaged in a predefined task, for example, a motor task such as moving their fingers. fMRI data can be represented as 3-dimensional images, and have rich structure that has been studied extensively, including in the ML literature (Koyejo et al., 2014; Park et al., 2013). Importantly, coarse correspondences have been discovered between brain regions and specific functions or behavior (Frost and Goebel, 2012; Sporns, 2013) + +We particularly focus on brain decoding – a standard task in fMRI brain data analysis where the brain image is used to predict the associated task or stimulus. Broadly, there are two types of brain decoding models: end to end models and models which perform dimensionality reduction followed by a low-dimensional prediction. On one hand, dimension reduction directly captures the notion of grouping variables together. On the other hand, end to end models rarely employ brain spatial structure. This observation motivates our work. 
In recent years, decoding from fMRI studies has been attempted using a variety of methods: Bzdok et al. (2015) use factored logistic regression while Sarraf and Tofighi (2016) use convolutional neural networks. Mensch et al. (2017) use a factored model after performing dimensionality reduction. Inspired by similar motivations, Aydore et al. (2018) construct a regularizer using feature groupings obtained from a fast clustering method. They demonstrate that such a regularizer outperforms dropout and $L_{2}$ regularization. However, they do not consider employing this structure in a deep model, opting instead for a wide shallow approach. Compared to Aydore et al. (2018), our results illustrate the benefits of depth combined with spatial sparsity for brain image decoding. + +# 2 FIXED GROUPING LAYER + +Next, we formally define our idea of groups. Given a set of input variables $\mathcal{X} = \{x_{i}:0\leq i < n_{in},i\in \mathbb{Z}\}$ , a grouping of variables, denoted by $\mathcal{G}$ , is a subset of the power-set of $\mathcal{X}$ such that each $x_{i}$ is in at least one set in $\mathcal{G}$ . That is, + +$$ +\mathcal{G} \subset 2^{\mathcal{X}}, \text{ such that } \forall x_{i} \in \mathcal{X}: x_{i} \in \bigcup \mathcal{G}. +$$ + +Each set in $\mathcal{G}$ is a group of variables. For example, in the case of a colored image, each pixel can be considered a variable with an associated feature vector of length 3, i.e., each $x_{i}$ represents a pixel and $x_{i} \in [0,1]^{3}$ . A spatially smooth grouping of these variables corresponds to a segmentation of the image. Optionally, the groups can be mutually exclusive.
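To make the definition concrete, a grouping can be encoded as a binary membership matrix over the variables (the same matrix $A$ used by the FGL layer below). A minimal sketch, assuming groups are given as sets of integer variable indices:

```python
import numpy as np

def group_matrix(groups, n_in):
    """Encode a grouping G as a binary matrix A with A[j, i] = 1 iff
    variable x_i belongs to group j. Every variable must appear in at
    least one group, mirroring the coverage condition above."""
    covered = set().union(*groups)
    assert covered == set(range(n_in)), "every x_i must be in some group"
    A = np.zeros((len(groups), n_in))
    for j, members in enumerate(groups):
        A[j, list(members)] = 1.0
    return A
```

Groups may overlap; mutual exclusivity simply makes the columns of $A$ one-hot.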
+ +![](images/5604a571c7db4ee7bcb2373389c25daf5530881c6f3c874e67a0111a150e647b.jpg) +(a) Generating Process + +![](images/c7ee21c2dbd5319cb4d4b061e563bd7ca34a6b51e02fb23130f1d1a88fd39d82.jpg) + +![](images/bf1f5fc079c229815fdacb17f2cc14f9e03facc155fa83aa3a30483c8168ee8a.jpg) + +![](images/065ff69263d75ca08220cdd8b0c750f5204fc18e6db2e878e1d2b43edb3cfd1a.jpg) +Figure 2: (a) Simulated Dataset: Random images are aggregated over regions/groupings to obtain activations $(z)$ that are used to assign labels. Groupings are comprised of multiple smaller regions, all spatially connected. (b) Voronoi diagrams and induced groups. The upper left image is a Voronoi diagram of 8 random sites (blue points) and lines that partition space into sets of points closest to the same site. The other images are possible groupings using 2 partitions. (c) Histogram of probability of assigned label. The y-axis is probability and x-axis is number of points. Most datapoints have $Pr(y) > 0.5$ (labels are low-noise). + +![](images/36e234ab7cea514515f8c11f30d63efab6d83ba222b4d9ff101775a680edf603.jpg) +(b) Voronoi diagrams + +![](images/070349a1893b9bb728137b40d7fc2e7ebda699d131b9365d236c3fc838de603b.jpg) +(c) Probability of label + +An FGL layer takes as input $n_{in}$ vectors of $c_{in}$ length each, where the $n_{in}$ vectors are grouped into $n_{out}$ groups. Further, let $c_{out}$ be the length of the vector associated with each group. Note that these groups do not have to be mutually exclusive, but mutually exclusive groups offer benefits that we describe in the supplementary. The model architecture is allowed to use multiple channels (analogous to the # of filters of standard convolutional networks). $c_{in}$ and $c_{out}$ are the number of input and output channels. Mathematically, the Fixed Grouping Layer is given by: + +$$ +z = A ((x v) \odot u) + b, +$$ + +where: $z \in \mathbb{R}^{n_{out},c_{out}}$ is the matrix representing the output with each row represents one group. 
$x \in \mathbb{R}^{n_{in},c_{in}}$ is a matrix representing the input - one row for each input vector. $A$ is a binary matrix that represents the grouping: $A_{j,i} = 1$ if and only if $x_{i}$ is in group $j$ . $\odot$ represents the Hadamard product (elementwise multiplication) (Horn, 1990). $u, v, b$ are parameters of the model. $v$ is used for a linear transformation from $\mathbb{R}^{c_{in}}$ to $\mathbb{R}^{c_{out}}$ , i.e., $v \in \mathbb{R}^{c_{in},c_{out}}$ . $u$ is a matrix of size $n_{in} \times c_{out}$ . $b$ , the bias, is a matrix of size $n_{out} \times c_{out}$ . We denote the $i^{th}$ input vector, i.e., the $i^{th}$ row of $x$ , by $x_{i}$ . Observe that FGL is a fully connected layer when there is only one group which contains all variables. + +# 3 EXPERIMENTS + +We construct a deep neural network for classification using repeated layers of FGL (and activation functions), followed by either a fully connected network or an FGL that groups all inputs into a single group. This is inspired by traditional CNN-based classification models. We provide a visualization of a simplified model in Figure S2. + +Regularization: We use weight normalization (Salimans and Kingma, 2016), which is a reparameterization of the weights of a neural network that decouples the norm and the direction of the weights. That is, for a single dimension of a fully connected layer, weight normalization reparameterizes $w$ as $w = g(\theta / ||\theta||)$ , where $\theta$ is a vector of the same length as $w$ and $g$ is a scalar. The network now learns $g, \theta$ instead of $w$ . For FGL we apply weight norm to both $u$ and $v$ . Weight norm is applied by treating $u$ as $c_{out}$ different vectors and $v$ as the weights of a fully connected layer. + +We use PyTorch (Paszke et al., 2017) to implement models, and nilearn (Abraham et al., 2014) to preprocess and visualize fMRI images. Training was done using Adam (Kingma and Ba, 2014) on 4 K80 GPUs.
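For reference, a minimal NumPy sketch of the FGL forward pass, a direct transcription of the equation $z = A((xv) \odot u) + b$ above (an illustration only, not the authors' PyTorch implementation):

```python
import numpy as np

def fgl_forward(x, A, u, v, b):
    """Fixed Grouping Layer: z = A((xv) . u) + b, with . elementwise.
    Shapes: x (n_in, c_in), v (c_in, c_out), u (n_in, c_out),
    A (n_out, n_in) binary grouping matrix, b (n_out, c_out)."""
    return A @ ((x @ v) * u) + b
```

Row $j$ of the output aggregates only the (transformed, reweighted) input vectors belonging to group $j$, which is the per-group feature extraction guarantee described above.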
Code is available at https://www.github.com/anon/repo and a minimal version is provided in the supplementary. + +# 3.1 BASELINES + +We consider brain decoding as a classification task, and use two common types of models as baselines: fully connected networks and convolution based models such as standard Convolutional + +Neural Networks (CNNs) and their CoordConv variant. To the best of our knowledge, our baselines include state of the art approaches on the considered datasets. + +Multinomial Logistic Regression (LR): Multinomial logistic regression is a standard model (Bishop, 2006a), given by: $\hat{y} = \text{softmax}(Wx + b)$ , for an input $x \in \mathbb{R}^d$ , parameterized by weights $W \in \mathbb{R}^{k,d}$ and bias $b \in \mathbb{R}^k$ where $k$ is the number of possible labels. Here, $\hat{y}$ is the vector of predicted probabilities for each class or label. Clearly multinomial logistic regression by itself uses no spatial information. + +Feedforward Neural Networks (FNN): FNNs are ideal for tasks where the input has neither a grid-like structure nor a sequential structure. The architecture is an alternating sequence of linear transformations and activation functions (Goodfellow et al., 2016). + +Convolutional Neural Networks (CNNs): CNNs LeCun et al. (1995), are a popular tool in deep learning. Many problems, where the inputs have a spatial representation, lend themselves to convolution. CNNs are popular not only for their flexibility but also because of the assumptions they make about the nature of the input - one of them being that dependencies between pixels are local and spatially invariant. These assumptions are usually appropriate for natural images. However, in the case of brain fMRI, since features are also dependent on position, i.e., features are not position invariant, CNNs might not work as well. + +CoordConv (CC): Liu et al. (2018) demonstrate that CNNs are sometimes unable to capture location-dependent spatial representations. e.g. 
transforming a pair of coordinates to a one-hot representation. One reason for this failure could be that the coordinate transformation problem directly conflicts with the underlying assumptions of CNNs. They propose the CoordConv layer as a solution. CoordConv is essentially a convolutional layer except that it includes coordinates as additional input channels. Thus, CoordConv enables CNNs where local dependencies change across the image. + +# 3.2 SIMULATED DATA + +First, we discuss grouping via Voronoi diagrams. Given a set of points, called sites, a Voronoi diagram (Aurenhammer, 1991; Okabe et al., 2009) is the division of a plane into regions based on distance to the sites. Usually, positions whose closest site is the same are grouped together. Consider the grouping induced by a set of $m$ sites $P = \{p_i : p_i \in [0, s]^2, 0 \leq i < m\}$ for some $s$ indicating the size of the plane: + +$$ +g_{i} = \left\{ x_{j} : \arg \min_{k} \| x_{j} - p_{k} \| = i \right\}, \quad \forall\, 0 \leq i < m. +$$ + +We use Voronoi diagrams because they create regions which are spatially connected. To increase the complexity of the task, we use groupings which are unions of arbitrarily chosen Voronoi regions, resulting in groups comprised of multiple spatially connected regions which may not be connected to each other. We provide an example in Figure 2b. + +Consider input data sampled from a Gaussian prior: $x \sim \mathcal{N}(\mathbf{0}, S)$ , where $\mathbf{0}$ is a zero-vector and $S$ is an arbitrary covariance matrix of appropriate size. We let $x \in \mathbb{R}^{s^2}$ for an integer $s$ , the idea being that $x$ is a flattened version of a grayscale image of size $s \times s$ . Next, suppose that datapoints are labelled based on a linear function. That is, $z|x \sim \mathcal{N}(Fx, \Sigma)$ , for a fixed covariance matrix $\Sigma$ , and a matrix $F$ of size $k \times s^2$ where $k$ is the number of labels.
The label, $y$ , is assigned based on $z$ : $y = \arg \max_i \text{softmax}(Wz)_i$ , for a full rank matrix $W$ . + +We briefly analyze this simple generative model in the context of FGL. Using conjugate priors (Bishop, 2006b), it is straightforward to show that, + +$$ +x | z \sim \mathcal {N} \left(F ^ {\top} \Sigma^ {- 1} z, \left(S ^ {- 1} + F ^ {\top} \Sigma^ {- 1} F\right) ^ {- 1}\right). \tag {1} +$$ + +To explore the implications of equation (1), consider an $F$ that is sparse such that the non-zero positions in each row of $F$ correspond to a segmentation of the input image (a grouping of pixels). For example, circular patches on the image, or in our case, Voronoi diagrams. Additionally, if $\Sigma$ is an identity matrix, then each dimension of $z$ corresponds to the sum of values of a group of pixels. + +Dataset: We create a dataset which samples $x$ from the the Gaussian prior with $S$ being an identity matrix. We use $s = 128$ so that each $x$ can be interpreted as a square image. We create $F$ by first creating a Voronoi diagram of 512 randomly selected points, and then merging these regions into $k = 32$ groups. We sample $z$ from the the corresponding conditional distribution. We fix a random $W$ and then assign the label $y$ with highest likelihood to the datapoint $x$ . We sample 50000 points to + +![](images/a959ef9c20973ebec13fa70aa9cd981b587627f7f79a75c7b8ad679ffcb46b91.jpg) +(a) + +![](images/cd35eb33b8f7339c0236ee20e6a98a763ae1e0fdb1de20c021c99f77d1b3e2d3.jpg) +(b) + +![](images/c1201c3e4576d401da7d4981c7004a61a1cb5b47322f744ec2231160108e64ee.jpg) +(c) +Figure 3: (a) Test accuracy (with error bars) on held out $20\%$ of simulated dataset vs. fraction of data used for training. The graph indicates that FGL has better empirical sample complexity. The small magnitude of error in estimation of performance indicates that models are well trained and the difference is due to the models themselves. 
(b) Minimum (across classes) F1 score on the held out test set vs. fraction of data used for training. The difference in performance is not due to performance on a single class/region, but rather across all labels. (c) Histogram of ground truth probability of labels for points where FGL is correct but the CNN misclassifies. This demonstrates that the misclassification by the CNN is not only on noisy datapoints but also on datapoints where the label should be clear. + +create the simulated dataset. A visualization of the process is provided in Figure 2a. To ensure that the dataset wasn't too noisy, we plot a histogram of the probability of the assigned label in Figure 2c. The histogram shows that only a small number of datapoints are noisy - in most cases, the assigned label has a probability of at least 0.5. + +# 3.2.1 MODELS + +To demonstrate the benefit of using the Voronoi diagram during classification, we train 4 models - Logistic Regression (LR), a Convolutional Neural Network (Conv), a CoordConv variant (CC) of the same CNN, and a model using our proposed layer - FGL followed by a fully connected network. Our FGL model is provided the Voronoi regions. The number of parameters in each model is roughly the same. Since the dataset uses labels that are linear in terms of $x$ , we use no non-linear activations in any of our models. We found that using maxpooling in the CNN and CoordConv hurt performance. We don't report results with an FNN because it performs similarly to LR. + +# 3.2.2 PROCEDURE AND ANALYSIS + +We create a test set using $20\%$ of the simulated dataset. The remaining points are used for training. For each model, we train using various quantities of available data, and test on the held out set. The results are aggregated over 10 runs - with a randomly sampled test set for each run. A plot of the test accuracy vs. fraction of data used for training is given in Figure 3.
We find that the standard deviation of accuracies of these models is small - indicating that the failures are not due to poor initialization or poor training but rather a difference in models.
+
+This experiment was designed to demonstrate a failure of convolution based models and also fully connected methods. Although this satisfies our intuition that using spatial structure should help drastically improve performance, we investigate the datapoints at which the CNN failed but FGL did not. The first thing to check was the probability of assigned labels for these points - a histogram of the same for a random subset of the testing set is provided in Figure 3c. The next sanity check is to ensure that the drop in performance isn't just for one set of regions or one class. To that end, we plot the lowest F1 score (lowest across classes) in Figure 3b. We see the same trend - FGL performs better than CNNs, CoordConv and Logistic Regression. These plots indicate the validity of the gain in performance - it is due to neither noisy labels, nor failure on any one label. Hence, using a grouping of variables seems to provide a significant benefit.
+
+# 3.3 FMRI CONTRAST PREDICTION
+
+We evaluate our models on 5 datasets which were used by Mensch et al. (2017): Archi (Pinel et al., 2007), Brainomics (Orfanos et al., 2017), Cam-CAN (Shafto et al., 2014), LA5c (Poldrack et al.,
+
+![](images/5682601eda209d79685c0e3c32211de4329fab2b149668f5db72188e5d85194d.jpg)
+Figure 4: Test accuracy: Out of sample accuracy measured on $30\%$ of dataset vs. fraction of subjects used for training. FGL performs well even when a small amount of data is used for training.
+
+![](images/7c41b69a7d80df21f37f2debc87e9241b6b952dda0ae36c0cf4265c7fff66407.jpg)
+
+![](images/7758df6bce2026d8c67d45bb7f3ae508888ded9ecf9c317c73e0219bd8f3f624.jpg)
+
+Table 1: Test Accuracy per dataset, model.
We report $p$-values for HCP only: it is the largest dataset, and the performance gaps there are not immediately obvious. The $p$-values come from a one-sided Wilcoxon rank sum test comparing each of the other methods to FGL (3 Layers) on the HCP dataset, and show that FGL (3 Layers) is better than the other models on HCP.
+
| Model | Archi | Brainomics | Cam-CAN | HCP | LA5c | p-value (HCP) |
+| --- | --- | --- | --- | --- | --- | --- |
+| LR | 81.00% | 74.42% | 63.29% | 91.70% | 61.12% | 5.413e-06 |
+| FNN | 82.72% | 81.47% | 61.52% | 92.16% | 60.86% | 3.789e-05 |
+| Conv | 84.23% | 90.85% | 63.77% | 91.38% | 61.99% | 5.413e-06 |
+| CC | 83.96% | 90.64% | 63.07% | 91.52% | 62.04% | 9.083e-05 |
+| FGL (1 Layer) | 85.78% | 88.65% | 67.23% | 92.70% | 64.49% | 0.001097 |
+| FGL (2 Layers) | 85.78% | 89.87% | 67.46% | 92.67% | 64.57% | 0.000525 |
+| FGL (3 Layers) | 87.07% | 90.38% | 67.27% | 93.36% | 64.24% | - |
+| Feature Grouping (b=10) | 75.80% | - | 59.00% | - | 53.09% | - |
+| Feature Grouping (b=50) | 76.81% | - | 59.77% | - | 57.17% | - |
+| Feature Grouping (b=100) | 78.55% | - | 59.66% | - | 60.54% | - |
+| Feature Grouping (b=200) | 73.48% | - | 58.56% | - | 53.79% | - |
+
+2016) and HCP (Van Essen et al., 2012). They have 78, 94, 605, 191 and 787 subjects respectively. Archi, Brainomics, Cam-CAN and LA5c are small datasets with 2340, 1786, 3025 and 5756 images respectively. On the other hand, HCP is larger with 18070 images. These datasets have different sets of labels corresponding to different cognitive processes. Each dataset is publicly available on NeuroVault (Gorgolewski et al., 2015), an aggregation of fMRI datasets. Our only required preprocessing for the task contrast data was to upsample Cam-CAN and Brainomics to the same resolution as HCP.
+
+The assessment of fMRI decoding is an important topic in its own right due to unique spatial characteristics and typical sample sizes. Varoquaux et al. (2017) show that leave-one-out strategies for cross validation can be unstable, and suggest that using reasonable defaults is a good strategy. Additionally, it is well known that having common subjects between train and test datasets can lead to misleading results. This is because such a test set does not measure how well a model can generalize from one subject to another. Hence, we evaluate models via out-of-sample accuracy, i.e., we hold out some subjects $(30\%)$ for the testing dataset in each run. Further, we train all models with reasonable defaults that we found were not critical for model performance.
+
+# 3.3.1 PARCELLATION
+
+Since Thirion et al. (2014) showed that Ward clustering provides good parcellations of the brain, we perform Ward clustering on a fraction of HCP resting state data (total size of 4TB). We downsample the resting state time series data to about $1\%$ of the original frequency, and use the brain activations at each position as the feature vectors for clustering. The downsampling is needed due to hardware constraints. We did not use task datasets for clustering since we would have to hold out more data for clustering - exacerbating data scarcity.
Additionally, resting state data is more easily acquired (Biswal et al., 2010), and there are strong correlations between tfMRI and rfMRI (Smith et al., 2009). Thus, using rfMRI should provide a good, if not better, parcellation of the brain.
+
+To make a deep network using FGL we require a hierarchical clustering and not just a parcellation. Hence, instead of using the segmentation produced by the parcellation algorithm provided by nilearn, we use the computed Ward tree. We then slice into the Ward clustering to produce parcellations with 32, 256 and 1024 regions. These have been visualized in Figure 1. Clearly, these groups are spatially connected. We need groupings of voxels into 1024 groups, then a grouping of these 1024 groups into 256 groups and finally a grouping of 256 groups into 32 groups. Since Ward clustering is a hierarchical clustering scheme and outputs a tree structure, we extract these groupings by making appropriate cuts into the tree.
+
+# 3.3.2 MODELS
+
+Fully Connected Models: We experimented with Fully Connected Neural Networks (FNN) and Multinomial Logistic Regression (LR) with and without Dimension Reduction using our parcellation. We found that using Dimension Reduction reduced performance and hence we do not report it. For FNNs, we tried 2- and 3-layer versions with intermediate sizes chosen from 64, 128, 256 and 512. The model with intermediate layer sizes of 512 and 128 worked best. The aforementioned models take a masked fMRI image as input and we used the MNI152 mask provided by nilearn. Empirically, we find that, for brain decoding, using a linear activation performs better than using non-linear activations. For this reason, we use a linear activation for our models. Unfortunately, we are unclear about why this occurs for this domain. We also evaluate Feature Grouping suggested by Aydore et al.
(2018) which is also designed to exploit structured sparsity but does this using a wide model (implemented using multiple randomized clusterings), unlike our deep FGL approach. We used code provided by the authors.
+
+Convolutional Neural Networks (Conv, CC): We experimented with a variety of architectures and found no improvement by using residual connections or Batch-Norm. We also report results using CoordConv. We found that using non-linear activations hurt the model's performance, similar to our finding with FNNs. Further, maxpooling also reduced performance. The architecture consists of five 3-D convolution layers of stride 2 and kernel size 4. The input volumes have size $91 \times 109 \times 91$, and convolution reduces the volume to $2 \times 3 \times 2$ with 128 channels. We flatten this volume and pass it through a fully connected network to get the score for each label. The architecture for the CoordConv is identical to the CNN since CoordConv only concatenates a few input channels to the input image. We use Conv to refer to the Convolutional Neural Network and CC to refer to the CoordConv variant.
+
+FGL: We use 3 layers of FGL, each of which uses the parcellation described earlier. The input images have 212455 voxels after masking. We treat every voxel as a variable with a single feature. These voxels are then reduced to 1024 groups with feature vectors of length 8 each. Next, these groups are reduced to 256 variables with 64 features and finally to 32 variables with 128 features. The final prediction is made by flattening the output of the last FGL layer and passing it through a fully connected layer. The resulting number of parameters is roughly 2 million, which is also roughly the same number of parameters used for the CNN and CC. While this is a large number of parameters, we found that reducing the number of parameters by changing the number of features for each intermediate variable decreases performance for both convolution and FGL.
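To make the layer shapes above concrete, here is a minimal NumPy sketch of a single FGL layer computing $A((xv) \odot u) + b$. The class name and random initialization scale are our own illustration, not the authors' PyTorch implementation.

```python
import numpy as np

class FGLSketch:
    """Minimal sketch of one Fixed Grouping Layer: y = A((x v) * u) + b."""
    def __init__(self, A, c_in, c_out, seed=0):
        rng = np.random.default_rng(seed)
        self.A = A                                   # (n_out, n_in): A[j, i] = 1 if variable i is in group j
        n_out, n_in = A.shape
        self.v = rng.standard_normal((c_in, c_out))  # linear transform, shared across variables
        self.u = rng.standard_normal((n_in, c_out))  # per-variable rescaling
        self.b = np.zeros((n_out, c_out))

    def forward(self, x):
        # x: (n_in, c_in) -> (n_out, c_out); (x v) * u is summed within each group.
        return self.A @ ((x @ self.v) * self.u) + self.b

# Toy shapes: 9 input variables with 1 feature each, grouped into 4 groups.
groups = [0, 0, 1, 1, 1, 2, 2, 3, 3]
A = np.zeros((4, 9))
A[groups, np.arange(9)] = 1.0
layer = FGLSketch(A, c_in=1, c_out=8)
y = layer.forward(np.ones((9, 1)))   # y has shape (4, 8)
```

Stacking such layers along a hierarchical clustering (voxels into 1024, then 256, then 32 groups) gives the 3-layer model described above.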
+ +# 3.3.3 PROCEDURE AND RESULTS + +We split each dataset multiple (10) times into a train and test set. The split is done such that no subject in the test set appears in the training set. In each split, $30\%$ of subjects are used for testing, and all or a part of the remaining subjects are used for training. Convolution based models were trained for 50 epochs, feedforward neural networks for 30 and FGL for 20. These hyperparameters were selected by monitoring for overfitting on the training set (using a further validation split). We perform experiments to study: (1) the benefit of FGL given a lot of data, (2) the benefits of FGL at small sample sizes, (3) the effect of depth, and (4) the effect of intermediate vector length. + +Large sample setting: The first experiment uses all of the training data (70% of total data) to train. We report out-of-sample accuracy in Table 1. We also report the $p$ -values of one-sided Wilcoxon rank sum tests between the performance of each model compared to 3 Layer FGL on the HCP dataset. + +![](images/0a51b442858ab434f199705424f5e431aa7db97ccd6edcb7064ee060079a1dfc.jpg) +(a) + +![](images/8008667a58523baf5770ee3242732bc9f41d4f7f99113fe624e5b06ad1c79053.jpg) +(b) +Figure 5: Ablation of intermediate vector length: (a) On HCP, along with baselines. (b) On all datasets. Increasing the intermediate vector length improves performance, except on Cam-CAN. + +Small sample setting: The second set of experiments varies the fraction of data used for training on the smaller datasets - namely, Archi, Brainomics and Cam-CAN. We explore small sample performance because limited sample sizes are typical for fMRI studies. Figure 4 plots test accuracy against fraction of data used for training. It demonstrates that FGL outperforms the baselines even when only a small amount of training data is used. + +Effect of depth, FGL depth vs. 
width: We train two additional models to study the effect of depth: one which uses only the first layer of FGL, and another that uses the first two layers. In Table 1, they are referred to as "FGL (1 Layer)" and "FGL (2 Layers)" respectively. The results show that increasing FGL depth provides a statistically significant improvement in test accuracy on the HCP dataset. When compared to Feature Grouping (Aydore et al., 2018), the results in Table 1 show that even after extensive tuning (we used the reported best performing settings and attempted additional tuning), this approach is not competitive – suggesting a significant representation benefit from exploiting FGL depth vs. width for this task. Unfortunately, the provided code did not scale to the largest datasets – hence the missing results on HCP.
+
+Effect of intermediate vector length: To study the effect of intermediate vector length, we train five single layer FGL models with intermediate vector lengths $(c_{out})$ of 1, 2, 4, 8 and 16. We plot the test accuracy against $c_{out}$ in Figures 5a and 5b. On 4 out of 5 datasets, increasing $c_{out}$ improves test accuracy. This is expected because the classification task is not binary and each region of the brain could contribute to multiple cognitive processes in different ways. However, the effect is not as pronounced on the Cam-CAN dataset. We suspect that this is because Cam-CAN has fewer labels (5) than the other datasets. We note that $c_{out}$ could be interpreted as the width of our model, similar to the channel size of CNNs, which is known to be important.
+
+These experiments demonstrate the clear benefit of using FGL compared to other models with roughly the same number of parameters. When using $70\%$ of data for training, FGL provides a $2 - 6\%$ improvement in test accuracy on 4 of 5 datasets. A similar trend exists even when smaller amounts of data are used.
As for the 5th dataset, Brainomics, FGL is on par with CNN based methods but better than fully connected networks. While the effect of depth is not clear on smaller datasets, we note that on the HCP dataset, deeper models have a statistically significant improvement in performance. Further, an increase in $c_{out}$ also improves performance.
+
+Our main argument is that current methods discard important information about spatial smoothness as encoded by hierarchical spatial clustering. As pointed out, the main cost of our method is in constructing these hierarchies. For brain imaging, most datasets include readily available resting state data. From the larger view, we plan to encourage application communities to develop application-specific FGL architectures, which can be shared across several related tasks.
+
+# 4 CONCLUSION
+
+In this work we propose a new layer architecture, the Fixed Grouping Layer (FGL), parameterized by a grouping of input variables. FGL explicitly extracts features within each input group. This is in contrast to convolution, which extracts local features across the input, and fully connected networks, which extract both global and local features. We demonstrate the benefit of using FGL on 5 real fMRI datasets of different sizes. Future work will involve the application of FGL to other tasks and application domains.
+
+# REFERENCES
+
+Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., Gramfort, A., Thirion, B., and Varoquaux, G. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in neuroinformatics, 8:14.
+Aurenhammer, F. (1991). Voronoi diagrams—a survey of a fundamental geometric data structure. ACM Computing Surveys (CSUR), 23(3):345-405.
+Aydore, S., Thirion, B., Grisel, O., and Varoquaux, G. (2018). Using feature grouping as a stochastic regularizer for high-dimensional noisy data. arXiv preprint arXiv:1807.11718.
+Bay, S. D., Kibler, D. F., Pazzani, M. J., and Smyth, P. (2000).
The UCI KDD archive of large data sets for data mining research and experimentation. SIGKDD explorations, 2(2):81-85.
+Bishop, C. M. (2006a). Pattern recognition and machine learning, chapter Probabilistic Discriminative Models, pages 203-213. Springer.
+Bishop, C. M. (2006b). Pattern recognition and machine learning, chapter Probability distributions, page 117. Springer.
+Biswal, B. B., Mennes, M., Zuo, X.-N., Gohel, S., Kelly, C., Smith, S. M., Beckmann, C. F., Adelstein, J. S., Buckner, R. L., Colcombe, S., et al. (2010). Toward discovery science of human brain function. Proceedings of the National Academy of Sciences, 107(10):4734-4739.
+Bzdok, D., Eickenberg, M., Grisel, O., Thirion, B., and Varoquaux, G. (2015). Semi-supervised factored logistic regression for high-dimensional neuroimaging data. In Advances in neural information processing systems, pages 3348-3356.
+Frost, M. A. and Goebel, R. (2012). Measuring structural-functional correspondence: spatial variability of specialised brain regions after macro-anatomical alignment. Neuroimage, 59(2):1369-1381.
+Glorot, X. and Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256.
+Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. MIT press.
+Gorgolewski, K. J., Varoquaux, G., Rivera, G., Schwarz, Y., Ghosh, S. S., Maumet, C., Sochat, V. V., Nichols, T. E., Poldrack, R. A., Poline, J.-B., et al. (2015). NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Frontiers in neuroinformatics, 9:8.
+He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026-1034.
+Horn, R. A. (1990). The Hadamard product. In Proc. Symp.
Appl. Math, volume 40, pages 87-169.
+Huettel, S. A., Song, A. W., McCarthy, G., et al. (2004). Functional magnetic resonance imaging, volume 1. Sinauer Associates Sunderland, MA.
+Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Koyejo, O. O., Khanna, R., Ghosh, J., and Poldrack, R. (2014). On prior distributions and approximate inference for structured variables. In Advances in Neural Information Processing Systems, pages 676-684.
+
+Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105.
+Lafferty, J., McCallum, A., and Pereira, F. C. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th International Conference on Machine Learning.
+LeCun, Y., Bengio, Y., et al. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995.
+Liu, R., Lehman, J., Molino, P., Such, F. P., Frank, E., Sergeev, A., and Yosinski, J. (2018). An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems, pages 9628-9639.
+Mensch, A., Mairal, J., Bzdok, D., Thirion, B., and Varoquaux, G. (2017). Learning neural representations of human cognition across many fMRI studies. In Advances in Neural Information Processing Systems, pages 5883-5893.
+Okabe, A., Boots, B., Sugihara, K., and Chiu, S. N. (2009). Spatial tessellations: concepts and applications of Voronoi diagrams, volume 501. John Wiley & Sons.
+Orfanos, D. P., Michel, V., Schwartz, Y., Pinel, P., Moreno, A., Le Bihan, D., and Frouin, V. (2017). The Brainomics/Localizer database. Neuroimage, 144:309-314.
+Park, M., Koyejo, O., Ghosh, J., Poldrack, R., and Pillow, J. (2013).
Bayesian structure learning for functional neuroimaging. In Artificial Intelligence and Statistics, pages 489-497.
+Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in PyTorch. In NIPS-W.
+Pinel, P., Thirion, B., Meriaux, S., Jobert, A., Serres, J., Le Bihan, D., Poline, J.-B., and Dehaene, S. (2007). Fast reproducible identification and large-scale databasing of individual functional cognitive networks. BMC neuroscience, 8(1):91.
+Poldrack, R. A., Congdon, E., Triplett, W., Gorgolewski, K., Karlgodt, K., Mumford, J., Sabb, F., Freimer, N., London, E., Cannon, T., et al. (2016). A phenotype-wide examination of neural and cognitive function. Scientific data, 3:160110.
+Salimans, T. and Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901-909.
+Sarraf, S. and Tofighi, G. (2016). Deep learning-based pipeline to recognize Alzheimer's disease using fMRI data. In 2016 Future Technologies Conference (FTC), pages 816-820. IEEE.
+Shafto, M. A., Tyler, L. K., Dixon, M., Taylor, J. R., Rowe, J. B., Cusack, R., Calder, A. J., Marslen-Wilson, W. D., Duncan, J., Dalgleish, T., et al. (2014). The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: a cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. BMC neurology, 14(1):204.
+Smith, S. M., Fox, P. T., Miller, K. L., Glahn, D. C., Fox, P. M., Mackay, C. E., Filippini, N., Watkins, K. E., Toro, R., Laird, A. R., et al. (2009). Correspondence of the brain's functional architecture during activation and rest. Proceedings of the National Academy of Sciences, 106(31):13040-13045.
+Sporns, O. (2013). Structure and function of complex brain networks. Dialogues in clinical neuroscience, 15(3):247.
+Sundermeyer, M., Schlüter, R., and Ney, H. (2012).
LSTM neural networks for language modeling. In Thirteenth annual conference of the international speech communication association.
+Sutskever, I., Martens, J., Dahl, G. E., and Hinton, G. E. (2013). On the importance of initialization and momentum in deep learning. In ICML, pages 1139-1147.
+
+Thirion, B., Varoquaux, G., Dohmatob, E., and Poline, J.-B. (2014). Which fMRI clustering gives good brain parcellations? Frontiers in neuroscience, 8:167.
+Van Essen, D. C., Ugurbil, K., Auerbach, E., Barch, D., Behrens, T., Bucholz, R., Chang, A., Chen, L., Corbetta, M., Curtiss, S. W., et al. (2012). The human connectome project: a data acquisition perspective. Neuroimage, 62(4):2222-2231.
+Varoquaux, G., Raamana, P. R., Engemann, D. A., Hoyos-Idrobo, A., Schwartz, Y., and Thirion, B. (2017). Assessing and tuning brain decoders: cross-validation, caveats, and guidelines. NeuroImage, 145:166-179.
+Ward Jr, J. H. (1963). Hierarchical grouping to optimize an objective function. Journal of the American statistical association, 58(301):236-244.
+
+![](images/712000a45b33c85d325c52fc9ac36db2025e5a6e543c594219a967eeadf401d5.jpg)
+Figure S1: Baseline: The architecture used for the Convolutional Neural Network and CoordConv. In the CoordConv variant, the input is concatenated with 3 additional channels, one holding the coordinate along each dimension. The spatial dimensions are written above each volume, and the number of channels underneath it.
+
+Table S1: Test accuracy to study imperfect $A$: Reported numbers for each model are averaged over multiple (3) runs.
+
| Model | Test accuracy |
+| --- | --- |
+| LR | 43.0% |
+| Conv | 56.4% |
+| CC | 56.6% |
+| FGL (16 clusters) | 61.3% |
+| FGL (32 clusters) | 66.1% |
+| FGL (48 clusters) | 66.3% |
+| FGL (perfect A) | 70.2% |
+
+# Supplementary Materials
+
+# S1 FGL DETAILS
+
+Input Specification: In this work we deal with image-like data, either in 2D or 3D. Consider an input with $s$ pixels or voxels in $c$ channels - for example, a $64 \times 64$ image with RGB colors will have $s = 4096$ and $c = 3$. Such an input is treated as $s$ variables with a feature vector of length $c$ for each variable.
+
+Groupings: Since the output of the first FGL layer is feature vectors for each group, the grouping of the second FGL layer must group together the outputs of the first layer. Hence, we need a hierarchical structure with input variables at the leaf nodes. In this work, we use a Ward clustering of the brain; however, other clusterings may be more appropriate in other settings.
+
+# S2 ABLATION: IMPERFECT A
+
+We ran an experiment using the simulated dataset to estimate how robust FGL was to an imperfect $A$. Apart from providing FGL the true Voronoi diagrams, we also ran FGL using the clusters from K-Means clustering. That is, $A_{ji} = 1$ if pixel $i$ was in cluster $j$ according to K-Means. We did this using 16, 32 and 48 clusters obtained by clustering pixels in the training dataset. The results are reported in Table S1. We see that although there is a drop when using an imperfect $A$, FGL still outperforms logistic regression and convolution. This emphasises the benefit of capturing structure.
+
+We do see a larger drop when we use a clustering that is not representative enough (16 clusters), but don't see much gain from using a more representative one (48 clusters).
+
+# S3 FGL VARIANTS
+
+While the FGL model is straightforward, multiple variants of it are possible. First, notice that FGL is essentially the following three operations (ignoring the bias):
+
+- Linear transformation: The multiplication, $xv$, transforms the data $x$ from one basis to another using a linear transform $v$.
+- Rescaling: The Hadamard product with $u$ rescales each vector along each dimension independently.
+- Aggregation: The multiplication by $A$ aggregates the vectors $(xv) \odot u$ within each group using summation.
+
+Performing these operations in a different order creates some basic variants: for example, we could aggregate within groups, then rescale, and finally perform a linear transformation. These changes to operation order will require the parameters to be defined differently. For example, if the Hadamard product with $u$ is done after aggregation, then $u$ will need to have $n_{out}$ rows.
+
+# S3.1 REDUCTIONS
+
+Another interesting variant is to replace the aggregation with a max operation within each group along each dimension. We think this is similar to a maxpool operation in convolutional neural networks, while the summation by $A$ is similar to a weighted-sum-pool depending on the values of $A_{ji}$. Early experiments showed worse results when using the max reduction variant and hence we did not investigate it further. However, it might prove effective in cases where a signal being present in one variable within a group is equivalent to the group showing that variable.
+
+# S3.2 MULTIPLE GROUPINGS
+
+Another possible benefit that we do not investigate is the use of multiple variable groupings - we can concatenate the $A$ matrices that represent each grouping to make FGL extract features within each group from the union of both groupings. That is, if $A^{(0)}, A^{(1)}$ are the matrices that represent two groupings, we could use $A = [A^{(0)} A^{(1)}]$. This would allow one to make use of multiple types of groupings. For example, we could create parcellations at different points of the accuracy-reproducibility tradeoff studied by Thirion et al. (2014), and make use of both. Similarly, one could create parcellations from different datasets and use them at once.
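Following the convention from S2 that $A_{ji} = 1$ when variable $i$ belongs to group $j$ (one row of $A$ per group), concatenating two groupings amounts to stacking their rows. A small sketch under that assumption:

```python
import numpy as np

# Two hypothetical groupings of the same 6 variables: 2 groups and 3 groups.
g0 = [0, 0, 0, 1, 1, 1]
g1 = [0, 1, 2, 0, 1, 2]
A0 = np.zeros((2, 6)); A0[g0, np.arange(6)] = 1.0
A1 = np.zeros((3, 6)); A1[g1, np.arange(6)] = 1.0

# Stacking the rows of A0 and A1: FGL then aggregates over the union of
# both groupings' groups (5 output groups in total).
A = np.vstack([A0, A1])   # shape (5, 6)
```

Each input variable then contributes to exactly one output group per grouping, so every column of the stacked $A$ sums to the number of groupings.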
However, using a single parcellation was sufficient to create a significant gain in performance, hence we don't go deeper in this direction. We mention a few other variants that did not perform as well in supplement S3.
+
+# S4 IMPLEMENTATION
+
+Our implementation of FGL using PyTorch is available at https://github.com/anonymous/link. In this section, we discuss some challenges in implementation. If the number of input variables is large, performing $A((xv) \odot u)$ as a matrix multiplication is expensive. There are some ways to work around this:
+
+- Since $A$ is a binary matrix, we can treat $((xv) \odot u)$ as a matrix of embeddings, look up the indices at which $A$ is non-zero, and then perform the necessary aggregation.
+- If the variable groups are mutually exclusive - that is, each input variable belongs to only one group - then $A((xv) \odot u)$ can be performed by scattering $(xv) \odot u$ according to the indices at which $A$ is non-zero.
+
+# S4.1 FGL INITIALIZATION
+
+Prior literature (He et al., 2015; Glorot and Bengio, 2010; Sutskever et al., 2013) has shown that initialization of deep networks matters. Generally, a layer's weights are randomly initialized by sampling from a uniform distribution, denoted by $U[-m,m]$ for some $m$ based on the number of inputs, outputs and the activation function. Hence, after minor modifications for different activation functions, we use the following initialization:
+
+$$
+u_{ik} \sim U\left[-\sqrt{\frac{1 + \sum_{j} A_{ji}}{\sum_{j}\left(A_{ji} \sum_{k} A_{jk}\right)}}, \sqrt{\frac{1 + \sum_{j} A_{ji}}{\sum_{j}\left(A_{ji} \sum_{k} A_{jk}\right)}}\right], \quad v_{ij} \sim U\left[-\sqrt{\frac{1}{1 + 5 c_{in}}}, \sqrt{\frac{1}{1 + 5 c_{in}}}\right]
+$$
+
+![](images/a73f1111a919434178fe2db43eb90859be077c3f0dcc9e9e7d1dd89d6eb05457.jpg)
+Figure S2: Illustration of FGL. Given the numbered hierarchical segmentation, FGL extracts features for each segment.
The inputs are 9 variables corresponding to segments of a square, which are first grouped as $\{\{1,2\}, \{3,4,5\}, \{6,7\}, \{8,9\}\}$. The resulting 4 groups are then grouped into 2 groups using the grouping $\{\{1,2\}, \{3,4\}\}$. The output is passed to a fully connected network to predict labels. Note that intermediate layers can use feature vectors of length greater than 1.
+
+# S5 PARAMETER SHARING
+
+One of the major benefits of using convolution is that it performs parameter sharing - which comes with its own benefits. Adapting FGL to perform parameter sharing is much harder. Typically, a fully connected map from $n_{in} \times c_{in}$ numbers to $n_{out} \times c_{out}$ numbers would require $n_{in} \times n_{out} \times c_{in} \times c_{out}$ parameters. But this number is astronomical. To avoid using as many parameters, we decompose the operation into a multiplication by $v$ followed by a Hadamard product with $u$. Doing so reduces the number of parameters to $c_{in} \times c_{out} + n_{in} \times c_{out}$. This is much more tractable, but more reduction might be possible: sharing parameters between groups seems lucrative; unfortunately, different groups can have different sizes and an arbitrary ordering, preventing us from sharing parameters further. If group sizes were constant and an ordering of variables was fixed, it would be possible to further reduce the number of parameters from $\mathcal{O}(n_{in})$ to $\mathcal{O}(\text{groupsize})$.
+
+# S6 CNN ARCHITECTURE
+
+We use a straightforward architecture for convolution - repeated convolutional layers of stride 2 and a kernel size of 4 with appropriate padding, followed by a fully connected network. We found that using maxpool for downsampling reduced performance, and so did using non-linear activation functions. A visualization is provided in Figure S1.
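As a numeric check of the parameter arithmetic in S5, using the first-layer sizes from Section 3.3.2 (212455 input voxels reduced to 1024 groups, with $c_{in} = 1$ and $c_{out} = 8$):

```python
# Dense map from (n_in x c_in) to (n_out x c_out) versus the FGL
# decomposition into v (c_in x c_out) plus u (n_in x c_out).
n_in, n_out, c_in, c_out = 212455, 1024, 1, 8

dense = n_in * n_out * c_in * c_out   # ~1.74e9 parameters: astronomical
fgl = c_in * c_out + n_in * c_out     # ~1.7e6 parameters: tractable

print(dense, fgl)
```

The roughly thousand-fold reduction is what makes the 3-layer model's total of about 2 million parameters feasible.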
\ No newline at end of file diff --git a/towardsadeepnetworkarchitectureforstructuredsmoothness/images.zip b/towardsadeepnetworkarchitectureforstructuredsmoothness/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b01abb3ccb71ae210a2887907fd4bca20d85d0e8 --- /dev/null +++ b/towardsadeepnetworkarchitectureforstructuredsmoothness/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03424098c00ad2b84b6def08a567dfefe1f52da4be390b74a761e81cb7b5fe18 +size 415489 diff --git a/towardsadeepnetworkarchitectureforstructuredsmoothness/layout.json b/towardsadeepnetworkarchitectureforstructuredsmoothness/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0f8b9bbc69bffc245a2b81bcea47cf87fc31cd12 --- /dev/null +++ b/towardsadeepnetworkarchitectureforstructuredsmoothness/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c45dc335d881f51df535ad97343f0ee19574b45e27baab6d967e08981e65631 +size 481525 diff --git a/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_content_list.json b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..957226661b7fee38f5aadccd82ab3684429ec768 --- /dev/null +++ b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:632cad86086fd999abf56252ef23d513560f36b8e1948d82be4d1a1c328a3db8 +size 157713 diff --git a/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_model.json b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_model.json 
new file mode 100644 index 0000000000000000000000000000000000000000..b6e4b4afa7d6edb1e04190e2946b9d7e57618e80 --- /dev/null +++ b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c920f41e9c299303b40e80309ec8758feb85aafe24329f5630c5841c060b940d +size 186545 diff --git a/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_origin.pdf b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7659b63af1c5c9414db663ad03791cffb72fd8d8 --- /dev/null +++ b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/517c7680-292c-4051-a64c-8900632563f7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:362225b3d9c76665a7c322c41049edee7388108f32e4c3bebc722930a9a9f719 +size 7117043 diff --git a/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/full.md b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/full.md new file mode 100644 index 0000000000000000000000000000000000000000..22174bac64edb98347dd7fe8cfad835811891de6 --- /dev/null +++ b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/full.md @@ -0,0 +1,631 @@ +# TOWARDS BETTER UNDERSTANDING OF ADAPTIVE GRADIENT ALGORITHMS IN GENERATIVE ADVERSARIAL NETS + +Mingrui Liu $^{1*}$ , Youssef Mroueh $^{2}$ , Jerret Ross $^{2}$ , Wei Zhang $^{2}$ , Xiaodong Cui $^{2}$ , Payel Das $^{2}$ , Tianbao Yang $^{1}$ + +$^{1}$ Department of Computer Science, The University of Iowa, Iowa City, IA, 52242, USA + +$^{2}$ IBM T. J. 
Watson Research Center, Yorktown Heights, NY, 10598, USA + +# ABSTRACT + +Adaptive gradient algorithms perform gradient-based updates using the history of gradients and are ubiquitous in training deep neural networks. While the theory of adaptive gradient methods is well understood for minimization problems, the underlying factors driving their empirical success in min-max problems such as GANs remain unclear. In this paper, we aim at bridging this gap from both theoretical and empirical perspectives. First, we analyze a variant of Optimistic Stochastic Gradient (OSG) proposed in (Daskalakis et al., 2017) for solving a class of non-convex non-concave min-max problems and establish $O(\epsilon^{-4})$ complexity for finding an $\epsilon$-first-order stationary point; the algorithm requires invoking only one stochastic first-order oracle per iteration while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method of (Iusem et al., 2017). Then we propose an adaptive variant of OSG named Optimistic Adagrad (OAdagrad) and reveal an improved adaptive complexity $O\left(\epsilon^{-\frac{2}{1 - \alpha}}\right)$, where $\alpha$ characterizes the growth rate of the cumulative stochastic gradient and $0 \leq \alpha \leq 1/2$. To the best of our knowledge, this is the first work establishing adaptive complexity in non-convex non-concave min-max optimization. Empirically, our experiments show that adaptive gradient algorithms indeed outperform their non-adaptive counterparts in GAN training. Moreover, this observation can be explained by the empirically observed slow growth rate of the cumulative stochastic gradient. + +# 1 INTRODUCTION + +Adaptive gradient algorithms (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2014; Reddi et al., 2019) are very popular in training deep neural networks due to their computational efficiency and minimal need for hyperparameter tuning (Kingma & Ba, 2014).
For example, Adagrad (Duchi et al., 2011) automatically adjusts the learning rate for each dimension of the model parameter according to the information of history gradients, while its computational cost is almost the same as that of Stochastic Gradient Descent (SGD). However, in supervised deep learning (for example, image classification tasks using a deep convolutional neural network), there is not enough evidence showing that adaptive gradient methods converge faster than their non-adaptive counterpart (i.e., SGD) on benchmark datasets. For example, it is argued in (Wilson et al., 2017) that adaptive gradient methods often find a solution with worse performance than SGD. Specifically, Wilson et al. (2017) observed that Adagrad has slower convergence than SGD in terms of both training and testing error when using VGG (Simonyan & Zisserman, 2014) on CIFAR10 data. + +GANs (Goodfellow et al., 2014) are a popular class of generative models. In a nutshell, they consist of a generator and a discriminator, both of which are defined by deep neural networks. The generator and the discriminator are trained under an adversarial cost, corresponding to a non-convex non-concave min-max problem. GANs are known to be notoriously difficult to train. In practice, Adam (Kingma & Ba, 2014) is the de facto optimizer used for GAN training. The common optimization strategy is to alternately update the discriminator and the generator (Arjovsky et al., 2017; Gulrajani et al., 2017). Using Adam is important in GAN training, since replacing it with a non-adaptive method (e.g., SGD) would significantly deteriorate the performance. This paper studies and attempts to answer the following question: + +# Why do adaptive gradient methods outperform their non-adaptive counterparts in GAN training?
+ +We analyze a variant of Optimistic Stochastic Gradient (OSG) in (Daskalakis & Panageas, 2018) and propose an adaptive variant named Optimistic Adagrad (OAdagrad) for solving a class of non-convex non-concave min-max problems. Both are shown to enjoy state-of-the-art complexities. We further prove that the convergence rate of OAdagrad to an $\epsilon$-first-order stationary point depends on the growth rate of the cumulative stochastic gradient. In our experiments, we observed an interesting phenomenon when using adaptive gradient methods for training GANs: the cumulative stochastic gradient grows at a slow rate. This observation is in line with the prediction of our theory, which suggests an improved convergence rate for OAdagrad in GAN training when the growth rate of the cumulative stochastic gradient is slow. + +Since GAN training is by nature a min-max optimization problem, our problem of interest is to solve the following stochastic optimization problem: + +$$
\min_{\mathbf{u} \in \mathcal{U}} \max_{\mathbf{v} \in \mathcal{V}} F(\mathbf{u}, \mathbf{v}) := \mathbb{E}_{\xi \sim \mathcal{D}} [f(\mathbf{u}, \mathbf{v}; \xi)], \tag{1}
$$ + +where $\mathcal{U},\mathcal{V}$ are closed and convex sets, $F(\mathbf{u},\mathbf{v})$ is possibly non-convex in $\mathbf{u}$ and non-concave in $\mathbf{v}$, and $\xi$ is a random variable following an unknown distribution $\mathcal{D}$. In GAN training, $\mathbf{u}$ and $\mathbf{v}$ represent the parameters of the generator and the discriminator, respectively. + +The ideal goal for solving (1) is to find a saddle point $(\mathbf{u}_*,\mathbf{v}_*)\in \mathcal{U}\times \mathcal{V}$ such that $F(\mathbf{u}_{*},\mathbf{v})\leq F(\mathbf{u}_{*},\mathbf{v}_{*})\leq F(\mathbf{u},\mathbf{v}_{*})$ for all $\mathbf{u}\in \mathcal{U}$ and $\mathbf{v}\in \mathcal{V}$. + +To achieve this goal, the typical assumption made is that the objective function is convex-concave.
When $F(\mathbf{u},\mathbf{v})$ is convex in $\mathbf{u}$ and concave in $\mathbf{v}$ , non-asymptotic guarantee in terms of the duality gap is well established by a series of work (Nemirovski & Yudin, 1978; Nemirovski, 2004; Nesterov, 2007; Nemirovski et al., 2009; Juditsky et al., 2011). However, when $F(\mathbf{u},\mathbf{v})$ is non-convex in $\mathbf{u}$ and non-concave in $\mathbf{v}$ , finding the saddle point is NP-hard in general. Instead, we focus on finding the first-order stationary point provided that the objective function is smooth. I.e. we aim to find $(\mathbf{u},\mathbf{v}) \in \mathcal{U} \times \mathcal{V}$ such that $\nabla_{\mathbf{u}}F(\mathbf{u},\mathbf{v}) = 0$ , $\nabla_{\mathbf{v}}F(\mathbf{u},\mathbf{v}) = 0$ . Note that this is a necessary condition for finding the (local) saddle point. + +Related Work. Several works designed iterative first-order deterministic (Dang & Lan, 2015) and stochastic (Iusem et al., 2017; Lin et al., 2018) algorithms for achieving the $\epsilon$ -first-order stationary point with non-asymptotic guarantee. The goal is to find $\mathbf{x}$ such that $\| T(\mathbf{x})\| \leq \epsilon$ or $\mathbb{E}\left[\| T(\mathbf{x})\|\right] \leq \epsilon$ , where the first-order oracle is defined as $T(\mathbf{x}) = [\nabla_{\mathbf{u}}F(\mathbf{u},\mathbf{v}), -\nabla_{\mathbf{v}}F(\mathbf{u},\mathbf{v})]^{\top}$ with $\mathbf{x} = (\mathbf{u},\mathbf{v})$ and the first-order stochastic oracle is the noisy observation of $T$ , i.e. $T(\mathbf{x};\xi) = [\nabla_{\mathbf{u}}F(\mathbf{u},\mathbf{v};\xi), -\nabla_{\mathbf{v}}F(\mathbf{u},\mathbf{v};\xi)]^{\top}$ . For instance, Dang & Lan (2015) focuses on the deterministic setting. On the other hand, (Iusem et al., 2017) develops a stochastic extra-gradient algorithm that enjoys $O(\epsilon^{-4})$ iteration complexity. 
The extragradient method requires two stochastic first-order oracle calls per iteration, which can be computationally expensive in deep learning applications such as GANs. The inexact proximal point method developed in (Lin et al., 2018) has iteration complexity $O(\epsilon^{-6})$ for finding an $\epsilon$-first-order stationary point$^{1}$. + +To avoid the cost of the additional oracle call in the extragradient step, several studies (Chiang et al., 2012; Rakhlin & Sridharan, 2013; Daskalakis et al., 2017; Gidel et al., 2018; Xu et al., 2019) proposed single-call variants of the extragradient algorithm. Some of them focus on the convex setting (e.g., (Chiang et al., 2012; Rakhlin & Sridharan, 2013)), while others focus on the non-convex setting (Xu et al., 2019). Closest to our work are (Daskalakis et al., 2017; Gidel et al., 2018), where the min-max setting and GAN training are considered. However, the convergence of those algorithms is only shown for a class of bilinear problems in (Daskalakis et al., 2017) and for monotone variational inequalities in (Gidel et al., 2018). Hence a big gap remains between the specific settings studied in (Daskalakis et al., 2017; Gidel et al., 2018) and more general non-convex non-concave min-max problems. Table 1 provides a complete overview of our results and existing results. It is hard to do justice to the large body of work on min-max optimization, so we refer the interested reader to Appendix B, which gives a comprehensive survey of related previous methods not covered in this table. + +| Algorithm | Assumption | Setting | IC | PC | Guarantee |
| --- | --- | --- | --- | --- | --- |
| Extragradient (Iusem et al., 2017) | pseudo-monotonicity$^{3}$ | stochastic | $O(\epsilon^{-4})$ | $2T_g$ | $\epsilon$-SP |
| OMD (Daskalakis et al., 2017) | bilinear | deterministic | N/A | $T_g$ | asymptotic |
| AvgPastExtraSGD (Gidel et al., 2018) | monotonicity | stochastic | $O(\epsilon^{-2})$ | $T_g$ | $\epsilon$-DG |
| OMD (Mertikopoulos et al., 2018) | coherence | stochastic | N/A | $2T_g$ | asymptotic |
| IPP (Lin et al., 2018) | MVI has solution | stochastic | $O(\epsilon^{-6})$ | $T_g$ | $\epsilon$-SP |
| Alternating Gradient (Gidel et al., 2019) | bilinear form$^{4}$ | deterministic | $O(\log(1/\epsilon))$ | $T_g$ | $\epsilon$-optim |
| SVRE (Chavdarova et al., 2019) | strong-monotonicity, finite sum | stochastic, finite sum | $O(\log(1/\epsilon))$ | $(n + L/\mu)T_g{}^{5}$ | $\epsilon$-optim |
| Extragradient (Azizian et al., 2019) | strong-monotonicity | deterministic | $O(\log(1/\epsilon))$ | $2T_g$ | $\epsilon$-optim |
| OSG (this work) | MVI has solution | stochastic | $O(\epsilon^{-4})$ | $T_g$ | $\epsilon$-SP |
| OAdagrad (this work) | MVI has solution | stochastic | $O\left(\epsilon^{-\frac{2}{1-\alpha}}\right)$ | $T_g$ | $\epsilon$-SP |

Table 1: Summary of different algorithms with IC (Iteration Complexity) and PC (Per-iteration Complexity) to find an $\epsilon$-SP ($\epsilon$-first-order Stationary Point), an $\epsilon$-DG ($\epsilon$-Duality Gap, i.e. a point $(\hat{\mathbf{u}},\hat{\mathbf{v}})$ such that $\max_{\mathbf{v}}F(\hat{\mathbf{u}},\mathbf{v}) - \min_{\mathbf{u}}F(\mathbf{u},\hat{\mathbf{v}}) \leq \epsilon$), or an $\epsilon$-optim point ($\epsilon$-close to the set of optimal solutions). $T_{g}$ stands for the time complexity of invoking one stochastic first-order oracle. + +Our main goal is to design stochastic first-order algorithms with low iteration complexity and low per-iteration cost that are suitable for a general class of non-convex non-concave min-max problems. The main tool we use in our analysis is the variational inequality. + +Let $T: \mathbb{R}^d \mapsto \mathbb{R}^d$ be an operator and let $\mathcal{X} \subset \mathbb{R}^d$ be a closed convex set. The Stampacchia Variational Inequality (SVI) problem (Hartman & Stampacchia, 1966) is defined by the operator $T$ and the set $\mathcal{X}$ and denoted by $\mathrm{SVI}(T, \mathcal{X})$; it consists of finding $\mathbf{x}_* \in \mathcal{X}$ such that $\langle T(\mathbf{x}_*), \mathbf{x} - \mathbf{x}_* \rangle \geq 0$ for all $\mathbf{x} \in \mathcal{X}$. A related problem is the Minty Variational Inequality (MVI) problem (Minty et al., 1962), denoted by $\mathrm{MVI}(T, \mathcal{X})$, which consists of finding $\mathbf{x}_*$ such that $\langle T(\mathbf{x}), \mathbf{x} - \mathbf{x}_* \rangle \geq 0$ for all $\mathbf{x} \in \mathcal{X}$. Min-max optimization is closely related to variational inequalities.
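As a quick numerical sanity check of the MVI condition just defined (an illustrative toy, not from the paper): for the bilinear game $F(u, v) = uv$, the operator $T(u, v) = (v, -u)$ is a pure rotation, and $\mathbf{x}_* = 0$ solves $\mathrm{MVI}(T, \mathbb{R}^2)$ because $\langle T(\mathbf{x}), \mathbf{x} - \mathbf{x}_* \rangle = 0$ at every point, even though $T$ is far from strongly monotone.

```python
import numpy as np

# Toy check (illustrative, not from the paper): for the bilinear game
# F(u, v) = u*v, the operator T(x) = A x is a pure rotation, and x* = 0
# solves MVI(T, R^2) since <T(x), x - x*> = <A x, x> = 0 for skew-symmetric A.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # T(u, v) = (v, -u)

rng = np.random.default_rng(0)
worst = min(float((A @ x) @ x) for x in rng.normal(size=(1000, 2)))
print(worst >= -1e-12)                    # the MVI inequality holds at every sampled point
```

This is exactly the kind of non-monotone but MVI-solvable operator that the assumptions below are designed to cover.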
The corresponding SVI and MVI for the min-max problem are defined through $T(\mathbf{x}) = [\nabla_{\mathbf{u}} F(\mathbf{u}, \mathbf{v}), -\nabla_{\mathbf{v}} F(\mathbf{u}, \mathbf{v})]^\top$ with $\mathbf{x} = (\mathbf{u}, \mathbf{v})$. + +Our main contributions are summarized as follows: + +- Following (Daskalakis et al., 2017), we extend the optimistic stochastic gradient (OSG) analysis beyond the bilinear and unconstrained case, by assuming the Lipschitz continuity of the operator $T$ and the existence of a solution for the variational inequality $\mathrm{MVI}(T,\mathcal{X})$, the same conditions considered in the analysis of the stochastic extragradient algorithm in (Iusem et al., 2017). Inspired by that analysis, we analyze a variant of OSG under these conditions and show that it achieves $O(1/\epsilon^4)$ iteration complexity for finding an $\epsilon$-first-order stationary point. Note that our OSG variant requires invoking only one stochastic first-order oracle per iteration while enjoying the state-of-the-art iteration complexity achieved by the stochastic extragradient method (Iusem et al., 2017).
- Under the same conditions, we design an adaptive gradient algorithm named Optimistic Adagrad (OAdagrad), and show that it enjoys a better adaptive complexity $O\left(\epsilon^{-\frac{2}{1 - \alpha}}\right)$, where $\alpha$ characterizes the growth rate of the cumulative stochastic gradient and $0 \leq \alpha \leq 1/2$. Similar to Adagrad (Duchi et al., 2011), our main innovation is in considering variable metrics according to the geometry of the data in order to achieve a potentially faster convergence rate for a class of non-convex non-concave min-max games. Note that this adaptive complexity improves upon the non-adaptive one (i.e., $O(1/\epsilon^4)$) achieved by OSG.
To the best of our knowledge, we establish the first known adaptive complexity for adaptive gradient algorithms in a class of non-convex non-concave min-max problems.
- We demonstrate the effectiveness of our algorithms in GAN training on CIFAR10 data. Empirical results identify an important reason why adaptive gradient methods behave well in GANs: the cumulative stochastic gradient grows at a slow rate. We also show that OAdagrad outperforms Simultaneous Adam in sample quality in ImageNet generation using self-attention GANs (Zhang et al., 2018). This confirms the superiority of OAdagrad in min-max optimization. + +# 2 PRELIMINARIES AND NOTATIONS + +In this section, we fix some notation and give formal definitions of variational inequalities and their relationship to the min-max problem (1). + +Notations. Let $\mathcal{X} \subset \mathbb{R}^d$ be a closed convex set, and let $\|\cdot\|$ denote the Euclidean norm. We denote by $\Pi_{\mathcal{X}}$ the projection operator, i.e. $\Pi_{\mathcal{X}}(\mathbf{y}) = \arg \min_{\mathbf{x} \in \mathcal{X}} \|\mathbf{y} - \mathbf{x}\|^2$. Define $T(\mathbf{x}) = [\nabla_{\mathbf{u}} F(\mathbf{u}, \mathbf{v}), -\nabla_{\mathbf{v}} F(\mathbf{u}, \mathbf{v})]^{\top}$ with $\mathbf{x} = (\mathbf{u}, \mathbf{v})$ in problem (1). At a point $\mathbf{x} \in \mathcal{X}$, we do not have access to $T(\mathbf{x})$; we only have access to a noisy observation $T(\mathbf{x}; \xi)$, where $\xi$ is a random variable with distribution $\mathcal{D}$. For ease of presentation, we use the terms stochastic gradient and stochastic first-order oracle interchangeably to stand for $T(\mathbf{x}; \xi)$ in the min-max setting. + +Definition 1 (Monotonicity). An operator $T$ is monotone if $\langle T(\mathbf{x}) - T(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle \geq 0$ for all $\mathbf{x}, \mathbf{y} \in \mathcal{X}$.
An operator $T$ is pseudo-monotone if $\langle T(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle \geq 0 \Rightarrow \langle T(\mathbf{y}), \mathbf{y} - \mathbf{x} \rangle \geq 0$ for all $\mathbf{x}, \mathbf{y} \in \mathcal{X}$. An operator $T$ is $\gamma$-strongly-monotone if $\langle T(\mathbf{x}) - T(\mathbf{y}), \mathbf{x} - \mathbf{y} \rangle \geq \frac{\gamma}{2} \| \mathbf{x} - \mathbf{y} \|^2$ for all $\mathbf{x}, \mathbf{y} \in \mathcal{X}$. + +We also give a formal definition of the $\epsilon$-first-order stationary point. + +Definition 2 ($\epsilon$-First-Order Stationary Point). A point $\mathbf{x} \in \mathcal{X}$ is called an $\epsilon$-first-order stationary point if $\|T(\mathbf{x})\| \leq \epsilon$. + +Remark: We make the following observations: + +(a). From the definitions, it is evident that strong-monotonicity $\Rightarrow$ monotonicity $\Rightarrow$ pseudo-monotonicity. Pseudo-monotonicity of the operator $T$, together with the assumption that SVI has a solution, implies that $\mathrm{MVI}(T,\mathcal{X})$ has a solution. To see this, assume that SVI has a nonempty solution set, i.e. there exists $\mathbf{x}_{*}$ such that $\langle T(\mathbf{x}_{*}),\mathbf{y} - \mathbf{x}_{*}\rangle \geq 0$ for any $\mathbf{y}$. Noting that pseudo-monotonicity means that for every $\mathbf{y},\mathbf{x}$, $\langle T(\mathbf{x}),\mathbf{y} - \mathbf{x}\rangle \geq 0$ implies $\langle T(\mathbf{y}),\mathbf{y} - \mathbf{x}\rangle \geq 0$, we have $\langle T(\mathbf{y}),\mathbf{y} - \mathbf{x}_{*}\rangle \geq 0$ for any $\mathbf{y}$, which means that $\mathbf{x}_{*}$ is a solution of the Minty variational inequality. Note that the reverse may not be true; an example is provided in Appendix G.
(b). For the min-max problem (1), when $F(\mathbf{u}, \mathbf{v})$ is convex in $\mathbf{u}$ and concave in $\mathbf{v}$, $T$ is monotone, and therefore solving $\mathrm{SVI}(T, \mathcal{X})$ is equivalent to solving (1).
When $T$ is not monotone, assuming $T$ is Lipschitz continuous, it can be shown that the solution set of (1) is a subset of the solution set of $\mathrm{SVI}(T, \mathcal{X})$. However, even solving $\mathrm{SVI}(T, \mathcal{X})$ is NP-hard in general, and hence we resort to finding an $\epsilon$-first-order stationary point. + +Throughout the paper, we make the following assumption: + +Assumption 1. (i). $T$ is $L$-Lipschitz continuous, i.e. $\| T(\mathbf{x}_1) - T(\mathbf{x}_2)\|_2 \leq L\|\mathbf{x}_1 - \mathbf{x}_2\|_2$ for all $\mathbf{x}_1, \mathbf{x}_2 \in \mathcal{X}$. + +(ii). $\mathrm{MVI}(T, \mathcal{X})$ has a solution, i.e. there exists $\mathbf{x}_*$ such that $\langle T(\mathbf{x}), \mathbf{x} - \mathbf{x}_* \rangle \geq 0$ for all $\mathbf{x} \in \mathcal{X}$.
(iii). For all $\mathbf{x}\in \mathcal{X}$, $\mathbb{E}\left[T(\mathbf{x};\xi)\right] = T(\mathbf{x})$ and $\mathbb{E}\left\| T(\mathbf{x};\xi) - T(\mathbf{x})\right\|^2\leq \sigma^2$. + +Remark: Assumptions (i) and (iii) are commonly made in the literature on variational inequalities and non-convex optimization (Juditsky et al., 2011; Ghadimi & Lan, 2013; Iusem et al., 2017). Assumption (ii) is used frequently in previous work analyzing algorithms for non-monotone variational inequalities (Iusem et al., 2017; Lin et al., 2018; Mertikopoulos et al., 2018), and is weaker than other assumptions usually considered, such as pseudo-monotonicity, monotonicity, or coherence as assumed in (Mertikopoulos et al., 2018). For non-convex minimization problems, it has been shown that this assumption holds when using SGD to learn neural networks (Li & Yuan, 2017; Kleinberg et al., 2018; Zhou et al., 2019). + +# 3 OPTIMISTIC STOCHASTIC GRADIENT + +This section serves as a warm-up and motivation for our main theoretical contribution presented in the next section.
Inspired by (Iusem et al., 2017), we present an algorithm called Optimistic Stochastic Gradient (OSG) that saves the cost of the additional oracle call required in (Iusem et al., 2017) while maintaining the same iteration complexity. The main algorithm is described in Algorithm 1, where $m_t$ denotes the minibatch size used for estimating the first-order oracle. It is worth mentioning that Algorithm 1 becomes the stochastic extragradient method if one changes $T(\mathbf{z}_{k-1};\xi_{k-1}^i)$ to $T(\mathbf{x}_{k-1};\xi_{k-1}^i)$ in line 3. The stochastic extragradient method requires computing stochastic gradients over both sequences $\{\mathbf{x}_k\}$ and $\{\mathbf{z}_k\}$. In contrast, $\{\mathbf{x}_k\}$ is an ancillary sequence in OSG and the stochastic gradient is only computed over the sequence $\{\mathbf{z}_k\}$. Thus, the stochastic extragradient method is twice as expensive as OSG in each iteration. In some tasks (e.g. training GANs) where the stochastic gradient computation is expensive, OSG is numerically more appealing. + +# Algorithm 1 Optimistic Stochastic Gradient (OSG) + +1: Input: $\mathbf{z}_0 = \mathbf{x}_0 = 0$
2: for $k = 1, \dots, N$ do + +$$
3: \quad \mathbf{z}_{k} = \Pi_{\mathcal{X}} \left[ \mathbf{x}_{k-1} - \eta \cdot \frac{1}{m_{k-1}} \sum_{i=1}^{m_{k-1}} T\left(\mathbf{z}_{k-1}; \xi_{k-1}^{i}\right) \right]
$$ + +$$
4: \quad \mathbf{x}_{k} = \Pi_{\mathcal{X}} \left[ \mathbf{x}_{k-1} - \eta \cdot \frac{1}{m_{k}} \sum_{i=1}^{m_{k}} T\left(\mathbf{z}_{k}; \xi_{k}^{i}\right) \right]
$$ + +5: end for + +Remark: When $\mathcal{X} = \mathbb{R}^d$, the update in Algorithm 1 becomes the algorithm in (Daskalakis et al., 2017), i.e.
+ +$$
\mathbf{z}_{k+1} = \mathbf{z}_{k} - 2\eta \cdot \frac{1}{m_{k}} \sum_{i=1}^{m_{k}} T\left(\mathbf{z}_{k}; \xi_{k}^{i}\right) + \eta \cdot \frac{1}{m_{k-1}} \sum_{i=1}^{m_{k-1}} T\left(\mathbf{z}_{k-1}; \xi_{k-1}^{i}\right) \tag{2}
$$ + +The detailed derivation of (2) can be found in Appendix F. + +Theorem 1. Suppose that Assumption 1 holds. Let $r_{\alpha}(\mathbf{z}_k) = \|\mathbf{z}_k - \Pi_{\mathcal{X}}(\mathbf{z}_k - \alpha T(\mathbf{z}_k))\|$, let $\eta \leq 1/(9L)$, and run Algorithm 1 for $N$ iterations. Then we have + +$$
\frac{1}{N} \sum_{k=1}^{N} \mathbb{E}\left[ r_{\eta}^{2}(\mathbf{z}_{k}) \right] \leq \frac{8 \| \mathbf{x}_{0} - \mathbf{x}_{*} \|^{2}}{N} + \frac{100 \eta^{2}}{N} \sum_{k=0}^{N} \frac{\sigma^{2}}{m_{k}}.
$$ + +Corollary 1. Consider the unconstrained case where $\mathcal{X} = \mathbb{R}^d$ and let $\eta \leq 1/(9L)$. Then we have + +$$
\frac{1}{N} \sum_{k=1}^{N} \mathbb{E} \| T(\mathbf{z}_{k}) \|_{2}^{2} \leq \frac{8 \| \mathbf{x}_{0} - \mathbf{x}_{*} \|^{2}}{\eta^{2} N} + \frac{100}{N} \sum_{k=0}^{N} \frac{\sigma^{2}}{m_{k}}. \tag{3}
$$ + +Remark: There are two implications of Corollary 1. + +- (Increasing Minibatch Size) Let $\eta = \frac{1}{9L}$ and $m_k = k + 1$. To guarantee $\frac{1}{N} \sum_{k=1}^{N} \mathbb{E} \| T(\mathbf{z}_k) \|_2^2 \leq \epsilon^2$, the total number of iterations is $N = \widetilde{O}(\epsilon^{-2})$, and the total complexity is $\sum_{k=1}^{N} m_k = \widetilde{O}(\epsilon^{-4})$, where $\widetilde{O}(\cdot)$ hides a logarithmic factor of $\epsilon$.
- (Constant Minibatch Size) Let $\eta = \frac{1}{9L}$ and $m_{k} = 1 / \epsilon^{2}$. To guarantee $\frac{1}{N}\sum_{k=1}^{N}\mathbb{E}\|T(\mathbf{z}_{k})\|_{2}^{2} \leq \epsilon^{2}$, the total number of iterations is $N = O(\epsilon^{-2})$, and the total complexity is $\sum_{k=0}^{N}m_{k} = O(\epsilon^{-4})$.
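To make the single-call structure of OSG concrete, the unconstrained case of Algorithm 1 (where the projections become identities) can be sketched in a few lines. This is an illustrative toy, not the authors' code: the oracle here is the noiseless operator of the bilinear game $F(u, v) = uv$, and the step size is an arbitrary choice.

```python
import numpy as np

# Oracle of the toy bilinear game F(u, v) = u*v: T(x) = [dF/du, -dF/dv] = [v, -u].
# On this game, plain simultaneous gradient descent-ascent diverges, while the
# optimistic update below spirals into the saddle point at the origin.
def T(x):
    u, v = x
    return np.array([v, -u])

def osg(T, x0, eta=0.1, n_iters=2000):
    """Sketch of OSG (Algorithm 1) with X = R^d, so the projections are identities."""
    x, z = x0.copy(), x0.copy()
    g_prev = T(z)                # T(z_{k-1}), carried over between iterations
    for _ in range(n_iters):
        z = x - eta * g_prev     # z_k = x_{k-1} - eta * T(z_{k-1})
        g_prev = T(z)            # the single new oracle call of this iteration
        x = x - eta * g_prev     # x_k = x_{k-1} - eta * T(z_k)
    return z

z = osg(T, np.array([1.0, 1.0]))
print(np.linalg.norm(T(z)))      # residual ||T(z_N)|| decays toward 0
```

The stochastic version in the paper averages $m_k$ oracle samples per step; here a single noiseless call stands in for that minibatch average.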
+ +# 4 OPTIMISTIC ADAGRAD + +# 4.1 ADAGRAD FOR MINIMIZATION PROBLEMS + +Before introducing Optimistic Adagrad, we give a quick overview of Adagrad (Duchi et al., 2011). The main objective in Adagrad is to solve the following minimization problem: + +$$
\min_{\mathbf{w} \in \mathbb{R}^{d}} F(\mathbf{w}) = \mathbb{E}_{\zeta \sim \mathcal{P}} f(\mathbf{w}; \zeta) \tag{4}
$$ + +where $\mathbf{w}$ is the model parameter and $\zeta$ is a random variable following distribution $\mathcal{P}$. The update rule of Adagrad is + +$$
\mathbf{w}_{t+1} = \mathbf{w}_{t} - \eta H_{t}^{-1} \hat{\mathbf{g}}_{t}, \tag{5}
$$ + +where $\eta > 0$, $\hat{\mathbf{g}}_t = \nabla f(\mathbf{w}_t; \zeta_t)$, and $H_t = \mathrm{diag}\left(\left(\sum_{i=1}^t \hat{\mathbf{g}}_i \circ \hat{\mathbf{g}}_i\right)^{\frac{1}{2}}\right)$, with $\circ$ denoting the Hadamard product. Adagrad with $H_t = I$ reduces to SGD. Different from SGD, Adagrad dynamically incorporates knowledge of history gradients to perform more informative gradient-based learning. When solving a convex minimization problem with sparse gradients, Adagrad converges faster than SGD. There are several variants of Adagrad, including Adam (Kingma & Ba, 2014), RMSProp (Tieleman & Hinton, 2012), and AmsGrad (Reddi et al., 2019). All of them share the same spirit, as they take advantage of the information provided by the history of gradients. Wilson et al. (2017) provide a complete overview of different adaptive gradient methods in a unified framework. It is worth mentioning that Adagrad cannot be directly applied to solving non-convex non-concave min-max problems with a provable guarantee. + +# 4.2 OPTIMISTIC ADAGRAD FOR MIN-MAX OPTIMIZATION + +Our second algorithm, named Optimistic Adagrad (OAdagrad), is an adaptive variant of OSG, which also updates the minimization and maximization variables simultaneously.
The key difference between OSG and OAdagrad is that OAdagrad inherits ideas from Adagrad to construct a variable metric based on the history of gradients, whereas OSG only uses a fixed metric. This difference helps us establish faster adaptive convergence under some mild assumptions. Note that in OAdagrad we only consider the unconstrained case, i.e. $\mathcal{X} = \mathbb{R}^d$. + +Assumption 2. (i). There exist $G > 0$ and $\delta > 0$ such that $\| T(\mathbf{z};\xi)\|_2 \leq G$ and $\| T(\mathbf{z};\xi)\|_\infty \leq \delta$ for all $\mathbf{z}$ almost surely. + +(ii). There exists a universal constant $D > 0$ such that $\| \mathbf{x}_k\|_2 \leq D/2$ for $k = 1,\ldots,N$, and $\| \mathbf{x}_{*}\|_{2} \leq D/2$. + +Remark: Assumption 2 (i) is a standard one often made in the literature (Duchi et al., 2011). Assumption 2 (ii) holds when we use normalization layers in the discriminator and generator, such as spectral normalization of weights (Miyato et al., 2018; Zhang et al., 2018), which keep the norms of the weights bounded. Regularization techniques such as weight decay also ensure that the weights of the networks remain bounded throughout training. + +Define $\widehat{\mathbf{g}}_k = \frac{1}{m}\sum_{i=1}^{m}T(\mathbf{z}_k;\xi_k^i)$ and $\|\mathbf{x}\|_H = \sqrt{\langle\mathbf{x},H\mathbf{x}\rangle}$. Denote by $\widehat{\mathbf{g}}_{0:k}$ the concatenation of $\widehat{\mathbf{g}}_0,\ldots,\widehat{\mathbf{g}}_k$, and by $\widehat{\mathbf{g}}_{0:k,i}$ the $i$-th row of $\widehat{\mathbf{g}}_{0:k}$.
+ +Algorithm 2 Optimistic AdaGrad (OAdagrad) + +1: Input: $\mathbf{z}_0 = \mathbf{x}_0 = 0$ , $H_0 = \delta I$ +2: for $k = 1, \dots, N$ do +3: $\mathbf{z}_k = \mathbf{x}_{k - 1} - \eta H_{k - 1}^{-1}\widehat{\mathbf{g}}_{k - 1}$ +4: $\mathbf{x}_k = \mathbf{x}_{k - 1} - \eta H_{k - 1}^{-1}\widehat{\mathbf{g}}_k$ +5: Update $\widehat{\mathbf{g}}_{0:k} = [\widehat{\mathbf{g}}_{0:k - 1}\widehat{\mathbf{g}}_k], s_{k,i} = \| \widehat{\mathbf{g}}_{0:k,i}\|$ , $i = 1,\dots ,d$ and set $H_{k} = \delta I + \mathrm{diag}(s_{k - 1})$ +6: end for + +Theorem 2. Suppose Assumption 1 and 2 hold. Suppose $\| \widehat{\mathbf{g}}_{1:k,i} \|_2 \leq \delta k^\alpha$ with $0 \leq \alpha \leq 1/2$ for every $i = 1, \ldots, d$ and every $k = 1, \ldots, N$ . When $\eta \leq \frac{\delta}{9L}$ , after running Algorithm 2 for $N$ iterations, we have + +$$ +\frac {1}{N} \sum_ {k = 1} ^ {N} \mathbb {E} \| T (\mathbf {z} _ {k}) \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \leq \frac {8 D ^ {2} \delta^ {2} (1 + d (N - 1) ^ {\alpha})}{\eta^ {2} N} + \frac {1 0 0 \left(\sigma^ {2} / m + d \left(2 \delta^ {2} N ^ {\alpha} + G ^ {2}\right)\right)}{N}. \tag {6} +$$ + +To make sure $\frac{1}{N}\sum_{k = 1}^{N}\mathbb{E}\| T(\mathbf{z}_k)\|_{H_{k - 1}^{-1}}^2\leq \epsilon^2$ , the number of iterations is $N = O\left(\epsilon^{-\frac{2}{1 - \alpha}}\right)$ + +# Remark: + +- Note that the convergence measure used in Theorem 2 is different from that in Corollary 1. However, we show that under the measure used in Theorem 2, OSG (Algorithm 1) still has complexity $O(1/\epsilon^4)$ . By the construction of $H_k$ in Algorithm 2, we know that $\|T(\mathbf{z})\|_{H_k^{-1}} \leq \|T(\mathbf{z})\|_{H_0^{-1}}$ for any $k \geq 0$ and any $\mathbf{z}$ , and hence $\frac{1}{N} \sum_{k=1}^{N} \mathbb{E}\|T(\mathbf{z}_k)\|_{H_{k-1}^{-1}}^2 \leq \frac{1}{N} \sum_{k=1}^{N} \mathbb{E}\|T(\mathbf{z}_k)\|_{H_0^{-1}}^2 = \frac{1}{\delta} \cdot \frac{1}{N} \sum_{k=1}^{N} \mathbb{E}\|T(\mathbf{z}_k)\|_2^2$ . 
By Corollary 1, we know that OSG still requires $O(1/\epsilon^4)$ complexity to guarantee that $\frac{1}{N} \sum_{k=1}^{N} \mathbb{E}\|T(\mathbf{z}_k)\|_{H_{k-1}^{-1}}^2 \leq \epsilon^2$.
- We call $\widehat{\mathbf{g}}_{1:k}$ the cumulative stochastic gradient, where $\|\widehat{\mathbf{g}}_{1:k,i}\|_2 \leq \delta k^\alpha$ characterizes the growth rate of the gradient in terms of the $i$-th coordinate. In our proof, a key quantity is $\sum_{i=1}^{d} \|\widehat{\mathbf{g}}_{1:k,i}\|_2$, which crucially affects the computational complexity of Algorithm 2. Since $\sum_{i=1}^{d} \|\widehat{\mathbf{g}}_{1:k,i}\|_2 \leq \delta dk^\alpha$, in the worst case $\alpha = \frac{1}{2}$. But in practice, the stochastic gradient is usually sparse, and hence $\alpha$ can be strictly smaller than $\frac{1}{2}$.
- As shown in Theorem 2, the minibatch size used in Algorithm 2 for estimating the first-order oracle can be any positive constant independent of $\epsilon$. This is more practical than the result established in Theorem 1, since the minibatch size in Theorem 1 either increases with the number of iterations or depends on $\epsilon$. When $\alpha = \frac{1}{2}$, the complexity of Algorithm 2 is $O(1/\epsilon^4)$, which matches the complexity stated in Theorem 1. When $\alpha < \frac{1}{2}$, the complexity of OAdagrad given in Algorithm 2 is $O\left(\epsilon^{-\frac{2}{1 - \alpha}}\right)$, i.e., strictly better than that of OSG given in Algorithm 1. + +Comparison with Alternating Adam and Optimistic Adam. Alternating Adam is very popular in GAN training (Goodfellow et al., 2014; Arjovsky et al., 2017; Gulrajani et al., 2017; Brock et al., 2018). In Alternating Adam, one alternates between multiple steps of Adam on the discriminator and a single step of Adam on the generator. The key difference between OAdagrad and Alternating Adam is that OAdagrad updates the discriminator and generator simultaneously.
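As an illustration of the variable-metric update (a sketch under simplifying assumptions, not the authors' implementation), the two OAdagrad steps plus the Adagrad-style metric of Algorithm 2 can be coded directly. The toy game below, $F(u, v) = \frac{1}{2}u^2 + uv - \frac{1}{2}v^2$, is an illustrative strongly monotone choice so that convergence is easy to observe; the running sum of squared gradients realizes $s_{k,i} = \|\widehat{\mathbf{g}}_{0:k,i}\|_2$.

```python
import numpy as np

# Toy strongly monotone quadratic game F(u, v) = 0.5*u**2 + u*v - 0.5*v**2
# (illustrative choice, not from the paper): T(x) = [u + v, -u + v].
def T(x):
    u, v = x
    return np.array([u + v, -u + v])

def oadagrad(T, x0, eta=0.1, delta=1.0, n_iters=5000):
    """Sketch of OAdagrad (Algorithm 2) in the unconstrained case X = R^d."""
    x, z = x0.copy(), x0.copy()
    s_sq = np.zeros_like(x0)        # coordinate-wise running sum of squared gradients
    H = delta * np.ones_like(x0)    # diagonal metric, H_0 = delta * I
    g_prev = T(z)                   # one oracle call per iteration, as in OSG
    for _ in range(n_iters):
        z = x - eta * g_prev / H    # z_k = x_{k-1} - eta * H_{k-1}^{-1} g_{k-1}
        g = T(z)
        x = x - eta * g / H         # x_k = x_{k-1} - eta * H_{k-1}^{-1} g_k
        s_sq += g * g               # so sqrt(s_sq[i]) = ||g_{0:k, i}||_2
        H = delta + np.sqrt(s_sq)   # diagonal of delta * I + diag(s_k)
        g_prev = g
    return z

z = oadagrad(T, np.array([1.0, 1.0]))
print(np.linalg.norm(T(z)))         # residual decays toward 0
```

When the accumulated gradients grow slowly, $H_k$ stays small and the effective step size decays slowly, which is exactly the regime where the adaptive complexity bound improves on OSG.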
It is worth mentioning that OAdagrad naturally fits into the framework of Optimistic Adam proposed in (Daskalakis et al., 2017). Taking $\beta_{1} = 0, \beta_{2} \rightarrow 1$ in their Algorithm 1 reduces to OAdagrad with annealing learning rate. To the best of our knowledge, there is no convergence proof for Alternating Adam for non-convex non-concave problems. Our convergence proof for OAdagrad provides a theoretical justification of a special case of Optimistic Adam. + +# 5 EXPERIMENTS + +WGAN-GP on CIFAR10 In the first experiment, we verify the effectiveness of the proposed algorithms in GAN training using the PyTorch framework (Paszke et al., 2017). We use Wasserstein + +![](images/e4eb408a8ddf5d8e95b87b18dc74ffb9f32491396c2eb706fb1c85381ff6ba5f.jpg) +Figure 1: OAdagrad, OSG and Alternating Adam for WGAN-GP on CIFAR10 data + +![](images/c85363e5821c5464f5834867afdddf3adb4f9ce82bd2b14b85530e213f917ab6.jpg) + +![](images/410afe793fe521d108a955dd49e9fa2701419f4f86e1d25e879367508e69e20b.jpg) + +![](images/ec3ade33c15c9b05af6fc224c8d2509eb32e5916b09b1cf060dc29212b49c625.jpg) + +![](images/45041aa82aa362240f4deb94205829cadadc56d6ae91b575d0863a5d9849f6f6.jpg) + +![](images/9c8c4e56c3948b81e8b005bef985675d1010bd92cea58d44c9bcadda76f92eb3.jpg) + +![](images/6801314a9ed4f3bf0888b235303bda8ccce945e7226c75c9eeefa5029bd87695.jpg) +Figure 2: Cumulative Stochastic Gradient as a function of number of iterations, where netD and netG stand for the discriminator and generator respectively. The blue curve and red curve stand for the growth rate of the cumulative stochastic gradient for OAdagrad and its corresponding tightest polynomial growth upper bound, respectively. + +![](images/18cf1454f5f02b4da388f414f2fe10479bdbec50ba9c3b0ffe5e010f4465746e.jpg) + +![](images/4cc7bd7c1f42edc78fe21d14368f0efc3c4d7c25af586fd5bc197ecfaf302807.jpg) + +GAN with gradient penalty (WGAN-GP) (Gulrajani et al., 2017) and CIFAR10 data in our experiments. 
The architectures of the discriminator and generator, and the penalty parameter in WGAN-GP, are set to be the same as in the original paper. We compare Alternating Adam, OSG and OAdagrad, where Alternating Adam runs 5 steps of Adam on the discriminator before performing 1 step of Adam on the generator. We try different batch sizes (64, 128, 256) for each algorithm. For each algorithm, we tune the learning rate in the range of $\{1\times 10^{-3},2\times 10^{-4},1\times 10^{-4},2\times 10^{-5},1\times 10^{-5}\}$ when using batch size 64, and use the same learning rate for batch sizes 128 and 256. We report the Inception Score (IS) (Salimans et al., 2016) as a function of the number of iterations. Figure 1 suggests that OAdagrad performs better than OSG and Alternating Adam, resulting in a higher IS. We compare the generated CIFAR10 images associated with these three methods in Appendix A. We also provide experimental results comparing the performance of the different algorithms using different minibatch sizes in Appendix E.

Growth Rate of Cumulative Stochastic Gradient In the second experiment, we employ OAdagrad to train GANs and study the growth rate of the cumulative stochastic gradient (i.e., $\sum_{i=1}^{d}\|\widehat{\mathbf{g}}_{1:N,i}\|_2$). We tune the learning rate from $\{1\times 10^{-3},2\times 10^{-4},1\times 10^{-4},2\times 10^{-5},1\times 10^{-5}\}$ and choose the batch size to be 64. In Figure 2, the blue curve and red curve stand for the growth rate for OAdagrad and its corresponding tightest polynomial growth upper bound, respectively. $N$ is the number of iterations, and $c$ is a multiplicative constant such that the red curve and blue curve overlap at the starting point of training. The degree of the polynomial is determined using binary search.
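This binary search over the polynomial degree can be sketched as follows (a simplified reconstruction; the function name, tolerance, and synthetic curve are our own assumptions, not the paper's code):

```python
import numpy as np

def tightest_degree(curve, tol=1e-3):
    # Smallest degree a such that c * k**a upper-bounds the observed curve,
    # where c is chosen so the bound and the curve overlap at the first point.
    k = np.arange(1, len(curve) + 1)
    c = curve[0]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.all(c * k ** mid >= curve - 1e-12):
            hi = mid            # bound holds; try a smaller degree
        else:
            lo = mid            # bound violated; increase the degree
    return hi

# A synthetic curve growing like k**0.2 is recovered up to the tolerance.
k = np.arange(1, 1001)
print(round(tightest_degree(3.0 * k ** 0.2), 2))  # prints: 0.2
```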
We can see that the cumulative stochastic gradient grows very slowly in GANs (the worst-case polynomial degree is 0.5, but it is 0.2 for WGAN-GP on CIFAR10 and 0.07 for WGAN on the LSUN Bedroom dataset). As predicted by our theory, this behavior explains the faster convergence of OAdagrad versus OSG, consistent with what is observed empirically in Figure 1.

![](images/7aae5a97e891b4191f2df458bdc56595d12867c4498e9c15d3e49d36a7a7c491.jpg)
(a) Inception Score

![](images/ffab3ce98c25089ab0a20852fcbd1a6ec31176094ab3740565fa0c6ce6e6262f.jpg)
(b) FID
Figure 3: Self-Attention GAN on ImageNet, with evaluation using the Official TensorFlow Inception Score and Official TensorFlow FID. We see that OAdagrad indeed outperforms Simultaneous Adam in terms of the (TensorFlow) Inception Score (higher is better) and the (TensorFlow) Fréchet Inception Distance (lower is better). We do not report Alternating Adam here since it collapsed in our run.

Self-Attention GAN on ImageNet In the third experiment, we consider GAN training on a large-scale dataset. We use the model from Self-Attention GAN (SA-GAN) (Zhang et al., 2018) and ImageNet as our dataset. Note that in this setting the boundedness of both the generator $(G)$ and discriminator $(D)$ is ensured by spectral normalization of both $G$ and $D$. Three separate experiments are performed, including Alternating Adam (baseline), Simultaneous Adam (Mescheder et al., 2017), and OAdagrad. It should be mentioned that the update rule of Simultaneous Adam performs an Adam-type update for the discriminator and generator simultaneously. Training is performed with batch size 128 for all experiments.

For the baseline experiment (Alternating Adam) we use the default settings and hyperparameters reported in SA-GAN (Zhang et al., 2018) (note that we are not using the same batch size of 256 as in (Zhang et al., 2018) due to limited computational resources).
In our experience, Alternating Adam training with batch size 128 and the same learning rates as in SA-GAN (0.0001 for the generator and 0.0004 for the discriminator) collapsed. This does not mean that Alternating Adam fails; it simply needs more tuning to find the correct range of learning rates for the particular batch size we use. Within the hyperparameter ranges we tried, Alternating Adam collapsed; with extra tuning effort and an expensive computational budget, it would presumably eventually succeed. This is in line with the large-scale study in (Lucic et al., 2018), which states that given a large computational budget for tuning hyperparameters, most GAN training approaches succeed equally well.

For both OAdagrad and Simultaneous Adam, we use different learning rates for the generator and discriminator, as suggested in (Heusel et al., 2017). Specifically, the learning rates used are $10^{-3}$ for the generator and $4 \times 10^{-5}$ for the discriminator. We report both the Inception Score (IS) and the Fréchet Inception Distance (FID) (Heusel et al., 2017) as a function of the number of iterations.

We compare the generated ImageNet images associated with the three optimization methods in Appendix A. Since Alternating Adam collapsed, we do not report its Inception Score or FID. As can be seen in Figure 3 and Appendix A, OAdagrad outperforms Simultaneous Adam in quantitative metrics (IS and FID) and in the quality of the generated samples. Future work will include investigating whether OAdagrad would benefit from training with a larger batch size, in order to achieve state-of-the-art results.

# 6 CONCLUSION

In this paper, we explain the effectiveness of adaptive gradient methods in training GANs from both theoretical and empirical perspectives. Theoretically, we provide two efficient stochastic algorithms for solving a class of min-max non-convex non-concave problems with state-of-the-art computational complexities.
We also establish adaptive complexity results for an Adagrad-style algorithm that uses coordinate-wise stepsizes adapted to the geometry of the historical data. The algorithm is proven to enjoy faster adaptive convergence than its non-adaptive counterpart when the gradient is sparse, similar to Adagrad applied to a convex minimization problem. We have conducted extensive empirical studies to verify our theoretical findings. In addition, our experimental results suggest that adaptive gradient methods deliver good practical performance for GAN training because of the slow growth rate of the cumulative stochastic gradient.

# ACKNOWLEDGMENTS

The authors thank the anonymous reviewers for their helpful comments. M. Liu and T. Yang are partially supported by National Science Foundation CAREER Award 1844403. M. Liu would like to thank Xiufan Yu from Pennsylvania State University and Zehao Dou from Yale University for helpful discussions.

# REFERENCES

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
Waiss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. A tight and unified analysis of extragradient for a whole spectrum of differentiable games. arXiv preprint arXiv:1906.05945, 2019.
Francis Bach and Kfir Y Levy. A universal algorithm for variational inequalities adaptive to smoothness and noise. arXiv preprint arXiv:1902.01637, 2019.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
Tatjana Chavdarova, Gauthier Gidel, Francois Fleuret, and Simon Lacoste-Julien. Reducing noise in gan training with variance reduced extragradient. arXiv preprint arXiv:1904.08598, 2019.
Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In Conference on Learning Theory, pp.
6-1, 2012. +Cong D Dang and Guanghui Lan. On the convergence properties of non-euclidean extragradient methods for variational inequalities with generalized monotone operators. Computational Optimization and applications, 60(2):277-310, 2015. +Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems, pp. 9236-9246, 2018. +Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training gans with optimism. arXiv preprint arXiv:1711.00141, 2017. +John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011. +Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013. +Gauthier Gidel, Hugo Berard, Gaetan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. arXiv preprint arXiv:1802.10551, 2018. +Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pp. 1802-1811. PMLR, 16-18 Apr 2019. URL http://proceedings.mlr.press/v89/gidel19a.html. + +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014. +Paulina Grinarova, Kfir Y Levy, Aurelien Lucchi, Thomas Hofmann, and Andreas Krause. An online learning approach to generative adversarial networks. arXiv preprint arXiv:1706.03269, 2017. 
+Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767-5777, 2017. +Philip Hartman and Guido Stampacchia. On some non-linear elliptic differential-functional equations. Acta mathematica, 115(1):271-310, 1966. +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017. +AN Iusem, Alejandro Jofre, Roberto I Oliveira, and Philip Thompson. Extragradient method with variance reduction for stochastic variational inequalities. SIAM Journal on Optimization, 27(2): 686-724, 2017. +Anatoli Juditsky, Arkadi Nemirovski, Claire Tauvel, et al. Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems, 1(1):17-58, 2011. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Robert Kleinberg, Yanzhi Li, and Yang Yuan. An alternative view: When does sgd escape local minima? arXiv preprint arXiv:1802.06175, 2018. +GM Korpelevich. The extragradient method for finding saddle points and other problems. Matecon, 12:747-756, 1976. +Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with relu activation. In Advances in Neural Information Processing Systems, pp. 597-607, 2017. +Qihang Lin, Mingrui Liu, Hassan Rafique, and Tianbao Yang. Solving weakly-convex-weakly-concave saddle-point problems as weakly-monotone variational inequality. arXiv preprint arXiv:1810.10207, 2018. +Tianyi Lin, Chi Jin, and Michael I Jordan. On gradient descent ascent for nonconvex-concave minimax problems. arXiv preprint arXiv:1906.00331, 2019. +Mingrui Liu, Zhuoning Yuan, Yiming Ying, and Tianbao Yang. Stochastic auc maximization with deep neural networks. 
In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJepXaVYDr. +Songtao Lu, Ioannis Tsaknakis, and Mingyi Hong. Block alternating optimization for non-convex min-max problems: algorithms and applications in signal processing and communications. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4754-4758. IEEE, 2019. +Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. In Advances in neural information processing systems, pp. 700-709, 2018. +Eric V Mazumdar, Michael I Jordan, and S Shankar Sastry. On finding local nash equilibria (and only local nash equilibria) in zero-sum games. arXiv preprint arXiv:1901.00838, 2019. +Panayotis Mertikopoulos, Houssam Zenati, Bruno Lecouat, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Mirror descent in saddle-point problems: Going the extra (gradient) mile. arXiv preprint arXiv:1807.02629, 2018. + +Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of gans. In Advances in Neural Information Processing Systems, pp. 1825-1835, 2017. +George J Minty et al. Monotone (nonlinear) operators in hilbert space. Duke Mathematical Journal, 29(3):341-346, 1962. +Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018. +Vaishnavh Nagarajan and J Zico Kolter. Gradient descent gan optimization is locally stable. In Advances in Neural Information Processing Systems, pp. 5585-5595, 2017. +Arkadi Nemirovski. Prox-method with rate of convergence o (1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229-251, 2004. +Arkadi Nemirovski and D Yudin. 
On Cezari's convergence of the steepest descent method for approximating saddle point of convex-concave functions. In Soviet Math. Dokl, volume 19, pp. 258-269, 1978.
Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
Arkadi Semenovich Nemirovsky and David Borisovich Yudin. Problem complexity and method efficiency in optimization. 1983.
Yurii Nesterov. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming, 109(2-3):319-344, 2007.
Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. Pytorch: Tensors and dynamic neural networks in python with strong GPU acceleration. Team, Pytorch Core, 6, 2017.
Boris Teodorovich Polyak. Minimization of unsmooth functionals. USSR Computational Mathematics and Mathematical Physics, 9(3):14-29, 1969.
Hassan Rafique, Mingrui Liu, Qihang Lin, and Tianbao Yang. Non-convex min-max optimization: Provable algorithms and applications in machine learning. arXiv preprint arXiv:1810.02060, 2018.
Sasha Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pp. 3066-3074, 2013.
Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. arXiv preprint arXiv:1904.09237, 2019.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in neural information processing systems, pp. 2234-2242, 2016.
Maziar Sanjabi, Meisam Razaviyayn, and Jason D Lee. Solving non-convex non-concave min-max games under Polyak-Łojasiewicz condition. arXiv preprint arXiv:1812.02878, 2018.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop, coursera: Neural networks for machine learning. University of Toronto, Technical Report, 2012. +Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pp. 4148-4158, 2017. + +Yi Xu, Zhuoning Yuan, Sen Yang, Rong Jin, and Tianbao Yang. On the convergence of (stochastic) gradient descent with extrapolation for non-convex optimization. arXiv preprint arXiv:1901.10682, 2019. +Abhay Yadav, Sohil Shah, Zheng Xu, David Jacobs, and Tom Goldstein. Stabilizing adversarial nets with prediction methods. arXiv preprint arXiv:1705.07364, 2017. +Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018. +Renbo Zhao. Optimal stochastic algorithms for convex-concave saddle-point problems. arXiv preprint arXiv:1903.01687, 2019. +Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, and Vahid Tarokh. Sgd converges to global minimum in deep learning via star-convex path. arXiv preprint arXiv:1901.00451, 2019. + +# A MORE EXPERIMENTAL RESULTS + +Comparison of Generated CIFAR10 Images by Different Optimization Methods In this section, we report the generated CIFAR10 images during the training of WGAN-GP by three optimization methods (OSG, OAdagrad, Alternating Adam). Every method uses batch size 64, and 1 iteration represents calculating the stochastic gradient with minibatch size 64 once. Figure 4 consists of images by three optimization methods at iteration 8000. Visually we can see that OAdagrad is better than Alternating Adam, and both of them are significantly better than OSG. It is consistent with the inception score results reported in Figure 1, and it also illustrates the tremendous benefits delivered by adaptive gradient methods when training GANs. 
![](images/daeae3a08971792b8ffe54ef1d2f546382ad08e06a74abc2e68ee27f81f8782f.jpg)
(a) OSG

![](images/041e803d5a85d41c8b9f709c7e154b11a6d280438576d179e8decb9d84db5714.jpg)
(b) OAdagrad
Figure 4: WGAN-GP: Generated CIFAR10 images using different optimization methods at iteration 8000.

![](images/63db6e0a4fc042a2914d00448e38c505bc12af2a44f5635d56d68f8a2568d3f3.jpg)
(c) Alternating Adam

Comparison of Generated ImageNet Images by Different Optimization Methods In this section, we report the generated ImageNet images during the training of Self-Attention GAN by three optimization methods (OAdagrad, Simultaneous Adam, Alternating Adam). Every method uses batch size 128, and 1 iteration represents calculating the stochastic gradient with minibatch size 128 once. Figure 5 consists of images generated by the three optimization methods at iteration 135000. Visually it is apparent that OAdagrad is better than Simultaneous Adam, and both of them are significantly better than Alternating Adam.

![](images/a3cfee7470d6f3f3654d90275bd56b2923740f8f6f99a9f865dda72319d8b9b5.jpg)
(a) OAdagrad
Figure 5: Self-Attention GAN (SA-GAN): Generated ImageNet images using different optimization methods at iteration 135000. OAdagrad produces better quality images than Simultaneous Adam. For both OAdagrad and Simultaneous Adam we use the same learning rates: 0.001 for the generator and 0.00004 for the discriminator. In our experience, Alternating Adam with the same learning rates as in SA-GAN (0.0001 for the generator and 0.0004 for the discriminator) collapsed. Note that our setting is different from SA-GAN, since our batch size is 128 while it is 256 in SA-GAN. It was also noted in SA-GAN that Alternating Adam is hard to train.
![](images/4c8da986c350cf25ae1b4e4542bf87bcf3ac2382d2a6905ba6392ac03af80e52.jpg)
(b) Simultaneous Adam

![](images/8fb0acc84e02b20aa72bdbfbf6b34fd930eb87d9879ae36c6328b853f7211482.jpg)
(c) Alternating Adam

# Unofficial PyTorch Inception Score and FID results for SA-GAN on ImageNet

![](images/2e48c04f0014ac9191892ec7ada8222dba48a4eca1a06efeb97800ec86380b2d.jpg)
(a) Inception Score

![](images/4c40814608fa5cec78a0303676feb786c70aef579920cf50ed5614f675b21870.jpg)
(b) FID
Figure 6: Self-Attention GAN on ImageNet, with evaluation using the Unofficial PyTorch Inception Score and Unofficial PyTorch FID. We see that OAdagrad indeed outperforms Simultaneous Adam in terms of the (PyTorch) Inception Score (higher is better) and the (PyTorch) Fréchet Inception Distance (lower is better). We do not report Alternating Adam here since it collapsed in our run.

# B RELATED WORK

Min-max Optimization and GAN Training For convex-concave min-max optimization, the extragradient method was first proposed by Korpelevich (1976). Later on, under a gradient Lipschitz condition, Nemirovski (2004) extended the idea of extragradient to mirror-prox and obtained the $O(1 / N)$ convergence rate in terms of the duality gap (see also (Nesterov, 2007)), where $N$ is the number of iterations. When only a stochastic first-order oracle is available, stochastic mirror-prox was analyzed by Juditsky et al. (2011). The convergence rates for both deterministic and stochastic mirror-prox are optimal (Nemirovsky & Yudin, 1983). Recently, Zhao (2019) developed a nearly-optimal stochastic first-order algorithm for the case where the objective is strongly convex in the primal variable. Bach & Levy (2019) proposed a universal algorithm that is adaptive to smoothness and noise, and simultaneously achieves the optimal convergence rate.
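For intuition on the role of the extrapolation step, consider the classic bilinear toy problem $\min_x \max_y xy$: plain simultaneous gradient descent-ascent spirals away from the saddle point $(0,0)$, while the extragradient method converges toward it. A minimal sketch (our own toy illustration, not code from any of the cited papers):

```python
import numpy as np

def gda(eta=0.1, steps=200):
    # Plain simultaneous gradient descent-ascent on min_x max_y x*y.
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return np.hypot(x, y)

def extragradient(eta=0.1, steps=200):
    # Extragradient: take an extrapolation step, then update using the
    # gradient evaluated at the extrapolated point.
    x, y = 1.0, 1.0
    for _ in range(steps):
        xh, yh = x - eta * y, y + eta * x        # extrapolation step
        x, y = x - eta * yh, y + eta * xh        # update with gradient at (xh, yh)
    return np.hypot(x, y)

# Distance to the saddle point after 200 steps, starting from (1, 1):
# GDA moves farther away, extragradient moves closer.
print(gda() > np.hypot(1.0, 1.0), extragradient() < np.hypot(1.0, 1.0))  # prints: True True
```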
There is a plethora of work analyzing the one-sided nonconvex min-max problem, where the objective function is nonconvex in the minimization variable but concave in the maximization variable. When the function is weakly-convex in the minimization variable, Rafique et al. (2018) propose a stage-wise stochastic algorithm that approximately solves a convex-concave subproblem by adding a quadratic regularizer, and show first-order convergence of the equivalent minimization problem. Under the same setting, Lu et al. (2019) utilize a block-based optimization strategy and show the convergence of the stationarity gap. By further assuming that the function is smooth in the minimization variable, Lin et al. (2019) show that (stochastic) gradient descent ascent is able to converge to a first-order stationary point of the equivalent minimization problem. Liu et al. (2020) cast the problem of stochastic AUC maximization with deep neural networks as a nonconvex-concave min-max problem, show that the PL (Polyak-Łojasiewicz) condition holds for the objective of the outer minimization problem, and propose an algorithm with a fast convergence rate.

A more challenging problem is the non-convex non-concave min-max problem. Dang & Lan (2015) demonstrate that the deterministic extragradient method is able to converge to an $\epsilon$-first-order stationary point with a non-asymptotic guarantee. Under the condition that the objective function is weakly-convex and weakly-concave, Lin et al. (2018) design a stage-wise algorithm, where in each stage a strongly-convex strongly-concave subproblem is constructed by adding quadratic terms, and appropriate stochastic algorithms can be employed to approximately solve it. They also show convergence to a stationary point. Sanjabi et al.
(2018) design an alternating deterministic optimization algorithm, in which multiple steps of gradient ascent for the dual variable are conducted before one step of gradient descent for the primal variable is performed. They show convergence to a stationary point based on the assumption that the inner maximization problem satisfies the PL condition (Polyak, 1969). Our work is different from these previous methods in many aspects. In comparison to (Lin et al., 2018), our result does not need the bounded domain assumption. Furthermore, our iteration complexity is $O(1/\epsilon^4)$ to achieve an $\epsilon$-first-order stationary point, while the corresponding complexity in (Lin et al., 2018) is $O(1 / \epsilon^6)$. When comparing to (Sanjabi et al., 2018), we do not assume that the PL (Polyak-Łojasiewicz) condition holds. Additionally, our algorithm is stochastic and not restricted to the deterministic case. The work most closely related to the present one is (Iusem et al., 2017). The stochastic extragradient method analyzed in (Iusem et al., 2017) requires the calculation of two stochastic gradients per iteration, while the present algorithm only needs one, since it memorizes the stochastic gradient from the previous iteration to guide the update in the current iteration. Nevertheless, we achieve the same iteration complexity as in (Iusem et al., 2017).

There is a body of work analyzing the convergence behavior of min-max optimization algorithms and their application in training GANs (Heusel et al., 2017; Daskalakis & Panageas, 2018; Nagarajan & Kolter, 2017; Grinarova et al., 2017; Yadav et al., 2017; Gidel et al., 2018; Mertikopoulos et al., 2018; Mazumdar et al., 2019). A few of them (Heusel et al., 2017; Daskalakis & Panageas, 2018; Mazumdar et al., 2019) only have asymptotic convergence guarantees. Others (Nagarajan & Kolter, 2017; Grinarova et al., 2017; Daskalakis et al., 2017; Yadav et al., 2017; Gidel et al., 2018; Mertikopoulos et al., 2018) focus on more restricted settings.
For example, Nagarajan & Kolter (2017); Grinarova et al. (2017) require concavity of the objective function in the dual variable. Yadav et al. (2017); Gidel et al. (2018) assume the objective to be convex-concave. Mertikopoulos et al. (2018) impose the so-called coherence condition, which is stronger than our assumption. Daskalakis et al. (2017) analyze last-iterate convergence for the bilinear problem. Recently, Gidel et al. (2019) analyze the benefits of using negative momentum in alternating gradient descent to improve the training of a bilinear game. Chavdarova et al. (2019) develop a variance-reduced extragradient method and show its linear convergence under strong monotonicity and finite-sum structure assumptions. Azizian et al. (2019) provide a unified analysis of extragradient for bilinear games, the strongly monotone case, and their intermediate cases. However, none of them give non-asymptotic convergence results for the class of non-convex non-concave min-max problems considered in our paper.

# C PROOF OF THEOREM 1

# C.1 FACTS

Suppose $\mathcal{X} \subset \mathbb{R}^d$ is a closed and convex set. Then we have:

Fact 1. For all $\mathbf{x} \in \mathbb{R}^d$ and $\mathbf{y} \in \mathcal{X}$, $\|\Pi_{\mathcal{X}}(\mathbf{x}) - \mathbf{y}\|^2 + \|\Pi_{\mathcal{X}}(\mathbf{x}) - \mathbf{x}\|^2 \leq \|\mathbf{x} - \mathbf{y}\|^2$.

Fact 2. For all $\mathbf{x} \in \mathbb{R}^d$ and $\mathbf{y} \in \mathcal{X}$, $\langle \mathbf{x} - \Pi_{\mathcal{X}}(\mathbf{x}), \mathbf{y} - \Pi_{\mathcal{X}}(\mathbf{x}) \rangle \leq 0$.

# C.2 LEMMAS

Lemma 1.
For $\eta \leq \frac{1}{9L}$, we have

$$
\frac{1}{2}\sum_{k=1}^{N}\|\mathbf{x}_{k-1}-\mathbf{z}_{k}\|^{2}+\frac{1}{2}\sum_{k=1}^{N}\|\mathbf{x}_{k}-\mathbf{z}_{k}\|^{2}\leq\|\mathbf{x}_{0}-\mathbf{x}_{*}\|^{2}-\|\mathbf{x}_{N}-\mathbf{x}_{*}\|^{2}+12\eta^{2}\sum_{k=0}^{N}\|\epsilon_{k}\|^{2}+\sum_{k=1}^{N}\Lambda_{k} \tag{7}
$$

Proof. Let $\mathbf{x}_* \in \mathcal{X}^*$, where $\mathcal{X}^*$ is the set of optimal solutions of $\mathrm{MVI}(T, \mathcal{X})$, i.e., $\langle T(\mathbf{x}), \mathbf{x} - \mathbf{x}_* \rangle \geq 0$ holds for all $\mathbf{x} \in \mathcal{X}$. Define $\epsilon_k = \frac{1}{m_k} \sum_{i=1}^{m_k} T(\mathbf{z}_k, \xi_k^i) - T(\mathbf{z}_k)$ and $\widehat{T}(\epsilon_k, \mathbf{z}_k) = T(\mathbf{z}_k) + \epsilon_k$. For any $\mathbf{x}\in \mathcal{X}$, we have

$$
\begin{aligned}
\|\mathbf{x}_{k}-\mathbf{x}\|^{2} &= \left\|\Pi_{\mathcal{X}}\left(\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right)-\mathbf{x}\right\|^{2} \\
&\stackrel{(a)}{\leq} \left\|\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})-\mathbf{x}\right\|^{2}-\left\|\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})-\Pi_{\mathcal{X}}\left(\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right)\right\|^{2} \\
&= \left\|\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})-\mathbf{x}\right\|^{2}-\left\|\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})-\mathbf{x}_{k}\right\|^{2} \\
&= \|\mathbf{x}_{k-1}-\mathbf{x}\|^{2}-\|\mathbf{x}_{k-1}-\mathbf{x}_{k}\|^{2}+2\left\langle\mathbf{x}-\mathbf{x}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle \\
&= \|\mathbf{x}_{k-1}-\mathbf{x}\|^{2}-\|\mathbf{x}_{k-1}-\mathbf{x}_{k}\|^{2}+2\left\langle\mathbf{x}-\mathbf{z}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle+2\left\langle\mathbf{z}_{k}-\mathbf{x}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle \\
&= \|\mathbf{x}_{k-1}-\mathbf{x}\|^{2}-\|\mathbf{x}_{k-1}-\mathbf{z}_{k}+\mathbf{z}_{k}-\mathbf{x}_{k}\|^{2}+2\left\langle\mathbf{x}-\mathbf{z}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle+2\left\langle\mathbf{z}_{k}-\mathbf{x}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle \\
&= \|\mathbf{x}_{k-1}-\mathbf{x}\|^{2}-\|\mathbf{x}_{k-1}-\mathbf{z}_{k}\|^{2}-\|\mathbf{z}_{k}-\mathbf{x}_{k}\|^{2}-2\left\langle\mathbf{x}_{k-1}-\mathbf{z}_{k},\mathbf{z}_{k}-\mathbf{x}_{k}\right\rangle \\
&\quad+2\left\langle\mathbf{x}-\mathbf{z}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle+2\left\langle\mathbf{z}_{k}-\mathbf{x}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle \\
&= \|\mathbf{x}_{k-1}-\mathbf{x}\|^{2}-\|\mathbf{x}_{k-1}-\mathbf{z}_{k}\|^{2}-\|\mathbf{z}_{k}-\mathbf{x}_{k}\|^{2}+2\left\langle\mathbf{x}-\mathbf{z}_{k},\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})\right\rangle+2\left\langle\mathbf{x}_{k}-\mathbf{z}_{k},\mathbf{x}_{k-1}-\eta\widehat{T}(\epsilon_{k},\mathbf{z}_{k})-\mathbf{z}_{k}\right\rangle
\end{aligned} \tag{8}
$$

where (a) holds by using Fact 1.
Note that + +$$ +2 \left\langle \mathbf {x} _ {*} - \mathbf {z} _ {k}, \eta \widehat {T} \left(\epsilon_ {k}, \mathbf {z} _ {k}\right) \right\rangle = 2 \left\langle \mathbf {x} _ {*} - \mathbf {z} _ {k}, \eta \left(T \left(\mathbf {z} _ {k}\right) + \epsilon_ {k}\right) \right\rangle \leq 2 \left\langle \mathbf {x} _ {*} - \mathbf {z} _ {k}, \eta \epsilon_ {k} \right\rangle , \tag {9} +$$ + +where the last inequality holds by the fact that $\langle \mathbf{x}_* - \mathbf{z}_k, T(\mathbf{z}_k) \rangle \leq 0$ since $\mathbf{x}_*$ is a solution of $\mathrm{MVI}(T, \mathcal{X})$ . Note that + +$$ +\begin{array}{l} 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, \mathbf {x} _ {k - 1} - \eta \widehat {T} (\epsilon_ {k}, \mathbf {z} _ {k}) - \mathbf {z} _ {k} \right\rangle \\ = 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, \mathbf {x} _ {k - 1} - \eta \widehat {T} \left(\epsilon_ {k - 1}, \mathbf {z} _ {k - 1}\right) - \mathbf {z} _ {k} \right\rangle + 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, \eta \left(\widehat {T} \left(\epsilon_ {k - 1}, \mathbf {z} _ {k - 1}\right) - \widehat {T} \left(\epsilon_ {k}, \mathbf {z} _ {k}\right)\right) \right\rangle \\ \stackrel {(a)} {\leq} 2 \eta \left\| \mathbf {x} _ {k} - \mathbf {z} _ {k} \right\| \cdot \left\| \widehat {T} \left(\epsilon_ {k - 1}, \mathbf {z} _ {k - 1}\right) - \widehat {T} \left(\epsilon_ {k}, \mathbf {z} _ {k}\right) \right\| \\ \stackrel {(b)} {\leq} 2 \eta \left\| \Pi_ {\mathcal {X}} \left(\mathbf {x} _ {k - 1} - \eta \cdot \widehat {T} (\epsilon_ {k}, \mathbf {z} _ {k})\right) - \Pi_ {\mathcal {X}} \left(\mathbf {x} _ {k - 1} - \eta \cdot \widehat {T} (\epsilon_ {k - 1}, \mathbf {z} _ {k - 1})\right) \right\| \cdot \left\| \widehat {T} (\epsilon_ {k - 1}, \mathbf {z} _ {k - 1}) - \widehat {T} (\epsilon_ {k}, \mathbf {z} _ {k}) \right\| \\ \stackrel {(c)} {\leq} 2 \eta^ {2} \left\| \widehat {T} (\epsilon_ {k - 1}, \mathbf {z} _ {k - 1}) - \widehat {T} (\epsilon_ {k}, \mathbf {z} _ 
{k}) \right\| ^ {2} = 2 \eta^ {2} \| T (\mathbf {z} _ {k - 1}) + \epsilon_ {k - 1} - (T (\mathbf {z} _ {k}) + \epsilon_ {k}) \| ^ {2} \\ \leq 2 \eta^ {2} \left(\| T (\mathbf {z} _ {k - 1}) - T (\mathbf {z} _ {k}) \| + \| \epsilon_ {k - 1} \| + \| \epsilon_ {k} \|\right) ^ {2} \stackrel {(d)} {\leq} 2 \eta^ {2} \left(L \| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \| + \| \epsilon_ {k - 1} \| + \| \epsilon_ {k} \|\right) ^ {2} \\ \stackrel {(e)} {\leq} 6 \eta^ {2} \left(L ^ {2} \| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \| ^ {2} + \| \epsilon_ {k - 1} \| ^ {2} + \| \epsilon_ {k} \| ^ {2}\right) \tag {10} \\ \end{array}
$$

where (a) holds by $\left\langle \mathbf{x}_k - \mathbf{z}_k, \mathbf{x}_{k-1} - \eta \widehat{T}(\epsilon_{k-1}, \mathbf{z}_{k-1}) - \mathbf{z}_k \right\rangle \leq 0$ (which follows from Fact 2 and the update rules of the algorithm) together with the Cauchy–Schwarz inequality, (b) holds by the update rules of $\mathbf{z}_k$ and $\mathbf{x}_k$ , (c) holds by the non-expansion property of the projection operator, (d) holds since $T$ is $L$ -Lipschitz continuous, and (e) holds since $(a + b + c)^2 \leq 3a^2 + 3b^2 + 3c^2$ .

Define $\Lambda_{k} = 2\langle \mathbf{x}_{*} - \mathbf{z}_{k},\eta \epsilon_{k}\rangle$ .
Taking $\mathbf{x} = \mathbf{x}_*$ in (8) and combining (9) and (10), we have + +$$ +\begin{array}{l} \left\| \mathbf {x} _ {k} - \mathbf {x} _ {*} \right\| ^ {2} \\ \leq \left\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \right\| ^ {2} - \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| ^ {2} - \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2} + 6 \eta^ {2} L ^ {2} \left\| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \right\| ^ {2} + 6 \eta^ {2} \left\| \epsilon_ {k - 1} \right\| ^ {2} + 6 \eta^ {2} \left\| \epsilon_ {k} \right\| ^ {2} + \Lambda_ {k} \tag {11} \\ \end{array} +$$ + +Noting that + +$$ +\| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \| ^ {2} = \| \mathbf {z} _ {k - 1} - \mathbf {x} _ {k - 1} + \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| ^ {2} \leq 3 \| \mathbf {z} _ {k - 1} - \mathbf {x} _ {k - 1} \| ^ {2} + 3 \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| ^ {2} + 3 \| \mathbf {x} _ {k} - \mathbf {z} _ {k} \| ^ {2}, +$$ + +we rearrange terms in (11), which yields + +$$ +\begin{array}{l} \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| ^ {2} + \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2} - 6 \eta^ {2} L ^ {2} \left(3 \left\| \mathbf {z} _ {k - 1} - \mathbf {x} _ {k - 1} \right\| ^ {2} + 3 \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| ^ {2} + 3 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2}\right) \\ \leq \left\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \right\| ^ {2} - \left\| \mathbf {x} _ {k} - \mathbf {x} _ {*} \right\| ^ {2} + 6 \eta^ {2} \left\| \epsilon_ {k - 1} \right\| ^ {2} + 6 \eta^ {2} \left\| \epsilon_ {k} \right\| ^ {2} + \Lambda_ {k} \tag {12} \\ \end{array} +$$ + +Take summation over $k = 1,\dots ,N$ in (12) and note that $\mathbf{x}_0 = \mathbf{z}_0$ , which yields + +$$ +\begin{array}{l} \left(1 - 1 8 \eta^ {2} L ^ {2}\right) \sum_ {k = 1} ^ {N} \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| ^ {2} + \left(1 - 3 6 \eta^ {2} L ^ {2}\right) \sum_ {k = 1} ^ 
{N} \left\| \mathbf {x} _ {k} - \mathbf {z} _ {k} \right\| ^ {2} \tag {13} \\ \leq \left\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \right\| ^ {2} - \left\| \mathbf {x} _ {N} - \mathbf {x} _ {*} \right\| ^ {2} + 1 2 \eta^ {2} \sum_ {k = 0} ^ {N} \left\| \epsilon_ {k} \right\| ^ {2} + \sum_ {k = 1} ^ {N} \Lambda_ {k} \\ \end{array}
$$

By taking $\eta \leq \frac{1}{9L}$ , we have $1 - 36\eta^2 L^2 \geq \frac{1}{2}$ , and the result follows.

![](images/ee79f3cd7a4a0ed7e06fbe2343fb9bebc664b453f1a6d676a5a5599f7b3f291e.jpg)

# C.3 MAIN PROOF OF THEOREM 1

Proof. Define $r_{\eta}(\mathbf{z}_k) = \|\mathbf{z}_k - \Pi_{\mathcal{X}}(\mathbf{z}_k - \eta T(\mathbf{z}_k))\|$ . Our goal is to bound $r_{\eta}(\mathbf{z}_k)$ . We have:

$$
\begin{array}{l} r _ {\eta} ^ {2} (\mathbf {z} _ {k}) = \left\| \mathbf {z} _ {k} - \Pi_ {\mathcal {X}} \left(\mathbf {z} _ {k} - \eta T (\mathbf {z} _ {k})\right) \right\| ^ {2} = \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} + \mathbf {x} _ {k} - \Pi_ {\mathcal {X}} \left(\mathbf {z} _ {k} - \eta T (\mathbf {z} _ {k})\right) \right\| ^ {2} \\ \stackrel {(a)} {\leq} 2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2} + 2 \left\| \mathbf {x} _ {k} - \Pi_ {\mathcal {X}} \left(\mathbf {z} _ {k} - \eta T (\mathbf {z} _ {k})\right) \right\| ^ {2} \\ = 2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2} + 2 \left\| \Pi_ {\mathcal {X}} \left(\mathbf {x} _ {k - 1} - \eta \widehat {T} \left(\epsilon_ {k}, \mathbf {z} _ {k}\right)\right) - \Pi_ {\mathcal {X}} \left(\mathbf {z} _ {k} - \eta T \left(\mathbf {z} _ {k}\right)\right) \right\| ^ {2} \tag {14} \\ \stackrel {(b)} {\leq} 2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2} + 4 \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| ^ {2} + 4 \eta^ {2} \left\| T (\mathbf {z} _ {k}) - \widehat {T} (\epsilon_ {k}, \mathbf {z} _ {k}) \right\| ^ {2} \\ \leq 4 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| ^ {2} + 4 \left\| \mathbf {x} _ {k - 1} - 
\mathbf {z} _ {k} \right\| ^ {2} + 4 \eta^ {2} \| \epsilon_ {k} \| ^ {2} \\ \end{array}
$$

where (a) holds since $(a + b)^2 \leq 2a^2 + 2b^2$ , (b) holds by the non-expansion property of the projection operator and $(a + b)^2 \leq 2a^2 + 2b^2$ .

Let $\mathbf{x}_* \in \mathcal{X}^*$ , where $\mathcal{X}^*$ is the set of optimal solutions of $\mathrm{MVI}(T, \mathcal{X})$ , i.e. $\langle T(\mathbf{x}), \mathbf{x} - \mathbf{x}_* \rangle \geq 0$ holds for all $\mathbf{x} \in \mathcal{X}$ . Define $\epsilon_k = \frac{1}{m_k} \sum_{i=1}^{m_k} T(\mathbf{z}_k, \xi_k^i) - T(\mathbf{z}_k)$ , and $\widehat{T}(\epsilon_k, \mathbf{z}_k) = T(\mathbf{z}_k) + \epsilon_k$ . Define $\Lambda_k = 2\langle \mathbf{x}_* - \mathbf{z}_k, \eta \epsilon_k \rangle$ .

By summing over $k$ in Equation (14) and using Equation (7) in Lemma 1, we have

$$
\begin{array}{l} \sum_ {k = 1} ^ {N} r _ {\eta} ^ {2} (\mathbf {z} _ {k}) \leq 4 \sum_ {k = 1} ^ {N} \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| ^ {2} + 4 \sum_ {k = 1} ^ {N} \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| ^ {2} + 4 \eta^ {2} \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| ^ {2} \\ \leq 8 \left(\frac {1}{2} \sum_ {k = 1} ^ {N} \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| ^ {2} + \frac {1}{2} \sum_ {k = 1} ^ {N} \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| ^ {2}\right) + 4 \eta^ {2} \sum_ {k = 0} ^ {N} \| \epsilon_ {k} \| ^ {2} \tag {15} \\ \stackrel {\text {By (7)}} {\leq} 8 \left(\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \| ^ {2} + 1 2 \eta^ {2} \sum_ {k = 0} ^ {N} \| \epsilon_ {k} \| ^ {2} + \sum_ {k = 1} ^ {N} \Lambda_ {k}\right) + 4 \eta^ {2} \sum_ {k = 0} ^ {N} \| \epsilon_ {k} \| ^ {2} \\ \end{array}
$$

Taking expectation and dividing by $N$ on both sides, we have

$$
\begin{array}{l} \frac {1}{N} \sum_ {k = 1} ^ {N} \mathbb {E} \left[ r _ {\eta} ^ {2} (\mathbf {z} _ {k}) \right] \leq \frac {8}{N} \left(\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \| ^ {2} + 1 2 \eta^ {2} \sum_ {k = 0} ^ {N} \mathbb {E} \| 
\epsilon_ {k} \| ^ {2} + \sum_ {k = 1} ^ {N} \mathbb {E} (\Lambda_ {k})\right) + \frac {4 \eta^ {2}}{N} \sum_ {k = 1} ^ {N} \mathbb {E} \| \epsilon_ {k} \| ^ {2} \\ \leq \frac {8}{N} \left(\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \| ^ {2} + 1 2 \eta^ {2} \sum_ {k = 0} ^ {N} \frac {\sigma^ {2}}{m _ {k}}\right) + \frac {4 \eta^ {2}}{N} \sum_ {k = 0} ^ {N} \frac {\sigma^ {2}}{m _ {k}} \\ = \frac {8 \left\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \right\| ^ {2}}{N} + \frac {1 0 0 \eta^ {2}}{N} \sum_ {k = 0} ^ {N} \frac {\sigma^ {2}}{m _ {k}}. \tag {16} \\ \end{array} +$$ + +![](images/25bb5dc38b9a893c416b993e33cd84ea26c0e68d7e7bbade8fc1cdcea137b182.jpg) + +# D PROOF OF THEOREM 2 + +In this section, we define $\mathbf{g}_k = T(\mathbf{z}_k)$ , $\epsilon_k = \widehat{\mathbf{g}}_k - \mathbf{g}_k$ . + +# D.1 LEMMAS + +Lemma 2. For any positive definite diagonal matrix $H$ satisfying $H\succeq \delta I$ with $\delta >0$ , if $\| T(\mathbf{x}_1) - T(\mathbf{x}_2)\| _2\leq L\| \mathbf{x}_1 - \mathbf{x}_2\| _2$ for $\mathbf{x}_1,\mathbf{x}_2\in \mathcal{X}$ , then + +$$ +\left\| T \left(\mathbf {x} _ {1}\right) - T \left(\mathbf {x} _ {2}\right) \right\| _ {H ^ {- 1}} \leq \frac {L}{\delta} \left\| \mathbf {x} _ {1} - \mathbf {x} _ {2} \right\| _ {H}. +$$ + +Proof. Note that $H \succeq \delta I$ , we have $0 < H^{-1} \preceq \frac{1}{\delta} I$ . Noting that $\| \mathbf{x} \|_H = \sqrt{\mathbf{x}^\top H \mathbf{x}}$ , we have + +$$ +\| T (\mathbf {x} _ {1}) - T (\mathbf {x} _ {2}) \| _ {H ^ {- 1}} \leq \frac {1}{\sqrt {\delta}} \| T (\mathbf {x} _ {1}) - T (\mathbf {x} _ {2}) \| _ {2} \leq \frac {L}{\sqrt {\delta}} \| \mathbf {x} _ {1} - \mathbf {x} _ {2} \| _ {2} \leq \frac {L}{\delta} \| \mathbf {x} _ {1} - \mathbf {x} _ {2} \| _ {H}. +$$ + +![](images/d7eb30ca7dfaa6590ba880a347d15f18056063f593c4c97f3c0a451345323745.jpg) + +Lemma 3. 
When $\eta \leq \frac{\delta}{9L}$ , we have + +$$ +\begin{array}{l} \frac {1}{2} \sum_ {k = 1} ^ {N} \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + \frac {1}{2} \sum_ {k = 1} ^ {N} \left\| \mathbf {x} _ {k} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} \\ \leq \sum_ {k = 1} ^ {N} \left(\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2}\right) + 1 2 \eta^ {2} \left(\| \epsilon_ {0} \| _ {H _ {0} ^ {- 1}} ^ {2} + \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2}\right) + \sum_ {k = 1} ^ {N} \Lambda_ {k} \tag {17} \\ \end{array} +$$ + +Proof. Define $\epsilon_{k} = \widehat{\mathbf{g}}_{k} - \mathbf{g}_{k}$ . For any $\mathbf{x} \in \mathcal{X}$ , we have + +$$ +\begin{array}{l} \| \mathbf {x} _ {k} - \mathbf {x} \| _ {H _ {k - 1}} ^ {2} = \left\| \mathbf {x} _ {k - 1} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k} - \mathbf {x} \right\| _ {H _ {k - 1}} ^ {2} = \left\| \mathbf {x} _ {k - 1} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k} - \mathbf {x} \right\| _ {H _ {k - 1}} ^ {2} - \left\| \mathbf {x} _ {k - 1} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} \\ = \left\| \mathbf {x} _ {k - 1} - \mathbf {x} \right\| _ {H _ {k - 1}} ^ {2} - \left\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 2 \left\langle \mathbf {x} - \mathbf {x} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle \\ = \| \mathbf {x} _ {k - 1} - \mathbf {x} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k - 1} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2} + 2 \left\langle \mathbf {x} - \mathbf {z} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle + 2 \left\langle \mathbf {z} _ {k} - \mathbf {x} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle \\ = \left\| \mathbf {x} _ {k - 1} - \mathbf {x} \right\| _ {H _ {k - 1}} ^ {2} - \left\| 
\mathbf {x} _ {k - 1} - \mathbf {z} _ {k} + \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 2 \left\langle \mathbf {x} - \mathbf {z} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle + 2 \left\langle \mathbf {z} _ {k} - \mathbf {x} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle \\ = \| \mathbf {x} _ {k - 1} - \mathbf {x} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2} - 2 \left\langle H _ {k - 1} (\mathbf {x} _ {k - 1} - \mathbf {z} _ {k}), \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\rangle + \\ 2 \left\langle \mathbf {x} - \mathbf {z} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle + 2 \left\langle \mathbf {z} _ {k} - \mathbf {x} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle \\ = \| \mathbf {x} _ {k - 1} - \mathbf {x} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2} + 2 \langle \mathbf {x} - \mathbf {z} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \rangle \\ + 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, H _ {k - 1} \left(\mathbf {x} _ {k - 1} - \mathbf {z} _ {k}\right) - \eta \widehat {\mathbf {g}} _ {k} \right\rangle \tag {18} \\ \end{array} +$$ + +Note that + +$$ +2 \left\langle \mathbf {x} _ {*} - \mathbf {z} _ {k}, \eta \widehat {\mathbf {g}} _ {k} \right\rangle = 2 \left\langle \mathbf {x} _ {*} - \mathbf {z} _ {k}, \eta \left(\mathbf {g} _ {k} + \epsilon_ {k}\right) \right\rangle \leq 2 \left\langle \mathbf {x} _ {*} - \mathbf {z} _ {k}, \eta \epsilon_ {k} \right\rangle , \tag {19} +$$ + +where the last inequality holds by the fact that $\langle \mathbf{x}_* - \mathbf{z}_k, \mathbf{g}_k \rangle \leq 0$ since $\mathbf{x}_*$ is a solution of $\mathrm{MVI}(T, \mathcal{X})$ . 
Note that

$$
\begin{array}{l} 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, H _ {k - 1} \left(\mathbf {x} _ {k - 1} - \mathbf {z} _ {k}\right) - \eta \widehat {\mathbf {g}} _ {k} \right\rangle \\ = 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, H _ {k - 1} \left(\mathbf {x} _ {k - 1} - \mathbf {z} _ {k} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k - 1}\right) \right\rangle + 2 \left\langle \mathbf {x} _ {k} - \mathbf {z} _ {k}, \eta \left(\widehat {\mathbf {g}} _ {k - 1} - \widehat {\mathbf {g}} _ {k}\right) \right\rangle \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {(a)} {\leq} 2 \left\langle \left(\mathbf {x} _ {k - 1} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k}\right) - \left(\mathbf {x} _ {k - 1} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k - 1}\right), \eta \left(\widehat {\mathbf {g}} _ {k - 1} - \widehat {\mathbf {g}} _ {k}\right) \right\rangle \\ = 2 \eta^ {2} \| \widehat {\mathbf {g}} _ {k - 1} - \widehat {\mathbf {g}} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} = 2 \eta^ {2} \| \mathbf {g} _ {k - 1} - \mathbf {g} _ {k} + \epsilon_ {k - 1} - \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \tag {20} \\ \stackrel {(b)} {\leq} 2 \eta^ {2} \left(\| \mathbf {g} _ {k - 1} - \mathbf {g} _ {k} \| _ {H _ {k - 1} ^ {- 1}} + \| \epsilon_ {k - 1} \| _ {H _ {k - 1} ^ {- 1}} + \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}}\right) ^ {2} \\ \stackrel {(c)} {\leq} 2 \eta^ {2} \left(\frac {L}{\delta} \left\| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} + \left\| \epsilon_ {k - 1} \right\| _ {H _ {k - 1} ^ {- 1}} + \left\| \epsilon_ {k} \right\| _ {H _ {k - 1} ^ {- 1}}\right) ^ {2} \\ \stackrel {(d)} {\leq} 6 \eta^ {2} \left(\frac {L ^ {2}}{\delta^ {2}} \| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} + \| \epsilon_ {k - 1} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} + \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2}\right) \\ \end{array}
$$

where (a) holds by the update rule of
$\mathbf{z}_k$ and $\mathbf{x}_k$ in Algorithm 2, (b) holds by the triangle inequality, (c) holds by utilizing the Lipschitz continuity of $T$ , Lemma 2 and the fact that $H_{k - 1}\succeq \delta I$ for any $k$ , (d) holds since $(a + b + c)^2\leq 3a^2 +3b^2 +3c^2$ + +Define $\Lambda_{k} = 2\langle \mathbf{x}_{*} - \mathbf{z}_{k},\eta \epsilon_{k}\rangle$ . Taking $\mathbf{x} = \mathbf{x}_*$ in (18) and combining (19) and (20), we have + +$$ +\begin{array}{l} \left\| \mathbf {x} _ {k} - \mathbf {x} _ {*} \right\| _ {H _ {k - 1}} ^ {2} \leq \left\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \right\| _ {H _ {k - 1}} ^ {2} - \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} - \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + \frac {6 \eta^ {2} L ^ {2}}{\delta^ {2}} \left\| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} \\ + 6 \eta^ {2} \| \epsilon_ {k - 1} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} + 6 \eta^ {2} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} + \Lambda_ {k} \tag {21} \\ \end{array} +$$ + +Noting that + +$$ +\begin{array}{l} \left\| \mathbf {z} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} = \left\| \mathbf {z} _ {k - 1} - \mathbf {x} _ {k - 1} + \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} \\ \leq 3 \| \mathbf {z} _ {k - 1} - \mathbf {x} _ {k - 1} \| _ {H _ {k - 1}} ^ {2} + 3 \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} + 3 \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2}, \\ \end{array} +$$ + +we rearrange terms in (21), which yields + +$$ +\begin{array}{l} \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} + \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2} - \frac {6 \eta^ {2} L ^ {2}}{\delta^ {2}} \left(3 \| \mathbf {z} _ {k - 1} - \mathbf {x} _ {k - 1} \| _ {H _ {k - 1}} ^ {2} + 3 \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} + 3 
\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2}\right) \\ \leq \left\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \right\| _ {H _ {k - 1}} ^ {2} - \left\| \mathbf {x} _ {k} - \mathbf {x} _ {*} \right\| _ {H _ {k - 1}} ^ {2} + 6 \eta^ {2} \left\| \epsilon_ {k - 1} \right\| _ {H _ {k - 1} ^ {- 1}} ^ {2} + 6 \eta^ {2} \left\| \epsilon_ {k} \right\| _ {H _ {k - 1} ^ {- 1}} ^ {2} + \Lambda_ {k} \tag {22} \\ \end{array} +$$ + +Taking summation over $k = 1, \ldots, N$ in (22), and noting that $\mathbf{x}_0 = \mathbf{z}_0$ , $\| \mathbf{x} \|_{H_{t-1}^{-1}}^2 \geq \| \mathbf{x} \|_{H_t^{-1}}^2$ for all $\mathbf{x}$ and $t \geq 1$ , we have + +$$ +\begin{array}{l} \left(1 - \frac {1 8 \eta^ {2} L ^ {2}}{\delta^ {2}}\right) \sum_ {k = 1} ^ {N} \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} + \left(1 - \frac {3 6 \eta^ {2} L ^ {2}}{\delta^ {2}}\right) \sum_ {k = 1} ^ {N} \| \mathbf {x} _ {k} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2} \\ \leq \sum_ {k = 1} ^ {N} \left(\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2}\right) + 1 2 \eta^ {2} \left(\| \epsilon_ {0} \| _ {H _ {0} ^ {- 1}} ^ {2} + \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2}\right) + \sum_ {k = 1} ^ {N} \Lambda_ {k} \tag {23} \\ \end{array} +$$ + +By taking $\eta \leq \frac{\delta}{9L}$ , we have $1 - \frac{36\eta^2L^2}{\delta^2} \geq \frac{1}{2}$ , and we have the result. + +Lemma 4. When $\| \widehat{\mathbf{g}}_{1:N,i}\| _2\leq \delta N^\alpha$ with $0\leq \alpha \leq 1 / 2$ for every $i$ , we have + +$$ +\sum_ {k = 1} ^ {N} \left(\left\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \right\| _ {H _ {k - 1}} ^ {2} - \left\| \mathbf {x} _ {k} - \mathbf {x} _ {*} \right\| _ {H _ {k - 1}} ^ {2}\right) \leq D ^ {2} \delta + D ^ {2} \cdot d \delta (N - 1) ^ {\alpha} \tag {24} +$$ + +Proof. 
+ +$$ +\begin{array}{l} \sum_ {k = 1} ^ {N} \left(\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2}\right) \\ = \left\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \right\| _ {H _ {0}} ^ {2} - \left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| _ {H _ {0}} ^ {2} + \left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| _ {H _ {1}} ^ {2} - \left\| \mathbf {x} _ {2} - \mathbf {x} _ {*} \right\| _ {H _ {1}} ^ {2} + \ldots + \left\| \mathbf {x} _ {N - 1} - \mathbf {x} _ {*} \right\| _ {H _ {N - 1}} ^ {2} - \left\| \mathbf {x} _ {N} - \mathbf {x} _ {*} \right\| _ {H _ {N - 1}} ^ {2} \\ \leq \left\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \right\| _ {H _ {0}} ^ {2} + \left(- \left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| _ {H _ {0}} ^ {2} + \left\| \mathbf {x} _ {1} - \mathbf {x} _ {*} \right\| _ {H _ {1}} ^ {2}\right) + \ldots + \left(- \left\| \mathbf {x} _ {N - 1} - \mathbf {x} _ {*} \right\| _ {H _ {N - 2}} ^ {2} + \left\| \mathbf {x} _ {N - 1} - \mathbf {x} _ {*} \right\| _ {H _ {N - 1}} ^ {2}\right) \\ \leq \left\| \mathbf {x} _ {0} - \mathbf {x} _ {*} \right\| _ {H _ {0}} ^ {2} + D ^ {2} \left(\operatorname {t r} \left(H _ {1} - H _ {0}\right) + \operatorname {t r} \left(H _ {2} - H _ {1}\right) + \dots + \operatorname {t r} \left(H _ {N - 1} - H _ {N - 2}\right)\right) \\ = \| \mathbf {x} _ {0} - \mathbf {x} _ {*} \| _ {H _ {0}} ^ {2} + D ^ {2} (\operatorname {t r} (H _ {N - 1} - H _ {0})) \leq \| \mathbf {x} _ {0} - \mathbf {x} _ {*} \| _ {H _ {0}} ^ {2} + D ^ {2} \operatorname {t r} (H _ {N - 1}) \leq D ^ {2} \delta + D ^ {2} \cdot d \delta (N - 1) ^ {\alpha} \tag {25} \\ \end{array} +$$ + +Lemma 5. 
When $\| \widehat{\mathbf{g}}_{1:N,i}\| _2\leq \delta N^\alpha$ with $0\leq \alpha \leq 1 / 2$ for every $i$ , we have + +$$ +\mathbb {E} \left[ 9 6 \eta^ {2} \| \epsilon_ {0} \| _ {H _ {0} ^ {- 1}} ^ {2} + 1 0 0 \eta^ {2} \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \right] \leq \frac {9 6 \eta^ {2} \sigma^ {2}}{m \delta} + 1 0 0 \eta^ {2} \left(2 \delta d N ^ {\alpha} + \frac {G ^ {2} d}{\delta}\right) \tag {26} +$$ + +Proof. Note that + +$$ +\begin{array}{l} \mathbb {E} \left[ \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \right] = \mathbb {E} \left[ \sum_ {k = 1} ^ {N} \| \widehat {\mathbf {g}} _ {k} - \mathbf {g} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \right] = \sum_ {k = 1} ^ {N} \mathbb {E} \| \widehat {\mathbf {g}} _ {k} - \mathbf {g} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} = \sum_ {k = 1} ^ {N} \left(\mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} - \| \mathbf {g} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2}\right) \\ \leq \sum_ {k = 1} ^ {N} \mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} = \sum_ {k = 1} ^ {N} \mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k} ^ {- 1}} ^ {2} + \sum_ {k = 1} ^ {N} \left(\mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} - \mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k} ^ {- 1}} ^ {2}\right) \\ = \sum_ {k = 1} ^ {N} \mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k} ^ {- 1}} ^ {2} + \sum_ {k = 1} ^ {N} \mathbb {E} \left\langle \widehat {\mathbf {g}} _ {k}, \left(H _ {k - 1} ^ {- 1} - H _ {k} ^ {- 1}\right) \widehat {\mathbf {g}} _ {k} \right\rangle \leq \sum_ {k = 1} ^ {N} \mathbb {E} \| \widehat {\mathbf {g}} _ {k} \| _ {H _ {k} ^ {- 1}} ^ {2} + \sum_ {k = 1} ^ {N} \mathbb {E} \left[ \operatorname {t r} \left(H _ {k - 1} ^ {- 1} - H _ {k} ^ {- 1}\right) G ^ {2} \right] \\ \stackrel {(a)} {\leq} \mathbb {E} \left[ 2 \sum_ {i = 1} ^ {d} \| \widehat {\mathbf {g}} _ {1: N, i} \| 
_ {2} \right] + \operatorname {t r} \left(H _ {0} ^ {- 1}\right) G ^ {2} \stackrel {(b)} {\leq} 2 \delta d N ^ {\alpha} + \frac {G ^ {2} d}{\delta} \tag {27} \\ \end{array}
$$

where (a) holds since $\sum_{k=1}^{N} \|\widehat{\mathbf{g}}_k\|_{H_k^{-1}}^2 \leq 2\sum_{i=1}^{d} \|\widehat{\mathbf{g}}_{1:N,i}\|_2$ by the setting of $H_k$ and Lemma 4 of (Duchi et al., 2011), and (b) holds because $\|\widehat{\mathbf{g}}_{1:N,i}\|_2 \leq \delta N^\alpha$ .

In addition, we have $\mathbb{E}\left\| \epsilon_0\right\|_{H_0^{-1}}^2\leq \frac{\sigma^2}{m\delta}$ , and hence

$$
\mathbb {E} \left[ 9 6 \eta^ {2} \| \epsilon_ {0} \| _ {H _ {0} ^ {- 1}} ^ {2} + 1 0 0 \eta^ {2} \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \right] \leq \frac {9 6 \eta^ {2} \sigma^ {2}}{m \delta} + 1 0 0 \eta^ {2} \left(2 \delta d N ^ {\alpha} + \frac {G ^ {2} d}{\delta}\right) \tag {28}
$$

# D.2 MAIN PROOF OF THEOREM 2

Proof. Our goal is to bound $\frac{1}{N}\sum_{k = 1}^{N}\mathbb{E}\| T(\mathbf{z}_k)\| _2^2$ .
Note that

$$
\begin{array}{l} \| \eta T (\mathbf {z} _ {k}) \| _ {H _ {k - 1} ^ {- 1}} ^ {2} = \left\| H _ {k - 1} ^ {1 / 2} \left(\mathbf {z} _ {k} - \left(\mathbf {z} _ {k} - \eta H _ {k - 1} ^ {- 1} T (\mathbf {z} _ {k})\right)\right) \right\| ^ {2} = \left\| H _ {k - 1} ^ {1 / 2} \left(\mathbf {z} _ {k} - \mathbf {x} _ {k} + \mathbf {x} _ {k} - \left(\mathbf {z} _ {k} - \eta H _ {k - 1} ^ {- 1} T (\mathbf {z} _ {k})\right)\right) \right\| ^ {2} \\ \stackrel {(a)} {\leq} \left(\left\| H _ {k - 1} ^ {1 / 2} \left(\mathbf {z} _ {k} - \mathbf {x} _ {k}\right) \right\| + \left\| H _ {k - 1} ^ {1 / 2} \left[ \mathbf {x} _ {k} - \left(\mathbf {z} _ {k} - \eta H _ {k - 1} ^ {- 1} T (\mathbf {z} _ {k})\right) \right] \right\|\right) ^ {2} \\ \stackrel {(b)} {\leq} 2 \left\| H _ {k - 1} ^ {1 / 2} \left(\mathbf {z} _ {k} - \mathbf {x} _ {k}\right) \right\| ^ {2} + 2 \left\| H _ {k - 1} ^ {1 / 2} \left[ \mathbf {x} _ {k} - \left(\mathbf {z} _ {k} - \eta H _ {k - 1} ^ {- 1} T (\mathbf {z} _ {k})\right) \right] \right\| ^ {2} \\ \stackrel {(c)} {=} 2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 2 \left\| H _ {k - 1} ^ {1 / 2} \left[ \mathbf {x} _ {k - 1} - \eta H _ {k - 1} ^ {- 1} \widehat {\mathbf {g}} _ {k} - \left(\mathbf {z} _ {k} - \eta H _ {k - 1} ^ {- 1} T (\mathbf {z} _ {k})\right) \right] \right\| ^ {2} \\ \stackrel {(d)} {\leq} 2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 4 \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 4 \eta^ {2} \left\| \widehat {\mathbf {g}} _ {k} - T (\mathbf {z} _ {k}) \right\| _ {H _ {k - 1} ^ {- 1}} ^ {2} \\ = 2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 4 \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 4 \eta^ {2} \left\| \epsilon_ {k} \right\| _ {H _ {k - 1} ^ {- 1}} ^ {2} \tag {29} \\ \end{array}
$$

where (a) holds by the triangle inequality, (b) is due to $(a + b)^2 \leq
2a^2 + 2b^2$ , (c) holds by the update rule of $\mathbf{x}_k$ of Algorithm 2, (d) comes from the triangle inequality and $(a + b)^2 \leq 2a^2 + 2b^2$ .

Taking summation over $k = 1,\dots ,N$ in (29) and invoking Lemma 3, we have

$$
\begin{array}{l} \sum_ {k = 1} ^ {N} \left\| \eta T (\mathbf {z} _ {k}) \right\| _ {H _ {k - 1} ^ {- 1}} ^ {2} \leq \sum_ {k = 1} ^ {N} \left(2 \left\| \mathbf {z} _ {k} - \mathbf {x} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 4 \left\| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \right\| _ {H _ {k - 1}} ^ {2} + 4 \eta^ {2} \left\| \epsilon_ {k} \right\| _ {H _ {k - 1} ^ {- 1}} ^ {2}\right) \\ \leq 8 \left(\frac {1}{2} \sum_ {k = 1} ^ {N} \| \mathbf {z} _ {k} - \mathbf {x} _ {k} \| _ {H _ {k - 1}} ^ {2} + \frac {1}{2} \sum_ {k = 1} ^ {N} \| \mathbf {x} _ {k - 1} - \mathbf {z} _ {k} \| _ {H _ {k - 1}} ^ {2}\right) + 4 \eta^ {2} \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \\ \end{array}
$$

$$
\begin{array}{l} \stackrel {\text {By (17)}} {\leq} 8 \left(\sum_ {k = 1} ^ {N} \left(\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2}\right) + 1 2 \eta^ {2} \left(\| \epsilon_ {0} \| _ {H _ {0} ^ {- 1}} ^ {2} + \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2}\right) + \sum_ {k = 1} ^ {N} \Lambda_ {k}\right) \\ + 4 \eta^ {2} \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \\ \end{array}
$$

$$
= 8 \sum_ {k = 1} ^ {N} \left(\| \mathbf {x} _ {k - 1} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2} - \| \mathbf {x} _ {k} - \mathbf {x} _ {*} \| _ {H _ {k - 1}} ^ {2}\right) + 9 6 \eta^ {2} \| \epsilon_ {0} \| _ {H _ {0} ^ {- 1}} ^ {2} + 1 0 0 \eta^ {2} \sum_ {k = 1} ^ {N} \| \epsilon_ {k} \| _ {H _ {k - 1} ^ {- 1}} ^ {2} + 8 \sum_ {k = 1} ^ {N} \Lambda_ {k} \tag {30}
$$

Taking expectation on both sides, and invoking Lemma 4 and Lemma 5, and noting that $\mathbb{E}\left[\sum_{k=1}^{N}
\Lambda_k\right] = 0$ , we have

$$
\sum_ {k = 1} ^ {N} \mathbb {E} \| \eta T (\mathbf {z} _ {k}) \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \leq 8 \left(D ^ {2} \delta + D ^ {2} \cdot d \delta (N - 1) ^ {\alpha}\right) + \frac {9 6 \eta^ {2} \sigma^ {2}}{m \delta} + 1 0 0 \eta^ {2} \left(2 \delta d N ^ {\alpha} + \frac {G ^ {2} d}{\delta}\right) \tag {31}
$$

Dividing both sides by $\eta^{2} N$ , we have

$$
\frac {1}{N} \sum_ {k = 1} ^ {N} \mathbb {E} \| T (\mathbf {z} _ {k}) \| _ {H _ {k - 1} ^ {- 1}} ^ {2} \leq \frac {8 D ^ {2} \delta^ {2} (1 + d (N - 1) ^ {\alpha})}{\eta^ {2} N} + \frac {1 0 0 \left(\sigma^ {2} / m + d \left(2 \delta^ {2} N ^ {\alpha} + G ^ {2}\right)\right)}{N} \tag {32}
$$

![](images/004a45c0d9533df38bfd5dddc501018ed4512024b7f1165983cd34c6c8fefadf.jpg)

# E MORE EXPERIMENTAL RESULTS ON CIFAR10

In Figure 7, we compare the performance of OSG, Alternating Adam (AlterAdam) and OAdagrad under the same minibatch size setting on the CIFAR10 dataset, where one epoch means one pass over the dataset. We can see that OAdagrad and Alternating Adam consistently perform better than OSG. When the minibatch size is small (e.g., 64), OAdagrad and Alternating Adam have comparable performance, but when the minibatch size is large (e.g., 128 or 256), OAdagrad converges faster than Alternating Adam. This demonstrates the benefit of OAdagrad when large minibatch sizes are used.

![](images/affa3a5e085d4dc0e905642daf7234d008f07fca53faa9cbda83b122cc1c5ed2.jpg)
Figure 7: OAdagrad, OSG and Alternating Adam for WGAN-GP on CIFAR10 data with different batch sizes

![](images/4fb5bbcb56408a6aa008905f3b45f5cec6be7602bf9f22ad589eb58565cf2ad4.jpg)

![](images/8addff5586487bdc51623ddfc3da42619ade698845edd13f9ab55f42a0118a16.jpg)

# F THE EQUIVALENCE BETWEEN OSG IN THE UNCONSTRAINED CASE AND THE ALGORITHM IN DASKALAKIS ET AL.
(2017)

Define $\hat{\mathbf{g}}_k = \frac{1}{m_k}\sum_{i=1}^{m_k}T(\mathbf{z}_k;\xi_k^i)$ ; then the update rules of Algorithm 1 become

$$
\mathbf {z} _ {k} = \mathbf {x} _ {k - 1} - \eta \hat {\mathbf {g}} _ {k - 1} \tag {33}
$$

and

$$
\mathbf {x} _ {k} = \mathbf {x} _ {k - 1} - \eta \hat {\mathbf {g}} _ {k}. \tag {34}
$$

These two equalities together imply that

$$
\mathbf {z} _ {k + 1} = \mathbf {x} _ {k} - \eta \hat {\mathbf {g}} _ {k} = \mathbf {x} _ {k - 1} - 2 \eta \hat {\mathbf {g}} _ {k} = \mathbf {z} _ {k} + \eta \hat {\mathbf {g}} _ {k - 1} - 2 \eta \hat {\mathbf {g}} _ {k}, \tag {35}
$$

where the first equality comes from (33) by replacing $k$ with $k + 1$ , the second equality holds by (34), and the third equality holds by using (33) again. (35) is the algorithm in (Daskalakis et al., 2017).

# G THE EXISTENCE OF MVI SOLUTION MAY NOT IMPLY PSEUDO-MONOTONICITY

Consider the function $f:\mathbb{R}\to \mathbb{R}$ , where

$$
f (x) = \left\{ \begin{array}{l l} \cos (x) & \quad \text {if } 0 \leq x \leq 2 \pi \\ 1 & \quad \text {if } x \leq 0 \text { or } x \geq 2 \pi \end{array} \right.
$$

Define $T(x) = \nabla f(x)$ . Then $T(x) = -\sin (x)$ if $0 \leq x \leq 2\pi$ and $T(x) = 0$ if $x \leq 0$ or $x \geq 2\pi$ . Hence $\pi$ is a solution of both the SVI (i.e. $\langle T(\pi), x - \pi \rangle \geq 0$ for any $x \in \mathcal{X}$ ) and the MVI (i.e. $\langle T(x), x - \pi \rangle \geq 0$ for any $x \in \mathcal{X}$ ). However, $T$ is not pseudo-monotone. To see this, take $x = 0$ and $y = \frac{\pi}{4}$ ; then $\langle T(x), y - x \rangle = 0$ but $\langle T(y), y - x \rangle < 0$ , which means that $T$ is not pseudo-monotone.
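The counterexample above is easy to check numerically. The following sketch (plain Python; the grid, tolerance, and variable names are our own choices) verifies on a sampled grid that $\pi$ solves $\mathrm{MVI}(T, \mathcal{X})$ , and that the pseudo-monotonicity implication fails at $x = 0$ , $y = \pi/4$ :

```python
import math

def T(x):
    # Gradient of the piecewise function above:
    # T(x) = -sin(x) on [0, 2*pi], and 0 outside.
    return -math.sin(x) if 0.0 <= x <= 2.0 * math.pi else 0.0

x_star = math.pi

# MVI check: <T(x), x - x_star> >= 0 for all x, sampled on [-1, 7].
grid = [-1.0 + 8.0 * i / 1000 for i in range(1001)]
mvi_ok = all(T(x) * (x - x_star) >= -1e-12 for x in grid)

# Pseudo-monotonicity would require:
# <T(x), y - x> >= 0  implies  <T(y), y - x> >= 0.
x, y = 0.0, math.pi / 4
premise = T(x) * (y - x)     # equals 0, so the premise holds
conclusion = T(y) * (y - x)  # -sin(pi/4) * pi/4 < 0, so the conclusion fails

print(mvi_ok, premise >= 0, conclusion < 0)
```

The grid check is of course not a proof, but it matches the argument above: the premise $\langle T(x), y - x\rangle \geq 0$ holds while the conclusion fails.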
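The equivalence (35) between the two-step updates (33)-(34) and the single-sequence recursion can be checked numerically as well. In the sketch below (plain Python; the bilinear-game operator $T(x, y) = (y, -x)$ , the step size, the starting point, and the iteration count are our own toy choices, and exact gradients $\hat{\mathbf{g}}_k = T(\mathbf{z}_k)$ are used), both recursions are run and the resulting $\mathbf{z}$-iterates are compared:

```python
# Check that z_{k+1} = z_k + eta*g_{k-1} - 2*eta*g_k reproduces the
# z-iterates of the two-step updates (33)-(34), with g_k = T(z_k).

def T(v):
    x, y = v
    return (y, -x)  # toy bilinear-game operator

def axpy(u, a, v):
    # componentwise u + a * v
    return tuple(ui + a * vi for ui, vi in zip(u, v))

eta = 0.1
x0 = (1.0, 0.5)

# Two-step recursion (33)-(34), with z_0 = x_0.
x, g_prev = x0, T(x0)          # g_prev = g_0 = T(z_0)
zs = []
for _ in range(50):
    z = axpy(x, -eta, g_prev)  # (33): z_k = x_{k-1} - eta * g_{k-1}
    g = T(z)
    x = axpy(x, -eta, g)       # (34): x_k = x_{k-1} - eta * g_k
    zs.append(z)
    g_prev = g

# Single-sequence recursion (35), started from z_1 with g_0 = T(z_0).
z, g_prev = zs[0], T(x0)
max_err = 0.0
for k in range(1, 50):
    g = T(z)
    z = axpy(axpy(z, eta, g_prev), -2.0 * eta, g)
    g_prev = g
    max_err = max(max_err, max(abs(a - b) for a, b in zip(z, zs[k])))

print(max_err)  # zero up to floating-point rounding
```

The two sequences agree up to floating-point error, as the algebra in (35) predicts.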
\ No newline at end of file diff --git a/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/images.zip b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b88211814760e09862c411d34db35ab98c7535eb --- /dev/null +++ b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be96db5963f2a53ae94429b61604e428da6cf0d5c21c62d4d733b6344c7e7e19 +size 1346323 diff --git a/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/layout.json b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0114405b0c61e74732b6de7dda01212711bb6873 --- /dev/null +++ b/towardsbetterunderstandingofadaptivegradientalgorithmsingenerativeadversarialnets/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c59eef18cd037d9d2f61494674e6109db6ffbcfc93ef72e6496a9fc815e442e2 +size 891685 diff --git a/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_content_list.json b/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..25b785148d1b32102cc604c3fcc874e410e247bd --- /dev/null +++ b/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23ccb9bcd62e3c5ffec5651862125f97e59363cb2efe02dfe87ab3eee5ab31da +size 102851 diff --git a/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_model.json 
b/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c4b05a7ff745aea3dbe53d1c15b392c684d95d2e --- /dev/null +++ b/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:439247256008887fab4fd5d1c23ab91d3db698a88805a8454cde36f7eab3deb9 +size 123627 diff --git a/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_origin.pdf b/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..136e838e7f90284fe8b77c93b7d3bcc012f9b7a6 --- /dev/null +++ b/towardsfastadaptationofneuralarchitectureswithmetalearning/4be0160a-fbfc-43e2-8533-ccbd8dd8a2ae_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a990d2a7071fa06b6e07d32f73c7146320ceb0b5c7effaa8856d021860acef1 +size 1470276 diff --git a/towardsfastadaptationofneuralarchitectureswithmetalearning/full.md b/towardsfastadaptationofneuralarchitectureswithmetalearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6a6f88a80978453919caf3bb633a4201647b02e3 --- /dev/null +++ b/towardsfastadaptationofneuralarchitectureswithmetalearning/full.md @@ -0,0 +1,383 @@ +# TOWARDS FAST ADAPTATION OF NEURAL ARCHITECTURES WITH META LEARNING + +Dongze Lian $^{1*}$ , Yin Zheng $^{2*}$ , Yintao Xu $^{1}$ , Yanxiong Lu $^{2}$ , Leyu Lin $^{2}$ , Peilin Zhao $^{3}$ , Junzhou Huang $^{3,4}$ , Shenghua $\mathrm{Gao}^{1\dagger}$ + +$^{1}$ ShanghaiTech University, $^{2}$ Weixin Group, Tencent, + +$^{3}$ Tencent AI Lab, $^{4}$ University of Texas at Arlington $\{\text { liandz, xuyt, gaoshh } \} \text { @shanghaitech.edu.cn, }$ $\{\text { yzheng3xg } \} \text { @gmail.com, }$ $\{\text { alanlu, 
goshawklin, masonzhao } \} \text { @tencent.com, }$ $\{\text { jzhuang } \} \text { @uta.edu }$

# ABSTRACT

Recently, Neural Architecture Search (NAS) has been successfully applied to multiple artificial intelligence areas and shows better performance compared with hand-designed networks. However, the existing NAS methods only target a specific task. Most of them do well in searching an architecture for a single task but struggle with multiple datasets or multiple tasks. Generally, the architecture for a new task is either searched from scratch, which is neither efficient nor flexible enough for practical application scenarios, or borrowed from the ones searched on other tasks, which might not be optimal. In order to tackle the transferability of NAS and conduct fast adaptation of neural architectures, we propose a novel Transferable Neural Architecture Search method based on meta-learning in this paper, termed T-NAS. T-NAS learns a meta-architecture that is able to adapt to a new task quickly through a few gradient steps, which makes the transferred architecture suitable for the specific task. Extensive experiments show that T-NAS achieves state-of-the-art performance in few-shot learning and comparable performance in supervised learning but with 50x less search cost, which demonstrates the effectiveness of our method.

# 1 INTRODUCTION

Deep neural networks have achieved huge successes in many machine learning tasks (Girshick, 2015; He et al., 2016; Sutskever et al., 2014; Zheng et al., 2015b; Lian et al., 2019; Cheng et al., 2019; Zheng et al., 2015a; Lauly et al., 2017; Jiang et al., 2017; Zheng et al., 2016). Behind their successes, the design of the network architecture plays an important role, and hand-designed networks (e.g., ResNet (He et al., 2016), DenseNet (Huang et al., 2017)) have provided strong baselines in many tasks.
Neural Architecture Search (NAS) (Pham et al., 2018; Liu et al., 2018b; Guo et al., 2019) was proposed to automatically search network structures, alleviating complicated network design and the heavy dependence on prior knowledge. More importantly, NAS has proved effective and obtains remarkable performance in image classification (Pham et al., 2018; Liu et al., 2018b), object detection (Ghiasi et al., 2019) and semantic segmentation (Chen et al., 2018; Liu et al., 2019). However, the existing NAS methods only target a specific task. Most of them do well in searching an architecture for a single task but struggle with multiple datasets or multiple tasks. As shown in Figure 1, we get architecture-0 on a given dataset using a NAS method. Now, what if there is a new task? This drives us to ask: how do we get a suitable architecture for a new task in NAS? Generally, there are two simple solutions for handling multiple tasks. One of them (S1) is to search an architecture for the new task from scratch, but this is inefficient and not flexible for practical application scenarios. The other (S2) is to borrow the architecture from the ones searched on other tasks, but this might not be optimal for the new task. Therefore, studying the transferability of NAS is urgently needed for large-scale model deployment in practical applications. It would be

![](images/2038814bff27bf98691b625fc88459d7275e39fc2246da16f10c3117516befb5.jpg)
Figure 1: Left: how to search the network architecture when given a new task? Middle: two simple solutions that are inefficient or not optimal. Right: we propose the T-NAS method to get a meta-architecture, which is able to adapt to different tasks easily and quickly.

![](images/c3c5e53b6d11b94c113f1a6bb67b6a1d732d637d5da8392708caa05e1af7ea01.jpg)

more desirable to learn a transferable architecture that can adapt to new unseen tasks easily and quickly based on previous knowledge.
To this end, we propose a novel Transferable Neural Architecture Search (T-NAS) method (the bottom of Figure 1). The starting point of T-NAS is inspired by recent meta-learning methods (Finn et al., 2017; Antoniou et al., 2019; Sun et al., 2019), especially Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), where a model learns meta-weights that are able to adapt to a new task through a few gradient steps. Pushing this forward, it should also be possible to find a good initial point of the network architecture for NAS. Therefore, T-NAS learns a meta-architecture (transferable architecture) that is able to adapt to a new task quickly through a few gradient steps, which is more flexible than other NAS methods. Similar to MAML, such a good initial meta-architecture for adaptation should be more sensitive to changes across different tasks so that it can be easily transferred. It is worth mentioning that this is not the first work on the transferability of neural architectures. There are also some recent works that attempt to utilize the knowledge on neural architectures learned from previous tasks, such as Wong et al. (2018); Shaw et al. (2018). Specifically, Wong et al. (2018) propose to transfer architecture knowledge under a multi-task learning perspective, where the number of tasks is fixed during the training phase, and their method cannot perform fast adaptation for a new task. In contrast, our model makes the adaptation fast and the number of tasks is unlimited during training. The difference between our model and Shaw et al. (2018) is also clear: Shaw et al. (2018) is based on Bayesian inference, whereas our model is based on gradient-based meta-learning. The quantitative comparison with Shaw et al. (2018) can be found in Table 3.

Generally, the architecture cannot be trained independently of the network weights (Liu et al., 2018b; Pham et al., 2018). Analogously, the training of the meta-architecture is also associated with the meta-weights.
Therefore, the meta-architecture and meta-weights need to be optimized jointly across different tasks, which is a typical bilevel optimization problem (Liu et al., 2018b). In order to solve the costly bilevel optimization in T-NAS, we propose an efficient first-order approximation algorithm to update the meta-architecture and meta-weights together. After the whole model is optimized, given a new task, we can obtain the network architecture suitable for that specific task with a few gradient steps from the meta-architecture and meta-weights. Finally, the decoded discrete architecture is used for the final architecture evaluation.

To demonstrate the effectiveness of T-NAS, we conduct extensive experiments on task-level problems involving large numbers of tasks. Specifically, we split the experiments into two parts: a few-shot learning setting and a supervised learning setting. For few-shot learning, T-NAS achieves state-of-the-art performance on multiple datasets (Omniglot, Mini-Imagenet, Fewshot-CIFAR100) compared with previous methods and other NAS-based methods. As for supervised learning, a 200-shot 50-query 10-way experiment setting is designed on the Mini-Imagenet dataset. Compared with architectures searched from scratch for new tasks, T-NAS achieves comparable performance but with 50x less search cost.

Our main contributions are summarized as follows:

- We propose a novel Transferable Neural Architecture Search (T-NAS). T-NAS can learn a meta-architecture that is able to adapt to a new task quickly through a few gradient steps, which is more flexible than other NAS methods.
- We give the formulation of T-NAS and analyze the difference between T-NAS and other NAS methods. Further, to solve the bilevel optimization, we propose an efficient first-order approximation algorithm to optimize the whole search network based on gradient descent.
- Extensive experiments show that T-NAS achieves state-of-the-art performance in few-shot learning and comparable performance in supervised learning but with 50x less search cost, which demonstrates the effectiveness of our method.

# 2 RELATED WORK

# 2.1 NEURAL ARCHITECTURE SEARCH

Neural Architecture Search (NAS) designs network architectures automatically instead of by hand. Generally, NAS strategies are divided into three categories: reinforcement learning, evolutionary algorithms and gradient-based methods. Other strategies are covered in the survey paper (Elsken et al., 2019). Reinforcement learning (RL) based methods (Zoph & Le, 2016; Zoph et al., 2018) utilize a controller to generate the network structure and operations. For efficient searching, ENAS (Pham et al., 2018) shares parameters among child models and achieves state-of-the-art performance with only one GPU day. Evolutionary algorithm based methods (Real et al., 2018) evolve neural architectures and also achieve results comparable to RL-based methods.

Unlike reinforcement learning and evolutionary algorithms, gradient-based methods (Liu et al., 2018b; Cai et al., 2019) continuously relax the discrete architecture over all possible operations, which makes it possible to jointly optimize the architecture structure and network weights based on gradient descent. Not limited to image classification, recent works also introduce NAS to object detection (Ghiasi et al., 2019) and semantic image segmentation (Chen et al., 2018; Liu et al., 2019). More recently, NAS has also been applied to generative models, such as AutoGAN (Gong et al., 2019). These NAS methods show that the searched networks outperform hand-designed ones.

However, in these methods, only a fixed architecture is searched for a specific task, which makes it hard to transfer to other tasks.
In order to obtain a more flexible network, InstaNAS (Cheng et al., 2018) is proposed to search a network architecture for each instance according to different objectives, such as accuracy or latency. Different from Cheng et al. (2018), we incorporate ideas from meta-learning based methods and extend NAS to T-NAS, which learns a meta-architecture that is able to adapt to different tasks.

# 2.2 FEW-SHOT META-LEARNING

Recently, most few-shot learning problems can be cast as meta-learning problems, where a model is trained to quickly adapt to a new task given only a few samples (Finn et al., 2017). Such few-shot meta-learning methods can be categorized into metric learning (Vinyals et al., 2016; Sung et al., 2018; Snell et al., 2017), memory networks (Santoro et al., 2016; Oreshkin et al., 2018; Munkhdalai et al., 2018; Mishra et al., 2018) and gradient-based methods (Finn et al., 2017; Zhang et al., 2018; Sun et al., 2019).

Here, we only focus on the gradient-based methods, which contain a base-learner and a meta-learner. MAML (Finn et al., 2017) is one of the typical gradient-based methods for fast adaptation, which consists of meta-train and meta-test stages. In the meta-train stage, the model extracts general knowledge (meta-weights) from large numbers of tasks so that it can be utilized for fast adaptation in the meta-test stage. The latest variant of MAML is MAML++ (Antoniou et al., 2019), which analyzes the shortcomings of MAML and proposes several training techniques to improve its performance. We extend the adaptation of weights in MAML to the adaptation of architectures, and propose to automatically learn a meta-architecture which is able to adapt to different tasks quickly.
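Concretely, each such few-shot task (episode) pairs a small labeled support set with a query set. A minimal episode sampler can be sketched as follows, assuming an in-memory mapping from class to image identifiers (the dataset layout and names here are illustrative, not tied to any particular benchmark):

```python
import random

def sample_episode(class_to_images, n_way, k_shot, k_query, rng):
    """Build one N-way, K-shot task: pick n_way classes, then k_shot support
    and k_query query images per class, disjoint within the episode."""
    classes = rng.sample(sorted(class_to_images), n_way)
    support, query = [], []
    for label, c in enumerate(classes):  # relabel classes 0..n_way-1 per episode
        imgs = rng.sample(class_to_images[c], k_shot + k_query)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    return support, query

# Toy "dataset": 64 classes with 600 image ids each
rng = random.Random(0)
data = {c: ["img_%d_%d" % (c, i) for i in range(600)] for c in range(64)}
support, query = sample_episode(data, n_way=5, k_shot=1, k_query=15, rng=rng)
# A 5-way 1-shot episode: 5 support images and 75 query images
```

The base-learner is trained on `support` and the meta-learner is evaluated on `query`, per episode.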
# 3 PRELIMINARY

To introduce T-NAS, we briefly review meta-learning for fast adaptation (Finn et al., 2017; Antoniou et al., 2019) and DARTS for NAS (Liu et al., 2018b) in this section, which is helpful for understanding T-NAS.

# 3.1 META-LEARNING

The whole dataset, meta-train dataset and meta-test dataset are denoted as $\mathcal{D}$ , $\mathcal{D}_{\mathrm{meta-train}}$ and $\mathcal{D}_{\mathrm{meta-test}}$ , respectively. In the meta-train stage, a set of tasks $\{\mathcal{T}\}$ (also called episodes) is sampled from the task distribution $p(\mathcal{T})$ in $\mathcal{D}_{\mathrm{meta-train}}$ . Note that in the $i$ -th task $\mathcal{T}_i$ , there are $K$ samples from each class and $N$ classes in total, which is typically formulated as an $N$ -way, $K$ -shot problem. The training split samples in $\mathcal{T}_i$ used to optimize the base-learner are called the support set, denoted $\mathcal{T}_i^s$ , and the test split samples used to optimize the meta-learner are called the query set, denoted $\mathcal{T}_i^q$ . The main idea of MAML (Finn et al., 2017) is to learn good initialized weights $\widetilde{w}$ for all tasks $\{\mathcal{T}\}$ , such that the network can obtain high performance on $\mathcal{D}_{\mathrm{meta-test}}$ after a few gradient descent steps from $\widetilde{w}$ . The base-learner is optimized according to the following rule:

$$
w_{i}^{m+1} = w_{i}^{m} - \alpha_{\text{inner}} \nabla_{w_{i}^{m}} \mathcal{L}\left(f\left(\mathcal{T}_{i}^{s}; w_{i}^{m}\right)\right), \tag{1}
$$

where $\alpha_{\mathrm{inner}}$ is the inner (base) learning rate of the weights $w$ and $m$ represents the inner step. $f$ is the parametrized function with network weights $w$ and $\mathcal{L}$ is the loss function. In the base-learner process, $\mathcal{T}_i^s$ is used to compute the loss and we update the weights $w$ from $w_i^m$ to $w_i^{m + 1}$ for the $i$ -th task ( $w_i^0 = \widetilde{w}$ ).
After $M$ steps, $\mathcal{L}(f(\mathcal{T}_i^q;w_i^M))$ on $\mathcal{T}_i^q$ is computed for the meta-learner update, which can be formulated as:

$$
\widetilde{w} = \widetilde{w} - \alpha_{\text{outer}} \nabla_{\widetilde{w}} \sum_{\mathcal{T}_{i}^{q} \sim p(\mathcal{T})} \mathcal{L}\left(f\left(\mathcal{T}_{i}^{q}; w_{i}^{M}\right)\right), \tag{2}
$$

where $\alpha_{\mathrm{outer}}$ is the outer (meta) learning rate of the meta-weights $\widetilde{w}$ . Finally, the model learns good initialized meta-weights $\widetilde{w}$ when it converges. Such meta-weights are sensitive enough that they can adapt to each task in $\mathcal{D}_{\mathrm{meta-test}}$ after a few gradient descent steps.

# 3.2 DARTS

The core of DARTS (Liu et al., 2018b) is to continuously relax the discrete architecture over all possible operations and jointly optimize the architecture structure and network weights based on gradient descent. Let $\mathcal{O}$ be the set of candidate operations, where each candidate operation is denoted $o$ . Given the input $x$ , the output is the weighted sum of all possible operations $o(x)$ :

$$
\bar{o}(x) = \sum_{o \in \mathcal{O}} \frac{\exp\left(\theta_{o}\right)}{\sum_{o^{\prime} \in \mathcal{O}} \exp\left(\theta_{o^{\prime}}\right)} o(x), \tag{3}
$$

where $\theta$ is the vector of coefficients of the different operation branches. When decoding, the chosen operation is $o^{*} = \arg \max_{o\in \mathcal{O}}\theta_{o}$ . Therefore, $\theta$ is also the encoding of the architecture.

To solve such a bilevel optimization problem, a two-step update algorithm is applied:

$$
\left\{ \begin{array}{l} \theta = \theta - \beta \nabla_{\theta} \mathcal{L}(w - \xi \nabla_{w} \mathcal{L}(w, \theta), \theta) \\ w = w - \alpha \nabla_{w} \mathcal{L}(w, \theta) \end{array} \right. \tag{4}
$$

where $\mathcal{L}$ is the loss function and $\xi$ is the learning rate of the inner optimization.
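The relaxation of Eq. (3) and the argmax decoding can be sketched with toy scalar "operations" standing in for the convolution and pooling branches (the operation set and $\theta$ values below are made up for illustration, not the paper's search space):

```python
import math

# Toy candidate operation set O, each acting on a scalar input
ops = {
    "identity": lambda x: x,
    "double":   lambda x: 2.0 * x,
    "zero":     lambda x: 0.0,
}

def mixed_op(x, theta):
    """Continuous relaxation (Eq. 3): softmax(theta)-weighted sum over all ops."""
    exps = {o: math.exp(t) for o, t in theta.items()}
    z = sum(exps.values())
    return sum(exps[o] / z * op(x) for o, op in ops.items())

def decode(theta):
    """Discrete decoding: o* = argmax_o theta_o."""
    return max(theta, key=theta.get)

theta = {"identity": 0.1, "double": 1.5, "zero": -2.0}
y = mixed_op(3.0, theta)   # lies between the weakest and strongest branch outputs
o_star = decode(theta)     # "double" has the largest coefficient
```

Because `mixed_op` is differentiable in `theta`, the coefficients can be trained by gradient descent alongside the weights, which is exactly what makes the joint updates of Eq. (4) possible.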
In this paper, we use the first-order optimization of DARTS $(\xi = 0)$ for efficiency.

# 4 APPROACH

In this section, we first introduce Transferable Neural Architecture Search (T-NAS) and give its formulation. After that, we analyze and illustrate the difference between T-NAS and NAS. Finally, we propose the first-order approximation algorithm for the optimization of T-NAS, and describe the adaptation and decoding process in detail.

# 4.1 THE FORMULATION OF T-NAS

To make our searched network architecture flexible, we focus on the transferability of NAS. As shown in Sec. 3, MAML is trained to learn meta-weights $\widetilde{w}$ for fast adaptation to a new task. Similarly, T-NAS aims to learn a meta-architecture $\widetilde{\theta}$ that is able to adapt to a new task within a few steps. In this work, $\theta$ and $\widetilde{\theta}^1$ are defined as the encodings of the architecture and the transferable architecture, which are represented as matrices following DARTS (Liu et al., 2018b).

To make the searched architecture transferable, we utilize a meta-learning based strategy to learn a task-sensitive meta-architecture $\widetilde{\theta}$ . However, similar to other NAS methods (Pham et al., 2018; Liu et al., 2018b), where the architecture $\theta$ usually cannot be trained independently of the network weights $w$ , the training of the meta-architecture $\widetilde{\theta}$ is also associated with the meta-weights $\widetilde{w}$ . In this work, $\widetilde{\theta}$ and $\widetilde{w}$ are optimized jointly across different tasks in T-NAS.

As shown in Sec. 3, there are two learners for learning the meta-weights $\widetilde{w}$ , i.e., Eq. (1) is used to update the base-learner and Eq. (2) is used to update the meta-learner. Similarly, T-NAS consists of two searchers: a base-searcher and a meta-searcher.
In the base-searcher, $\theta$ and $w$ are optimized jointly to search the architecture for the specific task on $\mathcal{T}_i^s$ , which can be optimized with:

$$
\left\{ \begin{array}{l} w_{i}^{m+1} = w_{i}^{m} - \alpha_{\text{inner}} \nabla_{w_{i}^{m}} \mathcal{L}\left(g\left(\mathcal{T}_{i}^{s}; \theta_{i}^{m}, w_{i}^{m}\right)\right) \\ \theta_{i}^{m+1} = \theta_{i}^{m} - \beta_{\text{inner}} \nabla_{\theta_{i}^{m}} \mathcal{L}\left(g\left(\mathcal{T}_{i}^{s}; \theta_{i}^{m}, w_{i}^{m+1}\right)\right) \end{array} \right. \tag{5}
$$

where $\beta_{\mathrm{inner}}$ is the inner (base) learning rate of the architecture $\theta$ . $g$ is the parametrized function with architecture $\theta$ and network weights $w$ ( $\theta_i^0 = \widetilde{\theta}$ , $w_i^0 = \widetilde{w}$ ). After $M$ steps, $\widetilde{\theta}$ and $\widetilde{w}$ are updated to obtain a good initial point for architecture adaptation in the meta-searcher, where $\mathcal{L}(g(\mathcal{T}_i^q; \theta_i^M, w_i^M))$ on $\mathcal{T}_i^q$ is computed. The formulation can be represented as:

$$
\left\{ \begin{array}{l} \widetilde{w} = \widetilde{w} - \alpha_{\text{outer}} \nabla_{\widetilde{w}} \sum_{\mathcal{T}_{i}^{q} \sim p(\mathcal{T})} \mathcal{L}\left(g\left(\mathcal{T}_{i}^{q}; \theta_{i}^{M}, w_{i}^{M}\right)\right) \\ \widetilde{\theta} = \widetilde{\theta} - \beta_{\text{outer}} \nabla_{\widetilde{\theta}} \sum_{\mathcal{T}_{i}^{q} \sim p(\mathcal{T})} \mathcal{L}\left(g\left(\mathcal{T}_{i}^{q}; \theta_{i}^{M}, w_{i}^{M}\right)\right) \end{array} \right. \tag{6}
$$

where $\beta_{\mathrm{outer}}$ is the outer (meta) learning rate of the meta-architecture $\widetilde{\theta}$ . When the meta-searcher converges, the optimal meta-architecture $\widetilde{\theta}$ and meta-weights $\widetilde{w}$ are obtained.
We argue that such a $\widetilde{\theta}$ can quickly adapt to a new task. The complete algorithm of T-NAS is shown in Alg. 1.

Algorithm 1: T-NAS: Transferable Neural Architecture Search

Input: Meta-train dataset $\mathcal{D}_{\mathrm{meta-train}}$ , learning rates $\alpha_{\mathrm{inner}}$ , $\alpha_{\mathrm{outer}}$ , $\beta_{\mathrm{inner}}$ and $\beta_{\mathrm{outer}}$

- Randomly initialize the architecture parameter $\theta$ and network weights $w$ ;
- while not done do
  - Sample a batch of tasks $\{\mathcal{T}\}$ in $\mathcal{D}_{\mathrm{meta-train}}$ ;
  - for $\mathcal{T}_i\in \{\mathcal{T}\}$ do
    - Get datapoints $\mathcal{T}_i^s$ ;
    - Compute $\mathcal{L}(g(\mathcal{T}_i^s;\theta_i^m,w_i^m))$ according to the standard cross-entropy loss;
    - Alternately update $w_{i}^{m}$ and $\theta_i^m$ with Eq. (5) for $M$ steps;
    - Get datapoints $\mathcal{T}_i^q$ for the meta-searcher;
  - end
  - Alternately update $\widetilde{w}$ and $\widetilde{\theta}$ with Eq. (6);
- end

Table 1: The main differences among NAS, Solution 1 (S1), Solution 2 (S2) and T-NAS.
| Methods | Task(s) | Transferability | Characteristic |
| --- | --- | --- | --- |
| NAS | single | no | troublesome for multiple tasks |
| S1 | multiple | no (search from scratch) | inefficient & time-consuming |
| S2 | multiple | borrows from searched architecture | not optimal |
| T-NAS | multiple | adaptation | flexible |

# 4.2 T-NAS vs. NAS

As mentioned before, the previous NAS methods usually do well in searching an architecture for a single task but struggle with multiple datasets or multiple tasks. So we focus on the transferability of NAS across multiple tasks in this paper. Two simple solutions (S1 and S2) have been outlined in Figure 1, but they are either inefficient or suboptimal. T-NAS aims to learn a transferable and flexible architecture that can adapt to a new task easily. Table 1 lists the main differences among NAS, the two simple solutions (S1 and S2) and T-NAS. S1 does not study the transferability of NAS and searches architectures for different tasks (e.g., $\theta_{1},\theta_{2},\dots,\theta_{n}$ ) from scratch. S2 borrows a searched architecture directly, so that all tasks share the same architecture (e.g., $\theta$ ). Differently, T-NAS searches the meta-architecture $\widetilde{\theta}$ , which is able to adapt to different tasks quickly (e.g., $\widetilde{\theta} \rightarrow \theta_{1},\theta_{2},\dots,\theta_{n}$ ). The experimental results show that our method achieves better performance than S2 and comparable performance to S1, but with less search cost.

It is worth mentioning that if we directly apply NAS to few-shot meta-learning, e.g., MAML (Finn et al., 2017), we search a good network architecture for MAML; we name this Auto-MAML. In fact, Auto-MAML is a special case of S2 in Figure 1, where all tasks share the same architecture searched with a meta-learning method. In the experiments on few-shot learning, we also introduce Auto-MAML as a baseline. However, such a shared architecture is not suitable for every task. Auto-MAML can outperform MAML but is inferior to T-NAS. The specific algorithm and experimental settings of Auto-MAML are provided in the supplementary material.

The core of T-NAS is based on MAML (Finn et al., 2017), which is a kind of gradient-based meta-learning method.
Recently, MAML++ was proposed by Antoniou et al. (2019), which introduces several techniques to improve the performance of MAML. These techniques can also be utilized by T-NAS, which we term T-NAS++ in this paper. The experiments in Section 5 confirm that T-NAS++ can further improve the performance of T-NAS.

# 4.3 OPTIMIZATION

Although we have formulated T-NAS, the model is hard to optimize directly according to Alg. 1. On the one hand, updating $\widetilde{\theta}$ and $\widetilde{w}$ introduces high-order derivatives in Eq. (6). On the other hand, the continuous relaxation of the architecture occupies a large amount of memory. At first glance, this problem might be solved by the first-order approximation in Liu et al. (2018b); however, considerable time overhead remains, and the experiments become infeasible when the step count $M$ in Eq. (6) is large. To tackle this problem, we transform the alternating update of $w$ and $\theta$ in Eq. (5) into a simultaneous update, which means $w$ and $\theta$ are treated equally as parameters of the function $g$ . This replacement updates the parameters ( $w$ and $\theta$ ) with only one backpropagation instead of two. Eq. (5) is modified to:

$$
\left[ w_{i}^{m+1}; \theta_{i}^{m+1} \right] = \left[ w_{i}^{m}; \theta_{i}^{m} \right] - \boldsymbol{\eta}_{\text{inner}} \nabla_{\left[ w_{i}^{m}, \theta_{i}^{m} \right]} \mathcal{L}\left(g\left(\mathcal{T}_{i}^{s}; \theta_{i}^{m}, w_{i}^{m}\right)\right), \tag{7}
$$

where $\pmb{\eta}_{\mathrm{inner}} = [\alpha_{\mathrm{inner}};\beta_{\mathrm{inner}}]$ .
In addition, to avoid high-order derivatives, we also utilize the first-order approximation and compute the derivative with respect to $w_{i}^{M}$ and $\theta_i^M$ instead of $\widetilde{w}$ and $\widetilde{\theta}$ , as follows:

$$
[\widetilde{w}; \widetilde{\theta}] = [\widetilde{w}; \widetilde{\theta}] - \boldsymbol{\eta}_{\text{outer}} \sum_{\mathcal{T}_{i} \sim p(\mathcal{T})} \nabla_{[w_{i}^{M}, \theta_{i}^{M}]} \mathcal{L}(g(\mathcal{T}_{i}^{q}; \theta_{i}^{M}, w_{i}^{M})), \tag{8}
$$

where $\pmb{\eta}_{\mathrm{outer}} = [\alpha_{\mathrm{outer}};\beta_{\mathrm{outer}}]$ . These modifications save more than half of the search time and memory while maintaining comparable performance. Thus, in the implementation we use Eq. (7) and Eq. (8) in place of Eq. (5) and Eq. (6) in Alg. 1 to update $\theta$ and $w$ .

# 4.4 ADAPTATION AND DECODING

Once $\widetilde{\theta}$ and $\widetilde{w}$ are obtained by training the base-searcher and the meta-searcher with the first-order approximation of Alg. 1, we can adapt them to the $i$ -th task and get the task-specific architecture $\theta_{i}^{*}$ for the specific task $\mathcal{T}_i$ according to Alg. 2.

# Algorithm 2: Adaptation and decoding

Input: Meta-test dataset $\mathcal{D}_{\mathrm{meta-test}}$ , learning rates $\alpha_{\mathrm{inner}}$ and $\beta_{\mathrm{inner}}$ .

Output: The task-specific architecture $\theta_{i}^{*}$ for the $i$ -th task $\mathcal{T}_i$ .

1 Obtain the specific task $\mathcal{T}_i$ from $\mathcal{D}_{\mathrm{meta-test}}$ ;
2 Update $w_{i}^{m}$ and $\theta_i^m$ for $M$ steps with Eq. (7) and get $\theta_i^M$ ;
3 Decode $\theta_i^M$ into the task-specific architecture $\theta_i^*$ following the method in Liu et al. (2018b).
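The simultaneous inner update of Eq. (7) and the first-order outer update of Eq. (8) can be illustrated on a toy scalar problem; everything below (the model $g(\theta, w) = \theta w$, the quadratic loss, the learning rates, and the task distribution) is a made-up example, not the paper's search network:

```python
import random

def grads(theta, w, a):
    """Gradients of the toy loss L = (theta * w - a)^2 for a task with target a."""
    r = theta * w - a
    return 2.0 * r * theta, 2.0 * r * w  # dL/dw, dL/dtheta

def inner_adapt(theta, w, a, lr, M):
    """Eq. (7): update w and theta simultaneously for M inner steps."""
    for _ in range(M):
        gw, gth = grads(theta, w, a)
        w, theta = w - lr * gw, theta - lr * gth
    return theta, w

random.seed(0)
meta_theta, meta_w = 0.5, 0.5
for _ in range(200):                       # meta-training over sampled tasks
    a = random.uniform(0.5, 1.5)           # a task = a target value
    th, w = inner_adapt(meta_theta, meta_w, a, lr=0.05, M=5)
    # Eq. (8), first-order: the loss gradient is taken at the adapted (th, w)
    # and applied directly to the meta-parameters (here the query target
    # equals the support target just to keep the toy minimal)
    gw, gth = grads(th, w, a)
    meta_w -= 0.01 * gw
    meta_theta -= 0.01 * gth

# Adapting the trained meta-parameters to a new task takes only M inner steps
th, w = inner_adapt(meta_theta, meta_w, a=1.2, lr=0.05, M=5)
```

After meta-training, a handful of inner steps suffice to fit a new task, mirroring how a task-specific $\theta_i^*$ is obtained from $\widetilde{\theta}$ in Alg. 2.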
Following previous NAS methods (Zoph & Le, 2016; Zoph et al., 2018; Pham et al., 2018; Liu et al., 2018b), after getting $\theta_{i}^{*}$ , we evaluate the task-specific architecture by training it on the task $\mathcal{T}_i$ from scratch. As shown in Sec. 5, T-NAS achieves state-of-the-art performance in few-shot learning and comparable performance in supervised learning but with less search cost.

# 5 EXPERIMENTS

We evaluate the effectiveness of T-NAS in both few-shot and supervised learning settings, on multiple datasets. For each dataset, we conduct experiments covering architecture search and architecture evaluation. In the architecture search stage, we use T-NAS to search for a meta-architecture. In the architecture evaluation stage, we evaluate the transferred task-specific architectures by training them from scratch and compare their performance with previous methods. S1 and S2 in the following sections refer to the two simple solutions in Figure 1 unless otherwise specified. Code is available$^{3}$.

# 5.1 DATASETS

Omniglot is a handwritten character recognition dataset proposed in Lake et al. (2011), which contains 1623 characters with 20 samples per class. We randomly split 1200 characters for training and the remainder for testing, and augment the Omniglot dataset by random rotations in multiples of 90 degrees, following Santoro et al. (2016).

The Mini-Imagenet dataset is sampled from the original ImageNet (Deng et al., 2009). There are 100 classes in total with 600 images for each class. All images are down-sampled to $84 \times 84$ pixels and the whole dataset consists of 64 training classes, 16 validation classes and 20 test classes.

The Fewshot-CIFAR100 (FC100) dataset was proposed in Oreshkin et al. (2018) and is based on the popular image classification dataset CIFAR100. It is more challenging than Mini-Imagenet due to its low resolution. Following Oreshkin et al.
(2018), FC100 is divided into 60 classes belonging to 12 superclasses for training, 20 classes belonging to 4 superclasses for validation and testing. + +# 5.2 T-NAS FOR FEW-SHOT LEARNING + +# 5.2.1 ARCHITECTURE SEARCH. + +We first get the meta-architecture $\widetilde{\theta}$ by optimizing the search network with first-order approximation of Alg. 1. In the architecture search stage, we employ the same operations as Liu et al. (2018b): $3 \times 3$ and $5 \times 5$ separable convolutions, $3 \times 3$ and $5 \times 5$ dilated separable convolutions, $3 \times 3$ max + +![](images/6310a6d4efba5f5ebd8bbcacb03a730825b7fbfbee7945f4e9df095cbb2e9497.jpg) +Figure 2: Architecture $(\theta_{\mathrm{normal}},\theta_{\mathrm{reduce}})$ searched with Auto-MAML (left), meta-architecture $(\widetilde{\theta}_{\mathrm{normal}},\widetilde{\theta}_{\mathrm{reduce}})$ searched with T-NAS (middle), and the transferred architecture $(\theta_{\mathrm{normal}}^{t},\theta_{\mathrm{reduce}}^{t})$ for the specific task $\mathcal{T}_t$ (right). The experiments are conducted in 5-way, 5-shot setting of Mini-Imagenet. + +![](images/7a2728719f007e7619c9eb79a34da313c67299052846058f10f2518385d0f389.jpg) + +![](images/ce6e054d40ef764e56390caac5f48d1bedcd57ce31afcaa797200abade36190c.jpg) + +![](images/afae8985de9009d3f2d3b0c6ade074a89b14294f74cf8852548c25d3ab1e3c36.jpg) + +![](images/8669c4e17b04efff43b18954f725bc1c1be91de34c724d6084a25ea8b97d450e.jpg) + +![](images/901fc808a018a36fe472bac76c5ce2faf09273b38f79fe9f9c9c55164fe2823a.jpg) + +pooling, $3 \times 3$ average pooling, identity and zero. ReLU-Conv-BN order is used for convolutional operations and each separable convolution is applied twice following (Liu et al., 2018a;b). For all datasets, we only use one {normal + reduction} cell for efficiency and preventing overfitting, thus the meta-architecture $\widetilde{\theta}$ is determined by $(\widetilde{\theta}_{\mathrm{normal}}, \widetilde{\theta}_{\mathrm{reduce}})$ . 
Once $\widetilde{\theta}$ is obtained using T-NAS, we can obtain the optimal architecture $\theta_{i}^{*}$ for the specific task $\mathcal{T}_i$ from Alg. 2.

We utilize the training and validation data of the dataset for architecture search. In the N-way, K-shot setting, we first randomly sample N classes from the training classes, and then randomly sample K images for each class to get a task. Thus, there are $N \times K$ images in each task. On the Mini-Imagenet dataset, one {normal + reduction} cell is trained for 10 epochs with 5000 independent tasks per epoch, and the initial channel count is set to 16. For the base-searcher, we use vanilla SGD to optimize the network weights $w_{i}^{m}$ and architecture parameter $\theta_{i}^{m}$ with inner learning rates $\alpha_{\mathrm{inner}} = 0.1$ and $\beta_{\mathrm{inner}} = 30$ . The inner step $M$ is set to 5 as a trade-off between accuracy and efficiency. For the meta-searcher, we use Adam (Kingma & Ba, 2014) to optimize the meta-architecture $\widetilde{\theta}$ and network weights $\widetilde{w}$ with outer learning rates $\alpha_{\mathrm{outer}} = 10^{-3}$ and $\beta_{\mathrm{outer}} = 10^{-3}$ . All search and evaluation experiments are performed on NVIDIA P40 GPUs. The whole search process takes about 2 GPU days.

In addition, we also conduct Auto-MAML experiments where all tasks share the same searched architecture. Auto-MAML is a special case of S2 in Figure 1, where all tasks share the same architecture searched with a meta-learning method. In practice, its algorithm is similar to T-NAS, except that the update of $\theta$ in the meta-searcher stage is removed. However, in Auto-MAML, we can divide the whole dataset into two splits for the updates of $\theta$ and $\widetilde{w}$ , following recent gradient-based NAS methods (Pham et al., 2018; Liu et al., 2018b).
Here, $D_{\mathrm{meta-train}}$ is divided into two independent splits $D_{\mathrm{train-split1}}$ and $D_{\mathrm{train-split2}}$ with a 1:1 ratio. The specific algorithm for meta-train and meta-test, as well as the searched architecture structure, can be found in the supplementary material.

To show the transferability of the meta-architecture, we visualize the (encoding of) architecture $\theta$ searched with Auto-MAML, the meta-architecture $\widetilde{\theta}$ searched with T-NAS, and the transferred architecture $\theta^t$ for a specific task $\mathcal{T}_t$ in Figure 2. It is worth noting that the architecture encoding matrix $(\widetilde{\theta}_{\mathrm{normal}},\widetilde{\theta}_{\mathrm{reduce}})$ searched with T-NAS is smoother than that of Auto-MAML, which implies that $(\widetilde{\theta}_{\mathrm{normal}},\widetilde{\theta}_{\mathrm{reduce}})$ is easier to adapt to the specific task $(\widetilde{\theta}\rightarrow \theta^{t})$ than with Auto-MAML; thus the meta-architecture searched with T-NAS is more flexible.

# 5.2.2 ARCHITECTURE EVALUATION.

After getting the architecture structure $\theta_{i}^{*}$ for task $\mathcal{T}_i$, we evaluate $\theta_{i}^{*}$ by training it from scratch. In architecture evaluation, we train the task-specific architecture for 20 epochs with 15000 independent tasks per epoch. Note that, different from Liu et al. (2018b), we directly use the searched network structure to evaluate performance without any modification (e.g., of the number of channels or layers). We optimize the network weights $w_{i}^{m}$ with $\alpha_{\mathrm{inner}} = 0.1$ and $M = 5$. We use Adam (Kingma & Ba, 2014) to optimize the meta-weights $\widetilde{w}$ with outer learning rate $\alpha_{\mathrm{outer}} = 10^{-3}$. The experimental results on Omniglot, Mini-Imagenet and FC100 are shown in Tables 2, 3 and 4, respectively, where T-NAS is based on first-order MAML.
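The first-order inner/outer update scheme described above can be sketched as follows. This is a hypothetical toy with a scalar weight and hand-written task gradients, not the authors' implementation; plain SGD with an enlarged outer rate stands in for the Adam meta-optimizer so the toy run converges in a few steps.

```python
def inner_adapt(w_meta, grad_w, alpha_inner=0.1, M=5):
    """Inner loop: M steps of vanilla SGD starting from the meta-weights."""
    w = w_meta
    for _ in range(M):
        w = w - alpha_inner * grad_w(w)
    return w

def outer_step(w_meta, task_grads, alpha_outer=1e-3):
    """First-order meta-update: adapt per task, then apply the averaged
    post-adaptation gradients directly (no second derivatives)."""
    meta_grad = 0.0
    for grad_w in task_grads:
        w_task = inner_adapt(w_meta, grad_w)
        meta_grad += grad_w(w_task)
    return w_meta - alpha_outer * meta_grad / len(task_grads)

# Two toy quadratic tasks with optima at +1 and -1; a larger outer rate than
# the paper's 1e-3 is used only so this toy run converges in a few steps.
task_grads = [lambda w: 2 * (w - 1.0), lambda w: 2 * (w + 1.0)]
w = 5.0
for _ in range(200):
    w = outer_step(w, task_grads, alpha_outer=0.05)
# w ends near 0, the initialization that adapts quickly to both tasks
```

The same pattern applies unchanged when `w` is the pair of weight and architecture parameters updated with their respective inner rates.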
Specifically, T-NAS outperforms + +Table 2: 5-way accuracy results on the Omniglot dataset. + +
| Methods | 1-shot | 5-shot |
| --- | --- | --- |
| Siamese Nets (Koch et al., 2015) | 97.3% | 98.4% |
| Matching nets (Vinyals et al., 2016) | 98.1% | 98.9% |
| Neural statistician (Edwards & Storkey, 2017) | 98.1% | 99.5% |
| Memory Mod. (Kaiser et al., 2017) | 98.4% | 99.6% |
| Meta-SGD (Li et al., 2017) | 99.53 ± 0.26% | 99.93 ± 0.09% |
| MAML (Finn et al., 2017) | 98.7 ± 0.4% | 99.9 ± 0.1% |
| MAML++ (Antoniou et al., 2019) | 99.47% | 99.93% |
| Auto-MAML (ours) | 98.95 ± 0.38% | 99.91 ± 0.09% |
| T-NAS (ours) | 99.16 ± 0.34% | 99.93 ± 0.07% |
| T-NAS++ (ours) | 99.35 ± 0.32% | 99.93 ± 0.07% |
+ +Table 3: 5-way accuracy results on Mini-Imagenet. + +
| Methods | Arch. | #Param. | 1-shot | 5-shot |
| --- | --- | --- | --- | --- |
| Matching nets (Vinyals et al., 2016) | 4CONV | 32.9K | 43.44 ± 0.77% | 55.31 ± 0.73% |
| ProtoNets (Snell et al., 2017) | 4CONV | 32.9K | 49.42 ± 0.78% | 68.20 ± 0.66% |
| Meta-LSTM (Ravi & Larochelle, 2017) | 4CONV | 32.9K | 43.56 ± 0.84% | 60.60 ± 0.71% |
| Bilevel (Franceschi et al., 2018) | 4CONV | 32.9K | 50.54 ± 0.85% | 64.53 ± 0.68% |
| CompareNets (Sung et al., 2018) | 4CONV | 32.9K | 50.44 ± 0.82% | 65.32 ± 0.70% |
| LLAMA (Grant et al., 2018) | 4CONV | 32.9K | 49.40 ± 1.83% | - |
| MAML (Finn et al., 2017) | 4CONV | 32.9K | 48.70 ± 1.84% | 63.11 ± 0.92% |
| MAML (first-order) (Finn et al., 2017) | 4CONV | 32.9K | 48.07 ± 1.75% | 63.15 ± 0.91% |
| MAML++ (Antoniou et al., 2019) | 4CONV | 32.9K | 52.15 ± 0.26% | 68.32 ± 0.44% |
| Auto-Meta (small) (Kim et al., 2018) | Cell | 28/28 K | 49.58 ± 0.20% | 65.09 ± 0.24% |
| Auto-Meta (large) (Kim et al., 2018) | Cell | 98.7/94.0 K | 51.16 ± 0.17% | 69.18 ± 0.14% |
| BASE (Softmax) (Shaw et al., 2018) | Cell | 1200K | - | 65.40 ± 0.74% |
| BASE (Gumbel-Softmax) (Shaw et al., 2018) | Cell | 1200K | - | 66.20 ± 0.70% |
| Auto-MAML (ours) | Cell | 23.2/26.1 K | 51.23 ± 1.76% | 64.10 ± 1.12% |
| T-NAS (ours) | Cell | 24.3/26.5 K* | 52.84 ± 1.41% | 67.88 ± 0.92% |
| T-NAS++ (ours) | Cell | 24.3/26.5 K* | 54.11 ± 1.35% | 69.59 ± 0.85% |
\* means the average parameters of architectures for evaluation.

MAML and Auto-MAML (52.84% vs. 48.70%, 51.23%), which validates the advantage of T-NAS. It also achieves better performance than other architecture transfer methods (e.g., BASE (Shaw et al., 2018)). Since the advantage of T-NAS is that the meta-architecture can adapt to a new task rather than using a fixed architecture as MAML and Auto-MAML do, it incurs an additional time cost for the adaptation. The adaptation procedure takes about 1.5 seconds (1-shot) and 7.8 seconds (5-shot), which is negligible compared with the improvement in accuracy. Moreover, we can also see that T-NAS++, an improved version of T-NAS described in Sec. 4.2, achieves the best performance among all the baselines.

# 5.3 T-NAS FOR SUPERVISED LEARNING

Besides few-shot classification, we also conduct experiments on Mini-Imagenet for general supervised learning. Different from few-shot learning, the architecture can be searched and trained for each task thanks to the sufficient number of samples, which can be regarded as S1 in Figure 1. Due to the lack of baselines in the supervised learning setting, we choose 10 tasks, each in a 200-shot, 50-query, 10-way setting, based on the Mini-Imagenet dataset for meaningful experiments.

In the supervised learning experiments, we follow the same setting as in few-shot learning for transferable architecture search. The difference is that we can train each task independently from scratch in architecture evaluation. For the 10 supervised learning tasks, we train the task-specific architecture for 200 epochs with a cosine schedule, where the initial learning rate is 0.05. We use SGD with momentum 0.9 to optimize the network weights, and randomly crop and flip the original images for data augmentation.

Table 4: 5-way accuracy results on FC100.
| Methods | 1-shot | 5-shot | 10-shot |
| --- | --- | --- | --- |
| MAML (Finn et al., 2017) | 38.1 ± 1.7% | 50.4 ± 1.0% | 56.2 ± 0.8% |
| MAML++ (Antoniou et al., 2019) | 38.7 ± 0.4% | 52.9 ± 0.4% | 58.8 ± 0.4% |
| Auto-MAML (ours) | 38.8 ± 1.8% | 52.2 ± 1.2% | 57.5 ± 0.8% |
| T-NAS (ours) | 39.7 ± 1.4% | 53.1 ± 1.0% | 58.9 ± 0.7% |
| T-NAS++ (ours) | 40.4 ± 1.2% | 54.6 ± 0.9% | 60.2 ± 0.7% |
+ +Table 5: 200-shot, 50-query, 10-way accuracy results of supervised learning on Mini-Imagenet. + +
| Methods | 200-shot | Time |
| --- | --- | --- |
| Random | 61.20 ± 0.09% | N/A |
| S1 | 64.84 ± 0.04% | 266 min |
| S2 | 62.99 ± 0.05% | N/A |
| T-NAS (ours) | 64.23 ± 0.05% | 5 min |
The experimental results in the supervised learning setting are shown in Table 5. In S1, we search the architecture for each of the 10 tasks from scratch and evaluate them. For S2, we directly use five architectures, searched respectively on five different tasks (each sampled in the 200-shot, 50-query, 10-way setting from the meta-train dataset), for evaluation on the 10 tasks. For a fair comparison, we also pick five architectures randomly from the search space for each task, evaluate them on the specific task, and report their average results. It is worth noting that randomly generating architectures or directly reusing architectures searched on other tasks consumes no search time. Thus, the time of Random and S2 in Table 5 is not applicable. Our T-NAS can learn a meta-architecture $\widetilde{\theta}$ and obtain the task-specific architecture with only a few update steps from $\widetilde{\theta}$ instead of using a shared architecture. Thus, T-NAS obtains better performance than random architectures and S2 (64.23% vs. 61.20%, 62.99%). In addition, T-NAS achieves performance competitive with S1 but with about 50x less time cost (5 min vs. 266 min). S1 slightly outperforms T-NAS because it directly searches the network architecture for each task from scratch, which is laborious as well as time-consuming. In contrast, T-NAS can adapt to different tasks quickly from a good initial point $\widetilde{\theta}$, which avoids laborious per-task searching and saves a lot of time.

Finally, it is interesting that although the architectures searched with S1 and those transferred from the meta-architecture searched with T-NAS differ for the specific tasks, their final evaluation performance is very close and better than that of the random architectures.
This observation implies that some subspaces of the architecture search space might be suitable for a specific task, and that T-NAS is able to adapt an architecture initialized with $\widetilde{\theta}$ to these subspaces.

# 6 CONCLUSION AND FUTURE WORK

In this paper, we focus on the transferability of Neural Architecture Search, that is, how to obtain a suitable architecture for a new task in NAS. The two simple solutions are either inefficient or not optimal. To tackle this problem, we propose a novel Transferable Neural Architecture Search (T-NAS) for fast adaptation of architectures. Specifically, T-NAS learns a meta-architecture that is able to adapt to a new task easily and quickly through a few gradient steps, which is more flexible than existing NAS methods. In addition, to optimize the whole search network, we propose an efficient first-order approximation algorithm. Extensive experiments show that T-NAS achieves state-of-the-art performance in the few-shot learning setting. In the supervised learning setting, T-NAS achieves performance comparable to the baselines while reducing the search cost by 50x, which demonstrates the effectiveness of our method.

For future work, we plan to study the transferability of NAS across tasks from different task distributions, where transfer learning methods might be helpful. We hope that this work provides some insights on the transferability of NAS, which might potentially benefit real-world applications.

Acknowledgement. The work is supported by the National Key R&D Program of China (2018AAA0100704) and the National Natural Science Foundation of China (NSFC) under Grant No. 61932020. We would like to thank Jiaxing Wang for helpful discussions.

# REFERENCES

Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your maml. In ICLR, 2019.
Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware.
In International Conference on Learning Representations, 2019. +Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, and Jon Shlens. Searching for efficient multi-scale architectures for dense image prediction. In Advances in Neural Information Processing Systems, pp. 8699-8710, 2018. +An-Chieh Cheng, Chieh Hubert Lin, Da-Cheng Juan, Wei Wei, and Min Sun. Instanas: Instance-aware neural architecture search. arXiv preprint arXiv:1811.10201, 2018. +Hao Cheng, Dongze Lian, Bowen Deng, Shenghua Gao, Tao Tan, and Yanlin Geng. Local to global learning: Gradually adding classes for training deep neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. +Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009. +Harrison Edwards and Amos Storkey. Towards a neural statistician. In ICLR, 2017. +Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1-21, 2019. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126-1135. JMLR.org, 2017. +Luca Franceschi, Paolo Frasconi, Saverio Salzo, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In ICML, 2018. +Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7036-7045, 2019. +Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440-1448, 2015. +Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. 
Autogan: Neural architecture search for generative adversarial networks. arXiv preprint arXiv:1908.03835, 2019. +Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. In *ICLR*, 2018. +Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Jian Chen, Peilin Zhao, and Junzhou Huang. Nat: Neural architecture transformer for accurate and compact architectures. In Advances in Neural Information Processing Systems, pp. 735-747, 2019. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. +Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017. + +Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In International Joint Conference on Artificial Intelligence, pp. 1965-1972, 2017. +Łukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. In ICLR, 2017. +Jaehong Kim, Sangyeul Lee, Sungwan Kim, Moonsu Cha, Jung Kwon Lee, Youngduck Choi, Yongseok Choi, Dong-Yeon Cho, and Jiwon Kim. Auto-meta: Automated gradient based meta learner search. arXiv preprint arXiv:1806.06927, 2018. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2, 2015. +Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011. 
Stanislas Lauly, Yin Zheng, Alexandre Allauzen, and Hugo Larochelle. Document neural autoregressive distribution estimation. The Journal of Machine Learning Research, 18(1):4046-4069, 2017.
Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017.
Dongze Lian, Jing Li, Jia Zheng, Weixin Luo, and Shenghua Gao. Density map regression guided detection network for rgb-d crowd counting and localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 19-34, 2018a.
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. arXiv preprint arXiv:1901.02985, 2019.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018b.
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018.
Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. In ICML, 2018.
Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, pp. 721-731, 2018.
Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In ICML, 2018.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le.
Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548, 2018. +Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International conference on machine learning, pp. 1842-1850, 2016. + +Albert Shaw, Bo Dai, Weiyang Liu, and Le Song. Bayesian meta-network architecture learning. CoRR, abs/1812.09584, 2018. URL http://arxiv.org/abs/1812.09584. +Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077-4087, 2017. +Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning for few-shot learning. In CVPR, 2019. +Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199-1208, 2018. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014. +Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pp. 3630-3638, 2016. +Catherine Wong, Neil Houlsby, Yifeng Lu, and Andrea Gesmundo. Transfer learning with neural automl. In Advances in Neural Information Processing Systems, pp. 8356-8365, 2018. +Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, and Yangqiu Song. Metagan: An adversarial approach to few-shot learning. In Advances in Neural Information Processing Systems, pp. 2365-2374, 2018. +Yin Zheng, Richard S Zemel, Yu-Jin Zhang, and Hugo Larochelle. A neural autoregressive approach to attention-based recognition. International Journal of Computer Vision, 113(1):67-79, 2015a. 
Yin Zheng, Yu-Jin Zhang, and Hugo Larochelle. A deep and autoregressive approach for topic modeling of multimodal data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(6):1056-1069, 2015b.
Yin Zheng, Bangsheng Tang, Wenkui Ding, and Hanning Zhou. A neural autoregressive approach to collaborative filtering. In International Conference on Machine Learning, pp. 764-773, 2016.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8697-8710, 2018.

# A THE EXPERIMENTS OF AUTO-MAML

In Auto-MAML, we search a good network architecture for MAML. In fact, Auto-MAML is a special case of S2 of Figure 1 in this paper, where all tasks share the same architecture searched with a meta-learning method. The practical algorithm is similar to T-NAS, which amounts to removing the update for $\theta$ in the meta-searcher stage. However, in Auto-MAML, we can divide the whole dataset into two splits for the updates of $\theta$ and $\widetilde{w}$, following the recent gradient-based NAS methods (Pham et al., 2018; Liu et al., 2018b). Here, $\mathcal{D}_{\mathrm{meta-train}}$ is divided into two independent splits $\mathcal{D}_{\mathrm{train-split1}}$ and $\mathcal{D}_{\mathrm{train-split2}}$ with a 1:1 ratio. The specific algorithm for meta-train and meta-test is shown in Alg. 3.
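The alternating structure of Alg. 3, meta-updating the weights on tasks from $\mathcal{D}_{\mathrm{train-split1}}$ and the shared architecture on tasks from $\mathcal{D}_{\mathrm{train-split2}}$, can be sketched as follows. This is a hypothetical scalar toy in which gradient callables stand in for the real task losses and plain SGD replaces Adam; it is not the authors' implementation.

```python
def meta_step(w, theta, split1_grads, split2_grads,
              alpha_outer=1e-3, beta=3e-4):
    """One meta-iteration in the style of Alg. 3: update the weights on tasks
    from split 1, then the shared architecture on tasks from split 2."""
    # % Update w on tasks sampled from D_train-split1
    g_w = sum(g(w, theta) for g in split1_grads) / len(split1_grads)
    w = w - alpha_outer * g_w
    # % Update theta on tasks sampled from D_train-split2
    g_t = sum(g(w, theta) for g in split2_grads) / len(split2_grads)
    theta = theta - beta * g_t
    return w, theta

# Made-up task gradients pulling w towards 2.0 and theta towards -1.0:
split1 = [lambda w, t: 2 * (w - 2.0)]
split2 = [lambda w, t: 2 * (t + 1.0)]
w, theta = 0.0, 0.0
for _ in range(5000):
    w, theta = meta_step(w, theta, split1, split2)
# w approaches 2.0 and theta approaches -1.0
```

Splitting the two updates over disjoint data, as in DARTS-style bilevel optimization, prevents the architecture update from overfitting the same samples used for the weight update.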
For the base-searcher, we use vanilla SGD to optimize the network weights $w_{i}^{m}$ with inner learning rate $\alpha_{\mathrm{inner}} = 0.01$. The inner step count $M$ is set to 5 as a trade-off between accuracy and time. For the meta-update, we use Adam to optimize the network weights $w$ and architecture $\theta$ with outer learning rate $\alpha_{\mathrm{outer}} = 10^{-3}$ and $\beta = 3\times 10^{-4}$. The hyperparameter setting for network evaluation is the same as for T-NAS. Here, we visualize the discrete architecture structures searched with Auto-MAML on the Mini-Imagenet dataset in Figure 3 and Figure 4.

![](images/9ef45e3a8f54cf35e1b3ac018c19cbb7366c395e020d6c372fbf3b660adc2e03.jpg)
(a) Normal cell

![](images/d7ad3cd6f445b6d255b823a791468589d3a78ebc3c135aec7c2d2aaa3c03f398.jpg)
(b) Reduction cell

Figure 3: Architecture searched with Auto-MAML in 5-way 1-shot setting of Mini-Imagenet.

![](images/82d98069a3ff18186ad63bd1d974168b283165608e85f9a64e93160d303617a1.jpg)
(a) Normal cell

![](images/e5afb4b201a8136cdbb1a95564d2fff1aa6ed7b31ed3301aeefa4c33b149d28e.jpg)
(b) Reduction cell

Figure 4: Architecture searched with Auto-MAML in 5-way 5-shot setting of Mini-Imagenet.

# B TASK-SPECIFIC ARCHITECTURES

The aim of this paper is to learn a transferable architecture that can adapt to a new task through a few gradient steps. Therefore, it is meaningless to directly decode the searched meta-architecture $\widetilde{\theta}$ without regard to the specific tasks. Here, we visualize the (encoding of) transferable architecture $\widetilde{\theta}$ searched with T-NAS and the task-specific architectures $\theta^1, \theta^2, \theta^3$ in Figure 5.
The matrix $(\widetilde{\theta}_{\text{normal}}, \widetilde{\theta}_{\text{reduce}})$ searched with T-NAS is smoother than the task-specific architecture matrices $(\theta_{\text{normal}}^i, \theta_{\text{reduce}}^i)$, which shows that the meta-architecture is flexible and easy to adapt to these specific tasks ($\widetilde{\theta} \to \theta^1, \theta^2, \theta^3$).

# C COMPLETE EXPERIMENTAL COMPARISON

In this section, we show the complete experimental comparison of our method with methods that use a pretrained model in Table 6. Some methods (Oreshkin et al., 2018; Sun et al., 2019) achieve better performance by employing more complex networks and pretrained models.

# D PERFORMANCE COMPARISON ON CIFAR-10 AND IMAGENET

To evaluate the transferability of our method, we also conduct experiments on CIFAR-10 and ImageNet. First, we construct a larger dataset from ImageNet to learn the meta-architecture, and then adapt the meta-architecture on CIFAR-10 to decode the final architecture. We test the final architecture on CIFAR-10 and ImageNet and report the results in Table 7 and Table 8. From these two tables, we can see that the meta-architecture learned with T-NAS can quickly adapt to new tasks and achieve favorable performance. For example, given the meta-architecture learned with T-NAS, it only takes 0.042 GPU days to derive an architecture that achieves a test error of $2.98\%$ on CIFAR-10 and $27.2\%$ on ImageNet.
In contrast, searching for an architecture that achieves similar performance from scratch on CIFAR-10 by DARTS (first order) would cost

Algorithm 3: Auto-MAML

Input: Datasets $\mathcal{D}_{\mathrm{train-split1}}$ and $\mathcal{D}_{\mathrm{train-split2}}$, inner learning rate $\alpha_{\mathrm{inner}}$, outer learning rate $\alpha_{\mathrm{outer}}$, and architecture learning rate $\beta$.
Output: The searched architecture $\theta^{*}$.

% Meta-train:
while not done do
% Update $\widetilde{w}$:
Sample a batch of tasks $\{\mathcal{T}\}$ from $\mathcal{D}_{\mathrm{train-split1}}$.
for $\mathcal{T}_i \in \{\mathcal{T}\}$ do
Get datapoints $\mathcal{T}_i^s$ and compute $\nabla_{w_i^m}\mathcal{L}(g(\mathcal{T}_i^s;\theta,w_i^m))$ according to the standard cross-entropy loss;
Update $w_{i}^{m}$ with $w_{i}^{m+1} = w_{i}^{m} - \alpha_{\mathrm{inner}}\nabla_{w_{i}^{m}}\mathcal{L}(g(\mathcal{T}_{i}^{s};\theta,w_{i}^{m}))$ for $M$ steps;
Get datapoints $\mathcal{T}_i^q$ for the meta-update;
end
Update $\widetilde{w}$ with $\widetilde{w} = \widetilde{w} - \alpha_{\mathrm{outer}}\nabla_{\widetilde{w}}\sum_{\mathcal{T}_i\sim p(\mathcal{T})}\mathcal{L}(g(\mathcal{T}_i^q;\theta,w_i^M))$;
% Update $\theta$:
Sample a batch of tasks $\{\mathcal{T}\}$ from $\mathcal{D}_{\mathrm{train-split2}}$.
for $\mathcal{T}_i \in \{\mathcal{T}\}$ do
Get datapoints $\mathcal{T}_i^s$ and compute $\nabla_{w_i^m}\mathcal{L}(g(\mathcal{T}_i^s;\theta,w_i^m))$ according to the standard cross-entropy loss;
Update $w_{i}^{m}$ with $w_{i}^{m+1} = w_{i}^{m} - \alpha_{\mathrm{inner}}\nabla_{w_{i}^{m}}\mathcal{L}(g(\mathcal{T}_{i}^{s};\theta,w_{i}^{m}))$ for $M$ steps;
Get datapoints $\mathcal{T}_i^q$ for the meta-update;
end
Update $\theta$ with $\theta = \theta - \beta\nabla_{\theta}\sum_{\mathcal{T}_i\sim p(\mathcal{T})}\mathcal{L}(g(\mathcal{T}_i^q;\theta,w_i^M))$;
end

% Meta-test:
Sample tasks $\{\mathcal{T}\}$ from $\mathcal{D}_{\mathrm{train-split2}}$.
for $\mathcal{T}_i \in \{\mathcal{T}\}$ do
Update $w_{i}^{m}$ with $w_{i}^{m+1} = w_{i}^{m} - \alpha_{\mathrm{inner}}\nabla_{w_{i}^{m}}\mathcal{L}(g(\mathcal{T}_{i}^{s};\theta,w_{i}^{m}))$ for $M$ steps;
Compute the test accuracy $\mathrm{Acc}_i$ on $\mathcal{T}_i^q$.
end
Return the architecture $\theta^{*}$ according to the best average accuracy of $\{\mathrm{Acc}_i\}$.

Table 6: 5-way accuracy results on Mini-Imagenet.
| Methods | Architectures | Parameters | 1-shot | 5-shot | Pretrained |
| --- | --- | --- | --- | --- | --- |
| TADAM (Oreshkin et al., 2018) | ResNet12 | 2039.2K | 58.5 ± 0.3% | 76.7 ± 0.3% | Y |
| MTL (Sun et al., 2019) | ResNet12 | 2039.2K | 61.2 ± 1.8% | 75.5 ± 0.8% | Y |
| Matching nets (Vinyals et al., 2016) | 4CONV | 32.9K | 43.44 ± 0.77% | 55.31 ± 0.73% | N |
| ProtoNets (Snell et al., 2017) | 4CONV | 32.9K | 49.42 ± 0.78% | 68.20 ± 0.66% | N |
| Meta-LSTM (Ravi & Larochelle, 2017) | 4CONV | 32.9K | 43.56 ± 0.84% | 60.60 ± 0.71% | N |
| Bilevel (Franceschi et al., 2018) | 4CONV | 32.9K | 50.54 ± 0.85% | 64.53 ± 0.68% | N |
| CompareNets (Sung et al., 2018) | 4CONV | 32.9K | 50.44 ± 0.82% | 65.32 ± 0.70% | N |
| LLAMA (Grant et al., 2018) | 4CONV | 32.9K | 49.40 ± 1.83% | - | N |
| MAML (Finn et al., 2017) | 4CONV | 32.9K | 48.70 ± 1.84% | 63.11 ± 0.92% | N |
| MAML (first-order) (Finn et al., 2017) | 4CONV | 32.9K | 48.07 ± 1.75% | 63.15 ± 0.91% | N |
| MAML++ (Antoniou et al., 2019) | 4CONV | 32.9K | 52.15 ± 0.26% | 68.32 ± 0.44% | N |
| Auto-Meta (small) (Kim et al., 2018) | Cell | 28/28 K | 49.58 ± 0.20% | 65.09 ± 0.24% | N |
| Auto-Meta (large) (Kim et al., 2018) | Cell | 98.7/94.0 K | 51.16 ± 0.17% | 69.18 ± 0.14% | N |
| BASE (Softmax) (Shaw et al., 2018) | Cell | 1200K | - | 65.40 ± 0.74% | N |
| BASE (Gumbel-Softmax) (Shaw et al., 2018) | Cell | 1200K | - | 66.20 ± 0.70% | N |
| Auto-MAML | Cell | 23.2/26.1 K | 51.23 ± 1.76% | 64.10 ± 1.12% | N |
| T-NAS | Cell | 24.3/26.5 K* | 52.84 ± 1.41% | 67.88 ± 0.92% | N |
| T-NAS++ | Cell | 24.3/26.5 K* | 54.11 ± 1.35% | 69.59 ± 0.85% | N |
+ +\* means the average parameters of architectures for evaluation. + +1.5 days, which is about 36 times longer than that of T-NAS. This result confirms the advantage of T-NAS and also indicates that it is possible to apply T-NAS to practical scenarios. + +![](images/aaf0761a967c61b3e7462667a608b0c49f78af471aafb7e03226ac48dfff7d3c.jpg) + +![](images/2963e21d0ca93e967c9c484d97b99e01bee7517e856ca7cea751a47c5023040e.jpg) + +![](images/de9cbb7d490e6c0ea1200fa5e63c502ba35702395ad4a3bbc98ca47d9f1a2546.jpg) + +![](images/da7473c2e39dcc27982cff5f6fcc4f3c02f33c098b73959a1808822d9ca71d8d.jpg) + +![](images/0d1899628efb1e83063dfd5c0adf6bfbd552baafdc8d907d7997b530c515a0b2.jpg) +Figure 5: Meta-architecture matrix $(\widetilde{\theta}_{\text{normal}}, \widetilde{\theta}_{\text{reduce}})$ searched with T-NAS and three task-specific architecture matrices $(\theta_{\text{normal}}^i, \theta_{\text{reduce}}^i)$ . The search experiments are conducted in 5-way, 5-shot setting of Mini-Imagenet dataset. + +![](images/4f4949f6ac4cccbe01eadba2072614f761d359bf23b7c75c64d8ac7356e72374.jpg) + +![](images/35506378b9cb5118e04a51b11beaa817e1fdf8ed69710a0f7353667d94a50fae.jpg) + +![](images/aeca533025aa2f84f1e9ab3a041d926a187b3573fe506fc621df34dc057169ce.jpg) + +Table 7: Comparisons with state-of-the-art image classifiers on CIFAR-10. + +
| Methods | Test Error (%) | #Param. (M) | Search Cost (GPU days) |
| --- | --- | --- | --- |
| Random search baseline + cutout | 3.29 ± 0.15 | 3.2 | - |
| NASNet-A + cutout (Zoph et al., 2018) | 2.65 | 3.3 | 1800 |
| AmoebaNet-A + cutout (Real et al., 2018) | 3.34 | 3.2 | 3150 |
| AmoebaNet-B + cutout (Real et al., 2018) | 2.55 ± 0.05 | 2.8 | 3150 |
| PNAS (Liu et al., 2018a) | 3.41 ± 0.09 | 3.2 | 225 |
| ENAS + cutout (Pham et al., 2018) | 2.89 | 4.6 | 0.5 |
| DARTS (first-order) + cutout (Liu et al., 2018b) | 3.00 ± 0.14 | 3.3 | 1.5 |
| DARTS (second-order) + cutout (Liu et al., 2018b) | 2.76 ± 0.09 | 3.3 | 4 |
| Ours (first-order) + cutout | 2.98 ± 0.12 | 3.4 | 0.043 |
+ +Table 8: Comparisons with state-of-the-art image classifiers on ImageNet in the mobile setting. + +
| Methods | Top-1 Err. (%) | Top-5 Err. (%) | #Params (M) | Search Cost (GPU days) |
| --- | --- | --- | --- | --- |
| NASNet-A (Zoph et al., 2018) | 26.0 | 8.4 | 5.3 | 1800 |
| NASNet-B (Zoph et al., 2018) | 27.2 | 8.7 | 5.3 | 1800 |
| NASNet-C (Zoph et al., 2018) | 27.5 | 9.0 | 4.9 | 1800 |
| AmoebaNet-A (Real et al., 2018) | 25.5 | 8.0 | 5.1 | 3150 |
| AmoebaNet-B (Real et al., 2018) | 27.2 | 8.7 | 5.3 | 3150 |
| AmoebaNet-C (Real et al., 2018) | 27.5 | 9.0 | 4.9 | 3150 |
| PNAS (Liu et al., 2018a) | 25.8 | 8.1 | 5.1 | ~255 |
| DARTS (Liu et al., 2018b) | 26.9 | 9.0 | 4.9 | 4 |
| Ours | 27.3 | 9.0 | 4.9 | 0.043 |
\ No newline at end of file diff --git a/towardsfastadaptationofneuralarchitectureswithmetalearning/images.zip b/towardsfastadaptationofneuralarchitectureswithmetalearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..33f331e568e9cbbebd0fee222acf634c677f8362 --- /dev/null +++ b/towardsfastadaptationofneuralarchitectureswithmetalearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3361466a5aea5793f672855ae187197d412b4e6f86f9ee09ddfb1a1263e01fa +size 787321 diff --git a/towardsfastadaptationofneuralarchitectureswithmetalearning/layout.json b/towardsfastadaptationofneuralarchitectureswithmetalearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4b52bea54bc9745959fd1e2790269054b56c0c61 --- /dev/null +++ b/towardsfastadaptationofneuralarchitectureswithmetalearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0f73f9502d4389146e9cdf614a946b7e0422b109b4361d53f9e94289220176f +size 629914 diff --git a/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_content_list.json b/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4bd8bfd1c58e4fa377592c95a00d2ba452d25653 --- /dev/null +++ b/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c70667aa37dd4aae25aa3cd31061e4560a236c0d02ed9945228515087a5ee82 +size 135721 diff --git a/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_model.json b/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_model.json new file mode 100644 index 0000000000000000000000000000000000000000..98a7d50e83dee08206c73d36f565944179fb9da6 --- 
/dev/null +++ b/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ddd63df37c000dcbd3b34f019e7efe3eee780b965484f398fbf9c10ebe497c6 +size 148795 diff --git a/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_origin.pdf b/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..663c4c41d12d89e43884d9be0d76e9200e16a66b --- /dev/null +++ b/towardsneuralnetworksthatprovablyknowwhentheydontknow/985bbd78-3bbc-4ba2-824e-b8e530347015_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d611fa765606a941c784d0aace3ee7171df248a77217a5bbb7a489215a5073b +size 1027435 diff --git a/towardsneuralnetworksthatprovablyknowwhentheydontknow/full.md b/towardsneuralnetworksthatprovablyknowwhentheydontknow/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0f444dd4d847204fa043bdd5751cb99d5308f933 --- /dev/null +++ b/towardsneuralnetworksthatprovablyknowwhentheydontknow/full.md @@ -0,0 +1,603 @@ +# TOWARDS NEURAL NETWORKS THAT PROVABLY KNOW WHEN THEY DON'T KNOW + +Alexander Meinke + +University of Tübingen + +Matthias Hein + +University of Tübingen + +# ABSTRACT + +It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data. Thus, ReLU networks do not know when they don't know. However, this is a highly important property in safety critical applications. In the context of out-of-distribution detection (OOD) there have been a number of proposals to mitigate this problem but none of them are able to make any mathematical guarantees. In this paper we propose a new approach to OOD which overcomes both problems. 
Our approach can be used with ReLU networks and provides provably low confidence predictions far away from the training data, as well as the first certificates for low confidence predictions in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting, whereas our model can guarantee its performance while retaining state-of-the-art OOD performance.

# 1 INTRODUCTION

Deep learning models are being deployed in a growing number of applications. As these include more and more systems where safety is a concern, it is important to guarantee that deep learning models work as one expects them to. One topic that has received a lot of attention in this area is the problem of adversarial examples, in which a model's prediction can be changed by introducing a small perturbation to an originally correctly classified sample. Achieving robustness against this type of perturbation is an active field of research. Empirically, adversarial training (Madry et al., 2018) performs well, and provably robust models have been developed (Hein & Andriushchenko, 2017; Wong & Kolter, 2018; Raghunathan et al., 2018; Mirman et al., 2018; Cohen et al., 2019).

On the other end of the spectrum it is also important to study how deep learning models behave far away from the training samples. A simple property every classifier should satisfy is that far away from the training data it should yield close to uniform confidence over the classes: it knows when it does not know. However, several cases of high confidence predictions far away from the training data have been reported for neural networks, e.g. for fooling images (Nguyen et al., 2015), for out-of-distribution (OOD) images (Hendrycks & Gimpel, 2017a), or in medical diagnosis (Leibig et al., 2017). Moreover, it has been observed that, even on the original task, neural networks often produce overconfident predictions (Guo et al., 2017).
Very recently, it has been shown theoretically that the class of ReLU networks (all neural networks which use piecewise affine activation functions), which encompasses almost all standard models, produces predictions with arbitrarily high confidence far away from the training data (Hein et al., 2019). Unfortunately, this statement holds for almost all such networks, and thus one cannot avoid this phenomenon without a change in the architecture.

Traditionally, the calibration of the confidence of predictions has been considered on the in-distribution (Guo et al., 2017; Lakshminarayanan et al., 2017a). However, these techniques cannot be used for detection (Leibig et al., 2017). Only recently has the detection of OOD inputs (Hendrycks & Gimpel, 2017a) been tackled. The existing approaches are roughly of two types: first, postprocessing techniques that adjust the estimated confidence (DeVries & Taylor, 2018; Liang et al., 2018), which includes the baseline ODIN; second, modifications of classifier training that integrate generative models like a VAE or GAN in order to discriminate out-distribution from in-distribution data (Lee et al., 2018a; Wang et al., 2018; Lee et al., 2018b), or approaches which enforce low

![](images/4ecd2348f071ba9ec669a309fbf1361a25e00f33e7f60806b386f68c1299b304.jpg)
Figure 1: Illustration on a toy dataset: We show the color-coded confidence in the prediction (yellow indicates high confidence $\max_y\hat{p} (y|x)\approx 1$, whereas dark purple regions indicate low confidence $\max_y\hat{p} (y|x)\approx 0.5$) for a normal neural network (left) and our CCU neural network (right). The decision boundary, shown in white, is similar for both models. Our CCU model retains high-confidence predictions in regions close to the training data, whereas far away from the training data the CCU model outputs close to uniform confidence. In contrast, the normal neural network is over-confident everywhere except very close to the decision boundary.
![](images/7adbf334500a832ee678bb767d363b2752ffaad8e3b521c5e3bb136e02f93790.jpg)

confidence on OOD inputs during training (Hein et al., 2019; Hendrycks et al., 2019). Worst-case aspects of OOD detection have previously been studied in Nguyen et al. (2015); Schott et al. (2018); Hein et al. (2019); Sehwag et al. (2019), but no robustness guarantees have yet been proposed for this setting. A generalization guarantee for an out-of-distribution detection scheme is provided in Liu et al. (2018). While this is the only guarantee we are aware of, it is quite different from the type of guarantees we present in this paper. In particular, none of those approaches are able to guarantee that neural networks produce low confidence predictions far away from the training data. We prove that our classifier satisfies this requirement, even with ReLU networks as the classifier model, without losing performance on either the in-distribution prediction task or OOD detection; see Figure 1 for an illustration. Moreover, our technique allows us to give upper bounds on the confidence over a whole neighborhood around a point (worst-case guarantees). We show that most state-of-the-art OOD methods can be fooled by maximizing the confidence in this ball, even when starting from uniform noise images, which should be trivial to identify. The central difference from existing OOD methods is that we have a Bayesian framework in which in- and out-distribution are modeled separately. In this framework, our algorithm for training neural networks follows directly as a maximum likelihood estimator, which differs from the more ad-hoc methods proposed in the literature. The use of Gaussian mixture models as the density estimator is then the essential key to obtaining the desired provable guarantees.
# 2 A GENERIC MODEL FOR CLASSIFIERS WITH CERTIFIED LOW CONFIDENCE FAR AWAY FROM THE TRAINING DATA

The model which we propose in this paper assumes that samples from an out-distribution are given to us. In image recognition we could either see the set of all images as a sample from the out-distribution (Hendrycks et al., 2019) or consider the agnostic case where we use uniform noise on $[0,1]^d$ as a maximally uninformative out-distribution. In both settings one tries to discriminate the out-distribution images from images coming from a particular image recognition task, and the goal is to obtain low confidence predictions on the out-distribution images versus higher confidence on images from the actual task. From this general model we derive, under minimal assumptions, a maximum-likelihood approach in which one jointly trains a classifier for the actual task and density estimators for the in- and out-distribution. As all of these quantities are coupled in our model for the conditional distribution $p(y|x)$, we obtain guarantees by controlling the density estimates far away from the training data. This is a crucial difference to the approaches of Lee et al. (2018a); Wang et al. (2018); Hendrycks et al. (2019), which empirically yield good OOD performance but are not able to certify the detection mechanism.

# 2.1 A PROBABILISTIC MODEL FOR IN- AND OUT-DISTRIBUTION DATA

We assume that there exists a joint probability distribution $p(y, x)$ over the in- and out-distribution data, where $y$ are the labels in $\{1, \dots, M\}$, $M$ is the number of classes, and $x \in \mathbb{R}^d$, where $d$ is the input dimension. In the following, we denote the underlying probabilities/densities by $p(y|x)$ resp. $p(x)$ and the estimated quantities by $\hat{p}(y|x)$ and $\hat{p}(x)$. We are mainly interested in a discriminative framework, i.e.
we want to estimate $p(y|x)$, which one can represent via the conditional distributions of the in-distribution $p(y|x,i)$ and out-distribution $p(y|x,o)$:

$$
p (y | x) = p (y | x, i) p (i | x) + p (y | x, o) p (o | x) = \frac {p (y | x , i) p (x | i) p (i) + p (y | x , o) p (x | o) p (o)}{p (x | i) p (i) + p (x | o) p (o)}. \tag {1}
$$

Note that at first it might seem strange to have a conditional distribution $p(y|x,o)$ for out-distribution data, but until now we have made no assumptions about what in- and out-distribution are. A realistic scenario would be that at test time we are presented with instances $x$ from other classes (out-distribution), for which we expect a close to uniform $p(y|x,o)$.

Our model for $\hat{p}(y|x)$ has the same form as $p(y|x)$:

$$
\hat {p} (y | x) = \frac {\hat {p} (y | x , i) \hat {p} (x | i) \hat {p} (i) + \hat {p} (y | x , o) \hat {p} (x | o) \hat {p} (o)}{\hat {p} (x | i) \hat {p} (i) + \hat {p} (x | o) \hat {p} (o)}. \tag {2}
$$

Typically, out-distribution data has no relation to the actual task and thus we would like to have uniform confidence over the classes. Therefore we set in our model

$$
\hat {p} (y | x, o) = \frac {1}{M} \quad \text{and} \quad \hat {p} (y | x, i) = \frac {e ^ {f _ {y} (x)}}{\sum_ {k = 1} ^ {M} e ^ {f _ {k} (x)}}, \quad y \in \{1, \dots, M \}, \tag {3}
$$

where $f: \mathbb{R}^d \to \mathbb{R}^M$ is the classifier function (logits). This framework is generic for classifiers trained with the cross-entropy (CE) loss (as the softmax function is the correct link function for the CE loss), and we focus in particular on neural networks. For a ReLU network the classifier function $f$ is componentwise a continuous piecewise affine function and has been shown to produce asymptotically arbitrarily highly confident predictions (Hein et al., 2019), i.e. the classifier becomes more confident in its predictions the further it moves away from its training data.
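In log-space, the model of equations (2)-(3) is simply a density-weighted blend of the softmax output and the uniform distribution $\frac{1}{M}$. The following is a minimal numerical sketch (all variable names are ours, not the paper's; the log-densities would come from whatever density estimators one plugs in):

```python
import numpy as np

def calibrated_predictive(logits, log_p_in, log_p_out, lam=1.0):
    """Sketch of equation (2) with the choices of equation (3): blend the
    softmax classifier with the uniform prediction 1/M, weighted by the
    estimated in-/out-densities; lam = p(o)/p(i)."""
    M = logits.shape[-1]
    z = logits - logits.max()                  # numerically stable softmax
    p_in = np.exp(z) / np.exp(z).sum()         # \hat{p}(y|x,i)
    # posterior weight \hat{p}(i|x) as a stable sigmoid of the log-density gap
    w = 0.5 * (1.0 + np.tanh(0.5 * (log_p_in - np.log(lam) - log_p_out)))
    return w * p_in + (1.0 - w) / M            # \hat{p}(y|x)

# near the data (log_p_in >> log_p_out) the output is essentially the softmax,
# far away (log_p_in << log_p_out) it is close to uniform:
p_near = calibrated_predictive(np.array([5.0, 1.0, 0.0]), log_p_in=-10.0, log_p_out=-500.0)
p_far = calibrated_predictive(np.array([5.0, 1.0, 0.0]), log_p_in=-500.0, log_p_out=-10.0)
```

Note that, as discussed below equation (3), the calibration only rescales the confidence; the argmax of `p_near` is the same as that of the plain softmax.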
One of the main goals of our proposal is to fix this behavior of neural networks in a provable way.

Note that with the choice of $\hat{p}(y|x,o)$ and non-zero priors $\hat{p}(i),\hat{p}(o)$, the full model $\hat{p}(y|x)$ can be seen as a calibrated version of $\hat{p}(y|x,i)$, where $\hat{p}(y|x)\approx \hat{p}(y|x,i)$ for inputs with $\hat{p}(x|i)\gg \hat{p}(x|o)$ and $\hat{p}(y|x)\approx \frac{1}{M}$ if $\hat{p}(x|i)\ll \hat{p}(x|o)$. However, only the confidence in the prediction $\hat{p}(y|x)$ is affected; the classifier decision is still made according to $\hat{p}(y|x,i)$, as the calibration does not change the ranking. Thus, even if the OOD data came from the classification task we would like to solve, the trained classifier's performance would be unaffected; only the confidence in the prediction would be dampened.

For the marginal out-distribution $\hat{p}(x|o)$ there are two possible scenarios. In the first case one could concentrate on the worst case, where we assume that $p(x|o)$ is maximally uninformative (maximum entropy). This means that $\hat{p}(x|o)$ is uniform for bounded domains, e.g. for images in $[0,1]^d$ one sets $\hat{p}(x|o) = 1$ for all $x \in [0,1]^d$, or $\hat{p}(x|o)$ is a Gaussian for the domain $\mathbb{R}^d$ (the Gaussian has maximum entropy among all distributions of fixed variance). However, in this work we follow the approach of Hendrycks et al. (2019), who used the 80 Million Tiny Images dataset (Torralba et al., 2008) as a proxy for all possible images. Thus we estimate the density $\hat{p}(x|o)$ using this data.

In order to get guarantees, the employed generative models for $\hat{p}(x|i)$ and $\hat{p}(x|o)$ have to be chosen in a way that allows one to control predictions far away from the training data.
Variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), normalizing flows (Dinh et al., 2016; Kingma & Dhariwal, 2018) and generative adversarial networks (GANs) (Goodfellow et al., 2014) are powerful generative models. However, there is no direct way to control their likelihood far away from the training data. Moreover, it has recently been discovered that VAEs, flows and GANs also suffer from overconfident likelihoods far away from the data they are supposed to model (Nalisnick et al., 2019; Hendrycks et al., 2019), as well as from adversarial samples (Kos et al., 2017).

For $\hat{p}(x|o)$ and $\hat{p}(x|i)$ we use a Gaussian mixture model (GMM), which is less powerful than a VAE but has the advantage that the density estimates can be controlled far away from the training data:

$$
\hat {p} (x \mid i) = \sum_ {k = 1} ^ {K _ {i}} \alpha_ {k} \exp \left(- \frac {d (x , \mu_ {k}) ^ {2}}{2 \sigma_ {k} ^ {2}}\right), \quad \hat {p} (x \mid o) = \sum_ {l = 1} ^ {K _ {o}} \beta_ {l} \exp \left(- \frac {d (x , \nu_ {l}) ^ {2}}{2 \theta_ {l} ^ {2}}\right) \tag {4}
$$

where $K_{i}, K_{o} \in \mathbb{N}$ are the numbers of centroids and $d: \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}$ is the metric

$$
d (x, y) = \left\| C ^ {- \frac {1}{2}} (x - y) \right\| _ {2},
$$

with $C$ a positive definite matrix, and

$$
\alpha_ {k} = \frac {1}{K _ {i}} \frac {1}{(2 \pi \sigma_ {k} ^ {2}) ^ {\frac {d}{2}} (\det C) ^ {\frac {1}{2}}}, \quad \beta_ {l} = \frac {1}{K _ {o}} \frac {1}{(2 \pi \theta_ {l} ^ {2}) ^ {\frac {d}{2}} (\det C) ^ {\frac {1}{2}}}.
$$

We later fix $C$ as a slightly modified covariance matrix of the in-distribution data (see Section 4 for details). Thus one only has to estimate the centroids $\mu_k, \nu_l$ and the variances $\sigma_k^2, \theta_l^2$. The idea of this metric is to use distances adapted to the data distribution. Note that equation 4 is a properly normalized density in $\mathbb{R}^d$.
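Equation 4 can be evaluated stably in log-space. A sketch (our own naming, not the paper's code) using a Cholesky factor of $C$ for the metric and a log-sum-exp over the mixture components:

```python
import numpy as np

def gmm_log_density(x, mus, sigmas, C):
    """Log-density of the generalized GMM in equation (4).
    mus: (K, d) centroids, sigmas: (K,) scales, C: (d, d) positive definite
    matrix defining the metric d(x, y) = ||C^{-1/2}(x - y)||_2."""
    K, dim = mus.shape
    L = np.linalg.cholesky(C)
    # squared metric distances d(x, mu_k)^2 = (x - mu_k)^T C^{-1} (x - mu_k)
    sq = (np.linalg.solve(L, (x - mus).T) ** 2).sum(axis=0)
    _, logdet = np.linalg.slogdet(C)
    # log alpha_k, chosen so that the mixture integrates to one
    log_alpha = -np.log(K) - 0.5 * dim * np.log(2 * np.pi * sigmas**2) - 0.5 * logdet
    t = log_alpha - sq / (2 * sigmas**2)
    m = t.max()
    return m + np.log(np.exp(t - m).sum())       # stable log-sum-exp

# sanity check: a single standard Gaussian in 2-D has density 1/(2*pi) at its mean
lp = gmm_log_density(np.zeros(2), np.zeros((1, 2)), np.ones(1), np.eye(2))
```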
# 2.2 MAXIMUM LIKELIHOOD ESTIMATION

Given models for $\hat{p}(y|x)$ and $\hat{p}(x)$, we effectively have a full generative model and apply maximum likelihood estimation to obtain the underlying classifier $\hat{p}(y|x,i)$ and the parameters of the Gaussian mixture models $\hat{p}(x|i),\hat{p}(x|o)$. The only free parameters left are the priors $\hat{p}(i),\hat{p}(o)$, which we write compactly as the ratio $\lambda = \frac{\hat{p}(o)}{\hat{p}(i)}$. In principle this parameter should be set considering the potential cost of over-confident predictions. In our experiments we simply fix it to $\lambda = 1$.

$$
\begin{array}{l} \underset {(x, y) \sim p (x, y)} {\mathbb {E}} \log (\hat {p} (y, x)) = \underset {(x, y) \sim p (x, y)} {\mathbb {E}} \log (\hat {p} (y | x)) + \log (\hat {p} (x)), \\ = \underset {(x, y) \sim p (x, y)} {\mathbb {E}} \log \left(\frac {\hat {p} (y | x , i) \hat {p} (x | i) \hat {p} (i) + \frac {1}{M} \hat {p} (x | o) \hat {p} (o)}{\hat {p} (x | i) \hat {p} (i) + \hat {p} (x | o) \hat {p} (o)}\right) + \log (\hat {p} (x | i) \hat {p} (i) + \hat {p} (x | o) \hat {p} (o)). \tag {5} \\ \end{array}
$$

In practice, we have to compute empirical expectations from finite training data from the in-distribution $(x_{i},y_{i})_{i = 1}^{n_{i}}$ and out-distribution $(z_{j})_{j = 1}^{n_{o}}$. Labels for the out-distribution could be generated randomly via $p(y|x,o) = \frac{1}{M}$, but we obtain an unbiased estimator with lower variance by averaging over all classes directly, as was done in Lee et al. (2018a); Hein et al. (2019); Hendrycks et al. (2019).
Now we can estimate the classifier $f$ and the mixture model parameters $\mu ,\nu ,\sigma ,\theta$ via

$$
\begin{array}{l} \operatorname * {a r g m a x} _ {f, \mu , \nu , \sigma , \theta} \left\{\frac {1}{n _ {i}} \sum_ {i = 1} ^ {n _ {i}} \log \left(\hat {p} (y _ {i} | x _ {i})\right) + \frac {\lambda}{n _ {o}} \sum_ {j = 1} ^ {n _ {o}} \frac {1}{M} \sum_ {m = 1} ^ {M} \log \left(\hat {p} (m | z _ {j})\right) \right. \\ \left. + \frac {1}{n _ {i}} \sum_ {i = 1} ^ {n _ {i}} \log (\hat {p} (x _ {i})) + \frac {\lambda}{n _ {o}} \sum_ {j = 1} ^ {n _ {o}} \log (\hat {p} (z _ {j})) \right\}, \tag {6} \\ \end{array}
$$

with

$$
\hat {p} (y | x) = \frac {\hat {p} (y | x , i) \hat {p} (x | i) + \frac {\lambda}{M} \hat {p} (x | o)}{\hat {p} (x | i) + \lambda \hat {p} (x | o)} \quad \text{and} \quad \hat {p} (x) = \frac {1}{\lambda + 1} \left(\hat {p} (x | i) + \lambda \hat {p} (x | o)\right). \tag {7}
$$

Due to the bounds derived in Section 3, we denote our method by Certified Certain Uncertainty (CCU). Note that if one uses a standard neural network model with softmax, i.e. $\hat{p}(y|x) = \hat{p}(y|x,i) = \frac{e^{f_y(x)}}{\sum_{m=1}^M e^{f_m(x)}}$, then the first term in equation 6 is the cross-entropy loss for the in-distribution data and the second term is the cross-entropy loss for the out-distribution data with a uniform distribution over the classes. For this choice of $\hat{p}(y|x)$, and neglecting the terms for $\hat{p}(x)$, we recover the approach of Hein et al. (2019); Hendrycks et al. (2019) for training a classifier which outputs uniform confidence predictions on out-distribution data, where $\frac{\hat{p}(o)}{\hat{p}(i)}$ corresponds to their regularization parameter $\lambda$.
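A minimal log-space sketch of the objective in equation 6 (all names are ours; each argument is a per-sample log-quantity that would come from the classifier and the two GMMs):

```python
import numpy as np

def log_phat_y(log_py_i, log_pin, log_pout, lam, M):
    # equation (7): log \hat{p}(y|x), computed stably in log-space
    num = np.logaddexp(log_py_i + log_pin, np.log(lam / M) + log_pout)
    return num - np.logaddexp(log_pin, np.log(lam) + log_pout)

def log_phat_x(log_pin, log_pout, lam):
    # equation (7): log \hat{p}(x)
    return np.logaddexp(log_pin, np.log(lam) + log_pout) - np.log(1 + lam)

def ccu_objective(cls_in, y_in, pin_i, pout_i, cls_out, pin_o, pout_o, lam=1.0):
    """Equation (6), to be maximized.  cls_*: (n, M) log-softmax outputs,
    pin_*/pout_*: (n,) log-densities under the in-/out-GMM."""
    n_i, M = cls_in.shape
    t1 = log_phat_y(cls_in[np.arange(n_i), y_in], pin_i, pout_i, lam, M).mean()
    # out-distribution term: average log \hat{p}(m|z) over all M labels
    t2 = lam * log_phat_y(cls_out, pin_o[:, None], pout_o[:, None], lam, M).mean()
    t3 = log_phat_x(pin_i, pout_i, lam).mean()
    t4 = lam * log_phat_x(pin_o, pout_o, lam).mean()
    return t1 + t2 + t3 + t4
```

As the calibration in Section 2.1 suggests, `log_phat_y` reduces to the log-softmax when the in-density dominates and tends to $\log\frac{1}{M}$ when the out-density dominates.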
The key difference in our approach is that $\hat{p}(y|x) \neq \hat{p}(y|x,i)$: the estimated densities for in- and out-distribution, $\hat{p}(x|i)$ and $\hat{p}(x|o)$, lead to a confidence calibration of $\hat{p}(y|x)$, and in turn the fit of the classifier influences the estimation of $\hat{p}(x|i)$ and $\hat{p}(x|o)$. The major advantage of our model is that we can give guarantees on the confidence of the classifier decision far away from the training data.

# 3 PROVABLE GUARANTEES FOR CLOSE TO UNIFORM PREDICTIONS FAR AWAY FROM THE TRAINING DATA

In this section we provide two types of guarantees on the confidence of a classifier trained according to our model in equation 6. The first says that the classifier has provably low confidence far away from the training data, where an explicit bound on the minimal distance is provided; the second provides an upper bound on the confidence in a ball around a given input point. The latter bound resembles robustness guarantees for adversarial samples (Hein & Andriushchenko, 2017; Wong & Kolter, 2018; Raghunathan et al., 2018; Mirman et al., 2018) and is quite different from the purely empirical evaluation done in OOD detection papers, as we show in Section 4.

We provide our bounds for a more general mixture model which includes our GMM in equation 4 as a special case. To our knowledge, these are the first such bounds for neural networks, and thus this is the first modification of a ReLU neural network such that it provably "knows when it does not know" (Hein et al., 2019), in the sense that far away from the training data the predictions are close to uniform over the classes.

Theorem 3.1.
Let $(x_{i}^{(i)},y_{i}^{(i)})_{i = 1}^{n}$ be the training set of the in-distribution and let the model for the conditional probability be given as

$$
\forall x \in \mathbb {R} ^ {d}, y \in \{1, \dots , M \}, \quad \hat {p} (y | x) = \frac {\hat {p} (y | x , i) \hat {p} (x | i) + \frac {\lambda}{M} \hat {p} (x | o)}{\hat {p} (x | i) + \lambda \hat {p} (x | o)}, \tag {8}
$$

where $\lambda = \frac{\hat{p}(o)}{\hat{p}(i)} > 0$, and let the models for the marginal densities of the in-distribution $\hat{p}(x|i)$ and out-distribution $\hat{p}(x|o)$ be given by the generalized GMMs

$$
\hat {p} (x | i) = \sum_ {k = 1} ^ {K _ {i}} \alpha_ {k} \exp \left(- \frac {d (x , \mu_ {k}) ^ {2}}{2 \sigma_ {k} ^ {2}}\right), \qquad \hat {p} (x | o) = \sum_ {l = 1} ^ {K _ {o}} \beta_ {l} \exp \left(- \frac {d (x , \nu_ {l}) ^ {2}}{2 \theta_ {l} ^ {2}}\right)
$$

with $\alpha_{k},\beta_{l} > 0$ and $\mu_k,\nu_l\in \mathbb{R}^d$ for all $k = 1,\ldots, K_i$, $l = 1,\dots ,K_o$, and $d:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}_+$ a metric. Let $z\in \mathbb{R}^d$ and define $k^{*} = \underset {k = 1,\ldots ,K_{i}}{\arg \min}\frac{d(z,\mu_{k})}{\sigma_{k}}$, $i^{*} = \underset {i = 1,\ldots ,n}{\arg \min}\, d(z,x_i)$, $l^{*} = \underset {l = 1,\ldots ,K_{o}}{\arg \max}\,\beta_{l}\exp \left(-\frac{d(z,\nu_{l})^{2}}{2\theta_{l}^{2}}\right)$ and $\Delta = \frac{\theta_{l^*}^2}{\sigma_{k^*}^2} -1$. For any $\epsilon >0$, if $\min_{l}\theta_{l} > \max_{k}\sigma_{k}$ and

$$
\min _ {i = 1, \dots , n} d (z, x _ {i}) \geq d \left(x _ {i ^ {*}}, \mu_ {k ^ {*}}\right) + d \left(\mu_ {k ^ {*}}, \nu_ {l ^ {*}}\right) \left[ \frac {2}{\Delta} + \frac {1}{\sqrt {\Delta}} \right] + \theta_ {l ^ {*}} \sqrt {\frac {2}{\Delta} \log \left(\frac {M - 1}{\epsilon \lambda} \frac {\sum_ {k} \alpha_ {k}}{\beta_ {l ^ {*}}}\right)}, \tag {9}
$$

then it holds for all $m \in \{1, \ldots, M\}$ that

$$
\hat {p} (m | z) \leq \frac {1}{M} (1 + \epsilon). \tag {10}
$$

In particular, if $\min_i d(z,x_i)\to \infty$, then $\hat{p} (m|z)\rightarrow \frac{1}{M}$.

The proof is given in Appendix A. Theorem 3.1 holds for any multi-class classifier which defines for each input a probability distribution over the labels. Given the parameters of the GMMs, it quantifies at which distance of an input $z$ to the training set the classifier achieves close to uniform confidence. The theorem holds even if we use ReLU classifiers, which in their unmodified form have been shown to produce arbitrarily high confidence far away from the training data (Hein et al., 2019). This is a first step towards neural networks which provably know when they don't know.

In the next corollary, we provide an upper bound on the confidence over a ball around a given data point. This allows us to give "confidence guarantees" for a whole volume and is thus much stronger than the usual pointwise evaluation of OOD methods.

Corollary 3.1. Let $x_0 \in \mathbb{R}^d$ and $R > 0$; then with $\lambda = \frac{\hat{p}(o)}{\hat{p}(i)}$ it holds that

$$
\max _ {d (x, x _ {0}) \leq R} \hat {p} (y | x) \leq \frac {1}{M} \frac {1 + M \frac {b}{\lambda}}{1 + \frac {b}{\lambda}}, \tag {11}
$$

where $b = \frac{\sum_{k=1}^{K_i} \alpha_k \exp\left(-\frac{\max\left\{d(x_0, \mu_k) - R, 0\right\}^2}{2\sigma_k^2}\right)}{\sum_{l=1}^{K_o} \beta_l \exp\left(-\frac{(d(x_0, \nu_l) + R)^2}{2\theta_l^2}\right)}$.

The proof is in Appendix B. We show in Section 4 that even though OOD methods achieve low confidence on noise images, maximizing the confidence in a ball around a noise point (adversarial noise) yields high confidence predictions for these methods, whereas our classifier has provably low confidence, as certified by Corollary 3.1. The failure of existing OOD methods shows that the certification of entire regions is an important contribution of CCU which goes beyond the purely sampling-based evaluation.
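Corollary 3.1 gives a closed-form certificate that can be checked numerically. A sketch (our own naming, using the normalized mixture weights of equation 4):

```python
import numpy as np

def ccu_confidence_bound(x0, R, mus, sigmas, nus, thetas, C, lam=1.0, M=10):
    """Upper bound of Corollary 3.1 on max_{d(x, x0) <= R} \\hat{p}(y|x)."""
    def logsumexp(v):
        m = v.max()
        return m + np.log(np.exp(v - m).sum())

    L = np.linalg.cholesky(C)
    dim = x0.shape[0]
    d_in = np.linalg.norm(np.linalg.solve(L, (x0 - mus).T), axis=0)
    d_out = np.linalg.norm(np.linalg.solve(L, (x0 - nus).T), axis=0)
    _, logdet = np.linalg.slogdet(C)
    log_alpha = -np.log(len(mus)) - 0.5 * dim * np.log(2 * np.pi * sigmas**2) - 0.5 * logdet
    log_beta = -np.log(len(nus)) - 0.5 * dim * np.log(2 * np.pi * thetas**2) - 0.5 * logdet
    # b: upper bound of \hat{p}(x|i) on the ball over a lower bound of \hat{p}(x|o)
    log_num = logsumexp(log_alpha - np.maximum(d_in - R, 0.0)**2 / (2 * sigmas**2))
    log_den = logsumexp(log_beta - (d_out + R)**2 / (2 * thetas**2))
    b = np.exp(log_num - log_den)
    return (1.0 / M) * (1.0 + M * b / lam) / (1.0 + b / lam)

# a ball around an out-centroid, far from all in-centroids, is certified to
# have close to uniform (1/M) confidence:
bound = ccu_confidence_bound(np.zeros(2), 1.0, mus=np.full((1, 2), 100.0),
                             sigmas=np.ones(1), nus=np.zeros((1, 2)),
                             thetas=2.0 * np.ones(1), C=np.eye(2), M=10)
```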
(Figure 2 image grid: for each in-distribution dataset — MNIST, FMNIST, SVHN, CIFAR10, CIFAR100 — a row of uniform-noise seed images followed by the highest-confidence adversarial noise image found for each OOD method.)

Figure 2: Adversarial Noise: We maximize the
confidence of the OOD methods using PGD in the ball around a uniform noise sample (seed images, left), on which CCU is guaranteed by Corollary 3.1 to yield less than $1.1\cdot\frac{1}{M}$ maximal confidence. For each OOD method we report the image with the highest confidence. Maha and MCD use scores where lower means more confident (indicated by $*$). If we do not find a sample with higher confidence/lower score than the median of the in-distribution, we highlight this in boldface. All other OOD methods fail on some dataset; see Table 1 for a quantitative version. ODIN at high temperatures always returns low confidence, so a value of 0.1 is not informative.

# 4 EXPERIMENTS

We evaluate the worst-case performance of various OOD detection methods within regions for which CCU yields guarantees, as well as by standard OOD benchmarks, on MNIST (LeCun et al., 1998), FashionMNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky & Hinton, 2009). We show that all other OOD methods yield undesired high confidence predictions in the certified low confidence regions of CCU and thus would not detect these inputs as out-distribution. For calibrating hyperparameters resp. training, we use for all OOD methods the 80 Million Tiny Images dataset (Torralba et al., 2008) as out-distribution, following Hendrycks et al. (2019), which yields a fair and realistic comparison.

CCU: As the Euclidean metric is known to be a relatively poor distance between images, we instead use the distance $d(x,y) = \left\| C^{-\frac{1}{2}}(x - y)\right\|_2$, where $C$ is generated as follows. We calculate the covariance matrix $C'$ on augmented in-distribution samples (see C.1). Let $(\lambda_i, u_i)_{i=1}^d$ be the eigenvalues/eigenvectors of $C'$. Then we set

$$
C = \sum_ {i = 1} ^ {d} \max \left\{\lambda_ {i}, 1 0 ^ {- 6} \max _ {j} \lambda_ {j} \right\} u _ {i} u _ {i} ^ {T}, \tag {12}
$$

that is, we fix a lower bound on the smallest eigenvalue so that $C$ has full rank. In Hendrycks & Gimpel (2017b) a similar metric has been used for the detection of adversarial images. We choose $K_{i} = K_{o} = 100$ as the number of centroids for the GMMs. We initialize the in-GMM on augmented in-data using the EM algorithm with spherical covariance matrices in the transformed space, as in equation 4. For the out-distribution we use a subset of 20000 points for the initialization. While initially it holds that $\forall k, l : \sigma_{k} < \theta_{l}$, as required in Theorem 3.1, this is not guaranteed during the optimization of equation 6. Thus, we enforce the constraint during training by setting $\theta_{l} \mapsto \max \{\theta_{l}, 2\max_{k}\sigma_{k}\}$ at every gradient step. Since the "classifier" and "density" terms in equation 6 have very different magnitudes, we choose a small learning rate of $10^{-5}$ for the parameters of the GMMs. It is also crucial not to apply weight decay to these parameters. The other hyperparameters are chosen as in the base model below.

Benchmarks: For all OOD methods we use LeNet on MNIST and a ResNet18 (for GAN and MCD we use VGG) otherwise. The hyperparameters used during training can be found in Appendix
| Dataset | Metric | Base | MCD | EDL | DE | GAN | ODIN | Maha | ACET | OE | CCU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | TE | 0.5 | 0.4 | 0.4 | 0.4 | 0.8 | 0.5 | 0.9 | 0.6 | 0.7 | 0.6 |
| | SR | 100.0 | 99.0 | 100.0 | 100.0 | 43.5 | 100.0 | 100.0 | 0.0 | 100.0 | 0.0 |
| | AUC | 1.4 | 8.6 | 0.0 | 7.3 | 54.4 | 0.0 | 11.7 | 100.0 | 35.2 | 100.0 |
| FMNIST | TE | 4.8 | 5.8 | 5.2 | 4.9 | 5.7 | 4.8 | 4.8 | 4.8 | 5.7 | 4.9 |
| | SR | 100.0 | 72.5 | 100.0 | 100.0 | 99.0 | 100.0 | 100.0 | 0.0 | 100.0 | 0.0 |
| | AUC | 0.0 | 47.1 | 0.0 | 0.4 | 39.5 | 0.0 | 18.8 | 100.0 | 35.7 | 100.0 |
| SVHN | TE | 2.9 | 3.9 | 3.1 | 2.4 | 4.2 | 2.9 | 2.9 | 3.2 | 4.1 | 3.0 |
| | SR | 100.0 | 73.5 | 100.0 | 100.0 | 0.0 | 100.0 | 100.0 | 3.0 | 100.0 | 0.0 |
| | AUC | 0.0 | 34.1 | 0.0 | 0.0 | 100.0 | 0.0 | 0.0 | 96.5 | 0.0 | 100.0 |
| CIFAR10 | TE | 5.6 | 11.7 | 7.0 | 6.7 | 11.7 | 5.6 | 5.6 | 6.1 | 4.7 | 5.8 |
| | SR | 100.0 | 90.5 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 0.0 | 100.0 | 0.0 |
| | AUC | 0.0 | 23.9 | 0.0 | 0.0 | 25.3 | 0.0 | 0.0 | 99.9 | 0.0 | 100.0 |
| CIFAR100 | TE | 23.3 | 45.3 | 31.1 | 27.5 | 43.8 | 23.3 | 23.2 | 25.2 | 24.7 | 25.9 |
| | SR | 100.0 | 100.0 | 100.0 | 100.0 | 89.5 | 100.0 | 100.0 | 3.5 | 100.0 | 0.0 |
| | AUC | 0.1 | 17.3 | 0.0 | 0.2 | 15.3 | 0.0 | 0.0 | 95.8 | 2.5 | 100.0 |
Table 1: Worst-case performance of different OOD methods in neighborhoods around uniform noise points certified by CCU. We report the clean test error (TE) on the in-distribution (GAN and MCD use VGG). The success rate (SR) is the fraction of adversarial noise points for which the confidence/score inside the ball is higher than the median of the in-distribution's confidence/score. The AUC quantifies detection of adversarial noise versus in-distribution. All values in %.

C. The AUC (area under the ROC curve) is computed by treating in-distribution versus out-distribution as a two-class problem, using the confidence/score of the method as the criterion. Alternatively one could report the AUPR (area under the precision-recall curve), which we do in Appendix G. MCD: Monte-Carlo Dropout (Gal & Ghahramani, 2016) uses dropout at train and at test time. Since it is not clear where to put the dropout layers in a ResNet, we use VGG instead. We take the softmax outputs from 7 forward passes (Shafaei et al., 2018) and use the mean of the output for prediction and the variance as the score. EDL: Evidential deep learning (Sensoy et al., 2018) replaces the softmax layer of a neural network and introduces a different loss function that encourages better uncertainty estimates. DE: Deep ensembles (Lakshminarayanan et al., 2017b) average the softmax outputs of five models that were adversarially trained via FGSM (Goodfellow et al., 2015) with step size $\epsilon = 0.01$. GAN: The framework of confidence-calibrated classifiers (Lee et al., 2017) relies on training a GAN alongside a classifier such that the GAN's generator is encouraged to generate points close to, but not on, the in-distribution. On these points one then enforces uniform confidence. We used their provided code to train a VGG this way, as we were unable to adapt the method to a ResNet with an acceptable test error (e.g. $\mathrm{TE} < 30\%$ on SVHN).
ODIN: ODIN (Liang et al., 2017) consists of two parts: a temperature $T$ by which one rescales the logits before the softmax layer, $\frac{e^{f_n / T}}{\sum_k e^{f_k / T}}$, and a preprocessing step that applies a single FGSM step (Goodfellow et al., 2015) of length $\epsilon$ before evaluating the input. The two parameters are calibrated on the out-distribution.

Maha: The approach of Lee et al. (2018c) is based on computing a class-conditional Mahalanobis distance in feature space and applying an ODIN-like preprocessing step for each layer. Following Ren et al. (2019) we use a single-layer version of Lee et al. (2018c) on our networks' penultimate layers, because the multi-layer version in the original code does not support gradient-based attacks.

OE: Outlier exposure (Hendrycks et al., 2019) enforces uniform confidence on a large out-distribution. We use their provided code to train a model with our chosen architecture.

ACET: Adversarial confidence enhanced training (Hein et al., 2019) enforces low confidence on a ball around points from an out-distribution by running adversarial attacks during training. In order to make the comparison with OE more meaningful, we use 80M tiny images to draw the seeds rather than smoothed uniform noise as in Hein et al. (2019). We refer to Appendix F for a discussion of the influence of this choice on the results.

Some of the above OOD papers optimize their hyperparameters on a validation set for each out-distribution they test on. However, this leads to different classifiers for each out-distribution dataset
| | | Base | MCD | EDL | DE | GAN | ODIN | Maha | ACET | OE | CCU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | FMNIST | 97.4 | 93.1 | 99.3 | 99.2 | 99.4 | 98.7 | 96.8 | 100.0 | 99.9 | 99.9 |
| | EMNIST | 89.2 | 82.0 | 89.0 | 92.1 | 92.8 | 88.9 | 91.6 | 95.0 | 95.8 | 92.0 |
| | GrCIFAR10 | 99.7 | 94.7 | 99.7 | 100.0 | 99.1 | 99.9 | 98.7 | 100.0 | 100.0 | 100.0 |
| | Noise | 100.0 | 95.2 | 99.9 | 100.0 | 99.3 | 100.0 | 97.2 | 100.0 | 100.0 | 100.0 |
| | Uniform | 95.2 | 87.9 | 99.9 | 97.9 | 99.9 | 98.2 | 100.0 | 100.0 | 100.0 | 100.0 |
| FMNIST | MNIST | 96.7 | 82.7 | 94.5 | 96.7 | 99.9 | 99.0 | 96.7 | 96.4 | 96.3 | 97.8 |
| | EMNIST | 97.5 | 87.3 | 95.6 | 97.1 | 99.9 | 99.3 | 97.5 | 97.6 | 99.3 | 99.5 |
| | GrCIFAR10 | 91.0 | 92.3 | 84.0 | 86.1 | 85.3 | 93.0 | 98.2 | 96.2 | 100.0 | 100.0 |
| | Noise | 97.3 | 94.0 | 95.6 | 97.4 | 98.9 | 98.9 | 98.9 | 97.8 | 100.0 | 100.0 |
| | Uniform | 96.9 | 93.3 | 95.6 | 98.3 | 93.2 | 98.8 | 99.1 | 100.0 | 97.6 | 100.0 |
| SVHN | CIFAR10 | 95.4 | 91.9 | 95.9 | 97.9 | 96.8 | 95.9 | 97.1 | 95.2 | 100.0 | 100.0 |
| | CIFAR100 | 94.5 | 91.4 | 95.6 | 97.6 | 96.1 | 94.8 | 96.7 | 94.8 | 100.0 | 100.0 |
| | LSUN_CR | 95.6 | 92.0 | 95.3 | 97.9 | 99.0 | 96.5 | 97.2 | 97.1 | 100.0 | 100.0 |
| | Imagenet-Noise | 94.7 | 91.8 | 95.7 | 97.7 | 97.8 | 95.1 | 96.8 | 97.3 | 100.0 | 100.0 |
| | Uniform | 96.4 | 93.1 | 97.1 | 98.2 | 96.2 | 82.7 | 98.0 | 95.8 | 97.8 | 97.4 |
| CIFAR10 | SVHN | 95.8 | 81.9 | 92.3 | 90.3 | 83.9 | 96.7 | 91.5 | 93.7 | 98.8 | 98.2 |
| | CIFAR100 | 87.3 | 78.6 | 87.3 | 88.2 | 82.9 | 87.5 | 82.8 | 86.9 | 95.3 | 94.2 |
| | LSUN_CR | 91.9 | 81.3 | 90.8 | 92.0 | 89.9 | 93.3 | 89.2 | 91.2 | 98.6 | 98.2 |
| | Imagenet-Noise | 87.5 | 78.4 | 88.2 | 87.7 | 84.0 | 88.1 | 84.1 | 86.5 | 94.7 | 93.3 |
| | Uniform | 96.5 | 79.9 | 88.9 | 90.3 | 81.8 | 97.6 | 94.4 | 94.8 | 97.3 | 97.0 |
| CIFAR100 | SVHN | 78.8 | 59.2 | 80.4 | 83.2 | 75.9 | 81.3 | 77.5 | 73.9 | 93.5 | 94.2 |
| | CIFAR10 | 78.6 | 58.9 | 73.3 | 76.3 | 69.3 | 79.5 | 59.9 | 77.2 | 81.6 | 80.2 |
| | LSUN_CR | 81.0 | 59.4 | 74.2 | 81.6 | 79.8 | 81.4 | 79.7 | 78.0 | 95.4 | 95.9 |
| | Imagenet-Noise | 80.8 | 59.2 | 76.0 | 78.2 | 73.9 | 81.3 | 70.8 | 79.5 | 83.8 | 81.4 |
| | Uniform | 73.4 | 58.7 | 65.9 | 67.5 | 73.6 | 76.8 | 90.6 | 62.9 | 86.9 | 94.6 |
Table 2: AUC (in- versus out-distribution detection based on confidence/score) in percent for different OOD methods and datasets (higher is better). OE and CCU have the best OOD performance.

which seems unrealistic, as we want good generic OOD performance rather than performance tailored to a particular dataset. Thus we keep the comparison realistic and fair by calibrating the hyperparameters of all methods on a subset of 80M tiny images and then evaluating on the other, unseen distributions.

Certified robustness against adversarial noise: We sample uniform noise images, as they are obviously out-distribution for all tasks, and use Corollary 3.1 to certify the largest ball around each uniform noise sample on which CCU attains at most 1.1 times the uniform confidence $\frac{1}{M}$, that is $1.1\%$ on CIFAR100 and $11\%$ on all other datasets. We describe how to compute the radius of this ball in Appendix D. In principle it could be possible that the certified balls contain training or test images; in Appendix E we show that this is not the case. We construct adversarial noise samples for all OOD methods by maximizing the confidence (resp. minimizing the score) via a PGD attack with 500 steps and 50 random restarts on this ball. Further details of the attack can be found in Appendix C.2. In Table 1 we show the results of running this attack on the different models. We used 200 noise images, and we report the clean test error on the in-distribution, the success rate (SR), i.e. the fraction of adversarial noise points for which the confidence (resp. score) inside the ball is higher (resp. lower) than the median of the in-distribution's confidence (resp. score), and the AUC for the separation of the generated adversarial noise images and the in-distribution based on confidence/score. By construction (see Corollary 3.1), our method provably makes no overconfident predictions, but we nevertheless run the attack on CCU as well.
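The confidence-maximizing attack can be sketched on a plain $l_2$ ball, a simplified stand-in for the certified-ball threat model; `conf_grad`, the fixed step size, and the restart count are illustrative placeholders for the adaptive scheme detailed in Appendix C.2:

```python
import numpy as np

def pgd_on_ball(conf_grad, x0, radius, steps=100, step_size=0.1,
                restarts=5, seed=0):
    """Maximize a confidence function over {x : ||x - x0|| <= radius}.
    conf_grad(x) returns (confidence, gradient of confidence w.r.t. x)."""
    rng = np.random.default_rng(seed)
    best_x, best_c = x0.copy(), conf_grad(x0)[0]
    for _ in range(restarts):
        d = rng.normal(size=x0.shape)
        x = x0 + radius * rng.uniform() * d / np.linalg.norm(d)  # random start
        for _ in range(steps):
            _, g = conf_grad(x)
            x = x + step_size * g / (np.linalg.norm(g) + 1e-12)  # ascent step
            delta = x - x0
            n = np.linalg.norm(delta)
            if n > radius:                                       # project back
                x = x0 + radius * delta / n
        c = conf_grad(x)[0]
        if c > best_c:
            best_x, best_c = x.copy(), c
    return best_x, best_c
```

On a smooth toy objective whose maximizer lies outside the ball, the attack converges to the boundary point closest to that maximizer.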
We note that only CCU performs perfectly on this task for all datasets; all other OOD methods fail on at least one dataset, most of them on all. We also see that ACET is very robust, which may be expected as it performs a kind of adversarial training for OOD detection. Nevertheless, even though they are very rare, high-confidence adversarial noise images for ACET can be found on SVHN, CIFAR10 and CIFAR100, and ACET has no guarantees. We illustrate the generated adversarial noise images for all methods in Figure 2.

OOD performance: For each dataset and method we report the AUC for the binary classification problem of discriminating in- and out-distribution based on confidence resp. score. The results are shown in Table 2, which also lists the datasets we use for OOD detection. LSUN_CR refers to only the classroom class of LSUN, and Imagenet- is a subset of 10000 resized Imagenet validation images that have no overlap with CIFAR10/CIFAR100 classes. The noise dataset was obtained as in Hein et al. (2019) by first shuffling the pixels of the test images of the in-distribution and then smoothing them with a Gaussian filter of uniformly random width, followed by a rescaling so that the images have full range. GrCIFAR10 refers to CIFAR10 images converted to grayscale and resized to $28\times 28$, and Uniform denotes images sampled uniformly at random from $[0,1]^d$. We see that OE and CCU have the best OOD performance. MCD is worse than the base model, which confirms the finding of Leibig et al. (2017) that MCD is not useful for OOD detection. DE outperforms EDL but is not much better than the baseline on CIFAR10 and CIFAR100. The performance of Maha is worse than reported in Lee et al. (2018c), which can have two reasons: we use their version that takes scores only from the last layer, and we do not calibrate hyperparameters for each test set separately but just once on the Tiny Images dataset.
Especially on CIFAR10 we found that the results depend strongly on the step size. The results of ACET, GAN and ODIN are mixed, but all three clearly outperform the baseline. Comparing Table 1 and Table 2, we see that most models perform well when evaluated on uniform noise but fail when one searches for the worst case in a small neighborhood around the noise point. We therefore think that such a worst-case analysis should become standard in OOD evaluation.

# 5 CONCLUSION

In Hein et al. (2019) it has recently been shown that ReLU networks produce arbitrarily high-confidence predictions far away from the training data, which can only be resolved by a modification of the network architecture. With CCU we present such a modification, which explicitly integrates a generative model, and we prove that the resulting neural network produces close to uniform predictions far away from the training data. Moreover, CCU is the only OOD method that can guarantee low-confidence predictions over a whole volume rather than just pointwise, and we show that all other OOD methods fail in this worst-case setting. CCU achieves this without loss in test accuracy or OOD performance. In the future it would be interesting to use more powerful generative models whose behavior far away from the training data can also be guaranteed; this is currently not the case for VAEs and GANs (Nalisnick et al., 2019; Hendrycks et al., 2019).

# ACKNOWLEDGMENTS

The author acknowledges support from the BMBF through the Tübingen AI Center (FKZ: 01IS18039A), by the DFG TRR 248, project number 389792660, and by the DFG Excellence Cluster "Machine Learning - New Perspectives for Science", EXC 2064/1, project number 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Alexander Meinke.

# REFERENCES

H. H. Bauschke and J. M. Borwein. On projection algorithms for solving convex feasibility problems. SIAM Review, 38:367-426, 1996.
Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv:1902.02918, 2019.

T. DeVries and G. W. Taylor. Learning confidence for out-of-distribution detection in neural networks. preprint, arXiv:1802.04865v1, 2018.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv:1605.08803, 2016.

Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.

I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NeurIPS, 2014.

C. Guo, G. Pleiss, Y. Sun, and K. Weinberger. On calibration of modern neural networks. In ICML, 2017.

M. Hein and M. Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In NeurIPS, 2017.

M. Hein, M. Andriushchenko, and J. Bitterwolf. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In CVPR, 2019.

D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017a.

D. Hendrycks, M. Mazeika, and T. Dietterich. Deep anomaly detection with outlier exposure. In ICLR, 2019.

Dan Hendrycks and Kevin Gimpel. Early methods for detecting adversarial images. In ICLR Workshop Track Proceedings, 2017b.

Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. 2017c.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.

Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible $1 \times 1$ convolutions. In NeurIPS, pp. 10215-10224, 2018.

J. Kos, I. Fischer, and D. Song.
Adversarial examples for generative models. In ICLR Workshop, 2017.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.

B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017a.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, pp. 6402-6413, 2017b.

Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

K. Lee, H. Lee, K. Lee, and J. Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR, 2018a.

K. Lee, H. Lee, K. Lee, and J. Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS, 2018b.

Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. arXiv:1711.09325, 2017.

Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS, pp. 7167-7177, 2018c.

C. Leibig, V. Allken, M. S. Ayhan, P. Berens, and S. Wahl. Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports, 7, 2017.

S. Liang, Y. Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR, 2018.

Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv:1706.02690, 2017.

Si Liu, Risheek Garrepalli, Thomas Dietterich, Alan Fern, and Dan Hendrycks. Open category detection with PAC guarantees. In PMLR, 2018.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu.
Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.

M. Mirman, T. Gehr, and M. Vechev. Differentiable abstract interpretation for provably robust neural networks. In ICML, 2018.

E. Nalisnick, A. Matsukawa, Y. Whye Teh, D. Gorur, and B. Lakshminarayanan. Do deep generative models know what they don't know? In ICLR, 2019.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In AISTATS, 2011.

A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.

A. Raghunathan, J. Steinhardt, and P. Liang. Certified defenses against adversarial examples. In ICLR, 2018.

Jie Ren, Peter J Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A DePristo, Joshua V Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In NeurIPS, 2019.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

L. Schott, J. Rauber, M. Bethge, and W. Brendel. Towards the first adversarially robust neural network model on MNIST. preprint, arXiv:1805.09190v3, 2018.

Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, and Prateek Mittal. Better the devil you know: An analysis of evasion attacks using out-of-distribution adversarial examples. preprint, arXiv:1905.01726, 2019.

Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. In NeurIPS, pp. 3179-3189, 2018.

Alireza Shafaei, Mark Schmidt, and James J Little. Does your model know the digit 6 is not a cat? A less biased evaluation of "outlier" detectors. arXiv:1809.04729, 2018.

Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008.

W. Wang, A. Wang, A. Tamar, X. Chen, and P. Abbeel. Safer classification by synthesis. preprint, arXiv:1711.08534v2, 2018.

E. Wong and J. Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In ICML, 2018.

H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. preprint, arXiv:1708.07747, 2017.

# A APPENDIX - PROOF OF THEOREM 3.1

Theorem 3.1. Let $(x_{i},y_{i})_{i = 1}^{n}$ be the training set of the training distribution. We define the model for the conditional probability over the classes $y\in \{1,\dots ,M\}$ given $x$ as

$$
\hat{p}(y|x) = \frac{\hat{p}(y|x,i)\,\hat{p}(x|i) + \frac{\lambda}{M}\,\hat{p}(x|o)}{\hat{p}(x|i) + \lambda\,\hat{p}(x|o)}, \tag{13}
$$

where $\lambda = \frac{\hat{p}(o)}{\hat{p}(i)} > 0$ and $M > 1$. Further, let the models for the marginal densities of the in-distribution $\hat{p}(x|i)$ and the out-distribution $\hat{p}(x|o)$ be given by the generalized GMMs

$$
\hat{p}(x|i) = \sum_{k = 1}^{K_{i}} \alpha_{k} \exp\left(-\frac{d(x,\mu_{k})^{2}}{2\sigma_{k}^{2}}\right), \quad \hat{p}(x|o) = \sum_{l = 1}^{K_{o}} \beta_{l} \exp\left(-\frac{d(x,\nu_{l})^{2}}{2\theta_{l}^{2}}\right)
$$

with $\alpha_{k},\beta_{l} > 0$, $\mu_k,\nu_l\in \mathbb{R}^d$ for all $k = 1,\ldots,K_i$, $l = 1,\ldots ,K_o$, and $d:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}_+$ a metric.
Let $z\in \mathbb{R}^{d}$ and define $k^{*} = \underset{k = 1,\dots,K_{i}}{\arg\min}\,\frac{d(z,\mu_{k})}{\sigma_{k}}$, $i^{*} = \underset{i = 1,\dots,n}{\arg\min}\, d(z,x_{i})$, $l^{*} = \underset{l = 1,\dots,K_{o}}{\arg\max}\,\beta_{l}\exp\left(-\frac{d(z,\nu_{l})^{2}}{2\theta_{l}^{2}}\right)$ and $\Delta = \frac{\theta_{l^{*}}^{2}}{\sigma_{k^{*}}^{2}} - 1$. For any $\epsilon > 0$, if $\min_l\theta_l > \max_k\sigma_k$ and

$$
\min_{i = 1,\dots,n} d(z,x_{i}) \geq d(x_{i^{*}},\mu_{k^{*}}) + d(\mu_{k^{*}},\nu_{l^{*}})\left[\frac{2}{\Delta} + \frac{1}{\sqrt{\Delta}}\right] + \theta_{l^{*}}\sqrt{\frac{2}{\Delta}\log\left(\frac{M - 1}{\epsilon\lambda}\frac{\sum_{k}\alpha_{k}}{\beta_{l^{*}}}\right)}, \tag{14}
$$

then it holds for all $m\in \{1,\ldots,M\}$ that

$$
\hat{p}(m|z) \leq \frac{1}{M}(1 + \epsilon). \tag{15}
$$

In particular, if $\min_i d(z,x_i)\to \infty$, then $\hat{p}(m|z)\to \frac{1}{M}$.

Proof. The proof essentially hinges on upper bounding $\frac{\hat{p}(z|i)}{\hat{p}(z|o)}$ using the specific properties of the Gaussian mixture model. Using $\hat{p}(y|x,i) \leq 1$, we note that

$$
\hat{p}(y|x) = \frac{\hat{p}(y|x,i)\,\hat{p}(x|i) + \frac{\lambda}{M}\,\hat{p}(x|o)}{\hat{p}(x|i) + \lambda\,\hat{p}(x|o)} \leq \frac{1}{M}\,\frac{1 + \frac{M}{\lambda}\frac{\hat{p}(x|i)}{\hat{p}(x|o)}}{1 + \frac{1}{\lambda}\frac{\hat{p}(x|i)}{\hat{p}(x|o)}} \leq \frac{1}{M}\left(1 + \frac{M - 1}{\lambda}\frac{\hat{p}(x|i)}{\hat{p}(x|o)}\right).
$$

For the last step note that the function $g(\xi) = \frac{1 + M\xi}{1 + \xi}$ satisfies

$$
\frac{\partial g}{\partial \xi} = \frac{M - 1}{(1 + \xi)^{2}} \quad \text{and} \quad \frac{\partial^{2} g}{\partial \xi^{2}} = -2\,\frac{M - 1}{(1 + \xi)^{3}}, \tag{16}
$$

so $g$ is monotonically increasing and, as the second derivative is negative for $\xi \geq 0$, concave for $\xi \geq 0$; thus

$$
\frac{1 + M\xi}{1 + \xi} = g(\xi) \leq g(0) + \left.\frac{\partial g}{\partial \xi}\right|_{\xi = 0}(\xi - 0) = 1 + (M - 1)\xi. \tag{17}
$$

In order to achieve the required result we need to show that $\frac{M - 1}{\lambda}\frac{\hat{p}(x|i)}{\hat{p}(x|o)} \leq \epsilon$ for $x$ sufficiently far away from the training data. We note that

$$
\frac{\hat{p}(x|i)}{\hat{p}(x|o)} = \frac{\sum_{k}\alpha_{k}\exp\left(-\frac{d(x,\mu_{k})^{2}}{2\sigma_{k}^{2}}\right)}{\sum_{l}\beta_{l}\exp\left(-\frac{d(x,\nu_{l})^{2}}{2\theta_{l}^{2}}\right)} \leq \frac{\max_{k}\exp\left(-\frac{d(x,\mu_{k})^{2}}{2\sigma_{k}^{2}}\right)\sum_{k}\alpha_{k}}{\max_{l}\beta_{l}\exp\left(-\frac{d(x,\nu_{l})^{2}}{2\theta_{l}^{2}}\right)} = \frac{\sum_{k}\alpha_{k}}{\beta_{l^{*}}}\exp\left(-\frac{d(x,\mu_{k^{*}})^{2}}{2\sigma_{k^{*}}^{2}} + \frac{d(x,\nu_{l^{*}})^{2}}{2\theta_{l^{*}}^{2}}\right)
$$

where $k^{*} = \arg\min_{k}\frac{d(x,\mu_{k})^{2}}{2\sigma_{k}^{2}}$ and $l^{*} = \arg\max_{l}\beta_{l}\exp\left(-\frac{d(x,\nu_{l})^{2}}{2\theta_{l}^{2}}\right)$.
Using the triangle inequality, $d(x,\nu_{l^{*}})\leq d(x,\mu_{k^{*}}) + d(\mu_{k^{*}},\nu_{l^{*}})$, we get the desired condition as

$$
\frac{\sum_{k}\alpha_{k}}{\beta_{l^{*}}}\exp\left(-d(x,\mu_{k^{*}})^{2}\left(\frac{1}{2\sigma_{k^{*}}^{2}} - \frac{1}{2\theta_{l^{*}}^{2}}\right) + \frac{d(\mu_{k^{*}},\nu_{l^{*}})\, d(x,\mu_{k^{*}})}{\theta_{l^{*}}^{2}} + \frac{d(\mu_{k^{*}},\nu_{l^{*}})^{2}}{2\theta_{l^{*}}^{2}}\right) \leq \frac{\epsilon\lambda}{M - 1}.
$$

Thus, with $a = \frac{1}{2\sigma_{k^{*}}^{2}} - \frac{1}{2\theta_{l^{*}}^{2}}$, $b = \frac{d(\mu_{k^{*}},\nu_{l^{*}})}{\theta_{l^{*}}^{2}}$, $c = \frac{d(\mu_{k^{*}},\nu_{l^{*}})^{2}}{2\theta_{l^{*}}^{2}}$ and $d = \log\left(\frac{\epsilon\lambda}{M - 1}\frac{\beta_{l^{*}}}{\sum_{k}\alpha_{k}}\right)$, we get the quadratic inequality

$$
-d(x,\mu_{k^{*}})^{2}\, a + d(x,\mu_{k^{*}})\, b + c \leq d,
$$

where $d < 0$ for sufficiently small $\epsilon$. Its solution is

$$
d(x,\mu_{k^{*}}) \geq \frac{b}{2a} + \sqrt{\max\left\{0,\, \frac{c - d}{a} + \frac{b^{2}}{4a^{2}}\right\}}.
$$

It holds, using $\sqrt{a + b}\leq \sqrt{a} + \sqrt{b}$ for $a,b > 0$,

$$
\frac{b}{2a} + \sqrt{\max\left\{0,\, \frac{c - d}{a} + \frac{b^{2}}{4a^{2}}\right\}} \leq \frac{b}{a} + \sqrt{\frac{c}{a}} + \sqrt{\frac{-d}{a}}.
$$

One can simplify

$$
\frac{b}{a} = 2\,\frac{\sigma_{k^{*}}^{2}\theta_{l^{*}}^{2}}{\theta_{l^{*}}^{2} - \sigma_{k^{*}}^{2}}\,\frac{d(\mu_{k^{*}},\nu_{l^{*}})}{\theta_{l^{*}}^{2}} = 2\,\frac{\sigma_{k^{*}}^{2}\, d(\mu_{k^{*}},\nu_{l^{*}})}{\theta_{l^{*}}^{2} - \sigma_{k^{*}}^{2}} = \frac{2\, d(\mu_{k^{*}},\nu_{l^{*}})}{\frac{\theta_{l^{*}}^{2}}{\sigma_{k^{*}}^{2}} - 1}
$$

and

$$
\frac{c}{a} = 2\,\frac{\sigma_{k^{*}}^{2}\theta_{l^{*}}^{2}}{\theta_{l^{*}}^{2} - \sigma_{k^{*}}^{2}}\,\frac{d(\mu_{k^{*}},\nu_{l^{*}})^{2}}{2\theta_{l^{*}}^{2}} = \frac{\sigma_{k^{*}}^{2}\, d(\mu_{k^{*}},\nu_{l^{*}})^{2}}{\theta_{l^{*}}^{2} - \sigma_{k^{*}}^{2}} = \frac{d(\mu_{k^{*}},\nu_{l^{*}})^{2}}{\frac{\theta_{l^{*}}^{2}}{\sigma_{k^{*}}^{2}} - 1}.
$$

Noting that $d(x,\mu_{k^{*}}) \geq |d(x,x_{i^{*}}) - d(x_{i^{*}},\mu_{k^{*}})|$, we get that

$$
d(x,x_{i^{*}}) \geq d(x_{i^{*}},\mu_{k^{*}}) + \frac{b}{a} + \sqrt{\frac{c}{a}} + \sqrt{\frac{-d}{a}}
$$

implies $\frac{M - 1}{\lambda}\frac{\hat{p}(x|i)}{\hat{p}(x|o)} \leq \epsilon$. The last statement follows directly by noting that by assumption $a > 0$ (independently of the choice of $l^{*}$ and $k^{*}$) and $b$, $c$, $d(x_{i^{*}},\mu_{k^{*}})$ are bounded, as $K_i$, $K_o$, $n$ are finite. With $\Delta = \frac{\theta_{l^{*}}^{2}}{\sigma_{k^{*}}^{2}} - 1$ we can rewrite the required condition as

$$
d(x,x_{i^{*}}) \geq d(x_{i^{*}},\mu_{k^{*}}) + d(\mu_{k^{*}},\nu_{l^{*}})\left[\frac{2}{\Delta} + \frac{1}{\sqrt{\Delta}}\right] + \theta_{l^{*}}\sqrt{\frac{2}{\Delta}\log\left(\frac{M - 1}{\epsilon\lambda}\frac{\sum_{k}\alpha_{k}}{\beta_{l^{*}}}\right)}.
$$

# B APPENDIX - PROOF OF COROLLARY 3.1

Corollary 3.1.
Let $x_{0}\in \mathbb{R}^{d}$ and $R > 0$; then with $\lambda = \frac{\hat{p}(o)}{\hat{p}(i)}$ it holds

$$
\max_{d(x,x_{0})\leq R}\hat{p}(y|x) \leq \frac{1}{M}\,\frac{1 + M\frac{b}{\lambda}}{1 + \frac{b}{\lambda}}, \tag{18}
$$

where

$$
b = \frac{\sum_{k = 1}^{K_{i}}\alpha_{k}\exp\left(-\frac{\max\{d(x_{0},\mu_{k}) - R,\,0\}^{2}}{2\sigma_{k}^{2}}\right)}{\sum_{l = 1}^{K_{o}}\beta_{l}\exp\left(-\frac{(d(x_{0},\nu_{l}) + R)^{2}}{2\theta_{l}^{2}}\right)}.
$$

Proof. From the previous section we already know that $\hat{p}(y|x) \leq \frac{1}{M}\frac{1 + M\frac{b}{\lambda}}{1 + \frac{b}{\lambda}}$ as long as $\frac{\hat{p}(x|i)}{\hat{p}(x|o)} \leq b$, since $g$ is monotonically increasing. We can now separately bound the numerator and the denominator within a ball of radius $R$ around $x_0$.

For the numerator we have

$$
\begin{aligned}
\max_{d(x,x_{0})\leq R}\hat{p}(x|i) &\leq \sum_{k = 1}^{K_{i}}\alpha_{k}\max_{d(x,x_{0})\leq R} e^{-\frac{d(x,\mu_{k})^{2}}{2\sigma_{k}^{2}}} \quad (19) \\
&\leq \sum_{k = 1}^{K_{i}}\alpha_{k}\exp\left(-\frac{\min_{d(x,x_{0})\leq R} d(x,\mu_{k})^{2}}{2\sigma_{k}^{2}}\right) \\
&\leq \sum_{k = 1}^{K_{i}}\alpha_{k}\exp\left(-\frac{\max\{d(x_{0},\mu_{k}) - R,\,0\}^{2}}{2\sigma_{k}^{2}}\right), \quad (20)
\end{aligned}
$$

where we have lower bounded $\min_{d(x,x_{0})\leq R} d(x,\mu_{k})$ via the reverse triangle inequality:

$$
\begin{aligned}
\min_{d(x,x_{0})\leq R} d(x,\mu_{k}) &\geq \min_{d(x,x_{0})\leq R} |d(x_{0},\mu_{k}) - d(x,x_{0})| \\
&\geq \max\left\{\min_{d(x,x_{0})\leq R}\left(d(x_{0},\mu_{k}) - d(x,x_{0})\right),\, 0\right\} \\
&\geq \max\{d(x_{0},\mu_{k}) - R,\, 0\}. \quad (21)
\end{aligned}
$$

The denominator can similarly be bounded via

$$
\begin{aligned}
\min_{d(x,x_{0})\leq R}\hat{p}(x|o) &\geq \sum_{l = 1}^{K_{o}}\beta_{l}\min_{d(x,x_{0})\leq R} e^{-\frac{d(x,\nu_{l})^{2}}{2\theta_{l}^{2}}} \quad (22) \\
&\geq \sum_{l = 1}^{K_{o}}\beta_{l}\exp\left(-\frac{\max_{d(x,x_{0})\leq R} d(x,\nu_{l})^{2}}{2\theta_{l}^{2}}\right) \\
&\geq \sum_{l = 1}^{K_{o}}\beta_{l}\exp\left(-\frac{(d(x_{0},\nu_{l}) + R)^{2}}{2\theta_{l}^{2}}\right). \quad (23)
\end{aligned}
$$

With both of these bounds in place the conclusion immediately follows.

# C APPENDIX - EXPERIMENTAL DETAILS

Unless specified otherwise we use ADAM on MNIST with a learning rate of $10^{-3}$ and SGD with learning rate 0.1 on the other datasets. The learning rate for the GMM is always set to $10^{-5}$. We decrease all learning rates by a factor of 10 after 50, 75 and 90 epochs. Our batch size is 128, the total number of epochs is 100, and weight decay is set to $5\cdot 10^{-4}$.

When training ACET, OE and CCU with 80 million tiny images we pick equal batches of in- and out-distribution data (corresponding to $p(i) = p(o)$) and concatenate them into a batch of size 256. Note that during the 100 epochs only a fraction of the 80 million tiny images is seen, so there is no risk of over-fitting.

# C.1 DATA AUGMENTATION

Our data augmentation scheme uses random crops with a padding of 2 pixels on MNIST and FMNIST. On SVHN, CIFAR10 and CIFAR100 the padding width is 4 pixels. For SVHN we fill the padding with the value at the boundary, and for CIFAR we apply reflection at the boundary pixels. On top of this we include random horizontal flips on CIFAR.
For MNIST and FMNIST we generate 60000 such samples, and for SVHN and CIFAR 50000 samples, by drawing from the clean dataset without replacement. This augmented data is used to calculate the covariance matrix from equation 12. During the actual training we use the same data augmentation scheme in a standard fashion.

# C.2 ATTACK DETAILS

We begin with a step size of 3, and for each of the 50 restarts we randomly initialize at some point in the ellipsoid. Whenever a gradient step successfully decreases the loss we increase the step size by

![](images/10d1800f42062d62d7d6ee880ee14578517840cf758d4a22a28e63e96ac8c3c1.jpg)
Figure 3: Histograms of bounds: certified radius in the transformed space for different datasets.

![](images/ce69483440c59c17c4069b208cc1cdf11c09cb7bb7d6b3b998c46aa721cb74ac.jpg)

![](images/fa58664e020cd35c9711fa778dae0e900bc963aa3576613eb05fbb01f91382bc.jpg)

![](images/ee2ae8b3ba492056a40bf59c170a9d2e12fc163e0a866d5b7c6532d45924e4e3.jpg)

![](images/2259145e5b520b57d4e03a39b163c56355668fa9590ba1adfe34c560f1bd9fce.jpg)

a factor of 1.1. Whenever the loss increases instead, we use backtracking and decrease the step size by a factor of 2. We apply normal PGD using the $l_{2}$-norm in the transformed space to ensure that we stay on the ellipsoid, and after each gradient step we rotate back into the original space to project onto the box $[0,1]^d$. The result is not guaranteed to be on the ellipsoid, so after the 500 steps we use the alternating projection algorithm (Bauschke & Borwein, 1996) for 10 steps, which is guaranteed to converge to a point in the intersection of the ellipsoid and the box because both of these sets are convex.

# D APPENDIX - FINDING THE CERTIFIABLE RADIUS

Since Corollary 3.1 does not explicitly give a radius, one has to numerically invert the bound.
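This numerical inversion can be sketched as a bisection, assuming the distances from $x_0$ to all in-/out-centroids and the GMM parameters of Corollary 3.1 are given as arrays; the function and argument names are ours, and the evaluation is done in log space for numerical stability. The search is valid because the bound $b(R)$ of Corollary 3.1 is monotonically increasing in $R$:

```python
import numpy as np

def _logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def certified_radius(d_mu, sigma, alpha, d_nu, theta, beta, lam, M, nu,
                     r_max=100.0, iters=200):
    """Largest R with b(R) <= (nu - 1) / (M - nu) * lam, found by bisection.
    d_mu/d_nu hold the distances d(x0, mu_k) and d(x0, nu_l).
    Returns None if even R = 0 violates the bound."""
    def log_b(R):
        log_num = _logsumexp(np.log(alpha)
                             - np.maximum(d_mu - R, 0.0) ** 2 / (2.0 * sigma ** 2))
        log_den = _logsumexp(np.log(beta)
                             - (d_nu + R) ** 2 / (2.0 * theta ** 2))
        return log_num - log_den
    log_target = np.log((nu - 1.0) / (M - nu) * lam)
    if log_b(0.0) > log_target:
        return None            # no certifiable radius at all
    if log_b(r_max) <= log_target:
        return r_max           # bound holds on the whole search range
    lo, hi = 0.0, r_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if log_b(mid) <= log_target:
            lo = mid           # bound still holds, move outward
        else:
            hi = mid
    return lo
```

For a single in-component far from $x_0$ and a single nearby out-component with larger bandwidth, the returned radius is the point where the density ratio first reaches the target.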
The bound

$$
b(R) = \frac{\sum_{k = 1}^{K_{i}}\alpha_{k}\exp\left(-\frac{\max\{d(x_{0},\mu_{k}) - R,\,0\}^{2}}{2\sigma_{k}^{2}}\right)}{\sum_{l = 1}^{K_{o}}\beta_{l}\exp\left(-\frac{(d(x_{0},\nu_{l}) + R)^{2}}{2\theta_{l}^{2}}\right)} \tag{24}
$$

is monotonically increasing in $R$. Thus, for a given sample $x_0$ one can fix a desired bound $\max_{d(x,x_0)\leq R}\hat{p}(y|x)\leq \frac{\nu}{M}$, where $\nu \in (1,M)$, and then find the unique solution of

$$
b(R) = \frac{\nu - 1}{M - \nu}\,\lambda \tag{25}
$$

for $R$ via bisection. This radius $\hat{R}$ then represents the maximal radius that one can certify using Corollary 3.1. The presumption is, of course, that for $R = 0$ the bound is already sufficiently low, i.e. that a solution exists. In our experiments on uniform noise we did not encounter a single counterexample to this assumption. We show the radii for the different datasets in Figure 3.

# E APPENDIX - ANALYSIS OF THE CERTIFIED BALLS AROUND UNIFORM NOISE IMAGES

As one can observe in Figure 2, the images which maximize the confidence in the certified ball around a uniform noise image are sometimes quite far away from the original noise image. As CCU certifies low confidence over the whole ball (the maximal confidence is less than $1.1 \times \frac{1}{M}$, so the predicted probability distribution over the classes is very close to the uniform distribution), it is a natural question what these balls look like and what kind of images they contain. In particular, it is in general not desired that the certified balls contain images from the training or test set. For each dataset we certified balls around 200 uniform noise images, and for each of the certified balls we checked whether it contains training or test images of the corresponding dataset.
We found that even though the certified balls are large, not a single training or test image was contained in any of them. This justifies the use of our proposed threat model.

A different problem could be that our threshold of $\frac{1.1}{M}$ for the certification is too high, and that many predictions on the test set have confidence below this threshold. For this purpose we report in Table 3 the smallest predicted confidence of CCU on the test set $T$, that is

$$
\min_{x\in T}\max_{y\in \{1,\ldots ,M\}}\hat{p}(y|x),
$$
| | min $\hat{p}(y\vert x)$ | # $< 1.1/M$ | % $< 1.1/M$ |
| --- | --- | --- | --- |
| MNIST | 33.08 | 0 | 0 |
| FMNIST | 28.77 | 0 | 0 |
| SVHN | 10.02 | 20 | 0.08 |
| CIFAR10 | 10.01 | 529 | 5.29 |
| CIFAR100 | 1.03 | 130 | 1.30 |
Table 3: Lowest confidence that CCU attains on the test set (in percent), as well as the number and percentage of test points on which the confidence is lower than our imposed bound of $\frac{1.1}{M}$.

for each dataset, together with the total number of test samples where the confidence is below $\frac{1.1}{M}$. While this never happens for MNIST and FMNIST, and is negligible for SVHN (less than $0.1\%$ of the test set), it happens for $5.3\%$ resp. $1.3\%$ of the test cases on CIFAR10 and CIFAR100.

In theory, this could impair our AUC value for the detection of adversarial noise. In practice, however, our bound on the confidence is quite conservative: it is only tight in very specific configurations of the centroids of the Gaussian mixture model, which are unlikely to occur for any practical dataset, meaning that the actual maximal confidence in the certified region is typically significantly lower. In fact, the AUC values of CCU are always $100\%$, which means that for all 200 certified balls the maximal confidence of CCU in any of these balls (found by our PGD attack algorithm) is lower than the minimal confidence of all predictions on the test set as reported in Table 3. If, on the other hand, we assume the worst case, namely that the upper bound on the maximal confidence is attained in all 200 certified balls, then the (certified) AUC value would be $99.92\%$ for SVHN, $94.71\%$ for CIFAR10, and $98.70\%$ for CIFAR100. Note that this theoretical lower bound on our performance is still better than all other models' empirical performance on this task, as reported in Table 1, on both CIFAR10 and CIFAR100, and only marginally below the perfect AUC of GAN on SVHN.

# F APPENDIX - PERFORMANCE OF ACET WHEN TRAINED ON ADVERSARIAL UNIFORM NOISE

Similar to our CCU, ACET (Hein et al., 2019) requires choosing a model for the out-distribution in order to generate its "adversarial noise" during training.
We trained the ACET model with the same out-distribution model as all other models, namely the tiny image dataset as suggested in Hendrycks et al. (2019), with a PGD attack that starts at the original point and takes 40 FGSM steps in order to maximize the maximal confidence over the classes. We use backtracking and halve the step size whenever the loss does not increase. However, the authors of Hein et al. (2019) used smoothed uniform noise and an $l_{\infty}$-threat model during training. Since our worst case analysis for OOD is based on attacking uniform noise images, this suggests that training ACET on uniform noise should improve its performance in the worst case analysis. We report below the results of ACET2 (the original model in the paper, trained using tiny images, is called ACET), which attacks uniform noise images during training with an $l_{\infty}$-threat model with $\epsilon = 0.3$ as suggested in Hein et al. (2019). We report the standard OOD performance in Table 4 and the worst case analysis of adversarial noise in Table 5. While it is not surprising that ACET outperforms ACET2 on the standard OOD detection task in Table 4, as it has seen more realistic "noise" images during training, the worse performance of ACET2 in the worst case analysis in Table 5 is at first sight counter-intuitive. However, note that the threat model of the attacks in our worst case analysis is the Mahalanobis-type $l_{2}$ metric, see (12), while ACET2 uses an $l_{\infty}$-attack model with $\epsilon = 0.3$ during training. As the balls of the Mahalanobis-type $l_{2}$ metric are quite large, there is little overlap between the two sets, which explains why ACET2 fails here. In summary, by using tiny images as out-distribution during training, ACET improves in terms of OOD detection performance over ACET2, which is similar to the version suggested in Hein et al. (2019).
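The attack described above (FGSM-style steps maximizing the maximal class confidence, with backtracking that halves the step size when the objective does not increase) can be sketched as follows. This is an illustrative re-implementation on a toy softmax-linear "model", not the authors' code; `max_conf_attack` and all parameter names are our own.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def max_conf_attack(x, W, b, eps=0.3, steps=40, step_size=0.05):
    """Maximize the maximal class confidence of a linear-softmax model
    within an l_inf ball of radius eps, FGSM-style with backtracking."""
    x0, x_adv = x.copy(), x.copy()
    best = softmax(W @ x + b).max()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        c = int(p.argmax())
        # gradient of log p_c w.r.t. x for a softmax-linear model
        grad = W[c] - p @ W
        cand = x_adv + step_size * np.sign(grad)   # FGSM step
        cand = np.clip(cand, x0 - eps, x0 + eps)   # stay in the threat model
        new = softmax(W @ cand + b).max()
        if new > best:                             # accept the step
            x_adv, best = cand, new
        else:                                      # backtrack: halve the step
            step_size /= 2.0
    return x_adv, best
```

The same skeleton applies to a deep network by replacing the analytic gradient with autodiff.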
# G APPENDIX - PRECISION AND RECALL

In addition to the AUC presented in Table 2, we follow Hendrycks & Gimpel (2017c) and report the area under the precision/recall curve (AUPR). Precision at a specific threshold is defined as the
| MNIST | ACET | ACET2 |
| --- | --- | --- |
| FMNIST | 100.0 | 99.8 |
| EMNIST | 95.0 | 93.5 |
| GrCIFAR10 | 100.0 | 100.0 |
| Noise | 100.0 | 100.0 |
| Uniform | 100.0 | 100.0 |

| FMNIST | ACET | ACET2 |
| --- | --- | --- |
| MNIST | 96.4 | 96.5 |
| EMNIST | 97.6 | 97.3 |
| GrCIFAR10 | 96.2 | 91.6 |
| Noise | 97.8 | 97.1 |
| Uniform | 100.0 | 100.0 |

| SVHN | ACET | ACET2 |
| --- | --- | --- |
| CIFAR10 | 95.2 | 94.2 |
| CIFAR100 | 94.8 | 93.7 |
| LSUN_CR | 97.1 | 96.1 |
| Imagenet- | 97.3 | 95.6 |
| Noise | 95.2 | 95.2 |
| Uniform | 100.0 | 100.0 |

| CIFAR10 | ACET | ACET2 |
| --- | --- | --- |
| SVHN | 93.7 | 82.8 |
| CIFAR100 | 86.9 | 85.3 |
| LSUN_CR | 91.2 | 88.5 |
| Imagenet- | 86.5 | 84.8 |
| Noise | 94.8 | 91.2 |
| Uniform | 100.0 | 100.0 |

| CIFAR100 | ACET | ACET2 |
| --- | --- | --- |
| SVHN | 73.9 | 84.6 |
| CIFAR10 | 77.2 | 77.0 |
| LSUN_CR | 78.0 | 80.0 |
| Imagenet- | 79.5 | 79.4 |
| Noise | 62.9 | 66.3 |
| Uniform | 100.0 | 100.0 |

Table 4: OOD detection performance (AUC in percent) for ACET (trained around tiny images) and ACET2 (trained around uniform noise).
| In-distribution | | ACET | ACET2 |
| --- | --- | --- | --- |
| MNIST | TE | 0.6 | 0.6 |
| | SR | 0.0 | 0.0 |
| | AUC | 100.0 | 100.0 |
| FMNIST | TE | 4.8 | 4.6 |
| | SR | 0.0 | 0.0 |
| | AUC | 100.0 | 100.0 |
| SVHN | TE | 3.2 | 3.0 |
| | SR | 3.0 | 95.5 |
| | AUC | 96.5 | 5.4 |
| CIFAR10 | TE | 6.1 | 7.1 |
| | SR | 0.0 | 64.5 |
| | AUC | 99.9 | 35.9 |
| CIFAR100 | TE | 25.2 | 26.2 |
| | SR | 3.5 | 96.5 |
| | AUC | 95.8 | 14.6 |

Table 5: OOD detection performance (test error (TE), success rate (SR), and AUC, in percent) for ACET (trained around tiny images) and ACET2 (trained around uniform noise).
| In-distribution | Out-distribution | Base | MCD | EDL | DE | GAN | ODIN | Maha | ACET | OE | CCU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | FMNIST | 97.5 | 89.4 | 99.4 | 99.4 | 99.4 | 98.8 | 97.0 | 100.0 | 99.9 | 99.9 |
| | EMNIST | 77.9 | 60.0 | 77.3 | 84.5 | 85.5 | 78.4 | 74.4 | 90.9 | 91.4 | 84.3 |
| | GrCIFAR10 | 99.7 | 91.1 | 99.8 | 100.0 | 99.5 | 99.9 | 98.9 | 100.0 | 100.0 | 100.0 |
| | Noise | 100.0 | 75.5 | 99.8 | 100.0 | 99.2 | 100.0 | 96.5 | 100.0 | 100.0 | 100.0 |
| | Uniform | 97.2 | 82.8 | 99.9 | 98.8 | 99.9 | 98.9 | 100.0 | 100.0 | 100.0 | 100.0 |
| FMNIST | MNIST | 97.6 | 79.3 | 95.9 | 97.5 | 99.9 | 99.2 | 97.2 | 97.4 | 97.0 | 98.3 |
| | EMNIST | 96.8 | 74.2 | 94.6 | 96.1 | 100.0 | 98.9 | 96.5 | 97.0 | 98.6 | 99.1 |
| | GrCIFAR10 | 92.2 | 92.7 | 86.9 | 90.8 | 82.3 | 92.9 | 98.6 | 96.8 | 100.0 | 100.0 |
| | Noise | 93.8 | 78.6 | 91.6 | 92.5 | 95.4 | 95.4 | 97.0 | 95.3 | 100.0 | 100.0 |
| | Uniform | 97.8 | 93.0 | 97.1 | 98.8 | 95.4 | 99.1 | 99.4 | 100.0 | 98.2 | 100.0 |
| SVHN | CIFAR10 | 97.2 | 96.2 | 98.5 | 99.2 | 98.6 | 97.3 | 99.0 | 97.3 | 100.0 | 100.0 |
| | CIFAR100 | 96.7 | 95.8 | 98.3 | 99.0 | 98.2 | 96.6 | 98.8 | 97.0 | 100.0 | 100.0 |
| | LSUN_CR | 99.9 | 99.9 | 99.9 | 100.0 | 100.0 | 99.9 | 100.0 | 100.0 | 100.0 | 100.0 |
| | Imagenet-Noise | 96.8 | 96.2 | 98.3 | 99.1 | 98.9 | 96.9 | 98.9 | 98.3 | 100.0 | 100.0 |
| | Uniform | 89.1 | 83.6 | 95.6 | 96.8 | 94.5 | 50.3 | 97.0 | 87.3 | 95.6 | 93.3 |
| CIFAR10 | SVHN | 92.3 | 71.1 | 90.7 | 87.4 | 80.5 | 92.7 | 85.9 | 91.0 | 98.5 | 97.5 |
| | CIFAR100 | 86.3 | 80.0 | 89.3 | 89.8 | 84.0 | 85.5 | 83.4 | 87.4 | 95.6 | 94.6 |
| | LSUN_CR | 99.7 | 99.2 | 99.7 | 99.7 | 99.7 | 99.7 | 99.6 | 99.7 | 100.0 | 99.9 |
| | Imagenet-Noise | 84.6 | 79.2 | 89.8 | 88.6 | 84.9 | 84.2 | 84.4 | 85.4 | 94.8 | 93.2 |
| | Uniform | 97.8 | 83.4 | 94.5 | 98.0 | 82.4 | 99.1 | 100.0 | 100.0 | 99.2 | 100.0 |
| CIFAR100 | SVHN | 67.4 | 52.9 | 72.0 | 75.6 | 63.4 | 71.0 | 69.1 | 59.4 | 89.6 | 90.8 |
| | CIFAR10 | 80.9 | 66.1 | 75.8 | 79.1 | 72.5 | 81.3 | 61.2 | 79.7 | 84.8 | 83.7 |
| | LSUN_CR | 99.3 | 98.3 | 99.0 | 99.3 | 99.2 | 99.3 | 99.2 | 99.1 | 99.9 | 99.9 |
| | Imagenet-Noise | 81.9 | 67.1 | 78.8 | 80.7 | 76.5 | 82.0 | 73.8 | 80.8 | 85.3 | 83.3 |
| | Uniform | 51.0 | 36.1 | 34.1 | 47.8 | 44.2 | 56.4 | 79.5 | 25.9 | 69.5 | 88.9 |
Table 6: AUPR (in- versus out-distribution detection based on confidence/score) in percent for different OOD methods and datasets (higher is better). OE and CCU have the best OOD performance.

number of true positives (tp) over the sum of true positives and false positives (fp), i.e.

$$
\text{precision} = \frac{\mathrm{tp}}{\mathrm{tp} + \mathrm{fp}}. \tag{26}
$$

Recall is defined as

$$
\text{recall} = \frac{\mathrm{tp}}{\mathrm{tp} + \mathrm{fn}}, \tag{27}
$$

where fn is the number of false negatives. We report the AUPR for all models and all datasets in Table 6. Qualitatively, we find that the results do not differ from the ones reported in Table 2.
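Equations (26)-(27) translate directly into a small AUPR routine; the NumPy sketch below (our illustration, not the paper's evaluation code) sweeps thresholds over the confidence scores and integrates precision over recall with the trapezoidal rule.

```python
import numpy as np

def aupr(labels, scores):
    """Area under the precision/recall curve.
    labels: 1 for in-distribution (positive), 0 for OOD; scores: confidence."""
    order = np.argsort(-scores)        # descending confidence thresholds
    labels = labels[order]
    tp = np.cumsum(labels)             # true positives at each threshold
    fp = np.cumsum(1 - labels)         # false positives at each threshold
    precision = tp / (tp + fp)         # Eq. (26)
    recall = tp / labels.sum()         # Eq. (27): fn = positives - tp
    # prepend the (recall=0, precision at first cut) point and integrate
    r = np.r_[0.0, recall]
    p = np.r_[precision[0], precision]
    return float(np.sum(np.diff(r) * (p[:-1] + p[1:]) / 2.0))
```

A perfect detector (every in-distribution score above every OOD score) yields AUPR 1.0.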
# TOWARDS STABILIZING BATCH STATISTICS IN BACKWARD PROPAGATION OF BATCH NORMALIZATION

Junjie Yan $^{1,2*}$ , Ruosi Wan $^{3*}$ , Xiangyu Zhang $^{3†}$ , Wei Zhang $^{1,2}$ , Yichen Wei $^{3}$ , Jian Sun $^{3}$
$^{1}$ Shanghai Key Laboratory of Intelligent Information Processing
$^{2}$ School of Computer Science, Fudan University
$^{3}$ Megvii Technology
{jjyan17, weizh}@fudan.edu.cn,
{wanruosi, zhangxiangyu, weiyichen, sunjian}@megvii.com

# ABSTRACT

Batch Normalization (BN) is one of the most widely used techniques in the Deep Learning field. But its performance can degrade severely with insufficient batch size. This weakness limits the usage of BN on many computer vision tasks like detection or segmentation, where the batch size is usually small due to the constraint of memory consumption. Therefore, many modified normalization techniques have been proposed, which either fail to restore the performance of BN completely, or have to introduce additional nonlinear operations in the inference procedure and greatly increase consumption. In this paper, we reveal that there are two extra batch statistics involved in the backward propagation of BN, which have never been well discussed before. These extra batch statistics associated with gradients can also severely affect the training of deep neural networks. Based on our analysis, we propose a novel normalization method, named Moving Average Batch Normalization (MABN). MABN can completely restore the performance of vanilla BN in small batch cases, without introducing any additional nonlinear operations in the inference procedure. We prove the benefits of MABN by both theoretical analysis and experiments.
Our experiments demonstrate the effectiveness of MABN on multiple computer vision tasks including ImageNet and COCO. The code has been released at https://github.com/megvii-model/MABN.

# 1 INTRODUCTION

Batch Normalization (BN) (Ioffe & Szegedy, 2015) is one of the most popular techniques for training neural networks. It has been widely proven effective in many applications, and has become an indispensable part of many state-of-the-art deep models.

Despite the success of BN, it is still challenging to utilize BN when the batch size is extremely small$^1$. The batch statistics with small batch size are highly unstable, leading to slow convergence during training and bad performance during inference. For example, in detection or segmentation tasks, the batch size is often limited to 1 or 2 per GPU due to the requirement of high-resolution inputs or the complex structure of the model. Directly computing batch statistics without any modification on each GPU severely degrades the performance of the model.

To address such issues, many modified normalization methods have been proposed. They can be roughly divided into two categories: some of them try to improve vanilla BN by correcting batch statistics (Ioffe, 2017; Singh & Shrivastava, 2019), but they all fail to completely restore the performance of vanilla BN; other methods get over the instability of BN by using instance-level normalization (Ulyanov et al., 2016; Ba et al., 2016; Wu & He, 2018), so that models can avoid the effect of batch statistics. This type of method can restore the performance in small batch cases to some extent. However, instance-level normalization hardly meets industrial or commercial needs so far, since these methods have to compute instance-level statistics both in training and inference, which introduces additional nonlinear operations into the inference procedure and dramatically increases consumption (Shao et al., 2019).
Vanilla BN, in contrast, uses the statistics computed over the whole training data instead of a batch of samples once training has finished. Thus BN is a linear operator and can be merged with the convolution layer during the inference procedure. Figure 1(a) shows that with ResNet-50 (He et al., 2016), instance-level normalization almost doubles the inference time compared with vanilla BN. Therefore, it is a tough but necessary task to restore the performance of BN in small batch training without introducing any nonlinear operations into the inference procedure.

In this paper, we first analyze the formulation of vanilla BN, revealing that there are actually not only 2 but 4 batch statistics involved in normalization, during forward propagation (FP) as well as backward propagation (BP). The additional 2 batch statistics involved in BP are associated with the gradients of the model, and have never been well discussed before. They play an important role in regularizing the gradients of the model during BP. In our experiments (see Figure 2), the variance of the batch statistics associated with gradients in BP, due to small batch size, is even larger than that of the widely-known batch statistics (mean, variance of feature maps). We believe the instability of the batch statistics associated with gradients is one of the key reasons why BN performs poorly in small batch cases.

Based on our analysis, we propose a novel normalization method named Moving Average Batch Normalization (MABN). MABN can completely get over small batch issues without introducing any nonlinear manipulation into the inference procedure. The core idea of MABN is to replace batch statistics with moving average statistics. We substitute the batch statistics involved in BP and FP with different types of moving average statistics respectively, and give a theoretical analysis to prove the benefits. However, we observed that directly using moving average statistics as substitutes for batch statistics cannot make training converge in practice.
We attribute this failure to the occasional large gradients during training, which has been mentioned in Ioffe (2017). To avoid training collapse, we modify the vanilla normalization form by reducing the number of batch statistics, centralizing the weights of convolution kernels, and utilizing a renormalizing strategy. We also theoretically prove that the modified normalization form is more stable than the vanilla form.

MABN shows its effectiveness on multiple public vision datasets and tasks, including ImageNet (Russakovsky et al., 2015) and COCO (Lin et al., 2014). All experimental results show that MABN with small batch size (1 or 2) can achieve performance comparable to BN with regular batch size (see Figure 1(b)). Besides, it has the same inference consumption as vanilla BN (see Figure 1(a)). We also conducted extensive ablation experiments to further verify the effectiveness of MABN.

![](images/b28f79a1f9498d2b3c5578aad68a592e5dadbd325ae2956eb301903c33494bb9.jpg)
(a)

![](images/cbc6e14da2696f27b6204665f2124278e7cd9dd8ef76abc4afe5fcc203062beb.jpg)
(b)
Figure 1: (a) Throughput (iterations per second) in the inference procedure using different normalization methods. The implementation details can be seen in appendix B.2. (b) ImageNet classification validation error vs. batch sizes.

# 2 RELATED WORK

Batch Normalization (BN) (Ioffe & Szegedy, 2015) normalizes the internal feature maps of a deep neural network using channel-wise statistics (mean, standard deviation) along the batch dimension. It has been widely proven effective in most tasks. But vanilla BN heavily relies on sufficient batch size in practice.
To restore the performance of BN in small batch cases, many normalization techniques have been proposed. Batch Renormalization (BRN) (Ioffe, 2017) introduces renormalizing parameters in BN to correct the batch statistics during training, where the renormalizing parameters are computed using moving average statistics; unlike BRN, EvalNorm (Singh & Shrivastava, 2019) corrects the batch statistics during the inference procedure. Both BRN and EvalNorm can restore the performance of BN to some extent, but they both fail to get over small batch issues completely. Instance Normalization (IN) (Ulyanov et al., 2016), Layer Normalization (LN) (Ba et al., 2016), and Group Normalization (GN) (Wu & He, 2018) all try to avoid the effect of batch size by utilizing instance-level statistics. IN uses channel-wise statistics per instance instead of per batch, while LN uses instance-level statistics along the channel dimension. But IN and LN show no superiority to vanilla BN in most cases. GN divides all channels into predefined groups and uses group-wise statistics per instance. It can restore the performance of vanilla BN very well in classification and detection tasks, but it has to introduce extra nonlinear manipulations into the inference procedure and severely increases inference consumption, as we pointed out in Section 1. SyncBN (Peng et al., 2018) handles the small batch issues by computing the mean and variance across multiple GPUs. This method does not essentially solve the problem, and requires substantial resources. Online Normalization (Chiley et al., 2019) modifies BP by using moving average statistics, so the batch size can be set to 1 without degradation of performance, but Online Normalization still has to use instance-level normalization to cooperate with the modification in BP, so its inference efficiency is much lower than that of original BN.
Apart from operating on feature maps, some works normalize the weights of the convolution instead: Weight Standardization (Qiao et al., 2019) first centralizes the weights and then divides them by their standard deviation. It still has to be combined with GN to handle small batch cases.

# 3 STATISTICS IN BATCH NORMALIZATION

# 3.1 REVIEW OF BATCH NORMALIZATION

First of all, let us review the formulation of Batch Normalization (Ioffe & Szegedy, 2015): assume the input of a BN layer is denoted as $\mathbf{X} \in \mathbb{R}^{B \times p}$, where $B$ denotes the batch size and $p$ denotes the number of features. In the training procedure, the normalized feature maps $\mathbf{Y}$ at iteration $t$ are computed as:

$$
\boldsymbol{Y} = \frac{\boldsymbol{X} - \mu_{\mathcal{B}_t}}{\sigma_{\mathcal{B}_t}}, \tag{1}
$$

where the batch statistics $\mu_{\mathcal{B}_t}$ and $\sigma_{\mathcal{B}_t}^2$ are the sample mean and sample variance computed over the batch of samples $\mathcal{B}_t$ at iteration $t$:

$$
\mu_{\mathcal{B}_t} = \frac{1}{B} \sum_{b} \boldsymbol{X}_{b,:}, \quad \sigma_{\mathcal{B}_t}^{2} = \frac{1}{B} \sum_{b} \left(\boldsymbol{X}_{b,:} - \mu_{\mathcal{B}_t}\right)^{2}. \tag{2}
$$

Besides, a pair of parameters $\gamma, \beta$ are used to scale and shift the normalized value $\boldsymbol{Y}$:

$$
\boldsymbol{Z} = \boldsymbol{Y} \gamma + \beta. \tag{3}
$$

The scaling and shifting part is included in all normalization forms by default, and will be omitted in the following discussion for simplicity.

As Ioffe & Szegedy (2015) demonstrated, the batch statistics $\mu_{\mathcal{B}_t}, \sigma_{\mathcal{B}_t}^2$ are both involved in backward propagation (BP). We can derive the formulation of BP in BN as follows: let $\mathcal{L}$ denote the loss, and $\Theta_t$ denote the set of all learnable parameters of the model at iteration $t$.
Given the partial gradients $\left.\frac{\partial\mathcal{L}}{\partial Y}\right|_{\Theta_t,\mathcal{B}_t}$, the partial gradients $\left.\frac{\partial\mathcal{L}}{\partial X}\right|_{\Theta_t,\mathcal{B}_t}$ are computed as

$$
\left. \frac{\partial \mathcal{L}}{\partial \boldsymbol{X}} \right|_{\Theta_t, \mathcal{B}_t} = \frac{1}{\sigma_{\mathcal{B}_t}} \left(\left. \frac{\partial \mathcal{L}}{\partial \boldsymbol{Y}} \right|_{\Theta_t, \mathcal{B}_t} - g_{\mathcal{B}_t} - \boldsymbol{Y} \cdot \Psi_{\mathcal{B}_t}\right), \tag{4}
$$

where $\cdot$ denotes element-wise product, and $g_{\mathcal{B}_t}$ and $\Psi_{\mathcal{B}_t}$ are computed as

$$
g_{\mathcal{B}_t} = \frac{1}{B} \sum_{b} \left. \frac{\partial \mathcal{L}}{\partial \mathbf{Y}_{b,:}} \right|_{\Theta_t, \mathcal{B}_t}, \quad \Psi_{\mathcal{B}_t} = \frac{1}{B} \sum_{b} \mathbf{Y}_{b,:} \cdot \left. \frac{\partial \mathcal{L}}{\partial \mathbf{Y}_{b,:}} \right|_{\Theta_t, \mathcal{B}_t}. \tag{5}
$$

It can be seen from (5) that $g_{\mathcal{B}_t}$ and $\Psi_{\mathcal{B}_t}$ are also batch statistics involved in BN during BP. But they have never been well discussed before.

# 3.2 INSTABILITY OF BATCH STATISTICS

According to Ioffe & Szegedy (2015), the ideal normalization is to normalize the feature maps $\mathbf{X}$ using the expectation and variance computed over the whole training data set:

$$
\boldsymbol{Y} = \frac{\boldsymbol{X} - \mathbb{E}\boldsymbol{X}}{\sqrt{\operatorname{Var}[\boldsymbol{X}]}}. \tag{6}
$$

But this is impractical when using stochastic optimization. Therefore, Ioffe & Szegedy (2015) use mini-batches in stochastic gradient training, where each mini-batch produces estimates of the mean and variance of each activation. This simplification makes it possible to involve the mean and variance in BP.
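The four batch statistics in Eqs. (2) and (5) and the backward pass of Eq. (4) can be written out in a few lines. The following NumPy sketch (our illustration, not the paper's code) also lets one verify Eq. (4) against finite differences:

```python
import numpy as np

def bn_forward(X, eps=1e-5):
    # Eq. (2): batch mean and variance; Eq. (1): normalize.
    mu = X.mean(axis=0)
    sigma = np.sqrt(X.var(axis=0) + eps)
    Y = (X - mu) / sigma
    return Y, sigma

def bn_backward(Y, dY, sigma):
    # Eq. (5): the two extra batch statistics involved in BP.
    g = dY.mean(axis=0)           # g_B: batch mean of the gradient
    psi = (Y * dY).mean(axis=0)   # Psi_B: batch mean of Y * gradient
    # Eq. (4)
    dX = (dY - g - Y * psi) / sigma
    return dX, g, psi
```

With batch size 2, `g` and `psi` are averages of just two samples, which is the instability discussed below.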
From the derivation in section 3.1, we can see that the batch statistics $\mu_{\mathcal{B}_t}$, $\sigma_{\mathcal{B}_t}^2$ are Monte Carlo (MC) estimators of the population statistics $\mathbb{E}[X|\Theta_t]$, $\operatorname{Var}[X|\Theta_t]$ respectively at iteration $t$. Similarly, the batch statistics $g_{\mathcal{B}_t}$, $\Psi_{\mathcal{B}_t}$ are MC estimators of the population statistics $\mathbb{E}[\frac{\partial\mathcal{L}}{\partial Y_{b,:}} |\Theta_t]$, $\mathbb{E}[Y_{b,:} \cdot \frac{\partial\mathcal{L}}{\partial Y_{b,:}} |\Theta_t]$ at iteration $t$, which are computed over the whole data set. They contain information about how the mean and the variance of the population will change as the model updates, so they play an important role in trading off between the change of the individual sample and that of the population. Therefore, it is crucial to estimate the population statistics precisely, in order to properly regularize the gradients of the model as the weights update.

It is well known that the variance of an MC estimator is inversely proportional to the number of samples, hence the variance of batch statistics increases dramatically when the batch size is small. Figure 2 shows how the batch statistics of a specific normalization layer of ResNet-50 change during training on ImageNet. Regular batch statistics (orange line) are regarded as a good approximation of the population statistics. We can see that small batch statistics (blue line) are highly unstable, and contain notable error compared with regular batch statistics during training. In fact, the bias of $g_{\mathcal{B}_t}$ and $\Psi_{\mathcal{B}_t}$ in BP is more serious than that of $\mu_{\mathcal{B}_t}$ and $\sigma_{\mathcal{B}_t}^2$ (see Figure 2(c), 2(d)).
The instability of small batch statistics can worsen the capacity of the models in two aspects: first, the instability of small batch statistics makes training unstable, resulting in slow convergence; second, it can produce a huge difference between batch statistics and population statistics. Since the model is trained using batch statistics but evaluated using population statistics, this difference causes an inconsistency between the training and inference procedures, leading to bad performance of the model on evaluation data.

![](images/fb1691af8c0d3b2eef2f3155875799110a9bed49d0d01971142de2313a0aeb13.jpg)
(a) $\mu_{\mathcal{B}}$

![](images/0b2abab06b586283869529ee6af934eb585ea56bc3a050cd27d9c2fe6cbce4e6.jpg)
(b) $\sigma_{\mathcal{B}}^{2}$

![](images/6b95f177756426e0d5604fbc781e76dfb34ca6556f39a8b0caf9257fd2a46f89.jpg)
(c) $g_{\mathcal{B}}$

![](images/a96b1ea27d98d6bc73de9181db00c38c512edd7993071fe2b3131f4271611a88.jpg)
(d) $\Psi_{\mathcal{B}}$

Figure 2: Plot of batch statistics from layer1.0.bn1 in ResNet-50 during training. The formulation of these batch statistics $(\mu_{\mathcal{B}},\sigma_{\mathcal{B}}^{2},g_{\mathcal{B}},\Psi_{\mathcal{B}})$ is given in Section 3.1. The blue line represents the small batch statistics $(|\mathcal{B}| = 2)$, while the orange line represents the regular batch statistics $(|\mathcal{B}| = 32)$. The x-axis represents iterations, while the y-axis represents the $l^2$ norm of these statistics in each figure. Notice that the mean of $g$ and $\Psi$ is close to zero, hence the $l^2$ norm of $g_{\mathcal{B}}$ and $\Psi_{\mathcal{B}}$ essentially represents their standard deviation.

# 4 MOVING AVERAGE BATCH NORMALIZATION

Based on the discussion in Section 3.2, the key to restoring the performance of BN is to solve the instability of small batch statistics.
Therefore, we consider two ways to handle the instability of small batch statistics: using moving average statistics to estimate the population statistics, and reducing the number of statistics by modifying the formulation of normalization.

# 4.1 SUBSTITUTING BATCH STATISTICS BY MOVING AVERAGE STATISTICS

Moving average statistics seem to be a suitable substitute for batch statistics to estimate population statistics when the batch is small. We consider two types of moving average statistics: simple moving average statistics (SMAS)2 and exponential moving average statistics (EMAS)3. The following theorem shows that, under mild conditions, SMAS and EMAS are more stable than batch statistics:

Theorem 1 Assume there exists a sequence of random variables (r.v.) $\{\xi_t\}_{t=1}^{\infty}$, which are independent, uniformly bounded, i.e. $\forall t, |\xi_t| < C$, and have uniformly bounded density. Define:

$$
S_t = \frac{1}{m} \sum_{i=t-m+1}^{t} \xi_i, \quad E_t = (1 - \alpha) \sum_{i=1}^{t} \alpha^{t-i} \xi_i, \tag{7}
$$

where $m \in \mathbb{N}^{+}$.
If the sequence $\{\xi_t\}_{t = 1}^{\infty}$ satisfies

$$
\exists \xi, \forall \epsilon \in \mathbb{R}, \lim_{t \rightarrow \infty} P\left(\xi_t \leq \epsilon\right) = P(\xi \leq \epsilon), \tag{8}
$$

then we have

$$
\mathbb{E}\left(E_t\right) = \mathbb{E}(\xi) + o(1), \quad \operatorname{Var}\left(E_t\right) = \frac{\left(1 - \alpha^{2t}\right)(1 - \alpha)}{1 + \alpha} \operatorname{Var}(\xi) + o(1); \tag{9}
$$

if the sequence $\{\xi_t\}_{t=1}^{\infty}$ satisfies

$$
\lim_{t \rightarrow \infty} \sup_{\lambda} | P(\xi_{t-1} < \lambda) - P(\xi_t < \lambda) | = 0, \tag{10}
$$

then we have

$$
\mathbb{E}\left(S_t\right) = \mathbb{E}\left(\xi_t\right) + o(1), \quad \operatorname{Var}\left(S_t\right) = \frac{\operatorname{Var}\left(\xi_t\right)}{m} + o(1). \tag{11}
$$

The proof of Theorem 1 can be seen in appendix A.1. Theorem 1 not only proves that moving average statistics have lower variance compared with batch statistics, but also reveals that with large momentum $\alpha$, EMAS is better than SMAS, with lower variance. However, using SMAS and EMAS requires different conditions: condition (8) means the sequence of the given statistics needs to converge weakly to a specific random variable. For $\{\mu_{\mathcal{B}_t}\}_{t=1}^{\infty}$, $\{\sigma_{\mathcal{B}_t}^2\}_{t=1}^{\infty}$, they converge to the "final" batch statistics $\mu_{\mathcal{B}}$, $\sigma_{\mathcal{B}}^2$ (when training has finished), hence condition (8) is satisfied and EMAS can be applied to replace $\{\mu_{\mathcal{B}_t}\}_{t=1}^{\infty}$, $\{\sigma_{\mathcal{B}_t}^2\}_{t=1}^{\infty}$. Unfortunately, $\{g_{\mathcal{B}_t}\}_{t=1}^{\infty}$, $\{\Psi_{\mathcal{B}_t}\}_{t=1}^{\infty}$ do not share the same property, so EMAS is not suitable to replace $\{g_{\mathcal{B}_t}\}_{t=1}^{\infty}$, $\{\Psi_{\mathcal{B}_t}\}_{t=1}^{\infty}$.
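The variance statements of Theorem 1 are easy to check numerically in the i.i.d. case, where both conditions hold trivially. The sketch below is our illustration (the chain length, window size `m`, and momentum `alpha` are arbitrary choices); it compares the empirical variance of the raw statistic, the SMAS $S_t$, and the EMAS $E_t$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, m, alpha = 2000, 16, 0.98
runs = rng.normal(size=(500, T))     # 500 independent chains of xi_t

# SMAS: window average of the last m values, Eq. (7) left.
smas = runs[:, T - m:].mean(axis=1)

# EMAS: exponential average with momentum alpha, Eq. (7) right.
emas = np.zeros(500)
for t in range(T):
    emas = alpha * emas + (1 - alpha) * runs[:, t]

var_xi = runs[:, -1].var()
print(var_xi, smas.var(), emas.var())
```

Empirically, `smas.var()` is close to `var_xi / m` (Eq. (11)) and `emas.var()` is close to `(1 - alpha) / (1 + alpha) * var_xi` (Eq. (9)), so with $\alpha = 0.98$ the EMAS has even lower variance than a 16-step SMAS.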
However, under the assumption that the learning rate is extremely small, the difference between the distributions of $\xi_{t-1}$ and $\xi_t$ is tiny, so condition (10) is satisfied and we can use SMAS to replace $\{g_{\mathcal{B}_t}\}_{t=1}^{\infty}$, $\{\Psi_{\mathcal{B}_t}\}_{t=1}^{\infty}$. In summary, we can use the EMAS $\hat{\mu}_t$, $\hat{\sigma}_t^2$ to replace $\mu_{\mathcal{B}_t}$, $\sigma_{\mathcal{B}_t}^2$, and use the SMAS $\bar{g}_t$, $\bar{\Psi}_t$ to replace $g_{\mathcal{B}_t}$, $\Psi_{\mathcal{B}_t}$ in (1) and (4), where

$$
\hat{\mu}_t = \alpha \hat{\mu}_{t-1} + (1 - \alpha) \mu_{\mathcal{B}_t}, \quad \hat{\sigma}_t^{2} = \alpha \hat{\sigma}_{t-1}^{2} + (1 - \alpha) \sigma_{\mathcal{B}_t}^{2}, \tag{12}
$$

$$
\bar{g}_t = \frac{1}{m} \sum_{s=1}^{m} g_{\mathcal{B}_{t-m+s}}, \quad \bar{\Psi}_t = \frac{1}{m} \sum_{s=1}^{m} \Psi_{\mathcal{B}_{t-m+s}}. \tag{13}
$$

Note that neither SMAS nor EMAS is an unbiased substitute for the batch statistics, but the bias can be extremely small compared with the expectation and variance of the batch statistics, as shown by equation (11) in Theorem 1; our experiments also prove the effectiveness of moving average statistics as substitutes for small batch statistics (see Figures 3 and 4 in appendix B.1).

Relation to Batch Renormalization Essentially, Batch Renormalization (BRN) (Ioffe, 2017) replaces the batch statistics $\mu_{\mathcal{B}_t}$, $\sigma_{\mathcal{B}_t}^2$ with the EMAS $\hat{\mu}_t$, $\hat{\sigma}_t^2$ both in FP (1) and BP (4).
The formulation of BRN during training is written as:

$$
\boldsymbol{Y} = \frac{\boldsymbol{X} - \mu_{\mathcal{B}_t}}{\sigma_{\mathcal{B}_t}}, \quad \hat{\boldsymbol{Y}} = r \cdot \boldsymbol{Y} + d, \tag{14}
$$

where $r = \text{clip}_{[1/\lambda, \lambda]}(\frac{\sigma_{\mathcal{B}_t}}{\hat{\sigma}_t})$ and $d = \text{clip}_{[-d_{\max}, d_{\max}]}(\frac{\mu_{\mathcal{B}_t} - \hat{\mu}_t}{\hat{\sigma}_t})$. Based on our analysis, BRN successfully eliminates the effect of the small batch statistics $\mu_{\mathcal{B}_t}$ and $\sigma_{\mathcal{B}_t}^2$ via EMAS, but the small batch statistics associated with gradients, $g_{\mathcal{B}_t}$ and $\Psi_{\mathcal{B}_t}$, remain during backward propagation, preventing BRN from completely restoring the performance of vanilla BN.

# 4.2 STABILIZING NORMALIZATION BY REDUCING THE NUMBER OF STATISTICS

To further stabilize the training procedure in small batch cases, we consider normalizing the feature maps $\mathbf{X}$ using $\mathbb{E}\mathbf{X}^2$ instead of $\mathbb{E}\mathbf{X}$ and $\operatorname{Var}(\mathbf{X})$. The formulation of normalization is modified as:

$$
\boldsymbol{Y} = \frac{\boldsymbol{X}}{\chi_{\mathcal{B}_t}}, \quad \boldsymbol{Z} = \boldsymbol{Y} \cdot \gamma + \beta, \tag{15}
$$

where $\chi_{\mathcal{B}_t}^2 = \frac{1}{B}\sum_b X_{b,:}^2$. Given $\frac{\partial\mathcal{L}}{\partial Y}$, the backward propagation is:

$$
\left. \frac{\partial \mathcal{L}}{\partial \boldsymbol{X}} \right|_{\Theta_t, \mathcal{B}_t} = \frac{1}{\chi_{\mathcal{B}_t}} \left(\left. \frac{\partial \mathcal{L}}{\partial \boldsymbol{Y}} \right|_{\Theta_t, \mathcal{B}_t} - \boldsymbol{Y} \cdot \Psi_{\mathcal{B}_t}\right). \tag{16}
$$

The benefits of this modification seem obvious: there are only two batch statistics left during FP and BP, which introduces less instability into the normalization layer compared with the vanilla normalizing form.
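The modified forward and backward pass of Eqs. (15)-(16) can be sketched as follows (a NumPy illustration, not the released implementation); note that only $\chi_{\mathcal{B}_t}$ appears in FP and only $\Psi_{\mathcal{B}_t}$ in BP:

```python
import numpy as np

def mod_forward(X, eps=1e-5):
    # Eq. (15): normalize by the root mean square chi_B instead of mean/std.
    chi = np.sqrt((X ** 2).mean(axis=0) + eps)
    return X / chi, chi

def mod_backward(Y, dY, chi):
    # Eq. (16): only Psi_B remains; the g_B term of Eq. (4) is gone.
    psi = (Y * dY).mean(axis=0)
    return (dY - Y * psi) / chi
```

A finite-difference check confirms that Eq. (16) is the exact gradient of Eq. (15).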
In fact, we can theoretically prove the benefits of the modification by the following theorem:

Theorem 2 If the following assumptions hold:

1. $\operatorname{Var}[\hat{\sigma}] = o(1)$, $\operatorname{Var}[\hat{\chi}] = o(1)$;
2. $\operatorname{Cov}(\{\frac{\partial\mathcal{L}}{\partial y}, y\}, \{g_{\mathcal{B}}, \Psi_{\mathcal{B}}\}) = o(1)$;
3. $\mathbb{E}y = o(1)$;

then we have:

$$
\operatorname{Var}\left[\left.\frac{\partial\mathcal{L}}{\partial x}\right|_{\text{modified}}\right] \leq \operatorname{Var}\left[\left.\frac{\partial\mathcal{L}}{\partial x}\right|_{\text{vanilla}}\right] - \frac{\operatorname{Var}[g_{\mathcal{B}}]}{\hat{\sigma}^{2}}. \tag{17}
$$

The proof can be seen in appendix A.2. According to (17), $\operatorname{Var}[\partial\mathcal{L}/\partial\mathbf{X}|_{\text{vanilla}}]$ is larger than $\operatorname{Var}[\partial\mathcal{L}/\partial\mathbf{X}|_{\text{modified}}]$; the gap is at least $\operatorname{Var}[g_{\mathcal{B}}]/\hat{\sigma}^2$, which is mainly caused by the variance of $g_{\mathcal{B}}/\hat{\sigma}$. So the modification essentially reduces the variance of the gradient by eliminating the batch statistic $g_{\mathcal{B}}$ during BP. Since $g_{\mathcal{B}_t}$ is a Monte Carlo estimator, the gap is inversely proportional to the batch size. This also explains why the improvement from the modification is significant in small batch cases, while the modified BN shows no superiority to vanilla BN with sufficient batch size (see the ablation study in section 5.1).

Centralizing weights of convolution kernels Notice that Theorem 2 relies on assumption 3. Vanilla normalization naturally satisfies $\mathbb{E}y = 0$ by centralizing the feature maps, but the modified normalization does not necessarily satisfy assumption 3. To deal with that, inspired by Qiao et al.
(2019), we find that centralizing the weights $W\in \mathbb{R}^{q\times p}$ of convolution kernels, which we name Weight Centralization (WC), can compensate in practice for the absence of centralizing feature maps:

$$
\bar {\boldsymbol {W}} = \frac {1}{p} \sum_ {i} \boldsymbol {W} _ {:i}, \quad \boldsymbol {X} _ {\text{output}} = (\boldsymbol {W} - \bar {\boldsymbol {W}}) \boldsymbol {X} _ {\text{input}}, \tag {18}
$$

where $X_{\text{input}}$ , $X_{\text{output}}$ are the input and output of the convolution layer respectively. We conduct a further ablation study to clarify the effectiveness of WC (see Table 4 in appendix B.2). It shows that WC brings little benefit to the vanilla normalization, but it can significantly improve the performance of the modified normalization. We emphasize that weight centralization is only a practical remedy for the absence of centralizing feature maps; its theoretical analysis remains future work.

Clipping and renormalizing strategy. In practice, we find that directly substituting moving average statistics for batch statistics in the normalization layer leads to collapse during training. Therefore we adopt the clipping and renormalizing strategy of BRN (Ioffe, 2017).

All in all, the formulation of the proposed method MABN is:

$$
\boldsymbol {Y} = \frac {\boldsymbol {X}}{\chi_ {\mathcal {B} _ {t}}}, \quad \hat {\boldsymbol {Y}} = r \cdot \boldsymbol {Y} \tag {19}
$$

$$
\left. \frac {\partial \mathcal {L}}{\partial \boldsymbol {Y}} \right| _ {\Theta_ {t}, \mathcal {B} _ {t}} = r \cdot \left. \frac {\partial \mathcal {L}}{\partial \hat {\boldsymbol {Y}}} \right| _ {\Theta_ {t}, \mathcal {B} _ {t}}, \quad \left. \frac {\partial \mathcal {L}}{\partial \boldsymbol {X}} \right| _ {\Theta_ {t}, \mathcal {B} _ {t}} = \frac {1}{\bar {\chi} _ {t}} \left(\left. \frac {\partial \mathcal {L}}{\partial \boldsymbol {Y}} \right| _ {\Theta_ {t}, \mathcal {B} _ {t}} - \boldsymbol {Y} \odot \bar {\Psi} _ {t}\right) \tag {20}
$$

where the EMAS $\hat{\chi}_t$ is computed as $\hat{\chi}_t = \alpha \hat{\chi}_{t-1} + (1 - \alpha) \chi_{\mathcal{B}_t}$ , the SMAS $\bar{\chi}_t$ is defined as $\bar{\chi}_t^2 = \frac{1}{m} \sum_{s=1}^{m} \chi_{\mathcal{B}_{t-m+s}}^2$ , and the SMAS $\bar{\Psi}_t$ is defined as in (13). The renormalizing parameter is set as $r = \text{clip}_{[1/\lambda, \lambda]} \left( \frac{\chi_{\mathcal{B}_t}}{\hat{\chi}_t} \right)$ .

# 5 EXPERIMENTS

This section presents the main results of MABN on ImageNet (Russakovsky et al., 2015) and COCO (Lin et al., 2014). Further experimental results on ImageNet, COCO and Cityscapes (Cordts et al., 2016) can be found in appendices B.2, B.3 and B.4 respectively. We also evaluate the computational overhead and memory footprint of MABN; the results are shown in appendix B.5.

# 5.1 IMAGE CLASSIFICATION ON IMAGENET

We evaluate the proposed method on the ImageNet (Russakovsky et al., 2015) classification dataset with 1000 classes. All classification experiments are conducted with ResNet-50 (He et al., 2016). More implementation details can be found in appendix B.2.
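A minimal sketch of how the EMAS $\hat{\chi}_t^2$ and SMAS $\bar{\chi}_t^2$ of Eqs. (19)–(20) might be tracked (hypothetical helper code; the class name and interface are ours, with the momentum and buffer size set to the values used in the experiments):

```python
from collections import deque

import torch

class MovingAverageStats:
    # Tracks the exponential moving average (EMAS) and simple moving average
    # (SMAS) of the per-channel batch statistic chi_B^2 used by MABN.
    def __init__(self, channels, momentum=0.98, buffer_size=16):
        self.momentum = momentum
        self.emas = torch.ones(channels)         # hat{chi}_t^2
        self.buffer = deque(maxlen=buffer_size)  # last m values of chi_B^2

    def update(self, chi2_batch):
        self.emas = self.momentum * self.emas + (1 - self.momentum) * chi2_batch
        self.buffer.append(chi2_batch)

    def smas(self):
        # bar{chi}_t^2: average of the last m batch statistics
        return torch.stack(list(self.buffer)).mean(dim=0)
```

The `deque` with `maxlen=m` directly implements the sliding window of the SMAS, while the EMAS update is the usual BN running-statistic recursion with a larger momentum.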
| | BN (Regular) | BN (Small) | BRN (Small) | MABN (Small, m = 16) |
| --- | --- | --- | --- | --- |
| val error | 23.41 | 35.22 | 30.29 | 23.58 |
| Δ (vs BN (Regular)) | - | 11.81 | 6.88 | 0.17 |

Table 1: Comparison of top-1 error rate (%) of ResNet-50 on ImageNet classification. The gradient batch size is 32 per GPU. Regular means the normalization batch size is 32, while Small means the normalization batch size is 2.

Comparison with other normalization methods. Our baselines are BN using small $(|\mathcal{B}| = 2)$ or regular $(|\mathcal{B}| = 32)$ batch sizes, and BRN (Ioffe, 2017) with small batch size. We do not report instance-level normalization counterparts on ImageNet, because they are not linear-type methods at inference time, and they also fail to restore the performance of BN (over $+0.5\%$ ), according to Wu & He (2018). Table 1 shows that vanilla BN with small batch size severely worsens the performance of the model $(+11.81\%)$ ; BRN (Ioffe, 2017) alleviates the issue to some extent, but remains far from a complete recovery $(+6.88\%)$ ; MABN almost completely restores the performance of vanilla BN $(+0.17\%)$ .

We also compare the performance of BN, BRN and MABN when varying the batch size (see Figure 1(b)). BN and BRN rely heavily on the training batch size, though BRN performs better than vanilla BN. MABN always retains the full capacity of ResNet-50, regardless of the batch size during training.
| Experiment Number | Vanilla Normalization | Modified Normalization | EMAS in FP | SMAS in BP | Top-1 Error (%) |
| --- | --- | --- | --- | --- | --- |
| ① | ✓ | | | | 23.41 (BN, regular) |
| ② | | ✓ | | | 23.53 (regular) |
| ③ | ✓ | | | | 35.22 (BN) |
| ④ | ✓ | | ✓ | | 30.29 (BRN) |
| ⑤ | ✓ | | ✓ | ✓ | - |
| ⑥ | | ✓ | | | 29.68 |
| ⑦ | | ✓ | ✓ | | 27.03 |
| ⑧ | | ✓ | ✓ | ✓ | 23.58 (MABN) |

Ablation study on ImageNet. We conduct ablation experiments on ImageNet to clarify the contribution of each part of MABN (see Table 2). With the vanilla normalization form, replacing the batch statistics in FP with EMAS (as in BRN) restores the performance to some extent ($-4.93\%$, comparing ③ and ④), but a huge gap from a complete recovery remains ($+6.88\%$, comparing ① and ④). Directly using SMAS in BP on top of BRN collapses during training (⑤), no matter how we tune the hyperparameters. We attribute this to the instability of the vanilla normalization structure in small batch cases, so we modify the formulation of normalization as shown in section 4.2. The modified normalization even slightly outperforms BRN in small batch cases (comparing ④ and ⑥). However, with regular batch size the modified normalization shows no superiority over the vanilla form (comparing ① and ②), which can be interpreted by the result of theorem 2. With EMAS in FP, the modified normalization reduces the error rate significantly further (comparing ⑥ and ⑦), but still fails to restore the performance completely ($+3.62\%$, comparing ① and ⑦). Applying SMAS in BP finally closes the rest of the gap, almost completely restoring the performance of vanilla BN in small batch cases ($+0.17\%$, comparing ① and ⑧).

# 5.2 DETECTION AND SEGMENTATION ON COCO FROM SCRATCH

We conduct experiments on the Mask R-CNN (He et al., 2017) benchmark using a Feature Pyramid Network (FPN) (Lin et al., 2017a), following the basic settings in He et al. (2017). We train the networks from scratch (He et al., 2018) with a $2 \times$ schedule. Only the backbone contains normalization layers. More implementation details and experimental results can be found in appendix B.3.

Table 2: Ablation study on ImageNet classification with ResNet-50. The normalization batch size is 2 in all experiments unless otherwise stated. The memory size is 16 and the momentum is 0.98 when using SMAS; otherwise the momentum is 0.9.
"-" means the training does not converge.

| | $AP^{bbox}$ | $AP_{50}^{bbox}$ | $AP_{75}^{bbox}$ | $AP^{mask}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ |
| --- | --- | --- | --- | --- | --- | --- |
| BN | 30.41 | 48.47 | 32.70 | 27.91 | 45.79 | 29.33 |
| BRN | 31.93 | 50.95 | 34.48 | 29.16 | 48.16 | 30.69 |
| SyncBN | 34.81 | 55.18 | 37.69 | 31.69 | 51.86 | 33.68 |
| MABN | 34.85 | 54.97 | 38.00 | 31.61 | 51.88 | 33.64 |
Table 3: Comparison of Average Precision (AP) of Mask R-CNN on COCO detection and segmentation. The gradient batch size is 16. The normalization batch size of SyncBN is 16, while that of BN, BRN and MABN is 2. The momentum of BRN and MABN is 0.98, while the momentum of BN and SyncBN is 0.9. The buffer size $(m)$ is 16.

Table 3 shows the results of MABN compared with vanilla BN, BRN and SyncBN (Peng et al., 2018). MABN outperforms vanilla BN and BRN by a clear margin and achieves performance comparable to SyncBN. Quite differently from the ImageNet experiments, we update the parameters every single batch (with $B_{norm} = 2$ ). Even with such a complex pipeline, MABN still achieves performance comparable to SyncBN.

# 6 CONCLUSION

This paper reveals the existence of the batch statistics $g_{\mathcal{B}}$ and $\Psi_{\mathcal{B}}$ involved in the backward propagation of BN, and analyzes their influence on the training process. This discovery provides a new perspective on why BN fails in small batch cases. Based on our analysis, we propose MABN to deal with the small batch training problem. MABN can completely restore the performance of vanilla BN in small batch cases, and is extraordinarily efficient compared with counterparts such as GN. Our experiments on multiple computer vision tasks (classification, detection, segmentation) show the remarkable performance of MABN.

# ACKNOWLEDGEMENT

This research was partially supported by the National Key R&D Program of China (No. 2017YFA0700800), the Beijing Academy of Artificial Intelligence (BAAI), and NSFC under Grant No. 61473091.

# REFERENCES

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.
IEEE transactions on pattern analysis and machine intelligence, 40(4): 834-848, 2017.
Vitaliy Chiley, Ilya Sharapov, Atli Kosson, Urs Koster, Ryan Reece, Sofia Samaniego de la Fuente, Vishal Subbiah, and Michael James. Online normalization for training neural networks. arXiv e-prints, art. arXiv:1905.05894, May 2019.
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Sam Gross and Michael Wilber. Training and investigating residual nets, 2016. URL https://github.com/facebook/fb.resnet.torch.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE international conference on computer vision, pp. 2961-2969, 2017.
Kaiming He, Ross Girshick, and Piotr Dollár. Rethinking ImageNet pre-training. arXiv preprint arXiv:1811.08883, 2018.
Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In Advances in neural information processing systems, pp. 1945-1953, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456, 2015.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick.
Microsoft COCO: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.
Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117-2125, 2017a.
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980-2988, 2017b.
Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. MegDet: A large mini-batch object detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6181-6189, 2018.
Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille. Weight standardization. arXiv preprint arXiv:1903.10520, 2019.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
Wenqi Shao, Tianjian Meng, Jingyu Li, Ruimao Zhang, Yudian Li, Xiaogang Wang, and Ping Luo. SSN: Learning sparse switchable normalization via SparsestMax. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 443-451, 2019.
Saurabh Singh and Abhinav Shrivastava. EvalNorm: Estimating batch normalization statistics for evaluation. arXiv preprint arXiv:1904.06031, 2019.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia.
Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2881-2890, 2017.

# A SKETCH OF PROOF

# A.1 PROOF OF THEOREM 1

Suppose condition (8) is satisfied, i.e. $\{\xi_t\}_{t=1}^{\infty}$ weakly converges to $\xi$ . Since $\{\xi_t\}_{t=1}^{\infty}$ has uniformly bounded density, we have:

$$
\lim _ {t \rightarrow \infty} \mathbb {E} \xi_ {t} = \mathbb {E} \xi \tag {21}
$$

$$
\lim _ {t \rightarrow \infty} V a r [ \xi_ {t} ] = V a r [ \xi ] \tag {22}
$$

Since the $\{\xi_t\}_{t = 1}^{\infty}$ are independent, we have:

$$
\begin{aligned}
Var[E_t] &= Var\Big[(1-\alpha)\sum_{i=1}^{t}\alpha^{t-i}\xi_i\Big] \\
&= (1-\alpha)^2\sum_{i=1}^{t}\alpha^{2(t-i)}\,Var[\xi_i] \\
&= (1-\alpha)^2\sum_{i=1}^{t}\alpha^{2(t-i)}\,Var[\xi] + (1-\alpha)^2\sum_{i=1}^{t}\alpha^{2(t-i)}\big(Var[\xi_i]-Var[\xi]\big) \\
&= \frac{(1-\alpha^{2t})(1-\alpha)}{1+\alpha}\,Var[\xi] + o(1)
\end{aligned} \tag{23}
$$

as $t\to \infty$ . Hence (9) has been proven.

Now suppose condition (10) is satisfied.
Since $\{\xi_t\}_{t = 1}^{\infty}$ is uniformly bounded, there exists $C\in \mathbb{R}^{+}$ such that $|\xi_{t}| < C$ for all $t$ . As $t\to \infty$ , we have

$$
\begin{aligned}
\left|\mathbb{E}\xi_{t-1} - \mathbb{E}\xi_{t}\right| &= \left|\int_{x\in[-C,C]} x\,p_{t-1}(x)\,dx - \int_{x\in[-C,C]} x\,p_{t}(x)\,dx\right| \\
&= \left|\int_{x\in[-C,C]} x\left(p_{t-1}(x)-p_{t}(x)\right)dx\right| \\
&= \left|x\left(F_{t-1}(x)-F_{t}(x)\right)\Big|_{-C}^{C} - \int_{x\in[-C,C]}\left(F_{t-1}(x)-F_{t}(x)\right)dx\right| \\
&\leq \int_{x\in[-C,C]}\left|F_{t-1}(x)-F_{t}(x)\right|dx \\
&\leq 2C\cdot\sup_{x}\left|F_{t-1}(x)-F_{t}(x)\right| \\
&= 2C\cdot\sup_{x}\left|P(\xi_{t-1}<x)-P(\xi_{t}<x)\right| \\
&= o(1)
\end{aligned} \tag{24}
$$

Similarly, we have

$$
\begin{aligned}
\left|\mathbb{E}\xi_{t-1}^{2} - \mathbb{E}\xi_{t}^{2}\right| &= \left|\int_{x\in[-C,C]} x^{2}\left(p_{t-1}(x)-p_{t}(x)\right)dx\right| \\
&= \left|x^{2}\left(F_{t-1}(x)-F_{t}(x)\right)\Big|_{-C}^{C} - \int_{x\in[-C,C]} 2x\left(F_{t-1}(x)-F_{t}(x)\right)dx\right| \\
&\leq \int_{x\in[-C,C]} 2|x|\left|F_{t-1}(x)-F_{t}(x)\right|dx \\
&\leq 4C^{2}\cdot\sup_{x}\left|F_{t-1}(x)-F_{t}(x)\right| \\
&= o(1)
\end{aligned} \tag{25}
$$

Therefore, combining (24) and (25), we have

$$
\begin{aligned}
\left|Var[\xi_{t-1}] - Var[\xi_{t}]\right| &\leq \left|\mathbb{E}\xi_{t-1}^{2} - \mathbb{E}\xi_{t}^{2}\right| + \left|\left(\mathbb{E}\xi_{t-1}\right)^{2} - \left(\mathbb{E}\xi_{t}\right)^{2}\right| \\
&= o(1)
\end{aligned} \tag{26}
$$

For a fixed memory size $m$ , as $t \to \infty$ , we have

$$
\begin{aligned}
Var[S_t] &= Var\Big[\frac{1}{m}\sum_{i=0}^{m-1}\xi_{t-i}\Big] \\
&= \frac{1}{m^{2}}\sum_{i=0}^{m-1}Var[\xi_{t-i}] \\
&= \frac{1}{m^{2}}\sum_{i=0}^{m-1}\left(Var[\xi_{t}] + o(1)\right) \\
&= \frac{1}{m}\,Var[\xi_{t}] + o(1)
\end{aligned} \tag{27}
$$

Therefore, (11) has been proven.

# A.2 PROOF OF THEOREM 2

Without loss of generality, consider the backward propagation of the two normalizing forms for a single input $x$ within batch $\mathcal{B}$ :

$$
\left. \frac {\partial \mathcal {L}}{\partial x} \right| _ {vanilla} = \frac {1}{\hat {\sigma}} \left[ \frac {\partial \mathcal {L}}{\partial y} - g _ {\mathcal {B}} - y \cdot \Psi_ {\mathcal {B}} \right], \quad \left. \frac {\partial \mathcal {L}}{\partial x} \right| _ {modified} = \frac {1}{\hat {\chi}} \left[ \frac {\partial \mathcal {L}}{\partial y} - y \cdot \Psi_ {\mathcal {B}} \right], \tag {28}
$$

where $g_{\mathcal{B}}$ , $\Psi_{\mathcal{B}}$ are the batch statistics, and $\hat{\sigma}$ , $\hat{\chi}$ are the EMAS, defined as before. We omit the subscript $t$ for simplicity. Then the variance of the gradient w.r.t. the input $x$ is

$$
\begin{aligned}
Var\left[\left.\frac{\partial\mathcal{L}}{\partial x}\right|_{\text{vanilla}}\right] &= Var\left[\frac{1}{\hat{\sigma}}\left[\frac{\partial\mathcal{L}}{\partial y} - g_{\mathcal{B}} - y\cdot\Psi_{\mathcal{B}}\right]\right] &(29) \\
&= \frac{1}{\hat{\sigma}^{2}}\left[Var\left[\frac{\partial\mathcal{L}}{\partial y} - y\cdot\Psi_{\mathcal{B}}\right] + Var[g_{\mathcal{B}}] + 2\,Cov\left[\frac{\partial\mathcal{L}}{\partial y} - y\cdot\Psi_{\mathcal{B}},\, g_{\mathcal{B}}\right]\right] &(30) \\
&= \frac{1}{\hat{\sigma}^{2}}\left[Var\left[\frac{\partial\mathcal{L}}{\partial y} - y\cdot\Psi_{\mathcal{B}}\right] + Var[g_{\mathcal{B}}]\right] &(31) \\
&\geq \frac{1}{\hat{\chi}^{2}}\,Var\left[\frac{\partial\mathcal{L}}{\partial y} - y\cdot\Psi_{\mathcal{B}}\right] + \frac{Var[g_{\mathcal{B}}]}{\hat{\sigma}^{2}} &(32) \\
&= Var\left[\left.\frac{\partial\mathcal{L}}{\partial x}\right|_{\text{modified}}\right] + \frac{Var[g_{\mathcal{B}}]}{\hat{\sigma}^{2}} &(33)
\end{aligned}
$$

where (30) is satisfied due to assumption 1.
The variance of $\hat{\sigma}$ is so small that $\hat{\sigma}$ can be regarded as a fixed number; (31) is satisfied because

$$
\begin{aligned}
Cov\left[\frac{\partial\mathcal{L}}{\partial y} - y\cdot\Psi_{\mathcal{B}},\, g_{\mathcal{B}}\right] &= Cov\left[\frac{\partial\mathcal{L}}{\partial y},\, g_{\mathcal{B}}\right] - Cov\left[y\cdot\Psi_{\mathcal{B}},\, g_{\mathcal{B}}\right] &(34) \\
&= Cov\left[\frac{\partial\mathcal{L}}{\partial y},\, g_{\mathcal{B}}\right] - \mathbb{E}\left[y\Psi_{\mathcal{B}}\left(g_{\mathcal{B}} - \mathbb{E}g_{\mathcal{B}}\right)\right] + \mathbb{E}\left[y\Psi_{\mathcal{B}}\right]\mathbb{E}\left[g_{\mathcal{B}} - \mathbb{E}g_{\mathcal{B}}\right] &(35)
\end{aligned}
$$

Due to assumption 2, the correlation between an individual sample and the batch statistics is close to 0; hence we have

$$
C o v \left[ \frac {\partial \mathcal {L}}{\partial y}, g _ {\mathcal {B}} \right] = 0 \tag {36}
$$

$$
\mathbb {E} \left[ y \Psi_ {\mathcal {B}} \left(g _ {\mathcal {B}} - \mathbb {E} g _ {\mathcal {B}}\right) \right] = \mathbb {E} y \, \mathbb {E} \left[ \Psi_ {\mathcal {B}} \left(g _ {\mathcal {B}} - \mathbb {E} g _ {\mathcal {B}}\right) \right] \tag {37}
$$

$$
\mathbb {E} [ y \Psi_ {\mathcal {B}} ] = \mathbb {E} y \, \mathbb {E} \Psi_ {\mathcal {B}} \tag {38}
$$

Besides, $\mathbb{E}y$ is close to 0 according to assumption 3; hence

$$
C o v \left[ \frac {\partial \mathcal {L}}{\partial y} - y \cdot \Psi_ {\mathcal {B}}, g _ {\mathcal {B}} \right] = 0. \tag {39}
$$

(32) is satisfied due to the definitions of $\hat{\chi}$ and $\hat{\sigma}$ :

$$
\hat {\chi} ^ {2} = \hat {\sigma} ^ {2} + \hat {\mu} ^ {2}. \tag {40}
$$

Similar to $\hat{\sigma}$ , the variance of $\hat{\chi}$ is also small enough that $\hat{\chi}$ can be regarded as a fixed number due to assumption 1, so (33) is satisfied.
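As a numerical sanity check of the asymptotic variances derived in appendix A.1 (a hypothetical Monte Carlo simulation, not part of the paper), i.i.d. unit-variance statistics give an EMA variance near $(1-\alpha)/(1+\alpha)$ and an SMA variance near $1/m$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, m, T, trials = 0.9, 16, 500, 20000
xi = rng.normal(0.0, 1.0, size=(trials, T))   # i.i.d. batch statistics, Var[xi] = 1

ema = np.zeros(trials)                        # E_t = alpha * E_{t-1} + (1 - alpha) * xi_t
for t in range(T):
    ema = alpha * ema + (1 - alpha) * xi[:, t]
sma = xi[:, -m:].mean(axis=1)                 # S_t: mean of the last m statistics

print(ema.var())   # close to (1 - alpha) / (1 + alpha) ~ 0.0526
print(sma.var())   # close to 1 / m = 0.0625
```

Both moving averages have far smaller variance than a single batch statistic (variance 1 here), which is the mechanism by which EMAS and SMAS stabilize training.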
# B EXPERIMENTS

# B.1 STATISTICS ANALYSIS

We analyze the difference between small batch statistics $(|\mathcal{B}| = 2)$ and regular batch statistics $(|\mathcal{B}| = 32)$ under the modified formulation of normalization (15) shown in Section 4.2.

![](images/db0843ac1bea8c21c806f0d6354e33a877e4ea34e3bcedcbd921f56b721d10af.jpg)
(a) $\chi_{\mathcal{B}}^{2}$

![](images/e7f8d7732b481a066138e4559d01ce05d764ebe7adb35b853ff4e468141fe2cf.jpg)
(b) $\Psi_{\mathcal{B}}$
Figure 3: Plot of batch statistics from layer1.0.bn1 in ResNet-50 with the modified structure during training. The formulations of these batch statistics $(\chi_{\mathcal{B}}^2, \Psi_{\mathcal{B}})$ are given in sections 4.2 and 3.1 respectively. The blue line represents the small batch statistics $(|\mathcal{B}| = 2)$ , while the orange line represents the regular batch statistics $(|\mathcal{B}| = 32)$ . We use the small batch statistics to update the network parameters.

![](images/37933a3009dfc5b9efe938e0f4aae05b1733bb610b144fd68a8265960a59c950.jpg)
(a) $\chi_{\mathcal{B}}^{2}$

![](images/549b899d6b3e44c01f1c417856febc980a624af4be1df95ee5dc8f89a2cbd7b6.jpg)
(b) $\Psi_{\mathcal{B}}$
Figure 4: Plot of batch statistics from layer1.0.bn1 in ResNet-50 with MABN. The formulations of these batch statistics $(\chi_{\mathcal{B}}^{2}, \Psi_{\mathcal{B}})$ are given in sections 4.2 and 3.1 respectively. The blue line represents the SMA batch statistics(2+30), while the orange line represents the regular batch statistics(32). We use the moving average batch statistics to update the network parameters.

Figure 3 illustrates the evolution of the small batch statistics and the regular batch statistics in FP and BP respectively. The variance of the small batch statistics is much higher than that of the regular ones. However, when we use SMAS as an approximation of the regular batch statistics, the gap between SMAS and the regular batch statistics is not obvious, as shown in Figure 4.

# B.2 EXPERIMENTS ON IMAGENET

Implementation details.
All experiments on ImageNet are conducted across 8 GPUs. We train models with a gradient batch size of $B_{g} = 32$ images per GPU. To simulate small batch training, we split the samples on each GPU into $B_{g} / |\mathcal{B}|$ groups, where $|\mathcal{B}|$ denotes the normalization batch size. The batch statistics are computed within each group individually.

All convolution weights are initialized as in He et al. (2015). We initialize all $\gamma$ to 1 and all $\beta$ to 0 in the normalization layers. We use a weight decay of $10^{-4}$ for all weight layers including $\gamma$ and $\beta$ (following Wu & He (2018)). We train for 600,000 iterations (approximately 120 epochs when the gradient batch size is 256) for all models, and divide the learning rate by 10 at 150,000, 300,000 and 450,000 iterations. The data augmentation follows Gross & Wilber (2016). The models are evaluated by top-1 classification error on center crops of $224\times 224$ pixels on the validation set. For vanilla BN and BRN the momentum is $\alpha = 0.9$ ; for MABN the momentum is $\alpha = 0.98$ .

Additional ablation studies. Table 4 shows additional ablation results. We test all possible combinations of the three kinds of statistics (SMAS, EMAS, BS) in FP and BP. The experimental results strongly support our theoretical analysis in section 4.3. Besides, we verify the necessity of centralizing weights with the modified normalization form.
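The group-splitting procedure used to simulate small batch training can be sketched as follows (hypothetical helper code, using the modified statistic $\chi_{\mathcal{B}}^2$ from section 4.2):

```python
import torch

def per_group_chi2(x, norm_batch_size):
    # Split a per-GPU batch of B_g images into B_g / |B| groups and compute the
    # batch statistic chi_B^2 within each group independently, per channel.
    B, C, H, W = x.shape
    assert B % norm_batch_size == 0
    groups = x.view(B // norm_batch_size, norm_batch_size, C, H, W)
    return groups.pow(2).mean(dim=(1, 3, 4))  # shape: (num_groups, C)
```

Each group of $|\mathcal{B}|$ images then gets its own statistic, exactly as if it were an independent small batch.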
| Experiment number | w/o centralizing feature maps $X$ | Centralizing weights $W$ | FP statistics | BP statistics | Top-1 Error (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | | | EMAS | SMAS | 23.58 (MABN) |
| 2 | | | SMAS | SMAS | 26.63 |
| 3 | | | EMAS | EMAS | 24.83 |
| 4 | | | EMAS | BS | 27.03 |
| 5 | | | BS | BS | 29.68 |
| 6 | | | EMAS | SMAS | 25.45 |
| 7 | | | EMAS | BS | 29.57 |
| 8 | | | BS | BS | 32.95 |
| 9 | | | BS | BS | 35.22 |
| 10 | | | BS | BS | 34.27 |
| 11 | | | BS | BS | 23.35 (regular) |
Table 4: Further ablation study on ImageNet with ResNet-50. The normalization batch size is 2 in all experiments. The buffer size $(m)$ is 16 and the momentum is 0.98 when using SMA statistics; otherwise the momentum is 0.9. BS means vanilla batch statistics.

# B.3 EXPERIMENTS ON COCO

Implementation details. We train the Mask R-CNN pipeline from scratch with MABN on 8 GPUs, with 2 images per GPU. We train our model on the COCO 2014 train and trainval35k datasets and evaluate on the COCO 2014 minival dataset. We set the momentum $\alpha = 0.98$ for all MABN layers. We report the standard COCO metrics $AP^{bbox}$ , $AP_{50}^{bbox}$ , $AP_{75}^{bbox}$ for bounding box detection and $AP^{mask}$ , $AP_{50}^{mask}$ , $AP_{75}^{mask}$ for instance segmentation. Other basic settings follow He et al. (2017).

MABN used on heads. We build a Mask R-CNN baseline using a Feature Pyramid Network (FPN) (Lin et al., 2017a) backbone. The base model is ResNet-50. We train the models with a $2 \times$ schedule. We use 4conv1fc instead of 2fc as the box head. Both the backbone and the heads contain normalization layers, and we replace all normalization layers in each experiment. When training models with MABN, we use batch statistics in the normalization layers on the heads during the first 10,000 iterations. Table 5 shows the results. The momentum is set to 0.98 for BRN and MABN.
| | $AP^{bbox}$ | $AP_{50}^{bbox}$ | $AP_{75}^{bbox}$ | $AP^{mask}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ |
| --- | --- | --- | --- | --- | --- | --- |
| BN | 32.38 | 50.44 | 35.47 | 29.07 | 47.68 | 30.75 |
| BRN | 34.07 | 52.66 | 37.12 | 30.98 | 50.03 | 32.93 |
| SyncBN | 36.81 | 56.23 | 40.08 | 33.11 | 53.46 | 35.28 |
| MABN | 36.50 | 55.79 | 40.17 | 32.69 | 52.78 | 34.71 |
Training from a pretrained model. We compare the performance of MABN and SyncBN when training models from ImageNet pretrained weights with a $2\times$ schedule. The results are shown in Table 6.
| | $AP^{bbox}$ | $AP_{50}^{bbox}$ | $AP_{75}^{bbox}$ | $AP^{mask}$ | $AP_{50}^{mask}$ | $AP_{75}^{mask}$ |
| --- | --- | --- | --- | --- | --- | --- |
| SyncBN | 38.25 | 57.81 | 42.01 | 34.22 | 54.97 | 36.34 |
| MABN | 38.42 | 58.19 | 41.99 | 34.12 | 55.10 | 36.12 |
Training from scratch for a one-stage model. We also compare MABN and SyncBN on a one-stage pipeline. We build on the RetinaNet (Lin et al., 2017b) benchmark and train the model from scratch with a $2 \times$ schedule. The results are shown in Table 7.

Table 6: Comparison of Average Precision (AP) of Mask R-CNN on COCO detection and segmentation. The gradient batch size is 16. The normalization batch size of SyncBN is 16, while that of BN, BRN and MABN is 2; the buffer size $(m)$ of MABN is 32.
| | $AP^{bbox}$ | $AP_{50}^{bbox}$ | $AP_{75}^{bbox}$ |
| --- | --- | --- | --- |
| SyncBN | 29.80 | 46.21 | 31.47 |
| MABN | 29.52 | 45.69 | 31.14 |
Table 7: Comparison of Average Precision (AP) of RetinaNet on COCO detection. The gradient batch size is 16. The normalization batch size of SyncBN is 16, while that of MABN is 2.

All experimental results show that MABN achieves performance comparable to SyncBN and significantly outperforms BN on COCO.

# B.4 SEMANTIC SEGMENTATION ON CITYSCAPES

We evaluate semantic segmentation on Cityscapes (Cordts et al., 2016), which contains 5,000 high-quality, pixel-level finely annotated images collected from 50 cities in different seasons. We conduct experiments on the PSPNet baseline and follow the basic settings of Zhao et al. (2017).

For a fair comparison, our backbone network is ResNet-101 as in Chen et al. (2017). Since we centralize the weights of convolution kernels to use MABN, we have to re-pretrain our backbone model on the ImageNet dataset. During fine-tuning, we linearly increase the learning rate for 3 epochs (558 iterations) at first, then follow the "poly" learning schedule as in Zhao et al. (2017). Table 8 shows the results of MABN compared with vanilla BN, BRN and SyncBN. The buffer size $(m)$ of MABN is 16; the momentum of MABN and BRN is 0.98.

Since the statistics (mean and variance) are more stable in a pre-trained model than in a randomly initialized one, the gap between vanilla BN and SyncBN is not significant $(+1.41\%)$ . However, MABN
| | pretrain Top-1 | mIoU |
| --- | --- | --- |
| BN | 21.74 | 77.11 |
| BRN | 21.74 | 77.30 |
| SyncBN | 21.74 | 78.52 |
| MABN | 21.70 | 78.20 |
still outperforms vanilla BN by a clear margin $(+1.09\%)$ . Besides, BRN shows no obvious superiority over vanilla BN $(+0.19\%)$ on the Cityscapes dataset.

# B.5 COMPUTATIONAL OVERHEAD

We compare the computational overhead and memory footprint of BN, GN and MABN, using Mask R-CNN with a ResNet-50 + FPN backbone as the benchmark. We compute the theoretical FLOPs at inference time and measure the inference speed when a single image $(3\times 224\times 224)$ goes through the backbone (ResNet-50 + FPN). BN and MABN can be absorbed into the convolution layers at inference time. GN cannot be absorbed into the convolution layers, so its FLOPs count is larger than that of BN and MABN. Besides, GN involves division and square-root operations at inference, so it is much slower than BN and MABN at inference time.

We also monitor the training process of Mask R-CNN on COCO (8 GPUs, 2 images per GPU), and report its memory footprint and training speed. Note that we have not optimized the implementation of MABN, so its training speed is slightly slower than that of BN and GN.

Table 8: Results on the Cityscapes testing set.
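The absorption of a linear-type normalization into the preceding convolution at inference time can be sketched as follows (a hypothetical illustration; `fold_norm_into_conv` is our name, not a library API):

```python
import torch
import torch.nn.functional as F

def fold_norm_into_conv(weight, bias, gamma, beta, chi):
    # At inference, y = gamma * (conv(x) / chi) + beta is itself a convolution:
    # rescale each output filter by gamma / chi and shift the bias accordingly.
    scale = gamma / chi
    w = weight * scale.view(-1, 1, 1, 1)
    b = scale * (bias if bias is not None else torch.zeros_like(scale)) + beta
    return w, b
```

GN, in contrast, normalizes within each individual sample using input-dependent statistics, so no such folding applies, which accounts for its larger FLOPs count and slower inference.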
| | FLOPS (M) | Memory (GB) | Training Speed (iter/s) | Inference Speed (iter/s) |
| --- | --- | --- | --- | --- |
| BN | 3123.75 | 58.875 | 2.35 | 12.73 |
| GN | 3183.28 | 58.859 | 2.22 | 6.34 |
| MABN | 3123.75 | 60.609 | 1.81 | 12.73 |
+ +Table 9: Computational overhead and memory footprint of BN, GN and MABN. \ No newline at end of file diff --git a/towardsstabilizingbatchstatisticsinbackwardpropagationofbatchnormalization/images.zip b/towardsstabilizingbatchstatisticsinbackwardpropagationofbatchnormalization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..36a484d45a929a416248b06791cfe670052046fa --- /dev/null +++ b/towardsstabilizingbatchstatisticsinbackwardpropagationofbatchnormalization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab3147676ece8a2744374ae30c188f46fe8b84ab97a4392481f946cf08680891 +size 687192 diff --git a/towardsstabilizingbatchstatisticsinbackwardpropagationofbatchnormalization/layout.json b/towardsstabilizingbatchstatisticsinbackwardpropagationofbatchnormalization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b5c829c15bc932df9e6fa9cde93198885734d3cb --- /dev/null +++ b/towardsstabilizingbatchstatisticsinbackwardpropagationofbatchnormalization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e033acb92fc5b4b1b3b5ce5b8847c5f8dbc2d23ece5975692b9cb2d791984d5 +size 572527 diff --git a/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_content_list.json b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5b1507bd831c518a9e49cc893b96812bcb081b37 --- /dev/null +++ b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8fb3de5c6584affbac5df2743508318b658470aa9689e7aa12e824b93f9382d4 +size 294880 diff --git a/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_model.json 
b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1fb9bcc07f6a93742d47f653af8959e0a6849939 --- /dev/null +++ b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f46dfe9b2c6a5cf841421a3303ba8ecd18020a21f60c17c7ed82df0eecf5fba +size 325794 diff --git a/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_origin.pdf b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8ba849f5742728d6e0bad17dfc0b2fd7c811a960 --- /dev/null +++ b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/4f84e998-df37-4930-88b2-d7de9101d3d8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66a1cb31dd23d8869a44a97a31fcd27cf7c508832d5bc56eb51e15c555ea8314 +size 2074287 diff --git a/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/full.md b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7ed52ebcb3064a2671b00649c9c51db913751152 --- /dev/null +++ b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/full.md @@ -0,0 +1,519 @@ +# TOWARDS STABLE AND EFFICIENT TRAINING OF VERIFIABLY ROBUST NEURAL NETWORKS + +Huan Zhang $^{1*}$ Hongge Chen $^{2}$ Chaowei Xiao $^{3}$ Sven Gowal $^{4}$ Robert Stanforth $^{4}$ + +Bo Li $^{5}$ Duane Boning $^{2}$ Cho-Jui Hsieh $^{1}$ + +$^{1}$ UCLA $^{2}$ MIT $^{3}$ University of Michigan $^{4}$ DeepMind $^{5}$ UIUC + +huan@huan-zhang.com, chenhg@mit.edu, xiaocw@umich.edu + +sgowal@google.com, stanforth@google.com + +lbo@illinois.edu, boning@mtl.mit.edu, 
chohsieh@cs.ucla.edu + +# ABSTRACT + +Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training. In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass. CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks. We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in $\ell_{\infty}$ robustness. Notably, we achieve $7.02\%$ verified test error on MNIST at $\epsilon = 0.3$ , and $66.94\%$ on CIFAR-10 with $\epsilon = 8/255$ . + +# 1 INTRODUCTION + +The success of deep neural networks (DNNs) has motivated their deployment in some safety-critical environments, such as autonomous driving and facial recognition systems. Applications in these areas make understanding the robustness and security of deep neural networks urgently needed, especially their resilience under malicious, finely crafted inputs. Unfortunately, DNNs are often so brittle that even imperceptibly modified inputs, also known as adversarial examples, are able to completely break the model (Goodfellow et al., 2015; Szegedy et al., 2013).
The robustness of DNNs under adversarial examples is well-studied from both attack (crafting powerful adversarial examples) and defense (making the model more robust) perspectives (Athalye et al., 2018; Carlini & Wagner, 2017a;b; Goodfellow et al., 2015; Madry et al., 2018; Papernot et al., 2016; Xiao et al., 2019b; 2018b;c; Eykholt et al., 2018; Chen et al., 2018; Xu et al., 2018; Zhang et al., 2019b). Recently, it has been shown that defending against adversarial examples is a very difficult task, especially under strong and adaptive attacks. Early defenses such as distillation (Papernot et al., 2016) have been broken by stronger attacks like C&W (Carlini & Wagner, 2017b). Many defense methods have been proposed recently (Guo et al., 2018; Song et al., 2017; Buckman et al., 2018; Ma et al., 2018; Samangouei et al., 2018; Xiao et al., 2018a; 2019a), but their robustness improvement cannot be certified – no provable guarantees can be given to verify their robustness. In fact, most of these uncertified defenses become vulnerable under stronger attacks (Athalye et al., 2018; He et al., 2017). + +Several recent works seek to give provable guarantees on robustness, including linear relaxations (Wong & Kolter, 2018; Mirman et al., 2018; Wang et al., 2018a; Dvijotham et al., 2018b; Weng et al., 2018; Zhang et al., 2018), interval bound propagation (Mirman et al., 2018; Gowal et al., 2018), ReLU stability regularization (Xiao et al., 2019c), distributionally robust optimization (Sinha et al., 2018), and semidefinite relaxations (Raghunathan et al., 2018a; Dvijotham et al.). Linear relaxation of neural networks, first proposed by Wong & Kolter (2018), is one of the most popular categories among these certified defenses.
They use the dual of linear programming or several similar approaches to provide a linear relaxation of the network (referred to as a "convex adversarial polytope") and the resulting bounds are tractable for robust optimization. However, these methods are both computationally and memory intensive, and can increase model training time by a factor of hundreds. On the other hand, interval bound propagation (IBP) is a simple and efficient method for training verifiable neural networks (Gowal et al., 2018), which achieved state-of-the-art verified error on many datasets. However, since the IBP bounds are very loose during the initial phase of training, the training procedure can be unstable and sensitive to hyperparameters. + +In this paper, we first discuss the strengths and weaknesses of existing linear relaxation based and interval bound propagation based certified robust training methods. Then we propose a new certified robust training method, CROWN-IBP, which marries the efficiency of IBP and the tightness of a linear relaxation based verification bound, CROWN (Zhang et al., 2018). CROWN-IBP bound propagation involves an IBP-based fast forward bounding pass, and a tight convex relaxation based backward bounding pass (CROWN) which scales linearly with the size of the neural network output and is very efficient for problems with low output dimensions. Additionally, CROWN-IBP provides flexibility for exploiting the strengths of both IBP and convex relaxation based verifiable training methods. + +The efficiency, tightness and flexibility of CROWN-IBP allow it to outperform state-of-the-art methods for training verifiable neural networks with $\ell_{\infty}$ robustness under all $\epsilon$ settings on MNIST and CIFAR-10 datasets. In our experiments, on the MNIST dataset we reach $7.02\%$ and $12.06\%$ IBP verified error under $\ell_{\infty}$ distortions $\epsilon = 0.3$ and $\epsilon = 0.4$ , respectively, outperforming the state-of-the-art baseline results by IBP (8.55% and 15.01%).
On CIFAR-10, at $\epsilon = \frac{2}{255}$ , CROWN-IBP decreases the verified error from $55.88\%$ (IBP) to $46.03\%$ and matches convex relaxation based methods; at a larger $\epsilon$ , CROWN-IBP outperforms all other methods by a noticeable margin. + +# 2 RELATED WORK AND BACKGROUND + +# 2.1 ROBUSTNESS VERIFICATION AND RELAXATIONS OF NEURAL NETWORKS + +Neural network robustness verification algorithms seek upper and lower bounds of an output neuron for all possible inputs within a set $S$ , typically a norm bounded perturbation. Most importantly, the margins between the ground-truth class and any other classes determine model robustness. However, it has already been shown that finding the exact output range is a non-convex problem and NP-complete (Katz et al., 2017; Weng et al., 2018). Therefore, recent works resorted to giving relatively tight but computationally tractable bounds of the output range with necessary relaxations of the original problem. Many of these robustness verification approaches are based on linear relaxations of non-linear units in neural networks, including CROWN (Zhang et al., 2018), DeepPoly (Singh et al., 2019), Fast-Lin (Weng et al., 2018), DeepZ (Singh et al., 2018) and Neurify (Wang et al., 2018b). We refer the readers to (Salman et al., 2019b) for a comprehensive survey on this topic.
After linear relaxation, they bound the output of a neural network $f_{i}(\cdot)$ by linear lower/upper hyper-planes: + +$$ +\mathbf {A} _ {i,:} \Delta \boldsymbol {x} + b _ {L} \leq f _ {i} (\boldsymbol {x} _ {0} + \Delta \boldsymbol {x}) \leq \mathbf {A} _ {i,:} \Delta \boldsymbol {x} + b _ {U} \tag {1} +$$ + +where a row vector $\mathbf{A}_{i,:} = \mathbf{W}_{i,:}^{(L)}\mathbf{D}^{(L - 1)}\mathbf{W}^{(L - 1)}\dots \mathbf{D}^{(1)}\mathbf{W}^{(1)}$ is the product of the network weight matrices $\mathbf{W}^{(l)}$ and diagonal matrices $\mathbf{D}^{(l)}$ reflecting the ReLU relaxations for output neuron $i$ ; $b_{L}$ and $b_{U}$ are two bias terms unrelated to $\Delta x$ . Additionally, Dvijotham et al. (2018c;a); Qin et al. (2019) solve the Lagrangian dual of the verification problem; Raghunathan et al. (2018a;b); Dvijotham et al. propose semidefinite relaxations which are tighter compared to linear relaxation based methods, but computationally expensive. Bounds on the local Lipschitz constant of a neural network can also be used for verification (Zhang et al., 2019c; Hein & Andriushchenko, 2017). Besides these deterministic verification approaches, randomized smoothing can be used to certify the robustness of any model in a probabilistic manner (Cohen et al., 2019; Salman et al., 2019a; Lecuyer et al., 2018; Li et al., 2018). + +# 2.2 ROBUST OPTIMIZATION AND VERIFIABLE ADVERSARIAL DEFENSE + +To improve the robustness of neural networks against adversarial perturbations, a natural idea is to generate adversarial examples by attacking the network and then use them to augment the training set (Kurakin et al., 2017). More recently, Madry et al. (2018) showed that adversarial training can be formulated as solving a minimax robust optimization problem as in (2).
Given a model with parameter $\theta$ , loss function $L$ , and training data distribution $\mathcal{X}$ , the training algorithm aims to minimize the robust loss, which is defined as the maximum loss within a neighborhood $\{x + \delta | \delta \in S\}$ of each data point $x$ , leading to the following robust optimization problem: + +$$ +\min _ {\theta} E _ {(x, y) \in \mathcal {X}} \left[ \max _ {\delta \in S} L (x + \delta ; y; \theta) \right]. \tag {2} +$$ + +Madry et al. (2018) proposed to use projected gradient descent (PGD) to approximately solve the inner max and then use the loss on the perturbed example $x + \delta$ to update the model. Networks trained by this procedure achieve state-of-the-art test accuracy under strong attacks (Athalye et al., 2018; Wang et al., 2018a; Zheng et al., 2018). Despite being robust under strong attacks, models obtained by this PGD-based adversarial training do not have verified error guarantees. Due to the nonconvexity of neural networks, a PGD attack can only compute a lower bound of the robust loss (the inner maximization problem). Minimizing a lower bound of the inner max cannot guarantee that (2) is minimized. In other words, even if a PGD attack cannot find a perturbation with large loss, that does not mean no such perturbation exists. This becomes problematic in safety-critical applications since those models need certified safety. + +Verifiable adversarial training methods, on the other hand, aim to obtain a network with good robustness that can be verified efficiently. This can be done by combining adversarial training and robustness verification: instead of using PGD to find a lower bound of the inner max, certified adversarial training uses a verification method to find an upper bound of the inner max, and then updates the parameters based on this upper bound of the robust loss. Minimizing an upper bound of the inner max guarantees that the robust loss is minimized.
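As a concrete illustration of approximately solving the inner maximization in (2), the following is a minimal numpy sketch of an $\ell_{\infty}$ PGD attack. This is our own illustrative code, not the authors' implementation: `grad_fn` is an assumed callable returning the input gradient of the loss, and the random start and `2.5 * eps / steps` step size are common heuristic choices.

```python
import numpy as np

def pgd_linf(grad_fn, x, eps, steps=40, alpha=None, rng=None):
    """Approximate the inner max of Eq. (2) with projected gradient ascent
    inside the l_inf ball of radius eps around x.

    grad_fn(x) is assumed to return the gradient of the loss w.r.t. the input."""
    rng = rng if rng is not None else np.random.default_rng()
    step = alpha if alpha is not None else 2.5 * eps / steps  # heuristic step size
    delta = rng.uniform(-eps, eps, size=x.shape)              # random start in the ball
    for _ in range(steps):
        delta = delta + step * np.sign(grad_fn(x + delta))    # signed gradient ascent step
        delta = np.clip(delta, -eps, eps)                     # project back onto the ball
    return x + delta
```

Because PGD only exhibits one feasible perturbation, the loss it attains is a lower bound of the inner max, which is exactly why PGD-trained models lack certified guarantees.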
There are two certified robust training methods that are related to our work and we describe them in detail below. + +Linear Relaxation Based Verifiable Adversarial Training. One of the most popular verifiable adversarial training methods was proposed in (Wong & Kolter, 2018), using linear relaxations of neural networks to give an upper bound of the inner max. Other similar approaches include Mirman et al. (2018); Wang et al. (2018a); Dvijotham et al. (2018b). Since the bound propagation process of a convex adversarial polytope is too expensive, several methods were proposed to improve its efficiency, like Cauchy projection (Wong et al., 2018) and dynamic mixed training (Wang et al., 2018a). However, even with these speed-ups, the training process is still slow. Also, this method may significantly reduce a model's standard accuracy (accuracy on the natural, unmodified test set). As we will demonstrate shortly, we find that this method tends to over-regularize the network during training, which is harmful for obtaining good accuracy. + +Interval Bound Propagation (IBP). Interval Bound Propagation (IBP) uses a very simple rule to compute the pre-activation outer bounds for each layer of the neural network. Unlike linear relaxation based methods, IBP does not relax ReLU neurons and does not consider the correlations between neurons of different layers, yielding much looser bounds. Mirman et al. (2018) proposed a variety of abstract domains to give sound over-approximations for neural networks, including the "Box/Interval Domain" (referred to as IBP in Gowal et al. (2018)) and showed that it could scale to much larger networks than other works (Raghunathan et al., 2018a) could at the time. Gowal et al. (2018) demonstrated that IBP could outperform many state-of-the-art results by a large margin with more precise approximations for the last linear layer and better training schemes.
However, IBP can be unstable to use and hard to tune in practice, since the bounds can be very loose especially during the initial phase of training, posing a challenge to the optimizer. To mitigate instability, Gowal et al. (2018) use a mixture of regular and minimax robust cross-entropy loss as the model's training loss. + +# 3 METHODOLOGY + +Notation. We define an $L$ -layer feed-forward neural network recursively as: + +$$ +f (\boldsymbol {x}) = z ^ {(L)} \quad z ^ {(l)} = \mathbf {W} ^ {(l)} h ^ {(l - 1)} + \boldsymbol {b} ^ {(l)} \quad \mathbf {W} ^ {(l)} \in \mathbb {R} ^ {n _ {l} \times n _ {l - 1}} \quad \boldsymbol {b} ^ {(l)} \in \mathbb {R} ^ {n _ {l}} +$$ + +$$ +h ^ {(l)} = \sigma^ {(l)} (z ^ {(l)}), \quad \forall l \in \{1, \dots , L - 1 \}, +$$ + +where $h^{(0)}(\pmb{x}) = \pmb{x}$ , $n_0$ represents input dimension and $n_L$ is the number of classes, $\sigma$ is an element-wise activation function. We use $z$ to represent pre-activation neuron values and $h$ to represent + +
| Dataset | ε (ℓ∞ norm) | CAP verified error | CROWN verified error | IBP verified error |
| --- | --- | --- | --- | --- |
| MNIST | 0.1 | 8.90% | 7.05% | 5.83% |
| MNIST | 0.2 | 45.37% | 24.17% | 7.37% |
| MNIST | 0.3 | 97.77% | 65.26% | 10.68% |
| MNIST | 0.4 | 99.98% | 99.57% | 16.76% |
| Fashion-MNIST | 0.1 | 44.64% | 36.85% | 23.49% |
| CIFAR-10 | 2/255 | 62.94% | 60.83% | 58.75% |
| CIFAR-10 | 8/255 | 91.44% | 82.68% | 73.34% |
+ +Table 1: IBP trained models have low IBP verified errors, but when verified with a typically much tighter bound, including convex adversarial polytope (CAP) (Wong et al., 2018) and CROWN (Zhang et al., 2018), the verified errors increase significantly. CROWN is generally tighter than convex adversarial polytope; however, the gap between CROWN and IBP is still large, especially at large $\epsilon$ . We used a 4-layer CNN network for all datasets to compute these bounds. $^{1}$ + +post-activation neuron values. Given an input example $\pmb{x}_k$ with ground-truth label $y_k$ , we define a set $S(\pmb{x}_k, \epsilon) = \{\pmb{x} | \| \pmb{x} - \pmb{x}_k \|_{\infty} \leq \epsilon\}$ and we desire a robust network to have the property $y_k = \operatorname{argmax}_j [f(\pmb{x})]_j$ for all $\pmb{x} \in S$ . We define element-wise upper and lower bounds for $z^{(l)}$ and $h^{(l)}$ as $\underline{z}^{(l)} \leq z^{(l)} \leq \overline{z}^{(l)}$ and $\underline{h}^{(l)} \leq h^{(l)} \leq \overline{h}^{(l)}$ . + +Verification Specifications. The neural network verification literature typically defines a specification vector $\pmb{c} \in \mathbb{R}^{n_L}$ that gives a linear combination of the neural network output: $\pmb{c}^{\top} f(\pmb{x})$ . In robustness verification, typically we set $c_{i} = 1$ where $i$ is the ground truth class label, $c_{j} = -1$ where $j$ is the attack target label, and other elements in $c$ are 0. This represents the margin between class $i$ and class $j$ . For an $n_{L}$ class classifier and a given label $y$ , we define a specification matrix $C \in \mathbb{R}^{n_L \times n_L}$ as: + +$$ +C _ {i, j} = \left\{ \begin{array}{l l} 1, & \text {if } j = y, i \neq y \text { (output of ground truth class)} \\ - 1, & \text {if } i = j, i \neq y \text { (output of other classes, negated)} \\ 0, & \text {otherwise (note that the } y \text {-th row contains all 0)} \end{array} \right.
\tag {3} +$$ + +Importantly, each element in vector $\boldsymbol{m} \coloneqq Cf(\boldsymbol{x}) \in \mathbb{R}^{n_L}$ gives us margins between class $y$ and all other classes. We define the lower bound of $Cf(\boldsymbol{x})$ for all $\boldsymbol{x} \in S(\boldsymbol{x}_k, \epsilon)$ as $\underline{\boldsymbol{m}}(\boldsymbol{x}_k, \epsilon)$ , which is a very important quantity: when all elements of $\underline{\boldsymbol{m}}(\boldsymbol{x}_k, \epsilon) > 0$ , $\boldsymbol{x}_k$ is verifiably robust for any perturbation with $\ell_{\infty}$ norm less than $\epsilon$ . $\underline{\boldsymbol{m}}(\boldsymbol{x}_k, \epsilon)$ can be obtained by a neural network verification algorithm, such as convex adversarial polytope, IBP, or CROWN. Additionally, Wong & Kolter (2018) showed that for cross-entropy (CE) loss: + +$$ +\max _ {\boldsymbol {x} \in S \left(\boldsymbol {x} _ {k}, \epsilon\right)} L (f (\boldsymbol {x}); y; \theta) \leq L (- \underline {{\boldsymbol {m}}} \left(\boldsymbol {x} _ {k}, \epsilon\right); y; \theta). \tag {4} +$$ + +(4) gives us the opportunity to solve the robust optimization problem (2) via minimizing this tractable upper bound of inner-max. This guarantees that $\max_{\boldsymbol{x} \in S(\boldsymbol{x}_k, \epsilon)} L(f(\boldsymbol{x}), y)$ is also minimized. + +# 3.1 ANALYSIS OF IBP AND LINEAR RELAXATION BASED VERIFIABLE TRAINING METHODS + +Interval Bound Propagation (IBP) Interval Bound Propagation (IBP) uses a simple bound propagation rule. For the input layer we set $\boldsymbol{x}_L \leq \boldsymbol{x} \leq \boldsymbol{x}_U$ element-wise. 
For affine layers we have: + +$$ +\bar {z} ^ {(l)} = \mathbf {W} ^ {(l)} \frac {\bar {h} ^ {(l - 1)} + \underline {{h}} ^ {(l - 1)}}{2} + | \mathbf {W} ^ {(l)} | \frac {\bar {h} ^ {(l - 1)} - \underline {{h}} ^ {(l - 1)}}{2} + b ^ {(l)} \tag {5} +$$ + +$$ +\underline {{z}} ^ {(l)} = \mathbf {W} ^ {(l)} \frac {\bar {h} ^ {(l - 1)} + \underline {{h}} ^ {(l - 1)}}{2} - | \mathbf {W} ^ {(l)} | \frac {\bar {h} ^ {(l - 1)} - \underline {{h}} ^ {(l - 1)}}{2} + b ^ {(l)} \tag {6} +$$ + +where $|\mathbf{W}^{(l)}|$ takes element-wise absolute value. Note that $\overline{h}^{(0)} = \pmb{x}_U$ and $\underline{h}^{(0)} = \pmb{x}_L$ . For element-wise monotonically increasing activation functions $\sigma$ , + +$$ +\bar {h} ^ {(l)} = \sigma (\bar {z} ^ {(l)}) \quad \underline {{h}} ^ {(l)} = \sigma (\underline {{z}} ^ {(l)}). \tag {7} +$$ + +We found that IBP can be viewed as training a simple augmented ReLU network which is friendly to optimizers (see Appendix A for more discussions). We also found that a network trained using IBP can obtain low verified errors when verified using IBP, but it can get much worse verified errors when verified using linear relaxation based methods, including convex adversarial polytope (CAP) by Wong & Kolter (2018) (equivalently, Fast-Lin by Weng et al. (2018)) and CROWN (Zhang et al., 2018). Table 1 demonstrates that this gap can be very large at large $\epsilon$ . + +However, IBP is a very loose bound during the initial phase of training, which makes training unstable and hard to tune; purely using IBP frequently leads to divergence. Gowal et al.
(2018) proposed to use an $\epsilon$ schedule where $\epsilon$ is gradually increased during training, and a mixture of robust cross-entropy loss with natural cross-entropy loss as the objective to stabilize training: + +$$ +\min _ {\theta} \underset {(\boldsymbol {x}, y) \in \mathcal {X}} {E} \left[ \kappa L (\boldsymbol {x}; y; \theta) + (1 - \kappa) L \left(- \underline {{\boldsymbol {m}}} _ {\mathrm {IBP}} (\boldsymbol {x}, \epsilon); y; \theta\right) \right]. \tag {8} +$$ + +Issues with linear relaxation based training. Since IBP greatly outperforms linear relaxation based methods in many settings in the recent work (Gowal et al., 2018), we want to understand what is going wrong with linear relaxation based methods. We found that, empirically, the norm of the weights in the models produced by linear relaxation based methods such as (Wong & Kolter, 2018) and (Wong et al., 2018) does not change or even decreases during training.
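As a point of reference for the comparisons that follow, the IBP rules (5)-(7) amount to only a few matrix operations per layer. The sketch below is a minimal numpy illustration in our own notation (not the reference implementation), propagating an input box through a stack of affine layers with ReLU in between:

```python
import numpy as np

def ibp_bounds(x, eps, layers):
    """Propagate the input box [x - eps, x + eps] through affine + ReLU layers
    with the IBP rules of Eqs. (5)-(7). `layers` is a list of (W, b) pairs."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        mid, rad = (hi + lo) / 2.0, (hi - lo) / 2.0
        center = W @ mid + b              # Eqs. (5)-(6): center of the output box
        radius = np.abs(W) @ rad          # the radius passes through |W|
        lo, hi = center - radius, center + radius
        if i < len(layers) - 1:           # Eq. (7): monotone activation (ReLU here)
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi
```

The bounds are sound (every reachable output lies inside them), but the box grows with depth, which is the looseness issue discussed above.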
For the small network + +![](images/8f5c633163dc4c3244810a4e5df9825d8435603bcda81ae44ae48f904abb13f3.jpg) +Figure 1: Verified error and 2nd CNN layer's $\ell_{\infty}$ induced norm for a model trained using (Wong et al., 2018) and CROWN-IBP. $\epsilon$ is increased from 0 to 0.3 in 60 epochs. + +in Figure 1, convex adversarial polytope (with 50 random Cauchy projections) is 8 times slower and takes 4 times more memory than CROWN-IBP (without using random projections). Convex adversarial polytope scales even worse for larger networks; see Appendix J for a comparison. + +# 3.2 THE PROPOSED ALGORITHM: CROWN-IBP + +Overview. We have reviewed IBP and linear relaxation based methods above. As shown in Gowal et al. (2018), IBP performs well at large $\epsilon$ with much smaller verified error, and also efficiently scales to large networks; however, it can be sensitive to hyperparameters due to its very imprecise bound at the beginning phase of training. On the other hand, linear relaxation based methods can give tighter lower bounds at the cost of high computational expenses, but it over-regulates the network at large $\epsilon$ and forbids us to achieve good standard and verified accuracy. 
We propose CROWN-IBP, a new certified defense where we optimize the following problem ( $\theta$ represents the network parameters): + +$$ +\min _ {\theta} \underset {(\boldsymbol {x}, y) \in \mathcal {X}} {E} \left[ \kappa \underbrace {L (\boldsymbol {x} ; y ; \theta)} _ {\text {natural loss}} + (1 - \kappa) \underbrace {L \left(- \left(\overbrace {(1 - \beta) \underline {{\boldsymbol {m}}} _ {\mathrm {IBP}} (\boldsymbol {x} , \epsilon)} ^ {\text {IBP bound}} + \overbrace {\beta \underline {{\boldsymbol {m}}} _ {\mathrm {CROWN - IBP}} (\boldsymbol {x} , \epsilon)} ^ {\text {CROWN-IBP bound}} \right) ; y ; \theta\right)} _ {\text {robust loss}} \right], \tag {9} +$$ + +where our lower bound of margin $\underline{m}(\boldsymbol{x}, \epsilon)$ is a combination of two bounds with different natures: IBP, and a CROWN-style bound (which will be detailed below); $L$ is the cross-entropy loss. Note that the combination is inside the loss function and is thus still a valid lower bound; thus (4) still holds and we are within the minimax robust optimization theoretical framework. Similar to IBP and TRADES (Zhang et al., 2019a), we use a mixture of natural and robust training loss with parameter $\kappa$ , allowing us to explicitly trade off between clean accuracy and verified accuracy. + +At a high level, the computation of the lower bounds of CROWN-IBP $(\underline{m}_{\mathrm{CROWN - IBP}}(\pmb {x},\epsilon))$ consists of IBP bound propagation in a forward bounding pass and CROWN-style bound propagation in a backward bounding pass. We discuss the details of the CROWN-IBP algorithm below. + +Forward Bound Propagation in CROWN-IBP. In CROWN-IBP, we first obtain $\overline{z}^{(l)}$ and $\underline{z}^{(l)}$ for all layers by applying (5), (6) and (7). Then we will obtain $\underline{\boldsymbol{m}}_{\mathrm{IBP}}(\boldsymbol{x},\epsilon) = \underline{z}^{(L)}$ (assuming $C$ is merged into $\mathbf{W}^{(L)}$ ).
The time complexity is comparable to two forward propagation passes of the network. + +Linear Relaxation of ReLU Neurons. Given $\underline{z}^{(l)}$ and $\overline{z}^{(l)}$ computed in the previous step, we first check if some neurons are always active ( $\underline{z}_k^{(l)} > 0$ ) or always inactive ( $\overline{z}_k^{(l)} < 0$ ), since they are effectively linear and no relaxations are needed. For the remaining unstable neurons, Zhang et al. (2018); Wong & Kolter (2018) give a linear relaxation for the ReLU activation function: + +$$ +\alpha_ {k} z _ {k} ^ {(l)} \leq \sigma \left(z _ {k} ^ {(l)}\right) \leq \frac {\bar {z} _ {k} ^ {(l)}}{\bar {z} _ {k} ^ {(l)} - \underline {{z}} _ {k} ^ {(l)}} z _ {k} ^ {(l)} - \frac {\bar {z} _ {k} ^ {(l)} \underline {{z}} _ {k} ^ {(l)}}{\bar {z} _ {k} ^ {(l)} - \underline {{z}} _ {k} ^ {(l)}}, \quad \text {for all } k \in [ n _ {l} ] \text { and } \underline {{z}} _ {k} ^ {(l)} < 0 < \bar {z} _ {k} ^ {(l)}, \tag {10} +$$ + +where $0 \leq \alpha_{k} \leq 1$ ; Zhang et al. (2018) propose to adaptively select $\alpha_{k} = 1$ when $\overline{z}_k^{(l)} > |\underline{z}_k^{(l)}|$ and 0 otherwise, which minimizes the relaxation error. Following (10), for an input vector $z^{(l)}$ , we effectively replace the ReLU layer with a linear layer, giving upper or lower bounds of the output: + +$$ +\underline {{\mathbf {D}}} ^ {(l)} z ^ {(l)} \leq \sigma (z ^ {(l)}) \leq \overline {{\mathbf {D}}} ^ {(l)} z ^ {(l)} + \overline {c} ^ {(l)} \tag {11} +$$ + +where $\underline{\mathbf{D}}^{(l)}$ and $\overline{\mathbf{D}}^{(l)}$ are two diagonal matrices representing the "weights" of the relaxed ReLU layer. Other general activation functions can be supported similarly. In the following we focus on conceptually presenting the algorithm, while more details of each term can be found in the Appendix. + +Backward Bound Propagation in CROWN-IBP.
Unlike IBP, CROWN-style bounds start bounding from the last layer, so we refer to it as backward bound propagation (not to be confused with the back-propagation algorithm to obtain gradients). Suppose we want to obtain the lower bound $[\underline{m}_{\mathrm{CROWN - IBP}}(\pmb {x},\epsilon)]_i\coloneqq \underline{z}_i^{(L)}$ (we assume the specification matrix $C$ has been merged into $\mathbf{W}^{(L)}$ ). The input to layer $\mathbf{W}^{(L)}$ is $\sigma (z^{(L - 1)})$ , which can be bounded linearly by Eq. (11). CROWN-style bounds choose the lower bound of $\sigma (z_k^{(L - 1)})$ (LHS of (11)) when $\mathbf{W}_{i,k}^{(L)}$ is positive, and choose the upper bound otherwise. We then merge $\mathbf{W}^{(L)}$ and the linearized ReLU layer together and define: + +$$ +\mathbf {A} _ {i,:} ^ {(L - 1)} = \mathbf {W} _ {i,:} ^ {(L)} \mathbf {D} ^ {i, (L - 1)}, \quad \text {where} \quad \mathbf {D} _ {k, k} ^ {i, (L - 1)} = \left\{ \begin{array}{l l} \underline {{\mathbf {D}}} _ {k, k} ^ {(L - 1)}, & \text {if } \mathbf {W} _ {i, k} ^ {(L)} > 0 \\ \overline {{\mathbf {D}}} _ {k, k} ^ {(L - 1)}, & \text {if } \mathbf {W} _ {i, k} ^ {(L)} \leq 0 \end{array} \right. \tag {12} +$$ + +Now we have a lower bound $\underline{z}_i^{(L)} = \mathbf{A}_{i,:}^{(L - 1)}z^{(L - 1)} + \underline{b}_i^{(L - 1)}\leq z_i^{(L)}$ where $\underline{b}_i^{(L - 1)} = \sum_{k,\mathbf{W}_{i,k}^{(L)} < 0}\mathbf{W}_{i,k}^{(L)}\overline{c}_k^{(L - 1)} + b_i^{(L)}$ collects all terms not related to $z^{(L - 1)}$ . Note that the diagonal matrix $\mathbf{D}^{i,(L - 1)}$ implicitly depends on $i$ . Then, we merge $\mathbf{A}_{i,:}^{(L - 1)}$ with the next linear layer, which is straightforward by plugging in $z^{(L - 1)} = \mathbf{W}^{(L - 1)}\sigma (z^{(L - 2)}) + \pmb {b}^{(L - 1)}$ : + +$$ +z _ {i} ^ {(L)} \geq \mathbf {A} _ {i,:} ^ {(L - 1)} \mathbf {W} ^ {(L - 1)} \sigma (z ^ {(L - 2)}) + \mathbf {A} _ {i,:} ^ {(L - 1)} \boldsymbol {b} ^ {(L - 1)} + \underline {{b}} _ {i} ^ {(L - 1)}.
+$$ + +Then we continue to unfold the next ReLU layer $\sigma(z^{(L-2)})$ using its linear relaxations, and compute a new $\mathbf{A}^{(L-2)} \in \mathbb{R}^{n_L \times n_{L-2}}$ matrix, with $\mathbf{A}_{i,:}^{(L-2)} = \mathbf{A}_{i,:}^{(L-1)}\mathbf{W}^{(L-1)}\mathbf{D}^{i,(L-2)}$ in a similar manner as in (12). Along with the bound propagation process, we need to compute a series of matrices, $\mathbf{A}^{(L-1)}, \dots, \mathbf{A}^{(0)}$ , where $\mathbf{A}_{i,:}^{(l)} = \mathbf{A}_{i,:}^{(l+1)}\mathbf{W}^{(l+1)}\mathbf{D}^{i,(l)} \in \mathbb{R}^{n_L \times n_{(l)}}$ , and $\mathbf{A}_{i,:}^{(0)} = \mathbf{A}_{i,:}^{(1)}\mathbf{W}^{(1)} = \mathbf{W}_{i,:}^{(L)}\mathbf{D}^{i,(L-1)}\mathbf{W}^{(L-1)}\mathbf{D}^{i,(L-2)}\mathbf{W}^{(L-2)} \dots \mathbf{D}^{i,(1)}\mathbf{W}^{(1)}$ . At this point, we have merged all layers of the network into a linear layer: $z_i^{(L)} \geq \mathbf{A}_{i,:}^{(0)}\pmb{x} + \underline{b}$ , where $\underline{b}$ collects all terms not related to $\pmb{x}$ . A lower bound for $z_i^{(L)}$ with $\pmb{x}_L \leq \pmb{x} \leq \pmb{x}_U$ can then be easily given as + +$$ +[ \underline {{\boldsymbol {m}}} _ {\text {CROWN-IBP}} ] _ {i} \equiv \underline {{z}} _ {i} ^ {(L)} = \mathbf {A} _ {i,:} ^ {(0)} \boldsymbol {x} + \underline {{b}} \geq \sum_ {k, \mathbf {A} _ {i, k} ^ {(0)} < 0} \mathbf {A} _ {i, k} ^ {(0)} \boldsymbol {x} _ {U, k} + \sum_ {k, \mathbf {A} _ {i, k} ^ {(0)} > 0} \mathbf {A} _ {i, k} ^ {(0)} \boldsymbol {x} _ {L, k} + \underline {{b}} \tag {13} +$$ + +For ReLU networks, convex adversarial polytope (Wong & Kolter, 2018) uses a very similar bound propagation procedure. CROWN-style bounds allow an adaptive selection of $\alpha_{i}$ in (10), and thus often give better bounds (e.g., see Table 1). We give details on each term in Appendix L. + +Computational Cost.
Ordinary CROWN (Zhang et al., 2018) and convex adversarial polytope (Wong & Kolter, 2018) use (13) to compute all intermediate layers' $\underline{z}_i^{(m)}$ and $\overline{z}_i^{(m)}$ ( $m \in [L]$ ), by considering $\mathbf{W}^{(m)}$ as the final layer of the network. For each layer $m$ , we need a different set of $m$ matrices $\mathbf{A}^{m,(l)}, l \in \{m - 1, \dots, 0\}$ . This causes three computational issues: + +- Unlike the last layer $\mathbf{W}^{(L)}$ , an intermediate layer $\mathbf{W}^{(m)}$ typically has a much larger output dimension $n_m \gg n_L$ thus all $\mathbf{A}^{m,(l)} \in \{\mathbf{A}^{m,(m-1)}, \dots, \mathbf{A}^{m,(0)}\}$ have large dimensions $\mathbb{R}^{n_m \times n_l}$ . +- Computation of all $\mathbf{A}^{m,(l)}$ matrices is expensive. Suppose the network has $n$ neurons for all $L - 1$ intermediate and input layers and $n_L \ll n$ neurons for the output layer (assuming $L \geq 2$ ), the time complexity of ordinary CROWN or convex adversarial polytope is $O\left(\sum_{l=1}^{L-2} l n^3 + (L - 1)n_Ln^2\right) = O((L - 1)^2n^3 + (L - 1)n_Ln^2) = O(Ln^2(Ln + n_L))$ . An ordinary forward propagation only takes $O(Ln^2)$ time per example, thus ordinary CROWN does not scale up to large networks for training, due to its quadratic dependency on $L$ and an extra factor-$Ln$ overhead. +- When both $\mathbf{W}^{(l)}$ and $\mathbf{W}^{(l - 1)}$ represent convolutional layers with small kernel tensors $\mathbf{K}^{(l)}$ and $\mathbf{K}^{(l - 1)}$ , there are no efficient GPU operations to form the matrix $\mathbf{W}^{(l)}\mathbf{D}^{(l - 1)}\mathbf{W}^{(l - 1)}$ using $\mathbf{K}^{(l)}$ and $\mathbf{K}^{(l - 1)}$ . Existing implementations either unfold at least one of the convolutional kernels to fully connected weights, or use sparse matrices to represent $\mathbf{W}^{(l)}$ and $\mathbf{W}^{(l - 1)}$ . They suffer from poor hardware efficiency on GPUs.
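To make the forward/backward structure concrete, the sketch below is a simplified numpy illustration for a two-layer ReLU network in the notation above (our own code, not the authors' GPU implementation): an IBP forward pass (Eqs. 5-6) supplies the intermediate pre-activation bounds, and a single CROWN-style backward pass (Eqs. 10-13) starting from the last layer concretizes the lower bound.

```python
import numpy as np

def crown_ibp_lower_bound(x, eps, W1, b1, W2, b2):
    """Lower-bound z^(2) = W2 relu(W1 x' + b1) + b2 over ||x' - x||_inf <= eps.
    IBP forward pass for intermediate bounds, CROWN-style backward pass after."""
    xl, xu = x - eps, x + eps
    # IBP forward pass (Eqs. 5-6) for the first layer's pre-activation bounds.
    mid, rad = (xu + xl) / 2.0, (xu - xl) / 2.0
    c, r = W1 @ mid + b1, np.abs(W1) @ rad
    zl, zu = c - r, c + r
    # ReLU relaxation (Eqs. 10-11): adaptive lower slope alpha; the upper line
    # has slope zu/(zu - zl) and intercept -zu*zl/(zu - zl) for unstable neurons.
    unstable = (zl < 0) & (zu > 0)
    Dl = np.where(zl >= 0, 1.0, 0.0)
    Dl[unstable] = (zu[unstable] > -zl[unstable]).astype(float)   # adaptive alpha
    Du = np.where(zl >= 0, 1.0, 0.0)
    Du[unstable] = zu[unstable] / (zu[unstable] - zl[unstable])
    cu = np.zeros_like(zl)
    cu[unstable] = -zu[unstable] * zl[unstable] / (zu[unstable] - zl[unstable])
    # Backward pass (Eq. 12): per output row, pick the lower relaxation where
    # W2 > 0 and the upper one where W2 <= 0; collect intercepts into the bias.
    D = np.where(W2 > 0, Dl, Du)                  # broadcasts Dl/Du over rows
    A1 = W2 * D                                   # A^(1) = W2 D^{i,(1)}
    b_low = (np.where(W2 < 0, W2, 0.0) * cu).sum(axis=1) + b2
    A0, b_low = A1 @ W1, b_low + A1 @ b1          # merge with the first affine layer
    # Concretize over the input box (Eq. 13).
    return np.where(A0 < 0, A0 * xu, A0 * xl).sum(axis=1) + b_low
```

The backward pass starts from the small last layer only, which is the source of the $O((L-1)n_L n^2)$ cost discussed next.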
+ +In CROWN-IBP, we use IBP to obtain bounds of intermediate layers, which takes only twice the regular forward propagation time $(O(Ln^{2}))$ , so the first and second issues do not arise. The time complexity of the backward bound propagation in CROWN-IBP is $O((L - 1)n_{L}n^{2})$ , only $n_{L}$ times slower than forward propagation and significantly more scalable than ordinary CROWN (which is $Ln$ times slower than forward propagation, where typically $n \gg n_{L}$ ). The third convolution issue is also not a concern, since we start from the last specification layer $\mathbf{W}^{(L)}$ which is a small fully connected layer. Suppose we need to compute $\mathbf{W}^{(L)}\mathbf{D}^{(L - 1)}\mathbf{W}^{(L - 1)}$ and $\mathbf{W}^{(L - 1)}$ is a convolutional layer with kernel $\mathbf{K}^{(L - 1)}$ ; we can efficiently compute $(\mathbf{W}^{(L - 1)\top}(\mathbf{D}^{(L - 1)}\mathbf{W}^{(L)\top}))^\top$ on GPUs using the transposed convolution operator with kernel $\mathbf{K}^{(L - 1)}$ , without unfolding any convolutional layers. Conceptually, the backward pass of CROWN-IBP propagates a small specification matrix $\mathbf{W}^{(L)}$ backwards, replacing affine layers with their transposed operators, and activation function layers with a diagonal matrix product. This allows efficient implementation and better scalability. + +Benefits of CROWN-IBP. Tightness, efficiency and flexibility are unique benefits of CROWN-IBP: + +- CROWN-IBP is based on CROWN, a tight linear relaxation based lower bound which can greatly improve the quality of bounds obtained by IBP to guide verifiable training and improve stability; +- CROWN-IBP avoids the high computational cost of convex relaxation based methods: the time complexity is reduced from $O(Ln^2(Ln + n_L))$ to $O(Ln^2n_L)$ , well suited to problems where the output size $n_L$ is much smaller than input and intermediate layers' sizes; also, there is no quadratic dependency on $L$ .
Thus, CROWN-IBP is efficient on relatively large networks;
- The objective (9) is strictly more general than IBP and allows the flexibility to exploit the strengths of both IBP (good for large $\epsilon$ ) and convex relaxation based methods (good for small $\epsilon$ ). We can slowly decrease $\beta$ to 0 during training to avoid the over-regularization problem, while keeping the initial phase of IBP training more stable by providing much tighter bounds; we can also keep $\beta = 1$ , which helps to outperform convex relaxation based methods in the small $\epsilon$ regime (e.g., $\epsilon = 2/255$ on CIFAR-10).

# 4 EXPERIMENTS

Models and training schedules. We evaluate CROWN-IBP on three models that are similar to the models used in (Gowal et al., 2018) on the MNIST and CIFAR-10 datasets with different $\ell_{\infty}$ perturbation norms. Here we denote the small, medium and large models in Gowal et al. (2018) as DM-small, DM-medium and DM-large. During training, we first warm up (regular training without robust loss)

for a fixed number of epochs and then increase $\epsilon$ from 0 to $\epsilon_{\mathrm{train}}$ using a ramp-up schedule of $R$ epochs. Similar techniques are also used in many other works (Wong et al., 2018; Wang et al., 2018a; Gowal et al., 2018). For both IBP and CROWN-IBP, a natural cross-entropy (CE) loss with weight $\kappa$ (as in Eq. (9)) may be added, and $\kappa$ is scheduled to linearly decrease from $\kappa_{\mathrm{start}}$ to $\kappa_{\mathrm{end}}$ within the $R$ ramp-up epochs. Gowal et al. (2018) used $\kappa_{\mathrm{start}} = 1$ and $\kappa_{\mathrm{end}} = 0.5$ . To understand the trade-off between verified accuracy and standard (clean) accuracy, we explore two more settings: $\kappa_{\mathrm{start}} = \kappa_{\mathrm{end}} = 0$ (without natural CE loss) and $\kappa_{\mathrm{start}} = 1$ , $\kappa_{\mathrm{end}} = 0$ .
For $\beta$ , a linear schedule during the ramp-up period is used, but we always set $\beta_{\mathrm{start}} = 1$ and $\beta_{\mathrm{end}} = 0$ , except that we set $\beta_{\mathrm{start}} = \beta_{\mathrm{end}} = 1$ for CIFAR-10 at $\epsilon = \frac{2}{255}$ . Detailed model structures and hyperparameters are in Appendix C. Our training code for IBP and CROWN-IBP, and pre-trained models, are publicly available.

Metrics. Verified error is the percentage of test examples where at least one element in the lower bounds $\underline{m}(\boldsymbol{x}_k, \epsilon)$ is $< 0$ . It is a guaranteed upper bound of the test error under any $\ell_{\infty}$ perturbations. We obtain $\underline{m}(\boldsymbol{x}_k, \epsilon)$ using IBP or CROWN-IBP (Eq. 13). We also report standard (clean) errors and errors under a 200-step PGD attack. PGD errors are lower bounds of test errors under $\ell_{\infty}$ perturbations.

Comparison to IBP. Table 2 presents the standard, verified and PGD errors under different $\epsilon$ for each dataset with different $\kappa$ settings. We test CROWN-IBP on the same model structures as in Table 1 of Gowal et al. (2018). These three models' architectures are presented in Table A in the Appendix. Here we only report results for the DM-large model, as it performs best under all settings; small and medium models are deferred to Table C in the Appendix. When $\kappa_{\mathrm{start}} = \kappa_{\mathrm{end}} = 0$ , no natural CE loss is added and the model focuses on minimizing verified error, but the lack of natural CE loss may lead to unstable training, especially for IBP; the $\kappa_{\mathrm{start}} = 1$ , $\kappa_{\mathrm{end}} = 0.5$ setting emphasizes minimizing standard error, usually at the cost of slightly higher verified error rates. $\kappa_{\mathrm{start}} = 1$ , $\kappa_{\mathrm{end}} = 0$ typically achieves the best balance.
We can observe that under the same $\kappa$ settings, CROWN-IBP outperforms IBP in both standard error and verified error. The benefits of CROWN-IBP are significant, especially when the model and $\epsilon$ are large. We highlight that CROWN-IBP reduces the verified error rate obtained by IBP from $8.21\%$ to $7.02\%$ on MNIST at $\epsilon = 0.3$ and from $55.88\%$ to $46.03\%$ on CIFAR-10 at $\epsilon = 2/255$ (it is the first time that an IBP based method outperforms the results of Wong et al. (2018), and our model also has better standard error). We also note that we are the first to obtain a verified error bound on CIFAR-10 at $\epsilon = 16/255$ .

Trade-off Between Standard Accuracy and Verified Accuracy. To show the trade-off between standard and verified accuracy, we evaluate the DM-large CIFAR-10 model with $\epsilon_{\mathrm{test}} = 8 / 255$ under different $\kappa$ settings, while keeping all other hyperparameters unchanged. For each $\kappa_{\mathrm{end}} \in \{0.5,0.25,0\}$ , we uniformly choose 11 values of $\kappa_{\mathrm{start}} \in [\kappa_{\mathrm{end}}, 1]$ . A larger $\kappa_{\mathrm{start}}$ or $\kappa_{\mathrm{end}}$ tends to produce better standard errors, so we can explicitly control the trade-off between standard accuracy and verified accuracy. In Figure 2 we plot the standard and verified errors of IBP and CROWN-IBP trained models with different $\kappa$ settings. Each cluster on the figure has 11 points, representing the 11 different $\kappa_{\mathrm{start}}$ values. Models with lower verified errors tend to have higher standard errors. However, CROWN-IBP clearly outperforms IBP with improvement on both standard and verified accuracy, and

![](images/658620f8946fc97408b62cda1f87979a562de6ccbc31fc34bcfd4eb2cf34d974.jpg)
Figure 2: Standard and verified errors of IBP and CROWN-IBP with different $\kappa_{\mathrm{start}}$ and $\kappa_{\mathrm{end}}$ values.
pushes the Pareto front towards the lower left corner, indicating overall better performance. To reach the same verified error of $70\%$ , CROWN-IBP can reduce the standard error from roughly $55\%$ to $45\%$ .

Training Stability. To discourage hand-tuning on a small set of models and to demonstrate the stability of CROWN-IBP over a broader range of models, we evaluate IBP and CROWN-IBP on a variety of small and medium sized model architectures (18 for MNIST and 17 for CIFAR-10), detailed in Appendix D. To evaluate training stability, we compare verified errors under different $\epsilon$ ramp-up schedule lengths ( $R = \{30,60,90,120\}$ on CIFAR-10 and $R = \{10,15,30,60\}$ on MNIST)

Table 2: The verified, standard (clean) and PGD attack errors for models trained using IBP and CROWN-IBP on MNIST and CIFAR-10. We only present performance of the DM-large model here due to limited space (see Table C for a full comparison). CROWN-IBP outperforms IBP under all $\kappa$ settings, and achieves state-of-the-art performance on both the MNIST and CIFAR datasets for all $\epsilon$ .
| Dataset | $\epsilon$ ($\ell_\infty$ norm) | Method | $\kappa_{\text{start}}$ | $\kappa_{\text{end}}$ | Standard (%) | Verified (%) | PGD (%) | Literature source | Lit. standard (%) | Lit. verified (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| MNIST | $\epsilon_{\text{test}}=0.1$ | IBP | 0 | 0 | 1.13 | 2.89 | 2.24 | Gowal et al. (2018) | 1.06 | 2.92* |
| | | | 1 | 0.5 | 1.08 | 2.75 | 2.02 | Dvijotham et al. (2018b) | 1.2 | 4.44 |
| | | | 1 | 0 | 1.14 | 2.81 | 2.11 | Xiao et al. (2019c) | 1.05 | 4.4 |
| | | CROWN-IBP | 0 | 0 | 1.17 | 2.36 | 1.91 | Wong et al. (2018) | 1.08 | 3.67 |
| | | | 1 | 0.5 | 0.95 | 2.38 | 1.77 | Mirman et al. (2018) | 1.0 | 3.4 |
| | | | 1 | 0 | 1.17 | 2.24 | 1.81 | | | |
| | $\epsilon_{\text{test}}=0.2$ | IBP | 0 | 0 | 3.45 | 6.46 | 6.00 | Gowal et al. (2018) | 1.66 | 4.53* |
| | | | 1 | 0.5 | 2.12 | 4.75 | 4.24 | Xiao et al. (2019c) | 1.9 | 10.21 |
| | | | 1 | 0 | 2.74 | 5.46 | 4.89 | | | |
| | | CROWN-IBP | 0 | 0 | 2.84 | 5.15 | 4.90 | | | |
| | | | 1 | 0.5 | 1.82 | 4.13 | 3.81 | | | |
| | | | 1 | 0 | 2.17 | 4.31 | 3.99 | | | |
| | $\epsilon_{\text{test}}=0.3$ | IBP | 0 | 0 | 3.45 | 9.76 | 8.42 | Gowal et al. (2018) | 1.66 | 8.21* |
| | | | 1 | 0.5 | 2.12 | 8.47 | 6.78 | Wong et al. (2018) | 14.87 | 43.1 |
| | | | 1 | 0 | 2.74 | 8.73 | 7.37 | Xiao et al. (2019c) | 2.67 | 19.32 |
| | | CROWN-IBP | 0 | 0 | 2.84 | 7.65 | 6.90 | | | |
| | | | 1 | 0.5 | 1.82 | 7.02 | 6.05 | | | |
| | | | 1 | 0 | 2.17 | 7.03 | 6.12 | | | |
| | $\epsilon_{\text{test}}=0.4$ | IBP | 0 | 0 | 3.45 | 16.19 | 12.73 | Gowal et al. (2018) | 1.66 | 15.01* |
| | | | 1 | 0.5 | 2.12 | 15.37 | 11.05 | | | |
| | | | 1 | 0 | 2.74 | 14.80 | 11.14 | | | |
| | | CROWN-IBP | 0 | 0 | 2.84 | 12.74 | 10.39 | | | |
| | | | 1 | 0.5 | 1.82 | 12.59 | 9.58 | | | |
| | | | 1 | 0 | 2.17 | 12.06 | 9.47 | | | |
| CIFAR-10 | $\epsilon_{\text{test}}=2/255$ § | IBP | 0 | 0 | 38.54 | 55.21 | 49.72 | Gowal et al. (2018) | 29.84 | 55.88* |
| | | | 1 | 0.5 | 33.77 | 58.48 | 50.54 | Mirman et al. (2018) | 38.0 | 47.8 |
| | | | 1 | 0 | 39.22 | 55.19 | 50.40 | Wong et al. (2018) | 31.72 | 46.11 |
| | | CROWN-IBP | 0 | 0 | 28.48 | 46.03 | 40.28 | Xiao et al. (2019c) | 38.88 | 54.07 |
| | | | 1 | 0.5 | 26.19 | 50.53 | 40.24 | | | |
| | | | 1 | 0 | 28.91 | 46.43 | 40.27 | | | |
| | $\epsilon_{\text{test}}=8/255$ | IBP | 0 | 0 | 59.41 | 71.22 | 68.96 | Gowal et al. (2018) | 50.51 | (68.44)† |
| | | | 1 | 0.5 | 49.01 | 72.68 | 68.14 | Dvijotham et al. (2018b) | 51.36 | 73.33 |
| | | | 1 | 0 | 58.43 | 70.81 | 68.73 | Xiao et al. (2019c) | 59.55 | 79.73 |
| | | CROWN-IBP | 0 | 0 | 54.02 | 66.94 | 65.42 | Wong et al. (2018) | 71.33 | 78.22 |
| | | | 1 | 0.5 | 45.47 | 69.55 | 65.74 | Mirman et al. (2019) | 59.8 | 76.8 |
| | $\epsilon_{\text{test}}=16/255$ | IBP | 0 | 0 | 68.97 | 78.12 | 76.66 | None; our best verified test error (76.80%) and standard test error (66.06%) are both better than Wong et al. (2018) at $\epsilon = 8/255$, despite our $\epsilon$ being twice larger. | | |
| | | | 1 | 0.5 | 59.46 | 80.85 | 76.97 | | | |
| | | | 1 | 0 | 68.88 | 78.91 | 76.95 | | | |
| | | CROWN-IBP | 0 | 0 | 67.17 | 77.27 | 75.76 | | | |
| | | | 1 | 0.5 | 56.73 | 78.20 | 74.87 | | | |
| | | | 1 | 0 | 66.06 | 76.80 | 75.23 | | | |
\* Verified errors reported in Table 4 of Gowal et al. (2018) are evaluated using mixed integer programming (MIP) and linear programming (LP), which are strictly smaller than IBP verified errors but computationally expensive. For a fair comparison, we use the IBP verified errors reported in their Table 3.
† According to direct communications with Gowal et al. (2018), achieving the 68.44% IBP verified error requires adding an extra PGD adversarial training loss. Without adding PGD, the verified error is 72.91% (LP/MIP verified) or 73.52% (IBP verified). Our result should be compared to 73.52%.
$\ddagger$ Although not explicitly mentioned, the CIFAR-10 models in (Gowal et al., 2018) are trained using $\epsilon_{\mathrm{train}} = 1.1\epsilon_{\mathrm{test}}$ . We thus follow their settings.
$\S$ We use $\beta_{\mathrm{start}} = \beta_{\mathrm{end}} = 1$ for this setting, and thus the CROWN-IBP bound $(\beta = 1)$ is used to evaluate the verified error.

and different $\kappa$ settings. Instead of reporting just the best model, we compare the best, worst and median verified errors over all models. Our results are presented in Figure 3: (a) is for MNIST with $\epsilon = 0.3$ ; (b) and (c) are for CIFAR-10 with $\epsilon = 8/255$ and $\epsilon = 2/255$ , respectively. We can observe that CROWN-IBP achieves better performance consistently under different schedule lengths. In addition, IBP with $\kappa = 0$ cannot stably converge on all models when the $\epsilon$ schedule is short; under other $\kappa$ settings, CROWN-IBP always performs better. We conduct additional training stability experiments on the MNIST and CIFAR-10 datasets under other model and $\epsilon$ settings and the observations are similar (see Appendix H).

# 5 CONCLUSIONS

We propose a new certified defense method, CROWN-IBP, by combining the fast interval bound propagation (IBP) bound and a tight linear relaxation based bound, CROWN.
Our method enjoys the high computational efficiency provided by IBP while leveraging the tight CROWN bound to stabilize training under the robust optimization framework, and provides the flexibility to trade off between the two. Our experiments show that CROWN-IBP consistently outperforms IBP baselines in both standard and verified errors and achieves state-of-the-art verified test errors for $\ell_{\infty}$ robustness.

![](images/030a5c7e80ff2b82d0499fc81d008b7e2624c3f9d915406d8cd4c0a259acae26.jpg)
(a) MNIST, $\epsilon = 0.3$ , best $7.46\%$

![](images/36f70bcd79d09721492196dffc26403a62a3bf0dfa63b78beefcba5c9e319c13.jpg)
Figure 3: Verified error vs. schedule length on 8 medium MNIST models and 8 medium CIFAR-10 models. The solid bars show median values of verified errors. $\kappa_{\mathrm{start}} = 1.0$ except for the $\kappa = 0$ setting. The upper and lower ends of an error bar are the worst and best verified error, respectively. For each schedule length, three color groups represent three different $\kappa$ settings.

![](images/9897e6cc38482396923976f30817a6059dab7f47651de90a029b4e836c72ed91.jpg)
(b) CIFAR, $\epsilon = \frac{8}{255}$ , best 70.51%
(c) CIFAR, $\epsilon = \frac{2}{255}$ , best 49.28%

# REFERENCES

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. International Conference on Machine Learning (ICML), 2018.
Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. Thermometer encoding: One hot way to resist adversarial examples. International Conference on Learning Representations, 2018.
Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3-14. ACM, 2017a.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks.
In 2017 38th IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017b. +Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. Attacking visual language grounding with adversarial examples: A case study on neural image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2587-2597, 2018. +Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019. +Krishnamurthy Dvijotham, Marta Garnelo, Alhussein Fawzi, and Pushmeet Kohli. Verification of deep probabilistic models. CoRR, abs/1812.02795, 2018a. URL http://arxiv.org/abs/1812.02795. +Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018b. +Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and Pushmeet Kohli. A dual approach to scalable verification of deep networks. UAI, 2018c. +Krishnamurthy Dj Dvijotham, Robert Stanforth, Sven Gowal, Chongli Qin, Soham De, and Pushmeet Kohli. Efficient neural network verification with exactness characterization. +Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, 2018. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. *ICLR*, 2015. +Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018. 
+ +Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In ICLR, 2018. +Warren He, James Wei, Xinyun Chen, Nicholas Carlini, and Dawn Song. Adversarial example defenses: ensembles of weak defenses are not strong. In Proceedings of the 11th USENIX Conference on Offensive Technologies, pp. 15-15. USENIX Association, 2017. +Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems (NIPS), pp. 2266-2276, 2017. +Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97-117. Springer, 2017. +Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In International Conference on Learning Representations, 2017. +Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. arXiv preprint arXiv:1802.03471, 2018. +Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Second-order adversarial attack and certifiable robustness. arXiv preprint arXiv:1809.03113, 2018. +Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Michael E Houle, Grant Schoenebeck, Dawn Song, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. In International Conference on Learning Representations (ICLR), 2018. +Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. +Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. 
In International Conference on Machine Learning, pp. 3575-3583, 2018. +Matthew Mirman, Gagandeep Singh, and Martin Vechev. A provable defense for deep residual networks. arXiv preprint arXiv:1903.12519, 2019. +Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582-597. IEEE, 2016. +Chongli Qin, Krishnamurthy Dj Dvijotham, Brendan O'Donoghue, Rudy Bunel, Robert Stanforth, Sven Gowal, Jonathan Uesato, Grzegorz Swirszcz, and Pushmeet Kohli. Verification of non-linear specifications for neural networks. ICLR, 2019. +Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. International Conference on Learning Representations (ICLR), arXiv preprint arXiv:1801.09344, 2018a. +Aditi Raghunathan, Jacob Steinhardt, and Percy S Liang. Semidefinite relaxations for certifying robustness to adversarial examples. In Advances in Neural Information Processing Systems, pp. 10900-10910, 2018b. +Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, and Sebastien Bubeck. Provably robust deep learning via adversarially trained smoothed classifiers. arXiv preprint arXiv:1906.04584, 2019a. +Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, and Pengchuan Zhang. A convex relaxation barrier to tight robust verification of neural networks. arXiv preprint arXiv:1902.08722, 2019b. +Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605, 2018. + +Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Puschel, and Martin Vechev. Fast and effective robustness certification. In Advances in Neural Information Processing Systems, pp. 10825-10836, 2018. +Gagandeep Singh, Timon Gehr, Markus Puschel, and Martin Vechev. 
Robustness certification with refinement. *ICLR*, 2019. +Aman Sinha, Hongseok Namkoong, and John Duchi. Certifying some distributional robustness with principled adversarial training. In ICLR, 2018. +Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766, 2017. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. +Shiqi Wang, Yizheng Chen, Ahmed Abdou, and Suman Jana. Mixtrain: Scalable training of formally robust neural networks. arXiv preprint arXiv:1811.02625, 2018a. +Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems, pp. 6369-6379, 2018b. +Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S Dhillon, and Luca Daniel. Towards fast computation of certified robustness for ReLU networks. In International Conference on Machine Learning, 2018. +Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, pp. 5283-5292, 2018. +Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J Zico Kolter. Scaling provable adversarial defenses. Advances in Neural Information Processing Systems (NIPS), 2018. +Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, and Dawn Song. Characterizing adversarial examples based on spatial consistency information for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 217-234, 2018a. +Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. *IJCAI* 18, 2018b. 
+Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. ICLR18, 2018c. +Chaowei Xiao, Ruizhi Deng, Bo Li, Taesung Lee, Benjamin Edwards, Jinfeng Yi, Dawn Song, Mingyan Liu, and Ian Molloy. Advit: Adversarial frames identifier based on temporal consistency in videos. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3968-3977, 2019a. +Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, and Mingyan Liu. Meshadv: Adversarial meshes for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6898-6907, 2019b. +Kai Y Xiao, Vincent Tjeng, Nur Muhammad Shafiullah, and Aleksander Madry. Training for faster adversarial robustness verification via inducing relu stability. ICLR, 2019c. +Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, and Xue Lin. Structured adversarial attack: Towards general implementation and better interpretability. arXiv preprint arXiv:1808.01664, 2018. +Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 7472-7482, Long Beach, California, USA, 09-15 Jun 2019a. PMLR. URL http://proceedings.mlr.press/v97/zhang19p.html. + +Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions. In Advances in Neural Information Processing Systems (NIPS), 2018. +Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon, and Cho-Jui Hsieh. The limitations of adversarial training and the blind-spot attack. *ICLR*, 2019b. +Huan Zhang, Pengchuan Zhang, and Cho-Jui Hsieh. 
Recurjac: An efficient recursive algorithm for bounding jacobian matrix of neural networks and its applications. AAAI Conference on Artificial Intelligence, 2019c.
Tianhang Zheng, Changyou Chen, and Kui Ren. Distributionally adversarial attack. arXiv preprint arXiv:1808.05537, 2018.

# A IBP AS A SIMPLE AUGMENTED NETWORK

Despite achieving great success, it is still an open question why IBP based methods significantly outperform convex relaxation based methods, given that convex relaxations usually provide significantly tighter bounds. We conjecture that IBP performs better because the bound propagation process can be viewed as a ReLU network with the same depth as the original network, and the IBP training process is effectively training this equivalent network for standard accuracy, as explained below.

Given a fixed neural network (NN) $f(\pmb{x})$ , IBP gives a very loose estimation of the output range of $f(\pmb{x})$ . However, during training, since the weights of this NN can be updated, we can equivalently view IBP as an augmented neural network, which we denote as an IBP-NN (Figure A). Unlike a usual network which takes an input $\pmb{x}_k$ with label $y_k$ , IBP-NN takes two points $\pmb{x}_L = \pmb{x}_k - \epsilon$ and $\pmb{x}_U = \pmb{x}_k + \epsilon$ as inputs (where $\pmb{x}_L \leq \pmb{x} \leq \pmb{x}_U$ , element-wise). The bound propagation process can be equivalently seen as forward propagation in a specially structured neural network, as shown in Figure A. After the last specification layer $C$ (typically merged into $\mathbf{W}^{(L)}$ ), we can obtain $\underline{\boldsymbol{m}}(\pmb{x}_k, \epsilon)$ . Then, $-\underline{\boldsymbol{m}}(\pmb{x}_k, \epsilon)$ is sent to the softmax layer for prediction.
Importantly, since $[\underline{\boldsymbol{m}}(\pmb{x}_k, \epsilon)]_{y_k} = 0$ (as the $y_k$ -th row in $C$ is always 0), the top-1 prediction of the augmented IBP network is $y_k$ if and only if all other elements of $\underline{\boldsymbol{m}}(\pmb{x}_k, \epsilon)$ are positive, i.e., the original network will predict correctly for all $\pmb{x}_L \leq \pmb{x} \leq \pmb{x}_U$ . When we train the augmented IBP network with ordinary cross-entropy loss and desire it to predict correctly on an input $\pmb{x}_k$ , we are implicitly doing robust optimization (Eq. (2)). + +![](images/89779d91c5b52392eb3b0417f462268d0d9b17484af25b5e139f958eb4bf6289.jpg) +Figure A: Interval Bound Propagation viewed as training an augmented neural network (IBP-NN). The inputs of IBP-NN are two images $\boldsymbol{x}_k + \epsilon$ and $\boldsymbol{x}_k - \epsilon$ . The output of IBP-NN is a vector of lower bounds of margins (denoted as $\underline{\boldsymbol{m}}$ ) between ground-truth class and all classes (including the ground-truth class itself) for all $\boldsymbol{x}_k - \epsilon \leq \boldsymbol{x} \leq \boldsymbol{x}_k + \epsilon$ . This vector $\underline{\boldsymbol{m}}$ is negated and sent into a regular softmax function to get model prediction. The top-1 prediction of softmax is correct if and only if all margins between the ground-truth class and other classes (except the ground truth class) are positive, i.e., the model is verifiably robust. Thus, an IBP-NN with low standard error guarantees low verified error on the original network. + +The simplicity of IBP-NN may help a gradient based optimizer to find better solutions. 
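As a simplified sketch of the margin-and-softmax construction above: here the margin lower bounds are formed naively from final-layer interval bounds, whereas IBP-NN merges the specification matrix $C$ into $\mathbf{W}^{(L)}$ for tighter results; the helper name and toy numbers are our own illustration.

```python
import numpy as np

def ibp_nn_loss(logits_lo, logits_hi, y):
    """Margin lower bounds and the IBP-NN surrogate loss.

    Row j of the specification C is e_y - e_j, so a (loose) lower bound on
    the margin z_y - z_j is lo_y - hi_j; the y-th entry is 0 by construction.
    Cross-entropy on -m_lo is the standard softmax loss of the augmented net.
    """
    m_lo = logits_lo[y] - logits_hi
    m_lo[y] = 0.0
    z = -m_lo
    z = z - z.max()                           # numerical stability
    log_softmax = z - np.log(np.exp(z).sum())
    return m_lo, -log_softmax[y]

lo = np.array([1.0, -0.5, 0.2])               # toy lower bounds on logits
hi = np.array([2.0,  0.5, 0.8])               # toy upper bounds on logits
m_lo, loss = ibp_nn_loss(lo, hi, y=0)
# verifiably robust for this example iff all non-y margins are positive
verified = bool(np.all(np.delete(m_lo, 0) > 0))
```

Minimizing this cross-entropy pushes all non-ground-truth margins positive, which is exactly the verified-robustness condition described above.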
On the other hand, while the computation of convex relaxation based bounds can also be cast as an equivalent network (e.g., the "dual network" in Wong & Kolter (2018)), its construction is significantly more complex, and sometimes requires non-differentiable indicator functions (the sets $\mathcal{I}^+$ , $\mathcal{I}^-$ and $\mathcal{I}$ in Wong & Kolter (2018)). As a consequence, it can be challenging for the optimizer to find a good solution, and the optimizer tends to naively make the bounds tighter by reducing the norms of the weight matrices and over-regularizing the network, as demonstrated in Figure 1.

# B TIGHTNESS COMPARISON BETWEEN IBP AND CROWN-IBP

Both IBP and CROWN-IBP produce lower bounds $\underline{m}(\boldsymbol{x}, \epsilon)$ , and a larger lower bound is of better quality. To measure the relative tightness of the two bounds, we average the bound difference over all training examples:

$$
\mathop{\mathbb{E}}_{(\boldsymbol{x}, \boldsymbol{y}) \in \mathcal{X}} \frac{1}{n_L} \mathbf{1}^\top \left( \underline{\boldsymbol{m}}_{\text{CROWN-IBP}}(\boldsymbol{x}, \epsilon) - \underline{\boldsymbol{m}}_{\text{IBP}}(\boldsymbol{x}, \epsilon) \right)
$$

A positive value indicates that CROWN-IBP is tighter than IBP. In Figure B we plot this averaged bound difference during the $\epsilon$ schedule for one MNIST model and one CIFAR-10 model. We can observe that during the early phase of training, when the $\epsilon$ schedule just starts, CROWN-IBP produces significantly better bounds than IBP. A tighter lower bound $\underline{m}(\boldsymbol{x}, \epsilon)$ gives a tighter upper bound for $\max_{\delta \in S} L(x + \delta; y; \theta)$ , making the minimax optimization problem (2) more effective to solve. As the training schedule proceeds, the model gradually learns how to make the IBP bounds tighter, and eventually the difference between the two bounds becomes close to 0.
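For illustration, the averaged tightness metric above is a few lines of numpy (the bound values below are made-up placeholders, not measured numbers):

```python
import numpy as np

# Illustrative lower-bound margins m(x, eps): one row per training example,
# one column per output specification (n_L = 3 here).
m_crown_ibp = np.array([[0.5, 0.2, 0.9],
                        [0.1, 0.4, 0.3]])
m_ibp       = np.array([[0.1, -0.2, 0.6],
                        [-0.3, 0.2, 0.1]])

n_L = m_ibp.shape[1]
# E_{(x,y)} (1/n_L) * 1^T (m_CROWN-IBP(x, eps) - m_IBP(x, eps))
tightness_gap = ((m_crown_ibp - m_ibp).sum(axis=1) / n_L).mean()
# A positive gap means CROWN-IBP bounds are tighter on average.
```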
![](images/b5228d6854cccf1b461c990b50b1223442317e4b3077dd4d126e80e6314383c3.jpg)
Figure B: Bound differences between IBP and CROWN-IBP for DM-large models during training. The bound difference is only computed during the $\epsilon$ schedule (epoch 10 to 60 for MNIST, and 320 to 1920 for CIFAR-10), as we do not compute CROWN-IBP bounds during the warm-up period or after the $\epsilon$ schedule.

![](images/520c9810b242014e9f904d240a880104d755b95a2e78191ae3eb7f8e89ddc8ed.jpg)

Why does CROWN-IBP stabilize IBP training? For a randomly initialized network or a naturally trained network, IBP bounds are very loose. But in Table 1, we show that a network trained using IBP can eventually obtain quite tight IBP bounds and high verified accuracy; the network can adapt to IBP bounds and learn a specific set of weights that makes IBP tight while still classifying examples correctly. However, since the training has to start from weights that produce loose IBP bounds, the beginning phase of IBP training can be challenging and is vitally important.

We observe that IBP training can have a large performance variance across models and initializations. IBP is also more sensitive to hyperparameters such as $\kappa$ and the schedule length; in Figure 3, many IBP models converge sub-optimally (large worst/median verified errors). The reason for this instability is that during the beginning phase of training, the loose bounds produced by IBP make the robust loss (9) ineffective, and it is challenging for the optimizer to reduce this loss and find a set of good weights that produce tight IBP verified bounds in the end.
Initially, tighter bounds can be provided by a convex relaxation based method like CROWN, and they are gradually replaced by IBP bounds (using $\beta_{\mathrm{start}} = 1$ , $\beta_{\mathrm{end}} = 0$ ), eventually leading to a model with learned tight IBP bounds in the end. + +# C MODELS AND HYPERPARAMETERS FOR COMPARISON TO IBP + +The goal of these experiments is to reproduce the performance reported in (Gowal et al., 2018) and demonstrate the advantage of CROWN-IBP under the same experimental settings. Specifically, to reproduce the IBP results, for CIFAR-10 we train using a large batch size and long training schedule on TPUs (we can also replicate these results on multi-GPUs using a reasonable amount of training time; see Section F). Also, for this set of experiments we use the same code base as in Gowal et al. (2018). For model performance on a comprehensive set of small and medium sized models trained on a single GPU, please see Table D in Section F, as well as the training stability experiments in Section 4 and Section H. + +The models structures (DM-small, DM-medium and DM-large) used in Table C and Table 2 are listed in Table A. These three model structures are the same as in Gowal et al. (2018). Training hyperparameters are detailed below: + +- For MNIST IBP baseline results, we follow exact the same set of hyperparameters as in (Gowal et al., 2018). We train 100 epochs (60K steps) with a batch size of 100, and use a warm-up and ramp-up duration of 2K and 10K steps. Learning rate for Adam optimizer is set to $1 \times 10^{-3}$ and decayed by 10X at steps 15K and 25K. Our IBP results match their reported numbers. Note that we always use IBP verified errors rather than MIP verified errors. We use the same schedule for CROWN-IBP with $\epsilon_{\mathrm{train}} = 0.2$ ( $\epsilon_{\mathrm{test}} = 0.1$ ) in Table C and Table 2. 
For $\epsilon_{\mathrm{train}} = 0.4$ , this schedule can obtain verified error rates of $4.22\%$ , $7.01\%$ and $12.84\%$ at $\epsilon_{\mathrm{test}} = \{0.2, 0.3, 0.4\}$ using the DM-large model, respectively.
- For MNIST CROWN-IBP with $\epsilon_{\text{train}} = 0.4$ in Table C and Table 2, we train 200 epochs with a batch size of 256. We use the Adam optimizer and set the learning rate to $5 \times 10^{-4}$ . We warm up with 10 epochs of regular training, and gradually ramp up $\epsilon$ from 0 to $\epsilon_{\text{train}}$ in 50 epochs. We reduce the learning rate by 10X at epochs 130 and 190. Using this schedule, IBP's performance becomes worse (by about $1 - 2\%$ in all settings), but this schedule improves the verified error for CROWN-IBP at $\epsilon_{\text{test}} = 0.4$ from $12.84\%$ to $12.06\%$ and does not affect verified errors at other $\epsilon_{\text{test}}$ levels.
- For CIFAR-10, we follow the setting in Gowal et al. (2018) and train 3200 epochs on 32 TPU cores. We use a batch size of 1024, and a learning rate of $5 \times 10^{-4}$ . We warm up for 320 epochs, and ramp up $\epsilon$ for 1600 epochs. The learning rate is reduced by 10X at epochs 2600 and 3040. We use random horizontal flips and random crops as data augmentation, and normalize images according to per-channel statistics. Note that this schedule is slightly different from the schedule used in (Gowal et al., 2018); we use a smaller batch size due to TPU memory constraints (we used TPUv2, which has half the memory capacity of the TPUv3 used in (Gowal et al., 2018)), and we also decay learning rates later. We found that this schedule improves both the IBP baseline performance and CROWN-IBP performance by around $1\%$ ; for example, at $\epsilon = 8/255$ , this improved schedule reduces the verified error from $73.52\%$ to $72.68\%$ for the IBP baseline ( $\kappa_{\mathrm{start}} = 1.0$ , $\kappa_{\mathrm{end}} = 0.5$ ) using the DM-large model.

Hyperparameters $\kappa$ and $\beta$ .
We use a linear schedule for both hyperparameters, decreasing $\kappa$ from $\kappa_{\mathrm{start}}$ to $\kappa_{\mathrm{end}}$ while moving $\beta$ from $\beta_{\mathrm{start}}$ to $\beta_{\mathrm{end}}$. The schedule length is set to the same length as the $\epsilon$ schedule.

In both IBP and CROWN-IBP, the hyperparameter $\kappa$ trades off between clean accuracy and verified accuracy. Figure 2 shows that $\kappa_{\mathrm{end}}$ can significantly affect this trade-off, while $\kappa_{\mathrm{start}}$ has only a minor impact compared to $\kappa_{\mathrm{end}}$. In general, we recommend $\kappa_{\mathrm{start}} = 1$ and $\kappa_{\mathrm{end}} = 0$ as a safe starting point, and $\kappa_{\mathrm{end}}$ can be adjusted to a larger value if better standard accuracy is desired. The setting $\kappa_{\mathrm{start}} = \kappa_{\mathrm{end}} = 0$ (pure minimax optimization) can be challenging for IBP, as there is no natural loss acting as a stabilizer; under this setting CROWN-IBP usually produces a model with good (sometimes best) verified accuracy but noticeably worse standard accuracy (on CIFAR-10 at $\epsilon = \frac{8}{255}$ the difference can be as large as $10\%$), so this setting is only recommended when a model with the best verified accuracy is desired at the cost of noticeably reduced standard accuracy.

Compared to IBP, CROWN-IBP adds one additional hyperparameter, $\beta$. $\beta$ has a clear meaning: it balances between the convex relaxation based bounds and the IBP bounds. $\beta_{\mathrm{start}}$ is always set to 1, as we want to use CROWN-IBP to obtain tighter bounds that stabilize the early phase of training, when IBP bounds are very loose; $\beta_{\mathrm{end}}$ determines whether we use a convex relaxation based bound $(\beta_{\mathrm{end}} = 1)$ or an IBP based bound $(\beta_{\mathrm{end}} = 0)$ after the $\epsilon$ schedule.
Thus, we set $\beta_{\mathrm{end}} = 1$ for the cases where the convex relaxation based method (Wong et al., 2018) can outperform IBP (e.g., CIFAR-10 $\epsilon = 2/255$), and $\beta_{\mathrm{end}} = 0$ for the cases where IBP outperforms convex relaxation based bounds. We do not tune or grid-search this hyperparameter.
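The linear schedules for $\kappa$ and $\beta$ (and for $\epsilon$ itself) can all be expressed with one interpolation helper. The sketch below is our own illustration of the schedules described above, not code from the paper:

```python
def linear_schedule(step, start_step, end_step, v_start, v_end):
    # Hold v_start before the ramp, v_end after it, and interpolate
    # linearly in between (usable for kappa, beta and epsilon alike).
    if end_step <= start_step:
        return v_end
    t = min(max((step - start_step) / (end_step - start_step), 0.0), 1.0)
    return (1.0 - t) * v_start + t * v_end

# Example: with a ramp running from step 2K to 12K,
# kappa goes 1 -> 0.5 and beta goes 1 -> 0 on the same schedule.
kappa = linear_schedule(7000, 2000, 12000, 1.0, 0.5)  # -> 0.75
beta = linear_schedule(7000, 2000, 12000, 1.0, 0.0)   # -> 0.5
```

Tying both hyperparameters to the same step counter as the $\epsilon$ schedule keeps them synchronized, as described above.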
| DM-Small | DM-Medium | DM-Large |
| --- | --- | --- |
| CONV 16 4×4+2 | CONV 32 3×3+1 | CONV 64 3×3+1 |
| CONV 32 4×4+1 | CONV 32 4×4+2 | CONV 64 3×3+1 |
| FC 100 | CONV 64 3×3+1 | CONV 128 3×3+2 |
| | CONV 64 4×4+2 | CONV 128 3×3+1 |
| | FC 512 | CONV 128 3×3+1 |
| | FC 512 | FC 512 |
Table A: Model structures from Gowal et al. (2018). "CONV $k$ $w \times h + s$" represents a 2D convolutional layer with $k$ filters of size $w \times h$ using a stride of $s$ in both dimensions. "FC $n$" represents a fully connected layer with $n$ outputs. The last fully connected layer is omitted. All networks use ReLU activation functions.

# D HYPERPARAMETERS AND MODEL STRUCTURES FOR TRAINING STABILITY EXPERIMENTS

In all our training stability experiments, we use a large number of relatively small models and train them on a single GPU. These small models cannot achieve state-of-the-art performance, but they can be trained quickly and cheaply, allowing us to explore training stability over a variety of settings and report min, median and max statistics. We use the following hyperparameters:

- For MNIST, we train 100 epochs with batch size 256. We use the Adam optimizer with a learning rate of $5 \times 10^{-4}$. The first epoch is standard training for warming up. We then gradually increase $\epsilon$ linearly per batch during training, with an $\epsilon$ schedule length of 60 epochs. We reduce the learning rate by $50\%$ every 10 epochs after the $\epsilon$ schedule ends. No data augmentation is used, and the whole $28 \times 28$ images are used (normalized to the 0-1 range).
- For CIFAR, we train 200 epochs with batch size 128. We use the Adam optimizer with a learning rate of $0.1\%$. The first 10 epochs are standard training for warming up. We then gradually increase $\epsilon$ linearly per batch during training, with an $\epsilon$ schedule length of 120 epochs. We reduce the learning rate by $50\%$ every 10 epochs after the $\epsilon$ schedule ends. We use random horizontal flips and random crops as data augmentation. The three channels are normalized with mean (0.4914, 0.4822, 0.4465) and standard deviation (0.2023, 0.1914, 0.2010). These numbers are the per-channel statistics of the training set used in (Gowal et al., 2018).
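The warm-up and per-batch linear $\epsilon$ ramp described in the bullets above can be sketched as follows (our illustration; the function and argument names are ours):

```python
def eps_at_batch(batch_idx, batches_per_epoch, warmup_epochs,
                 schedule_epochs, eps_train):
    # epsilon = 0 during warm-up (standard training), then a per-batch
    # linear ramp up to eps_train over `schedule_epochs` epochs.
    warmup_batches = warmup_epochs * batches_per_epoch
    ramp_batches = schedule_epochs * batches_per_epoch
    if batch_idx < warmup_batches:
        return 0.0
    t = min((batch_idx - warmup_batches) / ramp_batches, 1.0)
    return t * eps_train

# Halfway through a 60-epoch ramp (1 warm-up epoch, 100 batches/epoch):
eps = eps_at_batch(3100, 100, 1, 60, 0.3)  # -> 0.15
```

Ramping per batch rather than per epoch makes the perturbation magnitude grow smoothly, which matters for training stability at short schedule lengths.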
All verified error numbers are evaluated on the test set using IBP, since the networks are trained using IBP ($\beta = 0$ after $\epsilon$ reaches the target $\epsilon_{\mathrm{train}}$), except for CIFAR $\epsilon = \frac{2}{255}$, where we set $\beta = 1$ and thus compute the CROWN-IBP verified error.

Table B gives the 18 model structures used in our training stability experiments. These model structures are designed by us and are not used in Gowal et al. (2018). Most CIFAR-10 models share the same structures as the MNIST models (unless noted in the table), except that their input dimensions are different. Model A is too small for CIFAR-10, so we remove it for the CIFAR-10 experiments. Models A-J are the "small models" reported in Figure 3. Models K-T are the "medium models" reported in Figure 3. For the results in Table 1, we use a small model (model structure B) for all three datasets. These MNIST and CIFAR-10 models can each be trained on a single NVIDIA RTX 2080 Ti GPU within a few hours.

# E OMITTED RESULTS ON DM-SMALL AND DM-MEDIUM MODELS

In Table 2 we report results from the best DM-Large model. Table C presents the verified, standard (clean) and PGD attack errors for all three model structures used in (Gowal et al., 2018) (DM-Small, DM-Medium and DM-Large), trained on the MNIST and CIFAR-10 datasets. We evaluate IBP and CROWN-IBP under the same three $\kappa$ settings as in Table 2. We use the hyperparameters detailed in Section C to train these models. We can see that given any model structure and any $\kappa$ setting, CROWN-IBP consistently outperforms IBP.
| Name | Model Structure (all models have a last FC 10 layer, which is omitted) |
| --- | --- |
| A (MNIST Only) | Conv 4 4×4+2, Conv 8 4×4+2, FC 128 |
| B | Conv 8 4×4+2, Conv 16 4×4+2, FC 256 |
| C | Conv 4 3×3+1, Conv 8 3×3+1, Conv 8 4×4+4, FC 64 |
| D | Conv 8 3×3+1, Conv 16 3×3+1, Conv 16 4×4+4, FC 128 |
| E | Conv 4 5×5+1, Conv 8 5×5+1, Conv 8 5×5+4, FC 64 |
| F | Conv 8 5×5+1, Conv 16 5×5+1, Conv 16 5×5+4, FC 128 |
| G | Conv 4 3×3+1, Conv 4 4×4+2, Conv 8 3×3+1, Conv 8 4×4+2, FC 256, FC 256 |
| H | Conv 8 3×3+1, Conv 8 4×4+2, Conv 16 3×3+1, Conv 16 4×4+2, FC 256, FC 256 |
| I | Conv 4 3×3+1, Conv 4 4×4+2, Conv 8 3×3+1, Conv 8 4×4+2, FC 512, FC 512 |
| J | Conv 8 3×3+1, Conv 8 4×4+2, Conv 16 3×3+1, Conv 16 4×4+2, FC 512, FC 512 |
| K | Conv 16 3×3+1, Conv 16 4×4+2, Conv 32 3×3+1, Conv 32 4×4+2, FC 256, FC 256 |
| L | Conv 16 3×3+1, Conv 16 4×4+2, Conv 32 3×3+1, Conv 32 4×4+2, FC 512, FC 512 |
| M | Conv 32 3×3+1, Conv 32 4×4+2, Conv 64 3×3+1, Conv 64 4×4+2, FC 512, FC 512 |
| N | Conv 64 3×3+1, Conv 64 4×4+2, Conv 128 3×3+1, Conv 128 4×4+2, FC 512, FC 512 |
| O (MNIST Only) | Conv 64 5×5+1, Conv 128 5×5+1, Conv 128 4×4+4, FC 512 |
| P (MNIST Only) | Conv 32 5×5+1, Conv 64 5×5+1, Conv 64 4×4+4, FC 512 |
| Q | Conv 16 5×5+1, Conv 32 5×5+1, Conv 32 5×5+4, FC 512 |
| R | Conv 32 3×3+1, Conv 64 3×3+1, Conv 64 3×3+4, FC 512 |
| S (CIFAR-10 Only) | Conv 32 4×4+2, Conv 64 4×4+2, FC 128 |
| T (CIFAR-10 Only) | Conv 64 4×4+2, Conv 128 4×4+2, FC 256 |
Table B: Model structures used in our training stability experiments. We use ReLU activations for all models. We omit the last fully connected layer, as its output dimension is always 10. In the table, "Conv $k$ $w \times w + s$" represents a 2D convolutional layer with $k$ filters of size $w \times w$ and a stride of $s$. Models A-J are referred to as "small models" and models K-T are referred to as "medium models".

# F ADDITIONAL EXPERIMENTS ON SMALLER MODELS USING A SINGLE GPU

In this section we present additional experiments on a variety of smaller MNIST and CIFAR-10 models which can be trained on a single GPU. The purpose of these experiments is to compare model performance statistics (min, median and max) on a wide range of models, rather than a few hand-selected models. The model structures used in these experiments are detailed in Table B. In Table D, we present the best, median and worst verified and standard (clean) test errors for models trained on MNIST and CIFAR-10 using IBP and CROWN-IBP. Although these small models cannot achieve state-of-the-art performance, CROWN-IBP's best, median and worst verified errors among all model structures consistently outperform those of IBP. In particular, in many situations the worst-case verified error improves significantly with CROWN-IBP, because IBP training is not stable on some of the models.

It is worth noting that in this set of experiments we explore a different $\epsilon$ setting: $\epsilon_{\mathrm{train}} = \epsilon_{\mathrm{test}}$. We found that both IBP and CROWN-IBP tend to overfit to the training set on MNIST with small $\epsilon$, thus verified errors are not as good as those presented in Table C. This overfitting issue can be alleviated by using $\epsilon_{\mathrm{train}} > \epsilon_{\mathrm{test}}$ (as used in Table 2 and Table C), or by using an explicit $\ell_1$ regularization, which will be discussed in detail in Section I.
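To make the structure strings in Table B concrete, the small helper below (a hypothetical utility of ours, not part of any released code) parses them into layer descriptions:

```python
import re

def parse_model_spec(spec):
    # Parse a Table B style string such as "Conv 8 4 x 4+2, FC 256" into
    # ("conv", filters, (w, h), stride) and ("fc", units) tuples.
    layers = []
    for part in spec.split(","):
        part = part.strip()
        conv = re.match(r"Conv (\d+) (\d+)\s*[x×]\s*(\d+)\+(\d+)", part, re.I)
        if conv:
            k, w, h, s = map(int, conv.groups())
            layers.append(("conv", k, (w, h), s))
            continue
        fc = re.match(r"FC (\d+)", part, re.I)
        if fc:
            layers.append(("fc", int(fc.group(1))))
    return layers

# Model structure B from Table B:
layers = parse_model_spec("Conv 8 4 × 4+2, Conv 16 4 × 4+2, FC 256")
# -> [('conv', 8, (4, 4), 2), ('conv', 16, (4, 4), 2), ('fc', 256)]
```

Such a parser makes it easy to instantiate all 18 architectures from their one-line descriptions in a training sweep.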
Table C: The verified, standard (clean) and PGD attack errors for 3 model structures (DM-Small, DM-Medium, DM-Large) on the MNIST and CIFAR test sets. We evaluate IBP and CROWN-IBP under different $\kappa$ schedules. CROWN-IBP outperforms IBP under the same $\kappa$ setting, and also achieves state-of-the-art results for $\ell_{\infty}$ robustness on both the MNIST and CIFAR datasets for all $\epsilon$.
| Dataset | $\epsilon$ ($\ell_\infty$ norm) | Training Method | $\kappa_{\mathrm{start}}$ | $\kappa_{\mathrm{end}}$ | DM-Small Std. | DM-Small Ver. | DM-Small PGD | DM-Medium Std. | DM-Medium Ver. | DM-Medium PGD | DM-Large Std. | DM-Large Ver. | DM-Large PGD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | $\epsilon_{\mathrm{test}}=0.1$ | IBP | 0 | 0 | 1.92 | 4.16 | 3.88 | 1.53 | 3.26 | 2.82 | 1.13 | 2.89 | 2.24 |
| | | | 1 | 0.5 | 1.68 | 3.60 | 3.34 | 1.46 | 3.20 | 2.57 | 1.08 | 2.75 | 2.02 |
| | | | 1 | 0 | 2.14 | 4.24 | 3.94 | 1.48 | 3.21 | 2.77 | 1.14 | 2.81 | 2.11 |
| | | CROWN-IBP | 0 | 0 | 1.90 | 3.50 | 3.21 | 1.44 | 2.77 | 2.37 | 1.17 | 2.36 | 1.91 |
| | | | 1 | 0.5 | 1.60 | 3.51 | 3.19 | 1.14 | 2.64 | 2.23 | 0.95 | 2.38 | 1.77 |
| | | | 1 | 0 | 1.67 | 3.44 | 3.09 | 1.34 | 2.76 | 2.39 | 1.17 | 2.24 | 1.81 |
| | $\epsilon_{\mathrm{test}}=0.2$ | IBP | 0 | 0 | 5.08 | 9.80 | 9.36 | 3.68 | 7.38 | 6.77 | 3.45 | 6.46 | 6.00 |
| | | | 1 | 0.5 | 3.83 | 8.64 | 8.06 | 2.55 | 5.84 | 5.33 | 2.12 | 4.75 | 4.24 |
| | | | 1 | 0 | 6.25 | 11.32 | 10.84 | 3.89 | 7.21 | 6.68 | 2.74 | 5.46 | 4.89 |
| | | CROWN-IBP | 0 | 0 | 3.78 | 6.61 | 6.40 | 3.84 | 6.65 | 6.42 | 2.84 | 5.15 | 4.90 |
| | | | 1 | 0.5 | 2.96 | 6.11 | 5.74 | 2.37 | 5.35 | 4.90 | 1.82 | 4.13 | 3.81 |
| | | | 1 | 0 | 3.55 | 6.29 | 6.13 | 3.16 | 5.82 | 5.44 | 2.17 | 4.31 | 3.99 |
| | $\epsilon_{\mathrm{test}}=0.3$ | IBP | 0 | 0 | 5.08 | 14.42 | 13.30 | 3.68 | 10.97 | 9.66 | 3.45 | 9.76 | 8.42 |
| | | | 1 | 0.5 | 3.83 | 13.99 | 12.25 | 2.55 | 9.51 | 7.87 | 2.12 | 8.47 | 6.78 |
| | | | 1 | 0 | 6.25 | 16.51 | 15.07 | 3.89 | 10.4 | 9.17 | 2.74 | 8.73 | 7.37 |
| | | CROWN-IBP | 0 | 0 | 3.78 | 9.60 | 8.90 | 3.84 | 9.25 | 8.57 | 2.84 | 7.65 | 6.90 |
| | | | 1 | 0.5 | 2.96 | 9.44 | 8.26 | 2.37 | 8.54 | 7.74 | 1.82 | 7.02 | 6.05 |
| | | | 1 | 0 | 3.55 | 9.40 | 8.50 | 3.16 | 8.62 | 7.65 | 2.17 | 7.03 | 6.12 |
| | $\epsilon_{\mathrm{test}}=0.4$ | IBP | 0 | 0 | 5.08 | 23.40 | 20.15 | 3.68 | 18.34 | 14.75 | 3.45 | 16.19 | 12.73 |
| | | | 1 | 0.5 | 3.83 | 24.16 | 19.97 | 2.55 | 16.82 | 12.83 | 2.12 | 15.37 | 11.05 |
| | | | 1 | 0 | 6.25 | 26.81 | 22.78 | 3.89 | 16.99 | 13.81 | 2.74 | 14.80 | 11.14 |
| | | CROWN-IBP | 0 | 0 | 3.78 | 15.21 | 13.34 | 3.84 | 14.58 | 12.69 | 2.84 | 12.74 | 10.39 |
| | | | 1 | 0.5 | 2.96 | 16.04 | 12.91 | 2.37 | 14.97 | 12.47 | 1.82 | 12.59 | 9.58 |
| | | | 1 | 0 | 3.55 | 15.55 | 13.11 | 3.16 | 14.19 | 11.31 | 2.17 | 12.06 | 9.47 |
| CIFAR-10 | $\epsilon_{\mathrm{test}}=2/255$ | IBP | 0 | 0 | 44.66 | 56.38 | 54.15 | 39.12 | 53.86 | 49.77 | 38.54 | 55.21 | 49.72 |
| | | | 1 | 0.5 | 38.90 | 57.94 | 53.64 | 34.19 | 56.24 | 49.63 | 33.77 | 58.48 | 50.54 |
| | | | 1 | 0 | 44.08 | 56.32 | 54.16 | 39.30 | 53.68 | 49.74 | 39.22 | 55.19 | 50.40 |
| | | CROWN-IBP | 0 | 0 | 39.43 | 53.93 | 49.16 | 32.78 | 49.57 | 44.22 | 28.48 | 46.03 | 40.28 |
| | | | 1 | 0.5 | 34.08 | 54.28 | 51.17 | 28.63 | 51.39 | 42.43 | 26.19 | 50.53 | 40.24 |
| | | | 1 | 0 | 38.15 | 52.57 | 50.35 | 33.17 | 49.82 | 44.64 | 28.91 | 46.43 | 40.27 |
| | $\epsilon_{\mathrm{test}}=8/255$ | IBP | 0 | 0 | 61.91 | 73.12 | 71.75 | 61.46 | 71.98 | 70.07 | 59.41 | 71.22 | 68.96 |
| | | | 1 | 0.5 | 54.01 | 73.04 | 70.54 | 50.33 | 73.58 | 69.57 | 49.01 | 72.68 | 68.14 |
| | | | 1 | 0 | 62.66 | 72.25 | 70.98 | 61.61 | 72.60 | 70.57 | 58.43 | 70.81 | 68.73 |
| | | CROWN-IBP | 0 | 0 | 59.94 | 70.76 | 69.65 | 59.17 | 69.00 | 67.60 | 54.02 | 66.94 | 65.42 |
| | | | 1 | 0.5 | 53.12 | 73.51 | 70.61 | 48.51 | 71.55 | 67.67 | 45.47 | 69.55 | 65.74 |
| | | | 1 | 0 | 60.84 | 72.47 | 71.18 | 58.19 | 68.94 | 67.72 | 55.27 | 67.76 | 65.71 |
| | $\epsilon_{\mathrm{test}}=16/255$ | IBP | 0 | 0 | 70.02 | 78.86 | 77.67 | 67.55 | 78.65 | 76.92 | 68.97 | 78.12 | 76.66 |
| | | | 1 | 0.5 | 63.43 | 81.58 | 78.81 | 60.07 | 81.01 | 77.32 | 59.46 | 80.85 | 76.97 |
| | | | 1 | 0 | 67.73 | 78.71 | 77.52 | 70.28 | 79.26 | 77.43 | 68.88 | 78.91 | 76.95 |
| | | CROWN-IBP | 0 | 0 | 67.42 | 78.41 | 76.86 | 68.06 | 77.92 | 76.89 | 67.17 | 77.27 | 75.76 |
| | | | 1 | 0.5 | 61.47 | 79.62 | 77.13 | 59.56 | 79.30 | 76.43 | 56.73 | 78.20 | 74.87 |
| | | | 1 | 0 | 68.75 | 78.71 | 77.91 | 67.94 | 78.46 | 77.21 | 66.06 | 76.80 | 75.23 |
1 Verified errors reported in Table 4 of Gowal et al. (2018) are evaluated using mixed integer programming (MIP). For a fair comparison, we use the IBP verified errors reported in Table 3 of Gowal et al. (2018).
2 According to direct communication with the authors of Gowal et al. (2018), achieving the $68.44\%$ IBP verified error requires adding an extra PGD adversarial training loss. Without adding PGD, the achievable verified error is $72.91\%$ (LP/MIP verified) or $73.52\%$ (IBP verified).
3 Although not explicitly mentioned, the best CIFAR-10 models in (Gowal et al., 2018) also use $\epsilon_{\mathrm{train}} = 1.1\epsilon_{\mathrm{test}}$.
4 We use $\beta_{\mathrm{start}} = \beta_{\mathrm{end}} = 1$ for this setting, the same as in Table 2, and thus the CROWN-IBP bound is used to evaluate the verified error.

Table D: Verified and standard (clean) test errors for a large number of models trained on the MNIST and CIFAR-10 datasets using IBP and CROWN-IBP. The purpose of this experiment is to compare model performance statistics (min, median and max) on a wide range of models, rather than a few hand-selected models. For each setting we report 3 representative models: the models with the smallest, median, and largest verified error. We also report the standard error of these three selected models. Note that in this table we set $\epsilon_{\mathrm{train}} = \epsilon_{\mathrm{test}}$ and observe overfitting at small $\epsilon$ on MNIST. See Section I for detailed discussions.
[Table D: columns are Dataset, $\epsilon$ ($\ell_\infty$ norm), Model Family (10 small models / 8 medium models), Training Method (IBP / CROWN-IBP), $\kappa$ schedule, and Verified Test Error (%); the numeric body of this table was corrupted during extraction and could not be recovered.]
+ +Verified errors reported in Table 4 of Gowal et al. (2018) are evaluated using mixed integer programming (MIP) and linear programming (LP), which are strictly smaller than IBP verified errors but computationally expensive. For a fair comparison, we use the IBP verified errors reported in their Table 3. +3 We use $\beta_{\mathrm{start}} = \beta_{\mathrm{end}} = 1$ for this setting, the same as in Table 2, and thus CROWN-IBP bound is used to evaluate the verified error. + +![](images/89ad03b797a51d33aee1b3cad76d656ada053889a512bf7398eebf1c7a018f69.jpg) +(a) small models, $\epsilon = 2 / 255$ best $48.87\%$ + +![](images/e99ab4e18050a3c1ae90df3ccb84fabcd18c7ff4eb8d34f482a4cd2ce49b8822.jpg) +(b) small models, $\epsilon = 8 / 255$ best $70.61\%$ +Figure C: Verified error vs. schedule length (30, 60, 90, 120) on 9 small models on CIFAR-10. The solid boxes show median values of verified errors. $\kappa_{\mathrm{start}} = 1.0$ except for the $\kappa = 0$ setting. The upper and lower bound of an error bar are worst and best verified error, respectively. + +# G REPRODUCIBILITY + +To further test the training stability of CROWN-IBP, we run each MNIST experiment (using selected models in Table B) 5 times to get the mean and standard deviation of the verified and standard errors on test set. Results are presented in Table E. Standard deviations of verified errors are very small, giving us further evidence of good stability and reproducibility. + +
| $\epsilon$ | error | model A | model B | model C | model D | model E | model F | model G | model H | model I | model J |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.1 | std. err. (%) | 2.57 ± .04 | 1.45 ± .05 | 3.02 ± .04 | 1.77 ± .04 | 2.13 ± .08 | 1.35 ± .05 | 2.03 ± .08 | 1.32 ± .08 | 1.77 ± .04 | 1.45 ± .05 |
| | verified err. (%) | 6.85 ± .04 | 4.88 ± .04 | 6.67 ± .1 | 5.10 ± .1 | 4.82 ± .2 | 4.18 ± .008 | 5.23 ± .2 | 4.59 ± .08 | 5.92 ± .09 | 5.40 ± .09 |
| 0.2 | std. err. (%) | 3.87 ± .04 | 2.43 ± .04 | 4.40 ± .2 | 2.32 ± .04 | 3.45 ± .3 | 1.90 ± 0 | 2.67 ± .1 | 2.00 ± .07 | 2.22 ± .04 | 1.65 ± .05 |
| | verified err. (%) | 12.0 ± .03 | 6.99 ± .04 | 10.3 ± .2 | 7.37 ± .06 | 9.01 ± .9 | 6.05 ± .03 | 7.50 ± .1 | 6.45 ± .06 | 7.50 ± .3 | 6.31 ± .08 |
| 0.3 | std. err. (%) | 5.97 ± .08 | 3.20 ± 0 | 6.78 ± .1 | 3.70 ± .1 | 3.85 ± .2 | 3.10 ± .1 | 4.20 ± .3 | 2.85 ± .05 | 3.67 ± .08 | 2.35 ± .09 |
| | verified err. (%) | 15.4 ± .08 | 10.6 ± .06 | 16.1 ± .3 | 11.3 ± .1 | 11.7 ± .2 | 9.96 ± .09 | 12.2 ± .6 | 9.90 ± .2 | 11.2 ± .09 | 9.21 ± .3 |
| 0.4 | std. err. (%) | 8.43 ± .04 | 4.93 ± .1 | 8.53 ± .2 | 5.83 ± .2 | 5.48 ± .2 | 4.65 ± .09 | 6.80 ± .2 | 4.28 ± .1 | 5.60 ± .1 | 3.60 ± .07 |
| | verified err. (%) | 24.6 ± .1 | 18.5 ± .2 | 24.6 ± .7 | 19.2 ± .2 | 18.8 ± .2 | 17.3 ± .04 | 20.4 ± .3 | 16.3 ± .2 | 18.5 ± .07 | 15.2 ± .3 |
Table E: Means and standard deviations of the verified and standard errors of 10 MNIST models trained using CROWN-IBP. The architectures of these models are presented in Table B. We run each model 5 times to compute its mean and standard deviation.

# H TRAINING STABILITY EXPERIMENTS ON OTHER $\epsilon$

Similar to our experiments in Section 4, we compare the verified errors obtained by CROWN-IBP and IBP under different $\epsilon$ schedule lengths (10, 15, 30, 60) on MNIST and (30, 60, 90, 120) on CIFAR-10. We present the best, worst and median verified errors over all 18 models for MNIST in Figures D and E at $\epsilon \in \{0.1, 0.2, 0.3\}$, and over 9 small models for CIFAR-10 in Figure C. The upper and lower ends of an error bar are the worst and best verified error, respectively, and the solid boxes represent median values. CROWN-IBP improves training stability, and consistently outperforms IBP under different schedule lengths and $\kappa$ settings.

# I OVERFITTING ISSUE WITH SMALL $\epsilon$

We found that on MNIST, for a small $\epsilon$, the verified errors obtained by IBP based methods are not as good as those of linear relaxation based methods (Wong et al., 2018; Mirman et al., 2018). Gowal et al. (2018) thus propose to train models using a larger $\epsilon$ and evaluate them under a smaller $\epsilon$, for example $\epsilon_{\mathrm{train}} = 0.4$ and $\epsilon_{\mathrm{eval}} = 0.3$. Instead, we investigated this issue further and found that many CROWN-IBP trained models achieve very small verified errors (close to 0, and sometimes exactly 0) on the training set (see Table F). This indicates possible overfitting during training. As we discussed in Section 3, linear relaxation based methods implicitly regularize the weight matrices, so the network does not overfit when $\epsilon$ is small.
Inspired by this finding, we want to see if adding an explicit $\ell_1$ + +![](images/6ebf6f01dfcafffe1a649fde71ee3592b9399dd1d860823f41637b458b2c33b5.jpg) +(a) $\epsilon = 0.1$ , best $3.55\%$ + +![](images/d8e6b61b05127f66e91c120d8be4ee307097acb16517fd757256fbfa2929c93a.jpg) +(b) $\epsilon = 0.2$ , best $4.98\%$ + +![](images/69d354bff9d30f922896a9403dc78718cc711059582d0b32e951700dc4792eb8.jpg) +Figure D: Verified error vs. $\epsilon$ schedule length (10, 15, 30, 60) on 8 medium MNIST models. The upper and lower ends of a vertical bar represent the worst and best verified error, respectively. The solid boxes represent the median values of the verified error. For a small $\epsilon$ , using a shorter schedule length improves verified error due to early stopping, which prevents overfitting. All best verified errors are achieved by CROWN-IBP regardless of schedule length. +(a) $\epsilon = 0.1$ , best $3.84\%$ +Figure E: Verified error vs. $\epsilon$ schedule length (10, 15, 30, 60) on 10 small MNIST models. The upper and lower ends of a vertical bar represent the worst and best verified error, respectively. All best verified errors are achieved by CROWN-IBP regardless of schedule length. + +![](images/3167aa42af24255f8ece4eacbedac6e2c6e506115b1ada682cfc84292d9682ba.jpg) +(b) $\epsilon = 0.2$ , best $6.11\%$ + +![](images/07ff1b6243c4cdf9a90c8b30bb6b737bd8eae8b2537f2326c51a070443699bd5.jpg) +(c) $\epsilon = 0.3$ , best $8.87\%$ + +regularization term in CROWN-IBP training helps when $\epsilon_{\mathrm{train}} = 0.1$ or 0.2. The verified and standard errors on the training and test sets with and without regularization can be found in Table F. We can see that with a small $\ell_1$ regularization added ( $\lambda = 5 \times 10^{-5}$ ) we can reduce verified errors on test set significantly. 
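The explicit $\ell_1$ regularizer used here is simply $\lambda \sum_{W} \|W\|_1$ added to the training objective. A minimal numpy sketch (our illustration; `weight_matrices` stands in for the model's weights):

```python
import numpy as np

def l1_penalty(weight_matrices, lam=5e-5):
    # lambda * sum of elementwise absolute values over all weight matrices,
    # added to the training loss to combat overfitting at small epsilon.
    return lam * sum(np.abs(W).sum() for W in weight_matrices)

# total_loss = crown_ibp_loss + l1_penalty(weights)
example = l1_penalty([np.array([[1.0, -2.0], [3.0, -4.0]])], lam=0.1)  # -> 1.0
```

The same $\lambda = 5 \times 10^{-5}$ value reported in Table F is used as the default above.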
This makes CROWN-IBP results comparable to the numbers reported for the convex adversarial polytope (Wong et al., 2018); at $\epsilon = 0.1$, the best model using convex adversarial polytope training achieves $3.67\%$ verified error, while CROWN-IBP achieves a best certified error of $3.60\%$ on the models presented in Table F. The overfitting is likely caused by IBP's strong learning power without over-regularization, which also explains why IBP based methods significantly outperform linear relaxation based methods at larger $\epsilon$ values. Using early stopping can also improve the verified error on the test set; see Figure D.

# J TRAINING TIME

In Table G we present the training time of CROWN-IBP, IBP and convex adversarial polytope (Wong et al., 2018) on several representative models. All experiments are measured on a single RTX 2080 Ti GPU with 11 GB RAM, except for 2 DM-Large models where we use 4 RTX 2080 Ti GPUs to speed up training. We observe that CROWN-IBP is in practice 1.5 to 3.5 times slower than IBP. Theoretically, CROWN-IBP is up to $n_{L} = 10$ times slower $^{4}$ than IBP; however, the total training time is usually much less than 10 times that of IBP, since the CROWN-IBP bound is only computed during the ramp-up phase, and CROWN-IBP has higher GPU computation intensity and thus better GPU utilization than IBP. Convex adversarial polytope (Wong et al., 2018), as a representative linear relaxation based
| $\epsilon$ | Model Name (see Appendix D) | $\lambda$ ($\ell_1$ regularization) | Training standard error | Training verified error | Test standard error | Test verified error |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | P | 0 | 0.01% | 0.01% | 1.05% | 5.63% |
| | P | $5 \times 10^{-5}$ | 0.32% | 0.98% | 1.30% | 3.60% |
| | O | 0 | 0.02% | 0.05% | 0.82% | 6.02% |
| | O | $5 \times 10^{-5}$ | 0.38% | 1.34% | 1.43% | 4.02% |
| 0.2 | P | 0 | 0.35% | 1.40% | 1.09% | 6.06% |
| | P | $5 \times 10^{-5}$ | 1.02% | 3.73% | 1.48% | 5.48% |
| | O | 0 | 0.31% | 1.54% | 1.22% | 6.64% |
| | O | $5 \times 10^{-5}$ | 1.09% | 4.08% | 1.69% | 5.72% |
Table F: Standard and verified errors on the training and test sets for $\ell_1$ regularized and unregularized models. At a small $\epsilon$, CROWN-IBP may overfit, and adding regularization helps robust generalization; on the other hand, convex relaxation based methods (Wong et al., 2018) provide implicit regularization, which helps generalization under small $\epsilon$ but deteriorates model performance at larger $\epsilon$.

method, can be over hundreds of times slower than IBP, especially on deeper networks. Note that we use 50 random Cauchy projections for (Wong et al., 2018). Using random projections alone is not sufficient to scale purely linear relaxation based methods to larger datasets, thus we advocate a combination of IBP bounds with linear relaxation based methods, as in CROWN-IBP, which offers good scalability and stability. We also note that the random projection based acceleration can also be applied to the backward bound propagation (CROWN-style bound) in CROWN-IBP to speed it up further.

Table G: IBP and CROWN-IBP training times on different models, in seconds. For IBP and CROWN-IBP, we use a batch size of 256 for MNIST and 128 for CIFAR-10. For convex adversarial polytope, we use 50 random Cauchy projections, and reduce the batch size if necessary to fit into GPU memory.
[Table G: training time in seconds of IBP, CROWN-IBP and CAP (Wong et al., 2018) on MNIST models A, C, G, L, O and DM-Large ($\epsilon_{\mathrm{train}} = 0.4$), and CIFAR-10 models B, D, H, S, M and DM-Large; the individual timing values were corrupted during extraction and could not be recovered.]
+ +1 We use 4 GPUs to train this model. +2 Convex adversarial polytopes (CAP) are computed with 50 random projections. Without random projections it will not scale to most models except for the smallest ones. + +# K REPRODUCING CIFAR-10 RESULTS ON MULTI-GPUS + +The use of 32 TPUs for our CIFAR-10 experiments is not necessary. We use TPUs mainly for obtaining a completely fair comparison to IBP (Gowal et al., 2018), as their implementation was TPU-based. Since TPUs are not widely available, we additionally implemented CROWN-IBP using multi-GPUs. We train the best models in Table 2 on 4 RTX 2080Ti GPUs. As shown in Table H, we can achieve comparable verified errors using GPUs, and the differences between GPU and TPU training are around $\pm 0.5\%$ . Training time is reported in Table G. + +# L EXACT FORMS OF THE CROWN-IBP BACKWARD BOUND + +CROWN (Zhang et al., 2018) is a general framework that replaces non-linear functions in a neural network with linear upper and lower hyperplanes with respect to pre-activation variables, such that the entire neural network function can be bounded by a linear upper hyperplane and linear lower hyperplane for all $x \in S$ ( $S$ is typically a norm bounded ball, or a box region): + +$$ +\underline {{\mathbf {A}}} x + \underline {{\mathbf {b}}} \leq f (x) \leq \overline {{\mathbf {A}}} x + \overline {{\mathbf {b}}} +$$ + +CROWN achieves such linear bounds by replacing non-linear functions with linear bounds, and utilizing the fact that the linear combinations of linear bounds are still linear, thus these linear bounds + +Table H: Comparison of verified and standard errors for CROWN-IBP models trained on TPUs and GPUs (CIFAR-10, DM-Large model). + +
| Dataset | $\epsilon$ ($\ell_\infty$ norm) | Training Device | $\kappa_{\mathrm{start}}$ | $\kappa_{\mathrm{end}}$ | Standard err. (%) | Verified err. (%) |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | $\epsilon_{\mathrm{test}} = 2/255$¹, $\epsilon_{\mathrm{train}} = 2/255$ | GPU | 0 | 0 | 29.18 | 45.50 |
| | | TPU | 0 | 0 | 28.48 | 46.03 |
| | $\epsilon_{\mathrm{test}} = 8/255$, $\epsilon_{\mathrm{train}} = 8/255$ | GPU | 0 | 0 | 54.60 | 67.11 |
| | | TPU | 0 | 0 | 54.02 | 66.94 |
1 We use $\beta_{\mathrm{start}} = \beta_{\mathrm{end}} = 1$ for this setting, the same as in Table 2, and thus the CROWN-IBP bound is used to evaluate the verified error.

can propagate through layers. Suppose we have a non-linear vector function $\sigma$ applied to an input (pre-activation) vector $z$; CROWN requires the following bounds in a general form:

$$
\underline{\mathbf{A}}_{\sigma} z + \underline{\mathbf{b}}_{\sigma} \leq \sigma(z) \leq \overline{\mathbf{A}}_{\sigma} z + \overline{\mathbf{b}}_{\sigma}
$$

In general, the specific bounds $\underline{\mathbf{A}}_{\sigma}, \underline{\mathbf{b}}_{\sigma}, \overline{\mathbf{A}}_{\sigma}, \overline{\mathbf{b}}_{\sigma}$ for different $\sigma$ need to be given on a case-by-case basis, depending on the characteristics of $\sigma$ and the pre-activation range $\underline{z} \leq z \leq \overline{z}$. In neural networks, common choices of $\sigma$ are ReLU, tanh, sigmoid, maxpool, etc. Convex adversarial polytope (Wong et al., 2018) is also a linear relaxation based technique that is closely related to CROWN, but only for ReLU layers. For ReLU such bounds are simple, where $\underline{\mathbf{A}}_{\sigma}, \overline{\mathbf{A}}_{\sigma}$ are diagonal matrices and $\underline{\mathbf{b}}_{\sigma} = \mathbf{0}$:

$$
\underline{\mathbf{D}} z \leq \sigma(z) \leq \overline{\mathbf{D}} z + \overline{c} \tag{14}
$$

where $\underline{\mathbf{D}}$ and $\overline{\mathbf{D}}$ are two diagonal matrices:

$$
\underline{\mathbf{D}}_{k,k} = \left\{ \begin{array}{ll} 1, & \text{if } \underline{z}_k > 0 \text{, i.e., this neuron is always active} \\ 0, & \text{if } \overline{z}_k < 0 \text{, i.e., this neuron is always inactive} \\ \alpha, & \text{otherwise, for any } 0 \leq \alpha \leq 1 \end{array} \right. \tag{15}
$$

$$
\overline{\mathbf{D}}_{k,k} = \left\{ \begin{array}{ll} 1, & \text{if } \underline{z}_k > 0 \text{, i.e., this neuron is always active} \\ 0, & \text{if } \overline{z}_k < 0 \text{, i.e., this neuron is always inactive} \\ \frac{\overline{z}_k}{\overline{z}_k - \underline{z}_k}, & \text{otherwise} \end{array} \right. \tag{16}
$$

$$
\overline{c}_k = \left\{ \begin{array}{ll} 0, & \text{if } \underline{z}_k > 0 \text{, i.e., this neuron is always active} \\ 0, & \text{if } \overline{z}_k < 0 \text{, i.e., this neuron is always inactive} \\ -\frac{\overline{z}_k \underline{z}_k}{\overline{z}_k - \underline{z}_k}, & \text{otherwise} \end{array} \right. \tag{17}
$$

Note that CROWN-style bounds require knowing all pre-activation bounds $\underline{z}^{(l)}$ and $\overline{z}^{(l)}$. We assume these bounds are valid for $\pmb{x} \in S$. In CROWN-IBP, these bounds are obtained by interval bound propagation (IBP). With the pre-activation bounds $\underline{z}^{(l)}$ and $\overline{z}^{(l)}$ given (for $\pmb{x} \in S$), we rewrite the CROWN lower bound for the special case of ReLU neurons:

Theorem L.1 (CROWN Lower Bound). For an $L$-layer neural network function $f(\pmb{x}) : \mathbb{R}^{n_0} \to \mathbb{R}^{n_L}$, $\forall j \in [n_L]$, $\forall \pmb{x} \in S$, we have $\underline{f}_j(\pmb{x}) \leq f_j(\pmb{x})$, where

$$
\underline{f}_j(\pmb{x}) = \mathbf{A}_{j,:}^{(0)} \pmb{x} + \sum_{l=1}^{L} \mathbf{A}_{j,:}^{(l)} \left( \pmb{b}^{(l)} + \underline{\mathbf{b}}^{j,(l)} \right), \tag{18}
$$

$$
\mathbf{A}_{j,:}^{(l)} = \left\{ \begin{array}{ll} \mathbf{e}_j^{\top} & \text{if } l = L; \\ \mathbf{A}_{j,:}^{(l+1)} \mathbf{W}^{(l+1)} \mathbf{D}^{j,(l)} & \text{if } l \in \{0, \dots, L-1\}. \end{array} \right.
$$

and $\forall k \in [n_l]$, we define diagonal matrices $\mathbf{D}^{j,(l)}$ and bias vectors $\underline{\mathbf{b}}^{j,(l)}$:

$$
\begin{array}{l}
\mathbf{D}^{j,(0)} = \pmb{I}, \quad \underline{\mathbf{b}}^{j,(L)} = \mathbf{0} \\
\mathbf{D}_{k,k}^{j,(l)} = \left\{ \begin{array}{ll} 1 & \text{if } \mathbf{A}_{j,:}^{(l+1)} \mathbf{W}_{:,k}^{(l+1)} \geq 0, \; \overline{z}_k^{(l)} > |\underline{z}_k^{(l)}|, \; l \in \{1, \dots, L-1\}; \\ 0 & \text{if } \mathbf{A}_{j,:}^{(l+1)} \mathbf{W}_{:,k}^{(l+1)} \geq 0, \; \overline{z}_k^{(l)} < |\underline{z}_k^{(l)}|, \; l \in \{1, \dots, L-1\}; \\ \frac{\overline{z}_k^{(l)}}{\overline{z}_k^{(l)} - \underline{z}_k^{(l)}} & \text{if } \mathbf{A}_{j,:}^{(l+1)} \mathbf{W}_{:,k}^{(l+1)} < 0, \; l \in \{1, \dots, L-1\}. \end{array} \right. \\
\underline{\mathbf{b}}_k^{j,(l)} = \left\{ \begin{array}{ll} 0 & \text{if } \mathbf{A}_{j,:}^{(l+1)} \mathbf{W}_{:,k}^{(l+1)} \geq 0, \; l \in \{1, \dots, L-1\}; \\ -\frac{\overline{z}_k^{(l)} \underline{z}_k^{(l)}}{\overline{z}_k^{(l)} - \underline{z}_k^{(l)}} & \text{if } \mathbf{A}_{j,:}^{(l+1)} \mathbf{W}_{:,k}^{(l+1)} < 0, \; l \in \{1, \dots, L-1\}. \end{array} \right.
\end{array}
$$

$\mathbf{e}_j \in \mathbb{R}^{n_L}$ is a standard unit vector with its $j$-th coordinate set to 1.

Note that unlike ordinary CROWN (Zhang et al., 2018), in CROWN-IBP we only need the lower bound to compute $\underline{m}$ and do not need to compute the $\mathbf{A}$ matrices for the upper bound. This saves half of the computation cost of ordinary CROWN. Also, $\mathbf{W}$ represents any affine layer in a neural network, including convolutional layers in CNNs. In Section 3.2, we discussed how to use transposed convolution operators to efficiently implement CROWN-IBP on GPUs.
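As a concrete illustration of the two ingredients above, here is a minimal numpy sketch (ours, not the paper's implementation) of IBP propagation through an affine layer together with the ReLU upper relaxation of Eqs. (14), (16) and (17):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    # Propagate interval bounds [l, u] through an affine layer z = W x + b
    # using the center/radius form of IBP (Gowal et al., 2018).
    c, r = (u + l) / 2.0, (u - l) / 2.0
    zc = W @ c + b
    zr = np.abs(W) @ r
    return zc - zr, zc + zr

def relu_upper_relaxation(zl, zu):
    # CROWN's linear upper bound for ReLU, sigma(z) <= D_bar z + c_bar:
    # identity for always-active neurons, zero for always-inactive ones,
    # and the chord slope zu/(zu - zl) with intercept -zu*zl/(zu - zl)
    # for unstable neurons (zl < 0 < zu).
    denom = np.where(zu > zl, zu - zl, 1.0)  # guard degenerate intervals
    D = np.where(zl > 0, 1.0, np.where(zu < 0, 0.0, zu / denom))
    c = np.where((zl <= 0) & (zu >= 0), -zu * zl / denom, 0.0)
    return D, c

# One affine layer followed by the ReLU relaxation:
zl, zu = ibp_affine(np.zeros(2), np.ones(2), np.array([[1.0, -1.0]]), np.zeros(1))
D, c = relu_upper_relaxation(zl, zu)   # bounds [-1, 1] give D = [0.5], c = [0.5]
```

In CROWN-IBP, the pre-activation intervals produced by `ibp_affine` supply the $\underline{z}^{(l)}, \overline{z}^{(l)}$ needed by the backward CROWN pass, exactly as described above.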
Although in this paper we focus on the common case of the ReLU activation function, other general activation functions (sigmoid, max-pooling, etc.) can be used in the network, as CROWN is a general framework for handling non-linearities. For a more general derivation we refer the readers to Zhang et al. (2018) and Salman et al. (2019b). \ No newline at end of file diff --git a/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/images.zip b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d35e0c4cff69cbe2f518c3019580749636742158 --- /dev/null +++ b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:822e905d576826d28493c5d6a98e3af5d9ef3c023afd16e2a60c415e42c93818 +size 1466167 diff --git a/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/layout.json b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b4aacedc539076298dbc1e5538ba68f0b7759294 --- /dev/null +++ b/towardsstableandefficienttrainingofverifiablyrobustneuralnetworks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:639bbbb287a664e2873f2b74357bca39fa6a9b01468be583c29f93c4336dea77 +size 1123523 diff --git a/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_content_list.json b/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..86abe44f8ee76997816cad6e5fdbf257232a89fc --- /dev/null +++ b/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
sha256:c99e05babc27f136f56fa8eb881e7323fd13c0aa32206a359f32bf2232aa9e67 +size 93199 diff --git a/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_model.json b/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_model.json new file mode 100644 index 0000000000000000000000000000000000000000..48c67f1e4028e7dd1bcd824d1b479981ea109cd8 --- /dev/null +++ b/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd24c9fa5126d892487c682f4de70763dfb118356797b2e346cb1115614513e0 +size 113563 diff --git a/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_origin.pdf b/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..447870e2b4500df23368ed90ec8b26e6d8bc9add --- /dev/null +++ b/towardsverifiedrobustnessundertextdeletioninterventions/08c23d09-e1f6-4247-9eda-cd620b0ae8b5_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b15d9a9fffa2867336f8bf3cefbfc7984644d5142e5ed4a8a88e21c141c82c0 +size 396213 diff --git a/towardsverifiedrobustnessundertextdeletioninterventions/full.md b/towardsverifiedrobustnessundertextdeletioninterventions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..84d8d6e9fdacc5fc201a3cd6275f7f448a4309fd --- /dev/null +++ b/towardsverifiedrobustnessundertextdeletioninterventions/full.md @@ -0,0 +1,364 @@ +# TOWARDS VERIFIED ROBUSTNESS UNDER TEXT DELETION INTERVENTIONS + +Johannes Welbl†‡* Po-Sen Huang† Robert Stanforth† Sven Gowal† + +Krishnamurthy (Dj) Dvijotham† Martin Szummer† Pushmeet Kohli† + +$^{\dagger}$ DeepMind, London, UK $^{\ddagger}$ University College London, UK + +{welbl,posedhuang,stanforth,sgowal,dvij,zummer,pushmeet} + 
+@google.com + +# ABSTRACT + +Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify $18.4\%$ of samples, a substantial improvement over only $2.8\%$ using standard training. + +# 1 INTRODUCTION + +Natural language processing (NLP) widely relies on neural networks, a model class known to be vulnerable to adversarial input perturbations (Szegedy et al., 2013; Kurakin et al., 2016). Adversarial samples typically expose over-sensitivity to semantically invariant text transformations (Belinkov & Bisk, 2017; Ettinger et al., 2017), e.g. character flips (Ebrahimi et al., 2018) or paraphrases (Ribeiro et al., 2018b; Iyyer et al., 2018). + +Feng et al. (2018) exposed another type of problematic behaviour: deleting large parts of input text can cause a model's confidence to increase; Figure 1 shows an example. That is, reduced sets of input words can suffice to trigger more confident predictions. 
Such under-sensitivity is problematic: neural models can 'solve' NLP tasks without task-relevant textual comprehension skills, but instead fit spurious cues in the data that suffice to form correct predictions. Models might then achieve strong nominal test accuracy on data of the same (biased) distribution as the training set, by exploiting predictive shortcuts that are not representative of the given NLP task at hand. Consequently, they fail drastically when evaluated on samples without these spurious cues (Jia & Liang, 2017; Poliak et al., 2018; Gururangan et al., 2018; Niven & Kao, 2019). + +A major issue with identifying reduced inputs is the combinatorially large space of arbitrary text deletions; this can only be searched exhaustively for short sequences. Prior work has considered heuristics like beam search (Feng et al., 2018) or bandits (Ribeiro et al., 2018a), but these are generally not guaranteed to find the worst-case reductions. + +In this work, we address the under-sensitivity issue by designing and formally verifying the undersensitivity specification that a model should not become more confident as arbitrary subsets of input words are deleted. Under-sensitivity behaviour is not reflected in nominal accuracy, but one can + +Original Sample + +Premise: A little boy in a blue shirt holding a toy. Hypothesis: A boy dressed in blue holds a toy. Entailment (86.4%) + +Reduced Sample + +Premise: A little boy in a blue shirt holding a toy. Hypothesis: A boy dressed in blue holds a toy. Entailment $(91.9\%)$ + +Figure 1: Example of under-sensitive behaviour in Natural Language Inference, where deleting premise words increases model confidence. This problem was identified by Feng et al. (2018); our aim is to formally verify whether or not any such reductions exist, over the combinatorially large space of possibilities. + +instead use this specification to measure and evaluate the extent with which samples exhibit undersensitivity. 
Instead of better, yet still imperfect search heuristics, we describe how interval bound propagation (IBP) (Gowal et al., 2018; Mirman et al., 2018) – a formal model verification method – can be used to efficiently cover the full reduction space, and verify the under-sensitivity specification. IBP can be applied at test time to arbitrary model inputs to verify whether or not they are undersensitive; but it can also be used to derive a new auxiliary training objective that leads to models verifiably adhering to this specification, and which we find generalises to held-out test data. + +While under-sensitivity has been demonstrated for several NLP tasks (Feng et al., 2018), we chose to study the use case of natural language inference (NLI) (Dagan et al., 2006; Bowman et al., 2015) in particular as a representative task: sequences are comparatively short, datasets large, and the label complexity is small. We investigate the verification of the popular decomposable attention model $(\mathrm{DAM})^2$ (Parikh et al., 2016) in detail. This architecture covers many of the neural layer types of contemporary models, and we focus on a detailed description for how IBP can be leveraged to efficiently verify its behaviour. We then experimentally compare various training methods addressing under-sensitivity: i) standard training ii) data augmentation iii) adversarial training iv) IBP-verified training and v) entropy regularisation, and evaluate their effectiveness against nominal (test) accuracy, adversarial accuracy, IBP-verified accuracy and a verification oracle. + +To summarise, the main contributions of this paper are (1) Formalisation of the problem of verifying an under-sensitivity specification, (2) Verification of the Decomposable Attention Model using Interval Bound Propagation, and (3) Empirical analysis of the efficacy of (i) different evaluation methods for verifying robustness; and (ii) different training methods for developing verifiably robust models. 
# 2 RELATED WORK

Natural Language Inference. Natural Language Inference (Dagan et al., 2006) is the task of predicting whether a natural language premise entails a natural language hypothesis. The availability of large-scale datasets (Bowman et al., 2015; Williams et al., 2018) has spurred a profusion of neural architecture development for this task, e.g. Rocktäschel et al. (2016); Parikh et al. (2016); Chen et al. (2017), among many others.

Adversarial Vulnerability in NLP. There is a growing body of research into NLP adversarial examples, each using a slightly different choice of semantically invariant text transformation or task-specific attack. A first class of attack considers word- and character-level perturbations (Ebrahimi et al., 2018; Alzantot et al., 2018), while another exploits back-translation systems to either mine rules (Ribeiro et al., 2018b) or train syntactically controlled paraphrasing models (Iyyer et al., 2018). Li et al. (2017) use syntactic and lexical transformations, whereas Belinkov & Bisk (2017) investigate synthetic and natural noise in Machine Translation. Jia & Liang (2017) and Mudrakarta et al. (2018) introduce task-specific adversarial attacks for Reading Comprehension/QA, Zhao et al. (2018) for Machine Translation and NLI, and Thorne & Vlachos (2019) for Fact Checking. In NLI in particular, Minervini & Riedel (2018) penalise adversarially chosen logical inconsistencies in NLI predictions, Kang et al. (2018) use background-knowledge guided adversaries, and Glockner et al. (2018) utilise lexical entailment relationships.

![](images/57a1b320eb92c78a2916d1556b88e9ebb1b5e57d5028aa1b96b4f704f1fc6ae6.jpg)
Figure 2: Under-sensitivity: This figure maps dataset percentiles against the proportion of words one can delete before DAM prediction confidence decreases, where reduced samples are found using beam search.

Ribeiro et al. (2016) and Ribeiro et al.
(2018a) describe analysis tools that have uncovered model over-sensitivity and under-sensitivity, respectively. Feng et al. (2018) focus on under-sensitivity, showing that models can become more confident as large fractions of input text are deleted, whereas Niu & Bansal (2018) address under-sensitivity in a dialogue setting. Jacobsen et al. (2019) demonstrated a link between excessive prediction invariance and model vulnerability in computer vision. + +Formal Verification. Formal verification provides a provable guarantee that models are consistent with a formally defined specification (a mathematical relationship between the inputs and outputs of the model). Examples of specifications include robustness to bounded adversarial perturbations, monotonicity of the output with respect to a subset of the inputs, and consistency with physical laws (Qin et al., 2019). Literature can be categorised into complete methods that use Mixed-Integer Programming (MIP) (Bunel et al., 2017; Cheng et al., 2017) or Satisfiability Modulo Theory (SMT) (Katz et al., 2017), and incomplete methods that solve a convex relaxation of the verification problem (Weng et al., 2018; Wong & Kolter, 2018; Wang et al., 2018; Raghunathan et al., 2018b). Complete methods perform exhaustive enumeration to find a counter-example to the specification or rule out the existence of counter-examples (thus proving that the specification is true). Hence, complete methods are expensive and difficult to scale. Incomplete methods are conservative (i.e. they cannot always prove that a specification is true even when it is), but are more scalable and can be used inside the training loop for training models to be consistent and verifiable (Raghunathan et al., 2018a; Wong & Kolter, 2018; Dvijotham et al., 2018a; Gowal et al., 2018; Dvijotham et al., 2018b). 
While Barr & Klavans (2001) address the issue of verification in NLP, most of the recent work has focused on $\ell_{\infty}$ norm-bounded perturbations for image classification. This paper complements work on incomplete verification methods by extending IBP to NLI, where inputs are inherently discrete (contrary to images, which are continuous). In the NLP context in particular, Huang et al. (2019) and Jia et al. (2019) have very recently verified CNN and LSTM models with specifications against over-sensitivity adversaries under synonym replacement. Wang et al. (2019) study verification of output length specifications in machine translation models, showing that the outputs of machine translation and image captioning systems can be provably bounded when the inputs are perturbed within a given set. In contrast, this work examines under-sensitivity behaviour: excessive model prediction invariance under arbitrary word combination deletions. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models is an open problem.

# 3 FORMULATING A SPECIFICATION AGAINST UNDER-SENSITIVITY

Neural networks are expressive models that can fit large datasets and achieve strong nominal test accuracy. At the same time, however, they can fit data in a way that violates our idea of how they should fit it, from an input attribution perspective. Figure 2 visualises the extent of the problem in NLI: for example, for $20\%$ of the SNLI test set it is possible to delete $78\%$ or more of the premise words while the prediction confidence increases or remains the same. We will next formally describe a specification that checks a model's under-sensitivity, i.e. whether any such reduction exists.

The specification addresses model output probabilities when parts of the input text are deleted.
To this end, we first introduce the notion of a perturbation space $\mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ of an original nominal input $\mathbf{x}^{\mathrm{nom}}$. This space contains all possible reductions, i.e. inputs where arbitrarily chosen tokens of the original nominal input $\mathbf{x}^{\mathrm{nom}}$ are deleted. Note that this space grows exponentially in the length of the input. We would like to verify whether or not there exists any reduced input $\mathbf{x}^{-}$ with higher probability for the (nominal) prediction than $\mathbf{x}^{\mathrm{nom}}$ has. More formally, this can be stated as a specification:

$$
\forall \mathbf{x}^{-} \in \mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}}): \quad P(\hat{y} \mid \mathbf{x}^{-}) \leq P(\hat{y} \mid \mathbf{x}^{\mathrm{nom}}) \tag {1}
$$

where $\hat{y}$ is the (nominal) model prediction.

Determining precisely how prediction probabilities should change when input words are deleted is contentious and prone to inconsistencies: removing stop words, for example, may lead to little relevant change, while crucial information-carrying words (e.g., 'not') can significantly alter the meaning of the sentence in a task. It is important to be cautious and not too restrictive in the specification design, and certain that whatever is specified is desirable. A specification to at least not increase prediction probabilities under arbitrary input deletion is a conservative choice. Other specifications are worth consideration, such as monotonically decreasing certainty as more input is deleted. We will however see that even our very conservative choice of an under-sensitivity specification is hard to positively verify for most inputs in the DAM model.

There are different approaches to establish whether Specification (1) is satisfied.
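For short inputs, the specification can be checked by exhaustive enumeration of the reduction space. A toy sketch of such a check, where `predict_proba` and the inputs in the usage below are hypothetical stand-ins, not the paper's model:

```python
from itertools import combinations

def verify_by_enumeration(tokens, predict_proba):
    """Brute-force check of the specification: return (True, None) if no
    reduction (any non-empty proper subset of tokens kept) receives a higher
    probability for the nominal prediction; otherwise (False, witness).
    Only feasible for short inputs: there are 2^len(tokens) - 2 reductions."""
    nominal = predict_proba(tokens)
    y_hat = max(nominal, key=nominal.get)          # nominal prediction
    for n_keep in range(1, len(tokens)):
        for kept in combinations(range(len(tokens)), n_keep):
            reduced = [tokens[i] for i in kept]
            if predict_proba(reduced)[y_hat] > nominal[y_hat]:
                return False, reduced              # specification violated
    return True, None
```

A model whose confidence only shrinks under deletion is verified by this oracle; a model keying on a single spurious token is not. The exponential cost is exactly why the paper turns to IBP for coverage of the full space.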
With unlimited computational capacity, the property could exhaustively be evaluated for all $\mathbf{x}^{-} \in \mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$. Statistically sampling from the reduction space can give an indication of under-sensitivity, but has a very limited coverage rate. Search heuristics can try to identify violations (and be used for 'adversarial' training), but there is no guarantee that a stronger search procedure cannot find more or worse violations. IBP verification, on the other hand, offers a formal guarantee across the whole space by establishing outer bounds for $\mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ and resulting bounds on output probabilities.

# 4 BACKGROUND

We next give a brief introduction to the Decomposable Attention Model (DAM), which we will later verify. The DAM architecture comprises commonly used neural NLP components, such as word embeddings, attention, and feed-forward networks. Subsequently we introduce Interval Bound Propagation and then bring these together to verify the behaviour of the DAM, i.e. efficiently assert whether an input satisfies Specification (1).

Decomposable Attention. The NLI task takes two word sequences as input - a premise and a hypothesis - and outputs a discrete entailment label prediction in {entailment, neutral, contradiction}. The DAM architecture (Parikh et al., 2016) assumes the input word sequences to be embedded (e.g. as $d$-dimensional word vectors), i.e. it operates on two sequences of input vectors: $\mathbf{A} = [\mathbf{a}_1; \ldots; \mathbf{a}_I] \in \mathbb{R}^{d \times I}$, and $\mathbf{B} = [\mathbf{b}_1; \ldots; \mathbf{b}_J] \in \mathbb{R}^{d \times J}$, where $[\cdot\,;\cdot]$ denotes concatenation, and $I$ and $J$ are sequence lengths.
Word vectors are individually transformed with a vector-valued function $F(.)$ , and pairs thereof are then compared: + +$$ +e _ {i j} = F \left(\mathbf {a} _ {i}\right) ^ {\top} F \left(\mathbf {b} _ {j}\right) \in \mathbb {R} \tag {2} +$$ + +Note that we follow the notation of Parikh et al. (2016), and that $e_{ij}$ is not related to a basis vector. In the general model formulation $F$ can be a linear transformation or MLP; this does not affect the derivations made here. Adopting matrix notation across position pairs $(i,j)$ , Equation (2) can instead be rewritten in matrix form as $\mathbf{E} = F(\mathbf{A})^{\top}F(\mathbf{B}) \in \mathbb{R}^{I \times J}$ , which is used to compute two + +attention masks - one over each sequence - by normalising across $i$ or across $j$ : + +$$ +P _ {i j} ^ {(\mathbf {A})} = \frac {\exp \left(e _ {i j}\right)}{\sum_ {k} \exp \left(e _ {k j}\right)}; \mathbf {P} ^ {(\mathbf {A})} \in \mathbb {R} ^ {I \times J} \tag {3} +$$ + +$$ +P _ {i j} ^ {(\mathbf {B})} = \frac {\exp (e _ {i j})}{\sum_ {k} \exp (e _ {i k})}; \mathbf {P} ^ {(\mathbf {B})} \in \mathbb {R} ^ {I \times J} \tag {4} +$$ + +These two attention masks serve as coefficients in a convex combination over the original word vectors, aggregating each of the two sequences: + +$$ +\mathcal {A} = \mathbf {A} \cdot \mathbf {P} ^ {(\mathbf {A})} \in \mathbb {R} ^ {d \times J} +$$ + +$$ +\mathcal {B} = \mathbf {B} \cdot (\mathbf {P} ^ {(\mathbf {B})}) ^ {\top} \in \mathbb {R} ^ {d \times I} +$$ + +That is, $\mathcal{A}$ and $\mathcal{B}$ hold attention-aggregated word vectors from $\mathbf{A}$ and $\mathbf{B}$ at positions $j = 1,\dots ,J$ and $i = 1,\ldots ,I$ , respectively. 
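The attention step of Equations (2)-(4) and the subsequent aggregation can be sketched in a few lines of NumPy (taking $F$ as the identity for brevity; in the model it is a learned transformation, and `attend` is our name, not the paper's):

```python
import numpy as np

def attend(A, B):
    """Attention of Eqs. (2)-(4) with F = identity. A is d x I, B is d x J.
    Returns calA (d x J) and calB (d x I): convex combinations of the
    columns of A and B, respectively."""
    E = A.T @ B                                              # e_ij = a_i^T b_j, Eq. (2)
    P_A = np.exp(E) / np.exp(E).sum(axis=0, keepdims=True)   # normalise over i, Eq. (3)
    P_B = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)   # normalise over j, Eq. (4)
    return A @ P_A, B @ P_B.T, P_A, P_B
```

Because each column of $\mathcal{A}$ (resp. $\mathcal{B}$) is a convex combination of word vectors, it lies in the convex hull of the columns of $\mathbf{A}$ (resp. $\mathbf{B}$); this fact is what makes the deletion bounds in Section 5 tractable.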
These are joined with the original word representations, mixed using a position-wise feed-forward network $G:\mathbb{R}^{2d}\to \mathbb{R}^{d^{\prime}}$ and finally summed into a single vector representation for each sequence: + +$$ +\mathbf {v} _ {1} = \sum_ {i} G \left(\left[ \mathbf {a} _ {i}; \mathcal {B} _ {i} \right]\right) \in \mathbb {R} ^ {d ^ {\prime}} \tag {5} +$$ + +$$ +\mathbf {v} _ {2} = \sum_ {j} G \left(\left[ \mathcal {A} _ {j}; \mathbf {b} _ {j} \right]\right) \in \mathbb {R} ^ {d ^ {\prime}} \tag {6} +$$ + +As a last step, a logit vector with entries for each class is computed as $H([\mathbf{v}_1,\mathbf{v}_2])$ , where $H: \mathbb{R}^{2d'} \to \mathbb{R}^C$ is again a feed-forward network, and $C$ is the number of output classes. + +Interval Bound Propagation. IBP is an incomplete but efficient verification method that can be used to verify input-output relationships. It tracks how a part of the input space (in our case: the perturbation space $\mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ ) propagates forward through the network. IBP starts with an axis-aligned bounding box surrounding $\mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ , and uses interval arithmetic to obtain an axis-aligned bounding box for the output set. Formally, let us assume that the neural network is defined by a sequence of transformations $h_k$ for each of its $K$ layers. That is, for $\mathbf{z}_0 \in \mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ + +$$ +\mathbf {z} _ {k} = h _ {k} \left(\mathbf {z} _ {k - 1}\right) \quad k = 1, \dots , K \tag {7} +$$ + +The output $\mathbf{z}_K\in \mathbb{R}^C$ has $C$ logits corresponding to $C$ classes. IBP bounds the activation $\mathbf{z}_k$ of each layer by an axis-aligned bounding box (i.e., $\underline{\mathbf{z}}_k\leq \mathbf{z}_k\leq \overline{\mathbf{z}}_k$ ) using interval arithmetic. 
We have for each coordinate $z_{k,i}$ of $\mathbf{z}_k$:

$$
\underline{\mathbf{z}}_{k,i} = \min_{\underline{\mathbf{z}}_{k-1} \leq \mathbf{z}_{k-1} \leq \overline{\mathbf{z}}_{k-1}} \mathbf{e}_i^{\top} h_k\left(\mathbf{z}_{k-1}\right)
$$

$$
\overline{\mathbf{z}}_{k,i} = \max_{\underline{\mathbf{z}}_{k-1} \leq \mathbf{z}_{k-1} \leq \overline{\mathbf{z}}_{k-1}} \mathbf{e}_i^{\top} h_k\left(\mathbf{z}_{k-1}\right) \tag {8}
$$

where $\mathbf{e}_i$ is the standard $i^{\mathrm{th}}$ basis vector. Finally, at the last layer, an upper bound on the worst-case violation of the specification can be evaluated quickly from the logit lower and upper bounds $\underline{\mathbf{z}}_K$ and $\overline{\mathbf{z}}_K$ respectively, as the bounds translate directly into bounds for the softmax probabilities.

IBP can be performed in parallel while running the nominal forward pass. However, in general the output bounds are loose, which is exacerbated with increasing network depth. Consequently IBP over-approximates the true extent of the image of $\mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ in output space, and can result in false negatives. It is thus in practice important to keep the bounds as tight as possible. IBP can be used at test time for verification, but also for training, minimising loss terms derived from the logit bounds. IBP has been used on MLPs and convolutional networks with monotonic activations (Gowal et al., 2018; Huang et al., 2019). One technical contribution of this work is to apply it to a model with an attention component (Section 5).

# 5 VERIFYING UNDER-SENSITIVITY GUARANTEES FOR THE DAM MODEL

To address under-sensitivity, we aim to verify Specification (1) for the DAM model.
If the upper probability bound $\overline{\mathbf{z}}_K$ of the entire perturbation space $\mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ is smaller than the probability $P(\hat{y}|\mathbf{x}^{\mathrm{nom}})$ for the predicted class, then the specification is verified. That is $\forall \mathbf{z}_0\in \mathcal{X}^{\mathrm{in}}(\mathbf{x}^{\mathrm{nom}})$ : + +$$ +P (\hat {y} | \mathbf {z} _ {0}) \leq \overline {{\mathbf {z}}} _ {K, \hat {y}} \leq P (\hat {y} | \mathbf {x} ^ {\text {n o m}}) \tag {9} +$$ + +Using this inequality, we can assert whether Specification (1) is verifiably satisfied for any given $\mathbf{x}^{\mathrm{nom}}$ , i.e. whether there exist any reduced samples with higher probability than $\mathbf{x}^{\mathrm{nom}}$ . + +Overview. We will first describe the model behaviour when removing a single word at fixed position, and then extend this to deleting single words at any position, and finally generalise this to arbitrary multi-token deletions. + +One key difference to IBP bounds of other architectural components, such as CNNs or feed-forward layers, is the need for bounds on the attention normalisation, which has to take into account per-token upper and lower bounds. We will exploit the fact that each vector of $\mathcal{B}$ is a convex combination of the $J$ vectors that constitute $\mathbf{B}$ (and similarly for $\mathcal{A}$ ). Hence, component-wise bounds on $\mathcal{B}$ can be obtained efficiently by maximising over those $J$ vectors. $\mathcal{A}$ and $\mathcal{B}$ are then inputs to a regular feed-forward network ( $G$ followed by $H$ ), for which IBP can be used. + +Deleting Single Word: Particular Position. We first describe how model variables behave when an individual token at a fixed position $r$ is removed from one of the sequences. Without loss of generality, we delete words from the second sequence, noting that the model has a symmetric architecture and that the same can be derived for the other input sequence. 
We denote all resulting quantities with a bar (as in $\bar{\mathbf{B}}$ ). That is, when removing a single token at position $r$ : + +$$ +\bar {\mathbf {B}} = \left[ \mathbf {b} _ {1}, \dots , \mathbf {b} _ {r - 1}, \mathbf {b} _ {r + 1}, \dots , \mathbf {b} _ {J} \right] \in \mathbb {R} ^ {d \times (J - 1)} \tag {10} +$$ + +whereas $\bar{\mathbf{A}} = \mathbf{A}$ . Since $F(.)$ is applied per-position in the sequence, the effect of word deletion remains isolated at this point; the matrix product $\bar{\mathbf{E}} = F(\bar{\mathbf{A}})^{\top}F(\bar{\mathbf{B}})\in \mathbb{R}^{I\times (J - 1)}$ has identical entries as before, but the $r$ -th column disappears. + +Renormalising Attention Masks. Likewise the attention mask $\bar{\mathbf{P}}^{(\mathbf{A})}$ has identical entries compared to $\mathbf{P}^{(\mathbf{A})}$ , but the $r$ -th column removed. That is, for $i = 1,\dots,I$ and $j = 1,\dots,J$ s.t. $j\neq r$ : + +$$ +\bar {P} _ {i j} ^ {(\mathbf {A})} = \frac {\exp (\bar {e} _ {i j})}{\sum_ {k} \exp (\bar {e} _ {k j})}; \quad \bar {\mathbf {P}} ^ {(\mathbf {A})} \in \mathbb {R} ^ {I \times (J - 1)} \tag {11} +$$ + +The attention mask $\bar{\mathbf{P}}^{(B)}$ on the other hand has renormalised entries. The values retain their relative order, yet the entries are larger because the $r$ -th normalisation summand is removed. For $j \neq r$ : + +$$ +\bar {P} _ {i j} ^ {(\mathbf {B})} = \frac {\exp (e _ {i j})}{\sum_ {k \neq r} \exp (e _ {i k})}; \quad \bar {\mathbf {P}} ^ {(\mathbf {B})} \in \mathbb {R} ^ {I \times (J - 1)} \tag {12} +$$ + +Hence we can compute $\bar{P}_{ij}^{(\mathbf{B})}$ in closed form as + +$$ +\bar {P} _ {i j} ^ {(\mathbf {B})} = P _ {i j} ^ {(\mathbf {B})} \cdot \frac {\sum_ {k} \exp (e _ {i k})}{\sum_ {k \neq r} \exp (e _ {i k})} \tag {13} +$$ + +To summarise the above: attention weights $P_{ij}^{(\mathbf{B})}$ remain largely unchanged when deleting token $r$ , but are rescaled to take into account missing normalisation mass. 
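The closed form of Equation (13) is easy to sanity-check numerically: recomputing attention on the reduced similarity matrix and rescaling the original mask agree exactly. A small sketch (function name ours):

```python
import numpy as np

def attention_after_deletion(E, r):
    """Attention over sequence B after deleting token r, computed two ways:
    (i) direct recomputation on the reduced similarity matrix, and
    (ii) the closed form of Eq. (13), rescaling the original P^(B) per row
    by the softmax mass that column r no longer contributes."""
    expE = np.exp(E)
    P_B = expE / expE.sum(axis=1, keepdims=True)
    keep = [j for j in range(E.shape[1]) if j != r]
    direct = expE[:, keep] / expE[:, keep].sum(axis=1, keepdims=True)
    scale = expE.sum(axis=1) / expE[:, keep].sum(axis=1)   # Eq. (13) rescaling
    closed = P_B[:, keep] * scale[:, None]
    return direct, closed
```

The two agree to machine precision, and since the rescaling factor exceeds 1, every remaining attention weight is strictly larger than before the deletion, as stated above.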
In the next step, the model computes convex combinations $\bar{\mathcal{A}}$ and $\bar{\mathcal{B}}$. Concretely, $\bar{\mathcal{A}} = \bar{\mathbf{A}} \cdot \bar{\mathbf{P}}^{(\mathbf{A})} \in \mathbb{R}^{d \times (J-1)}$ has unchanged elements compared to before (as $\mathbf{A}$ remains unchanged), but the $r$-th column is removed. For $\bar{\mathcal{B}} = \bar{\mathbf{B}} \cdot (\bar{\mathbf{P}}^{(\mathbf{B})})^{\top} \in \mathbb{R}^{d \times I}$ the dimensionality remains unchanged, but $\bar{\mathbf{B}}$ has fewer elements and $(\bar{\mathbf{P}}^{(\mathbf{B})})^{\top}$ is renormalised accordingly. Note that all this can still be computed in closed form using Equation (13), i.e. without need for IBP thus far, and these quantities can further be fed through the remaining network layers $G$ and $H$ to obtain concrete probabilities.

Deleting Single Word: Arbitrary Position. We have reached the point where $\bar{\mathbf{A}}$, $\bar{\mathbf{B}}$, $\bar{\mathcal{A}}$ and $\bar{\mathcal{B}}$ are derived in closed form, for fixed position $r$. These can be computed exactly without approximation, for deleted words at any position $r$ in the sequence. Extending this to arbitrary single-word deletions, we take the elementwise minimum / maximum across all possible single word deletions, e.g.

$$
\bar{\mathcal{B}}^{\mathrm{Upper}} = \max_{r = 1, \dots, J} \bar{\mathcal{B}}(r) \tag {14}
$$

$$
\bar{\mathcal{B}}^{\mathrm{Lower}} = \min_{r = 1, \dots, J} \bar{\mathcal{B}}(r) \tag {15}
$$

which establishes upper and lower bounds for each element, and analogously for the other matrices.

In the DAM architecture, these matrices are next fed into dense feed-forward layers $G$ (Equations (5) and (6)) and $H$, each with two layers. We use IBP to propagate bounds through these layers, feeding in bounds on $\bar{\mathbf{A}}$, $\bar{\mathbf{B}}$, $\bar{\mathcal{A}}$ and $\bar{\mathcal{B}}$ as described above.
As a result, after propagating these bounds through $G$ and $H$ , we obtain bounds on output logits (and consequently on probabilities) for deletions of a single token at any position. + +One further simplification is possible: we compute $\bar{\mathbf{v}}_2$ directly from $\mathbf{v}_2$ by subtracting the $r$ -th summand for fixed $r$ (see Equation (6)). Generalising this to arbitrary positions $r$ , we can bound the subtracted vector with $\max_{r=1,\ldots,J}\{G([\bar{\mathcal{A}}_r; \bar{\mathbf{b}}_r])\}$ and $\min_{r=1,\ldots,J}\{G([\bar{\mathcal{A}}_r; \bar{\mathbf{b}}_r])\}$ , and thus directly obtain bounds for $\bar{\mathbf{v}}_2$ . + +Deleting Several Words. We have described the behaviour of intermediate representations (and bounds for them) under deletions of arbitrary individual words; the case of removing several words is similar. The values of remaining individual word vectors $\mathbf{a}_i$ and $\mathbf{b}_j$ naturally remain unchanged. The previously established bounds for single word deletions can be partly re-used to establish bounds for arbitrary multi-word deletions, see appendix A for more detail. The resulting bounds for $\bar{\mathbf{v}}_1$ and $\bar{\mathbf{v}}_2$ are then input to a regular feed-forward network, for which IBP can be used. + +# 6 EXPERIMENTS + +We now evaluate to which extent the DAM model verifiably satisfies the Specification (1) against under-sensitivity, and we will furthermore compare different training approaches. Experiments are conducted on two large-scale NLI datasets: SNLI (Bowman et al., 2015) and multiNLI (Williams et al., 2018), henceforth MNLI. Whereas Feng et al. (2018) addressed deletions of hypothesis words in SNLI, we establish the phenomenon also for MNLI, and for premise reductions. In our experiments we use premise reductions, noting that under-sensitivity is also present for hypotheses (see Fig. 2). 
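The elementwise bounds of Equations (14)-(15), and the propagation of the resulting box through a dense ReLU layer of $G$, can be sketched generically in NumPy (an illustrative interval-arithmetic sketch under our own naming, not the paper's implementation):

```python
import numpy as np

def deletion_bounds(per_deletion_values):
    """Eqs. (14)-(15): elementwise lower/upper bounds over the exact values
    obtained for each single-token deletion r = 1..J (a list of arrays)."""
    stacked = np.stack(per_deletion_values)
    return stacked.min(axis=0), stacked.max(axis=0)

def ibp_dense_relu(W, b, lb, ub):
    """Propagate an axis-aligned box through z = ReLU(W x + b) by interval
    arithmetic; ReLU is monotonic, so it maps bounds to bounds."""
    mid, rad = (ub + lb) / 2.0, (ub - lb) / 2.0
    z_mid, z_rad = W @ mid + b, np.abs(W) @ rad
    return np.maximum(z_mid - z_rad, 0.0), np.maximum(z_mid + z_rad, 0.0)
```

Chaining `ibp_dense_relu` for each layer of $G$ and $H$ yields the logit bounds used in Inequality (9); soundness means every input inside the box maps to an output inside the output box.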
For SNLI we use the standard dataset splits, tuning hyperparameters on the development set and reporting results for the test set. For MNLI we split off 2000 samples from the development set for validation purposes and use the remaining samples as test set. We use the same types of feed-forward components, layer size, dropout, and word embedding hyperparameters described by Parikh et al. (2016).

Evaluation Metrics We evaluate with respect to the following metrics:

1. Accuracy: Standard test accuracy.
2. Verified Accuracy: This metric measures whether both i) the prediction is correct, and ii) it can be verified, using IBP verification, that no reduction with higher probability exists.
3. Beam Search Heuristic: This metric uses beam search to find specification violations in the perturbation space, following the protocol of Feng et al. (2018). Search begins from the full sequence, gradually deleting words while keeping a beam of width 10. This metric then measures whether both i) the search heuristic found no counterexample, and ii) the prediction is correct. Note that this heuristic does not cover the full perturbation space, i.e. it does not suffice to rule out counterexamples to the specification; it thus provides an upper bound for verified accuracy.

Training Methods We will compare the following training methods:

1. Standard Training: This provides a baseline for under-sensitivity behaviour under standard log-likelihood training.
2. Data Augmentation: A first and comparatively simple way to address under-sensitivity is to add training samples with random word subsets deleted, penalising the model with a loss proportional to the specification violation.
3. Adversarial Training: Here we use a more systematic approach than random word deletions: we search within the perturbation space for inputs with large differences between
| Training Method | Accuracy | Verified Accuracy | Beam Search Heuristic |
| --- | --- | --- | --- |
| Standard Training | 77.22 | 2.83 | 3.36 |
| Data Augmentation | 76.37 | 5.09 | 6.27 |
| Adversarial Training: random | 76.89 | 1.79 | 4.16 |
| Adversarial Training: beam search | 76.09 | 5.48 | 23.76 |
| Entropy Regularisation | 77.32 | 5.82 | 6.28 |
| IBP-Training | 75.51 | 18.36 | 19.26 |
(a) SNLI
| Training Method | Accuracy | Verified Accuracy | Beam Search Heuristic |
| --- | --- | --- | --- |
| Standard Training | 60.00 | 7.77 | 8.77 |
| Data Augmentation | 62.02 | 1.93 | 4.26 |
| Adversarial Training: random | 61.89 | 2.60 | 5.04 |
| Adversarial Training: beam search | 58.74 | 0.45 | 7.44 |
| Entropy Regularisation | 60.74 | 8.83 | 9.47 |
| IBP-Training | 44.95 | 17.44 | 19.07 |
(b) MNLI

Table 1: Experimental results: accuracy vs. verified accuracy using IBP, for different training methods. All models tuned for verified accuracy; numbers in %.

nominal prediction probability and reduced probability, i.e. the strongest specification violations. We compare both i) random adversarial search, which samples 512 randomly reduced perturbations and picks the strongest violation, and ii) beam search with width 10, following the protocol of Feng et al. (2018). Both for data augmentation and adversarial training, altered samples are recomputed throughout training.

4. Entropy Regularisation: Feng et al. (2018) observed that entropy regularisation on prediction probabilities can partially mitigate the severity of under-sensitivity.
5. IBP-Training: Here we use IBP verification as described in Section 5, which provides upper bounds on the prediction probability of arbitrarily reduced inputs (Eq. (9)). We penalise the model with an auxiliary hinge loss on the difference between the upper probability bound for the gold label $y$ and the nominal probability $P(y|\mathbf{x}^{\mathrm{nom}})$. Note that the upper bound serves as a proxy for the adversarial objective, as it over-approximates the probabilities of arbitrary reduced samples, covering the full reduction space comprehensively.

Training Details The training methods described above make use of an additive contribution to the training loss besides standard log-likelihood. We tune the scale of the respective contribution in [0.01, 0.1, 1.0, 10.0, 100.0]. All experiments used a learning rate of 0.001, the Adam optimiser, and batch size 128. We perform early stopping with respect to verified accuracy, for a maximum of 3M training steps. For verified training, we found it useful to continuously phase in the volume of the perturbation space to its maximum size, similar to Gowal et al. (2018).
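The auxiliary IBP-training penalty of item 5 can be sketched as follows. This is an illustrative numpy sketch, not the exact implementation: `upper_prob` uses the standard worst-case softmax bound (upper logit bound for the gold label, lower bounds for all competitors), and the loss scaling is tuned separately as noted in the Training Details paragraph.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

def upper_prob(logit_lo, logit_hi, y):
    # Upper bound on P(y) over the logit box: the most favourable case
    # takes the upper bound for the gold logit, lower bounds elsewhere.
    z = logit_lo.copy()
    z[y] = logit_hi[y]
    return softmax(z)[y]

def ibp_penalty(logit_nom, logit_lo, logit_hi, y):
    # Hinge on the violation proxy: the upper bound on the gold-label
    # probability over all reductions should not exceed the nominal one.
    p_nom = softmax(logit_nom)[y]
    return max(0.0, upper_prob(logit_lo, logit_hi, y) - p_nom)

z = np.array([2.0, 0.5, -1.0])
assert ibp_penalty(z, z, z, 0) == 0.0          # tight box: no penalty
assert ibp_penalty(z, z - 1.0, z + 1.0, 0) > 0.0  # loose box: penalised
```

When the logit box collapses to the nominal logits the penalty vanishes; looser boxes can only increase it, which is what drives the model towards tighter, verifiable behaviour.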
Concretely, we compute the per-dimension center of the upper and lower bound, and start linearly increasing the box volume until it reaches the full perturbation space volume. Similarly, we phase in the perturbation radius, i.e. the maximum number of deleted words, from 1 to the maximum sequence length of 48. We tune phase-in intervals in $[10^0, 10^3, 10^4, 10^5, 10^6]$ training steps. We also experimented with over-inflating the perturbation volume beyond its real size at training time, as well as with randomly sampling a maximum perturbation radius during training, neither of which improved verifiability results.

# 6.1 RESULTS AND ANALYSIS

Evaluating the Effectiveness of IBP for Verification. Tables 1a and 1b show the main results. For both datasets, a non-negligible portion of data points can be verified using IBP. The gap to (standard) accuracy, however, is striking: only a small fraction of correctly predicted inputs is actually verifiably not under-sensitive. Note that IBP accuracy is naturally bounded above by the beam search heuristic, which however does not cover the full reduction space, and overestimates verification
| Metric | Time [s] | # Eval's / sample |
| --- | --- | --- |
| Accuracy | 2 | 1 |
| IBP Verification | 3 | ≈ 2 |
| Verification Oracle | - | $2^L$ |
| Oracle up to 200K | 45674 | 200000 |
| Beam Search | 505 | ≈ $b \cdot L$ |

Table 2: Computational cost of verification. Left: time elapsed (1 GPU) for evaluating 300 SNLI samples, without cross-sample batching. Right: worst-case number of forward passes. $L$: sequence length; $b$: beam width.
| Training | IBP | Oracle | Beam |
| --- | --- | --- | --- |
| Standard Training | 4.34 | 5.13 | 5.37 |
| Data Augmentation | 6.48 | 8.59 | 8.78 |
| Adversarial: random | 1.87 | 6.88 | 7.03 |
| Adversarial: beam | 5.13 | 31.90 | 32.14 |
| Entropy Regul. | 8.35 | 8.90 | 9.28 |
| IBP-Training | 19.29 | 20.68 | 20.94 |
Table 3: Oracle verification on SNLI sequences of up to 12 tokens. Numbers in %.

rates. IBP verification becomes particularly effective when the IBP-verifiability objective is added during training, verifying $18.36\%$ and $17.44\%$ of samples on SNLI and MNLI, respectively. Verifiability does, however, come at a cost: test accuracy is generally decreased when tuning for verifiability, compared to Parikh et al. (2016). This highlights a shortcoming of test accuracy as a metric: it does not reflect the under-sensitivity problem. Once under-sensitivity is taken into account by dedicated training objectives or by tuning for verification rates, nominal accuracy suffers.

Computational Efficiency of IBP Verification. Table 2 gives a breakdown of the computational cost incurred for verification, both empirically and as the theoretical worst-case number of forward passes required per sample. IBP verification comes with a small computational overhead compared to a standard forward pass, incurred by propagating upper and lower interval bounds through the network once. A full oracle is computationally infeasible; instead we used an exhaustive search oracle with a maximum budget of $200\mathrm{K}$ forward passes per sample. Even when stopping as soon as a single reduced sample is found that violates the specification, the incurred time is orders of magnitude larger than for verification via IBP.

Comparing Training Methods. We next discuss the differences between training methods, and how they are reflected in verified model behaviour. In absolute terms, standard training does not adhere well to the under-sensitivity specification, neither on SNLI nor on MNLI. Data augmentation and random adversarial training lead to slightly different results on the two datasets, albeit without major improvements.
These methods have a strong random component in their choice of deletions, and this tends to lead to lower verification rates on MNLI, where premises are on average 6.2 tokens longer and the reduction space is correspondingly larger. Beam search adversarial training leads to improved verification rates on SNLI, yet not on MNLI; it is noteworthy that when also trained with beam search adversarial samples, the beam search evaluation metric improves substantially. Entropy regularisation improves verified accuracy over standard training; this is in line with the previous observation of Feng et al. (2018) that it mitigates under-sensitivity behaviour. Finally, the dedicated IBP-training objective substantially raises verification rates compared to all other approaches. In an ablation (Table 3) we evaluate performance on short sequences (up to 12 tokens) in the SNLI test set, for which an exhaustive search over all possible reductions is feasible. Verification rates remain low in absolute terms, but we observe that shorter sequences are comparatively easier to verify, and that the incomplete IBP verification can approach the verification levels of the complete oracle (see rows 1, 2, 5, and 6). For adversarial training (rows 3 and 4), however, oracle verification rates are much closer to the beam search heuristic. This suggests that i) for short sequences the smaller perturbation space can be covered better by beam search, and ii) adversarial training can lead to high verifiability on short sequences, but fits the model in a way that results in loose IBP bounds.

# 7 DISCUSSION

Verification of a specification offers a stronger form of robustness than robustness to adversarial samples. Adversarial accuracy, as e.g. derived from beam search, might conceptually be easier to compute, yet comes with no guarantee of finding all, or the strongest, violations.
In fact, evaluating against weak adversaries under-estimates the extent of a problem (Uesato et al., 2018) and may lead to a false sense of confidence. IBP verification can provide guarantees on the non-existence of specification-violating reduced inputs, but it is incomplete and can have false negatives.

Observations of comparatively low verification or adversarial accuracy rates, as in this work, are not new, and have been found to be a general problem of datasets with high sample complexity (Schmidt et al., 2018). We emphasise that under-sensitivity is a very challenging problem to address; even the relatively conservative specification of non-increasing probability under deletion cannot be fulfilled for the majority of test samples under the baselines tested.

We see the verification of the attention-based DAM model as a stepping stone towards the verification of larger and more performant attention-based architectures, such as BERT. Following the derivations here, token deletion bounds could similarly be propagated through BERT's self-attention layers. Towards this end, however, we see two main hurdles: i) BERT's network depth, resulting in gradually looser IBP bounds; ii) BERT's word piece tokenisation, which requires special consideration in conjunction with token-level perturbations.

# 8 CONCLUSION

We have investigated under-sensitivity to input text deletions in NLI and recast the problem as one of formally verifying a specification on model behaviour. We have described how Interval Bound Propagation can be used to verify the popular Decomposable Attention Model, and have then compared several training methods in their ability to address, and be verified against, under-sensitivity. We observed that only a relatively small fraction of data points can be positively verified, but that IBP-training in particular is capable of improving verified accuracy.

# REFERENCES

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang.
Generating natural language adversarial examples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2890-2896, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/D18-1316.

Valerie Barr and Judith L. Klavans. Verification and validation of language processing systems: Is it evaluation? In Proceedings of the ACL 2001 Workshop on Evaluation Methodologies for Language and Dialogue Systems, 2001. URL https://www.aclweb.org/anthology/W01-0906.

Yonatan Belinkov and Yonatan Bisk. Synthetic and natural noise both break neural machine translation. CoRR, abs/1711.02173, 2017.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632-642. Association for Computational Linguistics, 2015. doi: 10.18653/v1/D15-1075. URL http://aclweb.org/anthology/D15-1075.

Rudy Bunel, Ilker Turkaslan, Philip HS Torr, Pushmeet Kohli, and M Pawan Kumar. Piecewise linear neural network verification: a comparative study. arXiv preprint arXiv:1711.00455, 2017.

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1657-1668. Association for Computational Linguistics, 2017. doi: 10.18653/v1/P17-1152. URL http://aclweb.org/anthology/P17-1152.

Chih-Hong Cheng, Georg Nuhrenberg, and Harald Ruess. Maximum resilience of artificial neural networks. In International Symposium on Automated Technology for Verification and Analysis, pp. 251-268. Springer, 2017.

Ido Dagan, Oren Glickman, and Bernardo Magnini. The Pascal recognising textual entailment challenge.
In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, MLCW'05, pp. 177-190, Berlin, Heidelberg, 2006. Springer-Verlag. ISBN 3-540-33427-0, 978-3-540-33427-9. doi: 10.1007/11736790_9. URL http://dx.doi.org/10.1007/11736790_9.

Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018a.

Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and Pushmeet Kohli. A dual approach to scalable verification of deep networks. arXiv preprint arXiv:1803.06567, 2018b.

Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. Hotflip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 31-36. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/P18-2006.

Allyson Ettinger, Sudha Rao, Hal Daumé III, and Emily M. Bender. Towards linguistically generalizable nlp systems: A workshop and shared task. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems, 2017.

Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3719-3728. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/D18-1407.

Max Glockner, Vered Shwartz, and Yoav Goldberg. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 650-655.
Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/P18-2103.

Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy A. Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. CoRR, abs/1810.12715, 2018. URL http://arxiv.org/abs/1810.12715.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 107-112, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-2017. URL https://www.aclweb.org/anthology/N18-2017.

Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. Achieving verified robustness to symbol substitutions via interval bound propagation. arXiv preprint arXiv:1909.01492, 2019.

Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. Adversarial example generation with syntactically controlled paraphrase networks. In North American Association for Computational Linguistics, 2018.

Joern-Henrik Jacobsen, Jens Behrmann, Richard Zemel, and Matthias Bethge. Excessive invariance causes adversarial vulnerability. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BkfbpsAcF7.

Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP), 2017.

Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. Certified robustness to adversarial word substitutions. arXiv preprint arXiv:1909.00986, 2019.

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Eduard Hovy.
AdvEntuRe: Adversarial training for textual entailment with knowledge-guided examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2418-2428, Melbourne, Australia, July 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P18-1225.

Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97-117. Springer, 2017.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.

Yitong Li, Trevor Cohn, and Timothy Baldwin. Robust training under linguistic adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 21-27, 2017.

Pasquale Minervini and Sebastian Riedel. Adversarially regularising neural nli models to integrate logical background knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 65-74. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/K18-1007.

Matthew Mirman, Timon Gehr, and Martin Vechev. Differentiable abstract interpretation for provably robust neural networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 3578-3586, 2018.

Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. Did the model understand the question? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1896-1906, Melbourne, Australia, July 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P18-1176.

Tong Niu and Mohit Bansal. Adversarial over-sensitivity and over-stability strategies for dialogue models.
In Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 486-496, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/K18-1047. URL https://www.aclweb.org/anthology/K18-1047.

Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4658-4664, Florence, Italy, July 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P19-1459.

Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2249-2255. Association for Computational Linguistics, 2016. doi: 10.18653/v1/D16-1244. URL http://aclweb.org/anthology/D16-1244.

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pp. 180-191. Association for Computational Linguistics, 2018. doi: 10.18653/v1/S18-2023. URL http://aclweb.org/anthology/S18-2023.

Chongli Qin, Krishnamurthy Dvijotham, Brendan O'Donoghue, Rudy Bunel, Robert Stanforth, Sven Gowal, Jonathan Uesato, Grzegorz Swirszcz, and Pushmeet Kohli. Verification of nonlinear specifications for neural networks. CoRR, abs/1902.09592, 2019.

Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018a.

Aditi Raghunathan, Jacob Steinhardt, and Percy S Liang. Semidefinite relaxations for certifying robustness to adversarial examples. In Advances in Neural Information Processing Systems, pp. 10877-10887, 2018b.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.
"Why Should I Trust You?": Explaining the predictions of any classifier. In Knowledge Discovery and Data Mining (KDD), 2016. +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In AAAI Conference on Artificial Intelligence (AAAI), 2018a. +Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Semantically equivalent adversarial rules for debugging nlp models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 856-865. Association for Computational Linguistics, 2018b. URL http://aclweb.org/anthology/P18-1079. +Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. In International Conference on Learning Representations (ICLR), 2016. +Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarily robust generalization requires more data. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 5014-5026. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7749-adversarily-robust-generalization-requires-more-data.pdf. +Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. +James Thorne and Andreas Vlachos. Adversarial attacks against fact extraction and verification. CoRR, abs/1903.05543, 2019. URL http://arxiv.org/abs/1903.05543. +Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, and Pushmeet Kohli. Adversarial risk and the dangers of evaluating against weak attacks. arXiv preprint arXiv:1802.05666, 2018. +Chenglong Wang, Rudy Bunel, Krishnamurthy Dvijotham, Po-Sen Huang, Edward Grefenstette, and Pushmeet Kohli. 
Knowing when to stop: Evaluation and verification of conformity to output-size specifications. arXiv preprint arXiv:1904.12004, 2019.

Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. arXiv preprint arXiv:1804.10829, 2018.

Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S Dhillon, and Luca Daniel. Towards fast computation of certified robustness for relu networks. arXiv preprint arXiv:1804.09699, 2018.

Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112-1122. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/N18-1101.

Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, pp. 5283-5292, 2018.

Zhengli Zhao, Dheeru Dua, and Sameer Singh. Generating natural adversarial examples. In International Conference on Learning Representations (ICLR), 2018.

# A APPENDIX: MULTI-WORD DELETION BOUNDS

In this section we elaborate on how bounds for $\bar{\mathbf{v}}_1$ and $\bar{\mathbf{v}}_2$ are computed in the case of arbitrary multi-word deletions.

Bounds on $\bar{\mathbf{v}}_1$: Recall from Equation 5 that

$$
\mathbf{v}_1 = \sum_i G([\mathbf{a}_i; \mathcal{B}_i]) \in \mathbb{R}^{d'}
$$

We can compute bounds on $\bar{\mathbf{v}}_1$ under arbitrary multi-word deletions by bounding both $\bar{\mathbf{a}}_i$ and $\bar{\mathcal{B}}_i$, and then propagating these bounds using IBP.
The values of $\mathbf{a}_i$ remain unchanged under deletions in the second input sequence ($\bar{\mathbf{a}}_i = \mathbf{a}_i$), and we will thus focus on deriving upper and lower bounds for $\bar{\mathcal{B}}_i$.

To this end, recall that individual columns in $\mathcal{B}$ are computed as a convex combination of vectors in $\mathbf{B}$. That is, the $i$-th column is computed as $\mathcal{B}_i = \sum_j \mathbf{b}_j P_{i,j}^{(B)}$, where $P_{i,j}^{(B)}$ is the entry of $\mathbf{P}^{(\mathbf{B})}$ at position $(i,j)$. The elementwise minimum and maximum values that $\bar{\mathcal{B}}_i$ can assume are given by the elementwise minimum and maximum over the individual vectors $\mathbf{b}_j$: $\mathbf{b}_{min} = \min_j \{\mathbf{b}_j\}$ and $\mathbf{b}_{max} = \max_j \{\mathbf{b}_j\}$. The values of $\bar{\mathcal{B}}_i$ are hence bounded elementwise from above by the values in $\mathbf{b}_{max}$:

$$
\bar{\mathcal{B}}_i = \sum_{j \notin D} \mathbf{b}_j \bar{P}_{i,j}^{(\bar{B})} \leq \sum_{j \notin D} \mathbf{b}_{\max} \bar{P}_{i,j}^{(\bar{B})} = \mathbf{b}_{\max} \cdot \sum_{j \notin D} \bar{P}_{i,j}^{(\bar{B})} = \mathbf{b}_{\max} \cdot 1 = \mathbf{b}_{\max} \tag{16}
$$

where $D$ is an arbitrary set of indices of deleted tokens. Note that whichever token set is deleted, the (renormalised) attention weights $\bar{P}_{i,j}^{(\bar{B})}$ always sum to 1.

The same follows analogously for $\mathbf{b}_{min}$ as an elementwise lower bound on $\bar{\mathcal{B}}_i$.

Bounds on $\bar{\mathbf{v}}_2$: Recall from Equation 6 that

$$
\mathbf{v}_2 = \sum_j G([\mathcal{A}_j; \mathbf{b}_j]) = \sum_j \mathbf{g}_j \in \mathbb{R}^{d'} \tag{17}
$$

where $\mathbf{g}_j = G([\mathcal{A}_j; \mathbf{b}_j])$ for notational convenience. The function $G$ is a dense feed-forward neural network with softplus nonlinearity; consequently all values in $\mathbf{g}_j$ are strictly positive.
Since each $\mathbf{g}_j$ has positive values, their sum monotonically decreases as summands are removed, and monotonically increases as summands are added.

We consider two extreme cases of deleting word combinations: i) removing all but one word, and ii) removing precisely one word. These are used to bound $\bar{\mathbf{v}}_2$ for any other number of deleted words, which lies in between these extremes.

For the case that all but one word is removed (keeping only the word at position $r$), $\bar{\mathbf{v}}_2 = \mathbf{g}_r$, and the smallest value this expression can assume (elementwise) is $\min_{r=1,\dots,J}\{\mathbf{g}_r\}$. This is thus a lower bound on $\bar{\mathbf{v}}_2$ for sequences reduced to a single word, and, due to the monotonicity of $\bar{\mathbf{v}}_2$ in its number of summands, also for $\bar{\mathbf{v}}_2$ under any combination of deleted words.

For the case of deleting only a single word at position $r$, a single summand is subtracted from $\mathbf{v}_2$: $\bar{\mathbf{v}}_2 = \mathbf{v}_2 - \mathbf{g}_r$, and this expression is bounded from above by $\mathbf{v}_2 - \min_{r=1,\dots,J}\{\mathbf{g}_r\}$. Again, due to the monotonicity of $\bar{\mathbf{v}}_2$ when deleting more symbols, this upper bound for single-word deletions is consequently also an upper bound for any combination of more deleted words.
To summarise, $\bar{\mathbf{v}}_2$ is bounded as follows:

$$
\min_{r = 1, \dots, J} \{\mathbf{g}_r\} \leq \bar{\mathbf{v}}_2 \leq \mathbf{v}_2 - \min_{r = 1, \dots, J} \{\mathbf{g}_r\} \tag{18}
$$
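As a quick sanity check of Equation (18), a brute-force enumeration over all deletion sets confirms the bounds elementwise, under the same assumption as above that the remaining summands $\mathbf{g}_j$ stay unchanged (toy sizes and random positive $\mathbf{g}_j$; illustrative code, not the trained model):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
J, d = 6, 4
# Random strictly positive summands standing in for g_j = G([A_j; b_j]).
g = np.log1p(np.exp(rng.normal(size=(J, d))))

v2 = g.sum(axis=0)
lower = g.min(axis=0)        # elementwise min_r g_r
upper = v2 - g.min(axis=0)   # v2 minus the elementwise min_r g_r

# Enumerate every deletion of 1..J-1 summands and check Eq. (18) elementwise.
for k in range(1, J):
    for deleted in combinations(range(J), k):
        kept = [j for j in range(J) if j not in deleted]
        v2_bar = g[kept].sum(axis=0)
        assert np.all(lower <= v2_bar + 1e-12)
        assert np.all(v2_bar <= upper + 1e-12)
```

Positivity of the summands is what makes both extremes (keeping one word, deleting one word) cover every intermediate deletion set.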
# TRAINING BINARY NEURAL NETWORKS WITH REAL-TO-BINARY CONVOLUTIONS

Brais Martinez $^{1}$, Jing Yang $^{1,2,*}$, Adrian Bulat $^{1,*}$ & Georgios Tzimiropoulos $^{1,2}$

$^{1}$ Samsung AI Research Center, Cambridge, UK
$^{2}$ Computer Vision Laboratory, The University of Nottingham, UK

{brais.a,adrian.bulat,georgios.t}@samsung.com

# ABSTRACT

This paper shows how to train binary networks to within a few percent points
$(\sim 3 - 5\%)$ of the full precision counterpart. Firstly, we show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than $5\%$ top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than $3\%$ and $5\%$ top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at https://github.com/bras-martinez/real2binary.

# 1 INTRODUCTION

Following the introduction of the BinaryNeuralNet (BNN) algorithm (Courbariaux et al., 2016), binary neural networks emerged as one of the most promising approaches for obtaining highly efficient neural networks that can be deployed on devices with limited computational resources. Binary convolutions are appealing mainly for two reasons: (a) Model compression: since each weight is stored as a single bit rather than a 32-bit float, memory usage is reduced by $32 \times$.
(b) Computational speed-up: computationally intensive floating-point multiply and add operations are replaced by efficient xnor and pop-count operations, which have been shown to provide practical speed-ups of up to $58 \times$ on CPU (Rastegari et al., 2016) and, as opposed to general low bit-width operations, are amenable to standard hardware. Despite these appealing properties, binary neural networks have been criticized as binarization typically results in large accuracy drops. Thus, their deployment in practical scenarios is uncommon. For example, on ImageNet classification, there is a $\sim 18\%$ gap in top-1 accuracy between a ResNet-18 and its binary counterpart when binarized with XNOR-Net (Rastegari et al., 2016), which is the method of choice for neural network binarization. + +But how far are we from training binary neural networks that are powerful enough to become a viable alternative to real-valued networks? Our first contribution in this work is to take stock of recent advances on binary neural networks and train a very strong baseline which already results in state-of-the-art performance. Our second contribution is a method for bridging most of the remaining gap, which boils down to minimizing the discrepancy between the output of the binary and the corresponding real-valued convolution. This idea is materialized in our work in two complementary ways: Firstly, we use an attention matching strategy so that the real-valued network can more + +![](images/d2d498317e36ed68707d084da2545641514ce7638105ef5b269ad0160481e80b.jpg) +Figure 1: Left: The proposed real-to-binary block. The diagram shows how spatial attention maps computed from a teacher real-valued network are matched with the ones computed from the binary network. Supervision is injected at the end of each binary block. See also section 4.2. Right: The proposed data-driven channel re-scaling approach. The left-hand side branch corresponds to the standard binary convolution module. 
The right-hand side branch corresponds to the proposed gating function that computes the channel-scaling factors from the output of the batch normalization. The factor $r$ controls the compression ratio on the gating function, and $H$ , $W$ and $C$ indicate the two spatial and the channel dimensions of the activation tensors. See also section 4.3. + +![](images/cf3c00baa5f2c181e9983670074c3cf559a510042e2e30cf7c9941190dc632fe.jpg) + +closely guide the binary network during optimization. However, we show that due to the architectural discrepancies between the real and the binary networks, a direct application of teacher-student produces sub-optimal performance. Instead, we propose to use a sequence of teacher-student pairs that progressively bridges the architectural gap. Secondly, we further propose to use the real-valued activations of the binary network, available prior to the binarization preceding convolution, to compute scale factors that are used to re-scale the activations right after the application of the binary convolution. This is in line with recent works which have shown that re-scaling the binary convolution output can result in large performance gains (Rastegari et al., 2016; Bulat & Tzimiropoulos, 2019). However, unlike prior work, we compute the scaling factors in a data-driven manner based on the real-valued activations of each layer prior to binarization, which results in superior performance. + +Overall, we make the following contributions: + +- We construct a very strong baseline by combining some recent insights on training binary networks and by performing a thorough experimentation to find the most well-suited optimization techniques. We show that this baseline already achieves state-of-the-art accuracy on ImageNet, surpassing all previously published works on binary networks. 
- We propose real-to-binary attention matching: we show that matching spatial attention maps computed at the output of the binary and real-valued convolutions is particularly well suited for training binary neural networks (see Fig. 1, left, and section 4.2). We also devise an approach in which the architectural gap between real and binary networks is progressively bridged through a sequence of teacher-student pairs.
- We propose data-driven channel re-scaling: the real-valued activations of the binary network, available prior to their binarization, are used to compute the scale factors that re-scale the activations produced right after the application of the binary convolution. See Fig. 1, right, and section 4.3.
- We show that our combined contributions provide, for the first time, competitive results on two standard datasets, achieving $76.2\%$ top-1 performance on CIFAR-100 and $65.4\%$ top-1 performance on ImageNet when using a ResNet-18, a gap below $3\%$ and $5\%$ respectively compared to their full precision counterparts.

# 2 RELATED WORK

While pre-dated by other works on binary networks (Soudry et al., 2014), the BNN algorithm (Courbariaux et al., 2016) established how to train networks with binary weights within the familiar back-propagation paradigm. The training method relies on a real-valued copy of the network weights, which is binarized during the forward pass but updated during back-propagation while ignoring the binarization step. Unfortunately, BNN resulted in a staggering $\sim 28\%$ gap in top-1 accuracy compared to the full precision ResNet-18 on ImageNet.

It is worth noting that binary networks do contain a number of floating point operations. In fact, the output of a binary convolution is not binary (its values are integers resulting from the pop-count).
Also, in accordance with other low bit-width quantization methodologies, the first convolution (a costly $7 \times 7$ kernel in ResNet), the fully connected layer and the batch normalization layers are all real-valued. Consequently, a line of research has focused on developing methodologies that add a fractional amount of real-valued operations in exchange for significant accuracy gains. For example, the seminal work of XNOR-Net (Rastegari et al., 2016) proposed to add a real-valued scaling factor to each output channel of a binary convolution, a technique that has become standard for binary networks. Similarly, Bi-Real Net (Liu et al., 2018) argued that skip connections are fundamental for binary networks and observed that the flow of full precision activations provided by the skip connections is interrupted by the binary downsample convolutions. This degrades the signal and makes subsequent skip connections less effective. To alleviate this, they proposed making the downsample layers real-valued, obtaining an accuracy increase of around $3\%$ in exchange for a small increase in computational complexity.

Improving the optimization algorithm for binary networks has been another fundamental line of research. Examples include the use of smooth approximations of the gradient, the use of PReLU (Bulat et al., 2019), a two-stage training which binarizes the weights first and then the activations (Bulat et al., 2019), and progressive quantization (Gong et al., 2019; Bulat et al., 2019). The work in (Wang et al., 2019) proposed to learn channel correlations through reinforcement learning to better preserve the sign of a convolution output. A set of regularizers is added to the loss term in (Ding et al., 2019) so as to control the range of values of the activations and guarantee good gradient flow. Other optimization aspects, such as the effect of gradient clipping or batch-norm momentum, were empirically tested in (Alizadeh et al., 2019).
In section 4.1, we show how to combine many of the insights provided in these works with standard optimization techniques to obtain a very strong baseline that already achieves state-of-the-art accuracy.

While the aforementioned works either maintain the same computational cost or increase it by a fractional amount, other research has focused instead on relaxing the problem constraints by increasing the number of binary operations by a large amount, typically a factor of 2 to 8 times. Examples include ABC-Net (Lin et al., 2017), the structure approximation of (Zhuang et al., 2019), the circulant CNN of (Liu et al., 2019), and the binary ensemble of (Zhu et al., 2019). Note that the large increase of binary operations diminishes the efficiency claim that justifies the use of binary networks in the first place. Furthermore, we will show that there is still ample margin for bridging the accuracy gap before resorting to scaling up the network capacity.

The methodology proposed in this paper relates to prior work as follows: our use of attention matching as described in section 4.2 is somewhat related to the feature distillation approach of (Zhuang et al., 2018). However, (Zhuang et al., 2018) tries to match whole feature maps of the to-be-quantized network with the quantized feature maps of a real-valued network that is trained in parallel with the to-be-quantized network. Such an approach is shown to improve training of low-bitwidth quantized models but not binary networks. Notably, our approach based on matching attention maps is much simpler and shown to be effective for the case of binary networks.

Our data-driven channel re-scaling approach, described in section 4.3, is related to the channel re-scaling approach of XNOR-Net, and also to that of (Xu & Cheung, 2019; Bulat & Tzimiropoulos, 2019), which propose to learn the scale factors discriminatively through backpropagation.
Contrary to (Xu & Cheung, 2019; Bulat & Tzimiropoulos, 2019), our method is data-driven and avoids using fixed scale factors learnt during training. Contrary to XNOR-Net, our method discriminatively learns how to produce the data-driven scale factors so that they are optimal for the task at hand.

# 3 BACKGROUND

This section reviews the binarization process proposed in (Courbariaux et al., 2016) and its improved version from (Rastegari et al., 2016), which is the method of choice for neural network binarization.

We denote by $\mathcal{W} \in \mathbb{R}^{o \times c \times k \times k}$ and $\mathcal{A} \in \mathbb{R}^{c \times w_{in} \times h_{in}}$ the weights and input features of a CNN layer, where $o$ and $c$ represent the number of output and input channels, $k$ the width and height of the kernel, and $w_{in}$ and $h_{in}$ represent the spatial dimensions of the input features $\mathcal{A}$. In (Courbariaux et al., 2016), both weights and activations are binarized using the sign function and then convolution is performed as $\mathcal{A} * \mathcal{W} \approx \operatorname{sign}(\mathcal{A}) \circledast \operatorname{sign}(\mathcal{W})$, where $\circledast$ denotes the binary convolution, which can be implemented using bit-wise operations.

However, this direct binarization approach introduces a high quantization error that leads to low accuracy. To alleviate this, XNOR-Net (Rastegari et al., 2016) proposes to use real-valued scaling factors to re-scale the output of the binary convolution as

$$
\mathcal{A} * \mathcal{W} \approx (\operatorname{sign}(\mathcal{A}) \circledast \operatorname{sign}(\mathcal{W})) \odot \mathcal{K} \alpha, \tag{1}
$$

where $\odot$ denotes element-wise multiplication, and $\alpha$ and $\mathcal{K}$ are the weight and activation scaling factors, respectively, calculated analytically in Rastegari et al. (2016).
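The scaled approximation of Eq. 1 can be illustrated numerically. The following numpy sketch keeps only the analytic weight factor $\alpha$ (the mean absolute weight) and omits $\mathcal{K}$ for brevity; it handles a single input/output channel, the function names are ours, and a real deployment would replace the sign-sign convolution with xnor and pop-count bit operations:

```python
import numpy as np

def conv2d_valid(a, w):
    """Plain 'valid' 2-D cross-correlation of one input map with one kernel."""
    kh, kw = w.shape
    oh, ow = a.shape[0] - kh + 1, a.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(a[i:i + kh, j:j + kw] * w)
    return out

def xnor_conv2d(a, w):
    """Eq. 1 with the weight factor only: conv of signs, re-scaled by
    alpha = mean |w| (XNOR-Net's analytic weight scaling factor)."""
    alpha = np.abs(w).mean()
    return alpha * conv2d_valid(np.sign(a), np.sign(w))
```

Note that when the kernel has constant magnitude (e.g. `w = 0.25 * np.sign(w)`) and the input is already in $\{\pm 1\}$, this approximation reproduces the real-valued convolution exactly; the quantization error of Eq. 1 comes from the deviation of $\mathcal{W}$ and $\mathcal{A}$ from that idealized case.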
More recently, Bulat & Tzimiropoulos (2019) proposed to fuse $\alpha$ and $\mathcal{K}$ into a single factor $\Gamma$ that is learned via backpropagation, resulting in further accuracy gains.

# 4 METHOD

This section firstly introduces our strong baseline. Then, we present two ways to improve the approximation of Eq. 1: Firstly, we use a loss based on matching attention maps computed from the binary and a real-valued network (see section 4.2). Secondly, we make the scaling factor a function of the real-valued input activations $\mathcal{A}$ (see section 4.3).

# 4.1 BUILDING A STRONG BASELINE

Currently, almost all works on binary networks use XNOR-Net and BNN as baselines. In this section, we show how to construct a strong baseline by incorporating insights and techniques described in recent works as well as standard optimization techniques. We show that our baseline already achieves state-of-the-art accuracy. We believe this is an important contribution towards understanding the true impact of proposed methodologies and towards assessing the true gap with real-valued networks. Following prior work in binary networks, we focus on the ResNet-18 architecture and apply the improvements listed below:

Block structure: It is well-known that a modified ResNet block must be used to obtain optimal results for binary networks. We found the widely-used setting where the operations are ordered as BatchNorm $\rightarrow$ Binarization $\rightarrow$ BinaryConv $\rightarrow$ Activation to be the best. The skip connection is the last operation of the block (Rastegari et al., 2016). Note that we use the sign function to binarize the activations. However, the BatchNorm layer includes an affine transformation, and this ordering of the blocks allows its bias term to act as a learnable binarization threshold.

Residual learning: We used double skip connections, as proposed in (Liu et al., 2018).
+ +Activation: We used PReLU (He et al., 2015) as it is known to facilitate the training of binary networks (Bulat et al., 2019). + +Scaling factors: We used discriminatively learnt scaling factors via backpropagation as in (Bulat & Tzimiropoulos, 2019). + +Downsample layers: We used real-valued downsample layers (Liu et al., 2018). We found the large accuracy boost to be consistent across our experiments (around $3 - 4\%$ top-1 improvement on ImageNet). + +We used the following training strategies to train our strong baseline: + +Initialization: When training binary networks, it is crucial to use a 2-stage optimization strategy (Bulat et al., 2019). In particular, we first train a network using binary activations and real-valued weights, and then use the resulting model as initialization to train a network where both weights and activations are binarized. + +Weight decay: Setting up weight decay carefully is surprisingly important. We use $1e - 5$ when training stage 1 (binary activation and real weights network), and set it to 0 on stage 2 (Bethge et al., 2019). Note that weights at stage 2 are either 1 or $-1$ , so applying an $L_{2}$ regularization term to them does not make sense. + +Data augmentation: For CIFAR-100 we use the standard random crop, horizontal flip and rotation $(\pm 15^{\circ})$ . For ImageNet, we found that random cropping, flipping and colour jitter augmentation worked best. However, colour jitter is disabled for stage 2. + +Mix-up: We found that mix-up (Zhang et al., 2017) is crucial for CIFAR-100, while it slightly hurts performance for ImageNet – this is due to the higher risk of overfitting on CIFAR-100. + +Warm-up: We used warm-up for 5 epochs during stage 1 and no warm-up for stage 2. + +Optimizer: We used Adam (Kingma & Ba, 2014) with a stepwise scheduler. The learning rate is set to $1e - 3$ for stage 1, and $2e - 4$ for stage 2. For CIFAR-100, we trained for 350 epochs, with steps at epochs 150, 250 and 320. 
For ImageNet, we train for 75 epochs, with steps at epochs 40, 60 and 70. Batch sizes are 256 for ImageNet and 128 for CIFAR-100. + +# 4.2 REAL-TO-BINARY ATTENTION MATCHING + +We make the reasonable assumption that if a binary network is trained so that the output of each binary convolution more closely matches the output of a real convolution in the corresponding layer of a real-valued network, then significant accuracy gains can be obtained. Notably, a similar assumption was made in (Rastegari et al., 2016) where analytic scale factors were calculated so that the error between binary and real convolutions is minimized. Instead, and inspired by the attention transfer method of (Zagoruyko & Komodakis, 2017), we propose to enforce such a constraint via a loss term at the end of each convolutional block by comparing attention maps calculated from the binary and real-valued activations. Such supervisory signals provide the binary network with much-needed extra guidance. It is also well-known that backpropagation for binary networks is not as effective as for real-valued ones. By introducing such loss terms at the end of each block, gradients do not have to traverse the whole network and suffer a degraded signal. + +Assuming that attention matching is applied at a set of $\mathcal{I}$ transfer points within the network, the total loss can be expressed as: + +$$ +\mathcal {L} _ {a t t} = \sum_ {j = 1} ^ {\mathcal {I}} \| \frac {\mathcal {Q} _ {S} ^ {j}}{\| \mathcal {Q} _ {S} ^ {j} \| _ {2}} - \frac {\mathcal {Q} _ {T} ^ {j}}{\| \mathcal {Q} _ {T} ^ {j} \| _ {2}} \|, \tag {2} +$$ + +where $\mathcal{Q}^j = \sum_{i=1}^{c} |\mathcal{A}_i|^2$ and $\mathcal{A}_i$ is the $i$ -th channel of activation map $\mathcal{A}$ . Moreover, at the end of the network, we apply a standard logit matching loss (Hinton et al., 2015). + +Progressive teacher-student: We observed that teacher and student having as similar architecture as possible is very important in our case. 
We thus train a sequence of teacher-student pairs that progressively bridges the differences between the real network and the binary network in small increments:

Step 1: The teacher is the real-valued network with the standard ResNet architecture. The student is another real-valued network, but with the same architecture as the binary ResNet-18 (e.g. double skip connection, layer ordering, PReLU activations, etc.). Furthermore, a soft binarization (a Tanh function) is applied to the activations instead of the binarization (sign) function. In this way the network is still real-valued, but it behaves more like a network with binary activations.

Step 2: The network resulting from the previous step is used as the teacher. A network with binary activations and real-valued weights is used as the student.

Step 3: The network resulting from step 2 is used as the teacher and the network with binary weights and binary activations is the student. In this stage, only logit matching is used.

# 4.3 DATA-DRIVEN CHANNEL RE-SCALING

While the approach of the previous section provides better guidance for the training of binary networks, the representation power of binary convolutions is still limited, hindering their capacity to approximate the real-valued network. Here we describe how to boost the representation capability of a binary neural network and yet incur only a negligible increase in the number of operations.

Previous works have shown the effectiveness of re-scaling binary convolutions with the goal of better approximating real convolutions. XNOR-Net (Rastegari et al., 2016) proposed to compute these scale factors analytically, while (Bulat & Tzimiropoulos, 2019; Xu & Cheung, 2019) proposed to learn them discriminatively in an end-to-end manner, showing additional accuracy gains. For the latter case, during training, the optimization aims to find a set of fixed scaling factors that minimize the average expected loss for the training set.
We propose instead to go beyond this and obtain discriminatively-trained input-dependent scaling factors – thus, at test time, these scaling factors will not be fixed but rather inferred from data. + +Let us first recall what the signal flow is when going through a binary block. The activations entering a binary block are actually real-valued. Batch normalization centers the activations, which are then binarized, losing a large amount of information. Binary convolution, re-scaling and PReLU follow. We propose to use the full-precision activation signal, available prior to the large information loss incurred by the binarization operation, to predict the scaling factors used to re-scale the output of the binary convolution channel-wise. Specifically, we propose to approximate the real convolution as follows: + +$$ +\mathcal {A} * \mathcal {W} \approx (\operatorname {s i g n} (\mathcal {A}) \circledast \operatorname {s i g n} (\mathcal {W})) \odot \boldsymbol {\alpha} \odot G (\mathcal {A}; \mathcal {W} _ {G}), \tag {3} +$$ + +where $\mathcal{W}_G$ are the parameters of the gating function $G$ . Such function computes the scale factors used to re-scale the output of the binary convolution, and uses the pre-convolution real-valued activations as input. Fig. 1 shows our implementation of function $G$ . The design is inspired by Hu et al. (2018), but we use the gating function to predict ahead rather than as a self-attention mechanism. + +An optimal mechanism to modulate the output of the binary convolution clearly should not be the same for all examples as in Bulat & Tzimiropoulos (2019) or Xu & Cheung (2019). Note that in Rastegari et al. (2016) the computation of the scale factors depends on the input activations. However the analytic calculation is sub-optimal with respect to the task at hand. 
To circumvent the aforementioned problems, our method learns, via backpropagation for the task at hand, to predict the modulating factors using the real-valued input activations. By doing so, more than $1/3$ of the remaining gap with the real-valued network is bridged. + +# 4.4 COMPUTATIONAL COST ANALYSIS + +Table 1 details the computational cost of the different binary network methodologies. We differentiate between the number of binary and floating point operations, including operations such as skip connections, pooling layers, etc. It shows that our method leaves the number of binary operations constant, and that the number of FLOPs increases by only $1\%$ of the total floating point operation count. This is assuming a factor $r$ of 8, which is the one used in all of our experiments. To put this into perspective, the magnitude is similar to the operation increase incurred by the XNOR-Net with respect to its predecessor, BNN. Similarly, the double skip connections proposed in (Liu et al., 2018) adds again a comparable amount of operations. Note however that in order to fully exploit the computational efficiency of binary convolutions during inference, a specialized engine such as (Zhang et al., 2019; Yang et al., 2017) is required. + +# 5 RESULTS + +We present two main sets of experiments. We used ImageNet (Russakovsky et al., 2015) as a benchmark to compare our method against other state-of-the-art approaches in Sec. 5.1. ImageNet is the most widely used dataset to report results on binary networks and, at the same time, allows us to show for the first time that binary networks can perform competitively on a large-scale dataset. We further used CIFAR-100 (Krizhevsky & Hinton, 2009) to conduct ablation studies (Sec. 5.2). + +
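Before turning to the results, the gating function $G$ of Eq. 3 (Fig. 1, right) can be made concrete with a minimal numpy sketch: global average pooling of the real-valued pre-binarization activations, a linear bottleneck with compression ratio $r$, and a sigmoid producing one scaling factor per channel. The exact layer choices (ReLU in the bottleneck, sigmoid at the output) follow the SE-style design the paper cites (Hu et al., 2018) but are our assumptions, as are all names and weight shapes:

```python
import numpy as np

def gating_factors(a, w_down, w_up):
    """G(A; W_G) from Eq. 3: per-channel scale factors from real-valued activations.

    a      : (C, H, W) activations *before* binarization.
    w_down : (C // r, C) bottleneck weights (compression ratio r, assumed).
    w_up   : (C, C // r) expansion weights (assumed).
    Returns a (C,) vector of gating factors in (0, 1).
    """
    z = a.mean(axis=(1, 2))                    # global average pool -> (C,)
    h = np.maximum(w_down @ z, 0.0)            # bottleneck + ReLU -> (C // r,)
    return 1.0 / (1.0 + np.exp(-(w_up @ h)))   # sigmoid gate -> (C,)

# Re-scaling the binary convolution output channel-wise, as in Eq. 3:
# out = binary_conv_out * alpha * gating_factors(a, w_down, w_up)[:, None, None]
```

Because the gate operates on a $C$-dimensional pooled vector through a $C/r$ bottleneck, its cost is a few thousand multiply-adds per layer, consistent with the $\sim 1\%$ FLOP overhead reported in Table 1.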
| Method | BOPs | FLOPs |
| --- | --- | --- |
| BNN (Courbariaux et al., 2016) | $1.695 \times 10^9$ | $1.314 \times 10^8$ |
| XNOR-Net (Rastegari et al., 2016) | $1.695 \times 10^9$ | $1.333 \times 10^8$ |
| Double Skip (Liu et al., 2018) | $1.695 \times 10^9$ | $1.351 \times 10^8$ |
| Bi-Real (Liu et al., 2018) | $1.676 \times 10^9$ | $1.544 \times 10^8$ |
| Ours | $1.676 \times 10^9$ | $1.564 \times 10^8$ |
| Full Precision | 0 | $1.826 \times 10^9$ |
+ +Table 1: Breakdown of floating point and binary operations for variants of binary ResNet-18. + +# 5.1 COMPARISON WITH THE STATE-OF-THE-ART + +Table 2 shows a comparison between our method and relevant state-of-the-art methods, including low-bit quantization methods other than binary. + +Vs. other binary networks: Our strong baseline already comfortably achieves state-of-the-art results, surpassing the previously best-reported result by about $1\%$ (Wang et al., 2019). Our full method further improves over the state-of-the-art by $5.5\%$ top-1 accuracy. When comparing to binary models that scale the capacity of the network (second set of results on Tab. 2), only (Zhuang et al., 2019) outperforms our method, surpassing it by $0.9\%$ top-1 accuracy - yet, this is achieved using 4 times the number of binary blocks. + +Vs. real-valued networks: Our method reduces the performance gap with its real-valued counterpart to $\sim 4\%$ top-1 accuracy, or $\sim 5\%$ if we compare against a real-valued network trained with attention transfer. + +Vs. other low-bit quantization: Table 2 also shows a comparison to the state-of-the-art for low-bit quantization methods (first set of results). It can be seen that our method surpasses the performance of all methods, except for TTQ (Zhu et al., 2017), which uses 2-bit weights, full-precision activations and 1.5 the channel width at each layer. + +# 5.2 ABLATION STUDIES + +In order to conduct a more detailed ablation study we provide results on CIFAR-100. We thoroughly optimized a ResNet-18 full precision network to serve as the real-valued baseline. + +Teacher-Student effectiveness: We trained a real-valued ResNet-18 using ResNet-34 as its teacher, yielding $\sim 1\%$ top-1 accuracy increase. 
Instead, our progressive teacher-student strategy yields a $\sim 5\%$ top-1 accuracy gain, showing that it is a fundamental tool when training binary networks, and that its impact is much larger than for real-valued networks, where the baseline optimization is already healthier.

Performance gap to real-valued: We observe that, for CIFAR-100, we close the gap with real-valued networks to about $2\%$ when comparing with the full-precision ResNet-18, and to about $3\%$ when optimized using teacher supervision. The gap is consistent with that on ImageNet in relative terms: $13\%$ and $10\%$ relative degradation on ImageNet and CIFAR-100 respectively.

Binary vs real downsample: Our proposed method achieves a similar performance increase irrespective of whether binary or real-valued downsample layers are used, the improvement being $5.5\%$ and $6.6\%$ top-1 accuracy gain respectively. It is also interesting to note that the results of the ablation study are consistent for all entries in both cases.

Scaling factors and attention matching: It is also noteworthy that the gating module is not effective in the absence of attention matching (see SB + G entries). It seems clear from this result that both are interconnected: the extra supervisory signal is necessary to properly guide the training, while the extra flexibility added through the gating mechanism boosts the capacity of the network to mimic the attention map.
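For reference, the attention-matching loss of Eq. 2 used throughout these ablations can be sketched in a few lines of numpy: the attention map of an activation tensor is the channel-wise sum of squared absolute values, and teacher and student maps are compared after L2-normalization. Function names are illustrative:

```python
import numpy as np

def attention_map(a):
    """Q = sum_i |A_i|^2 over channels, for activations a of shape (C, H, W)."""
    return np.sum(np.abs(a) ** 2, axis=0)

def attention_loss(a_student, a_teacher):
    """One term of Eq. 2: || Q_S/||Q_S||_2 - Q_T/||Q_T||_2 || on flattened maps."""
    qs = attention_map(a_student).ravel()
    qt = attention_map(a_teacher).ravel()
    return np.linalg.norm(qs / np.linalg.norm(qs) - qt / np.linalg.norm(qt))

# The total loss of Eq. 2 sums this term over all transfer points j = 1..I,
# i.e. over the outputs of the convolutional blocks where supervision is injected.
```

The loss vanishes exactly when student and teacher produce the same (normalized) attention map, which is the behaviour the `SB + Att Trans` rows measure.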
| Method | Bitwidth (W/A) | Top-1 | Top-5 |
| --- | --- | --- | --- |
| BWN (Rastegari et al., 2016) | 1/32 | 60.8 | 83.0 |
| TTQ (Zhu et al., 2017) | 2/32 | 66.6 | 87.2 |
| HWGQ (Cai et al., 2017) | 1/2 | 59.6 | 82.2 |
| LQ-Net (Zhang et al., 2018) | 1/2 | 62.6 | 84.3 |
| SYQ (Faraone et al., 2018) | 1/2 | 55.4 | 78.6 |
| DOREFA-Net (Zhou et al., 2016) | 2/2 | 62.6 | 84.4 |
| ABC-Net (Lin et al., 2017) | (1/1)×5 | 65.0 | 85.9 |
| Circulant CNN (Liu et al., 2019) | (1/1)×4 | 61.4 | 82.8 |
| Struct Appr (Zhuang et al., 2019) | (1/1)×4 | 64.2 | 85.6 |
| Struct Appr** (Zhuang et al., 2019) | (1/1)×4 | 66.3 | 86.6 |
| Ensemble (Zhu et al., 2019) | (1/1)×6 | 61.0 | - |
| BNN (Courbariaux et al., 2016) | 1/1 | 42.2 | 69.2 |
| XNOR-Net (Rastegari et al., 2016) | 1/1 | 51.2 | 73.2 |
| Trained Bin (Xu & Cheung, 2019) | 1/1 | 54.2 | 77.9 |
| Bi-Real Net (Liu et al., 2018)** | 1/1 | 56.4 | 79.5 |
| CI-Net (Wang et al., 2019) | 1/1 | 56.7 | 80.1 |
| XNOR-Net++ (Bulat & Tzimiropoulos, 2019) | 1/1 | 57.1 | 79.9 |
| CI-Net (Wang et al., 2019)** | 1/1 | 59.9 | 84.2 |
| Strong Baseline (ours)** | 1/1 | 60.9 | 83.0 |
| Real-to-Bin (ours)** | 1/1 | 65.4 | 86.2 |
| Real valued | 32/32 | 69.3 | 89.2 |
| Real valued T-S | 32/32 | 70.7 | 90.0 |
+ +Table 2: Comparison with state-of-the-art methods on ImageNet. ** indicates real-valued down-sample. The second column indicates the number of bits used to represent weights and activations. Methods include low-bit quantization (upper section), and methods multiplying the capacity of the network (second section). For the latter case, the second column includes the multiplicative factor of the network capacity used. + +
| Method | Stage 1 (Top-1 / Top-5) | Stage 2 (Top-1 / Top-5) |
| --- | --- | --- |
| Strong Baseline | 69.3 / 88.7 | 68.0 / 88.3 |
| SB + Att Trans | 72.2 / 90.3 | 71.1 / 90.1 |
| SB + Att Trans + HKD | 73.1 / 91.2 | 71.9 / 90.9 |
| SB + G | 67.2 / 87.0 | 66.2 / 86.8 |
| SB + Progressive TS | 73.8 / 91.5 | 72.3 / 89.8 |
| Real-to-Bin | 75.0 / 92.2 | 73.5 / 91.6 |
| Strong Baseline** | 72.1 / 89.9 | 69.6 / 89.2 |
| SB + Att Trans** | 74.3 / 91.3 | 72.6 / 91.4 |
| SB + Att Trans + HKD** | 75.4 / 92.2 | 73.9 / 91.2 |
| SB + G** | 72.0 / 89.8 | 70.9 / 89.3 |
| SB + Progressive TS** | 75.7 / 92.1 | 74.6 / 91.8 |
| Real-to-Bin** | 76.5 / 92.8 | 76.2 / 92.7 |
| Full Prec (our impl.) | 78.3 / 93.6 | - |
| Full Prec + TS (our impl.) | 79.3 / 94.4 | - |
Table 3: Top-1 and Top-5 classification accuracy using ResNet-18 on CIFAR-100. ** indicates real-valued downsample layers. $G$ indicates that the gating function of Sec. 4.3 is used.

# 6 CONCLUSION

In this work we showed how to train binary networks to within a few percent points of their real-valued counterpart, turning binary networks from hopeful research into a compelling alternative to real-valued networks. We did so by training a binary network to not only predict training labels, but also mimic the behaviour of real-valued networks. To this end, we devised a progressive attention matching strategy to drive optimization, and combined it with a gating strategy for scaling the output of binary convolutions, increasing the representation power of the convolutional block. The two strategies combine perfectly to boost the state of the art of binary networks by $5.5\%$ top-1 accuracy on ImageNet, the standard benchmark for binary networks.

# REFERENCES

Milad Alizadeh, Javier Fernández-Marqués, Nicholas D. Lane, and Yarin Gal. An empirical study of binary neural networks' optimisation. In International Conference on Learning Representations, 2019.
Joseph Bethge, Haojin Yang, Marvin Bornstein, and Christoph Meinel. Back to simplicity: How to train accurate BNNs from scratch? arXiv preprint arXiv:1906.08637, 2019.
Adrian Bulat and Georgios Tzimiropoulos. XNOR-Net++: Improved binary neural networks. In British Machine Vision Conference, 2019.
Adrian Bulat, Georgios Tzimiropoulos, Jean Kossaifi, and Maja Pantic. Improved training of binary networks for human pose estimation and image recognition. arXiv preprint arXiv:1904.05868, 2019.
Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio.
Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv, 2016.
Ruizhou Ding, Ting-Wu Chin, Zeye Liu, and Diana Marculescu. Regularizing activation distribution for training binarized deep networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Julian Faraone, Nicholas J. Fraser, Michaela Blott, and Philip H. W. Leong. SYQ: Learning symmetric quantization for efficient deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. arXiv, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. In Advances in Neural Information Processing Systems, 2017.
Chunlei Liu, Wenrui Ding, Xin Xia, Baochang Zhang, Jiaxin Gu, Jianzhuang Liu, Rongrong Ji, and David Doermann. Circulant binary convolutional networks: Enhancing the performance of 1-bit dcnns with circulant back propagation. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng.
Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm. In European Conference on Computer Vision, 2018. +Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-Net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, 2016. +Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal on Computer Vision, 115(3):211-252, 2015. +Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances on Neural Information Processing Systems, 2014. +Ziwei Wang, Jiwen Lu, Chenxin Tao, Jie Zhou, and Qi Tian. Learning channel-wise interactions for binary convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. +Zhe Xu and Ray C.C. Cheung. Accurate and compact convolutional neural networks with trained binarization. In *British Machine Vision Conference*, 2019. +Haojin Yang, Martin Fritzsche, Christian Bartz, and Christoph Meinel. BMXNet: An open-source binary neural network implementation based on MXNet. In ACM International Conference on Multimedia, 2017. +Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In International Conference on Learning Representations, 2017. +Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks. In European Conference on Computer Vision, 2018. +Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. 
arXiv preprint arXiv:1710.09412, 2017. +Jianhao Zhang, Yingwei Pan, Ting Yao, He Zhao, and Tao Mei. dabnn: A super fast inference framework for binary neural networks on ARM devices. In ACM International Conference on Multimedia, 2019. +Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv, 2016. +Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. International Conference on Learning Representations, 2017. +Shilin Zhu, Xin Dong, and Hao Su. Binary ensemble neural network: More bits per network or more networks per bit? In IEEE Conference on Computer Vision and Pattern Recognition, 2019. +Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian D. Reid. Towards effective low-bitwidth convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. + +Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Structured binary neural networks for accurate image classification and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 
\ No newline at end of file diff --git a/trainingbinaryneuralnetworkswithrealtobinaryconvolutions/images.zip b/trainingbinaryneuralnetworkswithrealtobinaryconvolutions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..171ea63463febd9bbc08d23f8085b654bac2a027 --- /dev/null +++ b/trainingbinaryneuralnetworkswithrealtobinaryconvolutions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a52d2e1a1d5bb1bf8b9c32dc666ccfabdca0a28128f11765f52df2a4ad97c66f +size 278931 diff --git a/trainingbinaryneuralnetworkswithrealtobinaryconvolutions/layout.json b/trainingbinaryneuralnetworkswithrealtobinaryconvolutions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..42f6763e0225342bd767475f7ef009755a076909 --- /dev/null +++ b/trainingbinaryneuralnetworkswithrealtobinaryconvolutions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:877a07b74efc870e6f4982fed4e93eef96ba9943edcac6774aea36cab0609255 +size 289232 diff --git a/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_content_list.json b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..18722d8ec9777f2709111c2fdff1942de726ab15 --- /dev/null +++ b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d869c646b2e45c2c017c22f728454f1c87057689ba5df556e8bdfd4e7d19daf6 +size 163925 diff --git a/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_model.json 
b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d3edb283af7507b9d1322a40b6d18eb75552d2a3 --- /dev/null +++ b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1856cb96532a72a1ba5729e740b02dc516ea785ccebd63280755364bc59a549d +size 170267 diff --git a/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_origin.pdf b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..fa7032189bde4908d2bee6049ae46c00861c36b6 --- /dev/null +++ b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/ab9488eb-e110-49cd-a4ae-9d1ddded3a60_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:32c7db8c0171b91dc3ab25cefae36848bf790c309b23c97343161adfd14b4614 +size 6545789 diff --git a/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/full.md b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9a838b22556253bffef834c7977eb9c8e1727658 --- /dev/null +++ b/traininggenerativeadversarialnetworksfromincompleteobservationsusingfactoriseddiscriminators/full.md @@ -0,0 +1,776 @@ +# TRAINING GENERATIVE ADVERSARIAL NETWORKS FROM INCOMPLETE OBSERVATIONS USING FACTORISED DISCRIMINATORS + +Daniel Stoller + +Queen Mary University London, UK + +d.stoller@qmul.ac.uk + +Sebastian Ewert + +Spotify + +Berlin, Germany + 
+sewert@spotify.com + +Simon Dixon + +Queen Mary University + +London, UK + +s.e.dixon@qmul.ac.uk + +# ABSTRACT + +Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as the CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple "sub-discriminators" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute $14.9\%$ over CycleGAN while using only 25 additional paired examples. + +# 1 INTRODUCTION + +In generative adversarial networks (GANs) (Goodfellow et al., 2014) a generator network is trained to produce samples from a given target distribution. To achieve this, a discriminator network is employed to distinguish between "real" samples from the dataset and "fake" samples from the generator network. The discriminator's feedback is used by the generator to improve its output. 
While GANs have become highly effective at synthesising realistic examples even for complex data such as natural images (Radford et al., 2015; Karras et al., 2018), they typically rely on large training datasets. These are not available in many cases, especially for prediction tasks such as audio source separation (Stoller et al., 2018) or image-to-image translation (Zhu et al., 2017). Instead, one often encounters many incomplete observations, such as unpaired images in image-to-image translation, or isolated source recordings in source separation. However, standard GANs cannot be trained with these observations. Recent approaches that work with unpaired data cannot make use of additional paired data (Zhu et al., 2017) or incur computational overhead due to additional generators and discriminators that model the inverse of the mapping of interest (Almahairi et al., 2018; Gan et al., 2017). For training the generator, multiple losses are combined whose interactions are unclear and which do not guarantee that the generator converges to the desired distribution.

In this paper, we adapt the standard GAN framework to enable training predictive models with both paired and unpaired data, as well as generative models with incomplete observations. To achieve this, we split the discriminator into multiple "marginal" discriminators, each modelling a separate set of dimensions of the input. As this modification on its own would ignore any dependencies between these parts, we incorporate two additional "dependency discriminators", each focusing only on inter-part relationships. We show how the outputs from these marginal and dependency discriminators can be recombined and used to estimate the same density ratios as in the original GAN framework – which enables training any generator network in an unmodified form.
In contrast to previous GANs, our approach only requires full observations to train the smaller dependency discriminator and can leverage much bigger, simpler datasets to train the marginal discriminators, which enables the generator to model the marginal distributions more accurately. Additionally, prior knowledge about the marginals and dependencies can be incorporated into the architecture of each discriminator. Because our method is derived from first principles, we obtain a consistent adversarial learning framework without the need for extra losses that rely on further assumptions or conflict with the GAN objective.

In our experiments, we apply our approach ("FactorGAN") to two image generation tasks (Sections 4.1 and 4.2), image segmentation (Section 4.3) and audio source separation (Section 4.4), and observe improved performance in missing data scenarios compared to a GAN. For image segmentation, we also compare to the CycleGAN (Zhu et al., 2017), which does not require images to be paired with their segmentation maps. By leveraging both paired and unpaired examples with a unified adversarial objective, we achieve substantially higher segmentation accuracy than GAN and CycleGAN models, even with only 25 paired samples.

# 2 METHOD

After a brief summary of GANs in Section 2.1, we introduce our method from a missing data perspective in Section 2.2, before extending it to conditional generation (Section 2.3) and the case of independent outputs (Section 2.4).

# 2.1 GENERATIVE ADVERSARIAL NETWORKS

To model a probability distribution $p_x$ over $\mathbf{x} \in \mathbb{R}^d$, we follow the standard GAN framework and introduce a generator model $G_{\phi}: \mathbb{R}^n \to \mathbb{R}^d$ that maps an $n$-dimensional input $\mathbf{z} \sim p_z$ to a $d$-dimensional sample $G_{\phi}(\mathbf{z})$, resulting in the generator distribution $q_x$.
To train $G_{\phi}$ such that $q_x$ approximates the real data density $p_x$, a discriminator $D_{\theta}: \mathbb{R}^d \to (0,1)$ is trained to estimate whether a given sample is real or generated:

$$
\underset{\theta}{\arg\max}\ \mathbb{E}_{\mathbf{x} \sim p_x} \log D_{\theta}(\mathbf{x}) + \mathbb{E}_{\mathbf{x} \sim q_x} \log\left(1 - D_{\theta}(\mathbf{x})\right). \tag{1}
$$

In the non-parametric limit (Goodfellow et al., 2014), $D_{\theta}(\mathbf{x})$ approaches $\tilde{D}(\mathbf{x}) \coloneqq \frac{p_x(\mathbf{x})}{p_x(\mathbf{x}) + q_x(\mathbf{x})}$ at every point $\mathbf{x}$. The generator is updated based on the discriminator's estimate of $\tilde{D}(\mathbf{x})$. In this paper, we use the alternative loss function for $G_{\phi}$ proposed by Goodfellow et al. (2014), maximised over the generator parameters $\phi$:

$$
\underset{\phi}{\arg\max}\ \mathbb{E}_{\mathbf{z} \sim p_z} \log D_{\theta}\left(G_{\phi}(\mathbf{z})\right). \tag{2}
$$

# 2.2 ADAPTATION TO MISSING DATA

In the following we consider the case that incomplete observations are available in addition to our regular dataset (i.e. simpler yet larger datasets). In particular, we partition the set of $d$ input dimensions of $\mathbf{x}$ into $K$ ($2 \leq K \leq d$) non-overlapping subsets $\mathcal{D}_1, \ldots, \mathcal{D}_K$. For each $i \in \{1, \ldots, K\}$, an incomplete ("marginal") observation $\mathbf{x}^i$ can be drawn from $p_x^i$, which is obtained from $p_x$ by marginalising out all dimensions not in $\mathcal{D}_i$. Analogously, $q_x^i$ denotes the $i$-th marginal distribution of the generator $G_\phi$. Next, we extend the existing GAN framework such that we can employ the additional incomplete observations. In this context, the main hurdle is that a standard GAN discriminator is trained with samples from the full joint $p_x$.
To eliminate this restriction, we note that $\tilde{D}(\mathbf{x})$ can be mapped to a "joint density ratio" $\frac{p_x(\mathbf{x})}{q_x(\mathbf{x})}$ by applying the bijective function $h: [0, 1) \to \mathbb{R}^+$, $h(a) = \frac{a}{1-a}$. For our approach, we exploit that this joint density ratio can be factorised into a product of density ratios:

$$
h(\tilde{D}(\mathbf{x})) = \frac{p_x(\mathbf{x})}{q_x(\mathbf{x})} = \frac{c_P(\mathbf{x})}{c_Q(\mathbf{x})} \prod_{i=1}^{K} \frac{p_x^i\left(\mathbf{x}^i\right)}{q_x^i\left(\mathbf{x}^i\right)} \quad \text{with} \tag{3}
$$

$$
c_P(\mathbf{x}) = \frac{p_x(\mathbf{x})}{\prod_{i=1}^{K} p_x^i\left(\mathbf{x}^i\right)} \quad \text{and} \quad c_Q(\mathbf{x}) = \frac{q_x(\mathbf{x})}{\prod_{i=1}^{K} q_x^i\left(\mathbf{x}^i\right)}.
$$

Each "marginal density ratio" $\frac{p_x^i(\mathbf{x}^i)}{q_x^i(\mathbf{x}^i)}$ captures the generator's output quality for one marginal variable $\mathbf{x}^i$, while the $c_P$ and $c_Q$ terms describe the dependency structure between marginal variables in the real and generated distribution, respectively. Note that our theoretical considerations assume that the densities $p_x$ and $q_x$ are non-zero everywhere. While this might not be fulfilled in practice, our implementation does not directly compute density ratios and instead relies on the same assumptions as Goodfellow et al. (2014). We can estimate each density ratio independently by training a "sub-discriminator" network, and combine their outputs to estimate $\tilde{D}(\mathbf{x})$, as shown below.
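As a sanity check, the factorisation in Equation (3) and the way optimal sub-discriminator outputs recombine (a sum of logits followed by a sigmoid, detailed later under "Combining the discriminators") can be verified numerically on a toy discrete distribution. The distributions below are illustrative, not taken from the paper:

```python
import itertools
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy joint distributions over x = (x^1, x^2) with binary parts (K = 2).
# p has correlated parts; q plays the role of the generator distribution.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
q = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.2}

def marginals(dist):
    m1 = {v: sum(dist[(v, b)] for b in (0, 1)) for v in (0, 1)}
    m2 = {v: sum(dist[(a, v)] for a in (0, 1)) for v in (0, 1)}
    return m1, m2

p1, p2 = marginals(p)
q1, q2 = marginals(q)

for x1, x2 in itertools.product((0, 1), repeat=2):
    x = (x1, x2)
    # Dependency terms c_P and c_Q from Eq. (3).
    c_P = p[x] / (p1[x1] * p2[x2])
    c_Q = q[x] / (q1[x1] * q2[x2])
    # Right-hand side of Eq. (3) equals the joint density ratio p(x)/q(x).
    rhs = (c_P / c_Q) * (p1[x1] / q1[x1]) * (p2[x2] / q2[x2])
    assert abs(rhs - p[x] / q[x]) < 1e-12
    # Each optimal sub-discriminator outputs the sigmoid of its log-ratio,
    # so summing the logits and applying a sigmoid recovers D~(x) = p/(p+q).
    logit_sum = (math.log(p1[x1] / q1[x1]) + math.log(p2[x2] / q2[x2])
                 + math.log(c_P) - math.log(c_Q))
    assert abs(sigmoid(logit_sum) - p[x] / (p[x] + q[x])) < 1e-12
```

The second assertion also illustrates why $h$ is convenient: $h(\sigma(\ell)) = e^{\ell}$, so multiplying density ratios corresponds to adding pre-activation outputs.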
Estimating the marginal density ratios: To estimate $\frac{p_x^i(\mathbf{x}^i)}{q_x^i(\mathbf{x}^i)}$ for each $i\in \{1,\ldots ,K\}$, we train a "marginal discriminator network" $D_{\theta_i}:\mathbb{R}^{|\mathcal{D}_i|}\to (0,1)$ with parameters $\theta_{i}$ to determine whether a marginal sample $\mathbf{x}^i$ is real or generated, following the GAN discriminator loss in Equation (1). This allows making use of the additional incomplete observations. In the non-parametric limit, $D_{\theta_i}(\mathbf{x}^i)$ will approach $\tilde{D}_i(\mathbf{x}^i)\coloneqq \frac{p_x^i(\mathbf{x}^i)}{p_x^i(\mathbf{x}^i) + q_x^i(\mathbf{x}^i)}$, so that we can use $h(D_{\theta_i}(\mathbf{x}^i))$ as an estimate of $\frac{p_x^i(\mathbf{x}^i)}{q_x^i(\mathbf{x}^i)}$.

Estimation of $c_{P}(\mathbf{x})$ and $c_{Q}(\mathbf{x})$: Note that $c_{P}$ and $c_{Q}$ are also density ratios, this time containing a distribution over $\mathbf{x}$ in both the numerator and denominator – the main difference being that in the latter the individual parts $\mathbf{x}^i$ are independent from each other. To approximate the ratio $c_{P}$, we can apply the same principles as above and train a "p-dependency discriminator" $D_{\theta_P}^P: \mathbb{R}^d \to (0,1)$ to distinguish samples from the two distributions, i.e. to discriminate real joint samples from samples where the individual parts are real but were drawn independently of each other (i.e. the individual parts might not originate from the same real joint sample). Again, in the non-parametric limit, its response approaches $\tilde{D}^{P}(\mathbf{x}) := \frac{p_{x}(\mathbf{x})}{p_{x}(\mathbf{x}) + \prod_{i=1}^{K} p_{x}^{i}(\mathbf{x}^{i})}$ and thus $c_{P}$ can be approximated via $h \circ D_{\theta_P}^P$.
Analogously, the $c_{Q}$ term is estimated with a "q-dependency discriminator" $D_{\theta_Q}^Q$: here, we compare joint generator samples with samples where the individual parts were shuffled across several generated samples (to implement the independence assumption).

Joint discriminator sample complexity: In contrast to $c_{Q}$, where the generator provides an infinite number of samples, estimating $c_{P}$ without overfitting to the limited number of joint training samples can be challenging. While standard GANs suffer from the same difficulty, our factorisation into specialised sub-units allows for additional opportunities to improve the sample complexity. In particular, we can design the architecture of the p-dependency discriminator to incorporate prior knowledge about the dependency structure.

Combining the discriminators: As the marginal and the p- and q-dependency sub-discriminators provide estimates of their respective density ratios, we can multiply them and apply $h^{-1}$ to obtain the desired ratio $\tilde{D}(\mathbf{x})$, following Equation (3). This can be implemented in a simple and stable fashion using a linear combination of pre-activation sub-discriminator outputs followed by a sigmoid (see Section A.4 for details and proof). The time for a generator update step grows linearly with the number of marginals $K$, assuming the time to update each of the $K$ marginal discriminators remains constant.

# 2.3 ADAPTATION TO CONDITIONAL GENERATION

Conditional generation, such as image segmentation or inpainting, can be performed with GANs by using a generator $G_{\phi}$ that maps a conditional input $\mathbf{x}^1$ and noise to an output $\mathbf{x}^2$, resulting in an output probability $q_{\phi}(\mathbf{x}^2|\mathbf{x}^1)$.
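Returning to the q-dependency discriminator of Section 2.2: its "shuffled" negatives can be sketched as follows. This is a minimal NumPy illustration with made-up batch shapes, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of N generated joint samples with K = 2 parts
# (e.g. the top and bottom halves of a Paired-MNIST sample).
N = 8
part1 = rng.normal(size=(N, 4))   # dimensions in D_1
part2 = rng.normal(size=(N, 4))   # dimensions in D_2

# Joint (dependent) samples: both parts come from the same generator draw.
joint = np.concatenate([part1, part2], axis=1)

# "Shuffled" samples: permute each part independently across the batch.
# This keeps every marginal distribution intact but breaks the dependencies
# between parts -- exactly the negatives the q-dependency discriminator needs.
perm1 = rng.permutation(N)
perm2 = rng.permutation(N)
shuffled = np.concatenate([part1[perm1], part2[perm2]], axis=1)

assert joint.shape == shuffled.shape == (N, 8)
# Each column of `shuffled` holds the same values as the same column of
# `joint`, only reordered -- the marginals are untouched.
assert np.allclose(np.sort(shuffled, axis=0), np.sort(joint, axis=0))
```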
+ +When viewing $\mathbf{x}^1$ and $\mathbf{x}^2$ as parts of a joint variable $\mathbf{x} \coloneqq (\mathbf{x}^1, \mathbf{x}^2)$ with distribution $p_x$ , we can also frame the above task as matching $p_x$ to the joint generator distribution $q_x(\mathbf{x}) \coloneqq p_x^1(\mathbf{x}^1) q_\phi(\mathbf{x}^2|\mathbf{x}^1)$ . In a standard conditional GAN, the discriminator is asked to distinguish between joint samples from $p_x$ and $q_x$ , which requires paired samples from $p_x$ and is inefficient as the inputs $\mathbf{x}^1$ are the same in + +both $p_x$ and $q_x$ . In contrast, applying our factorisation principle from Equation (3) to $\mathbf{x}^1$ and $\mathbf{x}^2$ (for the special case $K = 2$ ) yields + +$$ +\frac {p _ {x} (\mathbf {x})}{q _ {x} (\mathbf {x})} = \frac {\frac {p _ {x} (\mathbf {x})}{p _ {x} ^ {1} \left(\mathbf {x} ^ {1}\right) p _ {x} ^ {2} \left(\mathbf {x} ^ {2}\right)}}{\frac {q _ {x} (\mathbf {x})}{q _ {x} ^ {1} \left(\mathbf {x} ^ {1}\right) q _ {x} ^ {2} \left(\mathbf {x} ^ {2}\right)}} \frac {p _ {x} ^ {2} \left(\mathbf {x} ^ {2}\right)}{q _ {x} ^ {2} \left(\mathbf {x} ^ {2}\right)} = \frac {c _ {P} (\mathbf {x})}{c _ {Q} (\mathbf {x})} \frac {p _ {x} ^ {2} \left(\mathbf {x} ^ {2}\right)}{q _ {x} ^ {2} \left(\mathbf {x} ^ {2}\right)}, \tag {4} +$$ + +suggesting the use of a p- and a q-dependency discriminator to model the input-output relationship, and a marginal discriminator over $\mathbf{x}^2$ that matches aggregate generator predictions from $q_x^2$ to real output examples from $p_x^2$ . Note that we do not need a marginal discriminator for $\mathbf{x}^1$ , which increases computational efficiency. This adaptation can also involve additionally partitioning $\mathbf{x}^2$ into multiple partial observations as shown in Equation 3. 
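The cancellation behind Equation (4) — the shared input marginal $q_x^1 = p_x^1$ makes a marginal discriminator over $\mathbf{x}^1$ unnecessary — can be checked numerically on a toy conditional setup. The probability tables below are made up for illustration:

```python
import itertools

# Conditional toy setup: the input marginal p1 is shared between the real
# and the generator distribution, as in conditional generation.
p1 = {0: 0.6, 1: 0.4}
p_cond = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # real p(x2 | x1)
q_cond = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}   # generator q(x2 | x1)

# Joint distributions p(x) = p1(x1) p(x2|x1) and q(x) = p1(x1) q(x2|x1).
p = {(a, b): p1[a] * p_cond[a][b] for a, b in itertools.product((0, 1), repeat=2)}
q = {(a, b): p1[a] * q_cond[a][b] for a, b in itertools.product((0, 1), repeat=2)}

# Output marginals p^2(x2) and q^2(x2).
p2 = {b: p[(0, b)] + p[(1, b)] for b in (0, 1)}
q2 = {b: q[(0, b)] + q[(1, b)] for b in (0, 1)}

for x1, x2 in itertools.product((0, 1), repeat=2):
    x = (x1, x2)
    c_P = p[x] / (p1[x1] * p2[x2])
    c_Q = q[x] / (p1[x1] * q2[x2])   # q1 = p1, so the x^1 ratio cancels
    # Eq. (4): dependency terms plus one output-marginal ratio suffice.
    assert abs((c_P / c_Q) * (p2[x2] / q2[x2]) - p[x] / q[x]) < 1e-12
```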
# 2.4 ADAPTATION TO INDEPENDENT MARGINALS

In case the marginals can be assumed to be completely independent, one can remove the p-dependency discriminator from our framework, since $c_{P}(\mathbf{x}) = 1$ for all inputs $\mathbf{x}$. This approach can be useful in the conditional setting, when each output is related to the input but the output marginals are independent of each other. In this setting, our method is related to adversarial ICA (Brakel & Bengio, 2017). Note that the q-dependency discriminator still needs to be trained on the full generator outputs if the generator should not introduce unwanted dependencies between the marginals.

# 2.5 FURTHER EXTENSIONS

There are many more ways of partitioning the joint distribution into marginals. We discuss two additional variants (hierarchical and auto-regressive FactorGANs) of our approach in Section A.3.

# 3 RELATED WORK

For conditional generation, "CycleGAN" (Zhu et al., 2017) exploits unpaired samples by assuming a one-to-one mapping between the domains and using bidirectional generators (along with Gan et al. (2017)), while FactorGAN makes no such assumptions and instead uses paired examples to learn the dependency structure. Almahairi et al. (2018) and Tripathy et al. (2018) learn from paired examples with an additional reconstruction-based loss, but use a sum of many different loss terms which have to be balanced by additional hyper-parameters. Additionally, their approach cannot be applied to generation tasks with missing data or prediction tasks with multiple outputs. Brakel & Bengio (2017) perform independent component analysis in an adversarial fashion using a discriminator to identify correlations. Similarly to our q-dependency discriminator, the separator outputs are enforced to be independent, but our method is fully adversarial and can model arbitrary dependencies with the p-dependency discriminator.
GANs were also used for source separation, but dependencies were either ignored (Zhang et al., 2017) or modelled with an additional L2 loss (Stoller et al., 2018) that supports only deterministic separators. + +Pu et al. (2018) use GANs for joint distribution modelling by training a generator for each possible factorisation of the joint distribution, but this requires $K!$ generators for $K$ marginals, whereas we assume either all parts or exactly one part of the variable of interest is observed to avoid functional redundancies between the different networks. Karaletsos (2016) propose adversarial inference on local factors of a high-dimensional joint distribution and factorise both generator and discriminator based on independence assumptions given by a Bayesian network, whereas we keep a joint sample generator and model all dependencies. Finally, Yoon et al. (2018) randomly mask the inputs to a GAN generator so it learns to impute missing values, whereas our generator aims to learn a transformation where inputs are fully observed. + +# 4 EXPERIMENTS + +To validate our method, we compare our FactorGAN with the regular GAN approach, both for unsupervised generation as well as supervised prediction tasks. For the latter, we also compare to the CycleGAN (Zhu et al., 2017) as an unsupervised baseline. To investigate whether FactorGAN makes + +efficient use of all observations, we vary the proportion of the training samples available for joint sampling (paired), while using the rest to sample from the marginals (unpaired). We train all models using a single NVIDIA GTX 1080 GPU. + +Training procedure For stable training, we employ spectral normalisation (Miyato et al., 2018) on each discriminator network to ensure they satisfy a Lipschitz condition. Since the overall output used for training the generator is simply a linear combination of the individual discriminators (see Section A.4), the generator gradients are also constrained in magnitude accordingly. 
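To illustrate the Lipschitz constraint mentioned above, here is a minimal power-iteration sketch of spectral normalisation (Miyato et al., 2018) for a single weight matrix. The shapes and iteration count are arbitrary for the demonstration; practical implementations run one iteration per training step and reuse the running estimate of $u$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))   # a hypothetical discriminator weight matrix

# Power iteration approximates the largest singular value sigma(W);
# dividing W by it bounds the layer's Lipschitz constant by 1.
u = rng.normal(size=16)
for _ in range(500):           # many steps here only to get a tight estimate
    v = W.T @ u
    v /= np.linalg.norm(v)
    u = W @ v
    u /= np.linalg.norm(u)
sigma = u @ W @ v              # estimated spectral norm
W_sn = W / sigma               # spectrally normalised weights

# The normalised matrix has unit spectral norm.
assert abs(np.linalg.svd(W_sn, compute_uv=False)[0] - 1.0) < 1e-6
```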
Unless otherwise noted, we use an Adam optimiser with learning rate $10^{-4}$ and a batch size of 25 for training all models. We perform two discriminator updates after each generator update.

# 4.1 PAIRED MNIST

Our first experiment involves "Paired MNIST", a synthetic dataset of low complexity whose dependencies between marginals can be easily controlled. More precisely, we generate a paired version of the original MNIST dataset by creating samples that contain a pair of vertically stacked digit images. With probability $\lambda$, the lower digit chosen during random generation is the same as the upper one, and different otherwise. For FactorGAN, we model the distributions of upper and lower digits as individual marginal distributions ($K = 2$).

Experimental setup We compare the normal GAN with our FactorGAN, also including a variant without the p-dependency discriminator that assumes the marginals to be independent ("FactorGAN-no-cp"). We conduct the experiment with $\lambda = 0.1$ and $\lambda = 0.9$ and also vary the number of training samples available in paired form, while keeping the others as marginal samples only usable by FactorGAN. For both generators and discriminators, we used simple multi-layer perceptrons (Tables 1 and 2).

To evaluate the quality of generated digits, we adopt the "Fréchet Inception Distance" (FID) as our metric (Heusel et al., 2017). It is based on estimating the distance between the distributions of hidden-layer activations of an ImageNet-pretrained model for real and fake examples. To adapt the metric to MNIST data, we pre-train a classifier to predict MNIST digits (see Table 3) on the training set for 20 epochs, obtaining a test accuracy of $98\%$. We input the top and bottom digits in each sample separately to the classifier and collect the activations from the last hidden layer (FC1) to compute FIDs for the top and bottom digits, respectively.
We use the average of both FIDs to measure the overall output quality of the marginals (lower value is better). + +Since the only dependencies in the data are digit correlations controlled by $\lambda$ , we can evaluate how well FactorGAN models these dependencies. We compute $p_{D}(D_{t}, D_{b})$ as the probability for a real sample to have digit $D_{t} \in \{0, \dots, 9\}$ at the top and digit $D_{b} \in \{0, \dots, 9\}$ at the bottom, along with marginal probabilities $p_{D}^{t}(D_{t})$ and $p_{D}^{b}(D_{b})$ (and analogously $q_{D}(D_{t}, D_{b})$ for generated data). Since we do not have ground truth digit labels for the generated samples, we instead use the class predicted by the pre-trained classifier. We encode the dependency as a ratio between a joint and the product of its marginals, where the ratios for real and generated data are ideally the same. Therefore, we take their absolute difference for all digit combinations as evaluation metric (lower is better): + +$$ +d _ {\mathrm {d e p}} = \frac {1}{1 0 0} \sum_ {D _ {t} = 0} ^ {9} \sum_ {D _ {b} = 0} ^ {9} \left| \frac {p _ {D} \left(D _ {t} , D _ {b}\right)}{p _ {D} ^ {t} \left(D _ {t}\right) p _ {D} ^ {b} \left(D _ {b}\right)} - \frac {q _ {D} \left(D _ {t} , D _ {b}\right)}{q _ {D} ^ {t} \left(D _ {t}\right) q _ {D} ^ {b} \left(D _ {b}\right)} \right|. \tag {5} +$$ + +Note that the metric computes how well dependencies in the real data are modelled by a generator, but not whether it introduces any additional unwanted dependencies such as top and bottom digits sharing stroke thickness, and thus presents only a necessary condition for a good generator. + +Results The results of our experiment are shown in Figure 1. Since FactorGAN-no-cp trains on all samples independently of the number of paired observations, both FID and $d_{\mathrm{dep}}$ are constant. 
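Equation (5) is straightforward to compute once the two $10 \times 10$ digit co-occurrence tables are estimated. A minimal sketch, using randomly generated stand-in tables (the real ones come from label counts and the classifier's predictions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 10x10 co-occurrence tables: entry (t, b) is the probability of
# digit t on top and digit b at the bottom.
def random_joint():
    m = rng.random((10, 10))
    return m / m.sum()

p_joint = random_joint()   # would be estimated from real samples
q_joint = random_joint()   # would come from classifying generated samples

def dependency_ratio(joint):
    # joint(t, b) / (marginal_t(t) * marginal_b(b)) for every digit pair
    top = joint.sum(axis=1, keepdims=True)      # p^t(D_t), shape (10, 1)
    bottom = joint.sum(axis=0, keepdims=True)   # p^b(D_b), shape (1, 10)
    return joint / (top * bottom)

# Eq. (5): mean absolute difference of the two dependency ratios over
# all 100 digit combinations.
d_dep = np.abs(dependency_ratio(p_joint) - dependency_ratio(q_joint)).mean()

# The metric vanishes when both distributions share the same dependency
# structure, e.g. when a distribution is compared with itself.
assert np.isclose(
    np.abs(dependency_ratio(p_joint) - dependency_ratio(p_joint)).mean(), 0.0)
```

For an independent joint (e.g. the uniform table), `dependency_ratio` is identically one, matching the intuition that the ratio measures only dependencies, not marginal quality.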
As expected, FactorGAN-no-cp delivers good digit quality, and performs well for $\lambda = 0.1$ (as it assumes independence) and badly for $\lambda = 0.9$ with regard to dependency modelling.

FactorGAN outperforms GAN with small numbers of paired samples in terms of FID by exploiting the additional unpaired samples, although this gap closes as both models eventually have access to the same amount of data. FactorGAN also consistently improves in modelling the digit dependencies with an increasing number of paired observations. For $\lambda = 0.1$, this also applies to the normal GAN, although its performance is much worse for small sample sizes as it introduces unwanted digit dependencies. Additionally, its performance appears unstable for $\lambda = 0.9$, where it achieves the best results for a small number of paired examples. Further improvements in this setting could be gained by incorporating prior knowledge about the nature of these dependencies into the p-dependency discriminator to increase its sample efficiency, but this is left for future work.

![](images/438a3f52c71ce65437d2f214ed1f096398f6bb93e218a89684d02d132b141b85.jpg)
(a) FID value, averaged over both digits

![](images/1a41f8b445bd789eb07cef45c1ca513f1bb9d2fbd7c2c8d3df4bf6d086ad21d6.jpg)
(b) Dependency metric

Figure 1: Performance with different numbers of paired training samples and settings for $\lambda$, compared between GAN and FactorGAN with and without dependency modelling.

# 4.2 IMAGE PAIR GENERATION

In this section, we use GAN and FactorGAN to generate pairs of images in an unsupervised way, evaluating how well FactorGAN models more complex data distributions.

Datasets We use the "Cityscapes" dataset (Cordts et al., 2016) and the "Edges2Shoes" dataset (Isola et al., 2016). To keep the outputs in a continuous domain, we treat the segmentation maps in the Cityscapes dataset as RGB images, instead of a set of discrete categorical labels.
Each input and output image is downsampled to $64 \times 64$ pixels as a preprocessing step to reduce computational complexity and to ensure stable GAN training. + +Experimental setup We define the distributions of input as well as output images as marginal distributions. Therefore, FactorGAN uses two marginal discriminators and a p- and q-dependency discriminator. All discriminators employ a convolutional architecture shown in Table 5 with $W = 6$ and $H = 6$ . To control for the impact of discriminator size, we also train a GAN with twice the number of filters in each discriminator layer to match its size with the combined size of the FactorGAN discriminators. The same convolutional generator shown in Table 4 is used for GAN and FactorGAN. Each image pair is concatenated along the channel dimension to form one sample, so that $C = 6$ for the Cityscapes and $C = 4$ for the Edges2Shoes dataset (since edge maps are greyscale). We make either 100, 1000, or all training samples available in paired form, to investigate whether FactorGAN can improve upon GAN by exploiting the remaining unpaired samples or match its quality if there are none. + +For evaluation, we randomly assign $80\%$ of validation data to a "test-train" and the rest to a "test-test" partition. We train an LSGAN discriminator (Mao et al., 2017) with the architecture shown in Table 5 (but half the filters in each layer) on the test-train partition for 40 epochs to distinguish real from generated samples, before measuring its loss on the test set. We continuously sample from the generator during training and testing instead of using a fixed set of samples to better approximate the true generator distribution. As evaluation metric, we use the average test loss over 10 training runs, which was shown to correlate with subjective ratings of visual quality (Im et al., 2018) and also with our own quality judgements throughout this study. 
A larger value indicates better performance, as we use a flipped sign compared to Im et al. (2018). While the quantitative results appear indicative of output quality, accurate GAN evaluation is still an open problem and so we encourage the reader to judge generated examples given in Section A.5. + +![](images/4968d799d6b4d38f25e4b1888b00379a0df85e75c88c0367146ee5b0c8336e25.jpg) +Figure 2: GAN and FactorGAN output quality estimated by the LS metric for different datasets and numbers of paired samples. Error bars show $95\%$ confidence intervals. + +![](images/cb927e7d998b8a00e6857a40d33048437c2d90ede86e63d6d952de45f8416e7d.jpg) +(a) GAN + +![](images/c7c5a6d888c87765042f87dd7bdb46269b63fdcd3b10001fc19afab4ca8c76b1.jpg) +(b) FactorGAN +Figure 3: Examples generated for the Edges2Shoes dataset using 100 paired samples + +Results Our FactorGAN achieves better or similar output quality compared to the GAN baseline in all cases, as seen in Figure 2. For the Edges2Shoes dataset, the performance gains are most pronounced for small numbers of paired samples. On the more complex Cityscapes dataset, FactorGAN outperforms GAN by a large margin independent of training set size, even when the discriminators are closely matched in size. This suggests that FactorGAN converges with fewer training iterations for $G_{\phi}$ , although the exact cause is unclear and should be investigated in future work. + +We show some generated examples in Figure 3. Due to the small number of available paired samples, we observe a strong mode collapse of the GAN in Figure 3a, while FactorGAN provides high-fidelity, diverse outputs, as shown in Figure 3b. Similar observations can be made for the Cityscapes dataset when using 100 paired samples (see Section A.5.2). 
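The discriminator input construction used in this section can be sketched in a few lines of numpy (illustrative only; array names and batch sizes are not from the released code): each marginal discriminator sees one half of the pair, while the p- and q-dependency discriminators see the channel-wise concatenation, giving $C = 6$ for Cityscapes (two RGB images) and $C = 4$ for Edges2Shoes (greyscale edges plus an RGB shoe).

```python
import numpy as np

def dependency_input(x, y):
    """Concatenate an image pair along the channel axis to form one joint sample.

    The marginal discriminators receive x and y separately; the p- and
    q-dependency discriminators receive this concatenation.
    x, y: (batch, height, width, channels) arrays.
    """
    return np.concatenate([x, y], axis=-1)

# Cityscapes-style pair: RGB scene + RGB segmentation map -> C = 6
joint_rgb = dependency_input(np.zeros((8, 64, 64, 3)), np.zeros((8, 64, 64, 3)))

# Edges2Shoes-style pair: greyscale edge map + RGB shoe image -> C = 4
joint_mixed = dependency_input(np.zeros((8, 64, 64, 1)), np.zeros((8, 64, 64, 3)))
```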
# 4.3 IMAGE SEGMENTATION

Our approach extends to the case of conditional generation (see Section 2.3), so we tackle a complex and important image segmentation task on the Cityscapes dataset, where we ask the generator to predict a segmentation map for a city scene (instead of generating both from scratch as in Section 4.2).

Experimental setup We downsample the scenes and segmentation maps to $128 \times 128$ pixels and use a U-Net architecture (Ronneberger et al., 2015) (shown in Table 6 with $W = 7$ and $C = 3$ ) as segmentation model. For FactorGAN, we use one marginal discriminator to match the distribution of real and fake segmentation maps to ensure realistic predictions, which enables training with isolated city scenes and segmentation maps. To ensure correct predictions for each city scene, a p- and a q-dependency discriminator learn the input-output relationship using joint samples, both employing the convolutional architecture shown in Table 5. Note that as in Section 4.2, we output segmentation maps in RGB space instead of performing classification. In addition to the MSE in RGB space, we compute the widely used pixel-wise classification accuracy (Cordts et al., 2016) by assigning each output pixel to the class whose colour has the lowest Euclidean distance in RGB space.

Using the same experimental setup (including network architectures), we also implement CycleGAN (Zhu et al., 2017) as an unsupervised baseline. For the CycleGAN objective, the same GAN losses as shown in (1) and (2) are used.

Results The results in Figure 4 demonstrate that our approach can exploit additional unpaired samples to deliver better MSE and accuracy than a GAN, as well as less noisy outputs, as seen in Figure 5.
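The colour-based pixel accuracy described above can be sketched as follows (a minimal numpy version; the three-class palette and class names are illustrative, not the full Cityscapes palette):

```python
import numpy as np

def pixel_accuracy(pred_rgb, true_labels, palette):
    """Pixel-wise classification accuracy for RGB segmentation outputs.

    pred_rgb:    (H, W, 3) predicted segmentation map in RGB space
    true_labels: (H, W) integer class index per pixel
    palette:     (num_classes, 3) reference RGB colour of each class
    Each predicted pixel is assigned to the class whose palette colour
    has the lowest Euclidean distance in RGB space.
    """
    # squared distances to every class colour: (H, W, num_classes)
    diff = pred_rgb[..., None, :] - palette[None, None, :, :]
    dist = np.sum(diff ** 2, axis=-1)
    pred_labels = np.argmin(dist, axis=-1)
    return np.mean(pred_labels == true_labels)

palette = np.array([[128, 64, 128],    # e.g. "road"
                    [70, 70, 70],      # e.g. "building"
                    [107, 142, 35]],   # e.g. "vegetation"
                   dtype=float)
# a slightly noisy prediction of an all-"road" ground truth
truth = np.zeros((4, 4), dtype=int)
pred = np.full((4, 4, 3), [120, 70, 130], dtype=float)
acc = pixel_accuracy(pred, truth, palette)
```

Since squared and plain Euclidean distances share the same argmin, the square root can be skipped.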
+ +![](images/1ca9fe508db3dccc9e45880dabe97167183c73d7782c48eb4dd8b737c209dded.jpg) +Figure 4: MSE (left) and accuracy (right) obtained on the Cityscapes dataset with different numbers of paired training samples for the GAN and FactorGAN + +![](images/5f4c1ac7b5bc3dec2fe6bf18b16eba5a4a1c09070abc18e6a245e8642d2d6460.jpg) + +![](images/ea880b9f23d18b58ea0c53b648292bc1c107e9c2dc2a7a1f0e42f4d036a4b289.jpg) + +![](images/e0ae4f392d1b0fb2faeaf6114f99abc154e1b761b6047c2987842ad54b0b333b.jpg) +(a) GAN + +![](images/9f8b3755ddcf5611baa35b8a5054900a696bbf241d132da2742ef630d53ba4f1.jpg) + +![](images/8f517fb293a848fa1cecd47492826a33667ad8c4341394d6a2be5a465ed9b52f.jpg) +Figure 5: Segmentation predictions made on the Cityscapes dataset for the same set of test inputs, compared between models, using 100 paired samples for training + +![](images/3630f10a53efc1ea4369860a77bca1ec93d20f888af0ff64feee8a8a9488b6c5.jpg) + +![](images/e40ac3ac9603cba3aabdac61f9d9b3debb73d4f13bf93c83f5ae185cf882596e.jpg) +(b) FactorGAN + +![](images/c47fc255ad7b61d37cbcdedff2436b857dae60acffd2a98c7b47b753af7d0891.jpg) + +![](images/180815b3d92864fabb9e4b21bc560822713289d93627bb214e466bd56f6800e2.jpg) + +When using only 25 paired samples, FactorGAN reaches $71.6\%$ accuracy, outperforming both GAN and CycleGAN by an absolute $17.7\%$ and $14.9\%$ , respectively. CycleGAN performs better than GAN only in this setting, and increasingly falls behind both GAN and FactorGAN with a growing number of paired samples, likely since GAN and FactorGAN are able to improve their input-output mapping gradually while CycleGAN remains reliant on its cycle consistency assumption. These findings suggest that FactorGAN can efficiently learn the dependency structure from few paired samples with more accuracy than a CycleGAN that is limited by its simplistic cycle consistency assumption. 
+ +# 4.4 AUDIO SOURCE SEPARATION + +We apply our method to audio source separation as another conditional generation task to investigate whether it transfers across domains. Specifically, we separate music signals into singing voice and accompaniment, as detailed in Section A.2. As in Section 4.3, we find that FactorGAN provides better separation than GAN, suggesting that our factorisation is useful across problem domains. + +# 5 DISCUSSION + +We find that FactorGAN outperforms GAN across all experiments when additional incomplete samples are available, especially when they are abundant in comparison to the number of joint samples. When using only joint observations, FactorGAN should be expected to match the GAN in quality, and it does so quite closely in most of our experiments. Surprisingly, it outperforms GAN in some scenarios such as image segmentation even with matched discriminator sizes – a phenomenon we do not fully understand yet and should be investigated in the future. For image segmentation, FactorGAN substantially improves segmentation accuracy compared to the fully unsupervised CycleGAN model even when only using 25 paired examples, indicating that it can efficiently exploit the pairing information. + +Since the p-dependency discriminator does not rely on generator samples that change during training, it could be pre-trained to reduce computation time, but this led to sudden training instabilities in our experiments. We suspect that this is due to a mismatch between training and testing conditions for the p-dependency discriminator since it is trained on real but evaluated on fake data, and neural networks + +can yield overly confident predictions outside the support of the training set (Gal & Ghahramani, 2016). Therefore, we expect classifiers with better uncertainty calibration to alleviate this issue. 
+ +# 6 CONCLUSION + +In this paper, we demonstrated how a joint distribution can be factorised into a set of marginals and dependencies, giving rise to the FactorGAN – a GAN in which the discriminator is split into parts that can be independently trained with incomplete observations. For both generation and conditional prediction tasks in multiple domains, we find that FactorGAN outperforms the standard GAN when additional incomplete observations are available. For Cityscapes scene segmentation in particular, FactorGAN achieves a much higher accuracy than the supervised GAN as well as the unsupervised CycleGAN, while requiring only 25 of all examples to be annotated. + +Factorising discriminators enables incorporating more prior knowledge into the design of neural architectures in GANs, which could improve empirical results in applied domains. The presented factorisation is generally applicable independent of model choice, so it can be readily integrated into many existing GAN-based approaches. Since the joint density can be factorised in different ways, multiple extensions are conceivable depending on the particular application (as shown in Section A.3). This paper derives FactorGAN from the original GAN proposed by Goodfellow et al. (2014) by exploiting the probabilistic view of the optimal discriminator. Adapting the FactorGAN to alternative GAN objectives (such as the Wasserstein GAN (Arjovsky et al., 2017)) might be possible as well. Instead of relying on additional techniques such as spectral normalisation to ensure training stability, which our theory does not explicitly incorporate, this would enable the use of an inherently more stable GAN variant with the same theoretical guarantees. + +# ACKNOWLEDGMENTS + +We thank Emmanouil Benetos for his helpful feedback. Daniel Stoller is funded by EPSRC grant EP/L01632X/1. + +# REFERENCES + +Amjad Almahairi, Sai Rajeswar, Alessandro Sordoni, Philip Bachman, and Aaron Courville. 
Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data. CoRR, abs/1802.10151, 2018.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
P. Brakel and Y. Bengio. Learning Independent Features with Adversarial Nets for Non-linear ICA. ArXiv e-prints, 2017.
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proc. of the International Conference on Machine Learning (ICML), volume 48, pp. 1050-1059, 2016.
Zhe Gan, Liqun Chen, Weiyao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, and Lawrence Carin. Triangle Generative Adversarial Networks. CoRR, abs/1709.06548, September 2017.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Advances in Neural Information Processing Systems, pp. 6629-6640, 2017.
Daniel Jiwoong Im, Allan He Ma, Graham W. Taylor, and Kristin Branson. Quantitatively Evaluating GANs With Divergences Proposed for Training. In Proc. of the International Conference on Learning Representations (ICLR), 2018.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-Image Translation with Conditional Adversarial Networks. CoRR, abs/1611.07004, November 2016.
Theofanis Karaletsos. Adversarial Message Passing For Graphical Models.
CoRR, abs/1612.05048, December 2016.
Tero Karras, Samuli Laine, and Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks. CoRR, abs/1812.04948, December 2018.
Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, and Stephen Paul Smolley. Least Squares Generative Adversarial Networks. In Proc. of the IEEE International Conference on Computer Vision (ICCV), October 2017.
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral Normalization for Generative Adversarial Networks. arXiv:1802.05957 [cs, stat], February 2018.
Olof Mogren. C-RNN-GAN: A continuous recurrent neural network with adversarial training. In Constructive Machine Learning Workshop (CML) at NIPS 2016, 2016.
Yunchen Pu, Shuyang Dai, Zhe Gan, Weiyao Wang, Guoyin Wang, Yizhe Zhang, Ricardo Henao, and Lawrence Carin. JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets. In Jennifer Dy and Andreas Krause (eds.), Proc. of the International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, pp. 4151-4160, Stockholm, Sweden, July 2018. PMLR.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. CoRR, abs/1511.06434, 2015.
Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, and Rachel Bittner. The MUSDB18 Corpus For Music Separation, December 2017.
O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.
Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. In Proc. of the International Conference on Learning Representations (ICLR), 2017.
Daniel Stoller, Sebastian Ewert, and Simon Dixon.
Adversarial Semi-Supervised Audio Source Separation applied to Singing Voice Extraction. In Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2391-2395, Calgary, Canada, 2018. IEEE.
Soumya Tripathy, Juho Kannala, and Esa Rahtu. Learning image-to-image translation using paired and unpaired training samples. CoRR, abs/1805.03189, May 2018.
E. Vincent, R. Gribonval, and C. Févotte. Performance measurement in blind audio source separation. IEEE Transactions on Audio, Speech, and Language Processing, 14(4):1462-1469, 2006. ISSN 1558-7916. doi: 10.1109/TSA.2005.858005.
Jinsung Yoon, James Jordon, and Mihaela van der Schaar. GAIN: Missing Data Imputation using Generative Adversarial Nets. CoRR, abs/1806.02920, June 2018.
Ning Zhang, Junchi Yan, and Yuchen Zhou. Unsupervised Audio Source Separation via Spectrum Energy Preserved Wasserstein Learning. CoRR, abs/1711.04121, 2017.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. of the IEEE International Conference on Computer Vision (ICCV), pp. 2223-2232, 2017.

# A APPENDIX

# A.1 TABLES

Table 1: The architecture of our generator on the MNIST dataset. All layers have biases.
| Layer | Input shape | Outputs | Output shape | Activation |
| --- | --- | --- | --- | --- |
| FC | 50 | 128 | 128 | ReLU |
| FC | 128 | 128 | 128 | ReLU |
| FC | 128 | 1568 | 56 × 28 × 1 | Sigmoid |
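As a sanity check on the shapes in Table 1, the following numpy sketch runs a forward pass of this generator with random weights (illustrative only, not the trained model): a 50-dimensional noise vector is mapped through two hidden layers of 128 units to 1568 = 56 · 28 sigmoid outputs, reshaped into one 56 × 28 × 1 image holding the two stacked digits.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# FC layers from Table 1: 50 -> 128 -> 128 -> 1568, all with biases
W1, b1 = rng.normal(size=(50, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 128)), np.zeros(128)
W3, b3 = rng.normal(size=(128, 1568)), np.zeros(1568)

def generator(z):
    h = relu(z @ W1 + b1)
    h = relu(h @ W2 + b2)
    out = sigmoid(h @ W3 + b3)
    # two stacked 28x28 MNIST digits form one 56x28 "pair" image
    return out.reshape(56, 28, 1)

img = generator(rng.normal(size=50))
```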
+ +Table 2: The architecture of our discriminators on the paired MNIST dataset. $W = 28$ for marginal, $W = 56$ for dependency discriminators. + +
| Layer | Input shape | Outputs | Output shape | Activation |
| --- | --- | --- | --- | --- |
| FC | W · 28 | 128 | 128 | LeakyReLU |
| FC | 128 | 128 | 128 | LeakyReLU |
| FC | 128 | 1 | 1 | - |
+ +Table 3: The architecture of our MNIST classifier. Dropout with probability 0.5 is applied to FC1 outputs. + +
| Layer | Input shape | Filter size | Stride | Outputs | Output shape | Activation |
| --- | --- | --- | --- | --- | --- | --- |
| Conv | 28 × 28 × 1 | 5 × 5 | 1 × 1 | 10 | 28 × 28 × 10 | - |
| AvgPool | 28 × 28 × 10 | 2 × 2 | 2 × 2 | 10 | 12 × 12 × 10 | LeakyReLU |
| Conv | 12 × 12 × 10 | 5 × 5 | 1 × 1 | 20 | 12 × 12 × 20 | - |
| AvgPool | 12 × 12 × 20 | 2 × 2 | 2 × 2 | 20 | 4 × 4 × 20 | LeakyReLU |
| FC1 | 320 | - | - | 50 | 50 | LeakyReLU |
| FC2 | 50 | - | - | 10 | 10 | - |
Table 4: The architecture of our convolutional generator. "ConvT" denotes a transposed convolution. All layers have biases. The number of output channels $C$ depends on the task.
| Layer | Input shape | Filter size | Stride | Outputs | Output shape | Activation |
| --- | --- | --- | --- | --- | --- | --- |
| ConvT | 1 × 1 × 50 | 4 × 4 | 1 × 1 | 1024 | 4 × 4 × 1024 | ReLU |
| ConvT | 4 × 4 × 1024 | 4 × 4 | 2 × 2 | 512 | 8 × 8 × 512 | ReLU |
| ConvT | 8 × 8 × 512 | 4 × 4 | 2 × 2 | 256 | 16 × 16 × 256 | ReLU |
| ConvT | 16 × 16 × 256 | 4 × 4 | 2 × 2 | 128 | 32 × 32 × 128 | ReLU |
| ConvT | 32 × 32 × 128 | 4 × 4 | 2 × 2 | 64 | 64 × 64 × 64 | ReLU |
| Conv | 64 × 64 × 64 | 4 × 4 | 1 × 1 | C | 64 × 64 × C | Sigmoid |
+ +Table 5: The architecture of our convolutional discriminator. All layers except FC have biases. $W$ , $H$ and $C$ are set for each task so that the dimensions of the input data are matched. + +
| Layer | Input shape | Filter size | Stride | Outputs | Output shape | Activation |
| --- | --- | --- | --- | --- | --- | --- |
| Conv | $2^W \times 2^H \times C$ | 4 × 4 | 2 × 2 | 32 | $2^{W-1} \times 2^{H-1} \times 32$ | LeakyReLU |
| Conv | $2^{W-1} \times 2^{H-1} \times 32$ | 4 × 4 | 2 × 2 | 64 | $2^{W-2} \times 2^{H-2} \times 64$ | LeakyReLU |
| Conv | $2^{W-2} \times 2^{H-2} \times 64$ | 4 × 4 | 2 × 2 | 128 | $2^{W-3} \times 2^{H-3} \times 128$ | LeakyReLU |
| Conv | $2^{W-3} \times 2^{H-3} \times 128$ | 4 × 4 | 2 × 2 | 256 | $2^{W-4} \times 2^{H-4} \times 256$ | LeakyReLU |
| Conv | $2^{W-4} \times 2^{H-4} \times 256$ | 4 × 4 | 2 × 2 | 512 | $2^{W-5} \times 2^{H-5} \times 512$ | LeakyReLU |
| FC | $2^{W-5} \cdot 2^{H-5} \cdot 512$ | - | - | 1 | 1 | LeakyReLU |
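The layer shapes in this discriminator follow from simple arithmetic, since every 4 × 4 convolution uses stride 2 and therefore halves both spatial dimensions. A small illustrative helper (not from the released code) traces the shapes; with W = H = 6 as in the image pair experiments, a 64 × 64 input is reduced to 2 × 2 × 512, i.e. 2048 inputs to the final FC layer.

```python
def discriminator_shapes(W, H, C):
    """Trace the spatial shapes through the convolutional discriminator
    of Table 5 for an input of size 2**W x 2**H x C."""
    channels = [32, 64, 128, 256, 512]
    shapes = [(2 ** W, 2 ** H, C)]
    for i, c in enumerate(channels, start=1):
        # each stride-2 convolution halves width and height
        shapes.append((2 ** (W - i), 2 ** (H - i), c))
    fc_inputs = shapes[-1][0] * shapes[-1][1] * shapes[-1][2]
    return shapes, fc_inputs

# W = H = 6, C = 6: 64 x 64 Cityscapes image pairs
shapes, fc_inputs = discriminator_shapes(6, 6, 6)
```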
Table 6: The architecture of our U-Net. The height (set via $W$ ) and the number of input channels $C$ depend on the experiment. MP is max pooling with stride 2. FC has noise as input. UpConv performs a transposed convolution with stride 2. Concat concatenates the current feature map with one from the downsampling path. The final output is computed depending on the task (see text for more details).
| Layer | Input (shape) | Outputs | Output shape |
| --- | --- | --- | --- |
| DoubleConv1 | $2^W \times 128 \times C$ | 32 | $2^W \times 128 \times 32$ |
| MP1 | $2^W \times 128 \times 32$ | 32 | $2^{W-1} \times 64 \times 32$ |
| DoubleConv2 | $2^{W-1} \times 64 \times 32$ | 64 | $2^{W-1} \times 64 \times 64$ |
| MP2 | $2^{W-1} \times 64 \times 64$ | 64 | $2^{W-2} \times 32 \times 64$ |
| DoubleConv3 | $2^{W-2} \times 32 \times 64$ | 128 | $2^{W-2} \times 32 \times 128$ |
| MP3 | $2^{W-2} \times 32 \times 128$ | 128 | $2^{W-3} \times 16 \times 128$ |
| DoubleConv4 | $2^{W-3} \times 16 \times 128$ | 256 | $2^{W-3} \times 16 \times 256$ |
| MP4 | $2^{W-3} \times 16 \times 256$ | 256 | $2^{W-4} \times 8 \times 256$ |
| DoubleConv5 | $2^{W-4} \times 8 \times 256$ | 256 | $2^{W-4} \times 8 \times 256$ |
| FC | 50 | $2^{W-4} \cdot 16$ | $2^{W-4} \times 8 \times 2$ |
| Concat | DoubleConv5 | - | $2^{W-4} \times 8 \times 258$ |
| UpConv | $2^{W-4} \times 8 \times 258$ | 258 | $2^{W-3} \times 16 \times 258$ |
| Concat | DoubleConv4 | 514 | $2^{W-3} \times 16 \times 514$ |
| Conv | $2^{W-3} \times 16 \times 514$ | 128 | $2^{W-3} \times 16 \times 128$ |
| UpConv | $2^{W-3} \times 16 \times 128$ | 128 | $2^{W-2} \times 32 \times 128$ |
| Concat | DoubleConv3 | 256 | $2^{W-2} \times 32 \times 256$ |
| Conv | $2^{W-2} \times 32 \times 256$ | 64 | $2^{W-2} \times 32 \times 64$ |
| UpConv | $2^{W-2} \times 32 \times 64$ | 64 | $2^{W-1} \times 64 \times 64$ |
| Concat | DoubleConv2 | 128 | $2^{W-1} \times 64 \times 128$ |
| Conv | $2^{W-1} \times 64 \times 128$ | 32 | $2^{W-1} \times 64 \times 32$ |
| UpConv | $2^{W-1} \times 64 \times 32$ | 32 | $2^W \times 128 \times 32$ |
| Concat | DoubleConv1 | 64 | $2^W \times 128 \times 64$ |
| Conv | $2^W \times 128 \times 64$ | 32 | $2^W \times 128 \times 32$ |
| Conv | $2^W \times 128 \times 32$ | C | $2^W \times 128 \times C$ |
+ +Table 7: The DoubleConv neural network block used in the U-Net. Conv uses a $3 \times 3$ filter size. + +
| Layer | Input shape | Outputs | Output shape |
| --- | --- | --- | --- |
| Conv | W × H × C | C/2 | W × H × C/2 |
| BatchNorm & ReLU | W × H × C/2 | - | W × H × C/2 |
| Conv | W × H × C/2 | C/2 | W × H × C/2 |
| BatchNorm & ReLU | W × H × C/2 | - | W × H × C/2 |
![](images/534739c1d39d1d57192d848918d5947e9732fe586abdaaff2d71222b5f592592.jpg)
Figure 6: GAN and FactorGAN separation performance for different numbers of paired samples

![](images/94cb1caad7fdc240fe820ebf48117db232874bbf02614d0a0c40f0979c6691cf.jpg)

# A.2 AUDIO SOURCE SEPARATION EXPERIMENT

For our audio source separation experiment, our generator $G_{\phi}$ takes a music spectrogram $\mathbf{m}$ along with noise $\mathbf{z}$ and maps it to an estimate of the accompaniment and vocal spectra $\mathbf{a}$ and $\mathbf{v}$ , implicitly defining an output probability $q_{\phi}(\mathbf{a}, \mathbf{v}|\mathbf{m})$ . We define the joint real and generated distributions that should be matched as $p(\mathbf{m}, \mathbf{a}, \mathbf{v})$ and $q(\mathbf{m}, \mathbf{a}, \mathbf{v}) = q_{\phi}(\mathbf{a}, \mathbf{v}|\mathbf{m})p(\mathbf{m})$ . Since the source signals in our dataset are simply added in the time domain to produce the mixture, this approximately applies to the spectrogram as well, so we assume that $p(\mathbf{m}|\mathbf{a}, \mathbf{v}) = \delta (\mathbf{m} - \mathbf{a} - \mathbf{v})$ . We can constrain our generator $G_{\phi}$ to make predictions that always satisfy this condition, thereby taking care of the input-output relationship manually, similarly to Sønderby et al. (2017). Instead of predicting the sources directly, a mask $\mathbf{b}$ with values in the range [0, 1] is computed, and the accompaniment and vocals are estimated as $\mathbf{b} \odot \mathbf{m}$ and $(1 - \mathbf{b}) \odot \mathbf{m}$ , respectively.
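This mask parameterisation can be sketched in a few lines of numpy (illustrative; in the model the mask is predicted by the U-Net). Writing the vocal estimate as $(1 - \mathbf{b}) \odot \mathbf{m}$ keeps both estimates non-negative, and the two estimates sum to the mixture by construction:

```python
import numpy as np

def separate(mixture, mask):
    """Mask-based separation: estimates always satisfy accompaniment + vocals = mixture.

    mixture: non-negative magnitude spectrogram snippet, e.g. shape (256, 128)
    mask:    network output in [0, 1], same shape as the mixture
    """
    accompaniment = mask * mixture
    vocals = (1.0 - mask) * mixture
    return accompaniment, vocals

rng = np.random.default_rng(0)
m = np.abs(rng.normal(size=(256, 128)))      # stand-in for a spectrogram snippet
b = rng.uniform(0.0, 1.0, size=(256, 128))   # stand-in for the predicted mask
a, v = separate(m, b)
```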
As a result, $q(\mathbf{m}|\mathbf{a}, \mathbf{v}) = p(\mathbf{m}|\mathbf{a}, \mathbf{v})$ , so we can simplify the joint density ratio to

$$
\frac {p (\mathbf {m} , \mathbf {a} , \mathbf {v})}{q (\mathbf {m} , \mathbf {a} , \mathbf {v})} = \frac {p (\mathbf {a} , \mathbf {v}) p (\mathbf {m} | \mathbf {a} , \mathbf {v})}{q (\mathbf {a} , \mathbf {v}) q (\mathbf {m} | \mathbf {a} , \mathbf {v})} = \frac {p (\mathbf {a} , \mathbf {v})}{q (\mathbf {a} , \mathbf {v})} = \frac {c _ {P} (\mathbf {a} , \mathbf {v})}{c _ {Q} (\mathbf {a} , \mathbf {v})} \frac {p (\mathbf {a})}{q (\mathbf {a})} \frac {p (\mathbf {v})}{q (\mathbf {v})}, \tag {6}
$$

meaning that the discriminator(s) in the GAN and the FactorGAN only require $(\mathbf{a},\mathbf{v})$ pairs, but not the mixture $\mathbf{m}$ as additional input, as the correct input-output relationship is already incorporated into the generator. Furthermore, the last equality suggests a FactorGAN application with one marginal discriminator for each source along with dependency discriminators to model source dependencies.

Dataset We use MUSDB (Rafii et al., 2017) as the multi-track dataset for our experiment, featuring 100 songs for training and 50 songs for testing. Each song is downsampled to $22.05\mathrm{kHz}$ before spectrogram magnitudes are computed, using an STFT with a 512-sample window and a 256-sample hop. Snippets with 128 timeframes each are created by cropping each song's full spectrogram at regular intervals of 64 timeframes. Thus, the generator only separates snippets $\mathbf{m} \in \mathbb{R}_{\geq 0}^{256 \times 128}$ and outputs predictions of the same shape; however, this does not change the derivation presented in Equation (6), and longer inputs at test time can be processed by partitioning them into snippets and concatenating the model predictions.

Experimental setup For our generator, we use the U-Net architecture detailed in Table 6 with $W = 8$ and $C = 1$ .
We use the convolutional discriminator described in Table 5 with $W = 8$ , $H = 7$ and $C = 1$ . The source dependency discriminators take two sources as input via concatenation along the channel dimension, so they use $C = 2$ .

In each experiment, we vary the number of training songs whose snippets are available for paired training between 10, 20 and 50, and compare between GAN and FactorGAN. The spectrograms predicted on the test set are converted to audio with the inverse STFT by reusing the phase from the mixture, and then evaluated using the signal-to-distortion ratio (SDR), a well-established evaluation metric for source separation (Vincent et al., 2006).

Results Figure 6 shows our separation results. Compared to a GAN, the separation performance is significantly higher using FactorGAN. As expected, FactorGAN improves slightly with more paired examples, which is not the case for the GAN – here we find that the vocal output becomes too quiet when increasing the number of songs for training, possibly a sign of mode collapse. Similarly to the results seen in the image pair generation experiments, we suspect that the FactorGAN discriminator might approximate the optimal joint discriminator $\tilde{D} (\mathbf{x})$ more closely than the GAN discriminator due to its use of multiple discriminators, although the reasons for this are not yet understood.

# A.3 POSSIBLE EXTENSIONS

We can decompose the joint density ratio $\frac{p_x(\mathbf{x})}{q_x(\mathbf{x})}$ in ways other than the one shown in Equation 3 in the paper. In the following, we discuss two additional possibilities.

# A.3.1 HIERARCHICAL FACTORGAN

The decomposition of the joint density ratio could be applied recursively, splitting the obtained marginals further into "sub-marginals" and their dependencies, which could be repeated multiple times.
In addition to training with incomplete observations where only a single part is given, this also allows making use of samples where only sub-parts of these parts are given, and is thus more flexible than a single factorisation as used in the standard FactorGAN.

As a demonstration, we split each marginal $\mathbf{x}^i$ further into a group of $J_{i}$ marginals, $J_{i}\leq |\mathcal{D}_{i}|$ , and their dependencies, without further recursion for simplicity:

$$
\frac {p _ {x} (\mathbf {x})}{q _ {x} (\mathbf {x})} = \frac {c _ {P} (\mathbf {x})}{c _ {Q} (\mathbf {x})} \prod_ {i = 1} ^ {K} \frac {p _ {x} ^ {i} \left(\mathbf {x} ^ {i}\right)}{q _ {x} ^ {i} \left(\mathbf {x} ^ {i}\right)} = \frac {c _ {P} (\mathbf {x})}{c _ {Q} (\mathbf {x})} \left[ \prod_ {i = 1} ^ {K} \frac {c _ {P} ^ {i} \left(\mathbf {x} ^ {i}\right)}{c _ {Q} ^ {i} \left(\mathbf {x} ^ {i}\right)} \left[ \prod_ {j = 1} ^ {J_i} \frac {p _ {x} ^ {i , j} \left(\mathbf {x} ^ {i , j}\right)}{q _ {x} ^ {i , j} \left(\mathbf {x} ^ {i , j}\right)} \right] \right]. \tag {7}
$$

$c_{P}^{i}$ and $c_{Q}^{i}$ are dependency terms analogous to $c_{P}$ and $c_{Q}$ , but defined only on the marginal variable $\mathbf{x}^i$ , whose $J_i$ "sub-marginals" are denoted by $\mathbf{x}^{i,1},\ldots ,\mathbf{x}^{i,J_i}$ .

Such a hierarchical decomposition might also be beneficial if the data is known to be generated from a hierarchical process. We leave the empirical exploration of this concept to future work.
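The identity underlying such decompositions can be verified numerically on a toy discrete example. The sketch below (a verification exercise, not part of any training code) builds random strictly positive joints $p$ and $q$ over two binary variables, forms the dependency terms from their marginals, and checks that the joint density ratio equals the dependency ratio times the product of marginal ratios:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_joint():
    """A random strictly positive joint distribution over two binary variables."""
    t = rng.uniform(0.1, 1.0, size=(2, 2))
    return t / t.sum()

p = random_joint()  # "real" joint p(x1, x2)
q = random_joint()  # "generated" joint q(x1, x2)

def dependency(joint):
    """c(x) = joint(x) / product of its marginals, as in the FactorGAN decomposition."""
    m1 = joint.sum(axis=1, keepdims=True)  # marginal over x1, shape (2, 1)
    m2 = joint.sum(axis=0, keepdims=True)  # marginal over x2, shape (1, 2)
    return joint / (m1 * m2), m1, m2

c_p, p1, p2 = dependency(p)
c_q, q1, q2 = dependency(q)

# joint ratio == dependency ratio * product of marginal ratios, elementwise
lhs = p / q
rhs = (c_p / c_q) * (p1 / q1) * (p2 / q2)
```

The identity holds exactly by construction; the numerical check simply confirms the bookkeeping for every outcome of $(x^1, x^2)$.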
# A.3.2 AUTOREGRESSIVE FACTORGAN

For a multi-dimensional variable $\mathbf{x} = [\mathbf{x}^1, \mathbf{x}^2, \dots, \mathbf{x}^T]$ composed of $T$ elements arranged in a sequence, such as time series data, the joint density ratio can also be decomposed in a causal, auto-regressive fashion:

$$
\frac {p _ {x} (\mathbf {x})}{q _ {x} (\mathbf {x})} = \frac {p _ {x} ^ {1} \left(\mathbf {x} ^ {1}\right)}{q _ {x} ^ {1} \left(\mathbf {x} ^ {1}\right)} \prod_ {i = 2} ^ {T} \frac {c _ {P} \left(\mathbf {x} ^ {1} , \dots , \mathbf {x} ^ {i}\right)}{c _ {Q} \left(\mathbf {x} ^ {1} , \dots , \mathbf {x} ^ {i}\right)} \frac {p _ {x} \left(\mathbf {x} ^ {i}\right)}{q _ {x} \left(\mathbf {x} ^ {i}\right)} \tag {8}
$$

$$
= \frac {p _ {x} ^ {1} \left(\mathbf {x} ^ {1}\right)}{q _ {x} ^ {1} \left(\mathbf {x} ^ {1}\right)} \prod_ {i = 2} ^ {T} \frac {p _ {x} \left(\mathbf {x} ^ {i} \mid \mathbf {x} ^ {1} , \dots , \mathbf {x} ^ {i - 1}\right)}{q _ {x} \left(\mathbf {x} ^ {i} \mid \mathbf {x} ^ {1} , \dots , \mathbf {x} ^ {i - 1}\right)} \tag {9}
$$

Note that $c_{P}(\mathbf{x}^1, \dots, \mathbf{x}^i)$ is defined here as $\frac{p_x(\mathbf{x}^1, \ldots, \mathbf{x}^{i})}{p_x(\mathbf{x}^1, \ldots, \mathbf{x}^{i-1})\, p_x(\mathbf{x}^i)}$ ( $c_{Q}$ analogously using $q_x$ ). Equation (8) suggests an auto-regressive version of FactorGAN in which the generator output quality at each time-step $i$ is evaluated using a marginal discriminator that estimates $\frac{p_x(\mathbf{x}^i)}{q_x(\mathbf{x}^i)}$ combined with dependency discriminators that model the dependency between the current and all past time-steps.

The final product formulation in Equation (9) reveals a close similarity to auto-regressive models and suggests a modification of the normal GAN with an auto-regressive discriminator that rates an input at each time-step given the previous ones.
Using a derivation analogous to the one shown in Section A.4, this implies taking the unnormalised discriminator outputs at each time-step, summing them, and applying a sigmoid non-linearity to obtain the overall estimate of the probability $\tilde{D}(\mathbf{x})$ . A similar implementation was used before in Mogren (2016), attempting to stabilise GAN training with recurrent neural networks as discriminators, but for the first time, we provide a rigorous theoretical justification for this practice here. + +# A.4 DISCRIMINATOR COMBINATION + +Definition A.1. Sigmoid discriminator output. Let $D_{\theta_i}(\mathbf{x}^i) \coloneqq \sigma(d_{\theta_i}(\mathbf{x}^i))$ , $d_{\theta_i}: \mathbb{R}^{|\mathcal{D}_i|} \to \mathbb{R}$ for all $i \in \{1, \dots, K\}$ , analogously define $D_{\theta_P}^P(\mathbf{x})$ and $D_{\theta_Q}^Q(\mathbf{x})$ . + +Definition A.2. Combined discriminator. Let $D^{C}(\mathbf{x}) \coloneqq \sigma(d_{\theta_{P}}^{P}(\mathbf{x}) - d_{\theta_{Q}}^{Q}(\mathbf{x}) + \sum_{i=1}^{K} d_{\theta_{i}}(\mathbf{x}^{i}))$ be the output of the combined discriminator that is used for training $G_{\phi}$ using Equation 2. + +Theorem 1. Combined discriminator approximates $\tilde{D}(\mathbf{x})$ . Under definitions A.1 and A.2 and assuming optimally trained sub-discriminators, $D^{C}(\mathbf{x}) = \tilde{D}(\mathbf{x}) = \frac{p_{x}(\mathbf{x})}{p_{x}(\mathbf{x}) + q_{x}(\mathbf{x})}$ . + +Proof. 
Proof of Theorem 1 using Definitions A.1 and A.2:

$$
\begin{aligned}
D ^ {C} (\mathbf {x}) &= \sigma \left(d _ {\theta_ {P}} ^ {P} (\mathbf {x}) - d _ {\theta_ {Q}} ^ {Q} (\mathbf {x}) + \sum_ {i = 1} ^ {K} d _ {\theta_ {i}} \left(\mathbf {x} ^ {i}\right)\right) \\
&= \left(1 + e ^ {- d _ {\theta_ {P}} ^ {P} (\mathbf {x})} e ^ {d _ {\theta_ {Q}} ^ {Q} (\mathbf {x})} \prod_ {i = 1} ^ {K} e ^ {- d _ {\theta_ {i}} (\mathbf {x} ^ {i})}\right) ^ {- 1} \\
&= \left(1 + \frac {1 - D _ {\theta_ {P}} ^ {P} (\mathbf {x})}{D _ {\theta_ {P}} ^ {P} (\mathbf {x})} \frac {D _ {\theta_ {Q}} ^ {Q} (\mathbf {x})}{1 - D _ {\theta_ {Q}} ^ {Q} (\mathbf {x})} \prod_ {i = 1} ^ {K} \frac {1 - D _ {\theta_ {i}} \left(\mathbf {x} ^ {i}\right)}{D _ {\theta_ {i}} \left(\mathbf {x} ^ {i}\right)}\right) ^ {- 1} \tag {10} \\
&= \left(1 + \frac {\prod_ {i = 1} ^ {K} p _ {x} ^ {i} (\mathbf {x} ^ {i})}{p _ {x} (\mathbf {x})} \frac {q _ {x} (\mathbf {x})}{\prod_ {i = 1} ^ {K} q _ {x} ^ {i} (\mathbf {x} ^ {i})} \prod_ {i = 1} ^ {K} \frac {q _ {x} ^ {i} (\mathbf {x} ^ {i})}{p _ {x} ^ {i} (\mathbf {x} ^ {i})}\right) ^ {- 1} \\
&= \left(1 + \frac {q _ {x} (\mathbf {x})}{p _ {x} (\mathbf {x})}\right) ^ {- 1} \\
&= \frac {p _ {x} (\mathbf {x})}{p _ {x} (\mathbf {x}) + q _ {x} (\mathbf {x})}.
\end{aligned}
$$

# A.5 GENERATED EXAMPLES

# A.5.1 PAIRED MNIST

![](images/3c14a7e400b63a99310a7f85dc45614f793f7bfcae7d9d1c28df819aeff307f4.jpg)
Figure 7: Paired MNIST examples generated by GAN and FactorGAN for different numbers of paired training samples, using $\lambda = 0.9$ .
+ +# A.5.2 IMAGE PAIRS + +![](images/2d50143ce1151d413ff03a38f07ad3e8b965465943380f3b60c326bd9077784b.jpg) + +![](images/afc5a7a12853794e0bd96e6e4f69cf89233be66236e011e7c429c2b1c91ec148.jpg) + +![](images/3243de79f01351531f8d5559f77d34268f9ee10ad09a58761e38165018829a1a.jpg) + +![](images/33295e68b0248017c2cafe24c24feaecf697f5850cb9f5e4b789133b179ab058.jpg) + +![](images/17e867adf4d00164b40347db60ed43eae2df10964241bd48a999289de6c8b49f.jpg) + +![](images/84bed773e96fc288a6e2ab6487ee9f8bdffab48ab9c9f88e304ad8314030d7c9.jpg) + +![](images/26f9a8d7cbf208ed1c07732505e835918df318de4286756dbc96fdccb4678799.jpg) + +![](images/2a806e4fbb0c945395f189aa8dc755df6feb9a6670aeac288ffb03afceccaced.jpg) + +![](images/8a4c80564a4d157b5308ec8e81a56dbcc034f9344f740dce386bd49a0530874e.jpg) + +![](images/7d2cfbb719ed9aa5422ee0c4f9199fa20ad16332edb06fc227ead9330c2a890e.jpg) + +![](images/ac26e166009e2169cb2abae05b9e4635980c543e469b9614d5386ca64a81bb55.jpg) + +![](images/9a20a798e80f6c796d8e6186eac254d2d2888cf306602981fff7094982505214.jpg) + +![](images/4eecdd1749efa046816e3815920fc0a023c85a7571695ceea332987f54d3fb4f.jpg) +Figure 8: GAN generating image pairs for the Cityscapes dataset using 100 paired samples. 
+ +![](images/3259b2ed8f9f6a785ebd118af1f88285ecade7f0ffbad23ec0f63c8ccb520ca4.jpg) + +![](images/9f47dbb9878533cb35b1db814d96342c8dcc77faaa60e58be7636cc7c9c84055.jpg) + +![](images/ae14448f5e09b313c5bb50d26b4ce1f3fb1b698f5c1861fd7e298ab909acc0a0.jpg) + +![](images/da7df2c012d2021701a709fdb96d40122786c3ce90243f8fb804c3c337b67076.jpg) + +![](images/3104e45e0f4e2dcffa3e84d1c4082f079edac34ffeb1b256d1ee8d318eba879f.jpg) + +![](images/c7aece0571c93521d8d18fa2f1d072579d0160a56a27d0f763752badd8ff225b.jpg) + +![](images/3d3baa9fc04366b4dd22244b4382f4a48607d8bc680ba4f79c1c6b989c80857b.jpg) + +![](images/f6a380677c4ac6b998b3a2b89c846e4ce2b9467ba0de8c29ad5a96ecaeee4eb0.jpg) + +![](images/03b531f79d5825f3c5c99299acfefb652e96f70b61e6ceb6179c911109932f4f.jpg) + +![](images/ec0a90ec8a95ed8912f551cf124127ddc7cfbb246b852233209ce7568ec5c4ab.jpg) + +![](images/b0b2c5baef9b57998bfee17047ceb1cc7b214c4b39025fdbbb169328c9383b9c.jpg) + +![](images/31ec4ded0468836fcd5c6d15541907ba92ac25ffe7ff94e813c061d9549003a0.jpg) + +![](images/b090ada4b860d5f5068c6bcd9dc0cbf99aed1f1e9bb2a36b2cdf77d1131546c0.jpg) + +![](images/e12912e9accc5a0a41189d48eb24a3a8ddd37eef9ee37404f39072958472f160.jpg) + +![](images/75bda10404d6a47c2a91987c7a1a66c95073b6a997542f200765ec844f112367.jpg) + +![](images/c3cfa468e2ceec6ada1206ad6aa77199766e73b91aa0e0282d6f8baf840b0c47.jpg) +Figure 9: GAN (big) generating image pairs for the Cityscapes dataset using 100 paired samples. 
+ +![](images/370095e486a9d86192ebc4bb2025f71d98309e964fa2beb0888054e44eb2ecbf.jpg) + +![](images/13ff8c2b0ab2526a8c042fccc42c11e0c95e955764f09e0778094136103af642.jpg) + +![](images/c3184b9be5eabb7409fde59d59a6f8f51cb866b49983555f1297d8c19fe5e89f.jpg) + +![](images/dc265ba1cf434dbaa0eb4ec718c1336fc874ab1889c821b8642658f5d70fa79e.jpg) + +![](images/2acabcb9daaf08f8ff0cdf4c3e68fb75a2195b054373ecc833630dd3fb523861.jpg) + +![](images/968a5de2771e75c471d655ea746533c4f16765fe4c9aba9ab190449027a28dd4.jpg) + +![](images/b57a8c5a000ab62dbd2576a0c8dffd89bf6ec8e9fdacb23263c78d3bfa12771f.jpg) + +![](images/0284e70cc0d5e0709d61746f0c65edda197919ca8ca333707fb4e8311b81e9b3.jpg) + +![](images/7161c87fb154dcc365c5c81d2501f85f8f6959085becebe928d5250a11d37747.jpg) + +![](images/6ad34bde1346cf47e79189ea9563e38c9f90c5934f7ad46a14af43dbaf129017.jpg) + +![](images/8eec0c9d0850e59dd0606f9462531f89a5e404fb0c9d6ed2e9b9461c68205eed.jpg) + +![](images/d6432a55d1f0497bc7d1dd3f59fb08d4123c4e1990330437e7cc86ec35cb6c8c.jpg) + +![](images/4cd505bf64f1f236715bb9cad427efc850514665ab08a5b06984e4be54b0254a.jpg) + +![](images/a2494213dac95a656b42b18d540eb403cfb7c577e6eaf269d9d2748fab858e8c.jpg) + +![](images/ce68f475eafcabb677aa3837dd299ef9bc2fd5fd27de4c226df7a21c17e77ecc.jpg) + +![](images/cc586b05ddd2ef411e2e454526578de3be7311fc825190b3769a46bd24a9254a.jpg) +Figure 10: FactorGAN generating image pairs for the Cityscapes dataset using 100 paired samples. 
+ +![](images/fd061394eb17fee64ea116db6dd178d336ce2065f56c15fdba8673a93c668684.jpg) + +![](images/c59dbf908e81989b89d6127725365f12b2871e8b56782e0fd2db66fa3c7b7e03.jpg) + +![](images/0256781a03da31a4f468050ffe15c6b7acc14e93c331ca2c03b6c2975874c8ed.jpg) + +![](images/5e84a1eb5130ab54c9bbfc11163e94c7b23f9e79af765559bb47e6cf3a7089de.jpg) + +![](images/60f278a2225a6b825e31798af70a7c6e4667343b073ec5678b92c44ff1a81121.jpg) + +![](images/cc296018986b95fea57e48d7be82a62fb3084755157d347e0153a77773507a95.jpg) + +![](images/cf857a59d3ac2cb3a19177243096d5337dea8f779af0281bb292597c61f66481.jpg) + +![](images/9e6f07b3b24e08d7a2a07d58fac6ad598ddb8711fafa55124a794dfec0ab3566.jpg) + +![](images/0e76c983151f0f617876dc1203dabe2951400e13d7135d47d1517def21aac701.jpg) + +![](images/9607b2102de752feb1c8974a6b80707bc3438053e80a5277a175044e5b5c9242.jpg) + +![](images/26ccf2081dd2274121308bbbb77697b1bc6d340534e06a2da1ecd1e9506a701e.jpg) + +![](images/d05b3b9325e82a31ab43a9c38ef72065f29fa77ef564f631d38e36f690b0ae4a.jpg) + +![](images/b612825e0054df9f236c036b4b567d6bef1ab7c52145e759803e43a3e6aa86b3.jpg) + +![](images/fa82fcf09ec71631488a423c50a4dbf939f0d0903234a43b722f713f7dc0f3ee.jpg) + +![](images/699eee17321669446d5265c3764c5750c512d3724e61e2ffa82740843852ce0f.jpg) + +![](images/efe96c47d56c3c32c5abf99d5038eaac77ed78cca381613e060ce5fcc7189294.jpg) +Figure 11: GAN generating image pairs for the Cityscapes dataset using 1000 paired samples. 
+ +![](images/512ff7fc2e11c55d5513a83bafa2b6a94a47e506a6dcf236692f8cc21664878c.jpg) + +![](images/17917246f5f9fea9c00dc9b2edc8a428ec2e7f6e8f3dce8c2cefd245e2892e76.jpg) + +![](images/35dab9ebfbe7af327c273971429634a5981a342d51a4ba63dcc6225f15bb5218.jpg) + +![](images/4a56655336bdb913eeffa1aef107a1629e0208f13ff5cb621a35871995ae6621.jpg) + +![](images/aef46d18c1618b80013d6958f53904f2cfa31ece1fa96e34c6ca77ab0af78c68.jpg) + +![](images/f822fe69893a2118ed2f0eba9aa7e3e48a9fcb36fc4ed845b7589f2f8a137110.jpg) + +![](images/075117b47e125b07f3e4fda15a2b07504873b5540fd4411ecb7504d429109953.jpg) + +![](images/760866bfe146bbbf85397f5c958a3b6778226f757ff2fc31e54dd2115c91de4a.jpg) + +![](images/95d9935c7cd9ec78cf5298d1757172703479b352fb99edc3c16d0fa6d783285d.jpg) + +![](images/8d2afc44a1406f4ea8566fc4da2fd12ce4dbcb6396117ecc6e55b1bb913b37d7.jpg) + +![](images/2a29f9dd83e75d3b377a977cfb7d5bf17eff9051d73a049101c1a3b6a2f08d38.jpg) + +![](images/31dfb72177799cae7215581876868978ca7da31ed75dcce72571fcbc762afb1f.jpg) + +![](images/ee9149da51181b2eca4e6c2ebcdad811455012f94e6d23af6e6cf68e3cef8fcf.jpg) + +![](images/867778bd7901f3cc375ea9aade55e4860607a6f5daa012cf5622991080dc2d6d.jpg) + +![](images/b1c9136486883c7a1385cd87952de12f94a3c04718da79575a33eea8617c3f7f.jpg) + +![](images/26401aff0b275982e11d498be2181888df0f5466d5ece55b89db76ab0201dd91.jpg) +Figure 12: GAN (big) generating image pairs for the Cityscapes dataset using 1000 paired samples. 
+ +![](images/d3e94680f1ae2fcbb9034bacd479f67484641dffed5ebf20734d34191a556bbd.jpg) + +![](images/c9f6da38539025e01f0a54a93e75ce9d806f87027005e2b877ff299d04f2101d.jpg) + +![](images/04c1b74e949c51bf9903d0b566b40c0c43621faa2b3a928ac46c4ec012e496a3.jpg) + +![](images/383ce59f00704b24ddb7df578b856e25d8f9524e3f0147cb5720c95de9d51419.jpg) + +![](images/a02996dd54c37015741b9d25417e91239bc5dae4326049a14523e8b9933e88ed.jpg) + +![](images/b8777f0c4182bef1025e94f28a62fd811a6764ac107a4a9dd9a0653bd80f4ac3.jpg) + +![](images/87cf89fe9f168afb50476f4bca821826c658fda54c4d13f083e70d7d5d71ca11.jpg) + +![](images/aef5ec48dab51d7b9757fd6d90c7294db3da522de8798c795e174ff21b52c6f8.jpg) + +![](images/93d5ee5d8ffa95fbc082419f108643ac26fcb9d2ec0768e9f52b2f3689693bd1.jpg) + +![](images/363ba5fae4b5f9e29ad5ad665af7cc2202aab05914096801ca8fe13bd1b1f94f.jpg) + +![](images/454d7eef54f1f070cf6a833985fa79ace4e82ce31ca9d4c9ff4dcd4c9f92c628.jpg) + +![](images/763a79b75c0a9486553a1f1e9e722bf52c441b41ccdf4ac18fdca1796e506688.jpg) + +![](images/118c4bbde78e44b6e7fb2a38127e482c5e0bcdb423bc518106c600e3dd3348dd.jpg) + +![](images/a08b4864f6a119d1822dfd233d2592eea91a13c364dd3c73ff249fd7eca5c53a.jpg) + +![](images/ebde8a69ef0d073d74302e711d683f45fa5678655bceece58fb48e96715cc83d.jpg) + +![](images/0eca0a8f04b4246bd0189c5262397d06afdf3a58b859f7d313b0c31fc30a7907.jpg) +Figure 13: FactorGAN generating image pairs for the Cityscapes dataset using 1000 paired samples. 
+ +![](images/6a1dfcc8d4c08b9b700e457f42a686e1972c73f5dd97a8b6e179d5fbe1f98825.jpg) + +![](images/b34fc905c758fad5c04bcc894e9ffb797f069234d66a20a334a55315227faf0e.jpg) + +![](images/16b2de157b405dbaf7a2ec75c5f5a6bbf421382970672408a38f7a27ba2cec51.jpg) + +![](images/2faf9560a41a4242f73b80428c1ed67dbe9852a9f10c632412b351fc95755e5e.jpg) + +![](images/1681756462768e678b16fff2077f847173d97de4d50473ee577cad8490ee0090.jpg) + +![](images/36cd920d6ca51696f64d26f11c449e109bcc5b15325452f3ec6e9592caf87090.jpg) + +![](images/92ee7e4a888650a56ce9ed02cf3ad6b9d7313fb53895790a66e0f22ee1ad4036.jpg) + +![](images/8b30af266b92ca5f61cb4c7281216ab6d2393b04edaba2d54acf9e4141aba1bc.jpg) + +![](images/ad36a8817a52fd97323083d5242b59605400f7e9fb2787018e2923579cf277c1.jpg) + +![](images/d849c7a40575264bb4b4e88c36f73a5f8fa2f96ce9f0a9e8b1ef01e70d46cc9f.jpg) + +![](images/878cae7ac6e25c2a9e9e75437c5782c60daebbf534bb3796c6c3060406e48169.jpg) + +![](images/ac5be38dea39f77d771517562c595255862a3420156e43a0f5ecab020e7a2787.jpg) + +![](images/3c933cb8b8333607485c3fc3a34c0c87fa281e41835c8c7c4dadd2150e790e30.jpg) + +![](images/41db918a67417854e2d8f4cc3490d5d80ce48ff0811cf44e494ac6077b15eb2b.jpg) + +![](images/ca1e18111337fc1fdf582376f86ab6ef18af4378f35507b0048446989f559adc.jpg) + +![](images/57410d7f1b116d59db2ebb606c1fca8144967d8d9e3563c96c16437ff510ea5b.jpg) +Figure 14: GAN generating image pairs using the full Cityscapes dataset. 
+ +![](images/65fccda1423a011fb52abec38d41fcb1ebc8700a7f900613c17ce0784ea18003.jpg) + +![](images/eb4e6cb0b19d998b97c64625ad954df546ec395dd6052ed69508f8e23a15e7ad.jpg) + +![](images/d3e0fbad1dc3b9dd35b4b1bdbbb9ce3238d06a327c156d75bda02130c8174fe6.jpg) + +![](images/f5e381ff59fa2eb5a17833b4ce697a09285583846f489575d588484db81ab969.jpg) + +![](images/9f8dc400c2bd0e8de64922a87e3419dce2fcdfd4a7137b57df2a5b213a3d434f.jpg) + +![](images/b05506b4b6b035e92cdd55bc0474a5a82a29e566f00627a5e0daffdef5a9bd0a.jpg) + +![](images/4f9c2e073b6ad81e51304241d96bbe0ca614534be4a47a382597bd5822bfaa88.jpg) + +![](images/cb1f483f001fcce69b83f64920025b9b8623c43519f5911f64e7d3ef3b5e4f04.jpg) + +![](images/bdfada40a44428c87953a607f328cc627c26c2351210b775bb68952a4b957905.jpg) + +![](images/453d2e1ff617e25bc608d0f724cab8458eb56c2070de07f202b74e86ebe09375.jpg) + +![](images/62fb7785b7940ec189ddc7573b139b71c19fdf3297b9448e7cff84df80d21d0c.jpg) + +![](images/ffa7c2971ac3a7268f2e14fe15678699884ed90a10f38a6dbb982ae0bb73344b.jpg) + +![](images/a8fd31676350e5a301b6526eeda22e4bedb9f359d20b5b59ed88946a44c5d24a.jpg) + +![](images/d0815f007806ec03c751f6198471c85ab75c07c353058eeb2c091bbbb4024652.jpg) + +![](images/a7bef58fb9cd72a04c1a8b0f8e3d22bec1a762bd0dc6e0387af701e23bd801a8.jpg) + +![](images/b5c5dcb05a137017fd1cbe986ec0f19984c14815634bd370b67e1f5180a3169d.jpg) +Figure 15: GAN (big) generating image pairs using the full Cityscapes dataset. 
+ +![](images/bcc29d3eef283c864ff8382af1d3736420b962e47545f84e4bda3cf215d27e6e.jpg) + +![](images/9909aa13e88635280fd7d7a4c02d25e14b5caaf1924e0a9ff507d00b1ee308b9.jpg) + +![](images/aecc2687ef008ce8268693400f214d912ff5896ff32612932c575c52544ab8d3.jpg) + +![](images/194bf65d16235eacfb2ec5b6b2d9034000425a7815eccc01b4f187c9190cf329.jpg) + +![](images/1fdf45a8dea18025cd3a32a090789422e26c3fab677ba7d5e197f92e4e2cea68.jpg) + +![](images/5ccbae09b20f656c531363b7715f00d6b780d3f9b922eba612f5229f253cea28.jpg) + +![](images/9bc0c7b8d3de536dd79f3474ea20de1390a526066877534fd431991a42059341.jpg) + +![](images/da804f2f34f145fe2f5f01c702a12a634fc0792478d77beb8b4d18bf98ea4e97.jpg) + +![](images/a66dadac31d24f4bd0ad91a8bd70fa94eef1f543f44b6f1da6dd24a0b5c56568.jpg) + +![](images/0f8db3a3b690ee8e55ef91b725b0d6e03eea8cd6be3b41b49631fa1451c510d9.jpg) + +![](images/9146ac6e63a4b1d029ca7328effec917a9cda505ced0f392af906063443d96be.jpg) + +![](images/7902f342cad92772a3f872bd1eff191e156fd2fb6ad241a781aba3b15d11f7b9.jpg) + +![](images/e9884a3dff8c863c2c388280e6eef69dc0dc05219dd4e2a7fcee35b66bfd472b.jpg) + +![](images/2b3d51bdc81e125c873f0f7d52ff4e49657a62bdf6ade5c0ee11db3831929cd1.jpg) + +![](images/1bcd5ffc0c8f1ecfddc5ca55d6bf1f848a59fed6143fba81df2c19c0300f2651.jpg) + +![](images/f2a8f9967282fe0c89439299d01dabaf8ddc0da9343f070df48ccbbc8a5c7054.jpg) +Figure 16: FactorGAN generating image pairs using the full Cityscapes dataset. 
+ +![](images/fb3f0f855556700ed87645d9abb5a76e24c3dce4fa819f7ab5d2e53effec3212.jpg) + +![](images/d3f87b6d52036fec5885a26d9997ecd4f7e0d5ebc47775fff52a099fda3555be.jpg) + +![](images/5d4dc86b58e85acc6c9eeb109e5bff307564963da929763cefe1216bc85bfda3.jpg) + +![](images/fe52727df94740479c1b8ae05b5bcc0c7257536fff9e6e577399278f995f7d4c.jpg) +(a) GAN + +![](images/7e0106f66f0839b043d172518075d5ff8453854e67dd607c08c61f845f42a140.jpg) +(b) FactorGAN + +![](images/f534c2a87c45955c410bc6a8fedb8896d6a6ce28d8f09bf8532b97f5e4d6858d.jpg) +Figure 17: Image pairs generated for the Edges2Shoes dataset using 100 paired samples. +(a) GAN +Figure 18: Image pairs generated for the Edges2Shoes dataset using 1000 paired samples. + +![](images/45526abd76d1b26d211fddbd748ec08fad9c81d468dd5711caa70fa81c2320fb.jpg) +(b) FactorGAN + +![](images/78e7507cbc42798221a4df3ffad17a3a0374ced3bd4898b71fdec822b797d87a.jpg) +(a) GAN + +![](images/e1b16fd92298937eebab15e42061008f654f61110502858a5c600af090bf221a.jpg) +(b) FactorGAN +Figure 19: Image pairs generated for the Edges2Shoes dataset using all samples as paired. 
+ +# A.6 IMAGE SEGMENTATION + +![](images/138d80568b011d5f9504dad11e1edbef60508b147872e5ad63102da3fd8e90de.jpg) + +![](images/4df390a6e723f022f782abb2c57bae38995ca073a8d0a6aa0f76141b69c70598.jpg) + +![](images/fe7df0635c7e7dc2e014b2b9347fa88222e84c5a2e75e6c13a11fc1214148496.jpg) + +![](images/5c602f23f22d7e1da106b167dc84f44e0785318418a70e9ec5ed6ffd51f0e809.jpg) + +![](images/203571fc8a4f6fdcc55c6fbdb2caa1b21547f659f5880d7e8365f0b013f35789.jpg) +(a) GAN + +![](images/b22475e1318c2ea80df5401363d6b3e7e03fbc242ed43659976964e4fa958daf.jpg) +Figure 20: Segmentation predictions made on the Cityscapes dataset for the same set of test inputs, compared between models, using 100 paired samples for training + +![](images/eb0c2a588123be80a16d0ed86c3f663158768f0004e852f79f178f374bf88bd7.jpg) + +![](images/6bdabe8ca2e7eb066358662f792ae8705f2ef4453728ab89f22e678f3343986a.jpg) + +![](images/fe29cf591bef3427982aab0195a3fe00855ca28f83141db219f3e2e37700452f.jpg) + +![](images/4a69208f352d9bdb8646347137faac4cb525a15730f33649dfaaa082b7e72ca4.jpg) + +![](images/942336ec5fa96d8b3dad28da7e78359e99d20d5df78fa13df85b65606f9bbceb.jpg) +(b) FactorGAN + +![](images/d3ab5907b698810c9ee1f8ed0a83975255b62cf07dece3c05b47d7ee3ba0e7cc.jpg) + +![](images/2a3d506fbdc3546a5709f4dfc122a323ebadc7313a969480e0700d5c198ef32c.jpg) + +![](images/5c0cf8c903bac5470119c4695bb66ec57e6ffd46c24c4f52308f70923f34aa59.jpg) + +![](images/60498ae11ee2da8caa7bb79794dabfca3009dc2201e8301ae7422a8cdc144efc.jpg) + +![](images/c9fd1a3ab6edfeb4d70347957a5b7e15f718c997f450ecc02e3bb71cd9cd2885.jpg) + +![](images/ed324a564c4c799033c0b5c32ae38dc40ac4c60f337129b5c20dc229b4932f24.jpg) +(a) GAN + +![](images/fbeca8a3ce64e6ed67a96418f9fb43ce80b95b564cb0bd206b849f023f6e3b01.jpg) +Figure 21: Segmentation predictions made on the Cityscapes dataset for the same set of test inputs, compared between models, using 1000 paired samples for training + +![](images/dbac99408995eec2b07e4c0aae49b241015a6a59c10fcb7b54f76dbc30e94eb0.jpg) + 
+![](images/04d80115c45f53e298dcc06447fbf34e61451b18c4d4b82bb9cc91065f69179b.jpg) + +![](images/f3fb30711bff3fcad25b7f8e1aab7203056497b3cf51b27e343f75145663a0a6.jpg) + +![](images/c13ca3bbaabf087eb436a145da34600b7094679af6e1da1f2d41d181940590ca.jpg) + +![](images/725fe96bc44440c49342b6372b52db395b68d3fefdc7c58cbe081aeea7444e3a.jpg) +(b) FactorGAN + +![](images/8c06de0a7e0708dd5a38f2b2c3cf4d63e25019dc65cd975603f041e52ce74bc7.jpg) + +![](images/4070cfb7b82599153e422d4581b4029c71fd9338cc9a660c0373f9cb70d88986.jpg) + +![](images/dd900398f1e53b6be1652cc08351261147ecf21ac62d19c5e976c211d1015711.jpg) + +![](images/297e0930f3bff6af9297fd3f7dd2d968c4764054997d584d0797c86898e6e070.jpg) + +![](images/b6e750359b3c83adf16948da73ebd722594f3550676c3c094704b3a0b69823ee.jpg) +Figure 22: Segmentation predictions made on the Cityscapes dataset for the same set of test inputs, compared between models, using all paired samples for training + +![](images/707fc3349e1f0fd265555a8aa923fc3e5bb2d9b9530f67e1c52cc0cc8835e113.jpg) +(a) GAN + +![](images/6b4ab751df0dd77a1ced3b9ad9107fdb89b186033771bf0e86154b05e5b98f0c.jpg) + +![](images/eca7d4dd494448805994adf1a0abdae581fa932becdf799639e2d74447959252.jpg) + +![](images/6d0544c8c10772073093005840e1fd3b187b3428165c13a22307f8df5bc979ea.jpg) + +![](images/25f1d398caf71ceb39ee2c6121d88fc37d9ecc81c8bdc6f036ab044e3975b8d8.jpg) + +![](images/94e610e4b80d3323248eefc0b5581ba4f509ecb3c5ec660e5aa8bd5ecb24ddb2.jpg) + +![](images/584c36b56f859bddd3d8408ba6de8877b4af69b34c4c9ae7cdda11fe26c04238.jpg) +(b) FactorGAN + +![](images/15ede49a3a7adec1aafba3e04dc3f18ab7cf95fd6161dcf8862a22c5e113a35b.jpg) + +![](images/c2e566c6c1c7e75a5931575736543321aca20ade02a93c24ec6c9e2d736f75a5.jpg) +Figure 23: CycleGAN generating image pairs for the Cityscapes dataset without any paired samples. 
+ +![](images/dae42d28c1ed804b12a12fc254a0a67ae26bb7c7c5e83e38ebb8ee07c40ca52e.jpg) + +![](images/89376f917cb4d94dac927ba46847c61d2b0c03195b2df00646c2b9c2d0ff2ce6.jpg) + +![](images/e5bf28891a5425d185f81e99165342aadd7ab9ba84db12fd9fba8185bea26188.jpg) + +# TRAINING RECURRENT
NEURAL NETWORKS ONLINE BY LEARNING EXPLICIT STATE VARIABLES + +Somjit Nath, Vincent Liu, Alan Chan, Xin Li, Adam White and Martha White + +Department of Computing Science + +University of Alberta + +{somjit,vliul,achan4,xzli,amw8,whitem}@ualberta.ca + +# ABSTRACT + +Recurrent neural networks (RNNs) allow an agent to construct a state-representation from a stream of experience, which is essential in partially observable problems. However, there are two primary issues one must overcome when training an RNN: the sensitivity of the learning algorithm's performance to truncation length, and long training times. There are a variety of strategies to improve training in RNNs, most notably Backpropagation Through Time (BPTT) and Real-Time Recurrent Learning (RTRL). These strategies, however, are typically computationally expensive and focus computation on computing gradients back in time. In this work, we reformulate the RNN training objective to explicitly learn state vectors; this breaks the dependence across time and so avoids the need to estimate gradients far back in time. We show that for a fixed buffer of data, our algorithm, called Fixed Point Propagation (FPP), is sound: it converges to a stationary point of the new objective. We investigate the empirical performance of our online FPP algorithm, particularly in terms of computation compared to truncated BPTT with varying truncation levels. + +# 1 INTRODUCTION + +Many online prediction problems are partially observable: the most recent observation is typically insufficient to make accurate predictions about the future. Augmenting the inputs with a history can improve accuracy, but can require a long history when there are long-term dependencies back in time. Recurrent Neural Networks (RNNs) (Elman, 1990; Hopfield, 1982) learn a state which summarizes this history. Specifically, RNNs contain recurrent connections to their hidden layers which allow past information to propagate through time.
This state need not correspond to a true underlying state; rather, the state is subjective and constructed to facilitate prediction. RNNs have been widely used in speech recognition (Hinton et al., 2012; Graves et al., 2013; Miao et al., 2015; Chan et al., 2016), image captioning (Mao et al., 2014; Lu et al., 2016; Vinyals et al., 2014), speech synthesis (Mehri et al., 2016) and reinforcement learning (Hochreiter and Schmidhuber, 1997; Dull et al., 2012). + +Despite these successes, there are significant stability and computational issues in training RNNs online (Pascanu et al., 2013; Tallec and Ollivier, 2017). In the online setting, the agent faces an unending stream of data and on each step the agent must update its parameters to make a new prediction. RNNs are typically trained either using Backpropagation-through-time (BPTT) (Werbos, 1990) or approximations to an algorithm called Real-Time Recurrent Learning (RTRL) (Williams and Zipser, 1989a; Pearlmutter, 1995), although there are methods that appeal to other principles (see Murray (2019) for instance). The update for BPTT is a variant of standard backpropagation, computing gradients all the way back in time. This approach is problematic because the computational cost scales linearly with the number of time-steps. A more common alternative is truncated BPTT (T-BPTT) (Williams and Peng, 1990), which only computes the gradient up to some maximum number of steps: we truncate how far back in time we unroll the network to update the parameters. This approximation, though, is not robust to long-term dependencies (Tallec and Ollivier, 2017). Approximate gradients can also be computed online by RTRL (Williams and Zipser, 1989b). This online algorithm, however, has high computational complexity per step and therefore is not commonly used in practice. + +Recently, there have been some efforts towards approximating gradients for back-propagation, both for feedforward NNs and RNNs.
Synthetic gradients and $\mathrm{BP}(\lambda)$ (Jaderberg et al., 2017) use an idea similar to returns from reinforcement learning: they approximate gradients by bootstrapping off estimated gradients in later layers (Jaderberg et al., 2017; Czarnecki et al., 2017). There are several methods that approximate RTRL, which is itself an approximation of the true gradient back in time, including NoBackTrack (Ollivier and Charpiat, 2015), Unbiased Online Recurrent Optimization (UORO) (Tallec and Ollivier, 2017), which uses an unbiased rank-1 approximation to the full matrix gradient, and Kronecker Factored RTRL (Mujika et al., 2018), which uses a Kronecker product decomposition to approximate the RTRL update for a class of RNNs. Finally, there are some methods that use selective memory back in time to compute gradients for the most pertinent samples, using skip connections (Ke et al., 2017). All of these methods, however, attempt to approximate the gradient back in time, for the current observation and state, and so suffer to some extent from the same issues as BPTT and RTRL. + +In this paper, we investigate an alternative optimization strategy that does not attempt to approximate the gradient back in time. Instead we learn the state variables in the RNN explicitly. These new variables are optimized both to improve prediction accuracy and to maintain consistency in producing the next learned state variables. This second constraint is a fixed-point formula for the states under the given RNN dynamics.1 We develop a provably sound stochastic update for the new fixed-point objective, which we then use to develop an online algorithm for training RNNs. The algorithm explicitly optimizes state vectors and RNN parameters with many efficient one-step (or short-term multi-step) updates across a buffer.
Instead of focusing computation on getting a more accurate gradient estimate for this time-step, our algorithm, called Fixed Point Propagation (FPP), can more effectively use computation to update prediction accuracy across states. We demonstrate that the algorithm is effective on several problems with long-term dependencies, and improves over T-BPTT, particularly in terms of stability and computation. + +# 2 PROBLEM SETTING AND BACKGROUND + +We consider a partially observable online setting, where an immediate observation is not sufficient for prediction. More formally, assume there is a sequence of $n$ observations, $\mathbf{o}_1, \ldots, \mathbf{o}_n$, which provide only partial information about an unknown underlying sequence of states. After obtaining an observation $\mathbf{o}_i$, the agent makes a prediction $\hat{\mathbf{y}}_i$ and sees the actual outcome $\mathbf{y}_i$. The goal is to minimize this prediction error. Given only $\mathbf{o}_i$, the agent is unlikely to make accurate predictions about $\mathbf{y}_i$, because $\mathbf{o}_i$ is not a sufficient statistic to predict $\mathbf{y}_i$: $p(\mathbf{y}|\mathbf{o}_i, \mathbf{o}_{i-1}, \mathbf{o}_{i-2}, \ldots) \neq p(\mathbf{y}|\mathbf{o}_i)$. The agent could obtain a lower prediction error using a history of observations. Unfortunately, the agent may require a prohibitively long history, even if this history could have been summarized compactly. + +An alternative is to construct state using a Recurrent Neural Network (RNN), by learning a state-update function. Given the current (constructed) state $\mathbf{s}_{t-1} \in \mathbb{R}^k$, and a new observation $\mathbf{o}_t \in \mathbb{R}^d$, the parameterized state-update function $f_{\mathbf{W}}: \mathbb{R}^k \times \mathbb{R}^d \to \mathbb{R}^k$, with parameters $\mathbf{W}$, produces the next (constructed) state $\mathbf{s}_t = f_{\mathbf{W}}(\mathbf{s}_{t-1}, \mathbf{o}_t)$.
For example, $f_{\mathbf{W}}$ could be a linear weighting of $\mathbf{s}_{t-1}$ and $\mathbf{o}_t$, with a ReLU activation: $f_{\mathbf{W}}(\mathbf{s}_{t-1}, \mathbf{o}_t) = \max([\mathbf{s}_{t-1}, \mathbf{o}_t] \mathbf{W}, \mathbf{0})$. More complex state-updates are possible, like the gating in Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997). + +The objective is to adapt these parameters $\mathbf{W}$ for the state-update to minimize prediction error. For the current state $\mathbf{s}_t$, a prediction is made by parameterized function $g_{\beta}:\mathbb{R}^k\to \mathbb{R}^m$ with learned parameters $\beta$. For example, the prediction could be a linear weighting of the state, $g_{\beta}(\mathbf{s}) = \boldsymbol{\beta}^{\top}\mathbf{s}$. We denote the prediction error as $\ell_{\beta}:\mathbb{R}^{k}\times \mathbb{R}^{m}\rightarrow \mathbb{R}$ for a given $\beta$. For example, this loss could be + +$$ +\ell_{\boldsymbol{\beta}}(\mathbf{s}_t; \mathbf{y}_t) = \|g_{\boldsymbol{\beta}}(\mathbf{s}_t) - \mathbf{y}_t\|_2^2. +$$ + +The goal in RNN training is to minimize, for some start state $\mathbf{s}_0$, + +$$ +\min_{\boldsymbol{\beta}, \mathbf{W}} \sum_{i=1}^{n} \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\dots f_{\mathbf{W}}\left(\underbrace{f_{\mathbf{W}}\left(\mathbf{s}_0, \mathbf{o}_1\right)}_{\mathbf{s}_1}, \mathbf{o}_2\right), \dots, \mathbf{o}_i\right); \mathbf{y}_i\right). \tag{1} +$$ + +Computing gradients for this objective, however, can be prohibitively expensive. A large literature on optimizing RNNs focuses on approximating this gradient, either through approximations to RTRL or improvements to BPTT. RTRL (Williams and Zipser, 1989b) uses a recursive gradient form, which can take advantage of gradients computed up until the last observation to compute the gradient for the next observation.
This estimate, however, is only exact in the offline case and thus RTRL is an approximation of the true gradient in our online setting. Further, in either the online or offline setting, RTRL requires $O(k^4)$ computation per observation (recall $k$ is the dimension of the state). In BPTT, gradients are computed back in time by unrolling the network. In the online setting, it is infeasible to compute gradients all the way back to the beginning of time. Instead, this procedure is truncated $T$ steps back in time. T-BPTT is suitable for the online setting, and requires $O(Tk^2)$ computation per step, i.e., for each observation. + +Arguably the most widely-used strategy is T-BPTT, because of its simplicity. Unfortunately, T-BPTT has been shown to fail in settings where dependencies back in time are further than $T$ (Tallec and Ollivier, 2017), as we affirm in our experiments. Yet, the need for simple algorithms remains. In this work, we investigate an alternative direction for optimizing RNNs that does not attempt to estimate the gradients of (1). + +Note that in addition to a variety of optimization strategies, different architectures have also been proposed to facilitate learning long-term dependencies with RNNs. The most commonly used are LSTMs (Hochreiter and Schmidhuber, 1997), which use gates to remember and forget components of the state. Other architectures include clockwork RNNs (Koutnik et al., 2014), phased LSTMs (Neil et al., 2016), hierarchical multi-scale RNNs (Chung et al., 2016), dilated RNNs (Chang et al., 2017), and skip RNNs (Campos et al., 2017). In this work, we focus on a general-purpose RNN algorithm that could be combined with each of these architectures for further improvement. + +# 3 A NEW FIXED-POINT OBJECTIVE FOR RNNS + +In this section we introduce our new formulation for training RNNs. We begin with an idealized setting to introduce and explain the ideas. Later we will generalize our approach to partially observable online training tasks.
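As a concrete reference point for the setup above, the unrolled objective in (1) and the window that T-BPTT differentiates through can be sketched in a few lines of NumPy. All shapes, names, and the ReLU cell are illustrative assumptions, not code from the paper:

```python
import numpy as np

def f_W(W, s_prev, o_t):
    # Example state update: ReLU of a linear weighting of [s_{t-1}, o_t].
    return np.maximum(np.concatenate([s_prev, o_t]) @ W, 0.0)

def full_objective(W, beta, s0, observations, targets):
    # The unrolled objective in (1): roll the state forward from s_0 and
    # accumulate the squared prediction error at every step.
    s, total = s0, 0.0
    for o_t, y_t in zip(observations, targets):
        s = f_W(W, s, o_t)                        # s_t = f_W(s_{t-1}, o_t)
        total += np.sum((beta.T @ s - y_t) ** 2)  # l_beta(s_t; y_t)
    return total

def tbptt_window_loss(W, beta, s_frozen, obs_window, y):
    # T-BPTT differentiates the current step's loss through only the last
    # T observations, starting from a stored state that is treated as a
    # constant (the truncation boundary); cost per step is O(T k^2).
    s = s_frozen
    for o_t in obs_window:
        s = f_W(W, s, o_t)
    return np.sum((beta.T @ s - y) ** 2)

# Illustrative dimensions: state k, observation d, target m, n steps.
k, d, m, n, T = 4, 3, 2, 20, 5
rng = np.random.default_rng(0)
W = rng.standard_normal((k + d, k)) * 0.1
beta = rng.standard_normal((k, m)) * 0.1
obs = [rng.standard_normal(d) for _ in range(n)]
ys = [rng.standard_normal(m) for _ in range(n)]

total = full_objective(W, beta, np.zeros(k), obs, ys)
last = tbptt_window_loss(W, beta, np.zeros(k), obs[-T:], ys[-1])
```

A gradient-based learner would differentiate `tbptt_window_loss` with respect to $\mathbf{W}$ and $\boldsymbol{\beta}$; dependencies further back than the window never enter that gradient, which is the failure mode on long-term dependencies noted above.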
+ +First, assume the observations are produced by an underlying Markov chain with a discrete set of states, and the agent has access to a set of observations that are a deterministic function of the state. We denote the set of states $\mathcal{H} = \{1,\dots ,n\}$, and the observations from each state as $\mathbf{o}_1,\ldots ,\mathbf{o}_n$. The goal is to find state vectors $\mathbf{s}_1,\ldots ,\mathbf{s}_n\in \mathbb{R}^k$ that satisfy two goals. One is that the state can be updated consistently: + +$$ +f_{\mathbf{W}}\left(\mathbf{s}_i, \mathbf{o}_j\right) = \mathbf{s}_j \quad \forall j \text{ where } \mathbf{P}(i, j) > 0 \tag{2} +$$ + +for $\mathbf{P}:\mathcal{H}\times \mathcal{H}\to [0,1]$ the transition dynamics. Another criterion is for these state vectors to facilitate accurate predictions. In particular, the learned state should minimize $\ell_{\beta}(\mathbf{s}_j;\mathbf{y}_j)$ for all $j \in \mathcal{H}$, where $\mathbf{y}_j\in \mathbb{R}^m$ is the expected target for a true state $j$. Together, this results in the following optimization, with the relationship between states encoded as a constraint + +$$ +\min_{\boldsymbol{\beta}, \mathbf{W}, \mathbf{s}} \sum_{i, j \in \mathcal{H}} \mathbf{P}(i, j) \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\mathbf{s}_i, \mathbf{o}_j\right); \mathbf{y}_j\right) \quad \text{s.t. } f_{\mathbf{W}}\left(\mathbf{s}_i, \mathbf{o}_j\right) = \mathbf{s}_j \ \forall i, j \text{ where } \mathbf{P}(i, j) > 0. +$$ + +The satisfiability of this will depend on $f_{\mathbf{W}}$ and whether $\mathbf{s}_i$ and $\mathbf{o}_j$ can uniquely determine $\mathbf{s}_j$. + +More generally, we will not know the underlying state, nor is it necessarily discrete. But we can consider a similar objective for observed data. Assume $n$ observations have been observed, $\mathbf{o}_1, \ldots, \mathbf{o}_n$, with corresponding targets $\mathbf{y}_1, \ldots, \mathbf{y}_n$.
Let the state variables be stacked in a matrix $\mathbf{S} \in \mathbb{R}^{k \times (n+1)}$ and the observations in a matrix $\mathbf{O} \in \mathbb{R}^{d \times n}$, with $\mathbf{S} = [\mathbf{s}_0, \ldots, \mathbf{s}_n]$ and $\mathbf{O} = [\mathbf{o}_1, \ldots, \mathbf{o}_n]$. The constraint on the states becomes $\mathbf{S} = F_{\mathbf{W}}(\mathbf{S}, \mathbf{O})$ for the operator

$$
F_{\mathbf{W}}(\mathbf{S}, \mathbf{O}) \stackrel{\text{def}}{=} [\mathbf{S}_{:, 0},\, f_{\mathbf{W}}(\mathbf{S}_{:, 0}, \mathbf{O}_{:, 1}),\, \dots,\, f_{\mathbf{W}}(\mathbf{S}_{:, n-1}, \mathbf{O}_{:, n})]. \tag{3}
$$

We call this the fixed-point constraint, since a solution $\mathbf{S}$ to the constraint is a fixed point of the system defined by $F_{\mathbf{W}}(\cdot ,\mathbf{O})$. The resulting optimization, for this batch, is

$$
\min_{\boldsymbol{\beta}, \mathbf{W}, \mathbf{S}} \sum_{i=1}^{n} \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_{i}\right); \mathbf{y}_{i}\right) \quad \text{s.t. } \mathbf{S} = F_{\mathbf{W}}(\mathbf{S}, \mathbf{O}). \tag{4}
$$

The solution to this new optimization corresponds to the solution for the original RNN problem in (1)—when also optimizing over $\mathbf{s}_0$ in (1)—because the fixed-point constraint forces the variables $\mathbf{s}_i$ to be representable by $f_{\mathbf{W}}$. Therefore, the reformulation as a fixed-point problem has not changed the solution; rather, it has only made explicit that the goal is to learn these states, and it facilitates the use of alternative optimization strategies.

Reformulations like the one in (4) have been widely considered in optimization, because (4) is actually an auxiliary-variable reformulation of (1). In this case, the auxiliary variables are the states $\mathbf{S}$.
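As a concrete illustration, the operator in (3) can be sketched in NumPy. The tanh cell below is an assumption for illustration only (the text leaves $f_{\mathbf{W}}$ generic); the point is that a state matrix produced by rolling the RNN forward satisfies the fixed-point constraint exactly:

```python
import numpy as np

def f_W(W, s, o):
    """A minimal RNN cell: next state from current state s and observation o."""
    W_s, W_o = W  # recurrent and input weight matrices
    return np.tanh(W_s @ s + W_o @ o)

def F_W(W, S, O):
    """Fixed-point operator of Eq. (3): keeps column 0 and regenerates
    column i from column i-1 and observation o_i."""
    cols = [S[:, 0]]
    for i in range(1, S.shape[1]):
        cols.append(f_W(W, S[:, i - 1], O[:, i]))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
k, d, n = 4, 3, 6                    # state dim, obs dim, number of steps
W = (0.1 * rng.standard_normal((k, k)), 0.1 * rng.standard_normal((k, d)))
O = rng.standard_normal((d, n + 1))  # column 0 unused, to match 1-based o_i
# Roll the RNN forward so S satisfies the fixed-point constraint exactly.
S = np.zeros((k, n + 1))
for i in range(1, n + 1):
    S[:, i] = f_W(W, S[:, i - 1], O[:, i])
assert np.allclose(F_W(W, S, O), S)  # S = F_W(S, O)
```

Any other choice of $\mathbf{S}$ would generally violate the constraint, which is exactly what the penalty term introduced below measures.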
Using auxiliary variables is a standard strategy in optimization—under the general term method of multipliers—to decouple terms in an optimization and so facilitate decentralized optimization.

Such auxiliary-variable methods have even been previously considered for optimizing neural networks. Carreira-Perpiñán and Wang (2014) introduced the Method of Auxiliary Coordinates (MAC), which explicitly optimizes hidden vectors in the neural network. Taylor et al. (2016) proposed a similar strategy, but introduced an additional set of auxiliary variables to obtain further decoupling and a particularly efficient algorithm for the batch setting. Scellier and Bengio (2017) introduced Equilibrium Propagation for symmetric neural networks, where the state of the network is explicitly optimized to obtain a stationary point of the energy function. Gotmare et al. (2018) built on these previous ideas to obtain a stochastic gradient descent update for distributed updates to blocks of weights in a neural network. Our proposed optimization can be seen as a variation of the objective considered for MAC (Carreira-Perpiñán and Wang, 2014, Equation 1), though we arrived at it from a different perspective: with the goal of learning explicit state vectors.

The objective in (4) still has two issues. First, it is not amenable to online updating: it is a batch optimization with a constraint. Second, it does not allow for any training back in time. But this stringent computational restriction is unnecessary. We could have instead asked: learn states so that, when iterated twice through the RNN, the resulting state enables accurate predictions on the target two steps in the future. We develop a more general objective below to address both issues.

We can rewrite the objective so that it is clear how to stochastically sample it, and so enable online updating.
As in MAC-QP (Carreira-Perpiñán and Wang, 2014), we reformulate this constrained objective into an unconstrained objective with a quadratic penalty, with $\lambda > 0$

$$
L(\boldsymbol{\beta}, \mathbf{W}, \mathbf{S}) \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=1}^{n} \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_{i}\right); \mathbf{y}_{i}\right) + \frac{\lambda}{2n} \sum_{i=1}^{n} \left\| \mathbf{s}_{i} - f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_{i}\right) \right\|_{2}^{2} \tag{5}
$$

Once in this unconstrained form, we can perform stochastic gradient descent on this objective in terms of $\boldsymbol{\beta}$, $\mathbf{W}$ and $\mathbf{S}$ to reach a stationary point. To use stochastic gradient descent, the objective needs to break up into a sum of losses, $L(\boldsymbol{\beta}, \mathbf{W}, \mathbf{S}) = \frac{1}{n} \sum_{i=1}^{n} L_i(\boldsymbol{\beta}, \mathbf{W}, \mathbf{S})$, where we define

$$
L_{i}(\boldsymbol{\beta}, \mathbf{W}, \mathbf{S}) \stackrel{\text{def}}{=} \ell_{\boldsymbol{\beta}}(f_{\mathbf{W}}(\mathbf{s}_{i-1}, \mathbf{o}_{i}); \mathbf{y}_{i}) + \frac{\lambda}{2} \| \mathbf{s}_{i} - f_{\mathbf{W}}(\mathbf{s}_{i-1}, \mathbf{o}_{i}) \|_{2}^{2}.
$$

We can stochastically sample $i$ from our buffer of $n$ samples and update our variables with $\nabla L_{i}$. Fortunately, because the state variables break connections across time, this gradient is zero for most variables, except $\boldsymbol{\beta}$, $\mathbf{W}$, $\mathbf{s}_{i-1}$ and $\mathbf{s}_{i}$. Therefore, each stochastic update can be computed efficiently.

Second, we can generalize this objective to incorporate more than one step of propagation back in time, simply by generalizing the fixed-point operator.
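The sampled per-step loss $L_i$ above can be sketched as follows, assuming for illustration a tanh cell and a squared-error readout for $\ell_{\boldsymbol{\beta}}$ (neither is specified here); note that the loss depends only on $\boldsymbol{\beta}$, $\mathbf{W}$, $\mathbf{s}_{i-1}$ and $\mathbf{s}_i$:

```python
import numpy as np

def f_W(W, s, o):
    W_s, W_o = W
    return np.tanh(W_s @ s + W_o @ o)

def L_i(W, beta, S, O, Y, i, lam=1.0):
    """Per-sample penalized loss: prediction loss at step i plus the
    quadratic fixed-point penalty of Eq. (5), for a single sampled i."""
    s_hat = f_W(W, S[:, i - 1], O[:, i])           # one-step prediction of s_i
    pred_loss = 0.5 * (beta @ s_hat - Y[i]) ** 2   # squared-error readout
    penalty = 0.5 * lam * np.sum((S[:, i] - s_hat) ** 2)
    return pred_loss + penalty

rng = np.random.default_rng(1)
k, d, n = 4, 3, 8
W = (0.1 * rng.standard_normal((k, k)), 0.1 * rng.standard_normal((k, d)))
beta = rng.standard_normal(k)
S = rng.standard_normal((k, n + 1))
O = rng.standard_normal((d, n + 1))
Y = rng.standard_normal(n + 1)

i = int(rng.integers(1, n + 1))   # stochastically sample a time step
loss = L_i(W, beta, S, O, Y, i)   # touches only beta, W, s_{i-1}, s_i
```

A gradient of this scalar (e.g., via automatic differentiation) is therefore sparse in the state variables, which is what makes each stochastic update cheap.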
Consider the more general $T$-step fixed-point problem $\mathbf{S} = F_{T,\mathbf{W}}(\mathbf{S},\mathbf{O})$ where

$$
F_{T, \mathbf{W}}(\mathbf{S}, \mathbf{O}) \stackrel{\text{def}}{=} \Big[ \mathbf{S}_{:,0},\, \mathbf{S}_{:,1},\, \ldots,\, \mathbf{S}_{:,T-1},\, \underbrace{f_{\mathbf{W}}(\ldots f_{\mathbf{W}}(f_{\mathbf{W}}(\mathbf{S}_{:,0}, \mathbf{O}_{:,1}), \mathbf{O}_{:,2}), \ldots, \mathbf{O}_{:,T})}_{\hat{\mathbf{S}}_{:,T}},\, \ldots,\, f_{\mathbf{W}}(\ldots f_{\mathbf{W}}(f_{\mathbf{W}}(\mathbf{S}_{:,n-T}, \mathbf{O}_{:,n-T+1}), \mathbf{O}_{:,n-T+2}), \ldots, \mathbf{O}_{:,n}) \Big].
$$

![](images/ba392df513b9c37dc192648a156487776490c04b9137e31ea7031d9a6dcf7292.jpg)
Figure 1: A single update by FPP. It randomly samples $i$, and performs a gradient descent update to $\mathbf{s}_{i - T}$, $\mathbf{s}_i$, $\mathbf{W}$ and $\boldsymbol{\beta}$, where the loss on the targets affects $\mathbf{s}_{i - T}$, $\mathbf{W}$, $\boldsymbol{\beta}$ and the loss producing the next state variable $\mathbf{s}_i$ affects $\mathbf{s}_{i - T}$, $\mathbf{s}_i$, $\mathbf{W}$. The state variables are stored in the buffer, but are explicit variables we learn, just like $\mathbf{W}$ and $\boldsymbol{\beta}$.

For $T = 1$, we recover the operator provided in (3). This generalization mimics the use of $T$-step methods for learning value functions in reinforcement learning, and it provides more flexibility in using the allocated computation per step. For example, for a budget of $T$ updates per step, we could use $T$ 1-step updates, $T/2$ 2-step updates, all the way up to one $T$-step update.
The loss for general $T$ similarly decomposes into a sum $\frac{1}{n - T + 1}\sum_{i = T}^{n}L_{i}(\boldsymbol{\beta},\mathbf{W},\mathbf{S})$ for

$$
L_{i}(\boldsymbol{\beta}, \mathbf{W}, \mathbf{S}) = \ell_{\boldsymbol{\beta}}\left(\hat{\mathbf{s}}_{i}\left(\mathbf{s}_{i-T}, \mathbf{W}\right); \mathbf{y}_{i}\right) + \frac{\lambda}{2} \| \mathbf{s}_{i} - \hat{\mathbf{s}}_{i}\left(\mathbf{s}_{i-T}, \mathbf{W}\right) \|_{2}^{2} \tag{6}
$$

where $\hat{\mathbf{s}}_i(\mathbf{s}_{i-T},\mathbf{W})\stackrel{\text{def}}{=}f_{\mathbf{W}}(\dots f_{\mathbf{W}}(f_{\mathbf{W}}(\mathbf{s}_{i-T},\mathbf{o}_{i-T+1}),\mathbf{o}_{i-T+2}),\dots ,\mathbf{o}_i)$.

For each stochastic sample $i$, $\nabla L_{i}$ is only non-zero for $\boldsymbol{\beta}$, $\mathbf{W}$, $\mathbf{s}_{i-T}$ and $\mathbf{s}_i$. Though these updates can be computed using automatic differentiation on $L_{i}$, the explicit updates are straightforward, so we include them here, using the shorthand $\hat{\mathbf{s}}_i$ for $\hat{\mathbf{s}}_i(\mathbf{s}_{i-T},\mathbf{W})$:

$$
\nabla_{\mathbf{s}_{i-T}} L_{i} = \left[ \nabla_{\hat{\mathbf{s}}_{i}} \ell_{\boldsymbol{\beta}}(\hat{\mathbf{s}}_{i}; \mathbf{y}_{i}) - \lambda (\mathbf{s}_{i} - \hat{\mathbf{s}}_{i}) \right]^{\top} \nabla_{\mathbf{s}_{i-T}} \hat{\mathbf{s}}_{i}
$$

$$
\nabla_{\mathbf{s}_{i}} L_{i} = \lambda (\mathbf{s}_{i} - \hat{\mathbf{s}}_{i})
$$

$$
\nabla_{\mathbf{W}} L_{i} = \left[ \nabla_{\hat{\mathbf{s}}_{i}} \ell_{\boldsymbol{\beta}}\left(\hat{\mathbf{s}}_{i}; \mathbf{y}_{i}\right) - \lambda \left(\mathbf{s}_{i} - \hat{\mathbf{s}}_{i}\right) \right]^{\top} \nabla_{\mathbf{W}} \hat{\mathbf{s}}_{i} \tag{7}
$$

$$
\nabla_{\boldsymbol{\beta}} L_{i} = \nabla_{\boldsymbol{\beta}} \ell_{\boldsymbol{\beta}}(\hat{\mathbf{s}}_{i}; \mathbf{y}_{i})
$$

The online algorithm uses these updates
on a sliding-window buffer, instead of a fixed buffer. This algorithm—called Fixed Point Propagation (FPP)—is summarized in Figure 1 and Algorithm 1.

As alluded to, the advantage of FPP over T-BPTT is that we are not restricted to focusing all computation on estimating the gradient $T$ steps back in time for one state-observation pair. Rather, instead of sweeping all the way back, we spread updates across random transitions in the buffer. This has three advantages. First, it updates more states per step, including updates towards their targets. Second, it ensures that targets for older transitions are constantly being reinforced, and spends gradient computation towards this goal, rather than spending all computation on a more exact gradient for the recent time step. This distributes updates better across time, and should also result in a more stable state. Third, the formulation as stochastic gradient descent on the fixed-point objective makes it a sound strategy—as opposed to truncation, which is not. FPP, therefore, maintains the simplicity of T-BPTT, but provides a more viable direction to obtain sound algorithms for training RNNs.

# 4 CONVERGENCE RESULTS

In this section we show two theoretical results. First, we show that the FPP algorithm converges to a stationary point, for a fixed buffer.
This result is a relatively straightforward application of recent theory for nonconvex optimization (Ghadimi et al., 2016), mainly requiring us to show that our

Algorithm 1 Fixed Point Propagation (FPP)
Input: a truncation parameter $T$, mini-batch size $B$, and number of updates per step $M$
Initialize weights $\mathbf{W}$ and $\boldsymbol{\beta}$ randomly
Initialize state $\mathbf{s}_0 \gets \mathbf{0} \in \mathbb{R}^k$
Initialize an empty buffer $\mathcal{B}$ of size $N$
for $t \gets 1,2,\ldots$ do
if $\mathcal{B}$ is full then
Remove the oldest transition
end if
Observe $\mathbf{o}_t, \mathbf{y}_t$, and compute $\mathbf{s}_t = f_{\mathbf{W}}(\mathbf{s}_{t-1}, \mathbf{o}_t)$
Add $(\mathbf{s}_t, \mathbf{o}_t, \mathbf{y}_t)$ to buffer $\mathcal{B}$
if $t \geq T$ then
for $j \gets 1, \dots, M$ do
Sample a mini-batch of size $B$, of trajectories of length $T$, from the buffer:
 $\{(\mathbf{s}_{i_l-T}, \mathbf{o}_{i_l-T}, \dots, \mathbf{s}_{i_l}, \mathbf{o}_{i_l}, \mathbf{y}_{i_l})\}_{l=1}^{B}$ where $i_l$ is the index of the $l$-th sampled trajectory
Compute the average mini-batch loss and update $\{\mathbf{s}_{i_l-T}, \mathbf{s}_{i_l}\}_{l=1}^{B}$, $\mathbf{W}$ and $\boldsymbol{\beta}$
Update $\{\mathbf{s}_{i_l-T}, \mathbf{s}_{i_l}\}_{l=1}^{B}$ in the buffer
end for
end if
end for

algorithm can be written as an instance of that framework, and to show that each stochastic gradient update is unbiased. This convergence result, nonetheless, is key, as it suggests that FPP is a sound strategy for using replay with RNNs. Previous attempts to use replay for RNNs, in reinforcement learning, were not able to show convergence (Kapturowski et al., 2019), which is to be expected, as truncated BPTT updates on a buffer may not be sound.

Additionally, we show that as $\lambda$ approaches infinity, the set of stationary points of the FPP objective approaches the set of stationary points of the RNN objective.
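For reference, the online loop of Algorithm 1 can be sketched in code. This is a deliberately simplified illustration, not the full algorithm: it performs only the closed-form state update $\nabla_{\mathbf{s}_i} L_i = \lambda(\mathbf{s}_i - \hat{\mathbf{s}}_i)$ on a single sampled index, omits the $\mathbf{W}$ and $\boldsymbol{\beta}$ updates and mini-batches, and assumes a tanh cell:

```python
import numpy as np
from collections import deque

def f_W(W, s, o):
    # tanh RNN cell; f_W in the algorithm is generic
    return np.tanh(W[0] @ s + W[1] @ o)

def fpp_state_update(W, buffer, T, lam=1.0, lr=0.5, rng=None):
    """One simplified FPP update: sample i, roll T steps forward from the
    stored s_{i-T}, and move s_i toward the rollout using the closed-form
    gradient lam * (s_i - s_hat). W and beta updates are omitted."""
    i = int(rng.integers(T, len(buffer)))    # index with a full T-step history
    s_hat = buffer[i - T][0]                 # stored state s_{i-T}
    for t in range(i - T + 1, i + 1):        # roll through o_{i-T+1}, ..., o_i
        s_hat = f_W(W, s_hat, buffer[t][1])
    s_i, o_i, y_i = buffer[i]
    buffer[i] = (s_i - lr * lam * (s_i - s_hat), o_i, y_i)  # update in buffer

rng = np.random.default_rng(0)
k, d, N, T = 4, 3, 50, 2
W = (0.1 * rng.standard_normal((k, k)), 0.1 * rng.standard_normal((k, d)))
buffer = deque(maxlen=N)        # sliding-window buffer of (s, o, y) triples
s = np.zeros(k)
for t in range(30):             # the online loop: observe, store, update
    o, y = rng.standard_normal(d), float(rng.standard_normal())
    s = f_W(W, s, o)
    buffer.append((s, o, y))
    if t >= T:
        fpp_state_update(W, buffer, T, rng=rng)
```

Writing the stored states back into the buffer is the step that distinguishes FPP from simply replaying truncated BPTT updates: later sampled rollouts start from states that have already been improved.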
In our experiments, we use $\lambda = 1$, as obtaining precisely the same solutions as the RNN objective is not our goal. We nonetheless include this theoretical result for completeness, to characterize the relationship between the stationary points of the FPP objective and those of the RNN objective. The proof is similar to that for MAC-QP (Carreira-Perpiñán and Wang, 2014), with the main novelty in checking the KKT conditions for our objective and the linear independence in the Jacobian. Full proofs for both results are in Appendix A.

# 4.1 CONVERGENCE OF FPP TO A STATIONARY POINT FOR A FIXED BUFFER

Recent work uses the idea of randomized gradient descent to show convergence to a stationary point for nonconvex objectives (Ghadimi et al., 2016), as opposed to requiring typical restrictions such as convexity or the PL condition (Karimi et al., 2016). The randomized approach uses a random stopping time $R$, and characterizes the norm of the expected gradient for the variables at this random time. The variables we learn are $(\mathbf{W},\boldsymbol{\beta},\mathbf{S})\in \mathbb{R}^z$, where $z$ is the appropriate dimension.

For the proof we also require the variables to remain in a closed, convex set, to ensure that our objective is Lipschitz. To do so, we analyze our update with the addition of a projection operator onto a closed ball $C$ in $\mathbb{R}^z$ of radius $r > 0$ about the origin. The radius $r$ can be very large, and we emphasize that $C$ is only a convenience used for the theoretical analysis. In practice, we do not project our iterates.
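For a ball about the origin, this projection has a simple closed form: points inside the ball are unchanged, and points outside are rescaled onto its surface. A minimal sketch, treating the stacked variables as a single vector:

```python
import numpy as np

def project_ball(x, r):
    """Projection onto the closed ball of radius r about the origin:
    the unique nearest point of the ball to x."""
    norm = np.linalg.norm(x)
    return x if norm <= r else (r / norm) * x

x = np.array([3.0, 4.0])                              # norm 5
assert np.allclose(project_ball(x, 10.0), x)          # inside: unchanged
assert np.allclose(project_ball(x, 1.0), [0.6, 0.8])  # outside: rescaled
```

Because the projection is the identity whenever the iterates stay inside a large enough ball, omitting it in practice changes nothing unless the iterates diverge.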
Since $\mathbb{R}^z$ is a Hilbert space and $C$ is closed and convex, we have the existence of a unique projection operator $\Gamma$

$$
\Gamma\left(\mathbf{W}_{0}, \boldsymbol{\beta}_{0}, \mathbf{S}_{0}\right) \stackrel{\text{def}}{=} \underset{(\mathbf{W}, \boldsymbol{\beta}, \mathbf{S}) \in C}{\arg\min} \|\left(\mathbf{W}_{0}, \boldsymbol{\beta}_{0}, \mathbf{S}_{0}\right) - \left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{S}\right)\|^{2}. \tag{8}
$$

Our objective is $L(\mathbf{W},\boldsymbol{\beta},\mathbf{S}) \stackrel{\text{def}}{=} \frac{1}{n - T + 1}\sum_{i = T}^{n}L_{i}(\mathbf{W},\boldsymbol{\beta},\mathbf{S})$, for $L_{i}$ defined in Equation (6), for $n > T$ samples. Each time we perform an update, we randomly sample $k_{t} \sim \text{uniform}\{T, \ldots, n\}$, inclusive of both endpoints. The update to the parameters at time $t$, for stepsize $\alpha_{t}$, is

$$
\left(\mathbf{W}_{t+1}, \boldsymbol{\beta}_{t+1}, \mathbf{S}_{t+1}\right) \stackrel{\text{def}}{=} \Gamma\left(\left(\mathbf{W}_{t}, \boldsymbol{\beta}_{t}, \mathbf{S}_{t}\right) - \alpha_{t} \nabla L_{k_{t}}\left(\mathbf{W}_{t}, \boldsymbol{\beta}_{t}, \mathbf{S}_{t}\right)\right). \tag{9}
$$

Theorem 1. Let $D$ be a Lipschitz constant of $\nabla L(\mathbf{W},\boldsymbol{\beta},\mathbf{S})$. Define probability mass functions

$$
P_{N}(k) := \frac{\alpha_{k} - D\alpha_{k}^{2}}{\sum_{j=1}^{N}\left(\alpha_{j} - D\alpha_{j}^{2}\right)}
$$

for each $N \in \mathbb{N}$. Let $R$ be distributed according to $P_N$. Assume $\alpha_t = \frac{1}{2D}$ for all $t$ and that we perform $N$ stochastic updates. Write $x_R = (\mathbf{W}_R, \boldsymbol{\beta}_R, \mathbf{S}_R)$. Then

$$
\mathbb{E}\left[\frac{1}{\alpha_{R}^{2}}\left\|\Gamma(\alpha_{R}\nabla L(x_{R}))\right\|^{2}\right] = \mathcal{O}\left(\frac{1}{N}\right).
$$

# 4.2 RECOVERING RNN SOLUTIONS

Consider the standard RNN problem,

$$
\min_{\boldsymbol{\beta}, \mathbf{W}, \mathbf{s}_{0}} E(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0}) \quad \text{for } E(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0}) \stackrel{\text{def}}{=} \sum_{i=1}^{n} \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\dots f_{\mathbf{W}}\left(f_{\mathbf{W}}\left(\mathbf{s}_{0}, \mathbf{o}_{1}\right), \mathbf{o}_{2}\right), \dots, \mathbf{o}_{i}\right); \mathbf{y}_{i}\right) \tag{10}
$$

where we also optimize over $\mathbf{s}_0$. Our goal is to show that, for increasing $\lambda$, the set of stationary points of the FPP objective in Equation (5) approaches the set of stationary points of the RNN objective in Equation (10). We assume $T = 1$ in our analysis of FPP.

Theorem 2. Assume we have a positive, increasing sequence $\{\lambda_k\} \to \infty$, a non-negative sequence $\{\epsilon_k\} \to 0$, and a sequence of points $\{(\mathbf{W}_k,\boldsymbol{\beta}_k,\mathbf{S}_k)\}$ such that $\| \nabla L((\mathbf{W}_k,\boldsymbol{\beta}_k,\mathbf{S}_k);\lambda_k)\| \leq \epsilon_k$ for

$$
L((\mathbf{W}_{k}, \boldsymbol{\beta}_{k}, \mathbf{S}_{k}); \lambda_{k}) \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=1}^{n} \left[ \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_{i}\right); \mathbf{y}_{i}\right) + \frac{\lambda_{k}}{2} \| \mathbf{s}_{i} - f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_{i}\right) \|_{2}^{2} \right] \tag{11}
$$

Assume further that $\{(\mathbf{W}_k,\boldsymbol{\beta}_k,\mathbf{S}_k)\}$ has a convergent subsequence $\{(\mathbf{W}_{k_i},\boldsymbol{\beta}_{k_i},\mathbf{S}_{k_i})\}$ with limit $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{S}^{*})$.
Then $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{S}^{*})$ is a KKT point of the constrained FPP objective (see (12)) and $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ is a KKT point of the RNN objective (10). Further, if $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{S}^{*})$ is a local minimum of the constrained FPP objective, then $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ is a local minimum of (10).

# 5 EXPERIMENTAL RESULTS

We designed a sequence of experiments on real and synthetic problems to evaluate our new method against several common baselines, and to highlight the robustness of our method to different truncation lengths, buffer sizes and numbers of updates. In particular, we compare against (1) T-BPTT with a variety of truncation lengths greater and smaller than the temporal delay required to solve each problem; (2) No-Overlap T-BPTT, a common variant of T-BPTT that updates on disjoint partitions of the data; and (3) FPP without the state update, which is similar to the Stored State T-BPTT algorithm (Kapturowski et al., 2019). We begin by describing the problems we used to evaluate our methods, and why they were chosen. Unless otherwise stated, we report average performance over all training steps (online performance), averaged over 30 independent runs.

Simulation Problems We used two small simulation problems to highlight the robustness of each method to increasing temporal delay in online training. The first task is a simple ring of states, called Cycle World. On each timestep the agent deterministically transitions to the next state in the ring. The agent's observation is zero in every state, except the last. The agent's objective is to predict the next observation, which is difficult without a memory of length equal to the length of the cycle. With a shorter memory, the agent cannot tell when the last non-zero observation occurred, which is essential for predicting the next observation.
This task has been used extensively in benchmarking k-Markov methods, POMDPs, and predictive state representations (Tanner and Sutton, 2005). The complexity of the task can be easily varied, and its determinism ensures that variance does not introduce confounding factors. At each time step, we measure the prediction accuracy for the next observation.

We also experimented with a stochastic prediction task, where correct prediction requires remembering two independent observations from the past. In particular, the target on the next timestep is probabilistically dependent on the one-dimensional observation 15 timesteps ago and 30 timesteps ago. The dynamics are summarised in Table 1, in Appendix C. This task is called Stochastic World.

For this problem, a cross-entropy loss of 0.66 or higher indicates that the learned state did not capture the observation from either 15 or 30 steps in the past. If the state captures the observation from 15 time-steps ago, the cross-entropy loss is about 0.51. Optimal performance in this problem results in a cross-entropy loss of about 0.46. Like Cycle World, Stochastic World requires a long and detailed memory of past observations, but the stochastic nature of the target poses an additional challenge.

Real Datasets We also performed experiments on two fixed datasets, to gain insights into how each method performs on better-known benchmark tasks. In both cases the data was processed, and performance evaluated, in an online fashion. The first problem is Sequential MNIST. The objective is to classify numerical digits based on a stream of pixel inputs. On each timestep the input is one row (1x28) of the image, and the target is the label of the image. We used an RNN architecture with 512 hidden units, as in previous work (Arjovsky et al., 2015). It is not possible to predict the image's label based on only a few rows, so we wait until 15 steps (corresponding to 15x28 pixels) to begin measuring the error.
Here, we report these incorrect predictions over the last 15 time-steps of every image. We ran this on 1000 images, which corresponds to 28000 steps.

Finally, we also include results on a character prediction problem using the Penn Treebank dataset. This problem is relevant because language modelling remains an important application of recurrent learning systems, and robust performance on this dataset can provide insight into the utility of our new method in application. We used a vocabulary size of 10000. The target loss function used here is a weighted cross-entropy loss for a sequence of logits. We used an LSTM with 200 hidden nodes, as this architecture was found to perform well in previous work (Zaremba et al., 2014).

Comparison to T-BPTT We compare FPP to T-BPTT for varying truncation levels. For all the algorithms, we used a constant buffer size of 100 and the same trajectory length $T$ for both T-BPTT (overlap and no-overlap versions) and FPP. All algorithms use $O(T)$ computation per step. For overlap T-BPTT, we employ T-BPTT online by taking each observation and updating with respect to the loss at that time-step, using a $T$-step truncated gradient. The no-overlap version of T-BPTT performs a batch update for every $T$ observations, such that these observations do not overlap.

Additionally, we include UORO (Tallec and Ollivier, 2017) as another baseline. UORO uses an unbiased rank-1 approximation of RTRL. It is a relatively new method for training RNNs online, and has only been tested on small-scale datasets. In our experiments, we include memory-1 and rank-1 UORO in CycleWorld and StochasticWorld.

We first compare the performance of FPP and T-BPTT on CycleWorld with varying $p$. We expect T-BPTT to degrade with $T$ less than the dependence back in time (the length of the cycle $p$); we therefore test both $T = p$ and $T = p/2$ for increasing $p$.
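For concreteness, the Cycle World stream for a cycle of length $p$ can be sketched as below; this is our reconstruction from the description above (observation 1 only in the last state, target equal to the next observation), not the authors' code:

```python
def cycle_world(p, steps):
    """Stream (observation, target) pairs from a deterministic p-state ring:
    the observation is 1 only in the last state, and the target is the next
    observation, so predicting it requires a memory of length p."""
    for t in range(steps):
        obs = 1.0 if t % p == p - 1 else 0.0
        target = 1.0 if (t + 1) % p == p - 1 else 0.0
        yield obs, target

stream = list(cycle_world(p=4, steps=8))
assert [o for o, _ in stream] == [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
# each target equals the following observation
assert all(y == o_next for (_, y), (o_next, _) in zip(stream, stream[1:]))
```

Any predictor whose memory is shorter than $p$ sees only zeros between the spikes, which is why truncation below the cycle length is expected to fail here.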
To make the results comparable across $p$, we report performance as the ratio to a simple baseline that predicts 0 at every time step. From Figure 2, we can see that FPP is more robust to $T$, whereas T-BPTT with $T = p/2$ performs poorly even when given more data (Figure 2(b)). In early learning, with fewer samples, FPP has an even clearer advantage. Even though T-BPTT can eventually learn optimal predictions for $T = p$, it takes longer than FPP, which learns near-optimal predictions in early learning (Figure 2(a)).

We additionally compare FPP and the two variants of T-BPTT across all four problems, under different settings of $T$, shown in Figure 3. Across all problems, FPP outperforms the other two for every $T$, except $T = 1$ in CycleWorld, where all three methods perform similarly. The performance of FPP is notably better for smaller $T$, as compared to T-BPTT. For example, in Figure 3(b) 20-BPTT has a high loss and is unable to learn both dependencies, whereas FPP with $T = 20$ performs almost as well as 40-BPTT. Similar conclusions can be made for $T \in \{3,5\}$ in (a), $T \in \{10,15,20,30\}$ in (b), $T \in \{7,14,21,28\}$ in (c) and $T \in \{1,5,10,20\}$ in (d).

Benefits of mini-batch updates and multiple updates per step One of the advantages of using a buffer is the ability to perform mini-batch updates and multiple updates per step. We evaluate the performance of FPP, with and without state updates, using $M$ updates per step and a mini-batch of size $B$. We show the performance with varying $T$. To show the effect of multiple updates, we fix $B$ and vary $M \in \{1,2,4,8,16\}$. To show the effect of mini-batch updates, we fix $M$ and vary $B \in \{1,2,4,8,16\}$. We use a buffer size of 1000 and 10000 training steps.

We also include FPP without state updating, to determine if the benefits of FPP are mainly due to using a buffer rather than due to the new objective of learning explicit state variables.
We particularly expect FPP to outperform FPP without state updating under more updates per step, because we showed convergence for FPP on a fixed buffer, whereas no such result exists for FPP without state updating. Here, the buffer is not fixed, but performing more updates per step should move the FPP solution closer to a stationary point of the current buffer.

![](images/1db41dd9413854641b87bfd955262139df68aa693c34ec52713bf839d6ef5d17.jpg)
(a) Early Learning (2500 steps)

![](images/562da9e3f897fc81b38176dfb0c14f619c02bce5e44a24fa6916301596368648.jpg)
(b) Learning with More Data (15000 steps)

Figure 2: The ratio error of each algorithm with respect to the baseline of predicting 0 at every time step is our measure of performance. For all values of p, FPP is more robust to T, especially for larger p. The numbers are averages over 30 runs, with standard error bars.

![](images/bd8ab50f84b7e32629a26e4c764b866d74088d3ac513e15f87119984dde4edba.jpg)
(a) CycleWorld

![](images/3a9f96c4ac1e64d8c98b1e4d1be7c8026168f66fe4c40d0cf8be0eba55e32c83.jpg)
(b) StochasticWorld

![](images/e0cb6d657170796dc825dc22eda93af8ecada8021c9dce4af75d25c09421e726.jpg)
(c) Sequential MNIST

![](images/5c96c5a5b6fbb4b625c703b443488d42380754915c2f458dcd3f345a573dd314.jpg)
(d) Penn-Tree Bank

Figure 3: Average online performance for FPP (red), T-BPTT (orange) and No-Overlap T-BPTT (blue). Across all domains, FPP is more robust to T, and it does much better than T-BPTT, especially for small T. The numbers are averages over 30 runs, with one standard error, with (a) run for 5000 steps, (b) for 10000 steps, (c) for 1000 images (28000 steps) and (d) for 5000 steps (5000 points in the dataset, processed in order). FPP at $T = 20, 30, 40$ reaches a final solution with optimal performance; it is only above the second line because the plot shows average performance across all steps, rather than final performance.
Figure 4 (a) and (b) show the effect of multiple updates, and (c) and (d) the effect of mini-batch updates. For both, increasing the number of updates and the size of the mini-batch improves performance, except for a bit of overfitting we observed in Stochastic World for increasing updates $(B = 1, M = 16)$. In general, FPP can better take advantage of both multiple updates and mini-batch updating. The most noticeable gaps are for $T = 16$ and $T = 32$ in StochasticWorld and $T = 1$ and $T = 2$ in CycleWorld. The theory suggests that more updates, even with $T = 1$, should allow FPP to converge to a reasonable solution. We test this on CycleWorld (see Figure 7 in Appendix C), and find that for both a larger mini-batch and more updates, FPP can drive the error to zero, whereas FPP without state updating cannot.

![](images/f163c690791fa5ef09ea52a60bfd3d725eed165867eac40cad0728974db17d75.jpg)
(a) Multiple Updates in CycleWorld $(B = 1)$

![](images/b64d67c9297686c1c64e536ac58d59c22ab31235cb308142d71d8cc8bfff14bc.jpg)
(b) Multiple Updates in StochasticWorld $(B = 1)$

![](images/3e3487d5a896f14d244a5aee826e3e6a4629e53887340c43dc1d38b40b8a0f08.jpg)
(c) Mini-batch Updates in CycleWorld $(M = 1)$

![](images/f99a76e6c64e355872d69ef61d5760e797a0b12205ce7805ceb95fae4623aa97.jpg)
(d) Mini-batch Updates in StochasticWorld $(M = 1)$

Figure 4: The performance for an increasing number of updates (with mini-batch size $B = 1$) and an increasing mini-batch size (with number of updates $M = 1$). The numbers are averages over 30 runs with 10000 training steps. The solid line is FPP and the dashed line is FPP without state updating.

# 6 CONCLUSION

The main objective of this paper is to reformulate RNN training to explicitly learn state variables.
In particular, the goal is to investigate methods that can better distribute computation, and improve state updating without having to compute expensive—and potentially unstable—gradients back in time for each state. We introduce a new objective to explicitly learn state variables for RNNs, which breaks gradient dependence back in time. The choice of $T$ for computing gradients back in time is used only to improve training speed, rather than to effectively approximate gradients. We found that our algorithm, called FPP, was indeed more robust to $T$ than truncated BPTT was to its truncation level. We proved that our algorithm converges to a stationary point under a fixed buffer, and so is a sound approach to using a buffer to train RNNs. Further, we made simple optimization choices in this work; there are clear next steps for benefiting more from the decoupled update, such as parallelizing updates across state variables. Overall, this work provides evidence that FPP could be a promising direction for robustly training RNNs, without the need to compute or approximate long gradients back in time.

# REFERENCES

Luis B. Almeida. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In International Conference on Neural Networks, 1987.
Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. CoRR, abs/1511.06464, 2015.
Dimitri P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press, 1982.
Victor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, and Shih-Fu Chang. Skip RNN: Learning to skip state updates in recurrent neural networks. CoRR, abs/1708.06834, 2017.
Miguel Á. Carreira-Perpiñán and Weiran Wang. Distributed optimization of deeply nested systems. In International Conference on Artificial Intelligence and Statistics, 2014.

W. Chan, N. Jaitly, Q. Le, and O. Vinyals.
Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2016. +Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael J. Witbrock, Mark Hasegawa-Johnson, and Thomas S. Huang. Dilated recurrent neural networks. CoRR, abs/1710.02224, 2017. +Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. +Wojciech Marian Czarnecki, Max Jaderberg, Simon Osindero, Oriol Vinyals, and Koray Kavukcuoglu. Understanding Synthetic Gradients and Decoupled Neural Interfaces. arXiv:1411.4000v2, 2017. +Siegmund Düll, Steffen Udluft, and Volkmar Sterzing. Solving partially observable reinforcement learning problems with recurrent neural networks. In Neural Networks: Tricks of the Trade, 2012. +Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990. +Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1-2):267-305, 2016. +Akhilesh Gotmare, Valentin Thomas, Johanni Brea, and Martin Jaggi. Decoupling Backpropagation using Constrained Optimization Methods. 2018. +Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. CoRR, abs/1303.5778, 2013. +G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, Nov 2012. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997. +J J Hopfield. Neural networks and physical systems with emergent collective computational abilities. 
Proceedings of the National Academy of Sciences, 79(8):2554-2558, 1982. +Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. In International Conference on Machine Learning, 2017. +Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, and Will Dabney. Recurrent experience replay in distributed reinforcement learning. In International Conference on Learning Representations, 2019. +Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition. In European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2016. +Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Laurent Charlin, Chris Pal, and Yoshua Bengio. Sparse attentive backtracking: Long-range credit assignment in recurrent networks. arXiv:1711.02326, 2017. +Jan Koutnik, Klaus Greff, Faustino J. Gomez, and Jürgen Schmidhuber. A clockwork RNN. CoRR, abs/1402.3511, 2014. +Renjie Liao, Yuwen Xiong, Ethan Fetaya, Lisa Zhang, KiJung Yoon, Xaq Pitkow, Raquel Urtasun, and Richard S. Zemel. Reviving and improving recurrent back-propagation. In International Conference on Machine Learning, 2018. +Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. CoRR, abs/1612.01887, 2016. + +Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille. Explain images with multimodal recurrent neural networks. CoRR, abs/1410.1090, 2014. +Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron C. Courville, and Yoshua Bengio. SampleRNN: An unconditional end-to-end neural audio generation model. CoRR, abs/1612.07837, 2016. +Yajie Miao, Mohammad Gowayyed, and Florian Metze. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding.
CoRR, abs/1507.08240, 2015. +Asier Mujika, Florian Meier, and Angelika Steger. Approximating real-time recurrent learning with random Kronecker factors. In Advances in Neural Information Processing Systems, pages 6594-6603, 2018. +James M. Murray. Local online learning in recurrent networks with random feedback. eLife, 8:e43299, 2019. +Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased LSTM: Accelerating recurrent network training for long or event-based sequences. CoRR, abs/1610.09513, 2016. +Yann Ollivier and Guillaume Charpiat. Training recurrent networks online without backtracking. arXiv, 2015. +Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, 2013. +B. A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 1995. +Fernando J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 1987. +Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008. +Benjamin Scellier and Yoshua Bengio. Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in Computational Neuroscience, 2017. +Corentin Tallec and Yann Ollivier. Unbiased online recurrent optimization. arXiv:1702.05043, 2017. +Brian Tanner and Richard S. Sutton. TD(lambda) networks: Temporal-difference networks with eligibility traces. In International Conference on Machine Learning, 2005. +Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training neural networks without gradients: A scalable ADMM approach. In International Conference on Machine Learning, 2016. +Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014. +P. J. Werbos.
Backpropagation through time: What it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, Oct 1990. +Ronald J. Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 1990. +Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1989. +Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014. + +# A FULL PROOFS + +# A.1 CONVERGENCE ON A FIXED BUFFER + +At first glance, the update (9) is different from the update in Ghadimi et al. (2016, p. 276). Nevertheless, the following lemma guarantees that they are indeed the same. + +Lemma 1. Let $f$ be $L$ or a stochastic sample of $L$ and let $\alpha > 0$ . Write $x = (\mathbf{W},\boldsymbol{\beta},\mathbf{S})$ . Then + +$$ +\underset{u \in C}{\arg\min} \left\{\langle \nabla f(x), u \rangle + \frac{1}{2\alpha} \|x - u\|^2 \right\} = \underset{u \in C}{\arg\min} \left\{\|u - (x - \alpha \nabla f(x))\|^2 \right\} =: \Gamma(x - \alpha \nabla f(x)). +$$ + +Proof. The proof is a straightforward calculation.
+ +$$ +\begin{array}{rl} \underset{u \in C}{\arg\min} \left\{\|u - (x - \alpha \nabla f(x))\|^2 \right\} &= \underset{u \in C}{\arg\min} \left\{\|u - x\|^2 + \alpha^2 \|\nabla f(x)\|^2 + 2\langle u - x, \alpha \nabla f(x) \rangle \right\} \\ &= \underset{u \in C}{\arg\min} \left\{\|u - x\|^2 + 2\alpha \langle u, \nabla f(x) \rangle \right\} \\ &= \underset{u \in C}{\arg\min} \left\{\langle \nabla f(x), u \rangle + \frac{1}{2\alpha} \|x - u\|^2 \right\} \end{array} +$$ + +$\blacksquare$ + +Our goal is to apply Corollary 3 of Ghadimi et al. (2016, p. 282). We must show that $\nabla L$ is Lipschitz on $C$ and demonstrate that Assumption A1 in Ghadimi et al. (2016, p. 268) holds. + +Lemma 2. $\nabla L$ is Lipschitz on $C$ . + +Proof. A differentiable function is Lipschitz if its gradient is bounded. Since $L$ is smooth, the second derivatives of $L$ are continuous, and continuous functions on the compact set $C$ are bounded; hence $\nabla L$ is Lipschitz on $C$ . + +Lemma 3. $\nabla L_{k}$ is an unbiased estimate of $\nabla L$ , where $k \sim$ uniform- $(T, n)$ . + +Proof. The terms in $\nabla L_{k}$ corresponding to the gradients of $\mathbf{W}$ and $\boldsymbol{\beta}$ are exactly $\nabla_{\mathbf{W}}L$ and $\nabla_{\boldsymbol{\beta}}L$ in expectation, given that $k \sim$ uniform- $(T,n)$ . + +Let us consider the gradient elements corresponding to the parameters $s_{0:n}$ . For shorthand, define $[a:b] := \{a, a + 1, \dots, b - 1, b\}$ . Define $P_0 := [0:n - T]$ , $P_1 := [T:n]$ . If $j \in P_0$ , then $\mathbf{s}_j$ predicts future states. If $j \in P_1$ , then $\mathbf{s}_j$ is predicted by other states in the regularizer terms of $L$ . Note that $P_0$ and $P_1$ are not disjoint. First, we calculate.
+ +$$ +\nabla_{\mathbf{s}_j} L(\mathbf{W}, \boldsymbol{\beta}, \mathbf{S}) := \left\{ \begin{array}{ll} \frac{1}{n - T + 1} (\nabla_{\hat{\mathbf{s}}_{j+T}} \ell_{\boldsymbol{\beta}}(\hat{\mathbf{s}}_{j+T}; y_{j+T}) - \lambda(\mathbf{s}_{j+T} - \hat{\mathbf{s}}_{j+T}))^{\top} \nabla_{\mathbf{s}_j} \hat{\mathbf{s}}_{j+T} & \text{if } j \in P_0 \cap P_1^{\complement} \\ \frac{1}{n - T + 1} \lambda(\mathbf{s}_j - \hat{\mathbf{s}}_j) & \text{if } j \in P_0^{\complement} \cap P_1 \\ \frac{1}{n - T + 1} \left[ \lambda(\mathbf{s}_j - \hat{\mathbf{s}}_j) + (\nabla_{\hat{\mathbf{s}}_{j+T}} \ell_{\boldsymbol{\beta}}(\hat{\mathbf{s}}_{j+T}; y_{j+T}) - \lambda(\mathbf{s}_{j+T} - \hat{\mathbf{s}}_{j+T}))^{\top} \nabla_{\mathbf{s}_j} \hat{\mathbf{s}}_{j+T} \right] & \text{if } j \in P_0 \cap P_1 \\ 0 & \text{if } j \in P_0^{\complement} \cap P_1^{\complement} \end{array} \right. +$$ + +If $j \in P_0 \cap P_1^{\complement}$ , then $\mathbf{s}_j$ does not show up as the target (i.e., the term that is not $\hat{\mathbf{s}}_k$ ) in any regularizer term of $L$ . Hence, $\nabla_{\mathbf{s}_j} L_k$ is zero with probability $1 - \frac{1}{n - T + 1}$ , and is $(\nabla_{\hat{\mathbf{s}}_{j + T}} \ell_{\boldsymbol{\beta}}(\hat{\mathbf{s}}_{j + T}; y_{j + T}) - \lambda(\mathbf{s}_{j + T} - \hat{\mathbf{s}}_{j + T}))^{\top} \nabla_{\mathbf{s}_j} \hat{\mathbf{s}}_{j+T}$ with probability $\frac{1}{n - T + 1}$ . + +If $j \in P_0^{\complement} \cap P_1$ , then $\mathbf{s}_j$ only shows up as a target in a regularizer term, so $\nabla_{\mathbf{s}_j} L_k$ is zero with probability $1 - \frac{1}{n - T + 1}$ and is otherwise $\lambda(\mathbf{s}_j - \hat{\mathbf{s}}_j)$ .
+ +If $j \in P_0 \cap P_1$ , then $\nabla_{\mathbf{s}_j} L_k$ is zero with probability $1 - \frac{2}{n - T + 1}$ , is $\lambda(\mathbf{s}_j - \hat{\mathbf{s}}_j)$ with probability $\frac{1}{n - T + 1}$ , and is $(\nabla_{\hat{\mathbf{s}}_{j + T}}\ell_{\boldsymbol{\beta}}(\hat{\mathbf{s}}_{j + T};y_{j + T}) - \lambda(\mathbf{s}_{j + T} - \hat{\mathbf{s}}_{j + T}))^{\top} \nabla_{\mathbf{s}_j}\hat{\mathbf{s}}_{j+T}$ with probability $\frac{1}{n - T + 1}$ . + +The case $j \in P_0^{\complement} \cap P_1^{\complement}$ is trivial. Consequently, $\mathbb{E}[\nabla_{\mathbf{s}_j}L_k] = \nabla_{\mathbf{s}_j}L$ for all $j \in \{0,\dots,n\}$ . + +Lemma 4. The variance of $\nabla L_{k}$ is bounded on $C$ . + +Proof. This follows because $\nabla L_{k}$ and $\nabla L$ are both continuous functions on the compact set $C$ , and thus bounded. + +Theorem 1. Let $D$ be a Lipschitz constant of $\nabla L(\mathbf{W},\boldsymbol{\beta},\mathbf{S})$ . Define probability mass functions + +$$ +P_N(k) := \frac{\alpha_k - D \alpha_k^2}{\sum_{j = 1}^{N} (\alpha_j - D \alpha_j^2)} +$$ + +for each $N \in \mathbb{N}$ . Let $R$ be distributed according to $P_N$ . Assume $\alpha_t = \frac{1}{2D}$ for all $t$ and that we perform $N$ stochastic updates. Write $x_R = (\mathbf{W}_R, \boldsymbol{\beta}_R, \mathbf{S}_R)$ . Then + +$$ +\mathbb{E}\left[\frac{1}{\alpha_R^2} \left\|\Gamma(\alpha_R \nabla L(x_R))\right\|^2\right] = \mathcal{O}\left(\frac{1}{N}\right). +$$ + +Proof. The $g_{X,R}$ (defined in Ghadimi et al. (2016, p. 271, 274)) in Corollary 3 of Ghadimi et al. (2016, p. 282) corresponds in our case to the following. + +$$ +\begin{array}{rl} g_{X,R} &:= \frac{1}{\alpha_R}\left(x_R - \underset{u \in C}{\arg\min}\left\{\langle \nabla L(x_R), u \rangle + \frac{1}{2\alpha_R} \|x_R - u\|^2 \right\}\right) \\ &= \frac{1}{\alpha_R}\left(x_R - \Gamma(x_R - \alpha_R \nabla L(x_R))\right). \end{array} +$$ + +In the last line, we use Lemma 1.
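As an aside, the identity in Lemma 1 is easy to confirm numerically when $C$ is a box, since the Euclidean projection onto a box is a coordinate-wise clip. The box bounds, dimension, and step size in this sketch are illustrative choices, not quantities from the paper:

```python
import numpy as np

# Lemma 1 check on the box C = [lo, hi]^d, whose Euclidean projection is a clip.
# All constants here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
d, alpha = 5, 0.3
x = rng.normal(size=d)
g = rng.normal(size=d)  # stands in for the gradient nabla f(x)

# Right-hand side of Lemma 1: project the gradient step onto C.
proj = np.clip(x - alpha * g, lo, hi)

# Left-hand side: argmin_{u in C} <g, u> + ||x - u||^2 / (2 alpha).
# On a box the objective separates over coordinates, so a fine per-coordinate
# grid search recovers the minimizer up to the grid resolution.
grid = np.linspace(lo, hi, 200001)
prox = np.empty(d)
for i in range(d):
    obj = g[i] * grid + (x[i] - grid) ** 2 / (2 * alpha)
    prox[i] = grid[np.argmin(obj)]

assert np.abs(proj - prox).max() < 1e-4  # the two argmins coincide
```

The agreement holds for any $\alpha > 0$: completing the square in the proximal objective shows its unconstrained minimizer is $x - \alpha g$, and the box constraint simply clips it.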
Since we project based on squared norm distance in (8) (corresponding to $\omega(x) = \frac{1}{2} \|x\|_2^2$ in Ghadimi et al. (2016)), the $\alpha$ in Ghadimi et al. (2016, p. 271) (not our step-size $\alpha_t$ ) can be set to 1. + +After applying our Lemma 2, Lemma 3, and Lemma 4, we have from Corollary 3 of Ghadimi et al. (2016, p. 282) that + +$$ +\mathbb{E}\left[\frac{1}{\alpha_R^2} \left\|\Gamma(x_R - \alpha_R \nabla L(x_R)) - x_R\right\|^2\right] = \mathcal{O}\left(\frac{1}{N}\right). +$$ + +The only thing left to check is that $\Gamma(x_R - \alpha_R \nabla L(x_R)) - x_R = \Gamma(\alpha_R \nabla L(x_R))$ : + +$$ +\begin{array}{rl} \Gamma(x_R - \alpha_R \nabla L(x_R)) - x_R &= \underset{u \in C}{\arg\min}\left\{\left\|u - (x_R - \alpha_R \nabla L(x_R))\right\|^2 \right\} - x_R \\ &= \underset{v \in C - x_R}{\arg\min}\left\{\|v + \alpha_R \nabla L(x_R)\|^2 \right\} \\ &=: \Gamma(\alpha_R \nabla L(x_R)), \end{array} +$$ + +where, with a slight abuse of notation, $\Gamma(\alpha_R \nabla L(x_R))$ denotes the projection of $-\alpha_R \nabla L(x_R)$ onto the shifted set $C - x_R$ . $\blacksquare$ + +# A.2 RECOVERY OF RNN SOLUTIONS + +Recall our goal is to compare to the RNN solutions of (10). + +$$ +E(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0) := \sum_{i = 1}^{n} \ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\dots f_{\mathbf{W}}\left(f_{\mathbf{W}}\left(\mathbf{s}_0, \mathbf{o}_1\right), \mathbf{o}_2\right), \dots, \mathbf{o}_i\right); \mathbf{y}_i\right) \tag{10 revisited} +$$ + +$$ +\min_{\boldsymbol{\beta}, \mathbf{W}, \mathbf{s}_0} E(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0), +$$ + +Let us also write a constrained version of the above problem, which we will use in the analysis of FPP.
+ +$$ +\begin{array}{l} E _ {f p p} \left(\mathbf {W}, \boldsymbol {\beta}, \mathbf {s} _ {0}, \dots , \mathbf {s} _ {n}\right) := \sum_ {i = 1} ^ {n} \ell_ {\boldsymbol {\beta}} \left(f _ {\mathbf {W}} \left(\mathbf {s} _ {i - 1}, \mathbf {o} _ {i}\right); \mathbf {y} _ {i}\right) \tag {12} \\ \mathrm {s . t .} \forall 1 \leq i \leq n, f _ {\mathbf {W}} \left(\mathbf {s} _ {i - 1}, \mathbf {o} _ {i}\right) = \mathbf {s} _ {i} \\ \min _ {\mathbf {W}, \boldsymbol {\beta}, \mathbf {s} _ {0: n}} E _ {f p p} (\mathbf {W}, \boldsymbol {\beta}, \mathbf {s} _ {0}, \dots , \mathbf {s} _ {n}) \\ \end{array} +$$ + +The idea is that FPP can be viewed as a way to solve the problem (12) and thus (10) through quadratic regularization. + +We will use $\mathbf{s}_{0:n}$ as shorthand for $\{\mathbf{s}_0,\dots ,\mathbf{s}_n\}$ , which in the main paper we labeled as $\mathbf{S}$ , but for this proof it will be convenient to use explicit variables. Define the feasible set of (12) as + +$$ +\Omega := \left\{\left(\mathbf {W}, \boldsymbol {\beta}, \mathbf {s} _ {0: n}\right): \mathbf {W} \in \mathbb {R} ^ {w}; \mathbf {s} _ {i} \in \mathbb {R} ^ {k}; \boldsymbol {\beta} \in \mathbb {R} ^ {b}; \forall 1 \leq i \leq n, \mathbf {s} _ {i} = f _ {\mathbf {W}} \left(\mathbf {s} _ {i - 1}, \mathbf {o} _ {i}\right) \right\}. +$$ + +Proposition 1. Let $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ be a local min of (10). For $1\leq i\leq n$ , define recursively $\mathbf{s}_i^* \coloneqq f_{\mathbf{W}^*}(\mathbf{s}_{i - 1}^*,\mathbf{o}_i)$ . Then $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0:n}^{*})$ is a local min of (12). + +Let $(\mathbf{W}^{*},\beta^{*},\mathbf{s}_{0:n}^{*})$ be a local min of (12). Then $(\mathbf{W}^{*},\beta^{*},\mathbf{s}_{0}^{*})$ is a local min of (10). + +Proof. 
First, let $N \subset \mathbb{R}^{w + b + k}$ be a neighbourhood of $(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*)$ such that $\forall (\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0) \in N$ , we have + +$$ +E\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*\right) \leq E\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0\right). +$$ + +Without loss of generality, we may take $N$ to be open; otherwise, by the definition of a neighbourhood, we may take a smaller open set around $(\mathbf{W}^*,\boldsymbol{\beta}^*,\mathbf{s}_0^*)$ and call that set $N$ . + +Let $\mathbf{s}_i^*$ be defined as above. Define $M := N \times \mathbb{R}^{nk}$ , which is an open neighbourhood of $(\mathbf{W}^*,\boldsymbol{\beta}^*,\mathbf{s}_{0:n}^*)$ since $N$ is open. Let $(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_{0:n}) \in M \cap \Omega$ . Note that $(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_0) \in N$ . By the definition of $\Omega$ , we have that $f_{\mathbf{W}}(\mathbf{s}_{i - 1},\mathbf{o}_i) = \mathbf{s}_i$ . Hence, $E_{fpp}(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_{0:n}) = E(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_0)$ . + +By the definitions of (12) and (10), we have $E(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*) = E_{fpp}(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*)$ . Finally, + +$$ +E_{fpp}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right) = E\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*\right) \leq E(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0) = E_{fpp}\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0:n}\right). +$$ + +For the second part of the proof, assume $(\mathbf{W}^*,\boldsymbol{\beta}^*,\mathbf{s}_{0:n}^*)$ is a local min of (12), meaning there is a neighbourhood $M \subset \mathbb{R}^{w + b + (n + 1)k}$ of $(\mathbf{W}^*,\boldsymbol{\beta}^*,\mathbf{s}_{0:n}^*)$ such that for every $(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_{0:n}) \in M \cap \Omega$ , + +$$ +E_{fpp}(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*) \leq E_{fpp}(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0:n}). +$$ + +Similarly, without loss of generality, we can assume that $M$ is an open ball, so we may write $M = B_{\epsilon}(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*)$ for some $\epsilon > 0$ . + +We will construct an open set $N \subset \mathbb{R}^{w + b + k}$ such that $(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*)$ is a local min with respect to $N$ . Define the projection $\pi$ onto the first $w + b + k$ coordinates. Define $N := \pi(M \cap \Omega)$ . Let us show that $N$ is open.
+ +We will write $f_{\mathbf{W}}(\mathbf{s}_{0:n - 1},\mathbf{o}_{1:n})$ to mean $\{f_{\mathbf{W}}(\mathbf{s}_0,\mathbf{o}_1),\dots,f_{\mathbf{W}}(f_{\mathbf{W}}(\dots(\mathbf{s}_0,\mathbf{o}_1),\mathbf{o}_2),\dots,\mathbf{o}_n)\}$ . We can write $N$ as + +$$ +\begin{array}{rl} N &= \left\{\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0\right): \left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0:n}\right) \in \Omega \cap B_{\epsilon}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right)\right\} \\ &= \left\{\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0\right): \left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0, f_{\mathbf{W}}\left(\mathbf{s}_{0:n - 1}, \mathbf{o}_{1:n}\right)\right) \in B_{\epsilon}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right)\right\} \\ &= \left\{\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0\right): \left\|\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0, f_{\mathbf{W}}\left(\mathbf{s}_{0:n - 1}, \mathbf{o}_{1:n}\right)\right) - \left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right)\right\| < \epsilon\right\} \end{array} +$$ + +On the second line, we used the fact that $\mathbf{s}_i = f_{\mathbf{W}}(\mathbf{s}_{i - 1},\mathbf{o}_i)$ in $\Omega$ . Since the norm and $f$ are continuous and $(-\infty,\epsilon)$ is open, we have that $N$ , a continuous preimage of an open set, is open.
+ +Now, let $(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_0) \in N$ . By the construction of $N$ , there exist $\mathbf{s}_{1:n}$ such that $(\mathbf{W},\boldsymbol{\beta},\mathbf{s}_{0:n}) \in M \cap \Omega$ . Then + +$$ +E\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*\right) = E_{fpp}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right) \leq E_{fpp}\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0:n}\right) = E\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_0\right) +$$ + +The claim follows. $\blacksquare$ + +Proposition 2. The first order KKT equations for (10) and for (12) are the same. + +Proof. Given $(\mathbf{W}^*,\boldsymbol{\beta}^*,\mathbf{s}_0^*)$ , for $1 \leq i \leq n$ define recursively $\tilde{s}_i := f_{\mathbf{W}^*}(\tilde{s}_{i - 1},\mathbf{o}_i)$ , where $\tilde{s}_0 := \mathbf{s}_0^*$ . If we write $\frac{\partial f_{\mathbf{W}^*}(\tilde{s}_l,\mathbf{o}_{l + 1})}{\partial\mathbf{s}_l}$ , for instance, this is taken to mean the gradient of $f_{\mathbf{W}^*}(\tilde{s}_l,\mathbf{o}_{l + 1})$ with respect to the function arguments corresponding to $\tilde{s}_l$ . Furthermore, when writing $\frac{\partial f_{\mathbf{W}^*}(\tilde{s}_j,\mathbf{o}_{j + 1})}{\partial\mathbf{W}}$ , we only mean the gradient with respect to the parameters of the outer $f_{\mathbf{W}^*}$ , and not with respect to any of the parameters of $\tilde{s}_j$ .
+ +Using the chain rule for the first and third equations below, the first order KKT conditions for (10) are given by + +$$ +\frac{\partial E\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*\right)}{\partial\mathbf{W}} = \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right); y_1\right)}{\partial f_{\mathbf{W}}} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right)}{\partial\mathbf{W}} + \sum_{j = 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\tilde{s}_j, \mathbf{o}_{j + 1}\right); y_{j + 1}\right)}{\partial f_{\mathbf{W}}} \left(\frac{\partial f_{\mathbf{W}^*}(\tilde{s}_j, \mathbf{o}_{j + 1})}{\partial\mathbf{W}} + \sum_{i = 1}^{j} \left(\prod_{l = i}^{j} \frac{\partial f_{\mathbf{W}^*}(\tilde{s}_l, \mathbf{o}_{l + 1})}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}(\tilde{s}_{i - 1}, \mathbf{o}_i)}{\partial\mathbf{W}}\right) = 0 \tag{13} +$$ + +$$ +\frac{\partial E\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*\right)}{\partial\boldsymbol{\beta}} = \sum_{i = 1}^{n} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\tilde{s}_{i - 1}, \mathbf{o}_i\right); \mathbf{y}_i\right)}{\partial\boldsymbol{\beta}} = 0 \tag{14} +$$ + +$$ +\frac{\partial E\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_0^*\right)}{\partial\mathbf{s}_0} = \left(\frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right); y_1\right)}{\partial f_{\mathbf{W}}} + \sum_{i = 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\tilde{s}_i, \mathbf{o}_{i + 1}\right); y_{i + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = 1}^{i} \frac{\partial f_{\mathbf{W}^*}(\tilde{s}_l, \mathbf{o}_{l + 1})}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}(\mathbf{s}_0^*, \mathbf{o}_1)}{\partial\mathbf{s}_0} = 0 \tag{15} +$$ + +The Lagrangian for (12) is + +$$ +\mathcal{L}_{fpp}(\mathbf{W}, \boldsymbol{\beta}, \mathbf{s}_{0:n}) = \sum_{i = 1}^{n} \left[\ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\mathbf{s}_{i - 1}, \mathbf{o}_i\right); \mathbf{y}_i\right) - \lambda_i^{T}\left(f_{\mathbf{W}}\left(\mathbf{s}_{i - 1}, \mathbf{o}_i\right) - \mathbf{s}_i\right)\right], \tag{16} +$$ + +where $\lambda_i \in \mathbb{R}^k$ for $1 \leq i \leq n$ are Lagrange multipliers. We define $\lambda_0 := 0$ for convenience.
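Before comparing the KKT systems formally, the mechanism can be sketched numerically: for a scalar RNN, choosing the multipliers $\lambda_i$ to zero the $\partial / \partial \mathbf{s}_j$ equations of $\mathcal{L}_{fpp}$ makes $\partial \mathcal{L}_{fpp} / \partial \mathbf{W}$ agree with the gradient of the unrolled objective (10). The $\tanh$ transition, the squared loss taken directly on the state (so that $\boldsymbol{\beta}$ plays no role), and all constants below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Scalar RNN s_i = tanh(w * s_{i-1} + o_i) with loss 0.5 * (s_i - y_i)^2.
# All constants are illustrative choices.
rng = np.random.default_rng(1)
n, w, s0 = 6, 0.7, 0.3
o = rng.normal(size=n + 1)  # o[1..n]; o[0] unused (1-based indexing)
y = rng.normal(size=n + 1)  # targets y[1..n]

def rollout(w):
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(1, n + 1):
        s[i] = np.tanh(w * s[i - 1] + o[i])
    return s

def E(w):  # unrolled objective, as in (10)
    s = rollout(w)
    return 0.5 * np.sum((s[1:] - y[1:]) ** 2)

s = rollout(w)
dl = s - y  # dl[i] = d loss_i / d f, evaluated on the feasible trajectory

# Multipliers from the d/ds_j stationarity conditions:
# lambda_n = 0 and lambda_j = -(dl[j+1] - lambda_{j+1}) * df(s_j, o_{j+1})/ds_j,
# where df/ds_j = w * (1 - s_{j+1}^2) for the tanh transition.
lam = np.zeros(n + 1)  # lam[0] stays 0, matching the convention lambda_0 := 0
for j in range(n - 1, 0, -1):
    lam[j] = -(dl[j + 1] - lam[j + 1]) * w * (1 - s[j + 1] ** 2)

# dL_fpp/dw = sum_i (dl[i] - lambda_i) * df(s_{i-1}, o_i)/dw,
# with df/dw = s_{i-1} * (1 - s_i^2).
grad = sum((dl[i] - lam[i]) * s[i - 1] * (1 - s[i] ** 2) for i in range(1, n + 1))

# Compare against a central finite difference of the unrolled objective.
h = 1e-6
fd = (E(w + h) - E(w - h)) / (2 * h)
assert abs(grad - fd) < 1e-6
```

Unrolling the backward recursion for the multipliers recovers, in this scalar case, the closed-form expression derived below.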
The KKT equations for (16) are + +$$ +\frac{\partial\mathcal{L}_{fpp}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right)}{\partial\mathbf{W}} = \sum_{i = 1}^{n} \left(\frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right); \mathbf{y}_i\right)}{\partial f_{\mathbf{W}}} - \lambda_i^{T}\right) \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right)}{\partial\mathbf{W}} = 0 \tag{17} +$$ + +$$ +\frac{\partial\mathcal{L}_{fpp}(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*)}{\partial\boldsymbol{\beta}} = \sum_{i = 1}^{n} \frac{\partial \ell_{\boldsymbol{\beta}^*}(f_{\mathbf{W}^*}(\mathbf{s}_{i - 1}^*, \mathbf{o}_i); \mathbf{y}_i)}{\partial\boldsymbol{\beta}} = 0 +$$ + +$$ +\frac{\partial\mathcal{L}_{fpp}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right)}{\partial\mathbf{s}_j} = \left\{ \begin{array}{ll} \lambda_n^{T} & \text{if } j = n \\ \lambda_j^{T} + \left(\frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_j^*, \mathbf{o}_{j + 1}\right); y_{j + 1}\right)}{\partial f_{\mathbf{W}}} - \lambda_{j + 1}^{T}\right) \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_j^*, \mathbf{o}_{j + 1}\right)}{\partial\mathbf{s}_j} & \text{if } 0 \leq j < n \end{array} \right. = 0 \tag{18} +$$ + +$$ +\mathbf{s}_i^* = f_{\mathbf{W}^*}(\mathbf{s}_{i - 1}^*, \mathbf{o}_i), \quad \forall 1 \leq i \leq n +$$ + +First, let us find a closed-form expression for $\lambda_i$ . + +Lemma 5.
Let $0 \leq j \leq n$ . Then + +$$ +\lambda_j^{T} = -\left(\sum_{i = j}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_i^*, \mathbf{o}_{i + 1}\right); y_{i + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = j}^{i} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l}\right) +$$ + +Proof. We proceed by induction. The base case $j = n$ (where the sum is empty) and the case $j = n - 1$ are trivial. Assume the claim is true for $j = m + 1$ , with $m + 1 > 0$ . We will show the claim for $j = m$ . Using the KKT equations (18) and the induction hypothesis, + +$$ +\begin{array}{rl} \lambda_m^{T} &:= -\left(\frac{\partial \ell_{\boldsymbol{\beta}^*}(f_{\mathbf{W}^*}(\mathbf{s}_m^*, \mathbf{o}_{m + 1}); y_{m + 1})}{\partial f_{\mathbf{W}}} - \lambda_{m + 1}^{T}\right) \frac{\partial f_{\mathbf{W}^*}(\mathbf{s}_m^*, \mathbf{o}_{m + 1})}{\partial\mathbf{s}_m} \\ &= -\left(\frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_m^*, \mathbf{o}_{m + 1}\right); y_{m + 1}\right)}{\partial f_{\mathbf{W}}} + \sum_{i = m + 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_i^*, \mathbf{o}_{i + 1}\right); y_{i + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = m + 1}^{i} \frac{\partial f_{\mathbf{W}^*}(\mathbf{s}_l^*, \mathbf{o}_{l + 1})}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}(\mathbf{s}_m^*, \mathbf{o}_{m + 1})}{\partial\mathbf{s}_m} \\ &= -\frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_m^*, \mathbf{o}_{m + 1}\right); y_{m + 1}\right)}{\partial f_{\mathbf{W}}} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_m^*, \mathbf{o}_{m + 1}\right)}{\partial\mathbf{s}_m} - \sum_{i = m + 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_i^*, \mathbf{o}_{i + 1}\right); y_{i + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = m + 1}^{i} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_m^*, \mathbf{o}_{m + 1}\right)}{\partial\mathbf{s}_m} \\ &= -\left(\sum_{i = m}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_i^*, \mathbf{o}_{i + 1}\right); y_{i + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = m}^{i} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l}\right) \end{array} +$$ + +$\blacksquare$ + +Now, we will show that the two sets of KKT equations are the same. First, it is clear that the two equations involving gradients with respect to $\boldsymbol{\beta}$ , namely (14) and the first equation of (18), are the same given that the constraint must be satisfied. Now consider the equations involving gradients with respect to $\mathbf{W}$ .
+ +$$ +\begin{array}{rl} \frac{\partial\mathcal{L}_{fpp}}{\partial\mathbf{W}} &= \sum_{i = 1}^{n} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right); \mathbf{y}_i\right)}{\partial f_{\mathbf{W}}} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right)}{\partial\mathbf{W}} + \sum_{i = 1}^{n} \left(\sum_{j = i}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_j^*, \mathbf{o}_{j + 1}\right); y_{j + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = i}^{j} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right)}{\partial\mathbf{W}} \\ &= \sum_{j = 1}^{n} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_{j - 1}^*, \mathbf{o}_j\right); y_j\right)}{\partial f_{\mathbf{W}}} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_{j - 1}^*, \mathbf{o}_j\right)}{\partial\mathbf{W}} + \sum_{j = 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_j^*, \mathbf{o}_{j + 1}\right); y_{j + 1}\right)}{\partial f_{\mathbf{W}}} \sum_{i = 1}^{j} \left(\prod_{l = i}^{j} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right)}{\partial\mathbf{W}} \\ &= \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right); y_1\right)}{\partial f_{\mathbf{W}}} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right)}{\partial\mathbf{W}} + \sum_{j = 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_j^*, \mathbf{o}_{j + 1}\right); y_{j + 1}\right)}{\partial f_{\mathbf{W}}} \left(\frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_j^*, \mathbf{o}_{j + 1}\right)}{\partial\mathbf{W}} + \sum_{i = 1}^{j} \left(\prod_{l = i}^{j} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_{i - 1}^*, \mathbf{o}_i\right)}{\partial\mathbf{W}}\right) \\ &= 0. \end{array} +$$ + +By substituting in the constraint equations $\mathbf{s}_i^* = f_{\mathbf{W}^*}(\mathbf{s}_{i - 1}^*,\mathbf{o}_i)$ , this recovers exactly the gradient with respect to $\mathbf{W}$ in (13). + +Finally, consider the gradient with respect to $\mathbf{s}_0$ . + +$$ +\begin{array}{rl} \frac{\partial\mathcal{L}_{fpp}(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*)}{\partial\mathbf{s}_0} &= \lambda_0^{T} + \left(\frac{\partial \ell_{\boldsymbol{\beta}^*}(f_{\mathbf{W}^*}(\mathbf{s}_0^*, \mathbf{o}_1); y_1)}{\partial f_{\mathbf{W}}} - \lambda_1^{T}\right) \frac{\partial f_{\mathbf{W}^*}(\mathbf{s}_0^*, \mathbf{o}_1)}{\partial\mathbf{s}_0} \\ &= \left(\frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right); y_1\right)}{\partial f_{\mathbf{W}}} + \sum_{i = 1}^{n - 1} \frac{\partial \ell_{\boldsymbol{\beta}^*}\left(f_{\mathbf{W}^*}\left(\mathbf{s}_i^*, \mathbf{o}_{i + 1}\right); y_{i + 1}\right)}{\partial f_{\mathbf{W}}} \prod_{l = 1}^{i} \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_l^*, \mathbf{o}_{l + 1}\right)}{\partial\mathbf{s}_l}\right) \frac{\partial f_{\mathbf{W}^*}\left(\mathbf{s}_0^*, \mathbf{o}_1\right)}{\partial\mathbf{s}_0} \\ &= 0. \end{array} +$$ + +This matches the corresponding equation, (15). $\blacksquare$ + +Proposition 3. Let $(\mathbf{W}^*,\boldsymbol{\beta}^*,\mathbf{s}_{0:n}^*)$ be a local min of (12). Write the constraints of (12) as a vector: + +$$ +h\left(\mathbf{W}, \mathbf{s}_{0:n}\right) := \left[\left(f_{\mathbf{W}}\left(\mathbf{s}_0, \mathbf{o}_1\right) - \mathbf{s}_1\right)^{T} \quad \dots \quad \left(f_{\mathbf{W}}\left(\mathbf{s}_{n - 1}, \mathbf{o}_n\right) - \mathbf{s}_n\right)^{T}\right]^{T} \tag{19} +$$ + +Index each element of $h$ by $h_i$ . Then the vectors $\nabla h_i(\mathbf{W}^*, \mathbf{s}_{0:n}^*)$ are linearly independent. + +Proof. In the following, we will write $\frac{\partial [g]_l}{\partial x_j^i}$ to mean the derivative of the $l$ -th component of $g$ with respect to the $i$ -th component of $x_j$ . For compactness, write $g(i) := f_{\mathbf{W}^*}(\mathbf{s}_{i - 1}^*, \mathbf{o}_i) - \mathbf{s}_i^*$ .
We can write the Jacobian $\nabla h(\mathbf{W}^*, \mathbf{s}_{0:n}^*)$ as + +$$ +\nabla h (\mathbf {W} ^ {*}, \mathbf {s} _ {0: n} ^ {*}) = \left[ \begin{array}{c c c c c c c} \frac {\partial [ g (1) ] _ {1}}{\partial \mathbf {W} ^ {1}} & \dots & \frac {\partial [ g (1) ] _ {1}}{\partial \mathbf {W} ^ {w}} & \frac {\partial [ g (1) ] _ {1}}{\partial s _ {1} ^ {1}} & \dots & \frac {\partial [ g (1) ] _ {1}}{\partial s _ {1} ^ {k}} & \dots & \frac {\partial [ g (1) ] _ {1}}{\partial \mathbf {s} _ {n} ^ {k}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac {\partial [ g (1) ] _ {k}}{\partial \mathbf {W} ^ {1}} & \dots & \frac {\partial [ g (1) ] _ {k}}{\partial \mathbf {W} ^ {w}} & \frac {\partial [ g (1) ] _ {k}}{\partial s _ {1} ^ {1}} & \dots & \frac {\partial [ g (1) ] _ {k}}{\partial s _ {1} ^ {k}} & \dots & \frac {\partial [ g (1) ] _ {k}}{\partial \mathbf {s} _ {n} ^ {k}} \\ \frac {\partial [ g (2) ] _ {1}}{\partial \mathbf {W} ^ {1}} & \dots & \frac {\partial [ g (2) ] _ {1}}{\partial \mathbf {W} ^ {w}} & \frac {\partial [ g (2) ] _ {1}}{\partial s _ {1} ^ {1}} & \dots & \frac {\partial [ g (2) ] _ {1}}{\partial s _ {1} ^ {k}} & \dots & \frac {\partial [ g (2) ] _ {1}}{\partial \mathbf {s} _ {n} ^ {k}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac {\partial [ g (n) ] _ {k}}{\partial \mathbf {W} ^ {1}} & \dots & \frac {\partial [ g (n) ] _ {k}}{\partial \mathbf {W} ^ {w}} & \frac {\partial [ g (n) ] _ {k}}{\partial s _ {1} ^ {1}} & \dots & \frac {\partial [ g (n) ] _ {k}}{\partial s _ {1} ^ {k}} & \dots & \frac {\partial [ g (n) ] _ {k}}{\partial \mathbf {s} _ {n} ^ {k}} \end{array} \right] +$$ + +We will show that the rows of $\nabla h(\mathbf{W}^*,\mathbf{s}_{0:n}^*)$ are linearly independent. 
To this end, let $\lambda_{ij}\in \mathbb{R}$ for $i\in \{1,\dots ,n\}, j\in \{1,\dots ,k\}$ be such that:
+
+$$
+\sum_{j=1}^{k} \sum_{i=1}^{n} \lambda_{ij} \nabla [f_{\mathbf{W}^*}(\mathbf{s}_{i-1}^*, \mathbf{o}_i) - \mathbf{s}_i^*]_j = 0.
+$$
+
+In particular, for $1 \leq a \leq n$, $1 \leq b \leq k$,
+
+$$
+\begin{array}{l} \sum_{j=1}^{k} \sum_{i=1}^{n} \lambda_{ij} \frac{\partial\left[f_{\mathbf{W}^*}\left(\mathbf{s}_{i-1}^*, \mathbf{o}_i\right) - \mathbf{s}_i^*\right]_j}{\partial \mathbf{s}_a^b} = \sum_{j=1}^{k} \sum_{i=1}^{n} \lambda_{ij}\left(\delta_a^{i-1} \frac{\partial\left[f_{\mathbf{W}^*}\left(\mathbf{s}_{i-1}^*, \mathbf{o}_i\right)\right]_j}{\partial \mathbf{s}_a^b} - \delta_a^i \delta_b^j\right) \quad (20) \\ = 1_{a<n} \sum_{j=1}^{k} \lambda_{a+1,j} \frac{\partial\left[f_{\mathbf{W}^*}\left(\mathbf{s}_a^*, \mathbf{o}_{a+1}\right)\right]_j}{\partial \mathbf{s}_a^b} - \lambda_{ab} \quad (21) \\ = 0. \quad (22) \end{array}
+$$
+
+By setting $a = n$, we have that $\lambda_{nb} = 0$ for all $1 \leq b \leq k$. Setting $a = n - 1$, we similarly have that $\lambda_{n-1,b} = 0$. Proceeding in this fashion, we have that $\lambda_{ab} = 0$ for all $1 \leq a \leq n$, $1 \leq b \leq k$. Note that at no point did we use the fact that $(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*)$ is a local min; the constraint gradients are therefore linearly independent everywhere, and in particular at $(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*)$.
+
+Theorem 2.
Assume we have a positive, increasing sequence $\{\lambda_k\} \to \infty$, a non-negative sequence $\{\epsilon_k\} \to 0$, and a sequence of points $\{(\mathbf{W}_k,\boldsymbol{\beta}_k,\mathbf{S}_k)\}$ such that $\| \nabla L(\mathbf{W}_k,\boldsymbol{\beta}_k,\mathbf{S}_k;\lambda_k)\| \leq \epsilon_k$ for
+
+$$
+L\left(\mathbf{W}, \boldsymbol{\beta}, \mathbf{S}; \lambda_k\right) \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=1}^{n}\left[\ell_{\boldsymbol{\beta}}\left(f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_i\right); \mathbf{y}_i\right) + \frac{\lambda_k}{2}\left\|\mathbf{s}_i - f_{\mathbf{W}}\left(\mathbf{s}_{i-1}, \mathbf{o}_i\right)\right\|_2^2\right]. \tag{11}
+$$
+
+Assume further that $\{(\mathbf{W}_k,\boldsymbol{\beta}_k,\mathbf{S}_k)\}$ has a convergent subsequence $\{(\mathbf{W}_{k_i},\boldsymbol{\beta}_{k_i},\mathbf{S}_{k_i})\}$ with limit $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{S}^{*})$. Then $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{S}^{*})$ is a KKT point of the constrained FPP objective (see (12)) and $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ is a KKT point of the RNN objective (10). Further, if $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{S}^{*})$ is a local min of the constrained FPP objective, then $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ is a local min of (10).
+
+Proof. By Proposition 3 and Proposition 2.3 from Bertsekas (1982), we have the existence of a Lagrange multiplier vector $\lambda$ such that
+
+$$
+\nabla E_{fpp}\left(\mathbf{W}^*, \boldsymbol{\beta}^*, \mathbf{s}_{0:n}^*\right) - \nabla h\left(\mathbf{W}^*, \mathbf{s}_{0:n}^*\right) \lambda = 0,
+$$
+
+$$
+h(\mathbf{W}^*, \mathbf{s}_{0:n}^*) = 0,
+$$
+
+where $h(\mathbf{W}^*,\mathbf{s}_{0:n}^*)$ is as in Proposition 3. Hence, $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0:n}^{*})$ is a KKT point of (12).
+
+By Proposition 2, $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ is a KKT point for (10). Finally, if $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0:n}^{*})$ is a local min of (12), then by Proposition 1 we have that $(\mathbf{W}^{*},\boldsymbol{\beta}^{*},\mathbf{s}_{0}^{*})$ is a local min of (10).
+
+# B PARAMETER STUDY
+
+We investigate the sensitivity of FPP to its two key parameters: the length of the trajectory $T$ and the buffer size $N$. Overall, the losses on the y-axis of Figure 5 show that FPP is robust to both buffer size and truncation length. As expected, performance degrades for very small $T$, but otherwise moving from $T = 10$ to $T = 50$ does not result in a large difference. The algorithm was also quite invariant to buffer size, starting from a reasonable size of 100. For too large a buffer with a small number of updates, performance did degrade somewhat. Overall, though, across this wide range of settings, FPP performed consistently well.
+
+![](images/2590921971e5b1688d289c738fe5900bf22d0a159865bbd6d2b9d0c220931baf.jpg)
+(a) 10-CycleWorld
+
+![](images/2b1e9cc79a36fad279a1ef72a46014fec2ad618edf1aa35e7871d176e3472b06.jpg)
+(b) StochasticWorld
+Figure 5: Sensitivity to buffer length and trajectory length in FPP, for buffer sizes 100, 1000, and 10000 and truncations of 1, 5, 10, 15, and 50.
+
+We also investigated how performance changes with $\lambda$. Throughout all previous experiments, we simply set $\lambda = 1$ to avoid unfairly tuning our method to each problem. Interestingly, tuning $\lambda$ does enable further performance improvements, though the algorithm worked well for quite a large range of $\lambda$.
+
+# C EXPERIMENTAL DETAILS
+
+The dynamics for the Stochastic World environment are given in Table 1.
+
+For all experiments, we use the RMSprop optimizer, and the learning rate is chosen from the set $\{0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03\}$ based on the average accuracy/loss. For real datasets, we use multiple trajectories to speed up training.
The details of each task are provided below:
+
+![](images/32ca4c993031986fd0b6b08ab5608869de5e20eeb915eb9013495e666d59f9.jpg)
+(a) 10-CycleWorld
+
+![](images/9c931f9ef8a345a0f8c43a97a77a47ae2fd115bdc9921b11e5d3d55986a3a21d.jpg)
+(b) StochasticWorld
+Figure 6: Sensitivity to $\lambda$ for various values of $T$. For small $T$, a higher $\lambda$ works better, suggesting the impact of propagating state values across the buffer.
+
| $P(Y_t = 1 \mid O_{t-T_1}, O_{t-T_2})$ | $O_{t-T_1}$ | $O_{t-T_2}$ |
| --- | --- | --- |
| 50% | 0 | 0 |
| 100% | 1 | 0 |
| 25% | 0 | 1 |
| 75% | 1 | 1 |
+ +Table 1: The conditional probability of the target output given the past observations. + +![](images/d4f39ff18a53d9c2fe81b5c3c65e967d48fb02e3c8ccf07869e3a460910daa4c.jpg) +Figure 7: The performance of FPP and FPP without state updating with $T = 1$ , $B = 16$ after 50000 training steps, for varying $M$ . This result highlights that FPP can better take advantage of more updates and larger mini-batches, with its sound updating strategy on a buffer. + +# C.1 CYCLEWORLD + +Network Type $=$ simple RNN Hidden Units $= 4$ + +# C.2 STOCHASTIC WORLD + +Network Type $=$ simple RNN Hidden Units $= 32$ + +# C.3 SEQUENTIAL MNIST + +Network Type $=$ simple RNN +Hidden Units $= 512$ +Image Size $= 784$ pixels +Input Dimension $= 28$ pixels +Number of Steps $= 28000$ (1000 images of 28 steps) + +Number of Trajectories $= 20$ + +# C.4 PTB + +Network Type = LSTM + +Hidden Units $= 200$ + +Vocabulary Size $= 10000$ + +Embedding Size $= 200$ + +Number of Steps $= 5000$ (5000 samples in the dataset) + +Number of Trajectories $= 20$ \ No newline at end of file diff --git a/trainingrecurrentneuralnetworksonlinebylearningexplicitstatevariables/images.zip b/trainingrecurrentneuralnetworksonlinebylearningexplicitstatevariables/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..31b4b1818675b1728da54ee2256d8dfaae824e50 --- /dev/null +++ b/trainingrecurrentneuralnetworksonlinebylearningexplicitstatevariables/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7a70f8e9467b756f31f94a01212efc16697ff4aa93fbb704951212cbdb13e4c4 +size 966847 diff --git a/trainingrecurrentneuralnetworksonlinebylearningexplicitstatevariables/layout.json b/trainingrecurrentneuralnetworksonlinebylearningexplicitstatevariables/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..86c52d79f6ca63da6e260ea3a70ba5a667e9d4bd --- /dev/null +++ b/trainingrecurrentneuralnetworksonlinebylearningexplicitstatevariables/layout.json @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1187345ba7614a9ff98cfab442f6a68a986d07a136a5fa4732675c8fc1a7144e +size 905430 diff --git a/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_content_list.json b/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..284b3c734e6cbf5ec25ba7eec9bdcf0c19c8eb0a --- /dev/null +++ b/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee9287cca486b8fc255e2fb5f8260930e8b4fb30a9f41616261db5bd5e63da5c +size 77946 diff --git a/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_model.json b/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b97b41d86d982738a34f3f86c87eb644260e7469 --- /dev/null +++ b/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc3dd89e2556019548d1a09408938bacf4d6a43c3f36e0a2de5ad04d9b38e4a7 +size 92043 diff --git a/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_origin.pdf b/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3e947281de1481f951d866f67be30e7c93fb4219 --- /dev/null +++ b/transferableperturbationsofdeepfeaturedistributions/f0b258cd-79f6-49a6-b430-4b2663788bd3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b77e2f16d284260173a23879724d4fd003f2de9ce09059bb500eb533b668e7f +size 6868524 diff --git 
a/transferableperturbationsofdeepfeaturedistributions/full.md b/transferableperturbationsofdeepfeaturedistributions/full.md new file mode 100644 index 0000000000000000000000000000000000000000..9e19b12abebae7fad094f5297498e1932eda72de --- /dev/null +++ b/transferableperturbationsofdeepfeaturedistributions/full.md @@ -0,0 +1,283 @@ +# TRANSFERABLE PERTURBATIONS OF DEEP FEATURE DISTRIBUTIONS
+
+Nathan Inkawhich, Kevin J Liang, Lawrence Carin & Yiran Chen
+
+Department of Electrical and Computer Engineering
+
+Duke University
+
+{nathan.inkawhich,kevin.liang,lcarin,yiran.chen}@duke.edu
+
+# ABSTRACT
+
+Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network. This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions. We achieve state-of-the-art targeted blackbox transfer-based attack results for undefended ImageNet models. Further, we place a priority on explainability and interpretability of the attacking process. Our methodology affords an analysis of how adversarial attacks change the intermediate feature distributions of CNNs, as well as a measure of layer-wise and class-wise feature distributional separability/entanglement. We also conceptualize a transition from task/data-specific to model-specific features within a CNN architecture that directly impacts the transferability of adversarial examples.
+
+# 1 INTRODUCTION
+
+Most recent adversarial attack literature has focused on empirical demonstrations of how classifiers can be fooled by the addition of quasi-imperceptible noise to the input (Szegedy et al., 2014; Goodfellow et al., 2015; Carlini & Wagner, 2017; Moosavi-Dezfooli et al., 2016; Madry et al., 2018; Kurakin et al., 2017). However, adversarial attacks may be leveraged in other constructive ways to provide insights into how deep learning models learn data representations and make decisions.
In this work, we propose a new blackbox transfer-based adversarial attack that outperforms state-of-the-art methods for undefended ImageNet classifiers. Importantly, this work provides a broad exploration into how different Deep Neural Network (DNN) models build feature representations and conceptualize classes. The new attack methodology, which we call the Feature Distribution Attack (FDA), leverages class-wise and layer-wise deep feature distributions of a substitute DNN to generate adversarial examples that are highly transferable to a blackbox target DNN.
+
+![](images/71876a58f7a11d37348dfc56e862e984090998ee859b8053cf371112a3a53618.jpg)
+Figure 1: (top) Given a pre-trained whitebox model $f$, we capture the layer-wise and class-wise feature distributions with binary neural networks $g_{l,c}$, aiming to model the probability that the layer $l$ features extracted from input $x$ are from the class $c$ feature distribution (i.e. $p(y = c|f_l(x))$). (bottom) Forward pass for FDA targeted attack.
+
+One perspective on adversarial attacks is that adversarial noise is a direction in which to "move" the natural data. In standard attacks which directly use the classification output, the noise points in the direction of the nearest decision boundary at the classification layer (Tramèr et al., 2017). In this work, our crafted noise points in a direction that makes the data "look like" a sample of another class in intermediate feature space. Intuitively, if we can alter the representation in a layer whose features are representative of the data for the given task, but not specific to the model, the adversarial example may transfer better (to unobserved architectures) than attacks derived from logit-layer information.
+
+Figure 1(top) illustrates the feature distribution modeling of a DNN, which is the core mechanism of the attack. $f$ is a pre-trained substitute whitebox model to which we have full access.
The true target blackbox model is not shown, but we only assume limited query access and that it has been trained on ImageNet-1k (Deng et al., 2009). Adversarial examples are then generated on the whitebox model and transferred to the blackbox model. The novelty of the attack comes from the explicit use of class-wise and layer-wise feature distributions. In Figure 1(top), an auxiliary Neural Network (NN) $g_{l,c}$ learns $p(y = c|f_l(x))$, which is the probability that the layer $l$ features of the whitebox model, extracted from input image $x$, belong to class $c$. The attack uses these learned distributions to generate targeted (or untargeted) adversarial examples by maximizing (or minimizing) the probability that the adversarial example is from a particular class's feature distribution (Figure 1(bottom)). We also use these learned distributions to analyze layer-wise and model-wise transfer properties, and to monitor how perturbations of the input change feature space representations. Thus, we gain insights on how feature distributions evolve with layer depth and architecture.
+
+# 2 RELATED WORK
+
+In blackbox attacks (Narodytska & Kasiviswanathan, 2017; Su et al., 2017; Papernot et al., 2017; Tramèr et al., 2017; Inkawhich et al., 2019; Dong et al., 2018; Zhou et al., 2018), knowledge of the target model is limited. In this work, the target model is blackbox in the sense that we do not have access to its gradients and make no assumptions about its architecture (Madry et al., 2018; Cheng et al., 2019). A popular blackbox technique is transfer-based attacks, in which adversarial examples are constructed on the attackers' own whitebox model and transferred to the target model. Papernot et al. (2016; 2017) develop special methods for training the attackers' whitebox model to approximate the target model's decision boundaries. In this work, we only use models that have been trained under standard configurations for ImageNet-1k (Deng et al., 2009).
Tramèr et al. (2018) and Liu et al. (2017) bolster transferability by generating adversarial examples from an ensemble of whitebox models, which helps the noise not overfit a single model architecture. Our methods also discourage overfitting of the generating architecture, but we instead leverage feature space perturbations at the appropriate layer. In the 2017 NeurIPS blackbox attack competition (Kurakin et al., 2018), the winning method (Dong et al., 2018) used momentum in the optimization step, which helped to speed up the convergence rate and de-noise the gradient directions so as not to be overly specific to the generating architecture. We also use this approach. Finally, Tramèr et al. (2017) analyze why transferability occurs and find that well-trained models have similar decision boundary structures. We also analyze transferability, but in the context of how adversarial examples change a model's internal representations, rather than only making observations at the output layer.
+
+While all of the above methods generate adversarial examples using information from the classification layer of the model, there have been a few recent works delving into the feature space of DNNs for both attacks and defenses. Sabour et al. (2016) show that in whitebox settings, samples can be moved very close together while maintaining their original image-domain representations. Zhou et al. (2018) regularize standard untargeted attack objectives to maximize perturbations of (all) intermediate feature maps and increase transferability. However, their primary objective is untargeted and still based on classification output information. Also, the authors do not consider which layers are affected and how the regularization alters the intermediate representations. Inkawhich et al. (2019) show that driving a source sample's feature representation towards a target sample's representation at particular layers in deep feature space is an effective method of targeted transfer attack.
However, the method is targeted only and relies on the selection of a single, carefully chosen sample of the target class. Also, the attack success rate on ImageNet was empirically low. This work describes a more robust attack, with significantly better performance on ImageNet, and provides a more detailed analysis of layer-wise transfer properties. For adversarial defenses, Xie et al. (2019), Frosst et al. (2019), and Lin et al. (2019) consider the effects of adversarial perturbations in feature space but do not perform a layer-wise analysis of how the internal representations are affected.
+
+# 3 ATTACK METHODOLOGY
+
+We assume access to a set of training data and a pre-trained model $f$ from the same task as the target blackbox model (i.e. the ImageNet-1k training set and a pre-trained ImageNet model). To model the feature distributions for $f$, we identify a set of classes $\mathcal{C} = \{c_1, \dots, c_K\}$ and a set of layers $\mathcal{L} = \{l_1, \dots, l_N\}$ that we are keen to probe. For each layer in $\mathcal{L}$, we train a small, binary, one-versus-all classifier $g$ for each of the classes in $\mathcal{C}$, as shown in Figure 1 (top). Each binary classifier is given a unique set of parameters, and referred to as an auxiliary model $g_{l,c}$. The output of an auxiliary model represents the probability that the input feature map is from a specific class $c \in \mathcal{C}$. Thus, we
Recall, the goal of a targeted attack is to generate an adversarial noise $\delta$ that when added to a clean sample $x$ of class $y_{src}$ , the classification result of $x + \delta$ is a chosen class $y_{tgt}$ . The key intuition for our targeted methods is that if a sample has features consistent with the feature distribution of class $c$ at some layer of intermediate feature space, then it will likely be classified as class $c$ . Although not shown in the objective functions for simplicity, for all attacks the adversarial noise $\delta$ is constrained by an $\ell_p$ norm (i.e. $||\delta||_p \leq \epsilon$ ), and the choice of layer $l$ and target class label $y_{tgt}$ are chosen prior to optimization. + +FDA We propose three targeted attack variants. The most straightforward variant, called FDA, finds a perturbation $\delta$ of the "clean" input image $x$ that maximizes the probability that the layer $l$ features are from the target class $y_{tgt}$ distribution: + +$$ +\max _ {\delta} p \left(y = y _ {t g t} \mid f _ {l} (x + \delta)\right). \tag {1} +$$ + +We stress that unlike standard attacks that use output layer information to directly cross decision boundaries of the whitebox, our FDA objective leverages intermediate feature distributions which do not implicitly describe these exact boundaries. + +$FDA + ms$ In addition to maximizing the probability that the layer $l$ features are from the target class distribution, the $FDA + ms$ variant also considers minimizing the probability that the layer $l$ features are from the source class $y_{src}$ distribution ( $ms =$ minimize source): + +$$ +\max _ {\delta} \lambda p \left(y = y _ {t g t} \mid f _ {l} (x + \delta)\right) - (1 - \lambda) p \left(y = y _ {s r c} \mid f _ {l} (x + \delta)\right). \tag {2} +$$ + +Here, $\lambda \in (0,1)$ weights the contribution of both terms and is a fixed positive value. 
+ +$FDA + fd$ Similarly, the $FDA + fd$ variant maximizes the probability that the layer $l$ features are from the target class distribution while also maximizing the distance of the perturbed features from the original features ( $fd =$ feature-disruption): + +$$ +\max _ {\delta} p \left(y = y _ {t g t} \mid f _ {l} (x + \delta)\right) + \eta \frac {\left\| f _ {l} (x + \delta) - f _ {l} (x) \right\| _ {2}}{\left\| f _ {l} (x) \right\| _ {2}}. \tag {3} +$$ + +In other words, the feature-disruption term, with a fixed $\eta \in \mathbb{R}_+$ , prioritizes making the layer $l$ features of the perturbed sample maximally different from the original sample. + +The additional terms in $FDA + ms$ and $FDA + fd$ encourage the adversarial sample to move far away from the starting point, which may intuitively help in generating (targeted) adversarial examples. Also, notice that $FDA + ms$ requires the modeling of both the source and target class distributions, whereas the others only require the modeling of the target class distribution. + +**Optimization Procedure.** The trained auxiliary models afford a way to construct a fully differentiable path for gradient-based optimization of the objective functions. Specifically, to compute FDA adversarial noise from layer $l$ , we first build a composite model using the truncated whitebox model $f_{l}$ and the corresponding layer's auxiliary model $g_{l,c=y_{tgt}}$ for the target class $y_{tgt}$ , as shown in Figure 1(bottom). The loss is calculated as the Binary Cross Entropy (BCELoss) between the predicted $p(y=y_{tgt}|f_l(x))$ and 1. Thus, we perturb the input image in the direction that will minimize the loss, in turn maximizing $p(y=y_{tgt}|f_l(x))$ . For optimization, we employ iterative gradient descent with momentum, as the inclusion of a momentum term in adversarial attacks has proven effective (Inkawich et al., 2019; Dong et al., 2018). See Appendix D for more details. + +# 4 EXPERIMENTAL SETUP + +ImageNet models. 
For evaluation we use popular CNN architectures designed for the ImageNet-1k (Deng et al., 2009) classification task: VGG-19 with batch-normalization (VGG19) (Simonyan & Zisserman, 2015), DenseNet-121 (DN121) (Huang et al., 2017), and ResNet-50 (RN50) (He et al., + +2016). All models are pre-trained and found in the PyTorch Model Zoo. Note, our methods are in no way specific to these particular models/architectures. We also emphasize transfers across different architectures rather than showing results between models from the same family. + +Layer decoding scheme. Given a pre-trained model, we must choose a set of layers $\mathcal{L}$ to probe. For each model we subsample the layers such that we probe across the depth. For notation we use relative layer numbers, so layer 0 of DN121 $(DN121_{l=0})$ is near the input layer and $DN121_{l=12}$ is closer to the classification layer. For all models, the deepest layer probed is the logit layer. Appendix A decodes the notation for each model. + +Auxiliary model training. We must also choose a set of classes $\mathcal{C}$ that we are interested in modeling. Recall, the number of auxiliary models required for a given base model is the number of layers probed multiplied by the number of classes we are interested in modeling. Attempting to model the feature distributions for all 1000 ImageNet classes for each layer is expensive, so we instead choose to run the majority of tests with a set of 10 randomly chosen classes (which are meant to be representative of the entire dataset): 24:"grey-owl", 99:"goose", 245:"bulldog", 344:"hippo", 471:"cannon", 555:"fire-truck", 661:"Model-T", 701:"parachute", 802:"snowmobile", 919:"street-sign". Thus, for each layer of each model we train 10 auxiliary classifiers, one for each class. After identifying high performing attack settings, we then produce results for all 1000 classes. + +The architecture of all auxiliary models is the same, regardless of model, layer, or class. 
Each is a 2-hidden-layer NN with a single output unit. There are 200 neurons in each hidden layer and the number of input units matches the size of the input feature map. To train the auxiliary models, unbiased batches from the whole ImageNet-1k training set are pushed through the truncated pre-trained model $(f_l)$, and the extracted features are used to train the auxiliary model parameters.
+
+Experimental procedure. Since we have three pre-trained models, there are 6 blackbox transfer scenarios to evaluate (no self-transfers). We use the ImageNet-1k validation set as the test dataset. Because $FDA + ms$ requires both the source and target class distributions, for the targeted attack evaluations we only use source samples from the 10 trained classes, and for each sample, target each of the other 9 classes. For baseline attacks, we use targeted random-start Projected Gradient Descent (tpgd) (Madry et al., 2018; Kurakin et al., 2018), the targeted NeurIPS2017 competition-winning momentum iterative method (tmim) (Dong et al., 2018), and the Activation Attack (AA) (Inkawhich et al., 2019). Further, all targeted adversarial examples are constrained by $\ell_{\infty}$ $\epsilon = 16/255$ as described in (Dong et al., 2018; Kurakin et al., 2018). Based on experimental tuning, we set $\lambda = 0.8$ in (2) and $\eta = 1e-6$ in (3). Finally, as measured over the initially correctly classified subset of the test dataset (by both the whitebox and blackbox models), attack success is captured in two metrics. Error is the percentage of examples that the blackbox misclassifies, and Targeted Success Rate (tSuc) is the percentage of examples that the blackbox misclassifies as the target label.
+
+# 5 EMPIRICAL RESULTS
+
+# 5.1 10-CLASS IMAGENET RESULTS
+
+The primary axis of interest is how attack success rate varies with the layer depth from which the feature distributions are attacked. Figure 2 shows the transfer results between all pairs of whitebox and blackbox models.
Each plot shows a metric of attack success versus relative layer depth of the generated attack. The notation DN121 $\rightarrow$ VGG19 indicates adversarial examples were generated with a DN121 whitebox model and transferred to a VGG19 blackbox model.
+
+Similar to Inkawhich et al. (2019), transferability trends for $FDAs$ from a given whitebox model appear blackbox-model agnostic (e.g. the shape of the curves from $\mathrm{DN}121\rightarrow \mathrm{RN}50$ is the same as $\mathrm{DN}121\to \mathrm{VGG}19$). This is a positive property, as once the optimal transfer layer for a whitebox model is found, evidence shows it will be the same for any blackbox model architecture. Further, the most powerful transfers come from perturbations of intermediate features, rather than perturbations of classification layer information. Another global trend is that in tSuc, $FDA + fd$ performs best, $FDA$ performs worst, $FDA + ms$ is in-between, and all $FDAs$ significantly outperform the other baselines. In the error metric, $FDA + fd$ is best in early layers, $FDA + ms$ is best in later layers, and $FDA$ routinely under-performs the AA baseline. Although all attacks are targeted, it is relevant to report error as it is still an indication of attack strength. Also, it is clearly beneficial for the targeted attack objective to include a term that encourages the adversarial example to move far away from its starting place ($FDA + ms$ & $FDA + fd$) in addition to moving toward the target region.
+
+![](images/c3f67624a8e88e7c49aa3874ea9c394f7c261545630bfd62b7b22920654d71bd.jpg)
+Figure 2: Targeted adversarial attack transfer results. The x-axis of each plot is the relative layer depth at which the adversarial example was generated from. Each row is a different whitebox model.
+
+We now compare performance across whitebox models. For a DN121 whitebox, $FDA + fd$ from $DN121_{l=7}$ is the optimal targeted attack with an average tSuc of $34\%$.
For DN121 → RN50, this attack outperforms the best baseline by $14\%$ and $32\%$ in error and tSuc, respectively. For a VGG19 whitebox, $FDA + fd$ from $VGG19_{l=5}$ is the optimal targeted attack with an average tSuc of $2.3\%$ . For VGG19 → RN50, this attack outperforms the best baseline by $15\%$ and $2\%$ in error and tSuc, respectively. For a RN50 whitebox, $FDA + fd$ from $RN50_{l=8}$ is the optimal targeted attack with an average tSuc of $18\%$ . For RN50 → DN121, this attack outperforms the best baseline by $27\%$ and $17\%$ in error and tSuc, respectively. + +# 5.2 1000-CLASS IMAGENET RESULTS + +Table 1: Transferability rates for 1000-class targeted attack tests using optimal layers. + +
| attack | DN121→VGG19 (error) | DN121→VGG19 (tSuc) | DN121→RN50 (error) | DN121→RN50 (tSuc) | RN50→DN121 (error) | RN50→DN121 (tSuc) | RN50→VGG19 (error) | RN50→VGG19 (tSuc) |
|---|---|---|---|---|---|---|---|---|
| tpgd | 23.1 | 0.3 | 21.4 | 0.6 | 20.2 | 0.5 | 22.4 | 0.3 |
| tmim | 48.6 | 1.4 | 45.5 | 2.2 | 44.3 | 2.9 | 46.7 | 1.3 |
| FDA | 64.9 | 15.5 | 64.3 | 18.1 | 56.4 | 12.6 | 54.6 | 6.9 |
| FDA+ms | 91.9 | 21.7 | 91.9 | 23.4 | 87.3 | 15.9 | 85.3 | 10.2 |
| FDA+fd | 81.2 | 29.0 | 81.7 | 30.9 | 82.6 | 24.3 | 78.9 | 15.9 |
Recall, due to the computational complexity of training one auxiliary model per class per layer per model, we ran the previous experiments using 10 randomly sampled ImageNet-1k classes. This may nonetheless be a realistic attack scenario, because an adversary would likely only be interested in attacking certain source-target pairs. However, to show that the 10 chosen classes are not special, and that the previously identified optimal transfer layers are still valid, we train all 1000 per-class auxiliary models for $DN121_{l=7}$ and $RN50_{l=8}$. We exclude VGG19 because of its inferior performance in previous tests. Table 1 shows results for the four transfer scenarios. Attack success rates are all averaged over four random 10k splits, and the standard deviation of all measurements is less than $1\%$. In these tests, for each source sample, a random target class is chosen. As expected, the 1000-class results closely match the previously reported 10-class results.

# 6 ANALYSIS OF TRANSFER PROPERTIES

We now investigate why a given layer and/or whitebox model is better for creating transferable adversarial examples. We also explore the hypothesis that the layer-wise transfer properties of a DNN implicate the transition of intermediate features from task/data-specific to model-specific. Intuitively, early layers of DNNs trained for classification may be working to optimally construct a task/data-specific feature set (Zeiler & Fergus, 2014; Yosinski et al., 2014). However, once the necessary feature hierarchy is built to model the data, further layers may perform extra processing to best suit the classification functionality of the model. This additional processing may be what makes the features model-specific. We posit that the peak of the tSuc curve for a given transfer directly encodes the inflection point from task/data-specific to model-specific features in the whitebox model.
Intuitively, to achieve targeted attack success, the layer at which the attacks are generated must have captured the concepts of the classes for the general task of classification, without being overly specific to the architecture. Thus, layers prior to the inflection point may not have solidified the class concepts, whereas layers after the inflection point may have established the class concepts and are further processing them for the model output. This may also be considered an extension of the convergent learning theory of Li et al. (2016) and the general-to-specific theory of Yosinski et al. (2014).

# 6.1 INTERMEDIATE DISRUPTION

One way to measure why and how adversarial attacks work is to observe how the intermediate representations change as a result of perturbations to the input. Our trained auxiliary models afford a novel way to monitor the effects of such perturbations in deep feature space. To measure how much a layer's features have changed as a result of a (targeted) adversarial perturbation, we define layer-wise disruption as the difference between the target class probability before and after perturbation, as measured in layer $l$ of model $f$: disruption $= p(y = y_{tgt}|f_l(x + \delta)) - p(y = y_{tgt}|f_l(x))$.

Figure 3 shows the average disruption caused in each transfer scenario, using both logit-based (tmim) and feature-based $(FDA + fd)$ adversarial attacks. Each row plots the disruption versus layer depth from a single whitebox model to each blackbox model (e.g. the top row results from DN121 $\rightarrow$ VGG19 and DN121 $\rightarrow$ RN50 transfers). Each line represents the average disruption caused by some adversarial attack, where all FDAs are $FDA + fd$. The first column of plots shows the impact of each attack on the whitebox model's feature distributions, while the second and third columns show impacts on the blackbox models' feature distributions.
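The disruption measure just defined, together with the auxiliary-model architecture described earlier (a 2-hidden-layer MLP with 200 units and a single sigmoid output), can be sketched in a few lines. This is our own illustrative PyTorch sketch with toy shapes, not the authors' code:

```python
import torch
import torch.nn as nn

# Auxiliary model g_{l,c}: a 2-hidden-layer MLP with 200 units per hidden layer
# and a single sigmoid output; the input size matches the flattened feature map.
def make_auxiliary(feature_dim):
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(feature_dim, 200), nn.ReLU(),
        nn.Linear(200, 200), nn.ReLU(),
        nn.Linear(200, 1), nn.Sigmoid(),
    )

def disruption(f_l, g, x, x_adv):
    # p(y = y_tgt | f_l(x + delta)) - p(y = y_tgt | f_l(x))
    with torch.no_grad():
        return (g(f_l(x_adv)) - g(f_l(x))).item()

# Toy stand-ins for the truncated whitebox model and a perturbed input.
f_l = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
g = make_auxiliary(8 * 32 * 32)
x = torch.rand(1, 3, 32, 32)
x_adv = (x + 0.06 * torch.randn_like(x)).clamp(0, 1)
d = disruption(f_l, g, x, x_adv)  # a value in [-1, 1]
```

Since both terms are sigmoid probabilities, disruption is bounded in $[-1, 1]$; large positive values mean the perturbation pushed the features deep into the target class's distribution at that layer.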
+ +![](images/8a7a3a00ab7676e0809c5279281dd73a384d021fae791f8a04efda41da106e81.jpg) + +![](images/7be57d813ea275aa38869bea28ff6d96340807543ba92c1cace6f6fd3048f4d4.jpg) + +![](images/0c4384eb3f52fa684a55b71ee3b3bd74a870dac285bda8292e3ea0b61afec2d9.jpg) +Figure 3: Disruption versus layer depth for all transfer scenarios. Each row uses a different whitebox model. Each line is a different attack, where all FDAs are $FDA + fd$ . + +It appears FDAs generated from early layers (e.g. $DN121_{l=0}$ , $VGG19_{l=0}$ , $RN50_{l=0}$ ) disrupt features the most in early layers and less so in deeper layers. Therefore, a sample resembling class $y_{tgt}$ in an early layer does not mean it will ultimately be classified as $y_{tgt}$ . However, recall from Figure 2 that attacks from early layers create very powerful untargeted adversarial examples (error). This indicates that early layer perturbations are amplified as they proceed through the model (Lin + +et al., 2019), just not in a class-specific manner. Next, as expected, attacks that use information from the last layers of a whitebox model (e.g. tmim, $DN121_{l=13}$ , $VGG19_{l=9}$ , $RN50_{l=12}$ ) create the largest disruption in the last layers of the whitebox, but not necessarily at the last layers of the blackbox models. However, the optimal transfer attacks $(DN121_{l=7}, VGG19_{l=5}, RN50_{l=8})$ have high disruption all throughout the models, not just at the last layer. This is further evidence that perturbations of classification-layer features are overly model-specific and perturbations of optimal transfer-layer features are more specific to the data/task. Finally, notice that the maximum disruption caused in any blackbox model layer from VGG19 whitebox transfers is around $40\%$ (row 2). For the DN121 and RN50 whitebox models, the maximum disruption is around $80\%$ . 
This may explain VGG19 being an inferior whitebox model to transfer from, as the perturbation of intermediate VGG features does not in turn cause significant disruption of blackbox model features.

# 6.2 AUXILIARY MODEL CORRELATION WITH FULL MODEL

Another point of analysis is to investigate the correlation/discrepancy between the auxiliary models at a given layer and the output of the whitebox model. This may also indicate a transition from task/data-specific to model-specific features. We discuss discrepancy as an indication of how different the auxiliary model outputs are from the whitebox model outputs. Correlation is then the inverse of discrepancy, so that when the auxiliary model outputs align well with the whitebox model outputs, the discrepancy is low and the correlation is high. To evaluate discrepancy at a given layer $l$ for input $x$, we aggregate the logit values (i.e. pre-sigmoid/softmax) for each class in $\mathcal{C}$, as measured by the auxiliary models $g_{l,c}$ and the whitebox model $f$, into separate vectors. Then, a softmax (smax) operation is performed on each vector to establish two proper probability distributions over the classes in $\mathcal{C}$. Discrepancy is then defined as the Kullback-Leibler divergence $(D_{\mathrm{KL}})$ between the two distributions, or discrepancy $= D_{\mathrm{KL}}\big(\mathrm{smax}([g_{l,c}(f_l(x))]_{\forall c\in \mathcal{C}})\,\big\|\, \mathrm{smax}([f(x)[c]]_{\forall c\in \mathcal{C}})\big)$. Here, $f(x)[c]$ is the class $c$ logit value from the whitebox model $f$ given input $x$. Figure 4 shows the layer-wise auxiliary model correlations with the whitebox model outputs, as measured from the average discrepancy over 500 input samples of classes in $\mathcal{C}$.

![](images/aa222444505dd5bc109835e3b3faf028e843dbbd922a16cc00ba3696a52d4326.jpg)
Figure 4: Correlation of a layer's auxiliary models with the whitebox model output.

Note, the shapes of the curves are more informative than the actual values.
Also, VGG19 layers have been shifted in notation by $+4$ so that layer depth 13 is the logit layer of each model. As expected, the auxiliary models in early layers have little correlation with the model output, while auxiliary models in later layers have high correlation with the model output. Importantly, the optimal transfer layers $(\star)$ mark a transition in the trendlines after which correlation increases sharply. This effect may directly explain why layers after the optimal layer are suboptimal: the auxiliary models become highly correlated with the model output and begin to overfit the architecture. Since the auxiliary models are not highly correlated with the output layer at the optimal transfer layers, we may surmise that the captured features are still mostly task/data-specific.

![](images/39da347bccf1480006abc63512ebe73393d652c989bf9b0e4b2c2176ea5e556d.jpg)
Figure 5: Saliency maps of auxiliary models on several interesting inputs across model depth.

# 6.3 AUXILIARY MODEL SALIENCY

For a more qualitative analysis, we may inspect the auxiliary model saliency maps. Given an image of class $y_{src}$, we visualize in Figure 5 what is salient to the $y_{src}$ auxiliary models at several DN121 layer depths using SmoothGrad (Smilkov et al., 2017) (see Appendix E for additional saliency examples for RN50). Notice, an observable transition occurs at the high-performing transfer layers from Figure 2 ($DN121_{l=5,7}$). The salient regions move from large areas around the whole image (e.g. $DN121_{l=0,3}$) to specific regions that are also salient in the classification layer ($DN121_{l=13}$). The saliency and correlation transitions together show that the well-transferring layers have learned salient features similar to those of the classification layer while not being overly correlated with the model output.
Therefore, perturbations focused on these salient regions significantly impact the final classification without being too specific to the generating architecture.

# 6.4 CLASS DISTRIBUTION SEPARABILITY

Finally, our trained auxiliary models afford new ways of measuring class-wise feature entanglement/separability for the purpose of explaining transfer performance. We adopt the definition of entanglement from Frosst et al. (2019), which states that highly entangled features have a "lack of separation of class manifolds in representation space," and define separability as the inverse of entanglement. One way to measure the separability between class distributions in a layer using the auxiliary models is to gauge how far a sample has to "move" to enter a region of high class confidence. We define intra-class distance as the distance a sample has to move to enter a high-confidence region of its source class's distribution. Similarly, we define inter-class distance as the distance a sample has to move to enter a high-confidence region of a target class distribution, where $y_{src} \neq y_{tgt}$. Then, separability in a layer is the difference between average inter-class and intra-class distances. In practice, for a given sample, given model, chosen target class, and chosen layer, we iteratively perturb the sample using FDA with a small step size (e.g. 5e-5) until the confidence of the target distribution auxiliary network is over a threshold (e.g. $99.9\%$). The number of perturbation steps it takes to reach the confidence threshold encodes the distance of the sample to the targeted class's distribution.

![](images/ccd179e301ea26c85a2c882c99e4219dbd1df8577c5726b2427b533e472c7609.jpg)
Figure 6: Class separability versus layer depth for each whitebox.

Results for the three whitebox models are shown in Figure 6, where the vertical axis is the separability in units of perturbation steps and the horizontal axis is the layer depth of each model.
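The step-counting distance measure just described can be sketched as follows. This is an illustrative PyTorch sketch with toy stand-in models; the defaults mirror the example values in the text, and the step count is capped so the loop always terminates:

```python
import torch
import torch.nn as nn

def steps_to_confidence(f_l, g_tgt, x, step=5e-5, thresh=0.999, max_steps=500):
    """Count FDA-style perturbation steps until the target-class auxiliary
    model exceeds `thresh` confidence; the count encodes the distance of x
    to the target class's feature distribution (capped at max_steps)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for k in range(max_steps):
        p = g_tgt(f_l(x_adv))
        if p.item() > thresh:
            return k
        loss = -torch.log(p + 1e-12)  # descending this raises target confidence
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step * grad.sign()).detach().requires_grad_(True)
    return max_steps

# Toy truncated model and target-class auxiliary model; layer separability is
# then mean(inter-class steps) - mean(intra-class steps) over many samples.
f_l = nn.Sequential(nn.Flatten(), nn.Linear(12, 8), nn.Tanh())
g_tgt = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())
n = steps_to_confidence(f_l, g_tgt, torch.rand(1, 12))
```

Averaging this count over samples and target classes gives the inter-class and intra-class distances whose difference is plotted in Figure 6.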
We see that there is some separability in all layers, for all models, indicating that even features very close to the input layer are somewhat class-specific. Further, VGG19's features are much less separable than DN121 and RN50, indicating why VGG19 may have performed much worse as a whitebox model in the transferability tests. In the same vein, DN121 has generally the most separated features which further indicates why it may be a superior whitebox model. Intuitively, if a model/layer has highly class-separable feature distributions, FDA attacks may be more transferable because there is less ambiguity between the target class's distribution and other class distributions during generation. + +# 7 CONCLUSIONS + +We present a new targeted blackbox transfer-based adversarial attack methodology that achieves state-of-the-art success rates for ImageNet classifiers. The presented attacks leverage learned class-wise and layer-wise intermediate feature distributions of modern DNNs. Critically, the depth at which features are perturbed has a large impact on the transferability of those perturbations, which may be linked to the transition from task/data-specific to model-specific features in an architecture. We further leverage the learned feature distributions to measure the entanglement/separability of class manifolds in the representation space and the correlations of the intermediate feature distributions with the model output. Interestingly, we find the optimal attack transfer layers have feature distributions that are class-specific and highly-separable, but are not overly-correlated with the whitebox model output. We also find that highly transferable attacks induce large disruptions in the intermediate feature space of the blackbox models. + +# ACKNOWLEDGMENTS + +The research was supported in part by AFRL (FA8750-18-2-0057), DARPA, DOE, NIH, NSF and ONR. + +# REFERENCES + +Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. 
In IEEE Symposium on Security and Privacy, pp. 39-57. IEEE Computer Society, 2017.

Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Improving black-box adversarial attacks with a transfer-based prior. CoRR, abs/1906.06919, 2019.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248-255. IEEE Computer Society, 2009.

Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In CVPR, pp. 9185-9193. IEEE Computer Society, 2018.

Nicholas Frosst, Nicolas Papernot, and Geoffrey E. Hinton. Analyzing and improving representations with the soft nearest neighbor loss. In ICML, volume 97 of Proceedings of Machine Learning Research, pp. 2012-2020. PMLR, 2019.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pp. 770-778. IEEE Computer Society, 2016.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In CVPR, pp. 2261-2269. IEEE Computer Society, 2017.

Nathan Inkawhich, Wei Wen, Hai Li, and Yiran Chen. Feature space perturbations yield more transferable adversarial examples. In CVPR. IEEE Computer Society, 2019.

Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In ICLR. OpenReview.net, 2017.

Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jianfeng Zhu, Xiaolin C. Hu, Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Loddon Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, and Motoki Abe. Adversarial attacks and defences competition. arXiv, abs/1804.00097, 2018.
Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E. Hopcroft. Convergent learning: Do different neural networks learn the same representations? In ICLR, 2016.

Ji Lin, Chuang Gan, and Song Han. Defensive quantization: When efficiency meets robustness. In ICLR. OpenReview.net, 2019.

Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In ICLR. OpenReview.net, 2017.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR. OpenReview.net, 2018.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: A simple and accurate method to fool deep neural networks. In CVPR, pp. 2574-2582. IEEE Computer Society, 2016.

Nina Narodytska and Shiva Prasad Kasiviswanathan. Simple black-box adversarial attacks on deep neural networks. In CVPR Workshops, pp. 1310-1318. IEEE Computer Society, 2017.

Nicolas Papernot, Patrick D. McDaniel, and Ian J. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv, abs/1605.07277, 2016.

Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In AsiaCCS, pp. 506-519. ACM, 2017.

Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J. Fleet. Adversarial manipulation of deep representations. In ICLR, 2016.

Yash Sharma, Tien-Dung Le, and Moustafa Alzantot. CAAD 2018: Generating transferable adversarial examples. CoRR, abs/1810.01268, 2018.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. arXiv, abs/1706.03825, 2017.
Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. arXiv, abs/1710.08864, 2017.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.

Florian Tramèr, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, and Patrick D. McDaniel. The space of transferable adversarial examples. arXiv, abs/1704.03453, 2017.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, and Patrick D. McDaniel. Ensemble adversarial training: Attacks and defenses. In ICLR. OpenReview.net, 2018.

Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L. Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In CVPR. IEEE Computer Society, 2019.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, pp. 3320-3328, 2014.

Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV (1), volume 8689 of Lecture Notes in Computer Science, pp. 818-833. Springer, 2014.

Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, and Yong Yang. Transferable adversarial perturbations. In ECCV (14), volume 11218 of Lecture Notes in Computer Science, pp. 471-486. Springer, 2018.

# APPENDIX

# A. LAYER DECODING

Table 2: Whitebox Model Layer Decoding Table
| Layer | DenseNet-121 | VGG19bn | ResNet-50 |
|---|---|---|---|
| 0 | 6,2 | 512 | 3,1 |
| 1 | 6,10 | 512 | 3,2 |
| 2 | 6,12 | 512 | 3,3 |
| 3 | 6,12,2 | 512 | 3,4 |
| 4 | 6,12,14 | 512 | 3,4,1 |
| 5 | 6,12,20 | 512 | 3,4,2 |
| 6 | 6,12,22 | 512 | 3,4,3 |
| 7 | 6,12,24 | 512 | 3,4,4 |
| 8 | 6,12,24,2 | FC2 | 3,4,5 |
| 9 | 6,12,24,8 | FC3 | 3,4,6 |
| 10 | 6,12,24,12 | - | 3,4,6,1 |
| 11 | 6,12,24,14 | - | 3,4,6,2 |
| 12 | 6,12,24,16 | - | 3,4,6,3 |
| 13 | 6,12,24,16,FC | - | 3,4,6,3,FC |
Table 2 is the layer-number look-up table that corresponds to the layer notation used in the paper. DenseNet-121 (DN121), VGG19bn (VGG), and ResNet-50 (RN50) appear because they are the model architectures used for the main results. The DN121 notation follows the implementation here: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py. In English, layer 0 shows that the output of the truncated model comes from the $2^{nd}$ denselayer of the $2^{nd}$ denseblock. Layer 11 means the output of the truncated model comes from the $14^{th}$ denselayer in the $4^{th}$ denseblock. Layer 13 indicates the output comes from the final FC layer of the model.

The VGG model does not have denseblocks or denselayers, so we use another notation. In the implementation at https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py, the VGG19bn model is constructed from the layer array $[64,64,'M',128,128,'M',256,256,256,256,'M',512,512,512,512,'M',512,512,512,512,'M',FC1,FC2,FC3]$, and we follow this convention in the table. In the array, each number corresponds to a convolutional layer with that number of filters, the M's represent max-pooling layers, and the FCs represent the linear layers at the end of the model. Notice, in these tests we do not consider the first 11 layers of VGG19, as they were shown to have very little impact on classification when perturbed.

The RN50 notation follows the implementation here: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py. As designed, the model has 4 layer groups with [3,4,6,3] Bottlenecks in each, respectively. Thus, layer 0 means the output of the truncated model comes from the $1^{st}$ Bottleneck of layer group 2. Layer 12 means the output comes from the $3^{rd}$ Bottleneck of layer group 4, and layer 13 means the output comes from the final FC layer (i.e. output layer) of the model.

# B. FULL TARGETED TRANSFER RESULTS

Figure 7 shows the full targeted attack transfer results from which the results in Figure 2 were extracted. These full results include two additional metrics of attack success. Untargeted Transfer Rate (uTR) is the rate at which examples that fool the whitebox also fool the blackbox (encodes likelihood of misclassification). Targeted Transfer Rate (tTR) is the rate at which successful targeted examples on the whitebox are also successful targeted examples on the blackbox (encodes likelihood of targeted misclassification). Error and tSuc are described in Section 4.

![](images/fa2308ef27c79d36afb1bda82b944e9755baa9af320d7909d98b9e324bee4cda.jpg)
Figure 7: Full targeted adversarial attack transfer results. Each row is a unique transfer scenario and each column is a different attack success metric. The x-axis of each plot is the layer depth at which the adversarial example was generated. Note, the top two rows are transfers from the DN121 whitebox model, the middle two rows are from the VGG19 whitebox model, and the bottom two rows are from the RN50 whitebox.

# C. UNTARGETED FEATURE DISTRIBUTION ATTACKS

The goal of an untargeted attack is to generate an adversarial noise $\delta$ such that, when it is added to a clean sample $x$ of class $y_{src}$, the classification result of $x + \delta$ is not $y_{src}$. The key intuition for feature distribution-based untargeted attacks is that if a sample's features are made to be outside of the feature distribution of class $y_{src}$ at some layer of intermediate feature space, then it will likely not be classified as $y_{src}$.

$uFDA$ The first untargeted attack variant is $uFDA$, which is described as

$$
\min_{\delta}\; p(y = y_{src} \,|\, f_l(x + \delta)).
$$

$uFDA$ minimizes the probability that the layer $l$ features of the perturbed sample $x + \delta$ are from the source class $y_{src}$ distribution.
Unlike the targeted attacks, which drive samples towards high-confidence regions of a target class feature distribution, this objective drives the sample towards low-confidence regions of the source class feature distribution.

$uFDA + fd$ The second untargeted variant, $uFDA + fd$, is described as

$$
\min_{\delta}\; p(y = y_{src} \,|\, f_l(x + \delta)) - \eta \frac{\| f_l(x + \delta) - f_l(x) \|_2}{\| f_l(x) \|_2}.
$$

$uFDA + fd$ also carries a feature disruption term, so that the objective drives the perturbed sample towards low-confidence regions of the source class feature distribution and maximal distance from the original sample's feature representation.

$fd$-only The final untargeted attack, $fd$-only, is described as

$$
\max_{\delta} \frac{\| f_l(x + \delta) - f_l(x) \|_2}{\| f_l(x) \|_2}.
$$

Notice, $fd$-only is simply the feature disruption term and is a reasonable standalone untargeted attack objective, because making features maximally different may intuitively cause misclassification.

To test attack success, we generate untargeted adversarial examples from both DN121 and RN50 whiteboxes and test transfers to a VGG19 blackbox model. It is common to evaluate untargeted attacks with a tighter noise constraint (Kurakin et al., 2018; Dong et al., 2018) as the task is simpler, so in these tests we use $\ell_{\infty}$ $\epsilon = 4/255$ and $\epsilon = 8/255$ (rather than $\epsilon = 16/255$ used for targeted tests). For baselines, we use the Madry et al. (2018) random-start PGD attack (upgd) and the Dong et al. (2018) competition-winning momentum iterative attack (umim). Similar to the layer-wise targeted evaluations, each "clean" source sample belongs to the same set of 10 previously modeled classes. Figure 8 shows the error rates versus layer depth for the attacks.
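The three untargeted objectives can be written compactly as a single helper returning the quantity to be minimized over the perturbation. This is a sketch with hypothetical names and toy stand-in models, following the equations above:

```python
import torch
import torch.nn as nn

def untargeted_objectives(f_l, g_src, x, x_adv, eta=1e-6):
    """Values to be minimized for the three untargeted variants; g_src is
    the source-class auxiliary model (illustrative sketch)."""
    h, h_adv = f_l(x), f_l(x_adv)
    p_src = g_src(h_adv)                      # p(y = y_src | f_l(x + delta))
    fd = (h_adv - h).norm(p=2) / h.norm(p=2)  # feature disruption ratio
    return {"uFDA": p_src, "uFDA+fd": p_src - eta * fd, "fd-only": -fd}

# Toy usage with stand-in models.
f_l = nn.Sequential(nn.Flatten(), nn.Linear(12, 8))
g_src = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())
x = torch.rand(1, 12)
losses = untargeted_objectives(f_l, g_src, x, x + 0.03 * torch.randn_like(x))
```

Minimizing `fd-only` is equivalent to maximizing the feature disruption term, matching the $\max_\delta$ formulation above.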
![](images/74afcdd4c9f6ae2565dc81db0ae7146f74c3006f37e45d66c0bc270136924f37.jpg)
![](images/3f09eb6d75dbc2e4f4b2797345fcd95986c3fd4524382a5d2332374ce8cf3f36.jpg)
Figure 8: Error versus layer depth plots caused by untargeted adversarial attacks for DN121 $\rightarrow$ VGG19 and RN50 $\rightarrow$ VGG19 transfer scenarios at two different attack strengths $\epsilon = 4, 8$.

As expected, the error rate increases with epsilon, and the layer depth at which feature-based attacks are generated has a large impact on attack success rate. In general, $uFDA + fd$ is the top performer, followed by $fd$-only, then $uFDA$. However, $uFDA$ often under-performs the umim baseline, further indicating that for adversarial attacks in feature space it is beneficial to include a term that prioritizes feature disruption (e.g. $uFDA + fd$ & $fd$-only).

On average across models, at $\epsilon = 4/255$, the optimal-layer $uFDA + fd$ has an untargeted error rate of $37\%$, which is $9\%$ higher than the best baseline. At $\epsilon = 8/255$, the optimal-layer $uFDA + fd$ has an untargeted error rate of $79\%$, which is $27\%$ higher than the best baseline. Also, both whitebox models perform similarly in terms of attack success rate; however, the performance of $fd$-only varies between the two (especially at $\epsilon = 8/255$). Surprisingly, $fd$-only, which simply disrupts the original feature map, is the optimal attack for the DN121 whitebox (by a small margin). Finally, note that the optimal transfer layers from the targeted attacks (i.e. $DN121_{l=7}$ and $RN50_{l=8}$) are also high-performing layers for the untargeted attacks.

# D. ADVERSARIAL EXAMPLE GENERATION PROCESS

Recall, because the auxiliary models are NNs, the optimization objectives described for both the targeted and untargeted attacks can be solved with an iterative gradient descent procedure.
For any version of the FDA attacks, we first build a "composite" model which includes the truncated whitebox model $f_l$ and the appropriate auxiliary model $g_{l,c}$, as shown in Figure 1 (bottom). An attack loss function $L_{FDA}$ is then defined, which includes a BCELoss term and any additional term, which is trivially incorporated (e.g. the feature disruption term). We then iteratively perturb the source image for $K$ iterations using the sign of a collected momentum term. Similar to Dong et al. (2018) and Inkawhich et al. (2019), momentum is calculated as

$$
m_{k+1} = m_k + \frac{\nabla_{I_k} L_{FDA}(I_k; \theta)}{\| \nabla_{I_k} L_{FDA}(I_k; \theta) \|_1},
$$

where $m_0 = 0$ and $I_k$ is the perturbed source image at iteration $k$. The perturbation method for this $\ell_{\infty}$-constrained attack is then

$$
I_{k+1} = \mathrm{Clip}\left( I_k - \alpha \cdot \mathrm{sign}(m_{k+1}),\, 0,\, 1 \right).
$$

In this work, all attacks perturb for $K = 10$ iterations with $\alpha = \epsilon / K$.

# E. ADDITIONAL SALIENCY

![](images/655e39e5d050d26b87fceda5ed815e0ff89f3d7fbe7a4dd1f77ba97eca468299.jpg)
Figure 9: SmoothGrad saliency maps for RN50 auxiliary models.
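Returning to the generation process of Appendix D, the momentum accumulation and clipped signed update can be sketched as below. The `composite_loss` callable is a hypothetical stand-in for $L_{FDA}$ evaluated through the truncated whitebox model and auxiliary model; its construction is attack-specific:

```python
import torch

def fda_attack(composite_loss, x, eps=16 / 255, K=10):
    """Momentum-iterative, l_inf-constrained generation loop (Appendix D sketch).
    Since alpha = eps / K, the K signed steps keep the total perturbation
    within the eps ball even before the [0, 1] clip."""
    alpha = eps / K
    m = torch.zeros_like(x)
    x_adv = x.clone()
    for _ in range(K):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(composite_loss(x_adv), x_adv)
        m = m + grad / (grad.abs().sum() + 1e-12)  # l1-normalized momentum
        x_adv = (x_adv.detach() - alpha * m.sign()).clamp(0.0, 1.0)
    return x_adv

# Toy demonstration with a linear stand-in loss.
x = torch.rand(1, 3, 8, 8)
w = torch.randn(3 * 8 * 8)
adv = fda_attack(lambda z: (z.flatten(1) @ w).sum(), x)
```

Descending the sign of the accumulated momentum (rather than the raw gradient) is the stabilization inherited from Dong et al. (2018).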
\ No newline at end of file diff --git a/transferableperturbationsofdeepfeaturedistributions/images.zip b/transferableperturbationsofdeepfeaturedistributions/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..33458b7901e8bdaa60886b861ebbaa5b16f8a71e --- /dev/null +++ b/transferableperturbationsofdeepfeaturedistributions/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5e574399bec45eb438a698f691ff8e36cbbbf3935ed64c6f6a8b731731c9e43 +size 964988 diff --git a/transferableperturbationsofdeepfeaturedistributions/layout.json b/transferableperturbationsofdeepfeaturedistributions/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1d0726be87038bec7125bbdd450e6aa127c91d8b --- /dev/null +++ b/transferableperturbationsofdeepfeaturedistributions/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5376216229a35423a05edcf36bb3f48369e798a47c133cf23bcbb22e6b1e3531 +size 455342 diff --git a/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_content_list.json b/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ef436acd4ac31f58e8fdc2b34d2a141ccd57f0d5 --- /dev/null +++ b/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07c2ca9d3c2108f39496fdc6390f31c8996c36b15c017b659207191725fae561 +size 142267 diff --git a/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_model.json b/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bdbecb7b3a748f1c00cfdded942018f259a4eb51 
--- /dev/null +++ b/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f938410141e35523b0b5d87f768de16ed89a5c55115554bfb93f1c5ec531122 +size 170583 diff --git a/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_origin.pdf b/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1549e5ae19966ed33ed97f6cdcc0205b7126771a --- /dev/null +++ b/transferringoptimalityacrossdatadistributionsviahomotopymethods/ce8bb4be-845b-469e-8681-30336a27bd7c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdc1f4f8a8d681b70064a3e96c78dd9be4664464507c1507531f526cfb18b5b6 +size 5154546 diff --git a/transferringoptimalityacrossdatadistributionsviahomotopymethods/full.md b/transferringoptimalityacrossdatadistributionsviahomotopymethods/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7a541d4f4900c8e112202b0832ca1cdf236e17f8 --- /dev/null +++ b/transferringoptimalityacrossdatadistributionsviahomotopymethods/full.md @@ -0,0 +1,767 @@ +# TRANSFERRING OPTIMALITY ACROSS DATA DISTRIBUTIONS VIA HOMOTOPY METHODS + +Matilde Gargiani1, Andrea Zanelli2, Quoc Tran-Dinh3, Moritz Diehl2,4, Frank Hutter1,5 + +1Department of Computer Science, University of Freiburg {gargiani, fh}@cs.uni-freiburg.de +$^{2}$ Department of Microsystems Engineering (IMTEK), University of Freiburg {andrea.zanelli, moritz.diehl}@imtek.uni-freiburg.de +$^{3}$ Department of Statistics and Operations Research, University of North Carolina quoctd@email.unc.edu +$^{4}$ Department of Mathematics, University of Freiburg +5 Bosch Center for Artificial Intelligence + +# ABSTRACT + +Homotopy methods, also known as continuation methods, are a powerful mathematical tool to 
efficiently solve various problems in numerical analysis. In this work, we propose a novel homotopy-based numerical method that can be used to gradually transfer optimized parameters of a neural network across different data distributions. This method generalizes the widely-used heuristic of pre-training parameters on one dataset and then fine-tuning them on another dataset of interest. We conduct a theoretical analysis showing that, under some assumptions, the homotopy method combined with Stochastic Gradient Descent (SGD) is guaranteed to converge in expectation to an $r_{\theta}$-optimal solution for a target task when started from an expected $r_{\theta}$-optimal solution on a source task. Empirical evaluations on a toy regression dataset and for transferring optimized parameters from MNIST to Fashion-MNIST and CIFAR-10 show substantial improvement in numerical performance over random initialization and pre-training.

# 1 INTRODUCTION

Homotopy methods (Allgower & Georg, 1980), also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis (e.g., Tran-Dinh et al. (2012), Zanelli et al. (2019)). The core idea consists in sequentially solving a series of parametric problems, starting from an easy-to-solve problem and progressively deforming it, via a homotopy function, into the target one. Homotopy methods are suitable for solving complex non-convex optimization problems where little or no prior knowledge about the location of the solutions is available. In addition, in contrast to state-of-the-art algorithms in deep learning (e.g., Bottou (2010), Duchi et al. (2011), Kingma & Ba (2015)), these methods often achieve global convergence guarantees by exploiting only local structures of the problem. Concepts related, to varying degrees, to homotopy methods, such as curriculum learning and warm-starting, have been explored both in the deep learning (e.g., Gulcehre et al.
(2016), Mobahi (2016), Gulcehre et al. (2017)) and in the reinforcement learning (e.g., Narvekar (2017)) communities.

In this work, we propose a novel homotopy-based numerical method to transfer knowledge regarding the localization of a minimizer across different task distributions in deep learning. This method gradually tracks a neural network's (close-to-)optimal parameters from one data distribution to another via the homotopy method (Allgower & Georg, 1980) and can be interpreted as a generalization of the very common heuristic of fine-tuning a pre-trained network. After discussing related work (Section 2) and background on homotopy methods (Section 3), our contributions are as follows:

1. We provide a general theoretical analysis of the homotopy method when using SGD as an iterative solver, proving that, under some local assumptions, it tracks in expectation an $r_{\theta}$-optimal solution from the source task to the target task (Section 4).

2. We introduce homotopy functions for transferring optimality across data distributions for supervised regression and classification tasks (Section 5).
3. For a toy regression dataset and for transferring optimized parameters from MNIST to Fashion-MNIST and from MNIST to CIFAR-10, we show that our method obtains up to two orders of magnitude better numerical performance than random initialization and a substantial improvement in numerical performance over pre-training (Section 6).

# 2 RELATED WORK

Deep neural networks have established a new state of the art in many applications. Despite their great success and the many theoretical studies published in recent years (e.g., Balduzzi et al. (2017), Li et al. (2018), Feizi et al. (2018), Kunin et al. (2019)), training these deep models remains a major challenge. Various stochastic optimization algorithms (e.g., Duchi et al. (2011), Kingma & Ba (2015), Reddi et al. (2018)) and initialization heuristics (e.g., Daniely et al.
(2016), Klambauer et al. (2017), Hanin & Rolnick (2018)) have recently been suggested in order to improve and speed up the training procedure. We now briefly discuss the state-of-the-art deep learning optimization techniques and initialization strategies that are most related to the proposed homotopy-based method, drawing connections with existing and ongoing research in the field.

Curriculum Learning. First introduced by Bengio et al. (2009) and then extended in different works (e.g., Graves et al. (2017), Weinshall et al. (2018), Hacohen & Weinshall (2019)), curriculum learning can also be listed among the optimization heuristics proposed to alleviate the complexity of solving high-dimensional and non-convex problems. In particular, taking inspiration from the fact that humans and animals learn "better" when exposed to progressively more complex situations in an organized manner, curriculum learning techniques guide the training by starting with "easy-to-learn" samples and progressively introducing more "complex-to-learn" ones. This guided learning process can also be rephrased in a homotopy-like fashion (see Algorithm 1) as solving a sequence of optimization problems where the target training distribution gradually changes from considering only the "easy" examples to the full original training distribution.

Meta-Learning and Transfer-Learning. Due to the massive amount of computational resources required by the development of modern deep learning applications, the community has started to explore the possibility of re-using learned parameters across different tasks, leading to the development of many new transfer-learning (e.g., Rohrbach et al. (2013), Wang & Schneider (2014), Cui et al. (2019)) and meta-learning (e.g., Schmidhuber (1987), Hochreiter et al. (2001), Finn et al. (2017), Zintgraf et al. (2019)) algorithms. The simplest way to transfer knowledge across different tasks consists in using warm-start initialization.
This heuristic is widely used in computer vision applications, where it is also known as the fine-tuning technique (e.g., Krizhevsky et al. (2012), Yosinski et al. (2014), Reyes et al. (2015), Käding et al. (2016)). So far, there is no rigorous explanation of why and when fine-tuning works. However, numerous empirical evaluations on different benchmarks show that warm-starting the parameters of deep models often leads to faster convergence and better generalization than using random initialization.

# 3 BACKGROUND

In this work, we will focus on solving problems of the form

$$
\theta^* \in \arg \min_{\theta \in \mathbb{R}^d} \underbrace{\frac{1}{N} \sum_{j=1}^{N} \ell_j(\theta)}_{:= J(\theta)}, \tag{1}
$$

where $J: \mathbb{R}^d \to \mathbb{R}$ is our target objective function and $\theta^{*}$ is a minimizer. Problems as described in (1) arise, for instance, in classification and regression scenarios.

In the following section we briefly review the main concepts of homotopy and continuation methods, on which the proposed technique to solve problem (1) is based.

# 3.1 HOMOTOPIC FUNCTIONS AND CONTINUATION METHODS FOR OPTIMIZATION

Given two topological spaces $Z$ and $Y$, a homotopy is a continuous deformation between two continuous functions $g, f: Z \to Y$ that fulfills certain properties. We can formalize this concept with the following definition.

Definition 3.1. Let $g, f: Z \to Y$ be continuous maps on the topological spaces $Z, Y$. A homotopy from $g$ to $f$ is a continuous function $H: Z \times [0,1] \to Y$ such that

$$
H(z, 0) = g(z), \quad H(z, 1) = f(z), \quad \forall z \in Z. \tag{2}
$$

If such a function $H$ exists, $g$ is said to be homotopic to $f$, and this relation is denoted by $g \simeq f$.

It is straightforward to show that, for a convex set $A \subseteq \mathbb{R}^n$, any two continuous maps $g, f: Z \to A$ are homotopic (see (Suciu, 2016) for a derivation).
From this fact it follows that any two continuous real-valued functions are homotopic. See Figures 4a-4b in the appendix for a graphical representation of two different homotopy maps between the probability density functions of two Gaussian distributions, where $\lambda \in [0,1]$ denotes the homotopy parameter. See also Section A in the appendix for details on some of the main properties of homotopic functions.

Continuation methods (also known as homotopy methods) are a widely used mathematical tool to solve complex non-convex optimization problems where little or no prior knowledge regarding the localization of optimal solutions is available (see (Allgower & Georg, 1980) for a full characterization of continuation methods). The core idea of a homotopy approach consists in defining a homotopy function $H(\theta, \lambda)$ with $\lambda \in [0,1]$ such that $H(\theta, 0) = J_0(\theta)$ is a trivial-to-optimize smooth map (or a smooth map for which a surrogate $\theta_0$ of an optimal solution is available) and $H(\theta, 1) = J(\theta)$ is our target objective function. Instead of directly addressing problem (1), we approximately and sequentially solve $\gamma > 0$ parametric optimization problems of the form

$$
\theta_i^* \in \arg \min_{\theta \in \mathbb{R}^d} \underbrace{\frac{1}{N} \sum_{j=1}^{N} \ell_j(\theta, \lambda_i)}_{:= H(\theta, \lambda_i)}, \tag{3}
$$

for increasing values of the parameter $\lambda_i$, for $i = 1, \dots, \gamma$, warm-starting each problem with the previously derived approximate solution. Conceptually, Algorithm 1 describes the basic steps of a general homotopy algorithm.
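As a companion to the pseudocode, the loop of Algorithm 1 can be sketched in a few lines of Python; `H` and `iterative_solver` are hypothetical stand-ins for the homotopy objective and for any inner solver (e.g., a few SGD steps), not code from the paper.

```python
def homotopy_solve(H, iterative_solver, theta0, gamma=10, k=200):
    """Conceptual homotopy method (Algorithm 1).

    H(theta, lam)              -- homotopy objective: H(., 0) easy, H(., 1) the target.
    iterative_solver(theta, k, f) -- runs k iterations on objective f starting from theta.
    theta0                     -- approximate minimizer of H(., 0), the initial warm start.
    """
    theta = theta0
    lam, dlam = 0.0, 1.0 / gamma
    for _ in range(gamma):
        lam += dlam  # increase the homotopy parameter
        # warm-start the next parametric problem with the previous approximate solution
        theta = iterative_solver(theta, k, lambda th: H(th, lam))
    return theta  # approximate solution of the target problem H(., 1)


def gd_solver(theta, k, f, lr=0.1, h=1e-5):
    """Toy inner solver: gradient descent with a finite-difference gradient."""
    for _ in range(k):
        grad = (f(theta + h) - f(theta - h)) / (2 * h)
        theta -= lr * grad
    return theta


# Toy check: H(theta, lam) = (theta - 3*lam)^2 tracks the minimizer from 0 to 3.
theta_hat = homotopy_solve(lambda th, lam: (th - 3.0 * lam) ** 2,
                           gd_solver, theta0=0.0, gamma=5, k=50)
```

On this toy problem the warm start keeps each parametric problem close to its minimizer, which is exactly the tracking property analyzed in Section 4.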
Algorithm 1 A Conceptual Homotopy Algorithm
1: $\theta_0 \approx \theta_0^* \in \arg \min_{\theta} H(\theta, 0)$
2: $\gamma > 0$, $\gamma \in \mathbb{Z}$
3: $\lambda_0 = 0$, $\Delta \lambda = 1 / \gamma$
4: $k > 0$, $k \in \mathbb{Z}$
5: for $i = 1, \dots, \gamma$ do
6:   $\lambda_i \gets \lambda_{i-1} + \Delta \lambda$
7:   $\theta_i \gets$ ITERATIVESOLVER($\theta_{i-1}$, $k$, $H(\theta, \lambda_i)$)
8: return $\theta_\gamma$

Under appropriate assumptions, if the increment $\Delta \lambda$ is sufficiently small, then the iterative procedure in Algorithm 1 will converge to a neighborhood of an optimal solution of the target objective $J$ that depends in some sense on the number of iterations $k > 0$ performed (Allgower & Georg, 1980). Many different variations of Algorithm 1 exist. In particular, different update schemes for the homotopy parameter can be adopted (e.g., a geometric or sublinear rate of increase), various iterative solvers can be used under distinct and specific assumptions, and, finally, diverse levels of approximation for the solutions $\theta_i^*$ can be considered, i.e., different $k$ values.

Before going into the details of two concrete formulations of the conceptual homotopy method outlined in Algorithm 1 (see Section 5), applied to transferring optimality knowledge in regression and classification scenarios, we provide a general theoretical analysis in a simplified setting.

# 4 THEORETICAL ANALYSIS

In this section, we provide a local theoretical analysis of homotopy methods when Stochastic Gradient Descent (SGD) (Bottou, 2010) is used as the iterative solver in Algorithm 1. The locality of the analysis consists in the definition of hyperspheres of radius $B \geq 0$ around the optimal solutions of each homotopy problem $H(\theta, \lambda_i)$ within which it is possible to exploit certain structures of the problem.
In particular, we approximately and sequentially solve $\gamma > 0$ unconstrained optimization problems of the form

$$
\theta_i^* \in \arg \min_{\theta \in \mathbb{R}^d} H(\theta, \lambda_i), \quad \forall i = 1, \dots, \gamma, \tag{4}
$$

where $H(\theta, \lambda_i)$ fulfills the assumptions described in Section 4.1 and $\lambda_i \in [0,1]$. Let $\theta_i$ be an approximate solution of the problem associated with parameter $\lambda_i$, derived by applying $k > 0$ iterations of SGD (in the limit, $k = 1$), and also the starting point for the problem associated with parameter $\lambda_{i+1}$, $\forall i = 1, \dots, \gamma - 1$. In addition, let $\theta_0$ denote an approximate solution for the source task, i.e. $\lambda_0 = 0$, that is used as the initial point for the problem associated with $\lambda_1$. In this section we characterize the maximum allowed variation of the homotopy parameter in order for the method to be able to track in expectation an $r_\theta$-optimal solution from the source to the target task.

# 4.1 ASSUMPTIONS

We now state the fundamental assumptions for our general local theoretical analysis, on which all the derivations in Sections 4.2 and 4.3 rely. In addition, throughout the analysis the $\ell$-functions in (3) are implicitly assumed to be differentiable in $\theta$. We start by defining the regions around the optimal solutions of the homotopy problems where the analysis is conducted.

Definition 4.1. Given $\theta_i^*$ and $B \geq 0$, let $\mathcal{B}_{B,\theta_i^*}$ be the following set of vectors

$$
\mathcal{B}_{B,\theta_i^*} := \left\{ \theta \text{ s.t. } \|\theta - \theta_i^*\| \leq B \right\}, \quad \forall i = 0, \dots, \gamma.
$$

Assumption 4.2 (local $L$-smoothness).
Assume that there exists a constant $L > 0$ such that + +$$ +\left\| \nabla_ {\theta} H (\tilde {\theta}, \lambda_ {i}) - \nabla_ {\theta} H (\hat {\theta}, \lambda_ {i}) \right\| \leq L \| \tilde {\theta} - \hat {\theta} \|, \quad \forall \tilde {\theta}, \hat {\theta} \in \mathcal {B} _ {B, \theta_ {i} ^ {*}}, \forall i = 0, \dots , \gamma . \tag {5} +$$ + +Corollary 4.2.1. If $H$ is locally $L$ -smooth in $\theta$ , then the following inequality holds + +$$ +H \left(\theta_ {i} ^ {*}, \lambda_ {i}\right) - H (\hat {\theta}, \lambda_ {i}) \leq - \frac {1}{2 L} \| \nabla_ {\theta} H (\hat {\theta}, \lambda_ {i}) \| ^ {2}, \quad \forall \hat {\theta} \in \mathcal {B} _ {B, \theta_ {i} ^ {*}}, \forall i = 0, \dots , \gamma . \tag {6} +$$ + +Proof. See Lemma 1.1 in (Gower, 2018) for a proof. + +Assumption 4.3 (local $\mu$ -strong convexity). Assume that there exists $\mu > 0$ such that + +$$ +H (\tilde {\theta}, \lambda_ {i}) \geq H (\hat {\theta}, \lambda_ {i}) + \nabla_ {\theta} H (\hat {\theta}, \lambda_ {i}) ^ {T} (\tilde {\theta} - \hat {\theta}) + \frac {\mu}{2} \| \tilde {\theta} - \hat {\theta} \| ^ {2}, \quad \forall \tilde {\theta}, \hat {\theta} \in \mathcal {B} _ {B, \theta_ {i} ^ {*}}, \forall i = 0, \dots , \gamma . \tag {7} +$$ + +Assumption 4.4 (bounded $\ell$ -derivative). Assume that there exists $\nu > 0$ such that + +$$ +\left\| \nabla_ {\theta} \ell_ {j} (\hat {\theta}, \lambda_ {i}) \right\| \leq \nu , \quad \forall \hat {\theta} \in \mathcal {B} _ {B, \theta_ {i} ^ {*}}, \forall i = 0, \dots , \gamma , \forall j = 1, \dots , N. \tag {8} +$$ + +Assumption 4.5 (local bounded "variance"). Let $g(\hat{\theta}, \lambda_i)$ denote an unbiased estimate of the gradient $\nabla_{\theta} H(\hat{\theta}, \lambda_i)$ . 
Assume that there exists a constant $C \geq 0$ such that the following bound on the expected squared norm of the estimate of the gradient holds

$$
\mathbb{E}\left[ \| g(\hat{\theta}, \lambda_i) \|^2 \right] \leq C^2, \quad \forall \hat{\theta} \in \mathcal{B}_{B, \theta_i^*}, \forall i = 0, \dots, \gamma. \tag{9}
$$

Remark 4.6. Assumption 4.5 is standard for proving error bounds on SGD iterates (see (Schmidt, 2014)). In addition, notice that, since

$$
\mathbb{E}\left[ \| g(\hat{\theta}, \lambda_i) \|^2 \right] = \operatorname{Var}\left( \| g(\hat{\theta}, \lambda_i) \| \right) + \mathbb{E}\left[ \| g(\hat{\theta}, \lambda_i) \| \right]^2,
$$

the constant $C^2$ bounds the sum of the variance and the squared expected value of the norm of the gradient estimate. It therefore decreases as the iterates approach a minimizer and as the noise in the estimate of the gradient is reduced. In the limit (i.e. exact gradient and convergence to a minimizer), $C = 0$.

Recall that $\theta^*(\lambda_i) \equiv \theta_i^*$.

Assumption 4.7 (strong regularity). Assume that there exists $\delta > 0$ such that the following inequality holds

$$
\left\| \theta^*(\lambda_{i+1}) - \theta^*(\lambda_i) \right\| \leq \delta \left| \lambda_{i+1} - \lambda_i \right|, \quad \forall i = 0, \dots, \gamma - 1.
$$

Remark 4.8. Assumption 4.7 follows directly from the application of the Implicit Function Theorem by introducing some milder assumptions on the problem structure (see Lemma 2.1.8 in (Allgower & Georg, 1980)).

# 4.2 FUNDAMENTAL THEORETICAL PRELIMINARIES

Before proceeding with the main theoretical contributions, we extend the existing results in the literature on global error bounds for the iterates of Stochastic Gradient Descent so that they can be applied when the underlying assumptions are only required to hold locally.
The derived local error bounds for SGD iterates are used in Proposition 4.11 and Theorem 4.12.

Proposition 4.9. Let $\theta_i \in \mathcal{B}_{B,\theta_i^*}$ be the starting point for the problem described in (3), let $\theta_i := \theta_{i,0}$ denote this starting point, and let $\theta_{i+1} := \theta_{i,k}$ denote the iterate after $k > 0$ SGD steps, where an SGD step is defined as

$$
\theta_{i,k} = \theta_{i,k-1} - \alpha g(\theta_{i,k-1}, \lambda_i).
$$

Under Assumptions 4.2–4.5, and by setting the batch size $0 < M \leq N$ to a value such that $\frac{(N - M)}{N} \leq \frac{(1 - \kappa_d)}{2 \alpha \nu} B$ with $\kappa_d = \sqrt{(1 - \alpha \mu)}$, and the learning rate $\alpha$ to a constant value such that $0 < \alpha \leq \min\left( \frac{1}{2\mu}, \frac{1}{L} \right)$, the following error bound on the iterates holds

$$
\mathbb{E}\left[ \| \theta_{i+1} - \theta_{i+1}^* \|^2 \right] \leq (1 - 2 \alpha \mu)^k \cdot \mathbb{E}\left[ \| \theta_i - \theta_{i+1}^* \|^2 \right] + \frac{\alpha C^2}{2 \mu}. \tag{10}
$$

Proof. See Section D in the appendix.

Remark 4.10. The expectation in (10) is taken w.r.t. all the random variables, i.e. the estimates of the gradients and the initial point $\theta_0$, involved in the optimization procedure up to the current iteration $i + 1$ of the algorithm.

# 4.3 MAIN THEORETICAL CONTRIBUTIONS

Under the considered assumptions, and by exploiting the previously derived results on local error bounds for SGD iterates, we show that, if the approximate solution $\theta_i$ for the problem with parameter $\lambda_i$ is "sufficiently close" to a minimizer $\theta_i^*$ in expectation, i.e.
$\mathbb{E}\left[\| \theta_i - \theta_i^*\| ^2\right]\leq r_\theta^2$ , then, for a "sufficiently small" change in the homotopy parameter, the same vicinity to a minimizer $\theta_{i + 1}^{*}$ is preserved in expectation for the approximate solution $\theta_{i + 1}$ of the problem with parameter $\lambda_{i + 1}$ , i.e. $\mathbb{E}\left[\| \theta_{i + 1} - \theta_{i + 1}^{*}\|^{2}\right]\leq r_{\theta}^{2}$ . In particular, with Theorem 4.12 we characterize the maximum allowed variation of the homotopy parameter based on the properties of the parametric problems and the convergence characteristics of the adopted iterative solver, i.e. rate of convergence and number of iterations. + +First, in order to apply the results derived in Theorem 4.12, given a realization of $\theta_{i} \in \mathcal{B}_{B,\theta_{i}^{*}}$ , we have to derive the conditions on $\| \theta_{i} - \theta_{i}^{*}\|$ such that $\| \theta_{i} - \theta_{i + 1}^{*}\| \leq B$ . In addition, we derive the necessary conditions in order to apply these results recursively across the iterations of Algorithm 1. + +Proposition 4.11. Let $\theta_{i} \in \mathcal{B}_{B,\theta_{i}^{*}}$ and $|\lambda_{i} - \lambda_{i + 1}| \leq \epsilon$ , with $0 \leq \epsilon \leq \frac{B}{\delta}$ . If $\| \theta_{i} - \theta_{i}^{*} \| \leq B - \delta \epsilon$ , then $\| \theta_{i} - \theta_{i + 1}^{*} \| \leq B$ . Moreover, let $\kappa_{d} = \sqrt{(1 - \alpha \mu)}$ and assume that + +$$ +\frac {(N - M)}{N} \leq \frac {(1 - \kappa_ {d} ^ {k}) (1 - \kappa_ {d}) B}{2 \alpha \nu}, +$$ + +and + +$$ +\epsilon \leq \frac {1}{\delta} \left((1 - \kappa_ {d} ^ {k}) B - \frac {(N - M)}{N} \frac {2 \alpha \nu}{(1 - \kappa_ {d})}\right). +$$ + +Then, after applying $k$ iterations of SGD, we obtain that + +$$ +\left\| \theta_ {i + 1} - \theta_ {i + 1} ^ {*} \right\| \leq B - \delta \epsilon . +$$ + +Proof. See Section E.1 in the appendix. 
![](images/c83ed4230d0ea7f02ea8ff322a89c42131ad2800a3ed4c20057cf35d380933ca.jpg)

See Figure 9 in the appendix for a graphical representation of the results derived in Proposition 4.11, where the continuous and dashed lines represent the circles of radius $B$ and $B - \delta \epsilon$, respectively.

Theorem 4.12. Consider Algorithm 1 with Stochastic Gradient Descent as solver, and let $k > 0$ be the number of iterations, $0 < \alpha \leq \min\left( \frac{1}{2\mu}, \frac{1}{L} \right)$ be the step size, and $0 < M \leq N$ be the batch size such that

$$
\frac{(N - M)}{N} \leq \frac{(1 - \kappa_d^k)(1 - \kappa_d) B}{2 \alpha \nu},
$$

where $\kappa_d = \sqrt{(1 - \alpha \mu)}$. Let $\theta_0 \in \mathcal{B}_{B - \delta \epsilon, \theta_0^*}$ and $r_\theta \in \mathbb{R}$ be such that

$$
r_\theta^2 \geq \frac{\alpha C^2}{2 \mu}. \tag{11}
$$

Then, if $\mathbb{E}\left[ \| \theta_i - \theta_i^* \|^2 \right] \leq r_\theta^2$ and $|\lambda_i - \lambda_{i+1}| \leq \tilde{\epsilon}$, where $\tilde{\epsilon} := \min \{\bar{\epsilon}, \epsilon\}$ with

$$
\bar{\epsilon} = -\frac{r_\theta}{\delta} + \frac{1}{\delta} \sqrt{\frac{r_\theta^2 - \alpha C^2 / 2 \mu}{(1 - 2 \alpha \mu)^k}}, \tag{12}
$$

the following inequality holds

$$
\mathbb{E}\left[ \| \theta_{i+1} - \theta_{i+1}^* \|^2 \right] \leq r_\theta^2. \tag{13}
$$

Proof. See Section E.2 in the appendix.

![](images/d866bc04d1b7306f7fd555bac0b161a6bbc0b122b9003e178fdd0bb85e5852af.jpg)

The results derived in Theorem 4.12 show that the homotopy method, used in combination with SGD, makes it possible to track in expectation an $r_\theta$-optimal solution across the parametric problems for "small enough" variations of the homotopy parameter, i.e. $\Delta \lambda \leq \tilde{\epsilon}$.
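To get a concrete feel for the bound in (12), one can evaluate $\bar{\epsilon}$ for some illustrative constants; all numerical values below are hypothetical, chosen only to satisfy the assumptions, and are not taken from the paper.

```python
import math

def max_homotopy_step(r, delta, alpha, mu, C, k):
    """Evaluate eps_bar from Eq. (12):
    eps_bar = -r/delta + (1/delta) * sqrt((r^2 - alpha*C^2/(2*mu)) / (1 - 2*alpha*mu)^k).

    r            -- tracking radius r_theta; must satisfy Eq. (11): r^2 >= alpha*C^2/(2*mu)
    delta        -- strong-regularity constant of Assumption 4.7
    alpha, mu, C -- step size, strong-convexity constant, gradient-norm bound
    k            -- number of SGD iterations per parametric problem
    """
    noise_floor = alpha * C ** 2 / (2 * mu)
    assert r ** 2 >= noise_floor, "Eq. (11) is violated"
    return -r / delta + math.sqrt((r ** 2 - noise_floor) / (1 - 2 * alpha * mu) ** k) / delta

# More SGD iterations per parametric problem permit a larger homotopy step:
eps_k10 = max_homotopy_step(r=0.5, delta=1.0, alpha=0.01, mu=1.0, C=1.0, k=10)
eps_k100 = max_homotopy_step(r=0.5, delta=1.0, alpha=0.01, mu=1.0, C=1.0, k=100)
```

With these constants, $\bar{\epsilon} \approx 0.05$ for $k = 10$ and $\bar{\epsilon} \approx 0.86$ for $k = 100$: the admissible step grows with $k$ and shrinks as the strong-regularity constant $\delta$ grows.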
Notice that $r_\theta$ can potentially be smaller than $B - \delta \epsilon$, and it must be larger than the radius of the "noise-dominant" hypersphere centered at the minimizers, i.e. $r_\theta^2 \geq \frac{\alpha C^2}{2 \mu}$. In particular, by exploiting the local structure of the parametric problems we derive the maximum allowed variation of the homotopy parameter across the iterations of Algorithm 1. The derived upper bound is inversely proportional to the strong regularity constant $\delta$ and depends on the number of iterations $k$ performed with SGD, so that the more iterations we perform on each parametric problem, the larger the allowed change in the homotopy parameter. Finally, notice that these results can be applied recursively across the parametric problems.

# 5 TRANSFERRING OPTIMALITY VIA HOMOTOPY METHODS

In this section we describe a possible application of homotopy methods to solving supervised regression and classification tasks. We address the case where deep neural networks are used as models. We start by introducing the problem framework of supervised learning, and then we propose two different homotopy functions for the regression and classification scenarios, respectively.

# 5.1 PROBLEM FORMULATION

Despite the generality of the proposed methodology, in this work we specifically address the supervised learning framework, in particular the case where the predictive model is a deep neural network $f(x; \theta)$ parameterized by $\theta \in \mathbb{R}^d$.

In the supervised learning scenario, independently of the type of task $t$, we typically have access to a training set $\mathcal{D}_t$ consisting of $N$ pairs of examples $(x_j, y_j)$. The goal of the learning process is to find a value of $\theta$ that minimizes an objective function measuring the discrepancy between the outputs produced by the network, $\hat{y} = f(x; \theta)$, and the target outputs $y$.
In particular, the learning process consists in minimizing the following empirical objective function

$$
J(\theta) := \frac{1}{N} \sum_{(x_j, y_j) \in \mathcal{D}_t} \ell\left( y_j, f(x_j; \theta) \right), \tag{14}
$$

whose non-convexity originates from the non-convexity of our model $f$.

In the classical setting, $J$ is chosen based on the KL divergence between the target data distribution $Q_{x,y}$, with density $q_{x,y} = q(y|x) q(x)$, and the learned data distribution $P_{x,y}(\theta)$, with density $p_{x,y} = p(y|x; \theta) q(x)$, where $p(y|x; \theta)$ is modeled via a neural network (Goodfellow et al., 2016). With the appropriate approximations, this leads to the following form for the objective function

$$
J(\theta) = \frac{1}{N} \sum_{(x_j, y_j) \in \mathcal{D}_t} q(y|x) \log \frac{q(y|x)}{p(y|x; \theta)}. \tag{15}
$$

# 5.2 HOMOTOPY FUNCTIONS ACROSS DATA DISTRIBUTIONS

Finding a value of $\theta$ that attains a local minimum of the objective function in (14) is often a hard optimization task, given the high dimensionality and non-convexity of the problem. In addition, prior knowledge regarding the localization of the solutions is rarely available. The complexity of minimizing such functions also depends in some non-trivial way on the task distribution $Q_{x,y}$ that is addressed (e.g., Ionescu et al. (2016), Zendel et al. (2017)). For some tasks, convergence to a good approximate solution is achieved after a few epochs, while for other tasks, orders of magnitude more iterations are required to reach the neighborhood of a solution. From this perspective, different heuristics have recently been proposed in an attempt to re-use, across different data distributions, the prior knowledge gained from approximately solving the learning problem associated with a certain task.
This raises the question of whether we can exploit easy-to-solve or already-solved tasks to speed up and improve the learning of unsolved hard tasks. The method we propose in this paper addresses this question using a rigorous and well-established mathematical framework, with the goal of speeding up the learning process in the presence of hard-to-solve tasks.

From the perspective of homotopy methods, this goal can be achieved under some assumptions by defining a homotopy transformation between the starting and target tasks and by following the procedure described in Algorithm 1. Despite the flexibility and generality of the method, in this work we focus only on homotopy deformations across different task distributions; similar transformations can be applied in numerous other manners that are also worth exploring, e.g., progressively modifying the architecture of the network or the weights of the objective function terms.

Let $s$ be the source task with training data $\mathcal{D}_s$ of pairs $(x_s, y_s) \sim Q_{x_s, y_s}$, for which a good approximate solution $\theta_s^*$ for the minimization of the objective in (14) is available (or cheaply computable), and let $t$ denote the target task with training data $\mathcal{D}_t$ of pairs $(x_t, y_t) \sim Q_{x_t, y_t}$, whose conditional distribution we aim to learn. We propose two different homotopy deformations from task $s$ to task $t$, for regression and classification, respectively.

# 5.2.1 SUPERVISED REGRESSION

In the supervised regression scenario, by modeling the density of the conditional learned distribution as $p(y|x; \theta) = \mathcal{N}\left( y; f(x, \theta), \sigma^2 I \right)$ and using the approximate KL divergence objective function described in (15), we recover the mean squared error as the minimization criterion.
The proposed homotopy deformation is based on the following equations

$$
y_\lambda | x = (1 - \lambda) y_s | x + \lambda y_t | x, \tag{16}
$$

$$
p(y_\lambda | x) = \mathcal{N}\left( y_\lambda; f(x; \theta), \sigma^2 I \right). \tag{17}
$$

Notice that the transformation described in (16) preserves the unimodality of the conditional distribution (see the caption of Figures 4a and 4b in the appendix), and, when used in combination with the objective function defined in Equation (15), leads to the minimization w.r.t. $\theta$ of

$$
H(\theta, \lambda) := E_{(x, y_\lambda)} \| (1 - \lambda)(y_s - f(x; \theta)) + \lambda (y_t - f(x; \theta)) \|^2. \tag{18}
$$

See Figure 6a in the appendix for a graphical representation of this homotopy deformation when applied to gradually transform a one-dimensional sine wave function with a frequency of 1 radian into a one-dimensional sine wave function with a frequency of 137 radians. A downside of this homotopy deformation is that the same support for $x$ is required (the absence of the subscripts $s$ and $t$ on $x$ indicates that the same realization of $x_s$ and $x_t$ has to be considered). Alternatively, it is possible to approximate (16) by using a Gaussian filter (see Figure 6b and Section B in the appendix).

# 5.2.2 SUPERVISED CLASSIFICATION

In the case of supervised classification, by modeling the density of the conditional learned distribution as $p(y|x; \theta) = \text{Multinoulli}(y; f(x; \theta))$ and using the approximate KL divergence objective function described in (15), we recover the cross-entropy loss function (Goodfellow et al., 2016).
A possible homotopy deformation for the classification case consists in applying the following transformations

$$
x_\lambda = (1 - \lambda) x_s + \lambda x_t, \tag{19}
$$

$$
y_\lambda | x_\lambda = (1 - \lambda) y_s | x_s + \lambda y_t | x_t, \tag{20}
$$

which corresponds to the use of probabilistic labels. See Figure 8 in the appendix for a graphical representation of the proposed homotopy deformation. The corresponding label vector for the deformed image represented in Figure 8b is $y_{0.5} = [0, 0, 0.5, 0, 0, 0.5, 0, 0, 0, 0]$, given that $\lambda = 0.5$ and that the sampled realizations of $x_s$ and $x_t$, represented in Figures 8a and 8c, belong to classes 2 and 5, respectively.

# 6 EXPERIMENTAL EVALUATION

In this section, we present experimental evaluations of homotopy methods applied to supervised regression and classification tasks. As homotopy functions we adopt the ones discussed in Section 5.2. We empirically show that homotopy methods outperform random and warm-start initialization schemes in terms of numerical performance. In particular, when the target task is complex and/or, in the transfer-learning scenario, when the data distributions are significantly different, continuation methods can achieve significant speed-up compared to random and warm-start initializations. We believe that their superior numerical performance relies on the use of homotopy functions that progressively deform the data distribution from an easy-to-solve or already-solved task to the target data distribution. In addition, consistently across all the benchmarks, our homotopy-based method shows faster convergence than random initialization and convergence that is faster than or comparable to warm-start initialization. When the source task is "similar" to the target one, there is indeed no need to gradually vary the $\lambda$ parameter in Algorithm 1; it suffices to directly set it to 1.
In this extreme case, our homotopy method boils down to warm-start initialization.

# 6.1 REGRESSION

For the supervised regression scenario, the problem we address is how to transfer "optimality knowledge" across two tasks that involve regressing from the input to the output of two sine wave functions with different values of the frequency $\omega$. Each considered dataset has 10000 samples split across training and testing, where $x$ and $y$ are defined as follows

$$
x \sim \mathcal{U}(0, 1), \quad y = \sin(\omega x) + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, 0.01). \tag{21}
$$

The goal is to start with an "easy-to-learn" task, i.e. $\omega \approx 1$ rad, whose optimum is available after performing only a few epochs with a first-order optimizer (e.g., SGD or Adam), and progressively transfer the "optimality knowledge" to a more complex task, i.e. $\omega \gg 1$ rad, by approximately solving the homotopy problems for increasing values of $\lambda$ as described in Algorithm 1. We set $\omega = 1$ rad for our source task distribution, and study the performance of the proposed approach with the homotopy function described in Equation (16) for different target distributions with $\omega \gg 1$ rad. See Figures 5a and 5b in the appendix for a visualization of the source data distribution with $\omega = 1$ rad and the target data distribution when $\omega = 137$ rad, respectively. The regressor is a feedforward neural network with 6 hidden layers of 100 units each and ReLU as the activation function. In order to make the experiments more robust with respect to the choice of the step size $\alpha$, we use Adam as the optimizer.
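The data generation of Eq. (21) together with the blended regression targets of Eq. (16) can be sketched in pure Python; the function names are ours, and we read $\mathcal{N}(0, 0.01)$ as a variance of 0.01, i.e. a standard deviation of 0.1 (an assumption, not stated explicitly in the paper).

```python
import math
import random

def make_sine_pair(n, omega_s=1.0, omega_t=137.0, noise_std=0.1, seed=0):
    """Sample n points of the source and target sine tasks of Eq. (21).

    The same realizations of x are used for both tasks, as required by the
    homotopy deformation in Eq. (16).
    """
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]  # x ~ U(0, 1)
    y_s = [math.sin(omega_s * x) + rng.gauss(0.0, noise_std) for x in xs]
    y_t = [math.sin(omega_t * x) + rng.gauss(0.0, noise_std) for x in xs]
    return xs, y_s, y_t

def blend_targets(y_s, y_t, lam):
    """Homotopy targets of Eq. (16): y_lam | x = (1 - lam) y_s | x + lam y_t | x."""
    return [(1.0 - lam) * a + lam * b for a, b in zip(y_s, y_t)]

xs, y_s, y_t = make_sine_pair(1000)
y_half = blend_targets(y_s, y_t, lam=0.5)  # halfway along the homotopy path
```

Running Algorithm 1 on this task then amounts to fitting the regressor to `blend_targets(y_s, y_t, lam_i)` for increasing values of `lam_i`.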
For the experiments in Figures 1a–1b, Figures 7a–7b in the appendix, and Figure 2a, we set $\alpha = 0.001$ , $\gamma = 10$ , $k = 200$ and then performed an additional 500 epochs on the final target problem, while for the experiments in Figure 2b, we set $\gamma = 10$ , $k = 300$ and performed an additional 600 epochs on the final target problem. In this last scenario we set $\alpha = 0.001$ and then decreased it with a cosine annealing schedule to observe convergence to an optimum. As shown in Figures 1a–1b, Figures 7a–7b in the appendix, and Figures 2a and 2b, the homotopy method leads to faster convergence than the considered baselines by preserving the vicinity to an optimal solution for the problems $H(\theta, \lambda)$ across the different $\lambda$ values. In particular, we achieve a training loss up to two orders of magnitude better than the considered baselines.

![](images/6ceb49b91292c0dffcebb5ad0a64c9cbdbed68c7b4ce33a5766105a0a480e22c.jpg)
(a) $\omega_{s} = 1$ rad, $\omega_{t} = 74$ rad.

![](images/709f46f418e71d4e8e87051bdb10b10f0a9e065c33396e77747e56b207e96bac.jpg)
(b) $\omega_{s} = 1$ rad, $\omega_{t} = 137$ rad.

![](images/0bcc0b146545e3eda7e3a02cb4c8937f6cddd8e660e931317d0375e8036bd493.jpg)
(a) Median train loss versus $\omega$ values.
Figure 2: Comparison of the homotopy method, warm-start and random initialization on sine wave regression tasks. The shaded areas represent the 25th and 75th percentiles. On the left, the median train loss achieved by the considered methods after 2500 epochs across 100 runs is plotted versus different $\omega$ values for the target task. For the homotopy method and warm-start initialization, $\omega_{s} = 1$ rad is used. On the right, the median train loss across 100 runs versus epochs is plotted for the target task with $\omega = 137$ rad.
Compared to Figure 1b, in Figure 2b a cosine decay schedule is used for the learning rate, and more epochs are performed to better observe the convergence properties of the different methods.

![](images/a7086083bb2edac08d9b3f68fac0372de106d8e808926b9c03a7ca64adf20e8d.jpg)
Figure 1: Median train loss across 100 runs versus epochs for sine wave regression tasks with different frequency values. The shaded areas represent the 25th and 75th percentiles. See Section F.1 in the appendix for an evaluation of the test performance.
(b) $\omega_{s} = 1$ rad, $\omega_{t} = 137$ rad.

# 6.2 CLASSIFICATION

For the supervised classification scenario, we first apply the continuation method with the homotopy deformation described in Equations (19) and (20) in order to transfer optimality from the MNIST task, a notoriously "easy-to-learn" task for neural networks, to the FashionMNIST task. Since the two datasets have the same input dimensionality and the same number of classes, no additional preprocessing of the data is required. As network architecture, we use a VGG-type network (Simonyan & Zisserman, 2015), and Adam as optimizer with a step size of $\alpha = 0.001$.

Secondly, we consider CIFAR-10 as the target data distribution. Unlike in the previous scenario, padding of the MNIST samples is required in order to apply Equation (19). The MNIST samples are also replicated across three channels. Also in this case we adopt a VGG-type network (Simonyan & Zisserman, 2015), and Adam as optimizer with a step size of $\alpha = 0.0001$.

As shown in Figures 3a and 3b, in both benchmarks the homotopy method leads to faster convergence than random initialization. While in the second benchmark our method reaches a lower value of training loss in fewer epochs than warm-start, in the MNIST-to-FashionMNIST case the performance is comparable to using warm-start initialization.
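The deformation of Equations (19) and (20) amounts to a mixup-style blend of one source and one target example. A minimal sketch, where the $28\times 28$ shapes and the 10-class setting mirror the MNIST-to-FashionMNIST benchmark and the random arrays stand in for actual images:

```python
import numpy as np

def homotopy_pair(x_s, c_s, x_t, c_t, lam, num_classes=10):
    """Blend one source/target example pair for a given homotopy parameter.

    x_lam follows Eq. (19); y_lam is the probabilistic label of Eq. (20),
    built from the one-hot vectors of the two class indices c_s and c_t.
    """
    x_lam = (1.0 - lam) * x_s + lam * x_t
    y_lam = np.zeros(num_classes)
    y_lam[c_s] += 1.0 - lam  # label mass on the source class
    y_lam[c_t] += lam        # label mass on the target class
    return x_lam, y_lam

# The example from Figure 8: an MNIST digit of class 2 blended with a
# FashionMNIST sandal of class 5 at lambda = 0.5.
rng = np.random.default_rng(0)
x_mnist = rng.random((28, 28))
x_fashion = rng.random((28, 28))
x_half, y_half = homotopy_pair(x_mnist, 2, x_fashion, 5, lam=0.5)
# y_half == [0, 0, 0.5, 0, 0, 0.5, 0, 0, 0, 0]
```

The `+=` accumulation also covers the corner case $c_s = c_t$, in which the label mass correctly sums to one on the shared class.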
A possible interpretation is that, when the source and target task distributions are "too similar", as we hypothesize in the MNIST-to-FashionMNIST scenario, there is no need for homotopy deformations to be applied, i.e. $0 < \lambda < 1$: we can directly apply $\lambda = 1$ in our scheme, which corresponds to simply using warm-start initialization.

![](images/850c4238fe5a47e869c715fa195ee19fd6637c76a8d756a658ce87c1dbc2b1e4.jpg)
(a) FashionMNIST.

![](images/38ad9ea8078c3e9b991c6faff38a1bb4c3f52dd623e1501dfdbd0cb4444bb99f.jpg)
(b) CIFAR-10.
Figure 3: Median train loss across 10 runs versus epochs for different target task distributions. In both cases, the source task is the classification of the MNIST dataset. See Section F.2 in the appendix for an evaluation of the test performance.

# 7 CONCLUSIONS

In this paper we propose a new methodology based on homotopy methods to transfer knowledge across different task distributions. In particular, our homotopy-based method allows one to exploit easy-to-solve or already-solved learning problems to solve new and complex tasks, by approximately and sequentially solving a sequence of optimization problems in which the task distribution is gradually deformed from the source to the target one. We conduct a theoretical analysis of a general homotopy method in a simplified setting, and then test our method on popular deep learning benchmarks, where it shows superior numerical performance compared to random and warm-start initialization schemes. The proposed framework, in its limiting case, corresponds to the widely used fine-tuning heuristic, allowing for a new and more rigorous interpretation of the latter. Finally, the generality of homotopy methods opens many novel and promising research directions in fields fundamental to deep learning, such as stochastic non-convex optimization and transfer learning.
+ +# ACKNOWLEDGMENTS + +This work has partly been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant no. 716721 as well as by the German Federal Ministry for Economic Affairs and Energy (BMWi) via DyConPV (0324166B), and by DFG via Research Unit FOR 2401. In addition, Q. Tran-Dinh has partly been supported by the National Science Foundation (NSF), grant. no. 1619884. The authors thank Stefan Falkner for his helpful suggestions and comments. + +# REFERENCES + +Eugene L. Allgower and Kurt Georg. Numerical continuation methods. An introduction. Springer-Verlag, 1980. +David Balduzzi, Brian McWilliams, and Tony Butler-Yeoman. Neural taylor approximations: Convergence and exploration in rectifier networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 351-360, 2017. URL http://proceedings.mlr.press/v70/balduzzi17c.html. +Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML + +2009, Montreal, Quebec, Canada, June 14-18, pp. 41-48, 2009. URL https://doi.org/10.1145/1553374.1553380. +Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177-186, 2010. URL https://leon.bottou.org/publications/pdf/compstat-2010.pdf. +Wanyun Cui, Guangyu Zheng, Zhiqiang Shen, Sihang Jiang, and Wei Wang. Transfer learning for sequences via learning to collocate. In International Conference on Learning Representations, 2019. URL https://openreview.net/pdf?id=ByldlhAqYQ. +Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 
2253-2261, 2016. URL http://papers.nips.cc/paper/6427-toward-deeper-understanding-of-neural-networks-the-power-of-initialization-and-a-dual-view-on-expressivity.pdf.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011. URL http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf.
Soheil Feizi, Hamid Javadi, Jesse Zhang, and David Tse. Porcupine neural networks: Approximating neural network landscapes. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 4831-4841. Curran Associates, Inc., 2018. URL https://papers.nips.cc/paper/7732-porcupine-neural-networks-approximating-neural-network-landscapes.pdf.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 1126-1135. PMLR, 2017. URL http://proceedings.mlr.press/v70/finn17a/finn17a.pdf?source=post_page
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. URL http://www.deeplearningbook.org.
Robert M. Gower. Convergence theorems for gradient descent, 2018. URL https://perso.telecom-paristech.fr/rgower/pdf/M2-statistique_optimisation/grad_conv.pdf.
Alex Graves, Marc G. Bellemare, Jacob Menick, Rémi Munos, and Koray Kavukcuoglu. Automated curriculum learning for neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 1311-1320, 2017. URL http://proceedings.mlr.press/v70/graves17a.html.
Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. arXiv 1603.00391, 2016. URL https://arxiv.org/pdf/1603.00391.pdf.
+Caglar Gulcehre, Marcin Moczulski, Francesco Visin, and Yoshua Bengio. Mollifying networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/pdf?id=r1G4z8cge. +Guy Hacohen and Daphna Weinshall. On the power of curriculum learning in training deep networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 2535-2544, 2019. URL http://proceedings.mlr.press/v97/hacohen19a.html. +Boris Hanin and David Rolnick. How to start training: The effect of initialization and architecture. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 571-581. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7338-how-to-start-training-the-effect-of-initialization-and-architecture.pdf. + +Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Georg Dorffner, Horst Bischof, and Kurt Hornik (eds.), Artificial Neural Networks — ICANN 2001, pp. 87–94. Springer Berlin Heidelberg, 2001. URL https://link.springer.com/chapter/10.1007/3-540-44668-0_13. +R. Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim P. Papadopoulos, and Vittorio Ferrari. How hard can it be? Estimating the difficulty of visual search in an image. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2157-2166, 2016. URL https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7780606&tag=1. +Diederik P. Kingma and Jimmy Ba. Adam: a method for stochastic optimization. In Proc. 3rd International Conference for Learning Representations, 2015. URL http://arxiv.org/abs/1412.6980. +Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. 
Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 971-980. Curran Associates, Inc., 2017. URL https://papers.nips.cc/paper/6698-self-normalizing-neural-networks.pdf. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf. +Daniel Kunin, Jonathan M. Bloom, Aleksandrina Goeva, and Cotton Seed. Loss landscapes of regularized linear autoencoders. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 3560-3569, 2019. URL http://proceedings.mlr.press/v97/kunin19a.html. +Christoph Käding, Erik Rodner, Alexander Freytag, and Joachim Denzler. Fine-tuning deep neural networks in continuous learning scenarios. In ACCV Workshop on Interpretation and Visualization of Deep Neural Nets, 2016. +Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 6389-6399. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7875-visualizing-the-loss-landscape-of-neural-nets.pdf. +Hossein Mobahi. Training recurrent neural networks by diffusion. arXiv 1601.04114, 2016. URL https://arxiv.org/pdf/1601.04114.pdf. +Sanmit Narvekar. Curriculum learning in reinforcement learning. In International Joint Conference on Artificial Intelligence, 2017. URL https://www.ijcai.org/Proceedings/2017/0757.pdf. +Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. 
In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ryQu7f-RZ.
Angie K. Reyes, Juan C. Caicedo, and Jorge E. Camargo. Fine-tuning deep convolutional networks for plant recognition. In Linda Cappellato, Nicola Ferro, Gareth J. F. Jones, and Eric SanJuan (eds.), Working Notes of CLEF 2015 - Conference and Labs of the Evaluation forum, Toulouse, France, September 8-11, 2015, volume 1391 of CEUR Workshop Proceedings. CEUR-WS.org, 2015. URL http://ceur-ws.org/Vol-1391/121-CR.pdf.
Marcus Rohrbach, Sandra Ebert, and Bernt Schiele. Transfer learning in a transductive setting. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 46-54. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5209-transfer-learning-in-a-transductive-setting.pdf.

Jürgen Schmidhuber. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-meta...hook. Diploma thesis, Technische Universität München, Germany, May 1987. URL http://people.idsia.ch/~juergen/diploma.html.
Mark Schmidt. Convergence rate of stochastic gradient with constant step size, July 2014. URL https://www.cs.ubc.ca/~schmidtm/Documents/2014_Notes_ConstantStepSG.pdf.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1409.1556.
Alexandru I. Suciu. Lecture notes in topology, February 2016. URL www.northeastern.edu/suciu/MATH4565/utop.sp16.html.
Quoc Tran-Dinh, Carlo Savorgnan, and Moritz Diehl. Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization. SIAM Journal on Optimization, 22 (4):1258-1284, 2012.
Xuezhi Wang and Jeff Schneider.
Flexible transfer learning under support and model shift. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 1898-1906. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5632-flexible-transfer-learning-under-support-and-model-shift.pdf. +Daphna Weinshall, Gad Cohen, and Dan Amir. Curriculum learning by transfer learning: Theory and experiments with deep networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pp. 5235-5243, 2018. URL http://proceedings.mlr.press/v80/weinshall18a.html. +Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 3320-3328. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks.pdf. +Andrea Zanelli, Quoc Tran-Dinh, and Moritz Diehl. Contraction estimates for abstract real-time algorithms for NMPC. In Proceedings of the IEEE Conference on Decision and Control, Nice, France, 2019. +Oliver Zendel, Katrin Honauer, Markus Murschitz, Martin Humenberger, and Gustavo Fernández Dominguez. Analyzing computer vision data — the good, the bad and the ugly. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6670-6680, 2017. URL http://openaccess.thecvf.com/content_cvpr_2017/papers/Zendel_Anyzing_Computer_Vision_CVPR_2017_paper.pdf. +Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Cavia: Fast context adaptation via meta-learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 7693-7702, 2019. URL http://proceedings.mlr.press/v97/zintgraf19a.html. 

# A PROPERTIES OF HOMOTOPIC FUNCTIONS

Among the numerous properties of homotopic functions, we recall the following ones.

Proposition A.1. Suppose that there exists a homotopy $H: Z \times [0,1] \to Y$ from $g$ to $f$ , i.e. $g \simeq f$ . Then

- $g\simeq g$ (reflexive property)
- $g\simeq f\Rightarrow f\simeq g$ (symmetric property)
- $g\simeq f$ and $f\simeq h\Rightarrow g\simeq h$ (transitive property)

Proof. See proof of Theorem 1.5 in (Suciu, 2016).

Proposition A.2. Let $g, g': Z \to Y$ and $f, f': Y \to W$ be continuous maps, and let $f \circ g, f' \circ g': Z \to W$ be the respective composite maps. If $g \simeq g'$ and $f \simeq f'$ , then $f \circ g \simeq f' \circ g'$ .

Proof. See proof of Proposition 1.7 in (Suciu, 2016).

# B APPROXIMATION VIA GAUSSIAN FILTER

For the supervised regression scenario, we propose the following homotopy deformation

$$
y _ {\lambda} \mid x = (1 - \lambda) \, y _ {s} \mid x + \lambda \, y _ {t} \mid x. \tag {22}
$$

A downside of this homotopy function is that the same support for $x$ is required (the absence of the subscripts $s$ and $t$ on $x$ indicates that the same realization for $x_{s}$ and $x_{t}$ has to be considered). Alternatively, it is possible to approximate Equation (22) by using a Gaussian filter, as depicted in Figure 6b.

In particular, having sampled one realization $z$ of the pair $(x_{s},y_{s})$ from the training set $\mathcal{D}_s$ , $0 < M_{GF} \leq N$ realizations of the pair $(x_{t},y_{t})$ are sampled from $\mathcal{D}_t$ . Each $y_{t,j}$ realization is then weighted based on the vicinity of $x_{t,j}$ to the sampled $x_{s,z}$ realization.
This leads to the following approximation of the $z$ realization of $y_{\lambda}$ + +$$ +y _ {\lambda , z} = (1 - \lambda) y _ {s, z} + \frac {\lambda}{M _ {G F}} \sum_ {j = 1} ^ {M _ {G F}} w _ {j} y _ {t, j}, \tag {23} +$$ + +$$ +w _ {j} = \frac {1}{\sqrt {2 \pi \xi^ {2}}} \exp \left(- \frac {\left| \left| x _ {s , z} - x _ {t , j} \right| \right| ^ {2}}{2 \xi^ {2}}\right), \tag {24} +$$ + +where $\xi > 0$ is the standard deviation of the Gaussian filter. + +# C ADDITIONAL FIGURES + +![](images/92ef3aac1062d38d7280e6329a247d19c351b0ff3a7f5b800e5eea13557e6dd0.jpg) +(a) Homotopy 1. + +![](images/05c4b1dca0413d1940612617d6f1f7e941353246aa3a3a3f62f8737d43a07c6c.jpg) +(b) Homotopy 2. +Figure 4: Two different homotopy deformations between the probability density functions of two one-dimensional Gaussian distributions with mean and standard deviation given by $\mu_1 = 1$ , $\sigma_1 = 1$ and $\mu_2 = 5$ , $\sigma_2 = 0.1$ , respectively. The homotopy represented in Figure 4a results in a mixture of Gaussian distributions, with mixture coefficient given by the homotopy parameter $\lambda$ . In Figure 4b the deformation concerns instead the parameters $\mu$ and $\sigma$ of the original distributions. Preserving unimodality is a desirable property when the homotopy function is used in combination with a continuation method since, as shown in Figure 4b, the location of the optimum moves together with the function deformation, allowing the optimizer to track it and gradually reach the optimum of the final target task. On the contrary, deforming the function as shown in Figure 4a does not lead to a gradual shift of the optimal solutions. Consequently, approximately and sequentially solving the problems corresponding to intermediate values of the homotopy parameter $\lambda$ , i.e. $0 < \lambda < 1$ , will not allow the homotopy method to gradually approach the desired final optimal solution. 
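Returning to the Gaussian-filter approximation of Appendix B, Equations (23) and (24) can be sketched as follows; the sample values and the choice of $\xi$ here are illustrative, not taken from the paper's experiments.

```python
import numpy as np

def gaussian_weight(x_sz, x_tj, xi):
    # Eq. (24): Gaussian kernel weight based on the distance ||x_sz - x_tj||.
    sq_dist = np.sum((np.asarray(x_sz, dtype=float) - np.asarray(x_tj, dtype=float)) ** 2)
    return np.exp(-sq_dist / (2.0 * xi ** 2)) / np.sqrt(2.0 * np.pi * xi ** 2)

def approx_blended_target(y_sz, x_sz, x_t, y_t, lam, xi=0.1):
    # Eq. (23): blend the source label with the Gaussian-weighted mean of the
    # M_GF target labels, approximating y_lam | x without a shared support.
    weights = np.array([gaussian_weight(x_sz, x_tj, xi) for x_tj in x_t])
    return (1.0 - lam) * y_sz + lam * np.mean(weights * y_t)

# Tiny illustration: target samples whose inputs are close to x_sz dominate
# the weighted term, while distant samples are suppressed.
x_t = np.array([0.30, 0.31, 0.90])
y_t = np.array([1.0, 1.1, 5.0])
y_blend = approx_blended_target(y_sz=0.5, x_sz=0.3, x_t=x_t, y_t=y_t, lam=0.5)
```

With `np.mean`, the factor $1/M_{GF}$ of Equation (23) is applied implicitly.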

![](images/da00aed3a29106b041bbb8d9bdcaece065494e069670ecfd64cb34dd8fe7eaf9.jpg)
(a) $\omega = 1$ rad.
Figure 5: Graphical representations of the source (left) and target with $\omega = 137$ rad (right) data distributions used for the sine-wave regression evaluation.

![](images/119ee369037910b0ec6cf4dd696b2393cd9d6ff10ecfd33df0946cd7d4ce3884.jpg)
(b) $\omega = 137$ rad.

![](images/32d5d9b56231b568dab87c116771aa945f4b0e1faa98916f6f7649e8b86b2170.jpg)
(a) Homotopy transformation described in Equation (16).

![](images/4268436e994c7c1a3ff87ec1048226f84a0ec3537984fb8758783a46ebf1fa9f.jpg)
(b) Approximation of the homotopy transformation in Equation (16) (also Equation (22)) with a Gaussian filter as described in Equations (23) and (24).

![](images/66b6a11ddf9631f361ea55409143e182398282db47c64610ec7479d8bcf136ff.jpg)
Figure 6: Graphical representation of the proposed homotopy transformation for the supervised regression scenario when applied to progressively deform a sine wave function with a frequency of 1 radian into a sine wave function with a frequency of 137 radians, for different values of the homotopy parameter.
(a) $\omega_{s} = 1$ rad, $\omega_{t} = 95$ rad.

![](images/a8c8167780b5ca7774b46510ee3624e062f9b6dfec50af634ef4a46b257b92bc.jpg)
(b) $\omega_{s} = 1$ rad, $\omega_{t} = 116$ rad.

![](images/729f92cc1cb32ea34db41cbc11a110fc88568c63c735644ef7b798430b5b89be.jpg)
(a) Sampled image of a handwritten digit 2 (class 2) from the MNIST dataset.
Figure 8: Graphical representation of the homotopy transformation from $x_{s}$ to $x_{t}$ as described in Equation (19) for two sampled images from the MNIST and FashionMNIST datasets.

![](images/2c7edc050c7036c9a0bee8f7f6e9769ff013d2b83c2205612f258d661fb22a69.jpg)
(b) Homotopy deformation of the images represented in Figures 8a and 8c, corresponding to $\lambda = 0.5$.

![](images/8e9c65595e1810f6f9cb7a9c52c8bf62410bb13edcb9d872e91fc278262f2815.jpg)
Figure 7: Median train loss across 100 runs versus epochs for sine wave regression tasks with different $\omega$ values.
(c) Sampled image of a sandal (class 5) from the FashionMNIST dataset.

# D LOCAL ERROR BOUNDS FOR SGD ITERATES

Before proving local error bounds for SGD iterates in the considered framework, given the local nature of our assumptions, we need to establish two important facts on which the proof relies. In particular, we need to show:

- local linear contraction of Gradient Descent (GD) iterates, and that
- starting in a hypersphere of radius $B$ around a minimizer and given a "big enough" batch size, the next SGD iterate is also contained in this region for all possible realizations of the gradient estimate.

Considering problem (4) with fixed parameter $\lambda_{i}$ , in the following subsections we will write $\theta^{*} = \theta_{i}^{*}$ , $\theta_{k} = \theta_{i,k}$ and $g_{k} = g(\theta_{k},\lambda_{i})$ , dropping the subscript $i$ and the explicit dependence on $\lambda_{i}$ in order to simplify the notation. The analysis holds for all fixed parameters $\lambda_{i}$ .

# D.1 LOCAL LINEAR CONTRACTION OF GD ITERATES

Let us use GD to solve the following optimization problem

$$
\theta^ {*} \in \arg \min _ {\theta} H (\theta , \lambda_ {i}),
$$

where the objective function $H$ fulfills Assumptions 4.2 and 4.3.

We now derive error bounds on the iterates of GD

$$
\theta_ {k + 1} = \theta_ {k} - \alpha \nabla_ {\theta} H \left(\theta_ {k}, \lambda_ {i}\right),
$$

where $\theta_{k}\in \mathcal{B}_{B,\theta^{*}}$ and $0 < \alpha \leq \frac{1}{L}$ is the step size.
+ +We start by applying the definition of GD iterates and then we exploit the introduced assumptions + +$$ +\begin{array}{l} \left\| \theta_ {k + 1} - \theta^ {*} \right\| ^ {2} = \left\| \theta_ {k} - \alpha \nabla_ {\theta} H \left(\theta_ {k}, \lambda_ {i}\right) - \theta^ {*} \right\| ^ {2} \\ = \left\| \theta_ {k} - \theta^ {*} \right\| ^ {2} - 2 \alpha \nabla_ {\theta} H \left(\theta_ {k}, \lambda_ {i}\right) ^ {T} \left(\theta_ {k} - \theta^ {*}\right) + \alpha^ {2} \left\| \nabla_ {\theta} H \left(\theta_ {k}, \lambda_ {i}\right) \right\| ^ {2} \\ \end{array} +$$ + +strong convexity + +$$ +\leq (1 - \alpha \mu) \| \theta_ {k} - \theta^ {*} \| ^ {2} - 2 \alpha \left(H \left(\theta_ {k}, \lambda_ {i}\right) - H \left(\theta^ {*}, \lambda_ {i}\right)\right) + \alpha^ {2} \| \nabla_ {\theta} H \left(\theta_ {k}, \lambda_ {i}\right) \| ^ {2} +$$ + +corollary 4.2.1 + +$$ +\leq (1 - \alpha \mu) \| \theta_ {k} - \theta^ {*} \| ^ {2} - 2 \alpha (1 - \alpha L) \left(H \left(\theta_ {k}, \lambda_ {i}\right) - H \left(\theta^ {*}, \lambda_ {i}\right)\right). +$$ + +Since $H(\theta_k, \lambda_i) - H(\theta^*, \lambda_i) \geq 0$ and $-2\alpha (1 - \alpha L) \leq 0$ when $0 < \alpha \leq \frac{1}{L}$ , we can safely drop the second term and obtain the final result + +$$ +\left\| \theta_ {k + 1} - \theta^ {*} \right\| ^ {2} \leq (1 - \alpha \mu) \left\| \theta_ {k} - \theta^ {*} \right\| ^ {2}. +$$ + +See also Theorem 2.3 in (Gower, 2018) for a derivation where Assumptions 4.2 and 4.3 are required to hold globally. + +# D.2 REALIZATION OF THE SGD ITERATES IN THE STRONG CONVEXITY AND L-SMOOTHNESS REGION AROUND A MINIMIZER + +We address the following optimization problem + +$$ +\theta^ {*} \in \arg \min _ {\theta} \underbrace {\frac {1}{N} \sum_ {j = 1} ^ {N} \ell_ {j} (\theta , \lambda_ {i})} _ {:= H (\theta , \lambda_ {i})}, +$$ + +where $H$ fulfills Assumptions 4.2-4.4. 
+ +As proved in Section D.1, under Assumptions 4.2 and 4.3, whenever $\theta_0 \in \mathcal{B}_{B,\theta^*}$ and $0 < \alpha \leq \frac{1}{L}$ , deterministic gradient descent iterates converge linearly with contraction rate $\kappa_d \coloneqq \sqrt{(1 - \alpha\mu)}$ . + +In particular, the following inequality holds + +$$ +\left\| \theta_ {k + 1} ^ {D} - \theta^ {*} \right\| \leq \kappa_ {d} \cdot \left\| \theta_ {k} - \theta^ {*} \right\|, +$$ + +for any $\theta_{k}$ such that $\| \theta_k - \theta^*\| \leq B$ , and superscript $^D$ denotes iterates obtained by applying the full gradient $\nabla H_{k}\coloneqq \nabla H(\theta_{k},\lambda_{i})$ + +$$ +\theta_ {k + 1} ^ {D} = \theta_ {k} - \alpha \nabla H _ {k}. +$$ + +Let $\theta_{k + 1}$ denote the iterate obtained by applying one iteration of stochastic gradient descent + +$$ +\theta_ {k + 1} = \theta_ {k} - \alpha g _ {k}, +$$ + +where $g_{k} \coloneqq \frac{1}{M}\sum_{j\in \mathcal{M}}\nabla \ell_{j}(\theta_{k},\lambda_{i})$ and $\mathcal{M}$ is a set of $0 < M \leq N$ indexes randomly sampled from $\mathcal{N} = \{1,\ldots ,N\}$ . + +Given any realization of $\theta_{k}$ s.t. 
$\| \theta_k - \theta^*\| \leq B$ and any realization of $g_{k}$ , by exploiting Assumption 4.4 and the results derived in Section D.1, we have that + +$$ +\begin{array}{l} \left\| \theta_ {k + 1} - \theta^ {*} \right\| = \left\| \theta_ {k} - \alpha g _ {k} - \theta^ {*} \right\| \\ = \left\| \theta_ {k} - \alpha \nabla H _ {k} + \alpha \nabla H _ {k} - \alpha g _ {k} - \theta^ {*} \right\| \\ \leq \left\| \theta_ {k} - \alpha \nabla H _ {k} - \theta^ {*} \right\| + \alpha \| \nabla H _ {k} - g _ {k} \| \\ = \| \theta_ {k} - \alpha \nabla H _ {k} - \theta^ {*} \| + \alpha \left\| \frac {1}{N} \sum_ {j \in \mathcal {N} \backslash \mathcal {M}} \nabla \ell_ {j} + \frac {1}{N} \sum_ {j \in \mathcal {M}} \nabla \ell_ {j} - \frac {1}{M} \sum_ {j \in \mathcal {M}} \nabla \ell_ {j} \right\| \\ = \left\| \theta_ {k} - \alpha \nabla H _ {k} - \theta^ {*} \right\| + \alpha \left\| \frac {1}{N} \sum_ {j \in \mathcal {N} \backslash \mathcal {M}} \nabla \ell_ {j} + \frac {M - N}{N M} \sum_ {j \in \mathcal {M}} \nabla \ell_ {j} \right\| \tag {25} \\ \leq \| \theta_ {k} - \alpha \nabla H _ {k} - \theta^ {*} \| + \alpha \left(\frac {1}{N} \sum_ {j \in \mathcal {N} \backslash \mathcal {M}} \| \nabla \ell_ {j} \| + \frac {N - M}{N M} \sum_ {j \in \mathcal {M}} \| \nabla \ell_ {j} \|\right) \\ \leq \left\| \theta_ {k + 1} ^ {D} - \theta^ {*} \right\| + 2 \alpha \frac {(N - M)}{N} \nu \\ \leq \kappa_ {d} \| \theta_ {k} - \theta^ {*} \| + 2 \alpha \frac {(N - M)}{N} \nu . \\ \end{array} +$$ + +Since we have assumed that the current realization of $\theta_{k}$ lies in the hypersphere of radius $B$ around the optimal solution $\theta^{*}$ , by solving for $\frac{N - M}{N}$ the following inequality + +$$ +\kappa_ {d} B + 2 \alpha \frac {(N - M)}{N} \nu \leq B, +$$ + +we obtain that, whenever $\frac{(N - M)}{N} \leq \frac{(1 - \kappa_d)}{2\alpha\nu} B$ , the realization of $\theta_{k + 1}$ will also lie in this region. 
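The contraction-plus-noise-floor behavior derived above can be checked numerically on a toy finite-sum quadratic. All constants here are illustrative; the objective is 1-strongly convex and 1-smooth, so $\mu = L = 1$ and $\kappa_d = \sqrt{1 - \alpha}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy finite-sum objective: H(theta) = (1/N) sum_j 0.5 * ||theta - a_j||^2,
# with minimizer theta* = mean_j a_j and mu = L = 1.
N, d = 1000, 5
A = rng.normal(size=(N, d))
theta_star = A.mean(axis=0)

alpha, M, steps = 0.5, 900, 200
kappa_d = np.sqrt(1.0 - alpha)  # contraction rate sqrt(1 - alpha * mu)

theta = theta_star + 1.0  # start at distance sqrt(d) from the minimizer
dists = []
for _ in range(steps):
    idx = rng.choice(N, size=M, replace=False)  # minibatch of M indexes
    g = theta - A[idx].mean(axis=0)             # minibatch gradient estimate
    theta = theta - alpha * g
    dists.append(np.linalg.norm(theta - theta_star))
# The distance to theta* contracts geometrically down to a noise floor
# governed by the batch-fraction term 2 * alpha * nu * (N - M) / N.
```

Increasing $M$ toward $N$ shrinks the floor; with $M = N$ the gradient estimate is exact and the iterates contract deterministically.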

These derivations show that when the realization of the current iterate $\theta_{k}$ lies in the hypersphere of radius $B$ around the minimizer $\theta^{*}$ , and $\frac{(N - M)}{N} \leq \frac{(1 - \kappa_d)}{2\alpha\nu} B$ , then the next iterate $\theta_{k + 1}$ will also lie in this region. Consequently, in our scenario, if we assume that the initial point $\theta_0$ lies in the hypersphere of radius $B$ around the minimizer $\theta^{*}$ , then, by applying the derivations recursively, we can show that the iterates will remain in this local region around the minimizer, where strong convexity and smoothness hold.

# D.3 PROOF OF PROPOSITION 4.9

Let us use SGD to solve the following optimization problem

$$
\theta^ {*} \in \arg \min _ {\theta} H (\theta , \lambda_ {i}),
$$

where the objective function $H$ fulfills Assumptions 4.2–4.4. We now derive error bounds for the iterates of SGD

$$
\theta_ {k + 1} = \theta_ {k} - \alpha g _ {k},
$$

where $g_{k}$ is the unbiased estimate of $\nabla H_{k}$ defined in the previous section and fulfills Assumption 4.5, $\theta_{k} \in \mathcal{B}_{B,\theta^{*}}$ , $0 < \alpha \leq \min \left(\frac{1}{2\mu},\frac{1}{L}\right)$ is the step size, and the batch size is set to a value $M$ such that $\frac{(N - M)}{N} \leq \frac{(1 - \kappa_d)}{2\alpha\nu} B$ .

We start by applying the definition of SGD iterates

$$
\begin{array}{l} \left\| \theta_ {k + 1} - \theta^ {*} \right\| ^ {2} \overset {\text {SGD iterate}} {=} \left\| \theta_ {k} - \alpha g _ {k} - \theta^ {*} \right\| ^ {2} \\ = \| \theta_ {k} - \theta^ {*} \| ^ {2} - 2 \alpha g _ {k} ^ {T} (\theta_ {k} - \theta^ {*}) + \alpha^ {2} \| g _ {k} \| ^ {2}. \\ \end{array}
$$

We now take the expectation w.r.t.
$\theta_0, g_0, \ldots, g_{k-1}, g_k$ and, considering Assumptions 4.2–4.5, we obtain the following series of inequalities

$$
\begin{array}{l} \mathbb {E} _ {\theta_ {0}, g _ {0}, \dots , g _ {k - 1}, g _ {k}} \left[ \| \theta_ {k + 1} - \theta^ {*} \| ^ {2} \right] = \mathbb {E} _ {\theta_ {0}, g _ {0}, \dots , g _ {k - 1}, g _ {k}} \left[ \| \theta_ {k} - \theta^ {*} \| ^ {2} - 2 \alpha g _ {k} ^ {T} (\theta_ {k} - \theta^ {*}) + \alpha^ {2} \| g _ {k} \| ^ {2} \right] \\ \overset {\text {law of iterated expectations}} {=} \mathbb {E} _ {\theta_ {0}, g _ {0}, \dots , g _ {k - 1}} \left[ \mathbb {E} _ {g _ {k}} \left[ \| \theta_ {k} - \theta^ {*} \| ^ {2} - 2 \alpha g _ {k} ^ {T} \left(\theta_ {k} - \theta^ {*}\right) + \alpha^ {2} \| g _ {k} \| ^ {2} \mid \theta_ {0}, g _ {0}, \dots , g _ {k - 1} \right] \right] \\ \overset {\text {unbiased } g _ {k} \text { + bounded variance}} {\leq} \mathbb {E} _ {\theta_ {0}, g _ {0}, \dots , g _ {k - 1}} \left[ \left\| \theta_ {k} - \theta^ {*} \right\| ^ {2} - 2 \alpha \nabla H _ {k} ^ {T} \left(\theta_ {k} - \theta^ {*}\right) \right] + \alpha^ {2} C ^ {2} \\ \overset {\text {strong convexity}} {\leq} (1 - 2 \alpha \mu) \cdot \mathbb {E} _ {\theta_ {0}, g _ {0}, \dots , g _ {k - 1}} \left[ \| \theta_ {k} - \theta^ {*} \| ^ {2} \right] + \alpha^ {2} C ^ {2}. \\ \end{array}
$$

By applying this result recursively, we derive the following bound on the error of the SGD iterates

$$
\mathbb {E} _ {\theta_ {0}, g _ {0}, \dots , g _ {k - 1}, g _ {k}} \left[ \| \theta_ {k + 1} - \theta^ {*} \| ^ {2} \right] \leq (1 - 2 \alpha \mu) ^ {k + 1} \cdot \mathbb {E} _ {\theta_ {0}} \left[ \| \theta_ {0} - \theta^ {*} \| ^ {2} \right] + \frac {\alpha C ^ {2}}{2 \mu}.
+$$ + +See also Section 3 in (Schmidt, 2014) for a derivation where Assumptions 4.2 and 4.3 are required to hold globally. + +# E MAIN THEORETICAL CONTRIBUTIONS + +# E.1 PROOF OF PROPOSITION 4.11 + +Proposition E.1. Let $\theta_{i} \in \mathcal{B}_{B,\theta_{i}^{*}}$ and $|\lambda_{i} - \lambda_{i + 1}| \leq \epsilon$ , with $0 \leq \epsilon \leq \frac{B}{\delta}$ . If $\| \theta_{i} - \theta_{i}^{*} \| \leq B - \delta \epsilon$ , then $\| \theta_{i} - \theta_{i + 1}^{*} \| \leq B$ . Moreover, let $\kappa_{d} = \sqrt{(1 - \alpha \mu)}$ and assume that + +$$ +\frac {(N - M)}{N} \leq \frac {(1 - \kappa_ {d} ^ {k}) (1 - \kappa_ {d}) B}{2 \alpha \nu}, +$$ + +and + +$$ +\epsilon \leq \frac {1}{\delta} \left((1 - \kappa_ {d} ^ {k}) B - \frac {(N - M)}{N} \frac {2 \alpha \nu}{(1 - \kappa_ {d})}\right). +$$ + +Then, after applying $k$ iterations of SGD, we obtain that + +$$ +\left\| \theta_ {i + 1} - \theta_ {i + 1} ^ {*} \right\| \leq B - \delta \epsilon . +$$ + +Proof. + +$$ +\begin{array}{l} \left\| \theta_ {i} - \theta_ {i + 1} ^ {*} \right\| = \left\| \theta_ {i} - \theta_ {i} ^ {*} + \theta_ {i} ^ {*} - \theta_ {i + 1} ^ {*} \right\| \\ \text {T r i a n g l e I n e q .} \\ \leq \| \theta_ {i} - \theta_ {i} ^ {*} \| + \| \theta_ {i} ^ {*} - \theta_ {i + 1} ^ {*} \| \\ \text {A s s u m p t i o n} 4. 7 \\ \leq \quad \| \theta_ {i} - \theta_ {i} ^ {*} \| + \delta | \lambda_ {i} - \lambda_ {i + 1} |. \\ \end{array} +$$ + +Finally, using the fact that $|\lambda_i - \lambda_{i + 1}|\leq \epsilon$ , it follows that, if $\| \theta_{i} - \theta_{i}^{*}\| \leq B - \delta \epsilon$ with $0\leq \epsilon \leq \frac{B}{\delta}$ then $\| \theta_{i} - \theta_{i + 1}^{*}\| \leq B$ + +We now derive the conditions on $\epsilon$ such that $\| \theta_{i + 1} - \theta_{i + 1}^*\| \leq B - \delta \epsilon$ . 
By recursively applying the result derived in Section D.2, Eqn. (25), we obtain that
+
+$$
+\left\| \theta_{i + 1} - \theta_{i + 1}^{*} \right\| \leq \kappa_{d}^{k} \left\| \theta_{i} - \theta_{i + 1}^{*} \right\| + 2 \alpha \frac{(N - M)}{N} \nu \sum_{i = 0}^{k - 1} \kappa_{d}^{i}.
+$$
+
+By using the limit of the geometric series, we have that
+
+$$
+\left\| \theta_{i + 1} - \theta_{i + 1}^{*} \right\| \leq \kappa_{d}^{k} \left\| \theta_{i} - \theta_{i + 1}^{*} \right\| + \frac{(N - M)}{N} \frac{2 \alpha \nu}{(1 - \kappa_{d})}.
+$$
+
+Finally, by considering that $\| \theta_{i} - \theta_{i + 1}^{*}\| \leq B$ and by solving the following inequality for $\epsilon$
+
+$$
+\kappa_{d}^{k} B + \frac{(N - M)}{N} \frac{2 \alpha \nu}{(1 - \kappa_{d})} \leq B - \delta \epsilon,
+$$
+
+we obtain the following upper bound on $\epsilon$
+
+$$
+\epsilon \leq \frac{1}{\delta} \left((1 - \kappa_{d}^{k}) B - \frac{(N - M)}{N} \frac{2 \alpha \nu}{(1 - \kappa_{d})}\right),
+$$
+
+from which the extra condition on the batch size also follows:
+
+$$
+\frac{(N - M)}{N} \leq \frac{(1 - \kappa_{d}^{k})(1 - \kappa_{d}) B}{2 \alpha \nu}.
+$$
+
+![](images/42f7dc76fceb7821484edd7f387bc33beb8b7c056421e862bd39174b44238a1f.jpg)
+
+![](images/3f2a81bad7d0d610a5c44ca8c69dc9f531f44837de5fcdfa03e195018a19f77d.jpg)
+Figure 9: Graphical representation of the results derived in Proposition 4.11. The continuous and dashed lines represent the circles of radius $B$ and $B - \delta \epsilon$ around the optimal solutions, respectively.
+
+# E.2 PROOF OF THEOREM 4.12
+
+Theorem E.2.
Consider Algorithm 1 with Stochastic Gradient Descent as the solver, and let $k > 0$ be the number of iterations, $0 < \alpha \leq \min \left(\frac{1}{2\mu}, \frac{1}{L}\right)$ be the step size, and $0 < M \leq N$ be the batch size such that
+
+$$
+\frac{(N - M)}{N} \leq \frac{\left(1 - \kappa_{d}^{k}\right) \left(1 - \kappa_{d}\right) B}{2 \alpha \nu},
+$$
+
+where $\kappa_{d} = \sqrt{(1 - \alpha\mu)}$. Let $\theta_0 \in \mathcal{B}_{B - \delta \epsilon, \theta_0^*}$ and let $r_\theta \in \mathbb{R}$ be such that
+
+$$
+r_{\theta}^{2} \geq \frac{\alpha C^{2}}{2 \mu}. \tag{26}
+$$
+
+Then, if $\mathbb{E}\left[\| \theta_i - \theta_i^*\|^2\right] \leq r_\theta^2$ and $|\lambda_i - \lambda_{i + 1}| \leq \tilde{\epsilon}$, where $\tilde{\epsilon} \coloneqq \min \{\bar{\epsilon}, \epsilon\}$ with
+
+$$
+\bar{\epsilon} = -\frac{r_{\theta}}{\delta} + \frac{1}{\delta} \sqrt{\frac{r_{\theta}^{2} - \alpha C^{2} / 2 \mu}{(1 - 2 \alpha \mu)^{k}}}, \tag{27}
+$$
+
+the following inequality holds:
+
+$$
+\mathbb{E} \left[ \| \theta_{i + 1} - \theta_{i + 1}^{*} \|^{2} \right] \leq r_{\theta}^{2}. \tag{28}
+$$
+
+Proof.
+
+$$
+\begin{aligned}
+\mathbb{E} \left[ \| \theta_{i + 1} - \theta_{i + 1}^{*} \|^{2} \right] &\stackrel{\text{Ineq. (10)}}{\leq} (1 - 2 \alpha \mu)^{k} \, \mathbb{E} \left[ \| \theta_{i} - \theta_{i + 1}^{*} \|^{2} \right] + \frac{\alpha C^{2}}{2 \mu} \\
+&= (1 - 2 \alpha \mu)^{k} \, \mathbb{E} \left[ \left\| \theta_{i} - \theta_{i}^{*} + \theta_{i}^{*} - \theta_{i + 1}^{*} \right\|^{2} \right] + \frac{\alpha C^{2}}{2 \mu} \\
+&\stackrel{\text{Triangle Ineq.}}{\leq} (1 - 2 \alpha \mu)^{k} \, \mathbb{E} \left[ \left( \| \theta_{i} - \theta_{i}^{*} \| + \| \theta_{i}^{*} - \theta_{i + 1}^{*} \| \right)^{2} \right] + \frac{\alpha C^{2}}{2 \mu} \\
+&= (1 - 2 \alpha \mu)^{k} \, \mathbb{E} \left[ \| \theta_{i} - \theta_{i}^{*} \|^{2} + \| \theta_{i}^{*} - \theta_{i + 1}^{*} \|^{2} + 2 \| \theta_{i} - \theta_{i}^{*} \| \| \theta_{i}^{*} - \theta_{i + 1}^{*} \| \right] + \frac{\alpha C^{2}}{2 \mu} \\
+&\stackrel{\text{Assumption 4.7}}{\leq} (1 - 2 \alpha \mu)^{k} \, \mathbb{E} \left[ \| \theta_{i} - \theta_{i}^{*} \|^{2} + \delta^{2} | \lambda_{i} - \lambda_{i + 1} |^{2} + 2 \delta \| \theta_{i} - \theta_{i}^{*} \| | \lambda_{i} - \lambda_{i + 1} | \right] + \frac{\alpha C^{2}}{2 \mu} \\
+&\leq (1 - 2 \alpha \mu)^{k} \left( \delta^{2} \tilde{\epsilon}^{2} + 2 \delta r_{\theta} \tilde{\epsilon} + r_{\theta}^{2} \right) + \frac{\alpha C^{2}}{2 \mu}.
+\end{aligned}
+$$
+
+We now solve for $\tilde{\epsilon}$ the following second-degree inequality
+
+$$
+\left(1 - 2 \alpha \mu\right)^{k} \left( \delta^{2} \tilde{\epsilon}^{2} + 2 \delta r_{\theta} \tilde{\epsilon} + r_{\theta}^{2} \right) + \frac{\alpha C^{2}}{2 \mu} \leq r_{\theta}^{2}. \tag{29}
+$$
+
+The inequality (29) admits solutions if and only if $r_{\theta}^{2} \geq \frac{\alpha C^{2}}{2\mu}$.
In particular, inequality (29) holds $\forall \tilde{\epsilon} \in [0, \bar{\epsilon}]$, where $\bar{\epsilon} = -\frac{r_{\theta}}{\delta} + \frac{1}{\delta}\sqrt{\frac{r_{\theta}^{2} - \alpha C^{2} / 2\mu}{(1 - 2\alpha\mu)^{k}}}$.
+
+# F EXPERIMENTAL EVALUATION: TEST PERFORMANCES
+
+# F.1 REGRESSION
+
+![](images/95bca52993df9fbb47eb045ee34be313b4aae12eb61d24066840ae89bd9ff1fd.jpg)
+(a) $\omega_{s} = 1$ rad, $\omega_{t} = 74$ rad.
+
+![](images/9b7ada8f4b21772c48de4bcaf33c2c874d96c5c037e9d221ed9d9fe1ccd9cf2c.jpg)
+(b) $\omega_{s} = 1$ rad, $\omega_{t} = 95$ rad.
+
+![](images/7a92949d4800cb27aa745229b0453232c2da7965c06519e60bdbd09cb365425d.jpg)
+(c) $\omega_{s} = 1$ rad, $\omega_{t} = 116$ rad.
+
+![](images/ce3ff8982eeef42e97204572a31e52b9fd26baed033850f481956925fa4d1fbb.jpg)
+(d) $\omega_{s} = 1$ rad, $\omega_{t} = 137$ rad.
+Figure 10: Median test loss across 100 runs versus epochs for target tasks with different $\omega$ values. The shaded areas represent the 25th and 75th percentiles. For the warm-start initialization and the homotopy method, $\omega_{s} = 1$ rad is used for the source task.
+
+# F.2 CLASSIFICATION
+
| Method | Final Mean Test Accuracy | Best Mean Test Accuracy |
| --- | --- | --- |
| homotopy γ = 5, k = 2 | 0.89 ± 0.003 | 0.91 ± 0.002 |
| homotopy γ = 5, k = 4 | 0.89 ± 0.002 | 0.91 ± 0.003 |
| homotopy γ = 10, k = 1 | 0.89 ± 0.004 | 0.91 ± 0.001 |
| homotopy γ = 10, k = 4 | 0.90 ± 0.002 | 0.91 ± 0.003 |
| warm start | 0.89 ± 0.003 | 0.90 ± 0.002 |
| random init | 0.89 ± 0.004 | 0.90 ± 0.003 |
+
+Table 1: MNIST-FashionMNIST
| Method | Final Mean Test Accuracy | Best Mean Test Accuracy |
| --- | --- | --- |
| homotopy γ = 5, k = 2 | 0.55 ± 0.004 | 0.59 ± 0.003 |
| homotopy γ = 10, k = 1 | 0.55 ± 0.005 | 0.60 ± 0.002 |
| homotopy γ = 10, k = 2 | 0.56 ± 0.003 | 0.60 ± 0.003 |
| homotopy γ = 10, k = 4 | 0.56 ± 0.005 | 0.61 ± 0.004 |
| warm start | 0.54 ± 0.006 | 0.59 ± 0.005 |
| random init | 0.64 ± 0.02 | 0.64 ± 0.02 |
+
+Table 2: MNIST-CIFAR-10
+
+# TRANSFORMER-XH: MULTI-EVIDENCE REASONING WITH EXTRA HOP ATTENTION
+
+Chen Zhao*
+
+University of Maryland, College Park chenz@cs.umd.edu
+
+Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary
+
+Microsoft AI & Research
+{cxiong, corosset, xiao, pauben, satiwary}@microsoft.com
+
+# ABSTRACT
+
+Transformers have achieved new heights modeling natural language as a sequence of text tokens. However, in many real world scenarios, textual data inherently exhibits structures beyond a linear sequence such as trees and graphs; many tasks require reasoning with evidence scattered across multiple pieces of texts.
This paper presents Transformer-XH, which uses eXtra Hop attention to enable intrinsic modeling of structured texts in a fully data-driven way. Its new attention mechanism naturally "hops" across the connected text sequences in addition to attending over tokens within each sequence. Thus, Transformer-XH better conducts joint multi-evidence reasoning by propagating information between documents and constructing global contextualized representations. On multi-hop question answering, Transformer-XH leads to a simpler multi-hop QA system which outperforms the previous state-of-the-art on the HotpotQA FullWiki setting. On FEVER fact verification, applying Transformer-XH provides state-of-the-art accuracy and excels on claims whose verification requires multiple pieces of evidence.
+
+# 1 INTRODUCTION
+
+Transformers effectively model natural language in sequential form (Vaswani et al., 2017; Dai et al., 2019; Devlin et al., 2019; Yang et al., 2019). Nevertheless, in many NLP tasks, text does not simply appear as a linear sequence of tokens but rather carries meaningful structure in the form of paragraphs, headings, and hyperlinks. These structures can be represented abstractly as trees or graphs with nodes and edges, and the tasks can be performed as joint reasoning over these more general structures as input. Multi-hop question answering (Yang et al., 2018) is one such task in which structure plays an important role, since the evidence required to formulate the answer is scattered across multiple documents, requiring systems to jointly reason across links between them.
+
+Recent approaches leverage pre-trained Transformers (e.g., BERT) for multi-hop question answering (QA) by converting the structural reasoning task into sub-tasks that model flat sequences. For example, Min et al. (2019b) decompose a multi-hop question into a series of single-hop questions; Ding et al. (2019) conduct several steps of single-hop reading comprehension to simulate the multi-hop reasoning.
The hope is that additional processing to fuse the outputs of the sub-models can recover all the necessary information from the original structure. While pre-trained Transformer language models have shown improvements on multi-hop QA, manipulating the inherent structure of the problem to fit the rigid requirements of out-of-the-box models can introduce problematic assumptions or information loss.
+
+This paper presents Transformer-XH (meaning eXtra Hop), which upgrades Transformers with the ability to natively represent structured texts. Transformer-XH introduces extra hop attention in its layers that connects different text pieces following their inherent structure while also maintaining the powerful pre-trained Transformer abilities over each textual piece individually. Our extra hop attention enables 1) a more global representation of the evidence contributed by each piece of text as it relates to the other evidence, and 2) a more natural way to jointly reason over an evidence graph by propagating information along edges necessary to complete the task at hand.
+
+We apply Transformer-XH to two multi-evidence reasoning tasks: Hotpot QA, the multi-hop question answering task (Yang et al., 2018), and FEVER, the fact verification benchmark whose claims often require multiple pieces of evidence to support (Thorne et al., 2018). Rather than decomposing the task into a series of sub-tasks to fit the constraints of pre-trained Transformers, Transformer-XH is a solution that fits the problem as it naturally occurs. It is a single model that represents and combines evidence from multiple documents to conduct the reasoning process. On HotpotQA's FullWiki setting, which requires strong multi-hop reasoning capability (Min et al., 2019b; Jiang & Bansal, 2019), Transformer-XH outperforms CogQA (Ding et al., 2019), the previous state-of-the-art, by 12 points on answer F1.
On the FEVER 1.0 shared task, Transformer-XH significantly outperforms GEAR, the Graph Neural Network based approach. On both applications, Transformer-XH beats the contemporary BERT based pipeline SR-MRS (Nie et al., 2019) by 2-3 points.
+
+The results follow from our simple yet effective design, with one unified model operating over the inherent structure of the task, rather than melding the outputs from disparate sub-tasks adapted to the sequential constraints of pre-trained Transformers. Our ablation studies demonstrate Transformer-XH's efficacy on questions that are known to require multi-hop reasoning (Min et al., 2019b) and on verifying multi-evidence claims (Liu et al., 2019b). Our analyses confirm that the source of Transformer-XH's effectiveness is the eXtra Hop attention's ability to fuse and propagate information across multiple documents.$^1$
+
+# 2 MODEL
+
+This section first discusses preliminaries on sequential Transformers; then we show how we incorporate eXtra hop attention to create Transformer-XH.
+
+# 2.1 PRELIMINARIES
+
+Transformers represent a sequence of input text tokens $X = \{x_{1},\dots,x_{i},\dots,x_{n}\}$ as contextualized distributed representations $H = \{h_1,\dots,h_i,\dots,h_n\}$ (Vaswani et al., 2017). This process involves multiple stacked self-attention layers that convert the input $X$ into $\{H^0,H^1,\dots,H^l,\dots H^L\}$, starting from $H^0$, the embeddings, to the final layer of depth $L$.
+
+The key idea of the Transformer is its attention mechanism, which calculates the $l$-th layer output $H^{l}$ using the input $H^{l-1}$ from the previous layer:
+
+$$
+H^{l} = \operatorname{softmax} \left(\frac{Q \cdot K^{T}}{\sqrt{d_{k}}}\right) \cdot V^{T}, \tag{1}
+$$
+
+$$
+Q^{T}; K^{T}; V^{T} = W^{q} \cdot H^{l - 1}; W^{k} \cdot H^{l - 1}; W^{v} \cdot H^{l - 1}. \tag{2}
+$$
+
+It includes three projections on the input $H^{l-1}$: Query (Q), Key (K), and Value (V).
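As a concrete illustration of Eqns. (1)–(3), the following minimal single-head sketch in NumPy computes scaled dot-product self-attention. It uses the common rows-as-tokens convention (rather than the column convention of Eqn. (2)), and the projection matrices are random placeholders, not trained weights:

```python
import numpy as np

def self_attention(H_prev, Wq, Wk, Wv):
    """One single-head scaled dot-product self-attention layer."""
    # Project the previous layer's representations into queries, keys, values.
    Q, K, V = H_prev @ Wq, H_prev @ Wk, H_prev @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise logits q_i . k_j
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                 # row-wise softmax over tokens j
    return w @ V                                       # h_i = sum_j w_ij * v_j

rng = np.random.default_rng(0)
n, d, d_k = 5, 8, 4                                    # 5 tokens, toy dimensions
H0 = rng.normal(size=(n, d))                           # stand-in for the embeddings H^0
H1 = self_attention(H0,
                    rng.normal(size=(d, d_k)),
                    rng.normal(size=(d, d_k)),
                    rng.normal(size=(d, d_k)))
```

Each output row is a convex combination of the value vectors, weighted by normalized query-key similarities; real Transformer layers add multi-head concatenation, residual connections, and layer normalization on top of this core operation.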
+
+Specifically, the slice of the output $H^l$ in Eqn. (1) for token $i$, $h_i^l$, is:
+
+$$
+h_{i}^{l} = \sum_{j} \operatorname{softmax}_{j} \left(\frac{q_{i}^{T} \cdot k_{j}}{\sqrt{d_{k}}}\right) \cdot v_{j}, \tag{3}
+$$
+
+which first calculates its attention to all other tokens $j$ in the sequence and then combines the token values $v_{j}$ into a new representation $h_{i}^{l}$, using the normalized attention weights. Multiple attentions can be used in one Transformer layer and concatenated as multi-head attention (Vaswani et al., 2017). The architecture is stacked to form rather deep networks, which leads to the significant success of large pre-trained Transformer models (Devlin et al., 2019; Liu et al., 2019a).
+
+A challenge of the Transformer is that its attention is calculated over all token pairs (Eqn. 3), which is hard to scale to long text sequences. Transformer-XL (eXtra Long) addresses this challenge by breaking down longer texts, e.g., a multi-paragraph document, into a sequence of text segments: $\{X_{1},\dots,X_{\tau},\dots,X_{\zeta}\}$, and propagating the information between adjacent text segments using the following attention:
+
+$$
+\tilde{H}_{\tau}^{l - 1} = \left[ \operatorname{Freeze} \left(H_{\tau - 1}^{l - 1}\right) \circ H_{\tau}^{l - 1} \right]. \tag{4}
+$$
+
+![](images/047489106ead965c38a34c1c6499c1fba2d34b67f5e5e57ff9aea8b2cb7fdf11.jpg)
+(a) Hop attention on the path $d_2 \to d_1 \to d_3$.
+
+![](images/2329a6bd77941abe3bb86f063cff0a0203dadf6201534d438233d1020dd56645.jpg)
+(b) Transformer-XH in Multi-hop QA
+Figure 1: The eXtra Hop attention in Transformer-XH (a) and its application to multi-hop QA (b).
+
+It concatenates $(\circ)$ the representation of the previous segment $H_{\tau - 1}^{l - 1}$ to the current segment as a segment-level recurrence.
The new representation $\tilde{H}_{\tau}^{l - 1}$ includes the information from the previous segment and is integrated in the new attention mechanism:
+
+$$
+\tilde{Q}^{T}; \tilde{K}^{T}; \tilde{V}^{T} = W^{q} \cdot H_{\tau}^{l - 1}; W^{k} \cdot \tilde{H}_{\tau}^{l - 1}; W^{v} \cdot \tilde{H}_{\tau}^{l - 1}. \tag{5}
+$$
+
+The attention over the previous segment allows Transformer-XL to effectively model long-form text data recurrently as a sequence of text chunks (Dai et al., 2019).
+
+Nevertheless, in many scenarios, the text segments are organized in nontrivial structures beyond a linear sequence. For example, documents are connected by hyperlinks in a graphical structure that does not readily simplify to a linear sequence, prohibiting Transformer-XL's recurrent approach.
+
+# 2.2 TRANSFORMER-XH WITH EXTRA HOP ATTENTION
+
+Transformer-XH models structured text sequences by linking them with eXtra Hop attention following their original structure. As illustrated in Figure 1a, to model three connected documents $d_{2} \rightarrow d_{1} \rightarrow d_{3}$, Transformer-XH uses eXtra Hop attention to propagate information along the graph edges, enabling information sharing between connected text sequences.
+
+Formally, the structured text data includes a set of nodes, $\mathcal{X} = \{X_1,\dots,X_\tau,\dots X_\zeta\}$, each corresponding to a text sequence, and an edge matrix $E$, which includes the connections (e.g., links) between them. The goal is to learn representations $\mathcal{H} = \{\tilde{H}_1,\dots,\tilde{H}_\tau,\dots \tilde{H}_\zeta\}$ that incorporate not only the local information in each sequence $X$, but also the global contexts of the entire structured text $\{\mathcal{X},E\}$.
+
+Transformer-XH achieves this by two attention mechanisms: in-sequence attention and eXtra Hop attention.
The in-sequence attention is the same as in the vanilla Transformer: in layer $l$, token $i$ gathers information from other tokens inside the same text piece $\tau$:
+
+$$
+h_{\tau, i}^{l} = \sum_{j} \operatorname{softmax}_{j} \left(\frac{q_{\tau, i}^{T} \cdot k_{\tau, j}}{\sqrt{d_{k}}}\right) \cdot v_{\tau, j}. \tag{6}
+$$
+
+The eXtra Hop attention uses the first token in each sequence - the added special token "[CLS]" - as an "attention hub", which attends over all connected nodes' hub tokens. In layer $l$, the $\tau$-th text sequence attends over another text sequence $\eta$ if there is an edge between them ($e_{\tau \eta} = 1$):
+
+$$
+\hat{h}_{\tau, 0}^{l} = \sum_{\eta; e_{\tau \eta} = 1} \operatorname{softmax}_{\eta} \left(\frac{\hat{q}_{\tau, 0}^{T} \cdot \hat{k}_{\eta, 0}}{\sqrt{d_{k}}}\right) \cdot \hat{v}_{\eta, 0}. \tag{7}
+$$
+
+Node $\tau$ calculates the attention weight on its neighbor $\eta$ using hop query $\hat{q}_{\tau,0}$ and key $\hat{k}_{\eta,0}$. Then it uses the weights to combine its neighbors' values $\hat{v}_{\eta,0}$ and forms a globalized representation $\hat{h}_{\tau,0}^{l}$.
+
+The two attention mechanisms are combined to form the new representation of layer $l$:
+
+$$
+\tilde{h}_{\tau, 0}^{l} = \operatorname{Linear} \left(\left[ h_{\tau, 0}^{l} \circ \hat{h}_{\tau, 0}^{l} \right]\right), \tag{8}
+$$
+
+$$
+\tilde{h}_{\tau, i}^{l} = h_{\tau, i}^{l}; \forall i \neq 0. \tag{9}
+$$
+
+Note that the non-hub tokens ($i \neq 0$) still have access to the hop attention of the previous layer through Eqn. (6).
+
+One layer of eXtra Hop attention can be viewed as a single step of information propagation along the edges $E$. For example, in Figure 1a, the document node $d_{3}$ updates its representation by gathering information from its neighbor $d_{1}$ using the hop attention $d_{1} \rightarrow d_{3}$.
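To make the hub-token mechanism of Eqns. (7)–(8) concrete, the sketch below applies one eXtra Hop step to the per-node "[CLS]" vectors. It is a simplified single-head illustration with random placeholder weights; multi-head attention, layer normalization, and feed-forward blocks are omitted, and an isolated node simply receives a zero hop signal:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def extra_hop(h_hub, E, Wq, Wk, Wv, Wc):
    """One eXtra Hop attention step over hub tokens.

    h_hub: (num_nodes, d) hub representations h_{tau,0} from in-sequence attention.
    E: (num_nodes, num_nodes) 0/1 edge matrix; E[tau, eta] = 1 lets tau attend on eta.
    """
    q, k, v = h_hub @ Wq, h_hub @ Wk, h_hub @ Wv
    d_k = k.shape[-1]
    hop = np.zeros_like(v)
    for tau in range(len(h_hub)):
        nbrs = np.flatnonzero(E[tau])                  # neighbors eta with e_{tau,eta} = 1
        if nbrs.size:                                  # Eqn. (7): attend over neighbor hubs
            w = softmax(q[tau] @ k[nbrs].T / np.sqrt(d_k))
            hop[tau] = w @ v[nbrs]
    # Eqn. (8): linearly combine in-sequence and hop representations.
    return np.concatenate([h_hub, hop], axis=-1) @ Wc

rng = np.random.default_rng(1)
d = 6
E = np.array([[0, 1, 0],                               # d1 attends over d2 (hop d2 -> d1)
              [0, 0, 0],                               # d2 receives no hop signal here
              [1, 0, 0]])                              # d3 attends over d1 (hop d1 -> d3)
h = extra_hop(rng.normal(size=(3, d)), E,
              rng.normal(size=(d, d)), rng.normal(size=(d, d)),
              rng.normal(size=(d, d)), rng.normal(size=(2 * d, d)))
```

The toy edge matrix encodes the path of Figure 1a: one such layer moves information a single hop, so d3 sees d2's content only after a second stacked layer.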
When multiple Transformer-XH layers are stacked, this information in $d_{1}$ includes both $d_{1}$ 's local contexts from its in-sequence attention, and cross-sequence information from the hop attention $d_{2} \rightarrow d_{1}$ of the $l - 1$ layer. Hence, an L-layer Transformer-XH can attend over information from up to L hops away. + +Together, three main properties equip Transformer-XH to effectively model raw structured text data: the propagation of information (values) along edges, the importance of that information (hop attention weights), and the balance of in-sequence and cross-sequence information (attention combination). The representations learned in $\mathcal{H}$ can innately express nuances in structured text that are required for complex reasoning tasks such as multi-hop QA and natural language inference. + +# 3 APPLICATION TO MULTI-HOP QUESTION ANSWERING + +This section describes how Transformer-XH applies to multi-hop QA. Given a question $q$ , the task is to find an answer span $a$ in a large open-domain document corpus, e.g. the first paragraph of all Wikipedia pages. By design, the questions are complex and often require information from multiple documents to answer. For example, in the case shown in Figure 1b, the correct answer "Cambridge" requires combining the information from both the Wikipedia pages "Facebook" and "Harvard University". To apply Transformer-XH in the open domain multi-hop QA task, we first construct an evidence graph and then apply Transformer-XH on the graph to find the answer. + +Evidence Graph Construction. The first step is to find the relevant candidate documents $D$ for the question $q$ and connect them with edges $E$ to form the graph $G$ . Our set $D$ consists of three sources. The first two sources are from canonical information retrieval and entity linking techniques: + +$D_{ir}$ : the top 100 documents retrieved by DrQA's TF-IDF on the question (Chen et al., 2017). 
+
+$D_{el}$: the Wikipedia documents associated with the entities that appear in the question, annotated by entity linking systems: TagMe (Ferragina & Scaiella, 2010) and CMNS (Hasibi et al., 2017).
+
+For better retrieval quality, we use a BERT ranker (Nogueira & Cho, 2019) on the set $D_{ir} \cup D_{el}$ and keep the top two ranked ones in $D_{ir}$ and the top one per question entity in $D_{el}$. Then the third source $D_{exp}$ includes all documents connected to or from any top ranked documents via Wikipedia hyperlinks (e.g., "Facebook" → "Harvard University").
+
+The final graph comprises all documents from the three sources as nodes $\mathcal{X}$. The edge matrix $E$ is flexible. We experiment with various edge matrix settings, including directed edges along Wikipedia links, i.e. $e_{ij} = 1$ if there is a hyperlink from document $i$ to $j$, bidirectional edges along Wiki links, and fully-connected graphs, which rely on Transformer-XH to learn the edge importance.
+
+Similar to previous work (Ding et al., 2019), the textual representation for each node in the graph is the [SEP]-delimited concatenation of the question, anchor text (the text in the hyperlink in parent nodes pointing to the child node), and the paragraph itself. More details on the evidence graph construction are in Appendix A.1.
+
+Transformer-XH on Evidence Graph. Transformer-XH takes the input nodes $\mathcal{X}$ and edges $E$, and produces the global representation of all text sequences:
+
+$$
+\mathcal{H}^{L} = \text{Transformer-XH}(\mathcal{X}, E).
\tag{10}
+$$
+
+Then we add two task-specific layers upon the last layer's representation $\mathcal{H}^L$: one auxiliary layer to predict the relevance score of each evidence node, and one layer to extract the answer span within it:
+
+$$
+p(\text{relevance} \mid \tau) = \operatorname{softmax} \left(\operatorname{Linear} \left(\tilde{h}_{\tau, 0}^{L}\right)\right); \tag{11}
+$$
+
+$$
+p(\text{start} \mid \tau, i), p(\text{end} \mid \tau, j) = \operatorname{softmax} \left(\operatorname{Linear} \left(\tilde{h}_{\tau, i}^{L}\right)\right), \operatorname{softmax} \left(\operatorname{Linear} \left(\tilde{h}_{\tau, j}^{L}\right)\right). \tag{12}
+$$
+
+The final model is trained end-to-end with cross-entropy loss for both tasks in a multi-task setting. During inference, we first select the document with the highest relevance score, and then the start and end positions of the answer within that document.
+
+# 4 APPLICATION TO FACT VERIFICATION
+
+This section describes how Transformer-XH applies to the fact verification task in FEVER (Thorne et al., 2018). Given a claim and a trustworthy background corpus, i.e. Wikipedia, the task is to verify whether the evidence in the corpus SUPPORTS, REFUTES, or there is NOT ENOUGH INFO to verify the claim. Similar to multi-hop QA, the first step is to construct an evidence graph using the text pieces in the background corpus; Transformer-XH can then be applied to conduct reasoning on these evidence pieces.
+
+Evidence Graph Construction. Many previous FEVER systems first retrieve the evidence sentences for the claim and then verify it by reasoning over the retrieved sentences (Nie et al., 2019; Zhou et al., 2019). The first step is similar to the retrieval stage in Hotpot QA; the second step is a multi-evidence reasoning task, where Transformer-XH is applied.
+
+We keep the evidence sentence retrieval step consistent with previous methods.
The sentence retrieval results of SR-MRS were not yet released at the time of our experiments; we therefore use the BERT-based retrieval results from another contemporary work (Liu et al., 2019b).
+
+We construct the evidence graph using the top five sentences from Liu et al. (2019b) as the nodes $\mathcal{X}$ and fully connected edges $E$. Following Liu et al. (2019b), the representation of each node is the concatenation of the claim, the Wikipedia title (entity name) of the document that includes the sentence, and the evidence sentence.
+
+Transformer-XH on Evidence Graph. Transformer-XH takes the evidence graph $\{\mathcal{X}, E\}$ and learns to classify the claim into one of three categories: $y \in \{\text{SUPPORTS, REFUTES, NOT ENOUGH INFO}\}$. Similar to the application in Hotpot QA, it first produces the global representation of the graph:
+
+$$
+\mathcal{H}^{L} = \text{Transformer-XH}(\mathcal{X}, E). \tag{13}
+$$
+
+Then two task-specific layers are added upon the last layer. The first layer conducts the fact prediction per node using the "[CLS]" token:
+
+$$
+p(y \mid \tau) = \operatorname{softmax} \left(\operatorname{Linear} \left(\tilde{h}_{\tau, 0}^{L}\right)\right). \tag{14}
+$$
+
+The second layer learns to measure the importance of each node in the graph:
+
+$$
+p(s \mid \tau) = \operatorname{softmax} \left(\operatorname{Linear} \left(\tilde{h}_{\tau, 0}^{L}\right)\right). \tag{15}
+$$
+
+The node-level predictions and node importance are combined into the final prediction for the claim:
+
+$$
+p(y \mid \mathcal{X}, E) = \sum_{\tau} p(s \mid \tau) \cdot p(y \mid \tau). \tag{16}
+$$
+
+Similar to the Hotpot QA scenario, we use multi-task learning that combines the node prediction task and the claim verification task. The first task uses the evidence sentence labels provided by FEVER and cross-entropy loss on Eqn. 15. The second task uses the final verification label with cross-entropy loss on Eqn. 16.
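The aggregation in Eqns. (14)–(16) is an attention-weighted mixture of per-node label distributions. A minimal NumPy sketch, using randomly initialized (untrained) linear layers as placeholders for the learned ones:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def verify_claim(h_cls, W_label, W_score):
    """Combine per-node predictions into a claim-level verdict.

    h_cls: (num_nodes, d) final-layer "[CLS]" representations of the evidence nodes.
    W_label: (d, 3) placeholder linear layer mapping each node to label logits over
             {SUPPORTS, REFUTES, NOT ENOUGH INFO}.
    W_score: (d, 1) placeholder linear layer scoring node importance.
    """
    p_y_given_node = softmax(h_cls @ W_label, axis=-1)    # Eqn. (14): per-node labels
    p_node = softmax((h_cls @ W_score).ravel())           # Eqn. (15): softmax over nodes
    return p_node @ p_y_given_node                        # Eqn. (16): mixture over nodes

rng = np.random.default_rng(0)
p = verify_claim(rng.normal(size=(5, 16)),                # 5 evidence nodes, toy d = 16
                 rng.normal(size=(16, 3)),
                 rng.normal(size=(16, 1)))
```

Because both softmaxes normalize to one, the mixture `p` is itself a valid distribution over the three labels, so a single node with high importance can dominate the verdict while weak nodes contribute little.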
+
+# 5 EXPERIMENTAL METHODOLOGIES
+
+Our experiments are conducted on Hotpot QA, the multi-hop question answering benchmark (Yang et al., 2018), and FEVER, the fact verification benchmark (Thorne et al., 2018).
+
+# 5.1 MULTI-HOP QUESTION ANSWERING ON HOTPOT QA
+
+Dataset. HotpotQA includes 112k crowd-sourced questions designed to require multiple pieces of textual evidence, which are the first paragraphs of Wikipedia pages. It has two types of questions: bridge questions require hopping via an outside entity, and comparison questions compare a property of two entities. There are two settings in HotpotQA. The Distractor setting provides golden evidence paragraphs together with TF-IDF retrieved negatives. The FullWiki setting requires systems to retrieve evidence paragraphs from the full set of Wikipedia articles.
+
+We focus on the FullWiki setting since previous research found that the negative documents in Distractor may be too weak and mitigate the need for multi-hop reasoning (Min et al., 2019b). There are 90k Train, 7k Dev, and 7k Test questions. The ground truth answer and supporting evidence sentences in the Train and Dev sets are provided. Test labels are hidden; only one submission is allowed to the leaderboard per 30 days$^2$. We evaluate our final model on Test and conduct ablations on Dev.
+
+Metrics. We use the official evaluation metrics of HotpotQA: exact match (EM) and F1 on answer (Ans), supporting facts (Supp), and the combination (Joint). The supporting facts prediction is an auxiliary task that evaluates the model's ability to find the evidence sentences. Joint EM is the product of the two EM results. Joint F1 first multiplies the precision and recall from Ans and Supp, then combines the Joint precision and recall into F1.
+
+Baseline. The main baselines include Cognitive QA (CogQA, Ding et al. (2019)) and Semantic Retrieval MRS (SR-MRS, Nie et al. (2019)).
CogQA uses several fine-tuned BERT machine reading comprehension (MRC) models to find hop entities and candidate spans, and then uses a BERT-based Graph Convolutional Network to rank the candidate spans. SR-MRS is a contemporary work and previously ranked first on the leaderboard. It is a BERT-based pipeline that uses fine-tuned BERT models to first rank the documents (twice), then rank sentences to find supporting facts, and finally conducts BERT MRC on the concatenated evidence sentences.

For a fair comparison, we also re-implement CogQA and upgrade its IR with our BERT IR model (BERT on $D_{ir} \cup D_{el}$, the same as Transformer-XH). We include other approaches on the FullWiki setting: Official Baseline (Yang et al., 2018), MUPPET (Feldman & El-Yaniv, 2019), QFE (Nishida et al., 2019), and DecompRC (Min et al., 2019a).

Implementation Details. The in-sequence attention and other standard Transformer components in Transformer-XH are initialized with the pre-trained BERT base model (Devlin et al., 2019). The extra hop attention parameters are initialized randomly and trained from scratch. The final model uses three hop steps. For bridge questions, we build the evidence graph described in Section 3; for comparison questions, we build a fully connected graph on the set $D_{ir} \cup D_{el}$ and train Transformer-XH separately. We leave more implementation details to the Appendix.

# 5.2 FACT VERIFICATION ON FEVER

Dataset. The FEVER task provides a claim sentence and requires the system to classify it into three categories: SUPPORTS, REFUTES, and NOT ENOUGH INFO, using the Wikipedia corpus as the evidence source. It provides 185,455 claims with manual labels and uses the Wikipedia dump from June 2017, which includes 5.4 million documents.

Metrics.
There are two official evaluation metrics in FEVER: Label Accuracy (LA), which evaluates the classification accuracy of the verification labels, and FEVER Score, which evaluates both the correctness of the evidence sentences used in verification and the LA. The latter is close to Joint EM in Hotpot QA and is the main metric. We use the official evaluation scripts from the FEVER task and refer to Thorne et al. (2018) for more details of this task.

Experimental Setups. We follow the experimental settings used by previous research in the FEVER 1.0 shared task, i.e., Nie et al. (2019), Zhou et al. (2019), and Liu et al. (2019b). Similar to Liu et al. (2019b), we also split the data into single- and multi-evidence categories and evaluate Transformer-XH on the two splits.
| Method | Dev Ans EM | Dev Ans F1 | Dev Supp EM | Dev Supp F1 | Dev Joint EM | Dev Joint F1 | Test Ans EM | Test Ans F1 | Test Supp EM | Test Supp F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Official Baseline (Yang et al., 2018) | 23.9 | 32.9 | 5.1 | 40.9 | 47.2 | 40.8 | 24.0 | 32.9 | 3.9 | 37.7 |
| DecompRC (Min et al., 2019a) | - | 43.3 | - | - | - | - | 30.0 | 40.7 | - | - |
| QFE (Nishida et al., 2019) | - | - | - | - | - | - | 28.7 | 38.1 | 14.2 | 44.4 |
| MUPPET (Feldman & El-Yaniv, 2019) | 31.1 | 40.4 | 17.0 | 47.7 | 11.8 | 27.6 | 30.6 | 40.3 | 16.7 | 47.3 |
| CogQA (Ding et al., 2019) | 37.6 | 49.4 | 23.1 | 58.5 | 12.2 | 35.3 | 37.1 | 48.9 | 22.8 | 57.7 |
| SR-MRS* (Nie et al., 2019) | 46.5 | 58.8 | 39.9 | 71.5 | 26.6 | 49.2 | 45.3 | 57.3 | 38.7 | 70.8 |
| CogQA (w. BERT IR) [Ours] | 44.8 | 57.7 | 29.2 | 62.8 | 18.5 | 43.4 | - | - | - | - |
| Transformer-XH | 54.0 | 66.2 | 41.7 | 72.1 | 27.7 | 52.9 | 51.6 | 64.1 | 40.9 | 71.4 |
Table 1: Results (%) on the HotpotQA FullWiki setting. Dev results of previous methods are as reported in their papers. Test results are from the leaderboard. The contemporary method is marked by *.
| Method | Comparison (1487) EM | Comparison (1487) F1 | Bridge (5918) EM | Bridge (5918) F1 | Single-Hop (3426) EM | Single-Hop (3426) F1 | Multi-Hop (3979) EM | Multi-Hop (3979) F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CogQA | 43.3 | 51.1 | 36.1 | 49.0 | 45.1 | 61.1 | 31.1 | 39.4 |
| SR-MRS* | 62.0 | 68.9 | 42.4 | 56.1 | 52.3 | 68.4 | 41.3 | 50.3 |
| CogQA (w. BERT IR) | 54.1 | 60.9 | 42.4 | 56.9 | 52.0 | 69.3 | 38.6 | 47.8 |
| Transformer-XH (w. BERT IR) | 59.9 | 65.8 | 52.4 | 66.3 | 62.2 | 78.3 | 46.8 | 55.7 |
| Transformer-XH (w. SR-MRS) | 64.3 | 70.7 | 47.9 | 62.3 | 58.1 | 74.3 | 45.3 | 55.2 |
Table 2: Dev Ans (%) on different scenarios; Comparison and Bridge are question types, Single-Hop and Multi-Hop are reasoning types. Reasoning types are estimated by Min et al. (2019b) via whether single-hop BERT has non-zero Ans F1. The numbers of questions are shown in brackets.

Baselines. The baselines include GEAR (Zhou et al., 2019) and two contemporary works, SR-MRS (Nie et al., 2019) and KGAT (Liu et al., 2019b). SR-MRS uses adaptations from Hotpot QA to FEVER similar to those of Transformer-XH. GEAR is a graph attention network based approach specially designed for fact verification. KGAT further improves GEAR's GAT by adding kernel information, and was the previous state of the art with BERT base. We also include the BERT Concat baseline (Liu et al., 2019b), which concatenates the evidence sentences into one text sequence and applies BERT to it.

Implementation Details. We use the retrieval results from Liu et al. (2019b) and connect all sentences as a fully connected graph. We follow parameter settings similar to those on Hotpot QA. We use the pre-trained BERT base model to initialize the Transformer components. The extra hop attention parameters are initialized randomly and trained from scratch, and three hop steps are used. We train Transformer-XH for two epochs.

# 6 EVALUATION RESULTS

This section first presents the evaluation results on HotpotQA and FEVER. It then conducts ablation studies, analyses, and case studies on HotpotQA to understand the effectiveness of Transformer-XH.

# 6.1 OVERALL RESULT

HotpotQA FullWiki results are presented in Table 1. Transformer-XH outperforms all previous methods by significant margins. Besides strong results, Transformer-XH's ability to natively represent structured data leads to a much simpler QA system. Previously, in order to utilize pre-trained BERT, Hotpot QA approaches decomposed the multi-hop reasoning task into multiple sub-tasks. For example, given the retrieved documents, CogQA (w. BERT IR) first leverages one BERT MRC model to find hop entities and then another BERT MRC to find candidate answer spans.
After that, it ranks the candidate spans using a BERT-based GAT, which is its only structure-modeling step. In comparison, Transformer-XH is a unified model that directly represents structured texts and integrates BERT weights.
| Method | Dev LA | Dev FEVER | Test LA | Test FEVER | Single Evidence LA | Single Evidence FEVER | Multi Evidence LA | Multi Evidence FEVER |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT Concat (Liu et al., 2019b) | 73.67 | 68.89 | 71.01 | 65.64 | - | - | - | - |
| GEAR/GAT (Zhou et al., 2019) | 74.84 | 70.69 | 71.60 | 67.10 | 79.79 | 77.42 | 66.12 | 38.21 |
| SR-MRS* (Nie et al., 2019) | 75.12 | 70.18 | 72.56 | 67.26 | - | - | - | - |
| KGAT* (Liu et al., 2019b) | 78.02 | 75.88 | 72.81 | 69.40 | 80.33 | 78.07 | 65.92 | 39.23 |
| Transformer-XH | 78.05 | 74.98 | 72.39 | 69.07 | 81.84 | 81.31 | 86.58 | 58.47 |
Table 3: FEVER results. Contemporary work is marked by $^*$. Single and Multi Evidence are results on Dev claims for which one or multiple sentences, respectively, are labeled as evidence.

Table 2 further inspects model performances on the Dev set by question type and reasoning type. Transformer-XH significantly outperforms all baselines on bridge questions, which require more multi-hop reasoning. On the "multi-hop" questions, Transformer-XH has higher relative gains (39% over CogQA on EM) than on the "single-hop" questions (27%), demonstrating its stronger multi-hop reasoning capability. We further study this in Section 6.3.

To further investigate the reasoning ability of Transformer-XH, we replace our retrieval pipeline with the top retrieved documents from the SR-MRS pipeline. More specifically, we use the top retrieved documents from SR-MRS to construct Transformer-XH's evidence graph while keeping all else constant. The resulting system, Transformer-XH (w. SR-MRS), outperforms SR-MRS's multi-step BERT-based reasoning on all metrics and question types. Transformer-XH's effectiveness is robust across multiple IR systems.

FEVER fact verification results are shown in Table 3. Transformer-XH outperforms SR-MRS by 4.8 FEVER score on Dev and 1.8 on Test. It performs on par with KGAT. More importantly, Transformer-XH excels at verifying claims that require multiple pieces of evidence, outperforming the contemporary work KGAT by nearly 20 FEVER points on the multi-evidence claims, a 49% relative improvement. Compared to KGAT, Transformer-XH mainly loses on the "not enough evidence" category, which is neither single nor multi evidence. This is an artifact of the FEVER task that our system is not specifically designed for.

This result also demonstrates Transformer-XH's generality on tasks with multiple text inputs that are not in sequential formats.
The only difference in Transformer-XH between the multi-hop QA and FEVER applications is the last (linear) task-specific layer; it provides similar or better performance than contemporary approaches that were specifically designed for the fact verification task. Due to space constraints and the consistent effectiveness of Transformer-XH on the two applications, the remaining experiments mainly use HotpotQA to analyze the behavior of Transformer-XH.

# 6.2 ABLATION STUDIES

Model Variations. We show the results of different model variations on the top left of Table 4. Single-Hop BERT uses a BERT MRC model on each document individually, which significantly decreases the accuracy, confirming the importance of multi-hop reasoning in the FullWiki setting (Min et al., 2019a). $GAT + BERT$ first uses a Graph Attention Network (Veličković et al., 2018) on the evidence graph to predict the best node; it then uses BERT MRC on the best document. It is $10\%$ worse than Transformer-XH since the MRC model has no access to the information from other documents. No Node Prediction eliminates the node prediction task and trains only on the span prediction task; the accuracy difference shows that the node prediction task helps model training.

Graph Structures. We show Transformer-XH's performance with different graph structures on the bottom left of Table 4. Bidirectional Edges adds reverse edges along the hyperlinks; Fully Connected Graph connects all document pairs; Node Sequence randomly permutes the documents and connects them into a sequence to simulate the Transformer-XL setting. Both Bidirectional Edges and Fully Connected Graph perform comparably to the original graph structure. Transformer-XH is able to learn meaningful connections using its hop attentions and is less dependent on the pre-existing graph structure. The fully connected graph can be used when no strong edge patterns are available in the task.
However, performance drops significantly with Node Sequence, showing that structured texts cannot be treated as a linear sequence, which cuts off many connections.
| Model Ablation | EM | F1 | Hop Steps | EM | F1 |
| --- | --- | --- | --- | --- | --- |
| Single-Hop BERT MRC on Individual Documents | 31.3 | 42.2 | One Hop | 50.3 | 64.6 |
| GAT (Node Prediction) + BERT (MRC on Best Node) | 48.9 | 61.9 | Two Hops | 51.6 | 66.4 |
| No Node Prediction Multi-Task | 43.2 | 55.3 | Four Hops | 51.4 | 66.1 |
| Bidirectional Edges on Hyperlinks | 50.6 | 65.0 | Five Hops | 50.6 | 64.7 |
| Fully Connected Graph | 51.0 | 65.5 | Six Hops | 50.1 | 64.2 |
| Node Sequence (Bidirectional Transformer-XL) | 14.1 | 20.7 | Transformer-XH | 52.4 | 66.3 |
Table 4: Ablation studies on the bridge questions on Dev answer accuracy (%), including model components (top left), graph structures (bottom left), and hop steps (right). Transformer-XH's full model uses three hop steps and the unidirectional Wiki link graph.

![](images/fc8eacec21cc7b2405662d24f928c57ac5de259be3021eed0f9f82beb120e5a2.jpg)
(a) First Hop Attentions.

![](images/83a79b57ed87aae9119395caea64b16f780acf8514e15cc4a9544793ea9a310b.jpg)
(b) Second Hop Attentions.

![](images/2cb9b8f06803ccc182ff66936ef5a358c22f071f302fde1c20aae40ca57692df.jpg)
(c) Third Hop Attentions.

Figure 2: Distributions of the learned attention weights of the three hops, in three groups: All (nodes) $\rightarrow$ All, All $\rightarrow$ Ans (the ground truth answer node), and Supp (nodes with the supporting facts) $\rightarrow$ Ans. X-axes are attention values scaled by the number of nodes.

Hop Steps. Recall that a Transformer-XH layer with extra hop attention corresponds to one information propagation (hop) step in the graph. Thus Transformer-XH with hop attention in its last K layers conducts K attention-hop steps in the graph. We show results with different K on the right side of Table 4. Transformer-XH reaches its peak performance with three hops (our full model). This is expected, as most Hotpot QA questions can be answered with two documents (Yang et al., 2018).

# 6.3 HOP ATTENTION ANALYSIS

This experiment analyzes the hop attentions using our full (three-hop) model on the fully connected graph to study their behavior without a pre-defined structure. Figure 2 plots the distributions of the learned hop attentions on the Dev set. The distributions shift strongly away from the normal distribution with more hops. Transformer-XH learns to distinguish different nodes after multi-hop attention: the attention scores become bimodal after three hops, ignoring some non-useful nodes.
Transformer-XH also learns to focus on meaningful edges: the score is higher on the path $\mathrm{Supp} \rightarrow \mathrm{Ans}$ than on $\mathrm{All} \rightarrow \mathrm{Ans}$, and the margin grows as the hop step increases from one to three.

# 6.4 CASE STUDY

Table 5 lists two examples from Transformer-XH and CogQA (w. BERT IR). The first case has a clear evidence chain "2014 S/S" → "Winner" → "YG Entertainment"; both methods find the correct answer. The second case, however, has too many distractors in the first document. Without additional clues from Document 2, the single-hop hop entity extraction component in CogQA (w. BERT IR) likely misses the correct answer document in its candidate set, and the later structural reasoning component cannot recover from this cascading error. In comparison, Transformer-XH finds the correct answer by combining the evidence through the hop attentions between the two evidence pieces. We leave more positive and negative cases to Appendix A.5.
| Example 1 | Example 2 |
| --- | --- |
| **Q:** 2014 S/S is the debut album of a South Korean boy group that was formed by who?<br>**Document 1:** 2014 S/S is the debut album of South Korean group Winner.<br>**Document 2:** Winner is a South Korean boy group formed in 2013 by YG Entertainment.<br>**Transformer-XH:** YG Entertainment ✓<br>**CogQA (w. BERT IR):** YG Entertainment ✓ | **Q:** Which man who presented 2022 FIFA World Cup bid was born on October 22, 1930?<br>**Document 1:** 2022 FIFA World Cup bid was presented by Frank Lowy, Ben Buckley, Quentin Bryce and Elle Macpherson.<br>**Document 2:** Frank Lowy (born 22 October 1930), is an Australian-Israeli businessman and Chairman of Westfield Corporation.<br>**Transformer-XH:** Frank Lowy ✓<br>**CogQA (w. BERT IR):** Quentin Bryce ✗ |
Table 5: Examples of Transformer-XH and BERT pipeline results on Hotpot QA.

# 7 RELATED WORK

HotpotQA's FullWiki task is a combination of open-domain QA (Chen et al., 2017) and multi-hop QA (Yang et al., 2018): the questions are designed to require multiple pieces of evidence, and those evidence pieces are documents to be retrieved from Wikipedia. It is a challenging combination: the retrieved documents are inevitably noisy and include much stronger distractors than the TF-IDF retrieved documents in the Distractor setting (Min et al., 2019a; Jiang & Bansal, 2019).

Various solutions have been proposed for Hotpot QA (Min et al., 2019b; Feldman & El-Yaniv, 2019; Nishida et al., 2019). These solutions often use complicated pipelines that adapt the multi-hop task into a combination of single-hop tasks in order to leverage the advantages of pre-trained models. For example, CogQA (Ding et al., 2019) uses two BERT-based MRC models to find candidate spans and then another BERT-initialized Graph Neural Network (GNN) to rank the spans; SR-MRS (Nie et al., 2019) uses three BERT-based rankers to find supporting sentences and then another BERT MRC model on the concatenated sentences to get the answer span. Transformer-XH is a simpler model that directly represents and reasons over multiple pieces of evidence using extra hop attentions.

Fact verification is a natural language inference task that also requires retrieving ("open-domain") and reasoning with multiple text pieces ("multi-evidence") (Thorne et al., 2018; Nie et al., 2019; Liu et al., 2019b). Many recent FEVER systems leverage Graph Neural Networks to combine information from multiple text nodes, with each node's text represented by BERT encodings (Zhou et al., 2019; Liu et al., 2019b). Transformer-XH is a more unified solution that simply includes language modeling as part of its joint reasoning.
In addition to Transformer-XL (Dai et al., 2019), other work has been proposed to improve the Transformer architecture on long text sequences. For example, T-DMCA (Liu et al., 2018) splits the sequence into blocks and then merges the blocks with attention. Sparse Transformer (Child et al., 2019) introduces sparse factorizations of the attention matrix. Transformer-XH shares a similar motivation but focuses on multiple pieces of text that are not in sequential form.

Transformer-XH is also inspired by GNNs (Kipf & Welling, 2017; Schlichtkrull et al., 2017; Veličković et al., 2018), which leverage neural networks to model graph-structured data for downstream tasks (Sun et al., 2018; Zhao et al., 2020). The key difference is that a "node" in Transformer-XH is a text sequence, and the modeling of the structure is conducted jointly with the representation of the text. Transformer-XH combines the Transformer's advantages in understanding text with the power of GNNs in modeling structure.

# 8 CONCLUSION

Transformer-XH and its eXtra Hop attention mechanism are a simple yet powerful adaptation of the Transformer for learning better representations of structured text as it naturally occurs. It integrates natively with pre-trained language models to allow complex reasoning across multiple pieces of textual evidence. Applied to HotpotQA, Transformer-XH significantly shrinks the typical multi-hop QA pipeline, eliminating many cascading errors that arise from the linear-sequence input constraint of pre-trained Transformers. The same simplicity applies to FEVER, where a single Transformer-XH model is all that is needed to obtain much stronger accuracy. With its simplicity and efficacy, we envision that Transformer-XH will benefit many applications in the near future.

# REFERENCES

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to Answer Open-Domain Questions.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 1870-1879, 2017.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating Long Sequences with Sparse Transformers. arXiv preprint arXiv:1904.10509, 2019.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978-2988, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171-4186, 2019.

Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. Cognitive Graph for Multi-Hop Reading Comprehension at Scale. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2694-2703, 2019.

Yair Feldman and Ran El-Yaniv. Multi-Hop Paragraph Retrieval for Open-Domain Question Answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2296-2309, 2019.

Paolo Ferragina and Ugo Scaiella. TagMe: On-the-fly Annotation of Short Text Fragments (by Wikipedia Entities). In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, pp. 1625-1628, 2010.

Faegheh Hasibi, Krisztian Balog, and Svein Erik Bratsberg. Entity Linking in Queries: Efficiency vs. Effectiveness. In European Conference on Information Retrieval, pp. 40-53, 2017.

Yichen Jiang and Mohit Bansal. Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2726-2736, 2019.
Thomas N Kipf and Max Welling. Semi-supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations, 2017.

Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating Wikipedia by Summarizing Long Sequences. In International Conference on Learning Representations, 2018.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692, 2019a.

Zhenghao Liu, Chenyan Xiong, and Maosong Sun. Kernel Graph Attention Network for Fact Verification. arXiv preprint arXiv:1910.09796, 2019b.

Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. Compositional Questions Do Not Necessitate Multi-hop Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4249-4257, 2019a.

Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. Multi-hop Reading Comprehension through Question Decomposition and Rescoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6097-6109, 2019b.

Yixin Nie, Songhe Wang, and Mohit Bansal. Revealing the Importance of Semantic Retrieval for Machine Reading at Scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019.

Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, and Junji Tomita. Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2335-2345, 2019.

Rodrigo Nogueira and Kyunghyun Cho. Passage Re-ranking with BERT.
arXiv preprint arXiv:1901.04085, 2019.

Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling Relational Data with Graph Convolutional Networks. arXiv preprint arXiv:1703.06103, 2017.

Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4231-4242, 2018.

James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. The Fact Extraction and VERification (FEVER) Shared Task. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pp. 1-9, 2018.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is All You Need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In International Conference on Learning Representations, 2018.

Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J Smola, and Zheng Zhang. Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, 2018.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le.
XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural Information Processing Systems, pp. 5754-5764, 2019.

Chen Zhao, Chenyan Xiong, Xin Qian, and Jordan Boyd-Graber. Complex Factoid Question Answering with a Free-Text Knowledge Graph. In The Web Conference, 2020.

Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 892-901, 2019.

# A APPENDIX

The appendix includes details of the evidence graph construction for Hotpot QA, ablation studies on the BERT IR component, and more details and results on Hotpot QA.

# A.1 HOTPOTQA EVIDENCE GRAPH CONSTRUCTION DETAILS

The evidence graph construction includes two stages. The first stage is BERT IR, which extracts related documents directly from the question. The second stage expands the related documents along Wikipedia links. The first stage is applied to all questions, while the second is only required for bridge questions.

The first stage uses two methods to find documents. The first method uses DrQA's retrieval system (Chen et al., 2017), which is unsupervised TF-IDF; we keep the top 100 DrQA-retrieved documents $D_{ir}$ for each question. The second method uses TagMe (Ferragina & Scaiella, 2010) and CMNS (Hasibi et al., 2017), two commonly used entity linkers, to annotate the question. We keep the TagMe output entity and the three highest-scored entities per surface form (a phrase in the question linked with entities) from CMNS, and use their corresponding Wikipedia documents as $D_{el}$.

We use a BERT ranker (Nogueira & Cho, 2019) to re-rank the initial set $D_{ir} \cup D_{el}$. The input to BERT is the concatenation of the question and the first paragraph of the document:

[CLS] Question [SEP] First Paragraph of Document.
Then a linear layer is added on the last layer's [CLS] representation to score the relevance of the document. We use BERT base and fine-tune it with the relevance labels (from supporting facts) and cross-entropy loss. The top two highest-scored documents from $D_{ir}$ and the top one document per entity position (surface form) in $D_{el}$ are kept as the first-stage BERT IR documents.

The second stage expands the first-stage BERT IR documents along Wikipedia hyperlinks to obtain $D_{exp}$: a document is included if it links to or is linked from a first-stage document. We use the same BERT ranker to rank $D_{exp}$ and keep its top 15 documents.

The final evidence graph nodes per question include the top two highest-ranked documents in $D_{ir}$, the top one per entity name in $D_{el}$, and the top 15 from the expanded documents $D_{exp}$.

The comparison questions only require information about the two question entities; thus, when building their evidence graph, we do not expand the documents (i.e., there is no $D_{exp}$).

The retrieval pipeline is a multi-stage retrieval enhanced with entity linking. It is close to the retrieval system used in SR-MRS (Nie et al., 2019). When using the SR-MRS retrieved documents, we use the top 10 documents on bridge questions and the top two documents on comparison questions.

In the next section, we show that Transformer-XH is robust to different numbers of documents kept in the evidence graph and performs similarly with the documents retrieved by SR-MRS.

# A.2 ABLATION STUDIES ON DOCUMENT RETRIEVAL

This experiment studies the effectiveness and influence of different retrieval settings. We use different numbers of top-K-ranked documents from the BERT ranker, run Transformer-XH on the corresponding evidence graph, and evaluate its performance on bridge questions in the Dev set. We also evaluate the Supporting facts Recall and Answer Recall.
Supp Recall evaluates whether the documents with the supporting facts are included in the first-stage retrieved documents. Ans Recall evaluates whether there exists a document in the evidence graph that includes the ground truth answer. The results are in Table 6.

Our BERT IR system performs better than CogQA's TF-IDF and on par with SR-MRS, as expected; the latter uses a retrieval pipeline similar to our BERT IR system. Transformer-XH is robust across different retrieval settings and keeps its effectiveness when applied to the top 10 documents from SR-MRS (including both stages).
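As a minimal sketch of the Ans Recall metric described above, under the simplifying assumption of exact string containment (the official evaluation may normalize answers differently), the function name is illustrative:

```python
def answer_recall(evidence_graphs, answers):
    """Percentage of questions whose evidence graph contains at least one
    document that includes the ground-truth answer string.

    evidence_graphs: list of lists of document texts, one list per question.
    answers: list of ground-truth answer strings, aligned with the graphs."""
    hits = sum(
        any(ans in doc for doc in docs)
        for docs, ans in zip(evidence_graphs, answers)
    )
    return 100.0 * hits / len(answers)
```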
| Method | Supp Recall | Ans Recall | Dev Ans EM | Dev Ans F1 |
| --- | --- | --- | --- | --- |
| Top 10 TFIDF (CogQA) | 70.8 | n.a. | - | - |
| Top 2 w. BERT IR + Q Entities | 72.3 | 88.1 | 48.7 | 62.6 |
| Top 5 w. BERT IR + Q Entities | 76.5 | 89.6 | 47.1 | 60.8 |
| Top 10 w. BERT IR + Q Entities | 78.9 | 91.1 | 46.6 | 60.3 |
| SR-MRS Top 10 (All Together) | n.a. | 86.1 | 47.9 | 62.3 |
Table 6: Ablation study on the retrieval systems. Top-10 TFIDF is the one used by CogQA (Ding et al., 2019). BERT IR is the retrieval system used by CogQA (w. BERT IR) and Transformer-XH; Top K refers to using the 2/5/10 highest-ranked documents from the BERT ranker in the first stage. SR-MRS Top 10 uses the 10 retrieved documents per question provided by Nie et al. (2019). All retrieval methods include entities linked in the question and are expanded along Wiki links, except when evaluating the first-stage Supp Recall.

# A.3 OTHER HOTPOT QA COMPONENTS

This section describes the other components of our system for the HotpotQA dataset. The whole QA system starts with question type classification. We train Transformer-XH separately on each type of question over its evidence graph. Besides answer prediction, we also adopt a BERT-based model to predict supporting sentences.

# A.3.1 QUESTION CLASSIFICATION

The first component of our system classifies each question as a bridge or comparison type. We adopt the standard BERT classification fine-tuning setup on HotpotQA questions, using the question type labels provided by HotpotQA. The classifier achieves $99.1\%$ accuracy on the Dev set. We use it to split the questions into Comparison and Bridge.

# A.3.2 SUPPORTING FACTS CLASSIFICATION

The supporting facts prediction task is to extract all sentences that help reach the answer. For bridge questions, these sentences usually cover different pieces of the question; for comparison questions, the supporting facts are the properties of the two question entities. We design one model architecture for this task but train two models, one per question type, to reflect this inherent difference.

We use BERT as our base model and conduct multi-task learning on top of it. The first task is document relevance prediction: similar to Transformer-XH, we add a linear layer on the [CLS] token of BERT to predict the relevance score.
The second task is binary sentence classification: we concatenate the first and last token representations of each sentence in the document and pass them through a linear layer, whose binary output decides whether the sentence is a supporting sentence.

Bridge question supporting facts prediction. For bridge questions, we predict supporting facts after the answer prediction from Transformer-XH, to recover the inference chain. We start by predicting supporting facts in the answer document. The other document is chosen from the parents of the answer document in the evidence graph. Compared with the contemporary model of Nie et al. (2019), which does not limit the search space along the inference chain (i.e., the answer document may not be relevant to the other supporting page), our method more naturally fits the purpose of the task.

Comparison question supporting facts prediction. For comparison questions, after extracting the first-stage documents $D$, we simply run this supporting facts prediction model to select the top-2 documents and predict the corresponding supporting facts.

# A.3.3 TRAINING DETAILS

We use DGL (Wang et al., 2019) to implement Transformer-XH and CogQA (w. BERT IR) with batch size 1 (i.e., one graph per batch), and keep the other parameters the same as the default BERT setting. We train Transformer-XH separately on the two types of questions, following previous
| id | Example | Explanation |
| --- | --- | --- |
| 1 (+) | **Q:** In which year was the King who made the 1925 Birthday Honours born? **P:** 1865 ✓<br>**Document 1:** The 1925 Birthday Honours were appointments by King George V to various orders and honours.<br>**Document 2:** George V (3 June 1865 - 20 January 1936) was King of the United Kingdom. | With the necessary evidence available, Transformer-XH conducts multi-hop reasoning and extracts the correct span. |
| 2 (-) | **Q:** Where was the world cup hosted that Algeria qualified for the first time into the round of 16? **A:** Brazil **P:** Spain ✗<br>**Document 1** (Algeria at the FIFA World Cup): In 2014, Algeria qualified for the first time into the round of 16.<br>**Document 2** (2014 FIFA CUP): It took place in Brazil from 12 June to 13 July 2014, after the country was awarded the hosting rights in 2007. | Transformer-XH does not predict the correct answer, since Document 1 does not link to any other documents. Thus, the information does not propagate to the correct answer document, 2014 FIFA CUP. |
| 3 (-) | **Q:** What government position was held by the woman who portrayed Corliss Archer in the film Kiss and Tell? **A:** Chief of Protocol **P:** ambassador ✗<br>**Document 1:** Kiss and Tell is a 1945 American comedy film starring then 17-year-old Shirley Temple as Corliss Archer.<br>**Document 2:** As an adult, Shirley Temple was named United States ambassador to Ghana and to Czechoslovakia, and also served as Chief of Protocol of the United States. | Transformer-XH predicts the correct answer document, Shirley Temple. However, it could not rule out the wrong answer "ambassador", a title she was named to but not the position asked for. |
+ +Table 7: Additional examples of model predictions on the HotpotQA dataset; the first example is a correct prediction (+), the other two are wrong predictions (-). + +research (Ding et al., 2019). We train Transformer-XH and the GNN of CogQA (w. BERT IR) for 2 epochs. All other BERT based models use the default BERT parameters and are trained for 1 epoch. + +# A.4 IMPLEMENTATION DETAILS OF COGQA (W. BERT IR) + +This section discusses our implementation of CogQA (w. BERT IR). We start with the same documents from BERT IR, the retrieval used by Transformer-XH, and then implement the following steps: + +# A.4.1 HOP ENTITY EXTRACTION + +For each document from the previous step, we run the BERT MRC model and limit the span candidates to hyperlinked entities for hop entity extraction (e.g., in Figure 1, "Harvard University" is a hop entity). Following Ding et al. (2019), we predict the top three entities that are above the relative threshold, i.e., the start-span probability at the [CLS] position. + +# A.4.2 ANSWER SPAN EXTRACTION + +For each document (including the hop entity documents), following Ding et al. (2019), we run the BERT MRC model to extract spans (e.g., "Cambridge" in Figure 1). We predict the top span that is above the threshold, i.e., the start-span probability at the [CLS] position. + +We train both the hop entity extraction and span extraction tasks with the same BERT model but different prediction layers. For each training example, we extract the link between the two given supporting pages. The page that includes the link (e.g., "Harvard University" in Figure 1) is the supporting page + +for hop entity extraction, while the other page is the answer page (e.g., "Cambridge" in Figure 1) for answer span extraction. + +# A.4.3 GAT MODELING + +All the entities and answer spans form the final graph. The nodes are the entities and spans, and the edges are the connections from the entities to the extracted hop entities or spans.
+ +We use BERT for each node representation with question, anchor sentences and context, following Ding et al. (2019). We run GAT (Veličković et al., 2018) on top of BERT to predict the correct answer span node. + +# A.4.4 COMPARISON QUESTIONS + +After predicting supporting facts, we concatenate the sentences and follow Min et al. (2019b) to run a BERT MRC model to predict either a span or yes/no as the answer. + +# A.5 ADDITIONAL CASE STUDY + +We provide additional case studies in Table 7. The first case can be directly predicted through the clear evidence chain "the 1925 Birthday Honours" $\rightarrow$ "George V" $\rightarrow$ "1865". In the second case, the first document ("Algeria at the FIFA World Cup") has no link to any other documents, so the model cannot access the correct answer. The third case is more reading-comprehension oriented, where the model cannot distinguish the correct and wrong spans inside one sentence. \ No newline at end of file diff --git a/transformerxhmultievidencereasoningwithextrahopattention/images.zip b/transformerxhmultievidencereasoningwithextrahopattention/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..51a077de20ed96fea78ccc0731ff8c229498d850 --- /dev/null +++ b/transformerxhmultievidencereasoningwithextrahopattention/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:44ed29e0df4bfc0aaaef254fd2a4fbdaa263abaf4580c5197bffcd175db3c1b6 +size 678428 diff --git a/transformerxhmultievidencereasoningwithextrahopattention/layout.json b/transformerxhmultievidencereasoningwithextrahopattention/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bce465249cb0d044e126e80bd2d41f62df70ee32 --- /dev/null +++ b/transformerxhmultievidencereasoningwithextrahopattention/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee2d52d3168205fedb7c5c8ab7b69d0ba8e5afcc175e44ba28f7698b330ae443 +size 453808 diff --git
a/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_content_list.json b/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..979c35707b2fca5009128382a21c65b9f0fe4ac1 --- /dev/null +++ b/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:261ac8d1817487da5777a667e6aae54ed261f10afc46cdaf011ca8496ccdf963 +size 98735 diff --git a/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_model.json b/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_model.json new file mode 100644 index 0000000000000000000000000000000000000000..832e38541c8f0f6bedea681c0ec3caf366ca3fa4 --- /dev/null +++ b/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e5a226f80df656ff3a34cc6ca7c55d6f597b469767e0158def3b8eeb4a8e700f +size 115264 diff --git a/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_origin.pdf b/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..204d1242ace3af9e8d24f120ec7421550f2dba87 --- /dev/null +++ b/treestructuredattentionwithhierarchicalaccumulation/0afb7820-dcb7-40a8-b70a-f75011744169_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fe0f4b239c0fc5f4f0e1cfc4af2036e7c156242153aa36c82bf15d20dfb6ab2 +size 1333349 diff --git a/treestructuredattentionwithhierarchicalaccumulation/full.md b/treestructuredattentionwithhierarchicalaccumulation/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..791d6901ce6ab173fee0ed321ac17f50b4f977d4 --- /dev/null +++ b/treestructuredattentionwithhierarchicalaccumulation/full.md @@ -0,0 +1,370 @@ +# TREE-STRUCTURED ATTENTION WITH HIERARCHICAL ACCUMULATION + +Xuan-Phi Nguyen$^{\ddagger}$, Shafiq Joty$^{\dagger\ddagger}$, Steven C.H. Hoi†, Richard Socher† + +$\dagger$ Salesforce Research +$^{\ddagger}$ Nanyang Technological University + +nguyenxu002@e.ntu.edu.sg,{sjoty,shoi,rsocher}@salesforce.com + +# ABSTRACT + +Incorporating hierarchical structures like constituency trees has been shown to be effective for various natural language processing (NLP) tasks. However, it is evident that state-of-the-art (SOTA) sequence-based models like the Transformer struggle to encode such structures inherently. On the other hand, dedicated models like the Tree-LSTM, while explicitly modeling hierarchical structures, do not perform as efficiently as the Transformer. In this paper, we attempt to bridge this gap with "Hierarchical Accumulation" to encode parse tree structures into self-attention at constant time complexity. Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German translation task. It also yields improvements over Transformer and Tree-LSTM on three text classification tasks. We further demonstrate that using hierarchical priors can compensate for data shortage, and that our model prefers phrase-level attentions over token-level attentions. + +# 1 INTRODUCTION + +Although natural language has a linear surface form, the underlying construction process is known to be hierarchical (Frege, 1892). As such, different tree-like structures have been proposed to represent the compositional grammar and meaning of a text, such as constituency and dependency trees. Leveraging the hierarchical structures of language gives models more structural information about the data and improves performance on the downstream tasks (Tai et al., 2015; Eriguchi et al., 2016).
Despite that, state-of-the-art neural models like the Transformer still prefer the linear (sequential) form of natural language (Vaswani et al., 2017; Ott et al., 2018; Devlin et al., 2018). This is because the linear form allows us to develop simple but efficient and scalable techniques (like self-attention which operates at constant parallel time complexity$^1$) to train models at a large scale. Yet, there is still no concrete evidence that these models learn grammatical and constituency structures implicitly. However, ad hoc tree-structured models (Socher et al., 2013; Tai et al., 2015; Shi et al., 2018) often operate on recursive or recurrent mechanisms, which are not parallelizable, thus hindering their application in larger-scale training. Besides, such models are designed to only operate at the sentence-level (i.e., single tree), limiting their application to document-level processing. + +We propose a novel attention-based method that encodes trees in a bottom-up manner and executes competitively with the Transformer at constant parallel time complexity. In particular, our attention layers receive as input the constituency tree of a piece of text and then model the hidden states of all nodes in the tree (leaves and nonterminals) from their lower-layer representations according to the tree structure. As attentions typically have query, key and value components, our model uses hierarchical accumulation to encode the value component of each nonterminal node by aggregating the hidden states of all of its descendants. The accumulation process is three-staged. First, we induce the value states of nonterminals with hierarchical embeddings, which help the model become aware of the hierarchical and sibling relationships between the nodes. Second, we perform an upward cumulative-average operation on each target node, which accumulates all elements in the branches originating from the target node to its descendant leaves.
Third, these branch-level representations + +are combined into a new value representation of the target node by using weighted aggregation. Finally, the model proceeds to perform attention with subtree masking where the attention score between a nonterminal query and a key is activated only if the key is a descendant of the query. + +Our contributions are threefold. First, we present our attention-based hierarchical encoding method. Our method overcomes linear parallel time complexity of Tree-LSTM (Tai et al., 2015) and offers attractive scalability. Second, we adopt our methods within the Transformer architecture and show improvements across various NLP tasks over strong baselines. In particular, our model leverages tree-based prior to improve translation quality over the Transformer baselines in the IWSLT'14 English-German and German-English, the IWSLT'13 English-French and French-English, and the WMT'14 English-German translation tasks. Furthermore, our model also exhibits advantages over Tree-LSTM in classification tasks including Stanford Sentiment Analysis (SST) (Socher et al., 2013), IMDB Sentiment Analysis and Subject-Verb Agreement (Linzen et al., 2016). Finally, our analysis of the results suggests that incorporating a hierarchical prior using our method can compensate for the lack of data in the context of machine translation. We also demonstrate that the model has natural and consistent preference for phrase-level attention over token-level attention. Our source code is available at https://github.com/ngxphi47/tree_transformer. + +# 2 RELATED WORK + +The Transformer framework has become the driving force in recent NLP research. For example, it has achieved state-of-the-art performance in machine translation tasks (Vaswani et al., 2017; Shaw et al., 2018; Ott et al., 2018; Wu et al., 2019) and self-supervised representational learning (Devlin et al., 2018; Radford et al., 2018; Lample & Conneau, 2019; Yang et al., 2019). 
The self-attention layers in the Transformer encode a sequence at constant parallel time complexity, which makes it parallelizable and scalable. On the other hand, there have been many proposals to use parse trees as an architectural prior to facilitate different downstream tasks. Socher et al. (2013) adopt a recursive compositional method over constituency trees to solve sentiment analysis in a bottom-up manner. Tree-LSTM (Tai et al., 2015) improves the task performance by using an LSTM structure to encode trees recurrently. Both of the proposed methods, while effective, operate sequentially, i.e., with linear parallel time complexity. Tree structures have also been used as an architectural bias to improve machine translation (Eriguchi et al., 2016; Shi et al., 2018; Yang et al., 2017). Constituency trees can also be decoded in a top-down manner, as proposed in (Alvarez-Melis & Jaakkola, 2017; Gu et al., 2018). Besides, they can also be learned in an unsupervised way (Kim et al., 2019; Shen et al., 2018; 2019; Yaushian Wang & Chen, 2019). Meanwhile, Strubell et al. (2018); Hao et al. (2019); Harer et al. (2019) attempted to incorporate trees into self-attention. Hewitt & Manning (2019) showed that dependency semantics are already intrinsically embedded in BERT (Devlin et al., 2018). Concurrently, Yaushian Wang & Chen (2019) suggested that BERT may not naturally embed constituency semantics. + +Our approach encodes trees in a bottom-up manner as Tai et al. (2015) and Socher et al. (2013). But it differs from them in that it leverages the attention mechanism to achieve high efficiency and performance. Plus, it is applicable to self- and cross-attention layers in the Transformer sequence-to-sequence (Seq2Seq) skeleton. Unlike previous methods, our model works with multi-sentence documents (multi-tree) seamlessly. Our model also differs from Strubell et al. (2018); Hao et al. (2019); Harer et al.
(2019) in that their methods only use tree structures to guide and mask token-level attentions, while ours processes all the nodes of the tree hierarchically. In this paper, while our method is applicable to dependency trees, we focus primarily on constituency trees because (1) constituency trees contain richer grammatical information in terms of phrase structure, and (2) there is as yet no evidence, to the best of our knowledge, that constituency structures are learned implicitly in the standard self-supervised models. + +# 3 BACKGROUND - TRANSFORMER FRAMEWORK + +The Transformer (Vaswani et al., 2017) is a Seq2Seq network that models sequential information using stacked self- and cross-attention layers. The output $O$ of each attention sub-layer is computed via scaled multiplicative formulations defined as: + +$$ +\boldsymbol {A} = \left(\boldsymbol {Q} \boldsymbol {W} ^ {Q}\right) \left(\boldsymbol {K} \boldsymbol {W} ^ {K}\right) ^ {T} / \sqrt {d}; \quad \operatorname {Att} (\boldsymbol {Q}, \boldsymbol {K}, \boldsymbol {V}) = \operatorname {softmax} (\boldsymbol {A}) \left(\boldsymbol {V} \boldsymbol {W} ^ {V}\right) \tag {1} +$$ + +![](images/5a317317df934e106d9b4ffebc888038cd022306bdc96da55aa3c979c7bbc084.jpg) +Figure 1: The hierarchical accumulation process of tree structures (best seen in colors). Given a parse tree, it is interpolated into a tensor $\mathbf{S}$ , which is then accumulated vertically from bottom to top to produce $\hat{\mathbf{S}}$ . Next, the (branch-level) component representations of the nonterminal nodes are combined into one representation as $\overline{N}$ by weighted aggregation. Multi-colored blocks indicate accumulation of nodes of respective colors. The rows of $\mathbf{S}$ in Eq. 5 are counted from the bottom.
+ +$$ +\boldsymbol {O} = \operatorname {Att} (\boldsymbol {Q}, \boldsymbol {K}, \boldsymbol {V}) \boldsymbol {W} ^ {O} \tag {2} +$$ + +where softmax is the softmax function, $Q = (q_{1},\dots,q_{l_{q}})\in \mathbb{R}^{l_{q}\times d}$ , $K = (k_{1},\dots,k_{l_{k}})\in \mathbb{R}^{l_{k}\times d}$ , $V = (v_{1},\dots,v_{l_{k}})\in \mathbb{R}^{l_{k}\times d}$ are matrices of query, key and value vectors respectively, and $W^{Q},W^{K},W^{V},W^{O}\in \mathbb{R}^{d\times d}$ are the associated trainable weight matrices. $A$ denotes the affinity scores (attention scores) between queries and keys, while $\mathrm{Att}(Q,K,V)$ are the attention vectors. Then, the final output of a Transformer layer is computed as: + +$$ +\phi (\boldsymbol {A}, \boldsymbol {Q}) = \operatorname {LN} (\operatorname {FFN} (\operatorname {LN} (\boldsymbol {O} + \boldsymbol {Q})) + \operatorname {LN} (\boldsymbol {O} + \boldsymbol {Q})) \tag {3} +$$ + +where $\phi$ represents the typical serial computations of a Transformer layer with layer normalization (LN) and feed-forward (FFN) layers. For simplicity, we omit the multi-head structure and other details and refer the reader to Vaswani et al. (2017) for a complete description. + +# 4 TREE-BASED ATTENTION + +# 4.1 ENCODING TREES WITH HIERARCHICAL ACCUMULATIONS + +To encode hierarchical structures in parallel, we need to represent the tree in a data structure that can be parallelized. Given a sentence $X$ of length $n$ , let $\mathcal{G}(X)$ be the directed spanning tree which represents the parse tree of $X$ produced by a parser. We define a transformation $\mathcal{H}$ such that $\mathcal{H}(\mathcal{G}(X)) = \mathcal{T}(X) \triangleq (\mathcal{L}, \mathcal{N}, \mathcal{R})$ .
In this formulation, $\mathcal{L}$ denotes the ordered sequence of $n$ terminal nodes (or leaves) of the tree (i.e., $\mathcal{L} = X$ ), and $\mathcal{N}$ denotes the set of $m$ nonterminal nodes (or simply nodes), each of which has a phrase label (e.g., NP, VP) and spans over a sequence of terminal nodes.$^2$ $\mathcal{R}$ contains a set of rules indexed by the nonterminal nodes in $\mathcal{N}$ such that for each node $x \in \mathcal{N}$ , $\mathcal{R}(x)$ denotes the set of all nodes that belong to the subtree rooted at $x$ . For example, for the nonterminals $g$ and $h$ in Figure 1, $\mathcal{R}(g) = \{g, c, h, d, e\}$ and $\mathcal{R}(h) = \{h, d, e\}$ . + +There might be various ways to transform the tree $\mathcal{G}(X)$ . For a tree-encoding process, a particular transformation is legitimate only if the resulting data structure represents only $\mathcal{G}(X)$ and not any other structures. Otherwise, the encoding process may confuse $\mathcal{G}(X)$ with another structure. In other words, the transformation should be a one-to-one mapping. Our transformation $\mathcal{H}$ satisfies this requirement as shown in the following proposition (see Appendix 7.1 for a proof). + +Proposition 1 Suppose $\mathcal{G}(X)$ is a parse tree and there exists an inverse-transformation $\mathcal{I}$ that converts $\mathcal{T}(X)$ to a graph $\mathcal{I}(\mathcal{T}(X))$ , then $\mathcal{I}$ can only transform $\mathcal{T}(X)$ back to $\mathcal{G}(X)$ , or: + +$$ +\mathcal {I} \left(\mathcal {H} \left(\mathcal {G} (X)\right)\right) = \mathcal {G} (X) \quad \text {or} \quad \mathcal {I} = \mathcal {H} ^ {- 1} \tag {4} +$$ + +We now describe the tree accumulation method using $\mathcal{T}(X)$ . Figure 1 shows the overall process.
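As a concrete sketch, the triple $(\mathcal{L}, \mathcal{N}, \mathcal{R})$ for the Figure 1 fragment above (leaves $c, d, e$; nonterminals $g$ and $h$) can be written down directly. The following toy Python encoding is only illustrative; the dictionary names are assumptions, not the paper's data structures:

```python
# Toy encoding of T(X) = (L, N, R) for the Figure 1 fragment:
# leaves c, d, e and nonterminals g (root of the fragment) and h.
leaves = ["c", "d", "e"]
nodes = ["g", "h"]
# R maps each nonterminal to every member of its subtree (itself included),
# matching R(g) = {g, c, h, d, e} and R(h) = {h, d, e} from the text.
R = {
    "g": {"g", "c", "h", "d", "e"},
    "h": {"h", "d", "e"},
}
# The leaf span of each nonterminal follows from R.
leaf_span = {x: R[x] & set(leaves) for x in nodes}
```

The leaf spans recovered this way are what the interpolation and accumulation steps in Section 4.1 iterate over.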
Let $\pmb{L} = (l_{1},\dots,l_{n})\in \mathbb{R}^{n\times d}$ and $\pmb {N} = (\pmb {n}_1,\dots,\pmb {n}_m)\in \mathbb{R}^{m\times d}$ be the hidden representations of the + +leaves $\mathcal{L} = (x_1^{\mathcal{L}},\dots,x_n^{\mathcal{L}})$ and nodes $\mathcal{N} = (x_1^{\mathcal{N}},\dots,x_m^{\mathcal{N}})$ , respectively. We define an interpolation function $\mathcal{F}:(\mathbb{R}^{n\times d},\mathbb{R}^{m\times d})\to \mathbb{R}^{(m + 1)\times n\times d}$ , which takes $L,N$ and $\mathcal{R}$ as inputs and returns a tensor $\mathbf{S}\in \mathbb{R}^{(m + 1)\times n\times d}$ . The row $i$ and column $j$ vector of $\mathbf{S}$ , or $\mathbf{S}_{i,j}\in \mathbb{R}^d$ , is defined as: + +$$ +\mathbf {S} _ {i, j} = \mathcal {F} (\boldsymbol {L}, \boldsymbol {N}, \mathcal {R}) _ {i, j} = \left\{ \begin{array}{l l} \boldsymbol {l} _ {j} & \text {if } i = 1 \\ \boldsymbol {n} _ {i - 1} & \text {else if } x _ {j} ^ {\mathcal {L}} \in \mathcal {R} \left(x _ {i - 1} ^ {\mathcal {N}}\right) \\ \boldsymbol {0} & \text {otherwise.} \end{array} \right. \tag {5} +$$ + +where $\mathbf{0}$ denotes a zero vector of length $d$ . Note that the row and column arrangements in $\mathbf{S}$ reflect the tree structure (see Figure 1). Next, we perform the upward cumulative-average (upward-CA) operation $\mathcal{U}$ on $\mathbf{S}$ to compose the node representations in a bottom-up fashion over the induced tree structure. The result of this operation is a tensor $\hat{\mathbf{S}} \in \mathbb{R}^{m \times n \times d}$ , in which each nonterminal node representation is averaged along with all of its descendants in a particular branch. More formally, + +$$ +\mathcal {U} (\mathbf {S}) _ {i, j} = \hat {\mathbf {S}} _ {i, j} = \left\{ \begin{array}{l l} \mathbf {0} & \text {if } \mathbf {S} _ {i + 1, j} = \mathbf {0} \\ \sum_ {\mathbf {S} _ {t, j} \in C _ {j} ^ {i}} \mathbf {S} _ {t, j} / \left| C _ {j} ^ {i} \right| & \text {otherwise.} \end{array} \right.
\tag {6} +$$ + +where $C_j^i = \{\mathbf{S}_{1,j}\} \cup \{\mathbf{S}_{t,j}|x_t^\mathcal{N} \in \mathcal{R}(x_i^\mathcal{N})\}$ is the set of vectors in $\mathbf{S}$ representing the leaves and nodes in the branch that starts with $x_i^\mathcal{N}$ and ends with $x_j^\mathcal{L}$ . Note that we discard the leaves in $\hat{\mathbf{S}}$ . As demonstrated in Figure 1, each row $i$ of $\hat{\mathbf{S}}$ represents a nonterminal node $x_i^\mathcal{N}$ and each entry $\hat{\mathbf{S}}_{i,j}$ represents its vector representation reflecting the tree branch from $x_i^\mathcal{N}$ to a leaf $x_j^\mathcal{L}$ . This gives $|\mathcal{R}(x_i^\mathcal{N}) \cap \mathcal{L}|$ different constituents of $x_i^\mathcal{N}$ that represent the branches rooted at $x_i^\mathcal{N}$ . + +The next task is to combine the branch-level accumulated representations of a nonterminal $x_{i}^{\mathcal{N}}$ into a single vector $\overline{n}_i$ that encapsulates all the elements in the subtree rooted by $x_{i}^{\mathcal{N}}$ . Our method does so with a weighted aggregation operation. The aggregation function $\mathcal{V}$ takes $\hat{\mathbf{S}}$ as input and a weighting vector $\boldsymbol{w} \in \mathbb{R}^{n}$ , and computes the final node representations $\overline{N} = (\overline{n}_1, \dots, \overline{n}_m) \in \mathbb{R}^{m \times d}$ , where each row-vector $\overline{n}_i$ in $\overline{N}$ is computed as: + +$$ +\mathcal {V} (\hat {\mathbf {S}}, \boldsymbol {w}) _ {i} = \bar {\boldsymbol {n}} _ {i} = \frac {1}{| \mathcal {L} \cap \mathcal {R} \left(x _ {i} ^ {\mathcal {N}}\right) |} \sum_ {j: x _ {j} ^ {\mathcal {L}} \in \mathcal {R} \left(x _ {i} ^ {\mathcal {N}}\right)} w _ {j} \odot \hat {\mathbf {S}} _ {i, j} \tag {7} +$$ + +where $\odot$ denotes the element-wise multiplication. Specifically, the aggregation function $\mathcal{V}$ computes a weighted average of the branch-level representations. 
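The three stages of Eqs. 5-7 can be sketched in NumPy. Here `leaf_desc` and `node_desc` are hypothetical dictionary encodings of $\mathcal{R}$ (the leaf and nonterminal indices of each subtree, the node itself included), and the explicit loops favor clarity over the parallel tensorized form the paper targets:

```python
import numpy as np

def interpolate(L, N, leaf_desc):
    """Eq. 5: build S with one row for the leaves plus one row per node."""
    n, d = L.shape
    m = N.shape[0]
    S = np.zeros((m + 1, n, d))
    S[0] = L
    for i in range(m):
        for j in leaf_desc[i]:       # copy node i into columns of its leaves
            S[i + 1, j] = N[i]
    return S

def upward_ca(S, leaf_desc, node_desc):
    """Eq. 6: upward cumulative-average over each branch (node i -> leaf j)."""
    m, n, d = S.shape[0] - 1, S.shape[1], S.shape[2]
    S_hat = np.zeros((m, n, d))
    for i in range(m):
        for j in leaf_desc[i]:
            # C_j^i: the leaf plus every node of subtree i lying on this branch
            branch = [S[0, j]] + [S[t + 1, j] for t in node_desc[i] if j in leaf_desc[t]]
            S_hat[i, j] = np.mean(branch, axis=0)
    return S_hat

def aggregate(S_hat, leaf_desc, w):
    """Eq. 7: weighted average of branch representations per nonterminal."""
    m, n, d = S_hat.shape
    N_bar = np.zeros((m, d))
    for i in range(m):
        N_bar[i] = sum(w[j] * S_hat[i, j] for j in leaf_desc[i]) / len(leaf_desc[i])
    return N_bar
```

On a toy tree with a root over leaves {0, 1, 2} and a child node over leaves {1, 2}, chaining the three functions reproduces Eq. 8's $\overline{N} = \mathcal{V}(\mathcal{U}(\mathcal{F}(L, N, \mathcal{R})), w)$.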
In summary, the hierarchical accumulation process can be expressed as the following equation: + +$$ +\overline {{\boldsymbol {N}}} = \mathcal {V} (\mathcal {U} (\boldsymbol {S}), \boldsymbol {w}) = \mathcal {V} (\mathcal {U} (\mathcal {F} (\boldsymbol {L}, \boldsymbol {N}, \mathcal {R})), \boldsymbol {w}) \tag {8} +$$ + +# 4.2 HIERARCHICAL EMBEDDINGS + +While the above technique is able to model the states of nonterminal nodes as an encapsulation of their respective descendants, those descendants are equally treated since no biases are imposed on them. In other words, although each branch from a node comprises a distinctive set of descendants, the hierarchy of elements within a branch and the sibling relationship among branches are not explicitly represented. Thus, it may be beneficial to introduce biases that reflect such underlying subtree-level hierarchical structures. We propose Hierarchical Embeddings to induce distinguishable tree structures into the tensor $\mathbf{S}$ before being accumulated by $\mathcal{U}$ and $\mathcal{V}$ . We also demonstrate the effectiveness of these embeddings with experiments in Section 5. Figure 2 illustrates the hierarchical embeddings for the nodes in Figure 1. Given $L$ , $N$ and $\mathcal{R}$ as defined in Section 4.1, we construct a + +![](images/9ac9c06cf336b4ec0f0f90bf74632f57e3d54e09c193c851d0b2d78ed65322a1.jpg) +Figure 2: Hierarchical Embeddings. Each block $\mathbf{E}_{i,j}$ is an embedding vector $[e_x^v; e_y^h]$ with indices $x, y$ following the syntax " $x; y$ ", where $x = |V_j^i|$ and $y = |H_j^i|$ . "0" indicates no embedding. 
+ +tensor of hierarchical embeddings $\mathbf{E} \in \mathbb{R}^{(m + 1) \times n \times d}$ with entries defined as follows: + +$$ +\mathbf {E} _ {i, j} = \left\{ \begin{array}{l l} {[ e _ {| V _ {j} ^ {i} |} ^ {v}; e _ {| H _ {j} ^ {i} |} ^ {h} ]} & {\text {if } i > 1 \text { and } x _ {j} ^ {\mathcal {L}} \in \mathcal {R} \left(x _ {i} ^ {\mathcal {N}}\right)} \\ {\mathbf {0}} & \text {otherwise.} \end{array} \right. \tag {9} +$$ + +where $V_{j}^{i} = \{x_{t}^{\mathcal{N}}|x_{t}^{\mathcal{N}}\in \mathcal{R}(x_{i}^{\mathcal{N}})$ and $x_{j}^{\mathcal{L}}\in \mathcal{R}(x_{t}^{\mathcal{N}}) \})$ is the set of $x_{j}^{\mathcal{L}}$ 's ancestors up to $x_{i}^{\mathcal{N}}$ , and $H_{j}^{i} = \{x_{t}^{\mathcal{L}}|t\leq j$ and $x_{t}^{\mathcal{L}}\in \mathcal{L}\cap \mathcal{R}(x_{i}^{\mathcal{N}})\}$ is the set of leaves from the leftmost leaf up to $x_{j}^{\mathcal{L}}$ of the $x_{i}^{\mathcal{N}}$ -rooted subtree; $e_i^v$ and $e_i^h$ are embedding row-vectors of the respective trainable vertical and horizontal embedding matrices $E^{v},E^{h}\in \mathbb{R}^{|E|\times \frac{d}{2}}$ and $[\bullet ;\bullet ]$ denotes the concatenation operation in the hidden dimension. The vertical embeddings represent the path length of a node to a leaf which expresses the hierarchical order within a branch, whereas the horizontal embeddings exhibit the relationship among branch siblings in a subtree. The resulting node representations after hierarchical encoding are defined as: + +$$ +\overline {{\boldsymbol {N}}} ^ {\prime} = \mathcal {V} (\mathcal {U} (\boldsymbol {S} + \boldsymbol {E}), \boldsymbol {w}) \tag {10} +$$ + +Note that we share such embeddings across attention heads, making them account for only $0.1\%$ of the total parameters (see Appendix 7.3 for more information). + +# 4.3 SUBTREE MASKING + +Masking attentions is a common practice to filter out irrelevant signals.
For example, in the decoder self-attention layers of the Transformer, the affinity values between query $\pmb{q}_i$ and key $\pmb{k}_j$ are turned off for $j > i$ to avoid future keys being attended since they are not available during inference. This can be done by adding to the affinity $\pmb{q}_i^T\pmb{k}_j$ an infinitely negative value $(-\infty)$ so that the resulting attention weight (after softmax) becomes zero. + +In the context of tree-based attentions (to be described next), we promote the bottom-up structure by introducing subtree masking for encoder self-attention. That is, if a node-query $q_{i}^{\mathcal{N}} \in \mathcal{N}$ is attending to a set of node-keys $k_{j}^{\mathcal{N}} \in \mathcal{N}$ and leaf-keys $k_{j}^{\mathcal{L}} \in \mathcal{L}$ , attentions are turned on only for affinity pairs whose key belongs to the subtree rooted at $q_{i}^{\mathcal{N}}$ . In other words, each node-query has access only to its own subtree descendants, but not to its ancestors and siblings. On the other hand, if a leaf-query $q_{i}^{\mathcal{L}} \in \mathcal{L}$ is attending, only leaf-keys are turned on, like in the Transformer. Figure 3 illustrates the subtree masking with an example. More formally, + +given $a_{ij}$ as the affinity value between a node/leaf-query $q_i \in \mathcal{N} \cup \mathcal{L}$ and a node/leaf-key $k_j \in \mathcal{N} \cup \mathcal{L}$ , the masking function $\mu$ is defined as: + +$$ +\mu \left(a _ {i j}\right) = \left\{ \begin{array}{l l} a _ {i j} & \text {if } \left(q _ {i} \in \mathcal {N} \text { and } k _ {j} \in \mathcal {R} \left(q _ {i}\right)\right) \text { or } \left(q _ {i}, k _ {j} \in \mathcal {L}\right) \\ a _ {i j} - \infty & \text {otherwise.} \end{array} \right. \tag {11} +$$ + +![](images/afc1bcafe6823b4eb431c7ddd68a1b1d5e37785999f4942ed3ea2a76cb6fea55.jpg) +Figure 3: Subtree masking.
Given the query at position $g$ , attentions are only included within the $g$ -rooted subtree, while the remaining elements are masked out (shaded). + +# 4.4 INTEGRATING INTO TRANSFORMER FRAMEWORK + +In this section, we describe how the above proposed methods fit into self- and cross-attentions of the Transformer framework, which enables them to efficiently encode parse trees. + +Encoder Self-attention. Figure 4a visualizes the encoder self-attention process. Without loss of generality, let $\pmb{L} \in \mathbb{R}^{n \times d}$ and $\pmb{N} \in \mathbb{R}^{m \times d}$ respectively denote the leaf and node representations that a Transformer encoder layer receives from its previous layer along with the parse tree represented as $\mathcal{T}(X) = (\mathcal{L}, \mathcal{N}, \mathcal{R})$ . The tree-based self-attention layer then computes the respective output representations $\hat{\pmb{L}}$ and $\hat{\pmb{N}}$ . Specifically, first, we compare the node and leaf representations against + +![](images/5903c3dd45a8762dcf9eb54b68f2ad1cfee87cbfb222f11b5be476ae626c5b50.jpg) +(a) Encoder Self-attention + +![](images/ad148461a30bb571633f442a3949131a62de3d1c21346491f5313f94517c62ce.jpg) +(b) Decoder Cross-attention +Figure 4: Illustration of the proposed Tree-based Attentions: (a) Encoder self-attention, (b) Decoder cross-attention. Circle-ended arrows indicate where hierarchical accumulations take place. The overall Transformer architecture is provided in Figure 6 (Appendix 7.3).
+ +each other to produce query-key affinity matrices $\mathbf{A}_{NL} \in \mathbb{R}^{m \times n}$ , $\mathbf{A}_{NN} \in \mathbb{R}^{m \times m}$ , $\mathbf{A}_{LL} \in \mathbb{R}^{n \times n}$ and $\mathbf{A}_{LN} \in \mathbb{R}^{n \times m}$ for node-leaf (i.e., node representation as the query and leaf representation as the key), node-node, leaf-leaf, and leaf-node pairs, respectively, as follows: + +$$ +\boldsymbol {A} _ {N L} = \left(\boldsymbol {N} \boldsymbol {W} ^ {Q}\right) \left(\boldsymbol {L} \boldsymbol {W} ^ {K}\right) ^ {T} / \sqrt {d} \tag {12} +$$ + +$$ +\boldsymbol {A} _ {L L} = \left(\boldsymbol {L} \boldsymbol {W} ^ {Q}\right) \left(\boldsymbol {L} \boldsymbol {W} ^ {K}\right) ^ {T} / \sqrt {d} \tag {13} +$$ + +$$ +\boldsymbol {A} _ {N N} = \left(\boldsymbol {N} \boldsymbol {W} ^ {Q}\right) \left(\boldsymbol {N} \boldsymbol {W} ^ {K}\right) ^ {T} / \sqrt {d} \tag {14} +$$ + +$$ +\boldsymbol {A} _ {L N} = \left(\boldsymbol {L} \boldsymbol {W} ^ {Q}\right) \left(\boldsymbol {N} \boldsymbol {W} ^ {K}\right) ^ {T} / \sqrt {d} \tag {15} +$$ + +Then, the value representation $\overline{L}$ of the leaves $L$ is computed by a linear layer, while the value representation $\overline{N}^{\prime}$ of the nodes $N$ is encoded with tree structure using the hierarchical accumulation process (Section 4.1-4.2) as: + +$$ +\overline {{\boldsymbol {N}}} ^ {\prime} = \mathcal {V} (\mathcal {U} (\mathcal {F} (\boldsymbol {L} \boldsymbol {W} ^ {V}, \boldsymbol {N} \boldsymbol {W} ^ {V}, \mathcal {R}) + \boldsymbol {\mathrm {E}}), \boldsymbol {w}); \quad \overline {{\boldsymbol {L}}} = \boldsymbol {L} \boldsymbol {W} ^ {V} \tag {16} +$$ + +where $\boldsymbol{w} = \boldsymbol{L}\boldsymbol{u}_s$ with $\boldsymbol{u}_s \in \mathbb{R}^d$ being a trainable vector, while the weight matrices $\boldsymbol{W}^Q$ , $\boldsymbol{W}^K$ , $\boldsymbol{W}^V$ , and $\boldsymbol{W}^O$ are similarly defined as in Section 3. 
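The masking function $\mu$ of Eq. 11, which is applied to these affinity scores before the softmax, can be sketched as a boolean keep-mask over the concatenated [nodes; leaves] axis. Here `node_desc`/`leaf_desc` are hypothetical dictionary encodings of $\mathcal{R}$ (node and leaf indices of each subtree, the node itself included), and a large negative constant stands in for $-\infty$:

```python
import numpy as np

NEG = -1e9  # stands in for -infinity before the softmax

def subtree_mask(m, n, node_desc, leaf_desc):
    """Eq. 11 as a boolean keep-mask of shape (m+n, m+n).

    Rows index queries, columns index keys, ordered [nodes; leaves]:
    a node query keeps only keys inside its own subtree, while a leaf
    query keeps only leaf keys (as in the vanilla Transformer).
    """
    keep = np.zeros((m + n, m + n), dtype=bool)
    for i in range(m):
        for t in node_desc[i]:       # descendant node keys (incl. itself)
            keep[i, t] = True
        for j in leaf_desc[i]:       # descendant leaf keys
            keep[i, m + j] = True
    keep[m:, m:] = True              # leaf queries attend to all leaf keys
    return keep

def mask_affinities(A, keep):
    """Apply mu: masked entries get a huge negative offset, zeroing them out
    after the softmax."""
    return np.where(keep, A, A + NEG)
```

With a toy tree (root over leaves {0, 1, 2}, child node over leaves {1, 2}), the mask turns off, e.g., the child's attention to leaf 0 and every leaf's attention to node keys.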
After this, the resulting affinity scores for leaves and nodes are concatenated and then masked by subtree masking (Section 4.3) to promote bottom-up encoding. The final attentions for the nodes and leaves are then computed by taking the weighted averages of the value vectors in $\overline{\boldsymbol{N}}'$ and $\overline{\boldsymbol{L}}$ :

$$
\operatorname{Att}_N = \operatorname{softmax}(\mu([\boldsymbol{A}_{NN}; \boldsymbol{A}_{NL}]))[\overline{\boldsymbol{N}}'; \overline{\boldsymbol{L}}] \tag{17}
$$

$$
\operatorname{Att}_L = \operatorname{softmax}(\mu([\boldsymbol{A}_{LN}; \boldsymbol{A}_{LL}]))[\overline{\boldsymbol{N}}'; \overline{\boldsymbol{L}}] \tag{18}
$$

Both $\mathrm{Att}_N$ and $\mathrm{Att}_L$ are then passed through the Transformer's serial computations by function $\phi$ (Eq. 3 in Section 3), which results in the final output representations $\hat{\pmb{N}}$ and $\hat{\pmb{L}}$ as follows:

$$
\hat{\boldsymbol{N}} = \phi(\operatorname{Att}_N \boldsymbol{W}^O, \boldsymbol{N}) \tag{19}
$$

$$
\hat{\boldsymbol{L}} = \phi(\operatorname{Att}_L \boldsymbol{W}^O, \boldsymbol{L}) \tag{20}
$$
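The concatenate-mask-softmax step of Eqs. 17-18 can be sketched as follows. This is a simplified single-head version with a boolean mask standing in for the subtree-masking function $\mu$ ; function and variable names are our own:

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tree_attention(A_qN, A_qL, N_bar, L_bar, mask=None):
    # Concatenate node and leaf affinities along the key axis, apply the
    # (subtree) mask, then take weighted averages of the concatenated
    # value rows, as in Eqs. 17-18.
    A = np.concatenate([A_qN, A_qL], axis=1)
    if mask is not None:
        A = np.where(mask, A, -1e9)   # masked entries get ~zero weight
    V = np.concatenate([N_bar, L_bar], axis=0)
    return softmax(A) @ V
```

Because masked-out columns receive near-zero weight, each output row is a convex combination of only the value vectors visible inside the query's subtree.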
| Model | IWSLT En-De | IWSLT En-Fr | IWSLT De-En | IWSLT Fr-En | WMT En-De (Base) | WMT En-De (Big) |
|---|---|---|---|---|---|---|
| Tree2Seq (Shi et al., 2018) | 24.01 | 40.22 | 29.95 | 39.41 | – | – |
| Conv-Seq2Seq (Gehring et al., 2017) | 24.76 | 39.51 | 30.32 | 39.56 | – | 25.16 |
| Transformer (Vaswani et al., 2017) | 28.35 | 43.75 | 34.42 | 42.84 | 27.30 | 29.30 |
| Dynamic Conv (Wu et al., 2019) | 28.43 | 43.72 | 34.72 | 43.08 | 27.48 | 29.70 |
| Ours | 29.47 | 45.53 | 35.96 | 44.34 | 28.40 | 29.95 |
Table 1: BLEU scores for the base models on IWSLT'14 English $\leftrightarrow$ German, IWSLT'13 English $\leftrightarrow$ French, and the base and big models on WMT'14 English $\rightarrow$ German task. Refer to Table 5 in the Appendix for parameter comparisons.

$$
\boldsymbol{A}_{QN} = \left(\boldsymbol{Q}\boldsymbol{W}^Q\right)\left(\boldsymbol{N}\boldsymbol{W}^K\right)^T / \sqrt{d} \tag{21}
$$

$$
\boldsymbol{A}_{QL} = \left(\boldsymbol{Q}\boldsymbol{W}^Q\right)\left(\boldsymbol{L}\boldsymbol{W}^K\right)^T / \sqrt{d} \tag{22}
$$

Similar to encoder self-attention, the node representations $N$ are encoded with the tree structure and the attention output $\mathrm{Att}_Q$ of decoder cross-attention is computed as:

$$
\overline{\boldsymbol{N}}' = \mathcal{V}(\mathcal{U}(\mathcal{F}(\boldsymbol{L}\boldsymbol{W}^V, \boldsymbol{N}\boldsymbol{W}^V, \mathcal{R}) + \mathbf{E}), \boldsymbol{w}); \quad \overline{\boldsymbol{L}} = \boldsymbol{L}\boldsymbol{W}^V \tag{23}
$$

$$
\operatorname{Att}_Q = \operatorname{softmax}\left(\left[\boldsymbol{A}_{QN}; \boldsymbol{A}_{QL}\right]\right)\left[\overline{\boldsymbol{N}}'; \overline{\boldsymbol{L}}\right] \tag{24}
$$

where $\boldsymbol{w} = \boldsymbol{L}\boldsymbol{u}_c$ with $\boldsymbol{u}_c \in \mathbb{R}^d$ . Note that cross-attention does not adopt subtree masking because the queries are from another domain and are not elements of the source tree.

Remark on Speed. Our model runs competitively with the Transformer, thanks to its constant parallel time complexity. In terms of sequential (single-CPU) computations, the hierarchical accumulation process takes $\mathcal{O}(N\log(N))$ time and our entire model maintains a time complexity identical to the Transformer, which is $\mathcal{O}(N^2)$ ; see Appendix 7.2 for a proof.
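Decoder cross-attention follows the same pattern as the encoder case but omits the subtree mask. A compact sketch with toy shapes (all names are illustrative, and a random matrix stands in for the hierarchically accumulated $\overline{\boldsymbol{N}}'$ ):

```python
import numpy as np

rng = np.random.default_rng(1)
t, n, m, d = 3, 5, 4, 8            # target length, leaves, nodes, model dim
Q = rng.standard_normal((t, d))    # target-side queries
L = rng.standard_normal((n, d))    # source leaves
Nb = rng.standard_normal((m, d))   # stand-in for the accumulated nodes N-bar'
WQ, WK, WV = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

A_QN = (Q @ WQ) @ (Nb @ WK).T / np.sqrt(d)   # Eq. 21
A_QL = (Q @ WQ) @ (L @ WK).T / np.sqrt(d)    # Eq. 22

# Eq. 24: no subtree masking, since queries are not elements of the source tree
Att_Q = softmax(np.concatenate([A_QN, A_QL], axis=1)) \
        @ np.concatenate([Nb @ WV, L @ WV], axis=0)
```

Each target query thus distributes one softmax over all source phrases and tokens jointly, which is what makes the phrase-vs-token statistics of Table 4 well defined.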
# 5 EXPERIMENTS

We conduct our experiments on two types of tasks: Machine Translation and Text Classification.

# 5.1 NEURAL MACHINE TRANSLATION

Setup. We experiment with five translation tasks: IWSLT'14 English-German (En-De), German-English (De-En), IWSLT'13 English-French (En-Fr), French-English (Fr-En), and WMT'14 English-German. We replicate most of the training settings from Ott et al. (2018) for our models, to enable a fair comparison with the Transformer-based methods (Vaswani et al., 2017; Wu et al., 2019). For IWSLT experiments, we trained the base models with $d = 512$ for 60K updates with a batch size of 4K tokens. For WMT, we used 200K updates and 32K tokens for the base models ( $d = 512$ ), and 20K updates and 512K tokens for the big models with $d = 1024$ . We parsed the texts with the Stanford CoreNLP parser (Manning et al., 2014). We used Byte Pair Encoding (Sennrich et al., 2016), where the subwords of a word form a subtree. More details are provided in Appendix 7.4.

Results. Table 1 shows the BLEU scores for the translation tasks. Our models consistently outperform the baselines in all the tested tasks. The results demonstrate the impact of using parse trees as a prior and the effectiveness of our methods in incorporating such a structural prior. Specifically, our model surpasses the Transformer by more than 1 BLEU on all IWSLT tasks. Our big model also outdoes dynamic convolution (Wu et al., 2019) by 0.25 BLEU.

# 5.2 TEXT CLASSIFICATION

Setup. We also compare our attention-based tree encoding method with Tree-LSTM (Tai et al., 2015) and other sequence-based baselines on the Stanford Sentiment Analysis (SST) (Socher et al.,

![](images/b9b2943dbeac9aaa69c21e62dba9965e8cd8533d015b5cce9d3f7205c20e20d5.jpg)
(a) WMT'14 English-German BLEU on newstest2014 with varying size of training data.
![](images/944d75e4d3b8a365d3b9451b294e8eeb3744493aad8f4672c4d80ea0470d3c46.jpg)
(b) Elapsed training and inference time in seconds (y-axis) w.r.t. sequence length (x-axis).

Figure 5: Training data size and training/inference time analysis.
| Model | En-De | En-Fr | De-En | Fr-En |
|---|---|---|---|---|
| Leaves/Nodes (%) | 59.2/40.8 | 59.3/40.7 | 66.4/33.6 | 64.7/35.3 |
| Target→Nodes | 66.4±2e-4 | 61.9±6e-4 | 64.9±4e-4 | 59.3±2e-2 |
2013), IMDB Sentiment Analysis and Subject-Verb Agreement (SVA) (Linzen et al., 2016) tasks. We adopt tiny-sized versions of our tree-based models and the Transformer baseline. The models have 2 Transformer layers, 4 heads in each layer, and dimension $d = 64$ . We trained the models for 15K updates, with a batch size of 2K tokens. Word embeddings are randomly initialized. We provide further details of the setup in Appendix 7.4. For the Stanford Sentiment Analysis task (SST), we tested on binary (SST-2) and fine-grained (SST-5) subtasks, following Tai et al. (2015).

Results. Table 2 shows the results in accuracy on the classification tasks. Our Tree Transformer outperforms the sequence-based Transformer and BiLSTM baselines in all tasks by a wide margin. This suggests that for small datasets, our models with a more appropriate structural bias can provide substantial improvements over the vanilla Transformer. Furthermore, our models also surpass Tree-LSTM significantly in all the tested tasks, which demonstrates our method's effectiveness compared to the best existing tree-encoding method.

Table 4: Attention distributions (\%) between phrases (nodes) and tokens (leaves) across different translation tasks. Statistics are derived from IWSLT'14 En-De and IWSLT'13 En-Fr test sets.
| Task | Transformer | BiLSTM | Tree-LSTM | Ours |
|---|---|---|---|---|
| SST-5 | 37.6 | 35.1 | 43.9 | 47.4 |
| SST-2 | 74.8 | 76.0 | 82.0 | 84.3 |
| IMDB | 86.5 | 85.8 | – | 90.1 |
| SVA | 94.4 | 95.1 | 96.2 | 98.0 |
Table 2: Classification results in accuracy $(\%)$ on Stanford Sentiment Analysis fine-grained (SST-5) and binary (SST-2), IMDB sentiment analysis, and Subject-Verb Agreement (SVA) tasks. The last two columns are tree-based models.
| Model | En-De | En-Fr | SST-5 |
|---|---|---|---|
| Tree Transformer | 29.47 | 45.53 | 47.4 |
| -HierEmb | 29.20 | 44.80 | 46.1 |
| -SubMask | 29.05 | 45.07 | 45.7 |
| -HierEmb -SubMask | 28.98 | 44.50 | 45.0 |
Table 3: Performances of different model variants on IWSLT'14 En-De, IWSLT'13 En-Fr and Stanford Sentiment Analysis (fine-grained) tasks. '-HierEmb': without hierarchical embeddings; '-SubMask': without subtree masking.

# 5.3 ANALYSIS

Model Variations. In Table 3, we examine the contribution of each component of our Tree Transformer on the IWSLT'14 English-German translation and Stanford Sentiment Analysis (fine-grained) tasks. Removing either or both of the hierarchical embeddings and subtree masking has a negative impact on model performance.

Effectiveness on Small Datasets. Figure 5a shows how our model performs compared to the baselines on the WMT'14 English-German translation task with varying amounts of training data. Our model yields substantial improvements (3.3 to 1.6 BLEU) when the training data contains fewer than 1 million pairs ( $<20\%$ of WMT'14). The margin of gains gradually decreases (to 1.0 to 1.1 BLEU) as the training data grows. We observe a similar trend in the classification tasks (Table 2), where our model outperforms sequence-based methods by around $10\%$ absolute in accuracy. This suggests that a hierarchical architectural bias can compensate for the shortage of labeled data in low-resource scenarios.

Training Time Analysis. Figure 5b shows the empirical training and inference time for the Transformer, Tree-LSTM, and our Tree Transformer with respect to input sequence length. All models are trained on a sentiment classification task on a single GPU for 1000 iterations with a batch size of 1. The training time of Tree-LSTM grows linearly with the sequence length, whereas the training times of the vanilla and Tree Transformer are much lower and remain relatively flat as the sequence length increases. This demonstrates our model's speed advantage over Tree-LSTM and other recurrent/recursive methods.
For inference, we additionally account for the parsing time of the Stanford parser $(\mathcal{O}(N))$ , which dominates the overall timing.

Phrase- vs. Token-level Attentions. Table 4 shows how frequently a target-language token attends to phrases (nodes) vs. tokens (leaves) in the source tree. Although $60 - 66\%$ of the source tree constituents are leaves, attentions over nodes dominate those over leaves (around $59\%$ to $66\%$ of the attention mass) consistently across all translation tasks, meaning that the model favors phrasal attentions. A trivial explanation would be the leaf/node ratio, but the attention concentrations are not correlated with that ratio; rather, they vary with the language pair, indicating that language-dependent factors are at play.

# 6 CONCLUSION

We presented a novel approach to incorporate constituency parse trees as an architectural bias in the attention mechanism of the Transformer network. Our method encodes trees in a bottom-up manner with constant parallel time complexity. We have shown the effectiveness of our approach on various NLP tasks involving machine translation and text classification. On machine translation, our model yields significant improvements on IWSLT and WMT translation tasks. On text classification, it also shows improvements on Stanford and IMDB sentiment analysis and subject-verb agreement tasks.

# REFERENCES

David Alvarez-Melis and Tommi S Jaakkola. Tree-structured decoding with doubly-recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. Tree-to-sequence attentional neural machine translation.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 823-833, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1078. URL https://www.aclweb.org/anthology/P16-1078. +Gottlob Frege. über sinn und bedeutung. Zeitschrift für Philosophie Und Philosophische Kritik, 100 (1):25-50, 1892. +Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional Sequence to Sequence Learning. In Proc. of ICML, 2017. +Jetic Gu, Hassan S. Shavarani, and Anoop Sarkar. Top-down tree structured decoding with syntactic connections for neural machine translation and parsing. In Proceedings of the 2018 Conference on + +Empirical Methods in Natural Language Processing, pp. 401-413. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/D18-1037. +Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. Multi-granularity self-attention for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019. +Jacob Harer, Chris Reale, and Peter Chin. Tree-transformer: A transformer-based method for correction of tree-structured data. arXiv preprint arXiv:1908.00449, 2019. +John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4129-4138, 2019. +Yoon Kim, Alexander M Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. Unsupervised recurrent neural network grammars. arXiv preprint arXiv:1904.03746, 2019. +Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019. +Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 
Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535, 2016. ISSN 2307-387X. URL https://transacl.org/ojs/index.php/tacl/article/view/972.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pp. 55-60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation (WMT), 2018.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. OpenAI Technical Report, 2018.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725. Association for Computational Linguistics, 2016. doi: 10.18653/v1/P16-1162. URL http://www.aclweb.org/anthology/P16-1162.
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 464-468. Association for Computational Linguistics, 2018. doi: 10.18653/v1/N18-2074. URL http://aclweb.org/anthology/N18-2074.
Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. Straight to the tree: Constituency parsing with neural syntactic distance.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1171-1180. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/P18-1108. +Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=B116qiR5F7. +Haoyue Shi, Hao Zhou, Jiaze Chen, and Lei Li. On tree-based neural sentence modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4631-4641, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1492. URL https://www.aclweb.org/anthology/D18-1492. + +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642. Association for Computational Linguistics, 2013. URL http://aclweb.org/anthology/D13-1170. +Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 5027-5038, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1548. URL https://www.aclweb.org/anthology/D18-1548. +Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 
1556-1566, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1150. URL https://www.aclweb.org/anthology/P15-1150.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=SkVhlh09tX.
Baosong Yang, Derek F. Wong, Tong Xiao, Lidia S. Chao, and Jingbo Zhu. Towards bidirectional hierarchical representations for attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1432-1441, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1150. URL https://www.aclweb.org/anthology/D17-1150.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
Yaushian Wang, Hung-Yi Lee, and Yun-Nung Chen. Tree transformer: Integrating tree structures into self-attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019.

# 7 APPENDIX

# 7.1 PROOF FOR PROPOSITION 1

In this section, we present a proof of Proposition 1. The main argument of this proposition is that in order for $\mathcal{H}$ to be a legitimate transformation of the parse tree $\mathcal{G}(X)$ , it must preserve the uniqueness of the hierarchical structure encoded by the tree. In other words, the result of such a transformation must reflect the tree $\mathcal{G}(X)$ and only $\mathcal{G}(X)$ , and not any other tree form.
If $\mathcal{H}$ does not have this property, it may transform two different tree structures into an identical representation, which would make the downstream tree-encoding process ambiguous. This property also means that in a parse tree (1) every node (either leaf or nonterminal) has at most one parent, and (2) a nonterminal node with multiple children can tell the sibling order between its children.

For requirement (1) specifically, except for the root node, which has no parent, all other nodes have one and only one parent. Therefore, Proposition 1 implies that if there exists an inverse-transformation $\mathcal{I}$ , it must be able to find the exact parent given any node in the tree. Our proposed transformation $\mathcal{H}$ satisfies this property. Formally, let $x \in \mathcal{L} \cup \mathcal{N}$ be a node in the tree and $\rho(x) \in \mathcal{N}$ be the parent of $x$ . We define $\mathcal{P}(x) = \{y \in \mathcal{N} \mid x \in \mathcal{R}(y)\}$ to be the set of all ancestors of $x$ . Thus, the parent of $x$ belongs to $\mathcal{P}(x)$ . From that, we derive the parent $\rho(x)$ of $x$ as:

$$
\rho(x) = y_x \in \mathcal{P}(x) \text{ such that } \mathcal{R}(y_x) \cap \mathcal{P}(x) = \{y_x\} \tag{25}
$$

As implied in Equation 25, the parent $\rho(x)$ is the ancestor of $x$ whose subtree does not encapsulate any ancestor in $\mathcal{P}(x)$ except itself. In other words, the parent is the immediate ancestor. In a parse tree, there is at most one node $y_x$ that satisfies this condition. We prove this by contradiction below.

Proof 1 Suppose there exists $\rho'(x) = y_x' \neq y_x$ such that $\mathcal{R}(y_x') \cap \mathcal{P}(x) = \{y_x'\}$ , i.e., $x$ has 2 parents. Under this assumption, there would be two different paths from the root node to $x$ : one via $y_x$ and the other via $y_x'$ . This creates a closed cycle.
However, a parse tree is an acyclic directed spanning tree, which has no cycles. Thus, by contradiction, if $\mathcal{G}(X)$ is a spanning tree reflecting the true parse tree, then $y_x' = y_x$ .

For requirement (2), we prove it by exploiting the fact that in the definition of $\mathcal{T}(X) \triangleq (\mathcal{L}, \mathcal{N}, \mathcal{R})$ , $\mathcal{L}$ is ordered according to the original word order in the text. Generally speaking, if a nonterminal node $x$ has $t$ children $(c^1, \ldots, c^t)$ , each child node $c^k$ heads its own subtree $\mathcal{R}(c^k)$ , which includes certain leaves $\mathcal{R}(c^k) \cap \mathcal{L}$ . Then, the order of the leaves $\mathcal{L}$ indicates the order of the different leaf sets $\mathcal{R}(c^k) \cap \mathcal{L}$ , which in turn indicates the sibling order of the children of node $x$ .

Formally, let $\mathcal{L} = (l_1,\dots,l_n)$ where $l_i$ is the $i$ -th word in the input text, and let $x$ be an arbitrary nonterminal node whose children are $(c^1,\ldots,c^t)$ (for $t > 1$ ), with any two of them being $c^k$ and $c^{k'}$ ( $k \neq k'$ ); then either:

$$
\left\{ \begin{array}{l} (i < j \;\; \forall l_i \in \mathcal{R}(c^k) \cap \mathcal{L} \text{ and } \forall l_j \in \mathcal{R}(c^{k'}) \cap \mathcal{L}) \text{ or} \\ (i > j \;\; \forall l_i \in \mathcal{R}(c^k) \cap \mathcal{L} \text{ and } \forall l_j \in \mathcal{R}(c^{k'}) \cap \mathcal{L}) \end{array} \right. \tag{26}
$$

We proceed to prove the first clause in Equation 26; the second clause can be inferred similarly.
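The parent-recovery rule of Equation 25 is straightforward to operationalize: among the ancestors of $x$ , the parent is the unique one whose subtree contains no other ancestor of $x$ . A small sketch, where the dictionary-based tree encoding and all names are our own illustration:

```python
def ancestors(x, R):
    # P(x): all nonterminals whose subtree contains x (excluding x itself)
    return {y for y, subtree in R.items() if x in subtree and y != x}

def parent(x, R):
    # Eq. 25: the unique ancestor y_x with R(y_x) & P(x) == {y_x}
    P = ancestors(x, R)
    candidates = [y for y in P if R[y] & P == {y}]
    return candidates[0] if candidates else None  # None only for the root

# R maps each nonterminal to the set of elements in its subtree
# (itself included). Toy tree: root A over nonterminal B and leaf c;
# B spans leaves d and e.
R = {"A": {"A", "B", "c", "d", "e"}, "B": {"B", "d", "e"}}
```

On this toy tree, the ancestors of `d` are `{A, B}`, and only `B` satisfies the intersection condition, so it is correctly recovered as the parent, mirroring the uniqueness argument of Proof 1.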
Specifically, the first clause in Equation 26 implies that if there exists a leaf $l_{i'} \in \mathcal{R}(c^k) \cap \mathcal{L}$ and a leaf $l_{j'} \in \mathcal{R}(c^{k'}) \cap \mathcal{L}$ such that $i' < j'$ , then $i < j \; \forall l_i \in \mathcal{R}(c^k) \cap \mathcal{L}$ and $\forall l_j \in \mathcal{R}(c^{k'}) \cap \mathcal{L}$ ; in other words, subtree $\mathcal{R}(c^k)$ lies to the left of subtree $\mathcal{R}(c^{k'})$ . We prove this by contradiction:

Proof 2 Suppose there exist $l_{i'}$ , $l_{i^*} \in \mathcal{R}(c^k) \cap \mathcal{L}$ and $l_{j'}$ , $l_{j^*} \in \mathcal{R}(c^{k'}) \cap \mathcal{L}$ such that $i' < j'$ but $i^* \geq j^*$ , and not both $i' = i^*$ and $j' = j^*$ . If $i^* = j^*$ , this creates a closed cycle, which is impossible as described in Proof 1. If $i^* > j^*$ , $l_{j^*}$ lies between the leftmost and rightmost leaves of $\mathcal{R}(c^k) \cap \mathcal{L}$ . As a result, the subtree $\mathcal{R}(c^k)$ does not cover one full contiguous phrase but multiple segmented phrases, which is prohibited for a parse tree. Therefore, by contradiction, there cannot exist $l_{i^*} \in \mathcal{R}(c^k) \cap \mathcal{L}$ and $l_{j^*} \in \mathcal{R}(c^{k'}) \cap \mathcal{L}$ such that $i^* \geq j^*$ .

In general, combining the above requirements, the inverse-transformation $\mathcal{I}$ can convert $\mathcal{T}(X) = (\mathcal{L},\mathcal{N},\mathcal{R})$ back to the original graph $\mathcal{G}(X)$ :

$$
\mathcal{I}(\mathcal{H}(\mathcal{G}(X))) = \mathcal{G}(X) \tag{27}
$$

# 7.2 TIME COMPLEXITY ANALYSIS

In this section, we prove that given single-thread sequential computation (single CPU), our method operates at $\mathcal{O}(N^2)$ time complexity. In short, the hierarchical accumulation process (Section 4.1) runs in $\mathcal{O}(N\log N)$ time, while the attention scores $(QK^T)$ in standard attention are computed in $\mathcal{O}(N^2)$ .
Overall, the tree-based attention performs at $\mathcal{O}(N^2)$ time complexity, the same as the Transformer. We now analyze the time complexity of the hierarchical accumulation process. Formally, let $X$ be an $n$ -length sentence with $\mathcal{T}(X) = (\mathcal{L},\mathcal{N},\mathcal{R})$ its balanced binary constituency parse tree as defined in Section 4, and let $N\in \mathbb{R}^{m\times d}$ and $L\in \mathbb{R}^{n\times d}$ be the hidden states of the elements in $\mathcal{N}$ and $\mathcal{L}$ , respectively. There are thus $m = n - 1$ non-terminal nodes in the tree, which is also the size of $\mathcal{N}$ . In the upward cumulative-average operation $\mathcal{U}$ , each branch from the root to a leaf has $\approx \log(n)$ nodes, and the cumulative operation over these nodes can be done in $\mathcal{O}(\log(n))$ . That is, because the result $y_i$ of each node $x_i$ can be computed as $y_i = y_{i - 1} + x_i$ , the computations for all nodes in a branch take time linear in the branch length using dynamic programming, yielding $\mathcal{O}(\log(n))$ time complexity. As we have $n$ branches in the tree, the total complexity of $\mathcal{U}$ is $\mathcal{O}(n\log(n))$ . Likewise, the weighted aggregation operation $\mathcal{V}$ is also computed in $\mathcal{O}(n\log(n))$ . Specifically, at level $i$ from the root of the tree, there are $i$ non-terminal nodes, each of which aggregates $n / i$ components $\hat{\mathbf{S}}_{i,j}$ to calculate its final representation. Thus, at each level there are $n$ computations. Because the total height of the tree is $\log(n)$ , the time complexity of $\mathcal{V}$ is also $\mathcal{O}(n\log(n))$ . Hence, the total complexity of the hierarchical accumulation process is $\mathcal{O}(n\log(n))$ .
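The dynamic-programming idea behind $\mathcal{U}$ (each node reuses the running sum of its parent, so every node is visited exactly once) can be sketched as follows. The children-dict encoding and all names are our own illustration:

```python
def upward_cumavg(children, values, root):
    # For every node, compute the average of the values along the
    # root-to-node path. The running (path_sum, depth) pair is threaded
    # down the tree, so each node is processed exactly once rather than
    # re-walking its branch from the root.
    out = {}
    stack = [(root, 0.0, 0)]
    while stack:
        node, path_sum, depth = stack.pop()
        path_sum += values[node]
        depth += 1
        out[node] = path_sum / depth
        for child in children.get(node, ()):
            stack.append((child, path_sum, depth))
    return out
```

For a balanced binary tree this visits all $\approx 2n$ nodes once, and counting the per-branch accumulations separately (one branch of length $\approx \log n$ per leaf) gives the $\mathcal{O}(n\log(n))$ bound stated above.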
As such, the final sequential time complexity of the proposed attention layer is:

$$
\mathcal{O}(N^2) + \mathcal{O}((N - 1)^2) + \mathcal{O}((N - 1)N) + \mathcal{O}(N\log(N)) = \mathcal{O}(N^2) \tag{28}
$$

Having said that, in an era when powerful GPU-based hardware is ubiquitous, it is important to note that our models achieve parallelizability comparable to the Transformer, while leveraging the hierarchical structure of natural language. The purpose of the above time complexity analysis is only to provide more insight into our models. That said, even though we provide a numerical analysis of training speed (Figure 5b) as objectively as we can, training speed depends greatly on factors that may cause the actual timing to deviate from its theoretical asymptotic complexity: data preprocessing and batching, the actual low- and high-level implementation of the method, GPUs, CPUs, I/O hardware, etc. For instance, a standard LSTM module in TensorFlow or PyTorch may perform 10 times slower than an LSTM with CUDA cuDNN kernels, even though no extra computations are theoretically required.

# 7.3 OVERALL ARCHITECTURE

Figure 6 shows the overall Seq2Seq architecture of our model. In the encoder specifically, we parse the source sequence into constituency trees and then feed the leaf and node components into a stack of encoder layers. The leaf and node components are passed through the tree-based self-attention layer, where the value representations of the nodes are computed with hierarchical accumulation. After that, the output representations of leaves and nodes are passed through a weight-shared series of layer normalizations and feed-forward layers.
In the decoder, the query representations of the target domain attend over the leaf and node representations of the source sequence as computed by the encoder. As an additional insight, applying hierarchical accumulation to the key components (instead of the value components) causes a dramatic performance drop, because the accumulated representations are disrupted after being multiplied with the query components and do not directly contribute to the output representations. Meanwhile, applying accumulation to the attention scores does not yield performance benefits but increases the computational burden.

Table 5 shows the exact number of parameters of the Transformer and our method. As can be seen, the increase in parameters is almost unnoticeable because the hierarchical embedding modules share weights among different attention heads.

# 7.4 MORE DETAILED TRAINING CONFIGURATIONS

Text Classification. We adopt the tiny-sized versions of our tree-based models as well as the Transformer baseline. The models have 2 Transformer layers, each with model dimension 64 and 4 heads. We trained the models for 15,000 updates, with a batch size of 2048 tokens. We

![](images/b9721184ba1b21e3dc48d4c1f3d82c9f4e61a0ef34ab76a6f237d9e469ae6316.jpg)
Figure 6: Overall architecture of Tree Transformer. (Dashed lines: sharing parameters)
| Model | Base | Big |
|---|---|---|
| Transformer | 61,747,200 | 209,813,504 |
| Ours | 61,810,944 (+0.1%) | 209,967,104 (+0.07%) |
Table 5: Exact number of parameters for the Transformer and our model, both used for the WMT'14 English-German task.

used randomly initialized embeddings for all experiments. For the Stanford Sentiment Analysis task (SST), we tested on two subtasks: binary and fine-grained (5 classes) classification on the standard train/dev/test splits of 6920/872/1821 and 8544/1101/2210, respectively. We train on every sentiment label provided in the dataset. We used a learning rate of $7 \times 10^{-4}$ , dropout 0.5, and 8000 warmup steps. For the subject-verb agreement task, we trained on a set of 142,000 sentences, validated on a set of $\approx 15,700$ sentences, and tested on a set of $\approx 1,000,000$ sentences. For IMDB sentiment analysis, we used a training set of 25,000 documents and a test set of 25,000 documents. As the documents are multi-sentence, a dummy root node is added to each document and is used to predict the sentiment. For both the subject-verb agreement and IMDB sentiment analysis tasks, we trained models with 20 warmup steps, 0.01 learning rate, and 0.2 dropout.

Machine Translation. We replicate most of the base training settings from Ott et al. (2018) for our models, to enable a fair comparison with Transformer-based methods (Vaswani et al., 2017; Wu et al., 2019). For IWSLT experiments, we trained the models with $d = 512$ , feed-forward dimension $d_{ffn} = 1024$ , approximate batch size of 4000 tokens, 60,000 updates, learning rate $5 \times 10^{-4}$ , 4000 warmup steps, dropout rate 0.3, L2 weight decay $10^{-4}$ , beam size 5, and length penalty 1.0 for English-German and 0.7 for English-French. The hierarchical embedding size $|E|$ is set to 100. We used BPE tokens pre-trained with 32,000 iterations. Note that if a word is broken into multiple BPE subwords, it forms a subtree whose leaves are the subwords and whose root node is the POS tag of the word. Figure 7 visualizes how this works.
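The word-to-subtree convention for BPE can be sketched as follows; the helper function, the `bpe` callable, and the tuple-based tree encoding are our own illustration, not the paper's code:

```python
def attach_bpe_subtrees(words, pos_tags, bpe):
    # bpe maps a word to its subword pieces. A multi-piece word becomes
    # a subtree rooted at the word's POS tag, with the pieces as leaves;
    # single-piece words remain plain leaves.
    tree = []
    for word, tag in zip(words, pos_tags):
        pieces = bpe(word)
        tree.append((tag, pieces) if len(pieces) > 1 else pieces[0])
    return tree
```

Because each split word keeps its POS tag as the subtree root, the hierarchical accumulation treats the subwords as one phrase rather than as unrelated tokens.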
The IWSLT English-German training dataset contains $\approx 160,000$ sentence pairs; we used $5\%$ of the data for validation and combined (IWSLT14.TED.dev2010, dev2012, tst2010-tst2012) for testing. Meanwhile, the IWSLT English-French task has $\approx 200,000$ training sentence pairs; we used IWSLT15.TED.tst2012 for validation and IWSLT15.TED.tst2013 for testing. We use the Stanford CoreNLP parser (v3.9.1)$^5$ (Manning et al., 2014) to parse the datasets. For the WMT experiments, we trained the models with $d = 512$, feed-forward dimension $d_{ffn} = 2048$, a batch size of $\approx 32,000$ tokens, 200,000 updates, learning rate $7 \times 10^{-4}$, 4000 warmup steps, dropout rate 0.1, L2 weight decay $10^{-4}$, beam size 5 and length penalty 0.6. We take the average of the last 5 checkpoints for evaluation. The WMT'14 English-German dataset contains $\approx 4.5M$ pairs. We tested the models on the newstest2014 test set. We used tokenized BLEU to evaluate the models.

![](images/460100887e54461ac30e7f19ccd4e6d0eef41e4137369dd202b7f508569c4bb5.jpg)
(a) Standard Constituency Tree

![](images/d2e7a6a4d7bccf4ac03e451f793a907f3e1797b6da050c859965f7e7241626d9.jpg)
(b) A BPE constituency tree
Figure 7: Process to break a standard tree (fig. 7a) into a BPE tree (fig. 7b).

Training Time Analysis. For this experiment, all the examined models (Transformer, Tree Transformer and Tree-LSTM) are trained on the text classification task on a GPU for 1000 training steps with a batch size of 1. For the vanilla and Tree Transformer, we used the 2-layer encoder with dimension 64. For Tree-LSTM, we used a one-layer LSTM with dimension 64. A binary constituency tree is built for each input sentence. The parser's processing time is significant. For training time, we exclude the parser's time because the parser processes the data only once, and this counts towards the preprocessing of the dataset.
However, we include the parser's time in the inference time, because at inference we receive surface-form text instead of trees. In practice, the parsing time substantially overshadows the computation time of the network. Thus, the overall process will be faster if a more efficient parser is developed.

Effectiveness on Small Datasets. We used both the Transformer and the Tree Transformer with the base-size training settings from Ott et al. (2018). We trained the models with a batch size of 32,000 tokens, 100,000 updates, learning rate $7 \times 10^{-4}$, 4000 warmup steps, dropout 0.1, L2 weight decay $10^{-4}$, beam size 5 and length penalty 0.6. For both sets of tasks, we take the average of the last 5 checkpoints for evaluation. We used BPE tokens pre-trained with 32,000 iterations. We randomly sampled the training data into size-varying portions: $2\%$ (90K pairs), $5\%$ (225K pairs), $10\%$ (450K pairs), $25\%$ (1.125M pairs), $50\%$ (2.25M pairs), $75\%$ (3.375M pairs) and $100\%$ (4.5M pairs).
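The quoted portion sizes follow directly from the $\approx 4.5M$-pair corpus; a small sketch of such fixed-fraction subsampling (helper and variable names are our own, not the authors'):

```python
import random

def sample_portion(pairs, fraction, seed=0):
    """Randomly draw a fixed fraction of sentence pairs for training."""
    rng = random.Random(seed)  # fixed seed for reproducible portions
    k = int(len(pairs) * fraction)
    return rng.sample(pairs, k)

# Portion sizes for the ~4.5M-pair WMT'14 En-De corpus, as in the text:
corpus_size = 4_500_000
portions = {f: int(corpus_size * f)
            for f in (0.02, 0.05, 0.10, 0.25, 0.50, 0.75, 1.00)}
# 2% -> 90,000 pairs, 5% -> 225,000 pairs, 25% -> 1,125,000 pairs, ...
```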
\ No newline at end of file diff --git a/treestructuredattentionwithhierarchicalaccumulation/images.zip b/treestructuredattentionwithhierarchicalaccumulation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f7cda7ffb835b3a02664a8dbe422d304cac9d62e --- /dev/null +++ b/treestructuredattentionwithhierarchicalaccumulation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:702c3393f0a1a4424b2ead7c34573f48c1d883e3d7f6937adcaee3f4f51e97af +size 481880 diff --git a/treestructuredattentionwithhierarchicalaccumulation/layout.json b/treestructuredattentionwithhierarchicalaccumulation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..11f445e39e39e7e84545b59db742b1dd06c6565e --- /dev/null +++ b/treestructuredattentionwithhierarchicalaccumulation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4ac35d7b2d189aac753bceadd84b6211a25f49f71abe94a90d4b3567b744ee6 +size 619709 diff --git a/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_content_list.json b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cf2c0b8659cec69e4327ded133bfbe1d2737ca09 --- /dev/null +++ b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf7cfaabcd2a373c226962dfe6ca242496a4cc671ee6dd5025a98513fb0c8e4b +size 84244 diff --git a/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_model.json 
b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..dbd992a50a7e5119e44cd533359789b325717e60 --- /dev/null +++ b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e981c3e75d782822f36ba00dda5a45ad0a195c71f8048786027131aca4c77f27 +size 102370 diff --git a/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_origin.pdf b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b2a1a579edae98cc699ceecfd0eaee548dad7bd3 --- /dev/null +++ b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/833ee54c-3393-44eb-a4ed-b7174cd6c2d8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fad9ac63d90b4d5010ebbb9e347f943d6bdc75cc4571db9b86441329fb77ac70 +size 950482 diff --git a/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/full.md b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7febc61f932d32d4aed46803a1e44e0fd55e674e --- /dev/null +++ b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/full.md @@ -0,0 +1,327 @@ +# TRIPLE WINS: BOOSTING ACCURACY, ROBUSTNESS AND EFFICIENCY TOGETHER BY ENABLING INPUT-ADAPTIVE INFERENCE + +Ting-Kuei Hu*, Tianlong Chen*, Haotao Wang Zhangyang Wang + +Department of Computer Science and Engineering + +Texas A&M University, USA + 
{tkhu,wiwjp619,htwang,atlaswang}@tamu.edu

# ABSTRACT

Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019). Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019) required for learning a high-accuracy and robust classifier. In view of that, given a classification task, growing the model capacity appears to help achieve a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications. Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins?

This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a "sweet point" in co-optimizing model accuracy, robustness and efficiency. Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction. That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematic investigation. We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over $30\%$ computational savings, compared to the defended original models.

# 1 INTRODUCTION

Deep networks, despite their high predictive accuracy, are notoriously vulnerable to adversarial attacks (Goodfellow et al., 2015; Biggio et al., 2013; Szegedy et al., 2014; Papernot et al., 2016).
While many defense methods have been proposed to increase a model's robustness to adversarial examples, they were typically observed to hamper its accuracy on original clean images. Tsipras et al. (2019) first pointed out the inherent tension between the goals of adversarial robustness and standard accuracy in deep networks, whose provable existence was shown in a simplified setting. Zhang et al. (2019) theoretically quantified the accuracy-robustness trade-off, in terms of the gap between the risk for adversarial examples and the risk for non-adversarial examples.

It is intriguing to consider whether and why model accuracy and robustness have to be at odds. Schmidt et al. (2018) demonstrated that the number of samples needed to achieve adversarially robust generalization is polynomially larger than that needed for standard generalization, under the adversarial training setting. A similar conclusion was reached by Sun et al. (2019) in the standard training setting. Tsipras et al. (2019) considered the accuracy-robustness trade-off to be an inherent trait of the data distribution itself, indicating that this phenomenon persists even in the limit of infinite data. Nakkiran (2019) argued from a different perspective that the complexity (e.g., capacity) of a robust classifier must be higher than that of a standard classifier, and that switching to a larger-capacity classifier might therefore effectively alleviate the trade-off. Overall, these existing works appear to suggest that, while accuracy and robustness are likely to trade off for a fixed classification model

and on a given dataset, such a trade-off might be effectively alleviated ("win-win") by supplying more training data and/or using a larger-capacity classifier.

On a separate note, deep networks also face the pressing challenge to be deployed on resource-constrained platforms due to the prosperity of smart Internet-of-Things (IoT) devices.
Many IoT applications naturally demand security and trustworthiness, e.g., biometrics and identity verification, but can only afford a limited latency, memory and energy budget. We hereby extend the question: can we achieve a triple win, i.e., an accurate and robust classifier that is also efficient?

This paper makes an attempt at providing a positive answer to the above question. Rather than proposing a specific design of robust light-weight models, we reduce the average computation load via input-adaptive routing to achieve the triple win. To this end, we introduce input-adaptive dynamic inference (Teerapittayanon et al., 2017; Wang et al., 2018a), an emerging efficient inference scheme in contrast to (non-adaptive) model compression, to the adversarial defense field for the first time. Given any deep network backbone (e.g., ResNet, MobileNet), we first follow (Teerapittayanon et al., 2017) to augment it with multiple early-branch output layers in addition to the original final output. Each input, regardless of being a clean or adversarial sample, adaptively chooses which output layer to take for its own prediction. Therefore, a large portion of input inferences can be terminated early when the samples can already be classified with high confidence.

To the best of our knowledge, no existing work has studied adversarial attacks and defenses for an adaptive multi-output model, whose multiple sources of losses provide much larger flexibility to compose attacks (and therefore defenses), compared to the typical single-loss backbone. We present a systematic exploration of how to (white-box) attack and defend our proposed multi-output network with adaptive inference, demonstrating that the composition of multiple-loss information is critical in making the attack/defense strong. Fig. 1 illustrates our proposed Robust Dynamic Inference Networks (RDI-Nets).
We show experimentally that input-adaptive inference and multi-loss flexibility can be our friends in achieving the desired "triple wins". With our best defended RDI-Nets, we achieve better accuracy and robustness, yet with over $30\%$ inference computational savings, compared to the defended original models as well as to existing solutions that co-design robustness and efficiency (Gui et al., 2019; Guo et al., 2018). The code is available at https://github.com/TAMU-VITA/triple-wins.

![](images/d10a7f2aab726ba1537e016ed4b099d81babba4acb898f93745099fa0244caaf.jpg)
Figure 1: Our proposed RDI-Net framework, a defended multi-output network enabling dynamic inference. Each image, be it clean or adversarially perturbed, adaptively picks one branch to exit.

# 2 RELATED WORK

# 2.1 ADVERSARIAL DEFENSE

A multitude of defense approaches have been proposed (Kurakin et al., 2017; Xu et al., 2018; Song et al., 2018; Liao et al., 2018), although many were quickly evaded by new attacks (Carlini & Wagner, 2017; Baluja & Fischer, 2018). One strong defense algorithm that has so far not been fully compromised is adversarial training (Madry et al., 2018). It searches for adversarial images to augment the training procedure, although at the price of higher training costs (which do not affect inference efficiency). However, almost all existing attacks and defenses focus on a single-output classification (or other task) model. We are unaware of prior studies directly addressing attacks/defenses on more complicated networks with multiple possible outputs.

One related line of work exploits model ensembles (Tramèr et al., 2018; Strauss et al., 2017) in adversarial training. The gains of the defended ensemble compared to a single model can be viewed as stemming either from diversity (generating stronger and more transferable perturbations) or from increased model capacity (viewing the ensemble of multiple models as a compound one).
Unfortunately, ensemble methods can amplify the inference complexity and be detrimental to efficiency. Besides, it is also known that injecting randomization at inference time helps mitigate adversarial effects (Xie et al., 2018; Cohen et al., 2019). Yet to the best of our knowledge, no work has studied non-random, input-dependent inference for defense.

# 2.2 EFFICIENT INFERENCE

Research on improving deep network efficiency can be categorized into two streams: the static way, which designs compact models or compresses heavy models, with the compact/compressed model remaining fixed for all inputs at inference; and the dynamic way, in which, at inference, inputs can adaptively choose different computational paths, with simpler inputs usually taking less computation to make predictions. We briefly review the literature below.

Static: Compact Network Design and Model Compression. Many compact architectures have been specifically designed for resource-constrained applications, by adopting lightweight depthwise convolutions (Sandler et al., 2018) and group-wise convolutions with channel shuffling (Zhang et al., 2018), to name just a few. For model compression, Han et al. (2015) first proposed to sparsify deep models by removing non-significant synapses and then re-training to restore performance. Structured pruning was later introduced for better hardware friendliness (Wen et al., 2016). Layer factorization (Tai et al., 2016; Yu et al., 2017), quantization (Wu et al., 2016), model distillation (Wang et al., 2018c) and weight sharing (Wu et al., 2018) have also been found effective.

Dynamic: Input-Adaptive Inference. Higher inference efficiency can also be accomplished by enabling input-conditional execution. Teerapittayanon et al. (2017); Huang et al. (2018); Kaya et al. (2019) leveraged intermediate features to augment multiple side-branch classifiers that enable early predictions. Their methodology lays the foundation for our work.
Other efforts (Figurnov et al., 2017; Wang et al., 2018a;b; 2019) allow an input to choose between passing through or skipping each layer. This approach could be integrated with RDI-Nets too, which we leave as future work.

# 2.3 BRIDGING ROBUSTNESS WITH EFFICIENCY

A few recent studies attempt to link deep learning robustness and efficiency. Guo et al. (2018) observed that in a sparse deep network, appropriately sparsified weights improve robustness, whereas over-sparsification (e.g., less than $5\%$ nonzero weights) in turn makes the model more fragile. Two latest works (Ye et al., 2019; Gui et al., 2019) examined the robustness of compressed models, and reached similar conclusions: the relationship between model size and robustness depends on the compression method and is often non-monotonic. Lin et al. (2019) found that activation quantization may hurt robustness, but can be turned into an effective defense if continuity constraints are enforced.

Different from the above methods, which tackle robustness with static compact/compressed models, the proposed RDI-Nets are the first to address robustness via dynamic input-adaptive inference. Our experimental results demonstrate the consistent superiority of RDI-Nets over those static methods (Section 4.3). Moreover, applying dynamic inference on top of those static methods may further boost robustness and efficiency, which we leave as future work.

# 3 APPROACH

With the goal of achieving inference efficiency, we first look at the setting of multi-output networks and the specific design of RDI-Nets in Section 3.1. Then we define three forms of adversarial attacks on multi-output networks in Section 3.2 and their corresponding defense methods in Section 3.3.

Note that RDI-Nets achieve "triple wins" by reducing the average computation load through input-adaptive routing. They are not to be confused with any specifically designed robust light-weight model.
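The input-adaptive routing idea can be sketched as follows. This is an illustrative toy, with made-up softmax outputs and entropy thresholds of our own choosing, not the authors' implementation:

```python
import math

def softmax_entropy(probs):
    """Entropy of a softmax distribution; lower entropy = more confident."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route(exit_probs, thresholds):
    """Return the index of the earliest exit confident enough to predict.

    exit_probs: per-exit softmax outputs, ordered from earliest to final.
    thresholds: per-exit entropy thresholds t_k. The final exit always
    answers if no earlier one is confident enough.
    """
    for k, (probs, t_k) in enumerate(zip(exit_probs, thresholds)):
        if softmax_entropy(probs) < t_k:
            return k
    return len(exit_probs) - 1

# An "easy" input: the first exit is already near-certain and halts inference.
easy = [[0.98, 0.01, 0.01], [0.99, 0.005, 0.005]]
# A "hard" input: the first exit is uncertain, so computation continues.
hard = [[0.55, 0.35, 0.10], [0.90, 0.05, 0.05]]
thresholds = [0.3, 0.5]
```

Here `route(easy, thresholds)` halts at the first exit, while the harder input proceeds to the later one; averaged over a test set, this is what yields the computational savings reported later.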
# 3.1 DESIGNING RDI-NETS FOR HIGHER INFERENCE EFFICIENCY

Given an input image $x$, an $N$-output network can produce a set of predictions $[\hat{y}_1, \dots, \hat{y}_N]$ by a set of transformations $[f_{\theta_1}(\cdot), \dots, f_{\theta_N}(\cdot)]$. $\theta_i$ denotes the model parameters of $f_{\theta_i}$, $i = 1, \dots, N$, and the $f_{\theta_i}$
Note that, if efficiency is not the concern, instead of choosing (the earliest one), we could have designed an adaptive or randomized fusion of all $f_{\theta_i}$ predictions: but that falls beyond the goal of this work. + +The training objective for RDI-Nets could be written as + +$$ +L _ {R D I} = \sum_ {i = 1} ^ {K + 1} w _ {i} \left[ \left(\phi \left(f _ {i} \left(\theta_ {i} | x\right), y\right) + \phi \left(f _ {i} \left(\theta_ {i} | x ^ {a d v}\right), y\right) \right], \right. \tag {1} +$$ + +For each exit loss, we minimize a hybrid loss of accuracy (on clean $x$ ) and robustness (on $x^{adv}$ ). The $K + 1$ exits are balanced with a group of weights $\{w_i\}_{i=1}^{K+1}$ . More details about RDI-Net structures, hyperparameters, and inference branch selection can be founded in Appendix A, B, and C. + +In what follows, we discuss three ways to generate $x^{adv}$ in RDI-Nets, and then their defenses. + +# 3.2 THREE ATTACK FORMS ON MULTI-OUTPUT NETWORKS + +We consider white box attacks in this paper. Attackers have access to the model's parameters, and aim to generate an adversarial image $x^{adv}$ to fool the model by perturbing an input $x$ within a given magnitude bound. + +We next discuss three attack forms for an $N$ -output network. Note that they are independent of, and to be distinguished from attacker algorithms (e.g., PGD, C&W, FGSM): the former depicts the optimization formulation, that can be solved any of the attacker algorithms. + +Single Attack Naively extending from attacking single-output networks, a single attack is defined to maximally fool one $f_{\theta_i}(\cdot)$ only, expressed as: + +$$ +x _ {i} ^ {a d v} = \underset {x ^ {\prime} \in | x ^ {\prime} - x | _ {\infty} \leq \epsilon} {\arg \max } \left| \phi \left(f _ {\theta_ {i}} \left(x ^ {\prime}\right), y\right) \right|, \tag {2} +$$ + +where $y$ is the ground truth label, and $\phi$ is the loss for $f_{\theta_i}$ (we assume softmax for all). 
$\epsilon$ is the perturbation radius, and we adopt the $\ell_{\infty}$ ball for an empirically strong attacker. Naturally, an $N$-output network can have $N$ different single attacks. However, each single attack is derived without being aware of the other parallel outputs. The found $x_{i}^{adv}$ is not necessarily transferable to other $f_{\theta_j}$'s ($j \neq i$), and can therefore be easily bypassed if $x$ is re-routed through other outputs to make its prediction.

Average Attack Our second attack maximizes the average of all $f_{\theta_i}$ losses, so that the found $x^{adv}$ remains in effect no matter which $f_{\theta_i}$ is chosen to output the prediction for $x$:

$$
x_{avg}^{adv} = \underset{\|x' - x\|_{\infty} \leq \epsilon}{\arg\max}\ \frac{1}{N} \sum_{j = 1}^{N} \phi\left(f_{\theta_{j}}\left(x'\right), y\right), \tag{3}
$$

The average attack takes the attack transferability into account and involves all $\theta_{j}$'s in the optimization. However, since only one output is selected for each sample at inference, the averaging strategy might weaken the individual defense strength at each $f_{\theta_i}$.

Max-Average Attack Our third attack aims to emphasize individual output defense strength, more than simply maximizing an all-averaged loss. We first solve the $N$ single attacks $x_{i}^{adv}$ as described in Eqn. 2, and denote their collection as $\Omega$. We then solve the max-average attack via the following:

$$
x_{max}^{adv} \leftarrow x_{i^{*}}^{adv}, \quad \text{where } x_{i^{*}}^{adv} \in \Omega \ \text{and} \ i^{*} = \underset{i}{\arg\max}\ \frac{1}{N} \sum_{j = 1}^{N} \phi\left(f_{\theta_{j}}\left(x_{i}^{adv}\right), y\right). \tag{4}
$$

Note that Eqn. 4 differs from Eqn. 3 by adding the $\Omega$ constraint to balance between "commonality" and "specificity".
The found $x_{max}^{adv}$ both strongly increases the averaged loss values across all $f_{\theta_i}$'s (therefore possessing transferability), and maximally fools one individual $f_{\theta_i}$, as it is selected from the collection $\Omega$ of single attacks.

# 3.3 DEFENSE ON MULTI-OUTPUT NETWORKS

For simplicity and fair comparison, we focus on adversarial training (Madry et al., 2018) as our defense framework, where the three attack forms defined above can be plugged in to generate adversarial images that augment training, as follows ($\Theta$ is the union of learnable parameters):

$$
\theta_{i} = \underset{\theta_{i}}{\arg\min}\ \phi\left(f_{\theta_{i}}(x), y\right) + \phi\left(f_{\theta_{i}}\left(x^{adv}\right), y\right), \quad \theta_{i} \in \Theta, \tag{5}
$$

where $x^{adv} \in \{x_i^{adv}, x_{avg}^{adv}, x_{max}^{adv}\}$. As the $f_{\theta_i}$'s partially share their weights in a multi-output network, the updates from different $f_{\theta_i}$'s are averaged on the shared parameters.

# 4 EXPERIMENTAL RESULTS

# 4.1 EXPERIMENTAL SETUP

Evaluation Metrics We evaluate accuracy, robustness, and efficiency, using the metrics below:

- Testing Accuracy (TA): the classification accuracy on the original clean test set.
- Adversarial Testing Accuracy (ATA): given an attacker, the classification accuracy on the attacked test set. It is the same as the "robust accuracy" in (Zhang et al., 2019).
- Mega Flops (MFlops): the number of million floating-point multiplication operations consumed during inference, averaged over the entire testing set.

Datasets and Benchmark Models We evaluate three representative CNN models on two popular datasets: SmallCNN on MNIST (Chen et al., 2018); ResNet-38 (He et al., 2016) and MobileNet-V2 (Sandler et al., 2018) on CIFAR-10. The three networks span from the simplest to more complicated ones, and include a compact backbone.
All three models are defended by adversarial training, constituting strong baselines. Table 1 reports the models, datasets, the attacker algorithm used in attack & defense, and the TA/ATA/MFlops performance of the three defended models.

Attack and Defense on RDI-Nets We build RDI-Nets by appending side-branch outputs to each backbone. For SmallCNN, we add two side branches ($K = 2$). For ResNet-38 and MobileNet-V2, we have $K = 6$ and $K = 2$, respectively. The branches are designed to cause negligible overhead: more details on their structure and positions can be found in Appendix B. We call the resulting models RDI-SmallCNN, RDI-ResNet38 and RDI-MobileNetV2 hereinafter.

We then generate attacks using our three defined forms. Each attack form can be solved with various attacker algorithms (e.g., PGD, C&W, FGSM), and by default we solve it with the same attacker used for each backbone in Table 1. If we fix one attacker algorithm (e.g., PGD), then TA/ATA for a single-output network can be measured without ambiguity. Yet for a $(K + 1)$-output RDI-Net, there can be at least $K + 3$ different ATA numbers for one defended model, depending on which attack form from Section 3.2 is applied ($K + 1$ single attacks, 1 average attack, and 1 max-average attack). For example, we denote by ATA (Branch 1) the ATA number when applying the single attack generated from the first side output branch (i.e., $x_{1}^{adv}$); similarly elsewhere.

We also defend RDI-Nets using adversarial training, using these attack forms to generate the adversarial images that augment training. By default, we adopt three adversarial training defense schemes: Main Branch (single attack using $x_{K+1}^{adv}$), Average (using $x_{avg}^{adv}$), and Max-Average (using $x_{max}^{adv}$), in addition to the undefended RDI-Nets (using standard training), denoted as Standard.

We cross-evaluate the ATAs of different defenses and attacks, since an ideal defense shall protect against all possible attack forms.
To faithfully indicate the actual robustness, we choose the lowest number among all $K + 3$ ATAs, denoted as ATA (Worst-Case), as the robustness measure for an RDI-Net.

Table 1: Benchmarking results of adversarial training on the three networks. PGD-40 denotes running the projected gradient descent attacker (Madry et al., 2018) for 40 iterations. We set the perturbation size to 0.3 for MNIST and 8/255 for CIFAR-10 in the $\ell_{\infty}$ norm (adopted in all following experiments).
| Model | Dataset | Defense | Attack | TA | ATA | MFlops |
| --- | --- | --- | --- | --- | --- | --- |
| SmallCNN | MNIST | PGD-40 | PGD-40 | 99.49% | 96.31% | 9.25 |
| ResNet-38 | CIFAR-10 | PGD-10 | PGD-20 | 83.62% | 42.29% | 79.42 |
| MobileNetV2 | CIFAR-10 | PGD-10 | PGD-20 | 84.42% | 46.92% | 86.91 |
# 4.2 EVALUATION AND ANALYSIS

MNIST Experiments The MNIST experimental results on RDI-SmallCNN are summarized in Table 2, with several meaningful observations to be drawn. First, the undefended models (Standard) are easily compromised by all attack forms. Second, the single attack-defended model (Main Branch) achieves the best ATA against the same type of attack, i.e., ATA (Main Branch), and also seems to boost the closest output branch's robustness, i.e., ATA (Branch 2). However, its defense effect on the further-away Branch 1 is degraded, and the model is also fragile under the two stronger attacks (Average and Max-Average). Third, both the Average and Max-Average defenses achieve good TAs, as well as good ATAs against all attack forms (and therefore Worst-Case), with Max-Average slightly better at both (the margins are small due to the simplicity of the data/task; see the next two experiments).

Moreover, compared to the strong baseline of SmallCNN defended by PGD (40 iterations)-based adversarial training, RDI-SmallCNN with the Max-Average defense wins in terms of both TA and ATA. Impressively, this comes together with $34.30\%$ computational savings compared to the baseline. Here the different defense forms do not appear to alter the inference efficiency much: they all save around $34\%$-$36\%$ MFlops compared to the backbone.

Table 2: The performance of RDI-SmallCNN. The "Average MFlops" is calculated by averaging the total flop costs consumed over the inference of the entire test set (different samples take different FLOPs due to input-adaptive inference). The perturbation size and step size are 0.3 and 0.01, respectively.
| Defense Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 99.48% | 99.50% | 99.51% | 99.52% |
| ATA (Branch 1) | 6.60% | 60.50% | 98.69% | 98.52% |
| ATA (Branch 2) | 3.16% | 98.14% | 97.64% | 97.62% |
| ATA (Main Branch) | 1.32% | 96.70% | 96.30% | 96.43% |
| ATA (Average) | 2.61% | 61.35% | 97.37% | 97.42% |
| ATA (Max-Average) | 2.10% | 61.83% | 96.82% | 96.89% |
| ATA (Worst-Case) | 1.32% | 60.50% | 96.30% | 96.43% |
| Average MFlops | 5.89 | 5.89 | 5.95 | 6.08 |
| Computation Saving | 36.40% | 36.40% | 35.70% | 34.30% |
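The "Computation Saving" row follows from the average MFlops relative to the 9.25 MFlops of the defended SmallCNN backbone in Table 1. A quick sanity check (our own arithmetic, not the authors' code):

```python
def saving(avg_mflops, backbone_mflops):
    """Relative computation saving of adaptive inference over the backbone."""
    return 100.0 * (backbone_mflops - avg_mflops) / backbone_mflops

backbone = 9.25  # MFlops of the defended SmallCNN (Table 1)
for avg in (5.89, 5.95, 6.08):
    print(f"{avg} MFlops -> {saving(avg, backbone):.1f}% saved")
# -> roughly 36.3%, 35.7%, 34.3%, agreeing with Table 2 up to the
#    rounding of the reported average MFlops.
```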
CIFAR-10 Experiments The results on RDI-ResNet38 and RDI-MobileNetV2 are presented in Tables 3 and 4, respectively. Most findings concur with the MNIST experiments. Specifically, on the more complicated CIFAR-10 classification task, the Max-Average defense achieves much more obvious margins over the Average defense in terms of ATA (Worst-Case): $2.79\%$ for RDI-ResNet38, and $1.06\%$ for RDI-MobileNetV2. Interestingly, the Average defense is not even the strongest against average attacks, as the Max-Average defense achieves a higher ATA (Average) in both cases. We conjecture that averaging all branch losses might "over-smooth" and diminish useful gradients.

Compared to the defended ResNet-38 and MobileNet-V2 backbones, RDI-Nets with the Max-Average defense achieve higher TAs and ATAs for both. In particular, the ATA (Worst-Case) of RDI-ResNet38 surpasses the ATA of ResNet-38 defended by PGD adversarial training by $1.03\%$, while saving around $30\%$ of the inference budget. We find that different defenses on CIFAR-10 have more notable impacts on computational saving. Seemingly, a stronger defense (Max-Average) requires inputs to go through the scrutiny of more layers on average before outputting sufficiently confident predictions, a sensible observation.

Visualization of Adaptive Inference Behaviors We visualize the exiting behaviors of RDI-ResNet38 in Fig. 2. We plot the exiting percentage of each branch on the clean set and the (worst-case) adversarial sets of examples. A few interesting observations can be made. First, we observe that the single-attack defended model can be easily fooled, as adversarial examples can be routed through other less-defended outputs (due to the limited transferability of attacks between different outputs). Second, the two stronger defenses (Average and Max-Average) show a much more uniform usage of the multiple outputs. Their routing behaviors for clean examples are almost identical.
For adversarial examples, the Max-Average defense tends to call upon the full inference path more often (i.e., it is more "conservative").

Table 3: The performance evaluation on RDI-ResNet38. The perturbation size and step size are 8/255 and 2/255, respectively.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 92.43% | 83.74% | 82.42% | 83.79% |
| ATA (Branch1) | 0.12% | 12.02% | 71.56% | 69.71% |
| ATA (Branch2) | 0.01% | 5.58% | 66.67% | 63.11% |
| ATA (Branch3) | 0.04% | 42.73% | 60.65% | 60.72% |
| ATA (Branch4) | 0.06% | 34.95% | 50.17% | 47.82% |
| ATA (Branch5) | 0.06% | 41.77% | 44.83% | 45.53% |
| ATA (Branch6) | 0.11% | 41.68% | 45.83% | 44.12% |
| ATA (Main Branch) | 0.13% | 42.74% | 47.52% | 49.82% |
| ATA (Average) | 0.01% | 9.14% | 42.09% | 43.32% |
| ATA (Max-Average) | 0.01% | 7.15% | 40.53% | 43.43% |
| ATA (Worst-Case) | 0.01% | 5.58% | 40.53% | 43.32% |
| Average MFlops | 29.41 | 48.27 | 56.90 | 57.81 |
| Computation Saving | 62.96% | 39.20% | 28.35% | 27.20% |
Table 4: The performance evaluation on RDI-MobileNetV2. The perturbation size and step size are 8/255 and 2/255, respectively.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 93.22% | 85.28% | 82.14% | 84.91% |
| ATA (Branch1) | 0.35% | 37.40% | 67.65% | 71.78% |
| ATA (Branch2) | 0% | 47.35% | 50.38% | 50.15% |
| ATA (Main Branch) | 0% | 46.69% | 49.33% | 46.99% |
| ATA (Average) | 0% | 35.20% | 45.93% | 47.00% |
| ATA (Max-Average) | 0% | 36.66% | 49.33% | 50.18% |
| ATA (Worst-Case) | 0% | 35.20% | 45.93% | 46.99% |
| Average MFlops | 49.78 | 52.81 | 58.23 | 60.84 |
| Computation Saving | 42.72% | 39.23% | 33.00% | 29.99% |
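The "Average MFlops" rows in the tables above are expected inference costs over the test set. Assuming a fraction $p_i$ of inputs exits at branch $i$ after consuming $f_i$ cumulative FLOPs, the expected cost is $\sum_i p_i f_i$; a minimal sketch with hypothetical numbers (not the paper's measurements):

```python
def average_flops(exit_fractions, cumulative_flops):
    """Expected inference cost of a multi-exit network.

    exit_fractions[i]: fraction of inputs that exit at branch i (sums to 1).
    cumulative_flops[i]: FLOPs consumed up to and including branch i.
    """
    assert abs(sum(exit_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(p * f for p, f in zip(exit_fractions, cumulative_flops))


# Hypothetical example: half the inputs exit at a cheap early branch.
cost = average_flops([0.5, 0.3, 0.2], [10.0, 50.0, 80.0])  # expected cost 36.0
```

"Computation Saving" is then one minus this expectation divided by the full-model FLOPs.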
![](images/1f597c5668659ba0b8123a3507c69a647f194e9ea447287805f6644441d14e77.jpg)
Figure 2: The exiting behaviours of RDI-ResNet38 defended by (a) Single attack defense (Main Branch); (b) Average defense; and (c) Max-Average defense.

![](images/4d9145aed43742f96b0c377bc08de2acc9133bd0ea353caf809f0c6fec8a37fa.jpg)

![](images/fa2b8d86d7bed71c4a06e9c84261b1265a9a78bc83012936eae45ecc6fe5006f.jpg)

# 4.3 COMPARISON WITH DEFENDED SPARSE NETWORKS

An alternative way to achieve the accuracy-robustness-efficiency trade-off is to defend a sparse or compressed model. Inspired by (Guo et al., 2018; Gui et al., 2019), we compare RDI-Nets with the Max-Average defense to the following baseline: first compressing the network with a state-of-the-art model compression method (Huang & Wang, 2018), and then defending the compressed network using PGD-10 adversarial training. We sample different sparsity ratios in (Huang & Wang, 2018) to obtain models of different complexities. Fig. 6 in the Appendix visualizes the comparison on ResNet-38: for either method, we sample a few models of different MFLOPs. At similar inference costs (e.g., 49.38M for pruning + defense, and 48.35M for RDI-Nets), our proposed approach consistently achieves higher ATAs ($>2\%$) than the strong pruning + defense baseline, together with higher TAs.

We also compare with the latest ATMC algorithm (Gui et al., 2019), which jointly optimizes robustness and efficiency, applied to the same ResNet-38 backbone. As shown in Table 5, at comparable MFlops, RDI-ResNet38 surpasses ATMC by $0.3\%$ in terms of ATA, with a similar TA.

Table 5: Performance comparison between RDI-ResNet38 and ATMC.

| Methods | TA | ATA | MFlops |
| --- | --- | --- | --- |
| ATMC (Gui et al. (2019)) | 83.81 | 43.02 | 56.82 |
| RDI-ResNet-38 (Worst-Case) | 83.79 | 43.32 | 57.81 |

# 4.4 GENERALIZED ROBUSTNESS AGAINST OTHER ATTACKERS

In the aforementioned experiments, we have evaluated RDI-Nets only against "deterministic" PGD-based adversarial images. We now show that RDI-Nets also achieve better generalized robustness against other "randomized" or unseen attackers. We create a new "random attack" that randomly combines the multi-exit losses, and summarize the results in Table 6. We also follow a similar setting to Gui et al. (2019) and report the results against the FGSM (Goodfellow et al., 2015) and WRM (Sinha et al., 2018) attackers in Tables 7 and 8, respectively (more complete results can be found in Appendix D).

Table 6: Performance of RDI-ResNet38 against the random attack. The perturbation size and step size are 8/255 and 2/255, respectively. More details of the random attack can be found in Appendix D.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 92.43% | 83.74% | 82.42% | 83.79% |
| ATA (Random) | 0.01% | 10.33% | 43.11% | 44.86% |
| Average MFlops | 27.33 | 52.36 | 55.21 | 56.54 |
| Computation Saving | 65.58% | 34.07% | 30.48% | 28.80% |
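As context for the FGSM results below, the single-step FGSM attacker (Goodfellow et al., 2015) perturbs the input by $\epsilon$ in the sign direction of the loss gradient; a minimal numpy sketch, with the gradient assumed precomputed by some model:

```python
import numpy as np

def fgsm(x, grad_loss, eps):
    """Single-step FGSM: move x by eps in the sign direction of the
    loss gradient w.r.t. the input (gradient assumed precomputed)."""
    return x + eps * np.sign(grad_loss)


# Toy example with a hand-written gradient; eps matches the 8/255 budget.
x_adv = fgsm(np.zeros(3), np.array([1.0, -2.0, 0.0]), 8.0 / 255.0)
```

PGD, used for defense throughout the paper, iterates this step with a projection back into the $\ell_\infty$ ball.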
Table 7: Performance of RDI-ResNet38 (defended with PGD) against the FGSM attack (perturbation size 8/255). The original ResNet38 defended by PGD has an ATA of $51.11\%$ under the same attack.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 92.43% | 83.74% | 82.42% | 83.79% |
| ATA (Main Branch) | 11.51% | 51.45% | 53.64% | 54.72% |
| ATA (Average) | 11.41% | 50.21% | 51.81% | 53.20% |
| ATA (Max-Average) | 2.09% | 47.53% | 50.63% | 52.40% |
| ATA (Worst-Case) | 2.09% | 47.53% | 50.63% | 51.05% |
| Average MFlops | 65.74 | 55.27 | 58.27 | 59.67 |
| Computation Saving | 17.21% | 30.40% | 26.40% | 24.86% |
Table 8: Performance of RDI-ResNet38 (defended with PGD) against the WRM attack (perturbation size 0.3). The original ResNet38 defended by PGD has an ATA of $83.35\%$ under the same attack.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 92.43% | 83.74% | 82.42% | 83.79% |
| ATA (Main Branch) | 34.42% | 83.74% | 82.42% | 83.78% |
| ATA (Average) | 26.48% | 83.69% | 82.36% | 83.77% |
| ATA (Max-Average) | 23.51% | 83.73% | 82.40% | 83.78% |
| ATA (Worst-Case) | 23.51% | 83.69% | 82.36% | 83.77% |
| Average MFlops | 50.05 | 50.46 | 52.89 | 52.38 |
| Computation Saving | 36.98% | 36.46% | 33.40% | 34.04% |
# 5 DISCUSSION AND ANALYSIS

Intuition: Multi-Output Networks as Special Ensembles Our intuition for defending multi-output networks arises from the success of ensemble defense in improving both accuracy and robustness (Tramèr et al., 2018; Strauss et al., 2017), which also aligns with the model capacity hypothesis (Nakkiran, 2019). A general multi-output network (Xu et al., 2019) can be decomposed into an ensemble of single-output models with weight re-use enforced among them. It is thus more compact than an ensemble of independent models, and the extent of weight sharing calibrates ensemble diversity versus efficiency. Therefore, we expect a defended multi-output network to (mostly) inherit the strong accuracy/robustness of ensemble defense, while keeping the inference cost lower.

Do "Triple Wins" Go Against the Model Capacity Needs? We point out that our seemingly "free" efficiency gains (e.g., not sacrificing TA/ATA) do not go against the current belief that a more accurate and robust classifier relies on a larger model capacity (Nakkiran, 2019). From the visualization, there remains a portion of clean/adversarial examples that have to utilize the full inference path to be predicted well. In other words, the full model capacity is still necessary to achieve our current TAs/ATAs. Meanwhile, just as in standard classification (Wang et al., 2018a), not all adversarial examples are born equal: many of them can be predicted at lower inference cost (taking earlier exits). Therefore, RDI-Nets reduce the "effective model capacity" averaged over all testing samples for overall higher inference efficiency, without altering the full model capacity.

# 6 CONCLUSION

This paper targets simultaneously achieving high accuracy and robustness while keeping inference costs low. We introduce multi-output networks and input-adaptive dynamic inference to the adversarial defense field, as a strong tool, for the first time.
Our RDI-Nets achieve the "triple wins" of better accuracy, stronger robustness, and around $30\%$ inference computational savings. Our future work will extend RDI-Nets to more dynamic inference mechanisms.

# 7 ACKNOWLEDGEMENT

We would like to thank Dr. Yang Yang from Walmart Technology for highly helpful discussions throughout this project.

# REFERENCES

Shumeet Baluja and Ian Fischer. Adversarial transformation networks: Learning to generate adversarial examples. In AAAI, 2018.
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In ECML, 2013.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In SP, 2017.
Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In NeurIPS, 2018.
Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. In ICML, 2019.
Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially adaptive computation time for residual networks. In CVPR, 2017.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, and Ji Liu. Model compression with adversarial robustness: A unified optimization framework. In NeurIPS, pp. 1283-1294, 2019.
Yiwen Guo, Chao Zhang, Changshui Zhang, and Yurong Chen. Sparse dnns with improved adversarial robustness. In NeurIPS, 2018.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NeurIPS, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Hanzhang Hu, Debadeepta Dey, J.
Andrew Bagnell, and Martial Hebert. Anytime neural networks via joint optimization of auxiliary losses. In AAAI, 2019.
Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. Multi-scale dense networks for resource efficient image classification. In ICLR, 2018.
Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. In ECCV, 2018.
Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitras. Shallow-deep networks: Understanding and mitigating network overthinking. In ICML, 2019.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In ICLR, 2017.
Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In CVPR, 2018.
Ji Lin, Chuang Gan, and Song Han. Defensive quantization: When efficiency meets robustness. In ICLR, 2019.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
Preetum Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv, 2019.
Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In EuroS&P, 2016.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, 2018.
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In NeurIPS, 2018.
Aman Sinha, Hongseok Namkoong, and John Duchi. Certifying some distributional robustness with principled adversarial training. In ICLR, 2018.
Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman.
Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In ICLR, 2018.
Thilo Strauss, Markus Hanselmann, Andrej Junginger, and Holger Ulmer. Ensemble methods as a defense to adversarial perturbations against deep neural networks. arXiv, 2017.
Ke Sun, Zhanxing Zhu, and Zhouchen Lin. Towards understanding adversarial examples systematically: Exploring data size, task and model factors. arXiv, 2019.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.
Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. In ICLR, 2016.
Surat Teerapittayanon, Bradley McDanel, and H. T. Kung. Branchynet: Fast inference via early exiting from deep neural networks. In ICPR, 2017.
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In ICLR, 2018.
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In ICLR, 2019.
Xin Wang, Fisher Yu, Zi-Yi Dou, and Joseph E Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. In ECCV, 2018a.
Yue Wang, Tan Nguyen, Yang Zhao, Zhangyang Wang, Yingyan Lin, and Richard Baraniuk. Energynet: Energy-efficient dynamic inference. 2018b.
Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, and Yingyan Lin. Dual dynamic inference: Enabling more efficient, adaptive and controllable deep inference. arXiv preprint arXiv:1907.04523, 2019.
Yunhe Wang, Chang Xu, Chao Xu, and Dacheng Tao. Adversarial learning of portable student networks. In AAAI, 2018c.
Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li.
Learning structured sparsity in deep neural networks. In NeurIPS, 2016.
Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, 2016.
Junru Wu, Yue Wang, Zhenyu Wu, Zhangyang Wang, Ashok Veeraraghavan, and Yingyan Lin. Deep $k$-means: Re-training and parameter sharing with harder cluster assignments for compressing deep convolutions. In ICML, 2018.
Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. In ICLR, 2018.
Donna Xu, Yaxin Shi, Ivor W Tsang, Yew-Soon Ong, Chen Gong, and Xiaobo Shen. A survey on multi-output learning. arXiv, 2019.
Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. In NDSS, 2018.
Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, and Xue Lin. Adversarial robustness vs model compression, or both? In ICCV, 2019.
Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. On compressing deep models by low rank and sparse decomposition. In CVPR, 2017.
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv, 2019.
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, 2018.

# A LEARNING DETAILS OF RDI-NETS

MNIST We adopt the network architecture from (Chen et al., 2018) with four convolutional and three fully-connected layers. We train for 13100 iterations with a batch size of 256. The learning rate is initialized to 0.033 and lowered by a factor of 10 at the 12000th and 12900th iterations. For the hybrid loss, the weights $\{w_i\}_{i=1}^{N+1}$ are set to $\{1,1,1\}$ for simplicity. For adversarial defense/attack, we perform 40-step PGD for both defense and evaluation.
The perturbation size and step size are set to 0.3 and 0.01, respectively.

CIFAR-10 We take ResNet-38 and MobileNetV2 as the backbone architectures. For RDI-ResNet38, we initialize the learning rate to 0.1 and decay it by a factor of 10 at the 32000th and 48000th iterations. The learning procedure stops at the 55000th iteration. For RDI-MobileNetV2, the learning rate is set to 0.05 and lowered by a factor of 10 at the 62000th and 70000th iterations. We stop the learning procedure at the 76000th iteration. For the hybrid loss, we follow the discussion in (Hu et al., 2019) and set $\{w_i\}_{i=1}^{N+1}$ of RDI-ResNet38 and RDI-MobileNetV2 to $\{0.5, 0.5, 0.7, 0.7, 0.9, 0.9, 2\}$ and $\{0.5, 0.5, 1\}$, respectively. For adversarial defense/attack, the perturbation size and step size are set to $8/255$ and $2/255$. 10-step PGD is performed for defense and 20-step PGD is used for evaluation.

# B NETWORK STRUCTURE OF RDI-NETS

To build RDI-Nets, we follow a setting similar to Teerapittayanon et al. (2017) by appending additional branch classifiers at equidistant points throughout a given network, as illustrated in Figs. 3, 4, and 5. A few pooling operations, light-weight convolutions, and fully-connected layers are appended to each branch classifier. Note that the extra FLOPs introduced by the side branch classifiers are less than $2\%$ of the original ResNet-38 or MobileNetV2.

![](images/2783d78c70974bec802e722049c8429100aa6e4f4e30c533ff48c87614949435.jpg)
Figure 3: Network architecture of RDI-SmallCNN. Two branch classifiers are inserted after the 1st and 3rd convolutional layers of the original SmallCNN.

![](images/3ee059b12bac3f4467e753dd62513c94d3aebb456dd0b700b0ee0cf6e0097bb4.jpg)
Figure 4: Network architecture of RDI-ResNet38. In each residual block group, two branch classifiers are inserted after the 1st and 4th residual blocks.

# C INPUT-ADAPTIVE INFERENCE FOR RDI-NETS

Similar to the deterministic strategy in Teerapittayanon et al.
(2017), we adopt the entropy as the measure of prediction confidence.

![](images/0f99bac5aaffcfae491118b7593da03c56d23103a976e7dee1e165196ceba0b9.jpg)
Figure 5: Network architecture of RDI-MobilenetV2. Two branch classifiers are inserted after the 3rd and 11th inverted residual blocks of the original MobilenetV2.

Given a prediction vector $y \in \mathbb{R}^C$, where $C$ is the number of classes, the entropy of $y$ is defined as follows,

$$
- \sum_{c=1}^{C} \left(y_{c} + \epsilon\right) \log \left(y_{c} + \epsilon\right), \tag{6}
$$

where $\epsilon$ is a small positive constant used for numerically robust entropy computation. To perform fast inference on a $(K+1)$-output RDI-Net, we need to determine $K$ threshold values, i.e., $\{t_i\}_{i=1}^K$, so that the input $x$ exits at the $i$-th branch if the entropy of $y_{i}$ is smaller than $t_i$. To choose $\{t_i\}_{i=1}^K$, Huang et al. (2018) provide a good starting point by fixing the exiting probability of each branch classifier equally on a validation set, so that each sample contributes equally to inference. We follow this strategy but adjust the thresholds to make the contribution of the middle branches slightly larger than that of the early branches. The threshold values for RDI-SmallCNN, RDI-ResNet38, and RDI-MobilenetV2 are set to $\{0.023, 0.014\}$, $\{0.32, 0.36, 0.39, 0.83, 1.12, 1.35\}$, and $\{0.267, 0.765\}$, respectively.

![](images/dd38539ade6c676264029de8d584f4be533fbf067f7401045f4d6c6fdbc83dcf.jpg)
Figure 6: Performance comparison between RDI-Net and the pruning + defense baseline. Each marker represents a model, whose size is proportional to its MFlops. $\gamma$ is the sparsity trade-off parameter: the larger, the sparser (smaller model).
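The entropy-based exiting rule of Eq. (6) can be sketched as follows (a minimal numpy sketch; `branch_preds` and the thresholds are hypothetical inputs, and a branch exits when its entropy falls below its threshold, i.e., when the prediction is confident):

```python
import numpy as np

def entropy(y, eps=1e-12):
    """Prediction entropy of Eq. (6); eps keeps the log numerically safe."""
    y = np.asarray(y, dtype=float)
    return float(-np.sum((y + eps) * np.log(y + eps)))

def exit_branch(branch_preds, thresholds):
    """Index of the first branch confident enough to exit early.

    branch_preds: K+1 softmax vectors, one per output; thresholds: the K
    values {t_i}. Falls through to the final (main) branch when no early
    exit fires.
    """
    for i, t in enumerate(thresholds):
        if entropy(branch_preds[i]) < t:
            return i
    return len(branch_preds) - 1
```

For three classes, the uniform vector has entropy $\ln 3 \approx 1.10$ and never exits early under small thresholds, while a peaked vector such as $[0.99, 0.005, 0.005]$ exits immediately.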
# D GENERALIZED ROBUSTNESS

Here, we introduce the form of the random attack and report the complete results against the FGSM (Goodfellow et al., 2015) and WRM (Sinha et al., 2018) attackers under various attack forms, in Tables 9 and 10, respectively.

Random Attack This attack exploits the multi-loss flexibility by randomly fusing all $f_{\theta_i}$ losses. Given an $N$-output network, we draw a fusion vector $C \in \mathbb{R}^N$ from some distribution $\mathbb{D}$ (uniform by default). We denote by $c_{j}$ the $j$-th element of $C$; $x_{rdm}^{adv}$ is then found by:

$$
x_{rdm}^{adv} = \underset{\|x' - x\|_{\infty} \leq \epsilon}{\arg\max} \left| \frac{1}{N} \sum_{j=1}^{N} c_{j} \phi\left(f_{\theta_{j}}\left(x'\right), y\right) \right|. \tag{7}
$$

It is expected to challenge our defense, due to the infinitely many ways of randomly fusing the outputs.

Table 9: The performance evaluation on RDI-ResNet38 (defended with PGD) against the FGSM attack. The perturbation size is 8/255. The ATA of the original ResNet38 defended by PGD under the same attacker is $51.11\%$.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 92.43% | 83.74% | 82.42% | 83.79% |
| ATA (Branch1) | 20.69% | 66.06% | 72.77% | 72.76% |
| ATA (Branch2) | 16.15% | 53.87% | 70.40% | 69.71% |
| ATA (Branch3) | 8.13% | 63.70% | 64.19% | 65.14% |
| ATA (Branch4) | 10.09% | 56.67% | 58.45% | 58.20% |
| ATA (Branch5) | 9.45% | 50.81% | 52.76% | 52.96% |
| ATA (Branch6) | 10.22% | 50.34% | 53.17% | 51.05% |
| ATA (Main Branch) | 11.51% | 51.45% | 53.64% | 54.72% |
| ATA (Average) | 11.41% | 50.21% | 51.81% | 53.20% |
| ATA (Max-Average) | 2.09% | 47.53% | 50.63% | 52.40% |
| ATA (Worst-Case) | 2.09% | 47.53% | 50.63% | 51.05% |
| Average MFlops | 65.74 | 55.27 | 58.27 | 59.67 |
| Computation Saving | 17.21% | 30.40% | 26.40% | 24.86% |
Table 10: The performance evaluation on RDI-ResNet38 (defended with PGD) against the WRM attack. The perturbation size is 0.3. The ATA of the original ResNet38 defended by PGD under the same attacker is $83.35\%$.
| Defence Method | Standard | Main Branch | Average | Max-Average |
| --- | --- | --- | --- | --- |
| TA | 92.43% | 83.74% | 82.42% | 83.79% |
| ATA (Branch1) | 46.60% | 83.73% | 82.42% | 83.78% |
| ATA (Branch2) | 71.33% | 83.73% | 82.42% | 83.79% |
| ATA (Branch3) | 23.51% | 83.73% | 82.41% | 83.78% |
| ATA (Branch4) | 33.41% | 83.73% | 82.42% | 83.78% |
| ATA (Branch5) | 42.35% | 83.73% | 82.41% | 83.78% |
| ATA (Branch6) | 47.77% | 83.74% | 82.40% | 83.78% |
| ATA (Main Branch) | 34.42% | 83.74% | 82.42% | 83.78% |
| ATA (Average) | 26.48% | 83.69% | 82.36% | 83.77% |
| ATA (Max-Average) | 23.51% | 83.73% | 82.40% | 83.78% |
| ATA (Worst-Case) | 23.51% | 83.69% | 82.36% | 83.77% |
| Average MFlops | 50.05 | 50.46 | 52.89 | 52.38 |
| Computation Saving | 36.98% | 36.46% | 33.40% | 34.04% |
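The randomized loss fusion at the core of Eq. (7) can be sketched as follows (a minimal numpy sketch; `losses` holds per-exit loss values $\phi(f_{\theta_j}(x'), y)$ assumed precomputed, and a full attacker would ascend the gradient of this fused objective with PGD, which is omitted here):

```python
import numpy as np

def random_fused_loss(losses, rng):
    """Randomly fused multi-exit objective of Eq. (7):
    |1/N * sum_j c_j * loss_j|, with fusion weights c ~ uniform."""
    losses = np.asarray(losses, dtype=float)
    c = rng.uniform(size=losses.shape)  # fusion vector C ~ D (uniform by default)
    return float(abs(np.mean(c * losses)))


rng = np.random.default_rng(0)  # each attack run resamples the fusion vector
fused = random_fused_loss([1.0, 2.0, 3.0], rng)
```

Because the fusion vector is resampled per run, the attacker explores many different weightings of the exits, which is what makes defending against it non-trivial.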
\ No newline at end of file diff --git a/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/images.zip b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6fded12d079bb4225ec21b5ee5d819fb25c14b73 --- /dev/null +++ b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8d7bff3bd42bea41c832f1df9364900c05a5ce50609e0be37080a654aa71a67 +size 863517 diff --git a/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/layout.json b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b054ab73565cfe6266aa515a44f5082e2f468688 --- /dev/null +++ b/triplewinsboostingaccuracyrobustnessandefficiencytogetherbyenablinginputadaptiveinference/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d25ee41fc7a0b6ba5b43c363aac6b9523a071f7471beb0906d4da24f2ccb0d55 +size 442292 diff --git a/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_content_list.json b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..274d04d39f7575cb4a9df98566d52207e29c07b9 --- /dev/null +++ b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:22703300d14bccbb9a365c6eba553810b87ce346c4dabbc57dab69c05c9145e0 +size 84339 diff --git a/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_model.json b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bdad0898e64dfc75239450eeaa9f3a7a1de6c835 --- /dev/null +++ b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96b732bae02ead97c466f266e162a050e96263917866876ae8c6b74b6531fd56 +size 102457 diff --git a/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_origin.pdf b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ee21dbcd56ad70522980d2ec0684ab448080ca65 --- /dev/null +++ b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/6f9e2f1e-9596-4f7c-8f02-c0e0c9ffb73b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17ed11ab4628a3afdde6f294cc187cddb923d1804da553c81293fd1282929ff2 +size 9590667 diff --git a/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/full.md b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e190bb2bdc960791c21cf7c73872852193c87fd0 
--- /dev/null +++ b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/full.md @@ -0,0 +1,286 @@

# U-GAT-IT: UNSUPERVISED GENERATIVE ATTENTIONAL NETWORKS WITH ADAPTIVE LAYER-INSTANCE NORMALIZATION FOR IMAGE-TO-IMAGE TRANSLATION

Junho Kim $^{1,2*}$, Minjae Kim $^{2}$, Hyeonwoo Kang $^{2}$, Kwang Hee Lee $^{3\dagger}$

$^{1}$ Clova AI Research, Naver Corp, $^{2}$ NCSOFT, $^{3}$ Boeing Korea Engineering and Technology Center
jhkim.ai@navercorp.com, {minjaekim, hwangk0131}@ncsoft.com, kwanghee.lee2@boeing.com

# ABSTRACT

We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains, based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods, which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on the dataset. Experimental results show the superiority of the proposed method compared to existing state-of-the-art models, with a fixed network architecture and hyper-parameters. Our code and datasets are available at https://github.com/taki0112/UGATIT or https://github.com/znxlwm/UGATITpytorch.

# 1 INTRODUCTION

Image-to-image translation aims to learn a function that maps images between two different domains. This topic has gained a lot of attention from researchers in the fields of machine learning and computer vision because of its wide range of applications, including image inpainting (Pathak et al.
(2014); Iizuka et al. (2017)), super resolution (Dong et al. (2016); Kim et al. (2016)), colorization (Zhang et al. (2016; 2017)), and style transfer (Gatys et al. (2016); Huang & Belongie (2017)). When paired samples are given, the mapping model can be trained in a supervised manner using a conditional generative model (Isola et al. (2017); Li et al. (2017a); Wang et al. (2018)) or a simple regression model (Larsson et al. (2016); Long et al. (2015); Zhang et al. (2016)). In unsupervised settings where no paired data are available, multiple works (Anoosheh et al. (2018); Choi et al. (2018); Huang et al. (2018); Kim et al. (2017); Liu et al. (2017); Royer et al. (2017); Taigman et al. (2017); Yi et al. (2017); Zhu et al. (2017)) have successfully translated images using shared latent space (Liu et al. (2017)) and cycle consistency assumptions (Kim et al. (2017); Zhu et al. (2017)). These works have been further developed to handle the multi-modality of the task (Huang et al. (2018)).

Despite these advances, previous methods show performance differences depending on the amount of change in both shape and texture between domains. For example, they are successful for style transfer tasks mapping local texture (e.g., photo2vangogh and photo2portrait) but are typically unsuccessful for image translation tasks with larger shape changes (e.g., selfie2anime and cat2dog) in wild images. Therefore, pre-processing steps such as image cropping and alignment are often required to avoid these problems by limiting the complexity of the data distributions (Huang et al. (2018); Liu et al. (2017)). In addition, existing methods such as DRIT (Lee et al. (2018)) cannot acquire the desired results for both image translation preserving the shape (e.g., horse2zebra) and image translation changing the shape (e.g., cat2dog) with a fixed network architecture and hyper-parameters. The network structure or hyper-parameter setting needs to be adjusted for the specific dataset.

![](images/8922f56bb527fc09f200655f3c32822831c70af96b22e832d288fe4220c13213.jpg)
Figure 1: The model architecture of U-GAT-IT. The detailed notations are described in the Model section.

In this work, we propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. Our model guides the translation to focus on more important regions and ignore minor regions by distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. These attention maps are embedded into the generator and discriminator to focus on semantically important areas, thus facilitating the shape transformation. While the attention map in the generator induces the focus on areas that specifically distinguish between the two domains, the attention map in the discriminator helps fine-tuning by focusing on the difference between real and fake images in the target domain. In addition to the attentional mechanism, we have found that the choice of the normalization function has a significant impact on the quality of the transformed results for various datasets with different amounts of change in shape and texture. Inspired by Batch-Instance Normalization (BIN) (Nam & Kim (2018)), we propose Adaptive Layer-Instance Normalization (AdaLIN), whose parameters are learned from datasets during training time by adaptively selecting a proper ratio between Instance Normalization (IN) and Layer Normalization (LN). The AdaLIN function helps our attention-guided model to flexibly control the amount of change in shape and texture.
As a result, our model, without modifying the model architecture or the hyper-parameters, can perform image translation tasks not only requiring holistic changes but also requiring large shape changes. In the experiments, we show the superiority of the proposed method compared to the existing state-of-the-art models on not only style transfer but also object transfiguration. The main contributions of the proposed work can be summarized as follows:

- We propose a novel method for unsupervised image-to-image translation with a new attention module and a new normalization function, AdaLIN.
- Our attention module helps the model to know where to transform intensively by distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier.
- The AdaLIN function helps our attention-guided model to flexibly control the amount of change in shape and texture without modifying the model architecture or the hyper-parameters.

# 2 UNSUPERVISED GENERATIVE ATTENTIONAL NETWORKS WITH ADAPTIVE LAYER-INSTANCE NORMALIZATION

Our goal is to train a function $G_{s \to t}$ that maps images from a source domain $X_s$ to a target domain $X_t$ using only unpaired samples drawn from each domain. Our framework consists of two generators $G_{s \to t}$ and $G_{t \to s}$ and two discriminators $D_s$ and $D_t$. We integrate the attention module into both generator and discriminator. The attention module in the discriminator guides the generator to focus on regions that are critical to generating a realistic image. The attention module in the generator gives attention to the regions distinguished from the other domain. Here, we only explain $G_{s \to t}$ and $D_t$ (see Fig. 1), as the reverse direction is analogous.

# 2.1 MODEL

# 2.1.1 GENERATOR

Let $x \in \{X_s, X_t\}$ represent a sample from the source and the target domain.
Our translation model $G_{s \to t}$ consists of an encoder $E_s$, a decoder $G_t$, and an auxiliary classifier $\eta_s$, where $\eta_s(x)$ represents the probability that $x$ comes from $X_s$. Let $E_s^k(x)$ be the $k$-th activation map of the encoder and $E_s^{k_{ij}}(x)$ be the value at $(i,j)$. Inspired by CAM (Zhou et al. (2016)), the auxiliary classifier is trained to learn the weight of the $k$-th feature map for the source domain, $w_s^k$, by using global average pooling and global max pooling, i.e., $\eta_s(x) = \sigma(\Sigma_k w_s^k \Sigma_{ij} E_s^{k_{ij}}(x))$. By exploiting $w_s^k$, we can calculate a set of domain-specific attention feature maps $a_s(x) = w_s * E_s(x) = \{w_s^k * E_s^k(x) | 1 \leq k \leq n\}$, where $n$ is the number of encoded feature maps. Then, our translation model $G_{s \to t}$ becomes equal to $G_t(a_s(x))$. Inspired by recent works that use affine transformation parameters in normalization layers and combine normalization functions (Huang & Belongie (2017); Nam & Kim (2018)), we equip the residual blocks with AdaLIN, whose parameters $\gamma$ and $\beta$ are dynamically computed by a fully connected layer from the attention map.

$$
\mathrm{AdaLIN}(a, \gamma, \beta) = \gamma \cdot (\rho \cdot \hat{a}_I + (1 - \rho) \cdot \hat{a}_L) + \beta,
$$

$$
\hat{a}_I = \frac{a - \mu_I}{\sqrt{\sigma_I^2 + \epsilon}}, \quad \hat{a}_L = \frac{a - \mu_L}{\sqrt{\sigma_L^2 + \epsilon}}, \tag{1}
$$

$$
\rho \leftarrow \mathrm{clip}_{[0, 1]}(\rho - \tau \Delta \rho),
$$

where $\mu_I$, $\sigma_I$ and $\mu_L$, $\sigma_L$ are the channel-wise and layer-wise mean and standard deviation respectively, $\gamma$ and $\beta$ are parameters generated by the fully connected layer, $\tau$ is the learning rate, and $\Delta \rho$ indicates the parameter update vector (e.g., the gradient) determined by the optimizer.
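As a concrete illustration, Eq. (1) can be sketched in a few lines of NumPy. This is only a minimal sketch, not the authors' implementation: it assumes a single activation tensor of shape (C, H, W) and treats $\gamma$, $\beta$, and $\rho$ as given scalars (in the network, $\gamma$ and $\beta$ come from a fully connected layer on the attention map and $\rho$ is a learnable parameter).

```python
import numpy as np

def adalin(a, gamma, beta, rho, eps=1e-5):
    """AdaLIN of Eq. (1): blend instance- and layer-normalized activations.

    a:           activation tensor of shape (C, H, W)
    gamma, beta: affine parameters (scalars here; produced by a fully
                 connected layer from the attention map in the paper)
    rho:         gate in [0, 1]; rho = 1 gives pure IN, rho = 0 pure LN
    """
    # Instance normalization: statistics per channel, over (H, W)
    mu_i = a.mean(axis=(1, 2), keepdims=True)
    sigma2_i = a.var(axis=(1, 2), keepdims=True)
    a_hat_i = (a - mu_i) / np.sqrt(sigma2_i + eps)

    # Layer normalization: statistics over the whole layer (C, H, W)
    mu_l = a.mean()
    sigma2_l = a.var()
    a_hat_l = (a - mu_l) / np.sqrt(sigma2_l + eps)

    # rho is kept in [0, 1]; in training it is clipped after each update
    rho = np.clip(rho, 0.0, 1.0)
    return gamma * (rho * a_hat_i + (1.0 - rho) * a_hat_l) + beta
```

Setting $\rho = 1$ reduces the blend to instance normalization (per-channel statistics) and $\rho = 0$ to layer normalization, matching the two extremes of IN and LN.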
The values of $\rho$ are constrained to the range $[0, 1]$ simply by imposing bounds at the parameter update step. The generator adjusts $\rho$ so that its value is close to 1 for tasks where instance normalization is important and close to 0 for tasks where LN is important. The value of $\rho$ is initialized to 1 in the residual blocks of the decoder and 0 in the up-sampling blocks of the decoder.

An optimal method to transfer the content features onto the style features is to apply the Whitening and Coloring Transform (WCT) (Li et al. (2017b)), but its computational cost is high due to the calculation of the covariance matrix and its inverse. Although AdaIN (Huang & Belongie (2017)) is much faster than WCT, it is sub-optimal to WCT as it assumes no correlation between feature channels; thus the transferred features contain slightly more patterns of the content. On the other hand, LN (Ba et al. (2016)) does not assume uncorrelated channels, but it sometimes does not preserve the content structure of the original domain well because it considers only global statistics of the feature maps. To overcome this, our proposed normalization technique AdaLIN combines the advantages of AdaIN and LN by selectively keeping or changing the content information, which helps to solve a wide range of image-to-image translation problems.

# 2.1.2 DISCRIMINATOR

Let $x \in \{X_{t}, G_{s \to t}(X_{s})\}$ represent a sample from the target domain and the translated source domain. Similar to other translation models, the discriminator $D_{t}$, which is a multi-scale model, consists of an encoder $E_{D_{t}}$, a classifier $C_{D_{t}}$, and an auxiliary classifier $\eta_{D_{t}}$. Unlike in other translation models, both $\eta_{D_{t}}(x)$ and $D_{t}(x)$ are trained to discriminate whether $x$ comes from $X_{t}$ or $G_{s \to t}(X_{s})$.
Given a sample $x$, $D_{t}(x)$ exploits the attention feature maps $a_{D_{t}}(x) = w_{D_{t}} * E_{D_{t}}(x)$ using $w_{D_{t}}$ on the encoded feature maps $E_{D_{t}}(x)$, which are trained by $\eta_{D_{t}}(x)$. Then, our discriminator $D_{t}(x)$ becomes equal to $C_{D_{t}}(a_{D_{t}}(x))$.

# 2.2 LOSS FUNCTION

The full objective of our model comprises four loss functions. Here, instead of using the vanilla GAN objective, we use the Least Squares GAN (Mao et al. (2017)) objective for stable training.

Adversarial loss An adversarial loss is employed to match the distribution of the translated images to the target image distribution:

$$
L_{lsgan}^{s \rightarrow t} = \mathbb{E}_{x \sim X_t}\left[\left(D_t(x)\right)^2\right] + \mathbb{E}_{x \sim X_s}\left[\left(1 - D_t\left(G_{s \rightarrow t}(x)\right)\right)^2\right]. \tag{2}
$$

Cycle loss To alleviate the mode collapse problem, we apply a cycle consistency constraint to the generator. Given an image $x \in X_s$, after the sequential translations of $x$ from $X_s$ to $X_t$ and from $X_t$ to $X_s$, the image should be successfully translated back to the original domain:

$$
L_{cycle}^{s \rightarrow t} = \mathbb{E}_{x \sim X_s}\left[|x - G_{t \rightarrow s}(G_{s \rightarrow t}(x))|_1\right]. \tag{3}
$$

Identity loss To ensure that the color distributions of the input and output images are similar, we apply an identity consistency constraint to the generator. Given an image $x \in X_{t}$, after the translation of $x$ using $G_{s \to t}$, the image should not change:

$$
L_{identity}^{s \rightarrow t} = \mathbb{E}_{x \sim X_t}\left[|x - G_{s \rightarrow t}(x)|_1\right]. \tag{4}
$$

CAM loss By exploiting the information from the auxiliary classifiers $\eta_{s}$ and $\eta_{D_t}$, given an image $x\in \{X_s,X_t\}$,
$G_{s\rightarrow t}$ and $D_{t}$ get to know where they need to improve or what makes the most difference between the two domains in the current state:

$$
L_{cam}^{s \rightarrow t} = -\left(\mathbb{E}_{x \sim X_s}\left[\log\left(\eta_s(x)\right)\right] + \mathbb{E}_{x \sim X_t}\left[\log\left(1 - \eta_s(x)\right)\right]\right), \tag{5}
$$

$$
L_{cam}^{D_t} = \mathbb{E}_{x \sim X_t}\left[(\eta_{D_t}(x))^2\right] + \mathbb{E}_{x \sim X_s}\left[\left(1 - \eta_{D_t}(G_{s \rightarrow t}(x))\right)^2\right]. \tag{6}
$$

Full objective Finally, we jointly train the encoders, decoders, discriminators, and auxiliary classifiers to optimize the final objective:

$$
\min_{G_{s \rightarrow t}, G_{t \rightarrow s}, \eta_s, \eta_t} \max_{D_s, D_t, \eta_{D_s}, \eta_{D_t}} \lambda_1 L_{lsgan} + \lambda_2 L_{cycle} + \lambda_3 L_{identity} + \lambda_4 L_{cam}, \tag{7}
$$

where $\lambda_1 = 1, \lambda_2 = 10, \lambda_3 = 10, \lambda_4 = 1000$. Here, $L_{lsgan} = L_{lsgan}^{s \rightarrow t} + L_{lsgan}^{t \rightarrow s}$, and the other losses ($L_{cycle}$, $L_{identity}$, and $L_{cam}$) are defined in a similar way.

![](images/e6ebb5aa4fb00ff8ec0faba3a26bda0d54aa64f34350b70615295714f142b1e7.jpg)
Figure 2: Visualization of the attention maps and their effects shown in the ablation experiments: (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminator, respectively. (e) Our results with CAM, (f) Results without CAM.

# 3 EXPERIMENTS

# 3.1 BASELINE MODEL

We have compared our method with various models including CycleGAN (Zhu et al. (2017)), UNIT (Liu et al. (2017)), MUNIT (Huang et al. (2018)), DRIT (Lee et al. (2018)), AGGAN (Mejjati et al. (2018)), and CartoonGAN (Chen et al. (2018)). All baseline methods are implemented using the authors' code.
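For reference, the scalar loss terms of Eqs. (2)-(6) and the weighted objective of Eq. (7) defined in Section 2.2 can be sketched as follows. This is a hedged NumPy sketch, not the authors' training code: the array arguments stand in for discriminator and auxiliary-classifier outputs, and expectations are approximated by batch means.

```python
import numpy as np

# Expectations are approximated by batch means; the inputs below
# (d_*, eta_*, x_*) stand for network outputs given as NumPy arrays.

def lsgan_loss(d_real, d_fake):
    # Eq. (2): least-squares adversarial objective for the s->t direction
    return np.mean(d_real ** 2) + np.mean((1.0 - d_fake) ** 2)

def cycle_loss(x, x_cycled):
    # Eq. (3): L1 distance between x and G_{t->s}(G_{s->t}(x))
    return np.mean(np.abs(x - x_cycled))

def identity_loss(x, x_identity):
    # Eq. (4): L1 distance between x in X_t and G_{s->t}(x)
    return np.mean(np.abs(x - x_identity))

def cam_loss_generator(eta_s_on_source, eta_s_on_target):
    # Eq. (5): auxiliary-classifier (CAM) loss for the generator side
    return -(np.mean(np.log(eta_s_on_source))
             + np.mean(np.log(1.0 - eta_s_on_target)))

def cam_loss_discriminator(eta_d_on_real, eta_d_on_fake):
    # Eq. (6): CAM loss for the discriminator side
    return np.mean(eta_d_on_real ** 2) + np.mean((1.0 - eta_d_on_fake) ** 2)

def full_objective(l_lsgan, l_cycle, l_identity, l_cam,
                   lam1=1.0, lam2=10.0, lam3=10.0, lam4=1000.0):
    # Eq. (7): weighted sum with the paper's lambda_1..lambda_4
    return lam1 * l_lsgan + lam2 * l_cycle + lam3 * l_identity + lam4 * l_cam
```

In the min-max game of Eq. (7), the generator minimizes and the discriminator maximizes these weighted terms; in practice each side would use its own signed version of the adversarial and CAM terms.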
# 3.2 DATASET

We have evaluated the performance of each method on five unpaired image datasets, including four representative image translation datasets and a newly created dataset consisting of real photos and animation artworks, i.e., selfie2anime. All images are resized to $256\times 256$ for training. See Appendix C for details of each dataset used in our experiments.

# 3.3 EXPERIMENT RESULTS

We first analyze the effects of the attention module and AdaLIN in the proposed model. We then compare the performance of our model against the other unsupervised image translation models listed in the previous section. To evaluate the visual quality of translated images, we have conducted a user study. Users are asked to select the best image among the images generated from five different methods. More examples of the results comparing our model with other models are included in the supplementary materials.

# 3.3.1 CAM ANALYSIS

First, we conduct an ablation study to confirm the benefit of the attention modules used in both the generator and the discriminator. As shown in Fig 2 (b), the attention feature map helps the generator to focus on the source image regions that are more discriminative from the target domain, such as the eyes and mouth. Meanwhile, we can see the regions where the discriminator concentrates its attention to determine whether the target image is real or fake by visualizing local and global attention maps of

![](images/af89eb31e6a9be5d73f996b0e88175fad41ec7150eb52794f4ba8f470eefa22c.jpg)
Figure 3: Comparison of the results using each normalization function: (a) Source images, (b) Our results, (c) Results only using IN in decoder with CAM, (d) Results only using LN in decoder with CAM, (e) Results only using AdaIN in decoder with CAM, (f) Results only using GN in decoder with CAM.
Note that we incorporate both global and local attention maps from two discriminators having different size of receptive field. Those maps can help the generator to capture the global structure (e.g., face area and near of eyes) as well as the local regions. With this information some regions are translated with more care. The results with the attention module shown in Fig 2 (e) verify the advantageous effect of exploiting attention feature map in an image translation task. On the other hand, one can see that the eyes are misaligned, or the translation is not done at all in the results without using attention module as shown in Fig 2 (f). + +# 3.3.2 ADALIN ANALYSIS + +As described in Appendix B, we have applied the AdaLIN only to the decoder of the generator. The role of the residual blocks in the decoder is to embed features, and the role of the up-sampling convolution blocks in the decoder is to generate target domain images from the embedded features. If the learned value of the gate parameter $\rho$ is closer to 1, it means that the corresponding layers rely more on IN than LN. Likewise, if the learned value of $\rho$ is closer to 0, it means that the corresponding layers rely more on LN than IN. As shown in Fig 3 (c), in the case of using only IN in the decoder, the features of the source domain (e.g., earrings and shades around cheekbones) are well preserved due to channel-wise normalized feature statistics used in the residual blocks. However, the amount of translation to target domain style is somewhat insufficient since the global style cannot be captured by IN of the up-sampling convolution blocks. On the other hand, As shown in Fig 3 (d), if we use only LN in the decoder, target domain style can be transferred sufficiently by virtue of layerwise normalized feature statistics used in the up-sampling convolution. But the features of the source domain image are less preserved by using LN in the residual blocks. 
This analysis of two extreme cases tells us that it is beneficial to rely more on IN than LN in the feature representation layers to preserve the semantic characteristics of the source domain, and the opposite is true for the up-sampling layers that actually generate images from the feature embedding. Therefore, the proposed AdaLIN, which adjusts the ratio of IN and LN in the decoder according to the source and target domain distributions, is preferable for unsupervised image-to-image translation tasks. Additionally, Fig 3 (e) and (f) show the results of using AdaIN and Group Normalization (GN) (Wu & He (2018)), respectively; our method shows better results than both.

![](images/a3e25c82b2fceb59b96c7ddcb206067c42c8c3fc332234d46de9712f44db15fe.jpg)
Figure 4: Visual comparisons on the five datasets. From top to bottom: selfie2anime, horse2zebra, cat2dog, photo2portrait, and photo2vangogh. (a) Source images, (b) U-GAT-IT, (c) CycleGAN, (d) UNIT, (e) MUNIT, (f) DRIT, (g) AGGAN.

Table 1: Kernel Inception Distance $\times 100 \pm$ std. $\times 100$ for the ablation study of our model. Lower is better. Notation: GN, Group Normalization; G_CAM, CAM of the generator; D_CAM, CAM of the discriminator.
| Model | selfie2anime | anime2selfie |
| --- | --- | --- |
| U-GAT-IT | 11.61 ± 0.57 | 11.52 ± 0.57 |
| U-GAT-IT w/ IN | 13.64 ± 0.76 | 13.58 ± 0.8 |
| U-GAT-IT w/ LN | 12.39 ± 0.61 | 13.17 ± 0.8 |
| U-GAT-IT w/ AdaIN | 12.29 ± 0.78 | 11.81 ± 0.77 |
| U-GAT-IT w/ GN | 12.76 ± 0.64 | 12.30 ± 0.77 |
| U-GAT-IT w/o CAM | 12.85 ± 0.82 | 14.06 ± 0.75 |
| U-GAT-IT w/o G_CAM | 12.33 ± 0.68 | 13.86 ± 0.75 |
| U-GAT-IT w/o D_CAM | 12.49 ± 0.74 | 13.33 ± 0.89 |
Also, as shown in Table 1, we demonstrate the performance of the attention module and AdaLIN on the selfie2anime dataset through an ablation study using the Kernel Inception Distance (KID) (Binkowski et al. (2018)). Our model achieves the lowest KID values. Even when the attention module and AdaLIN are used separately, our models perform better than the others; when used together, the performance improves further.

# 3.3.3 QUALITATIVE EVALUATION

For qualitative evaluation, we have also conducted a perceptual study. 135 participants are shown translated results from different methods, including the proposed method, together with the source image, and are asked to select the best translation to the target domain. We inform the participants only of the name of the target domain, i.e., animation, dog, and zebra. However, some example images of the target domain are provided for the portrait and Van Gogh datasets as minimum information to ensure proper judgments. Table 2 shows that the proposed method achieves significantly higher scores than the other methods, except for photo2vangogh, where it is comparable. In Fig 4, we present the image translation results from each method for performance comparison. U-GAT-IT can generate undistorted images by focusing more on the distinct regions between source

Table 2: Preference score on translated images by user study.
| Model | selfie2anime | horse2zebra | cat2dog | photo2portrait | photo2vangogh |
| --- | --- | --- | --- | --- | --- |
| U-GAT-IT | 73.15 | 73.56 | 58.22 | 30.59 | 48.96 |
| CycleGAN | 20.07 | 23.07 | 6.19 | 26.59 | 27.33 |
| UNIT | 1.48 | 0.85 | 18.63 | 32.11 | 11.93 |
| MUNIT | 3.41 | 1.04 | 14.48 | 8.22 | 2.07 |
| DRIT | 1.89 | 1.48 | 2.48 | 2.48 | 9.70 |
Table 3: Kernel Inception Distance $\times 100 \pm$ std. $\times 100$ for different image-to-image translation models. Lower is better.
| Model | selfie2anime | horse2zebra | cat2dog | photo2portrait | photo2vangogh |
| --- | --- | --- | --- | --- | --- |
| U-GAT-IT | 11.61 ± 0.57 | 7.06 ± 0.8 | 7.07 ± 0.65 | 1.79 ± 0.34 | 4.28 ± 0.33 |
| CycleGAN | 13.08 ± 0.49 | 8.05 ± 0.72 | 8.92 ± 0.69 | 1.84 ± 0.34 | 5.46 ± 0.33 |
| UNIT | 14.71 ± 0.59 | 10.44 ± 0.67 | 8.15 ± 0.48 | 1.20 ± 0.31 | 4.26 ± 0.29 |
| MUNIT | 13.85 ± 0.41 | 11.41 ± 0.83 | 10.13 ± 0.27 | 4.75 ± 0.52 | 13.08 ± 0.34 |
| DRIT | 15.08 ± 0.62 | 9.79 ± 0.62 | 10.92 ± 0.33 | 5.85 ± 0.54 | 12.65 ± 0.35 |
| AGGAN | 14.63 ± 0.55 | 7.58 ± 0.71 | 9.84 ± 0.79 | 2.33 ± 0.36 | 6.95 ± 0.33 |
| CartoonGAN | 15.85 ± 0.69 | - | - | - | - |

| Model | anime2selfie | zebra2horse | dog2cat | portrait2photo | vangogh2photo |
| --- | --- | --- | --- | --- | --- |
| U-GAT-IT | 11.52 ± 0.57 | 7.47 ± 0.71 | 8.15 ± 0.66 | 1.69 ± 0.53 | 5.61 ± 0.32 |
| CycleGAN | 11.84 ± 0.74 | 8.0 ± 0.66 | 9.94 ± 0.36 | 1.82 ± 0.36 | 4.68 ± 0.36 |
| UNIT | 26.32 ± 0.92 | 14.93 ± 0.75 | 9.81 ± 0.34 | 1.42 ± 0.24 | 9.72 ± 0.33 |
| MUNIT | 13.94 ± 0.72 | 16.47 ± 1.04 | 10.39 ± 0.25 | 3.30 ± 0.47 | 9.53 ± 0.35 |
| DRIT | 14.85 ± 0.60 | 10.98 ± 0.55 | 10.86 ± 0.24 | 4.76 ± 0.72 | 7.72 ± 0.34 |
| AGGAN | 12.72 ± 1.03 | 8.80 ± 0.66 | 9.45 ± 0.64 | 2.19 ± 0.40 | 5.85 ± 0.31 |
and target domain by exploiting the attention modules. Note that the regions around the heads of the two zebras and the eyes of the dog are distorted in the results from CycleGAN. Moreover, the translated results using U-GAT-IT are visually superior to those of the other methods while preserving the semantic features of the source domain. It is worth noting that the results from MUNIT and DRIT are quite dissimilar to the source images, since they generate images with random style codes for diversity. Furthermore, it should be emphasized that U-GAT-IT is applied with the same network architecture and hyper-parameters to all five datasets, while the other algorithms are trained with preset networks or hyper-parameters. Through the results of the user study, we show that the combination of our attention module and AdaLIN makes our model more flexible.

# 3.3.4 QUANTITATIVE EVALUATION

For quantitative evaluation, we use the recently proposed KID, which computes the squared Maximum Mean Discrepancy between the feature representations of real and generated images. The feature representations are extracted from the Inception network (Szegedy et al. (2016)). In contrast to the Fréchet Inception Distance (Heusel et al. (2017)), KID has an unbiased estimator, which makes it more reliable, especially when there are fewer test images than the dimensionality of the Inception features. A lower KID indicates more shared visual similarity between real and generated images (Mejjati et al. (2018)); therefore, well-translated images yield small KID values across datasets. Table 3 shows that the proposed method achieves the lowest KID scores except on the style transfer tasks photo2vangogh and photo2portrait, where the gap from the lowest score is small. Also, unlike UNIT and MUNIT, both the source $\rightarrow$ target and target $\rightarrow$ source translations are stable. U-GAT-IT shows even lower KID than the recent attention-based method, AGGAN.
AGGAN yields poor performance on transformations with shape change, such as dog2cat and anime2selfie, unlike U-GAT-IT, whose attention module focuses not on distinguishing background from foreground but on the differences between the two domains. CartoonGAN, as shown in the supplementary materials, has only changed the overall color of the image to an animation style; compared with the selfie, the eyes, the most distinctive characteristic of animation, have not changed at all. Therefore, CartoonGAN has a higher KID.

# 4 CONCLUSIONS

In this paper, we have proposed a method for unsupervised image-to-image translation (U-GAT-IT), with the attention module and AdaLIN, which can produce more visually pleasing results on various datasets with a fixed network architecture and hyper-parameters. Detailed analysis of various experimental results supports our assumption that attention maps obtained by an auxiliary classifier can guide the generator to focus more on the distinct regions between the source and target domains. In addition, we have found that Adaptive Layer-Instance Normalization (AdaLIN) is essential for translating various datasets that contain different amounts of geometry and style changes. Through experiments, we have shown the superiority of the proposed method over the existing state-of-the-art GAN-based models for unsupervised image-to-image translation tasks.

# REFERENCES

Asha Anoosheh, Eirikur Agustsson, Radu Timofte, and Luc Van Gool. Combogan: Unrestrained scalability for image domain translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 783-790, 2018.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214-223, 2017.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
David Berthelot, Tom Schumm, and Luke Metz.
Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
Mikołaj Binkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=r1lUOzWCW.
Yang Chen, Yu-Kun Lai, and Yong-Jin Liu. Cartoongan: Generative adversarial networks for photo cartoonization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9465-9474, 2018.
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789-8797, 2018.
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2): 295-307, 2016.
Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=BJO-BuT1g.
Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.
Xun Huang and Serge Belongie.
Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501-1510, 2017. +Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision, pp. 172-189, 2018. +Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics, 36(4):107, 2017. +Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125-1134, 2017. +Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk99zCeAb. +Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646-1654, 2016. +Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In Proceedings of the International Conference on Machine Learning, pp. 1857-1865, 2017. URL http://proceedings.mlr.press/v70/kim17a.html. +Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6980. +Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In Proceedings of the European Conference on Computer Vision, pp. 577-593. Springer, 2016. +Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. 
Diverse image-to-image translation via disentangled representations. In Proceedings of the European Conference on Computer Vision, pp. 35-51, 2018.
Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In Advances in Neural Information Processing Systems, pp. 5501-5509, 2017a.
Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pp. 386-396, 2017b.
Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pp. 700-708, 2017.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2813-2821. IEEE, 2017.

Youssef Alami Mejjati, Christian Richardt, James Tompkin, Darren Cosker, and Kwang In Kim. Unsupervised attention-guided image-to-image translation. In Advances in Neural Information Processing Systems, pp. 3697-3707, 2018.
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.
Hyeonseob Nam and Hyo-Eun Kim. Batch-instance normalization for adaptively style-invariant neural networks. In Advances in Neural Information Processing Systems, pp. 2563-2572, 2018.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros.
Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536-2544, 2016.
Amélie Royer, Konstantinos Bousmalis, Stephan Gouws, Fred Bertsch, Inbar Mosseri, Forrester Cole, and Kevin Murphy. Xgan: Unsupervised image-to-image translation for many-to-many mappings. arXiv preprint arXiv:1711.05139, 2017.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.
Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sk2Im59ex.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8798-8807, 2018.
Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision, September 2018.
Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2849-2857, 2017.
Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In Proceedings of the European Conference on Computer Vision, pp. 649-666. Springer, 2016.
Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, and Alexei A. Efros. Real-time user-guided image colorization with learned deep priors. ACM Transactions on Graphics, 36(4):119:1-119:11, 2017. doi: 10.1145/3072959.3073703. URL https://doi.org/10.1145/3072959.3073703.
Junbo Jake Zhao, Michael Mathieu, and Yann LeCun.
Energy-based generative adversarial networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=ryh9pmcee.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921-2929. IEEE, 2016.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223-2232, 2017.

# A RELATED WORKS

# A.1 GENERATIVE ADVERSARIAL NETWORKS

Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)) have achieved impressive results on a wide variety of image generation (Arjovsky et al. (2017); Berthelot et al. (2017); Karras et al. (2018); Zhao et al. (2017)), image inpainting (Iizuka et al. (2017)), and image translation (Choi et al. (2018); Huang et al. (2018); Isola et al. (2017); Liu et al. (2017); Wang et al. (2018); Zhu et al. (2017)) tasks. In training, a generator aims to generate realistic images to fool a discriminator, while the discriminator tries to distinguish the generated images from real images. Various multi-stage generative models (Karras et al. (2018); Wang et al. (2018)) and better training objectives (Arjovsky et al. (2017); Berthelot et al. (2017); Mao et al. (2017); Zhao et al. (2017)) have been proposed to generate more realistic images. In this paper, our model uses a GAN to learn the transformation from a source domain to a significantly different target domain, given unpaired training data.

# A.2 IMAGE-TO-IMAGE TRANSLATION

Isola et al. (Isola et al. (2017)) have proposed a conditional GAN-based unified framework for image-to-image translation. A high-resolution version of pix2pix has been proposed by Wang et al. (Wang et al.
(2018)). Recently, there have been various attempts (Huang et al. (2018); Kim et al. (2017); Liu et al. (2017); Taigman et al. (2017); Zhu et al. (2017)) to learn image translation from unpaired datasets. CycleGAN (Zhu et al. (2017)) first proposed a cycle consistency loss to enforce one-to-one mapping. UNIT (Liu et al. (2017)) assumed a shared latent space to tackle unsupervised image translation; however, this approach performs well only when the two domains have similar patterns. MUNIT (Huang et al. (2018)) makes many-to-many mapping possible by decomposing the image into a domain-invariant content code and a style code that captures domain-specific properties. MUNIT synthesizes the separated content and style to generate the final image, where the image quality is improved by using adaptive instance normalization (Huang & Belongie (2017)). With the same purpose as MUNIT, DRIT (Lee et al. (2018)) decomposes images into content and style, so that many-to-many mapping is possible; the difference is that the content space is shared between the two domains using weight sharing and a content discriminator, which is an auxiliary classifier. Nevertheless, the performance of these methods (Huang et al. (2018); Liu et al. (2017); Lee et al. (2018)) is limited to datasets that contain well-aligned images between the source and target domains. In addition, AGGAN (Mejjati et al. (2018)) improved the performance of image translation by using an attention mechanism to distinguish between foreground and background. However, the attention module in AGGAN cannot help to transform the object's shape in the image. Although CartoonGAN (Chen et al. (2018)) shows good performance for animation style translation, it changes only the color, tone, and thickness of lines in the image; therefore it is not suitable for shape changes in the image.

# A.3 CLASS ACTIVATION MAP

Zhou et al. (Zhou et al.
(2016)) have proposed the Class Activation Map (CAM) using global average pooling in a CNN. The CAM for a particular class shows the discriminative image regions used by the CNN to identify that class. In this work, our model intensively changes the discriminative image regions identified by distinguishing the two domains using the CAM approach. However, we use not only global average pooling but also global max pooling to improve the results.

# A.4 NORMALIZATION

Recent neural style transfer research has shown that CNN feature statistics (e.g., the Gram matrix (Gatys et al. (2016)) or mean and variance (Huang & Belongie (2017))) can be used as direct descriptors for image styles. In particular, Instance Normalization (IN) has the effect of removing style variation by directly normalizing the feature statistics of the image and is used more often than Batch Normalization (BN) or Layer Normalization (LN) in style transfer. However, recent studies use Adaptive Instance Normalization (AdaIN) (Huang & Belongie (2017)), Conditional Instance Normalization (CIN) (Dumoulin et al. (2017)), and Batch-Instance Normalization (BIN) (Nam & Kim (2018)) instead of using IN alone. In our work, we propose an Adaptive Layer-Instance Normalization (AdaLIN) function to adaptively select a proper ratio between IN and LN. Through AdaLIN, our attention-guided model can flexibly control the amount of change in shape and texture.

# B IMPLEMENTATION DETAILS

# B.1 NETWORK ARCHITECTURE

The network architectures of U-GAT-IT are shown in Tables 4, 5, and 6. The encoder of the generator is composed of two convolution layers with a stride of two for down-sampling and four residual blocks. The decoder of the generator consists of four residual blocks and two up-sampling convolution layers with a stride of one. Note that we use instance normalization for the encoder and AdaLIN for the decoder, respectively.
In general, LN does not perform better than batch normalization in classification problems (Wu & He (2018)). Since the auxiliary classifier is connected to the encoder of the generator, we use instance normalization (equivalent to batch normalization with a mini-batch size of 1) instead of AdaLIN to increase the accuracy of the auxiliary classifier. Spectral normalization (Miyato et al. (2018)) is used for the discriminator. We employ two different scales of PatchGAN (Isola et al. (2017)) for the discriminator network, which classifies whether local $(70 \times 70)$ and global $(286 \times 286)$ image patches are real or fake. For the activation function, we use ReLU in the generator and leaky-ReLU with a slope of 0.2 in the discriminator. + +# B.2 TRAINING + +All models are trained using Adam (Kingma & Ba (2015)) with $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$. For data augmentation, we flip the images horizontally with a probability of 0.5, resize them to $286 \times 286$, and randomly crop them to $256 \times 256$. The batch size is set to one for all experiments. We train all models with a fixed learning rate of 0.0001 for the first 500,000 iterations and linearly decay it over the following 500,000 iterations, up to 1,000,000 iterations in total. We also use weight decay at a rate of 0.0001. The weights are initialized from a zero-centered normal distribution with a standard deviation of 0.02. + +# C DATASET DETAILS + +selfie2anime The selfie dataset contains 46,836 selfie images annotated with 36 different attributes. We only use photos of females as training and test data. The size of the training dataset is 3,400, and that of the test dataset is 100, with an image size of $256 \times 256$. For the anime dataset, we first retrieved 69,926 animation character images from Anime-Planet. Among those images, 27,023 face images were extracted using an anime-face detector.
After selecting only female character images and manually removing monochrome images, we collected two datasets of female anime face images, with sizes of 3,400 and 100 for training and test data respectively, the same numbers as for the selfie dataset. Finally, all anime face images are resized to $256 \times 256$ by applying a CNN-based image super-resolution algorithm. + +horse2zebra and photo2vangogh These datasets are used in CycleGAN (Zhu et al. (2017)). The training dataset sizes for each class are: 1,067 (horse), 1,334 (zebra), 6,287 (photo), and 400 (vangogh). The test datasets consist of 120 (horse), 140 (zebra), 751 (photo), and 400 (vangogh) images. Note that the training data and the test data of the vangogh class are the same. + +cat2dog and photo2portrait These datasets are used in DRIT (Lee et al. (2018)). The numbers of images for each class are 871 (cat), 1,364 (dog), 6,452 (photo), and 1,811 (portrait). We use 120 (cat), 140 (dog), 751 (photo), and 400 (portrait) randomly selected images as test data, respectively. + +# D ADDITIONAL EXPERIMENTAL RESULTS + +In addition to the results presented in the paper, we show supplementary generation results for the five datasets in Figs. 5, 6, 7, 8, 9, 10, 11, and 12. + +Table 4: The details of the generator architecture. + +
| Part | Input → Output Shape | Layer Information |
| --- | --- | --- |
| Encoder Down-sampling | (h, w, 3) → (h, w, 64) | CONV-(N64, K7, S1, P3), IN, ReLU |
| | (h, w, 64) → (h/2, w/2, 128) | CONV-(N128, K3, S2, P1), IN, ReLU |
| | (h/2, w/2, 128) → (h/4, w/4, 256) | CONV-(N256, K3, S2, P1), IN, ReLU |
| Encoder Bottleneck | (h/4, w/4, 256) → (h/4, w/4, 256) | ResBlock-(N256, K3, S1, P1), IN, ReLU |
| | (h/4, w/4, 256) → (h/4, w/4, 256) | ResBlock-(N256, K3, S1, P1), IN, ReLU |
| | (h/4, w/4, 256) → (h/4, w/4, 256) | ResBlock-(N256, K3, S1, P1), IN, ReLU |
| | (h/4, w/4, 256) → (h/4, w/4, 256) | ResBlock-(N256, K3, S1, P1), IN, ReLU |
| CAM of Generator | (h/4, w/4, 256) → (h/4, w/4, 512) | Global Average & Max Pooling, MLP-(N1), Multiply the weights of MLP |
| | (h/4, w/4, 512) → (h/4, w/4, 256) | CONV-(N256, K1, S1), ReLU |
| γ, β | (h/4, w/4, 256) → (1, 1, 256) | MLP-(N256), ReLU |
| | (1, 1, 256) → (1, 1, 256) | MLP-(N256), ReLU |
| | (1, 1, 256) → (1, 1, 256) | MLP-(N256), ReLU |
| Decoder Bottleneck | (h/4, w/4, 256) → (h/4, w/4, 256) | AdaResBlock-(N256, K3, S1, P1), AdaILN, ReLU |
| | (h/4, w/4, 256) → (h/4, w/4, 256) | AdaResBlock-(N256, K3, S1, P1), AdaILN, ReLU |
| | (h/4, w/4, 256) → (h/4, w/4, 256) | AdaResBlock-(N256, K3, S1, P1), AdaILN, ReLU |
| | (h/4, w/4, 256) → (h/4, w/4, 256) | AdaResBlock-(N256, K3, S1, P1), AdaILN, ReLU |
| Decoder Up-sampling | (h/4, w/4, 256) → (h/2, w/2, 128) | Up-CONV-(N128, K3, S1, P1), LIN, ReLU |
| | (h/2, w/2, 128) → (h, w, 64) | Up-CONV-(N64, K3, S1, P1), LIN, ReLU |
| | (h, w, 64) → (h, w, 3) | CONV-(N3, K7, S1, P3), Tanh |
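The AdaILN operation in the decoder bottleneck of Table 4 (the AdaLIN function of Sec. A.4) blends instance- and layer-normalized features with a learnable ratio and then applies the γ, β predicted by the MLP. A minimal NumPy sketch, assuming an NCHW array layout; the function name and signature are illustrative, not the authors' implementation:

```python
import numpy as np

def ada_lin(x, gamma, beta, rho, eps=1e-5):
    """Sketch of AdaLIN: out = (rho * IN(x) + (1 - rho) * LN(x)) * gamma + beta,
    with rho clipped to [0, 1] as described in Sec. A.4.
    x: (B, C, H, W); gamma, beta: (B, C) from the MLP; rho: scalar ratio."""
    # Instance normalization: per-sample, per-channel statistics
    in_mean = x.mean(axis=(2, 3), keepdims=True)
    in_std = x.std(axis=(2, 3), keepdims=True)
    x_in = (x - in_mean) / (in_std + eps)
    # Layer normalization: per-sample statistics over all channels and positions
    ln_mean = x.mean(axis=(1, 2, 3), keepdims=True)
    ln_std = x.std(axis=(1, 2, 3), keepdims=True)
    x_ln = (x - ln_mean) / (ln_std + eps)
    rho = np.clip(rho, 0.0, 1.0)  # keep the IN/LN ratio in [0, 1]
    out = rho * x_in + (1.0 - rho) * x_ln
    # channel-wise affine transform predicted by the MLP
    return out * gamma[:, :, None, None] + beta[:, :, None, None]
```

With `rho = 1` this reduces to pure instance normalization, and with `rho = 0` to pure layer normalization; training can adjust `rho` to pick the mixture that suits the translation task.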
+ +Table 5: The details of the local discriminator. + +
| Part | Input → Output Shape | Layer Information |
| --- | --- | --- |
| Encoder Down-sampling | (h, w, 3) → (h/2, w/2, 64) | CONV-(N64, K4, S2, P1), SN, Leaky-ReLU |
| | (h/2, w/2, 64) → (h/4, w/4, 128) | CONV-(N128, K4, S2, P1), SN, Leaky-ReLU |
| | (h/4, w/4, 128) → (h/8, w/8, 256) | CONV-(N256, K4, S2, P1), SN, Leaky-ReLU |
| | (h/8, w/8, 256) → (h/8, w/8, 512) | CONV-(N512, K4, S1, P1), SN, Leaky-ReLU |
| CAM of Discriminator | (h/8, w/8, 512) → (h/8, w/8, 1024) | Global Average & Max Pooling, MLP-(N1), Multiply the weights of MLP |
| | (h/8, w/8, 1024) → (h/8, w/8, 512) | CONV-(N512, K1, S1), Leaky-ReLU |
| Classifier | (h/8, w/8, 512) → (h/8, w/8, 1) | CONV-(N1, K4, S1, P1), SN |
+ +Table 6: The details of the global discriminator. + +
| Part | Input → Output Shape | Layer Information |
| --- | --- | --- |
| Encoder Down-sampling | (h, w, 3) → (h/2, w/2, 64) | CONV-(N64, K4, S2, P1), SN, Leaky-ReLU |
| | (h/2, w/2, 64) → (h/4, w/4, 128) | CONV-(N128, K4, S2, P1), SN, Leaky-ReLU |
| | (h/4, w/4, 128) → (h/8, w/8, 256) | CONV-(N256, K4, S2, P1), SN, Leaky-ReLU |
| | (h/8, w/8, 256) → (h/16, w/16, 512) | CONV-(N512, K4, S2, P1), SN, Leaky-ReLU |
| | (h/16, w/16, 512) → (h/32, w/32, 1024) | CONV-(N1024, K4, S2, P1), SN, Leaky-ReLU |
| | (h/32, w/32, 1024) → (h/32, w/32, 2048) | CONV-(N2048, K4, S1, P1), SN, Leaky-ReLU |
| CAM of Discriminator | (h/32, w/32, 2048) → (h/32, w/32, 4096) | Global Average & Max Pooling, MLP-(N1), Multiply the weights of MLP |
| | (h/32, w/32, 4096) → (h/32, w/32, 2048) | CONV-(N2048, K1, S1), Leaky-ReLU |
| Classifier | (h/32, w/32, 2048) → (h/32, w/32, 1) | CONV-(N1, K4, S1, P1), SN |
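The "CAM of Discriminator" rows above (and the analogous generator block in Table 4) re-weight the feature map channel-wise by the auxiliary classifier's MLP-(N1) weights, once for the average-pooling branch and once for the max-pooling branch, and concatenate the two attended maps, doubling the channel count (e.g. 2048 → 4096) before a 1×1 convolution restores it. A minimal NumPy sketch of the re-weighting step, with illustrative names rather than the authors' code:

```python
import numpy as np

def cam_attention(x, w_avg, w_max):
    """Sketch of the CAM blocks in Tables 4-6: each channel's feature map
    is multiplied by the corresponding weight of the auxiliary classifier,
    separately for the global-average-pooling and global-max-pooling
    branches; concatenation doubles the channels. The trailing 1x1 conv
    that restores the channel count is omitted here.
    x: (B, C, H, W); w_avg, w_max: (C,) classifier weights per branch."""
    a_avg = x * w_avg[None, :, None, None]  # channel-wise re-weighting (avg branch)
    a_max = x * w_max[None, :, None, None]  # channel-wise re-weighting (max branch)
    return np.concatenate([a_avg, a_max], axis=1)  # (B, 2C, H, W)
```

This channel-wise product is what turns the classifier's per-channel importance into a spatial attention map over the features.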
+ +![](images/f895afd8118c424d4d4f2ca00519e08217161339e3b71c4ab09964a210707d20.jpg) +Figure 5: Visual comparisons of the selfie2anime with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)), (k) CartoonGAN (Chen et al. (2018)). + +![](images/04cd17ee0e9481d8e0e80b38e4127a08fc81ddd0bf5cddbdbdfddeaf6a043936.jpg) +Figure 6: Visual comparisons of the anime2selfie with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)). + +![](images/936333675d953d62aa94481a3e992d636dc9d58be5ee6895545e3d40de6facba.jpg) +Figure 7: Visual comparisons of the horse2zebra with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)). + +![](images/a53f604cf79e9bcfcad3d9779260532c4709d5f8761c516243d0c88661c5b453.jpg) +Figure 8: Visual comparisons of the zebra2horse with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)).
+ +![](images/f958a5a6dbcbcf6caefa936d8d1aea9402b398659138b5360f6788fe6c92aa1b.jpg) +Figure 9: Visual comparisons of the cat2dog with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)). + +![](images/3101cb5916e660efa1a04fd65dfd4be1ad15d37f6523ac7c8397f57955341ebc.jpg) +Figure 10: Visual comparisons of the dog2cat with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)). + +![](images/9a2c51905dcd46f2dcf754de761df4a6b6cd397b5a24cbd83bc5be55b6aa168a.jpg) +Figure 11: Visual comparisons of the photo2vangogh with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, respectively, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)). + +![](images/f026cf23ce765fa92b1bac149066ad4cda533307ab5c043e4cc8d58c23c4cb11.jpg) +Figure 12: Visual comparisons of the photo2portrait with attention feature maps. (a) Source images, (b) Attention map of the generator, (c-d) Local and global attention maps of the discriminators, respectively, (e) Our results, (f) CycleGAN (Zhu et al. (2017)), (g) UNIT (Liu et al. (2017)), (h) MUNIT (Huang et al. (2018)), (i) DRIT (Lee et al. (2018)), (j) AGGAN (Mejjati et al. (2018)).
\ No newline at end of file diff --git a/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/images.zip b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..640b121924852516e3bf64bef1d6f1797fa9984c --- /dev/null +++ b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ba7925e4e8ec203fb942f749127dc68a7546937601622c84113bd4bf7150c6c +size 2268594 diff --git a/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/layout.json b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1db89fc7727b263fd6426a3302762457b75613f1 --- /dev/null +++ b/ugatitunsupervisedgenerativeattentionalnetworkswithadaptivelayerinstancenormalizationforimagetoimagetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bbca419a3c6f9af52e29d4026726e3f585d8e1afbb4a1a5296378f992c56361 +size 409422 diff --git a/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_content_list.json b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1e5a933c09202329f2444a6c304afba7c539c51a --- /dev/null +++ b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd093ca6fc602315c77aa5029255b7d19fd1d4f3e6d409593f022c7ee5fdfafc +size 
97792 diff --git a/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_model.json b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f5d8f8ee58f7e51b626407fe0705d74a3cd46d0a --- /dev/null +++ b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fec7f069de240561f2bfb4737519b206ba99c1efd985418f9a1a605aef93d16e +size 116523 diff --git a/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_origin.pdf b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..56c2dc58dac58436e0a62983be6b4e736d4c97e3 --- /dev/null +++ b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/e2750711-3281-40ec-bf3f-6d425066a205_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:984d66797683fe5558094dfdb011d367a88f1ac92fc1f5a740d6f377d298cc3d +size 456849 diff --git a/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/full.md b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0ce76b476944aca24855d9e59dbfa1c2e41b06ec --- /dev/null +++ b/uncertaintyguidedcontinuallearningwithbayesianneuralnetworks/full.md @@ -0,0 +1,344 @@ +# UNCERTAINTY-GUIDED CONTINUAL LEARNING WITH BAYESIAN NEURAL NETWORKS + +Sayna Ebrahimi* +UC Berkeley + +Mohamed Elhoseiny† +KAUST, Stanford University + +Trevor Darrell +UC Berkeley + +Marcus Rohrbach +Facebook AI Research + +# ABSTRACT + +Continual learning aims to learn new tasks without forgetting previously learned ones. 
This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' importance. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the network weights. Uncertainty is a natural way to identify what to remember and what to change as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e., it does not presume knowledge of which task a sample belongs to. + +# 1 INTRODUCTION + +Humans can easily accumulate and maintain knowledge gained from previously observed tasks, and continuously learn to solve new problems or tasks. Artificial learning systems typically forget prior tasks when they cannot access all training data at once but are presented with task data in sequence.
+ +Given a network of limited capacity, one way to address this problem is to identify the importance of each parameter and penalize further changes to those parameters that were deemed to be important for the previous tasks (Kirkpatrick et al., 2017; Aljundi et al., 2018; Zenke et al., 2017). An alternative is to freeze the most important parameters and allow future tasks to only adapt the remaining parameters to new tasks (Mallya & Lazebnik, 2018). Such models rely on the explicit parametrization of importance. We propose here implicit uncertainty-guided importance representation. + +Bayesian approaches to neural networks (MacKay, 1992b) can potentially avoid some of the pitfalls of explicit parameterization of importance in regular neural networks. Bayesian techniques, naturally account for uncertainty in parameters estimates. These networks represent each parameter with a distribution defined by a mean and variance over possible values drawn from a shared latent probability distribution (Blundell et al., 2015). Variational inference can approximate posterior distributions using Monte Carlo sampling for gradient estimation. These networks act like ensemble methods in that they reduce the prediction variance but only use twice the number of parameters present in a regular neural network. We propose to use the predicted mean and variance of the latent distributions to characterize the importance of each parameter. We perform continual learning with + +![](images/05280e8e3738b94952055383def8582b10a940a7143d9ca6dacde9070ee7d626.jpg) +Figure 1: Illustration of the evolution of weight distributions – uncertain weights adapt more quickly – when learning two tasks using UCB. (a) weight parameter initialized by distributions initialized with mean and variance values randomly sampled from $\mathcal{N}(0,0.1)$ . 
(b) posterior distribution after learning task one; while $\theta_{1}$ and $\theta_{2}$ exhibit lower uncertainties after learning the first task, $\theta_{3}$ , $\theta_{4}$ , and $\theta_{5}$ have larger uncertainties, making them available to learn more tasks. (c) a second task is learned using higher learning rates for the previously uncertain parameters ( $\theta_{3}$ , $\theta_{4}$ , and $\theta_{5}$ ) while learning rates for $\theta_{1}$ and $\theta_{2}$ are reduced. The size of the arrows indicates the magnitude of the change of the distribution mean upon gradient update. + +Bayesian neural networks by controlling the learning rate of each parameter as a function of its uncertainty. Figure 1 illustrates how posterior distributions evolve for certain and uncertain weight distributions while learning two consecutive tasks. Intuitively, the more uncertain a parameter is, the more learnable it is, and therefore larger gradient steps can be taken for it to learn the current task. As a hard version of this regularization technique, we also show that pruning, i.e., preventing the most important model parameters from any change and learning new tasks with the remaining parameters, can also be integrated into UCB. We refer to this method as UCB-P.
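The uncertainty-guided learning-rate rule just described (detailed in Sec. 4.1, where $\Omega_{\mu} = 1/\sigma$ and $\alpha_{\mu} \leftarrow \alpha_{\mu} / \Omega_{\mu}$) can be sketched in a few lines of NumPy; the function names are illustrative, and $\sigma$ follows the Bayes-by-Backprop parametrization $\sigma = \log(1 + \exp(\rho))$ from Sec. 3.1:

```python
import numpy as np

def sigma_from_rho(rho):
    # Bayes-by-Backprop parametrization: sigma = log(1 + exp(rho)) > 0
    return np.log1p(np.exp(rho))

def ucb_mu_learning_rate(rho, base_lr):
    """Sketch of UCB's learning-rate regularization (Sec. 4.1):
    importance Omega_mu = 1/sigma, so alpha_mu <- alpha_mu / Omega_mu,
    i.e. the step size on each mean parameter is proportional to its
    uncertainty sigma. Certain parameters (small sigma) barely move on
    the next task; uncertain ones remain adaptable. The learning rate
    for rho itself is left unscaled (Omega_rho = 1)."""
    sigma = sigma_from_rho(rho)      # per-parameter uncertainty
    omega_mu = 1.0 / sigma           # importance: low uncertainty => important
    return base_lr / omega_mu        # equals base_lr * sigma

# Three weights with increasing uncertainty after finishing a task:
rho = np.array([-5.0, -1.0, 1.0])
lrs = ucb_mu_learning_rate(rho, base_lr=0.01)
```

In a full training loop this scaling would be applied to each mean parameter's learning rate once a task finishes, before training on the next task begins.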
Fourth, in contrast to most prior work, our approach does not rely on knowledge about task boundaries at inference time, which humans do not need and which might not always be available. We show in Sec. 6 that our approach naturally supports this scenario and does not require task information at test time, sometimes also referred to as a "single head" scenario for all tasks. We refer to the evaluation metric of a "single head" model without task information at test time as "generalized accuracy". Our code is available at https://github.com/SaynaEbrahimi/UCB. + +# 2 RELATED WORK + +Conceptually, approaches to continual learning can be divided into the following categories: dynamic architectural methods, memory-based methods, and regularization methods. + +Dynamic architectural methods: In this setting, the architecture grows while keeping past knowledge fixed and storing new knowledge in different forms such as additional layers, nodes, or modules. In this approach, the objective function remains fixed whereas the model capacity grows – often exponentially – with the number of tasks. Progressive networks (Rusu et al., 2016; Schwarz et al., 2018) was one of the earliest works in this direction and was successfully applied to reinforcement learning problems; the base architecture was duplicated and lateral connections added in response to new tasks. Dynamically Expandable Network (DEN) (Yoon et al., 2018) also expands its network by selecting drifting units and retraining them on new tasks. In contrast to our method, these approaches require the architecture to grow with each new task. + +Memory-based methods: In this regime, previous information is partially stored to be used later as a form of rehearsal (Robins, 1995). Gradient episodic memory (GEM) (Lopez-Paz et al., 2017) uses this idea to store data at the end of each episode, which is used later to prevent gradient updates from deviating from their previous values.
GEM also allows for positive backward knowledge transfer, i.e., an improvement on previously learned tasks, and it was the first method capable of learning using a single training example. Recent approaches in this category have mitigated forgetting by using external data combined with a distillation loss and/or confidence-based sampling strategies to select the most representative samples (Castro et al., 2018; Wu et al., 2019; Lee et al., 2019). + +Regularization methods: In these approaches, significant changes to the representation learned for previous tasks are prevented. This can be performed by regularizing the objective function or enforced directly on the weight parameters. Typically, an importance measure is engineered to represent the importance of each parameter. Inspired by Bayesian learning, in the elastic weight consolidation (EWC) method (Kirkpatrick et al., 2017) important parameters are those with the highest values in terms of the Fisher information matrix. In Synaptic Intelligence (SI) (Zenke et al., 2017) this parameter importance notion is engineered to correlate with the loss function: parameters that contribute more to the loss are more important. Similar to SI, Memory-aware Synapses (MAS) (Aljundi et al., 2018) proposed an online way of computing importance adaptive to the test set using the change in the model outputs w.r.t. the inputs. While all the above algorithms are task-dependent, in parallel development to this work, (Aljundi et al., 2019) has recently investigated task-free continual learning by building upon MAS and using a protocol to update the weights instead of waiting until the tasks are finished. PackNet (Mallya & Lazebnik, 2018) used iterative pruning to fully restrict gradient updates on important weights via binary masks. This method requires knowing which task is being tested in order to use the appropriate mask. PackNet also ranks weight importance by magnitude, which is not guaranteed to be a proper indicator of importance.
HAT (Serra et al., 2018) identifies important neurons by learning an attention vector over the task embedding to control gradient propagation. It maintains the information learned on previous tasks using an almost-binary mask per previous task. + +Bayesian approaches: Bayesian approaches to learning neural networks have been studied for a few decades (MacKay, 1992b;a). Several approaches have been proposed for Bayesian neural networks, based on, e.g., the Laplace approximation (MacKay, 1992a), Hamiltonian Monte Carlo (Neal, 2012), variational inference (Hinton & Van Camp, 1993; Graves, 2011), and probabilistic backpropagation (Hernandez-Lobato & Adams, 2015). Variational continual learning (Nguyen et al., 2018) uses Bayesian inference to perform continual learning, where the new posterior distribution is simply obtained by multiplying the previous posterior by the likelihood of the dataset belonging to the new task. They also showed that by using a core-set, a small representative set of data from previous tasks, VCL can experience less forgetting. In contrast, we rely on Bayesian neural networks to use their predictive uncertainty to perform continual learning. Moreover, we do not use episodic memory or any other way to access or store previous data in our approach. + +Natural gradient descent methods: A fast natural gradient descent method for variational inference was introduced in (Khan & Nielsen, 2018), in which the Fisher Information matrix is approximated using the generalized Gauss-Newton method. In contrast, in our work, we use classic gradient descent. Although second-order optimization algorithms can be more accurate than first-order methods, they add considerable computational cost. Tseran et al. (2018); Chen et al. (2019) both investigate the effect of natural gradient descent methods as an alternative to the classic gradient descent used in VCL and EWC methods.
GNG (Chen et al., 2019) uses Gaussian natural gradients in the Adam optimizer (Kingma & Ba, 2014) within the framework of VCL because, as opposed to conventional gradient methods that operate in Euclidean space, natural gradients account for how small parameter changes translate into changes of the distributions in Riemannian space. Similar to VCL, they obtained their best performance by adding a coreset of previous examples. Tseran et al. (2018) introduce two modifications to VCL called Natural-VCL (N-VCL) and VCL-Vadam. N-VCL (Tseran et al., 2018) uses a Gauss-Newton approximation introduced by (Schraudolph, 2002; Graves, 2011) to estimate the VCL objective function and the natural gradient method proposed in (Khan et al., 2018) to exploit the Riemannian geometry of the variational posterior by scaling the gradient with an adaptive learning rate equal to $\sigma^{-2}$, obtained by approximating the Fisher Information matrix in an online fashion. VCL-Vadam (Tseran et al., 2018) is a simpler version of N-VCL that trades off accuracy for simplicity; it uses Vadam (Khan et al., 2018) to update the gradients by perturbing the weights with Gaussian noise using a reparameterization trick and scaling by $\sigma^{-1}$ instead of its square. N-VCL/VCL-Vadam both use variational inference to adapt the learning rate within the Adam optimizer at every time step, whereas in our method below, gradient descent is used with a constant learning rate during each task, and the learning rate is scaled with uncertainty only after finishing a task. We show extensive comparisons with state-of-the-art results on short and relatively long sequences of vision datasets with Bayesian convolutional neural networks, whereas VCL-Vadam relies only on
+ +# 3 BACKGROUND: VARIATIONAL BAYES-BY-BACKPROP + +In this section, we review the Bayes-by-Backprop (BBB) framework which was introduced by (Blundell et al., 2015); to learn a probability distribution over network parameters. (Blundell et al., 2015) showed a back-propagation-compatible algorithm which acts as a regularizer and yields comparable performance to dropout on the MNIST dataset. In Bayesian models, latent variables are drawn from a prior density $p(\mathbf{w})$ which are related to the observations through the likelihood $p(\mathbf{x}|\mathbf{w})$ . During inference, the posterior distribution $p(\mathbf{w}|\mathbf{x})$ is computed conditioned on the given input data. However, in practice, this probability distribution is intractable and is often estimated through approximate inference. Markov Chain Monte Carlo (MCMC) sampling (Hastings, 1970) has been widely used and explored for this purpose, see (Robert & Casella, 2013) for different methods under this category. However, MCMC algorithms, despite providing guarantees for finding asymptotically exact samples from the target distribution, are not suitable for large datasets and/or large models as they are bounded by speed and scalability issues. Alternatively, variational inference provides a faster solution to the same problem in which the posterior is approximated using optimization rather than being sampled from a chain (Hinton & Van Camp, 1993). Variational inference methods always take advantage of fast optimization techniques such as stochastic methods or distributed methods, which allow them to explore data models quickly. See (Blei et al., 2017) for a complete review of the theory and (Shridhar et al., 2018) for more discussion on how to use Bayes by Backprop (BBB) in convolutional neural networks. + +# 3.1 BAYES BY BACKPROP (BBB) + +Let $\mathbf{x} \in \mathbb{R}^n$ be a set of observed variables and $\mathbf{w}$ be a set of latent variables. 
A neural network, viewed as a probabilistic model $P(\mathbf{y}|\mathbf{x},\mathbf{w})$ with weight parameters $\mathbf{w}$ and given a set of training examples $\mathcal{D} = (\mathbf{x},\mathbf{y})$ , outputs $\mathbf{y}$ belonging to a set of classes. Variational inference aims to calculate this conditional probability distribution over the latent variables by finding the closest proxy to the exact posterior through solving an optimization problem. + +We first assume a family of probability densities over the latent variables $\mathbf{w}$ parametrized by $\theta$ , i.e., $q(\mathbf{w}|\theta)$ . We then find the closest member of this family to the true conditional probability of interest $P(\mathbf{w}|\mathcal{D})$ by minimizing the Kullback-Leibler (KL) divergence between $q$ and $P$ , which is equivalent to minimizing the variational free energy or maximizing the expected lower bound: + +$$ +\theta^{*} = \arg\min_{\theta} \mathrm{KL}(q(\mathbf{w}|\theta) \| P(\mathbf{w}|\mathcal{D})) \tag{1} +$$ + +The objective function can be written as: + +$$ +\mathcal{L}_{BBB}(\theta, \mathcal{D}) = \mathrm{KL}[q(\mathbf{w}|\theta) \| P(\mathbf{w})] - \mathbb{E}_{q(\mathbf{w}|\theta)}[\log(P(\mathcal{D}|\mathbf{w}))] \tag{2} +$$ + +Eq. 2 can be approximated using $N$ Monte Carlo samples $\mathbf{w}_i$ from the variational posterior (Blundell et al., 2015): + +$$ +\mathcal{L}_{BBB}(\theta, \mathcal{D}) \approx \sum_{i=1}^{N} \log q(\mathbf{w}_{i}|\theta) - \log P(\mathbf{w}_{i}) - \log(P(\mathcal{D} \mid \mathbf{w}_{i})) \tag{3} +$$ + +We assume $q(\mathbf{w}|\theta)$ to have a Gaussian pdf with diagonal covariance, parametrized by $\theta = (\mu, \rho)$ .
A weight sample from the variational posterior can be obtained by sampling from a unit Gaussian and reparametrizing it as $\mathbf{w} = \mu + \sigma \circ \epsilon$ , where $\epsilon$ is noise drawn from a unit Gaussian and $\circ$ is pointwise multiplication. The standard deviation is parametrized as $\sigma = \log(1 + \exp(\rho))$ and is thus always positive. For the prior, as suggested by Blundell et al. (2015), a scale mixture of two Gaussian pdfs is chosen, both zero-centered but with different variances $\sigma_1^2$ and $\sigma_2^2$ . The uncertainty obtained for every parameter has been successfully used in model compression (Han et al., 2015) and uncertainty-based exploration in reinforcement learning (Blundell et al., 2015). In this work we propose to use this framework to learn sequential tasks without forgetting, using per-weight uncertainties. + +# 4 UNCERTAINTY-GUIDED CONTINUAL LEARNING IN BAYESIAN NEURAL NETWORKS + +In this section, we introduce the Uncertainty-guided Continual learning approach with Bayesian neural networks (UCB), which exploits the estimated uncertainty of the parameters' posterior distribution to regulate the change in "important" parameters, either in a soft way (Section 4.1) or by setting a hard threshold (Section 4.2). + +# 4.1 UCB WITH LEARNING RATE REGULARIZATION + +A common strategy to perform continual learning is to reduce forgetting by regularizing further changes in the model representation based on the parameters' importance. In UCB the regularization is performed through the learning rate, such that the learning rate of each parameter, and hence its gradient update, becomes a function of its importance. As shown in the following equations, in particular, we scale the learning rates of $\mu$ and $\rho$ for each parameter distribution inversely proportionally to its importance $\Omega$ , to reduce changes in important parameters while allowing less important parameters to alter more in favor of learning new tasks.
$$
\alpha_{\mu} \leftarrow \alpha_{\mu} / \Omega_{\mu} \tag{4}
$$

$$
\alpha_{\rho} \leftarrow \alpha_{\rho} / \Omega_{\rho} \tag{5}
$$

The core idea of this work is to base the definition of importance on the well-defined uncertainty of the parameter distributions in Bayesian neural networks, i.e., to set the importance inversely proportional to the standard deviation $\sigma$, which represents the parameter uncertainty in the Bayesian neural network:

$$
\Omega \propto 1 / \sigma \tag{6}
$$

We explore different options for setting $\Omega$ in our ablation study, presented in Table 1 and Section A.2 of the appendix. We empirically found that $\Omega_{\mu} = 1 / \sigma$ together with not adapting the learning rate for $\rho$ (i.e., $\Omega_{\rho} = 1$) yields the highest accuracy and the least forgetting.

The key benefit of UCB with the learning rate as the regularizer is that it requires neither additional memory, as pruning techniques do, nor tracking of the change in parameters with respect to previously learned tasks, as common weight regularization methods need.

More importantly, the method does not need to be aware of task switching, as it only adjusts the learning rates of the posterior means based on their current uncertainty. The complete algorithm for UCB is shown in Algorithm 1, with the parameter update function given in Algorithm 2.

# 4.2 UCB USING WEIGHT PRUNING (UCB-P)

In this section, we introduce a variant of our method, UCB-P, which is related to recent efforts in weight pruning in the context of reducing inference computation and network compression (Liu et al., 2017; Molchanov et al., 2016). More specifically, weight pruning has recently been used in continual learning (Mallya & Lazebnik, 2018), where the goal is to keep learning multiple tasks using a single network's capacity.
Mallya & Lazebnik (2018) accomplished this by freeing up parameters deemed unimportant to the current task according to their magnitude. Forgetting is prevented in pruning by saving a task-specific binary mask of important vs. unimportant parameters. Here, we adapt pruning to Bayesian neural networks and propose a different criterion for measuring importance: the statistically grounded uncertainty defined in Bayesian neural networks.

Unlike regular deep neural networks, in a BBB model weight parameters are represented by probability distributions parametrized by their mean and standard deviation. Similar to (Blundell et al., 2015), in order to take into account both mean and standard deviation, we use the signal-to-noise ratio (SNR) for each parameter, defined as

$$
\Omega = \mathrm{SNR} = |\mu| / \sigma \tag{7}
$$

Algorithm 1 Uncertainty-guided Continual Learning with Bayesian Neural Networks (UCB)
1: Require: training data for all tasks $\mathcal{D} = (\mathbf{x},\mathbf{y})$; $\mu$ (mean of posterior); $\rho$; $\sigma_{1}$ and $\sigma_{2}$ (stds of the scale mixture Gaussian prior); $\pi$ (weighting factor for the prior); $N$ (number of samples in a mini-batch); $M$ (number of mini-batches per epoch); initial learning rate $\alpha_0$
2: $\alpha_{\mu} = \alpha_{\rho} = \alpha_{0}$
3: for every task do
4: repeat
5: $\epsilon \sim \mathcal{N}(0,I)$
6: $\sigma = \log (1 + \exp (\rho))$ ▷ Ensures $\sigma$ is always positive
7: $\mathbf{w} = \mu +\sigma \circ \epsilon$ ▷ $\mathbf{w} = \{\mathbf{w}_1,\dots ,\mathbf{w}_i,\dots ,\mathbf{w}_N\}$ posterior samples of weights
8: $l_{1} = \sum_{i = 1}^{N}\log \mathcal{N}(\mathbf{w}_{i}|\mu ,\sigma^{2})$ ▷ $l_{1}\coloneqq$ log-posterior
9: $l_{2} = \sum_{i = 1}^{N}\log \left(\pi \mathcal{N}(\mathbf{w}_{i}\mid 0,\sigma_{1}^{2}) + (1 - \pi)\mathcal{N}(\mathbf{w}_{i}\mid 0,\sigma_{2}^{2})\right)$ ▷ $l_{2}\coloneqq$ log-prior
10: $l_{3} = \sum_{i = 1}^{N}\log (p(\mathcal{D}|\mathbf{w}_{i}))$ ▷ $l_{3}\coloneqq$ log-likelihood of data
11: $\mathcal{L}_{BBB} = \frac{1}{M} (l_1 - l_2 - l_3)$
12: $\mu \gets \mu -\alpha_{\mu}\nabla_{\mu} \mathcal{L}_{BBB}$
13: $\rho \gets \rho -\alpha_{\rho}\nabla_{\rho} \mathcal{L}_{BBB}$
14: until loss plateaus
15: $\alpha_{\mu},\alpha_{\rho}\gets$ LearningRateUpdate$(\alpha_{\mu},\alpha_{\rho},\sigma,\mu)$ ▷ See Algorithm 2 for UCB and Algorithm 3 for UCB-P

Algorithm 2 LearningRateUpdate in UCB
1: function LearningRateUpdate$(\alpha_{\mu}, \alpha_{\rho}, \sigma, \mu)$ ▷ $\mu$ is unused here; it is needed by Algorithm 3
2: for each parameter do
3: $\Omega_{\mu} \gets 1 / \sigma$
4: $\Omega_{\rho} \gets 1$
5: $\alpha_{\mu} \gets \alpha_{\mu} / \Omega_{\mu}$
6: $\alpha_{\rho} \gets \alpha_{\rho} / \Omega_{\rho}$
7: end for
8: end function

The SNR is a measure commonly used in signal processing to distinguish "useful" information from unwanted noise contained in a signal. In the context of neural models, the SNR can be thought of as indicative of parameter importance: the higher the SNR, the more effective or important the parameter is to the model predictions for a given task.

UCB-P, as shown in Algorithms 1 and 3, proceeds as follows: for every layer, convolutional or fully connected, the parameters are ordered by their SNR value and those with the lowest importance are pruned (set to zero). The pruned parameters are marked using a binary mask so that they can be reused later when learning new tasks, whereas the important parameters remain fixed throughout training on future tasks. Once a task is learned, the associated binary mask is saved and used during inference to recover the key parameters, and hence the exact performance, for the desired task.

The overhead memory per parameter for encoding the mask, as well as saving it on disk, is as follows.
Assuming we have $n$ tasks to learn using a single network, the total number of bits required to encode an accumulated mask for a parameter is at most $\log_2 n$, even for a parameter deemed important from task 1 onward that remains encoded in the mask throughout.

# 5 RESULTS

# 5.1 EXPERIMENTAL SETUP

Datasets: We evaluate our approach in two common scenarios for continual learning: 1) class-incremental learning of a single dataset or of two randomly alternating datasets, where each task covers only a subset of the classes in a dataset, and 2) continual learning of multiple datasets, where each task is a dataset. We use Split MNIST with 5 tasks (5-Split MNIST), similar to (Nguyen et al., 2018; Chen et al., 2019; Tseran et al., 2018), and Permuted MNIST (Srivastava et al., 2013) for class-incremental learning, with experimental settings similar to those used in (Serra et al., 2018; Tseran et al., 2018). Furthermore, to gain a better understanding of our method, we evaluate our approach on continually learning a sequence of 8 datasets with different distributions, using the identical sequence as in (Serra et al., 2018), which includes FaceScrub (Ng & Winkler, 2014), MNIST, CIFAR100, NotMNIST (Bulatov, 2011), SVHN (Netzer et al., 2011), CIFAR10, TrafficSigns (Stallkamp et al., 2011), and FashionMNIST (Xiao et al., 2017). Details of each dataset are summarized in Table 4 in the appendix. No data augmentation of any kind has been used in our analysis.

Baselines: Within the Bayesian framework, we compare to three models which do not incorporate the importance of parameters, namely fine-tuning, feature extraction, and joint training. In fine-tuning (BBB-FT), training continues upon arrival of new tasks without any forgetting-avoidance strategy. Feature extraction, denoted BBB-FE, refers to freezing all layers in the network after training the first task and training only the last layer for the remaining tasks.
In joint training (BBB-JT), we learn all tasks jointly in a multitask learning fashion; this serves as the upper bound for average accuracy on all tasks, as it does not adhere to the continual learning scenario. We also run the counterparts of FT, FE, and JT using ordinary neural networks and denote them ORD-FT, ORD-FE, and ORD-JT. From prior work, we compare with state-of-the-art approaches including Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), Incremental Moment Matching (IMM) (Lee et al., 2017), Learning Without Forgetting (LWF) (Li & Hoiem, 2016), Less-Forgetting Learning (LFL) (Jung et al., 2016), PathNet (Fernando et al., 2017), Progressive neural networks (PNNs) (Rusu et al., 2016), and Hard Attention Mask (HAT) (Serra et al., 2018), using implementations provided by (Serra et al., 2018). On Permuted MNIST, results for SI (Zenke et al., 2017) are reported from (Serra et al., 2018). On Split and Permuted MNIST, results for VCL (Nguyen et al., 2018) are obtained using the originally provided code, whereas for VCL-GNG (Chen et al., 2019) and VCL-Vadam (Tseran et al., 2018) results are reported from the original work without re-implementation. Because our method falls into the regularization-based regime, we only compare against baselines that do not benefit from episodic or coreset memory.

Hyperparameter tuning: Unlike commonly used tuning techniques which use a validation set composed of all classes in the dataset, we rely only on the first two tasks and their validation sets, similar to the setup in (Chaudhry et al., 2019). In all our experiments we use a 0.15 validation split on the first two tasks. After tuning, training starts from the beginning of the sequence. Our scheme differs from (Chaudhry et al., 2019), where the models are trained on the first (e.g., three) tasks for validation, training is then restarted for the remaining tasks, and performance is reported only on the remaining tasks.
Training details: It is important to note that in all our experiments, no pre-trained model is used. We used stochastic gradient descent with a batch size of 64 and a learning rate of 0.01, decaying it by a factor of 0.3 once the loss plateaued. Dataset splits and batch shuffling are identical across all UCB experiments and all baselines.

Pruning procedure and mask size: Once a task is learned, we compute the performance drop for a set of arbitrary pruning percentages relative to the maximum training accuracy achieved with no pruning. The pruning portion is then chosen using a threshold beyond which the performance drop is not accepted. Mask size is chosen without knowledge of how many tasks are to be learned in the future. Upon learning each task, we swept a uniform range of pruning ratios (50-100%) and picked the ratio that resulted in at most 1%, 2%, and 3% forgetting for the MNIST, CIFAR, and 8-task experiments, respectively. We did not tune this parameter because in our hyperparameter tuning we only assume access to validation sets of the first two tasks.

Parameter regularization and importance measurement: Table 1 ablates different ways to compute the importance $\Omega$ of a parameter in Eqs. 4 and 5. As shown in Table 1, the configuration that yields the highest accuracy and the least forgetting (maximum BWT) occurs when the learning rate regularization is performed only on $\mu$ of the posteriors, using $\Omega_{\mu} = 1 / \sigma$ as the importance and $\Omega_{\rho} = 1$.

Performance measurement: Let $n$ be the total number of tasks. Once all are learned, we evaluate our model on all $n$ tasks. ACC is the average test classification accuracy across all tasks. To measure forgetting we report backward transfer, BWT, which indicates how much learning new tasks has influenced the performance on previous tasks. While BWT $< 0$ directly reports catastrophic forgetting, BWT $> 0$ indicates that learning new tasks has helped the preceding tasks.
Formally, BWT and ACC are defined as:

$$
\mathrm{BWT} = \frac{1}{n} \sum_{i = 1}^{n} \left( R_{i, n} - R_{i, i} \right), \quad \mathrm{ACC} = \frac{1}{n} \sum_{i = 1}^{n} R_{i, n} \tag{8}
$$

Table 1: Variants of learning rate regularization and importance measurement on 2-Split MNIST
| Method | $\mu$ | $\rho$ | Importance $\Omega$ | BWT (%) | ACC (%) |
|--------|:-----:|:------:|:-------------------:|--------:|--------:|
| UCB | x | - | $1/\sigma$ | 0.00 | 99.2 |
| UCB | - | x | $1/\sigma$ | -0.04 | 98.7 |
| UCB | x | x | $1/\sigma$ | -0.02 | 98.0 |
| UCB | x | - | $\vert\mu\vert/\sigma$ | -0.03 | 98.4 |
| UCB | - | x | $\vert\mu\vert/\sigma$ | -0.52 | 98.7 |
| UCB | x | x | $\vert\mu\vert/\sigma$ | -0.32 | 98.8 |
| UCB-P | x | x | $\vert\mu\vert/\sigma$ | -0.01 | 99.0 |
| UCB-P | x | x | $1/\sigma$ | -0.01 | 98.9 |
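As a concrete illustration, the importance variants compared in Table 1 can be computed directly from the variational parameters $(\mu, \rho)$. Below is a minimal NumPy sketch; the function and variable names are ours, not from a released implementation:

```python
import numpy as np

def sigma_from_rho(rho):
    """Softplus parameterization sigma = log(1 + exp(rho)); always positive."""
    return np.log1p(np.exp(rho))

def importance(mu, rho, kind="1/sigma"):
    """Per-parameter importance Omega used to scale learning rates.

    kind="1/sigma"   : uncertainty-based importance (UCB default for mu)
    kind="|mu|/sigma": signal-to-noise ratio (used by UCB-P for pruning)
    """
    sigma = sigma_from_rho(rho)
    if kind == "1/sigma":
        return 1.0 / sigma
    elif kind == "|mu|/sigma":
        return np.abs(mu) / sigma
    raise ValueError(kind)

# Low-uncertainty parameters (small sigma) get a large Omega and hence a
# small effective learning rate alpha0 / Omega = alpha0 * sigma.
mu = np.array([0.5, -0.1, 2.0])
rho = np.array([-3.0, 0.0, -1.0])
alpha0 = 0.01
effective_lr = alpha0 / importance(mu, rho, kind="1/sigma")
```

Note that with $\Omega_\mu = 1/\sigma$, the effective learning rate is simply $\alpha_0 \sigma$: the most uncertain parameters move the most.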
+ +Table 2: Continually learning on different datasets. BWT and ACC in %. (*) denotes that methods do not adhere to the continual learning setup: BBB-JT and ORD-JT serve as the upper bound for ACC for BBB/ORD networks, respectively. ‡ denotes results reported by (Serra et al., 2018). † denotes the result reported from original work. BWT was not reported in ‡ and †. All others results are (re)produced by us and are averaged over 3 runs with standard deviations given in Section A.3 of the appendix. +(a) 5-Split MNIST, 5 tasks. + +
| Method | BWT | ACC |
|--------|----:|----:|
| VCL-Vadam† | - | 99.17 |
| VCL-GNG† | - | 96.50 |
| VCL | -0.56 | 98.20 |
| IMM | -11.20 | 88.54 |
| EWC | -4.20 | 95.78 |
| HAT | 0.00 | 99.59 |
| ORD-FT | -9.18 | 90.60 |
| ORD-FE | 0.00 | 98.54 |
| BBB-FT | -6.45 | 93.42 |
| BBB-FE | 0.00 | 98.76 |
| UCB-P (Ours) | -0.72 | 99.32 |
| UCB (Ours) | 0.00 | 99.63 |
| ORD-JT* | 0.00 | 99.78 |
| BBB-JT* | 0.00 | 99.87 |
+ +(b) Permuted MNIST, 10 permutations. + +
| Method | #Params | BWT | ACC |
|--------|--------:|----:|----:|
| SI‡ | 0.1M | - | 86.0 |
| EWC‡ | 0.1M | - | 88.2 |
| HAT‡ | 0.1M | - | 91.6 |
| VCL-Vadam† | 0.1M | - | 86.34 |
| VCL-GNG† | 0.1M | - | 90.50 |
| VCL | 0.1M | -7.90 | 88.80 |
| UCB (Ours) | 0.1M | -0.38 | 91.44 |
| LWF | 1.9M | -31.17 | 65.65 |
| IMM | 1.9M | -7.14 | 90.51 |
| HAT | 1.9M | 0.03 | 97.34 |
| BBB-FT | 1.9M | -0.58 | 90.01 |
| BBB-FE | 1.9M | 0.02 | 93.54 |
| UCB-P (Ours) | 1.9M | -0.95 | 97.24 |
| UCB (Ours) | 1.9M | 0.03 | 97.42 |
| BBB-JT* | 1.9M | 0.00 | 98.12 |
+ +(c) Alternating CIFAR10/100 + +
| Method | BWT | ACC |
|--------|----:|----:|
| PathNet | 0.00 | 28.94 |
| LWF | -37.9 | 42.93 |
| LFL | -24.22 | 47.67 |
| IMM | -12.23 | 69.37 |
| PNN | 0.00 | 70.73 |
| EWC | -1.53 | 72.46 |
| HAT | -0.04 | 78.32 |
| BBB-FE | -0.04 | 51.04 |
| BBB-FT | -7.43 | 68.89 |
| UCB-P (Ours) | -1.89 | 77.32 |
| UCB (Ours) | -0.72 | 79.44 |
| BBB-JT* | 1.52 | 83.93 |
+ +(d) Sequence of 8 tasks + +
| Method | BWT | ACC |
|--------|----:|----:|
| LFL | -10.0 | 8.61 |
| PathNet | 0.00 | 20.22 |
| LWF | -54.3 | 28.22 |
| IMM | -38.5 | 43.93 |
| EWC | -18.04 | 50.68 |
| PNN | 0.00 | 76.78 |
| HAT | -0.14 | 81.59 |
| BBB-FT | -23.1 | 43.09 |
| BBB-FE | -0.01 | 58.07 |
| UCB-P (Ours) | -2.54 | 80.38 |
| UCB (Ours) | -0.84 | 84.04 |
| BBB-JT* | -1.2 | 84.1 |
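The two metrics of Eq. 8 can be computed from the matrix of per-task test accuracies. This is a short sketch (the name `continual_metrics` is ours, for illustration):

```python
import numpy as np

def continual_metrics(R):
    """ACC and BWT from Eq. 8.

    R[i, j] = test accuracy on task i after sequentially learning task j
    (0-indexed); only the diagonal and the final column are needed.
    """
    n = R.shape[0]
    acc = R[:, n - 1].mean()                      # average final accuracy
    bwt = (R[:, n - 1] - np.diag(R)).mean()       # negative => forgetting
    return acc, bwt

# Toy example: accuracy on task 0 drops from 0.99 to 0.95 after task 1.
R = np.array([[0.99, 0.95],
              [0.00, 0.97]])
acc, bwt = continual_metrics(R)  # acc = 0.96, bwt = -0.02
```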
where $R_{i,n}$ is the test classification accuracy on task $i$ after sequentially finishing learning the $n^{\text{th}}$ task. Note that in UCB-P, $R_{i,i}$ refers to the test accuracy on task $i$ before pruning and $R_{i,n}$ to the accuracy after pruning, which is equivalent to the end-of-sequence performance. In Section 6, we show that our UCB model can be used when task labels are not available at inference time, by training it with a "single head" architecture whose number of outputs is the sum of the numbers of classes over all tasks. We refer to the ACC measured in this scenario as "Generalized Accuracy".

# 5.2 5-SPLIT MNIST

We first present our results for class-incremental learning of MNIST (5-Split MNIST), in which we learn the digits 0-9 in five tasks of 2 classes at a time, in the 5 pairs $0/1, 2/3, 4/5, 6/7,$ and $8/9$. Table 2a shows the results for the reference baselines in Bayesian and non-Bayesian neural networks, including fine-tuning (BBB-FT, ORD-FT), feature extraction (BBB-FE, ORD-FE), and joint training (BBB-JT, ORD-JT), averaged over 3 runs; standard deviations are given in Table 9 in the appendix. Although MNIST is an "easy" dataset, we observe throughout all experiments that Bayesian fine-tuning and joint training perform significantly better than their counterparts, ORD-FT and ORD-JT. For Bayesian methods, we compare against VCL and its variants, VCL with Variational Adam (VCL-Vadam) and VCL with Adam and Gaussian natural gradients (VCL-GNG). For non-Bayesian methods, we compare against HAT, IMM, and EWC (EWC can be regarded as Bayesian-inspired). VCL-Vadam (ACC=99.17%) appears to outperform VCL (ACC=98.20%) and VCL-GNG (ACC=96.50%) in average accuracy. However, a full comparison is not possible because forgetting was not reported for Vadam and GNG. Nevertheless, UCB (ACC=99.63%) surpasses all the baselines, including VCL-Vadam, in average accuracy, while with zero forgetting it is on par with HAT (ACC=99.59%). We also report results on incrementally learning MNIST in two tasks (2-Split MNIST) in Table 8 in the appendix, where we compare it
We also report results on incrementally learning MNIST in two tasks (2-Split MNIST) in Table 8 in the appendix, where we compare it + +against PackNet, HAT, and LWF where PackNet, HAT, UCB-P, and UCB have zero forgetting while UCB has marginally higher accuracy than all others. + +# 5.3 PERMUTED MNIST + +Permuted MNIST is a popular variant of the MNIST dataset to evaluate continual learning approaches in which each task is considered as a random permutation of the original MNIST pixels. Following the literature, we learn a sequence of 10 random permutations and report average accuracy at the end. Table 2b shows ACC and BWT of UCB and UCB-P in comparison to state-of-the-art models using a small and a large network with 0.1M and 1.9M parameters, respectively (architecture details are given in Section A.2 of the appendix). The accuracy achieved by UCB (ACC=91.44 ± 0.04%) using the small network outperforms the ACC reported by Serra et al. (2018) for SI (ACC=86.0%), EWC (ACC=88.2%), while HAT attains a slightly better performance (ACC=91.6%). Comparing the average accuracy reported in VCL-Vadam (ACC=86.34%) and VCL-GNG (ACC=90.50%) as well as obtained results for VCL (ACC=88.80%) shows UCB with BWT=(0.03% ± 0.00%) is able to outperform other Bayesian approaches in accuracy while forgetting significantly less compared to VCL with BWT=-7.9%. While we do not experiment with memory in this work, not surprisingly adding memory to most approaches will improve their performance significantly as it allows looking into past tasks. E.g. Chen et al. (2019) report ACC=94.37% for VCL-GNC when adding a memory of size 200. + +Next, we compare the results for the larger network (1.9M). While HAT and UCB have zero forgetting, UCB, reaching ACC=97.42 ± 0.01%, performs better than all baselines including HAT which obtains ACC=97.34 ± 0.05% using 1.9M parameters. 
We also observe again that BBB-FT, despite not being specifically penalized to prevent forgetting, exhibits reasonable negative BWT values, performing better than the IMM and LWF baselines. It is close to joint training, BBB-JT, with ACC=98.1%, which can be seen as an upper bound.

# 5.4 ALTERNATING CIFAR10 AND CIFAR100

In this experiment, we randomly alternate between class-incremental learning of CIFAR10 and CIFAR100. Both datasets are divided into 5 tasks, with 2 and 20 classes per task, respectively. Table 2c presents ACC and BWT obtained with UCB-P, UCB, and three BBB reference methods, compared against various continual learning baselines. Among the baselines presented in Table 2c, PNN and PathNet are the only approaches with guaranteed zero forgetting. It is interesting to note that in this setup, some baselines (PathNet, LWF, and LFL) do not perform better than the naive accuracy achieved by feature extraction. PathNet suffers from a bad pre-assignment of the network's capacity per task, which causes poor performance on the initial task from which it never recovers. IMM performs almost similarly to fine-tuning in ACC, yet forgets more. PNN, EWC, and HAT are the only baselines that perform better than BBB-FE and BBB-FT. EWC and HAT are both allowed to forget by construction; however, HAT shows zero-forgetting behavior. While EWC is outperformed by both of our UCB variants, HAT exhibits 1% higher ACC than UCB-P. Despite slightly higher forgetting, the overall accuracy of UCB is higher, reaching 79.4%. BBB-JT achieves a positive BWT in this experiment, which shows that learning the entire sequence improves the performance on earlier tasks.

# 5.5 MULTIPLE DATASETS LEARNING

Finally, we present our results for continual learning of 8 tasks with UCB-P and UCB in Table 2d. As in the previous experiments, we report both ACC and BWT for UCB-P, UCB, the BBB references (FT, FE, JT), and various baselines.
Considering the ACC achieved by BBB-FE (58.1%) as a lower bound, we observe again that some baselines are not able to do better than BBB-FE, including LFL, PathNet, LWF, IMM, and EWC, while PNN and HAT remain the only strong baselines against our UCB-P and UCB approaches. UCB-P again outperforms PNN, by 3.6% in ACC. HAT exhibits only -0.1% BWT, but our UCB achieves 2.4% higher ACC.

# 6 SINGLE HEAD AND GENERALIZED ACCURACY OF UCB

UCB can be used even if the task information is not given at test time. For this purpose, at training time, instead of using a separate fully connected classification head for each task, we use a single

Table 3: Single Head vs. Multi-Head architecture and Generalized vs. Standard Accuracy. Generalized accuracy means that task information is not available at test time. SM, PM, CF, and 8T denote 5-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively.
| Exp | UCB (Gen. ACC, single head) | BBB-FT (Gen. ACC, single head) | UCB (ACC, single head) | BBB-FT (ACC, single head) | UCB (ACC, multi head) | BBB-FT (ACC, multi head) |
|-----|----:|----:|----:|----:|----:|----:|
| SM | 98.7 | 98.1 | 98.9 | 98.7 | 99.2 | 98.4 |
| PM | 92.5 | 86.1 | 95.1 | 88.3 | 97.7 | 90.0 |
| CF | 71.2 | 65.2 | 74.3 | 67.8 | 79.4 | 68.9 |
| 8T | 76.8 | 47.6 | 79.9 | 53.2 | 84.0 | 43.1 |
head with the total number of outputs over all tasks. For example, in the 8-dataset experiment we use one head with 293 output classes, rather than 8 separate heads, during both training and inference.

Table 3 presents our results for UCB and BBB-FT trained with a single head against a multi-head architecture, in columns 4-7. Interestingly, we see only a small performance degradation for UCB when going from multi-head to single-head training. The ACC reduction is 0.3%, 2.6%, 5.1%, and 4.1% for 5-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively.

We also evaluated UCB and BBB-FT with a more challenging metric in which the prediction space covers the classes across all tasks, so that confusion between similar class labels across tasks can be measured. Performance under this condition is reported as Generalized ACC in Table 3, columns 2-3. We observe only a small reduction in going from ACC to Generalized ACC, suggesting that the larger number of classes present at test time causes little confusion. The degradation from ACC to Generalized ACC is 0.2%, 2.6%, 3.1%, and 3.1% for 5-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively. This shows that UCB can perform competitively in more realistic conditions, such as the unavailability of task information at test time. We believe the main insight of our approach is that, instead of computing additional measures of importance, which are often task-, input-, or output-dependent, we directly use the predicted weight uncertainty to find important parameters. We can freeze them using a binary mask, as in UCB-P, or regularize changes conditioned on current uncertainty, as in UCB.
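Operationally, the two UCB variants described in Sections 4.1 and 4.2 can be summarized in a few lines. The following NumPy sketch shows one UCB parameter update and the UCB-P pruning mask; all function names are illustrative, not taken from a released codebase, and the gradients are assumed to be supplied by the caller:

```python
import numpy as np

def softplus(rho):
    """sigma = log(1 + exp(rho)), the always-positive std parameterization."""
    return np.log1p(np.exp(rho))

def ucb_step(mu, rho, grad_mu, grad_rho, alpha0=0.01):
    """One UCB update (Eqs. 4-6): the learning rate of mu is divided by
    Omega_mu = 1/sigma, while rho keeps the base rate (Omega_rho = 1)."""
    sigma = softplus(rho)
    omega_mu = 1.0 / sigma                      # importance: inverse uncertainty
    mu = mu - (alpha0 / omega_mu) * grad_mu     # i.e. alpha0 * sigma * grad_mu
    rho = rho - alpha0 * grad_rho
    return mu, rho

def snr_prune_mask(mu, rho, prune_ratio=0.5):
    """UCB-P: keep the (1 - prune_ratio) fraction of parameters with the
    highest signal-to-noise ratio |mu|/sigma (Eq. 7); the rest are freed."""
    snr = np.abs(mu) / softplus(rho)
    threshold = np.quantile(snr, prune_ratio)
    return snr >= threshold                     # binary mask saved per task
```

A certain parameter (very negative $\rho$, hence tiny $\sigma$) barely moves under `ucb_step`, while an uncertain one is updated almost at the full base rate, which is exactly the soft regularization of Section 4.1.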
# 7 CONCLUSION

In this work, we propose a continual learning formulation with Bayesian neural networks, called UCB, that uses uncertainty predictions to perform continual learning: important parameters can either be fully preserved through a saved binary mask (UCB-P) or be allowed to change conditioned on their uncertainty when learning new tasks (UCB). We demonstrated how the probabilistic uncertainty distributions per weight help in continually learning short and long sequences of benchmark datasets, compared against baselines and prior work. We show that UCB performs better than or on par with state-of-the-art models such as HAT (Serra et al., 2018) across all experiments. Choosing between the two UCB variants depends on the application scenario: while UCB-P enforces no forgetting after the initial pruning stage by saving a small binary mask per task, UCB requires no additional memory and allows more learning flexibility in the network by permitting small amounts of forgetting. UCB can also be used in a single-head setting, where the subset of classes belonging to the current task is unknown during inference, yielding a competitive model that can be deployed on a continuous stream of data in which tasks cannot be distinguished at test time.

# REFERENCES

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 139-154, 2018.

Rahaf Aljundi, Klaas Kelchtermans, and Tinne Tuytelaars. Task-free continual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11254-11263, 2019.
David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians.
Journal of the American Statistical Association, 112(518):859-877, 2017.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1613-1622. PMLR, 2015.
Yaroslav Bulatov. Notmnist dataset. Google (Books/OCR), Tech. Rep. [Online]. Available: http://yaroslavvb.blogspot.it/2011/09/notmnist-dataset.html, 2011.
Francisco M Castro, Manuel J Marín-Jiménez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 233-248, 2018.
Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. In International Conference on Learning Representations, 2019.
Yu Chen, Tom Diethe, and Neil Lawrence. Facilitating bayesian continual learning by natural gradients and Stein gradients. arXiv preprint arXiv:1904.10644, 2019.
Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017.
Alex Graves. Practical variational inference for neural networks. In Advances in neural information processing systems, pp. 2348-2356, 2011.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
W Keith Hastings. Monte carlo sampling methods using markov chains and their applications. Biometrika, 1970.
Jose Miguel Hernandez-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International Conference on Machine Learning, pp. 1861-1869, 2015.
+Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp. 5-13. ACM, 1993. +Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122, 2016. +Mohammad Emtiyaz Khan and Didrik Nielsen. Fast yet simple natural-gradient descent for variational inference in complex models. In 2018 International Symposium on Information Theory and Its Applications (ISITA), pp. 31-35. IEEE, 2018. +Mohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. Fast and scalable bayesian deep learning by weight-perturbation in adam. arXiv preprint arXiv:1806.04854, 2018. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, pp. 201611835, 2017. + +Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +Kibok Lee, Kimin Lee, Jinwoo Shin, and Honglak Lee. Overcoming catastrophic forgetting with unlabeled data in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 312-321, 2019. +Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In Advances in Neural Information Processing Systems, pp. 4652-4662, 2017. +Zhizhong Li and Derek Hoiem. 
Learning without forgetting. In European Conference on Computer Vision, pp. 614-629. Springer, 2016. +Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736-2744, 2017. +David Lopez-Paz et al. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6467-6476, 2017. +David JC MacKay. A practical bayesian framework for backpropagation networks. Neural computation, 4(3):448-472, 1992a. +David JC MacKay. *Bayesian methods for adaptive models*. PhD thesis, California Institute of Technology, 1992b. +Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. +James L McClelland, Bruce L McNaughton, and Randall C O'reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. *Psychological review*, 102(3):419, 1995. +Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109-165. Elsevier, 1989. +Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. In International Conference on Learning Representations (ICLR), 2016. +Radford M Neal. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012. +Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, 2011. +Hong-Wei Ng and Stefan Winkler. 
A data-driven approach to cleaning large face datasets. In Image Processing (ICIP), 2014 IEEE International Conference on, pp. 343-347. IEEE, 2014. +Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In International Conference on Learning Representations, 2018. +Christian Robert and George Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2013. +Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2): 123-146, 1995. + +Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. +Nicol N Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural computation, 14(7), 2002. +Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. arXiv preprint arXiv:1805.06370, 2018. +Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4548-4557. PMLR, 2018. +Kumar Shridhar, Felix Laumann, and Marcus Liwicki. Uncertainty estimations by softplus normalization in bayesian convolutional neural networks with variational inference. arXiv preprint arXiv:1806.05978, 2018. +Rupesh K Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. Compete to compute. In Advances in neural information processing systems, pp. 2310-2318, 2013. +Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. 
The German Traffic Sign Recognition Benchmark: a multi-class classification competition. In Neural Networks (IJCNN), The 2011 International Joint Conference on, pp. 1453-1460. IEEE, 2011.

Hanna Tseran, Mohammad Emtiyaz Khan, Tatsuya Harada, and Thang D Bui. Natural variational continual learning. In Continual Learning Workshop @ NeurIPS, volume 2, 2018.

Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 374-382, 2019.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.

Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In International Conference on Learning Representations, 2018.

Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 3987-3995. PMLR, 2017.

# A APPENDIX

# A.1 DATASETS

Table 4 summarizes the datasets utilized in our work, along with their sizes and numbers of classes. In all experiments, we resized images to $32 \times 32 \times 3$ where necessary. For datasets with monochromatic images, we replicate the image across all three RGB channels.

Table 4: Utilized datasets summary

| Name | #Classes | Train | Test |
| --- | --- | --- | --- |
| FaceScrub (Ng & Winkler, 2014) | 100 | 20,600 | 2,289 |
| MNIST (LeCun et al., 1998) | 10 | 60,000 | 10,000 |
| CIFAR100 (Krizhevsky & Hinton, 2009) | 100 | 50,000 | 10,000 |
| NotMNIST (Bulatov, 2011) | 10 | 16,853 | 1,873 |
| SVHN (Netzer et al., 2011) | 10 | 73,257 | 26,032 |
| CIFAR10 (Krizhevsky & Hinton, 2009) | 10 | 50,000 | 10,000 |
| TrafficSigns (Stallkamp et al., 2011) | 43 | 39,209 | 12,630 |
| FashionMNIST (Xiao et al., 2017) | 10 | 60,000 | 10,000 |
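As an illustration of the preprocessing above (resizing to $32 \times 32 \times 3$ and replicating monochromatic images across RGB channels), here is a minimal NumPy sketch using nearest-neighbor resizing; the actual experiments may use a different resizing method:

```python
import numpy as np

def to_rgb32(img):
    """Resize an image to 32x32 with nearest-neighbor sampling and
    replicate monochrome images across all three RGB channels."""
    if img.ndim == 2:                       # monochromatic -> replicate channels
        img = np.stack([img] * 3, axis=-1)
    h, w, _ = img.shape
    rows = np.arange(32) * h // 32          # nearest-neighbor source rows
    cols = np.arange(32) * w // 32          # nearest-neighbor source columns
    return img[rows][:, cols]

mnist_img = np.random.rand(28, 28)          # stand-in for a 28x28 MNIST digit
print(to_rgb32(mnist_img).shape)            # (32, 32, 3)
```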
# A.2 IMPLEMENTATION DETAILS

In this section, we take a closer look at the elements of our UCB model on MNIST and evaluate variants of parameter regularization and importance measurement, as well as the effect of the number of samples drawn from the posited posterior.

Bayes-by-Backprop (BBB) hyperparameters: Table 5 shows the search space for the hyperparameters of the BBB algorithm (Blundell et al., 2015), which we tuned on the validation set of the first two tasks.

Table 5: Search space for hyperparameters in BBB given by Blundell et al. (2015)

| BBB hyperparameters | $-\log \sigma_1$ | $-\log \sigma_2$ | $\pi$ |
| --- | --- | --- | --- |
| Search space | {0, 1, 2} | {6, 7, 8} | {0.25, 0.5, 0.75} |
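For context, the three hyperparameters in Table 5 define the scale-mixture Gaussian prior of Blundell et al. (2015), $p(w) = \prod_j [\pi \mathcal{N}(w_j; 0, \sigma_1^2) + (1 - \pi) \mathcal{N}(w_j; 0, \sigma_2^2)]$. A minimal sketch of evaluating its log-density at one grid point of the search space (illustrative only, not our training code):

```python
import numpy as np

def log_scale_mixture_prior(w, log_sigma1=0.0, log_sigma2=-6.0, pi=0.5):
    """Log-density of the BBB scale-mixture prior
    p(w) = prod_j [pi * N(w_j; 0, s1^2) + (1 - pi) * N(w_j; 0, s2^2)]."""
    s1, s2 = np.exp(log_sigma1), np.exp(log_sigma2)

    def normal_pdf(x, s):
        return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    mixture = pi * normal_pdf(w, s1) + (1 - pi) * normal_pdf(w, s2)
    return np.sum(np.log(mixture))

# One grid point of Table 5: -log sigma1 = 0, -log sigma2 = 6, pi = 0.5
w = 0.1 * np.random.randn(100)
print(log_scale_mixture_prior(w, log_sigma1=0.0, log_sigma2=-6.0, pi=0.5))
```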
Network architecture: For the Split MNIST and Permuted MNIST experiments, we used a two-layer perceptron with 1200 units. Because a Bayesian neural network has more parameters than its regular counterpart, we ensured a fair comparison by matching the total number of parameters of the two at 1.9M unless otherwise stated. For the multiple-datasets learning scenario, as well as the alternating incremental CIFAR10/100 datasets, we used a ResNet18 Bayesian neural network with 7.1-11.3M parameters depending on the experiment. However, the majority of the baselines in this work were originally developed on variants of the AlexNet architecture, and altering that, e.g. to ResNet18, degraded their reported performance, as shown in Table 6. Therefore, we kept the AlexNet architecture for the baselines and ResNet18 for ours, and only matched their numbers of parameters to ensure equal capacity across the different approaches.

Table 6: Continually learning on CIFAR10/100 using AlexNet and ResNet18 for UCB (our method) and HAT (Serra et al., 2018). BWT and ACC in %. All results are (re)produced by us.

| Method | BWT | ACC |
| --- | --- | --- |
| HAT (AlexNet) | 0.0 | 78.3 |
| HAT (ResNet18) | -9.0 | 56.8 |
| UCB (AlexNet) | -0.7 | 79.44 |
| UCB (ResNet18) | -0.7 | 79.70 |
Number of Monte Carlo samples: UCB is made robust to random noise by drawing multiple samples from the posteriors. Here we explore how the number of samples affects the final ACC and BWT. We used $\Omega_{\mu} = 1/\sigma$ as the importance measure, and regularization was performed on the mean values only. Following the results in Table 7, we set the number of samples to 10 for all experiments.

Table 7: Number of Monte Carlo samples (N) in 2-Split MNIST

| Method | N | BWT (%) | ACC (%) |
| --- | --- | --- | --- |
| UCB | 1 | 0.00 | 98.0 |
| UCB | 2 | 0.00 | 98.3 |
| UCB | 5 | -0.15 | 99.0 |
| UCB | 10 | 0.00 | 99.2 |
| UCB | 15 | -0.01 | 98.3 |
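To illustrate the sampling procedure behind Table 7, here is a minimal sketch of averaging N stochastic forward passes; the single linear layer and its posterior parameters below are hypothetical stand-ins for the actual Bayesian network:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, mu, sigma):
    """Hypothetical Bayesian linear layer: sample weights from N(mu, sigma^2)
    with the reparameterization trick, then apply them."""
    w = mu + sigma * rng.standard_normal(mu.shape)
    return x @ w

def predict(x, mu, sigma, n_samples=10):
    """Average n_samples stochastic forward passes, as in UCB with N = 10."""
    return np.mean([stochastic_forward(x, mu, sigma) for _ in range(n_samples)],
                   axis=0)

x = rng.standard_normal((4, 8))     # a batch of 4 inputs
mu = rng.standard_normal((8, 3))    # posterior means of the weights
sigma = 0.1 * np.ones((8, 3))       # posterior standard deviations
print(predict(x, mu, sigma).shape)  # (4, 3)
```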
# A.3 ADDITIONAL RESULTS

Here we include additional results: Table 8 reports results on 2-Split MNIST, and Tables 9, 10, and 11 complement the main text by providing standard deviations for the results shown in Tables 2a, 2b, and 2c, respectively.

Table 8: Continually learning on 2-Split MNIST. BWT and ACC in %. (*) denotes that methods do not adhere to the continual learning setup: BBB-JT and ORD-JT serve as the upper bound on ACC for the BBB/ORD networks, respectively. All results are (re)produced by us.

| Method | BWT | ACC |
| --- | --- | --- |
| PackNet (Mallya & Lazebnik, 2018) | 0.04 ± 0.01 | 98.91 ± 0.03 |
| LWF (Li & Hoiem, 2016) | -0.22 ± 0.04 | 99.12 ± 0.03 |
| HAT (Serra et al., 2018) | 0.01 ± 0.00 | 99.02 ± 0.00 |
| ORD-FT | -6.81 ± 0.03 | 92.42 ± 0.02 |
| ORD-FE | 0.04 ± 0.04 | 97.90 ± 0.04 |
| BBB-FT | -0.61 ± 0.03 | 98.44 ± 0.03 |
| BBB-FE | 0.02 ± 0.05 | 98.03 ± 0.05 |
| UCB-P (Ours) | 0.03 ± 0.04 | 99.02 ± 0.01 |
| UCB (Ours) | 0.01 ± 0.00 | 99.18 ± 0.01 |
| ORD-JT* | 0.02 ± 0.03 | 99.13 ± 0.03 |
| BBB-JT* | 0.03 ± 0.02 | 99.51 ± 0.02 |
Table 9: Continually learning on 5-Split MNIST. BWT and ACC in %. (*) denotes that methods do not adhere to the continual learning setup: BBB-JT and ORD-JT serve as the upper bound on ACC for the BBB/ORD networks, respectively. All results are (re)produced by us.

| Method | BWT | ACC |
| --- | --- | --- |
| VCL-Vadam (Tseran et al., 2018) | - | 99.17 ± 0.05 |
| VCL-GNG (Chen et al., 2019) | - | 96.50 ± 0.07 |
| VCL (Nguyen et al., 2018) | -0.56 ± 0.03 | 98.20 ± 0.03 |
| IMM (Lee et al., 2017) | -11.20 ± 1.57 | 88.54 ± 1.56 |
| EWC (Kirkpatrick et al., 2017) | -4.20 ± 1.08 | 95.78 ± 1.08 |
| HAT (Serra et al., 2018) | 0.00 ± 0.02 | 99.59 ± 0.02 |
| ORD-FT* | -9.18 ± 1.12 | 90.60 ± 1.12 |
| ORD-FE* | 0.00 ± 1.56 | 98.54 ± 1.57 |
| BBB-FT* | -6.45 ± 1.99 | 93.42 ± 1.98 |
| BBB-FE* | 0.00 ± 2.23 | 98.76 ± 2.23 |
| UCB-P (Ours) | -0.72 ± 0.04 | 99.32 ± 0.04 |
| UCB (Ours) | 0.00 ± 0.04 | 99.63 ± 0.03 |
| ORD-JT* | 0.00 ± 0.02 | 99.78 ± 0.02 |
| BBB-JT* | 0.00 ± 0.01 | 99.87 ± 0.01 |
Table 10: Continually learning on Permuted MNIST. BWT and ACC in %. (*) denotes that the method does not adhere to the continual learning setup: BBB-JT serves as the upper bound on ACC for the BBB network. ‡ denotes results reported by Serra et al. (2018). † denotes results reported in the original work. BWT was not reported in ‡ and †. All other results are (re)produced by us.

| Method | #Params | BWT | ACC |
| --- | --- | --- | --- |
| SI (Zenke et al., 2017)‡ | 0.1M | - | 86.0 |
| EWC (Kirkpatrick et al., 2017)‡ | 0.1M | - | 88.2 |
| HAT (Serra et al., 2018)‡ | 0.1M | - | 91.6 |
| VCL-Vadam† | 0.1M | - | 93.34 |
| VCL-GNG† | 0.1M | - | 94.62 |
| VCL | 0.1M | -7.90 ± 0.23 | 88.80 ± 0.23 |
| UCB (Ours) | 0.1M | -0.38 ± 0.02 | 91.44 ± 0.04 |
| LWF (Li & Hoiem, 2016) | 1.9M | -31.17 ± 0.05 | 65.65 ± 0.05 |
| IMM (Lee et al., 2017) | 1.9M | -7.14 ± 0.07 | 90.51 ± 0.08 |
| HAT (Serra et al., 2018) | 1.9M | 0.03 ± 0.05 | 97.34 ± 0.05 |
| BBB-FT | 1.9M | -0.58 ± 0.05 | 90.01 ± 0.05 |
| BBB-FE | 1.9M | 0.02 ± 0.03 | 93.54 ± 0.04 |
| UCB-P (Ours) | 1.9M | -0.95 ± 0.06 | 97.24 ± 0.06 |
| UCB (Ours) | 1.9M | 0.03 ± 0.00 | 97.42 ± 0.01 |
| BBB-JT* | 1.9M | 0.00 ± 0.00 | 98.12 ± 0.01 |
Table 11: Continually learning on CIFAR10/100. BWT and ACC in %. (*) denotes that the method does not adhere to the continual learning setup: BBB-JT serves as the upper bound on ACC for the BBB network. All results are (re)produced by us.

| Method | BWT | ACC |
| --- | --- | --- |
| PathNet (Fernando et al., 2017) | 0.00 ± 0.00 | 28.94 ± 0.03 |
| LWF (Li & Hoiem, 2016) | -37.9 ± 0.32 | 42.93 ± 0.30 |
| LFL (Jung et al., 2016) | -24.22 ± 0.21 | 47.67 ± 0.22 |
| IMM (Lee et al., 2017) | -12.23 ± 0.06 | 69.37 ± 0.06 |
| PNN (Rusu et al., 2016) | 0.00 ± 0.00 | 70.73 ± 0.08 |
| EWC (Kirkpatrick et al., 2017) | -1.53 ± 0.07 | 72.46 ± 0.06 |
| HAT (Serra et al., 2018) | 0.04 ± 0.06 | 78.32 ± 0.06 |
| BBB-FE | 0.04 ± 0.02 | 51.04 ± 0.03 |
| BBB-FT | -7.43 ± 0.07 | 68.89 ± 0.07 |
| UCB-P (Ours) | -1.89 ± 0.03 | 77.32 ± 0.03 |
| UCB (Ours) | -0.72 ± 0.02 | 79.44 ± 0.02 |
| BBB-JT* | 1.52 ± 0.04 | 83.93 ± 0.04 |
# UNDERSTANDING AND IMPROVING INFORMATION TRANSFER IN MULTI-TASK LEARNING

Sen Wu*

Stanford University

Hongyang R. Zhang*

University of Pennsylvania

Christopher Ré

Stanford University

# ABSTRACT

We investigate multi-task learning approaches that use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models.
Our key observation is that whether or not tasks' data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (i.e., hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks' embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtain a $2.35\%$ average GLUE score improvement on 5 GLUE tasks over $\mathrm{BERT}_{\mathrm{LARGE}}$ using our alignment method. We also design an SVD-based task reweighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.

# 1 INTRODUCTION

Multi-task learning has recently emerged as a powerful paradigm in deep learning to obtain language (Devlin et al. (2018); Liu et al. (2019a;b)) and visual representations (Kokkinos (2017)) from large-scale data. By leveraging supervised data from related tasks, multi-task learning approaches reduce the expensive cost of curating the massive per-task training data sets needed by deep learning methods and provide a shared representation which is also more efficient for learning over multiple tasks. While in some cases great improvements have been reported compared to single-task learning (McCann et al. (2018)), practitioners have also observed problematic outcomes, where the performance of certain tasks decreases due to task interference (Alonso and Plank (2016); Bingel and Søgaard (2017)). Predicting when and for which tasks this occurs is a challenge exacerbated by the lack of analytic tools. In this work, we investigate the key components that determine whether tasks interfere constructively or destructively, from theoretical and empirical perspectives. Based on these insights, we develop methods to improve the effectiveness and robustness of multi-task training.
There has been a large body of algorithmic and theoretical studies of kernel-based multi-task learning, but less is known for neural networks. The conceptual message from earlier work (Baxter (2000); Evgeniou and Pontil (2004); Micchelli and Pontil (2005); Xue et al. (2007)) is that multi-task learning is effective over "similar" tasks, where the notion of similarity is based on the single-task models (e.g. decision boundaries are close). The work on structural correspondence learning (Ando and Zhang (2005); Blitzer et al. (2006)) uses alternating minimization to learn a shared parameter and separate task parameters. Zhang and Yeung (2014) use a parameter vector for each task and learn task relationships via $l_{2}$ regularization, which implicitly controls the capacity of the model. These results are difficult to apply to neural networks: it is unclear how to reason about neural networks whose feature space is given by layer-wise embeddings.

To determine whether two tasks interfere constructively or destructively, we investigate an architecture with a shared module for all tasks and a separate output module for each task (Ruder (2017)). See Figure 1 for an illustration.

![](images/af33c4e26b61707f5e789887d9749527f9052582a10c66f14d084742531127ca.jpg)

Figure 1: An illustration of the multi-task learning architecture with a shared lower module $B$ and $k$ task-specific modules $\{A_i\}_{i=1}^k$.

Figure 2: Positive vs. negative transfer is affected by the data, not just the model. See lower right vs. mid. Tasks 2 and 3 have the same model (dotted lines) but different data distributions. Notice the difference of data in the circled areas.

Our motivating observation is that in addition to model similarity, which affects the type of interference, task data similarity plays a second-order effect after controlling for model similarity. To illustrate the idea, we consider three tasks with the same number of data
samples, where tasks 2 and 3 have the same decision boundary but different data distributions (see Figure 2 for an illustration). We observe that training task 1 with task 2 or task 3 can either improve or hurt task 1's performance, depending on the amount of contributing data along the decision boundary! This observation shows that by measuring the similarities of the task data and the models separately, we can analyze the interference of tasks and attribute its cause more precisely.

Motivated by the above observation, we study the theory of multi-task learning through the shared module in linear and ReLU-activated settings. Our theoretical contribution involves three components: the capacity of the shared module, task covariance, and the per-task weights of the training procedure. The capacity plays a fundamental role: if the shared module's capacity is too large, there is no interference between tasks; if it is too small, there can be destructive interference. We then show how to determine interference by proposing a more fine-grained notion called task covariance, which can be used to measure the alignment of task data. By varying task covariances, we observe both positive and negative transfer from one task to another! We then provide sufficient conditions which guarantee that one task transfers positively to another, provided sufficiently many data points from the contributing task. Finally, we study how to assign per-task weights for settings where different tasks share the same data but have different labels.

Experimental results. Our theory leads to the design of two algorithms of practical interest. First, we propose to align the covariances of the task embedding layers and present empirical evaluations on well-known benchmarks and tasks. On 5 tasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. (2018b)) trained with the $\mathrm{BERT}_{\mathrm{LARGE}}$ model by Devlin et al.
(2018), our method improves the result of $\mathrm{BERT}_{\mathrm{LARGE}}$ by a $2.35\%$ average GLUE score, which is the standard metric for the benchmark. Further, we show that our method is applicable to transfer learning settings; we observe up to $2.5\%$ higher accuracy by transferring between six sentiment analysis tasks using the LSTM model of Lei et al. (2018).

Second, we propose an SVD-based task reweighting scheme to improve multi-task training for settings where different tasks have the same features but different labels. On the ChestX-ray14 dataset, we compare our method to the unweighted scheme and observe an improvement of $0.4\%$ AUC score on average over all tasks. In conclusion, these evaluations confirm that our theoretical insights apply to a broad range of settings and applications.

# 2 THREE COMPONENTS OF MULTI-TASK LEARNING

We study multi-task learning (MTL) models with a shared module for all tasks and a separate output module for each task. We ask: what are the key components that determine whether MTL is better than single-task learning (STL)? In response, we identify three components: model capacity, task covariance, and the optimization scheme. After setting up the model, we briefly describe the role of model capacity. We then introduce the notion of task covariance, which comprises the bulk of this section. We finish by showing the implications of our results for choosing optimization schemes.

# 2.1 MODELING SETUP

We are given $k$ tasks. Let $m_{i}$ denote the number of data samples of task $i$. For task $i$, let $X_{i} \in \mathbb{R}^{m_{i} \times d}$ denote its covariates and let $y_{i} \in \mathbb{R}^{m_{i}}$ denote its labels, where $d$ is the dimension of the data. We assume that all tasks have the same input dimension $d$. This is not a restrictive assumption and is typically satisfied, e.g. for word embeddings in BERT, or otherwise by padding zeros to the input.
Our model assumes the output label is 1-dimensional. We can also model a multi-label problem with $k$ types of labels by having $k$ tasks with the same covariates but different labels. We consider an MTL model with a shared module $B \in \mathbb{R}^{d \times r}$ and a separate output module $A_{i} \in \mathbb{R}^{r}$ for task $i$, where $r$ denotes the output dimension of $B$. See Figure 1 for the illustration. We define the objective of finding an MTL model as minimizing the following over $B$ and the $A_{i}$'s:

$$
f\left(A_{1}, A_{2}, \dots, A_{k}; B\right) = \sum_{i=1}^{k} L\left(g\left(X_{i} B\right) A_{i}, y_{i}\right), \tag{1}
$$

where $L$ is a loss function such as the squared loss. The activation function $g: \mathbb{R} \to \mathbb{R}$ is applied to every entry of $X_{i}B$. In equation 1, all data samples contribute equally. Because of differences between tasks such as data size, it is natural to re-weight tasks during training:

$$
f\left(A_{1}, A_{2}, \dots, A_{k}; B\right) = \sum_{i=1}^{k} \alpha_{i} \cdot L\left(g\left(X_{i} B\right) A_{i}, y_{i}\right), \tag{2}
$$

where the $\alpha_{i}$'s denote the per-task weights. This setup is an abstraction of the hard parameter sharing architecture (Ruder (2017)). The shared module $B$ provides a universal representation (e.g., an LSTM for encoding sentences) for all tasks. Each task-specific module $A_{i}$ is optimized for its own output. We focus on two models, as follows.

The single-task linear model. The labels $y$ of each task follow a linear model with parameter $\theta \in \mathbb{R}^d$: $y = X\theta + \varepsilon$. Every entry of $\varepsilon$ follows the normal distribution $\mathcal{N}(0, \sigma^2)$ with variance $\sigma^2$. The function $g(XB) = XB$. This is a well-studied setting for linear regression (Hastie et al. (2005)).

The single-task ReLU model. Denote $\mathrm{ReLU}(x) = \max(x, 0)$ for any $x \in \mathbb{R}$.
We will also consider a non-linear model in which $X\theta$ passes through the ReLU activation, with $a \in \mathbb{R}$ and $\theta \in \mathbb{R}^d$: $y = a \cdot \mathrm{ReLU}(X\theta) + \varepsilon$, where the ReLU is applied to $X\theta$ entrywise. The encoding function is then $g(XB) = \mathrm{ReLU}(XB)$.

Positive vs. negative transfer. For a source task and a target task, we say the source task transfers positively to the target task if training both through equation 1 improves over training the target task alone (measured on its validation set). Negative transfer is the converse of positive transfer.

Problem statement. Our goal is to analyze how the three components determine positive vs. negative transfer between tasks: the model capacity ($r$), the task covariances ($\{X_{i}^{\top} X_{i}\}_{i=1}^{k}$), and the per-task weights ($\{\alpha_{i}\}_{i=1}^{k}$). We focus on regression tasks under the squared loss, but we also provide synthetic experiments on classification tasks to validate our theory.

Notations. For a matrix $X$, its column span is the set of all linear combinations of the column vectors of $X$, and $X^{\dagger}$ denotes its pseudoinverse. Given $u, v \in \mathbb{R}^{d}$, $\cos(u, v)$ equals $u^{\top}v / (\|u\| \cdot \|v\|)$.

# 2.2 MODEL CAPACITY

We begin by revisiting the role of model capacity, i.e. the output dimension of $B$ (denoted by $r$). We show that, as a rule of thumb, $r$ should be smaller than the sum of the capacities of the STL modules.

Example. Suppose we have $k$ linear regression tasks under the squared loss; equation 1 becomes:

$$
f\left(A_{1}, A_{2}, \dots, A_{k}; B\right) = \sum_{i=1}^{k} \|X_{i} B A_{i} - y_{i}\|_{F}^{2}. \tag{3}
$$

The optimal solution of equation 3 for task $i$ is $\theta_{i} = (X_{i}^{\top}X_{i})^{\dagger}X_{i}^{\top}y_{i} \in \mathbb{R}^{d}$. Hence a capacity of 1 suffices for each task.
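The per-task optimum of equation 3 can be checked numerically. The following sketch, on synthetic data, verifies that $\theta_{i} = (X_{i}^{\top}X_{i})^{\dagger}X_{i}^{\top}y_{i}$ is recovered exactly in the noiseless, full-rank case, so a rank-1 module per task suffices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 5
X = rng.standard_normal((m, d))     # synthetic covariates for one task
theta_true = rng.standard_normal(d)
y = X @ theta_true                  # noiseless labels

# Per-task optimum of equation 3: theta = (X^T X)^+ X^T y
theta_hat = np.linalg.pinv(X.T @ X) @ X.T @ y

# With noiseless labels and full-rank X this recovers theta exactly, so a
# rank-1 module (B = theta_hat, A = 1) already attains zero training loss.
print(np.allclose(theta_hat, theta_true))        # True
print(np.linalg.norm(X @ theta_hat - y) < 1e-8)  # True
```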
We show that if $r \geq k$, then there is no transfer between any two tasks.

![](images/73ad0ddd7e9c3b5a5181677e0a40c28ee8529a3ab2772e05f60482af05b1c69b.jpg)

Figure 3: Performance improvement of a target task (Task 1) by MTL with a source task vs. STL. Red: positive transfer when the source is Task 2, which has the same covariance matrix as the target. Green: negative (to positive) transfer when the source is Task 3, which has a different covariance from the target, as its number of samples increases. See the example below for the definition of each task.

Proposition 1. Let $r \geq k$. There exists an optimum $B^{\star}$ and $\{A_i^\star\}_{i=1}^k$ of equation 3 such that $B^{\star} A_i^{\star} = \theta_i$ for all $i = 1, 2, \ldots, k$.

To illustrate the idea, as long as $B^{\star}$ contains $\{\theta_i\}_{i=1}^k$ in its column span, there exists an $A_i^{\star}$ such that $B^{\star}A_i^{\star} = \theta_i$, which is optimal for equation 3 with minimum error. But this means there is no transfer between any two tasks. This can hurt generalization if a task has limited data, in which case its STL solution overfits the training data, whereas the MTL solution could have leveraged other tasks' data to improve generalization. The proof of Proposition 1 and its extension to the ReLU setting are in Appendix A.1.

Algorithmic consequence. The implication is that limiting the shared module's capacity is necessary to enforce information transfer. If the shared module is too small, tasks may interfere negatively with each other; but if it is too large, there may be no transfer between tasks. In Section 3.3, we verify the need to carefully choose model capacity on a wide range of neural networks, including CNNs, LSTMs and multi-layer perceptrons.

# 2.3 TASK COVARIANCE

To show how to quantify task data similarity, we illustrate with two regression tasks under the linear model without noise: $y_{1} = X_{1}\theta_{1}$ and $y_{2} = X_{2}\theta_{2}$.
By Section 2.2, it is necessary to limit the capacity of the shared module to enforce information transfer. Therefore, we consider the case of $r = 1$: the shared module $B$ is now a $d$-dimensional vector, and $A_{1}, A_{2}$ are both scalars.

A natural requirement for task similarity is that the STL models be similar, i.e. that $|\cos(\theta_1, \theta_2)|$ be large. To see this, note that the optimal STL model for task 1 is $(X_1^\top X_1)^{-1} X_1^\top y_1 = \theta_1$. Hence if $|\cos(\theta_1, \theta_2)|$ is 1, then tasks 1 and 2 can share a model $B \in \mathbb{R}^d$ which is either $\theta_1$ or $-\theta_1$; the scalars $A_1$ and $A_2$ can then transform $B$ to equal $\theta_1$ and $\theta_2$.

Is this requirement sufficient? Recall that in equation 3, the task data $X_{1}$ and $X_{2}$ are both multiplied by $B$. If they are poorly "aligned" geometrically, the performance could suffer. How do we formalize the geometry of task alignment? In the following, we show that the covariance matrices of $X_{1}$ and $X_{2}$, which we define to be $X_{1}^{\top}X_{1}$ and $X_{2}^{\top}X_{2}$, capture this geometry. We fix $|\cos(\theta_1, \theta_2)|$ close to 1 to examine the effects of task covariances; in Appendix A.2.1 we fix the task covariances to examine the effects of model cosine similarity. Concretely, equation 3 reduces to:

$$
\max_{B \in \mathbb{R}^{d}} h(B) = \left\langle \frac{X_{1} B}{\|X_{1} B\|}, y_{1} \right\rangle^{2} + \left\langle \frac{X_{2} B}{\|X_{2} B\|}, y_{2} \right\rangle^{2}, \tag{4}
$$

where we apply the first-order optimality condition on $A_{1}$ and $A_{2}$ and simplify the equation. Specifically, we focus on a scenario where task 1 is the source and task 2 is the target. Our goal is to determine when the source transfers to the target positively or negatively in MTL; determining the type of transfer from task 2 to task 1 can be done similarly.
Answering this question boils down to studying the angle, or cosine similarity, between the optimum of equation 4 and $\theta_{2}$.

Example. In Figure 3, we show that by varying the task covariances and the number of samples, we can observe both positive and negative transfer. The conceptual message is the same as in Figure 2; we describe the data generation process in more detail. We use 3 tasks and measure the type of transfer from the source to the target. The $x$-axis is the number of data samples from the source; the $y$-axis is the target's performance improvement (MTL minus STL) measured on its validation set.

# Algorithm 1 Covariance alignment for multi-task training

Require: Task embedding layers $X_{1} \in \mathbb{R}^{m_{1} \times d}, X_{2} \in \mathbb{R}^{m_{2} \times d}, \ldots, X_{k} \in \mathbb{R}^{m_{k} \times d}$, shared module $B$

Parameter: Alignment matrices $R_{1}, R_{2}, \ldots, R_{k} \in \mathbb{R}^{d \times d}$ and output modules $A_{1}, A_{2}, \ldots, A_{k} \in \mathbb{R}^{r}$

1: Let $Z_{i} = X_{i}R_{i}$, for $1 \leq i \leq k$, and consider the following modified loss (with $B$ being fixed):

$$
\hat{f}(A_{1}, \dots, A_{k}; R_{1}, \dots, R_{k}) = \sum_{i=1}^{k} L(g(Z_{i} B) A_{i}, y_{i}) = \sum_{i=1}^{k} L(g(X_{i} R_{i} B) A_{i}, y_{i})
$$

2: Minimize $\hat{f}$ by alternately applying a gradient descent update on $A_{i}$ and $R_{i}$, given a sampled data batch from task $i$.

Other implementation details are described in Appendix B.3.

Data generation. We set $|\cos(\theta_1, \theta_2)| \approx 1$ (say 0.96). For $i \in \{1, 2, 3\}$, let $R_i \in \mathbb{R}^{m_i \times d}$ denote a random matrix with i.i.d. entries drawn from $\mathcal{N}(0, 1)$. Let $S_1, S_2 \subseteq \{1, 2, \ldots, d\}$ be two disjoint sets of size $d/10$. For $i = 1, 2$, let $D_i$ be a diagonal matrix whose entries are equal to a large value $\kappa$ (e.g.
$\kappa = 100$) for coordinates in $S_i$ and 1 otherwise. Let $Q_i \in \mathbb{R}^{d \times d}$ denote an orthonormal matrix, i.e. $Q_i^\top Q_i$ equals the identity matrix, obtained by orthogonalizing a random Gaussian matrix.

Then, we define the 3 tasks as follows. (i) Task 1 (target): $X_{1} = R_{1}Q_{1}D_{1}$ and $y_{1} = X_{1}\theta_{1}$. (ii) Task 2 (source task for the red line): $X_{2} = R_{2}Q_{1}D_{1}$ and $y_{2} = X_{2}\theta_{2}$. (iii) Task 3 (source task for the green line): $X_{3} = R_{3}Q_{2}D_{2}$ and $y_{3} = X_{3}\theta_{2}$. Tasks 1 and 2 have the same covariance matrix, but tasks 1 and 3 have different covariance matrices. Intuitively, the signals of tasks 1 and 3 lie in different subspaces, which arise from the differences in the diagonals of the $D_{i}$'s and in the orthonormal matrices.

Analysis. Unless the source task has many samples to estimate $\theta_{2}$, which is much more than the number of samples needed to estimate only the coordinates in $S_{1}$, the effect of transferring to the target is small. We observe similar results for logistic regression tasks and for ReLU-activated regression tasks.

Theory. We rigorously quantify how many data points are needed to guarantee positive transfer. The folklore in MTL is that when a source task has a lot of data but the related target task has limited data, the source can often transfer positively to the target. Our previous example shows that by varying the source's number of samples and its covariance, we can observe both types of transfer. How much data do we need from the source to guarantee a positive transfer to the target? We show that this depends on the condition numbers of both tasks' covariances.

Theorem 2 (informal). For $i = 1, 2$, let $y_{i} = X_{i}\theta_{i} + \varepsilon_{i}$ denote two linear regression tasks with parameters $\theta_{i} \in \mathbb{R}^{d}$ and $m_{i}$ samples.
Suppose that each row of the source task $X_{1}$ is drawn independently from a distribution with covariance $\Sigma_{1} \in \mathbb{R}^{d \times d}$ and bounded $l_{2}$-norm. Let $c = \kappa(X_{2})\sin(\theta_{1}, \theta_{2})$ and assume that $c \leq 1/3$. Denote by $(B^{\star}, A_1^{\star}, A_2^{\star})$ the optimal MTL solution. With high probability, when $m_{1}$ is at least on the order of $(\kappa^{2}(\Sigma_{1}) \cdot \kappa^{4}(X_{2}) \cdot \|y_{2}\|^{2}) / c^{4}$, we have

$$
\left\|B^{\star} A_{2}^{\star} - \theta_{2}\right\| / \left\|\theta_{2}\right\| \leq 6c + \frac{1}{1 - 3c} \frac{\left\|\varepsilon_{2}\right\|}{\left\|X_{2} \theta_{2}\right\|}. \tag{5}
$$

Recall that for a matrix $X$, $\kappa(X)$ denotes its condition number. Theorem 2 quantifies the trend in Figure 3, where the improvement for task 2 reaches a plateau once $m_1$ becomes large enough.

The parameter $c$ indicates how similar the two tasks are: the smaller $\sin(\theta_1, \theta_2)$ is, the smaller $c$ is. As an example, if $\sin(\theta_1, \theta_2) \leq \delta / \kappa(X_2)$ for some $\delta$, then equation 5 is at most $O(\delta) + \|\varepsilon_{2}\| / \|X_{2}\theta_{2}\|$.¹ The formal statement, its proof, and discussions of the assumptions are deferred to Appendix A.2.2.

The ReLU model. We show a similar result for the ReLU model, which requires resolving the challenge of analyzing the ReLU function. We use a geometric characterization of the ReLU function under distributional input assumptions by Du et al. (2017). The result is deferred to Appendix A.2.3.

# Algorithm 2 An SVD-based task re-weighting scheme

Input: $k$ tasks: $(X, y_{i}) \in (\mathbb{R}^{m \times d}, \mathbb{R}^{m})$; a rank parameter $r \in \{1, 2, \dots, k\}$

Output: A weight vector $\{\alpha_{1}, \alpha_{2}, \dots, \alpha_{k}\}$

1: Let $\theta_{i} = X^{\top}y_{i}$
2: $U_{r}, D_{r}, V_{r} = \mathrm{SVD}_{r}(\theta_{1}, \theta_{2}, \dots, \theta_{k})$, i.e.
the best rank- $r$ approximation to the $\theta_{i}$ 's.
3: Let $\alpha_{i} = \| \theta_{i}^{\top}U_{r}\|$ , for $i = 1,2,\ldots ,k$

Algorithmic consequence. An implication of our theory is a covariance alignment method to improve multi-task training. For the $i$-th task, we add an alignment matrix $R_{i}$ before its input $X_{i}$ passes through the shared module $B$. Algorithm 1 shows the procedure.

We also propose a metric called the covariance similarity score to measure the similarity between two tasks. Given $X_{1} \in \mathbb{R}^{m_{1} \times d}$ and $X_{2} \in \mathbb{R}^{m_{2} \times d}$, we measure their similarity in three steps: (a) Compute the covariance matrix $X_{1}^{\top} X_{1}$. (b) Find its best rank-$r_{1}$ approximation $U_{1, r_{1}} D_{1, r_{1}} U_{1, r_{1}}^{\top}$, where $r_{1}$ is chosen to contain 99% of the singular values. (c) Apply steps (a) and (b) to $X_{2}$, then compute the score:

$$
\text{Covariance similarity score} = \frac{\left\|\left(U_{1, r_{1}} D_{1, r_{1}}^{1/2}\right)^{\top} U_{2, r_{2}} D_{2, r_{2}}^{1/2}\right\|_{F}}{\left\|U_{1, r_{1}} D_{1, r_{1}}^{1/2}\right\|_{F} \cdot \left\|U_{2, r_{2}} D_{2, r_{2}}^{1/2}\right\|_{F}}. \tag{6}
$$

A nice property of the score is that it is invariant to rotations of the columns of $X_{1}$ and $X_{2}$.

# 2.4 OPTIMIZATION SCHEME

Lastly, we consider the effect of re-weighting the tasks (or their losses in equation 2). When does re-weighting the tasks help? In this part, we show a use case for improving the robustness of multi-task training in the presence of label noise. Settings involving label noise can arise when some tasks only have weakly-supervised labels, which have been studied before in the literature (e.g. Mintz et al. (2009); Pentina and Lampert (2017)). We start by describing a motivating example.
Consider two tasks where task 1 is $y_{1} = X\theta$ and task 2 is $y_{2} = X\theta + \varepsilon_{2}$. If we train the two tasks together, the error $\varepsilon_{2}$ will add noise to the trained model. However, by upweighting task 1, we reduce the noise from task 2 and get better performance. To rigorously study the effect of task weights, we consider a setting where all the tasks have the same data but different labels. This setting arises, for example, in multi-label image tasks. We derive the optimal solution in the linear model.

Proposition 3. Let the shared module have capacity $r \leq k$. Given $k$ tasks with the same covariates $X \in \mathbb{R}^{m \times d}$ but different labels $\{y_i\}_{i=1}^k$, let $X$ be full rank and $UDV^\top$ be its SVD. Let $Q_rQ_r^\top$ be the best rank- $r$ approximation to $\sum_{i=1}^k \alpha_i U^\top y_i y_i^\top U$. Let $B^\star \in \mathbb{R}^{d \times r}$ be an optimal solution for the re-weighted loss. Then the column span of $B^\star$ is equal to the column span of $(X^\top X)^{-1} VDQ_r$.

We can also extend Proposition 3 to show that all local minima of equation 3 are global minima in the linear setting. We leave the proof to Appendix A.3. We remark that this result does not extend to the non-linear ReLU setting and leave this for future work.

Based on Proposition 3, we provide a rigorous justification of the previous example. Suppose that $X$ is full rank; then $(X^{\top}X)^{\dagger}X^{\top}[\alpha_{1}y_{1},\alpha_{2}y_{2}] = [\alpha_{1}\theta, \alpha_{2}\theta + \alpha_{2}(X^{\top}X)^{-1}X^{\top}\varepsilon_{2}]$. Hence, when we increase $\alpha_{1}$, $\cos(B^{\star},\theta)$ moves closer to 1.

Algorithmic consequence. Inspired by our theory, we describe a re-weighting scheme for the presence of label noise. We compute the per-task weights by computing the SVD over $X^{\top}y_{i}$, for $1\leq i\leq k$.
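This SVD-based re-weighting is short enough to sketch directly in numpy. The function below is a minimal illustration of the scheme, not a released implementation; the name `svd_task_weights` and the column layout of `Y` are our own choices:

```python
import numpy as np

def svd_task_weights(X, Y, r):
    """SVD-based task re-weighting (a sketch of Algorithm 2).

    X: shared covariates, shape (m, d).
    Y: per-task labels stacked as columns, shape (m, k).
    r: rank parameter, 1 <= r <= k.
    Returns alpha of shape (k,), with alpha_i = ||theta_i^T U_r||.
    """
    Theta = X.T @ Y                                # column i is theta_i = X^T y_i
    U, _, _ = np.linalg.svd(Theta, full_matrices=False)
    U_r = U[:, :r]                                 # principal r directions of the theta_i's
    return np.linalg.norm(Theta.T @ U_r, axis=1)   # projection length per task
```

A task whose $\theta_i$ is orthogonal to the principal directions receives weight close to zero. For instance, with $X = \mathrm{Id}_4$, $y_1 = 3e_1$, $y_2 = e_2$, and $r = 1$, the returned weights are $(3, 0)$.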
The intuition is that if the label vector $y_{i}$ of a task is noisy, then the signal it shares with the other tasks is small. Therefore, we would like to design a procedure that suppresses the noise. The SVD procedure does this: the weight of a task is calculated by its projection onto the principal $r$ directions. See Algorithm 2 for the description.

![](images/a67492f9e1e9feec2b144c9bbebf47fcb084e88321962f2f0f278cf2599b31b5.jpg)
Figure 4: Illustration of the covariance alignment module on task embeddings.

# 3 EXPERIMENTS

We describe connections between our theoretical results and practical problems of interest. We show three claims on real-world datasets. (i) The shared MTL module performs best when its capacity is smaller than the total capacity of the single-task models. (ii) Our proposed covariance alignment method improves multi-task training in a variety of settings, including the GLUE benchmark and six sentiment analysis tasks. Our method extends naturally to transfer learning settings, and we validate this as well. (iii) Our SVD-based re-weighting scheme is more robust than the standard unweighted scheme on multi-label image classification tasks in the presence of label noise.

# 3.1 EXPERIMENTAL SETUP

Datasets and models. We describe the datasets and models we use in the experiments.

GLUE: GLUE is a natural language understanding benchmark including question answering, sentiment analysis, text similarity, and textual entailment problems. We choose $\mathrm{BERT}_{\mathrm{LARGE}}$ as our model, a 24-layer transformer network from Devlin et al. (2018). We use this dataset to evaluate how Algorithm 1 works with the state-of-the-art BERT model.

Sentiment Analysis: This dataset includes six tasks: movie review sentiment (MR), sentence subjectivity (SUBJ), customer reviews polarity (CR), question type (TREC), opinion polarity (MPQA), and the Stanford sentiment treebank (SST) tasks.
For each task, the goal is to categorize sentiment opinions expressed in the text. We use an embedding layer (with GloVe embeddings $^2$ ) followed by an LSTM layer proposed by Lei et al. (2018) $^3$ .

ChestX-ray14: This dataset contains 112,120 frontal-view X-ray images, and each image is labeled with up to 14 diseases. This is a 14-task multi-label image classification problem. We use the CheXNet model from Rajpurkar et al. (2017), a 121-layer convolutional neural network, on all tasks.

For all models, we share the main module across all tasks ($\mathrm{BERT}_{\mathrm{LARGE}}$ for GLUE, LSTM for sentiment analysis, CheXNet for ChestX-ray14) and assign a separate regression or classification layer on top of the shared module for each task.

Comparison methods. For the experiment on multi-task training, we compare training with Algorithm 1 against training without it. Specifically, we apply the alignment procedure on the task embedding layers. See Figure 4 for an illustration, where $E_{i}$ denotes the embedding of task $i$, $R_{i}$ denotes its alignment module, and $Z_{i} = E_{i}R_{i}$ is the rotated embedding.

For transfer learning, we first train an STL model on the source task by tuning its model capacity (e.g. the output dimension of the LSTM layer). Then, we fine-tune the STL model on the target task for 5-10 epochs. To apply Algorithm 1, we add an alignment module for the target task during fine-tuning.

For the experiment on re-weighting schemes, we compute the per-task weights as described in Algorithm 2. Then, we reweight the loss function as in equation 2. We compare with the reweighting technique of Kendall et al. (2018).
Informally, the latter uses Gaussian likelihood to model classification outputs. The weights, defined as inversely proportional to the variances of the Gaussian, are optimized during training. We also compare with the unweighted loss (cf. equation 1) as a baseline.

![](images/7d008e165e9bff8bd39f72f61e9d5c4df595cee7f579a0ac8bde922995efb3f7.jpg)
(a) MTL on GLUE over 10 task pairs

![](images/ec34acb7d832c43d671b5fe2243f6f7c06c53b78f4adae4ef8b3424aed50a49c.jpg)
(b) Transfer learning on six sentiment analysis tasks
Figure 5: Performance improvements of Algorithm 1 by aligning task embeddings.

Metric. We measure performance on the GLUE benchmark using a standard metric called the GLUE score, which contains accuracy and correlation scores for each task.

For the sentiment analysis tasks, we measure the accuracy of predicting the sentiment opinion.

For the image classification task, we measure the area under the curve (AUC) score. We run five different random seeds and report the average results. The result of an MTL experiment is averaged over the results of all the tasks, unless specified otherwise.

For the training procedures and other details of the setup, we refer the reader to Appendix B.

# 3.2 EXPERIMENTAL RESULTS

We present use cases of our methods on open-source datasets. We expected to see improvements via our methods in multi-task and other settings, and indeed we saw such gains across a variety of tasks.

Improving multi-task training. We apply Algorithm 1 on five tasks (CoLA, MRPC, QNLI, RTE, SST-2) from the GLUE benchmark using a state-of-the-art language model $\mathrm{BERT}_{\mathrm{LARGE}}$. We train the output layers $\{A_i\}$ and the alignment layers $\{R_i\}$ using our algorithm. We compare the average performance over all five tasks and find that our method outperforms $\mathrm{BERT}_{\mathrm{LARGE}}$ by $2.35\%$ average GLUE score for the five tasks.
For the particular setting of training two tasks, our method outperforms $\mathrm{BERT}_{\mathrm{LARGE}}$ on 7 of the 10 task pairs. See Figure 5a for the results.

Improving transfer learning. While our study has focused on multi-task learning, transfer learning is a naturally related goal, and we find that our method is also useful in this case. We validate this by training an LSTM on sentiment analysis. Figure 5b shows the results with SST as the source task and each of the remaining tasks as the target. Algorithm 1 improves accuracy on four tasks by up to $2.5\%$.

Reweighting training for the same task covariates. We evaluate Algorithm 2 on the ChestX-ray14 dataset. This setting satisfies the assumption of Algorithm 2, which requires different tasks to have the same input data. Across all 14 tasks, we find that our re-weighting method improves over the technique of Kendall et al. (2018) by $0.1\%$ AUC score. Compared to training with the unweighted loss, our method improves performance by $0.4\%$ AUC score over all tasks.

# 3.3 ABLATION STUDIES

Model capacity. We verify our hypothesis that the capacity of the MTL model should not exceed the total capacity of the STL models. We show this on an LSTM model with the sentiment analysis tasks. Recall that the capacity of an LSTM model is its output dimension (before the last classification layer). We train an MTL model on all tasks and vary the shared module's capacity from 5 to 500 to find the optimum. Similarly, we train an STL model for each task and find its optimum.

In Figure 1, we find that the performance of MTL peaks when the shared module has capacity 100. This is much smaller than the total capacity of all the STL models. The result confirms that

Table 1: Comparing the model capacity between MTL and STL.
| Task | STL Cap. | STL Acc. | MTL Cap. | MTL Acc. |
| --- | --- | --- | --- | --- |
| SST | 200 | 82.3 | 100 | 90.8 |
| MR | 200 | 76.4 | | 96.0 |
| CR | 5 | 73.2 | | 78.7 |
| SUBJ | 200 | 91.5 | | 89.5 |
| MPQA | 500 | 86.7 | | 87.0 |
| TREC | 100 | 85.7 | | 78.7 |
| Overall | 1205 | 82.6 | 100 | 85.1 |
![](images/4a4cbbd0c8ce458856f5e18a4ed2ce1c6ca6276040e7ba4aa95b6411c819fdf6.jpg)
Figure 6: Covariance similarity score vs. performance improvements from alignment.

constraining the shared module's capacity is crucial to achieve the ideal performance. Extended results on CNN/MLP models supporting our hypothesis are shown in Appendix B.5.

Task covariance. We apply our covariance similarity score from Section 2.3 to provide an in-depth study of the covariance alignment method. Our hypothesis is twofold: (a) aligning the covariances helps, which we have shown in Figure 5a; (b) the similarity score between two tasks increases after applying the alignment. We verify the hypothesis on the sentiment analysis tasks. We use the single-task model's embedding before the LSTM layer to compute the covariance.

First, we measure the similarity score using equation 6 between all six single-task models. Then, for each task pair, we train an MTL model using Algorithm 1. We measure the similarity score on the trained MTL model. Our results confirm the hypothesis (Figure 6): (a) we observe increased accuracy on 13 of 15 task pairs by up to $4.1\%$; (b) the similarity score increases for all 15 task pairs.

Optimization scheme. We verify the robustness of Algorithm 2. After selecting two tasks from the ChestX-ray14 dataset, we test our method by assigning random labels to $20\%$ of the data on one task. The labels for the other task remain unchanged.

On 10 randomly selected pairs, our method improves over the unweighted scheme by an average $1.0\%$ AUC score and over the technique of Kendall et al. (2018) by an average $0.4\%$ AUC score. We include more details of this experiment in Appendix B.5.

# 4 RELATED WORK

There has been a large body of recent work on using the multi-task learning approach to train deep neural networks. Liu et al. (2019a); McCann et al.
(2018) and subsequent follow-up work show state-of-the-art results on the GLUE benchmark, which inspired our study of an abstraction of the MTL model. Recent work of Zamir et al. (2018); Standley et al. (2019) answers which visual tasks to train together via a computationally intensive heuristic. We discuss several lines of study related to this work. For complete references, we refer the interested reader to the surveys of Ruder (2017); Zhang and Yang (2017) and the surveys on domain adaptation and transfer learning by Pan and Yang (2009); Kouw (2018).

Theoretical studies of multi-task learning. Of particular relevance to this work are those that study the theory of multi-task learning. The earlier works of Baxter (2000); Ben-David and Schuller (2003) are among the first to formally study the importance of task relatedness for learning multiple tasks. See also the follow-up work of Maurer (2006), which studies generalization bounds of MTL.

A closely related line of work to structural learning is subspace selection, i.e. how to select a common subspace for multiple tasks. Examples from this line of work include Obozinski et al. (2010); Wang et al. (2015); Fernando et al. (2013); Elhamifar et al. (2015). Evgeniou and Pontil (2004); Micchelli and Pontil (2005) study a formulation that extends support vector machines to the multitask setting. See also Argyriou et al. (2008); Pentina et al. (2015); Pentina and Ben-David (2015); Pentina and Lampert (2017), which provide more refined optimization methods and further study. The work of Ben-David et al. (2010) provides theories to measure the differences between source and target tasks for transfer learning in a different model setup. Khodak et al. (2019); Kong et al. (2020);
Du et al. (2020) consider the related meta learning setting, which is in spirit an online setting of multi-task learning.

Our result on restricting the model capacities for multi-task learning is in contrast with recent theoretical studies on over-parametrized models (e.g. Li et al. (2018); Zhang et al. (2019a); Bartlett et al. (2020)), where the model capacities are usually much larger than the regime we consider here. It would be interesting to better understand multi-task learning in the context of over-parametrized models, with respect to phenomena such as double descent that have been observed in other contexts (Belkin et al. (2019)).

Finally, Zhang et al. (2019b); Shui et al. (2019) consider multi-task learning from the perspective of adversarial robustness. Mahmud and Ray (2008) consider using Kolmogorov complexity to measure the effectiveness of transfer learning for decision tree methods.

Hard parameter sharing vs soft parameter sharing. The architecture that we study in this work is also known as the hard parameter sharing architecture. There is another kind of architecture called soft parameter sharing, where each task has its own parameters and modules, and the relationships between these parameters are regularized to encourage them to be similar. Other architectures that have been studied before include the work of Misra et al. (2016), where the authors explore trainable architectures for convolutional neural networks.

Domain adaptation. Another closely related line of work is on domain adaptation. The astute reader may notice the similarity between our study in Section 2.3 and domain adaptation. The crucial difference is that we minimize the multi-task learning objective, whereas in domain adaptation the objective is typically to minimize the loss on the target task. See Ben-David et al. (2010); Zhang et al. (2019b) and the references therein for other related work.

Optimization techniques. Guo et al. (2019) use ideas from the multi-armed bandit literature to develop a method for weighting each task.
Compared to their method, our SVD-based method is conceptually simpler and requires much less computation. Kendall et al. (2018) derive a weighted loss scheme by maximizing a Gaussian likelihood function. Roughly speaking, each task is reweighted by $1 / \sigma^2$, where $\sigma$ is the standard deviation of the Gaussian, and a penalty of $\log \sigma$ is added to the loss. The values of $\{\sigma_i\}_i$ are also optimized during training. The exact details can be found in the paper. The very recent work of Li and Vasconcelos (2019) shows empirical results using a similar idea of covariance normalization on imaging tasks for cross-domain transfer.

# 5 CONCLUSIONS AND FUTURE WORK

We studied the theory of multi-task learning in linear and ReLU-activated settings. We verified our theory and its practical implications through extensive synthetic and real-world experiments.

Our work opens up many interesting future questions. First, could we extend the guarantees for choosing optimization schemes to non-linear settings? Second, a limitation of our SVD-based optimization scheme is that it only applies to settings where tasks share the same data. Could we extend the method to heterogeneous task data? More broadly, we hope our work inspires further studies to better understand multi-task learning in neural networks and to guide its practice.

Acknowledgements. Thanks to Sharon Y. Li and Avner May for stimulating discussions during early stages of this work. We are grateful to the Stanford StatsML group and the anonymous referees for providing helpful comments that improved the quality of this work. We gratefully acknowledge the support of DARPA under Nos. FA87501720095 (D3M), FA86501827865 (SDH), and FA86501827882 (ASED); NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No.
N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys. H. Zhang is supported in part by Gregory Valiant's ONR YIP award (#1704417). The experiments are partly run on Stanford's SOAL cluster. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government.

# REFERENCES

Héctor Martínez Alonso and Barbara Plank. When is multitask learning effective? Semantic sequence prediction under varying data conditions. arXiv preprint arXiv:1612.02251, 2016.
Rie Kubota Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817-1853, 2005.
Andreas Argyriou, Andreas Maurer, and Massimiliano Pontil. An algorithm for transfer learning in a heterogeneous environment. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 71-85. Springer, 2008.
Maria-Florina Balcan, Yingyu Liang, David P Woodruff, and Hongyang Zhang. Matrix completion and related problems via strong duality. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), 2018.
Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 2020.
Jonathan Baxter. A model of inductive bias learning.
Journal of artificial intelligence research, 12: 149-198, 2000. +Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849-15854, 2019. +Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. In Learning Theory and Kernel Machines, pages 567-580. Springer, 2003. +Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1-2):151-175, 2010. +Joachim Bingel and Anders Søgaard. Identifying beneficial task relations for multi-task learning in deep neural networks. arXiv preprint arXiv:1702.08303, 2017. +John Blitzer, Ryan McDonald, and Fernando Pereira. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120-128. Association for Computational Linguistics, 2006. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +Simon S Du, Jason D Lee, Yuandong Tian, Barnabas Poczos, and Aarti Singh. Gradient descent learns one-hidden-layer cnn: Don't be afraid of spurious local minima. arXiv preprint arXiv:1712.00779, 2017. +Simon S Du, Wei Hu, Sham M Kakade, Jason D Lee, and Qi Lei. Few-shot learning via learning the representation, provably. arXiv preprint arXiv:2002.09434, 2020. +Ehsan Elhamifar, Guillermo Sapiro, and S Shankar Sastry. Dissimilarity-based sparse subset selection. IEEE transactions on pattern analysis and machine intelligence, 38(11):2182-2197, 2015. +Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. 
In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 109-117. ACM, 2004. +Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE international conference on computer vision, pages 2960-2967, 2013. + +Han Guo, Ramakanth Pasunuru, and Mohit Bansal. Autosem: Automatic task selection and mixing in multi-task learning. arXiv preprint arXiv:1904.04153, 2019. +Trevor Hastie, Robert Tibshirani, Jerome Friedman, and James Franklin. The elements of statistical learning: data mining, inference and prediction. *The Mathematical Intelligencer*, 27(2):83-85, 2005. +Minqing Hu and Bing Liu. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168-177. ACM, 2004. +Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7482-7491, 2018. +Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Provable guarantees for gradient-based meta-learning. arXiv preprint arXiv:1902.10644, 2019. +Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014. +Iasonas Kokkinos. Übernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6129-6138, 2017. +Weihao Kong, Raghav Somani, Zhao Song, Sham Kakade, and Sewoong Oh. Meta-learning for mixed linear regression. arXiv preprint arXiv:2002.08936, 2020. +Wouter M Kouw. An introduction to domain adaptation and transfer learning. arXiv preprint arXiv:1812.11806, 2018. 
+Tao Lei, Yu Zhang, Sida I Wang, Hui Dai, and Yoav Artzi. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4470-4481, 2018. +Xin Li and Dan Roth. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. Association for Computational Linguistics, 2002. +Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. In Conference On Learning Theory, pages 2-47, 2018. +Yunsheng Li and Nuno Vasconcelos. Efficient multi-domain learning by covariance normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5424-5433, 2019. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019a. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019b. +MM Mahmud and Sylvian Ray. Transfer learning using kolmogorov complexity: Basic theory and empirical evaluations. In Advances in neural information processing systems, pages 985-992, 2008. +Pasin Manurangsi and Daniel Reichman. The computational complexity of training relu (s). arXiv preprint arXiv:1810.04207, 2018. +Andreas Maurer. Bounds for linear multi-task learning. Journal of Machine Learning Research, 7 (Jan):117-139, 2006. + +Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018. +Charles A Micchelli and Massimiliano Pontil. Kernels for multi-task learning. 
In Advances in neural information processing systems, pages 921-928, 2005. +Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics, 2009. +Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3994-4003, 2016. +Guillaume Obozinski, Ben Taskar, and Michael I Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 20(2):231-252, 2010. +Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359, 2009. +Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. Association for Computational Linguistics, 2004. +Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 115-124. Association for Computational Linguistics, 2005. +Anastasia Pentina and Shai Ben-David. Multi-task and lifelong learning of kernels. In International Conference on Algorithmic Learning Theory, pages 194-208. Springer, 2015. +Anastasia Pentina and Christoph H Lampert. Multi-task learning with labeled and unlabeled tasks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2807-2816. JMLR.org, 2017. 
+Anastasia Pentina, Viktoriia Sharmanska, and Christoph H Lampert. Curriculum learning of multiple tasks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5492-5500, 2015. +Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017. +Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017. +Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, and Christian Gagné. A principled approach for learning task similarity in multitask learning. arXiv preprint arXiv:1903.09109, 2019. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642, 2013. +Trevor Standley, Amir R Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. Which tasks should be learned together in multi-task learning? arXiv preprint arXiv:1905.07553, 2019. +Joel A Tropp et al. An introduction to matrix concentration inequalities. Foundations and Trends® in Machine Learning, 8(1-2):1-230, 2015. + +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, 2018a. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018b. 
+Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2097-2106, 2017. +Yu Wang, David Wipf, Qing Ling, Wei Chen, and Ian James Wassell. Multi-task learning for subspace segmentation. 2015. +Janyce Wiebe, Theresa Wilson, and Claire Cardie. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3):165-210, 2005. +Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for classification with dirichlet process priors. Journal of Machine Learning Research, 8(Jan):35-63, 2007. +Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712-3722, 2018. +Hongyang Zhang, Vatsal Sharan, Moses Charikar, and Yingyu Liang. Recovery guarantees for quadratic tensors with limited observations. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2019a. +Yu Zhang and Qiang Yang. A survey on multi-task learning. arXiv preprint arXiv:1707.08114, 2017. +Yu Zhang and Dit-Yan Yeung. A regularization approach to learning task relationships in multitask learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 8(3):12, 2014. +Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael I Jordan. Bridging theory and algorithm for domain adaptation. arXiv preprint arXiv:1904.05801, 2019b. + +# A MISSING DETAILS OF SECTION 2 + +We fill in the missing details left from Section 2. In Section A.1, we provide rigorous arguments regarding the capacity of the shared module. 
In Section A.2, we fill in the details left from Section 2.3, including the proof of Theorem 2 and its extension to the ReLU model. In Section A.3, we provide the proof of Proposition 3 on the task reweighting schemes. We first describe the notations. + +Notations. We define the notations to be used later on. We denote $f(x) \lesssim g(x)$ if there exists an absolute constant $C$ such that $f(x) \leq C g(x)$ . The big-O notation $f(x) = O(g(x))$ means that $f(x) \lesssim g(x)$ . + +Suppose $A \in \mathbb{R}^{m \times n}$ , then $\lambda_{\max}(A)$ denotes its largest singular value and $\lambda_{\min}(A)$ denotes its $\min \{m, n\}$ -th largest singular value. Alternatively, we have $\lambda_{\min}(A) = \min_{x: \| x \| = 1} \| Ax \|$ . Let $\kappa(A) = \lambda_{\max}(A) / \lambda_{\min}(A)$ denote the condition number of $A$ . Let Id denote the identity matrix. Let $U^{\dagger}$ denote the Moore-Penrose pseudo-inverse of the matrix $U$ . Let $\| \cdot \|$ denote the Euclidean norm for vectors and the spectral norm for matrices. Let $\| \cdot \|_F$ denote the Frobenius norm of a matrix. Let $\langle A, B \rangle = \operatorname{Tr}(A^\top B)$ denote the inner product of two matrices. + +The sine function is defined as $\sin (u,v) = \sqrt{1 - \cos(u,v)^2}$ , where we assume that $\sin (u,v)\geq 0$ , which is without loss of generality for our study. + +# A.1 MISSING DETAILS OF SECTION 2.2 + +We describe the full details to show that our model setup captures the phenomenon that the shared module should be smaller than the sum of the capacities of the single-task models. We state the following proposition, which shows that the quality of the subspace $B$ in equation 1 determines the performance of multi-task learning. This supplements the result of Proposition 1. + +Proposition 4. In the optimum of $f(\cdot)$ (equation 1), each $A_{i}$ selects the vector $v$ within the column span of $g_{B}(X_{i})$ to minimize $L(v,y_{i})$ .
As a corollary, in the linear setting, the optimal $B$ can be achieved at a rotation matrix $B^{\star} \in \mathbb{R}^{d\times r}$ by maximizing + +$$ +\sum_ {i = 1} ^ {k} \left\langle B \left(B ^ {\top} X _ {i} ^ {\top} X _ {i} B\right) ^ {\dagger} B ^ {\top}, X _ {i} ^ {\top} y _ {i} y _ {i} ^ {\top} X _ {i} \right\rangle . \tag {7} +$$ + +Furthermore, any $B^{\star}$ which contains $\{\theta_i\}_{i=1}^k$ in its column subspace is optimal. In particular, for such a $B^{\star}$ , there exists $\{A_i^{\star}\}$ so that $B^{\star} A_i^{\star} = \theta_i$ for all $1 \leq i \leq k$ . + +Proof. Recall the MTL objective in the linear setting from equation 3 as follows: + +$$ +\min f (A _ {1}, A _ {2}, \ldots , A _ {k}; B) = \sum_ {i = 1} ^ {k} \left\| X _ {i} B A _ {i} - y _ {i} \right\| ^ {2}. +$$ + +Note that the linear layer $A_{i}$ can pick any combination within the subspace of $B$ . Therefore, we can assume without loss of generality that $B$ is a rotation matrix, i.e. $B^{\top}B = \mathrm{Id}$ . After fixing $B$ , since the objective $f(\cdot)$ is quadratic in $A_{i}$ for all $i$ , by the local optimality condition, we obtain that + +$$ +A _ {i} = \left(B ^ {\top} X _ {i} ^ {\top} X _ {i} B\right) ^ {\dagger} B ^ {\top} X _ {i} ^ {\top} y _ {i}. +$$ + +Substituting the solution of $A_{i}$ into $f(\cdot)$ , we obtain an objective over $B$ alone: + +$$ +h (B) = \sum_ {i = 1} ^ {k} \| X _ {i} B (B ^ {\top} X _ {i} ^ {\top} X _ {i} B) ^ {\dagger} B ^ {\top} X _ {i} ^ {\top} y _ {i} - y _ {i} \| _ {F} ^ {2}.
+$$ + +Next, note that + +$$ +\begin{array}{l} \left\| X _ {i} B (B ^ {\top} X _ {i} ^ {\top} X _ {i} B) ^ {\dagger} B ^ {\top} X _ {i} ^ {\top} y _ {i} \right\| _ {F} ^ {2} = \mathrm {T r} (y _ {i} ^ {\top} X _ {i} B (B ^ {\top} X _ {i} ^ {\top} X _ {i} B) ^ {\dagger} B ^ {\top} X _ {i} ^ {\top} y _ {i}) \\ = \langle B (B ^ {\top} X _ {i} ^ {\top} X _ {i} B) ^ {\dagger} B ^ {\top}, X _ {i} ^ {\top} y _ {i} y _ {i} ^ {\top} X _ {i} \rangle , \\ \end{array} +$$ + +where we used the fact that $A^\dagger AA^\dagger = A^\dagger$ for $A = B^\top X_i^\top X_iB$ in the first equation. Hence we have shown equation 7. + +For the final claim, as long as $B^{\star}$ contains $\{\theta_i\}_{i=1}^k$ in its column subspace, there exists $A_i^{\star}$ such that $B^{\star}A_i^{\star} = \theta_i$ . The $B^{\star}$ and $\{A_i^{\star}\}_{i=1}^k$ are optimal solutions because each $\theta_i$ is an optimal solution for the single-task problem. + +The above result on linear regression suggests the intuition that optimizing an MTL model reduces to optimizing over the span of $B$ . The intuition can be easily extended to linear classification tasks as well as mixtures of regression and classification tasks. + +Extension to the ReLU setting. If the shared module's capacity is larger than the total capacities of the STL models, then we can put all the STL model parameters into the shared module. As in the linear setting, the final output layer $A_{i}$ can pick out the optimal parameter for the $i$ -th task. This remains an optimal solution to the MTL problem in the ReLU setting. Furthermore, there is no transfer between any two tasks through the shared module. + +# A.2 MISSING DETAILS OF SECTION 2.3 + +# A.2.1 THE EFFECT OF COSINE SIMILARITY + +We consider the effect of varying the cosine similarity between single-task models in multi-task learning. We first describe the following proposition to solve the multi-task learning objective when the covariances of the task data are the same.
The idea is similar to the work of Ando and Zhang (2005) and we adapt it here for our study. + +Proposition 5. Consider the reweighted loss of equation 2 with the encoding function being linear, where the weights are $\{\alpha_i\}_{i=1}^k$ . Suppose the task features of every task have the same covariance: $X_i^\top X_i = \Sigma$ for all $1 \leq i \leq k$ . Let $\Sigma = VDV^\top$ be the singular value decomposition (SVD) of $\Sigma$ . Then the optimum of $f(\cdot)$ in equation 3 is achieved at: + +$$ +B ^ {\star} = V D ^ {- 1 / 2} C ^ {\star}, +$$ + +where $C^{\star}C^{\star \top}$ is the best rank- $r$ approximation subspace of $\sum_{i=1}^{k} \alpha_i U_i^\top y_i y_i^\top U_i$ and $X_i = U_i D V^\top$ is the SVD of $X_i$ , for each $1 \leq i \leq k$ . + +As a corollary, denote by $\lambda_1, \lambda_2, \ldots, \lambda_k$ the singular values of $D^{-1}V^{\top}\sum_{i=1}^{k}\alpha_iX_i^{\top}y_iy_i^{\top}X_i$ in decreasing order. Then the difference between an MTL model with hidden dimension $r$ and all the single-task models is bounded by $\sum_{i=r+1}^{k}\lambda_i^2$ . + +Proof. Note that $B^{\star}$ is obtained by maximizing + +$$ +\sum_ {i = 1} ^ {k} \langle B (B ^ {\top} X _ {i} ^ {\top} X _ {i} B) ^ {- 1} B ^ {\top}, \alpha_ {i} X _ {i} ^ {\top} y _ {i} y _ {i} ^ {\top} X _ {i} \rangle +$$ + +Let $C = DV^{\top}B$ . Clearly, there is a one-to-one mapping between $B$ and $C$ , and we have $B = V D^{-1}C$ . Hence the above is equivalent to maximizing over $C \in \mathbb{R}^{d \times r}$ with + +$$ +\begin{array}{l} \sum_ {i = 1} ^ {k} \left\langle C \left(C ^ {\top} C\right) ^ {- 1} C ^ {\top}, D ^ {- 1} V ^ {\top} \left(\sum_ {i = 1} ^ {k} \alpha_ {i} X _ {i} ^ {\top} y _ {i} y _ {i} ^ {\top} X _ {i}\right) V D ^ {- 1} \right\rangle \\ = \langle C (C ^ {\top} C) ^ {- 1} C ^ {\top}, \sum_ {i = 1} ^ {k} \alpha_ {i} U _ {i} ^ {\top} y _ {i} y _ {i} ^ {\top} U _ {i} \rangle .
\\ \end{array} +$$ + +Note that $C(C^{\top}C)^{-1}C^{\top}$ is a projection matrix onto a subspace of dimension $r$ . Hence the maximum (denote by $C^\star$ ) is attained at the best rank- $r$ approximation subspace of $\sum_{i=1}^{k} \alpha_i U_i^\top y_i y_i^\top U_i$ . + +To illustrate the above proposition, consider a simple setting where $X_{i}$ is the identity for every $1 \leq i \leq k$ , and $y_{i} = e_{i}$ , i.e. the $i$ -th basis vector. Note that the optimal solution for the $i$ -th task is $(X_{i}^{\top}X_{i})^{-1}X_{i}^{\top}y_{i} = y_{i}$ . Hence the optimal solutions are orthogonal to each other for all the tasks, with $\lambda_{i} = 1$ for all $1 \leq i \leq k$ , and the minimum STL error is zero for all tasks. + +Consider the MTL model with hidden dimension $r$ . By Proposition 5, the minimum MTL error is achieved by the best rank- $r$ approximation subspace to $\sum_{i=1}^{k} X_i^\top y_i y_i^\top X_i = \sum_{i=1}^{k} y_i y_i^\top$ . Denote the optimum as $B_r^\star$ . The MTL error is: + +$$ +\sum_ {i = 1} ^ {k} \| y _ {i} \| ^ {2} - \langle \sum_ {i = 1} ^ {k} y _ {i} y _ {i} ^ {\top}, B _ {r} ^ {\star} B _ {r} ^ {\star^ {\top}} \rangle = k - r. +$$ + +Different data covariance. We provide upper bounds on the quality of MTL solutions for different data covariances, which depend on the relatedness of all the tasks. The following procedure gives the precise statement. Consider $k$ regression tasks with data $\{(X_i,y_i)\}_{i = 1}^k$ . Let $\theta_{i} = (X_{i}^{\top}X_{i})^{\dagger}X_{i}^{\top}y_{i}$ denote the optimal solution of each regression task. Let $W\in \mathbb{R}^{d\times k}$ denote the matrix whose $i$ -th column is equal to $\theta_{i}$ . Consider the following procedure, repeated for $i = 1, \ldots, k$ , for orthogonalizing $W$ .
+ +a) Let $W_{i}^{\star} \in \mathbb{R}^{d}$ denote the vector which maximizes $\sum_{j=1}^{k} \left\langle \frac{X_{j}B}{\|X_{j}B\|}, y_{j} \right\rangle^{2}$ over $B \in \mathbb{R}^{d}$ ; +b) Denote by $\lambda_{i} = \sum_{j = 1}^{k}\langle \frac{X_{j}W_{i}^{\star}}{\|X_{j}W_{i}^{\star}\|},y_{j}\rangle^{2};$ +c) For each $1 \leq j \leq k$ , project $X_{j}W_{i}^{\star}$ off from every column of $X_{j}$ . Go to Step a). + +Proposition 6. Suppose that $r \leq d$ . Let $B^{\star}$ denote the optimal MTL solution of capacity $r$ in the shared module. Denote by $OPT = \sum_{i=1}^{k} (\|y_i\|^2 - \|X_i(X_i^\top X_i)^\dagger X_i^\top y_i\|^2)$ . Then $h(B^{\star}) \leq OPT - \sum_{i=r+1}^{d} \lambda_i$ . + +Proof. It suffices to show that $OPT$ is equal to $\sum_{i=1}^{k} \lambda_i$ . The result then follows since $h(B^{\star})$ is less than the error given by $W_1^{\star}, \ldots, W_k^{\star}$ , which is equal to $OPT - \sum_{i=r+1}^{d} \lambda_i$ . + +# A.2.2 PROOF OF THEOREM 2 + +We fill in the proof of Theorem 2. First, we restate the result rigorously as follows. + +Theorem 2. For $i = 1,2$ , let $(X_i,y_i) \in (\mathbb{R}^{m_i \times d},\mathbb{R}^{m_i})$ denote two linear regression tasks with parameters $\theta_i \in \mathbb{R}^d$ . Suppose that each row of $X_1$ is drawn independently from a distribution with covariance $\Sigma_1 \in \mathbb{R}^{d \times d}$ and bounded $\ell_2$ -norm $\sqrt{L}$ . Assume that $\theta_1^\top \Sigma_1 \theta_1 = 1$ w.l.o.g. + +Let $c \in [\kappa(X_2) \sin(\theta_1, \theta_2), 1/3]$ denote the desired error margin. Denote by $(B^\star, A_1^\star, A_2^\star)$ the optimal MTL solution.
With probability $1 - \delta$ over the randomness of $(X_1, y_1)$ , when + +$$ +m _ {1} \gtrsim \max \left(\frac {L \| \Sigma_ {1} \| \log \frac {d}{\delta}}{\lambda_ {\min} ^ {2} (\Sigma_ {1})}, \frac {\kappa (\Sigma_ {1}) \kappa^ {2} (X _ {2})}{c ^ {2}} \| y _ {2} \| ^ {2}, \frac {\kappa^ {2} (\Sigma_ {1}) \kappa^ {4} (X _ {2})}{c ^ {4}} \sigma_ {1} ^ {2} \log \frac {1}{\delta}\right), +$$ + +we have that $\| B^{\star}A_{2}^{\star} - \theta_{2}\| /\| \theta_{2}\| \leq 6c + \frac{1}{1 - 3c}\| \varepsilon_{2}\| /\| X_{2}\theta_{2}\|$ . + +We make several remarks to provide more insight on Theorem 2. + +- Theorem 2 guarantees positive transfer in MTL when the source and target models are close and the number of source samples is large. While the intuition is folklore in MTL, we provide a formal justification in the linear and ReLU models to quantify the phenomenon. +- The error bound decreases with $c$ , hence the smaller $c$ is, the better. On the other hand, the required number of data points $m_{1}$ increases. Hence there is a trade-off between accuracy and the amount of data. +- $c$ is assumed to be at most $1/3$ . This assumption arises when we deal with the label noise of task 2. If there is no noise for task 2, then this assumption is not needed. If there is noise for task 2, this assumption is satisfied when $\sin(\theta_1, \theta_2)$ is less than $1/(3\kappa(X_2))$ . In synthetic experiments, we observe that the dependences on $\kappa(X_2)$ and $\sin(\theta_1, \theta_2)$ both arise in the performance of task 2, cf. Figure 3 and Figure 7, respectively. + +The proof of Theorem 2 consists of two steps. + +a) We show that the angle between $B^{\star}$ and $\theta_{1}$ will be small. Once this is established, we get a bound on the angle between $B^{\star}$ and $\theta_{2}$ via the triangle inequality. +b) We bound the distance between $B^{\star}A_{2}^{\star}$ and $\theta_{2}$ . The distance consists of two parts. One part comes from $B^{\star}$ , i.e.
the angle between $B^{\star}$ and $\theta_{2}$ . The second part comes from $A_{2}$ , i.e. the estimation error of the norm of $\theta_{2}$ , which involves the signal-to-noise ratio of task two. + +We first show the following geometric fact, which will be used later in the proof. + +Fact 7. Let $a, b \in \mathbb{R}^d$ denote two unit vectors. Suppose that $X \in \mathbb{R}^{m \times d}$ has full column rank with condition number denoted by $\kappa = \kappa(X)$ . Then we have + +$$ +| \sin (X a, X b) | \geq \frac {1}{\kappa^ {2}} | \sin (a, b) |. +$$ + +Proof. Let $X = UDV^{\top}$ be the SVD of $X$ . Since $X$ has full column rank by assumption, we have $U^{\top}U = \mathrm{Id}$ and $V^{\top}V = VV^{\top} = \mathrm{Id}$ . Clearly, we have $\sin(Xa, Xb) = \sin(DV^{\top}a, DV^{\top}b)$ . Denote by $a' = V^{\top}a$ and $b' = V^{\top}b$ . We also have that $a'$ and $b'$ are both unit vectors, and $\sin(a', b') = \sin(a, b)$ . Let $\lambda_1, \ldots, \lambda_d$ denote the singular values of $X$ . Then, + +$$ +\begin{array}{l} \sin^ {2} (D a ^ {\prime}, D b ^ {\prime}) = 1 - \frac {\left(\sum_ {i = 1} ^ {d} \lambda_ {i} ^ {2} a _ {i} ^ {\prime} b _ {i} ^ {\prime}\right) ^ {2}}{\left(\sum_ {i = 1} ^ {d} \lambda_ {i} ^ {2} a _ {i} ^ {\prime 2}\right) \left(\sum_ {i = 1} ^ {d} \lambda_ {i} ^ {2} b _ {i} ^ {\prime 2}\right)} \\ = \frac {\sum_ {1 \leq i < j \leq d} \lambda_ {i} ^ {2} \lambda_ {j} ^ {2} \left(a _ {i} ^ {\prime} b _ {j} ^ {\prime} - a _ {j} ^ {\prime} b _ {i} ^ {\prime}\right) ^ {2}}{\left(\sum_ {i = 1} ^ {d} \lambda_ {i} ^ {2} a _ {i} ^ {\prime 2}\right) \left(\sum_ {i = 1} ^ {d} \lambda_ {i} ^ {2} b _ {i} ^ {\prime 2}\right)} \\ \geq \frac {\lambda_ {\operatorname* {m i n}} ^ {4}}{\lambda_ {\operatorname* {m a x}} ^ {4}} \cdot \sum_ {1 \leq i < j \leq d} \left(a _ {i} ^ {\prime} b _ {j} ^ {\prime} - a _ {j} ^ {\prime} b _ {i} ^ {\prime}\right) ^ {2} \\ = \frac {1}{\kappa^ {4}} \left(\left(\sum_ {i = 1} ^ {d} a _ {i} ^ {\prime 2}\right) \left(\sum_ {i = 1} ^ {d} b _ {i} ^ {\prime 2}\right)
- \left(\sum_ {i = 1} ^ {d} a _ {i} ^ {\prime} b _ {i} ^ {\prime}\right) ^ {2}\right) = \frac {1}{\kappa^ {4}} \sin^ {2} \left(a ^ {\prime}, b ^ {\prime}\right). \\ \end{array} +$$ + +This concludes the proof. + +We first show the following lemma, which bounds the angle between $B^{\star}$ and $\theta_{2}$ . + +Lemma 8. In the setting of Theorem 2, with probability $1 - \delta$ over the randomness of task one, we have that + +$$ +\left| \sin \left(B ^ {\star}, \theta_ {2}\right) \right| \leq \sin \left(\theta_ {1}, \theta_ {2}\right) + c / \kappa \left(X _ {2}\right). +$$ + +Proof. We note that $h(B^{\star}) \geq \| y_1\| ^2$ by the optimality of $B^{\star}$ . Furthermore, $\langle \frac{X_2B^{\star}}{\|X_2B^{\star}\|},y_2\rangle ^2 \leq \| y_2\| ^2$ . Hence we obtain that + +$$ +\left\langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, y _ {1} \right\rangle^ {2} \geq \| y _ {1} \| ^ {2} - \| y _ {2} \| ^ {2}. +$$ + +For the left-hand side, + +$$ +\begin{array}{l} \langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, y _ {1} \rangle^ {2} = \langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, X _ {1} \theta_ {1} + \varepsilon_ {1} \rangle^ {2} \\ = \langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, X _ {1} \theta_ {1} \rangle^ {2} + \langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, \varepsilon_ {1} \rangle^ {2} + 2 \langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, X _ {1} \theta_ {1} \rangle \langle \frac {X _ {1} B ^ {\star}}{\| X _ {1} B ^ {\star} \|}, \varepsilon_ {1} \rangle \\ \end{array} +$$ + +Note that the second term is a chi-squared random variable with expectation $\sigma_1^2$ . Hence it is bounded by $\sigma_1^2 \sqrt{\log \frac{1}{\delta}}$ with probability at least $1 - \delta$ . Similarly, the third term is bounded by $2 \| X_1 \theta_1 \| \sigma_1 \sqrt{\log \frac{1}{\delta}}$ with probability $1 - \delta$ .
Therefore, we obtain the following: + +$$ +\| X _ {1} \theta_ {1} \| ^ {2} \cos^ {2} (X _ {1} B ^ {\star}, X _ {1} \theta_ {1}) \geq \| y _ {1} \| ^ {2} - \| y _ {2} \| ^ {2} - (\sigma_ {1} ^ {2} + 2 \sigma_ {1} \| X _ {1} \theta_ {1} \|) \sqrt {\log \frac {1}{\delta}} +$$ + +Note that + +$$ +\begin{array}{l} \left\| y _ {1} \right\| ^ {2} \geq \left\| X _ {1} \theta_ {1} \right\| ^ {2} + 2 \left\langle X _ {1} \theta_ {1}, \varepsilon_ {1} \right\rangle \\ \geq \| X _ {1} \theta_ {1} \| ^ {2} - 2 \| X _ {1} \theta_ {1} \| \sigma_ {1} \sqrt {\log \frac {1}{\delta}}. \\ \end{array} +$$ + +Therefore, + +$$ +\begin{array}{l} \left\| X _ {1} \theta_ {1} \right\| ^ {2} \cos^ {2} \left(X _ {1} B ^ {\star}, X _ {1} \theta_ {1}\right) \geq \left\| X _ {1} \theta_ {1} \right\| ^ {2} - \left\| y _ {2} \right\| ^ {2} - \left(\sigma_ {1} ^ {2} + 3 \sigma_ {1} \left\| X _ {1} \theta_ {1} \right\|\right) \sqrt {\log \frac {1}{\delta}} \\ \Rightarrow \sin^ {2} (X _ {1} B ^ {\star}, X _ {1} \theta_ {1}) \leq \frac {\| y _ {2} \| ^ {2}}{\| X _ {1} \theta_ {1} \| ^ {2}} + \frac {4 \sigma_ {1} \sqrt {\log \frac {1}{\delta}}}{\| X _ {1} \theta_ {1} \|} \\ \Rightarrow \sin^ {2} (B ^ {\star}, \theta_ {1}) \leq \kappa^ {2} (X _ {1}) \left(\frac {\| y _ {2} \| ^ {2}}{\| X _ {1} \theta_ {1} \| ^ {2}} + \frac {4 \sigma_ {1} \sqrt {\log \frac {1}{\delta}}}{\| X _ {1} \theta_ {1} \|}\right) \tag {by Lemma 7} \\ \end{array} +$$ + +By matrix Bernstein inequality (see e.g. Tropp et al. (2015)), when $m_{1} \geq 10\|\Sigma_{1}\| \log \frac{d}{\delta} / \lambda_{\min}^{2}(\Sigma_{1})$ , we have that: + +$$ +\left\| \frac {1}{m _ {1}} X _ {1} ^ {\top} X _ {1} - \Sigma_ {1} \right\| \leq \frac {1}{2} \lambda_ {\min } (\Sigma_ {1}). +$$ + +Hence we obtain that $\kappa^2(X_1) \leq 3\kappa(\Sigma_1)$ and $\|X_1\theta_1\|^2 \geq m_1 \cdot \theta_1^\top \Sigma_1\theta_1/2 \geq m_1/2$ (where we assumed that $\theta_1^\top\Sigma_1\theta_1 = 1$ ). 
Therefore, + +$$ +\sin^ {2} (B ^ {\star}, \theta_ {1}) \leq 3 \kappa (\Sigma_ {1}) \left(\frac {\| y _ {2} \| ^ {2}}{m _ {1} / 2} + \frac {4 \sigma_ {1} \sqrt {\log \frac {1}{\delta}}}{\sqrt {m _ {1} / 2}}\right), +$$ + +which is at most $c^2 / \kappa^2(X_2)$ by our setting of $m_1$ . Therefore, the conclusion follows by the triangle inequality (noting that both $c$ and $\sin(\theta_1, \theta_2)$ are less than $1/2$ ). + +Based on the above lemma, we are now ready to prove Theorem 2. + +Proof of Theorem 2. Note that in the MTL model, after obtaining $B^{\star}$ , we then solve the linear layer for each task. For task 2, this gives weight value $A_2^{\star} \coloneqq \langle X_2B^{\star},y_2\rangle /\| X_2B^{\star}\| ^2$ . Thus the regression coefficient vector for task 2 is $B^{\star}A_{2}^{\star}$ . For the rest of the proof, we focus on bounding the distance between $B^{\star}A_{2}^{\star}$ and $\theta_{2}$ . By the triangle inequality, + +$$ +\left\| B ^ {\star} A _ {2} ^ {\star} - \theta_ {2} \right\| \leq \frac {\left| \left\langle X _ {2} B ^ {\star} , \varepsilon_ {2} \right\rangle \right|}{\left\| X _ {2} B ^ {\star} \right\| ^ {2}} + \left| \frac {\left\langle X _ {2} B ^ {\star} , X _ {2} \theta_ {2} \right\rangle}{\left\| X _ {2} B ^ {\star} \right\| ^ {2}} - \left\| \theta_ {2} \right\| \right| + \left\| \left\| \theta_ {2} \right\| B ^ {\star} - \theta_ {2} \right\|. \tag {8} +$$ + +Note that the second term of equation 8 is equal to + +$$ +\frac {| \langle X _ {2} B ^ {\star} , X _ {2} (\theta_ {2} - \| \theta_ {2} \| B ^ {\star}) \rangle |}{\| X _ {2} B ^ {\star} \| ^ {2}} \leq \kappa (X _ {2}) \cdot \| \theta_ {2} - \| \theta_ {2} \| B ^ {\star} \|.
+$$ + +The first term of equation 8 is bounded by + +$$ +\frac {\left\| \varepsilon_ {2} \right\|}{\left\| X _ {2} B ^ {\star} \right\|} \leq \frac {\left\| \varepsilon_ {2} \right\| \left\| \theta_ {2} \right\|}{\left\| X _ {2} \theta_ {2} \right\| - \left\| X _ {2} \left(\theta_ {2} - \left\| \theta_ {2} \right\| B ^ {\star}\right) \right\|}. \tag {9} +$$ + +Lastly, we have that + +$$ +\left\| \theta_ {2} - \left\| \theta_ {2} \right\| B ^ {\star} \right\| ^ {2} = 2 \left\| \theta_ {2} \right\| ^ {2} \left(1 - \cos \left(B ^ {\star}, \theta_ {2}\right)\right) \leq 2 \left\| \theta_ {2} \right\| ^ {2} \sin^ {2} \left(B ^ {\star}, \theta_ {2}\right) +$$ + +By Lemma 8, we have + +$$ +\left| \sin \left(B ^ {\star}, \theta_ {2}\right) \right| \leq \sin \left(\theta_ {1}, \theta_ {2}\right) + c / \kappa \left(X _ {2}\right) +$$ + +Therefore, we conclude that equation 9 is at most + +$$ +\begin{array}{l} \frac {\left\| \varepsilon_ {2} \right\| \cdot \left\| \theta_ {2} \right\|}{\left\| X _ {2} \theta_ {2} \right\| - \sqrt {2} \lambda_ {\max } (X _ {2}) \left\| \theta_ {2} \right\| \sin (\theta_ {1} , \theta_ {2}) - \sqrt {2} c \lambda_ {\min } (X _ {2}) \left\| \theta_ {2} \right\|} \\ \leq \frac {\left\| \varepsilon_ {2} \right\| \cdot \left\| \theta_ {2} \right\|}{\left\| X _ {2} \theta_ {2} \right\| - 3 c \lambda_ {\min } (X _ {2}) \left\| \theta_ {2} \right\|} \\ \leq \frac {1}{1 - 3 c} \frac {\left\| \varepsilon_ {2} \right\| \cdot \left\| \theta_ {2} \right\|}{\left\| X _ {2} \theta_ {2} \right\|} \\ \end{array} +$$ + +Thus equation 8 is at most the following. + +$$ +\begin{array}{l} \| \theta_ {2} \| \cdot \left(\frac {1}{1 - 3 c} \frac {\| \varepsilon_ {2} \|}{\| X _ {2} \theta_ {2} \|} + \sqrt {2} (\kappa (X _ {2}) + 1) \cdot \sin (B ^ {\star}, \theta_ {2})\right) \\ \leq \| \theta_ {2} \| \cdot \left(\frac {1}{1 - 3 c} \frac {\| \varepsilon_ {2} \|}{\| X _ {2} \theta_ {2} \|} + 6 c\right).
\\ \end{array} +$$ + +Hence we obtain the desired estimation error of $B^{\star}A_{2}^{\star}$ . + +# A.2.3 EXTENSION TO THE RELU MODEL + +In this part, we extend Theorem 2 to the ReLU model. Note that the problem is reduced to the following objective. + +$$ +\max _ {B \in \mathbb {R} ^ {d}} g (B) = \left\langle \frac {\operatorname {R e L U} \left(X _ {1} B\right)}{\| \operatorname {R e L U} \left(X _ {1} B\right) \|}, y _ {1} \right\rangle^ {2} + \left\langle \frac {\operatorname {R e L U} \left(X _ {2} B\right)}{\| \operatorname {R e L U} \left(X _ {2} B\right) \|}, y _ {2} \right\rangle^ {2} \tag {10} +$$ + +We make a crucial assumption that task 1's input $X_{1}$ follows the Gaussian distribution. Note that making distributional assumptions is necessary, because for worst-case inputs, even optimizing a single ReLU function under the squared loss is NP-hard (Manurangsi and Reichman (2018)). We state our result formally as follows. + +Theorem 9. Let $(X_{1},y_{1})\in (\mathbb{R}^{m_{1}\times d},\mathbb{R}^{m_{1}})$ and $(X_{2},y_{2})\in (\mathbb{R}^{m_{2}\times d},\mathbb{R}^{m_{2}})$ denote two tasks. Suppose that each row of $X_{1}$ is drawn from the standard Gaussian distribution. The labels $y_{i} = a_{i}\cdot \mathrm{ReLU}(X_{i}\theta_{i}) + \varepsilon_{i}$ are generated via the ReLU model with $\theta_{1},\theta_{2}\in \mathbb{R}^{d}$ . Let $\mathbb{E}\left[(a_i\cdot \mathrm{ReLU}(X_i\theta_i))_j^2\right] = 1$ for every $1\leq j\leq m_1$ without loss of generality, and let $\sigma_1^2$ denote the variance of every entry of $\varepsilon_{1}$ . + +Suppose that $c \geq \sin(\theta_1, \theta_2) / \kappa(X_2)$ . Denote by $(B^\star, A_1^\star, A_2^\star)$ the optimal MTL solution of equation 10.
With probability $1 - \delta$ over the randomness of $(X_1, y_1)$ , when + +$$ +m _ {1} \gtrsim \max \left(\frac {d \log d}{c ^ {2}} (\frac {1}{c ^ {2}} + \log d), \frac {\| y _ {2} \| ^ {2}}{c ^ {2}}\right), +$$ + +we have that the estimation error is at most: + +$$ +\sin \left(B ^ {\star}, \theta_ {2}\right) \leq \sin \left(\theta_ {1}, \theta_ {2}\right) + O \left(c / \kappa \left(X _ {2}\right)\right), +$$ + +$$ +\frac {\left| A _ {2} ^ {\star} - a _ {2} \right|}{a _ {2}} \leq O (c) + \frac {1}{(1 - O (c))} \cdot \frac {\left\| \varepsilon_ {2} \right\|}{a _ {2} \cdot \left\| \operatorname {R e L U} \left(X _ {2} \theta_ {2}\right) \right\|} +$$ + +Proof. The proof follows a similar structure to that of Theorem 2. Without loss of generality, we can assume that $\theta_{1},\theta_{2}$ are both unit vectors. We first bound the angle between $B^{\star}$ and $\theta_{1}$ . + +By the optimality of $B^{\star}$ , we have that: + +$$ +\left\langle \frac {\operatorname {R e L U} \left(X _ {1} B ^ {\star}\right)}{\| \operatorname {R e L U} \left(X _ {1} B ^ {\star}\right) \|}, y _ {1} \right\rangle^ {2} \geq \left\langle \frac {\operatorname {R e L U} \left(X _ {1} \theta_ {1}\right)}{\| \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \|}, y _ {1} \right\rangle^ {2} - \| y _ {2} \| ^ {2} +$$ + +From this we obtain: + +$$ +\begin{array}{l} a _ {1} ^ {2} \cdot \left\langle \frac {\operatorname {R e L U} \left(X _ {1} B ^ {\star}\right)}{\| \operatorname {R e L U} \left(X _ {1} B ^ {\star}\right) \|}, \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \right\rangle^ {2} \\ \geq a _ {1} ^ {2} \cdot \| \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \| ^ {2} - \| y _ {2} \| ^ {2} - \left(\sigma_ {1} ^ {2} + 4 a _ {1} \cdot \sigma_ {1} \| \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \|\right) \sqrt {\log \frac {1}{\delta}} \tag {11} \\ \end{array} +$$ + +Note that each entry of $\mathrm{ReLU}(X_1\theta_1)$ is a truncated Gaussian random variable.
By the Hoeffding bound, with probability $1 - \delta$ we have + +$$ +\left| \left\| \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \right\| ^ {2} - \frac {m _ {1}}{2} \right| \leq \sqrt {\frac {m _ {1}}{2} \log \frac {1}{\delta}}. +$$ + +As for $\langle \mathrm{ReLU}(X_1B^\star),\mathrm{ReLU}(X_1\theta_1)\rangle$ , we will use an epsilon-net argument over $B^{\star}$ to show the concentration. For a fixed $B^{\star}$ , we note that this is a sum of independent random variables that are all bounded within $O(\log \frac{m_1}{\delta})$ with probability $1 - \delta$ . Denote by $\phi$ the angle between $B^{\star}$ and $\theta_{1}$ . A standard geometric fact (see e.g. Lemma 1 of Du et al. (2017)) states that, for a random Gaussian vector $x\in \mathbb{R}^d$ , + +$$ +\mathbb {E} _ {x} \left[ \operatorname {R e L U} \left(x ^ {\top} B ^ {\star}\right) \cdot \operatorname {R e L U} \left(x ^ {\top} \theta_ {1}\right) \right] = \frac {\cos \phi}{2} + \frac {\cos \phi (\tan \phi - \phi)}{2 \pi} := \frac {g (\phi)}{2}. +$$ + +Therefore, by applying Bernstein's inequality and a union bound, with probability $1 - \eta$ we have: + +$$ +\left| \left\langle \operatorname {R e L U} \left(X _ {1} B ^ {\star}\right), \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \right\rangle - m _ {1} g (\phi) / 2 \right| \leq 2 \sqrt {m _ {1} g (\phi) \log \frac {1}{\eta}} + \frac {2}{3} \log \frac {1}{\eta} \log \frac {m _ {1}}{\delta} +$$ + +By standard arguments, there exists a set of $d^{O(d)}$ unit vectors $S$ such that for any other unit vector $u$ there exists $\hat{u} \in S$ such that $\| u - \hat{u} \| \leq \min(1 / d^3, c^2 / \kappa^2(X_2))$ .
By setting $\eta = d^{-O(d)}$ and taking a union bound over all unit vectors in $S$ , we have that there exists $\hat{u} \in S$ satisfying $\| B^\star - \hat{u} \| \leq \min(1 / d^3, c^2 / \kappa^2(X_2))$ and the following: + +$$ +\begin{array}{l} \left| \left\langle \operatorname {R e L U} \left(X _ {1} \hat {u}\right), \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \right\rangle - m _ {1} g \left(\phi^ {\prime}\right) / 2 \right| \lesssim \sqrt {m _ {1} d \log d} + d \log^ {2} d \\ \leq 2 m _ {1} c ^ {2} / \kappa^ {2} (X _ {2}) \quad (\text {by our setting of } m _ {1}) \\ \end{array} +$$ + +where $\phi^{\prime}$ is the angle between $\hat{u}$ and $\theta_{1}$ . Note that + +$$ +\begin{array}{l} \left| \langle \operatorname {R e L U} \left(X _ {1} \hat {u}\right) - \operatorname {R e L U} \left(X _ {1} B ^ {\star}\right), \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \rangle \right| \leq \| X _ {1} (\hat {u} - B ^ {\star}) \| \cdot \| \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \| \\ \leq c ^ {2} / \kappa^ {2} (X _ {2}) \cdot O (m _ {1}) \\ \end{array} +$$ + +Together we have shown that + +$$ +\left| \left\langle \operatorname {R e L U} \left(X _ {1} B ^ {\star}\right), \operatorname {R e L U} \left(X _ {1} \theta_ {1}\right) \right\rangle - m _ {1} g \left(\phi^ {\prime}\right) / 2 \right| \leq c ^ {2} / \kappa^ {2} \left(X _ {2}\right) \cdot O \left(m _ {1}\right). +$$ + +Combined with equation 11, by our setting of $m_{1}$ , it is not hard to show that + +$$ +g \left(\phi^ {\prime}\right) \geq 1 - O \left(c ^ {2} / \kappa^ {2} \left(X _ {2}\right)\right). +$$ + +Note that + +$$ +\begin{array}{l} 1 - g \left(\phi^ {\prime}\right) = 1 - \cos \phi^ {\prime} - \frac {\cos \phi^ {\prime} (\tan \phi^ {\prime} - \phi^ {\prime})}{\pi} \\ \leq 1 - \cos \phi^ {\prime} = 2 \sin^ {2} \frac {\phi^ {\prime}}{2} \lesssim c ^ {2} / \kappa^ {2} (X _ {2}), \\ \end{array} +$$ + +which implies that $\sin^2\phi' \lesssim c^2/\kappa^2(X_2)$ (since $\cos \frac{\phi'}{2} \geq 0.9$ ).
Finally note that $\| \hat{u} - B^\star \| \leq c^2/\kappa^2(X_2)$ , hence + +$$ +\left\| \hat {u} - B ^ {\star} \right\| ^ {2} = 2 \left(1 - \cos (\hat {u}, B ^ {\star})\right) \geq \sin^ {2} (\hat {u}, B ^ {\star}). +$$ + +Overall, we conclude that $\sin (B^{\star},\theta_1)\leq O(c / \kappa (X_2))$ . Hence + +$$ +\sin \left(B ^ {\star}, \theta_ {2}\right) \leq \sin \left(\theta_ {1}, \theta_ {2}\right) + O \left(c / \kappa \left(X _ {2}\right)\right). +$$ + +For the estimation of $a_2$ , we have + +$$ +\begin{array}{l} \left| \frac {\langle \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) , y _ {2} \rangle}{\| \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) \| ^ {2}} - a _ {2} \right| \leq \frac {| \langle \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) , \varepsilon_ {2} \rangle |}{\| \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) \| ^ {2}} \\ + a _ {2} \left| \frac {\langle \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) , \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) - \operatorname {R e L U} \left(X _ {2} \theta_ {2}\right) \rangle}{\| \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) \| ^ {2}} \right| \\ \end{array} +$$ + +The first part is at most + +$$ +\begin{array}{l} \frac {\left\| \varepsilon_ {2} \right\|}{\left\| \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) \right\|} \leq \frac {\left\| \varepsilon_ {2} \right\|}{\left\| \operatorname {R e L U} \left(X _ {2} \theta_ {2}\right) \right\| - \left\| \operatorname {R e L U} \left(X _ {2} \theta_ {2}\right) - \operatorname {R e L U} \left(X _ {2} B ^ {\star}\right) \right\|} \\ \leq \frac {1}{1 - O (c)} \frac {\| \varepsilon_ {2} \|}{\| \operatorname {R e L U} (X _ {2} \theta_ {2}) \|} \\ \end{array} +$$ + +Similarly, we can show that the second part is at most $O(c)$ . Therefore, the proof is complete.
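The arc-cosine kernel identity invoked in the proof is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch (assuming numpy; the dimension, sample count, and angle are arbitrary test values, not from the paper):

```python
import numpy as np

# Monte Carlo sanity check of the identity used in the proof: for unit
# vectors u, v at angle phi and a standard Gaussian vector x,
#   E[ReLU(x^T u) * ReLU(x^T v)] = g(phi) / 2,
# where g(phi) = cos(phi) + cos(phi) * (tan(phi) - phi) / pi.
# d, m, and phi below are arbitrary test values.
rng = np.random.default_rng(0)
d, m = 50, 400_000
phi = 0.8

u = np.zeros(d)
u[0] = 1.0
v = np.zeros(d)
v[0], v[1] = np.cos(phi), np.sin(phi)  # unit vector at angle phi from u

X = rng.standard_normal((m, d))
relu = lambda t: np.maximum(t, 0.0)
empirical = float(np.mean(relu(X @ u) * relu(X @ v)))

g = np.cos(phi) + np.cos(phi) * (np.tan(phi) - phi) / np.pi
print(empirical, g / 2)  # the two values should agree to a few decimals
```

The same check can be repeated for several values of $\phi$ ; note that the closed form $\frac{\cos \phi}{2} + \frac{\cos \phi (\tan \phi - \phi)}{2\pi}$ simplifies to $\frac{(\pi - \phi)\cos \phi + \sin \phi}{2\pi}$ .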
+ +# A.3 PROOF OF PROPOSITION 3 + +In this part, we present the proof of Proposition 3. In fact, we present a more refined result, by showing that all local minima are global minima for the reweighted loss in the linear case. + +$$ +f \left(A _ {1}, A _ {2}, \dots , A _ {k}; B\right) = \sum_ {i = 1} ^ {k} \alpha_ {i} \| X _ {i} B A _ {i} - y _ {i} \| _ {F} ^ {2}. \tag {12} +$$ + +The key is to reduce the MTL objective $f(\cdot)$ to low-rank matrix approximation, and apply recent results by Balcan et al. (2018) which show that there are no spurious local minima for the latter problem. + +Lemma 10. Assume that $X_{i}^{\top}X_{i} = \alpha_{i}\Sigma$ with $\alpha_{i} > 0$ for all $1\leq i\leq k$ . Then all the local minima of $f(A_{1},\ldots ,A_{k};B)$ are global minima of equation 3. + +Proof. We first transform the problem from the space of $B$ to the space of $C$ . Note that this is without loss of generality, since there is a one-to-one mapping between $B$ and $C$ with $C = DV^{\top}B$ . In this case, the corresponding objective becomes the following. + +$$ +\begin{array}{l} g \left(A _ {1}, \dots , A _ {k}; C\right) = \sum_ {i = 1} ^ {k} \alpha_ {i} \cdot \left\| U _ {i} C A _ {i} - y _ {i} \right\| ^ {2} \\ = \sum_ {i = 1} ^ {k} \| C (\sqrt {\alpha_ {i}} A _ {i}) - \sqrt {\alpha_ {i}} U _ {i} ^ {\top} y _ {i} \| ^ {2} + \sum_ {i = 1} ^ {k} \alpha_ {i} \cdot (\| y _ {i} \| ^ {2} - \| U _ {i} ^ {\top} y _ {i} \| ^ {2}) \\ \end{array} +$$ + +The latter expression is a constant. Hence it does not affect the optimization solution. For the former, denote by $A \in \mathbb{R}^{r \times k}$ the matrix stacking the $\sqrt{\alpha_i} A_i$ 's together column-wise. Similarly, denote by $Z \in \mathbb{R}^{d \times k}$ the matrix stacking the $\sqrt{\alpha_i} U_i^\top y_i$ 's together column-wise. Then minimizing $g(\cdot)$ reduces to solving low-rank matrix approximation: $\| CA - Z \|_F^2$ .
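To make the reduction concrete, here is a minimal numpy sketch (the dimensions are arbitrary test values, not the paper's code): the optimum of $\| CA - Z\| _F^2$ over factorizations with $r$ columns is the best rank- $r$ approximation of $Z$ , obtained from the truncated SVD, and the optimal value is the sum of the squared tail singular values.

```python
import numpy as np

# Sketch of the low-rank reduction: min over C (d x r) and A (r x k) of
# ||C A - Z||_F^2 is attained at the best rank-r approximation of Z
# (Eckart-Young), with value equal to the sum of the squared tail
# singular values. d, k, r are arbitrary test values.
rng = np.random.default_rng(1)
d, k, r = 20, 8, 3

Z = rng.standard_normal((d, k))
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

C = U[:, :r] * s[:r]  # d x r factor, columns scaled by singular values
A = Vt[:r]            # r x k factor
opt_err = float(np.linalg.norm(C @ A - Z) ** 2)

tail = float(np.sum(s[r:] ** 2))  # sum of squared tail singular values
print(opt_err, tail)  # equal up to floating-point error
```

Any other rank- $r$ factorization gives a Frobenius error at least as large, which is the global-optimality benchmark that the no-spurious-local-minima result is measured against.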
By Lemma 3.1 of Balcan et al. (2018), the only local minima of $\| CA - Z\|_F^2$ are those where $CA$ equals the best rank-$r$ approximation of $Z$. Hence the proof is complete.

Now we are ready to prove Proposition 3.

Proof of Proposition 3. By Proposition 5, the optimal solution $B^{\star}$ for equation 12 is $VD^{-1}$ times the best rank-$r$ approximation to $\sum_{i=1}^{k} \alpha_{i} U^{\top} y_{i} y_{i}^{\top} U$, where we denote the SVD of $X$ as $UDV^{\top}$. Denote by $Q_{r}Q_{r}^{\top}$ the best rank-$r$ approximation to $U^{\top}ZZ^{\top}U$, where $Z = [\sqrt{\alpha_1} y_1,\sqrt{\alpha_2} y_2,\dots ,\sqrt{\alpha_k} y_k]$ stacks the $k$ vectors into a $d$ by $k$ matrix. Hence the result of Proposition 5 shows that the optimal solution $B^{\star}$ is $VD^{-1}Q_{r}$, which is equal to $(X^{\top}X)^{-1}X^{\top}UQ_{r}$. By Proposition 4, the optimality of $B^{\star}$ is preserved up to transformations on the column space. Hence the proof is complete.

To show that all local minima are also equal to $(X^{\top}X)^{-1}X^{\top}UQ_{r}$ (up to such transformations), we simply apply Lemma 10 together with Proposition 3.

Remark. This result applies only to the linear model and does not extend to ReLU models. Characterizing the optimization landscape of non-linear ReLU models is not well understood given the current theoretical understanding of neural networks. We leave this for future work.

# B SUPPLEMENTARY EXPERIMENTAL RESULTS

We fill in the details left out of our experimental section. In Appendix B.1, we review the datasets used in our experiments. In Appendix B.2, we describe the models we use on each dataset. In Appendix B.3, we describe the training procedures for all experiments. In Appendix B.4 and Appendix B.5, we present extended synthetic and real-world experiments to support our claims.
# B.1 DATASETS

We describe the synthetic settings and the datasets used in the experiments: Sentiment Analysis, the General Language Understanding Evaluation (GLUE) benchmark, and ChestX-ray14.

Synthetic settings. For the synthetic experiments, we draw 10,000 random data samples with dimension $d = 100$ from the standard Gaussian $\mathcal{N}(0,1)$ and compute the corresponding labels based on the model described in each experiment. We split the data samples into training and validation sets with 9,000 and 1,000 samples, respectively. For classification tasks, we generate the labels by applying a sigmoid function and then thresholding the value at 0.5 to obtain binary labels. For ReLU regression tasks, we apply the ReLU activation function to the real-valued labels. The number of data samples used in the experiments varies depending on the specification. Specifically, for the task covariance experiment of Figure 3, we fix task 1's data with $m_{1} = 9{,}000$ training samples and vary task 2's data under the following settings: (i) same rotation $Q_{1} = Q_{2}$ but different singular values $D_{1} \neq D_{2}$; (ii) same singular values $D_{1} = D_{2}$ but random rotations $Q_{1} \neq Q_{2}$.

Sentiment analysis. For the sentiment analysis task, the goal is to infer the sentiment expressed in a text from the context provided. This is a popular text classification task, usually formulated as multi-label classification over ratings such as positive $(+1)$, negative $(-1)$, or neutral $(0)$. We use six sentiment analysis benchmarks in our experiments:

- Movie review sentiment (MR): In the MR dataset (Pang and Lee (2005)), each movie review consists of a single sentence. The goal is to detect positive vs. negative reviews.
- Sentence subjectivity (SUBJ): The SUBJ dataset was proposed in Pang and Lee (2004); the goal is to classify whether a given sentence is subjective or objective.
- Customer reviews polarity (CR): The CR dataset (Hu and Liu (2004)) provides customer reviews of various products. The goal is to categorize positive and negative reviews.
- Question type (TREC): The TREC dataset was collected by Li and Roth (2002). The aim is to classify a question into 6 question types.
- Opinion polarity (MPQA): The MPQA task is to detect whether an opinion is polarized or not (Wiebe et al. (2005)).
- Stanford sentiment treebank (SST): The SST dataset, created by Socher et al. (2013), is an extension of the MR dataset.

The General Language Understanding Evaluation (GLUE) benchmark. GLUE is a collection of NLP tasks including question answering, sentiment analysis, text similarity, and textual entailment problems. The GLUE benchmark is a state-of-the-art MTL benchmark in both academia and industry. We select five representative tasks, namely CoLA, MRPC, QNLI, RTE, and SST-2, to validate our proposed method. We emphasize that the goal of this work is not to obtain a state-of-the-art result but rather to provide insights into the workings of multi-task learning. It is conceivable that our results extend to the entire benchmark as well; this is left for future work. More details about the GLUE benchmark can be found in the original paper (Wang et al. (2018a)).

ChestX-ray14. The ChestX-ray14 dataset (Wang et al. (2017)) is the largest publicly available chest X-ray dataset. It contains 112,120 frontal-view X-ray images of 30,805 unique patients. Each image is annotated with up to 14 different thoracic pathology labels, obtained by automatic extraction from the radiology reports. This can be formulated as a 14-task multi-label image classification problem. The ChestX-ray14 dataset is representative of the medical imaging domain as well as computer vision. We use this dataset to examine our proposed task reweighting scheme, since it satisfies the assumption that all tasks share the same input data but have different labels.
# B.2 MODELS

Synthetic settings. For the synthetic experiments, we use the linear regression model, the logistic regression model, and a one-layer neural network with the ReLU activation function.

Sentiment analysis. For the sentiment analysis experiments, we consider three different models: a multi-layer perceptron (MLP), an LSTM, and a CNN.

- For the MLP model, we average the word embeddings of a sentence and feed the result into a two-layer perceptron, followed by a classification layer.
- For the LSTM model, we use the standard one-layer, single-direction LSTM as proposed by Lei et al. (2018), followed by a classification layer.
- For the CNN model, we use the model proposed by Kim (2014), which uses one convolutional layer with multiple filters, followed by a ReLU layer, a max-pooling layer, and a classification layer. We follow the protocol of Kim (2014) and set the filter sizes to $\{3, 4, 5\}$.

We use the pre-trained GloVe embeddings trained on the Wikipedia 2014 and Gigaword 5 corpora. We fine-tune the entire model in our experiments. In the multi-task learning setting, the shared modules include the embedding layer and the feature extraction layer (i.e., the MLP, LSTM, or CNN model). Each task has its own output module.

GLUE. For the experiments on the GLUE benchmark, we use a state-of-the-art language model called BERT (Devlin et al. (2018)). For each task, we add a classification/regression layer on top of it as our model. For all the experiments, we use the $\mathrm{BERT}_{\mathrm{LARGE}}$ uncased model, a 24-layer network as described in Devlin et al. (2018). For the multi-task learning setting, we follow the work of Liu et al. (2019a) and use $\mathrm{BERT}_{\mathrm{LARGE}}$ as the shared module.

ChestX-ray14. For the experiments on the ChestX-ray14 dataset, we use the DenseNet model proposed by Rajpurkar et al. (2017) as the shared module, which is a 121-layer network.
For each task, we use a separate classification output layer. We use the pre-trained model in our experiments.

# B.3 TRAINING PROCEDURES

In this subsection, we describe the training procedures for our experiments.

Mini-batch SGD. We describe the details of task data sampling in our SGD implementation.

- For tasks with different features, such as GLUE, we first divide each task's data into small batches. Then, we mix the batches from all tasks and shuffle them randomly. During every epoch, an SGD step is applied to every batch on the corresponding task. If the current batch is for task $i$, the SGD step is applied to $A_{i}$, and possibly $R_{i}$ or $B$ depending on the setup. The parameters of the other tasks are kept fixed.
- For tasks with the same features, such as ChestX-ray14, SGD is applied to all tasks jointly to update all the $A_{i}$'s and $B$ together.

Synthetic settings. For the synthetic experiments, we do a grid search over the learning rate in $\{1e{-}4, 1e{-}3, 1e{-}2, 1e{-}1\}$ and the number of epochs in $\{10, 20, 30, 40, 50\}$, and pick the best results for all the experiments. We choose the learning rate to be 1e-3, the number of epochs to be 30, and the batch size to be 50. For regression tasks, we report Spearman's correlation score. For classification tasks, we report the classification accuracy.

Sentiment analysis. For the sentiment analysis experiments, we randomly split the data into training, dev, and test sets with percentages $80\%$, $10\%$, and $10\%$, respectively. We follow the protocol of Lei et al. (2018) to set up our model for the sentiment analysis experiments.

The default hidden dimension of the model (e.g., the LSTM) is set to 200, but we vary this parameter for the model capacity experiments. We report the accuracy score on the test set as the performance metric.
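The batch-mixing scheme described under Mini-batch SGD above can be sketched as follows. The task names and data are hypothetical stand-ins; in the real training loop, each returned batch would drive one SGD step on the shared $B$ and the corresponding task's $A_i$, keeping the other tasks' parameters fixed:

```python
import random

def mixed_task_batches(task_data, batch_size, seed=0):
    """Split each task's data into batches, tag each batch with its
    task id, then shuffle all batches together for one epoch."""
    batches = []
    for task_id, data in task_data.items():
        for start in range(0, len(data), batch_size):
            batches.append((task_id, data[start:start + batch_size]))
    random.Random(seed).shuffle(batches)
    return batches

# Hypothetical data: task "cola" with 5 samples, task "rte" with 3.
task_data = {"cola": list(range(5)), "rte": list(range(3))}
epoch = mixed_task_batches(task_data, batch_size=2)
# "cola" yields batches of sizes 2, 2, 1 and "rte" of sizes 2, 1,
# so one epoch visits 5 interleaved batches covering all 8 samples.
```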
![](images/80fe224f96e0529f0b3aad66c2aed326e4bf7143b598aa76eb2aa59e62f48e6.jpg)
(a) Linear regression tasks

![](images/3d66339c3c4079a3837437546ae6e10d2ad12e98acb90fbebd76fe84a5828919.jpg)
(b) Logistic classification tasks

![](images/212b0f140e979db59b6dfd653ebfba407176751c366fb56779edfc57414e3c7f.jpg)
(c) Regression tasks with ReLU non-linearity

![](images/321bcdd3d87d420e5279213cab44b7fb7b15f5c082a3cd53edd2ded523655093.jpg)
(d) Classification tasks with ReLU non-linearity
Figure 7: Comparing MTL model performance across different levels of task similarity. In (a) and (c), MTL trains two regression tasks; in (b) and (d), MTL trains two classification tasks. For regression tasks, we use Spearman correlation as the performance indicator; for classification tasks, we use accuracy. We report the average model performance over the two tasks. The $x$-axis denotes the cosine distance, i.e. $1 - \cos(\theta_1, \theta_2)$.

GLUE. For the GLUE experiments, training is applied to the alignment modules and the output modules. Due to the complexity of the $\mathrm{BERT}_{\mathrm{LARGE}}$ module, which involves 24 layers of non-linear transformations, we fix it during the training process to examine the effect of adding the alignment modules. In general, even after fine-tuning the $\mathrm{BERT}_{\mathrm{LARGE}}$ module on a set of tasks, it is always possible to add our alignment modules and apply Algorithm 1.

For the training parameters, we apply grid search to tune the learning rate over $\{2e{-}5, 3e{-}5, 1e{-}5\}$ and the number of epochs over $\{2, 3, 5, 10\}$. We choose the learning rate to be 2e-5, the number of epochs to be 5, and the batch size to be 16 for all the experiments.

We use the GLUE evaluation metric (cf. Wang et al. (2018b)) and report the scores on the development set as the performance metric.

ChestX-ray14.
For the ChestX-ray14 experiments, we use the configuration suggested by Rajpurkar et al. (2017) and report the AUC score on the test set after fine-tuning the model for 20 epochs.

# B.4 EXTENDED SYNTHETIC EXPERIMENTS

Varying cosine similarity on linear and ReLU models. We demonstrate the effect of cosine similarity in synthetic settings for both regression and classification tasks.

Synthetic tasks. We start with the linear settings. We generate 20 synthetic task datasets (either regression or classification tasks) based on the data generation procedure, and vary the task similarity between task 1 and task $i$. We run the experiment with different dataset pairs (dataset 1 and dataset $i$).

![](images/f8dd8952d9722fa83982b454e4aaaa86f86363e386c5a399bdbcee.jpg)
(a) Regression tasks with non-linearity

![](images/f20ca6ee287fea62a03d3dc872b5b44814f3129d2695788271ef007b060f4999.jpg)
(b) Classification tasks with non-linearity
Figure 8: The performance improvement on the target task (MTL minus STL) by varying the cosine similarity of the two tasks' STL models. We observe that higher similarity between the STL models leads to better improvement on the target task.

![](images/5c3556afe6eb237de35344b7d72169fc75704c7e909c17e9ad85ac4ac1ab0a82.jpg)
(a) Linear regression tasks

![](images/806536149b13fe41f63d0758c3dc3907b822a4717813dc5f3fbd9d7113be0b09.jpg)
(b) Regression tasks with ReLU activation
Figure 9: Comparing Algorithm 1 to the baseline MTL training on the synthetic example in Section 2.3. Algorithm 1 corrects the negative transfer phenomenon observed in Figure 3.

After generating the tasks, we compare the performance gap between the MTL and STL models.

Results. From Figures 7a and 7b, we find that for both regression and classification settings, the larger the task similarity, the more MTL outperforms the STL model; negative transfer can occur if the task similarity is too small.

ReLU settings.
We also consider a ReLU-activated model. We use the same setup as in the linear setting, but apply a ReLU activation when generating the data. Similar results are shown in Figures 7c and 7d.

Higher-rank regimes for ReLU settings. We provide further validation of our results on ReLU-activated models.

Synthetic tasks. In this synthetic experiment, there are two sets of model parameters $\Theta_1 \in \mathbb{R}^{d \times r}$ and $\Theta_2 \in \mathbb{R}^{d \times r}$ ($d = 100$ and $r = 10$). $\Theta_1$ is a fixed random rotation matrix, and there are $m_1 = 100$ data points for task 1. Task 2's model parameter is $\Theta_2 = \alpha \Theta_1 + (1 - \alpha) \Theta'$, where $\Theta'$ is also a fixed rotation matrix, orthogonal to $\Theta_1$. Note that $\alpha$ is the cosine value/similarity of the principal angle between $\Theta_1$ and $\Theta_2$.

We then generate $X_{1} \in \mathbb{R}^{m_{1} \times d}$ and $X_{2} \in \mathbb{R}^{m_{2} \times d}$ from a Gaussian distribution. For each task, the labels are $y_{i} = \mathrm{ReLU}(X_{i}\Theta_{i})e + \varepsilon_{i}$, where $e \in \mathbb{R}^r$ is the all-ones vector and $\varepsilon_{i}$ is random Gaussian noise.

Given the two tasks, we use MTL with ReLU activations and capacity $H = 10$ to co-train the two tasks. The goal is to see how different levels of $\alpha$ (similarity) affect the transfer from task 2 to task 1. Note that this setting parallels the ReLU setting of Theorem 9 but applies to rank $r = 5$.

![](images/a1cdd2ae4f1ac97d3efcb9242b953c6f2f253a67d9187b094a83618d415ca34d.jpg)
Figure 10: Cross validation to choose the best performing model capacity for each model.

![](images/49d842434f4990d560a5e8c87ac33688cbfebb83cde09110e784e2fb713b3f0b0.jpg)
Figure 11: Validation on MLP, CNN and LSTM models for sentiment analysis tasks.
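The higher-rank data-generating process above can be sketched as follows. This is a simplified illustration with our own function and parameter names; we obtain $\Theta_1$ and an orthogonal $\Theta'$ from the QR decomposition of a Gaussian matrix:

```python
import numpy as np

def make_relu_tasks(alpha, d=100, r=10, m1=100, m2=100, noise=0.1, seed=0):
    """Generate the two synthetic ReLU regression tasks described above;
    alpha is the cosine of the principal angle between the task models."""
    rng = np.random.default_rng(seed)
    # Orthonormal columns: Theta1 and a subspace orthogonal to it.
    Q, _ = np.linalg.qr(rng.standard_normal((d, 2 * r)))
    theta1, theta_prime = Q[:, :r], Q[:, r:]
    theta2 = alpha * theta1 + (1 - alpha) * theta_prime
    tasks = []
    e = np.ones(r)  # all-ones vector
    for theta, m in [(theta1, m1), (theta2, m2)]:
        X = rng.standard_normal((m, d))
        y = np.maximum(X @ theta, 0.0) @ e + noise * rng.standard_normal(m)
        tasks.append((X, y))
    return tasks

(X1, y1), (X2, y2) = make_relu_tasks(alpha=0.5)
```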
![](images/5eefbac94547f4dab2c6817a9b090e324e1a89514d6f8677e71083dcf9da30b8.jpg)

![](images/9af22ba57d0e06132a19e59cdb0bfe467b6d7ea2149c9620ea55121937bfba57.jpg)

Results. In Figure 8, we show that the data size, the cosine similarity between the STL solutions, and the alignment of the covariances continue to affect the rate of transfer in the new settings. The study shows that our conceptual results are applicable to a wide range of settings.

Evaluating Algorithm 1 on linear and ReLU-activated models. We consider the synthetic example in Section 2.3 to compare Algorithm 1 with the baseline MTL training. Recall that in this example, when the source and target tasks have different covariance matrices, MTL causes negative transfer on the target task. Our hypothesis is that Algorithm 1 corrects the misalignment and hence the negative transfer.

Synthetic tasks. We evaluate on both linear and ReLU regression tasks. The linear case follows the example in Section 2.3. For the ReLU case, the data is generated according to the previous example.

Results. Figure 9 confirms the hypothesis. We observe that Algorithm 1 corrects the negative transfer in the regime where the source task has only a limited amount of data. Furthermore, Algorithm 1 matches the baseline MTL training when the source task has sufficiently many data points.

# B.5 EXTENDED ABLATION STUDIES

Cross validation for choosing model capacities. We provide a cross-validation experiment to indicate how we choose the best-performing model capacities in Figure 1. This is done on the six sentiment analysis tasks trained with an LSTM layer.

In Figure 10, we vary the model capacities and plot the validation accuracies of the MTL model trained with all six tasks and of the STL model for each task. The result complements Table 1 in Section 3.3.

Choosing model capacities for CNN and MLP. Next, we verify our result on model capacities for the CNN and MLP models.
We select the SST and MR datasets from the sentiment analysis tasks for this experiment. We train all three models (CNN, MLP, and LSTM) while varying the capacities.

Results. From Figure 11, we observe that, for all three models, the best-performing MTL model capacity is smaller than the sum of the best-performing STL model capacities.

The effect of label noise on Algorithm 2. To evaluate the robustness of Algorithm 2 in the presence of label noise, we conduct the following experiment. First, we subsample $10\%$ of the ChestX-ray14 dataset and select two tasks from it. Then, we randomly pick one task and add noise to $20\%$ of its labels by flipping each selected label with probability 0.5. We compare training both tasks using our reweighting scheme (Algorithm 2) against the reweighting technique of Kendall et al. (2018) and the unweighted loss scheme.

Results. Over 10 randomly chosen task pairs, our method improves over the unweighted training scheme by $1.0\%$ AUC and over Kendall et al. (2018) by $0.4\%$ AUC, averaged over the 10 task pairs. Figure 12 shows 5 example task pairs from our evaluation.

![](images/914016c059fd56faf6c1f2b04258210944407a154e953bdecd6ec1ee14c9330f.jpg)
Figure 12: Comparing Algorithm 2 to the unweighted scheme and Kendall et al. (2018).
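The label-corruption step above can be sketched as follows (a minimal illustration on hypothetical binary labels: select 20% of them and flip each selected label with probability 0.5):

```python
import random

def corrupt_labels(labels, frac=0.2, flip_prob=0.5, seed=0):
    """Randomly select a fraction of binary labels and flip each
    selected label with the given probability."""
    rng = random.Random(seed)
    noisy = list(labels)
    idx = rng.sample(range(len(noisy)), int(frac * len(noisy)))
    for i in idx:
        if rng.random() < flip_prob:
            noisy[i] = 1 - noisy[i]
    return noisy

clean = [0, 1] * 50                 # 100 hypothetical binary labels
noisy = corrupt_labels(clean)
changed = sum(a != b for a, b in zip(clean, noisy))
# At most 20 labels can change; in expectation about 10 do.
assert changed <= 20
```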
# UNDERSTANDING AND ROBUSTIFYING DIFFERENTIABLE ARCHITECTURE SEARCH

Arber Zela$^{1}$, Thomas Elsken$^{2,1}$, Tonmoy Saikia$^{1}$, Yassine Marrakchi$^{1}$, Thomas Brox$^{1}$ & Frank Hutter$^{1,2}$

$^{1}$ Department of Computer Science, University of Freiburg
{zelaa, saikiat, marrakch, brox, fh}@cs.uni-freiburg.de
$^{2}$ Bosch Center for Artificial Intelligence
Thomas.Elsken@de.bosch.com

# ABSTRACT

Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs, achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem.
However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. We study this failure mode and show that, while DARTS successfully minimizes validation loss, the found solutions generalize poorly when they coincide with high validation loss curvature in the architecture space. We show that by adding one of various types of regularization we can robustify DARTS to find solutions with less curvature and better generalization properties. Based on these observations, we propose several simple variations of DARTS that perform substantially more robustly in practice. Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling.

# 1 INTRODUCTION

Neural Architecture Search (NAS), the process of automatically designing neural network architectures, has recently attracted attention by achieving state-of-the-art performance on a variety of tasks (Zoph & Le, 2017; Real et al., 2019). Differentiable architecture search (DARTS) (Liu et al., 2019) significantly improved the efficiency of NAS over prior work, reducing its costs to the same order of magnitude as training a single neural network. This expanded the scope of NAS substantially, allowing it to also be applied to more expensive problems, such as semantic segmentation (Chenxi et al., 2019) or disparity estimation (Saikia et al., 2019).

However, several researchers have also reported that DARTS does not work well, in some cases performing no better than random search (Li & Talwalkar, 2019; Sciuto et al., 2019). Why is this? How can these seemingly contradictory results be explained? The overall goal of this paper is to understand and overcome such failure modes of DARTS. To this end, we make the following contributions:

1.
We identify 12 NAS benchmarks based on four search spaces in which standard DARTS yields degenerate architectures with poor test performance across several datasets (Section 3).
2. By computing the eigenspectrum of the Hessian of the validation loss with respect to the architectural parameters, we show that there is a strong correlation between its dominant eigenvalue and the architecture's generalization error. Based on this finding, we propose a simple variation of DARTS with early stopping that performs substantially more robustly (Section 4).
3. We show that, related to previous work on sharp/flat local minima, regularizing the inner objective of DARTS more strongly allows it to find solutions with smaller Hessian spectrum and better generalization properties. Based on these insights, we propose two practical robustifications of DARTS that overcome its failure modes in all our 12 NAS benchmarks (Section 5).

Our findings are robust across a wide range of NAS benchmarks based on image recognition and also hold for the very different domains of language modelling (PTB) and disparity estimation. They consolidate the findings of the various results in the literature and lead to a substantially more robust version of DARTS. We provide our implementation and scripts to facilitate reproducibility$^{1}$.

# 2 BACKGROUND AND RELATED WORK

# 2.1 RELATION BETWEEN FLAT/SHARP MINIMA AND GENERALIZATION PERFORMANCE

Already Hochreiter & Schmidhuber (1997) observed that flat minima of the training loss yield better generalization performance than sharp minima. More recent work (Keskar et al., 2016; Yao et al., 2018) focuses on the setting of large- vs. small-batch training, where observations show that small-batch training tends to be attracted to flatter minima and generalizes better. Similarly, Nguyen et al. (2018) observed that this phenomenon also manifests in the hyperparameter space.
They showed that whenever the hyperparameters overfit the validation data, the minima lie in a sharper region of the space. This motivated us to conduct a similar analysis in the context of differentiable architecture search in Section 4.1, where we see the same effect in the space of neural network architectures.

# 2.2 BI-LEVEL OPTIMIZATION

We start with a short introduction to the bi-level optimization problem (Colson et al., 2007). These are problems containing two optimization tasks, nested within each other.

Definition 2.1. Given the outer objective function $F: \mathbb{R}^P \times \mathbb{R}^N \to \mathbb{R}$ and the inner objective function $f: \mathbb{R}^P \times \mathbb{R}^N \to \mathbb{R}$, the bi-level optimization problem is given by

$$
\min_{y \in \mathbb{R}^{P}} F (y, \theta^{*}(y)) \tag{1}
$$

$$
s.t. \quad \theta^{*}(y) \in \underset{\theta \in \mathbb{R}^{N}}{\arg\min} \, f(y, \theta), \tag{2}
$$

where $y \in \mathbb{R}^P$ and $\theta \in \mathbb{R}^N$ are the outer and inner variables, respectively. One may also view the bi-level problem as a constrained optimization problem, with the inner problem as a constraint.

In general, even when the inner objective (2) is strongly convex and has a unique minimizer $\theta^{*}(y) = \arg\min_{\theta \in \mathbb{R}^{N}} f(y,\theta)$, it is not possible to directly optimize the outer objective (1). One way around this issue is to use the implicit function theorem to retrieve the derivative of the solution map (or response map) $\theta^{*}(y) \in \mathbb{R}^{N}$ w.r.t. $y$ (Bengio, 2000; Pedregosa, 2016; Beirami et al., 2017). Another strategy is to approximate the inner problem with a dynamical system (Domke, 2012; Maclaurin et al., 2015; Franceschi et al., 2017; 2018), where the optimization dynamics could, e.g., describe gradient descent.
In the case that the minimizer of the inner problem is unique, under some conditions the set of minimizers of this approximate problem will indeed converge to the minimizers of the bi-level problem (1) (see Franceschi et al. (2018)).

# 2.3 NEURAL ARCHITECTURE SEARCH

Neural Architecture Search (NAS) denotes the process of automatically designing neural network architectures in order to overcome the cumbersome trial-and-error process of designing architectures manually. We briefly review NAS here and refer to the recent survey by Elsken et al. (2019b) for a more thorough overview. Prior work mostly employs either reinforcement learning techniques (Baker et al., 2017a; Zoph & Le, 2017; Zhong et al., 2018; Zoph et al., 2018) or evolutionary algorithms (Stanley & Miikkulainen, 2002; Liu et al., 2018b; Miikkulainen et al., 2017; Real et al., 2017; 2019) to optimize the discrete architecture space. As these methods are often very expensive, various works focus on reducing the search costs by, e.g., employing network morphisms (Cai et al., 2018a;b; Elsken et al., 2017; 2019a), weight sharing within search models (Saxena & Verbeek, 2016; Bender et al., 2018; Pham et al., 2018), or multi-fidelity optimization (Baker et al., 2017b; Falkner et al., 2018; Li et al., 2017; Zela et al., 2018), but their applicability often remains restricted to rather simple tasks and small datasets.

# 2.4 DIFFERENTIABLE ARCHITECTURE SEARCH (DARTS)

A recent line of work focuses on relaxing the discrete neural architecture search problem to a continuous one that can be solved by gradient descent (Liu et al., 2019; Xie et al., 2019; Casale et al., 2019; Cai et al., 2019). In DARTS (Liu et al., 2019), this is achieved by simply using a weighted sum of possible candidate operations for each layer, where the real-valued weights effectively parametrize the network's architecture. We now review DARTS in more detail, as our work builds directly upon it.
Continuous relaxation of the search space. In agreement with prior work (Zoph et al., 2018; Real et al., 2019), DARTS optimizes only substructures called cells that are stacked to define the full network architecture. Each cell contains $N$ nodes organized in a directed acyclic graph. The graph contains two input nodes (given by the outputs of the previous two cells), a set of intermediate nodes, and one output node (given by concatenating all intermediate nodes). Each intermediate node $x^{(j)}$ represents a feature map. See Figure 1 for an illustration of such a cell. Instead of applying a single operation to a specific node during architecture search, Liu et al. (2019) relax the decision of which operation to choose by computing each intermediate node as a mixture of candidate operations, applied to predecessor nodes $x^{(i)}, i < j$: $x^{(j)} = \sum_{i < j} \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_{o}^{i,j})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{i,j})} o(x^{(i)})$, where $\mathcal{O}$ denotes the set of all candidate operations (e.g., $3 \times 3$ convolution, skip connection, $3 \times 3$ max pooling, etc.) and $\alpha = (\alpha_{o}^{i,j})_{i,j,o}$ serves as a real-valued parameterization of the architecture.

Gradient-based optimization of the search space. DARTS then optimizes both the weights of the search network (often called the weight-sharing or one-shot model, since the weights of all individual subgraphs/architectures are shared) and the architectural parameters by alternating gradient descent. The network weights and the architecture parameters are optimized on the training and validation set, respectively. This can be interpreted as solving the bi-level optimization problem (1), (2), where $F$ and $f$ are the validation and training losses, $\mathcal{L}_{valid}$ and $\mathcal{L}_{train}$, respectively, while $y$ and $\theta$ denote the architectural parameters $\alpha$ and the network weights $w$, respectively.
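The mixed operation above can be sketched in a few lines. This is a toy illustration on a feature vector with stand-in operations, not the DARTS implementation (which applies convolutions etc. to feature maps and alternates gradient steps on $w$ and $\alpha$):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy stand-ins for candidate operations (SepConv, skip connection, Zero, ...).
ops = [
    lambda x: np.tanh(x),        # a nonlinear "conv"-like op
    lambda x: x,                 # skip connection
    lambda x: np.zeros_like(x),  # Zero op
]

def mixed_op(x, alpha):
    """One DARTS edge: softmax-weighted sum over all candidate ops."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = np.array([1.0, -2.0, 0.5])      # toy feature vector
alpha = np.array([0.1, 2.0, -1.0])  # architecture parameters of this edge
out = mixed_op(x, alpha)

# Discretization at the end of search keeps the op with the largest weight.
best = int(np.argmax(softmax(alpha)))  # index 1, the skip connection
```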
Note that DARTS only approximates the lower-level solution by a single gradient step (see Appendix A for more details). + +At the end of the search phase, a discrete cell is obtained by choosing the $k$ most important incoming operations for each intermediate node while all others are pruned. Importance is measured by the operation weighting factor $\frac{\exp(\alpha_{o}^{i,j})}{\sum_{o^{\prime}\in\mathcal{O}}\exp(\alpha_{o^{\prime}}^{i,j})}$ . + +# 3 WHEN DARTS FAILS + +We now describe various search spaces and demonstrate that standard DARTS fails on them. We start with four search spaces similar to the original CIFAR-10 search space but simpler, and evaluate across three different datasets (CIFAR-10, CIFAR-100 and SVHN). They are quite standard in that they use the same macro architecture as the original DARTS paper (Liu et al., 2018a), consisting of normal and reduction cells; however, they only allow a subset of operators for the cell search space: + +S1: This search space uses a different set of only two operators per edge, which we identified using an offline process that iteratively dropped the operations from the original DARTS search space with the least importance. This pre-optimized space has the advantage of being quite small while still including many strong architectures. We refer to Appendix B for details on its construction and an illustration (Figure 9). +S2: In this space, the set of candidate operations per edge is $\{3 \times 3 \text{SepConv}, \text{SkipConnect}\}$ . We choose these operations since they are the most frequent ones in the discovered cells reported by Liu et al. (2019). +S3: In this space, the set of candidate operations per edge is $\{3 \times 3 \text{SepConv}, \text{SkipConnect}, \text{Zero}\}$ , where the Zero operation simply replaces every value in the input feature map by zeros. 
S4: In this space, the set of candidate operations per edge is $\{3 \times 3 \text{SepConv}, \text{Noise}\}$, where the Noise operation simply replaces every value in the input feature map by noise $\epsilon \sim \mathcal{N}(0,1)$. This is the only space out of S1-S4 that is not a strict subspace of the original DARTS space; we intentionally added the Noise operation, which actively harms performance and should therefore not be selected by DARTS.

![](images/da47f2f1ea05e3aafcb22fede9cd3a00ce180b35056e4ea65c08bbd36b4f4216.jpg)
(a) Space 1

![](images/2a8d0a09e5b5a819c53fba7992b1a8fab60ce58daac9e7b1ad7cb64585b86256.jpg)
(b) Space 2

![](images/05ca012331316f339f95f18649f5d89cee75b5e4b8ed4f48d181b2f34680515b.jpg)
(c) Space 3

![](images/457db31713eb36fd7c40c9b00dad5d10368bd83cefc793322baeb3cb3169482c.jpg)
(d) Space 4

Figure 1: The poor cells standard DARTS finds on spaces S1-S4. For all spaces, DARTS chooses mostly parameter-less operations (skip connection) or even the harmful Noise operation. Shown are the normal cells on CIFAR-10; see Appendix G for reduction cells and other datasets.

We ran DARTS on each of these spaces, using exactly the same setup as Liu et al. (2019). Figure 1 shows the poor cells DARTS selected on these search spaces for CIFAR-10 (see Appendix G for analogous results on the other datasets). Already visually, one might suspect that the found cells are suboptimal: the parameter-less skip connections dominate almost all edges for spaces S1-S3, and for S4 the harmful Noise operation was even selected for five out of eight operations. Table 1 (first column) confirms the very poor performance standard DARTS yields on all of these search spaces and on different datasets. We note that Liu et al. (2019) and Xie et al.
(2019) argue that the Zero operation can help to search for the architecture topology and choice of operators jointly, but in our experiments it did not help to reduce the importance weight of the skip connection (compare Figure 1b vs. Figure 1c).

We emphasize that search spaces S1-S3 are very natural and, as strict subspaces of the original space, should, if anything, be easier to search than the full space. They are in no way special or constructed in an adversarial manner. Only S4 was constructed specifically to showcase the failure mode of DARTS selecting the obviously suboptimal Noise operator.

S5: Very small search space with known global optimum. Knowing the global minimum has the advantage that one can benchmark the performance of algorithms by measuring the regret of chosen points with respect to the known global minimum. Therefore, we created another search space with only one intermediate node for both normal and reduction cells, and three operation choices per edge, namely $3 \times 3$ SepConv, SkipConnection, and $3 \times 3$ MaxPooling. The total number of possible architectures in this space is 81, all of which we evaluated a priori. We dub this space S5.

We ran DARTS on this search space three times for each dataset and compared its results to the baseline of Random Search with weight sharing (RS-ws) by Li & Talwalkar (2019). Figure 2 shows the test regret of the architectures selected by DARTS (blue) and RS-ws (green) throughout the search. DARTS manages to find an architecture close to the global minimum, but around epoch 40 the test performance deteriorated. Note that the search model validation error (dashed red line) did not deteriorate but rather converged, indicating that the architectural parameters are overfitting to the validation set. In contrast, RS-ws stays relatively constant throughout the search; when evaluating only the final architecture found, RS-ws indeed outperformed DARTS.
![](images/fc51a7787d6688b61cada4a95c701282e4c8e0ac26adbb8f948c9c9292dc562b.jpg)
Figure 2: Test regret of found architectures and validation error of the search model when running DARTS on S5 and CIFAR-10. DARTS finds the global minimum but starts overfitting the architectural parameters to the validation set in the end.

S6: encoder-decoder architecture for disparity estimation. To study whether our findings generalize beyond image recognition, we also analyzed a search space for a very different problem: finding encoder-decoder architectures for the dense regression task of disparity estimation; please refer to Appendix E for details. We base this search space on AutoDispNet (Saikia et al., 2019), which used DARTS for a space containing normal, downsampling and upsampling cells. We again constructed a reduced space. Similarly to the image classification search spaces, we found the normal cell to be mainly composed of parameter-less operations (see Figure 25 in Appendix G). As expected, this causes a large generalization error (see first row in Table 2 of our later experiments).

![](images/a4f2b52780b62d4abd33009259ad9c51a3777840d8ed43bdc5c66911417bad5c.jpg)
Figure 3: (left) Validation error of the search model; (middle) test error of the architectures deemed optimal by DARTS; (right) dominant eigenvalue of $\nabla_{\alpha}^{2}\mathcal{L}_{valid}$ throughout DARTS search. Solid lines and shaded areas show mean and standard deviation of 3 independent runs. All experiments conducted on CIFAR-10.

![](images/5715a84a8583cd61f536244de5a93b18cc777deb31a95b999fb1d91aaa77a17d.jpg)

![](images/d7abde9c62f9d56318428028d149b50696cb1d8ec8e0bff165ddba20add57324.jpg)

# 4 THE ROLE OF DOMINANT EIGENVALUES OF $\nabla_{\alpha}^{2}\mathcal{L}_{valid}$

We now analyze why DARTS fails in all these cases.
Motivated by Section 2.1, we take a closer look at the largest eigenvalue $\lambda_{max}^{\alpha}$ of the Hessian matrix of the validation loss, $\nabla_{\alpha}^{2}\mathcal{L}_{valid}$, w.r.t. the architectural parameters $\alpha$.

# 4.1 LARGE ARCHITECTURAL EIGENVALUES AND GENERALIZATION PERFORMANCE

One may hypothesize that DARTS performs poorly because its approximate, iterative solution of the bi-level optimization problem fails, but the validation error actually progresses nicely: Figure 3 (left) shows that the search model validation error converges in all cases, even though the cell structures selected here are the ones in Figure 1.

Rather, the architectures DARTS finds do not generalize well. This can be seen in Figure 3 (middle). There, every 5 epochs, we evaluated the architecture deemed by DARTS to be optimal according to the $\alpha$ values. Note that whenever evaluating on the test set, we retrain from scratch the architecture obtained after applying the argmax to the architectural weights $\alpha$. As one can see, the architectures start to degenerate after a certain number of search epochs, similarly to the results shown in Figure 2. We hypothesized that this might be related to sharp local minima as discussed in Section 2.1. To test this hypothesis, we computed the full Hessian $\nabla_{\alpha}^{2}\mathcal{L}_{valid}$ of the validation loss w.r.t. the architectural parameters on a randomly sampled mini-batch. Figure 3 (right) shows that the dominant eigenvalue $\lambda_{max}^{\alpha}$ (which serves as a proxy for the sharpness) indeed increases in standard DARTS, along with the test error (middle) of the final architectures, while the validation error still decreases (left).

![](images/8a6ffc2302a9613f6a6fdc5705ca502759b375afb158bdfd5662403a6273bd93.jpg)
Figure 4: Correlation between dominant eigenvalue of $\nabla_{\alpha}^{2}\mathcal{L}_{valid}$ and test error of corresponding architectures.
We also studied the correlation between $\lambda_{max}^{\alpha}$ and test error more directly, by measuring these two quantities for 24 different architectures (obtained via standard DARTS and the regularized versions we discuss in Section 5). For the example of space S1 on CIFAR-10, Figure 4 shows that $\lambda_{max}^{\alpha}$ indeed strongly correlates with test error (with a Pearson correlation coefficient of 0.867).

![](images/be34088877d3ecd5fb199c9d1b418ccf0533443cff653a3201eabd8c0dcd7b5f.jpg)
Figure 6: Local average (LA) of the dominant eigenvalue $\lambda_{max}^{\alpha}$ throughout DARTS search. Markers denote the early stopping point based on the criterion in Section 4.3. Each line also corresponds to one of the runs in Table 1.

![](images/12cd998c448e126255e3ab64e12417c49db4109402241bd35e7d584b4b3a272e.jpg)

![](images/98dcd1afdf727b2915bce522eae1c3e675541def16f256a9ff5fe6a56eed4c58.jpg)

# 4.2 LARGE ARCHITECTURAL EIGENVALUES AND PERFORMANCE DROP AFTER PRUNING

One reason why DARTS performs poorly when the architectural eigenvalues are large (and thus the minimum is sharp) might be the pruning step at the end of DARTS: the optimal, continuous $\alpha^{*}$ from the search is pruned to obtain a discrete $\alpha^{disc}$ somewhere in the neighbourhood of $\alpha^{*}$. In the case of a sharp minimum $\alpha^{*}$, $\alpha^{disc}$ might have a loss function value significantly higher than the minimum $\alpha^{*}$, while in the case of a flat minimum, $\alpha^{disc}$ is expected to have a similar loss function value. This is hypothetically illustrated in Figure 5a, where the $y$-axis indicates the search model validation loss and the $x$-axis the $\alpha$ values.

To investigate this hypothesis, we measured the performance drop $\mathcal{L}_{valid}(\alpha^{disc},w^{*}) - \mathcal{L}_{valid}(\alpha^{*},w^{*})$ incurred by this discretization step, evaluated with the search model weights $w^{*}$, and correlated it with $\lambda_{max}^{\alpha}$.
The results in Figure 5b show that, indeed, low curvature never led to large performance drops (here we actually compute the accuracy drop rather than the loss function difference, but we observed a similar relationship). Having identified this relationship, we now move on to avoiding high curvature.

![](images/da50c76e61290614e580a75e78d8956812538c14736f6f2c49bac68172212b3b.jpg)

![](images/462e283d0dc548fd41defb812282260e9ad5e2b3a641c606fcadb560750f0adb.jpg)
Figure 5: (a) Hypothetical illustration of the loss function change in the case of flat vs. sharp minima. (b) Drop in accuracy after discretizing the search model vs. the sharpness of minima (by means of $\lambda_{max}^{\alpha}$).

![](images/5b2ac804f186e7def7e1b10d953993f60d0ce78d01bb6010e6af1dba1a1e971e.jpg)
Figure 7: Effect of regularization strength via ScheduledDropPath (during the search phase) on the test performance of DARTS (solid lines) and DARTS-ES (dashed lines). Results for each of the search spaces and datasets.

# 4.3 EARLY STOPPING BASED ON LARGE EIGENVALUES OF $\nabla_{\alpha}^{2}\mathcal{L}_{valid}$

We propose a simple early stopping method to avoid large curvature and thus poor generalization. We emphasize that simply stopping the search based on validation performance (as one would do in the case of training a network) does not apply here, as NAS directly optimizes validation performance, which, as we have seen in Figure 2, keeps on improving.

Instead, we propose to track $\lambda_{max}^{\alpha}$ over the course of architecture search and stop whenever it increases too much. To implement this idea, we use a simple heuristic that worked off-the-shelf without any tuning. Let $\overline{\lambda}_{max}^{\alpha}(i)$ denote the value of $\lambda_{max}^{\alpha}$ smoothed over $k = 5$ epochs around epoch $i$; then, we stop if $\overline{\lambda}_{max}^{\alpha}(i - k) / \overline{\lambda}_{max}^{\alpha}(i) < 0.75$ and return the architecture from epoch $i - k$.
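In code, this stopping criterion can be sketched as follows (a minimal sketch assuming the per-epoch dominant eigenvalues $\lambda_{max}^{\alpha}$ have already been computed by an eigensolver; the variable names are ours):

```python
import numpy as np

def smoothed(eigenvalues, j, k=5):
    """Local average of lambda_max^alpha over a window of k epochs around j."""
    lo, hi = max(0, j - k // 2), j + k // 2 + 1
    return float(np.mean(eigenvalues[lo:hi]))

def should_stop(eigenvalues, i, k=5, ratio=0.75):
    """Stop at epoch i if the smoothed eigenvalue grew too fast over the
    last k epochs; if so, return the architecture from epoch i - k."""
    if i < 2 * k:  # not enough history for both smoothing windows
        return False, None
    if smoothed(eigenvalues, i - k, k) / smoothed(eigenvalues, i, k) < ratio:
        return True, i - k
    return False, None

# Example: the eigenvalue stays flat for a while, then starts to explode.
lams = [0.2] * 10 + [0.3, 0.45, 0.7, 1.0, 1.4]
for epoch in range(len(lams)):
    stop, best_epoch = should_stop(lams, epoch)
    if stop:
        break  # keep the architecture from best_epoch
```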
![](images/a4c926a54f5d104441a3f9224e6bb404e8df5e392622cd3a93f45b878463b53b.jpg)

![](images/47458844de3079bed21ad3e1ca0489b973d5a82644a86cc0cd3ca745c7a66390.jpg)

With this early stopping heuristic, we not only avoid exploding eigenvalues, which correlate with poor generalization (see Figure 4), but also shorten the search time.

Table 1 shows the results for running DARTS with this early stopping criterion (DARTS-ES) across S1-S4 and all three image classification datasets. Figure 6 shows the local average of the eigenvalue trajectory throughout the search and the point where DARTS search stops early for each of the settings in Table 1. Note that we never use the test data when applying the early stopping mechanism. Early stopping significantly improved DARTS for all settings without ever harming it.

Table 1: Performance of DARTS and DARTS-ES (mean ± std for 3 runs each).
| Benchmark | Space | DARTS | DARTS-ES |
| --- | --- | --- | --- |
| C10 | S1 | 4.66 ± 0.71 | 3.05 ± 0.07 |
| | S2 | 4.42 ± 0.40 | 3.41 ± 0.14 |
| | S3 | 4.12 ± 0.85 | 3.71 ± 1.14 |
| | S4 | 6.95 ± 0.18 | 4.17 ± 0.21 |
| C100 | S1 | 29.93 ± 0.41 | 28.90 ± 0.81 |
| | S2 | 28.75 ± 0.92 | 24.68 ± 1.43 |
| | S3 | 29.01 ± 0.24 | 26.99 ± 1.79 |
| | S4 | 24.77 ± 1.51 | 23.90 ± 2.01 |
| SVHN | S1 | 9.88 ± 5.50 | 2.80 ± 0.09 |
| | S2 | 3.69 ± 0.12 | 2.68 ± 0.18 |
| | S3 | 4.00 ± 1.01 | 2.78 ± 0.29 |
| | S4 | 2.90 ± 0.02 | 2.55 ± 0.15 |
# 5 REGULARIZATION OF INNER OBJECTIVE IMPROVES GENERALIZATION OF ARCHITECTURE

As we saw in Section 4.1, sharper minima (as measured by large eigenvalues) of the validation loss lead to poor generalization performance. In our bi-level optimization setting, the outer variables' trajectory depends on the inner optimization procedure. Therefore, we hypothesized that modifying the landscape of the inner objective $\mathcal{L}_{train}$ could redirect the outer variables $\alpha$ to flatter areas of the architectural space. We study two ways of regularization (data augmentation in Section 5.1 and $L_{2}$ regularization in Section 5.2) and find that both, along with the early stopping criterion from Section 4.3, make DARTS more robust in practice. We emphasize that we do not alter the regularization of the final training and evaluation phase, but solely that of the search phase. The setting we use for all experiments in this paper to obtain the final test performance is described in Appendix C.

# 5.1 REGULARIZATION VIA DATA AUGMENTATION

We first investigate the effect of regularizing via data augmentation, namely masking out parts of the input and intermediate feature maps via Cutout (CO, DeVries & Taylor (2017)) and ScheduledDropPath (DP, Zoph et al. (2018)), respectively, during architecture search (ScheduledDropPath is not a data augmentation method, but we list it here since we apply it together with Cutout). We ran DARTS with CO and DP (with and without our early stopping criterion, DARTS-ES) with different maximum DP probabilities on all three image classification datasets and search spaces S1-S4.

Figure 7 summarizes the results: regularization improves the test performance of DARTS and DARTS-ES in all cases, sometimes very substantially, while at the same time keeping the dominant eigenvalue relatively low (Figure 13).
This also directly results in smaller drops in accuracy after pruning, as discussed in Section 4.2; indeed, the search runs plotted in Figure 5b are the same as in this section. Figure 17 in the appendix explicitly shows how regularization relates to the accuracy drops. We also refer to further results in the appendix: Figure 11 (showing test vs. validation error) and Table 5 (showing that overfitting of the architectural parameters is reduced). + +![](images/70a28a53c9865de437415769347dec94cf4da735fa565fa6cde8a2554fb0648a.jpg) +Figure 8: Effect of $L_{2}$ regularization of the inner objective during architecture search for DARTS (solid lines) and DARTS-ES (dashed). + +![](images/e5a7987b9bad7986734d99a7d905db4c11d718945bf95987e1737fa9e49b7607.jpg) + +![](images/2d1e8393344cfcd6b5613ef723a46bc5f04a45d3f233afb2f091897342b2ec87.jpg) + +Similar observations hold for disparity estimation on S6, where we vary the strength of standard data augmentation methods, such as shearing or brightness change, rather than masking parts of features, which is unreasonable for this task. The augmentation strength is described by an "augmentation scaling factor" (Appendix E). Table 2 summarizes the results. We report the average end point error (EPE), which is the Euclidean distance between the predicted and ground truth disparity maps. Data augmentation avoided the degenerate architectures and substantially improved results. + +# 5.2 INCREASED $L_{2}$ REGULARIZATION + +As a second type of regularization, we also tested different $L_{2}$ regularization factors $3i \cdot 10^{-4}$ for $i \in \{1,3,9,27,81\}$ . Standard DARTS in fact does already include a small amount of $L_{2}$ regularization; $i = 1$ yields its default. Figure 8 shows that DARTS' test performance (solid lines) can be significantly improved by higher $L_{2}$ factors across all datasets and spaces, while keeping the dominant eigenvalue low (Figure 14). 
DARTS with early stopping (dashed lines) also benefits from additional regularization. Again, we observe the implicit regularization effect on the outer objective which reduces the overfitting of the architectural parameters. We again refer to + +Table 2: Effect of regularization for disparity estimation. Search was conducted on FlyingThings3D (FT) and then evaluated on both FT and Sintel. Lower is better. + +
| Aug. Scale | Search model valid EPE | FT test EPE | Sintel test EPE | Params (M) |
| --- | --- | --- | --- | --- |
| 0.0 | 4.49 | 3.83 | 5.69 | 9.65 |
| 0.1 | 3.53 | 3.75 | 5.97 | 9.65 |
| 0.5 | 3.28 | 3.37 | 5.22 | 9.43 |
| 1.0 | 4.61 | 3.12 | 5.47 | 12.46 |
| 1.5 | 5.23 | 2.60 | 4.15 | 12.57 |
| 2.0 | 7.45 | 2.33 | 3.76 | 12.25 |

| $L_2$ reg. factor | Search model valid EPE | FT test EPE | Sintel test EPE | Params (M) |
| --- | --- | --- | --- | --- |
| $3 \times 10^{-4}$ | 3.95 | 3.25 | 6.13 | 11.00 |
| $9 \times 10^{-4}$ | 5.97 | 2.30 | 4.12 | 13.92 |
| $27 \times 10^{-4}$ | 4.25 | 2.72 | 4.83 | 10.29 |
| $81 \times 10^{-4}$ | 4.61 | 2.34 | 3.85 | 12.16 |
Table 2 for disparity estimation; Appendix F shows similar results for language modelling (Penn TreeBank).

# 5.3 PRACTICAL ROBUSTIFICATION OF DARTS BY REGULARIZING THE INNER OBJECTIVE

Based on the insights from the aforementioned analysis and empirical results, we now propose two simple alternative modifications to make DARTS more robust in practice without having to manually tune its regularization hyperparameters.

**DARTS with adaptive regularization.** One option is to adapt DARTS' regularization hyperparameters in an automated way, in order to keep the architectural weights in areas of the validation loss objective with smaller curvature. The simplest off-the-shelf procedure toward this desideratum would be to increase the regularization strength whenever the dominant eigenvalue starts increasing rapidly. Algorithm 1 (DARTS-ADA, Appendix D.1) shows such a procedure. We use the same stopping criterion as in DARTS-ES (Section 4.3), roll back DARTS to the epoch when this criterion is met, and continue the search with a larger regularization value $R$ for the remaining epochs (larger by a factor of $\eta$). This procedure is repeated whenever the criterion is met, unless the regularization value exceeds some maximum predefined value $R_{max}$.

**Multiple DARTS runs with different regularization strengths.** Liu et al. (2019) already suggested running the search phase of DARTS four times, resulting in four architectures, and returning
+ +Table 3 evaluates the performance of our practical robustifications of DARTS, DARTS-ADA and R-DARTS (based on either L2 or ScheduledDropPath regularization), by comparing them to the original DARTS, DARTS-ES and Random Search with weight sharing (RS-ws). For each of these methods, as proposed in the DARTS paper (Liu et al., 2019), we ran the search four independent times with different random seeds and selected the architecture used for the final evaluation based on a validation run as described above. + +Table 3: Empirical evaluation of practical robustified versions of DARTS. Each entry is the test error after retraining the selected architecture as usual. The best method for each setting is boldface and underlined, the second best boldface. + +
| Benchmark | Space | RS-ws | DARTS | R-DARTS(DP) | R-DARTS(L2) | DARTS-ES | DARTS-ADA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| C10 | S1 | 3.23 | 3.84 | 3.11 | <u>**2.78**</u> | **3.01** | 3.10 |
| | S2 | 3.66 | 4.85 | 3.48 | **3.31** | <u>**3.26**</u> | 3.35 |
| | S3 | 2.95 | 3.34 | 2.93 | <u>**2.51**</u> | 2.74 | **2.59** |
| | S4 | 8.07 | 7.20 | **3.58** | <u>**3.56**</u> | 3.71 | 4.84 |
| C100 | S1 | <u>**23.30**</u> | 29.46 | 25.93 | 24.25 | 28.37 | **24.03** |
| | S2 | <u>**21.21**</u> | 26.05 | 22.30 | **22.24** | 23.25 | 23.52 |
| | S3 | 23.75 | 28.90 | <u>**22.36**</u> | 23.99 | 23.73 | **23.37** |
| | S4 | 28.19 | 22.85 | 22.18 | **21.94** | <u>**21.26**</u> | 23.20 |
| SVHN | S1 | 2.59 | 4.58 | **2.55** | 4.79 | 2.72 | <u>**2.53**</u> |
| | S2 | 2.72 | 3.53 | **2.52** | <u>**2.51**</u> | 2.60 | 2.54 |
| | S3 | 2.87 | 3.41 | **2.49** | <u>**2.48**</u> | 2.50 | 2.50 |
| | S4 | 3.46 | 3.05 | 2.61 | **2.50** | 2.51 | <u>**2.46**</u> |
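As an aside on DARTS-ADA's mechanics, its rollback-and-increase control loop can be sketched as follows (our simplification of Algorithm 1; `search_epoch` and `criterion_met` are hypothetical stand-ins for one epoch of DARTS search and the eigenvalue criterion of Section 4.3):

```python
def darts_ada(num_epochs, r_init, eta, r_max, search_epoch, criterion_met, k=5):
    """Search with regularization strength r; whenever the early-stopping
    criterion fires, roll back k epochs and retry with r increased by a
    factor eta, unless that would exceed r_max."""
    r = r_init
    history = [{"epoch": 0, "r": r}]  # one snapshot of the search state per epoch
    state = history[0]
    while state["epoch"] < num_epochs:
        state = search_epoch(state, r)
        history.append(state)
        if criterion_met(history) and r * eta <= r_max:
            state = history[max(0, len(history) - 1 - k)]  # roll back k epochs
            history = history[: max(1, len(history) - k)]
            r *= eta
    return state, r

# Toy stand-ins: the criterion fires at epoch 6 whenever r is still small.
search_epoch = lambda state, r: {"epoch": state["epoch"] + 1, "r": r}
criterion_met = lambda hist: hist[-1]["epoch"] >= 6 and hist[-1]["r"] < 0.5
final_state, final_r = darts_ada(10, 0.1, 2.0, 1.0, search_epoch, criterion_met)
```

With these stubs, the search rolls back three times (doubling $R$ from 0.1 to 0.8) before the criterion stops firing and the remaining epochs complete.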
As the table shows, in accordance with Li & Talwalkar (2019), RS-ws often outperformed the original DARTS; however, with our robustifications, DARTS typically performs substantially better than RS-ws. DARTS-ADA consistently improved over standard DARTS for all benchmarks, indicating that a gradual increase of regularization during search prevents ending up in bad regions of the architectural space. Finally, RobustDARTS yielded the best performance, and since it is also easier to implement than DARTS-ES and DARTS-ADA, it is the method we recommend using in practice.

Lastly, since the evaluations in this paper have so far focused on smaller subspaces of the original DARTS search space, the reader may wonder how well RobustDARTS works on the full search spaces. As Table 4 shows, RobustDARTS performed similarly to DARTS for the two original benchmarks from the DARTS paper (PTB and CIFAR-10), on which DARTS was developed and is well tuned; however, even when only changing the dataset to CIFAR-100 or SVHN, RobustDARTS already performed significantly better than DARTS, underlining its robustness.

Table 4: DARTS vs. RobustDARTS on the original DARTS search spaces. We show mean ± stddev for 5 repetitions (based on 4 fresh subruns each as in Table 3); for the more expensive PTB we could only afford 1 such repetition.
| Benchmark | DARTS | R-DARTS(L2) |
| --- | --- | --- |
| C10 | 2.91 ± 0.25 | 2.95 ± 0.21 |
| C100 | 20.58 ± 0.44 | 18.01 ± 0.26 |
| SVHN | 2.46 ± 0.09 | 2.17 ± 0.09 |
| PTB | 58.64 | 57.59 |
# 6 CONCLUSIONS

We showed that the generalization performance of architectures found by DARTS is related to the eigenvalues of the Hessian matrix of the validation loss w.r.t. the architectural parameters. Standard DARTS often results in degenerate architectures with large eigenvalues and poor generalization. Based on this observation, we proposed a simple early stopping criterion for DARTS based on tracking the largest eigenvalue. Our empirical results also show that properly regularizing the inner objective helps control the eigenvalue and therefore improves generalization. Our findings substantially improve our understanding of DARTS' failure modes and lead to much more robust versions. They are consistent across many different search spaces on image recognition tasks and also for the very different domains of language modelling and disparity estimation. Our code is available for reproducibility.

# ACKNOWLEDGMENTS

The authors acknowledge funding by the Robert Bosch GmbH, support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme through grant no. 716721, and by BMBF grant DeToL.

# REFERENCES

Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In International Conference on Learning Representations, 2017a.
Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating Neural Architecture Search using Performance Prediction. In NIPS Workshop on Meta-Learning, 2017b.
Ahmad Beirami, Meisam Razaviyayn, Shahin Shahrampour, and Vahid Tarokh. On optimal generalizability in parametric learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 3455-3465. Curran Associates, Inc., 2017.
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le.
Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, 2018.
Y. Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889-1900, Aug 2000. ISSN 0899-7667. doi: 10.1162/089976600300015187.
D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black. A naturalistic open source movie for optical flow evaluation. In A. Fitzgibbon et al. (eds.), European Conf. on Computer Vision (ECCV), Part IV, LNCS 7577, pp. 611-625. Springer-Verlag, October 2012.
Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In AAAI, 2018a.
Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-Level Network Transformation for Efficient Architecture Search. In International Conference on Machine Learning, June 2018b.
Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.
Francesco Casale, Jonathan Gordon, and Nicolo Fusi. Probabilistic neural architecture search. arXiv preprint, 2019.
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In Conference on Computer Vision and Pattern Recognition, 2019.
Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization, 2007.
Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
Justin Domke. Generic methods for optimization-based modeling. In Neil D. Lawrence and Mark Girolami (eds.), Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of Proceedings of Machine Learning Research, pp. 318-326, La Palma, Canary Islands, 21-23 Apr 2012. PMLR.
A. Dosovitskiy, P.
Fischer, E. Ilg, P. Häusser, C. Hazirbaş, V. Golkov, P. v.d. Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In IEEE International Conference on Computer Vision (ICCV), 2015. +Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Simple And Efficient Architecture Search for Convolutional Neural Networks. In NIPS Workshop on Meta-Learning, 2017. +Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In International Conference on Learning Representations, 2019a. +Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1-21, 2019b. + +Stefan Falkner, Aaron Klein, and Frank Hutter. BOHB: Robust and efficient hyperparameter optimization at scale. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1437-1446, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/falkner18a.html. +Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1165-1173, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR. +Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1568-1577, Stockholm, Sweden, 10-15 Jul 2018. PMLR. +Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. 
Neural Comput., 9(1):1-42, January 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.1.1.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: Bandit-based configuration evaluation for hyperparameter optimization. In Proceedings of the International Conference on Learning Representations (ICLR'17), 2017. Published online: iclr.cc.
Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. CoRR, abs/1902.07638, 2019.
H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. In International Conference on Learning Representations (ICLR) 2018 Conference Track, April 2018a.
Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical Representations for Efficient Architecture Search. In International Conference on Learning Representations, 2018b.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.
Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 2113-2122, Lille, France, 07-09 Jul 2015. PMLR.
N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016. arXiv:1512.02134.
+Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, and Babak Hodjat. Evolving Deep Neural Networks. In arXiv:1703.00548, March 2017. +Thanh Dai Nguyen, Sunil Gupta, Santu Rana, and Svetha Venkatesh. Stable bayesian optimization. International Journal of Data Science and Analytics, 6(4):327-339, Dec 2018. ISSN 2364-4168. doi: 10.1007/s41060-018-0119-9. + +Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 737-746, New York, New York, USA, 20-22 Jun 2016. PMLR. +Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In International Conference on Machine Learning, 2018. +Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V. Le, and Alexy Kurakin. Large-scale evolution of image classifiers. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 2902-2911, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR. +Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Aging Evolution for Image Classifier Architecture Search. In AAAI, 2019. +T. Saikia, Y. Marrakchi, A. Zela, F. Hutter, and T. Brox. Autodispnet: Improving disparity estimation with automl. In IEEE International Conference on Computer Vision (ICCV), 2019. URL http://lmb.informatik.uni-freiburg.de/Publications/2019/SMB19. +Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 4053-4061. Curran Associates, Inc., 2016. 
+Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudi Musat, and Mathieu Salzmann. Evaluating the search phase of neural architecture search. arXiv preprint, 2019. +Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10:99-127, 2002. +Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: stochastic neural architecture search. In International Conference on Learning Representations, 2019. +Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, and Michael W Mahoney. Hessian-based analysis of large batch training and robustness to adversaries. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 4949-4959. Curran Associates, Inc., 2018. +Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. In ICML 2018 Workshop on AutoML (AutoML 2018), July 2018. +Zhao Zhong, Jingchen Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. Practical block-wise neural network architecture generation. In CVPR. IEEE Computer Society, 2018. +Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017. +Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In Conference on Computer Vision and Pattern Recognition, 2018. + +# A MORE DETAIL ON DARTS + +Here we present a detailed description of DARTS architectural update steps. We firstly provide the general formalism which computes the gradient of the outer level problem in (1) by means of the implicit function theorem. Afterwards, we present how DARTS computes the gradient used to update the architectural parameters $\alpha$ . 
# A.1 DERIVATIVE WITH SMOOTHED NON-QUADRATIC LOWER LEVEL PROBLEM

Consider the general definition of the bi-level optimization problem as given by (1) and (2). Given that $f$ is twice continuously differentiable and that all stationary points are local minima, one can make use of the implicit function theorem to find the derivative of the solution map $\theta^{*}(y)$ w.r.t. $y$ (Bengio, 2000). Under the smoothness assumption, the optimality condition of the lower level (2) is $\nabla_{\theta}f(y,\theta) = \mathbf{0}$, which defines an implicit function $\theta^{*}(y)$. With the assumption that $\min_{\theta}f(y,\theta)$ has a solution, there exists a $(y,\theta^{*})$ such that $\nabla_{\theta}f(y,\theta^{*}) = \mathbf{0}$. Under the condition that $\nabla_{\theta}f$ is continuously differentiable and that $\theta^{*}(y)$ is differentiable at $y$, implicitly differentiating both sides of the last equality w.r.t. $y$ and applying the chain rule yields:

$$
\frac{\partial (\nabla_{\theta} f)}{\partial \theta}(y, \theta^{*}) \cdot \frac{\partial \theta^{*}}{\partial y}(y) + \frac{\partial (\nabla_{\theta} f)}{\partial y}(y, \theta^{*}) = \mathbf{0}. \tag{3}
$$

Assuming that the Hessian $\nabla_{\theta}^{2}f(y,\theta^{*})$ is invertible, we can rewrite (3) as follows:

$$
\frac{\partial \theta^{*}}{\partial y}(y) = -\left(\nabla_{\theta}^{2} f\left(y, \theta^{*}\right)\right)^{-1} \cdot \frac{\partial\left(\nabla_{\theta} f\right)}{\partial y}\left(y, \theta^{*}\right). \tag{4}
$$

Applying the chain rule to (1) for computing the total derivative of $F$ with respect to $y$ yields:

$$
\frac{dF}{dy} = \frac{\partial F}{\partial \theta} \cdot \frac{\partial \theta^{*}}{\partial y} + \frac{\partial F}{\partial y}, \tag{5}
$$

where we have omitted the evaluation at $(y,\theta^{*})$.
Substituting (4) into (5) and reordering yields:

$$
\frac{dF}{dy} = \frac{\partial F}{\partial y} - \frac{\partial F}{\partial \theta} \cdot \left(\nabla_{\theta}^{2} f\right)^{-1} \cdot \frac{\partial^{2} f}{\partial \theta \partial y}. \tag{6}
$$

Equation (6) computes the gradient of $F$, given the function $\theta^{*}(y)$, which maps the outer variables to the inner variables minimizing the inner problem. However, in most cases obtaining such a mapping is computationally expensive, therefore different heuristics have been proposed to approximate $dF/dy$ (Maclaurin et al., 2015; Pedregosa, 2016; Franceschi et al., 2017; 2018).

# A.2 DARTS ARCHITECTURAL GRADIENT COMPUTATION

The DARTS optimization procedure is defined as a bi-level optimization problem where $\mathcal{L}_{\text{valid}}$ is the outer objective (1) and $\mathcal{L}_{\text{train}}$ is the inner objective (2):

$$
\min_{\alpha} \mathcal{L}_{\text{valid}}(\alpha, w^{*}(\alpha)) \tag{7}
$$

$$
s.t. \quad w^{*}(\alpha) = \underset{w}{\arg\min}\, \mathcal{L}_{\text{train}}(\alpha, w), \tag{8}
$$

where both losses are determined by both the architecture parameters $\alpha$ (outer variables) and the network weights $w$ (inner variables). Based on Appendix A.1, under some conditions, the total derivative of $\mathcal{L}_{\text{valid}}$ w.r.t. $\alpha$ evaluated at $(\alpha, w^{*}(\alpha))$ would be:

$$
\frac{d\mathcal{L}_{\text{valid}}}{d\alpha} = \nabla_{\alpha}\mathcal{L}_{\text{valid}} - \nabla_{w}\mathcal{L}_{\text{valid}}\left(\nabla_{w}^{2}\mathcal{L}_{\text{train}}\right)^{-1}\nabla_{\alpha,w}^{2}\mathcal{L}_{\text{train}}, \tag{9}
$$

where $\nabla_{\alpha} = \frac{\partial}{\partial\alpha}$, $\nabla_{w} = \frac{\partial}{\partial w}$ and $\nabla_{\alpha,w}^{2} = \frac{\partial^{2}}{\partial\alpha\partial w}$.
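As a sanity check on this derivation, the hypergradient in equation 9 can be verified numerically on a one-dimensional quadratic bilevel problem where $w^{*}(\alpha)$ is available in closed form. The two objectives below are toy examples chosen for illustration (not the paper's losses); the last lines also exercise the finite-difference trick DARTS uses in equation 13.

```python
# Toy 1-D bilevel problem (illustrative objectives, not the paper's):
#   L_train(a, w) = 0.5*(w - a)^2 + 0.1*w^2   =>  w*(a) = a / 1.2
#   L_valid(a, w) = 0.5*(w - 1)^2 + 0.5*(a - 2)^2

def w_star(a):
    return a / 1.2  # closed-form argmin_w of L_train

def L_valid(a, w):
    return 0.5 * (w - 1) ** 2 + 0.5 * (a - 2) ** 2

a = 0.7
w = w_star(a)

# Hypergradient via equation (9): grad_a F - grad_w F (H_ww)^{-1} H_aw
grad_a_F = a - 2                        # dL_valid/da
grad_w_F = w - 1                        # dL_valid/dw
H_ww = 1.2                              # d^2 L_train / dw^2
H_aw = -1.0                             # d^2 L_train / (da dw)
hypergrad = grad_a_F - grad_w_F * (1.0 / H_ww) * H_aw

# Reference: central difference of the map a -> L_valid(a, w*(a))
h = 1e-5
ref = (L_valid(a + h, w_star(a + h)) - L_valid(a - h, w_star(a - h))) / (2 * h)
print(abs(hypergrad - ref))             # agrees up to floating-point error

# Finite-difference trick (cf. equation 13) for the product grad_w F * H_aw:
eps = 1e-2
grad_a_L_train = lambda a_, w_: a_ - w_  # dL_train/da = -(w - a)
fd = (grad_a_L_train(a, w + eps * grad_w_F)
      - grad_a_L_train(a, w - eps * grad_w_F)) / (2 * eps)
print(abs(fd - grad_w_F * H_aw))        # exact here, since L_train is quadratic
```

Because the objectives are quadratic, both approximations match the analytic hypergradient almost exactly; on a real network only the finite-difference estimate in the last lines is affordable.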
Computing the inverse of the Hessian is in general intractable given the high dimensionality of the model parameters $w$, so one has to resort to gradient-based iterative algorithms for finding $w^{*}$. However, this would also require optimizing the model parameters $w$ until convergence each time $\alpha$ is updated. For a deep neural network this computation is clearly too expensive, therefore Liu et al. (2019) propose to approximate $w^{*}(\alpha)$ by updating the current model parameters $w$ using a single gradient descent step:

$$
w^{*}(\alpha) \approx w - \xi \nabla_{w}\mathcal{L}_{\text{train}}(\alpha, w), \tag{10}
$$

where $\xi$ is the learning rate for the virtual gradient step DARTS takes with respect to the model weights $w$. From equation 10, the gradient of $w^{*}(\alpha)$ with respect to $\alpha$ is

$$
\frac{\partial w^{*}}{\partial \alpha}(\alpha) = -\xi \nabla_{\alpha,w}^{2}\mathcal{L}_{\text{train}}(\alpha, w). \tag{11}
$$

By setting the evaluation point $w^{*} = w - \xi \nabla_{w}\mathcal{L}_{\text{train}}(\alpha, w)$ and following the same derivation as in Appendix A.1, we obtain the DARTS architectural gradient approximation:

$$
\frac{d\mathcal{L}_{\text{valid}}}{d\alpha}(\alpha) = \nabla_{\alpha}\mathcal{L}_{\text{valid}}\left(\alpha, w^{*}\right) - \xi \nabla_{w}\mathcal{L}_{\text{valid}}\left(\alpha, w^{*}\right)\nabla_{\alpha,w}^{2}\mathcal{L}_{\text{train}}\left(\alpha, w^{*}\right), \tag{12}
$$

where the inverse Hessian $\left(\nabla_{w}^{2}\mathcal{L}_{\text{train}}\right)^{-1}$ in (9) is replaced by the learning rate $\xi$. This expression however still contains an expensive vector-matrix product. Liu et al.
(2019) reduce the complexity by using the finite difference approximation around $w^{\pm} = w \pm \epsilon \nabla_{w}\mathcal{L}_{\text{valid}}(\alpha, w^{*})$, for some small $\epsilon = 0.01 / \|\nabla_{w}\mathcal{L}_{\text{valid}}(\alpha, w^{*})\|_{2}$, to compute the gradient of $\nabla_{\alpha}\mathcal{L}_{\text{train}}(\alpha, w^{*})$ with respect to $w$ as

$$
\nabla_{\alpha,w}^{2}\mathcal{L}_{\text{train}}(\alpha, w^{*}) \approx \frac{\nabla_{\alpha}\mathcal{L}_{\text{train}}(\alpha, w^{+}) - \nabla_{\alpha}\mathcal{L}_{\text{train}}(\alpha, w^{-})}{2\epsilon \nabla_{w}\mathcal{L}_{\text{valid}}(\alpha, w^{*})} \quad \Leftrightarrow
$$

$$
\nabla_{w}\mathcal{L}_{\text{valid}}(\alpha, w^{*})\,\nabla_{\alpha,w}^{2}\mathcal{L}_{\text{train}}(\alpha, w^{*}) \approx \frac{\nabla_{\alpha}\mathcal{L}_{\text{train}}(\alpha, w^{+}) - \nabla_{\alpha}\mathcal{L}_{\text{train}}(\alpha, w^{-})}{2\epsilon}. \tag{13}
$$

In the end, combining equation 12 and equation 13 gives the gradient used to compute the architectural updates in DARTS:

$$
\frac{d\mathcal{L}_{\text{valid}}}{d\alpha}(\alpha) = \nabla_{\alpha}\mathcal{L}_{\text{valid}}\left(\alpha, w^{*}\right) - \frac{\xi}{2\epsilon}\left(\nabla_{\alpha}\mathcal{L}_{\text{train}}\left(\alpha, w^{+}\right) - \nabla_{\alpha}\mathcal{L}_{\text{train}}\left(\alpha, w^{-}\right)\right). \tag{14}
$$

In all our experiments we always use $\xi = \eta$ (also called the second-order approximation in Liu et al. (2019)), where $\eta$ is the learning rate used in SGD for updating the parameters $w$.

# B CONSTRUCTION OF S1 FROM SECTION 3

We ran DARTS twice on the default search space to find the two most important operations per mixed operation. Initially, every mixed operation consists of 8 operations. After the first DARTS run, we drop the 4 (out of 8) least important ones.
In the second DARTS run, we drop the 2 (out of the remaining 4) least important ones. S1 is then defined to contain only the two remaining most important operations per mixed operation. Refer to Figure 9 for an illustration of this pre-optimized space.

# C FINAL ARCHITECTURE EVALUATION

As in the original DARTS paper (Liu et al., 2019), the architectures found during the search are scaled up by increasing the number of filters and cells, and retrained from scratch to obtain the final test performance. For CIFAR-100 and SVHN we use 16 initial filters and 8 cells when training architectures from scratch, for all the experiments we conduct. The rest of the settings are the same as in Liu et al. (2019).

On CIFAR-10, when scaling the ScheduledDropPath drop probability, we use the same settings for training the found architectures from scratch as in the original DARTS paper, i.e. 36 initial filters and 20 stacked cells. However, for search spaces S2 and S4 we reduce the number of initial filters to 16 in order to avoid memory issues, since the cells found with more regularization are usually composed only of separable convolutions. When scaling the $L_{2}$ factor in the CIFAR-10 experiments we use 16 initial filters and 8 stacked cells, except for the experiments on S1, where the settings are the same as in Liu et al. (2019), i.e. 36 initial filters and 20 stacked cells.

![](images/303ba00d6028934926ff3f4ea86e1907b9ae7a4ad14b5516b0113ebb7e2f59b0.jpg)
(a) Normal cell space

![](images/86e99e1eb12156d78b54d212285a658ba6316abc2f950c508695be4c9d5cdcca.jpg)
(b) Reduction cell space
Figure 9: Search space S1.

Note that although we alter the regularization factors during the DARTS search, when training the final architectures from scratch we always use the same values for them as in Liu et al. (2019), i.e.
ScheduledDropPath maximum drop probability linearly increases from 0 towards 0.2 throughout training, Cutout is always enabled with cutout probability 1.0, and the $L_{2}$ regularization factor is set to $3 \cdot 10^{-4}$.

# D ADDITIONAL EMPIRICAL RESULTS

![](images/68e1b8f84182d898d198897a27ff5ffc2fac0ad299e6650d367e23463f064242.jpg)

![](images/36b33326e2681ebe5c787cb57ee5f0411d0cdf5744a3ea2869b555b80de46702.jpg)
Figure 10: Test regret and validation error of the search (one-shot) model when running DARTS on S5 and CIFAR-10 with different $L_{2}$ regularization values. Overfitting of the architectural parameters is reduced as we increase the $L_{2}$ factor, and the search successfully finds the global minimum. However, we notice that the architectural parameters start underfitting when the $L_{2}$ factor is increased too much, i.e. both validation and test error increase.

![](images/b346f39c2796ea3187f97496605cbefe01badecaac2176ce6499fac6f0246bb7.jpg)

![](images/d844080cddfe67bea7e6fc5b557dff240950e77f475f88b47b0c6b0b71ec1b27.jpg)

Table 5: Validation (train) and test accuracy on CIFAR-10 of the search and final evaluation models, respectively. The values in the last column show the maximum eigenvalue $\lambda_{max}^{\alpha}$ of the Hessian (computed on a randomly sampled mini-batch) at the end of search, for different maximum drop path probabilities. The four blocks in the table state results for the search spaces S1-S4, respectively.
| Space | Drop prob. | Valid acc. (seed 1 / 2 / 3) | Test acc. (seed 1 / 2 / 3) | Params (seed 1 / 2 / 3) | $\lambda_{\max}^{\alpha}$ (seed 1 / 2 / 3) |
| --- | --- | --- | --- | --- | --- |
| S1 | 0.0 | 87.22 / 87.01 / 86.98 | 96.16 / 94.43 / 95.43 | 2.24 / 1.93 / 2.03 | 1.023 / 0.835 / 0.698 |
| S1 | 0.2 | 84.24 / 84.32 / 84.22 | 96.39 / 96.66 / 96.20 | 2.63 / 2.84 / 2.48 | 0.148 / 0.264 / 0.228 |
| S1 | 0.4 | 82.28 / 82.18 / 82.79 | 96.44 / 96.94 / 96.76 | 2.63 / 2.99 / 3.17 | 0.192 / 0.199 / 0.149 |
| S1 | 0.6 | 79.17 / 79.18 / 78.84 | 96.89 / 96.93 / 96.96 | 3.38 / 3.02 / 3.17 | 0.300 / 0.255 / 0.256 |
| S2 | 0.0 | 88.49 / 88.40 / 88.35 | 95.15 / 95.48 / 96.11 | 0.93 / 0.86 / 0.97 | 0.684 / 0.409 / 0.268 |
| S2 | 0.2 | 85.29 / 84.81 / 85.36 | 95.15 / 95.40 / 96.14 | 1.28 / 1.44 / 1.36 | 0.270 / 0.217 / 0.145 |
| S2 | 0.4 | 82.03 / 82.66 / 83.20 | 96.34 / 96.50 / 96.44 | 1.28 / 1.28 / 1.36 | 0.304 / 0.411 / 0.282 |
| S2 | 0.6 | 79.86 / 80.19 / 79.70 | 96.52 / 96.35 / 96.29 | 1.21 / 1.28 / 1.36 | 0.292 / 0.295 / 0.281 |
| S3 | 0.0 | 88.78 / 89.15 / 88.67 | 94.70 / 96.27 / 96.66 | 2.21 / 2.43 / 2.85 | 0.496 / 0.535 / 0.446 |
| S3 | 0.2 | 85.61 / 85.60 / 85.50 | 96.78 / 96.84 / 96.74 | 3.62 / 4.04 / 2.99 | 0.179 / 0.185 / 0.202 |
| S3 | 0.4 | 83.03 / 83.24 / 83.43 | 97.07 / 96.85 / 96.48 | 4.10 / 3.74 / 3.38 | 0.156 / 0.370 / 0.184 |
| S3 | 0.6 | 79.86 / 80.03 / 79.68 | 96.91 / 94.56 / 96.44 | 4.46 / 2.30 / 2.66 | 0.239 / 0.275 / 0.280 |
| S4 | 0.0 | 86.33 / 86.72 / 86.46 | 92.80 / 93.22 / 93.14 | 1.05 / 1.13 / 1.05 | 0.400 / 0.442 / 0.314 |
| S4 | 0.2 | 81.01 / 82.43 / 82.03 | 95.84 / 96.08 / 96.15 | 1.44 / 1.44 / 1.44 | 0.070 / 0.054 / 0.079 |
| S4 | 0.4 | 79.49 / 79.67 / 78.96 | 96.11 / 96.30 / 96.28 | 1.44 / 1.44 / 1.44 | 0.064 / 0.057 / 0.049 |
| S4 | 0.6 | 74.54 / 74.74 / 74.37 | 96.42 / 96.36 / 96.64 | 1.44 / 1.44 / 1.44 | 0.057 / 0.060 / 0.066 |
# D.1 ADAPTIVE DARTS DETAILS

We evaluated DARTS-ADA (Section 5.3) with $R = 3 \cdot 10^{-4}$ (DARTS default), $R_{max} = 3 \cdot 10^{-2}$ and $\eta = 10$ on all the search spaces and datasets we use for image classification. The results are shown in Table 3 (DARTS-ADA). The function train_and_eval conducts the normal DARTS search for one epoch and returns the architecture at the end of that epoch's updates, together with the stop value if a decision was made to stop the search and roll back to stop_epoch.

Algorithm 1: DARTS-ADA
/* E: epochs to search; R: initial regularization value; $R_{max}$: maximal regularization value; stop_criter: stopping criterion; $\eta$: regularization increase factor */
Input: E, R, $R_{max}$, stop_criter, $\eta$
/* start search for E epochs */
for epoch in E do
  /* run DARTS for one epoch and return stop=True together with the stop_epoch */
  /* and the architecture at stop_epoch if the criterion is met */
  stop, stop_epoch, arch $\leftarrow$ train_and_eval(stop_criter);
  if stop and $R \leq R_{max}$ then
    /* restart DARTS from stop_epoch with a larger R */
    arch $\leftarrow$ DARTS-ADA(E - stop_epoch, $\eta \cdot R$, $R_{max}$, stop_criter, $\eta$);
    break
  end
end
Output: arch

![](images/2e24409f82de36ebf5e0c43f4a39434cd990eac80daeecd6e39413635dc94d98.jpg)
Figure 11: Test errors of architectures along with the validation error of the search (one-shot) model for each dataset and space when scaling the ScheduledDropPath drop probability. Note that these results (blue lines) are the same as the ones in Figure 8.
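The control flow of Algorithm 1 can be sketched in a few lines of Python; the `train_and_eval` callback below is a hypothetical stand-in for one epoch of DARTS search plus the stopping-criterion check, not the actual implementation.

```python
# Sketch of the DARTS-ADA control loop (Algorithm 1). `train_and_eval`
# is a hypothetical callback: it runs one epoch of DARTS search and
# returns (stop, stop_epoch, arch), where stop=True means the early
# stopping criterion fired and the search should roll back to stop_epoch.
def darts_ada(E, R, R_max, stop_criter, eta, train_and_eval):
    arch = None
    for epoch in range(E):
        stop, stop_epoch, arch = train_and_eval(stop_criter)
        if stop and R <= R_max:
            # restart from stop_epoch with regularization increased by eta
            arch = darts_ada(E - stop_epoch, eta * R, R_max,
                             stop_criter, eta, train_and_eval)
            break
    return arch
```

The recursion mirrors the rollback in Algorithm 1: each early stop restarts the remaining epochs with the regularization value scaled by $\eta$, until $R$ exceeds $R_{max}$.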
+ +![](images/73db970bc9bb744584b152c6ea44d4f28c9a5e162613c08bc06923fc66cb8006.jpg) + +![](images/bf7703c0a7f8055a19fe7db9adf6065eef6b65596eb0f35a8c8197724421c0c3.jpg) + +![](images/ae2903bbac6529ddbf5899dffdc06baded049ee8f92cf98cde84be9949ec54a6.jpg) + +![](images/b8004d1b40dba7997dad3e1e2c5abaacee2e3d6b2d8cd2e1fb44760dd05a4666.jpg) + +![](images/f0f1d663a5fc0c9909be495f82e21260099814f5b2eebd8832776114a533172f.jpg) + +![](images/5b5541d694253838b7fcc0a3556e233544daecac7901c5930fb7a8068117e775.jpg) + +![](images/cb3cffdfbe1e8102f21370149a95b74443e214246180d018ebce6068035427ff.jpg) + +![](images/a616543f9211ebacde1b1f4039ffbb9f22ea03181a1561d995d3936b8dda3b5a.jpg) + +![](images/aa1d031220ba1682c4bf7f05c73fa366f433084b57a5f09aa86e66e291e60f86.jpg) +Figure 12: Test errors of architectures along with the validation error of the search (one-shot) model for each dataset and space when scaling the $L_{2}$ factor. Note that these results (blue lines) are the same as the ones in Figure 7. + +![](images/b6bcdd4a7c53e999dc05374203a792ea519bc85e571979b81643546629d18d08.jpg) + +![](images/f75add5a007e453239db0c2fd0db8ca1cfebaab3143e31b0a14791679d3c6fe5.jpg) + +![](images/4dcca62f7bb73f6f7715f401166fdb94480cdc06d7adcec967e9c871f7e7e88a.jpg) + +![](images/667f039655ff34ac52e7a13be71dd48d47416b8782f36767ad25c9b407f0fa3f.jpg) + +![](images/81546d768f1e49d3c0161ba8083a9ba918a8220724c0857b405cd78a78e48341.jpg) + +![](images/4a6fb8f2e5eec6365eb1d8c7218013d707ff649dbe95c13d78868fdd63f081e5.jpg) + +![](images/3be361fc03bdad20b60169d682de860e8a82a484605bf6ce465ceee04ed2e13c.jpg) + +![](images/0758318159a17e2889b9d781940507c274edb1bfa5718bea1741f99ba775d5d2.jpg) + +![](images/96be01ffe21f72b76902446d16af9bd164771340a7b4717bf8e0850c3f0fe0a6.jpg) + +![](images/7efdc31716086e20e7e22abd31acf05bedad2df793c4a8e4b2d3f03639518e52.jpg) + +![](images/710080d29c33020ead76016c6f580070a46d1c3bd1726523f60188bbed182787.jpg) + 
![](images/e0bef403f74667c843854c3a4bbafa3c4de5c0c1fbdf4c473f3dff5abe750d05.jpg)
Figure 13: Local average of the dominant EV $\lambda_{max}^{\alpha}$ throughout DARTS search (for different drop path prob. values). Markers denote the early stopping point based on the criterion in Section 4.3.

![](images/d46ccf455172e5df5a3a7f80cfd2e82866fd9a98a82683f9f7d524011e585da5.jpg)

![](images/95d1461517fb88f42df159ba8c13b2ea097603206150cdff1367bb401c50df56.jpg)

![](images/c18a70caf98c8124a487472fdad48572841ecb999fef2d9dd5b3f15b5f48d2e8.jpg)

![](images/6ec462612baf28970842f5e7352b2efb450d9e081ca4e5e686a28bec4add6dc3.jpg)

![](images/dbb68c08b09af19e072baece1a76746f7b96fb9050dd9dec237ff5b4d865d223.jpg)

![](images/20dc538e83f5752f8eb9c4af19891e9881ed7c77f63b2518801c5da2f808dc11.jpg)

![](images/4ffe4d24e06a54b129320ee5afec43ef0c7b5394bfc604e844eceea5425e7092.jpg)

![](images/72fc55f7c1f1da7a16d060361be3fc89a4e4e0cd3672b9dbbeae33a783e13474.jpg)

![](images/313ca8e08ceeeaf266501ac439d380f8cb20ecafca56bc66a28ccbd434f51100.jpg)

![](images/79f76c2448604e0b241ea75c06692ae668821deeb8240317df042f2d4a03ce1f.jpg)

![](images/ce20af4c77bfb806901682fa2d7c718b4fb091514c4c6c42577a82a43a8d5ba5.jpg)

![](images/713619c8712959d3af88bb2d66cd7e766ad7d1aa8dbdb5b213024bbd7ac476fe.jpg)
Figure 14: Effect of $L_{2}$ regularization on the EV trajectory. The figure is analogous to Figure 13.
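The dominant eigenvalue $\lambda_{max}^{\alpha}$ tracked in Figures 13 and 14 does not require forming the full Hessian: a standard way to estimate it is power iteration on Hessian-vector products (in a deep learning framework, one Hessian-vector product is a second backward pass through the gradient). The sketch below is illustrative only, using an explicit matrix in place of the Hessian of $\mathcal{L}_{valid}$ w.r.t. $\alpha$.

```python
import numpy as np

def dominant_eigenvalue(hvp, dim, iters=100, seed=0):
    """Power iteration using only Hessian-vector products `hvp`.
    Converges to the eigenvalue of largest magnitude (which may be
    negative for an indefinite Hessian)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)
    return float(v @ hvp(v))  # Rayleigh quotient at the converged vector

# Illustration on an explicit symmetric "Hessian":
H = np.diag([1.0, 0.5, -0.2])
lam = dominant_eigenvalue(lambda v: H @ v, dim=3)
print(lam)  # ~1.0, the eigenvalue of largest magnitude
```

With an autograd framework, `hvp` would be replaced by a double-backward pass, so tracking $\lambda_{max}^{\alpha}$ during search costs only a few extra gradient evaluations per epoch.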
+ +![](images/6e65e3c43a34717aa9f0378269fffa58054ac5d959f50df2da0cb4a7d5097d4d.jpg) + +![](images/aebfb6880359d5982c4520d1d55984670a61412f3817471fa2716315f81a85e3.jpg) + +![](images/eb8f770dacef42425a5c9d067e771702c42e479dc2e81f34051ed1b6b33b9e91.jpg) + +![](images/d54b5c33925c0bcde84f5efc87e51adef76843d13af60c02a63ccb072c744a26.jpg) + +![](images/c0fafee59f600c634eeb6031b16f9b39692515870d30bd909cf09819cb2b02a5.jpg) + +![](images/3aa4b5d380a8613ec1d443dc84c8daaa9bb933a05cbc99ca2e4a326adab200ec.jpg) + +![](images/a5fcda4b692802ac1eb856066e058991f28a3a040fbf53b36f23cd9dfa63bc14.jpg) + +![](images/3093066b9850f487d948ade550af8f290014475ceeaebe1fe356253233426f9c.jpg) + +![](images/912c0989aef9b57cdfc90e47a43f33b6c754f3617612f8e36f6c1c2039a50bf8.jpg) + +![](images/49ecbd9a0382ef098e5d317463282ef2f817c9692a9658f9d41da325b77fbab1.jpg) + +![](images/d2e91de03307bf9548b8d067937e51a9bfc87394b5472ef119a30edc3e9dfbdb.jpg) + +![](images/e334a5c00477f7f1d8805542128a83fc99d620cf9a0b76241fc4478e664a5d6e.jpg) + +![](images/8ea824ee67f21b0cc41f9497fb98aa69eea9247a1661a65793ea1f978e63461e.jpg) + +![](images/a432e379fe4a80889651f505cd734bb6e68d17ae53e69c0e6346d2c227b57e92.jpg) + +![](images/0db7cd05438018b4d201b11751d2051e7f062de1cd59b54a4842f444dcdabe92.jpg) + +![](images/eb51f53b62ea1d5a7259596d34ec407f7dec71c90c115db8bae205f0c3834012.jpg) + +![](images/e6441f342393f7057b142b6c9f690096a05ddf6bd013e27e54c137eff4e9c9e4.jpg) + +![](images/2c77a26ec7ec7854e34a3c16925e150751f0987e36f3c4ba89cffe495ecafc19.jpg) + +![](images/a222c61cd523e231ef7a29b36610b146ad9e96b4cce0c9c3a58059e6e5917229.jpg) + +![](images/4c28ca4753681daeafe93d27e89622e4afc337208869f2b50e982e240133cc92.jpg) + +![](images/24b1063a179b3622d6745c75a05027c96e54b6d6c68f5d8fa826e3dd989260c9.jpg) + +![](images/c165124b0c157f309f3a0fd6d92b4d3a3e994c0882d216188269e6560d8e9574.jpg) + +![](images/7540881f14fa53508de703268d419741aec3e41b405f362fced938b01e74c4ce.jpg) + 
+![](images/071330983edad520cbb39e2fc60114f4d1eea34b12236510596ff4085796aa8c.jpg) +Figure 15: Effect of ScheduledDropPath and Cutout on the full eigenspectrum of the Hessian at the end of architecture search for each of the search spaces. Since most of the eigenvalues after the 30-th largest one are almost zero, we plot only the largest (based on magnitude) 30 eigenvalues here. We also provide the eigenvalue distribution for these 30 eigenvalues. Notice that not only the dominant eigenvalue is larger when $dp = 0$ but in general also the others. + +![](images/b06b743f3feda7d50c6c129f4fb60d5954da9dba2c4e8525baaab3a637027ed9.jpg) + +![](images/b4475a8c8566b87caad9c2a926cb8cf97367d814405143a9f66ca1ab467076d2.jpg) + +![](images/5fd2162feaf469c5b5f910fbff7907da6b57b78b1f973016e623f27c3b5c4fb4.jpg) + +![](images/313529b0fe811931b7bf84a435694369c38b572de9f258e600d68eb9065812eb.jpg) + +![](images/19d816ba6550496bebd3b1d427e1e031112f2806130e4045066bbc1f3ca49b7c.jpg) + +![](images/4234e110166e3d45007676a583d32d56adb3dbfa456d1c983a701e7bfd413b0e.jpg) + +![](images/377cee2a3039eb1179771c44dad79bf5e72425f2f11af05fc3c3018fa7ecca1d.jpg) + +![](images/d82974667364cd7288ab458ff2eb57700fca4682a51b99c8deabb3f8bb5a8520.jpg) + +![](images/a2cc5cb1cf1408807dd12ecc8ee7f61bab2157226944a061453900ed2570b875.jpg) + +![](images/43b7039d65d0ffa5750378e5fbc281d954d7522662d1df792145097216851519.jpg) + +![](images/865ecf9ffc12ec8e76db7c0fd9a279f56bd4f21eb0fb294715bcf6cdff71faf2.jpg) + +![](images/b554febd9e3989149cc690b11fe165058e26ffc8fdf10077bf2c807edbc08f5e.jpg) + +![](images/97b2390779e9edf87ca81ae58b2c8fdb275e938d032964b55d4c137efebaad1f.jpg) + +![](images/816ca340e0177758a3e3506555adaddfff57c24c57b236c110bc71dfc8caa0a2.jpg) + +![](images/69ed50a7131cc4d5e6549b7dfdb4c45d5e5ab95f3fae773f92411582c9bdf6b9.jpg) + +![](images/a06c2650cee79aaadfb7b82802883a2c1bf5a154d81a970cd994cbf0ed6ab725.jpg) + +![](images/5629abb1e1a70fc6156235c3094337af55c8d4d140d45a8711c15526fd853b70.jpg) + 
![](images/54da5793209014768f83389b3496e9850dd72a3e721d881a0819d5d025668e2b.jpg)

![](images/918ed53d5438a8ac4bad41b5bfc1a83fb968b5932a385fc2a2cec91322107701.jpg)

![](images/1a03b63e7d4b2b111d0e309162b124d7654611f6f19c77a4e086be2348bae26f.jpg)

![](images/46f4c3863235cebc561df3dc0e68e476e07f3dae4c392f1203bcdae34e1b0f34.jpg)

![](images/9413300bdff41ddd278c3092ba02ff857e36b5d4058523867ee8a1a21e1aeb79.jpg)

![](images/84d5207befdb86bf7529281f22321d0b57fda1806bd3c4ea3d5fb1ddfb12576d.jpg)

![](images/f75e3de86c13b5a315848adc48ca9dab4183d69eae5dc4e1a3026d2f61a215f5.jpg)
Figure 16: Effect of $L_{2}$ regularization on the full eigenspectrum of the Hessian at the end of architecture search for each of the search spaces. Since most of the eigenvalues after the 30-th largest one are almost zero, we plot only the largest (based on magnitude) 30 eigenvalues here. We also provide the eigenvalue distribution for these 30 eigenvalues. Notice that not only is the dominant eigenvalue larger when $L_{2} = 3 \cdot 10^{-4}$, but in general so are the others.

![](images/f906f8f04b5ab58293767fac48c95e241b20f48179dcd2999fd8cb26b7daeeea.jpg)

![](images/f8952de9fa6a9b4a56e3ceb6eb40d148d631f2c83da51256e403db44dcbabda4.jpg)

![](images/f6785a3a0fb7a016ffc8e8d6c88b2ab5c4f0be58b62ea2a7ffd855c50e67e9c4.jpg)

![](images/6e7f4c1ac49bb82d60c077bfacac32904483cb955103261a1d9ad297d5dc82db.jpg)
Figure 17: Drop in accuracy after discretizing the search model for different spaces, datasets and drop path regularization strengths. Examples of some of the settings from Section 5.

![](images/91714574d9ace29274dc9704e9795d26ef5907b112ccf148d74e47011e6bdab0.jpg)
Figure 18: Effect of more regularization on the performance of found architectures by DARTS.

![](images/8d27c2c7661f08731d18c6028f15b7b325fc9577226ca2f556b3fd18547c44a2.jpg)

Table 6: Performance of architectures found by DARTS (-ES / -ADA) vs. RandomNAS with weight sharing.
For each of the settings we repeat the search 3 times and report the mean ± std of the 3 found architectures retrained from scratch. + +
| Dataset | Space | RandomNAS | DARTS | DARTS-ES | DARTS-ADA |
| --- | --- | --- | --- | --- | --- |
| C10 | S1 | 3.17 ± 0.15 | 4.66 ± 0.71 | 3.05 ± 0.07 | 3.03 ± 0.08 |
| C10 | S2 | 3.46 ± 0.15 | 4.42 ± 0.40 | 3.41 ± 0.14 | 3.59 ± 0.31 |
| C10 | S3 | 2.92 ± 0.04 | 4.12 ± 0.85 | 3.71 ± 1.14 | 2.99 ± 0.34 |
| C10 | S4 | 89.39 ± 0.84 | 6.95 ± 0.18 | 4.17 ± 0.21 | 3.89 ± 0.67 |
| C100 | S1 | 25.81 ± 0.39 | 29.93 ± 0.41 | 28.90 ± 0.81 | 24.94 ± 0.81 |
| C100 | S2 | 22.88 ± 0.16 | 28.75 ± 0.92 | 24.68 ± 1.43 | 26.88 ± 1.11 |
| C100 | S3 | 24.58 ± 0.61 | 29.01 ± 0.24 | 26.99 ± 1.79 | 24.55 ± 0.63 |
| C100 | S4 | 30.01 ± 1.52 | 24.77 ± 1.51 | 23.90 ± 2.01 | 23.66 ± 0.90 |
| SVHN | S1 | 2.64 ± 0.09 | 9.88 ± 5.50 | 2.80 ± 0.09 | 2.59 ± 0.07 |
| SVHN | S2 | 2.57 ± 0.04 | 3.69 ± 0.12 | 2.68 ± 0.18 | 2.79 ± 0.22 |
| SVHN | S3 | 2.89 ± 0.09 | 4.00 ± 1.01 | 2.78 ± 0.29 | 2.58 ± 0.07 |
| SVHN | S4 | 3.42 ± 0.04 | 2.90 ± 0.02 | 2.55 ± 0.15 | 2.52 ± 0.06 |
# D.2 A CLOSER LOOK AT THE EIGENVALUES

Over the course of all experiments in the paper, we tracked the largest eigenvalue across all configurations and datasets to see how it evolves during the search. Figures 13 and 14 show the results across all the settings for image classification. It can be clearly seen that increasing the regularization of the inner objective, both in terms of $L_{2}$ and data augmentation, helps control the largest eigenvalue and keep it small, which again helps explain why the architectures found with stronger regularization generalize better. The markers on each line highlight the epochs where DARTS is early stopped. As one can see from Figure 4, there is indeed some correlation between the average dominant eigenvalue throughout the search and the test performance of the architectures found by DARTS.

Figures 15 and 16 (top 3 rows) show the full spectrum (sorted by eigenvalue absolute value) at the end of search, whilst the bottom 3 rows plot the distribution of eigenvalues in the eigenspectrum. As one can see, compared to the cases where the regularization is stronger and the found architectures generalize better, not only is the dominant eigenvalue larger, but the other eigenvalues in the spectrum also have larger absolute values, indicating a sharper objective landscape along many dimensions. Furthermore, the distribution plots show more negative eigenvalues whenever the architectures are degenerate (lower regularization values), indicating that DARTS gets stuck at a point with large positive and negative curvature of the validation loss objective, associated with a more indefinite Hessian matrix.

# E DISPARITY ESTIMATION

# E.1 DATASETS

We use the FlyingThings3D dataset (Mayer et al., 2016) for training AutoDispNet. It consists of rendered stereo image pairs and their ground truth disparity maps.
The dataset provides a training and a testing split consisting of 21,818 and 4,248 samples, respectively, with an image resolution of $960 \times 540$. We use the Sintel dataset (Butler et al., 2012) for testing our networks. Sintel is another synthetic dataset, derived from an animated movie, which also provides ground truth disparity maps (1,064 samples) with a resolution of $1024 \times 436$.

# E.2 TRAINING

We use the AutoDispNet-C architecture as described in Saikia et al. (2019). However, we use the smaller search space, which consists of three operations: MaxPool $3 \times 3$, SepConv $3 \times 3$, and SkipConnect. For training the search network, images are downsampled by a factor of two and the network is trained for $300k$ mini-batch iterations. During search, we use SGD and ADAM to optimize the inner and outer objectives, respectively. Differently from the original AutoDispNet, we do not warm-start the search model weights before starting the architectural parameter updates. The extracted network is also trained for $300k$ mini-batch iterations, but on full resolution images. Here, ADAM is used for optimization and the learning rate is annealed from $10^{-4}$ to 0 using a cosine decay schedule.

# E.3 EFFECT OF REGULARIZATION ON THE INNER OBJECTIVE

To study the effect of regularization on the inner objective for AutoDispNet-C, we experiment with two types of regularization: data augmentation and $L_{2}$ regularization on the network weights.

We note that we could not test the early stopping method on AutoDispNet, since AutoDispNet relies on custom operations to compute feature map correlation (Dosovitskiy et al., 2015) and resampling, for which second-order derivatives (required to compute the Hessian) are currently not available.

Data augmentation. Despite the fairly large number of training samples in FlyingThings3D, data augmentation is crucial for good generalization performance.
Disparity estimation networks employ spatial transformations such as translation, cropping, shearing and scaling. Additionally, appearance transformations such as additive Gaussian noise and changes in brightness, contrast, gamma and color are also applied. Parameters for such transformations are sampled from a uniform or Gaussian distribution (parameterized by a mean and variance). In our experiments, we vary the data augmentation strength by multiplying the variance of these parameter distributions by a fixed factor, which we dub the augmentation scaling factor. The extracted networks are evaluated with the same augmentation parameters. The results of increasing the augmentation strength of the inner objective can be seen in Table 2. We observe that as the augmentation strength increases, DARTS finds networks with more parameters and better test performance. The best test performance is obtained by the network with maximum augmentation on the inner objective. At the same time, the search model validation error increases when scaling up the augmentation factor, which again supports the argument that overfitting of the architectural parameters is reduced by this implicit regularizer.

L2 regularization. We study the effect of increasing the regularization strength on the weights of the network. The results are shown in Table 2. Also in this case, the best test performance is obtained with the maximum regularization strength.

# F RESULTS ON PENN TREEBANK

Here we investigate the effect of more $L_{2}$ regularization on the inner objective when searching for recurrent cells on Penn Treebank (PTB). We again used a reduced search space, with only $ReLU$ and identity mapping as possible operations. The rest of the settings are the same as in Liu et al. (2019).
+ +We run DARTS search four independent times with different random seeds, each with four $L_{2}$ regularization factors, namely $5 \times 10^{-7}$ (DARTS default), $15 \times 10^{-7}$ , $45 \times 10^{-7}$ and $135 \times 10^{-7}$ . Figure 19 shows the test perplexity of the architectures found by DARTS with the aforementioned $L_{2}$ regularization values. As we can see, a stronger regularization factor on the inner objective makes the search procedure more robust. The median perplexity of the discovered architectures gets better as we increase the $L_{2}$ factor from $5 \times 10^{-7}$ to $45 \times 10^{-7}$ , while the search model (one-shot) validation mean perplexity increases. This observation is similar to the ones on image classification shown in Figure 10, showing again that properly regularizing the inner objective helps reduce overfitting the architectural parameters. + +![](images/75da0244634407a150aa4b11ada0021fd4cee8ffceefe7b029ce482c5bd320f3.jpg) +Figure 19: Performance of recurrent cells found with different $L_{2}$ regularization factors on the inner objective on PTB. We run DARTS 4 independent times with different random seeds, train each of them from scratch with the evaluation settings for 1600 epochs and report the median test perplexity. The blue dashed line denotes the validation perplexity of the search model. 
# G DISCOVERED CELLS ON SEARCH SPACES S1-S4 FROM SECTION 3 ON OTHER DATASETS

![](images/abb2a2ce5374326d18a9a2d34a9ec75bb5d5c4eb7652b14e0e774da62ff9346b.jpg)
(a) S1

![](images/a23607c2a5c02b0ebc068d0e004402f23b219e043fa67fef9ce3b1122d6f8612.jpg)
(b) S2

![](images/b24581711f464fddd472b75a453ec3d6569e6c6db5017caad854e21be0280c77.jpg)
(c) S3

![](images/dcbe7e55c114095c674aa1639e3d06582c682aab6ae2fac5ec54276b76fc5073.jpg)
(d) S4

![](images/212c5cc1e0fa678c4f52ecc721a38196bd7c4410bd518ac9897da7d5274999b9.jpg)
(a) S1 (C100)

![](images/3351f3d38c4fa83e91ac9e3754036762f92f533583dad52f8797af404aa92885.jpg)
(b) S2 (C100)

![](images/7c875a382e6af3e9f2a51621f5bbbd032efe26e038d71649a12efe9d8279b435.jpg)
Figure 20: Reduction cells found by DARTS when run on CIFAR-10 with its default hyperparameters on spaces S1-S4. These cells correspond to the normal ones in Figure 1.

![](images/942f189a6946140021ab1e1ce621824beac4cc4a623d457c504ea24dfc09b8bf.jpg)
(d) S4 (C100)

![](images/323196ee156d30318259fe92e29ffd9e0425db9a8b801c3a2a99486a07a39831.jpg)
(e) S1 (SVHN)

![](images/8a86814cdab56a7a36511f6b9193b5c0ad51e5c501a5d2864e438bf0c74c1332.jpg)
(f) S2 (SVHN)

![](images/a61c3a7e31c512fc814e505f0ea43a85e965975e321f3d043e8485ca98390b2e.jpg)
(c) S3 (C100)
(g) S3 (SVHN)

![](images/9d45f3fa9f418f1c8dcdd0b2f530a911dda797b4c971c5c937986624a5460790.jpg)
(h) S4 (SVHN)

![](images/acefc669ecb7318c34537b1e94bff3e6afe59aa98369b0ead1d310a6a2cec56d.jpg)
(a) S1 (C100)

![](images/b5f553234da1e1900c8a034636e88741ff945fe3cc7930cd644c666f1ec34912.jpg)
(b) S2 (C100)

![](images/9f7db92486f355165652c1486a3e411f6d3e7c0b77528630ea4d5492cdf2d26c.jpg)
Figure 21: Normal cells found by DARTS on CIFAR-100 and SVHN when run with its default hyperparameters on spaces S1-S4. Notice the dominance of parameter-less operations such as skip connection and pooling ops.
(c) S3 (C100)

![](images/5518f28f9592d591006ea4c5001fa60eb85cf55aa30f6b1db5cf553cba7b4d97.jpg)
(d) S4 (C100)

![](images/f562cd8e4b09b27aae689a858dbd1c0d56f704d1bf601b1a723da6491daaf38c.jpg)
(e) S1 (SVHN)

![](images/db05b43de908d683a382756b963f0baab7fe5b93590c50e251c8d386b99eaa54.jpg)
(f) S2 (SVHN)

![](images/2c7bcacabe1d9b6e4b66bf11a5d490ed5780f7e83c66a24bfd3dd0a2f07513b8.jpg)
(g) S3 (SVHN)

![](images/e03e10a40ddcdb35bd89587a9c292521036cbcfb13c6d84ec3799bba912b0af1.jpg)
(h) S4 (SVHN)
Figure 22: Reduction cells found by DARTS on CIFAR-100 and SVHN when run with its default hyperparameters on spaces S1-S4.

![](images/73218d5c02b8c1a64b51839c59b2c138c4d5cc27390f17b523cf9c8d8981a795.jpg)
(a) S1 (C10)

![](images/f1b49a674c56a8b274f7aef67a88ba1727d3783ab25236a550ae2ce1947562ab.jpg)
(b) S2 (C10)

![](images/db6016194bcbb5382f9e6db8b4982046bac53f751274d4942a714d9c32a7a9c4.jpg)

![](images/a45d58e504b019e980e80992486306eabeca1e47999eb578a25b886efc1e803b.jpg)
(d) S4 (C10)

![](images/b09feea838f4a4a7d77ddfe3a8baeb5270325de7762752c6c5357abefbda2f88.jpg)
(e) S1 (C100)

![](images/bdf9b64772e71732646fdac68aa4eb9649be9030a897806c320e97435427c7ae.jpg)
(f) S2 (C100)

![](images/4345412f92d6f4e01eb449eb3fd1f84db818a7a388009237049839a6d4d1d97d.jpg)
(c) S3 (C10)

![](images/be4df3ff9fd58cbafab6d77f30dd4920a041d0fe60b9402ce581815ed38ddd67.jpg)

![](images/5c79be08b0aa3c170f33531c37eb95025db970a8a7bd371699984ec517ec04ac.jpg)
(i) S1 (SVHN)

![](images/02fcae8f488a292662217a7f975edbdb134683914979ea31a59c42cd73db5dd8.jpg)
(j) S2 (SVHN)

![](images/880160876a48c895657f51baf998d525de7c6ddefa9bae2e7e1de70713e8e41a.jpg)
(g) S3 (C100)
(k) S3 (SVHN)

![](images/61264aff51f80a929945063ed204448e493c255b74ced68b1a79314c912407cb.jpg)
(h) S4 (C100)
(l) S4 (SVHN)
Figure 23: Normal cells found by DARTS-ES when run with DARTS default hyperparameters on spaces S1-S4.
![](images/11170bff52978f4e01d0c01523e03336a076b08ca6a49e5d79b04982f4d6d36a.jpg)
(a) S1 (C10)

![](images/2d62d44a3a0415de0cd337abbcbc3c129fd2c553567d2da000b42bccfcb79588.jpg)
(b) S2 (C10)

![](images/25ce7dfde485fe098134c34e6452dfc6e414b8d531dea7f760c734c942a5de57.jpg)

![](images/45c968bf680e86cc625c003c49effe2d0eab19f57f516bdde531566f67d63d56.jpg)

![](images/ca8db2fc5236ceb1b7ce649fa9ebbae17a8477f523a989fac812aa4733308c9d.jpg)
(e) S1 (C100)

![](images/7bc98babb8871bd2ca84b6edd49d5d596ac39b7fbbde9e5dafd334797424fd62.jpg)
(f) S2 (C100)

![](images/28c9bdf33c5da28ee1c8c91fb991ed58fe78c28daedead965716fba586ebba74.jpg)
(c) S3 (C10)
(g) S3 (C100)

![](images/287e8182dc9e2deb9ec180ebd318aada84658f3a9a5eb633819f5de7127e0bdd.jpg)
(d) S4 (C10)

![](images/96cf75ef8f22e5f414bab4fc7495fa3d8be34cf1fcaec1ef0a093776b81de75c.jpg)
(i) S1 (SVHN)

![](images/ec73e6369ef6c69c6c9cda7f969c2c1ba706697cc224da9f500c343ed6fabbe7.jpg)
(j) S2 (SVHN)

![](images/e34a3c75bfc35d2465bb1b868291079ffea08f38a1b7ae69d6cedf1f5c2b85d5.jpg)
(k) S3 (SVHN)

![](images/4ade31ccf67748e74b0cf060c9dff3c247f3a54c1538a2e2868726ad91d07841.jpg)
(h) S4 (C100)
(l) S4 (SVHN)
Figure 24: Reduction cells found by DARTS-ES when run with DARTS default hyperparameters on spaces S1-S4.

![](images/8e3427baaca12c4b03a8041c5a6423780a42d50eccfabb33fbd7d6db60bd105c.jpg)
(a) Normal cell

![](images/ee823067e89a720c569bb9fed0064bd3d221d12f192c629ffccb24cd66f14bd5.jpg)
(b) Reduction cell

![](images/9f6a04532c76aa8d5ea7052ecd5a2a7b13e6ae59376df0b31905535c62932d95.jpg)
(c) Upsampling cell
Figure 25: Cells found by AutoDispNet when run on S6-d. These cells correspond to the results for augmentation scale 0.0 of Table 2.
![](images/58e82b66be78a23fe98708cc2aaeccc283401690a9ff91dc1b958345cfbfc370.jpg)
(a) Normal cell

![](images/a9963e10e6c777c2b7669b556ef4880eb2347a51c29fcbbe52505ae6e2dffdb6.jpg)
(b) Reduction cell

![](images/45bb5dcbfe2a753e19f3a5568ccff0cc564480dca1b325393d7179856c39ca99.jpg)
(c) Upsampling cell
Figure 26: Cells found by AutoDispNet when run on S6-d. These cells correspond to the results for augmentation scale 2.0 of Table 2.

![](images/b00057a1caa52568c267e60c9a14f00898500b1fddfb35bae4f444638cc89898.jpg)
(a) Normal cell

![](images/0f3b3383ab27f0dfec8bd5b8f868561c81b4635837c04fcc0df50960ea8e22ca.jpg)
(b) Reduction cell

![](images/1f829175fed7f46b00e1649deeeac690848ce343f36cf805fe845d42cb550425.jpg)
(c) Upsampling cell
Figure 27: Cells found by AutoDispNet when run on S6-d. These cells correspond to the results for $L_{2} = 3 \cdot 10^{-4}$ of Table 2.

![](images/7354ce06e44ec44edcb0cfff8dfef436879b8dd2af32330adaa5a361e94736f4.jpg)
(a) Normal cell

![](images/d009c5ccfd91115a25881b889cdc573c6d99083ad59d175bf4bf0470f48675e6.jpg)
(b) Reduction cell

![](images/43d2889756369e349ddf80c908f203508ea21b6f6eae458b6b0eba9e01b6b3e5.jpg)
(c) Upsampling cell
Figure 28: Cells found by AutoDispNet when run on S6-d. These cells correspond to the results for $L_{2} = 81 \cdot 10^{-4}$ of Table 2.
![](images/34aa5645f48e3512ebe2851ffc0bff18675061fcdb1e62a763af71eaf1c92e33.jpg)
(a) S1

![](images/e34dda27e17137790f7b2a9e9750322d95052922277a9c0c940be5f63f2955c8.jpg)
(b) S2

![](images/08d1adcc40c9fb7431c19ad6821e83e87e2c42baa453258965ca03d85ce22c58.jpg)
(c) S3

![](images/355269280ab7142986eeef53fd29bfa968813ac03d19d74ecd3765fc865d8df5.jpg)
(d) S4

![](images/cf3071289cb04245a3aa1f3d5b982775cd712eb13479092801ccc5bcf009c5e0.jpg)
(e) S1

![](images/de600d8d0c6d49f942f5037cc39a0cda28841fd43fc4c27b9cc22d2c67a0e138.jpg)
(f) S2

![](images/3a6cc5979de28ee93b8b5f127205964879d3a452fb2fcd04979c5ba3f2491b1f.jpg)
(g) S3

![](images/14b507288ba2abfa6d88e570c4002fe30d19551ab36a8c66991604ba0a17a216.jpg)
(h) S4

![](images/71fc78596a521e695c1a815482b3c8e00de454f7980b743c62be685d40128b66.jpg)
(a) S1

![](images/35a55f295a143d699c4f3d16585016538adf32f8846590486d3bce80d535be03.jpg)
(b) S2

![](images/bc24b807b82820f0581bb3f15636274e2c5b61dc6650129bea77579d1e09dd96.jpg)
Figure 29: Normal (top row) and reduction (bottom) cells found by DARTS on CIFAR-10 when run with its default hyperparameters on spaces S1-S4. Same as Figure 1 but with a different random seed (seed 2).

![](images/0a3c31e3973d97593004f19d21522511f39c875f5ddebf1a6a2ee7a77625165c.jpg)
(d) S4

![](images/b661e2a7f24680e7049b59ea0c49f120ee52c3089e19b59937e3e721efc9c94f.jpg)
(e) S1

![](images/c6a6a82a53b3c2599f477aeb863914b74e516d2c28370e171f06e2fb2c067bab.jpg)
(f) S2

![](images/e717771668281627948ba56e7db31812a0ab30d1fbf0046d4bd8c26fc5479596.jpg)
(g) S3

![](images/0d2014cfb35cdf21f6ed77cbe477b1937dee19d539c19930aefcc4734327a816.jpg)
(h) S4
Figure 30: Normal (top row) and reduction (bottom) cells found by DARTS on CIFAR-10 when run with its default hyperparameters on spaces S1-S4. Same as Figure 1 but with a different random seed (seed 3).
# UNDERSTANDING ARCHITECTURES LEARNT BY CELL-BASED NEURAL ARCHITECTURE SEARCH

Yao Shu, Wei Wang & Shaofeng Cai

School of Computing

National University of Singapore

{shuyao, wangwei, shaofeng}@comp.nus.edu.sg

# ABSTRACT

Neural architecture search (NAS) searches architectures automatically for given tasks, e.g., image classification and language modeling. Improving the search efficiency and effectiveness has attracted increasing attention in recent years. However, few efforts have been devoted to understanding the generated architectures.
In this paper, we first reveal that existing NAS algorithms (e.g., DARTS, ENAS) tend to favor architectures with wide and shallow cell structures. These favorable architectures consistently achieve fast convergence and are consequently selected by NAS algorithms. Our empirical and theoretical study further confirms that their fast convergence derives from their smooth loss landscape and accurate gradient information. Nonetheless, these architectures may not necessarily lead to better generalization performance than other candidate architectures in the same search space, and therefore further improvement is possible by revising existing NAS algorithms.

# 1 INTRODUCTION

Various neural network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Huang et al., 2017) have been devised over the past decades, achieving superhuman performance on a wide range of tasks. Designing these neural networks typically takes substantial effort from domain experts through trial and error. Recently, there has been growing interest in neural architecture search (NAS), which automatically searches for high-performance architectures for a given task. The searched NAS architectures (Zoph et al., 2018; Real et al., 2019; Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b; Luo et al., 2018; Cai et al., 2019; Akimoto et al., 2019; Nayman et al., 2019) have outperformed the best expert-designed architectures on many computer vision and natural language processing tasks.

Mainstream NAS algorithms typically search for the connection topology and the transforming operation accompanying each connection from a predefined search space. Tremendous efforts have been exerted to develop efficient and effective NAS algorithms (Liu et al., 2019; Xie et al., 2019b; Luo et al., 2018; Akimoto et al., 2019; Nayman et al., 2019). However, much less attention has been paid to analyzing the searched architectures themselves.
To the best of our knowledge, no prior work in the literature examines whether these NAS architectures share any pattern and, if such a pattern exists, how it may impact the architecture search. These questions are fundamental to understanding and improving existing NAS algorithms. In this paper, we endeavour to address these questions by examining the popular NAS architectures1.

The recent work (Xie et al., 2019a) shows that architectures with random connection topologies can achieve competitive performance on various tasks compared with expert-designed architectures. Inspired by this result, we examine the connection topologies of the architectures generated by popular NAS algorithms. In particular, we find a common connection pattern: these architectures tend to favor wide and shallow cells, where the majority of intermediate nodes are directly connected to the input nodes.

To understand this particular connection pattern, we first visualize the training process of the popular NAS architectures and their randomly connected variants. Fast and stable convergence is observed for the architectures with wide and shallow cells. We further show, both empirically and theoretically, that architectures with wider and shallower cells consistently enjoy a smoother loss landscape and smaller gradient variance than their random variants, which helps explain their better convergence and consequently the selection of these architectures during the architecture search.

We finally evaluate the generalization performance of the popular NAS architectures and their randomly connected variants. We find that the architectures with wide and shallow cells may not generalize better than other candidate architectures despite their faster convergence.
We therefore believe that rethinking NAS from the perspective of the true generalization performance of candidate architectures, rather than their convergence, should help generate better architectures.

# 2 RELATED WORKS

**Neural Architecture Search** Neural architecture search (NAS) searches for best-performing architectures automatically for a given task. It has received increasing attention in recent years due to its outstanding performance and the demand for automated machine learning (AutoML). There are three major components in NAS, as summarized by Elsken et al. (2019): the search space, the search policy (or strategy, algorithm), and the performance evaluation (or estimation). To define the search space, prior knowledge extracted from expert-designed architectures is typically exploited. As for the search policy, different algorithms have been proposed to improve the effectiveness (Zoph et al., 2018; Real et al., 2019; Tan et al., 2019; Cai et al., 2019) and the efficiency (Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b; Luo et al., 2018; Nayman et al., 2019; Akimoto et al., 2019) of the architecture search. However, little effort has been devoted to understanding the best architectures generated by various NAS approaches. Detailed analysis of these architectures may give insights into further improvements of existing NAS algorithms.

**Evaluation of NAS algorithms** Recent works evaluate NAS algorithms by comparing them with random search. Li & Talwalkar (2019) and Sciuto et al. (2019) compare the generalization performance of architectures generated by random search and by existing NAS algorithms. Interestingly, random search can find architectures with comparable or even better generalization performance. In particular, Sciuto et al. (2019) show empirically that the ineffectiveness of some NAS algorithms (Pham et al., 2018) could be a consequence of the weight sharing mechanism during the architecture search.
While these evaluations help understand the general disadvantages of NAS algorithms, what kind of architectures the NAS algorithms are learning, and why they learn these specific architectures, are still not well understood.

# 3 THE CONNECTION PATTERN OF POPULAR NAS CELLS

Mainstream NAS algorithms (Zoph et al., 2018; Real et al., 2019; Pham et al., 2018; Liu et al., 2019; Xie et al., 2019b; Luo et al., 2018) typically search for the cell structure, including the connection topology and the corresponding operation (transformation) coupling each connection. The generated cell is then replicated to construct the entire neural network. We therefore mainly investigate these cell-based NAS architectures. In this section, we first introduce the commonly adopted cell representation, which is useful for understanding the connections and computation in a cell. We then sketch the connection topologies of popular cell-based NAS architectures to investigate their connection patterns. By comparison, we show that there is a common connection pattern among the cells learned by different NAS algorithms; in particular, these cells tend to be wide and shallow.

# 3.1 CELL REPRESENTATION

Following DARTS (Liu et al., 2019), we represent the cell topology as a directed acyclic graph (DAG) consisting of $N$ nodes, including $M$ input nodes, one output node and $(N - M - 1)$ intermediate nodes. Each node forms a latent representation of the input instance. The input nodes consist of the outputs from the $M$ preceding cells. The output node aggregates (e.g., concatenates) the representations from all intermediate nodes. Each intermediate node is connected to $M$ preceding nodes in the same cell.
Each connection transforms the representation from one node via an operation from a predefined operation set, e.g., $3 \times 3$ convolution, $3 \times 3$ max pooling, etc. The target of a NAS algorithm is to search for the best $M$ source nodes for each intermediate node and the best operation for each of the connections between nodes. In the literature, the searched cell is then replicated $L$ times to build the entire neural network architecture2.

![](images/7fac4a217dabea0953eda8670049611212f5fe61f82bbec4d146193b02a0f654.jpg)
(a) NASNet, 5c, 2

![](images/7e9effa1ab14cfd9d0320aee6d9cabdb127819ab12952d7f18e42f13b51ba7e5.jpg)
(b) AmoebaNet, 4c, 4

![](images/d4d9e17e438097546a54171075c657ddd0ec5d83349ad0af65773f03cb044533.jpg)
(c) ENAS, 5c, 2

![](images/a58aaf2618aa4913bc97401dedcc3ea6af558f4d070acbf0ee82ca415c9241a9.jpg)
(d) DARTS, $3.5c, 3$

![](images/7bc7fff74883a25561d5ff3069fee6605d878704d533eb1e2e99e304a384ebe9.jpg)
(e) SNAS, 4c, 2
Figure 1: Cell topologies of popular NAS architectures. Each sub-figure has three sets of nodes from left to right, i.e., the input nodes, intermediate nodes, and output node. The arrows (i.e., operations of the cell) represent the direction of information flow. The caption of each sub-figure reports the name of the architecture, and the width and depth of its cell following our definition. The width of a cell is computed under the assumption that all intermediate nodes share the same width $c$.

![](images/82ca1893370c084d9930b945077875ca158245da9dac328c372833904cab8bdb.jpg)
(a) $C^{darts}$, 3.5c, 3

![](images/30eb82089f7caf9afdc0ca984c94d45b85e5e0cd16f18f3d4b74203100af9920.jpg)
(b) $C_1^{darts}$, 2.5c, 3

![](images/7ac71dcfbbef5878b04cb829d6d6afcf1b016973aee188779446d3d52bafa680.jpg)
(c) $C_2^{darts}$, 2.5c, 3

![](images/9436723198d01c7d25de02a990c711af0ed45a5c8d32509a1096e50e58512387.jpg)
(d) $C_3^{darts}$, 2c, 4

![](images/0e04a7ba53c567d0dcfa5bcb3aceb8565740b0432da6277b76dcfc8d6fa6c1a8.jpg)
(e) $C_4^{darts}$, 2c, 4
Figure 2: Topologies of the DARTS (Liu et al., 2019) cell (leftmost) and its variants with random connections. The cell depth increases and the width decreases from left to right. In particular, the original DARTS cell $C^{darts}$ is the widest and shallowest among these cells.

We abuse the notation $C$ to denote a cell and also the architecture built with the specific cell in the following sections. Besides, we shall use $C^A$ to denote the best architecture (or cell) searched with the NAS algorithm $A$ (e.g., DARTS (Liu et al., 2019), ENAS (Pham et al., 2018)). Details on how to build the architecture with given cells are provided in Appendix A.3.

# 3.2 THE COMMON CONNECTION PATTERN

Recently, Xie et al. (2019a) showed that neural networks constructed from cells with random connection patterns can achieve compelling performance on multiple tasks. Taking this a step further, we wonder whether the cells generated by popular NAS algorithms share any connection patterns, which may explain why these cells are chosen during the architecture search. To investigate the connection patterns, we sketch the topologies of the popular NAS cells with the detailed operations omitted.

Figure 1 illustrates the topologies of 5 popular NAS cells3. To examine the connection pattern formally, we introduce the concepts of 'depth' and 'width' for a cell. The depth of a cell is defined as the number of connections along the longest path from input nodes to the output node. The width of a cell is defined as the total width of the intermediate nodes that are connected to the input nodes.
In particular, if some intermediate nodes are only partially connected to the input nodes (i.e., they also have connections to other intermediate nodes), their width is reduced by the percentage of their connections that go to intermediate nodes over all their connections. The width of a node is the number of channels for convolution operations, and the dimension of the features for linear operations. Supposing that the width of each intermediate node is $c$, as shown in Figure 1, the width and depth of the DARTS (Liu et al., 2019) cell are $3.5c$ and 3 respectively, and the width and depth of the AmoebaNet (Real et al., 2019) cell are $4c$ and 4 correspondingly.

Following the above definitions, the smallest depth and largest width for a cell with $N = 7$ and $M = 2$ are 2 and $4c$ respectively. Similarly, for a cell with $N = 8$ and $M = 2$, the smallest depth and largest width are 2 and $5c$ respectively. In Figure 1, we can observe that cells from popular NAS architectures tend to be the widest and shallowest ones (with width close to $4c / 5c$ and depth close to 2) among all candidate cells in the same search space. Regarding this as the common connection pattern, we have the following observation:

Observation 3.1 (The Common Connection Pattern) NAS architectures generated by popular NAS algorithms tend to have the widest and shallowest cells among all candidate cells in the same search space.

# 4 THE IMPACTS OF CELL WIDTH AND DEPTH ON OPTIMIZATION

Given that popular NAS cells share the common connection pattern, we then explore the impact of this pattern from the optimization perspective to answer the question: why are wide and shallow cells selected during the architecture search? We sample and train variants of popular NAS architectures with random connections.
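The width and depth definitions above can be computed mechanically from a cell's predecessor lists. Below is a minimal sketch; the predecessor lists used in the DARTS example are an assumption chosen to be consistent with the reported width $3.5c$ and depth 3, not read off the paper's figures.

```python
def cell_width_depth(preds, inputs, c=1.0):
    """Width and depth of a cell. `preds` maps each intermediate node to its
    predecessor nodes (listed in topological order); `inputs` is the set of
    input nodes. Each intermediate node contributes c times the fraction of
    its connections that come directly from input nodes. Depth counts the
    connections on the longest input-to-output path; the output node
    aggregates every intermediate node, adding one final connection."""
    width = sum(c * sum(p in inputs for p in ps) / len(ps)
                for ps in preds.values())
    depth = {n: 0 for n in inputs}
    for node, ps in preds.items():
        depth[node] = 1 + max(depth[p] for p in ps)
    return width, 1 + max(depth[n] for n in preds)

# Input nodes 0 and 1; intermediate nodes 2-5. Three nodes take both inputs,
# while node 5 takes one input node and one intermediate node, so the width
# is 3c + 0.5c = 3.5c and the longest path 0 -> 2 -> 5 -> output has depth 3.
width, depth = cell_width_depth({2: [0, 1], 3: [0, 1], 4: [0, 1], 5: [0, 2]},
                                inputs={0, 1})
```

Connecting every intermediate node directly to the inputs instead yields width $4c$ and depth 2, the extremes stated above for $N = 7$, $M = 2$.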
Comparing randomly connected variants with the popular NAS architectures, we find that architectures with wider and shallower cells indeed converge faster, which is why they are selected by NAS algorithms (Section 4.1). To understand why the wider and shallower cell contributes to faster convergence, we further investigate the loss landscape and gradient variance of popular NAS architectures and their variants via both empirical experiments (Section 4.2) and theoretical analysis (Section 4.3).

# 4.1 CONVERGENCE

Popular NAS algorithms typically evaluate the performance of a candidate architecture prematurely, before the convergence of its model parameters during the search process. For instance, DARTS (Liu et al., 2019), SNAS (Xie et al., 2019b) and ENAS (Pham et al., 2018) optimize the hyper-parameters of architectures and the model parameters concurrently. The amortized training time of each candidate architecture is therefore far from what full convergence requires. Likewise, AmoebaNet (Real et al., 2019) evaluates the performance of candidate architectures after training for only a few epochs. In other words, these candidate architectures are not evaluated based on their generalization performance at convergence. As a result, architectures with faster convergence rates are more likely to be selected by existing NAS algorithms because they obtain better evaluation performance given the same training budget. We therefore hypothesize that the popular NAS architectures may converge faster than other candidate architectures, which largely contributes to the selection of these architectures during the search.

To support the hypothesis above, we compare the convergence of original NAS architectures and their variants with random connections via empirical studies. We first sample variants of popular NAS cells following the sampling method in Appendix A.2.
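Since the sampling method of Appendix A.2 is not reproduced here, the following is only a simplified stand-in for such a sampling step, assuming each intermediate node draws its $M$ predecessors uniformly from the nodes before it; the function name and defaults are hypothetical.

```python
import random

def sample_random_cell(n_intermediate=4, n_inputs=2, m=2, seed=0):
    """Sample a random cell topology as a map from each intermediate node to
    its predecessors. Nodes 0..n_inputs-1 are input nodes; each intermediate
    node draws m distinct predecessors from all earlier nodes (input nodes
    and previously created intermediate nodes), keeping the graph acyclic."""
    rng = random.Random(seed)
    preds = {}
    for node in range(n_inputs, n_inputs + n_intermediate):
        preds[node] = sorted(rng.sample(range(node), m))
    return preds

variant = sample_random_cell(seed=42)  # one randomly connected variant
```

Different seeds then give the randomly connected variants whose convergence is compared against the original cell below.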
Then, we train both the original NAS architectures and their random variants on CIFAR-10 and CIFAR-100 following the training details in Appendix A.3. During training, we evaluate the test loss and accuracy of these architectures. Since convergence depends on the optimization settings, we also evaluate the convergence behavior under different learning rates.

Taking DARTS (Liu et al., 2019) as an example, Figure 2 shows the connection topology of the original DARTS cell and its random variants. Figure 3 reports the test loss and accuracy curves of these architectures during training. As illustrated in Figure 3, the original cell $C^{darts}$, the widest and shallowest of these cells, has the fastest and most stable convergence compared with its variants. Further, as the width of a cell increases and the depth decreases (i.e., from $C_4$ to $C_1$), the convergence becomes faster. The results of other popular NAS architectures and their randomly connected variants are reported in Appendix B.2.

![](images/b4bcb4813067130def318dbca1399288b10e8794bdc25a98404759380f2f0078.jpg)
(a) CIFAR-10 Loss

![](images/bb3119f603101d6f12d6ab212529069fdd5540ef0464c73f7eda731bc8ead5d5.jpg)
(b) CIFAR-100 Loss

![](images/91905c08d3dbe77c0d95d37f8777a52dff973f8651a167f868e8da6ca8a307db.jpg)
(c) CIFAR-10 Accuracy

![](images/398e947eb9bb3e0f4a5fe91eb346de7e4d76324f513b57db527642eb41b31608.jpg)
(d) CIFAR-100 Accuracy

![](images/8cae86847c981c4bd19a5bf932dea9e5ccd0368270ca71adf16cbc458a3feb3e.jpg)
Figure 3: Test loss and test accuracy $(\%)$ curves of DARTS and its randomly connected variants on CIFAR-10 and CIFAR-100 during training. The default learning rate is 0.025.
(a) CIFAR-10, 0.0025
Figure 4: Test accuracy $(\%)$ curves of DARTS and its randomly connected variants on CIFAR-10 and CIFAR-100 during training under different learning rates (0.0025 and 0.25). We only evaluate $C^{darts}$, $C_1^{darts}$ and $C_3^{darts}$ for illustration.
The caption of each sub-figure reports the dataset and the learning rate.

![](images/216b4af0792b744761092651d616a6af729d65c72566b887d918fa72d7a81139.jpg)
(b) CIFAR-10, 0.25

![](images/834391c6e4a809e313f4804f3526eaa9d74bcb8e1004a41818b796b94d8a3132.jpg)
(c) CIFAR-100, 0.0025

![](images/04598fcb7a8cf819b15c1f0dcb44654cb95857be2ea2e5ffb525d467fa1f7515.jpg)
(d) CIFAR-100, 0.25

Figure 4 further validates the differences in convergence under different learning rates. The original cell $C^{darts}$ enjoys the fastest and most stable convergence among these cells under various learning rates. The difference in convergence rate and stability between $C^{darts}$ and its variants is more obvious with a larger learning rate, as shown in Figure 4. Interestingly, $C_3^{darts}$ completely fails to converge on both CIFAR-10 and CIFAR-100 with the larger learning rate of 0.25. While there is only a minor difference among these cells with the lower learning rate of 0.0025, we still find that the convergence performance (i.e., convergence rate and stability) degrades from $C^{darts}$ through $C_1^{darts}$ to $C_3^{darts}$. Overall, the observations are consistent with the results in Figure 3.

We have also compared the convergence of popular NAS architectures and their random variants of different operations. Similarly, we sample and train the random variants of operations for popular NAS architectures following the details in Appendix A.2 and Appendix A.3. Figure 5 illustrates the convergence of these architectures. Surprisingly, with the same connection topologies as the popular NAS cells but different operations, all random variants achieve nearly the same convergence as these popular NAS architectures. Consistent results can be found in Figure 12 of Appendix B.2. We therefore believe that the types of operations have limited impact on the convergence of NAS architectures, while the connection topology affects convergence far more significantly.
With these observations, we conclude that the popular NAS architectures with wider and shallower cells indeed converge faster and more stably, which explains why these popular NAS cells are selected during the architecture search. The next question is then: why does a wider and shallower cell lead to faster and more stable convergence?

# 4.2 EMPIRICAL STUDY OF FACTORS AFFECTING CONVERGENCE

Since the wide and shallow cell is related to fast convergence, we conduct a theoretical convergence analysis to investigate the cause of this fast convergence. In this section, we first introduce the convergence analysis (i.e., Theorem 4.1) of non-convex optimization with the randomized stochastic gradient method (Ghadimi & Lan, 2013). Based on this analysis, we identify the possible factors related to the common connection pattern that may affect convergence. We then examine these factors empirically in the following subsections.

![](images/5e430ac8735cb275c49e4fa2f5b6ac245a7c2871d9d01d288e595eb691d2f08f.jpg)
(a) DARTS

![](images/3516d5978aad9cf9aad3f3b2a1bc2da5b61fea25613981beb27ab871ff63444d.jpg)
(b) ENAS
Figure 5: Test accuracy $(\%)$ curves of DARTS, ENAS, AmoebaNet, NASNet and their random variants of operations on CIFAR-10 during training. The parameter sizes are reported in Table 3 of Appendix B.2.

![](images/3c00bfd70c13330a079ed0fa9a993a0d446f25c5ae521b489e33c2ef4e496245.jpg)
(c) AmoebaNet

![](images/68d342773c10b05377052e1767613b12e0bfdcb14ec9e9c395f0347ed18d053f.jpg)
(d) NASNet

Theorem 4.1 (Ghadimi & Lan, 2013) Let $f$ be an $L$-smooth non-convex function, and let $f^*$ be its minimum.
Given repeated, independent accesses to stochastic gradients with variance bound $\sigma^2$ for $f(\boldsymbol{w})$, SGD with initialization $\boldsymbol{w}_0$, total iteration number $N > 0$ and learning rates $\gamma_k < \frac{1}{L}$ achieves the following convergence by randomly choosing $\boldsymbol{w}_k$ as the final output $\boldsymbol{w}_R$ with probability $\frac{\gamma_k}{H}$, where $H = \sum_{k=1}^{N} \gamma_k$:

$$
\mathbb{E}[\|\nabla f(\boldsymbol{w}_R)\|^2] \leq \frac{2(f(\boldsymbol{w}_0) - f^*)}{H} + \frac{L\sigma^2}{H} \sum_{k=1}^{N} \gamma_k^2
$$

In this paper, $f$ and $\boldsymbol{w}$ denote the objective (loss) function and the model parameters respectively. Based on the above theorem, the Lipschitz constant $L$ and the gradient variance $\sigma^2$ significantly affect convergence, including its rate and stability. In particular, given a specific number of iterations $N$, a smaller Lipschitz constant $L$ or a smaller gradient variance $\sigma^2$ leads to a smaller convergence error and fewer damped oscillations, which indicates faster and more stable convergence. Since the Lipschitz constant $L$ and the gradient variance $\sigma^2$ are closely tied to the objective function, different NAS architectures result in different $L$ and $\sigma^2$. In the following subsections, we therefore conduct an empirical analysis of the impact of cell width and depth on Lipschitz smoothness and gradient variance.

# 4.2.1 LOSS LANDSCAPE

The constant $L$ of Lipschitz smoothness is closely related to the Hessian matrix of the objective function, as shown by Nesterov (2004); computing the Hessian, however, requires substantial computation and only captures global smoothness. The loss contour, which has been widely adopted to visualize the loss landscape of neural networks by Goodfellow & Vinyals (2015) and Li et al. (2018), is instead computationally efficient and reveals the local smoothness of the objective function.
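The influence of $L$ and $\sigma^2$ on the bound of Theorem 4.1 can be made concrete with a small numerical sketch; all constants below are hypothetical, chosen only so that $\gamma_k < 1/L$ holds in both settings:

```python
import numpy as np

def sgd_bound(f0_gap, L, sigma2, gammas):
    """Right-hand side of Theorem 4.1 (Ghadimi & Lan, 2013):
    2*(f(w0) - f*)/H + (L*sigma^2/H) * sum(gamma_k^2), with H = sum(gamma_k)."""
    gammas = np.asarray(gammas, dtype=float)
    H = gammas.sum()
    return 2.0 * f0_gap / H + L * sigma2 / H * (gammas ** 2).sum()

# Same constant schedule; a smoother loss (smaller L) and less noisy
# gradients (smaller sigma^2) shrink the bound.
gammas = np.full(1000, 0.005)
smooth = sgd_bound(f0_gap=1.0, L=10.0, sigma2=0.5, gammas=gammas)
rough = sgd_bound(f0_gap=1.0, L=100.0, sigma2=5.0, gammas=gammas)
print(f"bound with small L, sigma^2: {smooth:.3f}")   # 0.425
print(f"bound with large L, sigma^2: {rough:.3f}")    # 2.900
```

This is exactly the mechanism invoked below: architectures whose cells yield a smaller $L$ and $\sigma^2$ enjoy a smaller bound for the same schedule.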
To explore the loss landscape of different architectures, we adopt the method of Li et al. (2018) to plot the loss contour $s(\alpha, \beta) = \mathbb{E}_{i \sim P}[f_i(\boldsymbol{w}^* + \alpha \boldsymbol{w}_1 + \beta \boldsymbol{w}_2)]$. Here $f_i(\cdot)$ denotes the loss evaluated at the $i$-th instance of the dataset and $P$ denotes the data distribution. The notation $\boldsymbol{w}^*$ denotes the (local) optimum, and $\boldsymbol{w}_1$ and $\boldsymbol{w}_2$ are two direction vectors randomly sampled from a Gaussian distribution. The step sizes $\alpha$ and $\beta$, which form the $x$ and $y$ axes of the plots, perturb $\boldsymbol{w}^*$. The plotted loss contour is therefore a two-dimensional approximation of the truly high-dimensional loss surface. As shown in Li et al. (2018), however, this approximation is valid and effective for characterizing the properties of the true loss surface.

To study the impact of cell width and depth on Lipschitz smoothness, we compare the loss landscapes of popular NAS architectures and their randomly connected variants trained in Section 4.1 on CIFAR-10 and CIFAR-100. Due to space limitations, we only plot the loss landscape of DARTS (Liu et al., 2019) and its randomly connected variants in Figure 6. We observe that the connection topology has a significant influence on the smoothness of the loss landscape. With the widest and shallowest cell, $C^{darts}$ has a fairly benign and smooth landscape along with the widest near-convex region around the optimum. With deeper and narrower cells, $C_{1}^{darts}$ and $C_{2}^{darts}$ have more agitated loss landscapes than $C^{darts}$. Further, $C_{3}^{darts}$, with the smallest width and largest depth among these cells, has the most complicated loss landscape and the narrowest and steepest near-convex region around the optimum.
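A minimal sketch of this contour computation, with a toy least-squares model standing in for a trained network; the data, dimensions, and the simplified whole-vector normalization (in place of the filter-wise normalization of Li et al. (2018)) are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: w* exactly minimizes a
# least-squares loss over a small synthetic dataset (hypothetical).
X = rng.standard_normal((256, 10))
w_star = rng.standard_normal(10)
y = X @ w_star

def loss(w):
    return np.mean((X @ w - y) ** 2)

# Two random Gaussian directions, rescaled to the norm of w* so the
# two axes are comparable (a simplified version of filter normalization).
w1 = rng.standard_normal(10)
w2 = rng.standard_normal(10)
w1 *= np.linalg.norm(w_star) / np.linalg.norm(w1)
w2 *= np.linalg.norm(w_star) / np.linalg.norm(w2)

# s(alpha, beta) = E_i[f_i(w* + alpha*w1 + beta*w2)] on a 2-D grid;
# feeding this grid to matplotlib's contour() reproduces plots like Figure 6.
alphas = betas = np.linspace(-1.0, 1.0, 51)
contour = np.array([[loss(w_star + a * w1 + b * w2) for b in betas]
                    for a in alphas])
print(f"loss at the center (w*): {contour[25, 25]:.4f}")  # 0.0000
```

For a real architecture, `loss` would evaluate the network on the test set at the perturbed weights, which is the expensive part of the procedure.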
The largest eigenvalue of the Hessian matrix, which indicates the maximum curvature of the objective function, is positively correlated with the Lipschitz constant, as shown by Nesterov (2004). A smoother loss landscape therefore corresponds to a smaller Lipschitz constant $L$, and $C^{darts}$ is likely to achieve the smallest Lipschitz constant among these cells.

![](images/e401bbbbe734389ef3a2c87ab502a6afc70a24aad7f4ae64023d792207021712.jpg)

![](images/8ba1fa3e888e1d34a553439d269016fcb5f41e31a90e02dd0b89128195aa0817.jpg)

![](images/729fcb88f7b9701dba39a150a9c395e17b0d4adb6e355b14f66bc3ae9ef7d130.jpg)

![](images/ff78a82b3f679d8abf6981d52ac21def5114dc7db303a64162347effe376d461.jpg)

![](images/c0e02dc54f6aa513cab0dd729c5d90f5d5f471dfca4c051f1be3249878b8cb18.jpg)

![](images/638058f72d2e7bc111331181240a84c9196dc50ebea7d57749353e61ed72760e.jpg)
(a) $C^{darts}$

![](images/ea0d19eb1a6aca013a49db9da6e75c3c1a8fc91125bb886cd9d397101cc8ec0b.jpg)
(b) $C_1^{darts}$

![](images/7dccf1729d8bd2cb6bac513ae94ff722cf985f611da71f7e0db55a1f1973f88d.jpg)
(c) $C_2^{darts}$

![](images/b84dab58f785481c7ed42a9d9c317f1a00107e9a8ccedc64ad8179d6f2d91995.jpg)
(d) $C_3^{darts}$

![](images/38be7bf1780cb18de1f88ad51b1cdd6a04c81eb0c619a69df0dc3ce704426a2e.jpg)
(e) $C_4^{darts}$
Figure 6: Loss contours of DARTS and its variants with random connections on the test dataset of CIFAR-10. A lighter contour-line color indicates a larger loss. Notably, the loss in the blank area around the corners of each plot is extremely large. Moreover, an area with denser contour lines indicates a steeper loss surface.

![](images/362fcb589a99b6c47e8f2945871b73111fe7b059c15ecfb9d8edb608ac689551.jpg)
(a) $C^{darts}$

![](images/e952a90953cde1c6dcfc663e32fb222d9f9f1f17fbc4d247da99ada63d931420.jpg)
(b) $C_1^{darts}$

![](images/62c0d1b28e8ac31a513955f19747dff63890c3f59dadfa8f9e40b4edc40eb24d.jpg)
(c) $C_2^{darts}$

![](images/0e7315faabb2e9be1cbd6725e201f49ef32a6e5852d708907538dee5a6a67e09.jpg)
(d) $C_3^{darts}$

![](images/fc13c9ed22deaa9cc81096d430044a3d0a0d61fbc7a00b13865f0891ac1946b3.jpg)
(e) $C_4^{darts}$
Figure 7: Heat maps of the gradient variance of DARTS and its randomly connected variants around the optimum on the test dataset of CIFAR-10. A lighter color indicates a larger gradient variance. Notably, the gradient variance in the yellow area around the corners of each plot is extremely large. Clearly, the region with relatively small gradient variance shrinks from left to right.

Consistent results for the loss landscapes of other popular NAS cells and their variants can be found in Appendix B.3. Based on these results, we conclude that increasing the width and decreasing the depth of a cell widens the near-convex region around the optimum and smooths the loss landscape. The Lipschitz constant $L$ therefore becomes smaller both locally and globally. Following Theorem 4.1, architectures with wider and shallower cells should converge faster and more stably.

# 4.2.2 GRADIENT VARIANCE

The gradient variance indicates the noise level of the gradient induced by randomly selecting training instances in the stochastic gradient descent (SGD) method. A large gradient variance indicates large noise in the gradient, which typically results in unstable updates of the model parameters. Following Ghadimi & Lan (2013), the gradient variance is defined as $\mathrm{Var}(\nabla f_i(\pmb{w}))$. Similar to the visualization of the loss landscape in Section 4.2.1, we visualize the gradient variance by $g(\alpha, \beta) = \mathrm{Var}(\nabla f_i(\pmb{w}^* + \alpha \pmb{w}_1 + \beta \pmb{w}_2))$. All other notations follow Section 4.2.1.
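The quantity $g(\alpha, \beta)$ can be estimated with a sketch analogous to the loss contour one, again on a toy least-squares model with hypothetical data; here the total variance is taken as the sum of per-coordinate variances of the per-instance gradients, which is one reasonable reading of $\mathrm{Var}(\nabla f_i(\pmb{w}))$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-instance losses f_i(w) = 0.5*(x_i . w - y_i)^2 (hypothetical data),
# whose per-instance gradients are (x_i . w - y_i) * x_i.
X = rng.standard_normal((256, 10))
w_star = rng.standard_normal(10)
y = X @ w_star

def grad_variance(w):
    """g(.) = Var(grad f_i(w)): total variance of the per-instance gradients,
    summed over coordinates."""
    grads = (X @ w - y)[:, None] * X          # one gradient row per instance
    return grads.var(axis=0).sum()

# g(alpha, beta) on the same kind of 2-D slice used for the loss contour.
w1, w2 = rng.standard_normal(10), rng.standard_normal(10)
alphas = betas = np.linspace(-1.0, 1.0, 51)
heat = np.array([[grad_variance(w_star + a * w1 + b * w2) for b in betas]
                 for a in alphas])
print(f"gradient variance at w*: {heat[25, 25]:.4f}")  # 0.0000
```

Plotting `heat` (or its square root, as done for DARTS above) as a heat map gives figures of the kind shown in Figure 7.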
To study the impact of cell width and depth on the gradient variance, we compare the gradient variance of popular NAS architectures and their randomly connected variants trained in Section 4.1 on CIFAR-10 and CIFAR-100. We visualize the gradient variance of DARTS (Liu et al., 2019) and its randomly connected variants in Figure 7 and Figure 8. For better visualization, we plot the figures using the standard deviation (i.e., $\sqrt{g(\alpha,\beta)}$) to avoid the extremely large values arising in the visualization of DARTS. Clearly, as the cell width decreases and the cell depth increases (i.e., from $C^{darts}$ to $C_{4}^{darts}$), the region with relatively small gradient variance shrinks, as shown in Figure 7. Consistently, the gradient variance generally shows an increasing trend from $C^{darts}$ to $C_4^{darts}$ in Figure 8. Consequently, the gradient becomes noisier in the neighborhood of the optimum, which typically makes the optimization harder and less stable.

![](images/59384b8a6898696094e3f33223d96ce292607edc02ba7e4fdb5f823408038f04.jpg)
(a) $C^{darts}$

![](images/975c864337ee8e89d3be65fe626132b8a732d69cc1eeea78ed3739a14a133f3c.jpg)
(b) $C_1^{darts}$

![](images/78a845ab282febc64d92048e69209b4ba44e5da0b9282623ef10ee3e1850f63b.jpg)
(c) $C_2^{darts}$

![](images/6f192cdc0f3af25a04044c4baae6ae47fc818107153620b91d40658a063acd13.jpg)
(d) $C_3^{darts}$

![](images/de6370cc4ad8535d44123be44ae01104c5bfba11b21235335ac8c5d021638d37.jpg)
(e) $C_4^{darts}$
Figure 8: 3D surfaces of the gradient variance of DARTS and its randomly connected variants around the optimum on the test dataset of CIFAR-100. The height of the surface indicates the value of the gradient variance. Notably, the height of the gradient variance surface gradually increases from left to right; in particular, $C^{darts}$ has the smoothest and lowest gradient variance surface among these architectures.
Similar results for other popular NAS architectures and their random variants are provided in Appendix B.4. Based on these results, we conclude that increasing the width and decreasing the depth of a cell results in a smaller gradient variance, which makes the optimization process less noisy and more efficient. Following Theorem 4.1, the convergence of wide and shallow cells should therefore be fast and stable.

# 4.3 THEORETICAL ANALYSIS OF FACTORS AFFECTING CONVERGENCE

Our empirical study so far suggests that a larger cell width and a smaller cell depth smooth the loss landscape and decrease the gradient variance. Consequently, popular NAS architectures with wide and shallow cells converge fast. In this section, we investigate the impact of cell width and depth on Lipschitz smoothness and gradient variance from a theoretical perspective.

# 4.3.1 SETUP

We analyze the impact of cell width and depth by comparing architectures with the widest cell and the narrowest cell, as shown in Figure 26 of Appendix C. To simplify the analysis, the cells we investigate contain only one input node $x$ and one output node. The input node may be a training instance or the output node of any preceding cell. All operations in the cell are linear operations without any non-linearity. Suppose there are $n$ intermediate nodes in a cell; the $i$-th intermediate node and its associated weight matrix are denoted as $\pmb{y}^{(i)}$ and $W^{(i)}$ $(i = 1, \dots, n)$ respectively. The output node $z$ denotes the concatenation of all intermediate nodes. Both cells share the same arbitrary objective function $f$ following the output node, which may consist of an arbitrary number of activation functions and cells. For clarity, we refer to the objective function, intermediate nodes and output node of the architecture with the narrowest cell as $\widehat{f}$, $\widehat{\pmb{y}}^{(i)}$ and $\widehat{\pmb{z}}$ respectively.
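A minimal numpy sketch of the two extreme cells in this linear setting (the widest cell computes every intermediate node directly from the input, while the narrowest chains them, cf. Figure 26); the dimensions and weight matrices below are hypothetical, and the largest singular values are used as a numerical stand-in for the eigenvalues in the theorems:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                       # n intermediate nodes, feature dimension d
Ws = [np.eye(d) + 0.3 * rng.standard_normal((d, d)) for _ in range(n)]
x = rng.standard_normal(d)

# Widest cell: every intermediate node is computed directly from the input.
y_wide = [W @ x for W in Ws]

# Narrowest cell: intermediate nodes form a chain,
# so the i-th node is W^(i) W^(i-1) ... W^(1) x.
y_narrow, h = [], x.copy()
for W in Ws:
    h = W @ h
    y_narrow.append(h)

# The first node coincides in both cells; deeper nodes differ.
assert np.allclose(y_wide[0], y_narrow[0])

# In Theorem 4.2 below, the block-wise Lipschitz constant of the deepest
# block of the narrowest cell is scaled by the product of the spectra of
# the preceding weight matrices; that product typically exceeds 1.
scale = np.prod([np.linalg.svd(W, compute_uv=False)[0] for W in Ws[:-1]])
print(f"scaling factor of the deepest block: {scale:.2f}")
```

The chained structure is what introduces the multiplicative factors that appear in the bounds of Section 4.3.2.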
As shown in Figure 26, the intermediate nodes $\pmb{y}^{(i)}$ and $\widehat{\pmb{y}}^{(i)}$ can be computed by $\pmb{y}^{(i)} = W^{(i)}\pmb{x}$ and $\widehat{\pmb{y}}^{(i)} = \prod_{k=1}^{i} W^{(k)}\pmb{x}$ respectively, where we set $\prod_{k=1}^{i} W^{(k)} = W^{(i)}W^{(i-1)}\cdots W^{(1)}$. All related proofs of the following theorems can be found in Appendix C.

![](images/0faf9fe9ea81530835c39c5da8baeb86b6792886150ebc3164c720b4840cf697.jpg)
Figure 9: Comparison of the test accuracy at convergence between popular NAS architectures and their randomly connected variants on CIFAR-10. Each popular NAS architecture (index 0 on the $x$-axis) is followed by 13 randomly connected variants (from index 1 to index 13 on the $x$-axis), corresponding to $C_1$ to $C_{13}$ respectively. The width and depth of these random variants are shown in Table 2 of Appendix B.2. The dashed lines report the accuracy of the popular NAS architectures.

# 4.3.2 THEORETICAL RESULTS

Due to the complexity of the standard Lipschitz smoothness, we instead investigate the block-wise Lipschitz smoothness (Beck & Tetruashvili, 2013) of the two cases shown in Figure 26. In Theorem 4.2, we show that the block-wise Lipschitz constant of the narrowest cell is scaled by the largest eigenvalues of the weight matrices (i.e., $W^{(i)}$, $i = 1,\dots,n$). Notably, the Lipschitz constant of the narrowest cell can be significantly larger than that of the widest cell when most of the largest eigenvalues exceed 1, which slows down convergence substantially. The empirical study in Section 4.2.1 has validated these results.

Theorem 4.2 (The impact of cell width and depth on block-wise Lipschitz smoothness) Let $\lambda^{(i)}$ be the largest eigenvalue of $W^{(i)}$. Given the widest cell with objective function $f$ and the
narrowest cell with objective function $\widehat{f}$, and assuming the block-wise Lipschitz smoothness of the widest cell satisfies $\left\| \frac{\partial f}{\partial W_1^{(i)}} - \frac{\partial f}{\partial W_2^{(i)}} \right\| \leq L^{(i)} \left\| W_1^{(i)} - W_2^{(i)} \right\|$ for any $W_1^{(i)}$ and $W_2^{(i)}$, the block-wise Lipschitz smoothness of the narrowest cell can then be represented as

$$
\left\| \frac{\partial \widehat{f}}{\partial W_1^{(i)}} - \frac{\partial \widehat{f}}{\partial W_2^{(i)}} \right\| \leq \left(\prod_{j=1}^{i-1} \lambda^{(j)}\right) L^{(i)} \left\| W_1^{(i)} - W_2^{(i)} \right\|
$$

We then compare the gradient variance of the two cases shown in Figure 26. Interestingly, the gradient variance exhibits a similar but even more pronounced difference between the two cases than the Lipschitz smoothness does. As shown in Theorem 4.3, the gradient variance of the narrowest cell is scaled not only by the squares of the largest eigenvalues of the weight matrices but also by the number of intermediate nodes (i.e., $n$). Moreover, the upper bound of its gradient variance contains a number of additional terms, leading to a significantly larger gradient variance. The empirical study in Section 4.2.2 has confirmed these results.

Theorem 4.3 (The impact of cell width and depth on gradient variance) Let $\lambda^{(i)}$ be the largest eigenvalue of $W^{(i)}$.
Given the widest cell with objective function $f$ and the narrowest cell with objective function $\widehat{f}$, and assuming the gradient variance of the widest cell satisfies $\mathbb{E}\left\| \frac{\partial f}{\partial W^{(i)}} - \mathbb{E}\frac{\partial f}{\partial W^{(i)}}\right\|^2 \leq (\sigma^{(i)})^2$ for any $W^{(i)}$, the gradient variance of the narrowest cell is then bounded by

$$
\mathbb{E} \left\| \frac{\partial \widehat{f}}{\partial W^{(i)}} - \mathbb{E} \frac{\partial \widehat{f}}{\partial W^{(i)}} \right\|^2 \leq n \sum_{k=i}^{n} \left(\frac{\sigma^{(k)}}{\lambda^{(i)}} \prod_{j=1}^{k} \lambda^{(j)}\right)^2
$$

# 5 GENERALIZATION BEYOND THE COMMON CONNECTIONS

Our empirical and theoretical results so far have demonstrated that the common connection pattern helps to smooth the loss landscape and make the gradient more accurate. Popular NAS architectures with wider and shallower cells therefore converge faster, which explains why they are selected by the NAS algorithms. Nonetheless, we have so far ignored the generalization performance of popular NAS architectures and their random variants. We therefore ask whether popular NAS architectures with wide and shallow cells also generalize better.

In Figure 9, we visualize the test accuracy of popular NAS architectures and their randomly connected variants trained in Section 4.1. Notably, the popular NAS architectures achieve competitive accuracy compared with most of the random variants. However, some random variants achieve higher accuracy than the popular architectures. Interestingly, there seems to be an optimal choice of cell depth and width for achieving higher test accuracy (i.e., $C_7$ for DARTS and $C_4$ for ENAS). Popular NAS architectures with wide and shallow cells are therefore not guaranteed to generalize better, although they typically converge faster than other random variants.
We also adapt the connections of popular NAS architectures to obtain their widest and shallowest variants. The adaptation is possible because the cells (including normal and reduction cells)

Table 1: Comparison of the test error at convergence between the original and the adapted NAS architectures on CIFAR-10/100 and Tiny-ImageNet-200. The entire networks are constructed and trained following the experimental settings reported in Appendix A.3, which may slightly deviate from the original ones. The test errors (and the parameter sizes) of the original and adapted architectures are reported on the left and right of the slash respectively.

<table>
<tr><td>Architecture</td><td colspan="2">CIFAR-10</td><td colspan="2">CIFAR-100</td><td colspan="2">Tiny-ImageNet-200</td></tr>
<tr><td></td><td>Error (%)</td><td>Params (M)</td><td>Error (%)</td><td>Params (M)</td><td>Error (%)</td><td>Params (M)</td></tr>
<tr><td>NASNet (Zoph et al., 2018)</td><td>2.65/2.80</td><td>4.29/4.32</td><td>17.06/16.86</td><td>4.42/4.45</td><td>31.88/32.05</td><td>4.57/4.60</td></tr>
<tr><td>AmoebaNet (Real et al., 2019)</td><td>2.76/2.91</td><td>3.60/3.60</td><td>17.55/17.28</td><td>3.71/3.71</td><td>32.22/33.16</td><td>3.83/3.83</td></tr>
<tr><td>ENAS (Pham et al., 2018)</td><td>2.64/2.76</td><td>4.32/4.32</td><td>16.67/16.04</td><td>4.45/4.45</td><td>30.68/31.36</td><td>4.60/4.60</td></tr>
<tr><td>DARTS (Liu et al., 2019)</td><td>2.67/2.73</td><td>3.83/3.90</td><td>16.41/16.15</td><td>3.95/4.03</td><td>30.58/31.33</td><td>4.08/4.16</td></tr>
<tr><td>SNAS (Xie et al., 2019b)</td><td>2.88/2.69</td><td>3.14/3.19</td><td>17.78/17.20</td><td>3.26/3.31</td><td>32.40/32.61</td><td>3.39/3.45</td></tr>
+ +of popular NAS architectures are generally not widest and narrowest as shown in Figure 1. While there are various widest and shallowest cells following our definition of cell width and depth, we apply the connection pattern of SNAS cell shown in Figure 1(e) to obtain the widest and shallowest cells. The adapted topologies are shown in Figure 25 of Appendix B.5. + +Table 1 illustrates the comparison of the test accuracy between our adapted NAS architectures and the original ones. As shown in Table 1, the adapted architectures achieve smaller test error on CIFAR-100. Nevertheless, most of the adapted architectures, obtain larger test error than the original NAS architectures on both CIFAR-10 and Tiny-ImageNet-200 $^4$ . The results again suggest that the widest and shallowest cells may not help architectures generalize better, while these architectures typically achieve compelling generalization performance. + +The results above have revealed that the architectures with wide and shallow cells may not generalize better despite their fast convergence. To improve current NAS algorithms, we therefore need to rethink the evaluation of the performance of candidate architectures during architecture search since the current NAS algorithms are not based on the generalization performance at convergence as mentioned in Section 4.1. Nonetheless, architectures with the wide and shallow cells usually guarantee a stable and fast convergence along with competitive generalization performance, which should be good prior knowledge for designing architectures and NAS algorithms. + +# 6 CONCLUSION AND DISCUSSION + +Recent works have been focusing on the design and evaluation of NAS algorithms. We instead endeavour to examine the architectures selected by the various popular NAS algorithms. Our study is the first to explore the common structural patterns selected by existing algorithms, why these architectures are selected, and why these algorithms may be flawed. 
In particular, we reveal that popular NAS algorithms tend to favor architectures with wide and shallow cells, which typically converge fast and are consequently likely to be selected during the search process. However, these architectures may not generalize better than candidates with narrow and deep cells.

To further improve the performance of the selected NAS architectures, one promising direction for current NAS research is to evaluate the generalization performance of candidate architectures more accurately and efficiently. While popular NAS architectures enjoy fast and stable convergence along with competitive generalization performance, we believe that wide and shallow cells remain useful prior knowledge for the design of the search space. We hope this work can attract more attention to the interpretation and understanding of existing popular NAS algorithms.

# ACKNOWLEDGEMENT

This research is supported by the National Research Foundation Singapore under its AI Singapore Programme [Award No. AISG-GC-2019-002] and the Singapore Ministry of Education Academic Research Fund Tier 3 under MOE's official grant number MOE2017-T3-1-007.

# REFERENCES

Youhei Akimoto, Shinichi Shirakawa, Nozomu Yoshinari, Kento Uchida, Shota Saito, and Kouhei Nishida. Adaptive stochastic natural gradient method for one-shot neural architecture search. In ICML, volume 97 of Proceedings of Machine Learning Research, pp. 171-180. PMLR, 2019.
Amir Beck and Luba Tetruashvili. On the convergence of block coordinate descent type methods. SIAM Journal on Optimization, 23(4):2037-2060, 2013.
Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. In ICLR (Poster). OpenReview.net, 2019.
Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. CoRR, abs/1708.04552, 2017.
Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey.
Journal of Machine Learning Research, 20(55):1-21, 2019. +Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013. +Ian J. Goodfellow and Oriol Vinyals. Qualitatively characterizing neural network optimization problems. In ICLR, 2015. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. +Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In CVPR, pp. 2261-2269. IEEE Computer Society, 2017. +Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1106-1114, 2012. +Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. In *ICLR (Poster)*. OpenReview.net, 2017. +Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In NeurIPS, pp. 6391-6401, 2018. +Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In UAI, pp. 129. AUAI Press, 2019. +Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: differentiable architecture search. In ICLR (Poster). OpenReview.net, 2019. +Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. In NeurIPS, pp. 7827-7838, 2018. +Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, and Lihi Zelnik-Manor. XNAS: neural architecture search with expert advice. CoRR, abs/1906.08031, 2019. +Yurii Nesterov. Introductory Lectures on Convex Optimization - A Basic Course, volume 87 of Applied Optimization. Springer, 2004. 
Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In ICML, volume 80 of Proceedings of Machine Learning Research, pp. 4092-4101. PMLR, 2018.
Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In AAAI, pp. 4780-4789. AAAI Press, 2019.
Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. Evaluating the search phase of neural architecture search. arXiv preprint arXiv:1902.08142, 2019.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1-9. IEEE Computer Society, 2015.
Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. Mnasnet: Platform-aware neural architecture search for mobile. In CVPR, pp. 2820-2828. Computer Vision Foundation / IEEE, 2019.
Saining Xie, Alexander Kirillov, Ross Girshick, and Kaiming He. Exploring randomly wired neural networks for image recognition. arXiv preprint arXiv:1904.01569, 2019a.
Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: stochastic neural architecture search. In ICLR (Poster). OpenReview.net, 2019b.
Zirui Zhou, Qi Zhang, and Anthony Man-Cho So. $\ell_{1,p}$-norm regularization: Error bounds and convergence rate analysis of first-order methods. In ICML, pp. 1501-1510, 2015.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. In CVPR, pp. 8697-8710. IEEE Computer Society, 2018.

# APPENDIX A EXPERIMENTAL SETUP

# A.1 DATA PRE-PROCESSING AND AUGMENTATION

Our experiments are conducted on CIFAR-10/100 (Krizhevsky et al., 2009) and Tiny-ImageNet-200.
CIFAR-10/100 contains 50,000 training images and 10,000 test images of $32 \times 32$ pixels in 10 and 100 classes respectively. Tiny-ImageNet-200 consists of 100,000 training images, 10,000 validation images and 10,000 test images $^{5}$ in 200 classes. We adopt the same data pre-processing and augmentation as described in DARTS (Liu et al., 2019): zero-padding the training images with 4 pixels on each side and then randomly cropping them back to $32 \times 32$ on CIFAR-10/100 and $64 \times 64$ on Tiny-ImageNet-200; randomly flipping training images horizontally; and normalizing training images with the means and standard deviations along the channel dimension.

# A.2 SAMPLING OF RANDOM VARIANTS

For an $N$-node NAS cell, there are $\frac{(N - 2)!}{(M - 1)!}$ possible connections with $M$ input nodes and one output node. There are therefore hundreds to thousands of possible randomly connected variants for each popular NAS cell. The random variants of operations comprise a similarly large or even larger number of architectures. Due to the prohibitive cost of comparing popular NAS cells with all variants, we randomly sample a subset of variants to understand why the popular NAS cells are selected.

Given a NAS cell $C$, we fix the partial order of the intermediate nodes and their accompanying operations. We then replace the source node of each associated operation by uniformly sampling a node from its preceding nodes in the same cell, yielding the randomly connected variants. Similarly, given a NAS cell $C$, we fix the partial order of the intermediate nodes and their connection topologies. We then replace the operation coupled with each connection by uniformly sampling from the candidate operations, yielding the random variants of operations.

# A.3 ARCHITECTURES AND TRAINING DETAILS

For experiments on CIFAR-10/100 and Tiny-ImageNet-200, the neural network architectures are constructed by stacking $L = 20$ cells.
Feature maps are down-sampled at the $L / 3$-th and $2L / 3$-th cells of the entire architecture with stride 2. For Tiny-ImageNet-200, the stride of the first convolutional layer is set to 2 to reduce the input resolution from $64 \times 64$ to $32 \times 32$. A more detailed construction scheme can be found in DARTS (Liu et al., 2019).

In the default training setting, we apply stochastic gradient descent (SGD) with learning rate 0.025, momentum 0.9, weight decay $3 \times 10^{-4}$ and batch size 80 to train the models for 600 epochs on CIFAR-10/100 and 300 epochs on Tiny-ImageNet-200 to ensure convergence. The learning rate is gradually annealed to zero following the standard cosine annealing schedule. To compare convergence under different learning rates in Section 4.1, we change the initial learning rate from 0.025 to 0.25 and 0.0025 respectively.

# A.4 REGULARIZATION

Since regularization mechanisms can affect convergence (Zhou et al., 2015), architectures are trained without regularization to keep the empirical study in Section 4 clean. The regularization mechanisms are only used in Section 5 to obtain the converged generalization performance of the original and adapted NAS architectures on CIFAR-10/100 and Tiny-ImageNet-200, as shown in Table 1.

We adopt three regularization mechanisms on CIFAR-10/100 and Tiny-ImageNet-200 in this paper: cutout (Devries & Taylor, 2017), the auxiliary tower (Szegedy et al., 2015) and drop path (Larsson et al., 2017). We apply standard cutout regularization with cutout length 16. The auxiliary tower is located at the $2L / 3$-th cell of the entire architecture with weight 0.4. We apply the same linearly-increased drop path schedule as in NASNet (Zoph et al., 2018) with a maximum probability of 0.2.

# APPENDIX B MORE RESULTS

# B.1 NAS ARCHITECTURES AND THEIR VARIANTS

We compare the width and depth of popular NAS architectures and their variants of random connections in Table 2.
The random variants are sampled following the method in Appendix A.2. We further show the connection topologies of the popular NAS cells and part of their random connection variants in Figure 10 and Figure 11.

Table 2: Comparison of the width and depth of popular NAS cells and their random variants of connections. The name of each popular NAS cell is followed by its width and depth, separated by a comma. The width of a cell is conventionally computed by assuming that each intermediate node has the same width $c$. Notably, the width and depth of the random variants are in ascending and descending order respectively from $C_1$ to $C_{13}$. Moreover, the popular NAS architectures achieve the largest width and nearly the smallest depth among all the variants.

<table>
<tr><td>Base Cell</td><td>C1</td><td>C2</td><td>C3</td><td>C4</td><td>C5</td><td>C6</td><td>C7</td><td>C8</td><td>C9</td><td>C10</td><td>C11</td><td>C12</td><td>C13</td></tr>
<tr><td>DARTS (3.5c, 3)</td><td>2c, 4</td><td>2c, 4</td><td>2c, 4</td><td>2.5c, 4</td><td>2.5c, 3</td><td>2.5c, 3</td><td>2.5c, 3</td><td>2.5c, 3</td><td>3c, 3</td><td>3c, 3</td><td>3c, 3</td><td>3.5c, 3</td><td>3.5c, 3</td></tr>
<tr><td>ENAS (5c, 2)</td><td>1.5c, 6</td><td>1.5c, 5</td><td>2c, 6</td><td>2c, 6</td><td>2.5c, 5</td><td>2.5c, 5</td><td>3c, 4</td><td>3c, 3</td><td>3.5c, 5</td><td>3.5c, 4</td><td>3.5c, 4</td><td>3.5c, 3</td><td>3.5c, 3</td></tr>
<tr><td>AmoebaNet (4c, 4)</td><td>1.5c, 6</td><td>1.5c, 5</td><td>1.5c, 5</td><td>1.5c, 3</td><td>2c, 6</td><td>2c, 6</td><td>2c, 4</td><td>2.5c, 5</td><td>2.5c, 3</td><td>2.5c, 3</td><td>3c, 3</td><td>3.5c, 4</td><td>3.5c, 3</td></tr>
<tr><td>NASNet (5c, 2)</td><td>1.5c, 6</td><td>1.5c, 5</td><td>2c, 6</td><td>2c, 6</td><td>2.5c, 5</td><td>2.5c, 5</td><td>3c, 4</td><td>3c, 3</td><td>3.5c, 5</td><td>3.5c, 4</td><td>3.5c, 4</td><td>3.5c, 3</td><td>3.5c, 3</td></tr>
+ +![](images/83ec466a3fd00e17ad6322867df597d146bdfa91e8384b6b4e1f0c4c63d1754d.jpg) +(a) $3.5c,4$ + +![](images/e045be3143caa4ada54e1c52f8bee0b153d7db011a7f997a31e3b5a092e6512e.jpg) +(b) $3.5c, 3$ + +![](images/e2d5ead8d8e5faa3a5b0bfff639083bed0f735fe7013d28ec94f35360b440903.jpg) +(c) $3c,4$ + +![](images/6031f2dee6d0d885f9017c2d2c416d8d42b9b6175f0cd88c118283bd90671f37.jpg) +(d) $2.5c, 5$ + +![](images/b08b2fbe8841fed6edf21c5da88f255dd4e0196013690ef36dccb703a609a9ce.jpg) +(e) $2c,6$ + +![](images/4f236e6bd664d649e9d72bbbb0746528362e33b58cc0ca63c018fabb505a29ec.jpg) +(a) $4c,2$ + +![](images/67ef1dafda8556ea43e5d63a11bbfd3bea79b0586dd8fa65c5c66547110043fa.jpg) +(b) $3.5c, 3$ + +![](images/cd736512117168bdef723728f82613141fc195dd6a0c58b5af154e75824e7f6c.jpg) +Figure 10: Connection topology of AmoebaNet cell (Real et al., 2019) and its part of randomly connected variants. Each sub-figure reports the width and depth of a cell separated by a comma. The leftmost one is the original connection from AmoebaNet normal cell and others are the ones randomly sampled. The width of a cell is also computed by assuming that each intermediate node shares the same width $c$ . Notably, the original AmoebaNet cell has the largest width and almost the smallest depth among these cells. +(c) $3c, 3$ + +![](images/cd45cb5e3a2312f34c55799c1b32a3bbd7b737c40b2a41ee70b8fccf2ca4aab3.jpg) +(d) $2.5c,4$ +Figure 11: Connection topology of SNAS cell under mild constraint (Xie et al., 2019b) and its part of randomly connected variants. The width and depth of a cell are reported in the title of each plot. The leftmost one is the original connection from SNAS normal cell and others are the ones randomly sampled. The width of a cell is conventionally computed by assuming that each intermediate node shares the same width $c$ . Notably, the original SNAS cell has the largest width and the smallest depth among these cells. 
+ +![](images/963c0392138a730c49b0d9f9760359bb42d3b67405675ab5ca55a528fa8cc05a.jpg) +(e) $2c,5$ + +![](images/a03150cd70469164dcf7fd3547f77e0732c00d64fcc1b69633bc687bfe71fbbf.jpg) +(a) DARTS + +![](images/d1ba5873dd59604bec3cfe963103544363a7dba4164de61ca8bcb6d0d08f5380.jpg) +(b) ENAS +Figure 12: More test accuracy (\%) curves of DARTS, ENAS, AmoebaNet, NASNet and their random variants of operations on CIFAR-10 during training. + +![](images/d026eb9d09efb4ce849f37182aa69757aec92ad93ef45609d754e948aab347ad.jpg) +(c) AmoebaNet + +![](images/97017cd384fe9469139953a894c6119a19007cff2369db1e0cb6b8882bb1b6ac.jpg) +(d) NASNet + +Table 3: Comparison of the parameter size (MB) of popular NAS cells and their random variants of operations. $C_0$ denotes the original NAS cell and $C_1$ to $C_{10}$ denote the random variants. Notably, there is a gap of $\sim 30\%$ between the parameter size of the smallest architecture and that of the largest architecture. + +
| Base cell | C0 | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DARTS | 3.35 | 3.37 | 2.84 | 2.70 | 2.98 | 3.19 | 2.43 | 3.49 | 2.88 | 3.31 | 2.81 |
| ENAS | 3.86 | 3.45 | 3.19 | 2.98 | 2.70 | 3.67 | 3.03 | 3.85 | 3.26 | 3.81 | 3.29 |
| AmoebaNet | 3.15 | 2.86 | 2.62 | 2.41 | 2.10 | 3.10 | 2.46 | 3.28 | 2.69 | 3.42 | 2.75 |
| NASNet | 3.83 | 3.45 | 3.19 | 2.98 | 2.70 | 3.67 | 3.03 | 3.85 | 3.26 | 3.81 | 3.29 |
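The depth statistics quoted throughout this appendix are graph quantities of a cell's connection topology. As a rough illustration (the adjacency lists below are toy examples, not actual searched cells), the depth of a cell can be computed as the number of edges on the longest input-to-output path of its DAG:

```python
# Sketch: depth of a cell's DAG, measured as the number of edges on the
# longest path from the input node to the output node. The example graphs
# are hypothetical, not taken from an actual searched cell.
from functools import lru_cache

def cell_depth(edges, source, sink):
    """Longest path (in edges) from source to sink in a DAG."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)

    @lru_cache(maxsize=None)
    def longest(u):
        if u == sink:
            return 0
        best = float("-inf")
        for v in adj.get(u, []):
            best = max(best, 1 + longest(v))
        return best

    return longest(source)

# A fully sequential (narrowest) cell: in -> n1 -> n2 -> n3 -> out
chain = [("in", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "out")]
# A fully parallel (widest) cell: every intermediate node reads the input directly
parallel = [("in", "n1"), ("in", "n2"), ("in", "n3"),
            ("n1", "out"), ("n2", "out"), ("n3", "out")]

print(cell_depth(chain, "in", "out"))
print(cell_depth(parallel, "in", "out"))
```

Under this convention the fully parallel cell has the minimum possible depth, while the fully sequential cell attains the maximum, matching the two extremes compared in the analysis.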
+ +# B.2 CONVERGENCE + +In this section, we plot more test loss curves on CIFAR-10 (Krizhevsky et al., 2009) for the original popular NAS architectures and their 12 randomly connected variants, as shown in Figure 13, Figure 14 and Figure 16. The depth and width of these 12 randomly connected variants can be found in Table 2. Notably, the width and depth of the random variants (from $C_1$ to $C_{12}$ ) are in ascending and descending order respectively. Moreover, the popular NAS architectures achieve the largest width and nearly the smallest depth among all the variants. As shown in the following figures, the popular NAS cells, with larger width and smaller depth, typically achieve faster and more stable convergence than the random variants. Furthermore, with increasing width and decreasing depth, the convergence of the random variants approaches that of the original NAS architectures. + +![](images/044f09e83ea4759e215f3121c5da8e9b271ad2f69e532e8ac8b672faae3aa192.jpg) +Figure 13: Test loss curves of DARTS and its variants on CIFAR-10 during training. + +![](images/05a2ff6ca45bb9ff4ea0049164193e04fa8f99e481a6b76e5619864003d82b4f.jpg) + +![](images/3e92f83ee091a7bad11fb7c554092811603d3236d8bda6a1a007ffc6a9ab9453.jpg) + +![](images/8e04d54902b503dadfcc5e42e0e59f361140cb10eb44a72ca50e1b2c255ffceb.jpg) + +![](images/10bdb03f3b42e84afebaf5e2c1ab5680cf90ac68d200a311ba1f6ccddef49484.jpg) +Figure 14: Test loss curves of AmoebaNet and its variants on CIFAR-10 during training. + +![](images/5729414b41495569be0982bda95616b90581a08f3e61ee168e2f98e9b6a1dafb.jpg) + +![](images/1c6f0f57078e369ab805cdee9670988df4022df4d74a0482251cfd18cae08937.jpg) + +![](images/323eb056cc651589dd7cbcbca82cf2f2466db8a6ebb47cac28f3d982a608dd5f.jpg) + +![](images/023d1ce1743f7a74ccbfa75453e5ba8ef38ae5797673e32a878d669acb62e3c7.jpg) +Figure 15: Test loss curves of ENAS and its variants on CIFAR-10 during training.
+ +![](images/da86ec89d9232c15ff23e9485d8663f49351c2f64a7a40e491a9a178e6a9c632.jpg) + +![](images/45bb1a2e19174842ac9216cd3cc6d80616a8071158471d9651de045abaeae0ca.jpg) + +![](images/7947f8506d1173b781624ca8b588d56d8a87b169c11ed440e3565f33c835dcb6.jpg) + +![](images/48614737018c15b1e978bc44e9a25a4b09ba86894955988a74bdc13e967c357c.jpg) +Figure 16: Test loss curves of NASNet and its variants on CIFAR-10 during training. + +![](images/7c39acb6f4f7b72754e7ee3a3deb915a24ed1dc308069b8b763905b6f8e60961.jpg) + +![](images/b536176bb400d730b6372988b7b9d3bc7265008b638b155c9c112c0d4d6de5ec.jpg) + +![](images/15285198976f231979a040f7c76cd5690ca8e89053bd2d30e7db0395c19e8b87.jpg) + +# B.3 LOSS LANDSCAPE + +In this section, we visualize loss landscapes for popular NAS architectures and their randomly connected variants. The depth and width of a cell are highly correlated. For example, the depth and width cannot reach their maximum simultaneously. With increasing width, the average depth of cells grouped by the same width is decreasing, as shown in Table 2. We therefore only group the results (including the ones from the original NAS architectures) by the width level of a cell for a better comparison. Notably, the architectures with wider and shallower cells have a smoother and more benign loss landscape, as shown in Figure 17, Figure 18, Figure 19 and Figure 20, which further supports the results in Section 4.2.1.
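Loss contours like those in the following figures are typically produced with the two-random-directions recipe (cf. Li et al., 2018): evaluate the loss on a grid around the trained parameters along two normalized random directions. A minimal sketch, with a toy quadratic loss standing in for the network loss (`loss_fn` and `theta` are placeholders, not the papers' code):

```python
import numpy as np

def loss_surface(loss_fn, theta, span=1.0, steps=21, seed=0):
    """Evaluate loss_fn on a 2D grid around theta along two random directions."""
    rng = np.random.default_rng(seed)
    # Two random directions, rescaled to the magnitude of the parameters.
    d1 = rng.standard_normal(theta.shape)
    d2 = rng.standard_normal(theta.shape)
    d1 *= np.linalg.norm(theta) / np.linalg.norm(d1)
    d2 *= np.linalg.norm(theta) / np.linalg.norm(d2)
    alphas = np.linspace(-span, span, steps)
    betas = np.linspace(-span, span, steps)
    surface = np.array([[loss_fn(theta + a * d1 + b * d2)
                         for b in betas] for a in alphas])
    return alphas, betas, surface

theta_star = np.ones(10)                       # stand-in for trained weights
toy_loss = lambda w: float(np.sum((w - theta_star) ** 2))
a, b, S = loss_surface(toy_loss, theta_star)
# For the toy quadratic loss the minimum sits at the grid center (alpha = beta = 0).
```

The resulting `S` can then be passed to any contour plotter; the contours shown here compare how smooth this surface is across cell widths.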
+ +![](images/d717efd45baffa41a91aac4b8b227389d3768832e7b93b13e710e4bdcc357347.jpg) + +![](images/de26aaf62f0885223185f60527942b818fc2377b5f02942bec9540e4c57036fe.jpg) + +![](images/9a19513ce277addbf2bc97f67544dddf7c56ea77346e3423e29d0fb950670712.jpg) + +![](images/1bb27490af40709d4197eee291b61e7b519b6eb27f8fac57699a8df11709dd4c.jpg) + +![](images/3215a05be733d0b7a2e79cd981a6f94ef7f8676d0c80d51885db65758d906613.jpg) + +![](images/43147d8781a6a4a83241ed7bca10d15f72644fb72f229be5922715925e6dd5d8.jpg) + +![](images/e3daee1dfec25c0c386ea3419611fa2c17297f9c595212ecdf987d237a847171.jpg) + +![](images/c5bd3e38194f742b1b215af2675b5760165d5eb99ceb0e59f89cd16faea3f565.jpg) + +![](images/9b40944bb6d887b1cbc5a566e67fd34411d2c330d56fffb4f3ea8b89c1ffe965.jpg) +(a) $2c$ + +![](images/18d73c117caf46420615b9e97094845f2bde78fcbf85f9fc57666148c85993bc.jpg) +(b) $2.5c$ + +![](images/2d200d54ab1714913769c1686866d5b14f8703a213fd2693188da149160501f6.jpg) +(c) $3c$ +Figure 17: Loss contours of DARTS and its variants with random connections on the test dataset of CIFAR-10. + +![](images/e903a0b02ea2babcf6650b13fe547bb8dc6dd14078df396773b0e207a8007513.jpg) +(d) $3.5c$ + +![](images/dd9d7297417e70ddbaf8a27ae03ff9fefe94590712d7374d29ee62eb1c22a661.jpg) + +![](images/7e46e1a1a11458635876549b5b2cce71fb05c73d298e087ada24f0f1ba5d9b01.jpg) + +![](images/1725bcda7dff124dcf5ae720d44f21e9874ff2cd384ad03acd510729ce85a312.jpg) + +![](images/42808357170e1d1b560fa5bc14cc10d754f6ce6d1f9878725ff789700da1e17c.jpg) + +![](images/31bc2900faf4aee33e147fad11ee7a065fe7454420719761831d43f50551da06.jpg) + +![](images/1dacdc3ebd1ac3ab3cd21191f624fea5de07f5a7c8ed2b0c1d706a172af72d85.jpg) +(a) $1.5c$ + +![](images/a3e9b0cbc8961a44a6790438bdcf9ed24cabcb4c4a78d01774ed27a73c084518.jpg) +(b) $2c$ + +![](images/c720b1e56603f4416e51fb47582b8eeea789e21f797b3de06dae6917cb698159.jpg) +(c) $2.5c$ +Figure 18: Loss contours of AmoebaNet and its randomly connected variants on the test dataset of CIFAR-10. 
+ +![](images/c7de3ed0435669d4155c4b203936123acc0ccc3535a4c717d5798ebcb90f4610.jpg) +(d) $3c$ + +![](images/9b65daa108826e400ce65b58d9ceb1b4fc6d96fd78831e359b24e9012f202d1c.jpg) +(e) $3.5c$ + +![](images/392280b1dde6b0e5ff7ab80e4f84a6b9b1f78bf2bdb928a0697a58dfd282e894.jpg) + +![](images/8ac1506faf68ae1b303a278ace2bfcc5892f57f7bfb4202a8ebe5fb0759f8540.jpg) + +![](images/f628e95f1b3fdea0f7f8828d34ea7d0ba79cc3b598bf067c60dd074b8855fcaf.jpg) + +![](images/966bd9fb9507b173342badbe551215ccd920cf9487199e242f9dc870418e520a.jpg) + +![](images/755e4b3883a4d128434acf93ce297139c982f71ba2f13ebf47668003a30a4541.jpg) + +![](images/c274c3a3df7663ba89f90de897004153a3559cd99ff1eb31855d331de3333791.jpg) +(a) $1.5c$ +Figure 19: Loss contours of ENAS and its randomly connected variants on the test dataset of CIFAR-10. + +![](images/3dab29a3a5115382a5b2d1fc5d9b7f4169dd7804ca68ca77542b2d2a6653092f.jpg) +(b) $2c$ + +![](images/23542e77397bfb181a1bb13e2f685e7b11f8407feabae8ac7a0d5d42ce22e63a.jpg) +(c) $2.5c$ + +![](images/bbe0fca5e1b098ab46dc98298b0294fa60efc483eb95db80968c1a2c518b337d.jpg) +(d) $3c$ + +![](images/aff594ced2b913f1101495b5ba0d681fc4cd481fdb7a0975ae70cc3c8441d38a.jpg) +(e) $3.5c$ + +![](images/6ab4299a5b449b636ba5bf46c782061584bf21ace2e9e7c1c8a261a6054d2788.jpg) + +![](images/46d79229db5b5a37a99dd66371bd428de108ebb92ac77d5cef38ec7206afa0f7.jpg) + +![](images/b45c98f4afca6a2c8df0a6fa527982c4557cc435cbd32fa2999a43781149fa0f.jpg) + +![](images/83d1579edec2fde4f69f6a52964fc2b2fe970bd499bd5ae3819ad0c5107afa79.jpg) + +![](images/499b1edf21cb302bd5f5f9b6c9109545e14e6346f6dd2a792866008fb3ea0ded.jpg) + +![](images/bd260f944db81d3c6fcea9cbb7d311d0fbb9b2fd4abc014a4be7346da58a51f2.jpg) +(a) $1.5c$ +Figure 20: Loss contours of NASNet and its randomly connected variants on the test dataset of CIFAR-10. 
+ +![](images/e313272a11a3d08681a82f0c3ab4988d7c9b0b60bf0d1bf6f232ca8d2fed7017.jpg) +(b) $2c$ + +![](images/1242e091fdb28d57631ed1ca4d473395528f611b20d1a762bfc5ca052668be0d.jpg) +(c) $2.5c$ + +![](images/df47934874bdba579da2f83804e5c98d7dd3d628e4485c513e3c2f6c9ba04272.jpg) +(d) $3c$ + +![](images/b966e3713de07ec0ff2d23b480480c6ad6ef24620693977cf271b31f1d128c61.jpg) +(e) $3.5c$ + +# B.4 GRADIENT VARIANCE + +In this section, we visualize the gradient variance (i.e., $g(\alpha, \beta)$ as defined in Section 4.2.2) for the popular NAS architectures as well as their variants with random connections, such as AmoebaNet in Figure 21, DARTS in Figure 22, ENAS in Figure 23 and NASNet in Figure 24. The $z$ -axis has been scaled by $10^{-5}$ for a better visualization. Similarly, we group the results based on the width of cells. Notably, architectures with wider and shallower cells achieve relatively smaller gradient variance, which further confirms the results in Section 4.2.2. + +![](images/2d4c4abec30d5d7313b31fb45a42ce28144bb85cefbdb814d8446c9911e2a0ed.jpg) + +![](images/6cf793cba81ca2732d5859a58603942144e258a3d651418345f9ab4d041fed0b.jpg) +(a) $1.5c$ + +![](images/c3220dd91246b2ba0e07921a3c50e25da082bfacbcbf579d8f61c213804885bd.jpg) + +![](images/8879b9d5ee6c3d13d1d0e28ce974aa185543a6e5b61f26f53ec8e0376d2108f2.jpg) +(b) $2c$ + +![](images/19b6b1e5ee5a83594e605e5f8d54d2339a03c7577c1fb0059d20afa396d6edc1.jpg) + +![](images/ce378388d96e722ee39bdc30bc2b15b7e8793fdb2734550e1c6f29c2f9717be2.jpg) +(c) $2.5c$ +Figure 21: 3D surfaces of the gradient variance from AmoebaNet and its randomly connected variants on the test dataset of CIFAR-10.
+ +![](images/090f4d2903e913bc218ba30f6cdbd43acaa80edefa6a3dc4daa96f977577daf8.jpg) + +![](images/cad898c132c84f26f8be1521f0f99746c95f6265f3c274f1fb7be9abcb007cf6.jpg) +(d) $3c$ + +![](images/7edc6a8d0580b0793ee8dd0a827c0af03099794a7574c2f84553fe3a1f35adfc.jpg) + +![](images/27a556f5e2556c8bc82025d10f0c3181722a2563523a42bde57f5c0d6dad8456.jpg) +(e) $3.5c$ + +![](images/9633a841ffed5feddab25d4a6d4e156464bc96cd58fb2df17dd18ff710bd631c.jpg) + +![](images/0724b5f815f3b25ac0e11f3057162179f988c8416ddead875cd28bf990275885.jpg) + +![](images/cadabceb99752135edf1c030e17f48c66c719ce03bb801e0ff52775b08978aa9.jpg) + +![](images/638fb641d470d65bc254db6f72a91ebe36630fac739b515c977f5ec29f4aa906.jpg) + +![](images/6355e9dda6f74ddcba953772b3146e0b47d75b6ff2d37ffdf83ab75eea6932f6.jpg) + +![](images/0b37fc74ee81dcfeeb8d111d94e9f0c77bcf29628e6e5f09f9dfe793ff7e3ea6.jpg) + +![](images/4d871e78090c92a068030196e07959566b769cbc17c95c5e9e808ee1842c34f9.jpg) + +![](images/6adb365217713724718f828a1e8704028a7dce1ca84efb216f2f17e6d0174b86.jpg) + +![](images/f2515bcf7109319e5f1a292191901272e179153e9f9a296a908393e30772760c.jpg) +(a) $2c$ + +![](images/b428938e6c4565f2997d193a12bc952912a45d2ba07cfa26f2fdcd9ae15c72bd.jpg) +(b) $2.5c$ + +![](images/50e9f70a7cd3c508e3fcb0324eae529eae0de208d76f6019941d056b2370872c.jpg) +(c) $3c$ + +![](images/5be04c432bb90f8e6f33d17183588d526c8a944b0b4747a9b7fe5babe5639443.jpg) +(d) $3.5c$ +Figure 22: 3D surfaces of the gradient variance from DARTS and its randomly connected variants on the test dataset of CIFAR-10. 
+ +![](images/c9bb3402401e5f554bf1f5a0adcb16c82c3b7abe5c628bbb85b8241d09fd1410.jpg) + +![](images/e85fbf4b327f581065616e211f435b290b7ab21ae7754ce7180a319e4f77c501.jpg) + +![](images/646e80bf7f6aac02311ffd3b432f1e8a61c6f4ef885cad3317edaf126ffb4eef.jpg) + +![](images/6a83ff8d71303f7ca3cf84f7ccb1d558e8175dabf0beec2ce4104bd69f190949.jpg) + +![](images/ab6ca3e2ad0faa4ea0b760c9cfe94f79c2119cb2f5a19b190ffabc9c9e76ed44.jpg) + +![](images/7551d737b2e5ced8e0307246b2586dea8a0cdbfc0d971039806389a2857fe570.jpg) +(a) $1.5c$ +Figure 23: 3D surfaces of the gradient variance from ENAS and its randomly connected variants on the test dataset of CIFAR-10. + +![](images/04d22ce53342062f7cb15b4111e3721d87a5e6ee560ec4d80597c406377b9d5f.jpg) +(b) $2c$ + +![](images/bd24cbb935b37637a815987754f0f4c4e6b3bc96903ffb2ec6d0947329c40758.jpg) +(c) $2.5c$ + +![](images/742cf3aa781e936c73e33c58e79db37906e7d83dacc1d73570d7d51d88511825.jpg) +(d) $3c$ + +![](images/4fdc8f9a9e8bd7271b1690fa6fad0d7273db7cc965f867ef749ce72e46b247b9.jpg) +(e) $3.5c$ + +![](images/9725adc82c1259d0e0c9af186378c760a65e28b6b46a3acae80dda8d01c2765f.jpg) + +![](images/0b06a7be6842ffc9ac9246acb7a584b0513a8470c8d23aa4dced22751e6ff477.jpg) + +![](images/c07f9cac0db558f46b67a1020e9febf3a737623597708c7921ffe1c404943a66.jpg) + +![](images/27df301d44d04f2902b9b1f0822f29d1a957a0087ed847b0d094ea53ab26a374.jpg) + +![](images/512862549ccbf9b14b82453f6d0f724d1a3d969419d8fd2e3844e5a83d9252f5.jpg) + +![](images/6ced7db9a71c81f7007d7440e59b1d479fa0e8dd420c97a1a55e2830218a154a.jpg) +(a) $1.5c$ + +![](images/54ba7170534db3d18f0656b388852271ca44002dbfeb65a5256e79b30879a089.jpg) +(b) $2c$ + +![](images/598b61245a9e8dbd5f36b0157a2aa0ab6ffaf2f67fc07000fc54740f6220cdc6.jpg) +(c) $2.5c$ +Figure 24: 3D surfaces of the gradient variance from NASNet and its randomly connected variants on the test dataset of CIFAR-10. 
+ +![](images/f757d21d326a70bae4ca602c921bda8a0db0f8148d3e2394cc723d536700f33b.jpg) +(d) $3c$ + +![](images/b9dd8dd876488949b54701df7fe081eec6397e384c04707e71e6597d5cdfd653.jpg) +(e) $3.5c$ + +# B.5 ADAPTED TOPOLOGIES + +In this section, we visualize the adapted architectures (in Figure 25) that we investigate in Section 5. Notably, the adapted connection topologies are applied not only in the normal cell but also in the reduction cell. The adapted architectures are compared with the popular NAS architectures to examine the impact of the common connection pattern on generalization. + +![](images/c240529917e76bb27e96fe73eba8439a302330bb2bdfdc51014c34246befe504.jpg) +(a) NASNet, 5c, 2 +Figure 25: Adapted topologies of cells from popular NAS architectures. The title of each sub-figure includes the name of the architecture and the width and depth of the cell following our definition. Notably, these cells achieve the largest width and smallest depth in their original search space. + +![](images/1979e62448ef538a308d92d5f8224fbf7a73419ddbf7bb270e4f1b7a0f8ecbea.jpg) +(b) AmoebaNet, 3c, 2 + +![](images/abb9cd1e0cf5b63fe7b46d6b3db6e795f78f617eadfcd27760dacc89c11b2e28.jpg) +(c) ENAS, 5c, 2 + +![](images/1f5b1ee135a2272e9968cf62583e66cbf0039046a7d206af0e5662d03ca47a90.jpg) +(d) DARTS, 4c, 2 + +![](images/ff5fb1be7b505efa49813ff69516b887551b2d5a3264890a690cc073cddbe444.jpg) +(e) SNAS, $4c$ , 2 + +# APPENDIX C THEORETICAL ANALYSIS + +# C.1 SETUP + +![](images/1ed6673313ef7c3e602f43558bbbd5d40e97c7648302d64d52c0ecbafc4771b2.jpg) +(a) case I: widest cell +Figure 26: Two architectures to compare in the theoretical analysis: (a) architecture with the widest cell; (b) architecture with the narrowest cell. The notations $l$ and $\widehat{l}$ denote the values of the objective functions $f$ and $\widehat{f}$ evaluated at input $x$ , respectively.
+ +![](images/58b3652fc4ffad1a7b995a08a10a15804167a61b0bdf978f01bb5c2857df500.jpg) +(b) case II: narrowest cell + +# C.2 BASICS + +We first compare the gradients of case I and case II shown in Figure 26. For case I, since $\pmb{y}^{(i)} = W^{(i)}\pmb{x}$ , the gradient with respect to each weight matrix $W^{(i)}$ is given by + +$$
\frac {\partial f}{\partial W ^ {(i)}} = \frac {\partial f}{\partial \boldsymbol {y} ^ {(i)}} \boldsymbol {x} ^ {T} \tag {1}
$$ + +Similarly, since $\widehat{\pmb{y}}^{(i)} = \prod_{k=1}^{i} W^{(k)}\pmb{x}$ for case II, the gradient with respect to each weight matrix $W^{(i)}$ is given by + +$$
\begin{array}{l} \frac {\partial \widehat {f}}{\partial W ^ {(i)}} = \sum_ {k = i} ^ {n} \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \frac {\partial \widehat {f}}{\partial \widehat {\boldsymbol {y}} ^ {(k)}} \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)} \boldsymbol {x}\right) ^ {T} (2) \\ = \sum_ {k = i} ^ {n} \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \frac {\partial \widehat {f}}{\partial \widehat {\boldsymbol {y}} ^ {(k)}} \boldsymbol {x} ^ {T} \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)}\right) ^ {T} (3) \\ = \sum_ {k = i} ^ {n} \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \frac {\partial f}{\partial W ^ {(k)}} \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)}\right) ^ {T} (4) \\ \end{array}
$$ + +Exploiting the fact that $\frac{\partial\widehat{f}}{\partial\widehat{\pmb{y}}^{(i)}} = \frac{\partial f}{\partial\pmb{y}^{(i)}}$ , we get (4) by inserting (1) into (3). + +# C.3 PROOF OF THEOREM 4.2 + +Due to the complexity of comparing the standard Lipschitz constant of the smoothness for these two cases, we instead investigate the block-wise Lipschitz constant (Beck & Tetruashvili, 2013). In other words, we evaluate the Lipschitz constant for each weight matrix $W^{(i)}$ while fixing all other matrices.
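The chained gradient structure in (2)-(4) can be sanity-checked numerically. The sketch below (a toy setup, not tied to the paper's experiments) uses a three-matrix linear chain with a squared-norm objective and compares the analytic gradient with respect to the middle matrix against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
W = [rng.standard_normal((n, n)) for _ in range(3)]  # W^(1), W^(2), W^(3)
x = rng.standard_normal(n)

def fhat(W1, W2, W3):
    """Toy objective for the chained (narrowest) case: 0.5 ||W3 W2 W1 x||^2."""
    return 0.5 * np.sum((W3 @ W2 @ W1 @ x) ** 2)

# Analytic gradient w.r.t. W^(2), following the structure of (2)-(3):
# (matrices after layer 2)^T * upstream residual * (input to layer 2)^T.
r = W[2] @ W[1] @ W[0] @ x            # network output, also the residual here
grad_W2 = W[2].T @ np.outer(r, W[0] @ x)

# Central finite differences, entry by entry.
eps = 1e-6
fd = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = eps
        fd[i, j] = (fhat(W[0], W[1] + E, W[2])
                    - fhat(W[0], W[1] - E, W[2])) / (2 * eps)
```

Since the toy objective is quadratic in each weight matrix, the central difference agrees with the analytic expression up to floating-point error.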
Formally, we assume the block-wise Lipschitz smoothness of case I as + +$$
\left\| \frac {\partial f}{\partial W _ {1} ^ {(i)}} - \frac {\partial f}{\partial W _ {2} ^ {(i)}} \right\| \leq L ^ {(i)} \left\| W _ {1} ^ {(i)} - W _ {2} ^ {(i)} \right\| \quad \forall W _ {1} ^ {(i)}, W _ {2} ^ {(i)} \tag {5}
$$ + +The default matrix norm adopted here is the 2-norm, and $W_1^{(i)}, W_2^{(i)}$ denote possible assignments of $W^{(i)}$ . + +Denoting by $\lambda^{(i)} = \left\| W^{(i)}\right\|$ the largest singular value of the matrix $W^{(i)}$ , we can bound the smoothness of case II as + +$$
\begin{array}{l} \left\| \frac {\partial \widehat {f}}{\partial W _ {1} ^ {(i)}} - \frac {\partial \widehat {f}}{\partial W _ {2} ^ {(i)}} \right\| = \left\| \sum_ {k = i} ^ {n} \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \left(\frac {\partial f}{\partial W _ {1} ^ {(k)}} - \frac {\partial f}{\partial W _ {2} ^ {(k)}}\right) \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)}\right) ^ {T} \right\| (6) \\ \leq \sum_ {k = i} ^ {n} \left\| \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \left(\frac {\partial f}{\partial W _ {1} ^ {(k)}} - \frac {\partial f}{\partial W _ {2} ^ {(k)}}\right) \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)}\right) ^ {T} \right\| (7) \\ \leq \sum_ {k = i} ^ {n} \left(\frac {1}{\lambda^ {(i)}} \prod_ {j = 1} ^ {k} \lambda^ {(j)}\right) L ^ {(k)} \left\| W _ {1} ^ {(k)} - W _ {2} ^ {(k)} \right\| (8) \\ \leq \left(\prod_ {j = 1} ^ {i - 1} \lambda^ {(j)}\right) L ^ {(i)} \left\| W _ {1} ^ {(i)} - W _ {2} ^ {(i)} \right\| (9) \\ \end{array}
$$ + +We get the equality in (6) since every $W^{(j)}$ with $j \neq i$ is held fixed in the computation of the block-wise Lipschitz constant of $W^{(i)}$ . Based on the triangle inequality of the norm, we get (7) from (6). We get (8) from (7) based on the inequality $\| WV \| \leq \| W \| \| V \|$ and the smoothness assumption for case I in (5).
Finally, since we are evaluating the block-wise Lipschitz constant for $W^{(i)}$ , we have $W_1^{(k)} = W_2^{(k)}$ for $k \neq i$ , which leads to the final inequality (9). + +# C.4 PROOF OF THEOREM 4.3 + +Similarly, we assume the gradient variance of case I is bounded as + +$$
\mathbb {E} \left\| \frac {\partial f}{\partial W ^ {(i)}} - \mathbb {E} \frac {\partial f}{\partial W ^ {(i)}} \right\| ^ {2} \leq (\sigma^ {(i)}) ^ {2} \tag {10}
$$ + +The gradient variance of case II is then bounded by + +$$
\begin{array}{l} \mathbb {E} \left\| \frac {\partial \widehat {f}}{\partial W ^ {(i)}} - \mathbb {E} \frac {\partial \widehat {f}}{\partial W ^ {(i)}} \right\| ^ {2} = \mathbb {E} \left\| \sum_ {k = i} ^ {n} \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \left(\frac {\partial f}{\partial W ^ {(k)}} - \mathbb {E} \frac {\partial f}{\partial W ^ {(k)}}\right) \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)}\right) ^ {T} \right\| ^ {2} (11) \\ \leq n \mathbb {E} \sum_ {k = i} ^ {n} \left\| \left(\prod_ {j = i + 1} ^ {k} W ^ {(j)}\right) ^ {T} \left(\frac {\partial f}{\partial W ^ {(k)}} - \mathbb {E} \frac {\partial f}{\partial W ^ {(k)}}\right) \left(\prod_ {j = 1} ^ {i - 1} W ^ {(j)}\right) ^ {T} \right\| ^ {2} (12) \\ \leq n \sum_ {k = i} ^ {n} \left(\frac {\sigma^ {(k)}}{\lambda^ {(i)}} \prod_ {j = 1} ^ {k} \lambda^ {(j)}\right) ^ {2} (13) \\ \end{array}
$$ + +We get (12) from (11) based on the Cauchy-Schwarz inequality. Based on the inequality $\| W V \| \leq \| W \| \| V \|$ and the assumption of bounded gradient variance for case I in (10), we get the final inequality.
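Both matrix inequalities invoked in these proofs, the submultiplicativity $\|WV\| \leq \|W\|\|V\|$ used in (8) and (13) and the Cauchy-Schwarz step from (11) to (12), are easy to check numerically on random matrices; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = [rng.standard_normal((n, n)) for _ in range(4)]

# Submultiplicativity of the 2-norm: ||WV|| <= ||W|| ||V||.
Wm, Vm = A[0], A[1]
lhs = np.linalg.norm(Wm @ Vm, 2)
rhs = np.linalg.norm(Wm, 2) * np.linalg.norm(Vm, 2)

# Cauchy-Schwarz form used to pass from (11) to (12):
# ||sum_k A_k||^2 <= n * sum_k ||A_k||^2 for n summands.
sum_norm_sq = np.linalg.norm(sum(A), 2) ** 2
bound = len(A) * sum(np.linalg.norm(M, 2) ** 2 for M in A)
```

Both inequalities hold for any choice of matrices, so the check passes for every random draw, not just this seed.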
\ No newline at end of file diff --git a/understandingarchitectureslearntbycellbasedneuralarchitecturesearch/images.zip b/understandingarchitectureslearntbycellbasedneuralarchitecturesearch/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e863faf86e210cfb3705efc6708499ed6e6c416e --- /dev/null +++ b/understandingarchitectureslearntbycellbasedneuralarchitecturesearch/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92d53e00cd49d09fd2f9cc43e639d899d2a84613c39edc7c00859eb2682196f7 +size 1594894 diff --git a/understandingarchitectureslearntbycellbasedneuralarchitecturesearch/layout.json b/understandingarchitectureslearntbycellbasedneuralarchitecturesearch/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ca0e9fa523ca94821ee4ef9822a3de1f6b543d26 --- /dev/null +++ b/understandingarchitectureslearntbycellbasedneuralarchitecturesearch/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4685749f3eed733f5d2f05121b12815e3ef3861849773e6c57f3db95ae62cc8f +size 977103 diff --git a/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_content_list.json b/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4d5a8ed4c08b65cf48ba7b4ea20c242aa8e0226c --- /dev/null +++ b/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:490c05fde4250f67da7e9178ed9f21f9e58c203b5580d6984ba3f7e93725dc22 +size 127751 diff --git a/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_model.json b/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..df983487c53b22a19bafcd5f179df11eed8789a8 --- /dev/null +++ b/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b6f3f5c4e724ff828984926164ea7067af02c8c75b5123d397b842077198698 +size 146180 diff --git a/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_origin.pdf b/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ec9e32c832ae4e2ab36861d7e365fe62cad87ef3 --- /dev/null +++ b/understandinggeneralizationinrecurrentneuralnetworks/80e056e5-579d-4686-8a9e-b71b0587793d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a3232b3926fd9bd23919a0deabc6eb7ad2313c284a2d8be8d38bc39a3cf202ed +size 338693 diff --git a/understandinggeneralizationinrecurrentneuralnetworks/full.md b/understandinggeneralizationinrecurrentneuralnetworks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b68dc1d29438e9b853c0a4e9456db082889ae555 --- /dev/null +++ b/understandinggeneralizationinrecurrentneuralnetworks/full.md @@ -0,0 +1,646 @@ +# UNDERSTANDING GENERALIZATION IN RECURRENT NEURAL NETWORKS + +Zhuozhuo Tu, Fengxiang He, Dacheng Tao + +UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering + +The University of Sydney + +Darlington, NSW 2008, Australia + +zhtu3055@uni.sydney.edu.au, {fengxiang.he,dacheng.tao}@sydney.edu.au + +# ABSTRACT + +In this work, we develop the theory for analyzing the generalization performance of recurrent neural networks. We first present a new generalization bound for recurrent neural networks based on matrix 1-norm and Fisher-Rao norm. The definition of Fisher-Rao norm relies on a structural lemma about the gradient of RNNs. 
This new generalization bound assumes that the covariance matrix of the input data is positive definite, which might limit its use in practice. To address this issue, we propose to add random noise to the input data and prove a generalization bound for training with random noise, which is an extension of the former one. Compared with existing results, our generalization bounds have no explicit dependency on the size of networks. We also discover that the Fisher-Rao norm for RNNs can be interpreted as a measure of gradient, and incorporating this gradient measure not only tightens the bound, but also allows us to build a relationship between generalization and trainability. Based on the bound, we theoretically analyze the effect of the covariance of features on the generalization of RNNs and discuss how weight decay and gradient clipping in training can help improve generalization. + +# 1 INTRODUCTION + +The recurrent neural network (RNN) is a neural sequence model that has achieved state-of-the-art performance on numerous tasks, including natural language processing (Yang et al., 2018; Mikolov & Zweig, 2012), speech recognition (Chiu et al., 2018; Graves, 2013) and machine translation (Wu et al., 2016; Kalchbrenner & Blunsom, 2013). Unlike feedforward neural networks, RNNs allow connections among hidden units associated with a time delay. Through these connections, RNNs can maintain a "memory" that summarizes the past sequence of inputs, enabling them to capture correlations between temporally distant events in the data. + +RNNs are very powerful, and empirical studies have shown that they have very good generalization properties. For example, Graves (2013) showed that deep LSTM RNNs achieved a test error of $17.7\%$ on the TIMIT phoneme recognition benchmark after training with only 462 speech samples. Despite the popularity of RNNs in practice, their theory is still not well understood.
A number of recent works have sought to shed light on the effective representational properties of recurrent networks trained in practice. For example, Oymak (2018) studied the state equation of recurrent neural networks and showed that SGD can efficiently learn the unknown dynamics from few observations under proper assumptions. Miller & Hardt (2019) tried to explain why feed-forward neural networks are competitive with recurrent networks in practice. They identified stability as a necessary condition and proved that stable recurrent neural networks are well approximated by feed-forward networks for the purposes of both inference and training by gradient descent. Despite the impressive progress in understanding the training behavior of RNNs, there is no generalization guarantee in these works. + +Understanding generalization performance in machine learning has been a central problem for many years and has revived in recent years with the advent of deep learning. One classical approach to proving generalization bounds is via notions of complexity. For deep neural networks, numerous complexity measures have been proposed to capture the generalization behavior, such as the VC dimension (Harvey et al., 2017) and norm-based capacity including the spectral norm (Bartlett et al., 2017; Neyshabur et al., 2019), Frobenius norm (Neyshabur et al., 2015b;a; 2018) and $l_{p}$ -path norm (Neyshabur et al., 2015b; Bartlett & Mendelson, 2002; Golowich et al., 2018). These existing norm-based complexity measures depend explicitly on the number of hidden units of the network and thus cannot explain why neural networks generalize so well in practice, despite operating in an overparametrized setting (Zhang et al., 2017). Neyshabur et al. (2019) proved generalization bounds for two-layer ReLU feedforward networks, which decrease with an increasing number of hidden units in the network. However, their results only apply to two-layer ReLU networks and some specific experiments.
More recently, a new generalization bound based on the Fisher-Rao norm was proposed (Liang et al., 2017). This notion of Fisher-Rao norm is motivated by information geometry and has good invariance properties. However, they proved the bound only for deep linear neural networks. There are also some works on the generalization of recurrent neural networks (Zhang et al., 2018; Chen et al., 2019; Allen-Zhu & Li, 2019). However, these bounds also depend on the size of the networks, which makes them vacuous for very large neural networks. + +Our main contributions are summarized as follows. + +- We define the Fisher-Rao norm for RNNs based on their gradient structure and derive a new Rademacher complexity bound and generalization bound for recurrent neural networks based on the Fisher-Rao norm and matrix 1-norm. In contrast to existing results such as spectral norm-based bounds, our bound has no explicit dependence on the size of networks. +- We prove a generalization bound for RNNs when training with random noise. Our bound applies to general types of noise and can potentially explain the effect of noise training on the generalization of recurrent neural networks, as demonstrated by our empirical results. +- We propose a new technique to decompose RNNs with ReLU activation into a sum of a linear network and difference terms. As a result, each term in the decomposition can be treated independently and easily when estimating the Rademacher complexity. This decomposition technique can potentially be applied to other neural network architectures such as convolutional neural networks, which might be of independent interest. + +The remainder of this paper is structured as follows. We define the problem and notations in Section 2. The notion of Fisher-Rao norm for RNNs is introduced in Section 3.1. We prove the generalization bound for RNNs in Section 3.2, and the generalization bound for training with random noise is derived in Section 3.3.
Section 3.4 gives a detailed analysis of our generalization bound. Finally, we conclude and discuss future directions. + +# 2 PRELIMINARIES + +We focus on vanilla RNNs with ReLU activation. Let $U \in R^{m \times d}$ , $V \in R^{k \times m}$ and $W \in R^{m \times m}$ be the weight matrices. Given the input sequence $x = (x_{1}, x_{2}, \dots, x_{L}) \in R^{Ld}$ where each $x_{i} \in R^{d}$ and $L$ is the input sequence length, the vanilla RNN can be described as follows. + +$$
g _ {t} = U x _ {t} + W h _ {t - 1},
$$ + +$$
h _ {t} = \rho (g _ {t}), \tag {1}
$$ + +$$
y _ {t} = V h _ {t},
$$ + +where $g_{t}$ and $h_t \in R^m$ represent the input and output of the hidden layer at step $t$ , $\rho(\cdot)$ is the ReLU function and $y_{t} \in R^{k}$ denotes the output value at step $t$ . + +For simplicity, in this paper, we only consider the final output $y_{L}$ . We assume that data $(x,y)$ is drawn i.i.d. from some unknown distribution $\mathcal{D}$ over $R^{Ld} \times \mathcal{Y}$ where $\mathcal{Y}$ represents the label space $\{1,2,\dots ,k\}$ . The RNNs above define a mapping $y_{L}(x)$ from $R^{Ld} \to R^{k}$ , where $k$ is the number of classes. We convert $y_{L}(x)$ to a classifier by selecting the output coordinate with the largest magnitude, meaning + +$$
x \rightarrow \operatorname {a r g m a x} _ {i} [ y _ {L} (x) ] _ {i},
$$ + +where $\left[\cdot\right]_i$ represents the $i$ -th element of a vector. This naturally leads to the definition of the margin $\mathcal{M}_{y_L}(x,y)$ of the output $y_{L}$ at a labeled example $(x,y)$ : + +$$
\mathcal {M} _ {y _ {L}} (x, y) = [ y _ {L} (x) ] _ {y} - \max _ {y ^ {\prime} \neq y} [ y _ {L} (x) ] _ {y ^ {\prime}}.
$$ + +Thus, $y_{L}$ misclassifies $(x,y)$ if and only if $\mathcal{M}_{y_L}(x,y)\leq 0$ . The quality of the prediction made by $y_{L}$ is measured by the expected risk defined as + +$$
\mathbb {E} _ {(x, y) \sim \mathcal {D}} \big [ \mathbb {1} _ {\mathcal {M} _ {y _ {L}} (x, y) \leq 0} \big ].
$$

Since the underlying distribution $\mathcal{D}$ is unknown to us, we instead consider the empirical error on sample data given by

$$
\frac{1}{n} \sum_{i=1}^{n} \left( \mathbb{1}_{\mathcal{M}_{y_L}(x_i, y_i) \leq \alpha} \right).
$$

The generalization error is then the difference between the expected risk and the empirical risk, defined as

$$
\mathbb{E}_{(x, y) \sim \mathcal{D}} \big[ \mathbb{1}_{\mathcal{M}_{y_L}(x, y) \leq 0} \big] - \frac{1}{n} \sum_{i=1}^{n} \big( \mathbb{1}_{\mathcal{M}_{y_L}(x_i, y_i) \leq \alpha} \big).
$$

Our goal in this paper is to study the generalization error of RNNs theoretically.

To establish the generalization bound, some additional notation is necessary. For a vector, we denote the $l_p$ norm by $\|v\|_p = (\sum |v_i|^p)^{1/p}$ and the $l_\infty$ norm by $\|v\|_\infty = \max |v_i|$. For a matrix, we denote the matrix $p$-norm by $\|A\|_p = \max_{\|x\|_p = 1} \|Ax\|_p$, the matrix 1-norm by $\|A\|_1 = \max_j \{\sum_i |a_{ij}|\}$, and the Frobenius norm by $\|A\|_F^2 = \mathrm{trace}(AA^T)$. The smallest eigenvalue of a matrix $A$ is denoted by $\lambda_{min}(A)$. The activation function $\rho$ and its derivative $\rho'$ are applied entrywise, i.e., $\rho(A) = (\rho(a_{ij}))_{ij}$ and $\rho'(v) = (\rho'(v_i))_i$. We denote $c = (L+1, L, \dots, 2)^T$, $\eta(\theta) = [V\mathrm{diag}(\rho'(g_L)) \cdots W\mathrm{diag}(\rho'(g_1)) U x_1, V\mathrm{diag}(\rho'(g_L)) \cdots W\mathrm{diag}(\rho'(g_2)) U x_2, \dots, V\mathrm{diag}(\rho'(g_L)) U x_L] \in R^{k \times L}$ and $\tau(\theta) = (V W^{L-1} U x_1, V W^{L-2} U x_2, \dots, V U x_L)$, where $\theta = (U, W, V)$ and $\mathrm{diag}$ converts a vector into a diagonal matrix.

# 3 MAIN RESULT

In this section, we prove a generalization bound for RNNs with ReLU activation. Our new bound is based on the Fisher-Rao norm and the matrix 1-norm. We first define the Fisher-Rao norm for RNNs.
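As a concrete illustration, the model (1), the margin, and the empirical margin error can be sketched in a few lines of NumPy. Every size, weight matrix, and data point below is an illustrative stand-in, not part of the paper's actual experimental setup:

```python
import numpy as np

def rnn_forward(U, W, V, xs):
    """Vanilla ReLU RNN of eq. (1): g_t = U x_t + W h_{t-1}, h_t = rho(g_t); returns y_L = V h_L."""
    h = np.zeros(W.shape[0])
    for x in xs:
        h = np.maximum(U @ x + W @ h, 0.0)  # h_t = rho(g_t), rho = ReLU
    return V @ h

def margin(y_out, y):
    """M_{y_L}(x, y) = [y_L(x)]_y - max_{y' != y} [y_L(x)]_{y'}."""
    return y_out[y] - np.delete(y_out, y).max()

# Illustrative sizes: d inputs, m hidden units, k classes, sequence length L, n samples.
rng = np.random.default_rng(0)
d, m, k, L, n = 3, 8, 4, 6, 50
U = rng.normal(size=(m, d))
W = 0.2 * rng.normal(size=(m, m))
V = rng.normal(size=(k, m))
data = [([rng.normal(size=d) for _ in range(L)], int(rng.integers(k))) for _ in range(n)]

# Empirical margin error at level alpha: the indicator average defined above.
alpha = 0.5
emp_err = np.mean([margin(rnn_forward(U, W, V, xs), y) <= alpha for xs, y in data])
```

A sample is misclassified exactly when its margin is non-positive, matching the argmax classifier above.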
# 3.1 FISHER-RAO NORM FOR RNNS

We adapt the notion of Fisher-Rao norm to recurrent neural networks. To begin with, we establish the following structural result for RNNs.

Lemma 1. Given an input $x = (x_1, x_2, \dots, x_L)$, consider the recurrent neural network in (1). We have the identity

$$
\sum_{a,b} \frac{\partial y_L}{\partial v_{ab}} v_{ab} + \sum_{i,j} \frac{\partial y_L}{\partial w_{ij}} w_{ij} + \sum_{p,q} \frac{\partial y_L}{\partial u_{pq}} u_{pq} = \eta(\theta) c.
$$

The notion of Fisher-Rao norm is motivated by the Fisher-Rao metric of information geometry and is defined as follows.

Definition 1 ((Liang et al., 2017), Definition 2). The Fisher-Rao norm of a parameter $\theta$ is defined as

$$
\|\theta\|_{fr}^2 := \langle \theta, I(\theta)\theta \rangle,
$$

where $I(\theta) = \mathbb{E}(\nabla l(y_{L\theta}(x), y) \otimes \nabla l(y_{L\theta}(x), y))$ and $l(\cdot,\cdot)$ is the loss function.

The following lemma gives the explicit formula of the Fisher-Rao norm for RNNs. We can see that the notion of Fisher-Rao norm relies mainly on the gradient structure of RNNs.

Lemma 2. Assume that the loss function $l(\cdot,\cdot)$ is smooth in the first argument. Then the following identity holds for the RNN in (1):

$$
\|\theta\|_{fr}^2 = \mathbb{E}\Big( \Big\langle \eta(\theta) c, \frac{\partial l(y_{L\theta}(x), y)}{\partial y_{L\theta}} \Big\rangle^2 \Big).
$$

Remark 1. We observe that each term $V\mathrm{diag}(\rho'(g_L)) \dots W\mathrm{diag}(\rho'(g_i)) U x_i$ in $\eta(\theta)$ is actually the gradient component in backpropagation through time (BPTT). Therefore, the Fisher-Rao norm can be regarded as a measure of the gradient. As will be shown later, we can build a relationship between generalization and trainability in RNNs via the Fisher-Rao norm.
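The identity of Lemma 1 can be checked numerically: by the chain rule, its left-hand side equals the derivative at $t = 1$ of $t \mapsto y_L(tU, tW, tV)$, which a central difference approximates. A sketch under illustrative random weights and inputs (all sizes and values here are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k, L = 3, 5, 2, 4  # illustrative sizes
U = rng.normal(size=(m, d))
W = 0.3 * rng.normal(size=(m, m))
V = rng.normal(size=(k, m))
xs = [rng.normal(size=d) for _ in range(L)]

def forward(U, W, V, xs):
    """Vanilla ReLU RNN of eq. (1); returns y_L and the pre-activations g_1, ..., g_L."""
    h, gs = np.zeros(W.shape[0]), []
    for x in xs:
        g = U @ x + W @ h
        gs.append(g)
        h = np.maximum(g, 0.0)
    return V @ h, gs

y_L, gs = forward(U, W, V, xs)

# Right-hand side eta(theta) c of Lemma 1: the column for step t is
# V diag(rho'(g_L)) W ... diag(rho'(g_t)) U x_t, weighted by c_t = L + 2 - t.
rhs = np.zeros(k)
for i in range(L):                       # 0-based i corresponds to step t = i + 1
    v = (gs[i] > 0) * (U @ xs[i])        # diag(rho'(g_t)) U x_t
    for j in range(i + 1, L):
        v = (gs[j] > 0) * (W @ v)        # diag(rho'(g_j)) W (...)
    rhs += (L + 1 - i) * (V @ v)         # weight c_t = L + 2 - t = L + 1 - i

# Left-hand side: sum_theta (dy_L / dtheta) theta = d/dt y_L(t * theta) at t = 1,
# approximated by a central difference (valid away from ReLU kinks).
eps = 1e-6
yp, _ = forward((1 + eps) * U, (1 + eps) * W, (1 + eps) * V, xs)
ym, _ = forward((1 - eps) * U, (1 - eps) * W, (1 - eps) * V, xs)
lhs = (yp - ym) / (2 * eps)              # should match rhs up to O(eps^2)
```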
For the linear activation function and margin loss $l(y_{L\theta}(x), y) = \Phi_\alpha(\mathcal{M}_{y_L}(x, y))$, where $\alpha > 0$ is the margin parameter, one might upper bound the Fisher-Rao norm in Lemma 2 by

$$
\|\theta\|_{fr}^2 \leq \frac{4}{\alpha^2} \mathbb{E}\left( \max_i [(\tau(\theta) c)_i]^2 \right),
$$

since $\left\langle \tau(\theta) c, \frac{\partial l(y_{L\theta}(x), y)}{\partial y_{L\theta}} \right\rangle^2 \leq \frac{4}{\alpha^2} \max_i [(\tau(\theta) c)_i]^2$ by the definition of $\mathcal{M}_{y_L}(x, y)$ and the Lipschitz property of $\Phi_\alpha(\cdot)$. We define this upper bound as

$$
\|\theta\|_{fs}^2 := \mathbb{E}\left( \max_i [(\tau(\theta) c)_i]^2 \right), \tag{2}
$$

and still call it the "Fisher-Rao norm" in this paper, slightly abusing the terminology, as the two are equivalent for $k = 1$. In the rest of the paper, we will use this Fisher-Rao norm $\|\cdot\|_{fs}$ to derive a generalization bound for RNNs.

# 3.2 GENERALIZATION BOUND FOR RNNS

We use the matrix 1-norm and the Fisher-Rao norm together to derive a generalization bound for RNNs. Since it is very challenging to bound the Rademacher complexity of ReLU networks directly in terms of the Fisher-Rao norm, we consider decomposing the ReLU network into the sum of a linear network and a difference term, i.e., $y_L = \psi(\theta) x + (y_L - \psi(\theta) x)$. For the linear network part $\psi(\theta) x$, the Rademacher complexity can be bounded directly by the Fisher-Rao norm. For the difference term $(y_L - \psi(\theta) x)$, we further decompose it into a sum of simpler terms and then upper bound the Rademacher complexity of these simpler terms by the matrix 1-norm. We first give the result for the linear network part.

Lemma 3.
Define $\mathcal{F}_r := \{x \to [\psi(\theta) x]_y : \|\theta\|_{fs} \leq r, y \in \mathcal{Y}\}$, where $x \in R^{Ld}$ and $\psi(\theta) := (V W^{L-1} U, V W^{L-2} U, \dots, V U)$. For any data $x_1, x_2, \dots, x_n$ drawn i.i.d. from the distribution $\mathcal{D}$, collect them as columns of a matrix $X \in R^{Ld \times n}$. Then we have

$$
\hat{\mathfrak{R}}_n(\mathcal{F}_r) \leq \frac{r \|X\|_F}{2n} \sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(x x^T))}},
$$

assuming that $\mathbb{E}(x x^T)$ is positive definite.

Remark 2. If $\mathbb{E}(x) = 0$, then $\mathbb{E}(x x^T)$ is the covariance matrix of the random variable $x$.

Remark 3. We should mention that our assumption that $\mathbb{E}(x x^T)$ is positive definite is not very restrictive and usually holds in practice. For example, when $x$ is a continuous random variable, we can prove that $\mathbb{E}(x x^T)$ is positive definite as follows. Suppose that $x$ is a continuous random variable supported on the $n$-dimensional space $X \subset R^n$. If there exists $u \in R^n$ such that $u^T \mathbb{E}(x x^T) u = 0$, then for any $x \in X$ we have $u^T x = 0$, i.e., $u \perp X$. Since $X$ is $n$-dimensional, the only such $u$ is $u = 0$. Therefore, by definition, $\mathbb{E}(x x^T)$ is positive definite. As we will show in Section 3.3, this assumption can be removed, and a more general generalization bound will be presented.

Now we bound the Rademacher complexity of the difference term $y_L - \psi(\theta) x$. With a slight abuse of notation, given input data $x_1, x_2, \dots, x_n \in R^{Ld}$, the corresponding $g_1, g_2, \dots, g_n \in R^{Lm}$ and $h_1, h_2, \dots, h_n \in R^{Lm}$ are calculated by (1).
We collect all input data as a matrix denoted by $X$, all input data at time $t$ as a matrix denoted by $X_t$, all inputs of the hidden layer at time $t$ as a matrix denoted by $G_t$, and all outputs of the hidden layer at time $t$ as a matrix denoted by $H_t$, where $X \in R^{Ld \times n}$, $X_t \in R^{d \times n}$, $G_t \in R^{m \times n}$, $H_t \in R^{m \times n}$ and $t = 1, \dots, L$. The difference term can be decomposed by the following lemma.

Lemma 4. Define $H_t'' := H_t - G_t$. Then the following equality holds:

$$
V H_L - \psi(\theta) X = \sum_{i=1}^{L} V W^{L-i} H_i''.
$$

To bound the Rademacher complexity of each term in the above decomposition, we need a technical lemma given as follows.

Lemma 5. For any $p \geq 1$, $\|H_t''\|_p \leq m^{\frac{1}{p}(1 - \frac{1}{p})} n^{\frac{1}{p}(1 - \frac{1}{p})} \|G_t\|_p$.

As we will see, the operator norm in Lemma 5 will be instantiated for the case $p = 1$. The use of $\|\cdot\|_1$ helps avoid the appearance of the dimension $m$ when upper bounding the Rademacher complexity. It also guarantees that the Rademacher complexity has a convergence rate of $\mathcal{O}(1/n)$. The upper bound for the Rademacher complexity of these individual terms is given by the following lemma.

Lemma 6. Let $\Omega := \{\theta = (U, W, V) : \|V^T\|_1 \leq \beta_V, \|W^T\|_1 \leq \beta_W, \|U^T\|_1 \leq \beta_U\}$. Then for any $i = 1, \dots, L$, we have

$$
\mathbb{E}_\sigma \big( \sup_{\theta \in \Omega, y \in \mathcal{Y}} \frac{1}{n} [V W^{L-i} H_i'']_y \, \sigma \big) \leq \frac{1}{n} \beta_V \beta_U \sum_{j=1}^{i} \beta_W^{L-j} \|X_j^T\|_1,
$$

where $\sigma = (\sigma_1, \sigma_2, \dots, \sigma_n)^T$ is a vector of i.i.d. Rademacher random variables and $[\cdot]_y$ represents the $y$-th row of the matrix.

We are now ready to put the ingredients together to prove our first theorem.
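Before stating the theorem, the decomposition of Lemma 4 is easy to verify numerically. The following sketch (all sizes, weights, and data are illustrative stand-ins) checks the identity on a random batch:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, k, L, n = 3, 5, 2, 4, 7  # illustrative sizes
U = rng.normal(size=(m, d))
W = 0.3 * rng.normal(size=(m, m))
V = rng.normal(size=(k, m))
Xt = [rng.normal(size=(d, n)) for _ in range(L)]  # X_t: all inputs at time t, as columns

# Run the recurrence (1) on the whole batch: G_t = U X_t + W H_{t-1}, H_t = relu(G_t).
H = np.zeros((m, n))
Gs, Hs = [], []
for t in range(L):
    G = U @ Xt[t] + W @ H
    H = np.maximum(G, 0.0)
    Gs.append(G)
    Hs.append(H)

# psi(theta) X = sum_t V W^{L-t} U X_t  (the linear-network part).
psiX = sum(V @ np.linalg.matrix_power(W, L - 1 - t) @ U @ Xt[t] for t in range(L))

# Lemma 4: V H_L - psi(theta) X = sum_i V W^{L-i} H_i'' with H_i'' = H_i - G_i.
rhs = sum(V @ np.linalg.matrix_power(W, L - 1 - i) @ (Hs[i] - Gs[i]) for i in range(L))
lhs = V @ Hs[-1] - psiX
```

Unrolling $H_t = G_t + H_t''$ through the recurrence gives the identity exactly, which is what the two sides reproduce.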
+ +Theorem 1 (Rademacher complexity of RNNs). Let $\overline{\Omega} := \{\theta = (U, W, V) : ||V^T||_1 \leq \beta_V, ||W^T||_1 \leq \beta_W, ||U^T||_1 \leq \beta_U, ||\theta||_{fs} \leq r\}$ . Then, the empirical Rademacher complexity of RNNs with ReLU can be bounded as follows + +$$ +\mathbb {E} _ {\sigma} \Big (\sup _ {\theta \in \overline {{\Omega}}, y \in \mathcal {Y}} \frac {1}{n} \sum_ {i = 1} ^ {n} [ y _ {L \theta} (x _ {i}) ] _ {y} \sigma_ {i} \Big) \leq \frac {r | | X | | _ {F}}{2 n} \sqrt {\frac {1}{\lambda_ {m i n} (\mathbb {E} (x x ^ {T}))}} + \frac {1}{n} \beta_ {V} \beta_ {U} | | X ^ {T} | | _ {1} \Lambda , +$$ + +where $\Lambda := \frac{1}{1 - \beta_W} \left( \frac{1 - \beta_W^L}{1 - \beta_W} - L \beta_W^L \right)$ if $\beta_W \neq 1$ and $\Lambda := \frac{L + L^2}{2}$ for $\beta_W = 1$ . + +To establish the generalization bound for RNNs, we need the following classical results for multiclass margin bounds. + +Lemma 7 ((Kuznetsov et al., 2015), Theorem 2). Let $H \subseteq \mathbb{R}^{\mathcal{X} \times \mathcal{Y}}$ be a hypothesis set with $\mathcal{Y} = \{1, 2, \dots, k\}$ . Fix $\alpha > 0$ . Then, for any $\delta > 0$ , with probability at least $1 - \delta$ , the following multi-class classification generalization bound holds for all $h \in H$ : + +$$ +R (h) \leq \frac {1}{n} \sum_ {i = 1} ^ {n} \Phi_ {\alpha} (\mathcal {M} _ {h} (x _ {i}, y _ {i})) + \frac {4 k}{\alpha} \hat {\Re} _ {n} (\Pi_ {1} (H)) + 3 \sqrt {\frac {\log \frac {2}{\delta}}{2 n}}, +$$ + +where $\Pi_1(H) = \{x\to h(x,y):y\in \mathcal{Y},h\in H\}$ . + +The generalization bound for RNNs follows from combining Theorem 1 and Lemma 7. + +Theorem 2. 
Fix the margin parameter $\alpha$. Then for any $\delta > 0$, with probability at least $1 - \delta$, the following holds for every RNN whose weight matrices $\theta = (U, W, V)$ satisfy $\|V^T\|_1 \leq \beta_V$, $\|W^T\|_1 \leq \beta_W$, $\|U^T\|_1 \leq \beta_U$ and $\|\theta\|_{fs} \leq r$:

$$
\begin{array}{r l} \mathbb{E}[\mathbb{1}_{\mathcal{M}_{y_L}(x, y) \leq 0}] \leq & \frac{1}{n} \sum \mathbb{1}_{\mathcal{M}_{y_L}(x_i, y_i) \leq \alpha} + \frac{4k}{\alpha} \left( \frac{r \|X\|_F}{2n} \sqrt{\frac{1}{\lambda_{\min}(\mathbb{E}(x x^T))}} + \frac{1}{n} \beta_V \beta_U \|X^T\|_1 \Lambda \right) + \\ & 3 \sqrt{\frac{\log \frac{2}{\delta}}{2n}} \end{array}
$$

Comparison with existing results. We compare our result with the existing generalization bounds (Zhang et al., 2018; Chen et al., 2019). In comparison with the bound in Zhang et al. (2018), which is of the order $\tilde{\mathcal{O}}\left(\frac{\max\{d, m, k\} L^2 \|U\|_2 \|V\|_2 \max\{1, \|W\|_2^L\}}{\sqrt{n} \alpha}\right)$, there is no explicit appearance of the network size parameters $d$ and $m$ in our bound. As we have mentioned before, the reason that we can avoid these dimensional factors is that we use the matrix 1-norm instead of the spectral norm to upper bound the Rademacher complexity of the network. Moreover, there is always an $L^2$ factor in their bound, whereas the $L^2$ term only occurs in our bound when $\|W^T\|_1 = 1$. For the case $\|W^T\|_1 > 1$, our bound has only a linear dependence on $L$, and for the case $\|W^T\|_1 < 1$, by a simple calculation, we can show that $\Lambda \leq \frac{1}{(1 - \beta_W)^2}$ and the dependence on $L$ vanishes. Both their bound and ours contain an exponential term ($\|W\|_2^L$ in theirs and $\beta_W^L$ in ours), which makes the bounds vacuous when the corresponding norm of $W$ is greater than 1.
It should also be pointed out that our bound scales linearly with the number of classes, since we handle the multiclass case on each coordinate of a $k$-tuple of functions and pay a factor of $k$. Chen et al. (2019) also derived a generalization bound for RNNs in terms of the spectral norm and the total number of parameters of the network by using a covering number analysis. Since their work assumed that the activation function in the hidden layers was bounded, rather than the ReLU activation function considered in our paper, their bound is not directly comparable to ours, and we do not make a comparison here due to the page limit. We should emphasize that our proof technique is entirely different from the PAC-Bayes approach (Zhang et al., 2018) and the covering number analysis (Chen et al., 2019). In particular, we work on the Rademacher complexity of RNNs directly, with no invocation of complicated tools such as covering numbers, which makes our analysis conceptually much simpler. Our proof technique also brings an additional benefit: in the next section, we will use it to derive a generalization bound for RNNs when training with random noise.

# 3.3 GENERALIZATION BOUND FOR TRAINING WITH RANDOM NOISE

The generalization bound in Theorem 2 requires the input covariance matrix $\mathbb{E}(x x^T)$ to be positive definite and would become very poor when the smallest eigenvalue is close to 0, which greatly limits the power of our bound. To address this issue, we consider adding random noise to the input data. We notice that after adding independent random noise with mean 0 and variance $\sigma_\epsilon^2$, the term $\mathbb{E}(x x^T)$ in the bound becomes $\mathbb{E}((x + \epsilon)(x + \epsilon)^T)$, and the smallest eigenvalue of $\mathbb{E}((x + \epsilon)(x + \epsilon)^T)$ is $\lambda_{min}(\mathbb{E}(x x^T)) + \sigma_\epsilon^2$, which is at least $\sigma_\epsilon^2$.
Therefore, our bound is still applicable even when the covariance matrix of the original input data is rank-deficient. Injecting noise variables has been widely used in recurrent neural networks as a regularization technique (Bayer et al., 2013; Zaremba et al., 2014; Dieng et al., 2018; Gal & Ghahramani, 2016). For example, Bayer et al. (2013) claimed that conventional dropout did not work well with RNNs because the recurrence amplified noise, which in turn hurt learning. To fix this problem, Zaremba et al. (2014) proposed to inject noise only into the input and output of RNNs. Although their method greatly reduced overfitting on a variety of tasks, no generalization guarantee was provided. In this section, we present a generalization bound for RNNs with noise training. For simplicity, we assume that the noise is drawn i.i.d. from a Gaussian distribution with zero mean and variance $\sigma_\epsilon^2$. Let $\epsilon_i$ denote the $d$-dimensional Gaussian noise generated at step $i$ and $\epsilon = (\epsilon_1, \epsilon_2, \dots, \epsilon_L) \in R^{Ld}$. We collect all noise data as a matrix denoted by $X_\epsilon$. To prove the generalization bound, we need the Lipschitz property of RNNs given by the following lemma.

Lemma 8. For every RNN in (1) with weight matrices $\theta = (U, W, V)$, $y_L$ is Lipschitz with respect to $\|\cdot\|_\infty$, i.e.,

$$
\|y_L(x) - y_L(x')\|_\infty \leq \sum_i \|V^T\|_1 \|U^T\|_1 \|W^T\|_1^{L-i} \|x_i - x_i'\|_\infty
$$

for any $x = (x_1, x_2, \dots, x_L), x' = (x_1', x_2', \dots, x_L') \in R^{Ld}$.

The generalization bound for training with random noise is described as follows.

Theorem 3.
Fix the margin parameter $\alpha$. Then for any $\delta > 0$, with probability at least $1 - \delta$ over a sample $((x_1, \epsilon_1, y_1), (x_2, \epsilon_2, y_2), \dots, (x_n, \epsilon_n, y_n))$, the following holds for every RNN whose weight matrices $\theta = (U, W, V)$ satisfy $\|V^T\|_1 \leq \beta_V, \|W^T\|_1 \leq \beta_W, \|U^T\|_1 \leq \beta_U$ and $\|\theta\|_{fs} \leq r$:

$$
\begin{array}{l} \mathbb{E}[\mathbb{1}_{\mathcal{M}_{y_L}(x, y) \leq 0}] \leq \frac{1}{n} \sum \Phi_\alpha(\mathcal{M}_{y_L}(x_i + \epsilon_i, y_i)) + \frac{2}{\alpha} \sum_i \beta_V \beta_U \beta_W^{L-i} \sigma_\epsilon \sqrt{2 \log(2d)} + 3 \sqrt{\frac{\log \frac{2}{\delta}}{2n}} + \\ \frac{4k}{\alpha} \left( \frac{r \|X + X_\epsilon\|_F}{2n} \sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(x x^T)) + \sigma_\epsilon^2}} + \frac{1}{n} \beta_V \beta_U \|X^T + X_\epsilon^T\|_1 \Lambda \right) \end{array}
$$

Remark 4. The above bound can easily be extended to other kinds of noise by replacing $\sigma_\epsilon \sqrt{2 \log(2d)}$ with $\mathbb{E}_\epsilon \|\epsilon_i\|_\infty$.

Remark 5. The bound in Theorem 3 is an extension of that in Theorem 2 and can be applied even when the smallest eigenvalue of $\mathbb{E}(x x^T)$ is very close to 0. For example, when $\lambda_{min}(\mathbb{E}(x x^T)) = 1 \times 10^{-6}$, applying Theorem 2 directly might lead to a vacuous bound. But if we use Theorem 3 and choose a small noise with mean 0 and variance 0.01, we might obtain a better bound, since the term $\sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(x x^T)) + \sigma_\epsilon^2}} \leq 10$.
Notice that adding noise cannot always guarantee improved generalization, especially when $\lambda_{min}(\mathbb{E}(x x^T))$ is not small, since it incurs an additional linear term $\frac{2}{\alpha} \sum_i \beta_V \beta_U \beta_W^{L-i} \sigma_\epsilon \sqrt{2 \log(2d)}$ and might also increase other parameters in the bound such as $\|X + X_\epsilon\|_F$. Therefore we suggest adding noise only when the smallest eigenvalue of $\mathbb{E}(x x^T)$ is very small. In this case, a small noise such as $\sigma_\epsilon = 0.1$ not only greatly improves the term $\sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(x x^T))}}$ but also ensures that the extra costs $\sigma_\epsilon \sqrt{2 \log(2d)}$ and $\|X + X_\epsilon\|_F / n$ remain small, since $\|X + X_\epsilon\|_F / n \leq \|X\|_F / n + \|X_\epsilon\|_F / n$ and $\|X_\epsilon\|_F / n$ is small when $n$ is large.

Remark 6. If we remove the constraint $\|\theta\|_{fs} \leq r$, which means that we do not have any knowledge about the gradients, the generalization bounds in Theorem 2 and Theorem 3 still hold by substituting $r$ with $\beta_V \beta_U B \big( \frac{1}{(1 - \beta_W)^2} + \frac{1}{1 - \beta_W} \big)$ for $\beta_W < 1$. But with this extra gradient measure, the bound can become much tighter, especially when $\lambda_{min}(\mathbb{E}(x x^T))$ is small. Please refer to the detailed analysis in the next section.

Experiments. We now study the effect of random noise on the generalization of RNNs empirically. For simplicity, we consider the IMDB dataset, a collection of 50K movie reviews for binary sentiment classification. We use GloVe word embeddings to map each word to a 50-dimensional vector. We train vanilla RNNs with ReLU activation for sequence length $L = 100$. The corresponding smallest eigenvalue of $\mathbb{E}(x x^T)$, approximated using the full training set, is $4 \times 10^{-4}$.
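The eigenvalue quantities in this experiment can be estimated directly from data. In the sketch below, synthetic vectors stand in for the GloVe embeddings (all sizes are illustrative): a rank-deficient empirical second moment makes the denominator in Theorem 2 blow up, while adding independent noise of variance $\sigma_\epsilon^2$ shifts every eigenvalue up by exactly $\sigma_\epsilon^2$, as used in Theorem 3:

```python
import numpy as np

rng = np.random.default_rng(3)
n, D = 2000, 20          # samples, input dimension (illustrative; D plays the role of L*d)

# Data confined to a lower-dimensional subspace -> rank-deficient second-moment matrix.
basis = rng.normal(size=(D, 12))
X = rng.normal(size=(n, 12)) @ basis.T        # rows are samples x_i in R^D

Sigma = X.T @ X / n                           # empirical E(x x^T)
lam_min = np.linalg.eigvalsh(Sigma).min()     # ~0: Theorem 2's bound is vacuous here

# Independent zero-mean noise with covariance sigma^2 I shifts E(x x^T) by sigma^2 I,
# so the smallest eigenvalue becomes lam_min + sigma^2 (Theorem 3's denominator).
sigma = 0.1
lam_min_noisy = np.linalg.eigvalsh(Sigma + sigma**2 * np.eye(D)).min()
```

With $\sigma_\epsilon = 0.1$, the term $\sqrt{1/(\lambda_{min} + \sigma_\epsilon^2)}$ is capped at $1/\sigma_\epsilon = 10$ no matter how degenerate the data is.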
We add Gaussian noise to the input data in the training process with $\sigma_\epsilon = 0.1, 0.2, 0.3$ and $0.4$. The generalization error, i.e., the gap between the test error without noise and the training error with noise, for $L = 100$ and different values of $\sigma_\epsilon$ is shown in Figure 1 (results for other values of $L$ are in Appendix D). We observe that as we start injecting noise, the generalization error becomes better. But when the deviation of the noise keeps growing, the generalization error shows an increasing tendency. This behavior is consistent with the prediction made by our bound.

![](images/11974c310401029afa62eb1faab7d91bbb9548ea3579f5feb5f4395ccf1aa4ad.jpg)
Figure 1: Generalization error for training with noise (mean $\pm$ standard error averaged over 5 runs).

# 3.4 ANALYSIS OF GENERALIZATION BOUND

Our theoretical results have a number of implications for the generalization performance of RNNs, some of which have been observed in empirical studies. We summarize these implications as follows.

# 3.4.1 GENERALIZATION AND SMALLEST EIGENVALUE OF $\mathbb{E}(xx^T)$

According to our results, the generalization performance of RNNs is influenced by the smallest eigenvalue of $\mathbb{E}(x x^T)$. Since the smaller eigenvalues may contribute to high-frequency components of the input signal, our bound suggests that high-frequency information is potentially more difficult to generalize, which is consistent with intuition. There are many factors that affect the smallest eigenvalue and therefore the generalization performance of RNNs. In particular, we study the effect of the correlation between features on the generalization of RNNs. The exact answer to this problem may be complicated. Here we only make an initial attempt and claim that weaker correlation would help improve the generalization; a non-rigorous argument is given as follows.
Denote the covariance matrix $\mathbb{E}(x x^T)$ by $\Xi$, where each element $\xi_{ij}$ of $\Xi$ represents the covariance between features $i$ and $j$. Suppose that $\|\Xi - I\|_1 \leq \zeta$ with $\zeta < 1$. By the definition of the matrix 1-norm, we immediately get $|\xi_{ii} - 1| + \sum_{j \neq i} |\xi_{ij}| \leq \zeta$ for any $i$. Then, by a simple derivation, we obtain $\xi_{ii} - \sum_{j \neq i} |\xi_{ij}| \geq 1 - \zeta$ for any $i$. Applying the Gershgorin circle theorem, the smallest eigenvalue must be greater than or equal to $1 - \zeta$. Since the element $\xi_{ij}$ with $i \neq j$ represents the covariance between features $i$ and $j$, a weaker correlation between features $i$ and $j$ means a smaller value of $|\xi_{ij}|$, so a smaller $\zeta$ suffices to upper bound $\|\Xi - I\|_1$, which gives a larger lower bound on the smallest eigenvalue. Therefore the generalization bound becomes better.

# 3.4.2 GENERALIZATION AND TRAINABILITY

The generalization of RNNs also depends on the parameters $\beta_U, \beta_V, \beta_W$ and $r$, where $\beta_U, \beta_V$ and $\beta_W$ control the weight matrices and $r$ bounds the gradient measure. This has a natural relationship with the training process. The standard procedure in training RNNs is to use weight decay for regularization and gradient clipping to avoid the exploding gradient problem (Bengio et al., 1994; Pascanu et al., 2013). From the perspective of generalization, these strategies decrease the values of the parameters $\beta_U, \beta_V, \beta_W$ and $r$ and thus improve generalization. For example, if $\beta_W < 1$, we have $\Lambda \leq \frac{1}{(1 - \beta_W)^2}$, and the second term $\frac{1}{n} \beta_V \beta_U \|X^T\|_1 \Lambda$ in the generalization bound would be small when $\beta_W$ is not too close to 1.
Similarly, if $\lambda_{min}(\mathbb{E}(x x^T))$ is very small, by setting the gradient clipping value in the training procedure, we can achieve a smaller value of $r$ and thus good generalization. Therefore our bound partially explains why training RNNs in this way can achieve good performance in practice.

# 3.4.3 GENERALIZATION AND GRADIENT MEASURE

We are interested in how the gradient measure contributes to generalization. Suppose now that we only have the weights, i.e., the parameters $\beta_U, \beta_W$ and $\beta_V$, and the gradient measure parameter $r$ is unknown to us. To apply our bound, a natural idea is to infer the gradient measure parameter $r$ from the known weight parameters. An upper bound for $r$ in terms of $\beta_U, \beta_W$ and $\beta_V$ is given as follows. Under the same conditions as Theorem 2, if we further assume that the data $x$ satisfies $\|x^T\|_1 \leq B$, then by the definition of $\|\cdot\|_{fs}$ in (2), for any $y \in \mathcal{Y}$, we have

$$
\begin{array}{l} \left( (\tau(\theta) c)_y \right)^2 = \left( (L + 1) [V]_y W^{L-1} U x_1 + L [V]_y W^{L-2} U x_2 + \dots + 2 [V]_y U x_L \right)^2 \\ \leq \left( |(L + 1) [V]_y W^{L-1} U x_1| + |L [V]_y W^{L-2} U x_2| + \dots + |2 [V]_y U x_L| \right)^2 \\ \leq \left( (L + 1) \beta_V \beta_U B \beta_W^{L-1} + L \beta_V \beta_U B \beta_W^{L-2} + \dots + 2 \beta_V \beta_U B \right)^2 \\ = \left( \beta_V \beta_U B \left( \frac{\beta_W - \beta_W^L}{(1 - \beta_W)^2} + \frac{2 - (L + 1) \beta_W^L}{1 - \beta_W} \right) \right)^2 \leq \left( \beta_V \beta_U B \left( \frac{1}{(1 - \beta_W)^2} + \frac{1}{1 - \beta_W} \right) \right)^2 \end{array}
$$

for $\beta_W < 1$, and $\left( (\tau(\theta) c)_y \right)^2 \leq \left( \beta_V \beta_U B \frac{3L + L^2}{2} \right)^2$ for $\beta_W = 1$.
The above inequality holds for any $x$ and $y$. So we get $\|\theta\|_{fs} = \mathbb{E}\big( \max_i [(\tau(\theta) c)_i]^2 \big)^{1/2} \leq \beta_V \beta_U B \big( \frac{1}{(1 - \beta_W)^2} + \frac{1}{1 - \beta_W} \big)$ for $\beta_W < 1$. By replacing $r$ with $\beta_V \beta_U B \big( \frac{1}{(1 - \beta_W)^2} + \frac{1}{1 - \beta_W} \big)$, the generalization bound (3) also holds. But notice that this bound is obtained without any knowledge about the gradients. If we happen to know that the parameter $r$ is much smaller than $\beta_V \beta_U B \big( \frac{1}{(1 - \beta_W)^2} + \frac{1}{1 - \beta_W} \big)$, for example, by setting the gradient clipping value to be small in the training process, this extra gradient measure can provide us with a better generalization bound, especially when the smallest eigenvalue of $\mathbb{E}(x x^T)$ is small. Therefore the introduction of the Fisher-Rao norm can help eliminate the negative effect of $\lambda_{min}(\mathbb{E}(x x^T))$ and thus improve the generalization bound.

# 4 CONCLUSION

In this paper, we propose a new generalization bound for RNNs in terms of the matrix 1-norm and the Fisher-Rao norm, which has no explicit dependence on the size of the network. Based on the bound, we analyze the influence of the covariance of features on the generalization of RNNs and discuss how weight decay and gradient clipping in training can help improve generalization. While our bound is useful for analyzing the generalization performance of RNNs, it becomes vacuous because of an exponential term when $\|W^T\|_1 > 1$. It is of interest to obtain a tighter bound that avoids this exponential dependence. Moreover, our bound only applies to vanilla RNNs with ReLU activation, and extending the results to tanh and sigmoid activations or to other variants of RNNs such as LSTM and MGU might be an interesting topic for future research.

# ACKNOWLEDGMENTS

This work was supported by Australian Research Council Project FL-170100117.
We would like to thank Gemeng Zhang from Tulane University for helpful discussions and all the reviewers for their constructive comments. + +# REFERENCES + +Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017. +Zeyuan Allen-Zhu and Yuanzhi Li. Can sgd learn recurrent neural networks with provable generalization? arXiv preprint arXiv:1902.01028, 2019. +Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463-482, 2002. +Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pp. 6240-6249, 2017. +Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen, Sebastian Urban, and Patrick van der Smagt. On fast dropout and its applicability to recurrent networks. arXiv preprint arXiv:1311.0701, 2013. +Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157-166, 1994. +Minshuo Chen, Xingguo Li, and Tuo Zhao. On generalization bounds of a family of recurrent neural networks, 2019. URL https://openreview.net/forum?id=Skf-oo0qt7. +Chung-Cheng Chiu, Tara N Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J Weiss, Kanishka Rao, Ekaterina Gonina, et al. State-of-the-art speech recognition with sequence-to-sequence models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4774-4778. IEEE, 2018. +Adji Bousso Dieng, Rajesh Ranganath, Jaan Altosaar, and David Blei. Noisin: Unbiased regularization for recurrent neural networks. In International Conference on Machine Learning, pp. 1251-1260, 2018. +Yarin Gal and Zoubin Ghahramani. 
A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pp. 1019-1027, 2016. +Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. In Proceedings of the 31st Conference On Learning Theory, volume 75 of Proceedings of Machine Learning Research, pp. 297-299. PMLR, 2018. +Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. +Nick Harvey, Christopher Liaw, and Abbas Mehrabian. Nearly-tight vc-dimension bounds for piecewise linear neural networks. In Conference on Learning Theory, pp. 1064-1068, 2017. +Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1700-1709, 2013. +Vitaly Kuznetsov, Mehryar Mohri, and U Syed. Rademacher complexity margin bounds for learning with a large number of classes. In ICML Workshop on Extreme Classification: Learning with a Very Large Number of Labels, 2015. + +Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James Stokes. Fisher-rao metric, geometry, and complexity of neural networks. arXiv preprint arXiv:1711.01530, 2017. +Vladimir Alexandrovich Marchenko and Leonid Andreevich Pastur. Distribution of eigenvalues for some sets of random matrices. Matematicheskii Sbornik, 114(4):507-536, 1967. +Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pp. 234-239. IEEE, 2012. +John Miller and Moritz Hardt. Stable recurrent models. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Hygxb2CqKm. +Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422-2430, 2015a. 
+Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Conference on Learning Theory, pp. 1376-1401, 2015b. +Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-bayesian approach to spectrally-normalized margin bounds for neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Skz_WfbCZ. +Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BygfghAcYX. +Samet Oymak. Stochastic gradient descent learns state equations with nonlinear activations. arXiv preprint arXiv:1809.03019, 2018. +Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310-1318, 2013. +J Saniuk and I Rhodes. A matrix inequality associated with bounds on solutions of algebraic riccati and lyapunov equations. IEEE Transactions on Automatic Control, 32(8):739-740, 1987. +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. +Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. Breaking the softmax bottleneck: A high-rank RNN language model. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HkwZSG-CZ. +Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014. +Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. 2017. 
URL https://arxiv.org/abs/1611.03530.
+Jiong Zhang, Qi Lei, and Inderjit Dhillon. Stabilizing gradients for deep neural networks via efficient SVD parameterization. In International Conference on Machine Learning, pp. 5801-5809, 2018.
+
+# A PROOFS IN SECTION 3.1
+
+# A.1 PROOF OF LEMMA 1
+
+Proof. To begin with, by equation (1), we have $y_{L} = V h_{L}$ . Then the derivative of $y_{L}$ with respect to $v_{ab}$ can be calculated as
+
+$$
+\frac{\partial y_{L}}{\partial v_{ab}} = (0, 0, \dots, [h_{L}]_{b}, \dots, 0)^{T},
+$$
+
+i.e., a $k$ -dimensional vector with $a$ -th element $[h_L]_b$ and all other elements zero. Multiplying both sides by $v_{ab}$ and summing them up, we get
+
+$$
+\sum_{a, b} \frac{\partial y_{L}}{\partial v_{ab}} v_{ab} = V h_{L} = y_{L}.
+$$
+
+The derivatives of $y_{L}$ with respect to $W$ and $U$ can be derived by using the chain rule in a similar way, as follows:
+
+$$
+\begin{array}{l} \frac{\partial y_{L}}{\partial w_{ij}} = V \frac{\partial h_{L}}{\partial w_{ij}} \\ \frac{\partial h_{L}}{\partial w_{ij}} = \operatorname{diag}(\rho^{\prime}(g_{L})) (0, \dots, [h_{L-1}]_{j}, \dots, 0)^{T} + \operatorname{diag}(\rho^{\prime}(g_{L})) W \frac{\partial h_{L-1}}{\partial w_{ij}} \\ \end{array}
+$$
+
+and
+
+$$
+\begin{array}{l} \frac{\partial y_{L}}{\partial u_{pq}} = V \frac{\partial h_{L}}{\partial u_{pq}} \\ \frac{\partial h_{L}}{\partial u_{pq}} = \operatorname{diag}(\rho^{\prime}(g_{L})) (0, \dots, [x_{L}]_{q}, \dots, 0)^{T} + \operatorname{diag}(\rho^{\prime}(g_{L})) W \frac{\partial h_{L-1}}{\partial u_{pq}} \\ \end{array}
+$$
+
+where we use the property of the ReLU activation function that $\rho(z) = \rho'(z)z$ . Summing up these terms immediately gives us the following equality.
+
+$$
+\begin{array}{l} \sum_{i, j} \frac{\partial h_{L}}{\partial w_{ij}} w_{ij} + \sum_{p, q} \frac{\partial h_{L}}{\partial u_{pq}} u_{pq} \\ = \operatorname{diag}(\rho^{\prime}(g_{L})) \Big( \sum_{i, j} (0, \dots, [h_{L-1}]_{j}, \dots, 0)^{T} w_{ij} + \sum_{p, q} (0, \dots, [x_{L}]_{q}, \dots, 0)^{T} u_{pq} \Big) + \operatorname{diag}(\rho^{\prime}(g_{L})) W \Big( \sum_{i, j} \frac{\partial h_{L-1}}{\partial w_{ij}} w_{ij} + \sum_{p, q} \frac{\partial h_{L-1}}{\partial u_{pq}} u_{pq} \Big) \\ = \operatorname{diag}(\rho^{\prime}(g_{L})) (W h_{L-1} + U x_{L}) + \operatorname{diag}(\rho^{\prime}(g_{L})) W \Big( \sum_{i, j} \frac{\partial h_{L-1}}{\partial w_{ij}} w_{ij} + \sum_{p, q} \frac{\partial h_{L-1}}{\partial u_{pq}} u_{pq} \Big) \\ = h_{L} + \operatorname{diag}(\rho^{\prime}(g_{L})) W \Big( \sum_{i, j} \frac{\partial h_{L-1}}{\partial w_{ij}} w_{ij} + \sum_{p, q} \frac{\partial h_{L-1}}{\partial u_{pq}} u_{pq} \Big). \\ \end{array}
+$$
+
+For ease of exposition, define $f_{L} \coloneqq \sum_{i,j} \frac{\partial h_{L}}{\partial w_{ij}} w_{ij} + \sum_{p,q} \frac{\partial h_{L}}{\partial u_{pq}} u_{pq}$ . Then the above equality can be rewritten as
+
+$$
+f_{L} = h_{L} + \operatorname{diag}(\rho^{\prime}(g_{L})) W f_{L-1}.
+$$
+
+By induction, we have
+
+$$
+\begin{array}{c} f_{L} = h_{L} + \operatorname{diag}(\rho^{\prime}(g_{L})) W h_{L-1} + \operatorname{diag}(\rho^{\prime}(g_{L})) W \operatorname{diag}(\rho^{\prime}(g_{L-1})) W h_{L-2} + \cdots \\ \cdots + \operatorname{diag}(\rho^{\prime}(g_{L})) W \cdots \operatorname{diag}(\rho^{\prime}(g_{2})) W h_{1}.
\end{array} +$$ + +Multiplying both sides by $V$ and using some basic calculation, we can show that $Vf_{L} = Ly_{L} - \eta (\theta)(0,1,\dots ,L - 1)^{T}$ . Therefore, + +$$ +\sum_ {a, b} \frac {\partial y _ {L}}{\partial v _ {a b}} v _ {a b} + \sum_ {i, j} \frac {\partial y _ {L}}{\partial w _ {i j}} w _ {i j} + \sum_ {p, q} \frac {\partial y _ {L}}{\partial u _ {p q}} u _ {p q} = y _ {L} + V f _ {L} = (L + 1) y _ {L} - \eta (\theta) (0, 1, \dots , L - 1) ^ {T}. +$$ + +Substituting $y_{L} = \eta (\theta)(1,1,\dots ,1)^{T}$ into the above equality leads to the desired result + +$$ +\sum_ {a, b} \frac {\partial y _ {L}}{\partial v _ {a b}} v _ {a b} + \sum_ {i, j} \frac {\partial y _ {L}}{\partial w _ {i j}} w _ {i j} + \sum_ {p, q} \frac {\partial y _ {L}}{\partial u _ {p q}} u _ {p q} = \eta (\theta) c. +$$ + +![](images/c9702925cd4310d644dbf89333eeec2a3975a73650af3251f2c4e286d005649f.jpg) + +# A.2 PROOF OF LEMMA 2 + +Proof. Using the definition of Fisher-Rao norm, + +$$ +\begin{array}{l} | | \theta | | _ {f r} ^ {2} = \mathbb {E} (< \theta , \nabla l (y _ {L \theta}, y) > ^ {2}) = \mathbb {E} (< \theta , \nabla y _ {L \theta} (x) \frac {\partial l (y _ {L \theta} , y)}{\partial y _ {L \theta}} > ^ {2}) \\ = \mathbb {E} ((\theta^ {T} \nabla y _ {L \theta} (x) \frac {\partial l (y _ {L \theta} , y)}{\partial y _ {L \theta}}) ^ {2}) = \mathbb {E} (< \nabla y _ {L \theta} (x) ^ {T} \theta , \frac {\partial l (y _ {L \theta} , y)}{\partial y _ {L \theta}} > ^ {2}) \\ \end{array} +$$ + +By Lemma 1, we have $\nabla y_{L\theta}(x)^T\theta = \eta (\theta)c$ . Substituting it into the above equality gives us + +$$ +\left| \left| \theta \right| \right| _ {f r} ^ {2} = \mathbb {E} \big (\left\langle \eta (\theta) c, \frac {\partial l (y _ {L \theta} (x) , y)}{\partial y _ {L \theta}} \right\rangle^ {2} \big). +$$ + +# B PROOFS IN SECTION 3.2 + +# B.1 PROOF OF LEMMA 3 + +The proof of Lemma 3 relies on the following result in Saniuk & Rhodes (1987). + +Proposition 1. 
Let $X, Y \in R^{n \times n}$ with $Y$ symmetric and nonnegative definite. Then,
+
+$$
+\operatorname{trace}(XY) \leq ||X||_{2} \cdot \operatorname{trace}(Y).
+$$
+
+Now we are ready to prove Lemma 3.
+
+Proof. Denote $A \coloneqq \mathbb{E}\big(((L+1)x_1^T, Lx_2^T, \dots, 2x_L^T)^T((L+1)x_1^T, Lx_2^T, \dots, 2x_L^T)\big)$ . By the definition of $||\theta||_{fs}$ , for any $y \in \mathcal{Y}$ , we have
+
+$$
+\begin{array}{l} \left\| [\psi(\theta)]_{y,}^{T} \right\|_{A}^{2} \\ = [\psi(\theta)]_{y,} \, \mathbb{E}\big(((L+1)x_1^T, Lx_2^T, \dots, 2x_L^T)^T((L+1)x_1^T, Lx_2^T, \dots, 2x_L^T)\big) \, [\psi(\theta)]_{y,}^{T} \\ = \mathbb{E}\big([\psi(\theta)]_{y,}((L+1)x_1^T, Lx_2^T, \dots, 2x_L^T)^T((L+1)x_1^T, Lx_2^T, \dots, 2x_L^T)[\psi(\theta)]_{y,}^{T}\big) \\ = \mathbb{E}\big(\big[(\tau(\theta)\bar{c})_y\big]^2\big) \leq ||\theta||_{fs}^{2}, \\ \end{array}
+$$
+
+where $[\psi(\theta)]_{y,}$ represents the $y$ -th row of $\psi(\theta)$ .
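As an aside (not part of the original proof), Proposition 1 is easy to spot-check numerically. The sketch below assumes only `numpy` and draws random instances of $X$ and a symmetric nonnegative definite $Y$:

```python
import numpy as np

# Sanity check of Proposition 1: for arbitrary X and symmetric PSD Y,
# trace(X Y) <= ||X||_2 * trace(Y), where ||X||_2 is the spectral norm.
rng = np.random.default_rng(0)
n = 6
for _ in range(100):
    X = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    Y = B @ B.T                                # symmetric PSD by construction
    lhs = np.trace(X @ Y)
    rhs = np.linalg.norm(X, 2) * np.trace(Y)   # ord=2 gives the largest singular value
    assert lhs <= rhs + 1e-9
print("Proposition 1 verified on 100 random instances")
```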
On the other hand, from the definition of Rademacher complexities,

$$
+\begin{array}{l} \hat{\mathfrak{R}}_{n}(\mathcal{F}_{r}) = \mathbb{E}_{\sigma}\big(\sup_{f \in \mathcal{F}_{r}} \frac{1}{n}\sum_{i=1}^{n} f(x_{i})\sigma_{i}\big) = \mathbb{E}_{\sigma}\big(\sup_{\theta, y} \frac{1}{n}\sum_{i=1}^{n} [\psi(\theta)x_{i}]_{y}\sigma_{i}\big) = \mathbb{E}_{\sigma}\big(\sup_{\theta, y} \frac{1}{n}\sum_{i=1}^{n} [\psi(\theta)]_{y,} x_{i}\sigma_{i}\big) \\ = \mathbb{E}_{\sigma}\big(\sup_{\theta, y} \big\langle \frac{1}{n}\sum_{i=1}^{n} x_{i}\sigma_{i}, [\psi(\theta)]_{y,}^{T} \big\rangle\big) \leq \mathbb{E}_{\sigma}\big(\sup_{\theta, y} \big\|\frac{1}{n}\sum_{i=1}^{n} x_{i}\sigma_{i}\big\|_{A^{-1}} \big\|[\psi(\theta)]_{y,}^{T}\big\|_{A}\big) \\ \leq r\,\mathbb{E}_{\sigma}\big(\big\|\frac{1}{n}\sum_{i=1}^{n} x_{i}\sigma_{i}\big\|_{A^{-1}}\big) = r\,\mathbb{E}_{\sigma}\sqrt{\big\langle \big(\frac{1}{n}\sum_{i=1}^{n} x_{i}\sigma_{i}\big)\big(\frac{1}{n}\sum_{i=1}^{n} x_{i}^{T}\sigma_{i}\big), A^{-1} \big\rangle} \\ \leq r\sqrt{\mathbb{E}_{\sigma}\big\langle \big(\frac{1}{n}\sum_{i=1}^{n} x_{i}\sigma_{i}\big)\big(\frac{1}{n}\sum_{i=1}^{n} x_{i}^{T}\sigma_{i}\big), A^{-1} \big\rangle} = r\sqrt{\frac{1}{n^{2}}\big\langle \sum_{i=1}^{n} x_{i}x_{i}^{T}, A^{-1} \big\rangle} \\ = \frac{r}{n}\sqrt{\langle XX^{T}, A^{-1} \rangle} = \frac{r}{n}\sqrt{\langle XX^{T}, (C\,\mathbb{E}(xx^{T})\,C)^{-1} \rangle} = \frac{r}{n}\sqrt{\operatorname{trace}(XX^{T}C^{-1}(\mathbb{E}(xx^{T}))^{-1}C^{-1})}, \\ \end{array}
+$$
+
+where the first inequality uses the Cauchy-Schwarz inequality and $C \coloneqq \operatorname{diag}((L+1)I_d, LI_d, \dots, 2I_d)$ .
+
+Since $C^{-1}$ and $\mathbb{E}(xx^T)$ are positive definite, we have
+
+$$
+\begin{array}{l} \operatorname{trace}\left(XX^{T}C^{-1}(\mathbb{E}(xx^{T}))^{-1}C^{-1}\right) = \operatorname{trace}\left(C^{-1}(\mathbb{E}(xx^{T}))^{-1}C^{-1}XX^{T}\right) \\ \leq \left|\left|C^{-1}(\mathbb{E}(xx^{T}))^{-1}C^{-1}\right|\right|_{2}\operatorname{trace}\left(XX^{T}\right) \leq \left|\left|C^{-1}\right|\right|_{2}||(\mathbb{E}(xx^{T}))^{-1}||_{2}||C^{-1}||_{2}\operatorname{trace}\left(XX^{T}\right) \\ = \frac{1}{4}||(\mathbb{E}(xx^{T}))^{-1}||_{2}||X||_{F}^{2} \leq \frac{1}{4}\frac{||X||_{F}^{2}}{\lambda_{\min}(\mathbb{E}(xx^{T}))}, \\ \end{array}
+$$
+
+where the first inequality is by Proposition 1, the equality $||C^{-1}||_{2} = \frac{1}{2}$ holds since the smallest diagonal entry of $C$ is $2$, and the last inequality uses trace $(XX^T) = ||X||_F^2$ .
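The final trace bound above can likewise be spot-checked numerically. The sketch below is illustrative only: the positive definite matrix `M` stands in for $\mathbb{E}(xx^T)$, and the dimensions are arbitrary choices:

```python
import numpy as np

# Check: with C = diag((L+1) I_d, L I_d, ..., 2 I_d), so that ||C^{-1}||_2 = 1/2,
# trace(X X^T C^{-1} M^{-1} C^{-1}) <= ||X||_F^2 / (4 * lambda_min(M))
# for any positive definite M (playing the role of E[x x^T]).
rng = np.random.default_rng(1)
L, d = 4, 3
D = L * d                                        # dimension of the stacked input
scales = np.repeat(np.arange(L + 1, 1, -1), d)   # (L+1, ..., 2), each repeated d times
C = np.diag(scales.astype(float))
A = rng.standard_normal((D, D))
M = A @ A.T + 0.1 * np.eye(D)                    # positive definite
X = rng.standard_normal((D, 20))                 # 20 stacked samples as columns
Cinv = np.linalg.inv(C)
lhs = np.trace(X @ X.T @ Cinv @ np.linalg.inv(M) @ Cinv)
rhs = np.linalg.norm(X, 'fro') ** 2 / (4 * np.linalg.eigvalsh(M).min())
assert lhs <= rhs + 1e-9
```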
+
+Therefore,
+
+$$
+\hat{\mathfrak{R}}_{n}\left(\mathcal{F}_{r}\right) \leq \frac{r\left|\left|X\right|\right|_{F}}{2n}\sqrt{\frac{1}{\lambda_{min}\left(\mathbb{E}\left(xx^{T}\right)\right)}}.
+$$
+
+![](images/8c27e1899dbf0e5f736b89d6099db5a12b10a04b5a5e4ffaf71304d29f17ad6e.jpg)
+
+# B.2 PROOF OF LEMMA 4
+
+Proof. Denote $H_{t}^{\prime} \coloneqq UX_{t} + WH_{t - 1}^{\prime}$ and $H_1^\prime \coloneqq UX_1$ . By the definition of $H_{L}$ , we have
+
+$$
+\begin{array}{l} H_{L} - H_{L}^{\prime} = \rho\left(UX_{L} + WH_{L-1}\right) - UX_{L} - WH_{L-1}^{\prime} \\ = \rho(UX_{L} + WH_{L-1}) - UX_{L} - WH_{L-1} + W(H_{L-1} - H_{L-1}^{\prime}) = H_{L}^{\prime\prime} + W(H_{L-1} - H_{L-1}^{\prime}), \\ \end{array}
+$$
+
+which by induction gives
+
+$$
+H_{L} - H_{L}^{\prime} = H_{L}^{\prime\prime} + WH_{L-1}^{\prime\prime} + \dots + W^{L-1}(H_{1} - H_{1}^{\prime}) = \sum_{i=1}^{L} W^{L-i}H_{i}^{\prime\prime}.
+$$
+
+So the difference term can be rewritten as
+
+$$
+VH_{L} - \psi(\theta)X = VH_{L} - VH_{L}^{\prime} = \sum_{i=1}^{L} VW^{L-i}H_{i}^{\prime\prime},
+$$
+
+where the second equality uses $\psi(\theta)X = VH_L^{\prime}$ .
+
+![](images/c2023aa520f98895fbaa42a4641060ce07d19dd3e31cf96989515b43116e7236.jpg)
+
+# B.3 PROOF OF LEMMA 5
+
+Proof. Using the Riesz-Thorin theorem, we have $||H_t^{\prime \prime}||_p\leq ||H_t^{\prime \prime}||_1^{1 / p}||H_t^{\prime \prime}||_\infty^{1 - 1 / p}$ . And since $H_{t}^{\prime \prime} = \rho (G_{t}) - G_{t}$ , by the definition of the induced $L_{1}$ and $L_{\infty}$ matrix norms, we know $||H_t^{\prime \prime}||_1\leq ||G_t||_1$ and $||H_t^{\prime \prime}||_\infty \leq ||G_t||_\infty$ .
Therefore,

$$
+||H_{t}^{\prime\prime}||_{p} \leq ||H_{t}^{\prime\prime}||_{1}^{1/p}||H_{t}^{\prime\prime}||_{\infty}^{1-1/p} \leq ||G_{t}||_{1}^{1/p}||G_{t}||_{\infty}^{1-1/p} \leq m^{\frac{1}{p}(1-\frac{1}{p})}n^{\frac{1}{p}(1-\frac{1}{p})}||G_{t}||_{p},
+$$
+
+where the last inequality uses some basic facts about matrix norms, namely $||G_t||_1 \leq m^{1 - 1/p}||G_t||_p$ and $||G_t||_\infty \leq n^{1/p}||G_t||_p$ .
+
+# B.4 PROOF OF LEMMA 6
+
+Proof. For any $y \in \mathcal{Y}$ , by Hölder's inequality, for any $p, q \geq 1$ with $\frac{1}{p} + \frac{1}{q} = 1$ ,
+
+$$
+\begin{array}{l} \frac{1}{n}[VW^{L-i}H_{i}^{\prime\prime}]_{y,}\sigma = \frac{1}{n}[V]_{y,}W^{L-i}H_{i}^{\prime\prime}\sigma \leq \frac{1}{n}\left\|H_{i}^{\prime\prime T}(W^{T})^{L-i}[V]_{y,}^{T}\right\|_{p}\|\sigma\|_{q} \\ \leq ||[V]_{y,}^{T}||_{p}||W^{T}||_{p}^{L-i}||H_{i}^{\prime\prime T}||_{p}n^{1/q-1} \leq ||[V]_{y,}^{T}||_{p}||W^{T}||_{p}^{L-i}m^{\frac{1}{p}(1-\frac{1}{p})}n^{-\frac{1}{p^{2}}}||G_{i}^{T}||_{p}. \\ \end{array}
+$$
+
+In order to eliminate the dimension dependency on $m$ and simultaneously enjoy a faster convergence rate with respect to $n$ , we choose $p = 1$ . Then the above inequality reduces to $\frac{1}{n}[VW^{L-i}H_{i}^{\prime\prime}]_{y,}\sigma \leq ||[V]_{y,}^{T}||_{1}||W^{T}||_{1}^{L-i}n^{-1}||G_{i}^{T}||_{1} \leq \beta_{V}\beta_{W}^{L-i}n^{-1}||G_{i}^{T}||_{1} \leq \beta_{V}\beta_{W}^{L-i}n^{-1}(\beta_{U}||X_{i}^{T}||_{1} + \beta_{W}||H_{i-1}^{T}||_{1})$ .
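(Numerical aside, not part of the original argument: the Riesz-Thorin interpolation bound from the proof of Lemma 5 can be illustrated at $p = 2$, where it reduces to the classical inequality $||G||_2 \leq \sqrt{||G||_1 \, ||G||_\infty}$ on induced matrix norms. The check below assumes `numpy`.)

```python
import numpy as np

# Induced-norm interpolation at p = 2: ||G||_2 <= sqrt(||G||_1 * ||G||_inf),
# where ||G||_1 is the max absolute column sum, ||G||_inf the max absolute row sum,
# and ||G||_2 the spectral norm.
rng = np.random.default_rng(2)
for _ in range(100):
    G = rng.standard_normal((5, 7))
    n1 = np.abs(G).sum(axis=0).max()     # induced 1-norm
    ninf = np.abs(G).sum(axis=1).max()   # induced infinity-norm
    n2 = np.linalg.norm(G, 2)            # spectral norm (largest singular value)
    assert n2 <= np.sqrt(n1 * ninf) + 1e-9
```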
For $||H_{i - 1}^{T}||_{1}$ , we have
+
+$$
+\begin{array}{l} \left|\left|H_{i-1}^{T}\right|\right|_{1} = \left|\left|\rho\left(X_{i-1}^{T}U^{T} + H_{i-2}^{T}W^{T}\right)\right|\right|_{1} \leq \left|\left|X_{i-1}^{T}U^{T} + H_{i-2}^{T}W^{T}\right|\right|_{1} \\ \leq \beta_{U}||X_{i-1}^{T}||_{1} + \beta_{W}||H_{i-2}^{T}||_{1}, \\ \end{array}
+$$
+
+which by induction gives
+
+$$
+\left|\left|H_{i-1}^{T}\right|\right|_{1} \leq \beta_{U}\sum_{j=1}^{i-1}\beta_{W}^{i-1-j}\left|\left|X_{j}^{T}\right|\right|_{1}.
+$$
+
+Therefore,
+
+$$
+\mathbb{E}_{\sigma}\big(\sup_{\theta \in \Omega, y \in \mathcal{Y}}\frac{1}{n}[VW^{L-i}H_{i}^{\prime\prime}]_{y,}\sigma\big) \leq \frac{1}{n}\beta_{V}\beta_{U}\sum_{j=1}^{i}\beta_{W}^{L-j}||X_{j}^{T}||_{1}.
+$$
+
+![](images/ce229a406837d92e230bca18cf04c31c2cb5d4dff31274da69c3c4dc64afd428.jpg)
+
+# B.5 PROOF OF THEOREM 1
+
+Proof.
Using the notations that we have introduced earlier, the empirical Rademacher complexity can be rewritten as + +$$ +\begin{array}{l} \mathbb{E}_{\sigma}\big(\sup_{\theta \in \overline{\Omega},y\in \mathcal{Y}}\frac{1}{n}\sum_{i = 1}^{n}[y_{L_{\theta}}(x_{i})]_{y}\sigma_{i}\big) \\ = \mathbb{E}_{\sigma}\bigl( \sup_{\theta \in \overline{\Omega},y\in \mathcal{Y}}\frac{1}{n} [VH_{L}]_{y},\sigma \bigr) \\ = \mathbb {E} _ {\sigma} \big (\sup _ {\theta \in \overline {{\Omega}}, y \in \mathcal {Y}} \frac {1}{n} [ \sum_ {i = 1} ^ {L} V W ^ {L - i} H _ {i} ^ {\prime \prime} + \psi (\theta) X ] _ {y, \sigma} \big) \\ \leq \sum_ {i = 1} ^ {L} \mathbb {E} _ {\sigma} \left(\sup _ {\theta \in \overline {{\Omega}}, y \in \mathcal {Y}} \frac {1}{n} [ V W ^ {L - i} H _ {i} ^ {\prime \prime} ] _ {y}, \sigma\right) + \mathbb {E} _ {\sigma} \left(\sup _ {\theta \in \overline {{\Omega}}, y \in \mathcal {Y}} \frac {1}{n} [ \psi (\theta) X ] _ {y}, \sigma\right) \\ \leq \sum_ {i = 1} ^ {L} \mathbb {E} _ {\sigma} \big (\sup _ {\theta \in \Omega , y \in \mathcal {Y}} \frac {1}{n} [ V W ^ {L - i} H _ {i} ^ {\prime \prime} ] _ {y}, \sigma \big) + \mathbb {E} _ {\sigma} \big (\sup _ {| | \theta | | _ {f s} \leq r, y \in \mathcal {Y}} \frac {1}{n} [ \psi (\theta) X ] _ {y}, \sigma \big) \\ \end{array} +$$ + +where the second equality uses Lemma 4 and the last inequality is due to the fact that $\overline{\Omega} \subseteq \Omega$ and $\overline{\Omega} \subseteq \{\theta : ||\theta||_{fs} \leq r\}$ . + +For the last term, by Lemma 3, we have + +$$ +\mathbb {E} _ {\sigma} \big (\sup _ {| | \theta | | _ {f s} \leq r, y \in \mathcal {Y}} \frac {1}{n} [ \psi (\theta) X ] _ {y, \sigma} \big) \leq \frac {r | | X | | _ {F}}{2 n} \sqrt {\frac {1}{\lambda_ {m i n} (\mathbb {E} (x x ^ {T}))}}. +$$ + +The other terms can be handled by Lemma 6 in the following way. 
+ +$$ +\begin{array}{l} \sum_ {i = 1} ^ {L} \mathbb {E} _ {\sigma} \big (\sup _ {\theta \in \Omega , y \in \mathcal {Y}} \frac {1}{n} [ V W ^ {L - i} H _ {i} ^ {\prime \prime} ] _ {y, \sigma} \big) \leq \sum_ {i = 1} ^ {L} \left(\frac {1}{n} \beta_ {V} \beta_ {U} \sum_ {j = 1} ^ {i} \beta_ {W} ^ {L - j} | | X _ {j} ^ {T} | | _ {1}\right) \\ \leq \sum_ {i = 1} ^ {L} \left(\frac {1}{n} \beta_ {V} \beta_ {U} \sum_ {j = 1} ^ {i} \beta_ {W} ^ {L - j} | | X ^ {T} | | _ {1}\right) = \frac {1}{n} \frac {\beta_ {V} \beta_ {U} | | X ^ {T} | | _ {1}}{1 - \beta_ {W}} \big (\frac {1 - \beta_ {W} ^ {L}}{1 - \beta_ {W}} - L \beta_ {W} ^ {L} \big) \\ \end{array} +$$ + +for $\beta_W \neq 1$ , and $\sum_{i=1}^{L} \mathbb{E}_{\sigma}\left(\sup_{\theta \in \Omega, y \in \mathcal{Y}} \frac{1}{n}[V W^{L-i} H_i^{\prime\prime}]_y \sigma\right) \leq \frac{1}{n} \beta_V \beta_U B_2 \frac{L + L^2}{2}$ for $\beta_W = 1$ , where the second inequality uses the definition of matrix norm $||\cdot||_1$ . + +Collecting all terms, we establish + +$$ +\mathbb {E} _ {\sigma} \Big (\sup _ {\theta \in \overline {{\Omega}}, y \in \mathcal {Y}} \frac {1}{n} \sum_ {i = 1} ^ {n} [ y _ {L _ {\theta}} (x _ {i}) ] _ {y} \sigma_ {i} \Big) \leq \frac {r | | X | | _ {F}}{2 n} \sqrt {\frac {1}{\lambda_ {m i n} (\mathbb {E} (x x ^ {T}))}} + \frac {1}{n} \beta_ {V} \beta_ {U} | | X ^ {T} | | _ {1} \Lambda . +$$ + +![](images/b3cf0a5f7f7cb1ff4738f9d5ba2da7da09e464e8da0fde3cfcafcd78b63d36e6.jpg) + +# C PROOFS IN SECTION 3.3 + +This section includes the full proofs of the generalization bound for training with random noise. + +# C.1 LIPSCHITZ PROPERTIES OF RELU NONLINEARITIES AND MARGIN OPERATOR + +We first establish the Lipschitz properties of the ReLU activation function and margin operator $\mathcal{M}(y_L(x), y) \coloneqq \mathcal{M}_{y_L}(x, y)$ . + +Lemma 9. Let $\rho : R^n \to R^n$ be the coordinate-wise ReLU function, then it is 1-Lipschitz according to $||\cdot||_p$ for any $p \geq 1$ . + +Proof. 
For any $x, x' \in R^n$ ,
+
+$$
+||\rho(x) - \rho(x^{\prime})||_{p} = \left(\sum |\rho(x)_{i} - \rho(x^{\prime})_{i}|^{p}\right)^{1/p} \leq \left(\sum |x_{i} - x_{i}^{\prime}|^{p}\right)^{1/p} = ||x - x^{\prime}||_{p}.
+$$
+
+Lemma 10. For every $j$ and every $p \geq 1$ , $\mathcal{M}(\cdot, j)$ is 2-Lipschitz w.r.t. $\|\cdot\|_p$ .
+
+Proof. Let $y, y', j$ be given, and suppose that $\mathcal{M}(y, j) \leq \mathcal{M}(y', j)$ without loss of generality. Select coordinate $i \neq j$ so that $\mathcal{M}(y, j) = y_j - y_i$ . Then
+
+$$
+\mathcal{M}(y^{\prime}, j) - \mathcal{M}(y, j) = y_{j}^{\prime} - \max_{l \neq j} y_{l}^{\prime} - y_{j} + y_{i} \leq (y_{j}^{\prime} - y_{j}) + (y_{i} - y_{i}^{\prime}) \leq 2||y^{\prime} - y||_{\infty} \leq 2||y - y^{\prime}||_{p}.
+$$
+
+![](images/00b991b0afc97b6e0ce377796fb325c37484d0f05466d430a5477fa60ec9afe2.jpg)
+
+# C.2 PROOF OF LEMMA 8
+
+Proof. We prove this lemma by induction. Let $x = (x_{1}, x_{2}, \dots, x_{L})$ and $x' = (x_{1}', x_{2}', \dots, x_{L}')$ . Denote $g_{t}' := U x_{t}' + W h_{t-1}'$ , $h_{t}' := \rho(g_{t}')$ and $y_{t}' := V h_{t}'$ . Then we have
+
+$$
+\begin{array}{l} ||h_{t} - h_{t}^{\prime}||_{\infty} = ||\rho(g_{t}) - \rho(g_{t}^{\prime})||_{\infty} \leq ||g_{t} - g_{t}^{\prime}||_{\infty} = ||Ux_{t} + Wh_{t-1} - Ux_{t}^{\prime} - Wh_{t-1}^{\prime}||_{\infty} \\ \leq ||U^{T}||_{1}||x_{t} - x_{t}^{\prime}||_{\infty} + ||W^{T}||_{1}||h_{t-1} - h_{t-1}^{\prime}||_{\infty}, \\ \end{array}
+$$
+
+where the first inequality uses Lemma 9 and the second inequality uses basic properties of $||\cdot||_{\infty}$ .
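(Numerical aside, not part of the proofs: the two Lipschitz properties established in Lemmas 9 and 10 above are straightforward to spot-check on random vectors; the sketch below assumes `numpy` and tests them for $p = 2$ and $p = \infty$.)

```python
import numpy as np

# Lemma 9: coordinate-wise ReLU is 1-Lipschitz; Lemma 10: the margin operator
# M(y, j) = y_j - max_{l != j} y_l is 2-Lipschitz, both w.r.t. ||.||_p.
def margin(y, j):
    others = np.delete(y, j)
    return y[j] - others.max()

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(3)
for _ in range(1000):
    y, yp = rng.standard_normal(6), rng.standard_normal(6)
    j = rng.integers(6)
    # Lemma 9: ||relu(y) - relu(y')||_2 <= ||y - y'||_2
    assert np.linalg.norm(relu(y) - relu(yp)) <= np.linalg.norm(y - yp) + 1e-12
    # Lemma 10: |M(y, j) - M(y', j)| <= 2 ||y - y'||_inf
    assert abs(margin(y, j) - margin(yp, j)) <= 2 * np.abs(y - yp).max() + 1e-12
```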
By induction, we get + +$$ +\left| \left| h _ {L} - h _ {L} ^ {\prime} \right| \right| _ {\infty} \leq \sum_ {i} \left| \left| U ^ {T} \right| \right| _ {1} \left| \left| W ^ {T} \right| \right| _ {1} ^ {L - i} \left| \left| x _ {i} - x _ {i} ^ {\prime} \right| \right| _ {\infty}. +$$ + +Therefore, + +$$ +| | y _ {L} - y _ {L} ^ {\prime} | | _ {\infty} = | | V h _ {L} - V h _ {L} ^ {\prime} | | _ {\infty} \leq | | V | | _ {\infty} | | h _ {L} - h _ {L} ^ {\prime} | | _ {\infty} \leq \sum_ {i} | | V ^ {T} | | _ {1} | | U ^ {T} | | _ {1} | | W ^ {T} | | _ {1} ^ {L - i} | | x _ {i} - x _ {i} ^ {\prime} | | _ {\infty}. +$$ + +![](images/078174fefce5ab7caf861d9f47b70750c21ee46e827709db9b1a182ac69aa0f3.jpg) + +# C.3 PROOF OF THEOREM 3 + +We begin by establishing two auxiliary lemmas that we will need for the subsequent theorem. + +Lemma 11. For every RNNs in (1) with weight matrices $\theta = (U,W,V)$ , the following inequality holds for any $x = (x_{1},x_{2},\dots ,x_{L})$ and $y$ . + +$$ +\left| \mathbb {E} _ {\epsilon} \left[ \Phi_ {\alpha} \left(\mathcal {M} _ {y _ {L}} (x, y)\right) - \Phi_ {\alpha} \left(\mathcal {M} _ {y _ {L}} (x + \epsilon , y)\right) \right] \right| \leq \frac {2}{\alpha} \sum_ {i} | | V ^ {T} | | _ {1} | | U ^ {T} | | _ {1} | | W ^ {T} | | _ {1} ^ {L - i} \left(\mathbb {E} _ {\epsilon} | | \epsilon_ {i} | | _ {\infty}\right). +$$ + +Proof. For any fixed $x$ and $y$ , + +$$ +\begin{array}{l} \left. 
\right.\left|\mathbb{E}_{\epsilon}\left[\Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x, y)\right) - \Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x + \epsilon, y)\right)\right]\right| \leq \mathbb{E}_{\epsilon}\left|\Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x, y)\right) - \Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x + \epsilon, y)\right)\right| \\ \leq \frac{2}{\alpha}\mathbb{E}_{\epsilon}||y_{L}(x) - y_{L}(x + \epsilon)||_{\infty} \leq \frac{2}{\alpha}\mathbb{E}_{\epsilon}\big(\sum_{i}||V^{T}||_{1}||U^{T}||_{1}||W^{T}||_{1}^{L-i}||\epsilon_{i}||_{\infty}\big) \\ = \frac{2}{\alpha}\sum_{i}||V^{T}||_{1}||U^{T}||_{1}||W^{T}||_{1}^{L-i}(\mathbb{E}_{\epsilon}||\epsilon_{i}||_{\infty}), \\ \end{array}
+$$
+
+where the first inequality uses Jensen's inequality, the second inequality follows from the $\frac{1}{\alpha}$ -Lipschitz property of $\Phi_{\alpha}(\cdot)$ and Lemma 10, and the last inequality is by Lemma 8.
+
+Lemma 12. Let $\{\epsilon_i\}_{i=1}^d$ be an i.i.d. sequence of $\mathcal{N}(0, \sigma^2)$ variables; then $\mathbb{E}[\max_i |\epsilon_i|] \leq \sigma \sqrt{2 \log(2d)}$ .
+
+Proof. Define $Z \coloneqq \max_{i}|\epsilon_{i}|$ . For any $t > 0$ , by Jensen's inequality, we have
+
+$$
+e^{t\mathbb{E}(Z)} \leq \mathbb{E}(e^{tZ}) = \mathbb{E}(\max_{i} e^{t|\epsilon_{i}|}) \leq \sum_{i}\mathbb{E}(e^{t|\epsilon_{i}|}) = 2d\,\Phi(\sigma t)e^{\sigma^{2}t^{2}/2} \leq 2d\,e^{\sigma^{2}t^{2}/2},
+$$
+
+where the last equality follows from a direct calculation with the normal density and $\Phi$ is the cumulative distribution function of the standard normal distribution. Taking logs on both sides and dividing by $t$ , we get
+
+$$
+\mathbb{E}(Z) \leq \frac{\log(2d)}{t} + \frac{\sigma^{2}t}{2}.
+$$
+
+Choosing $t = \frac{\sqrt{2\log(2d)}}{\sigma}$ leads to the desired result,
+
+$$
+\mathbb{E}(Z) \leq \sigma\sqrt{2\log(2d)}.
+$$
+
+![](images/fa655624fc476d5f2a431657c431320aa59cf6ad54e051f5104ac665fe3413ac.jpg)
+
+We now return to the proof of Theorem 3.
+
+Proof. For any RNN with weight matrices $\theta = (U,W,V)$ satisfying $||V^T||_1\leq \beta_V,||W^T||_1\leq \beta_W,||U^T||_1\leq \beta_U$ , we have
+
+$$
+\begin{array}{l} \left|\mathbb{E}_{x,y}\left[\Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x, y)\right)\right] - \mathbb{E}_{x,\epsilon,y}\left[\Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x + \epsilon, y)\right)\right]\right| \\ = \left|\mathbb{E}_{x,y}\left(\Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x, y)\right) - \mathbb{E}_{\epsilon}\left[\Phi_{\alpha}\left(\mathcal{M}_{y_{L}}(x + \epsilon, y)\right)\right]\right)\right| \\ \leq \mathbb{E}_{x,y}|\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x, y)) - \mathbb{E}_{\epsilon}[\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x + \epsilon, y))]| \leq \frac{2}{\alpha}\sum_{i}\beta_{V}\beta_{U}\beta_{W}^{L-i}(\mathbb{E}_{\epsilon}||\epsilon_{i}||_{\infty}), \\ \end{array}
+$$
+
+where the first equality is due to the fact that the input $x$ and noise $\epsilon$ are independent, the first inequality uses Jensen's inequality and the last inequality follows from Lemma 11. The inequality above can be rewritten as follows.
+
+$$
+\mathbb{E}_{x,y}[\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x, y))] \leq \mathbb{E}_{x,\epsilon,y}[\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x + \epsilon, y))] + \frac{2}{\alpha}\sum_{i}\beta_{V}\beta_{U}\beta_{W}^{L-i}(\mathbb{E}_{\epsilon}||\epsilon_{i}||_{\infty}).
+$$
+
+For the first term on the right-hand side of the above inequality, by Theorem 2, with probability at least $1 - \delta$ , the following holds:
+
+$$
+\begin{array}{l} \mathbb{E}_{x,\epsilon,y}[\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x + \epsilon, y))] \\ \leq \frac{4k}{\alpha}\big(\frac{r||X + X_{\epsilon}||_{F}}{2n}\sqrt{\frac{1}{\lambda_{min}(\mathbb{E}((x + \epsilon)(x + \epsilon)^{T}))}} + \frac{1}{n}\beta_{V}\beta_{U}||X^{T} + X_{\epsilon}^{T}||_{1}\Lambda\big) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2n}} + \frac{1}{n}\sum\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x_{i} + \epsilon_{i}, y_{i})) \\ = \frac{4k}{\alpha}\big(\frac{r||X + X_{\epsilon}||_{F}}{2n}\sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(xx^{T}) + \mathbb{E}(\epsilon\epsilon^{T}))}} + \frac{1}{n}\beta_{V}\beta_{U}||X^{T} + X_{\epsilon}^{T}||_{1}\Lambda\big) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2n}} + \frac{1}{n}\sum\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x_{i} + \epsilon_{i}, y_{i})) \\ = \frac{4k}{\alpha}\big(\frac{r||X + X_{\epsilon}||_{F}}{2n}\sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(xx^{T}) + \sigma_{\epsilon}^{2}I)}} + \frac{1}{n}\beta_{V}\beta_{U}||X^{T} + X_{\epsilon}^{T}||_{1}\Lambda\big) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2n}} + \frac{1}{n}\sum\Phi_{\alpha}(\mathcal{M}_{y_{L}}(x_{i} + \epsilon_{i}, y_{i})) \\ = \frac{4k}{\alpha}\big(\frac{r||X + X_{\epsilon}||_{F}}{2n}\sqrt{\frac{1}{\lambda_{min}(\mathbb{E}(xx^{T})) + \sigma_{\epsilon}^{2}}} + \frac{1}{n}\beta_{V}\beta_{U}||X^{T} + X_{\epsilon}^{T}||_{1}\Lambda\big) + 3\sqrt{\frac{\log\frac{2}{\delta}}{2n}} + \frac{1}{n}\sum\Phi_{\alpha}(\mathcal{M}_{y_{L}}\left(x_{i} + \epsilon_{i}, y_
{i}\right)\right) \\ \end{array} +$$ + +Combining the above two inequalities together leads to + +$$ +\begin{array}{l} \mathbb {E} \left[ \mathbb {1} _ {\mathcal {M} _ {y _ {L}} (x, y) \leq 0} \right] \leq \mathbb {E} _ {x, y} \left[ \Phi_ {\alpha} \left(\mathcal {M} _ {y _ {L}} (x, y)\right) \right] \\ \leq \frac {1}{n} \sum^ {\infty} \Phi_ {\alpha} (\mathcal {M} _ {y _ {L}} (x _ {i} + \epsilon_ {i}, y _ {i})) + \frac {2}{\alpha} \sum_ {i} \beta_ {V} \beta_ {U} \beta_ {W} ^ {L - i} (\mathbb {E} _ {\epsilon} | | \epsilon_ {i} | | _ {\infty}) + \\ \frac {4 k}{\alpha} \Big (\frac {r | | X + X _ {\epsilon} | | _ {F}}{2 n} \sqrt {\frac {1}{\lambda_ {m i n} (\mathbb {E} (x x ^ {T})) + \sigma_ {\epsilon} ^ {2}}} + \frac {1}{n} \beta_ {V} \beta_ {U} | | X ^ {T} + X _ {\epsilon} ^ {T} | | _ {1} \Lambda \Big) + 3 \sqrt {\frac {\log \frac {2}{\delta}}{2 n}} , \\ \end{array} +$$ + +where the first inequality makes use of the fact that $\mathbb{1}_u\leq \Phi_\alpha (u)$ . Therefore, the desired result can be immediately obtained by substituting $\mathbb{E}_{\epsilon}||\epsilon_{i}||_{\infty}$ with $\sigma_{\epsilon}\sqrt{2\log(2d)}$ according to Lemma 12. + +# D SUPPLEMENTARY FIGURES + +Figure 2 shows the behavior of generalization error for RNNs as the standard derivation of noise $\sigma_{\epsilon}$ varies for the sequence length $L = 200$ and 300 trained on IMDB dataset. As in the body, increasing $\sigma_{\epsilon}$ will first improve the generalization error and then, after a certain point, harm the performance of RNNs. + +![](images/775bd5ac1070cec78fb8a8b102063fc82f541b9e42afd43b25ceeeb3a1203730.jpg) +Figure 2: Generalization error for training with noise (mean ± standard error averaged on 5 runs). The left and right panel are for $L = 200$ (smallest eigenvalue: $1 \times 10^{-4}$ ) and $L = 300$ (smallest eigenvalue: $2.5 \times 10^{-5}$ ) respectively. 
+
+![](images/e2070fa59f3aa7bf799a75edbfbcd9ba9bcd5e1b713e7aec951c30e884076e43.jpg)
\ No newline at end of file
diff --git a/understandingknowledgedistillationinnonautoregressivemachinetranslation/full.md b/understandingknowledgedistillationinnonautoregressivemachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..082d9636caa6590da17874505fc596529fc8830c
--- /dev/null
+++ b/understandingknowledgedistillationinnonautoregressivemachinetranslation/full.md
@@ -0,0 +1,397 @@
+# UNDERSTANDING KNOWLEDGE DISTILLATION IN NON-AUTOREGRESSIVE MACHINE TRANSLATION
+
+Chunting Zhou $^{1*}$ , Jiatao Gu $^{2*}$ , Graham Neubig $^{1}$
+
+Language Technologies Institute, Carnegie Mellon University1
+Facebook AI Research2
+
+{chuntinz, gneubig}@cs.cmu.edu, jgu@fb.com
+
+# ABSTRACT
+
+Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving
substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial in NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the complexity of the distilled data that provides the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve state-of-the-art performance for NAT-based models, and close the gap with the autoregressive baseline on the WMT14 En-De benchmark.$^{1}$

# 1 INTRODUCTION

Traditional neural machine translation (NMT) systems (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) generate sequences in an autoregressive fashion; each target token is predicted step-by-step by conditioning on the previously generated tokens in a monotonic (e.g. left-to-right) order. While such autoregressive translation (AT) models have proven successful, the sequential dependence of decisions precludes taking full advantage of the parallelism afforded by modern hardware (e.g. GPUs) at inference time. In contrast, non-autoregressive translation (NAT) models (Gu et al., 2018; Lee et al., 2018) predict the whole sequence or multi-token chunks of the sequence simultaneously, alleviating this problem by trading the model's capacity for decoding efficiency.
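The contrast between the two factorizations can be made concrete with a toy sketch. Everything here is illustrative: the per-token distributions are hand-written stand-ins, not the NMT models studied in the paper.

```python
# Toy vocabulary of two target words; distributions are illustrative stand-ins.
def at_token_dist(prev_tokens, src):
    # AT: the distribution conditions on previously generated tokens
    # (here, a toy rule that discourages repeating the last token).
    if prev_tokens and prev_tokens[-1] == "a":
        return {"a": 0.1, "b": 0.9}
    return {"a": 0.6, "b": 0.4}

def nat_token_dist(t, src):
    # NAT: each position is predicted independently given the source only.
    return {"a": 0.6, "b": 0.4}

def at_greedy_decode(src, length):
    out = []
    for _ in range(length):  # sequential: one decoding step per output token
        dist = at_token_dist(out, src)
        out.append(max(dist, key=dist.get))
    return out

def nat_greedy_decode(src, length):
    # all positions are independent, so they can be decoded in one parallel pass
    dists = [nat_token_dist(t, src) for t in range(length)]
    return [max(d, key=d.get) for d in dists]
```

Under greedy decoding the AT toy avoids repeating tokens (`["a", "b", "a"]`), while the independent NAT factorization cannot express that dependency (`["a", "a", "a"]`) — a miniature version of the independence problem discussed next.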
Such a non-autoregressive factorization assumes that the output tokens are independent of each other. However, this assumption obviously does not hold in reality, and as a result NAT models generally perform worse than standard AT models.

One key ingredient in the training recipe for NAT models that is used in almost all existing work (Gu et al. (2018); Lee et al. (2018); Stern et al. (2019), inter alia) is the creation of training data through knowledge distillation (Hinton et al., 2015). More precisely, sequence-level knowledge distillation (Kim & Rush, 2016) – a special variant of the original approach – is applied during NAT model training by replacing the target side of training samples with the outputs from a pre-trained AT model trained on the same corpus with a roughly equal number of parameters. It is usually assumed (Gu et al., 2018) that knowledge distillation's reduction of the "modes" (alternative translations for an input) in the training data is the key reason why distillation benefits NAT training. However, this intuition has not been rigorously tested, leading to three important open questions:

- Exactly how does distillation reduce the "modes", and how could we measure this reduction quantitatively? Why does this reduction consistently improve NAT models?
- What is the relationship between the NAT model (student) and the AT model (teacher)? Are different varieties of distilled data better for different NAT models?
- Due to distillation, the performance of NAT models is largely bounded by the choice of AT teacher. Is there a way to further close the performance gap with standard AT models?

In this paper, we aim to answer the three questions above, improving understanding of knowledge distillation through empirical analysis over a variety of AT and NAT models. Specifically, our contributions are as follows:

- We first visualize explicitly on a synthetic dataset how modes are reduced by distillation (§3.1).
Inspired by the synthetic experiments, we further propose metrics for measuring the complexity and faithfulness of a given training set. Specifically, our metrics are the conditional entropy and KL-divergence of word translation based on an external alignment tool, and we show that these metrics are correlated with NAT model performance (§3.2).
- We conduct a systematic analysis (§4) over four AT teacher models and six NAT student models with various architectures on the standard WMT14 English-German translation benchmark. These experiments find a strong correlation between the capacity of an NAT model and the optimal dataset complexity that results in the best translation quality.
- Inspired by these observations, we propose approaches to further adjust the complexity of the distilled data in order to match the model's capacity (§5). We also show that we can achieve state-of-the-art performance for NAT models and largely match the performance of the AT model.

# 2 BACKGROUND

# 2.1 NON-AUTOREGRESSIVE NEURAL MACHINE TRANSLATION

In order to model the joint probability of the output sequence $\pmb{y}$, NMT models usually generate each output token conditioned on the previously generated ones:

$$
p(\pmb{y}|\pmb{x}) = \prod_{t=1}^{T} p(y_t|\pmb{y}_{<t}, \pmb{x})
$$

| $d$ | En-De | En-Es | En-Fr | Full | Random Selection | Distillation |
| --- | --- | --- | --- | --- | --- | --- |
| $C(d)$ | 3.12 | 2.81 | 2.89 | 3.67 | 3.30 | 2.64 |

Table 1: Complexity $C(d)$ (↑ more complex) of the Europarl data set of different settings in §3.1; the first four columns are real data.

where we use $x$ and $y$ to denote a word in the source and target vocabulary respectively. $T_{x}$ and $T_{y}$ denote the length of the source and target sentences. To make the computation tractable, we make two additional assumptions on the conditional distribution $p(\boldsymbol{y}|\boldsymbol{x})$:

- Assumption 1: We assume the target tokens are independent given the source sentence.
Then the conditional entropy of a sentence can be converted into the sum of entropy of target words conditioned on the source sentence $\mathbf{x}$.
- Assumption 2: We assume the distribution of $p(y_t | \boldsymbol{x})$ follows an alignment model (Dyer et al., 2013)$^{3}$ where $y_t$ is generated from the word alignment distribution $p(y_t | \mathrm{Align}(y_t))$. This makes it possible to simplify the conditional entropy to the sum of entropy of target words conditioned on the aligned source words, denoted $\mathcal{H}(y | x = x_t)$.

The corpus-level complexity $C(d)$ is then calculated by adding up the conditional entropy $\mathcal{H}(\mathbf{Y}|\mathbf{X} = \boldsymbol{x})$ of all sentences. To prevent $C(d)$ from being dominated by frequent words, we calculate $C(d)$ by averaging the entropy of target words conditioned on a source word, denoted $C(d) = \frac{1}{|\mathcal{V}_x|}\sum_{x\in \mathcal{V}_x}\mathcal{H}(y|x)$.

To illustrate that the proposed metric is a reasonable measure of the complexity of a parallel corpus, in Tab. 1 we compute $C(d)$ for parallel data from different language pairs, the concatenated data set, and the data distilled from the AT model described in §3.1. We observe that the conditional entropy of the distilled data is much smaller than that of the concatenated or randomly selected data mentioned above. Additionally, we find that the conditional entropies of En-Es and En-Fr are similar but that of En-De is relatively larger, which can also explain why the student NAT model prefers to predict the modes of Es or Fr more often than De, as shown in Fig. 1(d).

Measure of Faithfulness. $C(d)$ reflects the level of multi-modality of a parallel corpus, and we have shown that a simpler data set is favorable to an NAT model. However, it is not fair to assess a data set only by its complexity; we can trivially construct a simple data set with no variations in the output, which obviously won't be useful for training.
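A count-based sketch of the complexity metric just defined, assuming word-aligned `(source_word, target_word)` pairs have already been extracted by an external aligner such as fast_align (Dyer et al., 2013) — the alignment step itself is not shown:

```python
import math
from collections import Counter, defaultdict

def corpus_complexity(aligned_pairs):
    """C(d): average over source words x of the entropy H(y | x),
    with p(y | x) estimated by counting aligned word pairs."""
    counts = defaultdict(Counter)
    for x, y in aligned_pairs:
        counts[x][y] += 1
    total = 0.0
    for x, ys in counts.items():
        n = sum(ys.values())
        # entropy of the target-word distribution aligned to source word x
        total += -sum((c / n) * math.log(c / n) for c in ys.values())
    return total / len(counts)
```

For example, a source word aligned half the time to each of two target words contributes entropy $\ln 2$, while a word with a single consistent translation contributes zero.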
The other important measurement of a data set is its faithfulness to the real data distribution. To measure the faithfulness of a parallel corpus $d$, we use the KL-divergence of the alignment distribution between the real parallel data set $r$ and an altered parallel data set $d$, denoted $F(d)$:

$$
F(d) = \frac{1}{|\mathcal{V}_x|} \sum_{x \in \mathcal{V}_x} \sum_{y \in \mathcal{V}_y} p_r(y|x) \log \frac{p_r(y|x)}{p_d(y|x)} \tag{4}
$$

# 4 EMPIRICAL STUDY

In this section, we perform an extensive study over a variety of non-autoregressive (NAT) models trained from different autoregressive (AT) teacher models to assess how knowledge distillation affects the performance of NAT models.

# 4.1 EXPERIMENTAL SETTINGS

Data. We use the data set commonly used by prior work as our evaluation benchmark: WMT14 English-German (En-De)$^{4}$. We use newstest2013 as the validation set for selecting the best model, and newstest2014 as the test set. We learn a byte-pair encoding (BPE, Sennrich et al., 2016) vocabulary of 37,000 on the tokenized data.

AT Models. We set up four Transformer models with different parameter sizes: Transformer-tiny/small/base/big, denoted as tiny, small, base, and big respectively. We build the base and big models following the settings described in Vaswani et al. (2017), and reduce the model sizes for tiny and small to create weaker teacher models. Details of the model architectures can be found in Appendix A.

All the models are trained using the Adam optimizer (Kingma & Ba, 2014) with the maximum number of steps set to 300,000. After training, we use the resulting AT models to decode the whole training set with beam size 5 and replace the real target sentences to create a new parallel corpus.

NAT Models. We consider the following NAT models, from vanilla to state-of-the-art.
All the models use the Transformer as the basic backbone and are (re-)implemented based on Fairseq$^{5}$, except for FlowSeq. We briefly outline the methods and parameters here, and describe detailed settings in Appendix A.

- Vanilla NAT (Gu et al., 2018): Similarly to §3.1, we use a simplified version where the decoder's inputs are directly copied from the encoder without considering latent variables.
- FlowSeq (Ma et al., 2019): FlowSeq adopts normalizing flows (Kingma & Dhariwal, 2018) as the latent variables to model the mapping from source sentences to a latent space.
- NAT with Iterative Refinement (iNAT, Lee et al., 2018): iNAT extends the vanilla NAT by iteratively reading and refining the translation. The number of iterations is set to 10 for decoding.
- Insertion Transformer (InsT, Stern et al., 2019): InsT adopts a similar architecture to iNAT while generating the sequence by parallel insertion operations. Here, we only consider InsT trained with the uniform loss as described in the original paper.
- MaskPredict (MaskT, Ghazvininejad et al., 2019): MaskT adopts a masked language model (Devlin et al., 2018) to progressively generate the sequence from an entirely masked input. The number of iterations is set to 10.
- Levenshtein Transformer (LevT, Gu et al., 2019): LevT uses similar architectures to those of InsT and MaskT while generating based on both insertion and deletion operations. We experiment with a base and a big LevT model (LevT and LevT-big in Tab. 2).

We also summarize the parameter size, performance, and relative decoding speed of the NAT models in Tab. 2. We use the decoding time of the vanilla NAT to represent one unit of time, and Iters $\times$ Pass represents the relative time units used for each model.

As mentioned earlier, we analyze each model by training from both the real and the 4 distilled targets. We train the NAT models for the same number of steps as the AT models.
For a fair comparison of the actual ability of each NAT-based model, we test all the models with greedy decoding, without any advanced search algorithms (e.g. length beam (Ghazvininejad et al., 2019), noisy parallel decoding (Ma et al., 2019), or re-ranking from the teacher model (Gu et al., 2018)). Notably, the vanilla NAT and FlowSeq output translations with a single forward pass, while the remaining models are based on iterative refinement.
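The data-creation step described in §4.1 amounts to a few lines with any seq2seq toolkit. A hedged sketch — `teacher_translate` is a hypothetical stand-in for beam-search decoding with a trained AT model, not an actual toolkit API:

```python
def make_distilled_corpus(train_pairs, teacher_translate, beam=5):
    """Sequence-level knowledge distillation (Kim & Rush, 2016):
    keep each source sentence and replace the real target with the
    teacher's best beam-search output.

    `teacher_translate(src, beam=...)` is a hypothetical callable wrapping
    a trained AT model's decoder; any seq2seq toolkit could supply it.
    """
    return [(src, teacher_translate(src, beam=beam)) for src, _tgt in train_pairs]
```

The NAT student is then trained on the returned corpus exactly as it would be on the real parallel data.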
| Models | Params | BLEU | Pass | Iters |
| --- | --- | --- | --- | --- |
| **AT models** | | | | |
| AT-tiny | 16M | 23.3 | - | n |
| AT-small | 37M | 25.6 | - | n |
| AT-base | 65M | 27.1 | - | n |
| AT-big | 218M | 28.2 | - | n |
| **NAT models** | | | | |
| vanilla | 71M | 11.4 | 1 | 1 |
| FlowSeq | 73M | 18.6 | 13 | 1 |
| iNAT | 66M | 19.3 | 1 | k << n |
| InsT | 66M | 20.9 | 1 | ≈ log2 n |
| MaskT | 66M | 23.5 | 1 | 10 |
| LevT | 66M | 25.2 | 1 | 3k << n |
| LevT-big | 220M | 26.5 | ≈3 | 3k << n |
# 4.2 ANALYSIS OF THE DISTILLED DATA

We compare different dimensions of the data generated by the four AT models and the real data set in Fig. 3. First, Fig. 3(a) shows that as the capacity of the AT model increases, the complexity $C(d)$ of the distilled data increases, which indicates that the multi-modality increases as well. At the same time, we observe that $F(d)$ defined in §3.2 also decreases, showing that the distilled data more faithfully represents the word-level translation distribution of the original data.

Table 2: AT and NAT models. The number of parameters and test BLEU when trained on the real data demonstrate model capacity. Iters is the number of passes used in decoding for output length $n$ and hyperparameter $k$. Pass is the relative time used for one pass of decoding.

![](images/f2f9c7ba3c7046a75b93389254862352d5f07187bd8a282f628325c1dea28310.jpg)
Figure 2: A sampled pair together with its real target from the distilled data of the base-AT model. Chunks annotated in the same colors are approximately aligned with each other.

![](images/be459977f5d76afe1e7fb63c196dea213762889a27731581c97caff43bbc5506.jpg)
Figure 3: Complexity $C(d)$ (↑ more complex), faithfulness $F(d)$ (↓ more faithful), training BLEU, and reordering score (↑ more monotonic alignment) of different distilled sets of WMT14-ENDE.

![](images/6c294505b268efccefa5fde58019e47b482ed1a76d52bfc11170f70208c5bcc9.jpg)

![](images/bbe9a4d4a6c71ae1af04e64be5b2d53102443db89850abf45abbf96501de4eda.jpg)

Second, we plot the BLEU score of the distilled data w.r.t. the real data set in Fig. 3(b), and we observe that the BLEU score of the distilled data from a higher-capacity teacher model is higher, which is both intuitive and in agreement with the results on KL divergence.

We also investigate how the relative ordering of words in the source and target sentences is changed during distillation. We use the fuzzy reordering score proposed in Talbot et al. (2011).
A larger fuzzy reordering score indicates more monotonic alignments. As shown in Fig. 3(c), the distilled data has significantly less reordering than the real parallel sentences, and the distilled data from a weaker AT teacher is more monotonic than that from a stronger AT teacher. We also show a randomly sampled example in Fig. 2 where, compared to the real translation, the AT-distilled target is much more monotonically aligned to the source sentence. This has potential benefits in that these simpler reordering patterns may be easier to learn for NAT models, but also disadvantages in that it may prevent NAT models from learning complex reordering patterns.

# 4.3 ANALYSIS OF DISTILLATION STRATEGIES

In §4.2, we have shown that decoding with an AT model reduces the conditional entropy of the parallel data set, which mitigates multi-modality in the output data. But does the decoding method of the AT model affect this change in the data set? We investigate different decoding strategies when creating distilled data, using the base Transformer model as the teacher and the vanilla NAT model as the student. In Tab. 3, four decoding methods are presented: sampling, sampling within the top-10 candidates, beam search, and greedy decoding. With the same AT model, the performance of the NAT model differs widely depending on the decoding approach, with distillation via beam search giving the best performance.

We can see that beam search or greedy decoding reduces the complexity of the real data the most while maintaining high faithfulness. In contrast, sampling-based decoding methods reduce the modes in the output sequence less aggressively. This finding is in concert with Ott et al. (2018), who demonstrate that because beam search approximately selects the most probable translation, it effectively reduces diversity in the output translations compared to sampling or the true distribution.
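The $C(d)$ and $F(d)$ values reported here follow the definitions in §3.2. A count-based sketch of the faithfulness metric of Eq. 4 — with no smoothing, and assuming every aligned pair seen in the real data also occurs in the distilled data:

```python
import math
from collections import Counter, defaultdict

def alignment_dist(aligned_pairs):
    """p(y | x) estimated by counting aligned (source_word, target_word) pairs."""
    counts = defaultdict(Counter)
    for x, y in aligned_pairs:
        counts[x][y] += 1
    return {x: {y: c / sum(ys.values()) for y, c in ys.items()}
            for x, ys in counts.items()}

def faithfulness(real_pairs, distilled_pairs):
    """F(d): KL(p_r(y|x) || p_d(y|x)) averaged over source words.
    Assumes shared support between the two corpora (no smoothing)."""
    p_r = alignment_dist(real_pairs)
    p_d = alignment_dist(distilled_pairs)
    total = 0.0
    for x, dist in p_r.items():
        total += sum(p * math.log(p / p_d[x][y]) for y, p in dist.items())
    return total / len(p_r)
```

A corpus compared against itself scores 0; the further the distilled word-translation distribution drifts from the real one, the larger $F(d)$ becomes.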
| Decoding Method | C(d) | F(d) | BLEU |
| --- | --- | --- | --- |
| sampling | 3.623 | 3.354 | 6.6 |
| sampling (Top 10) | 2.411 | 2.932 | 14.6 |
| greedy | 1.960 | 2.959 | 18.9 |
| beam search | 1.902 | 2.948 | 19.5 |
Table 3: Comparison of decoding methods on the WMT14-ENDE newstest 2014 test set.

# 4.4 DISTILLED DATA VS. NAT MODELS

We next examine the relationship between the NAT students and distilled training data from different AT models. In Fig. 4, we demonstrate results for the NAT models listed in §4.1.

![](images/9b5cd15aa7b981de9b34490f9696035a5eb75f781267a861232f72333a180a20.jpg)

![](images/2f3608980898ff2d8c48b6e422efc995379b5c9c740e1d8e3ba9c7d44d073d80.jpg)

![](images/13a8ad59401ea8b9168ca0bd3917303145e86b6741902722a593fdf61f4fcc24.jpg)

![](images/3bde6d367a14af7f0a933e9921da04fe6411b1a37e8b5c040d07bf52a87aabc4.jpg)
Figure 4: The performance of NAT models of varying capacity trained on both the real data and the distilled data from tiny, small, base, and big AT models on the WMT14-ENDE newstest 2014 test set.

![](images/e89b5552715af4591225807d6b08853f4b2245061107aa79b20cd490c0fc0287.jpg)

![](images/dbcfee958424523c576e0f6dfb6fb98bca10bd496d1b424bca4bc780174a6fce.jpg)

We use the test set performance on real data as a simple metric to measure the capacity of the NAT model and arrange the subfigures in increasing order of performance (left-to-right, top-to-bottom). The results demonstrate that, interestingly, weaker NAT students prefer distilled data with smaller complexity as measured in §4.2. The best performance of NAT models – from lower-capacity ones to higher-capacity ones – is achieved with distilled data of lower to higher complexity, i.e. the vanilla NAT model performs best when using the distilled data from a small Transformer, whereas LevT achieves the best performance when training with the distilled data from a big Transformer. Third, and notably, by simply changing the distilled data set upon which the models are trained, we are able to significantly improve the state-of-the-art results for models in a particular class.
For example, FlowSeq improves to 22 BLEU simply by changing from the distilled data of the Transformer-base to that of the Transformer-small. Finally, we find that by distilling from a big AT model, LevT is able to close the gap with the Transformer-base with a similar number of parameters. Both LevT and LevT-big achieve state-of-the-art performance for NAT-based models.

# 5 IMPROVEMENTS TO KNOWLEDGE DISTILLATION

The previous section shows that the optimal complexity of the dataset is highly correlated with the capacity of the NAT model. In this section, we introduce three techniques that can be used to alter the distilled data to match the capacity of the NAT model. Specifically, these techniques can be used to simplify the data further (BANs, MoE) for a lower-capacity student model, or to increase the faithfulness of the data set (Interpolation) for a higher-capacity student model.

Born-Again Networks. We apply Born-Again networks (BANs) to create a simplified dataset for NAT models. BANs were originally proposed as a self-distillation technique (Furlanello et al., 2018) that uses the output distribution of a trained model to train the original model. Starting from the real data, we repeatedly train new AT models on the decoded sentences from the AT model of the previous iteration. This process is repeated $k$ times and yields $k$ distilled data sets, upon which we perform NAT training and examine how the $k$ born-again teachers affect the performance of NAT students.

We conduct experiments using the vanilla NAT model (Gu et al., 2018) (which achieved the best performance with distilled data from a small Transformer in §4.4) and the base Transformer as the AT model. As shown in Fig. 5, we can make the following observations: (i) The performance of the base AT model remains almost unchanged during the reborn iterations. (ii) The performance of the vanilla NAT model can be improved by 2 BLEU when using the distilled data from reborn iteration 6.
(iii) As the reborn iterations continue, the complexity of the distilled data decreases and eventually becomes constant. Meanwhile, the quality of the distilled data relative to the real data decreases.

![](images/d42cdcd3c2d8144f1e33ae6fd46c8086f3cac3d30d1e47fe3306ce528490bf1b.jpg)
Figure 5: Reborn experiments: (from left to right) performance of the base AT model, performance of the vanilla NAT model, and $C(d)$ and $F(d)$ of the distilled data sets. R-$i$ denotes the $i$-th reborn iteration.

![](images/e6b1d02b2e23c759610cd5e76987bdfda63475e3025b9602c2f350ea654a794f.jpg)

![](images/a2738db6d6eb53b4045f919aa8d20de80230edabf48c125f5bc4e6565693d8c6.jpg)

![](images/04d17235e332242ac2d676be4bd722c6c437060ffd50a19a2df89f0bb0bca46a.jpg)
Figure 6: MoE experiments: (from left to right) performance of the base AT model, performance of the vanilla NAT model, and $C(d)$ and $F(d)$ of the distilled data sets w.r.t. the number of experts.

![](images/fa51db50969343f895e395abafe34494333ddb1a007a3edb6667d0aa20dcf834.jpg)

![](images/67fe1ec4fca911571f1653a46071d06624b4f65ebc5e8a194a9d755597f064c1.jpg)

Mixture-of-Experts. The mixture-of-experts model (MoE; Shen et al. (2019)) learns different experts for diverse machine translation, and different mixture components were shown to capture consistent translation styles across examples. Inspired by this, we use one expert from the mixture model to translate the training data, which is expected to generate a single style of translation and reduce the diversity in the original data set. We then use the best single-expert translations as the distilled data to train the vanilla NAT model. Specifically, we follow Shen et al. (2019)'s setup, using the base Transformer model and a uniform hard mixture model, varying the number of experts.

In Fig. 6, we observe that the performance of the best expert of the MoE tends to decrease as the number of experts increases.
However, the complexity $C(d)$ and faithfulness $F(d)$ of the distilled data from different MoE models have a relatively large variance. Compared to using the distilled data from a plain base AT model, the performance of the NAT model is improved by 1.21 BLEU when using the distilled data from the MoE model with 3 experts, which produces the distilled data with the least complexity.

Sequence-Level Interpolation. §4.4 shows that stronger NAT models (e.g. MaskT, LevT) have the ability to learn from a dataset that is closer to the real data, and achieve better performance. We adopt the sequence-level interpolation proposed in Kim & Rush (2016) as a natural way to create a better dataset. Different from distillation, interpolation picks the sentence with the highest sentence-level BLEU score w.r.t. the ground truth from the $K$-best beam search hypotheses. In our experiments, we first run beam search using the base Transformer model with a beam size of 5, and then select the sentences with the highest BLEU score from the top-3 candidates.

| d | C(d) | F(d) | BLEU |
| --- | --- | --- | --- |
| base | 1.902 | 2.948 | 26.94 |
| base-inter | 1.908 | 2.916 | 27.32 |

Table 4: Results with and without sequence-level interpolation with LevT.

Tab. 4 compares the performance of LevT trained with distilled data from the AT model using standard distillation or interpolation. We observe that selection by BLEU score from the base AT model (base-inter) improves the performance of LevT by $\sim 0.4$ BLEU, while the dataset complexity $C(d)$ does not increase much.

# 6 CONCLUSION

In this paper, we first systematically examine why knowledge distillation improves the performance of NAT models. We conducted extensive experiments with autoregressive teacher models of different capacities and a wide range of NAT models. Furthermore, we defined metrics that can quantitatively measure the complexity of a parallel data set. Empirically, we find that a higher-capacity NAT model requires more complex distilled data to achieve better performance. Accordingly, we propose several techniques that can adjust the complexity of a data set to match the capacity of an NAT model for better performance.

# REFERENCES

Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1269-1281, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1122. URL https://www.aclweb.org/anthology/P19-1122.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015.

Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments.
In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pp. 65-72, 2005.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.

Chris Dyer, Victor Chahuneau, and Noah Smith. A simple, fast, and effective reparameterization of IBM Model 2. In NAACL, 2013.

Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born-again neural networks. In International Conference on Machine Learning, pp. 1602-1611, 2018.

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 1243-1252. JMLR.org, 2017.

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Constant-time machine translation with conditional masked language models. arXiv preprint arXiv:1904.09324, 2019.

Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. Non-autoregressive neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.

Jiatao Gu, Changhan Wang, and Jake Zhao. Levenshtein transformer. In Advances in Neural Information Processing Systems 33, 2019.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada.
Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 944-952. Association for Computational Linguistics, 2010.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351, 2017.

Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. Fast decoding in sequence models using discrete latent variables. In International Conference on Machine Learning, pp. 2395-2404, 2018.

Yoon Kim and Alexander M Rush. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1317-1327, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.

Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1173-1182, 2018.

Percy Liang, Hal Daumé III, and Dan Klein. Structure compilation: Trading structure for features. In ICML, pp. 592-599, 2008.

Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, and Eduard Hovy. Softmax Q-distribution estimation for structured prediction: A theoretical interpretation for RAML. arXiv preprint arXiv:1705.07136, 2017.

Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy.
FlowSeq: Non-autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, November 2019.

Aaron Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George Driessche, Edward Lockhart, Luis Cobo, Florian Stimberg, et al. Parallel WaveNet: Fast high-fidelity speech synthesis. In International Conference on Machine Learning, pp. 3915-3923, 2018.

Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pp. 3953-3962, 2018. URL http://proceedings.mlr.press/v80/ott18a.html.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.

Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582-597. IEEE, 2016.

Maja Popović. chrF: Character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 392-395, 2015.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb.org/anthology/P16-1162.

Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. Retrieving sequential information for non-autoregressive neural machine translation.
arXiv preprint arXiv:1906.09444, 2019.

Tianxiao Shen, Myle Ott, Michael Auli, et al. Mixture models for diverse machine translation: Tricks of the trade. In International Conference on Machine Learning, pp. 5719-5728, 2019.

Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior. arXiv preprint arXiv:1908.07181, 2019.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, pp. 223-231, 2006.

Milos Stanojevic and Khalil Simaan. BEER: Better evaluation as ranking. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 414-419, 2014.

Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. Blockwise parallel decoding for deep autoregressive models. In Advances in Neural Information Processing Systems, pp. 10107-10116, 2018.

Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. Insertion transformer: Flexible sequence generation via insertion operations. arXiv preprint arXiv:1902.03249, 2019.

David Talbot, Hideto Kazawa, Hiroshi Ichikawa, Jason Katz-Brown, Masakazu Seno, and Franz J Och. A lightweight evaluation framework for machine translation reordering. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pp. 12-21. Association for Computational Linguistics, 2011.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Chunqi Wang, Ji Zhang, and Haiqing Chen. Semi-autoregressive neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 479-488, 2018.
Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. Non-autoregressive machine translation with auxiliary regularization. arXiv preprint arXiv:1902.10245, 2019.

Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. Imitation learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1906.02041, 2019.

# A EXPERIMENTAL DETAILS

# A.1 AT MODELS

Model All the AT models are implemented based on the Transformer model using fairseq (Ott et al., 2019), and we largely follow the fairseq examples to train the Transformers. Following the notation of Vaswani et al. (2017), we list the basic parameters of all the AT models we used:
| Model | tiny | small | base | big |
| --- | --- | --- | --- | --- |
| $d_{\text{model}}$ | 256 | 512 | 512 | 1024 |
| $d_{\text{hidden}}$ | 1024 | 1024 | 2048 | 4096 |
| $n_{\text{layers}}$ | 3 | 3 | 6 | 6 |
| $n_{\text{heads}}$ | 4 | 8 | 8 | 16 |
| $p_{\text{dropout}}$ | 0.1 | 0.1 | 0.3 | 0.3 |
Table 5: Basic architecture hyper-parameters for the AT models.

Training For all experiments, we adopt the Adam optimizer (Kingma & Ba, 2014) with $\beta_{1} = 0.9, \beta_{2} = 0.98, \epsilon = 10^{-8}$ . The learning rate is scheduled using inverse_sqrt with a maximum learning rate of 0.0005 and 4000 warmup steps. We set the label smoothing to 0.1. All the models are run on 8 GPUs for 300,000 updates with an effective batch size of 32,000 tokens. The best model is selected based on the validation loss, except for FlowSeq, which uses the validation BLEU score.

Decoding After training, we use beam search with a fixed beam size of 5 for all AT models to create the distilled dataset. We use length normalization without length penalty.

# A.2 NAT MODELS

Model Tab. 2 also lists all the NAT models we test in this work. In general, all the NAT models except FlowSeq and LevT-big adopt a similar architecture and hyper-parameters to the Transformer-base (see Tab. 5). LevT-big is a naive extension of the original LevT model with a parameter setting comparable to Transformer-big (Tab. 5). For FlowSeq, we use the base model (FlowSeq-base) described in Ma et al. (2019). We re-implemented the vanilla NAT as a simplified version of Gu et al. (2018): instead of modeling fertility as described in the original paper, we monotonically copy the encoder embeddings to the input of the decoder. All the models except InsT require an additional module to predict the length of the output sequence, or the number of placeholders to be inserted, which is implemented as a standard softmax classifier over lengths in [0, 256). For LevT, we also have a binary classifier to predict the deletion of incorrect tokens.

Training Similar to the AT models, all the NAT models are trained using the Adam optimizer with the same learning rate scheduler, in which the warmup steps are set to 10,000.
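The inverse_sqrt schedule above can be sketched in a few lines (a simplified version of fairseq's scheduler; fairseq additionally supports a configurable initial warmup learning rate, omitted here):

```python
def inverse_sqrt_lr(step, max_lr=5e-4, warmup_steps=4000):
    """Linear warmup to max_lr, then decay proportional to 1/sqrt(step)."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * (warmup_steps / step) ** 0.5
```

With the AT settings above, the learning rate peaks at 0.0005 after 4,000 updates and halves by update 16,000.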
We train the FlowSeq model on 32 GPUs with a batch size of 2048 sentences, while all the other models are trained on 8 GPUs with an effective batch size of 64,000 tokens. Note that the batch sizes used for training NAT models are typically larger than for AT models, which improves the final results. There are also specialized training settings for each model:

- iNAT (Lee et al., 2018): following the original paper, we train the iNAT model jointly with 4 iterations of refinement during training. For each iteration, the model learns as a denoising autoencoder with $50\%$ probability, and otherwise learns from the model's own prediction.
- InsT (Stern et al., 2019): in this work, we only consider training the Insertion Transformer (InsT) using the slot-loss based on the uniform loss function (Stern et al., 2019). That is, we assign equal probabilities to all the insertable tokens inside each slot.
- MaskT (Ghazvininejad et al., 2019): following the original paper, we train the model as a typical masked language model where the ratio of masked tokens is sampled from $0 \sim 100\%$ .
- LevT (Gu et al., 2019): in this work, we only consider sequence generation tasks, which means the training of LevT is very similar to that of InsT. We use sentences with randomly deleted tokens to learn insertion, and learn deletion based on the model's own prediction.

Decoding For a fair comparison over all the NAT models, we use greedy decoding for all the models without considering any advanced decoding methods such as searching or re-ranking with a teacher model. For the vanilla NAT and FlowSeq, decoding is straightforward and simply picks the arg max at every position. For iNAT and MaskT, we fix the decoding steps to 10. Both InsT and LevT decode in an adaptive number of iterations, and we set the maximum number of iterations for both models to 10.
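The fixed-step decoding used for iNAT and MaskT can be sketched as a mask-predict style loop. This is a toy sketch, not the exact procedure of Ghazvininejad et al. (2019); `predict` is a stand-in for the NAT decoder and returns per-position token probabilities:

```python
import numpy as np

def mask_predict(predict, length, iterations=10, mask_id=0):
    """Fixed-step iterative refinement: fill every position, then re-mask
    the least-confident tokens and predict again."""
    tokens = np.full(length, mask_id)              # start fully masked
    for t in range(iterations):
        probs = predict(tokens)                    # shape: (length, vocab)
        tokens = probs.argmax(axis=-1)
        n_mask = int(length * (1 - (t + 1) / iterations))
        if n_mask == 0:                            # final step keeps everything
            break
        confidence = probs.max(axis=-1)
        remask = np.argsort(confidence)[:n_mask]   # least confident positions
        tokens[remask] = mask_id
    return tokens
```

With the linearly decaying masking ratio above, the final iteration re-masks nothing and returns the full argmax output.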
A special EOS penalty that penalizes overly short generated sequences is tuned based on the validation set for both InsT and LevT.

For all models, final results are calculated using the tokenized BLEU score.

# B REAL DATA STATISTICS

The detailed dataset split for WMT14 En-De is shown in Tab. 6. In Fig. 7, we also plot the histogram of the conditional entropy $\mathcal{H}(\boldsymbol{y}|\boldsymbol{x})$ of each sentence pair in the real parallel data and in the data sets distilled from the big-AT, base-AT, small-AT and tiny-AT, respectively. It shows that the distributions of the sentence-level conditional entropy differ widely. The mode of $\mathcal{H}(\boldsymbol{y}|\boldsymbol{x})$ is highest for the real data, followed by the data distilled from the big-AT, base-AT, small-AT and tiny-AT. This observation aligns with the complexity value $C(d)$ proposed in §3.2.
| Dataset | Train | Valid | Test | Vocabulary |
| --- | --- | --- | --- | --- |
| WMT'14 En-De | 4,500,966 | 3000 | 3003 | 37,009 |
Table 6: Dataset statistics for WMT14 En-De.

![](images/9bd963b719387427f3856b6c6a6e4fa0dc8b9dc2b091836b515b854cb281053e.jpg)
Figure 7: Density of the conditional entropy $\mathcal{H}(\boldsymbol{y}|\boldsymbol{x})$ of each sentence pair in different distilled data sets and in the real data.

# C ADDITIONAL METRICS

In Figure 8, we also show results with different metrics together with BLEU scores, considering that BLEU scores sometimes cannot fully capture the changes in the system. We considered 5 additional metrics in our experiments: METEOR (Banerjee & Lavie, 2005), RIBES (Isozaki et al., 2010), ChrF (Popović, 2015), TER (Snover et al., 2006), and BEER (Stanojevic & Simaan, 2014). Not surprisingly, we find that all the metrics correlate quite well with the original BLEU scores, showing a similar trend as discussed earlier.

![](images/9e71881616a519ced78487f29a265c776d47a382badfd27b943f1bff03e5fd78.jpg)

![](images/d413933c6f3e021b85b9b35ee23bf6bda6a7af71c09907dcce4f0deb53059a4f.jpg)

![](images/8bfcada575d4310471340c9e988498320cf69b8b1ea6bd120b6ce45900d8acd9.jpg)

![](images/807db47b4609469a193cf5734a92899d9d7f7690a0d2f4a967c73980c6e0ff8f.jpg)
Figure 8: Performance under various metrics (BLEU ↑, METEOR ↑, RIBES ↑, ChrF ↑, TER ↓, BEER ↑) for the vanilla NAT model trained on the distilled data from the tiny, small, base and big AT models, on the WMT14 En-De newstest2014 test set.

![](images/8c73b30c1ff4cf2248106fd7551f08d665529c348e87d998da141b947f4a244f.jpg)

![](images/5fb02f7b4bed04736899a91881fb265f643c1b0e2114f690d219eaaaf80ab096.jpg)

# D SYNTHETIC DATA WITH ACCESS TO THE TRUE DISTRIBUTION

# D.1 BACKGROUND: BAYESIAN DECISION THEORY

Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification, which provides a principled rule for finding the optimal classification decision using the probabilities and losses that accompany such decisions.
In the problem of structured prediction (Ma et al., 2017), let $\mathbf{x}$ denote the input sequence and $\mathbf{y}$ denote the output label sequence. Let $\mathcal{H}$ denote all the possible hypothesis functions from the input to the output space: $\mathcal{H} = \{h : \mathcal{X} \to \mathcal{Y}\}$ . Let $r(\mathbf{y}|\mathbf{x})$ denote the conditional risk on the input $\mathbf{x}$ , which is the expected loss of predicting $\mathbf{y}$ based on the posterior probabilities:

$$
r (\boldsymbol {y} | \boldsymbol {x}) = \mathbb {E} _ {P \left(\boldsymbol {y} ^ {\prime} \mid \boldsymbol {x}\right)} [ L (\boldsymbol {y}, \boldsymbol {y} ^ {\prime}) ], \tag {5}
$$

where $L(\pmb{y}, \pmb{y}')$ is the loss function that penalizes predicting the true target $\pmb{y}'$ as $\pmb{y}$ . The classification task aims to find a hypothesis function $h$ that minimizes the overall risk $R$ given by

$$
R (h) = \mathbb {E} _ {P (\boldsymbol {x})} [ r (h (\boldsymbol {x}) | \boldsymbol {x}) ] \tag {6}
$$

The minimum of $R(h)$ over all hypotheses is known as the Bayes risk. To minimize the overall risk, we clearly need to minimize the conditional risk for each input $\pmb{x}$ . The Bayesian decision rule states that the global minimum of $R(h)$ is achieved when the classifier makes predictions that minimize each conditional risk given $\pmb{x}$ , and this gives the Bayes optimal classifier:

$$
h ^ {*} (\boldsymbol {x}) = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \min } r (\boldsymbol {y} | \boldsymbol {x}) \tag {7}
$$

Let us consider two loss functions for Eq. 5. The first is the sequence-level loss $L_{seq}(\pmb{y}, \pmb{y}') = 1 - \mathbb{I}(\pmb{y} = \pmb{y}')$ ; in this case the Bayes classifier is

$$
h _ {\text{seq}} ^ {*} (\boldsymbol {x}) = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \max } P (\boldsymbol {y} | \boldsymbol {x}), \tag {8}
$$

which selects the most probable output label sequence given the input sequence $\pmb{x}$ .
Second, let us consider the token-level loss $L_{tok}(\pmb{y}, \pmb{y}') = \sum_{t=1}^{T} 1 - \mathbb{I}(y_t = y'_t)$ , i.e. the sum of the zero-one losses at each time step. We have:

$$
\begin{array}{l} h _ {\text{tok}} ^ {*} (\boldsymbol {x}) = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \min } \mathbb {E} _ {P (\boldsymbol {y} ^ {\prime} | \boldsymbol {x})} [ L _ {tok} (\boldsymbol {y}, \boldsymbol {y} ^ {\prime}) ] \\ = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \max } \mathbb {E} _ {P (\boldsymbol {y} ^ {\prime} | \boldsymbol {x})} \left[ \sum_ {t = 1} ^ {T} \mathbb {I} \left(y _ {t} = y _ {t} ^ {\prime}\right) \right] \\ = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \max } \sum_ {t = 1} ^ {T} \mathbb {E} _ {P \left(\boldsymbol {y} ^ {\prime} \mid \boldsymbol {x}\right)} \left[ \mathbb {I} \left(y _ {t} = y _ {t} ^ {\prime}\right) \right] \tag {9} \\ = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \max } \sum_ {t = 1} ^ {T} \mathbb {E} _ {P (y _ {t} ^ {\prime} | \boldsymbol {x})} [ \mathbb {I} (y _ {t} = y _ {t} ^ {\prime}) ] \\ = \underset {\boldsymbol {y} \in \mathcal {Y}} {\arg \max } \prod_ {t = 1} ^ {T} P (y _ {t} | \boldsymbol {x}) \end{array}
$$

This suggests that the Bayes classifier finds the most probable label at each time step given the input sequence.

# D.2 EXPERIMENTAL SETUPS AND ANALYSIS

To study how training data affects the performance of a weaker classifier, we construct a Hidden Markov Model (HMM) by sampling the parameters of the transition and emission probabilities uniformly within $(0,a]$ and $(0,b]$ respectively. Higher values of $a$ and $b$ indicate an HMM with higher uncertainty. We refer to this HMM as the "true HMM" and use it as our real data generator.
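Before turning to the experiments, a toy distribution makes the gap between the two Bayes classifiers concrete (the numbers here are illustrative, not from the paper): the most probable sequence need not agree with the per-position argmax.

```python
# Toy conditional distribution P(y | x) over length-2 label sequences
P = {("A", "B"): 0.4, ("B", "A"): 0.3, ("B", "B"): 0.3}

# Sequence-level Bayes classifier: the single most probable sequence
h_seq = max(P, key=P.get)

# Token-level Bayes classifier: argmax of the marginal at each position
h_tok = tuple(
    max("AB", key=lambda v: sum(p for y, p in P.items() if y[t] == v))
    for t in range(2)
)

print(h_seq)  # ('A', 'B')
print(h_tok)  # ('B', 'B'): marginals are P(y_1=B)=0.6 and P(y_2=B)=0.7
```

Here the token-level classifier outputs a sequence with probability only 0.3, yet it minimizes the expected token-level loss.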
Next, we consider a weaker classifier that uses a low-dimensional bidirectional LSTM (Bi-LSTM) to encode the input sequence and individual softmax functions at each time step to predict labels independently; we refer to it as the "Bi-LSTM" classifier. Obviously, the Bi-LSTM classifier is not able to model the dependencies between output labels embedded in the HMM, and it is equivalent to a simplified non-autoregressive generation model.

We generate the real training data $D_{real} = \{(x_1, y_1), \dots, (x_N, y_N)\}$ of size $N$ by sampling from the joint probability of the true HMM. Similarly, we sample $N_{test}$ data points as the test data and $N_{valid}$ data points as the validation data. We evaluate the classifier's token-level accuracy $tacc$ and sequence-level accuracy $sacc$ on the test data, where $tacc = \frac{\sum_{i=1}^{N_{test}} \sum_{t=1}^{T} \mathbb{I}(h(\boldsymbol{x}_i)^t = \boldsymbol{y}_i^t)}{T \times N_{test}}$ and $sacc = \frac{\sum_{i=1}^{N_{test}} \mathbb{I}(h(\boldsymbol{x}_i) = \boldsymbol{y}_i)}{N_{test}}$ . These two metrics correspond to the token-level loss $L_{tok}$ and the sequence-level loss $L_{seq}$ on each data point of the test data.

First, we use $h_{seq}^{*}(\pmb{x})$ to generate the distillation labels $\pmb{y}'$ from the true HMM, which corresponds to applying Viterbi decoding to each $\pmb{x}_i$ in $D_{real}$ . The training data set $D_{seq}$ is created with $(\pmb{x}_i, \pmb{y}_i')$ . Next, we use $h_{tok}^{*}(\pmb{x})$ to generate the distillation labels $\hat{\pmb{y}}$ and create the training data $D_{tok}$ of $(\pmb{x}_i, \hat{\pmb{y}}_i)$ . To generate $\hat{\pmb{y}}$ , we apply the forward-backward algorithm to each $\pmb{x}_i$ in $D_{real}$ and obtain $P(y_i^t | x_i)$ ; we then take the arg max over the label space $\mathcal{L}$ : $\hat{y}_i^t = \arg \max_{y_i^t \in \mathcal{L}} P(y_i^t | x_i)$ .

We use these three training sets $(D_{real}, D_{tok}, D_{seq})$ to train the Bi-LSTM classifier respectively. We repeat the experiment 50 times by constructing 50 HMMs with different random seeds as the data generators. We find that when evaluating with the token-level accuracy $tacc$ , models trained with $D_{tok}$ yield the best performance (the Bi-LSTM trained with $D_{tok}$ wins in $97.6\%$ of the runs); when evaluating with the sequence-level accuracy $sacc$ , models trained with $D_{seq}$ yield the best performance (the Bi-LSTM trained with $D_{seq}$ wins in $98.5\%$ of the runs). This is because the Bi-LSTM classifier has difficulty modeling the true data distribution defined by an HMM. On the other hand, it is easier for the Bi-LSTM classifier to model the distributions of $D_{seq}$ and $D_{tok}$ : these data sets define deterministic conditional distributions over the input data, which are much simpler than the real data distribution. By definition, $D_{tok}$ is created by the optimal Bayes classifier $h_{tok}^{*}(\boldsymbol{x})$ ; this means that the Bi-LSTM classifier trained with $D_{tok}$ can better capture the distribution $P(y_t|\boldsymbol{x}) = \max_{u_t} P(u_t|\boldsymbol{x})$ , which generalizes better to the test data when evaluated with the token-level accuracy. Similarly, the Bi-LSTM trained with $D_{seq}$ performs better on the test data under the sequence-level metric.

This corroborates our observation in the machine translation task that NAT has difficulty modeling the real conditional distribution of true sentence pairs. However, when using distilled data translated from a pretrained autoregressive model with beam-search decoding, it performs better on the test set when evaluated with the BLEU score metric.
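The two distillation schemes above can be sketched for a toy HMM in NumPy (a minimal sketch; numerical stability for long sequences, e.g. working in log space, is ignored):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable state sequence under the HMM (the D_seq labels)."""
    T, S = len(obs), len(pi)
    delta = np.zeros((T, S)); back = np.zeros((T, S), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # (from state, to state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [delta[-1].argmax()]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

def posterior_argmax(obs, pi, A, B):
    """Per-position argmax of P(y_t | x) via forward-backward (the D_tok labels)."""
    T, S = len(obs), len(pi)
    fwd = np.zeros((T, S)); bwd = np.ones((T, S))
    fwd[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        fwd[t] = (fwd[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        bwd[t] = A @ (B[:, obs[t + 1]] * bwd[t + 1])
    return list((fwd * bwd).argmax(axis=1))
```

Running `viterbi` on each $\pmb{x}_i$ yields the $D_{seq}$ labels, while `posterior_argmax` yields the $D_{tok}$ labels; on ambiguous observation sequences the two can disagree.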
# UNDERSTANDING $\ell^4$ -BASED DICTIONARY LEARNING: INTERPRETATION, STABILITY, AND ROBUSTNESS

Yuexiang Zhai $^{1,2*}$

Hermish Mehta

Zhengyuan Zhou

Yi Ma

$^{1}$ Department of EECS, UC Berkeley

$^{2}$ ByteDance Inc.
$^{3}$ Stern School of Business, NYU

# ABSTRACT

Recently, the $\ell^4$ -norm maximization has been proposed to solve the sparse dictionary learning (SDL) problem. The simple MSP (matching, stretching, and projection) algorithm proposed by Zhai et al. (2019a) has been shown to be surprisingly efficient and effective. This paper aims to better understand this algorithm from its strong geometric and statistical connections with the classic PCA and ICA, as well as their associated fixed-point style algorithms. Such connections provide a unified way of viewing problems that pursue principal, independent, or sparse components of high-dimensional data. Our studies reveal additional good properties of $\ell^4$ -maximization: not only is the MSP algorithm for sparse coding insensitive to small noise, but it is also robust to outliers and resilient to sparse corruptions. We provide statistical justification for such inherently desirable properties. To corroborate the theoretical analysis, we also provide extensive and compelling experimental evidence with both synthetic data and real images.

# 1 INTRODUCTION

The explosion of massive amounts of high-dimensional data has become the modern-day norm for a large number of scientific and engineering disciplines and hence presents a daunting challenge for both computation and learning. Rising to this challenge, sparse dictionary learning (SDL) provides a potent framework in representation learning that exploits the blessing of dimensionality: real data tends to lie in or near some low-dimensional subspaces or manifolds, even though the ambient dimension is often extremely large (e.g. the number of raw pixels in an image). More specifically, SDL (Olshausen & Field (1997); Mairal et al. (2008; 2012; 2014); Spielman et al. (2012); Sun et al. (2015); Bai et al. (2018); Qu et al.
(2019)) concerns the problem of learning a compact, sparse representation from raw, unlabelled data: given a data matrix $\mathbf{Y} = [\pmb{y}_1, \pmb{y}_2, \dots, \pmb{y}_p] \in \mathbb{R}^{n \times p}$ that contains $p$ $n$ -dimensional samples, one aims to find a linear transformation (i.e. a dictionary) $\pmb{D} \in \mathbb{R}^{n \times m}$ and an associated maximally sparse representation $\pmb{X} = [\pmb{x}_1, \pmb{x}_2, \dots, \pmb{x}_p] \in \mathbb{R}^{m \times p}$ that satisfies

$$
\boldsymbol {Y} = \boldsymbol {D} \boldsymbol {X}. \tag {1}
$$

As the data matrix $\mathbf{Y}$ can represent a variety of signals (e.g. images, audio, language, and genetics) in practical applications, SDL provides a versatile structure-seeking formulation that has found widespread applications in computational neuroscience, image processing, computer vision, and machine learning at large (Olshausen & Field, 1996; 1997; Argyriou et al., 2008; Ranzato et al., 2007; Elad & Aharon, 2006; Wright et al., 2008; Yang et al., 2010; Zhang et al., 2013; Mairal et al., 2014; Zhang et al., 2014; 2019).

Related Works. Motivated by this practical significance, there has been a growing surge of interest recently (e.g. Rambhatla et al. (2019); Bai et al. (2018); Gilboa et al. (2018); Nguyen et al. (2018); Chatterji & Bartlett (2017); Mensch et al. (2016)) in tackling SDL. In attempts to recover the sparse signals $\mathbf{X}$ , these existing works adopt an $\ell^0$ - or $\ell^1$ -penalty function to promote the underlying sparsity and give various optimization algorithms for the resulting objectives (some of those are heuristics while a few others have theoretical convergence guarantees). Although these penalty functions are indeed sparsity-promoting, the resulting optimization problems must be solved one row at a time, hence resulting in as many optimization problems as the ambient dimension $n$ .
Consequently, $\ell^0$ - or $\ell^1$ -based objectives result only in local methods (i.e. they cannot yield the entire solution at once) and hence entail a prohibitive computational burden. Another prominent approach in SDL is Sum-of-Squares (SOS), proposed and articulated in a series of recent works (Barak et al. (2015); Ma et al. (2016); Schramm & Steurer (2017)). The key idea there is to utilize the properties of higher order SOS polynomials to correctly recover one column of the dictionary at a time, of which there are $m$ in total. However, the computational complexity of these recovery methods is quasi-polynomial, thus again resulting in large computational expense.

Very recently, in the complete dictionary learning $^{1}$ setting, a novel global approach has been suggested in Zhai et al. (2019a;b) that presents a formulation which can efficiently recover the sparse signal matrix $\mathbf{X}$ once and for all. In particular, Zhai et al. (2019b) has shown that if the generative model for $\mathbf{Y} = \mathbf{D}_o\mathbf{X}_o \in \mathbb{R}^{n \times p}$ satisfies that $\mathbf{D}_o \in \mathcal{O}(n; \mathbb{R})$ is orthonormal and $\mathbf{X}_o \in \mathbb{R}^{n \times p}$ is Bernoulli-Gaussian sparse, $^{2}$ then maximizing the $\ell^4$ -norm $^{3}$ of $\mathbf{A}\mathbf{Y}$ over $\mathcal{O}(n; \mathbb{R})$ :

$$
\max _ {\boldsymbol {A}} \frac {1}{4} \| \boldsymbol {A} \boldsymbol {Y} \| _ {4} ^ {4} \quad \text {subject to} \quad \boldsymbol {A} \in \mathcal {O} (n; \mathbb {R}) \quad (\text {or } \boldsymbol {A} \boldsymbol {A} ^ {*} = \boldsymbol {I}), \tag {2}
$$

is able to find the ground truth dictionary $D_{o}$ up to an arbitrary signed permutation. Moreover, Zhai et al.
(2019b) has proposed the simple "Matching, Stretching, and Projection" (MSP) algorithm, which has been shown experimentally to be efficient and effective for solving the program in equation 2:

$$
\mathbf {M S P}: \quad \boldsymbol {A} _ {t + 1} = \mathcal {P} _ {\mathrm {O} (n; \mathbb {R})} \left[ (\boldsymbol {A} _ {t} \boldsymbol {Y}) ^ {\circ 3} \boldsymbol {Y} ^ {*} \right] = \boldsymbol {U} _ {t} \boldsymbol {V} _ {t} ^ {*}, \tag {3}
$$

where $\pmb{U}_t\pmb{V}_t^*$ are from the singular value decomposition: $\pmb{U}_t\Sigma_t\pmb{V}_t^* = \mathrm{SVD}[(\pmb{A}_t\pmb{Y})^{\circ 3}\pmb{Y}^*]$ .

In this paper, we give an alternative (arguably simpler and more revealing) derivation of the MSP algorithm (3). Consider the Lagrangian formulation of the constrained optimization problem given in equation 2. The necessary condition for critical points, $\nabla_{\mathbf{A}}\frac{1}{4}\| \mathbf{A}\mathbf{Y}\|_{4}^{4} = \nabla_{\mathbf{A}}\langle \mathbf{\Lambda},\mathbf{A}\mathbf{A}^{*} - \mathbf{I}\rangle$ for some Lagrangian multiplier $\pmb{\Lambda}$ , implies:

$$
\left(\boldsymbol {A} \boldsymbol {Y}\right) ^ {\circ 3} \boldsymbol {Y} ^ {*} = \left(\boldsymbol {\Lambda} + \boldsymbol {\Lambda} ^ {*}\right) \boldsymbol {A}. \tag {4}
$$

As the optimization is over the orthogonal group $\mathrm{O}(n; \mathbb{R})$ , restricting the condition in equation 4 onto the orthogonal group yields a necessary condition for any critical point $\pmb{A}$ :4

$$
\mathcal {P} _ {\mathrm {O} (n; \mathbb {R})} \left[ (\boldsymbol {A} \boldsymbol {Y}) ^ {\circ 3} \boldsymbol {Y} ^ {*} \right] = \boldsymbol {A}. \tag {5}
$$

Hence the critical point $\mathbf{A}$ can be viewed as a "fixed point" of the map $\mathcal{P}_{\mathrm{O}(n;\mathbb{R})}[((\cdot)\mathbf{Y})^{\circ 3}\mathbf{Y}^{*}]$ from $\mathrm{O}(n;\mathbb{R})$ to itself, and the MSP algorithm in equation 3 iterates this map to find its fixed point(s).
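One MSP iteration takes only a few lines of NumPy, and on synthetic data matching the generative model of equation 2 one can check the recovery up to a signed permutation. The dimensions, sparsity level ($\theta = 0.2$), and iteration count below are illustrative:

```python
import numpy as np

def msp_step(A, Y):
    """One MSP iteration: form (A Y)^{o3} Y^T (matching and stretching),
    then project onto the orthogonal group via the polar factor U V^T."""
    M = (A @ Y) ** 3 @ Y.T                 # elementwise cube, then matching
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

rng = np.random.default_rng(0)
n, p, theta = 10, 2000, 0.2
D_o, _ = np.linalg.qr(rng.standard_normal((n, n)))                # orthogonal dictionary
X_o = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)  # Bernoulli-Gaussian
Y = D_o @ X_o

A, _ = np.linalg.qr(rng.standard_normal((n, n)))                  # random orthogonal init
for _ in range(50):
    A = msp_step(A, Y)
# A @ D_o should now be close to a signed permutation matrix, i.e. every
# row of |A @ D_o| should have one entry near 1
recovery = np.abs(A @ D_o).max(axis=1).min()
```

Each iterate stays exactly orthogonal by construction, since the projection returns the polar factor of an SVD.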
Notice that the orthonormality constraint $A \in \mathcal{O}(n; \mathbb{R})$ in equation 2 can be viewed as enforcing the orthogonality of $n$ unit vectors simultaneously. So, more flexibly and generally, one may choose to compute any $k$ leading orthonormal basis vectors of $D_o$ , for $1 \leq k \leq n$ , by solving the program:

$$
\max _ {\boldsymbol {W}} \frac {1}{4} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} \quad \text {subject to} \quad \boldsymbol {W} \in \operatorname {S t} (n, k; \mathbb {R}) \subset \mathbb {R} ^ {n \times k}, \tag {6}
$$

where $\mathsf{St}(n,k;\mathbb{R})$ is the Stiefel manifold. The orthogonal group $\mathrm{O}(n;\mathbb{R})$ and the unit sphere $\mathbb{S}^{n - 1}$ can be viewed as two special cases of the Stiefel manifold $\mathsf{St}(n,k;\mathbb{R})$ , with $k = n$ and $k = 1$ , respectively. In some specific tasks such as dictionary learning and blind deconvolution, optimization over the unit sphere has been widely practiced, such as in Sun et al. (2015); Bai et al. (2018); Zhang et al. (2018); Kuo et al. (2019). The more general setting of maximizing a convex function over a compact set has also been studied by Journée et al. (2010) in the context of sparse PCA, which provides convergence guarantees for this class of programs.

Our Contributions. Our contributions are twofold. First, by taking a suitable analytical angle, we reveal novel geometric and statistical connections between PCA, ICA, and the $\ell^4$ -norm maximization based SDL. We then show that, algorithm-wise, the fixed-point type MSP algorithm for $\ell^4$ -norm maximization has the same nature as the classic power-iteration method for PCA (Jolliffe, 2011) and the FastICA algorithm for ICA (Hyvärinen & Oja, 1997).
This interpretation gives a unified view of problems that pursue principal, independent, or sparse components from high-dimensional data and enriches our understanding of low-dimensional structure recovery frameworks, classical and new, on both the formulation and algorithmic fronts.

Second, and more importantly from a practical perspective, we examine how MSP performs under a variety of more realistic conditions, when the measurements $\mathbf{Y}$ could be contaminated with noise, outliers, or sparse corruptions. We show that, similar to PCA, $\ell^4$ -norm maximization and the MSP algorithm are inherently stable to small noise. Somewhat surprisingly though, unlike PCA, the MSP algorithm is further robust to outliers and resilient to sparse gross errors! We provide characterizations of these desirable properties of MSP. The claims are further corroborated with extensive experiments on both synthetic data and real images. Taken as a whole, our results contribute to the broad landscape of dictionary learning by affirming that $\ell^4$ -maximization based SDL and the corresponding global algorithm MSP provide a valuable toolkit to the existing literature.

# 2 SDL VERSUS PCA AND ICA

# 2.1 PURSUIT OF PRINCIPAL, INDEPENDENT, OR SPARSE COMPONENTS

Relation with the Geometric Interpretation of PCA. For a data matrix $\mathbf{Y} \in \mathbb{R}^{n \times p}$ , Principal Component Analysis (PCA), which aims to find the top (or top $k$ ) left singular vector(s) of $\mathbf{Y}$ ,

$$
\max _ {\boldsymbol {W}} \frac {1}{2} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {F} ^ {2} \quad \text {subject to} \quad \boldsymbol {W} \in \mathrm {S t} (n, k; \mathbb {R}), \tag {7}
$$

can be understood as finding a direction (a $k$ -dimensional subspace) in $\operatorname{row}(\mathbf{Y})$ in which $\mathbf{Y}$ has the largest $\ell^2$ -norm (Frobenius norm).
For instance, finding the direction with the largest $\ell^2$ -norm over the unit sphere can be viewed as calculating the spectral norm (or the largest singular value) of matrix $\mathbf{Y}$ . In comparison, we may view equation 6:

$$
\max _ {\boldsymbol {W}} \frac {1}{4} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} \quad \text {subject to} \quad \boldsymbol {W} \in \mathrm {S t} (n, k; \mathbb {R})
$$

as finding a direction, or a $k$ -dimensional subspace, in $\operatorname{row}(\mathbf{Y})$ where the projection of $\mathbf{Y}$ has the largest $\ell^4$ -norm. For instance, finding the direction with the largest $\ell^4$ -norm over the unit sphere (equation 6) can be viewed as calculating the induced $\| \cdot \|_{2,4}$ norm of matrix $\mathbf{Y}$ : $\| \mathbf{Y} \|_{2,4} \doteq \max_{\mathbf{a} \in \mathbb{S}^{n-1}} \| \mathbf{a}^* \mathbf{Y} \|_4$ .

Relation with the Statistical Interpretation of PCA. View each column $y_{j}, j \in [p]$ of the data matrix $\mathbf{Y} \in \mathbb{R}^{n \times p}$ as an $n$ -dimensional random vector drawn i.i.d. from the distribution of a random variable $\mathbf{y}$ , and let $Y_{c}$ denote the centered $\mathbf{Y}$ : $Y_{c} \doteq \mathbf{Y}[I - \frac{1}{p}\mathbf{11}^{*}]$ , where $\mathbf{1} \in \mathbb{R}^{p}$ is a vector of all 1's. Then, finding the top $k$ principal components of $Y_{c}$ : $\max_{\mathbf{W} \in \mathrm{St}(n,k;\mathbb{R})} \frac{1}{2} \| \mathbf{W}^{*}\mathbf{Y}_{c} \|_{F}^{2}$ is to find $k$ uncorrelated projections of $\mathbf{y} \in \mathbb{R}^{n}$ that have the top $k$ sample variances (i.e. $2^{\mathrm{nd}}$ order moments) (Jolliffe, 2011; Helwig, 2017).
Similar to PCA, the $\ell^{4}$ -norm maximization of the centered data matrix $Y_{c}$ : $\max_{\mathbf{W} \in \mathrm{St}(n,k;\mathbb{R})} \frac{1}{4} \| \mathbf{W}^{*}\mathbf{Y}_{c} \|_{4}^{4}$ can be viewed as finding $k$ uncorrelated projections of $\mathbf{y}$ that have the top $k$ sample $4^{\mathrm{th}}$ order moments, whose statistical meaning is better revealed below.

Relation with ICA and Nonnormality. The $\ell^4$ -norm maximization over the Stiefel manifold is strongly related to finding the maximal or minimal kurtosis in Independent Component Analysis (ICA) (Hyvarinen & Oja, 1997; 2000): in order to identify one component of a given random vector $\pmb{y} \in \mathbb{R}^n$ , ICA aims to find a unit vector (a direction) $\pmb{w} \in \mathbb{S}^{n-1}$ that maximizes or minimizes the kurtosis of $\pmb{w}^*\pmb{y}$ , defined as:

$$
\operatorname {kurt} \left(\boldsymbol {w} ^ {*} \boldsymbol {y}\right) = \mathbb {E} \left(\boldsymbol {w} ^ {*} \boldsymbol {y}\right) ^ {4} - 3 \| \boldsymbol {w} \| _ {2} ^ {4}. \tag {8}
$$

Kurtosis is widely used for evaluating the nonnormality of a random variable; see DeCarlo (1997); Hyvarinen & Oja (1997; 2000). According to Huber (1985), the nonnormality of data carries "abnormal", hence interesting, information in real data for many applications (e.g. Lee et al. (2003); Cain et al. (2017)). Thus, extracting the $4^{\text{th}}$ order moment helps understand such statistics of real datasets (Hyvarinen et al., 2009) and even their topology (Carlsson, 2009). One may also find that the $\ell^4$ -maximization based dictionary learning formulation is similar to maximizing kurtosis (equation 8) with the spherical constraint $\boldsymbol{w} \in \mathbb{S}^{n-1}$ ; in fact, the two formulations are exactly the same when one only seeks a single column of the dictionary.
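This connection is easy to probe numerically: a Gaussian variable has excess kurtosis near zero, while a sparse Bernoulli-Gaussian variable (as in the SDL model above) has large positive kurtosis. The sparsity level $\theta = 0.1$ below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 1_000_000
gauss = rng.standard_normal(p)                  # Gaussian samples
sparse = gauss * (rng.random(p) < 0.1)          # Bernoulli-Gaussian, theta = 0.1

def excess_kurtosis(x):
    """E[x^4] - 3 (E[x^2])^2 for a zero-mean sample, cf. equation 8."""
    return np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2

k_gauss = excess_kurtosis(gauss)
# For Bernoulli-Gaussian: E[x^2] = theta, E[x^4] = 3 theta, so the excess
# kurtosis is 3 theta - 3 theta^2 = 0.27 for theta = 0.1
k_sparse = excess_kurtosis(sparse)
```

The sparser the variable (smaller $\theta$ relative to its variance), the larger its normalized kurtosis, which is exactly the "spiky" statistic that both ICA and $\ell^4$ -maximization seek.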
Intuitively, this coincidence occurs because maximizing the $\ell^4$-norm and maximizing kurtosis have the same effect: they both promote the "spikiness" (Zhang et al. (2018); Li & Bresler (2018)) (or "peak" (DeCarlo, 1997)) of a distribution. A more rigorous analysis of the similarity between $\ell^4$-based dictionary learning and ICA is beyond the scope of this paper, and we leave it for future research.

# 2.2 FIXED-POINT STYLE ALGORITHMS

In optimization terms, the $\ell^4$-norm maximization in equation 6 over the Stiefel manifold $\mathrm{St}(n,k;\mathbb{R})$ is a special type of nonconvex optimization problem: convex maximization over a compact set. Although Journée et al. (2010); Zhai et al. (2019b) have shown that the MSP algorithm is guaranteed to find critical points, the experiments in Zhai et al. (2019b) suggest that the MSP algorithm finds global maxima of the $\ell^4$-norm efficiently and effectively. For better understanding, in this section we illustrate some striking similarities between the MSP algorithm and the "power-iteration" type algorithms for solving PCA and ICA.

Fixed-point Perspective of Power Iteration. For a general data matrix $\mathbf{Y} \in \mathbb{R}^{n \times p}$, finding the top singular value of $\mathbf{Y}$ is equivalent to solving the following optimization problem:

$$
\max_{\boldsymbol{w}} \varphi(\boldsymbol{w}) \doteq \frac{1}{2}\|\boldsymbol{w}^{*}\boldsymbol{Y}\|_{2}^{2} \quad \text{subject to} \quad \boldsymbol{w} \in \mathbb{S}^{n-1}. \tag{9}
$$

For this constrained optimization, the Lagrange multiplier method gives the necessary condition $\nabla_{\boldsymbol{w}}\varphi(\boldsymbol{w}) = \boldsymbol{Y}\boldsymbol{Y}^{*}\boldsymbol{w} = \lambda\boldsymbol{w}$, similar to equation 4. If we restrict this condition to the sphere, we obtain the fixed-point condition $\boldsymbol{w} = \mathcal{P}_{\mathbb{S}^{n-1}}[\nabla_{\boldsymbol{w}}\varphi(\boldsymbol{w})]$.
The classic power-iteration method,

$$
\boldsymbol{w}_{t+1} = \mathcal{P}_{\mathbb{S}^{n-1}}\left[\nabla_{\boldsymbol{w}}\varphi\left(\boldsymbol{w}_{t}\right)\right] = \frac{\boldsymbol{Y}\boldsymbol{Y}^{*}\boldsymbol{w}_{t}}{\left\|\boldsymbol{Y}\boldsymbol{Y}^{*}\boldsymbol{w}_{t}\right\|_{2}}, \tag{10}
$$

is precisely computing this fixed point; it is arguably the most efficient and widely used algorithm for solving equation 9, i.e., for PCA (or computing the SVD of $\mathbf{Y}$).

Fixed-point Perspective of FastICA. To maximize (or minimize) the kurtosis over $\mathbb{S}^{n-1}$,

$$
\max_{\boldsymbol{w}} \psi(\boldsymbol{w}) \doteq \frac{1}{4}\operatorname{kurt}[\boldsymbol{w}^{*}\boldsymbol{y}] = \frac{1}{4}\mathbb{E}[\boldsymbol{w}^{*}\boldsymbol{y}]^{4} - \frac{3}{4}\|\boldsymbol{w}\|_{2}^{4} \quad \text{subject to} \quad \boldsymbol{w} \in \mathbb{S}^{n-1}, \tag{11}
$$

Hyvarinen & Oja (1997) proposed the following fixed-point type iteration:

$$
\boldsymbol{w}_{t+1} = \mathcal{P}_{\mathbb{S}^{n-1}}\left[\nabla_{\boldsymbol{w}}\psi\left(\boldsymbol{w}_{t}\right)\right] = \frac{\mathbb{E}\left[\boldsymbol{y}\left(\boldsymbol{y}^{*}\boldsymbol{w}_{t}\right)^{3}\right] - 3\|\boldsymbol{w}_{t}\|_{2}^{2}\boldsymbol{w}_{t}}{\left\|\mathbb{E}\left[\boldsymbol{y}\left(\boldsymbol{y}^{*}\boldsymbol{w}_{t}\right)^{3}\right] - 3\|\boldsymbol{w}_{t}\|_{2}^{2}\boldsymbol{w}_{t}\right\|_{2}}, \tag{12}
$$

which enjoys a cubic (at least quadratic) rate of convergence under the ICA model assumption.

Fixed-point Perspective of MSP.
For the $\ell^4$-norm maximization program

$$
\max_{\boldsymbol{W}} \phi(\boldsymbol{W}) \doteq \frac{1}{4}\|\boldsymbol{W}^{*}\boldsymbol{Y}\|_{4}^{4} \quad \text{subject to} \quad \boldsymbol{W} \in \mathrm{St}(n,k;\mathbb{R}),
$$

through a derivation similar to that in Section 1, one can show that the MSP iteration in equation 5 for the orthogonal group generalizes to the Stiefel manifold case as

$$
\boldsymbol{W}_{t+1} = \mathcal{P}_{\mathrm{St}(n,k;\mathbb{R})}\left[\nabla_{\boldsymbol{W}}\phi(\boldsymbol{W}_{t})\right] = \boldsymbol{U}_{t}\boldsymbol{V}_{t}^{*}, \tag{13}
$$

where $\mathbf{U}_t\boldsymbol{\Sigma}_t\mathbf{V}_t^{*} = \mathrm{SVD}\left[\mathbf{Y}(\mathbf{Y}^{*}\mathbf{W}_t)^{\circ 3}\right]$ and $(\cdot)^{\circ 3}$ denotes the elementwise cube. The above iteration has the same nature as the power iterations in equation 10 and equation 12: they all solve a fixed-point type problem by projecting the gradient of the objective function, $\nabla\varphi(\cdot)$, $\nabla\psi(\cdot)$, or $\nabla\phi(\cdot)$, onto the constraint manifold, $\mathbb{S}^{n-1}$ or $\mathrm{St}(n,k;\mathbb{R})$, respectively. Table 1 summarizes these striking similarities.
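As an illustration, one MSP iteration (equation 13) takes only a few lines of NumPy. This is a minimal sketch with assumed sizes and a random initialization, not the reference implementation of Zhai et al. (2019b):

```python
import numpy as np

def msp(Y, k, iters=50, seed=0):
    """MSP iteration W_{t+1} = U_t V_t^*, where U_t S_t V_t^* = SVD[Y (Y^T W_t)**3]
    (elementwise cube), projecting the gradient back onto St(n, k; R)."""
    rng = np.random.default_rng(seed)
    # Random initialization on the Stiefel manifold St(n, k; R).
    W, _ = np.linalg.qr(rng.standard_normal((Y.shape[0], k)))
    for _ in range(iters):
        G = Y @ (Y.T @ W) ** 3                       # gradient of (1/4)||W^T Y||_4^4
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        W = U @ Vt                                   # polar projection onto St(n, k; R)
    return W
```

For an orthogonal ground-truth dictionary and Bernoulli-Gaussian codes, $\boldsymbol{W}^{*}\boldsymbol{D}_{o}$ typically approaches a signed permutation within a few dozen iterations.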
| | Objectives | Constraint Sets | Algorithms |
| --- | --- | --- | --- |
| Power Iteration | $\varphi(\boldsymbol{w}) \doteq \frac{1}{2}\Vert\boldsymbol{w}^{*}\boldsymbol{Y}\Vert_{2}^{2}$ | $\boldsymbol{w} \in \mathbb{S}^{n-1}$ | $\boldsymbol{w}_{t+1} = \mathcal{P}_{\mathbb{S}^{n-1}}[\nabla_{\boldsymbol{w}}\varphi(\boldsymbol{w}_{t})]$ |
| FastICA | $\psi(\boldsymbol{w}) \doteq \frac{1}{4}\operatorname{kurt}[\boldsymbol{w}^{*}\boldsymbol{y}]$ | $\boldsymbol{w} \in \mathbb{S}^{n-1}$ | $\boldsymbol{w}_{t+1} = \mathcal{P}_{\mathbb{S}^{n-1}}[\nabla_{\boldsymbol{w}}\psi(\boldsymbol{w}_{t})]$ |
| MSP | $\phi(\boldsymbol{W}) \doteq \frac{1}{4}\Vert\boldsymbol{W}^{*}\boldsymbol{Y}\Vert_{4}^{4}$ | $\boldsymbol{W} \in \mathrm{St}(n,k;\mathbb{R})$ | $\boldsymbol{W}_{t+1} = \mathcal{P}_{\mathrm{St}(n,k;\mathbb{R})}[\nabla_{\boldsymbol{W}}\phi(\boldsymbol{W}_{t})]$ |
Table 1: Similarities among fixed-point algorithms for Power Iteration, FastICA, and MSP.

# 3 STABILITY AND ROBUSTNESS OF $\ell^4$-NORM MAXIMIZATION

Even though the MSP algorithm for $\ell^4$-norm maximization is similar to power iteration for PCA, in real applications PCA often requires modifications to improve its robustness (Candès et al., 2011; Xu et al., 2010; 2012). In this section, we examine the stability and robustness of $\ell^4$-maximization under different types of imperfect measurement models: small noise, outliers, and sparse corruptions of large magnitude.

# 3.1 DIFFERENT MODELS FOR IMPERFECT MEASUREMENTS

We adopt the same Bernoulli-Gaussian model as in prior works (Spielman et al., 2012; Sun et al., 2015; Bai et al., 2018; Zhai et al., 2019b) to test the stability and robustness of the $\ell^4$-maximization framework. Assume our clean observation matrix $\mathbf{Y} \in \mathbb{R}^{n \times p}$ is the product of a ground-truth orthogonal dictionary $\boldsymbol{D}_o$ and a Bernoulli-Gaussian matrix $\boldsymbol{X}_o \in \mathbb{R}^{n \times p}$:

$$
\boldsymbol{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}, \quad \boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R}), \quad \{\boldsymbol{X}_{o}\}_{i,j} \sim_{iid} \operatorname{BG}(\theta). \tag{14}
$$

Now assume we only observe different types of imperfect measurements of $\mathbf{Y}$:

Noisy Measurements: $\mathbf{Y}_N \coloneqq \mathbf{Y} + \mathbf{G}$, where $\mathbf{G} \in \mathbb{R}^{n \times p}$ is a matrix with $g_{i,j} \sim_{iid} \mathcal{N}(0, \eta^2)$, and $\eta > 0$ controls the variance of the noise.

Measurements with Outliers: $\mathbf{Y}_O \coloneqq [\mathbf{Y}, \mathbf{G}']$, where $\mathbf{Y}_O$ contains extra columns $\mathbf{G}' \in \mathbb{R}^{n \times \tau p}$ generated from an independent Gaussian process $g_{i,j}' \sim_{iid} \mathcal{N}(0,1)$. Here, $\tau$ controls the portion of the outliers w.r.t.
the clean data size $p$.

Measurements with Sparse Corruptions: $\mathbf{Y}_C \coloneqq \mathbf{Y} + \sigma\mathbf{B} \circ \mathbf{S}$, where $\sigma > 0$ controls the scale of the corrupting entries, $\mathbf{B} \in \mathbb{R}^{n \times p}$ is a Bernoulli matrix with $b_{i,j} \sim_{iid} \operatorname{Ber}(\beta)$ and $\beta \in (0,1)$ controlling the ratio of the sparse corruptions, and $\mathbf{S} \in \mathbb{R}^{n \times p}$ has entries $s_{i,j}$ drawn i.i.d. from a Rademacher distribution:

$$
s_{i,j} = \begin{cases} 1 & \text{with probability } 1/2 \\ -1 & \text{with probability } 1/2 \end{cases}. \tag{15}
$$

# 3.2 STATISTICAL ANALYSIS AND JUSTIFICATION

The analysis of the stability and robustness of $\ell^4$-norm maximization follows statistical analysis techniques similar to those in Zhai et al. (2019b), showing that the global maximizer of

$$
\boldsymbol{W}_{\star} \in \underset{\boldsymbol{W}}{\arg\max}\ \mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{\diamond}\|_{4}^{4} \quad \text{subject to} \quad \boldsymbol{W} \in \mathrm{O}(n;\mathbb{R}) \tag{16}
$$

satisfies $\boldsymbol{W}_{\star}^{*}\boldsymbol{D}_{o} \in \mathrm{SP}(n)$. Here $\mathbf{Y}_{\diamond}$ denotes the different imperfect measurements: noisy ($\mathbf{Y}_N$), with outliers ($\mathbf{Y}_O$), and with sparse corruptions ($\mathbf{Y}_C$). Below we provide the expectation and concentration results for $\|\boldsymbol{W}^{*}\boldsymbol{Y}_{\diamond}\|_{4}^{4}$ over the data distribution. We show that $\mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{\diamond}\|_{4}^{4}$ is largely determined by $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$, a quantity that indicates a "distance" from $\boldsymbol{W}^{*}\boldsymbol{D}_{o}$ to $\mathrm{SP}(n)$. As shown in Lemma 2.3 and Lemma 2.4 of Zhai et al. (2019b), the only global maximizers of $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$ are signed permutation matrices, and $\boldsymbol{W}^{*}\boldsymbol{D}_{o}$ converges to a signed permutation matrix as $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$ approaches its global maximum.
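The three measurement models above can be generated in a few lines; the following is a sketch with assumed parameter values, not the paper's exact experimental settings:

```python
import numpy as np

def make_measurements(n=50, p=20000, theta=0.3, eta=0.3, tau=0.2,
                      sigma=1.0, beta=0.1, seed=0):
    """Clean data Y = D_o X_o plus its noisy, outlier, and sparsely
    corrupted versions (equations 14-15)."""
    rng = np.random.default_rng(seed)
    D_o, _ = np.linalg.qr(rng.standard_normal((n, n)))    # ground-truth orthogonal dictionary
    X_o = (rng.random((n, p)) < theta) * rng.standard_normal((n, p))  # BG(theta) codes
    Y = D_o @ X_o
    Y_N = Y + eta * rng.standard_normal((n, p))           # dense Gaussian noise
    Y_O = np.hstack([Y, rng.standard_normal((n, int(tau * p)))])  # tau*p Gaussian outlier columns
    B = rng.random((n, p)) < beta                         # Bernoulli support Ber(beta)
    S = rng.choice([-1.0, 1.0], size=(n, p))              # Rademacher signs
    Y_C = Y + sigma * B * S                               # sparse corruptions
    return D_o, Y, Y_N, Y_O, Y_C
```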
Proposition 3.1 (Expectation of Objective with Small Noise) $\forall \theta \in (0,1)$, let $\boldsymbol{X}_{o} \in \mathbb{R}^{n \times p}$, $x_{i,j} \sim_{iid} \mathrm{BG}(\theta)$. Let $\boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}$. For any orthogonal matrix $\mathbf{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\mathbf{G} \in \mathbb{R}^{n \times p}$, $g_{i,j} \sim_{iid} \mathcal{N}(0,\eta^2)$, independent of $\boldsymbol{X}_{o}$, let $\mathbf{Y}_N = \mathbf{Y} + \mathbf{G}$ denote the data with noise. Then the expectation of $\frac{1}{np}\|\mathbf{W}^{*}\mathbf{Y}_N\|_{4}^{4}$ satisfies:

$$
\frac{1}{np}\mathbb{E}_{\boldsymbol{X}_{o},\boldsymbol{G}}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{N}\|_{4}^{4} = 3\theta(1-\theta)\frac{\left\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\right\|_{4}^{4}}{n} + C_{\theta,\eta}, \tag{17}
$$

where $C_{\theta,\eta}$ is a constant that depends on $\theta$ and $\eta$.

Proof. See Appendix A.1.

Theorem 3.2 (Concentration of Objective with Small Noise) $\forall \theta \in (0,1)$, let $\boldsymbol{X}_{o} \in \mathbb{R}^{n \times p}$, $x_{i,j} \sim_{iid} \mathrm{BG}(\theta)$. Let $\boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\boldsymbol{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}$. For any orthogonal matrix $\boldsymbol{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\boldsymbol{G} \in \mathbb{R}^{n \times p}$, $g_{i,j} \sim_{iid} \mathcal{N}(0,\eta^2)$, independent of $\boldsymbol{X}_{o}$, let $\boldsymbol{Y}_{N} = \boldsymbol{Y} + \boldsymbol{G}$ denote the input with noise. Then:

$$
\mathbb{P}\left(\sup_{\boldsymbol{W} \in \mathrm{O}(n;\mathbb{R})} \frac{1}{np}\left| \|\boldsymbol{W}^{*}\boldsymbol{Y}_{N}\|_{4}^{4} - \mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{N}\|_{4}^{4} \right| \geq \delta\right) < \frac{1}{p}, \tag{18}
$$

when $p = \Omega\left((1+\eta^2)^4 n^2 \ln n / \delta^2\right)$.

Proof. See Appendix A.2.
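Proposition 3.1 can be sanity-checked by Monte Carlo. Since, by equation 17, the dependence on $\boldsymbol{W}$ enters only through $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$ (the constant $C_{\theta,\eta}$ cancels), the gap between the normalized objective at $\boldsymbol{W} = \boldsymbol{D}_{o}$ (where $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4} = n$) and at a random orthogonal $\boldsymbol{W}_{r}$ should be close to $3\theta(1-\theta)(n - \|\boldsymbol{W}_{r}^{*}\boldsymbol{D}_{o}\|_{4}^{4})/n$. A quick check with assumed parameters:

```python
import numpy as np

def l4_obj(W, Y):
    """Normalized objective (1/np) * ||W^T Y||_4^4."""
    n, p = Y.shape
    return np.sum((W.T @ Y) ** 4) / (n * p)

rng = np.random.default_rng(0)
n, p, theta, eta = 20, 100000, 0.3, 0.3
D_o, _ = np.linalg.qr(rng.standard_normal((n, n)))
X_o = (rng.random((n, p)) < theta) * rng.standard_normal((n, p))
Y_N = D_o @ X_o + eta * rng.standard_normal((n, p))      # noisy measurements

W_r, _ = np.linalg.qr(rng.standard_normal((n, n)))       # random orthogonal comparison point
gap = l4_obj(D_o, Y_N) - l4_obj(W_r, Y_N)                # constant C_{theta,eta} cancels
pred = 3 * theta * (1 - theta) * (n - np.sum((W_r.T @ D_o) ** 4)) / n
print(gap, pred)  # the two agree up to sampling fluctuations
```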
Proposition 3.3 (Expectation of Objective with Outliers) $\forall \theta \in (0,1)$, let $\boldsymbol{X}_{o} \in \mathbb{R}^{n \times p}$, $x_{i,j} \sim_{iid} \mathrm{BG}(\theta)$. Let $\boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}$. For any orthogonal matrix $\mathbf{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\mathbf{G}' \in \mathbb{R}^{n \times \tau p}$, $g_{i,j}' \sim_{iid} \mathcal{N}(0,1)$, independent of $\boldsymbol{X}_{o}$, let $\mathbf{Y}_{O} = [\mathbf{Y},\mathbf{G}']$ denote the data with outliers $\mathbf{G}'$. Then the expectation of $\frac{1}{np}\|\mathbf{W}^{*}\mathbf{Y}_{O}\|_{4}^{4}$ satisfies:

$$
\frac{1}{np}\mathbb{E}_{\boldsymbol{X}_{o},\boldsymbol{G}'}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{O}\|_{4}^{4} = 3\theta(1-\theta)\frac{\left\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\right\|_{4}^{4}}{n} + C_{\theta,\tau}, \tag{19}
$$

where $C_{\theta,\tau}$ is a constant that depends on $\theta$ and $\tau$.

Proof. See Appendix A.3.

Theorem 3.4 (Concentration of Objective with Outliers) $\forall \theta \in (0,1)$, let $\boldsymbol{X}_{o} \in \mathbb{R}^{n \times p}$, $x_{i,j} \sim_{iid} \mathrm{BG}(\theta)$. Let $\boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}$. For any orthogonal matrix $\boldsymbol{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\boldsymbol{G}' \in \mathbb{R}^{n \times \tau p}$, $g_{i,j}' \sim_{iid} \mathcal{N}(0,1)$, independent of $\boldsymbol{X}_{o}$, let $\boldsymbol{Y}_{O} = [\boldsymbol{Y},\boldsymbol{G}']$ denote the input with outliers $\boldsymbol{G}'$. Then:

$$
\mathbb{P}\left(\sup_{\boldsymbol{W} \in \mathrm{O}(n;\mathbb{R})} \frac{1}{np}\left| \|\boldsymbol{W}^{*}\boldsymbol{Y}_{O}\|_{4}^{4} - \mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{O}\|_{4}^{4} \right| \geq \delta\right) < \frac{1}{p}, \tag{20}
$$

when $p = \Omega\left(\tau^2 n^2 \ln n / \delta^2\right)$.

Proof. See Appendix A.4.
In the above results, Propositions 3.1 and 3.3 reveal that both normalized objectives, $\frac{1}{np}\mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{N}\|_{4}^{4}$ and $\frac{1}{np}\mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{O}\|_{4}^{4}$, are determined (up to an additive constant) solely by $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$. Moreover, as shown in Theorems 3.2 and 3.4, when $p$ is large enough ($p = \Omega((1+\eta^2)^4 n^2 \ln n)$ and $\Omega(\tau^2 n^2 \ln n)$, respectively), both $\frac{1}{np}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{N}\|_{4}^{4}$ and $\frac{1}{np}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{O}\|_{4}^{4}$ concentrate around their expectations with high probability. Therefore, the $\ell^4$-norm maximization formulation $\|\boldsymbol{W}^{*}\boldsymbol{Y}_{\diamond}\|_{4}^{4}$ for dictionary learning is insensitive to dense Gaussian noise and robust to Gaussian outliers.

Proposition 3.5 (Expectation of Objective with Sparse Corruptions) $\forall \theta \in (0,1)$, let $\boldsymbol{X}_{o} \in \mathbb{R}^{n \times p}$, $x_{i,j} \sim_{iid} \mathrm{BG}(\theta)$. Let $\boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}$. For any orthogonal matrix $\mathbf{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Bernoulli matrix $\mathbf{B} \in \mathbb{R}^{n \times p}$, $b_{i,j} \sim_{iid} \mathrm{Ber}(\beta)$, independent of $\boldsymbol{X}_{o}$, let $\mathbf{Y}_{C} = \mathbf{Y} + \sigma\mathbf{B} \circ \mathbf{S}$ denote the data with sparse corruptions, where $\mathbf{S} \in \mathbb{R}^{n \times p}$ is defined in equation 15. Then the expectation of $\frac{1}{np}\|\mathbf{W}^{*}\mathbf{Y}_{C}\|_{4}^{4}$ satisfies:

$$
\frac{1}{np}\mathbb{E}_{\boldsymbol{X}_{o},\boldsymbol{B},\boldsymbol{S}}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{C}\|_{4}^{4} = 3\theta(1-\theta)\frac{\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}}{n} + \sigma^{4}\beta(1-3\beta)\frac{\|\boldsymbol{W}\|_{4}^{4}}{n} + C_{\theta,\sigma,\beta}, \tag{21}
$$

where $C_{\theta,\sigma,\beta}$ is a constant depending on $\theta$, $\sigma$, and $\beta$.

Proof. See Appendix A.5.
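The competition between the two terms in equation 21 can be made concrete by comparing their coefficients, $3\theta(1-\theta)$ for the signal term and $\sigma^4\beta(1-3\beta)$ for the corruption term; the parameter values below are assumed examples:

```python
def signal_coeff(theta):
    # Coefficient of ||W* D_o||_4^4 / n in equation 21.
    return 3 * theta * (1 - theta)

def corruption_coeff(sigma, beta):
    # Coefficient of ||W||_4^4 / n in equation 21.
    return sigma ** 4 * beta * (1 - 3 * beta)

theta = 0.3
print(signal_coeff(theta))           # ~0.63
print(corruption_coeff(1.0, 0.1))    # ~0.07: the signal term dominates
print(corruption_coeff(3.0, 0.3))    # ~2.43: a large sigma lets the corruption term dominate
```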
Theorem 3.6 (Concentration of Objective with Sparse Corruptions) $\forall \theta \in (0,1)$, let $\boldsymbol{X}_{o} \in \mathbb{R}^{n \times p}$, $x_{i,j} \sim_{iid} \mathrm{BG}(\theta)$. Let $\boldsymbol{D}_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\boldsymbol{Y} = \boldsymbol{D}_{o}\boldsymbol{X}_{o}$. For any orthogonal matrix $\boldsymbol{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Bernoulli matrix $\boldsymbol{B} \in \mathbb{R}^{n \times p}$, $b_{i,j} \sim_{iid} \mathrm{Ber}(\beta)$, independent of $\boldsymbol{X}_{o}$, let $\boldsymbol{Y}_{C} = \boldsymbol{Y} + \sigma\boldsymbol{B} \circ \boldsymbol{S}$ denote the input with sparse corruptions, where $\boldsymbol{S} \in \mathbb{R}^{n \times p}$ is defined in equation 15. Then:

$$
\mathbb{P}\left(\sup_{\boldsymbol{W} \in \mathrm{O}(n;\mathbb{R})} \frac{1}{np}\left| \|\boldsymbol{W}^{*}\boldsymbol{Y}_{C}\|_{4}^{4} - \mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{C}\|_{4}^{4} \right| \geq \delta\right) < \frac{1}{p}, \tag{22}
$$

when $p = \Omega\left(\sigma^{8}\beta n^{2}\ln n / \delta^{2}\right)$.

Proof. See Appendix A.6.

Unlike the cases with noise and outliers, Proposition 3.5 and Theorem 3.6 indicate that $\frac{1}{np}\mathbb{E}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{C}\|_{4}^{4}$ depends on both $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$ and $\|\boldsymbol{W}\|_{4}^{4}$; when $p = \Omega(\sigma^{8}\beta n^{2}\ln n)$, the objective $\frac{1}{np}\|\boldsymbol{W}^{*}\boldsymbol{Y}_{C}\|_{4}^{4}$ concentrates around this expectation with high probability. Nevertheless, when the magnitude of $\sigma^{4}\beta(1-3\beta)$ is significantly smaller than $3\theta(1-\theta)$, the landscape of the objective $\|\boldsymbol{W}^{*}\boldsymbol{Y}_{C}\|_{4}^{4}$ is largely determined by $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$ alone.
As shown in Figure 1, this is indeed the case whenever: (a) the sparsity level $\theta$ of the ground-truth signal $X_{o}$ is "reasonably" small (neither diminishing to 0 nor larger than 0.5); (b) $\beta$, the sparsity level of the corruption, is small (smaller than 0.5); (c) $\sigma$, the magnitude of the sparse errors, is not significantly larger than the intrinsic variance of the sparse signal (which is 1 under the Bernoulli-Gaussian model).

![](images/a33c05ba67dd2073e0885bbecef0242d5606c3a1cdf1c3501c76af693e5a1ee3.jpg)
Figure 1: Comparison between $y = 3x(1 - x)$ and $y = \sigma^4 |x(1 - 3x)|$ for $x \in [0,1]$ with different $\sigma$.

Therefore, besides its insensitivity to small noise and robustness to outliers, $\ell^4$-maximization based dictionary learning also shows resilience to sparse corruptions under reasonable conditions.

# 4 SIMULATIONS AND EXPERIMENTS

# 4.1 QUANTITATIVE EVALUATION: SIMULATIONS ON SYNTHETIC DATA

Single Trial of MSP. In this simulation, we run the MSP algorithm in equation 3 on the imperfect measurements $\mathbf{Y}_{\diamond}$ of the different models ($\mathbf{Y}_N, \mathbf{Y}_O, \mathbf{Y}_C$). As shown in Figure 2, the normalized value $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}/n$ reaches its global maximum for all types of inputs when varying the level of noise, outliers, and sparse corruptions. Moreover, as the scale of the imperfect measurements increases, Figure 2 shows that (a) the number of iterations needed for convergence increases and (b) the final objective value $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}$ decreases almost negligibly. This numerical experiment suggests that the MSP algorithm is able to identify the ground-truth orthogonal transformation $\boldsymbol{D}_{o}$ despite the different types of imperfect measurements.

Phase Transition. Next, we conduct extensive simulations to study the relation between recovery accuracy and sample size $p$. We run the experiments by increasing the sample size $p$ w.r.t.
the scale of the imperfect measurements $\eta, \tau, \beta$, respectively. As shown in Figure 3, the MSP algorithm (3) demonstrates a clear phase-transition behavior w.r.t. noise, outliers, and sparse corruptions. Such phenomena suggest that the algorithm is inherently stable and robust to certain amounts of noise, outliers, and sparse corruptions. The results corroborate our concentration results in Theorems 3.2, 3.4, and 3.6: a larger sample size $p$ increases the accuracy and robustness of the MSP algorithm (3) for all types of nuisances.

![](images/de08ab55381700cbaa8483e892c615700df249e25e231828e940e06b71a8813e.jpg)
(a) $n = 50$, $p = 20{,}000$, $\theta = 0.3$, varying $\eta^2$ from 0.1 to 0.4

![](images/237b753fd412aa0dad011922afbe35b21b709604d049fb79d576339cfe68b0fe.jpg)
(b) $n = 50$, $p = 20{,}000$, $\theta = 0.3$, varying $\tau$ from 0.1 to 0.4

![](images/3fb7bf674e13c197132c010fd24583441168715ef3c2ae449fcb4d177c2ea007.jpg)
(c) $n = 50$, $p = 20{,}000$, $\theta = 0.3$, $\sigma = 1$, varying $\beta$ from 0.1 to 0.4

Figure 2: Normalized $\|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}/n$ of the MSP algorithm for dictionary learning, using imperfect measurements $\boldsymbol{Y}_{N}, \boldsymbol{Y}_{O}, \boldsymbol{Y}_{C}$, respectively.

![](images/8ef8bb6340379e681c90a901ad9d85ce96ecca172c966bac59a466c92080b673.jpg)
(a) Noise: $n = 50$, $\theta = 0.3$

![](images/628e0bd067b4de90f886f9d7d9e25017318e6e939a859da2d890fabd703a355d.jpg)
(b) Outliers: $n = 50$, $\theta = 0.3$

![](images/e74372d8562c387b20e0619944405826311d2b0b678b0ec024014b07a3048ddd.jpg)
(c) Corruptions: $n = 50$, $\theta = 0.3$

Figure 3: Average normalized error $\left|1 - \|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}/n\right|$ over 10 random trials of the MSP algorithm: (a) varying sample size $p$ and noise variance $\eta^2$; (b) varying sample size $p$ and Gaussian outlier ratio $\tau$; (c) varying sample size $p$ and sparse corruption ratio $\beta$, with fixed $\sigma = 1$.

Comparison with Prior Arts.
We also compare the MSP algorithm (Zhai et al., 2019b) with previous complete dictionary learning algorithms: the Subgradient (SG) method (Bai et al., 2018) and the Riemannian Trust Region (RTR) method (Sun et al., 2015). While SG and RTR demonstrate slightly better accuracy in some cases (e.g. small Gaussian noise and Gaussian outliers), both algorithms appear unstable under sparse corruptions. Meanwhile, the MSP algorithm is stable under all imperfect measurement models considered in this paper and runs significantly faster than the others.
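A condensed, self-contained sketch of the kind of synthetic experiment behind these comparisons (assumed sizes, not the authors' exact settings): generate noisy Bernoulli-Gaussian data, run the MSP update inline, and report the normalized error $\left|1 - \|\boldsymbol{W}^{*}\boldsymbol{D}_{o}\|_{4}^{4}/n\right|$ used in Figure 3 and Table 2.

```python
import numpy as np

def recovery_error(n=20, p=5000, theta=0.3, eta=0.2, iters=80, seed=0):
    rng = np.random.default_rng(seed)
    D_o, _ = np.linalg.qr(rng.standard_normal((n, n)))    # ground-truth dictionary
    X_o = (rng.random((n, p)) < theta) * rng.standard_normal((n, p))
    Y_N = D_o @ X_o + eta * rng.standard_normal((n, p))   # noisy measurements
    # MSP update: W <- U V^T with U S V^T = SVD[Y (Y^T W)**3].
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    for _ in range(iters):
        U, _, Vt = np.linalg.svd(Y_N @ (Y_N.T @ W) ** 3, full_matrices=False)
        W = U @ Vt
    return abs(1 - np.sum((W.T @ D_o) ** 4) / n)          # normalized recovery error

# The error shrinks as p grows and grows with the noise level eta.
for p in (1000, 5000):
    print(p, [round(recovery_error(p=p, eta=eta), 4) for eta in (0.1, 0.4)])
```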
| $n$ | $p$ | Alg. | Clean | Noise 0.2 | Noise 0.4 | Outlier 0.2 | Outlier 0.4 | Corruption 0.2 | Corruption 0.4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | 10k | MSP | 0.34% / 1.14s | 0.45% / 1.11s | 0.99% / 1.12s | 1.11% / 1.29s | 1.82% / 1.45s | 1.27% / 1.05s | 2.85% / 1.03s |
| 25 | 10k | SG | 0.00% / 8.00m | 5.54% / 19.3m | 25.7% / 26.4m | 0.00% / 9.62m | 0.00% / 11.1m | 87.0% / 16.6m | 88.4% / 34.2m |
| 25 | 10k | RTR | 0.00% / 3.28m | 0.03% / 17.0m | 1.34% / 20.2m | 0.00% / 4.63m | 0.00% / 5.87m | 3.20% / 18.0m | 33.9% / 16.3m |
| 50 | 20k | MSP | 0.34% / 4.82s | 0.47% / 4.65s | 1.02% / 5.44s | 1.15% / 5.31s | 2.01% / 6.45s | 1.33% / 4.80s | 3.04% / 4.41s |
| 50 | 20k | SG | 0.00% / 1.34h | N/A / >2h | N/A / >2h | 3.42% / 1.40h | 0.00% / 1.81h | N/A / >2h | N/A / >2h |
| 50 | 20k | RTR | 0.00% / 23.0m | 0.04% / 1.57h | 1.65% / 1.38h | 0.00% / 30.7m | 0.00% / 41.8m | 2.17% / 1.25h | 82.57% / 1.29h |
Table 2: Comparison of the MSP algorithm (Zhai et al., 2019b) with prior complete dictionary learning algorithms, the Subgradient method (Bai et al., 2018) and the Riemannian Trust Region method (Sun et al., 2015), under different models with fixed ground-truth sparsity $\theta = 0.3$; each cell reports Error / running Time. Note that SG only learns a unit vector at a time and does not guarantee orthogonality; we therefore project the learned dictionary onto $\mathrm{O}(n;\mathbb{R})$ for fair comparison.

# 4.2 QUALITATIVE EVALUATION: EXPERIMENTS ON REAL IMAGES AND PATCHES

Besides simulations, we also conduct extensive experiments to verify the stability and robustness of the MSP algorithm on real imagery data, at both the image level and the patch level. Throughout these experiments, rather than visualizing all bases, we routinely show the top bases learned; heuristically, the top bases are those with the largest coefficients (here, in terms of $\ell^1$-norm).

Image Level. At the image level, we first vectorize all 60,000 images in the MNIST dataset (LeCun et al., 1998) into a single matrix $\mathbf{Y} \in \mathbb{R}^{784 \times 60{,}000}$, then create imperfect measurements based on the models specified in Section 3: $\mathbf{Y}_N$ (MNIST with noise), $\mathbf{Y}_O$ (MNIST with outliers), and $\mathbf{Y}_C$ (MNIST with sparse corruptions). We run the MSP algorithm (3) on $\mathbf{Y}, \mathbf{Y}_N, \mathbf{Y}_O, \mathbf{Y}_C$ and compare the learned bases. Figure 4(a), (c), (e), and (g) show examples of $\mathbf{Y}, \mathbf{Y}_N, \mathbf{Y}_O$, and $\mathbf{Y}_C$, and Figure 4(b), (d), (f), and (h) show the top 10 bases learned from $\mathbf{Y}, \mathbf{Y}_N, \mathbf{Y}_O, \mathbf{Y}_C$, respectively. Although we use different types of imperfect measurements of MNIST, the top bases learned by the MSP algorithm (3) are very much the same.
This result corroborates our analysis: $\ell^4$-maximization and the MSP algorithm are inherently insensitive to noise, robust to outliers, and resilient to sparse corruptions.

![](images/521dbd48246a0c4b89bf8e6ddd679f00d8a705756c636b93791b17ef21fd7172.jpg)

![](images/a60cc8fbf6c316b4ff493a396dcda2257b44efd798d97d9342f9d7797d5456dc.jpg)
(a) MNIST normalized to mean 0 and standard deviation 1

![](images/8cac3d10dbc89d951a99a12f46fd543e21cbd7dcc97f953406b5e758bd681d6f.jpg)
(c) MNIST with noise, $\mathrm{SNR} = 3.333$

![](images/5f0cd838d71c9c4ec79bc3e9345a9470b7c737e583eb0668f4ab16cda01f9d69.jpg)
(e) MNIST with $50\%$ outliers

(g) MNIST with $50\%$ sparse corruptions

Figure 4: Left: Examples of MNIST and its different imperfect measurements. Right: Bases learned from MNIST and its different imperfect measurements using the MSP algorithm (3).

![](images/5a546f6a0d5c17c9c191cffc0c6ea48ae2f1cebb182a67b757716be8f275628f.jpg)

![](images/3ed340cd68731d7965a20225eb199d18d92fef509fe5fbacdfac859716bd4743.jpg)
(b) Top bases from MNIST

![](images/aa27dfb414edc6c8d2340a1238f81b7038fc13a549adbfbab378a0af1bb3d6c9.jpg)
(d) Top bases from MNIST with noise

![](images/50a26120e825e5b0e2b3d0f262f1025fb09e90c1bcb90462cdebcb702970faf2.jpg)
(f) Top bases from MNIST with outliers

(h) Top bases from MNIST with sparse corruptions

Patch Level. A classic application of dictionary learning is learning sparse representations of image patches (Elad & Aharon, 2006; Mairal et al., 2007). In this section, we extend the experiments of Zhai et al. (2019b) to learn patch dictionaries from gray-scale and color images. First, we construct a data matrix $\mathbf{Y}$ by vectorizing each $16 \times 16$ patch of the $512 \times 512$ gray-scale image "Barbara" (see Figure 5). We then run the MSP algorithm for 100 iterations on both $\mathbf{Y}$ and a noisy version $\mathbf{Y}_N$; the learned top bases are visualized in Figure 5.
![](images/25e203c359f353c018b13049732f11c594860a532243375ede6e766723295eb8.jpg)
(a) Clean Image and Bases

![](images/f6a0aac815ca9a6d4ab5cc588cfcb3cb1aca555f4f224a7089897c537230c692.jpg)

![](images/4161f9e175a1b2f79cbafc9d32ea788b93ca226f0436dfb9bc9c221851de868a.jpg)
(b) Noisy Image and Bases $(\mathrm{SNR} = 5.87)$

![](images/3681b7425014e3fc7b73b764da3f72423105978b2d1c6da19b119dc03959a84d.jpg)

![](images/1be2c1e8280cb8c2529e5721a96c655c7400d97e91f839708c17cbe5d03309e7.jpg)

Figure 5: The top 12 bases learned from all $16 \times 16$ patches of "Barbara", both with (b) and without (a) noise. The noisy image is produced by adding Gaussian noise to the clean image, resulting in a signal-to-noise ratio (SNR) of 5.87. We observed a similar effect when using an $8 \times 8$ patch size.

Analogously, we apply the same scheme to a $256 \times 256$ color image, "Duck" (see Figure 6), converting each $8 \times 8 \times 3$ patch into a column vector (in $\mathbb{R}^{192}$) of $\mathbf{Y}$. Notice this forces the algorithm to learn bases involving all three channels simultaneously, rather than one channel at a time. After running the MSP algorithm for 100 iterations, we visualize the top bases learned from both $\mathbf{Y}$ and the corresponding $\mathbf{Y}_N$ in Figure 6.

We next consider the problem of learning a "global dictionary" (Mairal et al., 2007) for patches from many different images.
To construct our data matrix $\mathbf{Y}$, we randomly sample 100,000 $8 \times 8 \times 3$ patches from the CIFAR-10 dataset (Krizhevsky et al., 2009). A noisy data matrix $\mathbf{Y}_N$ is then generated by adding Gaussian noise. Again, we apply the MSP algorithm for 200 iterations to learn 192 bases and visualize the results in Figure 7. We leave the experiments on CIFAR-10 with outliers and sparse corruptions to the Appendix due to limited space.

![](images/0024a6975278db037013a539338057dda2daeade9aece6edf35f438594fff9ba.jpg)
(a) Clean Image and Bases

![](images/bb5ab346fe57e645ebcb1a3446f7bec9dd11dcf2c75d3840c2b4e7b9cdca89f6.jpg)

![](images/62d21257c5f376fdb03bb0040f527130bf316fb70f02e1011d0d983c66017090.jpg)
(b) Noisy Image and Bases (SNR = 6.56)

![](images/b504dc5e596db647050b83cf72ec9f5cf6adb3997a69b3d498a8c750af2d34aa.jpg)

![](images/d3809ad9de23e8d6bf29727fe55d0512e40637144111fb52f33788e9f71b5091.jpg)

![](images/3e2768ea32fdf446c8b39809266e41d84a6bca217104f4fe1a8fcfaf202bc8b1.jpg)
Figure 6: The top 12 bases learned from all $8 \times 8 \times 3$ color patches of the clean and noisy image, respectively. Here, the SNR of the noisy image is 6.56.

(a) Bases from Clean Patches

![](images/dd2c6857aa123ee7c1174830876e944538856f067eea68d9d02d134497ae27fa.jpg)
(b) Bases from Noisy Patches (SNR = 6.23)

Figure 7: Top half (96) of the bases learned from 100,000 random $8 \times 8 \times 3$ patches sampled from CIFAR-10, before and after adding Gaussian noise, with SNR 6.23.

In each of these experiments, the top bases in the learned dictionary remain relatively unchanged after the addition of noise. To quantify this similarity, we take the top bases from the noisy dictionary and find the closest top clean basis for each. If the bases are nearly identical, the inner product of each of these pairs should be close to 1. Table 3 reports the statistics.
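The matching procedure can be sketched as follows; the dictionaries below are stand-ins (a random orthogonal dictionary and a slightly perturbed copy), since the experiment's learned bases are not reproduced here:

```python
import numpy as np

def matched_similarities(D_clean, D_noisy, top=20):
    """For each of the top noisy bases, return the largest |inner product|
    with any of the top clean bases (1.0 indicates an exact match)."""
    sims = np.abs(D_noisy[:, :top].T @ D_clean[:, :top])  # top x top similarity matrix
    return sims.max(axis=1)

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))        # stand-in "clean" bases
D_noisy = D + 0.01 * rng.standard_normal((64, 64))        # slightly perturbed copy
D_noisy /= np.linalg.norm(D_noisy, axis=0)                # renormalize columns
scores = matched_similarities(D, D_noisy)
print(np.quantile(scores, [0.0, 0.25, 0.5, 0.75, 1.0]))   # the five statistics reported in Table 3
```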
| | Minimum | Lower Quartile | Median | Upper Quartile | Maximum |
| --- | --- | --- | --- | --- | --- |
| Barbara | 0.3048 | 0.8471 | 0.9941 | 0.9993 | 1.0000 |
| Duck | 0.2510 | 0.9782 | 0.9891 | 0.9971 | 1.0000 |
| CIFAR-10 | 0.5147 | 0.7203 | 0.9892 | 0.9998 | 1.0000 |
Table 3: Statistics of the inner products between the top 20 noisy bases and their corresponding closest top-20 clean bases.

# 5 CONCLUSION AND DISCUSSIONS

In this paper, we find that $\ell^4$-norm maximization based dictionary learning and the corresponding MSP algorithm introduced by Zhai et al. (2019b) have strong geometric and statistical connections to the classic data analysis methods PCA and ICA. These connections appear to be the reason why they all admit similarly simple and efficient algorithms. Empirically, we have observed that $\ell^4$-norm maximization is surprisingly insensitive to noise, robust to outliers, and resilient to sparse corruptions. Moreover, these empirical observations corroborate our concentration analysis: a larger sample size $p$ improves the stability and robustness of $\ell^4$-maximization based dictionary learning.

From experiments on real images, we observed that the top bases learned are rather stable, but the tail bases can be less stable (see Figure 10 in Appendix C). We conjecture this phenomenon arises because real images generally do not follow the uniformly sparse Bernoulli-Gaussian model of equation 14. Generalizing dictionary learning to non-uniformly sparsely generated data would be a good topic for future study. Finally, note that Bai et al. (2018) established that subgradient descent can find globally optimal solutions for dictionary learning, an interesting theoretical guarantee since dictionary learning problems do not satisfy the standard structural (i.e. generalized convexity) assumptions that guarantee global convergence (Ben-Tal & Nemirovski, 2001; Zhou et al., 2017a; Nesterov, 2013; Zhou et al., 2017b; Bubeck et al., 2015; Zhou et al., 2017c; 2018; Ma et al., 2018; Chi et al., 2019). It would be desirable, although highly challenging, to establish similar global convergence guarantees for the MSP algorithm. We leave that for future work.

# REFERENCES

P-A Absil and Jérôme Malick.
Projection-like retractions on matrix manifolds. SIAM Journal on Optimization, 22(1):135-158, 2012. +Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008. +Yu Bai, Qijia Jiang, and Ju Sun. Subgradient descent learns orthogonal dictionaries. arXiv preprint arXiv:1810.10702, 2018. +Boaz Barak, Jonathan Kelner, and David Steurer. Dictionary learning and tensor decomposition via the sum-of-squares method. In STOC, 2015. +Aharon Ben-Tal and Arkadi Nemirovski. Lectures on modern convex optimization: analysis, algorithms, and engineering applications, volume 2. Siam, 2001. +Wlodzimierz Bryc. The normal distribution: characterizations with applications, volume 100. Springer Science & Business Media, 2012. +Sebastien Bubeck et al. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231-357, 2015. +Meghan K Cain, Zhiyong Zhang, and Ke-Hai Yuan. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation. Behavior Research Methods, 49(5):1716-1735, 2017. +Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011. +Gunnar Carlsson. Topology and data. Bulletin of the American Mathematical Society, 46(2):255-308, 2009. +Niladri Chatterji and Peter L Bartlett. Alternating minimization for dictionary learning with random initialization. In Advances in Neural Information Processing Systems, pp. 1997-2006, 2017. +Yuejie Chi, Yue M Lu, and Yuxin Chen. Nonconvex optimization meets low-rank matrix factorization: An overview. IEEE Transactions on Signal Processing, 67(20):5239-5269, 2019. +Lawrence T DeCarlo. On the meaning and use of kurtosis. Psychological methods, 2(3):292, 1997. +Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. 
IEEE Transactions on Image processing, 15(12):3736-3745, 2006. +Dar Gilboa, Sam Buchanan, and John Wright. Efficient dictionary learning with gradient descent. arXiv preprint arXiv:1809.10313, 2018. +Nathaniel E Helwig. Principal components analysis. 2017. +Peter J Huber. Projection pursuit. The annals of Statistics, pp. 435-475, 1985. +Aapo Hyvarinen and Erkki Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9:1483-1492, 1997. +Aapo Hyvarinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural networks, 13(4-5):411-430, 2000. +Aapo Hyvärinen, Jarmo Hurri, and Patrick O Hoyer. Natural image statistics: A probabilistic approach to early computational vision., volume 39. Springer Science & Business Media, 2009. +Ian Jolliffe. Principal component analysis. Springer, 2011. +Michel Journée, Yurii Nesterov, Peter Richtárik, and Rodolphe Sepulchre. Generalized power method for sparse principal component analysis. Journal of Machine Learning Research, 11 (Feb):517-553, 2010. + +Alex Krizhevsky et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. +Han-Wen Kuo, Yenson Lau, Yuqian Zhang, and John Wright. Geometry and symmetry in short-and-sparse deconvolution. arXiv preprint arXiv:1901.00256, 2019. +Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +Ann B Lee, Kim S Pedersen, and David Mumford. The nonlinear statistics of high-contrast patches in natural images. International Journal of Computer Vision, 54(1-3):83-103, 2003. +Yanjun Li and Yoram Bresler. Global geometry of multichannel sparse blind deconvolution on the sphere. In Advances in Neural Information Processing Systems, pp. 1132-1143, 2018. +Cong Ma, Kaizheng Wang, Yuejie Chi, and Yuxin Chen. 
Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution. Foundations of Computational Mathematics, pp. 1-182, 2018. +Tengyu Ma, Jonathan Shi, and David Steurer. Polynomial-time tensor decompositions with sum-of-squares. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pp. 438-446. IEEE, 2016. +Julien Mairal, Michael Elad, and Guillermo Sapiro. Sparse representation for color image restoration. IEEE Transactions on image processing, 17(1):53-69, 2007. +Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Discriminative learned dictionaries for local image analysis. Technical report, MINNESOTA UNIV MINNEAPOLIS INST FOR MATHEMATICS AND ITS APPLICATIONS, 2008. +Julien Mairal, Francis Bach, and Jean Ponce. Task-driven dictionary learning. IEEE transactions on pattern analysis and machine intelligence, 34(4):791-804, 2012. +Julien Mairal, Francis Bach, and Jean Ponce. Sparse modeling for image and vision processing. Foundations and Trends in Computer Graphics and Vision, 8(2-3):85-283, 2014. ISSN 1572-2740. doi: 10.1561/0600000058. +Arthur Mensch, Julien Mairal, Bertrand Thirion, and Gaël Varoquaux. Dictionary learning for massive matrix factorization. In International Conference on Machine Learning, pp. 1737-1746, 2016. +Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013. +Thanh Nguyen, Akshay Soni, and Chinmay Hegde. On learning sparsely used dictionaries from incomplete samples. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3769-3778, Stockholm, Sweden, 10-15 Jul 2018. PMLR. +Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996. 
+Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision research, 37(23):3311-3325, 1997. +Qing Qu, Yuexiang Zhai, Xiao Li, Yuqian Zhang, and Zhihui Zhu. Analysis of the optimization landscapes for overcomplete representation learning. arXiv preprint arXiv:1912.02427, 2019. +Sirisha Rambhatla, Xingguo Li, and Jarvis Haupt. Noodl: Provable online dictionary learning and sparse coding. In International Conference on Learning Representations, 2019. +Marc'Aurelio Ranzato, Y-Lan Boureau, and Yann LeCun. Sparse feature learning for deep belief networks. In Advances in Neural Information Processing Systems (NIPS 2007), volume 20, 2007. + +Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review, 52(3):471-501, 2010. +Tselil Schramm and David Steurer. Fast and robust tensor decomposition with applications to dictionary learning. arXiv preprint arXiv:1706.08672, 2017. +Daniel A Spielman, Huan Wang, and John Wright. Exact recovery of sparsely-used dictionaries. In Conference on Learning Theory, pp. 37-1, 2012. +Ju Sun, Qing Qu, and John Wright. Complete dictionary recovery over the sphere. arXiv preprint arXiv:1504.06785, 2015. +Martin J Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press, 2019. +John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recognition via sparse representation. IEEE transactions on pattern analysis and machine intelligence, 31(2): 210-227, 2008. +Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. In Advances in Neural Information Processing Systems, pp. 2496-2504, 2010. +Huan Xu, Constantine Caramanis, and Shie Mannor. Outlier-robust PCA: The high-dimensional case. IEEE transactions on information theory, 59(1):546-572, 2012. 
+Jianchao Yang, John Wright, Thomas S Huang, and Yi Ma. Image super-resolution via sparse representation. IEEE transactions on image processing, 19(11):2861-2873, 2010. +Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, and Yi Ma. A fast holistic algorithm for complete dictionary learning via $\ell^4$ norm maximization. Signal Processing with Adaptive Sparse Structured Representations (SPARS), 2019a. +Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, and Yi Ma. Complete dictionary learning via $\ell^4$ -norm maximization over the orthogonal group. arXiv preprint arXiv:1906.02435, 2019b. +Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, and Yi Ma. Simultaneous rectification and alignment via robust recovery of low-rank tensors. In Advances in Neural Information Processing Systems, pp. 1637-1645, 2013. +Xiaoqin Zhang, Zhengyuan Zhou, Di Wang, and Yi Ma. Hybrid singular value thresholding for tensor completion. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014. +Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, and Yi Ma. Robust low-rank tensor recovery with rectification and alignment. IEEE transactions on pattern analysis and machine intelligence, 2019. +Yuqian Zhang, Han-Wen Kuo, and John Wright. Structured local optima in sparse blind deconvolution. arXiv preprint arXiv:1806.00338, 2018. +Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Stephen Boyd, and Peter Glynn. On the convergence of mirror descent beyond stochastic convex programming. arXiv preprint arXiv:1706.05681, 2017a. +Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Stephen Boyd, and Peter W Glynn. Stochastic mirror descent in variationally coherent optimization problems. In Advances in Neural Information Processing Systems, pp. 7040-7049, 2017b. +Zhengyuan Zhou, Panayotis Mertikopoulos, Aris L Moustakas, Nicholas Bambos, and Peter Glynn. Mirror descent learning in continuous games. In 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 5776-5783. IEEE, 2017c. 
+Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Peter W Glynn, Yinyu Ye, Li-Jia Li, and Fei-Fei Li. Distributed asynchronous optimization with unbounded delays: How slow can you go? 2018.
+
+# A PROOFS FOR SECTION 3
+
+# A.1 PROOF OF PROPOSITION 3.1
+
+Claim A.1 (Expectation of Objective with Small Noise) $\forall \theta \in (0,1)$ , let $X_{o} \in \mathbb{R}^{n \times p}$ , $x_{i,j} \sim_{iid} BG(\theta)$ . Let $D_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = D_{o}\mathbf{X}_{o}$ . For any orthogonal matrix $\mathbf{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\mathbf{G} \in \mathbb{R}^{n \times p}$ , $g_{i,j} \sim_{iid}\mathcal{N}(0,\eta^2)$ independent of $\mathbf{X}_{o}$ , let $\mathbf{Y}_{N} = \mathbf{Y} + \mathbf{G}$ denote the data with noise. Then the expectation of $\frac{1}{np}\| \mathbf{W}^{*}\mathbf{Y}_{N}\|_{4}^{4}$ satisfies:
+
+$$
+\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 6 \theta \eta^ {2} + 3 \eta^ {4}. \tag {23}
+$$
+
+Proof Let $W^{*}D_{o} = M \in \mathrm{O}(n; \mathbb{R})$ , and notice that the orthogonal transformation $(W^{*}G)$ of a Gaussian matrix $(G)$ is still a Gaussian matrix satisfying $\{W^{*}G\}_{i,j} \sim \mathcal{N}(0,\eta^2)$ , independent of $\boldsymbol{Y}$ (equivalently, of $\boldsymbol{X}_{o}$ ). We slightly abuse notation and let $G = W^{*}G$ in the following calculation, since $W^{*}G$ is also a Gaussian matrix independent of $X_{o}$ and this will not affect the final result.
+
+$$
+\begin{array}{l} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} = \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \| \boldsymbol {M} \boldsymbol {X} _ {o} + \boldsymbol {G} \| _ {4} ^ {4} \\ = \sum_ {j = 1} ^ {p} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j} + g _ {i, j}\right) ^ {4} \\ = \sum_ {j = 1} ^ {p} \sum_ {i = 1} ^ {n} \left\{\mathbb {E} _ {\boldsymbol {X} _ {o}} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j}\right) ^ {4} + 6 \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \left[ g _ {i, j} ^ {2} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j}\right) ^ {2} \right] + \mathbb {E} _ {\boldsymbol {G}} g _ {i, j} ^ {4} \right\} \\ = \sum_ {j = 1} ^ {p} \sum_ {i = 1} ^ {n} \left\{\mathbb {E} _ {\boldsymbol {X} _ {o}} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j}\right) ^ {4} + 6 \eta^ {2} \mathbb {E} _ {\boldsymbol {X} _ {o}} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j}\right) ^ {2} \right\} + 3 n p \eta^ {4} \tag {24} \\ = \sum_ {j = 1} ^ {p} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\boldsymbol {X} _ {o}} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j}\right) ^ {4} + 6 n p \theta \eta^ {2} + 3 n p \eta^ {4} \\ = \mathbb {E} _ {\boldsymbol {X} _ {o}} \| \boldsymbol {M} \boldsymbol {X} _ {o} \| _ {4} ^ {4} + 6 n p \theta \eta^ {2} + 3 n p \eta^ {4} \\ = 3 p \theta (1 - \theta) \| \boldsymbol {M} \| _ {4} ^ {4} + 3 n p \theta^ {2} + 6 n p \theta \eta^ {2} + 3 n p \eta^ {4}, \\ \end{array}
+$$
+
+where the cross terms with odd powers of $g_{i,j}$ vanish in the third equality since $g_{i,j}$ is zero-mean and independent of $\boldsymbol{X}_o$ . Therefore,
+
+$$
+\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\left\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \right\| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 6 \theta \eta^ {2} + 3 \eta^ {4}, \tag {25}
+$$
+
+which completes the proof.
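The closed form in equation 25 is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch (not part of the original analysis) using NumPy; the instance sizes $n$, $p$ and the parameters $\theta$, $\eta$ are arbitrary choices. For large $p$, the empirical value of $\frac{1}{np}\|\boldsymbol{W}^{*}\boldsymbol{Y}_N\|_4^4$ should concentrate around the predicted expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, theta, eta = 8, 200_000, 0.3, 0.1  # arbitrary small instance

# Bernoulli-Gaussian data X_o ~ BG(theta): Gaussian entries kept with probability theta.
X_o = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)
D_o, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal dictionary
W, _ = np.linalg.qr(rng.standard_normal((n, n)))    # arbitrary orthogonal test matrix
G = eta * rng.standard_normal((n, p))               # additive noise with variance eta^2
Y_N = D_o @ X_o + G

# Empirical (1/np) * ||W^T Y_N||_4^4 versus the closed form of equation 25.
empirical = np.sum((W.T @ Y_N) ** 4) / (n * p)
predicted = (3 * theta * (1 - theta) * np.sum((W.T @ D_o) ** 4) / n
             + 3 * theta ** 2 + 6 * theta * eta ** 2 + 3 * eta ** 4)
print(empirical, predicted)  # the two values should be close
```

The same scaffold can be reused for the outlier and sparse-corruption settings of Propositions 3.3 and 3.5 by swapping the corruption model.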
+
+# A.2 PROOF OF THEOREM 3.2
+
+Claim A.2 (Concentration of Objective with Small Noise) $\forall \theta \in (0,1)$ , let $X_{o} \in \mathbb{R}^{n \times p}$ , $x_{i,j} \sim_{iid} BG(\theta)$ . Let $D_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\pmb{Y} = D_{o}\pmb{X}_{o}$ . For any orthogonal matrix $\pmb{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\pmb{G} \in \mathbb{R}^{n \times p}$ , $g_{i,j} \sim_{iid} \mathcal{N}(0,\eta^2)$ independent of $X_{o}$ , let $\pmb{Y}_N = \pmb{Y} + \pmb{G}$ denote the input with noise, then:
+
+$$
+\mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} \right| \geq \delta\right) < \frac {1}{p}, \tag {26}
+$$
+
+when $p = \Omega \left((1 + \eta^2)^4 n^2\ln n / \delta^2\right)$ .
+
+Proof According to Proposition 3.1, we know that
+
+$$
+\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 6 \theta \eta^ {2} + 3 \eta^ {4}. \tag {27}
+$$
+
+Moreover, $\mathbf{Y}_N = [\pmb{y}_1 + \pmb{g}_1, \pmb{y}_2 + \pmb{g}_2, \dots, \pmb{y}_p + \pmb{g}_p]$ has independent columns.
Let
+
+$$
+\boldsymbol {Z} = \left[ \boldsymbol {z} _ {1}, \boldsymbol {z} _ {2}, \dots , \boldsymbol {z} _ {p} \right] = \boldsymbol {D} _ {o} ^ {*} \boldsymbol {Y} _ {N} = \left[ \boldsymbol {x} _ {1} + \boldsymbol {g} _ {1}, \boldsymbol {x} _ {2} + \boldsymbol {g} _ {2}, \dots , \boldsymbol {x} _ {p} + \boldsymbol {g} _ {p} \right] = \boldsymbol {X} _ {o} + \boldsymbol {G}, \tag {28}
+$$
+
+note that we slightly abuse notation by writing $D_o^* G = G$ , since $D_o^* G$ is also a Gaussian matrix independent of $\mathbf{X}_o$ and this will not affect the final result. Define the function $f_{\mathbf{z}}(\mathbf{W}) : \mathrm{O}(n; \mathbb{R}) \mapsto \mathbb{R}$ as
+
+$$
+f _ {\boldsymbol {z}} (\boldsymbol {W}) = \left\| \boldsymbol {W} ^ {*} \boldsymbol {z} \right\| _ {4} ^ {4}. \tag {29}
+$$
+
+Next, we verify Assumptions 1, 2, and 3, and then apply Lemma B.4.
+
+Assumption 1: $\mu (n,p)$ . Since $z_{i,j} = x_{i,j} + g_{i,j}$ , we know that with probability $\theta$ , $z \sim \mathcal{N}(0,1 + \eta^2)$ and with probability $1 - \theta$ , $z \sim \mathcal{N}(0,\eta^2)$ . Hence, bounding the off-support variance $\eta^2$ by $1$ (the small-noise regime $\eta \leq 1$ ),
+
+$$
+\mathbb {P} \left(\max _ {i, j} | z _ {i, j} | > B\right) \leq 2 n p \theta \exp \left(- \frac {B ^ {2}}{2 + 2 \eta^ {2}}\right) + 2 n p (1 - \theta) \exp \left(- \frac {B ^ {2}}{2}\right) = \mu (n, p). \tag {30}
+$$
+
+Specifically, we set $B = \ln p$ , which yields
+
+$$
+\mu (n, p) = 2 n p \theta \exp \left(- \frac {(\ln p) ^ {2}}{2 + 2 \eta^ {2}}\right) + 2 n p (1 - \theta) \exp \left(- \frac {(\ln p) ^ {2}}{2}\right). \tag {31}
+$$
+
+Assumption 2: Lipschitz Constant $L_{f}$ . By Proposition 3.1, $\forall W\in \mathrm{O}(n;\mathbb{R})$ , we know that
+
+$$
+\mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) = 3 \theta (1 - \theta) \| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4} + 3 n \theta^ {2} + 6 n \theta \eta^ {2} + 3 n \eta^ {4}.
\tag {32}
+$$
+
+Hence, $\forall W_1, W_2 \in \mathrm{O}(n; \mathbb{R})$ , we have
+
+$$
+\left| \mathbb {E} \| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} \right| = 3 \theta (1 - \theta) \left| \| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4} - \| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4} \right|, \tag {33}
+$$
+
+moreover,
+
+$$
+\begin{array}{l} \left| \sum_ {i, j} \left[ (\pmb {W} _ {1} ^ {*} \pmb {D} _ {o}) _ {i, j} ^ {4} - (\pmb {W} _ {2} ^ {*} \pmb {D} _ {o}) _ {i, j} ^ {4} \right] \right| \\ = \left| \sum_ {i, j} \left\{\left[ \left(\boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} - \left(\boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} \right] \left[ \left(\boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} + \left(\boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} \right] \right\} \right| \\ \leq \left\{\sum_ {i, j} \left[ \left(\boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} - \left(\boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} \right] ^ {2} \right\} ^ {1 / 2} \left\{\sum_ {i, j} \left[ \left(\boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} + \left(\boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o}\right) _ {i, j} ^ {2} \right] ^ {2} \right\} ^ {1 / 2} \tag {34} \\ = \left\| (\boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o}) ^ {\circ 2} - (\boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o}) ^ {\circ 2} \right\| _ {F} \left\| (\boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o}) ^ {\circ 2} + (\boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o}) ^ {\circ 2} \right\| _ {F} \\ \leq \left\| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} - \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \right\| _ {F} \left\| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} + \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \right\| _ {F} \left(\left\| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} \right\| _ {F} ^ {2} + \left\| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \right\| _ {F} ^ {2}\right) \\ \leq 4 n ^ {2} \left\| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} - \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \right\| _ {2} = 4 n ^ {2} \left\| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \right\| _ {2}. \\ \end{array}
+$$
+
+Hence, we know that
+
+$$
+\left| \mathbb {E} \| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} \right| \leq 12 n ^ {2} \theta (1 - \theta) \left\| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \right\| _ {2}, \tag {35}
+$$
+
+which yields
+
+$$
+L _ {f} = 12 n ^ {2} \theta (1 - \theta). \tag {36}
+$$
+
+Assumption 2: Lipschitz Constant $\bar{L}_f$ . Since $f_{\bar{z}}(\boldsymbol{W}) = \| \boldsymbol{W}^{*}\bar{\boldsymbol{z}}\|_{4}^{4}$ , we have
+
+$$
+\begin{array}{l} \left| \left\| \boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}} \right\| _ {4} ^ {4} - \left\| \boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}} \right\| _ {4} ^ {4} \right| = \left| \sum_ {i} \left\{\left[ (\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}) _ {i} ^ {2} - (\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}) _ {i} ^ {2} \right] \left[ (\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}) _ {i} ^ {2} + (\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}) _ {i} ^ {2} \right] \right\} \right| \\ \leq \left\{\sum_ {i} \left[ \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) _ {i} ^ {2} - \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) _ {i} ^ {2} \right] ^ {2} \right\} ^ {1 / 2} \left\{\sum_ {i} \left[ \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) _ {i} ^ {2} + \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) _ {i} ^ {2} \right] ^ {2} \right\} ^ {1 / 2} \tag {37} \\ = \underbrace {\left\| \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) ^
{\circ 2} - \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} \right\| _ {2}} _ {\Gamma_ {1}} \underbrace {\left\| \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} + \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} \right\| _ {2}} _ {\Gamma_ {2}}. \\ \end{array}
+$$
+
+For $\Gamma_{1}$ , we have
+
+$$
+\begin{array}{l} \Gamma_ {1} = \left\| \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} - \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} \right\| _ {2} = \left\| \left[ \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) - \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) \right] \circ \left[ \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) + \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) \right] \right\| _ {2} \\ \leq \left\| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \right\| _ {2} \underbrace {\left\| \boldsymbol {W} _ {1} + \boldsymbol {W} _ {2} \right\| _ {2}} _ {\leq 2} \underbrace {\left\| \bar {z} \right\| _ {2} ^ {2}} _ {\leq n B ^ {2}} \leq 2 n B ^ {2} \left\| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \right\| _ {2}. \tag {38} \\ \end{array}
+$$
+
+For $\Gamma_{2}$ , we have
+
+$$
+\begin{array}{l} \Gamma_ {2} = \left\| \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} + \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} \right\| _ {2} \leq \left\| \left(\boldsymbol {W} _ {1} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} \right\| _ {2} + \left\| \left(\boldsymbol {W} _ {2} ^ {*} \bar {\boldsymbol {z}}\right) ^ {\circ 2} \right\| _ {2} \tag {39} \\ \leq \left\| \boldsymbol {W} _ {1} ^ {*} \bar {z} \right\| _ {2} ^ {2} + \left\| \boldsymbol {W} _ {2} ^ {*} \bar {z} \right\| _ {2} ^ {2} \leq 2 n B ^ {2}.
\\ \end{array}
+$$
+
+Hence, we know that
+
+$$
+\left| \left\| \boldsymbol {W} _ {1} ^ {*} \bar {z} \right\| _ {4} ^ {4} - \left\| \boldsymbol {W} _ {2} ^ {*} \bar {z} \right\| _ {4} ^ {4} \right| \leq 4 n ^ {2} B ^ {4} \| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \| _ {2}, \tag {40}
+$$
+
+which yields
+
+$$
+\bar {L} _ {f} = 4 n ^ {2} B ^ {4}. \tag {41}
+$$
+
+Assumption 3: Upper Bound $R_{1}$ . Notice that $\forall W\in \mathrm{O}(n;\mathbb{R})$ ,
+
+$$
+f _ {\bar {z}} (\boldsymbol {W}) = \| \boldsymbol {W} ^ {*} \bar {z} \| _ {4} ^ {4} = \| \boldsymbol {W} ^ {*} \bar {z} \| _ {2} ^ {4} \left\| \frac {\boldsymbol {W} ^ {*} \bar {z}}{\| \boldsymbol {W} ^ {*} \bar {z} \| _ {2}} \right\| _ {4} ^ {4} \leq \| \boldsymbol {W} ^ {*} \bar {z} \| _ {2} ^ {4} = \| \bar {z} \| _ {2} ^ {4} \leq n ^ {2} B ^ {4} = R _ {1}. \tag {42}
+$$
+
+Assumption 3: Upper Bound $R_{2}$ . Assume the support of $x$ is $\mathcal{S}$ , so we know that $\forall i \in [n]$ ,
+
+$$
+z _ {i} = \left\{ \begin{array}{l l} \sqrt {1 + \eta^ {2}} v _ {i}, & \text {if } i \in \mathcal {S}, \\ \eta v _ {i}, & \text {otherwise}, \end{array} \right. \tag {43}
+$$
+
+where $\pmb{v} \sim \mathcal{N}(\pmb{0}, \pmb{I})$ is a Gaussian vector; in the estimate below, the off-support variance $\eta^2$ is again upper-bounded by $1$ . Let $P_S: \mathbb{R}^n \mapsto \mathbb{R}^n$ be a projection that satisfies $\forall \pmb{q} \in \mathbb{R}^n$
+
+$$
+\left(\boldsymbol {P} _ {S} \boldsymbol {q}\right) _ {i} = \left\{ \begin{array}{l l} \sqrt {1 + \eta^ {2}} q _ {i}, & \text {if } i \in \mathcal {S}, \\ q _ {i}, & \text {otherwise.} \end{array} \right.
\tag {44} +$$ + +Moreover, $\forall W\in \mathrm{O}(n;\mathbb{R})$ , let $\pmb {w}_i$ denote the $i^{\mathrm{th}}$ column vector of $\pmb{W}$ , hence + +$$ +\begin{array}{l} \mathbb {E} \left[ f _ {\boldsymbol {z}} ^ {2} (\boldsymbol {W}) \right] = \mathbb {E} \left[ \left(\| \boldsymbol {W} ^ {*} \boldsymbol {z} \| _ {4} ^ {4}\right) ^ {2} \right] = \mathbb {E} \left[ \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \langle \boldsymbol {w} _ {i}, \boldsymbol {z} \rangle^ {4} \langle \boldsymbol {w} _ {j}, \boldsymbol {z} \rangle^ {4} \right] \tag {45} \\ \leq \mathbb {E} _ {\mathcal {S}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \mathbb {E} \left[ \langle P _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {4} \langle P _ {\mathcal {S}} \boldsymbol {w} _ {j}, \boldsymbol {v} \rangle^ {4} \right] \leq \mathbb {E} _ {\mathcal {S}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \left[ \mathbb {E} \langle P _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {8} \mathbb {E} \langle P _ {\mathcal {S}} \boldsymbol {w} _ {j}, \boldsymbol {v} \rangle^ {8} \right] ^ {\frac {1}{2}}. 
\\ \end{array}
+$$
+
+Notice that $\pmb{v} \sim \mathcal{N}(\pmb{0}, \pmb{I})$ , therefore $\forall i \in [n]$ , we have
+
+$$
+\mathbb {E} \left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle^ {8} = 105 \| \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i} \| _ {2} ^ {8}, \tag {46}
+$$
+
+hence
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {S}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \left[ \mathbb {E} \left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle^ {8} \mathbb {E} \left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {j}, \boldsymbol {v} \right\rangle^ {8} \right] ^ {\frac {1}{2}} = 105 \mathbb {E} _ {\mathcal {S}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \| \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i} \| _ {2} ^ {4} \| \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {j} \| _ {2} ^ {4} \\ = 105 \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S}} \left[ \prod_ {l = 1} ^ {4} \left(1 + \eta^ {2} \mathbb {1} _ {k _ {l} \in \mathcal {S}}\right) w _ {i, k _ {1}} ^ {2} w _ {i, k _ {2}} ^ {2} w _ {j, k _ {3}} ^ {2} w _ {j, k _ {4}} ^ {2} \right] \tag {47} \\ \leq 105 (1 + \eta^ {2}) ^ {4} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \left[ w _ {i, k _ {1}} ^ {2} w _ {i, k _ {2}} ^ {2} w _ {j, k _ {3}} ^ {2} w _ {j, k _ {4}} ^ {2} \right] = 105 (1 + \eta^ {2}) ^ {4} n ^ {2}. \\ \end{array}
+$$
+
+Therefore, we can conclude that
+
+$$
+\mathbb {E} \left[ f _ {z} ^ {2} (\boldsymbol {W}) \right] \leq 105 \left(1 + \eta^ {2}\right) ^ {4} n ^ {2} = R _ {2}. \tag {48}
+$$
+
+Applying Lemma B.4 for Concentration. Now we apply Lemma B.4 with
+
+1.
+
+$$
+B = \ln p, \quad \mu (n, p) = 2 n p \theta \exp \left(- \frac {(\ln p) ^ {2}}{4}\right) + 2 n p (1 - \theta) \exp \left(- \frac {(\ln p) ^ {2}}{2}\right), \tag {49}
+$$
+
+2.
+ +$$ +L _ {f} = 1 2 n ^ {2} \theta (1 - \theta), \quad \bar {L} _ {f} = 4 n ^ {2} (\ln p) ^ {4}, \tag {50} +$$ + +3. + +$$ +R _ {1} = n ^ {2} (\ln p) ^ {4}, \quad R _ {2} = 1 0 5 (1 + \eta^ {2}) ^ {4} n ^ {2}, \tag {51} +$$ + +we have + +$$ +\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \sum_ {j = 1} ^ {p} \left[ f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) \right] \right| \geq \delta\right) \\ = \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {N} \| _ {4} ^ {4} \right| \geq \delta\right) \\ < \exp \left[ - \frac {p n ^ {2} \delta^ {2}}{3 2 R _ {2} + 8 R _ {1} n \delta / 3} + n ^ {2} \ln \left(\frac {1 2 \left(L _ {f} + \bar {L} _ {f}\right)}{n \delta}\right) + \ln 2 \right] + \mu (n, p) \tag {52} \\ < \exp \left[ - \frac {3 p \delta^ {2}}{C (1 + \eta^ {2}) ^ {4} + 8 n (\ln p) ^ {4} \delta} + n ^ {2} \ln \left(\frac {6 0 n (\ln p) ^ {4}}{\delta}\right) + \ln 2 \right] \\ + 2 n p \theta \exp \left(- \frac {(\ln p) ^ {2}}{4}\right) + 2 n p (1 - \theta) \exp \left(- \frac {(\ln p) ^ {2}}{2}\right) < \frac {1}{p}, \\ \end{array} +$$ + +for a constant $C > 10^4$ , when $p = \Omega \left((1 + \eta^2)^4 n^2 \ln n / \delta^2\right)$ . + +# A.3 PROOF OF PROPOSITION 3.3 + +Claim A.3 (Expectation of Objective with Outliers) $\forall \theta \in (0,1)$ , let $X_{o} \in \mathbb{R}^{n \times p}$ , $x_{i,j} \sim_{iid} BG(\theta)$ . Let $D_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = D_{o}\mathbf{X}_{o}$ . 
For any orthogonal matrix $\mathbf{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Gaussian matrix $\mathbf{G}' \in \mathbb{R}^{n \times \tau p}$ , $g_{i,j}' \sim_{iid} \mathcal{N}(0,1)$ independent of $\mathbf{X}_{o}$ , let $\mathbf{Y}_{O} = [\mathbf{Y},\mathbf{G}']$ denote the data with outliers $\mathbf{G}'$ . Then the expectation of $\frac{1}{np}\|\mathbf{W}^*\mathbf{Y}_O\|_4^4$ satisfies: + +$$ +\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G} ^ {\prime}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\left\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \right\| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 3 \tau . \tag {53} +$$ + +Proof Notice that + +$$ +\mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G} ^ {\prime}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} = \mathbb {E} _ {\boldsymbol {X} _ {o}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} + \mathbb {E} _ {\boldsymbol {G} ^ {\prime}} \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4}, \tag {54} +$$ + +and + +$$ +\mathbb {E} _ {\boldsymbol {X} _ {o}} \left\| \boldsymbol {W} ^ {*} \boldsymbol {Y} \right\| _ {4} ^ {4} = \mathbb {E} _ {\boldsymbol {X} _ {o}} \left\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \boldsymbol {X} _ {o} \right\| _ {4} ^ {4} = 3 p \theta (1 - \theta) \left\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \right\| _ {4} ^ {4} + 3 n p \theta^ {2}. \tag {55} +$$ + +Moreover, the orthogonal rotation $(W^{*}G^{\prime})$ of a standard Gaussian matrix $G^{\prime}$ is also a standard Gaussian matrix, therefore, + +$$ +\mathbb {E} _ {\boldsymbol {G} ^ {\prime}} \left\| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \right\| _ {4} ^ {4} = 3 \tau n p. 
\tag {56} +$$ + +Hence, + +$$ +\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {G} ^ {\prime}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\left\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \right\| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 3 \tau , \tag {57} +$$ + +which completes the proof. + +# A.4 PROOF OF THEOREM 3.4 + +Claim A.4 (Concentration of Objective with Outliers) $\forall \theta \in (0,1)$ , let $X_{o} \in \mathbb{R}^{n \times p}$ , $x_{i,j} \sim_{iid} BG(\theta)$ . Let $D_{o} \in \mathcal{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\pmb{Y} = D_{o}\pmb{X}_{o}$ . For any orthogonal matrix $\pmb{W} \in \mathcal{O}(n;\mathbb{R})$ and any random Gaussian matrix $\pmb{G}' \in \mathbb{R}^{n \times \tau p}$ , $g_{i,j}' \sim_{iid} \mathcal{N}(0,1)$ independent of $X_{o}$ , let $\pmb{Y}_{O} = [\pmb{Y},\pmb{G}']$ denote the input with outlier $\pmb{G}'$ , then: + +$$ +\mathbb {P} \left(\sup _ {\boldsymbol {W} \in O (n; \mathbb {R})} \frac {1}{n p} \left| \left\| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \right\| _ {4} ^ {4} - \mathbb {E} \left\| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \right\| _ {4} ^ {4} \right| \geq \delta\right) < \frac {1}{p}, \tag {58} +$$ + +when $p = \Omega (\tau^2 n^2\ln n / \delta^2)$ + +Proof Notice that + +$$ +\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} \right| \geq \delta\right) \\ \leq \underbrace {\mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n ; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} \right| \geq \frac {\delta}{2}\right)} _ {\Gamma_ {1}} \tag {59} \\ + \underbrace {\mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm 
{O} (n ; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4} \right| \geq \frac {\delta}{2}\right)} _ {\Gamma_ {2}}. \\ \end{array}
+$$
+
+Applying Lemma B.5 to $\Gamma_1$ (substituting $\delta$ with $\frac{\delta}{2}$ ), we obtain
+
+$$
+\begin{array}{l} \Gamma_ {1} = \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} \| _ {4} ^ {4} \right| \geq \frac {\delta}{2}\right) \\ < \exp \left(- \frac {3 p \delta^ {2}}{c _ {1} \theta + 16 n (\ln p) ^ {4} \delta} + n ^ {2} \ln \left(\frac {120 n p (\ln p) ^ {4}}{\delta}\right)\right) \tag {60} \\ + \exp \left(- \frac {p \delta^ {2}}{c _ {2} \theta} + n ^ {2} \ln \left(\frac {120 n p (\ln p) ^ {4}}{\delta}\right)\right) + 2 n p \theta \exp \left(- \frac {(\ln p) ^ {2}}{2}\right) < \frac {1}{2 p}, \\ \end{array}
+$$
+
+for two constants $c_{1} \geq 4 \times 10^{4}, c_{2} > 13{,}440$ , and the last inequality holds when $p = \Omega(\theta n^{2} \ln n / \delta^{2})$ .
Moreover, applying Lemma B.5 again (letting $\theta = 1, Y = G' \in \mathbb{R}^{n \times \tau p}$ and substituting $\delta$ with $\frac{\delta}{2\tau}$ ), we have
+
+$$
+\begin{array}{l} \Gamma_ {2} = \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4} \right| \geq \frac {\delta}{2}\right) \\ = \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{\tau n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {G} ^ {\prime} \| _ {4} ^ {4} \right| \geq \frac {\delta}{2 \tau}\right) \tag {61} \\ < \exp \left(- \frac {3 p (\delta / \tau) ^ {2}}{c _ {1} + 16 n (\ln p) ^ {4} \delta / \tau} + n ^ {2} \ln \left(\frac {120 n p (\ln p) ^ {4}}{\delta / \tau}\right)\right) \\ + \exp \left(- \frac {p (\delta / \tau) ^ {2}}{c _ {2}} + n ^ {2} \ln \left(\frac {120 n p (\ln p) ^ {4}}{\delta / \tau}\right)\right) + 2 n p \exp \left(- \frac {(\ln p) ^ {2}}{2}\right) < \frac {1}{2 p}, \\ \end{array}
+$$
+
+for two constants $c_1 \geq 4 \times 10^4, c_2 > 13{,}440$ , and the last inequality holds when $p = \Omega(\tau^2 n^2 \ln n / \delta^2)$ . Combining the facts that $\theta \leq 1$ while $\tau$ can be larger than $1$ , we conclude that
+
+$$
+\mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {O} \| _ {4} ^ {4} \right| \geq \delta\right) < \Gamma_ {1} + \Gamma_ {2} < \frac {1}{p} \tag {62}
+$$
+
+when $p = \Omega\left(\tau^{2}n^{2}\ln n / \delta^{2}\right)$ .
+
+# A.5 PROOF OF PROPOSITION 3.5
+
+Claim A.5 (Expectation of Objective with Sparse Corruptions) $\forall \theta \in (0,1)$ , let $X_{o} \in \mathbb{R}^{n \times p}$ , $x_{i,j} \sim_{iid} BG(\theta)$ .
Let $D_{o} \in \mathrm{O}(n; \mathbb{R})$ be an orthogonal matrix and assume $\mathbf{Y} = D_{o}\mathbf{X}_{o}$ . For any orthogonal matrix $\mathbf{W} \in \mathrm{O}(n; \mathbb{R})$ and any random Bernoulli matrix $\mathbf{B} \in \mathbb{R}^{n \times p}$ , $b_{i,j} \sim_{iid} Ber(\beta)$ independent of $X_{o}$ , let $\mathbf{Y}_{C} = \mathbf{Y} + \sigma \mathbf{B} \circ \mathbf{S}$ denote the data with sparse corruptions, and $\mathbf{S} \in \mathbb{R}^{n \times p}$ is defined in equation 15. Then the expectation of $\frac{1}{np} \| \mathbf{W}^* \mathbf{Y}_C \|_4^4$ satisfies: + +$$ +\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4}}{n} + \sigma^ {4} \beta (1 - 3 \beta) \frac {\| \boldsymbol {W} \| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 6 \sigma^ {2} \theta \beta + 3 \sigma^ {4} \beta^ {2}. \tag {63} +$$ + +Proof Let $W^{*}D_{o} = M\in \mathrm{O}(n;\mathbb{R})$ , notice that + +$$ +\left\| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \right\| _ {4} ^ {4} = \left\| \boldsymbol {M} \boldsymbol {X} _ {o} + \sigma \boldsymbol {W} ^ {*} (\boldsymbol {B} \circ \boldsymbol {S}) \right\| _ {4} ^ {4}, \tag {64} +$$ + +hence + +$$ +\begin{array}{l} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} = \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \sum_ {j = 1} ^ {p} \sum_ {i = 1} ^ {n} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j} + \sigma \sum_ {k = 1} ^ {n} w _ {k, i} b _ {k, j} s _ {k, j}\right) ^ {4} \\ = \sum_ {j = 1} ^ {p} \sum_ {i = 1} ^ {n} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \left[ \underbrace {\left(\sum_ {k = 1} ^ {n} m _ {i , k} x _ {k , j}\right) ^ {4}} _ {\Gamma_ {1}} + 6 \underbrace {\left(\sum_ {k = 1} ^ {n} m _ {i , k} x _ {k , 
j}\right) ^ {2} \left(\sigma \sum_ {k = 1} ^ {n} w _ {k , i} b _ {k , j} s _ {k , j}\right) ^ {2}} _ {\Gamma_ {2}} \right. \\ + \underbrace {\left(\sigma \sum_ {k = 1} ^ {n} w _ {k , i} b _ {k , j} s _ {k , j}\right) ^ {4}} _ {\Gamma_ {3}} + 4 \underbrace {\left(\sum_ {k = 1} ^ {n} m _ {i , k} x _ {k , j}\right) ^ {3} \left(\sigma \sum_ {k = 1} ^ {n} w _ {k , i} b _ {k , j} s _ {k , j}\right)} _ {\Gamma_ {4}} \tag {65} \\ + 4 \underbrace {\left(\sum_ {k = 1} ^ {n} m _ {i , k} x _ {k , j}\right) \left(\sigma \sum_ {k = 1} ^ {n} w _ {k , i} b _ {k , j} s _ {k , j}\right) ^ {3}} _ {\Gamma_ {5}} \biggr ]. \\ \end{array} +$$ + +Moreover, + +$\Gamma_{1}$ + +$$ +\begin{array}{l} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \Gamma_ {1} = \mathbb {E} _ {\boldsymbol {X} _ {o}} \Gamma_ {1} = 3 \theta \sum_ {k = 1} ^ {n} m _ {i, k} ^ {4} + 6 \theta^ {2} \sum_ {1 \leq k _ {1} < k _ {2} \leq n} m _ {i, k _ {1}} ^ {2} m _ {i, k _ {2}} ^ {2} \tag {66} \\ = 3 \theta (1 - \theta) \sum_ {k = 1} ^ {n} m _ {i, k} ^ {4} + 3 \theta^ {2}, \\ \end{array} +$$ + +$\Gamma_{2}$ + +$$ +\begin{array}{l} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \Gamma_ {2} = \left[ \mathbb {E} _ {\boldsymbol {X} _ {o}} \left(\sum_ {k = 1} ^ {n} m _ {i, k} x _ {k, j}\right) ^ {2} \right] \left[ \mathbb {E} _ {\boldsymbol {B}, \boldsymbol {S}} \left(\sigma \sum_ {k = 1} ^ {n} w _ {k, i} b _ {k, j} s _ {k, j}\right) ^ {2} \right] \tag {67} \\ = \theta \left(\sum_ {k = 1} ^ {n} m _ {i, k} ^ {2}\right) \sigma^ {2} \beta \left(\sum_ {k = 1} ^ {n} w _ {k, i} ^ {2}\right) = \sigma^ {2} \theta \beta , \\ \end{array} +$$ + +$\Gamma_3$ + +$$ +\begin{array}{l} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \Gamma_ {3} = \mathbb {E} _ {\boldsymbol {B}, \boldsymbol {S}} \Gamma_ {3} = \sigma^ {4} \beta \sum_ {k = 1} ^ {n} w _ {k, i} ^ {4} + 6 \sigma^ {4} \beta^ {2} \sum_ {1 \leq k _ {1} < k _ {2} \leq n} w _ {k _ {1}, i} ^ {2} w _ {k _ {2}, i} ^ 
{2} \tag {68} \\ = \sigma^ {4} \beta (1 - 3 \beta) \sum_ {k = 1} ^ {n} w _ {k, i} ^ {4} + 3 \sigma^ {4} \beta^ {2}, \\ \end{array} +$$ + + $\Gamma_4,\Gamma_5$ + + $$ +\mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \Gamma_ {4} = 0, \quad \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \Gamma_ {5} = 0. \tag {69} +$$ + + Substituting $\mathbb{E}\Gamma_1,\mathbb{E}\Gamma_2,\mathbb{E}\Gamma_3,\mathbb{E}\Gamma_4,\mathbb{E}\Gamma_5$ back into equation 65 yields + + $$ +\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4}}{n} + \sigma^ {4} \beta (1 - 3 \beta) \frac {\| \boldsymbol {W} \| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 6 \sigma^ {2} \theta \beta + 3 \sigma^ {4} \beta^ {2}. \tag {70} +$$ + + # A.6 PROOF OF THEOREM 3.6 + + Claim A.6 (Concentration of Objective with Sparse Corruptions) $\forall \theta \in (0,1)$ , let $X_{o} \in \mathbb{R}^{n \times p}$ , $x_{i,j} \sim_{iid} BG(\theta)$ . Let $D_{o} \in \mathrm{O}(n;\mathbb{R})$ be an orthogonal matrix and assume $\pmb{Y} = D_{o}\pmb{X}_{o}$ .
For any orthogonal matrix $\pmb{W} \in \mathrm{O}(n;\mathbb{R})$ and any random Bernoulli matrix $\pmb{B} \in \mathbb{R}^{n \times p}$ , $b_{i,j} \sim_{iid} Ber(\beta)$ independent of $X_{o}$ , let $\pmb{Y}_{C} = \pmb{Y} + \sigma\pmb{B} \circ \pmb{S}$ denote the input with sparse corruptions, and $\pmb{S} \in \mathbb{R}^{n \times p}$ is defined in equation 15, then: + + $$ +\mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} \right| \geq \delta\right) < \frac {1}{p}, \tag {71} +$$ + + when $p = \Omega \left(\sigma^{8}\beta n^{2}\ln n / \delta^{2}\right)$ . + + Proof According to Proposition 3.5, we know that + + $$ +\frac {1}{n p} \mathbb {E} _ {\boldsymbol {X} _ {o}, \boldsymbol {B}, \boldsymbol {S}} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} = 3 \theta (1 - \theta) \frac {\| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4}}{n} + \sigma^ {4} \beta (1 - 3 \beta) \frac {\| \boldsymbol {W} \| _ {4} ^ {4}}{n} + 3 \theta^ {2} + 6 \sigma^ {2} \theta \beta + 3 \sigma^ {4} \beta^ {2}. \tag {72} +$$ + + Moreover, $\mathbf{Y}_C = [\pmb {y}_1 + \sigma \pmb {b}_1\circ \pmb {s}_1,\pmb {y}_2 + \sigma \pmb {b}_2\circ \pmb {s}_2,\dots ,\pmb {y}_p + \sigma \pmb {b}_p\circ \pmb {s}_p]$ has independent columns. Let + + $$ +\boldsymbol {Z} = \left[ \boldsymbol {z} _ {1}, \boldsymbol {z} _ {2}, \dots , \boldsymbol {z} _ {p} \right] = \boldsymbol {D} _ {o} ^ {*} \boldsymbol {Y} _ {C} = \boldsymbol {X} _ {o} + \sigma \boldsymbol {C}, \tag {73} +$$ + + where $\pmb{C} = \pmb{D}_o^*(\pmb{B} \circ \pmb{S})$ . Since the columns of $\pmb{Z}$ are also independent, we can define the function $f_{\pmb{z}}(\pmb{W}): \mathrm{O}(n; \mathbb{R}) \mapsto \mathbb{R}$ as + + $$ +f _ {z} (\boldsymbol {W}) = \left\| \boldsymbol {W} ^ {*} z \right\| _ {4} ^ {4}.
\tag {74} +$$ + + Next, we check assumptions 1, 2, and 3, and then apply Lemma B.4. + + Assumption 1: $\mu (n,p)$ . Since $Z = X_{o} + \sigma C$ , we have + + $$ +\left\{\max _ {i, j} \left| z _ {i, j} \right| > B \right\} = \left\{\max _ {i, j} \left| x _ {i, j} + \sigma c _ {i, j} \right| > B \right\} \subseteq \left\{\max _ {i, j} \left| x _ {i, j} \right| > B - \sigma \max _ {i, j} \left| c _ {i, j} \right| \right\}. \tag {75} +$$ + + Moreover, since + + $$ +\max _ {i, j} \left| c _ {i, j} \right| \leq \max _ {i} \| \boldsymbol {d} _ {i} \| _ {1} \leq \sqrt {n}, \tag {76} +$$ + + we have + + $$ +\left\{\max _ {i, j} \left| z _ {i, j} \right| > B \right\} \subseteq \left\{\max _ {i, j} \left| x _ {i, j} \right| > B - \sigma \sqrt {n} \right\}. \tag {77} +$$ + + Thus we know that + + $$ +\mathbb {P} \left(\max _ {i, j} | z _ {i, j} | > B\right) \leq \mathbb {P} \left(\max _ {i, j} | x _ {i, j} | > B - \sigma \sqrt {n}\right) \leq 2 n p \theta \exp \left(- \frac {\left(B - \sigma \sqrt {n}\right) ^ {2}}{2}\right). \tag {78} +$$ + + Specifically, we set $B = p^{\frac{1}{4}}$ , which yields + + $$ +\mu (n, p) = 2 n p \theta \exp \left(- \frac {\left(p ^ {\frac {1}{4}} - \sigma \sqrt {n}\right) ^ {2}}{2}\right). \tag {79} +$$ + + Assumption 2: Lipschitz Constant $L_{f}$ . By Proposition 3.5, $\forall W\in \mathrm{O}(n;\mathbb{R})$ , we know that + + $$ +\mathbb {E} f _ {z} (\boldsymbol {W}) = 3 \theta (1 - \theta) \| \boldsymbol {W} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4} + \sigma^ {4} \beta (1 - 3 \beta) \| \boldsymbol {W} \| _ {4} ^ {4} + 3 n \theta^ {2} + 6 n \sigma^ {2} \theta \beta + 3 n \sigma^ {4} \beta^ {2}.
\tag {80} +$$ + + Hence $\forall W_1, W_2 \in \mathrm{O}(n; \mathbb{R})$ , we have + + $$ +\begin{array}{l} \left| \mathbb {E} \| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} \right| \tag {81} \\ \leq 3 \theta (1 - \theta) \left| \| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4} - \| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \| _ {4} ^ {4} \right| + \sigma^ {4} \beta (1 - 3 \beta) \left| \| \boldsymbol {W} _ {1} \| _ {4} ^ {4} - \| \boldsymbol {W} _ {2} \| _ {4} ^ {4} \right|, \\ \end{array} +$$ + + From the derivation of equation 34, we have + + $$ +\left| \left\| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {D} _ {o} \right\| _ {4} ^ {4} - \left\| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {D} _ {o} \right\| _ {4} ^ {4} \right| \leq 4 n ^ {2} \left\| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \right\| _ {2}, \tag {82} +$$ + + and therefore, we have + + $$ +\left| \mathbb {E} \| \boldsymbol {W} _ {1} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} _ {2} ^ {*} \boldsymbol {z} \| _ {4} ^ {4} \right| \leq \left[ 1 2 n ^ {2} \theta (1 - \theta) + 4 n ^ {2} \sigma^ {4} \beta (1 - 3 \beta) \right] \| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \| _ {2}, \tag {83} +$$ + + which yields + + $$ +L _ {f} = 1 2 n ^ {2} \theta (1 - \theta) + 4 n ^ {2} \sigma^ {4} \beta (1 - 3 \beta). \tag {84} +$$ + + Assumption 2: Lipschitz Constant $\bar{L}_f$ . Since $f_{\bar{z}}(\boldsymbol {W}) = \| \boldsymbol{W}^{*}\bar{z}\|_{4}^{4}$ , applying the same derivation as in equation 37, we have + + $$ +\left| \left\| \boldsymbol {W} _ {1} ^ {*} \bar {z} \right\| _ {4} ^ {4} - \left\| \boldsymbol {W} _ {2} ^ {*} \bar {z} \right\| _ {4} ^ {4} \right| \leq 4 n ^ {2} B ^ {4} \| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \| _ {2}, \tag {85} +$$ + + which yields + + $$ +\bar {L} _ {f} = 4 n ^ {2} B ^ {4}. \tag {86} +$$ + + Assumption 3: Upper Bound $R_{1}$ .
Notice that $\forall W\in \mathrm{O}(n;\mathbb{R})$ + + $$ +f _ {\bar {z}} (\boldsymbol {W}) = \| \boldsymbol {W} ^ {*} \bar {z} \| _ {4} ^ {4} = \| \boldsymbol {W} ^ {*} \bar {z} \| _ {2} ^ {4} \left\| \frac {\boldsymbol {W} ^ {*} \bar {z}}{\| \boldsymbol {W} ^ {*} \bar {z} \| _ {2}} \right\| _ {4} ^ {4} \leq \| \boldsymbol {W} ^ {*} \bar {z} \| _ {2} ^ {4} = \| \bar {z} \| _ {2} ^ {4} \leq n ^ {2} B ^ {4} = R _ {1}. \tag {87} +$$ + + Assumption 3: Upper Bound $R_{2}$ . Assume the support of $x$ is $\mathcal{S}$ ; then $\forall i \in [n]$ + + $$ +z _ {i} = \left\{ \begin{array}{l l} v _ {i} + \sigma c _ {i}, & \text {if } i \in \mathcal {S}, \\ \sigma c _ {i}, & \text {otherwise}, \end{array} \right. \tag {88} +$$ + + where $\pmb{v} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is a Gaussian vector and $\pmb{c} = \pmb{D}_o^*(\pmb{b} \circ \pmb{s})$ is the sparse corruption vector after orthogonal rotation $\pmb{D}_o^*$ . Let $\pmb{P}_{\mathcal{S}}: \mathbb{R}^n \mapsto \mathbb{R}^n$ be a projection onto the support $\mathcal{S}$ , that is, $\forall \pmb{q} \in \mathbb{R}^n$ : + + $$ +\left(\boldsymbol {P} _ {\mathcal {S}} \boldsymbol {q}\right) _ {i} = \left\{ \begin{array}{l l} q _ {i}, & \text {if } i \in \mathcal {S}, \\ 0, & \text {otherwise.} \end{array} \right.
\tag {89} +$$ + + Let $M = D_{o}W$ and $h = b \circ s$ ; then we have + + $$ +\begin{array}{l} \mathbb {E} f _ {z} ^ {2} (\boldsymbol {W}) = \mathbb {E} \left(\| \boldsymbol {W} ^ {*} z \| _ {4} ^ {8}\right) \\ = \mathbb {E} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \left[ \left(\langle \boldsymbol {w} _ {i}, \boldsymbol {x} \rangle + \sigma \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle\right) ^ {4} \left(\langle \boldsymbol {w} _ {j}, \boldsymbol {x} \rangle + \sigma \langle \boldsymbol {m} _ {j}, \boldsymbol {h} \rangle\right) ^ {4} \right] \\ \leq \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \left[ \mathbb {E} \left(\langle \boldsymbol {w} _ {i}, \boldsymbol {x} \rangle + \sigma \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle\right) ^ {8} \mathbb {E} \left(\langle \boldsymbol {w} _ {j}, \boldsymbol {x} \rangle + \sigma \langle \boldsymbol {m} _ {j}, \boldsymbol {h} \rangle\right) ^ {8} \right] ^ {\frac {1}{2}} \tag {90} \\ = \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \left[ \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle + \sigma \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle\right) ^ {8} \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {j}, \boldsymbol {v} \rangle + \sigma \langle \boldsymbol {m} _ {j}, \boldsymbol {h} \rangle\right) ^ {8} \right] ^ {\frac {1}{2}}.
\\ \end{array} +$$ + + Moreover, we have: + + $$ +\begin{array}{l} \mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle + \sigma \left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle\right) ^ {8} \\ = \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {8}\right) + 2 8 \sigma^ {2} \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {6} \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle^ {2}\right) + 7 0 \sigma^ {4} \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {4} \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle^ {4}\right) \tag {91} \\ + 2 8 \sigma^ {6} \mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle^ {2} \left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle^ {6}\right) + \sigma^ {8} \mathbb {E} \left(\left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle^ {8}\right). \\ \end{array} +$$ + + We now provide upper bounds for $\mathbb{E}(\langle P_S w_i, v \rangle^r)$ and $\mathbb{E}(\langle m_i, h \rangle^r)$ for $r = 2, 4, 6, 8$ , respectively. Since $\mathbb{E}h = 0$ and $h = b \circ s$ , all odd moments of $\langle m_i, h \rangle$ vanish.
Letting $S'$ denote the support of $h$ , we have + + $$ +\begin{array}{l} \mathbb {E} \left(\left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle^ {8}\right) = \mathbb {E} \prod_ {l = 1} ^ {8} \left(\sum_ {k _ {l} = 1} ^ {n} m _ {k _ {l}, i} h _ {k _ {l}}\right) \tag {92} \\ = \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S} ^ {\prime}} \left(m _ {k _ {1}, i} ^ {2} \mathbb {1} _ {k _ {1} \in \mathcal {S} ^ {\prime}} m _ {k _ {2}, i} ^ {2} \mathbb {1} _ {k _ {2} \in \mathcal {S} ^ {\prime}} m _ {k _ {3}, i} ^ {2} \mathbb {1} _ {k _ {3} \in \mathcal {S} ^ {\prime}} m _ {k _ {4}, i} ^ {2} \mathbb {1} _ {k _ {4} \in \mathcal {S} ^ {\prime}}\right). \\ \end{array} +$$ + + Now we discuss these cases separately: + + - With probability $c_{1}\beta^{4}$ ($c_{1} \leq 1$), all of $k_{1}, k_{2}, k_{3}, k_{4}$ are in $\mathcal{S}'$ ; in this case, we have + + $$ +\begin{array}{l} \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S} ^ {\prime}} \left(m _ {k _ {1}, i} ^ {2} \mathbb {1} _ {k _ {1} \in \mathcal {S} ^ {\prime}} m _ {k _ {2}, i} ^ {2} \mathbb {1} _ {k _ {2} \in \mathcal {S} ^ {\prime}} m _ {k _ {3}, i} ^ {2} \mathbb {1} _ {k _ {3} \in \mathcal {S} ^ {\prime}} m _ {k _ {4}, i} ^ {2} \mathbb {1} _ {k _ {4} \in \mathcal {S} ^ {\prime}}\right) \tag {93} \\ = \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \left(m _ {k _ {1}, i} ^ {2} m _ {k _ {2}, i} ^ {2} m _ {k _ {3}, i} ^ {2} m _ {k _ {4}, i} ^ {2}\right) = 1.
\\ \end{array} +$$ + + - With probability $c_2 \beta^3$ ($c_2 \leq 1$), only three among $k_1, k_2, k_3, k_4$ are in $\mathcal{S}'$ ; in this case, we have + + $$ +\begin{array}{l} \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S} ^ {\prime}} \left(m _ {k _ {1}, i} ^ {2} \mathbb {1} _ {k _ {1} \in \mathcal {S} ^ {\prime}} m _ {k _ {2}, i} ^ {2} \mathbb {1} _ {k _ {2} \in \mathcal {S} ^ {\prime}} m _ {k _ {3}, i} ^ {2} \mathbb {1} _ {k _ {3} \in \mathcal {S} ^ {\prime}} m _ {k _ {4}, i} ^ {2} \mathbb {1} _ {k _ {4} \in \mathcal {S} ^ {\prime}}\right) \tag {94} \\ = \sum_ {k _ {1}, k _ {2}, k _ {3}} \left(m _ {k _ {1}, i} ^ {2} m _ {k _ {2}, i} ^ {2} m _ {k _ {3}, i} ^ {4}\right) = \| \boldsymbol {m} _ {i} \| _ {4} ^ {4} \leq 1. \\ \end{array} +$$ + + - With probability $c_3\beta^2$ ($c_3 \leq 1$), only two among $k_{1}, k_{2}, k_{3}, k_{4}$ are in $\mathcal{S}'$ ; in this case, we have + + $$ +\begin{array}{l} \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S} ^ {\prime}} \left(m _ {k _ {1}, i} ^ {2} \mathbb {1} _ {k _ {1} \in \mathcal {S} ^ {\prime}} m _ {k _ {2}, i} ^ {2} \mathbb {1} _ {k _ {2} \in \mathcal {S} ^ {\prime}} m _ {k _ {3}, i} ^ {2} \mathbb {1} _ {k _ {3} \in \mathcal {S} ^ {\prime}} m _ {k _ {4}, i} ^ {2} \mathbb {1} _ {k _ {4} \in \mathcal {S} ^ {\prime}}\right) \tag {95} \\ = \sum_ {k _ {1}, k _ {2}} \left(m _ {k _ {1}, i} ^ {4} m _ {k _ {2}, i} ^ {4}\right) + \sum_ {k _ {1}, k _ {2}} \left(m _ {k _ {1}, i} ^ {2} m _ {k _ {2}, i} ^ {6}\right) \leq \| \boldsymbol {m} _ {i} \| _ {4} ^ {4} + \| \boldsymbol {m} _ {i} \| _ {6} ^ {6} \leq 2.
\\ \end{array} +$$ + + - With probability $c_4\beta$ ($c_4 \leq 1$), only one among $k_{1}, k_{2}, k_{3}, k_{4}$ is in $\mathcal{S}'$ ; in this case, we have + + $$ +\begin{array}{l} \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S} ^ {\prime}} \left(m _ {k _ {1}, i} ^ {2} \mathbb {1} _ {k _ {1} \in \mathcal {S} ^ {\prime}} m _ {k _ {2}, i} ^ {2} \mathbb {1} _ {k _ {2} \in \mathcal {S} ^ {\prime}} m _ {k _ {3}, i} ^ {2} \mathbb {1} _ {k _ {3} \in \mathcal {S} ^ {\prime}} m _ {k _ {4}, i} ^ {2} \mathbb {1} _ {k _ {4} \in \mathcal {S} ^ {\prime}}\right) \tag {96} \\ = \sum_ {k _ {1}} m _ {k _ {1}, i} ^ {8} = \| \boldsymbol {m} _ {i} \| _ {8} ^ {8} \leq 1. \\ \end{array} +$$ + + Hence, combining equations 93, 94, 95, and 96, we know that + + $$ +\mathbb {E} \left(\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle^ {8}\right) \leq c \beta , \tag {97} +$$ + + for a constant $c > 1$ . Similarly, one can conclude that + + $$ +\mathbb {E} \left(\left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle^ {r}\right) \leq c \beta , \quad \forall r = 2, 4, 6. \tag {98} +$$ + + Moreover, since $\pmb{v} \sim \mathcal{N}(\pmb{0}, \pmb{I})$ , we know that + + $$ +\mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle^ {r}\right) = (r - 1)!! \, \mathbb {E} \left\| \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i} \right\| _ {2} ^ {r}, \quad \forall r = 2, 4, 6, 8.
\tag {99} +$$ + + Hence, with the same technique as applied above, we have + + $$ +\begin{array}{l} \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {8}\right) = 1 0 5 \mathbb {E} \| \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i} \| _ {2} ^ {8} \\ = 1 0 5 \sum_ {k _ {1}, k _ {2}, k _ {3}, k _ {4}} \mathbb {E} _ {\mathcal {S}} \left(w _ {k _ {1}, i} ^ {2} \mathbb {1} _ {k _ {1} \in \mathcal {S}} w _ {k _ {2}, i} ^ {2} \mathbb {1} _ {k _ {2} \in \mathcal {S}} w _ {k _ {3}, i} ^ {2} \mathbb {1} _ {k _ {3} \in \mathcal {S}} w _ {k _ {4}, i} ^ {2} \mathbb {1} _ {k _ {4} \in \mathcal {S}}\right) \leq 1 0 5 c \theta , \tag {100} \\ \end{array} +$$ + + for a constant $c > 1$ . Similarly, we have + + $$ +\mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle^ {r}\right) = (r - 1)!! \, \mathbb {E} \left\| \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i} \right\| _ {2} ^ {r} \leq c (r - 1)!! \theta , \quad \forall r = 2, 4, 6, \tag {101} +$$ + + for a constant $c > 1$ .
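As a quick numerical sanity check (not part of the proof), the Gaussian moment identity behind equations 99–101 can be verified by Monte Carlo; the fixed vector `a` below is a hypothetical stand-in for $\boldsymbol{P}_{\mathcal{S}}\boldsymbol{w}_i$ with the support held fixed:

```python
import numpy as np

# Monte Carlo check of E<a, v>^r = (r-1)!! * ||a||_2^r for v ~ N(0, I).
# `a` is an arbitrary fixed vector standing in for P_S w_i (support fixed).
rng = np.random.default_rng(0)
n, trials = 8, 500_000
a = rng.standard_normal(n)
a *= 0.7 / np.linalg.norm(a)        # ||a||_2 = 0.7, mimicking a projection

inner = rng.standard_normal((trials, n)) @ a   # <a, v> ~ N(0, ||a||_2^2)

double_factorial = {2: 1, 4: 3, 6: 15, 8: 105}   # (r-1)!!
results = {}
for r, df in double_factorial.items():
    empirical = float(np.mean(inner ** r))
    predicted = df * np.linalg.norm(a) ** r
    results[r] = (empirical, predicted)
    print(r, empirical, predicted)
```

The empirical moments match $(r-1)!!\,\|a\|_2^r$ to within Monte Carlo error, which is what the $105 = 7!!$ factor in equation 100 relies on.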
Therefore, combining this with equation 91, we have + + $$ +\begin{array}{l} \mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle + \sigma \left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle\right) ^ {8} \\ = \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {8}\right) + 2 8 \sigma^ {2} \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {6} \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle^ {2}\right) + 7 0 \sigma^ {4} \mathbb {E} \left(\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \rangle^ {4} \langle \boldsymbol {m} _ {i}, \boldsymbol {h} \rangle^ {4}\right) \tag {102} \\ + 2 8 \sigma^ {6} \mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle^ {2} \left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle^ {6}\right) + \sigma^ {8} \mathbb {E} \left(\left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle^ {8}\right) \\ \end{array} +$$ + + $$ +\leq c \theta + 2 8 c \sigma^ {2} \theta \beta + 7 0 c \sigma^ {4} \theta \beta + 2 8 c \sigma^ {6} \theta \beta + c \sigma^ {8} \beta \leq c \sigma^ {8} \beta , +$$ + + for a constant $c > 1$ .
Hence, combining this with equation 90, we have + + $$ +\begin{array}{l} \mathbb {E} f _ {\boldsymbol {z}} ^ {2} (\boldsymbol {W}) = \mathbb {E} \left(\| \boldsymbol {W} ^ {*} \boldsymbol {z} \| _ {4} ^ {8}\right) \\ \leq \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \left[ \mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {i}, \boldsymbol {v} \right\rangle + \sigma \left\langle \boldsymbol {m} _ {i}, \boldsymbol {h} \right\rangle\right) ^ {8} \mathbb {E} \left(\left\langle \boldsymbol {P} _ {\mathcal {S}} \boldsymbol {w} _ {j}, \boldsymbol {v} \right\rangle + \sigma \left\langle \boldsymbol {m} _ {j}, \boldsymbol {h} \right\rangle\right) ^ {8} \right] ^ {\frac {1}{2}} \leq c \sigma^ {8} \beta n ^ {2}, \tag {103} \\ \end{array} +$$ + + for a constant $c > 1$ . Therefore, we can conclude that + + $$ +R _ {2} = c \sigma^ {8} \beta n ^ {2}, \tag {104} +$$ + + for a constant $c > 1$ . + + Applying Lemma B.4 for Concentration. Now we apply Lemma B.4 with + + 1. + + $$ +B = p ^ {\frac {1}{4}}, \quad \mu (n, p) = 2 n p \theta \exp \left(- \frac {\left(p ^ {\frac {1}{4}} - \sigma \sqrt {n}\right) ^ {2}}{2}\right), \tag {105} +$$ + + 2. + + $$ +L _ {f} = 1 2 n ^ {2} \theta (1 - \theta) + 4 n ^ {2} \sigma^ {4} \beta (1 - 3 \beta), \quad \bar {L} _ {f} = 4 n ^ {2} p, \tag {106} +$$ + + 3.
+ + $$ +R _ {1} = n ^ {2} p, \quad R _ {2} = c \sigma^ {8} \beta n ^ {2}, \tag {107} +$$ + + for a constant $c > 1$ , we have + + $$ +\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \sum_ {j = 1} ^ {p} \left[ f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) \right] \right| \geq \delta\right) \\ = \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} ^ {*} \boldsymbol {Y} _ {C} \| _ {4} ^ {4} \right| \geq \delta\right) \\ < \exp \left[ - \frac {p n ^ {2} \delta^ {2}}{3 2 R _ {2} + 8 R _ {1} n \delta / 3} + n ^ {2} \ln \left(\frac {1 2 \left(L _ {f} + \bar {L} _ {f}\right)}{n \delta}\right) + \ln 2 \right] + \mu (n, p) \tag {108} \\ < \exp \left[ - \frac {3 p \delta^ {2}}{C \sigma^ {8} \beta + 8 n p \delta} + n ^ {2} \ln \left(\frac {6 0 n p}{\delta}\right) + \ln 2 \right] \\ + 2 n p \theta \exp \left(- \frac {\left(p ^ {\frac {1}{4}} - \sigma \sqrt {n}\right) ^ {2}}{2}\right) < \frac {1}{p}, \\ \end{array} +$$ + + for a constant $C > 96$ , when $p = \Omega \left(\sigma^8\beta n^2\ln n / \delta^2\right)$ . + + # B RELATED LEMMAS AND INEQUALITIES + + Lemma B.1 (Two-sided Bernstein's Inequality) Given $p$ random variables $x_{1},x_{2},\ldots x_{p}$ , if $\forall i\in [p],|x_i|\leq b$ almost surely, then + + $$ +\mathbb {P} \left(\frac {1}{p} \left| \sum_ {i = 1} ^ {p} \left[ x _ {i} - \mathbb {E} \left[ x _ {i} \right] \right] \right| \geq t\right) \leq 2 \exp \left(- \frac {p t ^ {2}}{\frac {2}{p} \sum_ {i = 1} ^ {p} \mathbb {E} \left[ x _ {i} ^ {2} \right] + 2 b t / 3}\right). \tag {109} +$$ + + Proof See Proposition 2.14 in Wainwright (2019); one can easily generalize it to the two-sided case.
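As a numerical illustration of Lemma B.1 (not part of the proof), the sketch below, under an assumed toy setup of uniform variables on $[-1, 1]$ (so $b = 1$ and $\mathbb{E}[x_i^2] = 1/3$), compares an empirical tail frequency with the Bernstein bound:

```python
import numpy as np

# Empirical check that the two-sided Bernstein bound of Lemma B.1 holds
# for bounded i.i.d. variables: x_i ~ Uniform(-1, 1), b = 1, E[x_i^2] = 1/3.
rng = np.random.default_rng(0)
p, t, reps = 200, 0.1, 20_000
b, second_moment = 1.0, 1.0 / 3.0

samples = rng.uniform(-1.0, 1.0, size=(reps, p))
deviations = np.abs(samples.mean(axis=1))      # |1/p sum (x_i - E x_i)|
empirical_tail = float(np.mean(deviations >= t))

# Denominator (2/p) * sum E[x_i^2] + 2bt/3 reduces to 2*E[x^2] + 2bt/3 here.
bound = 2.0 * np.exp(-p * t**2 / (2.0 * second_moment + 2.0 * b * t / 3.0))
print(empirical_tail, bound)   # the empirical tail sits below the bound
```

The gap between the two numbers also shows the bound is not tight for this light-tailed example; Bernstein's strength is that it holds uniformly over all bounded distributions.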
+ + Lemma B.2 (Entry-wise Truncation of a Bernoulli Gaussian Matrix) Let $X \in \mathbb{R}^{n \times p}$ , where $x_{i,j} \sim_{iid} BG(\theta)$ and let $\| \cdot \|_{\infty}$ denote the maximum element (in absolute value) of a matrix, then + + $$ +\mathbb {P} \left(\max _ {i, j} | x _ {i, j} | \geq t\right) \leq 2 n p \theta \exp \left(- \frac {t ^ {2}}{2}\right). \tag {110} +$$ + + Proof A Bernoulli Gaussian variable $x_{i,j}$ , $\forall i \in [n], j \in [p]$ , satisfies $x_{i,j} = b_{i,j} \cdot g_{i,j}$ , where $b_{i,j} \sim_{iid} \operatorname{Ber}(\theta)$ and $g_{i,j} \sim_{iid} \mathcal{N}(0,1)$ ; therefore + + $$ +\mathbb {P} \left(\left| x _ {i, j} \right| \geq t\right) = \theta \cdot \mathbb {P} \left(\left| g _ {i, j} \right| \geq t\right) \leq 2 \theta \exp \left(- \frac {t ^ {2}}{2}\right). \tag {111} +$$ + + By the union bound, we have: + + $$ +\mathbb {P} \left(\max _ {i, j} | x _ {i, j} | \geq t\right) \leq \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {p} \mathbb {P} \left(| x _ {i, j} | \geq t\right) \leq 2 n p \theta \exp \left(- \frac {t ^ {2}}{2}\right). \tag {112} +$$ + + Lemma B.3 ( $\epsilon$ -Net Covering of Stiefel Manifolds) There is a covering $\epsilon$ -net $S_{\epsilon}$ for the Stiefel manifold $\mathcal{M} = \{\pmb{W} \in \mathbb{R}^{n \times r} | \pmb{W}^* \pmb{W} = \pmb{I}\}$ , $(n \geq r)$ in operator norm + + $$ +\forall \boldsymbol {W} \in \mathcal {M}, \exists \boldsymbol {W} ^ {\prime} \in \mathcal {S} _ {\epsilon} \quad \text {such that} \quad \| \boldsymbol {W} - \boldsymbol {W} ^ {\prime} \| _ {2} \leq \epsilon , \tag {113} +$$ + + of size $|\mathcal{S}_{\epsilon}|\leq \left(\frac{6}{\epsilon}\right)^{nr}$ . + + Proof See Lemma D.4 in Zhai et al. (2019b). + + Lemma B.4 (Uniform Concentration bound over $\mathsf{O}(n;\mathbb{R})$ ) Let $Z \in \mathbb{R}^{n \times p}$ be a random matrix whose columns $z_{1}, z_{2}, \ldots, z_{p}$ are i.i.d. drawn from a distribution $\mathcal{P}$ .
$\forall z \sim \mathcal{P}$ , let $f_{z}(\cdot): \mathsf{O}(n;\mathbb{R}) \mapsto \mathbb{R}$ denote a function that maps $\mathsf{O}(n;\mathbb{R})$ to $\mathbb{R}$ . $\forall B > 0$ , let $\bar{Z}$ denote the truncation of $Z$ : + + $$ +\bar {z} _ {i, j} = \left\{ \begin{array}{l l} z _ {i, j} & \text {if } | z _ {i, j} | \leq B, \\ 0 & \text {otherwise.} \end{array} \right. \tag {114} +$$ + + Assume that: + + 1. $\mathbb{P}(\max_{i,j}|z_{i,j}| > B) < \mu (n,p),$ where $\mu (n,p)\to 0$ as $p$ increases, and $B$ depends on $p$ ; +2. $\mathbb{E}f_{z}(\cdot)$ is $L_{f}$ -Lipschitz and $f_{\bar{z}}(\cdot)$ is $\bar{L}_f$ -Lipschitz, that is, $\forall W_1,W_2\in \mathrm{O}(n;\mathbb{R})$ + + $$ +\left| \mathbb {E} f _ {\boldsymbol {z}} \left(\boldsymbol {W} _ {1}\right) - \mathbb {E} f _ {\boldsymbol {z}} \left(\boldsymbol {W} _ {2}\right) \right| \leq L _ {f} \| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \| _ {2}, \tag {115} +$$ + + $$ +\left| f _ {\bar {z}} \left(\boldsymbol {W} _ {1}\right) - f _ {\bar {z}} \left(\boldsymbol {W} _ {2}\right) \right| \leq \bar {L} _ {f} \| \boldsymbol {W} _ {1} - \boldsymbol {W} _ {2} \| _ {2}. +$$ + + 3. $\forall \pmb{W} \in \mathsf{O}(n; \mathbb{R})$ , $f_{\bar{\pmb{z}}}(\pmb{W}) \leq R_1, \mathbb{E}[f_{\pmb{z}}^2(\pmb{W})] \leq R_2$ . + + Then: + + $$ +\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \sum_ {j = 1} ^ {p} \left[ f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) \right] \right| \geq \delta\right) \tag {116} \\ < \exp \left[ - \frac {p n ^ {2} \delta^ {2}}{3 2 R _ {2} + 8 R _ {1} n \delta / 3} + n ^ {2} \ln \left(\frac {1 2 (L _ {f} + \bar {L} _ {f})}{n \delta}\right) + \ln 2 \right] + \mu (n, p), \\ \end{array} +$$ + + when $p > \rho$ , where $\rho$ depends on $n$ .
+ + Proof By assumption 1, we have + + $$ +\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \sum_ {j = 1} ^ {p} \left[ f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) \right] \right| \geq \delta\right) \\ \leq \mathbb {P} (\boldsymbol {Z} \neq \bar {\boldsymbol {Z}}) + \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| \geq n \delta\right) \tag {117} \\ \leq \mu (n, p) + \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| \geq n \delta\right). \\ \end{array} +$$ + + Uniform bound over $\mathrm{O}(n;\mathbb{R})$ . $\forall \epsilon > 0$ , Lemma B.3 shows there exists an $\epsilon$ -net $\mathcal{S}_{\epsilon}$ that covers $\mathrm{O}(n;\mathbb{R})$ : + + $$ +\mathcal {S} _ {\epsilon} = \left\{\boldsymbol {W} _ {1}, \boldsymbol {W} _ {2}, \dots , \boldsymbol {W} _ {| \mathcal {S} _ {\epsilon} |} \right\}, \quad \mathrm {O} (n; \mathbb {R}) \subset \bigcup_ {l = 1} ^ {| \mathcal {S} _ {\epsilon} |} \mathbb {B} (\boldsymbol {W} _ {l}, \epsilon), \tag {118} +$$ + + and $|\mathcal{S}_{\epsilon}|$ satisfies $|\mathcal{S}_{\epsilon}| \leq (6 / \epsilon)^{n^2}$ .
Together with Lipschitz assumption 2, we know that + + $$ +\begin{array}{l} \sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}, \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| \\ \leq \sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}, \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W} _ {l}) \right| + \sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}, \epsilon)} | \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) | \tag {119} \\ + \sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}, \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W} _ {l}) - \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W}) \right| \\ \leq \sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}, \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W} _ {l}) \right| + (L _ {f} + \bar {L} _ {f}) \epsilon .
\\ \end{array} +$$ + + Hence, let + + $$ +\epsilon = \frac {n \delta}{2 \left(L _ {f} + \bar {L} _ {f}\right)}, \tag {120} +$$ + + we have + + $$ +\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| \geq n \delta\right) \\ \leq \sum_ {l = 1} ^ {| \mathcal {S} _ {\epsilon} |} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}; \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| \geq n \delta\right) \tag {121} \\ < \left(\frac {6}{\epsilon}\right) ^ {n ^ {2}} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}; \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W} _ {l}) \right| \geq n \delta - (L _ {f} + \bar {L} _ {f}) \epsilon\right) \\ = \exp \left[ n ^ {2} \ln \left(\frac {1 2 (L _ {f} + \bar {L} _ {f})}{n \delta}\right) \right] \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathbb {B} (\boldsymbol {W} _ {l}; \epsilon)} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {z} _ {j}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W} _ {l}) \right| \geq \frac {n \delta}{2}\right). \\ \end{array} +$$ + + Tail bound within each $\mathbb{B}(\pmb{W}_l, \epsilon)$ . Next, $\forall l \in [|\mathcal{S}_{\epsilon}|]$ , we apply point-wise control at the given point $\pmb{W}_l \in \mathcal{S}_{\epsilon}$ ; combined with the covering argument above, this yields the uniform concentration bound over $\mathrm{O}(n; \mathbb{R})$ .
By the triangle inequality, we have

$$
\left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} \left(\boldsymbol {W} _ {l}\right) - \mathbb {E} f _ {\boldsymbol {z}} \left(\boldsymbol {W} _ {l}\right) \right| \leq \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} \left(\boldsymbol {W} _ {l}\right) - \mathbb {E} f _ {\bar {\boldsymbol {z}}} \left(\boldsymbol {W} _ {l}\right) \right| + \left| \mathbb {E} f _ {\bar {\boldsymbol {z}}} \left(\boldsymbol {W} _ {l}\right) - \mathbb {E} f _ {\boldsymbol {z}} \left(\boldsymbol {W} _ {l}\right) \right|. \tag {122}
$$

$\forall \pmb {W}\in \mathbb{B}(\pmb {W}_l,\epsilon)$, by Assumptions 1 and 3, we have

$$
\begin{array}{l} \left| \mathbb {E} f _ {\bar {\boldsymbol {z}}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| = \left| \mathbb {E} \left[ f _ {\boldsymbol {z}} (\boldsymbol {W}) \cdot \mathbb {1} _ {\boldsymbol {Z} \neq \bar {\boldsymbol {Z}}} \right] \right| \leq \sqrt {\mathbb {E} f _ {\boldsymbol {z}} ^ {2} (\boldsymbol {W})} \sqrt {\mathbb {E} \mathbb {1} _ {\boldsymbol {Z} \neq \bar {\boldsymbol {Z}}}} \\ = \sqrt {\mathbb {E} f _ {\boldsymbol {z}} ^ {2} (\boldsymbol {W}) \, \mathbb {P} \left(\max _ {i , j} | z _ {i , j} | > B\right)} \leq \sqrt {R _ {2} \, \mu (n , p)}. \tag {123} \end{array}
$$

Let $\rho$ be a lower bound on $p$ such that $\forall p > \rho$, we have $\sqrt{R_2\mu(n,p)} < \frac{n\delta}{4}$.
Hence, when $p > \rho$, we have

$$
\left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} \left(\boldsymbol {W} _ {l}\right) - \mathbb {E} f _ {\boldsymbol {z}} \left(\boldsymbol {W} _ {l}\right) \right| \leq \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} \left(\boldsymbol {W} _ {l}\right) - \mathbb {E} f _ {\bar {\boldsymbol {z}}} \left(\boldsymbol {W} _ {l}\right) \right| + \frac {n \delta}{4}, \tag {124}
$$

which implies

$$
\begin{array}{l} \mathbb {P} \left(\left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W} _ {l}) \right| \geq \frac {n \delta}{2}\right) \leq \mathbb {P} \left(\left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W} _ {l}) - \mathbb {E} f _ {\bar {\boldsymbol {z}}} (\boldsymbol {W} _ {l}) \right| \geq \frac {n \delta}{4}\right) \tag {125} \\ \leq 2 \exp \left(- \frac {p n ^ {2} \delta^ {2}}{32 R _ {2} + 8 R _ {1} n \delta / 3}\right), \end{array}
$$

where the last inequality follows from Bernstein's inequality (Lemma B.1), together with Assumption 3 and $\mathbb{E}f_{\bar{\boldsymbol{z}}}^{2}(\pmb {W})\leq \mathbb{E}f_{\pmb{z}}^{2}(\pmb {W})\leq R_{2}$.
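To spell out the last step: assuming Lemma B.1 takes the standard Bernstein form $\mathbb{P}\big(|\frac{1}{p}\sum_j X_j - \mathbb{E}X| \geq t\big) \leq 2\exp\big(-\frac{p t^{2}}{2R_{2} + 2R_{1} t/3}\big)$ for terms with second moment bounded by $R_{2}$ and magnitude bounded by $R_{1}$, setting $t = \frac{n\delta}{4}$ gives

$$
2\exp\left(-\frac{p\,(n\delta/4)^{2}}{2R_{2} + 2R_{1}(n\delta/4)/3}\right) = 2\exp\left(-\frac{p n^{2}\delta^{2}/16}{2R_{2} + R_{1} n\delta/6}\right) = 2\exp\left(-\frac{p n^{2}\delta^{2}}{32R_{2} + 8R_{1} n\delta/3}\right),
$$

where the final equality multiplies numerator and denominator by 16, matching the constants in equation 125.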
Combining equation 125 and equation 121, we have

$$
\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z}} (\boldsymbol {W}) \right| \geq n \delta\right) \\ < \exp \left[ n ^ {2} \ln \left(\frac {12 \left(L _ {f} + \bar {L} _ {f}\right)}{n \delta}\right) \right] \mathbb {P} \left(\left| \frac {1}{p} \sum_ {j = 1} ^ {p} f _ {\bar {\boldsymbol {z}} _ {j}} \left(\boldsymbol {W} _ {l}\right) - \mathbb {E} f _ {\boldsymbol {z}} \left(\boldsymbol {W} _ {l}\right) \right| \geq \frac {n \delta}{2}\right) \tag {126} \\ < \exp \left[ - \frac {p n ^ {2} \delta^ {2}}{32 R _ {2} + 8 R _ {1} n \delta / 3} + n ^ {2} \ln \left(\frac {12 (L _ {f} + \bar {L} _ {f})}{n \delta}\right) + \ln 2 \right]. \end{array}
$$

Summary. Therefore, we conclude that

$$
\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathrm {O} (n; \mathbb {R})} \frac {1}{n p} \left| \sum_ {j = 1} ^ {p} \left[ f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) - \mathbb {E} f _ {\boldsymbol {z} _ {j}} (\boldsymbol {W}) \right] \right| \geq \delta\right) \tag {127} \\ < \exp \left[ - \frac {p n ^ {2} \delta^ {2}}{32 R _ {2} + 8 R _ {1} n \delta / 3} + n ^ {2} \ln \left(\frac {12 (L _ {f} + \bar {L} _ {f})}{n \delta}\right) + \ln 2 \right] + \mu (n, p), \end{array}
$$

when $p > \rho$.

Lemma B.5 (Concentration Bound of the Clean Objective over $\mathcal{O}(n; \mathbb{R})$). $\forall \theta \in (0,1]$, if $X \in \mathbb{R}^{n \times p}$ with $x_{i,j} \sim_{iid} BG(\theta)$, then for any $\delta > 0$ the following inequality holds:

$$
\begin{array}{l} \mathbb {P} \left(\sup _ {\boldsymbol {W} \in \mathcal {O} (n; \mathbb {R})} \frac {1}{n p} \left| \| \boldsymbol {W} \boldsymbol {X} \| _ {4} ^ {4} - \mathbb {E} \| \boldsymbol {W} \boldsymbol {X} \| _ {4} ^ {4} \right| \geq \delta\right) \\ < \exp \left(- \frac {3 p \delta^ {2}}{c _ {1} \theta + 8 n (\ln p) ^ {4} \delta} + n ^ {2} \ln \left(\frac {60 n p (\ln p) ^ {4}}{\delta}\right)\right) \tag {128} \\ + \exp \left(- \frac {p \delta^ {2}}{c _ {2} \theta} + n ^ {2} \ln \left(\frac {60 n p (\ln p) ^ {4}}{\delta}\right)\right) + 2 n p \theta \exp \left(- \frac {(\ln p) ^ {2}}{2}\right), \end{array}
$$

for some constants $c_{1} > 10^{4}$, $c_{2} > 3360$. Moreover,

$$
\begin{array}{l} \exp \left(- \frac {3 p \delta^ {2}}{c _ {1} \theta + 8 n (\ln p) ^ {4} \delta} + n ^ {2} \ln \left(\frac {60 n p (\ln p) ^ {4}}{\delta}\right)\right) \tag {129} \\ + \exp \left(- \frac {p \delta^ {2}}{c _ {2} \theta} + n ^ {2} \ln \left(\frac {60 n p (\ln p) ^ {4}}{\delta}\right)\right) + 2 n p \theta \exp \left(- \frac {(\ln p) ^ {2}}{2}\right) \leq \frac {1}{p}, \end{array}
$$

when $p = \Omega (\theta n^2 \ln n / \delta^2)$.

Proof. See Lemma 2.2 in Zhai et al. (2019b); note that the sparsity condition $\theta \in (0,1)$ of the original lemma in Zhai et al. (2019b) can be easily generalized to $\theta = 1$.

# C ADDITIONAL EXPERIMENTAL RESULTS

![](images/814d0e0790308b8830ac14d3c5f6d9a2b7c93bbb1bd8b8ab28982c1834d74dcd.jpg)
Figure 8: Representations of three $16 \times 16$ patches in both the clean and noisy images. Each selected patch is visualized, both with and without noise, and the 6 corresponding bases with largest absolute coefficients are shown.

![](images/21b09cc88374b9d4f100285973824ede4a14d337de5bfc9dc4ffd52f919c95bd.jpg)
(a)

![](images/e655ad25b8b5a21b181b8379bfd59cd0d512e22099a91519c2d56543d6bc68dc.jpg)
(b)

![](images/776bb0319cfa0d5136306fb04384e3e43e57a1c9ea7e34dec4706e7da0e4a480.jpg)
(c)

![](images/7c3df6738418ec4fb33045fe79492d47b39a58a153fc387397dcf69bc42388f8.jpg)
Figure 9: Representations of three $8 \times 8 \times 3$ colored patches in both the clean and noisy images.
Each selected patch is visualized, both with and without noise, and the 6 corresponding bases with largest absolute coefficients are shown. + +![](images/91e5225400b771dc4fa636272b0886f1db6786041ca0344039886923a0906d4b.jpg) +(a) + +![](images/44b77efd838600a680562196a678627474d8720b2e70ba153d1dcfe34ec6b370.jpg) +(b) + +![](images/b3439aec7c8cb690aed38cb3eadfbbd75c549397ae1a4377a3f9d74c7756716d.jpg) +(c) + +![](images/7618d19673643fc9b992d35e68499b0adab96333594e2017b3c57bc6c3ca6267.jpg) + +![](images/c0e48db21f33be15808efcb2942153a115d3203f6b791cde47227267ce63b7c8.jpg) + +![](images/50416dc8a78cafb840ddcb1caab7eebec8007db14e9c4c8a775ebdd6c4e9b5c8.jpg) +(a) Clean Patches +(c) Patches with Outliers +Figure 10: All $8 \times 8 \times 3 = 192$ bases learned from 100,000 random $8 \times 8$ colored patches sampled from the CIFAR-10 data-set. (a) Learned Bases from clean CIFAR-10; (b) Learned Bases from CIFAR-10 with Gaussian noise, $\mathrm{SNR} = 6.23$ ; (c) Learned Bases from CIFAR-10 with $20\%$ of Gaussian outliers; (d) Learned Bases from CIFAR-10 with $50\%$ of sparse corruptions. For all learned bases, the resulting atoms are sorted according to the $\ell^1$ -norm of their coefficients in the sparse code. 
![](images/339322a45ba8398dc1eca631f7d27be928a130b5b1e98607ef94179110f52766.jpg)
(b) Patches with Noise (SNR = 6.23)
(d) Patches with Sparse Corruptions
# UNDERSTANDING THE LIMITATIONS OF CONDITIONAL GENERATIVE MODELS

Ethan Fetaya*

Jörn-Henrik Jacobsen*

Will Grathwohl

Richard Zemel

Vector Institute and University of Toronto

{ethanf, jjacobs, wgrathwohl, zemel}@cs.toronto.edu

# ABSTRACT

Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts.
They are a natural choice to solve discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to determine whether they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs; and (3) Detection of incorrectly labeled in-distribution inputs.

Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may be surprisingly ineffective for robust classification.

# 1 INTRODUCTION

![](images/0f3004448dc61a6145b683a2e7a4d93e180a166af0fa484e233ae1f17e84c9fa.jpg)
Figure 1: Linear interpolations of inputs and respective outputs of a conditional generative model between two MNIST and CIFAR10 images from different classes. X-axis is interpolation steps and Y-axis negative log-likelihood in bits/dim (higher is more likely under model). MNIST interpolated images are far less likely than real images, whereas for CIFAR10 the opposite is observed, leading to high confidence classification of ambiguous out-of-distribution images.

Conditional generative models have recently shown promise to overcome many limitations of their discriminative counterparts.
They have been shown to be robust against adversarial attacks (Schott et al., 2019; Ghosh et al., 2019; Song et al., 2018; Li et al., 2018; Frosst et al., 2018), to enable robust classification in the presence of outliers (Nalisnick et al., 2019b) and to achieve promising results in semi-supervised learning (Kingma et al., 2014; Salimans et al., 2016). Motivated by these success stories, we study the properties of conditional generative models in more detail.

Unlike discriminative models, which can ignore class-irrelevant information, conditional generative models cannot discard any information in the input, potentially making it harder to fool them. Further, jointly modeling the input and target distribution should make it easy to detect out-of-distribution inputs. These traits lend hope to the belief that good class-conditional generative models can overcome important problems faced by discriminative models.

In this work, we analyze conditional generative models by assessing them on a spectrum of robustness tasks: (1) Detection of worst-case outliers in the form of adversarial examples; (2) Detection of average-case outliers in the form of ambiguous inputs; and (3) Detection of incorrectly labeled in-distribution inputs. If a generative classifier is able to perform well on all of these, it will naturally be robust to noisy, ambiguous or adversarially perturbed inputs.

Outlier detection in the above settings is substantially different from general out-of-distribution (OOD) detection, where the goal is to use unconditional generative models to detect any OOD input. For the general case, likelihood has been shown to be a poor detector of OOD samples. In fact, often higher likelihood is assigned to OOD data than to the training data itself (Nalisnick et al., 2019a). However, class-conditional likelihood necessarily needs to decrease towards the decision boundary for the classifier to work well.
Thus, if the class-conditional generative model has high accuracy, rejection of outliers from the wrong class via likelihood may be possible.

Our contributions are:

Provable Robustness We answer: Can we theoretically guarantee that a strong conditional generative model can robustly detect adversarially attacked inputs? In section 2 we show that even a near-perfect conditional generative model cannot be guaranteed to reject adversarially perturbed inputs with high probability.

Assessing the Likelihood Objective We discuss the basis for empirically analyzing robustness in practice. We identify several fundamental issues with the maximum likelihood objective typically used to train conditional generative models and discuss whether it is appropriate for detecting out-of-distribution inputs.

Understanding Conflicting Results We explore various properties of our trained conditional generative models and how they relate to the fact that the model is robust on MNIST but not on CIFAR10. We further propose a new dataset where we combine MNIST images with CIFAR background, making the generative task as hard as CIFAR while keeping the discriminative task as easy as MNIST, and investigate how it affects robustness.

# 2 CONFIDENT MISTAKES CANNOT BE RULED OUT

The most challenging task in robust classification is accurately classifying or detecting adversarial attacks: inputs which have been maliciously perturbed to fool the classifier. In this section we discuss the possibility of guaranteeing robustness to adversarial attacks via conditional generative models.

Detectability of Adversarial Examples In the adversarial spheres work (Gilmer et al., 2018) the authors showed that a model can be fooled without changing the ground-truth probability of the attacked datapoint. This was claimed to show that adversarial examples can lie on the data manifold and therefore cannot be detected.
While (Gilmer et al., 2018) is an important work for understanding adversarial attacks, it has several limitations with regard to conditional generative models. First, just because the attack does not change the ground-truth likelihood, this does not mean the model cannot detect the attack. Since the adversary needs to move the input to a location where the model is incorrect, the question arises: what kind of mistake will the model make? If the model assigns low likelihood to the correct class without increasing the likelihood of the other classes then the adversarial attack will be detected, as the joint likelihood over all classes moves below the threshold of typical inputs. Second, on the adversarial spheres dataset (Gilmer et al., 2018) the class supports do not overlap. If we were to train a model of the joint density $p_{\theta}(x,y)$ (which does not have $100\%$ classification accuracy) then the KL divergence $KL(p(x,y)||p_{\theta}(x,y))$, where $p(x,y)$ is the data density, is infinite due to division by zero (note that $KL(p_{\theta}(x,y)||p(x,y))$ is what is minimized with maximum likelihood). This poses the question of whether small $KL(p(x,y)||p_{\theta}(x,y))$ or small Jensen-Shannon divergence is sufficient to guarantee robustness. In the following, we show that this condition is insufficient.

![](images/ea635a6e26568c064d0ee21d46f6c2c2dda55cce5852757760c57de5ec340e98.jpg)
Figure 2: Counter-example construction. Shown on the left are the two class data densities, on the right the Bayes-optimal classifier for this problem (assuming $\lambda_1 > \lambda_2$ ) and the model we consider. Despite being almost optimal, the model can be fooled with undetectable adversarial examples (red arrows). Detailed description in section 2.

Why no Robustness Guarantee can be Given The intuition why conditional generative models should be robust is as follows: If we have a robust discriminative model then the set of confident mistakes, i.e.
where the adversarial attacks must reside, has low probability but might be large in volume. For a robust conditional generative model, the set of undetectable adversarial attacks, i.e. high-density high-confidence mistakes, has to be small in volume. Since the adversary has to be $\Delta$-close to this small volume set, the $\Delta$-area around this small volume set should still be small. This is where the idea breaks down due to the curse of dimensionality. Expanding a set by a small radius can lead to a much larger one even with smoothness assumptions. Based on this insight we build an analytic counter-example for which we can prove that even if

$$
K L (q | | p) < \epsilon \quad K L (p | | q) < \epsilon \tag {1}
$$

where $p = p(x,y)$ is the data distribution, and $q = q(x,y)$ is the model, we can with probability $\approx 0.5$ take a correctly classified input sampled from $p$ and perturb it by at most $\Delta$ to create an adversarial example that is classified incorrectly and is not detectable.

We note that the probability in every ball with radius $\Delta$ can be made as small as desired, excluding degenerate cases. We also assume that the Bayes optimal classifier is confident and is not affected by the attack, i.e. we do not change the underlying class but wrongfully flip the decision of the classifier.

The counter-example goes as follows: Let $U(a, b)$ be the density of a uniform distribution on an annulus in dimension $d$, $\{x \in \mathbb{R}^d : a \leq ||x|| \leq b\}$, then the data conditional distribution is

$$
p (x | 0) = \lambda_ {1} U (0, 1) + (1 - \lambda_ {1}) U (1, 1 + \Delta) \quad 0 \leq \lambda_ {1} \leq 1 \tag {2}
$$

$$
p (x \mid 1) = \lambda_ {2} U (0, 1) + (1 - \lambda_ {2}) U (2, 3) \quad 0 \leq \lambda_ {2} \leq 1
$$

with $p(y = 0) = p(y = 1) = 1 / 2$. Both classes are a mixture of two distributions, uniform on the unit sphere and uniform on an annulus, as shown in Fig. 2.
The model distribution is the following:

$$
q (x \mid 0) = U (0, 1 + \Delta) \tag {3}
$$

$$
q (x \mid 1) = \lambda_ {2} U (0, 1) + (1 - \lambda_ {2}) U (2, 3)
$$

i.e. for $y = 1$ the model is perfect, while for $y = 0$ we replace the mixture with a uniform distribution over the whole domain. If $\lambda_1 \gg \lambda_2$ then points in the sphere with radius 1 should be classified as class $y = 0$ with high likelihood. If $\lambda_2 \gg \frac{1}{(1 + \Delta)^d}$ then the model classifies points in the unit sphere incorrectly with high likelihood. Finally, if $1 \gg \lambda_1$ then almost half the data points will fall in the annulus between 1 and $1 + \Delta$ and can be adversarially attacked with distance at most $\Delta$ by moving them into the unit sphere, as seen in Fig. 2. We also note that these attacks cannot be detected as the model likelihood only increases. In high dimensions, almost all the volume of a sphere is in the outer shell, and this can be used to show that in high enough dimensions we can get the condition in Eq. 1 for any value of $\epsilon$ and $\Delta$ (and also the confidence of the mistakes $\delta$). The detailed proof is in the supplementary material.

This counter-example shows that even under very strong conditions, a good conditional generative model can be attacked. Therefore no theoretical guarantees can be given in the general case for these models. Our construction, however, does not depend on the learning model but on the data geometry. This raises interesting questions concerning the source of the susceptibility to attacks: Is it the model or an inherent issue with the data?
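The volume-concentration step ("almost all the volume of a sphere is in the outer shell") is easy to check numerically. A minimal sketch; the dimensions and radii below are illustrative choices, not values from the paper:

```python
def inner_fraction(r_inner: float, r_outer: float, d: int) -> float:
    """Fraction of the volume of a d-dimensional ball of radius r_outer
    that lies inside the concentric ball of radius r_inner.
    Ball volume scales as r**d, so the ratio is (r_inner / r_outer)**d."""
    return (r_inner / r_outer) ** d

# With Delta = 0.05, the unit ball's share of the (1 + Delta)-ball vanishes:
for d in (2, 100, 1000):
    print(d, inner_fraction(1.0, 1.05, d))
```

For $d = 1000$ the inner unit ball holds less than $e^{-48}$ of the volume, so under $q(x \mid 0) = U(0, 1 + \Delta)$ essentially all probability mass lies within $\Delta$ of the outer boundary, which is exactly the geometry the counter-example exploits.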
+ +# 3 THE MAXIMUM LIKELIHOOD OBJECTIVE + +# 3.1 THE DIFFICULTY IN TRAINING CONDITIONAL GENERATIVE MODELS + +Most recent publications on likelihood-based generative models primarily focus on quantitative results of unconditional density estimation (van den Oord et al., 2016; Kingma & Dhariwal, 2018; Salimans et al., 2017b; Kingma et al., 2016; Papamakarios et al., 2017). For conditional density estimation, either only qualitative samples are shown (Kingma & Dhariwal, 2018), or it is reported that conditional density estimation does not lead to better likelihoods than unconditional density estimation. In fact, it has been reported that conditional density estimation can lead to slightly worse data likelihoods (Papamakarios et al., 2017; Salimans et al., 2017b), which is surprising at first, as extra bits of important information are provided to the model. + +Explaining Likelihood Behaviour One way to understand this seemingly contradictory relationship is to consider the objective we use to train our models. When we train a generative model with maximum likelihood (either exactly or through a lower bound) we are minimizing the empirical approximation of $\mathbb{E}_{x,y\sim P}\left[-\log (P_{\theta}(x,y))\right]$ which is equivalent to minimizing $KL(P(x,y)||P_{\theta}(x,y))$ . Consider now an image $x$ with a discrete label $y$ , which we are trying to model using $P_{\theta}(x,y)$ . The negative log-likelihood (NLL) objective is: + +$$ +\begin{array}{l} \mathbb {E} _ {(x, y) \sim P} [ - \log (P _ {\theta} (x, y)) ] = \mathbb {E} _ {x \sim P} [ - \log (P _ {\theta} (x)) ] \\ + \quad \mathbb {E} _ {x \sim P} [ \mathbb {E} _ {y} [ - \log (P _ {\theta} (y | x)) | x ] ] \tag {4} \\ \end{array} +$$ + +If we model $P_{\theta}(y|x)$ with a uniform distribution over classes, then the second term has a value of $\log(C)$ where $C$ is the number of classes. 
This value is negligible compared to the first term $\mathbb{E}_{x \sim P}[-\log(P_{\theta}(x))]$ and therefore the "penalty" for completely ignoring class information is negligible. So it is not surprising that models with strong generative abilities can have limited discriminative power. What makes matters even worse is that the penalty for confident mis-classification can be unbounded. This may also explain why the conditional ELBO is comparable to the unconditional ELBO (Papamakarios et al., 2017). Another way this can be seen is by thinking of the likelihood as the best lossless compression. When trying to encode an image, the benefit of the label is at most $\log(C)$ bits, which is small compared to the whole image. While these few bits are important for users, from a likelihood perspective the difference between the correct $p(y|x)$ and a uniform distribution is negligible. This means that when naively training a class-conditional generative model by minimizing $\mathbb{E}_{(x,y) \sim P}[-\log(P_{\theta}(x|y))]$, the resulting model typically performs very poorly as a classifier.

# 3.2 OUTLIER DETECTION

Another issue arises when models trained with maximum likelihood are used to detect outliers. The main issue is that maximum likelihood, which is equivalent to minimizing $KL(P(x,y) || P_{\theta}(x,y))$, is known to have a "mode-covering" behavior. It has been shown recently in (Nalisnick et al., 2019a) that generative models trained using maximum likelihood can be quite poor at detecting out-of-distribution examples. In fact, it has been shown that these models can give higher likelihood values, on average, to datasets different from the test dataset that corresponds to the training data. Intuitively one can still hope that a high-accuracy conditional generative model would recognize an input conditioned on the wrong class as an outlier, as it was successfully trained to separate these classes. In section 4.2 we show this is not the case in practice.
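The bit-accounting argument of Sec. 3.1 can be made concrete with simple arithmetic. A sketch; the 3.47 bits/dim figure is the CIFAR10 scale reported later in Table 1, used here only for illustration:

```python
import math

def label_bits(num_classes: int) -> float:
    # Upper bound on what the label can contribute to lossless compression.
    return math.log2(num_classes)

def image_bits(bits_per_dim: float, dims: int) -> float:
    # Total code length of the image under the model.
    return bits_per_dim * dims

C = 10                                 # class count for MNIST / CIFAR10
img = image_bits(3.47, 32 * 32 * 3)    # roughly 10660 bits for a CIFAR10 image
lbl = label_bits(C)                    # roughly 3.32 bits
print(lbl / img)                       # the label is a fraction of a percent
```

Even completely ignoring the label costs the likelihood objective only a few bits out of thousands, which is why strong likelihoods and weak classification accuracy can coexist.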
While (Nalisnick et al., 2019a) focuses its analysis on dataset variance, we propose that this is an inherent issue with the likelihood objective. If this is correct, then the way conditional generative models are trained is at odds with their desired behaviour, and useful conditional generative models will require a fundamentally different approach.

# 4 EXPERIMENTS

We now present a set of experiments designed to test the robustness of conditional generative models. All experiments were performed with a flow model where the likelihood can be computed in closed form as the probability of the latent space embedding (the prior) and a Jacobian correction term; see Sec A.1 for a detailed explanation. Given that we can compute $p(x,y)$ for each class, we can easily compute $p(y|x)$ and classify accordingly. Besides allowing closed-form likelihood computation, the flexibility in choosing the prior distribution was important to conduct various experiments. In our work we used a version of the GLOW model; details of the models and training are in the supplementary material sec. B. We note that the results are not unique to flow models, and we verified that a similar phenomenon can be seen when training with the PixelCNN++ autoregressive model (Salimans et al., 2017a) in sec. E.

# 4.1 TRAINING CONDITIONAL GENERATIVE MODELS

Here we investigate the ability to train a conditional generative model with good likelihood and accuracy simultaneously. Usually in flow models the prior distribution in latent space $z$ is Gaussian. For classification we used a class-conditional mixture of 10 Gaussians, $p(z|y) = \mathcal{N}(\mu_y,\sigma_y^2)$. We compare three settings: 1) A class-conditional mixture of 10 Gaussians as the prior (Base). 2) A class-conditional mixture of 10 Gaussians trained with an additional classification loss term (Reweighted). 3) Our proposed conditional split prior (Split) described in sec. A.4 in the supplementary material.
Results can be found in table 1. + +
| Dataset | Metric | Base | Reweight | Split |
|---|---|---|---|---|
| MNIST | % Acc | 96.9 | 99.0 | 99.3 |
| MNIST | bits/dim | 0.95 | 1.10 | 1.00 |
| CIFAR10 | % Acc | 56.8 | 83.2 | 84.0 |
| CIFAR10 | bits/dim | 3.47 | 3.54 | 3.53 |
Table 1: Comparison between different models.

As we can see, especially on CIFAR10, pushing up the accuracy to values that are still far from state-of-the-art already results in non-negligible deterioration in the likelihood values. This exemplifies how obtaining strong classification accuracy without harming likelihood estimation is still a challenging problem. We note that while the difference between the split prior and re-weighted version is not huge, the split prior achieves better NLL and better accuracy in both experiments. We experimented with various other methods to improve training with limited success; see sec. C in the supplementary material for further information.

# 4.2 NEGLIGIBLE IMPACT OF CLASS MEMBERSHIP ON LIKELIHOOD

![](images/61d0edd495faf6c8c0cb46db5d8fb71e6eb424220e4d7f728b6a8d4e4d0fa48a.jpg)
(a) MNIST

![](images/113390d7b877bbd317549c5a6e21a8b60638b101d819fadffa5efce8753ecf1d.jpg)
(b) CIFAR10

Figure 3: NLL for images conditioned on the correct class vs the highest probability wrong class.

Next we show that even conditional generative models which are strong classifiers do not see images with corrupted labels as outliers. To understand this phenomenon we first note that if we want the correct class to have a probability of at least $1 - \delta$ then it is enough for the corresponding logit to be larger than all the others by $\log(C) + \log\left(\frac{1 - \delta}{\delta}\right)$ where $C$ is the number of classes. For $C = 10$ and $\delta = 10^{-5}$ this is about 6, which is negligible relative to the likelihood of the image, which is on the scale of thousands. This means that even for a strong conditional generative model which confidently predicts the correct label, the pair $\{x_i, y_w \neq y_i\}$ (where $w$ is the leading incorrect class) cannot be detected as an outlier according to the joint distribution, as the gap $\log(p(x_i | y_i)) - \log(p(x_i | y_w))$ is much smaller than the variation in likelihood values.
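The required logit margin can be computed explicitly. A sketch in natural-log units, where the margin works out to about 14 nats (the figure of about 6 quoted above is consistent with base-10 logarithms); in either base it is negligible next to image NLLs in the thousands:

```python
import math

def required_gap(num_classes: int, delta: float) -> float:
    """Smallest margin g such that, if the correct logit exceeds every other
    logit by at least g, the softmax probability of the correct class
    is >= 1 - delta."""
    return math.log(num_classes - 1) + math.log((1 - delta) / delta)

gap = required_gap(10, 1e-5)               # about 13.7 nats
typical_nll = 3.47 * 3072 * math.log(2)    # a CIFAR10-scale image NLL in nats
print(gap, gap / typical_nll)              # the gap is a tiny fraction of the NLL
```

Since per-image NLLs vary by far more than this margin across the test set, conditioning on the wrong class barely moves the joint likelihood, which is what Fig. 3 visualizes.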
In Fig. 3 we show this by plotting the histograms of the likelihood conditioned both on the correct class and on the most likely wrong class over the test set. In other words, in order for $\log(p(x_i | y_w))$ to be considered an outlier the prediction needs to be extremely confident, much more than we expect it to be, considering test classification error.

# 4.3 ADVERSARIAL ATTACKS AS WORST CASE ANALYSIS

We first evaluate the ability of conditional generative models to detect standard attacks, and then try to detect attacks designed to fool the detector (likelihood function). We evaluate both the gradient-based Carlini-Wagner $L_{2}$ attack (CW- $L_{2}$ ) (Carlini & Wagner, 2017b) and the gradient-free boundary attack (Brendel et al., 2018). Results are shown in table 2 on the left. It is interesting to observe the disparity between the CW- $L_{2}$ attack, which is easily detectable, and the boundary attack which is much harder to detect.
| Attacking | Classification (Reweight) | Classification (Split) | Classification and Detection (Reweight) | Classification and Detection (Split) |
| --- | --- | --- | --- | --- |
| **MNIST** | | | | |
| CW-$L_2$ | 0% (100%) | 1% (100%) | 17% (100%) | 14% (100%) |
| Boundary attack | 43% (82%) | 36% (80%) | 0% (0%) | 0% (0%) |
| **CIFAR10** | | | | |
| CW-$L_2$ | 0% (97%) | 0% (0%) | 6% (99%) | 3% (100%) |
| Boundary attack | 67% (100%) | 72% (100%) | 100% (100%) | 100% (100%) |
Table 2: Comparison of attack detection. Percentage of successful and undetected attacks within $L_{2}$-distance of $\epsilon = 1.5$ for MNIST and $\epsilon = 33 / 255$ for CIFAR10 for the proposed models. The number in parentheses is the percentage of attacks that successfully fool the classifier, both detected and undetected.

Next we modify our attacks to try to fool the detector as well. With the CW-$L_{2}$ attack we follow the modification suggested in (Carlini & Wagner, 2017a) and add an extra loss term $\ell_{det}(x') = \max \{0, -\log(p(x')) - T\}$ where $T$ is the detection threshold. For the boundary attack we turn the $C$-way classification into a $(C + 1)$-way classification by adding another class, "non-image", and classify any image above the detection threshold as such. We then use a targeted attack to try to fool the network into classifying the image as a specific chosen class. This simple modification to the boundary attack will typically fail because it cannot initialize: the standard attack starts from a random image, and all random images are easily detected as "non-image" and therefore do not have the right target class. To address this we start from a randomly chosen image from the target class, ensuring the initialization is detected as a real image from the desired class.

From table 2 (right side) we can see that even after the modification $\mathrm{CW-L_2}$ still struggles to fool the detector. The boundary attack, however, succeeds completely on CIFAR10 and fails completely on MNIST, even though the unmodified attack sometimes fooled the detector there without directly trying. We hypothesize that this is because the area between two images of separate classes, which the boundary attack needs to pass through, is correctly detected as out of distribution only for MNIST and not CIFAR10. We explore this further below.
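The detector-aware loss term $\ell_{det}$ above can be sketched as follows; `log_p` is a hypothetical stand-in for the model's log-likelihood function, and the toy values are for illustration only.

```python
import numpy as np

def detector_loss(x_adv, log_p, T):
    """Hinge penalty from Carlini & Wagner (2017a): zero while the adversarial
    candidate's negative log-likelihood stays below the detection threshold T,
    and growing linearly once the candidate becomes detectable."""
    return max(0.0, -log_p(x_adv) - T)

# Toy check with a stand-in log-likelihood (a Gaussian-like score, not a trained model):
log_p = lambda x: -0.5 * float(np.sum(x ** 2))
T = 1.0
assert detector_loss(np.zeros(3), log_p, T) == 0.0        # well below threshold: no penalty
assert detector_loss(np.ones(3) * 2.0, log_p, T) == 5.0   # -log_p = 6 exceeds T = 1, loss = 5
```

The attacker minimizes this term jointly with the usual misclassification loss, so successful attacks stay below the detection threshold.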
# 4.4 AMBIGUOUS INPUTS AS AVERAGE CASE ANALYSIS

To understand why the learned networks are easily attacked on CIFAR but not on MNIST with the modified boundary attack, we explore the probability density of interpolations between two real images. This is inspired by the fact that the boundary attack proceeds along the line between the attacked image and the initial image. The minimum we would expect from a decent generative model is to detect the intermediate images as "non-image" with low likelihood. If this were the case and each class formed a disconnected high-likelihood region, the boundary attack would have a difficult time when starting from an image of a different class.

Given images $x_0$ and $x_1$ from separate classes $y_0$ and $y_1$ and for $\alpha \in [0, 1]$ we generate an intermediate image $x_\alpha = \alpha \cdot x_1 + (1 - \alpha)x_0$, and run the model on various $\alpha$ values to see the model prediction along the line. For endpoints we sample real images that are classified correctly and are above the detection threshold used previously. See Fig. 1 for interpolation examples from MNIST and CIFAR.

In figure 4 (a) we see the average results for MNIST for 1487 randomly selected pairs. As expected, the likelihood goes down as $\alpha$ moves away from the real images $x_0$ and $x_{1}$. We also see the probability of both classes drop rapidly as the network predictions become less confident on the intermediate images.
Sampling 100 $\alpha$ values uniformly in the range $[0, 1]$ we can also investigate how many of the interpolations stay above the detection threshold, i.e. all intermediate images are considered real by the model, and find that this happens only in $0.5\%$ of the cases.

![](images/4ec9fa132530cb6ce28f937cdca12cc0294c1600c38ab97dbd4662ad354d8cf3.jpg)
(a) Log-likelihood of interpolations on MNIST

![](images/d76d481ef62b2e821a73918886f90ed84bcaad7308ba222fbe38436086f9c890.jpg)
(c) Class probability of interpolations on MNIST

![](images/2c8f7a867569d698ab30be5ea33fd3f1a4c08c4e1bd7f91c846180e3dd52cd72.jpg)
(b) Log-likelihood of interpolations on CIFAR10

![](images/03d754e8ff11abb446a8a494ad88e02488a430cf425c0057e4d359101877e452.jpg)
(d) Class probability of interpolations on CIFAR10
Figure 4: Average log-likelihoods and class probabilities for interpolations between data points from different classes; the x-axis is the interpolation coefficient $\alpha$. The MNIST model behaves as desired and robustly detects interpolated images. The CIFAR10 model, however, fails strikingly, and interpolated images are consistently more likely than true data under the model.

On CIFAR images, using 1179 pairs, we get a very different picture (see fig. 4 (b)). Not only does the intermediate likelihood not drop down, it is even higher on average than on the real images, albeit to a small degree. In classification we also see a very smooth transition between classes, unlike the sharp drop in the MNIST experiment. Lastly, $100\%$ of the interpolated images lie above the detection threshold and none are detected as a "non-image" (for reference, the detection threshold has $78.6\%$ recall on real CIFAR10 test images). This shows that even with good likelihood and reasonable accuracy, the model still "mashes" the classes together, as one can move from one Gaussian to another without passing through low-likelihood regions in-between.
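The interpolation sweep described above can be sketched as follows; `log_p` here is a hypothetical stand-in for the trained model's log-likelihood, replaced by a toy isotropic Gaussian so the sketch is self-contained.

```python
import numpy as np

def interpolation_sweep(x0, x1, log_p, n=100):
    """Evaluate log-likelihood along the straight line between two images,
    x_alpha = alpha * x1 + (1 - alpha) * x0, for n uniformly spaced alphas."""
    alphas = np.linspace(0.0, 1.0, n)
    return alphas, np.array([log_p(a * x1 + (1 - a) * x0) for a in alphas])

# Toy stand-in density: an isotropic Gaussian centered at zero.
log_p = lambda x: -0.5 * float(np.sum(x ** 2))
x0, x1 = -np.ones(4), np.ones(4)
alphas, ll = interpolation_sweep(x0, x1, log_p)
# For this symmetric toy density the endpoints coincide and the midpoint is the
# mode -- the opposite of the MNIST behavior described in the text, where
# likelihood drops between the two real images.
assert np.isclose(ll[0], ll[-1])
assert ll[len(ll) // 2] > ll[0]
```

In the experiment the same sweep is run per image pair and the fraction of sweeps that never cross the detection threshold is reported.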
It also clarifies why the boundary attack is so successful on CIFAR but fails completely on MNIST. We note that the basic attack on MNIST is allowed to pass through these low-density areas, which is why it sometimes succeeds.

# 4.5 CLASS-UNRELATED ENTROPY IS TO BLAME

In this section, we show that the difference in performance between CIFAR10 and MNIST can largely be attributed to how the entropy in the datasets is distributed, i.e. how much the uncertainty in the data distribution is reduced after conditioning on the class label. For MNIST digits, a large source of uncertainty in pixel-space comes from the class label. Given the class, most pixels can be predicted accurately by simply taking the mean of the training set in each class. This is exactly why a linear classifier performs well on MNIST. Conversely, on CIFAR10, after conditioning on the class label there still exists considerable uncertainty. Given that the class is "cat," there still exist many complicated sources of uncertainty, such as where the cat is and how it is posed. In this dataset, a much larger fraction of the uncertainty is not accounted for after conditioning on the label. This is not a function of the domain or the dimensionality of the dataset; it is a function of the dataset itself.

To empirically verify this, we have designed a dataset which replicates the challenges of CIFAR10 and places them onto a problem of the same discriminative difficulty as MNIST. To achieve this, we simply replaced the black backgrounds of MNIST images with randomly sampled (downsampled and greyscale) images from CIFAR10. In this dataset, which we call background-MNIST (BG-MNIST), the classification problem is predictable from exactly the same set of pixels as in standard MNIST, but modeling the data density is much more challenging.

![](images/cd7fd8c5d40686acd6564c802a89f5fa53cd65dfbd7ab270ee37939b26006273.jpg)

![](images/11814b485d3407c9d19a8a5cc171a7703bdfef1ae2d2c9d9f8451a1e28fd5fac.jpg)
Figure 5: Top: Samples from the BG-MNIST-0 dataset. Bottom: Samples from the conditional generative model trained on the dataset. Note how the model has learnt to capture digit identity.

To further control the entropy in a fine-grained manner, we convolve the background with a Gaussian blur filter with various bandwidths to remove varying degrees of high-frequency information. With high blur, the task begins to resemble standard MNIST and conditional generative models should perform as they do on MNIST. With low and no blur we expect them to behave as they do on CIFAR10.

Table 3 summarizes the performance of conditional generative models on BG-MNIST. We train models with a "Reweighted" discriminative objective as in Section A. The reweighting allows them to perform well as classifiers, but the likelihood of their generative component falls to below CIFAR10 levels. More strikingly, when we now interpolate between datapoints we observe behavior identical to our CIFAR10 models. This can be seen in Figure 6. Thus, we have created a dataset with the discriminative difficulty of MNIST and the generative difficulty of CIFAR10.
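The BG-MNIST construction can be sketched as below. This is a sketch under assumptions not spelled out in the text: digits and backgrounds arrive as arrays with values in [0, 1], the digit intensity is used as an alpha mask over the background, and the Gaussian blur is a plain-numpy stand-in for a standard Gaussian filter.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in plain numpy (stand-in for a library filter)."""
    if sigma <= 0:
        return img
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def make_bg_mnist(digit, background, blur_sigma=0.0):
    """Composite a digit over a greyscale background; background pixels show
    through wherever the digit is dark. blur_sigma is the blur bandwidth
    (0 corresponds to BG-MNIST-0, i.e. no blur)."""
    return digit + (1.0 - digit) * gaussian_blur(background, blur_sigma)

digit = np.zeros((28, 28))
digit[10:18, 12:16] = 1.0                # a toy "stroke" standing in for an MNIST digit
rng = np.random.default_rng(0)
background = rng.uniform(size=(28, 28))  # stand-in for a downsampled greyscale CIFAR10 crop
composite = make_bg_mnist(digit, background, blur_sigma=1.0)
```

Sweeping `blur_sigma` over {5, 1, 0} would produce the BG-MNIST-5 / BG-MNIST-1 / BG-MNIST-0 variants referenced in Table 3.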
| | MNIST | BG-MNIST-5 | BG-MNIST-1 | BG-MNIST-0 | CIFAR10 |
| --- | --- | --- | --- | --- | --- |
| % Acc | 99 | 99 | 99 | 98 | 84 |
| bits/dim | 1.10 | 1.67 | 3.30 | 4.58 | 3.53 |
Table 3: Conditional generative models trained on BG-MNIST. BG-MNIST-$X$ indicates the bandwidth of the blur applied to the CIFAR10 backgrounds.

![](images/6e3939a3867ee2fb24e2858afaba9827904596d46e49fd2ae65ab1c1bf063e8b.jpg)
(a) Log-likelihood of interpolations on BG-MNIST-0

![](images/ca1c69cb428530f1fdaaede56bbb8d86538e68700888993e669c5c65f14ebb59.jpg)
(b) Class probability of interpolations on BG-MNIST-0
Figure 6: Average log-likelihoods and class probabilities for interpolations between BG-MNIST-0 datapoints. While classification is on par with MNIST models, the likelihood exhibits the same failures as CIFAR10 models.

# 5 RELATED WORK

Despite state-of-the-art performance in many tasks, deep neural networks have been shown to be fragile: small image transformations (Azulay & Weiss, 2018) or background object transplants (Rosenfeld et al., 2018) can greatly change predictions. In the more challenging case of adversarial perturbations, deep neural networks are known to be vulnerable to adversarial attacks (Akhtar & Mian, 2018), and while many attempts have been made to train robust models or detect malicious attacks, significant progress towards truly robust models has been made only on MNIST (Schott et al., 2019; Madry et al., 2017). Even CIFAR10 remains far from being solved from the standpoint of adversarial robustness.

One common belief is that adversarial attacks succeed by moving the data points off the data manifold, and can therefore possibly be detected by a generative model, which should assign them low likelihood values. Although this view has been challenged in (Gilmer et al., 2018), we now discuss how their setting needs to be extended to fully study robustness guarantees of conditional generative models.

Recent work (Song et al., 2018; Frosst et al., 2018; Li et al., 2018) showed that a generative model can detect and defend against adversarial attacks.
However, there is a caveat when evaluating the detectability of adversarial attacks: the attacker needs to be able to attack the detection algorithm as well. Not doing so has been shown to lead to drastically false robustness claims (Carlini & Wagner, 2017a). In (Li et al., 2018) the authors report difficulties training a high-accuracy conditional generative model on CIFAR10, and resort to evaluation on a 2-class classification problem derived from CIFAR10. While they do show robustness similar to our Carlini-Wagner results, they do not apply the boundary attack, which we found to break our models on CIFAR10. This highlights the need to utilize a diverse set of attacks. In (Schott et al., 2019) a generative model was used not just for adversarial detection but also for robust classification on MNIST, leading to state-of-the-art robust classification accuracy. The method was only shown to work on MNIST, and is very slow at inference time. However, overall it provides an existence proof that conditional generative models can be very robust in practice. In (Ghosh et al., 2019) the authors also use generative models for detection and classification but only show results with the relatively weak FGSM attack, and on simple datasets. As we see in Fig. 1 and discuss in section 4, generative models trained on MNIST can display very different behavior than similar models trained on more challenging data like CIFAR10. This shows how success on MNIST may often not translate to success on other datasets.

# 6 CONCLUSION

In this work we explored limitations, both in theory and practice, of using conditional generative models to detect adversarial attacks. Most practical issues arise from likelihood, the standard training objective and evaluation metric for generative models that admit exact probability computation.
We conclude that likelihood-based density modeling and robust classification may fundamentally be at odds with one another as important aspects of the problem are not captured by this training and evaluation metric. This has wide-reaching implications for applications like out-of-distribution detection, adversarial robustness and generalization as well as semi-supervised learning with these models. + +# REFERENCES + +Naveed Akhtar and Ajmal S. Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. +Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? CoRR, 2018. URL http://arxiv.org/abs/1805.12177. +Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. International Conference on Machine Learning, 2019. +Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=SyZI0GWCZ. +Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3-14. ACM, 2017a. +Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, SP, 2017b. +Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. In International Conference on Learning Representations (ICLR), 2015. +Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In International Conference on Learning Representations, (ICLR), 2017. +Nicholas Frosst, Sara Sabour, and Geoffrey Hinton. 
DARCCC: Detecting adversaries by reconstruction from class conditional capsules. arXiv preprint arXiv:1811.06969, 2018.
Partha Ghosh, Arpan Losalka, and Michael J Black. Resisting adversarial attacks using Gaussian mixture variational autoencoders. In Conference on Artificial Intelligence (AAAI), 2019. URL https://arxiv.org/abs/1806.00081.
Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S. Schoenholz, Maithra Raghu, Martin Wattenberg, and Ian J. Goodfellow. Adversarial spheres. International Conference on Learning Representations (ICLR), 2018. URL http://arxiv.org/abs/1801.02774.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.
Aditya Grover, Manik Dhar, and Stefano Ermon. Flow-GAN: Combining maximum likelihood and adversarial learning in generative models. In Conference on Artificial Intelligence (AAAI), 2018.
Joern-Henrik Jacobsen, Jens Behrmann, Richard Zemel, and Matthias Bethge. Excessive invariance causes adversarial vulnerability. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BkfbpsAcF7.
Jörn-Henrik Jacobsen, Arnold W. M. Smeulders, and Edouard Oyallon. i-RevNet: Deep invertible networks. International Conference on Learning Representations (ICLR), 2018. URL http://arxiv.org/abs/1802.07088.
Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10236-10245, 2018.
Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in neural information processing systems (NIPS), pp. 3581-3589, 2014.
Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling.
Improved variational inference with inverse autoregressive flow. In Advances in neural information processing systems, pp. 4743-4751, 2016.
Yingzhen Li, John Bradshaw, and Yash Sharma. Are generative classifiers more robust to adversarial attacks? arXiv preprint arXiv:1802.06552, 2018.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? In International Conference on Learning Representations, 2019a. URL https://openreview.net/forum?id=H1xwNhCcYm.
Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Hybrid models with deep and invertible features. In International Conference on Machine Learning (ICML), 2019b.
George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017.
Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning (ICML), 2015.
Amir Rosenfeld, Richard S. Zemel, and John K. Tsotsos. The elephant in the room. CoRR, 2018. URL http://arxiv.org/abs/1808.03305.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in neural information processing systems, pp. 2234-2242, 2016.
Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. PixelCNN++: A PixelCNN implementation with discretized logistic mixture likelihood and other modifications. In ICLR, 2017a.
Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications.
arXiv preprint arXiv:1701.05517, 2017b.

L. Schott, J. Rauber, W. Brendel, and M. Bethge. Towards the first adversarially robust neural network model on MNIST. In International Conference on Learning Representations (ICLR), 2019. URL https://arxiv.org/pdf/1805.09190.pdf.
Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations (ICLR), 2018.
Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. In Advances in Neural Information Processing Systems, pp. 4790-4798, 2016.

# A TRAINING CONDITIONAL GENERATIVE MODELS

# A.1 LIKELIHOOD-BASED GENERATIVE MODELS AS GENERATIVE CLASSIFIERS

We present a brief overview of flow-based deep generative models, conditional generative models, and their applications to adversarial example detection.

Flow-based generative models Rezende & Mohamed (2015); Dinh et al. (2015; 2017); Kingma & Dhariwal (2018) compute exact densities for complex distributions using the change of variable formula. They achieve strong empirical results Kingma & Dhariwal (2018), and the closed-form likelihood makes them easier to analyze than the closely related VAE Kingma et al. (2014). The main idea behind flow-based generative models is to model the data distribution using a series of bijective mappings $z_{N} = f(x) = f_{N}\circ f_{N - 1}\circ \dots \circ f_{1}(x)$ where $z_{N}$ has a known simple distribution, e.g. Gaussian, and all $f_{i}$ are parametric functions for which the determinant of the Jacobian can be computed efficiently. Using the change of variable formula we have $\log (p(x)) = \log (p(z_N)) + \sum_{i = 1}^{N}\log (|\det (J_i(z_{i-1}))|)$ where $J_{i}$ is the Jacobian of $f_{i}$ and $z_{i - 1} = f_{i - 1}\circ \dots \circ f_1(x)$, with $z_0 = x$.
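The per-layer log-determinant bookkeeping in the likelihood above can be made concrete with a toy affine coupling layer; `s_fn` and `t_fn` below are fixed toy functions standing in for the learned scale and translation networks.

```python
import numpy as np

def coupling_forward(z, s_fn, t_fn):
    """One affine coupling layer: split z in half, transform the second half
    conditioned on the first, and return the log-determinant contribution."""
    z1, z2 = np.split(z, 2)
    s = s_fn(z1)
    out = np.concatenate([z1, s * z2 + t_fn(z1)])
    return out, np.sum(np.log(np.abs(s)))  # log|det J| = sum_j log|s_j|

def coupling_inverse(z, s_fn, t_fn):
    """Exact inverse: the first half passes through unchanged, so s and t
    can be recomputed from it."""
    z1, z2 = np.split(z, 2)
    return np.concatenate([z1, (z2 - t_fn(z1)) / s_fn(z1)])

# Toy stand-ins for the learned scale/translation networks.
s_fn = lambda z1: np.exp(np.tanh(z1))   # strictly positive, hence invertible
t_fn = lambda z1: 0.5 * z1

x = np.array([0.3, -1.2, 0.7, 2.0])
z, logdet = coupling_forward(x, s_fn, t_fn)
assert np.allclose(coupling_inverse(z, s_fn, t_fn), x)   # bijectivity
assert np.isclose(logdet, np.sum(np.tanh(x[:2])))        # log|s| = tanh(z1) here
```

Stacking such layers and adding the prior term $\log p(z_N)$ gives the full log-likelihood in the formula above.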
The standard way to parameterize such functions $f_{i}$ is by splitting the input $z_{i-1}$ into two parts $z_{i-1} = (z_{i-1}^{1}, z_{i-1}^{2})$ and choosing

$$
f _ {i} \left(\left[ \begin{array}{l} z _ {i - 1} ^ {1} \\ z _ {i - 1} ^ {2} \end{array} \right]\right) = \left[ \begin{array}{c} z _ {i - 1} ^ {1} \\ s (z _ {i - 1} ^ {1}) \odot z _ {i - 1} ^ {2} + t (z _ {i - 1} ^ {1}) \end{array} \right] \tag {5}
$$

which is invertible as long as $s(z_{i-1}^1)_j = s_j \neq 0$, and we have $\log(|\det(J_i(z_{i-1}))|) = \sum_j \log(|s_j|)$. For images the splitting is normally done in the channel dimension. These models are then trained by maximizing the empirical log-likelihood (MLE).

A straightforward way to turn this generative model into a conditional generative model is to make $p(z_{N})$ a Gaussian mixture model (GMM) with one Gaussian per class, i.e. $p(z_{N}|y) = \mathcal{N}(\mu_{y},\Sigma_{y})$. Assuming $p(y)$ is known, maximizing $\log(p(x,y))$ is then equivalent to maximizing $\log(p(x|y)) = \log(p(z_N|y)) + \sum_{i=1}^{N} \log(|\det(J_i(z_{i-1}))|)$. At inference time, one can classify by simply using Bayes rule. Note that directly optimizing $\log(p(x|y))$ results in poor classification accuracy, as discussed in section 3. This issue was also addressed in the recent hybrid model work Nalisnick et al. (2019b).

We will now describe some of the various approaches we investigated in order to train the best possible flow-based conditional generative models, to achieve a better trade-off between classification accuracy and data likelihood as compared to commonly-used approaches. We also discuss some failed approaches in the appendix.

# A.2 REWEIGHTING

The most basic approach, which has been used before in various works, is to reweight the discriminative part in eq. (4). While this can produce good accuracy, it can have an unfavorable trade-off with the NLL, where good accuracy comes with severely sub-optimal NLL.
This tradeoff has also been shown in Nalisnick et al. (2019b), where they train a somewhat similar model but classify with a generalized linear model instead of a Gaussian mixture model.

# A.3 ARCHITECTURE CHANGE

Padding channels has been shown to increase accuracy in invertible networks Jacobsen et al. (2018); Behrmann et al. (2019). This helps ameliorate a basic limitation of bijective mappings (see Eq. (5)) by allowing the number of channels to be increased as a pre-processing step. Unlike the discriminative i-RevNet, we cannot just pad zeros, as that would not yield a continuous density. Instead we pad channels with uniform(0,1) random noise. In effect we do not model MNIST and CIFAR10 as is typically done in the literature, but rather the noise-padded versions of those datasets. While the ground-truth likelihoods for the padded and un-padded datapoints are the same, due to the independence of the uniform noise and its unit density, this is not guaranteed to be captured by the model, making likelihoods very similar but not exactly comparable with the literature. This is not an issue for us, as we only compare models on the padded datasets.

# A.4 SPLIT PRIOR

One reason MLE is bad at capturing the label is that a small number of dimensions have only a small effect on the NLL. Fortunately, we can use this property to our advantage. As the contribution of the conditional class information is negligible for the data log-likelihood, we choose to model it in its own distinct subspace, as proposed by Jacobsen et al. (2019). Thus, we partition the hidden dimensions $z = (z_{s},z_{n})$ and only try to enforce the low-dimensional $z_{s}$ to be the logits. This has two advantages: 1) we do not enforce class-conditional dimensions to be factorial; and 2) we can explicitly up-weight the loss on this subspace and treat it as standard logits of a discriminative model. A similar approach is also used by semi-supervised VAEs Kingma et al. (2014).
This lets us jointly optimize the data log-likelihood alongside a classification objective without requiring most of the dimensions to be discriminative. Using the factorization $p(z_s,z_n|y) = p(z_s|y)\cdot p(z_n|z_s,y)$ we model $p(z_{s}|y)$ as a Gaussian with class-conditional mean $e_y = (0,\dots,0,1,0,\dots,0)$ and a covariance matrix scaled by a constant. The distribution $p(z_{n}|z_{s},y)$ is modeled as a Gaussian where the mean and variances are a function of $y$ and $z_{s}$.

# B IMPLEMENTATION DETAILS

We pad MNIST with zeros so both datasets are $32 \times 32$, and subtract 0.5 from both datasets to have a $[-0.5, 0.5]$ range. For data augmentation we use PyTorch's random crop with a padding of 4 and 'edge' padding mode, and a random horizontal flip for CIFAR10 only.

The model is based on GLOW with 4 levels, affine coupling layers, 1x1 convolution permutations and actnorm in a multi-scale architecture. We choose 128 channels and 12 blocks per level for MNIST and 256 channels and 16 blocks for CIFAR10. In both MNIST and CIFAR10 experiments we double the number of channels with uniform(0,1) noise which we scale down to the range $[0, 2/256]$ (taking it into account in the Jacobian term). One major difference is that we do the squeeze operation at the end of each level instead of the beginning, which is what allows us to use 4 levels. This is possible because with the added channels the number of channels is even and the standard splitting is possible before the squeeze operation.

The models are optimized using Adam for 150 epochs. The initial learning rate is $1e-3$, decayed by a factor of 10 every 60 epochs. For the reweighted optimization the objective is

$$
\operatorname {l o s s} = - \log (p (x | y)) / D - \log (p (y | x)) \tag {6}
$$

where $D$ is the data dimension ($3 \times 32 \times 32$ for CIFAR10, $1 \times 32 \times 32$ for MNIST).

For adversarial detection we use a threshold of 1.4 for MNIST (100% of test data are below the threshold) and 4.0
for CIFAR10 (78.6% of test images are below the threshold).

# C NEGATIVE RESULTS

In this work we explored many ideas in order to achieve a better tradeoff between accuracy and likelihood, with little or no impact. 

# C.1 ROBUST PRIORS

Since the Gaussian prior is very sensitive to outliers, one concern was that confident misclassifications carry a strong penalty, which might result in "mashing" all the classes together. A solution would be to replace the Gaussian with a more robust prior, e.g. Laplace or Cauchy. Another idea we explored is a mixture of Gaussian and Laplace or Cauchy using the same location parameter. In our experiments we did not see any significant difference from the Gaussian prior.

# C.2 LABEL SMOOTHING

Another approach to try to address the same issue is a version of label smoothing. In this new model the Gaussian clusters are a latent variable that is equal to the real label with probability $1 - \epsilon$, and uniform on the other labels with probability $\epsilon$. Using this will bound the error for a confident misclassification as long as the data is close to one of the Gaussian centers.

# C.3 FLOW-GAN

As we claimed, the main issue is with the MLE objective; it seems like a better objective is to optimize $KL(p(x,y)||p_{\theta}(x,y))$ or the Jensen-Shannon divergence, as this KL term is highly penalized for misclassification. It is also more natural when considering robustness against adversarial attacks. Optimizing this directly is hard, but generative adversarial networks (GANs) Goodfellow et al. (2014) should in theory also optimize this objective. Simply training a GAN would not work, as we are interested in the likelihood value for adversarial detection, and GANs only let you sample and do not give you any likelihood information about an input image.

Since flow models are bijective, we could combine the two objectives as was done in the flow-GAN paper Grover et al. (2018).
We trained this approach with various conditional-GAN alternatives and found it very hard to train. GANs are known to be unstable to train, and combining them with the unstable flow generator is problematic.

# D ANALYTICAL COUNTEREXAMPLE

Assume $p(y = 1) = p(y = 0) = q(y = 1) = q(y = 0) = 1/2$ and

$$
p (x \mid 0) = \lambda_ {1} U (0, 1) + (1 - \lambda_ {1}) U (1, 1 + \Delta) \tag {7}
$$

$$
p (x \mid 1) = \lambda_ {2} U (0, 1) + (1 - \lambda_ {2}) U (2, 3) \tag {8}
$$

$$
q (x \mid 0) = U (0, 1 + \Delta) \tag {9}
$$

$$
q (x \mid 1) = \lambda_ {2} U (0, 1) + (1 - \lambda_ {2}) U (2, 3) \tag {10}
$$

where $U(a,b)$ is the uniform distribution on the annulus $R^d (a,b) = \{x\in \mathbb{R}^d:a\leq ||x||\leq b\}$ in dimension $d$.

Lemma 1. For $||x|| < 1$ we have

$$
p (0 | x) = \frac {\lambda_ {1}}{\lambda_ {1} + \lambda_ {2}} \tag {11}
$$

$$
q (0 | x) = \frac {1}{1 + \lambda_ {2} (1 + \Delta) ^ {d}} \tag {12}
$$

Proof. The $U(a, b)$ density (where it isn't zero) is $\frac{1}{C_d(b^d - a^d)}$, where $C_d$ is the volume of the $d$-dimensional unit ball. The proof follows by a simple use of Bayes rule.

So by having $\lambda_1 \gg \lambda_2 \gg \frac{1}{(1 + \Delta)^d}$ we can have the model wrongfully switch its prediction from $y = 0$ to $y = 1$ when we move $x$ from the annulus $R^d (1,1 + \Delta)$ to $R^{d}(0,1)$.

Lemma 2. If $\lambda_1 > \frac{1}{(1 + \Delta)^d}$ and $\lambda_1 < 1 - e^{-\epsilon}$ then $KL(q(x,y)||P(x,y)) \leq \epsilon$

Proof. Using the chain rule for KL divergence, $\mathrm{KL}(P(x,y)||Q(x,y)) = \mathrm{KL}(P(y)||Q(y)) + \mathbb{E}_y[\mathrm{KL}(P(x|y)||Q(x|y))]$, and noting that the priors agree and $q(x|y=1) = p(x|y=1)$, we get that $\mathrm{KL}(q(x,y)||P(x,y)) = \frac{1}{2}\mathrm{KL}(q(x|y = 0)||P(x|y = 0))$, so it is enough to bound the conditional KL.
We now have

$$
\begin{aligned}
\operatorname {KL} (q (x \mid y = 0) \,\|\, P (x \mid y = 0)) &= \int_ {R ^ {d} (0, 1)} \frac {1}{C _ {d} (1 + \Delta) ^ {d}} \log \left(\frac {1 / (C _ {d} (1 + \Delta) ^ {d})}{\lambda_ {1} / C _ {d}}\right) \\
&\quad + \int_ {R ^ {d} (1, 1 + \Delta)} \frac {1}{C _ {d} (1 + \Delta) ^ {d}} \log \left(\frac {1 / (C _ {d} (1 + \Delta) ^ {d})}{(1 - \lambda_ {1}) / (C _ {d} ((1 + \Delta) ^ {d} - 1))}\right) \\
&= \frac {- \log \left(\lambda_ {1} (1 + \Delta) ^ {d}\right)}{(1 + \Delta) ^ {d}} + \frac {(1 + \Delta) ^ {d} - 1}{(1 + \Delta) ^ {d}} \log \left(\frac {(1 + \Delta) ^ {d} - 1}{(1 - \lambda_ {1}) (1 + \Delta) ^ {d}}\right) \\
&\leq \log \left(\frac {1}{1 - \lambda_ {1}}\right) < \epsilon
\end{aligned} \tag {13--15}
$$

Lemma 3. If $1 > \lambda_{1} > \frac{1}{(1 + \Delta)^{d}}$ and $\lambda_1 < \frac{\epsilon}{d\log(1 + \Delta)}$ then $KL(P(x,y)||q(x,y))\leq \epsilon$

Proof. Again using the KL chain rule we have

$$
\begin{aligned}
\operatorname {KL} (P (x \mid y = 0) \,\|\, q (x \mid y = 0)) &= \lambda_ {1} \int_ {R ^ {d} (0, 1)} \frac {1}{C _ {d}} \log \left(\frac {\lambda_ {1} / C _ {d}}{1 / (C _ {d} (1 + \Delta) ^ {d})}\right) \\
&\quad + \int_ {R ^ {d} (1, 1 + \Delta)} \frac {1 - \lambda_ {1}}{C _ {d} ((1 + \Delta) ^ {d} - 1)} \log \left(\frac {(1 - \lambda_ {1}) / (C _ {d} ((1 + \Delta) ^ {d} - 1))}{1 / (C _ {d} (1 + \Delta) ^ {d})}\right) \\
&\leq \lambda_ {1} d \log (1 + \Delta) < \epsilon
\end{aligned} \tag {16--17}
$$

Proposition 1.
For all $(\epsilon, \delta, \Delta)$ there is a distribution $p$ and an approximation $q$ in dimension $d = \tilde{\mathcal{O}}\left(\frac{\log\left(\frac{1 - \delta}{\delta}\right) + \log\left(\frac{1}{\epsilon}\right)}{\log(1 + \Delta)}\right)$ such that

$$
K L (q (x, y) | | p (x, y)) < \epsilon , \quad K L (p (x, y) | | q (x, y)) < \epsilon \tag {18}
$$

but with probability greater than $1/3$ over samples $x \sim p$ there is an adversarial example $\bar{x}$ satisfying:

1. $y_{q}(x) = y_{p}(x)$ with $p(y_{p}(x)|x)$ and $q(y_{q}(x)|x)$ greater or equal to $1 - \delta$. The original point is classified correctly and confidently.
2. $y_{q}(x) \neq y(\bar{x})$, $y_{q}(\bar{x}) = y(\bar{x})$. We change the prediction without changing the ground-truth label.
3. $q(y_{q}(\bar{x})|\bar{x}) < \delta, p(y_{p}(\bar{x})|\bar{x}) > 1 - \delta$. The classifier is confident in its wrong prediction.
4. $||x - \bar{x}|| < \Delta$. We make a small change to the inputs.
5. The density $q(\bar{x})$ is greater or equal to the median density, making the attack undetectable by observing $q(x)$.
6. For $\Delta < 1$ the probability in any $\Delta$-radius ball can be made as small as desired.
7. The total variation of the distribution can be made as small as desired.

The last two conditions exclude degenerate trivial counterexamples: condition 6 excludes the case where the whole support of the distribution lies in a $\Delta$-radius ball, so that $\Delta$ does indeed represent a small perturbation, and condition 7 excludes "pathological" distributions, e.g. misclassification on a dense zero-measure set like the rationals.

Proof. In order to satisfy conditions 1-5, using the previous lemmas, it is enough that

1. $\frac{\lambda_1}{\lambda_1 + \lambda_2} \geq 1 - \delta$
2. $\frac{1}{1 + \lambda_2(1 + \Delta)^d} \leq \delta$
3. $\lambda_1\leq 1 - e^{-\epsilon}$
4. $\lambda_1 > \frac{1}{(1 + \Delta)^d}$
5.
$\lambda_1 < \frac{\epsilon}{d\log(1 + \Delta)}$

By setting $\lambda_{2} = \frac{\delta}{1 - \delta}\lambda_{1}$ we can easily satisfy condition 1. It is not hard to see that condition 2 is equivalent to $\lambda_{1}\geq \left(\frac{1 - \delta}{\delta}\right)^{2}\frac{1}{(1 + \Delta)^{d}}$, which supersedes condition 4 when $\delta < 1 / 2$. Condition 3 can be satisfied with $\lambda_1 < \epsilon / 2$ by using $1 - x \geq e^{-2x}$ for $x < 1/2$.

This boils down to ensuring $d$ is large enough so that there is a valid $\lambda_{1}$ such that

$$
\left(\frac{1 - \delta}{\delta}\right)^{2} \frac{1}{(1 + \Delta)^{d}} < \lambda_{1} < \frac{\epsilon}{d \log(1 + \Delta)} \tag{19}
$$

which is true for large enough $d$, as the l.h.s. decays exponentially in $d$ while the r.h.s. decays only polynomially (as $1/d$).

Condition 6 is trivial: since the radius of the support is fixed, as long as $\Delta < 1$ the probability in any $\Delta$-radius ball decays exponentially. Regarding total variation, we note that from the divergence theorem this can be bounded by a term that depends on the surface area of spheres with fixed radius, which decreases to zero as $d$ goes to infinity.

# E PIXELCNN++

We trained a conditional PixelCNN++ where, instead of predicting each new pixel using a mixture of 10 components, we use one mixture component per class. Using reweighting, we train with the following objective: $-\log p(x|y) / \dim + \alpha \cdot (-\log p(y|x))$. As one can see from Table 4, standard training, i.e. $\alpha = 0$, results in very poor accuracy, while reweighting the classification score results in much better accuracy but worse NLL.
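The reweighted objective can be made concrete with a small numpy sketch; the per-class log-likelihood values below are hypothetical, and a uniform class prior is assumed so that $p(y|x) \propto p(x|y)$:

```python
import numpy as np

def reweighted_objective(log_px_given_y, y_true, dim, alpha):
    """Combine the per-class generative NLL with a Bayes-rule classification loss.

    log_px_given_y: (num_classes,) array of log p(x|y) for one input x under
                    each class-conditional mixture component.
    y_true: index of the ground-truth class.
    Assumes a uniform prior p(y), so log p(y|x) = log p(x|y) - logsumexp_y log p(x|y).
    """
    nll_gen = -log_px_given_y[y_true] / dim                          # -log p(x|y) / dim
    log_post = log_px_given_y - np.logaddexp.reduce(log_px_given_y)  # log p(y|x)
    nll_cls = -log_post[y_true]                                      # -log p(y|x)
    return nll_gen + alpha * nll_cls

# Hypothetical CIFAR10-sized example (dim = 32 * 32 * 3 = 3072):
obj = reweighted_objective(np.array([-3000.0, -3100.0, -3200.0]), 0,
                           dim=3072, alpha=1000.0)
```

With $\alpha = 0$ only the generative term remains, matching the standard-training row of Table 4.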
| $\alpha$ | acc (%) | bits/dim |
| --- | --- | --- |
| 0 | 25.48 | 3.05 |
| 1000 | 85.78 | 3.34 |
Table 4: Accuracy and NLL for PixelCNN++ on CIFAR10
# UNDERSTANDING THE LIMITATIONS OF VARIATIONAL MUTUAL INFORMATION ESTIMATORS

Jiaming Song & Stefano Ermon

Stanford University

{tsong, ermon}@cs.stanford.edu

# ABSTRACT

Variational approaches based on neural networks are showing promise for estimating mutual information (MI) between high dimensional variables. However, they can be difficult to use in practice due to poorly understood bias/variance tradeoffs.
We theoretically show that, under some conditions, estimators such as MINE exhibit variance that could grow exponentially with the true amount of underlying MI. We also empirically demonstrate that existing estimators fail to satisfy basic self-consistency properties of MI, such as data processing and additivity under independence. Based on a unified perspective of variational approaches, we develop a new estimator that focuses on variance reduction. Empirical results demonstrate that our proposed estimator exhibits improved bias-variance trade-offs on standard benchmark tasks.

# 1 INTRODUCTION

Mutual information (MI) estimation and optimization are crucial to many important problems in machine learning, such as representation learning (Chen et al., 2016; Zhao et al., 2018b; Tishby & Zaslavsky, 2015; Higgins et al., 2018) and reinforcement learning (Pathak et al., 2017; van den Oord et al., 2018). However, estimating mutual information from samples is challenging (McAllester & Statos, 2018) and traditional parametric and non-parametric approaches (Nemenman et al., 2004; Gao et al., 2015; 2017) struggle to scale up to modern machine learning problems, such as estimating the MI between images and learned representations.

Recently, there has been a surge of interest in MI estimation with variational approaches (Barber & Agakov, 2003; Nguyen et al., 2010; Donsker & Varadhan, 1975), which can be naturally combined with deep learning methods (Alemi et al., 2016; van den Oord et al., 2018; Poole et al., 2019). Despite their empirical effectiveness in downstream tasks such as representation learning (Hjelm et al., 2018; Velicković et al., 2018), their effectiveness for MI estimation remains unclear. In particular, higher estimated MI between observations and learned representations does not seem to indicate improved predictive performance when the representations are used for downstream supervised learning tasks (Tschannen et al., 2019).
In this paper, we discuss two limitations of variational approaches to MI estimation. First, we theoretically demonstrate that the variance of certain estimators, such as MINE (Belghazi et al., 2018), could grow exponentially with the ground truth MI, leading to poor bias-variance trade-offs. Second, we propose a set of self-consistency tests over basic properties of MI, and empirically demonstrate that all considered variational estimators fail to satisfy critical properties of MI, such as data processing and additivity under independence. These limitations challenge the effectiveness of these methods for estimating or optimizing MI.

To mitigate these issues, we propose a unified perspective on variational estimators, treating variational MI estimation as an optimization problem over (valid) density ratios. This view highlights the role of partition function estimation, which is the culprit of the high variance issues in MINE. To address this issue, we propose to improve MI estimation via variance reduction techniques for partition function estimation. Empirical results demonstrate that our estimators have a much better bias-variance trade-off compared to existing methods on standard benchmark tasks.

# 2 BACKGROUND AND RELATED WORK

# 2.1 NOTATIONS

We use uppercase letters to denote a probability measure (e.g., $P$, $Q$) and corresponding lowercase letters to denote its density$^{1}$ functions (e.g., $p$, $q$) unless specified otherwise. We use $X, Y$ to denote random variables with separable sample spaces denoted as $\mathcal{X}$ and $\mathcal{Y}$ respectively, and $\mathcal{P}(\mathcal{X})$ (or $\mathcal{P}(\mathcal{Y})$) to denote the set of all probability measures over the Borel $\sigma$-algebra on $\mathcal{X}$ (or $\mathcal{Y}$).
Under $Q \in \mathcal{P}(\mathcal{X})$, the $p$-norm of a function $r: \mathcal{X} \to \mathbb{R}$ is defined as $\| r \|_p \coloneqq (\int |r|^p \mathrm{d}Q)^{1/p}$ with $\| r \|_{\infty} = \lim_{p \to \infty} \| r \|_p$. The set of locally $p$-integrable functions is defined as $L^p(Q) \coloneqq \{r: \mathcal{X} \to \mathbb{R} \mid \| r \|_p < \infty\}$. The space of probability measures wrt. $Q$ is defined as $\Delta(Q) \coloneqq \{r \in L^1(Q) \mid \| r \|_1 = 1, r \geq 0\}$; we also call this the space of "valid density ratios" wrt. $Q$. We use $P \ll Q$ to denote that $P$ is absolutely continuous with respect to $Q$. We use $\hat{I}_E$ to denote an estimator for $I_E$ where we replace expectations with sample averages.

# 2.2 VARIATIONAL MUTUAL INFORMATION ESTIMATION

The mutual information between two random variables $X$ and $Y$ is the KL divergence between the joint and the product of marginals:

$$
I(X; Y) = D_{\mathrm{KL}}\left(P(X, Y) \| P(X) P(Y)\right) \tag{1}
$$

which we wish to estimate using samples from $P(X,Y)$; in certain cases we may know the density of marginals (e.g. $P(X)$). There is a wide range of variational approaches to MI estimation. Variational information maximization uses the following result (Barber & Agakov, 2003):

Lemma 1 (Barber-Agakov (BA)). For two random variables $X$ and $Y$:

$$
I(X; Y) = \sup_{q_{\phi}} \left\{\mathbb{E}_{P(X, Y)}\left[\log q_{\phi}(\boldsymbol{x} | \boldsymbol{y}) - \log p(\boldsymbol{x})\right] =: I_{\mathrm{BA}}\left(q_{\phi}\right)\right\} \tag{2}
$$

where $q_{\phi}:\mathcal{Y}\to \mathcal{P}(\mathcal{X})$ is a valid conditional distribution over $\mathcal{X}$ given $\pmb{y}\in \mathcal{Y}$ and $p(\pmb{x})$ is the probability density function of the marginal distribution $P(X)$.

Another family of approaches performs MI estimation through variational lower bounds to KL divergences.
For example, the Mutual Information Neural Estimator (MINE, Belghazi et al. (2018)) applies the following lower bound to KL divergences (Donsker & Varadhan, 1975).

Lemma 2 (Donsker-Varadhan (DV)). $\forall P, Q \in \mathcal{P}(\mathcal{X})$ such that $P \ll Q$,

$$
D_{\mathrm{KL}}(P \| Q) = \sup_{T \in L^{\infty}(Q)} \left\{\mathbb{E}_P[T] - \log \mathbb{E}_Q\left[e^T\right] =: I_{\mathrm{MINE}}(T)\right\}. \tag{3}
$$

One could set $P = P(X,Y)$ and $Q = P(X)P(Y)$, $T$ as a parametrized neural network (e.g. $T_{\theta}(\pmb{x},\pmb{y})$ parametrized by $\theta$), and obtain the estimate by optimizing the above objective via stochastic gradient descent over mini-batches. However, the corresponding estimator $\hat{I}_{\mathrm{MINE}}$ (where we replace the expectations in Eq. (3) with sample averages) is biased, leading to biased gradient estimates; Belghazi et al. (2018) propose to reduce bias via estimating the partition function $\mathbb{E}_Q[e^T]$ with exponential moving averages of mini-batches.

The variational $f$-divergence estimation approach (Nguyen et al., 2010; Nowozin et al., 2016) considers lower bounds on $f$-divergences which can be specialized to KL divergence, and subsequently to mutual information estimation:

Lemma 3 (Nguyen et al. (NWJ)). $\forall P, Q \in \mathcal{P}(\mathcal{X})$ such that $P \ll Q$,

$$
D_{\mathrm{KL}}(P \| Q) = \sup_{T \in L^{\infty}(Q)} \left\{\mathbb{E}_P[T] - \mathbb{E}_Q\left[e^{T-1}\right] =: I_{\mathrm{NWJ}}(T)\right\} \tag{4}
$$

and $D_{\mathrm{KL}}(P\| Q) = I_{\mathrm{NWJ}}(T)$ when $T = \log(\mathrm{d}P / \mathrm{d}Q) + 1$.

The supremum over $T$ is an invertible function of the density ratio $\mathrm{d}P / \mathrm{d}Q$, so one could use this approach to estimate density ratios by inverting the function (Nguyen et al., 2010; Nowozin et al., 2016; Grover & Ermon, 2017).
The corresponding mini-batch estimator (denoted as $\hat{I}_{\mathrm{NWJ}}$ ) is unbiased, so unlike MINE, this approach does not require special care to reduce bias in gradients. + +Contrastive Predictive Coding (CPC, van den Oord et al. (2018)) considers the following objective: + +$$ +I _ {\mathrm {C P C}} \left(f _ {\theta}\right) := \mathbb {E} _ {P ^ {n} (X, Y)} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \log \frac {f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {i}\right)}{\frac {1}{n} \sum_ {j = 1} ^ {n} f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {j}\right)} \right] \tag {5} +$$ + +where $f_{\theta}:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}_{\geq 0}$ is a neural network parametrized by $\theta$ and $P^n (X,Y)$ denotes the joint pdf for $n$ i.i.d. random variables sampled from $P(X,Y)$ . CPC generally has less variance but is more biased because its estimate does not exceed $\log n$ , where $n$ is the batch size (van den Oord et al., 2018; Poole et al., 2019). While one can further reduce the bias with larger $n$ , the number of evaluations needed for estimating each batch with $f_{\theta}$ is $n^2$ , which scales poorly. To address the high-bias issue of CPC, Poole et al. proposed an interpolation between $I_{\mathrm{CPC}}$ and $I_{\mathrm{NWJ}}$ to obtain more fine-grained bias-variance trade-offs. + +# 3 VARIATIONAL MUTUAL INFORMATION ESTIMATION AS OPTIMIZATION OVER DENSITY RATIOS + +In this section, we unify several existing methods for variational mutual information estimation. We first show that variational mutual information estimation can be formulated as a constrained optimization problem, where the feasible set is $\Delta(Q)$ , i.e. the valid density ratios with respect to $Q$ . + +Theorem 1. 
$\forall P, Q \in \mathcal{P}(\mathcal{X})$ such that $P \ll Q$ we have

$$
D_{\mathrm{KL}}(P \| Q) = \sup_{r \in \Delta(Q)} \mathbb{E}_P[\log r] \tag{6}
$$

where the supremum is achieved when $r = \mathrm{d}P / \mathrm{d}Q$.

We defer the proof to Appendix A. The above argument works for KL divergence between general distributions, but in this paper we focus on the special case of mutual information estimation. For the remainder of the paper, we use $P$ as the short-hand notation for the joint distribution $P(X,Y)$ and $Q$ as the short-hand notation for the product of marginals $P(X)P(Y)$.

# 3.1 A SUMMARY OF EXISTING VARIATIONAL METHODS

From Theorem 1, we can describe a general approach to variational MI estimation:

1. Obtain a density ratio estimate - denote the solution as $r$;
2. Project $r$ to be close to $\Delta(Q)$ - in practice we only have samples from $Q$, so we denote the solution as $\Gamma(r;Q_n)$, where $Q_n$ is the empirical distribution of $n$ i.i.d. samples from $Q$;
3. Estimate mutual information with $\mathbb{E}_P[\log \Gamma(r;Q_n)]$.

We illustrate two examples of variational mutual information estimation that can be summarized with this approach. In the case of Barber-Agakov, the proposed density ratio estimate is $r_{\mathrm{BA}} = q_{\phi}(\pmb{x}|\pmb{y}) / p(\pmb{x})$ (assuming that $p(\pmb{x})$ is known), which is guaranteed to be in $\Delta(Q)$ because

$$
\mathbb{E}_Q\left[q_{\phi}(\boldsymbol{x} | \boldsymbol{y}) / p(\boldsymbol{x})\right] = \int q_{\phi}(\boldsymbol{x} | \boldsymbol{y}) / p(\boldsymbol{x}) \,\mathrm{d}P(\boldsymbol{x}) \mathrm{d}P(\boldsymbol{y}) = 1, \quad \Gamma_{\mathrm{BA}}\left(r_{\mathrm{BA}}, Q_n\right) = r_{\mathrm{BA}} \tag{7}
$$

for all conditional distributions $q_{\phi}$.
In the case of MINE / Donsker-Varadhan, the logarithm of the density ratio is estimated with $T_{\theta}(\pmb{x}, \pmb{y})$; the corresponding density ratio might not be normalized, so one could apply the following normalization for $n$ samples:

$$
\mathbb{E}_{Q_n}\left[e^{T_{\theta}} / \mathbb{E}_{Q_n}\left[e^{T_{\theta}}\right]\right] = 1, \quad \Gamma_{\mathrm{MINE}}\left(e^{T_{\theta}}, Q_n\right) = e^{T_{\theta}} / \mathbb{E}_{Q_n}\left[e^{T_{\theta}}\right] \tag{8}
$$

where $\mathbb{E}_{Q_n}[e^{T_\theta}]$ (the sample average) is an unbiased estimate of the partition function $\mathbb{E}_Q[e^{T_\theta}]$; $\Gamma_{\mathrm{MINE}}(e^{T_\theta}, Q_n) \in \Delta(Q)$ holds only when $n \to \infty$. Similarly, we show $I_{\mathrm{CPC}}$ is a lower bound to MI in Corollary 2, Appendix A, providing an alternative proof to the one in Poole et al. (2019).

These examples demonstrate that different mutual information estimators can be obtained in a procedural manner by implementing the above steps, and one could involve different objectives at each step. For example, one could estimate the density ratio via logistic regression (Hjelm et al., 2018; Poole et al., 2019; Mukherjee et al., 2019) while using $I_{\mathrm{NWJ}}$ or $I_{\mathrm{MINE}}$ to estimate MI. While logistic regression does not optimize for a lower bound for KL divergence, it provides density ratio estimates between $P$ and $Q$ which could be used for subsequent steps.

Table 1: Summary of variational estimators of mutual information. The $\in \Delta(Q)$ column denotes whether the estimator is a valid density ratio wrt. $Q$. $(\checkmark)$ means any parameterization is valid; $(n\to \infty)$ means any parameterization is valid as the batch size grows to infinity; $(\mathrm{tr}\rightarrow \infty)$ means only the optimal parametrization is valid (infinite training cost).
| Category | Estimator | Params | $\Gamma(r;Q_n)$ | $\in \Delta(Q)$ |
| --- | --- | --- | --- | --- |
| Gen. | $I_{\mathrm{BA}}$ | $q_{\phi}$ | $q_{\phi}(\boldsymbol{x}\mid\boldsymbol{y})/p(\boldsymbol{x})$ | $\checkmark$ |
| Gen. | $I_{\mathrm{GM}}$ (Eq. (9)) | $p_{\theta},p_{\phi},p_{\psi}$ | $p_{\theta}(\boldsymbol{x},\boldsymbol{y})/p_{\phi}(\boldsymbol{x})p_{\psi}(\boldsymbol{y})$ | $\mathrm{tr}\to\infty$ |
| Disc. | $I_{\mathrm{MINE}}$ | $T_{\theta}$ | $e^{T_{\theta}(\boldsymbol{x},\boldsymbol{y})}/\mathbb{E}_{Q_n}[e^{T_{\theta}(\boldsymbol{x},\boldsymbol{y})}]$ | $n\to\infty$ |
| Disc. | $I_{\mathrm{CPC}}$ | $f_{\theta}$ | $f_{\theta}(\boldsymbol{x},\boldsymbol{y})/\mathbb{E}_{P_n(Y)}[f_{\theta}(\boldsymbol{x},\boldsymbol{y})]$ | $\checkmark$ |
| Disc. | $I_{\mathrm{SMILE}}$ (Eq. (17)) | $T_{\theta},\tau$ | $e^{T_{\theta}(\boldsymbol{x},\boldsymbol{y})}/\mathbb{E}_{Q_n}[e^{\mathrm{clip}(T_{\theta}(\boldsymbol{x},\boldsymbol{y}),-\tau,\tau)}]$ | $n,\tau\to\infty$ |
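As a concrete illustration, the three-step procedure of Section 3.1 with the $\Gamma_{\mathrm{MINE}}$ projection of Eq. (8) can be sketched in a few lines of numpy; the critic values here are stand-ins for a trained $T_\theta$:

```python
import numpy as np

def mine_estimate(T_joint, T_marginal):
    """Steps 1-3 of Section 3.1 using the Gamma_MINE projection.

    T_joint:    critic values T_theta(x, y) on samples from P(X, Y).
    T_marginal: critic values on samples from P(X)P(Y) (the empirical Q_n).
    """
    # Step 2: project e^T toward Delta(Q) by normalizing with the
    # sample-average partition function E_{Q_n}[e^T] (in log space for stability).
    log_partition = np.logaddexp.reduce(T_marginal) - np.log(len(T_marginal))
    # Step 3: estimate MI as E_P[log Gamma(r; Q_n)].
    return np.mean(T_joint) - log_partition
```

A constant critic yields an estimate of exactly zero, and increasing the critic on joint samples relative to marginal samples increases the estimate.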
# 3.2 GENERATIVE AND DISCRIMINATIVE APPROACHES TO MI ESTIMATION

The variational mutual information methods discussed above can be summarized into two broad categories based on how the density ratio is obtained.

- The discriminative approach estimates the density ratio $\mathrm{d}P / \mathrm{d}Q$ directly; examples include the MINE, NWJ and CPC estimators.
- The generative approach estimates the densities of $P$ and $Q$ separately; examples include the BA estimator where a conditional generative model is learned. In addition, we describe a generative approach that explicitly learns generative models (GM) for $P(X,Y)$, $P(X)$ and $P(Y)$:

$$
I_{\mathrm{GM}}\left(p_{\theta}, p_{\phi}, p_{\psi}\right) := \mathbb{E}_P\left[\log p_{\theta}(\boldsymbol{x}, \boldsymbol{y}) - \log p_{\phi}(\boldsymbol{x}) - \log p_{\psi}(\boldsymbol{y})\right], \tag{9}
$$

where $p_{\theta}, p_{\phi}, p_{\psi}$ are maximum likelihood estimates of $P(X,Y)$, $P(X)$ and $P(Y)$ respectively. We can learn the three distributions with generative models, such as VAEs (Kingma & Welling, 2013) or normalizing flows (Dinh et al., 2016), from samples.

We summarize various generative and discriminative variational estimators in Table 1.

Differences between the two approaches. While both generative and discriminative approaches can be summarized with the procedure in Section 3.1, they imply different choices in modeling, estimation and optimization.

- On the modeling side, the generative approaches might require more stringent assumptions on the architectures (e.g. likelihood or evidence lower bound is tractable), whereas the discriminative approaches do not have such restrictions.
- On the estimation side, generative approaches do not need to consider samples from the product of marginals $P(X)P(Y)$ (since they can model $P(X,Y)$, $P(X)$, $P(Y)$ separately), yet the discriminative approaches require samples from $P(X)P(Y)$; if we consider a minibatch of size $n$, the number of evaluations for generative approaches is $\Omega(n)$, whereas for discriminative approaches it could be $\Omega(n^2)$.
- On the optimization side, discriminative approaches may need additional projection steps to be close to $\Delta(Q)$ (such as $I_{\mathrm{MINE}}$), while generative approaches might not need to perform this step (such as $I_{\mathrm{BA}}$).

# 4 LIMITATIONS OF EXISTING VARIATIONAL ESTIMATORS

# 4.1 GOOD DISCRIMINATIVE ESTIMATORS REQUIRE EXPONENTIALLY LARGE BATCHES

In the $\hat{I}_{\mathrm{NWJ}}$ and $\hat{I}_{\mathrm{MINE}}$ estimators, one needs to estimate the "partition function" $\mathbb{E}_Q[r]$ for some density ratio estimator $r$; for example, $\hat{I}_{\mathrm{MINE}}$ needs this in order to perform the projection step $\Gamma_{\mathrm{MINE}}(r,Q_n)$ in Eq. (8). Note that the $I_{\mathrm{NWJ}}$ and $I_{\mathrm{MINE}}$ lower bounds are maximized when $r$ takes the optimal value $r^{\star} = \mathrm{d}P / \mathrm{d}Q$. However, the estimators $\hat{I}_{\mathrm{MINE}}$ and $\hat{I}_{\mathrm{NWJ}}$, which replace $\mathbb{E}_Q[r^{\star}]$ with a sample average, could have a variance that scales exponentially with the ground-truth MI; we show this in Theorem 2.

Theorem 2. Assume that the ground truth density ratio $r^{\star} = \mathrm{d}P / \mathrm{d}Q$ and $\operatorname{Var}_Q[r^\star]$ exist. Let $Q_n$ denote the empirical distribution of $n$ i.i.d. samples from $Q$ and let $\mathbb{E}_{Q_n}$ denote the sample average over $Q_n$.
Then under the randomness of the sampling procedure, we have: + +$$ +\operatorname {V a r} _ {Q} \left[ \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right] \right] \geq \frac {e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1}{n} \tag {10} +$$ + +$$ +\lim _ {n \rightarrow \infty} n \operatorname {V a r} _ {Q} \left[ \log \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right]\right] \geq e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1. \tag {11} +$$ + +We defer the proof to Appendix A. Note that in the theorem above, we assume the ground truth density ratio $r^{\star}$ is already obtained, which is the optimal ratio for NWJ and MINE estimators. As a natural consequence, the NWJ and MINE estimators under the optimal solution could exhibit variances that grow exponentially with the ground truth MI (recall that in our context MI is a KL divergence). One could achieve smaller variances with some $r \neq r^{\star}$ , but this guarantees looser bounds and higher bias. + +Corollary 1. Assume that the assumptions in Theorem 2 hold. Let $P_{m}$ and $Q_{n}$ be the empirical distributions of $m$ i.i.d. samples from $P$ and $n$ i.i.d. samples from $Q$ , respectively. Define + +$$ +I _ {\mathrm {N W J}} ^ {m, n} := \mathbb {E} _ {P _ {m}} [ \log r ^ {\star} + 1 ] - \mathbb {E} _ {Q _ {n}} [ r ^ {\star} ] \tag {12} +$$ + +$$ +I _ {\mathrm {M I N E}} ^ {m, n} := \mathbb {E} _ {P _ {m}} \left[ \log r ^ {\star} \right] - \log \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right] \tag {13} +$$ + +where $r^{\star} = \mathrm{d}P / \mathrm{d}Q$ . Then under the randomness of the sampling procedure, we have $\forall m \in \mathbb{N}$ : + +$$ +\operatorname {V a r} _ {P, Q} \left[ I _ {\mathrm {N W J}} ^ {m, n} \right] \geq \left(e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1\right) / n \tag {14} +$$ + +$$ +\lim _ {n \rightarrow \infty} n \operatorname {V a r} _ {P, Q} \left[ I _ {\mathrm {M I N E}} ^ {m, n} \right] \geq e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1. 
\tag {15} +$$ + +This high variance phenomenon has been empirically observed in Poole et al. (2019) (Figure 3) for $\hat{I}_{\mathrm{NWJ}}$ under various batch sizes, where the log-variance scales linearly with MI. We also demonstrate this in Figure 2 (Section 6.1). In order to keep the variance of $\hat{I}_{\mathrm{MINE}}$ and $\hat{I}_{\mathrm{NWJ}}$ relatively constant with growing MI, one would need a batch size of $n = \Theta(e^{D_{\mathrm{KL}}(P\|Q)})$ . $\hat{I}_{\mathrm{CPC}}$ has small variance, but it would need $n \geq e^{D_{\mathrm{KL}}(P\|Q)}$ to have small bias, as its estimations are bounded by $\log n$ . + +# 4.2 SELF-CONSISTENCY ISSUES FOR MUTUAL INFORMATION ESTIMATORS + +If we consider $\mathcal{X},\mathcal{Y}$ to be high-dimensional, estimation of mutual information becomes more difficult. The density ratio between $P(X,Y)$ and $P(X)P(Y)$ could be very difficult to estimate from finite samples without proper parametric assumptions (McAllester & Statos, 2018; Zhao et al., 2018a). Additionally, the exact value of mutual information is dependent on the definition of the sample space; given finite samples, whether the underlying random variable is assumed to be discrete or continuous will lead to different measurements of mutual information (corresponding to entropy and differential entropy, respectively). + +In machine learning applications, however, we are often more interested in maximizing or minimizing mutual information (estimates), rather than estimating its exact value. For example, if an estimator is off by a constant factor, it would still be useful for downstream applications, even though it can be highly biased. To this end, we propose a set of self-consistency tests for any MI estimator $\hat{I}$ , based on properties of mutual information: + +1. (Independence) if $X$ and $Y$ are independent, then $\hat{I}(X;Y) = 0$ ; + +2. 
(Data processing) for all functions $g, h$ , $\hat{I}(X;Y) \geq \hat{I}(g(X);h(Y))$ and $\hat{I}(X;Y) \approx \hat{I}([X,g(X)];[Y,h(Y)])$ where $[\cdot, \cdot]$ denotes concatenation. +3. (Additivity) denote $X_{1}, X_{2}$ as independent random variables that have the same distribution as $X$ (similarly define $Y_{1}, Y_{2}$ ), then $\hat{I}([X_{1}, X_{2}]; [Y_{1}, Y_{2}]) \approx 2 \cdot \hat{I}(X, Y)$ . + +These properties hold under both entropy and differential entropy, so they do not depend on the choice of the sample space. While these conditions are necessary but obviously not sufficient for accurate mutual information estimation, we argue that satisfying them is highly desirable for applications such as representation learning (Chen et al., 2016) and information bottleneck (Tishby & Zaslavsky, 2015). Unfortunately, none of the MI estimators we considered above pass all the self-consistency tests when $X, Y$ are images, as we demonstrate below in Section 6.2. In particular, the generative approaches perform poorly when MI is low (failing in independence and data processing), whereas discriminative approaches perform poorly when MI is high (failing in additivity). + +# 5 IMPROVED MI ESTIMATION VIA CLIPPED DENSITY RATIOS + +To address the high-variance issue in the $I_{\mathrm{NWJ}}$ and $I_{\mathrm{MINE}}$ estimators, we propose to clip the density ratios when estimating the partition function. We define the following clip function: + +$$ +\operatorname {c l i p} (v, l, u) = \max (\min (v, u), l) \tag {16} +$$ + +For an empirical distribution of $n$ samples $Q_{n}$ , instead of estimating the partition function via $\mathbb{E}_{Q_n}[r]$ , we instead consider $\mathbb{E}_{Q_n}[\mathrm{clip}(r,e^{-\tau},e^{\tau})]$ where $\tau \geq 0$ is a hyperparameter; this is equivalent to clipping the log density ratio estimator between $-\tau$ and $\tau$ . 
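The effect of clipping on the partition function estimate can be seen in a toy numpy experiment; the 1-d Gaussian pair below is our own illustrative construction (not the paper's benchmark), chosen because the exact density ratio is available in closed form:

```python
import numpy as np

# P = N(mu, 1), Q = N(0, 1), so KL(P || Q) = mu^2 / 2 and the exact
# density ratio is r*(x) = dP/dQ = exp(mu * x - mu^2 / 2), with E_Q[r*] = 1.
mu = 3.0
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 64))  # 1000 batches of n = 64 samples from Q

def partition_means(tau):
    """Per-batch estimates of E_Q[clip(r*, e^-tau, e^tau)]."""
    log_r = mu * x - 0.5 * mu ** 2
    return np.exp(np.clip(log_r, -tau, tau)).mean(axis=1)

var_unclipped = partition_means(tau=50.0).var()  # tau so large it never binds
var_clipped = partition_means(tau=5.0).var()
# Clipping caps the rare huge ratios that dominate the variance of the
# partition-function estimate, at the cost of a downward bias in E_Q[r*].
```

Decreasing `tau` shrinks the variance monotonically in this experiment; at `tau = 0` every clipped ratio equals 1 and the variance collapses to zero, the extreme of the bias-variance trade-off analyzed in Theorems 3 and 4.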
We can then obtain the following estimator with smoothed partition function estimates:

$$
I_{\mathrm{SMILE}}\left(T_{\theta}, \tau\right) := \mathbb{E}_P\left[T_{\theta}(\boldsymbol{x}, \boldsymbol{y})\right] - \log \mathbb{E}_Q\left[\operatorname{clip}\left(e^{T_{\theta}(\boldsymbol{x}, \boldsymbol{y})}, e^{-\tau}, e^{\tau}\right)\right] \tag{17}
$$

where $T_{\theta}$ is a neural network that estimates the log-density ratio (similar to the role of $T_{\theta}$ in $\hat{I}_{\mathrm{MINE}}$). We term this the Smoothed Mutual Information "Lower-bound" Estimator (SMILE) with hyperparameter $\tau$; $I_{\mathrm{SMILE}}$ converges to $I_{\mathrm{MINE}}$ when $\tau \rightarrow \infty$. In our experiments, we consider learning the density ratio with logistic regression, similar to the procedure in Deep InfoMax (Hjelm et al., 2018).

The selection of $\tau$ affects the bias-variance trade-off when estimating the partition function; with a smaller $\tau$, variance is reduced at the cost of (potentially) increasing bias. In the following theorems, we analyze the bias and variance in the worst case for density ratio estimators whose actual partition function is $S$ for some $S \in (0, \infty)$.

Theorem 3. Let $r(\pmb{x}): \mathcal{X} \to \mathbb{R}_{\geq 0}$ be any non-negative measurable function such that $\int r \, \mathrm{d}Q = S$, $S \in (0,\infty)$ and $r(\pmb{x}) \in [0,e^{K}]$. Define $r_{\tau}(\pmb{x}) = \mathrm{clip}(r(\pmb{x}),e^{-\tau},e^{\tau})$ for finite, non-negative $\tau$.
If $\tau < K$, then the bias for using $r_{\tau}$ to estimate the partition function of $r$ satisfies:

$$
|\mathbb{E}_Q[r] - \mathbb{E}_Q[r_{\tau}]| \leq \max\left(e^{-\tau}|1 - Se^{-\tau}|, \left|\frac{1 - e^{K}e^{-\tau} + S(e^{K} - e^{\tau})}{e^{K} - e^{-\tau}}\right|\right);
$$

if $\tau \geq K$, then

$$
|\mathbb{E}_Q[r] - \mathbb{E}_Q[r_{\tau}]| \leq e^{-\tau}(1 - Se^{-K}).
$$

Theorem 4. The variance of the estimator $\mathbb{E}_{Q_n}[r_\tau]$ (using $n$ samples from $Q$) satisfies:

$$
\operatorname{Var}\left[\mathbb{E}_{Q_n}\left[r_{\tau}\right]\right] \leq \frac{\left(e^{\tau} - e^{-\tau}\right)^{2}}{4n} \tag{18}
$$

We defer the proofs to Appendix A. Theorems 3 and 4 suggest that as we decrease $\tau$, variance is decreased at the cost of potentially increasing bias. However, if $S$ is close to 1, then we could use small $\tau$ values to obtain estimators where both variance and bias are small. We further discuss the bias-variance trade-off for a fixed $r$ over changes of $\tau$ in Theorem 3 and Corollary 3.

# 6 EXPERIMENTS

# 6.1 BENCHMARKING ON MULTIVARIATE GAUSSIANS

First, we evaluate the performance of MI bounds on two toy tasks detailed in (Poole et al., 2019; Belghazi et al., 2018), where the ground truth MI is tractable. The first task (Gaussian) is where $(\boldsymbol{x}, \boldsymbol{y})$ are drawn from a 20-d Gaussian distribution with correlation $\rho$, and the second task (Cubic) is the same as Gaussian but we apply the transformation $\boldsymbol{y} \mapsto \boldsymbol{y}^3$. We consider three discriminative approaches ($I_{\mathrm{CPC}}$, $I_{\mathrm{NWJ}}$, $I_{\mathrm{SMILE}}$) and one generative approach ($I_{\mathrm{GM}}$). For the discriminative approaches, we consider the joint critic in (Belghazi et al., 2018) and the separate critic in (van den Oord et al., 2018).
For $I_{\mathrm{GM}}$ we consider invertible flow models (Dinh et al., 2016). We train all models for 20k iterations, with the ground truth mutual information increasing by 2 per 4k iterations. More training details are included in Appendix B².

![](images/078944e7a918bb53c302e09f37c04ecd6dcc26a3767cc6d226a0460f2b5bbaf4.jpg)
Figure 1: Performance of mutual information estimation approaches on Gaussian (top row) and Cubic (bottom row). Left two columns are $I_{\mathrm{CPC}}$ and $I_{\mathrm{NWJ}}$, next three columns are $I_{\mathrm{SMILE}}$ with $\tau = 1.0, 5.0, \infty$ and the right column is $I_{\mathrm{GM}}$ with flow models.

Figure 1 shows the estimated mutual information over the number of iterations. In both tasks, $I_{\mathrm{CPC}}$ has high bias and $I_{\mathrm{NWJ}}$ has high variance when the ground truth MI is high, whereas $I_{\mathrm{SMILE}}$ has relatively low bias and low variance across different architectures and tasks.
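For reference, the ground-truth MI in the Gaussian task has a closed form: for $d$ independent pairs of standard Gaussians with per-component correlation $\rho$, $I(X;Y) = -\frac{d}{2}\log(1-\rho^2)$ (in nats). A small numpy helper, assuming the correlation is applied independently per component; the scheduling use at the end is our own illustration:

```python
import numpy as np

def gaussian_mi(rho, dim=20):
    """Ground-truth MI (nats) between X and Y: dim independent pairs of
    standard Gaussians, each pair with correlation rho."""
    return -0.5 * dim * np.log(1.0 - rho ** 2)

def rho_for_mi(mi, dim=20):
    """Invert gaussian_mi: the correlation that yields a target MI."""
    return np.sqrt(1.0 - np.exp(-2.0 * mi / dim))

# e.g. the correlations that step the ground-truth MI through 2, 4, ..., 10
rhos = [rho_for_mi(mi) for mi in (2, 4, 6, 8, 10)]
```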
Decreasing $\tau$ in the SMILE estimator consistently decreases variance but has different effects on bias; for example, under the joint critic the bias is higher for $\tau = 5.0$ in Gaussian but lower in Cubic. $I_{\mathrm{GM}}$ with flow models has the best performance on Gaussian, yet performs poorly on Cubic, illustrating the importance of model parametrization in the generative approaches.

![](images/2e25c1a09e74613449777cfb9b0a18c4ec1bae3761806dfcf4f6134d35a4e73e.jpg)
Figure 2: Bias / Variance / MSE of various estimators on Cubic. We display more results for Gaussian in Appendix B.

![](images/78596f4c18e5027a1edb189445e243a65c5b062850a3d8c17087fa62e70664f7.jpg)

![](images/799b015608df2dcb2725d59f4d3d5aebccbfcbca78b83c4981100a773d8bca82.jpg)

In Figure 2, we compare the bias, variance and mean squared error (MSE) of the discriminative methods. We observe that the variance of $I_{\mathrm{NWJ}}$ increases exponentially with mutual information, consistent with our theory in Corollary 1. On the other hand, the SMILE estimator achieves much lower variance with small $\tau$ values; in comparison, the variance of SMILE with $\tau = \infty$ is similar to that of $I_{\mathrm{NWJ}}$ on Cubic. In Table 2, we show that $I_{\mathrm{SMILE}}$ can have nearly two orders of magnitude smaller variance than $I_{\mathrm{NWJ}}$ while having similar bias. Therefore $I_{\mathrm{SMILE}}$ enjoys lower MSE on this benchmark MI estimation task than $I_{\mathrm{NWJ}}$ and $I_{\mathrm{CPC}}$ .

# 6.2 SELF-CONSISTENCY TESTS ON IMAGES

Next, we perform our proposed self-consistency tests on high-dimensional images (MNIST and CIFAR10) under three settings, illustrated in Figure 3, where the ground truth MI is difficult (if not impossible) to obtain.

1. In the first setting, $X$ is an image and $Y$ is the same image with the bottom rows masked out, leaving the top $t$ rows of $X$ ( $t$ is selected before evaluation). The rationale behind this choice of $Y$ is twofold: 1) $I(X;Y)$ should be non-decreasing in $t$ ; 2) it is easier (compared to low-dimensional representations) to gain intuition about the amount of information remaining in $Y$ .
2. In the second setting, $X$ corresponds to two identical images, and $Y$ to the top $t_1$ , $t_2$ rows of the two images ( $t_1 \geq t_2$ ); this tests the "data-processing" property.
3. In the third setting, $X$ corresponds to two independent images, and $Y$ to the top $t$ rows of both; this tests the "additivity" property.

We compare four approaches: $I_{\mathrm{CPC}}$ , $I_{\mathrm{MINE}}$ , $I_{\mathrm{SMILE}}$ and $I_{\mathrm{GM}}$ . We use the same CNN architecture for $I_{\mathrm{CPC}}$ , $I_{\mathrm{MINE}}$ and $I_{\mathrm{SMILE}}$ , and use VAEs (Kingma & Welling, 2013) for $I_{\mathrm{GM}}$ . We include more experimental details and alternative image processing approaches in Appendix B.

![](images/b7412a91442fb56bbfdc89cc8551c84529060c000e38521019d7f0bce9010a46.jpg)
Figure 3: Three settings in the self-consistency experiments.
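The three settings can be constructed with simple masking and stacking; in this sketch, stacking along a leading axis stands in for the channel-wise concatenation described in Appendix B, and `top_rows` is an illustrative helper name.

```python
import numpy as np

def top_rows(img, t):
    """Keep the top t rows of an image and zero out the rest (as in Figure 3)."""
    out = np.zeros_like(img)
    out[:t] = img[:t]
    return out

rng = np.random.default_rng(0)
x1 = rng.random((28, 28))   # stand-ins for two independent MNIST images
x2 = rng.random((28, 28))
t = 10

# Setting 1 (baseline):        X = x1,       Y = top t rows of x1
X_base, Y_base = x1, top_rows(x1, t)
# Setting 2 (data processing): X = [x1, x1], Y = [top t rows, top t-3 rows]
X_dp = np.stack([x1, x1])
Y_dp = np.stack([top_rows(x1, t), top_rows(x1, t - 3)])
# Setting 3 (additivity):      X = [x1, x2], Y = [top t rows of each]
X_add = np.stack([x1, x2])
Y_add = np.stack([top_rows(x1, t), top_rows(x2, t)])
```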
![](images/a8490ed3532ce9a34445e7e3e7d050b2ce3decdb01b02acf5790a989b8df70df.jpg)
Figure 4: Evaluation of $\hat{I}(X;Y) / \hat{I}(X;X)$ . $X$ is an image and $Y$ contains the top $t$ rows of $X$ .

Baselines We evaluate the first setting with $Y$ having a varying number of rows $t$ in Figure 4, where the estimates are normalized by the estimated $\hat{I}(X; X)$ . Most methods (except for $I_{\mathrm{GM}}$ ) predict zero MI when $X$ and $Y$ are independent, passing the first self-consistency test. Moreover, the estimated MI is non-decreasing with increasing $t$ , but with different slopes. As a reference, we show the validation accuracy of predicting the label from only the top $t$ rows.

Data-processing In the second setting we set $t_2 = t_1 - 3$ . Ideally, the estimator should satisfy $\hat{I}([X, X]; [Y, h(Y)]) / \hat{I}(X; Y) \approx 1$ , as additional processing should not increase information. We show this ratio in Figure 5 under varying $t_1$ values. All methods except $I_{\mathrm{MINE}}$ and $I_{\mathrm{GM}}$ perform well on both datasets; $I_{\mathrm{GM}}$ performs poorly on CIFAR10 (possibly due to the limited capacity of the VAE), whereas $I_{\mathrm{MINE}}$ performs poorly on MNIST (possibly due to numerical stability issues).

![](images/602653f66d380007d349e04f92ac8023f9680ffe3421d91cf8bfca269046380b.jpg)
Figure 5: Evaluation of $\hat{I}([X, X]; [Y, h(Y)]) / \hat{I}(X; Y)$ , where the ideal value is 1.

![](images/a1163cb5ae40d1571f02cb68258765961d0e2c27d04584236e2046455160f850.jpg)

![](images/6db1df7bb8d140bffc169e580cd42ca35cd2e0cd5ab268ae2f00ae37303b018b.jpg)
Figure 6: Evaluation of $\hat{I}([X_1, X_2]; [Y_1, Y_2]) / \hat{I}(X; Y)$ , where the ideal value is 2.

![](images/96cb2c5198e1ee49ea064f9d9b004b22f532625432f468a8415fd4a562310ca7.jpg)

Additivity In the third setting, the estimator should double its value relative to the baseline with the same $t$ , i.e.
$\hat{I}([X_1, X_2]; [Y_1, Y_2]) / \hat{I}(X; Y) \approx 2$ . Figure 6 shows this ratio under different values of $t$ . None of the discriminative approaches works well in this case except when $t$ is very small; when $t$ is large, the ratio converges to 1 (possibly due to initialization and saturation of the training objective). $I_{\mathrm{GM}}$ , however, performs near-perfectly on this test for all values of $t$ .

# 7 DISCUSSION

In this work, we discuss generative and discriminative approaches to variational mutual information estimation and demonstrate their limitations. We show that estimators based on $I_{\mathrm{NWJ}}$ and $I_{\mathrm{MINE}}$ are prone to high variance when estimated with mini-batches, inspiring our $I_{\mathrm{SMILE}}$ estimator, which improves performance on benchmark tasks. However, none of the approaches is good enough to pass the self-consistency tests. The generative approaches perform poorly when MI is small (failing the independence and data-processing tests), while the discriminative approaches perform poorly when MI is large (failing the additivity tests).

This empirical evidence suggests that optimizing these variational estimators is not necessarily related to optimizing MI, so empirical successes obtained with them might have little connection to optimizing mutual information. It would therefore be helpful to acknowledge these limitations and to consider alternative measures of information better suited to modern machine learning applications (Ozair et al., 2019; Tschannen et al., 2019).

# ACKNOWLEDGEMENTS

This research was supported by AFOSR (FA9550-19-1-0024), NSF (#1651565, #1522054, #1733686), ONR, and FLI. The authors would like to thank Shengjia Zhao, Yilun Xu and Lantao Yu for helpful discussions.

# REFERENCES

Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, December 2016.
David Barber and Felix V Agakov. The IM algorithm: a variational approach to information maximization. In Advances in Neural Information Processing Systems, 2003.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. MINE: Mutual information neural estimation. arXiv preprint arXiv:1801.04062, January 2018.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In D D Lee, M Sugiyama, U V Luxburg, I Guyon, and R Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2172-2180. Curran Associates, Inc., 2016.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, May 2016.

Monroe D Donsker and SR Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, I. Communications on Pure and Applied Mathematics, 28(1):1-47, 1975.

Shuyang Gao, Greg Ver Steeg, and Aram Galstyan. Efficient estimation of mutual information for strongly dependent variables. In Artificial Intelligence and Statistics, pp. 277-286, 2015.

Weihao Gao, Sreeram Kannan, Sewoong Oh, and Pramod Viswanath. Estimating mutual information for discrete-continuous mixtures. arXiv preprint arXiv:1709.06212, September 2017.

Aditya Grover and Stefano Ermon. Boosted generative models. arXiv preprint arXiv:1702.08484, February 2017.

Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230, 2018.
R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, December 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, December 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In F Pereira, C J C Burges, L Bottou, and K Q Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

David McAllester and Karl Stratos. Formal limitations on the measurement of mutual information. arXiv preprint arXiv:1811.04251, 2018.

Sudipto Mukherjee, Himanshu Asnani, and Sreeram Kannan. CCMI: Classifier based conditional mutual information estimation. arXiv preprint arXiv:1906.01824, 2019.

Ilya Nemenman, William Bialek, and Rob De Ruyter Van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004.

XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861, November 2010.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, June 2016.

Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron van den Oord, Sergey Levine, and Pierre Sermanet.
Wasserstein dependency measure for representation learning. arXiv preprint arXiv:1903.11780, March 2019.

Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16-17, 2017.

Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. arXiv preprint arXiv:1905.06922, May 2019.

Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. arXiv preprint arXiv:1503.02406, March 2015.

Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, July 2019.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, July 2018.

Petar Veličković, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. arXiv preprint arXiv:1809.10341, September 2018.

Shengjia Zhao, Hongyu Ren, Arianna Yuan, Jiaming Song, Noah Goodman, and Stefano Ermon. Bias and generalization in deep generative models: An empirical study. In Advances in Neural Information Processing Systems, pp. 10792-10801, 2018a.

Shengjia Zhao, Jiaming Song, and Stefano Ermon. The information autoencoding family: A Lagrangian perspective on latent variable generative models. arXiv preprint arXiv:1806.06514, June 2018b.

# A PROOFS

# A.1 PROOFS IN SECTION 3

Theorem 1. $\forall P, Q \in \mathcal{P}(\mathcal{X})$ such that $P \ll Q$ we have

$$
D _ {\mathrm {K L}} (P \| Q) = \sup _ {r \in \Delta (Q)} \mathbb {E} _ {P} [ \log r ] \tag {6}
$$

where the supremum is achieved when $r = \mathrm{d}P / \mathrm{d}Q$ .

Proof.
For every $T \in L^{\infty}(Q)$ , define $r_T = \frac{e^T}{\mathbb{E}_Q[e^T]}$ , then $r_T \in \Delta(Q)$ and from the Donsker-Varadhan inequality (Donsker & Varadhan, 1975) + +$$ +\begin{array}{l} D _ {\mathrm {K L}} (P \| Q) = \sup _ {T \in L ^ {\infty} (Q)} \mathbb {E} _ {P} [ T ] - \log \mathbb {E} _ {Q} \left[ e ^ {T} \right] (19) \\ = \sup _ {T \in L ^ {\infty} (Q)} \mathbb {E} _ {P} \left[ \log \frac {e ^ {T}}{\mathbb {E} _ {Q} \left[ e ^ {T} \right]} \right] = \sup _ {r _ {T} \in \Delta (Q)} \mathbb {E} _ {P} [ \log r _ {T} ] (20) \\ \end{array} +$$ + +Moreover, we have: + +$$ +D _ {\mathrm {K L}} (P \| Q) = \mathbb {E} _ {P} [ \log \mathrm {d} P - \log \mathrm {d} Q ] = \mathbb {E} _ {P} \left[ \log \frac {\mathrm {d} P}{\mathrm {d} Q} \right] \tag {21} +$$ + +which completes the proof. + +![](images/a79985161ce004a81d26d93162eea3f2b4e25357d82b104f2d8e17d302036261.jpg) + +Corollary 2. $\forall P, Q \in \mathcal{P}(\mathcal{X})$ such that $P \ll Q$ , $\forall f_{\theta}: \mathcal{X} \to \mathbb{R}_{\geq 0}$ we have + +$$ +I (X; Y) \geq I _ {\mathrm {C P C}} (f _ {\theta}) := \mathbb {E} _ {P ^ {n} (X, Y)} \left[ \frac {1}{n} \sum_ {i = 1} ^ {n} \log \frac {f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {i}\right)}{\frac {1}{n} \sum_ {j = 1} ^ {n} f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {j}\right)} \right] \tag {22} +$$ + +Proof. 
+ +$$ +\begin{array}{l} n I _ {\mathrm {C P C}} \left(f _ {\theta}\right) := \mathbb {E} _ {P ^ {n} (X, Y)} \left[ \sum_ {i = 1} ^ {n} \log \frac {f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {i}\right)}{\frac {1}{n} \sum_ {j = 1} ^ {n} f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {j}\right)} \right] (23) \\ = \mathbb {E} _ {P ^ {n} (X, Y)} \left[ \sum_ {i = 1} ^ {n} \log \frac {n f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {i}\right)}{\sum_ {j = 1} ^ {n} f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {j}\right)} \right] (24) \\ \end{array} +$$ + +Since + +$$ +\mathbb {E} _ {P (X) P ^ {n} (Y)} \left[ \frac {n f _ {\theta} (\boldsymbol {x} , \boldsymbol {y})}{\sum_ {j = 1} ^ {n} f _ {\theta} (\boldsymbol {x} , \boldsymbol {y} _ {j})} \right] = 1, \tag {25} +$$ + +we can apply Theorem 1 to obtain: + +$$ +\begin{array}{l} n I _ {\mathrm {C P C}} \left(f _ {\theta}\right) = \mathbb {E} _ {P ^ {n} (X, Y)} \left[ \sum_ {i = 1} ^ {n} \log \frac {n f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {i}\right)}{\sum_ {j = 1} ^ {n} f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {j}\right)} \right] (26) \\ = \sum_ {i = 1} ^ {n} \mathbb {E} _ {P \left(X _ {i}, Y _ {1} ^ {n}\right)} \left[ \log \frac {n f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {i}\right)}{\sum_ {j = 1} ^ {n} f _ {\theta} \left(\boldsymbol {x} _ {i} , \boldsymbol {y} _ {j}\right)} \right] (27) \\ \leq \sum_ {i = 1} ^ {n} I \left(X _ {i}; Y _ {1} ^ {n}\right) = n I (X; Y) (28) \\ \end{array} +$$ + +where $Y_1^n$ denotes the concatenation of $n$ independent random variables $(Y_1, \ldots, Y_n)$ and + +$$ +P (X _ {i}, Y _ {1} ^ {n}) = P (X _ {i}, Y _ {i}) P (Y _ {1} ^ {i - 1}) P (Y _ {i + 1} ^ {n}) +$$ + +is the joint distribution of $P(X_{i},Y_{1}^{n})$ . + +![](images/70c400e93c3a3beaf3b0144511718d2f95a7d7e093c2a5694bd60dc0a7666836.jpg) + +# A.2 PROOFS IN SECTION 4 + +Theorem 2. 
Assume that the ground truth density ratio $r^{\star} = \mathrm{d}P / \mathrm{d}Q$ and $\operatorname{Var}_Q[r^\star]$ exist. Let $Q_n$ denote the empirical distribution of $n$ i.i.d. samples from $Q$ and let $\mathbb{E}_{Q_n}$ denote the sample average over $Q_n$ . Then under the randomness of the sampling procedure, we have:

$$
\operatorname {V a r} _ {Q} \left[ \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right] \right] \geq \frac {e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1}{n} \tag {10}
$$

$$
\lim _ {n \rightarrow \infty} n \operatorname {V a r} _ {Q} \left[ \log \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right]\right] \geq e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1. \tag {11}
$$

Proof. Consider the variance of $r^{\star}(\pmb {x})$ when $\pmb {x}\sim Q$ :

$$
\operatorname {V a r} _ {Q} \left[ r ^ {\star} \right] = \mathbb {E} _ {Q} \left[ \left(\frac {\mathrm {d} P}{\mathrm {d} Q}\right) ^ {2} \right] - \left(\mathbb {E} _ {Q} \left[ \frac {\mathrm {d} P}{\mathrm {d} Q} \right]\right) ^ {2} \tag {29}
$$

$$
= \mathbb {E} _ {P} \left[ \frac {\mathrm {d} P}{\mathrm {d} Q} \right] - 1 \tag {30}
$$

$$
\geq e ^ {\mathbb {E} _ {P} \left[ \log \frac {\mathrm {d} P}{\mathrm {d} Q} \right]} - 1 \tag {31}
$$

$$
= e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1 \tag {32}
$$

where (29) uses the definition of variance, (30) uses the Radon-Nikodym derivative to change measures, (31) uses Jensen's inequality applied to $\log$ , and (32) uses the definition of the KL divergence.

The variance of the mean of $n$ i.i.d. random variables then gives us:

$$
\operatorname {V a r} _ {Q} \left[ \mathbb {E} _ {Q _ {n}} [ r ^ {\star} ] \right] = \frac {\operatorname {V a r} _ {Q} [ r ^ {\star} ]}{n} \geq \frac {e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1}{n} \tag {33}
$$

which is the first part of the theorem.
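The chain (29)-(33) can be sanity-checked numerically; the one-dimensional Gaussian pair below, where both $D_{\mathrm{KL}}(P\|Q)$ and $\operatorname{Var}_Q[r^\star]$ have closed forms, is an illustrative choice rather than an example from the paper.

```python
import numpy as np

# P = N(mu, 1), Q = N(0, 1): r*(x) = exp(mu * x - mu^2 / 2), D_KL(P||Q) = mu^2 / 2.
mu = 1.0
kl = mu**2 / 2

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)      # samples from Q
r = np.exp(mu * x - mu**2 / 2)          # density ratio dP/dQ on those samples

var_mc = r.var()                        # Monte Carlo estimate of Var_Q[r*]
var_exact = np.exp(mu**2) - 1           # closed form: e^{mu^2} - 1
lower = np.exp(kl) - 1                  # the bound e^{D_KL(P||Q)} - 1 from (32)
```

Here $\operatorname{Var}_Q[r^\star] = e^{\mu^2} - 1 \approx 1.72$ , strictly above the bound $e^{\mu^2/2} - 1 \approx 0.65$ ; the gap is exactly the slack introduced by Jensen's inequality in (31).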
+ +As $n\to \infty$ $\operatorname {Var}_Q[\mathbb{E}_{Q_n}[r]]\to 0$ , so we can apply the delta method: + +$$ +\operatorname {V a r} _ {Q} [ f (X) ] \approx \left(f ^ {\prime} (\mathbb {E} (X))\right) ^ {2} \operatorname {V a r} _ {Q} [ X ] \tag {34} +$$ + +Applying $f = \log$ and $\mathbb{E}[X] = 1$ gives us the second part of the theorem: + +$$ +\lim _ {n \rightarrow \infty} n \operatorname {V a r} _ {Q} [ \log \mathbb {E} _ {Q _ {n}} [ r ] ] = \lim _ {n \rightarrow \infty} n \operatorname {V a r} [ \mathbb {E} _ {Q _ {n}} [ r ] ] \geq e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1 \tag {35} +$$ + +which describes the variance in the asymptotic sense. + +Corollary 1. Assume that the assumptions in Theorem 2 hold. Let $P_{m}$ and $Q_{n}$ be the empirical distributions of $m$ i.i.d. samples from $P$ and $n$ i.i.d. samples from $Q$ , respectively. Define + +$$ +I _ {\mathrm {N W J}} ^ {m, n} := \mathbb {E} _ {P _ {m}} [ \log r ^ {\star} + 1 ] - \mathbb {E} _ {Q _ {n}} [ r ^ {\star} ] \tag {12} +$$ + +$$ +I _ {\mathrm {M I N E}} ^ {m, n} := \mathbb {E} _ {P _ {m}} \left[ \log r ^ {\star} \right] - \log \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right] \tag {13} +$$ + +where $r^{\star} = \mathrm{d}P / \mathrm{d}Q$ . Then under the randomness of the sampling procedure, we have $\forall m \in \mathbb{N}$ : + +$$ +\operatorname {V a r} _ {P, Q} \left[ I _ {\mathrm {N W J}} ^ {m, n} \right] \geq \left(e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1\right) / n \tag {14} +$$ + +$$ +\lim _ {n \rightarrow \infty} n \operatorname {V a r} _ {P, Q} \left[ I _ {\mathrm {M I N E}} ^ {m, n} \right] \geq e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1. \tag {15} +$$ + +Proof. 
Since $P_{m}$ and $Q_{n}$ are independent, we have

$$
\operatorname {V a r} \left[ I _ {\mathrm {N W J}} ^ {m, n} \right] \geq \operatorname {V a r} \left[ \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right] \right] \tag {36}
$$

$$
\geq \frac {e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1}{n} \tag {37}
$$

and

$$
\lim _ {n \rightarrow \infty} n \operatorname {V a r} \left[ I _ {\text {M I N E}} ^ {m, n} \right] \geq \lim _ {n \rightarrow \infty} n \operatorname {V a r} \left[ \log \mathbb {E} _ {Q _ {n}} \left[ r ^ {\star} \right]\right] \geq e ^ {D _ {\mathrm {K L}} (P \| Q)} - 1 \tag {38}
$$

which completes the proof.

# A.3 PROOFS IN SECTION 5

Theorem 3. Let $r(\pmb{x}) : \mathcal{X} \to \mathbb{R}_{\geq 0}$ be any non-negative measurable function such that $\int r \, \mathrm{d}Q = S$ , $S \in (0,\infty)$ and $r(\pmb{x}) \in [0,e^K]$ . Define $r_{\tau}(\pmb{x}) = \mathrm{clip}(r(\pmb{x}),e^{-\tau},e^{\tau})$ for finite, non-negative $\tau$ . If $\tau < K$ , then the bias for using $r_{\tau}$ to estimate the partition function of $r$ satisfies:

$$
| \mathbb {E} _ {Q} [ r ] - \mathbb {E} _ {Q} [ r _ {\tau} ] | \leq \max \left(e ^ {- \tau} | 1 - S e ^ {- \tau} |, \left| \frac {1 - e ^ {K} e ^ {- \tau} + S (e ^ {K} - e ^ {\tau})}{e ^ {K} - e ^ {- \tau}} \right|\right);
$$

if $\tau \geq K$ , then

$$
\left| \mathbb {E} _ {Q} [ r ] - \mathbb {E} _ {Q} [ r _ {\tau} ] \right| \leq e ^ {- \tau} (1 - S e ^ {- K}).
$$

Proof. We establish the upper bounds by finding a worst-case $r$ that maximizes $|\mathbb{E}_Q[r] - \mathbb{E}_Q[r_\tau]|$ . First, without loss of generality, we may assume that $r(\pmb{x}) \in [0, e^{-\tau}] \cup [e^{\tau}, \infty)$ for all $\pmb{x} \in \mathcal{X}$ .
Otherwise, denote $\mathcal{X}_{\tau}(r) = \{\pmb{x} \in \mathcal{X} : e^{-\tau} < r(\pmb{x}) < e^{\tau}\}$ as the (measurable) set where the values of $r(\pmb{x})$ lie strictly between $e^{-\tau}$ and $e^{\tau}$ . Let

$$
V _ {\tau} (r) = \int_ {\boldsymbol {x} \in \mathcal {X} _ {\tau} (r)} r (\boldsymbol {x}) \mathrm {d} Q (\boldsymbol {x}) \in \left(e ^ {- \tau} Q \left(\mathcal {X} _ {\tau} (r)\right), e ^ {\tau} Q \left(\mathcal {X} _ {\tau} (r)\right)\right) \tag {39}
$$

be the integral of $r$ over $\mathcal{X}_{\tau}(r)$ . We can transform $r(\pmb{x})$ for all $\pmb{x} \in \mathcal{X}_{\tau}(r)$ to take values only in $\{e^{-\tau}, e^{\tau}\}$ while still integrating to $V_{\tau}(r)$ , so the expectation under $Q$ is not changed (and these values are unaffected by clipping).

Then we show that we can rescale all the values above $e^{\tau}$ and below $e^{-\tau}$ to a single value each without changing the expected value under $Q$ . We denote

$$
K _ {1} = \log \int I (r (\boldsymbol {x}) \leq e ^ {- \tau}) \mathrm {d} Q (\boldsymbol {x}) - \log \int I (r (\boldsymbol {x}) \leq e ^ {- \tau}) r (\boldsymbol {x}) \mathrm {d} Q (\boldsymbol {x}) \tag {40}
$$

$$
K _ {2} = \log \int I (r (\boldsymbol {x}) \geq e ^ {\tau}) r (\boldsymbol {x}) \mathrm {d} Q (\boldsymbol {x}) - \log \int I (r (\boldsymbol {x}) \geq e ^ {\tau}) \mathrm {d} Q (\boldsymbol {x}) \tag {41}
$$

so that $e^{-K_1}$ and $e^{K_2}$ represent the conditional means of $r(\pmb{x})$ over $\{r(\pmb{x}) \leq e^{-\tau}\}$ and $\{r(\pmb{x}) \geq e^{\tau}\}$ respectively. We then have:

$$
\mathbb {E} _ {Q} [ r ] = e ^ {- K _ {1}} \int I (r (\boldsymbol {x}) \leq e ^ {- \tau}) \mathrm {d} Q (\boldsymbol {x}) + e ^ {K _ {2}} \int I (r (\boldsymbol {x}) \geq e ^ {\tau}) \mathrm {d} Q (\boldsymbol {x}) \tag {42}
$$

$$
1 = \int I (r (\boldsymbol {x}) \leq e ^ {- \tau}) \mathrm {d} Q (\boldsymbol {x}) + \int I (r (\boldsymbol {x}) \geq e ^ {\tau}) \mathrm {d} Q (\boldsymbol {x}) \tag {43}
$$

so we can parametrize $\mathbb{E}_Q[r]$ via $K_{1}$ and $K_{2}$ .
Since $\mathbb{E}_{Q}[r] = S$ by assumption, we have:

$$
\int I (r (\boldsymbol {x}) \leq e ^ {- \tau}) \mathrm {d} Q (\boldsymbol {x}) = \frac {e ^ {K _ {2}} - S}{e ^ {K _ {2}} - e ^ {- K _ {1}}} \tag {44}
$$

and from the definition of $r_{\tau}(\pmb {x})$ :

$$
\mathbb {E} _ {Q} \left[ r _ {\tau} \right] = \frac {e ^ {K _ {2}} e ^ {- \tau} - S e ^ {- \tau} + S e ^ {\tau} - e ^ {- K _ {1}} e ^ {\tau}}{e ^ {K _ {2}} - e ^ {- K _ {1}}} := g \left(K _ {1}, K _ {2}\right) \tag {45}
$$

We can obtain an upper bound once we find $\max g(K_1, K_2)$ and $\min g(K_1, K_2)$ . First, we have:

$$
\begin{array}{l} \frac {\partial g \left(K _ {1} , K _ {2}\right)}{\partial K _ {1}} = \frac {e ^ {- K _ {1}} e ^ {\tau} \left(e ^ {K _ {2}} - e ^ {- K _ {1}}\right) - e ^ {- K _ {1}} \left(e ^ {K _ {2}} e ^ {- \tau} - S e ^ {- \tau} + S e ^ {\tau} - e ^ {- K _ {1}} e ^ {\tau}\right)}{\left(e ^ {K _ {2}} - e ^ {- K _ {1}}\right) ^ {2}} \\ = \frac {e ^ {- K _ {1}} \left(e ^ {\tau} - e ^ {- \tau}\right) \left(e ^ {K _ {2}} - S\right)}{\left(e ^ {K _ {2}} - e ^ {- K _ {1}}\right) ^ {2}} \geq 0 \tag {46} \\ \end{array}
$$

$$
\begin{array}{l} \frac {\partial g (K _ {1} , K _ {2})}{\partial K _ {2}} = \frac {e ^ {K _ {2}} e ^ {- \tau} \left(e ^ {K _ {2}} - e ^ {- K _ {1}}\right) - e ^ {K _ {2}} \left(e ^ {K _ {2}} e ^ {- \tau} - S e ^ {- \tau} + S e ^ {\tau} - e ^ {- K _ {1}} e ^ {\tau}\right)}{\left(e ^ {K _ {2}} - e ^ {- K _ {1}}\right) ^ {2}} \\ = \frac {e ^ {K _ {2}} \left(e ^ {\tau} - e ^ {- \tau}\right) \left(e ^ {- K _ {1}} - S\right)}{\left(e ^ {K _ {2}} - e ^ {- K _ {1}}\right) ^ {2}} \leq 0 \tag {47} \\ \end{array}
$$

Therefore, $g(K_1, K_2)$ is largest when $K_1 \to \infty$ , $K_2 = \tau$ and smallest when $K_1 = \tau$ , $K_2 \to \infty$ .
$$
\max g \left(K _ {1}, K _ {2}\right) = \lim _ {K _ {1} \rightarrow \infty} \frac {1 - e ^ {- K _ {1}} e ^ {\tau} + S \left(e ^ {\tau} - e ^ {- \tau}\right)}{e ^ {\tau} - e ^ {- K _ {1}}} = S + e ^ {- \tau} - S e ^ {- 2 \tau} \tag {48}
$$

$$
\min g \left(K _ {1}, K _ {2}\right) = \lim _ {K _ {2} \rightarrow \infty} \frac {e ^ {K _ {2}} e ^ {- \tau} - 1 + S \left(e ^ {\tau} - e ^ {- \tau}\right)}{e ^ {K _ {2}} - e ^ {- \tau}} = e ^ {- \tau} \tag {49}
$$

Therefore,

$$
\begin{array}{l} \left| \mathbb {E} _ {Q} [ r ] - \mathbb {E} _ {Q} \left[ r _ {\tau} \right] \right| \leq \max \left(\left| \max g \left(K _ {1}, K _ {2}\right) - S \right|, \left| S - \min g \left(K _ {1}, K _ {2}\right) \right|\right) (50) \\ = \max \left(\left| e ^ {- \tau} - S e ^ {- 2 \tau} \right|, \left| S - e ^ {- \tau} \right|\right) (51) \\ \end{array}
$$

The bound in Theorem 3 follows from the above analysis with $r$ additionally bounded by $e^{K}$ . When $\tau < K$ , we consider the cases $K_{1} \to \infty, K_{2} = \tau$ and $K_{1} = \tau, K_{2} = K$ ; when $\tau \geq K$ , only the smaller values are clipped (upward to $e^{-\tau}$ ), so the increase is no larger than in the case $K_{1} \to \infty, K_{2} = K$ :

$$
\frac {e ^ {K} - S}{e ^ {K}} \cdot e ^ {- \tau} = e ^ {- \tau} \left(1 - S e ^ {- K}\right) \tag {52}
$$

where $e^K \geq S$ follows from the fact that $\int r\,\mathrm{d}Q = S$ .

![](images/51e9bca344b147d339fc5a4a2af506dde13531d30ec75821557b2757c7e807b6.jpg)

Theorem 4. The variance of the estimator $\mathbb{E}_{Q_n}[r_\tau]$ (using $n$ samples from $Q$ ) satisfies:

$$
\operatorname {V a r} \left[ \mathbb {E} _ {Q _ {n}} \left[ r _ {\tau} \right] \right] \leq \frac {(e ^ {\tau} - e ^ {- \tau}) ^ {2}}{4 n} \tag {18}
$$

Proof. Since $r_{\tau}(\pmb{x})$ is bounded between $e^{-\tau}$ and $e^{\tau}$ , Popoviciu's inequality on the range gives

$$
\operatorname {V a r} \left[ r _ {\tau} \right] \leq \frac {(e ^ {\tau} - e ^ {- \tau}) ^ {2}}{4} \tag {53}
$$

Taking the mean of $n$ independent random variables gives us the result.
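Theorem 4 is straightforward to check by simulation, since clipping makes the variance bound hold for any ratio distribution; the log-normal "ratios" below are an arbitrary illustrative choice, and $(e^{\tau}-e^{-\tau})^{2}/4n$ is the Popoviciu-type range bound used in the proof.

```python
import numpy as np

tau, n, trials = 1.0, 64, 2000
rng = np.random.default_rng(0)

# Heavy-tailed "ratios" r = e^x with x ~ N(0, 1), clipped to [e^-tau, e^tau].
r = np.exp(rng.standard_normal((trials, n)))
r_tau = np.clip(r, np.exp(-tau), np.exp(tau))

# Variance of the batch estimate E_{Q_n}[r_tau] across independent batches,
# compared against the deterministic range bound.
var_batch_mean = r_tau.mean(axis=1).var()
bound = (np.exp(tau) - np.exp(-tau)) ** 2 / (4 * n)
```

In this example the unclipped batch means have variance $(e^{2}-e)/n \approx 0.073$ , well above the bound of about $0.022$ ; clipping is what makes the guarantee distribution-free.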
![](images/001203e9fb457b32de5528389592eedd04cd3334a03929986ca11a89025af231.jpg)

Combining Theorems 3 and 4 with the bias-variance trade-off argument, we have the following:

Corollary 3. Let $r(\pmb{x}) : \mathcal{X} \to \mathbb{R}_{\geq 0}$ be any non-negative measurable function such that $\int r \, \mathrm{d}Q = S$ , $S \in (0,\infty)$ and $r(\pmb{x}) \in [0,e^{K}]$ . Define $r_{\tau}(\pmb{x}) = \mathrm{clip}(r(\pmb{x}),e^{-\tau},e^{\tau})$ for finite, non-negative $\tau$ and $\mathbb{E}_{Q_n}$ as the sample average of $n$ i.i.d. samples from $Q$ . If $\tau < K$ , then

$$
\mathbb {E} \left[ (\mathbb {E} _ {Q} [ r ] - \mathbb {E} _ {Q _ {n}} [ r _ {\tau} ]) ^ {2} \right] \leq \max \left(e ^ {- \tau} | 1 - S e ^ {- \tau} |, \left| \frac {1 - e ^ {K} e ^ {- \tau} + S (e ^ {K} - e ^ {\tau})}{e ^ {K} - e ^ {- \tau}} \right|\right) ^ {2} + \frac {(e ^ {\tau} - e ^ {- \tau}) ^ {2}}{4 n};
$$

if $\tau \geq K$ , then:

$$
\mathbb {E} \left[ \left(\mathbb {E} _ {Q} [ r ] - \mathbb {E} _ {Q _ {n}} \left[ r _ {\tau} \right]\right) ^ {2} \right] \leq e ^ {- 2 \tau} \left(1 - S e ^ {- K}\right) ^ {2} + \frac {(e ^ {\tau} - e ^ {- \tau}) ^ {2}}{4 n} \tag {54}
$$

# B ADDITIONAL EXPERIMENTAL DETAILS

# B.1 BENCHMARK TASKS

Tasks We sample each dimension of $(\pmb{x},\pmb{y})$ independently from a correlated Gaussian with mean 0 and correlation $\rho$ , where $\mathcal{X} = \mathcal{Y} = \mathbb{R}^{20}$ . The true mutual information is:

$$
I (\boldsymbol {x}; \boldsymbol {y}) = - \frac {d}{2} \log \left(1 - \rho ^ {2}\right) \tag {55}
$$

The initial mutual information is 2, and we increase it by 2 every $4k$ iterations, for a total of $20k$ training iterations.

Architecture and training procedure For all the discriminative methods, we consider two types of architectures: joint and separable.
The joint architecture concatenates the inputs $\mathbf{x}$ and $\mathbf{y}$ and passes them through a two-layer MLP with 256 neurons per layer and ReLU activations. The separable architecture learns two separate neural networks for $\mathbf{x}$ and $\mathbf{y}$ (denoted $g(\mathbf{x})$ and $h(\mathbf{y})$ ) and predicts $g(\mathbf{x})^{\top} h(\mathbf{y})$ ; $g$ and $h$ are each a two-layer MLP with 256 neurons per layer and ReLU activations, with 32-dimensional outputs.

For the generative method, we consider the invertible flow architecture described in (Dinh et al., 2014; 2016). $p_{\theta}, p_{\phi}, p_{\psi}$ are flow models with 5 coupling layers (with scaling), where each layer contains a neural network with 2 layers of 100 neurons and ReLU activations. In all cases, we use the Adam optimizer (Kingma & Ba, 2014) with learning rate $5 \times 10^{-4}$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and train for $20k$ iterations with a batch size of 64, following the setup in Poole et al. (2019).

Additional results We show the bias, variance and mean squared error of the discriminative approaches in Table 2. We include additional results for $I_{\mathrm{SMILE}}$ with $\tau = 10.0$ .
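The Gaussian and Cubic tasks above can be sketched in a few lines; `sample_task` and `true_mi` are illustrative helper names, and the closed form uses the standard identity $I = -\tfrac{d}{2}\log(1-\rho^{2})$ for per-dimension correlation $\rho$ .

```python
import numpy as np

def sample_task(rho, n, d=20, cubic=False, rng=None):
    """Draw n pairs (x, y), each dimension from a correlated standard Gaussian.

    The Cubic task applies y -> y**3 elementwise, which leaves the MI
    unchanged because the map is a deterministic invertible transformation.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = rng.standard_normal((n, d))
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((n, d))
    return x, (y**3 if cubic else y)

def true_mi(rho, d=20):
    """Ground-truth MI of the task: -d/2 * log(1 - rho^2)."""
    return -0.5 * d * np.log(1 - rho**2)

x, y = sample_task(rho=0.5, n=128, cubic=True, rng=np.random.default_rng(0))
```

Stepping the ground-truth MI from 2 to 10 then amounts to solving for the $\rho$ that gives each target value and resampling.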
| | | Gaussian | | | | | Cubic | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | MI | 2 | 4 | 6 | 8 | 10 | 2 | 4 | 6 | 8 | 10 |
| Bias | CPC | 0.25 | 0.99 | 2.31 | 4.00 | 5.89 | 0.72 | 1.48 | 2.63 | 4.20 | 5.99 |
| | NWJ | 0.12 | 0.30 | 0.75 | 2.30 | 2.97 | 0.66 | 1.21 | 2.04 | 3.21 | 4.70 |
| | SMILE (τ = 1.0) | 0.15 | 0.30 | 0.32 | 0.18 | 0.03 | 0.47 | 0.77 | 1.16 | 1.64 | 2.16 |
| | SMILE (τ = 5.0) | 0.13 | 0.11 | 0.19 | 0.54 | 0.86 | 0.71 | 1.22 | 1.55 | 1.84 | 2.16 |
| | SMILE (τ = 10.0) | 0.14 | 0.21 | 0.22 | 0.11 | 0.19 | 0.70 | 1.28 | 1.83 | 2.44 | 3.02 |
| | SMILE (τ = ∞) | 0.15 | 0.21 | 0.22 | 0.12 | 0.22 | 0.71 | 1.29 | 1.82 | 2.35 | 2.81 |
| | GM (Flow) | 0.11 | 0.14 | 0.15 | 0.14 | 0.17 | 1.02 | 0.47 | 1.85 | 2.93 | 3.55 |
| Var | CPC | 0.04 | 0.04 | 0.02 | 0.01 | 0.00 | 0.03 | 0.04 | 0.03 | 0.01 | 0.01 |
| | NWJ | 0.06 | 0.22 | 1.36 | 16.50 | 99.0 | 0.04 | 0.10 | 0.41 | 0.93 | 3.23 |
| | SMILE (τ = 1.0) | 0.05 | 0.12 | 0.20 | 0.28 | 0.34 | 0.04 | 0.10 | 0.14 | 0.20 | 0.30 |
| | SMILE (τ = 5.0) | 0.05 | 0.11 | 0.19 | 0.31 | 0.51 | 0.04 | 0.07 | 0.12 | 0.18 | 0.26 |
| | SMILE (τ = 10.0) | 0.05 | 0.13 | 0.31 | 0.69 | 1.35 | 0.03 | 0.10 | 0.21 | 0.46 | 0.79 |
| | SMILE (τ = ∞) | 0.05 | 0.14 | 0.36 | 0.75 | 1.54 | 0.03 | 0.12 | 0.24 | 0.65 | 0.94 |
| | GM (Flow) | 0.05 | 0.10 | 0.13 | 0.16 | 0.19 | 0.56 | 0.72 | 0.92 | 1.02 | 1.02 |
| MSE | CPC | 0.10 | 1.02 | 5.33 | 16.00 | 34.66 | 0.55 | 2.22 | 6.95 | 17.62 | 35.91 |
| | NWJ | 0.07 | 0.32 | 2.19 | 33.37 | 28.43 | 0.47 | 1.55 | 4.56 | 11.13 | 27.00 |
| | SMILE (τ = 1.0) | 0.08 | 0.21 | 0.30 | 0.32 | 0.31 | 0.26 | 0.69 | 1.49 | 2.90 | 4.98 |
| | SMILE (τ = 5.0) | 0.07 | 0.13 | 0.22 | 0.57 | 1.26 | 0.54 | 1.56 | 2.53 | 3.58 | 4.92 |
| | SMILE (τ = 10.0) | 0.07 | 0.18 | 0.36 | 0.67 | 1.33 | 0.52 | 1.75 | 3.54 | 6.41 | 9.91 |
| | SMILE (τ = ∞) | 0.08 | 0.19 | 0.40 | 0.76 | 1.62 | 0.54 | 1.75 | 3.55 | 6.09 | 8.81 |
| | GM (Flow) | 0.07 | 0.11 | 0.14 | 0.17 | 0.22 | 1.65 | 0.91 | 4.36 | 9.70 | 13.67 |
Table 2: Bias, variance and MSE of the estimators under the joint critic.

We show the bias, variance and MSE results in Figure 7. We also evaluate the variance of estimating $\mathbb{E}_{Q_n}[r_\tau]$ (the partition function with clipped ratios) for different values of $\tau$ in the SMILE estimator in Figure 8b. With smaller $\tau$ we see a visible decrease in the variance of this term; this is consistent with the variance estimates in Figure 7, where the variance of $\mathbb{E}_{P_n}[\log r]$ is also included.

![](images/9aa0877850c601b2715b088d30cb759eab736ec34f51420d5de9ecde02b293fc.jpg)

![](images/52a18df44d8f5963fcb7638762626b0d8e49926d9cdfc2bd2221a36fe539101f.jpg)

![](images/024386173432558e19c8d68e7bcb4e84c177a98ff8db9df6ba6e0b01d4e37b2b.jpg)

![](images/1109140d4ebc299d619b83dba5bec9e0e173ce5c94f88174d09a64e16abf538b.jpg)
Figure 7: Bias / Variance / MSE of various estimators on Gaussian (top) and Cubic (bottom).

![](images/f67958d9e005cc7b3e2491a9800b1bf0783670e1fd6564bebc3674fd860e2008.jpg)

![](images/df36ea980764b95373ee2840ae1ab21b4870cf79da69dd95ab4b876bbde6a182.jpg)

![](images/ee0f4191b58d8a90a590bb3aeda3e46ab928009430b3e67918d1da9e855807e2.jpg)
(a) Additional benchmark results on the MINE estimator.

![](images/8d38e47df256cda1ede559be99cd94133b5f8047f0f4375ed898cc581a7b25e4.jpg)
Figure 8: Additional benchmark results.

![](images/05f64ef31aff31aebd4af6cf40bf171cb5e56894700949b036d39cc476772106.jpg)
(b) Variance of $\mathbb{E}_{Q_n}[r_\tau]$ .

# B.2 SELF-CONSISTENCY EXPERIMENTS

Tasks We consider three tasks with the mutual information estimator $\hat{I}$ :

1. $\hat{I}(X;Y)$ where $X$ is an image from MNIST (LeCun et al., 1998) or CIFAR10 (Krizhevsky et al., 2012) and $Y$ is the top $t$ rows of $X$ . To simplify architecture designs, we simply mask out the bottom rows with zeros; see Figure 3.
2.
$\hat{I}([X, X]; [Y, h(Y)])$ where $X$ is an image, $Y$ is the top $t$ rows of $X$, $h(Y)$ is the top $(t - 3)$ rows of $Y$ and $[\cdot, \cdot]$ denotes concatenation. Ideally, the prediction should be close to $\hat{I}(X; Y)$.
3. $\hat{I}([X_1, X_2]; [Y_1, Y_2])$ where $X_1$ and $X_2$ are independent images from MNIST or CIFAR10, and $Y_1$ and $Y_2$ are the top $t$ rows of $X_1$ and $X_2$ respectively. Ideally, this prediction should be close to $2 \cdot \hat{I}(X; Y)$.

Architecture and training procedure We consider the same architecture for all the discriminative approaches. The first layer is a convolutional layer with 64 output channels, kernel size of 5, stride of 2 and padding of 2; the second layer is a convolutional layer with 128 output channels, kernel size of 5, stride of 2 and padding of 2. This is followed by a fully connected layer with 1024 neurons and finally a linear layer that produces a single scalar output. All the layers (except the last one) use ReLU activations. We stack variables over the channel dimension to perform concatenation.

For the generative approach, we consider the following VAE architectures. The encoder architecture is identical to that of the discriminative approach, except that the last layer has 20 outputs that predict the means and standard deviations of 10 Gaussians, respectively. The decoder for MNIST is a two-layer MLP with 400 neurons each; the decoder for CIFAR10 is the transposed convolution network corresponding to the encoder. All the layers (except the last layers of the encoder and decoder) use ReLU activations. For concatenation we stack variables over the channel dimension. For all the cases, we use the Adam optimizer (Kingma & Ba, 2014) with learning rate $10^{-4}$ and $\beta_{1} = 0.9$, $\beta_{2} = 0.999$.

For $I_{\mathrm{GM}}$ we train for 10 epochs; for the discriminative methods, we train for 2 epochs due to numerical stability issues of $I_{\mathrm{MINE}}$.
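The feature-map sizes implied by the convolutional architecture above can be sanity-checked with a short sketch (pure Python; the helper names are ours, not from the paper):

```python
def conv_out(size, kernel=5, stride=2, padding=2):
    """Spatial output size of a square convolution (floor division)."""
    return (size + 2 * padding - kernel) // stride + 1

def critic_shapes(size):
    """Feature-map sizes after the two conv layers and the flattened width."""
    h1 = conv_out(size)        # conv1: 64 channels, 5x5, stride 2, pad 2
    h2 = conv_out(h1)          # conv2: 128 channels, 5x5, stride 2, pad 2
    return h1, h2, 128 * h2 * h2

print(critic_shapes(28))   # MNIST:   (14, 7, 6272)
print(critic_shapes(32))   # CIFAR10: (16, 8, 8192)
```

The flattened width is what the 1024-neuron fully connected layer consumes before the final scalar output.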
Additional experiments on scaling, rotation and translation We consider additional benchmark experiments on MNIST where, instead of removing rows, we apply alternative transformations such as random scaling, rotation and translation. For random scaling, we upscale the image randomly by a factor of 1 to 1.2; for random rotation, we randomly rotate the image by up to $\pm 20$ degrees; for random translation, we shift the image randomly by no more than 3 pixels horizontally and vertically. We evaluate the data-processing and additivity properties, where the ideal value for the former is no more than 1, and the ideal value for the latter is 2. As the results in Table 3 show, none of the considered approaches achieves good results in all cases.
| | | CPC | MINE | GM (VAE) | SMILE (τ = 5.0) | SMILE (τ = ∞) |
| --- | --- | --- | --- | --- | --- | --- |
| Data-Processing | Scaling | 1.00 | 1.03 | 1.12 | 1.19 | 1.04 |
| | Rotation | 1.00 | 1.30 | 1.13 | 1.03 | 1.27 |
| | Translation | 1.00 | 1.28 | 1.01 | 1.07 | 1.08 |
| Additivity | Scaling | 1.00 | 1.55 | 1.89 | 1.04 | 1.18 |
| | Rotation | 1.00 | 2.09 | 1.58 | 1.50 | 1.78 |
| | Translation | 1.00 | 1.41 | 1.28 | 1.32 | 1.33 |
Table 3: Self-consistency experiments on other image transforms.

# UNIVERSAL APPROXIMATION WITH CERTIFIED NETWORKS

Maximilian Baader, Matthew Mirman, Martin Vechev

Department of Computer Science

ETH Zurich, Switzerland

{mbaader,matthew.mirman,martin.vechev}@inf.ethz.ch

# ABSTRACT

Training neural networks to be certifiably robust is critical to ensure their safety against adversarial attacks. However, it is currently very difficult to train a neural network that is both accurate and certifiably robust. In this work we take a step towards addressing this challenge.
We prove that for every continuous function $f$, there exists a network $n$ such that: (i) $n$ approximates $f$ arbitrarily closely, and (ii) simple interval bound propagation of a region $B$ through $n$ yields a result that is arbitrarily close to the optimal output of $f$ on $B$. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.

# 1 INTRODUCTION

Much recent work has shown that neural networks can be fooled into misclassifying adversarial examples (Szegedy et al., 2014), inputs which are imperceptibly different from those that the neural network classifies correctly. Initial work on defending against adversarial examples revolved around training networks to be empirically robust, usually by including adversarial examples found with various attacks into the training dataset (Gu and Rigazio, 2015; Papernot et al., 2016; Zheng et al., 2016; Athalye et al., 2018; Eykholt et al., 2018; Moosavi-Dezfooli et al., 2017; Xiao et al., 2018). However, while empirical robustness can be practically useful, it does not provide safety guarantees. As a result, much recent research has focused on verifying that a network is certifiably robust, typically by employing methods based on mixed integer linear programming (Tjeng et al., 2019), SMT solvers (Katz et al., 2017), semidefinite programming (Raghunathan et al., 2018a), duality (Wong and Kolter, 2018; Dvijotham et al., 2018b), and linear relaxations (Gehr et al., 2018; Weng et al., 2018; Wang et al., 2018b; Zhang et al., 2018; Singh et al., 2018; Salman et al., 2019).

Because the certification rates were far from satisfactory, specific training methods were recently developed which produce networks that are certifiably robust: Mirman et al. (2018); Raghunathan et al. (2018b); Wang et al. (2018a); Wong and Kolter (2018); Wong et al. (2018); Gowal et al.
(2018) train the network with standard optimization applied to an over-approximation of the network behavior on a given input region (the region is created around the concrete input point). These techniques aim to discover specific weights which facilitate verification. There is a tradeoff between the degree of the over-approximation used and the speed of training and certification. Recently, Cohen et al. (2019b) proposed a statistical approach to certification which, unlike the non-probabilistic methods discussed above, creates a probabilistic classifier that comes with probabilistic guarantees.

So far, some of the best non-probabilistic results achieved on the popular MNIST (Lecun et al., 1998) and CIFAR10 (Krizhevsky, 2009) datasets have been obtained with the simple Interval relaxation (Gowal et al., 2018; Mirman et al., 2019), which scales well at both training and verification time. Despite this progress, there are still substantial gaps between known standard accuracy, experimental robustness, and certified robustness. For example, for CIFAR10, the best reported certified robustness is $32.04\%$ with an accuracy of $49.49\%$ when using a fairly modest $l_{\infty}$ region with radius 8/255 (Gowal et al., 2018). The state-of-the-art non-robust accuracy for this dataset is $>95\%$ with experimental robustness $>50\%$. Given the size of this gap, a key question then is: can certified training ever succeed or is there a fundamental limit?

![](images/c96f795cc8c129df985e6a067a64a5e21a5c6b926653611ad71690efb153cdf5.jpg)
(a) Non-certifiable network $n_1$.

![](images/a3d84fc06a0202f65abf8e08a86347ebe998b80881c2471714d9ef9105cdecb0.jpg)
(b) The function $f$.

![](images/ea3ce1e5f43ad672811b26c1dc3639e64e2b60a49355c92c2670df223653049c.jpg)
(c) Certifiable network $n_2$.
In this paper we take a step in answering this question by proving a result parallel to the Universal Approximation Theorem (Cybenko, 1989; Hornik et al., 1989). We prove that for any continuous function $f$ defined on a compact domain $\Gamma \subseteq \mathbb{R}^m$ and for any desired level of accuracy $\delta$, there exists a ReLU neural network $n$ which can certifiably approximate $f$ up to $\delta$ using interval bound propagation. As an interval is a fairly imprecise relaxation, our result directly applies to more precise convex relaxations (e.g., Zhang et al. (2018); Singh et al. (2019)).

![](images/28f44b2336204e3637093739fb2ad638a7655e5a9afaf034286d6dbed3081009.jpg)
Figure 1: Illustration of Theorem 1.1.

Figure 2: The ReLU networks $n_1$ (Figure 2a) and $n_2$ (Figure 2c) encode the same function $f$ (Figure 2b). Interval analysis fails to certify that $n_1$ does not exceed [0, 1] on [0, 1] while certification succeeds for $n_2$.

Theorem 1.1 (Universal Interval-Certified Approximation, Figure 1). Let $\Gamma \subset \mathbb{R}^m$ be a compact set and let $f\colon \Gamma \to \mathbb{R}$ be a continuous function. For all $\delta >0$, there exists a ReLU network $n$ such that for all boxes $[a,b]$ in $\Gamma$ defined by points $a,b\in \Gamma$ where $a_{k}\leq b_{k}$ for all $k$, the propagation of the box $[a,b]$ using interval analysis through the network $n$, denoted $n^\sharp ([a,b])$, approximates the set $[l,u] = [\min f([a,b]),\max f([a,b])]\subseteq \mathbb{R}$ up to $\delta$,

$$
[l + \delta, u - \delta] \subseteq n^{\sharp}([a, b]) \subseteq [l - \delta, u + \delta]. \tag{1}
$$

We recover the classical universal approximation theorem ($|f(x) - n(x)| \leq \delta$ for all $x \in \Gamma$) by considering boxes $[a, b]$ describing points ($x = a = b$). Note that the lower bound in Equation (1) is $[l + \delta, u - \delta]$ rather than $[l, u]$, as the network $n$ is only an approximation of $f$.
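To make this reduction explicit (a one-step check using only the theorem statement): for a point box $[a, a]$ we have $l = u = f(a)$ and $n^\sharp([a, a]) = [n(a), n(a)]$, so Equation (1) reads

$$
[f(a) + \delta, f(a) - \delta] \subseteq [n(a), n(a)] \subseteq [f(a) - \delta, f(a) + \delta].
$$

The left interval is empty for $\delta > 0$ and hence vacuous, while the right inclusion is exactly $|n(a) - f(a)| \leq \delta$.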
Because interval analysis propagates boxes, the theorem naturally handles $l_{\infty}$ norm-bound perturbations to the input. Other $l_{p}$ norms can be handled by covering the $l_{p}$ ball with boxes. The theorem can be extended easily to functions $f \colon \Gamma \to \mathbb{R}^k$ by applying the theorem component-wise.

Practical meaning of theorem The practical meaning of this theorem is as follows: if we train a neural network $n'$ on a given training data set (e.g., CIFAR10) and we are satisfied with the properties of $n'$ (e.g., high accuracy), then because $n'$ is a continuous function, the theorem tells us that there exists a network $n$ which is as accurate as $n'$ and as certifiable with interval analysis as $n'$ is with a complete verifier. This means that if we fail to find such an $n$, then either the network did not possess the required capacity or the optimizer was unsuccessful.

Focus on the existence of a network We note that we do not provide a method for training a certified ReLU network - even though our method is constructive, we aim to answer an existential question and thus we focus on proving that a given network exists. Interesting future work items would be to study the requirements on the size of this network and the inherent hardness of finding it with standard optimization methods.

Universal approximation is insufficient We now discuss why classical universal approximation is insufficient for establishing our result. While classical universal approximation theorems state that neural networks can approximate a large class of functions $f$, unlike our result, they do not state that the robustness of the approximation $n$ of $f$ can actually be certified with a scalable proof method (e.g., interval bound propagation). If one uses a non-scalable complete verifier instead, then the standard universal approximation theorem is sufficient.
+ +To demonstrate this point, consider the function $f: \mathbb{R} \to \mathbb{R}$ (Figure 2b) mapping all $x \leq 0$ to 1, all $x \geq 1$ to 0 and all $0 < x < 1$ to $1 - x$ and two ReLU networks $n_1$ (Figure 2a) and $n_2$ (Figure 2c) perfectly approximating $f$ , that is $n_1(x) = f(x) = n_2(x)$ for all $x$ . For $\delta = \frac{1}{4}$ , the interval certification that $n_1$ maps all $x \in [0,1]$ to $[0,1]$ fails because $[\frac{1}{4},\frac{3}{4}] \subseteq n_1^{\sharp}([0,1]) = [0,\frac{3}{2}] \not\subset [-\frac{1}{4},\frac{5}{4}]$ . However, interval certification succeeds for $n_2$ , because $n_2^{\sharp}([0,1]) = [0,1]$ . To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks. + +# 2 RELATED WORK + +After adversarial examples were discovered by Szegedy et al. (2014), many attacks and defenses were introduced (for a survey, see Akhtar and Mian (2018)). Initial work on verifying neural network robustness used exact methods (Katz et al., 2017; Tjeng et al., 2019) on small networks, while later research introduced methods based on over-approximation (Gehr et al., 2018; Raghunathan et al., 2018a; Singh et al., 2018; Salman et al., 2019) aiming to scale to larger networks. A fundamentally different approach is randomized smoothing (Li et al., 2019; Lécuyer et al., 2019; Cohen et al., 2019b), in which probabilistic classification and certification with high confidence is performed. + +As neural networks that are experimentally robust need not be certifiably robust, there has been significant recent research on training certifiably robust neural networks (Raghunathan et al., 2018b; Mirman et al., 2018; 2019; Wong and Kolter, 2018; Wong et al., 2018; Wang et al., 2018a; Gowal et al., 2018; Dvijotham et al., 2018a; Xiao et al., 2019; Cohen et al., 2019b). 
As these methods appear to have reached a performance wall, several works have started investigating the fundamental barriers in the datasets and methods that preclude the learning of a robust network (let alone a certifiably robust one) (Khoury and Hadfield-Menell, 2018; Schmidt et al., 2018; Tsipras et al., 2019). In our work, we focus on the question of whether neural networks are capable of approximating functions whose robustness can be established with the efficient interval relaxation.

Feasibility Results with Neural Networks Early versions of the Universal Approximation Theorem were stated by Cybenko (1989) and Hornik et al. (1989). Cybenko (1989) showed that networks using sigmoidal activations could approximate continuous functions in the unit hypercube, while Hornik et al. (1989) showed that even networks with only one hidden layer are capable of approximating Borel measurable functions.

More recent work has investigated the capabilities of ReLU networks. Here, Arora et al. (2018), based on Tarela and Martínez (1999), proved that every continuous piecewise linear function in $\mathbb{R}^m$ can be represented by a ReLU network. Later, He et al. (2018) reduced the number of neurons needed using ideas from finite element methods. Relevant to our work, Arora et al. (2018) introduced a ReLU network representation of the min function. Further, we use a construction method that is similar to the construction for nodal basis functions given in He et al. (2018).

Universal approximation by Lipschitz-constrained networks has been considered by Anil et al. (2019) and later by Cohen et al. (2019a). A bound on the Lipschitz constant of a network immediately yields a certified region depending on the classification margin. Anil et al. (2019) proved that the set of Lipschitz networks with the GroupSort activation is dense in the space of Lipschitz continuous functions with Lipschitz constant 1, while Cohen et al.
(2019a) provide an explicit construction to obtain the network. We note that both of these works focus on Lipschitz continuous functions, a more restricted class than the continuous functions we consider in our work.

# 3 BACKGROUND

In this section we provide the concepts necessary to describe our main result.

Adversarial Examples and Robustness Verification Let $n: \mathbb{R}^m \to \mathbb{R}^k$ be a neural network, which classifies an input $x$ to a label $t$ if $n(x)_t > n(x)_j$ for all $j \neq t$. For a correctly classified input $x$, an adversarial example is an input $y$ that is imperceptibly different from $x$ to a human, but is classified to a different label by $n$.

Frequently, two images are assumed to be imperceptibly different if their $l_{p}$ distance is at most $\epsilon$. The $l_{p}$ ball around an image is said to be the adversarial ball, and a network is said to be $\epsilon$-robust around $x$ if every point in the adversarial ball around $x$ is classified the same. In this paper, we limit our discussion to $l_{\infty}$ adversarial balls, which can be used to cover all $l_{p}$ balls.

![](images/e115aad2884c1a3d32ee2e0d9e713f5924805697785110cf7540ff40603cab24.jpg)
(a) $f$

![](images/11a794912a4b20937fea80d60fb1f13e293e9e727e36a6331615d9d12c11541e.jpg)
(b) Slicing of $f$: $f_0, \ldots, f_4$

![](images/1c46a65deec186ec1677ae7a25545cee9f80d981a581a4ce7aadf62333993102.jpg)
(c) Networks $n_k$ approximating $f_{k}$

Figure 3: Approximating $f$ (Figure 3a) using a ReLU network $n = \xi_0 + \sum_k n_k$. The ReLU networks $n_k$ (Figure 3c) approximate the $N$-slicing of $f$ (Figure 3b), as a sum of local bumps (Figure 6).

The goal of robustness verification is to show that for a neural network $n$, input point $x$ and label $t$, every possible input in an $l_{\infty}$ ball of size $\epsilon$ around $x$ (written $\mathbb{B}_{\epsilon}^{\infty}(x)$) is also classified to $t$.
Verifying neural networks with Interval Analysis The verification technique we investigate in this work is interval analysis. We denote by $\mathcal{B}$ the set of boxes $B = [a,b] \subset \mathbb{R}^m$ for all $m$, where $a_{i} \leq b_{i}$ for all $i$. Furthermore, for $\Gamma \subseteq \mathbb{R}^{m}$ we define $\mathcal{B}(\Gamma) \coloneqq \mathcal{B} \cap \Gamma$, describing all the boxes in $\Gamma$. The standard interval-transformations for the basic operations we are considering, namely $+$, $-$, $\cdot$ and the ReLU function $R$ (Gehr et al. (2018), Gowal et al. (2018)), are

$$
[a, b] +^{\sharp} [c, d] = [a + c, b + d]
$$

$$
-^{\sharp} [a, b] = [-b, -a]
$$

$$
R^{\sharp}([a, b]) = [R(a), R(b)]
$$

$$
\lambda \cdot^{\sharp} [a, b] = [\lambda a, \lambda b],
$$

where $[a, b], [c, d] \in \mathcal{B}(\mathbb{R})$ and $\lambda \in \mathbb{R}_{\geq 0}$. Furthermore, we use $\sharp$ to distinguish the function $f$ from its interval-transformation $f^{\sharp}$. To illustrate the difference between $f$ and $f^{\sharp}$, consider $f(x) \coloneqq x - x$ evaluated on $x = [0,1]$. We have $f([0,1]) = 0$, but $f^{\sharp}([0,1]) = [0,1] -^{\sharp} [0,1] = [0,1] +^{\sharp} [-1,0] = [-1,1]$, illustrating the loss in precision that interval analysis suffers from.

Interval analysis provides a sound over-approximation in the sense that for all functions $f$, the values that $f$ can attain on $[a, b]$, namely $f([a, b]) \coloneqq \{f(x) \mid x \in [a, b]\}$, are a subset of $f^{\sharp}([a, b])$. If $f$ is a composition of functions, $f = f_{1} \circ \dots \circ f_{k}$, then $f_{1}^{\sharp} \circ \dots \circ f_{k}^{\sharp}$ is a sound interval-transformer for $f$.

Furthermore, all combinations $f$ of $+$, $-$, $\cdot$ and $R$ are monotone, that is, for $[a, b], [c, d] \in \mathcal{B}(\mathbb{R}^m)$ with $[a, b] \subseteq [c, d]$ we have $f^\sharp([a, b]) \subseteq f^\sharp([c, d])$ (Appendix A).
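These transformer rules are straightforward to implement directly. The sketch below (pure Python; all helper names are ours, not from the paper) propagates intervals through the four operations, reproduces the loss of precision for $f(x) = x - x$, and illustrates the phenomenon behind Figure 2 with a hypothetical pair of networks (not the paper's $n_1$, $n_2$) that compute the same function $R(x)$ yet receive different interval bounds:

```python
# An interval [a, b] is represented as a pair (a, b) with a <= b.

def iadd(x, y):    # [a,b] +# [c,d] = [a+c, b+d]
    return (x[0] + y[0], x[1] + y[1])

def ineg(x):       # -# [a,b] = [-b, -a]
    return (-x[1], -x[0])

def iscale(lam, x):  # lam ·# [a,b] = [lam·a, lam·b] for lam >= 0
    return (lam * x[0], lam * x[1])

def irelu(x):      # R#([a,b]) = [R(a), R(b)]
    return (max(0.0, x[0]), max(0.0, x[1]))

# Loss of precision: f(x) = x - x is identically 0, but interval
# analysis treats the two occurrences of x independently.
print(iadd((0.0, 1.0), ineg((0.0, 1.0))))   # (-1.0, 1.0), not (0.0, 0.0)

# Two networks computing the same function R(x): n_a(x) = R(2x) - R(x)
# and n_b(x) = R(x) agree pointwise, yet only n_b is analyzed exactly.
def n_a_sharp(box):
    return iadd(irelu(iscale(2.0, box)), ineg(irelu(box)))

def n_b_sharp(box):
    return irelu(box)

print(n_a_sharp((-1.0, 1.0)))   # (-1.0, 2.0): imprecise
print(n_b_sharp((-1.0, 1.0)))   # (0.0, 1.0): exact
```

The correlated subtraction in `n_a_sharp` is exactly the kind of structure that makes one realization of a function interval-certifiable and another not.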
For boxes $[x, x]$ representing points, $f^\sharp$ coincides with $f$: $f^\sharp([x, x]) = [f(x), f(x)]$. This will be needed later.

# 4 PROVING UNIVERSAL INTERVAL-PROVABLE APPROXIMATION

In this section, we provide an explanation of the proof of our main result, Theorem 4.6, and illustrate the main points of the proof.

The first step in the construction is to deconstruct the function $f$ into slices $\{f_k \colon \Gamma \to [0, \frac{\delta}{2}]\}_{0 \leq k < N}$ such that $f(x) = \xi_0 + \sum_{k=0}^{N-1} f_k(x)$ for all $x$, where $\xi_0$ is the minimum of $f(\Gamma)$. We approximate each slice $f_k$ by a ReLU network $\frac{\delta}{2} \cdot n_k$. The network $n$ approximating $f$ up to $\delta$ will be $n(x) := \xi_0 + \frac{\delta}{2} \sum_k n_k(x)$. The construction relies on two key insights: (i) the output of $\frac{\delta}{2} \cdot n_k^\sharp$ can be confined to the interval $[0, \frac{\delta}{2}]$, so the loss of analysis precision is at most the height of the slice, and (ii) we can construct the networks $n_k$ from local bump functions such that only four slices can contribute to the loss of analysis precision, two for the lower interval bound and two for the upper one.

![](images/e0710fa49c7a44ce74241c2b16692bcca9c3ae169b7fe82e945d05903d24bc31.jpg)
Figure 4: Neighbors $\mathcal{N}(x)$ (blue dots) and $\mathcal{N}(U)$ (red squares).

![](images/be29254b2c4b2e96c7f2a16a52c5d928fb0e3bdd30a2a824cdf3059565089134.jpg)
Figure 5: $R_{[*,b]}(x)$

The slicing $\{f_k\}_{0 \leq k < 5}$ of the function $f \colon [-2, 2] \to \mathbb{R}$ (Figure 3a), mapping $x$ to $f(x) = -x^3 + 3x$, is depicted in Figure 3b. The networks $n_k$ are depicted in Figure 3c. In this example, evaluating the interval-transformer of $n$, namely $n^\sharp$, on the box $B = [-1, 1]$ yields $n^\sharp([-1, 1]) = [-2, 6/5]$, which lies within the $\delta = \frac{8}{5}$ bound of $f([-1, 1]) = [-2, 2]$.

Definition 4.1 ($N$-slicing, Figure 3b).
Let $\Gamma \subset \mathbb{R}^m$ be a closed $m$-dimensional box and let $f: \Gamma \to \mathbb{R}$ be continuous. The $N$-slicing of $f$ is a set of functions $\{f_k\}_{0 \leq k < N}$ defined by

$$
f_{k} \colon \Gamma \to \mathbb{R}, \quad x \mapsto \begin{cases} 0 & \text{if } f(x) \leq \xi_{k}, \\ f(x) - \xi_{k} & \text{if } \xi_{k} < f(x) < \xi_{k+1}, \\ \xi_{k+1} - \xi_{k} & \text{if } \xi_{k+1} \leq f(x), \end{cases} \qquad \forall k \in \{0, \ldots, N-1\},
$$

where $\xi_{0} \coloneqq \min f(\Gamma)$, $\xi_N \coloneqq \max f(\Gamma)$ and $\xi_{k} \coloneqq \xi_{0} + \frac{k}{N}(\xi_{N} - \xi_{0})$ for $k \in \{1, \ldots, N-1\}$.

To construct a ReLU network satisfying the desired approximation property (Equation (1)) when evaluated on boxes in $\mathcal{B}(\Gamma)$, we need the ReLU network $\operatorname{nmin}$ capturing the behavior of $\min$ as a building block (similar to He et al. (2018)). It is given by

$$
\operatorname{nmin}(x, y) \coloneqq \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R\left( \begin{pmatrix} 1 & 1 \\ -1 & -1 \\ 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \right).
$$

With $\operatorname{nmin}$, we can recursively construct a ReLU network $\operatorname{nmin}_N$ mapping $N$ arguments to the smallest one (Definition A.8). Even though the interval-transformation loses precision, we can establish bounds on the precision loss of $\operatorname{nmin}_N^{\sharp}$ sufficient for our use case (Appendix A).

Now, we use the clipping function $R_{[*,1]}(x) \coloneqq 1 - R(1 - x)$, which clips every value exceeding 1 back to 1 (Figure 5), to construct the local bumps $\phi_c$ w.r.t. a grid $G$. $G$ specifies the set of all possible local bumps we can use to construct the networks $n_k$; refining $G$ increases the approximation precision.

Definition 4.2 (local bump, Figure 6).
Let $M \in \mathbb{N}$, let $G \coloneqq \left\{ \left( \frac{i_1}{M}, \ldots, \frac{i_m}{M} \right) \mid i \in \mathbb{Z}^m \right\}$ be a grid, let $\ell = 2^{\lceil \log_2 2m \rceil + 1}$ and let $c = \left\{\frac{i_1^l}{M}, \frac{i_1^u}{M}\right\} \times \dots \times \left\{\frac{i_m^l}{M}, \frac{i_m^u}{M}\right\} \subseteq G$ be a set of grid points describing the corner points of a hyperrectangle in $G$. We define a ReLU neural network $\phi_c: \mathbb{R}^m \to [0,1] \subset \mathbb{R}$ w.r.t. $G$ by

$$
\phi_{c}(x) \coloneqq R\left(\operatorname{nmin}_{2m} \bigcup_{1\leq k\leq m}\left\{ \begin{array}{l} R_{[*,1]}(M \cdot \ell \cdot (x_{k} - \tfrac{i_{k}^{l}}{M}) + 1), \\ R_{[*,1]}(M \cdot \ell \cdot (\tfrac{i_{k}^{u}}{M} - x_{k}) + 1) \end{array} \right\}\right).
$$

We will describe later how $M$ and $c$ get picked. A graphical illustration of a local bump in two dimensions with $c = \left\{\frac{i_1^l}{M}, \frac{i_1^u}{M}\right\} \times \left\{\frac{i_2^l}{M}, \frac{i_2^u}{M}\right\} = \{c^{ll}, c^{lu}, c^{ul}, c^{uu}\}$ is shown in Figure 6. The local bump $\phi_c(x)$ evaluates to 1 for all $x$ that lie within the convex hull of $c$, namely $\mathrm{conv}(c)$, outside of which $\phi_c(x)$ quickly decreases linearly to 0. $\phi_c$ has $1 + 2(2m - 1) + 2m$ ReLUs and $1 + \lceil \log_2(2m + 1) \rceil + 1$ layers.

![](images/1923012e53cf133ea0b8457ef1cabdc463dcdba1f102af1e3d6cc9814b36f406.jpg)
Figure 6: Local bump $\phi_c$, where $c$ contains the points $c^{ll}, c^{lu}, c^{ul}, c^{uu}$. The points in $\mathcal{N}(\mathrm{conv}(c))$ are depicted by the red squares.

By construction, $\phi_c(x)$ decreases to 0 before reaching the next neighboring grid points $\mathcal{N}(\mathrm{conv}(c))$, where $\mathcal{N}(x) \coloneqq \{g \in G \mid \|x - g\|_{\infty} \leq \frac{1}{M}\} \setminus \{x\}$ denotes the neighboring grid points of $x$, and similarly $\mathcal{N}(U) \coloneqq \left( \bigcup_{x \in U} \mathcal{N}(x) \right) \setminus U$ (Figure 4).
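The four-ReLU $\operatorname{nmin}$ construction and the clipping function can be checked numerically. The sketch below (pure Python; the function names are ours) implements $\operatorname{nmin}$ exactly as in the matrix form above, plus a one-dimensional local bump built from it. For $m = 1$ we have $\ell = 2^{\lceil \log_2 2 \rceil + 1} = 4$; the grid finesse $M = 4$ and corner indices are illustrative choices:

```python
def relu(z):
    return max(0.0, z)

def nmin(x, y):
    """ReLU network for min(x, y): 0.5*(R(x+y) - R(-x-y) - R(x-y) - R(y-x))."""
    pre = [x + y, -x - y, x - y, y - x]      # the 4x2 weight matrix applied to (x, y)
    post = [relu(z) for z in pre]
    return 0.5 * (post[0] - post[1] - post[2] - post[3])

def clip_upper_1(x):
    """R_{[*,1]}(x) = 1 - R(1 - x): clips values exceeding 1 back to 1."""
    return 1.0 - relu(1.0 - x)

def bump_1d(x, i_l, i_u, M, ell=4):
    """phi_c in one dimension: ReLU of the nmin of two clipped hinge functions."""
    lo = clip_upper_1(M * ell * (x - i_l / M) + 1.0)
    hi = clip_upper_1(M * ell * (i_u / M - x) + 1.0)
    return relu(nmin(lo, hi))

print(nmin(3.0, -1.0))            # -1.0
print(bump_1d(0.25, 0, 2, M=4))   # 1.0 inside conv(c) = [0, 0.5]
print(bump_1d(0.75, 0, 2, M=4))   # 0.0 beyond the neighboring grid point
```

Note how the bump is exactly 1 on $\mathrm{conv}(c)$ and exactly 0 outside $\mathrm{conv}(\mathcal{N}(c))$, which is what Lemma 4.3 exploits.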
The set $\mathcal{N}(\mathrm{conv}(c))$ forms a hyperrectangle in $G$ and is shown in Figure 6 using red squares. Clearly $\mathrm{conv}(c) \subseteq \mathrm{conv}(\mathcal{N}(c))$.

Next, we give bounds on the loss of precision for the interval-transformation $\phi_c^{\sharp}$. We can show that interval analysis (i) never produces intervals exceeding $[0, 1]$ and (ii) is precise if $B$ does not intersect $\operatorname{conv}(\mathcal{N}(c)) \setminus \operatorname{conv}(c)$.

Lemma 4.3. For all $B \in \mathcal{B}(\mathbb{R}^m)$, it holds that $\phi_c^\sharp(B) \subseteq [0,1] \in \mathcal{B}$ and

$$
\phi_{c}^{\sharp}(B) = \begin{cases} [1, 1] & \text{if } B \subseteq \operatorname{conv}(c) \\ [0, 0] & \text{if } B \subseteq \Gamma \setminus \operatorname{conv}(\mathcal{N}(c)). \end{cases}
$$

The formal proof is given in Appendix A. The next lemma shows how a ReLU network $n_k$ can approximate the slice $f_k$ while simultaneously confining the loss of analysis precision.

Lemma 4.4. Let $\Gamma \subset \mathbb{R}^m$ be a closed box and let $f\colon \Gamma \to \mathbb{R}$ be continuous. For all $\delta > 0$ there exists a set of ReLU networks $\{n_k\}_{0\leq k < N}$ of size $N \in \mathbb{N}$ approximating the $N$-slicing of $f$, $\{f_k\}_{0\leq k < N}$ ($\xi_{k}$ as in Definition 4.1), such that for all boxes $B \in \mathcal{B}(\Gamma)$

$$
n_{k}^{\sharp}(B) = \begin{cases} [0, 0] & \text{if } f(B) \leq \xi_{k} - \frac{\delta}{2} \\ [1, 1] & \text{if } f(B) \geq \xi_{k+1} + \frac{\delta}{2}, \end{cases} \tag{2}
$$

and $n_k^\sharp(B) \subseteq [0,1]$.

It is important to note that in Equation (2) we mean $f$ and not $f^{\sharp}$. The proof of Lemma 4.4 is given in Appendix A. In the following, we discuss a proof sketch.

Because $\Gamma$ is compact and $f$ is continuous, $f$ is uniformly continuous by the Heine-Cantor Theorem.
So we can pick an $M \in \mathbb{N}$ such that for all $x, y \in \Gamma$ satisfying $\|y - x\|_{\infty} \leq \frac{1}{M}$, we have $|f(y) - f(x)| \leq \frac{\delta}{2}$. We then choose the grid $G = (\frac{\mathbb{Z}}{M})^{m} \subseteq \mathbb{R}^{m}$.

Next, we construct for every slice $k$ a set $\Delta_k$ of hyperrectangles on the grid $G$: if a box $B \in \mathcal{B}(\Gamma)$ fulfills $f(B) \geq \xi_{k+1} + \frac{\delta}{2}$, then we add a minimal enclosing hyperrectangle $c \subset G$ with $B \subseteq \operatorname{conv}(c)$ to $\Delta_k$, where $\operatorname{conv}(c)$ denotes the convex hull of $c$. Using the uniform continuity of $f$ and the fact that the grid $G$ is fine enough, this implies $f(\operatorname{conv}(c)) \geq \xi_{k+1}$. Since there is only a finite number of possible hyperrectangles in $G$, the set $\Delta_k$ is clearly finite. The network fulfilling Equation (2) is

$$
n_{k}(x) \coloneqq R_{[*,1]}\left(\sum_{c \in \Delta_{k}} \phi_{c}(x)\right),
$$

where $\phi_c$ is as in Definition 4.2. The $n_k$ are depicted in Figure 3c.

Now, we see that Equation (2) holds by construction: for all boxes $B \in \mathcal{B}(\Gamma)$ such that $f \geq \xi_{k+1} + \frac{\delta}{2}$ on $B$, there exists a $c' \in \Delta_k$ such that $B \subseteq \mathrm{conv}(c')$, which implies, using Lemma 4.3, that $\phi_{c'}^{\sharp}(B) = [1,1]$, hence

$$
\begin{array}{l l} n_{k}^{\sharp}(B) = R_{[*,1]}^{\sharp}\left(\phi_{c'}^{\sharp}(B) + \sum_{c \in \Delta_{k} \setminus \{c'\}} \phi_{c}^{\sharp}(B)\right) & \forall c \neq c'\colon \phi_{c}^{\sharp}(B) \subseteq [0, 1] \ (\text{Lemma 4.3}) \\ = R_{[*,1]}^{\sharp}([1, 1] + [p_{1}, p_{2}]) & [p_{1}, p_{2}] \in \mathcal{B}(\mathbb{R}_{\geq 0}) \\ = R_{[*,1]}^{\sharp}([1 + p_{1}, 1 + p_{2}]) \\ = [1, 1].
\end{array}
$$

Similarly, if $f(B) \leq \xi_k - \frac{\delta}{2}$ holds, then it holds for all $c \in \Delta_k$ that $B$ does not intersect $\operatorname{conv}(\mathcal{N}(c))$. Indeed, if a $c \in \Delta_k$ violated this, then by construction $f(\mathrm{conv}(c)) \geq \xi_{k+1}$, and since $B$ would come within $\frac{1}{M}$ of $\mathrm{conv}(c)$, uniform continuity would give a point of $B$ with $f \geq \xi_{k+1} - \frac{\delta}{2} = \xi_k$, contradicting $f(B) \leq \xi_k - \frac{\delta}{2}$. Thus $\phi_c^\sharp(B) = [0,0]$ for all $c \in \Delta_k$, and hence $n_k^{\sharp}(B) = [0,0]$.

Theorem 4.5. Let $\Gamma \subset \mathbb{R}^m$ be a closed box and let $f\colon \Gamma \to \mathbb{R}$ be continuous. Then for all $\delta > 0$ there exists a ReLU network $n$ such that for all $B \in \mathcal{B}(\Gamma)$

$$
[l + \delta, u - \delta] \subseteq n^{\sharp}(B) \subseteq [l - \delta, u + \delta],
$$

where $l \coloneqq \min f(B)$ and $u \coloneqq \max f(B)$.

Proof. Pick $N$ such that the height of each slice is exactly $\frac{\delta}{2}$; if this is impossible, choose a slightly smaller $\delta$. Let $\{n_k\}_{0 \leq k < N}$ be a series of networks as in Lemma 4.4. Recall that $\xi_0 = \min f(\Gamma)$. We define the ReLU network

$$
n(x) \coloneqq \xi_{0} + \frac{\delta}{2} \sum_{k=0}^{N-1} n_{k}(x). \tag{3}
$$

Let $B \in \mathcal{B}(\Gamma)$. Then we have for all $k$

$$
f(B) \geq \xi_{k+2} \Leftrightarrow f(B) \geq \xi_{k+1} + \frac{\delta}{2} \quad \stackrel{\text{Lemma 4.4}}{\Rightarrow} \quad n_{k}^{\sharp}(B) = [1, 1] \tag{4}
$$

$$
f(B) \leq \xi_{k-1} \Leftrightarrow f(B) \leq \xi_{k} - \frac{\delta}{2} \quad \stackrel{\text{Lemma 4.4}}{\Rightarrow} \quad n_{k}^{\sharp}(B) = [0, 0]. \tag{5}
$$

Let $p, q \in \{0, \dots, N-1\}$ such that

$$
\xi_{p} \leq l = \min f(B) \leq \xi_{p+1} \tag{6}
$$

$$
\xi_{q} \leq u = \max f(B) \leq \xi_{q+1}, \tag{7}
$$

as depicted in Figure 7.
Thus by Equation (4), for all $k \in \{0, \dots, p-2\}$ it holds that $n_{k}^{\sharp}(B) = [1,1]$ and similarly, by Equation (5), for all $k \in \{q+2, \dots, N-1\}$ it holds that $n_{k}^{\sharp}(B) = [0,0]$. Plugging this into Equation (3) after splitting the sum into three parts leaves us with

$$
\begin{aligned}
n^{\sharp}(B) &= \xi_{0} + \frac{\delta}{2} \sum_{k=0}^{p-2} n_{k}^{\sharp}(B) + \frac{\delta}{2} \sum_{k=p-1}^{q+1} n_{k}^{\sharp}(B) + \frac{\delta}{2} \sum_{k=q+2}^{N-1} n_{k}^{\sharp}(B) \\
&= \xi_{0} + (p-1)\left[\frac{\delta}{2}, \frac{\delta}{2}\right] + \frac{\delta}{2} \sum_{k=p-1}^{q+1} n_{k}^{\sharp}(B) + [0,0].
\end{aligned}
$$

Applying the standard rules of interval arithmetic leads to

$$
n^{\sharp}(B) = [\xi_{p-1}, \xi_{p-1}] + \frac{\delta}{2} \sum_{k=p-1}^{q+1} n_{k}^{\sharp}(B),
$$

where we used in the last step that $\xi_0 + k\frac{\delta}{2} = \xi_k$. For all terms in the sum except those corresponding to the three highest and three lowest $k$ we get

$$
n_{k}^{\sharp}(B) = [0,1] \quad \forall k \in \{p+2, \dots, q-2\}. \tag{8}
$$

![](images/12b1ecc83689e8b7c3b665b67a328869ebb12f6a5195eadf4053ccd2a7882dae.jpg)
Figure 7: Illustration of the proof for Theorem 4.5.

Indeed, from Equation (6) we know that there is $x \in B$ such that $f(x) \leq \xi_{p+1} = \xi_{p+2} - \frac{\delta}{2}$, thus by Lemma 4.4 $n_k^\sharp([x,x]) = [0,0]$ for all $p+2 \leq k \leq q-2$. Similarly, from Equation (7) we know that there is $x' \in B$ such that $f(x') \geq \xi_q = \xi_{q-1} + \frac{\delta}{2}$, thus by Lemma 4.4 $n_k^\sharp([x',x']) = [1,1]$ for all $p+2 \leq k \leq q-2$. So $n_k^\sharp(B)$ contains $[0,1]$, and by Lemma 4.4 it is also contained in $[0,1]$.
This leads to

$$
\begin{aligned}
n^{\sharp}(B) &= [\xi_{p-1}, \xi_{p-1}] + \frac{\delta}{2} \sum_{k=p-1}^{p+1} n_{k}^{\sharp}(B) + \frac{\delta}{2}\big((q-2) - (p+2) + 1\big)[0,1] + \frac{\delta}{2} \sum_{k=q-1}^{q+1} n_{k}^{\sharp}(B) \\
&= [\xi_{p-1}, \xi_{p-1}] + \frac{\delta}{2} \sum_{k=p-1}^{p+1} n_{k}^{\sharp}(B) + [0, \xi_{q-1} - \xi_{p+2}] + \frac{\delta}{2} \sum_{k=q-1}^{q+1} n_{k}^{\sharp}(B).
\end{aligned}
$$

We know further that if $p+3 \leq q$, then there is an $x \in B$ such that $f(x) \geq \xi_{p+3} = \xi_{p+2} + \frac{\delta}{2}$, hence, as before, $n_{p+1}^{\sharp}([x,x]) = [1,1]$, and similarly $n_p^\sharp([x,x]) = [1,1]$ and $n_{p-1}^{\sharp}([x,x]) = [1,1]$. So we know that $\frac{\delta}{2}\sum_{k=p-1}^{p+1} n_k^\sharp(B)$ contains $[3\frac{\delta}{2}, 3\frac{\delta}{2}]$ and is contained in $[0, 3\frac{\delta}{2}]$. Similarly, there exists an $x' \in B$ such that $n_{q-1}^{\sharp}([x',x']) = [0,0]$, $n_q^\sharp([x',x']) = [0,0]$ and $n_{q+1}^{\sharp}([x',x']) = [0,0]$. This leaves us with

$$
\left[3\frac{\delta}{2}, 3\frac{\delta}{2}\right] \subseteq \frac{\delta}{2} \sum_{k=p-1}^{p+1} n_{k}^{\sharp}(B) \subseteq \left[0, 3\frac{\delta}{2}\right]
$$

$$
[0,0] \subseteq \frac{\delta}{2} \sum_{k=q-1}^{q+1} n_{k}^{\sharp}(B) \subseteq \left[0, 3\frac{\delta}{2}\right].
$$

If $p+3 > q$, the lower bound we want to prove becomes vacuous and only the upper one needs to be proven. Thus we have

$$
[l + \delta, u - \delta] \subseteq [\xi_{p+2}, \xi_{q-1}] \subseteq n^{\sharp}(B) \subseteq [\xi_{p-1}, \xi_{q+2}] \subseteq [l - \delta, u + \delta],
$$

where $l \coloneqq \min f(B)$ and $u \coloneqq \max f(B)$.
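The interval transformers used throughout the proof ($+^{\sharp}$, scalar multiplication, and $R^{\sharp}$) are simple to implement. The following Python sketch is our own illustration, not code from the paper: it propagates a box through a small ReLU network with arbitrary, hypothetical weights (`W1`, `W2`) and checks the soundness property that every concrete output $n(x)$ for $x \in B$ lies inside the abstract output $n^{\sharp}(B)$.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] +# [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, lam):
        # lam ·# [a, b]; for lam < 0 the endpoints swap
        a, b = lam * self.lo, lam * self.hi
        return Interval(min(a, b), max(a, b))

    def relu(self):
        # R#([a, b]) = [R(a), R(b)], since R is monotone
        return Interval(max(0.0, self.lo), max(0.0, self.hi))

def affine_sharp(weights, bias, box):
    # Abstract transformer of x -> w·x + b on a list of Intervals.
    out = Interval(bias, bias)
    for w, iv in zip(weights, box):
        out = out + iv.scale(w)
    return out

# A toy 2-input, 2-hidden-unit, 1-output ReLU network (weights arbitrary).
W1 = [[1.0, -1.0], [0.5, 2.0]]; b1 = [0.0, -1.0]
W2 = [1.0, -3.0]; b2 = 0.5

def net(x):
    # Concrete evaluation n(x).
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

def net_sharp(box):
    # Abstract evaluation n#(B) via interval propagation.
    h = [affine_sharp(row, b, box).relu() for row, b in zip(W1, b1)]
    out = Interval(b2, b2)
    for w, iv in zip(W2, h):
        out = out + iv.scale(w)
    return out

box = [Interval(-1.0, 1.0), Interval(0.0, 2.0)]
out = net_sharp(box)
# Soundness check: sampled concrete outputs lie inside the abstract output.
for x in itertools.product([-1.0, 0.0, 1.0], [0.0, 1.0, 2.0]):
    y = net(x)
    assert out.lo - 1e-9 <= y <= out.hi + 1e-9
```

Note that $n^{\sharp}(B)$ may strictly over-approximate the true output range; the theorems above show that, by choosing the architecture appropriately, this over-approximation can be made arbitrarily tight.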
![](images/4be293e0d6c9f82b37bbf14497d218e97219e560823bc4c6651e294768361310.jpg)

Theorem 4.6 (Universal Interval-Provable Approximation). Let $\Gamma \subset \mathbb{R}^m$ be compact and $f\colon \Gamma \to \mathbb{R}^d$ be continuous. For all $\delta \in \mathbb{R}_{>0}^{d}$ there exists a ReLU network $n$ such that for all $B\in \mathcal{B}(\Gamma)$

$$
[l + \delta, u - \delta] \subseteq n^{\sharp}(B) \subseteq [l - \delta, u + \delta],
$$

where $l, u \in \mathbb{R}^d$ such that $l_k \coloneqq \min f(B)_k$ and $u_k \coloneqq \max f(B)_k$ for all $k$.

Proof. This is a direct consequence of applying Theorem 4.5 together with the Tietze extension theorem to produce a neural network for each of the $d$ dimensions of the codomain of $f$.

Note that Theorem 1.1 is the special case $d = 1$ of Theorem 4.6, stated separately to simplify presentation.

# 5 CONCLUSION

We proved that for every real-valued continuous function $f$ on a compact set, there exists a ReLU network $n$ approximating $f$ arbitrarily well under the interval abstraction. This means that for arbitrary input sets, analysis using the interval relaxation yields an over-approximation arbitrarily close to the smallest interval containing all possible outputs. Our theorem affirmatively answers the open question of whether the Universal Approximation Theorem generalizes to interval analysis.

Our results address the question of whether the interval abstraction is expressive enough to analyze networks approximating interesting functions $f$. This is of practical importance because interval analysis is the most scalable non-trivial analysis.
# A PROOFS FOR THE UNIVERSAL INTERVAL-CERTIFIED APPROXIMATION

Lemma A.1 (Monotonicity). The operations $+, -, \cdot$ are monotone, that is, for all $[a_1, b_1], [a_2, b_2], [c_1, d_1], [c_2, d_2] \in \mathcal{B}(\mathbb{R})$ such that $[a_1, b_1] \subseteq [a_2, b_2]$ and $[c_1, d_1] \subseteq [c_2, d_2]$ it holds that

$$
[a_{1}, b_{1}] +^{\sharp} [c_{1}, d_{1}] \subseteq [a_{2}, b_{2}] +^{\sharp} [c_{2}, d_{2}]
$$

$$
[a_{1}, b_{1}] -^{\sharp} [c_{1}, d_{1}] \subseteq [a_{2}, b_{2}] -^{\sharp} [c_{2}, d_{2}]
$$

$$
[a_{1}, b_{1}] \cdot^{\sharp} [c_{1}, d_{1}] \subseteq [a_{2}, b_{2}] \cdot^{\sharp} [c_{2}, d_{2}].
$$

Further, scalar multiplication and $R$ are monotone, that is, for all $[a,b],[c,d]\in \mathcal{B}(\mathbb{R})$ and for all $\lambda \in \mathbb{R}_{\geq 0}$ such that $[a,b]\subseteq [c,d]$ it holds that

$$
\lambda \cdot^{\sharp} [a,b] \subseteq \lambda \cdot^{\sharp} [c,d]
$$

$$
R^{\sharp}([a,b]) \subseteq R^{\sharp}([c,d]).
$$

Proof.

$$
[a_{1}, b_{1}] +^{\sharp} [c_{1}, d_{1}] = [a_{1} + c_{1}, b_{1} + d_{1}] \subseteq [a_{2} + c_{2}, b_{2} + d_{2}] = [a_{2}, b_{2}] +^{\sharp} [c_{2}, d_{2}]
$$

$$
[a_{1}, b_{1}] -^{\sharp} [c_{1}, d_{1}] = [a_{1} - d_{1}, b_{1} - c_{1}] \subseteq [a_{2} - d_{2}, b_{2} - c_{2}] = [a_{2}, b_{2}] -^{\sharp} [c_{2}, d_{2}]
$$

Monotonicity of $\cdot^{\sharp}$ follows analogously, since interval multiplication takes the minimum and maximum over products of endpoints. Moreover,

$$
\lambda \cdot^{\sharp} [a,b] = [\lambda a, \lambda b] \subseteq [\lambda c, \lambda d] = \lambda \cdot^{\sharp} [c,d]
$$

$$
R^{\sharp}([a,b]) = [R(a), R(b)] \subseteq [R(c), R(d)] = R^{\sharp}([c,d]).
$$

Definition A.2 (N-slicing). Let $\Gamma \subset \mathbb{R}^m$ be a compact $m$-dimensional box and let $f\colon \Gamma \to \mathbb{R}$ be continuous. The $N$-slicing of $f$ is a set of functions $\{f_k\}_{0\leq k\leq N-1}$ defined by

$$
f_{k}\colon \Gamma \to \mathbb{R}, \quad x \mapsto \begin{cases} 0 & \text{if } f(x) \leq \xi_{k}, \\ f(x) - \xi_{k} & \text{if } \xi_{k} < f(x) < \xi_{k+1}, \\ \xi_{k+1} - \xi_{k} & \text{otherwise}, \end{cases}
$$

for $k \in \{0, \ldots, N-1\}$, where $\xi_{k} := \xi_{\min} + \frac{k}{N}(\xi_{\max} - \xi_{\min})$ for $k\in \{0,\dots,N\}$, $\xi_{\min}\coloneqq \min f(\Gamma)$ and $\xi_{\max}\coloneqq \max f(\Gamma)$.

Lemma A.3 (N-slicing). Let $\{f_k\}_{0 \leq k \leq N-1}$ be the $N$-slicing of $f$. Then for all $x \in \Gamma$ we have $f(x) = \xi_0 + \sum_{k=0}^{N-1} f_k(x)$.

Proof. Pick $x \in \Gamma$ and let $l \in \{0, \dots, N-1\}$ such that $\xi_l \leq f(x) \leq \xi_{l+1}$. Then

$$
\begin{aligned}
\xi_{0} + \sum_{k=0}^{N-1} f_{k}(x) &= \xi_{0} + \sum_{k=0}^{l-1} f_{k}(x) + f_{l}(x) + \sum_{k=l+1}^{N-1} f_{k}(x) = \xi_{0} + \sum_{k=0}^{l-1} (\xi_{k+1} - \xi_{k}) + f_{l}(x) \\
&= \xi_{l} + f_{l}(x) = f(x).
\end{aligned}
$$

![](images/1e4de7a286c45c1c370856fd1dffc219d0496ffb3b454508e8cfad62077ea467.jpg)

Definition A.4 (clipping). Let $a, b \in \mathbb{R}, a < b$. We define the clipping function $R_{[*,b]}\colon \mathbb{R} \to \mathbb{R}$ by

$$
R_{[*,b]}(x) := b - R(b - x).
$$

Lemma A.5 (clipping). The function $R_{[*,b]}$ sends all $x \leq b$ to $x$, and all $x > b$ to $b$. Further, $R_{[*,b]}^{\sharp}([a',b']) = [R_{[*,b]}(a'), R_{[*,b]}(b')]$.

Proof.
$$
x < b \Rightarrow R_{[*,b]}(x) = b - R(b - x) = b - b + x = x
$$

$$
x \geq b \Rightarrow R_{[*,b]}(x) = b - R(b - x) = b - 0 = b
$$

Next,

$$
\begin{aligned}
R_{[*,b]}^{\sharp}([a',b']) &= b -^{\sharp} R^{\sharp}(b -^{\sharp} [a',b']) \\
&= b -^{\sharp} R^{\sharp}(b +^{\sharp} [-b', -a']) \\
&= b -^{\sharp} R^{\sharp}([b - b', b - a']) \\
&= b -^{\sharp} [R(b - b'), R(b - a')] \\
&= b +^{\sharp} [-R(b - a'), -R(b - b')] \\
&= [b - R(b - a'), b - R(b - b')] \\
&= [R_{[*,b]}(a'), R_{[*,b]}(b')].
\end{aligned}
$$

![](images/f729991fa214c921e51cdb6354bf50c2aa465e9ef6beaaffb1cec76bb3fae4d1.jpg)

Definition A.6 (nmin). We define the ReLU network $\operatorname{nmin}\colon \mathbb{R}^2\to \mathbb{R}$ by

$$
\operatorname{nmin}(x, y) := \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R\left(\begin{pmatrix} 1 & 1 \\ -1 & -1 \\ 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}\right).
$$

Lemma A.7 (nmin). Let $x, y \in \mathbb{R}$; then $\operatorname{nmin}(x, y) = \min(x, y)$.

Proof. Because $\operatorname{nmin}$ is symmetric in its arguments, we assume w.l.o.g. $x \geq y$.
$$
\begin{aligned}
\operatorname{nmin}(x, y) &= \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R\left(\begin{pmatrix} 1 & 1 \\ -1 & -1 \\ 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}\right) \\
&= \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R\begin{pmatrix} x + y \\ -x - y \\ x - y \\ -x + y \end{pmatrix}
\end{aligned}
$$

Since $x \geq y$, we have $R(x - y) = x - y$ and $R(-x + y) = 0$. If $x + y \geq 0$, then

$$
\operatorname{nmin}(x, y) = \frac{1}{2}\big((x + y) - (x - y)\big) = y.
$$

If $x + y < 0$, then

$$
\operatorname{nmin}(x, y) = \frac{1}{2}\big(-(-x - y) - (x - y)\big) = y.
$$

![](images/7d30f716fc82ef2de9ede8b480d63307e7dc8c5519ee4e74e440b71178e1da76.jpg)

Definition A.8 (nmin$_N$). For all $N \in \mathbb{N}_{\geq 1}$, we define a ReLU network $\operatorname{nmin}_N$ by

$$
\operatorname{nmin}_{1}(x) := x
$$

$$
\operatorname{nmin}_{N}(x_{1}, \dots, x_{N}) := \operatorname{nmin}\left(\operatorname{nmin}_{\lceil N/2 \rceil}(x_{1}, \dots, x_{\lceil N/2 \rceil}), \operatorname{nmin}_{N - \lceil N/2 \rceil}(x_{\lceil N/2 \rceil + 1}, \dots, x_{N})\right).
$$

Lemma A.9. Let $[a,b],[c,d]\in \mathcal{B}(\mathbb{R})$. Then $\operatorname{nmin}^{\sharp}([a,b],[c,d]) = \operatorname{nmin}^{\sharp}([c,d],[a,b])$ and

$$
\operatorname{nmin}^{\sharp}([a,b],[c,d]) = \begin{cases} [c + \frac{a-b}{2}, d + \frac{b-a}{2}] & \text{if } d \leq a \\ [a + \frac{c-d}{2}, b + \frac{d-c}{2}] & \text{if } a \leq d \text{ and } b \leq c \\ [a + c - \frac{b+d}{2}, \frac{b+d}{2}] & \text{if } a \leq d \text{ and } b \geq c \end{cases}
$$

(the cases agree on their overlaps). Proof. The symmetry on abstract elements is immediate. In the following, we omit some of the $\sharp$ superscripts to improve readability.
$$
\begin{aligned}
\operatorname{nmin}^{\sharp}([a,b],[c,d]) &= \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R^{\sharp}\left(\begin{pmatrix} 1 & 1 \\ -1 & -1 \\ 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} [a,b] \\ [c,d] \end{pmatrix}\right) \\
&= \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R^{\sharp}\begin{pmatrix} [a,b] + [c,d] \\ -[a,b] - [c,d] \\ [a,b] - [c,d] \\ -[a,b] + [c,d] \end{pmatrix} \\
&= \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} R^{\sharp}\begin{pmatrix} [a+c, b+d] \\ [-b-d, -a-c] \\ [a-d, b-c] \\ [c-b, d-a] \end{pmatrix} \\
&= \frac{1}{2} \begin{pmatrix} 1 & -1 & -1 & -1 \end{pmatrix} \begin{pmatrix} [R(a+c), R(b+d)] \\ [R(-b-d), R(-a-c)] \\ [R(a-d), R(b-c)] \\ [R(c-b), R(d-a)] \end{pmatrix} \\
&= \frac{1}{2}\big([R(a+c), R(b+d)] - [R(-b-d), R(-a-c)] \\
&\qquad - [R(a-d), R(b-c)] - [R(c-b), R(d-a)]\big) \\
&= \frac{1}{2}\big([R(a+c) - R(-a-c), R(b+d) - R(-b-d)] \\
&\qquad + [-R(b-c) - R(d-a), -R(a-d) - R(c-b)]\big)
\end{aligned}
$$

Claim: $R(a+c) - R(-a-c) = a + c$. Indeed, if $a + c > 0$ then $-a-c < 0$, so $R(a+c) - R(-a-c) = (a+c) - 0 = a+c$; if $a + c \leq 0$ then $-a-c \geq 0$, so $R(a+c) - R(-a-c) = -R(-a-c) = -(-a-c) = a+c$. Similarly, $R(b+d) - R(-b-d) = b+d$.
So the expression simplifies to

$$
\operatorname{nmin}^{\sharp}([a,b],[c,d]) = \frac{1}{2}\big([a+c, b+d] + [-R(b-c) - R(d-a), -R(a-d) - R(c-b)]\big)
$$

We proceed by case distinction:

Case 1: $b - c \leq 0$. Then $a \leq b \leq c \leq d$:

$$
\begin{aligned}
\operatorname{nmin}^{\sharp}([a,b],[c,d]) &= \frac{1}{2}\big([a+c, b+d] + [a-d, b-c]\big) \\
&= \frac{1}{2}[a+c+a-d, b+d+b-c] \\
&= \left[a + \frac{c-d}{2}, b + \frac{d-c}{2}\right]
\end{aligned}
$$

Case 2: $a - d \geq 0$. Then $c \leq d \leq a \leq b$. By symmetry of $\operatorname{nmin}$ this is equivalent to Case 1. Hence

$$
\operatorname{nmin}^{\sharp}([a,b],[c,d]) = \left[c + \frac{a-b}{2}, d + \frac{b-a}{2}\right].
$$

Case 3: $a - d < 0$ and $b - c > 0$:

$$
\begin{aligned}
\operatorname{nmin}^{\sharp}([a,b],[c,d]) &= \frac{1}{2}\big([a+c, b+d] + [c-b-d+a, 0]\big) \\
&= \frac{1}{2}[a+c+c-b-d+a, b+d] \\
&= \left[a + c - \frac{b+d}{2}, \frac{b+d}{2}\right]
\end{aligned}
$$

Thus we have

$$
\operatorname{nmin}^{\sharp}([a,b],[c,d]) = \begin{cases} [a + \frac{c-d}{2}, b + \frac{d-c}{2}] & \text{if } b \leq c \\ [c + \frac{a-b}{2}, d + \frac{b-a}{2}] & \text{if } d \leq a \\ [a + c - \frac{b+d}{2}, \frac{b+d}{2}] & \text{if } a < d \text{ and } b > c \end{cases}
$$

![](images/e9c527e15549e2bd6b08c161775fa2e653a640006045c77a99357a10a3604803.jpg)

Definition A.10 (neighboring grid points). Let $G$ be as above. We define the set of neighboring grid points of $x \in \Gamma$ by

$$
\mathcal{N}(x) := \left\{g \in G \;\middle|\; \|x - g\|_{\infty} \leq \frac{1}{M}\right\} \setminus \{x\}.
$$

For $U\subset \mathbb{R}^m$, we define $\mathcal{N}(U)\coloneqq \bigcup_{x \in U} \mathcal{N}(x) \setminus U$.

Definition A.11 (local bump).
Let $M \in \mathbb{N}$, $G := (\frac{\mathbb{Z}}{M})^m$, $\ell = 2^{\lceil \log_2 2m \rceil + 1}$ and let $c = \left\{\frac{i_1^l}{M}, \frac{i_1^u}{M}\right\} \times \dots \times \left\{\frac{i_m^l}{M}, \frac{i_m^u}{M}\right\} \subseteq G$. We define a ReLU neural network $\phi_c\colon \mathbb{R}^m \to [0,1]$ w.r.t. the grid $G$ by

$$
\phi_{c}(x) := R\left(\operatorname{nmin}_{2m} \bigcup_{1 \leq k \leq m} \left\{R_{[*,1]}\left(M\ell\left(x_{k} - \frac{i_{k}^{l}}{M}\right) + 1\right), R_{[*,1]}\left(M\ell\left(\frac{i_{k}^{u}}{M} - x_{k}\right) + 1\right)\right\}\right)
$$

Lemma A.12. It holds that

$$
\phi_{c}(x) = \begin{cases} 0 & \text{if } x \notin \operatorname{conv}(\mathcal{N}(c)) \\ 1 & \text{if } x \in \operatorname{conv}(c) \\ R\left(\min \bigcup_{k=1}^{m} \{M\ell(x_{k} - \frac{i_{k}^{l}}{M}) + 1\} \cup \{M\ell(\frac{i_{k}^{u}}{M} - x_{k}) + 1\}\right) & \text{otherwise.} \end{cases}
$$

Proof. By case distinction:

- Case $x \notin \operatorname{conv}(\mathcal{N}(c))$. Then there exists $k$ such that either $x_{k} < \frac{i_{k}^{l} - 1}{M}$ or $x_{k} > \frac{i_{k}^{u} + 1}{M}$. Then $M\ell(x_{k} - \frac{i_{k}^{l}}{M}) + 1$ or $M\ell(\frac{i_{k}^{u}}{M} - x_{k}) + 1$ is less than or equal to $0$. Hence

$$
\phi_{c}(x) = 0.
$$

- Case $x \in \operatorname{conv}(c)$. Then for all $k$ it holds that $\frac{i_k^l}{M} \leq x_k \leq \frac{i_k^u}{M}$. Thus $M\ell(x_k - \frac{i_k^l}{M}) + 1 \geq 1$ and $M\ell(\frac{i_k^u}{M} - x_k) + 1 \geq 1$ for all $k$. Hence

$$
\phi_{c}(x) = 1.
$$

- Case otherwise: There exists a $k$ such that $M\ell(x_k - \frac{i_k^l}{M}) + 1$ or $M\ell(\frac{i_k^u}{M} - x_k) + 1$ is smaller than or equal to all other arguments of the minimum and smaller than or equal to $1$.
If the smallest element is smaller than $0$, then $\phi_c(x)$ evaluates to $0$; otherwise it evaluates to $M\ell(x_k - \frac{i_k^l}{M}) + 1$ or $M\ell(\frac{i_k^u}{M} - x_k) + 1$. Since the minimal argument is at most $1$, the clipping $R_{[*,1]}$ is inactive and can be dropped:

$$
\begin{aligned}
\phi_{c}(x) &= R\left(\min \bigcup_{k=1}^{m} \left\{R_{[*,1]}\left(M\ell\left(x_{k} - \frac{i_{k}^{l}}{M}\right) + 1\right), R_{[*,1]}\left(M\ell\left(\frac{i_{k}^{u}}{M} - x_{k}\right) + 1\right)\right\}\right) \\
&= R\left(\min \bigcup_{k=1}^{m} \left\{M\ell\left(x_{k} - \frac{i_{k}^{l}}{M}\right) + 1\right\} \cup \left\{M\ell\left(\frac{i_{k}^{u}}{M} - x_{k}\right) + 1\right\}\right)
\end{aligned}
$$

![](images/d09f7f5be09bb2811e9446cbaeaf433cc782506c0e56a19bdaaa401298a64faf.jpg)

Lemma A.13. Let $[u_1, 1], \ldots, [u_N, 1]$ be abstract elements of the Interval Domain $\mathcal{B}$. Then

$$
\operatorname{nmin}_{N}^{\sharp}([u_{1}, 1], \dots, [u_{N}, 1]) = [u_{1} + \dots + u_{N} + 1 - N, 1].
$$

Proof. By induction. Base case: Let $N = 1$. Then $\operatorname{nmin}_1^\sharp([u_1,1]) = [u_1,1]$. Let $N = 2$. Then $\operatorname{nmin}_2^\sharp([u_1,1],[u_2,1]) = [u_1 + u_2 - 1, 1]$.

Induction hypothesis: The property holds for all $N'$ with $0 < N' \leq N - 1$.
Induction step: Then it also holds for $N$:

$$
\begin{aligned}
\operatorname{nmin}_{N}^{\sharp}([u_{1}, 1], \dots, [u_{N}, 1]) &= \operatorname{nmin}^{\sharp}\big(\operatorname{nmin}_{\lceil N/2 \rceil}^{\sharp}([u_{1}, 1], \dots, [u_{\lceil N/2 \rceil}, 1]), \\
&\qquad\quad \operatorname{nmin}_{N - \lceil N/2 \rceil}^{\sharp}([u_{\lceil N/2 \rceil + 1}, 1], \dots, [u_{N}, 1])\big) \\
&= \operatorname{nmin}^{\sharp}\big([u_{1} + \dots + u_{\lceil N/2 \rceil} + 1 - \lceil N/2 \rceil, 1], \\
&\qquad\quad [u_{\lceil N/2 \rceil + 1} + \dots + u_{N} + 1 - N + \lceil N/2 \rceil, 1]\big) \\
&\stackrel{\text{Lemma A.9}}{=} [u_{1} + \dots + u_{N} + 2 - \lceil N/2 \rceil - N + \lceil N/2 \rceil - 1, 1] \\
&= [u_{1} + \dots + u_{N} + 1 - N, 1]
\end{aligned}
$$

![](images/4cf3e9801662a5aadb9b6048166206d4df151e61d41238d0485e7845ab5ae88c.jpg)

Lemma A.14. Let $[a,b],[u,1]\in \mathcal{B}(\mathbb{R}_{\leq 1})$. Then

$$
\operatorname{nmin}^{\sharp}([a,b],[u,1]) \subseteq \left[a + \frac{u-1}{2}, \frac{b+1}{2}\right]
$$

Proof.

$$
\operatorname{nmin}^{\sharp}([a,b],[u,1]) = \begin{cases} [a + \frac{u-1}{2}, b + \frac{1-u}{2}] & \text{if } b \leq u \\ [a + u - \frac{b+1}{2}, \frac{b+1}{2}] & \text{if } b \geq u \end{cases}
$$

If $b \leq u$ then $b + \frac{1-u}{2} \leq b + \frac{1-b}{2} = \frac{b+1}{2}$. If $u \leq b$ then $a + u - \frac{b+1}{2} \geq a + u - \frac{u+1}{2} = a + \frac{u-1}{2}$. So

$$
\operatorname{nmin}^{\sharp}([a,b],[u,1]) \subseteq \left[a + \frac{u-1}{2}, \frac{b+1}{2}\right].
$$

![](images/9d1ed8d7d6e401d0ffdeee7da3d23bb2b69003203c5957a1f8f8ef0470710a47.jpg)

Lemma A.15. Let $N \in \mathbb{N}_{\geq 2}$ and let $[u_1, 1], \ldots, [u_{N-1}, 1], [u_N, d] \in \mathcal{B}(\mathbb{R})$ with $d \leq 1$ be abstract elements of the Interval Domain $\mathcal{B}$.
Furthermore, let $H(x) \coloneqq \frac{1+x}{2}$. Then there exists a $u \in \mathbb{R}$ s.t.

$$
\operatorname{nmin}_{N}^{\sharp}([u_{1}, 1], \dots, [u_{N-1}, 1], [u_{N}, d]) \subseteq [u, H^{\lceil \log_{2} N \rceil + 1}(d)]
$$

Proof. By induction. Let $N = 2$:

$$
\operatorname{nmin}_{2}^{\sharp}([u_{1}, 1], [u_{2}, d]) \stackrel{\text{Lemma A.14}}{\subseteq} \left[u_{2} + \frac{u_{1} - 1}{2}, H(d)\right]
$$

Let $N = 3$:

$$
\begin{aligned}
\operatorname{nmin}_{3}^{\sharp}([u_{1}, 1], [u_{2}, 1], [u_{3}, d]) &= \operatorname{nmin}^{\sharp}\left(\operatorname{nmin}^{\sharp}([u_{1}, 1], [u_{2}, 1]), [u_{3}, d]\right) \\
&= \operatorname{nmin}^{\sharp}([u_{1} + u_{2} - 1, 1], [u_{3}, d]) \\
&\subseteq \left[u_{3} + \frac{u_{1} + u_{2} - 2}{2}, H(d)\right]
\end{aligned}
$$

$$
\begin{aligned}
\operatorname{nmin}_{3}^{\sharp}([u_{1}, 1], [u_{3}, d], [u_{2}, 1]) &= \operatorname{nmin}_{3}^{\sharp}([u_{3}, d], [u_{1}, 1], [u_{2}, 1]) \\
&= \operatorname{nmin}^{\sharp}\left(\operatorname{nmin}^{\sharp}([u_{3}, d], [u_{1}, 1]), [u_{2}, 1]\right) \\
&\subseteq \operatorname{nmin}^{\sharp}\left(\left[u_{3} + \frac{u_{1} - 1}{2}, H(d)\right], [u_{2}, 1]\right) \\
&\subseteq \left[u_{3} + \frac{u_{1} + u_{2} - 2}{2}, H^{2}(d)\right]
\end{aligned}
$$

So $\operatorname{nmin}_3^\sharp$ applied to $[u_3,d]$ in any argument position is always included in $[u_{3} + \frac{u_{1} + u_{2} - 2}{2}, H^{2}(d)]$.

Induction hypothesis: The statement holds for all $2 \leq N' \leq N - 1$.
Induction step: Then the property holds also for $N$:

$$
\begin{aligned}
\operatorname{nmin}_{N}^{\sharp}([u_{N}, d], [u_{1}, 1], \ldots, [u_{N-1}, 1]) &= \operatorname{nmin}^{\sharp}\big(\operatorname{nmin}_{\lceil N/2 \rceil}^{\sharp}([u_{N}, d], [u_{1}, 1], \ldots, [u_{\lceil N/2 \rceil - 1}, 1]), \\
&\qquad\quad \operatorname{nmin}_{N - \lceil N/2 \rceil}^{\sharp}([u_{\lceil N/2 \rceil}, 1], \dots, [u_{N-1}, 1])\big) \\
&\subseteq \operatorname{nmin}^{\sharp}([u', H^{\lceil \log_{2} \lceil N/2 \rceil \rceil + 1}(d)], [u'', 1]) \\
&\subseteq \operatorname{nmin}^{\sharp}([u', H^{\lceil \log_{2} N/2 \rceil + 1}(d)], [u'', 1]) \\
&= \operatorname{nmin}^{\sharp}([u', H^{\lceil \log_{2} N - 1 \rceil + 1}(d)], [u'', 1]) \\
&= \operatorname{nmin}^{\sharp}([u', H^{\lceil \log_{2} N \rceil}(d)], [u'', 1]) \\
&\subseteq [u''', H^{\lceil \log_{2} N \rceil + 1}(d)]
\end{aligned}
$$

and similarly for other orderings of the arguments.

![](images/e5562248b2d9be5efd1c12c2bd8fded8e917a095e85d1e917a0f876cf0c84e67.jpg)

Lemma A.16. Let $H(x) \coloneqq \frac{1+x}{2}$. For all $N \in \mathbb{N}_{>0}$, we have that $d \leq 1 - 2^{N}$ implies $H^{N}(d) \leq 0$.

Proof. By induction. $N = 1$: Then $H(1 - 2) = \frac{1 + 1 - 2}{2} = 0$.

Induction hypothesis: The statement holds for all $N'$ such that $0 < N' \leq N$.
Induction step ($N + 1$): Let $d \leq 1 - 2^{N+1}$. Then, since $H$ is monotone,

$$
H^{N+1}(d) \leq H^{N+1}(1 - 2^{N+1}) = H^{N}\big(H(1 - 2^{N+1})\big) = H^{N}\Big(\frac{1 + 1 - 2^{N+1}}{2}\Big) = H^{N}\big(1 - 2^{N}\big) \leq 0
$$

![](images/cd8785fa9bdd79188c634cc64b06fc6c6924d562b1cdcca287ed606997c64bbe.jpg)

Lemma A.17. For all boxes $B \in \mathcal{B}(\mathbb{R}^m)$, we have

$$
\phi_c^{\sharp}(B) = \begin{cases} [1, 1] & \text{if } B \subseteq \operatorname{conv}(c) \\ [0, 0] & \text{if } B \subseteq \Gamma \setminus \operatorname{conv}(\mathcal{N}(c)) \end{cases}
$$

Furthermore, $\phi_c^{\sharp}(B) \subseteq [0, 1]$.

Proof. Let $\phi_c$ be a local bump and let $B = [a, b] \in \mathcal{B}(\mathbb{R}^m)$. Let $[r_k^1, s_k^1], [r_k^2, s_k^2] \in \mathcal{B}(\mathbb{R})$ be such that $M\ell([a_k, b_k] - \frac{i_k^l}{M}) + 1 = [r_k^1, s_k^1]$ and $M\ell(\frac{i_k^u}{M} - [a_k, b_k]) + 1 = [r_k^2, s_k^2]$.

- If $[a, b] \subseteq \operatorname{conv}(c)$: Then $1 \leq r_k^1$ and $1 \leq r_k^2$ for all $k \in \{1, \dots, m\}$. Thus

$$
\begin{aligned}
\phi_c^{\sharp}([a, b]) &= R^{\sharp}\big(\mathrm{nmin}_{2m}^{\sharp}\{R_{[*,1]}^{\sharp}([r_k^p, s_k^p])\}_{(p,k) \in \{1,2\} \times \{1,\dots,m\}}\big) \\
&= R^{\sharp}\big(\mathrm{nmin}_{2m}^{\sharp}\{[1, 1]\}_{(p,k) \in \{1,2\} \times \{1,\dots,m\}}\big) \\
&= [1, 1]
\end{aligned}
$$

- If $[a, b] \subseteq \Gamma \setminus \operatorname{conv}(\mathcal{N}(c))$: Then there exists a $(p', k') \in \{1, 2\} \times \{1, \dots, m\}$ such that $s_{k'}^{p'} \leq 1 - 2^{\lceil \log_2 N \rceil + 1}$. Using Lemma A.16 and Lemma A.15, we know that there exists a $u \in \mathbb{R}$ s.t.
$$
\begin{aligned}
\phi_c^{\sharp}([a, b]) &= R^{\sharp}\big(\mathrm{nmin}_{2m}^{\sharp}\{R_{[*,1]}^{\sharp}([r_k^p, s_k^p])\}_{(p,k) \in \{1,2\} \times \{1,\dots,m\}}\big) \\
&= R^{\sharp}\big(\mathrm{nmin}_{2m}^{\sharp}\{[R_{[*,1]}(r_k^p), R_{[*,1]}(s_k^p)]\}_{(p,k) \in \{1,2\} \times \{1,\dots,m\}}\big) \\
&\subseteq R^{\sharp}\big(\mathrm{nmin}_{2m}^{\sharp}\{[R_{[*,1]}(r_k^p), 1]\}_{(p,k) \neq (p',k')} \cup \{[r_{k'}^{p'}, s_{k'}^{p'}]\}\big) \\
&\subseteq R^{\sharp}([u, 0]) \\
&= [0, 0]
\end{aligned}
$$

For any $[a, b] \in \mathcal{B}(\Gamma)$ we have $\phi_c^{\sharp}([a, b]) \subseteq [0, 1]$ by construction.

Lemma A.18. Let $\Gamma \subset \mathbb{R}^m$ be a closed box and let $f\colon \Gamma \to \mathbb{R}$ be continuous. For all $\delta > 0$ there exists a set of ReLU networks $\{n_k\}_{0 \leq k \leq N-1}$ of size $N \in \mathbb{N}$ approximating the $N$-slicing $\{f_k\}_{0 \leq k \leq N-1}$ of $f$ (with $\xi_k$ as in Definition A.2) such that for all boxes $B \in \mathcal{B}(\Gamma)$

$$
n_k^{\sharp}(B) = \begin{cases} [0, 0] & \text{if } f(B) \leq \xi_k - \frac{\delta}{2} \\ [1, 1] & \text{if } f(B) \geq \xi_{k+1} + \frac{\delta}{2}, \end{cases}
$$

and $n_k^{\sharp}(B) \subseteq [0, 1]$.

Proof. Let $N \in \mathbb{N}$ be such that $N \geq 2\frac{\xi_{\max} - \xi_{\min}}{\delta}$, where $\xi_{\min} \coloneqq \min f(\Gamma)$ and $\xi_{\max} \coloneqq \max f(\Gamma)$. For simplicity we assume $\Gamma = [0, 1]^m$. By the Heine-Cantor theorem, $f$ is uniformly continuous, so there exists a $\delta' > 0$ such that for all $x, y \in \Gamma$,
$\|y - x\|_{\infty} < \delta' \Rightarrow |f(y) - f(x)| < \frac{\delta}{2}$. Further, let $M \in \mathbb{N}$ be such that $M \geq \frac{1}{\delta'}$ and let $G$ be the grid defined by $G \coloneqq (\frac{\mathbb{Z}}{M})^m \subseteq \mathbb{R}^m$.

Let $C(B)$ be the set of corner points of the closest hyperrectangle in $G$ confining $B \in \mathcal{B}(\Gamma)$. We construct the set

$$
\Delta_k \coloneqq \left\{ C(B) \mid B \in \mathcal{B}(\Gamma)\colon f(B) \geq \xi_{k+1} + \frac{\delta}{2} \right\}.
$$

We claim that $\{n_k\}_{0 \leq k \leq N-1}$ defined by

$$
n_k(x) \coloneqq R_{[*,1]}\left(\sum_{c \in \Delta_k} \phi_c(x)\right)
$$

satisfies the condition.

Case 1: Let $B \in \mathcal{B}(\Gamma)$ such that $f(B) \geq \xi_{k+1} + \frac{\delta}{2}$. Then $f_k(g) = 1$ holds for all $g \in \mathcal{N}(B)$. By construction there exists a $c' \in \Delta_k$ such that $B \subseteq \operatorname{conv}(c')$. Using Lemma 4.3 we get

$$
\begin{aligned}
n_k^{\sharp}(B) &= R_{[*,1]}^{\sharp}\left(\sum_{c \in \Delta_k} \phi_c^{\sharp}(B)\right) = R_{[*,1]}^{\sharp}\left(\phi_{c'}^{\sharp}(B) + \sum_{c \in \Delta_k \setminus \{c'\}} \phi_c^{\sharp}(B)\right) \\
&= R_{[*,1]}^{\sharp}\left([1, 1] + [p_1, p_2]\right) = [1, 1],
\end{aligned}
$$

where $[p_1, p_2] \in \mathcal{B}(\mathbb{R}_{\geq 0})$; this holds because $\phi_c^{\sharp}(B) \subseteq [0, 1]$ for every $c \in \Delta_k$ by Lemma A.17.

Case 2: Let $B \in \mathcal{B}(\Gamma)$ such that $f(B) \leq \xi_k - \frac{\delta}{2}$. Then $f_k(g) = 0$ holds for all $g \in \mathcal{N}(B)$. Further, $B \cap \operatorname{conv}(\mathcal{N}(c)) = \emptyset$ for all $c \in \Delta_k$ because $G$ is fine enough. Using Lemma 4.3 we obtain

$$
n_k^{\sharp}(B) = R_{[*,1]}^{\sharp}\left(\sum_{c \in \Delta_k} \phi_c^{\sharp}(B)\right) = R_{[*,1]}^{\sharp}([0, 0]) = [0, 0].
$$

By construction we have $n_k^{\sharp}(B) \subseteq [0, 1]$.
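The arithmetic behind Lemma A.16 is easy to check numerically. The following is a minimal plain-Python sketch (the function names are ours, not part of the formal development): iterating $H(x) = \frac{1+x}{2}$ exactly $N$ times on the boundary value $d = 1 - 2^N$ lands exactly on $0$, and any smaller $d$ stays non-positive.

```python
# Numerical sanity check of Lemma A.16 (illustrative sketch; names are ours).
# H(x) = (1 + x) / 2; the lemma states: d <= 1 - 2**N  implies  H^N(d) <= 0.

def H(x):
    return (1 + x) / 2

def iterate_H(x, n):
    # Apply H n times, i.e. compute H^n(x).
    for _ in range(n):
        x = H(x)
    return x

for N in range(1, 20):
    d = 1 - 2 ** N                      # boundary case of the lemma
    assert iterate_H(d, N) == 0         # H^N(1 - 2^N) is exactly 0
    assert iterate_H(d - 1.0, N) < 0    # any smaller d stays negative
```

Each application of $H$ halves the distance to $1$, which is exactly why the bound degrades by one application of $H$ per level of the $\mathrm{nmin}$ recursion tree.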
# UNPAIRED POINT CLOUD COMPLETION ON REAL SCANS USING ADVERSARIAL TRAINING

Xuelin Chen

Shandong University

University College London

Baoquan Chen

Peking University

Niloy J. Mitra

University College London

Adobe Research London

# ABSTRACT

As 3D scanning solutions become increasingly popular, several deep learning setups have been developed for the task of scan completion, i.e., plausibly filling in regions that were missed in the raw scans.
These methods, however, largely rely on supervision in the form of paired training data, i.e., partial scans with corresponding desired completed scans. While these methods have been successfully demonstrated on synthetic data, the approaches cannot be directly used on real scans in the absence of suitable paired training data. We develop a first approach that works directly on input point clouds, does not require paired training data, and hence can be applied directly to real scans for scan completion. We evaluate the approach qualitatively on several real-world datasets (ScanNet, Matterport3D, KITTI), quantitatively on the 3D-EPN shape completion dataset, and demonstrate realistic completions under varying levels of incompleteness.

# 1 INTRODUCTION

Robust, efficient, and scalable solutions now exist for easily scanning large environments and workspaces (Dai et al., 2017a; Chang et al., 2017). The resultant scans, however, are often partial and have to be completed (i.e., missing parts have to be hallucinated and filled in) before they can be used in downstream applications, e.g., virtual walk-throughs or path planning.

The most popular data-driven scan completion methods rely on paired supervision data, i.e., for each incomplete training scan, corresponding complete data (e.g., voxels, point sets, signed distance fields) is required. One way to establish such a shape completion network is then to train a suitably designed encoder-decoder architecture (Dai et al., 2017b; 2018). The required paired training data is obtained by virtually scanning 3D objects (e.g., the SunCG Song et al. (2017) and ShapeNet Chang et al. (2015) datasets) to simulate occlusion effects. Such approaches, however, are unsuited for real scans, where large volumes of paired supervision data remain difficult to collect.
Additionally, when data distributions from virtual scans do not match those from real scans, completion networks trained on synthetic-partial and synthetic-complete data do not sufficiently generalize to real (partial) scans. To the best of our knowledge, no point-based unpaired method exists that learns to translate noisy and incomplete point clouds from raw scans to clean and complete point sets.

We propose an unpaired point-based scan completion method that can be trained without requiring explicit correspondence between partial point sets (e.g., raw scans) and example complete shape models (e.g., synthetic models). Note that the network does not require explicit examples of real complete scans, and hence existing (unpaired) large-scale real 3D scans (e.g., Dai et al. (2017a); Chang et al. (2017)) and virtual 3D object repositories (e.g., Song et al. (2017); Chang et al. (2015)) can directly be leveraged as training data. Figure 1 shows example scan completions. As we show in Table 1, unlike methods requiring paired supervision, our method continues to perform well even if the data distributions of synthetic complete scans and real partial scans differ.

We achieve this by designing a generative adversarial network (GAN) wherein a generator, i.e., an adaptation network, transforms the input into a suitable latent representation such that a discriminator cannot differentiate between the transformed latent variables and the latent variables obtained from training data (i.e., complete shape models). Intuitively, the generator is responsible for the key task of mapping raw partial point sets into clean and complete point sets, and the process is regularized by working in two different latent spaces that have separately learned manifolds of scanned and synthetic object data.

![](images/cf7fd84ea949d111f03f78ed877d5b6cc902e8dbfe569b88e415da34df651a44.jpg)
Figure 1: We present a point-based shape completion network that can be directly used on raw scans without requiring paired training data. Here we show a sampling of results from the ScanNet, Matterport3D, 3D-EPN, and KITTI datasets.

We demonstrate our method on several publicly available real-world scan datasets, namely (i) ScanNet (Dai et al., 2017a) chairs and tables; (ii) Matterport3D (Chang et al., 2017) chairs and tables; and (iii) KITTI (Geiger et al., 2012) cars. In the absence of completion ground truth, we cannot directly compute accuracy for the completed scans, and instead compare using plausibility scores. Further, in order to quantitatively evaluate the performance of the network, we report numbers on a synthetic dataset (Dai et al., 2017b) where completed versions are available. Finally, we compare our method against baseline methods to demonstrate the advantages of the proposed unpaired scan completion framework.

# 2 RELATED WORK

Shape Completion. Many deep neural networks have been proposed to address the shape completion challenge. Inspired by CNN-based 2D image completion networks, 3D convolutional neural networks applied to voxelized inputs have been widely adopted for the 3D shape completion task (Dai et al., 2018; 2017b; Sharma et al., 2016; Han et al., 2017; Thanh Nguyen et al., 2016; Yang et al., 2018; Wang et al., 2017). As quantizing shapes to voxel grids leads to geometric information loss, recent approaches (Yuan et al., 2018; Yu et al., 2018b; Achlioptas et al., 2018) operate directly on point sets to fill in missing parts. These works, however, require supervision in the form of partial-complete paired data for training deep neural networks to directly regress partial input to their ground truth counterparts. Since paired ground truth of real-world data is rarely available, such training data is generated using virtual scanning. While these methods work well on synthetic test data, they do not generalize easily to real scans arising from hard-to-model acquisition processes.
Realizing the gap between synthetically-generated data and real-world data, Stutz & Geiger (2018) proposed to work directly on voxelized real-world data. They also work in a latent space created for clean and complete data, but measure reconstruction loss using a maximum likelihood estimator. Instead, we propose a GAN setup to learn a mapping between latent spaces respectively arising from partial real and synthetic complete data. Further, by measuring loss using the Hausdorff distance on point clouds, we work directly with point sets instead of voxelized input.

Generative Adversarial Network. Since its introduction, the GAN (Goodfellow et al., 2014) has been used for a variety of generative tasks. In the 2D image domain, researchers have utilized adversarial training to recover richer information from low-resolution or corrupted images (Ledig et al., 2017; Wang et al., 2018; Mao et al., 2017; Park et al., 2018; Bulat et al., 2018; Yeh et al., 2017; Iizuka et al., 2017). In the 3D context, Yang et al. (2018); Wang et al. (2017) combine 3D-CNNs and generative adversarial training to complete shapes under the supervision of ground truth data. Gurumurthy & Agrawal (2019) treat the point cloud completion task as a denoising AE problem, utilizing adversarial training to optimize on the AE latent space. We also leverage the power of GANs for reasoning about the missing parts of partial point cloud scans. However, our method is designed to work with unpaired data, and thus can be applied directly to real-world scans even when real-world and synthetic data distributions differ. Intuitively, our GAN-based approach directly learns a translation mapping between these two different distributions.

Deep Learning on Point Clouds. Our method is built upon recent advances in deep neural networks for point clouds. PointNet Qi et al.
(2017a), the pioneering work on this topic, takes an input point set through point-wise MLP layers followed by a symmetric and permutation-invariant function to produce a compact global feature, which can then be used for a diverse set of tasks (e.g., classification, segmentation).

![](images/b1fb3b96b4765aab2f868e8f5203c3a8462fe4300e2b5673a4e4bc479e6953e9.jpg)
Figure 2: Unpaired Scan Completion Network.

Although many improvements to PointNet have been proposed (Su et al., 2018; Li et al., 2018b; Qi et al., 2017b; Li et al., 2018a; Zaheer et al., 2017), the simplicity and effectiveness of PointNet and its extension PointNet++ make them popular for many other analysis tasks (Yu et al., 2018a; Yin et al., 2018; Yu et al., 2018b; Guerrero et al., 2018).

In the context of synthesis, Achlioptas et al. (2018) proposed an autoencoder network, using a PointNet-based backbone, to learn compact representations of point clouds. By working in a reduced latent space produced by the autoencoder, instead of having a generator produce raw point clouds, they report significant advantages in training GANs. Inspired by this work, we design a GAN that translates between two different latent spaces to perform unpaired shape completion on real scans.

# 3 METHOD

Given a noisy and partial point set $S = \{\mathbf{s}_i\}$ as input, our goal is to produce a clean and complete point set $\mathcal{R} = \{\mathbf{r}_i\}$ as output. Note that although the two sets have the same number of points, there is no explicit correspondence between the sets $S$ and $\mathcal{R}$. Further, we assume access to clean and complete point sets for shapes of the object classes. We achieve unpaired completion by learning two class-specific point set manifolds, $\mathbb{X}_r$ for the scanned inputs, and $\mathbb{X}_c$ for clean and complete shapes. Solving the shape completion problem then amounts to learning a mapping $\mathbb{X}_r \to \mathbb{X}_c$ between the respective latent spaces.
We train a generator $G_{\theta}: \mathbb{X}_r \to \mathbb{X}_c$ to perform the mapping. Note that we do not require the noise characteristics in the two data distributions, i.e., real and synthetic, to be the same. In the absence of paired training data, we score the generated output by setting up a min-max game where the generator is trained to fool a discriminator $F_{\chi}$, whose goal is to differentiate between encoded clean and complete shapes, and mapped encodings of the raw and partial inputs. Figure 2 shows the setup of the proposed scan completion network. The latent space encoder-decoders, the mapping generator, and the discriminator are all trained as detailed next.

# 3.1 LEARNING LATENT SPACES FOR POINT SETS

The latent space of a given set of point sets is obtained by training an autoencoder, which encodes the given input to a low-dimensional latent feature and then decodes it to reconstruct the original input. We work directly on the point sets via these learned latent spaces instead of quantizing them to voxel grids or signed distance fields.

For point sets coming from the clean and complete point sets $\mathcal{P}$, we learn an encoder network $E_{\eta}^{c}$ that maps $\mathcal{P}$ from the original parameter space $\mathbb{R}^{3N}$, defined by concatenating the coordinates of the $N$ (2048 in all our experiments) points, to a lower-dimensional latent space $\mathbb{X}_c$. A decoder network $D_{\phi}^{c}$ performs the inverse transformation back to $\mathbb{R}^{3N}$, giving us a reconstructed point set $\tilde{\mathcal{P}}$, also with $N$ points.
The encoder-decoders are trained with the reconstruction loss

$$
\mathcal{L}^{\mathrm{EMD}}(\eta, \phi) = \mathbb{E}_{\mathcal{P} \sim p_{\text{complete}}}\, d\left(\mathcal{P}, D_{\phi}^{c}\left(E_{\eta}^{c}(\mathcal{P})\right)\right), \tag{1}
$$

where $\mathcal{P} \sim p_{\text{complete}}$ denotes point set samples drawn from the set of clean and complete point sets, $d(X_1, X_2)$ is the Earth Mover's Distance (EMD) between point sets $X_1, X_2$, and $(\eta, \phi)$ are the learnable parameters of the encoder and decoder networks, respectively. Once trained, the weights of both networks are held fixed, and the latent code $z = E_{\eta}^{c}(X)$, $z \in \mathbb{X}_{c}$, for a clean and complete point set $X$ provides a compact representation for subsequent training and implicitly captures the manifold of clean and complete data. The architecture of the encoder and decoder is similar to Achlioptas et al. (2018); Qi et al. (2017a): a 5-layer MLP lifts individual points to a deeper feature space, followed by a symmetric function to maintain permutation invariance. This results in a $k$-dimensional latent code that describes the entire point cloud ($k = 128$ in all our experiments). More details of the network architecture can be found in the appendix.

![](images/641767ef24bcd454447308849ba6a3321bc9c6fc851e7d9d8ddd346f06fbbd4f.jpg)
input point set

![](images/5d906051da78dde830ffce54ff17a1e9fbcf285ea170007ff2ddc57ef3cc6579.jpg)
completion without HL

![](images/d14b17d594c76ee14905599f97c4f678551dc2a1d6090ec1fa255727903095b7.jpg)
completion with HL
Figure 3: Effect of unpaired scan completion without (Equation 5) and with the HL term (Equation 6). Without the HL term, the network produces a clean point set for a complete chair that is different in shape from the input. With the HL term, the network produces a clean point set that matches the input.
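The encoder structure described in Section 3.1 (point-wise MLP layers followed by a symmetric pooling function) can be sketched in a few lines. This is an illustrative NumPy mock-up with random weights and invented names, not the paper's trained architecture; it only demonstrates why max-pooling over points yields a permutation-invariant $k$-dimensional code:

```python
import numpy as np

# Illustrative PointNet-style encoder sketch (weights random, sizes ours).
# Each of the N points is lifted independently by point-wise MLP layers;
# a symmetric max-pool over the point axis then gives a k-dim latent code
# that does not depend on the ordering of the points.

rng = np.random.default_rng(0)
N, k = 2048, 128
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, k)), np.zeros(k)

def encode(points):                        # points: (N, 3) array
    h = np.maximum(points @ W1 + b1, 0)    # point-wise ReLU layer
    h = np.maximum(h @ W2 + b2, 0)         # lift to k features per point
    return h.max(axis=0)                   # symmetric function -> (k,)

P = rng.normal(size=(N, 3))
z = encode(P)
z_perm = encode(P[rng.permutation(N)])     # same set, shuffled order
assert z.shape == (k,) and np.allclose(z, z_perm)  # permutation invariant
```

Any symmetric reduction (max, sum, mean) would give the invariance; PointNet-style encoders use max-pooling in particular.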
As for point sets coming from the noisy-partial point sets $S$, one can also train another encoder $E_{\gamma}^{r}: S \to \mathbb{X}_{r}$ and decoder $D_{\psi}^{r}: \mathbb{X}_{r} \to \tilde{S}$ pair that provides a latent parameterization $\mathbb{X}_{r}$ for the noisy-partial point sets, with the reconstruction loss defined as

$$
\mathcal{L}^{\mathrm{EMD}}(\gamma, \psi) = \mathbb{E}_{\mathcal{S} \sim p_{\text{raw}}}\, d\left(\mathcal{S}, D_{\psi}^{r}\left(E_{\gamma}^{r}(\mathcal{S})\right)\right), \tag{2}
$$

where $S \sim p_{\text{raw}}$ denotes point set samples drawn from the set of noisy and partial point sets.

Although, in experiments, the latent space of this autoencoder trained on noisy-partial point sets works considerably well as the noisy-partial point set manifold, we found that using the latent space produced by feeding noisy-partial point sets to the autoencoder trained on clean and complete point sets yields slightly better results. Hence, unless specified, we set $\gamma = \eta$ and $\psi = \phi$ in our experiments. A comparison of the different choices for obtaining the latent space of noisy-partial point sets is also presented in Section 4. Next, we describe the GAN setup that learns a mapping between the latent spaces of raw noisy-partial and synthetic clean-complete point sets, i.e., $\mathbb{X}_r \to \mathbb{X}_c$.

# 3.2 LEARNING A MAPPING BETWEEN LATENT SPACES

We set up a min-max game between a generator and a discriminator to perform the mapping between the latent spaces. The generator $G_{\theta}$ is trained to perform the mapping $\mathbb{X}_r \to \mathbb{X}_c$ such that the discriminator fails to reliably tell whether a latent variable comes from the original $\mathbb{X}_c$ or the remapped $\mathbb{X}_r$.

The latent representation of a noisy and partial scan $z_{r} = E_{\gamma}^{r}(\mathcal{S})$ is mapped by the generator to $\tilde{z}_c = G_\theta(z_r)$.
Then, the task of the discriminator $F_{\chi}$ is to distinguish between latent representations $\tilde{z}_{c}$ and $z_{c} = E_{\eta}^{c}(\mathcal{P})$. We train the mapping function using a GAN. Given training examples of clean latent variables $z_{c}$ and remapped-noisy latent variables $\tilde{z}_c$, we seek to optimize the following adversarial loss over the mapping generator $G_{\theta}$ and a discriminator $F_{\chi}$,

$$
\min_{\theta} \max_{\chi} \mathbb{E}_{x \sim p_{\text{clean-complete}}}\left[\log\left(F_{\chi}\left(E_{\eta}^{c}(x)\right)\right)\right] + \mathbb{E}_{y \sim p_{\text{noisy-partial}}}\left[\log\left(1 - F_{\chi}\left(G_{\theta}\left(E_{\gamma}^{r}(y)\right)\right)\right)\right]. \tag{3}
$$

In our experiments, we found the least squares GAN Mao et al. (2016) to be easier to train, and hence minimize the discriminator and generator losses defined as

$$
\mathcal{L}_{F}(\chi) = \mathbb{E}_{x \sim p_{\text{clean-complete}}}\left[F_{\chi}\left(E_{\eta}^{c}(x)\right) - 1\right]^{2} + \mathbb{E}_{y \sim p_{\text{noisy-partial}}}\left[F_{\chi}\left(G_{\theta}\left(E_{\gamma}^{r}(y)\right)\right)\right]^{2} \tag{4}
$$

$$
\mathcal{L}_{G}(\theta) = \mathbb{E}_{y \sim p_{\text{noisy-partial}}}\left[F_{\chi}\left(G_{\theta}\left(E_{\gamma}^{r}(y)\right)\right) - 1\right]^{2}. \tag{5}
$$

The above setup encourages the generator to perform the mapping $\mathbb{X}_r \to \mathbb{X}_c$, resulting in $D_{\psi}^{c}(\tilde{z}_{c})$ being a clean and complete point cloud $\mathcal{R}$. However, the generator is free to map a noisy latent vector to any point on the manifold of valid shapes in $\mathbb{X}_c$, including shapes that are far from the original partial scan $\mathcal{S}$.
As shown in Figure 3, the result is a complete and clean point cloud that can be dissimilar in shape to the partial scanned input. To prevent this, we add a reconstruction loss term $\mathcal{L}_{\mathrm{recon}}$ to the generator loss:

$$
\mathcal{L}_{G}(\theta) = \alpha\, \mathbb{E}_{y \sim p_{\text{noisy-partial}}}\left[F_{\chi}\left(G_{\theta}\left(E_{\gamma}^{r}(y)\right)\right) - 1\right]^{2} + \beta\, \mathcal{L}_{\mathrm{recon}}^{\mathrm{HL}}\left(\mathcal{S}, D_{\psi}^{c}\left(G_{\theta}\left(E_{\gamma}^{r}(\mathcal{S})\right)\right)\right), \tag{6}
$$

where $\mathcal{L}_{\mathrm{recon}}^{\mathrm{HL}}$ denotes the Hausdorff distance loss$^{1}$ (HL) from the partial input point set to the completion point set, which encourages the predicted completion point set to match the input only partially. Note that it is crucial to use HL as $\mathcal{L}_{\mathrm{recon}}$, since the partial input can only provide partial supervision when no ground truth complete point set is available. In contrast, using EMD as $\mathcal{L}_{\mathrm{recon}}$ forces the network to reconstruct the overall partial input, leading to worse completion results. The comparison of these design choices is presented in Section 4. Unless specified, we set the trade-off parameters as $\alpha = 0.25$ and $\beta = 0.75$ in all our experiments.

# 4 EXPERIMENTAL EVALUATION

We present quantitative and qualitative experimental results on several noisy and partial datasets. First, we present results on real-world datasets, demonstrating the effectiveness of our method on unpaired raw scans. Second, we thoroughly compare our method to various baseline methods on the 3D-EPN dataset, which contains simulated partial scans and corresponding ground truth for full evaluation.
Finally, we derive a synthetic noisy-partial scan dataset based on ShapeNet, on which we can evaluate the performance degradation of applying supervised methods to test data of a different distribution, as well as the performance of our method under varying levels of incompleteness. A set of ablation studies is also included to evaluate our design choices.

Datasets. (A) Real-world dataset comes from three sources. First, a dataset of $\sim 550$ chairs and $\sim 550$ tables extracted from the ScanNet dataset, split into $90\%$-$10\%$ train-test sets. Second, a dataset of 20 chairs and 20 tables extracted from the Matterport3D dataset. Note that we train our method only on the ScanNet training split, and use the trained model to test on the Matterport3D data to evaluate generalization to new data sources. Third, a dataset containing cars from the KITTI Velodyne point clouds. (B) 3D-EPN dataset provides simulated partial scans with corresponding ground truth. Scans are represented as Signed Distance Fields (SDF). We only use the provided point cloud representations of the training data, instead of the SDF data, which holds richer information. (C) Clean and complete point set dataset contains virtually scanned point sets of ShapeNet models covering 8 categories, namely boat, car, chair, dresser, lamp, plane, sofa, and table. We use this dataset for learning the clean-complete point set manifold in all our experiments. (D) Synthetic dataset provides a different incomplete scan distribution at varying levels of incompleteness. Ground truth complete scan counterparts are available for evaluation.

Evaluation measures. We assess completion quality using the following measures. (A) Accuracy measures the fraction of points in $P_{comp}$ that are matched by $P_{gt}$, where $P_{comp}$ denotes the completion point set and $P_{gt}$ denotes the ground truth point set.
Specifically, for each point $v \in P_{comp}$, we compute $D(v, P_{gt}) = \min \{ \| v - q \|, q \in P_{gt} \}$. If $D(v, P_{gt})$ is within the distance threshold $\epsilon = 0.03$, we count it as a correct match. The fraction of matched points is reported as the accuracy in percentage. (B) Completeness reports the fraction of points in $P_{gt}$ that are within distance threshold $\epsilon$ of any point in $P_{comp}$. (C) F1 score is defined as the harmonic average of the accuracy and the completeness; F1 reaches its best value at 1 (perfect accuracy and completeness) and worst at 0. (D) Plausibility of the completion is evaluated as the classification accuracy in percentage produced by PointNet++, a state-of-the-art point-based classification network. To avoid bias towards ShapeNet point clouds, we trained the classification network on the ModelNet40 dataset. We mainly use the plausibility score for real-world data completions, where no ground truth data is available for calculating accuracy, completeness, or F1 scores.

In the following, we show all experimental and evaluation results. We trained separate networks for each category. More details are in the appendix.

# 4.1 EXPERIMENTAL RESULTS ON REAL-WORLD DATA

Our method works directly on real-world data, where no paired data is available. We train and test our network on noisy-partial chairs and tables extracted from the ScanNet dataset. We further test the network trained on the ScanNet dataset on chairs and tables extracted from the Matterport3D dataset, to show how well our network generalizes to entirely unseen data. We present qualitative results of our method in Figure 4. Our method consistently produces plausible completions for the ScanNet and Matterport3D data.

![](images/1a95c4810a74fb6a98d8fb008d15ee7cb3a30959e5ced452c52548402fe3eaff.jpg)
Figure 4: Qualitative comparisons on real-world data, which includes partial scans of ScanNet chairs and tables, Matterport3D chairs and tables, and KITTI cars.
We show the partial input in grey and the corresponding completion in gold on the right. + +![](images/11e893fb187da8e2bb2cc2dc01e229a3e72dac1869658a48b5297a00c1e770fb.jpg) + +![](images/a2d33e060e4e93de1c1d9ed804b1a455c0a1f984941f3be2c56e452e22140106.jpg) + +In the absence of ground truth completions on real data, we compare our method quantitatively against others based on the plausibility of the results. The left sub-table of Table 1 shows that our method is superior to those supervised methods, namely 3D-EPN and PCN. Directly applying PCN trained on simulated partial data to real-world data leads to completions that have low plausibility, while our method consistently produces results with high plausibility. 3D-EPN trained on simulated partial data failed to complete the real-world partial scans. In Section 4.2 and Section 4.3, we present more in-depth comparisons on 3D-EPN and our synthetic dataset, where the ground truth is available for computing accuracy, completeness, and F1 of the completions. + +
| | | Raw input | 3D-EPN | PCN | Ours |
|---|---|---|---|---|---|
| Synthetic | chair | 73.1 | 77.3 | 85.0 | 91.5 |
| | table | 52.5 | 71.2 | 72.0 | 80.6 |
| Real-world | chair | 71.4 | 7.1 | 78.6 | 94.3 |
| | table | 47.8 | 4.4 | 69.6 | 81.2 |
Table 1: Completion plausibility on synthetic and real-world scans, and the effect of data distribution discrepancy. (Left) Plausibility comparison on synthetic and real-world scans. Synthetic scans comprise test data from 3D-EPN; real-world scans include ScanNet and Matterport3D test data. 3D-EPN fails to produce good completions on real-world data. (Right) On our synthetic data, supervised methods trained on other simulated partial scans produce worse results on partial scans with a different data distribution.
| model | 3D-EPN (acc. / comp. / F1) | PCN (acc. / comp. / F1) | Ours (acc. / comp. / F1) |
|---|---|---|---|
| chair | 39.6 / 61.8 / 48.2 | 49.3 / 76.0 / 59.8 | 80.7 / 80.8 / 80.8 |
| car | 43.8 / 62.3 / 51.4 | 63.2 / 81.4 / 71.2 | 82.6 / 80.7 / 81.7 |
| table | 36.6 / 61.0 / 45.8 | 62.3 / 80.6 / 70.3 | 83.1 / 84.5 / 83.8 |
| plane | 17.1 / 57.6 / 26.3 | 67.1 / 85.4 / 75.1 | 94.4 / 92.7 / 93.6 |
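The point-matching measures reported in these tables (accuracy, completeness, F1, with threshold $\epsilon = 0.03$) follow directly from the definitions in the evaluation-measures paragraph. A minimal brute-force sketch, not the paper's implementation (which would use a spatial index for speed):

```python
# Brute-force sketch of accuracy, completeness, and F1 between point sets.
# Point sets are lists of (x, y, z) tuples; eps = 0.03 follows the paper.
import math

def _min_dist(v, points):
    """Distance from point v to its nearest neighbour in `points`."""
    return min(math.dist(v, q) for q in points)

def accuracy(p_comp, p_gt, eps=0.03):
    """Percentage of completion points matched by some ground-truth point."""
    matched = sum(1 for v in p_comp if _min_dist(v, p_gt) <= eps)
    return 100.0 * matched / len(p_comp)

def completeness(p_comp, p_gt, eps=0.03):
    """Percentage of ground-truth points matched by some completion point."""
    return accuracy(p_gt, p_comp, eps)

def f1_score(p_comp, p_gt, eps=0.03):
    """Harmonic mean of accuracy and completeness (100 is perfect here,
    since both inputs are percentages)."""
    a, c = accuracy(p_comp, p_gt, eps), completeness(p_comp, p_gt, eps)
    return 2 * a * c / (a + c) if a + c > 0 else 0.0
```

With identical completion and ground-truth sets all three measures reach 100; a completion far from the ground truth scores 0 accuracy.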
Completing the car observations from KITTI is extremely challenging, as each car instance only receives a few data points from the Lidar scanner. Fig 4 shows the qualitative results of our method on completing sparse point sets of KITTI cars; our network still generates highly plausible cars from such sparse inputs.

We also use a point-based object part segmentation network (Qi et al., 2017b) to indirectly evaluate our completions of real-world data. Due to the absence of ground truth segmentation, we calculate an approximate segmentation accuracy for each completion. Specifically, for the completion of a chair, we count the predicted segmentation label of each point as correct as long as the predicted label falls into the set of 4 parts (i.e., seat, back, leg, and armrest) of the chair class. Our completion results have much higher approximate segmentation accuracy than the real-world raw input (chair: $77.2\%$ vs. $24.8\%$; table: $96.4\%$ vs. $83.5\%$; and car: $98.0\%$ vs. $5.2\%$, as segmentation accuracy on our completions versus the original partial input), indicating high completion quality.

# 4.2 COMPARISON WITH BASELINES ON 3D-EPN DATA

We compare our method to several baseline methods and present both quantitative and qualitative comparisons on the 3D-EPN test set:

- Autoencoder (AE), which is trained only with clean and complete point sets.
- 3D-EPN (Dai et al., 2017b), a supervised method that requires SDF input and is trained with paired data. We convert its Distance Field representation results into surface meshes, from which we can uniformly sample $N$ points for calculating our point-based measures.

Table 2: Comparison with baselines on the 3D-EPN dataset. Note that 3D-EPN and PCN require paired supervision data, while ours does not. Ours outperforms 3D-EPN and achieves comparable results to PCN.
Furthermore, after being adapted to leverage the ground truth data as well, our method achieves similar performance to PCN.
| | AE | EPN (fully supervised) | PCN (fully supervised) | Ours (unsupervised) | Ours+ (supervised) |
|---|---|---|---|---|---|
| model | acc. / comp. / F1 | acc. / comp. / F1 | acc. / comp. / F1 | acc. / comp. / F1 | acc. / comp. / F1 |
| boat | 89.6 / 81.4 / 85.3 | 82.4 / 81.4 / 81.9 | 92.6 / 93.4 / 93.0 | 86.6 / 84.7 / 85.6 | 89.8 / 92.0 / 90.9 |
| car | 81.3 / 71.1 / 75.9 | 69.8 / 81.7 / 75.3 | 97.3 / 96.1 / 96.7 | 88.9 / 87.6 / 88.2 | 93.5 / 92.8 / 93.1 |
| chair | 79.9 / 68.5 / 73.8 | 61.7 / 76.9 / 68.5 | 91.1 / 90.6 / 90.9 | 78.7 / 77.4 / 78.0 | 82.3 / 83.3 / 82.8 |
| dresser | 68.9 / 64.2 / 66.5 | 58.4 / 72.7 / 64.8 | 93.5 / 91.5 / 92.5 | 75.8 / 76.5 / 76.2 | 87.4 / 91.5 / 89.4 |
| lamp | 75.9 / 79.6 / 77.7 | 60.8 / 67.8 / 64.1 | 82.9 / 88.3 / 85.5 | 71.3 / 80.2 / 75.5 | 76.6 / 86.3 / 81.2 |
| plane | 97.6 / 95.1 / 96.3 | 78.1 / 93.5 / 85.1 | 98.3 / 98.2 / 98.2 | 97.2 / 95.9 / 96.5 | 95.6 / 94.8 / 95.2 |
| sofa | 80.3 / 64.0 / 71.2 | 65.0 / 72.6 / 68.6 | 91.5 / 90.8 / 91.1 | 68.2 / 72.3 / 70.2 | 81.0 / 87.0 / 83.9 |
| table | 82.8 / 72.5 / 77.3 | 56.8 / 75.1 / 64.7 | 93.4 / 89.2 / 91.2 | 82.2 / 77.8 / 80.0 | 81.2 / 81.4 / 81.3 |
- PCN (Yuan et al., 2018), which completes partial inputs in a hierarchical manner, receiving supervision from both sparse and dense ground truth point clouds.
- Ours+, an adaptation of our method for training with paired data, to show that our method can be easily adapted to work with ground truth data, improving the completion. Specifically, we set $\alpha = 0$ and use the EMD loss as $L_{\text{recon}}$. More details and discussion about adapting our method to train with paired data can be found in the appendix.

Table 2 shows quantitative results on the 3D-EPN test split and summarizes the comparisons: although our network is trained only with unpaired data, it outperforms the 3D-EPN method and achieves comparable results to PCN. Note that both 3D-EPN and PCN require paired data. Furthermore, after adapting our method to be supervised by the ground truth, its performance (Ours+) improves, achieving similar performance to PCN. Note that a simple autoencoder network trained with only clean-complete data can produce quantitatively good results, especially when the input is rather complete. Thus, we also evaluate the performance of AE on our synthetic data with incompleteness control in Section 4.3, to show that AE performance declines dramatically as the incompleteness of the input increases. Additional comparisons are included in the appendix.

# 4.3 EFFECT OF DATA DISTRIBUTION DISCREPANCY AND VARYING INCOMPLETENESS

Supervised methods assume that simulated partial scans share the same data distribution as the test data. We conduct quantitative experiments to show that applying 3D-EPN and PCN to our synthetic data, which differs in distribution from their training data and for which ground truth complete scans are not available for training, leads to performance degradation.
The right sub-table of Table 1 shows that our method continues to produce good completions on our synthetic data, as we do not require paired data for training. The visual comparison is presented in the appendix.

To evaluate our method under different levels of input incompleteness, we conduct experiments on our synthetic data, in which we can control the fraction of missing points. Specifically, we train our network with varying levels of incompleteness by randomizing the fraction of missing points during training, and afterwards fix the fraction of missing points for testing. Table 3 shows the performance of our method on different classes under increasing levels of incompleteness, compared to AE. We can see that AE performance declines dramatically as the incompleteness of the input increases, while our method still produces completions with high plausibility and F1 scores.

Table 3: Effect of varying incompleteness. Performance of AE and ours with increasing incompleteness (% of missing points). Our completions remain robust even with increasing incompleteness, as our method restricts the completion via the learned latent shape manifolds.
| incomp. | car Plaus. (AE / Ours) | car F1 (AE / Ours) | chair Plaus. (AE / Ours) | chair F1 (AE / Ours) | table Plaus. (AE / Ours) | table F1 (AE / Ours) | plane Plaus. (AE / Ours) | plane F1 (AE / Ours) |
|---|---|---|---|---|---|---|---|---|
| 10 | 96.3 / 99.4 | 94.9 / 85.8 | 88.0 / 91.0 | 88.4 / 87.7 | 69.3 / 74.0 | 90.0 / 90.2 | 88.9 / 91.0 | 96.6 / 96.5 |
| 20 | 96.7 / 99.7 | 89.5 / 84.5 | 81.0 / 91.0 | 87.2 / 85.1 | 65.0 / 75.5 | 85.0 / 87.3 | 89.7 / 90.7 | 94.0 / 95.5 |
| 30 | 95.0 / 98.2 | 81.8 / 83.3 | 67.0 / 90.9 | 70.8 / 80.7 | 55.2 / 73.4 | 77.3 / 84.0 | 88.7 / 90.7 | 89.5 / 94.1 |
| 40 | 85.4 / 96.1 | 71.8 / 79.7 | 44.0 / 89.4 | 52.5 / 76.9 | 45.1 / 71.8 | 69.6 / 80.1 | 85.0 / 89.0 | 84.9 / 92.8 |
| 50 | 58.6 / 96.4 | 63.4 / 72.5 | 38.0 / 83.5 | 33.5 / 71.8 | 32.5 / 73.3 | 62.1 / 74.5 | 80.0 / 90.7 | 80.6 / 91.0 |
# 4.4 DIVERSITY OF THE COMPLETION RESULTS

Our network is encouraged to partially match the input, alleviating the mode-collapse issue that often occurs in GANs. Unlike the traditional generative modeling setting, where higher diversity in the generated results is always better, the diversity of our completion results should match that of the ground truth. Although this can be qualitatively assessed (see Fig. 5 in the appendix), to quantify the divergence we compute the Jensen-Shannon Divergence (JSD) between the marginal distribution of ground truth point sets and that of our completions, as proposed in Achlioptas et al. (2018). As a reference, we simulate extremely mode-collapsed point cloud sets by repeating a randomly selected point cloud, and report the JSD between the ground truth point sets and these simulated mode-collapsed sets. The JSD scores (lower is better; ours first, mode-collapse reference second) highlight the diversity of our 3D-EPN completions: 0.06 vs. 0.46 on cars, 0.05 vs. 0.61 on chairs, 0.04 vs. 0.53 on planes, and 0.04 vs. 0.59 on tables.

# 4.5 ABLATION STUDY

We compare our full method against the following variants:

- Ours with partial AE uses an encoder $E_{\gamma}^{r}$ and decoder $D_{\psi}^{r}$ trained to reconstruct partial point sets, to form the latent space of the partial input.
- Ours with EMD loss uses EMD as the reconstruction loss.
- Ours without GAN "switches off" the GAN module by simply setting $\alpha = 0$ and $\beta = 1$, to verify the effectiveness of adversarial training in our network.
- Ours without reconstruction loss removes the reconstruction loss term by simply setting $\alpha = 1$ and $\beta = 0$, to verify the effectiveness of the reconstruction term in the generator loss.
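The $\alpha$/$\beta$ settings in the last two variants suggest the generator objective is a weighted sum of an adversarial term and a reconstruction term. A hypothetical sketch of that combination (the weighted-sum form and the default values below are our assumptions for illustration, not stated by the paper):

```python
# Hypothetical sketch of the generator objective implied by the ablation
# settings: L_G = alpha * L_adv + beta * L_recon. Setting alpha = 0 and
# beta = 1 "switches off" the GAN term; alpha = 1 and beta = 0 removes the
# reconstruction term. Loss values are plain floats here for illustration.
def generator_loss(l_adv, l_recon, alpha=0.5, beta=0.5):
    # Weighted combination of adversarial and reconstruction terms.
    return alpha * l_adv + beta * l_recon

# The two extreme ablation configurations from the list above:
no_gan = generator_loss(0.8, 0.2, alpha=0.0, beta=1.0)    # recon only
no_recon = generator_loss(0.8, 0.2, alpha=1.0, beta=0.0)  # adversarial only
```

Under this reading, each ablation variant keeps the training loop unchanged and only reweights the two loss terms.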
Table 4 presents quantitative results for the ablation experiments, demonstrating the importance of the various design choices and modules in our proposed network. We can see that our full method performs best among all variants.

Table 4: Ablation study showing the importance of various design choices in our proposed network.
| | Ours w/ partial AE | Ours w/ EMD | Ours w/o GAN | Ours w/o Recon. | Ours |
|---|---|---|---|---|---|
| model | acc. / comp. / F1 | acc. / comp. / F1 | acc. / comp. / F1 | acc. / comp. / F1 | acc. / comp. / F1 |
| boat | 75.1 / 75.4 / 75.2 | 82.0 / 84.8 / 83.4 | 47.4 / 93.1 / 62.8 | 44.4 / 38.1 / 41.0 | 86.6 / 84.7 / 85.6 |
| car | 88.9 / 87.6 / 88.2 | 76.0 / 76.8 / 76.4 | 46.2 / 88.3 / 60.7 | 72.2 / 72.7 / 72.5 | 88.9 / 87.7 / 88.3 |
| chair | 64.1 / 66.7 / 65.4 | 78.6 / 76.4 / 77.5 | 41.3 / 79.8 / 54.4 | 75.6 / 75.1 / 75.3 | 78.7 / 77.4 / 78.0 |
| dresser | 67.4 / 68.6 / 68.0 | 71.4 / 72.3 / 71.9 | 44.2 / 74.4 / 55.4 | 20.9 / 21.9 / 21.4 | 75.8 / 76.5 / 76.2 |
| lamp | 64.0 / 74.8 / 69.0 | 69.9 / 79.0 / 74.2 | 28.6 / 84.7 / 42.8 | 15.6 / 22.2 / 18.3 | 71.3 / 80.2 / 75.5 |
| plane | 94.3 / 94.9 / 94.6 | 96.8 / 95.4 / 96.1 | 41.2 / 98.3 / 58.1 | 87.1 / 84.7 / 85.9 | 97.2 / 95.9 / 96.5 |
| sofa | 64.8 / 67.3 / 66.0 | 68.6 / 69.8 / 69.2 | 38.6 / 75.6 / 51.1 | 55.1 / 58.0 / 56.5 | 68.2 / 72.3 / 70.2 |
| table | 76.0 / 77.6 / 76.8 | 81.5 / 75.1 / 78.2 | 23.0 / 59.3 / 33.1 | 27.4 / 23.4 / 25.2 | 82.2 / 77.8 / 80.0 |
# 5 CONCLUSION

We presented a point-based unpaired shape completion framework that can be applied directly to raw partial scans to obtain clean and complete point clouds. At the core of the algorithm is an adaptation network acting as a generator that transforms latent code encodings of the raw point scans into latent code encodings of clean and complete object scans. The two latent spaces regularize the problem by restricting the transfer to the respective data manifolds. We extensively evaluated our method on real and virtual scans, demonstrating that our approach consistently leads to plausible completions and performs better than competing methods. The work opens up the possibility of generalizing our approach to scene-level scan completions, rather than object-specific completions. Our method shares the same limitations as many of its supervised counterparts: it does not produce fine-scale details and assumes the input to be canonically oriented. Another interesting future direction is to combine point- and image-features to apply the completion setup to both geometry and texture details.

# 6 ACKNOWLEDGEMENTS

We thank all the anonymous reviewers for their insightful comments and feedback. This work is supported in part by grants from the National Key R&D Program of China (2019YFF0302900), the China Scholarship Council, the National Natural Science Foundation of China (No. 61602273), an ERC Starting Grant, an ERC PoC Grant, a Google Faculty Award, a Royal Society Advanced Newton Fellowship, and gifts from Adobe.

# REFERENCES

Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3d point clouds. In International Conference on Machine Learning (ICML), pp. 40-49, 2018.
Adrian Bulat, Jing Yang, and Georgios Tzimiropoulos. To learn image super-resolution, use a gan to learn how to do image degradation first. In European Conference on Computer Vision (ECCV), pp. 185-200, 2018.
+Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017. +Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015. +Brian Curless and Marc Levoy. A volumetric method for building complex models from range images. 1996. +Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017a. +Angela Dai, Charles Ruizhongtai Qi, and Matthias Nießner. Shape completion using 3d-encoder-predictor cnns and shape synthesis. In International Conference on Computer Vision (ICCV), pp. 5868-5877, 2017b. +Angela Dai, Daniel Ritchie, Martin Bokeloh, Scott Reed, Jurgen Sturm, and Matthias Nießner. Scancomplete: Large-scale scene completion and semantic segmentation for 3d scans. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018. +Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems, pp. 2672-2680, 2014. +Paul Guerrero, Yanir Kleiman, Maks Ovsjanikov, and Niloy J Mitra. 
Pcpnet learning local shape properties from raw point clouds. In Computer Graphics Forum, volume 37, pp. 75-85, 2018. +Swaminathan Gurumurthy and Shubham Agrawal. High fidelity semantic shape completion for point clouds using latent optimization. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1099-1108. IEEE, 2019. +Xiaoguang Han, Zhen Li, Haibin Huang, Evangelos Kalogerakis, and Yizhou Yu. High-resolution shape completion using deep neural networks for global structure and local geometry inference. In International Conference on Computer Vision (ICCV), pp. 85-93, 2017. + +Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017. +Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4681-4690, 2017. +Jiaxin Li, Ben M Chen, and Gim Hee Lee. So-net: Self-organizing network for point cloud analysis. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9397–9406, 2018a. +Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. In Advances in Neural Information Processing Systems, 2018b. +Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Multi-class generative adversarial networks with the L2 loss function. CoRR, abs/1611.04076, 2016. +Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In International Conference on Computer Vision (ICCV), pp. 2794-2802, 2017. +Seong-Jin Park, Hyeongseok Son, Sunghyun Cho, Ki-Sang Hong, and Seungyong Lee. Srfeat: Single image super-resolution with feature discrimination. 
In European Conference on Computer Vision (ECCV), pp. 439-455, 2018. +Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 652-660, 2017a. +Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 918-927, 2018. +Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pp. 5099-5108, 2017b. +Abhishek Sharma, Oliver Grau, and Mario Fritz. Vconv-dae: Deep volumetric shape learning without object labels. In European Conference on Computer Vision (ECCV), pp. 236-250, 2016. +Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. Conference on Computer Vision and Pattern Recognition (CVPR), 2017. +David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1955-1964, 2018. +Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. Splatnet: Sparse lattice networks for point cloud processing. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2530-2539, 2018. +Duc Thanh Nguyen, Binh-Son Hua, Khoi Tran, Quang-Hieu Pham, and Sai-Kit Yeung. A field model for repairing 3d shapes. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5676-5684, 2016. +Weiyue Wang, Qiangui Huang, Suya You, Chao Yang, and Ulrich Neumann. Shape inpainting using 3d generative adversarial network and recurrent convolutional networks. In International Conference on Computer Vision (ICCV), pp. 
2298-2306, 2017. +Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In European Conference on Computer Vision (ECCV), pp. 63-79. Springer, 2018. + +Bo Yang, Stefano Rosa, Andrew Markham, Niki Trigoni, and Hongkai Wen. 3d object dense reconstruction from a single depth view. arXiv preprint arXiv:1802.00411, 1(2):6, 2018. +Raymond A Yeh, Chen Chen, Teck Yian Lim, Alexander G Schwing, Mark Hasegawa-Johnson, and Minh N Do. Semantic image inpainting with deep generative models. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5485-5493, 2017. +Kangxue Yin, Hui Huang, Daniel Cohen-Or, and Hao Zhang. P2p-net: bidirectional point displacement net for shape transform. ACM Transactions on Graphics (TOG), 37(4):152, 2018. +Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. Ec-net: an edge-aware point set consolidation network. In European Conference on Computer Vision (ECCV), pp. 386-402, 2018a. +Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. Pu-net: Point cloud upsampling network. In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2790-2799, 2018b. +Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 2018 International Conference on 3D Vision (3DV), pp. 728-737, 2018. +Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems, 2017. + +# A DETAILS OF DATASETS + +Clean and Complete Point Sets are obtained by virtually scanning the models from ShapeNet. We use a subset of 8 categories, namely boat, car, chair, dresser, lamp, plane, sofa and table, in our experiments. 
To generate the clean and complete point set of a model, we virtually scan it by performing ray-intersection tests from cameras placed around the model to obtain a dense point set, followed by a down-sampling procedure to obtain a relatively sparser point set of $N$ points. Note that we use the models without any pose or scale augmentation.

This dataset is used for training to learn the clean-complete point set manifold in all our experiments. The following datasets of different data distributions serve as different noisy-partial input data.

Real-world Data comes from three sources. The first is derived from the ScanNet dataset, which provides many mesh objects pre-segmented from their surrounding environment. For training and testing our network, we extract $\sim 550$ chair objects and $\sim 550$ table objects from the ScanNet dataset, and manually align them to be consistently oriented with models in the ShapeNet dataset. We also split these objects into $90\%/10\%$ train/test sets.

The second consists of 20 chairs and 20 tables from the Matterport3D dataset, to which we apply the same extraction and alignment as for the ScanNet data. Note that we train our method only on the ScanNet training split and use the trained model to test on the Matterport3D data, to show how our method generalizes to entirely unseen data. For both the ScanNet and Matterport3D datasets, we uniformly sample $N$ points on the surface mesh of each object to obtain the input point sets.

Last, we extract car observations from the KITTI dataset using the provided ground truth bounding boxes for training and testing our method. We use KITTI Velodyne point clouds from the 3D object detection benchmark and the split of Qi et al. (2018). We filter the observations such that each car observation contains at least 100 points, to avoid overly sparse observations.
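The down-sampling step that reduces the dense virtual scan to $N$ points is left unspecified above; farthest point sampling is one common choice for such a step and is used here purely as an illustrative assumption, not as the authors' procedure:

```python
# Farthest point sampling: a common way to down-sample a dense point set to
# n points (an assumption for illustration; the paper only says
# "a down-sampling procedure"). Pure Python, O(n * len(points)).
import math

def farthest_point_sampling(points, n, start=0):
    selected = [points[start]]
    # Distance from every point to the nearest already-selected point.
    dists = [math.dist(p, selected[0]) for p in points]
    while len(selected) < n:
        # Greedily add the point farthest from the current selection.
        idx = max(range(len(points)), key=dists.__getitem__)
        selected.append(points[idx])
        dists = [min(d, math.dist(p, points[idx]))
                 for d, p in zip(dists, points)]
    return selected
```

Greedy farthest-point selection keeps the retained points well spread over the surface, which is why it is a popular choice for sparsifying dense scans.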
3D-EPN Dataset provides partial reconstructions of ShapeNet objects (8 categories), produced by using the volumetric fusion method of Curless & Levoy (1996) to integrate depth maps scanned along a virtual scanning trajectory around the model. For each model, a set of trajectories is generated with different levels of incompleteness, reflecting real-world scanning with a hand-held commodity RGB-D sensor. The entire dataset covers 8 categories and a total of 25590 object instances (the test set is composed of 5384 models). Note that, in the original 3D-EPN dataset, the training data is represented as a Signed Distance Field (SDF) and the test data as a Distance Field (DF). As our method works on pure point sets, we only use the point cloud representations of the training data provided by the authors, instead of the SDF data, which holds richer information and is claimed in Dai et al. (2017b) to be crucial for completing partial data.

Synthetic Data serves to provide another dataset with a different partial-scan distribution and controllable input incompleteness. We use ShapeNet to generate a synthetic dataset in which we can control the incompleteness of the synthetic partial point sets. For each of the 4 categories (car, chair, plane, and table), we split the models into $90\%/10\%$ train/test sets. For each model, whose clean and complete point set has been scanned as described earlier in this subsection, we randomly pick a point and remove its $N\times r$ $(r\in [0,1))$ nearest neighbor points. The parameter $r$ controls the incompleteness of the synthetically-generated input. Furthermore, we add Gaussian noise $\mathcal{N}(\mu ,\sigma^2)$ to each point ($\mu = 0$ and $\sigma = 0.01$ for all our experiments). Last, we duplicate points in the resulting point set so that every point set contains exactly $N$ points.
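The synthetic partial-scan generation just described (random seed point, removal of its $N \times r$ nearest neighbors, Gaussian noise, duplication back to $N$ points) can be sketched as follows; function and parameter names are ours, not the authors':

```python
# Sketch of the synthetic partial-scan generation described above: pick a
# random seed point, drop its N*r nearest neighbours, add Gaussian noise
# N(mu, sigma^2), then duplicate points back to exactly N.
import math
import random

def make_partial(points, r, mu=0.0, sigma=0.01, rng=random):
    n = len(points)
    seed = rng.choice(points)
    # Keep everything except the N*r nearest neighbours of the seed point.
    by_dist = sorted(points, key=lambda p: math.dist(p, seed))
    kept = by_dist[int(n * r):]
    # Per-coordinate Gaussian noise.
    noisy = [tuple(c + rng.gauss(mu, sigma) for c in p) for p in kept]
    # Duplicate random points until the set is back to N points.
    while len(noisy) < n:
        noisy.append(rng.choice(noisy))
    return noisy
```

The returned set always has the same cardinality $N$ as the input, so downstream networks can assume fixed-size point sets.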
# B NETWORK ARCHITECTURE DETAILS

In this section, we describe the details of the encoder, decoder, generator, and discriminator in our network implementation.

# B.1 AE ARCHITECTURE DETAILS

The encoder consists of 5 1-D convolutional layers (kernel size 1, stride 1), each followed by ReLU and batch normalization, lifting the feature of each point independently into a high-dimensional feature space. In all experiments, we use an encoder with 64, 128, 128, 256, and $k = 128$ filters in its layers, with $k$ being the latent code size. The output of the last convolutional layer is passed through a feature-wise maximum to produce a $k$-dimensional latent code.

The decoder transforms the latent vector using 3 fully connected layers with 256, 256, and $N \times 3$ neurons, the first two with ReLUs, to reconstruct the $N \times 3$ output.

# B.2 GAN ARCHITECTURE DETAILS

Since the generator and discriminator of the GAN operate directly on the latent space, their architecture is significantly simpler. Specifically, the generator comprises two fully connected layers with 128 and 128 neurons, mapping the latent code of noisy and incomplete point sets to that of clean and complete point sets. The discriminator consists of 3 fully connected layers with 256, 512, and 1 neurons, producing a single scalar for each latent code.

# C TRAINING DETAILS

To make the training of the entire network tractable, we pre-train the AEs used for obtaining the latent spaces. After that, we freeze the AE weights; only the weights of the generator and discriminator are updated through back-propagation during GAN training. The following training hyper-parameters are used in all our experiments.

For training the AE, we use the Adam optimizer with an initial learning rate of 0.0005, $\beta_{1} = 0.9$, and a batch size of 200, and train for a maximum of 2000 epochs.
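The key property of the encoder in B.1 (per-point layers followed by a feature-wise maximum) is permutation invariance: the latent code does not depend on point order. A toy sketch of that symmetric aggregation, where the per-point map is a fixed hypothetical stand-in for the trained 1-D conv layers:

```python
# Toy sketch of the encoder's symmetric aggregation: every point is lifted
# independently by a per-point function, then a feature-wise maximum pools
# the per-point features into one latent code. The per-point map below is a
# fixed stand-in for the paper's trained 1-D convolutional layers.
def per_point_features(p):
    x, y, z = p
    # Hypothetical 4-dimensional lift of a single point.
    return (x + y + z, x * y, max(x, y, z), x - z)

def encode(points):
    feats = [per_point_features(p) for p in points]
    # Feature-wise maximum over all points -> order-independent latent code.
    return tuple(max(f[i] for f in feats) for i in range(len(feats[0])))
```

Because the maximum is symmetric in its arguments, `encode(points)` returns the same code for any reordering of the input point set.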
For training the generator and discriminator on the latent spaces, we use the Adam optimizer with an initial learning rate of 0.0001, $\beta_{1} = 0.5$, and a batch size of 24, and train the generator and discriminator alternately for a maximum of 1000 epochs.

# D QUALITATIVE RESULTS ON 3D-EPN DATASET

We present qualitative comparisons in Fig 5, where we show the partial input, the AE, 3D-EPN, PCN, Ours, and Ours+ results, and the ground truth point set. We can see that, although our method is not quantitatively the best, our results are qualitatively very plausible, as the generator is restricted to generating point sets from the learned clean and complete shape manifolds.

# E VISUAL COMPARISON ON TEST DATA WITH DISTRIBUTION DIFFERENT TO TRAINING DATA

We present a visual comparison of 3D-EPN, PCN, and our method on our synthetic data, which differs from the 3D-EPN and PCN training data. In this experiment, the ground truth of our synthetic data is used only for evaluation. In Fig 6, we can see that our method keeps producing high-quality completions: since it does not require paired data, it can still be trained when no ground truth is available. 3D-EPN and PCN produce much worse results, as the data distribution of their training data differs from that of our synthetic data.

# F VARIATIONS OF LEVERAGING GROUND TRUTH SUPERVISION

To adapt our network for training with ground truth point sets, we first change the reconstruction loss term in the generator loss from HD (Hausdorff Distance) to EMD (Earth Mover's Distance), as the ground truth point set is complete and thus contains full information for supervising the completion. Note that HD is superior when the ground truth is unavailable for training, as shown in Table 4 of Section 4. Moreover, we compare different decisions on whether to adopt adversarial training in our network when training with the ground truth.
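The HD reconstruction term discussed above can be read as the one-directional Hausdorff distance from the partial input to the completion, which only asks the completion to cover the observed points and leaves the missing region unconstrained; a minimal sketch:

```python
# One-directional Hausdorff distance from a source point set to a target
# point set: the worst-case distance from any source point to its nearest
# target point. Used here as a sketch of the HD reconstruction term.
import math

def directed_hausdorff(p_from, p_to):
    """max over p_from of the distance to the nearest point in p_to."""
    return max(min(math.dist(a, b) for b in p_to) for a in p_from)
```

`directed_hausdorff(partial, completion)` is zero whenever every input point also appears in the completion, even if the completion adds many extra points for the missing region, which is exactly why this direction suits unpaired training.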
![](images/17348538398fa0f01baedb30de6bab5f761faf224b1e08a1190f1dfc30c66c80.jpg)
Figure 5: Qualitative comparison on the 3D-EPN dataset.

![](images/34259642a43142df39213106167eaea364d1ca8ad39104af1ad3d30ea5137971.jpg)
Figure 6: Effect of data distribution discrepancy and qualitative comparison on our synthetic dataset.

- Ours (GT+EMD), also denoted Ours+ in the paper, removes the adversarial training by setting $\alpha = 0$ and not updating the discriminator weights, leaving only the EMD reconstruction loss for the generator.
- Ours (GT+EMD+GAN), in contrast, retains the adversarial training.

Table 5 shows the quantitative comparison: when ground truth point sets are available, Ours (GT+EMD) produces better results than Ours (GT+EMD+GAN). Our explanation for why adversarial training hurts here is that the ground truth, being complete, already contains all the information needed to supervise the network; adding adversarial training makes the network much harder to train, as the generator is penalized for failing to fool the discriminator even when it is moving the current output closer to the ground truth.

Table 5: Removing adversarial training when training with ground truth leads to a significant improvement.
| model | Ours (GT+EMD) acc. / comp. / F1 | Ours (GT+EMD+GAN) acc. / comp. / F1 |
|---|---|---|
| car | 93.5 / 92.8 / 93.1 | 80.5 / 77.9 / 79.2 |
| chair | 82.3 / 83.3 / 82.8 | 51.5 / 58.1 / 54.6 |
| plane | 95.6 / 94.8 / 95.2 | 91.4 / 86.3 / 88.8 |
| table | 81.2 / 81.4 / 81.3 | 37.9 / 39.3 / 38.6 |
# G MORE STATISTICS FOR THE BASELINE COMPARISON

For the baseline methods comparison on the 3D-EPN dataset, we also report the Chamfer distance (CD), Earth Mover's Distance (EMD), and Hausdorff Distance (HD, the maximum of the two directional distances) between the ground truth and the completion in Table 6:

Table 6
| | AE | EPN | PCN | Ours | Ours+ |
|---|---|---|---|---|---|
| model | CD / EMD / HD | CD / EMD / HD | CD / EMD / HD | CD / EMD / HD | CD / EMD / HD |
| boat | 0.0012 / 0.0530 / 0.0864 | 0.0009 / 0.0500 / 0.0562 | 0.0006 / 0.0437 / 0.0635 | 0.0011 / 0.0532 / 0.0857 | 0.0008 / 0.0455 / 0.0799 |
| car | 0.0019 / 0.0668 / 0.1093 | 0.0024 / 0.0744 / 0.0989 | 0.0005 / 0.0418 / 0.0648 | 0.0010 / 0.0434 / 0.0763 | 0.0007 / 0.0393 / 0.0677 |
| chair | 0.0031 / 0.1003 / 0.1374 | 0.0016 / 0.0704 / 0.0877 | 0.0009 / 0.0586 / 0.0832 | 0.0020 / 0.0773 / 0.1010 | 0.0015 / 0.0619 / 0.0915 |
| dresser | 0.0037 / 0.0985 / 0.1295 | 0.0027 / 0.0783 / 0.0963 | 0.0008 / 0.0545 / 0.0771 | 0.0019 / 0.0588 / 0.0833 | 0.0011 / 0.0482 / 0.0734 |
| lamp | 0.0026 / 0.0857 / 0.1092 | 0.0038 / 0.0966 / 0.1154 | 0.0013 / 0.0692 / 0.0890 | 0.0023 / 0.0848 / 0.1073 | 0.0018 / 0.0729 / 0.1002 |
| plane | 0.0004 / 0.0346 / 0.0591 | 0.0060 / 0.0943 / 0.1255 | 0.0002 / 0.0308 / 0.0394 | 0.0004 / 0.0338 / 0.0545 | 0.0005 / 0.0405 / 0.0685 |
| sofa | 0.0030 / 0.0792 / 0.1232 | 0.0045 / 0.0880 / 0.1093 | 0.0008 / 0.0494 / 0.0651 | 0.0026 / 0.0655 / 0.0928 | 0.0012 / 0.0536 / 0.0958 |
| table | 0.0044 / 0.0886 / 0.1518 | 0.0014 / 0.0681 / 0.0975 | 0.0010 / 0.0603 / 0.0968 | 0.0026 / 0.0684 / 0.1071 | 0.0021 / 0.0681 / 0.1224 |
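For reference, the CD and EMD reported above can be sketched as follows. The EMD here is computed by brute force over permutations, feasible only for tiny equal-size sets (practical implementations use approximate assignment solvers), and the CD convention (mean squared nearest-neighbor distance, summed over both directions) is one common choice rather than a definition taken from the paper:

```python
# Sketches of the Chamfer distance (CD) and Earth Mover's Distance (EMD).
# EMD is brute force over permutations, so it is only usable for tiny
# equal-size point sets; this is an illustration, not a practical solver.
import itertools
import math

def chamfer(p1, p2):
    """Symmetric mean squared nearest-neighbour distance (one common CD)."""
    d12 = sum(min(math.dist(a, b) ** 2 for b in p2) for a in p1) / len(p1)
    d21 = sum(min(math.dist(b, a) ** 2 for a in p1) for b in p2) / len(p2)
    return d12 + d21

def emd(p1, p2):
    """Min over one-to-one matchings of the average pairwise distance."""
    assert len(p1) == len(p2)
    return min(
        sum(math.dist(a, b) for a, b in zip(p1, perm)) / len(p1)
        for perm in itertools.permutations(p2)
    )
```

Both distances vanish for identical point sets, and EMD is invariant to the ordering of the points because it minimizes over all matchings.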
# H USER STUDY ON REAL-WORLD DATA COMPLETION

We also conducted a user study on the completion results of the real-world scans: given a partial input, users were asked to pick the most preferable completion among the EPN, PCN, and our results. In total, we received 1,000 valid user selections and report the preference (in percentage of the total selections) of each method. From Fig. 7 we can see that in over half ($52\%$) of the selections our completion is chosen as the best, while PCN is preferred in $45\%$ of the selections.

Interestingly, although our method outperforms the others in the user study, it clearly does not hold a dominant position. After a more in-depth analysis of the user selections, we found that users tend to pick the completion in which the partial input is embedded, which can be formulated as the Hausdorff distance from the partial input to the completion; the supervised method PCN preserves the input point cloud in its completion output, as this always minimizes its distance loss. The plausibility of the completion result is usually neglected by users, while plausibility and the Hausdorff distance from the partial input to the completion are both trade-off terms in the objective function of our completion generator.

![](images/53983e4c19a249e2760feabe626040e88cbdd96d588d65b9ae5a6aee62c5a98c.jpg)
Figure 7

![](images/bd04634af8d9f51f0639db23dd72155710a5a1c3d285b2c130bd7d9fed7f657a.jpg)
Input

![](images/9bbea9c72d69777bc2450f1179ec3ee61f0cae57b1aed6c2770b3a0c43083516.jpg)
PCN

![](images/99d9f1e661aff1d8d91c9b138d927a15beb0b8305f2f141e54859a0f2a2e6941.jpg)
Ours
Figure 8

Fig. 8 gives a typical example: the PCN completion without a clear chair structure is often picked as the better completion, while our completion trades off between the HD term and plausibility.
+ +# I GENERALIZATION TO UNSEEN CLASSES + +![](images/6a80ed63dd13f7099a8b744be4a1082eca7c46dde46116863cd1f96c1bc0bafe.jpg) +Figure 9 + +Our method does generalize to unseen objects from the same category in the test set, as demonstrated in the experimental results section. However, our method intuitively should not generalize to classes that are not seen during training, as the autoencoder, which is a fundamental component in our network, does not generalize to unseen classes. We present qualitative results of applying our table completion network to the chair and airplane classes. From Fig. 9 we can see that on the unseen chair class, which shares a similar structure with tables, our table completion network can produce some reasonable structures, but is unable to complete a seat back, as the autoencoder does not have that capability; on the unseen airplane class, which is rather dissimilar to the table class, our table completion network fails to complete the partial airplanes.
\ No newline at end of file diff --git a/unpairedpointcloudcompletiononrealscansusingadversarialtraining/images.zip b/unpairedpointcloudcompletiononrealscansusingadversarialtraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ecdc42ee30ee2dc63bc3ac8e66369bcfb00dd4d5 --- /dev/null +++ b/unpairedpointcloudcompletiononrealscansusingadversarialtraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd686b81b939a1c00098b1eafef769601d707e4c251927d3605faa4cbafa7fe4 +size 718892 diff --git a/unpairedpointcloudcompletiononrealscansusingadversarialtraining/layout.json b/unpairedpointcloudcompletiononrealscansusingadversarialtraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a392b12753b08580625e37ec86ef8d13fcd1a253 --- /dev/null +++ b/unpairedpointcloudcompletiononrealscansusingadversarialtraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bddf6c312f9ed2a1d99107457abef2838e6ca8bd7fe7a0966ccb1fcfa0179de8 +size 469167 diff --git a/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_content_list.json b/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..624a012c496d4ddc5927802e95645149b746dd75 --- /dev/null +++ b/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2345fd77b35af7116485224f73052dbe0759b15d461f9beb94add42918107b17 +size 106271 diff --git a/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_model.json b/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..603285cdd4786e009c1e4d786a28188e900fa2b1 --- /dev/null +++ b/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed90b21d639124a80dffa934885698291412f0f6848d94e13e522e1cf908a84a +size 127678 diff --git a/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_origin.pdf b/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93129d5172c3af1e1c1238e6f0bff194a938883b --- /dev/null +++ b/unrestrictedadversarialexamplesviasemanticmanipulation/0046c3f5-a307-49cd-9f48-903ecb815242_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ee542c5fb2ea904d5aafba9c432d3145697c1279234442f50aa6187eec01786c +size 33254621 diff --git a/unrestrictedadversarialexamplesviasemanticmanipulation/full.md b/unrestrictedadversarialexamplesviasemanticmanipulation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1b67157c484f6537ab3daaee88d9343abfdc4cdd --- /dev/null +++ b/unrestrictedadversarialexamplesviasemanticmanipulation/full.md @@ -0,0 +1,413 @@ +# UNRESTRICTED ADVERSARIAL EXAMPLES VIA SEMANTIC MANIPULATION + +Anand Bhattach* Min Jin Chong* Kaizhao Liang Bo Li D. A. Forsyth + +University of Illinois at Urbana-Champaign + +{bhattad2, mchong6, k12, lbo, daf}@illinois.edu + +# ABSTRACT + +Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable against adversarial examples which are carefully crafted samples with a small magnitude of the perturbation. Such adversarial perturbations are usually restricted by bounding their $\mathcal{L}_p$ norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. 
In this paper, we instead introduce "unrestricted" perturbations that manipulate semantically meaningful image-based visual descriptors – color and texture – in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing and adversarially trained models. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large-magnitude perturbations when compared to other attacks. + +# 1 INTRODUCTION + +Machine learning (ML), especially deep neural networks (DNNs), have achieved great success in various tasks, including image recognition (Krizhevsky et al., 2012; He et al., 2016), speech processing (Hinton et al., 2012) and robotics training (Levine et al., 2016). However, recent literature has shown that these widely deployed ML models are vulnerable to adversarial examples – carefully crafted perturbations aiming to mislead learning models (Carlini & Wagner, 2017; Kurakin et al., 2016; Xiao et al., 2018b). The fast growth of DNN-based solutions demands in-depth studies on adversarial examples to help better understand potential vulnerabilities of ML models and thereby improve their robustness. + +To date, a variety of different approaches have been proposed to generate adversarial examples (Goodfellow et al., 2014b; Carlini & Wagner, 2017; Kurakin et al., 2016; Xiao et al., 2018a); many of these attacks search for a perturbation within a bounded $\mathcal{L}_p$ norm in order to preserve their photorealism. However, it is known that the $\mathcal{L}_p$ norm distance is not an ideal perceptual similarity metric (Johnson et al., 2016; Isola et al., 2017).
In addition, recent work shows that defenses trained on $\mathcal{L}_p$ bounded perturbations are not robust against new types of unseen attacks (Kang et al., 2019). Therefore, exploring diverse adversarial examples, especially those with "unrestricted" perturbation magnitude, has attracted much attention in both academia and industry (Brown et al., 2018). + +Recent work based on generative adversarial networks (GANs) (Goodfellow et al., 2014a) has introduced unrestricted attacks (Song et al., 2018). However, these attacks are limited to datasets like MNIST, CIFAR and CelebA, and are usually unable to scale up to bigger and more complex datasets such as ImageNet. Xiao et al. (2018b) directly manipulated the spatial pixel flow of an image to produce adversarial examples without $\mathcal{L}_p$ bounded constraints on the perturbation. However, that attack does not explicitly control the visual semantic representation. More recently, Hosseini & Poovendran (2018) manipulated the hue and saturation of an image to create adversarial perturbations. However, these examples are easily distinguishable by humans and are also not scalable to complex datasets.
+ +![](images/d91e0f967341db44433de90492a3ec26d315dd873563bd39c77140ae3fca066d.jpg) +Colorization Attack (cAdv) + +![](images/98d10f39e275606a7855d85735a953ecc9a5eddfb83bb4058ab97230a9af5d9d.jpg) +Color Hints+Mask + +![](images/81b407505fcc4c9b52dd0f6ef5e93bb4f070ef9fb679f5fbe16f38ad94d0b72e.jpg) +Colorization Model + +![](images/c31734b7282579087d33ccafa1aab8330ade7c8c9f26d8633b43f7722dc5943b.jpg) + +![](images/cb6f7d71356697eea5077ce82db4927cf8d719992f3fd236d6b72db48bdbd181.jpg) + +![](images/71085bc554e051abea78bfb1ba742ad5844125195bc206a8ed146299c93296f1.jpg) + +![](images/fb538bb16e136248f93bc221bf8f1755d042c9fb02bf4140653768a69a35c450.jpg) + +![](images/cda3135a8a57e46cb6c24bec946b89efc76ebdabb7c6ac68534cfd287367acb7.jpg) + +![](images/7d5d865b768ec4bac6c266d744a5808b3028e0cf2e511e7bfda20c16296558ee.jpg) + +![](images/0f9e9fd5c7249dcc3b0d6de2872f82fb06b6df9fe82168f920a9611af37d874e.jpg) +Texture Attack (tAdv) + +![](images/50d9c8aa5c49b1585a6bfdf6cfc7b132d2ce128690cd72d89132d7a48accd6a1.jpg) + +![](images/f283c24530db603d34317aec47e211fd8e2856c6b4b66341af1ff18405fa35ae.jpg) +Texture-Transfer Model + +![](images/0c30bb6c90777c8cb5f226b6bb9aa03174e50beee28cc7ba88cabfec18043969.jpg) + +![](images/46eb102bc47311483d3730f31ffd249f26592c0b52808a8fe47c5f85b09f1bad.jpg) + +![](images/fe91b1b3b5552125fd934c1f11c8ba7d642b0437ed7d956f8016eec4937dce7a.jpg) + +![](images/12f5e52d41e60776d5fed5b4261406ed77d95ff1937a7de6e83bbda56c6a7e56.jpg) + +![](images/eaebb19cb22d7122a168c6d83f75e50d7d747e45282fcfbdcc2e18d4b2756e8e.jpg) + +![](images/66610326a01af5b521a8c74a9f9eeda7d9d99b410747ef67bcd073507270b614.jpg) + +![](images/859ddc8ff32781e634739fdc369fb65b04cee6cbdd85dd4342fae2c4f7db3394.jpg) +C +Figure 1: An overview of proposed attacks. Top: Colorization attack (cAdv); Bottom: Texture transfer attack (tAdv). Our attacks achieve high attack success rate via semantic manipulation without any constraints on the $\mathcal{L}_p$ norm of the perturbation. 
Our methods are general and can be used for attacking both classifiers and captioners. + +![](images/45063f3386f9a04bb502fdd116742b41987790c9a70de7f16ef1d9cfe23b05e9.jpg) +Captioner + +![](images/98a76b248bec08f31446d20a2a3fbcb13df9a46ac369b568a311bec79223f2db.jpg) + +![](images/6a3391c09d2942a66bf9a11209d06194e337d0ac5a76fbf7258d69c346b6d75a.jpg) +tAdv Perturbation + +In this work, we propose unrestricted attack strategies that explicitly manipulate semantic visual representations to generate natural-looking adversarial examples that are "far" from the original image in terms of the $\mathcal{L}_p$ norm distance. In particular, we manipulate color (cAdv) and texture (tAdv) to create realistic adversarial examples (see Fig 1). cAdv adaptively chooses locations in an image to change their colors, producing adversarial perturbations that are usually fairly substantial, while tAdv utilizes the texture from other images and adjusts the instance's texture field using style transfer. + +These semantic transformation-based adversarial perturbations shed light upon the understanding of what information is important for DNNs to make predictions. For instance, in one of our case studies, when the road is recolored from gray to blue, the image gets misclassified as tench (a fish) although a car remains evidently visible (Fig. 2b). This indicates that deep learning models can easily be fooled by certain large-scale patterns. In addition to image classifiers, the proposed attack methods can be generalized to different machine learning tasks such as image captioning (Karpathy & Fei-Fei (2015)). Our attacks can either change the entire caption to the target (Chen et al., 2017; Xu et al., 2019) or take on more challenging tasks like changing one or two specific target words from the caption to a target. For example, in Fig. 1, "stop sign" of the original image caption is changed to "cat sitting" and "umbrella is" for cAdv and tAdv respectively.
+ +To ensure our "unrestricted" semantically manipulated images are natural, we conducted extensive user studies with Amazon Mechanical Turk. We also tested our proposed attacks on several state-of-the-art defenses. Rather than just showing the attacks break these defenses (better defenses will come up), we aim to show that cAdv and tAdv are able to produce new types of adversarial examples. Experiments also show that our proposed attacks are more transferable given their large and structured perturbations (Papernot et al., 2016). Our semantic adversarial attacks provide further insights about the vulnerabilities of ML models and therefore encourage new solutions to improve their robustness. + +In summary, our contributions are: 1) We propose two novel approaches to generate "unrestricted" adversarial examples via semantic transformation; 2) We conduct extensive experiments to attack both image classification and image captioning models on large-scale datasets (ImageNet and MSCOCO); 3) We show that our attacks are equipped with unique properties such as smooth cAdv perturbations and structured tAdv perturbations; 4) We perform comprehensive user studies to show that, when compared to other attacks, our generated adversarial examples appear more natural to humans despite their large perturbations; 5) We test different adversarial examples against several state-of-the-art defenses and show that the proposed attacks are more transferable and harder to defend against. + +# 2 COLORIZATION ATTACK (cADV) + +Background. Image colorization is the task of giving natural colorization to a grayscale image. This is an ill-posed problem as there are multiple viable natural colorizations given a single grayscale + +![](images/82df9d059419d52a12305789a86728da4c3e1f1e0c0a2f467a138cd92c91918a.jpg) +Figure 2: Class color affinity. Samples from unconstrained cAdv attacking network weights with zero hints provided. For (a) the ground truth (GT) class is pretzel; and for (b) the GT is car.
The new colors added are commonly found in images from the target class, for instance, green in Golfcart and blue sea in Tench images. + +image. Deshpande et al. (2017) showed that diverse image colorization can be achieved by using an architecture that combines a VAE (Kingma & Welling (2013)) and a Mixture Density Network, while Zhang et al. (2017) demonstrated improved and diverse image colorization by using input hints from users to guide the colorization process. + +Our goal is to adversarially color an image by leveraging a pretrained colorization model. We hypothesize that it is possible to find a natural colorization that is adversarial for a target model (e.g., classifier or captioner) by searching in the color space. Since a colorization network learns to produce natural colors that conform to boundaries and respect short-range color consistency, we can use it to introduce smooth and consistent adversarial noise with a large magnitude that looks natural to humans. This attack differs from common adversarial attacks, which tend to introduce short-scale high-frequency artifacts that are minimized to be invisible to human observers. + +We leverage the Zhang et al. (2016; 2017) colorization model for our attack. In their work, they produce natural colorizations on ImageNet with input hints from the user. The inputs to their network consist of the L channel of the image in CIELAB color space $X_{L} \in \mathbb{R}^{H \times W \times 1}$ , the sparse colored input hints $X_{ab} \in \mathbb{R}^{H \times W \times 2}$ , and the binary mask $M \in \mathbb{B}^{H \times W \times 1}$ indicating the location of the hints. + +cAdv Objectives. There are a few ways to leverage the colorization model to achieve adversarial objectives. We experimented with two main methods and achieved varied results. + +Network weights. The straightforward approach of producing adversarial colors is to modify the Zhang et al. (2017) colorization network $\mathcal{C}$ directly.
To do so, we simply update $\mathcal{C}$ by minimizing the adversarial loss objective $J_{adv}$ , which in our case is the cross-entropy loss. $t$ represents the target class and $\mathcal{F}$ represents the victim network. + +$$
+\theta^ {*} = \underset {\theta} {\arg \min } J _ {a d v} (\mathcal {F} (\mathcal {C} (X _ {L}, X _ {a b}, M; \theta)), t) \tag {1}
+$$ + +Hints and mask. We can also vary input hints $X_{ab}$ and mask $M$ to produce adversarial colorizations. Hints provide the network with ground truth color patches that guide the colorization, while the mask provides their spatial locations. By jointly varying both hints and mask, we are able to manipulate the output colorization. We can update the hints and mask as follows: + +$$
+M ^ {*}, X _ {a b} ^ {*} = \underset {M, X _ {a b}} {\arg \min } J _ {a d v} \left(\mathcal {F} \left(\mathcal {C} \left(X _ {L}, X _ {a b}, M; \theta\right)\right), t\right) \tag {2}
+$$ + +cAdv Attack Methods. Attacking network weights allows the network to search the color space for adversarial colors with no constraints. This attack is the easiest to optimize, but the output colors are not realistic, as shown in Fig. 2. Our various strategies outlined below are ineffective here, as the model learns to generate the adversarial colors without taking color realism into account. However, the colorizations produced often correlate with colors observed in the target class. This suggests that classifiers associate certain colors with certain classes, which we discuss further in our case study. + +Attacking input hints and mask jointly gives us natural results, as the pretrained network will not be affected by our optimization. Attacking hints and mask separately also works but takes a long optimization time and gives slightly worse results. For our experiments, we use the Adam optimizer (Kingma & Ba (2014)) with a learning rate of $10^{-4}$ in cAdv.
We iteratively update hints and mask until our adversarial image reaches the target class and the confidence change between consecutive iterations does not exceed a threshold of 0.05. + +![](images/cad3a5cbb95c37bae3997ecf4ee91d8a8d283c6366a6b2351b91de2a2a2c0e66.jpg) +Figure 3: Controlling cAdv. We show a comparison of sampling 50 color hints from k clusters with low entropy. All images are attacked to misclassify as golf-cart. The second and fourth rows visualize our cluster segments, with darker colors representing higher mean entropy and red dots representing the sampled hint locations. Sampling hints across more clusters gives less color variety. + +Control over colorization. Current attack methods lack control over where the attack occurs, opting to attack all pixels indiscriminately. This lack of control is not important for most attacks, where the $\epsilon$ is small, but is concerning in cAdv, where making unstructured large changes can be jarring. To produce realistic colorization, we need to avoid making large color changes at locations where colors are unambiguous (e.g. roads in general are gray) and focus on those where colors are ambiguous (e.g. an umbrella can have different colors). To do so, we need to segment an image and determine which segments should be attacked or preserved. + +To segment the image into meaningful areas, we cluster the image's ground truth AB space using K-Means. We first use a Gaussian filter of $\sigma = 3$ to smooth the AB channels and then cluster them into 8 clusters. Then, we have to determine which clusters' colors should be preserved. Fortunately, the Zhang et al. (2017) network outputs a per-pixel color distribution for a given image, which we use to calculate the entropy of each pixel. The entropy represents how confident the network is at assigning a color at that location. The average entropy of each cluster represents how ambiguous its colors are.
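The iterative update of Eq. 2 with the stopping rule above can be sketched with toy stand-ins. In this minimal numpy illustration (ours, not the paper's code), the colorizer and classifier are frozen linear maps so the cross-entropy gradient with respect to the hints is analytic; the real attack instead backpropagates through the pretrained networks with Adam at a learning rate of $10^{-4}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): a frozen linear "colorizer" C and a frozen
# linear-softmax "classifier" F over an 8x8 image's AB channels.
H = W = 8
dim = H * W * 2
n_classes, target = 5, 3
Wc = rng.normal(size=(dim, dim)) * 0.1        # colorizer weights (frozen)
Wf = rng.normal(size=(n_classes, dim)) * 0.1  # classifier weights (frozen)

def colorize(hints):            # stands in for C(X_L, X_ab, M)
    return Wc @ hints

def classify(ab):               # stands in for F: softmax over logits
    z = Wf @ ab
    e = np.exp(z - z.max())
    return e / e.sum()

hints = rng.normal(size=dim) * 0.01  # the X_ab hints being optimized
lr, prev_conf = 0.1, 0.0             # toy uses plain gradient descent
for _ in range(500):
    p = classify(colorize(hints))
    # d(cross-entropy)/d(hints) by the chain rule: (p - onehot_t) Wf Wc
    g = p.copy()
    g[target] -= 1.0
    hints -= lr * (g @ Wf @ Wc)
    # stop once the target class is reached and the confidence change
    # between consecutive iterations falls below 0.05
    if p.argmax() == target and abs(p[target] - prev_conf) < 0.05:
        break
    prev_conf = p[target]
```

The loop shape (optimize the hints, monitor target confidence, stop on a plateau) is what carries over to the real attack; all dimensions and weights here are made up.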
We want to avoid making large changes to clusters with low entropy while allowing our attack to change clusters with high entropy. One way to enforce this behavior is through hints, which are sampled from the ground truth at locations belonging to clusters of low entropy. We sample hints from the $k$ clusters with the lowest entropy, which we refer to as $\mathbf{cAdv}_k$ (e.g. $\mathbf{cAdv}_2$ samples hints from the 2 lowest-entropy clusters). + +Number of input hints. Network hints constrain our output to have similar colors as the ground truth, avoiding the possibility of unnatural colorization at the cost of color diversity. This trade-off is controlled by the number of hints given to the network as initialization (Fig. 4). Generally, providing more hints gives us colors similar to those observed in the original image. However, too many hints is also problematic: it makes the optimization between drawing adversarial colors and matching local color hints difficult. Since more hints further constrain the search space for adversarial colors, we may instead generate unrealistic examples. + +Number of Clusters. The trade-off between color diversity and color realism is also controlled by the number of clusters we sample hints from, as shown in Fig. 3. Sampling from multiple clusters gives us realistic colors closer to the ground truth image at the expense of color diversity. + +![](images/d5c5f4ebce9aa9c745375581eb5df41bf4421e233a53c206c9fe5b76f5974026.jpg) +Figure 4: Number of color hints required for cAdv. All images are attacked to Merganser with $k = 4$ . When the number of hints increases (from left to right), the output colors are more similar to the ground truth. However, when the number of hints is too high (500), cAdv often generates unrealistic perturbations. This is due to a harder optimization for cAdv to both add adversarial colors and match GT color hints. cAdv is effective and realistic with a balanced number of hints.
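The cluster-and-entropy hint-sampling procedure above can be sketched end to end. The following numpy toy is our illustration: real inputs would be the Gaussian-smoothed AB channels and the colorization network's per-pixel entropy map, and the entropy values here are random placeholders. It clusters AB values with a small K-Means loop, ranks clusters by mean entropy, and samples hint locations only from the $k$ lowest-entropy clusters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumptions): 8x8 "image" with per-pixel AB colors (already
# smoothed) and a per-pixel entropy map from the colorization network.
H = W = 8
ab = rng.normal(size=(H * W, 2))      # flattened AB channels
entropy = rng.uniform(size=H * W)     # per-pixel color-distribution entropy

def kmeans(x, k=8, iters=20):
    # minimal K-Means; a library implementation would normally be used
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

labels = kmeans(ab, k=8)

# mean entropy per cluster: low entropy = the network is sure of the color
cluster_entropy = np.array(
    [entropy[labels == j].mean() if (labels == j).any() else np.inf
     for j in range(8)])

# cAdv_k: hints come only from the k lowest-entropy clusters
k = 4
keep = np.argsort(cluster_entropy)[:k]
candidates = np.flatnonzero(np.isin(labels, keep))
hint_locs = rng.choice(candidates, size=min(50, len(candidates)),
                       replace=False)
```

On a real image there are enough candidate pixels to draw the full 50 hints; the `min(...)` guard only matters for this tiny toy grid.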
+ +Empirically, we find that in terms of color diversity, realism, and robustness of attacks, using $k = 4$ and 50 hints gives us better adversarial examples. For the rest of this paper, we fix 50 hints for all $\mathbf{cAdv}_k$ methods. + +# 3 TEXTURE ATTACK (tADV) + +Background. Texture transfer extracts texture from one image and adds it to another. Transferring texture from a source image to a target image has been widely studied in computer vision (Efros & Freeman (2001); Gatys et al. (2015)). The Convolutional Neural Network (CNN) based texture transfer of Gatys et al. (2015) led to a series of new ideas in the domain of artistic style transfer (Gatys et al. (2016); Huang & Belongie (2017); Li et al. (2017); Yeh et al. (2019)). More recently, Geirhos et al. (2018) showed that DNNs trained on ImageNet are biased towards texture for making predictions. + +Our goal is to generate adversarial examples by infusing texture from another image without explicit constraints on the $\mathcal{L}_p$ norm of the perturbation. For generating our tAdv examples, we use a pretrained VGG19 network (Simonyan & Zisserman, 2014) to extract textural features. We directly optimize our victim image $(I_v)$ by adding texture from a target image $(I_t)$ . A natural strategy to transfer texture is to minimize within-layer feature correlation statistics (gram matrices) between two images (Gatys et al. (2015; 2016)). Following Yeh et al. (2019), we find that optimizing cross-layer gram matrices instead of within-layer gram matrices helps produce more natural-looking adversarial examples. The difference is that for the within-layer version, feature statistics are computed within the same layer, whereas for the cross-layer version they are computed between two adjacent layers. + +tAdv Objectives. tAdv directly attacks the image to create adversarial examples without modifying network parameters.
Moreover, there is no additional content loss as used in style transfer methods (Gatys et al. (2016); Yeh et al. (2019)). Our overall objective function for the texture attack contains a texture transfer loss $(L_t^A)$ and a cross-entropy loss $(J_{adv})$ . + +$$
+L _ {\mathbf {t} \mathrm {A d v}} ^ {\mathcal {A}} = \alpha L _ {t} ^ {\mathcal {A}} \left(I _ {v}, I _ {t}\right) + \beta J _ {a d v} \left(\mathcal {F} \left(I _ {v}\right), t\right) \tag {3}
+$$ + +Unlike style transfer methods, we do not want the adversarial examples to be artistically pleasing. Our goal is to infuse a reasonable texture from a target class image into the victim image and fool a classifier or captioning network. To ensure a reasonable texture is added without overly perturbing the victim image, we introduce an additional constraint on the variation in the gram matrices of the victim image. This constraint helps us control the image transformation procedure and prevents it from producing artistic images. Let $m$ and $n$ denote two layers of a pretrained VGG-19 with decreasing spatial resolution and $C$ the number of filter maps in layer $n$ ; our texture transfer loss is then given by + +$$
+L _ {t} ^ {\mathcal {A}} \left(I _ {v}, I _ {t}\right) = \sum_ {(m, n) \in \mathcal {L}} \frac {1}{C ^ {2}} \sum_ {i j} \frac {\left\| G _ {i j} ^ {m , n} \left(I _ {v}\right) - G _ {i j} ^ {m , n} \left(I _ {t}\right) \right\| ^ {2}}{\operatorname {s t d} \left(G _ {i j} ^ {m , n} \left(I _ {v}\right)\right)} \tag {4}
+$$ + +![](images/9d40d3d478c790467b5f946b5236defb92b6267d7851696e5b75656b0657a108.jpg) +Figure 5: tAdv strategies. Texture transferred from a random "texture source" $(T_{s})$ (row 1), a random target class $T_{s}$ (row 2), and the nearest target class $T_{s}$ (row 3). All examples are misclassified from Beacon to Nautilus.
Images in the last row look photorealistic, while those in the first two rows contain more artifacts as the texture weight $\alpha$ increases (left to right). + +Let $f$ be feature maps and $\mathcal{U}f^n$ be an upsampled $f^n$ that matches the spatial resolution of layer $m$ . The cross-layer gram matrix $G$ , computed for both the victim image $(I_v)$ and a target image $(I_t)$ , is given as + +$$
+G _ {i j} ^ {m, n} (I) = \sum_ {p} \left[ f _ {i, p} ^ {m} (I) \right] \left[ \mathcal {U} f _ {j, p} ^ {n} (I) \right] ^ {T} \tag {5}
+$$ + +Texture Transfer. To create tAdv adversarial examples, we need to find images to extract the texture from, which we call the "texture source" $(T_{s})$ . A naive strategy is to randomly select an image from the data bank as $T_{s}$ . Though this strategy is successful, the resulting perturbations are clearly perceptible. Alternatively, we can randomly select $T_{s}$ from the adversarial target class. This strategy produces less perceptible perturbations than the random-$T_{s}$ method, as we are extracting a texture from the known target class. A better strategy is to find a target class image that lies closest to the victim image in feature space using nearest neighbors. This strategy is sensible as we ensure our victim image has feature statistics similar to our target image. Consequently, minimizing gram matrices is easier and our attack generates more natural-looking images (see Fig. 5). + +For texture transfer, we extract the cross-layer statistics in Eq. 4 from layers R11, R21, R31, R41, and R51 of a pretrained VGG19. We optimize our objective (Eq. 3) using an L-BFGS (Liu & Nocedal (1989)) optimizer. tAdv attacks are sensitive: if not controlled well, images get transformed into artistic images. Since we do not have any constraints on the perturbation norm, it is necessary to decide when to stop the texture transfer procedure.
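Eq. 5 and the per-layer-pair term of Eq. 4 can be written compactly. This numpy sketch is our illustration with random stand-in activations: the paper uses VGG19 features from layers R11 through R51, and we read std(·) in Eq. 4 as the standard deviation over the victim Gram's entries (one plausible reading). It upsamples the lower-resolution layer and forms the cross-layer Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2x(f):
    # nearest-neighbor upsample of (C, H, W) features to the earlier
    # layer's (higher) spatial resolution: the U operator in Eq. 5
    return f.repeat(2, axis=1).repeat(2, axis=2)

def cross_layer_gram(fm, fn):
    # Eq. 5: G^{m,n}_{ij} = sum_p f^m_{i,p} * (U f^n)_{j,p}
    a = fm.reshape(fm.shape[0], -1)              # (C_m, H*W)
    b = upsample2x(fn).reshape(fn.shape[0], -1)  # (C_n, H*W)
    return a @ b.T

# Random stand-in feature maps for one adjacent layer pair (m, n):
C, Hm = 4, 8
fm_v, fn_v = rng.normal(size=(C, Hm, Hm)), rng.normal(size=(C, Hm // 2, Hm // 2))
fm_t, fn_t = rng.normal(size=(C, Hm, Hm)), rng.normal(size=(C, Hm // 2, Hm // 2))

# One-layer-pair term of Eq. 4, normalized by C^2 and the victim Gram's std
Gv = cross_layer_gram(fm_v, fn_v)
Gt = cross_layer_gram(fm_t, fn_t)
loss = ((Gv - Gt) ** 2 / Gv.std()).sum() / C ** 2
```

The full loss sums this term over all adjacent layer pairs in $\mathcal{L}$; identical victim and target features drive the term to zero, which is what the optimizer pushes toward.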
For a successful attack (images look realistic), we limit our L-BFGS to a fixed number of small steps and perform two sets of experiments: one with a single iteration (round) of L-BFGS for 14 steps, and another with three iterations of 14 steps. For the three-iteration setup, after every iteration we look at the confidence of our target class and stop if the confidence is greater than 0.9. + +Texture and Cross-Entropy Weights. Empirically, we found setting $\alpha$ in the range [150, 1000] and $\beta$ in the range $[10^{-4}, 10^{-3}]$ to be successful while producing less perceptible tAdv examples. The additional cross-entropy based adversarial objective $J_{adv}$ helps our optimization. We ensure that the gradients flow largely from the texture loss and are sufficiently larger than those from the adversarial cross-entropy objective. The adversarial objective also helps in transforming the victim image into an adversarial one without stylizing it. All our tabulated results are shown for one iteration, $\alpha = 250$ and $\beta = 10^{-3}$ , unless otherwise stated. We use the annotation $\mathbf{tAdv}_{\alpha}^{iter}$ for the rest of the paper to denote the texture method that we are using.
| Model | R50 | D121 | VGG19 |
| --- | --- | --- | --- |
| Accuracy | 76.15 | 74.65 | 74.24 |
| Attack success: $\mathbf{cAdv}_1$ | 99.72 | 99.89 | 99.89 |
| Attack success: $\mathbf{cAdv}_4$ | 99.78 | 99.83 | 100.00 |
| Attack success: $\mathbf{tAdv}_{250}^{1}$ | 97.99 | 99.72 | 99.50 |
| Attack success: $\mathbf{tAdv}_{500}^{1}$ | 99.27 | 99.83 | 99.83 |
+ +(a) Whitebox Target Attack Success Rate + +
| Method | Model | R50 | D121 | VGG19 |
| --- | --- | --- | --- | --- |
| Kurakin et al. (2016) | R50 | 100.00 | 17.33 | 12.95 |
| Carlini & Wagner (2017) | R50 | 98.85 | 16.50 | 11.00 |
| Xiao et al. (2018b) | R50 | 100 | 5.23 | 8.90 |
| $\mathbf{cAdv}_4$ | R50 | 99.83 | 28.56 | 31.00 |
|  | D121 | 18.13 | 99.83 | 29.43 |
|  | VGG19 | 22.94 | 26.39 | 100.00 |
| $\mathbf{tAdv}_{250}^{1}$ | R50 | 99.00 | 24.51 | 34.84 |
|  | D121 | 21.16 | 99.83 | 32.56 |
|  | VGG19 | 20.21 | 24.40 | 99.89 |
+ +(b) Transferability + +Table 1: Our attacks are highly successful on ResNet50 (R50), DenseNet121 (D121) and VGG19. In (a), for cAdv, we show results for $k = \{1,4\}$ when attacked with 50 hints. For tAdv we show results for $\alpha = \{250,500\}$ and $\beta = 0.001$ . In (b) we show the transferability of our attacks: we attack the models listed in the Model column and test the resulting examples on the models in the remaining columns. + +Control over Texture. The amount of texture that gets added to our victim image is controlled by the texture weight coefficient $(\alpha)$ . Increasing the texture weight improves the attack success rate at the cost of more noticeable perturbation. Compared to within-layer statistics, the cross-layer statistics that we use are not only better at extracting texture but also make the texture weight easier to control. + +# 4 EXPERIMENTAL RESULTS + +In this section, we evaluate the two proposed attack methods both quantitatively, via attack success rate under different settings, and qualitatively, based on interesting case studies. We conduct our experiments on ImageNet (Deng et al. (2009)) by randomly selecting correctly predicted images from 10 sufficiently different classes for the classification attack. + +We use a pretrained ResNet 50 classifier (He et al. (2016)) for all our methods. DenseNet 121 and VGG 19 (Huang et al.; Simonyan & Zisserman (2014)) are used for our transferability analysis. + +# 4.1 cADV ATTACK + +cAdv achieves a high targeted attack success rate by adding realistic color perturbations. Our numbers in Table 1 and Table 2 also reveal that cAdv examples with larger color changes (and consequently more color diversity) are more transferable and more robust against adversarial defenses. However, these big changes are found to be slightly less realistic in our user study (Table 2, Table 4). + +Smooth cAdv perturbations. Fig. 8 in our Appendix shows interesting properties of the adversarial colors.
We observe that cAdv perturbations are locally smooth and relatively low-frequency. This is different from most adversarial attacks, which generate high-frequency, noise-like perturbations. This phenomenon can be explained by the observation that colors are usually smooth within object boundaries. The pretrained colorization model will thus produce smooth, low-frequency adversarial colors that conform to object boundaries. + +Importance of color in classification. From Fig. 2, we can compare how different target classes affect our colorization results if we relax our constraints on colors (cAdv on network weights, 0 hints). In many cases, the images contain strong colors that are related to the target class. In the case of golf-cart, we get a green tint over the entire image. This can push the target classifier to misclassify the image, as green grass is usually overabundant in benign golf-cart images. Fig. 2b shows our attack on an image of a car to tench (a type of fish). We observe that the gray road turned blue and that the colors are tinted. We can hypothesize that the blue colors and the tint fooled the classifier into thinking the image is a tench in the sea. + +The colorization model is originally trained to produce natural colorization that conforms to object boundaries. By adjusting its parameters, we are able to produce large and abnormal color changes that are impossible with our attack on hints and mask. These colors, however, show us some evidence that colors play a stronger role in classification than commonly thought. We reserve the exploration of this observation for future work. + +While this effect (strong color correlation with the target class) is less pronounced for our attack on hints and mask, for all cAdv methods we observe isoluminant color blobs. Isoluminant colors are characterized
| Method | Res50 | JPEG75 | FS 4-bit | FS 5-bit | FS 2×2 | FS 3×3 | FS 11-3-4 | Res152 | Adv Res152 | User Pref. |
|---|---|---|---|---|---|---|---|---|---|---|
| Kurakin et al. (2016) | **100** | 12.73 | 28.62 | 86.66 | 34.28 | 21.56 | 29.28 | 1.08 | 1.38 | 0.506 |
| Carlini & Wagner (2017) | 99.85 | 11.50 | 12.00 | 30.50 | 22.00 | 14.50 | 18.50 | 1.08 | 1.38 | 0.497 |
| Hosseini & Poovendran (2018) | 1.20 | - | - | - | - | - | - | - | - | - |
| Xiao et al. (2018b) | **100** | 17.61 | 22.51 | 29.26 | 28.71 | 23.51 | 26.67 | 4.13 | 1.39 | 0.470 |
| cAdv$_1$ | **100** | **52.33** | 47.78 | 76.17 | 36.28 | **50.50** | **61.95** | 12.06 | 11.62 | 0.427 |
| cAdv$_2$ | 99.89 | 46.61 | 42.78 | 72.56 | 34.28 | 46.45 | 59.00 | 17.39 | **19.4** | 0.437 |
| cAdv$_4$ | 99.83 | 42.61 | 38.39 | 69.67 | 34.34 | 40.78 | 54.62 | 14.13 | 12.5 | 0.473 |
| cAdv$_8$ | 99.81 | 38.22 | 36.62 | 67.06 | 31.67 | 37.67 | 49.17 | 6.52 | 10.04 | 0.476 |
| tAdv$^{1}_{250}$ | 99.00 | 32.89 | 62.79 | 89.74 | 54.94 | 38.92 | 40.57 | 10.9 | 2.10 | 0.433 |
| tAdv$^{3}_{250}$ | **100** | 36.33 | **67.68** | **94.11** | **58.92** | 42.82 | 44.56 | 15.21 | 4.6 | 0.425 |
| tAdv$^{1}_{1000}$ | 99.88 | 31.49 | 52.69 | 90.52 | 51.24 | 34.85 | 39.68 | 19.12 | 5.59 | 0.412 |
| tAdv$^{3}_{1000}$ | **100** | 35.23 | 61.40 | 93.18 | 56.31 | 39.66 | 45.59 | **22.28** | 6.94 | 0.406 |
+ +Table 2: Comparison against defense models. Misclassification rate after passing different adversarial examples through the defense models (higher means the attack is stronger). All attacks are whitebox attacks performed on ResNet50. The highest attack success rate in each column is in bold. We also report the user preference scores from AMT for each attack (last column). + +by a change in color without a corresponding change in luminance. As most color changes occur along edges in natural images, it is likely that classifiers trained on ImageNet have never seen isoluminant colors. This suggests that cAdv might be exploiting isoluminant colors to fool classifiers. + +# 4.2 tADV ATTACK + +tAdv successfully fools classifiers with a very small weight $(\beta)$ on the adversarial cross-entropy objective when combined with the texture loss, while remaining realistic to humans. As shown in Table 1, our attacks achieve high whitebox success rates on three different models with the nearest-neighbor texture transfer approach. We also show that our attacks are more transferable to other models. In our Appendix, we show ablation results for tAdv attacks along with other strategies we used for generating tAdv adversarial examples. + +Structured tAdv Perturbations. Since we extract features across different layers of VGG, the tAdv perturbations follow a textural pattern. They are more structured and organized than those of other attacks. Our tAdv perturbations are large in $\mathcal{L}_p$ norm compared with existing attack methods, yet high-frequency and imperceptible (see Fig. 1 and Fig. 8). + +Importance of Texture in Classification. Textures are crucial descriptors for image classification, and ImageNet-trained models can be exploited by altering texture. Their importance is also shown in the recent work of Geirhos et al. (2018).
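Texture statistics of this kind can be sketched compactly. The snippet below is a minimal numpy illustration of a cross-layer Gram statistic, correlating the channels of one layer with those of the next rather than a layer with itself; the consecutive-layer pairing and the normalization are our assumptions, and in the attack the feature maps would come from a pretrained VGG rather than random arrays:

```python
import numpy as np

def gram_within(feat):
    """Within-layer Gram: channel-by-channel correlations of a single layer."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def gram_cross(feat_a, feat_b):
    """Cross-layer Gram: correlations between the channels of two different
    layers (maps assumed already resized to a common spatial resolution)."""
    ca, cb = feat_a.shape[0], feat_b.shape[0]
    n = feat_a.shape[1] * feat_a.shape[2]
    return feat_a.reshape(ca, -1) @ feat_b.reshape(cb, -1).T / n

def texture_loss(feats_x, feats_t):
    """Squared difference of cross-layer Grams over consecutive layer pairs,
    between the image being optimized (feats_x) and the texture source (feats_t)."""
    loss = 0.0
    for a, b, ta, tb in zip(feats_x, feats_x[1:], feats_t, feats_t[1:]):
        loss += np.sum((gram_cross(a, b) - gram_cross(ta, tb)) ** 2)
    return loss
```

Matching an image's cross-layer Grams to those of a texture source is what transfers the target texture; the adversarial cross-entropy term is then added on top with weight $\beta$.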
Our results also show that even a small or invisible change in the texture field can break current state-of-the-art classifiers. + +# 4.3 DEFENSE AND TRANSFERABILITY ANALYSIS + +We test all our attacks and other existing methods with images attacked on ResNet50. We evaluate them on three defenses – JPEG defense (Das et al., 2017), feature squeezing (Xu et al., 2017), and adversarial training. By leveraging JPEG compression and decompression, adversarial noise may be removed; we tested our methods against JPEG compression at quality 75. Feature squeezing is a family of simple but surprisingly effective strategies, including reducing color bit depth and spatial smoothing. Adversarial training has been shown to be an effective but costly method of defending against adversarial attacks: mixing adversarial samples into a classifier's training data improves its robustness without affecting overall accuracy. We obtained an adversarially pretrained ResNet152 model on the ImageNet dataset and therefore evaluated our ResNet50-attacked images against this model. + +Robustness. In general, our attacks are more robust to the considered defenses and more transferable for targeted attacks. For cAdv, there is a trade-off between more realistic colors (using more hints and sampling from more clusters) and attack robustness. Tables 1 and 2 show that as we progressively use more clusters, our transferability and defense numbers drop. A similar trend is observed when varying the number of hints. cAdv is robust to the JPEG defense and adversarial training because of its large and spatially smooth perturbations. For tAdv, increasing the texture weight $(\alpha)$ raises the attack success rate but does not necessarily help against defenses, whereas running the attack for more iterations improves its robustness against defenses.
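Of these defenses, feature squeezing is simple enough to sketch directly. Below is a minimal numpy version of its two squeezers, bit-depth reduction and median smoothing (the border handling by window clipping is our simplification); the defended classifier is then run on the squeezed image:

```python
import numpy as np

def reduce_bit_depth(img, bits):
    """Bit-depth squeezing: quantize an image with values in [0, 1]
    down to 2**bits levels per channel."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

def median_smooth(img, k):
    """k x k median smoothing applied per channel (H x W x C input);
    windows are clipped at the image borders."""
    h, w, c = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0 = max(0, y - (k - 1) // 2); y1 = min(h, y0 + k)
            x0 = max(0, x - (k - 1) // 2); x1 = min(w, x0 + k)
            out[y, x] = np.median(img[y0:y1, x0:x1].reshape(-1, c), axis=0)
    return out
```

Because cAdv perturbations are large and spatially smooth, they survive these squeezers better than high-frequency noise-like perturbations, which is consistent with the numbers in Table 2.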
+ +# 5 HUMAN PERCEPTUAL STUDIES + +To quantify how realistic tAdv and cAdv examples are, we conducted a user study on Amazon Mechanical Turk (AMT), following the procedure described in (Zhang et al., 2016; Xiao et al., 2018b). For each attack, we choose the same 200 adversarial images and their corresponding benign ones. During each trial, one random adversarial-benign pair appears for three seconds, and workers are given five minutes to identify the realistic one. Each attack has 600 unique pairs of images, and each pair is evaluated by at least 10 unique workers. We limit bias in this process by allowing each unique user up to 5 rounds of trials and by ignoring users who complete the study in less than 30 seconds. In total, 598 unique workers completed at least one round of our user study. For each image, we calculate the user preference score as the number of times it is chosen divided by the number of times it is displayed; a score of 0.5 means users cannot distinguish the adversarial image from the benign one. For cAdv and tAdv, user preference averages 0.476 and 0.433, respectively, indicating that workers have a hard time distinguishing them. The user preferences for all attacks are summarized in Table 2, and their comparison with $\mathcal{L}_p$ norms is in Table 4 and Table 5. + +# 6 ATTACKING CAPTIONING MODEL + +Our methods are general and can be easily adapted to other learning tasks. As a proof of concept, we test our attacks on image captioning, the task of generating a word-sequence description of an image. Popular captioning architectures are Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) based models (Karpathy & Fei-Fei, 2015; Wang et al., 2017). Recently, Aneja et al. (2018) proposed a convolutional captioning model for fast and accurate caption generation.
This convolutional approach does not suffer from the commonly known problems of vanishing gradients and overly confident predictions in LSTM networks. Therefore, we choose to attack this state-of-the-art convolutional captioning model. We randomly selected images from MSCOCO (Lin et al., 2014) for the image captioning attack. + +Unlike pixel-based attacks (Chen et al., 2017; Xu et al., 2019), our goal is to change exactly one word of the benign image's caption, which makes attacking captioning models harder than attacking classifiers. We show that our attacks are successful and have no visible artifacts even for this challenging task. In Fig. 6, we change the second word of the caption to dog while keeping the rest of the caption the same. This is a challenging targeted attack because, in many untargeted attacks, the resulting captions do not make sense. More examples are in our Appendix. + +![](images/b75517dea4ce773d35510be77fe232fd2441242f288641ae700b79db30c8d86c.jpg) +Figure 6: Captioning attack. Top: cAdv; Bottom: tAdv. We attack the second word to dog and show the corresponding change in the attention mask of that word. More examples in Appendix. + +Adversarial Cross-Entropy Objective for Captioning. Let $t$ be the target caption, $w$ the word position in the caption, $\mathcal{F}$ the captioning model, $I_v$ the victim image, and $J_{adv}$ the cross-entropy loss:

$$
L_{capt}^{\mathcal{A}} = \sum_{w} J_{adv}\left(\left(\mathcal{F}(I_{v})\right)_{w}, t_{w}\right) \tag{6}
$$

For cAdv, we give all color hints and optimize them to obtain an adversarially colored image that produces the target caption. For tAdv, we add Eqn 6 to Eqn 4 to optimize the image. We select $T_{s}$ as the nearest neighbor of the victim image among images of the adversarial target class in the ImageNet dataset. We stop our attack once we reach the target caption and the caption does not change in consecutive iterations.
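Eqn. 6 is a plain sum of per-word cross-entropies. A numpy sketch of the objective itself is below; for illustration the captioner $\mathcal{F}$ is replaced by precomputed per-position logits (an assumption of the sketch), whereas in practice the loss is backpropagated through the frozen captioner to the hints and mask or the image:

```python
import numpy as np

def word_cross_entropy(logits, target_idx):
    """Softmax cross-entropy J_adv for one word position:
    -log p(target word | logits), computed in a numerically stable way."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target_idx]

def caption_attack_loss(per_word_logits, target_caption):
    """Eqn. 6: sum the per-word cross-entropy between the captioner's
    output at position w and the target word t_w."""
    return sum(word_cross_entropy(lw, tw)
               for lw, tw in zip(per_word_logits, target_caption))
```

Driving this loss toward zero forces the generated caption to match the target word-for-word, which is why a single-word targeted change is attainable while keeping the rest of the caption fixed.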
Note that we do not change the network weights; we only optimize the hints and mask (for cAdv) or the victim image (for tAdv) to achieve our target caption. + +# 7 RELATED WORK + +Here we briefly summarize existing unrestricted and semantic adversarial attacks. Xiao et al. (2018b) proposed geometric or spatial distortion of pixels in an image to create adversarial examples. They distort the input image by optimizing pixel flow instead of pixel values. While this attack leads to "natural" looking adversarial examples with large $\mathcal{L}_{\infty}$ norm, it does not take image semantics into account. Song et al. (2018) and Dunn et al. (2019) considered GANs for adversarial attacks. These attacks are unrestricted in $\mathcal{L}_p$ norm but restricted to simple datasets, since they involve training GANs, which are known to be unstable and computationally intensive for complex datasets like ImageNet (Karras et al., 2017; Brock et al., 2018). + +Hosseini & Poovendran (2018) randomly change the hue and saturation of an image to create adversarial examples. This is similar to cAdv in that both involve changing colors; however, their search space is limited to two dimensions and their images are unrealistic (Appendix Fig. 10). Also, while this method has a non-trivial untargeted attack success rate, it performs extremely poorly for targeted attacks (1.20% success rate in our own experiments on ImageNet). Our work is also related to Joshi et al. (2019) and Qiu et al. (2019), who manipulate images conditioned on face-dataset attributes such as glasses or beards. These works focus on changing a single visual attribute of an image and are conditioned on such attributes. Our work changes visual semantic descriptors to misclassify images and is not conditioned on any semantic attribute. + +
| Method | Semantic Based | Unrestricted | Photorealistic | Explainable (e.g. color affinity) | Complex Dataset | Caption Attack |
|---|---|---|---|---|---|---|
| Kurakin et al. (2016) | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ |
| Carlini & Wagner (2017) | ✗ | ✗ | ✓ | ✗ | ✓ | ✗ |
| Hosseini & Poovendran (2018) | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ |
| Song et al. (2018) | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ |
| Xiao et al. (2018b) | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ |
| cAdv (ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| tAdv (ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+ +Table 3: Summary of the differences between our work and previous work. Unlike previous attack methods, our attacks are unbounded, semantically motivated, realistic, highly successful, and scale to more complex datasets and other ML tasks. They are also robust against the tested defenses. + +# 8 CONCLUSION + +Our two proposed novel unrestricted semantic attacks shed light on the role of texture and color fields in influencing DNNs' predictions. They not only consistently fool human subjects but are in general harder to defend against. We hope that by presenting our methods, we encourage future studies on unbounded adversarial attacks, better metrics for measuring perturbations, and more sophisticated defenses. + +# ACKNOWLEDGEMENTS + +We thank Chaowei Xiao for sharing their code to compare our methods with Xiao et al. (2018b) and for helping us set up the user study. We also thank Tianyuan Zhang for providing the AdvRes152 pretrained model. This work was supported by NSF Grant No. 1718221 and ONR MURI Award N00014-16-1-2007. + +# REFERENCES + +Jyoti Aneja, Aditya Deshpande, and Alexander G Schwing. Convolutional image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5561-5570, 2018. +Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018. +Tom B Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, and Ian Goodfellow. Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352, 2018. +Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017. +Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. Attacking visual language grounding with adversarial examples: A case study on neural image captioning. arXiv preprint arXiv:1712.02051, 2017.
+Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E Kounavis, and Duen Horng Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv preprint arXiv:1705.02900, 2017. +J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. +Aditya Deshpande, Jiajun Lu, Mao-Chuang Yeh, Min Jin Chong, and David A Forsyth. Learning diverse image colorization. In CVPR, pp. 2877-2885, 2017. +Isaac Dunn, Tom Melham, and Daniel Kroening. Generating realistic unrestricted adversarial inputs using dual-objective GAN training. arXiv preprint arXiv:1905.02463, 2019. +Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 341-346. ACM, 2001. +Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 262-270, 2015. +Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423, 2016. +Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014a. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. +Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. +Hossein Hosseini and Radha Poovendran. Semantic adversarial examples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1614-1619, 2018. +Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. +Xun Huang and Serge J Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, pp. 1510-1519, 2017. + +Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. +Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016. +Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, and Chinmay Hegde. Semantic adversarial attacks: Parametric transformations that fool deep classifiers. arXiv preprint arXiv:1904.08489, 2019. +Daniel Kang, Yi Sun, Dan Hendrycks, Tom Brown, and Jacob Steinhardt. Testing robustness against unforeseen adversaries. arXiv preprint arXiv:1908.08016, 2019. +Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128-3137, 2015. +Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen.
Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012. +Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial examples in the physical world. CoRR, abs/1607.02533, 2016. URL http://dblp.uni-trier.de/db/journals/corr/corr1607.html#KurakinGB16. +Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016. +Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pp. 386-396, 2017. +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pp. 740-755. Springer, 2014. +Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989. +Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016. +Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, and Bo Li. SemanticAdv: Generating adversarial examples via attribute-conditional image editing. arXiv preprint arXiv:1906.07927, 2019. +Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. +Yang Song et al.
Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems, 2018. +Liwei Wang, Alexander Schwing, and Svetlana Lazebnik. Diverse and accurate image description using a variational auto-encoder with an additive Gaussian encoding space. In Advances in Neural Information Processing Systems, pp. 5756-5766, 2017. +Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018a. +Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612, 2018b. +Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. +Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, and Wei Liu. Exact adversarial attack to image captioning via structured output learning with latent variables. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4135-4144, 2019. + +Mao-Chuang Yeh, Shuai Tang, Anand Bhattad, Chuhang Zou, and David Forsyth. Improving style transfer with calibrated metrics. arXiv preprint arXiv:1910.09447, 2019. +Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European Conference on Computer Vision, pp. 649-666. Springer, 2016. +Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S Lin, Tianhe Yu, and Alexei A Efros. Real-time user-guided image colorization with learned deep priors. arXiv preprint arXiv:1705.02999, 2017. + +# A APPENDIX + +# A.1 OTHER DETAILS ON HUMAN STUDY + +We also chose BIM (Kurakin et al., 2016) and CW (Carlini & Wagner, 2017) for comparing our perturbations. Since these attacks are known to have low $\mathcal{L}_p$ norm, we designed an aggressive version of BIM by relaxing its $\mathcal{L}_{\infty}$ bound to match the norm of our attacks.
We settled on two aggressive versions of BIM with average $\mathcal{L}_{\infty} = \{0.21, 0.347\}$, which we refer to as $\mathrm{BIM}_{0.21}$ and $\mathrm{BIM}_{0.347}$. The average user preference for BIM drops drastically from 0.497 to 0.332 when we relax the norm to $\mathrm{BIM}_{0.347}$; the decrease in user preference for tAdv (0.433 to 0.406) and cAdv (0.476 to 0.437) is not significant. In Fig. 7, we plot a density plot of $\mathcal{L}_{\infty}$ vs user preference scores. + +![](images/4a53ddad346434f291ed2d9227693cd8fb4905512639915b293ad09fab2d70ef.jpg) +Figure 7: Density Plot. Our methods achieve large $\mathcal{L}_{\infty}$ norm perturbations without notable reduction in user preference. Each plot is a density plot between perturbation ($\mathcal{L}_{\infty}$ norm) on the X axis and $Pr$(user prefers adversarial image) on the Y axis. For ideal systems, the density would be a concentrated horizontal line at 0.5. All plots are on the same set of axes. On the left, plots for three baseline methods (Fig a - Fig c); note the very strong concentration on small-norm perturbations, which users like. The right four plots (Fig d - Fig g) show our methods; note the strong push into large-norm regions without loss of user preference. +
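The relaxed-bound BIM baseline used for this comparison can be sketched generically as follows; `grad_fn` stands in for the gradient of the classifier's loss with respect to the input, and the step size and iteration count are illustrative, not the exact values used in our experiments:

```python
import numpy as np

def bim_attack(grad_fn, x, eps, step, iters):
    """Basic Iterative Method with a (possibly relaxed) L-inf bound eps:
    take signed-gradient steps and project back into the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L-inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv
```

Raising `eps` to 0.21 or 0.347 is what makes the perturbation norm comparable to ours, at which point BIM's noise becomes clearly visible to users.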
| Method | $\mathcal{L}_0$ | $\mathcal{L}_2$ | $\mathcal{L}_\infty$ | Preference |
|---|---|---|---|---|
| CW | 0.587 | 0.026 | 0.054 | 0.506 |
| BIM$_{0.051}$ | 0.826 | 0.030 | 0.051 | 0.497 |
| BIM$_{0.21}$ | 0.893 | 0.118 | 0.21 | 0.355 |
| BIM$_{0.347}$ | 0.892 | 0.119 | 0.347 | 0.332 |
| cAdv$_2$ | 0.910 | 0.098 | 0.489 | 0.437 |
| cAdv$_4$ | 0.898 | 0.071 | 0.448 | 0.473 |
| cAdv$_8$ | 0.896 | 0.059 | 0.425 | 0.476 |
| tAdv$^{1}_{250}$ | 0.891 | 0.041 | 0.198 | 0.433 |
| tAdv$^{1}_{1000}$ | 0.931 | 0.056 | 0.237 | 0.412 |
| tAdv$^{3}_{250}$ | 0.916 | 0.061 | 0.258 | 0.425 |
| tAdv$^{3}_{1000}$ | 0.946 | 0.08 | 0.315 | 0.406 |
+ +Table 4: User Study. User preference score and $\mathcal{L}_p$ norm of the perturbation for different attacks. cAdv and tAdv remain hard for humans to detect (scores close to 0.5) even with very large $\mathcal{L}_p$ norm perturbations. + +
| Attack | Metric | tarantula | merganser | nautilus | hyena | beacon | golf-cart | photocopier | umbrella | pretzel | sandbar | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BIM | $\mathcal{L}_0$ | 0.82 | 0.81 | 0.80 | 0.88 | 0.80 | 0.79 | 0.86 | 0.80 | 0.85 | 0.84 | 0.83 |
| | $\mathcal{L}_2$ | 0.05 | 0.02 | 0.03 | 0.04 | 0.02 | 0.01 | 0.04 | 0.02 | 0.02 | 0.04 | 0.03 |
| | $\mathcal{L}_\infty$ | 0.07 | 0.04 | 0.06 | 0.06 | 0.04 | 0.03 | 0.07 | 0.04 | 0.04 | 0.06 | 0.05 |
| | Preference | 0.49 | 0.75 | 0.44 | 0.67 | 0.48 | 0.48 | 0.27 | 0.39 | 0.38 | 0.63 | 0.50 |
| CW | $\mathcal{L}_0$ | 0.54 | 0.55 | 0.52 | 0.77 | 0.48 | 0.42 | 0.70 | 0.48 | 0.69 | 0.71 | 0.59 |
| | $\mathcal{L}_2$ | 0.05 | 0.01 | 0.03 | 0.03 | 0.02 | 0.01 | 0.04 | 0.01 | 0.02 | 0.04 | 0.03 |
| | $\mathcal{L}_\infty$ | 0.08 | 0.04 | 0.06 | 0.07 | 0.04 | 0.03 | 0.07 | 0.04 | 0.05 | 0.06 | 0.05 |
| | Preference | 0.51 | 0.68 | 0.43 | 0.59 | 0.52 | 0.51 | 0.39 | 0.40 | 0.42 | 0.62 | 0.51 |
| tAdv$^{3}_{250}$ | $\mathcal{L}_0$ | 0.94 | 0.95 | 0.91 | 0.94 | 0.89 | 0.94 | 0.89 | 0.91 | 0.94 | 0.85 | 0.92 |
| | $\mathcal{L}_2$ | 0.07 | 0.08 | 0.08 | 0.06 | 0.05 | 0.06 | 0.03 | 0.05 | 0.07 | 0.05 | 0.06 |
| | $\mathcal{L}_\infty$ | 0.28 | 0.31 | 0.30 | 0.31 | 0.22 | 0.27 | 0.18 | 0.25 | 0.27 | 0.19 | 0.26 |
| | Preference | 0.39 | 0.58 | 0.43 | 0.53 | 0.46 | 0.35 | 0.25 | 0.40 | 0.33 | 0.54 | 0.43 |
| cAdv$_4$ | $\mathcal{L}_0$ | 0.90 | 0.90 | 0.88 | 0.90 | 0.90 | 0.90 | 0.90 | 0.89 | 0.93 | 0.88 | 0.90 |
| | $\mathcal{L}_2$ | 0.06 | 0.06 | 0.07 | 0.05 | 0.08 | 0.07 | 0.08 | 0.08 | 0.09 | 0.06 | 0.07 |
| | $\mathcal{L}_\infty$ | 0.36 | 0.44 | 0.41 | 0.31 | 0.48 | 0.51 | 0.55 | 0.56 | 0.46 | 0.39 | 0.45 |
| | Preference | 0.46 | 0.65 | 0.46 | 0.58 | 0.45 | 0.48 | 0.30 | 0.43 | 0.35 | 0.59 | 0.47 |
+ +Table 5: Class-wise $\mathcal{L}_p$ norm and user preference breakdown. Users are biased: they pick images from a few classes (merganser, sandbar) quite often and dislike a few classes (photocopier). + +![](images/d97783d48f781c87d273b99d3ba285c903db4861ba26361e8b78a5f342701275.jpg) +Figure 8: Perturbation comparisons. Images are attacked from tarantula to beacon, golf cart, nautilus, photocopier, and pretzel, from left to right. Our perturbations (cAdv and tAdv) are large, structured, and have spatial patterns compared with other attacks. Perturbations from cAdv are low-frequency and locally smooth, while perturbations from tAdv are primarily high-frequency and structured. Note that gray indicates no perturbation. + +# A.2 ADDITIONAL RESULTS + +
| Model | ResNet50 | Dense121 | VGG 19 |
|---|---|---|---|
| Accuracy | 76.15 | 74.65 | 74.24 |
| Attack success: Random $T_s$ | 99.67 | 99.72 | 96.16 |
| Attack success: Random Target $T_s$ | 99.72 | 99.89 | 99.94 |
| Attack success: Nearest Target $T_s$ | 97.99 | 99.72 | 99.50 |
| Attack success: cAdv$_4$ 25 hints | 99.78 | 99.83 | 99.93 |
| Attack success: cAdv$_4$ 50 hints | 99.78 | 99.83 | 100.00 |
| Attack success: cAdv$_4$ 100 hints | 99.44 | 99.50 | 99.93 |
+ +Whitebox target attack success rate. Our attacks are highly successful on different models across all strategies. tAdv results are for $\alpha = 250$ , $\beta = 10^{-3}$ and iter= 1. + +
| $\beta$ \ $\alpha$ | 250 | 500 | 750 | 1000 |
|---|---|---|---|---|
| 0 | 25.00 | 99.61 | 98.55 | 95.92 |
| $10^{-4}$ | 99.88 | 99.61 | 98.55 | 95.92 |
| $10^{-3}$ | 97.99 | 99.27 | 99.66 | 99.50 |
| $10^{-2}$ | 96.26 | 95.42 | 96.32 | 96.59 |
tAdv ablation study. Whitebox targeted success rate with the nearest target $T_{s}$ (texture source). Columns show increasing texture weight $(\alpha)$ and rows show increasing adversarial cross-entropy weight $(\beta)$. All attacks are done on ResNet50. + +Table 6: Ablation Studies. + +![](images/ed0de1aad024453e3d7c2e427500c655bf47f14545feb7d6ad1a50c5fe57d446.jpg) +Figure 9: Additional qualitative examples for controlling cAdv. We show a comparison of sampling 50 color hints from k low-entropy clusters. All images are attacked to golf-cart. Even-numbered rows visualize our cluster segments, with darker colors representing higher mean entropy and red dots representing the locations we sample hints from. Sampling hints across more clusters gives less color variety. + +![](images/e25eef2696d3e9d009e7fa4b0981dea294333ccbdb372573c6287be6f849c9fe.jpg) +Merganser $\rightarrow$ Umbrella + +![](images/cee0fee043bdcd136bcdf93758ceb45c58e85f715b999805a6c48cd83557440c.jpg) +Sandbar $\rightarrow$ Beacon + +![](images/6e29975ddb8e52ee1e683802a7f00be5fc4c8710c0dea59badf9ede0a4546fe7.jpg) +Hyena $\rightarrow$ Tarantula +Figure 10: Images attacked with the method of Hosseini & Poovendran (2018) are not realistic, as these examples show. Compare Fig 12, showing results of our color attack. Similar qualitative results for CIFAR-10 can be seen in Hosseini & Poovendran (2018). + +![](images/b341d3637835ba7d7099147ff2d251399f4ede02027976d336407c5b59252243.jpg) +Figure 11: Additional qualitative examples for tAdv. Texture is transferred from random images (row 1), random images from the adversarial target class (row 2), and the nearest neighbor of the victim image in the adversarial target class (row 3). All examples are misclassified from Merganser to Umbrella. Images in the last row look photorealistic, while those in the first two rows contain more artifacts as the texture weight $\alpha$ increases (left to right).
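The hint-sampling scheme of Figure 9 — drawing hints where the colorization network's predicted per-pixel color distribution has low entropy, spread across k clusters — can be sketched as follows. Grouping pixels by entropy quantiles below is our simplification of the paper's cluster segments, for illustration only:

```python
import numpy as np

def pixel_entropy(probs):
    """Entropy of the colorization network's per-pixel color distribution
    (probs: H x W x Q, with each pixel's Q values summing to 1)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def sample_hint_locations(probs, k, n_hints, rng=None):
    """Split pixels into k ascending-entropy quantile groups and draw hints
    from every group; low-entropy pixels are where the network is most
    confident of the color, so hints there constrain the attack least."""
    if rng is None:
        rng = np.random.default_rng(0)
    ent = pixel_entropy(probs).ravel()
    groups = np.array_split(np.argsort(ent), k)  # k entropy quantiles
    per_group = max(1, n_hints // k)
    picks = np.concatenate([
        rng.choice(g, size=min(per_group, len(g)), replace=False) for g in groups
    ])[:n_hints]
    h, w, _ = probs.shape
    return np.stack(np.unravel_index(picks, (h, w)), axis=1)  # (row, col) hints
```

Spreading the draws over more groups (larger k) pins colors in more regions of the image, which is why sampling across more clusters gives less color variety.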
+ +![](images/5e084fcfcf5e38ea669628de2f89bb8646b4972e2eb3f5683a04a3f5222f3635.jpg) +a banana sitting on top of a table + +![](images/d06d113777501454a73361d96f003609d92d8506f3f62076304eae95184a7320.jpg) +Attention for banana + +![](images/10c5b36f75deff93773b8c500e8108bca81f2b93c347eeae8dfab444dc69fc6c.jpg) +a dog sitting on top of a table + +![](images/d405241e0bd18361c4f44b4091c4d09ee62c9ef1ead3b0007ad44d2c6cc9a421.jpg) +Attention for dog + +![](images/8b8c785a50c8d74baeb0b36f4cbb3c807982ac4380c32a5c89e46a312fc1ae06.jpg) +tAdv +a clock tower in the middle of a city + +![](images/90e3abf4f551a4aa1604c4b14b710fd04f6ba31b15469a58d184b97943133d22.jpg) +Attention for clock + +![](images/e1aebfc756fa6b48aacae0c4b6bdc89ddb0ded7509413a4bf7127234e5729738.jpg) +a bird tower in the middle of a city + +![](images/aba473b6dc3618573e668e5df764182bc025c0d1f3c8130524a0e3e5c23af800.jpg) +Attention for bird + +![](images/12f8feec134142fbb8cbcb7fa87719157a331cb93bbf4ae1c9a12d7dacb5054c.jpg) +cAdv +a woman is holding a donut in her mouth + +![](images/ae410666c78f9674095c32617b7718603dfced74962b0b567fd2b81585b59961.jpg) +Attention for woman + +![](images/d4879ba5046a8b61c4946d088a257dd61a3009f86d9aba73c9ee7fe2de76c734.jpg) +a dog is holding a donut in her mouth + +![](images/6bdb24e825e5bb2f3551f845982c61dadc70f0aa1d0413652094bb0c78b82b57.jpg) +Attention for dog + +![](images/eec548eb925ab8de5078b44a75c9515804e59e763e039af644084dc47e8ef201.jpg) +a man sitting on a bench with a skateboard + +![](images/b23006cc2d4a63f4731505a63064f064131e84abd8453a33b314b010b693674e.jpg) +Attention for man + +![](images/b8d547cebd47d3bf302d8e20639fa82969e85ab342174e33e9baedbf1fac132e.jpg) +a bird boy sitting on a bench with a skateboard +Figure 12: Captioning attack. We attack the second word of each caption to $\{\mathrm{dog},\mathrm{bird}\}$ and show the corresponding change in attention mask of that word. 
For tAdv we use the nearest neighbor selection method and for cAdv we initialize with all groundtruth color hints. + +![](images/9af850b58eb79f9b43f9fc51bbf84be159ef04e12bf88df33b22e0150f45fb5e.jpg) +Attention for bird + +![](images/c3279d8e23708b4a4a2d7ba57afdb1b7b9dbf98938bbef1f234c357f7cbdaaf9.jpg) + +![](images/92794955a6d91c04bf5168df40308dd2abe9cd553c10ef0a40a0f19cd49f6d56.jpg) +(b) cAdv Perturbations + +![](images/01bb6863b9cf73fea60218e31000e3bdb01990b73807a2db25818b5b315a362a.jpg) +(a) $\mathbf{cAdv}_4$ with 50 hints +(c) $\mathsf{tAdv}_{250}^{1}$ for nearest Target $T_{s}$ +Figure 13: Randomly sampled, semantically manipulated, unrestricted adversarial examples and their perturbations. (a) Adversarial examples generated by cAdv attacking Hints and mask with 50 hints. (c) Adversarial examples generated by tAdv with $\alpha = 250$ , $\beta = 10^{-3}$ and iter= 1 using nearest target $T_{s}$ . (b) and (d) are their respective perturbations. Note that diagonal images are groundtruth images and gray pixels indicate no perturbation. 
+ +![](images/977caaf09742ec5039c85296efe35665acb75845cd2ebb6bbe2dd28256435378.jpg) +(d) tAdv Perturbations \ No newline at end of file diff --git a/unrestrictedadversarialexamplesviasemanticmanipulation/images.zip b/unrestrictedadversarialexamplesviasemanticmanipulation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e569e5169862498c9bf8192ee3e7419b37ff6a18 --- /dev/null +++ b/unrestrictedadversarialexamplesviasemanticmanipulation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23517bca152354ee71cbe0b1e5870d7dc79ccfc3df985a7e952f5d104e975be1 +size 1886013 diff --git a/unrestrictedadversarialexamplesviasemanticmanipulation/layout.json b/unrestrictedadversarialexamplesviasemanticmanipulation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..db089f77f180b60e7c162fceb13bcb79897209b9 --- /dev/null +++ b/unrestrictedadversarialexamplesviasemanticmanipulation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d7709f5c59ea5fc646e49b4d3d477e588d3c1990e95b3bcfd5492ed940c6db7 +size 551008 diff --git a/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_content_list.json b/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..807aa4223a3758ce5318ac321def674ce1439b8d --- /dev/null +++ b/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78345e8a9133e74ea062e4212cbe0b8851208217407c065bef3b37cccbf46b62 +size 91730 diff --git a/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_model.json b/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_model.json new file mode 100644 
index 0000000000000000000000000000000000000000..34bd19ed8f3f01212b0d336c897ddb4e7fcb41ea --- /dev/null +++ b/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82065b1d026b564d0209bf50eedd7cd9f75cc0fc1b0d33346ecfd5765cc711da +size 107625 diff --git a/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_origin.pdf b/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..237d774a037473f6db2e94ae8ec1ef5c58253101 --- /dev/null +++ b/unsupervisedclusteringusingpseudosemisupervisedlearning/b9105381-97e2-47a1-8015-3ba499e496dc_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b2685d57b84b64219e8cbbd103644ea7d6d48e3b89170d575c8998f7e51e0e6 +size 704666 diff --git a/unsupervisedclusteringusingpseudosemisupervisedlearning/full.md b/unsupervisedclusteringusingpseudosemisupervisedlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..384ef09ddfb32d7c2e1d02660f2763faecf415a7 --- /dev/null +++ b/unsupervisedclusteringusingpseudosemisupervisedlearning/full.md @@ -0,0 +1,398 @@ +# UNSUPERVISED CLUSTERING USING PSEUDO-SEMISUPERVISED LEARNING + +Divam Gupta * + +Carnegie Mellon University + +divam@cmu.edu + +Ramachandran Ramjee + +Microsoft Research India + +ramjee@microsoft.com + +Nipun Kwatra + +Microsoft Research India + +nipun.kwatra@microsoft.com + +Muthian Sivathanu + +Microsoft Research India + +muthian@microsoft.com + +# ABSTRACT + +In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. 
We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve $54.6\%$ accuracy for CIFAR-10 and $43.9\%$ for 20news, outperforming state of the art by $8 - 12\%$ in absolute terms. Project details and code are available at https://divamgupta.com/pseudo-semi-supervised-clustering

# 1 INTRODUCTION

Semi-supervised methods, which make use of a large unlabelled data set and a small labelled data set, have seen recent success; e.g., ladder networks Rasmus et al. (2015) achieve $99\%$ accuracy on MNIST using only 100 labelled samples. These approaches leverage the unlabelled data to help the network learn an underlying representation, while the labelled data guides the network towards separating the classes.

In this paper, we ask two questions: is it possible to create the small labelled data set required by semi-supervised methods purely using unsupervised techniques? If so, can semi-supervised methods leverage this autonomously generated pseudo-labelled data set to deliver higher performance than state-of-the-art unsupervised approaches? We answer both these questions in the affirmative.
Each network independently clusters the input. We then compare two input data points. If all of the networks agree that these two data points belong to the same cluster, we can be reasonably sure that these data points belong to the same class. In this way, we identify input data pairs belonging to the same class with high precision in a completely unsupervised manner.

In the next step, we use these high quality input pairs to generate a similarity graph, with the data points as nodes and edges between data points which are deemed to be similar by our ensemble. From this graph, we extract tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision. Extracting high quality clusters from this graph while ensuring that the extracted clusters correspond to different classes is challenging. We discuss our approach for solving this problem in Section 4.2.1. In this way, our method extracts unambiguous samples belonging to each class, which serve as pseudo-labels for semi-supervised learning.

For semi-supervised learning using the labels generated above, one could use ladder networks Rasmus et al. (2015). However, we found that ladder networks are unsuitable for the initial unsupervised clustering step, as they can degenerate to outputting constant values for all inputs in the absence of a supervised loss. To enable unsupervised clustering, we augment ladder networks with information maximization Krause et al. (2010) to create Ladder-IM, and with a dot product loss to create Ladder-Dot. We show in Section 5 that Ladder-IM and Ladder-Dot, by themselves, also provide improvements over previous state of the art. We use the same models for both the first unsupervised learning step as well as the subsequent pseudo-semi-supervised iterations.
Finally, the approach of finding high quality clusters using an ensemble, and using them as labels to train a new ensemble of semi-supervised models, is iterated, yielding continued improvements. The large gains of our method mainly come from this iterative approach, which can, in some cases, yield up to $17\%$ gains in accuracy over the base unsupervised models (see Section 5.4). We name our pseudo-semi-supervised learning approach Kingdra. Kingdra is independent of the type of data set; we show examples of its use on both image and text data sets in Section 5. This is in contrast to some previous approaches using CNNs, e.g. Chang et al. (2017); Caron et al. (2018), which are specialized for image data sets.

We perform unsupervised classification using Kingdra on several standard image (MNIST, CIFAR10, STL) and text (Reuters, 20news) datasets. On all these datasets, Kingdra is able to achieve higher clustering accuracy compared to current state-of-the-art deep unsupervised clustering techniques. For example, on the CIFAR10 and 20news datasets, Kingdra achieves classification accuracy of $54.6\%$ and $43.9\%$ , respectively, delivering 8-12% absolute gains over state of the art results Hu et al. (2017); Xie et al. (2016).

# 2 PRIOR WORK ON GENERATING PSEUDO-LABELS
| Model + Pseudo-label Method | MNIST Label Acc. (%) | MNIST Cluster Acc. (%) | CIFAR10 Label Acc. (%) | CIFAR10 Cluster Acc. (%) |
| --- | --- | --- | --- | --- |
| Ladder-IM + Argmax (Lee (2013)) | 95.4 | 95.4 | 49.0 | 49.0 |
| Ladder-IM + K-means (Caron et al. (2018)) | 75.4 | 60.9 | 45.3 | 44.8 |
| Ladder-IM + Threshold (Chen (2018)) | 88.6 | 91.6 | 60.5 | 47.4 |
Table 1: Pseudo-label and clustering accuracy

Several techniques have been proposed in the literature for generating pseudo-labels (Caron et al. (2018); Chen (2018); Lee (2013)). In Lee (2013), the output class with the highest softmax value (Argmax) is taken to be the pseudo-label. In Caron et al. (2018), the authors perform K-means clustering on the feature vector and use the K-means clusters as pseudo-labels. Finally, the authors in Chen (2018) treat the softmax output as confidence and only label those items whose confidence value is above a high threshold. Note that none of these techniques for identifying pseudo-labels have been applied in our context, i.e., for unsupervised clustering using semi-supervised models.

In this section, we evaluate whether pseudo-labels created by these prior techniques can be leveraged by semi-supervised models to improve clustering accuracy. We start with a semi-supervised model based on Ladder networks (Rasmus et al. (2015)) called Ladder-IM (see Section 4.1 for model details) and train it using only its unsupervised loss terms on the MNIST and CIFAR10 datasets. We use each of the above three pseudo-labelling approaches on the trained model to provide an initial set of pseudo-labels for the datasets (e.g., using K-means clustering on the feature vector of the model as in Caron et al. (2018), etc.). We call the accuracy of these pseudo-labels the initial pseudo-label accuracy. We then use these generated pseudo-labels along with the datasets to train the model again, now with a supervised loss term (based on the pseudo-labels) and the earlier unsupervised loss terms. We again run the pseudo-labelling approaches on the newly trained model to derive an updated set of pseudo-labels. We iterate this process of training and pseudo-labelling until the pseudo-label accuracy stabilizes. We call this the final clustering accuracy.
The initial pseudo-label accuracy and the final clustering accuracy results for the three approaches are shown in Table 1. First, consider MNIST. The unsupervised clustering accuracy of Ladder-IM is $95.4\%$ . Argmax simply assigns pseudo-labels based on the model's output, and since this does not add any new information for subsequent iterations, the final accuracy remains at $95.4\%$ . On the other hand, the pseudo-labels identified by both the K-means and threshold approaches result in worse initial label accuracy ( $75.4\%$ and $88.6\%$ ). When these low-accuracy pseudo-labels are used as supervision to train the model further, they result in low final clustering accuracies of $60.9\%$ and $91.6\%$ , respectively. CIFAR10 results are similar. Ladder-IM clustering accuracy is $49\%$ , which remains the same under Argmax as before. Pseudo-label accuracy using the K-means approach is worse and results in pulling down the final accuracy to $44.8\%$ . Interestingly, threshold results in a slightly higher initial accuracy of $60.5\%$ , but even this is not high enough to improve the final clustering accuracy for CIFAR10.

From these results, we arrive at the following two conclusions. First, if the initial pseudo-label accuracy is not high, using pseudo-labels as supervision can bring down the final clustering accuracy. Thus, high accuracy of initial pseudo-labels is crucial for improving clustering accuracy. Second, current approaches for identifying pseudo-labels do not deliver high accuracy and hence are unable to help improve overall clustering accuracy.

# 3 RELATED WORK

Unsupervised clustering: Various unsupervised clustering methods have been proposed over the years. Ng et al. (2002) uses a spectral clustering based approach, while Elhamifar & Vidal (2009) uses a sparse subspace approach for unsupervised learning. Recently, several deep neural network based methods have been proposed, which scale well to large datasets.
The ability of deep neural networks to learn higher level representations makes them a good choice for unsupervised learning. Coates & Ng (2012) and Caron et al. (2018) use convnets and k-means for clustering. Caron et al. (2018), for example, iterates between clustering the features obtained from a convnet and training the classifier using these clusters as pseudo-labels. The authors do not report clustering performance, and we observed that this method can easily degenerate. Chang et al. (2017) uses pair-wise dot-product based similarity to identify close input pairs, which provide a supervisory signal. These convnet based approaches, however, work only on image datasets. Xie et al. (2016) simultaneously learns feature representations and cluster assignments using deep neural networks, and works on both image and text datasets. Hu et al. (2017) uses regularization combined with a mutual information loss for unsupervised learning and achieves state of the art results. The authors conduct experiments in two settings: Random Perturbation Training and Virtual Adversarial Training. Other works also exploit mutual information: Hjelm et al. (2018) maximizes the mutual information between the spatial features and the non-spatial features, while Ji et al. (2019) maximizes the mutual information between the predicted label of the image and the predicted label of the augmented image. The latter method uses convolutional networks and requires domain knowledge of the dataset.

Self-supervised learning: Another form of unsupervised learning uses auxiliary learning tasks, for which labels can be self-generated, to learn useful representations from data. Many methods use spatial information of image patches to generate self-supervised data. For example, Pathak et al. (2016) predicts pixels in an image patch using surrounding patches, while Doersch et al. (2015) predicts the relative position of image patches. Sermanet et al.
(2018) uses time as a self-supervisory signal across videos taken from different viewpoints. Temporal signals are also used to learn representations from single videos by predicting future frames, e.g. Denton et al. (2017). Our method uses the correlation between outputs of input points across an ensemble as a supervisory signal to generate self-supervised pseudo-labels.

Semi-supervised learning: Semi-supervised approaches use sparse labelling of data points. Szummer & Jaakkola (2002) propagates labels based on nearest neighbors. Weston et al. (2012) uses a deep version of label propagation. Lee (2013) adjusts label probabilities, starting by trusting only true labels and gradually increasing the weight of pseudo-labels. Rasmus et al. (2015) employs a denoising autoencoder architecture and has shown impressive performance. Tarvainen & Valpola (2017) uses an averaged model over previous iterations as a teacher. Other than these, some semi-supervised learning methods like Xie et al. (2019) and Berthelot et al. (2019) use data augmentation and assume some domain knowledge of the dataset, with some of the data augmentation specific to image datasets. Miyato et al. (2018) and Shinoda et al. (2017) use virtual adversarial training combined with the classification loss to perform semi-supervised classification. However, we found that these methods do not work well if we jointly train them with unsupervised losses. Ladder networks do not require any domain-dependent augmentation, work for both image and text datasets, and can easily be jointly trained with supervised and unsupervised losses. Thus, we chose to work with Ladder networks in our experiments, though our approach is general enough to work with any semi-supervised method that accommodates training with unsupervised loss terms.

Unsupervised ensemble learning: Unsupervised ensemble learning has been mostly limited to generating a set of clusterings and combining them into a final clustering. Huang et al.
(2016) cast ensemble clustering into a binary linear programming problem. Wang et al. (2009); Fred & Jain (2005) use a pairwise co-occurrence approach to construct a co-association matrix and use it to measure similarity between data points. See Vega-Pons & Ruiz-Shulcloper (2011) for a survey of ensemble clustering algorithms. Note that, to the best of our knowledge, none of the ensemble clustering algorithms employ a semi-supervised step like ours, or make use of deep networks.

# 4 PROPOSED FRAMEWORK

![](images/bca539b950e765c9c979cdc63ba19ab2b3115d77f23d07fbe44456e337f86642.jpg)
Figure 1: Kingdra overview. In step 1, we train all the models using the unlabeled samples. In step 2, we construct a graph modeling pairwise agreement of the models. In step 3, we get $k$ high confidence clusters by pruning out data-points on which the models do not agree. In step 4, we take the high confidence clusters and generate pseudo-labels. In step 5, we train the models using both unlabeled samples and pseudo-labeled samples. Steps 2 to 5 are iterated, and the final clusters are generated.

An overview of the Kingdra method is summarized in Figure 1. Given an unlabelled dataset $\mathbf{X} = \{x_1, \ldots, x_n\}$ , we start with unsupervised training of an ensemble of models $\mathbf{M} = \{M_1, \ldots, M_m\}$ . For the individual models, any unsupervised model can be used. However, we propose a novel Ladder-* model, in which we build on ladder networks Rasmus et al. (2015) and modify them to support clustering. Next, we use the agreement between the ensemble models on a pair of data points as a measure of similarity between the data points. This pairwise data is used to construct a similarity graph, from which we extract $k$ tight clusters of data points, which serve as pseudo-labels. Note that, in this step, we do not cluster the entire dataset, but only a small subset on which we can get high precision.
This is important for improving the accuracy of our semi-supervised training, as discussed in section 2. These pseudo-labels then serve as training data for semi-supervised training of a new ensemble of Ladder-* models. Finally, we perform multiple iterations of the above steps for continued improvement. + +# 4.1 BASE MODEL + +The first step of our method is unsupervised training of an ensemble of models. Our framework allows using any unsupervised method for this step, and we have experimented with existing approaches, such as IMSAT Hu et al. (2017). The accuracy of this base model directly impacts the accuracy of + +our final model, and thus using an accurate base model clearly helps. In that light, we have also developed a novel unsupervised model Ladder-**, which outperforms other unsupervised models in most data sets. + +Ladder networks Rasmus et al. (2015) have shown great success in semi-supervised setting. However, to the best of our knowledge, the ladder architecture has not been used for unsupervised clustering. One reason perhaps is that ladder networks can degenerate to outputting constant values for all inputs in the absence of a supervised loss term. To circumvent this degeneracy, we add an unsupervised loss to the regular ladder loss terms so that it directs the network to give similar outputs for similar inputs, but overall maximizes the diversity in outputs, so that dissimilar inputs are directed towards dissimilar outputs. We achieve this objective by incorporating one of two losses – the IM loss Krause et al. (2010); Hu et al. (2017) or the dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively. + +IM loss: The IM loss or the information maximization loss is simply the mutual information between the input $X$ and output $Y$ of the classifier: + +$$ +I (X; Y) = H (Y) - H (Y | X) \tag {1} +$$ + +where $H(.)$ and $H(.|.)$ are the entropy and conditional entropy, respectively. 
Maximizing the marginal entropy term $H(Y)$ , encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input. + +Dot product loss: The dot product loss is defined to be + +$$ +D \left(X _ {i}, X _ {j}\right) = Y _ {i} ^ {T} Y _ {j}, \text {i f} i \neq j \tag {2} +$$ + +which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to IM loss, encouraging the network to assign disparate classes to the inputs. + +Among Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better than Ladder-Dot in most cases. However, we did find that Ladder-Dot along with Kingdra iterations outperforms when the data set has a large imbalance in the number of samples per class. The reason for this is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with less number of samples. A more detailed presentation of Ladder-IM and Ladder-Dot can be found in the appendix. + +# 4.2 UNSUPERVISED ENSEMBLING + +Kingdra exploits an ensemble of Ladder-* models to further improve the performance of unsupervised learning. Note that, in supervised learning, ensembling is trivial as we can simply average the outputs of the individual models or do voting on them. On the other hand, in unsupervised learning, it is not trivial to do voting, as in the absence of training labels there is no stable class assignment for outputs across different models, and thus we do not have any mapping of class IDs of one model to another. To solve this we propose a simple approach, where we look at pairs of data-points, rather than at individual samples. 
Two data-points are in the same cluster with a high confidence if majority (or all) of the models in the ensembles put them in the same cluster. For example, given an input pair $x$ , $x'$ , if $M_i(x) = M_i(x')$ for enough models, we can say with high confidence that they belong to the same class. Using this pairwise approach, we propose a graph based method to find small sized, but high precision clusters. + +# 4.2.1 GRAPH BASED MINI-CLUSTERING + +We construct a graph $G = \{X, E_{pos}, E_{neg}\}$ with $n$ nodes where each input data-point $x$ is represented as a node. Here $E_{pos}$ and $E_{neg}$ are two types of edges in the graph: + +- Strong Positive Edges: A strong positive edge is added between two data-points when a large number of models agree on their predicted class. $(x, x') \in E_{pos} \iff n\_agree(x, x') \geq t_{pos}$ where $t_{pos}$ is a chosen threshold, and $n\_agree(x, x') = |\{m : m \in \mathbf{M}, m(x) = m(x')\}|$ . + +- Strong Negative edges: A strong negative edge is added between two data-points when a large number of models disagree on their predicted class. $(x, xt) \in E_{\text{neg}} \iff n\_disagree(x, xt) \geq t_{\text{neg}}$ , where $t_{\text{neg}}$ is a chosen threshold, and $n\_disagree(x, xt) = |\{m : m \in \mathbf{M}, m(x) \neq m(x')\}|$ . + +A strong positive edge between two data points, implies that most models believe they are in the same class, while a strong negative edge between two data points implies that most models believe they should belong to different classes. 
+ +Algorithm 1 Get high precision clusters using ensembles +```txt +1: procedure GETCLUSTERS(X, k) +2: $G = \{X, E_{pos}, E_{neg}\}$ +3: for $k' \in \{1, 2, \ldots, k\}$ do +4: $x_{max} = \operatorname{argmax}_{x \in X} \{|(x, x') \in E_{pos}|\}$ +5: $S_{k'} = \{x : (x, x_{max}) \in E_{pos}\} \cup \{x_{max}\}$ +6: for $x' \in X$ do +7: Remove $x'$ from $G$ , if $(x', x_{max}) \notin E_{neg}$ +8: end for +9: end for +10: Return $S = \{S_1, S_2, \ldots, S_k\}$ +11: end procedure +``` + +After building the graph, each clique of strong positive edges would be a cluster, where within a clique, data-points belong to the same class with high confidence. Since we add only high confidence edges to the graph, the number of cliques can be much larger than $k$ . Hence we need to select $k$ cliques where we would like to maximize the size of each clique, but also require that the cliques are diverse (in order to not select two cliques with data-points belonging to the same class). Hence, within a clique, nodes should be connected by strong positive edges, while across cliques, nodes should be connected by strong negative edges. As finding cliques is not solvable in polynomial time, we use a simple and efficient greedy approximation algorithm, as shown in Algorithm 1. + +Rather than finding cliques, we greedily find nodes with the highest number of strong positive edges (line 4). The intuition is that most of the neighbours of that node will also be connected with each other. In the case of Cifar-10, we find that with a threshold of $90\%$ , $81\%$ of nodes are fully connected with each other. If the threshold is $100\%$ , all nodes in a cluster are connected with each other by transitivity. We take the node with highest number of strong positive edges, along with other nodes connected to it by strong positive edges and add them to a cluster (line 5). We then remove all the nodes that do not have a strong negative edge to the chosen node (line 6-7). 
The intuition here is that these nodes are not diverse enough from the chosen cluster (since some models think that they belong to the same class as the currently chosen node), and thus should not be part of the next set of chosen clusters. By repeating the process $k$ times, we get $k$ diverse clusters, approximately satisfying our requirement. + +# 4.3 ITERATIVE ENSEMBLE TRAINING + +Once the high precision clusters are identified, we treat these clustered points (points in set S) as pseudo-labels, and solve our unsupervised clustering problem using a semi-supervised method. Although any semi-supervised method can be used, as described in section 4.1 we use the proposed Ladder-* method, which we found superior to ladder networks in our experiments. + +Instead of training a single semi-supervised model, we train an ensemble of models, and again use them to find high quality clusters. This approach can be iterated, yielding continued improvements. We name this approach Kingdra. Algorithm 2 describes the complete Kingdra algorithm. First, the individual models are trained using only the unsupervised Ladder-* loss (lines 1-4). Then, for each of the iterations, we obtain high precision clusters (line 6), derive pseudo-labels from them (line 8), and then train the models with both the unsupervised and supervised losses (lines 9-10). + +We compute the pseudo-labels using the mini-clusters as follows. For a model $M_{j} \in \mathbf{M}$ and clusters $\mathbf{S}$ , we need to find an appropriate mapping of the clusters to the output classes of the model. 
Algorithm 2 Kingdra: Iterative Ensemble Training
```txt
Input: Dataset X, Models M, num_clusters k
Output: Cluster Labels
1: for $j \in \{1, 2, \ldots, m\}$ do
2:   Initialize weights of $M_j$
3:   Update $M_j$ by minimizing $loss_{Ladder-*}$
4: end for
5: for $it \in \{1, 2, \ldots, n\_iter\}$ do
6:   S = GetClusters(X, k)
7:   for $j \in \{1, 2, \ldots, m\}$ do
8:     Get pseudo labels for $M_j$
9:     Update $M_j$ by minimizing:
10:      $loss_{Ladder-*} + loss_{sup}$   ▷ Use pseudo labels for $loss_{sup}$
11:   end for
12: end for
13: Use averaging on the ensemble models M to return final clusters
```

In particular, for a cluster $S' \in \mathbf{S}$ , we assign all data-points in $S'$ the following label:

$$
y_{S'}^{j} = \operatorname{mode}\left(\left\{M_{j}\left(x'\right): x' \in S'\right\}\right). \tag{3}
$$

That is, we map a cluster to the output class to which most data-points in the cluster are mapped. These pseudo-labels are then used for computing the supervised loss of Ladder-*. This iterative approach leads to a continuous improvement of clustering quality. We observe that the size of the clusters returned by Algorithm 1 increases after each iteration until they cover almost the entire input set. The clustering performance of the model also generally improves with each iteration until it saturates, as we show in Section 5. We also note that cluster assignments become more stable with subsequent iterations, which leads to a decrease in variance across multiple runs of Kingdra as more iterations are performed.

# 5 EXPERIMENTS

In this section we evaluate the performance of Kingdra on several popular datasets. For a fair comparison, we use the same data pre-processing and same model layer sizes as prior work Hu et al. (2017).

# 5.1 DATASETS

We evaluate Kingdra on three image datasets and two text datasets: MNIST is a dataset of 70000 handwritten digits of 28-by-28 pixel size.
Here, the raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions. CIFAR10 is a dataset of 32-by-32 color images with 10 classes having 6000 examples each. STL is a dataset of 96-by-96 color images with 10 classes having 1300 examples each. For CIFAR10 and STL, raw pixels are not suited for our goal, as the color information dominates; hence, as mentioned in Hu et al. (2017), we use features extracted from a Resnet-50 network pre-trained on the ImageNet dataset. Reuters is a dataset containing English news stories with imbalanced data and four categories. We used the same pre-processing as Hu et al. (2017): after removing the stop-words, tf-idf features were used. 20News is a dataset containing newsgroup documents from 20 different newsgroups. Similar to Hu et al. (2017), we remove stop words, keep the 2000 most frequent words, and use tf-idf features. All our experiments were performed using the same pre-processed data.

# 5.2 EVALUATION METRIC

We use standard unsupervised evaluation methodology and protocol to compare different methods. Following Xie et al. (2016), we set the number of clusters the same as the number of ground truth
| Method | MNIST | STL | CIFAR10 | Reuters | 20news |
| --- | --- | --- | --- | --- | --- |
| K-means | 53.3 (0.1) | 85.0 (0.2) | 34.4 (0.9) | 53.7 (0.4) | 14.0 (1.5) |
| AC | 62.1 (0.0) | 82.2 (0.0) | 42.4 (0.0) | 54.9 (0.0) | 18.6 (0.0) |
| dAE+K-means | 67.7 (3.0) | 20.8 (1.9) | 45.2 (2.1) | 33.7 (0.2) | 7.9 (0.1) |
| dVAE+K-means | 65.2 (3.4) | 60.8 (1.9) | 44.2 (0.2) | 53.7 (1.4) | 12.2 (0.2) |
| DEC Xie et al. (2016) | 84.3 | 78.1 (0.1) | 46.9 (0.9) | 67.3 (0.2) | 30.8 (1.8) |
| DeepCluster Caron et al. (2018) | 27.9 (1.7) | 69.9 (3.2) | 37.2 (0.5) | 43.1 (4.3) | 15.8 (1.2) |
| Deep RIM Hu et al. (2017) | 58.5 (3.5) | 92.5 (2.2) | 40.3 (3.5) | 62.3 (3.9) | 25.1 (2.8) |
| IMSAT (RPT) Hu et al. (2017) | 89.6 (5.4) | 92.8 (2.5) | 45.5 (2.9) | 71.9 (6.5) | 24.4 (4.7) |
| IMSAT (VAT) Hu et al. (2017) | 98.4 (0.4) | 94.1 (0.4) | 45.6 (0.8) | 71.0 (4.9) | 31.1 (1.9) |
| LADDER-IM (ours) | 95.0 (2.8) | 90.7 (1.8) | 49.5 (2.9) | 68.2 (2.8) | 38.4 (2.5) |
| LADDER-IM-ensemble (ours) | 95.1 (0.4) | 91.5 (0.3) | 51.5 (0.9) | 69.0 (3.4) | 40.5 (0.6) |
| LADDER-DOT (ours) | 89.2 (7.2) | 76.1 (4.7) | 48.0 (1.0) | 66.6 (4.8) | 25.6 (1.3) |
| KINGDRA-LADDER-DOT (ours) | 98.0 (0.01) | 93.5 (1.4) | 54.3 (2.5) | 71.9 (3.4) | 28.4 (1.2) |
| KINGDRA-LADDER-IM (ours) | 98.5 (0.4) | 95.1 (0.1) | 54.6 (0.9) | 70.5 (2.0) | 43.9 (1.4) |
Table 2: Comparison of clustering accuracy (%) on five benchmark datasets. Averages and standard deviations are reported. The results for DEC, Deep RIM, and IMSAT are excerpted from Hu et al. (2017).

classes and evaluated unsupervised clustering accuracy as:

$$
\mathrm{ACC} = \max_{p} \frac{\sum_{i=1}^{N} \mathbf{1}\left\{l_{i} = p\left(c_{i}\right)\right\}}{N}, \tag{4}
$$

where $l_{i}$ and $c_{i}$ are the ground truth label and the cluster label assigned by the model, respectively. We find the best one-to-one mapping of ground truth labels and model generated clusters, with $p$ ranging over all one-to-one mappings.

# 5.3 COMPARED METHODS

We compare Kingdra against several clustering algorithms on our datasets. Specifically, we compare against traditional clustering algorithms such as K-means and Agglomerative Clustering (AC). We also compare against representation learning baselines, where we use models such as Deep Auto-encoders (dAE) and Deep Variational Auto-encoders (dVAE), and then use K-means on the learned representations. Finally, we also compare our model with deep learning based clustering methods such as Deep RIM, DEC, DeepCluster, and IMSAT. Deep RIM uses a multi-layer neural network with the RIM objective. DEC iteratively learns a lower dimensional feature representation and optimizes a clustering objective. We also compare with two versions of IMSAT, IMSAT(RPT) and IMSAT(VAT), where data augmentation is used to impose invariance in the model outputs. For our results, we report the performance of Ladder-IM and Ladder-Dot individually, and finally Kingdra, which includes an ensemble of Ladder-* networks along with the semi-supervised iterations. For a fair comparison we used the same network architecture for all the neural network based models.

# 5.4 EXPERIMENTAL RESULTS

Accuracy results of prior approaches and ours are shown in Table 2.
As can be seen from the table, Ladder-IM by itself delivers good performance and Kingdra-Ladder-IM achieves higher clustering accuracy than state-of-the-art deep unsupervised approaches such as DEC Xie et al. (2016) and IMSAT Hu et al. (2017) in all five data sets. Further, the gap between Kingdra and prior approaches is significant in two data sets: Kingdra-Ladder-IM achieves an average accuracy of $54.6\%$ for CIFAR10 compared to $45.6\%$ for IMSAT and $46.9\%$ for DEC - an $8\%$ increase in absolute accuracy. Similarly, Kingdra-Ladder-IM achieves an average accuracy of $43.9\%$ for 20news compared to $31.1\%$ for IMSAT and $30.8\%$ for DEC - an increase of over $12\%$ in absolute accuracy. Note that while deep networks are state-of-the-art for most data sets, linear approaches outperform deep approaches on 20news with linear RIM achieving $50.9\%$ accuracy Hu et al. (2017). We also tried DeepCluster Caron et al. (2018) in our experimental setting, but observed the model to degenerate, assigning most of the samples to the same cluster. Additional analysis of DeepCluster is in the Appendix. + +![](images/8a5c5dfc0981495fabc2693c5ee9d69f6ecd986692e895d6464676ea5a6057df.jpg) +Figure 2: The left graph shows clustering and pseudo-label accuracy vs iterations for STL, CIFAR10, and the MNIST datasets. The right graph shows the number of pseudo-labels vs iterations. + +![](images/da40f6d3dd866700353a2a2fe7f4cb764f124e36773b7f39a3a8c35d7fe9ac18.jpg) + +An interesting aspect to note is that the use of an ensemble by itself only provides small gains of $1 - 2\%$ , similar to what one expects from ensembles in supervised learning (e.g., compare Ladder-IM with Ladder-IM-ensemble). The large gains mainly come from Kingdra using the ensemble to generate pseudo-labels, which is then iterated. For example, Kingdra-Ladder-IM provides absolute gains of $4 - 6\%$ in most data sets over the base model. 
Similarly, Kingdra-Ladder-Dot provides absolute gains of $9\%$ in MNIST and $17\%$ in STL over the base Ladder-Dot model. Thus, generating pseudo-labels from ensembles is a powerful technique that delivers large gains in unsupervised learning. + +Also note that Kingdra-Ladder-IM performs better than Kingdra-Ladder-Dot on most data sets; the exception is the Reuters data set, where the latter performs better (Reuters has a large class imbalance, with the largest class representing $43\%$ of the data). + +Finally, note the standard deviation of the various approaches shown in the table. One can see that Kingdra in general results in lower standard deviation than many of the prior approaches, even while delivering higher accuracy. + +Figure 2 shows the accuracy of pseudo-labels and Kingdra-Ladder-IM, as well as the number of pseudo-labels identified by the graph clustering algorithm, vs the number of iterations for the STL, CIFAR10, and MNIST datasets. As iterations progress, the accuracy of pseudo-labels decreases as more pseudo-labels are added; however, this still helps improve the overall clustering accuracy. Note that, unlike pure semi-supervised approaches which use a small set of (randomly sampled) data points that match the input data distribution, our pseudo-labels do not completely match the input data distribution (since our selection algorithm is biased towards easy data points). This causes an increased gap between the accuracy of pseudo-labels and that of overall clustering. + +# 5.5 QUALITATIVE ANALYSIS + +Figure 3 shows the similarity graph obtained after the first three iterations of Kingdra on the MNIST dataset. As the iterations progress, one can see that there are fewer inter-cluster linkages, indicating that the models are converging on the labels for these data points. Figure 4 shows randomly selected examples from our final clusters generated by Kingdra.
One can see that the examples are highly accurate for MNIST, thus resulting in high overall accuracy. However, for CIFAR10, there are several incorrectly labelled examples, including two clusters which do not have a clear mapping with any ground truth class, thereby resulting in much lower overall accuracy. + +![](images/473f650fc5f6ddc6700d8e91b7b7de007da80024861d2f77ba713f5acbc02d7d.jpg) +Figure 3: Similarity graph from a sample of the input data points, obtained during three iterations of Kingdra for MNIST. Strong positive edges are shown as black lines, and grey lines indicate that at least two models think the endpoints belong to the same class. The different colors show different true class labels. + +![](images/4fd417e43f204818e432e213ecc5dbc7702a975791d7b85d8ef2f4e43d587b42.jpg) + +![](images/14f8d9293c1c7c7ec3b8b73637013b046489bf8b7375eab24a0ae279ffaa0772.jpg) + +![](images/7fe3f2c88c712b7358d1cf1c024283b74adc5fdb823fd1ab331ae7b707f998c8.jpg) +Figure 4: Examples of randomly selected images obtained from our final clusters for the MNIST and CIFAR10 datasets. Images with incorrect class associations are identified by red boxes. + +![](images/41525dd732e2e480f594c8a6e2b95e1bc8f3c58bc1bcfabc9f71ad84ca394157.jpg) + +# 6 CONCLUSION + +In this paper, we introduced Kingdra, a novel pseudo-semi-supervised learning approach for clustering. Kingdra outperforms current state-of-the-art unsupervised deep learning based approaches, with 8-12% gains in absolute accuracy for the CIFAR10 and 20news datasets. As part of Kingdra, we proposed clustering ladder networks, Ladder-IM and Ladder-Dot, which work well in both unsupervised and semi-supervised settings. + +# 7 DISCUSSION + +While Kingdra performs well on the datasets we studied, the similarity-based graph clustering algorithm used has difficulty as the number of classes increases. For the datasets we evaluated, for example, the thresholds $t_{pos}$ and $t_{neg}$ could simply be set to the number of models in the ensemble.
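One plausible reading of this thresholding (our own illustrative sketch with hypothetical names, not the authors' code): count, for every pair of points, how many of the $M$ ensemble models place them in the same cluster; a pair becomes a strong positive edge when at least $t_{pos}$ models agree and a strong negative edge when at least $t_{neg}$ models disagree.

```python
def agreement_matrix(predictions):
    """predictions: list of M per-model cluster assignments, each of length N.
    Returns an N x N matrix counting how many models place each pair of
    points in the same cluster."""
    n = len(predictions[0])
    return [[sum(p[i] == p[j] for p in predictions) for j in range(n)]
            for i in range(n)]

def strong_edges(predictions, t_pos, t_neg):
    """Strong positive/negative edges of the similarity graph (cf. Figure 3)."""
    m, n = len(predictions), len(predictions[0])
    agree = agreement_matrix(predictions)
    pos = [(i, j) for i in range(n) for j in range(i + 1, n)
           if agree[i][j] >= t_pos]
    neg = [(i, j) for i in range(n) for j in range(i + 1, n)
           if m - agree[i][j] >= t_neg]
    return pos, neg
```

With $t_{pos} = t_{neg} = M$, a positive edge requires unanimous agreement and a negative edge unanimous disagreement; with many classes such unanimity becomes harder to obtain, which is consistent with the difficulty noted in the text.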
However, as the number of classes increases, these thresholds may need some tuning. For CIFAR100, with 100 classes, our graph clustering algorithm is not able to identify 100 diverse classes effectively. We are looking at improving the clustering algorithm as part of future work. We are also evaluating adding diversity to the models in the ensemble, either by changing the model structure or size, and/or by changing the standard deviation of the random noise used in ladder networks. + +# REFERENCES + +David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019. +Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Computer Vision-ECCV 2018, pp. 139-156. Springer, 2018. +Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5880-5888. IEEE, 2017. +Nicholas FY Chen. Pseudo-labels for supervised learning on dynamic vision sensor data, applied to object detection under ego-motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 644-653, 2018. +Adam Coates and Andrew Y Ng. Learning feature representations with k-means. In Neural networks: Tricks of the trade, pp. 561-580. Springer, 2012. +Emily L Denton et al. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems, pp. 4414-4423, 2017. +Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015. +Ehsan Elhamifar and René Vidal. Sparse subspace clustering. In Computer Vision and Pattern Recognition, 2009. CVPR 2009.
IEEE Conference on, pp. 2790-2797. IEEE, 2009. +Ana LN Fred and Anil K Jain. Combining multiple clusterings using evidence accumulation. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6):835-850, 2005. +R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018. +Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self augmented training. In ICML, 2017. +Dong Huang, Jianhuang Lai, and Chang-Dong Wang. Ensemble clustering using factor graph. Pattern Recognition, 50:131-142, 2016. +Xu Ji, João F Henriques, and Andrea Vedaldi. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9865-9874, 2019. +Andreas Krause, Pietro Perona, and Ryan G. Gomes. Discriminative clustering by regularized information maximization. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta (eds.), Advances in Neural Information Processing Systems 23, pp. 775-783. Curran Associates, Inc., 2010. +Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, pp. 2, 2013. +Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993, 2018. +Andrew Y Ng, Michael I Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in neural information processing systems, pp. 849-856, 2002. 
+ +Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536-2544, 2016. +Mohammad Pezeshki, Linxi Fan, Philemon Brakel, Aaron Courville, and Yoshua Bengio. Deconstructing the ladder network architecture. In International Conference on Machine Learning, pp. 2368-2376, 2016. +Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546-3554, 2015. +Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1134–1141. IEEE, 2018. +Saki Shinoda, Daniel E Worrall, and Gabriel J Brostow. Virtual adversarial ladder networks for semi-supervised learning. arXiv preprint arXiv:1711.07476, 2017. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014. +Martin Szummer and Tommi Jaakkola. Partially labeled classification with markov random walks. In Advances in neural information processing systems, pp. 945-952, 2002. +Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pp. 1195–1204, 2017. +Sandro Vega-Pons and José Ruiz-Shulcloper. A survey of clustering ensemble algorithms. International Journal of Pattern Recognition and Artificial Intelligence, 25(03):337-372, 2011. +Xi Wang, Chunyu Yang, and Jie Zhou. Clustering aggregation by probability accumulation. 
Pattern Recognition, 42(5):668-675, 2009. +Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012. +Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, pp. 478-487, 2016. +Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmentation. arXiv preprint arXiv:1904.12848, 2019. + +# A APPENDIX + +# B Ladder-*: LADDER NETWORKS FOR CLUSTERING + +We now describe the Ladder-* architecture for the individual models in the ensemble. We use the same model architecture for both unsupervised learning in the initial step and the subsequent semi-supervised learning iterations; the only difference is that the semi-supervised models carry an extra supervision loss term. Our architecture augments ladder networks Rasmus et al. (2015) with one of two losses - an information maximization loss similar to the RIM method described in Krause et al. (2010); Hu et al. (2017), or a dot product loss Chang et al. (2017). We call the two variants Ladder-IM and Ladder-Dot, respectively. We first briefly describe the RIM method and ladder networks, followed by our Ladder-IM and Ladder-Dot methods. + +# REGULARIZED INFORMATION MAXIMIZATION (RIM) + +The Regularized Information Maximization (RIM) approach for unsupervised learning was introduced in Krause et al. (2010) and extended by Hu et al. (2017) for the multi-dimensional setting. The RIM method minimizes the following objective for a classifier: + +$$ +R (\theta) - \lambda I (X; Y) \tag{5} +$$ + +where $R(\theta)$ is a regularization term, and $I(X;Y)$ is the mutual information between the input $X$ and output $Y$ of the classifier. The mutual information can be written as the difference between marginal entropy and conditional entropy Hu et al.
(2017): + +$$ +I (X; Y) = H (Y) - H (Y | X) \tag{6} +$$ + +where $H(\cdot)$ and $H(\cdot|\cdot)$ are entropy and conditional entropy, respectively. Maximizing the marginal entropy term $H(Y)$ encourages the network to assign disparate classes to the inputs, and thus encourages a uniform distribution over the output classes. On the other hand, minimizing the conditional entropy encourages unambiguous class assignment for a given input. In the unsupervised setting, where other priors are not known, this loss makes intuitive sense. + +For the regularization loss term $R(\theta)$ above, many options have been proposed. Hu et al. (2017), for example, propose a Self-Augmented Training (SAT) loss, which imposes invariance on the outputs of original and slightly perturbed input data. The authors experimented with random perturbation (IMSAT-RPT) and adversarial perturbation (IMSAT-VAT), where the perturbation is chosen to maximize the divergence between the two outputs under the current model. + +# LADDER NETWORKS + +Ladder networks Rasmus et al. (2015) have shown impressive performance for semi-supervised classification. They employ a deep denoising auto-encoder architecture, in which additive noise is added to each hidden layer in the encoder, and the decoder learns a denoising function for each layer. The objective function is a weighted sum of a supervised cross entropy loss on the output of the noisy encoder and a squared-error unsupervised denoising loss over all layers. Unlike standard auto-encoders, ladder networks also add lateral skip connections from each layer of the noisy encoder to the corresponding decoder layer. The additive noise acts as a regularizer for the supervised loss, while the lateral connections in the denoising decoder layers enable the higher layer features to focus on more abstract and task-specific features. See Pezeshki et al. (2016) for a detailed analysis. + +Borrowing the formalism of Pezeshki et al.
(2016), a ladder network with $L$ encoder/decoder layers can be defined as: + +$$ +\tilde{x}_{i}, \tilde{z}_{i}^{(1)}, \dots, \tilde{z}_{i}^{(L)}, \tilde{y}_{i} = \operatorname{Encoder}_{\text{noisy}}(x_{i}, \theta), +$$ + +$$ +x_{i}, z_{i}^{(1)}, \dots, z_{i}^{(L)}, y_{i} = \operatorname{Encoder}_{\text{clean}}(x_{i}, \theta), +$$ + +$$ +\hat{x}_{i}, \hat{z}_{i}^{(1)}, \dots, \hat{z}_{i}^{(L)} = \operatorname{Decoder}(\tilde{z}_{i}^{(1)}, \dots, \tilde{z}_{i}^{(L)}, \phi), +$$ + +where $\theta$ and $\phi$ are the parameters of the Encoder and Decoder, respectively. The variables $z_i^{(k)}$ , $\tilde{z}_i^{(k)}$ , and $\hat{z}_i^{(k)}$ are the hidden layer outputs of the clean, noisy, and denoised versions at layer $k$ , respectively. $x_i$ , $y_i$ , and $\tilde{y}_i$ are the input, clean output, and noisy output, respectively. The objective function consists of a reconstruction loss between the clean and decoded intermediate features: + +$$ +loss^{denoise} = \sum_{i = 1}^{n} \sum_{k = 1}^{L} \lambda_{k}^{denoise} \left\| z_{i}^{(k)} - \hat{z}_{i}^{(k)} \right\|_{2} \tag{7} +$$ + +and a supervised cross entropy loss on the output of the noisy encoder (which is used only in the semi-supervised setting): + +$$ +loss^{sup} = -\sum_{i = 1}^{n} \log P (\tilde{y}_{i} = y_{i}^{*} \mid x_{i}) \tag{8} +$$ + +# Ladder-IM & Ladder-Dot + +We now describe our novel Ladder-IM and Ladder-Dot models. The unsupervised denoising loss in Equation 7, along with the lateral connection architecture, enables ladder networks to learn useful features from unsupervised data.
However, in the absence of any supervised loss (Equation 8), ladder networks can degenerate to the trivial solution of a constant output for each encoder layer, as the decoder can then simply memorize these constants to make the denoising loss zero. Having batch normalization layers helps to alleviate this problem, but the loss function still allows the trivial solution. On the other hand, the mutual information loss (Equation 6) in RIM methods, in particular the marginal entropy term $H(Y)$ , encourages the network to assign disparate classes to the inputs. + +Ladder-IM: Combining ladder networks with information maximization can fix the above degeneracy problem, while simultaneously encouraging the ladder output towards a uniform distribution. We use both the clean and noisy outputs of the ladder network for computing the mutual information loss, i.e., + +$$ +loss^{MI} = -\left( I (X; \tilde{Y}) + I (X; Y) \right) \tag{9} +$$ + +where $Y = \{y_{1},\dots ,y_{N}\}$ is the set of clean outputs, and $\tilde{Y} = \{\tilde{y}_1,\ldots ,\tilde{y}_N\}$ is the set of noisy outputs of the ladder network. + +Another way of thinking about the Ladder-IM approach is entirely within the RIM framework. The unsupervised ladder loss $loss^{denoise}$ can simply be thought of as the regularization term $R(\theta)$ in Equation 5. To that effect, we also add another regularization loss term, the KL divergence between the clean and noisy outputs of the ladder network encoder, i.e., + +$$ +loss^{ladder\_R} = KL\left( p (\tilde{y} | x) \,\|\, p (y | x) \right) \tag{10} +$$ + +This regularization can be thought of as a generalization of the random perturbation loss proposed in Hu et al. (2017), where the authors impose invariance on the outputs of original and randomly perturbed inputs. Our regularization, based on adding noise to the hidden layers, is similar to dropout Srivastava et al.
(2014), and can be thought of as adding higher level feature noise, rather than just input noise. + +Thus, in the unsupervised case, this leads to the following minimization objective: + +$$ +loss^{Ladder\text{-}IM} = loss^{denoise} + \alpha \cdot loss^{ladder\_R} + \beta \cdot loss^{MI} \tag{11} +$$ + +In this paper, we set $\alpha$ and $\beta$ to one. Finally, in the semi-supervised case, we also add the supervised cross entropy term (Equation 8), as done in the original ladder networks. + +Ladder-Dot: We also try a dot product loss to fix the above degeneracy problem. The dot product loss is defined as + +$$ +D \left( X_{i}, X_{j} \right) = Y_{i}^{T} Y_{j}, \quad \text{if } i \neq j \tag{12} +$$ + +which forces the network outputs for different inputs to be as orthogonal as possible. This has a similar effect to the IM loss, encouraging the network to assign disparate classes to the inputs. + +Between Ladder-IM and Ladder-Dot, we found Ladder-IM to perform better in most cases. However, we did find that Ladder-Dot along with the Kingdra iterations outperforms Ladder-IM when the data set has a large imbalance in the number of samples per class. The reason for this is that the dot product loss is agnostic to the number of samples per class, while the marginal entropy term in the IM loss will drive the network towards overfitting a class with more samples, compared to a class with fewer samples. + +Overall, we found in our experiments that Ladder-IM showed superior performance to IMSAT-RPT and IMSAT-VAT Hu et al. (2017) on most data sets. Moreover, in pure semi-supervised settings, Ladder-IM also outperformed vanilla ladder networks in our preliminary analysis. + +# C EXPERIMENTAL RESULTS + +# C.1 IMPACT OF NUMBER OF MODELS IN ENSEMBLE + +We evaluated the accuracy of Kingdra-Ladder-IM as the number of models in the ensemble was varied.
MNIST accuracy with 1, 2, 5, 10, and 15 models is 95.0, 96.2, 97.4, 98.5, and 98.5, respectively. This suggests that accuracy saturates at 10 models, and we therefore use an ensemble of 10 models for all our experiments. + +![](images/6fdced02a80eba95b5a1f909e9ecf46d409ec46b5dd796e2d2a54cdb94c341a5.jpg) +Figure 5: Graph shows clustering accuracy vs iterations for DeepCluster. We see that there is no improvement in accuracy after the first iteration. + +# C.2 COMPUTATION COST + +We have an efficient implementation of clustering, which takes 210s for the largest dataset ($n = 70000$). On a server with four P100 GPUs, Ladder-IM takes 2 mins, Ladder-IM with the ensemble takes 8 mins, and Kingdra with 10 iterations takes 80 mins, while IMSAT(RPT) takes 5 mins. + +# C.3 ANALYSIS OF DEEPCLUSTER + +Here we give an analysis of DeepCluster Caron et al. (2018), explaining its shortcomings. We observed that its clustering accuracy generally decreases with iterations. This is because the generated pseudo-labels can be bad, which results in worse accuracy in the next iteration. In contrast, our approach only uses a small number of high-confidence samples as pseudo-labels. + +# D DETAILS OF THE DATASETS + +- MNIST: A dataset of 70000 handwritten digits of 28-by-28 pixel size. The raw pixel values are normalized to the range 0-1 and flattened to a vector of 784 dimensions. +- CIFAR10: A dataset of 32-by-32 color images with 10 classes having 6000 examples each. Similar to Hu et al. (2017), features are extracted using a 50-layer pre-trained deep residual network. +- STL: A dataset of 96-by-96 color images with 10 classes having 1300 examples each. We do not use the 100000 unlabeled images provided in the dataset. Similar to Hu et al. (2017), features are extracted using a 50-layer pre-trained deep residual network. +- Reuters: A dataset containing English news stories with four categories: corporate/industrial, government/social, markets, and economics. We used the same preprocessing as Hu et al.
(2017). After removing stop-words, tf-idf features were used. +- 20News: A dataset containing newsgroup documents with 20 different newsgroups. Similar to Hu et al. (2017), after removing stop-words and keeping the 2000 most frequent words, tf-idf features were used. \ No newline at end of file diff --git a/unsupervisedclusteringusingpseudosemisupervisedlearning/images.zip b/unsupervisedclusteringusingpseudosemisupervisedlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ba8b04197701403c3ef2d19308bae5bb7af27e85 --- /dev/null +++ b/unsupervisedclusteringusingpseudosemisupervisedlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a02586fe03b4f11a8af68f175c81b1010ca65850f934cf528acfba10eaf0c061 +size 526130 diff --git a/unsupervisedclusteringusingpseudosemisupervisedlearning/layout.json b/unsupervisedclusteringusingpseudosemisupervisedlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a2907d056134e5fb7b6a84581a0dcbd731e3f2ed --- /dev/null +++ b/unsupervisedclusteringusingpseudosemisupervisedlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:172794d8445105153150299b0d81b933b3538d8d805f0cfaaa0e801fdc797b64 +size 433077 diff --git a/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_content_list.json b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1cb42cec9429b1e0781aa79a57c1ffb43cad53a7 --- /dev/null +++ b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd1db4546fe5277521227b4d6a25841864eaaecb0e24387770f86ccb60830694 +size 152095 diff --git
a/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_model.json b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4c7843fc0275f1b2bb52152fd477ebfea40ce97a --- /dev/null +++ b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81a4549c221f5504cf960ec9b0b728ffd5f876f3b244a31e815cb5a701a82a49 +size 179648 diff --git a/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_origin.pdf b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..145b4c6357ca3d308c4f8caa6024256c7a491e75 --- /dev/null +++ b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/6e2a967e-5449-4bf2-9580-2a2b009e1684_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:900b3acfe3b114cdc1b93363a28c60642ea3c48a6543c63c4dfe57b5fe6df875 +size 7466164 diff --git a/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/full.md b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4e18ecfd0b7bb01476749d248d76d375fe813e75 --- /dev/null +++ b/unsupervisedmodelselectionforvariationaldisentangledrepresentationlearning/full.md @@ -0,0 +1,543 @@ +# UNSUPERVISED MODEL SELECTION FOR VARIATIONAL DISENTANGLED REPRESENTATION LEARNING + +Sunny Duan* + +DeepMind + +sunnyd@google.com + +Loic Matthey + +DeepMind + +lmatthey@google.com + +Andre Saraiva + +DeepMind + +andresnds@google.com + +Nick Watters + +DeepMind +
+nwatters@google.com + +Chris Burgess + +DeepMind + +cpburgess@google.com + +Alexander Lerchner + +DeepMind + +lerchner@google.com + +Irina Higgins* + +DeepMind + +irinah@google.com + +# ABSTRACT + +Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks. To extend the benefits of disentangled representations to more complex domains and practical applications, it is important to enable hyperparameter tuning and model selection of existing unsupervised approaches without requiring access to ground truth attribute labels, which are not available for most datasets. This paper addresses this problem by introducing a simple yet robust and reliable method for unsupervised disentangled model selection. Our approach, Unsupervised Disentanglement Ranking (UDR)1, leverages the recent theoretical results that explain why variational autoencoders disentangle (Rolinek et al., 2019), to quantify the quality of disentanglement by performing pairwise comparisons between trained model representations. We show that our approach performs comparably to the existing supervised alternatives across 5400 models from six state of the art unsupervised disentangled representation learning model classes. Furthermore, we show that the ranking produced by our approach correlates well with the final task performance on two different domains. + +# 1 INTRODUCTION + +Happy families are all alike; every unhappy family is unhappy in its own way. 
— + +Leo Tolstoy, Anna Karenina + +Despite the success of deep learning in recent years (Hu et al., 2018; Espeholt et al., 2018; Silver et al., 2018; Lample et al., 2018; Hessel et al., 2017; Oord et al., 2016), the majority of state of the art approaches still lack many basic yet important properties, such as fairness, data-efficient learning, strong generalisation beyond the training data distribution, or the ability to transfer knowledge between tasks (Lake et al., 2016; Garnelo et al., 2016; Marcus, 2018). The idea that a good representation can help with such shortcomings is not new, and recently a number of papers have demonstrated that models with disentangled representations show improvements on these fronts (Higgins et al., 2017b; 2018b; Achille et al., 2018; Steenbrugge et al., 2018; Nair et al., 2018; Laversanne-Finot et al., 2018; van Steenkiste et al., 2019; Locatello et al., 2019). + +![](images/04afdf04dee188078a34761deda54291bf35cce39cb898da23475d4073012a8d.jpg) +Figure 1: Latent traversals for one of the best and one of the worst ranked trained $\beta$ -VAE models using the Unsupervised Disentanglement Ranking $(\mathrm{UDR_L})$ method on the 3D Cars dataset. For each seed image we fix all latents $z_{i}$ to the inferred value, then vary the value of one latent at a time to visualise its effect on the reconstructions. The high scoring model (left 3 blocks) appears well disentangled, since individual latents have consistent semantic meaning across seeds. The low scoring model (right block) is highly entangled, since the latent traversals are not easily interpretable. + +A common intuitive way to think about disentangled representations is that they should reflect the compositional
properties which can be compositionally recombined. Hence a disentangled representation of objects should reflect this by factorising into dimensions which correspond to those properties (Bengio et al., 2013; Higgins et al., 2018a). + +The ability to automatically discover the compositional factors of complex real datasets can be of great importance in many practical applications of machine learning and data science. However, it is important to be able to learn such representations in an unsupervised manner, since most interesting datasets do not have their generative factors fully labelled. For a long time scalable unsupervised disentangled representation learning was impossible, until recently a new class of models based on Variational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) was developed. These approaches (Higgins et al., 2017a; Burgess et al., 2017; Chen et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018) scale reasonably well and are the current state of the art in unsupervised disentangled representation learning. However, so far the benefits of these techniques have not been widely exploited because of two major shortcomings: First, the quality of the achieved disentangling is sensitive to the choice of hyperparameters, however, model selection is currently impossible without having access to the ground truth generative process and/or attribute labels, which are required by all the currently existing disentanglement metrics (Higgins et al., 2017a; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018). Second, even if one could apply any of the existing disentanglement metrics for model selection, the scores produced by these metrics can vary a lot even for models with the same hyperparameters and trained on the same data (Locatello et al., 2018). While a lot of this variance is explained by the actual quality of the learnt representations, some of it is introduced by the metrics themselves. 
In particular, all of the existing supervised disentanglement metrics assume a single "canonical" factorisation of the generative factors, any deviation from which is penalised. Such a "canonical" factorisation, however, is not chosen in a principled manner. Indeed, for the majority of datasets, apart from the simplest ones, multiple equally valid disentangled representations may be possible (see Higgins et al. (2018a) for a discussion). For example, the intuitive way that humans reason about colour is in terms of hue and saturation. However, colour may also be represented in RGB, YUV, HSV, HSL, or CIELAB. All of the above representations are equally valid, yet only one of them is allowed to be "canonical" by the supervised metrics. Hence, a model that learns to represent colour in a subspace aligned with HSV will be penalised by a supervised metric which assumes that the canonical disentangled representation of colour should be in RGB. This is despite the fact that both representations are equal in terms of preserving the compositional property at the core of what makes disentangled representations useful (Higgins et al., 2018a). As a result, the field finds itself in a predicament. On the one hand, there exists a set of approaches capable of reasonably scalable unsupervised disentangled representation learning. On the other hand, these models are hard to use in practice, because there is no easy way to do a hyperparameter search and model selection. + +This paper attempts to bridge this gap. We propose a simple yet effective method for unsupervised model selection for the class of current state-of-the-art VAE-based unsupervised disentangled representation learning methods. Our approach, Unsupervised Disentanglement Ranking (UDR), leverages the recent
Our approach, Unsupervised Disentanglement Ranking (UDR), leverages the recent theoretical results that explain why variational autoencoders disentangle (Rolinek et al., 2019), to quantify the quality of disentanglement by performing pairwise comparisons between trained model representations. We evaluate the validity of our unsupervised model selection metric against the four best existing supervised alternatives reported in the large scale study by Locatello et al. (2018): the $\beta$ -VAE metric (Higgins et al., 2017a), the FactorVAE metric (Kim & Mnih, 2018), Mutual Information Gap (MIG) (Chen et al., 2018) and DCI Disentanglement scores (Eastwood & Williams, 2018). We do so for all existing state of the art disentangled representation learning approaches: $\beta$ -VAE (Higgins et al., 2017a), CCI-VAE (Burgess et al., 2017), FactorVAE (Kim & Mnih, 2018), TC-VAE (Chen et al., 2018) and two versions of DIP-VAE (Kumar et al., 2017). We validate our proposed method on two datasets with fully known generative processes commonly used to evaluate the quality of disentangled representations: dSprites (Matthey et al., 2017) and 3D Shapes (Burgess & Kim, 2018), and show that our unsupervised model selection method is able to match the supervised baselines in terms of guiding a hyperparameter search and picking the most disentangled trained models both quantitatively and qualitatively. We also apply our approach to the 3D Cars dataset (Reed et al., 2014), where the full set of ground truth attribute labels is not available, and confirm through visual inspection that the ranking produced by our method is meaningful (Fig. 1). Overall we evaluate 6 different model classes, with 6 separate hyperparameter settings and 50 seeds on 3 separate datasets, totalling 5400 models, and show that our method is both accurate and consistent across models and datasets.
Finally, we validate that the model ranking produced by our approach correlates well with the final task performance on two recently reported tasks: a classification fairness task (Locatello et al., 2019) and a model-based reinforcement learning (RL) task (Watters et al., 2019). Indeed, on the former our approach outperformed the reported supervised baseline scores. + +# 2 OPERATIONAL DEFINITION OF DISENTANGLING + +Given a dataset of observations $X = \{\pmb{x}_1, \dots, \pmb{x}_N\}$, we assume that there exist a number of plausible generative processes $g_i$ that produce the observations from a small set of corresponding $K_i$ independent generative factors $\pmb{c}_i$. For each choice of $i$, $g_i: \pmb{c}_n \mapsto \pmb{x}_n$, where $p(\pmb{c}_n) = \prod_{j=1}^{K_i} p(c_n^j)$. For example, a dataset containing images of an object, which can be of a particular shape and colour, and which can be in particular vertical and horizontal positions, may be created by a generative process that assumes a ground truth disentangled factorisation into shape $\times$ colour $\times$ position, or shape $\times$ hue $\times$ saturation $\times$ position X $\times$ position Y. We operationalise a model as having learnt a disentangled representation, if it learns to invert one of the generative processes $g_i$ and recover a latent representation $z \in \mathbb{R}^L$, so that it best explains the observed data $p(z, x) \approx p(c_i, x)$, and factorises the same way as the corresponding data generative factors $\pmb{c}_i$. The choice of the generative process can be determined by the interaction between the model class and the observed data distribution $p(x)$, as discussed next in Sec. 3.1. + +# 3 VARIATIONAL UNSUPERVISED DISENTANGLING + +The current state of the art approaches to unsupervised disentangled representation learning are based on the Variational Autoencoder (VAE) framework (Rezende et al., 2014; Kingma & Welling, 2014).
VAEs attempt to estimate the lower bound on the joint distribution of the data and the latent factors $p(\pmb{x}, \pmb{z})$ by optimising the following objective: + +$$
\mathcal{L}_{VAE} = \mathbb{E}_{p(\boldsymbol{x})}\left[\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}\left[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})\right] - KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z})\right)\right] \tag{1}
$$ + +where, in the usual case, the prior $p(z)$ is chosen to be an isotropic unit Gaussian. In order to encourage disentangling, different approaches decompose the objective in Eq. 1 into various terms and change their relative weighting. In this paper we will consider six state of the art approaches to unsupervised disentangled representation learning that can be grouped into three broad classes based on how they modify the objective in Eq. 1: 1) $\beta$ -VAE (Higgins et al., 2017a) and CCI-VAE (Burgess et al., 2017) upweight the KL term; 2) FactorVAE (Kim & Mnih, 2018) and TC-VAE (Chen et al., 2018) introduce a total correlation penalty; and 3) two different implementations of DIP-VAE (-I and -II) (Kumar et al., 2017) penalise the deviation of the marginal posterior from a factorised prior (see Sec. A.4.1 in Supplementary Material for details). + +# 3.1 WHY DO VAES DISENTANGLE? + +In order to understand the reasoning behind our proposed unsupervised disentangled model selection method, it is first important to understand why VAEs disentangle. The objective in Eq. 1 does not in itself encourage disentangling, as discussed in Rolinek et al. (2019) and Locatello et al. (2018). Indeed, any rotationally invariant prior makes disentangled representations learnt in an unsupervised setting unidentifiable when optimising Eq. 1.
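For concreteness, the objective in Eq. 1, with the KL term upweighted as in the $\beta$ -VAE family described above, can be sketched in NumPy as follows. This is a minimal illustration assuming a diagonal Gaussian posterior and a unit Gaussian prior; the function names are ours, not taken from any of the cited implementations:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latents.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(recon_log_lik, mu, logvar, beta=1.0):
    # Negative ELBO with the KL term upweighted by beta;
    # beta = 1 recovers the standard VAE objective of Eq. 1.
    return np.mean(-recon_log_lik + beta * gaussian_kl(mu, logvar))
```

$\beta$ -VAE and CCI-VAE correspond to changing (or annealing a target on) `beta`; FactorVAE and TC-VAE instead add a total correlation penalty on top of the `beta = 1` objective.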
This theoretical result is not surprising and has been known for a while in the ICA literature (Comon, 1994), however what is surprising is that disentangling VAEs appear to work in practice. The question of what makes VAEs disentangle was answered by Rolinek et al. (2019), who were able to show that it is the peculiarities of the VAE implementation choices that allow disentangling to emerge (see also discussion in Burgess et al. (2017); Mathieu et al. (2019)). In particular, the interactions between the reconstruction objective (the first term in Eq. 1) and the enhanced pressure to match a diagonal prior created by the modified objectives of the disentangling VAEs, force the decoder to approximate PCA-like behaviour locally around the data samples. Rolinek et al. (2019) demonstrated that during training VAEs often enter the so-called "polarised regime", where many of the latent dimensions of the posterior are effectively switched off by being reduced to the prior $q_{\phi}(z_j) = p(z_j)$ (this behaviour is often further encouraged by the extra disentangling terms added to the ELBO). When trained in such a regime, a linear approximation of the Jacobian of the decoder around a data sample $\boldsymbol{x}_i$ , $J_i = \frac{\partial Dec_\theta(\mu_\phi(\boldsymbol{x}_i))}{\partial \mu_\phi(\boldsymbol{x}_i)}$ , is forced to have orthogonal columns, and hence to separate the generative factors based on the amount of reconstruction variance they induce. Given that the transformations induced by different generative factors will typically have different effects on the pixel space (e.g. changing the position of a sprite will typically affect more pixels than changing its size), such local orthogonalisation of the decoder induces an identifiable disentangled latent space for each particular dataset. 
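The local orthogonality property described above can be probed directly. The sketch below (our illustration, not code from Rolinek et al. (2019)) estimates the decoder Jacobian at a latent point by central finite differences and measures the relative off-diagonal mass of $J^\top J$, which is close to zero when the columns of $J$ are orthogonal:

```python
import numpy as np

def decoder_jacobian(dec, z, eps=1e-5):
    # Finite-difference estimate of J = d dec(z) / d z; column a is the
    # pixel-space direction induced by latent dimension a.
    cols = []
    for a in range(len(z)):
        dz = np.zeros_like(z)
        dz[a] = eps
        cols.append((dec(z + dz) - dec(z - dz)) / (2 * eps))
    return np.stack(cols, axis=1)

def column_orthogonality_gap(J):
    # Relative off-diagonal mass of J^T J; ~0 when the columns of J are
    # orthogonal, as predicted for a well disentangled VAE decoder.
    G = J.T @ J
    off = G - np.diag(np.diag(G))
    return np.linalg.norm(off) / np.linalg.norm(G)
```

For a linear decoder $z \mapsto Wz$ with orthogonal columns in $W$, the gap is numerically zero; for an entangled decoder the columns mix and the gap grows.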
An equivalent statement is that for a well disentangled VAE, the SVD decomposition $J = U\Sigma V^\top$ of the Jacobian $J$ calculated as above, results in a trivial $V$ , which is a signed permutation matrix. + +# 4 UNSUPERVISED DISENTANGLED MODEL SELECTION + +We now describe how the insights from Sec. 3.1 motivate the development of our proposed Unsupervised Disentanglement Ranking (UDR) method. Our method relies on the assumption that for a particular dataset and a VAE-based unsupervised disentangled representation learning model class, disentangled representations are all alike, while every entangled representation is entangled in its own way, to rephrase Tolstoy. We justify this assumption next. + +Disentangled representations are similar According to Rolinek et al. (2019) for a given non-adversarial dataset a disentangling VAE will likely keep converging to the same disentangled representation (up to permutation and sign inverse). Note that this representation will correspond to a single plausible disentangled generative process $g_{i}$ using the notation we introduced in Sec. 2. This is because any two different disentangled representations $\mathbf{z}_a$ and $\mathbf{z}_b$ learnt by a VAE-based model will only differ in terms of the corresponding signed permutation matrices $V_{a}$ and $V_{b}$ of the SVD decompositions of the locally linear approximations of the Jacobians of their decoders. + +Entangled representations are different Unfortunately the field of machine learning has little theoretical understanding of the nature and learning dynamics of internal representations in neural networks. The few pieces of research that have looked into the nature of model representations (Raghu et al., 2017; Li et al., 2016; Wang et al., 2018; Morcos et al., 2018) have been empirical rather than theoretical in nature. 
All of them suggest that neural networks tend to converge to different hidden representations despite being trained on the same task with the same hyperparameters and architecture and reaching similar levels of task performance. Furthermore, the theoretical analysis and the empirical demonstrations in Rolinek et al. (2019) suggest that the entangled VAEs learn representations that are different at least up to a rotation induced by a non-degenerate matrix $V$ in the SVD decomposition of the local linear approximation of the decoder Jacobian $J_{i}$. + +The justifications presented above rely on the theoretical work of Rolinek et al. (2019), which was empirically verified only for the $\beta$ -VAE. We have reasons to believe that the theory also holds for the other model classes presented in this paper, apart from DIP-VAE-I. We empirically verify that this is the case in Sec. A.10 in Supplementary Materials. Furthermore, in Sec. 5 we show that our proposed method works well in practice across all model classes, including DIP-VAE-I. + +Unsupervised Disentanglement Ranking Our proposed UDR method consists of four steps (illustrated in Fig. 4 in Supplementary Material): + +1. Train $M = H \times S$ models, where $H$ is the number of different hyperparameter settings, and $S$ is the number of different initial model weight configurations (seeds). +2. For each trained model $i \in \{1, \dots, M\}$, sample without replacement $P \leq S$ other trained models with the same hyperparameters but different seeds. +3. Perform $P$ pairwise comparisons per trained model and calculate the respective $\mathrm{UDR}_{ij}$ scores, where $i\in \{1,\dots,M\}$ is the model index, and $j\in \{1,\dots,P\}$ is its unique pairwise match from Step 2. +4. Aggregate $\mathrm{UDR}_{ij}$ scores for each model $i$ to report the final $\mathrm{UDR}_i = \mathrm{avg}_j(\mathrm{UDR}_{ij})$ scores, where $\mathrm{avg}_j(\cdot)$ is the median over $P$ scores.
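The four steps above can be sketched as follows, assuming a hypothetical `pair_score(i, j)` that implements the pairwise comparison of Step 3:

```python
import random
import statistics

def udr_pipeline(models, pair_score, P=5, seed=0):
    # models: list of dicts with a 'hyper' key identifying the hyperparameter
    # setting; pair_score(i, j) returns the UDR_ij similarity score (Step 3).
    rng = random.Random(seed)
    final = {}
    for i, m in enumerate(models):
        # Step 2: sample (without replacement) P other models trained with the
        # same hyperparameters but different seeds.
        partners = [j for j in range(len(models))
                    if j != i and models[j]['hyper'] == m['hyper']]
        sampled = rng.sample(partners, min(P, len(partners)))
        # Steps 3-4: score each pair and aggregate with the median.
        final[i] = statistics.median(pair_score(i, j) for j in sampled)
    return final
```

The all-to-all variant (UDR-A2A) discussed later simply drops the `models[j]['hyper'] == m['hyper']` restriction when building `partners`.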
+ +The key part of the UDR method is Step 3, where we calculate the $\mathrm{UDR}_{ij}$ score that summarises how similar the representations of the two models $i$ and $j$ are. As per the justifications above, two latent representations $\mathbf{z}_i$ and $\mathbf{z}_j$ should be scored as highly similar if they axis align with each other up to **permutation** (the same ground truth factor $c_k$ may be encoded by different latent dimensions within the two models, $z_{i,a}$ and $z_{j,b}$ where $a \neq b$ ), **sign inverse** (the two models may learn to encode the values of the generative factor in the opposite order compared to each other, $z_{i,a} = -z_{j,b}$ ), and **subsetting** (one model may learn a subset of the factors that the other model has learnt if the relevant disentangling hyperparameters encourage a different number of latents to be switched off in the two models). In order for the UDR to be invariant to the first scenario, we propose calculating a full $L \times L$ similarity matrix $R_{ij}$ between the individual dimensions of $\mathbf{z}_i \in \mathbb{R}^L$ and $\mathbf{z}_j \in \mathbb{R}^L$ (see Fig. 5 in Supplementary Material). In order to address the second point, we take the absolute value of the similarity matrix $|R_{ij}|$. Finally, to address the third point, we divide the UDR score by the average number of informative latents discovered by the two models. Note that even though disentangling often happens when the VAEs enter the "polarised regime", where many of the latent dimensions are switched off, the rankings produced by UDR are not affected by whether the model operates in such a regime or not.
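Concretely, the scoring just described (formalised below in Eq. 2, with the informativeness masks of Eq. 3) can be sketched as follows. `R_abs` stands for the absolute similarity matrix $|R_{ij}|$, and the boolean masks flag the informative latents of each model; the helper name is ours:

```python
import numpy as np

def udr_pair_score(R_abs, informative_a, informative_b):
    # R_abs: |R_ij| of shape (L, L); rows index latents of model i,
    # columns index latents of model j. The masks come from the KL test (Eq. 3).
    d_a, d_b = informative_a.sum(), informative_b.sum()
    total = 0.0
    for b in range(R_abs.shape[1]):  # strongest correlation per column
        col = R_abs[:, b]
        if informative_b[b] and col.sum() > 0:
            total += col.max() ** 2 / col.sum()
    for a in range(R_abs.shape[0]):  # strongest correlation per row
        row = R_abs[a, :]
        if informative_a[a] and row.sum() > 0:
            total += row.max() ** 2 / row.sum()
    return total / (d_a + d_b)
```

For two models related by a perfect signed permutation over their informative latents (so that `R_abs` restricted to those latents is a permutation matrix), the score is 1; diffuse, many-to-many similarity matrices score lower.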
+ +To populate the similarity matrix $R_{ij}$ we calculate each matrix element as the similarity between two vectors $\mathbf{z}_{i,a}$ and $\mathbf{z}_{j,b}$, where $\mathbf{z}_{i,a}$ is the response of a single latent dimension $z_{a}$ of model $i$ over the entire ordered dataset, or a fixed number of ordered mini-batches if the former is computationally restrictive (see Sec. A.5 in Supplementary Material for details). We considered two versions of the UDR score based on the method used for calculating the vector similarity: the non-parametric UDRS, using Spearman's correlation; and the parametric UDRL, using Lasso regression following past work on evaluating representations (Eastwood & Williams, 2018; Li et al., 2016). In practice the Lasso regression version worked slightly better, so the remainder of the paper is restricted to UDRL (we use UDRL and UDR interchangeably to refer to this version), while UDRS is discussed in the Supplementary Materials. + +Given a similarity matrix $R_{ij}$, we want to find a one-to-one correspondence between all the informative latent dimensions within the chosen pair of models. Hence, we want to see at most a single strong correlation in each row and column of the similarity matrix. To that end, we step through the matrix $R = |R_{ij}|$, one column and row at a time, looking for the strongest correlation, and weighting it by the proportional weight it has within its respective column or row. We then average all such weighted scores over all the informative row and column latents to calculate the final $\mathrm{UDR}_{ij}$ score: + +$$
\frac{1}{d_a + d_b}\left[\sum_{b} \frac{r_a^2 \cdot I_{KL}(b)}{\sum_{a} R(a, b)} + \sum_{a} \frac{r_b^2 \cdot I_{KL}(a)}{\sum_{b} R(a, b)}\right] \tag{2}
$$ + +where $r_a = \max_a R(a, b)$ and $r_b = \max_b R(a, b)$.
$I_{KL}$ indicates an "informative" latent within a model and $d$ is the number of such latents: $d_a = \sum_a I_{KL}(a)$ and $d_b = \sum_b I_{KL}(b)$. We define a latent dimension as "informative" if it has learnt a latent posterior which diverges from the prior: + +$$
I_{KL}(a) = \begin{cases} 1 & KL\left(q_{\phi}(z_{a} \mid \boldsymbol{x}) \mid\mid p(z_{a})\right) > 0.01 \\ 0 & \text{otherwise} \end{cases} \tag{3}
$$ + +UDR variations We explored whether doing all-to-all pairwise comparisons, with models in Step 2 sampled from the set of all $M$ models rather than the subset of $S$ models with the same hyperparameters, would produce more accurate results. Additionally we investigated the effect of choosing different numbers of models $P$ for pairwise comparisons by sampling $P \sim \mathrm{U}[5,45]$. + +![](images/6c7ce4b6e32b6afcf91d8144a5a48f8219888094a1538c4bd1d611e9748a6e34.jpg) +Figure 2: Hyperparameter search results for six unsupervised disentangling model classes evaluated using the unsupervised UDR and the supervised $\beta$ -VAE, FactorVAE, MIG and DCI Disentangling metrics and trained on either dSprites (top) or 3D Shapes (bottom) datasets. "Hyper" corresponds to the particular hyperparameter setting considered (see Tbl. 5 in Supplementary Materials for particular values). The box and whisker plots for each hyperparameter setting summarise the scores for 50 different model seeds. Higher median values indicate better hyperparameters. The ranking of hyperparameters tends to be similar between the different metrics, including UDR. + +UDR assumptions and limitations Note that our approach is aimed at the current state of the art disentangling VAEs, for which the assumptions of our metric have been demonstrated to hold (Rolinek et al., 2019). It may be applied to other model classes; however, the following assumptions and limitations need to be considered: + +1.
Disentangled representations produced by two models from the same class trained on the same dataset are likely to be more similar than entangled representations – this holds for disentangling VAEs (Rolinek et al., 2019), but may not hold more broadly. +2. Continuous, monotonic and scalar factors – UDR assumes that these properties hold for the data generative factors and their representations. This is true for the disentangling approaches described in Sec. 3, but may not hold more generally. It is likely that UDR can be adapted to work with other kinds of generative factors (e.g. factors with special or no geometry) by exchanging the similarity calculations in Step 3 with an appropriate measure, however we leave this for future work. + +
| Model class | dSprites, Lasso (Hyper) | dSprites, Lasso (All-2-All) | dSprites, Spearman (Hyper) | dSprites, Spearman (All-2-All) | 3D Shapes, Lasso (Hyper) | 3D Shapes, Lasso (All-2-All) | 3D Shapes, Spearman (Hyper) | 3D Shapes, Spearman (All-2-All) |
|---|---|---|---|---|---|---|---|---|
| $\beta$-VAE | 0.60 | 0.76 | 0.54 | 0.72 | 0.71 | 0.68 | 0.70 | 0.71 |
| TC-VAE | 0.40 | 0.67 | 0.37 | 0.60 | 0.81 | 0.79 | 0.81 | 0.75 |
| DIP-VAE | 0.61 | 0.69 | 0.65 | 0.72 | 0.75 | 0.74 | 0.75 | 0.78 |
+ +Table 1: Rank correlations between MIG and different versions of UDR across two datasets and three model classes. The performance is comparable across datasets, UDR versions and model classes. See Fig. 6 in Supplementary Materials for comparisons with other supervised metrics. + +3. Herd effect - since UDR detects disentangled representations through pairwise comparisons, the score it assigns to each individual model will depend on the nature of the other models involved in these comparisons. This means that UDR is unable to detect a single disentangled model within a hyperparameter sweep. It also means that when models are only compared within a single hyperparameter setting, individual model scores may be over- or under-estimated, as they tend to be drawn towards the mean of the scores of the other models within a hyperparameter group. Thus, it is preferable to perform the UDR-A2A during model selection and UDR during hyperparameter selection. +4. Explicitness bias - UDR does not penalise models that learn a subset of the data generative factors. In fact, such models often score higher than those that learn the full set of generative factors, because the current state of the art disentangling approaches tend to trade off the number of discovered factors for cleaner disentangling. As discussed in Sec. 2, we provide the practitioner with the ability to choose the most disentangled model per number of factors discovered by approximating this with the $d$ score in Eq. 2. +5. Computational cost - UDR requires training a number of seeds per hyperparameter setting and $M \times P$ pairwise comparisons per hyperparameter search, which may be computationally expensive. That said, training multiple seeds per hyperparameter setting is good research practice to produce more robust results, and UDR computations are highly parallelisable. + +To summarise, UDR relies on a number of assumptions and has certain limitations that we hope to relax in future work.
However, it offers improvements over the existing supervised metrics. Apart from being the only method that does not rely on supervised attribute labels, its scores are often more representative of the true disentanglement quality (e.g. see Fig. 3 and Fig. 9 in Supplementary Materials), and it does not assume a single "canonical" disentangled factorisation per dataset. Hence, we believe that UDR can be a useful method for unlocking the power of unsupervised disentangled representation learning to real-life practical applications, at least in the near future. + +# 5 EXPERIMENTS + +Our hope was to develop a method for unsupervised disentangled model selection with the following properties: it should 1) help with hyperparameter tuning by producing an aggregate score that can be used to guide evolutionary or Bayesian methods (Jaderberg et al., 2018; Snoek et al., 2012; Thornton et al., 2012; Bergstra et al., 2011; Hutter et al., 2011; Miikkulainen et al., 2017); 2) rank individual trained models based on their disentanglement quality; 3) correlate well with final task performance. In this section we evaluate our proposed UDR against these qualities. For the reported experiments we use the trained model checkpoints and supervised scores from Locatello et al. (2018) to evaluate $\beta$ -VAE, CCI-VAE, FactorVAE, TC-VAE, DIP-VAE-I and DIP-VAE-II on two benchmark datasets: dSprites (Matthey et al., 2017) and 3D Shapes (Burgess & Kim, 2018) (see Sec. A.3 for details). Each model is trained with $H = 6$ different hyperparameter settings (detailed in Sec. A.4.1 in Supplementary Material), with $S = 50$ seeds per setting, and $P = 50$ pairwise comparisons. + +UDR correlates well with the supervised metrics. To validate UDR, we calculate Spearman's correlation between its model ranking and that produced by four existing supervised disentanglement metrics found to be the most meaningful in the large scale comparison study by Locatello et al. 
(2018): the original $\beta$ -VAE metric (Higgins et al., 2017a), FactorVAE metric (Kim & Mnih, 2018), Mutual Information Gap (MIG) (Chen et al., 2018) and DCI Disentanglement (Eastwood & Williams, 2018) (see Sec. A.6 in Supplementary Material for metric details). The average correlation for UDR + +![](images/8ebebae19c40be1b984fea426313d055d12c12cb964be8cc32111f83f8420a97.jpg) +Figure 3: Latent traversals of the top ranked trained DIP-VAE-I, TC-VAE, CCI-VAE and $\beta$ -VAE according to the UDR method. At the top of each plot the two presented scores are UDR/FactorVAE metric. Note that the FactorVAE metric scores visually entangled models very highly. $d$ is the number of informative latents. The uninformative latents are greyed out. + +![](images/63e6010c8ea0b815f922820671601630626ba85060b189467a3477eca0e2169c.jpg) + +![](images/e64a2c1f3b20b3c8986b0f1e48258b0de0f2e18f5a63eef0a18eb482902ad6ee.jpg) + +![](images/940c05225a262633da1608e96fb6edd7341441ac6202ff9cc4743c9409f42b7a.jpg) + +is $0.54 \pm 0.06$ and for UDR-A2A is $0.60 \pm 0.11$ . This is comparable to the average Spearman's correlation between the model rankings produced by the different supervised metrics: $0.67 \pm 0.2$ . The variance in rankings produced by the different metrics is explained by the fact that the metrics capture different aspects of disentangling (see Sec. A.2 in Supplementary Materials for a discussion of how UDR relates to other representation comparison methods). Tbl. 1 provides a breakdown of correlation scores between MIG and the different versions of UDR for different model classes and datasets. It is clear that the different versions of UDR perform similarly to each other, and this holds across datasets and model classes. Note that unlike the supervised metrics, UDR does not assume a "canonical" disentangled representation. 
Instead, it allows any one of the many equivalent possible ground truth generative processes to become the "canonical" one for each particular dataset and model class, as per the theoretical results by Rolinek et al. (2019) summarised in Sec. 3.1. + +UDR is useful for hyperparameter selection. Fig. 2 compares the scores produced by UDR and the four supervised metrics for 3600 trained models, split over six model classes, two datasets and six hyperparameter settings. We consider the median score profiles across the six hyperparameter settings to evaluate whether a particular setting is better than others. It can be seen that UDR broadly agrees with the supervised metrics on which hyperparameters are more promising for disentangling. This holds across datasets and model classes. Hence, UDR may be useful for evaluating model fitness for disentangled representation learning as part of an evolutionary algorithm or Bayesian hyperparameter tuning. + +UDR is useful for model selection. Fig. 2 can also be used to examine whether a particular trained model has learnt a good disentangled representation. We see that some models reach high UDR scores. For example, more models score highly as the value of the $\beta$ hyperparameter is increased in the $\beta$ -VAE model class. This is in line with the previously reported results (Higgins et al., 2017a). Note that the 0th hyperparameter setting in this case corresponds to $\beta = 1$ , which is equivalent to the standard VAE objective (Kingma & Welling, 2014; Rezende et al., 2014). As expected, these models score low in terms of disentangling. We also see that for some model classes (e.g. DIP-VAE-I, DIP-VAE-II and FactorVAE on dSprites) no trained model scores highly according to UDR. This suggests that none of the hyperparameter choices explored were good for this particular dataset, and that no instance of the model class learnt to disentangle well. 
To test this, we use latent traversals to qualitatively evaluate the level of disentanglement achieved by the models, ranked by their UDR scores. This is a common technique for simple visual datasets where no ground truth attribute labels are available. Such traversals involve changing the value of one latent dimension at a time and evaluating its effect on the resulting reconstructions to understand whether the latent has learnt to represent anything semantically meaningful. Fig. 3 demonstrates that the qualitative disentanglement quality is reflected well in the UDR scores. The figure also highlights that the supervised metric scores can sometimes be overoptimistic. For example, compare TC-VAE and $\beta$ -VAE traversals in Fig. 3. These are scored similarly by the supervised metric (0.774 and 0.751) but differently by UDR (0.444 and 0.607). Qualitative evaluation of the traversals clearly shows that $\beta$ -VAE learnt a more disentangled representation than TC-VAE, which is captured by UDR but not by the supervised metric. Fig. 9 in Supplementary Material provides more examples. We also evaluated how well UDR ranks models trained on more complex datasets, CelebA and ImageNet, and found that it performs well (see Sec. A.9 in Supplementary Materials).
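A latent traversal of the kind shown in Fig. 3 can be sketched with a simple helper (our illustration; `dec` stands for any decoder callable mapping a latent vector to an image):

```python
import numpy as np

def latent_traversal(dec, z, dim, values):
    # Decode a base latent z while sweeping a single dimension over `values`,
    # keeping all other latent dimensions fixed.
    frames = []
    for v in values:
        z_t = np.array(z, dtype=float)
        z_t[dim] = v
        frames.append(dec(z_t))
    return np.stack(frames)
```

Inspecting the resulting frames for each `dim` in turn reveals whether that latent controls a single semantically meaningful factor (disentangled) or several at once (entangled).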
| Sample # ($P$) | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 |
|---|---|---|---|---|---|---|---|---|---|
| Correlation | 0.51±0.07 | 0.57±0.03 | 0.57±0.05 | 0.60±0.03 | 0.59±0.03 | 0.61±0.02 | 0.61±0.02 | 0.61±0.01 | 0.61±0.01 |
+ +Table 2: Rank correlations of the UDR score with the $\beta$ -VAE metric on the dSprites dataset for a $\beta$ -VAE hyperparameter search as the number of pairwise comparisons $P$ per model was varied. + +UDR works well even with five pairwise comparisons. We test the effect of the number of pairwise comparisons $P$ on the variance and accuracy of the UDR scores. Tbl. 2 reports the changes in the rank correlation with the $\beta$ -VAE metric on the dSprites dataset as $P$ is varied between 5 and 45. We see that the correlation between the UDR and the $\beta$ -VAE metric becomes higher and the variance decreases as the number of pairwise comparisons is increased. However, even with $P = 5$ the correlation is reasonable. + +UDR generalises to a dataset with no attribute labels. We investigate whether UDR can be useful for selecting well disentangled models trained on the 3D Cars (Reed et al., 2014) dataset with poorly labelled attributes, which makes it a poor fit for supervised disentanglement metrics. Fig. 1 shows that a highly ranked model according to UDR appears disentangled, while a poorly ranked one appears entangled. Fig. 10 in Supplementary Material provides more examples of high and low scoring models according to the UDR method. + +UDR predicts final task performance. We developed UDR to help practitioners use disentangled representations to better solve subsequent tasks. Hence, we evaluate whether the model ranking produced by UDR correlates with task performance on two different domains: fairness on a classification task introduced by Locatello et al. (2019), and data efficiency on a clustering task for a model-based reinforcement learning agent introduced by Watters et al. (2019) (see Sec. A.8 in Supplementary Materials for more details). We found that UDR had an average of 0.8 Spearman correlation with the fairness scores, which is higher than the average of 0.72 correlation between fairness and supervised scores reported by Locatello et al. (2019).
We also found that UDR scores had 0.56 Spearman correlation with data efficiency of the COBRA agent. The difference between the best and the worst models according to UDR amounted to around $66\%$ reduction in the number of steps to $90\%$ success rate on the task. + +# 6 CONCLUSION + +We have introduced UDR, the first method for unsupervised model selection for variational disentangled representation learning. We have validated our approach on 5400 models covering all six state of the art VAE-based unsupervised disentangled representation learning model classes. We compared UDR to four existing supervised disentanglement metrics both quantitatively and qualitatively, and demonstrated that our approach works reliably well across three different datasets, often ranking models more accurately than the supervised alternatives. Moreover, UDR avoids one of the big shortcomings of the supervised disentangling metrics – the arbitrary choice of a "canonical" disentangled factorisation, instead allowing any of the equally valid disentangled generative processes to be accepted. Finally, we also demonstrated that UDR is useful for predicting final task performance using two different domains. Hence, we hope that UDR can be a step towards unlocking the power of unsupervised disentangled representation learning to real-life applications. + +# ACKNOWLEDGEMENTS + +We thank Olivier Bachem and Francesco Locatello for helping us re-use their code and model checkpoints, and Neil Rabinowitz, Avraham Ruderman and Tatjana Chavdarova for useful feedback. + +# REFERENCES + +Alessandro Achille, Tom Eccles, Loic Matthey, Christopher P Burgess, Nick Watters, Alexander Lerchner, and Irina Higgins. Life-long disentangled representation learning with cross-domain latent homologies. NIPS, 2018. +Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828, 2013. 
+ +James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. NIPS, 2011. +Chris Burgess and Hyunjik Kim. 3d shapes dataset. https://github.com/deepmind/3dshapes-dataset/, 2018. +Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$ -VAE. NIPS Workshop of Learning Disentangled Features, 2017. +Christopher P. Burgess, Loic Matthey, Nick Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. MONet: Unsupervised scene decomposition and representation. arXiv preprint, January 2019. +Tian Qi Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. NIPS, 2018. +Taco Cohen and Max Welling. Group equivariant convolutional networks. ICML, 2016. +Pierre Comon. Independent component analysis, a new concept? Signal Processing, 36:287-314, 1994. +Cian Eastwood and Christopher K. I. Williams. A framework for the quantitative evaluation of disentangled representations. *ICLR*, 2018. +Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. arXiv, 2018. +Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. Towards deep symbolic reinforcement learning. arXiv preprint arXiv:1609.05518, 2016. +Robert Gens and Pedro M. Domingos. Deep symmetry networks. NIPS, 2014. +David R. Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: an overview with application to learning methods. Neural Computation, 16(12):2639-2664, 2004. +Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver.
Rainbow: Combining improvements in deep reinforcement learning. arxiv, 2017. +Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. $\beta$ -VAE: Learning basic visual concepts with a constrained variational framework. *ICLR*, 2017a. +Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. ICML, 2017b. +Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthew, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. arXiv, 2018a. +Irina Higgins, Nicolas Sonnerat, Loic Matthew, Arka Pal, Christopher Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, and Alexander Lerchner. SCAN: Learning hierarchical compositional visual concepts. ICLR, 2018b. +Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. CVPR, 2018. +Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. Learning and Intelligent Optimization, 2011. +Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population based training of neural networks. arXiv, 2018. +Hyunjik Kim and Andriy Mnih. Disentangling by factorising. *ICLR*, 2018. +Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. *ICLR*, 2014. +Nikolaus Kriegeskorte, Marieke Mur, and Peter Bandettini. Representational similarity analysis - connecting the branches of systems neuroscience. Front Syst Neurosci., 4(2), 2008. +Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. arxiv, 2017. + +Brenden M. 
Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, pp. 1-101, 2016.
Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Phrase-based & neural unsupervised machine translation. arXiv, 2018.
Adrien Laversanne-Finot, Alexandre Péré, and Pierre-Yves Oudeyer. Curiosity driven exploration of learned disentangled goal spaces. *arXiv*, 2018.
Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? *ICLR*, 2016.
Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Scholkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. ICML, 97:4114-4124, 2018.
Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Scholkopf, and Olivier Bachem. On the fairness of disentangled representations. *arXiv*, 2019.
Gary Marcus. Deep learning: A critical appraisal. arXiv, 2018.
Emile Mathieu, Tom Rainforth, N. Siddharth, and Yee Whye Teh. Disentangling disentanglement in variational autoencoders. ICML, 2019.
Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/dSprites-dataset/.
Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, and Babak Hodjat. Evolving Deep Neural Networks. arXiv, 2017.
Ari S. Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. NIPS, 2018.
Ashvin Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. arXiv, 2018.
+Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016. +Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. NIPS, 2017. +Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. ICML, 2014. +Danilo J Rezende and Fabio Viola. Generalized elbo with constrained optimization, geco. Workshop on Bayesian Deep Learning, NeurIPS, 2018. +Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 32(2):1278-1286, 2014. +Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. NIPS, 2018. +Michal Rolinek, Dominik Zietlow, and Georg Martius. Variational autoencoders pursue pca directions (by accident). CVPR, 2019. +Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6): 863-869, 1992. +David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362 (6419):1140-1144, 2018. doi: 10.1126/science.aar6404. +J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. arXiv, 2012. +Stefano Soatto. Steps toward a theory of visual information. Technical Report UCLA-CSD100028, 2010. + +Xander Steenbrugge, Sam Leroux, Tim Verbelen, and Bart Dhoedt. Improving generalization for abstract reasoning tasks using disentangled feature representations. 
*arxiv*, 2018. +Raphael Suter, Dorde Miladinovic, Stefan Bauer, and Bernhard Scholkopf. Interventional robustness of deep latent variable models. arxiv, 2018. +C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown. Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms. arXiv, 2012. +Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, and Olivier Bachem. Are disentangled representations helpful for abstract visual reasoning? arxiv, 2019. +Liwei Wang, Lunjia Hu, Jiayuan Gu, Yue Wu, Zhiqiang Hu, Kun He, and John Hopcroft. Towards understanding learning representations: To what extent do different neural networks learn the same representation. NeurIPS, 2018. +Nicholas Watters, Loic Matthew, Matko Bosnjak, Christopher P. Burgess, and Alexander Lerchner. Cobra: Data-efficient model-based rl through unsupervised object discovery and curiosity-driven exploration. *arxiv*, 2019. + +# A SUPPLEMENTARY MATERIAL + +# A.1 USEFUL PROPERTIES OF DISENTANGLED REPRESENTATIONS + +Disentangled representations are particularly useful because they re-represent the information contained in the data in a way that enables semantically meaningful compositionality. For example, having discovered that the data is generated using two factors, colour and shape, such a model would be able to support meaningful reasoning about fictitious objects, like pink elephants, despite having never seen one during training (Higgins et al., 2017b; 2018b). This opens up opportunities for counterfactual reasoning, more robust and interpretable inference and model-based planning (Higgins et al., 2018a; Suter et al., 2018). Furthermore, such a representation would support more data efficient learning for subsequent tasks, like a classification objective for differentiating elephants from cats. This could be achieved by ignoring the nuisance variables irrelevant for the task, e.g. 
the colour variations, by simply masking out the disentangled subspaces that learnt to represent such nuisances, while only paying attention to the task-relevant subspaces, e.g. the units that learnt to represent shape (Cohen & Welling, 2016; Gens & Domingos, 2014; Soatto, 2010). Hence, the semantically meaningful compositional nature of disentangled representations is perhaps the most sought after aspect of disentangling, due to its strong implications for generalisation, data efficiency and interpretability (Schmidhuber, 1992; Bengio et al., 2013; Higgins et al., 2018a). + +# A.2 ASPECTS OF DISENTANGLEMENT MEASURED BY DIFFERENT METRICS + +Methods for evaluating and comparing representations have been proposed in the past. The most similar approaches to ours are the DCI Disentanglement score from Eastwood & Williams (2018) and the axis alignment comparison of representations in trained classifiers proposed in Li et al. (2016). The former is not directly applicable for unsupervised disentangled model selection, since it requires access to the ground truth attribute labels. Even when adapted to compare two latent representations, our preliminary experiments suggested that the entropy based aggregation proposed in Eastwood & Williams (2018) is inferior to our aggregation in Eq. 2 when used in the UDR setup. The approach by Li et al. (2016) shares the similarity matrix calculation step with us, however they never go beyond that quantitatively, opting for qualitative evaluations of model representations instead. Hence, their approach is not directly applicable to quantitative unsupervised disentangled model ranking. + +Other related approaches worth mentioning are the Canonical Correlation Analysis (CCA) and its modifications (Hardoon et al., 2004; Raghu et al., 2017; Morcos et al., 2018). 
These approaches, however, tend to be invariant to invertible affine transformations and therefore to the axis alignment of individual neurons, which makes them unsuitable for evaluating disentangling quality. Finally, Representation Similarity Matrix (RSM) (Kriegeskorte et al., 2008) is a commonly used method in Neuroscience for comparing the representations of different systems to the same set of stimuli. This technique, however, is not applicable for measuring disentangling, because it ignores dimension-wise response properties.

When talking about disentangled representations, three properties are generally considered: modularity, compactness and explicitness (Ridgeway & Mozer, 2018). Modularity measures whether each latent dimension encodes only one data generative factor, compactness measures whether each data generative factor is encoded by a single latent dimension, and explicitness measures whether all the information about the data generative factors can be decoded from the latent representation. We believe that modularity is the key aspect of disentangling, since it measures whether the representation is compositional, which gives disentangled representations the majority of their beneficial properties (see Sec. A.1 in Supplementary Materials for more details). Compactness, on the other hand, may not always be desirable. For example, according to a recent definition of disentangled representations (Higgins et al., 2018a), it is theoretically impossible to represent 3D rotation in a single dimension (see also Ridgeway & Mozer (2018)). Finally, while explicitness is clearly desirable for preserving information about the data that may be useful for subsequent tasks, in practice models often fail to discover and represent the full set of the data generative factors due to restrictions on both the observed data distribution and the model capacity (Mathieu et al., 2019).
Hence, we suggest noting the explicitness of a representation, but not necessarily punishing its disentanglement ranking if it is not fully explicit. Instead, we suggest that the practitioner should have the choice to select the most disentangled model given a particular number of discovered generative factors. Hence, in the rest of the paper we will use the term "disentanglement" to refer to the compositional property of a representation related to the modularity measure. Tbl. 3 provides a summary of how the different metrics considered in the paper compare in terms of modularity, compactness and explicitness. + +# A.3 DATASET DETAILS + +dSprites A commonly used unit test for evaluating disentangling is the dSprites dataset (Matthey et al., 2017). This dataset consists of images of a single binary sprite pasted on a blank background and can be fully described by five generative factors: shape (3 values), position x (32 values), position y (32 values), size (6 values) and + +Table 3: Disentangled model selection metrics comparison. M - modularity, C - compactness, E - explicitness (Ridgeway & Mozer, 2018) + +
| Metric | M | C | E |
| --- | --- | --- | --- |
| β-VAE | ✓ | ✓ | × |
| FactorVAE | ✓ | ✓ | ✓ |
| MIG | ✓ | ✓ | ✓ |
| DCI Disentanglement | ✓ | × | × |
| UDR | ✓ | × | × |
+ +rotation (40 values). All the generative factors are sampled from a uniform distribution. Rotation is sampled from the full 360 degree range. The generative process for this dataset is fully deterministic, resulting in 737,280 total images produced from the Cartesian product of the generative factors. + +3D Shapes A more complex dataset for evaluating disentangling is the 3D Shapes dataset (Burgess & Kim, 2018). This dataset consists of images of a single 3D object in a room and is fully specified by six generative factors: floor colour (10 values), wall colour (10 values), object colour (10 values), size (8 values), shape (4 values) and rotation (15 values). All the generative factors are sampled from a uniform distribution. Colours are sampled from the circular hue space. Rotation is sampled from the $[-30, 30]$ degree angle range. + +3D Cars This dataset was adapted from Reed et al. (2014). The full data generative process for this dataset is unknown. The labelled factors include 199 car models and 24 rotations sampled from the full 360 degree out of plane rotation range. An example of an unlabelled generative factor is the colour of the car – this varies across the dataset. + +# A.4 UNSUPERVISED DISENTANGLED REPRESENTATION LEARNING MODELS + +As mentioned in Sec. 3, current state of the art approaches to unsupervised disentangled representation learning are based on the VAE (Kingma & Welling, 2014; Rezende et al., 2014) objective presented in Eq. 1. These approaches decompose the objective in Eq. 1 into various terms and change their relative weighting to exploit the trade-off between the capacity of the latent information bottleneck with independent sources of noise, and the quality of the resulting reconstruction in order to learn a disentangled representation. The first such modification was introduced by Higgins et al. 
(2017a) in their $\beta$ -VAE framework:

$$
\mathbb{E}_{p(\boldsymbol{x})}\left[\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}\left[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})\right] - \beta KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z})\right)\right] \tag{4}
$$

In order to achieve disentangling in $\beta$ -VAE, the KL term in Eq. 4 is typically up-weighted by setting $\beta > 1$ . This implicitly reduces the latent bottleneck capacity and, through the interaction with the reconstruction term, encourages the generative factors $c_k$ with different reconstruction profiles to be encoded by different independent noisy channels $z_l$ in the latent bottleneck. Building on the $\beta$ -VAE ideas, CCI-VAE (Burgess et al., 2017) suggested slowly increasing the bottleneck capacity during training, thus improving the final disentanglement and reconstruction quality:

$$
\mathbb{E}_{p(\boldsymbol{x})}\left[\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}\left[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})\right] - \gamma \left| KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z})\right) - C \right| \right] \tag{5}
$$

Later approaches (Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2017) showed that the KL term in Eq. 1 can be further decomposed according to:

$$
\mathbb{E}_{p(\boldsymbol{x})}\left[KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z})\right)\right] = I(\boldsymbol{x}; \boldsymbol{z}) + KL\left(q_{\phi}(\boldsymbol{z}) \mid\mid p(\boldsymbol{z})\right) \tag{6}
$$

Hence, penalising the full KL term as in Eqs. 4-5 is not optimal, since it unnecessarily penalises the mutual information between the latents and the data. To remove this undesirable side effect, different authors suggested instead adding more targeted penalised terms to the VAE objective function.
These include different implementations of the total correlation penalty (FactorVAE by Kim & Mnih (2018) and TC-VAE by Chen et al. (2018)):

$$
\mathcal{L}_{VAE} - \gamma KL\left(q_{\phi}(\boldsymbol{z}) \mid\mid \prod_{j=1}^{M} q_{\phi}\left(z_{j}\right)\right) \tag{7}
$$

and different implementations of the penalty that pushes the marginal posterior towards a factorised prior (DIP-VAE by Kumar et al. (2017)):

$$
\mathcal{L}_{VAE} - \gamma KL\left(q_{\phi}(\boldsymbol{z}) \| p(\boldsymbol{z})\right) \tag{8}
$$

# A.4.1 MODEL IMPLEMENTATION DETAILS

We re-used the trained checkpoints from Locatello et al. (2018), so we refer readers to the original paper for full implementation details. Briefly, the following architecture and optimiser were used.

Table 4: Encoder and Decoder Implementation details shared for all models
| Encoder | Decoder |
| --- | --- |
| Input: 64 × 64 × number of channels | Input: $\mathbb{R}^{10}$ |
| 4 × 4 conv, 32 ReLU, stride 2 | FC, 256 ReLU |
| 4 × 4 conv, 32 ReLU, stride 2 | FC, 4 × 4 × 64 ReLU |
| 4 × 4 conv, 64 ReLU, stride 2 | 4 × 4 upconv, 64 ReLU, stride 2 |
| 4 × 4 conv, 64 ReLU, stride 2 | 4 × 4 upconv, 32 ReLU, stride 2 |
| FC 256, FC 2 × 10 | 4 × 4 upconv, 32 ReLU, stride 2 |
| | 4 × 4 upconv, number of channels, stride 2 |
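As a quick sanity check on the encoder in Table 4 (illustrative only, assuming the common padding-1 convention for 4 × 4, stride-2 convolutions, which the paper does not state explicitly), each conv layer halves the spatial resolution, so a 64 × 64 input reaches a 4 × 4 feature map, matching the decoder's FC, 4 × 4 × 64 layer:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    # Output spatial size of a strided convolution (floor convention).
    return (size + 2 * padding - kernel) // stride + 1

size = 64
for _ in range(4):  # four 4x4, stride-2 conv layers from Table 4
    size = conv_out(size)

print(size)  # 4: spatial size feeding the FC 256 layer
```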
+ +Table 5: Hyperparameters used for each model architecture + +
| Model | Parameters | Values |
| --- | --- | --- |
| β-VAE | β | [1, 2, 4, 6, 8, 16] |
| CCI-VAE | c_max | [5, 10, 25, 50, 75, 100] |
| | iteration threshold | 100000 |
| | γ | 1000 |
| FactorVAE | γ | [10, 20, 30, 40, 50, 100] |
| DIP-VAE-I | λ_od | [1, 2, 5, 10, 20, 50] |
| | λ_d | 10 λ_od |
| DIP-VAE-II | λ_od | [1, 2, 5, 10, 20, 50] |
| | λ_d | λ_od |
| TC-VAE | β | [1, 2, 4, 6, 8, 10] |
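The sweep in Table 5 can be enumerated mechanically; the sketch below (hypothetical variable names, not the authors' code) reproduces the model count from the paper: 6 model classes × 6 regularisation strengths × 50 seeds × 3 datasets = 5400 trained models:

```python
from itertools import product

# Only the swept hyperparameter per model class, as listed in Table 5.
sweeps = {
    "beta-VAE":   ("beta",      [1, 2, 4, 6, 8, 16]),
    "CCI-VAE":    ("c_max",     [5, 10, 25, 50, 75, 100]),
    "FactorVAE":  ("gamma",     [10, 20, 30, 40, 50, 100]),
    "DIP-VAE-I":  ("lambda_od", [1, 2, 5, 10, 20, 50]),
    "DIP-VAE-II": ("lambda_od", [1, 2, 5, 10, 20, 50]),
    "TC-VAE":     ("beta",      [1, 2, 4, 6, 8, 10]),
}
datasets = ["dSprites", "3D Shapes", "3D Cars"]
seeds = range(50)

configs = [
    {"dataset": d, "model": m, param: v, "seed": s}
    for d, (m, (param, values)) in product(datasets, sweeps.items())
    for v, s in product(values, seeds)
]
print(len(configs))  # 5400
```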
+ +(a) Common hyperparameters across all models + +
| Parameter | Values |
| --- | --- |
| Batch size | 64 |
| Latent space dimension | 10 |
| Optimizer | Adam |
| Adam: beta1 | 0.9 |
| Adam: beta2 | 0.999 |
| Adam: epsilon | 1e-8 |
| Adam: learning rate | 0.0001 |
| Decoder type | Bernoulli |
+ +(b) FactorVAE discriminator architecture + +
| Discriminator |
| --- |
| FC, 1000 leaky ReLU |
| FC, 1000 leaky ReLU |
| FC, 1000 leaky ReLU |
| FC, 1000 leaky ReLU |
| FC, 1000 leaky ReLU |
| FC, 2 |
+ +(c) FactorVAE discriminator parameters + +
| Parameter | Values |
| --- | --- |
| Batch size | 64 |
| Optimizer | Adam |
| Adam: beta1 | 0.5 |
| Adam: beta2 | 0.9 |
| Adam: epsilon | 1e-8 |
| Adam: learning rate | 0.0001 |
Table 6: Miscellaneous model details

For consistency, all the models were trained using the same architecture, optimiser, and hyperparameters. All of the methods use a deep neural network to encode and decode the latent embedding, and the parameters of the latent factors are predicted using a Gaussian encoder whose architecture is specified in Table 4. All of the models predict a latent vector with 10 factors. Each model was also trained with 6 different levels of regularisation strength, specified in Table 5. The hyperparameter ranges for the various levels of regularisation were chosen to elicit a diversity of performance across the datasets without relying on pre-existing intuition about good values; the ranges were, however, based on hyperparameters used previously in the literature. For each of the model classes outlined above, we tried 6 hyperparameter values with 50 seeds each.

$\beta$ -VAE The $\beta$ -VAE (Higgins et al., 2017a) model is similar to the vanilla VAE model but with an additional hyperparameter $\beta$ that modifies the strength of the KL regulariser.

$$
\mathbb{E}_{p(\boldsymbol{x})}\left[\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}\left[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})\right] - \beta KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z})\right)\right] \tag{9}
$$

where a $\beta$ value of 1 corresponds to the vanilla VAE model. Increasing $\beta$ enforces a stronger prior on the latent distribution and encourages the latent dimensions to be independent.

CCI-VAE The CCI-VAE model (Burgess et al., 2017) is a variant of the $\beta$ -VAE where the KL divergence is encouraged to match a controlled value $C$, which is increased gradually throughout training. This yields the objective function for CCI-VAE.
$$
\mathbb{E}_{p(\boldsymbol{x})}\left[\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})] - \beta \left| KL\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z})\right) - C \right| \right] \tag{10}
$$

FactorVAE FactorVAE (Kim & Mnih, 2018) specifically penalises the dependencies between the latent dimensions by targeting the "Total Correlation" term, yielding a modified version of the $\beta$ -VAE objective.

$$
\mathbb{E}_{p(\boldsymbol{x})}\left[\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}[\log p_{\theta}(\boldsymbol{x}|\boldsymbol{z})] - KL(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \mid\mid p(\boldsymbol{z}))\right] - \beta KL\left(q(\boldsymbol{z}) \mid\mid \prod_{j} q\left(z_{j}\right)\right) \tag{11}
$$

The "Total Correlation" term is intractable, so FactorVAE estimates it using samples from both $q(z|\boldsymbol{x})$ and $q(\boldsymbol{z})$ together with the density-ratio trick. FactorVAE uses an additional discriminator network to approximate the density ratio in the KL divergence. The implementation details for the discriminator network and its hyperparameters can be found in Table 5(b) and 5(c).

TC-VAE The TC-VAE model (Chen et al., 2018), developed independently of FactorVAE, has a similar KL regulariser containing a "Total Correlation" term. In the case of TC-VAE the "Total Correlation" term is estimated using a biased Monte-Carlo estimate.

![](images/32dc70cb12eeeb10d7e68acc38484e6433cc6325d1a435508cbdbbf91fed292d.jpg)
Figure 4: Schematic illustration of the UDR method. See details in text.

DIP-VAE The DIP-VAE model also regularises the aggregated posterior, adding a loss term that encourages it to match the factorised prior.
Since the KL divergence is intractable, other measures of divergence are used instead. $Cov_{p(\boldsymbol{x})}[\mu_{\phi}(\boldsymbol{x})]$ can be used, yielding the DIP-VAE-I objective + +$$ +\begin{array}{l} \mathbb {E} _ {p (\boldsymbol {x})} \left[ \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} [ \log p _ {\theta} (\boldsymbol {x} | \boldsymbol {z}) ] - K L (q _ {\phi} (\boldsymbol {z} | \boldsymbol {x}) | | p (\boldsymbol {z})) \right] \\ - \lambda_ {o d} \sum_ {i \neq j} [ C o v _ {p (x)} [ \mu_ {\phi} (\boldsymbol {x}) ] ] _ {i j} ^ {2} \tag {12} \\ - \lambda_ {d} \sum_ {i} \left(\left[ C o v _ {p (\boldsymbol {x})} [ \mu_ {\phi} (\boldsymbol {x}) ] \right] _ {i i} - 1\right) ^ {2} \\ \end{array} +$$ + +or $Cov_{q_{\phi}}[z]$ is used instead yielding the DIP-VAE-II objective. + +$$ +\begin{array}{l} \mathbb {E} _ {p (\boldsymbol {x})} \left[ \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x})} [ \log p _ {\theta} (\boldsymbol {x} | \boldsymbol {z}) ] - K L (q _ {\phi} (\boldsymbol {z} | \boldsymbol {x}) | | p (\boldsymbol {z})) \right] \\ - \lambda_ {o d} \sum_ {i \neq j} \left[ \operatorname {C o v} _ {q _ {\phi}} [ \boldsymbol {z} ] \right] _ {i j} ^ {2} \tag {13} \\ - \lambda_ {d} \sum_ {i} \left(\left[ C o v _ {q _ {\phi}} [ \boldsymbol {z} ] \right] _ {i i} - 1\right) ^ {2} \\ \end{array} +$$ + +# A.5 UDR IMPLEMENTATION DETAILS + +Similarity matrix To compute the similarity matrix $R_{ij}$ we follow the approach of Li et al. (2016) and Morcos et al. (2018). For a given dataset $X = \{\pmb{x}_1, \pmb{x}_2, \dots, \pmb{x}_N\}$ and a neuron $a \in \{1, \dots, L\}$ of model $i$ (denoted as $z_{i,a}$ ), we define $\pmb{z}_{i,a}$ to be the vector of mean inferred posteriors $q_i(\pmb{z}_i | \pmb{x}_i)$ across the full dataset: $\pmb{z}_{i,a} = (z_{i,a}(\pmb{x}_1), \dots, z_{i,a}(\pmb{x}_N)) \in \mathbb{R}^N$ . Note that this is different from the often considered notion of a "latent representation vector". 
Here $\pmb{z}_{i,a}$ is a response of a single latent dimension over the entire dataset, not an entire latent response for a single input. We then calculate the similarity between each pair of such vectors $\pmb{z}_{i,a}$ and $\pmb{z}_{j,b}$ using either Lasso regression or Spearman's correlation.

Lasso regression (UDR_L) We trained $L$ lasso regressors to predict each of the latent responses $z_{i,a}$ from $z_{j}$ using the dataset of latent encodings $Z_{i,a} = \{(z_{j,1},z_{i,a,1}),\dots,(z_{j,N},z_{i,a,N})\}$. Each row in $R_{ij}(a)$ is then filled in using the weights of the trained Lasso regressor for $z_{i,a}$. The lasso regressors were trained using the default Scikit-learn multi-task lasso objective $\min_w\frac{1}{2n_{samples}} ||XW - Y||_{Fro}^2 +\lambda ||W||_{21}$, where $Fro$ is the Frobenius norm: $||A||_{Fro} = \sqrt{\sum_{ij}a_{ij}^2}$ and the $l_1l_2$ loss is computed as $||A||_{21} = \sum_i\sqrt{\sum_j a_{ij}^2}$. $\lambda$ is chosen using cross validation, and the lasso is trained either until 1000 iterations have been run or until the updates fall below a tolerance of 0.0001. Lasso regressors were trained on a dataset of 10000 latents from each model, and training was performed using coordinate descent over the entire dataset. $R_{ij}$ is then computed by extracting the weights of the trained lasso regressor and taking their absolute value (Eastwood & Williams, 2018). It is important that the representations are normalised per-latent so that the relative importances computed per latent are scaled to reflect their contribution to the output. Normalising our latents also ensures that the computed weights roughly lie in the interval $[-1, 1]$.

Spearman's based similarity matrix (UDR_S) We calculate each entry in the similarity matrix according to $R_{ij}(a,b) = \mathrm{Corr}(\pmb{z}_{i,a},\pmb{z}_{j,b})$, where Corr stands for Spearman's correlation.
We use Spearman's correlation to measure the similarity between $z_{i,a}$ and $z_{j,b}$ because we do not want to assume a linear relationship between the two latent encodings, since the geometry of the representational space is not crucial for measuring whether a representation is disentangled (see Sec. 2), but we do hope to find a monotonic dependence between them. Spearman correlation coefficients were computed by extracting 1000 samples from each model and computing the Spearman correlation over all the samples on a per-latent basis.

All-to-all calculations To make all-to-all comparisons, we picked 10 random seeds per hyperparameter setting and limited all the calculations to those models. Hence we made a maximum of $\binom{60}{2}$ pairwise model comparisons when calculating UDR-A2A.

Informative latent thresholding Uninformative latents typically have $\mathrm{KL}\ll 0.01$ while informative latents have $\mathrm{KL}\gg 0.01$, so the $\mathrm{KL} = 0.01$ threshold in Eq. 3 is somewhat arbitrarily chosen to pick out the informative latents $z$.

Sample reduction experiments We randomly sampled without replacement 20 different sets of $P$ models for pairwise comparison from the original set of 50 models with the same hyperparameter setting for UDR, or 60 models with different seeds and hyperparameters for UDR-A2A.

# A.6 SUPERVISED METRIC IMPLEMENTATION DETAILS

Original $\beta$ -VAE metric. First proposed in Higgins et al. (2017a), this metric suggests sampling two batches of observations $x$ where in both batches the same single data generative factor is fixed to a particular value, while the other factors are sampled randomly from the underlying distribution. These two batches are encoded into the corresponding latent representations $q_{\phi}(z|x)$ and the pairwise differences between the corresponding mean latent values from the two batches are taken.
Disentanglement is measured as the ability of a linear classifier to predict the index of the data generative factor that was fixed when generating $x$.

We compute the $\beta$ -VAE score by first randomly picking a single factor of variation and fixing it to a randomly sampled value. We then generate two batches of 64 samples where all the other factors are sampled randomly, and take the mean of the differences between the latent mean responses in the two batches to generate a training point. This process is repeated 10000 times to generate a training set, using the fixed factor of variation as the label. We then train a logistic regression on the data using Scikit-learn and report the evaluation accuracy on a test set of 5000 as the disentanglement score.

FactorVAE metric. Kim & Mnih (2018) proposed a modification of the $\beta$ -VAE metric which made the classifier non-parametric (majority vote based on the index of the latent dimension with the least variance after the pairwise difference step). This made the FactorVAE metric more robust, since the classifier does not need to be optimised. Furthermore, the FactorVAE metric is more accurate than the $\beta$ -VAE one, since the $\beta$ -VAE metric often over-estimates the level of disentanglement by reporting $100\%$ disentanglement even when only $K - 1$ factors were disentangled.

The FactorVAE score is computed similarly to the $\beta$ -VAE metric but with a few modifications. First we draw a set of 10000 random samples from the dataset and estimate the variance of the mean latent responses of the model. Latents with a variance of less than 0.05 are discarded. Then batches of 64 samples are generated from a random set of generative factors with a single generative factor held fixed. The variances of all the latent responses over the 64 samples are computed and divided by the latent variance computed in the first step.
The variances are averaged to generate a single training point, using the fixed factor of variation as the label. 10000 such training points are generated as the training set. A majority vote classifier is trained to pick out the fixed generative factor, and the evaluation accuracy is computed on a test set of 5000 and reported as the disentanglement score.

![](images/183d7fc949b3527ab1f82d33a98cec841f7bae31610efca1d7bb73fbad735f79.jpg)
A

![](images/f57bd7116ee045e31ff1380cbbb598375ef09d7eff16af703c20aabbac6e3211.jpg)
B

![](images/eb44cf3c2ad8e0fd274df0e208c41cd273b2e777c9c50a7433d139f0bb28d7b8.jpg)

![](images/89fcf3469ea7cab960f0dafeda8bbb8dba36c6948a8d1711ab57b4c8de35329a.jpg)
C
Figure 5: A: Schematic illustration of the pairwise model comparison. Two trained models $i$ and $j$ are sampled for pairwise comparison. Both models learnt a perfectly disentangled representation, learning to represent two (positions $\mathrm{x/y}$ ) and three (positions $\mathrm{x/y}$ , and size) generative factors respectively. Similarity matrix $R_{ij}$ : white - high similarity between latent dimensions, black - low. B: Similarity matrix $R_{ij}$ for the same pair of models, calculated using either Spearman correlation or Lasso regression. The latter is often cleaner. C: Examples of Lasso similarity matrices of an entangled vs a disentangled model.

![](images/ac72c2357408d1ba83f6374f1318925ca8d6526c3bec03fc76be83f867f2360d.jpg)

Mutual Information Gap (MIG). The MIG metric proposed in Chen et al. (2018) estimates the mutual information (MI) between each data generative factor and each latent dimension. For each factor, the two latent dimensions with the highest MI scores are considered. It is assumed that in a disentangled representation only one latent dimension will have high MI with a single data generative factor, and hence the difference between these two MI scores will be large.
Hence, the MIG score is calculated as the average normalised difference between such pairs of MI scores for each data generative factor. Chen et al. (2018) suggest that the MIG score is more general and unbiased than the $\beta$-VAE and FactorVAE metrics.

We compute the Mutual Information Gap by discretising the mean representation of 10000 samples into 20 bins. The disentanglement score is then derived by computing, per generative factor, the difference in mutual information between the two latents most informative about that factor, and taking the mean.

Table 7: Rank correlations between each of the scores produced by the four versions of UDR and four supervised metrics. The scores are averaged over three model classes, two datasets and four supervised metrics. See Supplementary Material for details.
| UDR | Lasso | Spearman | Supervised |
| --- | --- | --- | --- |
| Hyper | 0.54±0.06 | 0.53±0.07 | 0.67±0.2 |
| All-to-all | 0.60±0.11 | 0.59±0.10 | |
$$
\frac {1}{K} \sum_ {k = 1} ^ {K} \frac {1}{H _ {v _ {k}}} \left(I \left(z _ {j ^ {(k)}}; v _ {k}\right) - \max _ {j \neq j ^ {(k)}} I \left(z _ {j}; v _ {k}\right)\right) \tag {14}
$$

where $K$ is the number of generative factors, $v_{k}$ is a single generative factor, $z_{j}$ is the mean latent representation, $j^{(k)} = \operatorname{argmax}_{j} I(z_{j};v_{k})$ is the index of the latent with the greatest mutual information with that factor, and $H_{v_k}$ is the entropy of the generative factor.

DCI Disentanglement. This is the disentanglement part of the three-part metric proposed by Eastwood & Williams (2018). The DCI disentanglement metric is somewhat similar to our unsupervised metric: the authors train a random forest classifier to predict the ground truth factors from the corresponding latent encodings $q(z|\boldsymbol{x})$. They then use the resulting $M \times N$ matrix of feature importance weights to calculate one minus the entropy of the probability that a latent dimension is important for predicting a particular ground truth factor, weighted by the relative importance of each dimension.

The DCI disentanglement metric is an implementation of the disentanglement metric described in Eastwood & Williams (2018) using a gradient boosted tree. It was computed by first extracting the relative importance of each latent mean representation as a predictor for each generative factor: a gradient boosted tree is trained using the default Scikit-learn model on 10000 training and 1000 test points, and the importance weights are extracted. The weights are summarised into an importance matrix $R_{ij}$ with the number of rows equal to the number of generative factors and the number of columns equal to the number of latents. The disentanglement score for each column is computed as $D_{i} = (1 - H_{K}(P_{i}))$, where $H_{K}(P_{i}) = -\sum_{k=0}^{K-1} P_{ik} \log_{K} P_{ik}$ denotes the entropy.
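As a concrete illustration of the MIG score in Eq. 14, here is a minimal NumPy sketch (our own illustration, not the authors' code; function and variable names are ours, and mutual information is estimated from joint histograms of already-discretised codes rather than via the paper's exact binning pipeline):

```python
import numpy as np

def discrete_mi(x, y):
    """Mutual information (in nats) between two discrete variables,
    estimated from their joint histogram."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)          # joint counts
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mig_score(latents, factors):
    """MIG: per factor, the gap between the two most informative latents,
    normalised by the factor entropy H(v_k), averaged over factors.
    latents: (N, D) discretised latent codes; factors: (N, K) factor values."""
    D, K = latents.shape[1], factors.shape[1]
    mi = np.array([[discrete_mi(latents[:, i], factors[:, k])
                    for k in range(K)] for i in range(D)])
    gaps = []
    for k in range(K):
        _, counts = np.unique(factors[:, k], return_counts=True)
        p = counts / counts.sum()
        h = -(p * np.log(p)).sum()          # entropy of the factor
        top = np.sort(mi[:, k])[-2:]        # two highest MI values
        gaps.append((top[1] - top[0]) / h)  # normalised gap
    return float(np.mean(gaps))

# Toy check: latent 0 copies the factor, the other two are noise,
# so the score should be close to 1 (a disentangled code).
rng = np.random.default_rng(0)
factor = rng.integers(0, 4, size=2000)
latents = np.stack([factor,
                    rng.integers(0, 4, size=2000),
                    rng.integers(0, 4, size=2000)], axis=1)
print(mig_score(latents, factor[:, None]))
```

An entangled code, where several latents share high MI with the same factor, drives the gap (and hence the score) towards zero.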
$P_{ik} = R_{ik} / \sum_{k=0}^{K-1} R_{ik}$ is the probability of latent $i$ being important for predicting factor $k$. The weighted mean of the per-column scores is computed using the relative predictive importance of each column as the weight, $D = \sum_{i} p_{i} D_{i}$, where $p_{i} = \sum_{j} R_{ij} / \sum_{i,j} R_{ij}$.

# A.7 ADDITIONAL RESULTS

We evaluated four UDR versions, which differed in terms of whether Spearman- or Lasso-based similarity matrices $R_{ij}$ were used (subscripts S and L respectively), and whether the models for pairwise similarity comparison are picked from the pool of different seeds trained with the same hyperparameters or from the pool of all models (the latter indicated by the A2A suffix). The A2A correlations in Tbl. 7 are on average slightly higher; however, these scores are more computationally expensive to compute due to the higher number of total pairwise similarity calculations. For that reason, the scores presented in the table are calculated using only $20\%$ of all the trained models. Hence, the results presented in the main text of the paper are computed using the UDRL score, which allowed us to evaluate all 5400 models and performed slightly better than the UDRS score. Figs. 6-8 provide more details on the performance of the different UDR versions.

To qualitatively validate that the UDR method is ranking models well, we look in more detail at the $\beta$-VAE model ranking when evaluated with the DCI disentanglement metric on the dSprites dataset. This scenario resulted in the worst disagreement between UDR and the supervised metric, as shown in Fig. 6. We consider the UDRL version of our method, since it appears to give the best trade-off between overall correlations with the supervised metrics and hyperparameter selection accuracy. Fig. 9 demonstrates that the poor correlation between UDRL and DCI Disentanglement is due to the supervised metric.
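The DCI disentanglement score defined earlier ($D_i = 1 - H_K(P_i)$ with $D = \sum_i p_i D_i$) can be sketched as follows (a minimal NumPy illustration with our own names; we assume the importance matrix has already been extracted from the gradient boosted trees):

```python
import numpy as np

def dci_disentanglement(R):
    """DCI disentanglement from an importance matrix R of shape
    (num_latents, num_factors), where R[i, k] is the importance of
    latent i for predicting generative factor k."""
    K = R.shape[1]
    # P[i, k]: probability that latent i is important for factor k
    P = R / R.sum(axis=1, keepdims=True)
    # per-latent entropy in base K, with 0 * log 0 treated as 0
    H = -(P * np.log(np.where(P > 0, P, 1.0)) / np.log(K)).sum(axis=1)
    D = 1.0 - H                        # D_i = 1 - H_K(P_i)
    p = R.sum(axis=1) / R.sum()        # relative latent importance p_i
    return float((p * D).sum())        # D = sum_i p_i * D_i

# Each latent predicts exactly one factor: score close to 1 (disentangled).
print(dci_disentanglement(np.eye(3)))
# Every latent equally important for every factor: score close to 0.
print(dci_disentanglement(np.ones((3, 3))))
```

The importance-weighted mean ensures that latents that barely contribute to any prediction have little influence on the final score.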
Models ranked highly by UDRL but poorly by DCI Disentanglement appear qualitatively disentangled under visual inspection of latent traversals. Conversely, models scored highly by DCI Disentanglement but poorly by UDRL appear entangled.

# A.8 UDR CORRELATION WITH FINAL TASK PERFORMANCE

To illustrate the usefulness of UDR for selecting disentangled models, we ran two experiments. We computed the UDR correlation with fairness scores and with data efficiency on a model-based RL task.

Fairness scores. Fig. 11 (left) demonstrates that UDR correlates well with the classification fairness scores introduced by Locatello et al. (2019). We adopted a setup similar to that described in Locatello et al. (2019) to compute fairness, using a gradient boosting classifier over 10000 labelled examples. The fairness score was computed by taking the mean of the fairness scores across all targets and all sensitive variables, where the fairness scores are computed by measuring the total variation after intervening on the sensitive variable. The fairness scores were compared against the Lasso regression version of UDR where models were paired only within the same hyperparameters.

![](images/30875df689971ec057c2bd4e6ad1c9281f4c2db28e063db9f3350d05a86dda66.jpg)
Figure 6: Rank correlation between different versions of UDR with different supervised metrics across two datasets and three model classes. We see that the UDRL approaches slightly outperform the UDRS ones.

Model-based RL data efficiency. We reproduced the results of the COBRA agent (Watters et al., 2019) to observe whether UDR correlates with final task performance when using VAEs as state representations. More precisely, we look at training data efficiency, reported as the number of steps needed to achieve $90\%$ performance on the Clustering tasks (see Watters et al. (2019) for details), while using differently disentangled models.
The agent is provided with a pre-trained MONet (Burgess et al., 2019), an exploration policy, and a transition model, and has to learn a good reward predictor for the task in a dense reward setting. It uses Model Predictive Control to plan and solve the task, where sprites have to be clustered by color (e.g. two blue sprites and two red sprites). In COBRA, the authors use a MONet with a disentangled representation obtained by using a high $\beta = 1$.

When pre-training MONet, we used $\beta \in \{0.01, 0.1, 1\}$ in order to introduce entanglement in the representations without compromising reconstruction accuracy, and pre-trained 10 seeds for each value of $\beta$. We use 5 random initialisations of the reward predictor for each possible MONet model, and train them to perform the clustering task as explained in Watters et al. (2019). We report the number of steps to reach $90\%$ success, averaged across the initialisations. The UDR score is computed by feeding images with a single sprite to obtain an associated unique representation and proceeding as described in the main text.

As can be seen in Figure 11 (right), we find that the UDR scores correlate with this final data efficiency (linear regression shown, Spearman correlation $\rho = 0.56$). This indicates that one could leverage the UDR score as a metric to select representations for further tasks. In this analysis we used the version of UDR that uses Spearman correlations and within-hyperparameter model comparisons.

# A.9 EVALUATING UDR ON MORE COMPLEX DATASETS

We evaluated whether UDR is useful for model selection on more complex datasets. In particular, we chose CelebA and ImageNet. While disentangling VAEs have been shown to perform well on CelebA in the past (e.g. Higgins et al. (2018b)), ImageNet is notoriously too complex for even vanilla VAEs to model.
However, we still wanted to verify whether the coarse representations of VAEs on ImageNet could be disentangled, and if so, whether UDR would be useful for model selection. To this end, we ran a hyperparameter sweep for the $\beta$ -VAE and ranked its representations using UDR. Fig. 12 shows that UDR scores are clearly different for the different values of the $\beta$ hyperparameter. It is also clear that the models were able to learn about CelebA and produce reasonable reconstructions, but on ImageNet even the vanilla VAEs struggled to represent anything but the coarsest information. Figs. 13-14 plot latent traversals for three randomly chosen models with high ( $>0.6$ ) and low ( $<0.3$ ) UDR scores. The latents are sorted by their informativeness, as approximated by their batch-averaged per dimension KL with the prior as per Eq. 3. It is clear that for both datasets those models that are ranked high by the UDR have both more interpretable and more similar representations than those models that are ranked low. + +![](images/628952177e14d94578a7537d460740a186ec99b8f9ca7cbb582f675b5b189ecb.jpg) +dSprites +Figure 7: The range of scores for each hyperparameter setting for the dSprites and 3D Shapes datasets for various models and metrics. We see that the different versions of the UDR method broadly agree with each other. + +# A.10 QUALITATIVE EVALUATION OF MODEL REPRESENTATIONS RANKED BY UDR SCORES + +In this section we attempt to qualitatively verify our assumption that "for a particular dataset and a VAE-based unsupervised disentangled representation learning model class, disentangled representations are all alike, while every entangled representation is entangled in its own way, to rephrase Tolstoy". The theoretical justification of the proposed UDR hinges on the work by Rolinek et al. (2019). However, that work only empirically evaluated their analysis on the $\beta$ -VAE model class. 
Even though we have reasons to believe that their theoretical results would also hold for the other disentangling VAEs evaluated in this paper, in this section we empirically evaluate whether this is true. + +First, we check if all model classes operate in the so called "polarised regime", which was highlighted by Rolinek et al. (2019) as being important for pushing VAEs towards disentanglement. It is known that even vanilla VAEs (Kingma & Welling, 2014; Rezende et al., 2014) enter the "polarised regime", which is often cited as one of their shortcomings (e.g. see Rezende & Viola (2018)). All of the disentangling VAEs considered in this paper augment the original ELBO objective with extra terms. None of these extra terms penalise entering the "polarised regime", apart from that of DIP-VAE-I. We tested empirically whether different model classes entered the "polarised regime" during our hyperparameter sweeps. We did this by counting the number of latents that were "switched off" in each of the 5400 models considered in our paper by using Eq. 3. We found that all models apart from DIP-VAE-I entered the polarised regime during the hyperparameter sweep, having on average 2.95/10 latents "switched off" (with a standard deviation of 1.97). + +Second, we check if the models scored highly by the UDR do indeed have similar representations, and models that are scored low have dissimilar representations. We do this qualitatively by plotting latent traversals for three randomly chosen models within each of the six disentangling VAE model classes considered in this paper. + +![](images/0e1319b1f6577afc1e0fa3cd063e4a5d2fa7e967a6dc268a5d70b874a129a3bf.jpg) +Figure 8: Rank correlations of the different versions of the UDR score with the $\beta$ -VAE metric on the dSprites dataset for a $\beta$ -VAE hyperparameter search as the number of pairwise comparisons per model were changed. 
A higher number of comparisons leads to more accurate and more stable rankings; however, the rankings are still decent even with 5 pairwise comparisons per model.

We group these plots into three bands by their UDRL scores: high (UDR > 0.4), medium (0.3 < UDR < 0.4) and low (UDR < 0.3). Uninformative latents are greyed out.

![](images/4d84bb5113fcf029a3ef240c20ca665a3b09c4e157f5711cdecc2bb3ab594641.jpg)

![](images/8230746ba4994948e5ec6bb9312d02e57f5e3aa4752067fd44e4725154330d3b.jpg)

![](images/eec038bed6e0e903b3e3540575efad48255607855ba7db4e8013ebf303f9870.jpg)
Figure 11: Left: Spearman correlation between UDR scores and classification fairness scores introduced by Locatello et al. (2019) across sixty models trained for each of the three different model classes (rows) and over two datasets (columns). Right: Spearman correlation between UDR scores and data efficiency for learning a clustering task by the COBRA agent introduced by Watters et al. (2019). Lower step number is better.

![](images/82b16c0e50bfce592bc17477bdf8722595d31257603fbe9bedfd44bb4fd9dc00.jpg)

![](images/f1d50ddf4c23413fdca4fd9af2652d02455a5dc7b7febdc91cc0d6c281504247.jpg)
Figure 12: Distribution of UDR scores $(\mathrm{P} = 50)$ for 300 $\beta$-VAE models trained with 6 settings of the $\beta$ hyperparameter and 50 seeds on the CelebA and ImageNet datasets. The reconstructions shown are for the vanilla VAE $(\beta = 1)$. ImageNet is a complex dataset that VAEs struggle to model well.
+ +![](images/7b5adca7310cf2fc12a4a49d64bade2ddae16e92ac6d8c64c6664654b4879ed5.jpg) + +![](images/f6e0e213f584b28a471a2a408bc93c4c6232881e283b2a689870ce48cdbe11e0.jpg) + +![](images/ed8458fe8f24440c7f977fda57190a93ff243d65b0f2f4aef527026f62d956cf.jpg) + +![](images/36b4636a6b58aad0785b773cf5cc7fb3fdfc94f83927e824a779f468f36bc3d1.jpg) + +![](images/49755f4e7f53afbeaaeb00c0be5aaa6ca0d28d9711c15862d0129eba341fdec5.jpg) +Figure 13: Latent traversals for the four most informative latents ordered by their KL from the prior for three different $\beta$ -VAE models that ranked high or low according to UDR. Those models that were ranked high have learnt representations that are both interpretable and very similar across models. Those models that were ranked low have learnt representations that are harder to interpret and they do not appear similar to each other across models. + +![](images/0f7a942225558f8ef5dde6b5c1450238dcda6529eacce7f7db1d4b52b6091161.jpg) + +![](images/5df8020f0a8070a9d67ea394b630239f45c02e82b6103d9cd7a6af1d63cb6793.jpg) + +![](images/3cca6bcd3308a075e5dcc72d53487292cb2536025a02a3f3ed1e31e315bf11ee.jpg) +Figure 14: Latent traversals for the six most informative latents ordered by their KL from the prior for three different $\beta$ -VAE models that ranked high or low according to UDR. Despite the fact that none of the $\beta$ -VAE or VAE models were able to learn to reconstruct this dataset well, those models that were ranked high by the UDR still managed to learn representations that are more interpretable and more similar across models. This is unlike the representations of those models that were ranked low by the UDR. + +![](images/5eafc853d437d5dcd9c5aa4e995caeadca2e59ab8309e6aa34c5b6ea99088c96.jpg) +Figure 15: Latent traversals for all ten latent dimensions presented in no particular ordering for three different models per model class. These models were ranked highly by the UDR. 
It can be seen that they learnt interpretable and similar representations up to permutation, sign inversion and subsetting. We included all model classes that achieved UDR scores in the range specified (UDR > 0.4).

![](images/7a7b7dfd80bd2c80f3dd104b6238d9896e2317ff420b3d600af82b09122f529a.jpg)
FactorVAE

![](images/e792cbc1f19cb70544e186eaa6eb5d01c07fd9c902c5d210d23f568baa92787b.jpg)
DIP-VAE-II
Figure 16: Latent traversals for all ten latent dimensions, presented in no particular order, for three different models per model class. These models received medium UDR scores. It can be seen that they learnt less interpretable and less similar representations than the models shown in Fig. 15. None of the models in these model classes scored higher than the range specified $(0.3 < \mathrm{UDR} < 0.4)$.

![](images/9e9bbe9640464555bf0bf5576f968c601f3df3388f2089af4790064d38facec4.jpg)
Figure 17: Latent traversals for all ten latent dimensions, presented in no particular order, for three different models per model class. These models received low UDR scores. It can be seen that their representations are hard to interpret and they look quite different from each other. We included all model classes that achieved UDR scores in the range specified (UDR < 0.3).
# V4D: 4D CONVOLUTIONAL NEURAL NETWORKS FOR VIDEO-LEVEL REPRESENTATION LEARNING

Shiwen Zhang, Sheng Guo, Weilin Huang* & Matthew R.
Scott

Malong Technologies, Shenzhen, China

Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China

{shizhang, sheng, whuang, mscott}@malong.com

Limin Wang

State Key Laboratory for Novel Software Technology, Nanjing University, China

lmwang@nju.edu.cn

# ABSTRACT

Most existing 3D CNNs for video representation learning are clip-based methods, and thus do not consider video-level temporal evolution of spatio-temporal features. In this paper, we propose Video-level 4D Convolutional Neural Networks, referred to as V4D, to model the evolution of long-range spatio-temporal representation with 4D convolutions, and at the same time, to preserve strong 3D spatio-temporal representation with residual connections. Specifically, we design a new 4D residual block able to capture inter-clip interactions, which could enhance the representation power of the original clip-level 3D CNNs. The 4D residual blocks can be easily integrated into existing 3D CNNs to perform long-range modeling hierarchically. We further introduce the training and inference methods for the proposed V4D. Extensive experiments are conducted on three video recognition benchmarks, where V4D achieves excellent results, surpassing recent 3D CNNs by a large margin.

# 1 INTRODUCTION

3D convolutional neural networks (3D CNNs) and their variants (Ji et al., 2010; Tran et al., 2015; Carreira & Zisserman, 2017; Qiu et al., 2017; Wang et al., 2018b) provide a simple extension of their 2D counterparts for video representation learning. However, due to practical issues such as memory consumption and computational cost, these models are mainly used for clip-level feature learning instead of learning from the whole video. Clip-based methods randomly sample a short clip (e.g., 32 frames) from a video for representation learning, and calculate prediction scores for each clip independently. The prediction scores of all clips are simply averaged to yield the video-level prediction.
These clip-based models often ignore the video-level structure and long-range spatio-temporal dependency during training, as they only sample a small portion of the entire video. In fact, in some cases it can be difficult to identify an action correctly from only a partial observation. Meanwhile, simply averaging the prediction scores of all clips can be sub-optimal during inference. To overcome this issue, Temporal Segment Network (TSN) (Wang et al., 2016) was proposed. TSN uniformly samples multiple clips from the entire video, and the averaged scores are used to guide back-propagation during training; TSN is thus a video-level representation learning framework. However, the inter-clip interaction and video-level fusion in TSN are only performed at a very late stage, which fails to capture finer temporal structures.

In this paper, we propose a general and flexible framework for video-level representation learning, called V4D. As shown in Figure 1, to model long-range dependency in a more efficient way, V4D is composed of two critical designs: (1) a holistic sampling strategy, and (2) 4D convolutional interaction. We first introduce a video-level sampling strategy that uniformly samples a sequence of short-term units covering the whole video. Then we model long-range spatio-temporal dependency by designing a unique 4D residual block. Specifically, we present a 4D convolutional operation to capture inter-clip interaction, which could enhance the representation power of the original clip-level 3D CNNs. The 4D residual blocks can be easily integrated into existing 3D CNNs to perform long-range modeling hierarchically, which is more efficient than TSN. We also design a specific video-level inference algorithm for V4D. Finally, we verify the effectiveness of V4D on three video action recognition benchmarks, Mini-Kinetics (Xie et al., 2018), Kinetics-400 (Carreira & Zisserman, 2017) and Something-Something-V1 (Goyal et al., 2017).
Our V4D achieves very competitive performance on the three benchmarks, and obtains a clear performance improvement over its 3D counterparts.

# 2 RELATED WORKS

Two-stream CNNs. The two-stream architecture was originally proposed by Simonyan & Zisserman (2014), where one stream is used for learning from RGB images, and the other is applied to model optical flow. The results produced by the two streams are then fused at later stages, yielding the final prediction. Two-stream CNNs have achieved impressive results on various video recognition tasks. However, the main limitation is that the computation of optical flow is highly expensive and difficult to parallelize, consuming significant resources. Recent effort has been devoted to reducing the computational cost of modeling optical flow, such as (Dosovitskiy et al., 2015; Sun et al., 2018; Piergiovanni & Ryoo, 2018; Zhang et al., 2016). The two-stream design is a general framework to boost the performance of various CNN models, and is orthogonal to the proposed V4D.

3D CNNs. Recently, 3D CNNs have been proposed (Tran et al., 2015; Carreira & Zisserman, 2017; Wang et al., 2018a,b; Feichtenhofer et al., 2018). By considering a video as a stack of frames, it is natural to develop 3D convolutions applied directly to the video sequence. However, 3D CNNs often introduce a large number of model parameters, which inevitably require a large amount of training data to achieve good performance. As reported in (Wang et al., 2018b; Feichtenhofer et al., 2018), recent experimental results on large-scale benchmarks, like Kinetics-400 (Carreira & Zisserman, 2017), show that 3D CNNs can surpass their 2D counterparts in many cases, and can even be on par with or better than two-stream 2D CNNs. It is noteworthy that most 3D CNNs are clip-based methods, which only explore a certain part of the holistic video.

Long-term Modeling Framework.
Various long-term modeling frameworks have been developed for capturing more complex temporal structure for video-level representation learning. In (Laptev et al., 2008), video compositional models were proposed to jointly model local video events, where temporal pyramid matching was introduced with a bag-of-visual-words framework to compute long-term temporal structure. However, the rigid composition only works under defined constraints, e.g., a prefixed duration and anchor points provided in time. A mainstream method is to process a continuous video sequence with recurrent neural networks (Ng et al., 2015; Donahue et al., 2015), where 2D CNNs are used for frame-level feature extraction. Temporal Segment Network (TSN) (Wang et al., 2016) was proposed to model video-level temporal information with a sparse sampling and aggregation strategy. TSN sparsely samples a set of frames from the whole video, and the sampled frames are then modelled by the same CNN backbone, which outputs a confidence score for each frame. The output scores are averaged to generate the final video-level prediction. TSN was originally designed for 2D CNNs, but it can be applied to 3D CNNs, which serves as one of the baselines in this paper. One of the main limitations of TSN is that it is difficult to model finer temporal structure due to the average aggregation. Temporal Relational Reasoning Network (TRN) (Zhou et al., 2018) was introduced to model temporal segment relations by encoding an individual representation of each segment with relation networks. TRN is able to model video-level temporal order but lacks the capacity to capture finer temporal structure. The proposed V4D can outperform these previous video-level learning methods on both appearance-dominated video recognition (e.g., on Kinetics) and motion-dominated video recognition (e.g., on Something-Something). It is able to model both short-term and long-term temporal structure with a unique design of 4D residual blocks.
# 3 VIDEO-LEVEL 4D CONVOLUTIONAL NEURAL NETWORKS

In this section, we introduce new Video-level 4D Convolutional Neural Networks, namely V4D, for video action recognition. This is the first attempt to design 4D convolutions for RGB-based video recognition. Previous methods, such as You & Jiang (2018); Choy et al. (2019), utilize 4D CNNs to process point-cloud videos by using 4D data as input. Instead, our V4D processes videos of RGB frames, taking 3D data as input. Existing 3D CNNs often take a short-term snippet as input, without considering the evolution of 3D spatio-temporal features for video-level representation. In Wang et al. (2018b); Yue et al. (2018); Liu et al. (2019), a self-attention mechanism was developed to model non-local spatio-temporal features, but these methods were originally designed for clip-based 3D CNNs. It remains unclear how to incorporate such operations into holistic video representation, and whether such operations are useful for video-level representation learning. Our goal is to model 3D spatio-temporal features globally, which can be implemented in a higher dimension. In this work, we introduce new Residual 4D Blocks, which allow us to cast 3D CNNs into 4D CNNs for learning long-range interactions of the 3D features, resulting in a "time of time" video-level representation.

![](images/59e84a1703080c913a6459a8fb938a89fd612acad9739b25a533c6c8dec14374.jpg)
Figure 1: Video-level 4D Convolutional Neural Networks for video recognition.

# 3.1 A VIDEO-LEVEL SAMPLING STRATEGY

To model a meaningful video-level representation for action recognition, the input to the networks has to cover the holistic duration of a given video, and at the same time preserve short-term action details. A straightforward approach is to implement per-frame training of the networks, yet this is not practical given limited computational resources. In this work, we uniformly divide the whole video into $U$ sections, and select a snippet from each section to represent a short-term action pattern, called an "action unit". Then we have $U$ action units to represent the holistic action in a video. Formally, we denote the video-level input $V = \{A_{1}, A_{2}, \dots, A_{U}\}$, where $A_{i} \in \mathbb{R}^{C \times T \times H \times W}$. During training, each action unit $A_{i}$ is randomly selected from each of the $U$ sections. During testing, the center of each $A_{i}$ is located exactly at the center of the corresponding section.

# 3.2 4D CONVOLUTIONS FOR LEARNING SPATIO-TEMPORAL INTERACTIONS

3D convolutional kernels are powerful for modeling short-term spatio-temporal features. However, the receptive fields of 3D kernels are often limited due to the small kernel sizes, and pooling operations are applied to enlarge the receptive fields, at a significant cost in information loss. This inspired us to develop new operations able to model both short- and long-term spatio-temporal representations simultaneously, with easy implementation and fast training. From this perspective, we propose 4D convolutions for better modeling of long-range spatio-temporal interactions.

Specifically, we denote the input to 4D convolutions as a tensor $V$ of size $(C, U, T, H, W)$, where $C$ is the number of channels, $U$ is the number of action units (the 4-th dimension in this paper), and $T, H, W$ are the temporal length, height and width of an action unit. We omit the batch dimension for simplicity. By following the annotations provided in Ji et al.
(2010), a pixel at position $(u, t, h, w)$ of the $j$-th channel in the output is denoted as $o_j^{uthw}$ , and a 4D convolution operation can be formulated as: + +$$
o_{j}^{uthw} = b_{j} + \sum_{c}^{C_{in}} \sum_{s=0}^{S-1} \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} \sum_{r=0}^{R-1} \mathcal{W}_{jc}^{spqr} v_{c}^{(u+s)(t+p)(h+q)(w+r)} \tag{1}
$$ + +where $b_{j}$ is a bias term, $c$ indexes the $C_{in}$ input channels of the feature maps from input $V$ , $S \times P \times Q \times R$ is the shape of the 4D convolutional kernel, and $\mathcal{W}_{jc}^{spqr}$ is the weight at position $(s,p,q,r)$ of the kernel, corresponding to the $c$-th channel of the input feature maps and the $j$-th channel of the output feature maps. + +![](images/b1a0cb356164a5f651e51fb26af963d97bbbfda94e59fdd008cda83f941356ae.jpg) +(a) $3 \times 1 \times 1$ -- 3D + +![](images/7a968b7ca64d178fca9b65438f073a3acd5446974d5717275d3e7f5858c1ea94.jpg) +(b) $3 \times 1 \times 1 \times 1$ -- 4D +Figure 2: Implementation of 4D kernels, compared to 3D kernels. $U$ denotes the number of action units, each of shape $T \times H \times W$ . Channel and batch dimensions are omitted for clarity. The kernels are colored in blue, with the center of each kernel colored in green. + +![](images/bfff0994a044b2e1d49e2136a67ee14a9dc5248a49abb09e31bb6069efd06c4c.jpg) +(c) $3 \times 3 \times 1 \times 1$ -- 4D + +The convolution operation is linear, and the nested sums in Eq. 1 are interchangeable. Thus we can derive Eq. 2, where the expression in the parentheses can be implemented by 3D convolutions, allowing us to implement 4D convolutions using 3D convolutions, since most deep learning libraries do not directly provide 4D convolution operations.
+ +$$
o_{j}^{uthw} = b_{j} + \sum_{s=0}^{S-1} \left( \sum_{c}^{C_{in}} \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} \sum_{r=0}^{R-1} \mathcal{W}_{jc}^{spqr} v_{c}^{(u+s)(t+p)(h+q)(w+r)} \right) \tag{2}
$$ + +With the 4D convolutional kernel, the short-term 3D features of an individual action unit and the long-term temporal evolution of multiple action units can be modeled simultaneously in the 4D space. Compared to 3D convolutions, the proposed 4D convolutions model videos in a more meaningful 4D feature space, enabling them to learn more complicated interactions of long-range 3D spatio-temporal representations. However, 4D convolutions inevitably introduce more parameters and computational cost. For example, a 4D convolutional kernel of $k \times k \times k \times k$ has $k$ times more parameters than a 3D kernel of $k \times k \times k$ . In practice, we explore $k \times k \times 1 \times 1$ and $k \times 1 \times 1 \times 1$ kernels to reduce the number of parameters and avoid the risk of overfitting. The implementation of different kernels is shown in Figure 2. + +# 3.3 VIDEO-LEVEL 4D CNN ARCHITECTURE + +In this section, we demonstrate that our 4D convolutions can be integrated into existing CNN architectures for action recognition. To fully utilize current state-of-the-art 3D CNNs, we propose a new Residual 4D Convolution Block, embedding a 4D convolution in the residual structure introduced in He et al. (2016). This allows the network to aggregate both short-term 3D features and the long-term evolution of spatio-temporal representations for video-level action recognition.
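The equivalence between Eq. 1 and Eq. 2 can be checked numerically. Below is a minimal NumPy sketch (not the paper's implementation; it uses valid, i.e. un-padded, convolutions and hypothetical helper names) that computes a 4D convolution both directly and as an outer sum, over the 4th-dimension kernel offset $s$, of 3D convolutions:

```python
import numpy as np

def conv4d_direct(v, kernel, b):
    """Eq. 1: direct 4D convolution.
    v: (C_in, U, T, H, W); kernel: (C_out, C_in, S, P, Q, R); b: (C_out,)."""
    C_in, U, T, H, W = v.shape
    C_out, _, S, P, Q, R = kernel.shape
    out = np.zeros((C_out, U - S + 1, T - P + 1, H - Q + 1, W - R + 1))
    for j in range(C_out):
        for u in range(out.shape[1]):
            for t in range(out.shape[2]):
                for h in range(out.shape[3]):
                    for w in range(out.shape[4]):
                        patch = v[:, u:u + S, t:t + P, h:h + Q, w:w + R]
                        # sum over c, s, p, q, r in one shot
                        out[j, u, t, h, w] = b[j] + np.sum(kernel[j] * patch)
    return out

def conv3d(x, k):
    """Valid 3D convolution. x: (C_in, T, H, W); k: (C_out, C_in, P, Q, R)."""
    C_in, T, H, W = x.shape
    C_out, _, P, Q, R = k.shape
    out = np.zeros((C_out, T - P + 1, H - Q + 1, W - R + 1))
    for t in range(out.shape[1]):
        for h in range(out.shape[2]):
            for w in range(out.shape[3]):
                patch = x[:, t:t + P, h:h + Q, w:w + R]
                out[:, t, h, w] = np.tensordot(k, patch, axes=4)
    return out

def conv4d_via_3d(v, kernel, b):
    """Eq. 2: the same 4D convolution as a sum of per-offset 3D convolutions."""
    C_in, U, T, H, W = v.shape
    C_out, _, S, P, Q, R = kernel.shape
    out = np.zeros((C_out, U - S + 1, T - P + 1, H - Q + 1, W - R + 1))
    for u in range(out.shape[1]):
        for s in range(S):
            # the parenthesized term of Eq. 2: a 3D conv on the (u+s)-th unit
            out[:, u] += conv3d(v[:, u + s], kernel[:, :, s])
    return out + b[:, None, None, None, None]

rng = np.random.default_rng(0)
v = rng.standard_normal((2, 4, 5, 6, 6))           # C_in=2, U=4, T=5, H=W=6
kernel = rng.standard_normal((3, 2, 3, 3, 1, 1))   # a 3x3x1x1 4D kernel, C_out=3
b = rng.standard_normal(3)
assert np.allclose(conv4d_direct(v, kernel, b), conv4d_via_3d(v, kernel, b))
```

In practice the inner 3D convolutions would of course be dispatched to an optimized library routine rather than looped over output positions.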
Specifically, we define a permutation function $\varphi_{(d_i,d_j)}:M^{d_1\times \ldots \times d_i\times \ldots \times d_j\times \ldots \times d_n}\mapsto M^{d_1\times \ldots \times d_j\times \ldots \times d_i\times \ldots \times d_n}$ , which permutes dimensions $d_{i}$ and $d_{j}$ of a tensor $M\in \mathbb{R}^{d_1\times \dots \times d_n}$ . The Residual 4D Convolution Block can be formulated as: + +$$
\mathcal{Y}_{3D} = \mathcal{X}_{3D} + \varphi_{(U, C)}\left(\mathcal{F}_{4D}\left(\varphi_{(C, U)}\left(\mathcal{X}_{3D}\right); \mathcal{W}_{4D}\right)\right) \tag{3}
$$ + +where $\mathcal{F}_{4D}(\mathcal{X};\mathcal{W}_{4D})$ is the introduced 4D convolution. $\mathcal{X}_{3D},\mathcal{Y}_{3D}\in \mathbb{R}^{U\times C\times T\times H\times W}$ , and $U$ is merged into the batch dimension so that $\mathcal{X}_{3D}$ and $\mathcal{Y}_{3D}$ can be directly processed by standard 3D CNNs. Note that we employ $\varphi$ to permute the dimensions of $\mathcal{X}_{3D}$ from $U\times C\times T\times H\times W$ to $C\times U\times T\times H\times W$ so that it can be processed by 4D convolutions. The output of the 4D convolution is then permuted back to 3D form so that its dimensions are consistent with $\mathcal{X}_{3D}$ . Batch Normalization (Ioffe & Szegedy, 2015) and ReLU activation (Nair & Hinton, 2010) are then applied. The detailed structure is described in Figure 1. + +In principle, any 3D CNN structure can be cast to a 4D CNN by integrating our 4D Convolution Blocks. As shown in previous works (Zolfaghari et al., 2018; Xie et al., 2018; Wang et al., 2018b; Feichtenhofer et al., 2018), higher performance can be obtained by applying 2D convolutions at lower layers and 3D convolutions at higher layers of the 3D networks. In our framework, we utilize the "Slow-path" introduced in Feichtenhofer et al. (2018) as our backbone, denoted as I3D-S.
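The dimension-permutation bookkeeping of Eq. 3 is easy to get wrong in code, so a minimal NumPy sketch may help. Here `toy_f4d` is a hypothetical shape-preserving stand-in for the 4D convolution $\mathcal{F}_{4D}$, not the actual operator:

```python
import numpy as np

def residual_4d_block(x_3d, f_4d):
    """Eq. 3: y = x + phi_(U,C)( F_4D( phi_(C,U)(x) ) ).
    x_3d: (U, C, T, H, W), with U merged into the batch dimension."""
    x_4d = np.transpose(x_3d, (1, 0, 2, 3, 4))   # phi_(C,U): -> (C, U, T, H, W)
    y_4d = f_4d(x_4d)                            # any shape-preserving 4D convolution
    y_3d = np.transpose(y_4d, (1, 0, 2, 3, 4))   # phi_(U,C): back to (U, C, T, H, W)
    return x_3d + y_3d                           # residual connection

def toy_f4d(x):
    """Toy stand-in for F_4D: average each action unit with its
    neighbours along U (a crude 3x1x1x1-style smoothing, zero-padded)."""
    padded = np.pad(x, ((0, 0), (1, 1), (0, 0), (0, 0), (0, 0)))
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

x = np.random.default_rng(0).standard_normal((4, 8, 2, 3, 3))  # U=4, C=8
assert residual_4d_block(x, toy_f4d).shape == x.shape
```

Because the block is residual, setting the 4D weights to zero (so `f_4d` returns zeros) recovers the plain 3D network, which is what allows the staged training described later.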
Although the original "Slowpath" is designed for ResNet50, we can extend it to I3D-S ResNet18 for further experiments. The detailed structure of our 3D backbone is shown in Table 1. + +
| layer | I3D-S ResNet18 | I3D-S ResNet50 | output size |
| --- | --- | --- | --- |
| conv1 | 1×7×7, 64, stride 1, 2, 2 | 1×7×7, 64, stride 1, 2, 2 | 4×112×112 |
| res2 | [1×3×3, 64; 1×3×3, 64] ×2 | [1×1×1, 64; 1×3×3, 64; 1×1×1, 256] ×3 | 4×56×56 |
| res3 | [1×3×3, 128; 1×3×3, 128] ×2 | [1×1×1, 128; 1×3×3, 128; 1×1×1, 512] ×4 | 4×28×28 |
| res4 | [3×3×3, 256; 3×3×3, 256] ×2 | [3×1×1, 256; 1×3×3, 256; 1×1×1, 1024] ×6 | 4×14×14 |
| res5 | [3×3×3, 512; 3×3×3, 512] ×2 | [3×1×1, 512; 1×3×3, 512; 1×1×1, 2048] ×3 | 4×7×7 |
|  | global average pool, fc |  | 1×1×1 |
+ +Table 1: We use I3D-Slowpath from (Feichtenhofer et al., 2018) as our backbone. The output size of an example is shown in the right column, where the input has a size of $4 \times 224 \times 224$ . No temporal downsampling is performed in this structure. + +# 3.4 TRAINING AND INFERENCE + +Training. As shown in Figure 1, the convolutional part of the networks is composed of 3D convolution layers and the proposed Residual 4D Blocks. Each action unit is processed individually and in parallel by the 3D convolution layers, which share the same parameters. The 3D features computed from the action units are then fed to the Residual 4D Block, where the long-term temporal evolution of the consecutive action units can be modeled. Finally, global average pooling is computed over the sequence of all action units to form the final video-level representation. + +Inference. Given $U$ action units $\{A_1, A_2, \dots, A_U\}$ of a video, we denote by $U_{train}$ the number of action units used for training and by $U_{infer}$ the number of action units used for inference. $U_{train}$ and $U_{infer}$ are usually different, because computational resources are limited during training while high accuracy is desired at inference. We develop a new video-level inference method, described in Algorithm 1. The 3D convolutional layers are denoted as $N_{3D}$ , followed by the proposed 4D Blocks, denoted as $N_{4D}$ . + +# Algorithm 1: V4D Inference. + +Networks: The structure of the networks is divided into two sub-networks by the first 4D Block, namely $N_{3D}$ and $N_{4D}$ . + +Input: $U_{infer}$ action units from a holistic video: $\{A_1,A_2,\dots ,A_{U_{infer}}\}$ . + +Output: The video-level prediction. + +# V4D Inference : + +1 $\{A_{1},A_{2},\dots ,A_{U_{infer}}\}$ are fed into $N_{3D}$ , generating intermediate feature maps for each unit $\{F_1,F_2,\ldots ,F_{U_{infer}}\}$ , $F_i\in \mathbb{R}^{C\times T\times H\times W}$ ; +2 For the $U_{infer}$ intermediate features, we equally divide them into $U_{train}$ sections.
Then we select one unit $F_{\text{sec}_i}$ from each section and combine these $U_{train}$ units into a video-level intermediate representation $F^{\text{video}} = (F_{\text{sec}_1}, F_{\text{sec}_2}, \ldots, F_{\text{sec}_{U_{train}}})$ . These video-level representations form a new set $\{F_1^{\text{video}}, F_2^{\text{video}}, \ldots, F_{U_{combined}}^{\text{video}}\}$ , where $U_{combined} = (U_{infer} / U_{train})^{U_{train}}$ and $F_i^{\text{video}} \in \mathbb{R}^{U_{train} \times C \times T \times H \times W}$ ; +3 Each $F_{i}^{video}$ in the set $\{F_1^{video}, F_2^{video}, \dots, F_{U_{combined}}^{video}\}$ is processed by $N_{4D}$ to form a set of prediction scores $\{P_1, P_2, \dots, P_{U_{combined}}\}$ ; +4 $\{P_1, P_2, \dots, P_{U_{\text{combined}}}\}$ are averaged to give the final video-level prediction. + +# 3.5 DISCUSSION + +We further demonstrate that the proposed V4D can be considered as a 4D generalization of a number of recent widely-applied methods, which may partially explain why our V4D works well in practice for learning meaningful video-level representations. + +Temporal Segment Network. Our V4D is closely related to the Temporal Segment Network (TSN). TSN was originally designed for 2D CNNs, but it can be directly applied to 3D CNNs to model video-level representation. TSN also employs a video-level sampling strategy, with each action unit named a "segment". During training, each segment is computed individually, and the prediction scores after the fully-connected layer are then averaged. Since the fully-connected layer is a linear classifier, it is mathematically equivalent to compute the average before the fully-connected layer (similar to our global average pooling) or after the fully-connected layer (similar to TSN). Thus our V4D reduces to 3D CNN + TSN when all parameters in the 4D Blocks are set to 0. + +Dilated Temporal Convolution.
One special form of the 4D convolution kernel, $k \times 1 \times 1 \times 1$ , is closely related to dilated temporal convolution (Lea et al., 2016). The input tensor $V$ can be viewed as a $(C, U \times T, H, W)$ tensor when all action units are concatenated along the temporal dimension. In this case, the $k \times 1 \times 1 \times 1$ 4D convolution can be considered a dilated 3D convolution kernel of $k \times 1 \times 1$ with a dilation of $T$ frames. Note that the $k \times 1 \times 1 \times 1$ kernel is just the simplest form of our 4D convolutions, while our V4D architecture can utilize more complex kernels and thus learn stronger video representations. Furthermore, our 4D Blocks utilize residual connections, ensuring that long-term and short-term representations can be learned jointly. Simply applying dilated convolution might discard short-term fine-grained features. + +# 4 EXPERIMENTS + +# 4.1 DATASETS + +We conduct experiments on three standard benchmarks: Mini-Kinetics (Xie et al., 2018), Kinetics-400 (Carreira & Zisserman, 2017), and Something-Something-v1 (Goyal et al., 2017). The Mini-Kinetics dataset covers 200 action classes and is a subset of Kinetics-400. Since some videos are no longer available for the Kinetics dataset, our version of Kinetics-400 contains 240,436 and 19,796 videos in the training and validation subsets, respectively. Our version of Mini-Kinetics contains 78,422 videos for training and 4,994 videos for validation. Each video has around 300 frames. Something-Something-v1 contains 108,499 videos in total, with 86,017 for training, 11,522 for validation, and 10,960 for testing. Each video has 36 to 72 frames. + +# 4.2 ABLATION STUDY ON MINI-KINETICS + +We use pre-trained weights from ImageNet to initialize the model. For training, we adopt the holistic sampling strategy described in Section 3.1.
We uniformly divide the whole video into $U$ sections, and randomly select a clip of 32 frames from each section. For each clip, following the sampling strategy in Feichtenhofer et al. (2018), we uniformly sample 4 frames with a fixed stride of 8 to form an action unit. We study the impact of $U$ in the following experiments. We first resize each frame to $320 \times 256$ , and then apply random cropping as in Wang et al. (2018b). The cropped region is further resized to $224 \times 224$ . We utilize an SGD optimizer with an initial learning rate of 0.01; weight decay is set to $10^{-5}$ with a momentum of 0.9. The learning rate is divided by 10 at epochs 35, 60, and 80, and the model is trained for 100 epochs in total. + +To make a fair comparison, we use spatially convolutional testing by following Wang et al. (2018b); Yue et al. (2018); Feichtenhofer et al. (2018). We sample 10 action units evenly from a full-length video, and crop $256 \times 256$ regions to spatially cover the whole frame for each action unit. Then we apply the proposed V4D inference. Note that for the original TSN, 25 clips and 10-crop testing are used during inference. To make a fair comparison between I3D and our V4D, we instead apply the 10-clip and 3-crop inference strategy for TSN. + +Results. To verify the effectiveness of V4D, we compare it with the clip-based method I3D-S and the video-based method TSN+3D CNN. To compensate for the extra parameters introduced by 4D blocks, we add a $3 \times 3 \times 3$ residual block at res4 of I3D-S for a fair comparison, denoted as I3D-S ResNet18++. As shown in Table 2a, while using 4 times fewer frames than I3D-S during inference and fewer parameters than I3D-S ResNet18++, V4D still obtains a $2.0\%$ higher top-1 accuracy than I3D-S. Compared with the state-of-the-art video-level method TSN+3D CNN, V4D significantly outperforms it by $2.6\%$ top-1 accuracy, with the same protocols used in training and inference. + +4D Convolution Kernels.
As mentioned, our 4D convolution kernels can take three typical forms: $k \times 1 \times 1 \times 1$ , $k \times k \times 1 \times 1$ , and $k \times k \times k \times k$ . In this experiment, we set $k = 3$ for simplicity, and apply a single 4D block at the end of res4 in I3D-S ResNet18. As shown in Table 2c, V4D with + +
| model | $T_{train} \times U_{train}$ | $T_{infer} \times U_{infer} \times$ #crop | top-1 | top-5 | parameters |
| --- | --- | --- | --- | --- | --- |
| I3D-S ResNet18 | 4 × 1 | 4 × 10 × 3 | 72.2 | 91.2 | 32.3M |
| I3D-S ResNet18 | 16 × 1 | 16 × 10 × 3 | 73.4 | 91.1 | 32.3M |
| I3D-S ResNet18++ | 16 × 1 | 16 × 10 × 3 | 73.6 | 91.5 | 34.1M |
| TSN+I3D-S ResNet18 | 4 × 4 | 4 × 10 × 3 | 73.0 | 91.3 | 32.3M |
| V4D ResNet18 | 4 × 4 | 4 × 10 × 3 | 75.6 | 92.7 | 33.1M |
+ +(a) Effectiveness of V4D. $T$ represents temporal length of each action unit. $U$ represents the number of action units. + +
| model | input size | flops |
| --- | --- | --- |
| I3D-S ResNet18 | 16 × 256 × 256 | 55.1G |
| TSN+I3D-S ResNet18 | 4 × 4 × 256 × 256 | 55.1G |
| V4D ResNet18 | 4 × 4 × 256 × 256 | 58.8G |
+ +(b) Forward flops of previous works and V4D. One 4D block at res3 and one at res4 for V4D. + +
| model | form of 4D kernel | top-1 | top-5 |
| --- | --- | --- | --- |
| I3D-S ResNet18 | - | 72.2 | 91.2 |
| TSN+I3D-S ResNet18 | - | 73.0 | 91.3 |
| V4D ResNet18 | 3 × 1 × 1 × 1 | 73.8 | 92.0 |
| V4D ResNet18 | 3 × 3 × 1 × 1 | 74.5 | 92.4 |
| V4D ResNet18 | 3 × 3 × 3 × 3 | 74.7 | 92.5 |
+ +(c) Different Forms of 4D Convolution Kernel. + +
| model | 4D kernel | top-1 | top-5 |
| --- | --- | --- | --- |
| I3D-S ResNet18 | - | 72.2 | 91.2 |
| TSN+I3D-S ResNet18 | - | 73.0 | 91.3 |
| V4D ResNet18 | 1 at res3 | 74.2 | 92.3 |
| V4D ResNet18 | 1 at res4 | 74.5 | 92.4 |
| V4D ResNet18 | 1 at res5 | 73.6 | 91.4 |
| V4D ResNet18 | 1 at res3, 1 at res4 | 75.6 | 92.7 |
+ +(d) Position and Number of 4D Blocks. + +
| model | $U_{train}$ | top-1 | top-5 |
| --- | --- | --- | --- |
| I3D-S ResNet18 | 1 | 72.2 | 91.2 |
| TSN+I3D-S ResNet18 | 4 | 73.0 | 91.3 |
| V4D ResNet18 | 3 | 74.3 | 92.2 |
| V4D ResNet18 | 4 | 74.5 | 92.4 |
| V4D ResNet18 | 5 | 74.5 | 92.3 |
| V4D ResNet18 | 6 | 74.6 | 92.5 |

(e) Number of Action Units.
+ +Table 2: Ablations on Mini-Kinetics, with top-1 and top-5 classification accuracy $(\%)$ . + +$3 \times 3 \times 3 \times 3$ kernel can achieve the highest performance. However, considering the trade-off between model parameters and performance, we use the $3 \times 3 \times 1 \times 1$ kernel in the following experiments. + +On 4D Blocks. We evaluate the impact of the position and number of 4D Blocks in our V4D. We investigate the performance of V4D by using one $3 \times 3 \times 1 \times 1$ 4D block at res3, res4, or res5. As shown in Table 2d, higher accuracy is obtained by applying the 4D block at res3 or res4, indicating that the merged long- and short-term features from the 4D block need to be further refined by 3D convolutions to generate a more meaningful representation. Furthermore, inserting one 4D block at res3 and one at res4 achieves even higher accuracy. + +Number of Action Units $U$ . We further evaluate our V4D by using different numbers of action units for training, i.e., different values of the hyperparameter $U$ . In this experiment, one $3 \times 3 \times 1 \times 1$ Residual 4D block is added at the end of res4 of ResNet18. As shown in Table 2e, $U$ does not have a significant impact on performance, which suggests that: (1) V4D is a video-level feature learning model, and is robust against the number of short-term units; (2) an action generally does not contain many stages, and thus increasing $U$ is not helpful. In addition, increasing the number of action units means that the 4th dimension is increased, requiring a larger 4D kernel to cover the long-range evolution of the spatio-temporal representation. + +With state-of-the-art. We compare results on Mini-Kinetics. 4D Residual Blocks are added into every other 3D residual block in res3 and res4.
With far fewer frames used during training and inference, our V4D ResNet50 achieves higher accuracy than all reported results, even higher than 3D ResNet101 with 5 Compact Generalized Non-local Blocks. Note that our V4D ResNet18 achieves higher accuracy than 3D ResNet50, which further verifies the effectiveness of our V4D structure. + +
| Model | Backbone | $T_{train} \times U_{train}$ | $T_{infer} \times U_{infer} \times$ #crop | top-1 | top-5 |
| --- | --- | --- | --- | --- | --- |
| S3D (Xie et al., 2018) | S3D Inception | 64 × 1 | N/A | 78.9 | - |
| I3D (Yue et al., 2018) | 3D ResNet50 | 32 × 1 | 32 × 10 × 3 | 75.5 | 92.2 |
| I3D (Yue et al., 2018) | 3D ResNet101 | 32 × 1 | 32 × 10 × 3 | 77.4 | 93.2 |
| I3D+NL (Yue et al., 2018) | 3D ResNet50 | 32 × 1 | 32 × 10 × 3 | 77.5 | 94.0 |
| I3D+CGNL (Yue et al., 2018) | 3D ResNet50 | 32 × 1 | 32 × 10 × 3 | 78.8 | 94.4 |
| I3D+NL (Yue et al., 2018) | 3D ResNet101 | 32 × 1 | 32 × 10 × 3 | 79.2 | 93.2 |
| I3D+CGNL (Yue et al., 2018) | 3D ResNet101 | 32 × 1 | 32 × 10 × 3 | 79.9 | 93.4 |
| V4D (Ours) | V4D ResNet18 | 4 × 4 | 4 × 10 × 3 | 75.6 | 92.7 |
| V4D (Ours) | V4D ResNet50 | 4 × 4 | 4 × 10 × 3 | 80.7 | 95.3 |
+ +Table 3: Results on Mini-Kinetics. $T$ - temporal length of an action unit. $U$ - number of action units. + +# 4.3 RESULTS ON KINETICS + +We further conduct experiments on the large-scale video recognition benchmark Kinetics-400 to evaluate the capability of our V4D. To make a fair comparison, we utilize ResNet50 as the backbone for V4D. The training and inference sampling strategies are identical to those of the previous section, except that each action unit now contains 8 frames instead of 4. We set $U = 4$ so that there are $8 \times 4$ frames in total for training. Due to limited computational resources, we train the model in multiple stages. We first train the 3D ResNet50 backbone with 8-frame inputs. Then we load the 3D ResNet50 weights into V4D ResNet50, with all 4D Blocks fixed to zero. The V4D ResNet50 is then fine-tuned with $8 \times 4$ input frames. Finally, we optimize all 4D Blocks and train the V4D with $8 \times 4$ frames. As shown in Table 4, our V4D achieves competitive results on the Kinetics-400 benchmark. + +
| Model | Backbone | top-1 | top-5 |
| --- | --- | --- | --- |
| ARTNet with TSN (Wang et al., 2018a) | ARTNet ResNet18 | 70.7 | 89.3 |
| ECO (Zolfaghari et al., 2018) | BN-Inception+3D ResNet18 | 70.0 | 89.4 |
| S3D-G (Xie et al., 2018) | S3D Inception | 74.7 | 93.4 |
| Nonlocal Network (Wang et al., 2018b) | 3D ResNet50 | 76.5 | 92.6 |
| SlowFast (Feichtenhofer et al., 2018) | SlowFast ResNet50 | 77.0 | 92.6 |
| I3D (Carreira & Zisserman, 2017) | I3D Inception | 72.1 | 90.3 |
| Two-stream I3D (Carreira & Zisserman, 2017) | I3D Inception | 75.7 | 92.0 |
| I3D-S (Feichtenhofer et al., 2018) | Slow pathway ResNet50 | 74.9 | 91.5 |
| V4D (Ours) | V4D ResNet50 | 77.4 | 93.1 |
+ +# 4.4 RESULTS ON SOMETHING-SOMETHING-V1 + +The Something-Something dataset focuses on modeling temporal information and motion. The background is much cleaner than in Kinetics, but the motions of the action categories are more complicated. Each video contains one single, continuous action with a clear start and end along the temporal dimension. + +Table 4: Comparison with state-of-the-art on Kinetics. + +
| Model | Backbone | top-1 |
| --- | --- | --- |
| MultiScale TRN (Zhou et al., 2018) | BN-Inception | 34.4 |
| ECO (Zolfaghari et al., 2018) | BN-Inception+3D ResNet18 | 46.4 |
| S3D-G (Xie et al., 2018) | S3D Inception | 45.8 |
| Nonlocal Network+GCN (Wang & Gupta, 2018) | 3D ResNet50 | 46.1 |
| TrajectoryNet (Zhao et al., 2018) | S3D ResNet18 | 47.8 |
| V4D (Ours) | V4D ResNet50 | 50.4 |
+ +Table 5: Comparison with state-of-the-art on Something-Something-v1. + +Results. As shown in Table 5, our V4D achieves competitive results on Something-Something-v1. We use V4D ResNet50 pre-trained on Kinetics for these experiments. + +Temporal Order. As shown in Xie et al. (2018), performance can drop considerably when the temporal order of short-term 3D features is reversed, suggesting that 3D CNNs can learn strong temporal order information. We further conduct experiments by reversing the frames within each action unit or reversing the sequence of action units, where the top-1 accuracy drops considerably from $50.4\%$ to $17.2\%$ and from $50.4\%$ to $20.1\%$ , respectively, indicating that our V4D captures both long-term and short-term temporal order. + +# 5 CONCLUSIONS + +We have introduced new Video-level 4D Convolutional Neural Networks, namely V4D, to learn the long-range temporal evolution of spatio-temporal representations, while retaining 3D features through residual connections. In addition, we introduced training and inference methods tailored to our V4D. Experiments were conducted on three video recognition benchmarks, where our V4D achieved state-of-the-art results. + +# REFERENCES + +João Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In CVPR, 2017. + +Christopher Bongsoo Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. CoRR, abs/1904.08755, 2019. +Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015. +Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Häusser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In ICCV, 2015.
+Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. CoRR, abs/1812.03982, 2018. +Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fründ, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The "something something" video database for learning and evaluating visual common sense. In ICCV, 2017. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. +Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR, 2015. +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. +Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:221-231, 2010. +Ivan Laptev, Marcin Marszalek, Cordelia Schmid, and Benjamin Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008. +Colin Lea, René Vidal, Austin Reiter, and Gregory D. Hager. Temporal convolutional networks: A unified approach to action segmentation. In ECCV Workshops, 2016. +Xingyu Liu, Joon-Young Lee, and Hailin Jin. Learning video representations from correspondence proposals. CoRR, abs/1905.07853, 2019. +Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. +Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and George Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015. +A. J. Piergiovanni and Michael S. Ryoo. Representation flow for action recognition. CoRR, abs/1810.01455, 2018. +Zhaofan Qiu, Ting Yao, and Tao Mei. 
Learning spatio-temporal representation with pseudo-3d residual networks. In ICCV, 2017. +Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014. +Shuyang Sun, Zhanghui Kuang, Lu Sheng, Wanli Ouyang, and Wei Zhang. Optical flow guided feature: A fast and robust motion representation for video action recognition. In CVPR, 2018. +Du Tran, Lubomir D. Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015. +Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016. + +Limin Wang, Wei Li, Wen Li, and Luc Van Gool. Appearance-and-relation networks for video classification. In CVPR, 2018a. +Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In ECCV, 2018. +Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018b. +Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018. +Quanzeng You and Hao Jiang. Action4d: Real-time action recognition in the crowd and clutter. CoRR, abs/1806.02424, 2018. +Kaiyu Yue, Ming Sun, Yuchen Yuan, Feng Zhou, Errui Ding, and Fuxin Xu. Compact generalized non-local network. In NeurIPS, 2018. +Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, and Hanli Wang. Real-time action recognition with enhanced motion vector CNNs. In CVPR, 2016. +Yue Zhao, Yuanjun Xiong, and Dahua Lin. Trajectory convolution for action recognition. In NeurIPS, 2018. +Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016. +Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal relational reasoning in videos. In ECCV, 2018.
+Mohammadreza Zolfaghari, Kamaljeet Singh, and Thomas Brox. ECO: efficient convolutional network for online video understanding. In ECCV, 2018. + +# A APPENDIX + +# A.1 EXTENDED EXPERIMENTS ON LARGE-SCALE UNTRIMMED VIDEO RECOGNITION + +To check the generalization ability of our proposed V4D, we also conduct experiments on untrimmed video classification. Specifically, we choose ActivityNet v1.3 (Heilbron et al., 2015), a large-scale untrimmed video dataset containing videos of 5 to 10 minutes, in which large portions of a video are typically not related to any activity of interest. We adopt V4D ResNet50 to compare with previous works. During inference, Multi-scale Temporal Window Integration is applied following Wang et al. (2016). The evaluation metric is mean average precision (mAP) for action recognition. Note that only the RGB modality is used as input. + +
| Model | Backbone | mAP |
| --- | --- | --- |
| TSN Wang et al. (2016) | BN-Inception | 79.7 |
| TSN Wang et al. (2016) | Inception V3 | 83.3 |
| TSN-Top3 Wang et al. (2016) | Inception V3 | 84.5 |
| V4D (Ours) | V4D ResNet50 | 88.9 |
+ +Table 6: Comparison with state-of-the-art on ActivityNet v1.3. + +# A.2 VISUALIZATION + +We implement 3D CAM based on Zhou et al. (2016), which was originally implemented for 2D cases. Generally, class activation maps (CAM) imply which areas are most important for classification. Here we show some random visualization results from validation set of Mini-Kinetics, where TSN + I3D-S ResNet18 generates wrong prediction while V4D ResNet18 generates correct prediction. The original RGB frames are shown in the first row. The second row shows the CAMs of TSN + I3D-S ResNet18. The third row shows the CAMs of V4D ResNet18. + +![](images/7e381d1a88bdfc603d9fe43cddae917c970bfeff819c44b6c0d37ce690eab95a.jpg) +Figure 3: + +![](images/11dd84f49abc2455507c3c77c1566781cbd3507e750eb541379f651b72e35dc0.jpg) +Figure 4: + +![](images/81163a46d5c18d3eba93a967c2c9b16f460369e6324dbb979ccec144a8b2a8cf.jpg) +Figure 5: + +![](images/a8537599fdd1fba8b4a6d198886fa242d6b9f802f1c2369ca11e212cd624214a.jpg) +Figure 6: + +![](images/f4e2fca89d639fbc59d16a022dda0edf11da85ba63266482b5c1b96b4b90ccb3.jpg) +Figure 7: + +![](images/8e4cddac3dff003c5f5d2a3c9e635ac192fce722b3591efd1de5012c851de656.jpg) +Figure 8: + +![](images/e958ee28c1d4d047eb5b461095580361b647c935cbc1e8cba6e3712194e7bd0e.jpg) +Figure 9: + +![](images/b758a1518728bcde233e19d5adf57161c954231181062238f7e94dac9689e96f.jpg) +Figure 10: + +![](images/2fc1a4efc903c9c9e3bfc902c7f667a148118ac7fe1185b7772e45dbce0018ac.jpg) +Figure 11: + +![](images/e608f1c660052d441829ee749158c0685f8cb8fefc1f4b96a6741b7d9c278713.jpg) +Figure 12: + +![](images/2b8cdb9bae0ba70a292d287be3f921e354dc68e815e97b537c699405e4dd9730.jpg) + +![](images/706196a527f4072406d459aa4d056987e9d7407512c45cb5896d66994a4c9009.jpg) + +![](images/be444bc7f4c83c4b408be1b596406a87d33fe7a2897eeeb28bb16efe20d973e9.jpg) +Figure 13: \ No newline at end of file diff --git a/v4d4dconvolutionalneuralnetworksforvideolevelrepresentationlearning/images.zip 
b/v4d4dconvolutionalneuralnetworksforvideolevelrepresentationlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..54c638a772b8cb145d2d8fe3d4acf9ed2e238147 --- /dev/null +++ b/v4d4dconvolutionalneuralnetworksforvideolevelrepresentationlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c069edc1c02b3c8579c893dd9842b82f8868a96e634b2746f6e198cc13e1165 +size 1696524 diff --git a/v4d4dconvolutionalneuralnetworksforvideolevelrepresentationlearning/layout.json b/v4d4dconvolutionalneuralnetworksforvideolevelrepresentationlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5ddcce7275bfd5793d7384d9410afd36d662f1a1 --- /dev/null +++ b/v4d4dconvolutionalneuralnetworksforvideolevelrepresentationlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8267b9f40402b9a5301f44b13b404e11f58116ad19d43ffc9063de10181934ba +size 407892 diff --git a/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_content_list.json b/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..aa398b67cff00fec7423b268b0fc5d50a9890532 --- /dev/null +++ b/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ae63a7c4b8a949bb7b1c0c8dbfe80d84b5d72829e21f32054613842468bdf41 +size 126037 diff --git a/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_model.json b/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_model.json new file mode 100644 index 0000000000000000000000000000000000000000..84d07e1545904a016e51a97bf5068090ea2ebb8e --- /dev/null +++ b/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:b4580fc4537063b0852b2eb8f49bee08f357b0732fa310929ac8687d8da45662 +size 148167 diff --git a/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_origin.pdf b/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0419bf8569ddcf092f267b644a884d92c5c4ae7b --- /dev/null +++ b/variancereductionwithsparsegradients/d659eb20-fb34-4028-8380-1f3f32d4f655_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e44d690bddf57744416921bef0a5f893064ec1672be5cd96d5b36b6b01b84f5 +size 1239086 diff --git a/variancereductionwithsparsegradients/full.md b/variancereductionwithsparsegradients/full.md new file mode 100644 index 0000000000000000000000000000000000000000..440986f41349c240ae58e4d03a261ecba8ebab09 --- /dev/null +++ b/variancereductionwithsparsegradients/full.md @@ -0,0 +1,766 @@ +# VARIANCE REDUCTION WITH SPARSE GRADIENTS + +Melih Elibol, Michael I. Jordan + +University of California, Berkeley + +{elibol, jordan}@cs.berkeley.edu + +Lihua Lei + +Stanford University + +lihualei@stanford.edu + +# ABSTRACT + +Variance reduction methods such as SVRG (Johnson & Zhang, 2013) and SpiderBoost (Wang et al., 2018) use a mixture of large and small batch gradients to reduce the variance of stochastic gradients. Compared to SGD (Robbins & Monro, 1951), these methods require at least double the number of operations per update to model parameters. To reduce the computational cost of these methods, we introduce a new sparsity operator: The random-top- $k$ operator. Our operator reduces computational complexity by estimating gradient sparsity exhibited in a variety of applications by combining the top- $k$ operator (Stich et al., 2018; Aji & Heafield, 2017) and the randomized coordinate descent operator. 
With this operator, large batch gradients offer an extra benefit beyond variance reduction: A reliable estimate of gradient sparsity. Theoretically, our algorithm is at least as good as the best algorithm (SpiderBoost), and further excels in performance whenever the random-top- $k$ operator captures gradient sparsity. Empirically, our algorithm consistently outperforms SpiderBoost using various models on various tasks including image classification, natural language processing, and sparse matrix factorization. We also provide empirical evidence to support the intuition behind our algorithm via a simple gradient entropy computation, which serves to quantify gradient sparsity at every iteration. + +# 1 INTRODUCTION + +Optimization tools for machine learning applications seek to minimize the finite sum objective + +$$ +\min_{x \in \mathbb{R}^{d}} f(x) \triangleq \frac{1}{n} \sum_{i = 1}^{n} f_{i}(x), \tag{1} +$$ + +where $x$ is a vector of parameters, and $f_{i}:\mathbb{R}^{d}\to \mathbb{R}$ is the loss associated with sample $i$. Batch SGD serves as the prototype for modern stochastic gradient methods. It updates the iterate $x$ with $x - \eta \nabla f_{\mathcal{I}}(x)$, where $\eta$ is the learning rate and $\nabla f_{\mathcal{I}}(x)$ is the batch stochastic gradient, i.e. + +$$ +\nabla f_{\mathcal{I}}(x) = \frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} \nabla f_{i}(x). +$$ + +The batch size $|\mathcal{I}|$ in batch SGD directly impacts the stochastic variance and gradient query complexity of each iteration of the update rule. + +In recent years, variance reduction techniques have been proposed by carefully blending large and small batch gradients (e.g.
Roux et al., 2012; Johnson & Zhang, 2013; Defazio et al., 2014; Xiao & Zhang, 2014; Allen-Zhu & Yuan, 2016; Allen-Zhu & Hazan, 2016; Reddi et al., 2016a;b; Allen-Zhu, 2017; Lei & Jordan, 2017; Lei et al., 2017; Allen-Zhu, 2018b; Fang et al., 2018; Zhou et al., 2018; Wang et al., 2018; Pham et al., 2019; Nguyen et al., 2019; Lei & Jordan, 2019). They are alternatives to batch SGD and are provably better than SGD in various settings. While these methods allow for greater learning rates than batch SGD and have appealing theoretical guarantees, they require a per-iteration query complexity which is more than double that of batch SGD. Defazio (2019) questions the utility of variance reduction techniques in modern machine learning problems, empirically identifying query complexity as one issue. In this paper, we show that gradient sparsity (Aji & Heafield, 2017) can be used to significantly reduce the query complexity of variance reduction methods. Our work is motivated by the observation that gradients tend to be "sparse," having only a small fraction of large coordinates. Specifically, if the indices of large gradient coordinates (measured in absolute value) are known before updating model parameters, we compute the derivative of only those coordinates while setting the remaining gradient coordinates to zero. In principle, if sparsity is exhibited, using only the large gradient coordinates will not affect performance and will significantly reduce the number of operations required to update model parameters. Nevertheless, this heuristic alone has three issues: (1) bias is introduced by setting other entries to zero; (2) the locations of large coordinates are typically unknown; (3) accessing a subset of coordinates may not be easily implemented for some problems like deep neural networks. + +We provide solutions for all three issues. First, we introduce a new sparse gradient operator: The random-top- $k$ operator.
The random-top- $k$ operator is a composition of the randomized coordinate descent operator and the top- $k$ operator. In prior work, the top- $k$ operator has been used to reduce the communication complexity of distributed optimization (Stich et al., 2018; Aji & Heafield, 2017) applications. The random-top- $k$ operator has two phases: Given a stochastic gradient and a pair of integers $(k_{1}, k_{2})$ that sum to $k$ , the operator retains $k_{1}$ coordinates which are most "promising" in terms of their "likelihood" to be large on average, then randomly selects $k_{2}$ of the remaining coordinates with appropriate rescaling. The first phase captures sparsity patterns while the second phase eliminates bias. Second, we make use of large batch gradients in variance reduction methods to estimate sparsity patterns. Inspired by the use of a memory vector in Aji & Heafield (2017), the algorithm maintains a memory vector initialized with the absolute value of the large batch gradient at the beginning of each outer loop and updated by taking an exponential moving average over subsequent stochastic gradients. Coordinates with large values in the memory vector are more "promising," and the random-top- $k$ operator will pick the top $k_{1}$ coordinate indices based on the memory vector. Since larger batch gradients have lower variance, the initial estimate is quite accurate. Finally, for software that supports dynamic computation graphs, we provide a cost-effective way (sparse back-propagation) to implement the random-top- $k$ operator. + +In this work we apply the random-top- $k$ operator to SpiderBoost (Wang et al., 2018), a recent variance reduction method that achieves optimal query complexity, with a slight modification based on the "geometrization" technique introduced by Lei & Jordan (2019). Theoretically, we show that our algorithm is never worse than SpiderBoost and can strictly outperform it when the random-top- $k$ operator captures gradient sparsity. 
Empirically, we demonstrate the improvements in computation for various tasks including image classification, natural language processing, and sparse matrix factorization. + +The rest of the paper is organized as follows. In Section 2, we define the random-top- $k$ operator, present our optimization algorithm, and describe sparse backpropagation. The theoretical analyses are presented in Section 3, followed by experimental results in Section 4. All technical proofs are relegated to Appendix A, and additional experimental details can be found in Appendix B. + +# 2 STOCHASTIC VARIANCE REDUCTION WITH SPARSE GRADIENTS + +Generally, variance reduction methods reduce the variance of stochastic gradients by taking a snapshot $\nabla f(y)$ of the gradient $\nabla f(x)$ every $m$ steps of optimization, and use the gradient information in this snapshot to reduce the variance of subsequent smaller batch gradients $\nabla f_{\mathcal{I}}(x)$ (Johnson & Zhang, 2013; Wang et al., 2018). Methods such as SCSG (Lei & Jordan, 2017) instead use a large batch gradient whose size is typically some multiple of the small batch size $b$; this is much more practical and is what we do in this paper. To reduce the cost of computing additional gradients, we exploit sparsity by computing only $k$ of the $d$ gradient coordinates, where $y \in \mathbb{R}^d$. + +For $d, k, k_1, k_2 \in \mathbb{Z}_+$, let $k = k_1 + k_2$, where $1 \leq k \leq d$ for a parametric model of dimension $d$. In what follows, we define an operator which takes vectors $x, y$ and outputs $y'$, where $y'$ retains only $k$ of the entries in $y$: $k_1$ of the entries are selected according to the coordinates of $x$ with the $k_1$ largest absolute values, and the remaining $k_2$ entries are randomly selected from $y$. The $k_1$ coordinate indices and $k_2$ coordinate indices are disjoint.
Formally, the operator $\operatorname{rtop}_{k_1,k_2} : \mathbb{R}^d \to \mathbb{R}^d$ is defined for $x, y \in \mathbb{R}^d$ as + +$$ +\big(\operatorname{rtop}_{k_1, k_2}(x, y)\big)_{\ell} = \left\{ \begin{array}{ll} y_{\ell} & \text{if } k_1 > 0 \text{ and } |x|_{\ell} \geq |x|_{(k_1)} \\ \frac{d - k_1}{k_2}\, y_{\ell} & \text{if } \ell \in S \\ 0 & \text{otherwise,} \end{array} \right. +$$ + +where $|x|$ denotes a vector of absolute values, $|x|_{(1)} \geq |x|_{(2)} \geq \ldots \geq |x|_{(d)}$ denotes the order statistics of coordinates of $x$ in absolute values, and $S$ denotes a random subset with size $k_2$ that is uniformly drawn from the set $\{\ell : |x|_\ell < |x|_{(k_1)}\}$. For instance, if $x = (11, 12, 13, -14, -15), y = (-25, -24, 13, 12, 11)$ and $k_1 = k_2 = 1$, then $S$ is a singleton uniformly drawn from $\{1, 2, 3, 4\}$. Suppose $S = \{2\}$, then $\operatorname{rtop}_{1,1}(x, y) = (0, 4y_2, 0, 0, y_5) = (0, -96, 0, 0, 11)$. If $k_1 + k_2 = d$, $\operatorname{rtop}_{k_1,k_2}(x,y) = y$. On the other hand, if $k_1 = 0$, $\operatorname{rtop}_{0,k_2}(x,y)$ does not depend on $x$ and returns a rescaled random subset of $y$. This is the operator used in coordinate descent methods. Finally, $\operatorname{rtop}_{k_1,k_2}(x,y)$ is linear in $y$. The following lemma shows that $\operatorname{rtop}_{k_1,k_2}(x,y)$ is an unbiased estimator of $y$, which is a crucial property in our later analysis. + +Lemma 1.
Given any $x, y \in \mathbb{R}^d$, + +$$ +\mathbb{E}\left(\operatorname{rtop}_{k_1, k_2}(x, y)\right) = y, \quad \operatorname{Var}\left(\operatorname{rtop}_{k_1, k_2}(x, y)\right) = \frac{d - k_1 - k_2}{k_2} \|\operatorname{top}_{-k_1}(x, y)\|^2, +$$ + +where $\mathbb{E}$ is taken over the random subset $S$ involved in the $\mathrm{rtop}_{k_1,k_2}$ operator and + +$$ +(\operatorname{top}_{-k_1}(x, y))_{\ell} = \left\{ \begin{array}{ll} y_{\ell} & \text{if } k_1 > 0 \text{ and } |x|_{\ell} < |x|_{(k_1)} \\ 0 & \text{otherwise.} \end{array} \right. +$$ + +Our algorithm is detailed below. + +# Algorithm 1: SpiderBoost with Sparse Gradients. + +Input: Learning rate $\eta$ , inner loop size $m$ , outer loop size $T$ , large batch size $B$ , small batch size $b$ , initial iterate $x_0$ , memory decay factor $\alpha$ , sparsity parameters $k_1, k_2$ . +1 $\mathcal{I}_0 \sim \mathrm{Unif}(\{1, \dots, n\})$ with $|\mathcal{I}_0| = B$ +2 $M_0 := |\nabla f_{\mathcal{I}_0}(x_0)|$ +3 for $j = 1,\dots,T$ do + +4 $x_0^{(j)}\coloneqq x_{j - 1},M_0^{(j)}\coloneqq M_{j - 1}$ +5 $\mathcal{I}_j\sim \mathrm{Unif}(\{1,\ldots ,n\})$ with $|\mathcal{I}_j| = B$ +6 $\nu_0^{(j)}\coloneqq \nabla f_{\mathcal{I}_j}(x_0^{(j)})$ +7 $N_{j}\coloneqq m$ (for implementation) or $N_{j}\sim$ geometric distribution with mean $m$ (for theory) +8 for $t = 0,\dots ,N_{j} - 1$ do +9 $x_{t + 1}^{(j)}\coloneqq x_t^{(j)} - \eta \nu_t^{(j)}$ +10 $\mathcal{I}_t^{(j)}\sim \mathrm{Unif}([n])$ with $|\mathcal{I}_t^{(j)}| = b$ +11 $\nu_{t + 1}^{(j)}\coloneqq \nu_t^{(j)} + \mathrm{rtop}_{k_1,k_2}\left(M_t^{(j)},\nabla f_{\mathcal{I}_t^{(j)}}(x_{t + 1}^{(j)}) - \nabla f_{\mathcal{I}_t^{(j)}}(x_t^{(j)})\right)$ +12 $M_{t + 1}^{(j)}\coloneqq \alpha |\nu_{t + 1}^{(j)}| + (1 - \alpha)M_t^{(j)}$ +13 $x_{j}\coloneqq x_{N_{j}}^{(j)},M_{j}\coloneqq M_{N_{j}}^{(j)}$ + +Output: $x_{\mathrm{out}} = x_T$ (for implementation) or $x_{\mathrm{out}} = x_{T'}$ where $T'
\sim \mathrm{Unif}([T])$ (for theory) + +The algorithm includes an outer loop and an inner loop. In the theoretical analysis, we generate $N_{j}$ as geometric random variables. This trick, proposed by Lei & Jordan (2017) and dubbed "geometrization" by Lei & Jordan (2019), greatly simplifies the analysis (e.g. Lei et al., 2017; Allen-Zhu, 2018a). In practice, as observed by Lei et al. (2017), setting $N_{j}$ to $m$ does not impact performance in any significant way. We only use "geometrization" in our theoretical analysis for clarity. Similarly, for our theoretical analysis, the output of our algorithm is selected uniformly at random from the set of outer loop iterations. Like the use of average iterates in convex optimization, this is a common technique for nonconvex optimization proposed by Nemirovski et al. (2009). In practice, we simply use the last iterate. + +Similar to Aji & Heafield (2017), we maintain a memory vector $M_{t}^{(j)}$ at each iteration of our algorithm. The memory vector is initialized to the absolute value of the large batch gradient computed before every pass through the inner loop, which provides a relatively accurate gradient sparsity estimate at $x_0^{(j)}$. The exponential moving average gradually incorporates information from subsequent small batch gradients to account for changes to gradient sparsity. We then use $M_{t}^{(j)}$ as an approximation to the variance of each gradient coordinate in our $\mathrm{rtop}_{k_1,k_2}$ operator. With $M_{t}^{(j)}$ as input, the $\mathrm{rtop}_{k_1,k_2}$ operator targets $k_{1}$ high-variance gradient coordinates in addition to $k_{2}$ randomly selected coordinates. + +The cost of invoking $\mathrm{rtop}_{k_1,k_2}$ is dominated by the algorithm for selecting the top $k$ coordinates, which has linear worst-case complexity when using the introselect algorithm (Musser, 1997).
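As a concrete illustration, here is a minimal NumPy sketch of the $\mathrm{rtop}_{k_1,k_2}$ operator; the function name, the seeding, and the Monte Carlo check of Lemma 1 are ours, not part of the paper.

```python
import numpy as np

def rtop(x, y, k1, k2, rng):
    """Sketch of rtop_{k1,k2}(x, y): keep the k1 entries of y where |x| is
    largest, rescale a random k2-subset of the remaining entries by
    (d - k1) / k2 so the estimator stays unbiased, and zero out the rest."""
    d = x.shape[0]
    out = np.zeros(d)
    order = np.argsort(-np.abs(x))        # indices sorted by decreasing |x|
    top, rest = order[:k1], order[k1:]
    out[top] = y[top]                     # exact top-k1 coordinates
    S = rng.choice(rest, size=k2, replace=False)
    out[S] = (d - k1) / k2 * y[S]         # rescaled random coordinates
    return out

# Worked example from the text: the top-1 coordinate of |x| is the fifth
# (|x|_5 = 15), so y_5 = 11 is kept, and one remaining entry is scaled by
# (d - k1) / k2 = 4.
x = np.array([11.0, 12.0, 13.0, -14.0, -15.0])
y = np.array([-25.0, -24.0, 13.0, 12.0, 11.0])
rng = np.random.default_rng(0)
z = rtop(x, y, k1=1, k2=1, rng=rng)

# Lemma 1 (unbiasedness): averaging over the random subset S recovers y.
est = np.mean([rtop(x, y, 1, 1, rng) for _ in range(50000)], axis=0)
```

With $k_1 = 0$ the sketch reduces to rescaled randomized coordinate selection, and with $k_1 + k_2 = d$ it returns $y$ exactly, matching the two special cases discussed above.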
+ +# 2.1 SPARSE BACK-PROPAGATION + +A weakness of our method is the technical difficulty of implementing a sparse backpropagation algorithm in modern machine learning libraries, such as TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2017). Models implemented in these libraries generally assume dense structured parameters. The optimal implementation of our algorithm makes use of a sparse forward pass and assumes a sparse computation graph upon which backpropagation is executed. Libraries that support dynamic computation graphs, such as PyTorch, will construct the sparse computation graph in the forward pass, which makes the required sparse backpropagation trivial. We therefore expect our algorithm to perform quite well on libraries which support dynamic computation graphs. + +Consider the forward pass of a deep neural network, where $\phi$ is a deep composition of parametric functions, + +$$ +\phi(x; \theta) = \phi_L\left(\phi_{L - 1}\left(\dots \phi_0(x; \theta_0) \dots; \theta_{L - 1}\right); \theta_L\right). \tag{2} +$$ + +The unconstrained problem of minimizing over the $\theta_{\ell}$ can be rewritten as a constrained optimization problem as follows: + +$$ +\min_{\theta} \frac{1}{n} \sum_{i = 1}^{n} \mathrm{loss}(z_i^{(L + 1)}, y_i) +$$ + +$$ +\mathrm{s.t.} \quad z_i^{(L + 1)} = \phi_L(z_i^{(L)}; \theta_L) +$$ + +$$ +\vdots \tag{3} +$$ + +$$ +z_i^{(\ell + 1)} = \phi_\ell(z_i^{(\ell)}; \theta_\ell) +$$ + +$$ +\vdots +$$ + +$$ +z_i^{(1)} = \phi_0(x_i; \theta_0). +$$ + +In this form, $z_{i}^{(L + 1)}$ is the model estimate for data point $i$. Consider $\phi_{\ell}(x;\theta_{\ell}) = \sigma (x^{T}\theta_{\ell})$ for $1\leq \ell < L$, let $\phi_L$ be the output layer, and let $\sigma$ be some subdifferentiable activation function.
If we apply the $\mathrm{rtop}_{k_1,k_2}$ operator per-layer in the forward-pass, with appropriate scaling of $k_{1}$ and $k_{2}$ to account for depth, we see that the number of multiplications in the forward pass is reduced to $k_{1} + k_{2}$ : $\sigma (\mathrm{rtop}_{k_1,k_2}(v,x)^T\mathrm{rtop}_{k_1,k_2}(v,\theta_\ell))$ . A sparse forward-pass yields a computation graph for a $(k_{1} + k_{2})$ -parameter model, and back-propagation will compute the gradient of the objective with respect to model parameters in linear time (Chauvin & Rumelhart, 1995). + +# 3 THEORETICAL COMPLEXITY ANALYSIS + +# 3.1 NOTATION AND ASSUMPTIONS + +Denote by $\| \cdot \|$ the Euclidean norm and by $a \wedge b$ the minimum of $a$ and $b$ . For a random vector $Y \in \mathbb{R}^d$ , + +$$ +\operatorname {V a r} (Y) = \sum_ {i = 1} ^ {d} \operatorname {V a r} (Y _ {i}). +$$ + +We say a random variable $N$ has a geometric distribution, $N \sim \mathrm{Geom}(m)$ , if $N$ is supported on the non-negative integers with + +$$ +\mathbb {P} (N = k) = \gamma^ {k} (1 - \gamma), \quad \forall k = 0, 1, \ldots , +$$ + +for some $\gamma$ such that $\mathbb{E}N = m$ . Here we allow $N$ to be zero to facilitate the analysis. + +Assumption A1 on the smoothness of individual functions will be made throughout the paper. + +A1 $f_{i}$ is differentiable with + +$$ +\left\| \nabla f _ {i} (x) - \nabla f _ {i} (y) \right\| \leq L \| x - y \|, +$$ + +for some $L < \infty$ and for all $i\in \{1,\ldots ,n\}$ + +As a direct consequence of assumption A1, it holds for any $x,y\in \mathbb{R}^d$ that + +$$ +- \frac {L}{2} \| x - y \| ^ {2} \leq f _ {i} (x) - f _ {i} (y) - \langle \nabla f _ {i} (y), x - y \rangle \leq \frac {L}{2} \| x - y \| ^ {2}. \tag {4} +$$ + +To formulate our complexity bounds, we define + +$$ +f ^ {*} = \inf _ {x} f (x), \quad \Delta_ {f} = f (x _ {0}) - f ^ {*}. 
+$$ + +Further we define $\sigma^2$ as an upper bound on the expected norm of the stochastic gradients: + +$$ +\sigma^2 = \sup_x \frac{1}{n} \sum_{i = 1}^{n} \|\nabla f_i(x)\|^2. \tag{5} +$$ + +By the Cauchy-Schwarz inequality, it is easy to see that $\sigma^2$ is also a uniform bound on $\| \nabla f(x)\| ^2$. + +Finally, we assume that sampling an index $i$ and accessing the pair $\nabla f_{i}(x)$ incurs one unit of cost, and that accessing the truncated version $\mathrm{rtop}_{k_1,k_2}(m,\nabla f_i(x))$ incurs $(k_{1} + k_{2}) / d$ units of cost. Note that calculating $\mathrm{rtop}_{k_1,k_2}(m,\nabla f_{\mathcal{I}}(x))$ incurs $|\mathcal{I}|(k_1 + k_2) / d$ units of computational cost. Given our framework, the theoretical complexity of the algorithm is + +$$ +C_{\mathrm{comp}}(\epsilon) \triangleq \sum_{j = 1}^{T} \left(B + 2 b N_j \frac{k_1 + k_2}{d}\right). \tag{6} +$$ + +# 3.2 WORST-CASE GUARANTEE + +Theorem 1. Under the following setting of parameters, + +$$ +\eta L = \sqrt{\frac{k_2}{6 d m}}, \quad B = \left\lceil \frac{2 \sigma^2}{\epsilon^2} \wedge n \right\rceil, +$$ + +for any $T \geq T(\epsilon) \triangleq 4\Delta_f / (\eta m\epsilon^2)$, + +$$ +\mathbb{E}\|\nabla f(x_{\mathrm{out}})\| \leq \epsilon . +$$ + +If we further set + +$$ +m = \frac{B d}{b \left(k_1 + k_2\right)}, +$$ + +the complexity to achieve the above condition is + +$$ +\mathbb{E} C_{\mathrm{comp}}(\epsilon) = O\left(\left(\frac{\sigma}{\epsilon^3} \wedge \frac{\sqrt{n}}{\epsilon^2}\right) L \Delta_f \sqrt{\frac{b(k_1 + k_2)}{k_2}}\right). +$$ + +Recall that the complexity of SpiderBoost (Wang et al., 2018) is + +$$ +O\left(\left(\frac{\sigma}{\epsilon^3} \wedge \frac{\sqrt{n}}{\epsilon^2}\right) L \Delta_f\right). +$$ + +Thus as long as $b = O(1)$ and $k_1 = O(k_2)$, our algorithm has the same complexity as SpiderBoost under appropriate settings.
The penalty term $O(\sqrt{b(k_1 + k_2) / k_2})$ is due to the information loss by sparsification. + +# 3.3 DATA ADAPTIVE ANALYSIS + +Let + +$$ +g _ {t} ^ {(j)} = \left\| \operatorname {t o p} _ {- k _ {1}} (M _ {t} ^ {(j)}, \nabla f (x _ {t + 1} ^ {(j)}) - \nabla f (x _ {t} ^ {(j)})) \right\| ^ {2}, +$$ + +and + +$$ +G _ {t} ^ {(j)} = \frac {1}{n} \sum_ {i = 1} ^ {n} \| \operatorname {t o p} _ {- k _ {1}} (M _ {t} ^ {(j)}, \nabla f _ {i} (x _ {t + 1} ^ {(j)}) - \nabla f _ {i} (x _ {t} ^ {(j)})) \| ^ {2}. +$$ + +By Cauchy-Schwarz inequality and the linearity of $\mathrm{top}_{-k_1}$ , it is easy to see that $g_t^{(j)} \leq G_t^{(j)}$ . If our algorithm succeeds in capturing sparsity, both $g_t^{(j)}$ and $G_t^{(j)}$ will be small. In this subsection we will analyze the complexity under this case. Further define $R_j$ as + +$$ +R _ {j} = \mathbb {E} _ {j} g _ {N _ {j}} ^ {(j)} + \frac {\mathbb {E} _ {j} G _ {N _ {j}} ^ {(j)}}{b}, \tag {7} +$$ + +where $\mathbb{E}_j$ is taken over all randomness in $j$ -th outer loop (line 4-13 of Algorithm 1). + +Theorem 2. Under the following setting of parameters + +$$ +\eta L = \sqrt {\frac {b \wedge m}{3 m}}, \quad B = \left\lceil \frac {3 \sigma^ {2}}{\epsilon^ {2}} \wedge n \right\rceil +$$ + +For any $T \geq T(\epsilon) \triangleq 6\Delta_f / \eta m\epsilon^2$ + +$$ +\mathbb {E} \| \nabla f (x _ {\mathrm {o u t}}) \| ^ {2} \leq \frac {2 \epsilon^ {2}}{3} + \frac {(d - k _ {1} - k _ {2}) m}{k _ {2}} \mathbb {E} \bar {R} _ {T}, +$$ + +where + +$$ +\bar {R} _ {T} = \frac {1}{T} \sum_ {j = 1} ^ {T} R _ {j}. +$$ + +If $\mathbb{E}\bar{R}_T\leq \epsilon^2\frac{k_2}{3(d - k_1 - k_2)m}$ , then + +$$ +\mathbb {E} \| \nabla f (x _ {\mathrm {o u t}}) \| \leq \epsilon . 
+$$ + +If we further set + +$$ +m = \frac{B d}{b \left(k_1 + k_2\right)}, +$$ + +the complexity to achieve the above condition is + +$$ +\mathbb{E} C_{\mathrm{comp}}(\epsilon) = O\left(\left(\frac{\sigma}{\epsilon^3} \wedge \frac{\sqrt{n}}{\epsilon^2}\right) L \Delta_f \sqrt{\frac{k_1 + k_2}{d} \frac{b}{b \wedge m}}\right). +$$ + +In practice, $m$ is usually much larger than $b$. As a result, the complexity of our algorithm is a factor of $O\left(\sqrt{\left(k_{1} + k_{2}\right) / d}\right)$ smaller than that of SpiderBoost if our algorithm captures gradient sparsity. Although this type of data adaptive analysis is not as clean as the worst-case guarantee (Theorem 1), it reveals the potentially superior performance of our algorithm. Similar analyses have been done for various other algorithms, including AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014). + +# 4 EXPERIMENTS + +In this section, we present a variety of experiments to illustrate gradient sparsity and demonstrate the performance of Sparse SpiderBoost. By computing the entropy of the empirical distribution of the absolute value of stochastic gradient coordinates, we show that certain models exhibit gradient sparsity during optimization. To evaluate the performance of variance reduction with sparse gradients, we compute the loss over gradient queries per epoch of Sparse SpiderBoost and SpiderBoost for a number of image classification problems. We also compare Sparse SpiderBoost, SpiderBoost, and SGD on a natural language processing task and sparse matrix factorization. + +For all experiments, unless otherwise specified, we run SpiderBoost and Sparse SpiderBoost with a learning rate $\eta = 0.1$, large-batch size $B = 1000$, small-batch size $b = 100$, inner loop length of $m = 10$, memory decay factor of $\alpha = 0.5$, and $k_{1}$ and $k_{2}$ both set to $5\%$ of the total number of model parameters.
We call the sum $k_{1} + k_{2} = k = 10\%$ the sparsity of the optimization algorithm. + +# 4.1 GRADIENT SPARSITY AND IMAGE CLASSIFICATION + +Our experiments in this section test a number of image classification tasks for gradient sparsity, and plot the learning curves of some of these tasks. We test a 2-layer fully connected neural network with hidden layers of width 100, a simple convolutional neural net which we describe in detail in Appendix B, and Resnet-18 (He et al., 2015). All models use ReLU activations. For datasets, we use CIFAR-10 (Krizhevsky et al.), SVHN (Netzer et al., 2011), and MNIST (LeCun & Cortes, 2010). None of our experiments include Resnet-18 on MNIST, as MNIST is an easier dataset included primarily to provide variety. + +Our method relies partially on the assumption that the magnitudes of the derivatives of some model parameters are greater than those of others. To measure this, we compute the entropy of the empirical distribution of the absolute value of stochastic gradient coordinates. In Algorithm 1, the following term updates our estimate of the variance of each coordinate's derivative: + +$$ +M_{t + 1}^{(j)} := \alpha |\nu_{t + 1}^{(j)}| + (1 - \alpha) M_t^{(j)}. +$$ + +Consider the entropy of the following probability vector $p_t^{(j)} = M_t^{(j)} / \| M_t^{(j)}\|_1$. The entropy of $p$ provides us with a measure of how much structure there is in our gradients. To see this, consider the hypothetical scenario where $p_i = 1/d$ for every $i$. In this scenario we have no structure; the top $k_1$ component of our sparsity operator is providing no value and entropy is maximized. On the other hand, if a single entry $p_i = 1$ and all other entries $p_j = 0$, then the top $k_1$ component of our sparsity operator is effectively identifying the only relevant model parameter. + +To measure the potential of our sparsity operator, we compute the entropy of $p$ while running SpiderBoost on a variety of datasets and model architectures.
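The entropy diagnostic just described can be sketched directly; the synthetic memory vectors below are our own illustrations, not trained models. Entropies are in bits, so a model with $d$ parameters has maximum entropy $\log_2 d$, consistent with the Max values reported in Table 1.

```python
import numpy as np

def memory_entropy(M):
    """Entropy (in bits) of p = M / ||M||_1 for a nonnegative memory vector M."""
    p = M / np.sum(M)
    p = p[p > 0]                   # use the convention 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))

d = 2 ** 18                        # illustrative parameter count (~260k)
rng = np.random.default_rng(0)

uniform = np.ones(d)               # no structure: entropy = log2(d) = 18 bits
peaked = np.zeros(d)
peaked[0] = 1.0                    # one relevant coordinate: entropy = 0
# A few large coordinates among mostly tiny ones, mimicking gradient sparsity.
sparse = np.where(rng.random(d) < 0.01, rng.exponential(size=d), 1e-4)

print(memory_entropy(uniform), memory_entropy(peaked), memory_entropy(sparse))
```

A structured (sparse) memory vector lands strictly between the two extremes, which is the regime where the top- $k_1$ phase of the operator is useful.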
The results of running this experiment are summarized in the following table. + +Table 1: Entropy of Memory Vectors + +
| Dataset | FC NN (Max / Before / After) | Conv NN (Max / Before / After) | Resnet-18 (Max / Before / After) |
| --- | --- | --- | --- |
| CIFAR-10 | 18.234 / 16.41 / 8.09 | 15.920 / 13.38 / 2.66 | 23.414 / 22.59 / 21.70 |
| SVHN | 18.234 / 15.36 / 8.05 | 15.920 / 13.00 / 2.97 | 23.414 / 22.62 / 21.31 |
| MNIST | 18.234 / 14.29 / 9.77 | 15.920 / 14.21 / 2.77 | - / - / - |
+ +Table 1 provides the maximum entropy as well as the entropy of the memory vector before and after training for 150 epochs, for each dataset and each model. For each model, the entropy at the beginning of training is almost maximal. This is due to random initialization of model parameters. After 150 epochs, the entropy of $M_t$ for the convolutional model drops to approximately 3, which suggests a substantial amount of gradient structure. Note that for the datasets that we tested, the gradient structure depends primarily on the model and not the dataset. In particular, for Resnet-18, the entropy appears to vary minimally after 150 epochs. + +![](images/5e956e4f5bcdcef65a8764ec0e091002d3150fdcbae0afaa634a5edc25ec2e16.jpg) +Figure 1: SpiderBoost with $10\%$ sparsity $(k = 0.1d)$ compared to SpiderBoost without sparsity. Left figure compares the two algorithms using Resnet-18 on CIFAR-10. Right figure compares the two algorithms using a convolutional neural network trained on MNIST. The x-axis measures gradient queries over $N$, where $N$ is the size of the respective datasets. Plots are in log-scale. + +![](images/7acb8e0a13611ea46ad6aad228916964ae4fde5f632d7f9ba7f351dc199230ca.jpg) + +Figure 1 compares SpiderBoost alone to SpiderBoost with $10\%$ sparsity (10% of parameter derivatives). All experiments in this section are run for 50 epochs. In our comparison to SpiderBoost, we measure the number of gradient queries over the size of the dataset $N$. A single gradient query is taken to be the cost of computing a gradient for a single data point. If $i$ is the index of a single sample, then $\nabla f_i(x)$ is a single gradient query. Using a batch gradient of size $B$ to update model parameters has a gradient query cost of $B$. For a model with $d$ parameters, using a single sample to update $k$ model parameters has a gradient query cost of $k / d$, etc.
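Following this accounting, the per-outer-loop gradient query cost of equation (6), $B + 2bN_j(k_1 + k_2)/d$, can be tallied for the default settings of Section 4; the helper function below is our own sketch.

```python
def queries_per_outer_loop(B, b, m, sparsity=1.0):
    """Gradient-query cost of one outer loop, cf. equation (6): one large
    batch of size B at full cost, then m inner steps, each querying two
    small batches of size b at a fraction (k1 + k2) / d of coordinates."""
    return B + 2 * b * m * sparsity

# Defaults from Section 4: B = 1000, b = 100, m = 10.
dense = queries_per_outer_loop(1000, 100, 10)                 # plain SpiderBoost
sparse = queries_per_outer_loop(1000, 100, 10, sparsity=0.1)  # 10% sparsity
print(dense, sparse)
```

Under these settings the sparse variant cuts the inner-loop queries from 2000 to 200 (3000 vs. 1200 queries per outer loop); the large batch, which seeds the memory vector, is charged at full cost in both cases.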
+ +Our results of fitting the convolutional neural network to MNIST show that sparsity provides a significant advantage compared to using SpiderBoost alone. We only show 2 epochs of this experiment since the MNIST dataset is fairly simple and convergence is rapidly achieved. The results of training Resnet-18 on CIFAR-10 suggest that our sparsity algorithm works well on large neural networks and non-trivial datasets. We believe Resnet-18 on CIFAR-10 does not do as well due to the gradient density we observe for Resnet-18 in general. Sparsity here not only has the additional benefit of reducing gradient query complexity, but also provides a dampening effect on variance due to the additional covariates in SpiderBoost's update to model parameters. Results for the rest of these experiments can be found in Appendix B. + +# 4.2 NATURAL LANGUAGE PROCESSING + +We evaluate Sparse SpiderBoost's performance on an LSTM-based (Hochreiter & Schmidhuber, 1997) generative language model. We compare Sparse SpiderBoost, SpiderBoost, and SGD. We train our LSTM model on the Penn Treebank (Marcus et al., 1994) corpus. The natural language processing model consists of a word embedding of dimension 128 over a vocabulary of 1000 tokens, which is jointly learned with the task. The LSTM has a hidden and cell state dimension of 1024. All three optimization algorithms operate on this model. The variance reduction training algorithm for this type of model can be found in Appendix B. We run SpiderBoost and Sparse SpiderBoost with a learning rate $\eta = 0.2$, large-batch size $B = 40$, small-batch size $b = 20$, inner loop length of $m = 2$. We run SGD with learning rate 0.2 and batch size 20. Figure 2 shows SpiderBoost is slightly worse than SGD, and sparsity provides a noticeable improvement over SGD. + +![](images/8b021e936ab6a55c4095df9a9df812896456eac617226975bfdece396a4b2650.jpg) +Figure 2: (a): SGD learning rate is 0.2 and batch size is 20.
(b): SGD batch size is 103 and learning rate schedule is 0.1 for epochs $0 - 10$ , 0.01 for epochs $10 - 20$ , and 0.001 for epochs $20 - 40$ . The x-axis measures gradient queries over $N$ , where $N$ is the size of the respective datasets. Plots are in log-scale. + +![](images/684cbeb93383fdd4786d78d39d957ffdb2e536d4be9bec7cf7ac8cfcebb051e8.jpg) + +# 4.3 SPARSE MATRIX FACTORIZATION + +For our experiments with sparse matrix factorization, we perform Bayesian Personalized Ranking (Rendle et al., 2009) on the MovieLens database (Harper & Konstan, 2015) with a latent dimension of 20. To satisfy $m = B / b$ , we run SpiderBoost and Sparse SpiderBoost with a large-batch size $B = 1030$ , small-batch size $b = 103$ , inner loop length of $m = 10$ . For this experiment, we run SpiderBoost with the following learning rate schedule: + +$$ +\eta (a, b, t) = b + (a - b) \frac {m - t}{m}, +$$ + +where $a = 1.0$ and $b = 0.1$ . The schedule interpolates from $a$ to $b$ as the algorithm progresses through the inner loop. For instance, within the inner loop, at iteration 0 the learning rate is 1.0, and at iteration $m$ the learning rate is 0.1. We believe this is a natural way to utilize the low variance + +at the beginning of the inner loop, and is a fair comparison to an exponential decay learning rate schedule for SGD. Details of the SGD baselines are provided in Figure 2. We see SpiderBoost is slightly worse than SGD, and sparsity provides a slight improvement over SGD, especially in the first few epochs. + +# 5 CONCLUSION + +In this paper, we show how sparse gradients with memory can be used to improve the gradient query complexity of SVRG-type variance reduction algorithms. While we provide a concrete sparse variance reduction algorithm for SpiderBoost, the techniques developed in this paper can be adapted to other variance reduction algorithms. 
+ +We show that our algorithm provides a way to explicitly control the gradient query complexity of variance reduction methods, a problem which has thus far not been addressed. Assuming our algorithm captures the sparsity structure of the optimization problem, we also prove that the complexity of our algorithm is an improvement over SpiderBoost. The results of our comparison to SpiderBoost validate this assumption, and entropy measures provided in Table 1 empirically support our hypothesis that gradient sparsity exists. + +Table 1 also supports the results in Aji & Heafield (2017), which shows that the top- $k$ operator generally outperforms the random- $k$ operator. Our random-top- $k$ operator takes advantage of the superior performance of the top- $k$ operator while eliminating bias via a secondary random- $k$ operator. Not every problem we tested exhibited sparsity structure. While this is true, our analysis proves that our algorithm performs no worse than SpiderBoost in these settings. Even when there is no structure, our algorithm reduces to a random sampling of $k_{1} + k_{2}$ coordinates, which is essentially a randomized coordinate descent analogue of SpiderBoost. Empirically, we see that Sparse SpiderBoost outperforms SpiderBoost when no sparsity structure is present. We believe this is due to the variance introduced by additional covariates in the SpiderBoost update, which is mitigated in Sparse SpiderBoost by our random-top- $k$ operator. + +The results of our experiments on natural language processing and matrix factorization demonstrate that, with additional effort, variance reduction methods are competitive with SGD. While we view this as progress toward improving the practical viability of variance reduction algorithms, we believe further improvements can be made, such as better utilization of reduced variance during training, and better control over increased variance in very high dimensional models such as dense net (Defazio, 2019). 
We recognize these issues and hope to make progress on them in future work.

# REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Alham Fikri Aji and Kenneth Heafield. Sparse communication for distributed gradient descent. CoRR, abs/1704.05021, 2017. URL http://arxiv.org/abs/1704.05021.
Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 1200-1205. ACM, 2017.
Zeyuan Allen-Zhu. Katyusha X: Practical momentum method for stochastic sum-of-nonconvex optimization. arXiv preprint arXiv:1802.03866, 2018a.
Zeyuan Allen-Zhu. Natasha 2: Faster non-convex optimization than sgd. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett
+Yves Chauvin and David E. Rumelhart (eds.). Backpropagation: Theory, Architectures, and Applications. L. Erlbaum Associates Inc., Hillsdale, NJ, USA, 1995. ISBN 0-8058-1259-8. +Aaron Defazio. On the ineffectiveness of variance reduced optimization for deep learning, 2019. URL https://openreview.net/forum?id=B1MIBs05F7. +Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pp. 1646-1654, 2014. +John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011. +Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In Advances in Neural Information Processing Systems, pp. 689-699, 2018. +F. Maxwell Harper and Joseph A. Konstan. The movielens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4):19:1-19:19, December 2015. ISSN 2160-6455. doi: 10.1145/2827872. URL http://doi.acm.org/10.1145/2827872. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997. +Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, NIPS'13, pp. 315-323, USA, 2013. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=2999611.2999647. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. 
Cifar-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/~kriz/cifar.html. +Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/. +Lihua Lei and Michael Jordan. Less than a Single Pass: Stochastically Controlled Stochastic Gradient. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54, pp. 148-156. PMLR, 2017. +Lihua Lei and Michael I Jordan. On the adaptivity of stochastic gradient-based optimization. arXiv preprint arXiv:1904.04480, 2019. +Lihua Lei, Cheng Ju, Jianbo Chen, and Michael I Jordan. Non-convex finite-sum optimization via scsg methods. In Advances in Neural Information Processing Systems 30, pp. 2348-2358. 2017. URL http://papers.nips.cc/paper/6829-non-convex-finite-sum-optimization-via-scsg-methods.pdf. + +Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. The penn treebank: Annotating predicate argument structure. In Proceedings of the Workshop on Human Language Technology, HLT '94, pp. 114-119, Stroudsburg, PA, USA, 1994. Association for Computational Linguistics. ISBN 1-55860-357-3. doi: 10.3115/1075812.1075835. URL https://doi.org/10.3115/1075812.1075835. +David R Musser. Introspective sorting and selection algorithms. Software: Practice and Experience, 27(8):983-993, 1997. +Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on optimization, 19(4):1574-1609, 2009. +Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf. 
+Lam M Nguyen, Marten van Dijk, Dzung T Phan, Phuong Ha Nguyen, Tsui-Wei Weng, and Jayant R Kalagnanam. Optimal finite-sum smooth non-convex optimization with sarah. arXiv preprint arXiv:1901.07648, 2019. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. +Nhan H Pham, Lam M Nguyen, Dzung T Phan, and Quoc Tran-Dinh. Proxsarah: An efficient algorithmic framework for stochastic composite nonconvex optimization. arXiv preprint arXiv:1902.05679, 2019. +Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. Stochastic variance reduction for nonconvex optimization. arXiv preprint arXiv:1603.06160, 2016a. +Sashank J Reddi, Suvrit Sra, Barnabás Póczos, and Alex Smola. Fast incremental method for nonconvex optimization. arXiv preprint arXiv:1603.06159, 2016b. +Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence, pp. 452-461. AUAI Press, 2009. +H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951. +Nicolas Le Roux, Mark Schmidt, and Francis Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pp. 2663-2671, 2012. +Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. CoRR, abs/1809.07599, 2018. URL http://arxiv.org/abs/1809.07599. +Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost: A class of faster variance-reduced algorithms for nonconvex optimization. CoRR, abs/1810.10690, 2018. URL http://arxiv.org/abs/1810.10690. +Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. 
SIAM Journal on Optimization, 24(4):2057-2075, 2014.
Dongruo Zhou, Pan Xu, and Quanquan Gu. Stochastic nested variance reduction for nonconvex optimization. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 3925-3936. Curran Associates Inc., 2018.

# A TECHNICAL PROOFS

# A.1 PREPARATORY RESULTS

Lemma 2 (Lemma 3.1 of Lei & Jordan (2019)). Let $N \sim \mathrm{Geom}(m)$. Then for any sequence $D_0, D_1, \ldots$ with $\mathbb{E}|D_N| < \infty$,

$$
\mathbb{E}(D_N - D_{N+1}) = \frac{1}{m} \left( D_0 - \mathbb{E} D_N \right).
$$

Remark 1. The requirement $\mathbb{E}|D_N| < \infty$ is essential. A useful sufficient condition is $|D_t| = O(\mathrm{Poly}(t))$, because a geometric random variable has finite moments of any order.

Lemma 3 (Lemma B.2 of Lei & Jordan (2019)). Let $z_1, \ldots, z_M \in \mathbb{R}^d$ be an arbitrary population and $\mathcal{J}$ be a uniform random subset of $[M]$ with size $m$. Then

$$
\operatorname{Var}\left( \frac{1}{m} \sum_{j \in \mathcal{J}} z_j \right) \leq \frac{I(m < M)}{m} \cdot \frac{1}{M} \sum_{j=1}^{M} \| z_j \|_2^2.
$$

Proof of Lemma 1. WLOG, assume that $|x_1| \geq |x_2| \geq \ldots \geq |x_d|$. Let $S$ be a random subset of $\{k_1 + 1, \ldots, d\}$ with size $k_2$. Then

$$
\left( \operatorname{rtop}_{k_1, k_2}(x, y) \right)_{\ell} = y_{\ell} \left( I(\ell \leq k_1) + \frac{d - k_1}{k_2} I(\ell \in S) \right).
+$$ + +As a result, + +$$ +\mathbb {E} \left[ \left(\operatorname {r t o p} _ {k _ {1}, k _ {2}} (x, y)\right) _ {\ell} \right] = y _ {\ell} \left(I (\ell \leq k _ {1}) + \frac {d - k _ {1}}{k _ {2}} I (\ell > k _ {1}) P (\ell \in S)\right) = y _ {\ell}, +$$ + +and + +$$ +\begin{array}{l} \operatorname {V a r} \left[ \left(\operatorname {r t o p} _ {k _ {1}, k _ {2}} (x, y)\right) _ {\ell} \right] = \left(\frac {d - k _ {1}}{k _ {2}}\right) ^ {2} y _ {\ell} ^ {2} I (\ell > k _ {1}) P (\ell \in S) (1 - P (\ell \in S)) \\ = \frac {d - k _ {1} - k _ {2}}{k _ {2}} y _ {\ell} ^ {2} I (\ell > k _ {1}). \\ \end{array} +$$ + +Therefore, + +$$ +\mathrm {V a r} \left(\mathrm {r t o p} _ {k _ {1}, k _ {2}} (x, y)\right) = \frac {d - k _ {1} - k _ {2}}{k _ {2}} \sum_ {\ell > k _ {1}} y _ {\ell} ^ {2} = \frac {d - k _ {1} - k _ {2}}{k _ {2}} \| \mathrm {t o p} _ {- k _ {1}} (x, y) \| ^ {2}. +$$ + +# A.2 ANALYSIS OF A SINGLE INNER LOOP + +Lemma 4. For any $j,t$ + +$$ +\mathbb {E} _ {j, t} (\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}) = \nabla f (x _ {t + 1} ^ {(j)}) - \nabla f (x _ {t} ^ {(j)}) +$$ + +and + +$$ +\mathrm {V a r} _ {j, t} (\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}) \leq \frac {\eta^ {2} L ^ {2}}{b} \| \nu_ {t} ^ {(j)} \| ^ {2} + \frac {d - k _ {1} - k _ {2}}{k _ {2}} \left(g _ {t} ^ {(j)} + \frac {G _ {t} ^ {(j)}}{b}\right), +$$ + +where $\mathbb{E}_{j,t}$ and $\mathrm{Var}_{j,t}$ are taken over the randomness of $\mathcal{I}_t^{(j)}$ and the random subset $S$ involved in the $\mathrm{rtop}_{k_1,k_2}$ operator. + +Proof. By definition, + +$$ +\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)} = \mathrm {r t o p} _ {k _ {1}, k _ {2}} \left(M _ {t} ^ {(j)}, \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t + 1} ^ {(j)}) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t} ^ {(j)})\right). +$$ + +Let $S$ be the random subset involved in $\mathrm{rtop}_{k_1,k_2}$ . Then $S$ is independent of $(\mathcal{I}_t^{(j)},M_t^{(j)},x_{t + 1}^{(j)},x_t^{(j)})$ . 
By Lemma 1, + +$$ +\mathbb {E} _ {S} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right) = \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t + 1} ^ {(j)}) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t} ^ {(j)}) +$$ + +and + +$$ +\mathrm {V a r} _ {S} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right) = \frac {d - k _ {1} - k _ {2}}{k _ {2}} \left\| \mathrm {t o p} _ {- k _ {1}} \left(M _ {t} ^ {(j)}, \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t + 1} ^ {(j)}) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t} ^ {(j)})\right) \right\| ^ {2}. +$$ + +Since $\mathcal{I}_t^{(j)}$ is independent of $(M_t^{(j)},x_{t + 1}^{(j)},x_t^{(j)})$ , the tower property of conditional expectation and variance implies that + +$$ +\mathbb {E} _ {j, t} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right) = \mathbb {E} _ {\mathcal {I} _ {t} ^ {(j)}} \left(\nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t + 1} ^ {(j)}) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t} ^ {(j)})\right) = \nabla f (x _ {t + 1} ^ {(j)}) - \nabla f (x _ {t} ^ {(j)}), +$$ + +and + +$$ +\operatorname {V a r} _ {j, t} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right) = \mathbb {E} _ {\mathcal {I} _ {t} ^ {(j)}} \left(\operatorname {V a r} _ {S} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right)\right) + \operatorname {V a r} _ {\mathcal {I} _ {t} ^ {(j)}} \left(\mathbb {E} _ {S} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right)\right). \tag {8} +$$ + +To bound the first term, we note that $\mathrm{top}_{-k_1}$ is linear in $y$ and thus + +$$ +\begin{array}{l} \left. \mathbb {E} _ {\mathcal {I} _ {t} ^ {(j)}} \left\| \operatorname {t o p} _ {- k _ {1}} \left(M _ {t} ^ {(j)}, \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t + 1} ^ {(j)}\right) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t} ^ {(j)}\right)\right) \right\| ^ {2} \right. 
\\ = \left\| \mathbb {E} _ {\mathcal {I} _ {t} ^ {(j)}} \operatorname {t o p} _ {- k _ {1}} \left(M _ {t} ^ {(j)}, \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t + 1} ^ {(j)}) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t} ^ {(j)})\right) \right\| ^ {2} \\ + \operatorname {V a r} _ {\mathcal {I} _ {t} ^ {(j)}} \left[ \operatorname {t o p} _ {- k _ {1}} \left(M _ {t} ^ {(j)}, \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t + 1} ^ {(j)}\right) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t} ^ {(j)}\right)\right) \right] \\ = g _ {t} ^ {(j)} + \operatorname {V a r} _ {\mathcal {I} _ {t} ^ {(j)}} \left[ \frac {1}{b} \sum_ {i \in \mathcal {I} _ {t} ^ {(j)}} \operatorname {t o p} _ {- k _ {1}} \left(M _ {t} ^ {(j)}, \nabla f _ {i} \left(x _ {t + 1} ^ {(j)}\right) - \nabla f _ {i} \left(x _ {t} ^ {(j)}\right)\right) \right] \\ \leq g _ {t} ^ {(j)} + \frac {G _ {t} ^ {(j)}}{b}, \tag {9} \\ \end{array} +$$ + +where the last inequality uses Lemma 3. To bound the second term of (8), by Lemma 3, + +$$ +\begin{array}{l} \operatorname {V a r} _ {\mathcal {I} _ {t} ^ {(j)}} \left(\mathbb {E} _ {S} \left(\nu_ {t + 1} ^ {(j)} - \nu_ {t} ^ {(j)}\right)\right) = \operatorname {V a r} _ {\mathcal {I} _ {t} ^ {(j)}} \left(\nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t + 1} ^ {(j)}\right) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t} ^ {(j)}\right)\right) \\ \leq \frac {1}{b} \frac {1}{n} \sum_ {i = 1} ^ {n} \| \nabla f _ {i} (x _ {t + 1} ^ {(j)}) - \nabla f _ {i} (x _ {t} ^ {(j)}) \| ^ {2} \stackrel {(i)} {\leq} \frac {L ^ {2}}{b} \| x _ {t + 1} ^ {(j)} - x _ {t} ^ {(j)} \| ^ {2} \stackrel {(i i)} {=} \frac {\eta^ {2} L ^ {2}}{b} \| \nu_ {t} ^ {(j)} \| ^ {2}, \\ \end{array} +$$ + +where (i) uses assumption A1 and (ii) uses the definition that $x_{t + 1}^{(j)} = x_t^{(j)} - \eta \nu_t^{(j)}$ . + +Lemma 5. 
For any $j, t$,

$$
\mathbb{E}_{j,t} \| \nu_{t+1}^{(j)} - \nabla f(x_{t+1}^{(j)}) \|^2 \leq \| \nu_t^{(j)} - \nabla f(x_t^{(j)}) \|^2 + \frac{\eta^2 L^2}{b} \| \nu_t^{(j)} \|^2 + \frac{d - k_1 - k_2}{k_2} \left( g_t^{(j)} + \frac{G_t^{(j)}}{b} \right),
$$

where $\mathbb{E}_{j,t}$ and $\mathrm{Var}_{j,t}$ are taken over the randomness of $\mathcal{I}_t^{(j)}$ and the random subset $S$ involved in the $\mathrm{rtop}_{k_1,k_2}$ operator.

Proof. By Lemma 4, we have

$$
\nu_{t+1}^{(j)} - \nabla f(x_{t+1}^{(j)}) = \nu_t^{(j)} - \nabla f(x_t^{(j)}) + \left( \nu_{t+1}^{(j)} - \nu_t^{(j)} - \mathbb{E}_{j,t}(\nu_{t+1}^{(j)} - \nu_t^{(j)}) \right).
$$

Since $\mathcal{I}_t^{(j)}$ is independent of $(\nu_t^{(j)}, x_t^{(j)})$,

$$
\operatorname{Cov}_{j,t} \left( \nu_t^{(j)} - \nabla f(x_t^{(j)}), \nu_{t+1}^{(j)} - \nu_t^{(j)} \right) = 0.
$$

As a result,

$$
\mathbb{E}_{j,t} \| \nu_{t+1}^{(j)} - \nabla f(x_{t+1}^{(j)}) \|^2 = \| \nu_t^{(j)} - \nabla f(x_t^{(j)}) \|^2 + \operatorname{Var}_{j,t}(\nu_{t+1}^{(j)} - \nu_t^{(j)}).
$$

The proof is then completed by Lemma 4.

Lemma 6. For any $j$,

$$
\mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_{N_j}^{(j)}) \|^2 \leq \frac{m \eta^2 L^2}{b} \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2 + \frac{\sigma^2 I(B < n)}{B} + \frac{(d - k_1 - k_2) m}{k_2} R_j,
$$

where $\mathbb{E}_j$ is taken over all randomness in the $j$-th outer loop (lines 4-13 of Algorithm 1).

Proof.
By definition, + +$$ +\begin{array}{l} \left\| \nu_ {t + 1} ^ {(j)} \right\| \leq \left\| \nu_ {t} ^ {(j)} \right\| + \left\| \operatorname {r t o p} _ {k _ {1}, k _ {2}} \left(M _ {t} ^ {(j)}, \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t + 1} ^ {(j)}\right) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} \left(x _ {t} ^ {(j)}\right)\right) \right\| \\ \leq \left\| \nu_ {t} ^ {(j)} \right\| + \left\| \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t + 1} ^ {(j)}) - \nabla f _ {\mathcal {I} _ {t} ^ {(j)}} (x _ {t} ^ {(j)}) \right\| \\ \leq \| \nu_ {t} ^ {(j)} \| + \frac {1}{b} \sum_ {i \in \mathcal {I} _ {t} ^ {(j)}} \left\| \nabla f _ {i} (x _ {t + 1} ^ {(j)}) - \nabla f _ {i} (x _ {t} ^ {(j)}) \right\| \\ \leq \| \nu_ {t} ^ {(j)} \| + \sqrt {\frac {1}{b} \sum_ {i \in \mathcal {I} _ {t} ^ {(j)}} \left\| \nabla f _ {i} (x _ {t + 1} ^ {(j)}) - \nabla f _ {i} (x _ {t} ^ {(j)}) \right\| ^ {2}} \\ \leq \| \nu_ {t} ^ {(j)} \| + \sqrt {\frac {2}{b} \left(\sum_ {i \in \mathcal {I} _ {t} ^ {(j)}} \left\| \nabla f _ {i} \left(x _ {t + 1} ^ {(j)}\right) \right\| ^ {2} + \sum_ {i \in \mathcal {I} _ {t} ^ {(j)}} \left\| \nabla f _ {i} \left(x _ {t} ^ {(j)}\right) \right\| ^ {2}\right)} \\ \leq \| \nu_ {t} ^ {(j)} \| + \sqrt {\frac {2 n}{b} \left(\frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x _ {t + 1} ^ {(j)}\right) \right\| ^ {2} + \frac {1}{n} \sum_ {i = 1} ^ {n} \left\| \nabla f _ {i} \left(x _ {t} ^ {(j)}\right) \right\| ^ {2}\right)} \\ \leq \left\| \nu_ {t} ^ {(j)} \right\| + \sqrt {2 n} \sigma \\ \end{array} +$$ + +As a result, + +$$ +\left\| \nu_ {t} ^ {(j)} \right\| \leq \left\| \nu_ {0} ^ {(j)} \right\| + t \sqrt {2 n} \sigma , \tag {10} +$$ + +Thus, + +$$ +\| \nu_ {t} ^ {(j)} - \nabla f (x _ {t} ^ {(j)}) \| ^ {2} \leq 2 \| \nu_ {t} ^ {(j)} \| ^ {2} + 2 \| \nabla f (x _ {t} ^ {(j)}) \| ^ {2} = \mathrm {P o l y} (t). +$$ + +This implies that we can apply Lemma 2 on the sequence $D_{t} = \| \nu_{t}^{(j)} - \nabla f(x_{t}^{(j)})\|^{2}$ . 

Letting $t = N_j$ in Lemma 5 and taking the expectation $\mathbb{E}_j$ over all randomness in the $j$-th outer loop, we have

$$
\begin{array}{l}
\mathbb{E}_j \left\| \nu_{N_j+1}^{(j)} - \nabla f(x_{N_j+1}^{(j)}) \right\|^2 \\
\leq \mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_{N_j}^{(j)}) \|^2 + \frac{\eta^2 L^2}{b} \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2 + \frac{d - k_1 - k_2}{k_2} \mathbb{E}_j \left( g_{N_j}^{(j)} + \frac{G_{N_j}^{(j)}}{b} \right) \\
= \mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_{N_j}^{(j)}) \|^2 + \frac{\eta^2 L^2}{b} \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2 + \frac{d - k_1 - k_2}{k_2} R_j. \tag{11}
\end{array}
$$

By Lemma 2,

$$
\begin{array}{l}
\mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_{N_j}^{(j)}) \|^2 - \mathbb{E}_j \| \nu_{N_j+1}^{(j)} - \nabla f(x_{N_j+1}^{(j)}) \|^2 \\
= \frac{1}{m} \left( \| \nu_0^{(j)} - \nabla f(x_0^{(j)}) \|^2 - \mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_{N_j}^{(j)}) \|^2 \right) \\
= \frac{1}{m} \left( \mathbb{E}_j \| \nu_0^{(j)} - \nabla f(x_{j-1}) \|^2 - \mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_j) \|^2 \right), \tag{12}
\end{array}
$$

where the last line uses the definition that $x_{j-1} = x_0^{(j)}$ and $x_j = x_{N_j}^{(j)}$. By Lemma 3,

$$
\mathbb{E}_j \| \nu_0^{(j)} - \nabla f(x_{j-1}) \|^2 \leq \frac{\sigma^2 I(B < n)}{B}. \tag{13}
$$

The proof is completed by putting (11), (12) and (13) together.

Lemma 7.
For any $j, t$,

$$
f(x_{t+1}^{(j)}) \leq f(x_t^{(j)}) + \frac{\eta}{2} \| \nu_t^{(j)} - \nabla f(x_t^{(j)}) \|^2 - \frac{\eta}{2} \| \nabla f(x_t^{(j)}) \|^2 - \frac{\eta}{2} (1 - \eta L) \| \nu_t^{(j)} \|^2.
$$

Proof. By (4),

$$
\begin{array}{l}
f(x_{t+1}^{(j)}) \leq f(x_t^{(j)}) + \left\langle \nabla f(x_t^{(j)}), x_{t+1}^{(j)} - x_t^{(j)} \right\rangle + \frac{L}{2} \| x_t^{(j)} - x_{t+1}^{(j)} \|^2 \\
= f(x_t^{(j)}) - \eta \left\langle \nabla f(x_t^{(j)}), \nu_t^{(j)} \right\rangle + \frac{\eta^2 L}{2} \| \nu_t^{(j)} \|^2 \\
= f(x_t^{(j)}) + \frac{\eta}{2} \| \nu_t^{(j)} - \nabla f(x_t^{(j)}) \|^2 - \frac{\eta}{2} \| \nabla f(x_t^{(j)}) \|^2 - \frac{\eta}{2} \| \nu_t^{(j)} \|^2 + \frac{\eta^2 L}{2} \| \nu_t^{(j)} \|^2.
\end{array}
$$

The proof is then completed.

![](images/767e902689d78ba5860bf1ef76a1d01c46bdd205dc97dc942e327a10497b2179.jpg)

Lemma 8. For any $j$,

$$
\mathbb{E}_j \| \nabla f(x_j) \|^2 \leq \frac{2}{\eta m} \mathbb{E}_j (f(x_{j-1}) - f(x_j)) + \mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_j) \|^2 - (1 - \eta L) \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2,
$$

where $\mathbb{E}_j$ is taken over all randomness in the $j$-th outer loop (lines 4-13 of Algorithm 1).

Proof. Since $\| \nabla f(x) \| \leq \sigma$ for any $x$,

$$
| f(x_{t+1}^{(j)}) - f(x_t^{(j)}) | \leq \eta \sigma \| \nu_t^{(j)} \|.
$$

This implies that

$$
| f(x_t^{(j)}) | \leq \eta \sigma \sum_{k=0}^{t} \| \nu_k^{(j)} \| + | f(x_0^{(j)}) |.
$$

As shown in (10), $\| \nu_t^{(j)} \| = \mathrm{Poly}(t)$ and thus $| f(x_t^{(j)}) | = \mathrm{Poly}(t)$. This implies that we can apply Lemma 2 on the sequence $D_t = f(x_t^{(j)})$.

Letting $t = N_j$ in Lemma 7 and taking the expectation $\mathbb{E}_j$ over all randomness in the $j$-th outer loop, we have

$$
\mathbb{E}_j f(x_{N_j+1}^{(j)}) \leq \mathbb{E}_j f(x_{N_j}^{(j)}) + \frac{\eta}{2} \mathbb{E}_j \| \nu_{N_j}^{(j)} - \nabla f(x_{N_j}^{(j)}) \|^2 - \frac{\eta}{2} \mathbb{E}_j \| \nabla f(x_{N_j}^{(j)}) \|^2 - \frac{\eta}{2} (1 - \eta L) \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2.
$$

By Lemma 2,

$$
\mathbb{E}_j f(x_{N_j}^{(j)}) - \mathbb{E}_j f(x_{N_j+1}^{(j)}) = \frac{1}{m} \mathbb{E}_j (f(x_0^{(j)}) - f(x_{N_j}^{(j)})) = \frac{1}{m} \mathbb{E}_j (f(x_{j-1}) - f(x_j)).
$$

The proof is then completed.

![](images/f7b4dc8a21c10a1f34276565a95047b92dff1f7b2cad0f47a28ebe3f118eebd1.jpg)

Combining Lemma 6 and Lemma 8, we arrive at the following key result on one inner loop.

Theorem 3. For any $j$,

$$
\begin{array}{l}
\mathbb{E} \| \nabla f(x_j) \|^2 \leq \frac{2}{\eta m} \mathbb{E}_j (f(x_{j-1}) - f(x_j)) + \frac{\sigma^2 I(B < n)}{B} + \frac{(d - k_1 - k_2) m}{k_2} R_j \\
- \left( 1 - \eta L - \frac{m \eta^2 L^2}{b} \right) \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2.
\end{array}
$$

# A.3 COMPLEXITY ANALYSIS

Proof of Theorem 1. By definition (7) of $R_j$ and the smoothness assumption A1,

$$
\mathbb{E} R_j \leq \frac{b+1}{b} L^2 \mathbb{E} \| x_{N_j+1}^{(j)} - x_{N_j}^{(j)} \|^2 \leq 2 \eta^2 L^2 \mathbb{E} \| \nu_{N_j}^{(j)} \|^2.
$$

By Theorem 3,

$$
\begin{array}{l}
\mathbb{E} \| \nabla f(x_j) \|^2 \leq \frac{2}{\eta m} \mathbb{E}_j (f(x_{j-1}) - f(x_j)) + \frac{\sigma^2 I(B < n)}{B} \\
- \left( 1 - \eta L - \frac{m \eta^2 L^2}{b} - \frac{2 (d - k_1 - k_2) m \eta^2 L^2}{k_2} \right) \mathbb{E}_j \| \nu_{N_j}^{(j)} \|^2.
\end{array}
$$

Since $\eta L = \sqrt{k_2 / 6dm}$,

$$
\eta L + \frac{m \eta^2 L^2}{b} + \frac{2 (d - k_1 - k_2) m \eta^2 L^2}{k_2} \leq \frac{1}{\sqrt{6}} + \frac{1}{6} + \frac{1}{3} \leq 1.
$$

As a result,

$$
\mathbb{E} \| \nabla f(x_j) \|^2 \leq \frac{2}{\eta m} \mathbb{E}_j (f(x_{j-1}) - f(x_j)) + \frac{\sigma^2 I(B < n)}{B}.
$$

Since $x_{\mathrm{out}} = x_{T'}$ where $T' \sim \mathrm{Unif}([T])$, we have

$$
\mathbb{E} \| \nabla f(x_{\mathrm{out}}) \|^2 \leq \frac{2}{\eta m T} \mathbb{E} (f(x_0) - f(x_{T+1})) + \frac{\sigma^2 I(B < n)}{B} \leq \frac{2 \Delta_f}{\eta m T} + \frac{\sigma^2 I(B < n)}{B}.
$$

The setting of $T$ and $B$ guarantees that

$$
\frac{2 \Delta_f}{\eta m T} \leq \frac{\epsilon^2}{2}, \quad \frac{\sigma^2 I(B < n)}{B} \leq \frac{\epsilon^2}{2}.
$$

Therefore,

$$
\mathbb{E} \| \nabla f(x_{\mathrm{out}}) \|^2 \leq \epsilon^2.
$$

By the Cauchy-Schwarz inequality,

$$
\mathbb{E} \| \nabla f(x_{\mathrm{out}}) \| \leq \sqrt{\mathbb{E} \| \nabla f(x_{\mathrm{out}}) \|^2} \leq \epsilon.
$$

In this case, the average computation cost is

$$
\begin{array}{l}
\mathbb{E} C_{\mathrm{comp}}(\epsilon) = T(\epsilon) \left( B + \frac{2 (k_1 + k_2)}{d} b m \right) = 3 B T(\epsilon) \\
= O \left( \frac{B \Delta_f}{\eta m \epsilon^2} \right) = O \left( \frac{\sqrt{Bb} L \Delta_f}{\epsilon^2} \sqrt{\frac{k_1 + k_2}{k_2}} \right).
\end{array}
$$

The proof is then completed by the setting of $B$.

Proof of Theorem 2. Under the setting of $\eta$,

$$
\eta L + \frac{m \eta^2 L^2}{b} \leq \frac{1}{\sqrt{3}} + \frac{1}{3} \leq 1.
+$$ + +By Theorem 3, + +$$ +\mathbb {E} \| \nabla f (x _ {j}) \| ^ {2} \leq \frac {2}{\eta m} \mathbb {E} _ {j} (f (x _ {j - 1}) - f (x _ {j})) + \frac {\sigma^ {2} I (B < n)}{B} + \frac {d - k _ {1} - k _ {2}}{k _ {2}} R _ {j}. +$$ + +By definition of $x_{\mathrm{out}}$ + +$$ +\mathbb {E} \| \nabla f (x _ {\mathrm {o u t}}) \| ^ {2} \leq \frac {2 \Delta_ {f}}{\eta m T} + \frac {\sigma^ {2} I (B < n)}{B} + \frac {(d - k _ {1} - k _ {2}) m}{k _ {2}} \mathbb {E} \bar {R} _ {T}. +$$ + +Under the settings of $T$ and $B$ , + +$$ +\frac {2 \Delta_ {f}}{\eta m T} \leq \frac {\epsilon^ {2}}{3}, \quad \frac {\sigma^ {2} I (B < n)}{B} \leq \frac {\epsilon^ {2}}{3}. +$$ + +This proves the first result. The second result follows directly. For the computation cost, similar to the proof of Theorem 1, we have + +$$ +\mathbb {E} C _ {\mathrm {c o m p}} (\epsilon) = O (B T) = O \left(\frac {L \Delta_ {f}}{\epsilon^ {2}} \frac {B}{\sqrt {m (b \wedge m)}}\right). +$$ + +The proof is then completed by trivial algebra. + +![](images/5ebcc797f1e094405de97e78999450d3f86166518d0216a729656a66b3a21ff3.jpg) + +![](images/68347870fb021cba823948e855b962e65fed3f50b23579333a1737279544c447.jpg) +Figure 3: SpiderBoost with various values of sparsity, where (sparsity $= k / d$ ) corresponds to SpiderBoost with sparsity $k / d$ . Both figures use MNIST. The x-axis measures gradient queries over $N$ , where $N$ is the size of the respective datasets. Plots are in log-scale. 
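As a concrete illustration, the random-top-$k$ operator $\mathrm{rtop}_{k_1,k_2}$ analyzed in Lemma 1 can be sketched in a few lines of NumPy. This is our own minimal rendering, not the paper's code: the function name and the full-sort selection are our choices, and a cost-conscious implementation would instead use `np.argpartition` (introselect, cf. Musser, 1997) to find the top-$k_1$ coordinates in linear time.

```python
import numpy as np

def rtop(x, y, k1, k2, rng):
    """Sketch of the rtop_{k1,k2} operator from Lemma 1 (our naming).

    Keeps y exactly on the k1 coordinates where |x| is largest, and on a
    uniformly random k2-subset of the remaining d - k1 coordinates rescales
    y by (d - k1) / k2, which makes the output an unbiased estimate of y.
    """
    d = x.size
    order = np.argsort(np.abs(x))[::-1]            # coordinates by |x|, descending
    out = np.zeros_like(y, dtype=float)
    out[order[:k1]] = y[order[:k1]]                # deterministic top-k1 part
    tail = order[k1:]
    S = rng.choice(tail, size=k2, replace=False)   # secondary random-k2 part
    out[S] = y[S] * (d - k1) / k2                  # rescaling removes selection bias
    return out
```

Averaging many independent draws of `rtop(x, y, k1, k2, rng)` recovers `y`, matching the unbiasedness statement of Lemma 1, while each single draw has at most $k_1 + k_2$ nonzero coordinates.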

![](images/6e1b1eca299377a9b12f90e50a814e3a7943397278ac934d080886ec965c5f79.jpg)

# B EXPERIMENTS

# B.1 DESCRIPTION OF SIMPLE CONVOLUTIONAL NEURAL NETWORK

The simple convolutional neural network used in the experiments consists of a convolutional layer with a kernel size of 5, followed by a max pool layer with kernel size 2, followed by another convolutional layer with kernel size 5, followed by a fully connected layer of size $16 \cdot \text{side}^2 \times 120$ (where side is the size of the second dimension of the input), followed by a fully connected layer of size $120 \times 84$, followed by a final fully connected layer of size $84 \times$ the output dimension.

# B.2 NATURAL LANGUAGE PROCESSING

The natural language processing model consists of a word embedding of dimension 128 over a vocabulary of 1000 tokens, which is jointly learned with the task. The LSTM has a hidden and cell state dimension of 1024.

Algorithm 2: SpiderBoost for Natural Language Processing.
Input: Learning rate $\eta$, inner loop size $m$, number of iterations $T$, large batch matrix $Z_2$ with $\ell_2$ batches of size $B$, small batch matrix $Z_1$ with $\ell_1$ batches of size $b$, initial iterate $x_0$, initial states $s_0$ and $S_0$.
for $t = 0, \dots, T - 1$ do
  $i = \mathrm{mod}(t, \ell_1)$, $j = \mathrm{mod}(t, \ell_2)$
  if $i = 0$ then $s_t = 0$
  if $j = 0$ then $S_t = 0$
  if $\mathrm{mod}(t, m) = 0$ then
    $\nu_t, S_{t+1} := \nabla f_{Z_{2j}}(x_t, S_t)$; $s_{t+1} = s_t$
  else
    $g_p := \nabla f_{Z_{1i}}(x_{t-1}, s_{t-1})$; $g_c, s_{t+1} := \nabla f_{Z_{1i}}(x_t, s_t)$; $\nu_t := \nu_{t-1} + (g_c - g_p)$; $S_{t+1} = S_t$
  $x_{t+1} := x_t - \eta \nu_t$
Output: $x_T$

Before describing Algorithm 2, let us derive the full batch gradient of a generative language model.
We encode the vocabulary of our dataset of length $N$ so that $D \in \mathbb{N}^N$ is a sequence of integers corresponding to one-hot encodings of each token. We model the transition $p(D_{i+1}|D_i, s_i)$ using an RNN model $M$ as $M(D_i, s_i) = (\hat{D}_{i+1}, s_{i+1})$, where $s_i$ is the sequential model state at step $i$. The model $M$ can be thought of as a classifier with cross entropy loss $L$ and additional dependence on $s_i$. The batch gradient objective can therefore be formulated by considering the full sequence of predictions from $i = 0$ to $i = N - 1$, generating for each step $i$ the output $\hat{D}_{i+1}, s_{i+1}$. Each token is encoded as an integer (from 0 to the size of the vocabulary), so the empirical risk is given by

$$
J(D; x) = \frac{1}{N} \sum_{i=0}^{N-1} L(\hat{D}_i, D_i).
$$

Thus, the full batch gradient is simply the gradient of $J$ with respect to $x$.

In Algorithm 2, $D$ is split into $b$ contiguous sequences of length $\ell_1 = N / b$ and stored in a matrix $Z_1 \in \mathbb{N}^{b \times \ell_1}$. Taking a pass over $Z_1$ requires maintaining a state $s_i \in \mathbb{N}^b$ for each entry in a batch, which is reset before every pass over $Z_1$. To deal with maintaining state for batches at different time scales, we define a different matrix $Z_2 \in \mathbb{N}^{B \times \ell_2}$ which maintains a different set of states $S_i \in \mathbb{N}^B$ for each entry of batch size $B$. We denote by $g, s_{t+1} = \nabla f_{Z_{1j}}(x, s_t)$ the gradient of our model with respect to $x$, where $\nabla f_{Z_{1j}}$ denotes the gradient function corresponding to the $j$th batch of matrix $Z_1$. The function $f_{Z_{1j}}$ simply computes the loss of the $j$th batch of matrix $Z_1$.
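The construction of the batch matrix $Z_1$ described above (and, with $B$ rows, of $Z_2$) can be sketched as follows. This is a minimal illustration under our own naming; the function name and the convention of dropping the remainder tokens are our assumptions, not details from the paper.

```python
import numpy as np

def make_batch_matrix(D, num_rows):
    """Split an integer token stream D of length N into `num_rows` contiguous
    subsequences, one per row, as in the construction of Z_1.

    Column t of the result is the t-th batch fed to the model; one hidden
    state per row is carried from column to column and reset at the start of
    each full pass over the matrix.
    """
    n = (len(D) // num_rows) * num_rows   # drop the remainder tokens
    return np.asarray(D[:n]).reshape(num_rows, -1)
```

For example, a stream of 12 tokens with $b = 3$ yields a $3 \times 4$ matrix whose column `Z[:, t]` is the $t$-th small batch, so consecutive batches continue each row's subsequence and the per-row states line up across columns.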
\ No newline at end of file diff --git a/variancereductionwithsparsegradients/images.zip b/variancereductionwithsparsegradients/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4a1ff8d94ef871870f4277bc6c3a9692918837c9 --- /dev/null +++ b/variancereductionwithsparsegradients/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c176c9c451f73d891052438a97e03cf0f0995108fbc2e2c5d05253b4624b7883 +size 865344 diff --git a/variancereductionwithsparsegradients/layout.json b/variancereductionwithsparsegradients/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2e1b337685fecc0c66e6c6b845e35a7649ad9487 --- /dev/null +++ b/variancereductionwithsparsegradients/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4818d7e1cd7045c051be814f2d1d0cc81731f031b9d5aae01e51e5d4ea252db +size 816896 diff --git a/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_content_list.json b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c86055b6c8316f115cbbca09d9081f7377a9ff54 --- /dev/null +++ b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20ff48ba73eedd5cd75bc7ca757622a3c7b75987a82921a8cafc648ce9a21eb4 +size 84206 diff --git a/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_model.json b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..66e61567f556e74555995ff7c9455cbd3a06e69a --- /dev/null +++ 
b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7f8d6fdf4cff0545b4b5539e0f78d8d4fdcaf8fd3c3c173b8f4a133248152e76 +size 96739 diff --git a/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_origin.pdf b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1aceaa848d2e3002274922f4ee45d0613871b1c3 --- /dev/null +++ b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/21cc9cda-45b5-4445-b470-d86b409efeb7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:117d2df2fb82118c77e3955bb344c57bce42cfccf707a1fc31e82cf7506595dd +size 2763810 diff --git a/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/full.md b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6a3ef2966372c651845cb7f8513184b44e33583f --- /dev/null +++ b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/full.md @@ -0,0 +1,364 @@ +# VARIATIONAL AUTOENCODERS FOR HIGHLY MULTIVARIATE SPATIAL POINT PROCESSES INTENSITIES + +Baichuan Yuan $^{1}$ , Xiaowei Wang $^{2}$ , Jianxin Ma $^{2}$ , Chang Zhou $^{2}$ , Andrea L. Bertozzi $^{1}$ , Hongxia Yang $^{2}$ + +$^{1}$ Department of Mathematics, University of California, Los Angeles + +$^{2}$ DAMO Academy, Alibaba Group + +ybcmath@gmail.com, daemon.wxw@alibaba-inc.com, majx13fromthu@gmail.com, ericzhou.zc@alibaba-inc.com, bertozzi@math.ucla.edu, yang.yhx@alibaba-inc.com + +# ABSTRACT + +Multivariate spatial point process models can describe heterotopic data over space. 
However, highly multivariate intensities are computationally challenging due to the curse of dimensionality. To bridge this gap, we introduce a declustering-based hidden variable model that leads to an efficient inference procedure via a variational autoencoder (VAE). We also prove that this model is a generalization of the VAE-based model for collaborative filtering. This leads to an interesting application of spatial point process models to recommender systems. Experimental results show the method's utility on both synthetic data and real-world data sets.

# 1 INTRODUCTION

Multivariate point processes are widely used to model events of multiple types occurring in an $n$-dimensional continuum. This paper focuses on multivariate spatial point processes (SPP), which can uncover hidden connections between subprocesses based on the correlations of their spatial point patterns. Often we encounter missing data problems, where some subprocesses are not fully observable. The underlying connections could further contribute to the prediction of these subprocesses over the unobserved areas. Moreno-Muñoz et al. (2018) have shown the effectiveness of this joint model for Gaussian processes with heterotopic data. Multi-output models in Lian et al. (2015) such as coregionalization and cokriging can outperform independent predictions. However, there is limited literature on the statistical methodology of highly multivariate spatial point processes, according to the very recent paper (Choiruddin et al., 2019).
The complexity of the models and the curse of dimensionality hinder this approach for highly multivariate data, such as friendship networks and recommender systems with millions of users. In these problems, we only partially observe the events (e.g., user interactions with items or locations) for each subprocess (user). It is necessary to jointly infer the preference of each user based on their hidden correlations. For example, a common approach in recommender systems, collaborative filtering (He et al., 2017), predicts the item interests of each user with the help of the collection of item preferences for a large number of users.

To address these problems, we propose a multivariate spatial point process model with a nonparametric intensity. We extend the well-known kernel estimator in Diggle (1985) to the multivariate case. This generalization is achieved through the introduction of hidden variables inspired by stochastic declustering (Zhuang et al., 2002). The latent variables naturally lead to a variational Bayesian inference approach, which is different from the frequentist point estimation in the kernel estimator. To reduce the complexity in the highly multivariate case, we consider an alternative set of hidden variables that are designed to work well as latent variables for a variational autoencoder (VAE) (Kingma & Welling, 2014). This amortized inference (Gershman & Goodman, 2014) approach leads to fast inference once the model is fully trained. Further, we show the equivalence of these two different settings of hidden variables using the properties of the spatial point process. This efficient approach makes it possible to apply multivariate spatial point processes in many areas, including location-based social networks and recommender systems with many users. Moreover, the nonparametric method for analyzing spatial point data patterns is not tied to specific parametric families of models and only requires the intensity to be well-defined.
Our approach is not a direct replacement for current inference methods on few-variate spatial point processes (Jalilian et al., 2015). In contrast to the classical methodology, the VAE requires a large amount of training data. The highly multivariate data that are widely available in social networks and recommender systems can therefore be ideal applications for our approach. In fact, it can be shown that our model is a generalization of a state-of-the-art VAE-based collaborative filtering model (Liang et al., 2018). Our model nonparametrically fits the underlying intensity function. Compared with the multinomial distribution used in Liang et al. (2018), this leads to not only a smoother intensity over space but also better predictions in terms of ranking-based losses. Compared to a univariate model, such as the trans-Gaussian Cox processes (Williams & Rasmussen, 2006), our multivariate model enhances the predictive ability on missing or unobserved areas, which is consistent with the results of heterogeneous multi-output Gaussian processes (Moreno-Muñoz et al., 2018).

The contributions of this paper are three-fold. We first build a novel multivariate spatial point process model and find a direct connection with VAE-based collaborative filtering through detailed theoretical analysis. Secondly, this connection introduces amortized inference for efficient multivariate point process estimation. Finally, point processes generalize the discrete distribution used in Liang et al. (2018) and lead to better modeling of spatial heterogeneity. We validate these benefits through experiments on multiple multivariate data sets, showing improvement over classic SPP methods and potential for collaborative filtering applications.
+ +# 2 PRELIMINARIES + +Spatial point process A point process (PP) is a random counting measure $N(x)$ on a complete separable metric space $R$ (here we always assume that $R \subset \mathbb{R}^n$ ) that takes values on $\{0,1,2,\ldots\} \cup \{\infty\}$ . While the major theory of point processes centers around the temporal dynamics, spatial point process models (Diggle et al., 1983) are established in forestry and seismology, focusing on the stationary and isotropic case. We focus on the (first-order) intensity function $\lambda(x)$ , which is the expected rate of the accumulation of points around a particular spatial location $x$ . We write + +$$ +\lambda (x) = \lim _ {| \Delta x | \downarrow 0} \frac {\mathbb {E} [ N (\Delta x) ]}{| \Delta x |}, \tag {1} +$$ + +where $\Delta x$ is a small ball in the metric space, e.g. the Euclidean space $\mathbb{R}^n$ , with the centre $x$ and measure $|\Delta x|$ . The second-order intensity function is naturally defined as + +$$ +\lambda_ {(2)} (x, y) = \lim _ {| \Delta x |, | \Delta y | \downarrow 0} \frac {\mathbb {E} [ N (\Delta x) N (\Delta y) ]}{| \Delta x | | \Delta y |}, \tag {2} +$$ + +measuring the chance of points co-occurring in both $\Delta x$ and $\Delta y$ . Normalizing this leads to the pair-correlation function $g(x,y) = \lambda_{(2)}(x,y) / \lambda (x)\lambda (y)$ . $g(x,y) > 1$ indicates that points are more likely to form clusters than the simple Poisson process where $g(x,y) = 1$ . + +Common models in SPPs include the Poisson process with a non-stationary rate $\lambda(x)$ , and the Cox process with a nonnegative-valued intensity process $\Lambda(x)$ , which is also a stochastic process. Cox processes conditional on a realization of the intensity process $\Lambda(x) = \lambda(x)$ are Poisson processes with intensity $\lambda(x)$ . To model the aggregated points patterns, Poisson cluster (Neyman-Scott) processes generate parent events from a Poisson process. 
Then each parent independently generates a random number of offspring. The relative positions of these offspring to the parent are distributed according to some p.d.f. $K_{\sigma}(x)$ in space (Diggle et al., 1983). Many point process models, including most Cox processes, are in fact Poisson cluster processes. The duality between Cox processes and cluster processes is widely used to construct Cox process models. For example, the kernel-based intensity process $\Lambda(x) = \sum_{i=1}^{\infty} K_{\sigma}(x - x_i)$, with $x_i$ from a Poisson process, is essentially a Poisson cluster process. The number of offspring follows a Poisson distribution with $\lambda = 1$, and the relative position distribution is $K_{\sigma}(x)$. Repulsive SPPs, on the other hand, model situations where nearby points of the process tend to repel each other. Higher-order intensities are often considered in this case, such as determinantal PPs.

Alternatively, if we are more interested in the realization intensity $\lambda(x)$ than the mechanical interpretation, the trans-Gaussian Cox process provides a tractable way to construct the Cox process using a nonlinear transformation on a Gaussian process $S(x)$. Popular choices for $\Lambda(x)$ include the log-Gaussian Cox process (LGCP) with $\exp(S(x))$ and the permanental process with $S(x)^2$. Recent work on Cox processes has focused extensively on models modulated via a Gaussian random field, due to its capability of modeling the intensity and pair-correlations between subprocesses. In the next section, we develop a more explicit approach to model interactions, for fast inference and for generalization to new subprocesses.

Inference for Point Processes Inference methods for point processes are mainly based on the order statistics or likelihood function. The order statistics are often estimated nonparametrically, such as the kernel estimator (Diggle, 1985) of the intensity function.
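The kernel estimator just mentioned fits in a few lines; a minimal one-dimensional sketch (the Gaussian kernel, the bandwidth $\sigma$, and the toy event locations are illustrative choices, and edge correction is ignored):

```python
import numpy as np

# Kernel estimate of the intensity: lambda(x) = sum_i K_sigma(x - x_i),
# here with a Gaussian kernel (edge correction ignored).
def intensity(x, events, sigma=0.1):
    x = np.atleast_1d(x)[:, None]          # query points as a column
    return np.exp(-(x - events) ** 2 / (2 * sigma ** 2)).sum(axis=1)

events = np.array([0.2, 0.25, 0.8])
lam = intensity([0.225, 0.5], events)
print(lam)  # large near the cluster around 0.22, near zero at 0.5
```

The estimate is simply a sum of kernel bumps centered at the observed events, which is why clustered events yield locally high intensity.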
For the likelihood-based inference, we assume that one observes events $X = \{x_{i}\}_{i=1}^{N}$ of the underlying spatial point process over the area $R$. The log-likelihood for the inhomogeneous Poisson process over space $R$ is

$$
\log p (X | \Theta) = \sum_{i = 1}^{N} \log \left(\lambda \left(x_{i}\right)\right) - \int_{R} \lambda (x)\, d x. \tag {3}
$$

The integration term is the log void probability and can be viewed as a normalization term for the likelihood. For Cox processes, the likelihood is the expectation over the Poisson likelihood. It is difficult to directly integrate over the distribution of $\Lambda$. Monte Carlo methods (Adams et al., 2009) are commonly used to approximate the expectation. To improve the scalability of the expensive sampling, many methods such as variational inference (Lloyd et al., 2015), Laplace approximation (Williams & Rasmussen, 2006) and reproducing kernel Hilbert spaces (Flaxman et al., 2017) have been proposed.

Variational Autoencoder As a stochastic variational inference algorithm, the VAE (Kingma & Welling, 2014) maximizes the evidence lower bound (ELBO) of the log-likelihood function

$$
\log p (X | \Theta) \geq \mathbb{E}_{q_{\phi}(z | X)} \left[ \log p_{\theta}(X | z) \right] - KL\left(q_{\phi}(z | X) \,\|\, p(z)\right). \tag {4}
$$

The hidden variables $z$ have a simple multivariate Gaussian prior $p(z) = \mathcal{N}(z;0,I)$. The true posterior, which is often intractable as in the Cox process, is approximated via a multivariate Gaussian $q_{\phi}(z|X) = \mathcal{N}(z;\mu_{\phi}(X),\sigma_{\phi}(X))$. The KL divergence term in the ELBO can be calculated analytically. The VAE uses a multilayer perceptron (MLP) to learn the mean and variance of the approximated posterior directly from the data. The most closely related work here is a recent VAE-based model for collaborative filtering (VAE-CF) (Liang et al., 2018).
They assume that each user follows a multinomial distribution over items with the log-likelihood $\log p_{\theta}(X_u|z_u) = \sum_{i=1}^{N} X_{iu} \log \pi_i(z_u)$ for each user $u$. Here $X_u$ is the observed data of the user clicking items, $\pi_i(z_u)$ is the probability that user $u$ clicks item $i$, and $X_{iu}$ is an indicator function on whether user $u$ clicked item $i$.

# 3 MULTIVARIATE SPATIAL POINT PROCESSES

Here we consider a multivariate case of the SPP, with $U$ interdependent univariate point processes on the sample space $R$. The intensity function is measured in a similar way as in the univariate case via $\lambda_u(x) = \lim_{|\Delta x| \downarrow 0} (\mathbb{E}[N_u(\Delta x)] / |\Delta x|)$, where $N_u(\Delta x)$ is the number of events within a set $\Delta x$ for the subprocess $u$.

# 3.1 A NONPARAMETRIC MODEL

The observed data of a multivariate SPP include the locations of $N_{u}$ events $X_{u} = \{x_{i}^{u}\}_{i = 1}^{N_{u}}$ associated with each subprocess $u$. For each $u$, the observed event locations follow a Poisson process with spatial intensity $\lambda_{u}(x)$, which is a realization of the random intensity $\Lambda_{u}(x)$. Using the nonparametric kernel estimator, the intensity of the subprocess $u$ is estimated by

$$
\lambda_{u} (x) = \sum_{i = 1}^{N_{u}} K_{\sigma} \left(x - x_{i}^{u}\right). \tag {5}
$$

Here $K_{\sigma}(x)$ is a kernel function, and we usually adopt the radial basis function (RBF) kernel, where $K_{\sigma}(x) = \exp (-\| x\|^{2} / 2\sigma^{2})$. We ignore the end-correction (Diggle, 1985) for now.

In real-world applications, however, one often encounters the missing data problem, where we cannot directly observe points in certain areas for some subprocesses. Instead, we seek to infer the hidden data from other fully observed subprocesses. In our model, we assume that each subprocess reflects stochastic and heterogeneous patterns.
For example, users in an e-commerce platform usually prefer different categories. As in Poisson cluster processes, this naturally leads to events clustering in specific areas. Another real-world example is the aggregation of check-in activities around the home and workplaces for social network users (Cho et al., 2011). Note that $N = \sum_{u=1}^{U} N_u$ is the total number of events. We introduce hidden variables $Y_i^u$ for each event $x_i$, $i = 1, \dots, N$, and subprocess $u = 1, \dots, U$, where $Y_i^u = 1$ if the subprocess $u$ includes event $x_i$ and $Y_i^u = 0$ otherwise. $\mathbb{E}Y_i^u = p_i^u$ is the probability that event $x_i$ is from the subprocess $u$. Then the intensity process for our multivariate SPP model is

$$
\Lambda_{u} (x) = \sum_{i = 1}^{N} Y_{i}^{u} K_{\sigma} \left(x - x_{i}\right), \tag {6}
$$

for each subprocess $u$. This model generalizes the kernel density-based intensity to the missing data case. Similarly to the original method, it can be applied to estimate the intensity for both cluster processes such as Cox processes and repulsive ones like determinantal PPs. In order to incorporate prior information and model the data uncertainty, we adopt a variational inference approach for the hidden variables.

# 3.2 VARIATIONAL INFERENCE

A major drawback of current inference methods for SPPs is the introduction of a large number of parameters in the highly multivariate case. For our model, we use an amortized inference approach, the VAE (Kingma & Welling, 2014), to avoid the computational complexity of directly estimating the posterior for each subprocess $u$.

The generative process of our model can be described as follows: each subprocess $u$ has a $K$-dimensional hidden variable $z_{u}$ with a multivariate normal prior $z_{u} \sim \mathcal{N}(0, I_{K})$.
Here we use a low-dimensional representation, and then a nonlinear mapping $f_{\theta}(z_{u}) = \{p_{i}^{u}\}_{i=1}^{N}$ transforms $z_{u}$ so that it has the same dimension as the number of events $N$. Finally, the spatial points of the subprocess $u$ are sampled according to the intensity $\lambda_{u}(x) = \sum_{i=1}^{N} p_{i}^{u} K_{\sigma}(x - x_{i})$. We approximate the intractable posterior distribution of $z$, $q(z|X)$, with a multivariate Gaussian $\mathcal{N}(\mu_{\phi}(X), \sigma_{\phi}(X))$. As in Liang et al. (2018), we use MLPs to learn the nonlinear function $f_{\theta}(z)$ with parameters $\theta$ and the mean and variance with parameters $\phi$. The variational bound of our multivariate Cox process model is then

$$
\log p \left(X_{u} \mid \Theta\right) \geq \mathbb{E}_{q_{\phi}\left(z_{u} \mid X_{u}\right)} \left[ \log p_{\theta}\left(X_{u} \mid z_{u}\right) \right] - KL\left(q_{\phi}\left(z_{u} \mid X_{u}\right) \,\|\, p\left(z_{u}\right)\right) = \mathbf{L}. \tag {7}
$$

The first term in $\mathbf{L}$ is essentially a complete likelihood function. For each subprocess $u$, it has the following (expected) intensity function

$$
\mathbb{E}_{q_{\phi}\left(z_{u} \mid X_{u}\right)} \Lambda_{u} (x) = \sum_{i = 1}^{N_{u}} p_{i}^{u} K_{\sigma} \left(x - x_{i}^{u}\right) \tag {8}
$$

and a Poisson process log-likelihood function from (8) and (3)

$$
\mathbb{E}_{q_{\phi}\left(z_{u} \mid X_{u}\right)} \log p_{\theta}\left(X_{u} \mid z_{u}\right) = \sum_{i = 1}^{N_{u}} \log \left(\sum_{j = 1}^{N_{u}} p_{j}^{u} K_{\sigma}\left(x_{i}^{u} - x_{j}^{u}\right)\right) - \int_{R} \sum_{j = 1}^{N_{u}} p_{j}^{u} K_{\sigma}\left(x - x_{j}^{u}\right) d x. \tag {9}
$$

For applications without explicit spatial information, we embed each event into a latent space as a vector. First, we obtain a similarity graph for all events.
Then the embedding $x_{i}$ of the $i$th event in this graph is obtained via graph neural networks (GNNs) such as GraphSAGE (Hamilton et al., 2017). See Figure 1 for an illustration of our framework. Both $x_{i}$ and $z_{u}$ are learned jointly, combining the information from the item embedding and the user hidden variable.

![](images/5b6f66ae0470de427a5b054397bb0699c3cbeb93f93b287903028b9f146a7edc.jpg)
Figure 1: Visual illustration of our spatial point process model via VAE during the training.

# 3.3 ALTERNATIVE MODEL

Recall that the hidden variables $Y_{i}^{u}$ describe whether the event $x_{i}$ is from the subprocess $u$. By definition, we have $\sum_{u = 1}^{U}Y_{i}^{u} = 1$ and $\sum_{u = 1}^{U}p_{i}^{u} = 1$ for any $i$. During the training process, it is difficult to normalize the probability $p_i^u$ over all subprocesses (it would require the full data). Moreover, this constraint leads to $g_{uv} < 1$ for $u\neq v$, implying mutual-inhibition behaviors between subprocesses. Instead, we consider an alternative model where $p_i^u$ is the probability that the subprocess $u$ generates an event $x_{i}$, i.e., $\sum_{i = 1}^{N_u}p_i^u = 1$ for each $u$. During the training, the total number of events $N_{u}$ is not viewed as a hidden variable for each subprocess. Thus the alternative model essentially normalizes $\lambda_{u}$ by a constant. With the reparameterization trick in Kingma & Welling (2014), we sample the log-likelihood function using all events within a mini-batch of users and compute the gradient. This approach incorporates all information about the user, so that negative sampling is not needed. See Algorithm 1 for our training procedure. For the model prediction, the normalized intensity of a new subprocess can be efficiently calculated in $O(N)$ using the approximated posterior $q_{\phi}(z|X_{new})$ and nonlinear function $f_{\theta}(z)$, with parameters $\theta, \phi$ inferred from data.
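Two ingredients of the training procedure just referenced (Algorithm 1) can be sketched directly: the reparameterization trick and the per-subprocess normalization $\sum_i p_i^u = 1$ of the alternative model. A minimal sketch, where a random linear map stands in for the MLP $f_\theta$ and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_sigma):
    # z_u = mu + sigma * eps keeps sampling differentiable w.r.t. (mu, sigma)
    return mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) used in the ELBO
    return 0.5 * np.sum(np.exp(2 * log_sigma) + mu ** 2 - 1.0 - 2 * log_sigma)

mu, log_sigma = np.zeros(8), np.zeros(8)   # stand-in encoder outputs
z = reparameterize(mu, log_sigma)          # z_u ~ N(mu, sigma^2)
W = rng.standard_normal((100, 8))          # stand-in for the MLP f_theta
logits = W @ z
p = np.exp(logits - logits.max())
p /= p.sum()                               # weights p_i^u over N = 100 events
print(round(p.sum(), 6), kl_to_standard_normal(mu, log_sigma))
```

The softmax step is one simple way to enforce the constraint $\sum_i p_i^u = 1$; the KL term is the analytic part of the ELBO, so only the likelihood term needs sampling.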
We can further reduce the computational challenge (Liang et al., 2018) for large $N$ due to $f_{\theta}(z)$ by discretizing the space.

Now we show the equivalence of our multivariate model and the alternative one. There are two probabilities to consider. The first one is the conditional probability of observed events $X_{u}$ in the subprocess $u$ with an intensity function $\lambda_{u}(x)$, given that there are $N_{u}$ events within the metric space $R$. The second one is the probability of a sample $X_{u}$ of size $N_{u}$ from the normalized density $h_{u}(x) = \lambda_{u}(x) / \int_{R} \lambda_{u}(s)\, ds$. For general SPP data, we have

Theorem 1. A spatial point process on a measurable set $R \subset \mathbb{R}^n$ with an intensity function $\lambda_u(x)$ is equivalent to $N_u$ i.i.d. samples within $R$ with p.d.f. $h_u(x) = \lambda_u(x) / \int_R \lambda_u(s)\, ds$, given that $N_u = \int_R \lambda_u(s)\, ds$, the number of points within $R$ for the point process model, is known.

Proof. See Section B in the Appendix.

According to this theorem (see supplementary material), we can replace the log-likelihood function (9) in the ELBO with

$$
\mathbb{E}_{q_{\phi}\left(z_{u} \mid X_{u}\right)} \log p_{\theta}\left(X_{u} \mid z_{u}\right) = \sum_{i = 1}^{N_{u}} \log \left(h_{u}\left(x_{i}^{u}\right)\right) + C. \tag {10}
$$

Here $C$ is related to the log-likelihood on the number of events $N_{u}$, which is a constant because $\int_{R} \lambda_{u}(s)\, ds = N_{u}$ is observed. One drawback of this approach is that, for the prediction of actual missing data, we cannot infer the number of missing points. Instead, our VAE-based model generates a normalized intensity predicting the possible locations for the missing events. We use the alternative definition of $p_{i}^{u}$ from now on.
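Numerically, the normalized density $h_u$ of Theorem 1 is just a kernel mixture renormalized over the domain. A small grid-based sketch (the weights $p_i^u$, event locations, and bandwidth are all illustrative choices):

```python
import numpy as np

# h_u(x) = lambda_u(x) / int_R lambda_u(s) ds, approximated on a 1-D grid
# with a Gaussian kernel; p holds illustrative weights p_i^u.
def normalized_intensity(grid, events, p, sigma=0.1):
    lam = (p * np.exp(-(grid[:, None] - events) ** 2 / (2 * sigma ** 2))).sum(axis=1)
    dx = grid[1] - grid[0]
    return lam / (lam.sum() * dx)      # Riemann-sum normalization over R

grid = np.linspace(0.0, 1.0, 1001)
h = normalized_intensity(grid, np.array([0.3, 0.7]), np.array([0.5, 0.5]))
dx = grid[1] - grid[0]
print(round(h.sum() * dx, 6))          # 1.0: h integrates to one over the grid
print(grid[np.argmax(h)])              # the mode sits at an event location
```

This is the quantity that replaces the unnormalized Poisson likelihood in (10): only the per-event values $h_u(x_i^u)$ enter the objective.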
This result shows that VAE-CF is a special case of this multivariate SPP model over a discrete space $X$ of events. In fact, VAE-CF is the alternative model with a delta function as the kernel $(h_u(x_i) = \lambda_u(x_i) / \int_X \lambda_u(s)\, ds = p_i^u)$, which is equivalent to the SPP model according to the theorem. To better model the spatial heterogeneity of events, one can replace the delta function with other kernels or use more advanced SPP intensities. We simply use an RBF kernel here, resulting in $h_u(x) = \sum_{i=1}^{N} p_i^u \exp(-\|x - x_i\|^2 / 2\sigma^2)$.

Algorithm 1: Training VAE-SPP with stochastic gradient descent.
Input: Training subprocesses $u\in \mathbf{U}_{\mathbf{T}}$ with their point locations $X_{u}$
Result: Parameters $\theta$ and $\phi$
Initialize $\theta$ and $\phi$ randomly;
while not converged do
    Sample a batch of subprocesses $\mathbf{U}_{\mathbf{b}}$ from $\mathbf{U}_{\mathbf{T}}$ and their points $X_{b} = \bigcup_{u\in \mathbf{U}_{\mathbf{b}}}X_{u}$;
    forall $u\in \mathbf{U}_{\mathbf{b}}$ do
        Sample $z_{u}\sim \mathcal{N}(\mu_{\phi}(X_{u}),\sigma_{\phi}(X_{u}))$ with the reparameterization trick;
        Compute $f_{\theta}(z_u) = \{p_i^u\}_{x_i\in X_b}$;
        forall $x\in X_u$ do
            Compute the sampled normalized intensity $h_u(x)\approx \sum_{x_i\in X_b}p_i^u K_\sigma (x - x_i)$;
        end
        Compute noisy gradients of the ELBO $\mathbf{L}$ w.r.t. $\theta$ and $\phi$;
    end
    Average noisy gradients over the batch;
    Update $\theta$ and $\phi$ with the Adam optimizer (Kingma & Ba, 2015);
end

One benefit of this alternative model is its resulting consistency. The nonparametric kernel estimation for the point process intensity is unbiased.
To see this, for any measurable set $R$, we take the expectation of the estimated intensity $\lambda(x)$ over the Poisson point process distribution

$$
\mathbb{E} \int_{R} \lambda (x)\, d x = \int_{R} \mathbb{E} \sum_{i = 1}^{N} K_{\sigma} (x - x_{i})\, d x = \int_{R} \int_{R} K_{\sigma} (x - y) \rho (y)\, d y\, d x = \int_{R} \rho (y)\, d y, \tag {11}
$$

where $\rho(y)$ is the true intensity function. Then $\mathbb{E}\lambda(x) = \rho(x)$ under mild conditions, e.g., a spatially continuous assumption on $\rho$. But it is inconsistent due to the non-vanishing variance without normalization. For our alternative model, the normalized intensity function $h_u(x)$ is still unbiased. According to the standard theory of multivariate kernel density estimation (KDE), the consistency of $h_u(x)$ is also guaranteed. Another benefit of using this alternative form can be seen from the cross pair-correlation function. For the alternative model, we remove the undesirable restriction of negative correlations between all users ($g_{uv} < 1$ for $u \neq v$) and can incorporate more diverse relationships between users. To see this, we first consider the auto and cross pair-correlation function $g_{uv} = \mathbb{E}\Lambda_u\Lambda_v / \mathbb{E}\Lambda_u\mathbb{E}\Lambda_v$. For our original model, it is straightforward to prove that $g_{uu} > 1$ and $g_{uv} < 1$, $u \neq v$ (see supplementary material). The auto pair-correlation functions show that our model is more aggregated than the simple Poisson process.

# 4 EXPERIMENTS

We compare our model (with the RBF kernel, VAE-SPP) with both VAE-CF (Liang et al., 2018) and univariate spatial point process models using a standard KDE (Diggle, 1985) or TGCP (Williams & Rasmussen, 2006) as intensity functions. We adopt the experimental setting of VAE-CF. We split the data into training, validation and testing sets. For the multivariate model, the training data is used to learn the parameters $\theta, \phi$.
For the KDE and TGCP models, we omit the training data, because different subprocesses are assumed to be independent and also because of the computational complexity of fitting a highly multivariate TGCP. We assume that only $80\%$ of the events in the validation and test sets are observed. The remaining $20\%$ are viewed as missing data to be inferred by the different models. Hyperparameters are selected on the validation data as in Liang et al. (2018). Finally, we compare the prediction performance of the different models on the missing data given the partially observed events. We use standard ranking losses such as NDCG@K and Recall@K, defined in Appendix D.1.

# 4.1 MULTIVARIATE SPP ON SPATIAL DATA

Synthetic data sets We simulate two different data sets using multiexponential and multisine models. For the multiexponential data set, we simulate 5,000 Poisson processes with $\lambda_{k}(x) = a_{k}e^{-b_{k}x}, k = 1,\dots,5000, x \in [0,30]$ as training data. Here $a_{k}$ and $b_{k}$ are uniformly sampled from [5, 10] and [0.1, 0.2], respectively. 500 validation and 500 test subprocesses are generated in the same way, with $a_{k}$ and $b_{k}$ sampled from the same ranges. The multisine data set is generated by replacing the intensity function with $\lambda_{k}(x) = \max(a_{k}\sin(b_{k}x), 5)$ and sampling $a_{k}$ and $b_{k}$ uniformly from [5, 10] and [1, 2], respectively. Each realization of the spatial point process is discretized using a uniform grid over $x$ with grid spacing 0.01.

Table 1: Testing results on the simulation data sets. Both the mean and variance are percentages (same below).
| Name | Multiexp NDCG@100 | Multiexp Recall@50 | Multiexp Recall@100 | Multisine NDCG@100 | Multisine Recall@50 | Multisine Recall@100 |
| --- | --- | --- | --- | --- | --- | --- |
| VAE-CF | 6.78 (0.28) | 7.25 (0.40) | 14.5 (0.52) | 3.30 (0.15) | 2.49 (0.13) | 4.64 (0.18) |
| VAE-SPP | 7.11 (0.31) | 7.34 (0.40) | 14.9 (0.54) | 3.53 (0.15) | 2.58 (0.13) | 4.90 (0.18) |
| KDE | 5.27 (0.15) | 5.85 (0.12) | 11.8 (0.17) | 3.23 (0.15) | 2.29 (0.12) | 4.55 (0.27) |
| TGCP | 3.11 (0.14) | 3.32 (0.11) | 6.44 (0.11) | 3.77 (0.14) | 1.88 (0.11) | 3.92 (0.17) |
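The Recall@K entries in these tables follow the usual ranking-metric form; a sketch of the common definition used in Liang et al. (2018) (the paper's exact definitions are in Appendix D.1, and the toy inputs are illustrative):

```python
# Recall@K: fraction of held-out relevant items recovered in the top K,
# normalized by min(K, number of relevant items); toy inputs below.
def recall_at_k(ranked_items, relevant, k):
    hits = len(set(ranked_items[:k]) & set(relevant))
    return hits / min(k, len(relevant))

print(recall_at_k([3, 1, 4, 1, 5], relevant=[4, 9], k=3))  # 0.5
```

The min in the denominator keeps the metric in [0, 1] even when a user has fewer than K held-out events.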
+ +Location-based Social Network. We consider the Gowalla data set (Cho et al., 2011) in New York City (NYC) and California (CA). We use a bounding box of -124.4096, 32.5343, -114.1308, 42.0095 for CA and -74.0479, 40.6829, -73.9067, 40.8820 for NYC (both from flickr $^{1}$ ). Each user with at least 20 events (check-ins) is viewed as a subprocess. There are 673,183 events and 6,728 users for Gowalla-CA. We randomly select 500 users as the validation set and 500 users as the testing set. We use the remaining users for training. For Gowalla NYC, there are 86,703 events from 1,171 users. We set the size of both validation and testing sets to 100. For the spatial tessellation, we use uniform grids ( $32 \times 32$ for NYC and $64 \times 64$ for CA). Both our model and VAE-CF can work without grids. We further compare the performance of our model with VAE-CF by viewing each location as an item. + +Table 2: Testing results on the Gowalla data sets with uniform grids. + +
| Name | CA NDCG@100 | CA Recall@50 | CA Recall@100 | NYC NDCG@100 | NYC Recall@50 | NYC Recall@100 |
| --- | --- | --- | --- | --- | --- | --- |
| VAE-CF | 41.8 (1.5) | 64.8 (2.0) | 70.0 (2.0) | 43.6 (2.3) | 73.9 (2.9) | 86.2 (2.2) |
| VAE-SPP | 42.3 (1.5) | 65.2 (2.0) | 70.2 (1.9) | 44.8 (2.4) | 74.5 (2.9) | 86.2 (2.2) |
| KDE | 34.5 (1.5) | 59.2 (2.0) | 64.0 (2.0) | 41.2 (1.5) | 69.9 (2.0) | 83.6 (2.0) |
| TGCP | 31.8 (1.3) | 56.5 (2.0) | 60.9 (2.0) | 37.3 (2.3) | 59.9 (3.3) | 75.9 (2.8) |
+ +Table 3: Testing results on the Gowalla data sets without discretization. + +
| Name | CA NDCG@100 | CA Recall@20 | CA Recall@100 | NYC NDCG@100 | NYC Recall@20 | NYC Recall@100 |
| --- | --- | --- | --- | --- | --- | --- |
| VAE-CF | 21.3 (0.77) | 16.6 (0.74) | 32.8 (0.97) | 16.0 (1.7) | 13.2 (1.7) | 26.3 (2.4) |
| VAE-SPP | 21.6 (0.77) | 17.0 (0.80) | 33.5 (0.76) | 16.1 (1.7) | 13.7 (1.8) | 27.1 (2.5) |
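The NDCG@K columns can be sketched similarly, with binary relevance (again a common definition; the paper's exact one is in Appendix D.1, and the toy inputs are illustrative):

```python
import numpy as np

# NDCG@K with binary relevance: discounted gain of the predicted ranking,
# normalized by the ideal ranking's gain; toy inputs below.
def ndcg_at_k(ranked_items, relevant, k):
    rel = np.array([1.0 if it in set(relevant) else 0.0 for it in ranked_items[:k]])
    dcg = (rel / np.log2(np.arange(2, rel.size + 2))).sum()
    idcg = (1.0 / np.log2(np.arange(2, min(k, len(relevant)) + 2))).sum()
    return dcg / idcg

print(ndcg_at_k([4, 9, 3], relevant=[4, 9], k=3))  # 1.0: an ideal ranking
```

Unlike Recall@K, NDCG@K rewards placing the held-out events near the top of the ranking, not just inside the top K.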
+ +In Table 1, we summarize the performance of both multivariate and univariate models on the simulation data sets. It is clear that the multivariate models outperform the univariate ones. Moreover, testing on multivariate models takes less time because it only evaluates the posterior probability and intensity function. This illustrates the power of multivariate models using amortized inference. Within the multivariate models, our continuous model further improves upon the discrete VAE-CF. This is due to the fact that these simulation intensities are continuous over $R$ . For real-world applications, the results on the location-based social network prediction and recommendation with and without grids are presented in Table 2 and 3. We observe the same pattern in both NYC and CA. + +![](images/1e679b7f6bb91af80b901d086f2a5b126d202eb075a528fce60a7b1da30a7b6e.jpg) + +![](images/b52a43491c486432e3ecffbe41dce613a6a1b388a502c2d86e702f48776f3bf9.jpg) + +![](images/ef76f9f98609c709b8897ecdeced8a059fbb04b13afa8eccc6699dd73de4d6e8.jpg) + +![](images/1c7f447f077dba5ea462fd0b8d7fb5e334bb1d6b6dac4a97c139d59f7a66cb8c.jpg) +Figure 2: Estimated density functions for a Gowalla user in NYC (log scale). The first row from left to right: observed check-in locations (in red), held-out check-in locations (in blue, as missing data) and the estimated intensity from VAE-SPP. The second row from left to right: the estimated intensity (or density) from VAE-CF, KDE and TGCP. + +![](images/5eda894d7b0afd55acaf9b50e79e2b2dacad3fbbc75097be54e1ed877eb4d46f.jpg) + +![](images/bd4f8373821c84a61f2fc68f7c11d053b372226a5951c3981e2490497684eca0.jpg) + +We stop using univariate models from now on due to their inferior performances, especially for collaborative filtering applications. Moreover, our model improves discrete VAE-CF regardless of the choice of spatial grids. 
For visualization purposes, in Figure 2 we plot a user's check-in locations in Gowalla-NYC and the intensities estimated via different methods. Compared with VAE-CF, our model generates a continuous intensity. The univariate models overfit the training data and lead to inferior predictions of the missing data.

# 4.2 MULTIVARIATE SPP WITH A LATENT SPACE

The MovieLens data sets (ML-100K and ML-1M) contain movie (item) ratings by users, and we binarize the ratings with a threshold of 4. In the spatial point process setting, we view each user as a subprocess over the latent space of item embeddings. Here the item embeddings are generated via a GNN. This framework is a natural generalization of the multimodal distribution over items. The item-item graph is constructed from item-item similarities: we use the Jaccard distance to measure the similarity between items, which is then used as the sampling probability for the GNN. Currently, we only consider 1-hop connections. The GNN and VAE are trained jointly, which is more expensive than VAE-CF but leads to better performance than separate training (see Appendix D). For the movie recommendation tasks, we compare the discrete VAE-CF to our joint model with the GNN. The results in Table 4 again show the improvement of our model over the baseline.

Table 4: Testing results on the MovieLens data sets.
| Name | ML-100K | | | ML-1M | | |
| --- | --- | --- | --- | --- | --- | --- |
| | NDCG@100 | Recall@20 | Recall@100 | NDCG@100 | Recall@20 | Recall@100 |
| VAE-CF | 40.8 (2.8) | 32.3 (2.8) | 57.6 (3.3) | 41.6 (0.76) | 33.1 (0.81) | 56.8 (0.88) |
| VAE-SPP | 41.5 (2.9) | 31.3 (2.7) | 59.0 (3.5) | 42.3 (0.77) | 33.9 (0.82) | 57.6 (0.88) |
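The item-item graph used by the GNN in Section 4.2 is built from Jaccard similarities between the user sets of each pair of items. A minimal sketch of that similarity (toy data, hypothetical helper name; not the paper's code):

```python
def jaccard_similarity(users_a, users_b):
    """Jaccard similarity between the user sets of two items:
    |A intersect B| / |A union B|."""
    a, b = set(users_a), set(users_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Items rated by overlapping user sets are more similar, and such edges
# would be sampled with higher probability when building the GNN graph.
item1 = [1, 2, 3, 4]
item2 = [3, 4, 5]
item3 = [9]
s12 = jaccard_similarity(item1, item2)  # |{3,4}| / |{1,2,3,4,5}| = 0.4
s13 = jaccard_similarity(item1, item3)  # disjoint user sets -> 0.0
```

The Jaccard *distance* mentioned in the text is simply `1 - jaccard_similarity(...)`.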
# 5 CONCLUSION

In this paper, we introduce a novel spatial point process model for efficient inference in the highly multivariate case. Through amortized inference, our model makes it possible to investigate correlations among a myriad of point patterns based on a large number of training examples, and the theoretical analysis of the density and intensity of SPPs builds the connection between our model and VAE-CF. There are many promising directions for future work, including the extension to multivariate spatiotemporal PPs (Mohler et al., 2011; Yuan et al., 2019) and the use of features as covariates. There are multiple ways to estimate the mean rate $h_u(x)$ of a spatial point process over all events, including Gaussian mixture models, Gaussian processes, and flow-based models. For future work, we can investigate the connections between our model and other density-based estimators for point processes. Another interesting application is to handle real-world recommender systems by improving the joint training efficiency and comparing thoroughly with simpler algorithms as in Dacrema et al. (2019).

# ACKNOWLEDGMENTS

We would like to thank Frederic P. Schoenberg and George Mohler for their comments. Andrea L. Bertozzi and Baichuan Yuan thank the support of NIJ fellowship 2018-R2-CX-0013 and NSF DMS-1737770.

# REFERENCES

Ryan Prescott Adams, Iain Murray, and David JC MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 9-16. ACM, 2009.
Eunjoon Cho, Seth A Myers, and Jure Leskovec. Friendship and mobility: user movement in location-based social networks. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1082-1090. ACM, 2011.
Achmad Choiruddin, Francisco Cuevas-Pacheco, Jean-François Coeurjolly, and Rasmus Waagepetersen.
Regularized estimation for highly multivariate log Gaussian Cox processes. arXiv preprint arXiv:1905.01455, 2019.
Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In Proceedings of the 13th ACM Conference on Recommender Systems, pp. 101-109. ACM, 2019.
Peter Diggle. A kernel method for smoothing point process data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 34(2):138-147, 1985.
Peter J Diggle et al. Statistical Analysis of Spatial Point Patterns. Academic Press, 1983.
Seth Flaxman, Yee Whye Teh, Dino Sejdinovic, et al. Poisson intensity estimation with reproducing kernels. Electronic Journal of Statistics, 11(2):5081-5104, 2017.
Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36, 2014.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024-1034, 2017.
Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pp. 173-182. International World Wide Web Conferences Steering Committee, 2017.
Abdollah Jalilian, Yongtao Guan, Jorge Mateu, and Rasmus Waagepetersen. Multivariate product-shot-noise Cox point process models. Biometrics, 71(4):1022-1033, 2015.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ICLR*, 2015.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. *ICLR*, 2014.
Wenzhao Lian, Ricardo Henao, Vinayak Rao, Joseph Lucas, and Lawrence Carin. A multitask point process predictive model. In International Conference on Machine Learning, pp. 2030-2038, 2015.
Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara.
Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pp. 689-698. International World Wide Web Conferences Steering Committee, 2018.
Chris Lloyd, Tom Gunter, Michael Osborne, and Stephen Roberts. Variational inference for Gaussian process modulated Poisson processes. In International Conference on Machine Learning, pp. 1814-1822, 2015.
George O Mohler, Martin B Short, P Jeffrey Brantingham, Frederic Paik Schoenberg, and George E Tita. Self-exciting point process modeling of crime. Journal of the American Statistical Association, 106(493):100-108, 2011.
Pablo Moreno-Muñoz, Antonio Artés, and Mauricio Álvarez. Heterogeneous multi-output Gaussian process prediction. In Advances in Neural Information Processing Systems, pp. 6711-6720, 2018.
Benjamin Taylor, Tilman Davies, Barry Rowlingson, and Peter Diggle. Bayesian inference and data augmentation schemes for spatial, spatiotemporal and multivariate log-Gaussian Cox processes in R. Journal of Statistical Software, 63:1-48, 2015.
Christopher KI Williams and Carl Edward Rasmussen. Gaussian Processes for Machine Learning, volume 2. MIT Press, Cambridge, MA, 2006.
Baichuan Yuan, Hao Li, Andrea L Bertozzi, P Jeffrey Brantingham, and Mason A Porter. Multivariate spatiotemporal Hawkes processes and network reconstruction. SIAM Journal on Mathematics of Data Science, 1(2):356-382, 2019.
Jiancang Zhuang, Yosihiko Ogata, and David Vere-Jones. Stochastic declustering of space-time earthquake occurrences. Journal of the American Statistical Association, 97(458):369-380, 2002.

# A TABLE OF NOTATIONS

Table 5: Notations.
| Notation | Definition or Description |
| --- | --- |
| $N(x)$ | counting measure on a metric space $R$ |
| $N_u$ | the number of events of subprocess $u$ |
| $\lambda_u(x)$ | intensity function of subprocess $u$ |
| $\Lambda_u(x)$ | intensity process of subprocess $u$ |
| $K_\sigma(x)$ | kernel function |
| $U$ | number of subprocesses on the space |
| $U_b$ | subprocesses in a batch |
| $X$ | event set |
| $X_u$ | observed events of subprocess $u$ |
| $X_b$ | all observed events in a batch of subprocesses |
| $x_i$ | embedding/location of the $i$-th event |
| $Y_i^u$ | hidden variable indicating whether subprocess $u$ includes the $i$-th event |
| $z_u$ | $K$-dimensional hidden variable representing subprocess $u$ |
| $p_i^u$ | probability that the $i$-th event occurs in subprocess $u$ |
| $\phi, \theta$ | parameters of the encoder $(\mu_\phi, \sigma_\phi)$ and decoder $(f_\theta)$ |
| $g_{uv} = \mathbb{E}[\Lambda_u \Lambda_v] \, / \, (\mathbb{E}\Lambda_u \, \mathbb{E}\Lambda_v)$ | auto and cross pair-correlation function |
| $h_u(x)$ | normalized density |
# B PROOF OF THEOREM 1

Proof. We define our model as a point process on $\mathbf{R}$ with the intensity function $\lambda_{u}(x)$ .

The alternative model draws $N_{u}$ i.i.d. samples within $R$ with p.d.f. $h_{u}(x)$ , where $N_{u} \sim \operatorname{Pois}\left(\int_{R} \lambda_{u}(s) \, d s\right)$ is the number of points of the point process model.

1) Our model has the following probability generating functional

$$
G (v) = \exp \left(- \int_ {\mathbf {R} ^ {d}} [ 1 - v (x) ] \Lambda (\mathrm {d} x)\right) \tag {12}
$$

2) Given $N_{u}$ ,

$$
p \left(x _ {1}, \dots , x _ {N _ {u}} \mid N _ {u}\right) = \prod_ {i = 1} ^ {N _ {u}} h _ {u} \left(x _ {i}\right) \tag {13}
$$

3) Our alternative model (a counting r.v. $N(x)$ with locations according to $h_u(x)$ ) has the following characteristic functional

$$
G _ {c} (v) = \sum_ {n = 0} ^ {\infty} p (N (R) = n) \, \mathbb {E} \left[ \exp \left(\int_ {R} \log (v (s)) N (d s)\right) \mid N (R) = n \right] \tag {14}
$$

Using 2), we can evaluate this conditional expectation:

$$
\mathbb {E} \left[ \exp \left(\int_ {R} \log (v (s)) N (d s)\right) \mid N (R) = n \right] = \left(\frac {\int_ {R} \lambda_ {u} (s) v (s) \, d s}{\int_ {R} \lambda_ {u} (s) \, d s}\right) ^ {n} \tag {15}
$$

Using 2) again and because the point process observation probability is

$$
p (\omega) = p (N (R) = N _ {u}) \, p \left(x _ {1}, \dots , x _ {N _ {u}} \mid N _ {u}\right) = \frac {1}{N _ {u} !} \left[ \prod_ {i = 1} ^ {N _ {u}} \lambda_ {u} \left(x _ {i}\right) \right] \exp \left(- \int_ {R} \lambda_ {u} (x) \, d x\right), \tag {16}
$$

we have

$$
G _ {c} (v) = \exp \left(- \int_ {R} \lambda_ {u} (x) \, d x\right) \left(1 + \sum_ {n = 1} ^ {\infty} \frac {1}{n !} \left(\int_ {R} \lambda_ {u} (s) v (s) \, d s\right) ^ {n}\right) = \exp \left(\int_ {R} \lambda_ {u} (s) (v (s) - 1) \, d s\right). \tag {17}
$$

The theorem follows from $G_{c}(v) = G(v)$ , as the probability generating functional completely determines the probability structure of the point process.
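The equivalence established in this proof suggests a direct sampler for the point process: draw a Poisson count $N \sim \operatorname{Pois}(\int_R \lambda_u)$ , then draw $N$ i.i.d. points from the normalized density $h_u = \lambda_u / \int_R \lambda_u$ . A small numerical sketch, under illustrative assumptions (a linear intensity on $[0,1]$ , rejection sampling for $h_u$ ):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spp(lam_fn, lam_total, lam_max, n_draws=2000):
    """Sample the point process via the equivalent two-step construction:
    N ~ Poisson(integral of lambda), then N i.i.d. points with density
    h(x) = lambda(x) / integral, drawn here by rejection sampling on [0, 1].
    lam_max must upper-bound lambda on [0, 1]."""
    draws = []
    for _ in range(n_draws):
        n = rng.poisson(lam_total)
        pts = []
        while len(pts) < n:
            x = rng.uniform()
            if rng.uniform() < lam_fn(x) / lam_max:
                pts.append(x)
        draws.append(pts)
    return draws

lam = lambda x: 1.0 + 3.0 * x          # intensity on [0, 1]
total = 2.5                            # integral of lambda over [0, 1]
draws = sample_spp(lam, total, lam_max=4.0)
mean_count = np.mean([len(d) for d in draws])  # should be close to 2.5
```

The empirical mean count matching $\int_R \lambda_u$ is exactly the consistency that the probability generating functional argument guarantees.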
We now show that (10) in the main paper holds.

Corollary 1.1.

$$
\mathbb {E} _ {q _ {\phi} \left(z _ {u} \mid X _ {u}\right)} \log p \left(X _ {u} \mid z _ {u}\right) = \sum_ {i = 1} ^ {N _ {u}} \log \left(h _ {u} \left(x _ {i} ^ {u}\right)\right) + C. \tag {18}
$$

Proof. Define $\lambda_{u}(x) = \mathbb{E}_{q_{\phi}(z_{u}|X_{u})}\Lambda_{u}(x)$ . Then

$$
\begin{array}{l} \mathbb {E} _ {q _ {\phi} \left(z _ {u} \mid X _ {u}\right)} \log p \left(X _ {u} \mid z _ {u}\right) = \log \left(p \left(N (R) = N _ {u}\right) \, p \left(x _ {1} ^ {u}, \dots , x _ {N _ {u}} ^ {u} \mid N _ {u}\right)\right) \quad (19) \\ = \sum_ {i = 1} ^ {N _ {u}} \log \left(h _ {u} \left(x _ {i} ^ {u}\right)\right) + \log \left(p (N (R) = N _ {u})\right), \quad (20) \\ \end{array}
$$

where

$$
\log (p (N (R) = N _ {u})) = N _ {u} \log \left(\int_ {R} \lambda_ {u} (x) \, d x\right) - \log (N _ {u} !) - \int_ {R} \lambda_ {u} (x) \, d x \tag {21}
$$

is only a function of $N_{u}$ .

# C AUTO AND CROSS PAIR-CORRELATION FUNCTIONS

We show that $g_{u,v}(x,y) > 1$ for $u = v$ and $g_{u,v}(x,y) < 1$ for $u \neq v$ for our original model. Recall

$$
g _ {u, v} (x, y) = \frac {\mathbb {E} \, \Lambda_ {u} (x) \Lambda_ {v} (y)}{\mathbb {E} \Lambda_ {u} (x) \, \mathbb {E} \Lambda_ {v} (y)} \tag {22}
$$

We have

$$
\begin{array}{l} \mathbb {E} \Lambda_ {u} (x) \Lambda_ {v} (y) = \mathbb {E} \left(\sum_ {i = 1} ^ {N} Y _ {i} ^ {u} K _ {h} \left(x - x _ {i}\right)\right) \left(\sum_ {j = 1} ^ {N} Y _ {j} ^ {v} K _ {h} \left(y - x _ {j}\right)\right) (23) \\ = \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} \mathbb {E} Y _ {i} ^ {u} Y _ {j} ^ {v} K _ {h} (x - x _ {i}) K _ {h} (y - x _ {j}).
(24) \\ \end{array}
$$

Similarly,

$$
\begin{array}{l} \mathbb {E} \Lambda_ {u} (x) \, \mathbb {E} \Lambda_ {v} (y) = \left(\mathbb {E} \sum_ {i = 1} ^ {N} Y _ {i} ^ {u} K _ {h} (x - x _ {i})\right) \left(\mathbb {E} \sum_ {j = 1} ^ {N} Y _ {j} ^ {v} K _ {h} (y - x _ {j})\right) (25) \\ = \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} p _ {i} ^ {u} p _ {j} ^ {v} K _ {h} \left(x - x _ {i}\right) K _ {h} \left(y - x _ {j}\right). (26) \\ \end{array}
$$

Note that $\sum_{u=1}^{U} Y_i^u = 1$ and $\sum_{u=1}^{U} p_i^u = 1$ . When $i \neq j$ , we have $\mathbb{E}Y_i^u Y_j^v = p_i^u p_j^v$ for any $u, v$ . When $i = j$ , $\mathbb{E}Y_i^u Y_i^u = p_i^u > (p_i^u)^2$ for $u = v$ , and $\mathbb{E}Y_i^u Y_i^v = 0 < p_i^u p_i^v$ for $u \neq v$ . It is then easy to see that $g_{u,v}(x,y) > 1$ for $u = v$ and $g_{u,v}(x,y) < 1$ for $u \neq v$ .

# D MORE ON EXPERIMENTS

# D.1 METRICS DEFINITION

The ranking performance is evaluated through recall at K (Recall@K) and normalized discounted cumulative gain at K (NDCG@K). In our VAE-SPP model, the predicted ranks of the held-out items $I_{u}$ for each user $u$ are obtained by sorting the intensity function $\lambda_{u}(x)$ .

Here we follow the definitions in Liang et al. (2018). Recall@K is defined as

$$
\operatorname {R e c a l l} @ \mathrm {K} = \frac {\sum_ {i = 1} ^ {K} \mathbb {I} \left(r _ {i} \in I _ {u}\right)}{\left| I _ {u} \right|}. \tag {27}
$$

NDCG@K is calculated by normalizing the discounted cumulative gain (DCG@K) by the ideal DCG@K (IDCG@K). The definitions are as follows:

$$
\mathrm {D C G} @ \mathrm {K} = \sum_ {i = 1} ^ {K} \frac {2 ^ {\mathbb {I} \left(r _ {i} \in I _ {u}\right)} - 1}{\log_ {2} (i + 1)}, \quad \mathrm {N D C G} @ \mathrm {K} = \frac {\mathrm {D C G} @ \mathrm {K}}{\mathrm {I D C G} @ \mathrm {K}}, \tag {28}
$$

where $\mathbb{I}$ is the indicator function and $r_i$ is the item at rank $i$ in the predicted ranking; IDCG@K is the ideal DCG@K, obtained when the ranked list is perfectly ordered.
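Both metrics can be computed directly from the ranked list; a minimal sketch of Eqs. (27)-(28) on toy data:

```python
import math

def recall_at_k(ranked_items, held_out, k):
    """Recall@K as in Eq. (27): fraction of held-out items found in the top-K."""
    hits = sum(1 for r in ranked_items[:k] if r in held_out)
    return hits / len(held_out)

def ndcg_at_k(ranked_items, held_out, k):
    """NDCG@K as in Eq. (28): DCG over the top-K, normalized by the ideal DCG."""
    # i is 0-based here, so the rank-i discount log2(i + 1) becomes log2(i + 2)
    dcg = sum((2 ** (r in held_out) - 1) / math.log2(i + 2)
              for i, r in enumerate(ranked_items[:k]))
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(k, len(held_out))))
    return dcg / ideal

# Items ranked by predicted intensity; items {a, c} are held out.
ranked = ["a", "b", "c", "d"]
held = {"a", "c"}
r2 = recall_at_k(ranked, held, 2)   # one of two held-out items in top-2 -> 0.5
n2 = ndcg_at_k(ranked, held, 2)
```

In the VAE-SPP setting, `ranked` would be the items sorted by the estimated intensity $\lambda_u(x)$ for user $u$ .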
# D.2 HYPERPARAMETERS

We implement our models in TensorFlow based on VAE-CF $^2$ . We keep the same MLP architectures and hyperparameters for both of them. We use a $\beta$ -VAE as suggested. The only additional hyperparameter for our model is the $\sigma^2$ in the kernel function, which is determined using grid search on the validation set. We conducted the experiments on a single GTX 1080 Ti 11GB GPU.

All models are trained with the Adam optimizer with $\beta = 0.2$ ; the remaining per-data-set settings are:

- Simulation data: 200 epochs, $lr = 5 \times 10^{-5}$ , mini-batches of size 20, a one-layer MLP with $K = 50$ ; for VAE-SPP, $\sigma^2 = 0.001$ .
- Gowalla-NYC: 200 epochs, $lr = 5 \times 10^{-4}$ , mini-batches of size 20, a one-layer MLP with $K = 50$ ; for VAE-SPP, $\sigma^2 = 1 \times 10^{-5}$ .
- Gowalla-LA: 200 epochs, $lr = 1 \times 10^{-3}$ , mini-batches of size 20, a one-layer MLP with $K = 50$ ; for VAE-SPP, $\sigma^2 = 0.001$ .
- ML-1M: 100 epochs, $lr = 1 \times 10^{-3}$ , mini-batches of size 5, a one-layer MLP with $K = 200$ ; for VAE-SPP, $\sigma^2 = 1 \times 10^{-5}$ .
- ML-100K: 100 epochs, $lr = 1 \times 10^{-3}$ , mini-batches of size 5, a one-layer MLP with $K = 200$ ; for VAE-SPP, $\sigma^2 = 1 \times 10^{-5}$ .

The one-layer GNN for the ML data is trained using GraphSAGE, with an embedding dimension of 32 and a neighborhood size of 10 for items and 5 for users. The graph consists of the edges between users and items as well as the edges between items based on their Jaccard similarity.
We use Python statsmodels for the KDE and GPy for the TGCP. The bandwidth for KDE is selected automatically. The hyperparameters for TGCP are determined with a grid search on the validation set. For the simulation data sets, we use an RBF kernel with variance $= 1$ and lengthscale $= 0.1$ for TGCP. For the Gowalla-CA data set, we use a Matern32 kernel with variance $= 10^{-3}$ and lengthscale $= 0.1$ . For the Gowalla-NYC data set, we use a Matern32 kernel with variance $= 10^{-4}$ and lengthscale $= 0.01$ .

# D.3 ADDITIONAL EXPERIMENTS

On the training of the VAE and GNN, we tried different settings (separate and joint training) and chose to train them jointly. We also tested the point-estimate version of VAE-CF called DAE-CF (Mult-DAE in Liang et al. (2018), with the same setting), which can improve the results under certain metrics. One can easily extend our work to a DAE-SPP to obtain a point estimate of the SPP intensity.

Table 6: Testing results on MovieLens-100K. All methods share the same network and are trained for 100 epochs. The test set evaluates the model with the best validation performance. "Separate" means that the GNN is trained separately from VAE-SPP.
| | NDCG@100 | Recall@20 | Recall@100 |
| --- | --- | --- | --- |
| VAE-CF | 40.88 | 32.32 | 57.63 |
| DAE-CF | 40.98 | 29.29 | 58.80 |
| VAE-SPP | 41.50 | 31.34 | 58.99 |
| VAE-SPP-Separate | 41.43 | 31.15 | 58.82 |
+ +We also did experiments on the MLPs for VAE. For Movie Lens 1M, the larger network in VAE-CF leads to a 40.3 NDCG@100 for VAE-CF and 41.9 for VAE-SPP. As a result, we use the smaller one instead. \ No newline at end of file diff --git a/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/images.zip b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ccee2dfd2923ef15dfa10610e1d7d8b79b1b7d54 --- /dev/null +++ b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e5994e1449a9dae0e8089ff1fbe022126bc0d59cd21b900e39e4972bc19d984 +size 621062 diff --git a/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/layout.json b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d9896bf5edacc808bef2b8ec5ecd10a2887e7752 --- /dev/null +++ b/variationalautoencodersforhighlymultivariatespatialpointprocessesintensities/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f79b3fb902760acdd617ba6e2083530859c731f62db4cd7b960e10132cc7ac3d +size 522011 diff --git a/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_content_list.json b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..afdfddcbd5bd2a10a3bc21bb6c427a28f899992b --- /dev/null +++ b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b51fb775b688d1a4f74ed9e74c74e555f6e6134755da1157400b1e9b0857b4d +size 162224 
diff --git a/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_model.json b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d361acb2db60f189fc474a1dd44f1f49bdcbf404 --- /dev/null +++ b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2a9689676d8e5ca7c93e9fea22f3dfc4c95911ab5552b7cfdfb8665c27371b4 +size 192199 diff --git a/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_origin.pdf b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..339da739e694f05c514b2180319b56d9ab94cf8c --- /dev/null +++ b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/194a9032-aeb8-4e45-83da-3f34baffb3ef_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f3def1dab5b264a2cb2ffd2d653320701117bb5dc22e64de530ed5e66abdc1a +size 24884626 diff --git a/variationalheteroencoderrandomizedgansforjointimagetextmodeling/full.md b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..349d61dd44e5d294c557fd82a02e9d42334791de --- /dev/null +++ b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/full.md @@ -0,0 +1,660 @@ +# VARIATIONAL HETERO-ENCODER RANDOMIZED GANS FOR JOINT IMAGE-TEXT MODELING + +Hao Zhang, Bo Chen* Long Tian, Zhengjue Wang + +National Laboratory of Radar Signal Processing + +Xidian University, Xian, China + +zhanghao_xidian@163.com bchen@mail.xidian.edu.cn + +tianlong_xidian@163.com zhengjuewang@163.com + +# Mingyuan Zhou + +McCombs School of Business 
The University of Texas at Austin, Austin, TX 78712, USA

mingyuan.zhou@mccombs.utexas.edu

# ABSTRACT

For bidirectional joint image-text modeling, we develop variational hetero-encoder (VHE) randomized generative adversarial network (GAN), a versatile deep generative model that integrates a probabilistic text decoder, probabilistic image encoder, and GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN that generates photo-realistic images in not only a multi-scale low-to-high-resolution manner, but also a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks.

# 1 INTRODUCTION

Images and texts commonly occur together in the real world. There exists a wide variety of deep neural network based unidirectional methods that model images (texts) given texts (images) (Gomez et al., 2017; Kiros & Szepesvari, 2012; Reed et al., 2016; Xu et al., 2018; Zhang et al., 2017a). There also exist probabilistic graphical model based bidirectional methods (Srivastava & Salakhutdinov, 2012b;a; Wang et al., 2018) that capture the joint distribution of images and texts. These bidirectional methods, however, often make restrictive parametric assumptions that limit their image generation ability.
Exploiting recent progress on deep probabilistic models and variational inference (Kingma & Welling, 2014; Zhou et al., 2016; Zhang et al., 2018a; Goodfellow et al., 2014; Zhang et al., 2017b), we propose an end-to-end learning framework to construct multi-modality deep generative models that can not only generate vivid image-text pairs, but also achieve state-of-the-art results on various unidirectional tasks (Srivastava & Salakhutdinov, 2012b;a; Wang et al., 2018; Gomez et al., 2017; Xu et al., 2018; Zhang et al., 2017a;b; Verma et al., 2018; Zhang et al., 2018b), such as generating photo-realistic images given texts and performing text-based zero-shot learning. + +To extract and relate semantic and visual concepts, we first introduce variational hetero-encoder (VHE) that encodes an image to decode its textual description (e.g., tags, sentences, binary attributes, and long documents), where the probabilistic encoder and decoder are jointly optimized using variational inference (Blei et al., 2017; Hoffman et al., 2013; Kingma & Welling, 2014; Rezende et al., 2014). The latent representation of VHE can be sampled from either the variational posterior provided + +by the image encoder given an image input, or the posterior of the text decoder via MCMC given a text input. VHE by construction has the ability to generate texts given images. To further enhance its text generation performance and allow synthesizing photo-realistic images given an image, text, or random noise, we feed the variational posterior of VHE in lieu of random noise as the source of randomness into the image generator of a generative adversarial network (GAN) (Goodfellow et al., 2014). We refer to this new modeling framework as VHE randomized GAN (VHE-GAN). + +Off-the-shelf text decoders, image encoders, and GANs can be directly plugged into the VHE-GAN framework for end-to-end multi-modality learning. To begin with, as shown in Figs. 
1(a) and 1(b), we construct VHE-StackGAN++ by using the Poisson gamma belief network (PGBN) (Zhou et al., 2016) as the VHE text decoder, using the Weibull upward-downward variational encoder (Zhang et al., 2018a) as the VHE image encoder, and feeding the concatenation of the multi-stochastic-layer latent representation of the VHE as the source of randomness into the image generator of StackGAN++ (Zhang et al., 2017b). While VHE-StackGAN++ already achieves very attractive performance, we find that its performance can be clearly boosted by better exploiting the multi-stochastic-layer semantically meaningful hierarchical latent structure of the PGBN text decoder. To this end, as shown in Figs. 1(a) and 1(c), we develop VHE-raster-scan-GAN to perform image generation in not only a multi-scale low-to-high-resolution manner in each layer, as done by StackGAN++, but also a hierarchical-semantic coarse-to-fine fashion across layers, a unique feature distinguishing it from existing methods. Consequently, not only can VHE-raster-scan-GAN generate vivid high-resolution images with better details, but it can also build interpretable hierarchical semantic-visual relationships between the generated images and texts.

Our main contributions include: 1) VHE-GAN, which provides a plug-and-play framework to integrate off-the-shelf probabilistic decoders, variational encoders, and GANs for end-to-end bidirectional multi-modality learning; the shared latent space can be inferred either by the image encoder $q(\boldsymbol{z} \mid \boldsymbol{x})$ , if given images, or by Gibbs sampling from the conditional posterior of the text decoder $p(\boldsymbol{t} \mid \boldsymbol{z})$ , if given texts; 2) VHE-raster-scan-GAN, which captures and relates hierarchical semantic and visual concepts to achieve state-of-the-art results in various unidirectional and bidirectional image-text modeling tasks.
+ +# 2 VARIATIONAL HETERO-ENCODER RANDOMIZED GANS + +VAEs and GANs are two distinct types of deep generative models. Consisting of a generator (decoder) $p(\pmb{x} \mid \pmb{z})$ , a prior $p(\pmb{z})$ , and an inference network (encoder) $q(\pmb{z} \mid \pmb{x})$ that is used to approximate the posterior $p(\pmb{z} \mid \pmb{x})$ , VAEs (Kingma & Welling, 2014; Rezende et al., 2014) are optimized by maximizing the evidence lower bound (ELBO) as + +$$ +\operatorname {E L B O} = \mathbb {E} _ {\boldsymbol {x} \sim p _ {\text {d a t a}} (\boldsymbol {x})} [ \mathcal {L} (\boldsymbol {x}) ], \quad \mathcal {L} (\boldsymbol {x}) := \mathbb {E} _ {\boldsymbol {z} \sim q (\boldsymbol {z} \mid \boldsymbol {x})} [ \ln p (\boldsymbol {x} \mid \boldsymbol {z}) ] - \operatorname {K L} [ q (\boldsymbol {z} \mid \boldsymbol {x}) | | p (\boldsymbol {z}) ], \tag {1} +$$ + +where $p_{\mathrm{data}}(\pmb{x}) = \sum_{i=1}^{N} \frac{1}{N} \delta_{\pmb{x}_i}$ represents the empirical data distribution. Distinct from VAEs that make parametric assumptions on data distribution and perform posterior inference, GANs in general use implicit data distribution and do not provide meaningful latent representations (Goodfellow et al., 2014); they learn both a generator $G$ and a discriminator $D$ by optimizing a mini-max objective as + +$$ +\min _ {G} \max _ {D} \left\{\mathbb {E} _ {\boldsymbol {x} \sim p _ {\text {d a t a}} (\boldsymbol {x})} \left[ \ln D (\boldsymbol {x}) \right] + \mathbb {E} _ {\boldsymbol {z} \sim p (\boldsymbol {z})} \left[ \ln \left(1 - D (G (\boldsymbol {z}))\right) \right] \right\}, \tag {2} +$$ + +where $p(z)$ is a random noise distribution that acts as the source of randomness for data generation. + +# 2.1 VHE-GAN OBJECTIVE FUNCTION FOR END-TO-END MULTI-MODALITY LEARNING + +Below we show how to construct VHE-GAN to jointly model images $\pmb{x}$ and their associated texts $\pmb{t}$ , capturing and relating hierarchical semantic and visual concepts. 
First, we modify the usual VAE into VHE, optimizing a lower bound of the text log-marginal-likelihood $\mathbb{E}_{\pmb{t} \sim p_{\mathrm{data}}(\pmb{t})}[\ln p(\pmb{t})]$ as + +$$ +\operatorname {E L B O} _ {\mathrm {v h e}} = \mathbb {E} _ {p _ {\text {d a t a}} (\boldsymbol {t}, \boldsymbol {x})} [ \mathcal {L} _ {\mathrm {v h e}} (\boldsymbol {t}, \boldsymbol {x}) ], \quad \mathcal {L} _ {\mathrm {v h e}} (\boldsymbol {t}, \boldsymbol {x}) := \mathbb {E} _ {\boldsymbol {z} \sim q (\boldsymbol {z} \mid \boldsymbol {x})} [ \ln p (\boldsymbol {t} \mid \boldsymbol {z}) ] - \operatorname {K L} [ q (\boldsymbol {z} \mid \boldsymbol {x}) | | p (\boldsymbol {z}) ], \tag {3} +$$ + +where $p(\boldsymbol{t} \mid \boldsymbol{z})$ is the text decoder, $p(\boldsymbol{z})$ is the prior, $p(\boldsymbol{t}) = \mathbb{E}_{\boldsymbol{z} \sim p(\boldsymbol{z})}[p(\boldsymbol{t} \mid \boldsymbol{z})]$ , and $\mathcal{L}_{\mathrm{vhe}}(\boldsymbol{t}, \boldsymbol{x}) \leq \ln \mathbb{E}_{\boldsymbol{z} \sim q(\boldsymbol{z} \mid \boldsymbol{x})}[\frac{p(\boldsymbol{t} \mid \boldsymbol{z}) p(\boldsymbol{z})}{q(\boldsymbol{z} \mid \boldsymbol{x})}] = \ln p(\boldsymbol{t})$ . Second, the image encoder $q(\boldsymbol{z} \mid \boldsymbol{x})$ , which encodes image $\boldsymbol{x}$ into its latent representation $\boldsymbol{z}$ , is used to approximate the posterior $p(\boldsymbol{z} \mid \boldsymbol{t}) = p(\boldsymbol{t} \mid \boldsymbol{z}) p(\boldsymbol{z}) / p(\boldsymbol{t})$ . Third, variational posterior $q(\boldsymbol{z} \mid \boldsymbol{x})$ in lieu of random noise $p(\boldsymbol{z})$ is fed as the source of randomness into the GAN image generator. 
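The first of these steps, the VHE ELBO in Eq. (3), can be estimated with a single Monte Carlo sample from $q(\boldsymbol{z} \mid \boldsymbol{x})$ . A toy numpy sketch, with a unit-variance Gaussian encoder and a Poisson bag-of-words decoder standing in for the paper's ladder encoder and PGBN (all names, sizes, and parameterizations are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def vhe_elbo(x, t, W_enc, W_dec):
    """Single-sample Monte Carlo estimate of L_vhe(t, x) in Eq. (3):
    E_{z ~ q(z|x)}[ln p(t|z)] - KL[q(z|x) || p(z)], with q(z|x) = N(mu(x), I),
    a standard normal prior, and p(t|z) = Pois(softplus(W_dec z))."""
    mu = W_enc @ x                               # encoder mean (unit variance)
    z = mu + rng.standard_normal(mu.shape)       # one sample from q(z|x)
    rate = np.log1p(np.exp(W_dec @ z))           # softplus keeps rates positive
    # Poisson log-likelihood of the word counts t (dropping the log t! constant)
    log_lik = np.sum(t * np.log(rate) - rate)
    kl = 0.5 * np.sum(mu ** 2)                   # KL[N(mu, I) || N(0, I)]
    return log_lik - kl

x = rng.standard_normal(8)                       # image features
t = rng.poisson(1.0, size=5)                     # bag-of-words counts
W_enc = 0.1 * rng.standard_normal((3, 8))
W_dec = 0.1 * rng.standard_normal((5, 3))
elbo = vhe_elbo(x, t, W_enc, W_dec)
```

In VHE-GAN, the same sample `z` would additionally be passed to the GAN generator, which is what couples the encoder to the adversarial objective.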
Combining these three steps, with the parameters of the image encoder

![](images/5cc4600e8ccf07068e77480982066b70773f66a6af0f13e9a81a2bb5b1a9d7c9.jpg)
(a)

![](images/0ced3b9d3106f8fb80ed93123cafd5295f31033df9786b5c1fd5858c983f90fe.jpg)
(b)

![](images/a0c45b2c93d2a4771d28edd0bd3a8d82c7c2492f04442c9417d0695d7f81fab1.jpg)
(c)
Figure 1: Illustration of (a) VHE, (b) StackGAN++, (c) raster-scan-GAN, (d) vanilla-GAN, and (e) simple-raster-scan-GAN. VHE-raster-scan-GAN consists of (a) and (c). $\pmb{x}_{\downarrow d}$ is down-sampled from $\pmb{x}$ with scaling factor $d$ . VHE-StackGAN++, consisting of (a) and (b), VHE-vanilla-GAN, consisting of (a) and (d), and VHE-simple-raster-scan-GAN, consisting of (a) and (e), are all used for ablation studies.

![](images/d17024df62bf6c78e29bc54c3f79fcc3d393b3d80bcc014b7e9c9735bbd060f0.jpg)
(d)

![](images/4b532d6e9d0e789e906510d4cb5b3b49ab327ad23e308440275518cf852ee7af.jpg)
(e)

$q(\boldsymbol{z} \mid \boldsymbol{x})$ , text decoder $p(\boldsymbol{t} \mid \boldsymbol{z})$ , and GAN generator denoted by $E$ , $G_{\mathrm{vae}}$ , and $G_{\mathrm{gan}}$ , respectively, we express the objective function of VHE-GAN for joint image-text end-to-end learning as

$$
\min _ {E, G _ {\mathrm {v a e}}, G _ {\mathrm {g a n}}} \max _ {D} \mathbb {E} _ {p _ {\mathrm {d a t a}} (\boldsymbol {t}, \boldsymbol {x})} [ \mathcal {L} (\boldsymbol {t}, \boldsymbol {x}) ],
$$

$$
\mathcal {L} (\boldsymbol {t}, \boldsymbol {x}) := \ln D (\boldsymbol {x}) + \operatorname {K L} [ q (\boldsymbol {z} | \boldsymbol {x}) | | p (\boldsymbol {z}) ] + \mathbb {E} _ {\boldsymbol {z} \sim q (\boldsymbol {z} | \boldsymbol {x})} \left[ \ln \left(1 - D \left(G _ {\mathrm {g a n}} (\boldsymbol {z})\right)\right) - \ln p (\boldsymbol {t} | \boldsymbol {z}) \right].
\tag {4}
$$

Note that the objective function in (4) implies a data-triple-reuse training strategy: the same data mini-batch is used in each stochastic gradient update iteration to jointly train the VHE, the GAN discriminator, and the GAN generator; see a related objective function, shown in (10) of Appendix A, which results from naively combining the VHE and GAN training objectives. In VHE-GAN, the optimization of the encoder parameters $E$ is tied not only to the VHE's ELBO, but also to the GAN mini-max objective, forcing the variational posterior $q(z \mid x)$ to serve as a bridge between the VHE and GAN, allowing them to help each other. Although there are some models (Mescheder et al., 2017; Makhzani et al., 2015; Tolstikhin et al., 2018; Dumoulin et al., 2017; Donahue et al., 2017; Che et al., 2017; Srivastava et al., 2017; Grover et al., 2018; Larsen et al., 2016; Huang et al., 2018) that combine VAEs and GANs in various ways, they focus on single-modality tasks, whereas VHE-GAN operates on two different modalities. In Appendix A, we analyze the properties of the VHE-GAN objective function and discuss related works. Below we develop two different VHE-GANs: one integrates off-the-shelf modules, while the other introduces a new interpretable hierarchical latent structure.

# 2.2 VHE-STACKGAN++ WITH OFF-THE-SHELF MODULES

As shown in Figs. 1(a) and 1(b), we first construct VHE-StackGAN++ by plugging into VHE-GAN three off-the-shelf modules: a deep topic model (Zhou et al., 2016), a ladder-structured encoder (Zhang et al., 2018a), and StackGAN++ (Zhang et al., 2017b). For text analysis, both sequence models and topic models are widely used. Sequence models (Bengio et al., 2003) often represent each document as a sequence of word embedding vectors, capturing local dependency structures with some type of recurrent neural network (RNN), such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997).
Topic models such as latent Dirichlet allocation (LDA) (Blei et al., 2003) often represent each document as a bag of words (BoW), capturing global word co-occurrence patterns into latent topics. While well suited to capturing local dependency structure, existing sequence models often have difficulty capturing long-range word dependencies and hence macro-level information, such as global word co-occurrence patterns (i.e., topics), especially for long documents. By contrast, while topic models ignore word order, they are very effective in capturing latent topics, which are often directly related to macro-level visual information (Gomez et al., 2017; Dieng et al., 2017; Lau et al., 2017). Moreover, topic models can be applied not only to sequential texts, such as a few sentences (Wang et al., 2009; Jin et al., 2015) and long documents (Zhou et al., 2016), but also to non-sequential ones, such as textual tags (Srivastava & Salakhutdinov, 2012a; 2014; Wang et al., 2018) and binary attributes (Elhoseiny et al., 2017b; Zhu et al., 2018). For this reason, for the VHE text decoder, we choose PGBN (Zhou et al., 2016), a state-of-the-art topic model that can also be represented as a multi-stochastic-layer deep generalization of LDA (Cong et al., 2017). We complete VHE-StackGAN++ by choosing the Weibull upward-downward variational encoder (Zhang et al., 2018a) as the VHE image encoder, and feeding the concatenation of all the hidden layers of PGBN as the source of randomness to the image generator of StackGAN++ (Zhang et al., 2017b).

As in Fig. 1, we use a VHE that encodes an image into a deterministic-upward-stochastic-downward ladder-structured latent representation, which is used to decode the corresponding text. Specifically, we represent each document as a high-dimensional sparse BoW count vector $\pmb{t}_n \in \mathbb{Z}^{K_0}$, where $\mathbb{Z} = \{0,1,\dots \}$ and $K_{0}$ is the vocabulary size.
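For concreteness, the BoW representation can be sketched as follows; the five-word vocabulary and the document are toy examples made up for illustration, not taken from the paper's datasets:

```python
import numpy as np

# Toy vocabulary (illustrative only); here K_0 = 5.
vocab = ["bird", "red", "crown", "petal", "yellow"]
word_to_id = {w: i for i, w in enumerate(vocab)}

def bow_vector(tokens, word_to_id):
    """Map a tokenized document to its BoW count vector t_n in Z^{K_0}.
    Word order is discarded; only per-word counts are kept."""
    t = np.zeros(len(word_to_id), dtype=np.int64)
    for w in tokens:
        if w in word_to_id:  # out-of-vocabulary tokens are simply dropped
            t[word_to_id[w]] += 1
    return t

t_n = bow_vector("the bird has a red crown and red wings".split(), word_to_id)
print(t_n)  # [1 2 1 0 0]
```

Note how "red" contributes a count of 2 while its positions in the sentence are lost, which is exactly the trade-off discussed above.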
For the VHE text decoder, we choose to use PGBN to extract a hierarchical latent representation from $\pmb{t}_n$. PGBN consists of multiple gamma distributed stochastic hidden layers, generalizing the "shallow" Poisson factor analysis (Zhou et al., 2012; Zhou & Carin, 2015) into a deep setting. PGBN with $L$ hidden layers, from top to bottom, is expressed as

$$
\boldsymbol{\theta}_n^{(L)} \sim \operatorname{Gam}\left(\boldsymbol{r}, 1/s_n^{(L+1)}\right), \quad \boldsymbol{r} \sim \operatorname{Gam}\left(\gamma_0/K_L, 1/s_0\right),
$$

$$
\boldsymbol{\theta}_n^{(l)} \sim \operatorname{Gam}\left(\boldsymbol{\Phi}^{(l+1)} \boldsymbol{\theta}_n^{(l+1)}, 1/s_n^{(l+1)}\right), \ l = L-1, \dots, 2, 1, \quad \boldsymbol{t}_n \sim \operatorname{Pois}\left(\boldsymbol{\Phi}^{(1)} \boldsymbol{\theta}_n^{(1)}\right), \tag{5}
$$

where the hidden units $\theta_{n}^{(l)}\in \mathbb{R}_{+}^{K_{l}}$ of layer $l$ are factorized under the gamma likelihood into the product of the topics $\Phi^{(l)}\in \mathbb{R}_{+}^{K_{l - 1}\times K_{l}}$ and the hidden units of the next layer; here $\mathbb{R}_+ = \{x : x\geq 0\}$, $s_n^{(l)} > 0$, and $K_{l}$ is the number of topics of layer $l$. If the texts are represented as binary attribute vectors $\pmb{b}_{n}$, we can add a Bernoulli-Poisson link layer as $\pmb {b}_n = \mathbf{1}(\pmb{t}_n\geq 1)$ (Zhou, 2015; Zhou et al., 2016). We place a Dirichlet prior on each column of $\Phi^{(l)}$. The topics can be organized into a directed acyclic graph (DAG), whose node $k$ at layer $l$ can be visualized with the top words of $\left[\prod_{t = 1}^{l - 1}\Phi^{(t)}\right]\phi_k^{(l)}$; the topics tend to be very general at the top layer and become increasingly more specific when moving downwards. This semantically meaningful latent hierarchy provides unique opportunities to build a better image generator by coupling the semantic hierarchical structures with visual ones.
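The top-down generative process in (5) can be sketched numerically as follows; all layer sizes, scale parameters, and hyperparameters are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: vocabulary K0 and per-layer topic counts K1..K3.
K0, K1, K2, K3 = 20, 8, 5, 3
s = 1.0  # all gamma scales 1/s set to 1 for simplicity

# Random topic matrices Phi^{(l)}; each column is a probability vector,
# mimicking the Dirichlet prior placed on the columns of Phi^{(l)}.
Phi1 = rng.dirichlet(np.ones(K0), size=K1).T  # K0 x K1
Phi2 = rng.dirichlet(np.ones(K1), size=K2).T  # K1 x K2
Phi3 = rng.dirichlet(np.ones(K2), size=K3).T  # K2 x K3
r = rng.gamma(1.0, 1.0, size=K3)              # top-layer gamma shape parameter

# Top-down sampling of (5): gamma hidden layers, then a Poisson likelihood.
theta3 = rng.gamma(r, s)              # theta^{(3)} ~ Gam(r, 1/s)
theta2 = rng.gamma(Phi3 @ theta3, s)  # theta^{(2)} ~ Gam(Phi^{(3)} theta^{(3)}, 1/s)
theta1 = rng.gamma(Phi2 @ theta2, s)  # theta^{(1)} ~ Gam(Phi^{(2)} theta^{(2)}, 1/s)
t_n = rng.poisson(Phi1 @ theta1)      # t_n ~ Pois(Phi^{(1)} theta^{(1)})
print(t_n.shape)  # (20,): a word-count vector over the vocabulary
```

The same top-down pass, started from a draw of $\theta^{(L)}$, is what later generates random text-image pairs in Section 3.4.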
Let us denote $\Phi = \{\Phi^{(1)},\dots,\Phi^{(L)},\boldsymbol {r}\}$ as the set of global parameters of PGBN shown in (5). Given $\Phi$, we adopt the inference in Zhang et al. (2018a) to build a Weibull upward-downward variational image encoder as $\prod_{n = 1}^{N}\prod_{l = 1}^{L}q(\pmb{\theta}_{n}^{(l)}|\pmb{x}_{n},\pmb{\Phi}^{(l + 1)},\pmb{\theta}_{n}^{(l + 1)})$, where $\Phi^{(L + 1)}\coloneqq r$, $\pmb{\theta}_n^{(L + 1)}\coloneqq \emptyset$, and

$$
q\left(\boldsymbol{\theta}_n^{(l)} \mid \boldsymbol{x}_n, \boldsymbol{\Phi}^{(l+1)}, \boldsymbol{\theta}_n^{(l+1)}\right) = \operatorname{Weibull}\left(\boldsymbol{k}_n^{(l)} + \boldsymbol{\Phi}^{(l+1)} \boldsymbol{\theta}_n^{(l+1)}, \boldsymbol{\lambda}_n^{(l)}\right). \tag{6}
$$

The Weibull distribution is used to approximate the gamma distributed conditional posterior, and its parameters $\pmb{k}_n^{(l)}, \pmb{\lambda}_n^{(l)} \in \mathbb{R}^{K_l}$ are deterministically transformed from the convolutional neural network (CNN) image features $f(\pmb{x}_n)$ (Szegedy et al., 2016), as shown in Fig. 1(a) and described in Appendix D.1. We denote $\Omega$ as the set of encoder parameters. We refer to Zhang et al. (2018a) for more details about this deterministic-upward-stochastic-downward ladder-structured inference network, which is distinct from a usual inference network that has a pure bottom-up structure and only interacts with the generative model via the ELBO (Kingma & Welling, 2014; Gulrajani et al., 2017).

The multi-stochastic-layer latent representation $\mathbf{z} = \{\pmb{\theta}^{(l)}\}_{l=1}^{L}$ is the bridge between the two modalities. As shown in Fig. 1(b), VHE-StackGAN++ simply randomizes the image generator of StackGAN++ (Zhang et al., 2017b) with the concatenated vector $\pmb{\theta} = [\pmb{\theta}^{(1)}, \dots, \pmb{\theta}^{(L)}]$. We provide the overall objective function in (15) of Appendix D.2.
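The Weibull choice in (6) is convenient because a Weibull draw can be written as a differentiable transform of uniform noise, $x = \lambda(-\ln(1-u))^{1/k}$ with $u \sim \mathrm{Uniform}(0,1)$, which is what enables the reparameterization gradients of Zhang et al. (2018a). A minimal sketch, with shapes and scales chosen arbitrarily:

```python
import numpy as np

def sample_weibull(k, lam, rng):
    """Reparameterized draw from Weibull(k, lam):
    x = lam * (-log(1 - u))**(1/k), u ~ Uniform(0, 1).
    Given u, x is a deterministic, differentiable function of (k, lam)."""
    u = rng.uniform(size=np.shape(k))
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(1)
k = np.array([1.5, 2.0, 0.8, 1.0])    # shape parameters for K_l = 4 units
lam = np.array([0.5, 1.0, 2.0, 1.5])  # scale parameters
theta = sample_weibull(k, lam, rng)   # one sample of theta^{(l)}
print(theta.shape)  # (4,): strictly positive latent units
```

With $k = 1$ the Weibull reduces to an exponential with mean $\lambda$, a convenient special case for sanity-checking an implementation.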
Note that existing neural-network-based models (Gomez et al., 2017; Xu et al., 2018; Zhang et al., 2017a;b; Verma et al., 2018; Zhang et al., 2018b) are often able to perform unidirectional but not bidirectional transforms between images $\pmb{x}$ and texts $\pmb{t}$. By contrast, bidirectional transforms are straightforward for the proposed model, as $\pmb{z}$ can be either drawn from the image encoder $q(\pmb{z} \mid \pmb{x})$ in (6), or drawn with an upward-downward Gibbs sampler (Zhou et al., 2016) from the conditional posteriors $p(\pmb{z} \mid \pmb{t})$ of the PGBN text decoder $p(\pmb{t} \mid \pmb{z})$ in (5).

# 2.3 VHE-RASTER-SCAN-GAN WITH A HIERARCHICAL-SEMANTIC MULTI-RESOLUTION IMAGE GENERATOR

While we find that VHE-StackGAN++ already achieves impressive results, its simple concatenation of $\pmb{\theta}^{(l)}$ does not fully exploit the semantically meaningful hierarchical latent representation of the PGBN-based text decoder. For three DAG subnets inferred from three different datasets, as shown in Figs. 21-23 of Appendix C.7, the higher-layer PGBN topics match general visual concepts, such as those on shapes, colors, and backgrounds, while the lower-layer ones provide finer details. This motivates us to develop an image generator that exploits this semantic structure, which matches coarse-to-fine visual concepts, to gradually refine its generation. To this end, as shown in Fig. 1(c), we develop "raster-scan" GAN, which performs generation not only in a multi-scale low-to-high-resolution manner in each layer, but also in a hierarchical-semantic coarse-to-fine fashion across layers.

Suppose we are building a three-layer raster-scan GAN to generate an image of size $256^2$. We randomly select an image $\pmb{x}_n$ and then sample $\{\pmb{\theta}_n^{(l)}\}_{l=1}^3$ from the variational posterior $\prod_{l=1}^3 q(\pmb{\theta}_n^{(l)} \mid \pmb{x}_n, \pmb{\Phi}^{(l+1)}, \pmb{\theta}_n^{(l+1)})$.
First, the top-layer latent variable $\pmb{\theta}^{(3)}$, often capturing general semantic information, is transformed to hidden features $h_i^{(3)}$ for the $i^{th}$ branch:

$$
h_1^{(3)} = F_1^{(3)}\left(\boldsymbol{\theta}^{(3)}\right); \quad h_i^{(3)} = F_i^{(3)}\left(h_{i-1}^{(3)}, \boldsymbol{\theta}^{(3)}\right), \ i = 2, 3, \tag{7}
$$

where $F_{i}^{(l)}$ is a CNN. Second, having obtained $\{h_i^{(3)}\}_{i = 1}^3$, the generators $\{G_i^{(3)}\}_{i = 1}^3$ synthesize low-to-high-resolution image samples $\{\pmb{s}_i^{(3)} = G_i^{(3)}(h_i^{(3)})\}_{i = 1}^3$, where $\pmb{s}_1^{(3)}$, $\pmb{s}_2^{(3)}$, and $\pmb{s}_3^{(3)}$ are of size $16^{2}$, $32^{2}$, and $64^{2}$, respectively. Third, $\pmb{s}_3^{(3)}$ is down-sampled to $\hat{\pmb{s}}_3^{(3)}$ of size $32^{2}$ and combined with the information from $\theta^{(2)}$ to provide the hidden features at layer two:

$$
h_1^{(2)} = C\left(F_1^{(2)}\left(\boldsymbol{\theta}^{(2)}\right), \hat{\boldsymbol{s}}_3^{(3)}\right); \quad h_i^{(2)} = F_i^{(2)}\left(h_{i-1}^{(2)}, \boldsymbol{\theta}^{(2)}\right), \ i = 2, 3, \tag{8}
$$

where $C$ denotes concatenation along the channel dimension. Fourth, the generators synthesize image samples $\{s_i^{(2)} = G_i^{(2)}(h_i^{(2)})\}_{i=1}^3$, where $s_1^{(2)}$, $s_2^{(2)}$, and $s_3^{(2)}$ are of size $32^2$, $64^2$, and $128^2$, respectively. The same process is then replicated at layer one to generate $\{s_i^{(1)} = G_i^{(1)}(h_i^{(1)})\}_{i=1}^3$, where $s_1^{(1)}$, $s_2^{(1)}$, and $s_3^{(1)}$ are of size $64^2$, $128^2$, and $256^2$, respectively, and $s_3^{(1)}$ becomes a desired high-resolution synthesized image with fine details. The detailed structure of raster-scan-GAN is described in Fig. 26 of Appendix D.3. PyTorch code is provided to aid the understanding and help reproduce the results.
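As a complement to that code, here is a shape-only walkthrough of (7)-(8) in which the CNNs $F_i^{(l)}$ and generators $G_i^{(l)}$ are replaced by random placeholders (channel counts and all tensor contents are illustrative); it only verifies the raster-scan resolution schedule, ending at a $256^2$ sample:

```python
import numpy as np

rng = np.random.default_rng(2)

def F(h_prev, theta, res):  # stand-in for the CNN F_i^{(l)}: emits (C, res, res) features
    return rng.standard_normal((8, res, res))

def G(h):                   # stand-in for G_i^{(l)}: features -> RGB image, same resolution
    return rng.standard_normal((3, h.shape[1], h.shape[2]))

def downsample(img):        # halve the resolution by averaging 2x2 blocks
    c, h, w = img.shape
    return img.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

theta = {l: rng.standard_normal(5) for l in (1, 2, 3)}  # dummy theta^{(l)}
base = {3: 16, 2: 32, 1: 64}   # resolution of branch i = 1 at each layer
s_prev = None                  # finest sample passed down from the layer above
for layer in (3, 2, 1):
    h = F(None, theta[layer], base[layer])
    if s_prev is not None:     # (8): concatenate the down-sampled coarse image along channels
        h = np.concatenate([h, downsample(s_prev)], axis=0)
    samples = []
    for i in range(3):         # branches i = 1, 2, 3 double the resolution each time
        if i > 0:
            h = F(h, theta[layer], base[layer] * 2 ** i)
        samples.append(G(h))
    s_prev = samples[-1]
print(s_prev.shape)  # (3, 256, 256): the final high-resolution sample s_3^{(1)}
```

The loop makes the "raster-scan" order explicit: resolution increases within a layer, and the finest sample of each layer seeds the next, semantically finer, layer.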
Different from many existing methods (Gomez et al., 2017; Reed et al., 2016; Xu et al., 2018; Zhang et al., 2017b) whose textual feature extraction is separated from the end task, VHE-raster-scan-GAN performs joint optimization. As described in detail in the Algorithm in Appendix E, at each minibatch-based iteration, after updating $\Phi$ by the topic-layer-adaptive stochastic gradient Riemannian (TLASGR) MCMC of Cong et al. (2017), a Weibull-distribution-based reparameterization gradient (Zhang et al., 2018a) is used to optimize the following objective end-to-end:

$$
\begin{array}{l}
\min_{\{G_i^{(l)}\}_{i,l}, \boldsymbol{\Omega}} \max_{\{D_i^{(l)}\}_{i,l}} \mathbb{E}_{p_{\mathrm{data}}(\boldsymbol{x}_n, \boldsymbol{t}_n)} \mathbb{E}_{\prod_{l=1}^3 q\left(\boldsymbol{\theta}_n^{(l)} \mid \boldsymbol{x}_n, \boldsymbol{\Phi}^{(l+1)}, \boldsymbol{\theta}_n^{(l+1)}\right)} \left\{ -\log p\left(\boldsymbol{t}_n \mid \boldsymbol{\Phi}^{(1)}, \boldsymbol{\theta}_n^{(1)}\right) \right. \\
\quad + \sum_{l=1}^3 \operatorname{KL}\left[ q\left(\boldsymbol{\theta}_n^{(l)} \mid \boldsymbol{x}_n, \boldsymbol{\Phi}^{(l+1)}, \boldsymbol{\theta}_n^{(l+1)}\right) \,\|\, p\left(\boldsymbol{\theta}_n^{(l)} \mid \boldsymbol{\Phi}^{(l+1)}, \boldsymbol{\theta}_n^{(l+1)}\right) \right] \\
\quad \left. + \sum_{l=1}^3 \sum_{i=1}^3 \left[ \log D_i^{(l)}\left(\boldsymbol{x}_{n,i}^{(l)}, \boldsymbol{\theta}_n^{(l)}\right) + \log\left(1 - D_i^{(l)}\left(G_i^{(l)}\left(\boldsymbol{\theta}_n^{(l)}\right), \boldsymbol{\theta}_n^{(l)}\right)\right) \right] \right\}, \tag{9}
\end{array}
$$

where $\{\pmb{x}_{n,i}^{(l)}\}_{i = 1,l = 1}^{3,3}$ denote different resolutions of $\pmb{x}_n$, corresponding to $\{\pmb{s}_{n,i}^{(l)}\}_{i = 1,l = 1}^{3,3}$.

# 2.4 RELATED WORK ON JOINT IMAGE-TEXT LEARNING

Gomez et al. (2017) develop a CNN to learn a transformation from images to textual features pre-extracted by LDA. GANs have been exploited to generate images given pre-learned textual features extracted by RNNs (Denton et al., 2015; Reed et al., 2016; Zhang et al., 2017a; Xu et al., 2018; Zhang et al., 2018b; Li et al., 2019). All these works need a linguistic model pre-trained on large-scale extra text data, and their transformations between images and texts are only unidirectional. The recently proposed Obj-GAN (Li et al., 2019) needs even more side information, such as the locations and labels of objects inside images, which can be difficult and costly to acquire in practice. On the other hand, probabilistic graphical model based methods (Srivastava & Salakhutdinov, 2012b;a; Wang et al., 2018) have been proposed to learn a joint latent space for images and texts to realize bidirectional transformations, but their image generators are often limited to generating low-level image features. By contrast, VHE-raster-scan-GAN performs bidirectional end-to-end learning to capture and relate hierarchical visual and semantic concepts across multiple stochastic layers, and is capable of a wide variety of joint image-text learning and generation tasks, as described below.
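Before turning to the experiments, one implementation detail of the objective in (9) is worth recording: each KL term between a Weibull posterior and its gamma prior admits a closed form (see Zhang et al., 2018a), namely $\operatorname{KL}[\operatorname{Weibull}(k,\lambda)\,\|\,\operatorname{Gam}(\alpha,\beta)] = \frac{\gamma\alpha}{k} - \alpha\ln\lambda + \ln k + \beta\lambda\,\Gamma(1+\tfrac{1}{k}) - \gamma - 1 - \alpha\ln\beta + \ln\Gamma(\alpha)$, with $\gamma$ the Euler-Mascheroni constant and $\beta$ a rate parameter, so no Monte Carlo estimate of the KL is needed during training. A numerical sanity check of this closed form, with scalar parameters chosen arbitrarily:

```python
import math
import numpy as np

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def kl_weibull_gamma(k, lam, alpha, beta):
    """Closed-form KL( Weibull(k, lam) || Gamma(alpha, rate=beta) )."""
    return (EULER * alpha / k - alpha * math.log(lam) + math.log(k)
            + beta * lam * math.gamma(1.0 + 1.0 / k)
            - EULER - 1.0 - alpha * math.log(beta) + math.lgamma(alpha))

# Monte Carlo check: estimate E_q[log q(x) - log p(x)] by sampling from q.
rng = np.random.default_rng(3)
k, lam, alpha, beta = 1.5, 0.8, 2.0, 1.2
x = lam * (-np.log1p(-rng.uniform(size=400_000))) ** (1.0 / k)
log_q = np.log(k / lam) + (k - 1) * np.log(x / lam) - (x / lam) ** k
log_p = alpha * np.log(beta) + (alpha - 1) * np.log(x) - beta * x - math.lgamma(alpha)
analytic, mc = kl_weibull_gamma(k, lam, alpha, beta), float(np.mean(log_q - log_p))
print(round(analytic, 3), round(mc, 3))  # the two estimates should agree closely
```

The agreement between the analytic value and the Monte Carlo estimate confirms the formula term by term.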
# 3 EXPERIMENTAL RESULTS

For joint image-text learning, following previous work, we evaluate the proposed VHE-StackGAN++ and VHE-raster-scan-GAN on three datasets: CUB (Wah et al., 2011), Flower (Nilsback & Zisserman, 2008), and COCO (Lin et al., 2014), as described in Appendix F. Besides the usual text-to-image generation task, due to the distinct bidirectional inference capability of the proposed models, we can perform a rich set of additional tasks such as image-to-text, image-to-image, and noise-to-image-text-pair generations. Due to space constraints, we present below some representative results, and defer additional ones to the Appendix. We provide the details of our experimental settings in Appendix F. PyTorch code is provided at https://github.com/BoChenGroup/VHE-GAN.

Table 1: Inception score (IS, larger is better) and Frechet inception distance (FID, smaller is better) of StackGAN++ (Zhang et al., 2017b), HDGAN (Zhang et al., 2018b), AttnGAN (Xu et al., 2018), Obj-GAN (Li et al., 2019), and the proposed VHE-raster-scan-GAN; the values labeled with * are calculated with the provided well-trained models and the others are quoted from the original publications; see Tab. 5 in Appendix C.1 for the error bars of IS. Note that while the FID of Obj-GAN is the lowest, it does not necessarily imply that it produces high-quality images, as shown in Figs. 13 and 27; this is because FID only measures the similarity in the image feature space, but ignores the shapes of objects and the diversity of the generated images. More discussions can be found in Section 3.1 and Appendix G.
| Dataset | StackGAN++ (IS / FID) | HDGAN (IS / FID) | AttnGAN (IS / FID) | Obj-GAN (IS / FID) | VHE-raster-scan-GAN (IS / FID) |
| --- | --- | --- | --- | --- | --- |
| Flower | 3.26 / 48.68 | 3.45 / 40.12* | - | - | 3.72 / 35.13 |
| CUB | 3.84 / 15.30 | 4.15 / 13.48* | 4.36 / 13.02* | - | 4.41 / 12.02 |
| COCO | 8.30 / 81.59 | 11.86 / 78.16* | 25.89 / 77.01* | 26.58* / 36.98* | 27.16 / 75.88 |

Table 2: Ablation study for text-to-image generation, where the structures of the different variations of raster-scan-GAN are illustrated in Figs. 1(b), 1(d), and 1(e).
| Dataset | PGBN+StackGAN++ (IS / FID) | VHE-vanilla-GAN (IS / FID) | VHE-StackGAN++ (IS / FID) | VHE-simple-raster-scan-GAN (IS / FID) |
| --- | --- | --- | --- | --- |
| Flower | 3.29 / 41.04 | 3.01 / 52.15 | 3.56 / 38.66 | 3.62 / 36.18 |
| CUB | 3.92 / 13.79 | 3.52 / 21.24 | 4.20 / 12.93 | 4.31 / 12.35 |
| COCO | 10.63 / 79.65 | 6.36 / 97.15 | 12.63 / 78.02 | 20.13 / 77.18 |
![](images/0db4acd2665d3e41ae4a4694a94182572a1392defc8e191580fec50b627a5cce.jpg)
Figure 2: Comparison on image generation given texts from CUB, Flower, and COCO. Shown in the top row are the textual descriptions and their associated real images; see Appendix C.2 for higher-resolution images. Note that AttnGAN did not perform experiments on Flower and hence its results on Flower are not shown, and since Obj-GAN only performed experiments on COCO, we defer its visual results to Appendix C.3.

# 3.1 TEXT-TO-IMAGE LEARNING

Although the proposed VHE-GANs do not have a text encoder to directly project a document to the shared latent space, given a document and a set of topics inferred during training, we use the upward-downward Gibbs sampler of Zhou et al. (2016) to draw $\{\pmb{\theta}^{(l)}\}_{l=1}^{L}$ from its conditional posterior under PGBN, which are then fed into the GAN image generator to synthesize random images.

Text-to-image generation: In Tab. 1, with the inception score (IS) (Salimans et al., 2016) and Frechet inception distance (FID) (Heusel et al., 2017), we compare our models with state-of-the-art GANs in text-to-image generation. For visualization, we show in the top row of Fig. 2 different test textual descriptions and the real images associated with them, and in the other rows random images generated conditioned on these textual descriptions by different algorithms. Higher-resolution images are shown in Appendix C.2. We also provide example results on COCO, a much more challenging dataset, in Fig. 13 of Appendix C.3.

It is clear from Fig. 2 that although both StackGAN++ (Zhang et al., 2017b) and HDGAN (Zhang et al., 2018b) generate photo-realistic images nicely matched to the given texts, they often misrepresent or ignore some key textual information, such as "black crown" for the 2nd test text, "yellow pistil" for the 5th, "yellow stamen" for the 6th, and "computer" for the 7th. These observations also apply to AttnGAN (Xu et al., 2018).
By contrast, both the proposed VHE-StackGAN++ and VHE-raster-scan-GAN do a better job in capturing and faithfully representing this key textual information in their generated images. Fig. 13 for COCO further shows the advantages of VHE-raster-scan-GAN in better

![](images/ca9592148e475f29a75af750b2af37473afc50ceb0031d520b0723579ed45e32.jpg)
(a)

Class name: Rhinoceros Auklet
![](images/ebe8954f54a2a584042f29c58eb15d098b7c18cb063b6071d7294c413a201410.jpg)
It is a seabird, nesting in seabird colonies, with a large orange/brown bill. Plumage is dark on top and paler below, in offshore and inshore water. Sometimes it swim in the water and sometimes it stand on the strong.

![](images/a0ee6278e956c72276e9319a5d2796f3a344b3a3d4e5d7346626a90d8c32a033.jpg)
(b)

![](images/6bad451d4d33af50686d6565d19c7426e48b38d7315c8f8843af9e4ed2b26861.jpg)

(c)
Figure 3: Example results of VHE-raster-scan-GAN on three different tasks: (a) image generation given five textual attributes; (b) image generation given a long class-specific document (showing three representative sentences for brevity) from CUB; and (c) latent space interpolation for joint image-text generation on CUB (left column) and Flower (right column), where the texts in the first and last row are given.
![](images/9f19779cf459e658b4f48e9325248147a80eac2cb50ede06f6e422fe3bfbbe8a.jpg)
(The interpolated texts of panel (c) run from "red grey brown dark long large" to "yellow blue black body long colored" on CUB, and from "red pink large petal stamen light" to "yellow long green stamen petal dark" on Flower.)

![](images/3b90468711eaead46bb58eddedb34063f4da75597cb2633fc408f535cd792273.jpg)

representing the given textual information in its generated images. Note that Obj-GAN, which learns a bounding box generator that restricts object locations, obtains the lowest FID on COCO.
However, it appears that this type of restriction significantly improves FID at the expense of the diversity of the images generated for a given text, as shown in Fig. 27 of Appendix G. From the results in Fig. 13, it also appears that Obj-GAN overly emphasizes correctly arranging the spatial locations of different visual features, which is important for achieving a low FID, but does not do well in generating correct object shapes, which is important for visual quality. Moreover, the training of Obj-GAN requires additional side information, including the locations and labels of objects in the images, which is often not provided in practice (e.g., neither CUB nor Flower comes with this type of side information). While the proposed VHE-GAN models do not need this additional side information, they could be further improved by following Obj-GAN in taking it into consideration when available.

As discussed in Section 2.2, compared with sequence models, topic models can be applied to more diverse textual descriptions, including textual attributes and long documents. For illustration, we show in Figs. 3(a) and 3(b) example images generated conditioned on a set of textual attributes and on an encyclopedia document, respectively. These synthesized images are photo-realistic and their visual contents well match the semantics of the given texts. For a model trained on CelebA (Liu et al., 2015), we provide in Fig. 9 examples of facial image generation given attributes; see Appendix B for details.

Ablation studies: We also consider several ablation studies for text-to-image generation, as shown in Tab. 2. First, we modify StackGAN++ (Zhang et al., 2017b) by replacing its original RNN-extracted text features with those extracted by PGBN, referred to as PGBN+StackGAN++.
It is clear that PGBN+StackGAN++ outperforms the original StackGAN++, but underperforms VHE-StackGAN++, which can be explained by two factors: 1) the PGBN deep topic model is more effective than RNNs in extracting macro-level textual information, such as key words; and 2) jointly training the textual feature extractor together with the image encoder, discriminator, and generator end-to-end helps better capture and relate the visual and semantic concepts. Second, note that VHE-StackGAN++ has the same structured image generator as StackGAN++ and HDGAN, but performs better than both. We attribute its performance gain to 1) its PGBN deep topic model, which helps better capture key semantic information from the textual descriptions; and 2) its end-to-end joint image-text learning via the VHE-GAN framework, rather than separating the extraction of textual features from text-to-image generation. Third, VHE-vanilla-GAN underperforms VHE-StackGAN++, suggesting that the stacking structure is helpful for generating high-resolution images, as previously verified in Zhang et al. (2017a). VHE-simple-raster-scan-GAN outperforms VHE-StackGAN++ but underperforms VHE-raster-scan-GAN, confirming the benefits of combining the stacking and raster-scan structures. More visual results for the ablation studies can be found in Appendix C.2. Below we focus on illustrating the outstanding performance of VHE-raster-scan-GAN.

Latent space interpolation: In order to understand the jointly learned image and text manifolds, given texts $t_1$ and $t_2$, we draw $\theta_1$ and $\theta_2$ and use the interpolated variables between them to generate both images via the GAN's image generator and texts via the PGBN text decoder. As in Fig. 3(c), the first row shows the true texts $t_1$ and the images generated with $\theta_1$, the last row shows $t_2$ and the images generated with $\theta_2$, and the second to fourth rows show the generated texts and images with the interpolations from $\theta_1$ to $\theta_2$.
The strong correspondences between the generated images and texts, with smooth changes in colors, object positions, and backgrounds between adjacent rows, suggest that the latent space of VHE-raster-scan-GAN is both visually and semantically meaningful. Additional, finer-grained latent space interpolation results are shown in Figs. 15-18 of Appendix C.4.

Visualization of captured semantic and visual concepts: Zhou et al. (2016) show that the semantic concepts extracted by PGBN and their hierarchical relationships can be represented as a DAG, only a subnet of which will be activated given a specific text input. In each subplot of Fig. 4, we visualize example topic nodes of the DAG subnet activated by the given text input, and show the corresponding images generated at different hidden layers. There is a good match at each layer between the visual contents of the generated images and the semantics of the top activated topics, which are mainly about general shapes, colors, or backgrounds at the top layer, and become more and more fine-grained when moving downward. In Fig. 5, for the DAG learned on COCO, we show a representative subnet that is rooted at a top-layer node about "rooms and objects at home," and provide both semantic and visual representations for each node. Being able to capture and relate hierarchical semantic and visual concepts helps explain the state-of-the-art performance of VHE-raster-scan-GAN.

# 3.2 IMAGE-TO-TEXT LEARNING

VHE-raster-scan-GAN can perform a wide variety of extra tasks, such as image-to-text generation, text-based zero-shot learning (ZSL), and image retrieval given a text query. In particular, given an image $\pmb{x}_n$, we draw $\hat{\pmb{t}}_n$ via $\pmb{\theta}_n \mid \pmb{x}_n \sim q_{\Omega}(\pmb{\theta} \mid \pmb{\Phi}, \pmb{x}_n)$ and $\hat{\pmb{t}}_n \mid \pmb{\theta}_n \sim p(\pmb{t} \mid \pmb{\Phi}, \pmb{\theta}_n)$, and use it for downstream tasks.
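The two-step draw just described can be sketched end-to-end as follows; the encoder is stubbed out, and the vocabulary, sizes, and all parameter values are illustrative rather than learned:

```python
import numpy as np

rng = np.random.default_rng(4)

vocab = np.array(["bird", "red", "crown", "petal", "yellow", "white", "beak", "wing"])
K0, K1 = len(vocab), 4
Phi1 = rng.dirichlet(np.ones(K0), size=K1).T  # K0 x K1 layer-one topic matrix

def encode_image(x):
    """Stand-in for theta_n ~ q_Omega(theta | Phi, x_n): a real encoder maps
    CNN features of x_n to Weibull parameters and samples theta; here we
    return fixed topic weights for illustration."""
    return np.array([2.0, 0.1, 0.1, 0.5])

theta1 = encode_image(x=None)              # theta^{(1)} for a (hypothetical) image
rates = Phi1 @ theta1                      # Poisson rate of every vocabulary word
t_hat = rng.poisson(rates)                 # t_hat ~ Pois(Phi^{(1)} theta^{(1)})
tags = vocab[np.argsort(rates)[::-1][:3]]  # highest-rate words serve as predicted tags
print(list(tags))
```

In the real model the same $\hat{\pmb{t}}_n$ (or its top words) is what feeds the image-to-text, ZSL, and retrieval tasks below.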
Image-to-text generation: Given an image, we may generate some key words, as shown in Fig. 6(a), where the true and generated ones are displayed on the left and right of the input image, respectively. It is clear that VHE-raster-scan-GAN successfully captures the object colors, shapes, locations, and backgrounds to predict relevant key words.

Text-based ZSL: Text-based ZSL is a task that learns a relationship between images and texts on the seen classes and transfers it to the unseen ones (Fu et al., 2018). We follow the same settings on CUB and Flower as the existing text-based ZSL methods summarized in Tab. 3. There are two default splits for CUB, the hard one (CUB-H) and the easy one (CUB-E), and one split setting for Flower, as described in Appendix F. Note that except for our models, which infer a shared semantically meaningful latent space between the two modalities, none of the other methods have generative models for both modalities, regardless of whether they learn a classifier or a distance metric in a latent space for ZSL. Tab. 3 shows that VHE-raster-scan-GAN clearly outperforms the state of the art in terms of the Top-1 accuracy on both CUB-H and Flower, and is comparable to the second best on CUB-E (it is the best among all methods that have reported their Top-5 accuracies on CUB-E). Note that for CUB-E, every unseen class has some corresponding seen classes under the same super-category, which makes the classification surface or distance metric learned on the seen classes easier to generalize to the unseen ones. We also note that both GAZSL and ZSLPP rely on visual part detection to extract image features, making their performance sensitive to the quality of the visual part detector, which often has to be elaborately tuned for different classes and hence limits their generalization ability; for example, the visual part detector for birds is not suitable for flowers. Tab. 3 also includes the results of ZSL using VHE, which show that given the same structured text decoder and image encoder, VHE consistently underperforms both VHE-StackGAN++ and VHE-raster-scan-GAN. This suggests 1) the advantage of jointly generating the two modalities, and 2) the ability of the GAN to help the VHE achieve a better data representation. The results in Tab. 3 also show that the ZSL performance of VHE-raster-scan-GAN has a clear trend of improvement as PGBN becomes deeper, suggesting the advantage of having a multi-stochastic-hidden-layer deep topic model for text generation. We also collect the ZSL results of the last 1000 mini-batch based stochastic gradient update iterations to calculate the error bars. For existing methods, since no error bars are provided in the published papers, we only report error bars for the methods with publicly accessible code.

# 3.3 IMAGE/TEXT RETRIEVAL

As discussed in Section 2.4, the proposed models are able to infer the shared latent space given either an image or a text. We test both VHE-StackGAN++ and VHE-raster-scan-GAN on the same image/text

![](images/326e86dfe1aa755272b2f80cb0c24a17654cf87b931a80f98fbf600ee8e2be6a.jpg)
Figure 4: Visualization of example semantic and visual concepts captured by a three-stochastic-hidden-layer VHE-raster-scan-GAN from (a) Flower, (b) Bird, and (c) COCO. In each subplot, given the real text $\pmb{t}_n$ shown at the bottom, we draw $\{\pmb{\theta}_n^{(l)}\}_{l=1}^3$ via Gibbs sampling; we show the three most active topics in $\Phi^{(l)}$ (ranked by the weights of $\pmb{\theta}_n^{(l)}$) at layer $l = 3, 2, 1$, where each topic is visualized by its top three words; and we feed $\{\pmb{\theta}_n^{(l)}\}_{l=1}^3$ into raster-scan-GAN to generate three random images (one per layer, coarse to fine from layers 3 to 1).

Table 3: Accuracy (%) of ZSL on CUB and Flower.
Note that some of them are attribute-based methods but applicable in our setting by replacing attribute vectors with text features (labeled by *), as discussed in (Elhoseiny et al., 2017b).
| Method | CUB-H (top-1) | CUB-E (top-1) | CUB-E (top-5) | Flower (top-1) |
| --- | --- | --- | --- | --- |
| WAC-Kernel (Elhoseiny et al., 2017a) | 7.7 ± 0.28 | 33.5 ± 0.22 | 64.3 ± 0.20 | 9.1 ± 2.77 |
| ZSLNS (Qiao et al., 2016) | 7.3 ± 0.36 | 29.1 ± 0.28 | 61.8 ± 0.22 | 8.7 ± 2.46 |
| ESZSL* (Romeraparedes & Torr, 2015) | 7.4 ± 0.31 | 28.5 ± 0.26 | 59.9 ± 0.20 | 8.6 ± 2.53 |
| SynC* (Changpinyo et al., 2016) | 8.6 | 28.0 | 61.3 | 8.2 |
| ZSLPP (Elhoseiny et al., 2017b) | 9.7 | 37.2 | - | - |
| GAZSL (Zhu et al., 2018) | 10.3 ± 0.26 | 43.7 ± 0.28 | 67.61 ± 0.24 | - |
| VHE-L3 | 14.0 ± 0.24 | 34.6 ± 0.25 | 64.6 ± 0.20 | 8.9 ± 1.57 |
| VHE-StackGAN++-L3 | 16.1 | 38.5 | 68.2 | 10.6 |
| VHE-raster-scan-GAN-L1 | 11.7 ± 0.31 | 32.1 ± 0.32 | 62.6 ± 0.33 | 9.4 ± 1.68 |
| VHE-raster-scan-GAN-L2 | 14.9 ± 0.26 | 37.1 ± 0.24 | 64.6 ± 0.25 | 11.0 ± 1.54 |
| VHE-raster-scan-GAN-L3 | 16.7 ± 0.24 | 39.6 ± 0.20 | 70.3 ± 0.18 | 12.1 ± 1.47 |

Table 4: Comparison of the image-to-text retrieval performance, measured by Top-1 accuracy, and the text-to-image retrieval performance, measured by AP@50, between different methods on CUB-E.
| Method | Top-1 Acc (%) | AP@50 (%) |
| --- | --- | --- |
| CNN-LSTM (Li et al., 2017) | 61.5 | 57.6 |
| AttnGAN (Xu et al., 2018) | 55.1 | 51.0 |
| TA-GAN (Nam et al., 2018) | 61.3 | 62.8 |
| VHE-StackGAN++ | 60.2 | 61.3 |
| VHE-raster-scan-GAN | 61.7 | 62.6 |

retrieval tasks as in TA-GAN (Nam et al., 2018), where we use the cosine distance between the latent representations inferred from images ($q(\theta \mid x)$, image encoder) and from the given texts ($p(\theta \mid t)$, Gibbs sampling) to compute the similarity scores. As in TA-GAN, the top-1 image-to-text retrieval accuracy (Top-1 Acc) and the percentage of matching images in the top-50 text-to-image retrieval results (AP@50) on the CUB-E dataset are used to measure the performance. As shown in Table 4, VHE-raster-scan-GAN clearly outperforms AttnGAN (Xu et al., 2018) and is comparable with TA-GAN. Note that TA-GAN needs to extract its text features with the fastText model (Bojanowski et al., 2017) pre-trained on a large corpus, while VHE-raster-scan-GAN learns everything directly from the current dataset in an end-to-end manner. Also, VHE-raster-scan-GAN outperforms VHE-StackGAN++, which further confirms the benefits of combining the stacking and raster-scan structures.

# 3.4 GENERATION OF RANDOM TEXT-IMAGE PAIRS

Below we show how to generate data samples that contain both modalities. After training a three-stochastic-hidden-layer VHE-raster-scan-GAN, following the data generation process of the PGBN text decoder, given $\{\Phi^{(l)}\}_{l = 1}^3$ and $\pmb{r}$, we first generate $\pmb{\theta}^{(3)} \sim \mathrm{Gam}\left(\pmb{r}, 1 / s^{(4)}\right)$ and then downward propagate it through the PGBN as in (5) to calculate the Poisson rates for all words using $\Phi^{(1)}\pmb{\theta}^{(1)}$. Given a random draw, $\{\pmb{\theta}^{(l)}\}_{l = 1}^3$ is fed into the raster-scan-GAN image generator to generate a
We sample $\theta_{n}^{(1:3)} \sim q(\pmb{\theta}_{n}^{(1:3)} \mid \pmb{\Phi}, \pmb{x}_{n})$ for all $n$ ; for topic node $k$ of layer $l$ , we show both its top words and the top two images ranked by their activations $\theta_{nk}^{(l)}$ . +(a) + +![](images/09ef9be7c34cd88f8560a9b32fc3671114d461d9186f2440005394e95eeaf3cb.jpg) +(b) +Figure 6: Example results of using VHE-raster-scan-GAN for (a) image-to-textual-tags generation, where the generated tags highlighted in red are included in the original ones; (b) image-text-pair generations (columns from left to right are based on Flower, CUB, and COCO, respectively). + +corresponding image. Shown in Fig. 6(b) are six random draws, for each of which we show its top seven words and generated image, whose relationships are clearly interpretable, suggesting that VHE-raster-scan-GAN is able to recode the key information of both modalities and the relationships between them. In addition to the tasks shown above, VHE-raster-scan-GAN can also be used to perform image retrieval given a text query, and image regeneration; see Appendices C.5 and C.6 for example results on these additional tasks. + +# 4 CONCLUSION + +We develop variational hetero-encoder randomized generative adversarial network (VHE-GAN) to provide a plug-and-play joint image-text modeling framework. VHE-GAN is a versatile deep generative model that integrates off-the-shelf image encoders, text decoders, and GAN image discriminators and generators into a coherent end-to-end learning objective. It couples its VHE and GAN components by feeding the VHE variational posterior in lieu of noise as the source of randomness of the GAN generator. 
We show that VHE-StackGAN++, which combines the Poisson gamma belief network (a deep topic model) with StackGAN++, achieves competitive performance. VHE-raster-scan-GAN, which further improves on VHE-StackGAN++ by exploiting the semantically meaningful hierarchical structure of the deep topic model, generates photo-realistic images not only in a multi-scale low-to-high-resolution manner, but also in a hierarchical-semantic coarse-to-fine fashion, achieving outstanding results in many challenging image-to-text, text-to-image, and joint text-image learning and generation tasks.
+
+# ACKNOWLEDGEMENTS
+
+B. Chen acknowledges the support of the Program for Young Thousand Talent by Chinese Central Government, the 111 Project (No. B18039), NSFC (61771361), NSFC for Distinguished Young Scholars (61525105), Shaanxi Innovation Team Project, and the Innovation Fund of Xidian University. M. Zhou acknowledges the support of the U.S. National Science Foundation under Grant IIS-1812699.
+
+# REFERENCES
+
+Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embeddings for fine-grained image classification. In CVPR, pp. 2927-2936, 2015.
+Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(6):1137-1155, 2003.
+David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
+David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859-877, 2017.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146, 2017.
+Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, and Fei Sha. Synthesized classifiers for zero-shot learning. In CVPR, pp. 5327-5336, 2016.
+
+Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. In ICLR, 2017.
+Yulai Cong, Bo Chen, Hongwei Liu, and Mingyuan Zhou. Deep latent Dirichlet allocation with topic-layer-adaptive stochastic gradient Riemannian MCMC. In ICML, 2017.
+Emily L Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pp. 1486-1494, 2015.
+Adji B Dieng, Chong Wang, Jianfeng Gao, and John Paisley. TopicRNN: A recurrent neural network with long-range semantic dependency. In ICLR, 2017.
+Jeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. In ICLR, 2017.
+Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron C Courville. Adversarially learned inference. In ICLR, 2017.
+Mohamed Elhoseiny, Ahmed M Elgammal, and Babak Saleh. Write a classifier: Predicting visual classifiers from unstructured text. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2539-2553, 2017a.
+Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, and Ahmed M Elgammal. Link the head to the "beak": Zero shot learning from noisy text description at part precision. In CVPR, pp. 6288-6297, 2017b.
+Yanwei Fu, Tao Xiang, Yu Gang Jiang, Xiangyang Xue, Leonid Sigal, and Shaogang Gong. Recent advances in zero-shot recognition. IEEE Signal Processing Magazine, 35, 2018.
+Lluis Gomez, Yash Patel, Marcal Rusinol, Dimosthenis Karatzas, and C V Jawahar. Self-supervised learning of visual features through embedding images into text topic spaces. In CVPR, pp. 2017-2026, 2017.
+Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672-2680, 2014.
+
+Aditya Grover, Manik Dhar, and Stefano Ermon. Flow-GAN: Combining maximum likelihood and adversarial learning in generative models.
In AAAI, pp. 3069-3076, 2018. +Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron C Courville. PixelVAE: A latent variable model for natural images. In ICLR, 2017. +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, pp. 6626-6637, 2017. +Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735-1780, 1997. +Matthew D Hoffman and Matthew J Johnson. ELBO surgery: Yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS, 2016. +Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303-1347, 2013. +Huaibo Huang, Zhihang Li, Ran He, Zhenan Sun, and Tieniu Tan. IntroVAE: Introspective variational autoencoders for photographic image synthesis. In NeurIPS, 2018. +Junqi Jin, Kun Fu, Runpeng Cui, Fei Sha, and Changshui Zhang. Aligning where to see and what to tell: Image caption with region-based attention and scene factorization. In CVPR, 2015. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Diederik P Kingma and Max Welling. Stochastic gradient VB and the variational auto-encoder. In ICLR, 2014. +Ryan Kiros and Csaba Szepesvari. Deep representations and codes for image auto-annotation. pp. 908-916, 2012. +Anders Boesen Lindbo Larsen, Soren Kaae Sonderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, pp. 1558-1566, 2016. +Jey Han Lau, Timothy Baldwin, and Trevor Cohn. Topically driven neural language model. In ACL, pp. 355-365, 2017. +Shuang Li, Tong Xiao, Hongsheng Li, Wei Yang, and Xiaogang Wang. Identity-aware textual-visual matching with latent co-attention. 
In Proceedings of the IEEE International Conference on Computer Vision, pp. 1890-1899, 2017.
+Wenbo Li, Pengchuan Zhang, Lei Zhang, Qiuyuan Huang, Xiaodong He, Siwei Lyu, and Jianfeng Gao. Object-driven text-to-image synthesis via adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12174-12182, 2019.
+Tsung-Yi Lin, Michael Maire, Serge J Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, pp. 740-755, 2014.
+Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
+Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
+Lars Mescheder, S Nowozin, and Andreas Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. In ICML, pp. 2391-2400. PMLR, 2017.
+Seonghyeon Nam, Yunji Kim, and Seon Joo Kim. Text-adaptive generative adversarial networks: Manipulating images with natural language. In Advances in Neural Information Processing Systems, pp. 42-51, 2018.
+
+Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP'08. Sixth Indian Conference on, pp. 722-729. IEEE, 2008.
+Ruizhi Qiao, Lingqiao Liu, Chunhua Shen, and Anton Van Den Hengel. Less is more: Zero-shot learning from online textual documents with noise suppression. In CVPR, pp. 2249-2257, 2016.
+Scott E Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In ICML, pp. 1060-1069, 2016.
+Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra.
Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278-1286, 2014. +Bernardino Romeraparedes and Philip H S Torr. An embarrassingly simple approach to zero-shot learning. In ICML, pp. 2152-2161, 2015. +Tim Salimans, Ian J Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, pp. 2234-2242, 2016. +Akash Srivastava, Lazar Valkoz, Chris Russell, Michael U Gutmann, and Charles A Sutton. VEEGAN: Reducing mode collapse in GANs using implicit variational learning. In NIPS, pp. 3308-3318, 2017. +Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep Boltzmann machines. In NIPS, pp. 2222-2230, 2012a. +Nitish Srivastava and Ruslan Salakhutdinov. Learning representations for multimodal data with deep belief nets. In NIPS workshop, pp. 2222-2230, 2012b. +Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep Boltzmann machines. Journal of Machine Learning Research, 15(1):2949-2980, 2014. +Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, pp. 2818-2826, 2016. +Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. In ICLR, 2018. +Vinay Kumar Verma, Gundeep Arora, Ashish Kumar Mishra, and Piyush Rai. Generalized zero-shot learning via synthesized examples. In CVPR, pp. 4281-4289, 2018. +C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, 2011. +Chaojie Wang, Bo Chen, and Mingyuan Zhou. Multimodal Poisson gamma belief network. In AAAI, 2018. +Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. Multi-document summarization using sentence-based topic models. In ACL, pp. 297-300, 2009. +Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 
AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In CVPR, pp. 1316-1324, 2018. +Han Zhang, Tao Xu, and Hongsheng Li. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In CVPR, 2017a. +Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas. StackGAN++: Realistic image synthesis with stacked generative adversarial networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1-1, 2017b. +Hao Zhang, Bo Chen, Dandan Guo, and Mingyuan Zhou. WHAI: Weibull hybrid autoencoding inference for deep topic modeling. In ICLR, 2018a. + +Zizhao Zhang, Yuanpu Xie, and Lin Yang. Photographic text-to-image synthesis with a hierarchically-nested adversarial network. In CVPR, 2018b. +Mingyuan Zhou. Infinite edge partition models for overlapping community detection and link prediction. In AISTATS, pp. 1135-1143, 2015. +Mingyuan Zhou and Lawrence Carin. Negative binomial process count and mixture modeling. IEEE Trans. Pattern Anal. Mach. Intell., 37(2):307-320, 2015. +Mingyuan Zhou, Lauren Hannah, David Dunson, and Lawrence Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, pp. 1462-1471, 2012. +Mingyuan Zhou, Yulai Cong, and Bo Chen. Augmentable gamma belief networks. Journal of Machine Learning Research, 17(163):1-44, 2016. +Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, and Ahmed M Elgammal. A generative adversarial approach for zero-shot learning from noisy texts. In CVPR, 2018. + +# A MODEL PROPERTY OF VHE-GAN AND RELATED WORK + +Let us denote $q(\boldsymbol{z}) = \mathbb{E}_{\boldsymbol{x} \sim p_{\mathrm{data}}(\boldsymbol{x})}[q(\boldsymbol{z} \mid \boldsymbol{x})] = \frac{1}{N} \sum_{n=1}^{N} q(\boldsymbol{z} \mid \boldsymbol{x}_n)$ as the aggregated posterior (Hoffman & Johnson, 2016; Makhzani et al., 2015). 
Removing the triple-data-reuse training strategy, we can re-express the VHE-GAN objective in (4) as
+
+$$
+\min_{E, G_{\mathrm{vae}}, G_{\mathrm{gan}}} \max_{D} \left[ -\operatorname{ELBO}_{\mathrm{vhe}} + \mathcal{L}_{\mathrm{gan}} \right], \quad \mathcal{L}_{\mathrm{gan}} := \mathbb{E}_{\boldsymbol{x} \sim p_{\mathrm{data}}(\boldsymbol{x})} \ln D(\boldsymbol{x}) + \mathbb{E}_{\boldsymbol{z} \sim q(\boldsymbol{z})} \ln\left(1 - D(G_{\mathrm{gan}}(\boldsymbol{z}))\right), \tag{10}
+$$
+
+which corresponds to a naive combination of the VHE and GAN training objectives, where the data samples used to train the VHE, GAN generator, and GAN discriminator in each gradient update iteration are not imposed to be the same. While the naive objective function in (10) differs from the true one in (4) that is used to train VHE-GAN, it simplifies the analysis of its theoretical properties, as described below.
+
+Let us denote $q(\boldsymbol{z}, \boldsymbol{x}, t) \coloneqq q(\boldsymbol{z} \mid \boldsymbol{x}) p_{data}(\boldsymbol{x}, t)$ as the joint distribution of $(\boldsymbol{x}, t)$ and $\boldsymbol{z}$ under the VHE variational posterior $q(\boldsymbol{z} \mid \boldsymbol{x})$ , $I_q(\boldsymbol{x}, \boldsymbol{z}) \coloneqq \mathbb{E}_{q(\boldsymbol{z}, \boldsymbol{x})}\left[\ln \frac{q(\boldsymbol{z}, \boldsymbol{x})}{q(\boldsymbol{z}) p_{data}(\boldsymbol{x})}\right]$ as the mutual information between $\boldsymbol{x} \sim p_{data}(\boldsymbol{x})$ and $\boldsymbol{z} \sim q(\boldsymbol{z})$ , and $\mathrm{JSD}(p_1||p_2) \coloneqq \frac{1}{2} \mathrm{KL}[p_1||(p_1 + p_2)/2] + \frac{1}{2} \mathrm{KL}[p_2||(p_1 + p_2)/2]$ as the Jensen-Shannon divergence between distributions $p_1$ and $p_2$ .
Similar to the analysis in Hoffman & Johnson (2016), the VHE's ELBO can be rewritten as $\mathrm{ELBO}_{\mathrm{vhe}} = \mathbb{E}_{q(\boldsymbol{z}, \boldsymbol{x}, t)}[\log p(\boldsymbol{t} \mid \boldsymbol{z})] - I_q(\boldsymbol{x}, \boldsymbol{z}) - \mathrm{KL}[q(\boldsymbol{z})||p(\boldsymbol{z})]$ , where the mutual information term can also be expressed as $I_q(\boldsymbol{x}, \boldsymbol{z}) = \mathbb{E}_{\boldsymbol{x} \sim p_{data}(\boldsymbol{x})} \mathrm{KL}[q(\boldsymbol{z} \mid \boldsymbol{x})||q(\boldsymbol{z})]$ . Thus maximizing the ELBO encourages the mutual information term $I_q(\boldsymbol{x}, \boldsymbol{z})$ to be minimized, which means that while the data reconstruction term $\mathbb{E}_{q(\boldsymbol{z}, \boldsymbol{x}, t)}[\log p(\boldsymbol{t} \mid \boldsymbol{z})]$ needs to be maximized, part of the VHE optimization objective penalizes a $\boldsymbol{z}$ for carrying the information of the $\boldsymbol{x}$ that it is encoded from. This mechanism helps provide the necessary regularization to prevent overfitting. As in Goodfellow et al. (2014), with an optimal discriminator $D_G^*$ for generator $G$ , we have $\mathcal{L}_{\mathrm{gan}}(D_G^*, G) = -\ln 4 + 2\mathrm{JSD}(p_{data}(\boldsymbol{x})||p_{G_z}(\boldsymbol{x}))$ , where $p_{G_z}(\boldsymbol{x})$ denotes the distribution of the generated data $G(\boldsymbol{z})$ that uses $z \sim q(\boldsymbol{z})$ as the random source fed into the GAN generator. The JSD term is minimized when $p_{G_z}(\boldsymbol{x}) = p_{data}(\boldsymbol{x})$ .
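The identity $I_q(\boldsymbol{x}, \boldsymbol{z}) = \mathbb{E}_{\boldsymbol{x} \sim p_{data}(\boldsymbol{x})} \mathrm{KL}[q(\boldsymbol{z} \mid \boldsymbol{x})||q(\boldsymbol{z})]$ used above is easy to verify numerically on a discrete toy problem; the distributions below are arbitrary illustrative choices, not anything from the paper.

```python
import numpy as np

p_x = np.array([0.5, 0.3, 0.2])           # p_data over three "images"
q_z_given_x = np.array([[0.7, 0.2, 0.1],  # rows: q(z | x) over three codes
                        [0.1, 0.8, 0.1],
                        [0.3, 0.3, 0.4]])

q_z = p_x @ q_z_given_x                   # aggregated posterior q(z) = E_x[q(z|x)]
joint = p_x[:, None] * q_z_given_x        # q(z, x) = q(z|x) p_data(x)

# Mutual information I_q(x, z) from its definition ...
I_q = np.sum(joint * np.log(joint / (p_x[:, None] * q_z[None, :])))
# ... equals the average KL of the conditional posterior from the aggregate.
avg_kl = np.sum(p_x * np.sum(q_z_given_x * np.log(q_z_given_x / q_z), axis=1))

assert np.isclose(I_q, avg_kl) and I_q > 0
```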
+
+With these analyses, given an optimal GAN discriminator, the naive VHE-GAN objective function in (10) reduces to
+
+$$
+\min_{E, G_{\mathrm{gan}}, G_{\mathrm{vae}}} -\mathbb{E}_{q(\boldsymbol{z}, \boldsymbol{x}, t)}[\log p(\boldsymbol{t} \mid \boldsymbol{z})] + \mathrm{KL}[q(\boldsymbol{z})||p(\boldsymbol{z})] + I_{q}(\boldsymbol{x}, \boldsymbol{z}) + 2\mathrm{JSD}(p_{\mathrm{data}}(\boldsymbol{x})||p_{G_{\boldsymbol{z}}}(\boldsymbol{x})). \tag{11}
+$$
+
+From the VHEs' point of view, examining (11) shows that it alleviates the inherent conflict in the VHE between maximizing the ELBO and maximizing the mutual information $I_{q}(\pmb{x},\pmb{z})$ . This is because while the VHE part of VHE-GAN still relies on minimizing $I_{q}(\pmb{x},\pmb{z})$ to regularize the learning, the GAN part tries to transform $q(z)$ through the GAN generator to match the true data distribution $p_{data}(\pmb{x})$ . In other words, while its VHE part penalizes a $z$ for carrying information about the $\pmb{x}$ that it is encoded from, its GAN part encourages a $z$ to carry information about the true data distribution $p_{data}(\pmb{x})$ , but not necessarily about the observed $\pmb{x}$ that it is encoded from.
+
+From the GANs' point of view, examining (11) shows that it provides the GAN with a meaningful latent space, necessary for performing inference and data reconstruction (with the aid of the triple-data-reuse training strategy). More specifically, this latent representation is also used by the VHE to maximize the data log-likelihood, a training procedure that tries to cover all modes of the empirical data distribution rather than dropping modes. For VHE-GAN (4), the source distribution is $q(z \mid x)$ , which not only allows GANs to participate in posterior inference and data reconstruction, but also helps GANs resist mode collapse. In the following, we discuss related work on combining VAEs and GANs.
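The Jensen-Shannon term in (11) follows directly from its definition; a minimal discrete sketch (the probability vectors below are arbitrary illustrative choices), showing that the term vanishes exactly when the generated distribution matches the data distribution:

```python
import numpy as np

def kl(p, q):
    """KL[p || q] for discrete distributions given as 1-D arrays."""
    return float(np.sum(p * np.log(p / q)))

def jsd(p1, p2):
    """Jensen-Shannon divergence, exactly as defined in the text."""
    m = (p1 + p2) / 2
    return 0.5 * kl(p1, m) + 0.5 * kl(p2, m)

p_data = np.array([0.5, 0.3, 0.2])
p_gen = np.array([0.2, 0.5, 0.3])

# Symmetric, non-negative, bounded by ln 2, and zero exactly at a match.
assert np.isclose(jsd(p_data, p_gen), jsd(p_gen, p_data))
assert jsd(p_data, p_data) == 0.0
assert 0.0 <= jsd(p_data, p_gen) <= np.log(2)
```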
+
+# A.1 RELATED WORK ON COMBINING VAES AND GANS
+
+Examples of improving VAEs with adversarial learning include Mescheder et al. (2017), which allows the VAE to take an implicit encoder distribution, and the adversarial auto-encoder (Makhzani et al., 2015) and Wasserstein auto-encoder (Tolstikhin et al., 2018), which drop the mutual information term from the ELBO and use adversarial learning to match the aggregated posterior and the prior. Examples of allowing GANs to perform inference include Dumoulin et al. (2017) and Donahue et al. (2017), which use GANs to match the joint distribution $q(\boldsymbol{z} \mid \boldsymbol{x}) p_{data}(\boldsymbol{x})$ defined by the encoder and the one $p(\boldsymbol{x} \mid \boldsymbol{z}) p(\boldsymbol{z})$ defined by the generator. However, they often do not provide good data reconstruction. Examples of using VAEs or maximum likelihood to help GANs resist mode collapse include Che et al. (2017); Srivastava et al. (2017); Grover et al. (2018). Another example is VAEGAN (Larsen et al., 2016), which combines a unit-wise likelihood at a hidden layer with an adversarial loss in the original data space, but whose update of the encoder is separated from the GAN mini-max objective. In contrast, IntroVAE (Huang et al., 2018) retains the pixel-wise likelihood with an adversarial regularization on the latent space. Sharing the network between the VAE decoder and the GAN generator, however, limits VAEGAN and IntroVAE to modeling a single modality.
+
+# B MORE DISCUSSION ON SEQUENCE MODELS AND TOPIC MODELS IN TEXT ANALYSIS
+
+In Section 3.1, we have discussed two models to represent the text: sequence models and topic models.
Considering the versatility of topic models (Wang et al., 2009; Jin et al., 2015; Zhou et al., 2016; Srivastava & Salakhutdinov, 2012a; 2014; Wang et al., 2018; Elhoseiny et al., 2017b; Zhu et al., 2018) in dealing with different types of textual information, and their effectiveness in capturing latent topics that are often directly related to macro-level visual information (Gomez et al., 2017; Dieng et al., 2017; Lau et al., 2017), we choose a state-of-the-art deep topic model, PGBN, to model the textual descriptions in VHE. Due to space constraints, we only provide simple illustrations in Figs. 3(a) and 3(b). In this section, more insights and discussions are provided.
+
+![](images/c3fd9ced8d1947b71d04c1257b9cdbbc54ce95aceefb3fa0c31e07200589438d.jpg)
+Figure 7: Random images generated by VHE-raster-scan-GAN conditioned on five binary attributes.
+
+As discussed before, topic models are able to model non-sequential texts such as binary attributes. The CUB dataset provides 312 binary attributes (Wah et al., 2011) for each image, such as whether "crown color is blue" and whether "tail shape is solid," which define the color or shape of different body parts of a bird. We first transform the binary attributes of the $n$ th image into a 312-dimensional binary vector $t_n$ , whose $i$ th element is 1 or 0 depending on whether the bird in this image has the $i$ th attribute. The binary attribute vectors $t_n$ are used together with the corresponding bird images $x_n$ to train VHE-raster-scan-GAN. As shown in Fig. 7, we generate images given five binary attributes, which are formed into a 312-dimensional binary vector $t$ (with five non-zero elements at these five attributes) that becomes the input to the PGBN text decoder. Clearly, these generated images are photo-realistic and faithfully represent the five provided attributes.
+
+The proposed VHE-GANs can also model long documents well.
In text-based ZSL, discussed in Section 3.2, each class (not each image) is represented by a long encyclopedia document, whose global semantic structure is hard to capture with existing sequence models. Besides the good ZSL performance achieved by VHE-raster-scan-GAN, which illustrates its advantages in text generation given images, we show in Fig. 8 example results of image generation conditioned on the long encyclopedia documents of the unseen classes of CUB-E (Qiao et al., 2016; Akata et al., 2015) and Flower (Elhoseiny et al., 2017a).
+
+# Class name: Rhinoceros Auklet
+
+It is a seabird, nesting in seabird colonies, with a large orange/brown bill. Plumage is dark on top and paler below, in offshore and inshore water. Sometimes it swim in the water and sometimes it stand on the strong.
+
+![](images/c02d8291813ce20b048c5470cb160fd8e596aac80191fc7deb2a120f921da5b6.jpg)
+
+![](images/363071320cff7ac6b6a9ff926d8da1ad26bd024623ce89fae64e408564bd7257.jpg)
+
+![](images/5f5605a10a5f48c4554594a6b934ddbc1aa912988a8eef23caf267371cdb1cdd.jpg)
+
+![](images/1ba6e370b24d9b6fb8a2e491f31d6ed1a66b927744d180a6b8ef5ae5221d3986.jpg)
+
+![](images/e51825fc4f34cec5a024e86e98c1f8e170c79e2ebd9e603459116ee6f4be37c6.jpg)
+
+![](images/ca70ecf783556756bf9c679d72bbe88579643ac7fe1dcb1397ac28f010d9b1e9.jpg)
+
+![](images/33fd18fae16bc7ffd8519eac52736306e0bd4ccf23b685c68c09cd21f6879f68.jpg)
+
+![](images/03ee0f3a1bdf8c49eac6b52ad28fbf38b20fb8933c8ad7cb0d0796241d22d827.jpg)
+
+![](images/a9571c5fd8a3745ba32ce10b8529563f3df160e5d1185d16d9c5305dee08c5a0.jpg)
+(a)
+
+# Class name: Yellow Bellied Flycatcher
+
+Brownish-olive upperparts, darker on the wings and tail, yellowish underparts. Have small bill short tail, on a perch low or in the middle of a tree. Its eyes are dark and round with radiating vigor, like looking for food or insects.
+ +![](images/3fde6992decf173da77610345c6226dcb29b32a3e89b3f21048f60628111e166.jpg) + +![](images/0344cda5eea6b876a3c9724464d8db93fb5c99ad5163966f7af420a43297b142.jpg) + +![](images/89fa25322b7cf152a40a3d65e07685fda39b625a9c0d0e32a6d05d9b127c0460.jpg) + +![](images/76da81ab1885a7a7876a5aa0f4d4a7c67ab3135efbd72485f651ddb2b5efa9fb.jpg) + +![](images/2447c56b9097d360748d570531520a0addfe2ad9a35601660122e60f4462d071.jpg) + +![](images/746f05c7ce5a15cf42dcec6a160c1b9f79072ca4c14b3cfe23ab7ffe2c128ba5.jpg) + +![](images/9985a5dd8b00191188f3952a8f876ba0d68f0b22fe12d5ec5ff9619af69044bd.jpg) + +![](images/3fd2a96c902c910fd233d95a3458de9f162d277c6ff6b4f6933c997ae2a26e93.jpg) + +![](images/e10bcca382fcb778b71f475996e484604918a276cf0ba870d38e5163a39469e7.jpg) + +# Class name: Ball Moss + +It tends to form a spheroid shape ranging in size from a golf ball to a soccer ball. It may hinder tree growth. Its petals are stripe-like yellow ones and its stamen is also round dark brown or yellow. + +![](images/9fb8e000880d377727ca783cb521306be6304ab45a286d80c6323a6975258472.jpg) + +![](images/3ae0092616875b9d0793e6bbf3172d738d4d805312cf7b0c9c47b3d9972e4a80.jpg) + +![](images/220001e0f4e84838496a4c6cd8ccce1df5efbe263f3f116ccd5a9a46cdcec8b9.jpg) + +![](images/b03c7a08a376977c889cd3912a20417319c1c21dfa095a80a07306513f61abf7.jpg) +Figure 8: Image generation conditioning on long encyclopedia documents using VHE-raster-scanGAN trained on (a) CUB-E and (b) Flower. Shown in the top part of each subplot are representative sentences taken from the long document that describes an unseen class; for the three rows of images shown in the bottom part, the first row includes three real images from the corresponding unseen class, and the other two rows include a total of six randomly generated images conditioning on the long encyclopedia document of the corresponding unseen class. 
+ +![](images/bd4732b2afa83385ff8357c646fa74c66a5642789a62b64d7ef3df5b676bf17d.jpg) + +![](images/68f5e83a27d1b7246948761e78d8c20ef04653fb9de9a0d450f09264a0785d37.jpg) + +![](images/88924043b08e2429f99337e30955a652fb9b16d9161f6ea86be0814141490f34.jpg) + +![](images/43b37ba9a8c1745ad5bde9e3da6d63f37246c0343f248a0004868cd9767f5da1.jpg) + +![](images/c928e501793cbf15919314fee1982d95b74476e972a07351d41bcb6d247604c2.jpg) +(b) + +# Class name:Barberton Daisy + +It bear a large capitulum with striking, two-lipped ray floret in yellow or orange. Colors include white, yellow, and pink. Its petals are medium, and each of them is round and the number is about six. + +![](images/68d2a1dad5745f0ac915c72683bd921b918ac845ee8c7ae5d9a6dc0faeb004fe.jpg) + +![](images/aed67df333cc9b01ffe35e45b3249cc70c85110509a3cbc5626cadeb000fec69.jpg) + +![](images/ce6c963fd5cdcdd2e7308d0b5618db2bcb2ab4225935c5e70f663b0e8334d670.jpg) + +![](images/28a02af431eb4e0956934b7acbf6d0f3cc59abe423208f8b6f7942b092df22b9.jpg) + +![](images/074f952a5016cdce4c9ba10f32125686f3f2ecf6aba743ddcb1c9d90155ce292.jpg) + +![](images/54eee2a615e56ff147cc112c74d5c47686937f50f192a174e7d8365f13ee5f61.jpg) + +![](images/a57db7396134186ee671ccd4cd8d2ecc5a98cc73ffad5a72ca63f9c951b51612.jpg) + +![](images/274db4b985b36a031a2555e4569634acbd6ebb4f8035331add0bac5bfdb056cc.jpg) + +![](images/72970dc90254fd0421cdcead54606294dfcf297cdf119b642a6917e1796c4190.jpg) + +Analogous to how the Bird images are generated in Fig. 7, we also perform facial image generation given a set of textual attributes. On CelebA dataset, given attributes, we train VHE-stackGAN++ and + +![](images/a180df02e6e09a60038147a97ea28eafbca5bb852a59b0de2d016ce5307c6250.jpg) +Figure 9: Example results of facial image generation conditioning on five textual attributes, by VHEStackGAN++ and VHE-raster-scan-GAN trained on the CelebA dataset. Both models are trained with 20 epochs, with the output resolution set as $128 \times 128$ . 
Note our current network architecture, designed mainly for natural images, has not yet been fine-tuned for facial images.
+
+VHE-raster-scan-GAN to generate facial images at resolution $128 \times 128$ . As shown in Fig. 9, after training for 20 epochs, we generate facial images given five attributes. While the facial images generated by both models nicely match the given attributes, VHE-raster-scan-GAN provides higher visual quality and does a better job in representing the details.
+
+# C MORE EXPERIMENTAL RESULTS ON JOINT IMAGE-TEXT LEARNING
+
+# C.1 TABLES 1 AND 2 WITH ERROR BARS
+
+For the text-to-image generation tasks, we use the official pre-defined training/testing split (illustrated in Appendix F) to train and test all the models. Following the definition of the IS error bar used in StackGAN++ (Zhang et al., 2017b), HDGAN (Zhang et al., 2018b), and AttnGAN (Xu et al., 2018), we provide the IS results with error bars for various methods in Table 5, where the results of StackGAN++, HDGAN, and AttnGAN are quoted from the published papers. The FID error bar is not included, as it has not been clearly defined.
+
+Table 5: Inception score (IS) results in Table 1 with error bars.
+
+| Method | StackGAN++ | HDGAN | AttnGAN | Obj-GAN | VHE-raster-scan-GAN |
+| --- | --- | --- | --- | --- | --- |
+| Flower | 3.26 ± .01 | 3.45 ± .07 | - | - | 3.72 ± .01 |
+| CUB | 3.84 ± .06 | 4.15 ± .05 | 4.36 ± .03 | - | 4.41 ± .03 |
+| COCO | 8.30 ± .10 | 11.86 ± .18 | 25.89 ± .47 | 26.68 ± .52 | 27.16 ± .23 |
+
+Table 6: Inception score (IS) results in Table 2 with error bars.
+
+| Method | PGBN+StackGAN++ | VHE-vanilla-GAN | VHE-StackGAN++ | VHE-simple-raster-scan-GAN |
+| --- | --- | --- | --- | --- |
+| Flower | 3.29 ± .02 | 3.01 ± .06 | 3.56 ± .03 | 3.62 ± .02 |
+| CUB | 3.92 ± .06 | 3.52 ± .08 | 4.20 ± .04 | 4.31 ± .06 |
+| COCO | 10.63 ± .10 | 6.36 ± .20 | 12.63 ± .15 | 20.13 ± .22 |
+
+# C.2 HIGH-QUALITY IMAGES OF FIGURE 2
+
+Due to space constraints, we provide relatively small images in Fig. 2. Below we show the corresponding images at larger sizes.
+
+![](images/839ee371efb6fca41ffe801f3c44b8686b05aba1060652fdcfaed3d140309092.jpg)
+Figure 10: The images above the blue line are the larger-size replots of the CUB bird images in Figure 2, while the images below the blue line are results for the ablation study.
+
+![](images/86762b85911c475d362b4705c377680eab2746291f189a8aa3217dd798d6b28c.jpg)
+Figure 11: The images above the blue line are the larger-size replots of the Flower images in Figure 2, while the images below the blue line are results for the ablation study.
+
+![](images/da7f415fa0fac18f8934dcd0032fa1c73a5ed26416be2be818376c8472adcf44.jpg)
+
+![](images/69cef2542d142b4186c71dd4f1db0276d71de1abe23235c03ffcc42b5c28918f.jpg)
+Figure 12: The images above the blue line are the larger-size replots of the COCO images in Figure 2, while the images below the blue line are results for the ablation study.
+
+# C.3 MORE TEXT-TO-IMAGE GENERATION RESULTS ON COCO
+
+COCO is a more challenging dataset than CUB and Flower, as it contains very diverse objects and scenes. We show in Fig. 13 more samples conditioned on different textual descriptions.
+
+![](images/4ce350de995663fa7edf9c8336c0fb57ef61c7c82fa3780e805639eae484e51c.jpg)
+Figure 13: Example text-to-image generation results on COCO.
+
+# C.4 LATENT SPACE INTERPOLATION
+
+In addition to the latent space interpolation results of VHE-raster-scan-GAN in Fig. 3(c) of Section 3.1, below we provide finer-grained latent space interpolations in Figs. 14-18.
+
+![](images/deeafc439060c9977eea08f0cd62f5d1e10363eb939e84c1cc91a5bea860ff48.jpg)
+Figure 14: Example of latent space interpolation on CUB.
+
+![](images/281f410a5e531830a93befe2161404e5b646bb5a4d3524c3e3c84e43eee84ac2.jpg)
+Figure 15: Example of latent space interpolation on CUB.
+
+![](images/59d23816eb2ec9fba61bd09074ccdf0d49786abfebef28b4c5773d970e4a6559.jpg)
+Figure 16: Example of latent space interpolation on CUB.
+
+![](images/66d66c296f2d1cd3a40854044207a3b3bb74a0737e178612a3baa9bd1ea704e7.jpg)
+Figure 17: Example of latent space interpolation on Flower.
+
+![](images/2e06fd71e6dd34354c93cca036d02a4d1098c84b13a3fef13bb32ea5aaec61bf.jpg)
+Figure 18: Example of latent space interpolation on Flower.
+
+# C.5 IMAGE RETRIEVAL GIVEN A TEXT QUERY
+
+For image $\pmb{x}_n$ , we draw its BoW textual description $\hat{t}_n$ as $\hat{t}_n|\theta_n \sim p(\pmb{t}|\Phi, \pmb{\theta}_n)$ , $\pmb{\theta}_n|x_n \sim q_{\Omega}(\pmb{\theta}|\Phi, x_n)$ . Given a BoW textual description $\pmb{t}$ as the text query, we retrieve the top five images ranked by the cosine distances between $\pmb{t}$ and the $\hat{t}_n$ 's. Shown in Fig. 19 are three example image retrieval results, which suggest that the retrieved images are semantically related to their text queries in colors, shapes, and locations.
+
+![](images/ba6f7b1a45d906706fc1956ee7598cc292c8ba408fd6affcf41942fe79bb97e8.jpg)
+Figure 19: Top-5 retrieved images given a text query. Rows 1 to 3 are for Flower, CUB, and COCO, respectively.
+
+# C.6 IMAGE REGENERATION
+
+We note that for VHE-GAN, its image encoder and GAN component together can also be viewed as an "autoencoding" GAN for images. More specifically, given an image $\pmb{x}$ , VHE-GAN can provide random regenerations using $G(q_{\Omega}(\pmb{\theta} \mid \pmb{\Phi}, \pmb{x}))$ . We show example image regeneration results by both VHE-StackGAN++ and VHE-raster-scan-GAN in Fig. 20. These example results suggest that the random images regenerated by the proposed VHE-GANs more or less resemble the original real image fed into the VHE image encoder.
+
+![](images/c96c270fa7adf65b4833d196ca43bb1476ee4e50f80d8a7f426246783c5ce0da.jpg)
+Figure 20: Example results of image regeneration using VHE-StackGAN++ and VHE-raster-scan-GAN.
An original image is fed into the VHE image encoder, whose latent representation is then fed into the GAN image generator to generate a corresponding random image. The models in columns 1-4 are trained on Flower, columns 5-8 on CUB, and columns 9-12 on COCO.
+
+# C.7 LEARNED HIERARCHICAL TOPICS IN VHE
+
+The inferred topics at different layers and the inferred sparse connection weights between the topics of adjacent layers are found to be highly interpretable. In particular, we can understand the meaning of each topic by projecting it back to the original data space via $\left[\prod_{t=1}^{l-1}\Phi^{(t)}\right]\phi_k^{(l)}$ , and understand the relationships between the topics by arranging them into a directed acyclic graph (DAG) and choosing subnets of it to visualize. We show in Figs. 21, 22, and 23 example subnets taken from the DAGs inferred by the three-layer VHE-raster-scan-GAN of size 256-128-64 on Flower, CUB, and COCO, respectively. The semantic meaning of each topic and the connection weights between the topics of adjacent layers are highly interpretable. For example, in Fig. 21, the topics describe very specific flower characteristics, such as special colors, textures, shapes, and parts, at the bottom layer, and become increasingly more general when moving upwards.
+
+![](images/babfdaba3f8fa89fe82fe10e503d9daf3f36b6e8792c1b71521ce783ec1ad2ee.jpg)
+Figure 21: An example topic hierarchy taken from the directed acyclic graph learned by a three-layer VHE-raster-scan-GAN of size 256-128-64 on Flower.
+
+![](images/4f0ca1549033c27a7dcba092183c2fa8b55f4221d067a3650e51520b25a99424.jpg)
+Figure 22: Analogous plot to Fig. 21 on CUB.
+
+![](images/fda3f88fcbc14e7f1a2e24dca80a0d72508f192c163153eb4ae096cb23b0817a.jpg)
+Figure 23: Analogous plot to Fig. 21 on COCO.
+
+# D SPECIFIC MODEL STRUCTURE IN VHE-STACKGAN++ AND VHE-RASTER-SCAN-GAN
+
+# D.1 MODEL STRUCTURE OF VHE
+
+In Fig.
24, we give the structure of VHE used in VHE-StackGAN++ and VHE-raster-scan-GAN, where $f(\pmb{x})$ denotes the image features extracted by the Inception-v3 network and $\varepsilon^{(l)} \sim \prod_{k=1}^{K_l} \mathrm{Uniform}(\varepsilon_k^{(l)}; 0, 1)$ . With the definition of $g^{(0)} = f(\pmb{x})$ , we have + +$$
+\boldsymbol {k} ^ {(l)} = \exp \left(\mathbf {W} _ {1} ^ {(l)} \boldsymbol {g} ^ {(l)} + \boldsymbol {b} _ {1} ^ {(l)}\right), \tag {12}
+$$ + +$$
+\boldsymbol {\lambda} ^ {(l)} = \exp \left(\mathbf {W} _ {2} ^ {(l)} \boldsymbol {g} ^ {(l)} + \boldsymbol {b} _ {2} ^ {(l)}\right), \tag {13}
+$$ + +$$
+\boldsymbol {g} ^ {(l)} = \ln \left[ 1 + \exp \left(\mathbf {W} _ {3} ^ {(l)} \boldsymbol {g} ^ {(l - 1)} + \boldsymbol {b} _ {3} ^ {(l)}\right) \right], \tag {14}
+$$ + +where $\mathbf{W}_1^{(l)}\in \mathbb{R}^{K_l\times K_l}$ , $\mathbf{W}_2^{(l)}\in \mathbb{R}^{K_l\times K_l}$ , $\mathbf{W}_3^{(l)}\in \mathbb{R}^{K_l\times K_{l - 1}}$ , $\pmb {b}_1^{(l)}\in \mathbb{R}^{K_l}$ , $\pmb {b}_2^{(l)}\in \mathbb{R}^{K_l}$ , and $\pmb {b}_3^{(l)}\in \mathbb{R}^{K_l}$ . + +![](images/5ce902783aa9a39bd552303e716ab271dedd388d744f312c5b97dd75f1520e02.jpg) +Figure 24: The architecture of VHE in VHE-StackGAN++ and VHE-raster-scan-GAN. + +# D.2 MODEL OF VHE-STACKGAN++ + +In Section 2.2, we first introduce the VHE-StackGAN++, where the multi-layer textual representation $\{\pmb{\theta}^{(1)},\pmb{\theta}^{(2)},\dots ,\pmb{\theta}^{(L)}\}$ is concatenated as $\pmb {\theta} = \left[\pmb{\theta}^{(1)},\dots ,\pmb{\theta}^{(L)}\right]$ and then fed into StackGAN++ (Zhang et al., 2017b). In Figs. 1 (a) and (b), we provide the model structure of VHE-StackGAN++. We also provide a detailed plot of the structure of StackGAN++ used in VHE-StackGAN++ in Fig. 25, where JCU is a specific type of discriminator; see Zhang et al. (2017b) for more details.
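The per-layer encoder mappings of Eqs. (12)-(14) above amount to a softplus upward pass followed by two exponentiated linear heads per layer. A minimal NumPy sketch (the layer sizes, random initialization, and the function name `vhe_encoder_layer` are illustrative, not from the paper):

```python
import numpy as np

def softplus(x):
    # ln(1 + exp(x)), the deterministic nonlinearity of Eq. (14)
    return np.log1p(np.exp(x))

def vhe_encoder_layer(g_prev, W1, b1, W2, b2, W3, b3):
    """One encoder layer, Eqs. (12)-(14).

    g_prev: hidden units of the layer below (g^(0) = f(x), the image features).
    Returns the Weibull shape k, scale lam, and hidden units g of this layer.
    """
    g = softplus(W3 @ g_prev + b3)   # Eq. (14)
    k = np.exp(W1 @ g + b1)          # Eq. (12), strictly positive
    lam = np.exp(W2 @ g + b2)        # Eq. (13), strictly positive
    return k, lam, g

# toy dimensions (hypothetical): K_{l-1} = 8 image features, K_l = 4 topics
rng = np.random.default_rng(0)
f_x = rng.standard_normal(8)
W1, b1 = 0.1 * rng.standard_normal((4, 4)), np.zeros(4)
W2, b2 = 0.1 * rng.standard_normal((4, 4)), np.zeros(4)
W3, b3 = 0.1 * rng.standard_normal((4, 8)), np.zeros(4)
k, lam, g = vhe_encoder_layer(f_x, W1, b1, W2, b2, W3, b3)
```

The exponentials guarantee valid (positive) Weibull parameters regardless of the linear outputs, which is why Eqs. (12)-(13) use them.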
+ +As with VHE-raster-scan-GAN, VHE-StackGAN++ is also able to jointly optimize all components by merging the expectations in the VHE and the GAN to define its loss function as + +$$
\begin{array}{l} \min _ {\boldsymbol {\Omega}, \left\{G _ {i} \right\} _ {i = 1} ^ {3}} \max _ {\left\{D _ {i} \right\} _ {i = 1} ^ {3}} \mathbb {E} _ {p _ {\text {d a t a}} (\boldsymbol {x} _ {n}, \boldsymbol {t} _ {n})} \mathbb {E} _ {\prod_ {l = 1} ^ {L} q \left(\boldsymbol {\theta} _ {n} ^ {(l)} \mid \boldsymbol {x} _ {n}, \boldsymbol {\Phi} ^ {(l + 1)}, \boldsymbol {\theta} _ {n} ^ {(l + 1)}\right)} \left\{- \log p \left(\boldsymbol {t} _ {n} \mid \boldsymbol {\Phi} ^ {(1)}, \boldsymbol {\theta} _ {n} ^ {(1)}\right) \right. \\ + \sum_ {l = 1} ^ {L} \mathrm {K L} \left[ q \left(\boldsymbol {\theta} _ {n} ^ {(l)} \mid \boldsymbol {x} _ {n}, \boldsymbol {\Phi} ^ {(l + 1)}, \boldsymbol {\theta} _ {n} ^ {(l + 1)}\right) \| p \left(\boldsymbol {\theta} _ {n} ^ {(l)} \mid \boldsymbol {\Phi} ^ {(l + 1)}, \boldsymbol {\theta} _ {n} ^ {(l + 1)}\right) \right] \\ \left. + \sum_ {i = 1} ^ {3} \left[ \log D _ {i} \left(\boldsymbol {x} _ {n, i}, \boldsymbol {\theta} _ {n}\right) + \log \left(1 - D _ {i} \left(G _ {i} \left(\boldsymbol {\theta} _ {n}\right), \boldsymbol {\theta} _ {n}\right)\right) \right] \right\}. \tag {15} \\ \end{array}
$$ + +![](images/b17c3ef44c1da7ef8a42c873b91bd6d0dfeb6194df293cfa3914f4d1c660e919.jpg) +Figure 25: The structure of StackGAN++ in VHE-StackGAN++, where JCU is a type of discriminator proposed in Zhang et al. (2017b). + +# D.3 STRUCTURE OF RASTER-SCAN-GAN + +In Fig. 26, we provide a detailed plot of the structure of the proposed raster-scan-GAN. + +![](images/4e2913dbb2848e93555ff99a787ee1a4c9304e9a48b1823c2b93bce40034b823.jpg) +Figure 26: The structure of raster-scan-GAN in VHE-raster-scan-GAN, where JCU is a type of discriminator proposed in Zhang et al. (2017b).
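The last line of Eq. (15) is a standard minimax GAN objective summed over the three StackGAN++ scales. It can be sketched numerically as follows; the discriminators and generators below are toy stand-in callables, not the paper's networks:

```python
import numpy as np

def adversarial_term(x_scales, theta, D_list, G_list):
    """Third line of Eq. (15): sum over the three scales of
    log D_i(x_{n,i}, theta) + log(1 - D_i(G_i(theta), theta))."""
    total = 0.0
    for x_i, D_i, G_i in zip(x_scales, D_list, G_list):
        total += np.log(D_i(x_i, theta)) + np.log1p(-D_i(G_i(theta), theta))
    return total

# toy stand-ins: each discriminator outputs a constant probability 0.5
D = [lambda x, th: 0.5] * 3
G = [lambda th: th] * 3
val = adversarial_term([0.0, 0.0, 0.0], 0.0, D, G)
# each scale contributes log(0.5) + log(1 - 0.5)
```

At a maximally confused discriminator (output 0.5 everywhere), each scale contributes $2\log 0.5$, the usual saddle-point value of the minimax game.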
+ +# E JOINT OPTIMIZATION FOR VHE-RASTER-SCAN-GAN + +Based on the loss function of VHE-raster-scan-GAN (9), with TLASGR-MCMC (Cong et al., 2017) and WHAI (Zhang et al., 2018a), we describe in Algorithm 1 how to perform mini-batch-based joint updates of all model parameters. + +Algorithm 1 Hybrid TLASGR-MCMC/VHE inference algorithm for VHE-raster-scan-GAN.
```txt
Initialize encoder parameters $\Omega$, topic parameters of PGBN $\{\Phi^{(l)}\}_{l=1}^{L}$, generator $G$, and discriminator $D$
for iter $= 1,2,\dots$ do
    Randomly select a mini-batch containing $N$ image-text pairs $\pmb {d} = \{\pmb {x}_n,\pmb {t}_n\}_{n = 1}^N$
    Draw random noise $\left\{\varepsilon_n^{(l)}\right\}_{n = 1,l = 1}^{N,L}$ from the uniform distribution
    Calculate $\nabla_D\mathcal{L}(D,G,\Omega \mid \pmb {x})$ and $\nabla_G\mathcal{L}(D,G,\Omega \mid \pmb {x})$
    Calculate $\nabla_{\Omega}L$ with the aid of $\left\{\varepsilon_n^{(l)}\right\}_{n = 1,l = 1}^{N,L}$
    Update $D$ as $D = D + \nabla_{D}\mathcal{L}(D,G,\Omega \mid \pmb {x})$
    Update $G$ as $G = G - \nabla_{G}\mathcal{L}(D,G,\Omega \mid \pmb {x})$
    Update $\Omega$ as $\Omega = \Omega -\nabla_{\Omega}L$
    Sample $\{\theta_n^{(l)}\}_{l = 1}^L$ from (6) given $\Omega$ and $\{\Phi^{(l)}\}_{l = 1}^{L}$, and use $\{t_n\}_{n = 1}^{N}$ to update the topics $\{\Phi^{(l)}\}_{l = 1}^{L}$ according to TLASGR-MCMC
end for
```
+ +# F DATA DESCRIPTION ON CUB, FLOWER, AND COCO WITH TRAINING DETAILS + +In image-text multi-modality learning, CUB (Wah et al., 2011), Flower (Nilsback & Zisserman, 2008), and COCO (Lin et al., 2014) are widely used datasets. + +CUB (http://www.vision.caltech.edu/visipedia/CUB-200-2011.html): CUB contains 200 bird species with 11,788 images. Since $80\%$ of birds in this dataset have object-image size ratios of less than 0.5 (Wah et al., 2011), as a preprocessing step, we crop all images to ensure that bounding boxes of birds have greater-than-0.75 object-image size ratios, which is the same as in all related work.
For textual description, Wah et al. (2011) provide ten sentences for each image and we collect them together to form BoW vectors. Besides, for each species, Elhoseiny et al. (2017a) provide its encyclopedia document for text-based ZSL, which is also used in our text-based ZSL experiments. + +For CUB, there are two split settings: the hard one and the easy one. The hard one ensures that bird subspecies belonging to the same super-category fall entirely within either the training split or the test split, without overlap, referred to as CUB-hard (CUB-H in our manuscript). A recently used split setting (Qiao et al., 2016; Akata et al., 2015) is the super-category split, where for each super-category, except for one subspecies that is left as unseen, all the others are used for training, referred to as CUB-easy (CUB-E in our manuscript). For CUB-H, there are 150 species containing 9410 samples for training and 50 species containing 2378 samples for testing. For CUB-E, there are 150 species containing 8855 samples for training and 50 species containing 2933 samples for testing. We use both of them for text-based ZSL, and only CUB-E for all the other experiments as usual. + +Flower (http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html): Oxford-102, commonly referred to as Flower, contains 8,189 images of flowers from 102 different categories. For textual description, Nilsback & Zisserman (2008) provide ten sentences for each image and we collect them together to form BoW vectors. Besides, for each species, Elhoseiny et al. (2017a) provide its encyclopedia document for text-based ZSL, which is also used in our text-based ZSL experiments in section 4.2.2. There are 82 species containing 7034 samples for training and 20 species containing 1155 samples for testing. + +For text-based ZSL, we split the data in the same way as Elhoseiny et al. (2017a).
Specifically, five random splits are performed, in each of which $4/5$ of the classes are considered as "seen classes" for training and $1/5$ of the classes as "unseen classes" for testing. For other experiments, we follow Zhang et al. (2017b) to split the data. + +COCO (http://cocodataset.org/#download): Compared with Flower and CUB, COCO is a more challenging dataset, since it contains images with multiple objects and diverse backgrounds. To show the generalization capability of the proposed VHE-GANs, we also utilize COCO for evaluation. Following the standard experimental setup for COCO (Reed et al., 2016; Zhang et al., 2017b), we directly use the pre-split training and test sets to train and evaluate our proposed models. There are 82081 samples for training and 40137 samples for testing. + +Training details: we train VHE-raster-scan-GAN on four Nvidia GeForce RTX 2080 Ti GPUs. The experiments are performed with mini-batch size 32 and about 30.2 GB of GPU memory. We run 600 epochs to train the models on CUB and Flower, taking about 797 seconds per epoch for CUB-E and 713 seconds per epoch for Flower. We run 100 epochs to train the models on COCO, taking about 6315 seconds per epoch. We use the Adam optimizer (Kingma & Ba, 2014) with learning rate $2 \times 10^{-4}$ , $\beta_{1} = 0.5$ , and $\beta_{2} = 0.999$ to optimize the parameters of the GAN generator and discriminator, and use Adam with learning rate $10^{-4}$ , $\beta_{1} = 0.9$ , and $\beta_{2} = 0.999$ to optimize the VHE parameters. The hyper-parameters to update the topics $\Phi$ with TLASGR-MCMC are the same as those in Cong et al. (2017). + +# G ADDITIONAL DISCUSSION ON OBJ-GAN + +Focusing on the COCO dataset, the recently proposed Obj-GAN (Li et al., 2019) exploits more side information, including the bounding boxes and labels of objects existing in the images, to perform text-to-image generation.
More specifically, Obj-GAN first trains an attentive sequence-to-sequence model to infer the bounding boxes given a text $t$ : + +$$
B _ {1: T} = \left[ B _ {1}, B _ {2}, \dots , B _ {T} \right] = G _ {\text {b o x}} (e), \tag {16}
$$ + +where $e$ denotes the pre-trained bi-LSTM word vectors of $t$ , and $B_{t} = (l_{t}, b_{t})$ consists of the class label $l_{t}$ of the $t$ th object and its bounding box $b_{t} = (x, y, w, h) \in \mathbb{R}^{4}$ . Given the bounding boxes $B_{1:T}$ , Obj-GAN learns a shape generator to predict the shape of each object in its bounding box: + +$$
\hat {M} _ {1: T} = G _ {\text {s h a p e}} \left(B _ {1: T}, z _ {1: T}\right), \tag {17}
$$ + +where $\mathbf{z}_t \sim \mathcal{N}(0,1)$ is a random noise vector. Having obtained $B_{1:T}$ and $\hat{M}_{1:T}$ , Obj-GAN trains an attentive multi-stage image generator to generate the images conditioned on $B_{1:T}$ , $\hat{M}_{1:T}$ , and $e$ . + +Although Obj-GAN achieves a better FID on COCO, it has two major limitations in practice. First, it is not always possible to obtain accurate bounding boxes and labels of objects in the image; even if they can be acquired by manual labeling, doing so is often time- and labor-consuming, especially on large datasets. Second, each word is associated with one fixed bounding box; in other words, given one sentence, the locations of the objects in the generated images are fixed, which clearly hurts the diversity of the Obj-GAN generated images, as shown in Fig. 27. + +People playing with kites outside under the blue sky.
+ +![](images/f50be3f049b5b770f8c3969ee274bfe19de657409625f06ddb6e9ae2a0529d10.jpg) + +![](images/ce37b675092b8095e6fc350fc88c2362bd51aa3ea25ef982eaa7d5326e4d0f0b.jpg) +Obj-GAN + +![](images/d54e3b0c54ba35d8ca58688f2b6d1cdb18a219ee86c817dc0764c9c06b8f9ac3.jpg) + +![](images/020ab6e4e992660245435f636726013967b776cd235ae62f718f19fcb0779d84.jpg) + +![](images/a606c814b380f850290a7483d436ac5e7a62223fb2053ba40e98f65b8a6660f6.jpg) +VHE-raster-scan-GAN + +![](images/a8925c1c65e1969c2bf74450263ff19a51c9822937b672ddf032f73777520101.jpg) +Figure 27: The generated random images of Obj-GAN given text lack diversity. + +![](images/2a60a51526d8004505e6eb31c030643a4822680984d0fc073c1068916f846903.jpg) \ No newline at end of file diff --git a/variationalheteroencoderrandomizedgansforjointimagetextmodeling/images.zip b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..84140d3b4d00b9b856cd937f22aaecc1e7895c8f --- /dev/null +++ b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20e070e9aa2ae4311aaa20a17c399d2a25b141316d1cca4e18ea7d580b31fdc1 +size 2901038 diff --git a/variationalheteroencoderrandomizedgansforjointimagetextmodeling/layout.json b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..58e34b873b35fdc26da76d125a08a3192332e56b --- /dev/null +++ b/variationalheteroencoderrandomizedgansforjointimagetextmodeling/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b33869bc3da821d7313b3a4db2cdf3f522e3e34d2ea3f609a21524c244939fb6 +size 902830 diff --git a/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_content_list.json 
b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..398c48af611d06fbf767df172b41c1961d8b01b4 --- /dev/null +++ b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f01661c7f123bfee1606b400e00bbc69513432d2e89b93ed96b9a0de6ce4f001 +size 116777 diff --git a/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_model.json b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6a5edfd9856a0ce3117a5eef89dc631faf9b1e92 --- /dev/null +++ b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:15b0668dad51b9f068b06414b91355051f8a63eb9ebbdc119629913cd7bf2234 +size 134227 diff --git a/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_origin.pdf b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d190a7293ffe7256123fcbafa7ea73093e170d25 --- /dev/null +++ b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/b771c9dd-617c-4ac6-a72e-06496ad83e50_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96bc5c7f8cb2689bae28df2d2ce3da9977148ee4f5a1eebc5e1a767f829c3d05 +size 12112221 diff --git a/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/full.md b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/full.md new file mode 
100644 index 0000000000000000000000000000000000000000..47bb097221b0f081ab39071530a325b6e33c9a7e --- /dev/null +++ b/variationalrecurrentmodelsforsolvingpartiallyobservablecontroltasks/full.md @@ -0,0 +1,551 @@ +# VARIATIONAL RECURRENT MODELS FOR SOLVING PARTIALLY OBSERVABLE CONTROL TASKS + +# Dongqi Han + +Cognitive Neurorobotics Research Unit +Okinawa Institute of Science and Technology +Okinawa, Japan +dongqi.han@oist.jp + +# Kenji Doya + +Neural Computation Unit +Okinawa Institute of Science and Technology +Okinawa, Japan +doya@oist.jp + +# Jun Tani* + +Cognitive Neurorobotics Research Unit +Okinawa Institute of Science and Technology +Okinawa, Japan +jun.tani@oist.jp + +# ABSTRACT + +In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks, those in which either coordinates or velocities were not observable and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned a more optimal policy than alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner. + +# 1 INTRODUCTION + +Model-free deep reinforcement learning (RL) algorithms have been developed to solve difficult control and decision-making tasks by self-exploration (Sutton & Barto, 1998; Mnih et al., 2015; Silver et al., 2016).
While various kinds of fully observable environments have been well investigated, recently, partially observable (PO) environments (Hafner et al., 2018; Igl et al., 2018; Lee et al., 2019; Jaderberg et al., 2019) have commanded greater attention, since real-world applications often need to tackle incomplete information and a non-trivial solution is highly desirable. + +There are many types of PO tasks; however, those that can be solved by taking the history of observations into account are more common. These tasks are often encountered in real life, such as video games that require memorization of previous events (Kapturowski et al., 2018; Jaderberg et al., 2019) and robotic control using real-time images as input (Hafner et al., 2018; Lee et al., 2019). While humans are good at solving these tasks by extracting crucial information from past observations, deep RL agents often have difficulty acquiring a satisfactory policy and achieving good data efficiency, compared to those in fully observable tasks (Hafner et al., 2018; Lee et al., 2019). + +For solving such PO tasks, several categories of methods have been proposed. One simple, straightforward solution is to include a history of raw observations in the current "observation" (McCallum, 1993; Lee et al., 2019). Unfortunately, this method can be impractical when decision-making requires a long-term memory, because the dimension of the observation becomes unacceptably large if a long history is included. + +Another category is based on model-free RL methods with recurrent neural networks (RNN) as function approximators (Schmidhuber, 1990; 1991; Igl et al., 2018; Kapturowski et al., 2018; Jaderberg et al., 2019), which is usually more tractable to implement.
In this case, RNNs need to tackle two problems simultaneously (Lee et al., 2019): learning representation (coded by hidden states of the RNN) of the underlying states of the environment from the state-transition data, and learning to maximize returns using the learned representation. As most RL algorithms use a bootstrapping strategy to learn the expected return and to improve the policy (Sutton & Barto, 1998), it is challenging to train the RNN stably and efficiently, since RNNs are relatively more difficult to train (Pascanu et al., 2013) than feedforward neural networks. + +The third category considers learning a model of the environment and estimating a belief state, extracted from a sequence of state-transitions (Kaelbling et al., 1998; Ha & Schmidhuber, 2018; Lee et al., 2019). The belief state is an agent-estimated variable encoding the underlying states of the environment that determine state-transitions and rewards. Perfectly estimated belief states can thus be taken as "observations" of an RL agent that contain complete information for solving the task. Therefore, solving a PO task is segregated into a representation learning problem and a fully observable RL problem. Since fully observable RL problems have been well explored by the RL community, the critical challenge here is how to estimate the belief state. + +In this study, we developed a variational recurrent model (VRM) that models sequential observations and rewards using a latent stochastic variable. The VRM is an extension of the variational recurrent neural network (VRNN) model (Chung et al., 2015) that takes actions into account. Our approach falls into the third category by taking the internal states of the VRM together with raw observations as the belief state. We then propose an algorithm to solve PO tasks by training the VRM and a feed-forward RL controller network separately. The algorithm can be applied in an end-to-end manner, without fine-tuning of hyperparameters.
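Under this division of labor, one interaction step reduces to: update the recurrent model state from the latest observation and action, concatenate it with the raw observation to form the controller's input, and act. A toy sketch (all shapes and the stand-in `vrm_step`/`policy` callables are hypothetical, not the paper's networks):

```python
import numpy as np

def agent_step(x_t, a_prev, d_prev, vrm_step, policy):
    """One execution-phase step: the recurrent state d_t is updated from
    the raw observation and previous action, then the controller acts on
    the concatenation [d_t, x_t] (a belief-state surrogate)."""
    d_t = vrm_step(d_prev, x_t, a_prev)   # recurrent belief update
    belief = np.concatenate([d_t, x_t])   # feed-forward controller input
    return policy(belief), d_t

# toy stand-ins: 3-dim recurrent state, 2-dim observation, scalar action
vrm_step = lambda d, x, a: np.tanh(d + x.mean() + a)
policy = lambda b: float(b.sum())
a_t, d_t = agent_step(np.zeros(2), 0.0, np.zeros(3), vrm_step, policy)
```

The point of the sketch is only the data flow: the controller itself stays feed-forward, and all memory lives in the recurrent state `d_t`.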
+ +We then experimentally evaluated the proposed algorithm in various PO versions of robotic control tasks. The agents showed substantial policy improvement in all tasks, and in some tasks the algorithm performed essentially as well as in fully observable cases. In particular, our algorithm demonstrates greater performance than alternative approaches in environments where only velocity information is observable or in which long-term memorization is needed. + +# 2 RELATED WORK + +Typical model-based RL approaches utilize learned models for dreaming, i.e. generating state-transition data for training the agent (Deisenroth & Rasmussen, 2011; Ha & Schmidhuber, 2018; Kaiser et al., 2019) or for planning of future state-transitions (Watter et al., 2015; Hafner et al., 2018; Ke et al., 2019). This usually requires a well-designed and finely tuned model so that its predictions are accurate and robust. In our case, we do not use VRMs for dreaming and planning, but for auto-encoding state-transitions. Actually, PO tasks can be solved without requiring VRMs to predict accurately (see Appendix E). This distinguishes our algorithm from typical model-based RL methods. + +The work our method most closely resembles is known as stochastic latent actor-critic (SLAC, Lee et al. (2019)), in which a latent variable model is trained and its latent state is used as the belief state for the critic. SLAC showed promising results on pixel-based robotic control tasks, in which velocity information needs to be inferred from third-person images of the robot. Here we consider more general PO environments in which the reward may depend on a long history of inputs, e.g., in a snooker game one has to remember which ball was potted previously. The actor network of SLAC did not take advantage of the latent variable, but instead used some steps of raw observations as input, which creates problems in achieving long-term memorization of reward-related state-transitions.
Furthermore, SLAC did not include raw observations in the input of the critic, which may complicate training the critic before the model converges. + +# 3 BACKGROUND + +# 3.1 PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES + +The scope of problems we study can be formulated into a framework known as partially observable Markov decision processes (POMDP) (Kaelbling et al., 1998). POMDPs are used to describe decision or control problems in which a part of the underlying states of the environment, which determine state-transitions and rewards, cannot be directly observed by an agent. + +A POMDP is usually defined as a 7-tuple $(\mathbb{S},\mathbb{A},T,R,\mathbb{X},O,\gamma)$ , in which $\mathbb{S}$ is a set of states, $\mathbb{A}$ is a set of actions, and $T:\mathbb{S}\times \mathbb{A}\to p(\mathbb{S})$ is the state-transition probability function that determines the distribution of the next state given the current state and action. The reward function $R:\mathbb{S}\times \mathbb{A}\rightarrow \mathbb{R}$ decides the reward during a state-transition, which can also be probabilistic. Moreover, $\mathbb{X}$ is a set of observations, observations are determined by the observation probability function $O:\mathbb{S}\times \mathbb{A}\to p(\mathbb{X})$ , and $\gamma$ is the discount factor. Given a POMDP, the goal is to maximize expected discounted future rewards $\sum_{t}\gamma^{t}r_{t}$ by learning a good strategy to select actions (policy function). + +Our algorithm was designed for general POMDP problems by learning the representation of underlying states $s_t \in \mathbb{S}$ via modeling observation-transitions and reward functions. However, it is expected to work in PO tasks in which $s_t$ or $p(s_t)$ can be (at least partially) estimated from the history of observations $x_{1:t}$ . + +# 3.2 VARIATIONAL RECURRENT NEURAL NETWORKS + +To model general state-transitions that can be stochastic and complicated, we employ a modified version of the VRNN (Chung et al., 2015).
The VRNN was developed as a recurrent version of the variational auto-encoder (VAE, Kingma & Welling (2013)), composed of a variational generation model and a variational inference model. It is a recurrent latent variable model that can learn to encode and predict complicated sequential observations $\boldsymbol{x}_t$ with a stochastic latent variable $\boldsymbol{z}_t$ . + +The generation model predicts future observations given its internal states, + +$$
+\boldsymbol {z} _ {t} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {p, t}, \operatorname {d i a g} \left(\boldsymbol {\sigma} _ {p, t} ^ {2}\right)\right), \quad \left[ \boldsymbol {\mu} _ {p, t}, \boldsymbol {\sigma} _ {p, t} ^ {2} \right] = f ^ {\text {p r i o r}} \left(\boldsymbol {d} _ {t - 1}\right),
+$$ + +$$
+\left. \boldsymbol {x} _ {t} \mid \boldsymbol {z} _ {t} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {y, t}, \operatorname {d i a g} \left(\boldsymbol {\sigma} _ {y, t} ^ {2}\right)\right), \quad \left[ \boldsymbol {\mu} _ {y, t}, \boldsymbol {\sigma} _ {y, t} ^ {2} \right] = f ^ {\text {d e c o d e r}} \left(\boldsymbol {z} _ {t}, \boldsymbol {d} _ {t - 1}\right), \right. \tag {1}
+$$ + +where the $f$ 's are parameterized mappings, such as feed-forward neural networks, and $d_{t}$ is the state variable of the RNN, which is recurrently updated by + +$$
+\boldsymbol {d} _ {t} = f ^ {\mathrm {R N N}} \left(\boldsymbol {d} _ {t - 1}; \boldsymbol {z} _ {t}, \boldsymbol {x} _ {t}\right). \tag {2}
+$$ + +The inference model approximates the latent variable $z_{t}$ given $x_{t}$ and $d_{t}$ . + +$$
+\left. \boldsymbol {z} _ {t} \mid \boldsymbol {x} _ {t} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {z, t}, \operatorname {d i a g} \left(\boldsymbol {\sigma} _ {z, t} ^ {2}\right)\right), \text {w h e r e} \left[ \boldsymbol {\mu} _ {z, t}, \boldsymbol {\sigma} _ {z, t} ^ {2} \right] = f ^ {\text {e n c o d e r}} \left(\boldsymbol {x} _ {t}, \boldsymbol {d} _ {t - 1}\right). \right.
\tag {3} +$$ + +For sequential data that contain $T$ time steps, learning is conducted by maximizing the evidence lower bound (ELBO), as in a VAE (Kingma & Welling, 2013), where + +$$
+\begin{array}{l} E L B O = \sum_ {t} ^ {T} \left[ - D _ {K L} \left(q \left(\boldsymbol {z} _ {t} \mid \boldsymbol {z} _ {1: t - 1}, \boldsymbol {x} _ {1: t}\right) \| p \left(\boldsymbol {z} _ {t} \mid \boldsymbol {z} _ {1: t - 1}, \boldsymbol {x} _ {1: t - 1}\right)\right) \right. \\ \left. + \mathbb {E} _ {q \left(\boldsymbol {z} _ {t} \mid \boldsymbol {x} _ {1: t}, \boldsymbol {z} _ {1: t - 1}\right)} \left[ \log p \left(\boldsymbol {x} _ {t} \mid \boldsymbol {z} _ {1: t}, \boldsymbol {x} _ {1: t - 1}\right) \right] \right], \tag {4} \\ \end{array}
+$$ + +where $p$ and $q$ are parameterized PDFs of $z_{t}$ given by the generative model and the inference model, respectively. In a POMDP, a VRNN can be used to model the environment and to represent underlying states in its state variable $d_{t}$ . Thus an RL agent can benefit from a well-learned VRNN model since $d_{t}$ provides additional information about the environment beyond the current raw observation $x_{t}$ . + +# 3.3 SOFT ACTOR CRITIC + +Soft actor-critic (SAC) is a state-of-the-art model-free RL algorithm that uses experience replay for dynamic programming; it has been tested on various robotic control tasks and shows promising performance (Haarnoja et al., 2018a;b). A SAC agent learns to maximize reinforcement returns as well as the entropy of its policy, so as to obtain more rewards while keeping actions sufficiently stochastic. + +![](images/ff196f9ade588d890f2dd98ed624582f547dad32c04c730d6dada34c3585329f.jpg) +Figure 1: Diagrams of the proposed algorithm. (a) Overview. (b, c) The generative model and the inference model of a VRM.
+ +![](images/a1f13fbee2d26855a9187c266321cdaf5715e47c38daa4a66c534755a25b8e12.jpg) + +![](images/c2459f8fcc4cb8d3532dbec41ec070d65554bbf689b07b32457d9f185a69683c.jpg) + +A typical SAC implementation can be described as follows. The state value function $V(s)$ , the state-action value function $Q(s, a)$ and the policy function $\pi(a|s)$ are parameterized by neural networks, indicated by $\psi, \lambda, \eta$ , respectively. Also, an entropy coefficient factor (also known as the temperature parameter), denoted by $\alpha$ , is learned to control the degree of stochasticity of the policy. The parameters are learned by simultaneously minimizing the following loss functions. + +$$ +J _ {V} (\psi) = \mathbb {E} _ {\boldsymbol {s} _ {t} \sim \mathcal {B}} \left[ \frac {1}{2} \left(V _ {\psi} (\boldsymbol {s} _ {t}) - \mathbb {E} _ {\boldsymbol {a} _ {t} \sim \boldsymbol {\pi} _ {\eta}} \left[ Q _ {\lambda} (\boldsymbol {s} _ {t}, \boldsymbol {a} _ {t}) - \alpha \log \pi_ {\eta} (\boldsymbol {a} _ {t} | \boldsymbol {s} _ {t}) \right]\right) ^ {2} \right], \tag {5} +$$ + +$$ +J _ {Q} (\lambda) = \mathbb {E} _ {\left(\boldsymbol {s} _ {t}, \boldsymbol {a} _ {t}\right) \sim \mathcal {B}} \left[ \frac {1}{2} \left(Q _ {\lambda} \left(\boldsymbol {s} _ {t}, \boldsymbol {a} _ {t}\right) - \left(r \left(\boldsymbol {s} _ {t}, \boldsymbol {a} _ {t}\right) + \gamma \mathbb {E} _ {\boldsymbol {s} _ {t + 1} \sim \mathcal {B}} \left[ V _ {\psi} \left(\boldsymbol {s} _ {t + 1}\right) \right]\right)\right) ^ {2} \right], \tag {6} +$$ + +$$ +J _ {\pi} (\eta) = \mathbb {E} _ {\boldsymbol {s} _ {t} \sim \mathcal {B}} \left[ \mathbb {E} _ {\boldsymbol {a} _ {\eta} (\boldsymbol {s} _ {t}) \sim \pi_ {\eta} (\boldsymbol {s} _ {t})} \left[ \alpha \log \pi_ {\eta} \left(\boldsymbol {a} _ {\eta} (\boldsymbol {s} _ {t}) | \boldsymbol {s} _ {t}\right) - Q _ {\lambda} \left(\boldsymbol {s} _ {t}, \boldsymbol {a} _ {\eta} (\boldsymbol {s} _ {t})\right) \right] \right], \tag {7} +$$ + +$$ +J 
(\alpha) = \mathbb {E} _ {\boldsymbol {s} _ {t} \sim \mathcal {B}} \left[ \mathbb {E} _ {\boldsymbol {a} \sim \pi_ {\eta} (\boldsymbol {s} _ {t})} \left[ - \alpha \log \pi_ {\eta} (\boldsymbol {a} | \boldsymbol {s} _ {t}) - \alpha \mathcal {H} _ {\mathrm {t a r}} \right] \right], \tag {8}
+$$ + +where $\mathcal{B}$ is the replay buffer from which $s_t$ is sampled, and $\mathcal{H}_{\mathrm{tar}}$ is the target entropy. To compute the gradient of $J_{\pi}(\eta)$ (Equation 7), the reparameterization trick (Kingma & Welling, 2013) is used on action, indicated by $\pmb{a}_{\eta}(\pmb{s}_t)$ . Reparameterization of action is not required in minimizing $J(\alpha)$ (Equation 8) since $\log \pi_{\eta}(\pmb{a}|\pmb{s}_t)$ does not depend on $\alpha$ . + +SAC was originally developed for fully observable environments; thus, the raw observation at the current step $\boldsymbol{x}_t$ was used as network input. In this work, we apply SAC in PO tasks by including the state variable $d_t$ of the VRNN in the input of function approximators of both the actor and the critic. + +# 4 METHODS + +# 4.1 VARIATIONAL RECURRENT STATE-TRANSITION MODELS + +An overall diagram of the proposed algorithm is summarized in Fig. 1(a), while a more detailed computational graph is plotted in Fig. 2. We extend the original VRNN model (Chung et al., 2015) to the proposed VRM by adding action feedback, i.e., actions taken by the agent are used in the inference model and the generative model. Also, since we are modeling state-transition and reward functions, we include the reward $r_{t-1}$ in the current raw observation $x_t$ for convenience. Thus, we have the inference model (Fig. 1(c)), denoted by $\phi$ , as + +$$ +\left.
\boldsymbol {z} _ {\phi , t} \right| \boldsymbol {x} _ {t} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {\phi , t}, \operatorname {d i a g} \left(\boldsymbol {\sigma} _ {\phi , t} ^ {2}\right)\right), \text {w h e r e} \left[ \boldsymbol {\mu} _ {\phi , t}, \boldsymbol {\sigma} _ {\phi , t} ^ {2} \right] = \phi \left(\boldsymbol {x} _ {t}, \boldsymbol {d} _ {t - 1}, \boldsymbol {a} _ {t - 1}\right), \tag {9}
+$$ + +The generative model (Fig. 1(b)), denoted by $\theta$ here, is + +$$
+\boldsymbol {z} _ {t} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {\theta , t}, \operatorname {d i a g} \left(\boldsymbol {\sigma} _ {\theta , t} ^ {2}\right)\right), \quad \left[ \boldsymbol {\mu} _ {\theta , t}, \boldsymbol {\sigma} _ {\theta , t} ^ {2} \right] = \theta^ {\text {p r i o r}} \left(\boldsymbol {d} _ {t - 1}, \boldsymbol {a} _ {t - 1}\right),
+$$ + +$$
+\left. \boldsymbol {x} _ {t} \mid \boldsymbol {z} _ {t} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {x, t}, \operatorname {d i a g} \left(\boldsymbol {\sigma} _ {x, t} ^ {2}\right)\right), \quad \left[ \boldsymbol {\mu} _ {x, t}, \boldsymbol {\sigma} _ {x, t} ^ {2} \right] = \theta^ {\text {d e c o d e r}} \left(\boldsymbol {z} _ {t}, \boldsymbol {d} _ {t - 1}\right). \right. \tag {10}
+$$ + +For building recurrent connections, the choice of RNN types is not limited. In our study, the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) is used since it works well in general cases. So we have $\pmb{d}_t = \mathrm{LSTM}(\pmb{d}_{t-1}; \pmb{z}_t, \pmb{x}_t)$ .
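The inference step of Eq. (9) draws $z_{\phi,t}$ from a diagonal Gaussian whose parameters come from $(x_t, d_{t-1}, a_{t-1})$; in practice this is done with the reparameterization trick so that gradients flow through the sample. A sketch with a stand-in encoder (the zero-mean, unit-variance `enc` below is a placeholder, not the paper's network):

```python
import numpy as np

def infer_z(x_t, d_prev, a_prev, encoder, rng):
    """Eq. (9): z ~ N(mu, diag(sigma^2)), reparameterized as
    mu + sigma * eps with eps ~ N(0, I)."""
    mu, log_var = encoder(np.concatenate([x_t, d_prev, a_prev]))
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# stand-in encoder: a 3-dim N(0, I) latent regardless of input
enc = lambda h: (np.zeros(3), np.zeros(3))
rng = np.random.default_rng(0)
z = infer_z(np.ones(2), np.zeros(4), np.zeros(1), enc, rng)
```

Because the noise `eps` is sampled outside the network, `mu` and `log_var` (and hence the encoder parameters) receive gradients in an autodiff framework.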
+ +![](images/4040dfa91a7a6e770c36ac3af89d8db5c1df90dce05fb6f3d7f5402307302202.jpg) +(a) RL controller + +![](images/8cba1af17e85bef94f52a799bf6516423a462cf14b52d2f7c8aa5e04a88ab205.jpg) +(b) Execution phase + +(Legend: deterministic node; stochastic node; generative model; inference model; RL controller network; interacting with the environment; error backpropagation.) + +Figure 2: Computation graph of the proposed algorithm. (a) The RL controller. (b) The execution phase. (c) The learning phase of a VRM. $\pmb{a}$ : action; $z$ : latent variable; $d$ : RNN state variable; $x$ : raw observation (including reward); $Q$ : state-action value function; $V$ : state value function. A bar on a variable means that it is the actual value from the replay buffer or the environment. Each stochastic variable follows a parameterized diagonal Gaussian distribution. + +As in training a VRNN, the VRM is trained by maximizing the evidence lower bound (ELBO) (Fig. 1(c)) + +$$ +\begin{array}{l} \mathrm{ELBO} = \sum_{t} \left\{ \mathbb{E}_{q_{\phi}} \left[ \log p_{\theta}\left(\boldsymbol{x}_{t} \mid \boldsymbol{z}_{1:t}, \boldsymbol{x}_{1:t-1}\right) \right] \right. \\ \left. 
- D_{KL} \left[ q_{\phi}\left(\boldsymbol{z}_{t} \mid \boldsymbol{z}_{1:t-1}, \bar{\boldsymbol{x}}_{1:t}, \bar{\boldsymbol{a}}_{1:t}\right) \,\|\, p_{\theta}\left(\boldsymbol{z}_{t} \mid \boldsymbol{z}_{1:t-1}, \bar{\boldsymbol{x}}_{1:t-1}, \bar{\boldsymbol{a}}_{1:t}\right) \right] \right\}. \tag{11} \\ \end{array} +$$ + +In practice, the first term $\mathbb{E}_{q_{\phi}}[\log p_{\theta}(\pmb{x}_t|\pmb{z}_{1:t},\pmb{x}_{1:t - 1})]$ can be obtained by unrolling the RNN using the inference model (Fig. 1(c)) with sampled sequences of $\pmb{x}_t$ . Since $q_{\phi}$ and $p_{\theta}$ are parameterized diagonal Gaussian distributions, the KL-divergence term can be expressed analytically (element-wise over the latent dimensions) as + +$$ +D_{KL}\left[ q_{\phi}\left(\boldsymbol{z}_{t}\right) \,\|\, p_{\theta}\left(\boldsymbol{z}_{t}\right) \right] = \log \frac{\boldsymbol{\sigma}_{\theta,t}}{\boldsymbol{\sigma}_{\phi,t}} + \frac{\left(\boldsymbol{\mu}_{\phi,t} - \boldsymbol{\mu}_{\theta,t}\right)^{2} + \boldsymbol{\sigma}_{\phi,t}^{2}}{2 \boldsymbol{\sigma}_{\theta,t}^{2}} - \frac{1}{2}. \tag{12} +$$ + +For computational efficiency in experience replay, we train a VRM by sampling minibatches of truncated sequences of fixed length instead of whole episodes. Details are found in Appendix A.1. + +Since the training of a VRM is segregated from that of the RL controller, there are several strategies for conducting them in parallel. For the RL controller, we adopted a smooth update strategy as in Haarnoja et al. (2018a), i.e., performing experience replay once every $n$ steps. One can also apply smooth updates to the VRM. However, in that case, RL suffers from the instability of the representation of underlying states in the VRM before it converges. Also, the stochasticity of the RNN state variables $d$ can be meaninglessly high at an early stage of training, which may create problems in RL. 
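The closed-form KL divergence of Eq. 12 can be checked numerically. The helper below is an illustrative sketch (its name and signature are not from the paper); it takes per-dimension means and variances ($\sigma^2$) and returns the KL summed over dimensions:

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """Closed-form KL[q || p] for diagonal Gaussians, summed over dimensions.

    mu_*, var_* are per-dimension means and variances (sigma^2)."""
    return np.sum(
        0.5 * np.log(var_p / var_q)                      # log(sigma_p / sigma_q)
        + ((mu_q - mu_p) ** 2 + var_q) / (2.0 * var_p)   # mean shift + scale term
        - 0.5
    )

mu = np.array([0.2, -0.3])
var = np.array([1.5, 0.7])
kl_same = kl_diag_gauss(mu, var, mu, var)  # identical distributions give 0
```

In the ELBO of Eq. 11, this quantity is evaluated between the inference posterior $(\mu_{\phi,t}, \sigma^2_{\phi,t})$ and the prior $(\mu_{\theta,t}, \sigma^2_{\theta,t})$.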
Another strategy is to pre-train the VRM for a large number of epochs before RL starts, which, unfortunately, can fail if novel observations from the environment appear after some degree of policy improvement. Moreover, if pre-training and smooth updates are both applied to the VRM, RL may suffer from a large representation shift of the belief state. + +To resolve this conflict, we propose using two VRMs, which we call the first-impression model and the keep-learning model, respectively. As the names suggest, we pre-train the first-impression model and stop updating it when the RL controller and the keep-learning model start smooth updates. We then take the state variables from both VRMs, together with raw observations, as input for the RL controller. We found that this method yields better overall performance than using a single VRM (Appendix C). + +Algorithm 1 Variational Recurrent Models with Soft Actor Critic +Initialize the first-impression VRM $\mathcal{M}_f$ and the keep-learning VRM $\mathcal{M}_k$ , the RL controller $\mathcal{C}$ , and the replay buffer $\mathcal{B}$ ; global step $t \gets 0$ . +repeat +Initialize an episode, assigning $\mathcal{M}_f$ and $\mathcal{M}_k$ zero initial states. +while episode not terminated do +Sample an action $a_t$ from $\pi(a_t | d_t, x_t)$ and execute $a_t$ , $t \gets t + 1$ . Record $(x_t, a_t, done_t)$ into $\mathcal{B}$ . +Compute 1-step forward of both VRMs using the inference models. +if $t == \text{step_start\_RL}$ then + For $N$ epochs, sample a minibatch of samples from $\mathcal{B}$ to update $\mathcal{M}_f$ (Eq. 11). + end if + if $t > \text{step_start\_RL}$ and mod(t, train_interval_KLVRM) == 0 then + Sample a minibatch of samples from $\mathcal{B}$ to update $\mathcal{M}_k$ (Eq. 11). + end if + if $t > \text{step_start\_RL}$ and mod(t, train_interval_RL) == 0 then + Sample a minibatch of samples from $\mathcal{B}$ to update $\mathcal{C}$ (Eq. 5, 6, 7, 8). 
+ end if +end while +until training stopped + +# 4.2 REINFORCEMENT LEARNING CONTROLLER + +As shown in Fig. 1(a), we use multi-layer perceptrons (MLPs) as function approximators for $V$ and $Q$ . Inputs for the $Q$ network are $(\pmb{x}_t, \pmb{d}_t, \pmb{a}_t)$ , and $V$ is mapped from $(\pmb{x}_t, \pmb{d}_t)$ . Following Haarnoja et al. (2018a), we use two Q networks, parameterized by $\lambda_{1}$ and $\lambda_{2}$ , and compute $Q = \min(Q_{\lambda_1}, Q_{\lambda_2})$ in Eqs. 5 and 7 for better performance and stability. Furthermore, we also used a target value network for computing $V$ in Eq. 6, as in Haarnoja et al. (2018a). The policy function $\pi_{\eta}$ follows a parameterized Gaussian distribution $\mathcal{N}(\pmb{\mu}_{\eta}(\pmb{d}_{t}, \pmb{x}_{t}), \text{diag}(\pmb{\sigma}_{\eta}^{2}(\pmb{d}_{t}, \pmb{x}_{t})))$ , where $\pmb{\mu}_{\eta}$ and $\pmb{\sigma}_{\eta}$ are also MLPs. + +In the execution phase (Fig. 1(b)), the observation and reward $\boldsymbol{x}_t = (\boldsymbol{X}_t, r_{t-1})$ are received as VRM inputs to compute the internal states $\boldsymbol{d}_t$ using the inference models. Then, the agent selects an action, sampled from $\pi_\eta(\boldsymbol{a}_t | \boldsymbol{d}_t, \boldsymbol{x}_t)$ , to interact with the environment. + +To train the RL networks, we first sample sequences of steps from the replay buffer as minibatches; thus, $d_{t}$ can be computed by the inference models using the recorded observations $\bar{x}_{t}$ and actions $\bar{a}_{t}$ (see Appendix A.1.2). The RL networks are then updated by minimizing the loss functions with gradient descent. Gradients stop at $d_{t}$ , so that training the RL networks does not involve updating the VRMs. + +# 5 RESULTS + +The overall procedure is summarized in Algorithm 1. To empirically evaluate our algorithm, we performed experiments in a range of (partially observable) continuous control tasks and compared it with the following alternative algorithms. 
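The interleaving of updates in Algorithm 1 can be sketched as a simple counter loop. The numbers below are toy stand-ins for step_start_RL, train_interval_KLVRM, train_interval_RL, and the pre-training epochs $N$ (the paper's values are in Tables 1 and 2):

```python
# Toy stand-ins for the scheduling hyperparameters of Algorithm 1.
step_start_rl = 10      # step at which RL training starts
interval_klvrm = 5      # train_interval_KLVRM
interval_rl = 1         # train_interval_RL
n_pretrain = 3          # N pre-training epochs for the first-impression VRM

fi_pretrain_epochs = 0  # first-impression VRM: pre-trained once (Eq. 11)
kl_updates = 0          # keep-learning VRM: smooth updates (Eq. 11)
rl_updates = 0          # RL controller: smooth updates (Eqs. 5-8)

for t in range(1, 51):  # 50 environment steps
    if t == step_start_rl:
        fi_pretrain_epochs += n_pretrain   # one burst of pre-training, then frozen
    if t > step_start_rl and t % interval_klvrm == 0:
        kl_updates += 1
    if t > step_start_rl and t % interval_rl == 0:
        rl_updates += 1
```

The first-impression model is thus frozen after its single burst of pre-training, while the keep-learning model and the controller continue to receive smooth updates for the rest of training.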
For the RL controllers, we adopted hyperparameters from the original SAC implementation (Haarnoja et al., 2018b). Both the keep-learning and first-impression VRMs were trained using a learning rate of 0.0008. We pre-trained the first-impression VRM for 5,000 epochs and updated the keep-learning VRM every 5 steps. Batches of size 4, each containing a sequence of 64 steps, were used for training both the VRMs and the RL controllers. All tasks used the same hyperparameters (Appendix A.1). + +- SAC-MLP: The vanilla soft actor-critic implementation (Haarnoja et al., 2018a;b), in which each function is approximated by a 2-layer MLP taking raw observations as input. +- SAC-LSTM: Soft actor-critic with recurrent networks as function approximators, where raw observations are processed through an LSTM layer followed by 2 layers of MLPs. This allows the agent to make decisions based on the whole history of raw observations. In this case, the network has to conduct representation learning and dynamic programming simultaneously. Our algorithm is compared with SAC-LSTM to demonstrate the effect of separating representation learning from dynamic programming. +- SLAC: The stochastic latent actor-critic algorithm introduced in Lee et al. (2019), a state-of-the-art RL algorithm for solving POMDP tasks. It was shown that SLAC outperformed other model-based and model-free algorithms, such as those of Igl et al. (2018) and Hafner et al. (2018), in robotic control tasks with third-person images of the robot as observations $^{2}$ . + +![](images/c65b6586e160876c28e54eea0f0decde9e64f97e6d1252fc2cd88ce98cc82c8d.jpg) +Figure 3: Learning curves of the classic control tasks. Shaded areas indicate S.E.M. + +Note that in our algorithm, we apply pre-training to the first-impression model. For a fair comparison, we also perform pre-training for the alternative algorithms for the same number of epochs. For SAC-MLP and SAC-LSTM, pre-training is conducted on the RL networks, while for SLAC, its model is pre-trained. 
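Both the SAC-LSTM baseline and the recurrent core of the VRMs build on an LSTM layer (Hochreiter & Schmidhuber, 1997). A minimal numpy sketch of a single LSTM cell step, with toy sizes and random weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_lstm_cell(in_dim, hid):
    # Single LSTM cell; the four gates are stacked as [input, forget, cell, output].
    W = rng.normal(0.0, 0.1, (in_dim + hid, 4 * hid))
    b = np.zeros(4 * hid)
    def step(x, h, c):
        z = np.concatenate([x, h]) @ W + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_new = f * c + i * np.tanh(g)   # updated cell state
        h_new = o * np.tanh(c_new)       # updated hidden state
        return h_new, c_new
    return step

cell = make_lstm_cell(in_dim=3, hid=8)
h = np.zeros(8)
c = np.zeros(8)
for _ in range(5):  # unroll over a short history of observations
    h, c = cell(rng.standard_normal(3), h, c)
```

In SAC-LSTM, the hidden state `h` after unrolling over the observation history is what the downstream MLPs consume; in the VRMs, the analogous recurrent state is $d_t$.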
+ +# 5.1 PARTIALLY OBSERVABLE CLASSIC CONTROL TASKS + +Pendulum and CartPole (Barto et al., 1983) are classic control tasks for evaluating RL algorithms (Fig. 3, Left). The CartPole task requires learning a policy that prevents the pole from falling down and keeps the cart from running away by applying a (1-dimensional) force to the cart; the observable information is the coordinate of the cart, the angle of the pole, and their derivatives w.r.t. time (i.e., velocities). In the Pendulum task, the agent needs to learn a policy to swing an inverted pendulum up and to maintain it at the highest position in order to obtain more rewards. + +We are interested in classic control tasks because they are relatively easy to solve when fully observable, and thus the PO cases can highlight the representation learning problem. Experiments were performed on these two tasks, as well as on their PO versions, in which either velocities cannot be observed or only velocities can be observed. The latter case is meaningful in real-life applications because an agent may not be able to perceive its own position but can estimate its speed. + +As expected, SAC-MLP failed to solve the PO tasks (Fig. 3). While our algorithm succeeded in learning to solve all these tasks, SAC-LSTM showed poorer performance in some of them. In particular, in the Pendulum task with only the angular velocity observable, SAC-LSTM may suffer from the periodicity of the angle. SLAC performed well in the CartPole tasks, but showed less satisfactory sample efficiency in the Pendulum tasks. + +# 5.2 PARTIALLY OBSERVABLE ROBOTIC CONTROL TASKS + +To examine the performance of the proposed algorithm in more challenging control tasks with higher degrees of freedom (DOF), we also evaluated it in the OpenAI Roboschool environments (Brockman et al., 2016). 
+ +![](images/e95857f69e46fc32fbf10a6dea2078d0c1e9d0b33c1b675a2faa8e344b6b3693.jpg) +Figure 4: Learning curves of the robotic control tasks, plotted in the same way as in Fig. 3. + +The Roboschool environments include a number of continuous robotic control tasks, such as teaching a multi-joint robot to walk as fast as possible without falling down (Fig. 4, Left). The original Roboschool environments are nearly fully observable, since observations include the robot's coordinates and (trigonometric functions of) joint angles, as well as (angular and coordinate) velocities. As in the PO classic control tasks, we also performed experiments in PO versions of the Roboschool environments. + +Using our algorithm, the experimental results (Fig. 4) demonstrated substantial policy improvement in all PO tasks (visualization of the trained agents is in Appendix D). In some PO cases, the agents achieved performance comparable to that in the fully observable cases. For tasks with unobserved velocities, our algorithm performed similarly to SAC-LSTM. This is because velocities can be simply estimated from one-step differences in robot coordinates and joint angles, which eases representation learning. However, in environments where only velocities can be observed, our algorithm significantly outperformed SAC-LSTM, presumably because SAC-LSTM is less efficient at encoding underlying states from velocity observations. We also found that learning of a SLAC agent was unstable, i.e., it sometimes acquired a near-optimal policy but often converged to a poor one. Thus, the average performance of SLAC was less promising than ours in most of the PO robotic control tasks. + +# 5.3 LONG-TERM MEMORIZATION TASKS + +Another common type of PO task requires long-term memorization of past events. To solve these tasks, an agent needs to learn to extract and to remember critical information from the whole history of raw observations. 
Therefore, we also examined our algorithm and the alternatives in a long-term memorization task known as the sequential target reaching task (Han et al., 2019), in which a robot agent needs to reach 3 different targets in a certain sequence (Fig. 5, Left). The robot can control its two wheels to move or turn, and it receives a small, medium, or large one-step reward when it reaches the first, second, or third target, respectively, in the correct sequence. The robot senses the distances and angles to the 3 targets, but does not receive any signal indicating which target to reach. In each episode, the robot's initial position and those of the three targets are randomly initialized. In order to obtain rewards, the agent needs to infer the current correct target from historical observations. + +We found that agents using our algorithm achieved an almost $100\%$ success rate (reaching the 3 targets in the correct sequence within the maximum number of steps). SAC-LSTM also achieved a similar success rate after convergence, but spent more training steps learning to encode underlying goal-related information from sequential observations. Also, SLAC struggled to solve this task since its actor only received a limited number of steps of observations, making it difficult to infer the correct target. + +![](images/ec841ad049334c609db1427e2c141d16719377cd5b17876f890bea8d5d7efdad.jpg) +Figure 5: Learning curves of the sequential target reaching task. + +![](images/f97b471d8d4c1ec35b6f974a60b59bae194a81a7e808c3bfdb4b0796d4363127.jpg) + +# 5.4 CONVERGENCE OF THE KEEP-LEARNING VRM + +A major concern with our algorithm is that the input of the RL controller can experience representation change, because the keep-learning model is not guaranteed to converge if novel observations appear due to the improved policy (e.g., for a hopper robot, the "in-the-air" state can only occur after it learns to hop). 
To empirically investigate how convergence of the keep-learning VRM affects policy improvement, we plot the loss functions (negative ELBOs) of the keep-learning VRM for 3 example tasks (Fig. 6). For a simpler task (CartPole), the policy was already near optimal before the VRM fully converged. We also saw that the policy was gradually improved after the VRM had mostly converged (RoboschoolAnt, no velocities), and that the policy and the VRM could be improved in parallel (RoboschoolAnt, velocities only). + +The results suggest that the policy could be improved with sufficient sample efficiency even when the keep-learning VRM had not converged. This can be explained by the fact that the RL controller also extracts information from the first-impression model and the raw observations, which do not experience representation change during RL. Indeed, our ablation study showed performance degradation in many tasks without the first-impression VRM (Appendix C). + +![](images/6d67be1e89c0afc42ca5ee572ba844edffbfc241324e4ef771794eda85d02af8.jpg) +Figure 6: Example tasks showing the relationship between the average return of the agent and the negative ELBO (loss function, dashed) of the keep-learning VRM. + +![](images/e0d5ef43485b84a179d85141268fa67a107bbc4f6385c4006a40efbc201b6619.jpg) + +![](images/4eec65d5e621f0c3d7927aa31e27cba63b9925dc79ea458f7236041d90ec8864.jpg) + +# 6 DISCUSSION + +In this paper, we proposed a variational recurrent model for learning to represent underlying states of PO environments, together with the corresponding algorithm for solving POMDPs. Our experimental results demonstrate the effectiveness of the proposed algorithm in tasks in which underlying states cannot simply be inferred from a short sequence of observations. Our work can be considered an attempt to understand how RL benefits from stochastic Bayesian inference of state-transitions, which actually happens in the brain (Funamizu et al., 2016) but has been considered less often in RL studies. 
+ +We used stochastic models in this work, which we found actually perform better than deterministic ones, even though the environments we used are deterministic (Appendix C). The VRNN can be replaced with other alternatives (Bayer & Osendorfer, 2014; Goyal et al., 2017) to potentially improve performance, although developing model architectures is beyond the scope of the current study. Moreover, a recent study (Ahmadi & Tani, 2019) showed a novel way of performing inference using back-propagation of prediction errors, which may also benefit our future studies. + +Many researchers think that there are two distinct systems for model-based and model-free RL in the brain (Gläscher et al., 2010; Lee et al., 2014), and a number of studies have investigated how and when the brain switches between them (Smittenaar et al., 2013; Lee et al., 2014). However, Stachenfeld et al. (2017) suggested that the hippocampus can learn a successor representation of the environment that benefits both model-free and model-based RL, contrary to the aforementioned conventional view. We further propose another possibility: that a model is learned, but not used for planning or dreaming. This blurs the distinction between model-based and model-free RL. + +# ACKNOWLEDGEMENT + +This work was supported by Okinawa Institute of Science and Technology Graduate University funding, and was also partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas: Elucidation of the Mathematical Basis and Neural Mechanisms of Multi-layer Representation Learning 16H06563. We would like to thank the lab members in the Cognitive Neurorobotics Research Unit and the Neural Computation Unit of the Okinawa Institute of Science and Technology. In particular, we would like to thank Ahmadreza Ahmadi for his help during model development. We also would like to thank Steven Aird for assisting in improving the manuscript. + +# REFERENCES + +Ahmadreza Ahmadi and Jun Tani. 
A novel predictive-coding-inspired variational RNN model for online prediction and recognition. Neural Computation, pp. 1-50, 2019. +Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, pp. 834-846, 1983. +Justin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. In NIPS 2014 Workshop on Advances in Variational Inference, 2014. +Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016. +Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980-2988, 2015. +Ian Danforth. Continuous CartPole for OpenAI Gym. https://gist.github.com/iandanforth/e3ffb67cf3623153e968f2affdb01dc8, 2018. +Marc Deisenroth and Carl E Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465-472, 2011. +Akihiro Funamizu, Bernd Kuhn, and Kenji Doya. Neural substrate of dynamic Bayesian inference in the cerebral cortex. Nature Neuroscience, 19(12):1682, 2016. +Jan Gläscher, Nathaniel Daw, Peter Dayan, and John P O'Doherty. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4):585-595, 2010. +Anirudh Goyal Alias Parth Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, and Yoshua Bengio. Z-forcing: Training stochastic recurrent networks. In Advances in Neural Information Processing Systems, pp. 6713-6723, 2017. +David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. In Advances in Neural Information Processing Systems, pp. 2450-2462, 2018. 
+ +Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1856-1865, 2018a. +Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018b. +Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551, 2018. +Dongqi Han, Kenji Doya, and Jun Tani. Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent networks. arXiv preprint arXiv:1901.10113, 2019. +Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory-based control with recurrent neural networks. arXiv preprint arXiv:1512.04455, 2015. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. +Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, and Shimon Whiteson. Deep variational reinforcement learning for POMDPs. arXiv preprint arXiv:1806.02426, 2018. +Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, et al. Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science, 364(6443):859-865, 2019. +Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99-134, 1998. +Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, et al. 
Model-based reinforcement learning for Atari. arXiv preprint arXiv:1903.00374, 2019. +Steven Kapturowski, Georg Ostrovski, Will Dabney, John Quan, and Remi Munos. Recurrent experience replay in distributed reinforcement learning. OpenReview, 2018. +Nan Rosemary Ke, Amanpreet Singh, Ahmed Touati, Anirudh Goyal, Yoshua Bengio, Devi Parikh, and Dhruv Batra. Learning dynamics model in reinforcement learning by incorporating the long term future. arXiv preprint arXiv:1903.01599, 2019. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. +Alex X Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. arXiv preprint arXiv:1907.00953, 2019. +Sang Wan Lee, Shinsuke Shimojo, and John P O'Doherty. Neural computations underlying arbitration between model-based and model-free learning. Neuron, 81(3):687-699, 2014. +Andrew McCallum. Overcoming incomplete perception with utile distinction memory. In Proceedings of the Tenth International Conference on Machine Learning, pp. 190-196, 1993. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. +Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310-1318, 2013. +Jürgen Schmidhuber. Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Institut für Informatik, Technische Universität München. Technical Report FKI-126, 90, 1990. +Jürgen Schmidhuber. 
Reinforcement learning in Markovian and non-Markovian environments. In Advances in Neural Information Processing Systems, pp. 500-506, 1991. +David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484, 2016. +Peter Smittenaar, Thomas HB FitzGerald, Vincenzo Romei, Nicholas D Wright, and Raymond J Dolan. Disruption of dorsolateral prefrontal cortex decreases model-based in favor of model-free control in humans. Neuron, 80(4):914-919, 2013. +Kimberly L Stachenfeld, Matthew M Botvinick, and Samuel J Gershman. The hippocampus as a predictive map. Nature Neuroscience, 20(11):1643, 2017. +Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998. +Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015. + +# A IMPLEMENTATION DETAILS + +In this section, we describe the details of implementing our algorithm as well as the alternatives. Summaries of the hyperparameters can be found in Tables 1 and 2. + +Table 1: Shared hyperparameters for all the algorithms and tasks in the paper, adopted from the original SAC implementation (Haarnoja et al., 2018b). + +
| Hyperparameter | Description | Value |
| --- | --- | --- |
| $\gamma$ | Discount factor | 0.99 |
| step_start_RL | From how many steps to start training the RL controllers | 1,000 |
| train_interval_RL | Interval of training the RL controllers | 1 |
| lr_actor | Learning rate for the actor | 0.0003 |
| lr_critic | Learning rate for the critic | 0.0003 |
| lr_$\alpha$ | Learning rate for the entropy coefficient $\alpha$ | 0.0003 |
| $\mathcal{H}_{\mathrm{tar}}$ | Target entropy | $-$DOF |
| optimizer | Optimizer for all the networks | Adam (Kingma & Ba, 2014) |
| $\tau$ | Fraction of updating the target network each gradient step | 0.005 |
| policy_layers | MLP layer sizes for $\mu_{\eta}$ and $\sigma_{\eta}$ | 256, 256 |
| value_layers | MLP layer sizes for the $V$ and $Q_{\lambda}$ networks | 256, 256 |
+ +Table 2: Hyperparameters for the proposed algorithm. + +
| Hyperparameter | Description | Value |
| --- | --- | --- |
| train_times_FIVRM | Epochs of training the first-impression model | 5,000 |
| train_interval_KLVRM | Interval of training the keep-learning model | 5 |
| lr_model | Learning rate for the VRMs | 0.0008 |
| seq_len | How many steps in a sampled sequence for each update | 64 |
| batch_size | How many sequences to sample for each update | 4 |
+ +# A.1 THE PROPOSED ALGORITHM + +# A.1.1 NETWORK ARCHITECTURES + +The first-impression model and the keep-learning model adopted the same architecture. The sizes of $\pmb{d}$ and $z$ are 256 and 64, respectively. We used one-hidden-layer fully-connected networks with 128 hidden neurons for the inference models $\left[\pmb{\mu}_{\phi,t}, \pmb{\sigma}_{\phi,t}^{2}\right] = \phi(\pmb{x}_{t}, \pmb{d}_{t-1}, \pmb{a}_{t-1})$ , as well as for $\left[\pmb{\mu}_{\theta,t}, \pmb{\sigma}_{\theta,t}^{2}\right] = \theta^{\text{prior}}(\pmb{d}_{t-1}, \pmb{a}_{t-1})$ in the generative models. For the decoder $\left[\pmb{\mu}_{x,t}, \pmb{\sigma}_{x,t}^{2}\right] = \theta^{\text{decoder}}(\pmb{z}_{t}, \pmb{d}_{t-1})$ in the generative models, we used 2-layer MLPs with 128 neurons in each layer. The input processing layer $f_{x}$ is also a one-layer MLP of size 128. For all the Gaussian variables, the output functions for the means are linear and the output functions for the variances are softplus. The other activation functions of the VRMs are tanh. + +The RL controllers are the same as those in SAC-MLP (Section A.2.1) except that the network inputs are the raw observations together with the RNN states from the first-impression model and the keep-learning model. + +# A.1.2 INITIAL STATES OF THE VRMS + +To train the VRMs, one can use a number of entire episodes as a mini-batch, with zero initial states, as in Heess et al. (2015). However, when tackling long episodes (e.g., there can be 1,000 steps per episode in the robotic control tasks we used) or even infinite-horizon problems, the computational cost of back-propagation through time (BPTT) becomes huge. For better computational efficiency, we used 4 length-64 sequences for training the RNNs and applied the burn-in method to provide the initial states (Kapturowski et al., 2018), i.e., unrolling the RNNs using a portion of the replay sequence (the burn-in period, up to 64 steps in our case) from zero 
We assume that proper initial states can be obtained in this way. This is crucial for the tasks that require long-term memorization, and is helpful to reduce bias introduces by incorrect initial states in general cases. + +# A.2 ALTERNATIVE ALGORITHMS + +# A.2.1 SAC-MLP + +We followed the original implementation of SAC in (Haarnoja et al., 2018a) including hyperparameters. However, we also applied automatic learning of the entropy coefficient $\alpha$ (inverse of the the reward scale in Haarnoja et al. (2018a)) as introduced by the authors in Haarnoja et al. (2018b) to avoid tuning the reward scale for each task. + +# A.2.2 SAC-LSTM + +To apply recurrence to SAC's function approximators, we added an LSTM network with size-256 receiving raw observations as input. The function approximators of actor and critic were the same as those in SAC except receiving the LSTM's output as input. The gradients can pass through the LSTM so that the training of the LSTM and MLPs were synchronized. The training the network also followed Section A.1.2. + +# A.2.3 SLAC + +We mostly followed the implementation of SLAC explained in the authors' paper (Lee et al., 2019). One modification is that since their work was using pixels as observations, convolutional neural networks (CNN) and transposed CNNs were chosen for input feature extracting and output decoding layers; in our case, we replaced the CNN and transposed CNNs by 2-layers MLPs with 256 units in each layer. In addition, the authors set the output variance $\sigma_{y,t}^{2}$ for each image pixel as 0.1. However, $\sigma_{y,t}^{2} = 0.1$ can be too large for joint states/velocities as observations. We found that it will lead to better performance by setting $\sigma_{y,t}$ as trainable parameters (as that in our algorithm). We also used a 2-layer MLP with 256 units for approximating $\sigma_{y}(x_{t},d_{t - 1})$ . To avoid network weights being divergent, all the activation functions of the model were tanh except those for outputs. 
+ +# B ENVIRONMENTS + +For the robotic control tasks and the Pendulum task, we used environments (and modified them for the PO versions) from OpenAI Gym (Brockman et al., 2016). The CartPole environment with a continuous action space was from Danforth (2018), and the code for the sequential target reaching task was provided by the authors (Han et al., 2019). + +In the no-velocities cases, velocity information was removed from the raw observations, while in the velocities-only cases, only velocity information was retained. We summarize key information about each environment in Table 3. + +The performance curves were obtained in evaluation phases, in which agents used the same policy but did not update networks or record state-transition data. Each experiment was repeated using 5 different random seeds. + +# C ABLATION STUDY + +This section presents an ablation study in which we compared the performance of the proposed algorithm with the following modified versions: + +- With a single VRM. In this case, we used only one VRM and applied both pre-training and smooth updates to it. +- Only first-impression model. In this case, only the first-impression model was used and pre-trained. + +Table 3: Information of the environments we used. + +
| Name | dim(X) | DOF | Maximum steps |
|---|---|---|---|
| Pendulum | 3 | 1 | 200 |
| Pendulum (velocities only) | 1 | 1 | 200 |
| Pendulum (no velocities) | 2 | 1 | 200 |
| CartPole | 4 | 1 | 1,000 |
| CartPole (velocities only) | 2 | 1 | 1,000 |
| CartPole (no velocities) | 2 | 1 | 1,000 |
| RoboschoolHopper | 15 | 3 | 1,000 |
| RoboschoolHopper (velocities only) | 6 | 3 | 1,000 |
| RoboschoolHopper (no velocities) | 9 | 3 | 1,000 |
| RoboschoolWalker2d | 22 | 6 | 1,000 |
| RoboschoolWalker2d (velocities only) | 9 | 6 | 1,000 |
| RoboschoolWalker2d (no velocities) | 13 | 6 | 1,000 |
| RoboschoolAnt | 28 | 8 | 1,000 |
| RoboschoolAnt (velocities only) | 11 | 8 | 1,000 |
| RoboschoolAnt (no velocities) | 17 | 8 | 1,000 |
| Sequential goal reaching task | 12 | 2 | 128 |
- Only keep-learning model. In this case, only the keep-learning model was used, and smooth update was applied.
- Deterministic model. In this case, the first-impression model and the keep-learning model were deterministic RNNs that learned to model the state transitions by minimizing the mean-square error between predictions and observations instead of the ELBO. The network architecture was mostly the same as the VRM, except that the inference model and the generative model were merged into a deterministic one.

The learning curves are shown in Fig. 7. It can be seen that the proposed algorithm consistently performed similarly to or better than the modified ones.

# D VISUALIZATION OF TRAINED AGENTS

Here we show the actual movements of the trained robots in the PO robotic control tasks (Fig. 8). It can be seen that the robots succeeded in learning to hop or walk, although their policies may be sub-optimal.

# E MODEL ACCURACY

As discussed in Section 2, our algorithm relies mostly on the encoding capacity of the models, and does not require them to make accurate predictions of future observations. Fig. 9 shows open-loop (using the inference model to compute the latent variable $z$) and closed-loop (purely using the generative model) predictions of raw observations by the keep-learning models of randomly selected trained agents. We showcase "RoboschoolHopper - velocities only" and "Pendulum - no velocities" because in these tasks our algorithm achieved performance similar to that in the fully observable versions (Fig. 4), even though the prediction accuracy of the models was imperfect.

# F SENSITIVITY TO HYPERPARAMETERS OF THE VRMS

To empirically show how the choice of hyperparameters of the VRMs affects RL performance, we conducted experiments using hyperparameters different from those used in the main study.
More specifically, the learning rate for both VRMs was randomly selected from $\{0.0004, 0.0006, 0.0008, 0.001\}$ and the sequence length was randomly selected from $\{16, 32, 64\}$ (the batch size was $256 / (\text{sequence length})$, so that the total number of samples in a batch was 256, matching the alternative approaches). The other hyperparameters were unchanged.

The results are shown in Fig. 10 for all the environments we used. The overall performance did not change significantly with different, random hyperparameters for the VRMs, although we observed significant performance improvement (e.g., RoboschoolWalker2d) or degradation (e.g., RoboschoolHopper - velocities only) in a few tasks. Therefore, the representation-learning part of our algorithm (the VRMs) does not suffer from high sensitivity to hyperparameters. This can be explained by the fact that we do not use a bootstrapping update rule (in which, e.g., the estimated targets of value functions depend on the estimated value functions themselves) (Sutton & Barto, 1998) to train the VRMs.

# G SCALABILITY

Table 4 shows the scalability of our algorithm and the alternative ones.
| Algorithm | Wall-clock time (100,000 steps) | # parameters |
|---|---|---|
| Ours | 8 hr | 2.8M |
| SAC-MLP | 1 hr | 0.4M |
| SAC-LSTM | 12 hr | 1.1M |
| SLAC | 5 hr | 2.8M |
Table 4: Wall-clock time and number of parameters of our algorithm and the alternative ones. The working environment was a desktop computer with an Intel i7-6850K CPU, and the task was "Velocities-only RoboschoolHopper". The wall-clock times include training the first-impression VRM or other pretraining.

![](images/f6de5e21cdeaeae8d8d89e672a379f3cd9fcc009b3e6585d0141edbba28d897d.jpg)

![](images/22b48e71ed430f57ce4d984943ab5dc50daa1df7c0dca6153da967ab801c616d.jpg)

![](images/edabca57955d2362a95382f474981bd32ac9124909537fda76ea5e11f4988823.jpg)

![](images/eca954c6a7603d653f7cc9d1f3f72c1f94618e8fde5bf22d00cdc031494e4602.jpg)

![](images/f1eea4ce31f8aef713ef3d4dd36451929134ba3562595f753f0236e64bdffe59.jpg)

![](images/454e524cf0770172253b25bde4cd17a0ab6d70d2bfd87e888ce461ac7d04d7b8.jpg)

![](images/731cdf49edb7045c7995e93bc544f3e1619dac62a5f6855d5c1f58ac18e2452b.jpg)

![](images/e3a993596964f7583c9f8940de5e29cceaaeca7570ba1d463aeea3c13c4da401.jpg)

![](images/2a7607ec4d058cfa50d42e2c2ba4be6893a45e0783c6de61f81a2603ad753e5c.jpg)

![](images/88cfb4a8ed228d067855bf4633d7dc2c50a5ead0669da3e6c61228d8fc3c40cb.jpg)

![](images/9d2132ecf570ec476c271a5a00cb0e8218250dc2126c5e64b9ef7b1fc57e775d.jpg)

![](images/628eadfc44108359e620dacecb76a9696dec242695e7d750d905f9661fb15b63.jpg)

![](images/1a0b395bb98b69e5ec61ae99d273880f47eaf3a7cff3c099f3be02da78026d38.jpg)
Figure 7: Learning curves of our algorithm and the modified ones.
+ +![](images/6b7b814022c49575839770c67049b71e86074db683fdeb3c340db2e291a25b37.jpg) + +![](images/b5d6cd2189a092cdf918c3c58d57dab90d32569d15d02e3c16339fcd612af204.jpg) + +![](images/20c78536594a70576340d8ffc19168328ed8192a6f0ceb5fb91eb166ab77cf65.jpg) + +![](images/d816562a533f3a5dde5c2b3bf873eb8da1139b909e1c752a85a781ed3760598e.jpg) +RoboschoolHopper - velocities only + +![](images/1d8a98a2709312b9b7c94b649413842ecaa11287ae005c9fa8fd49e6c0d473cf.jpg) +RoboschoolWalker2d - velocities only + +![](images/0cfc228141a186403b0b99a5c9af81dd139db0c8dff1c458a3fd595621b5b72f.jpg) +RoboschoolAnt - velocities only +Figure 8: Robots learned to hop or walk in PO environments using our algorithm. Each panel shows trajectory of a trained agent (randomly selected) within one episode. + +![](images/8945482c6e1db419c3c9278620af9454494fd8e1820f64ab421726baa5362e34.jpg) +RoboschoolHopper - no velocities + +![](images/ddc3460a607422e200fb5f97386526324d90e4d06c289123369e69585285f521.jpg) +RoboschoolWalker2d - no velocities + +![](images/e22c8c3ff9895a8ebc8bc4b962a0daaf66af003ab084c0bd372129f2a448da7e.jpg) +RoboschoolAnt - no velocities + +![](images/a71c16b5141b056c063c182b07515052aeffb16aff36fba4ad27830f98787d67.jpg) +RoboschoolHopper - velocities only (open loop) + +![](images/a9b734ff302bd907abfd89e9b7876d2d4ed900df34a59432b7c6770d133fb6ad.jpg) + +![](images/96f84384d0ae780e33ff96b3e2a56388ccdaa07ca802bcfabc4a6067483c9c04.jpg) + +![](images/54d8c34b32300ded3bd13cd50fad311b3d69fc0c2667709e8f5a097d88c17f1e.jpg) + +![](images/1b74011372f94aefeb9e3c4fc3c21a515cd993ca7d450f41d637eaba78506cb3.jpg) + +![](images/c5afd7a86450bc464728d186f8197f17920d08b76b218f165c957f8a5a0ab639.jpg) + +![](images/b96c9b0871c547b782dd2c6276b876eb7209bea3b1e82b2ba4d5fe989a51c247.jpg) + +![](images/705b0f3412fec52955173a46090837cd168eea20d02ebb7805526dbcb92afbc0.jpg) +RoboschoolHopper - velocities only (close loop) + +![](images/87ce48ede86ca2e511df1e0e0911b4cccdad96c08fcc469ace5276b8ef35f93b.jpg) + 
+![](images/764d5060ff590754f1003f142fac9e97dd9ebd9977d318b07763afb6a37109c0.jpg) + +![](images/bf82af2cbccdfe5613fb73a614dbc8aa6adcc4d5c59439fb07064caad292a2a0.jpg) + +![](images/113c1d6ec82d8dc6d594c1d5c9c380a8db8ed6b8383055f6289383d1bd64a08a.jpg) + +![](images/f64d652907fec20e4fb9bb977f560710d264b5bd0adab211250719f8a0dfb04e.jpg) + +![](images/877909d99bcc7cf13eb95727eb571b25d17af8d723ea8716914a6f58b925a36a.jpg) + +![](images/e55707d97b70e94d1876ae0c303b9cbaa1dd37e4737875711b7541264eb56623.jpg) +Pendulum - no velocities (open loop) + +![](images/fb890eb69e3c1ed1dcc7ca3c1c19df3c63fa690423f4e58c50a8791032e70bac.jpg) + +![](images/41257e7bbfec505f22179cea889adbfc9197abe8f068f783646ac61e0b2320aa.jpg) +Pendulum - no velocities (close loop) + +![](images/fe26fb550a6d93b264d688963445bb215a1c2b8ae96a6c9127eb51a31556f3f8.jpg) + +![](images/c13b1c2ad9fe9b7b8c320b661aec8f8e35b2956f0f1c755407a0fafd0e36c40b.jpg) +Figure 9: Examples of observation predictions by keep-learning VRMs of trained agents. 
+ +![](images/8ba25f2cbd14d1c04c1d77869f06dc8d8f370b4004f6b2b42193c9a2fe199846.jpg) + +![](images/d41ecea136e16151476b6c48cca9cf6a363d3d20bdd5ced9d8a5e9a9ab1e871e.jpg) + +![](images/89aa83f80314729d706ee51a678e5b9b079331efb6ac720c498d11376b3a0007.jpg) + +![](images/1b8e78bc0b7da7726865c3d1c3675a11e0dc7eab8ad883443644a51281eb7634.jpg) + +![](images/a1b13c6fdda8c34f7bc69ed91e320329d58b43a2686e2be4a2f410957891b5f5.jpg) + +![](images/8b0c9e6ac52741aff8d6e36e705d8dfacbe2fe8d9a6576e467c305c825ef01b0.jpg) + +![](images/f04d83988e3c4d284043faac7ac50e7193d394e275dcdb1c8c80ac31e0bf6a63.jpg) + +![](images/a64a6bd7775056ad33d9216846926a355e030c62754272712db031eaf203d380.jpg) + +![](images/70d9ec4491fead62bf1d99d5d39323fca00a6faeb7d5b46e97720a0473d6fb00.jpg) + +![](images/a8834b166a5b3cc4727002e76d4887a44a6a697c334146cd7aa1b39ee3df2b9c.jpg) +Figure 10: The learning curves of our algorithm using the hyperparameters for the VRMs used in the paper (Table 2), and using a range of random hyperparameters (Appendix F). Data are Mean $\pm$ S.E.M., obtained from 20 repeats using different random seeds. 
![](images/f729a4c47bbc151ebd9a808286e295d263210857fa17bec13871c7278e0be0f7.jpg)

![](images/c9a9cb059370905c47b7f28688f416f8b92be3fb4ddae58cbdf71e2bb3ac6355.jpg)

![](images/9aaecf0ed9f8fbbc89d0b946d00d7d300b5aa92b1e6b988ebf417ccbf0a55c60.jpg)

![](images/537c721c7e5657281be86aa11dd6959b47599b09f7d9fe6285295dc773c558a5.jpg)

![](images/3e30db960de1527ed36aa1f6c87687e36ed84d1be0957048032e8800661b9353.jpg)

![](images/8a4956e91e426055c49e7db0b665e8c08fbc87bb6b7bfef385cedd5b16d8c01b.jpg)

![](images/f79788e7ac5ea09b85fbd0f891569df83243ccc578a0c777a0ca9955731011a2.jpg)
# VARIATIONAL TEMPLATE MACHINE FOR DATA-TO-TEXT GENERATION

Rong Ye†*, Wenxian Shi, Hao Zhou, Zhongyu Wei†, Lei Li

†Fudan
University

{rye18,zywei}@fudan.edu.cn

ByteDance AI Lab

{shiwenxian,zhouhao.nlp,lileilab}@bytedance.com

# ABSTRACT

How can we generate descriptions from structured data organized in tables? Existing approaches using neural encoder-decoder models often suffer from a lack of diversity. We claim that an open set of templates is crucial for enriching phrase constructions and realizing varied generations. Learning such templates is prohibitive since it often requires a large paired corpus, which is seldom available. This paper explores the problem of automatically learning reusable "templates" from paired and non-paired data. We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables. Our contributions include: (a) we carefully devise a specific model architecture and losses to explicitly disentangle text template and semantic content information in the latent spaces, and (b) we utilize both small parallel corpora and large raw text corpora without aligned tables to enrich the template learning. Experiments on datasets from a variety of domains show that VTM is able to generate more diverse text while keeping good fluency and quality.

# 1 INTRODUCTION

Generating text descriptions from structured data (data-to-text) is an important task with many practical applications. Data-to-text has been used to generate different kinds of texts, such as weather reports (Angeli et al., 2010), sports news (Mei et al., 2016; Wiseman et al., 2017) and biographies (Lebret et al., 2016; Wang et al., 2018b; Chisholm et al., 2017). Figure 1 gives an example of the data-to-text task, which takes an infobox$^{1}$ as input and outputs a brief description of the information in the table. Several recent methods utilize neural encoder-decoder frameworks to generate text descriptions from data tables (Lebret et al., 2016; Bao et al., 2018; Chisholm et al., 2017; Liu et al., 2018).
Although current table-to-text models can generate high-quality sentences, the diversity of their outputs is not satisfactory. We find that templates are crucial for increasing the variation of sentence structure. For example, Table 1 gives three descriptions, with their templates, for the given table input. Different templates control the sentence arrangement and thus vary the generation. Some related work (Wiseman et al., 2018; Dou et al., 2018) employs a hidden semi-Markov model to extract templates from table-text pairs.

We argue that templates can be exploited more effectively to generate diverse outputs. First, it is non-trivial to sample different templates to obtain different output utterances. Directly adopting variational auto-encoders (VAEs, Kingma & Welling (2014)) for table-to-text only enables sampling in a single latent space. However, such VAEs often generate irrelevant outputs, because sampling may change the table content instead of the template, which harms the quality of the output sentences. If we can instead sample directly in a template space, we may obtain more diverse outputs while keeping the quality of the output sentences high.
| | |
|---|---|
| Table | name[nameVariable], eatType[pub], food[Japanese], priceRange[average], customerRating[low], area[riverside] |
| Template 1 | [name] is a [food] restaurant, it is a [eatType] and it has an [priceRange] cost and [customerRating] rating. it is in [area]. |
| Sentence 1 | nameVariable is a Japanese restaurant, it is a pub and it has an average cost and low rating. it is in riverside. |
| Template 2 | [name] has an [priceRange] price range with a [customerRating] rating, and [name] is an [food] [eatType] in [area]. |
| Sentence 2 | nameVariable has an average price range with a low rating, and nameVariable is an Japanese pub in riverside. |
| Template 3 | [name] is a [eatType] with a [customerRating] rating and [priceRange] cost, it is a [food] restaurant and [name] is in [area]. |
| Sentence 3 | nameVariable is a pub with a low rating and average cost, it is a Japanese restaurant and nameVariable is in riverside. |

Table 1: An example: generating sentences based on different templates.

Second, we can hardly obtain promising sentences by sampling in the template space if that space is not informative. Both encoder-decoder models and VAE-based models require abundant parallel table-text pairs during training, and constructing a high-quality parallel dataset is labor-intensive. With limited table-sentence pairs, a VAE model cannot construct an informative template space. How to fully utilize raw sentences (without aligned tables) to enrich the latent template space remains underexplored.

In this paper, to address the above two problems, we propose the variational template machine (VTM) for data-to-text generation, which generates sentences with diverse templates while preserving high quality. In particular, we introduce two latent variables, representing the template and the content, to control the generation. The two latent variables are disentangled, so we can generate diverse outputs by directly sampling in the template latent space. Moreover, we propose a novel approach for semi-supervised learning in the VAE framework, which fully exploits raw sentences to enrich the template space. Inspired by back-translation (Sennrich et al., 2016; Burlot & Yvon, 2018; Artetxe et al., 2018), we design a variational back-translation process: instead of training a sentence-to-table backward generation model directly, we take the variational posterior of the content latent variable as the backward model to help train the forward generative model. Auxiliary losses are introduced to ensure the learning of meaningful and disentangled latent variables.

Experimental results on the Wikipedia biography dataset (Lebret et al., 2016) and the sentence planning NLG dataset (Reed et al., 2018) show that our model can generate texts with more diversity while keeping good fluency.
Training together with a large amount of raw text, VTM can further improve the generation performance. Moreover, VTM is particularly advantageous when a sentence-to-table backward model is hard to train. Ablation studies also demonstrate the effects of the auxiliary losses on the disentanglement of the template and content spaces.

# 2 PROBLEM FORMULATION AND NOTATIONS

As a data-to-text task, we have table-text pairs $\mathcal{D}_p = \{(x_i,y_i)\}_{i = 1}^N$, where $x_{i}$ is the table and $y_{i}$ is the output sentence.

Following the description scheme of Lebret et al. (2016), a table $\pmb{x}$ can be viewed as a set of $K$ records of field-position-value triples, i.e., $\pmb{x} = \{(f,p,v)_i\}_{i=1}^K$, where $f$ is the field and $p$ is the index of value $v$ in the field $f$. For example, an item "Name: John Lennon" is denoted as two records: (Name, 1, John) and (Name, 2, Lennon). For each triple, we first embed field, position and value as $d$-dimensional vectors $e_f, e_p, e_v \in \mathbb{R}^d$. Then, the $d_t$-dimensional representation of the record is obtained by $h_i = \tanh(W[e_f, e_p, e_v]^T + b)$, $i = 1\dots K$, where $W \in \mathbb{R}^{d_t \times 3d}$ and $b \in \mathbb{R}^{d_t}$ are parameters. The final representation of the table, denoted $f_{\mathrm{enc}}(x)$, is obtained by max-pooling over all field-position-value triple records:

$$
f _ {\mathrm {e n c}} (x) = h = \mathbf {M a x P o o l} _ {i} \left\{h _ {i}; i = 1 \dots K \right\}.
$$

In addition to the table-text pairs, we also have raw texts without table input, denoted $\mathcal{D}_r = \{\pmb{y}_i\}_{i=1}^M$, where typically $M \gg N$.

![](images/97941bbec79533ff5ed26a32f9f1f3b1ae26028b2be52918306fbbab2e3a2fb9.jpg)
Figure 1: Two types of data in the data-to-text task: Row 2 presents an example of table-text pairs; Row 3 shows a sample of raw text, whose table input is missing and only the sentence is provided.
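The record encoder and max-pooling described above can be sketched in a few lines of NumPy. This is a minimal illustration with toy dimensions; the embedding lookup `embed`, the vocabulary, and all sizes are made up, and the real model would learn $W$, $b$, and the embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_t = 4, 8                          # toy embedding / record dimensions
W = rng.normal(size=(d_t, 3 * d)) * 0.1
b = np.zeros(d_t)

# Toy embedding lookup for fields, positions, and values (random vectors).
vocab = ["Name", 1, 2, "John", "Lennon"]
embed = {tok: rng.normal(size=d) for tok in vocab}

def encode_table(records):
    """h_i = tanh(W [e_f; e_p; e_v] + b), then element-wise max over records."""
    hs = [np.tanh(W @ np.concatenate([embed[f], embed[p], embed[v]]) + b)
          for f, p, v in records]
    return np.max(np.stack(hs), axis=0)   # MaxPool_i over all records

# "Name: John Lennon" as two field-position-value records.
records = [("Name", 1, "John"), ("Name", 2, "Lennon")]
h = encode_table(records)
```

Because the pooling is an element-wise max over records, the table code is invariant to the order of the records, which matches the set-of-triples view of the table.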
![](images/0b61cc065d41b6e6af90428c0634c945b68e75f79897697bdb88bf8b754ec5a0.jpg)
Figure 2: The graphical model of VTM: $z$ is the latent variable from the template space, and $c$ is the content variable. $x$ is the corresponding table for the table-text pairs. $y$ is the observed sentence. The solid lines depict the generative model and the dashed lines the inference model.

# 3 VARIATIONAL TEMPLATE MACHINE

As shown in the graphical model in Figure 2, our VTM modifies the vanilla VAE by introducing two independent latent variables $z$ and $c$, the template latent variable and the content latent variable, respectively. $c$ models the content information in the table, while $z$ models the sentence template information. The target sentence $y$ is generated from both the content and template variables. The two latent variables are disentangled, which makes it possible to generate diverse and relevant sentences by sampling the template variable while retaining the content variable. For the pairwise and raw data presented in Figure 1, the generation process of the content latent variable $c$ differs.

- For a given table-text pair $(x, y) \in \mathcal{D}_p$, the content is observable from table $x$. As a result, $c$ is assumed to be deterministic given table $x$, with prior defined as a delta distribution $p(c|x) = \delta (c = f_{\mathrm{enc}}(x))$. The marginal log-likelihood is:

$$
\begin{array}{l} \log p _ {\theta} (y | x) = \log \int_ {z} \int_ {c} p _ {\theta} (y | x, z, c) p (z) p (c | x) \, \mathrm {d} c \, \mathrm {d} z \\ = \log \int_ {z} p _ {\theta} (y | x, z, c = f _ {\mathrm {e n c}} (x)) p (z) \, \mathrm {d} z, \quad (x, y) \in \mathcal {D} _ {p}. \tag {1} \\ \end{array}
$$

- For a raw text $y \in \mathcal{D}_r$, the content is unobservable in the absence of a table $x$. As a result, the content latent variable $c$ is sampled from the Gaussian prior $\mathcal{N}\left( {0,I}\right)$.
The marginal log-likelihood is:

$$
\log p _ {\theta} (y) = \log \int_ {z} \int_ {c} p _ {\theta} (y | z, c) p (z) p (c) \, \mathrm {d} c \, \mathrm {d} z, \quad y \in \mathcal {D} _ {r}. \tag {2}
$$

In order to make full use of both table-text pair data and raw text data, the above marginal log-likelihoods are optimized jointly:

$$
\mathcal {L} (\theta) = \mathbb {E} _ {(x, y) \sim \mathcal {D} _ {p}} [ \log p _ {\theta} (y | x) ] + \mathbb {E} _ {y \sim \mathcal {D} _ {r}} [ \log p _ {\theta} (y) ]. \tag {3}
$$

Directly optimizing Equation 3 is intractable. Following the idea of variational inference (Kingma & Welling, 2014), a variational posterior $q_{\phi}(\cdot)$ is constructed as an inference model (dashed lines in Figure 2) to approximate the true posterior. Instead of the marginal log-likelihood in Equation 3, we maximize the evidence lower bound (ELBO). Sections 3.1 and 3.2 derive the ELBOs for table-text pair data and raw text data, respectively.

# 3.1 LEARNING FROM TABLE-TEXT PAIR DATA

In this section, we present the learning loss for table-text pair data. According to the aforementioned assumption, the content variable $c$ is observable and follows a delta distribution centered at the hidden representation of the table $x$.

ELBO objective. Assuming that the template variable $z$ relies only on the template of the target sentence, we introduce $q_{\phi_z}(z|y)$ as an approximation of the true posterior $p(z|y, c, x)$.

The ELBO loss for Equation 1 is written as

$$
\mathcal {L} _ {\mathrm {E L B O} _ {p}} (x, y) = - \mathbb {E} _ {q _ {\phi_ {z}} (z | y)} \log p _ {\theta} (y | z, c = f _ {\mathrm {e n c}} (x), x) + D _ {\mathrm {K L}} (q _ {\phi_ {z}} (z | y) \| p (z)), \quad (x, y) \in \mathcal {D} _ {p}.
$$

The variational posterior $q_{\phi_z}(z|y)$ is assumed to be a multivariate Gaussian distribution $\mathcal{N}(\mu_{\phi_z}(y), \Sigma_{\phi_z}(y))$, while the prior $p(z)$ is the standard normal distribution $\mathcal{N}(0, I)$.

Preserving-Template Loss. Without any supervision, the ELBO loss alone does not guarantee learning a good template representation space. Inspired by work on style transfer (Hu et al., 2017b; Shen et al., 2017; Bao et al., 2019; John et al., 2018), an auxiliary loss is introduced to embed the template information of sentences into the template variable $z$.

With the table, we can roughly align the tokens in a sentence with the records in the table. By replacing these tokens with a special token $\langle ent \rangle$, we remove the content information from the sentence and obtain a sketchy sentence template, denoted $\tilde{y}$. We introduce the preserving-template loss $\mathcal{L}_{\mathrm{pt}}$ to ensure that the latent variable $z$ contains only template information:

$$
\mathcal {L} _ {\mathrm {p t}} (x, y, \tilde {y}) = - \mathbb {E} _ {q _ {\phi_ {z}} (z | y)} \log p _ {\eta} (\tilde {y} | z) = - \mathbb {E} _ {q _ {\phi_ {z}} (z | y)} \sum_ {t = 1} ^ {m} \log p _ {\eta} (\tilde {y} _ {t} | z, \tilde {y} _ {< t})
$$

where $m$ is the length of $\tilde{y}$, and $\eta$ denotes the parameters of the extra template generator. $\mathcal{L}_{\mathrm{pt}}$ is trained on parallel data. In practice, due to the limited amount of parallel data, the template generator $p_{\eta}$ may not be well learned. However, experimental results show that this loss suffices to guide the learning of a template space.

# 3.2 LEARNING FROM RAW TEXT DATA

Our model can make use of a large amount of raw data without tables, since the content information of the (missing) table can be captured by the content latent variable.

ELBO objective.
According to the definition of the generative model in Equation 2, the ELBO of the raw text data satisfies

$$
\log p _ {\theta} (y) \geq \mathbb {E} _ {q _ {\phi} (z, c | y)} \log \frac {p _ {\theta} (y , z , c)}{q _ {\phi} (z , c | y)}, \quad y \in \mathcal {D} _ {r}.
$$

With the mean-field approximation (Xing et al., 2003), $q_{\phi}(z, c|y)$ can be factorized as $q_{\phi}(z, c|y) = q_{\phi_z}(z|y)q_{\phi_c}(c|y)$. We have:

$$
\begin{array}{l} \mathcal {L} _ {\mathrm {E L B O} _ {r}} (y) = - \mathbb {E} _ {q _ {\phi_ {z}} (z | y) q _ {\phi_ {c}} (c | y)} \log p _ {\theta} (y | z, c) \\ + D _ {\mathrm {K L}} \left(q _ {\phi_ {z}} (z | y) \| p (z)\right) + D _ {\mathrm {K L}} \left(q _ {\phi_ {c}} (c | y) \| p (c)\right), \quad y \in \mathcal {D} _ {r}. \\ \end{array}
$$

In order to use the template information contained in raw text data effectively, the parameters of the generation network $p_{\theta}(y|z,c)$ and the posterior network $q_{\phi_z}(z|y)$ are shared between pairwise and raw data. In the decoding process for raw text data, we use the content variable $c$ in place of the table embedding, since the table $x$ is missing. The variational posterior for $c$ is another multivariate Gaussian, $q_{\phi_c}(c|y) = \mathcal{N}(\mu_{\phi_c}(y), \Sigma_{\phi_c}(y))$. Both $p(z)$ and $p(c)$ are taken as the standard normal distribution $\mathcal{N}(0,I)$.

Preserving-Content Loss. In order to make the posterior $q_{\phi_c}(c|y)$ correctly infer the content information, the table-text pairs are used as supervision to train the recognition network $q_{\phi_c}(c|y)$. To this end, we add a preserving-content loss

$$
\mathcal {L} _ {\mathrm {p c}} (x, y) = \mathbb {E} _ {q _ {\phi_ {c}} (c | y)} \| c - h \| ^ {2} + D _ {\mathrm {K L}} \left(q _ {\phi_ {c}} (c | y) \| p (c)\right), \quad (x, y) \in \mathcal {D} _ {p},
$$

where $h = f_{\mathrm{enc}}(x)$ is the embedding of the table obtained by the table encoder.
Minimizing $\mathcal{L}_{\mathrm{pc}}$ also helps bridge the gap in $c$ between pairwise training data (taking $c = h$) and raw training data (sampling from $q_{\phi_c}(c|y)$). Moreover, the first term of $\mathcal{L}_{\mathrm{pc}}$ is equivalent to (1) moving the mean of $q_{\phi_c}(c|y)$ closer to $h$ and (2) minimizing the trace of the covariance of $q_{\phi_c}(c|y)$; the second term serves as a regularization. Detailed explanations and proofs are given in the supplementary materials.

Algorithm 1: Training procedure

Input: model parameters $\phi_z,\phi_c,\theta ,\eta$; table-text pair data $\mathcal{D}_p = \{(x,y)_i\}_{i = 1}^N$; raw text data $\mathcal{D}_r = \{\pmb {y}_j\}_{j = 1}^M$, $M\gg N$.

Procedure $\mathrm{TRAIN}(\mathcal{D}_p,\mathcal{D}_r)$:
1. Update $\phi_z,\phi_c,\theta ,\eta$ by gradient descent on $\mathcal{L}_{\mathrm{ELBO}_p} + \mathcal{L}_{\mathrm{MI}} + \mathcal{L}_{\mathrm{pt}} + \mathcal{L}_{\mathrm{pc}}$
2. Update $\phi_z,\phi_c,\theta$ by gradient descent on $\mathcal{L}_{\mathrm{ELBO}_r} + \mathcal{L}_{\mathrm{MI}}$
3. Update $\phi_z,\phi_c,\theta ,\eta$ by gradient descent on $\mathcal{L}_{tot}$

# 3.3 MUTUAL INFORMATION LOSS

As shown in previous works (Chen et al., 2016; Zhao et al., 2017; 2018), adding a mutual information term to the ELBO can effectively alleviate KL collapse and improve the quality of the variational posterior. Adding mutual information terms directly imposes the association of the content and template latent variables with the target sentences. Besides, theoretical proof$^{2}$ and experimental results show that introducing a mutual information bias is necessary in the presence of the preserving-template loss $\mathcal{L}_{\mathrm{pt}}(\boldsymbol{x}^p,\boldsymbol{y}^p)$.

As a result, in our work, the following mutual information term is added to the objective:

$$
\mathcal {L} _ {\mathrm {M I}} (y) = - I (z, y) - I (c, y).
$$

# 3.4 TRAINING PROCESS

The final loss of VTM is made up of the ELBO losses and the auxiliary losses:

$$
\begin{array}{l} \mathcal {L} _ {t o t} \left(x ^ {p}, y ^ {p}, y ^ {r}\right) = \mathcal {L} _ {\mathrm {E L B O} _ {p}} \left(x ^ {p}, y ^ {p}\right) + \mathcal {L} _ {\mathrm {E L B O} _ {r}} \left(y ^ {r}\right) + \lambda_ {\mathrm {M I}} \left(\mathcal {L} _ {\mathrm {M I}} \left(y ^ {p}\right) + \mathcal {L} _ {\mathrm {M I}} \left(y ^ {r}\right)\right) \\ + \lambda_ {\mathrm {p t}} \mathcal {L} _ {\mathrm {p t}} \left(x ^ {p}, y ^ {p}\right) + \lambda_ {\mathrm {p c}} \mathcal {L} _ {\mathrm {p c}} \left(x ^ {p}, y ^ {p}\right), \quad \left(x ^ {p}, y ^ {p}\right) \in \mathcal {D} _ {p}, \; y ^ {r} \in \mathcal {D} _ {r}. \\ \end{array}
$$

$\lambda_{\mathrm{MI}}, \lambda_{\mathrm{pt}}$ and $\lambda_{\mathrm{pc}}$ are hyperparameters weighting the auxiliary losses.

The training procedure is shown in Algorithm 1. The parameters of the generation network $\theta$ and the posterior networks $\phi_z, \phi_c$ are trained jointly on both table-text pair data and raw text data. In this way, a large amount of raw text data can be used to enrich the generation diversity.

# 4 EXPERIMENT

# 4.1 DATASETS AND BASELINE MODELS

Dataset. We perform experiments on SPNLG (Reed et al., 2018)$^3$ and WIKI (Lebret et al., 2016; Wang et al., 2018b). The two datasets come from different domains. The former is a collection of restaurant descriptions that expands the E2E dataset$^4$ into a total of 204,955 utterances with more varied sentence structures and instances. The latter contains 728,321 sentences of biographies from Wikipedia. To simulate a setting in which a large number of raw texts is available, we use only part of the table-text pairs from the two datasets, leaving most of the instances as raw texts. Concretely, for both datasets, we initially keep the ratio of table-text pairs to raw texts at 1:10.
For the WIKI dataset, in addition to the data from WikiBio (Lebret et al., 2016), the raw text data is further extended with the biographical descriptions of people $^5$ from the external Wikipedia Person and Animal Dataset (Wang et al., 2018a). The statistics for the number of table-text pairs and raw texts in the training, validation and test sets are shown in Table 2.

Evaluation Metrics. For the WIKI dataset, we evaluate the generation quality based on BLEU-4, NIST, and ROUGE-L (F-score). For SPNLG, we use BLEU-4, NIST, METEOR, ROUGE-L (F-score), and CIDEr. We use the same automatic evaluation script as the E2E NLG Challenge. The diversity of generation is evaluated by self-BLEU (Zhu et al., 2018): the lower the self-BLEU, the more diverse the generated sentences.
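Self-BLEU treats each generated sentence as a hypothesis and the remaining generations as references, then averages the resulting BLEU scores. The sketch below is a simplified, smoothed BLEU up to bigrams, written from scratch for illustration; it is not the exact evaluation script used in the paper.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(references, hypothesis, max_n=2):
    # Smoothed modified n-gram precision combined with a brevity penalty.
    log_p = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        log_p += math.log((clipped + 1) / (total + 1)) / max_n  # add-one smoothing
    # Brevity penalty against the closest-length reference.
    ref_len = min((abs(len(r) - len(hypothesis)), len(r)) for r in references)[1]
    bp = 1.0 if len(hypothesis) >= ref_len else math.exp(1 - ref_len / max(len(hypothesis), 1))
    return bp * math.exp(log_p)

def self_bleu(sentences, max_n=2):
    # Score each sentence against all the other generations as references.
    scores = [bleu(sentences[:i] + sentences[i + 1:], s, max_n)
              for i, s in enumerate(sentences)]
    return sum(scores) / len(scores)
```

Identical generations give a self-BLEU of 1.0, while disjoint generations score much lower, which is why a lower self-BLEU indicates more diverse outputs.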
| Dataset | Train: #table-text pairs | Train: #raw texts | Valid: #table-text pairs | Valid: #raw texts | Test: #table-text pairs |
|---|---|---|---|---|---|
| SPNLG | 14,906 | 149,058 | 20,495 | / | 20,496 |
| WIKI | 84,150 | 841,507 | 72,831 | 42,874 | 72,831 |
Table 2: Dataset statistics in our experiments.

Baseline models. We implement the following models as baselines:

- Table2seq: The Table2seq model first encodes the table into hidden representations, then generates the sentence in a sequence-to-sequence architecture (Sutskever et al., 2014). For a fair comparison, we apply the same table-encoder architecture as in Section 2 and the same LSTM decoder with attention mechanism as our model. The model is trained only on pairwise data. During testing, we generate five sentences with beam sizes ranging from one to five to introduce some variation; we denote this model as Table2seq-beam. We also implement decoding with a forward sampling strategy (Table2seq-sample). Moreover, to incorporate raw data, we first pretrain the decoder on raw text as a language model, then train Table2seq on the table-text pairs; this variant is denoted Table2seq-pretrain and uses the same decoding strategy as Table2seq-beam.

- Temp-KN: The Template-KN model (Lebret et al., 2016) first generates a template according to an interpolated 5-gram Kneser-Ney (KN) language model trained over sentence templates, then replaces the special token for each field with the corresponding words from the table.

The hyperparameters of VTM are chosen based on the lowest $\mathcal{L}_{\mathrm{ELBO}_p}$ on the validation set of SPNLG and the lowest $\mathcal{L}_{\mathrm{ELBO}_p} + \mathcal{L}_{\mathrm{ELBO}_r}$ on the validation set of WIKI. Word embeddings are 300-dimensional and randomly initialized. During training, we use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001. Details on hyperparameters are listed in Appendix D.

# 4.2 EXPERIMENTAL RESULTS ON SPNLG DATASET

Quantitative analysis. According to the results in Table 3, our variational template machine (VTM) generally produces sentences with more diversity while maintaining a promising performance in terms of BLEU metrics.
Table2seq with beam search (Table2seq-beam), which is trained only on parallel data, generates the most fluent sentences, but its diversity is rather poor. Although the sampling decoder (Table2seq-sample) achieves a much lower self-BLEU, it sacrifices fluency as the cost. Table2seq performs even worse when the decoder is pre-trained on raw data as a language model: because there is still a gap between language modeling and the data-to-text task, the decoder fails to learn how to use the raw text in the data-to-text generation stage. On the contrary, VTM can make full use of the raw data with the help of content variables. As a template-based model, Temp-KN receives the lowest self-BLEU score, but it fails to generate fluent sentences.

Ablation study. To study the effectiveness of the auxiliary losses and the augmented raw texts, we progressively remove the auxiliary losses and the raw data in the ablation study. We draw the following conclusions.

- Without the preserving-content loss $\mathcal{L}_{\mathrm{pc}}$, the model shows a relative decline in generation quality. This implies that, by training the same inference model of the content variable on pairwise data, the preserving-content loss provides effective guidance for learning the content space.
- VTM-noraw is the model trained without raw data, where only the loss functions in Section 3.1 are optimized. Compared with VTM-noraw, VTM obtains a substantial improvement in generation quality. More importantly, without the extra raw text data, there is also a decline in diversity (self-BLEU). These results show that raw data plays a valuable role in improving both generation quality and diversity, which is often neglected by previous studies.
- We further remove the mutual information loss and the preserving-template loss from the VTM-noraw model. Both generation quality and diversity continuously decline, which verifies the effectiveness of the two losses.
Moreover, the automatic evaluation results of VTM-noraw $-\mathcal{L}_{\mathrm{MI}} - \mathcal{L}_{\mathrm{pt}}$ empirically show that the preserving-template loss may become a hindrance if it is added alone during training, as illustrated in Section 3.3.
| Methods | BLEU | NIST | METEOR | ROUGE | CIDEr | Self-BLEU |
|---|---|---|---|---|---|---|
| Table2seq-beam | 40.61 | 6.31 | 38.67 | 56.95 | 3.74 | 97.14 |
| Table2seq-sample | 34.97 | 5.68 | 35.46 | 52.74 | 3.00 | 65.69 |
| Table2seq-pretrain | 40.56 | 6.33 | 38.51 | 56.32 | 3.75 | 100.00 |
| Temp-KN | 6.45 | 0.45 | 12.53 | 27.60 | 0.23 | 37.85 |
| VTM | 40.04 | 6.25 | 38.31 | 56.48 | 3.64 | 88.77 |
| - $\mathcal{L}_{\mathrm{pc}}$ | 39.58 | 6.24 | 38.30 | 56.24 | 3.69 | 87.20 |
| VTM-noraw | 39.94 | 6.22 | 38.42 | 56.72 | 3.66 | 88.92 |
| - $\mathcal{L}_{\mathrm{MI}}$ | 38.33 | 6.02 | 37.77 | 55.92 | 3.51 | 96.55 |
| - $\mathcal{L}_{\mathrm{MI}}$ - $\mathcal{L}_{\mathrm{pt}}$ | 39.63 | 6.24 | 38.35 | 56.36 | 3.70 | 92.54 |
Table 3: Results for the SPNLG dataset. Under the 0.05 significance level, VTM achieves significantly higher results on all the fluency metrics than all the baselines except Table2seq-beam.

![](images/be794583e0732558d185dfdec6ec7d4282fc1acac8d646e9188f93a1508206fc.jpg)
Figure 3: Quality-diversity trade-off curve on SPNLG dataset.

![](images/1ff098292531cf76148c88761c08273a68d8aa2dc997c0a54a2bbb0991702c21.jpg)
Figure 4: Self-BLEU and the proportion of raw texts to table-sentence pairs.

Experiment on the quality-diversity trade-off. The quality-diversity trade-off is further analyzed to illustrate the superiority of VTM. To evaluate quality and diversity under different sampling methods, we conduct experiments with sampling from the softmax at different temperatures, a technique commonly applied to shape the output distribution (Ficler & Goldberg, 2017; Holtzman et al., 2019). Given the logits $u_{1:|V|}$ and temperature $\tau$, we sample from the distribution:

$$
p(y_{t} = V_{l} \mid y_{<t}, x, z, \tau) = \frac{\exp(u_{l}/\tau)}{\sum_{l'} \exp(u_{l'}/\tau)}
$$

When $\tau \rightarrow 0$, this approaches greedy decoding; when $\tau = 1.0$, it is the same as forward sampling. In the experiment, we gradually adjust the temperature from 0 to 1, taking $\tau = 0.1, 0.2, 0.3, 0.5, 0.6, 0.9, 1.0$. BLEU and self-BLEU at each temperature are evaluated for both Table2seq and VTM, yielding the (self-BLEU, BLEU) curves plotted in Figure 3, which empirically demonstrate the trade-off between generation quality and diversity. The closer a curve is to the upper left, the better the performance of the model.
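The temperature-scaled softmax above can be sketched in a few lines of NumPy (a minimal illustration; the function and variable names are ours):

```python
import numpy as np

def temperature_softmax(logits, tau):
    """Shape a categorical distribution: p_l proportional to exp(u_l / tau)."""
    scaled = np.asarray(logits, dtype=float) / tau
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

def sample_token(logits, tau, rng):
    # Draw one vocabulary index from the temperature-shaped distribution.
    return rng.choice(len(logits), p=temperature_softmax(logits, tau))

logits = np.array([2.0, 1.0, 0.1])
print(temperature_softmax(logits, 0.05))  # low tau: nearly one-hot on the argmax
print(temperature_softmax(logits, 1.0))   # tau = 1: the plain softmax
```

As $\tau \to 0$ the distribution concentrates on the argmax (greedy decoding), while $\tau = 1$ recovers forward sampling, matching the limits stated above.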
VTM generally achieves a lower self-BLEU, i.e., more diverse outputs, at a comparable level of BLEU score.

Human evaluation. In addition to the quantitative experiments, a human evaluation is conducted as well. We randomly select 120 generated samples (each with five sentences) and ask three annotators to rate them on a 1-5 Likert scale in terms of the following aspects:

- Accuracy: whether the generated sentences are consistent with the content in the table.
- Coherence: whether the generated sentences are coherent.
- Diversity: whether the sentences have as many patterns/structures as possible.

Based on the qualitative results in Table 4, VTM generates the best sentences, with the highest accuracy and coherence. Besides, VTM obtains diversity comparable to Table2seq-sample and Temp-KN. Compared with the model trained without raw data (VTM-noraw), there is a significant improvement in diversity, which indicates that raw data essentially enriches the latent
| Methods | Accuracy | Coherence | Diversity |
|---|---|---|---|
| Table2seq-sample | 3.44 | 4.54 | 4.87 |
| Temp-KN | 2.90 | 2.78 | 4.85 |
| VTM | 4.44 | 4.84 | 4.33 |
| VTM-noraw | 4.33 | 4.62 | 3.44 |
+ +Table 4: Human evaluation results on different models. The bold numbers are significantly higher than others under 0.01 significance level. + +
| Methods | BLEU | NIST | ROUGE | Self-BLEU |
|---|---|---|---|---|
| Table2seq-beam | 26.74 | 5.97 | 48.20 | 92.00 |
| Table2seq-sample | 21.75 | 5.32 | 42.09 | 36.07 |
| Table2seq-pretrain | 25.43 | 5.44 | 45.86 | 99.88 |
| Temp-KN | 11.68 | 2.04 | 40.54 | 73.14 |
| VTM | 25.22 | 5.96 | 45.36 | 74.86 |
| - $\mathcal{L}_{\mathrm{pc}}$ | 22.16 | 4.28 | 40.91 | 80.39 |
| VTM-noraw | 21.59 | 5.02 | 39.07 | 78.19 |
| - $\mathcal{L}_{\mathrm{MI}}$ | 21.30 | 4.73 | 40.99 | 79.45 |
| - $\mathcal{L}_{\mathrm{MI}}$ - $\mathcal{L}_{\mathrm{pt}}$ | 16.20 | 3.81 | 38.04 | 84.45 |
Table 5: Results for the WIKI dataset. All the metrics are significant under the 0.05 significance level.

![](images/45cdbad3556ab8a75c4c5e688cad9adf8ad2efb71ded6785f229fb40637e1dfc.jpg)
Figure 5: Quality-diversity trade-off curve compared with NER+Table2seq.

template space. Although Table2seq-sample and Temp-KN obtain the highest diversity scores, their generation quality is much inferior to VTM's, and comparable generation quality is a prerequisite when comparing diversity.

Experiment on diversity under different proportions of raw data. To show how much raw data may contribute to the VTM model, we train the model with different proportions of raw data to pairwise data. Specifically, we control the ratio of raw sentences to table-text pairs at 0.5:1, 1:1, 2:1, 3:1, 5:1, 7:1 and 10:1. As shown in Figure 4, the self-BLEU drops rapidly even when only a small amount of raw data is added, and continues to decrease until the ratio reaches 5:1. The improvement is marginal after adding more than five times as much raw data.

Case study. According to Table 8 (in Appendix E), although template-like structures vary greatly in the forward sampling model, the information in its sentences may be wrong. For example, Sentence 3 says that the restaurant is a Japanese place. Notably, VTM produces correct texts with more diverse templates. VTM is able to generate different numbers of sentences and different conjunctions. For example, "[name] is a [food] place in [area] with a price range of [priceRange]. It is a [eatType]." (Sentence 1, two sentences, "with" aggregation), "[name] is a [eatType] with a price range of [priceRange]. It is in [area]. It is a [food] place." (Sentence 2, three sentences, "with" aggregation), "[name] is a [food] restaurant in [area] and it is a [eatType]." (Sentence 4, one sentence, "and" aggregation).
# 4.3 EXPERIMENTAL RESULTS ON WIKI DATASET

Table 5 shows the results for the WIKI dataset; the same conclusions can be drawn as for the SPNLG dataset in both the quantitative analysis and the ablation study. VTM is able to generate sentences with quality comparable to Table2seq-beam but with more diversity.

Comparison with the pseudo-table-based method. Another way to incorporate raw data is to construct a pseudo-table from a given sentence by applying a sentence-to-table backward model via named entity recognition (NER). However, when the types of entities are complicated, such as in product introductions, or when the raw data comes from a different domain than the pairwise data, commonly used NER models cannot provide accurate pseudo-tables. In this experiment, we replace the 841,507 biography raw sentences with 101,807 sentences describing animals (Wang et al., 2018b) to test the generalization of our model on raw data from a different domain. NER+Table2seq is a two-step model that first constructs the pseudo-table with a Bi-LSTM-CRF (Huang et al., 2015) model trained on the table-text pairs, then trains Table2seq on both the table-text pairs and the pseudo-table-text pairs. We control the temperature in the decoding method as before, and the results are plotted in Figure 5. We find that, compared with NER+Table2seq, the curve of VTM is closer to the upper left,
| | Table2seq | VTM-noraw | VTM |
|---|---|---|---|
| Train | ~30min / 6 epochs | ~30min / 6 epochs | ~160min / 15 epochs |
| Test | ~80min | ~80min | ~80min |
+ +Table 6: Computational cost for each model. + +
| | |
|---|---|
| Table | name[Jack Ryder], country[Australia], fullname[John Ryder], nickname[the king of Collingwood], birth_date[8 August 1889], birth_place[Collingwood, Victoria, Australia], death_date[4 April 1977], death_place[Fitzroy, Victoria, Australia], club[Victoria], testdebutyear[1920 england], article_title[Jack Ryder (cricketer)] |
| Reference | John "Jack" Ryder, mbe (8 August 1889 - 3 April 1977) was a cricketer who played for Victoria and Australia. |
| Table2seq-sample | 1: john Ryder (8 August 1889 - 3 April 1977) was an Australian cricketer.<br>2: john Ryder Ryder (8 August 1889 - 3 April 1977) was an Australian cricketer.<br>3: john Ryder Ryder (8 August 1889 - 3 April 1977) was an Australian cricketer who played for gloucestershire cricket club in 1912.<br>4: john Ryder (8 August 1889 - 3 April 1977) was an Australian cricketer.<br>5: john oliveira (8 August 1889 - 3 April 1977) was an Australian test cricketer who played against great Britain with international cricket club. |
| Temp-KN | 1: jack Ryder (born August 8, 1889) is a former professional cricketer).<br>2: "jack" Ryder (born August 8, 1889) is a former professional cricketer) who played in the national football league.<br>3: jack Ryder (born 8 August 1889 in Collingwood, Victoria,) is a former professional cricketer).<br>4: Jack Ryder (born August 8, 1889, in Collingwood, Victoria, Australia) is a former professional football player who is currently a member of the united states.<br>5: jack Ryder (born August 8, 1889) is a former professional cricketer). |
| VTM-noraw | 1: John Ryder (8 August 1889 - 4 April 1977) was an Australian cricketer.<br>2: Jack Ryder (born August 21, 1951 in Melbourne, Victoria) was an Australian cricketer.<br>3: John Ryder (21 August 1889 - 4 April 1977) was an Australian cricketer.<br>4: Jack Ryder (8 March 1889 - 3 April 1977) was an Australian cricketer.<br>5: John Ryder (August 1889 - April 1977) was an Australian cricketer. |
| VTM | 1: John Ryder (8 August 1889 - 4 April 1977) was an Australian cricketer.<br>2: John Ryder (born 8 August 1889) was an Australian cricketer.<br>3: Jack Ryder (born August 9, 1889 in Victoria, Australia) was an Australian cricketer.<br>4: John Ryder (August 8, 1889 - April 4, 1977) was an Australian rules footballer who played for Victoria in the Victorian football league (VFL).<br>5: John Ryder, also known as the king of Collingwood (8 August 1889 - 4 April 1977) was an Australian cricketer. |
Table 7: An example of the text generated by our model and the baselines on the WIKI dataset.

which implies that VTM can generate more diversely (lower self-BLEU) at a commensurate BLEU.

Computational cost. We further compare the computational cost of VTM with the other models, for both the training and testing phases. We train and test the models on a single Tesla V100 GPU. The time needed to reach the lowest ELBO on the validation set is listed in Table 6. VTM trains about five times longer than the baseline Table2seq model (160 minutes, 15 epochs in total) because of the extra large amount of raw data involved in training (84k pairwise examples and 841k raw texts). In the testing phase, VTM enjoys the same speed as the competing models, taking approximately 80 minutes to generate the 72k wiki sentences in the test set.

Case study. Table 7 shows an example of sentences generated by the different models. Although forward sampling enables the Table2seq model to generate diversely, it is more likely to produce incorrect and irrelevant content; for example, it generates the wrong club name in Sentence 3. By sampling from the template space, VTM-noraw can generate texts with multiple templates, such as different expressions for the birth date and death date, while preserving readability. Furthermore, with extra raw data, VTM is able to generate more diverse expressions that the other models cannot produce, such as "[fullname], also known as [nickname] ([birth_date] - [death_date]) was a [country] [article_name_4]." (Sentence 5). This implies that raw sentences absent from the pairwise dataset can additionally enrich the information in the template space.
(2018) proposed a mixed hierarchical attention model to generate weather reports from standard tables. Gong et al. (2019) proposed a hierarchical table-encoder and a decoder with dual attention. Although encoder-decoder models can generate fluent sentences, they are criticized for their deficiency in sentence diversity. Other works focused on controllable and interpretable generation by introducing templates as latent variables. Wiseman et al. (2018) designed a Semi-HMM decoder to learn discrete template representations, and Dou et al. (2018) created Data2TextStudio, a platform equipped with a Semi-HMM model, to extract templates and generate from table input in an interactive way.

Semi-supervised Learning From Raw Data. It is easier to acquire raw text than structured data, yet most neural generators cannot make good use of raw text. Ma et al. (2019) observed that the encoder-decoder framework may fail when not enough parallel corpus is provided. In the area of machine translation, back-translation has been proven to be an effective method of utilizing monolingual data (Sennrich et al., 2016; Burlot & Yvon, 2018).

Latent Variable Generative Model. Deep generative models, especially variational autoencoders (VAEs) (Kingma & Welling, 2014), have shown promising performance in generation. Bowman et al. (2016) showed that an RNN-based VAE model can produce diverse and well-formed sentences by sampling from the prior of a continuous latent variable. Recent works explored methods to learn disentangled latent variables (Hu et al., 2017a; Zhou & Neubig, 2017; Bao et al., 2019). For instance, Bao et al. (2019) devised multi-task and adversarial losses to disentangle the latent space into a syntactic space and a semantic space. Motivated by the ideas of back-translation and variational autoencoders, the VTM model proposed in this work can not only fully utilize a non-parallel text corpus, but also learn a disentangled representation for template and content.
+ +# 6 CONCLUSION + +In this paper, we propose the Variational Template Machine (VTM) based on a semi-supervised learning approach in the VAE framework. Our method not only builds independent latent spaces for template and content for diverse generation, but also exploits raw texts without tables to further expand the template diversity. Experimental results on two datasets show that VTM outperforms the model without using raw data in terms of both generation quality and diversity, and it can achieve a comparable quality in generation with Table2seq, as well as promote the diversity by a large margin. + +# ACKNOWLEDGMENTS + +We thank the anonymous reviewers for their insightful comments. Hao Zhou and Zhongyu Wei are the corresponding authors of this paper. + +# REFERENCES + +Gabor Angeli, Percy Liang, and Dan Klein. A simple domain-independent probabilistic approach to generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2010. +Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. In Proceedings of the International Conference on Learning Representations, 2018. +Junwei Bao, Duyu Tang, Nan Duan, Zhao Yan, Yuanhua Lv, Ming Zhou, and Tiejun Zhao. Table-to-text: Describing table region with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018. +Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xinyu Dai, and Jiajun Chen. Generating sentences from disentangled syntactic and semantic spaces. In Proceedings of the Conference of the Association for Computational Linguistics, 2019. + +Samuel Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the Conference on Computational Natural Language Learning., 2016. +Franck Burlot and François Yvon. Using monolingual data in neural machine translation: a systematic study. 
In Proceedings of the Conference on Machine Translation: Research Papers, 2018. +Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets. Proceedings of the Advances in Neural Information Processing Systems, 2016. +Andrew Chisholm, Will Radford, and Ben Hachey. Learning to generate one-sentence biographies from wikidata. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, 2017. +Longxu Dou, Guanghui Qin, Jinpeng Wang, Jin-Ge Yao, and Chin-Yew Lin. Data2text studio: Automated text generation from structured data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2018. +Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, 2017. +Heng Gong, Xiaocheng Feng, Bing Qin, and Ting Liu. Table-to-text generation with effective hierarchical encoder on three dimensions (row, column and time). In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, 2019. +Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019. +Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Toward controlled generation of text. In Proceedings of the International Conference on Machine Learning, 2017a. +Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Toward controlled generation of text. In Proceedings of the International Conference on Machine Learning, 2017b. +Zhiheng Huang, Wei Xu, and Kai Yu. Bidirectional LSTM-crf models for sequence tagging. arXiv preprint arXiv:1508.01991, 2015. 
+Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema, Mitesh M Khapra, and Shreyas Shetty. A mixed hierarchical attention based encoder-decoder approach for standard table summarization. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 2018. +Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the Conference of the Association for Computational Linguistics, 2018. +Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015. +Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the International Conference on Learning Representations, 2014. +Rémi Lebret, David Grangier, and Michael Auli. Neural text generation from structured data with application to the biography domain. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016. +Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018. +Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, and Xu Sun. Key fact as pivot: A two-stage model for low resource table-to-text generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2019. + +Hongyuan Mei, Mohit Bansal, and Matthew R Walter. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016. +Lena Reed, Shereen Oraby, and Marilyn Walker. Can neural generators for dialogue learn sentence planning and discourse structuring? 
In Proceedings of the International Conference on Natural Language Generation, 2018. +Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2016. +Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Proceedings of the Advances in Neural Information Processing Systems, 2017. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Proceedings of the Advances in Neural Information Processing Systems, 2014. +Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. Describing a knowledge base. In Proceedings of the International Conference on Natural Language Generation, 2018a. +Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. Describing a knowledge base. In Proceedings of the International Conference on Natural Language Generation, 2018b. +Sam Wiseman, Stuart Shieber, and Alexander Rush. Challenges in data-to-document generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2017. +Sam Wiseman, Stuart Shieber, and Alexander Rush. Learning neural templates for text generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2018. +Eric P Xing, Michael I Jordan, and Stuart Russell. A generalized mean field algorithm for variational inference in exponential families. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2003. +Shengjia Zhao, Jiaming Song, and Stefano Ermon. Infovae: Information maximizing variational autoencoders. arXiv preprint arXiv:1706.02262, 2017. +Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2018. +Chunting Zhou and Graham Neubig. Multi-space variational encoder-decoders for semi-supervised labeled sequence transduction. Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2017. +Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. Texygen: A benchmarking platform for text generation models. In Proceedings of the International ACM SIGIR Conference on Research & Development in Information Retrieval, 2018. + +# A EXPLANATION FOR PRESERVING-CONTENT LOSS + +The first term of $-\mathcal{L}_{\mathrm{pc}}(x,y)$ is equivalent to: + +$$ +\begin{array}{l} \mathbb {E} _ {q _ {c} (c | x)} | | c - h | | ^ {2} = \mathbb {E} _ {q _ {c} (c | x)} \sum_ {i = 1} ^ {K} (c _ {i} - h _ {i}) ^ {2} \\ = \sum_ {i = 1} ^ {K} \mathbb {E} _ {q _ {c} (c | x)} \left(c _ {i} - h _ {i}\right) ^ {2} \\ = \sum_ {i = 1} ^ {K} \left[ \left(\mathbb {E} \left(c _ {i} - h _ {i}\right)\right) ^ {2} + \operatorname {v a r} \left(c _ {i}\right) \right] \\ = \sum_ {i = 1} ^ {K} \left[ \left(E \left(c _ {i}\right) - h _ {i}\right) ^ {2} + \operatorname {v a r} \left(c _ {i}\right) \right] \\ = \sum_ {i = 1} ^ {K} [ (\mu_ {i} - h _ {i}) ^ {2} + \Sigma_ {i i} ] \\ = \left\| \mu - h \right\| ^ {2} + t r (\Sigma) \\ \end{array} +$$ + +When we minimize it, we jointly minimize the distance between mean of approximated posterior distribution, and the trace of the co-variance matrix. 
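The identity $\mathbb{E}_{q}\|c-h\|^{2} = \|\mu-h\|^{2} + \mathrm{tr}(\Sigma)$ derived above can be checked numerically with a quick Monte-Carlo sketch (the variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
mu = rng.normal(size=K)              # mean of the approximate posterior q(c|x)
A = rng.normal(size=(K, K))
Sigma = A @ A.T + np.eye(K)          # a valid (positive-definite) covariance matrix
h = rng.normal(size=K)               # the target content representation

# Monte-Carlo estimate of E_{c ~ N(mu, Sigma)} ||c - h||^2
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.mean(np.sum((samples - h) ** 2, axis=1))

# Closed form from the derivation: ||mu - h||^2 + tr(Sigma)
closed = np.sum((mu - h) ** 2) + np.trace(Sigma)
print(mc, closed)  # the two agree up to Monte-Carlo error
```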
# B PROOF FOR ANTI-INFORMATION PROPERTY OF ELBO

Considering the KL divergence over the whole dataset (or a mini-batch of data), we have

$$
\begin{aligned}
\mathbb{E}_{x\sim p(x)}\left[D_{\mathrm{KL}}(q(z|x)\,\|\,p(z))\right] &= \mathbb{E}_{q(z|x)p(x)}\left[\log q(z|x) - \log p(z)\right] \\
&= -H(z|x) - \mathbb{E}_{q(z)}\log p(z) \\
&= -H(z|x) + H(z) + D_{\mathrm{KL}}(q(z)\,\|\,p(z)) \\
&= I(z,x) + D_{\mathrm{KL}}(q(z)\,\|\,p(z))
\end{aligned}
$$

where $q(z) = \mathbb{E}_{x\sim \mathcal{D}}[q(z|x)]$ and $I(z,x) = H(z) - H(z|x)$. Since the KL divergence acts as a regularization term in the ELBO, maximizing the ELBO minimizes this KL term, and hence the mutual information $I(z,x)$ between $x$ and the latent $z$ is also minimized. This implies that $z$ and $x$ eventually become more independent.

# C PROOF FOR THE PRESERVING-TEMPLATE LOSS WHEN POSTERIOR COLLAPSE HAPPENS

When posterior collapse happens, $D_{\mathrm{KL}}(q(z|y)\,\|\,p(z)) \approx 0$, so $q(z|y) \approx p(z)$ and

$$
\begin{aligned}
\mathcal{L}_{pt}(Y, \tilde{Y}) &= \mathbb{E}_{\tilde{y}\sim p(\tilde{y}),\, y\sim p(y)}\,\mathbb{E}_{z\sim q(z|y)} \log p_{\eta}(\tilde{y}|z) \\
&= \mathbb{E}_{\tilde{y}\sim p(\tilde{y})}\,\mathbb{E}_{z\sim p(z)} \log p_{\eta}(\tilde{y}|z) \\
&= \int_{\tilde{y}} p(\tilde{y}) \int_{z} p(z) \log p_{\eta}(\tilde{y}|z)\, dz\, d\tilde{y} \\
&= \int_{z} p(z) \int_{\tilde{y}} p(\tilde{y}) \log p_{\eta}(\tilde{y}|z)\, d\tilde{y}\, dz \\
&= \mathbb{E}_{z}\,\mathbb{E}_{\tilde{y}}\left[\log p_{\eta}(\tilde{y}|z)\right],
\end{aligned}
$$

which no longer depends on $q(z|y)$. During back-propagation,

$$
\left\| \nabla_{z} \mathcal{L}_{pt}(Y, \tilde{Y}) \right\| = 0,
$$

thus $\phi_z$ is not updated.
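The decomposition $\mathbb{E}_{x}[D_{\mathrm{KL}}(q(z|x)\|p(z))] = I(z,x) + D_{\mathrm{KL}}(q(z)\|p(z))$ holds exactly and can be verified on a small discrete example (a sketch with distributions of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nz = 3, 4
px = np.array([0.2, 0.5, 0.3])                  # data distribution p(x)
q_z_x = rng.dirichlet(np.ones(nz), size=nx)     # variational posterior q(z|x), one row per x
pz = np.full(nz, 1.0 / nz)                      # prior p(z)

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Left-hand side: expected KL between the posterior and the prior
lhs = sum(px[i] * kl(q_z_x[i], pz) for i in range(nx))

# Right-hand side: I(z, x) + KL(q(z) || p(z)) with q(z) = E_x[q(z|x)]
qz = px @ q_z_x
H_z = -float(np.sum(qz * np.log(qz)))
H_z_x = -float(np.sum(px[:, None] * q_z_x * np.log(q_z_x)))
mi = H_z - H_z_x
rhs = mi + kl(qz, pz)
print(lhs, rhs)  # identical up to floating-point error
```

Because the KL term is what the ELBO pushes down, the identity makes the anti-information effect concrete: the mutual information $I(z,x)$ is part of what gets minimized.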
# D IMPLEMENTATION DETAILS

For the model trained on the WIKI dataset, the dimension of the latent template variable is set to 100, and the dimension of the latent content variable is set to 200. The dimension of the hidden representation of the table is 300. For the hyperparameters of the total loss $\mathcal{L}_{tot}$, we set $\lambda_{\mathrm{MI}} = 0.5$, $\lambda_{\mathrm{pt}} = 1.0$ and $\lambda_{\mathrm{pc}} = 0.5$.

For the model trained on the SPNLG dataset, the dimension of the latent template variable is set to 64, and the dimension of the latent content variable is set to 100. The dimension of the hidden representation of the table is also 300. For the hyperparameters of the total loss $\mathcal{L}_{tot}$, we set $\lambda_{\mathrm{MI}} = \lambda_{\mathrm{pt}} = \lambda_{\mathrm{pc}} = 1.0$.

# E CASE STUDY ON SPNLG EXPERIMENT
| | |
|---|---|
| Table | name[nameVariable], eatType[pub], food[French], priceRange[20-25], area[riverside] |
| Reference | nameVariable is a French place with a price range of 20-25. It is in riverside. It is a pub. |
| Table2seq-sample | 1: nameVariable is a pub with a price range of 20-25. It is a French restaurant in riverside.<br>2: nameVariable is a French restaurant in riverside with a price range of 20-25. nameVariable is a pub.<br>3: nameVariable is a pub with a price range of 20-25 and nameVariable is a French restaurant in riverside.<br>4: nameVariable is a pub with a price range of 20-25, also it is in riverside. it is a Japanese place.<br>5: nameVariable is a pub with a average rating and it is a French place in riverside. |
| Temp-KN | 1: nameVariable is in riverside, also it is in riverside.<br>2: nameVariable is a French restaurant.<br>3: nameVariable is the best restaurant.<br>4: nameVariable is in riverside, and nameVariable is in [location].<br>5: nameVariable is in. Its a French restaurant and it is in [location] with food and, even if nameVariable is [food_qual], it is the best place. |
| VTM-noraw | 1: nameVariable is a pub with a price range of 20-25. It is a French place in riverside.<br>2: nameVariable is a pub with a price range of 20-25. it is a pub. It is in riverside.<br>3: nameVariable is a French place in riverside with a price range of 20-25. It is a pub.<br>4: nameVariable is a French place in riverside with a price range of 20-25. It is a pub.<br>5: nameVariable is a French place in riverside with a price range of 20-25. It is a pub. |
| VTM | 1: nameVariable is a French place in riverside with a price range of 20-25. It is a pub.<br>2: nameVariable is a pub with a price range of 20-25. It is in riverside. It is a French place.<br>3: nameVariable is a French pub in riverside with a price range of 20-25, and it is a pub.<br>4: nameVariable is a French restaurant in riverside and it is a pub.<br>5: nameVariable is a French place in riverside with a price range of 20-25. It is a pub. |
+ +Table 8: An example of the generated text by our model and the baselines on SPNLG dataset. \ No newline at end of file diff --git a/variationaltemplatemachinefordatatotextgeneration/images.zip b/variationaltemplatemachinefordatatotextgeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0fe29a3a4fa273d94a58dd5e3c6bb0ae3cf1389d --- /dev/null +++ b/variationaltemplatemachinefordatatotextgeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f30fd4b4cf90ff68fbd0638c208d00aadab3b4f2319b43d5eaa5716850f705a +size 775555 diff --git a/variationaltemplatemachinefordatatotextgeneration/layout.json b/variationaltemplatemachinefordatatotextgeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5f739505f7b8d6d201784a5765b0afea85ce0d18 --- /dev/null +++ b/variationaltemplatemachinefordatatotextgeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a9adab758979520db78371d31f11409338aa774e3f3e19985ab4d064c1cbf169 +size 443901 diff --git a/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_content_list.json b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7896a4a24eeaed08cf61762d740a90060bf276a3 --- /dev/null +++ b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3f2d268114d2fe904f4fbbb30e3713891e8860492723c9aac9cb8f18d7ab463f +size 117081 diff --git a/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_model.json b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..b5b6649ce7a793903674c434c3131eafb3f206d0 --- /dev/null +++ b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fbaaf991be571962c1fbfadfaac5102845b31576e4ce0f0274721f0b72a03746 +size 143143 diff --git a/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_origin.pdf b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a020f839b43013bdcf2d75b8002fa1f4879c0592 --- /dev/null +++ b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/4d84d7ad-6908-48d5-9fc2-14cd2689d413_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ee2b61f00dfa876bfcd514fea95b112f40eb783904f833f67743cddca024dd1 +size 1624975 diff --git a/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/full.md b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..113c4e3cc62fd8d051e4d0c21d609beb5c643ca5 --- /dev/null +++ b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/full.md @@ -0,0 +1,472 @@ +# VARIBAD: A VERY GOOD METHOD FOR BAYES-ADAPTIVE DEEP RL VIA META-LEARNING + +Luisa Zintgraf * + +University of Oxford + +Kyriacos Shiarlis + +Latent Logic + +Maximilian Igl + +University of Oxford + +Sebastian Schulze + +University of Oxford + +Yarin Gal + +University of Oxford + +Katja Hofmann + +Microsoft Research + +Shimon Whiteson + +University of Oxford + +# ABSTRACT + +Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning. 
A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent's uncertainty about the environment. Computing a Bayes-optimal policy is however intractable for all but the smallest tasks. In this paper, we introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection. In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty. We further evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher online return than existing methods. + +# 1 INTRODUCTION + +Reinforcement learning (RL) is typically concerned with finding an optimal policy that maximises expected return for a given Markov decision process (MDP) with an unknown reward and transition function. If these were known, the optimal policy could in theory be computed without environment interactions. By contrast, learning in an unknown environment usually requires trading off exploration (learning about the environment) and exploitation (taking promising actions). Balancing this trade-off is key to maximising expected return during learning, which is desirable in many settings, particularly in high-stakes real-world applications like healthcare and education (Liu et al., 2014; Yauney & Shah, 2018). A Bayes-optimal policy, which does this trade-off optimally, conditions actions not only on the environment state but on the agent's own uncertainty about the current MDP. + +In principle, a Bayes-optimal policy can be computed using the framework of Bayes-adaptive Markov decision processes (BAMDPs) (Martin, 1967; Duff & Barto, 2002), in which the agent maintains a belief distribution over possible environments. 
Augmenting the state space of the underlying MDP with this belief yields a BAMDP, a special case of a belief MDP (Kaelbling et al., 1998). A Bayes-optimal agent maximises expected return in the BAMDP by systematically seeking out the data needed to quickly reduce uncertainty, but only insofar as doing so helps maximise expected return. Its performance is bounded from above by the optimal policy for the given MDP, which does not need to take exploratory actions but requires prior knowledge about the MDP to compute. + +Unfortunately, planning in a BAMDP, i.e., computing a Bayes-optimal policy that conditions on the augmented state, is intractable for all but the smallest tasks. A common shortcut is to rely instead on posterior sampling (Thompson, 1933; Strens, 2000; Osband et al., 2013). Here, the agent periodically samples a single hypothesis MDP (e.g., at the beginning of an episode) from its posterior, and the policy that is optimal for the sampled MDP is followed until the next sample is drawn. Planning is far more tractable since it is done on a regular MDP, not a BAMDP. However, posterior sampling's exploration can be highly inefficient and far from Bayes-optimal. + +![](images/0311d4fa27692962aa7203c801c3847d8b6d697093ca88aea8ed6cf7478a8393.jpg) +(a) Environment + +![](images/9a4df75a1b03bafe14ead310a930b98a044f4c2814bd61b22fd538f627c1b831.jpg) +(b) Bayes-Optimal + +![](images/5a0a6890cf2988008cbc502b32d9f27280f6949292e7bfdf994b71e8ccdb74e0.jpg) +(c) Posterior Sampling + +![](images/d3206f550f8a64afe0fbd5ba4125e6857656934ae4fcdeee40849a39f031f6c3.jpg) +(d) VariBAD + +![](images/8067896c60d9c399cfdc0f96f0f7d94daad590a257518f7484f5d393209b3d7c.jpg) +(e) Performance +Figure 1: Illustration of different exploration strategies. (a) Environment: The agent starts at the bottom left and has to navigate to an unknown goal, located in the grey area. 
(b) A Bayes-optimal exploration strategy that systematically searches possible grid cells to find the goal, shown in solid (past actions) and dashed (future actions) blue lines. A simplified posterior is shown in the background in grey ( $p = 1/$ number of possible goal positions left, of containing the goal) and white ( $p = 0$ ). (c) Posterior sampling, which repeatedly samples a possible goal position (red squares) from the current posterior, takes the shortest route there, and updates its posterior. (d) Exploration strategy learned by variBAD. The grey background represents the approximate posterior the agent has learned. (e) Average return over all possible environments, over six episodes with 15 steps each (after which the agent is reset to the starting position). VariBAD results are averaged across 20 random seeds. The performance of any exploration strategy is bounded above by the optimal behaviour (of a policy with access to the true goal position). The Bayes-optimal agent matches this behaviour from the second episode, whereas posterior sampling needs six rollouts. VariBAD closely approximates Bayes-optimal behaviour in this environment.

Consider the example of a gridworld in Figure 1, where the agent must navigate to an unknown goal located in the grey area (1a). To maintain a posterior, the agent can uniformly assign non-zero probability to cells where the goal could be, and zero to all other cells. A Bayes-optimal strategy systematically searches the set of goal positions that the posterior considers possible, until the goal is found (1b). Posterior sampling, by contrast, samples a possible goal position, takes the shortest route there, and then resamples a different goal position from the updated posterior (1c). Doing so is much less efficient since the agent's uncertainty is not reduced optimally (e.g., states are revisited).
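The posterior maintenance in this gridworld example can be made concrete with a short sketch. The grid size and the "grey" candidate region below are illustrative choices of ours, not taken from the paper:

```python
# Toy 5x5 gridworld: the goal lies somewhere in an assumed "grey" region.
# The belief is a categorical posterior over candidate goal cells; visiting a
# cell and observing no goal reward rules that cell out (Bayes rule with a
# deterministic observation model).
candidates = [(x, y) for x in range(5) for y in range(5) if x + y >= 5]
belief = {c: 1.0 / len(candidates) for c in candidates}  # uniform prior

def update_belief(belief, visited, goal_found):
    """Posterior after observing whether `visited` contains the goal."""
    if goal_found:
        return {c: float(c == visited) for c in belief}
    post = {c: (0.0 if c == visited else p) for c, p in belief.items()}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

belief = update_belief(belief, (1, 4), goal_found=False)
# Mass is now spread uniformly over the remaining candidate cells.
```

Each unsuccessful visit zeroes out one cell and renormalises, which is exactly the shrinking grey region drawn in Figure 1b.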
A key challenge is to learn approximately Bayes-optimal policies while retaining the tractability of posterior sampling. In addition, the inference involved in maintaining a posterior belief, needed even for posterior sampling, may itself be intractable.

In this paper, we combine ideas from Bayesian RL, approximate variational inference, and meta-learning to tackle these challenges, and equip an agent with the ability to strategically explore unseen (but related) environments from a given distribution, in order to maximise its expected online return.

More specifically, we propose variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference on an unknown task, and incorporate task uncertainty directly during action selection. Given a distribution over MDPs $p(M)$ , we represent a single MDP $M$ using a learned, low-dimensional stochastic latent variable $m$ and jointly meta-train:

1. A variational auto-encoder that can infer the posterior distribution over $m$ in a new task, given the agent's experience, while interacting with the environment, and
2. A policy that conditions on this posterior belief over MDP embeddings, and thus learns how to trade off exploration and exploitation when selecting actions under task uncertainty.

Figure 1e shows the performance of our method versus the hard-coded optimal (with privileged goal information), Bayes-optimal, and posterior sampling exploration strategies. VariBAD's performance closely matches the Bayes-optimal one, matching optimal performance from the third rollout.

Previous approaches to BAMDPs are only tractable in environments with small action and state spaces or rely on privileged information about the task during training. VariBAD offers a tractable and flexible approach for learning Bayes-adaptive policies tailored to the training task distribution, with the only assumption that such a distribution is available for meta-training.
We evaluate our approach on the gridworld shown above and on MuJoCo domains that are widely used in meta-RL, and show that variBAD exhibits superior exploratory behaviour at test time compared to existing meta-learning methods, achieving higher returns during learning. As such, variBAD opens a path to tractable approximate Bayes-optimal exploration for deep reinforcement learning. + +# 2 BACKGROUND + +We define a Markov decision process (MDP) as a tuple $M = (\mathcal{S}, \mathcal{A}, R, T, T_0, \gamma, H)$ with $\mathcal{S}$ a set of states, $\mathcal{A}$ a set of actions, $R(r_{t+1}|s_t, a_t, s_{t+1})$ a reward function, $T(s_{t+1}|s_t, a_t)$ a transition function, $T_0(s_0)$ an initial state distribution, $\gamma$ a discount factor, and $H$ the horizon. In the standard RL setting, we want to learn a policy $\pi$ that maximises $\mathcal{J}(\pi) = \mathbb{E}_{T_0, T, \pi}\left[\sum_{t=0}^{H-1} \gamma^t R(r_{t+1}|s_t, a_t, s_{t+1})\right]$ , the expected return. Here, we consider a multi-task meta-learning setting, which we introduce next. + +# 2.1 TRAINING SETUP + +We adopt the standard meta-learning setting where we have a distribution $p(M)$ over MDPs from which we can sample during meta-training, with an MDP $M_{i} \sim p(M)$ defined by a tuple $M_{i} = (S, \mathcal{A}, R_{i}, T_{i}, T_{i,0}, \gamma, H)$ . Across tasks, the reward and transition functions vary but share some structure. The index $i$ represents an unknown task description (e.g., a goal position or natural language instruction) or task ID. Sampling an MDP from $p(M)$ is typically done by sampling a reward and transition function from a distribution $p(R, T)$ . During meta-training, batches of tasks are repeatedly sampled, and a small training procedure is performed on each of them, with the goal of learning to learn (for an overview of existing methods see Sec 4). At meta-test time, the agent is evaluated based on the average return it achieves during learning, for tasks drawn from $p$ . 
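The meta-test criterion just described, average online return over tasks drawn from $p(M)$, can be sketched as follows; `sample_task` and `rollout` are hypothetical placeholders for the environment loop, not the paper's code:

```python
import numpy as np

def discounted_return(rewards, gamma):
    """The quantity inside J(pi): sum_t gamma^t * r_{t+1}."""
    return float(sum(gamma**t * r for t, r in enumerate(rewards)))

def meta_test_score(sample_task, rollout, n_tasks=50, gamma=0.95):
    """Average (discounted) return the agent achieves while learning,
    over tasks sampled from p(M); the meta-test protocol described above."""
    returns = [discounted_return(rollout(sample_task()), gamma)
               for _ in range(n_tasks)]
    return float(np.mean(returns))
```

Note that the score is computed over the agent's very first interactions with each test task, so exploration behaviour directly affects it.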
Doing this well requires at least two things: (1) incorporating prior knowledge obtained in related tasks, and (2) reasoning about task uncertainty when selecting actions to trade off exploration and exploitation. In the following, we combine ideas from meta-learning and Bayesian RL to tackle these challenges.

# 2.2 BAYESIAN REINFORCEMENT LEARNING

When the MDP is unknown, optimal decision making has to trade off exploration and exploitation when selecting actions. In principle, this can be done by taking a Bayesian approach to reinforcement learning, formalised as a Bayes-Adaptive MDP (BAMDP), the solution to which is a Bayes-optimal policy (Bellman, 1956; Duff & Barto, 2002; Ghavamzadeh et al., 2015).

In the Bayesian formulation of RL, we assume that the transition and reward functions are distributed according to a prior $b_{0} = p(R,T)$ . Since the agent does not have access to the true reward and transition function, it can maintain a belief $b_{t}(R,T) = p(R,T|\tau_{:t})$ , which is the posterior over the MDP given the agent's experience $\tau_{:t} = \{s_{0},a_{0},r_{1},s_{1},a_{1},\ldots ,s_{t}\}$ up until the current timestep. This is often done by maintaining a distribution over the model parameters.

To allow the agent to incorporate task uncertainty into its decision-making, the state can be augmented with this belief, resulting in hyper-states $s_t^+ \in S^+ = S \times \mathcal{B}$ , where $\mathcal{B}$ is the belief space.
These transition according to

$$
\begin{array}{l}
T^{+}\left(s_{t+1}^{+} \mid s_{t}^{+}, a_{t}, r_{t}\right) = T^{+}\left(s_{t+1}, b_{t+1} \mid s_{t}, a_{t}, r_{t}, b_{t}\right) \\
= T^{+}\left(s_{t+1} \mid s_{t}, a_{t}, b_{t}\right) T^{+}\left(b_{t+1} \mid s_{t}, a_{t}, r_{t}, b_{t}, s_{t+1}\right) \\
= \mathbb{E}_{b_{t}}\left[T\left(s_{t+1} \mid s_{t}, a_{t}\right)\right] \delta\left(b_{t+1} = p(R, T \mid \tau_{:t+1})\right) \tag{1}
\end{array}
$$

i.e., the next environment state $s_{t+1}$ is distributed according to the expected transition function under the current posterior, and the belief is updated deterministically according to Bayes rule. The reward function on hyper-states is defined as the expected reward under the current posterior (after the state transition) over reward functions,

$$
R^{+}\left(s_{t}^{+}, a_{t}, s_{t+1}^{+}\right) = R^{+}\left(s_{t}, b_{t}, a_{t}, s_{t+1}, b_{t+1}\right) = \mathbb{E}_{b_{t+1}}\left[R\left(s_{t}, a_{t}, s_{t+1}\right)\right]. \tag{2}
$$

This results in a BAMDP $M^{+} = (S^{+},\mathcal{A},R^{+},T^{+},T_{0}^{+},\gamma ,H^{+})$ (Duff & Barto, 2002), which is a special case of a belief MDP, i.e., the MDP formed by taking the posterior beliefs maintained by an agent in a partially observable MDP and reinterpreting them as Markov states (Cassandra et al., 1994). In an arbitrary belief MDP, the belief is over a hidden state that can change over time. In a BAMDP, the belief is over the transition and reward functions, which are constant for a given task.
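For a finite set of candidate MDPs, Eqs. (1)-(2) can be written out directly. A minimal numpy sketch; the discrete task set, the random tables, and a reward table R[k, s, a] that drops the $s_{t+1}$ dependence of Eq. (2) are simplifying assumptions of this illustration (the paper's belief is over function space):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, K = 4, 2, 3                                 # states, actions, candidate MDPs
T = rng.dirichlet(np.ones(S), size=(K, S, A))     # T[k, s, a] = p(. | s, a) under MDP k
R = rng.random((K, S, A))                         # expected reward under MDP k

def hyper_transition(b, s, a):
    """First factor of Eq. (1): expected next-state distribution under belief b."""
    return np.einsum('k,ks->s', b, T[:, s, a])

def belief_update(b, s, a, s_next):
    """Second factor of Eq. (1): deterministic Bayes update after (s, a, s')."""
    post = b * T[:, s, a, s_next]
    return post / post.sum()

def hyper_reward(b_next, s, a):
    """Eq. (2): expected reward under the updated posterior."""
    return float(b_next @ R[:, s, a])

b0 = np.full(K, 1.0 / K)                          # uniform prior over the K tasks
p_next = hyper_transition(b0, s=0, a=1)
b1 = belief_update(b0, s=0, a=1, s_next=0)
r_plus = hyper_reward(b1, s=0, a=1)
```

The hyper-state dynamics mix the candidate models according to the belief, while the belief itself is updated deterministically, mirroring the delta function in Eq. (1).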
The agent's objective is now to maximise the expected return in the BAMDP,

$$
\mathcal{J}^{+}(\pi) = \mathbb{E}_{b_{0}, T_{0}^{+}, T^{+}, \pi}\left[\sum_{t=0}^{H^{+}-1} \gamma^{t} R^{+}\left(r_{t+1} \mid s_{t}^{+}, a_{t}, s_{t+1}^{+}\right)\right], \tag{3}
$$

i.e., maximise the expected return in an initially unknown environment, while learning, within the horizon $H^{+}$ . Note the distinction between the MDP horizon $H$ and the BAMDP horizon $H^{+}$ . Although they often coincide, we might instead want the agent to act Bayes-optimally within the first $N$ MDP episodes, so $H^{+} = N \times H$ . Trading off exploration and exploitation optimally depends heavily on how much time the agent has left (e.g., to decide whether information-seeking actions are worth it).

The objective in (3) is maximised by the Bayes-optimal policy, which automatically trades off exploration and exploitation: it takes exploratory actions to reduce its task uncertainty only insofar as doing so helps to maximise the expected return within the horizon. The BAMDP framework is powerful because it provides a principled way of formulating Bayes-optimal behaviour. However, solving the BAMDP is hopelessly intractable for most interesting problems.

The main challenges are as follows.

- We typically do not know the parameterisation of the true reward and/or transition model,
- The belief update (computing the posterior $p(R, T|\tau_{:t})$ ) is often intractable, and
- Even with the correct posterior, planning in belief space is typically intractable.

In the following, we propose a method that simultaneously meta-learns the reward and transition functions, how to perform inference in an unknown MDP, and how to use the belief to maximise expected online return. Since the Bayes-adaptive policy is learned end-to-end with the inference framework, no planning is necessary at test time.
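For contrast, exact planning in belief space (which variBAD avoids) is feasible only on toy problems. For a two-armed Bernoulli bandit the belief is a pair of Beta posteriors, and the Bayes-optimal value can be computed by dynamic programming over posterior counts; the horizon and Beta(1,1) priors below are our own toy choices:

```python
from functools import lru_cache

H_PLUS = 6  # BAMDP horizon

@lru_cache(maxsize=None)
def bayes_value(counts, t):
    """Bayes-optimal expected future reward from belief `counts` at step t.
    counts = ((s0, f0), (s1, f1)): per-arm Beta(1,1) posterior successes/failures."""
    if t == H_PLUS:
        return 0.0
    best = 0.0
    for a, (s, f) in enumerate(counts):
        p = (s + 1) / (s + f + 2)                 # posterior mean success probability
        win, lose = list(counts), list(counts)
        win[a], lose[a] = (s + 1, f), (s, f + 1)
        q = p * (1.0 + bayes_value(tuple(win), t + 1)) \
            + (1 - p) * bayes_value(tuple(lose), t + 1)
        best = max(best, q)
    return best

v = bayes_value(((0, 0), (0, 0)), 0)
# Even here the number of reachable beliefs grows combinatorially with the
# horizon; for general MDPs the belief-space blow-up is far worse.
```

The recursion enumerates every reachable belief, which is exactly what becomes infeasible once the state, action, or task space is non-trivial.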
We make minimal assumptions (no privileged task information is required during training), resulting in a highly flexible and scalable approach to Bayes-adaptive Deep RL.

# 3 BAYES-ADAPTIVE DEEP RL VIA META-LEARNING

In this section, we present variBAD, and describe how we tackle the challenges outlined above. We start by describing how to represent reward and transition functions, and (posterior) distributions over these. We then consider how to meta-learn to perform approximate variational inference in a given task, and finally put all the pieces together to form our training objective.

In the typical meta-learning setting, the reward and transition functions that are unique to each MDP are unknown, but share some structure across the MDPs $M_{i}$ in $p(M)$ . We know that there exists a true $i$ which represents either a task description or task ID, but we do not have access to this information. We therefore represent this value using a learned stochastic latent variable $m_{i}$ . For a given MDP $M_{i}$ we can then write

$$
R_{i}\left(r_{t+1} \mid s_{t}, a_{t}, s_{t+1}\right) \approx R\left(r_{t+1} \mid s_{t}, a_{t}, s_{t+1}; m_{i}\right), \tag{4}
$$

$$
T_{i}\left(s_{t+1} \mid s_{t}, a_{t}\right) \approx T\left(s_{t+1} \mid s_{t}, a_{t}; m_{i}\right), \tag{5}
$$

where $R$ and $T$ are shared across tasks. Since we do not have access to the true task description or ID, we need to infer $m_{i}$ given the agent's experience up to time step $t$ collected in $M_{i}$ ,

$$
\tau_{:t}^{(i)} = \left(s_{0}, a_{0}, r_{1}, s_{1}, a_{1}, r_{2}, \dots, s_{t-1}, a_{t-1}, r_{t}, s_{t}\right), \tag{6}
$$

i.e., we want to infer the posterior distribution $p(m_i|\tau_{:t}^{(i)})$ over $m_i$ given $\tau_{:t}^{(i)}$ (from now on, we drop the sub- and superscript $i$ for ease of notation).
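The sharing in Eqs. (4)-(5) can be sketched with networks whose output depends on the task only through $m$. The tiny random MLPs and dimensions below are illustrative stand-ins, not the paper's architecture, and no training happens in this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Tiny random MLP (fixed weights) standing in for a shared decoder network."""
    return [(rng.standard_normal((n_in, n_out)) / np.sqrt(n_in), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    for W, b in net[:-1]:
        x = np.tanh(x @ W + b)
    W, b = net[-1]
    return x @ W + b

S_DIM, A_DIM, M_DIM = 3, 2, 5
T_net = mlp([S_DIM + A_DIM + M_DIM, 32, S_DIM])   # mean of T(s' | s, a; m)
R_net = mlp([2 * S_DIM + A_DIM + M_DIM, 32, 1])   # mean of R(r | s, a, s'; m)

# The same networks serve every task; only the embedding m changes.
s, a = np.zeros(S_DIM), np.ones(A_DIM)
m1, m2 = rng.standard_normal(M_DIM), rng.standard_normal(M_DIM)
s_next_1 = forward(T_net, np.concatenate([s, a, m1]))
s_next_2 = forward(T_net, np.concatenate([s, a, m2]))
r1 = forward(R_net, np.concatenate([s, a, s_next_1, m1])).item()
```

Swapping $m$ changes the predicted dynamics without touching the shared weights, which is what lets all tasks contribute data to one pair of decoder networks.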
Recall that our goal is to learn a distribution over the MDPs and, given posterior knowledge of the environment, compute the optimal action. Given the above reformulation, it is now sufficient to reason about the embedding $m$ , instead of the transition and reward dynamics. This is particularly useful when deploying deep learning strategies, where the reward and transition function can consist of millions of parameters, but the embedding $m$ can be a small vector.

![](images/8ee4253ad17816a9ec4f0fb9c1579b715d6670e09b18d23f83bdd20a4a91d712.jpg)
Figure 2: VariBAD architecture: A trajectory of states, actions and rewards is processed online using an RNN to produce the posterior over task embeddings, $q_{\phi}(m|\tau_{:t})$ . The posterior is trained using a decoder which attempts to predict past and future states and rewards from current states and actions. The policy conditions on the posterior in order to act in the environment and is trained using RL.

# 3.1 APPROXIMATE INFERENCE

Computing the exact posterior is typically not possible: we do not have access to the MDP (and hence the transition and reward function), and marginalising over tasks is computationally infeasible. Consequently, we need to learn a model of the environment $p_{\theta}(\tau_{:H^{+}}|a_{:H^{+}-1})$ , parameterised by $\theta$ , together with an amortised inference network $q_{\phi}(m|\tau_{:t})$ , parameterised by $\phi$ , which allows fast inference at runtime at each timestep $t$ . The action-selection policy is not part of the MDP, so an environment model can only give rise to a distribution of trajectories when conditioned on actions, which we typically draw from our current policy, $a \sim \pi$ .
At any given time step $t$ , our model learning objective is thus to maximise

$$
\mathbb{E}_{\rho(M, \tau_{:H^{+}})}\left[\log p_{\theta}(\tau_{:H^{+}} \mid a_{:H^{+}-1})\right], \tag{7}
$$

where $\rho(M, \tau_{:H^+})$ is the trajectory distribution induced by our policy and we slightly abuse notation by denoting by $\tau$ the state-reward trajectories, excluding the actions. In the following, we drop the conditioning on $a_{:H^{+}-1}$ to simplify notation.

Instead of optimising (7), which is intractable, we can optimise a tractable lower bound, defined with a learned approximate posterior $q_{\phi}(m|\tau_{:t})$ , which can be estimated by Monte Carlo sampling (for the full derivation see Appendix A):

$$
\begin{array}{l}
\mathbb{E}_{\rho(M, \tau_{:H^{+}})}\left[\log p_{\theta}(\tau_{:H^{+}})\right] \geq \mathbb{E}_{\rho}\left[\mathbb{E}_{q_{\phi}(m \mid \tau_{:t})}\left[\log p_{\theta}(\tau_{:H^{+}} \mid m)\right] - KL\left(q_{\phi}(m \mid \tau_{:t}) \| p_{\theta}(m)\right)\right] \tag{8} \\
= ELBO_{t}.
\end{array}
$$

The term $\mathbb{E}_q[\log p(\tau_{:H^+}|m)]$ is often referred to as the reconstruction loss, and $p_{\theta}(\tau_{:H^{+}}|m)$ as the decoder. The term $KL(q(m|\tau_{:t})||p_{\theta}(m))$ is the KL-divergence between our variational posterior $q_{\phi}$ and the prior over the embeddings $p_{\theta}(m)$ . We set the prior to our previous posterior, $q_{\phi}(m|\tau_{:t-1})$ , with initial prior $q_{\phi}(m) = \mathcal{N}(0,I)$ .

As can be seen in Equation (8) and Figure 2, when the agent is at timestep $t$ , we encode the past trajectory $\tau_{:t}$ to get the current posterior $q(m|\tau_{:t})$ , since this is all the information available to perform inference about the current task. We then decode the entire trajectory $\tau_{:H^{+}}$ including the future, i.e., model $\mathbb{E}_q[p(\tau_{:H^+}|m)]$ . This is different from the conventional VAE setup (and possible since we have access to this information during training).
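A minimal numpy sketch of a single $ELBO_t$ term from Eq. (8), with a diagonal-Gaussian posterior, the previous posterior as prior, and a reparameterised Monte Carlo sample. The `log_lik` function stands in for the decoder $\log p_{\theta}(\tau_{:H^{+}}|m)$; all dimensions and the toy log-likelihood are assumptions of this sketch:

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag var_q) || N(mu_p, diag var_p) )."""
    return 0.5 * float(np.sum(np.log(var_p / var_q)
                              + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0))

def elbo_t(mu_t, var_t, mu_prev, var_prev, log_lik, rng):
    """One ELBO_t term (Eq. 8): reparameterised sample m ~ q_phi(m | tau_{:t}),
    reconstruction of the full trajectory via `log_lik`, minus the KL to the
    previous posterior (the paper's choice of prior)."""
    eps = rng.standard_normal(mu_t.shape)
    m = mu_t + np.sqrt(var_t) * eps               # reparameterisation trick
    return log_lik(m) - gaussian_kl(mu_t, var_t, mu_prev, var_prev)

rng = np.random.default_rng(0)
d = 5
mu, var = np.zeros(d), np.ones(d)                 # initial prior N(0, I)
value = elbo_t(mu, var, mu, var, log_lik=lambda m: -0.5 * float(m @ m), rng=rng)
```

Because the sample is written as a deterministic function of $(\mu_t, \sigma_t)$ and noise, gradients can flow through the posterior parameters, which is what makes the bound trainable end-to-end.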
Decoding not only the past but also the future is important because this way, variBAD learns to perform inference about unseen states given the past.

The reconstruction term $\log p(\tau_{:H^{+}}|m)$ factorises as

$$
\begin{array}{l}
\log p\left(\tau_{:H^{+}} \mid m, a_{:H^{+}-1}\right) = \log p\left(\left(s_{0}, r_{1}, s_{1}, \dots, s_{H^{+}-1}, r_{H^{+}}, s_{H^{+}}\right) \mid m, a_{:H^{+}-1}\right) \tag{9} \\
= \log p(s_{0} \mid m) + \sum_{i=0}^{H^{+}-1}\left[\log p\left(s_{i+1} \mid s_{i}, a_{i}, m\right) + \log p\left(r_{i+1} \mid s_{i}, a_{i}, s_{i+1}, m\right)\right].
\end{array}
$$

Here, $p(s_0|m)$ is the initial state distribution $T_0'$ , $p(s_{i+1}|s_i,a_i;m)$ the transition function $T'$ , and $p(r_{i+1}|s_i,a_i,s_{i+1};m)$ the reward function $R'$ . From now on, we include $T_0'$ in $T'$ for ease of notation.

# 3.2 TRAINING OBJECTIVE

We can now formulate a training objective for learning the approximate posterior distribution over task embeddings, the policy, and the generalised reward and transition functions $R'$ and $T'$ . We use deep neural networks to represent the individual components. These are:

1. The encoder $q_{\phi}(m|\tau_{:t})$ , parameterised by $\phi$ ;
2. An approximate transition function $T' = p_{\theta}^{T}(s_{i+1}|s_{i},a_{i};m)$ and an approximate reward function $R' = p_{\theta}^{R}(r_{i+1}|s_{i},a_{i},s_{i+1};m)$ , which are jointly parameterised by $\theta$ ; and
3. A policy $\pi_{\psi}(a_t|s_t,q_\phi (m|\tau_{:t}))$ parameterised by $\psi$ and dependent on $\phi$ .

The policy is conditioned on both the environment state and the posterior over $m$ , $\pi(a_{t}|s_{t}, q(m|\tau_{:t}))$ . This is similar to the formulation of BAMDPs introduced in Section 2.2, with the difference that we learn a unifying distribution over MDP embeddings, instead of the transition/reward function directly.
This makes learning easier since there are fewer parameters to perform inference over, and we can use data from all tasks to learn the shared reward and transition function. The posterior can be represented by the distribution's parameters (e.g., mean and standard deviation if $q$ is Gaussian).

Our overall objective is to maximise

$$
\mathcal{L}(\phi, \theta, \psi) = \mathbb{E}_{p(M)}\left[\mathcal{J}(\psi, \phi) + \lambda \sum_{t=0}^{H^{+}} ELBO_{t}(\phi, \theta)\right]. \tag{10}
$$

Expectations are approximated by Monte Carlo samples, and the ELBO can be optimised using the reparameterisation trick (Kingma & Welling, 2014). For $t = 0$ , we use the prior $q_{\phi}(m) = \mathcal{N}(0,I)$ . We encode past trajectories using a recurrent network as in Duan et al. (2016); Wang et al. (2016), but other types of encoders could be considered, like the ones used in Zaheer et al. (2017); Garnelo et al. (2018); Rakelly et al. (2019). The network architecture is shown in Figure 2.

In Equation (10), we see that the ELBO appears for all possible context lengths $t$ . This way, variBAD can learn how to perform inference online (while the agent is interacting with an environment), and decrease its uncertainty over time given more data. In practice, we may subsample a fixed number of ELBO terms (for random time steps $t$ ) for computational efficiency if $H^{+}$ is large.

Equation (10) is trained end-to-end, and $\lambda$ weights the supervised model-learning objective against the RL loss. This is necessary since parameters $\phi$ are shared between the model and the policy. However, we found that backpropagating the RL loss through the encoder is typically unnecessary in practice. Not doing so also speeds up training considerably, avoids the need to trade off these losses, and prevents interference between gradients of opposing losses.
In our experiments, we therefore optimise the policy and the VAE using different optimisers and learning rates. We train the RL agent and the VAE using different data buffers: the policy is only trained with the most recent data, since we use on-policy algorithms in our experiments; for the VAE we maintain a separate, larger buffer of observed trajectories.

At meta-test time, we roll out the policy in randomly sampled test tasks (via forward passes through the encoder and policy) to evaluate performance. The decoder is not used at test time, and no gradient adaptation is done: the policy has learned to act approximately Bayes-optimally during meta-training.

# 4 RELATED WORK

Meta Reinforcement Learning. A prominent model-free meta-RL approach is to utilise the dynamics of recurrent networks for fast adaptation ( $\mathrm{RL}^2$ ; Wang et al. (2016); Duan et al. (2016)). At every time step, the network receives an auxiliary input comprising the preceding action and reward. This allows learning within a task to happen online, entirely in the dynamics of the recurrent network. If we remove the decoder (Figure 2) and the VAE objective (Equation (7)), variBAD reduces to this setting, i.e., the main differences are that we use a stochastic latent variable (an inductive bias for representing uncertainty) together with a decoder to reconstruct previous and future transitions / rewards (which acts as an auxiliary loss (Jaderberg et al., 2017) to encode the task in latent space and deduce information about unseen states). Ortega et al. (2019) provide an in-depth discussion of meta-learning sequential strategies and how to recast memory-based meta-learning within a Bayesian framework.

Another popular approach to meta-RL is to learn an initialisation of the model, such that at test time, only a few gradient steps are necessary to achieve good performance (Finn et al., 2017; Nichol & Schulman, 2018).
These methods do not directly account for the fact that the initial policy needs to explore, a problem addressed, among others, by Stadie et al. (2018) (E-MAML) and Rothfuss et al. (2019) (ProMP). In terms of model complexity, MAML and ProMP are relatively lightweight, since they typically consist of a feedforward policy. $\mathsf{RL}^2$ and variBAD use recurrent modules, which increases model complexity but allows online adaptation. Other methods that perform gradient adaptation at test time are, e.g., Houthooft et al. (2018), who meta-learn a loss function conditioned on the agent's experience that is used at test time to learn a policy (from scratch); and Sung et al. (2017), who learn a meta-critic that can criticise any actor for any task and is used at test time to train a policy. Compared to variBAD, these methods usually separate exploration (before gradient adaptation) and exploitation (after gradient adaptation) at test time by design, making them less sample efficient.

Skill / Task Embeddings. Learning (variational) task or skill embeddings for meta / transfer reinforcement learning is used in a variety of approaches. Hausman et al. (2018) use approximate variational inference to learn an embedding space of skills (with a different lower bound than variBAD). At test time the policy is fixed, and a new embedder is learned that interpolates between already learned skills. Arnekvist et al. (2019) learn a stochastic embedding of optimal $Q$ -functions for different skills, and condition the policy on (samples of) this embedding. Adaptation at test time is done in latent space. Co-Reyes et al. (2018) learn a latent space of low-level skills that can be controlled by a higher-level policy, framed within the setting of hierarchical RL. This embedding is learned using a VAE to encode state trajectories and decode states and actions. Zintgraf et al. (2019) learn a deterministic task embedding trained similarly to MAML (Finn et al., 2017). Similar to variBAD, Zhang et al.
(2018) use learned dynamics and reward modules to learn a latent representation which the policy conditions on and show that transferring the (fixed) encoder to new environments helps learning. Perez et al. (2018) learn dynamic models with auxiliary latent variables, and use them for model-predictive control. Lan et al. (2019) learn a task embedding with an optimisation procedure similar to MAML, where the encoder is updated at test time, and the policy is fixed. Sæmundsson et al. (2018) explicitly learn an embedding of the environment model, which is subsequently used for model predictive control (and not, like in variBAD, for exploration). In the field of imitation learning, some approaches embed expert demonstrations to represent the task; e.g., Wang et al. (2017) use variational methods and Duan et al. (2017) learn deterministic embeddings. + +VariBAD differs from the above methods mainly in what the embedding represents (i.e., task uncertainty) and how it is used: the policy conditions on the posterior distribution over MDPs, allowing it to reason about task uncertainty and trade off exploration and exploitation online. Our objective (8) explicitly optimises for Bayes-optimal behaviour. Unlike some of the above methods, we do not use the model at test time, but model-based planning is a natural extension for future work. + +Bayesian Reinforcement Learning. Bayesian methods for RL can be used to quantify uncertainty to support action-selection, and provide a way to incorporate prior knowledge into the algorithms (see Ghavamzadeh et al. (2015) for a review). A Bayes-optimal policy is one that optimally trades off exploration and exploitation, and thus maximises expected return during learning. While such a policy can in principle be computed using the BAMDP framework, it is hopelessly intractable for all but the smallest tasks. 
Existing methods are therefore restricted to small and discrete state / action spaces (Asmuth & Littman, 2011; Guez et al., 2012; 2013), or a discrete set of tasks (Brunskill, 2012; Poupart et al., 2006). VariBAD opens a path to tractable approximate Bayes-optimal exploration for deep RL by leveraging ideas from meta-learning and approximate variational inference, with the only assumption that we can meta-train on a set of related tasks. Existing approximate Bayesian RL methods often require us to define a prior / belief update on the reward / transition function, and rely on (possibly expensive) sample-based planning procedures. Due to the use of deep neural networks, however, variBAD lacks the formal guarantees enjoyed by some of the methods mentioned above.

Closely related to our approach is the recent work of Humplik et al. (2019). Like variBAD, they condition the policy on a posterior distribution over the MDP, which is meta-trained using privileged information such as a task description. In comparison, variBAD meta-learns to represent the belief in an unsupervised way, and does not rely on privileged task information during training.

Posterior sampling (Strens, 2000; Osband et al., 2013), which extends Thompson sampling (Thompson, 1933) from bandits to MDPs, estimates a posterior distribution over MDPs (i.e., model and reward functions), in the same spirit as variBAD. This posterior is used to periodically sample a single hypothesis MDP (e.g., at the beginning of an episode), and the policy that is optimal for the sampled MDP is followed subsequently. This approach is less efficient than Bayes-optimal behaviour and therefore typically has lower expected return during learning.

A related approach for inter-task transfer of abstract knowledge is to pose policy search with priors as Markov Chain Monte Carlo inference (Wingate et al., 2011). Similarly, Guez et al.
(2013) propose a Monte Carlo Tree Search based method for Bayesian planning, obtaining a tractable, sample-based approximation of Bayes-optimal behaviour. Osband et al. (2018) note that non-Bayesian treatment of decision making can be arbitrarily suboptimal, and propose a simple randomised prior based approach for structured exploration. Some recent deep RL methods use stochastic latent variables for structured exploration (Gupta et al., 2018; Rakelly et al., 2019), which gives rise to behaviour similar to posterior sampling. Other ways to use the posterior for exploration are, e.g., certain reward bonuses (Kolter & Ng, 2009; Sorg et al., 2012) and methods based on optimism in the face of uncertainty (Kearns & Singh, 2002; Brafman & Tennenholtz, 2002). Non-Bayesian methods for exploration are often used in practice, such as other exploration bonuses (e.g., via state-visitation counts) or uninformed sampling of actions (e.g., $\epsilon$-greedy action selection). Such methods are prone to wasteful exploration that does not help maximise expected reward.

Related to BAMDPs are contextual MDPs, where the task description is referred to as a context, on which the environment dynamics and rewards depend (Hallak et al., 2015; Jiang et al., 2017; Dann et al., 2018; Modi & Tewari, 2019). Research in this area is often concerned with developing tight bounds by putting assumptions on the context, such as having a small known number of contexts, or that there is a linear relationship between the contexts and dynamics/rewards. Similarly, the framework of hidden parameter (HiP-) MDPs assumes that there is a set of low-dimensional latent factors which define a family of related dynamical systems (with shared reward structure), similar to the assumption we make in Equation (5) (Doshi-Velez & Konidaris, 2016; Killian et al., 2017; Yao et al., 2018).
These methods, however, do not directly learn Bayes-optimal behaviour, but instead allow for a longer training period in new environments to infer the latent factors and train the policy.

Variational Inference and Meta-Learning. A main difference between variBAD and many existing Bayesian RL methods is that we meta-learn the inference procedure, i.e., how to do a posterior update. Apart from the (RL) methods mentioned above, related work in this direction can be found, among others, in Garnelo et al. (2018); Gordon et al. (2019); Choi et al. (2019). By comparison, variBAD has an inference procedure tailored to the setting of Bayes-optimal RL.

POMDPs. Several deep learning approaches to model-free reinforcement learning (Igl et al., 2019) and model learning for planning (Tschiatschek et al., 2018) in partially observable Markov decision processes have recently been proposed and utilise approximate variational inference methods. VariBAD by contrast focuses on BAMDPs (Martin, 1967; Duff & Barto, 2002; Ghavamzadeh et al., 2015), a special case of POMDPs where the transition and reward functions constitute the hidden state and the agent must maintain a belief over them. While in general the hidden state in a POMDP can change at each time step, in a BAMDP the underlying task, and therefore the hidden state, is fixed per task. We exploit this property by learning an embedding that is fixed over time, unlike approaches such as Igl et al. (2019) which use filtering to track the changing hidden state. While we utilise the power of deep approximate variational inference, other approaches for BAMDPs often use more accurate but less scalable methods; e.g., Lee et al. (2019) discretise the latent distribution and use Bayesian filtering for the posterior update.
![](images/dadb876b8858889ebdc57ce1ff374e554f8d1073a318de46aea4765b3cbd7ae9.jpg)
(a) Example Rollout

![](images/f6c14433d3f2f441c2423aea1b5af6a1101e815264eacb46a31a807d182f6ab6.jpg)
(b) Reward Predictions

Figure 3: Behaviour of variBAD in the gridworld environment. (a) Hand-picked but representative example test rollout. The blue background indicates the posterior probability of receiving a reward at that cell. (b) Probability of receiving a reward for each cell, as predicted by the decoder, over the course of interacting with the environment (average in black, goal state in green). (c) Visualisation of the latent space; each line is one latent dimension, the black line is the average.

![](images/0f375496bf4ba9c1b1361eb1d2201bf0a2f4d4bb96c433f3b941ec6b93f0e177.jpg)
(c) Latent Space

# 5 EXPERIMENTS

In this section, we first investigate the properties of variBAD on a didactic gridworld domain. We show that variBAD performs structured and online exploration as it infers the task at hand. Then we consider more complex meta-learning settings by employing it on four MuJoCo continuous control tasks commonly used in the meta-RL literature. We show that variBAD learns to adapt to the task during the first rollout, unlike many existing meta-learning methods. Details and hyperparameters can be found in the appendix, and at https://github.com/lmzintgraf/variabad.

# 5.1 GRIDWORLD

To gain insight into variBAD's properties, we start with a didactic gridworld environment. The task is to reach a goal (selected uniformly at random) in a $5 \times 5$ gridworld. The goal is unobserved by the agent, inducing task uncertainty and necessitating exploration. The goal can be anywhere except around the starting cell, which is at the bottom left. Actions are: up, right, down, left, stay (executed deterministically), and after 15 steps the agent is reset. The horizon within the MDP is $H = 15$, but we choose a horizon of $H^{+} = 4 \times H = 60$ for the BAMDP.
I.e., we train our agent to maximise performance for 4 MDP episodes. The agent gets a sparse reward signal: $-0.1$ on non-goal cells, and $+1$ on the goal cell. The best strategy is to explore until the goal is found, and then stay at the goal or return to it when reset to the initial position. We use a latent dimensionality of 5.

Figure 3 illustrates how variBAD behaves at test time with deterministic actions (i.e., all exploration is done by the policy). In 3a we see how the agent interacts with the environment, with the blue background visualising the posterior belief by using the learned reward function. VariBAD learns the correct prior and adjusts its belief correctly over time. It predicts no reward for cells it has visited, and explores the remaining cells until it finds the goal.

A nice property of variBAD is that we can gain insight into the agent's belief about the environment by analysing what the decoder predicts, and how the latent space changes while the agent interacts with the environment. Figure 3b shows the reward predictions: each line represents a grid cell, and its value the probability of receiving a reward at that cell. As the agent gathers more data, more and more cells are excluded ($p(rew = 1) = 0$), until eventually the agent finds the goal. In Figure 3c we visualise the 5-dimensional latent space. We see that once the agent finds the goal, the posterior concentrates: the variance drops close to zero, and the mean settles on a value.

As we showed in Figure 1e, the behaviour of variBAD closely matches that of the Bayes-optimal policy. Recall that the Bayes-optimal policy is the one which optimally trades off exploration and exploitation in an unknown environment, and outperforms posterior sampling. Our results on this gridworld indicate that variBAD is an effective way to approximate Bayes-optimal control, and has the additional benefit of giving insight into the task belief of the policy.
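The gridworld just described is simple to reproduce. Below is a minimal sketch (our own hypothetical implementation, not the authors' code; in particular, which cells "around the start" are excluded is our assumption), illustrating the key BAMDP property: the agent's position resets every $H$ steps, but the hidden task stays fixed for the whole horizon $H^{+}$:

```python
import random

class GridworldBAMDP:
    """Minimal sketch of the 5x5 gridworld above (hypothetical implementation,
    not the authors' code). The goal is hidden from the agent; rewards are
    -0.1 off the goal and +1 on it; the position resets every H steps while
    the task (goal) stays fixed for the whole BAMDP horizon H+."""

    # up, right, down, left, stay
    ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0), (0, 0)]

    def __init__(self, mdp_horizon=15, num_episodes=4):
        self.H = mdp_horizon
        self.H_plus = mdp_horizon * num_episodes  # BAMDP horizon H+ = 60
        self.reset_task()

    def reset_task(self):
        # Sample a goal uniformly; excluding the start's immediate
        # neighbourhood is our assumption.
        candidates = [(x, y) for x in range(5) for y in range(5)
                      if (x, y) not in {(0, 0), (0, 1), (1, 0), (1, 1)}]
        self.goal = random.choice(candidates)
        self.pos, self.t = (0, 0), 0
        return self.pos

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        self.pos = (min(max(self.pos[0] + dx, 0), 4),
                    min(max(self.pos[1] + dy, 0), 4))
        self.t += 1
        reward = 1.0 if self.pos == self.goal else -0.1
        if self.t % self.H == 0:   # MDP episode over: reset the position,
            self.pos = (0, 0)      # but keep the goal (the task) fixed
        return self.pos, reward, self.t >= self.H_plus
```

A Bayes-optimal agent in this environment visits unexplored cells until the goal is found and then returns to the goal after every reset, which is exactly the behaviour shown in Figure 3a.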
![](images/c38c38544141135875bb0a4c75761887769580dddfa408d3646bc510953bd1a8.jpg)
Figure 4: Average test performance for the first 5 rollouts of MuJoCo environments (using 5 seeds).

![](images/0d62ed1ffbb43ff69c2f4d7d0d67ddbbea28e3ae672aee87613a74ed046889b4.jpg)

![](images/2d31a865974bb4c5578693578d74ad213a5dc1121bc4f966b098a98331306efd.jpg)

![](images/73b698f94d521fc911bbc81e08e7cf73ce0e7943b2c23a6f93bb1a4a239ae504.jpg)

# 5.2 MUJOCO CONTINUOUS CONTROL META-LEARNING TASKS

We show that variBAD can scale to more complex meta-learning settings by employing it on MuJoCo (Todorov et al., 2012) locomotion tasks commonly used in the meta-RL literature. We consider the AntDir and HalfCheetahDir environments, where the agent has to run either forwards or backwards (i.e., there are only two tasks); the HalfCheetahVel environment, where the agent has to run at different velocities; and the Walker environment, where the system parameters are randomised.

Figure 4 shows the performance at test time compared to existing methods. While we show performance for multiple rollouts for the sake of completeness, anything beyond the first rollout is not directly relevant to our goal, which is to maximise performance on a new task, while learning, within a single episode. Only variBAD and $\mathrm{RL}^2$ are able to adapt to the task at hand within a single episode. $\mathrm{RL}^2$ underperforms variBAD on the HalfCheetahDir environment, and its learning is slower and less stable (see learning curves and runtime comparisons in Appendix C). Even though the first rollout includes exploratory steps, variBAD matches the optimal oracle policy (which is conditioned on the true task description) up to a small margin. The other methods (PEARL, Rakelly et al. (2019); E-MAML, Stadie et al. (2018); and ProMP, Rothfuss et al. (2019)) are not designed to maximise reward during a single rollout, and perform poorly in this case.
They all require substantially more environment interactions in each new task to achieve good performance. PEARL, which is akin to posterior sampling, only starts performing well from the third episode onwards (note that PEARL slightly outperforms our oracle, likely because our oracle is based on PPO, while PEARL is based on SAC).

Overall, our empirical results confirm that variBAD can scale up to current benchmarks and maximise expected reward within a single episode.

# 6 CONCLUSION & FUTURE WORK

We presented variBAD, a novel deep RL method to approximate Bayes-optimal behaviour, which uses meta-learning to utilise knowledge obtained in related tasks and perform approximate inference in unknown environments. In a didactic gridworld environment, our agent closely matches Bayes-optimal behaviour, and in more challenging MuJoCo tasks, variBAD outperforms existing methods in terms of achieved reward during a single episode. In summary, we believe variBAD opens a path to tractable approximate Bayes-optimal exploration for deep reinforcement learning.

There are several interesting directions of future work based on variBAD. For example, we currently do not use the decoder at test time. One could instead use the decoder for model-predictive planning, or to get a sense of how wrong its predictions are (which might indicate that we are out of distribution, and further training is necessary). Another exciting direction for future research is considering settings where the training and test distributions of environments are not the same. Generalising to out-of-distribution tasks poses additional challenges, and for variBAD in particular two problems are likely to arise: the inference procedure will be wrong (the prior and/or posterior update), and the policy will not be able to interpret a changed posterior. In this case, further training of the encoder and decoder might be necessary, together with updates to the policy and/or explicit planning.
# ACKNOWLEDGMENTS

We thank Anuj Mahajan who contributed to early work on this topic. We thank Joost van Amersfoort, Andrei Rusu and Dushyant Rao for useful discussions and feedback. Luisa Zintgraf is supported by the Microsoft Research PhD Scholarship Program. Maximilian Igl is supported by the UK EPSRC CDT in Autonomous Intelligent Machines and Systems. Sebastian Schulze is supported by Dyson. This work was supported by a generous equipment grant and a donated DGX-1 from NVIDIA. This project has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement number 637713).

# REFERENCES

Isac Arnekvist, Danica Kragic, and Johannes A Stork. VPE: Variational policy embedding for transfer reinforcement learning. In International Conference on Robotics and Automation, 2019.
John Asmuth and Michael L Littman. Learning is planning: near bayes-optimal reinforcement learning via monte-carlo tree search. In Conference on Uncertainty in Artificial Intelligence, 2011.
Richard Bellman. A problem in the sequential design of experiments. Sankhya: The Indian Journal of Statistics (1933-1960), 16(3/4):221-229, 1956.
Ronen I Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, 2002.
Emma Brunskill. Bayes-optimal reinforcement learning for discrete uncertainty domains. In International Conference on Autonomous Agents and Multiagent Systems, 2012.
Anthony R. Cassandra, Leslie Pack Kaelbling, and Michael L. Littman. Acting optimally in partially observable stochastic domains. In Twelfth National Conference on Artificial Intelligence, 1994. AAAI Classic Paper Award, 2013.
Kristy Choi, Mike Wu, Noah Goodman, and Stefano Ermon. Meta-amortized variational inference and learning. In International Conference on Learning Representations, 2019.
John D Co-Reyes, YuXuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, and Sergey Levine. Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings. In International Conference on Machine Learning, 2018.
Christoph Dann, Lihong Li, Wei Wei, and Emma Brunskill. Policy certificates: Towards accountable reinforcement learning. arXiv:1811.03056, 2018.
Finale Doshi-Velez and George Konidaris. Hidden parameter markov decision processes: A semi-parametric regression approach for discovering latent task parametrizations. In International Joint Conference on Artificial Intelligence, 2016.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. $\mathrm{RL}^2$: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016.
Yan Duan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. In Advances in Neural Information Processing Systems, 2017.
Michael O'Gordon Duff and Andrew Barto. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts Amherst, 2002.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, 2017.
Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J Rezende, SM Eslami, and Yee Whye Teh. Neural processes. In ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.
Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E Turner. Meta-learning probabilistic inference for prediction.
In International Conference on Learning Representations, 2019.
Arthur Guez, David Silver, and Peter Dayan. Efficient bayes-adaptive reinforcement learning using sample-based search. In Advances in Neural Information Processing Systems, 2012.
Arthur Guez, David Silver, and Peter Dayan. Scalable and efficient bayes-adaptive reinforcement learning based on monte-carlo tree search. Journal of Artificial Intelligence Research, 48:841-883, 2013.
Abhishek Gupta, Russell Mendonca, YuXuan Liu, Pieter Abbeel, and Sergey Levine. Meta-reinforcement learning of structured exploration strategies. In Advances in Neural Information Processing Systems, 2018.
Assaf Hallak, Dotan Di Castro, and Shie Mannor. Contextual markov decision processes. arXiv:1502.02259, 2015.
Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, and Martin Riedmiller. Learning an embedding space for transferable robot skills. In International Conference on Learning Representations, 2018.
Rein Houthooft, Yuhua Chen, Phillip Isola, Bradley Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. In Advances in Neural Information Processing Systems, 2018.
Jan Humplik, Alexandre Galashov, Leonard Hasenclever, Pedro A Ortega, Yee Whye Teh, and Nicolas Heess. Meta reinforcement learning as task inference. arXiv:1905.06424, 2019.
Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, and Shimon Whiteson. Deep variational reinforcement learning for POMDPs. In International Conference on Machine Learning, 2019.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In International Conference on Learning Representations, 2017.
Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, and Robert E Schapire. Contextual decision processes with low bellman rank are pac-learnable. In International Conference on Machine Learning, 2017.
Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99-134, 1998.
Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.
Taylor W Killian, Samuel Daulton, George Konidaris, and Finale Doshi-Velez. Robust and efficient transfer learning with hidden parameter markov decision processes. In Advances in Neural Information Processing Systems, 2017.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.
J Zico Kolter and Andrew Y Ng. Near-bayesian exploration in polynomial time. In International Conference on Machine Learning, 2009.
Lin Lan, Zhenguo Li, Xiaohong Guan, and Pinghui Wang. Meta reinforcement learning with task embedding and shared policy. In International Joint Conference on Artificial Intelligence, 2019.
Gilwoo Lee, Brian Hou, Aditya Mandalika, Jeongseok Lee, and Siddhartha S Srinivasa. Bayesian policy optimization for model uncertainty. In International Conference on Learning Representations, 2019.
Yun-En Liu, Travis Mandel, Emma Brunskill, and Zoran Popovic. Trading off scientific knowledge and user learning with multi-armed bandits. In EDM, pp. 161-168, 2014.
James John Martin. Bayesian decision problems and Markov chains. Wiley, 1967.
Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. arXiv:1707.03141, 2017.
Aditya Modi and Ambuj Tewari. Contextual markov decision processes using generalized linear models. In Reinforcement Learning for Real Life (RL4RealLife) Workshop at the International Conference on Machine Learning, 2019.
Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. arXiv:1803.02999, 2018.
Pedro A Ortega, Jane X Wang, Mark Rowland, Tim Genewein, Zeb Kurth-Nelson, Razvan Pascanu, Nicolas Heess, Joel Veness, Alex Pritzel, Pablo Sprechmann, et al. Meta-learning of sequential strategies. arXiv:1905.03030, 2019.
Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, 2013.
Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. In Advances in Neural Information Processing Systems, 2018.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
Christian F Perez, Felipe Petroski Such, and Theofanis Karaletsos. Efficient transfer learning and online adaptation with latent variable models for continuous control. In Continual Learning Workshop, NeurIPS 2018, 2018.
Pascal Poupart, Nikos Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete bayesian reinforcement learning. In International Conference on Machine Learning, 2006.
Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy meta-reinforcement learning via probabilistic context variables. In International Conference on Machine Learning, 2019.
Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. ProMP: Proximal meta-policy search. In International Conference on Learning Representations, 2019.
Steindór Sæmundsson, Katja Hofmann, and Marc Peter Deisenroth. Meta reinforcement learning with latent variable gaussian processes. In Conference on Uncertainty in Artificial Intelligence, 2018.
Jonathan Sorg, Satinder Singh, and Richard L Lewis. Variance-based rewards for approximate bayesian reinforcement learning. In Conference on Uncertainty in Artificial Intelligence, 2012.
Bradly C Stadie, Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, and Ilya Sutskever. Some considerations on learning to explore via meta-reinforcement learning. In Advances in Neural Information Processing Systems, 2018.
Malcolm Strens. A bayesian framework for reinforcement learning. In International Conference on Machine Learning, 2000.
Flood Sung, Li Zhang, Tao Xiang, Timothy Hospedales, and Yongxin Yang. Learning to learn: Meta-critic networks for sample efficient learning. arXiv:1706.09529, 2017.
William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IROS, pp. 5026-5033. IEEE, 2012. ISBN 978-1-4673-1737-5.
Sebastian Tschiatschek, Kai Arulkumaran, Jan Stuhmer, and Katja Hofmann. Variational inference for data-efficient model learning in POMDPs. arXiv:1805.09281, 2018.
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. In Annual Meeting of the Cognitive Science Society (CogSci), 2016.
Ziyu Wang, Josh S Merel, Scott E Reed, Nando de Freitas, Gregory Wayne, and Nicolas Heess. Robust imitation of diverse behaviors. In Advances in Neural Information Processing Systems, 2017.
David Wingate, Noah D Goodman, Daniel M Roy, Leslie P Kaelbling, and Joshua B Tenenbaum. Bayesian policy search with policy priors. In International Joint Conference on Artificial Intelligence, 2011.
Jiayu Yao, Taylor Killian, George Konidaris, and Finale Doshi-Velez. Direct policy transfer via hidden parameter markov decision processes. In LLARLA Workshop, FAIM, 2018.
Gregory Yauney and Pratik Shah. Reinforcement learning with action-derived rewards for chemotherapy and clinical trial dosing regimen selection.
In Machine Learning for Healthcare Conference, pp. 161-226, 2018.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems, 2017.
Amy Zhang, Harsh Satija, and Joelle Pineau. Decoupling dynamics and reward for transfer learning. In ICLR Workshop Track, 2018.
Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In International Conference on Machine Learning, 2019.

# Bayes-Adaptive Deep Reinforcement Learning via Meta-Learning

# Supplementary Material

# A FULL ELBO DERIVATION

Equation (8) can be derived as follows.

$$
\begin{aligned}
\mathbb{E}_{\rho(M, \tau_{:H})}\left[\log p_{\theta}(\tau_{:H})\right]
&= \mathbb{E}_{\rho}\left[\log \int p_{\theta}(\tau_{:H}, m)\, \frac{q_{\phi}(m \mid \tau_{:t})}{q_{\phi}(m \mid \tau_{:t})}\, dm\right] \\
&= \mathbb{E}_{\rho}\left[\log \mathbb{E}_{q_{\phi}(m \mid \tau_{:t})}\left[\frac{p_{\theta}(\tau_{:H}, m)}{q_{\phi}(m \mid \tau_{:t})}\right]\right] \\
&\geq \mathbb{E}_{\rho,\, q_{\phi}(m \mid \tau_{:t})}\left[\log \frac{p_{\theta}(\tau_{:H}, m)}{q_{\phi}(m \mid \tau_{:t})}\right] \\
&= \mathbb{E}_{\rho,\, q_{\phi}(m \mid \tau_{:t})}\left[\log p_{\theta}(\tau_{:H} \mid m) + \log p_{\theta}(m) - \log q_{\phi}(m \mid \tau_{:t})\right] \\
&= \mathbb{E}_{\rho}\Big[\mathbb{E}_{q_{\phi}(m \mid \tau_{:t})}\left[\log p_{\theta}(\tau_{:H} \mid m)\right] - KL\big(q_{\phi}(m \mid \tau_{:t})\,\|\,p_{\theta}(m)\big)\Big] \tag{11} \\
&= \mathrm{ELBO}_{t}.
\end{aligned}
$$

# B EXPERIMENTS: GRIDWORLD

# B.1 ADDITIONAL REMARKS

Figure 3c visualises how the latent space changes as the agent interacts with the environment. As we can see, the value of the latent dimensions starts around mean 1 and variance 0, which is the prior we chose for the beginning of an episode.
Given that the variance increases a little before the agent finds the goal, this prior might not be optimal. A natural extension of variBAD is therefore to also learn the prior to match the task at hand.

# B.2 HYPERPARAMETERS

We used the PyTorch framework for our experiments. Hyperparameters are listed below, and the source code can be found at https://github.com/lmzintgraf/varibad.

Hyperparameters for variBAD are:
| Hyperparameter | Value |
| --- | --- |
| RL algorithm | A2C |
| Number of policy steps | 60 |
| Number of parallel processes | 16 |
| Epsilon | 1e-5 |
| Discount factor $\gamma$ | 0.95 |
| Max grad norm | 0.5 |
| Value loss coefficient | 0.5 |
| Entropy coefficient | 0.01 |
| GAE parameter tau | 0.95 |
| ELBO loss coefficient | 1.0 |
| Policy LR | 0.001 |
| VAE LR | 0.001 |
| Task embedding size | 5 |
| Policy architecture | 2 hidden layers, 32 nodes each, TanH activations |
| Encoder architecture | FC layer with 40 nodes; GRU with hidden size 64; output layer with 10 outputs ($\mu$ and $\sigma$); ReLU activations |
| Reward decoder architecture | 2 hidden layers, 32 nodes each; 25 output heads; ReLU activations |
| Decoder loss function | Binary cross-entropy |
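The encoder architecture in the table above can be read as the following PyTorch sketch (a hypothetical reconstruction from the table, not the released code; the per-step input layout of concatenated state, action, and reward, and the input dimension, are our assumptions):

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Sketch of the encoder from the table above: FC embedding (40 nodes),
    GRU (hidden size 64), and a 10-output head giving the mean and log-std
    of the 5-dimensional Gaussian posterior q(m | tau_{:t}) at every step."""

    def __init__(self, input_dim, latent_dim=5):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(input_dim, 40), nn.ReLU())
        self.gru = nn.GRU(input_size=40, hidden_size=64)
        self.head = nn.Linear(64, 2 * latent_dim)  # 10 outputs: mu, log-sigma

    def forward(self, trajectory):
        # trajectory: (seq_len, batch, input_dim), e.g. concatenated (s, a, r)
        hidden, _ = self.gru(self.embed(trajectory))
        mu, log_sigma = self.head(hidden).chunk(2, dim=-1)
        return mu, log_sigma

# Reparameterised sample from the posterior after each environment step
# (input_dim=7 is an arbitrary illustrative choice):
encoder = TrajectoryEncoder(input_dim=7)
mu, log_sigma = encoder(torch.randn(15, 4, 7))  # 15 steps, batch of 4
m = mu + torch.randn_like(mu) * log_sigma.exp()
```

Because the GRU emits an output at every step, a single forward pass yields the whole sequence of posteriors $q(m \mid \tau_{:t})$ for $t = 1, \dots, H^+$, which is what the per-timestep ELBO terms require.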
![](images/edb160d58a14af4b926e01951498c55eebc559589a83e597c7738c825d194013.jpg)
(a) Learning curves.

![](images/622aad9c61c87dd9ba845c8163378ac6498c1d19885bee71cdf3188b848d000b.jpg)
(b) Average return per episode.

Figure 5: Results for the gridworld toy environment. Results are averages over 20 seeds (with $95\%$ confidence intervals for the learning curve).

Hyperparameters for RL2 are the same as above, with the following changes:
| Hyperparameter | Value |
| --- | --- |
| Policy architecture | States are embedded using an FC layer (output size 32); rewards are embedded using an FC layer (output size 8). The results are concatenated and passed to a GRU (hidden size 128, output size 32). After an additional FC layer with hidden size 32, the network outputs the actions. TanH activations are used throughout. |
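The RL2 architecture description can likewise be sketched in PyTorch (again a hypothetical reading of the table, not the authors' code; the exact wiring between the GRU output and the action head is ambiguous in the description, so this is one plausible interpretation):

```python
import torch
import torch.nn as nn

class RL2Policy(nn.Module):
    """One reading of the RL^2 baseline above: separate FC embeddings for
    states (output size 32) and rewards (output size 8), concatenated and
    fed to a GRU (hidden size 128), followed by FC layers that output the
    actions; TanH activations throughout."""

    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.state_embed = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh())
        self.reward_embed = nn.Sequential(nn.Linear(1, 8), nn.Tanh())
        self.gru = nn.GRU(input_size=40, hidden_size=128)
        self.action_head = nn.Sequential(nn.Linear(128, 32), nn.Tanh(),
                                         nn.Linear(32, num_actions))

    def forward(self, states, rewards, hidden=None):
        # states: (seq, batch, state_dim); rewards: (seq, batch, 1)
        x = torch.cat([self.state_embed(states),
                       self.reward_embed(rewards)], dim=-1)
        out, hidden = self.gru(x, hidden)
        return self.action_head(out), hidden  # action logits + GRU state
```

Carrying `hidden` across MDP episodes is what lets RL2 adapt online; variBAD instead carries the posterior parameters produced by its encoder.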
# B.3 COMPARISON TO RL2

Figure 5a shows the learning curves for variBAD and RL2, in comparison to an oracle policy (which has access to the goal position). We trained these policies on a horizon of $H^{+} = 4 \times H = 60$, i.e., on a BAMDP in which the agent has to maximise online return within four episodes. We indicate the values of a hard-coded Bayes-optimal policy and a hard-coded posterior sampling policy using dashed lines.

Figure 5b shows the end-performance of variBAD and RL2, compared to the hard-coded optimal policy (which has access to the goal position), the Bayes-optimal policy, and the posterior sampling policy. VariBAD and RL2 both closely approximate the Bayes-optimal solution. By inspecting the individual runs, we found that variBAD learned the Bayes-optimal solution for 4 out of 20 seeds, and RL2 for none. Both otherwise find solutions that are very close to Bayes-optimal, with the difference that during the second rollout, the cells left to search are not all on the shortest path from the starting point.

Note that both variBAD and RL2 were trained on only four episodes, but we evaluate them on six episodes here. After the fourth rollout, we do not fix the latent / hidden state, but continue rolling out the policy as before. As we can see, the performance of RL2 drops again after the fourth episode: this is likely due to instabilities in the 128-dimensional hidden state. VariBAD's latent representation, the approximate task posterior, is concentrated and does not change with more data.

# C EXPERIMENTS: MUJOCO

# C.1 LEARNING CURVES

Figure 6 shows the learning curves for the MuJoCo environments for all approaches. The oracle policy was trained using PPO. PEARL (Rakelly et al., 2019) was trained using the reference implementation provided by the authors. The environments we used are also taken from this implementation.
E-MAML (Stadie et al., 2018) and ProMP (Rothfuss et al., 2019) were trained using the reference implementation provided by Rothfuss et al. (2019). + +![](images/c7c4f3da0eae347db7df0dc6f7af9034fbda666850469c7fc0a3220a0a4d5a67.jpg) + +![](images/705f2cd082c46fd1fbcc05fed29422f47b06ea0e99a1ff6a03c4e6a3f81361ae.jpg) + +![](images/b912eed632e97180a05baaeb3352cccef1d472461745b19a20ca48a68f7b4db2.jpg) + +![](images/4fab081f42855ba62bd86c67a0922522fcd7c9390692dbaf50c6282211890c23.jpg) + +![](images/f4e01344b6fe50400beda05144835d8a188d4aef7da01680dba24c9cdafdd68e.jpg) +Figure 6: Learning curves for the MuJoCo results presented in Section 5.2. The top row shows performance evaluated at the first rollout, and the second row shows the performance at the $N$ -th rollout. For variBAD and RL2, $N = 2$ . For ProMP and E-MAML, $N = 20$ . For PEARL, $N = 10$ . + +![](images/0680c4cddca8c114ad1d02a881be737d1fe7f53d31b280ca4cf7c7f37dcb2505.jpg) + +![](images/c75f3f14c4271a4f346e03df525c5dea7e236250d0f35c94d7ed6bc180c04b20.jpg) + +![](images/d37722e903f4bee4038e8f6d60d620376b411d96545370b2e50142dcf0f1b938.jpg) + +As we can see, PEARL is much more sample efficient in terms of number of frames than the other methods (Fig 6), which is because it is an off-policy method. On-policy vs off-policy training is an orthogonal issue to our contribution, but an extension of variBAD to off-policy methods is an interesting direction for future work. Doing posterior sampling using off-policy methods also requires PEARL to use a different encoder (to maintain order invariance of the sampled trajectories) which is non-recurrent (and hence faster to train, see next section) but restrictive since it assumes independence between individual transitions. + +Note that in Figure 4, for the Walker environment evaluation, we used the models obtained after half the training time ($5 \cdot 10^{7}$ frames) for variBAD and the Oracle, since performance declined again after that.
+ +For all MuJoCo environments, we trained variBAD with a reward decoder only (even for Walker, where the dynamics change, we found that this has superior performance). + +# C.2 TRAINING DETAILS AND COMPARISON TO RL2 + +We are interested in maximising performance within a single rollout ( $H = 200$ ). However, in order to compare better to existing methods, we trained variBAD and the RL2 baseline to maximise performance within two rollouts ( $H^{+} = 400$ ). We implemented task resets by adding a 'done' flag to the states, so that the agent knows when it gets reset in-between episodes. This allows us to evaluate on multiple rollouts (without resetting the hidden states of the RNN) because the agents have learned to handle resets to the starting position. + +We observe that RL2 is sometimes unstable when it comes to maintaining its performance over multiple rollouts, e.g., in the CheetahVel task (Figure 6). We hypothesise that the drop of RL2's performance in CheetahVel occurs because it has not properly learned to deal with environment resets. The sudden change in state space (which includes joint positions and velocities) could lead to a dramatic shift in the hidden state, which then might not represent the task at hand properly. In addition, once the Cheetah is running at the correct velocity, it can infer which task it is in from its own velocity (which is part of the environment state) and stop doing inference, which might be another reason we observe this drop when the environment resets and the state suddenly has a different (very low) velocity. For variBAD this is less of a problem, since we train the latent embedding to represent the task, and only the task. Therefore, the agent does not have to do the inference procedure again when reset to the starting position, but can rely on the latent task description that is given by the approximate posterior. It might also just be due to implementation details, and, e.g., Mishra et al.
(2017) do not observe this problem (see their Fig 4). + +![](images/326db14ee440e1dd52dbb60206d0887ef60f829db7367773a7c3e14fcd1afd1c.jpg) +Figure 7: Behaviour at test time for the task "walk left" in HalfCheetahDir. The x-axis reflects the position of the agent; the y-axis the steps in the environment (to be read from bottom to top). Rows are separate examples, columns the number of rollouts. + +# C.3 CHEETAHDIR TEST TIME BEHAVIOUR + +To get a sense for where these differences between the different approaches might stem from, consider Figure 7, which shows example behaviour of the policies during the first three rollouts in the HalfCheetahDir environment, when the task is "go left". Both variBAD and $\mathsf{RL}^2$ adapt to the task online, whereas PEARL acts according to the current sample, which in the first two rollouts can mean walking in the wrong direction. For a visualisation of the variBAD latent space at test time for this environment, see Appendix C.5. While we outperform at meta-test time, PEARL is more sample efficient during meta-training (see Fig 6), since it is an off-policy method. Extending variBAD to off-policy methods is an interesting but orthogonal direction for future work. + +# C.4 RUNTIME COMPARISON + +The following are rough estimates of average run-times for the HalfCheetah-Dir environment (from what we have experienced; we often ran multiple experiments per machine, so some of these might be overestimated and should be mostly understood as giving a relative sense of ordering). + +- ProMP, E-MAML: 5-8 hours +- variBAD: 48 hours +- $\mathrm{RL}^2$: 60 hours +- PEARL: 24 hours + +E-MAML and ProMP have the advantage that they do not have a recurrent part, unlike variBAD and $\mathrm{RL}^2$. Forward and backward passes through recurrent networks can be slow, especially with large horizons. + +Even though both variBAD and $\mathrm{RL}^2$ use recurrent modules, we observed that variBAD is faster when training the policy with PPO.
This is because we do not backpropagate the RL-loss through the recurrent part, which allows us to make the PPO mini-batch updates without having to re-compute the embeddings (saving many forward/backward passes through the recurrent model). This difference is less pronounced with other RL methods that do not rely on as many forward/backward passes per policy update. + +![](images/1a92306e98a6a9acd5bfa1a7210213f94f4fc33e542eeb1db325d82b6c52dc4e.jpg) + +![](images/c6a009c219a3872a9bc3aae516a06b2c5f8a0f04f78f147a0d7055c229bedd7d.jpg) + +![](images/927ef1973a33ef472e7e03447f08538d665655aa47a32c21102d2f41c1d5fdb7.jpg) + +![](images/ab60a46eb572a1dea5b6093745b9fcce3df42925f02c00231e63c97045aca725.jpg) +Figure 8: Visualisation of the latent space at meta-test time, for the HalfCheetahDir environment and the tasks "go right" (top) and "go left" (bottom). Left: value of the posterior mean during a single rollout (200 environment steps). The black line is the average value. Middle: value of the posterior log-variance during a single rollout. Right: Behaviour of the policy during a single rollout. The x-axis shows the position of the Cheetah, and the y-axis the step (should be read from bottom to top). + +![](images/5740bbe762dfc44078d0de8b378d1145bc07dcc497b11b750bcbd5cb873a65cf.jpg) + +![](images/3536db7620c41c4da87dd518e6b12a6b163631cd05ef6bc37c25e5843018d85c.jpg) + +# C.5 LATENT SPACE VISUALISATION + +A nice feature of variBAD is that it can give us insight into the uncertainty of the agent about what task it is in. Figure 8 shows the latent space for the HalfCheetahDir tasks "go right" (top row) and "go left" (bottom row). We observe that the latent mean and log-variance adapt rapidly, within just a few environment steps (left and middle figures). This is also how fast the agent adapts to the current task (right figures). As expected, the variance decreases over time as the agent gets more certain.
It is interesting to note that the values of the latent dimensions swap signs between the two tasks. + +Visualising the belief in the reward/state space directly, as we have done in the gridworld example, is more difficult for MuJoCo tasks, since we now have continuous states and actions. What we could do instead is to additionally train a model that predicts a ground-truth task description (separate from the main objective and just for further analysis, since we do not want to use this privileged information for meta-training). This would give us a more direct sense of what task the agent thinks it is in. + +# C.6 HYPERPARAMETERS + +We used the PyTorch framework (Paszke et al., 2017) for our experiments. The default arguments for our MuJoCo experiments can be found below; for details, see our reference implementation at https://github.com/lMZintgraf/varibad.
| Hyperparameter | Value |
| --- | --- |
| RL algorithm | PPO |
| Batch size | 3200 |
| Epochs | 2 |
| Minibatches | 4 |
| Max grad norm | 0.5 |
| Clip parameter | 0.1 |
| Value loss coefficient | 0.5 |
| Entropy coefficient | 0.01 |
| Notes | We use a Huber loss in the RL loss |
| Weight of KL term in ELBO | 0.1 |
| Policy LR | 0.0007 |
| VAE LR | 0.001 |
| Task embedding size | 5 |
| Policy architecture | 2 hidden layers, 128 nodes each, TanH activations |
| Encoder architecture | States, actions, rewards encoder: fc layer (32/16/16-dim), GRU with hidden size 128, output layer with 5 outputs, ReLU activations |
| Reward decoder architecture | 2 hidden layers, 64 and 32 nodes, ReLU activations |
| Reward decoder loss function | Mean squared error |
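Two of the entries above, the mean-squared-error reward decoder loss and the KL term weighted by 0.1, combine into the training objective as sketched below. This is a NumPy sketch under the usual assumptions of a diagonal-Gaussian posterior and a standard-normal prior; the array shapes are illustrative, not the authors' implementation:

```python
import numpy as np

KL_WEIGHT = 0.1  # weight of the KL term in the ELBO (from the table above)

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo_loss(pred_rewards, true_rewards, mu, logvar):
    """Reward-reconstruction MSE plus the down-weighted KL term."""
    recon = np.mean((pred_rewards - true_rewards) ** 2)  # MSE decoder loss
    return recon + KL_WEIGHT * kl_to_standard_normal(mu, logvar)

# With the posterior equal to the prior, the KL term vanishes and the
# loss reduces to the reconstruction error.
mu, logvar = np.zeros(5), np.zeros(5)   # 5-dim task embedding (see table)
r_pred, r_true = np.array([1.0, 2.0]), np.array([1.0, 1.0])
print(elbo_loss(r_pred, r_true, mu, logvar))  # -> 0.5
```

The 0.1 weight down-scales the regularisation towards the prior relative to the reconstruction term.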
\ No newline at end of file diff --git a/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/images.zip b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b95d5b6033cd694377f3e81aad284aabce604f4e --- /dev/null +++ b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b15ba7bd5bbf1c02cfd5d3fae6154645c069d5eac5f17ac4a58048be3d8e5a9 +size 640781 diff --git a/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/layout.json b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7d32c97ba53fcf25e6b7d7bddfe6e09c565f031b --- /dev/null +++ b/varibadaverygoodmethodforbayesadaptivedeeprlviametalearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:911b89e086af4018272c674c31bdd01f1fca546ce7f06476051a26dd3d7f035c +size 594149 diff --git a/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_content_list.json b/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7fb4c7be24800c4dcb870cb810b9d14e06b55764 --- /dev/null +++ b/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67445a940d898bb60858403753d92d85f3a1ed243c9d60a9e5782a77c5e4fea0 +size 101696 diff --git a/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_model.json b/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..d938f637b08e79fdfe0d2d03cbc326a35499d4d0 --- /dev/null +++ b/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:782a13f031de1680af267488e51ce3863c9d69b346e132e55c599dd4d45b38d7 +size 121009 diff --git a/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_origin.pdf b/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9ec9efd8345133f50f2aadd9caec9bae1f27784d --- /dev/null +++ b/vid2gamecontrollablecharactersextractedfromrealworldvideos/611e77eb-537f-456b-8aa3-5a464503ee21_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:90a740da7b55e4341efa42ec2ec4f7c0909e0580b049420eef8466e83503a444 +size 9961402 diff --git a/vid2gamecontrollablecharactersextractedfromrealworldvideos/full.md b/vid2gamecontrollablecharactersextractedfromrealworldvideos/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a91400782ef78220b00411184968c95b3595b608 --- /dev/null +++ b/vid2gamecontrollablecharactersextractedfromrealworldvideos/full.md @@ -0,0 +1,489 @@ +# VID2GAME: CONTROLLABLE CHARACTERS EXTRACTED FROM REAL-WORLD VIDEOS + +# Oran Gafni + +Facebook AI Research +oran@fb.com + +# Lior Wolf + +Facebook AI Research & Tel Aviv Uni. wolf@fb.com + +# Yaniv Taigman + +Facebook AI Research +yaniv@fb.com + +# ABSTRACT + +We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person, according to user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person. 
+ +The method is based on two networks. The first maps a current pose and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes. + +# 1 INTRODUCTION + +We propose a new video generation tool that is able to extract a character from a video, reanimate it, and generate a novel video of the modified scene; see Fig. 1. Unlike previous work, the reanimation is controlled by a low-dimensional signal, such as the one provided by a joystick, and the model has to complete this signal to a high-dimensional full-body signal, in order to generate realistic motion sequences. In addition, our method is general enough to position the extracted character in a new background, which is possibly also dynamic. A video containing a short explanation of our method, samples of output videos, and a comparison to previous work is provided at https://youtu.be/sNp6HskavBE. + +Our work provides a general and convenient way for human users to control the dynamic development of a given video. The input is a video, which contains one or more characters. The characters are extracted, and each is associated with a sequence of displacements. In the current implementation, the motion is taken as the trajectory of the center of mass of that character in the frame. This can be readily generalized to separate different motion elements. Given a user-defined trajectory, a realistic video of the character, placed in front of an arbitrary background, is generated. + +The method employs two networks, applied in a sequential manner. The first is the Pose2Pose (P2P) network, responsible for manipulating a given pose in an autoregressive manner, based on an input stream of control signals.
The second is the Pose2Frame (P2F) network, which generates a high-resolution, realistic video frame, given an input pose and a background image. + +Each network addresses a computational problem that was not previously fully solved, and together they pave the way for the generation of video games with realistic graphics. The Pose2Pose network enables guided human-pose generation for a specific trained domain (e.g., a tennis player, a dancer, etc.), where guiding takes the form of 2D motion controls, while the Pose2Frame network allows the incorporation of a photo-realistic generated character into a desired environment. + +In order to enable this, the following challenges are to be addressed: (1) replacing the background requires the system to separate the character from the surroundings, which is not handled by previous work, since previous methods either embed the character into the same learned background, or paste the generated character into the background with noticeable artifacts, (2) the separation is not binary, and some effects, such as shadows, blend the character's motion effect with that background information, (3) the control signal is arbitrary, and can lead the character to poses that are not covered by the training set, and (4) generated sequences may easily drift, by accumulating small errors over time. + +![](images/48112ece456eeb1360c09812319c1fff0b8a2ff8904a21446aac69b9adc77f1c.jpg) +Figure 1: Our method extracts a character from an uncontrolled video, and enables us to control its motion. The pose of the character, shown in the first row, is created by our Pose2Pose network in an autoregressive way, so that the motion matches the control signal illustrated by the joystick. The second row depicts the character's appearance, as generated by the Pose2Frame network, which also generates the masks shown in the third row. The final frame (last row) blends a given background and the generated frames, in accordance with these masks.
+ +![](images/2ad8377a7951037d2b50ab916e06b2c5?.jpg) +Figure 2: Comparison with Esser et al. (2018b). (a) Their input, (b) their output, (c) a frame from our training video, (d) our generated frame. With different objectives and dataset types, a direct comparison is not applicable. Qualitatively, Esser et al. (2018b) output a low-res image with noticeable artifacts, and cannot model the racket, while ours is indistinguishable from the source. +Figure 3: Comparison with Esser et al. (2018a). (a) Their input, (b) their generated output, (c) our pose input, (d) the output generated by our P2F network. In contrast to our method, Esser et al. (2018a) do not render environmental effects, resulting in unnatural blending of the character and undesired residues (e.g. source clothing), and operate at a lower resolution. + +Both the Pose2Pose and Pose2Frame networks adopt the pix2pixHD framework of Wang et al. (2018b) as the generator and discriminator backbones, yet add many contributions in order to address the aforementioned challenges. As a building block, we use the pose representation provided by the DensePose framework by Riza Alp Güler (2018), unmodified. Similarly, the hand-held object is extracted using the semantic segmentation method of Zhou et al. (2019), which incorporates elements from Maninis et al. (2018); Law & Deng (2018). + +In addition to the main application of generating a realistic video from a 2D trajectory, the learned Pose2Frame network can be used for other applications. For example, instead of being predicted, the pose can be extracted from an existing video. This allows us to compare the Pose2Frame network directly with recent video-to-video solutions. + +# 2 RELATED WORK + +Novel view synthesis is a task where unseen frames, camera views, or poses, are synthesized given a prior image.
Recent approaches have also shown success in generating detailed images of human subjects in different poses (Balakrishnan et al., 2018; Kanazawa et al., 2018), where some of them + +also condition on pose (Chan et al., 2018; Yang et al., 2018; Li et al., 2019) to guide the generation. These approaches do not build a movable character model, but transfer one image to target poses. The pose variability in these images is smaller than required for our application, the handling of the background is limited, and these were also not demonstrated on video. For example, much of the literature presents results on a fashion dataset, in which the poses are limited and a white background is used. Another common benchmark is gathered from surveillance cameras, where the resolution is low, and background generation is lacking due to an inherent lack of supervision. + +A method for learning motion patterns by analyzing YouTube videos is demonstrated by Peng et al. (2018), where synthetic virtual characters are set to perform complex skills in physically simulated environments, leveraging a data-driven Reinforcement Learning method that utilizes a reference motion. This method outputs a control policy that enables the character to reproduce a particular skill observed in video, which the rendered character then imitates. Unlike our method, the control signal is not provided online, one frame at a time. In addition, rendering is performed using simulated characters only, and the character in the video is not reanimated. + +Autoregressive models, which can be controlled one step at a time, are suitable for the dynamic nature of video games. However, such models, including RNNs, can easily drift with long-range sequences (Fragkiadaki et al., 2015), and training RNN models for long sequences suffers from vanishing or exploding gradients. Holden et al. (2017) propose a more stable model by generating the weights of a regression network at each frame as a function of the motion phase.
However, this is mostly practical to apply given a limited number of keypoints, whereas dense pose models contain more information. + +Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and conditional GANs (Mirza & Osindero, 2014) have been used for video synthesis by Vondrick et al. (2016), who separately generate the static background and the foreground motion. Frameworks such as vid2vid (Wang et al., 2018a; Chan et al., 2018) learn mappings between different videos, and demonstrate motion transfer between faces, and from poses to body. In these contributions, the reference pose is extracted from a real frame, and the methods are not challenged with generated poses. Working with generated poses, with the accompanying artifacts and the accumulated error, is considerably more challenging. In order to address this, we incorporate a few modifications, such as relying on a second input pose, in case one of the input poses is of lesser quality, and add additional loss terms to increase the realism of the generated image. In addition, these approaches model the entire frame, including both the character and the background, which usually leads to blurry results (Pumarola et al., 2018; Chao et al., 2018), particularly near the edges of the generated pose, and with complex objects, such as faces. It also leads to a loss of details from the background, and to unnatural background motion. + +A method for mixing the appearance of a figure seen in an image with an arbitrary pose is presented by Esser et al. (2018b). While it differs greatly in the performed task, we can compare the richness of the generated images, as shown in Fig. 2. Their method results in a low-resolution output with noticeable artifacts, and cannot model the object, while our result is indistinguishable from the source. The same is true for the follow-up work (Esser et al., 2018a).
We work at a higher resolution of $1024\mathrm{p}$ , while their work is limited to low-resolution characters, see Fig. 3. Similarly, the work of Balakrishnan et al. (2018) provides lower resolution outputs, limited to the same background, and does not handle shadows (as seen in Fig. 9-10 of that work). + +In another set of experiments, Esser et al. (2018a) also present a step toward our task and show results for generating a controllable figure, building upon the phase-based neural network of Holden et al. (2017). Their work is keypoint based and does not model environmental factors, such as shadows. The videos presented by Esser et al. (2018a) for a controllable figure are displayed only on a synthetic background with a checkerboard floor pattern in an otherwise empty scene. These examples are limited to either walking or running, and the motion patterns are of an existing animation model. + +# 3 METHOD OVERVIEW + +The method's objective is to learn the character's motion from a video sequence, such that new videos of that character can be rendered, based on a user-provided motion sequence. The input of the training procedure is a video sequence of a character performing an action. From this video, the pose and an approximated foreground mask are extracted by the DensePose network, augmented by the semantic + +![](images/e664166a82d55514a98e5ad3cab9b89d0fb9be70d3db12c2f3bbe50bddafcf01.jpg) +Figure 4: The architecture of the Pose2Pose generator. During training, the middle $n_r - 2$ residual blocks are conditioned by a linear projection (FC layer) of the center-mass differences between consecutive frames (in the x and y axes). For each concatenation of input pose and object $[p_{i-1}, obj_{i-1}]$ , the network generates the next consecutive pose and object $[p_i, obj_i]$ . At inference time, the network generates the next pose-object pair in an autoregressive manner, conditioned on input directions. + +segmentation of the hand-held object, for each frame. 
The trajectory of the center of mass is taken to be the control sequence. At test time, the user provides a sequence of 2D displacements, and a video is created, in which the character moves in accordance with this control sequence. The background can be arbitrary, and is also selected by the user. The method then predicts the sequence of poses based on the given control sequence (starting with an arbitrary pose), and synthesizes a video in which the character extracted from the training video is rendered in the given background. + +The following notation is used: a video sequence with frames $f_{i}$ is generated, based on a sequence of poses $p_i$ and a sequence of background images $b_{i}$ , where $i = 1,2,\ldots$ is the frame index. The frame generation process also involves a sequence of spatial masks $m_{i}$ that determine which regions of the background are replaced by synthesized image information $z_{i}$ . + +To generate a video, the user provides the pose at time zero: $p_0$ , the sequence of background images $b_i$ (which can be static, i.e., $\forall i b_i = b$ ) and a sequence of control signals $s_i$ . In our experiments, the control signal is typically comprised of the desired 2D displacement of the animated character. + +Our method is an autoregressive pose model, coupled with a frame-rendering mechanism. The first aspect of our method creates a sequence of poses, and optionally of hand-held objects. Each pose and object pair $[p_i, obj_i]$ is dependent on the previous pair $[p_{i-1}, obj_{i-1}]$ , as well as on the current control signal $s_i$ . The second aspect generates the current frame $f_i$ , based on the current background image $b_i$ , the previous combined pose and object $p_{i-1} + obj_{i-1}$ , and the current combined pose and object $p_i + obj_i$ . The pose and object are combined by simply summing the object channel with each of the three RGB channels that encode the pose. 
This rendering process includes the generation of both a raw image output $z_i$ and a blending mask $m_i$ . $m_i$ has values between 0 and 1, with $1 - m_i$ denoting the inverted mask. + +Formally, the high-level processing is given by the following three equations: + +$$ +[p_i, obj_i] = P2P\left([p_{i-1}, obj_{i-1}], s_i\right) \tag{1} +$$ + +$$ +(z_i, m_i) = P2F\left([p_{i-1} + obj_{i-1}, p_i + obj_i]\right) \tag{2} +$$ + +$$ +f_i = z_i \odot m_i + b_i \odot (1 - m_i) \tag{3} +$$ + +where $P2P$ and $P2F$ are the Pose2Pose and the Pose2Frame networks. As stated, $P2F$ returns a pair of outputs that are then linearly blended with the desired background, using the per-pixel multiplication operator $\odot$ . + +# 4 THE POSE2POSE NETWORK + +As mentioned, the P2P network is an evolution of the pix2pixHD architecture. Although the primary use of the pix2pixHD framework in the literature is for unconditioned image-to-image translation, we show how to modify it to enable conditioning on a control signal. The P2P network generates a scaled-down frame (512 pixels wide), allowing the network to focus on pose representation, rather than high-resolution image generation. Generation of a high-res output is deferred to the P2F network. This enables us to train the P2P network much more effectively, resulting in a stable training process that generates natural dynamics, and leads to significantly reduced inference time (post-training). + +The generator's architecture is illustrated in Fig. 4. The encoder is composed of a convolutional layer, followed by convolutions with batch normalization Ioffe & Szegedy (2015) and ReLU Nair & Hinton (2010) activations. The latent space combines a sequence of $n_r$ residual blocks. The decoder is composed of fractional strided convolutions with instance normalization Ulyanov et al.
(2016) and ReLU activations, followed by a single convolution terminated by a Tanh activation for the generated frame output. + +Recall that the P2P network also receives the control signal as a second input (Eq. 1). In our experiments, the control signal is a vector of dimension $n_d = 2$ representing displacements along the $x$ and $y$ axes. This signal is incorporated into the network by conditioning the center $n_r - 2$ blocks of the latent space. + +The conditioning takes place by adding, to the activations of each residual block, a similarly sized tensor that is obtained by linearly projecting the 2D control vector $s_i$ . + +**Modified conditional block** Rather than applying a conditioning block based on a traditional ResNet block, we apply a modified one that does not allow for a complete bypass of the convolutional layers. This form of conditioning increases the motion naturalness, as seen in our ablation study. + +The specific details are as follows. The P2P network contains a down-sampling encoder $e$ , a latent space transformation network $r$ , and an up-sampling decoder $u$ . The $r$ network is conditioned on the control signal $s$ , and contains $n_r$ blocks of two types: vanilla residual blocks $(v)$ , and conditioned blocks $w$ . + +$$ +P2P(p, s) = u\left(r\left(e(p), s\right)\right) \quad (4) \qquad r = v \circ \underbrace{w \circ w \circ \cdots \circ w}_{n_r - 2 \text{ times}} \circ v \tag{5} +$$ + +The architecture and implementation details of the P2P network can be found in appendix A. Briefly, let $x$ denote the activations of the previous layer, and $f_{1}(x), f_{2}(x)$ be two consecutive convolutional layers. Let $s$ be a 2D displacement vector, and $g$ a fully-connected layer with a number of output neurons that equals the product of the dimensions of the tensor $x$ .
The two block types take the form: + +$$ +v(x) = f_2\left(f_1(x)\right) + x \quad (6) \qquad w(x, s) = f_2\left(f_1(x) + g(s)\right) + f_1(x) + g(s) \tag{7} +$$ + +# 4.1 TRAINING THE POSE PREDICTION NETWORK + +Following Wang et al. (2018b), we employ two discriminators (low-res and high-res), indexed by $k = 1,2$ . During training, the LSGAN (Mao et al., 2017) loss is applied to the generator and discriminator. An L1 feature-matching loss is applied over the discriminators' activations and over the activations of a trained VGG (Simonyan & Zisserman, 2014b) network. The loss applied to the generator can then be formulated as: + +$$ +\mathcal{L}_{P2P} = \sum_{k=1}^{2} \left(\mathcal{L}_{LS^k} + \lambda_{D} \mathcal{L}_{FM_D^k}\right) + \lambda_{VGG} \mathcal{L}_{FM_{VGG}} \tag{8} +$$ + +where the networks are trained with $\lambda_{D} = \lambda_{VGG} = 10$ . The LSGAN generator loss is (the obj elements are omitted for brevity): + +$$ +\mathcal{L}_{LS^k} = \mathbb{E}_{(p_{i-1}, s_i)} \left[\left(D_k\left(p_{i-1}, P2P\left(p_{i-1}, s_i\right)\right) - 1\right)^2\right] \tag{9} +$$ + +The expectation is computed per mini-batch, over the input pose $p_{i-1}$ and the associated $s_i$ .
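The least-squares adversarial terms are straightforward to state in code. The NumPy sketch below computes the generator-side loss of Eq. 9 together with the standard LSGAN discriminator counterpart; the discriminator scores are stand-in arrays, not outputs of the actual networks:

```python
import numpy as np

def lsgan_generator_loss(d_fake):
    """Eq. 9 style: push discriminator scores on generated poses towards 1."""
    return np.mean((d_fake - 1.0) ** 2)

def lsgan_discriminator_loss(d_fake, d_real):
    """Standard LSGAN discriminator objective: real -> 1, fake -> 0."""
    return 0.5 * np.mean(d_fake ** 2) + 0.5 * np.mean((d_real - 1.0) ** 2)

# A perfect discriminator (1 on real, 0 on fake) incurs zero loss,
# while the generator loss is then maximal.
d_real = np.ones(8)    # stand-in scores on real (previous, current) pose pairs
d_fake = np.zeros(8)   # stand-in scores on generated poses
print(lsgan_discriminator_loss(d_fake, d_real))  # -> 0.0
print(lsgan_generator_loss(d_fake))              # -> 1.0
```

In training, these per-mini-batch means would be computed once per discriminator $k$ and summed as in Eq. 8.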
The discriminator feature-matching loss compares the real pose with the generated pose, using the activations of the discriminator, and is calculated as: + +$$ +\mathcal{L}_{FM_D^k} = \mathbb{E}_{(p_{i-1}, p_i)} \sum_{j=1}^{M} \frac{1}{N_j} \left\| D_k^{(j)}\left(p_{i-1}, p_i\right) - D_k^{(j)}\left(p_{i-1}, P2P\left(p_{i-1}, s_i\right)\right) \right\|_1 \tag{10} +$$ + +with $M$ being the number of layers, $N_{j}$ the number of elements in each layer, $p_{i-1}$ the input (previous) pose, $p_i$ the current (real) pose, $P2P(p_{i-1}, s_i)$ the estimated pose, and $D_k^{(j)}$ the activations of discriminator $k$ in layer $j$ . + +The VGG feature-matching loss is calculated similarly, acting as a perceptual loss over a trained VGG classifier: + +$$ +\mathcal{L}_{FM_{VGG}} = \sum_{j=1}^{M} \frac{1}{N_j^{\prime}} \left\| VGG^{(j)}\left(p_i\right) - VGG^{(j)}\left(P2P\left(p_{i-1}, s_i\right)\right) \right\|_1 \tag{11} +$$ + +with $N_{j}^{\prime}$ being the number of elements in the $j$ -th layer, and $VGG^{(j)}$ the VGG classifier activations at the $j$ -th layer. The loss applied to the discriminator is formulated as: + +$$ +\mathcal{L}_{D^k} = \frac{1}{2} \mathbb{E}_{(p_{i-1}, s_i)} \left[\left(D_k\left(p_{i-1}, P2P\left(p_{i-1}, s_i\right)\right)\right)^2\right] + \frac{1}{2} \mathbb{E}_{(p_{i-1}, p_i)} \left[\left(D_k\left(p_{i-1}, p_i\right) - 1\right)^2\right] \tag{12} +$$ + +The training sequences are first processed by employing the DensePose network, in order to extract the pose information from each frame. This pose information takes the form of an RGB image, where the 2D RGB intensity levels are a projection of the 3D (I)UV mapping.
By applying a binary threshold to the DensePose RGB image, we create a binary mask for the character in the video. From the binary mask $t_i$ of each frame $i$, we compute the center of mass of the character $\rho_i$. The control signal during training is denoted as $s_i = \rho_i - \rho_{i-1}$.

Due to the temporal smoothness of the videos, the difference between consecutive frames in the full frame-rate videos (30fps) is too small to observe significant motion. This results in learned networks that are biased towards motionless poses. Hence, we train with $\Delta = 2$ inter-frame intervals (where $\Delta = 1$ corresponds to using consecutive frames). During inference, we sample at 30fps and apply a directional conditioning signal that has half of the average motion magnitude observed during training.

**Stopping criteria** We use the Adam optimizer (Kingma & Ba, 2016) with a learning rate of $2 \cdot 10^{-4}$, $\beta_{1} = 0.5$ and $\beta_{2} = 0.999$. We observe that training the P2P network does not yield monotonic improvement in output quality. We therefore select the final P2P model as the one that yields the lowest discriminator feature-matching loss. While several losses are applied while training the P2P network, the discriminator feature-matching loss is the only one that holds both motion context (i.e. it receives both the previous and current pose) and information at different abstraction levels (i.e. feature-matching is applied over different levels of activations). This results in improved motion naturalness and reduced perceptual distance, as evident from the ablation study.

**Random occlusions** To cope with pose detection imperfections that occasionally occur, which in turn impair the quality of the generated character, we employ a dedicated data augmentation method in order to boost the robustness of the P2P network.
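The training-time control signal is simply the displacement of the character's center of mass between consecutive binary masks. A small NumPy sketch (toy masks; `center_of_mass` and `control_signal` are illustrative helpers, not code from the paper):

```python
import numpy as np

def center_of_mass(mask):
    """Center of mass (row, col) of a binary character mask t_i."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def control_signal(mask_prev, mask_curr):
    """Training-time control signal s_i = rho_i - rho_{i-1}:
    the displacement of the character's center of mass."""
    return center_of_mass(mask_curr) - center_of_mass(mask_prev)

# toy masks: a 2x2 character blob moving 3 pixels to the right
t_prev = np.zeros((8, 8)); t_prev[3:5, 1:3] = 1
t_curr = np.zeros((8, 8)); t_curr[3:5, 4:6] = 1
print(control_signal(t_prev, t_curr))  # [0. 3.]
```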
A black ellipse of random size and location is added to each input pose frame within the detection bounding box, resulting in an impaired pose (see appendix Fig. 8), with characteristics that are similar to "naturally" occurring imperfections.

# 5 THE POSE2FRAME NETWORK

While the original pix2pixHD network transforms an entire image to an output image of the same size from a specified domain, our Pose2Frame network transforms a pose to a character that is localized in a specific part of the output image and embedded in a given, possibly dynamic, background. This is done both by refocusing the discriminators' receptive field and by applying a learned blending mask over the raw image output. The DensePose network plays a crucial role, as it provides both the relevant image region and a prior over the blending mask.

Focusing the discriminator on the character eliminates the need for feature-enhancing techniques, such as the introduction of a face-GAN, as done by Chan et al. (2018), or adding a temporal loss (which is useful for reducing irrelevant background motion), as done by Wang et al. (2018a).

The generator architecture is illustrated in Fig. 5(a). The P2F low-level network architecture details are somewhat similar to those of the P2P network, with the following modifications: (1) the P2F network generates frames with a resolution width of 1024, (2) no conditioning is applied, i.e., the $w$ blocks are replaced by $v$ blocks, (3) the network generates two outputs: the raw image data $z$ and a separate blending mask $m$, (4) the discriminators are altered to reflect the added focus, and (5) new regularization terms are added to ensure that the masking takes place at the relevant regions (Eq. 17), see Fig. 9 in the appendix.

The generated mask $m$ blends the raw output $z$ with the desired background $b$, rendering the final output frame $f$, according to Eq. 3 (omitting the index $i$ for brevity).
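Eq. 3 itself is not reproduced in this excerpt; assuming the standard per-pixel linear blend that the text describes, the compositing step can be sketched as:

```python
import numpy as np

def blend(z, m, b):
    """Blend the raw generator output z with the background b using the
    soft mask m (values in [0, 1]): f = m * z + (1 - m) * b."""
    return m * z + (1.0 - m) * b

# toy example: a half-opaque mask mixes character and background equally
z = np.full((2, 2, 3), 1.0)   # raw character image
b = np.zeros((2, 2, 3))       # background image
m = np.full((2, 2, 1), 0.5)   # soft blending mask, broadcast over RGB
print(blend(z, m, b)[0, 0])   # [0.5 0.5 0.5]
```

Because $m$ is soft rather than binary, effects such as shadows can mix character-derived and background intensity at the same pixel.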
![](images/cd0294db1e739e9d99fad228851e6ca0fcc3b0f8eae00cf1af8217a0f92fffc4.jpg)
Figure 5: The Pose2Frame network. (a) For each two combined input pose and object $(p = [p_{i-1} + obj_{i-1}, p_i + obj_i])$, the network generates an RGB image $(z_i)$ and a mask $(m_i)$. The RGB and background images are then linearly blended by the generated mask to create the output frame $f_i$. (b) The P2F discriminator setup. The multi-scale discriminator focuses on the binary-thresholded character, obtained with the binary mask $t$, as it appears in both the ground truth image $o$ and the output of the P2F network, for a given pose $p = (p_i, p_{i-1})$. The $\downarrow$ operator denotes downscaling by a factor of two, obtained by average pooling, as applied before the low-resolution discriminator. The VGG feature-matching loss term engages with the full frame, covering perceptual context in higher abstraction layers (e.g. generated shadows).

![](images/7168a4c38e9bd88369b1d52ae74d3f9c9cf7cacb2e3e825e424fb0e827fb159d.jpg)
(a)

![](images/b20450459779c2fbdf784ee87eaea8ae0c3fbfceee613a627b0565a0d240635f.jpg)
(b)
Figure 6: Samples of masks that model both the character and places in the scene in which appearance is changed by the character. (a) The shadow and the tennis racket of the character are captured by the mask, (b) the dancer's shadow appears as part of the mask.

Note that the blending mask is not binary, since various effects, such as shadows, contain both character-derived information and background information, see Fig. 6. Nevertheless, we softly encourage the blending mask to favor the background in regions external to the character, and discourage the generator from rendering meaningful representations outside the character. This is done by employing several regularization terms over the generated mask. As a side effect of these added losses, the network is required to perform higher-level reasoning and not rely on memorization.
In other words, instead of expanding the mask to include all background changes, the network separates between character-dependent changes, such as shadows, held items, and reflections, and those that are independent.

The discriminator setup is illustrated in Fig. 5(b). The discriminator's attention is predominantly shifted towards the character, by applying an inverse binary mask over the character. The masked character image is fed into the discriminators, affecting both the multi-scale loss and the feature-matching loss applied over the discriminators' activations. In parallel, the fully generated frame is fed into the VGG network, allowing the VGG feature-matching loss to aid in the generation of desired structures external to the character.

# 5.1 TRAINING THE POSE TO FRAME NETWORK

The P2F generator loss is formulated as:

$$
\mathcal{L}_{P2F} = \sum_{k=1}^{2} \left(\mathcal{L}_{LS^{k}} + \lambda_{D} \mathcal{L}_{FM_{D}^{k}}\right) + \lambda_{1} \mathcal{L}_{FM_{VGG}} + \lambda_{2} \mathcal{L}_{\text{mask}} \tag{13}
$$

where $\lambda_{1} = 10$ and $\lambda_{2} = 1$. The LSGAN generator loss is calculated as:

$$
\mathcal{L}_{LS^{k}} = \mathbb{E}_{(p, t)} \left[ \left(D_{k}(p \odot t, f \odot t) - 1\right)^{2} \right] \tag{14}
$$

where $p = [p_{i-1} + obj_{i-1}, p_i + obj_i]$ denotes the two pose images, and $t$ is the binary mask obtained by thresholding the DensePose image at time $i$.
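The LSGAN terms used throughout (generator: Eqs. 9 and 14; discriminator: Eqs. 12 and 18) reduce to least-squares penalties on the discriminator scores. A NumPy sketch with toy score vectors (in the P2F case, both discriminator inputs would first be masked by $t$; that step is omitted here):

```python
import numpy as np

def lsgan_g_loss(d_fake):
    """LSGAN generator loss (Eqs. 9, 14): push the discriminator's
    score on generated samples towards the 'real' label 1."""
    return np.mean((d_fake - 1.0) ** 2)

def lsgan_d_loss(d_fake, d_real):
    """LSGAN discriminator loss (Eqs. 12, 18): push fake scores
    towards 0 and real scores towards 1."""
    return 0.5 * np.mean(d_fake ** 2) + 0.5 * np.mean((d_real - 1.0) ** 2)

# toy discriminator outputs over a mini-batch of two samples
d_real = np.array([0.9, 1.1])
d_fake = np.array([0.2, 0.0])
print(lsgan_g_loss(d_fake))          # mean((d_fake - 1)^2) ≈ 0.82
print(lsgan_d_loss(d_fake, d_real))  # 0.5*0.02 + 0.5*0.01 ≈ 0.015
```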
The discriminator feature-matching loss is calculated as:

$$
\mathcal{L}_{FM_{D}^{k}} = \mathbb{E}_{(p, o, t)} \sum_{j=1}^{M} \frac{1}{N_{j}} \left\| D_{k}^{(j)}(p \odot t, o \odot t) - D_{k}^{(j)}(p \odot t, f \odot t) \right\|_{1} \tag{15}
$$

![](images/f2314a0a9374af079cc1443e4f3650d1d81743b6fbd146d1f895dff33310cacb.jpg)

![](images/ffc86ddb60bea91a0ddcc197afc7644c8b28ca2bc6f82485bc5aed5545c27507.jpg)

![](images/402c81041bac90fe8b8ed9214076f9e15202ad644ca14a5e528fb6f5f5aa4d32.jpg)
Figure 7: Generated frames for the controllable tennis character, blended into different backgrounds.

![](images/e343d50ce487585e38b647b236732f7aafc1ad5a52a8704532d023507c3c1e4f.jpg)

with $M$ being the number of layers, $N_{j}$ the number of elements in each layer, and $o$ the real (ground truth) frame. The VGG feature-matching loss is calculated over the full ground truth frame, rather than the one masked by $t$:

$$
\mathcal{L}_{FM_{VGG}} = \sum_{j=1}^{M} \frac{1}{N_{j}} \left\| VGG^{(j)}(o) - VGG^{(j)}(f) \right\|_{1} \tag{16}
$$

with $o$ being the ground truth frame, $N_{j}$ the number of elements in the $j$-th layer, and, as before, $VGG^{(j)}$ the VGG activations of the $j$-th layer.

The mask term penalizes the mask (see appendix Fig. 9 for a visual illustration):

$$
\mathcal{L}_{\text{mask}} = \left\| m \odot (1 - t) \right\|_{1} + \left\| m_{x} \odot (1 - t) \right\|_{1} + \left\| m_{y} \odot (1 - t) \right\|_{1} + \left\| (1 - m) \odot t \right\|_{1} \tag{17}
$$

where $m$ is the generated mask, and $m_x$ and $m_y$ are the mask derivatives in the x and y axes, respectively. The first term acts to reduce the mask's activity outside the regions detected by DensePose. The mask, however, is still required to function in such regions, e.g., to render shadows.
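Eq. 17 can be sketched with finite differences standing in for the mask derivatives $m_x$, $m_y$ (the exact derivative operator and reduction used in the paper are not specified in this section, so the sketch below is an illustrative variant):

```python
import numpy as np

def mask_loss(m, t):
    """Mask regularization (Eq. 17): suppress the mask and its spatial
    derivatives outside the DensePose region t, and encourage the mask
    to be active inside t."""
    mx = np.abs(np.diff(m, axis=1))  # horizontal derivative m_x
    my = np.abs(np.diff(m, axis=0))  # vertical derivative m_y
    off = 1.0 - t                    # region outside the character
    return (np.sum(np.abs(m) * off)
            + np.sum(mx * off[:, 1:])
            + np.sum(my * off[1:, :])
            + np.sum(np.abs(1.0 - m) * t))

t = np.zeros((4, 4)); t[1:3, 1:3] = 1.0   # toy character region
m_good = t.copy()                          # mask matching the character
m_bad  = np.ones((4, 4))                   # mask covering everything
print(mask_loss(m_good, t) < mask_loss(m_bad, t))  # True
```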
Similarly, we suppress the mask derivative outside the pose-detected region, in order to eliminate secluded points and other high-frequency patterns. Finally, a term is added to encourage the mask to be active in the image regions occupied by the character.

The loss applied to the two discriminators is given by:

$$
\mathcal{L}_{D^{k}} = \frac{1}{2} \mathbb{E}_{(p, t)} \left[ \left(D_{k}(p \odot t, f \odot t)\right)^{2} \right] + \frac{1}{2} \mathbb{E}_{(p, o, t)} \left[ \left(D_{k}(p \odot t, o \odot t) - 1\right)^{2} \right] \tag{18}
$$

The Adam optimizer is used for P2F similarly to P2P. The training progression across the epochs is visualized in the appendix (Fig. 10).

# 6 EXPERIMENTS

The method was tested on multiple video sequences, see the supplementary video (https://youtu.be/sNp6HskavBE). The first video shows a tennis player outdoors; the second, a person swinging a sword indoors; and the third, a person walking. The parts of the videos used for training consist of 5.5 min, 3.5 min, and 7.5 min, respectively. In addition, for comparative purposes, we trained the P2F network on a three-minute video of a dancer, which was part of the evaluation done by Wang et al. (2018a).

The controllable output of the tennis player is shown in Fig. 1, which depicts the controller signal used to drive the pose, as well as the generated pose $p_i$, object $obj_i$, mask $m_i$, raw frame $z_i$, and output frame $f_{i}$. A realistic character is generated with some artifacts (see supplementary video) around the tennis racket, for which the segmentation of the training video is only partially successful. Fig. 7 depicts additional results, in which the character is placed on a diverse set of backgrounds containing considerable motion. Appendix B also presents a controlled walking character and a controlled fencing character, which also appear in the supplementary video.

| Dataset | Method | SSIM | LPIPS (SqzNet) | LPIPS (AlexNet) | LPIPS (VGG) |
| --- | --- | --- | --- | --- | --- |
| Tennis | ours | 240±2 | 265±3 | 400±4 | 474±5 |
| | pix2pixHD | 301±26 | 379±35 | 533±42 | 589±32 |
| Walking | ours | 193±133 | 216±149 | 365±252 | 374±258 |
| | pix2pixHD | 224±156 | 308±224 | 485±347 | 434±303 |
| Fencing | ours | 45±4 | 41±8 | 52±11 | 150±15 |
| | pix2pixHD | 308±95 | 531±129 | 670±168 | 642±86 |

Table 1: Comparison with pix2pixHD (see also Fig. 14). SSIM and LPIPS (multiplied by 1000) are shown for three scenarios: (1) tennis (contains dynamic elements, e.g. other players, crowd, difference in camera angle), (2) walking (different character clothing, lighting, and camera angle), (3) fencing (fixed background and view).

A comparison of the P2F network with the pix2pixHD method of Wang et al. (2018b) is provided in Tab. 1, and as a figure in appendix Fig. 14. We compare using the Structural Similarity (SSIM) (Wang et al., 2004) and Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) distance metrics. The mean and standard deviation are calculated for each generated video. The LPIPS method provides a perceptual distance metric by comparing the activations of three different network architectures, VGG (Simonyan & Zisserman, 2014a), AlexNet (Krizhevsky, 2014), and SqueezeNet (Iandola et al., 2016), with an additional linear layer set on top of each network. For each dataset, we select a test set that was not used during training. Although this test set is evaluated as the ground truth, there is a domain shift between the training and the test video: the tennis test set contains dynamic elements, such as other players, a crowd, and a slight difference in camera angle; the walking test set contains different character clothing, background lighting, and camera angle. The fencing test set is more similar to the training set. As seen in appendix Fig.
14, the baseline method results in many background and character artifacts, and a degradation in image and character quality, as it is forced to model the entire scene, rather than focusing solely on the character and its shadow, as our method does. This is also apparent in the statistics reported in the table.

Another experiment, dedicated to the P2F network (the other methods do not employ P2P), compares it with the vid2vid method of Wang et al. (2018a). The results are reported in the supplementary video and in appendix C. Our method produces far fewer background distortions, can better handle variation in the character's location, and has the ability to embed the character into novel backgrounds.

An ablation study is presented in appendix D, showing the contribution of the various components of the system, both quantitatively and qualitatively. In addition, we describe the unfavorable results obtained when replacing the autoregressive model with a concatenative model.

# 7 CONCLUSIONS

Generating smooth motion that combines unpredictable control, the current pose, and previous motion patterns is a challenging task. The proposed novel method employs two autoencoders: one generates autoregressive motion for a specific learned style, and the other generates a realistic frame for blending with a dynamic background.

Our work paves the way for new types of realistic and personalized games, which can be casually created from everyday videos. In addition, controllable characters extracted from YouTube-like videos can find their place in virtual worlds and augmented realities. The work is still limited in various aspects, such as not allowing control over the illumination of the character, the lack of support for novel views, and not modeling the character's interaction with scene objects.

# ACKNOWLEDGMENTS

The authors would like to thank Lisa Rhee, Ilkka Hartikainen, and Adrian Bryant for allowing us to use their videos for training.
# REFERENCES

Guha Balakrishnan, Amy Zhao, Adrian V Dalca, Fredo Durand, and John Guttag. Synthesizing images of humans in unseen poses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8340-8348, 2018.
Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Everybody dance now. arXiv preprint arXiv:1808.07371, 2018.
Patrick Chao, Alexander Li, and Gokul Swamy. Generative models for pose transfer. arXiv preprint arXiv:1806.09070, 2018.
Patrick Esser, Johannes Haux, Timo Milbich, and Björn Ommer. Towards learning a realistic rendering of human behavior. In ECCV Workshop, 2018a.
Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8857-8866, 2018b.
Katerina Fragkiadaki, Sergey Levine, and Jitendra Malik. Recurrent network models for kinematic tracking. arXiv preprint arXiv:1508.00271, 2015.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Trans. Graph., 36(4):42:1-42:13, July 2017.
Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks.
In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. +Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision (ECCV), 2016. +Angjoo Kanazawa, Jason Y. Zhang, Panna Felsen, and Jitendra Malik. Learning 3d human dynamics from video. arXiv preprint arXiv:1812.01601, 2018. +D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2016. +Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014. +Hei Law and Jia Deng. Cornernet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 734-750, 2018. +Yining Li, Chen Huang, and Chen Change Loy. Dense intrinsic appearance flow for human pose transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3693-3702, 2019. +K.K. Maninis, S. Caelles, J. Pont-Tuset, and L. Van Gool. Deep extreme cut: From extreme points to object segmentation. In Computer Vision and Pattern Recognition (CVPR), 2018. +Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In ICCV, 2017. + +Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. +Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML), pp. 807-814, 2010. +Xue Bin Peng, Angjoo Kanazawa, Jitendra Malik, Pieter Abbeel, and Sergey Levine. Sfv: Reinforcement learning of physical skills from videos. ACM Trans. Graph., 37(6), November 2018. +Albert Pumarola, Antonio Agudo, Alberto Sanfeliu, and Francesc Moreno-Noguer. Unsupervised person image synthesis in arbitrary poses. 
In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
Riza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. DensePose: Dense human pose estimation in the wild. In CVPR, 2018.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014a.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014b.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. arXiv preprint arXiv:1609.02612, 2016.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2018a.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018b.
Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
Ceyuan Yang, Zhe Wang, Xinge Zhu, Chen Huang, Jianping Shi, and Dahua Lin. Pose guided human video generation. arXiv preprint arXiv:1807.11152, 2018.
Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
Xingyi Zhou, Jiacheng Zhuo, and Philipp Krahenbuhl. Bottom-up object detection by grouping extreme and center points. In CVPR, 2019.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros.
Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.

![](images/f02c6523e9f693a6d0d83da982ff7477bdf8391c68d29aad0b7fd468afe10964.jpg)

![](images/dcd422ff3de53b8bc8c1e7b715e05c1e493ae999b404a622494d0a3f2f6a83e3.jpg)
(a)

![](images/79db7f2e306813d3e8ccff8bb57a632ddf2d7e32823587957decc1b88e1f357e.jpg)

![](images/ff2b43193a7d83ac4d566fdaa8566fb90953f298c2b6ec28fad6da5f8f92cf10.jpg)
(b)
Figure 8: The occlusion-based augmentation technique used to increase robustness during training of the P2P network. Each row is a single sample. (a) $p_{i-1}$ with part of it occluded by a random ellipse, (b) the predicted pose $\hat{p}_i$, (c) the ground truth pose $p_i$. The generated output seems to "fill in" the missing limbs, as well as predict the next frame. In this figure and elsewhere, the colors represent the 3D UV mapping.

![](images/300527b1154e74cf46916dbe76f6b04b6eebc57cab187d5225ff14c1b6595654.jpg)

![](images/103533fb6cd8b27531ac7cc332fea5a9458678a26b440b9fbb0a127c6d4373f5.jpg)
(c)

# A ADDITIONAL POSE2POSE NETWORK ARCHITECTURE AND IMPLEMENTATION DETAILS

We follow the naming convention of (Wang et al., 2018b; Zhu et al., 2017; Johnson et al., 2016). Let Ck denote a Conv-InstanceNorm-ReLU layer with $k$ filters, each with a kernel size of 7x7 and a stride of 1. Dk denotes a Convolution-InstanceNorm-ReLU layer with $k$ filters and a stride of 2, where reflection padding is used. Vk denotes a vanilla residual block with two 3x3 convolutional layers with the same number of filters on both layers. Wk denotes a conditioned residual block. Uk denotes a 3x3 Fractional-Strided-Convolution-InstanceNorm layer with $k$ filters and a stride of 0.5.

The generator, i.e., the P2P network, can then be described as: C64, D128, D256, D512, D1024, V1024, W1024, W1024, W1024, W1024, W1024, W1024, U512, U256, U128, U64, C3.
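The layer-naming convention above is compact enough to parse mechanically. A small sketch (illustrative, not the authors' code) that expands the generator string and tracks the cumulative spatial scale, confirming that the four stride-2 D layers produce a 1/16-resolution bottleneck which the four fractional-strided U layers undo:

```python
# Parse the generator specification into (block, channels, scale) tuples,
# where scale is the cumulative downscaling factor relative to the input.
SPEC = ("C64, D128, D256, D512, D1024, V1024, W1024, W1024, W1024, "
        "W1024, W1024, W1024, U512, U256, U128, U64, C3")

STRIDE = {"C": 1, "D": 2, "V": 1, "W": 1, "U": 0.5}  # U is fractional-strided

def parse_generator(spec):
    layers, scale = [], 1.0
    for token in spec.split(","):
        token = token.strip()
        kind, channels = token[0], int(token[1:])
        scale *= STRIDE[kind]
        layers.append((kind, channels, scale))
    return layers

layers = parse_generator(SPEC)
bottleneck = max(l[2] for l in layers)
print(bottleneck)  # 16.0 -> the bottleneck is 1/16 of the input resolution
print(layers[-1])  # ('C', 3, 1.0) -> RGB output back at full resolution
```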
The input images are scaled to a width of 512 pixels, with the height scaled accordingly.

The discriminators are two PatchGANs (Isola et al., 2017) with an identical architecture of C64, C128, C256, C512, working at the input resolution and at a lower resolution, downsampled by a 2D average-pooling operation with a kernel size of 3 and a stride of 2.

The architecture of the P2F network is similar to that of the P2P network, with the following adjustments: (i) the conditioned residual blocks are replaced by unconditioned ones, (ii) the input of P2F has 6 channels for $p_i$ and $p_{i-1}$, (iii) there is an additional head generating the mask output, which uses a sigmoid activation function.

# B ADDITIONAL IMAGES

Fig. 8 depicts the random occlusion process (P2P training), in which a black ellipse of random size and location is added to each input pose frame within the detection bounding box. This results in an impaired pose, with characteristics that are similar to "naturally" occurring imperfections.

The mask loss term $\mathcal{L}_{mask}$ of P2F (Sec. 5) is illustrated in Fig. 9.

Fig. 10 depicts the progression during training of the P2F dancer model. As training progresses, the details of the dancer become sharper, and the hair becomes part of the mask, despite being outside the DensePose-detected area (i.e., off pixels in $t$).

Fig. 11 depicts a controlled walking character, along with the control signal and the generated poses.

![](images/4b8972e6f9d447257caa0a4aa7a5efd630f974380eff5fc8453b63c6254c5743.jpg)
Figure 9: Mask losses applied during the P2F network training. An inverse binary-thresholded mask is used to penalize pixel intensity of the generated mask in the regions excluding the character of interest. For the generated mask, we apply regularization over the derivatives in the x and y axes as well, to encourage smooth mask generation and discourage high-frequency pattern generation.
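The lower-resolution discriminator input described in Appendix A is produced by average pooling with kernel 3 and stride 2. A naive NumPy sketch of that operation (an explicit loop for clarity; real implementations use vectorized pooling):

```python
import numpy as np

def avg_pool2d(img, k=3, s=2):
    """2D average pooling (kernel k, stride s), as used to produce the
    lower-resolution input for the second PatchGAN discriminator."""
    h, w = img.shape
    out_h = (h - k) // s + 1
    out_w = (w - k) // s + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = img[i*s:i*s+k, j*s:j*s+k].mean()
    return out

img = np.arange(64, dtype=float).reshape(8, 8)  # toy single-channel image
small = avg_pool2d(img)
print(small.shape)  # (3, 3): roughly half the resolution per axis
```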
![](images/40d20646504eee388b4478456ec08c93a62c3c3e402b99e32b42605a9e5fae87.jpg)
(a)

![](images/731483ed6f4cc4c4c347bb909000541d800abcd0d6ec397b779f33e5cdfd49b5.jpg)

![](images/e7d481e4031517c1a63b a95f92e773b017d7ade00016fee97ff2d2b3446be557.jpg)

![](images/53559d3433da323588328a3065ca8eb88ba7f412e46f958a13fc65c20dc6888c.jpg)

![](images/08b54dff0a3ee12d07ccb5afd72538c905fa15f4d1cc6ade6339e8ae8d27f4ea.jpg)

![](images/129de6e38052df08606b36233950a5c14fb4e3212ea7dbe8059136c867112b6e.jpg)

![](images/4bf54d7416b34cc90cd62b4b7913af9890b4939ad052d9d1a3fd8ebffb589479.jpg)

![](images/108aa1dddd22b6b7b90f6eed25fda5c14ee4f1dd17db94ca30136af2c0be881f.jpg)

![](images/e98ffd9486b41283060e3cfe9527ec3efebd9a9b1f123612b41765892ed9a40a.jpg)

![](images/ea39991ff041c10f4cfd6b5eec46578e9b8eca91bcdcac2b0ce4b614773c99f4.jpg)
(b)

![](images/9fad3414466f562591eaf4f5b4828bc335e7a46d28ea820ab990b719886a33fc.jpg)
Figure 10: Training the P2F network. (a) A sample pose, (b) the target frame, (c) the generated raw frame, the mask, and the output frame at different epochs: 10, 30, 50, and 200 (final).

![](images/fc6e2203ac33a2f306e57d692670e887b8e5d31fedabf6e2e3201806e80da927.jpg)
(c)

![](images/192035e00e14ddf83d002845c7ba9efd79a7f94b4d8e3e8ac2a822ebafb6cb25.jpg)

![](images/8ff7fd6c735570e5c70a5235a17f855932f70f2c96c656f1fe5292be7ee5f8c2.jpg)

The fencing character is shown in Fig. 12. The mask for various frames in the controlled sequence is shown, as well as two backgrounds: the background of the reference video, and an animated background. Fig. 13 depicts an additional controlled walking character, along with the control signal and the generated poses.

Fig. 14 provides a visual comparison with the baseline method of pix2pixHD (Wang et al., 2018b).
As can be seen, the baseline method results in many background and character artifacts and a degradation in image and character quality, as it is forced to model the entire scene, rather than focusing solely on the character and the environmental factors, as our method does.

# C COMPARISON WITH VID2VID

Fig. 15(a-e) presents a comparison with the method of Wang et al. (2018a). Shown are the target image from which the driving pose is extracted, the extracted pose, the results of the baseline method, and our result. As can be seen, our method handles the background in a way that creates far fewer distortions: since we apply a learned mask, background generation is not required.

The characters themselves are mostly comparable in quality, despite our choice not to add a dedicated treatment of the face. In addition, despite not placing considerable emphasis on temporal consistency

![](images/07ad14f1d46b25ec4c57839211f3f384b0003384a792a2a4423ee18befe38023.jpg)
Figure 11: Synthesizing a walking character, emphasizing the control between the frames. Shown are the sequence of poses generated in an autoregressive manner, as well as the generated frames.

![](images/d75af6c97d642a5f2eb1127af3c7d2433d0afa8feaff33c2d177a8b98f6ff879.jpg)
Figure 12: Generated frames for the controllable fencing character. Each column is a different pose. The rows are the obtained mask, and the placement on two different backgrounds: the one obtained by applying a median filter to the reference video, and one taken from a motion picture.

![](images/c5b41148177da35c42820a57d63dc5dd00ffba90c263099270b18b0c29743b06.jpg)
Figure 13: Synthesizing an additional walking character. Shown are the sequence of poses generated in an autoregressive manner, as well as the generated frames.
+ +![](images/269a8f9da580ae76f740e5790004296f862eafd70b27afc000e72cb72d84ba5f.jpg) + +![](images/94e3370496a47eccb11c978c6995ca56d040e4e9494eeb25407ec0977fd77651.jpg) + +![](images/d5665a4e278cda1efc95904e87a5d7b2fb64b8a518b18f928a34764e537c4892.jpg) + +![](images/77d5bba33cd3dc6d3ba568cdeace0152420273316b0a470232e2a34a608a691d.jpg) + +![](images/d6db50f8f98f02077b8d384e2fdefc709ae8c5e20dc4c1813c66ee608828221f.jpg) + +![](images/28120be670c71736923fd7c0d5543990fbd349608b73513132053fbee89ad4fc.jpg) + +![](images/413fb5579dc124af813be52f6c7de8b3cc16b8291d61e71bb7f47707f0c37d9c.jpg) +(a) + +![](images/ce645d9c2d52ce960c3b662b8a10c46be902dd8fc594d1703824448c957480f5.jpg) +(b) + +![](images/2c5b702dc443fde1b0e86bad191bd05294690e2e90ec3970b41bfecd6336ea01.jpg) +(c) +Figure 14: A comparison of the P2F network with the pix2pixHD method of Wang et al. (2018b). (a) Ground truth image used as the pose source, (b) our result, (c) The results of pix2pixHD. The baseline method results in many background artifacts, as it generates the entire frame. The degradation in image quality is apparent as well, and that of the character in particular. + +![](images/20853b9b2053f5eb39d21d295d1ba7e88713910e98f10067ea7df4bd3ddb740f.jpg) + +![](images/1da37f0aee41b8247650041c8cd90cdb405377c57da0fa9b22d8faaf8aa10829.jpg) + +![](images/44ef9337b765928e918e2b26360599f3e53b745cb76c915e8e29569c6bb7a436.jpg) + +![](images/3241930278cd04d33a0cff93c91664626bb1f93071cf75664fb4516c01449343.jpg) + +![](images/cd865adc4a44349dc9e3985882b7efec58b920eb8cfeb5c958b2bee02524a2c1.jpg) + +![](images/a2d8cd32f8668126f4547cb260bb687df1eb8ecd15a4175786973e7c1a897aca.jpg) +(a) +(f) +Figure 15: A comparison of the P2F network with the vid2vid method of Wang et al. (2018a). (a) The target-posed image, (b) the pose extracted from this image, (c) the result of vid2vid, (d) our result, (e) a frame from the reference video. Many artifacts are apparent in the background produced by vid2vid. 
vid2vid also distorts the character's appearance and dimensions to better match the pose. (f-k) The same pose, displayed by two characters on three different backgrounds, demonstrates our advantage over vid2vid in replacing backgrounds. + +![](images/1b5df4f06034d3432cdc38eaa35dd4c874e63ab2b5383e374cc6e7f09cd224ba.jpg) +(b) +(g) + +![](images/9cefd3dba27cb96abc2d2497136172c6ea469ae77b009245ee1074daee1157bc.jpg) +(c) +(h) + +![](images/045a973b4cf4961f303dd8e1ce036ccca4fde8af43c3982d6e2299bdcfdcc275.jpg) +(i) + +![](images/5e2b0bdf8935d2d622518dc11aaa5c4c8f21bdd6b3364eaf3b2b7c0c02ec3976.jpg) +(d) +(j) + +![](images/b9c36144c3b6d6e7b8a8aae4e3fc4b417a61933c3a9ff4215e99c7c23eb56051.jpg) +(e) +(k) + +during training (e.g. optical flow, temporal discriminator), our method produces videos that are as smooth. Finally, the proportions of the character in our video are better maintained, while in the baseline model, the character is slightly distorted toward the driving pose. + +In addition, as we demonstrate in Fig. 15(f-k), our method has the ability to replace the background. + +
| Network Component | SSIM | LPIPS (SqzNet) | LPIPS (AlexNet) | LPIPS (VGG) |
| --- | --- | --- | --- | --- |
| Base Conditioning | 15.0±4 | 20.5±14 | 39.8±25 | 37.0±14 |
| + Conditioning Block | 14.7±3 | 15.6±7 | 29.8±14 | 30.6±8 |
| + Stopping Criteria | 14.0±3 | 14.9±7 | 28.1±14 | 29.5±8 |
| + Object Channel | 14.1±3 | 13.3±6 | 24.9±12 | 28.6±7 |
Table 2: Ablation study of the P2P network on the tennis sequence. The results are multiplied by a factor of 1000 for readability.

![](images/98da0bd9ba2c19e839b414ec07e344878c990a6b1b21a1810ae8c56408e389ae.jpg)
(a)

![](images/836b0b483e5916371eba4ea3633c6e3867762c38851e95409aabfb6d44845d2d.jpg)
(b)

![](images/5198208ee69ed41f378f2608a96e41e5ab041deae23cca545542ac86e9591922.jpg)
(c)

![](images/4669bd541086211df8d8c8a803538eef9b71545c557248448cfbc3aee58d53da.jpg)
(d)

![](images/f3de8e3e93fe840465440712488499aab0aee000064832e344ef2a0569f2cea8.jpg)
(e)

![](images/b68045164a2f261cd503a17bd372934d730b9aeec96bf2999a3f5f2d8275a6ed.jpg)
(f)

![](images/7cb9811d7c65bfca523eed8318c314ab4e3bd5902d50b2e84d9415746ea65b18.jpg)
Figure 16: P2F ablation. (a) Ours, (b) no VGG FM on the full frame (no shadows generated), (c) no mask regularization (background artifacts), (d) 1 input pose (no racket generation due to a semantic segmentation mis-detection), (e) no discriminator FM (character/racket heavily distorted), (f) no mask, i.e. background fully generated (excessive distortion in background / character).
Figure 17: P2P vs. baseline method comparison. The temporal consistency of P2P-generated motion (row 1) is apparent, as opposed to the baseline method (row 2), which results in temporal inconsistency.

# D ABLATION STUDY

We test the effect of several novel P2P network components, using both SSIM and LPIPS. The test is performed when predicting one frame into the future (in a "teacher forcing" mode). The results in Tab. 2 demonstrate that our conditioning block is preferable to the conventional one, and that adding the object channel is beneficial. Selecting the model based on the minimal discriminator feature-matching loss is helpful as well.

A qualitative ablation study for the P2F network is provided in Fig. 16. As can be seen, each component contributes to the naturalness of the results.
+ +To validate the need for an autoregressive motion generation, as done by the P2P network, we implemented a baseline method that copies motion patterns from the training set, matching the displacement, and verified that such a naive approach fails to produce natural motion. A sequence of frames from the experiment video can be seen in Fig. 17. \ No newline at end of file diff --git a/vid2gamecontrollablecharactersextractedfromrealworldvideos/images.zip b/vid2gamecontrollablecharactersextractedfromrealworldvideos/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7d560ff68ffbcaff2b7e2b4e24a56ecb31634ec4 --- /dev/null +++ b/vid2gamecontrollablecharactersextractedfromrealworldvideos/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:edc68aaaafd6a229216d9529b3c2c8f1daad5235b38abef577dfcd2d96b1f915 +size 909874 diff --git a/vid2gamecontrollablecharactersextractedfromrealworldvideos/layout.json b/vid2gamecontrollablecharactersextractedfromrealworldvideos/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6a472ff1dd3e878a46abb7a8b19c8ceecf5c97ed --- /dev/null +++ b/vid2gamecontrollablecharactersextractedfromrealworldvideos/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5b3f7949339626e907004cc09519a7c09c91cd6057d8a8bde963558234b15af +size 572466 diff --git a/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_content_list.json b/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1d180e6c2c4b72eeef7007296204b9c89907c845 --- /dev/null +++ b/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:5174031468532adeddac99ee80e6d6a58f214fd4995ea47317657d6733ec9f4d +size 95940 diff --git a/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_model.json b/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c6cb8a7225bff14a7083210c7927963d7e8aaf5c --- /dev/null +++ b/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:131ff8c6d5d44d803c9d6853f17704ba53480eff7dfcbdb148335588f00de393 +size 114728 diff --git a/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_origin.pdf b/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..493f908a9f0d4f21eb092e96210448e701904ae2 --- /dev/null +++ b/vlbertpretrainingofgenericvisuallinguisticrepresentations/9aa6df04-7ecf-4fe3-8834-7eb2f3808719_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4782751adac56232196dc1d3bdca3f83eef649ee84ff604b91ed76d711f48a78 +size 7655875 diff --git a/vlbertpretrainingofgenericvisuallinguisticrepresentations/full.md b/vlbertpretrainingofgenericvisuallinguisticrepresentations/full.md new file mode 100644 index 0000000000000000000000000000000000000000..01dfa11c90530e31be25cd1e548a3c9c4a072d69 --- /dev/null +++ b/vlbertpretrainingofgenericvisuallinguisticrepresentations/full.md @@ -0,0 +1,340 @@ +# VL-BERT: PRE-TRAINING OF GENERIC VISUAL-LINGUISTIC REPRESENTATIONS + +Weijie $\mathbf{Su}^{1,2*}$ , Xizhou Zhu $^{1,2*}$ , Yue Cao $^{2}$ , Bin Li $^{1}$ , Lewei Lu $^{2}$ , Furu Wei $^{2}$ , Jifeng Dai $^{2\dagger}$ + +1University of Science and Technology of China +2Microsoft Research Asia + +{jackroos, 
ezra0408}@mail.ustc.edu.cn, binli@ustc.edu.cn + +{yuecao,lewlu,fuwei,jifdai}@microsoft.com + +# ABSTRACT + +We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. It is designed to fit most visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with a text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved first place among single models on the leaderboard of the VCR benchmark. Code is released at https://github.com/jackroos/VL-BERT. + +# 1 INTRODUCTION + +Pre-training of generic feature representations applicable to a variety of tasks in a domain is a hallmark of the success of deep networks. First, in computer vision, backbone networks designed for and pre-trained on ImageNet (Deng et al., 2009) classification are found to be effective for improving numerous image recognition tasks. Recently, in natural language processing (NLP), Transformer networks (Vaswani et al., 2017) pre-trained with the "masked language model" (MLM) objective (Devlin et al., 2018) on large language corpora excel at a variety of NLP tasks.
+ +Meanwhile, for tasks at the intersection of vision and language, such as image captioning (Young et al., 2014; Chen et al., 2015; Sharma et al., 2018), visual question answering (VQA) (Antol et al., 2015; Johnson et al., 2017; Goyal et al., 2017; Hudson & Manning, 2019), and visual commonsense reasoning (VCR) (Zellers et al., 2019; Gao et al., 2019), such pre-trained generic feature representations are lacking. The previous practice is to combine base networks pre-trained for image recognition and NLP respectively in a task-specific way. The task-specific model is directly finetuned for the specific target task, without any generic visual-linguistic pre-training. The task-specific model may well suffer from overfitting when the data for the target task is scarce. Also, due to the task-specific model design, it is difficult to benefit from pre-training, where the pre-training task may well differ from the target. A common ground for studying the feature design and pre-training of visual-linguistic tasks in general has been missing. + +In the various network architectures designed for different visual-linguistic tasks, a key goal is to effectively aggregate the multi-modal information in both the visual and linguistic domains. For example, to pick the right answer in the VQA task, the network must integrate linguistic information from the question and the answers, aggregate visual information from the input image, and align the linguistic meanings with the visual clues. Thus, we seek to derive generic representations that can effectively aggregate and align visual and linguistic information. + +In the meantime, we see the successful application of Transformer attention (Vaswani et al., 2017) in NLP, together with its MLM-based pre-training technique in BERT (Devlin et al., 2018).
The attention module is powerful and flexible in aggregating and aligning word embedded features in sentences, while the pre-training in BERT further enhances this capability. + +Inspired by that, we developed VL-BERT, a pre-trainable generic representation for visual-linguistic tasks, as shown in Figure 1. The backbone of VL-BERT is a (multi-modal) Transformer attention module taking both visual and linguistic embedded features as input. In it, each element is either a word from the input sentence or a region-of-interest (RoI) from the input image, together with certain special elements to disambiguate different input formats. Each element can adaptively aggregate information from all the other elements according to the compatibility defined on their contents, positions, categories, etc. The content features of a word / an RoI are domain specific (WordPiece embeddings (Wu et al., 2016) as word features, Fast R-CNN (Girshick, 2015) features for RoIs). By stacking multiple layers of multi-modal Transformer attention modules, the derived representation has rich capability in aggregating and aligning visual-linguistic clues. Task-specific branches can be added on top for specific visual-linguistic tasks. + +To better exploit the generic representation, we pre-train VL-BERT on both a large visual-linguistic corpus and text-only datasets1. The pre-training loss on the visual-linguistic corpus is incurred via predicting randomly masked words or RoIs. Such pre-training sharpens the capability of VL-BERT in aggregating and aligning visual-linguistic clues. The loss on the text-only corpus is the standard MLM loss of BERT, which improves the generalization on long and complex sentences. + +Comprehensive empirical evidence demonstrates that the proposed VL-BERT achieves state-of-the-art performance on various downstream visual-linguistic tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension.
In particular, we achieved first place among single models on the leaderboard of visual commonsense reasoning. + +# 2 RELATED WORK + +Pre-training for Computer Vision Prior to the era of deep networks, sharing features among different tasks and improving the features via pre-training was far from mature. The models for various computer vision tasks involved too diverse design choices to derive a generic representation. With the success of AlexNet (Krizhevsky et al., 2012) in ImageNet (Deng et al., 2009) classification, we see the renaissance of convolutional neural networks (CNNs) in the vision community. Soon after that, researchers found that ImageNet pre-trained CNNs can serve well as a generic feature representation for various downstream tasks (Donahue et al., 2014), such as object detection (Girshick et al., 2014), semantic segmentation (Long et al., 2015), and instance segmentation (Hariharan et al., 2014). Improvements in backbone networks for ImageNet classification further improve the downstream tasks. Recently, there has been work on directly training CNNs from scratch on massive-scale target datasets, without ImageNet pre-training (He et al., 2018), achieving performance on par with ImageNet pre-training. However, the authors also note that pre-training on a proper massive dataset is vital for improving performance on target tasks with scarce data. + +Pre-training for Natural Language Processing (NLP) It is interesting to note that the development of pre-training techniques in NLP lagged well behind computer vision. There are previous research works on improving word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Kiros et al., 2015), which are a low-level linguistic feature representation. On top of that, numerous diverse architectures were designed for various NLP tasks. In the milestone work of Transformers (Vaswani et al., 2017), the Transformer attention module is proposed as a generic building block for various NLP tasks.
After that, a series of approaches were proposed for pre-training the generic representation, mainly based on Transformers, such as GPT (Radford et al., 2018), BERT (Devlin et al., 2018), GPT-2 (Radford et al., 2019), XLNet (Yang et al., 2019), XLM (Lample & Conneau, 2019), and RoBERTa (Liu et al., 2019). Among them, BERT is perhaps the most popular one due to its simplicity and superior performance. + +Pre-training for Visual-Linguistic Tasks. The development course of models for visual-linguistic tasks is quite similar to those in the computer vision and NLP communities. Previously, task-specific models were designed, wherein the features derived from off-the-shelf computer vision and NLP models are combined in an ad-hoc way for specific tasks. Model training is performed on the dataset for the specific task only. + +VideoBERT (Sun et al., 2019b) is the first work seeking to conduct pre-training for visual-linguistic tasks. In it, video clips are processed by off-the-shelf networks for action recognition, and are assigned to different clusters (visual words) based on the derived features. The pre-training loss is incurred via predicting the cluster ids of masked video clips. Due to the abrupt clustering of the video clips, it loses considerable visual content information and hinders updating the visual network parameters. In the follow-up work of CBT (Sun et al., 2019a), this clustering mechanism is removed. Both works are applied to videos, which have a linear structure in the time dimension, just as sentences do. It is highly desirable to study the well-established image-based visual-linguistic tasks as well. + +Concurrent to our work, multiple works released on arXiv very recently also seek to derive a pre-trainable generic representation for visual-linguistic tasks. Table 5 in the Appendix compares among them. We briefly discuss some of these works here.
+ +In ViLBERT (Lu et al., 2019) and LXMERT (Tan & Bansal, 2019), which are under review or just got accepted, the network architectures consist of two single-modal networks applied to the input sentences and images respectively, followed by a cross-modal Transformer combining information from the two sources. The attention pattern in the cross-modal Transformer is restricted, which the authors believe improves the performance. The authors of ViLBERT claim that such a two-stream design is superior to a single-stream unified model. In contrast, the proposed VL-BERT is a unified architecture based on Transformers without any restriction on the attention patterns. The visual and linguistic contents are fed as input to VL-BERT, wherein they interact early and freely. We found that our unified VL-BERT model outperforms such two-stream designs. + +VisualBert (Li et al., 2019b), B2T2 (Alberti et al., 2019), and Unicoder-VL (Li et al., 2019a), which are works in progress or under review, are also of a unified single-stream architecture. The differences among these works are compared in Table 5. The concurrent emergence of these research works indicates the importance of deriving a generic pre-trainable representation for visual-linguistic tasks. + +In addition, there are three noticeable differences between VL-BERT and the other concurrent works in pre-training. Their effects are validated in Section 4.3. (1) We found the task of Sentence-Image Relationship Prediction, used in all of the other concurrent works (e.g., ViLBERT (Lu et al., 2019) and LXMERT (Tan & Bansal, 2019)), to be of no help in pre-training visual-linguistic representations. Thus such a task is not incorporated in VL-BERT. (2) We pre-train VL-BERT on both visual-linguistic and text-only datasets. We found such joint pre-training improves the generalization over long and complex sentences. (3) Improved tuning of the visual representation.
In VL-BERT, the parameters of Fast R-CNN, deriving the visual features, are also updated. To avoid visual clue leakage in the pre-training task of Masked RoI Classification with Linguistic Clues, the masking operation is conducted on the input raw pixels, rather than on the feature maps produced by layers of convolution. + +# 3 VL-BERT + +# 3.1 REVISIT BERT MODEL + +Let $x = \{x_{1},\ldots ,x_{N}\}$ be the input elements in BERT (Devlin et al., 2018), which are embedded features encoding sentence words. They are processed by a multi-layer bidirectional Transformer (Vaswani et al., 2017), where the embedding features of each element are transformed layer-by-layer in the fashion of aggregating features from the other elements with adaptive attention weights. Let $x^{l} = \{x_{1}^{l},\dots,x_{N}^{l}\}$ be the features of the $l$ -th layer ( $x^0$ is set as the input $x$ ). The features of the $(l + 1)$ -th layer, $x^{l + 1}$ , are computed by + +$$
\tilde{h}_{i}^{l+1} = \sum_{m=1}^{M} W_{m}^{l+1} \left\{ \sum_{j=1}^{N} A_{i,j}^{m} \cdot V_{m}^{l+1} x_{j}^{l} \right\} \quad \text{Multi-head Attention}, \tag{1}
$$ + +$$
h_{i}^{l+1} = \operatorname{LayerNorm}\left(x_{i}^{l} + \tilde{h}_{i}^{l+1}\right) \quad \text{Residual Connection}, \tag{2}
$$ + +$$
\tilde{x}_{i}^{l+1} = W_{2}^{l+1} \cdot \operatorname{GELU}\left(W_{1}^{l+1} h_{i}^{l+1} + b_{1}^{l+1}\right) + b_{2}^{l+1} \quad \text{Feed-forward}, \tag{3}
$$ + +$$
x_{i}^{l+1} = \operatorname{LayerNorm}\left(h_{i}^{l+1} + \tilde{x}_{i}^{l+1}\right) \quad \text{Residual Connection}, \tag{4}
$$ + +where $m$ in Eq.
1 indexes over the attention heads, and $A_{i,j}^{m}\propto \exp [(Q_{m}^{l + 1}x_{i}^{l})^{T}(K_{m}^{l + 1}x_{j}^{l})]$ denotes the attention weights between elements $i$ and $j$ in the $m$ -th head, normalized such that $\sum_{j = 1}^{N}A_{i,j}^{m} = 1$ . $W_{m}^{l + 1}$ , $Q_{m}^{l + 1}$ , $K_{m}^{l + 1}$ and $V_{m}^{l + 1}$ are learnable weights for the $m$ -th attention head, and $W_{1}^{l + 1}$ , $W_{2}^{l + 1}$ and $b_{1}^{l + 1}$ , $b_{2}^{l + 1}$ in Eq. 3 are learnable weights and biases, respectively. Note that the operations in Eqs. $1\sim 4$ are invariant to the order of the input sequence, i.e. the final BERT representation of a permuted input is the same as the final BERT representation of the original input after the same permutation. The position of an element in BERT is encoded in its own embedding features by the sequence positional embedding. Thanks to this decoupled representation, the BERT model is flexible enough to be pre-trained and finetuned for a variety of NLP tasks. + +In BERT pre-training, the masked language modeling (MLM) task is introduced. The embedded features of a certain input word are randomly masked out (the token embedding channels capturing the word content are replaced by a special [MASK] token). The BERT model is trained to predict the masked word from the linguistic clues of all the other unmasked elements. As explained in Wang & Cho (2019), the overall MLM-based training of BERT is equivalent to optimizing the following joint probability distribution + +$$
P (x | \theta) = \frac {1}{Z (\theta)} \prod_ {i = 1} ^ {N} \phi_ {i} (x | \theta), \tag {5}
$$ + +where $\phi_i(x|\theta)$ is the potential function for the $i$ -th input element, with parameters $\theta$ , and $Z(\theta)$ is the partition function.
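For concreteness, the layer in Eqs. 1–4 can be sketched in a few lines of NumPy (a minimal, randomly initialized illustration, not the actual BERT implementation; the shapes and the tanh GELU approximation are assumptions). It also exhibits the order-invariance property noted above: permuting the input rows permutes the output rows identically.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # LayerNorm over the feature dimension (Eqs. 2 and 4).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # tanh approximation of GELU, as commonly used in BERT implementations.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def transformer_layer(x, params, num_heads):
    # x: (N, d) features of one layer. params holds per-head weights
    # Q, K, V, W (Eq. 1) and feed-forward weights W1, b1, W2, b2 (Eq. 3).
    h_tilde = np.zeros_like(x)
    for m in range(num_heads):
        Q, K, V, W = params["Q"][m], params["K"][m], params["V"][m], params["W"][m]
        # A_{i,j} ∝ exp((Q x_i)^T (K x_j)), each row normalized to sum to 1.
        logits = (x @ Q.T) @ (x @ K.T).T
        A = np.exp(logits - logits.max(-1, keepdims=True))
        A /= A.sum(-1, keepdims=True)
        h_tilde += (A @ (x @ V.T)) @ W.T          # Eq. 1 (summed over heads)
    h = layer_norm(x + h_tilde)                    # Eq. 2
    x_tilde = gelu(h @ params["W1"].T + params["b1"]) @ params["W2"].T + params["b2"]  # Eq. 3
    return layer_norm(h + x_tilde)                 # Eq. 4
```

Because no positional information enters the layer itself, order must be injected via the sequence position embedding, exactly as the text describes.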
Each log-potential term $\log \phi_i(x|\theta)$ is defined as + +$$
\log \phi_ {i} (x | \theta) = x _ {i} ^ {T} f _ {i} (x _ {\backslash i} | \theta), \tag {6}
$$ + +where $f_{i}(x_{\backslash i}|\theta)$ denotes the final output feature of BERT corresponding to the $i$ -th element for input $x_{\backslash i}$ , where $x_{\backslash i}$ is defined as $x_{\backslash i} = \{x_1, \dots, x_{i-1}, [\text{MASK}], x_{i+1}, \dots, x_N\}$ . The incurred MLM-based loss is + +$$
L _ {\mathrm {MLM}} (\theta) = - E _ {x \sim D, i \sim \{1, \dots , N \}} \log \phi_ {i} (x | \theta), \tag {7}
$$ + +where $x$ is a randomly sampled sentence from the training set $D$ , and $i$ is a randomly sampled location for masking words. + +The second pre-training task, Next Sentence Prediction, focuses on modeling the relationship between two sentences. Two sentences are sampled from the input document, and the model should predict whether the second sentence is the direct successor of the first. In BERT, the sampled two sentences are concatenated into one input sequence, with special elements [CLS] and [SEP] inserted prior to the first and the second sentences, respectively. A Sigmoid classifier is appended on the final output feature corresponding to the [CLS] element to make the prediction. Let $x$ be the input sequence, and let $t \in \{0,1\}$ indicate the relationship between the two sentences. The loss function is defined as + +$$
L _ {\mathrm {NSP}} (\theta) = - E _ {(x, t) \sim D} [ t \log (g \left(x _ {0} ^ {L}\right)) + (1 - t) \log (1 - g \left(x _ {0} ^ {L}\right)) ], \tag {8}
$$ + +where $x_0^L$ is the final output feature of the [CLS] element (at the $L$ -th layer), and $g(x_0^L)$ is the classifier output. + +# 3.2 MODEL ARCHITECTURE + +Figure 1 illustrates the architecture of VL-BERT.
Basically, it modifies the original BERT (Devlin et al., 2018) model by adding new elements to accommodate the visual contents, and a new type of visual feature embedding to the input feature embeddings. Similar to BERT, the backbone is a multi-layer bidirectional Transformer encoder (Vaswani et al., 2017), enabling dependency modeling among all the input elements. Unlike BERT, which processes sentence words only, VL-BERT takes both visual and linguistic elements as input, whose features are defined on regions-of-interest (RoIs) in images and on sub-words from the input sentences, respectively. The RoIs can either be bounding boxes produced by object detectors, or be annotated ones in certain tasks. + +It is worth noting that the input formats vary for different visual-linguistic tasks (e.g., $<\text{Caption}, \text{Image}>$ for image captioning, and $<\text{Question}, \text{Answer}, \text{Image}>$ for VQA (Antol et al., 2015; Johnson et al., 2017; Goyal et al., 2017; Hudson & Manning, 2019) and VCR (Zellers et al., 2019; Gao et al., 2019)). But thanks to the unordered representation nature of Transformer attention (e.g., the + +![](images/770b79170c1f4b7d0929e17651aa242ef4dd2e6758afe5bea42b19c73086d688.jpg) +Figure 1: Architecture for pre-training VL-BERT. All the parameters in this architecture including VL-BERT and Fast R-CNN are jointly trained in both pre-training and fine-tuning phases. + +position of a word in a sentence is encoded by the positional embedding only, rather than by the order in the input sequence), a generic representation can be derived as long as the input elements and embedding features are properly designed. Three types of input elements are involved, namely, visual, linguistic, and special elements for disambiguating different input formats. The input sequence always starts with a special classification element ([CLS]), then goes on with linguistic elements, then follows up with visual elements, and ends with a special ending element ([END]).
A special separation element ([SEP]) is inserted in between different sentences in the linguistic elements, and between the linguistic and visual elements. For each input element, its embedding feature is the summation of four types of embedding, namely, token embedding, visual feature embedding, segment embedding, and sequence position embedding. Among them, the visual feature embedding is newly introduced for capturing visual clues, while the other three embeddings follow the design in the original BERT paper. + +Token Embedding Following the practice in BERT, the linguistic words are embedded with WordPiece embeddings (Wu et al., 2016) with a 30,000-word vocabulary. A special token is assigned to each special element. For the visual elements, a special [IMG] token is assigned to each of them. + +Visual Feature Embedding We first describe the visual appearance feature and the visual geometry embedding separately, and then how to combine them to form the visual feature embedding. + +For the visual element corresponding to an RoI, the visual appearance feature is extracted by applying a Fast R-CNN (Girshick, 2015) detector (i.e., the detection branch in Faster R-CNN (Ren et al., 2015)), where the feature vector prior to the output layer of each RoI is utilized as the visual feature embedding (2048-d in this paper). For the non-visual elements, the corresponding visual appearance features are extracted from the whole input image, obtained by applying Faster R-CNN on an RoI covering the whole input image. + +The visual geometry embedding is designed to inform VL-BERT of the geometric location of each input visual element in the image.
Each RoI is characterized by a 4-d vector, $\left(\frac{x_{\mathrm{LT}}}{W},\frac{y_{\mathrm{LT}}}{H},\frac{x_{\mathrm{RB}}}{W},\frac{y_{\mathrm{RB}}}{H}\right)$ , where $(x_{\mathrm{LT}},y_{\mathrm{LT}})$ and $(x_{\mathrm{RB}},y_{\mathrm{RB}})$ denote the coordinates of the top-left and bottom-right corner respectively, and $W,H$ are the width and height of the input image. Following the practice in Relation Networks (Hu et al., 2018), the 4-d vector is embedded into a high-dimensional representation (2048-d in this paper) by computing sine and cosine functions of different wavelengths. + +The visual feature embedding, attached to each of the input elements, is the output of a fully connected layer taking the concatenation of the visual appearance feature and the visual geometry embedding as input. + +Segment Embedding Three types of segment, $A, B, C$ , are defined to separate input elements from different sources, namely, $A$ and $B$ for the words from the first and second input sentence respectively, and $C$ for the RoIs from the input image. For example, for the input format $<\text{Question}, \text{Answer}, \text{Image}>$ , $A$ denotes Question, $B$ denotes Answer, and $C$ denotes Image. For the input format of $<\text{Caption}, \text{Image}>$ , $A$ denotes Caption, and $C$ denotes Image. A learned segment embedding is added to every input element to indicate which segment it belongs to. + +Sequence Position Embedding A learnable sequence position embedding is added to every input element indicating its order in the input sequence, as in BERT. Because there is no natural order among the input visual elements, any permutation of them in the input sequence should achieve the same result. Thus the sequence position embedding is the same for all visual elements. + +# 3.3 PRE-TRAINING VL-BERT + +The generic feature representation of VL-BERT enables us to pre-train it on massive-scale datasets, with properly designed pre-training tasks. We pre-train VL-BERT on both visual-linguistic and text-only datasets.
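The sinusoidal box embedding of Section 3.2 can be sketched as follows (the per-coordinate channel split and the wavelength base are illustrative assumptions; the paper only specifies sine and cosine functions of different wavelengths yielding a high-dimensional vector):

```python
import numpy as np

def box_geometry_embedding(box, img_w, img_h, dim=2048, wave_base=1000.0):
    # box = (x_lt, y_lt, x_rb, y_rb) in pixels; normalize to the 4-d vector
    # (x_LT/W, y_LT/H, x_RB/W, y_RB/H) described in the text.
    g = np.array([box[0] / img_w, box[1] / img_h, box[2] / img_w, box[3] / img_h])
    d_per_coord = dim // 4                     # each coordinate gets dim/4 channels
    half = d_per_coord // 2                    # half sine, half cosine
    freqs = wave_base ** (np.arange(half) / half)   # geometric wavelength schedule
    emb = []
    for v in g:
        angles = v / freqs
        emb.append(np.concatenate([np.sin(angles), np.cos(angles)]))
    return np.concatenate(emb)                 # (dim,) vector, 2048-d by default
```

The resulting vector is then concatenated with the 2048-d appearance feature and passed through a fully connected layer to produce the visual feature embedding.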
Here we utilize the Conceptual Captions dataset (Sharma et al., 2018) as the visual-linguistic corpus. It contains around 3.3 million images annotated with captions, which are harvested from web data and processed through an automatic pipeline. The issue with the Conceptual Captions dataset is that the captions are mainly simple clauses, which are too short and simple for many downstream tasks. To avoid overfitting to such a short and simple text scenario, we also pre-train VL-BERT on a text-only corpus with long and complex sentences. We utilize the BooksCorpus (Zhu et al., 2015) and the English Wikipedia datasets, which are also utilized in pre-training BERT. + +In SGD training, in each mini-batch, samples are randomly drawn from both Conceptual Captions and BooksCorpus & English Wikipedia (at a ratio of 1:1). For a sample drawn from Conceptual Captions, the input format to VL-BERT is $<\text{Caption}, \text{Image}>$ , where the RoIs in the image are localized and categorized by a pre-trained Faster R-CNN object detector. Two pre-training tasks are exploited to incur the loss, as follows. + +Task #1: Masked Language Modeling with Visual Clues This task is very similar to the Masked Language Modeling (MLM) task utilized in BERT. The key difference is that visual clues are incorporated in VL-BERT for capturing the dependencies among visual and linguistic contents. During pre-training, each word in the input sentence(s) is randomly masked (at a probability of $15\%$ ). For the masked word, its token is replaced with a special [MASK] token. The model is trained to predict the masked words, based on the unmasked words and the visual features. The task drives the network to not only model the dependencies among sentence words, but also to align the visual and linguistic contents. For example, in Figure 1 "kitten drinking from [MASK]", without the input image, the masked word could be any container, such as "bowl", "spoon" and "bottle".
The representation should capture the correspondence between the word "bottle" and the corresponding RoIs in the image to make the right guess. During pre-training, the final output feature corresponding to the masked word is fed into a classifier over the whole vocabulary, driven by a Softmax cross-entropy loss. + +Task #2: Masked RoI Classification with Linguistic Clues This is the dual task of Task #1. Each RoI in the image is randomly masked out (with $15\%$ probability), and the pre-training task is to predict the category label of the masked RoI from the other clues. To avoid any visual clue leakage from the visual feature embeddings of the other elements, the pixels lying in the masked RoI are set to zero before applying Fast R-CNN. During pre-training, the final output feature corresponding to the masked RoI is fed into a classifier with a Softmax cross-entropy loss for object category classification. The category label predicted by the pre-trained Faster R-CNN is set as the ground-truth. An example is shown in Figure 1. The RoI corresponding to the cat in the image is masked out, so the corresponding category cannot be predicted from any visual clues. But with the input caption "kitten drinking from bottle", the model can infer the category by exploiting the linguistic clues. + +For a sample drawn from the BooksCorpus & English Wikipedia datasets, the input format to VL-BERT degenerates to $\langle \text{Text}, \varnothing \rangle$ , where no visual information is involved. The "visual feature embedding" term in Figure 1 is a learnable embedding shared by all words. The training loss is from the standard task of Masked Language Modeling (MLM) as in BERT. + +In summary, the pre-training on the visual-linguistic corpus improves the detailed alignment between visual and linguistic contents.
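The raw-pixel masking of Task #2 can be sketched as follows (`mask_rois` is a hypothetical helper for illustration only; in the real pipeline the masked image is then fed to Fast R-CNN so that no masked content reaches the convolutional features):

```python
import numpy as np

def mask_rois(image, rois, mask_prob=0.15, rng=None):
    # image: (H, W, 3) array; rois: list of (x1, y1, x2, y2) integer boxes.
    # Pixels inside each selected RoI are zeroed on the *raw input*, not on
    # the feature maps, so convolution cannot leak the masked visual clue.
    if rng is None:
        rng = np.random.default_rng()
    image = image.copy()
    masked = []
    for i, (x1, y1, x2, y2) in enumerate(rois):
        if rng.random() < mask_prob:
            image[y1:y2, x1:x2, :] = 0
            masked.append(i)
    return image, masked
```

The indices in `masked` identify the RoIs whose categories the model must then predict from the caption and the remaining visual context.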
Such detailed alignment is vital for many downstream tasks (for example, in Visual Grounding (Kazemzadeh et al., 2014), the model locates the most relevant object or region in an image based on a natural language query). The pre-training on the text-only corpus, in turn, facilitates downstream tasks involving the understanding of long and complex sentences. + +# 3.4 FINE-TUNING VL-BERT + +VL-BERT is designed to be a generic feature representation for various visual-linguistic tasks, and it is relatively simple to finetune it for downstream tasks. We simply need to feed VL-BERT with properly formatted input and output, and finetune all the network parameters end-to-end. For the input, the typical formats of $<\text{Caption}, \text{Image}>$ and $<\text{Question}, \text{Answer}, \text{Image}>$ cover the majority of visual-linguistic tasks. VL-BERT also supports more sentences and more images, as long as appropriate segment embeddings are introduced to identify the different input sources. At the output, typically, the final output feature of the [CLS] element is used for sentence-image-relation level prediction. The final output features of words or RoIs are used for word-level or RoI-level prediction. In addition to the input and output format, task-specific loss functions and training strategies also need to be tuned. See Section 4.2 for the detailed design choices and settings. + +# 4 EXPERIMENT + +# 4.1 PRE-TRAINING + +As described in Section 3.3, we pre-train VL-BERT jointly on Conceptual Captions (Sharma et al., 2018) as the visual-linguistic corpus, and BooksCorpus (Zhu et al., 2015) & English Wikipedia as the text-only corpus. As VL-BERT is developed by adding new inputs capturing visual information to the original BERT model, we initialize its parameters to be the same as the original BERT described in (Devlin et al., 2018). VL-BERTBASE and VL-BERTLARGE denote models developed from the original BERTBASE and BERTLARGE models, respectively.
The newly added parameters in VL-BERT are randomly initialized from a Gaussian distribution with mean of 0 and standard deviation of 0.02. Visual content embedding is produced by Faster R-CNN + ResNet-101, initialized from parameters pre-trained on Visual Genome (Krishna et al., 2017) for object detection (see BUTD (Anderson et al., 2018)). + +Prior to pre-training on Conceptual Captions, the pre-trained Faster R-CNN is applied to extract RoIs. Specifically, at most 100 RoIs with detection scores higher than 0.5 are selected for each image. At minimum, 10 RoIs are selected from one image, regardless of the detection score threshold. The detailed parameter settings are in Appendix. + +# 4.2 FINE-TUNING ON DOWNSTREAM TASKS + +The pre-trained VL-BERT model can be fine-tuned for various downstream visual-linguistic tasks, with simple modifications on the input format, output prediction, loss function and training strategy. + +# 4.2.1 VISUAL COMMONSENSE REASONING (VCR) + +
| Model | Q → A (val) | Q → A (test) | QA → R (val) | QA → R (test) | Q → AR (val) | Q → AR (test) |
| --- | --- | --- | --- | --- | --- | --- |
| R2C (Zellers et al., 2019) | 63.8 | 65.1 | 67.2 | 67.3 | 43.1 | 44.0 |
| ViLBERT (Lu et al., 2019)† | 72.4 | 73.3 | 74.5 | 74.6 | 54.0 | 54.8 |
| VisualBERT (Li et al., 2019b)† | 70.8 | 71.6 | 73.2 | 73.2 | 52.2 | 52.4 |
| B2T2 (Alberti et al., 2019)† | 71.9 | 72.6 | 76.0 | 75.7 | 54.9 | 55.0 |
| VL-BERTBASE w/o pre-training | 73.1 | - | 73.8 | - | 54.2 | - |
| VL-BERTBASE | 73.8 | - | 74.4 | - | 55.2 | - |
| VL-BERTLARGE | 75.5 | 75.8 | 77.9 | 78.4 | 58.9 | 59.7 |
Table 1: Comparison to the state-of-the-art methods with single model on the VCR dataset. $\dagger$ indicates concurrent works.

Visual Commonsense Reasoning (VCR) focuses on higher-order cognitive and commonsense understanding of the given image. In the dataset of Zellers et al. (2019), given an image and a list of categorized RoIs, a question at the cognition level is raised. The model should pick the right answer to the question and provide a rationale explaining it. For each question, there are 4 candidate answers and 4 candidate rationales. This holistic task $(\mathrm{Q} \rightarrow \mathrm{AR})$ is decomposed into two sub-tasks on which specific individual models can be trained: question answering $(\mathrm{Q} \rightarrow \mathrm{A})$ and answer justification $(\mathrm{QA} \rightarrow \mathrm{R})$. The released VCR dataset consists of 265k pairs of questions, answers, and rationales over 100k unique movie scenes (100k images). They are split into training, validation, and test sets consisting of 213k questions and 80k images, 27k questions and 10k images, and 25k questions and 10k images, respectively.

![](images/b141ba263ed2bf6ffa8b9fd522bceb6606d772df3bdc00d7073ffd2488218f15.jpg)
(a) Input and output format for Visual Commonsense Reasoning (VCR) dataset

![](images/4628b32ed97212a49b25c9b7f889a597d1f27c49206f89c3f8cbe226c719f3c4.jpg)
(b) Input and output format for Visual Question Answering (VQA) dataset

![](images/a8f0ea1b32550faecc856823710a866da74d3647b914b3b563cd269cc745e708.jpg)
(c) Input and output format for Referring Expression task on RefCOCO+ dataset
Figure 2: Input and output formats for fine-tuning different visual-linguistic downstream tasks.

Our experimental protocol for VCR follows that in R2C (Zellers et al., 2019). The model is trained on the train split and evaluated on the val and test splits. The original R2C work designs task-specific "Grounding", "Contextualization", and "Reasoning" modules.
Here we simply adopt the generic representation of VL-BERT for the task. Figure 2 (a) illustrates the input format, ⟨Question, Answer, Image⟩. For the sub-task of $\mathrm{Q} \rightarrow \mathrm{A}$, 'Q' and 'A' fill the Question and Answer sections, respectively. For the sub-task of $\mathrm{QA} \rightarrow \mathrm{R}$, the concatenation of 'Q' and 'A' fills the Question section, and 'R' fills the Answer section. The input RoIs to VL-BERT are the ground-truth annotations in the dataset. The final output feature of the [CLS] element is fed to a Softmax classifier for predicting whether the given Answer is the correct choice. During fine-tuning, we adopt two losses: classification over the correctness of the answers, and RoI classification with linguistic clues. The detailed parameter settings are in the Appendix.

Table 1 presents the experimental results. Pre-training VL-BERT improves performance by $1.0\%$ on the final $Q \rightarrow AR$ task, which validates the effectiveness of pre-training. Compared with R2C, we do not use ad-hoc task-specific modules; instead, we simply adopt the generic representation of VL-BERT and jointly train the whole model end-to-end. With the same input, output, and experimental protocol as R2C, VL-BERT outperforms R2C by large margins, indicating the power of our simple cross-modal architecture. Compared with the concurrent works, i.e., ViLBERT, VisualBERT, and B2T2, our VL-BERT achieves state-of-the-art performance.

# 4.2.2 VISUAL QUESTION ANSWERING (VQA)

In the VQA task, given a natural image, a question at the perceptual level is asked, and the algorithm should generate or choose the correct answer. We conduct experiments on the widely used VQA v2.0 dataset (Goyal et al., 2017), which is built on COCO (Lin et al., 2014) images. The VQA v2.0 dataset is split into train (83k images and 444k questions), validation (41k images and 214k questions), and test (81k images and 448k questions) sets. Following the experimental protocol in BUTD (Anderson et al., 2018), for each question the algorithm picks the corresponding answer from a shared set of 3,129 answers.

Figure 2 (b) illustrates the input format for the VQA task, which is of ⟨Question, Answer, Image⟩. As the possible answers come from a shared pool independent of the question, we fill only a [MASK] element into the Answer section. As in BUTD (Anderson et al., 2018), the input RoIs to VL-BERT are generated by a Faster R-CNN detector pre-trained on Visual Genome (Krishna et al., 2017). The answer prediction is made by a multi-class classifier on the output feature of the [MASK] element. During fine-tuning, training is driven by the multi-class cross-entropy loss over the possible answers. The detailed parameter settings are in the Appendix.

Table 2 presents our experimental results. Pre-training VL-BERT improves performance by $1.6\%$, which validates the importance of pre-training. VL-BERT shares the same input (i.e., question, image, and RoIs), output, and experimental protocol with BUTD, a prevalent model specifically designed for the task; still, VL-BERT surpasses BUTD by over $5\%$ in accuracy. Our VL-BERT outperforms all the concurrent works except LXMERT, which is pre-trained on massive visual question answering data (aggregating almost all the VQA datasets built on COCO and Visual Genome), whereas our model is pre-trained only on captioning and text-only corpora, leaving a domain gap to the VQA task.

| Model | test-dev | test-standard |
| --- | --- | --- |
| BUTD (Anderson et al., 2018) | 65.32 | 65.67 |
| ViLBERT (Lu et al., 2019)† | 70.55 | 70.92 |
| VisualBERT (Li et al., 2019b)† | 70.80 | 71.00 |
| LXMERT (Tan & Bansal, 2019)† | 72.42 | 72.54 |
| VL-BERTBASE w/o pre-training | 69.58 | - |
| VL-BERTBASE | 71.16 | - |
| VL-BERTLARGE | 71.79 | 72.22 |

Table 2: Comparison to the state-of-the-art methods with single model on the VQA dataset. $\dagger$ indicates concurrent works.

# 4.2.3 REFERRING EXPRESSION COMPREHENSION
| Model | val (ground-truth regions) | testA (ground-truth regions) | testB (ground-truth regions) | val (detected regions) | testA (detected regions) | testB (detected regions) |
| --- | --- | --- | --- | --- | --- | --- |
| MAttNet (Yu et al., 2018) | 71.01 | 75.13 | 66.17 | 65.33 | 71.62 | 56.02 |
| ViLBERT (Lu et al., 2019)† | - | - | - | 72.34 | 78.52 | 62.61 |
| VL-BERTBASE w/o pre-training | 74.41 | 77.28 | 67.52 | 66.03 | 71.87 | 56.13 |
| VL-BERTBASE | 79.88 | 82.40 | 75.01 | 71.60 | 77.72 | 60.99 |
| VL-BERTLARGE | 80.31 | 83.62 | 75.45 | 72.59 | 78.57 | 62.30 |
Table 3: Comparison to the state-of-the-art methods with single model on the RefCOCO+ dataset. $\dagger$ indicates concurrent work.

A referring expression is a natural language phrase that refers to an object in an image. The referring expression comprehension task is to localize the object in an image given the referring expression. We adopt the RefCOCO+ (Kazemzadeh et al., 2014) dataset for evaluation, consisting of 141k expressions for 50k referred objects in 20k images of the COCO dataset (Lin et al., 2014). The referring expressions in RefCOCO+ are forbidden from using absolute location words, e.g., "left dog"; the expressions therefore focus on purely appearance-based descriptions. RefCOCO+ is split into four sets: a training set (train), a validation set (val), and two test sets (testA and testB). Images containing multiple people are in the testA set, while images containing multiple objects of other categories are in the testB set. There is no overlap between the training, validation, and test images.

Figure 2 (c) illustrates the input format for referring expression comprehension, which is of ⟨Query, Image⟩. Model training and evaluation are conducted either on the ground-truth RoIs or on the boxes detected in MAttNet (Yu et al., 2018), and the results are reported in the track of ground-truth regions or that of detected regions, respectively. During training, we
VL-BERT achieves comparable performance with the concurrent work of ViLBERT. + +# 4.3 ABLATION STUDY + +
| Settings | Masked Language Modeling with Visual Clues | Masked RoI Classification with Linguistic Clues | Sentence-Image Relationship Prediction | with Text-only Corpus | Tuning Fast R-CNN | VCR Q→A (val) | VCR QA→R (val) | VQA (test-dev) | RefCOCO+ (val, detected regions) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| w/o pre-training |  |  |  |  |  | 72.9 | 73.0 | 69.5 | 62.7 |
| (a) | ✓ |  |  |  |  | 72.9 | 73.1 | 71.0 | 69.1 |
| (b) | ✓ | ✓ |  |  |  | 73.0 | 73.1 | 71.1 | 70.7 |
| (c) | ✓ | ✓ | ✓ |  |  | 72.2 | 72.4 | 70.3 | 69.5 |
| (d) | ✓ | ✓ |  | ✓ |  | 73.4 | 73.8 | 71.1 | 70.7 |
| VL-BERTBASE | ✓ | ✓ |  | ✓ | ✓ | 73.8 | 73.9 | 71.2 | 71.1 |
Table 4: Ablation study for VL-BERTBASE with $0.5 \times$ fine-tuning epochs.

Table 4 ablates key design choices in pre-training VL-BERT. For experimental efficiency, fine-tuning is run for $0.5 \times$ the epochs of Section 4.2, with the VL-BERTBASE model only.

Overall, pre-training VL-BERT improves performance on all three downstream tasks (compare the "w/o pre-training" setting with VL-BERTBASE), with the magnitude of improvement varying by task. Comparing setting (a) with "w/o pre-training" shows the benefit of Task #1, Masked Language Modeling with Visual Clues. Further incorporating Task #2, Masked RoI Classification with Linguistic Clues, improves accuracy on RefCOCO+ but barely changes VCR and VQA. This might be because only RefCOCO+ uses the final output features of the [IMG] tokens for prediction, so only there does pre-training such features pay off directly. Setting (c) incorporates the Sentence-Image Relationship Prediction task used in ViLBERT (Lu et al., 2019) and LXMERT (Tan & Bansal, 2019); it hurts accuracy on all three downstream tasks. We conjecture that this is because Sentence-Image Relationship Prediction introduces unmatched image and caption pairs as negative examples, and such unmatched samples hamper the training of the other tasks. Setting (d) adds the text-only corpus during pre-training. Compared with setting (b), it improves performance on all three downstream tasks, most significantly on VCR, since VCR involves more complex and longer sentences than VQA and RefCOCO+. Further fine-tuning the network parameters of Fast R-CNN, which generates the visual features, yields the final VL-BERTBASE setting. Such end-to-end training of the entire network is helpful for all the downstream tasks.
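To make the comparison above concrete, the gains of the fully pre-trained VL-BERTBASE over the no-pre-training baseline can be computed directly from the numbers reported in Table 4. The short Python sketch below simply restates those reported values (no new results):

```python
# Scores transcribed from Table 4 (VL-BERTBASE ablation, 0.5x fine-tuning epochs).
scores = {
    "w/o pre-training": {"VCR Q->A": 72.9, "VCR QA->R": 73.0, "VQA": 69.5, "RefCOCO+": 62.7},
    "full (VL-BERTBASE)": {"VCR Q->A": 73.8, "VCR QA->R": 73.9, "VQA": 71.2, "RefCOCO+": 71.1},
}

def gains(base, full):
    """Absolute improvement of the fully pre-trained model on each task."""
    return {task: round(full[task] - base[task], 1) for task in base}

deltas = gains(scores["w/o pre-training"], scores["full (VL-BERTBASE)"])
# The largest gain is on RefCOCO+, consistent with the discussion above:
# only RefCOCO+ consumes the [IMG] token features that pre-training targets.
assert max(deltas, key=deltas.get) == "RefCOCO+"
print(deltas)
```

The same pattern holds against each intermediate setting; RefCOCO+ benefits the most from the full pre-training recipe.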
# 5 CONCLUSION

In this paper, we developed VL-BERT, a new pre-trainable generic representation for visual-linguistic tasks. Instead of using ad-hoc task-specific modules, VL-BERT adopts the simple yet powerful Transformer model as the backbone. It is pre-trained on the massive-scale Conceptual Captions dataset together with text-only corpora. Extensive empirical analysis demonstrates that the pre-training procedure better aligns visual-linguistic clues and thus benefits the downstream tasks. In the future, we would like to seek better pre-training tasks that could benefit more downstream tasks (e.g., image caption generation).

# ACKNOWLEDGMENTS

The work is partially supported by the National Natural Science Foundation of China under grants No. U19B2044 and No. 61836011.

# REFERENCES

Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. Fusion of detected objects in text for visual question answering. arXiv preprint arXiv:1908.05054, 2019.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6077-6086, 2018.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pp. 2425-2433, 2015.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International conference on machine learning, pp. 647-655, 2014.
Difei Gao, Ruiping Wang, Shiguang Shan, and Xilin Chen. From two graphs to n questions: A vqa dataset for compositional reasoning on vision and commonsense. arXiv preprint arXiv:1908.02962, 2019.
Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440-1448, 2015.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580-587, 2014.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6904-6913, 2017.
Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Simultaneous detection and segmentation. In European Conference on Computer Vision, pp. 297-312. Springer, 2014.
Kaiming He, Ross Girshick, and Piotr Dollár. Rethinking imagenet pre-training. arXiv preprint arXiv:1811.08883, 2018.
Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3588-3597, 2018.
Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6700-6709, 2019.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2901-2910, 2017.
Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 787-798, 2014.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in neural information processing systems, pp. 3294-3302, 2015.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73, 2017.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training, 2019a.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019b.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431-3440, 2015.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265, 2019.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91-99, 2015.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556-2565, 2018. + +Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743, 2019a. +Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. arXiv preprint arXiv:1904.01766, 2019b. +Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017. +Jesse Vig. A multiscale visualization of attention in the transformer model. arXiv preprint arXiv:1906.05714, 2019. URL https://arxiv.org/abs/1906.05714. +Alex Wang and Kyunghyun Cho. Bert has a mouth, and it must speak: Bert as a markov random field language model. arXiv preprint arXiv:1902.04094, 2019. +Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019. +Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78, 2014. 
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. Mattnet: Modular attention network for referring expression comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1307-1315, 2018.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6720-6731, 2019.
Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4995-5004, 2016.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19-27, 2015.

# A APPENDIX

# A.1 COMPARISON AMONG VL-BERT AND OTHER WORKS

Table 5 compares VL-BERT with other concurrent works seeking to pre-train generic visual-linguistic representations.
| Method | Architecture | Visual Token | Pre-train Datasets | Pre-train Tasks | Downstream Tasks |
| --- | --- | --- | --- | --- | --- |
| *Published works* |  |  |  |  |  |
| VideoBERT (Sun et al., 2019b) | single cross-modal Transformer | video frame | Cooking312K (Sun et al., 2019b) | 1) sentence-image alignment; 2) masked language modeling; 3) masked visual-words prediction | 1) zero-shot action classification; 2) video captioning |
| *Works under review / just accepted* |  |  |  |  |  |
| CBT (Sun et al., 2019a) | two single-modal Transformers (vision & language respectively) + one cross-modal Transformer | video frame | Cooking312K (Sun et al., 2019b) | 1) sentence-image alignment; 2) masked language modeling; 3) masked visual-feature regression | 1) action anticipation; 2) video captioning |
| ViLBERT (Lu et al., 2019) | one single-modal Transformer (language) + one cross-modal Transformer (with restricted attention pattern) | image RoI | Conceptual Captions (Sharma et al., 2018) | 1) sentence-image alignment; 2) masked language modeling; 3) masked visual-feature classification | 1) visual question answering; 2) visual commonsense reasoning; 3) grounding referring expressions; 4) image retrieval; 5) zero-shot image retrieval |
| B2T2 (Alberti et al., 2019) | single cross-modal Transformer | image RoI | Conceptual Captions (Sharma et al., 2018) | 1) sentence-image alignment; 2) masked language modeling | 1) visual commonsense reasoning |
| LXMERT (Tan & Bansal, 2019) | two single-modal Transformers (vision & language respectively) + one cross-modal Transformer | image RoI | COCO Caption + VG Caption + VG QA + VQA + GQA‡ | 1) sentence-image alignment; 2) masked language modeling; 3) masked visual-feature classification; 4) masked visual-feature regression; 5) visual question answering | 1) visual question answering; 2) natural language visual reasoning |
| VisualBERT (Li et al., 2019b) | single cross-modal Transformer | image RoI | COCO Caption (Chen et al., 2015) | 1) sentence-image alignment; 2) masked language modeling | 1) visual question answering; 2) visual commonsense reasoning; 3) natural language visual reasoning; 4) grounding phrases |
| Unicoder-VL (Li et al., 2019a) | single cross-modal Transformer | image RoI | Conceptual Captions (Sharma et al., 2018) | 1) sentence-image alignment; 2) masked language modeling; 3) masked visual-feature classification | 1) image-text retrieval; 2) zero-shot image-text retrieval |
| Our VL-BERT | single cross-modal Transformer | image RoI | Conceptual Captions (Sharma et al., 2018) + BooksCorpus (Zhu et al., 2015) + English Wikipedia | 1) masked language modeling; 2) masked visual-feature classification | 1) visual question answering; 2) visual commonsense reasoning; 3) grounding referring expressions |
$\ddagger$ LXMERT is pre-trained on COCO Caption (Chen et al., 2015), VG Caption (Krishna et al., 2017), VG QA (Zhu et al., 2016), VQA (Antol et al., 2015), and GQA (Hudson & Manning, 2019).

Table 5: Comparison among our VL-BERT and other works seeking to derive pre-trainable generic representations for visual-linguistic tasks.

# A.2 DETAILED EXPERIMENT SETTINGS

Pre-training is conducted on 16 Tesla V100 GPUs for 250k iterations. Each mini-batch contains 256 samples: 128 ⟨Caption, Image⟩ pairs from Conceptual Captions, and 128 sequences of tokens (at most 64 tokens per sequence) from BooksCorpus & English Wikipedia. Optimization uses Adam (Kingma & Ba, 2014) with a base learning rate of $2 \times 10^{-5}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, weight decay of $10^{-4}$, learning rate warm-up over the first 8,000 steps, and linear decay of the learning rate afterwards. All the parameters in VL-BERT and Fast R-CNN are jointly trained in both the pre-training and fine-tuning phases. The visual feature input for the text-only corpus is a learnable embedding shared by all words. In the task of Masked RoI Classification with Linguistic Clues, the pixels lying in any masked RoI are set to zero in the image. A box covering the whole image is added as a RoI and is never masked.

For VCR, fine-tuning is conducted on 16 Tesla V100 GPUs for 20 epochs. In each mini-batch, 256 ⟨Question, Answer, Image⟩ triplets are sampled. Plain mini-batch SGD is used, with a base learning rate of $5 \times 10^{-3}$, momentum of 0.9, and weight decay of $10^{-4}$. The learning rate is linearly warmed up over the first 1,000 steps from an initial value of 0, and is decayed by 0.1 at the 14th and 18th epochs.

For VQA, fine-tuning is conducted on 16 Tesla V100 GPUs for 20 epochs. In each mini-batch, 256 ⟨Question, Answer, Image⟩ triplets are sampled.
Adam is applied with a base learning rate of $1 \times 10^{-4}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, weight decay of $10^{-4}$, learning rate warm-up over the first 2,000 steps, and linear decay of the learning rate afterwards.

For RefCOCO+, fine-tuning is conducted on 16 Tesla V100 GPUs for 20 epochs. In each mini-batch, 256 ⟨Query, Image⟩ pairs are sampled. Adam is applied with a base learning rate of $1 \times 10^{-4}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, weight decay of $10^{-4}$, learning rate warm-up over the first 500 steps, and linear decay of the learning rate afterwards.

![](images/6397b005f3f1fdee87527773a8d0c95ee6dd6a17351114fb57ffb9f47498b2c5.jpg)

![](images/ec435d272614dac27bde4e72b2cdbb11febc08f276253bf36186a79076bdbf67.jpg)
(a)

![](images/8444102e4aec48b3316687674f8ccf5ef5f9d0e80edbb9fe6baa385e098644aa.jpg)

![](images/7dc9a40c910ec7b98ccadef31c8283825e548930b5ed466d22e5a02a9a7250f6.jpg)
(b)

![](images/8fa50e0d9566270cbeb7c08eb4117a5ae641cc0f83d4f5d409fdedc302c70481.jpg)

![](images/3e7dc2cbfea86436a8f2d7eb5ff4919f63b2d26bfc2fc7db4a11d940bf97023d.jpg)
(c)

![](images/3b9fef13cc5e7ac69327265d965934c66a67ec9c4114db9308b454b621266744.jpg)

![](images/585a1973c280dd115e1ae314296c178d5bf11956f17dec5bb3c32476d6f516f9.jpg)
(d)

![](images/6046e77b0c89f7d5b4879d82b70203914923d9d3f1f88c70667610eb07d72adc.jpg)

![](images/864d8eb2e960807a83ee9b317a303ac33dbd71ede54fa7016813ba274dd74786.jpg)
(e)

![](images/8d65f51f826a3407eacd8ee12c450b3ae7412dbdfe1326d88464deeb5dfe8528.jpg)

![](images/c5999c21d9ed70c6e1d036dfaa48a857cfab8fbec35e78c47a1ea4b23d2f5ca7.jpg)
(f)
Figure 3: Visualization of attention maps in pre-trained VL-BERTBASE. Line intensity indicates the magnitude of attention probability with the text token as query and the image RoI as key. The intensity is affinely rescaled to set the maximum value as 1 and the minimum as 0, across different heads in each layer.
The indices of network layers and attention heads are counted from 0.

# A.3 VISUALIZATION OF ATTENTION MAPS IN VL-BERT

To better understand what VL-BERT learns from pre-training, we visualize the attention maps of pre-trained VL-BERT (without fine-tuning on downstream tasks) using BertViz (Vig, 2019).

Some visualization results on the COCO (Lin et al., 2014; Chen et al., 2015) val2017 set are shown in Figure 3. The attention patterns vary across attention heads: in some heads, text tokens attend more to the associated image RoIs, while in others, text tokens attend uniformly to all RoIs. This demonstrates the ability of VL-BERT to aggregate and align visual-linguistic content.
\ No newline at end of file
diff --git a/vlbertpretrainingofgenericvisuallinguisticrepresentations/images.zip b/vlbertpretrainingofgenericvisuallinguisticrepresentations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..01ea4dd284aec7e6cc7e4031000b82f5381d5d97 --- /dev/null +++ b/vlbertpretrainingofgenericvisuallinguisticrepresentations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14bbd661408c636364dcd3d8d4cd0829a35b4a22ef998aff611da30bdacedd96 +size 673181 diff --git a/vlbertpretrainingofgenericvisuallinguisticrepresentations/layout.json b/vlbertpretrainingofgenericvisuallinguisticrepresentations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e887bb48ef7c6a0034d927bc9523b2cb0bcf8139 --- /dev/null +++ b/vlbertpretrainingofgenericvisuallinguisticrepresentations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a7f2d071a457fd214f3246c0c6f3d2630779eddfba1bf0d683b2a5ea18ef865 +size 439553 diff --git a/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_content_list.json
b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a85464e835065a63312be647cb41349d8607c04f --- /dev/null +++ b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19038111a5242518088fa2402725732e989463565e8e9c1eb0e0f5355abd5c80 +size 129947 diff --git a/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_model.json b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b8508ccb5dc0cf796ad0b40d26f19e5f9d25c7ab --- /dev/null +++ b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e9299cd73c77c0f6d45020db3686d5326b54a8c2e15801be20160bc4ae17727 +size 152686 diff --git a/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_origin.pdf b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..25d702955a252116c3e31d8f314d654fbe53b3f9 --- /dev/null +++ b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/793e5379-7737-48aa-a962-e1bf43b1521b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:990c308d0b895c601dbb46b42735e7433805cd0f67cd8ce7bc54f81270f784f2 +size 13929655 diff --git 
a/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/full.md b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2efad77c66cb98026ebf006d9370e6b9a4c9738f --- /dev/null +++ b/vmpoonpolicymaximumaposterioripolicyoptimizationfordiscreteandcontinuouscontrol/full.md @@ -0,0 +1,513 @@ +# V-MPO: ON-POLICY MAXIMUM A POSTERIORI POLICY OPTIMIZATION FOR DISCRETE AND CONTINUOUS CONTROL + +H. Francis Song,\* Abbas Abdelmaleki;\* Jost Tobias Springenberg, Aidan Clark, Hubert Soyer, Jack W. Rae, Seb Noury, Arun Ahuja, Siqi Liu, Dhruva Tirumala, Nicolas Heess, Dan Belov, Martin Riedmiller, Matthew M. Botvinick DeepMind, London, UK + +{songf, aabdolmaleki, springenberg, aidanclark, soyer, jwrae, snoury, arahuja, liusiqi, dhruvat, heess, danbelow, riedmiller, botvinick}@google.com + +# ABSTRACT + +Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. 
V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported. + +# 1 INTRODUCTION + +Deep reinforcement learning (RL) with neural network function approximators has achieved superhuman performance in several challenging domains (Mnih et al., 2015; Silver et al., 2016; 2018). Some of the most successful recent applications of deep RL to difficult environments such as Dota 2 (OpenAI, 2018a), Capture the Flag (Jaderberg et al., 2019), Starcraft II (Vinyals et al., 2019), and dexterous object manipulation (OpenAI, 2018b) have used policy gradient-based methods such as Proximal Policy Optimization (PPO) (Schulman et al., 2017) and the Importance-Weighted Actor-Learner Architecture (IMPALA) (Espeholt et al., 2018), both in the approximately on-policy setting. + +Policy gradients, however, can suffer from large variance that may limit performance, especially for high-dimensional action spaces (Wu et al., 2018). In practice, moreover, policy gradient methods typically employ carefully tuned entropy regularization in order to prevent policy collapse. As an alternative to policy gradient-based algorithms, in this work we introduce an approximate policy iteration algorithm that adapts Maximum a Posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018a;b) to the on-policy setting. The modified algorithm, V-MPO, relies on a learned state-value function $V(s)$ instead of the state-action value function used in MPO. 
Like MPO, rather than directly updating the parameters in the direction of the policy gradient, V-MPO first constructs a target distribution for the policy update subject to a sample-based KL constraint, then calculates the gradient that partially moves the parameters toward that target, again subject to a KL constraint. + +As we are particularly interested in scalable RL algorithms that can be applied to multi-task settings where a single agent must perform a wide variety of tasks, we show for the case of discrete actions that the proposed algorithm surpasses previously reported performance in the multi-task setting for both the Atari-57 (Bellemare et al., 2012) and DMLab-30 (Beattie et al., 2016) benchmark suites, and does so reliably without population-based tuning of hyperparameters (Jaderberg et al., 2017a). For a few individual levels in DMLab and Atari we also show that V-MPO can achieve scores that are substantially higher than has previously been reported in the single-task setting, especially in the challenging Ms. Pacman. + +V-MPO is also applicable to problems with high-dimensional, continuous action spaces. We demonstrate this in the context of learning to control both a 22-dimensional simulated humanoid from full state observations—where V-MPO reliably achieves higher asymptotic performance than previous algorithms—and a 56-dimensional simulated humanoid from pixel observations (Tassa et al., 2018; Merel et al., 2019). In addition, for several OpenAI Gym tasks (Brockman et al., 2016) we show that V-MPO achieves higher asymptotic performance than has previously been reported. 
# 2 BACKGROUND AND SETTING

We consider the discounted RL setting, where we seek to optimize a policy $\pi$ for a Markov Decision Process described by states $s$, actions $a$, initial state distribution $\rho_0^{\mathrm{env}}(s_0)$, transition probabilities $\mathcal{P}^{\mathrm{env}}(s_{t + 1}|s_t,a_t)$, reward function $r(s_{t},a_{t})$, and discount factor $\gamma \in (0,1)$. In deep RL, the policy $\pi_{\theta}(a_{t}|s_{t})$, which specifies the probability that the agent takes action $a_{t}$ in state $s_t$ at time $t$, is described by a neural network with parameters $\theta$. We consider problems where both the states $s$ and actions $a$ may be discrete or continuous. Two functions play a central role in RL: the state-value function $V^{\pi}(s_t) = \mathbb{E}_{a_t,s_{t + 1},a_{t + 1},\ldots}\big[\sum_{k = 0}^{\infty}\gamma^k r(s_{t + k},a_{t + k})\big]$ and the state-action value function $Q^{\pi}(s_t,a_t) = \mathbb{E}_{s_{t + 1},a_{t + 1},\ldots}\big[\sum_{k = 0}^{\infty}\gamma^k r(s_{t + k},a_{t + k})\big] = r(s_t,a_t) + \gamma \mathbb{E}_{s_{t + 1}}\big[V^{\pi}(s_{t + 1})\big]$, where $s_0\sim \rho_0^{\mathrm{env}}(s_0)$, $a_{t}\sim \pi (a_{t}|s_{t})$, and $s_{t + 1}\sim \mathcal{P}^{\mathrm{env}}(s_{t + 1}|s_{t},a_{t})$.

In the usual formulation of the RL problem, the goal is to find a policy $\pi$ that maximizes the expected return $J(\pi) = \mathbb{E}_{s_0,a_0,s_1,a_1,\ldots}\left[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t)\right]$. In policy gradient algorithms (Williams, 1992; Sutton et al., 2000; Mnih et al., 2016), for example, this objective is directly optimized by estimating the gradient of the expected return. An alternative approach to finding optimal policies derives from research that treats RL as a problem in probabilistic inference, including Maximum a Posteriori Policy Optimization (MPO) (Levine, 2018; Abdolmaleki et al., 2018a;b).
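As a toy illustration of the return definitions above, the following minimal sketch (with a hypothetical reward sequence, not from the paper) computes the discounted sum $\sum_{k}\gamma^k r_k$ for a finite trajectory:

```python
# Sketch: discounted return of a finite reward sequence (illustrative only).
def discounted_return(rewards, gamma=0.99):
    """Computes sum_k gamma^k * rewards[k], accumulated right-to-left."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# With gamma = 0.5: 1.0 + 0.5 * 0.0 + 0.25 * 2.0 = 1.5
print(discounted_return([1.0, 0.0, 2.0], gamma=0.5))  # 1.5
```

The right-to-left accumulation avoids explicitly forming the powers of $\gamma$ and mirrors the recursion $G_t = r_t + \gamma G_{t+1}$ implicit in the definitions of $V^{\pi}$ and $Q^{\pi}$.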
Here our objective is subtly different: given a suitable criterion for which actions are good to take in a given state, how do we find a policy that achieves this goal?

As was the case for the original MPO algorithm, the following derivation is valid for any such criterion. However, the policy improvement theorem (Sutton & Barto, 1998) tells us that a policy update performed by exact policy iteration, $\pi(s) = \arg \max_{a} [Q^{\pi}(s, a) - V^{\pi}(s)]$, can improve the policy if there is at least one state-action pair with a positive advantage and nonzero probability of visiting the state. Motivated by this classic result, in this work we specifically choose an exponential function of the advantages $A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)$.

Notation. In the following we use $\sum_{s,a}$ to indicate both discrete and continuous sums (i.e., integrals) over states $s$ and actions $a$ depending on the setting. A sum with indices only, such as $\sum_{s,a}$, denotes a sum over all possible states and actions, while $\sum_{s,a\sim \mathcal{D}}$, for example, denotes a sum over sample states and actions from a batch of trajectories (the "dataset") $\mathcal{D}$.

# 3 RELATED WORK

V-MPO shares many similarities, and thus relevant related work, with the original MPO algorithm (Abdolmaleki et al., 2018a;b). In particular, the general idea of using KL constraints to limit the size of policy updates is present in both Trust Region Policy Optimization (TRPO; Schulman et al., 2015) and Proximal Policy Optimization (PPO; Schulman et al., 2017); we note, however, that this corresponds to the M-step constraint in V-MPO, which likewise directly limits the change in the parametric policy.

It is worth noting here the following main differences from MPO, which is conceptually quite similar to V-MPO.
![](images/51ef6b82c9177eb8b2b65f34f81d727894b50785f943f10997d99a9c18cc83dc.jpg)
(a)

![](images/bed449ef3e13b7e1c3d94ae903cba34474ee0e01560f0c2a806f7e39f17773f2.jpg)
(b)
Figure 1: (a) Actor-learner architecture with a target network, which is used to generate agent experience in the environment and is updated every $T_{\text{target}}$ learning steps from the online network. (b) Schematic of the agents, with the policy $(\theta)$ and value $(\phi)$ networks sharing most of their parameters through a shared input encoder and LSTM [or Transformer-XL (TrXL) for single Atari levels]. The agent also receives the action and reward from the previous step as an input to the LSTM. For DMLab an additional LSTM is used to process simple language instructions.

MPO is primarily designed to be a sample-efficient off-policy algorithm in which the E-step constructs a conditional target distribution $q(a|s)$, which requires a state-action value function $Q(s, a)$ that can evaluate multiple sampled actions for a given state. In contrast, V-MPO is primarily (though not exclusively) designed to be an on-policy algorithm in which the E-step constructs a joint distribution $\psi(s, a)$, and in the absence of a learned $Q$-function only one action per state is used. In this regard V-MPO can also be compared to Fitted $Q$-iteration by Advantage Weighted Regression (Neumann & Peters, 2009), which learns a $Q$-function but uses only one action per state.

V-MPO can also be related to Relative Entropy Policy Search (REPS) (Peters et al., 2010). Two distinguishing features of V-MPO from REPS are the introduction of the M-step KL constraint and the use of top-$k$ advantages. Moreover, in REPS the value function is a linear function of a learned feature representation whose parameters are trained by matching the feature distributions under the policy's stationary state distribution.
In V-MPO, the nonlinear neural network value function is instead learned directly from $n$ -step returns. Interestingly, previous attempts to use REPS with neural network function approximators reported very poor performance, being particularly prone to local optima (Duan et al., 2016). In contrast, we find that the principles of EM-style policy optimization, when combined with this learned value function and appropriate constraints, can reliably train powerful neural networks, including transformers, for RL tasks. + +Like V-MPO, Supervised Policy Update (SPU) (Vuong et al., 2019) seeks to exactly solve an optimization problem and fit the parametric policy to this solution. As we argue in Appendix D, however, SPU uses this nonparametric distribution quite differently from V-MPO; as a result, the final algorithm is closer to a policy gradient algorithm such as PPO. + +# 4 METHOD + +V-MPO is an approximate policy iteration (Sutton & Barto, 1998) algorithm with a specific prescription for the policy improvement step. In general, policy iteration uses the fact that the true state-value function $V^{\pi}$ corresponding to policy $\pi$ can be used to obtain an improved policy $\pi'$ . Thus we can + +1. Generate trajectories $\tau$ from an old policy $\pi_{\theta_{\mathrm{old}}}(a|s)$ whose parameters $\theta_{\mathrm{old}}$ are fixed. To control the amount of data generated by a particular policy, we use a target network which is fixed for $T_{\mathrm{target}}$ learning steps (Fig. 1a). + +2. Evaluate the policy $\pi_{\theta_{\mathrm{old}}}(a|s)$ by learning the value function $V^{\pi_{\theta_{\mathrm{old}}}}(s)$ from empirical returns and estimating the corresponding advantages $A^{\pi_{\theta_{\mathrm{old}}}}(s,a)$ for the actions that were taken (Section 4.1). +3. Based on $A^{\pi_{\theta_{\mathrm{old}}}}(s,a)$ , estimate an improved policy $\pi_{\theta}(a|s)$ which we call the "online" policy to distinguish it from the fixed target network (Section 4.2). 
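The three numbered steps above form the outer loop of the algorithm; a minimal control-flow sketch (with hypothetical stand-in functions, not the paper's implementation) might look like:

```python
# Sketch of the three-step V-MPO outer loop (stand-ins, illustrative only).
def v_mpo_iteration(target_params, collect, evaluate, improve):
    trajectories = collect(target_params)               # 1. act with the fixed target policy
    advantages = evaluate(target_params, trajectories)  # 2. fit V and estimate advantages
    return improve(target_params, advantages)           # 3. estimate the improved online policy

# Toy stand-ins that just exercise the control flow:
online = v_mpo_iteration(
    target_params={"w": 0.0},
    collect=lambda p: ["trajectory"],
    evaluate=lambda p, trajs: [0.1] * len(trajs),
    improve=lambda p, adv: {"w": p["w"] + sum(adv)},
)
print(online)  # {'w': 0.1}
```

The stand-ins only make the data flow explicit: the target parameters are held fixed while experience is collected and evaluated, and only the improvement step produces new (online) parameters.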
+ +The first two steps are standard, and describing V-MPO's approach to step 3 is the essential contribution of this work. At a high level, our strategy is to first construct a nonparametric target distribution for the policy update, then partially move the parametric policy towards this distribution subject to a KL constraint. We first review policy evaluation (step 2) in Section 4.1, then derive the V-MPO policy improvement (step 3) in Section 4.2. Ultimately, we use gradient descent to optimize a single, relatively simple loss, which is given in Eq. 10 following the derivation. A summary of the full algorithm is also presented in Algorithm 1. + +# 4.1 POLICY EVALUATION + +In the present setting, policy evaluation means learning an approximate state-value function $V^{\pi}(s)$ given a policy $\pi(a|s)$ , which we keep fixed for $T_{\mathrm{target}}$ learning steps (i.e., batches of trajectories). We note that the value function corresponding to the target policy is instantiated in the "online" network receiving gradient updates; bootstrapping uses the online value function, as it is the best available estimate of the value function for the target policy. Thus in this section $\pi$ refers to $\pi_{\theta_{\mathrm{old}}}$ , while the value function update is performed on the current $\phi$ , which may share parameters with the current $\theta$ . + +We fit a parametric value function $V_{\phi}^{\pi}(s)$ with parameters $\phi$ by minimizing the squared loss + +$$ +\mathcal {L} _ {V} (\phi) = \frac {1}{2 | \mathcal {D} |} \sum_ {s _ {t} \sim \mathcal {D}} \left(V _ {\phi} ^ {\pi} \left(s _ {t}\right) - G _ {t} ^ {(n)}\right) ^ {2}, \tag {1} +$$ + +where $G_{t}^{(n)}$ is the standard $n$ -step target for the value function at state $s_t$ at time $t$ (Sutton & Barto, 1998). 
This return uses the actual rewards in the trajectory and bootstraps from the value function for the rest: for each $\ell = t,\ldots ,t + n - 1$ in an unroll, $G_{\ell}^{(n)} = \sum_{k = \ell}^{t + n - 1}\gamma^{k - \ell}r_k + \gamma^{t + n - \ell}V_{\phi}^{\pi}(s_{t + n})$ . The advantages, which are the key quantity of interest for the policy improvement step in V-MPO, are then given by $A^{\pi}(s_t,a_t) = G_t^{(n)} - V_\phi^\pi (s_t)$ for each $s_t,a_t$ in the batch of trajectories. + +PopArt normalization. As we are interested in the multi-task setting where a single agent must learn a large number of tasks with differing reward scales, we used PopArt (van Hasselt et al., 2016; Hessel et al., 2018) for the value function, even when training on a single task. We observed benefits in using PopArt even in the single-task setting, partly due to the fact that we do not tune the relative weighting of the policy evaluation and policy improvement losses despite sharing most parameters for the policy and value networks. Specifically, the value function outputs a separate value for each task in normalized space, which is converted to actual returns by a shift and scaling operation, the statistics of which are learned during training. We used a scale lower bound of $10^{-2}$ , scale upper bound of $10^{6}$ , and learning rate of $10^{-4}$ for the statistics. The lower bound guards against numerical issues when rewards are extremely sparse. + +Importance-weighting for off-policy data. It is possible to importance-weight the samples using V-trace to correct for off-policy data (Espeholt et al., 2018), for example when data is taken from a replay buffer. For simplicity, however, no importance-weighting was used for the experiments presented in this work, which were mostly on-policy. 
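The $n$-step targets and advantages described in this section can be sketched as follows (illustrative only: the rewards, value predictions, and $\gamma$ are hypothetical, and PopArt normalization is omitted):

```python
import numpy as np

def n_step_targets(rewards, bootstrap_value, gamma=0.99):
    """n-step targets for one unroll: actual rewards plus a bootstrap from V(s_{t+n})."""
    targets = np.empty(len(rewards))
    g = bootstrap_value
    for i in reversed(range(len(rewards))):  # G_l = r + gamma * G_{l+1}
        g = rewards[i] + gamma * g
        targets[i] = g
    return targets

rewards = np.array([1.0, 0.0, 1.0])    # rewards collected over the unroll
values = np.array([0.5, 0.5, 0.5])     # V_phi(s_t), ..., V_phi(s_{t+n-1})
targets = n_step_targets(rewards, bootstrap_value=0.0, gamma=0.5)
advantages = targets - values          # A(s_l, a_l) = G_l^{(n)} - V_phi(s_l)
print(targets.tolist())     # [1.25, 0.5, 1.0]
print(advantages.tolist())  # [0.75, 0.0, 0.5]
```

Here `bootstrap_value` plays the role of $V_{\phi}^{\pi}(s_{t+n})$; in training it would come from the online value network rather than being a constant.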
+ +# 4.2 POLICY IMPROVEMENT IN V-MPO + +In this section we show how, given the advantage function $A^{\pi_{\theta_{\mathrm{old}}}}(s,a)$ for the state-action distribution $p_{\theta_{\mathrm{old}}}(s,a) = \pi_{\theta_{\mathrm{old}}}(a|s)p(s)$ induced by the old policy $\pi_{\theta_{\mathrm{old}}}(a|s)$ , we can estimate an improved policy $\pi_{\theta}(a|s)$ . More formally, let $\mathcal{I}$ denote the binary event that the new policy is an improvement (in a sense to be defined below) over the previous policy: $\mathcal{I} = 1$ if the policy is successfully improved and 0 otherwise. Then we would like to find the mode of the posterior distribution over parameters $\theta$ conditioned on this event, i.e., we seek the maximum a posteriori (MAP) estimate + +$$ +\theta^ {*} = \arg \max _ {\theta} \left[ \log p _ {\theta} (\mathcal {I} = 1) + \log p (\theta) \right], \tag {2} +$$ + +where we have written $p(\mathcal{I} = 1|\theta)$ as $p_{\theta}(\mathcal{I} = 1)$ to emphasize the parametric nature of the dependence on $\theta$ . We use the well-known identity $\log p(X) = \mathbb{E}_{\psi(Z)}\left[\log \frac{p(X,Z)}{\psi(Z)}\right] + D_{\mathrm{KL}}\big(\psi(Z)\| p(Z|X)\big)$ for any latent distribution $\psi(Z)$ , where $D_{\mathrm{KL}}(\psi(Z)\| p(Z|X))$ is the Kullback-Leibler divergence between $\psi(Z)$ and $p(Z|X)$ with respect to $Z$ , and the first term is a lower bound because the KL divergence is always non-negative. Then considering $s, a$ as latent variables, + +$$ +\log p _ {\theta} (\mathcal {I} = 1) = \sum_ {s, a} \psi (s, a) \log \frac {p _ {\theta} (\mathcal {I} = 1 , s , a)}{\psi (s , a)} + D _ {\mathrm {K L}} \left(\psi (s, a) \| p _ {\theta} (s, a | \mathcal {I} = 1)\right). 
\tag {3} +$$ + +Policy improvement in V-MPO consists of the following two steps which have direct correspondences to the expectation maximization (EM) algorithm (Neal & Hinton, 1998): In the expectation (E) step, we choose the variational distribution $\psi(s, a)$ such that the lower bound on $\log p_{\theta}(\mathcal{I} = 1)$ is as tight as possible, by minimizing the KL term. In the maximization (M) step we then find parameters $\theta$ that maximize the corresponding lower bound, together with the prior term in Eq. 2. + +# 4.2.1 E-STEP + +In the E-step, our goal is to choose the variational distribution $\psi(s,a)$ such that the lower bound on $\log p_{\theta}(\mathcal{I} = 1)$ is as tight as possible, which is the case when the KL term in Eq. 3 is zero. Given the old parameters $\theta_{\mathrm{old}}$ , this simply leads to $\psi(s,a) = p_{\theta_{\mathrm{old}}}(s,a|\mathcal{I} = 1)$ , or + +$$ +\psi (s, a) = \frac {p _ {\theta_ {\mathrm {o l d}}} (s , a) p _ {\theta_ {\mathrm {o l d}}} (\mathcal {I} = 1 | s , a)}{p _ {\theta_ {\mathrm {o l d}}} (\mathcal {I} = 1)}, \quad p _ {\theta_ {\mathrm {o l d}}} (\mathcal {I} = 1) = \sum_ {s, a} p _ {\theta_ {\mathrm {o l d}}} (s, a) p _ {\theta_ {\mathrm {o l d}}} (\mathcal {I} = 1 | s, a). \tag {4} +$$ + +Intuitively, this solution weights the probability of each state-action pair with its relative improvement probability $p_{\theta_{\mathrm{old}}}(\mathcal{I} = 1|s,a)$ . We now choose a distribution $p_{\theta_{\mathrm{old}}}(\mathcal{I} = 1|s,a)$ that leads to our desired outcome. As we prefer actions that lead to a higher advantage in each state, we suppose that this probability is given by + +$$ +p _ {\theta_ {\text {o l d}}} (\mathcal {I} = 1 | s, a) \propto \exp \left(\frac {A ^ {\pi_ {\theta_ {\text {o l d}}}} (s , a)}{\eta}\right) \tag {5} +$$ + +for some temperature $\eta > 0$ , from which we obtain the equation on the right in Eq. 12. 
This probability depends on the old parameters $\theta_{\mathrm{old}}$ and not on the new parameters $\theta$. Meanwhile, the value of $\eta$ allows us to control the diversity of actions that contribute to the weighting, but is so far arbitrary. It turns out, however, that we can tune $\eta$ as part of the optimization, which is desirable since the optimal value of $\eta$ changes across iterations. The convex loss that achieves this, Eq. 13, is derived in Appendix A by minimizing the KL term in Eq. 3 subject to a hard constraint on $\psi(s, a)$.

Top-$k$ advantages. We found that learning improves substantially if we take only the samples corresponding to the highest $50\%$ of advantages in each batch for the E-step, corresponding to the use of $\tilde{\mathcal{D}}$ rather than $\mathcal{D}$ in Eqs. 12, 13. Importantly, these must be consistent between the maximum likelihood weights in Eq. 12 and the temperature loss in Eq. 13, since, mathematically, this corresponds to a specific choice of the policy improvement probability in Eq. 5 that only uses the top half of the advantages. This is similar to the technique used in the Cross Entropy Method (CEM) (Mannor et al., 2003) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen et al., 1997; Abdolmaleki et al., 2017), and is a special case of the more general feature that any rank-preserving transformation is allowed under this formalism. For example, in Fig. 8 of the Appendix we show an example of an agent trained with uniform weights given to the top-$k$ samples, instead of optimizing the temperature. Other choices are possible, and in future work we will investigate the suitability of different choices for specific applications.

Importance weighting for off-policy corrections. As for the value function, importance weights can be used in the policy improvement step to correct for off-policy data.
While not used for the experiments presented in this work, details for how to carry out this correction are given in Appendix E.

# 4.2.2 M-STEP: CONSTRAINED SUPERVISED LEARNING OF THE PARAMETRIC POLICY

In the E-step we found the nonparametric variational state-action distribution $\psi (s,a)$, Eq. 4, that gives the tightest lower bound on $\log p_{\theta}(\mathcal{I} = 1)$ in Eq. 3. In the M-step we maximize this lower bound together with the prior term $\log p(\theta)$ with respect to the parameters $\theta$, which effectively leads to a constrained weighted maximum likelihood problem. Thus the introduction of the nonparametric distribution in Eq. 4 separates the RL procedure from the neural network fitting.

We would like to find new parameters $\theta$ that minimize

$$
\mathcal{L}(\theta) = -\sum_{s,a} \psi(s,a) \log \frac{p_{\theta}(\mathcal{I} = 1, s, a)}{\psi(s,a)} - \log p(\theta). \tag{6}
$$

Note, however, that so far we have worked with the joint state-action distribution $\psi(s,a)$ while we are in fact optimizing for the policy, which is the conditional distribution $\pi_{\theta}(a|s)$. Writing $p_{\theta}(s,a) = \pi_{\theta}(a|s)p(s)$, since only the policy is parametrized by $\theta$, and dropping terms that do not depend on $\theta$, the first term of Eq. 6 is seen to be the weighted maximum likelihood policy loss

$$
\mathcal{L}_{\pi}(\theta) = -\sum_{s,a} \psi(s,a) \log \pi_{\theta}(a|s). \tag{7}
$$

In the sample-based computation of this loss, we assume that any state-action pairs not in the batch of trajectories have zero weight, leading to the normalization in Eq. 12.

As in the original MPO algorithm, a useful prior is to keep the new policy $\pi_{\theta}(a|s)$ close to the old policy $\pi_{\theta_{\mathrm{old}}}(a|s)$: $\log p(\theta) \approx -\alpha \mathbb{E}_{s \sim p(s)}\left[D_{\mathrm{KL}}\left(\pi_{\theta_{\mathrm{old}}}(a|s) \,\|\, \pi_{\theta}(a|s)\right)\right]$.
While intuitive, we motivate this more formally in Appendix B. It is again more convenient to specify a bound on the KL divergence instead of tuning $\alpha$ directly, so we solve the constrained optimization problem + +$$ +\theta^ {*} = \arg \min _ {\theta} - \sum_ {s, a} \psi (s, a) \log \pi_ {\theta} (a | s) \quad \text {s . t .} \underset {s \sim p (s)} {\mathbb {E}} \left[ D _ {\mathrm {K L}} \left(\pi_ {\theta_ {\mathrm {o l d}}} (a | s) \| \pi_ {\theta} (a | s)\right) \right] < \epsilon_ {\alpha}. \tag {8} +$$ + +Intuitively, the constraint in the E-step expressed by Eq. 18 in Appendix A for tuning the temperature only constrains the nonparametric distribution; it is the constraint in Eq. 8 that directly limits the change in the parametric policy, in particular for states and actions that were not in the batch of samples and which rely on the generalization capabilities of the neural network function approximator. + +To make the constrained optimization problem amenable to gradient descent, we use Lagrangian relaxation to write the unconstrained objective as + +$$ +\mathcal {J} (\theta , \alpha) = \mathcal {L} _ {\pi} (\theta) + \alpha \left(\epsilon_ {\alpha} - \underset {s \sim p (s)} {\mathbb {E}} \left[ D _ {\mathrm {K L}} \left(\pi_ {\theta_ {\text {o l d}}} (a | s) \| \pi_ {\theta} (a | s)\right) \right]\right), \tag {9} +$$ + +which we can optimize by following a coordinate-descent strategy, alternating between the optimization over $\theta$ and $\alpha$ . Since $\eta$ and $\alpha$ are Lagrange multipliers that must be positive, after each gradient update we project the resulting $\eta$ and $\alpha$ to a small positive value which we choose to be $\eta_{\mathrm{min}} = \alpha_{\mathrm{min}} = 10^{-8}$ throughout the results presented below. + +KL constraints in both the E-step and M-step are generally well satisfied, especially for the E-step since the temperature optimization is convex. Fig. 
7 in the Appendix shows an example of how the KL constraints behave in the Atari Seaquest experiment presented below. We note, in particular, that it is desirable for the bounds to not just be satisfied but saturated. + +# 4.3 FULL LOSS FUNCTION + +In this section we provide the full loss function used to implement V-MPO, which is perhaps simpler than is suggested by the derivation. Consider a batch of data $\mathcal{D}$ consisting of a number of trajectories, with $|\mathcal{D}|$ total state-action samples. Each trajectory consists of an unroll of length $n$ of the form $\tau = \left[(s_t,a_t,r_{t + 1}),\dots ,(s_{t + n - 1},a_{t + n - 1},r_{t + n}),s_{t + n}\right]$ including the bootstrapped state $s_{t + n}$ , where $r_{t + 1} = r(s_t,a_t)$ . The total loss is the sum of a policy evaluation loss and a policy improvement loss, + +$$ +\mathcal {L} (\phi , \theta , \eta , \alpha) = \mathcal {L} _ {V} (\phi) + \mathcal {L} _ {\mathrm {V - M P O}} (\theta , \eta , \alpha), \tag {10} +$$ + +where $\phi$ are the parameters of the value network, $\theta$ the parameters of the policy network, and $\eta$ and $\alpha$ are Lagrange multipliers. In practice, the policy and value networks share most of their parameters in the form of a shared convolutional network (a ResNet) and recurrent LSTM core, and are optimized together (Fig. 1b) (Mnih et al., 2016). We note, however, that the value network parameters $\phi$ are considered fixed for the policy improvement loss, and gradients are not propagated. + +The policy evaluation loss for the value function, $\mathcal{L}_V(\phi)$ , is the standard regression to $n$ -step returns and is given by Eq. 1 above. The policy improvement loss $\mathcal{L}_{\mathrm{V - MPO}}(\theta ,\eta ,\alpha)$ is given by + +$$ +\mathcal {L} _ {\mathrm {V - M P O}} (\theta , \eta , \alpha) = \mathcal {L} _ {\pi} (\theta) + \mathcal {L} _ {\eta} (\eta) + \mathcal {L} _ {\alpha} (\theta , \alpha). 
\tag {11}
$$

Here the policy loss is the weighted maximum likelihood loss

$$
\mathcal{L}_{\pi}(\theta) = -\sum_{s,a\sim \tilde{\mathcal{D}}} \psi(s,a) \log \pi_{\theta}(a|s), \quad \psi(s,a) = \frac{\exp\left(\frac{A^{\mathrm{target}}(s,a)}{\eta}\right)}{\sum_{s,a\sim \tilde{\mathcal{D}}} \exp\left(\frac{A^{\mathrm{target}}(s,a)}{\eta}\right)}, \tag{12}
$$

where the advantages $A^{\mathrm{target}}(s,a)$ for the target network policy $\pi_{\theta_{\mathrm{target}}}(a|s)$ are estimated according to the standard method described above. The tilde over the dataset, $\tilde{\mathcal{D}}$, indicates that we take the samples corresponding to the top-half advantages in the batch of data. The $\eta$, or "temperature", loss is

$$
\mathcal{L}_{\eta}(\eta) = \eta \epsilon_{\eta} + \eta \log\left[\frac{1}{|\tilde{\mathcal{D}}|} \sum_{s,a\sim \tilde{\mathcal{D}}} \exp\left(\frac{A^{\mathrm{target}}(s,a)}{\eta}\right)\right]. \tag{13}
$$

We perform the alternating optimization over $\theta$ and $\alpha$ while keeping a single loss function by alternately applying a "stop-gradient" to the Lagrange multiplier and the KL term. Then the KL constraint, which can be viewed as a form of trust-region loss, is given by

$$
\mathcal{L}_{\alpha}(\theta,\alpha) = \frac{1}{|\mathcal{D}|} \sum_{s\in \mathcal{D}} \left[\alpha\left(\epsilon_{\alpha} - \mathrm{sg}\big[\big[D_{\mathrm{KL}}\big(\pi_{\theta_{\mathrm{target}}}(a|s) \,\|\, \pi_{\theta}(a|s)\big)\big]\big]\right) + \mathrm{sg}[[\alpha]]\, D_{\mathrm{KL}}\big(\pi_{\theta_{\mathrm{target}}}(a|s) \,\|\, \pi_{\theta}(a|s)\big)\right], \tag{14}
$$

where $\mathrm{sg}[[\cdot]]$ indicates a stop gradient, i.e., that the enclosed term is treated as constant with respect to all variables. Note that here we use the full batch $\mathcal{D}$, not $\tilde{\mathcal{D}}$.
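The E-step weighting and temperature loss (Eqs. 12 and 13) can be sketched numerically as follows. This is an illustrative plain-numpy version with hypothetical advantages and log-probabilities; it ignores gradients and stop-gradients, and a practical implementation would subtract the maximum advantage before exponentiating for numerical stability:

```python
import numpy as np

def vmpo_policy_losses(advantages, log_probs, eta, eps_eta=0.01):
    """Sketch of Eqs. 12-13 evaluated on the top-half advantages of a batch."""
    k = len(advantages) // 2
    top = np.argsort(advantages)[-k:]        # indices of the top-half advantages
    adv, logp = advantages[top], log_probs[top]
    scaled = np.exp(adv / eta)
    psi = scaled / scaled.sum()              # Eq. 12: normalized nonparametric weights
    policy_loss = -(psi * logp).sum()        # weighted maximum likelihood
    temperature_loss = eta * eps_eta + eta * np.log(scaled.mean())  # Eq. 13
    return policy_loss, temperature_loss

adv = np.array([0.2, -0.1, 0.5, 0.0])        # hypothetical advantage estimates
logp = np.array([-1.0, -2.0, -0.5, -1.5])    # hypothetical log pi_theta(a|s)
policy_loss, temperature_loss = vmpo_policy_losses(adv, logp, eta=1.0)
```

Note how the same top-half subset enters both losses, reflecting the consistency requirement between Eqs. 12 and 13 discussed in the Top-$k$ advantages paragraph.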
For continuous action spaces parametrized by Gaussian distributions, we use decoupled KL constraints for the M-step in Eq. 14 as in Abdolmaleki et al. (2018b); the precise form is given in Appendix C.

We used the Adam optimizer (Kingma & Ba, 2015) with default TensorFlow hyperparameters to optimize the total loss in Eq. 10. In particular, the learning rate was fixed at $10^{-4}$ for all experiments.

# Algorithm 1 V-MPO

```latex
given Batch size $B$, unroll length $n$, target update period $T_{\mathrm{target}}$, KL bounds $\epsilon_{\eta}$, $\epsilon_{\alpha}$
initialize Network parameters $\theta_{\mathrm{online}}$, $\phi_{\mathrm{online}}$ and Lagrange multipliers $\eta$, $\alpha$
repeat
  $\theta_{\mathrm{target}} \gets \theta_{\mathrm{online}}$
  for $i = 1, \dots, T_{\mathrm{target}}$ do
    Use policy $\pi_{\theta_{\mathrm{target}}}$ to act in the environment and collect $B$ trajectories $\tau$ of length $n$
    Update $\theta_{\mathrm{online}}$, $\phi_{\mathrm{online}}$, $\eta$, $\alpha$ using Adam to minimize the total loss in Eq. 10
    $\eta \leftarrow \max(\eta, \eta_{\min})$; $\alpha \leftarrow \max(\alpha, \alpha_{\min})$
  end for
until a fixed number of steps has been taken
```

# 5 EXPERIMENTS

Details on the network architecture and hyperparameters used for each task are given in Appendix F.

# 5.1 DISCRETE ACTIONS: DMLAB, ATARI

DMLab. DMLab-30 (Beattie et al., 2016) is a collection of visually rich, partially observable 3D environments played from the first-person point of view. Like IMPALA, for DMLab we used pixel control as an auxiliary loss for representation learning (Jaderberg et al., 2017b; Hessel et al., 2018). However, we did not employ the optimistic asymmetric reward scaling used by previous IMPALA

![](images/9d1aefb702c5d47d82390c6181615552d892ae2f6378270a870166eb5c4bf2b4.jpg)
(a) Multi-task DMLab-30.

![](images/6b771eac1ff1aa1f5ea2b71879f1566aa26999422db7c589b217b3d5f3aa39f4.jpg)
(b) Multi-task Atari-57.
+ +![](images/c0bdd042324e013d1fec83fd63d2978ddf1e6da9905a5f8f34611af2a88caa2e.jpg) +Figure 2: (a) Multi-task DMLab-30. IMPALA results show 3 runs of 8 agents each; within a run hyperparameters were evolved via PBT. For V-MPO each line represents a set of hyperparameters that are fixed throughout training. The final result of R2D2+ trained for 10B environment steps on individual levels (Kapturowski et al., 2019) is also shown for comparison (orange line). (b) Multi-task Atari-57. In the IMPALA experiment, hyperparameters were evolved with PBT. For V-MPO each of the 24 lines represents a set of hyperparameters that were fixed throughout training, and all runs achieved a higher score than the best IMPALA run. Data for IMPALA ("Pixel-PopArt-IMPALA" for DMLab-30 and "PopArt-IMPALA" for Atari-57) was obtained from the authors of Hessel et al. (2018). Each agent step corresponds to 4 environment frames due to the action repeat. + +![](images/a7d65c77f3ff6bbdd41b2d7ed57b6921dce7803ee81034c14b3ee6433283a742.jpg) +Figure 3: V-MPO trained on single example levels from DMLab-30, compared to IMPALA and more recent results from R2D2+, the larger, DMLab-specific version of R2D2 (Kapturowski et al., 2019). The IMPALA results include hyperparameter evolution with PBT. + +![](images/bd8734f0ccaa172ed8c302825b7e0f845462f979a6cb4d434f43237ced2ccf1c.jpg) + +![](images/a54ad748ca2096e82492b1edb0b03cadff0769dc756ba35a0f6bfffb60940047.jpg) + +experiments to aid exploration on a subset of the DMLab levels, by weighting positive rewards more than negative rewards (Espeholt et al., 2018; Hessel et al., 2018; Kapturowski et al., 2019). Unlike in Hessel et al. (2018) we also did not use population-based training (PBT) (Jaderberg et al., 2017a). Additional details for the settings used in DMLab can be found in Table 5 of the Appendix. + +Fig. 2a shows the results for multi-task DMLab-30, comparing the V-MPO learning curves to data obtained from Hessel et al. 
(2018) for the PopArt IMPALA agent with pixel control. We note that the result for V-MPO at 10B environment frames across all levels matches the result for the Recurrent Replay Distributed DQN (R2D2) agent (Kapturowski et al., 2019) trained on individual levels for 10B environment steps per level. Fig. 3 shows example individual levels in DMLab where V-MPO achieves scores that are substantially higher than previously reported for both R2D2 and IMPALA. The pixel-control IMPALA agents shown here were carefully tuned for DMLab and are similar to the "experts" used in Schmitt et al. (2018); in all cases these results match or exceed previously published results for IMPALA (Espeholt et al., 2018; Kapturowski et al., 2019).

Atari. The Arcade Learning Environment (ALE) (Bellemare et al., 2012) is a collection of 57 Atari 2600 games that has served as an important benchmark for recent deep RL methods. We used the standard preprocessing scheme and a maximum episode length of 30 minutes (108,000 frames); see Table 6 in the Appendix. For the multi-task setting we followed Hessel et al. (2018) in setting the discount to zero on loss of life; for the example single tasks we did not employ this trick, since it

![](images/b6832386a0e98b2a9ba5b35b1da20c0c85ad86adabc0400c7d4682bd83f2ff29.jpg)
Figure 4: Example levels from Atari. In Breakout, V-MPO achieves the maximum score of 864 in every episode. No reward clipping was applied, and the maximum length of an episode was 30 minutes (108,000 frames). Supplementary video for Ms.
Pacman: https://bit.ly/21WQBy5

![](images/99bce19e8d5b35edf8cd1f1cf82195d406a8d8aaff142746d447793d2c27fa39.jpg)
(a)

![](images/4491ee4e19f46d36dea5aee56203f45f3afe7b13dae963c9849636d0b8f5bab1.jpg)
(b)

![](images/5e7f20cda89efe10739b75526e6778f6b7412b21c8674e9fb6f0350b16cdba95.jpg)
(c)

![](images/8c98d9476d462344a29594b6939b7b87f598b8385e1d3f4702e77b567a370a9f.jpg)
(d)
Figure 5: (a) Humanoid "run" from full state (Tassa et al., 2018) and (b) humanoid "gaps" from pixel observations (Merel et al., 2019). Purple curves are the same runs but without parametric KL constraints. Det. eval.: deterministic evaluation. Supplementary video for humanoid gaps: https://bit.ly/2L9KZdS. (c)-(d) Example OpenAI Gym tasks. See also Fig. 11 in the Appendix for Gym Humanoid-V1.

can prevent the agent from achieving the highest score possible by sacrificing lives. Similarly, while in the multi-task setting we followed previous work in clipping the maximum reward to 1.0, no such clipping was applied in the single-task setting in order to preserve the original reward structure. Additional details for the settings used in Atari can be found in Table 6 in the Appendix.

Fig. 2b shows the results for multi-task Atari-57, demonstrating that it is possible for a single agent to achieve "superhuman" median performance on Atari-57 in approximately 4 billion ( $\sim 70$ million per level) environment frames. Again, we did not employ PBT, in order to demonstrate that individual V-MPO runs can exceed the performance of a population of IMPALA agents; Fig. 6 shows that with population-based tuning of hyperparameters even higher performance is possible.
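The two multi-task Atari conventions mentioned above (capping rewards at 1.0 and zeroing the discount on loss of life) amount to a small transition-preprocessing step. A minimal sketch; the function name and signature are our own, not from the paper:

```python
def preprocess_transition(reward, life_lost, discount=0.99):
    """Multi-task Atari conventions described in the text: cap the reward
    at 1.0 and set the discount to zero when a life is lost. Single-task
    runs used neither trick, preserving the original reward structure."""
    clipped_reward = min(reward, 1.0)
    effective_discount = 0.0 if life_lost else discount
    return clipped_reward, effective_discount
```

Zeroing the discount on life loss makes losing a life terminal from the value function's perspective, which discourages sacrificing lives for short-term reward.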
+ +We also compare the performance of V-MPO on a few individual Atari levels to R2D2 (Kapturowski et al., 2019), which previously achieved some of the highest scores reported for Atari. Again, V-MPO can match or exceed previously reported scores while requiring fewer interactions with the environment. In Ms. Pacman, the final performance approaches 300,000 with a 30-minute timeout (and the maximum 1M without). Inspired by the argument in Kapturowski et al. (2019) that in a fully observable environment LSTMs enable the agent to utilize more useful representations than is available in the immediate observation, for the single-task setting we used a Transformer-XL (TrXL) (Dai et al., 2019) to replace the LSTM core. Unlike previous work for single Atari levels, we did not employ any reward clipping (Mnih et al., 2015; Espeholt et al., 2018) or nonlinear value function rescaling (Kapturowski et al., 2019). + +# 5.2 CONTINUOUS CONTROL + +To demonstrate V-MPO's effectiveness in high-dimensional, continuous action spaces, here we present examples of learning to control both a simulated humanoid with 22 degrees of freedom from full state observations and one with 56 degrees of freedom from pixel observations (Tassa et al., 2018; Merel et al., 2019). As shown in Fig. 5a, for the 22-dimensional humanoid V-MPO reliably achieves higher asymptotic returns than has previously been reported, including for Deep Deterministic Policy + +Gradients (DDPG) (Lillicrap et al., 2015), Stochastic Value Gradients (SVG) (Heess et al., 2015), and MPO. These algorithms are far more sample-efficient but reach a lower final performance. +In the "gaps" task the 56-dimensional humanoid must run forward to match a target velocity of $4\mathrm{m / s}$ and jump over the gaps between platforms by learning to actuate joints with position-control (Merel et al., 2019). 
Previously, only an agent operating in the space of pre-learned motor primitives was able to solve the task from pixel observations (Merel et al., 2018; 2019); here we show that V-MPO can learn a challenging visuomotor task from scratch (Fig. 5b). For this task we also demonstrate the importance of the parametric KL constraint, without which the agent learns poorly.
In Figs. 5c-d we also show that V-MPO achieves the highest asymptotic performance reported for two OpenAI Gym tasks (Brockman et al., 2016). Again, MPO and Soft Actor-Critic (Haarnoja et al., 2018) are far more sample-efficient but reach a lower final performance.
These experiments are presented to demonstrate the existence of higher-return solutions than have previously been reported, and an algorithm, V-MPO, that can reliably converge to these solutions. However, in the future we desire algorithms that can do so while using fewer interactions with the environment.

# 6 CONCLUSION

In this work we have introduced a scalable on-policy deep reinforcement learning algorithm, V-MPO, that is applicable to both discrete and continuous control domains. For the results presented in this work neither importance weighting nor entropy regularization was used; moreover, since the size of neural network parameter updates is limited by KL constraints, we were also able to use the same learning rate for all experiments. This suggests that a scalable, performant RL algorithm may not require some of the tricks that have been developed over the past several years. Interestingly, both the original MPO algorithm for replay-based off-policy learning (Abdolmaleki et al., 2018a;b) and V-MPO for on-policy learning are derived from similar principles, providing evidence for the benefits of this approach as an alternative to popular policy gradient-based methods.

# ACKNOWLEDGMENTS

We thank Lorenzo Blanco, Trevor Cai, Greg Wayne, Chloe Hillier, and Vicky Langston for their assistance and support.
# REFERENCES

Abbas Abdolmaleki, Bob Price, Nuno Lau, Luis P. Reis, and Gerhard Neumann. Deriving and Improving CMA-ES with Information Geometric Trust Regions. Proceedings of the Genetic and Evolutionary Computation Conference, 2017.
Abbas Abdolmaleki, Jost Tobias Springenberg, Jonas Degrave, Steven Bohez, Yuval Tassa, Dan Belov, Nicolas Heess, and Martin Riedmiller. Relative Entropy Regularized Policy Iteration. arXiv preprint, 2018a. URL https://arxiv.org/pdf/1812.02256.pdf.
Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a Posteriori Policy Optimisation. Int. Conf. Learn. Represent., 2018b. URL https://arxiv.org/pdf/1806.06920.pdf.
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.
Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47, 2012.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint, 2016. URL http://arxiv.org/abs/1606.01540.
Peter Buchlovsky, David Budden, Dominik Grewe, Chris Jones, John Aslanides, Frederic Besse, Andy Brock, Aidan Clark, Sergio Gómez Colmenarejo, Aedan Pope, Fabio Viola, and Dan Belov. TF-Replicator: Distributed Machine Learning for Researchers. arXiv preprint, 2019. URL http://arxiv.org/abs/1902.00465.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv preprint, 2019. URL http://arxiv.org/abs/1901.02860.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking Deep Reinforcement Learning for Continuous Control.
arXiv preprint, 2016. URL http://arxiv.org/abs/1604.06778.
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. arXiv preprint, 2018. URL http://arxiv.org/abs/1802.01561.
Google. Cloud TPU, 2018. URL https://cloud.google.com/tpu/.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv preprint, 2018. URL http://arxiv.org/abs/1801.01290.
Nikolaus Hansen and Andreas Ostermeier. Convergence Properties of Evolution Strategies with the Derandomized Covariance Matrix Adaptation: CMA-ES. 1997. URL http://www.cmap.polytechnique.fr/~nikolaus.hansen/CMAES2.pdf.
Nicolas Heess, Greg Wayne, David Silver, Timothy P. Lillicrap, Yuval Tassa, and Tom Erez. Learning Continuous Control Policies by Stochastic Value Gradients. arXiv preprint, 2015. URL http://arxiv.org/abs/1510.09142.
Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multi-task Deep Reinforcement Learning with PopArt. arXiv preprint, 2018. URL https://arxiv.org/pdf/1809.04474.pdf.
Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population Based Training of Neural Networks. arXiv preprint, 2017a. URL http://arxiv.org/abs/1711.09846.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement Learning with Unsupervised Auxiliary Tasks. Int. Conf. Learn. Represent., 2017b. URL https://openreview.net/pdf?id=SJ6yPD5xg.
Max Jaderberg, Wojciech M.
Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, and Thore Graepel. Human-level performance in 3d multiplayer games with population-based reinforcement learning. Science, 364:859-865, 2019. URL https://science.sciencemag.org/content/364/6443/859. +Steven Kapturowski, Georg Ostrovski, John Quan, Rémi Munos, and Will Dabney. Recurrent Experience Replay in Distributed Reinforcement Learning. Int. Conf. Learn. Represent., 2019. URL https://openreview.net/pdf?id=r1lyTjAqYX. +Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. Int. Conf. Learn. Represent., 2015. URL https://arxiv.org/abs/1412.6980. +Sergey Levine. Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review. arXiv preprint, 2018. URL http://arxiv.org/abs/1805.00909. + +Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint, 2015. URL http://arxiv.org/abs/1509.02971. +Shie Mannor, Reuven Y Rubinstein, and Yohai Gat. The cross entropy method for fast policy search. Proceedings of the 20th International Conference on Machine Learning, 2003. URL https://www.aaai.org/Papers/ICML/2003/ICML03-068.pdf. +Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, and Nicolas Heess. Neural probabilistic motor primitives for humanoid control. arXiv preprint, 2018. URL http://arxiv.org/abs/1811.11711. +Josh Merel, Arun Ahuja, Vu Pham, Saran Tunyasuvunakool, Siqi Liu, Dhruva Tirumala, Nicolas Heess, and Greg Wayne. Hierarchical Visuomotor Control of Humanoids. Int. Conf. Learn. Represent., 2019. URL https://openreview.net/pdf?id=BJfYvo09Y7. 
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-Level Control through Deep Reinforcement Learning. Nature, 518:529-533, 2015. URL http://dx.doi.org/10.1038/nature14236.
Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Tim Harley, Timothy P. Lillicrap, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. arXiv:1602.01783, 2016. URL http://arxiv.org/abs/1602.01783.
Radford M. Neal and Geoffrey E. Hinton. A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants. In M.I. Jordan (ed.), Learn. Graph. Model. NATO ASI Ser. vol. 89. Springer, Dordrecht, 1998.
Gerhard Neumann and Jan R. Peters. Fitted Q-iteration by Advantage Weighted Regression. Advances in Neural Information Processing Systems, 2009. URL http://papers.nips.cc/paper/3501-fitted-q-iteration-by-advantage-weighted-regression.pdf.
OpenAI. OpenAI Five, 2018a. URL https://openai.com/blog/openai-five/.
OpenAI. Learning Dexterity, 2018b. URL https://openai.com/blog/learning-dexterity/.
Jan Peters, Katharina Mülling, and Yasemin Altün. Relative Entropy Policy Search. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, pp. 1607-1612, 2010.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. 2019. URL https://d4mucfpkseywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
Simon Schmitt, Jonathan J. Hudson, Augustin Zidek, Simon Osindero, Carl Doersch, Wojciech M. Czarnecki, Joel Z. Leibo, Heinrich Kuttler, Andrew Zisserman, Karen Simonyan, and S. M. Ali Eslami. Kickstarting Deep Reinforcement Learning. arXiv preprint, 2018.
URL http://arxiv.org/abs/1803.03835. +Simon Schmitt, Matteo Hessel, and Karen Simonyan. Off-Policy Actor-Critic with Shared Experience Replay. arXiv preprint, 2019. URL https://arxiv.org/abs/1909.11583. +John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust Region Policy Optimization. arXiv preprint, 2015. URL http://arxiv.org/abs/1502.05477. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint, 2017. URL http://arxiv.org/abs/1707.06347. + +David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484-489, 2016. URL http://www.nature.com/doifinder/10.1038/nature16961. +David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362:1140-1144, 2018. URL https://science.sciencemag.org/content/362/6419/1140. +Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998. +Richard S Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. A. Solla, T. K. Leen, and K. Müller (eds.), Advances in Neural Information Processing Systems 12, pp. 1057-1063. MIT Press, 2000. URL http://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf. 
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. DeepMind Control Suite. arXiv preprint, 2018. URL http://arxiv.org/abs/1801.00690.
Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning values across many orders of magnitude. arXiv preprint, 2016. URL http://arxiv.org/abs/1602.07714.
Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 2019. URL http://doi.org/10.1038/s41586-019-1724-z.
Quan Vuong, Keith Ross, and Yiming Zhang. Supervised Policy Update for Deep Reinforcement Learning. arXiv preprint, 2019. URL http://arxiv.org/abs/1805.11706.
Ronald J. Williams. Simple statistical gradient-following methods for connectionist reinforcement learning. Mach. Learn., 8:229-256, 1992. URL http://dx.doi.org/10.1007/BF00992696.
Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M. Bayen, Sham Kakade, Igor Mordatch, and Pieter Abbeel. Variance reduction for policy gradient with action-dependent factorized baselines. arXiv preprint, 2018. URL http://arxiv.org/abs/1803.07246.

# A DERIVATION OF THE V-MPO TEMPERATURE LOSS

In this section we derive the E-step temperature loss in Eq. 22.
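For reference while following the derivation, its end products can be computed directly: the nonparametric weights of Eq. 21 (restricted to a sample batch, where they reduce to a softmax of the advantages) and the sample-based dual loss of Eq. 22. A minimal NumPy sketch; the function names are ours:

```python
import numpy as np

def nonparametric_weights(advantages, eta):
    """psi(s, a) of Eq. 21 over a sample batch: a softmax of the
    advantages at temperature eta, computed stably by shifting the max."""
    a = np.asarray(advantages, dtype=np.float64) / eta
    w = np.exp(a - a.max())
    return w / w.sum()

def temperature_dual_loss(advantages, eta, eps_eta):
    """Sample-based version of Eq. 22: eta*eps + eta*log mean_i exp(A_i/eta),
    with the expectation over p_old replaced by a batch average."""
    a = np.asarray(advantages, dtype=np.float64) / eta
    m = a.max()
    log_mean_exp = m + np.log(np.mean(np.exp(a - m)))
    return eta * eps_eta + eta * log_mean_exp
```

As `eta` shrinks, the weights concentrate on the highest-advantage samples; as `eta` grows, `eta * log mean exp(A/eta)` approaches the mean advantage.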
To this end, we explicitly commit to the more specific improvement criterion in Eq. 5 by plugging it into the original objective in Eq. 3. We seek $\psi(s, a)$ that minimizes

$$
\mathcal{J}(\psi(s, a)) = D_{\mathrm{KL}}\left(\psi(s, a) \| p_{\theta_{\mathrm{old}}}(s, a | \mathcal{I} = 1)\right) \tag{15}
$$

$$
\propto -\sum_{s, a} \psi(s, a) A^{\pi_{\theta_{\mathrm{old}}}}(s, a) + \eta \sum_{s, a} \psi(s, a) \log \frac{\psi(s, a)}{p_{\theta_{\mathrm{old}}}(s, a)} + \lambda \sum_{s, a} \psi(s, a) \tag{16}
$$

where $\lambda = \eta \log p_{\theta_{\mathrm{old}}}(\mathcal{I} = 1)$ after multiplying through by $\eta$, which up to this point in the derivation is given. We wish to automatically tune $\eta$ so as to enforce a bound $\epsilon_{\eta}$ on the KL term $D_{\mathrm{KL}}\bigl(\psi(s, a) \| p_{\theta_{\mathrm{old}}}(s, a)\bigr)$ multiplying it in Eq. 16, in which case the temperature optimization can also be viewed as a nonparametric trust region for the variational distribution with respect to the old distribution. We therefore consider the constrained optimization problem

$$
\psi(s, a) = \arg\max_{\psi(s, a)} \sum_{s, a} \psi(s, a) A^{\pi_{\theta_{\mathrm{old}}}}(s, a) \tag{17}
$$

$$
\text{s.t.} \quad \sum_{s, a} \psi(s, a) \log \frac{\psi(s, a)}{p_{\theta_{\mathrm{old}}}(s, a)} < \epsilon_{\eta} \quad \text{and} \quad \sum_{s, a} \psi(s, a) = 1. \tag{18}
$$

We can now use Lagrangian relaxation to transform the constrained optimization problem into one that maximizes the unconstrained objective

$$
\mathcal{J}(\psi(s, a), \eta, \lambda) = \sum_{s, a} \psi(s, a) A^{\pi_{\theta_{\mathrm{old}}}}(s, a) + \eta \left(\epsilon_{\eta} - \sum_{s, a} \psi(s, a) \log \frac{\psi(s, a)}{p_{\theta_{\mathrm{old}}}(s, a)}\right) + \lambda \left(1 - \sum_{s, a} \psi(s, a)\right) \tag{19}
$$

with $\eta \geq 0$.
(Note we are re-using the variables $\eta$ and $\lambda$ for the new optimization problem.) Differentiating $\mathcal{J}$ with respect to $\psi(s, a)$ and setting equal to zero, we obtain

$$
\psi(s, a) = p_{\theta_{\mathrm{old}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathrm{old}}}}(s, a)}{\eta}\right) \exp\left(-1 - \frac{\lambda}{\eta}\right). \tag{20}
$$

Normalizing over $s, a$ (using the freedom given by $\lambda$) then gives

$$
\psi(s, a) = \frac{p_{\theta_{\mathrm{old}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathrm{old}}}}(s, a)}{\eta}\right)}{\sum_{s, a} p_{\theta_{\mathrm{old}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathrm{old}}}}(s, a)}{\eta}\right)}, \tag{21}
$$

which reproduces the general solution Eq. 4 for our specific choice of policy improvement in Eq. 5. However, the value of $\eta$ can now be found by optimizing the corresponding dual function. Plugging Eq. 21 into the unconstrained objective in Eq. 19 gives rise to the $\eta$-dependent term

$$
\mathcal{L}_{\eta}(\eta) = \eta \epsilon_{\eta} + \eta \log\left[\sum_{s, a} p_{\theta_{\mathrm{old}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathrm{old}}}}(s, a)}{\eta}\right)\right]. \tag{22}
$$

Replacing the expectation with samples from $p_{\theta_{\mathrm{old}}}(s, a)$ in the batch of trajectories $\mathcal{D}$ leads to the loss in Eq. 13.

# B M-STEP KL CONSTRAINT

Here we give a somewhat more formal motivation for the prior $\log p(\theta)$. Consider a normal prior $\mathcal{N}(\theta; \mu, \Sigma)$ with mean $\mu$ and covariance $\Sigma$. We choose $\Sigma^{-1} = \alpha F(\theta_{\mathrm{old}})$, where $\alpha$ is a scaling parameter and $F(\theta_{\mathrm{old}})$ is the Fisher information for $\pi_{\theta'}(a|s)$ evaluated at $\theta' = \theta_{\mathrm{old}}$.
Then $\log p(\theta) \approx -\alpha \times \frac{1}{2}(\theta - \theta_{\mathrm{old}})^T F(\theta_{\mathrm{old}})(\theta - \theta_{\mathrm{old}}) + \{\text{term independent of } \theta\}$, where the first term is precisely the second-order approximation to the KL divergence $D_{\mathrm{KL}}(\theta_{\mathrm{old}} \| \theta)$. We now follow TRPO (Schulman et al., 2015) in heuristically approximating this as the state-averaged expression, $\mathbb{E}_{s \sim p(s)}\left[D_{\mathrm{KL}}\bigl(\pi_{\theta_{\mathrm{old}}}(a|s) \| \pi_{\theta}(a|s)\bigr)\right]$. We note that the KL divergence in either direction has the same second-order expansion, so our choice of KL is an empirical one (Abdolmaleki et al., 2018a).

# C DECOUPLED KL CONSTRAINTS FOR CONTINUOUS CONTROL

As in Abdolmaleki et al. (2018b), for continuous action spaces parametrized by Gaussian distributions we use decoupled KL constraints for the M-step. This uses the fact that the KL divergence between two $d$-dimensional multivariate normal distributions with means $\mu_1, \mu_2$ and covariances $\Sigma_1, \Sigma_2$ can be written as

$$
D_{\mathrm{KL}}\left(\mathcal{N}(\mu_1, \Sigma_1) \| \mathcal{N}(\mu_2, \Sigma_2)\right) = \frac{1}{2}\left[(\mu_2 - \mu_1)^T \Sigma_1^{-1} (\mu_2 - \mu_1) + \operatorname{Tr}\left(\Sigma_2^{-1} \Sigma_1\right) - d + \log \frac{|\Sigma_2|}{|\Sigma_1|}\right], \tag{23}
$$

where $|\cdot|$ is the matrix determinant. Since the first distribution and hence $\Sigma_1$ in the KL divergence of Eq.
9 depends on the old target network parameters, we see that we can separate the overall KL divergence into a mean component and a covariance component:

$$
D_{\mathrm{KL}}^{\mu}\left(\pi_{\theta_{\mathrm{old}}} \| \pi_{\theta}\right) = \frac{1}{2}\left(\mu_{\theta} - \mu_{\theta_{\mathrm{old}}}\right)^T \Sigma_{\theta_{\mathrm{old}}}^{-1}\left(\mu_{\theta} - \mu_{\theta_{\mathrm{old}}}\right), \tag{24}
$$

$$
D_{\mathrm{KL}}^{\Sigma}\left(\pi_{\theta_{\mathrm{old}}} \| \pi_{\theta}\right) = \frac{1}{2}\left[\operatorname{Tr}\left(\Sigma_{\theta}^{-1} \Sigma_{\theta_{\mathrm{old}}}\right) - d + \log \frac{|\Sigma_{\theta}|}{|\Sigma_{\theta_{\mathrm{old}}}|}\right]. \tag{25}
$$

With the replacement $D_{\mathrm{KL}}(\pi_{\theta_{\mathrm{old}}} \| \pi_{\theta}) \to D_{\mathrm{KL}}^{C}(\pi_{\theta_{\mathrm{old}}} \| \pi_{\theta})$ for $C = \mu, \Sigma$ and corresponding $\alpha \to \alpha_{\mu}, \alpha_{\Sigma}$, we obtain the total loss

$$
\mathcal{L}_{\mathrm{V\text{-}MPO}}(\theta, \eta, \alpha_{\mu}, \alpha_{\Sigma}) = \mathcal{L}_{\pi}(\theta) + \mathcal{L}_{\eta}(\eta) + \mathcal{L}_{\alpha_{\mu}}(\theta, \alpha_{\mu}) + \mathcal{L}_{\alpha_{\Sigma}}(\theta, \alpha_{\Sigma}), \tag{26}
$$

where $\mathcal{L}_{\pi}(\theta)$ and $\mathcal{L}_{\eta}(\eta)$ are the same as before. Note, however, that unlike in Abdolmaleki et al. (2018a) we do not decouple the policy loss.

We generally set $\epsilon_{\Sigma}$ to be much smaller than $\epsilon_{\mu}$ (see Table 7). Intuitively, this allows the policy to learn quickly in action space while preventing premature collapse of the policy, and, conversely, to increase "exploration" without moving in action space.
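For diagonal covariances, the two components of the decoupled constraint are easy to compute directly from Eqs. 24 and 25. A small NumPy sketch (helper names are ours); note that when the covariance is left unchanged, the covariance term vanishes and the mean term coincides with the full Gaussian KL:

```python
import numpy as np

def kl_mean_part(mu_old, mu_new, var_old):
    """Eq. 24 for diagonal covariances: mean term, measured in Sigma_old."""
    d = np.asarray(mu_new) - np.asarray(mu_old)
    return 0.5 * np.sum(d * d / np.asarray(var_old))

def kl_cov_part(var_old, var_new):
    """Eq. 25 for diagonal covariances: trace, dimension, and log-det terms."""
    var_old, var_new = np.asarray(var_old), np.asarray(var_new)
    return 0.5 * np.sum(var_old / var_new - 1.0 + np.log(var_new / var_old))
```

Bounding the two parts separately (via $\epsilon_{\mu}$ and $\epsilon_{\Sigma}$) is what allows the mean to move quickly while the covariance shrinks only slowly.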
# D RELATION TO SUPERVISED POLICY UPDATE

Like V-MPO, Supervised Policy Update (SPU) (Vuong et al., 2019) adopts the strategy of first solving a nonparametric constrained optimization problem exactly, then fitting a neural network to the resulting solution via a supervised loss function. There is, however, an important difference from V-MPO, which we describe here.

In SPU, the KL loss, which is the sole loss in SPU, leads to a parametric optimization problem that is equivalent to the nonparametric optimization problem posed initially. To see this, we observe that the SPU loss seeks parameters (note the direction of the KL divergence)

$$
\begin{aligned}
\theta^{*} &= \arg\min_{\theta} \sum_{s} d^{\pi_{\theta_k}}(s)\, D_{\mathrm{KL}}\left(\pi_{\theta}(a|s) \| \pi^{\lambda}(a|s)\right) && (27) \\
&= \arg\min_{\theta} \sum_{s} d^{\pi_{\theta_k}}(s) \sum_{a} \pi_{\theta}(a|s) \log\left[\frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s) \exp\left(A^{\pi_{\theta_k}}(s, a) / \lambda\right) / Z_{\lambda}(s)}\right] && (28) \\
&= \arg\min_{\theta} \sum_{s} d^{\pi_{\theta_k}}(s) \sum_{a}\left[\pi_{\theta}(a|s) \log \frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)} - \frac{1}{\lambda} \pi_{\theta}(a|s) A^{\pi_{\theta_k}}(s, a)\right] + \{\text{constant terms}\}. && (29)
\end{aligned}
$$

Multiplying by $\lambda$, since it can be treated as a constant up to this point, we then see that this corresponds exactly to the Lagrangian form of the problem

$$
\theta^{*} = \arg\max_{\theta} \sum_{s} d^{\pi_{\theta_k}}(s) \sum_{a} \pi_{\theta}(a|s) A^{\pi_{\theta_k}}(s, a) \tag{30}
$$

$$
\text{s.t.} \quad \sum_{s} d^{\pi_{\theta_k}}(s)\, D_{\mathrm{KL}}\left(\pi_{\theta}(a|s) \| \pi_{\theta_k}(a|s)\right) < \epsilon, \tag{31}
$$

which is the original nonparametric problem posed in Vuong et al. (2019).
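The equivalence above can be checked numerically for a single state with discrete actions: the target $\pi^{\lambda} \propto \pi_{\theta_k} \exp(A/\lambda)$ is the exact maximizer of the Lagrangian objective $\sum_a \pi(a) A(a) - \lambda\, D_{\mathrm{KL}}(\pi \| \pi_{\theta_k})$ over the simplex. A small sketch with names of our own choosing:

```python
import numpy as np

def spu_target(pi_k, adv, lam):
    """pi^lambda proportional to pi_k * exp(A / lam); the normalizer plays
    the role of Z_lambda(s) in Eq. 28 (single state, discrete actions)."""
    w = pi_k * np.exp(adv / lam)
    return w / w.sum()

def lagrangian_objective(pi, pi_k, adv, lam):
    """sum_a pi(a) A(a) - lam * KL(pi || pi_k) for a single state."""
    return np.sum(pi * adv) - lam * np.sum(pi * np.log(pi / pi_k))
```

Evaluating the objective at `spu_target` and at random distributions confirms that no other point on the simplex does better.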
# E IMPORTANCE-WEIGHTING FOR OFF-POLICY CORRECTIONS

The network that generates the data may lag behind the target network in common distributed, asynchronous implementations (Espeholt et al., 2018). We can compensate for this by multiplying the exponentiated advantages by importance weights $\rho(s, a)$:

$$
\psi(s, a) = \frac{\rho(s, a)\, p_{\theta_{\mathcal{D}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathcal{D}}}}(s, a)}{\eta}\right)}{\sum_{s, a} \rho(s, a)\, p_{\theta_{\mathcal{D}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathcal{D}}}}(s, a)}{\eta}\right)}, \tag{32}
$$

$$
\mathcal{L}_{\eta}(\eta) = \eta \epsilon_{\eta} + \eta \log\left[\sum_{s, a} \rho(s, a)\, p_{\theta_{\mathcal{D}}}(s, a) \exp\left(\frac{A^{\pi_{\theta_{\mathcal{D}}}}(s, a)}{\eta}\right)\right], \tag{33}
$$

where $\theta_{\mathcal{D}}$ are the parameters of the behavior policy that generated $\mathcal{D}$, which may be different from $\theta_{\mathrm{target}}$. The clipped importance weights $\rho(s, a)$ are given by

$$
\rho(s, a) = \min\left(1, \frac{\pi_{\theta_{\mathrm{old}}}(a|s)}{\pi_{\theta_{\mathcal{D}}}(a|s)}\right). \tag{34}
$$

As was the case with V-trace for the value function, we did not find importance weighting necessary, and for simplicity none of the experiments presented in this work used it.

# F NETWORK ARCHITECTURE AND HYPERPARAMETERS

For DMLab the visual observations were $72 \times 96$ RGB images, while for Atari the observations were 4 stacked frames of $84 \times 84$ grayscale images. The ResNet used to process visual observations is similar to the 3-section ResNet used in Hessel et al. (2018), except the number of channels was multiplied by 4 in each section, so that the numbers of channels were (64, 128, 128) (Schmitt et al., 2019). For individual DMLab levels we used the same number of channels as Hessel et al. (2018), i.e., (16, 32, 32).
Each section consisted of a convolution and $3 \times 3$ max-pooling operation (stride 2), followed by residual blocks of size 2, i.e., a convolution followed by a ReLU nonlinearity, repeated twice, with a skip connection from each residual block's input to its output. The entire stack was passed through one more ReLU nonlinearity. All convolutions had a kernel size of 3 and a stride of 1. For the humanoid control tasks from vision, the numbers of channels in the sections were (16, 32, 32).

Since some of the levels in DMLab require simple language processing, for DMLab the agents contained an additional 256-unit LSTM receiving an embedding of hashed words as input. The output of the language LSTM was then concatenated with the output of the visual processing pathway as well as the previous reward and action, then fed to the main LSTM.

For multi-task DMLab we used a 3-layer LSTM, each layer with 256 units, and an unroll length of 95 with batch size 128. For the single-task setting we used a 2-layer LSTM. For multi-task Atari and the 56-dimensional humanoid-gaps control task a single 256-unit LSTM was used, while for the 22-dimensional humanoid-run task the core consisted only of a 2-layer MLP with 512 and 256 units (no LSTM). For single-task Atari a Transformer-XL was used in place of the LSTM. Note that we followed Radford et al. (2019) in placing the layer normalization on only the inputs to each sub-block. For Atari the unroll length was 63 with a batch size of 128. For both humanoid control tasks the batch size was 64, but the unroll length was 40 for the 22-dimensional humanoid and 63 for the 56-dimensional humanoid.

In all cases the policy logits (for discrete actions) and Gaussian distribution parameters (for continuous actions) consisted of a 256-unit MLP followed by a linear readout, and similarly for the value function.
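Since only the three stride-2 max-pools change the spatial resolution (the stride-1 convolutions and residual blocks preserve it), the output shape of the visual stack is easy to predict. A small helper, with a name of our own choosing and assuming SAME padding in the pooling:

```python
import math

def resnet_spatial_shape(height, width, num_sections=3):
    """Spatial size after the ResNet stack described above: each section's
    3x3 max-pool with stride 2 (SAME padding assumed) halves the resolution,
    rounding up; all convolutions are stride 1 and shape-preserving."""
    for _ in range(num_sections):
        height, width = math.ceil(height / 2), math.ceil(width / 2)
    return height, width
```

For the $72 \times 96$ DMLab input this gives a $9 \times 12$ feature map, and for the $84 \times 84$ Atari input an $11 \times 11$ map.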
For discrete actions we initialized the linear policy layer with zero weights and biases to ensure a uniform policy at the start of training.

The initial values for the Lagrange multipliers in the V-MPO loss are given in Table 1.

Implementation note. We implemented V-MPO in an actor-learner framework (Espeholt et al., 2018) that utilizes TF-Replicator (Buchlovsky et al., 2019) for distributed training on TPU 8-core and 16-core configurations (Google, 2018). One practical consequence of this is that a full batch of data $\mathcal{D}$ was in fact split into 8 or 16 minibatches, one per core/replica, and the overall result obtained by averaging the computations performed for each minibatch. More specifically, the selection of the highest advantages and the normalization of the nonparametric distribution, Eq. 12, were performed within minibatches. While it is possible to perform the full-batch computation by utilizing cross-replica communication, we found this to be unnecessary.

DMLab action set. Ignoring the "jump" and "crouch" actions, which we do not use, an action in the native DMLab action space consists of 5 integers whose meaning and allowed values are given in Table 2. Following previous work on DMLab (Hessel et al., 2018), we used the reduced action set given in Table 3 with an action repeat of 4.
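As a concrete illustration of the off-policy correction in Eqs. 32–34, the following NumPy sketch computes the clipped importance weights and the resulting normalized weights $\psi$ over a (mini)batch. This is our own illustrative sketch, not the authors' implementation; it assumes log-probabilities are available and that $p_{\theta_{\mathcal{D}}}(s, a)$ is uniform over the batch, so it cancels in the normalization.

```python
import numpy as np

def clipped_importance_weights(logp_old, logp_behavior):
    """Eq. 34: rho(s, a) = min(1, pi_old(a|s) / pi_D(a|s)),
    computed stably from log-probabilities."""
    return np.minimum(1.0, np.exp(logp_old - logp_behavior))

def psi_weights(advantages, rho, eta):
    """Eq. 32: importance-weighted exponentiated advantages,
    normalized over the (mini)batch so that the weights sum to 1."""
    w = rho * np.exp(advantages / eta)
    return w / w.sum()
```

Note that, as in the paper's per-minibatch normalization, the softmax-like denominator here runs only over the samples passed in, not over the full batch.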
| HYPERPARAMETER | DMLAB | ATARI | CONTINUOUS CONTROL |
|---|---|---|---|
| Initial $\eta$ | 1.0 | 1.0 | 1.0 |
| Initial $\alpha$ | 5.0 | 5.0 | - |
| Initial $\alpha_{\mu}$ | - | - | 1.0 |
| Initial $\alpha_{\Sigma}$ | - | - | 1.0 |

Table 1: Values for common V-MPO parameters.
| ACTION NAME | RANGE |
|---|---|
| LOOK_LEFT_RIGHT_PIXELS_PER_FRAME | [-512, 512] |
| LOOK_DOWN_UP_PIXELS_PER_FRAME | [-512, 512] |
| STRAFE_LEFT_RIGHT | [-1, 1] |
| MOVE_BACK_FORWARD | [-1, 1] |
| FIRE | [0, 1] |

Table 2: Native action space for DMLab. See https://github.com/deepmind/lab/blob/master/docs/users/actions.md for more details.
| ACTION | NATIVE DMLAB ACTION |
|---|---|
| Forward (FW) | [0, 0, 0, 1, 0] |
| Backward (BW) | [0, 0, 0, -1, 0] |
| Strafe left | [0, 0, -1, 0, 0] |
| Strafe right | [0, 0, 1, 0, 0] |
| Small look left (LL) | [-10, 0, 0, 0, 0] |
| Small look right (LR) | [10, 0, 0, 0, 0] |
| Large look left (LL) | [-60, 0, 0, 0, 0] |
| Large look right (LR) | [60, 0, 0, 0, 0] |
| Look down | [0, 10, 0, 0, 0] |
| Look up | [0, -10, 0, 0, 0] |
| FW + small LL | [-10, 0, 0, 1, 0] |
| FW + small LR | [10, 0, 0, 1, 0] |
| FW + large LL | [-60, 0, 0, 1, 0] |
| FW + large LR | [60, 0, 0, 1, 0] |
| Fire | [0, 0, 0, 0, 1] |

Table 3: Reduced action set for DMLab from Hessel et al. (2018).
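For reference, the reduced action set of Table 3 can be written down as a lookup table mapping action names to native DMLab action vectors. The key names below are our own shorthand; each native action is laid out as [look_lr, look_du, strafe_lr, move_bf, fire], following Table 2.

```python
# Reduced DMLab action set (Table 3); key names are our own shorthand.
REDUCED_ACTION_SET = {
    "forward":          [0, 0, 0, 1, 0],
    "backward":         [0, 0, 0, -1, 0],
    "strafe_left":      [0, 0, -1, 0, 0],
    "strafe_right":     [0, 0, 1, 0, 0],
    "small_look_left":  [-10, 0, 0, 0, 0],
    "small_look_right": [10, 0, 0, 0, 0],
    "large_look_left":  [-60, 0, 0, 0, 0],
    "large_look_right": [60, 0, 0, 0, 0],
    "look_down":        [0, 10, 0, 0, 0],
    "look_up":          [0, -10, 0, 0, 0],
    "fw_small_ll":      [-10, 0, 0, 1, 0],
    "fw_small_lr":      [10, 0, 0, 1, 0],
    "fw_large_ll":      [-60, 0, 0, 1, 0],
    "fw_large_lr":      [60, 0, 0, 1, 0],
    "fire":             [0, 0, 0, 0, 1],
}
```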
| LEVEL NAME | EPISODE REWARD (IMPALA) | EPISODE REWARD (V-MPO) | HUMAN-NORMALIZED (IMPALA) | HUMAN-NORMALIZED (V-MPO) |
|---|---|---|---|---|
| alien | 1163.00 ± 148.43 | 2332.00 ± 290.16 | 13.55 ± 2.15 | 30.50 ± 4.21 |
| amidar | 192.50 ± 9.16 | 423.60 ± 20.53 | 10.89 ± 0.53 | 24.38 ± 1.20 |
| assault | 4215.30 ± 294.51 | 1225.90 ± 60.64 | 768.46 ± 56.68 | 193.13 ± 11.67 |
| asterix | 4180.00 ± 303.91 | 9955.00 ± 2043.48 | 47.87 ± 3.66 | 117.50 ± 24.64 |
| asteroids | 3473.00 ± 381.30 | 2982.00 ± 164.35 | 5.90 ± 0.82 | 4.85 ± 0.35 |
| atlantis | 997530.00 ± 3552.89 | 940310.00 ± 6085.96 | 6086.50 ± 21.96 | 5732.81 ± 37.62 |
| bank_heist | 1329.00 ± 2.21 | 1563.00 ± 15.81 | 177.94 ± 0.30 | 209.61 ± 2.14 |
| battle_zone | 43900.00 ± 4738.04 | 61400.00 ± 5958.52 | 119.27 ± 13.60 | 169.52 ± 17.11 |
| beam_rider | 4598.00 ± 618.09 | 3868.20 ± 666.55 | 25.56 ± 3.73 | 21.16 ± 4.02 |
| berzerk | 1018.00 ± 72.63 | 1424.00 ± 150.93 | 35.68 ± 2.90 | 51.87 ± 6.02 |
| bowling | 63.60 ± 0.84 | 27.60 ± 0.62 | 29.43 ± 0.61 | 3.27 ± 0.45 |
| boxing | 93.10 ± 0.94 | 100.00 ± 0.00 | 775.00 ± 7.86 | 832.50 ± 0.00 |
| breakout | 484.30 ± 57.24 | 400.70 ± 18.82 | 1675.69 ± 198.77 | 1385.42 ± 65.36 |
| centipede | 6037.90 ± 994.99 | 3015.00 ± 404.97 | 39.76 ± 10.02 | 9.31 ± 4.08 |
| chopper_command | 4250.00 ± 417.91 | 4340.00 ± 714.45 | 52.29 ± 6.35 | 53.66 ± 10.86 |
| crazy_climber | 100440.00 ± 9421.56 | 116760.00 ± 5312.12 | 357.94 ± 37.61 | 423.09 ± 21.21 |
| defender | 41585.00 ± 4194.42 | 98395.00 ± 17552.17 | 244.78 ± 26.52 | 604.01 ± 110.99 |
| demon_attack | 77880.00 ± 8798.44 | 20243.00 ± 5434.41 | 4273.35 ± 483.72 | 1104.56 ± 298.77 |
| double_dunk | -0.80 ± 0.31 | 12.60 ± 1.94 | 809.09 ± 14.08 | 1418.18 ± 88.19 |
| enduro | 1187.90 ± 76.10 | 1453.80 ± 104.37 | 138.05 ± 8.84 | 168.95 ± 12.13 |
| fishing_derby | 21.60 ± 3.46 | 33.80 ± 2.10 | 213.77 ± 6.54 | 236.79 ± 3.96 |
| freeway | 32.10 ± 0.17 | 33.20 ± 0.28 | 108.45 ± 0.58 | 112.16 ± 0.93 |
| frostbite | 250.00 ± 0.00 | 260.00 ± 0.00 | 4.33 ± 0.00 | 4.56 ± 0.00 |
| gopher | 11720.00 ± 1687.71 | 7576.00 ± 973.13 | 531.92 ± 78.32 | 339.62 ± 45.16 |
| gravitar | 1095.00 ± 232.75 | 3125.00 ± 191.87 | 29.01 ± 7.32 | 92.88 ± 6.04 |
| hero | 13159.50 ± 68.90 | 29196.50 ± 752.06 | 40.71 ± 0.23 | 94.53 ± 2.52 |
| ice_hockey | 4.80 ± 1.31 | 10.60 ± 2.00 | 132.23 ± 10.83 | 180.17 ± 16.50 |
| jamesbond | 1015.00 ± 91.39 | 3805.00 ± 595.92 | 360.12 ± 33.38 | 1379.11 ± 217.65 |
| kangaroo | 1780.00 ± 18.97 | 12790.00 ± 629.52 | 57.93 ± 0.64 | 427.02 ± 21.10 |
| krull | 9738.00 ± 360.95 | 7359.00 ± 1064.84 | 762.53 ± 33.81 | 539.67 ± 99.75 |
| kung_fu_master | 44340.00 ± 2898.70 | 38620.00 ± 2346.48 | 196.11 ± 12.90 | 170.66 ± 10.44 |
| montezuma_revenge | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| ms_pacman | 1953.00 ± 227.12 | 2856.00 ± 324.54 | 24.77 ± 3.42 | 38.36 ± 4.88 |
| name_this_game | 5708.00 ± 354.92 | 9295.00 ± 679.83 | 59.33 ± 6.17 | 121.64 ± 11.81 |
| phoenix | 37030.00 ± 6415.95 | 19560.00 ± 1843.44 | 559.60 ± 98.99 | 290.05 ± 28.44 |
| pitfall | -4.90 ± 2.34 | -2.80 ± 1.40 | 3.35 ± 0.04 | 3.39 ± 0.02 |
| pong | 20.80 ± 0.19 | 21.00 ± 0.00 | 117.56 ± 0.54 | 118.13 ± 0.00 |
| private_eye | 100.00 ± 0.00 | 100.00 ± 0.00 | 0.11 ± 0.00 | 0.11 ± 0.00 |
| qbert | 5512.50 ± 741.08 | 15297.50 ± 1244.47 | 40.24 ± 5.58 | 113.86 ± 9.36 |
| riverraid | 8237.00 ± 97.09 | 11160.00 ± 733.06 | 43.72 ± 0.62 | 62.24 ± 4.65 |
| road_runner | 28440.00 ± 1215.99 | 51060.00 ± 1560.72 | 362.91 ± 15.52 | 651.67 ± 19.92 |
| robotank | 29.60 ± 2.15 | 46.80 ± 3.42 | 282.47 ± 22.22 | 459.79 ± 35.29 |
| seaquest | 1888.00 ± 63.26 | 9953.00 ± 973.02 | 4.33 ± 0.15 | 23.54 ± 2.32 |
| skiing | -16244.00 ± 592.28 | -15438.10 ± 1573.39 | 6.69 ± 4.64 | 13.01 ± 12.33 |
| solaris | 1794.00 ± 279.04 | 2194.00 ± 417.91 | 5.03 ± 2.52 | 8.64 ± 3.77 |
| space_invaders | 793.50 ± 90.61 | 1771.50 ± 201.95 | 42.45 ± 5.96 | 106.76 ± 13.28 |
| star_gunner | 44860.00 ± 5157.74 | 60120.00 ± 1953.60 | 461.05 ± 53.80 | 620.24 ± 20.38 |
| surround | 2.50 ± 1.04 | 4.00 ± 0.62 | 75.76 ± 6.31 | 84.85 ± 3.74 |
| tennis | -0.10 ± 0.09 | 23.10 ± 0.26 | 152.90 ± 0.61 | 302.58 ± 1.69 |
| time_pilot | 10890.00 ± 787.46 | 22330.00 ± 2443.11 | 440.77 ± 47.40 | 1129.42 ± 147.07 |
| tutankham | 218.50 ± 13.53 | 254.60 ± 9.99 | 132.59 ± 8.66 | 155.70 ± 6.40 |
| up_n_down | 175083.00 ± 16341.05 | 82913.00 ± 12142.08 | 1564.09 ± 146.43 | 738.18 ± 108.80 |
| venture | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| video_pinball | 59898.40 ± 23875.14 | 198845.20 ± 98768.54 | 339.02 ± 135.13 | 1125.46 ± 559.03 |
| wizard_of_wor | 6960.00 ± 1730.97 | 7890.00 ± 1595.77 | 152.55 ± 41.28 | 174.73 ± 38.06 |
| yars_revenge | 12825.70 ± 2065.90 | 41271.70 ± 4726.72 | 18.90 ± 4.01 | 74.16 ± 9.18 |
| zaxxon | 11520.00 ± 646.81 | 18820.00 ± 754.69 | 125.67 ± 7.08 | 205.53 ± 8.26 |
| Median | | | 117.56 | 155.70 |

Table 4: Multi-task Atari-57 scores by level after 11.4B total (200M per level) environment frames. All entries show mean ± standard deviation. Data for IMPALA ("PopArt-IMPALA") was obtained from the authors of Hessel et al. (2018). Human-normalized scores are calculated as $(E - R) / (H - R) \times 100$, where $E$ is the episode reward, $R$ is the episode reward obtained by a random agent, and $H$ is the episode reward obtained by a human.
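The human-normalized scores in Table 4 follow the formula given in the caption; a one-line helper (the function name is ours) makes the convention explicit: 0 corresponds to a random agent and 100 to human-level performance.

```python
def human_normalized_score(episode_reward, random_reward, human_reward):
    """(E - R) / (H - R) * 100, as defined in the Table 4 caption."""
    return 100.0 * (episode_reward - random_reward) / (human_reward - random_reward)
```

Scores above 100 (e.g., atlantis or breakout in Table 4) simply mean the agent exceeds the human reference reward.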
| SETTING | SINGLE-TASK | MULTI-TASK |
|---|---|---|
| Agent discount | 0.99 | 0.99 |
| Image height | 72 | 72 |
| Image width | 96 | 96 |
| Number of action repeats | 4 | 4 |
| Number of LSTM layers | 2 | 3 |
| Pixel-control cost | $2 \times 10^{-3}$ | $2 \times 10^{-3}$ |
| $T_{\text{target}}$ | 10 | 10 |
| $\epsilon_{\eta}$ | 0.1 | 0.5 |
| $\epsilon_{\alpha}$ (log-uniform) | [0.001, 0.01) | [0.01, 0.1) |

Table 5: Settings for DMLab.
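Several settings in Tables 5–7 specify the $\epsilon_{\alpha}$ constraints as log-uniform ranges, i.e., the hyperparameter is drawn uniformly in log space. A minimal sketch of such a sampler (our own helper, not from the paper) is:

```python
import math
import random

def sample_log_uniform(low, high, rng=None):
    """Draw a value from [low, high] log-uniformly: uniform in log space,
    so each decade of the range is equally likely."""
    rng = rng or random
    return math.exp(rng.uniform(math.log(low), math.log(high)))
```

For example, `sample_log_uniform(0.001, 0.01)` reproduces the single-task DMLab $\epsilon_{\alpha}$ range from Table 5.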
| SETTING | SINGLE-TASK | MULTI-TASK |
|---|---|---|
| Environment discount on end of life | 1 | 0 |
| Agent discount | 0.997 | 0.99 |
| Clipped reward range | no clipping | [-1, 1] |
| Max episode length | 30 mins (108,000 frames) | 30 mins (108,000 frames) |
| Image height | 84 | 84 |
| Image width | 84 | 84 |
| Grayscale | True | True |
| Number of stacked frames | 4 | 4 |
| Number of action repeats | 4 | 4 |
| TrXL: Key/Value size | 32 | - |
| TrXL: Number of heads | 8 | - |
| TrXL: Number of layers | 8 | - |
| TrXL: MLP size | 512 | - |
| $T_{\text{target}}$ | 1000 | 100 |
| $\epsilon_{\eta}$ | $1 \times 10^{-1}$ | $1 \times 10^{-1}$ |
| $\epsilon_{\alpha}$ (log-uniform) | [0.005, 0.01) | [0.001, 0.01) |

Table 6: Settings for Atari. TrXL: Transformer-XL (single-task only).
| SETTING | HUMANOID-PIXELS | HUMANOID-STATE | OPENAI GYM |
|---|---|---|---|
| Agent discount | 0.99 | 0.99 | 0.99 |
| Unroll length | 63 | 63 | 39 |
| Image height | 64 | - | - |
| Image width | 64 | - | - |
| Target update period | 100 | 100 | 100 |
| $\epsilon_{\eta}$ | 0.1 | 0.1 | 0.01 |
| $\epsilon_{\alpha_{\mu}}$ (log-uniform) | [0.01, 1.0) | [0.05, 0.5) | [0.005, 0.01) |
| $\epsilon_{\alpha_{\Sigma}}$ (log-uniform) | [$5 \times 10^{-6}$, $5 \times 10^{-5}$) | [$10^{-5}$, $5 \times 10^{-5}$) | [$5 \times 10^{-6}$, $5 \times 10^{-5}$) |
Table 7: Settings for continuous control. For the humanoid gaps task from pixels the physics time step was 5 ms and the control time step 30 ms.

![](images/a81de6169fcf0a688d4a36fd0eecccd81d08fe9fb4577e75d0a55419063e4a86.jpg)
Figure 6: Multi-task Atari-57 with population-based training (PBT) (Jaderberg et al., 2017a). All settings of the PBT experiment were the same as without PBT, except that the learning rates were also sampled log-uniformly from $[8 \times 10^{-5}, 3 \times 10^{-4})$ and $\epsilon_{\eta}$ from [0.05, 0.5). Along with $\epsilon_{\alpha}$ sampled log-uniformly from [0.001, 0.01) as in the original experiment, hyperparameters were evolved via copy and mutation operators roughly once every $4 \times 10^{8}$ environment frames.

![](images/90d0ad7db3c3e6e2033a0a3434dd62623c61fd0bc0e0069cfe9d74af23ffd67c.jpg)
Figure 7: KL constraints during optimization for the Seaquest example in Fig. 4c. Values are subsampled but not smoothed to show the variability.

![](images/3369c4efcf6a6f49068cd8c764de7a20c7fc0f6f7a6ee9b6d119aa16261b892f.jpg)
Figure 8: Same as Fig. 4c (Atari Seaquest), but trained with uniform weights on the top $50\%$ of advantages.

![](images/61dc6231ec61ffc9d31d04418ea2c3f83d2ebfd7200422238ee1cc25a89b528e.jpg)
Figure 9: Same as Fig. 2a (multi-task DMLab-30), but trained without top-$k$, i.e., all advantages are used in the E-step. Note that the small dip in the middle is due to a pause in the experiment and a resetting of the human-normalized scores.

![](images/2a53c7c38403489857cd4e546f5dfa74cc9d06022ad48f40175294272db195b6.jpg)
Figure 10: Example frame from the humanoid gaps task, with the agent's $64 \times 64$ first-person view on the right.
The proprioceptive information provided to the agent in addition to the primary pixel observation consisted of joint angles and velocities, root-to-end-effector vectors, root-frame velocity, rotational velocity, root-frame acceleration, and the 3D orientation relative to the $z$-axis.

![](images/16543ba135650851fdbeeb9c6a71fea9fac7d2bb7f95bc381b200f6b3174ab06.jpg)
Figure 11: 17-dimensional Humanoid-V1 task in OpenAI Gym.