diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/AsdwkIqO74/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/AsdwkIqO74/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..37cae41769fa815365ebaba4692db078cbe5b5c3 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/AsdwkIqO74/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,303 @@ +# Recurrent Flow Networks: A Recurrent Latent Variable Model for Density Modelling of Urban Mobility + +Anonymous Authors ${}^{1}$ + +## Abstract + +Mobility-on-demand (MoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of vehicles. Crucially, the efficiency of an MoD system highly depends on how well supply and demand distributions are aligned in spatio-temporal space (i.e., to satisfy user demand, cars have to be available in the correct place and at the desired time). When modelling urban mobility as temporal sequences, current approaches typically rely on either (i) a spatial discretization (e.g. ConvLSTMs), or (ii) a Gaussian mixture model to describe the conditional output distribution. In this paper, we argue that both of these approaches could exhibit structural limitations when faced with highly complex data distributions such as for urban mobility densities. To address this issue, we introduce recurrent flow networks which combine deterministic and stochastic recurrent hidden states with conditional normalizing flows and show how the added flexibility allows our model to generate distributions matching potentially complex urban topologies. + +## 1. 
Introduction

With the growing prevalence of smartphones in our daily lives, companies such as Uber, Lyft, and DiDi have been pioneering Mobility-on-Demand (MoD) and online ride-hailing platforms as a solution capable of providing a more efficient and personalized transportation service. Notably, an efficient MoD system could allow for reduced idle times and higher fulfillment rates, thus offering a better user experience for both driver and passenger groups. The efficiency of an MoD system highly depends on the ability to model and accurately forecast the need for transportation, so as to enable service providers to take operational decisions in strong accordance with user needs and preferences. However, the complexity of the geo-spatial distributions characterizing MoD demand requires flexible models that can capture rich, time-dependent 2D patterns and adapt to complex urban geographies (e.g. the presence of rivers, irregular landforms, etc.).

Historically, dynamic Bayesian networks (DBNs), such as hidden Markov models (HMMs) and state space models (SSMs) (Durbin & Koopman, 2001), have characterized a unifying probabilistic framework with illustrious successes in modelling time-dependent dynamics. Advances in deep learning architectures, however, shifted this supremacy towards the field of Recurrent Neural Networks (RNNs). At a high level, both DBNs and RNNs can be framed as parametrizations of two core components: 1) a transition function characterizing the time-dependent evolution of a learned internal representation, and 2) an emission function denoting a mapping from representation space to observation space. Recently, evidence has been gathered in favor of combinations bringing together the representative power of RNNs with the consistent handling of uncertainties given by probabilistic approaches (Chung et al., 2015; Fraccaro et al., 2016; Krishnan et al., 2016; Karl et al., 2017). 
The core concept underlying recent developments is the idea that, in current RNNs, the only source of variability is found in the conditional emission distribution (i.e. typically a unimodal distribution or a mixture of unimodal distributions). Most efforts have therefore concentrated on building models capable of effectively propagating uncertainty through the transition function of RNNs.

In this paper, we build on these recent advances by shifting the focus towards more flexible emission functions. We suggest that the traditional treatment of output variability through the parametrization of either (i) unimodal (or mixtures of unimodal) distributions, or (ii) discretized representations of naturally-continuous distributions, may act as a bottleneck in cases characterized by complex data distributions, such as the ones observed in urban mobility. We propose the use of Conditional Normalizing Flows (CNFs) (Winkler et al., 2020) as a general approach to define arbitrarily expressive output probability distributions under temporal dynamics. On the one hand, we model the temporal variability in the data through a transition function combining stochastic and deterministic states; on the other, we use this mixed hidden representation as a conditioning variable to capture the output variability with a CNF. We call this model a Recurrent Flow Network (RFN).

To summarize, the main contributions of this paper are twofold: first, we propose a probabilistic neural generative model which combines deterministic and stochastic temporal representations with the flexibility of normalizing flows in the conditional output distribution. Second, we showcase how our model is able to represent fine-grained urban mobility patterns on several real-world tasks, which could drastically impact downstream decision-making processes in current mobility systems.

![01963e2b-bcd4-7927-b39a-cc985913d5e3_1_203_183_1351_423_0.jpg](images/01963e2b-bcd4-7927-b39a-cc985913d5e3_1_203_183_1351_423_0.jpg)

Figure 1. Graphical model of the operations defining the RFN: a) transition function defined in Eq. (1) and Eq. (2); b) emission function as in Eq. (3) and Eq. (4); c) inference network using Eq. (5); d) overall RFN graphical model. Shaded nodes represent observed variables, while un-shaded nodes represent either deterministic (diamond-shaped) or stochastic (circles) hidden states. For sequence generation, a traditional approach is to use ${\mathbf{u}}_{t} = {\mathbf{x}}_{t - 1}$.

## 2. Recurrent Flow Networks

In this section, we define the generative model ${p}_{\theta }$ and inference network ${q}_{\phi }$ characterizing the RFN for the purpose of sequence modelling. RFNs explicitly model temporal dependencies by combining deterministic and stochastic layers. The resulting intractability of the posterior distribution over the latent states ${\mathbf{z}}_{1 : T}$, as in the case of VAEs (Kingma & Welling, 2014; Rezende et al., 2014), is approached by learning a tractable approximation through amortized variational inference. The schematic view of the RFN is shown in Fig. 1.
+ +Generative model As in (Fraccaro et al., 2016), the transition function of the RFN interlocks an SSM with an RNN: + +$$ +{\mathbf{h}}_{t} = {f}_{{\theta }_{\mathbf{h}}}\left( {{\mathbf{h}}_{t - 1},{\varphi }_{\tau }^{\operatorname{extr}}\left( {\mathbf{u}}_{t}\right) }\right) \tag{1} +$$ + +$$ +{\mathbf{z}}_{t} \sim \mathcal{N}\left( {{\mathbf{\mu }}_{0, t},\operatorname{diag}\left( {\mathbf{\sigma }}_{0, t}^{2}\right) }\right) , \tag{2} +$$ + +$$ +\text{with}\left\lbrack {{\mathbf{\mu }}_{0, t},{\mathbf{\sigma }}_{0, t}}\right\rbrack = {f}_{{\theta }_{\mathbf{z}}}\left( {{\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) \text{,} +$$ + +where ${\mathbf{\mu }}_{0, t}$ and ${\mathbf{\sigma }}_{0, t}$ represent the parameters of the conditional prior distribution over the stochastic hidden states ${\mathbf{z}}_{1 : T}$ . In our implementation, ${f}_{{\theta }_{\mathbf{h}}}$ and ${f}_{{\theta }_{\mathbf{z}}}$ are respectively an LSTM cell and a deep feed-forward neural network, with parameters ${\theta }_{\mathbf{h}}$ and ${\theta }_{\mathbf{z}}$ . In Eq. (1), ${\varphi }_{\tau }^{\text{extr }}$ can also be a neural network extracting features from ${\mathbf{u}}_{t}$ . Unlike the SRNN, the learned representations (i.e. ${\mathbf{z}}_{1 : T},{\mathbf{h}}_{1 : T}$ ) are used as conditioners for a CNF parametrizing the output distribution. 
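As a concrete illustration, the transition of Eq. (1) and Eq. (2) can be sketched with a minimal NumPy implementation. All dimensions and weights below are toy stand-ins: the matrices are random and untrained, the feature extractor ${\varphi }_{\tau }^{\text{extr}}$ is a single ReLU layer, and the hand-written LSTM cell stands in for the learned ${f}_{{\theta }_{\mathbf{h}}}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_cell(u, h, c, Wx, Wh, b):
    # minimal LSTM cell standing in for f_theta_h in Eq. (1)
    pre = Wx @ u + Wh @ h + b
    i, f, g, o = np.split(pre, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

D_u, D_h, D_z = 2, 8, 4  # toy input, deterministic and stochastic state sizes
Wx = 0.1 * rng.normal(size=(4 * D_h, D_h))
Wh = 0.1 * rng.normal(size=(4 * D_h, D_h))
b_lstm = np.zeros(4 * D_h)
W_extr = 0.1 * rng.normal(size=(D_h, D_u))         # phi^extr feature map
W_z = 0.1 * rng.normal(size=(2 * D_z, D_z + D_h))  # f_theta_z of Eq. (2)

h, c, z = np.zeros(D_h), np.zeros(D_h), np.zeros(D_z)
for u_t in rng.normal(size=(5, D_u)):       # short toy input sequence, e.g. u_t = x_{t-1}
    feat = np.maximum(W_extr @ u_t, 0.0)    # ReLU feature extraction of u_t
    h, c = lstm_cell(feat, h, c, Wx, Wh, b_lstm)       # deterministic h_t update
    mu0, log_sig0 = np.split(W_z @ np.concatenate([z, h]), 2)
    z = mu0 + np.exp(log_sig0) * rng.normal(size=D_z)  # z_t ~ N(mu, diag(sigma^2))

print(h.shape, z.shape)  # -> (8,) (4,)
```

In the full model, ${f}_{{\theta }_{\mathbf{z}}}$ is a deep feed-forward network and all parameters are trained jointly via the ELBO; the sketch only shows how each time-step yields a $({\mathbf{h}}_{t}, {\mathbf{z}}_{t})$ pair.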
That is, for every time-step $t$, we learn a complex distribution $p\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)$ by defining the conditional base distribution $p\left( {{\mathbf{b}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)$ and conditional coupling layers (Dinh et al., 2017) for the transformation ${T}_{\psi }$ as follows:

Conditional Prior:

$$
{\mathbf{b}}_{t} \sim \mathcal{N}\left( {{\mathbf{\mu }}_{b, t},\operatorname{diag}\left( {\mathbf{\sigma }}_{b, t}^{2}\right) }\right) , \tag{3}
$$

$$
\text{with}\left\lbrack {{\mathbf{\mu }}_{b, t},{\mathbf{\sigma }}_{b, t}}\right\rbrack = {f}_{\psi }\left( {{\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)
$$

Conditional Coupling:

$$
{\mathbf{b}}_{t, d + 1 : D} = {\mathbf{x}}_{t, d + 1 : D} \odot \exp \left( {{s}_{\psi }\left( {{\mathbf{x}}_{t,1 : d},{\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) }\right) + {t}_{\psi }\left( {{\mathbf{x}}_{t,1 : d},{\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) \tag{4}
$$

$$
{\mathbf{b}}_{t,1 : d} = {\mathbf{x}}_{t,1 : d},
$$

where ${\mathbf{\mu }}_{b, t}$ and ${\mathbf{\sigma }}_{b, t}$ represent the parameters of the conditional base distribution (determined by a learnable function ${f}_{\psi }$), while ${s}_{\psi }$ and ${t}_{\psi }$ denote the conditional scale and translation functions characterizing the coupling layers in the CNF. In our implementation, ${f}_{\psi },{s}_{\psi }$ and ${t}_{\psi }$ are parametrized by deep neural networks. Together, Eq. (3) and Eq. 
(4) define the emission function, yielding the factorization $p\left( {\mathbf{x},\mathbf{z},\mathbf{h}}\right) =$ $\mathop{\prod }\limits_{{t = 1}}^{T}{p}_{{\theta }_{\mathbf{x}}}\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) {p}_{{\theta }_{\mathbf{z}}}\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) {p}_{{\theta }_{\mathbf{h}}}\left( {{\mathbf{h}}_{t} \mid {\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right)$, where the emission and transition distributions have parameters ${\theta }_{\mathbf{x}},{\theta }_{\mathbf{z}},{\theta }_{\mathbf{h}}$, and where we assume that ${\mathbf{h}}_{t}$ follows a delta distribution centered at ${\mathbf{h}}_{t} = {f}_{{\theta }_{\mathbf{h}}}\left( {{\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right)$.

Inference The variational approximation defining the RFN directly depends on ${\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}$ and ${\mathbf{x}}_{t}$ as follows:

$$
{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) = \mathcal{N}\left( {{\mathbf{\mu }}_{z, t},\operatorname{diag}\left( {\mathbf{\sigma }}_{z, t}^{2}\right) }\right) , \tag{5}
$$

$$
\text{with}\left\lbrack {{\mathbf{\mu }}_{z, t},{\mathbf{\sigma }}_{z, t}}\right\rbrack = {\varphi }_{\tau }^{\mathrm{{enc}}}\left( {{\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) \text{,}
$$

where ${\varphi }_{\tau }^{\text{enc }}$ is an encoder network defining the parameters ${\mathbf{\mu }}_{z, t}$ and ${\mathbf{\sigma }}_{z, t}$ of the approximate posterior distribution. 
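The conditional affine coupling of Eq. (4) is exactly invertible and has a triangular Jacobian, which is what makes the flow's log-likelihood tractable. A minimal NumPy sketch, where toy linear maps stand in for the deep scale and translation networks ${s}_{\psi }$ and ${t}_{\psi }$:

```python
import numpy as np

def cond_coupling_forward(x, cond, s_fn, t_fn, d):
    # Eq. (4): identity on the first d dims; affine transform of the rest,
    # conditioned on (x_{1:d}, z_t, h_t) collected in `ctx`
    ctx = np.concatenate([x[:d], cond])
    s, t = s_fn(ctx), t_fn(ctx)
    b = x.copy()
    b[d:] = x[d:] * np.exp(s) + t
    log_det = np.sum(s)        # log|det(db/dx)| of the triangular Jacobian
    return b, log_det

def cond_coupling_inverse(b, cond, s_fn, t_fn, d):
    ctx = np.concatenate([b[:d], cond])  # valid since b_{1:d} == x_{1:d}
    s, t = s_fn(ctx), t_fn(ctx)
    x = b.copy()
    x[d:] = (b[d:] - t) * np.exp(-s)
    return x

rng = np.random.default_rng(1)
D, d, C = 2, 1, 3                        # 2-d lon/lat data, toy 3-d conditioner
W_s = rng.normal(size=(D - d, d + C))
W_t = rng.normal(size=(D - d, d + C))
s_fn = lambda ctx: np.tanh(W_s @ ctx)    # bounded scale for numerical stability
t_fn = lambda ctx: W_t @ ctx

x = rng.normal(size=D)
cond = rng.normal(size=C)                # stand-in for the concatenated [z_t, h_t]
b, log_det = cond_coupling_forward(x, cond, s_fn, t_fn, d)
x_rec = cond_coupling_inverse(b, cond, s_fn, t_fn, d)
print(np.allclose(x, x_rec))  # -> True: the transform is exactly invertible
```

The `log_det` term is what enters the change-of-variables formula when evaluating $\log p\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)$ through the base density of Eq. (3).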
Given the above structure, the generative and inference models are tied through the RNN hidden state ${\mathbf{h}}_{t}$, resulting in the factorization given by:

$$
{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) . \tag{6}
$$

In addition to the explicit dependence of the approximate posterior on ${\mathbf{x}}_{t}$ and ${\mathbf{h}}_{t}$, the inference network defined in Eq. (5) also exhibits an implicit dependence on ${\mathbf{x}}_{1 : t}$ and ${\mathbf{h}}_{1 : t}$ through ${\mathbf{z}}_{t - 1}$. This implicit dependency on all information from the past can be seen as resembling a filtering approach from the state-space model literature (Durbin & Koopman, 2001). Denoting $\theta$ and $\phi$ as the sets of model and variational parameters respectively, variational inference offers a scheme for jointly optimizing the model parameters $\theta$ and computing an approximation to the posterior distribution by maximizing the evidence lower bound ${}^{1}$ (i.e. ELBO).

## 3. Experiments

We evaluate the proposed RFN on three transportation datasets:

NYC Taxi (NYC-P/D): This dataset is released by the New York City Taxi and Limousine Commission. We aggregated the taxi demand into 2-hour bins for the month of March 2016, containing 249,637 trip geo-coordinates. We further differentiated the task of modelling pick-ups (i.e. where the demand is) and drop-offs (i.e. where people want to go). In what follows, we denote the two datasets as NYC-P and NYC-D respectively.

Copenhagen Bike-Share (CPH-BS): This dataset contains geo-coordinates from users accessing the smartphone app of Donkey Republic, one of the major bike sharing services in Copenhagen, Denmark. 
As for the case of New York, we aggregated the geo-coordinates in 2-hour bins for the month of August, resulting in 87,740 app accesses.

Models We compare the proposed RFN against various baselines assuming both continuous and discrete support for the output distribution. In particular, in the continuous case (i.e. where we assume to be modelling a 2-dimensional distribution directly in longitude-latitude space), we consider RNN, VRNN (Chung et al., 2015) and SRNN (Fraccaro et al., 2016) models, each using two different MDN-based emission distributions. That is, we compare against a GMM output parametrized by Gaussians with either a diagonal (MDN-Diag) or full (MDN-Full) covariance matrix. On the other hand, when assuming discrete support for the output distribution (i.e. we divide the map into tiled non-overlapping patches and view the pixels inside a patch as its measurements), we consider the Convolutional LSTM (ConvLSTM) (Shi et al., 2015), which leverages the spatial information encoded in the sequences by substituting the matrix operations in the standard LSTM formulation with convolutions.

Results One-step Prediction: In Table 1 we compare test log-likelihoods on the tasks of continuous spatio-temporal demand modelling for the cases of New York and Copenhagen. We report exact log-likelihoods for both RNN-MDN-Diag and RNN-MDN-Full, while in the case of VRNNs, SRNNs and RFNs we report the importance sampling approximation to the marginal log-likelihood using 30 samples, as in (Rezende et al., 2014).

Table 1. Test log-likelihood for each task under the continuous support assumption. For non-deterministic models the approximation on the marginal log-likelihood is given with the $\approx$ sign.

| Models | NYC-P | NYC-D | CPH-BS |
| --- | --- | --- | --- |
| RNN-MDN-Diag | 163582 | 143765 | 49124 |
| RNN-MDN-Full | 164016 | 146676 | 50109 |
| VRNN-MDN-Diag $\approx$ | 161345 | 139964 | 49231 |
| VRNN-MDN-Full $\approx$ | 162549 | 143671 | 49664 |
| SRNN-MDN-Diag $\approx$ | 164830 | 143719 | 49331 |
| SRNN-MDN-Full $\approx$ | 164976 | 147400 | 49810 |
| RFN $\approx$ | 168734 | 148291 | 51100 |

We see from Table 1 that the RFN outperforms competing methods, yielding higher log-likelihood across all tasks. The results support our claim that more flexible output distributions are advantageous when modelling potentially complex and structured temporal data distributions. To further illustrate this, in Fig. 2, we show a visualization of the predicted spatial densities (one-step-ahead) from three of the implemented models at specific times of the day. As opposed to GMM-based densities, the figures show how the RFN exploits the flexibility of conditional normalizing flows to generate sharper distributions capable of better approximating complex shapes such as geographical landforms or urban topologies (e.g. Central Park, or the sharper edges in proximity of the Hudson river along the west side of Manhattan).

Multi-step Prediction: In order to take reliable strategic decisions, service providers might also be interested in obtaining full roll-outs of demand predictions, as opposed to 1-step predictions. To do so, we generate entire sequences in an autoregressive way (i.e., the prediction at time-step $t$ is fed back into the model at $t + 1$) and analyze the ability of the proposed model to unroll over different forecasting horizons. From a methodological point of view, we are interested in measuring the effect of explicitly modelling the stochasticity in the temporal evolution of demand, as opposed to fully-deterministic architectures. In this regard, in Table 2 we compare the RFN with the most competitive deterministic benchmark (i.e. RNN-MDN-Full). 
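The 30-sample importance-sampling approximation to the marginal log-likelihood reported above can be illustrated on a toy latent-variable model. The model and proposal below (Gaussian prior, likelihood, and encoder) are illustrative stand-ins, not the RFN itself; the mechanism is the same log-mean-exp over weighted proposal samples:

```python
import numpy as np

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

def is_marginal_loglik(x, K=30, seed=2):
    # log p(x) ~= log (1/K) sum_k p(x | z_k) p(z_k) / q(z_k | x),  z_k ~ q(. | x)
    # toy model: p(z) = N(0, 1), p(x|z) = N(z, 1); proposal q(z|x) = N(x/2, 0.8^2)
    rng = np.random.default_rng(seed)
    z = x / 2 + 0.8 * rng.normal(size=K)
    log_w = log_gauss(x, z, 1.0) + log_gauss(z, 0.0, 1.0) - log_gauss(z, x / 2, 0.8)
    m = log_w.max()                       # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_w - m)))

x = 1.3
estimate = is_marginal_loglik(x, K=5000)
exact = log_gauss(x, 0.0, np.sqrt(2.0))   # the toy marginal is N(0, 2)
print(estimate)
```

With enough samples the estimate converges to the exact marginal; in the paper's setting the weights are $p_{\theta}(\mathbf{x}_{1:T}, \mathbf{z}_{1:T}) / q_{\phi}(\mathbf{z}_{1:T} \mid \mathbf{x}_{1:T})$ with $K = 30$.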
As the results suggest, the stochasticity in the transition probability allows the RFN to better capture the temporal dynamics, thus resulting in a lower performance decay in comparison with the fully deterministic RNN-MDN assuming a full covariance.

Table 2. Test log-likelihood comparison of RFN and RNN-MDN-Full for different forecast horizons on the NYC-P task.

| Models | t+2 | t+5 | t+10 | full (t+90) |
| --- | --- | --- | --- | --- |
| RNN-MDN | 162891 | 161065 | 160099 | 158922 |
| RFN $\approx$ | 167509 | 167400 | 167359 | 167392 |

Quantization: As a further analysis, we compare the proposed RFN with a Convolutional LSTM, under the assumption that the spatial map has been discretized into a ${64} \times {64}$ pixel space described by a Categorical distribution. This comparison is particularly relevant given the prevalence of ConvLSTMs in spatio-temporal travel modelling applications (Petersen et al., 2019; Yuan et al., 2018; Wang et al., 2018). As previously introduced, the RFN is naturally defined by a continuous output distribution (in practice parametrized as a normalizing flow); thus, in order to obtain a valid comparison, we apply a quantization procedure yielding a discrete output distribution for the RFN. In particular, the implemented quantization procedure can be summarized in the following steps: (i) as in the continuous case, evaluate the approximated marginal log-likelihood under the trained RFN at the pixel centers of a ${64} \times {64}$ grid; (ii) normalize the computed log-likelihood logits through a softmax function; and (iii) evaluate the log-likelihood under a Categorical distribution characterized by the probabilities computed in (ii), thus obtaining values comparable with the output of the ConvLSTM. Table 3 compares test log-likelihoods on the task of discrete spatio-temporal demand modelling. When considering the results in Table 3, two relevant observations must be underlined. First of all, the true output of the RFN (i.e. before quantization) is a continuous density; thus, its discretization will, by definition, result in a loss of information and granularity. Secondly, and most importantly, the quantization is applied as a post-processing evaluation step; thus, as opposed to the implemented ConvLSTMs, the RFNs are not directly optimizing for the objective evaluated in Table 3. In light of this, the results under the discretized-space assumption further support our claims on the effectiveness of the RFN in approximating spatially complex distributions.

---

${}^{1}$ Please refer to the Appendix for the derivation.

---
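Steps (i)-(iii) of the quantization procedure can be sketched as follows; a toy analytic log-density stands in for the trained RFN's continuous output:

```python
import numpy as np

def quantize_density(log_density_fn, n=64):
    # (i) evaluate the continuous log-density at the pixel centers of an n x n grid,
    # (ii) softmax-normalize the resulting logits over the grid,
    # (iii) return Categorical log-probabilities comparable to a ConvLSTM output
    centers = (np.arange(n) + 0.5) / n             # pixel centers in [0, 1]
    gx, gy = np.meshgrid(centers, centers)
    logits = log_density_fn(gx, gy)
    logits = logits - logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over all pixels
    return np.log(probs)

# toy stand-in for the trained RFN's continuous log-density (a single bump)
toy_log_density = lambda x, y: -50.0 * ((x - 0.3) ** 2 + (y - 0.7) ** 2)

logp = quantize_density(toy_log_density)
print(logp.shape)  # -> (64, 64) grid of Categorical log-probabilities
```

Because the grid evaluation is a cheap post-processing step, the same trained model can be re-quantized at any resolution `n` without retraining.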
Moreover, the ability of the RFN to model a continuous spatial density, as opposed to a discretized approach as in the case of ConvLSTMs, has several theoretical and practical advantages. First, RFNs are able to evaluate the log-likelihood of individual data points, e.g. for anomaly and hotspot detection. Second, ConvLSTMs define a discretized space whose cells might have different natural landscape characteristics (e.g. rivers, lakes), effectively changing the dimension of the support in each bin and making comparisons of log-likelihoods across pixels an ill-posed question. Furthermore, while exploring different levels of discretization with discrete output distributions would require repeatedly training independent ConvLSTM networks, the post-processing quantization of the RFN allows the discretization to be performed instantaneously, thus enabling fast prototyping and exploration of discretization levels.

![01963e2b-bcd4-7927-b39a-cc985913d5e3_3_153_454_701_246_0.jpg](images/01963e2b-bcd4-7927-b39a-cc985913d5e3_3_153_454_701_246_0.jpg)

Figure 2. Generated spatio-temporal densities from SRNN-MDN-Diag, SRNN-MDN-Full and RFN on the NYC-P dataset. The blue (low) to red (high) log-likelihood heatmaps show models defined by increasing flexibility (best viewed in color).

Table 3. Test log-likelihood for each task under the discrete support assumption. For the RFN, results are given after a quantization procedure mapping from the continuous 2D space to the ${64} \times {64}$ pixel space used to train the ConvLSTMs.

| Models | NYC-P | NYC-D | CPH-BS |
| --- | --- | --- | --- |
| ConvLSTM | -352962 | -350803 | -112548 |
| RFN (Quantized) | -339745 | -349627 | -110999 |
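The anomaly-detection use case mentioned above amounts to thresholding the continuous log-density at observed points. A minimal sketch, with a hypothetical analytic log-density standing in for the trained model and a planted pair of outliers:

```python
import numpy as np

rng = np.random.default_rng(3)

# stand-in for the RFN's continuous log-density (a single Gaussian-like bump);
# in practice this would be the trained model's conditional log p(x_t | z_t, h_t)
def log_density(pts):
    return -50.0 * np.sum((pts - np.array([0.3, 0.7])) ** 2, axis=1)

inliers = np.array([0.3, 0.7]) + 0.05 * rng.normal(size=(200, 2))
outliers = np.array([[0.95, 0.05], [0.90, 0.95]])   # far from the density mode
pts = np.vstack([inliers, outliers])

scores = log_density(pts)
threshold = np.quantile(scores, 0.05)   # flag the lowest-density 5% as anomalies
flagged = np.where(scores < threshold)[0]
print(200 in flagged.tolist(), 201 in flagged.tolist())  # -> True True
```

The same per-point evaluation is exactly what a discretized ConvLSTM output cannot provide below its pixel resolution.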
## 4. Related Work

A number of works have concentrated on defining more flexible emission functions for sequential models (Rasul et al., 2021; Kumar et al., 2020; Castrejon et al., 2019). Within this line of research, normalizing flows are used either to parametrize the emission function for the tasks of video generation and multi-variate time series forecasting (Castrejon et al., 2019; Rasul et al., 2021), or to describe the latent states representing the temporal evolution of the system (Kumar et al., 2020). However, to the authors' best knowledge, there have been no previous attempts at the task of urban mobility modelling.

Within the transportation domain, traditional approaches rely on spatial discretizations of the urban topology (Yuan et al., 2018; Petersen et al., 2019; Wang et al., 2018), which allow for the prediction of spatio-temporal sequences with discrete support through e.g. ConvLSTMs (Shi et al., 2015).

## 5. Conclusions

This work addresses the problem of continuous spatio-temporal density modelling by proposing the use of conditional normalizing flows as a general approach to parametrize the output distribution of recurrent latent variable models. Our experiments focus on real-world data for the task of urban mobility density modelling. We empirically show that the flexibility of normalizing flows enables RFNs to generate rich output distributions capable of describing potentially complex geographical surfaces under both continuous and discrete output distribution assumptions. Ultimately, we believe that the ability to estimate fine-grained distributions of urban mobility represents an important step towards user-tailored MoD services.

## References

Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.

Bingham, E., Chen, J. 
P., Jankowiak, M., Obermeyer, F., Pradhan, N., Karaletsos, T., Singh, R., Szerlip, P., Horsfall, P., and Goodman, N. D. Pyro: Deep universal probabilistic programming. 2018.

Castrejon, L., Ballas, N., and Courville, A. Improved conditional vrnns for video prediction. In IEEE Int. Conf. on Computer Vision, 2019.

Chung, J., Kastner, K., Dinh, L., Goel, K., Courville, A. C., and Bengio, Y. A recurrent latent variable model for sequential data. In Conf. on Neural Information Processing Systems, 2015.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real NVP. In Int. Conf. on Learning Representations, 2017.

Durbin, J. and Koopman, S. J. Time Series Analysis by State Space Methods. Oxford University Press, 2001.

Fraccaro, M., Sønderby, S. K., Paquet, U., and Winther, O. Sequential neural models with stochastic layers. In Conf. on Neural Information Processing Systems, 2016.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Int. Conf. on Machine Learning, 2015.

Karl, M., Soelch, M., Bayer, J., and van der Smagt, P. Deep variational bayes filters: Unsupervised learning of state space models from raw data. In Int. Conf. on Learning Representations, 2017.

Kingma, D. and Ba, J. Adam: A method for stochastic optimization. In Int. Conf. on Learning Representations, 2015.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. In Int. Conf. on Learning Representations, 2014.

Krishnan, R. G., Shalit, U., and Sontag, D. Deep kalman filters. In Conf. on Neural Information Processing Systems, 2016.

Kumar, M., Babaeizadeh, M., Erhan, D., Finn, C., Levine, S., Dinh, L., and Kingma, D. Videoflow: A flow-based generative model for video. In Int. Conf. on Learning Representations, 2020.

Nair, V. and Hinton, G. E. Rectified linear units improve restricted boltzmann machines. In Int. Conf. on Machine Learning, 2010. 
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. Automatic differentiation in PyTorch. In Conf. on Neural Information Processing Systems - Autodiff Workshop, 2017.

Petersen, N., Rodrigues, F., and Pereira, F. Multi-output bus travel time prediction with convolutional lstm neural network. Expert Systems with Applications, 120:426-435, 2019.

Rasul, K., Sheikh, A.-S., Schuster, I., Bergmann, U., and Vollgraf, R. Multi-variate probabilistic time series forecasting via conditioned normalizing flows. In Int. Conf. on Learning Representations, 2021.

Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Int. Conf. on Machine Learning, 2014.

Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong, W.-k., and Woo, W.-c. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Conf. on Neural Information Processing Systems, 2015.

Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., and Winther, O. Ladder variational autoencoders. In Conf. on Neural Information Processing Systems, 2016.

Wang, D., Yang, Y., and Ning, S. Deepstcl: A deep spatio-temporal convlstm for travel demand prediction. In International Joint Conference on Neural Networks, 2018.

Winkler, C., Worrall, D., Hoogeboom, E., and Welling, M. Learning likelihoods with conditional normalizing flows. In Int. Conf. on Learning Representations, 2020.

Yuan, Z., Zhou, X., and Yang, T. Hetero-convlstm: A deep learning approach to traffic accident prediction on heterogeneous spatio-temporal data. In ACM Int. Conf. on Knowledge Discovery and Data Mining, 2018.

## A. Appendix

### A.1. Training

We train each model using stochastic gradient ascent on the evidence lower bound $\mathcal{L}\left( {\theta ,\phi }\right)$ defined in Eq. 
(7) using the Adam optimizer (Kingma & Ba, 2015), with a starting learning rate of 0.003, reduced by a factor of 0.1 every 100 epochs without loss improvement (in our implementation, we used the ReduceLROnPlateau scheduler in PyTorch with patience=100). As in (Sønderby et al., 2016), we found that annealing the KL term in Eq. (7) (using a scalar multiplier linearly increasing from 0 to 1 over the course of training) yielded better results. The final model was selected with an early-stopping procedure based on the validation performance. Training on an NVIDIA GeForce RTX 2080 Ti took around 6 hours for CPH-BS and around 9 hours for NYC-P/D.

### A.2. Benchmarks

For every model considered under the continuous support assumption, we select a single layer of 128 LSTM cells. The feature extractor ${\varphi }_{\tau }^{\text{extr }}$ in Eq. (1) has three layers of 128 hidden units using rectified linear activations (Nair & Hinton, 2010). For the VRNN, SRNN and RFN we also define a 128-dimensional latent state ${\mathbf{z}}_{1 : T}$. Both the transition function ${f}_{{\theta }_{\mathbf{z}}}$ from Eq. (2) and the inference network ${\varphi }_{\tau }^{\text{enc }}$ in Eq. (5) use a single layer of 128 hidden units. For the mixture-based models, the MDN emission is further defined by two layers of 64 hidden units, where we use a softplus activation to ensure the positivity of the variance vector in the MDN-Diag case and a Cholesky decomposition of the full covariance matrix in MDN-Full. Based on a random search, we use 50 and 30 mixtures for MDN-Diag and MDN-Full respectively. The emission function in the RFN is defined as in Eq. (3) and Eq. (4), where ${f}_{\psi },{s}_{\psi }$ and ${t}_{\psi }$ are neural networks with two layers of 128 hidden units. 
The conditional flow is further defined as an alternation of 35 layers of the triplet [Affine coupling layer, Batch Normalization (Ioffe & Szegedy, 2015), Permutation], where the permutation ensures that all dimensions are processed by the affine coupling layers and the batch normalization ensures better propagation of the training signal, as shown in (Dinh et al., 2017). In our experiments we define ${\mathbf{u}}_{t} = {\mathbf{x}}_{t - 1}$, although ${\mathbf{u}}_{t}$ could potentially be used to introduce relevant side information for the problem at hand (e.g. weather or special-event data in the case of spatio-temporal transportation demand estimation).

On the other hand, under the discrete support assumption, we train a 5-layer ConvLSTM network with 4 layers containing 40 hidden states and $3 \times 3$ kernels (in alternation with 4 batch normalization layers), using zero-padding to ensure the preservation of tensor dimensions, and a 3D convolution layer with a $3 \times 3 \times 3$ kernel and softmax activation function to describe a normalized density over the next frame (i.e. time-step) in the sequence.

All models assuming a continuous output distribution were implemented using PyTorch (Paszke et al., 2017) and the universal probabilistic programming language Pyro (Bingham et al., 2018), while the ConvLSTMs were implemented using TensorFlow (Abadi et al., 2015). To reduce computational cost, we use a single sample to approximate the intractable expectations in the ELBO.

For both the New York and Copenhagen experiments we process the data so as to discard corrupted geo-coordinates outside the area of interest. For the taxi experiments, we discarded coordinates related to trips either shorter than ${30s}$ or longer than ${3h}$, while in the bike-sharing dataset, we kept only one app access from the same user within a window of 5 minutes. In both cases we divide the data temporally into train/validation/test splits using a ratio of ${0.5}/{0.25}/{0.25}$. 
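The temporal binning and 0.5/0.25/0.25 split described above can be sketched as follows; this is a simplified stand-in for the actual preprocessing pipeline, with timestamps measured in hours since the start of the month:

```python
import numpy as np

def temporal_split(timestamps, bin_hours=2, ratios=(0.5, 0.25, 0.25)):
    # assign each event to a 2-hour bin, then split the sequence of bins
    # temporally into train/val/test (no shuffling, preserving time order)
    bins = (np.asarray(timestamps) // bin_hours).astype(int)
    n_bins = bins.max() + 1
    cut1 = int(n_bins * ratios[0])
    cut2 = int(n_bins * (ratios[0] + ratios[1]))
    train = bins < cut1
    val = (bins >= cut1) & (bins < cut2)
    test = bins >= cut2
    return train, val, test

# toy timestamps spread over a 30-day month (720 hours -> 360 two-hour bins)
rng = np.random.default_rng(4)
ts = rng.uniform(0, 720, size=1000)
train, val, test = temporal_split(ts)
print(int(train.sum() + val.sum() + test.sum()))  # -> 1000: a full partition
```

Splitting on bin boundaries (rather than on raw events) keeps every 2-hour demand snapshot entirely inside one split, which is what a sequence model requires.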
+ +### A.3. ELBO derivation + +$$ +\log {p}_{\theta }\left( {\mathbf{x}}_{1 : T}\right) = \log \int {p}_{\theta }\left( {{\mathbf{x}}_{1 : T},{\mathbf{z}}_{1 : T},{\mathbf{h}}_{1 : T}}\right) d\mathbf{z}d\mathbf{h} +$$ + +$$ += \log \int \frac{{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) }{{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) }{p}_{\theta }\left( {{\mathbf{x}}_{1 : T},{\mathbf{z}}_{1 : T},{\mathbf{h}}_{1 : T}}\right) d\mathbf{z}d\mathbf{h} +$$ + +$$ += \log {\mathbb{E}}_{{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) }\left\lbrack {\mathop{\prod }\limits_{{t = 1}}^{T}\frac{{p}_{\theta }\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) {p}_{\theta }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) {p}_{\theta }\left( {{\mathbf{h}}_{t} \mid {\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right) }{{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) }}\right\rbrack +$$ + +$$ +\geq {\mathbb{E}}_{{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) }\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{T}\log {p}_{\theta }\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) + \log {p}_{\theta }\left( {{\mathbf{h}}_{t} \mid {\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right) + \log \left( \frac{{p}_{\theta }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) }{{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) }\right) }\right\rbrack \tag{7} +$$ + +$$ += {\mathbb{E}}_{{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) }\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{T}\log {p}_{\theta }\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) + \log {p}_{\theta }\left( {{\mathbf{h}}_{t} \mid {\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right) }\right\rbrack 
+$$

$$
- \mathop{\sum }\limits_{{t = 1}}^{T}\mathbb{{KL}}\left( {{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) \parallel {p}_{\theta }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) }\right) = \mathcal{L}\left( {\theta ,\phi }\right)
+$$
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/AsdwkIqO74/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/AsdwkIqO74/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8f8ae275b094c6c2d95183a611572c947680d14e
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/AsdwkIqO74/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,240 @@
+§ RECURRENT FLOW NETWORKS: A RECURRENT LATENT VARIABLE MODEL FOR DENSITY MODELLING OF URBAN MOBILITY
+
+Anonymous Authors ${}^{1}$
+
+§ ABSTRACT
+
+Mobility-on-demand (MoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of vehicles. Crucially, the efficiency of an MoD system highly depends on how well supply and demand distributions are aligned in spatio-temporal space (i.e., to satisfy user demand, cars have to be available in the correct place and at the desired time). When modelling urban mobility as temporal sequences, current approaches typically rely on either (i) a spatial discretization (e.g. ConvLSTMs), or (ii) a Gaussian mixture model to describe the conditional output distribution. In this paper, we argue that both of these approaches could exhibit structural limitations when faced with highly complex data distributions such as for urban mobility densities. 
To address this issue, we introduce recurrent flow networks which combine deterministic and stochastic recurrent hidden states with conditional normalizing flows and show how the added flexibility allows our model to generate distributions matching potentially complex urban topologies. + +§ 1. INTRODUCTION + +With the growing prevalence of smart mobile phones in our daily lives, companies such as Uber, Lyft, and DiDi have been pioneering Mobility-on-Demand (MoD) and online ride-hailing platforms as a solution capable of providing a more efficient and personalized transportation service. Notably, an efficient MoD system could allow for reduced idle times and higher fulfillment rates, thus offering a better user experience for both driver and passenger groups. The efficiency of an MoD system highly depends on the ability to model and accurately forecast the need for transportation, such to enable service providers to take operational decisions in strong accordance with user needs and preferences. However, the complexity of the geo-spatial distributions characterizing MoD demand requires flexible models that can capture rich, time-dependent $2\mathrm{\;d}$ patterns and adapt to complex urban geographies (e.g. presence of rivers, irregular landforms, etc.). + +Historically, dynamic Bayesian networks (DBNs), such as hidden Markov models (HMMs) and state space models (SSMs) (Durbin & Koopman, 2001), have characterized a unifying probabilistic framework with illustrious successes in modelling time-dependent dynamics. Advances in deep learning architectures however, shifted this supremacy towards the field of Recurrent Neural Networks (RNNs). At a high level, both DBNs and RNNs can be framed as parametrizations of two core components: 1) a transition function characterizing the time-dependent evolution of a learned internal representation, and 2) an emission function denoting a mapping from representation space to observation space. 
Recently, evidence has been gathered in favor of combinations bringing together the representative power of RNNs with the consistent handling of uncertainties given by probabilistic approaches (Chung et al., 2015; Fraccaro et al., 2016; Krishnan et al., 2016; Karl et al., 2017). The core concept underlying recent developments is the idea that, in current RNNs, the only source of variability is found in the conditional emission distribution (i.e. typically a unimodal distribution or a mixture of unimodal distributions). Most efforts have therefore concentrated on building models capable of effectively propagating uncertainty in the transition function of RNNs.
+
+In this paper, we build on these recent advances by shifting the focus towards more flexible emission functions. We suggest that the traditional treatment of output variability through the parametrization of either (i) unimodal (or mixtures of unimodal) distributions, or (ii) discretized representations of naturally-continuous distributions, may act as a bottleneck in cases characterized by complex data distributions, such as the ones observed in urban mobility. We propose the use of Conditional Normalizing Flows (CNFs) (Winkler et al., 2020) as a general approach to define arbitrarily expressive output probability distributions under temporal dynamics. On one hand, we model the temporal variability in the data through a transition function combining stochastic and deterministic states; on the other, we propose to use this mixed hidden representation as a conditioning variable to capture the output variability with a CNF. We call this model a Recurrent Flow Network (RFN).
+
+To summarize, the main contributions of this paper are twofold: first, we propose a probabilistic neural generative model which is able to combine deterministic and stochastic temporal representations with the flexibility of normalizing flows in the conditional output distribution. Second, we showcase how our model is able to represent fine-grained urban mobility patterns on several real-world tasks, which could drastically impact downstream decision-making processes in current mobility systems.
+
+Figure 1. Graphical model of the operations defining the RFN: a) transition function defined in Eq. (1) and Eq. (2); b) emission function as in Eq. (3) and Eq. (4); c) inference network using Eq. (5); d) overall RFN graphical model. Shaded nodes represent observed variables, while un-shaded nodes represent either deterministic (diamond-shaped) or stochastic (circles) hidden states. For sequence generation, a traditional approach is to use ${\mathbf{u}}_{t} = {\mathbf{x}}_{t - 1}$ .
+
+§ 2. RECURRENT FLOW NETWORKS
+
+In this section, we define the generative model ${p}_{\theta }$ and inference network ${q}_{\phi }$ characterizing the RFN for the purpose of sequence modelling. RFNs explicitly model temporal dependencies by combining deterministic and stochastic layers. The resulting intractability of the posterior distribution over the latent states ${\mathbf{z}}_{1 : T}$ , as in the case of VAEs (Kingma & Welling, 2014; Rezende et al., 2014), is further approached by learning a tractable approximation through amortized variational inference. The schematic view of the RFN is shown in Fig. 1.
+
+Generative model As in (Fraccaro et al., 2016), the transition function of the RFN interlocks an SSM with an RNN:
+
+$$
+{\mathbf{h}}_{t} = {f}_{{\theta }_{\mathbf{h}}}\left( {{\mathbf{h}}_{t - 1},{\varphi }_{\tau }^{\operatorname{extr}}\left( {\mathbf{u}}_{t}\right) }\right) \tag{1}
+$$
+
+$$
+{\mathbf{z}}_{t} \sim \mathcal{N}\left( {{\mathbf{\mu }}_{0,t},\operatorname{diag}\left( {\mathbf{\sigma }}_{0,t}^{2}\right) }\right) , \tag{2}
+$$
+
+$$
+\text{ with }\left\lbrack {{\mathbf{\mu }}_{0,t},{\mathbf{\sigma }}_{0,t}}\right\rbrack = {f}_{{\theta }_{\mathbf{z}}}\left( {{\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) \text{ , }
+$$
+
+where ${\mathbf{\mu }}_{0,t}$ and ${\mathbf{\sigma }}_{0,t}$ represent the parameters of the conditional prior distribution over the stochastic hidden states ${\mathbf{z}}_{1 : T}$ . In our implementation, ${f}_{{\theta }_{\mathbf{h}}}$ and ${f}_{{\theta }_{\mathbf{z}}}$ are respectively an LSTM cell and a deep feed-forward neural network, with parameters ${\theta }_{\mathbf{h}}$ and ${\theta }_{\mathbf{z}}$ . In Eq. (1), ${\varphi }_{\tau }^{\text{ extr }}$ can also be a neural network extracting features from ${\mathbf{u}}_{t}$ . Unlike in the SRNN, the learned representations (i.e. ${\mathbf{z}}_{1 : T},{\mathbf{h}}_{1 : T}$ ) are used as conditioners for a CNF parametrizing the output distribution.
That is, for every time-step $t$ , we learn a complex distribution $p\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)$ by defining the conditional base distribution $p\left( {{\mathbf{b}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)$ and conditional coupling layers (Dinh et al., 2017) for the transformation ${T}_{\psi }$ as follows:
+
+Conditional Prior:
+
+$$
+{\mathbf{b}}_{t} \sim \mathcal{N}\left( {{\mathbf{\mu }}_{b,t},\operatorname{diag}\left( {\mathbf{\sigma }}_{b,t}^{2}\right) }\right) , \tag{3}
+$$
+
+$$
+\text{ with }\left\lbrack {{\mathbf{\mu }}_{b,t},{\mathbf{\sigma }}_{b,t}}\right\rbrack = {f}_{\psi }\left( {{\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right)
+$$
+
+Conditional Coupling:
+
+$$
+{\mathbf{b}}_{t,d + 1 : D} = {\mathbf{x}}_{t,d + 1 : D} \odot \exp \left( {{s}_{\psi }\left( {{\mathbf{x}}_{t,1 : d},{\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) }\right) + {t}_{\psi }\left( {{\mathbf{x}}_{t,1 : d},{\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) \tag{4}
+$$
+
+$$
+{\mathbf{b}}_{t,1 : d} = {\mathbf{x}}_{t,1 : d},
+$$
+
+where ${\mathbf{\mu }}_{b,t}$ and ${\mathbf{\sigma }}_{b,t}$ represent the parameters of the conditional base distribution (determined by a learnable function ${f}_{\psi }$ ), while ${s}_{\psi }$ and ${t}_{\psi }$ denote the conditional scale and translation functions characterizing the coupling layers in the CNF. In our implementation, ${f}_{\psi },{s}_{\psi }$ and ${t}_{\psi }$ are parametrized by deep neural networks. Together, Eq. (3) and Eq. 
(4) define the emission function, so that the generative model factorizes as $p\left( {\mathbf{x},\mathbf{z},\mathbf{h}}\right) =$ $\mathop{\prod }\limits_{{t = 1}}^{T}{p}_{{\theta }_{\mathbf{x}}}\left( {{\mathbf{x}}_{t} \mid {\mathbf{z}}_{t},{\mathbf{h}}_{t}}\right) {p}_{{\theta }_{\mathbf{z}}}\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right) {p}_{{\theta }_{\mathbf{h}}}\left( {{\mathbf{h}}_{t} \mid {\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right)$ , where the emission and transition distributions have parameters ${\theta }_{\mathbf{x}},{\theta }_{\mathbf{z}},{\theta }_{\mathbf{h}}$ , and where we assume that ${\mathbf{h}}_{t}$ follows a delta distribution centered at ${\mathbf{h}}_{t} = {f}_{{\theta }_{\mathbf{h}}}\left( {{\mathbf{h}}_{t - 1},{\mathbf{u}}_{t}}\right)$ .
+
+Inference The variational approximation defining the RFN directly depends on ${\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}$ and ${\mathbf{x}}_{t}$ as follows:
+
+$$
+{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) = \mathcal{N}\left( {{\mathbf{\mu }}_{z,t},\operatorname{diag}\left( {\mathbf{\sigma }}_{z,t}^{2}\right) }\right) , \tag{5}
+$$
+
+$$
+\text{ with }\left\lbrack {{\mathbf{\mu }}_{z,t},{\mathbf{\sigma }}_{z,t}}\right\rbrack = {\varphi }_{\tau }^{\mathrm{{enc}}}\left( {{\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) \text{ , }
+$$
+
+where ${\varphi }_{\tau }^{\text{ enc }}$ is an encoder network defining the parameters ${\mathbf{\mu }}_{z,t}$ and ${\mathbf{\sigma }}_{z,t}$ of the approximate posterior distribution.
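Eq. (5) is a standard Gaussian reparameterization, and the KL term it contributes to the ELBO (see the appendix) is available in closed form for diagonal Gaussians. A toy numpy sketch of one inference step, with illustrative fixed weights standing in for the encoder network ${\varphi }_{\tau }^{\text{enc}}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(z_prev, h, x, dim=2):
    """Toy stand-in for the encoder: maps (z_{t-1}, h_t, x_t) to mu, sigma."""
    inp = np.concatenate([z_prev, h, x])
    mu = 0.1 * np.ones((dim, inp.size)) @ inp               # illustrative weights
    sigma = np.exp(0.05 * np.ones((dim, inp.size)) @ inp)   # exp keeps sigma > 0
    return mu, sigma

def sample_posterior(z_prev, h, x):
    """Reparameterized sample z_t = mu + sigma * eps, as in Eq. (5)."""
    mu, sigma = encoder(z_prev, h, x)
    return mu + sigma * rng.standard_normal(mu.shape), (mu, sigma)

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """Closed-form KL(q || p) between diagonal Gaussians: the per-step
    KL term appearing in the ELBO."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2 * sig_p**2) - 0.5)

z_prev, h, x = np.zeros(2), np.ones(2), np.array([0.5, -0.5])
z_t, (mu_q, sig_q) = sample_posterior(z_prev, h, x)
kl = kl_diag_gaussians(mu_q, sig_q, np.zeros(2), np.ones(2))  # vs. a N(0, I) prior
assert kl >= 0.0  # KL divergence is non-negative
```

In the actual model the prior parameters would come from ${f}_{{\theta }_{\mathbf{z}}}\left( {{\mathbf{z}}_{t - 1},{\mathbf{h}}_{t}}\right)$ rather than being fixed to $\mathcal{N}(0, \mathrm{I})$.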
Given the above structure, the generative and inference models are tied through the RNN hidden state ${\mathbf{h}}_{t}$ , resulting in the factorization:
+
+$$
+{q}_{\phi }\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{x}}_{1 : T}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}{q}_{\phi }\left( {{\mathbf{z}}_{t} \mid {\mathbf{z}}_{t - 1},{\mathbf{h}}_{t},{\mathbf{x}}_{t}}\right) . \tag{6}
+$$
+
+In addition to the explicit dependence of the approximate posterior on ${\mathbf{x}}_{t}$ and ${\mathbf{h}}_{t}$ , the inference network defined in Eq. (5) also exhibits an implicit dependence on ${\mathbf{x}}_{1 : t}$ and ${\mathbf{h}}_{1 : t}$ through ${\mathbf{z}}_{t - 1}$ . This implicit dependence on all information from the past resembles a filtering approach from the state-space model literature (Durbin & Koopman, 2001). Denoting by $\theta$ and $\phi$ the sets of model and variational parameters respectively, variational inference offers a scheme for jointly optimizing $\theta$ and computing an approximation to the posterior distribution by maximizing the evidence lower bound ${}^{1}$ (i.e. ELBO).
+
+§ 3. EXPERIMENTS
+
+Concretely, we evaluate the proposed RFN on three transportation datasets:
+
+NYC Taxi (NYC-P/D): This dataset is released by the New York City Taxi and Limousine Commission. We focused on aggregating the taxi demand in 2-hour bins for the month of March 2016, containing 249,637 trip geo-coordinates. We further differentiated the task of modelling pick-ups (i.e. where the demand is) and drop-offs (i.e. where people want to go). In what follows, we denote the two datasets as NYC-P and NYC-D respectively.
+
+Copenhagen Bike-Share (CPH-BS): This dataset contains geo-coordinates from users accessing the smartphone app of Donkey Republic, one of the major bike sharing services in Copenhagen, Denmark. 
As for the case of New York, we aggregated the geo-coordinates in 2-hour bins for the month of August, resulting in 87,740 app accesses.
+
+Models We compare the proposed RFN against various baselines assuming both continuous and discrete support for the output distribution. In particular, in the continuous case (i.e. where we assume to be modelling a 2-dimensional distribution directly in longitude-latitude space), we consider RNN, VRNN (Chung et al., 2015) and SRNN (Fraccaro et al., 2016) models, each using two different MDN-based emission distributions. That is, we compare against a GMM output parametrized by Gaussians with either a diagonal (MDN-Diag) or a full (MDN-Full) covariance matrix. On the other hand, when assuming discrete support for the output distribution (i.e. we divide the map into tiled non-overlapping patches and view the pixels inside a patch as its measurements), we consider the Convolutional LSTM (ConvLSTM) (Shi et al., 2015), which leverages the spatial information encoded in the sequences by substituting the matrix operations in the standard LSTM formulation with convolutions.
+
+Results One-step Prediction: In Table 1 we compare test log-likelihoods on the tasks of continuous spatio-temporal demand modelling for the cases of New York and Copenhagen. We report exact log-likelihoods for RNN-MDN-Diag and RNN-MDN-Full, while in the case of VRNNs, SRNNs and RFNs we report the importance sampling approximation to the marginal log-likelihood using 30 samples, as in (Rezende et al., 2014). We see from Table 1 that the RFN outperforms competing methods, yielding higher log-likelihood across all tasks. The results support our claim that more flexible output distributions are advantageous when modelling potentially complex and structured temporal data distributions. To further illustrate this, in Fig. 2 we show a visualization of the predicted spatial densities (one-step-ahead) from three of the implemented models at specific times of the day. As opposed to GMM-based densities, the figures show how the RFN exploits the flexibility of conditional normalizing flows to generate sharper distributions capable of better approximating complex shapes such as geographical landforms or urban topologies (e.g. Central Park or the sharper edges in proximity of the Hudson river along the west side of Manhattan).
+
+Table 1. Test log-likelihood for each task under the continuous support assumption. For non-deterministic models the approximation on the marginal log-likelihood is given with the $\approx$ sign.
+
| Models | NYC-P | NYC-D | CPH-BS |
| --- | --- | --- | --- |
| RNN-MDN-Diag | 163582 | 143765 | 49124 |
| RNN-MDN-Full | 164016 | 146676 | 50109 |
| VRNN-MDN-Diag | $\approx$ 161345 | 139964 | 49231 |
| VRNN-MDN-Full | $\approx$ 162549 | 143671 | 49664 |
| SRNN-MDN-Diag | $\approx$ 164830 | 143719 | 49331 |
| SRNN-MDN-Full | $\approx$ 164976 | 147400 | 49810 |
| RFN | $\approx$ 168734 | 148291 | 51100 |
+
+Multi-step Prediction: In order to take reliable strategic decisions, service providers might also be interested in obtaining full roll-outs of demand predictions, as opposed to 1-step predictions. To do so, we generate entire sequences in an autoregressive way (i.e., the prediction at timestep $t$ is fed back into the model at $t + 1$ ) and analyze the ability of the proposed model to unroll for different forecasting horizons. 
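The roll-out scheme just described is model-agnostic and can be sketched generically; `predict_next` below is a hypothetical one-step predictor standing in for any of the trained models:

```python
import numpy as np

def rollout(model_step, x_init, horizon):
    """Autoregressive unrolling: the prediction at time-step t is fed
    back into the model as the input at t + 1."""
    xs, x = [], x_init
    for _ in range(horizon):
        x = model_step(x)  # one-step-ahead prediction
        xs.append(x)
    return np.stack(xs)

# Hypothetical one-step predictor (a contraction toward 0.2, for illustration).
predict_next = lambda x: 0.5 * x + 0.1

traj = rollout(predict_next, np.array([1.0]), horizon=5)
assert traj.shape == (5, 1)
```

For the RFN, `model_step` would draw a sample (or density summary) from the learned conditional output distribution at each step rather than returning a point estimate.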
From a methodological point of view, we are interested in measuring the effect of explicitly modelling the stochasticity in the temporal evolution of demand as opposed to fully-deterministic architectures. In this regard, in Table 2 we compare the RFN with the most competitive deterministic benchmark (i.e. RNN-MDN-Full). As the results suggest, the stochasticity in the transition probability allows the RFN to better capture the temporal dynamics, resulting in a lower performance decay in comparison with the fully deterministic RNN-MDN assuming full covariance.
+
+Table 2. Test log-likelihood comparison of RFN and RNN-MDN-Full for different forecast horizons on the NYC-P task.
+
| Models | t+2 | t+5 | t+10 | full (t+90) |
| --- | --- | --- | --- | --- |
| RNN-MDN | 162891 | 161065 | 160099 | 158922 |
| RFN | $\approx$ 167509 | 167400 | 167359 | 167392 |
+
+Quantization: As a further analysis, we compare the proposed RFN with a Convolutional LSTM, under the assumption that the spatial map has been discretized into a ${64} \times {64}$ pixel space described by a Categorical distribution. This comparison is particularly relevant given the prevalence of ConvLSTMs in spatio-temporal travel modelling applications (Petersen et al., 2019; Yuan et al., 2018; Wang et al., 2018). As previously introduced, the RFN is naturally defined by a continuous output distribution (in practice parametrized as a normalizing flow); thus, in order to characterize a valid comparison, we apply a quantization procedure to obtain a discrete output distribution for the RFN.
+
+${}^{1}$ Please refer to the Appendix for the derivation.
In particular, the implemented quantization procedure can be summarized with the following steps: (i) as in the continuous case, evaluate the approximated marginal log-likelihood under the trained RFN at the pixel-centers of a ${64} \times {64}$ grid, (ii) normalize the computed log-likelihood logits through the use of a softmax function, and (iii) evaluate the log-likelihood under a Categorical distribution characterized by the probabilities computed in (ii), thus obtaining values comparable with the output of the ConvLSTM. Table 3 compares test log-likelihoods on the task of discrete spatio-temporal demand modelling. In this regard, when considering the results in Table 3, two relevant observations must be underlined. First of all, the true output of the RFN (i.e. before quantization) is a continuous density; thus, its discretization will, by definition, result in a loss of information and granularity. Secondly, and most importantly, the quantization is applied as a post-processing evaluation step; thus, as opposed to the implemented ConvLSTMs, the RFNs are not directly optimizing for the objective evaluated in Table 3. In light of this, the results under the discretized space assumption further support our claims on the effectiveness of the RFN in approximating spatially complex distributions.
+
+Figure 2. Generated spatio-temporal densities from SRNN-MDN-Diag, SRNN-MDN-Full and RFN on the NYC-P dataset. The blue (low) to red (high) log-likelihood heatmaps show models defined by increasing flexibility (best viewed in color).
+
+Moreover, the ability of the RFN to model a continuous spatial density, as opposed to a discretized approach as in the case of ConvLSTMs, has several theoretical and practical advantages. For instance, RFNs are able to evaluate the log-likelihood of individual data points for anomaly and hotspot detection. 
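Steps (i)-(iii) above can be sketched in a few lines of numpy; the continuous log-density below is a toy stand-in for the trained RFN's approximate marginal log-likelihood:

```python
import numpy as np

def quantize_to_categorical(log_density, grid_size=64):
    """(i) evaluate the continuous log-density at the pixel centers of a
    grid_size x grid_size grid over the unit square, then (ii) normalize
    the resulting logits with a softmax into Categorical probabilities."""
    centers = (np.arange(grid_size) + 0.5) / grid_size
    xx, yy = np.meshgrid(centers, centers)
    logits = log_density(xx, yy)
    logits = logits - logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs

def categorical_log_likelihood(probs, pixel_idx):
    """(iii) log-likelihood of observed pixel indices under the Categorical."""
    return float(np.sum(np.log(probs.ravel()[pixel_idx])))

# Toy continuous log-density: an isotropic Gaussian bump on the unit square.
toy_log_density = lambda x, y: -((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02

probs = quantize_to_categorical(toy_log_density)
assert probs.shape == (64, 64) and np.isclose(probs.sum(), 1.0)
ll = categorical_log_likelihood(probs, pixel_idx=[0, 100, 2048])
assert ll < 0.0  # all cell probabilities are < 1, so the log-likelihood is negative
```

Since this runs entirely on an already-trained continuous density, the grid resolution is a free parameter of the post-processing step.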
Secondly, ConvLSTMs define a discretized space whose cells might have different natural landscape characteristics (e.g. rivers, lakes), thus effectively changing the dimension of the support in each bin and making comparisons of log-likelihoods across pixels an ill-posed question. Furthermore, while for discrete output distributions exploring different levels of discretization would require repeatedly training independent ConvLSTM networks, the post-processing quantization of the RFN allows discretization to be performed instantaneously, enabling fast prototyping and exploration of discretization levels.
+
+Table 3. Test log-likelihood for each task under the discrete support assumption. For the RFN, results are given after a quantization procedure mapping from a continuous $2\mathrm{\;d}$ space to the ${64} \times {64}$ pixel space used to train the ConvLSTMs.
+
| Models | NYC-P | NYC-D | CPH-BS |
| --- | --- | --- | --- |
| ConvLSTM | -352962 | -350803 | -112548 |
| RFN (Quantized) | -339745 | -349627 | -110999 |
+
+§ 4. RELATED WORK
+
+A number of works have concentrated on defining more flexible emission functions for sequential models (Rasul et al., 2021; Kumar et al., 2020; Castrejon et al., 2019). Within this line of research, normalizing flows are used either to parametrize the emission function for the tasks of video generation and multi-variate time series forecasting (Castrejon et al., 2019; Rasul et al., 2021), or to describe the latent states representing the temporal evolution of the system (Kumar et al., 2020). However, to the authors' best knowledge, there are no previous attempts for the task of urban mobility modelling.
+
+Within the transportation domain, traditional approaches rely on spatial discretizations of the urban topology (Yuan et al., 2018; Petersen et al., 2019; Wang et al., 2018), which allow for the prediction of spatio-temporal sequences with discrete support through e.g. ConvLSTMs (Shi et al., 2015).
+
+§ 5. 
CONCLUSIONS
+
+This work addresses the problem of continuous spatio-temporal density modelling by proposing the use of conditional normalizing flows as a general approach to parametrize the output distribution of recurrent latent variable models. Our experiments focus on real-world data for the task of urban mobility density modelling. We empirically show that the flexibility of normalizing flows enables RFNs to generate rich output distributions capable of describing potentially complex geographical surfaces under both continuous and discrete output distribution assumptions. Ultimately, we believe that the ability to estimate fine-grained distributions of urban mobility represents an important step towards user-tailored MoD services.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/CAjeU4GmBY/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/CAjeU4GmBY/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..91737977a9ea557bb432b283c78880b5d0951612
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/CAjeU4GmBY/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,279 @@
+# Universal Approximation using Well-conditioned Normalizing Flows
+
+Anonymous Authors ${}^{1}$
+
+## Abstract
+
+Affine-coupling models (Dinh et al., 2014; 2016) are a particularly common type of normalizing flows, for which the Jacobian of the latent-to-observable-variable transformation is triangular, allowing the likelihood to be computed in linear time. Despite the widespread usage of affine couplings, the special structure of the architecture makes understanding their representational power challenging. 
The question of universal approximation was only recently resolved by three parallel papers (Huang et al., 2020; Zhang et al., 2020; Koehler et al., 2020) - who showed reasonably regular distributions can be approximated arbitrarily well using affine couplings - albeit with networks with a nearly-singular Jacobian. As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows? In this paper, we show that any log-concave distribution can be approximated using well-conditioned affine-coupling flows. In terms of proof techniques, we uncover and leverage deep connections between affine coupling architectures, underdamped Langevin dynamics (a stochastic differential equation often used to sample from Gibbs measures) and Hénon maps (a structured dynamical system that appears in the study of symplectic diffeomorphisms). In terms of informing practice, we approximate a padded version of the input distribution with iid Gaussians - a strategy which (Koehler et al., 2020) empirically observed to result in better-conditioned flows, but had hitherto no theoretical grounding. Our proof can thus be seen as providing theoretical evidence for the benefits of Gaussian padding when training normalizing flows.
+
+## 1. Introduction
+
+Normalizing flows (Dinh et al., 2014; Rezende & Mohamed, 2015) are a class of generative models parametrizing a distribution in ${\mathbb{R}}^{d}$ as the pushforward of a simple distribution (e.g. Gaussian) through an invertible map ${g}_{\theta } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ with trainable parameter $\theta$ . 
The fact that ${g}_{\theta }$ is invertible allows us to write down an explicit expression for the density of a point $x$ through the change-of-variables formula, namely ${p}_{\theta }\left( x\right) = \phi \left( {{g}_{\theta }^{-1}\left( x\right) }\right) \left| \det \left( {D{g}_{\theta }^{-1}\left( x\right) }\right) \right|$ , where $\phi$ denotes the density of the standard Gaussian. For different choices of parametric families for ${g}_{\theta }$ , one gets different families of normalizing flows, e.g. affine coupling flows (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018), Gaussianization flows (Meng et al., 2020), sum-of-squares polynomial flows (Jaini et al., 2019).
+
+In this paper we focus on affine coupling flows - arguably the family that has been most successfully scaled up (Kingma & Dhariwal, 2018). The parametrization of ${g}_{\theta }$ is chosen to be a composition of so-called affine coupling blocks, which are maps $f : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , s.t. $f\left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S} \odot s\left( {x}_{S}\right) + t\left( {x}_{S}\right) }\right)$ , where $\odot$ denotes entrywise multiplication and $s, t$ are (typically simple) neural networks. The choice of parametrization is motivated by the fact that the Jacobian of each affine block is triangular, so that the determinant can be calculated in linear time.
+
+Despite the empirical success of this architecture, theoretical understanding remains elusive.
+
+The most basic questions revolve around the representational power of such models. Even the question of universal approximation was only recently answered by three concurrent papers (Huang et al., 2020; Zhang et al., 2020; Koehler et al., 2020) - though in a less-than-satisfactory manner, in light of how normalizing flows are trained in practice. 
(Huang et al., 2020; Zhang et al., 2020) show that any (reasonably well-behaved) distribution $p$ , once padded with zeros and treated as a distribution in ${\mathbb{R}}^{d + {d}^{\prime }}$ , can be arbitrarily closely approximated by an affine coupling flow. While such padding can be operationalized as an algorithm by padding the training image with zeros, it is never done in practice, as it results in an ill-conditioned Jacobian. This is expected, as the map that always sends the last ${d}^{\prime }$ coordinates to 0 is not bijective. (Koehler et al., 2020) prove universal approximation without padding; however, their construction also gives rise to a poorly conditioned Jacobian: namely, to approximate a distribution $p$ to within accuracy $\epsilon$ in the Wasserstein-1 distance, the Jacobian will have smallest singular value on the order of $\epsilon$ .
+
+---
+
+${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .
+
+Preliminary work. Under review by INNF+ 2021. Do not distribute.
+
+---
+
+Importantly, for all these constructions, the condition number of the resulting affine coupling map is poor no matter how nice the underlying distribution it's trying to approximate is. In other words, the source of this phenomenon isn't that the underlying distribution is low-dimensional or otherwise degenerate. Thus the question arises:
+
+Question: Can well-behaved distributions be approximated by an affine coupling flow with a well-conditioned Jacobian?
+
+In this paper, we answer the above question in the affirmative for a broad class of distributions - log-concave distributions - if we pad the input distribution not with zeroes, but with independent Gaussians. This gives theoretical grounding to the empirical observations in (Koehler et al., 2020) that Gaussian padding works better than both zero-padding and no padding. 
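To make the conditioning issue concrete: the Jacobian of a single affine coupling block is triangular with diagonal entries $(1, \ldots, 1, e^{s(x_S)})$, so its extreme singular values track the size of the learned scales. A small numerical check, with toy scale and translation functions:

```python
import numpy as np

def coupling(x, scale):
    """Affine coupling block on R^2 with a toy scale/translation pair."""
    s = scale * np.tanh(x[0])   # toy scale network
    t = 0.1 * x[0]              # toy translation network
    return np.array([x[0], x[1] * np.exp(s) + t])

def jacobian_singular_values(f, x, eps=1e-6):
    """Central-difference Jacobian Df at x, and its singular values."""
    d = len(x)
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return np.linalg.svd(J, compute_uv=False)

x = np.array([2.0, 1.0])
mild = jacobian_singular_values(lambda v: coupling(v, scale=1.0), x)
harsh = jacobian_singular_values(lambda v: coupling(v, scale=10.0), x)
# Larger learned scales produce a far more extreme largest singular value.
assert harsh.max() > 100 * mild.max()
```

This is consistent with the discussion above: any construction that pushes part of the output toward a degenerate, nearly non-bijective map cannot keep the singular values of $Df$ moderate.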
+
+The practical relevance of this question is in providing guidance on the type of distributions we can hope to fit via training using an affine coupling flow. Theoretically, our techniques uncover some deep connections between affine coupling flows and two other (seemingly unrelated) areas of mathematics: stochastic differential equations (more precisely underdamped Langevin dynamics, a "momentum" variant of the standard overdamped Langevin dynamics) and dynamical systems (more precisely, a family of dynamical systems called Hénon-like maps).
+
+## 2. Overview of results
+
+In order to state our main result, we introduce some notation and definitions.
+
+### 2.1. Notation
+
+Definition 1. An affine coupling block is a map $f : {\mathbb{R}}^{d} \rightarrow$ ${\mathbb{R}}^{d}$ , s.t. $f\left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S} \odot s\left( {x}_{S}\right) + t\left( {x}_{S}\right) }\right)$ for some set of coordinates $S$ , where $\odot$ denotes entrywise multiplication and $s, t$ are trainable (generally non-linear) functions. An affine coupling network is a finite sequence of affine coupling blocks. Note that the partition $\left( {S,\left\lbrack d\right\rbrack \smallsetminus S}\right)$ , as well as $s, t$ , may be different between blocks. We say that the non-linearities are in a class $\mathcal{F}$ (e.g., neural networks, polynomials, etc.) if $s, t \in \mathcal{F}$ .
+
+The appeal of affine coupling networks comes from the fact that the Jacobian of each affine block is triangular, so calculating the determinant is a linear-time operation. We will be interested in the conditioning of $f$ - that is, an upper bound on the largest singular value ${\sigma }_{\max }\left( {Df}\right)$ and a lower bound on the smallest singular value ${\sigma }_{\min }\left( {Df}\right)$ of the Jacobian ${Df}$ of $f$ . 
Referring to these two bounds as the conditioning of $f$ is a slight abuse of nomenclature - most of the time, "condition number" refers to the ratio of the largest and smallest singular values. As training a normalizing flow involves evaluating $\det \left( {Df}\right)$ , we in fact want to ensure that neither the smallest nor the largest singular value is extreme.

The class of distributions we will focus on approximating via affine coupling flows is log-concave distributions:

Definition 2. A distribution $p : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{ + }, p\left( x\right) \propto {e}^{-U\left( x\right) }$ is log-concave if ${\nabla }^{2}U\left( x\right) = - {\nabla }^{2}\ln p\left( x\right) \succcurlyeq 0$ .

Log-concave distributions are typically used to model distributions with Gaussian-like tail behavior. What we will leverage about this class of distributions is that a special stochastic differential equation (SDE), called underdamped Langevin dynamics, is well-behaved in an analytic sense.

Finally, we recall the definitions of positive semidefinite matrices and the Wasserstein distance, and introduce notation for truncated distributions.

Definition 3. We say that a symmetric matrix is positive semidefinite (PSD) if all of its eigenvalues are non-negative. For symmetric matrices $A, B$ , we write $A \succcurlyeq B$ if and only if $A - B$ is PSD.

Definition 4. Given two probability measures $\mu ,\nu$ over a metric space $\left( {M, d}\right)$ , the Wasserstein-1 distance between them, denoted ${W}_{1}\left( {\mu ,\nu }\right)$ , is defined as

$$
{W}_{1}\left( {\mu ,\nu }\right) = \mathop{\inf }\limits_{{\gamma \in \Gamma \left( {\mu ,\nu }\right) }}{\int }_{M \times M}d\left( {x, y}\right) {d\gamma }\left( {x, y}\right)
$$

where $\Gamma \left( {\mu ,\nu }\right)$ is the set of couplings, i.e. measures on $M \times M$ with marginals $\mu ,\nu$ respectively.
For two probability distributions $p, q$ , we denote by ${W}_{1}\left( {p, q}\right)$ the Wasserstein-1 distance between their associated measures. In this paper, we set $M = {\mathbb{R}}^{d}$ and $d\left( {x, y}\right) = \parallel x - y{\parallel }_{2}$ . + +Definition 5. Given a distribution $q$ and a compact set $\mathcal{C}$ , we denote by ${\left. q\right| }_{\mathcal{C}}$ the distribution $q$ truncated to the set $\mathcal{C}$ . The truncated measure is defined as ${\left. q\right| }_{\mathcal{C}}\left( A\right) = \frac{1}{q\left( \mathcal{C}\right) }q\left( {A \cap \mathcal{C}}\right)$ . + +### 2.2. Main result + +Our main result states that we can approximate any log-concave distribution in Wasserstein-1 distance by a well-conditioned affine-coupling flow network. Precisely, we show: + +Theorem 1. Let $p\left( x\right) : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{ + }$ be of the form $p\left( x\right) \propto$ ${e}^{-U\left( x\right) }$ , such that: + +1. $U \in {C}^{2}$ , that is, ${\nabla }^{2}U\left( x\right)$ exists and is continuous, and + +2. $\ln p$ satisfies ${\mathrm{I}}_{d} \preccurlyeq - {\nabla }^{2}\ln p\left( x\right) \preccurlyeq \kappa {\mathrm{I}}_{d}$ . + +Furthermore, let ${p}_{0} \mathrel{\text{:=}} p \times \mathcal{N}\left( {0,{\mathrm{I}}_{d}}\right)$ . Then, for every $\epsilon > 0$ , there exists a compact set $\mathcal{C} \subset {\mathbb{R}}^{2d}$ and an invertible affine-coupling network $f : {\mathbb{R}}^{2d} \rightarrow {\mathbb{R}}^{2d}$ with polynomial nonlinearities, such that + +$$ +{W}_{1}\left( {{f}_{\# }\left( {\left. 
\mathcal{N}\left( 0,{\mathrm{I}}_{2d}\right) \right| }_{\mathcal{C}}\right) ,{p}_{0}}\right) \leq \epsilon
$$

Furthermore, the map defined by this affine-coupling network $f$ is well conditioned over $\mathcal{C}$ ; that is, there are positive constants $A\left( \kappa \right) , B\left( \kappa \right) = {\kappa }^{O\left( 1\right) }$ such that for any unit vector $w$ ,

$$
A\left( \kappa \right) \leq \begin{Vmatrix}{{D}_{w}f\left( {x, v}\right) }\end{Vmatrix} \leq B\left( \kappa \right)
$$

for all $\left( {x, v}\right) \in \mathcal{C}$ , where ${D}_{w}$ is the directional derivative in the direction $w$ . In particular, the condition number of ${Df}\left( {x, v}\right)$ is bounded by $\frac{B\left( \kappa \right) }{A\left( \kappa \right) } = {\kappa }^{O\left( 1\right) }$ for all $\left( {x, v}\right) \in \mathcal{C}$ .

We make several remarks regarding the statement of the theorem.

Remark 1. The Gaussian padding (i.e. setting ${p}_{0} = p \times \mathcal{N}\left( {0,{\mathrm{I}}_{d}}\right)$ ) is essential for our proofs. All prior works on the universal approximation properties of normalizing flows (with or without padding) result in ill-conditioned affine coupling networks. This gives theoretical backing to the empirical observations on the benefits of Gaussian padding in (Koehler et al., 2020).

Remark 2. The choice of polynomial non-linearities $s, t$ is for the sake of convenience in our proofs. Using standard universal approximation results, they can also be chosen to be neural networks with a smooth activation function.

Remark 3. The Jacobian ${Df}$ has both an upper-bounded largest singular value and a lower-bounded smallest singular value - which of course bounds the determinant $\det \left( {Df}\right)$ . As remarked in Section 2.1, merely bounding the ratio of the two quantities would not suffice for this.
Moreover, the bound we prove only depends on properties of the distribution (i.e., $\kappa$ ), and does not worsen as $\epsilon \rightarrow 0$ , in contrast to (Koehler et al., 2020).

Remark 4. The region $\mathcal{C}$ , on which the pushforward of the Gaussian through $f$ and ${p}_{0}$ are close, is introduced solely for technical reasons - essentially, standard results in analysis for approximating smooth functions by polynomials only apply when the approximation is required to hold on a compact set. Note that $\mathcal{C}$ can be made arbitrarily large by making $\epsilon$ arbitrarily small.

Remark 5. We do not provide a bound on the number of affine coupling blocks, although a bound can be extracted from our proofs.

## 3. Proof Sketch of Theorem 1

We wish to construct an affine coupling network that (approximately) pushes forward a Gaussian ${p}^{ * } = \mathcal{N}\left( {0,{\mathrm{I}}_{2d}}\right)$ to the distribution we wish to model with Gaussian padding, i.e. ${p}_{0} = p \times \mathcal{N}\left( {0,{\mathrm{I}}_{d}}\right)$ . Because the inverse of an affine coupling network is an affine coupling network, we can invert the problem, and instead attempt to map ${p}_{0}$ to $N\left( {0,{\mathrm{I}}_{2d}}\right)$ . ${}^{1}$

There is a natural map that takes ${p}_{0}$ to ${p}^{ * } = N\left( {0,{\mathrm{I}}_{2d}}\right)$ , namely, underdamped Langevin dynamics (1). Hence, our proof strategy involves understanding and simulating underdamped Langevin dynamics with the initial distribution ${p}_{0} = p \times \mathcal{N}\left( {0,{\mathrm{I}}_{d}}\right)$ and the target distribution ${p}^{ * } = \mathcal{N}\left( {0,{\mathrm{I}}_{2d}}\right)$ , and comprises two main steps.

First, we show that the flow-map for Langevin is well-conditioned.
Here, by flow-map, we mean the map which assigns each point $\left( {x, v}\right)$ to its evolution over a certain amount of time $t$ according to the equations specified by (1):

$$
\left\{ \begin{array}{ll} d{x}_{t} & = \zeta {v}_{t}{dt} \\ d{v}_{t} & = - {\gamma \zeta }{v}_{t}{dt} - \nabla U\left( {x}_{t}\right) {dt} + \sqrt{2\gamma }d{B}_{t}. \end{array}\right. \tag{1}
$$

The stationary distribution of the SDE (1) (its limiting distribution as $t \rightarrow \infty$ ) is given by ${p}^{ * }\left( {x, v}\right) \propto {e}^{-U\left( x\right) - \frac{\zeta }{2}\parallel v{\parallel }^{2}}$ .

The convergence of (1) can be bounded when the distribution $p\left( x\right) \propto \exp \left( {-U\left( x\right) }\right)$ satisfies an analytic condition, namely has a bounded log-Sobolev constant. For brevity, we omit the definition of a log-Sobolev inequality, since we will only need the following fact:

Fact 1 ((Bakry & Émery, 1985; Bakry et al., 2013)). Let the distribution $p\left( x\right) \propto \exp \left( {-U\left( x\right) }\right)$ be such that ${\nabla }^{2}U\left( x\right) \succcurlyeq \lambda {\mathrm{I}}_{d}$ . Then, $p$ has log-Sobolev constant bounded by $\lambda$ .

In fact, our proofs leverage a less well-known deterministic form of the updates which is equivalent to (1). Precisely, we convert (1) into an equivalent ODE (with time-dependent coefficients). The proof of this equivalence (via a straightforward comparison of the Fokker-Planck equations) can be found in (Ma et al., 2019).

Theorem 2. Let ${p}_{t}\left( {{x}_{t},{v}_{t}}\right)$ be the probability distribution of running (1) for time $t$ .
If started from $\left( {{x}_{0},{v}_{0}}\right) \sim {p}_{0}$ , the probability distribution of the solution $\left( {{x}_{t},{v}_{t}}\right)$ to the ODEs

$$
\frac{d}{dt}\left\lbrack \begin{array}{l} {x}_{t} \\ {v}_{t} \end{array}\right\rbrack = \left\lbrack \begin{matrix} O & {I}_{d} \\ - {I}_{d} & - \gamma {I}_{d} \end{matrix}\right\rbrack \left( {\nabla \ln {p}_{t} - \nabla \ln {p}^{ * }}\right) \tag{2}
$$

is also ${p}_{t}\left( {{x}_{t},{v}_{t}}\right)$ .

With this in mind, we show the following fact about the conditioning of the underdamped Langevin flow:

Lemma 1. Consider underdamped Langevin dynamics (1) with $\zeta = 1$ , friction coefficient $\gamma < 2$ and starting distribution $p$ which satisfies all the assumptions in Theorem 1. Let ${T}_{t}$ denote the flow map from time 0 to time $t$ induced by (1). Then for any ${x}_{0},{v}_{0} \in {\mathbb{R}}^{d}$ and unit vector $w$ , the directional derivative of ${T}_{t}$ at $\left( {{x}_{0},{v}_{0}}\right)$ in direction $w$ satisfies

$$
\begin{Vmatrix}{{D}_{w}{T}_{t}\left( {{x}_{0},{v}_{0}}\right) }\end{Vmatrix} \geq {\left( 1 + \frac{2 + \gamma }{2 - \gamma }\left( \kappa - 1\right) \right) }^{-2/\gamma },
$$

$$
\begin{Vmatrix}{{D}_{w}{T}_{t}\left( {{x}_{0},{v}_{0}}\right) }\end{Vmatrix} \leq {\left( 1 + \frac{2 + \gamma }{2 - \gamma }\left( \kappa - 1\right) \right) }^{2/\gamma }.
$$

Therefore, the condition number of ${T}_{t}$ is bounded by ${\left( 1 + \frac{2 + \gamma }{2 - \gamma }\left( \kappa - 1\right) \right) }^{4/\gamma }$ .

---

${}^{1}$ As an aside, a similar strategy is taken in practice by recent SDE-based generative models (Song et al., 2020).

---
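As a quick numerical sanity check (not from the paper) that (1) indeed transports an out-of-equilibrium start toward ${p}^{ * }$ , the sketch below simulates (1) with assumed toy choices: $d = 1$, $U\left( x\right) = {x}^{2}/2$ (so $\nabla U\left( x\right) = x$), $\zeta = \gamma = 1$, and a simple Euler discretization with step $\eta$; the stationary distribution is then the standard Gaussian in $(x, v)$.

```python
import numpy as np

# Toy simulation of the underdamped Langevin SDE (1): d = 1, U(x) = x^2/2,
# zeta = gamma = 1, so the stationary distribution is N(0, I_2) over (x, v).
rng = np.random.default_rng(0)
eta, gamma, n_steps, n_particles = 0.05, 1.0, 400, 5000

# Start far from stationarity: x ~ N(3, 0.25), v ~ N(0, 1).
x = 3.0 + 0.5 * rng.standard_normal(n_particles)
v = rng.standard_normal(n_particles)

for _ in range(n_steps):
    x_next = x + eta * v                # dx = v dt
    # dv = -gamma v dt - x dt + sqrt(2 gamma) dB
    v = (1.0 - eta * gamma) * v - eta * x \
        + np.sqrt(2.0 * gamma * eta) * rng.standard_normal(n_particles)
    x = x_next

print(x.mean(), x.var(), v.var())  # drift toward (0, 1, 1) up to O(eta) bias
```

The empirical mean and variances approach those of the standard Gaussian, up to discretization bias of order $\eta$ and sampling noise.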
+ +The main idea is to consider how ${\nabla }^{2}\ln {p}_{t}$ evolves if we replace (1) by its discretization, + +$$ +{\widetilde{x}}_{t + \eta } = {\widetilde{x}}_{t} + \eta {\widetilde{v}}_{t} +$$ + +$$ +{\widetilde{v}}_{t + \eta } = \left( {1 - {\eta \gamma }}\right) {\widetilde{v}}_{t} - \eta {\widetilde{x}}_{t} + {\xi }_{t},\;{\xi }_{t} \sim N\left( {0,{2\eta }{\mathrm{I}}_{d}}\right) . +$$ + +Because the stationary distribution is a Gaussian, $\nabla U\left( {x}_{t}\right) =$ ${x}_{t}$ in (1) and the above equations take a particularly simple form: we apply a linear transformation to $\left\lbrack \begin{matrix} {\widetilde{x}}_{t} \\ {\widetilde{v}}_{t} \end{matrix}\right\rbrack$ , and then add Gaussian noise, which corresponds to convolving the current distribution by a Gaussian. The core of the approach is then to track upper and lower bounds for ${\nabla }^{2}\ln {p}_{t}$ , and compute how they evolve under this linear transformation and convolution by a Gaussian. + +Second, we break the simulation of underdamped Langevin dynamics for a certain time $t$ into intervals of size $\tau$ , and show that the inverse flow-map over each $\tau$ -sized interval of time can be approximated well by a composition of affine-coupling maps. To show this, we consider a more general system of ODEs than the one in (Turaev, 2002) (in particular, a non-Hamiltonian system), which can be applied to underdamped Langevin dynamics: + +$$ +\left\{ \begin{array}{l} \frac{dx}{dt} = \frac{\partial }{\partial v}H\left( {x, v, t}\right) \\ \frac{dv}{dt} = - \frac{\partial }{\partial x}H\left( {x, v, t}\right) - \gamma \frac{\partial }{\partial v}H\left( {x, v, t}\right) \end{array}\right. \tag{3} +$$ + +In (Turaev, 2002), the version of the above system where $\gamma = 0$ was proven to be a universal approximator in some sense: namely, the iterations of this ODE can approximate any symplectic diffeomorphism: a continuous map which preserves volumes (i.e. the Jacobian of the map is 1). 
These kinds of diffeomorphisms have their genesis in Hamiltonian formulations of classical mechanics (Abraham & Marsden, 2008). Precisely, after reducing the problem to considering polynomial $H$ only, we show:

Lemma 2. Let $\mathcal{C} \subset {\mathbb{R}}^{2d}$ be a compact set. For any function $H\left( {x, v, t}\right) : {\mathbb{R}}^{2d} \rightarrow \mathbb{R}$ which is polynomial in $\left( {x, v}\right)$ , there exist polynomial functions $J, F, G$ , s.t. the time- $\left( {{t}_{0},{t}_{0} + \tau }\right)$ flow map of the system

$$
\left\{ \begin{array}{l} \frac{dx}{dt} = \frac{\partial }{\partial v}H\left( {x, v, t}\right) \\ \frac{dv}{dt} = - \frac{\partial }{\partial x}H\left( {x, v, t}\right) - \gamma \frac{\partial }{\partial v}H\left( {x, v, t}\right) \end{array}\right. \tag{4}
$$

is uniformly $O\left( {\tau }^{2}\right)$ -close in the ${C}^{1}$ topology over $\mathcal{C}$ to the time- ${2\pi }$ map of the system

$$
\left\{ \begin{array}{l} \frac{dx}{dt} = v - {\tau F}\left( {v, t}\right) \odot x \\ \frac{d{v}_{j}}{dt} = - {\Omega }_{j}^{2}{x}_{j} - \tau {J}_{j}\left( {x, t}\right) - \tau {v}_{j}{G}_{j}\left( {x, t}\right) \end{array}\right. \tag{5}
$$

Here, $\odot$ denotes component-wise product, and the constants inside the $O\left( \cdot \right)$ depend on $\mathcal{C}$ .

We then show that the inverse flow-map of this system of ODEs can be approximated by a sequence of affine-coupling blocks, by considering an Euler-Maruyama discretization of the newly constructed ODE (5) into small steps of size $\eta$ , i.e.

$$
\begin{cases} {x}_{n + 1} = {x}_{n} + \eta \left( {{v}_{n} - {\tau F}\left( {{v}_{n},{\eta n}}\right) \odot {x}_{n}}\right) \\ {v}_{n + 1, j} = {v}_{n, j} - \eta \left( {{\Omega }_{j}^{2}{x}_{n, j} + \tau {J}_{j}\left( {{x}_{n},{\eta n}}\right) + \tau {v}_{n, j}{G}_{j}\left( {{x}_{n},{\eta n}}\right) }\right) \end{cases} \tag{6}
$$

Note that each step above can be written as a composition of affine coupling blocks given by

$$
\left( {{x}_{n},{v}_{n}}\right) \mapsto \left( {{x}_{n},{v}_{n + 1}}\right) \mapsto \left( {{x}_{n + 1},{v}_{n + 1}}\right)
$$

## 4. Related Work

On the theory front, most closely related to our results are the recent works of (Huang et al., 2020; Zhang et al., 2020; Koehler et al., 2020). The former two show universal approximation of affine couplings - albeit if the input is padded with zeros. This of course results in maps with singular Jacobians, which is why this strategy isn't used in practice. (Koehler et al., 2020) show universal approximation without padding - though their construction results in a flow model with condition number $1/\epsilon$ to achieve approximation $\epsilon$ in the Wasserstein sense, regardless of how well-behaved the distribution to be approximated is. Furthermore, (Koehler et al., 2020) provide some empirical evidence that padding with iid Gaussians (as in our paper) is better than both zero padding (as in (Huang et al., 2020; Zhang et al., 2020)) and no padding on small-scale data.

## 5. Conclusion

In this paper, we provide the first guarantees on universal approximation with well-conditioned affine coupling networks. The conditioning of the network is crucial in order for likelihood-based training to succeed. At the mathematical level, we uncover connections between stochastic differential equations, dynamical systems and affine coupling flows. Our construction uses Gaussian padding, which lends support to the empirical observation that this strategy tends to result in better-conditioned flows (Koehler et al., 2020). We leave it as an open problem to generalize beyond log-concave distributions.

## References

Abraham, R. and Marsden, J. E. Foundations of mechanics. Number 364. American Mathematical Soc., 2008.

Bakry, D.
and Émery, M. Diffusions hypercontractives. In Séminaire de Probabilités XIX 1983/84, pp. 177-206. Springer, 1985.

Bakry, D., Gentil, I., and Ledoux, M. Analysis and geometry of Markov diffusion operators, volume 348. Springer Science & Business Media, 2013.

Dinh, L., Krueger, D., and Bengio, Y. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.

Huang, C.-W., Dinh, L., and Courville, A. Augmented normalizing flows: Bridging the gap between generative flows and latent variable models. arXiv preprint arXiv:2002.07101, 2020.

Jaini, P., Selby, K. A., and Yu, Y. Sum-of-squares polynomial flow, 2019.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1 \times 1$ convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.

Koehler, F., Mehta, V., and Risteski, A. Representational aspects of depth and conditioning in normalizing flows, 2020.

Ma, Y.-A., Chatterji, N., Cheng, X., Flammarion, N., Bartlett, P., and Jordan, M. I. Is there an analog of Nesterov acceleration for MCMC? feb 2019. URL http://arxiv.org/abs/1902.00996v2.

Meng, C., Song, Y., Song, J., and Ermon, S. Gaussianization flows, 2020.

Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, pp. 1530-1538. PMLR, 2015.

Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.

Turaev, D. Polynomial approximations of symplectic dynamics and richness of chaos in nonhyperbolic area-preserving maps. Nonlinearity, 16(1):123-135, nov 2002. doi: 10.1088/0951-7715/16/1/308.
URL https://doi.org/10.1088/0951-7715/16/1/308.

Zhang, H., Gao, X., Unterman, J., and Arodz, T. Approximation capabilities of neural odes and invertible residual networks. 2020. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/CAjeU4GmBY/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/CAjeU4GmBY/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..3b0533b94af6e1fadc1079566a8bf0d51293e864 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/CAjeU4GmBY/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,237 @@

§ UNIVERSAL APPROXIMATION USING WELL-CONDITIONED NORMALIZING FLOWS

Anonymous Authors ${}^{1}$

§ ABSTRACT

Affine-coupling models (Dinh et al., 2014; 2016) are a particularly common type of normalizing flows, for which the Jacobian of the latent-to-observable-variable transformation is triangular, allowing the likelihood to be computed in linear time. Despite the widespread usage of affine couplings, the special structure of the architecture makes understanding their representational power challenging. The question of universal approximation was only recently resolved by three parallel papers (Huang et al., 2020; Zhang et al., 2020; Koehler et al., 2020) - who showed reasonably regular distributions can be approximated arbitrarily well using affine couplings - albeit with networks with a nearly-singular Jacobian. As ill-conditioned Jacobians are an obstacle for likelihood-based training, the fundamental question remains: which distributions can be approximated using well-conditioned affine coupling flows? In this paper, we show that any log-concave distribution can be approximated using well-conditioned affine-coupling flows. In terms of proof techniques, we uncover and leverage deep connections between affine coupling architectures, underdamped Langevin dynamics (a stochastic differential equation often used to sample from Gibbs measures) and Hénon maps (a structured dynamical system that appears in the study of symplectic diffeomorphisms). In terms of informing practice, we approximate a padded version of the input distribution with iid Gaussians - a strategy which (Koehler et al., 2020) empirically observed to result in better-conditioned flows, but had hitherto no theoretical grounding. Our proof can thus be seen as providing theoretical evidence for the benefits of Gaussian padding when training normalizing flows.

§ 1. INTRODUCTION

Normalizing flows (Dinh et al., 2014; Rezende & Mohamed, 2015) are a class of generative models parametrizing a distribution in ${\mathbb{R}}^{d}$ as the pushforward of a simple distribution (e.g. Gaussian) through an invertible map ${g}_{\theta } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ with trainable parameter $\theta$ . The fact that ${g}_{\theta }$ is invertible allows us to write down an explicit expression for the density of a point $x$ through the change-of-variables formula, namely ${p}_{\theta }\left( x\right) = \phi \left( {{g}_{\theta }^{-1}\left( x\right) }\right) \left| \det \left( {D{g}_{\theta }^{-1}\left( x\right) }\right) \right|$ , where $\phi$ denotes the density of the standard Gaussian. For different choices of parametric families for ${g}_{\theta }$ , one gets different families of normalizing flows, e.g. affine coupling flows (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018), Gaussianization flows (Meng et al., 2020), and sum-of-squares polynomial flows (Jaini et al., 2019).
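The change-of-variables computation is straightforward to check numerically; the sketch below (numpy) uses an assumed invertible affine map $g_{\theta}(z) = Az + b$, for which the pushforward of the standard Gaussian has the closed form $\mathcal{N}(b, A{A}^{\top})$:

```python
import numpy as np

def std_normal_pdf(z):
    # Density of N(0, I) in R^len(z).
    return np.exp(-0.5 * z @ z) / (2.0 * np.pi) ** (len(z) / 2)

# An assumed invertible affine "flow" g_theta(z) = A z + b on R^2.
A = np.array([[2.0, 0.3],
              [0.0, 0.5]])
b = np.array([1.0, -1.0])

def push_density(x):
    # Change of variables: p_theta(x) = phi(g^{-1}(x)) |det(D g^{-1}(x))|.
    z = np.linalg.solve(A, x - b)
    return std_normal_pdf(z) * abs(1.0 / np.linalg.det(A))

# Sanity check against the closed form: the pushforward is N(b, A A^T).
x = np.array([0.7, -0.4])
cov = A @ A.T
diff = x - b
closed_form = np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / (
    2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
print(push_density(x), closed_form)  # the two densities agree
```

A general flow replaces the affine map by a learned composition of invertible blocks, but the density formula is evaluated in exactly the same way.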
In this paper we focus on affine coupling flows - arguably the family that has been most successfully scaled up (Kingma & Dhariwal, 2018). The parametrization of ${g}_{\theta }$ is chosen to be a composition of so-called affine coupling blocks, which are maps $f : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , s.t. $f\left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S} \odot s\left( {x}_{S}\right) + t\left( {x}_{S}\right) }\right)$ , where $\odot$ denotes entrywise multiplication and $s,t$ are (typically simple) neural networks. The choice of parametrization is motivated by the fact that the Jacobian of each affine block is triangular, so that the determinant can be calculated in linear time.

Despite the empirical success of this architecture, theoretical understanding remains elusive.

The most basic questions revolve around the representational power of such models. Even the question of universal approximation was only recently answered by three concurrent papers (Huang et al., 2020; Zhang et al., 2020; Koehler et al., 2020) - though in a less-than-satisfactory manner, in light of how normalizing flows are trained in practice.
+$$ + +Therefore, the condition number of ${T}_{t}$ is bounded by ${\left( 1 + \frac{2 + \gamma }{2 - \gamma }\left( \kappa - 1\right) \right) }^{4/\gamma }$ . + +The main idea is to consider how ${\nabla }^{2}\ln {p}_{t}$ evolves if we replace (1) by its discretization, + +$$ +{\widetilde{x}}_{t + \eta } = {\widetilde{x}}_{t} + \eta {\widetilde{v}}_{t} +$$ + +$$ +{\widetilde{v}}_{t + \eta } = \left( {1 - {\eta \gamma }}\right) {\widetilde{v}}_{t} - \eta {\widetilde{x}}_{t} + {\xi }_{t},\;{\xi }_{t} \sim N\left( {0,{2\eta }{\mathrm{I}}_{d}}\right) . +$$ + +Because the stationary distribution is a Gaussian, $\nabla U\left( {x}_{t}\right) =$ ${x}_{t}$ in (1) and the above equations take a particularly simple form: we apply a linear transformation to $\left\lbrack \begin{matrix} {\widetilde{x}}_{t} \\ {\widetilde{v}}_{t} \end{matrix}\right\rbrack$ , and then add Gaussian noise, which corresponds to convolving the current distribution by a Gaussian. The core of the approach is then to track upper and lower bounds for ${\nabla }^{2}\ln {p}_{t}$ , and compute how they evolve under this linear transformation and convolution by a Gaussian. + +Second, we break the simulation of underdamped Langevin dynamics for a certain time $t$ into intervals of size $\tau$ , and show that the inverse flow-map over each $\tau$ -sized interval of time can be approximated well by a composition of affine-coupling maps. To show this, we consider a more general system of ODEs than the one in (Turaev, 2002) (in particular, a non-Hamiltonian system), which can be applied to underdamped Langevin dynamics: + +$$ +\left\{ \begin{array}{l} \frac{dx}{dt} = \frac{\partial }{\partial v}H\left( {x,v,t}\right) \\ \frac{dv}{dt} = - \frac{\partial }{\partial x}H\left( {x,v,t}\right) - \gamma \frac{\partial }{\partial v}H\left( {x,v,t}\right) \end{array}\right. 
\tag{3} +$$ + +In (Turaev, 2002), the version of the above system where $\gamma = 0$ was proven to be a universal approximator in some sense: namely, the iterations of this ODE can approximate any symplectic diffeomorphism: a continuous map which preserves volumes (i.e. the Jacobian of the map is 1). These kinds of diffeomorphisms have their genesis in Hamiltonian formulations of classical mechanics (Abraham & Marsden, 2008). Precisely, after reducing the problem to considering polynomial $H$ only, we show: + +Lemma 2. Let $\mathcal{C} \subset {\mathbb{R}}^{n}$ be a compact set. For any function $H\left( {x,v,t}\right) : {\mathbb{R}}^{2d} \rightarrow \mathbb{R}$ which is polynomial in(x, v), there exist polynomial functions $J,F,G$ , s.t. the time- $\left( {{t}_{0},{t}_{0} + \tau }\right)$ flow map of the system + +$$ +\left\{ \begin{array}{l} \frac{dx}{dt} = \frac{\partial }{\partial v}H\left( {x,v,t}\right) \\ \frac{dv}{dt} = - \frac{\partial }{\partial x}H\left( {x,v,t}\right) - \gamma \frac{\partial }{\partial v}H\left( {x,v,t}\right) \end{array}\right. \tag{4} +$$ + +is uniformly $O\left( {\tau }^{2}\right)$ -close in ${C}^{1}$ over $\mathcal{C}$ topology to the time- ${2\pi }$ map of the system + +$$ +\left\{ \begin{array}{l} \frac{dx}{dt} = v - {\tau F}\left( {v,t}\right) \odot x \\ \frac{d{v}_{j}}{dt} = - {\Omega }_{j}^{2}{x}_{j} - \tau {J}_{j}\left( {x,t}\right) - \tau {v}_{j}{G}_{j}\left( {x,t}\right) \end{array}\right. \tag{5} +$$ + +Here, $\odot$ denotes component-wise product, and the constants inside the $O\left( \cdot \right)$ depend on $\mathcal{C}$ . + +We then show that the inverse flow-map of this system of ODEs can be approximated by a sequence of affine-coupling blocks, by considering an Euler-Mariyama discretization of the newly constructed ODE (5) into small steps of size $\eta$ i.e. 
+ +$$ +\begin{cases} {x}_{n + 1} = {x}_{n} + \eta \left( {{v}_{n} - {\tau F}\left( {{v}_{n},{\eta n}}\right) \odot {x}_{n}}\right) \\ {v}_{n + 1,j} = {v}_{n,j} - \eta \left( {{\Omega }_{j}^{2}{x}_{n,j} - \tau {J}_{j}\left( {{x}_{n},{\eta n}}\right) }\right. \\ \left. {-\tau {v}_{n,j}{G}_{j}\left( {{x}_{n},{\eta n}}\right) }\right) \end{cases} \tag{6} +$$ + +Note that each step above can be written as a composition of affine coupling blocks given by + +$$ +\left( {{x}_{n},{v}_{n}}\right) \mapsto \left( {{x}_{n},{v}_{n + 1}}\right) \mapsto \left( {{x}_{n + 1},{v}_{n + 1}}\right) +$$ + +§ 4. RELATED WORK + +On the theory front, most closely related to our results are the recent works of (Huang et al., 2020; Zhang et al., 2020; Koehler et al., 2020). The former two show universal approximation of affine couplings - albeit if the input is padded with zeros. This of course results in maps with singular Jacobians, which is why this strategy isn't used in practice. (Koehler et al., 2020) show universal approximation without padding - though their constructions results in a flow model with condition number $1/\epsilon$ to get approximation $\epsilon$ in the Wasserstein sense, regardless of how well-behaved the distribution to be approximated is. Furthemore, (Koehler et al., 2020) provide some empirical evidence that padding with iid Gaussians (as in our paper) is better than both zero padding (as in (Huang et al., 2020; Zhang et al., 2020)) and no padding on small-scale data. + +§ 5. CONCLUSION + +In this paper, we provide the first guarantees on universal approximation with well-conditioned affine coupling networks. The conditioning of the network is crucial in order for likelihood-based training to succeed. At the mathematical level, we uncover connections between stochastic differential equations, dynamical systems and affine coupling flows. 
Our construction uses Gaussian padding, which lends support to the empirical observation that this strategy tends to result in better-conditioned flows (Koehler et al., 2020). We leave it as an open problem to generalize beyond log-concave distributions. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Cx3XaRx5C2W/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Cx3XaRx5C2W/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..ed7c97511fb8d214a99e5a67723de1a7bc51ff00 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Cx3XaRx5C2W/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,161 @@ +# Manifold Density Estimation via Generalized Dequantization + +Anonymous Authors ${}^{1}$ + +## Abstract + +Density estimation is an important technique for characterizing distributions given observations. Much existing research on density estimation has focused on cases wherein the data lies in a Euclidean space. However, some kinds of data are not well-modeled by supposing that their underlying geometry is Euclidean. Instead, it can be useful to model such data as lying on a manifold with some known structure. For instance, some kinds of data may be known to lie on the surface of a sphere. We study the problem of estimating densities on manifolds. We propose a method, inspired by the literature on "dequantization," which we interpret through the lens of a coordinate transformation of an ambient Euclidean space and a smooth manifold of interest. Using methods from normalizing flows, we apply this method to the dequantization of smooth manifold structures in order to model densities on the sphere, tori, and the orthogonal group. + +## 1. Introduction + +Certain kinds of data are not well-modeled under the assumption of an underlying Euclidean geometry. 
Examples include data with a fundamental directional structure, data that represents transformations of Euclidean space (such as rotations and reflections), data that has periodicity constraints or data that represents hierarchical structures. In such cases, it is important to explicitly model the data as lying on a manifold with a suitable structure; for instance a sphere would be appropriate for directional data, the orthogonal group for rotations and reflections, and the torus captures structural properties of periodicity. + +The contribution of this work is to express density estimation on manifolds as a form of dequantization. Given a probability density in an ambient Euclidean space, one can obtain the density on the manifold by performing a manifold change-of-variables in which the manifold structure appears and then projecting out any auxiliary structures. This marginalization can be viewed as analogous to "quantization" where, for instance, continuous values are discarded and only rounded integer values remain. In this view the auxiliary structure defines how the manifold could be "dequantized" into the ambient Euclidean space. By marginalizing along these auxiliary dimensions, one obtains the marginal distribution on the manifold. In practice, however, one has only the manifold-constrained observations from an unknown distribution on the manifold. A second contribution of this work is to formulate the density estimation as a learning problem on the ambient Euclidean space. We show how to invoke the manifold change-of-variables, and then perform the marginalization along the auxiliary dimensions to obtain effective estimates of the density on the manifold. 
An advantage of our dequantization approach is that it allows one to utilize any expressive density directly on the ambient Euclidean space (e.g., RealNVP (Dinh et al., 2017), neural ODEs (Chen et al., 2018; Grathwohl et al., 2018) or any + +![01963e41-c08f-7e77-a3fd-860f8b6b0a8b_0_964_506_570_395_0.jpg](images/01963e41-c08f-7e77-a3fd-860f8b6b0a8b_0_964_506_570_395_0.jpg) + +Figure 1. We model densities on a manifold as a projection, or "quantization," onto the manifold from an ambient Euclidean space. To enable density computations we use a "dequantization density" which can depend position on the manifold. In this figure the manifold in question is ${\mathbb{S}}^{2}$ , embedded in ${\mathbb{R}}^{3}$ and the dequantization density, illustrated here for a set of locations along the equator of ${\mathbb{S}}^{2}$ , is over $r \in {\mathbb{R}}_{ + }$ , the distance from the origin in the direction $y \in {\mathbb{S}}^{2}$ . Density on the manifold can be estimated via importance sampling by marginalizing over ${\mathbb{R}}^{ + }$ for a given $y \in {\mathbb{S}}^{2}$ using the dequantization distribution as an importance distribution. + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +055 + +![01963e41-c08f-7e77-a3fd-860f8b6b0a8b_1_199_192_1356_341_0.jpg](images/01963e41-c08f-7e77-a3fd-860f8b6b0a8b_1_199_192_1356_341_0.jpg) + +Figure 2. The dequantization roadmap. In the first row, we begin with ${\mathbb{R}}^{m}$ (or a space identical to ${\mathbb{R}}^{m}$ up to a set of measure zero). This Euclidean space can be transformed into the product of manifolds $\mathcal{Y} \times \mathcal{Z}$ via a change-of-variables $G : {\mathbb{R}}^{m} \rightarrow \mathcal{Y} \times \mathcal{Z}$ . Quantization takes the product manifold $\mathcal{Y} \times \mathcal{Z}$ to its $\mathcal{Y}$ -component alone. 
In the second row, we may begin with a probability density ${\pi }_{{\mathbb{R}}^{m}}$ defined on ${\mathbb{R}}^{m}$ . Under the change-of-variables $G$ we obtain a new probability density ${\pi }_{\mathcal{Y} \times \mathcal{Z}}$ which is related to ${\pi }_{{\mathbb{R}}^{m}}$ by the manifold change-of-variables eq. (1). Quantizing $\mathcal{Y} \times \mathcal{Z}$ marginalizes out the $\mathcal{Z}$ -component of ${\pi }_{\mathcal{Y} \times \mathcal{Z}}$ . We may equivalently introduce a dequantization density ${\widetilde{\pi }}_{\mathcal{Z}}$ and compute the marginal density on $\mathcal{Y}$ via importance sampling. + +056 + +057 + +058 + +059 + +060 + +Table 1. Table of the matrix manifold dequantizations considered in this work. We show the corresponding auxiliary structure, the dequantization transformation, the resulting Euclidean space, and the Jacobian determinant of the transformation. + +
ManifoldAuxiliary StructureEuclidean SpaceDequantizationJacobian Determinant
${\mathbb{S}}^{m - 1}$${\mathbb{R}}_{ + }$${\mathbb{R}}^{m}$Spherical coordinates $\left( {y, r}\right) \mapsto {ry}$${r}^{m - 1}$
${\mathbb{T}}^{m}$${\mathbb{R}}_{ + } \times \cdots \times {\mathbb{R}}_{ + }$${\mathbb{R}}^{2m}$Iterated polar coordinates $\left( {{y}_{i},{r}_{i}}\right) \mapsto {r}_{i}{y}_{i}$$\mathop{\prod }\limits_{{i = 1}}^{m}{r}_{i}$
Stiefel(m, n)${\operatorname{Tri}}_{ + }\left( n\right)$${\mathbb{R}}^{m \times n}$QR decomposition $\left( {\mathbf{Y},\mathbf{R}}\right) \mapsto \mathbf{{YR}}$${\mathbf{R}}_{11}^{m - 1}\cdots {\mathbf{R}}_{nn}^{m - n}$
Stiefel(m, n)$\operatorname{PD}\left( n\right)$${\mathbb{R}}^{m \times n}$Matrix polar decomposition $\left( {\mathbf{Y},\mathbf{R}}\right) \mapsto \mathbf{{YR}}$Automatic differentiation
+ +080 + +081 + +other normalizing flow (Kobyzev et al., 2020)); the dequan-tization approach does not require a practitioner to construct densities intrinsically on the manifold. + +## 2. Theory + +Theorem 1. Let $\mathcal{Y}$ and $\mathcal{Z}$ be smooth manifolds embedded in ${\mathbb{R}}^{n}$ and ${\mathbb{R}}^{p}$ , respectively. Let $G : {\mathbb{R}}^{m} \rightarrow \mathcal{Y} \times \mathcal{Z}$ be a smooth, invertible transformation. Let ${\pi }_{{\mathbb{R}}^{m}}$ be a density on ${\mathbb{R}}^{m}$ . Under the change-of-variables $G$ , the corresponding density on $\mathcal{Y} \times \mathcal{Z}$ is given by, + +$$ +{\pi }_{\mathcal{Y} \times \mathcal{Z}}\left( {y, z}\right) = \frac{{\pi }_{{\mathbb{R}}^{m}}\left( x\right) }{\sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }} \tag{1} +$$ + +where $x = {G}^{-1}\left( {y, z}\right)$ . + +Even when $G$ is not an invertible mapping, it may be possible to compute the change-of-variables when $G$ is invertible on partitions of ${\mathbb{R}}^{m}$ . + +Corollary 1. Let ${\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{l}$ be a partition of ${\mathbb{R}}^{m}$ . Let $G : {\mathbb{R}}^{m} \rightarrow \mathcal{Y} \times \mathcal{Z}$ be a function and suppose that there exist smooth and invertible functions ${G}_{i} : {\mathcal{O}}_{i} \rightarrow \mathcal{Y} \times \mathcal{Z}$ such that ${G}_{i} = G \mid {\mathcal{O}}_{i}$ for $i = 1,\ldots , l$ . Then, if $x \sim {\pi }_{{\mathbb{R}}^{m}}$ , the density of $\left( {y, z}\right) = G\left( x\right)$ is given by + +$$ +{\pi }_{\mathcal{Y} \times \mathcal{Z}}\left( {y, z}\right) = \mathop{\sum }\limits_{{i = 1}}^{l}\frac{{\pi }_{{\mathbb{R}}^{m}}\left( {x}_{i}\right) }{\sqrt{\det \left( {\nabla {G}_{i}{\left( {x}_{i}\right) }^{\top }\nabla {G}_{i}\left( {x}_{i}\right) }\right) }}. \tag{2} +$$ + +where ${x}_{i} = {G}_{i}^{-1}\left( {y, z}\right)$ . + +How does theorem 1 relate to the dequantization of smooth manifolds? + +### 2.1. 
Dequantization + +Manifolds of interest (such the sphere, the torus, or the orthogonal group) can be introduced as elements of a new coordinate system for an ambient Euclidean space. In each case, the manifold appears with an auxiliary manifold which may not be of immediate interest. Namely, (i) The sphere appears with set of positive real numbers when defining a coordinate system for ${\mathbb{R}}^{m} \smallsetminus \{ 0\} \cong {\mathbb{S}}^{m - 1} \times {\mathbb{R}}_{ + }$ ; (ii) The torus appears the product manifold of $m$ copies of the positive real numbers when defining a coordinate system for ${\mathbb{R}}^{2m} \smallsetminus \{ 0\} \cong {\mathbb{T}}^{m} \times {\mathbb{R}}_{ + } \times \ldots \times {\mathbb{R}}_{ + }$ ; (iii) the Stiefel manifold appears with the set of lower-triangular matrices with positive diagonal entries when defining a coordinate system of full-rank matrices: $\operatorname{FR}\left( {n, p}\right) \cong \operatorname{Stiefel}\left( {n, p}\right) \times$ ${\operatorname{Tri}}_{ + }\left( p\right)$ . We would like to marginalize out these "nuisance manifolds" so as to obtain distributions on the manifold of primary interest. A convenient means to achieve this is to introduce an importance sampling distribution over the nuisance manifold. Formally, we have the following result, which is an immediate consequence of theorem 1 . + +Corollary 2. Let $\mathcal{Y},\mathcal{Z}, G$ , and $\pi {y}_{\times \mathcal{Z}}$ be as defined in theorem 1. Let ${\widetilde{\pi }}_{\mathcal{Z}}$ be a non-vanishing density on $\mathcal{Z}$ . 
To obtain the marginal density on $\mathcal{Y}$ , it suffices to compute, + +$$ +{\pi }_{\mathcal{Y}}\left( y\right) = \underset{z \sim {\widetilde{\pi }}_{\mathcal{Z}}}{\mathbb{E}}\frac{{\pi }_{\mathcal{X}}\left( x\right) }{{\widetilde{\pi }}_{\mathcal{Z}}\left( z\right) \cdot \sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}, \tag{3} +$$ + +where $x = {G}^{-1}\left( {y, z}\right)$ . + +## 3. Discussion + +We investigate the problem of density estimation given observations on a manifold using the dequantization procedure described in section 2. + +Problem. Let $\mathcal{Y}$ be a manifold embedded in ${\mathbb{R}}^{n}$ and let ${\pi }_{\mathcal{Y}}$ be a density on $\mathcal{Y}$ . Given observations of ${\pi }_{\mathcal{Y}}$ , construct an estimate ${\widehat{\pi }}_{\mathcal{Y}}$ of the density ${\pi }_{\mathcal{Y}}$ . We apply eq. (3) in order to obtain the density estimate on $\mathcal{Y}$ . Generating samples from ${\pi }_{\mathcal{Y}}$ may be achieved by first sampling $x \sim {\pi }_{\mathcal{X}}$ , applying the transformation $G\left( x\right) = \left( {y, z}\right)$ , and taking $y$ as a sample from the approximated distribution ${\widehat{\pi }}_{\mathcal{Y}}$ . + +### 3.1. Densities on ${\mathbb{R}}^{m}$ + +As ${\mathbb{R}}^{m}$ is a Euclidean space, we have available a wealth of possible mechanisms to produce flexible densities in the ambient space. One popular choice is RealNVP (Dinh et al., 2017). An alternative is neural ODEs which parameterizes a vector field in the Euclidean space; the change in probability density under the vector field flow is obtained by integrating the instantaneous change-of-variables formula (Chen et al., 2018; Grathwohl et al., 2018). + +### 3.2. Objective Functions + +We consider two possible objective functions for density estimation. 
The first is the evidence lower bound of the observations $\left\{ {{y}_{1},\ldots ,{y}_{{n}_{\mathrm{{obs}}}}}\right\}$ : + +$$ +\log {\widehat{\pi }}_{\mathcal{Y}}\left( {y}_{i}\right) \geq \underset{z \sim {\widetilde{\pi }}_{\mathcal{Z}}}{\mathbb{E}}\log \frac{{\pi }_{{\mathbb{R}}^{m}}\left( {{G}^{-1}\left( {{y}_{i}, z}\right) }\right) }{{\widetilde{\pi }}_{\mathcal{Z}}\left( z\right) \cdot \sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}. +$$ + +(4) + +This follows as a consequence of Jensen's inequality applied to eq. (3). Experimental results using this objective function are denoted with the suffix (ELBO). The second is the log-likelihood computed via importance sampling: + +$$ +\log {\widehat{\pi }}_{\mathcal{Y}}\left( {y}_{i}\right) = \log \underset{z \sim {\widetilde{\pi }}_{\mathcal{Z}}}{\mathbb{E}}\frac{{\pi }_{{\mathbb{R}}^{m}}\left( {{G}^{-1}\left( {{y}_{i}, z}\right) }\right) }{{\widetilde{\pi }}_{\mathcal{Z}}\left( z\right) \cdot \sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}. +$$ + +(5) + +Because the calculation of eq. (5) requires an importance sampling estimate, experimental results using this objective function are denoted with the suffix (I.S.). + +## 4. Experimental Results + +To demonstrate the effectiveness of the approach, we now show experimental results for density estimation on three different manifolds: the sphere, the torus and the orthogonal group. In our comparison against competing algorithms, we ensure that each method has a comparable number of learnable parameters. Our evaluation metrics are designed to test the fidelity of the density estimate to the target distribution. In all of our examples we use rejection sampling in order to draw samples from the target distribution. + +### 4.1. Sphere and Hypersphere + +Our first experimental results concern the sphere ${\mathbb{S}}^{2}$ where we consider a multimodal distribution with four modes. 
We consider performing density estimation using the ELBO (eq. (4)) and log-likelihood objective functions (eq. (5)); we construct densities in the ambient space using RealNVP and neural ODEs. As baselines we consider the Möbius transform approach described in (Rezende et al., 2020), which is a specialized normalizing flow method for tori and spheres, and the neural manifold ODE applied to the sphere as described in (Lou et al., 2020). We give a comparison of performance metrics between these methods in table 2. In these experiments, we find that parameterizing a neural ODE model in the ambient space gave the better KL-divergence and effective sample size (ESS) metrics than RealNVP when our dequantization approach is used. We find that our dequantization algorithm minimizing either eq. (4) or eq. (5) achieves similar performance in the first and second moment metrics. However, when using eq. (5), slightly lower KL-divergence metrics are achievable as well as slightly larger effective sample sizes. In either case, de-quantization tends to outperform the Möbius transform on this multimodal density on ${\mathbb{S}}^{2}$ . The manifold ODE method is outperformed by the ODE dequantization algorithms with both eq. (4) and eq. (5). + +We next consider a multimodal density ${\mathbb{S}}^{3} \cong \mathrm{{SU}}\left( 3\right)$ (the special unitary group). As before, we compare dequantization to Möbius flow transformations and manifold neural ODEs and present results in table 3. Similar to the case of the multimodal density on ${\mathbb{S}}^{2}$ , we find that dequantization with an ambient neural ODE model is most effective, with ELBO maximization giving the smallest KL-divergence metrics. All dequantization algorithms out-performed the Möbius transformation on the sphere but only dequantization with an ambient ODE and ELBO minimization outperformed the manifold neural ODE method. + +Table 2. 
Comparison of dequantization to normalizing flows on the multimodal density on ${\mathbb{S}}^{2}$ . Averages were computed using ten random trials for the dequantization procedures and eight random trials for the normalizing flow (because two random trials exhibited divergent behavior and were excluded). The dequantization procedure is illustrated for both the ELBO loss and the KL divergence loss. + +
MethodMean MSECovariance MSE$\operatorname{KL}\left( {q\parallel p}\right)$$\operatorname{KL}\left( {p\parallel q}\right)$Relative ESS
Deq. ODE (ELBO)${0.0012} \pm {0.0002}$${0.0006} \pm {0.0001}$${0.0046} \pm {0.0002}$${0.0046} \pm {0.0002}$${99.0990} \pm {0.0401}$
Deq. ODE (I.S.)${0.0014} \pm {0.0002}$${0.0010} \pm {0.0001}$${0.0029} \pm {0.0001}$${0.0029} \pm {0.0001}$${99.4170} \pm {0.0225}$
Deq RealNVP (ELBO)${0.0004} \pm {0.0001}$${0.0003} \pm {0.0001}$${0.0231} \pm {0.0010}$${0.0212} \pm {0.0009}$${95.9540} \pm {0.1688}$
Deq. RealNVP (I.S.)${0.0005} \pm {0.0002}$${0.0002} \pm {0.0000}$${0.0124} \pm {0.0006}$${0.0115} \pm {0.0006}$${97.8240} \pm {0.1183}$
Man. ODE${0.0010} \pm {0.0004}$${0.0009} \pm {0.0002}$${0.0085} \pm {0.0007}$${0.0083} \pm {0.0007}$${98.3860} \pm {0.1328}$
Möbius${0.0021} \pm {0.0005}$${0.0019} \pm {0.0005}$${0.0595} \pm {0.0025}$-${89.2575} \pm {0.4888}$
+ +Table 3. Comparison of dequantization to normalizing flows on the multimodal density on ${\mathbb{S}}^{3}$ . Averages were computed using ten random trials for the dequantization procedures and nine random trials for the normalizing flow (one random trial exhibited divergent behavior and was excluded). + +
MethodMean MSECovariance MSE$\operatorname{KL}\left( {q\parallel p}\right)$$\operatorname{KL}\left( {p\parallel q}\right)$Relative ESS
Deq. ODE (ELBO)${0.0009} \pm {0.0001}$${0.0007} \pm {0.0001}$${0.0072} \pm {0.0002}$${0.0070} \pm {0.0002}$${98.6490} \pm {0.0388}$
Deq. ODE (I.S.)${0.0017} \pm {0.0001}$${0.0022} \pm {0.0002}$${0.0189} \pm {0.0004}$${0.0180} \pm {0.0004}$${96.6150} \pm {0.0648}$
Deq. RealNVP (ELBO)${0.0003} \pm {0.0001}$${0.0004} \pm {0.0001}$${0.0384} \pm {0.0010}$${0.0283} \pm {0.0005}$${95.1880} \pm {0.0771}$
Deq. RealNVP (I.S.)${0.0003} \pm {0.0001}$${0.0003} \pm {0.0000}$${0.0208} \pm {0.0004}$${0.0180} \pm {0.0004}$${96.6340} \pm {0.0920}$
Man. ODE${0.0012} \pm {0.0003}$${0.0008} \pm {0.0002}$${0.0098} \pm {0.0009}$${0.0094} \pm {0.0007}$${98.1780} \pm {0.1302}$
Möbius${0.0027} \pm {0.0004}$${0.0014} \pm {0.0003}$${0.0542} \pm {0.0047}$-${88.7290} \pm {0.9332}$
+ +Table 4. Metrics of the dequantization algorithm in application to the orthogonal Procrustes problem and dequantization of a multimodal density on $\mathrm{{SO}}\left( 3\right)$ . When using the polar decomposition, results are averaged over ten independent trials for the multimodal distribution on SO(3) and nine independent trials for the orthogonal Procrustes problem; for the QR decomposition, results are averaged over nine trials. + +
ExperimentMean MSECovariance MSE$\operatorname{KL}\left( {q\parallel p}\right)$$\operatorname{KL}\left( {p\parallel q}\right)$Relative ESS
Procrustes (ELBO - Polar)${0.0021} \pm {0.0008}$${0.0012} \pm {0.0005}$${0.0193} \pm {0.0069}$${0.0173} \pm {0.0053}$${96.9489} \pm {0.7649}$
Procrustes (I.S. - Polar)${0.0038} \pm {0.0020}$${0.0015} \pm {0.0008}$${0.0301} \pm {0.0126}$${0.0202} \pm {0.0075}$${95.6944} \pm {1.4654}$
Procrustes (ELBO - QR)${0.0011} \pm {0.0003}$${0.0008} \pm {0.0003}$${0.0124} \pm {0.0032}$${0.0095} \pm {0.0015}$${97.9678} \pm {0.3325}$
Procrustes (I.S. - QR)${0.0015} \pm {0.0005}$${0.0011} \pm {0.0004}$${0.0174} \pm {0.0072}$${0.0122} \pm {0.0029}$${96.6267} \pm {0.6326}$
SO(3) (ELBO - Polar)${0.0007} \pm {0.0002}$${0.0029} \pm {0.0003}$${0.0443} \pm {0.0011}$${0.0415} \pm {0.0059}$${96.2930} \pm {0.0649}$
SO(3) (I.S. - Polar)${0.0004} \pm {0.0001}$${0.0014} \pm {0.0001}$${0.0207} \pm {0.0028}$${0.0235} \pm {0.0029}$${97.7280} \pm {0.1136}$
SO(3) (ELBO - QR)${0.0017} \pm {0.0004}$${0.0054} \pm {0.0006}$${0.0563} \pm {0.0060}$${0.0363} \pm {0.0041}$${93.5633} \pm {2.1331}$
SO(3) (I.S. - QR)${0.0012} \pm {0.0004}$${0.0020} \pm {0.0004}$${0.0260} \pm {0.0017}$${0.0219} \pm {0.0021}$${94.3256} \pm {2.8099}$
+ +### 4.2. Orthogonal Group + +The previous two examples focused on manifolds composed of spheres and circles. We now examine density estimation on the orthogonal group, where we consider inference in a probabilistic variant of the orthogonal Procrustes problem; we seek to sample orthogonal transformations that transport one point cloud towards another in terms of squared distance. We consider parameterizing a distribution in the ambient Euclidean space using RealNVP in these experiments. Results are presented in table 4. We observe that optimizing the ELBO objective function (eq. (4)) tended to produce better density estimates than the log-likelihood (eq. (5)). Nevertheless, we find that either dequantization algorithm is highly effective at matching the target density. + +We may also leverage corollary 1 so as to apply our method to the "dequantization" of $\mathrm{{SO}}\left( n\right)$ . As an example, we consider a multimodal density on $\mathrm{{SO}}\left( 3\right)$ . Results of applying our method to sampling from this distribution are also shown in table 4. In this example we find that minimizing the negative log-likelihood using importance sampling tended to produce the best approximation of the first- and second-moments of the distribution, in addition to smaller KL-divergence metrics. + +References + +Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. Neural ordinary differential equations. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 6571-6583. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ + +69386f6bb1dfed68692a24c8686939b9-Paper. pdf. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real NVP. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. 
URL https://openreview.net/forum?id=HkpbnH91x.

Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. FFJORD: Free-form continuous dynamics for scalable reversible generative models, 2018.

Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Lou, A., Lim, D., Katsman, I., Huang, L., Jiang, Q., Lim, S.-N., and De Sa, C. Neural manifold ordinary differential equations. arXiv preprint arXiv:2006.10254, 2020.

Rezende, D. J., Papamakarios, G., Racanière, S., Albergo, M. S., Kanwar, G., Shanahan, P. E., and Cranmer, K. Normalizing flows on tori and spheres. CoRR, abs/2002.02428, 2020. URL https://arxiv.org/abs/2002.02428.

\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Cx3XaRx5C2W/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Cx3XaRx5C2W/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..0338d2be5aa737f618c71f292542a60860a8475f --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Cx3XaRx5C2W/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,227 @@

§ MANIFOLD DENSITY ESTIMATION VIA GENERALIZED DEQUANTIZATION

Anonymous Authors ${}^{1}$

§ ABSTRACT

Density estimation is an important technique for characterizing distributions given observations. Much existing research on density estimation has focused on cases wherein the data lies in a Euclidean space. However, some kinds of data are not well-modeled by supposing that their underlying geometry is Euclidean. Instead, it can be useful to model such data as lying on a manifold with some known structure. For instance, some kinds of data may be known to lie on the surface of a sphere.
We study the problem of estimating densities on manifolds. We propose a method, inspired by the literature on "dequantization," which we interpret through the lens of a coordinate transformation between an ambient Euclidean space and a smooth manifold of interest. Using methods from normalizing flows, we apply this method to the dequantization of smooth manifold structures in order to model densities on the sphere, tori, and the orthogonal group.

§ 1. INTRODUCTION

Certain kinds of data are not well-modeled under the assumption of an underlying Euclidean geometry. Examples include data with a fundamental directional structure, data that represents transformations of Euclidean space (such as rotations and reflections), data with periodicity constraints, or data that represents hierarchical structures. In such cases, it is important to explicitly model the data as lying on a manifold with a suitable structure; for instance, a sphere is appropriate for directional data, the orthogonal group for rotations and reflections, and the torus captures structural properties of periodicity.

The contribution of this work is to express density estimation on manifolds as a form of dequantization. Given a probability density in an ambient Euclidean space, one can obtain the density on the manifold by performing a manifold change-of-variables, in which the manifold structure appears, and then projecting out any auxiliary structures. This marginalization can be viewed as analogous to "quantization," where, for instance, continuous values are discarded and only rounded integer values remain. In this view, the auxiliary structure defines how the manifold could be "dequantized" into the ambient Euclidean space. By marginalizing along these auxiliary dimensions, one obtains the marginal distribution on the manifold. In practice, however, one has only the manifold-constrained observations from an unknown distribution on the manifold.
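The quantization/dequantization picture above can be made concrete with a small sketch of our own (not code from the paper): quantization projects an ambient point onto $\mathbb{S}^2$ by normalization, and dequantization lifts a manifold point back into $\mathbb{R}^3$ by sampling an auxiliary radius. The Gamma radius density here is an arbitrary illustrative choice of dequantization density.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x):
    """Project an ambient point of R^3 minus the origin onto the sphere S^2."""
    return x / np.linalg.norm(x)

def dequantize(y, rng):
    """Lift a point on S^2 back into R^3 by sampling a radius r > 0 from an
    auxiliary (dequantization) density, here an arbitrary Gamma(2, 1)."""
    r = rng.gamma(shape=2.0, scale=1.0)
    return r * y

x = rng.normal(size=3)           # an ambient observation
y = quantize(x)                  # manifold-valued "quantization"
x_tilde = dequantize(y, rng)     # stochastic "dequantization"

# The round trip preserves the manifold component exactly.
assert np.allclose(quantize(x_tilde), y)
```

Marginalizing the radius out of an ambient density then recovers a density on the sphere alone, which is the construction the next sections formalize.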
A second contribution of this work is to formulate the density estimation as a learning problem on the ambient Euclidean space. We show how to invoke the manifold change-of-variables, and then perform the marginalization along the auxiliary dimensions to obtain effective estimates of the density on the manifold. An advantage of our dequantization approach is that it allows one to utilize any expressive density directly on the ambient Euclidean space (e.g., RealNVP (Dinh et al., 2017), neural ODEs (Chen et al., 2018; Grathwohl et al., 2018), or any

Figure 1. We model densities on a manifold as a projection, or "quantization," onto the manifold from an ambient Euclidean space. To enable density computations we use a "dequantization density" which can depend on position on the manifold. In this figure the manifold in question is ${\mathbb{S}}^{2}$, embedded in ${\mathbb{R}}^{3}$, and the dequantization density, illustrated here for a set of locations along the equator of ${\mathbb{S}}^{2}$, is over $r \in {\mathbb{R}}_{+}$, the distance from the origin in the direction $y \in {\mathbb{S}}^{2}$. Density on the manifold can be estimated via importance sampling by marginalizing over ${\mathbb{R}}_{+}$ for a given $y \in {\mathbb{S}}^{2}$ using the dequantization distribution as an importance distribution.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

Figure 2. The dequantization roadmap. In the first row, we begin with ${\mathbb{R}}^{m}$ (or a space identical to ${\mathbb{R}}^{m}$ up to a set of measure zero). This Euclidean space can be transformed into the product of manifolds $\mathcal{Y} \times \mathcal{Z}$ via a change-of-variables $G : {\mathbb{R}}^{m} \rightarrow \mathcal{Y} \times \mathcal{Z}$.
Quantization takes the product manifold $\mathcal{Y} \times \mathcal{Z}$ to its $\mathcal{Y}$-component alone. In the second row, we may begin with a probability density ${\pi }_{{\mathbb{R}}^{m}}$ defined on ${\mathbb{R}}^{m}$. Under the change-of-variables $G$ we obtain a new probability density ${\pi }_{\mathcal{Y} \times \mathcal{Z}}$ which is related to ${\pi }_{{\mathbb{R}}^{m}}$ by the manifold change-of-variables eq. (1). Quantizing $\mathcal{Y} \times \mathcal{Z}$ marginalizes out the $\mathcal{Z}$-component of ${\pi }_{\mathcal{Y} \times \mathcal{Z}}$. We may equivalently introduce a dequantization density ${\widetilde{\pi }}_{\mathcal{Z}}$ and compute the marginal density on $\mathcal{Y}$ via importance sampling.

Table 1. Table of the matrix manifold dequantizations considered in this work. We show the corresponding auxiliary structure, the dequantization transformation, the resulting Euclidean space, and the Jacobian determinant of the transformation.
| Manifold | Auxiliary Structure | Euclidean Space | Dequantization | Jacobian Determinant |
| --- | --- | --- | --- | --- |
| ${\mathbb{S}}^{m - 1}$ | ${\mathbb{R}}_{+}$ | ${\mathbb{R}}^{m}$ | Spherical coordinates $\left( {y,r}\right) \mapsto {ry}$ | ${r}^{m - 1}$ |
| ${\mathbb{T}}^{m}$ | ${\mathbb{R}}_{+} \times \cdots \times {\mathbb{R}}_{+}$ | ${\mathbb{R}}^{2m}$ | Iterated polar coordinates $\left( {{y}_{i},{r}_{i}}\right) \mapsto {r}_{i}{y}_{i}$ | $\mathop{\prod }\limits_{{i = 1}}^{m}{r}_{i}$ |
| Stiefel(m, n) | ${\operatorname{Tri}}_{+}\left( n\right)$ | ${\mathbb{R}}^{m \times n}$ | QR decomposition $\left( {\mathbf{Y},\mathbf{R}}\right) \mapsto \mathbf{{YR}}$ | ${\mathbf{R}}_{11}^{m - 1}\cdots {\mathbf{R}}_{nn}^{m - n}$ |
| Stiefel(m, n) | $\operatorname{PD}\left( n\right)$ | ${\mathbb{R}}^{m \times n}$ | Matrix polar decomposition $\left( {\mathbf{Y},\mathbf{R}}\right) \mapsto \mathbf{{YR}}$ | Automatic differentiation |

other normalizing flow (Kobyzev et al., 2020)); the dequantization approach does not require a practitioner to construct densities intrinsically on the manifold.

§ 2. THEORY

Theorem 1. Let $\mathcal{Y}$ and $\mathcal{Z}$ be smooth manifolds embedded in ${\mathbb{R}}^{n}$ and ${\mathbb{R}}^{p}$, respectively. Let $G : {\mathbb{R}}^{m} \rightarrow \mathcal{Y} \times \mathcal{Z}$ be a smooth, invertible transformation. Let ${\pi }_{{\mathbb{R}}^{m}}$ be a density on ${\mathbb{R}}^{m}$. Under the change-of-variables $G$, the corresponding density on $\mathcal{Y} \times \mathcal{Z}$ is given by

$$
{\pi }_{\mathcal{Y} \times \mathcal{Z}}\left( {y,z}\right) = \frac{{\pi }_{{\mathbb{R}}^{m}}\left( x\right) }{\sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}, \tag{1}
$$

where $x = {G}^{-1}\left( {y,z}\right)$.

Even when $G$ is not an invertible mapping, it may be possible to compute the change-of-variables when $G$ is invertible on partitions of ${\mathbb{R}}^{m}$.

Corollary 1.
Let ${\mathcal{O}}_{1},\ldots ,{\mathcal{O}}_{l}$ be a partition of ${\mathbb{R}}^{m}$. Let $G : {\mathbb{R}}^{m} \rightarrow \mathcal{Y} \times \mathcal{Z}$ be a function and suppose that there exist smooth and invertible functions ${G}_{i} : {\mathcal{O}}_{i} \rightarrow \mathcal{Y} \times \mathcal{Z}$ such that ${G}_{i} = G \mid {\mathcal{O}}_{i}$ for $i = 1,\ldots ,l$. Then, if $x \sim {\pi }_{{\mathbb{R}}^{m}}$, the density of $\left( {y,z}\right) = G\left( x\right)$ is given by

$$
{\pi }_{\mathcal{Y} \times \mathcal{Z}}\left( {y,z}\right) = \mathop{\sum }\limits_{{i = 1}}^{l}\frac{{\pi }_{{\mathbb{R}}^{m}}\left( {x}_{i}\right) }{\sqrt{\det \left( {\nabla {G}_{i}{\left( {x}_{i}\right) }^{\top }\nabla {G}_{i}\left( {x}_{i}\right) }\right) }}, \tag{2}
$$

where ${x}_{i} = {G}_{i}^{-1}\left( {y,z}\right)$.

How does theorem 1 relate to the dequantization of smooth manifolds?

§ 2.1. DEQUANTIZATION

Manifolds of interest (such as the sphere, the torus, or the orthogonal group) can be introduced as elements of a new coordinate system for an ambient Euclidean space. In each case, the manifold appears with an auxiliary manifold which may not be of immediate interest. Namely, (i) the sphere appears with the set of positive real numbers when defining a coordinate system for ${\mathbb{R}}^{m} \smallsetminus \{ 0\} \cong {\mathbb{S}}^{m - 1} \times {\mathbb{R}}_{+}$; (ii) the torus appears with the product manifold of $m$ copies of the positive real numbers when defining a coordinate system for ${\mathbb{R}}^{2m} \smallsetminus \{ 0\} \cong {\mathbb{T}}^{m} \times {\mathbb{R}}_{+} \times \ldots \times {\mathbb{R}}_{+}$; (iii) the Stiefel manifold appears with the set of upper-triangular matrices with positive diagonal entries when defining a coordinate system of full-rank matrices: $\operatorname{FR}\left( {n,p}\right) \cong \operatorname{Stiefel}\left( {n,p}\right) \times {\operatorname{Tri}}_{+}\left( p\right)$.
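The generalized Jacobian factor $\sqrt{\det(\nabla G(x)^{\top}\nabla G(x))}$ appearing in theorem 1 can be sanity-checked numerically. The sketch below (our own illustration, not the paper's code) uses the familiar $m = 3$ spherical-coordinate parametrization, whose volume element is the textbook $r^{2}\sin\theta$; the Jacobian is approximated by central finite differences.

```python
import numpy as np

def spherical_to_ambient(r, theta, phi):
    """Spherical coordinates (r, theta, phi) -> ambient point in R^3."""
    return np.array([
        r * np.sin(theta) * np.cos(phi),
        r * np.sin(theta) * np.sin(phi),
        r * np.cos(theta),
    ])

def numeric_jacobian(f, p, eps=1e-6):
    """Central-difference Jacobian of f at the point p."""
    p = np.asarray(p, dtype=float)
    cols = []
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        cols.append((f(*(p + d)) - f(*(p - d))) / (2 * eps))
    return np.stack(cols, axis=1)

r, theta, phi = 1.7, 0.8, 2.3
J = numeric_jacobian(spherical_to_ambient, (r, theta, phi))
vol = np.sqrt(np.linalg.det(J.T @ J))   # generalized Jacobian factor

# For spherical coordinates this factor equals r^2 sin(theta).
assert abs(vol - r**2 * np.sin(theta)) < 1e-5
```

The same finite-difference check applies to any of the coordinate maps in table 1 for which a closed-form determinant is listed.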
We would like to marginalize out these "nuisance manifolds" so as to obtain distributions on the manifold of primary interest. A convenient means to achieve this is to introduce an importance sampling distribution over the nuisance manifold. Formally, we have the following result, which is an immediate consequence of theorem 1.

Corollary 2. Let $\mathcal{Y}$, $\mathcal{Z}$, $G$, and ${\pi }_{\mathcal{Y} \times \mathcal{Z}}$ be as defined in theorem 1. Let ${\widetilde{\pi }}_{\mathcal{Z}}$ be a non-vanishing density on $\mathcal{Z}$. To obtain the marginal density on $\mathcal{Y}$, it suffices to compute

$$
{\pi }_{\mathcal{Y}}\left( y\right) = \underset{z \sim {\widetilde{\pi }}_{\mathcal{Z}}}{\mathbb{E}}\frac{{\pi }_{{\mathbb{R}}^{m}}\left( x\right) }{{\widetilde{\pi }}_{\mathcal{Z}}\left( z\right) \cdot \sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}, \tag{3}
$$

where $x = {G}^{-1}\left( {y,z}\right)$.

§ 3. DISCUSSION

We investigate the problem of density estimation given observations on a manifold using the dequantization procedure described in section 2.

Problem. Let $\mathcal{Y}$ be a manifold embedded in ${\mathbb{R}}^{n}$ and let ${\pi }_{\mathcal{Y}}$ be a density on $\mathcal{Y}$. Given observations from ${\pi }_{\mathcal{Y}}$, construct an estimate ${\widehat{\pi }}_{\mathcal{Y}}$ of the density ${\pi }_{\mathcal{Y}}$.

We apply eq. (3) in order to obtain the density estimate on $\mathcal{Y}$. Generating samples may be achieved by first sampling $x \sim {\pi }_{{\mathbb{R}}^{m}}$, applying the transformation $G\left( x\right) = \left( {y,z}\right)$, and taking $y$ as a sample from the approximated distribution ${\widehat{\pi }}_{\mathcal{Y}}$.

§ 3.1. DENSITIES ON $\mathbb{R}^{m}$

As ${\mathbb{R}}^{m}$ is a Euclidean space, we have available a wealth of possible mechanisms to produce flexible densities in the ambient space. One popular choice is RealNVP (Dinh et al., 2017).
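The importance-sampling marginalization of eq. (3) can be illustrated end-to-end on the circle $\mathbb{S}^{1}$, the smallest instance of the spherical row of table 1. In this toy of our own construction, the ambient density is a standard Gaussian on $\mathbb{R}^{2}$ (whose angular marginal is uniform, i.e. $1/2\pi$), the dequantization density over the radius is an arbitrary Gamma(2, 1), and the Jacobian factor of the polar change-of-variables contributes a factor of $r$ to each weight.

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_ambient(x):
    """Ambient density on R^2: a standard Gaussian."""
    return np.exp(-0.5 * np.sum(x**2, axis=-1)) / (2 * np.pi)

def pi_tilde(r):
    """Dequantization (importance) density over r > 0: Gamma(2, 1)."""
    return r * np.exp(-r)

y = np.array([np.cos(0.3), np.sin(0.3)])     # a point on S^1
r = rng.gamma(shape=2.0, scale=1.0, size=200_000)
x = r[:, None] * y                           # x = G^{-1}(y, r)

# eq. (3): the polar change-of-variables contributes a Jacobian factor r,
# so each importance weight is pi(x) * r / pi_tilde(r).
weights = pi_ambient(x) * r / pi_tilde(r)
estimate = weights.mean()

# The angular marginal of a standard 2-d Gaussian is uniform on S^1.
assert abs(estimate - 1 / (2 * np.pi)) < 1e-3
```

Because the ambient density factorizes over angle and radius here, the estimate can be checked against the exact value $1/2\pi$; in the learned setting the same estimator is evaluated with a trained ambient flow in place of the Gaussian.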
An alternative is neural ODEs, which parameterize a vector field in the Euclidean space; the change in probability density under the vector field flow is obtained by integrating the instantaneous change-of-variables formula (Chen et al., 2018; Grathwohl et al., 2018).

§ 3.2. OBJECTIVE FUNCTIONS

We consider two possible objective functions for density estimation. The first is the evidence lower bound of the observations $\left\{ {{y}_{1},\ldots ,{y}_{{n}_{\mathrm{{obs}}}}}\right\}$:

$$
\log {\widehat{\pi }}_{\mathcal{Y}}\left( {y}_{i}\right) \geq \underset{z \sim {\widetilde{\pi }}_{\mathcal{Z}}}{\mathbb{E}}\log \frac{{\pi }_{{\mathbb{R}}^{m}}\left( {{G}^{-1}\left( {{y}_{i},z}\right) }\right) }{{\widetilde{\pi }}_{\mathcal{Z}}\left( z\right) \cdot \sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}, \tag{4}
$$

where $x = {G}^{-1}\left( {{y}_{i},z}\right)$. This follows as a consequence of Jensen's inequality applied to eq. (3). Experimental results using this objective function are denoted with the suffix (ELBO). The second is the log-likelihood computed via importance sampling:

$$
\log {\widehat{\pi }}_{\mathcal{Y}}\left( {y}_{i}\right) = \log \underset{z \sim {\widetilde{\pi }}_{\mathcal{Z}}}{\mathbb{E}}\frac{{\pi }_{{\mathbb{R}}^{m}}\left( {{G}^{-1}\left( {{y}_{i},z}\right) }\right) }{{\widetilde{\pi }}_{\mathcal{Z}}\left( z\right) \cdot \sqrt{\det \left( {\nabla G{\left( x\right) }^{\top }\nabla G\left( x\right) }\right) }}. \tag{5}
$$

Because the calculation of eq. (5) requires an importance sampling estimate, experimental results using this objective function are denoted with the suffix (I.S.).

§ 4. EXPERIMENTAL RESULTS

To demonstrate the effectiveness of the approach, we now show experimental results for density estimation on three different manifolds: the sphere, the torus, and the orthogonal group. In our comparison against competing algorithms, we ensure that each method has a comparable number of learnable parameters.
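The gap between eq. (4) and eq. (5) is the Jensen gap between the mean of log-weights and the log of mean-weights. The sketch below (our own toy, reusing a standard Gaussian ambient density on $\mathbb{R}^{2}$ with a Gamma(2, 1) radius density for $\mathbb{S}^{1}$) computes both estimators on the same weights; the ELBO is always bounded above by the importance-sampled log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: ambient standard Gaussian on R^2, Gamma(2, 1) dequantization
# density over the radius, observation y on S^1.
y = np.array([1.0, 0.0])
r = rng.gamma(shape=2.0, scale=1.0, size=100_000)
x = r[:, None] * y

log_pi_ambient = -0.5 * np.sum(x**2, axis=1) - np.log(2 * np.pi)
log_pi_tilde = np.log(r) - r          # log of the Gamma(2, 1) pdf
log_jac = np.log(r)                   # polar-coordinate Jacobian factor

log_w = log_pi_ambient + log_jac - log_pi_tilde

elbo = log_w.mean()                          # eq. (4): mean of logs
log_lik = np.log(np.mean(np.exp(log_w)))     # eq. (5): log of mean

# Jensen's inequality: the ELBO lower-bounds the log-likelihood.
assert elbo <= log_lik
# Both target log(1/(2*pi)) for this toy density.
assert abs(log_lik - np.log(1 / (2 * np.pi))) < 1e-2
```

For this particular toy the Jensen gap is large (about one nat), which illustrates why the two objectives can train ambient densities toward noticeably different solutions.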
Our evaluation metrics are designed to test the fidelity of the density estimate to the target distribution. In all of our examples we use rejection sampling in order to draw samples from the target distribution.

§ 4.1. SPHERE AND HYPERSPHERE

Our first experimental results concern the sphere ${\mathbb{S}}^{2}$, where we consider a multimodal distribution with four modes. We consider performing density estimation using the ELBO (eq. (4)) and log-likelihood (eq. (5)) objective functions; we construct densities in the ambient space using RealNVP and neural ODEs. As baselines we consider the Möbius transform approach described in (Rezende et al., 2020), which is a specialized normalizing flow method for tori and spheres, and the neural manifold ODE applied to the sphere as described in (Lou et al., 2020). We give a comparison of performance metrics between these methods in table 2. In these experiments, we find that parameterizing a neural ODE model in the ambient space gave better KL-divergence and effective sample size (ESS) metrics than RealNVP when our dequantization approach is used. We find that our dequantization algorithm optimizing either eq. (4) or eq. (5) achieves similar performance on the first- and second-moment metrics. However, when using eq. (5), slightly lower KL-divergence metrics are achievable, as well as slightly larger effective sample sizes. In either case, dequantization tends to outperform the Möbius transform on this multimodal density on ${\mathbb{S}}^{2}$. The manifold ODE method is outperformed by the ODE dequantization algorithms with both eq. (4) and eq. (5).

We next consider a multimodal density on ${\mathbb{S}}^{3} \cong \mathrm{SU}\left( 2\right)$ (the special unitary group). As before, we compare dequantization to Möbius flow transformations and manifold neural ODEs and present results in table 3.
Similar to the case of the multimodal density on ${\mathbb{S}}^{2}$, we find that dequantization with an ambient neural ODE model is most effective, with ELBO maximization giving the smallest KL-divergence metrics. All dequantization algorithms outperformed the Möbius transformation on the sphere, but only dequantization with an ambient ODE and ELBO maximization outperformed the manifold neural ODE method.

Table 2. Comparison of dequantization to normalizing flows on the multimodal density on ${\mathbb{S}}^{2}$. Averages were computed using ten random trials for the dequantization procedures and eight random trials for the normalizing flow (two random trials exhibited divergent behavior and were excluded). The dequantization procedure is shown for both the ELBO and the importance-sampled log-likelihood objectives.

| Method | Mean MSE | Covariance MSE | $\operatorname{KL}\left( {q\parallel p}\right)$ | $\operatorname{KL}\left( {p\parallel q}\right)$ | Relative ESS |
| --- | --- | --- | --- | --- | --- |
| Deq. ODE (ELBO) | ${0.0012} \pm {0.0002}$ | ${0.0006} \pm {0.0001}$ | ${0.0046} \pm {0.0002}$ | ${0.0046} \pm {0.0002}$ | ${99.0990} \pm {0.0401}$ |
| Deq. ODE (I.S.) | ${0.0014} \pm {0.0002}$ | ${0.0010} \pm {0.0001}$ | ${0.0029} \pm {0.0001}$ | ${0.0029} \pm {0.0001}$ | ${99.4170} \pm {0.0225}$ |
| Deq. RealNVP (ELBO) | ${0.0004} \pm {0.0001}$ | ${0.0003} \pm {0.0001}$ | ${0.0231} \pm {0.0010}$ | ${0.0212} \pm {0.0009}$ | ${95.9540} \pm {0.1688}$ |
| Deq. RealNVP (I.S.) | ${0.0005} \pm {0.0002}$ | ${0.0002} \pm {0.0000}$ | ${0.0124} \pm {0.0006}$ | ${0.0115} \pm {0.0006}$ | ${97.8240} \pm {0.1183}$ |
| Man. ODE | ${0.0010} \pm {0.0004}$ | ${0.0009} \pm {0.0002}$ | ${0.0085} \pm {0.0007}$ | ${0.0083} \pm {0.0007}$ | ${98.3860} \pm {0.1328}$ |
| Möbius | ${0.0021} \pm {0.0005}$ | ${0.0019} \pm {0.0005}$ | ${0.0595} \pm {0.0025}$ | - | ${89.2575} \pm {0.4888}$ |

Table 3. Comparison of dequantization to normalizing flows on the multimodal density on ${\mathbb{S}}^{3}$. Averages were computed using ten random trials for the dequantization procedures and nine random trials for the normalizing flow (one random trial exhibited divergent behavior and was excluded).

| Method | Mean MSE | Covariance MSE | $\operatorname{KL}\left( {q\parallel p}\right)$ | $\operatorname{KL}\left( {p\parallel q}\right)$ | Relative ESS |
| --- | --- | --- | --- | --- | --- |
| Deq. ODE (ELBO) | ${0.0009} \pm {0.0001}$ | ${0.0007} \pm {0.0001}$ | ${0.0072} \pm {0.0002}$ | ${0.0070} \pm {0.0002}$ | ${98.6490} \pm {0.0388}$ |
| Deq. ODE (I.S.) | ${0.0017} \pm {0.0001}$ | ${0.0022} \pm {0.0002}$ | ${0.0189} \pm {0.0004}$ | ${0.0180} \pm {0.0004}$ | ${96.6150} \pm {0.0648}$ |
| Deq. RealNVP (ELBO) | ${0.0003} \pm {0.0001}$ | ${0.0004} \pm {0.0001}$ | ${0.0384} \pm {0.0010}$ | ${0.0283} \pm {0.0005}$ | ${95.1880} \pm {0.0771}$ |
| Deq. RealNVP (I.S.) | ${0.0003} \pm {0.0001}$ | ${0.0003} \pm {0.0000}$ | ${0.0208} \pm {0.0004}$ | ${0.0180} \pm {0.0004}$ | ${96.6340} \pm {0.0920}$ |
| Man. ODE | ${0.0012} \pm {0.0003}$ | ${0.0008} \pm {0.0002}$ | ${0.0098} \pm {0.0009}$ | ${0.0094} \pm {0.0007}$ | ${98.1780} \pm {0.1302}$ |
| Möbius | ${0.0027} \pm {0.0004}$ | ${0.0014} \pm {0.0003}$ | ${0.0542} \pm {0.0047}$ | - | ${88.7290} \pm {0.9332}$ |

Table 4. Metrics of the dequantization algorithm in application to the orthogonal Procrustes problem and dequantization of a multimodal density on $\mathrm{{SO}}\left( 3\right)$. When using the polar decomposition, results are averaged over ten independent trials for the multimodal distribution on SO(3) and nine independent trials for the orthogonal Procrustes problem; for the QR decomposition, results are averaged over nine trials.

| Experiment | Mean MSE | Covariance MSE | $\operatorname{KL}\left( {q\parallel p}\right)$ | $\operatorname{KL}\left( {p\parallel q}\right)$ | Relative ESS |
| --- | --- | --- | --- | --- | --- |
| Procrustes (ELBO - Polar) | ${0.0021} \pm {0.0008}$ | ${0.0012} \pm {0.0005}$ | ${0.0193} \pm {0.0069}$ | ${0.0173} \pm {0.0053}$ | ${96.9489} \pm {0.7649}$ |
| Procrustes (I.S. - Polar) | ${0.0038} \pm {0.0020}$ | ${0.0015} \pm {0.0008}$ | ${0.0301} \pm {0.0126}$ | ${0.0202} \pm {0.0075}$ | ${95.6944} \pm {1.4654}$ |
| Procrustes (ELBO - QR) | ${0.0011} \pm {0.0003}$ | ${0.0008} \pm {0.0003}$ | ${0.0124} \pm {0.0032}$ | ${0.0095} \pm {0.0015}$ | ${97.9678} \pm {0.3325}$ |
| Procrustes (I.S. - QR) | ${0.0015} \pm {0.0005}$ | ${0.0011} \pm {0.0004}$ | ${0.0174} \pm {0.0072}$ | ${0.0122} \pm {0.0029}$ | ${96.6267} \pm {0.6326}$ |
| SO(3) (ELBO - Polar) | ${0.0007} \pm {0.0002}$ | ${0.0029} \pm {0.0003}$ | ${0.0443} \pm {0.0011}$ | ${0.0415} \pm {0.0059}$ | ${96.2930} \pm {0.0649}$ |
| SO(3) (I.S. - Polar) | ${0.0004} \pm {0.0001}$ | ${0.0014} \pm {0.0001}$ | ${0.0207} \pm {0.0028}$ | ${0.0235} \pm {0.0029}$ | ${97.7280} \pm {0.1136}$ |
| SO(3) (ELBO - QR) | ${0.0017} \pm {0.0004}$ | ${0.0054} \pm {0.0006}$ | ${0.0563} \pm {0.0060}$ | ${0.0363} \pm {0.0041}$ | ${93.5633} \pm {2.1331}$ |
| SO(3) (I.S. - QR) | ${0.0012} \pm {0.0004}$ | ${0.0020} \pm {0.0004}$ | ${0.0260} \pm {0.0017}$ | ${0.0219} \pm {0.0021}$ | ${94.3256} \pm {2.8099}$ |

§ 4.2. ORTHOGONAL GROUP

The previous two examples focused on manifolds composed of spheres and circles. We now examine density estimation on the orthogonal group, where we consider inference in a probabilistic variant of the orthogonal Procrustes problem; we seek to sample orthogonal transformations that transport one point cloud towards another in terms of squared distance. We consider parameterizing a distribution in the ambient Euclidean space using RealNVP in these experiments. Results are presented in table 4. We observe that optimizing the ELBO objective function (eq. (4)) tended to produce better density estimates than the log-likelihood (eq. (5)). Nevertheless, we find that either dequantization algorithm is highly effective at matching the target density.

We may also leverage corollary 1 so as to apply our method to the "dequantization" of $\mathrm{{SO}}\left( n\right)$.
As an example, we consider a multimodal density on $\mathrm{{SO}}\left( 3\right)$. Results of applying our method to sampling from this distribution are also shown in table 4. In this example we find that minimizing the negative log-likelihood using importance sampling tended to produce the best approximation of the first and second moments of the distribution, in addition to smaller KL-divergence metrics.

\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/DoGRjqtgjph/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/DoGRjqtgjph/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..a7817d3529f6124c96c8532880d81c14beacd7e1 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/DoGRjqtgjph/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,301 @@

# Diffusion Priors In Variational Autoencoders

Anonymous Authors ${}^{1}$

## Abstract

Among likelihood-based approaches for deep generative modelling, variational autoencoders (VAEs) offer scalable amortized posterior inference and fast sampling. However, VAEs are increasingly outperformed by competing models such as normalizing flows (NFs), deep energy-based models, or the recent denoising diffusion probabilistic models (DDPMs). In this preliminary work, we improve VAEs by demonstrating how DDPMs can be used for modelling the prior distribution of the latent variables. The diffusion prior model improves upon the Gaussian priors of classical VAEs and is competitive with NF-based priors. Finally, we hypothesize that hierarchical VAEs could similarly benefit from the enhanced capacity of diffusion priors.

## 1. Introduction

Over the last few years, the interest of the deep learning community in generative modelling has increased steadily.
Among the likelihood-based approaches for deep generative modelling, variational autoencoders (Kingma & Welling, 2013, VAEs) stand as one of the most popular, although competing approaches now demonstrate better performance. In particular, Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021) recently showed that denoising diffusion probabilistic models (DDPMs) are competitive deep generative models, obtaining sample quality similar to that of the best implicit deep generative models such as ProgressiveGAN (Karras et al., 2017) and StyleGAN (Karras et al., 2019). Similarly to VAEs, DDPMs train on a variational bound and may be interpreted under the encoding-decoding framework.

In the original formulation of VAEs, the prior and the posterior distributions over the latent variables are both assumed to be Gaussian. However, these assumptions are often incompatible and thus limit the performance of VAEs for complex modelling tasks (Tomczak & Welling, 2018; Chen et al., 2018). A natural solution to this problem is to parameterize the prior, and sometimes also the posterior, with more expressive distributions. In this preliminary work, we improve VAEs by demonstrating how DDPMs can be used for modelling the prior distribution of the latent variables. In addition to boosting DDPMs with the compression properties of VAEs, combining the two models should eventually lead to greater generative performance by enabling complex generative modelling even with a simple decoder architecture.

## 2. Latent generative models

### 2.1. Variational autoencoder

We want to learn a generative model of an unknown distribution $p\left( \mathbf{x}\right)$ given a dataset $X \in {\mathbb{R}}^{N \times d}$ of $N$ i.i.d. observations $\mathbf{x}$ sampled from this unknown distribution.
The original VAE postulates a two-step generative process in which some unobserved variables $\mathbf{z} \in {\mathbb{R}}^{h}$ are first sampled from a prior distribution $p\left( \mathbf{z}\right)$ and then observations $\mathbf{x}$ are generated from a conditional distribution ${p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right)$. The generative process can be expressed mathematically as

$$
\mathbf{z} \sim p\left( \mathbf{z}\right) \text{ and }\mathbf{x} \sim {p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) . \tag{1}
$$

The prior $p\left( \mathbf{z}\right)$ is chosen to be Gaussian, while the likelihood ${p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right)$ is modeled with a neural network. The likelihood model decodes latent variables into observations and is thus usually referred to as the decoder in the literature. In its original formulation, the likelihood is parameterized with a multivariate Gaussian $\mathcal{N}\left( {{\mu }_{\theta }\left( \mathbf{z}\right) ,\operatorname{diag}\left( {{\sigma }_{\theta }^{2}\left( \mathbf{z}\right) }\right) }\right)$ when the observations are continuous, and a categorical distribution when they are discrete.

Training the generative model is achieved by finding the parameters $\theta$ of the decoder that maximize the sum of the log marginal likelihoods of the individual data points

$$
\log {p}_{\theta }\left( X\right) = \mathop{\sum }\limits_{{\mathbf{x} \in X}}\log \int {p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) p\left( \mathbf{z}\right) \mathrm{d}\mathbf{z}.
$$

These integrals are intractable, but the introduction of an encoder network that approximates the posterior distribution ${q}_{\phi }\left( {\mathbf{z} \mid \mathbf{x}}\right)$ allows maximizing the associated evidence lower bound

$$
\mathrm{{ELBO}} \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {\log \frac{{p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) p\left( \mathbf{z}\right) }{{q}_{\phi }\left( {\mathbf{z} \mid \mathbf{x}}\right) }}\right\rbrack \tag{2}
$$

$$
= \log {p}_{\theta }\left( \mathbf{x}\right) - \mathbb{K}\mathbb{L}\left\lbrack {{q}_{\phi }\left( {\mathbf{z} \mid \mathbf{x}}\right) \parallel {p}_{\theta }\left( {\mathbf{z} \mid \mathbf{x}}\right) }\right\rbrack \tag{3}
$$

$$
\leq \log {p}_{\theta }\left( \mathbf{x}\right) \text{.} \tag{4}
$$

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

The ELBO becomes tighter as the approximate posterior ${q}_{\phi }\left( {\mathbf{z} \mid \mathbf{x}}\right)$ gets closer to the true posterior. Learning the generative model is finally performed by jointly optimizing the parameters $\theta$ of the decoder and $\phi$ of the approximate posterior via stochastic gradient ascent. In the original VAE, the encoder models the approximate posterior as a conditional multivariate Gaussian distribution $\mathcal{N}\left( {{\mu }_{\phi }\left( \mathbf{x}\right) ,\operatorname{diag}\left( {{\sigma }_{\phi }^{2}\left( \mathbf{x}\right) }\right) }\right)$.

The ELBO presents two antagonistic goals to the encoder: it should encode the data accurately while remaining as close as possible to the prior distribution. Consequently, the Gaussian assumptions made on both the prior and the posterior distributions are often incompatible and limit the generative performance.
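The prior-matching term of the ELBO has a closed form under the Gaussian assumptions above. As a small self-contained check (our own illustration, with arbitrary example values for $\mu_\phi(\mathbf{x})$ and $\sigma_\phi(\mathbf{x})$), the analytic KL between a diagonal-Gaussian posterior and the standard-normal prior can be verified against a reparameterized Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example posterior parameters q(z|x) = N(mu, diag(sigma^2)).
mu = np.array([0.5, -1.0])
sigma = np.array([0.8, 1.3])

# Closed-form KL[q(z|x) || N(0, I)], summed over dimensions.
kl_closed = 0.5 * np.sum(mu**2 + sigma**2 - 1.0 - np.log(sigma**2))

# Monte Carlo estimate of E_q[log q(z|x) - log p(z)] using the
# reparameterization z = mu + sigma * eps, eps ~ N(0, I).
eps = rng.normal(size=(500_000, 2))
z = mu + sigma * eps
log_q = np.sum(-0.5 * eps**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi), axis=1)
log_p = np.sum(-0.5 * z**2 - 0.5 * np.log(2 * np.pi), axis=1)
kl_mc = np.mean(log_q - log_p)

assert abs(kl_mc - kl_closed) < 2e-2
```

The same reparameterization trick is what makes the full ELBO differentiable with respect to the encoder parameters $\phi$.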
A possible solution consists of learning a prior distribution that is compatible with the learned posteriors. For example, Habibian et al. (2019) and Chen et al. (2017) respectively showed that autoregressive models and normalizing flows (Rezende & Mohamed, 2015, NFs) greatly improve the performance of VAEs when used as prior distributions. In the following, we present how denoising diffusion probabilistic models can be used to improve the performance of classical VAEs.

### 2.2. Denoising diffusion probabilistic models

Inspired by non-equilibrium statistical physics, Sohl-Dickstein et al. (2015) originally introduced DDPMs, while Ho et al. (2020) more recently demonstrated how to train these models for image synthesis, achieving results close to the state of the art on this task. DDPMs formulate generative modelling as the reverse operation of diffusion, a physical process which progressively destroys information. Formally, the reverse process is a latent variable model of the form

$$
{p}_{\phi }\left( {\mathbf{x}}_{0}\right) \mathrel{\text{:=}} \int {p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right) d{\mathbf{x}}_{1 : T},
$$

where ${\mathbf{x}}_{0} \mathrel{\text{:=}} \mathbf{x}$ denotes the observations and ${\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{T}$ denote latent variables of the same dimensionality as ${\mathbf{x}}_{0}$.
The joint distribution ${p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right)$ is modelled as a first-order Markov chain with Gaussian transitions, that is,

$$
{p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right) \mathrel{\text{:=}} {p}_{\phi }\left( {\mathbf{x}}_{T}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) , \tag{5}
$$

$$
{p}_{\phi }\left( {\mathbf{x}}_{T}\right) \mathrel{\text{:=}} \mathcal{N}\left( {\mathbf{0},\mathrm{I}}\right) , \tag{6}
$$

$$
{p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) \mathrel{\text{:=}} \mathcal{N}\left( {{\mu }_{\phi }\left( {{\mathbf{x}}_{t}, t}\right) ,{\sigma }_{t}^{2}\mathrm{I}}\right) . \tag{7}
$$

Similar to VAEs, the reverse Markov chain is trained on an ELBO. However, the approximate posterior $q\left( {{\mathbf{x}}_{1 : T} \mid {\mathbf{x}}_{0}}\right)$ is fixed to a diffusion process that is also a first-order Markov chain with Gaussian transitions,

$$
q\left( {{\mathbf{x}}_{1 : T} \mid {\mathbf{x}}_{0}}\right) \mathrel{\text{:=}} \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) , \tag{8}
$$

$$
q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) \mathrel{\text{:=}} \mathcal{N}\left( {\sqrt{1 - {\beta }_{t}}{\mathbf{x}}_{t - 1},{\beta }_{t}\mathbf{I}}\right) , \tag{9}
$$

where ${\beta }_{1},\ldots ,{\beta }_{T}$ define the variance schedule, which is either fixed as a set of training hyper-parameters or learned. The ELBO is then given by

$$
\mathrm{{ELBO}} \mathrel{\text{:=}} {\mathbb{E}}_{q}\left\lbrack {\log \frac{{p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right) }{q\left( {{\mathbf{x}}_{1 : T} \mid {\mathbf{x}}_{0}}\right) }}\right\rbrack \leq \log {p}_{\phi }\left( {\mathbf{x}}_{0}\right) .
\tag{10}
$$

Provided that the variances ${\beta }_{t}$ are small and that the number of timesteps $T$ is large enough, the Gaussian assumptions on the generative process ${p}_{\phi }$ are reasonable. Ho et al. (2020) take advantage of the Gaussian transitions to express the ELBO as

$$
{\mathbb{E}}_{q}\left\lbrack {\mathbb{{KL}}\left\lbrack {q\left( {{\mathbf{x}}_{T} \mid {\mathbf{x}}_{0}}\right) \parallel p\left( {\mathbf{x}}_{T}\right) }\right\rbrack - \log {p}_{\phi }\left( {{\mathbf{x}}_{0} \mid {\mathbf{x}}_{1}}\right) + \mathop{\sum }\limits_{{t = 2}}^{T}\mathbb{{KL}}\left\lbrack {q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right) \parallel {p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) }\right\rbrack }\right\rbrack . \tag{11}
$$

The inner sum in Equation (11) consists of comparisons between the Gaussian generative transitions ${p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ and the conditional forward posterior $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right)$, which can also be expressed in closed form as a Gaussian $\mathcal{N}\left( {{\widetilde{\mu }}_{t}\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right) ,{\widetilde{\beta }}_{t}\mathbf{I}}\right)$, where the ${\widetilde{\beta }}_{t}$ are functions of the variance schedule. The KL terms can thus be calculated in closed form, which reduces the variance of the final expression. In addition, Ho et al. (2020) empirically demonstrate that it is sufficient to take optimization steps on uniformly sampled terms of the sum instead of computing it completely. The final objective closely resembles denoising score matching over multiple noise levels (Song & Ermon, 2019).
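To make these closed-form quantities concrete, the Gaussian parameters of the fixed forward process can be computed directly from the variance schedule. The sketch below is illustrative only: the schedule values and helper names are our own assumptions, following the standard DDPM parameterization with $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t$ the cumulative product of the $\alpha_s$.

```python
import numpy as np

# Illustrative sketch (schedule values are our own, not from the paper):
# closed-form quantities of the fixed forward diffusion process.
T = 1000
betas = np.linspace(1e-4, 0.02, T)  # a commonly used linear variance schedule
alphas = 1.0 - betas
abars = np.cumprod(alphas)          # abar_t = prod_{s<=t} alpha_s

def sample_forward_marginal(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I) in one shot."""
    return np.sqrt(abars[t]) * x0 + np.sqrt(1.0 - abars[t]) * rng.standard_normal(x0.shape)

def forward_posterior(x0, xt, t):
    """Mean and variance of the Gaussian q(x_{t-1} | x_t, x_0) appearing in Equation (11)."""
    mu = (np.sqrt(abars[t - 1]) * betas[t] / (1.0 - abars[t])) * x0 \
       + (np.sqrt(alphas[t]) * (1.0 - abars[t - 1]) / (1.0 - abars[t])) * xt
    beta_tilde = (1.0 - abars[t - 1]) / (1.0 - abars[t]) * betas[t]
    return mu, beta_tilde
```

Note that $\widetilde{\beta}_t \leq \beta_t$, so the conditional forward posterior is sharper than the forward transition; it is these closed-form Gaussians that make the KL terms of Equation (11) cheap to evaluate.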
These observations, combined with additional simplifications, lead to the simplified loss

$$
{L}_{\mathrm{{DDPM}}}\left( {{\mathbf{x}}_{0};\phi }\right) \mathrel{\text{:=}} {\mathbb{E}}_{t,{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\left\lbrack {\frac{1}{2{\sigma }_{t}^{2}}{\begin{Vmatrix}{\mu }_{\phi }\left( {\mathbf{x}}_{t}, t\right) - {\widetilde{\mu }}_{t}\left( {\mathbf{x}}_{0},{\mathbf{x}}_{t}\right) \end{Vmatrix}}^{2}}\right\rbrack , \tag{12}
$$

where ${\widetilde{\mu }}_{t}\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$ is the mean of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$, the forward diffusion posterior conditioned on the observation ${\mathbf{x}}_{0}$.

### 2.3. Prior modelling with denoising diffusion

We now introduce our contribution, which consists in using a DDPM to model the prior distribution in VAEs. We formulate the generative model as

$$
{\mathbf{z}}_{T} \sim \mathcal{N}\left( {\mathbf{0},\mathrm{I}}\right) \tag{13}
$$

$$
{\mathbf{z}}_{t - 1 \mid t} \sim {p}_{\phi }\left( {{\mathbf{z}}_{t - 1} \mid {\mathbf{z}}_{t}}\right) \;\forall t \in \left\lbrack {T,\ldots ,1}\right\rbrack \tag{14}
$$

$$
\mathbf{x} \sim {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) , \tag{15}
$$

where $\phi$ denotes the parameters of the reverse diffusion model encoding the prior distribution. Equations (13) and (14) implicitly define a prior distribution over the usual latent variables ${\mathbf{z}}_{0}$, modelled with a reverse diffusion process.

Unfortunately, we cannot train a VAE with a diffusion prior directly on the ELBO as expressed in Equation (2), since ${p}_{\phi }\left( {\mathbf{z}}_{0}\right)$ cannot be evaluated.
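Generating from the model in Equations (13) to (15) amounts to plain ancestral sampling: draw the terminal latent from a standard normal, walk the reverse chain down to $\mathbf{z}_0$, and decode. A minimal sketch, in which the transition mean and decoder are untrained placeholders standing in for the learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)
T, h = 50, 100                 # diffusion steps and latent dimensionality (placeholder values)
sigmas = np.full(T + 1, 0.1)   # per-step standard deviations (placeholder schedule)

def mu_phi(z_t, t):
    # Placeholder for the learned reverse-transition mean of Equation (14).
    return 0.99 * z_t

def decode_mean(z0):
    # Placeholder for the decoder mean of p_theta(x | z_0) in Equation (15).
    return np.tanh(z0)

z = rng.standard_normal(h)                          # Equation (13): z_T ~ N(0, I)
for t in range(T, 0, -1):                           # Equation (14): t = T, ..., 1
    z = mu_phi(z, t) + sigmas[t] * rng.standard_normal(h)
x = decode_mean(z)                                  # Equation (15): decode z_0
```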
However, Equation (2) can be further developed as

$$
{\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) }\right\rbrack - {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {q}_{\psi }\left( {{\mathbf{z}}_{0} \mid \mathbf{x}}\right) }\right\rbrack + {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\phi }\left( {\mathbf{z}}_{0}\right) }\right\rbrack , \tag{16}
$$

in which a lower bound on the last term can be expressed by Equation (10). This finally leads to the following expression

$$
{\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) - \log {q}_{\psi }\left( {{\mathbf{z}}_{0} \mid \mathbf{x}}\right) + {\mathbb{E}}_{q}\left\lbrack {\log \frac{{p}_{\phi }\left( {\mathbf{z}}_{0 : T}\right) }{q\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{z}}_{0}}\right) }}\right\rbrack }\right\rbrack \tag{17}
$$

$$
\leq {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) - \log {q}_{\psi }\left( {{\mathbf{z}}_{0} \mid \mathbf{x}}\right) + \log {p}_{\phi }\left( {\mathbf{z}}_{0}\right) }\right\rbrack \tag{18}
$$

$$
\leq \log {p}_{\theta }\left( \mathbf{x}\right) , \tag{19}
$$

which is a valid ELBO. Finally, the diffusion prior ${p}_{\phi }$ is trained jointly with the approximate posterior ${q}_{\psi }$ and the likelihood model ${p}_{\theta }$, all of which are optimized as in a classical VAE. This leads to the following loss function:

$$
\mathcal{L}\left( {\mathbf{x};\phi ,\theta ,\psi }\right) \mathrel{\text{:=}} {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log \frac{{p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) }{{q}_{\psi }\left( {\mathbf{z} \mid \mathbf{x}}\right) }}\right\rbrack + {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {{L}_{\mathrm{{DDPM}}}\left( {{\mathbf{z}}_{0};\phi }\right) }\right\rbrack . \tag{20}
$$

## 3.
Related work

Various approaches have been proposed to improve the modelling capacity and the training of VAEs. As a first example, some state-of-the-art deep generative models based on VAEs model the posterior with normalizing flows or autoregressive models (Kingma et al., 2016; Vahdat & Kautz, 2020). Autoregressive models are also often used as a replacement for the original likelihood parameterization, which assumes conditional independencies that are often unrealistic (Oord et al., 2016). Another popular improvement made to the original VAE is the embedding of structure in the latent variables. In particular, hierarchical VAEs (Sønderby et al., 2016; Kingma et al., 2016) combined with careful training demonstrate impressive results on generative modelling for images (Vahdat & Kautz, 2020).

Closer to our work, Chen et al. (2017) first proposed to learn the prior as a solution to the mismatch between the approximate and the true posteriors. They model the prior with an autoregressive flow, which also closely relates to modelling the posterior distribution with an inverse autoregressive flow (Kingma et al., 2016). Tomczak & Welling (2018) take inspiration from the aggregated posterior $\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{q}_{\psi }\left( {z \mid {x}_{i}}\right)$ (Hoffman & Johnson, 2016; Makhzani et al., 2015) to introduce the VampPrior, defined as a mixture over learned pseudo-inputs. An orthogonal line of work suggests that the mismatch between the approximate posterior and the exact posterior can be reduced by over-weighting the terms related to the prior and to the approximate posterior in the ELBO (Higgins et al., 2016; Chen et al., 2018).

## 4. Experiments

We now compare VAEs for different choices of priors, including the original Gaussian prior, an NF prior, and the proposed diffusion prior. All models share the same backbone encoder-decoder architecture, inspired by DCGAN (Radford et al., 2015).
Optimization is performed with Adam for 250 epochs with a learning rate of 0.0005. After each epoch, the models are evaluated on a validation set used to select the best one for each training setting. We compare the models on the CIFAR10 and CelebA datasets for three different latent dimensionalities (40, 100, 200). The NF used in our experiments is a 3-step autoregressive affine flow with simple MLP backbones, similar to the one used to model the transition function of the DDPM.

Table 1 presents the FID scores for the different models. We first notice the large scores reached by all models on the CIFAR10 dataset. This can be explained by the simplicity of the models trained in our experiments. We believe these scores could be greatly improved by using a more sophisticated likelihood model such as a PixelCNN (Oord et al., 2016). Although FID scores suggest that the Gaussian prior outperforms the diffusion prior in terms of generative performance, visual inspection of Figure 1 shows that the diffusion prior results in samples slightly more realistic than those of the classical VAE. The best FID score is achieved by the NF prior, although its samples do not seem to reflect this superiority. In this case, we believe the FID scores are not entirely informative about the quality of the images synthesized by the models and should be interpreted with a grain of salt. Although learned priors seem to improve generative performance on CIFAR10, additional work is needed to reach results that would justify using a diffusion prior for this dataset.

On CelebA however, we observe in Table 1 that diffusion priors outperform the Gaussian prior. This is in line with the visual inspection of Figure 2a and Figure 2c.
As for CIFAR10, the NF prior outperforms the Gaussian and diffusion priors in terms of FID scores, although visual inspection of the corresponding samples in Figure 2b does not reveal a much better image quality compared to those resulting from the diffusion prior. We conclude from these observations that diffusion priors offer an interesting alternative to NFs for modelling the prior in a VAE.

![01963e2d-3351-76de-882a-5ebe709f3187_3_151_522_1450_533_0.jpg](images/01963e2d-3351-76de-882a-5ebe709f3187_3_151_522_1450_533_0.jpg)

Figure 1. Samples generated by a VAE trained on CIFAR10 for three different prior models. The diffusion prior leads to slightly better sample quality than the Gaussian prior and to quality similar to the NF prior.

![01963e2d-3351-76de-882a-5ebe709f3187_3_155_1158_1444_530_0.jpg](images/01963e2d-3351-76de-882a-5ebe709f3187_3_155_1158_1444_530_0.jpg)

Figure 2. Samples generated by a VAE trained on CelebA for three different prior models. The diffusion prior leads to better sample quality than the Gaussian prior and to quality similar to the NF prior.

Table 1. FID scores for different prior models in VAEs and for different latent sizes. Diffusion priors outperform the classical VAE on CelebA but are slightly worse than NFs. FID scores do not reveal the superiority of any method on CIFAR10.
| Dataset | CelebA | | | CIFAR10 | | |
| --- | --- | --- | --- | --- | --- | --- |
| Latent Size | 40 | 100 | 200 | 40 | 100 | 200 |
| Gaussian | 154.3 | 149.4 | 139.1 | 176.0 | 126.2 | 123.9 |
| NF | 72.9 | 59.49 | 54.7 | 167.6 | 129.1 | 129.6 |
| Diffusion | 114.8 | 67.95 | 88.3 | 177.9 | 160.5 | 153.1 |
## 5. Conclusion and future work

This preliminary work presents how denoising diffusion probabilistic models can be used as a new class of learnable priors for VAEs. As a notable contribution, we empirically demonstrate that implicitly optimizing a prior on an ELBO can be performed jointly with training the encoder and the decoder of the VAE. In addition, our results suggest that DDPMs perform on par with NFs for modelling the prior distribution.

A large spectrum of future research directions could benefit from the basic idea expressed in this preliminary work. As an example, recent advances in diffusion models such as the continuous formulation (Song et al., 2020) or improvements to the training procedure of DDPMs (Nichol & Dhariwal, 2021) could be implemented in the prior model. Similarly, many improvements could be made to the architectures used for the VAE and to the training procedure. In particular, image synthesis with hierarchical VAEs, which organize the latent variables into multiple scales, could reveal the full potential of diffusion priors. This would combine the structural knowledge embedded by such VAEs with the impressive performance of DDPMs for modelling distributions over images. Finally, diffusion does not constrain the neural network architecture and thus enables a wider choice of inductive biases in the prior distribution compared to autoregressive models and NFs.

## References

Chen, R. T., Li, X., Grosse, R., and Duvenaud, D. Isolating sources of disentanglement in variational autoencoders. arXiv preprint arXiv:1802.04942, 2018.

Chen, X., Kingma, D. P., Salimans, T., Duan, Y., Dhariwal, P., Schulman, J., Sutskever, I., and Abbeel, P. Variational lossy autoencoder. ICLR, 2017.

Dhariwal, P. and Nichol, A. Diffusion models beat GANs on image synthesis. arXiv preprint arXiv:2105.05233, 2021.

Habibian, A., Rozendaal, T. v., Tomczak, J. M., and Cohen, T. S.
Video compression with rate-distortion autoencoders. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7033-7042, 2019.

Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.

Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020.

Hoffman, M. D. and Johnson, M. J. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS, volume 1, pp. 2, 2016.

Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.

Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401-4410, 2019.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., and Frey, B. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Nichol, A. and Dhariwal, P. Improved denoising diffusion probabilistic models. arXiv preprint arXiv:2102.09672, 2021.

Oord, A. v. d., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., and Kavukcuoglu, K. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016.

Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Rezende, D.
and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, pp. 1530-1538. PMLR, 2015.

Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015.

Sønderby, C. K., Raiko, T., Maaløe, L., Sønderby, S. K., and Winther, O. Ladder variational autoencoders. arXiv preprint arXiv:1602.02282, 2016.

Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, 2019.

Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.

Tomczak, J. and Welling, M. VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics, pp. 1214-1223. PMLR, 2018.

Vahdat, A. and Kautz, J. NVAE: A deep hierarchical variational autoencoder. arXiv preprint arXiv:2007.03898, 2020.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/DoGRjqtgjph/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/DoGRjqtgjph/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..91cebc74d1d99753cb3810395675df0cb059b802
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/DoGRjqtgjph/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,249 @@

§ DIFFUSION PRIORS IN VARIATIONAL AUTOENCODERS

Anonymous Authors ${}^{1}$

§ ABSTRACT

Among likelihood-based approaches for deep generative modelling, variational autoencoders (VAEs) offer scalable amortized posterior inference and fast sampling. However, VAEs are also increasingly outperformed by competing models such as normalizing flows (NFs), deep energy-based models, and the new denoising diffusion probabilistic models (DDPMs). In this preliminary work, we improve VAEs by demonstrating how DDPMs can be used for modelling the prior distribution of the latent variables. The diffusion prior model improves upon Gaussian priors of classical VAEs and is competitive with NF-based priors. Finally, we hypothesize that hierarchical VAEs could similarly benefit from the enhanced capacity of diffusion priors.

§ 1. INTRODUCTION

Over the last few years, the deep learning community's interest in generative modelling has increased steadily. Among the likelihood-based approaches for deep generative modelling, variational autoencoders (Kingma & Welling, 2013, VAEs) stand as one of the most popular, although competing approaches now demonstrate better performance. In particular, Ho et al.
(2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021) recently showed that denoising diffusion probabilistic models (DDPMs) are competitive deep generative models, obtaining sample quality similar to that of the best implicit deep generative models such as ProgressiveGAN (Karras et al., 2017) and StyleGAN (Karras et al., 2019). Similarly to VAEs, DDPMs train on a variational bound and may be interpreted under the encoding-decoding framework.

In the original formulation of VAEs, the prior and the posterior distributions over the latent variables are both assumed to be Gaussian. However, these assumptions are often incompatible and thus limit the performance of VAEs on complex modelling tasks (Tomczak & Welling, 2018; Chen et al., 2018). A natural solution to this problem is to parameterize the prior, and sometimes also the posterior, with more expressive distributions. In this preliminary work, we improve VAEs by demonstrating how DDPMs can be used for modelling the prior distribution of the latent variables. In addition to boosting DDPMs with the compression properties of VAEs, combining the two models should eventually lead to greater generative performance by enabling complex generative modelling even with simple decoder architectures.

§ 2. LATENT GENERATIVE MODELS

§ 2.1. VARIATIONAL AUTOENCODER

We want to learn a generative model of an unknown distribution $p\left( \mathbf{x}\right)$ given a dataset $X \in {\mathbb{R}}^{N \times d}$ of $N$ i.i.d. observations $\mathbf{x}$ sampled from this unknown distribution. The original VAE postulates a two-step generative process in which some unobserved variables $\mathbf{z} \in {\mathbb{R}}^{h}$ are first sampled from a prior distribution $p\left( \mathbf{z}\right)$ and then observations $\mathbf{x}$ are generated from a conditional distribution ${p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right)$.
The generative process can be expressed mathematically as

$$
\mathbf{z} \sim p\left( \mathbf{z}\right) \text{ and }\mathbf{x} \sim {p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) . \tag{1}
$$

The prior $p\left( \mathbf{z}\right)$ is chosen to be Gaussian, while the likelihood ${p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right)$ is modelled with a neural network. The likelihood model decodes latent variables into observations and is thus usually referred to as the decoder in the literature. In its original formulation, the likelihood is parameterized with a multivariate Gaussian $\mathcal{N}\left( {{\mu }_{\theta }\left( \mathbf{z}\right) ,\operatorname{diag}\left( {{\sigma }_{\theta }^{2}\left( \mathbf{z}\right) }\right) }\right)$ when the observations are continuous, and a categorical distribution when they are discrete.

Training the generative model is achieved by finding the parameters $\theta$ of the decoder that maximize the sum of the marginal log-likelihoods of individual points

$$
\log {p}_{\theta }\left( X\right) = \mathop{\sum }\limits_{{\mathbf{x} \in X}}\log \int {p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) p\left( \mathbf{z}\right) \mathrm{d}\mathbf{z}.
$$

These integrals are intractable, but the introduction of an encoder network that approximates the posterior distribution, ${q}_{\psi }\left( {\mathbf{z} \mid \mathbf{x}}\right)$, allows maximizing the associated evidence lower bound

$$
\mathrm{{ELBO}} \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\log \frac{{p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) p\left( \mathbf{z}\right) }{{q}_{\psi }\left( {\mathbf{z} \mid \mathbf{x}}\right) }}\right\rbrack \tag{2}
$$

$$
= \log {p}_{\theta }\left( \mathbf{x}\right) - \mathbb{K}\mathbb{L}\left\lbrack {{q}_{\psi }\left( {\mathbf{z} \mid \mathbf{x}}\right) \parallel {p}_{\theta }\left( {\mathbf{z} \mid \mathbf{x}}\right) }\right\rbrack \tag{3}
$$

$$
\leq \log {p}_{\theta }\left( \mathbf{x}\right) \text{ .
} \tag{4}
$$

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

The ELBO becomes tighter as the approximate posterior ${q}_{\psi }\left( {\mathbf{z} \mid \mathbf{x}}\right)$ gets closer to the true posterior. Learning the generative model is finally performed by jointly optimizing the parameters $\theta$ of the decoder and $\psi$ of the approximate posterior via stochastic gradient ascent. In the original VAE, the encoder models the approximate posterior as a conditional multivariate Gaussian distribution $\mathcal{N}\left( {{\mu }_{\psi }\left( \mathbf{x}\right) ,\operatorname{diag}\left( {{\sigma }_{\psi }^{2}\left( \mathbf{x}\right) }\right) }\right)$.

The ELBO loss presents two antagonistic goals to the encoder: it should encode the data accurately while remaining as close as possible to the prior distribution. Consequently, the Gaussian assumptions made on both the prior and the posterior distributions are often incompatible and limit the generative performance. A possible solution consists in learning a prior distribution that is compatible with the learned posteriors. For example, Habibian et al. (2019) and Chen et al. (2017) respectively showed that autoregressive models and normalizing flows (Rezende & Mohamed, 2015, NFs) greatly improve the performance of VAEs when used as prior distributions. In the following, we present how denoising diffusion probabilistic models can be used to improve the performance of classical VAEs.

§ 2.2. DENOISING DIFFUSION PROBABILISTIC MODELS

Inspired by non-equilibrium statistical physics, Sohl-Dickstein et al. (2015) originally introduced DDPMs, while Ho et al. (2020) only recently demonstrated how to train these models for image synthesis, achieving results close to the state of the art on this task.
DDPMs formulate generative modelling as the reverse operation of diffusion, a physical process which progressively destroys information. Formally, the reverse process is a latent variable model of the form

$$
{p}_{\phi }\left( {\mathbf{x}}_{0}\right) \mathrel{\text{ := }} \int {p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right) d{\mathbf{x}}_{1 : T},
$$

where ${\mathbf{x}}_{0} \mathrel{\text{ := }} \mathbf{x}$ denotes the observations and ${\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{T}$ denote latent variables of the same dimensionality as ${\mathbf{x}}_{0}$. The joint distribution ${p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right)$ is modelled as a first-order Markov chain with Gaussian transitions, that is

$$
{p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right) \mathrel{\text{ := }} {p}_{\phi }\left( {\mathbf{x}}_{T}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) , \tag{5}
$$

$$
{p}_{\phi }\left( {\mathbf{x}}_{T}\right) \mathrel{\text{ := }} \mathcal{N}\left( {\mathbf{0},\mathrm{I}}\right) , \tag{6}
$$

$$
{p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) \mathrel{\text{ := }} \mathcal{N}\left( {{\mu }_{\phi }\left( {{\mathbf{x}}_{t},t}\right) ,{\sigma }_{t}^{2}\mathrm{I}}\right) . \tag{7}
$$

Similar to VAEs, the reverse Markov chain is trained on an ELBO.
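The Markov factorisation in Equations (5) to (7) makes the joint log-density of a trajectory a sum of Gaussian terms. A toy sketch, in which the transition mean, step count, and variances are placeholders rather than a trained model:

```python
import numpy as np

d, T = 4, 10                   # dimensionality and number of steps (placeholder values)
sigmas = np.full(T + 1, 0.1)   # placeholder per-step standard deviations

def mu_phi(x_t, t):
    return 0.9 * x_t  # placeholder for the learned transition mean of Equation (7)

def gaussian_logpdf(x, mean, std):
    """Log-density of an isotropic Gaussian, summed over dimensions."""
    return float(np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2.0 * np.pi)))

def reverse_log_joint(xs):
    """xs[t] holds x_t; returns log p_phi(x_{0:T}) via the factorisation of Equation (5)."""
    logp = gaussian_logpdf(xs[T], np.zeros(d), 1.0)        # Equation (6): p(x_T) = N(0, I)
    for t in range(1, T + 1):                               # Equation (7): Gaussian transitions
        logp += gaussian_logpdf(xs[t - 1], mu_phi(xs[t], t), sigmas[t])
    return logp

trajectory = [np.zeros(d) for _ in range(T + 1)]
```

Note that only the joint over the full trajectory is tractable in this way; the marginal $p_{\phi}(\mathbf{x}_0)$ still requires integrating out $\mathbf{x}_{1:T}$, which is why training proceeds through the ELBO below.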
However, the approximate posterior $q\left( {{\mathbf{x}}_{1 : T} \mid {\mathbf{x}}_{0}}\right)$ is fixed to a diffusion process that is also a first-order Markov chain with Gaussian transitions,

$$
q\left( {{\mathbf{x}}_{1 : T} \mid {\mathbf{x}}_{0}}\right) \mathrel{\text{ := }} \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) , \tag{8}
$$

$$
q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) \mathrel{\text{ := }} \mathcal{N}\left( {\sqrt{1 - {\beta }_{t}}{\mathbf{x}}_{t - 1},{\beta }_{t}\mathbf{I}}\right) , \tag{9}
$$

where ${\beta }_{1},\ldots ,{\beta }_{T}$ define the variance schedule, which is either fixed as a training hyper-parameter or learned. The ELBO is then given by

$$
\mathrm{{ELBO}} \mathrel{\text{ := }} {\mathbb{E}}_{q}\left\lbrack {\log \frac{{p}_{\phi }\left( {\mathbf{x}}_{0 : T}\right) }{q\left( {{\mathbf{x}}_{1 : T} \mid {\mathbf{x}}_{0}}\right) }}\right\rbrack \leq \log {p}_{\phi }\left( {\mathbf{x}}_{0}\right) . \tag{10}
$$

Provided that the variances ${\beta }_{t}$ are small and that the number of timesteps $T$ is large enough, the Gaussian assumptions on the generative process ${p}_{\phi }$ are reasonable. Ho et al. (2020) take advantage of the Gaussian transitions to express the ELBO as

$$
{\mathbb{E}}_{q}\left\lbrack {\mathbb{{KL}}\left\lbrack {q\left( {{\mathbf{x}}_{T} \mid {\mathbf{x}}_{0}}\right) \parallel p\left( {\mathbf{x}}_{T}\right) }\right\rbrack - \log {p}_{\phi }\left( {{\mathbf{x}}_{0} \mid {\mathbf{x}}_{1}}\right) + \mathop{\sum }\limits_{{t = 2}}^{T}\mathbb{{KL}}\left\lbrack {q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right) \parallel {p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) }\right\rbrack }\right\rbrack .
\tag{11}
$$

The inner sum in Equation (11) consists of comparisons between the Gaussian generative transitions ${p}_{\phi }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ and the conditional forward posterior $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right)$, which can also be expressed in closed form as a Gaussian $\mathcal{N}\left( {{\widetilde{\mu }}_{t}\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right) ,{\widetilde{\beta }}_{t}\mathbf{I}}\right)$, where the ${\widetilde{\beta }}_{t}$ are functions of the variance schedule. The KL terms can thus be calculated in closed form, which reduces the variance of the final expression. In addition, Ho et al. (2020) empirically demonstrate that it is sufficient to take optimization steps on uniformly sampled terms of the sum instead of computing it completely. The final objective closely resembles denoising score matching over multiple noise levels (Song & Ermon, 2019). These observations, combined with additional simplifications, lead to the simplified loss

$$
{L}_{\mathrm{{DDPM}}}\left( {{\mathbf{x}}_{0};\phi }\right) \mathrel{\text{ := }} {\mathbb{E}}_{t,{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\left\lbrack {\frac{1}{2{\sigma }_{t}^{2}}{\begin{Vmatrix}{\mu }_{\phi }\left( {\mathbf{x}}_{t},t\right) - {\widetilde{\mu }}_{t}\left( {\mathbf{x}}_{0},{\mathbf{x}}_{t}\right) \end{Vmatrix}}^{2}}\right\rbrack , \tag{12}
$$

where ${\widetilde{\mu }}_{t}\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$ is the mean of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$, the forward diffusion posterior conditioned on the observation ${\mathbf{x}}_{0}$.

§ 2.3. PRIOR MODELLING WITH DENOISING DIFFUSION

We now introduce our contribution, which consists in using a DDPM to model the prior distribution in VAEs.
We formulate the generative model as

$$
{\mathbf{z}}_{T} \sim \mathcal{N}\left( {\mathbf{0},\mathrm{I}}\right) \tag{13}
$$

$$
{\mathbf{z}}_{t - 1 \mid t} \sim {p}_{\phi }\left( {{\mathbf{z}}_{t - 1} \mid {\mathbf{z}}_{t}}\right) \;\forall t \in \left\lbrack {T,\ldots ,1}\right\rbrack \tag{14}
$$

$$
\mathbf{x} \sim {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) , \tag{15}
$$

where $\phi$ denotes the parameters of the reverse diffusion model encoding the prior distribution. Equations (13) and (14) implicitly define a prior distribution over the usual latent variables ${\mathbf{z}}_{0}$, modelled with a reverse diffusion process.

Unfortunately, we cannot train a VAE with a diffusion prior directly on the ELBO as expressed in Equation (2), since ${p}_{\phi }\left( {\mathbf{z}}_{0}\right)$ cannot be evaluated. However, Equation (2) can be further developed as

$$
{\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) }\right\rbrack - {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {q}_{\psi }\left( {{\mathbf{z}}_{0} \mid \mathbf{x}}\right) }\right\rbrack + {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\phi }\left( {\mathbf{z}}_{0}\right) }\right\rbrack , \tag{16}
$$

in which a lower bound on the last term can be expressed by Equation (10).
This finally leads to the following expression

$$
{\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) - \log {q}_{\psi }\left( {{\mathbf{z}}_{0} \mid \mathbf{x}}\right) + {\mathbb{E}}_{q}\left\lbrack {\log \frac{{p}_{\phi }\left( {\mathbf{z}}_{0 : T}\right) }{q\left( {{\mathbf{z}}_{1 : T} \mid {\mathbf{z}}_{0}}\right) }}\right\rbrack }\right\rbrack \tag{17}
$$

$$
\leq {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log {p}_{\theta }\left( {\mathbf{x} \mid {\mathbf{z}}_{0}}\right) - \log {q}_{\psi }\left( {{\mathbf{z}}_{0} \mid \mathbf{x}}\right) + \log {p}_{\phi }\left( {\mathbf{z}}_{0}\right) }\right\rbrack \tag{18}
$$

$$
\leq \log {p}_{\theta }\left( \mathbf{x}\right) , \tag{19}
$$

which is a valid ELBO. Finally, the diffusion prior ${p}_{\phi }$ is trained jointly with the approximate posterior ${q}_{\psi }$ and the likelihood model ${p}_{\theta }$, all of which are optimized as in a classical VAE. This leads to the following loss function:

$$
\mathcal{L}\left( {\mathbf{x};\phi ,\theta ,\psi }\right) \mathrel{\text{ := }} {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {\log \frac{{p}_{\theta }\left( {\mathbf{x} \mid \mathbf{z}}\right) }{{q}_{\psi }\left( {\mathbf{z} \mid \mathbf{x}}\right) }}\right\rbrack + {\mathbb{E}}_{{q}_{\psi }}\left\lbrack {{L}_{\mathrm{{DDPM}}}\left( {{\mathbf{z}}_{0};\phi }\right) }\right\rbrack . \tag{20}
$$

§ 3. RELATED WORK

Various approaches have been proposed to improve the modelling capacity and the training of VAEs. As a first example, some state-of-the-art deep generative models based on VAEs model the posterior with normalizing flows or autoregressive models (Kingma et al., 2016; Vahdat & Kautz, 2020). Autoregressive models are also often used as a replacement for the original likelihood parameterization, which assumes conditional independencies that are often unrealistic (Oord et al., 2016).
Another popular improvement made to the original VAE is the embedding of structure in the latent variables. In particular, hierarchical VAEs (Sønderby et al., 2016; Kingma et al., 2016) combined with careful training demonstrate impressive results on generative modelling for images (Vahdat & Kautz, 2020).

Closer to our work, Chen et al. (2017) first proposed to learn the prior as a solution to the mismatch between the approximate and the true posteriors. They model the prior with an autoregressive flow, which also closely relates to modelling the posterior distribution with an inverse autoregressive flow (Kingma et al., 2016). Tomczak & Welling (2018) take inspiration from the aggregated posterior $\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{q}_{\psi }\left( {z \mid {x}_{i}}\right)$ (Hoffman & Johnson, 2016; Makhzani et al., 2015) to introduce the VampPrior, defined as a mixture over learned pseudo-inputs. An orthogonal line of work suggests that the mismatch between the approximate posterior and the exact posterior can be reduced by over-weighting the terms related to the prior and to the approximate posterior in the ELBO (Higgins et al., 2016; Chen et al., 2018).

§ 4. EXPERIMENTS

We now compare VAEs for different choices of priors, including the original Gaussian prior, an NF prior, and the proposed diffusion prior. All models share the same backbone encoder-decoder architecture, inspired by DCGAN (Radford et al., 2015). Optimization is performed with Adam for 250 epochs with a learning rate of 0.0005. After each epoch, the models are evaluated on a validation set used to select the best one for each training setting. We compare the models on the CIFAR10 and CelebA datasets for three different latent dimensionalities (40, 100, 200). The NF used in our experiments is a 3-step autoregressive affine flow with simple MLP backbones, similar to the one used to model the transition function of the DDPM.

Table 1 presents the FID scores for the different models.
We first notice the large scores reached by all models on the CIFAR10 dataset. This can be explained by the simplicity of the models trained in our experiments. We believe these scores could be greatly improved by using a more sophisticated likelihood model such as a PixelCNN (Oord et al., 2016). Although FID scores suggest that the Gaussian prior outperforms the diffusion prior in terms of generative performance, visual inspection of Figure 1 shows that the diffusion prior results in samples slightly more realistic than those of the classical VAE. The best FID score is achieved by the NF prior, although its samples do not seem to reflect this superiority. In this case, we believe the FID scores are not entirely informative about the quality of the images synthesized by the models and should be taken with a grain of salt. Although learned priors seem to improve generative performance on CIFAR10, additional work is needed to reach results that would justify using a diffusion prior for this dataset.

On CelebA however, we observe in Table 1 that diffusion priors outperform the Gaussian prior. This is in line with the visual inspection of Figure 2a and Figure 2c. As for CIFAR10, the NF prior outperforms the Gaussian and diffusion priors in terms of FID scores, although visual inspection of the corresponding samples in Figure 2b does not reveal a much better quality of images when compared to those resulting from the diffusion prior. We conclude from these observations that diffusion priors offer an interesting alternative to NFs for modelling the prior in a VAE.

Figure 1. Samples generated by a VAE trained on CIFAR10 for three different prior models. The diffusion prior leads to slightly better sampling quality than the Gaussian prior and quality similar to that of the NF prior.

Figure 2.
Samples generated by a VAE trained on CelebA for three different prior models. The diffusion prior leads to better sampling quality than the Gaussian prior and quality similar to that of the NF prior.

Table 1. FID scores for different prior models in VAEs and for different latent sizes. Diffusion priors outperform the classical VAE on CelebA but are slightly worse than NFs. FID scores do not reveal the superiority of any method on CIFAR10.

| Prior | CelebA (40) | CelebA (100) | CelebA (200) | CIFAR10 (40) | CIFAR10 (100) | CIFAR10 (200) |
| --- | --- | --- | --- | --- | --- | --- |
| Gaussian | 154.3 | 149.4 | 139.1 | 176.0 | 126.2 | 123.9 |
| NF | 72.9 | 59.49 | 54.7 | 167.6 | 129.1 | 129.6 |
| Diffusion | 114.8 | 67.95 | 88.3 | 177.9 | 160.5 | 153.1 |

## 5. Conclusion and Future Work

This preliminary work presents how denoising diffusion probabilistic models can be used as a new class of learnable priors for VAEs. As a notable contribution, we empirically demonstrate that implicitly optimizing a prior through an ELBO can be performed jointly with training the encoder and the decoder of the VAE. In addition, our results suggest that DDPMs perform on par with NFs for modelling the prior distribution.

A large spectrum of future research directions could benefit from the basic idea expressed in this preliminary work. As an example, recent advances in diffusion models such as the continuous formulation (Song et al., 2020) or improvements to the training procedure of DDPM (Nichol & Dhariwal, 2021) could be implemented in the prior model. Similarly, many improvements could be made to the architectures used for the VAE and to the training procedure. In particular, image synthesis with hierarchical VAEs, which organize the latent variables into multiple scales, could reveal the full potential of diffusion priors. This would indeed combine the structural knowledge embedded by such types of VAEs with the impressive performance of DDPM for modelling distributions over images.
Finally, diffusion models do not constrain the neural network architecture, and so enable the embedding of a larger choice of inductive biases in the prior distribution compared to autoregressive models and NFs.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/EWtxSjQ0YR/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/EWtxSjQ0YR/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..619725c3ed86fd44834f8dc350e61c59714b8fae
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/EWtxSjQ0YR/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,243 @@

# The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders

Anonymous Authors ${}^{1}$

## Abstract

Training and using modern neural-network based latent-variable generative models (like Variational Autoencoders) often require simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises: how complex does the inferential model need to be, in order to be able to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map impacting the required size of the encoder. We show that if the generative map is "strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps, for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity).
Importantly, we do not require the generative model to be layerwise invertible, which much of the related literature assumes, and which isn't satisfied by many architectures used in practice (e.g. convolution and pooling based networks). Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold.

## 1. Introduction

Many modern generative models of choice (e.g. Generative Adversarial Networks (Goodfellow et al., 2014), Variational Autoencoders (Kingma and Welling, 2013)) are modeled as non-linear, possibly stochastic transformations of a simple latent distribution (e.g. a standard Gaussian). A particularly common task is modeling the inferential (encoder) direction: that is, modeling the posterior distribution on the latents $z$ given an observable sample $x$. Such a task is useful both at train time and at test time. At train time, fitting generative models like variational autoencoders via maximum likelihood often relies on variational methods, which require the joint training of a generative model (i.e. generator/decoder), as well as an inference model (i.e. encoder) which models the posterior distribution of the latent given the observables. At test time, the posterior distribution very often has some practical use, e.g. useful, potentially interpretable feature embeddings for data (Berthelot et al., 2018), "intervening" on the latent space to change the sample in some targeted manner (Shen et al., 2020), etc. As such, the question of the "complexity" of the inference model (i.e. number of parameters to represent it using a neural network-based encoder) as a function of the "complexity" of the forward model is of paramount importance:

Question: How should we choose the architecture of the inference (encoder) model relative to the architecture of the generative (decoder) model during training?
For instance, when is the backward model not much more complex, so that training in this manner is not computationally prohibitive? Such a question is also pertinent from a purely scientific perspective, as it asks:

Question: Given a generative model for data, when is inference (much) harder than generation?

In this paper we identify an important aspect of the generative direction governing the complexity of the inference direction for variational autoencoders: a notion of approximate bijectivity/invertibility of the mean of the generative direction. We prove that under this assumption, the complexity of the inference direction is not much greater than the complexity of the generative direction. Conversely, without this assumption, under standard computational complexity conjectures from cryptography, we can exhibit instances where the inference direction has to be much more complex.

On the mathematical level, our techniques involve a neural simulation of a Langevin random walk to sample from the posterior of the latent variables. We show that the walk converges fast when started from an appropriate initial point, which we can compute using gradient descent (again, simulated via a neural network). On the lower bound side, we provide a reduction from the existence of one-way Boolean permutations in computational complexity: that is, permutations that are easy to calculate, but hard to invert. We show that the existence of a small encoder for non-invertible generators would allow us to design an invertor for any Boolean permutation, thus violating the existence of a one-way permutation. This is the first time such ideas have been applied to generative models.

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.
---

Our results can be seen as corroborating empirical observations that learning deep generative models more generally is harder when data lies on a low-dimensional manifold (Dai and Wipf, 2019; Arjovsky et al., 2017).

## 2. Our Results

The Variational Autoencoder (VAE) (Kingma and Welling, 2013) is one of the most commonly used paradigms in generative models. It is trained by fitting a generator, which maps latent variables $z$ to observables $x$ and is denoted by ${p}_{\theta }\left( {x \mid z}\right)$, as well as an encoder, which maps the observables to the latent space and is denoted by ${q}_{\phi }\left( {z \mid x}\right)$. Here $\phi$ and $\theta$ are the encoder parameters and generator parameters respectively. Given $n$ training samples ${\left\{ {x}^{\left( i\right) }\right\} }_{i = 1}^{n}$, the VAE objective is given by

$$
\mathop{\max }\limits_{{\phi ,\theta }}\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\mathbb{E}}_{z \sim {q}_{\phi }\left( { \cdot \mid {x}^{\left( i\right) }}\right) }\left\lbrack {\log {p}_{\theta }\left( {{x}^{\left( i\right) } \mid z}\right) }\right\rbrack - \mathrm{KL}\left( {{q}_{\phi }\left( {z \mid {x}^{\left( i\right) }}\right) \parallel p\left( z\right) }\right)
$$

where $p\left( z\right)$ is typically chosen to be a standard Gaussian. This loss can be viewed as a variational relaxation of the maximum likelihood objective, where the encoder ${q}_{\phi }$, in the limit of infinite representational power, is intended to model the posterior distribution ${p}_{\theta }\left( {z \mid {x}^{\left( i\right) }}\right)$ over the latent variables $z$.

Setup: We will consider a setting in which the data distribution itself is given by some ground-truth generator $G : {\mathbb{R}}^{{d}_{l}} \rightarrow {\mathbb{R}}^{{d}_{o}}$, and ask how complex (in terms of number of parameters) the encoder needs to be (as a function of the number of parameters of $G$), s.t.
it approximates the posterior distribution $p\left( {z \mid x}\right)$ of the generator.

We will consider two standard probabilistic models for the generator/encoder respectively.

Definition 1 (Latent Gaussian). A latent Gaussian is the conditional distribution given by a stochastic pushforward of the standard Gaussian. That is, for latent variable $z \in {\mathbb{R}}^{{d}_{l}}$ and observable $x \in {\mathbb{R}}^{{d}_{o}}$, for a neural network $G : {\mathbb{R}}^{{d}_{l}} \rightarrow {\mathbb{R}}^{{d}_{o}}$ and noise parameter ${\beta }^{2}$, the distribution $p\left( {x \mid z}\right) = \mathcal{N}\left( {G\left( z\right) ,{\beta }^{2}{I}_{{d}_{o}}}\right)$ is a latent Gaussian when $p\left( z\right) = \mathcal{N}\left( {0,{I}_{{d}_{l}}}\right)$. In other words, a sample from this distribution can be generated by independently sampling $z \sim \mathcal{N}\left( {0, I}\right)$ and $\xi \sim \mathcal{N}\left( {0,{\beta }^{2}I}\right)$ and outputting $x = G\left( z\right) + \xi$. This is a standard neural parametrization of a generator with (scaled) identity covariance matrix, a fairly common choice in practical implementations of VAEs (Kingma and Welling, 2013; Dai and Wipf, 2019).

We will also define a probabilistic model which is a composition of latent Gaussians (i.e. consists of multiple stochastic layers), which is also common, particularly when modeling encoders in VAEs, as such compositions can model potentially non-Gaussian posteriors (Burda et al., 2015; Rezende et al., 2014):

Definition 2 (Deep Latent Gaussian). A deep latent Gaussian is the conditional distribution given by a sequence of stochastic pushforwards of any density.
That is, for observable ${z}_{0} \in {\mathbb{R}}^{{d}_{0}}$ and latent variables ${\left\{ {z}_{i} \in {\mathbb{R}}^{{d}_{i}}\right\} }_{i = 1}^{L}$, for neural networks ${\left\{ {G}_{i} : {\mathbb{R}}^{{d}_{i - 1}} \rightarrow {\mathbb{R}}^{{d}_{i}}\right\} }_{i = 1}^{L}$ and noise parameters ${\left\{ {\beta }_{i}^{2}\right\} }_{i = 1}^{L}$, the conditional distribution $p\left( {{z}_{L} \mid {z}_{0}}\right)$ is a deep latent Gaussian when $p\left( {{z}_{i} \mid {z}_{i - 1}}\right) = \mathcal{N}\left( {{G}_{i}\left( {z}_{i - 1}\right) ,{\beta }_{i}^{2}{I}_{{d}_{i}}}\right) ,\forall i \in \left\lbrack L\right\rbrack$, and $p\left( {z}_{0}\right)$ is any valid density.

In other words, a deep latent Gaussian is a distribution which can be sampled by ancestral sampling, one layer at a time. Note that this class of distributions is convenient as a choice for an encoder in a VAE, since compositions are amenable to the reparametrization trick of (Kingma and Welling, 2013): the randomness for each of the layers can be "presampled" and appropriately transformed (Burda et al., 2015; Rezende et al., 2014). Then, we ask the following:

Question: If a VAE generator is modeled as a latent Gaussian (that is, $p\left( {x \mid z}\right) \equiv \mathcal{N}\left( {G\left( z\right) ,{\beta }^{2}I}\right)$), s.t. the corresponding $G$ has at most $N$ parameters, and we wish to approximate the posterior $p\left( {z \mid x}\right)$ by a deep latent Gaussian s.t. the networks in it have at most ${N}^{\prime }$ parameters in total, how large must ${N}^{\prime }$ be as a function of $N$?
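As a concrete illustration of Definitions 1 and 2, ancestral sampling from a latent Gaussian and a deep latent Gaussian can be sketched in a few lines of NumPy (the networks, dimensions, and noise levels below are illustrative toys, not the models analyzed in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_gaussian_sample(G, beta, d_l, rng):
    """Definition 1: z ~ N(0, I_{d_l}), xi ~ N(0, beta^2 I), x = G(z) + xi."""
    z = rng.standard_normal(d_l)
    mean = G(z)
    x = mean + beta * rng.standard_normal(mean.shape)
    return z, x

def deep_latent_gaussian_sample(Gs, betas, z0, rng):
    """Definition 2: ancestral sampling, one stochastic layer at a time.
    Each step is in reparametrized form (Gaussian noise presampled, then
    shifted and scaled), so the chain composes with the reparametrization trick."""
    z = z0
    for G_i, beta_i in zip(Gs, betas):
        mean = G_i(z)
        z = mean + beta_i * rng.standard_normal(mean.shape)  # z_i ~ N(G_i(z_{i-1}), beta_i^2 I)
    return z

# Toy one-layer networks G_i; tanh keeps them Lipschitz and smooth.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 4))
Gs = [lambda z: np.tanh(W1 @ z), lambda z: np.tanh(W2 @ z)]

z, x = latent_gaussian_sample(lambda z: np.tanh(W1 @ z), beta=0.1, d_l=3, rng=rng)
z0 = rng.standard_normal(3)  # here p(z_0) is a standard Gaussian
z_L = deep_latent_gaussian_sample(Gs, betas=[0.1, 0.1], z0=z0, rng=rng)
```

Sampling `z_L` this way presamples all the per-layer Gaussian noise up front, which is exactly why deep latent Gaussians are a convenient encoder class for the variational objective.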
We will work in the setting ${d}_{l} = {d}_{o} = d$, and prove a dichotomy based on the invertibility of $G$: namely, if $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is bijective, and $\beta \leq \mathcal{O}\left( \frac{1}{{d}^{1.5}\sqrt{\log d/\epsilon }}\right)$, the posterior $p\left( {z \mid x}\right)$ can be $\epsilon$-approximated in total variation distance by a deep latent Gaussian of size ${N}^{\prime } = \mathcal{O}\left( {N \cdot \operatorname{poly}\left( {d,1/\beta ,1/\epsilon }\right) }\right)$. Thus, if the neural network $G$ is invertible, then for a fixed $\epsilon$ and a small-enough variance term ${\beta }^{2}$, we can approximate the posterior with a deep latent Gaussian polynomially larger than $G$. On the other hand, if $G$ is not bijective and one-way functions exist (a widely believed computational complexity conjecture), we will show there exists a VAE generator $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ of size polynomial in $d$, for which the posterior $p\left( {z \mid x}\right)$ cannot be approximated in total variation distance for even an inverse polynomial fraction of inputs $x$, unless the inferential network is of size exponential in $d$.

### 2.1. Upper bounds for bijective generators

We first lay out the assumptions on the map $G$. The first is a quantitative characterization of bijectivity; the second requires upper bounds on the derivatives of $G$ up to order 3. We also have a centering assumption. We state these below.

Assumption 1 (Strong invertibility). We will assume that the latent and observable spaces have the same dimension (denoted $d$), and $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is bijective.
Moreover, we will assume there exists a positive constant $m > 0$ such that:

$$
\forall {z}_{1},{z}_{2} \in {\mathbb{R}}^{d},\begin{Vmatrix}{G\left( {z}_{1}\right) - G\left( {z}_{2}\right) }\end{Vmatrix} \geq m \cdot \begin{Vmatrix}{{z}_{1} - {z}_{2}}\end{Vmatrix}
$$

Remark 1: This is a stronger, quantitative version of invertibility. Furthermore, the infinitesimal version of this condition (i.e. $\begin{Vmatrix}{{z}_{1} - {z}_{2}}\end{Vmatrix} \rightarrow 0$) implies that the smallest magnitude of the singular values of the Jacobian at any point is lower bounded by $m$, that is $\forall z \in {\mathbb{R}}^{d},\mathop{\min }\limits_{{i \in \left\lbrack d\right\rbrack }}\left| {{\sigma }_{i}\left( {{J}_{G}\left( z\right) }\right) }\right| \geq m > 0$. Since $m$ is strictly positive, this in particular means that the Jacobian is full rank everywhere.

Remark 2: Note, we do not require that $G$ is layerwise invertible (i.e. that each map from one layer to the next is invertible). If that were the case, at least in the limit $\beta \rightarrow 0$, the existence of an inference encoder of comparable size to $G$ would be rather obvious: we could simply invert each layer one at a time. This is important, as many architectures based on convolutions perform operations which increase the dimension (i.e. map from a lower to a higher dimensional space), followed by pooling (which decreases the dimension). Nevertheless, it has been observed that these architectures are invertible in practice: Lipton and Tripathi (2017) manage to achieve almost 100% success at inverting an off-the-shelf trained model, thus justifying this assumption.

Assumption 2 (Smoothness).
There exists a finite positive constant $M > 0$ such that:

$$
\forall {z}_{1},{z}_{2} \in {\mathbb{R}}^{d},\begin{Vmatrix}{G\left( {z}_{1}\right) - G\left( {z}_{2}\right) }\end{Vmatrix} \leq M \cdot \begin{Vmatrix}{{z}_{1} - {z}_{2}}\end{Vmatrix}
$$

Moreover, we will assume that $G$ has continuous partial derivatives up to order 3 at every $z \in {\mathbb{R}}^{d}$, and that the derivatives are bounded by finite positive constants ${M}_{2}$ and ${M}_{3}$ as

$$
\forall z \in {\mathbb{R}}^{d},\;{\begin{Vmatrix}{\nabla }^{2}G\left( z\right) \end{Vmatrix}}_{op} \leq {M}_{2} < \infty ,\quad {\begin{Vmatrix}{\nabla }^{3}G\left( z\right) \end{Vmatrix}}_{op} \leq {M}_{3} < \infty
$$

Remark 3: This is a benign assumption, stating that the map $G$ is smooth to third order. The infinitesimal version of this means that the largest magnitude of the singular values of the Jacobian at any point is upper bounded by $M$, that is $\forall z \in {\mathbb{R}}^{d},\mathop{\max }\limits_{{i \in \left\lbrack d\right\rbrack }}\left| {{\sigma }_{i}\left( {{J}_{G}\left( z\right) }\right) }\right| = {\begin{Vmatrix}{J}_{G}\left( z\right) \end{Vmatrix}}_{op} \leq M < \infty$.

Remark 4: A neural network with activation function $\sigma$ will satisfy this assumption when $\sigma : \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz, and $\mathop{\max }\limits_{a}\left| {{\sigma }^{\prime }\left( a\right) }\right|$ and $\mathop{\max }\limits_{a}\left| {{\sigma }^{\prime \prime }\left( a\right) }\right|$ are finite.

Assumption 3 (Centering). The map $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ satisfies $G\left( 0\right) = 0$.

Remark 5: This assumption is for convenience of stating the bounds; we effectively need the "range" of the majority of the samples $x$ under the distribution of the generator. All the results can easily be restated by including a dependence on $\parallel G\left( 0\right) \parallel$.

Our main result is then stated below.
Throughout, the $\mathcal{O}\left( \cdot \right)$ notation hides dependence on the map constants, namely $m, M,{M}_{2},{M}_{3}$. We will denote by ${d}_{\mathrm{{TV}}}\left( {p, q}\right)$ the total variation distance between the distributions $p, q$.

Theorem 1 (Main, invertible generator). Consider a VAE generator given by a latent Gaussian with noise parameter ${\beta }^{2}$ and generator $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ satisfying Assumptions 1 and 2, which has $N$ parameters and a differentiable activation function $\sigma$. Then, for

$$
\beta \leq \mathcal{O}\left( \frac{1}{{d}^{1.5}\sqrt{\log \frac{d}{\epsilon }}}\right) \tag{1}
$$

there exists a deep latent Gaussian with ${N}^{\prime } = \mathcal{O}\left( {N \cdot \operatorname{poly}\left( {d,\frac{1}{\beta },\frac{1}{\epsilon }}\right) }\right)$ parameters and activation functions $\left\{ {\sigma ,{\sigma }^{\prime },\rho }\right\}$, where $\rho \left( x\right) = {x}^{2}$, such that with probability $1 - \exp \left( {-\mathcal{O}\left( d\right) }\right)$ over a sample $x$ from the VAE generator, the distribution $q\left( {z \mid x}\right)$ of the deep latent Gaussian on input $x$ satisfies ${d}_{TV}\left( {q\left( {z \mid x}\right) , p\left( {z \mid x}\right) }\right) \leq \epsilon$.

Remark 6: The addition of $\rho$ to the activation functions is for convenience of stating the bound. Using usual techniques in universal approximation, it can be simulated using any other smooth activation.

### 2.2. Lower bounds for non-bijective generators

We now discuss the case when the generative map $G$ is not bijective, showing an instance such that no small encoder corresponding to the posterior exists. The lower bound will be based on a reduction from the existence of one-way functions, a standard complexity assumption in theoretical computer science (more concretely, cryptography).
Precisely, we will start with the following form of the one-way-function conjecture:

Conjecture 1 (Existence of one-way permutations, (Katz and Lindell, 2020)). There exists a bijection $f : \{ - 1,1{\} }^{d} \rightarrow \{ - 1,1{\} }^{d}$ computable by a Boolean circuit $\mathcal{C} : \{ - 1,1{\} }^{d} \rightarrow \{ - 1,1{\} }^{d}$ of size $\operatorname{poly}\left( d\right)$, but for every $T\left( d\right) = \operatorname{poly}\left( d\right)$ and $\epsilon \left( d\right) = \frac{1}{\operatorname{poly}\left( d\right) }$, and every circuit ${\mathcal{C}}^{\prime } : \{ - 1,1{\} }^{d} \rightarrow \{ - 1,1{\} }^{d}$ of size $T\left( d\right)$, it holds that $\mathop{\Pr }\limits_{{z \sim \{ \pm 1{\} }^{d}}}\left\lbrack {{\mathcal{C}}^{\prime }\left( {\mathcal{C}\left( z\right) }\right) = z}\right\rbrack \leq \epsilon \left( d\right)$.

In other words, there is a circuit of size polynomial in the input, s.t. for every polynomially sized invertor circuit (the two polynomials need not be the same; the invertor can be much larger, so long as it is polynomial), the invertor circuit succeeds on at most an inverse polynomial fraction of the inputs. Assuming this Conjecture, we show that there exist generators that do not have small encoders that accurately represent the posterior for most points $x$. Namely:

Theorem 2 (Main, non-invertible generator).
If Conjecture 1 holds, there exists a VAE generator $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ of size $\operatorname{poly}\left( d\right)$ with activation functions $\{ \operatorname{sgn},\min ,\max \}$, s.t. for every $\beta = o\left( {1/\sqrt{d}}\right)$, every $T\left( d\right) = \operatorname{poly}\left( d\right)$ and every $\epsilon \left( d\right) = 1/\operatorname{poly}\left( d\right)$, and for any encoder $E$ that can be represented by a deep latent Gaussian with networks that have total number of parameters bounded by $T\left( d\right)$, weights bounded by $W$, activation functions that are $L$-Lipschitz, and node outputs bounded by $M$ with probability $1 - \exp \left( -d\right)$ over a sample $x$ from $G$, where $L, M, W = o\left( {\exp \left( {\operatorname{poly}\left( d\right) }\right) }\right)$, we have:

$$
\mathop{\Pr }\limits_{{x \sim G}}\left\lbrack {{d}_{TV}\left( {E\left( {z \mid x}\right) , p\left( {z \mid x}\right) }\right) \leq \frac{1}{10}}\right\rbrack \leq \epsilon \left( d\right)
$$

Thus, we show the existence of a generator for which no encoder of polynomial size reasonably approximates the posterior for even an inverse-polynomial fraction of the samples $x$ (under the distribution of the generator).

Remark 7: The generator $G$, though mapping from ${\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$, will be highly non-invertible. Perhaps counterintuitively, Conjecture 1 applies to bijections; the point, though, is that $G$ will be simulating a Boolean circuit, and in the process will give the same output on many inputs (more precisely, it will only depend on the signs of the inputs, rather than their values).

Remark 8: The choice of activation functions $\{ \mathrm{{sgn}},\min ,\max \}$ is for convenience of stating the theorem. Using standard universal approximation results, similar results can be stated with other activation functions.
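Remark 7 can be made concrete with a toy sketch (the permutation below is illustrative and carries none of the cryptographic hardness of the circuit $\mathcal{C}$ in Conjecture 1): a generator that first quantizes its input with sign activations and then applies a Boolean permutation is constant on each orthant of ${\mathbb{R}}^{d}$, hence highly non-invertible, even though the underlying Boolean map is a bijection of $\{ -1,1{\} }^{d}$.

```python
import numpy as np

def boolean_permutation(b):
    """A toy bijection of {-1,1}^3 (a stand-in for the hard-to-invert circuit C):
    cyclically shift the coordinates and flip the sign of the first output."""
    return np.array([-b[2], b[0], b[1]])

def G(z):
    """Generator in the spirit of Remark 7: the output depends only on the
    signs of the inputs, so every orthant of R^3 collapses to a single point."""
    return boolean_permutation(np.sign(z))

# Two very different latent vectors with the same sign pattern...
z1 = np.array([0.3, -1.2, 2.0])
z2 = np.array([5.0, -0.1, 0.7])
# ...are mapped to exactly the same output: G is highly non-invertible.
assert np.array_equal(G(z1), G(z2))
```

For the actual lower bound, the Boolean map must of course be a one-way permutation; the sign-quantization step is what transfers its hardness from $\{ -1,1{\} }^{d}$ to a real-valued generator.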
Remark 9: The restrictions on the Lipschitzness of the activations and the bounds on the weights and node outputs of $E$ are extremely mild, as they are allowed to be potentially exponential in $d$; consider that even writing down a natural number in binary requires only a logarithmic number of digits.

## 3. Related Work

On the empirical side, the impact of impoverished variational posteriors in VAEs (in particular, modeling the encoder as a Gaussian) has long been conjectured as one of the (several) reasons for the fuzzy nature of samples in trained VAEs; (Zhao et al., 2017) provide recent evidence towards this conjecture. Invertibility of generative models in general (VAEs, GANs and normalizing flows), both as it relates to the hardness of fitting the model, and as it relates to the usefulness of having an invertible model, has been studied extensively: (Lipton and Tripathi, 2017) show that off-the-shelf trained GANs can be inverted with a near-100% success rate, despite the model not being encouraged to be invertible during training; (Dai and Wipf, 2019) propose an alternate training algorithm for VAEs that tries to remedy algorithmic problems during training VAEs when data lies on a lower-dimensional manifold; (Behrmann et al., 2020) show that trained normalizing flows, while invertible by design, are only barely so: the learned models are extremely close to being singular.

On the theoretical side, the most closely relevant work is (Lei et al., 2019). They provide an algorithm for inverting GAN generators with random weights and expanding layers. Their algorithm is layerwise; that is to say, each of the layers in their networks is invertible, and they invert the layers one at a time. This is distinctly not satisfied by architectures used in practice, which expand and shrink, a typical example being architectures based on convolutions and pooling.
The same paper also shows NP-hardness of inverting a general GAN, but crucially they assume the network $G$ is part of the input (their proof does not work otherwise). Our lower bound can be viewed as a "non-uniform complexity" (i.e. circuit complexity) analogue of this, since we are looking for a small neural network $E$, as opposed to an efficient algorithm; crucially, however, $G$ is not part of the input (i.e. $G$ can be preprocessed for an unlimited amount of time). (Hand and Voroninski, 2018) provide similar guarantees for inverting GAN generators with random weights that satisfy layerwise invertibility, albeit via non-convex optimization of a certain objective.

## 4. Conclusion

In this paper we initiated the first formal study of the effect of invertibility of the generator on the representational complexity of the encoder in variational autoencoders. We proved a dichotomy: invertible generators give rise to distributions for which the posterior can be approximated by an encoder not much larger than the generator. On the other hand, for non-invertible generators, the corresponding encoder may need to be exponentially larger. Our work is the first to connect the complexity of inference to invertibility, and there are many interesting avenues for further work.

## References

M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pages 214-223. PMLR, 2017.

J. Behrmann, P. Vicol, K.-C. Wang, R. Grosse, and J.-H. Jacobsen. Understanding and mitigating exploding inverses in invertible neural networks. arXiv preprint arXiv:2006.09347, 2020.

D. Berthelot, C. Raffel, A. Roy, and I. Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543, 2018.

S. Bubeck, R. Eldan, and J. Lehec. Sampling from a log-concave distribution with projected langevin monte carlo.
Discrete & Computational Geometry, 59(4):757-783, 2018.

Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.

B. Dai and D. Wipf. Diagnosing and enhancing vae models. arXiv preprint arXiv:1903.05789, 2019.

A. S. Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016.

A. S. Dalalyan. Further and stronger analogy between sampling and optimization: Langevin monte carlo and gradient descent. arXiv preprint arXiv:1704.04752, 2017.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014.

P. Hand and V. Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. In Conference On Learning Theory, pages 970-978. PMLR, 2018.

N. Ikeda and S. Watanabe. A comparison theorem for solutions of stochastic differential equations and its applications. Osaka Journal of Mathematics, 14(3):619-633, 1977.

J. Katz and Y. Lindell. Introduction to modern cryptography. CRC press, 2020.

D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Q. Lei, A. Jalal, I. S. Dhillon, and A. G. Dimakis. Inverting deep generative models, one layer at a time. arXiv preprint arXiv:1906.07437, 2019.

L.-H. Lim. Singular values and eigenvalues of tensors: a variational approach. In 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2005, pages 129-132. IEEE, 2005.

P.-L. Lions and A.-S. Sznitman. Stochastic differential equations with reflecting boundary conditions. Communications on Pure and Applied Mathematics, 37(4):511-537, 1984.

Z. C. Lipton and S. Tripathi. Precise recovery of latent vectors from generative adversarial networks.
arXiv preprint arXiv:1702.04782, 2017. + +A. Moitra and A. Risteski. Fast convergence for langevin diffusion with matrix manifold structure. arXiv preprint arXiv:2002.05576, 2020. + +Y. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2003. + +M. Raginsky, A. Rakhlin, and M. Telgarsky. Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis. arXiv preprint arXiv:1702.03849, 2017. + +D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pages 1278-1286. PMLR, 2014. + +Y. Saisho. Stochastic differential equations for multidimensional domain with reflecting boundary. Probability Theory and Related Fields, 74(3):455-477, 1987. + +Y. Shen, J. Gu, X. Tang, and B. Zhou. Interpreting the latent space of gans for semantic face editing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9243-9252, 2020. + +H. T. Siegelmann. Neural networks and analog computation: beyond the Turing limit. Springer Science & Business Media, 2012. + +S. Zhao, J. Song, and S. Ermon. Towards deeper understanding of variational autoencoding models. arXiv preprint arXiv:1702.08658, 2017. 
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/EWtxSjQ0YR/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/EWtxSjQ0YR/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..a32147bbf60345adf546f030a56e9c12d70775d1 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/EWtxSjQ0YR/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,187 @@ +§ THE EFFECTS OF INVERTIBILITY ON THE REPRESENTATIONAL COMPLEXITY OF ENCODERS IN VARIATIONAL AUTOENCODERS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Training and using modern neural-network based latent-variable generative models (like Variational Autoencoders) often requires simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises: how complex does the inferential model need to be, in order to be able to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map impacting the required size of the encoder. We show that if the generative map is "strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps, for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity). Importantly, we do not require the generative model to be layerwise invertible, which a lot of the related literature assumes and which isn't satisfied by many architectures used in practice (e.g. convolution and pooling based networks). Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold. + +§ 1. INTRODUCTION + +Many modern generative models of choice (e.g. Generative Adversarial Networks (Goodfellow et al., 2014), Variational Autoencoders (Kingma and Welling, 2013)) are modeled as non-linear, possibly stochastic transformations of a simple latent distribution (e.g. a standard Gaussian). A particularly common task is modeling the inferential (encoder) direction: that is, modeling the posterior distribution on the latents $z$ given an observable sample $x$ . Such a task is useful both at train time and at test time. At train time, fitting generative models like variational autoencoders via maximum likelihood often relies on variational methods, which require the joint training of a generative model (i.e. generator/decoder), as well as an inference model (i.e. encoder) which models the posterior distribution of the latent given the observables. At test time, the posterior distribution very often has some practical use, e.g. useful, potentially interpretable feature embeddings for data (Berthelot et al., 2018), "intervening" on the latent space to change the sample in some targeted manner (Shen et al., 2020), etc. As such, the question of the "complexity" of the inference model (i.e. the number of parameters needed to represent it using a neural network-based encoder) as a function of the "complexity" of the forward model is of paramount importance: + +Question: How should we choose the architecture of the inference (encoder) model relative to the architecture of the generative (decoder) model during training? + +For instance, when is the backward model not much more complex, so that training in this manner is not computationally prohibitive?
Such a question is also pertinent from a purely scientific perspective, as it asks: + +Question: Given a generative model for data, when is inference (much) harder than generation? + +In this paper we identify an important aspect of the generative direction governing the complexity of the inference direction for variational autoencoders: a notion of approximate bijectivity/invertibility of the mean of the generative direction. We prove that under this assumption, the complexity of the inference direction is not much greater than the complexity of the generative direction. Conversely, without this assumption, under standard computational complexity conjectures from cryptography, we can exhibit instances where the inference direction has to be much more complex. + +On the mathematical level, our techniques involve a neural simulation of a Langevin random walk to sample from the posterior of the latent variables. We show that the walk converges fast when started from an appropriate initial point, which we can compute using gradient descent (again, simulated via a neural network). On the lower bound side, we provide a reduction from the existence of one-way Boolean permutations in computational complexity: that is, permutations that are easy to calculate, but hard to invert. We show that the existence of a small encoder for non-invertible generators would allow us to design an invertor for any Boolean permutation, thus violating the existence of a one-way permutation. This is the first time such ideas have been applied to generative models. + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +Our results can be seen as corroborating empirical observations that learning deep generative models more generally is harder when data lies on a low-dimensional manifold (Dai and Wipf, 2019; Arjovsky et al., 2017).
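The two-stage procedure sketched above (a gradient-descent initialization followed by a Langevin walk on the posterior) can be illustrated numerically. The following is an illustrative sketch only, using a made-up toy generator; the paper's actual construction simulates both stages inside a neural network:

```python
import numpy as np

def langevin_posterior_sample(G, jac_G, x, beta, step=1e-5, n_steps=20000, rng=None):
    """Approximately sample z ~ p(z | x) for x = G(z) + N(0, beta^2 I), z ~ N(0, I).

    Stage 1: gradient descent on ||x - G(z)||^2 to find a good starting point.
    Stage 2: unadjusted Langevin dynamics on the log-posterior
        log p(z | x) = -||z||^2 / 2 - ||x - G(z)||^2 / (2 beta^2) + const.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.zeros_like(x)
    for _ in range(2000):                     # stage 1: gradient descent
        z += 0.1 * jac_G(z).T @ (x - G(z))
    for _ in range(n_steps):                  # stage 2: Langevin walk
        grad_log_post = -z + jac_G(z).T @ (x - G(z)) / beta**2
        z += step * grad_log_post + np.sqrt(2 * step) * rng.standard_normal(z.shape[0])
    return z

# Toy strongly invertible generator (hypothetical, for illustration only).
A = np.array([[2.0, 0.5], [0.0, 1.5]])
G = lambda z: A @ z + 0.1 * np.tanh(z)
jac_G = lambda z: A + 0.1 * np.diag(1.0 - np.tanh(z) ** 2)

rng = np.random.default_rng(0)
z_true = rng.standard_normal(2)
beta = 0.01
x = G(z_true) + beta * rng.standard_normal(2)
z_post = langevin_posterior_sample(G, jac_G, x, beta, rng=rng)
# For small beta the posterior concentrates near the true pre-image of x.
print(np.linalg.norm(z_post - z_true))
```

Note that the Langevin step size must be small relative to the posterior curvature, which scales like $1/\beta^2$; this is one reason the theorem requires a small-enough noise level $\beta$.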
+ +§ 2. OUR RESULTS + +The Variational Autoencoder (VAE) (Kingma and Welling, 2013) is one of the most commonly used paradigms in generative models. It is trained by fitting a generator which maps latent variables $z$ to observables $x$ , denoted by ${p}_{\theta }\left( {x \mid z}\right)$ , as well as an encoder which maps the observables to the latent space, denoted by ${q}_{\phi }\left( {z \mid x}\right)$ . Here $\phi$ and $\theta$ are the encoder parameters and generator parameters respectively. Given $n$ training samples ${\left\{ {x}^{\left( i\right) }\right\} }_{i = 1}^{n}$ , the VAE objective is given by + +$$ +\mathop{\max }\limits_{{\phi ,\theta }}\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\mathbb{E}}_{z \sim {q}_{\phi }\left( {\cdot \mid {x}^{\left( i\right) }}\right) }\left\lbrack {\log {p}_{\theta }\left( {{x}^{\left( i\right) } \mid z}\right) }\right\rbrack - \mathrm{KL}\left( {{q}_{\phi }\left( {z \mid {x}^{\left( i\right) }}\right) \parallel p\left( z\right) }\right) +$$ + +where $p\left( z\right)$ is typically chosen to be a standard Gaussian. This loss can be viewed as a variational relaxation of the maximum likelihood objective, where the encoder ${q}_{\phi }$ , in the limit of infinite representational power, is intended to model the posterior distribution ${p}_{\theta }\left( {z \mid {x}^{\left( i\right) }}\right)$ over the latent variables $z$ . + +Setup: We will consider a setting in which the data distribution itself is given by some ground-truth generator $G : {\mathbb{R}}^{{d}_{l}} \rightarrow {\mathbb{R}}^{{d}_{o}}$ , and ask how complex (in terms of number of parameters) the encoder needs to be (as a function of the number of parameters of $G$ ), s.t. it approximates the posterior distribution $p\left( {z \mid x}\right)$ of the generator. + +We will consider two standard probabilistic models for the generator/encoder respectively. + +Definition 1 (Latent Gaussian).
A latent Gaussian is the conditional distribution given by a stochastic pushforward of the standard Gaussian. That is, for latent variable $z \in {\mathbb{R}}^{{d}_{l}}$ and observable $x \in {\mathbb{R}}^{{d}_{o}}$ , for a neural network $G : {\mathbb{R}}^{{d}_{l}} \rightarrow {\mathbb{R}}^{{d}_{o}}$ and noise parameter ${\beta }^{2}$ , the distribution $p\left( {x \mid z}\right) = \mathcal{N}\left( {G\left( z\right) ,{\beta }^{2}{I}_{{d}_{o}}}\right)$ is a latent Gaussian when $p\left( z\right) = \mathcal{N}\left( {0,{I}_{{d}_{l}}}\right)$ . In other words, a sample from this distribution can be generated by independently sampling $z \sim \mathcal{N}\left( {0,I}\right)$ and $\xi \sim \mathcal{N}\left( {0,{\beta }^{2}I}\right)$ and outputting $x = G\left( z\right) + \xi$ . This is a standard neural parametrization of a generator with a (scaled) identity covariance matrix, a fairly common choice in practical implementations of VAEs (Kingma and Welling, 2013; Dai and Wipf, 2019). + +We will also define a probabilistic model which is a composition of latent Gaussians (i.e. consists of multiple stochastic layers), which is also common, particularly when modeling encoders in VAEs, as they can model potentially non-Gaussian posteriors (Burda et al., 2015; Rezende et al., 2014): + +Definition 2 (Deep Latent Gaussian). A deep latent Gaussian is the conditional distribution given by a sequence of stochastic pushforwards of any density.
That is, for observable ${z}_{0} \in {\mathbb{R}}^{{d}_{0}}$ and latent variables ${\left\{ {z}_{i} \in {\mathbb{R}}^{{d}_{i}}\right\} }_{i = 1}^{L}$ , for neural networks ${\left\{ {G}_{i} : {\mathbb{R}}^{{d}_{i - 1}} \rightarrow {\mathbb{R}}^{{d}_{i}}\right\} }_{i = 1}^{L}$ and noise parameters ${\left\{ {\beta }_{i}^{2}\right\} }_{i = 1}^{L}$ , the conditional distribution $p\left( {{z}_{L} \mid {z}_{0}}\right)$ is a deep latent Gaussian when $p\left( {{z}_{i} \mid {z}_{i - 1}}\right) = \mathcal{N}\left( {{G}_{i}\left( {z}_{i - 1}\right) ,{\beta }_{i}^{2}{I}_{{d}_{i}}}\right) ,\forall i \in \left\lbrack L\right\rbrack$ and $p\left( {z}_{0}\right)$ is any valid density. + +In other words, a deep latent Gaussian is a distribution which can be sampled by ancestral sampling, one layer at a time. Note that this class of distributions is convenient as a choice for an encoder in a VAE, since compositions are amenable to the reparametrization trick of (Kingma and Welling, 2013): the randomness for each of the layers can be "presampled" and appropriately transformed (Burda et al., 2015; Rezende et al., 2014). Then, we ask the following: + +Question: If a VAE generator is modeled as a latent Gaussian (that is, $p\left( {x \mid z}\right) \equiv \mathcal{N}\left( {G\left( z\right) ,{\beta }^{2}I}\right)$ ), s.t. the corresponding $G$ has at most $N$ parameters, and we wish to approximate the posterior $p\left( {z \mid x}\right)$ by a deep latent Gaussian s.t. the networks in it have at most ${N}^{\prime }$ parameters in total, how large must ${N}^{\prime }$ be as a function of $N$ ?
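As a concrete illustration of Definitions 1 and 2 (a minimal sketch with arbitrary toy networks, not a trained model), a deep latent Gaussian can be sampled ancestrally, one stochastic layer at a time; presampling the per-layer noise `eps` is exactly the reparametrized form:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(d_in, d_out):
    """A toy 'network' G_i : R^{d_in} -> R^{d_out} (illustrative only)."""
    W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
    return lambda z: np.tanh(W @ z)

dims = [4, 8, 8, 4]                                   # d_0, ..., d_L
layers = [make_layer(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
betas = [0.1, 0.1, 0.1]                               # noise parameters beta_i

def sample_deep_latent_gaussian(z0, layers, betas, eps=None):
    """Ancestral sampling: z_i ~ N(G_i(z_{i-1}), beta_i^2 I), layer by layer."""
    if eps is None:  # reparametrization: the noise can be presampled and reused
        eps = [rng.standard_normal(d) for d in dims[1:]]
    z = z0
    for G_i, beta_i, eps_i in zip(layers, betas, eps):
        z = G_i(z) + beta_i * eps_i
    return z

z0 = rng.standard_normal(dims[0])   # p(z_0): here a standard Gaussian
zL = sample_deep_latent_gaussian(z0, layers, betas)
print(zL.shape)
```

With $L = 1$ and $z_0 \sim \mathcal{N}(0, I)$, this reduces to sampling from the latent Gaussian of Definition 1.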
+ +We will work in the setting ${d}_{l} = {d}_{o} = d$ , and prove a dichotomy based on the invertibility of $G$ : namely, if $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is bijective, and $\beta \leq \mathcal{O}\left( \frac{1}{{d}^{1.5}\sqrt{\log d/\epsilon }}\right)$ , the posterior $p\left( {z \mid x}\right)$ can be $\epsilon$ -approximated in total variation distance by a deep latent Gaussian of size ${N}^{\prime } = \mathcal{O}\left( {N \cdot \operatorname{poly}\left( {d,1/\beta ,1/\epsilon }\right) }\right)$ . Thus, if the neural network $G$ is invertible, then for a fixed $\epsilon$ and a small-enough variance term ${\beta }^{2}$ , we can approximate the posterior with a deep latent Gaussian polynomially larger than $G$ . On the other hand, if $G$ is not bijective, then, assuming one-way functions exist (a widely believed computational complexity conjecture), we will show there exists a VAE generator $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ of size polynomial in $d$ , for which the posterior $p\left( {z \mid x}\right)$ cannot be approximated in total variation distance for even an inverse polynomial fraction of inputs $x$ , unless the inferential network is of size exponential in $d$ . + +§ 2.1. UPPER BOUNDS FOR BIJECTIVE GENERATORS + +We first lay out the assumptions on the map $G$ . The first is a quantitative characterization of bijectivity; the second requires upper bounds on the derivatives of $G$ up to order 3. We also have a centering assumption. We state these below. + +Assumption 1 (Strong invertibility). We will assume that the latent and observable spaces have the same dimension (denoted $d$ ), and that $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is bijective.
Moreover, we will assume there exists a positive constant $m > 0$ such that: + +$$ +\forall {z}_{1},{z}_{2} \in {\mathbb{R}}^{d},\begin{Vmatrix}{G\left( {z}_{1}\right) - G\left( {z}_{2}\right) }\end{Vmatrix} \geq m \cdot \begin{Vmatrix}{{z}_{1} - {z}_{2}}\end{Vmatrix} +$$ + +Remark 1: This is a stronger quantitative version of invertibility. Furthermore, the infinitesimal version of this condition (i.e. $\begin{Vmatrix}{{z}_{1} - {z}_{2}}\end{Vmatrix} \rightarrow 0$ ) implies that the smallest magnitude of the singular values of the Jacobian at any point is lower bounded by $m$ , that is $\forall z \in {\mathbb{R}}^{d},\mathop{\min }\limits_{{i \in \left\lbrack d\right\rbrack }}\left| {{\sigma }_{i}\left( {{J}_{G}\left( z\right) }\right) }\right| \geq m > 0$ . Since $m$ is strictly positive, this in particular means that the Jacobian is full rank everywhere. + +Remark 2: Note, we do not require that $G$ is layerwise invertible (i.e. that each map from one layer to the next is invertible). If that were the case, at least in the limit $\beta \rightarrow 0$ , the existence of an inference encoder of comparable size to $G$ would be rather obvious: we simply invert each layer one at a time. This is important, as many architectures based on convolutions perform operations which increase the dimension (i.e. map from a lower to a higher dimensional space), followed by pooling (which decreases the dimension). Nevertheless, it has been observed that these architectures are invertible in practice: (Lipton and Tripathi, 2017) manage to get almost ${100}\%$ success at inverting an off-the-shelf trained model, thus justifying this assumption. + +Assumption 2 (Smoothness).
There exists a finite positive constant $M > 0$ such that: + +$$ +\forall {z}_{1},{z}_{2} \in {\mathbb{R}}^{d},\begin{Vmatrix}{G\left( {z}_{1}\right) - G\left( {z}_{2}\right) }\end{Vmatrix} \leq M \cdot \begin{Vmatrix}{{z}_{1} - {z}_{2}}\end{Vmatrix} +$$ + +Moreover, we will assume that $G$ has continuous partial derivatives up to order 3 at every $z \in {\mathbb{R}}^{d}$ and that the derivatives are bounded by finite positive constants ${M}_{2}$ and ${M}_{3}$ as + +$$ +\forall z \in {\mathbb{R}}^{d},\;{\begin{Vmatrix}{\nabla }^{2}G\left( z\right) \end{Vmatrix}}_{op} \leq {M}_{2} < \infty ,\;{\begin{Vmatrix}{\nabla }^{3}G\left( z\right) \end{Vmatrix}}_{op} \leq {M}_{3} < \infty +$$ + +Remark 3: This is a benign assumption, stating that the map $G$ is smooth up to third order. The infinitesimal version of this means that the largest magnitude of the singular values of the Jacobian at any point is upper bounded by $M$ , that is $\forall z \in {\mathbb{R}}^{d},\mathop{\max }\limits_{{i \in \left\lbrack d\right\rbrack }}\left| {{\sigma }_{i}\left( {{J}_{G}\left( z\right) }\right) }\right| = {\begin{Vmatrix}{J}_{G}\left( z\right) \end{Vmatrix}}_{op} \leq M < \infty .$ + +Remark 4: A neural network with activation function $\sigma$ will satisfy this assumption when $\sigma : \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz, and $\mathop{\max }\limits_{a}\left| {{\sigma }^{\prime }\left( a\right) }\right|$ and $\mathop{\max }\limits_{a}\left| {{\sigma }^{\prime \prime }\left( a\right) }\right|$ are finite. + +Assumption 3 (Centering). The map $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ satisfies $G\left( 0\right) = 0$ . + +Remark 5: This assumption is for convenience of stating the bounds: we effectively need the "range" of the majority of the samples $x$ under the distribution of the generator. All the results can be easily restated by including a dependence on $\parallel G\left( 0\right) \parallel$ . + +Our main result is then stated below.
Throughout, the $\mathcal{O}\left( \cdot \right)$ notation hides dependence on the map constants, namely $m,M,{M}_{2},{M}_{3}$ . We will denote by ${d}_{\mathrm{{TV}}}\left( {p,q}\right)$ the total variation distance between the distributions $p,q$ . + +Theorem 1 (Main, invertible generator). Consider a VAE generator given by a latent Gaussian with noise parameter ${\beta }^{2}$ and generator $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ satisfying Assumptions 1 and 2, which has $N$ parameters and a differentiable activation function $\sigma$ . Then, for + +$$ +\beta \leq \mathcal{O}\left( \frac{1}{{d}^{1.5}\sqrt{\log \frac{d}{\epsilon }}}\right) \tag{1} +$$ + +there exists a deep latent Gaussian with ${N}^{\prime } = \mathcal{O}\left( {N \cdot \operatorname{poly}\left( {d,\frac{1}{\beta },\frac{1}{\epsilon }}\right) }\right)$ parameters and activation functions $\left\{ {\sigma ,{\sigma }^{\prime },\rho }\right\}$ , where $\rho \left( x\right) = {x}^{2}$ , such that with probability $1 - \exp \left( {-\mathcal{O}\left( d\right) }\right)$ over a sample $x$ from the VAE generator, the distribution $q\left( {z \mid x}\right)$ of the deep latent Gaussian on input $x$ satisfies ${d}_{TV}\left( {q\left( {z \mid x}\right) ,p\left( {z \mid x}\right) }\right) \leq \epsilon$ . + +Remark 6: The addition of $\rho$ to the activation functions is for convenience of stating the bound. Using standard universal approximation techniques, it can be simulated using any other smooth activation. + +§ 2.2. LOWER BOUNDS FOR NON-BIJECTIVE GENERATORS + +We now discuss the case when the generative map $G$ is not bijective, showing an instance such that no small encoder corresponding to the posterior exists. The lower bound will be based on a reduction from the existence of one-way functions, a standard complexity assumption in theoretical computer science (more concretely, cryptography).
Precisely, we will start with the following form of the one-way-function conjecture: + +Conjecture 1 (Existence of one-way permutations, (Katz and Lindell, 2020)). There exists a bijection $f : \{ - 1,1{\} }^{d} \rightarrow \{ - 1,1{\} }^{d}$ computable by a Boolean circuit $\mathcal{C} : \{ - 1,1{\} }^{d} \rightarrow \{ - 1,1{\} }^{d}$ of size $\operatorname{poly}\left( d\right)$ , but for every $T\left( d\right) = \operatorname{poly}\left( d\right)$ and $\epsilon \left( d\right) = \frac{1}{\operatorname{poly}\left( d\right) }$ and circuit ${\mathcal{C}}^{\prime } : \{ - 1,1{\} }^{d} \rightarrow \{ - 1,1{\} }^{d}$ of size $T\left( d\right)$ it holds that $\mathop{\Pr }\limits_{{z \sim \{ \pm 1{\} }^{d}}}\left\lbrack {{\mathcal{C}}^{\prime }\left( {\mathcal{C}\left( z\right) }\right) = z}\right\rbrack \leq \epsilon \left( d\right) .$ + +In other words, there is a circuit of size polynomial in the input, s.t. for every polynomially sized invertor circuit (the two polynomials need not be the same: the invertor can be much larger, so long as it is polynomial), the invertor circuit succeeds on at most an inverse polynomial fraction of the inputs. Assuming this Conjecture, we show that there exist generators that do not have small encoders that accurately represent the posterior for most points $x$ . Namely: + +Theorem 2 (Main, non-invertible generator).
If Conjecture 1 holds, there exists a VAE generator $G : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ of size $\operatorname{poly}\left( d\right)$ with activation functions $\{ {sgn},\min ,\max \}$ , s.t. for every $\beta = o\left( {1/\sqrt{d}}\right)$ , every $T\left( d\right) = \operatorname{poly}\left( d\right)$ and every $\epsilon \left( d\right) = 1/\operatorname{poly}\left( d\right)$ , any encoder $E$ that can be represented by a deep latent Gaussian whose networks have a total number of parameters bounded by $T\left( d\right)$ , weights bounded by $W$ , activation functions that are $L$ -Lipschitz, and node outputs bounded by $M$ with probability $1 - \exp \left( {-d}\right)$ over a sample $x$ from $G$ , where $L,M,W = o\left( {\exp \left( {\operatorname{poly}\left( d\right) }\right) }\right)$ , satisfies: + +$$ +\mathop{\Pr }\limits_{{x \sim G}}\left\lbrack {{d}_{TV}\left( {E\left( {z \mid x}\right) ,p\left( {z \mid x}\right) }\right) \leq \frac{1}{10}}\right\rbrack \leq \epsilon \left( d\right) +$$ + +Thus, we show the existence of a generator for which no encoder of polynomial size reasonably approximates the posterior for even an inverse-polynomial fraction of the samples $x$ (under the distribution of the generator). + +Remark 7: The generator $G$ , though mapping from ${\mathbb{R}}^{d}$ to ${\mathbb{R}}^{d}$ , will be highly non-invertible. Perhaps counterintuitively, Conjecture 1 applies to bijections; the point, though, is that $G$ will be simulating a Boolean circuit, and in the process will give the same output on many inputs (more precisely, it will only depend on the signs of the inputs, rather than their values). + +Remark 8: The choice of activation functions $\{ \mathrm{{sgn}},\min ,\max \}$ is for convenience of stating the theorem. Using standard universal approximation results, similar results can be stated with other activation functions.
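Remark 7 can be made concrete with a toy, purely illustrative sgn-based map (not the paper's actual construction): the output depends only on the orthant of the latent, so entire orthants of latent space collide onto at most $2^d$ outputs, and the posterior of each $x$ spreads over a whole orthant.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
W = rng.standard_normal((d, d))

def G(z):
    # A one-layer sgn-activation "generator": depends only on the signs of z.
    return W @ np.sign(z)

# Two far-apart latents in the same orthant map to exactly the same output.
z1 = np.abs(rng.standard_normal(d))            # all-positive orthant
z2 = 10.0 * np.abs(rng.standard_normal(d))     # same orthant, far away
assert np.allclose(G(z1), G(z2))

# The range of G has at most 2^d points, each with a full orthant as pre-image.
outputs = {tuple(np.round(G(rng.standard_normal(d)), 8)) for _ in range(10000)}
print(len(outputs))  # at most 2**d = 16
```

The lower bound replaces this toy map with a network simulating a one-way Boolean permutation, so that representing the (orthant-supported) posterior would amount to inverting the permutation.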
+ +Remark 9: The restrictions on the Lipschitzness of the activations and the bounds on the weights and node outputs of $E$ are extremely mild, as they are allowed to be potentially exponential in $d$ , considering that even writing down a natural number in binary requires a logarithmic number of digits. + +§ 3. RELATED WORK + +On the empirical side, the impact of impoverished variational posteriors in VAEs (in particular, modeling the encoder as a Gaussian) has long been conjectured to be one of the (several) reasons for the fuzzy nature of samples in trained VAEs. (Zhao et al., 2017) provide recent evidence towards this conjecture. Invertibility of generative models in general (VAEs, GANs and normalizing flows), both as it relates to the hardness of fitting the model, and as it relates to the usefulness of having an invertible model, has been studied extensively: (Lipton and Tripathi, 2017) show that off-the-shelf trained GANs can be inverted with a near-100% success rate, despite the model not being encouraged to be invertible during training; (Dai and Wipf, 2019) propose an alternate training algorithm for VAEs that tries to remedy algorithmic problems during training VAEs when data lies on a lower-dimensional manifold; (Behrmann et al., 2020) show that trained normalizing flows, while being invertible by design, are just barely so: the learned models are extremely close to being singular. + +On the theoretical side, the most closely relevant work is (Lei et al., 2019). They provide an algorithm for inverting GAN generators with random weights and expanding layers. Their algorithm is layerwise: that is, each of the layers in their networks is invertible, and they invert the layers one at a time. This is distinctly not satisfied by architectures used in practice, which expand and shrink; a typical example is architectures based on convolutions and pooling.
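The invertibility notions discussed in this section can be probed numerically. The sketch below (for a made-up toy map, not any trained model) estimates the extreme singular values of the Jacobian at sampled points; by Remarks 1 and 3, the sampled minimum upper-bounds the best achievable $m$ of Assumption 1 and the sampled maximum lower-bounds $M$ of Assumption 2, and a minimum singular value near zero is precisely the near-singular behaviour reported for trained flows.

```python
import numpy as np

def jacobian_fd(f, z, h=1e-6):
    """Central finite-difference Jacobian of f : R^d -> R^d at z."""
    d = z.shape[0]
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (f(z + e) - f(z - e)) / (2 * h)
    return J

rng = np.random.default_rng(0)
d = 3
A = np.eye(d) + 0.3 * rng.standard_normal((d, d))
f = lambda z: A @ z + 0.2 * np.tanh(z)   # toy smooth map (illustrative only)

sigma_min, sigma_max = np.inf, 0.0
for _ in range(200):
    s = np.linalg.svd(jacobian_fd(f, rng.standard_normal(d)), compute_uv=False)
    sigma_min, sigma_max = min(sigma_min, s[-1]), max(sigma_max, s[0])

# sigma_min upper-bounds the true global m; sigma_max lower-bounds M.
print(sigma_min, sigma_max)
```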
The same paper also shows NP-hardness of inverting a general GAN, but crucially they assume the network $G$ is part of the input (their proof does not work otherwise). Our lower bound can be viewed as a "non-uniform complexity" (i.e. circuit complexity) analogue of this, since we are looking for a small neural network $E$ , as opposed to an efficient algorithm; crucially, however, $G$ is not part of the input (i.e. $G$ can be preprocessed for an unlimited amount of time). (Hand and Voroninski, 2018) provide similar guarantees for inverting GAN generators with random weights that satisfy layerwise invertibility, albeit via non-convex optimization of a certain objective. + +§ 4. CONCLUSION + +In this paper we initiated the first formal study of the effect of invertibility of the generator on the representational complexity of the encoder in variational autoencoders. We proved a dichotomy: invertible generators give rise to distributions for which the posterior can be approximated by an encoder not much larger than the generator. On the other hand, for non-invertible generators, the corresponding encoder may need to be exponentially larger. Our work is the first to connect the complexity of inference to invertibility, and there are many interesting avenues for further work.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/HhcKxFKluuP/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/HhcKxFKluuP/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..cb1dd9a727462269f307abf0a0abb8e131da5b5d --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/HhcKxFKluuP/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,453 @@ +## Discrete Denoising Flows + +## Anonymous Authors ${}^{1}$ + +## Abstract + +Discrete flow-based models are a recently proposed class of generative models that learn invertible transformations for discrete random variables. Since they do not require data dequantization and maximize an exact likelihood objective, they can be used in a straightforward manner for lossless compression. In this paper, we introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs). In contrast with other discrete flow-based models, our model can be trained locally without introducing gradient bias. We show that DDFs outperform Discrete Flows on modelling a toy example, binary MNIST and Cityscapes segmentation maps, as measured in log-likelihood. + +## 1. Introduction + +Due to their wide range of applications, flow-based generative models have been extensively studied in recent years (Rezende & Mohamed, 2015; Dinh et al., 2016). Research has mainly focused on modelling continuous data distributions, in which discretely stored data like audio or image data must be dequantized prior to modelling. However, two recent publications explore flow-based generative modelling of discrete distributions: Discrete Flows (Tran et al., 2019) for categorical random variables and Integer Discrete Flows (Hoogeboom et al., 2019) for ordinal discrete random variables.
Due to their discrete nature and exact likelihood objective, these discrete flow-based models can be used directly for lossless compression. + +Unlike other approaches that utilize generative models for lossless compression, discrete flow-based models are advantageous because they (i) enable efficient inference and (ii) can encode single data samples efficiently. Approaches that utilize the Variational Autoencoder (VAE) (Kingma & Welling, 2013) for lossless compression typically combine the model with bits-back coding (Hinton & Van Camp, 1993), which is effective for compressing full data sets but inefficient for encoding single samples. Autoregressive models such as PixelCNN (Oord et al., 2016) can also be used for lossless compression; however, they are generally expensive to decode. + +Unfortunately, both Discrete Flows and Integer Discrete Flows come with the drawback that each of their layers contains a quantization operation. When optimizing them with the backpropagation algorithm, the gradient of the quantization operation has to be estimated with a biased gradient estimator, which may compromise their performance. + +To improve training efficiency, reduce gradient bias and improve overall performance, we introduce a new discrete flow-based generative model for categorical random variables, Discrete Denoising Flows (DDFs). DDFs can be trained without introducing gradient bias. They further come with the positive side effect that training is computationally very efficient. This efficiency results from the local training algorithm of DDFs, which trains only one layer at a time instead of all at once. We demonstrate that Discrete Denoising Flows outperform Discrete Flows in terms of log-likelihood. + +## 2. Related Work & Background + +This section first introduces normalizing flows as well as discrete flows. It then goes on to describe alternative approaches that utilize generative models for lossless compression.
+ +Normalizing Flows The fundamental idea of flow-based modelling is to express a complicated probability distribution as a transformation of a simple probability distribution. Given the two continuous random variables $X$ and $Z$ and the invertible and differentiable transformation $T : \mathcal{Z} \rightarrow \mathcal{X}$ , $X$ ’s probability distribution ${p}_{X}\left( \cdot \right)$ can be written in terms of $Z$ ’s probability distribution ${p}_{Z}\left( \cdot \right)$ as + +$$ +{p}_{X}\left( x\right) = {p}_{Z}\left( z\right) {\left| \det {J}_{T}\left( z\right) \right| }^{-1}\;\text{ with }z = {T}^{-1}\left( x\right) , \tag{1} +$$ + +using the change of variables formula. The Jacobian determinant acts as a normalizing term and ensures that ${p}_{X}\left( \cdot \right)$ is a valid probability distribution. The distribution ${p}_{Z}\left( \cdot \right)$ is referred to as the base distribution and the transformation $T$ as a normalizing flow. A composition of invertible and differentiable functions can be viewed as a repeated application of formula 1. Therefore, such compositions are also referred to as normalizing flows throughout the literature. + +Discrete Flows In the case of two discrete random variables $X$ and $Z$ , the change of variables formula for continuous random variables given in formula 1 simplifies to + +$$ +{p}_{X}\left( x\right) = {p}_{Z}\left( z\right) \;\text{ with }\;z = {T}^{-1}\left( x\right) \tag{2} +$$ + +Normalization with the Jacobian determinant is no longer necessary, as it corrects for a change in volume. Discrete distributions, however, have no volume since they only have support on a discrete set of points. As pointed out by Papamakarios et al. (2019), discrete flow-based models can only permute probabilities in the probability tensor that represents the distribution of a random variable. However, Van den Berg et al.
(2020) showed that this is not as harmful in terms of modelling flexibility as originally thought. + +Discrete Flows (DFs) (Tran et al., 2019) are discrete flow-based models that learn transformations on categorical random variables. The authors define a bijective transformation $T : \mathcal{Z} \rightarrow \mathcal{X}$ with $\mathcal{X} = \mathcal{Z} = \{ 1,\ldots , K{\} }^{D}$ in the form of a bipartite coupling layer (Dinh et al., 2016). The coupling layer input $x$ is partitioned into two sets s.t. $x = \left\lbrack {{x}_{a},{x}_{b}}\right\rbrack$ and then transformed into an output $z = \left\lbrack {{z}_{a},{z}_{b}}\right\rbrack$ with + +$$ +{z}_{a} = {x}_{a} \tag{3} +$$ + +$$ +{z}_{b} = \left( {{s}_{{\theta }_{1}}\left( {x}_{a}\right) \circ {x}_{b} + {t}_{{\theta }_{2}}\left( {x}_{a}\right) }\right) {\;\operatorname{mod}\;K}, +$$ + +where $\circ$ denotes element-wise multiplication. Every element of the scale ${s}_{{\theta }_{1}}\left( \cdot \right)$ and translation ${t}_{{\theta }_{2}}\left( \cdot \right)$ can take on values in $\{ 1,\ldots , K\}$ . Note that the transformation is only invertible if each element of the scale is coprime to the number of classes $K$ . Scale ${s}_{{\theta }_{1}}\left( \cdot \right)$ and translation ${t}_{{\theta }_{2}}\left( \cdot \right)$ are modeled by a neural network with parameters ${\theta }_{1,2}$ . To obtain discrete scale and translation values, the authors utilize the argmax operator combined with a relaxed softmax as a gradient estimator (Jang et al., 2016) to enable backpropagation. This introduces bias into the model parameter gradients, which harms optimization. Note that this describes the bipartite version and not the autoregressive version of DFs. + +Generative Models for Lossless Compression The Variational Autoencoder (VAE) (Kingma & Welling, 2013) can be utilized for lossless compression by discretizing the continuous latent vector and applying bits-back coding (Hinton & Van Camp, 1993).
Recent methods that work according to this approach include Bits-Back with ANS (Townsend et al., 2019a), Bit-Swap (Kingma et al., 2019) and HiLLoC (Townsend et al., 2019b). These methods obtain performances close to the negative ELBO for compressing full datasets. However, when encoding a single data sample they are rather inefficient because the auxiliary bits needed for bits-back coding cannot be amortized across many samples. The same problem arises, amplified by the use of multiple latent variables, when local bits-back coding is used in normalizing flows (Ho et al., 2019). In this case, encoding a single image would require more bits than the original image. Mentzer et al. (2019) utilize a VAE with deterministic encoder to transform a data sample into a set of discrete multiscale latent vectors. Although this method does not require bits-back coding, it optimizes only a lower bound on the likelihood instead of the likelihood directly. Another generative model that is well suited for lossless compression is the PixelCNN (Oord et al., 2016). PixelCNN organizes the pixels of an image as a sequence and predicts the distribution of a pixel conditioned on all previous pixels. Consequently, drawing samples from PixelCNN requires multiple network evaluations and is very costly. Nevertheless, PixelCNN achieves state-of-the-art performances in lossless compression.

## 3. Method: Discrete Denoising Flows

In this section, we introduce Discrete Denoising Flows. Like other flow-based models, DDFs consist of several bipartite coupling layers that are easily invertible. These so-called denoising coupling layers are embedded in an architecture that factors out parts of the input vector at regular intervals.

### 3.1.
Denoising Coupling Layer

Complying with the change of variables formula 2, we define the denoising coupling layer as an invertible transformation $T : \mathcal{Z} \rightarrow \mathcal{X}$ between two categorical variables $X$ and $Z$ with domains $\mathcal{X} = \mathcal{Z} = \{ 1,\ldots , K{\} }^{D}$ . The inverse ${T}^{-1}$ , which represents the forward pass during training, is given as

$$
{z}_{a} = {x}_{a} \tag{4}
$$

$$
{z}_{b} = \operatorname{cond\_perm}\left( {{x}_{b} \mid n\left( {x}_{a}\right) }\right)
$$

That is, the input $x \in \{ 1,\ldots , K{\} }^{D}$ is partitioned into two sets such that $x = \left\lbrack {{x}_{a},{x}_{b}}\right\rbrack$ and ${x}_{a} \in \{ 1,\ldots , K{\} }^{d}$ . The first part stays the same while the second part is transformed conditioned on the first part. For this transformation, we use a neural network $n$ as well as the conditional permutation operation $\operatorname{cond\_perm}\left( {\cdot \mid \cdot }\right)$ .

The conditional permutation operation is the core component of the denoising coupling layer. For notational clarity, we define the variable $\theta = n\left( {x}_{a}\right)$ with $\theta \in {\mathbb{R}}^{\left( {D - d}\right) \times K}$ as the output of the neural network $n$ . The conditional permutation operation is then defined as

$$
\operatorname{cond\_perm}\left( {{x}_{{b}_{i}} \mid {\theta }_{i}}\right) = {\operatorname{perm}}_{{\theta }_{i}}\left( {x}_{{b}_{i}}\right) \tag{5}
$$

$$
{\operatorname{perm}}_{{\theta }_{i}} = \left( \begin{matrix} 1 & 2 & \ldots & K \\ \operatorname{argsort\_top}_{h}{\left( {\theta }_{i}\right) }_{1} & \operatorname{argsort\_top}_{h}{\left( {\theta }_{i}\right) }_{2} & \ldots & \operatorname{argsort\_top}_{h}{\left( {\theta }_{i}\right) }_{K} \end{matrix}\right) \tag{6}
$$

per dimension $i \in \{ 1,\ldots , D - d\}$ , where we used Cauchy’s two-line notation for permutations to define the permutation ${\operatorname{perm}}_{{\theta }_{i}}$ : the second row lists the entries of $\operatorname{argsort\_top}_{h}\left( {\theta }_{i}\right)$ . We also introduce the $\operatorname{argsort\_top}_{h}\left( \cdot \right)$ operation with the additional hyperparameter $h \in \{ 1,\ldots , K\}$ .
This operation acts similarly to the regular argsort operation, with the difference that it only sorts the top $h$ largest elements while leaving the remaining elements in their previous positions. Figure 1 illustrates this functionality. The intuition behind the operation is that only the most predictable classes are permuted, leaving more of the structure intact than an entire argsort. Also, observe that in the binary case $K = 2$ with $h = K$ , argsort and $\operatorname{argsort\_top}_{h}$ are equivalent.

![01963e49-94e9-72be-b269-94be492dc7a6_2_220_201_564_410_0.jpg](images/01963e49-94e9-72be-b269-94be492dc7a6_2_220_201_564_410_0.jpg)

Figure 1. Functionality of $\operatorname{argsort\_top}_{h}\left( {\theta }_{i}\right)$ illustrated for an example ${\theta }_{i}$ with number of classes $K = 5$ .

The conditional permutation operation is easily invertible as

$$
\operatorname{cond\_perm}_{i}^{-1}\left( {{x}_{{b}_{i}} \mid {\theta }_{i}}\right) = {\operatorname{perm}}_{{\theta }_{i}}^{-1}\left( {x}_{{b}_{i}}\right) \tag{7}
$$

$$
{\operatorname{perm}}_{{\theta }_{i}}^{-1} = \left( \begin{matrix} \operatorname{argsort\_top}_{h}{\left( {\theta }_{i}\right) }_{1} & \operatorname{argsort\_top}_{h}{\left( {\theta }_{i}\right) }_{2} & \ldots & \operatorname{argsort\_top}_{h}{\left( {\theta }_{i}\right) }_{K} \\ 1 & 2 & \ldots & K \end{matrix}\right) \tag{8}
$$

per dimension $i \in \{ 1,\ldots , D - d\}$ . Using this definition, we can write the transformation $T$ representing the denoising coupling layer at inference time as

$$
{x}_{a} = {z}_{a} \tag{9}
$$

$$
{x}_{b} = \operatorname{cond\_perm}^{-1}\left( {{z}_{b} \mid n\left( {z}_{a}\right) }\right)
$$

### 3.2. Training Denoising Discrete Flows

For training a denoising coupling layer, we simply train a neural network $n$ to predict ${x}_{b}$ from ${x}_{a}$ . To this end, we use the mean cross-entropy loss between $n\left( {x}_{a}\right)$ and ${x}_{b}$ as our objective function. After training, the fixed neural network $n$ can be employed in a denoising coupling layer.
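To make the preceding definitions concrete, here is a minimal NumPy sketch of $\operatorname{argsort\_top}_h$ and the conditional permutation, under our reading of the two-line notation: the most predicted classes are renamed to the smallest class values. Classes are indexed $0,\ldots,K-1$, and a fixed array `theta` stands in for the network output $n(x_a)$; this is an illustration, not the authors' code.

```python
import numpy as np

def argsort_top_h(theta, h):
    """Permutation of {0, ..., K-1}: the positions holding the h largest
    entries of theta are re-ordered by descending value; every other
    position keeps its previous class index."""
    K = len(theta)
    perm = np.arange(K)
    top = np.sort(np.argsort(theta)[-h:])       # positions of the h largest entries
    perm[top] = top[np.argsort(-theta[top])]    # sort those positions, descending
    return perm

def cond_perm(x_b, theta, h):
    """Map each x_b_i to its slot in perm_theta_i; for h = K the most
    predicted class is mapped to class value 0."""
    return np.array([int(np.argmax(argsort_top_h(t, h) == x))
                     for x, t in zip(x_b, theta)])

def cond_perm_inv(z_b, theta, h):
    """Inverse: look the slot back up in the permutation."""
    return np.array([argsort_top_h(t, h)[z] for z, t in zip(z_b, theta)])

K, h = 5, 5
theta = np.array([[0.1, 0.5, 0.2, 0.9, 0.3],     # most predicted class: 3
                  [0.7, 0.1, 0.1, 0.05, 0.05]])  # most predicted class: 0
x_b = np.array([3, 0])                           # x_b equals the argmax of theta
z_b = cond_perm(x_b, theta, h)
assert np.array_equal(z_b, [0, 0])               # predictable values land on class 0
assert np.array_equal(cond_perm_inv(z_b, theta, h), x_b)
```

The paper's Figure 1 fixes the exact convention for the partially sorted permutation; the sketch above only needs the two properties used in the text, namely invertibility and "argmax maps to the smallest class value" when $h = K$.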
When we apply the conditional permutation operation in

$$
{z}_{b} = \operatorname{cond\_perm}\left( {{x}_{b} \mid n\left( {x}_{a}\right) }\right) ,
$$

the more the argmax of $n\left( {x}_{a}\right)$ resembles ${x}_{b}$ , the more likely it is for a value in ${x}_{b}$ to be switched to one of the smaller class values. Consequently, given that the argmax of $n\left( {x}_{a}\right)$ somewhat resembles ${x}_{b}$ , the outcome of the conditional permutation operation, ${z}_{b}$ , is more likely to contain smaller class values than ${x}_{b}$ . This makes the value of the random variable $Z$ more predictable than the value of the random variable $X$ , when looking at those dimensions in isolation. In other words, we have decorrelated the random variable $X$ into the random variable $Z$ . As a direct consequence, modelling the distribution ${p}_{Z}\left( \cdot \right)$ with a $D$ -dimensional i.i.d. categorical distribution will result in a smaller mismatch than it would for the distribution ${p}_{X}\left( \cdot \right)$ . To give some more intuition on this functionality, we include an illustrating example in appendix A and provide an additional argument for the case $K = 2$ in appendix B.

Algorithm 1 transform $\left( {\operatorname{ddf}, S, x}\right)$ . Transforms $x \mapsto z$

---

Input: $x$ , ddf // ddf is a list of classifiers $\left\lbrack {{n}_{1},{n}_{2},\ldots }\right\rbrack$

Let $z = x$

for $n$ , shuffle in ddf, $S$ do

  Split $\left\lbrack {{z}_{a},{z}_{b}}\right\rbrack = z$

  ${z}_{b} = \operatorname{cond\_perm}\left( {{z}_{b} \mid \theta = n\left( {z}_{a}\right) }\right)$

  Combine $z = \left\lbrack {{z}_{a},{z}_{b}}\right\rbrack$

  $z = \operatorname{shuffle}\left( z\right)$

end for

return $z$

---

Algorithm 2 optimize $\left( {{n}_{\text{new }}, z}\right)$ . Optimize a new layer.
---

Input: ${n}_{\text{new }}, z$ // ${n}_{\text{new }}$ is a pixel-wise classifier

Split $\left\lbrack {{z}_{a},{z}_{b}}\right\rbrack = z$

Optim. $\log \mathcal{C}\left( {{z}_{b} \mid \theta = {n}_{\text{new }}\left( {z}_{a}\right) }\right)$ // Equiv. to cross-entropy

---

Algorithm 3 Training DDFs

---

Input: number of layers $L$

ddf $= \left\lbrack \right\rbrack$ // create DDF with 0 layers

Init $S = \left\lbrack {{\operatorname{shuffle}}_{1},\ldots ,{\operatorname{shuffle}}_{L}}\right\rbrack$ // Init $L$ shuffling layers

for $i = 1,\ldots , L$ do

  Init ${n}_{\text{new }}$ // new classifier

  while ${n}_{\text{new }}$ not converged do

    Sample $x \sim$ Data

    $z = \operatorname{transform}\left( {\operatorname{ddf}, S, x}\right)$

    optimize $\left( {{n}_{\text{new }}, z}\right)$

  end while

  ddf.append $\left( {n}_{\text{new }}\right)$

end for

return ddf

---

Shuffling and Splitpriors Algorithms 1, 2 and 3 describe the training process of DDFs. Note that only one denoising coupling layer is trained at a time. Moreover, we utilize invertible shuffle operations such that the input is partitioned differently in each coupling layer. Note that instead of propagating the full input vector $x$ through all layers of the DDF, we factor out parts of the input vector at regular intervals and model these parts conditioned on the other parts, following the splitprior approach in (Dinh et al., 2016; Kingma & Dhariwal, 2018). As a result, the following coupling layers operate on lower-dimensional data. Not only is this more efficient, but it also allows for some additional dependencies between parts of the output vector $z$ .

![01963e49-94e9-72be-b269-94be492dc7a6_3_199_190_608_302_0.jpg](images/01963e49-94e9-72be-b269-94be492dc7a6_3_199_190_608_302_0.jpg)

Figure 2. Qualitative results for (a) Discrete Flows and (b) Discrete Denoising Flows on the quantized eight Gaussians toy data set.
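For binary data ($K = 2$, where the conditional permutation reduces to "flip $z_b$ wherever class 1 is the more predicted class"), the layer-wise training procedure of Algorithms 1-3 can be sketched end to end. This is a hypothetical simplification: a counting-based conditional classifier stands in for the neural network $n$ and its gradient training, but the local, one-layer-at-a-time structure and the shuffling between layers are the same.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, L, N = 2, 4, 3, 500            # classes, dimension, layers, samples

# Toy data with dependent halves: x_b is a noisy copy of x_a.
half_a = rng.integers(0, K, size=(N, D // 2))
half_b = half_a ^ (rng.random((N, D // 2)) < 0.1)
data = np.concatenate([half_a, half_b], axis=1).astype(int)

def fit_classifier(z):
    """Counting-based stand-in for n: per value of z_a, per-dimension class counts."""
    d = z.shape[1] // 2
    table = {}
    for row in z:
        scores = table.setdefault(tuple(row[:d]), np.zeros((d, K)))
        scores[np.arange(d), row[d:]] += 1
    return table

def coupling(row, table):
    """Binary denoising coupling: flip z_b where class 1 is the more predicted
    class. For K = 2 this map is its own inverse."""
    d = len(row) // 2
    theta = table.get(tuple(row[:d]), np.ones((d, K)))
    out = row.copy()
    out[d:] = np.where(theta[:, 1] > theta[:, 0], 1 - row[d:], row[d:])
    return out

def transform(x, ddf, shuffles):                      # Algorithm 1
    z = x.copy()
    for table, sh in zip(ddf, shuffles):
        z = coupling(z, table)[sh]
    return z

def inverse_transform(z, ddf, shuffles):
    x = z.copy()
    for table, sh in reversed(list(zip(ddf, shuffles))):
        x = coupling(x[np.argsort(sh)], table)        # unshuffle, then self-inverse flip
    return x

# Algorithm 3: train one layer at a time on the already-transformed data.
shuffles = [rng.permutation(D) for _ in range(L)]
ddf = []
for _ in range(L):
    z = np.array([transform(row, ddf, shuffles) for row in data])
    ddf.append(fit_classifier(z))                     # Algorithm 2 (counting instead of SGD)

z = np.array([transform(row, ddf, shuffles) for row in data])
assert all(np.array_equal(inverse_transform(r, ddf, shuffles), row)
           for r, row in zip(z, data))
assert z.mean() <= data.mean()   # z is pushed towards class 0, i.e. made more predictable
```

The final assertion reflects the decorrelation argument of Section 3.2: each layer only renames the conditionally more likely class in the transformed half to 0, so the transformed data is at least as skewed towards the smaller class value as the input.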
## 4. Experiments

In this section we explore how the compression rate in bits per dimension (BPD) of Discrete Denoising Flows compares to Discrete Flows on three different data sets. Each experiment was conducted at least three times; we show the average results as well as the standard deviation in Table 1. All experimental details can be found in appendix C. In each experiment, we use equally sized neural networks in the coupling layers of DFs and DDFs to ensure comparability.
| Data set | DF | DDF (ours) |
| --- | --- | --- |
| 8 Gaussians | ${5.05} \pm {0.05}$ | ${4.58} \pm {0.02}$ |
| Bin. MNIST | ${0.17} \pm {0.01}$ | $\mathbf{0.16} \pm {0.01}$ |
| Cityscapes | ${0.65} \pm {0.03}$ | $\mathbf{0.58} \pm {0.03}$ |
Table 1. Comparison of achieved BPD of Discrete Flow (DF) and Discrete Denoising Flow (DDF) per data set.

Eight Gaussians As a first experiment, we train DDFs and DFs on a two-dimensional toy data set also used by Tran et al. (2019). This data set is a mixture of Gaussians with 8 means uniformly distributed around a circle and discretized into 91 bins (i.e. $K = {91}$ classes). Since the data is two-dimensional, we model it with a single coupling layer per model and set $h = K$ for the DDF coupling layer. For the 2D data, no splitpriors are used, because they alone would already make the model universal and would obscure how well the flow itself performs. As apparent from the qualitative results in Figure 2 as well as the achieved BPD given in Table 1, DDFs outperform DFs.

Binary MNIST In a second experiment, we train both DFs and DDFs on the binarized MNIST data set. Since the data set has $K = 2$ classes, we have $h = 2$ for the DDF coupling layers. The samples given in Figure 3 and the achieved BPDs in Table 1 show that DDFs outperform DFs.

Cityscapes To test the performance of DDFs on image-type data, we use an 8-class version of the Cityscapes data set (Cordts et al., 2016) modified by Hoogeboom et al. (2021). This data set contains ${32} \times {64}$ segmentation maps; samples are given in Figure 4c. Since we use multiple coupling layers in this experiment and have a data set with a number of classes $K > 2$ , there is a trade-off between permuting classes and maintaining structure in each coupling layer. Therefore, after performing a grid search, we set $h = 4$ for all DDF coupling layers. From the samples in Figure 4 and the achieved BPD rates in Table 1, we can see that DDFs outperform DFs for this data set as well.

![01963e49-94e9-72be-b269-94be492dc7a6_3_942_185_610_308_0.jpg](images/01963e49-94e9-72be-b269-94be492dc7a6_3_942_185_610_308_0.jpg)

Figure 3.
Qualitative results for (a) Discrete Flows and (b) Discrete Denoising Flows on the binarized MNIST data set.

![01963e49-94e9-72be-b269-94be492dc7a6_3_969_627_552_767_0.jpg](images/01963e49-94e9-72be-b269-94be492dc7a6_3_969_627_552_767_0.jpg)

Figure 4. Qualitative results for (a) Discrete Flows and (b) Discrete Denoising Flows on the Cityscapes data set. Figure (c) shows samples from the 8-class Cityscapes data set.

## 5. Conclusion

In this paper, we have introduced a new discrete flow-based generative model for categorical data distributions, Discrete Denoising Flows. We showed that our model outperforms Discrete Flows in terms of log-likelihood.

References

Berg, R. v. d., Gritsenko, A. A., Dehghani, M., Sønderby, C. K., and Salimans, T. IDF++: Analyzing and improving integer discrete flows for lossless compression. arXiv preprint arXiv:2006.12459, 2020.

Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. The Cityscapes dataset for semantic urban scene understanding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2016.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.

Hinton, G. E. and Van Camp, D. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5-13, 1993.

Ho, J., Lohn, E., and Abbeel, P. Compression with flows via local bits-back coding. arXiv preprint arXiv:1905.08500, 2019.

Hoogeboom, E., Peters, J., van den Berg, R., and Welling, M. Integer discrete flows and lossless compression. In Advances in Neural Information Processing Systems, pp. 12134-12144, 2019.

Hoogeboom, E., Nielsen, D., Jaini, P., Forré, P., and Welling, M. Argmax flows and multinomial diffusion: Towards non-autoregressive language models. arXiv preprint arXiv:2102.05379, 2021.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, 2017.

Jang, E., Gu, S., and Poole, B. Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

Kingma, F., Abbeel, P., and Ho, J. Bit-Swap: Recursive bits-back coding for lossless compression with hierarchical latent variables. In International Conference on Machine Learning, pp. 3408-3417. PMLR, 2019.

Mentzer, F., Agustsson, E., Tschannen, M., Timofte, R., and Gool, L. V. Practical full resolution learned lossless image compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10629-10638, 2019.

Oord, A. v. d., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., and Kavukcuoglu, K. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019.

Rezende, D. J. and Mohamed, S. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.

Townsend, J., Bird, T., and Barber, D. Practical lossless compression with latent variables using bits back coding. arXiv preprint arXiv:1901.04866, 2019a.

Townsend, J., Bird, T., Kunze, J., and Barber, D. HiLLoC: Lossless image compression with hierarchical latent variable models. arXiv preprint arXiv:1912.09953, 2019b.

Tran, D., Vafa, K., Agrawal, K., Dinh, L., and Poole, B.
Discrete flows: Invertible generative models of discrete data. In Advances in Neural Information Processing Systems, pp. 14692-14701, 2019.

## A. Denoising Coupling Layer: Example

Consider the distribution

$$
{P}_{{X}_{1}{X}_{2}}
$$
| ${x}_{1} \smallsetminus {x}_{2}$ | 0 | 1 |
| --- | --- | --- |
| 0 | 0.4 | 0.2 |
| 1 | 0.1 | 0.3 |
+ +If we were to directly model ${P}_{{X}_{1}{X}_{2}}$ with a factorized Bernoulli distribution + +$$ +{P}_{\text{model }}\left( x\right) = \operatorname{Bern}\left( {{x}_{1} \mid p}\right) \cdot \operatorname{Bern}\left( {{x}_{2} \mid q}\right) , +$$ + +the highest achievable log likelihood would be + +$$ +{\mathbb{E}}_{x \sim {P}_{{X}_{1}{X}_{2}}}\left\lbrack {{\log }_{2}{P}_{\text{model }}\left( x\right) }\right\rbrack +$$ + +$$ += {\mathbb{E}}_{x \sim {P}_{{X}_{1}{X}_{2}}}\left\lbrack {{\log }_{2}\left( {\operatorname{Bern}\left( {{x}_{1} \mid p}\right) \cdot \operatorname{Bern}\left( {{x}_{2} \mid q}\right) }\right) }\right\rbrack +$$ + +$$ +\approx - {1.97} +$$ + +with $p = {0.4}$ and $q = {0.5}$ . Suppose now we have trained a neural net $n$ to predict ${x}_{2}$ from ${x}_{1}$ , such that $\operatorname{argmax}n\left( {x}_{1}\right) = {x}_{2}$ . Using $n$ in a denoising coupling layer ${T}^{-1} : \mathcal{X} \rightarrow \mathcal{Z}$ as defined in equation 4 with $K = 2$ and implicitly $h = 2$ , we obtain the distribution ${P}_{{Z}_{1}{Z}_{2}}$ . + +$$ +{P}_{{Z}_{1}{Z}_{2}} +$$ + +
| ${z}_{1} \smallsetminus {z}_{2}$ | 0 | 1 |
| --- | --- | --- |
| 0 | 0.4 | 0.2 |
| 1 | 0.3 | 0.1 |
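The two log-likelihood values quoted in this example can be checked numerically; the optimal factorized Bernoulli parameters are simply the marginals of the joint table. A quick NumPy verification:

```python
import numpy as np

def H2(p):
    """Binary entropy in bits."""
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_factorized_ll(P):
    """Highest expected log2-likelihood of a 2x2 joint table P under a
    factorized Bernoulli model; the optimal parameters are the marginals."""
    return -H2(P[1, :].sum()) - H2(P[:, 1].sum())

P_X = np.array([[0.4, 0.2], [0.1, 0.3]])   # joint table of P_{X1 X2}
P_Z = np.array([[0.4, 0.2], [0.3, 0.1]])   # joint table of P_{Z1 Z2}
print(round(best_factorized_ll(P_X), 2))   # -1.97
print(round(best_factorized_ll(P_Z), 2))   # -1.85
```

The denoising coupling layer leaves the first marginal unchanged and skews the second, which is exactly where the 0.12-bit improvement comes from.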
When modeling ${P}_{{Z}_{1}{Z}_{2}}$ with a factorized Bernoulli distribution, the highest achievable log likelihood is

$$
{\mathbb{E}}_{x \sim {P}_{{X}_{1}{X}_{2}}}\left\lbrack {{\log }_{2}{P}_{\text{model }}\left( {{T}^{-1}\left( x\right) }\right) }\right\rbrack
$$

$$
= {\mathbb{E}}_{z \sim {P}_{{Z}_{1}{Z}_{2}}}\left\lbrack {{\log }_{2}\left( {\operatorname{Bern}\left( {{z}_{1} \mid p}\right) \cdot \operatorname{Bern}\left( {{z}_{2} \mid q}\right) }\right) }\right\rbrack
$$

$$
\approx - {1.85}
$$

with $p = {0.4}$ and $q = {0.7}$ . We can see that the factorized Bernoulli distribution now models ${P}_{{Z}_{1}{Z}_{2}}$ with a higher log-likelihood than it achieves on ${P}_{{X}_{1}{X}_{2}}$ .

## B. Denoising Coupling Layer: General Binary Case

In the following, we generalize the example given in appendix A to provide further insight into the functionality of the denoising coupling layer. Consider again a two-dimensional binary random variable $X$ with probability distribution ${P}_{{X}_{1}{X}_{2}}$ defined as

$$
{P}_{{X}_{1}{X}_{2}}
$$
| ${x}_{1} \smallsetminus {x}_{2}$ | 0 | 1 |
| --- | --- | --- |
| 0 | ${p}_{1}$ | ${p}_{2}$ |
| 1 | ${p}_{3}$ | ${p}_{4}$ |
+ +We train a neural network $n$ to predict ${x}_{2}$ from ${x}_{1}$ , such that $\operatorname{argmax}n\left( {x}_{1}\right) = {x}_{2}$ . Using $n$ in a denoising coupling layer as defined in equation 4 with $K = 2$ and implicitly $h = 2$ , we obtain the distribution ${P}_{{Z}_{1}{Z}_{2}}$ . + +$$ +{P}_{{Z}_{1}{Z}_{2}} +$$ + +
| ${z}_{1} \smallsetminus {z}_{2}$ | 0 | 1 |
| --- | --- | --- |
| 0 | $\max \left( {{p}_{1},{p}_{2}}\right)$ | $\min \left( {{p}_{1},{p}_{2}}\right)$ |
| 1 | $\max \left( {{p}_{3},{p}_{4}}\right)$ | $\min \left( {{p}_{3},{p}_{4}}\right)$ |
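Before the formal argument, the claim can be spot-checked numerically for random joint tables. This is a Monte-Carlo sanity check (entropies in bits), not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def H2(p):
    """Binary entropy in bits, clipped away from 0 and 1 for numerical safety."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

for _ in range(1000):
    p1, p2, p3, p4 = rng.dirichlet(np.ones(4))           # random 2x2 joint table
    ll_x = -H2(p1 + p2) - H2(p1 + p3)                    # optimal factorized fit in x-space
    ll_z = -H2(p1 + p2) - H2(max(p1, p2) + max(p3, p4))  # optimal factorized fit in z-space
    assert ll_z >= ll_x - 1e-9                           # z-space is never worse
```

The first entropy term is identical on both sides, so the check isolates exactly the comparison between $q$ and $\widehat{q}$ made in the derivation below.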
We now want to show that applying the denoising coupling layer results in a distribution that can be modelled more accurately with a factorized Bernoulli distribution

$$
{P}_{\text{model }}\left( x\right) = \operatorname{Bern}\left( {{x}_{1} \mid p}\right) \cdot \operatorname{Bern}\left( {{x}_{2} \mid q}\right) ,
$$

with parameters $0 \leq p, q \leq 1$ in $z$ -space than in $x$ -space. To this end, we demonstrate that the model log-likelihood for ${P}_{{Z}_{1}{Z}_{2}}$ is always higher than or equal to the model log-likelihood for ${P}_{{X}_{1}{X}_{2}}$ .

The model log-likelihood of ${P}_{{X}_{1}{X}_{2}}$ is given as

$$
{\mathbb{E}}_{x \sim {P}_{{X}_{1}{X}_{2}}}\left\lbrack {\log {P}_{\text{model }}\left( x\right) }\right\rbrack
$$

$$
= {\mathbb{E}}_{x}\left\lbrack {\log \left( {\operatorname{Bern}\left( {{x}_{1} \mid p}\right) }\right) + \log \left( {\operatorname{Bern}\left( {{x}_{2} \mid q}\right) }\right) }\right\rbrack
$$

$$
= \log \left( p\right) \cdot \left( {{p}_{1} + {p}_{2}}\right) + \log \left( {1 - p}\right) \cdot \left( {{p}_{3} + {p}_{4}}\right)
$$

$$
+ \log \left( q\right) \cdot \left( {{p}_{1} + {p}_{3}}\right) + \log \left( {1 - q}\right) \cdot \left( {{p}_{2} + {p}_{4}}\right)
$$

$$
= - \mathcal{H}\left( p\right) - \mathcal{H}\left( q\right)
$$

where the last line assumes the optimal choice $p = {p}_{1} + {p}_{2}$ and $q = {p}_{1} + {p}_{3}$ , and $\mathcal{H}$ denotes the binary entropy defined as $\mathcal{H}\left( p\right) = - p\log p - \left( {1 - p}\right) \log \left( {1 - p}\right)$ .
Analogously for ${P}_{{Z}_{1}{Z}_{2}}$ , using Bernoulli parameters $\widehat{p},\widehat{q}$ ,

$$
{\mathbb{E}}_{z \sim {P}_{{Z}_{1}{Z}_{2}}}\left\lbrack {\log {P}_{\text{model }}\left( z\right) }\right\rbrack
$$

$$
= \log \left( \widehat{p}\right) \cdot \left( {\max \left( {{p}_{1},{p}_{2}}\right) + \min \left( {{p}_{1},{p}_{2}}\right) }\right)
$$

$$
+ \log \left( {1 - \widehat{p}}\right) \cdot \left( {\max \left( {{p}_{3},{p}_{4}}\right) + \min \left( {{p}_{3},{p}_{4}}\right) }\right)
$$

$$
+ \log \left( \widehat{q}\right) \cdot \left( {\max \left( {{p}_{1},{p}_{2}}\right) + \max \left( {{p}_{3},{p}_{4}}\right) }\right)
$$

$$
+ \log \left( {1 - \widehat{q}}\right) \cdot \left( {\min \left( {{p}_{1},{p}_{2}}\right) + \min \left( {{p}_{3},{p}_{4}}\right) }\right)
$$

$$
= - \mathcal{H}\left( \widehat{p}\right) - \mathcal{H}\left( \widehat{q}\right)
$$

with the optimal choice $\widehat{p} = \max \left( {{p}_{1},{p}_{2}}\right) + \min \left( {{p}_{1},{p}_{2}}\right) = {p}_{1} + {p}_{2}$ , which is the same as before, and the new $\widehat{q} = \max \left( {{p}_{1},{p}_{2}}\right) + \max \left( {{p}_{3},{p}_{4}}\right)$ . Since $\mathcal{H}\left( p\right) = \mathcal{H}\left( \widehat{p}\right)$ , we only need to compare the terms containing $\widehat{q}$ and $q$ . We use that $- \mathcal{H}$ is monotonically increasing on the interval $\left\lbrack {{0.5},{1.0}}\right\rbrack$ and that $\mathcal{H}\left( p\right) = \mathcal{H}\left( {1 - p}\right)$ . Since $\left( {{p}_{1} + {p}_{3}}\right) + \left( {{p}_{2} + {p}_{4}}\right) = 1$ , one of the two sums is at least ${0.5}$ ; let this value be $a \geq {0.5}$ and the other value $b \leq {0.5}$ . From the symmetry we have:

$$
- \mathcal{H}\left( {{p}_{1} + {p}_{3}}\right) = - \mathcal{H}\left( a\right) = - \mathcal{H}\left( {{p}_{2} + {p}_{4}}\right) = - \mathcal{H}\left( b\right) .
$$

Next, observe that $\widehat{q} = \max \left( {{p}_{1},{p}_{2}}\right) + \max \left( {{p}_{3},{p}_{4}}\right) \geq \max \left( {{p}_{1} + {p}_{3},{p}_{2} + {p}_{4}}\right) = a$ and since $- \mathcal{H}$ is monotonically increasing on $\left\lbrack {{0.5},{1.0}}\right\rbrack$ it follows that:

$$
- \mathcal{H}\left( \widehat{q}\right) \geq - \mathcal{H}\left( a\right) = - \mathcal{H}\left( q\right) .
$$

Plugging this back into the previous equation gives us the desired inequality:

$$
{\mathbb{E}}_{z \sim {P}_{{Z}_{1}{Z}_{2}}}\left\lbrack {\log {P}_{\text{model }}\left( z\right) }\right\rbrack \geq {\mathbb{E}}_{x \sim {P}_{{X}_{1}{X}_{2}}}\left\lbrack {\log {P}_{\text{model }}\left( x\right) }\right\rbrack
$$

under the optimal choice of the Bernoulli parameters.

## C. Experimental Details

We train Discrete Flows and Discrete Denoising Flows on three data sets. In each experiment, both models use the same architecture to ensure comparability.

Throughout the experiments, we use the Adam optimizer, a learning rate of 0.001, and a batch size of 64. The base distribution is always a factorized categorical distribution with $K$ classes; $K$ varies between the data sets. Recall that in the coupling layers of both DFs and DDFs, the $D$-dimensional coupling layer input $x$ is split into two parts such that $x = \left\lbrack {{x}_{a},{x}_{b}}\right\rbrack$ , at a split index $d$ . For all of our experiments, we set $d = \frac{D}{2}$ .

Eight Gaussians In this experiment, we train a single-layer Discrete Flow and a single-layer Discrete Denoising Flow. For both models, we use an MLP consisting of 4 linear layers with 256 hidden units and ReLU activations to parameterize the coupling layer. This small model size is sufficient for modelling our 2D toy data set.

Since the model consists of only one coupling layer, preserving the structure of the input vector for later coupling layers is not relevant for the DDF model.
Consequently, we set the parameter $h$ in the denoising coupling layer to the number of classes in the data set, $K = {91}$ .

Binary MNIST In this experiment, we work with binary image data. Consequently, each DF and DDF coupling layer is parameterized by a DenseNet (Huang et al., 2017) consisting of 8 dense building blocks. For DDFs, modeling binary data implies that $h$ equals the number of classes $K$ , i.e. $h = K = 2$ .

We embed the coupling layers into a multi-layer architecture of coupling layers, splitpriors, and squeeze operations (Dinh et al., 2016). The squeeze changes the vector size from [channels $\times H \times W$ ] to $\left\lbrack {4 \cdot \text{channels} \times \frac{H}{2} \times \frac{W}{2}}\right\rbrack$ .

The overall model architecture consists of 2 blocks, each consisting of the following layers (in that order):

$$
\text{\{squeeze - coupling - splitprior - coupling - splitprior\}}
$$

Note that each coupling layer is preceded by a shuffling operation applied to the channels of the input vector. Further, the splitprior factors out the opposite part to the one the coupling layer transformed (so if the coupling layer transforms ${x}_{b} \mapsto {z}_{b}$ then ${z}_{a}$ is factored out).

Cityscapes In this experiment, we again deal with image-type data, this time with $K = 8$ classes. As in the previous experiment, we utilize a DenseNet (Huang et al., 2017) in the DF and DDF coupling layers. However, here it consists of 15 dense building blocks. We perform a grid search for the DDF parameter $h$ on $\{ 1,2,4,6,8\}$ and find that $h = 4$ leads to the best performance.

In analogy to the previous experiment, we embed the coupling layers in a multi-layer architecture of coupling layers, splitpriors and squeeze operations.
For this experiment, the model architecture consists of 3 building blocks with the following layers (in that order):

$$
\text{\{squeeze - coupling - splitprior - coupling - splitprior\}}
$$

Again, the splitprior factors out the opposite part to the one the coupling layer transformed (so if the coupling layer transforms ${x}_{b} \mapsto {z}_{b}$ then ${z}_{a}$ is factored out).
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/HhcKxFKluuP/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/HhcKxFKluuP/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2741482c5dfa421eea4ad10e49db041607ccd113
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/HhcKxFKluuP/Initial_manuscript_tex/Initial_manuscript.tex

§ DISCRETE DENOISING FLOWS

§ ANONYMOUS AUTHORS ${}^{1}$

§ ABSTRACT

Discrete flow-based models are a recently proposed class of generative models that learn invertible transformations for discrete random variables. Since they do not require data dequantization and maximize an exact likelihood objective, they can be used in a straightforward manner for lossless compression. In this paper, we introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs). In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias. We show that DDFs outperform Discrete Flows on modelling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.

§ 1. INTRODUCTION

Due to their wide range of applications, flow-based generative models have been extensively studied in recent years (Rezende & Mohamed, 2015; Dinh et al., 2016).
Research has mainly focused on modelling continuous data distributions, in which case discretely stored data such as audio or images must be dequantized prior to modelling. However, two recent publications explore flow-based generative modelling of discrete distributions: Discrete Flows (Tran et al., 2019) for categorical random variables and Integer Discrete Flows (Hoogeboom et al., 2019) for ordinal discrete random variables. Due to their discrete nature and exact likelihood objective, these discrete flow-based models can be used directly for lossless compression.

Unlike other approaches that utilize generative models for lossless compression, discrete flow-based models are advantageous because they (i) enable efficient inference and (ii) can encode single data samples efficiently. Approaches that utilize the Variational Autoencoder (VAE) (Kingma & Welling, 2013) for lossless compression typically combine the model with bits-back coding (Hinton & Van Camp, 1993), which is effective for compressing full data sets but inefficient for encoding single samples. Autoregressive models such as PixelCNN (Oord et al., 2016) can also be used for lossless compression; however, they are generally expensive to decode.

Unfortunately, both Discrete Flows and Integer Discrete Flows come with the drawback that each of their layers contains a quantization operation. When optimizing them with the backpropagation algorithm, the gradient of the quantization operation has to be estimated with a biased gradient estimator, which may compromise their performance.

To improve training efficiency, reduce gradient bias and improve overall performance, we introduce a new discrete flow-based generative model for categorical random variables, Discrete Denoising Flows (DDFs). DDFs can be trained without introducing gradient bias. They further come with the positive side effect that the training is computationally very efficient.
This efficiency results from the local training algorithm of DDFs, which trains only one layer at a time instead of all at once. We demonstrate that Discrete Denoising Flows outperform Discrete Flows in terms of log-likelihood.

§ 2. RELATED WORK & BACKGROUND

This section first introduces normalizing flows as well as discrete flows. It then goes on to describe alternative approaches that utilize generative models for lossless compression.

Normalizing Flows The fundamental idea of flow-based modelling is to express a complicated probability distribution as a transformation of a simple probability distribution. Given two continuous random variables $X$ and $Z$ and the invertible and differentiable transformation $T : \mathcal{Z} \rightarrow \mathcal{X}$ , $X$ ’s probability distribution ${p}_{X}\left( \cdot \right)$ can be written in terms of $Z$ ’s probability distribution ${p}_{Z}\left( \cdot \right)$ as

$$
{p}_{X}\left( x\right) = {p}_{Z}\left( z\right) {\left| \det {J}_{T}\left( z\right) \right| }^{-1}\;\text{ with }z = {T}^{-1}\left( x\right) , \tag{1}
$$

using the change of variables formula. The Jacobian determinant acts as a normalizing term and ensures that ${p}_{X}\left( \cdot \right)$ is a valid probability distribution. The distribution ${p}_{Z}\left( \cdot \right)$ is referred to as the base distribution and the transformation $T$ as a normalizing flow. A composition of invertible and differentiable functions can be viewed as a repeated application of formula 1. Therefore, such compositions are also referred to as normalizing flows throughout the literature.
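As a quick numerical illustration of formula 1 (a hypothetical one-dimensional affine flow, not taken from the paper): for $T(z) = \mu + \sigma z$ with a standard-normal base distribution, the change of variables recovers exactly the $\mathcal{N}(\mu, \sigma^2)$ density.

```python
import numpy as np

# Affine flow T(z) = mu + sigma * z applied to a standard-normal base.
mu, sigma = 3.0, 2.0

def p_Z(z):                      # base density: standard normal
    return np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

def p_X(x):                      # change of variables, formula 1
    z = (x - mu) / sigma         # z = T^{-1}(x)
    return p_Z(z) / sigma        # |det J_T(z)|^{-1} = 1 / sigma

# Sanity check against the closed-form N(mu, sigma^2) density.
x = 1.234
direct = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
assert np.isclose(p_X(x), direct)
```

The Jacobian factor $1/\sigma$ is exactly the normalizing term the discrete case drops, since discrete distributions have no volume to correct for.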
Discrete Flows In the case of two discrete random variables $X$ and $Z$ , the change of variables formula for continuous random variables given in formula 1 simplifies to

$$
{p}_{X}\left( x\right) = {p}_{Z}\left( z\right) \;\text{ with }\;z = {T}^{-1}\left( x\right) \tag{2}
$$

Normalization with the Jacobian determinant is no longer necessary as it corrects for a change in volume. Discrete distributions, however, have no volume since they only have support on a discrete set of points. As pointed out by Papamakarios et al. (2019), discrete flow-based models can only permute probabilities in the probability tensor that represents the distribution of a random variable. However, Van den Berg et al. (Berg et al., 2020) showed that this is not as harmful in terms of modeling flexibility as originally thought.

Discrete Flows (DFs) (Tran et al., 2019) are discrete flow-based models that learn transformations on categorical random variables. The authors define a bijective transformation $T : \mathcal{Z} \rightarrow \mathcal{X}$ with $\mathcal{X} = \mathcal{Z} = \{ 1,\ldots ,K{\} }^{D}$ in the form of a bipartite coupling layer (Dinh et al., 2016). The coupling layer input $x$ is partitioned into two sets s.t. $x = \left\lbrack {{x}_{a},{x}_{b}}\right\rbrack$ and then transformed into an output $z = \left\lbrack {{z}_{a},{z}_{b}}\right\rbrack$ with

$$
{z}_{a} = {x}_{a} \tag{3}
$$

$$
{z}_{b} = \left( {{s}_{{\theta }_{1}}\left( {x}_{a}\right) \circ {x}_{b} + {t}_{{\theta }_{2}}\left( {x}_{a}\right) }\right) {\;\operatorname{mod}\;K},
$$

where $\circ$ denotes element-wise multiplication. Every element of the scale ${s}_{{\theta }_{1}}\left( \cdot \right)$ and translation ${t}_{{\theta }_{2}}\left( \cdot \right)$ can take on values in $\{ 1,\ldots ,K\}$ . Note that the transformation is only invertible if each element of the scale is coprime to the number of classes $K$ .
The scale ${s}_{{\theta }_{1}}\left( \cdot \right)$ and translation ${t}_{{\theta }_{2}}\left( \cdot \right)$ are modeled by a neural network with parameters ${\theta }_{1,2}$ . To obtain discrete scale and translation values, the authors utilize the argmax operator combined with a relaxed softmax as a gradient estimator (Jang et al., 2016) to enable backpropagation. This introduces bias into the model parameter gradients, which harms optimization. Note that the example describes the bipartite version and not the autoregressive version of DFs.

Generative Models for Lossless Compression The Variational Autoencoder (VAE) (Kingma & Welling, 2013) can be utilized for lossless compression by discretizing the continuous latent vector and applying bits-back coding (Hinton & Van Camp, 1993). Recent methods that follow this approach include Bits-Back with ANS (Townsend et al., 2019a), Bit-Swap (Kingma et al., 2019) and HiLLoC (Townsend et al., 2019b). These methods obtain performances close to the negative ELBO when compressing full datasets. However, when encoding a single data sample they are rather inefficient, because the auxiliary bits needed for bits-back coding cannot be amortized across many samples. The same problem arises, in a scaled-up version due to multiple latent variables, when local bits-back coding is used in normalizing flows (Ho et al., 2019). In this case, encoding a single image would require more bits than the original image. Mentzer et al. (2019) utilize a VAE with a deterministic encoder to transform a data sample into a set of discrete multiscale latent vectors. Although this method does not require bits-back coding, it optimizes only a lower bound on the likelihood instead of the likelihood directly. Another generative model that is well suited for lossless compression is the PixelCNN (Oord et al., 2016).
PixelCNN organizes the pixels of an image as a sequence and predicts the distribution of a pixel conditioned on all previous pixels. Consequently, drawing samples from PixelCNN requires multiple network evaluations and is very costly. Nevertheless, PixelCNN achieves state-of-the-art performance in lossless compression.

## 3. Method: Discrete Denoising Flows

In this section, we introduce Discrete Denoising Flows. Like other flow-based models, DDFs consist of several bipartite coupling layers that are easily invertible. These so-called denoising coupling layers are embedded in an architecture that factors out parts of the input vector at regular intervals.

### 3.1. Denoising Coupling Layer

Complying with the change of variables formula 2, we define the denoising coupling layer as an invertible transformation $T : \mathcal{Z} \rightarrow \mathcal{X}$ between two categorical variables $X$ and $Z$ with domains $\mathcal{X} = \mathcal{Z} = \{ 1,\ldots ,K{\} }^{D}$ . The inverse ${T}^{-1}$ , which represents the forward pass during training, is given as

$$
{z}_{a} = {x}_{a} \tag{4}
$$

$$
{z}_{b} = \operatorname{cond\_perm}\left( {{x}_{b} \mid n\left( {x}_{a}\right) }\right) .
$$

That is, the input $x \in \{ 1,\ldots ,K{\} }^{D}$ is partitioned into two sets such that $x = \left\lbrack {{x}_{a},{x}_{b}}\right\rbrack$ and ${x}_{a} \in \{ 1,\ldots ,K{\} }^{d}$ . The first part stays the same while the second part is transformed conditioned on the first part. For this transformation, we use a neural network $n$ as well as the conditional permutation operation $\operatorname{cond\_perm}\left( {\cdot \mid \cdot }\right)$ .

The conditional permutation operation is the core component of the denoising coupling layer. For notational clarity, we define the variable $\theta = n\left( {x}_{a}\right)$ with $\theta \in {\mathbb{R}}^{\left( {D - d}\right) \times K}$ as the output of the neural network $n$ .
The conditional permutation operation is then defined as

$$
\operatorname{cond\_perm}\left( {{x}_{{b}_{i}} \mid {\theta }_{i}}\right) = {\operatorname{perm}}_{{\theta }_{i}}\left( {x}_{{b}_{i}}\right) \tag{5}
$$

$$
{\operatorname{perm}}_{{\theta }_{i}} = \left( \begin{matrix} 1\;2\;\ldots\;K \\ {\operatorname{argsort\_top}}_{h}\left( {\theta }_{i}\right) \end{matrix}\right) \tag{6}
$$

per dimension $i \in \{ 1,\ldots ,D - d\}$ , where we used Cauchy’s two-line notation for permutations to define the permutation ${\operatorname{perm}}_{{\theta }_{i}}$ . We also introduced the ${\operatorname{argsort\_top}}_{h}\left( \cdot \right)$ operation with the additional hyperparameter $h \in \{ 1,\ldots ,K\}$ . This operation acts similarly to the regular argsort operation, with the difference that it only sorts the top $h$ largest elements while leaving the remaining elements in their previous positions. Figure 1 illustrates this functionality. The intuition behind the operation is that only the most predictable classes are permuted, which leaves more of the structure intact than an entire argsort would. Also, observe that in the binary case $K = 2$ and for $h = K$ , argsort and ${\operatorname{argsort\_top}}_{h}$ are equivalent.

Figure 1. Functionality of ${\operatorname{argsort\_top}}_{h}\left( {\theta }_{i}\right)$ illustrated for an example ${\theta }_{i}$ with number of classes $K = 5$ .

The conditional permutation operation is easily invertible as

$$
{\operatorname{cond\_perm}}^{-1}\left( {{x}_{{b}_{i}} \mid {\theta }_{i}}\right) = {\operatorname{perm}}_{{\theta }_{i}}^{-1}\left( {x}_{{b}_{i}}\right) \tag{7}
$$

$$
{\operatorname{perm}}_{{\theta }_{i}}^{-1} = \left( \begin{matrix} {\operatorname{argsort\_top}}_{h}\left( {\theta }_{i}\right) \\ 1\;2\;\ldots\;K \end{matrix}\right) \tag{8}
$$

per dimension $i \in \{ 1,\ldots ,D - d\}$ .
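The following is a sketch of one plausible reading of ${\operatorname{argsort\_top}}_{h}$ and the conditional permutation, based on the description above and Figure 1; the exact labelling and tie-breaking conventions are our assumptions, not the authors' code, and 0-based labels are used:

```python
import numpy as np

# Sketch of one plausible reading of argsort_top_h and cond_perm
# (Eqs. 5-8), with 0-based class labels: the h classes with the
# largest scores in theta_i are re-labelled among themselves by
# decreasing score (most probable class gets the smallest of their
# labels); every other class keeps its own label. This interpretation
# of Figure 1 is an assumption, not the authors' implementation.

def argsort_top_h(theta, h):
    K = len(theta)
    sigma = np.arange(K)                # start from the identity
    top = np.argsort(theta)[::-1][:h]   # classes with the h largest scores
    sigma[top] = np.sort(top)           # decreasing score -> smallest slots
    return sigma                        # sigma[k] = new label of class k

def cond_perm(x_b, theta, h):
    # apply the permutation independently per dimension (Eq. 5)
    return np.array([argsort_top_h(t, h)[x] for x, t in zip(x_b, theta)])

def cond_perm_inv(z_b, theta, h):
    # invert each per-dimension permutation (Eqs. 7-8)
    out = []
    for z, t in zip(z_b, theta):
        sigma = argsort_top_h(t, h)
        inv = np.empty_like(sigma)
        inv[sigma] = np.arange(len(sigma))
        out.append(inv[z])
    return np.array(out)
```

Under this reading, classes outside the top $h$ are fixed points of the permutation, and the most probable class is always mapped to the smallest available label, which is the property the denoising argument in Section 3.2 relies on.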
Using this definition, we can write the transformation $T$ representing the denoising coupling layer at inference time as

$$
{x}_{a} = {z}_{a} \tag{9}
$$

$$
{x}_{b} = {\operatorname{cond\_perm}}^{-1}\left( {{z}_{b} \mid n\left( {z}_{a}\right) }\right) .
$$

### 3.2. Training Discrete Denoising Flows

For training a denoising coupling layer, we simply train a neural network $n$ to predict ${x}_{b}$ from ${x}_{a}$ . To this end, we use the mean cross-entropy loss between $n\left( {x}_{a}\right)$ and ${x}_{b}$ as our objective function. After training, the fixed neural network $n$ can be employed in a denoising coupling layer. When we apply the conditional permutation operation in

$$
{z}_{b} = \operatorname{cond\_perm}\left( {{x}_{b} \mid n\left( {x}_{a}\right) }\right) ,
$$

the more the argmax of $n\left( {x}_{a}\right)$ resembles ${x}_{b}$ , the more likely it is for a value in ${x}_{b}$ to be switched to one of the smaller class values. Consequently, given that the argmax of $n\left( {x}_{a}\right)$ somewhat resembles ${x}_{b}$ , the outcome ${z}_{b}$ of the conditional permutation operation is more likely to contain smaller class values than ${x}_{b}$ . This makes the value of the random variable $Z$ more predictable than the value of the random variable $X$ when looking at those dimensions in isolation. In other words, we have decorrelated the random variable $X$ into the random variable $Z$ . As a direct consequence, modelling the distribution ${p}_{Z}\left( \cdot \right)$ with a $D$ -dimensional i.i.d. categorical distribution will result in a smaller mismatch than it would for the distribution ${p}_{X}\left( \cdot \right)$ . To give some more intuition on this functionality, we include an illustrative example in Appendix A and provide an additional argument for the case $K = 2$ in Appendix B.

Algorithm 1 transform $\left( {\operatorname{ddf},S,x}\right)$ .
Transforms $x \mapsto z$

Input: $x$ , ddf // ddf is a list of classifiers $\left\lbrack {{n}_{1},{n}_{2},\ldots }\right\rbrack$
Let $z = x$
for $n$ , shuffle in ddf, $S$ do
  Split $\left\lbrack {{z}_{a},{z}_{b}}\right\rbrack = z$
  ${z}_{b} = \operatorname{cond\_perm}\left( {{z}_{b} \mid \theta = n\left( {z}_{a}\right) }\right)$
  Combine $z = \left\lbrack {{z}_{a},{z}_{b}}\right\rbrack$
  $z = \operatorname{shuffle}\left( z\right)$
end for
return $z$

Algorithm 2 optimize $\left( {{n}_{\text{new}},z}\right)$ . Optimize a new layer.

Input: ${n}_{\text{new}}$ , $z$ // ${n}_{\text{new}}$ is a pixel-wise classifier
Split $\left\lbrack {{z}_{a},{z}_{b}}\right\rbrack = z$
Optimize $\log \mathcal{C}\left( {{z}_{b} \mid \theta = {n}_{\text{new}}\left( {z}_{a}\right) }\right)$ // Equivalent to cross-entropy

Algorithm 3 Training DDFs

Input: number of layers $L$
ddf $= \left\lbrack \right\rbrack$ // create a DDF with 0 layers
Init $S = \left\lbrack {{\operatorname{shuffle}}_{1},\ldots ,{\operatorname{shuffle}}_{L}}\right\rbrack$ // init $L$ shuffling layers
for $i = 1,\ldots ,L$ do
  Init ${n}_{\text{new}}$ // new classifier
  while ${n}_{\text{new}}$ not converged do
    Sample $x \sim$ Data
    $z = \operatorname{transform}\left( {\operatorname{ddf},S,x}\right)$
    optimize $\left( {{n}_{\text{new}},z}\right)$
  end while
  ddf.append $\left( {n}_{\text{new}}\right)$
end for
return ddf

Shuffling and Splitpriors Algorithms 1, 2 and 3 describe the training process of DDFs. Note that only one denoising coupling layer is trained at a time. Moreover, we utilize invertible shuffle operations such that the input is partitioned differently in each coupling layer.
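Algorithms 1–3 can be illustrated end-to-end on toy binary data. In the sketch below, a conditional majority-vote table stands in for the pixel-wise classifier $n$, and for $K = 2$ (with $h = 2$) the conditional permutation reduces to an XOR with the predicted class; everything here is an illustrative stand-in rather than the authors' implementation:

```python
import numpy as np

# Toy sketch of Algorithms 1-3 for binary data (K = 2, h = 2). In the
# binary case the conditional permutation amounts to an XOR with the
# predicted class (relabelling so the predicted class becomes 0).
# A conditional majority-vote table replaces the neural classifier n;
# the greedy, layer-by-layer scheme mirrors Algorithm 3.

rng = np.random.default_rng(0)

def fit_layer(data):
    # Algorithm 2 analogue: learn to predict x_b from x_a. For each
    # value of the single bit x_a, store the majority value of x_b.
    x_a, x_b = data[:, 0], data[:, 1]
    table = np.zeros(2, dtype=int)
    for a in (0, 1):
        if (x_a == a).any():
            table[a] = int(x_b[x_a == a].mean() >= 0.5)
    return table

def apply_layer(data, table):
    # Algorithm 1 analogue: z_a = x_a, z_b = x_b XOR prediction,
    # followed by a fixed shuffle (here: swapping the two halves).
    x_a, x_b = data[:, 0], data[:, 1]
    return np.stack([x_b ^ table[x_a], x_a], axis=1)

def invert_layer(data, table):
    z_b, z_a = data[:, 0], data[:, 1]                 # undo the swap ...
    return np.stack([z_a, z_b ^ table[z_a]], axis=1)  # ... and the XOR

# Correlated binary data: the second bit usually copies the first.
first = rng.integers(0, 2, size=(2000, 1))
noise = (rng.random((2000, 1)) < 0.1).astype(int)
data = np.concatenate([first, first ^ noise], axis=1)

# Train a 2-layer DDF greedily, one coupling layer at a time.
layers, z = [], data
for _ in range(2):
    table = fit_layer(z)
    layers.append(table)
    z = apply_layer(z, table)
```

After the first layer, the predictable copy of the first bit is replaced by the rare noise bit, which is mostly zero and hence well modelled by an i.i.d. categorical base distribution; undoing the layers in reverse order recovers the data exactly.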
Note that instead of propagating the full input vector $x$ through all layers of the DDF, we factor out parts of the input vector at regular intervals and model these parts conditioned on the other parts, following the splitprior approach in (Dinh et al., 2016; Kingma & Dhariwal, 2018). As a result, the following coupling layers operate on lower-dimensional data. Not only is this more efficient, but it also allows for some additional dependencies between parts of the output vector $z$ .

Figure 2. Qualitative results for (a) Discrete Flows and (b) Discrete Denoising Flows on the quantized eight Gaussians toy data set.

## 4. Experiments

In this section we explore how the compression rate in bits per dimension (BPD) of Discrete Denoising Flows compares to Discrete Flows on three different data sets. Each experiment was conducted at least three times; we show the average results as well as the standard deviation in Table 1. All experimental details can be found in Appendix C. In each experiment, we use equally sized neural networks in the coupling layers of DFs and DDFs to ensure comparability.

| Data set | DF | DDF (ours) |
| --- | --- | --- |
| 8 Gaussians | ${5.05} \pm {0.05}$ | $\mathbf{4.58} \pm {0.02}$ |
| Bin. MNIST | ${0.17} \pm {0.01}$ | $\mathbf{0.16} \pm {0.01}$ |
| Cityscapes | ${0.65} \pm {0.03}$ | $\mathbf{0.58} \pm {0.03}$ |

Table 1. Comparison of the achieved BPD of Discrete Flows (DF) and Discrete Denoising Flows (DDF) per data set.

Eight Gaussians As a first experiment, we train DDFs and DFs on a two-dimensional toy data set also used by Tran et al. (2019). This data set is a mixture of Gaussians with 8 means uniformly distributed around a circle and discretized into 91 bins (i.e. $K = {91}$ classes). Since the data is two-dimensional, we model it with a single coupling layer per model and set $h = K$ for the DDF coupling layer.
For the 2D data, no splitpriors are used, because they alone would already make the model universal and would obscure how well the flow itself performs. As apparent from the qualitative results in Figure 2 as well as the achieved BPD given in Table 1, DDFs outperform DFs.

Binary MNIST In a second experiment, we train both DFs and DDFs on the binarized MNIST data set. Since the data set has $K = 2$ classes, we have $h = 2$ for the DDF coupling layers. The samples given in Figure 3 and the achieved BPD values in Table 1 show that DDFs outperform DFs.

Cityscapes To test the performance of DDFs on image-type data, we use an 8-class version of the Cityscapes data set (Cordts et al., 2016) modified by Hoogeboom et al. (2021). This data set contains ${32} \times {64}$ segmentation maps; samples are given in Figure 4c. Since we use multiple coupling layers in this experiment and have a data set with a number of classes $K > 2$ , there is a trade-off between permuting classes and maintaining structure in each coupling layer. Therefore, after performing a grid search, we set $h = 4$ for all DDF coupling layers. From the samples in Figure 4 and the achieved BPD rates in Table 1, we can see that DDFs outperform DFs for this data set as well.

Figure 3. Qualitative results for (a) Discrete Flows and (b) Discrete Denoising Flows on the binarized MNIST data set.

Figure 4. Qualitative results for (a) Discrete Flows and (b) Discrete Denoising Flows on the Cityscapes data set. Figure (c) shows samples from the 8-class Cityscapes data set.

## 5. Conclusion

In this paper, we have introduced a new discrete flow-based generative model for categorical data distributions, Discrete Denoising Flows. We showed that our model outperforms Discrete Flows in terms of log-likelihood.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MmiEW51-fJ/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MmiEW51-fJ/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..42ed3a42e38fa7d1116caec7ca4a6c87b5bed988 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MmiEW51-fJ/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,409 @@

# General Invertible Transformations for Flow-based Generative Modeling

## Anonymous Authors ${}^{1}$

## Abstract

In this paper, we present a new class of invertible transformations with an application to flow-based generative models. We indicate that many well-known invertible transformations in reversible logic and reversible neural networks could be derived from our proposition. Next, we propose two new coupling layers that are important building blocks of flow-based generative models. In experiments on digit data, we present how these new coupling layers could be used in Integer Discrete Flows (IDF), and show that they achieve better results than the standard coupling layers used in IDF and RealNVP.

## 1. Introduction

Notation Let us consider a $D$ -dimensional space $\mathbf{x} \in \mathcal{X}$ , e.g., $\mathcal{X} = \{ 0,1{\} }^{D}$ , $\mathcal{X} = {\mathbb{Z}}^{D}$ or $\mathcal{X} = {\mathbb{R}}^{D}$ . We define a binary invertible operator $\circ : \mathcal{X} \times \mathcal{X} \rightarrow \mathcal{X}$ . The inverse operation to $\circ$ is denoted by $\bullet$ . For instance, for addition: $\circ \equiv +$ and $\bullet \equiv -$ , and for the XOR operator: $\circ \equiv \oplus$ and $\bullet \equiv \oplus$ .
Further, we use the following notation: ${\mathcal{X}}_{i : j}$ is the subset of $\mathcal{X}$ corresponding to the variables from the $i$ -th to the $j$ -th dimension, ${\mathbf{x}}_{i : j}$ ; we assume that ${\mathcal{X}}_{1 : 0} = \varnothing$ and ${\mathcal{X}}_{D + 1 : D} = \varnothing$ .

Invertible transformations In reversible computing (Toffoli, 1980; Fredkin & Toffoli, 1982), invertible logic gates allow logic operations to be inverted in order to potentially decrease the energy consumption of computation (Bennett, 2003). The typical invertible logic gates are:

- the Feynman gate: For $\mathbf{x} \in \{ 0,1{\} }^{2}$ , the gate is defined as follows (Feynman, 1986):

$$
{y}_{1} = {x}_{1}
$$

$$
{y}_{2} = {x}_{1} \oplus {x}_{2}. \tag{1}
$$

- the Toffoli gate: For $\mathbf{x} \in \{ 0,1{\} }^{3}$ , the gate is defined as follows (Toffoli, 1980):

$$
{y}_{1} = {x}_{1}
$$

$$
{y}_{2} = {x}_{2} \tag{2}
$$

$$
{y}_{3} = {x}_{3} \oplus \left( {{x}_{1} \land {x}_{2}}\right) .
$$

- the Fredkin gate: For $\mathbf{x} \in \{ 0,1{\} }^{3}$ , the gate is defined as follows (Fredkin & Toffoli, 1982):

$$
{y}_{1} = {x}_{1}
$$

$$
{y}_{2} = {x}_{2} \oplus \left( {{x}_{1} \land \left( {{x}_{2} \oplus {x}_{3}}\right) }\right) \tag{3}
$$

$$
{y}_{3} = {x}_{3} \oplus \left( {{x}_{1} \land \left( {{x}_{2} \oplus {x}_{3}}\right) }\right) .
$$

Invertible transformations also play a crucial role in reversible neural networks (Gomez et al., 2017; Chang et al., 2018; MacKay et al., 2018). For instance, an invertible transformation called a coupling layer is an important building block in flow-based models (Dinh et al., 2016).
It is defined as follows:

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1}
$$

$$
{\mathbf{y}}_{2} = \exp \left\{ {{\mathrm{{NN}}}_{s}\left( {\mathbf{x}}_{1}\right) }\right\} \odot {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{t}\left( {\mathbf{x}}_{1}\right) , \tag{4}
$$

where $\odot$ is an element-wise multiplication, ${\mathrm{{NN}}}_{s}\left( \cdot \right)$ and ${\mathrm{{NN}}}_{t}\left( \cdot \right)$ denote arbitrary neural networks, and an input is divided into two parts, $\mathbf{x} = \left\lbrack {{\mathbf{x}}_{1},{\mathbf{x}}_{2}}\right\rbrack$ , e.g., along the channel dimension. If ${\mathrm{{NN}}}_{s}\left( \cdot \right) \equiv 1$ , and we stack two coupling layers with reversing the order of variables in between, then we obtain the reversible residual block (Gomez et al., 2017):

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + {\mathrm{{NN}}}_{t,1}\left( {\mathbf{x}}_{2}\right)
$$

$$
{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{t,2}\left( {\mathbf{y}}_{1}\right) . \tag{5}
$$

Recently, (Hoogeboom et al., 2019) proposed a modification of the coupling layer for integer-valued variables:

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1}
$$

$$
{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + \left\lfloor {{\mathrm{{NN}}}_{t}\left( {\mathbf{x}}_{1}\right) }\right\rceil , \tag{6}
$$

where $\lfloor \cdot \rceil$ denotes the rounding operation. In order to allow backpropagation through the rounding operation, the straight-through gradient estimator is used (Hoogeboom et al., 2019).

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

Flow-based generative models There are three major groups of generative models: autoregressive models (Van den Oord et al., 2016), latent variable models (Goodfellow et al., 2014; Kingma & Welling, 2013; Rezende et al., 2014), and flow-based models (Papamakarios et al., 2019).
The last approach takes advantage of the change of variables formula:

$$
p\left( \mathbf{x}\right) = \pi \left( {\mathbf{z} = {f}^{-1}\left( \mathbf{x}\right) }\right) {\left| {\mathbf{J}}_{f}\left( \mathbf{z}\right) \right| }^{-1}, \tag{7}
$$

where $\pi \left( \cdot \right)$ is a known distribution (a base distribution, e.g., the Normal distribution), $f : \mathcal{X} \rightarrow \mathcal{X}$ is a bijective map, and ${\mathbf{J}}_{f}\left( \mathbf{z}\right)$ denotes the Jacobian matrix.

The main challenge of flow-based generative models lies in formulating invertible transformations for which the Jacobian determinant is computationally tractable. In the simplest case, we can use volume-preserving transformations, for which $\left| \det {\mathbf{J}}_{f}\left( \mathbf{z}\right) \right| = 1$ , e.g., linear autoregressive flows (Kingma et al., 2016; Tomczak & Welling, 2017) or Householder flows (Tomczak & Welling, 2016). However, volume-preserving flows cannot model arbitrary distributions; therefore, non-linear transformations are preferable. In (Rezende & Mohamed, 2015; Berg et al., 2018; Hoogeboom et al., 2020) a specific form of non-linear transformations was constructed to take advantage of the matrix determinant lemma and its generalization to efficiently calculate the Jacobian determinant. Recently, the transformation used in (Rezende & Mohamed, 2015; Berg et al., 2018) was further generalized to arbitrary contractive residual networks (Chen et al., 2019) and contractive densenets (Perugachi-Diaz et al., 2021) with the Russian roulette estimator of the Jacobian determinant. Coupling layers constitute a different group of non-linear transformations that are used in flow-based models like RealNVP (Dinh et al., 2016) and GLOW (Kingma & Dhariwal, 2018).
In the case of discrete variables, e.g., $\mathcal{X} = {\mathbb{Z}}^{D}$ , the change of variables takes a simpler form due to the fact that there is no change of volume for probability mass functions:

$$
p\left( \mathbf{x}\right) = \pi \left( {\mathbf{z} = {f}^{-1}\left( \mathbf{x}\right) }\right) . \tag{8}
$$

To date, coupling layers with the rounding operator (Eq. 6) are typically used. The resulting flow-based models are called Integer Discrete Flows (IDF) (Hoogeboom et al., 2019; Berg et al., 2020) with a mixture of discretized logistic distributions (Salimans et al., 2017) as the base distribution.

## 2. Our approach

### 2.1. General invertible transformations

The main contribution of this paper is a new class of invertible transformations that generalizes many invertible transformations in reversible computing and reversible deep learning.

Proposition 1. Let us take $\mathbf{x},\mathbf{y} \in \mathcal{X}$ . The following transformation from $\mathbf{x}$ to $\mathbf{y}$ with ${f}_{i} : {\mathcal{X}}_{1 : i - 1} \times {\mathcal{X}}_{i + 1 : D} \rightarrow {\mathcal{X}}_{i}$ :

$$
{y}_{1} = {x}_{1} \circ {f}_{1}\left( {\varnothing ,{\mathbf{x}}_{2 : D}}\right)
$$

$$
{y}_{2} = \left( {{g}_{2}\left( {y}_{1}\right) \vartriangleright {x}_{2}}\right) \circ {f}_{2}\left( {{y}_{1},{\mathbf{x}}_{3 : D}}\right)
$$

...

$$
{y}_{d} = \left( {{g}_{d}\left( {\mathbf{y}}_{1 : d - 1}\right) \vartriangleright {x}_{d}}\right) \circ {f}_{d}\left( {{\mathbf{y}}_{1 : d - 1},{\mathbf{x}}_{d + 1 : D}}\right)
$$

...

$$
{y}_{D} = \left( {{g}_{D}\left( {\mathbf{y}}_{1 : D - 1}\right) \vartriangleright {x}_{D}}\right) \circ {f}_{D}\left( {{\mathbf{y}}_{1 : D - 1},\varnothing }\right)
$$

is invertible for given binary transformations $\circ$ and $\vartriangleright$ with inverses $\bullet$ and $\vartriangleleft$ , respectively, and arbitrary functions ${g}_{2},\ldots ,{g}_{D}$ and ${f}_{1},\ldots ,{f}_{D}$ .

Proof.
In order to invert $\mathbf{y}$ to $\mathbf{x}$ , we start from the last element to obtain the following:

$$
{x}_{D} = {g}_{D}\left( {\mathbf{y}}_{1 : D - 1}\right) \vartriangleleft \left( {{y}_{D} \bullet {f}_{D}\left( {{\mathbf{y}}_{1 : D - 1},\varnothing }\right) }\right) .
$$

Then, we can proceed with the remaining expressions in decreasing order (i.e., from $D - 1$ to 1) to eventually obtain:

$$
{x}_{D - 1} = {g}_{D - 1}\left( {\mathbf{y}}_{1 : D - 2}\right) \vartriangleleft \left( {{y}_{D - 1} \bullet {f}_{D - 1}\left( {{\mathbf{y}}_{1 : D - 2},{x}_{D}}\right) }\right)
$$

...

$$
{x}_{d} = {g}_{d}\left( {\mathbf{y}}_{1 : d - 1}\right) \vartriangleleft \left( {{y}_{d} \bullet {f}_{d}\left( {{\mathbf{y}}_{1 : d - 1},{\mathbf{x}}_{d + 1 : D}}\right) }\right)
$$

...

$$
{x}_{2} = {g}_{2}\left( {y}_{1}\right) \vartriangleleft \left( {{y}_{2} \bullet {f}_{2}\left( {{y}_{1},{\mathbf{x}}_{3 : D}}\right) }\right)
$$

$$
{x}_{1} = {y}_{1} \bullet {f}_{1}\left( {\varnothing ,{\mathbf{x}}_{2 : D}}\right) .
$$

Next, we show that many widely known invertible transformations could be derived from the proposed general invertible transformation. First, for the space of binary variables, we show that our proposition could be used to obtain three of the most important reversible logic gates.

Corollary 2 (Feynman gate). Let us consider $\mathbf{x} \in \{ 0,1{\} }^{2}$ , and $\circ \equiv \oplus$ and $\vartriangleright \equiv \oplus$ with $\bullet \equiv \oplus$ and $\vartriangleleft \equiv \oplus$ , where $\oplus$ is the XOR operation. Then, taking ${g}_{2} \equiv 0$ , ${f}_{1}\left( {x}_{2}\right) = 0$ and ${f}_{2}\left( {y}_{1}\right) = {y}_{1}$ results in the Feynman gate:

$$
{y}_{1} = {x}_{1} \tag{9}
$$

$$
{y}_{2} = {x}_{2} \oplus {x}_{1}\text{.}
$$

Proof. Eq. 9 follows since $0$ is the identity element of the XOR operation.

Corollary 3 (Toffoli gate).
Let us consider $\mathbf{x} \in \{ 0,1{\} }^{3}$ , and $\circ \equiv \oplus$ and $\vartriangleright \equiv \oplus$ with $\bullet \equiv \oplus$ and $\vartriangleleft \equiv \oplus$ , where $\oplus$ is the XOR operation. Then, taking ${g}_{2}\left( {y}_{1}\right) \equiv 0$ , ${g}_{3}\left( {\mathbf{y}}_{1 : 2}\right) \equiv 0$ , ${f}_{1}\left( {\mathbf{x}}_{2 : 3}\right) \equiv 0$ , ${f}_{2}\left( {{y}_{1},{x}_{3}}\right) \equiv 0$ and ${f}_{3}\left( {\mathbf{y}}_{1 : 2}\right) = {y}_{1} \land {y}_{2}$ results in the Toffoli gate:

$$
{y}_{1} = {x}_{1} \tag{10}
$$

$$
{y}_{2} = {x}_{2} \tag{11}
$$

$$
{y}_{3} = {x}_{3} \oplus \left( {{y}_{1} \land {y}_{2}}\right) . \tag{12}
$$

Proof. Eqs. 10–12 follow since $0$ is the identity element of the XOR operator.

Corollary 4 (Fredkin gate). Let us consider $\mathbf{x} \in \{ 0,1{\} }^{4}$ , and $\circ \equiv \oplus$ and $\vartriangleright \equiv \oplus$ with $\bullet \equiv \oplus$ and $\vartriangleleft \equiv \oplus$ , where $\oplus$ is the XOR operation. Then, taking ${x}_{1} \equiv 0$ , ${g}_{2}\left( {y}_{1}\right) \equiv 0$ , ${g}_{3}\left( {\mathbf{y}}_{1 : 2}\right) \equiv 0$ , ${g}_{4}\left( {\mathbf{y}}_{1 : 3}\right) \equiv 0$ , ${f}_{1}\left( {\mathbf{x}}_{2 : 4}\right) = {x}_{2} \land \left( {{x}_{3} \oplus {x}_{4}}\right)$ , ${f}_{2}\left( {{y}_{1},{\mathbf{x}}_{3 : 4}}\right) \equiv 0$ , ${f}_{3}\left( {{\mathbf{y}}_{1 : 2},{x}_{4}}\right) = {y}_{1}$ and ${f}_{4}\left( {\mathbf{y}}_{1 : 3}\right) \equiv {y}_{1}$ results in the Fredkin gate:

$$
{y}_{1} = {x}_{1} \oplus \left( {{x}_{2} \land \left( {{x}_{3} \oplus {x}_{4}}\right) }\right) \tag{13}
$$

$$
{y}_{2} = {x}_{2} \oplus 0 \tag{14}
$$

$$
{y}_{3} = {x}_{3} \oplus {y}_{1} \tag{15}
$$

$$
{y}_{4} = {x}_{4} \oplus {y}_{1} \tag{16}
$$

Remark 5 (On the Fredkin gate). Comparing equations 13–16 with the definition of the Fredkin gate, we notice that in Corollary 4 we have to introduce an additional equation to be consistent with Proposition 1.
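The constructive inverse used in the proof of Proposition 1 can also be checked numerically. The sketch below instantiates the proposition for $D = 3$ scalar dimensions with $\circ \equiv +$ and $\vartriangleright \equiv \cdot$ (so the inverses are subtraction and division), using arbitrary smooth functions of our own choosing for the $f_i$ and $g_i$, with $g_i > 0$ so that the multiplication is invertible:

```python
import numpy as np

# Numerical sketch of Proposition 1 for D = 3 scalar dimensions with
# o = + and |> = * . The g and f below are arbitrary smooth functions
# chosen only for illustration; g is kept strictly positive so that
# the element-wise product can be inverted by division.

def g(y_prev):
    return 1.0 + np.tanh(np.sum(y_prev)) ** 2   # arbitrary, always > 0

def f(y_prev, x_rest):
    return np.sin(np.sum(y_prev) + np.sum(x_rest))  # arbitrary

def forward(x):
    y = np.empty(3)
    y[0] = x[0] + f(np.array([]), x[1:])
    y[1] = g(y[:1]) * x[1] + f(y[:1], x[2:])
    y[2] = g(y[:2]) * x[2] + f(y[:2], np.array([]))
    return y

def inverse(y):
    x = np.empty(3)
    # invert from the last dimension upwards, exactly as in the proof
    x[2] = (y[2] - f(y[:2], np.array([]))) / g(y[:2])
    x[1] = (y[1] - f(y[:1], x[2:])) / g(y[:1])
    x[0] = y[0] - f(np.array([]), x[1:])
    return x
```

The triangular dependence structure is what makes this work: each $y_d$ mixes the already-computed outputs with the not-yet-transformed inputs, so the inverse can peel the dimensions off in reverse order.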
Moreover, we introduced a dummy variable ${x}_{1}$ that always equals 0.

Further, we observe that our proposition generalizes invertible layers in neural networks.

Corollary 6 (A coupling layer). Let us consider $\mathbf{x} = {\left\lbrack {\mathbf{x}}_{1},{\mathbf{x}}_{2}\right\rbrack }^{\top }$ , where ${\mathcal{X}}_{i} = {\mathbb{R}}^{{D}_{i}}$ , and $\circ \equiv +$ , $\vartriangleright \equiv \odot$ with $\bullet \equiv -$ and $\vartriangleleft \equiv \oslash$ , where $\odot$ and $\oslash$ denote element-wise multiplication and division, respectively. Then, taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) = \exp \left( {{\mathrm{{NN}}}_{s}\left( {\mathbf{y}}_{1}\right) }\right)$ , ${f}_{1}\left( {\mathbf{x}}_{2}\right) = 0$ and ${f}_{2}\left( {\mathbf{y}}_{1}\right) = {\mathrm{{NN}}}_{t}\left( {\mathbf{y}}_{1}\right)$ , where ${\mathrm{{NN}}}_{s},{\mathrm{{NN}}}_{t}$ are neural networks, results in the coupling layer (Dinh et al., 2016):

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1} \tag{17}
$$

$$
{\mathbf{y}}_{2} = \exp \left( {{\mathrm{{NN}}}_{s}\left( {\mathbf{y}}_{1}\right) }\right) \odot {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{t}\left( {\mathbf{y}}_{1}\right) . \tag{18}
$$

Corollary 7 (A reversible residual layer). Let us consider $\mathbf{x} = {\left\lbrack {\mathbf{x}}_{1},{\mathbf{x}}_{2}\right\rbrack }^{\top }$ , where ${\mathcal{X}}_{i} = {\mathbb{R}}^{{D}_{i}}$ , and $\circ \equiv +$ , $\vartriangleright \equiv \odot$ with $\bullet \equiv -$ and $\vartriangleleft \equiv \oslash$ , where $\odot$ and $\oslash$ denote element-wise multiplication and division, respectively.
Then, taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) \equiv 1$ , ${f}_{1}\left( {\mathbf{x}}_{2}\right) = {\mathrm{{NN}}}_{1}\left( {\mathbf{x}}_{2}\right)$ and ${f}_{2}\left( {\mathbf{y}}_{1}\right) = {\mathrm{{NN}}}_{2}\left( {\mathbf{y}}_{1}\right)$ , where ${\mathrm{{NN}}}_{1}$ and ${\mathrm{{NN}}}_{2}$ are neural networks, results in the reversible residual layer proposed in (Gomez et al., 2017):

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + {\mathrm{{NN}}}_{1}\left( {\mathbf{x}}_{2}\right) \tag{19}
$$

$$
{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{2}\left( {\mathbf{y}}_{1}\right) . \tag{20}
$$

Remark 8 (On the reversible residual layer). According to Proposition 1, we can further generalize the reversible residual layer proposed in (Gomez et al., 2017) by taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) = \exp \left( {{\mathrm{{NN}}}_{3}\left( {\mathbf{y}}_{1}\right) }\right)$ , which results in the following invertible layer:

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + {\mathrm{{NN}}}_{1}\left( {\mathbf{x}}_{2}\right) \tag{21}
$$

$$
{\mathbf{y}}_{2} = \exp \left( {{\mathrm{{NN}}}_{3}\left( {\mathbf{y}}_{1}\right) }\right) \odot {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{2}\left( {\mathbf{y}}_{1}\right) . \tag{22}
$$

Interestingly, we can calculate the Jacobian of such a transformation. The Jacobian takes the following form:

$$
\mathbf{J}\left( \mathbf{z}\right) = \left\lbrack \begin{matrix} \frac{\partial {\mathbf{y}}_{1}}{\partial {\mathbf{x}}_{1}} & \frac{\partial {\mathbf{y}}_{1}}{\partial {\mathbf{x}}_{2}} \\ \frac{\partial {\mathbf{y}}_{2}}{\partial {\mathbf{x}}_{1}} & \frac{\partial {\mathbf{y}}_{2}}{\partial {\mathbf{x}}_{2}} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{matrix}\right\rbrack , \tag{23}
$$

where in (23) we represent the Jacobian as a block matrix with $\mathbf{A} = \mathbf{I}$ , i.e., the identity matrix.
Then, $\mathbf{{AC}} = \mathbf{{CA}}$ and according to Theorem 3 on determinants of block matrices in (Silvester, 2000), the logarithm of the Jacobian-determinant equals: + +$$ +\log \left| {\det \mathbf{J}\left( \mathbf{z}\right) }\right| = \mathop{\sum }\limits_{i}{\mathrm{{NN}}}_{3, i}\left( {\mathbf{y}}_{1}\right) . \tag{24} +$$ + +Corollary 9 (A reversible differential mutation). Let us consider ${\mathbf{x}}_{1},{\mathbf{x}}_{2},{\mathbf{x}}_{3} \in {\mathbb{R}}^{D},\gamma \in {\mathbb{R}}_{ + }$ , and $\circ \equiv + , \vartriangleright \equiv \odot$ with $\bullet \equiv -$ . Then, taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) \equiv 1,{g}_{3}\left( {\mathbf{y}}_{1 : 2}\right) \equiv 1$ , ${f}_{1}\left( {\mathbf{x}}_{2 : 3}\right) = \gamma \left( {{\mathbf{x}}_{2} - {\mathbf{x}}_{3}}\right) ,{f}_{2}\left( {{\mathbf{y}}_{1},{\mathbf{x}}_{3}}\right) = \gamma \left( {{\mathbf{x}}_{3} - {\mathbf{y}}_{1}}\right)$ , and ${f}_{3}\left( {\mathbf{y}}_{1 : 2}\right) = \gamma \left( {{\mathbf{y}}_{1} - {\mathbf{y}}_{2}}\right)$ , results in the reversible differential mutation proposed in (Tomczak et al., 2020): + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + \gamma \left( {{\mathbf{x}}_{2} - {\mathbf{x}}_{3}}\right) \tag{25} +$$ + +$$ +{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + \gamma \left( {{\mathbf{x}}_{3} - {\mathbf{y}}_{1}}\right) \tag{26} +$$ + +$$ +{\mathbf{y}}_{3} = {\mathbf{x}}_{3} + \gamma \left( {{\mathbf{y}}_{1} - {\mathbf{y}}_{2}}\right) \text{.} \tag{27} +$$ + +### 2.2. General Invertible Transformations for Integer Discrete Flows + +We propose to utilize our general invertible transformations in IDF. 
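The log-determinant identity in Eq. 24 can be verified numerically against a finite-difference Jacobian; the three small maps standing in for ${\mathrm{{NN}}}_{1}$, ${\mathrm{{NN}}}_{2}$ and ${\mathrm{{NN}}}_{3}$ below are arbitrary illustrative choices, not trained networks:

```python
import numpy as np

# Numerical check of Eq. 24: for the generalized reversible residual
# layer y1 = x1 + NN1(x2), y2 = exp(NN3(y1)) * x2 + NN2(y1), the
# log-Jacobian-determinant equals sum_i NN3_i(y1). The three "networks"
# are fixed smooth maps chosen only for illustration.

def nn1(v): return np.tanh(v[::-1]) * 0.5
def nn2(v): return np.sin(v) + 0.3 * v
def nn3(v): return 0.2 * np.cos(v) + 0.1 * v

def layer(x):
    x1, x2 = x[:2], x[2:]
    y1 = x1 + nn1(x2)
    y2 = np.exp(nn3(y1)) * x2 + nn2(y1)
    return np.concatenate([y1, y2])

def numerical_logdet(fn, x, eps=1e-6):
    # central-difference Jacobian, column by column
    D = len(x)
    J = np.empty((D, D))
    for j in range(D):
        e = np.zeros(D)
        e[j] = eps
        J[:, j] = (fn(x + e) - fn(x - e)) / (2 * eps)
    return np.log(abs(np.linalg.det(J)))

x = np.array([0.4, -1.1, 0.7, 2.0])
y1 = x[:2] + nn1(x[2:])
analytic = np.sum(nn3(y1))         # Eq. 24
numeric = numerical_logdet(layer, x)
```

Intuitively, the Schur-complement term $\mathbf{D} - \mathbf{C}\mathbf{A}^{-1}\mathbf{B}$ collapses to the diagonal matrix $\operatorname{diag}(\exp({\mathrm{NN}}_{3}(\mathbf{y}_1)))$, so only the scaling contributes to the determinant.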
For this purpose, we formulate two new coupling layers that fulfill Proposition 1, namely:

- We divide the input into four parts, $\mathbf{x} = \left\lbrack {{\mathbf{x}}_{1},{\mathbf{x}}_{2},{\mathbf{x}}_{3},{\mathbf{x}}_{4}}\right\rbrack$ :

$$
{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + \left\lfloor {{\mathrm{{NN}}}_{t,1}\left( {\mathbf{x}}_{2 : 4}\right) }\right\rceil
$$

$$
{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + \left\lfloor {{\mathrm{{NN}}}_{t,2}\left( {{\mathbf{y}}_{1},{\mathbf{x}}_{3 : 4}}\right) }\right\rceil \tag{28}
$$

$$
{\mathbf{y}}_{3} = {\mathbf{x}}_{3} + \left\lfloor {{\mathrm{{NN}}}_{t,3}\left( {{\mathbf{y}}_{1 : 2},{\mathbf{x}}_{4}}\right) }\right\rceil
$$

$$
{\mathbf{y}}_{4} = {\mathbf{x}}_{4} + \left\lfloor {{\mathrm{{NN}}}_{t,4}\left( {\mathbf{y}}_{1 : 3}\right) }\right\rceil
$$

- We divide the input into eight parts, $\mathbf{x} = \left\lbrack {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{8}}\right\rbrack$ . Then, we formulate the coupling layer analogously to (28).

Further, we use the discretized two-parameter logistic distribution. It could be expressed as a difference of logistic cumulative distribution functions (Salimans et al., 2017) or analytically (Chakraborty & Chakravarty, 2016).

## 3. Experiments

Data In the experiment, we use a toy dataset of handwritten digits available in Scikit-learn ${}^{1}$ that consists of 1,797 images. Each image consists of 64 pixels $\left( {8\mathrm{{px}} \times 8\mathrm{{px}}}\right)$ .

Models In the experiment, we compare the following models: RealNVP with uniform dequantization and the standard Gaussian base distribution (REALNVP), RealNVP with the coupling layer in Remark 8 and the standard Gaussian base distribution (REALNVP2), IDF with the coupling layer in (6) (IDF), IDF with the coupling layer in (28) (IDF4), and IDF with the coupling layer for eight parts (IDF8).
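The four-part coupling layer in (28) can be sketched as follows, with fixed random linear maps as our illustrative stand-ins for the networks ${\mathrm{{NN}}}_{t,i}$; as in the proof of Proposition 1, inversion peels the parts off in reverse order:

```python
import numpy as np

# Sketch of the proposed four-part coupling layer (Eq. 28): each part
# is translated by a rounded function of the already-transformed parts
# and the untransformed remainder. Fixed random linear maps stand in
# for the networks NN_{t,i}; this illustrates only the invertibility
# structure, not a trained model.

rng = np.random.default_rng(0)
D = 2                                                  # size of each part
Ws = [rng.normal(size=(D, 3 * D)) for _ in range(4)]   # stand-ins for NN_{t,i}

def t(i, context):
    # rounded integer translation, conditioned on the other three parts
    return np.round(Ws[i] @ context).astype(int)

def forward(parts):
    x1, x2, x3, x4 = parts
    y1 = x1 + t(0, np.concatenate([x2, x3, x4]))
    y2 = x2 + t(1, np.concatenate([y1, x3, x4]))
    y3 = x3 + t(2, np.concatenate([y1, y2, x4]))
    y4 = x4 + t(3, np.concatenate([y1, y2, y3]))
    return [y1, y2, y3, y4]

def inverse(ys):
    y1, y2, y3, y4 = ys
    x4 = y4 - t(3, np.concatenate([y1, y2, y3]))
    x3 = y3 - t(2, np.concatenate([y1, y2, x4]))
    x2 = y2 - t(1, np.concatenate([y1, x3, x4]))
    x1 = y1 - t(0, np.concatenate([x2, x3, x4]))
    return [x1, x2, x3, x4]

x = [rng.integers(-4, 4, size=D) for _ in range(4)]
y = forward(x)
```

Note that, unlike the two-part IDF coupling of Eq. 6, every part is transformed here, so a single layer already mixes all four parts.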
In all models, we used the order-reversing permutation (i.e., a matrix with ones on the anti-diagonal and zeros elsewhere) after each coupling layer. + +In order to keep a similar number of weights, we used 16 flows for IDF, 8 flows for REALNVP, 4 flows for REALNVP2, 4 flows for IDF4, and 2 flows for IDF8. All models have roughly ${1.32}\mathrm{M}$ weights. For IDF, IDF4, and IDF8 we utilized the following neural networks for transitions $\left( {\mathrm{{NN}}}_{t}\right)$ : + +$$ +\mathrm{Linear}\left( {D}_{in},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{D}_{out}\right) +$$ + +and for REALNVP, we additionally used the following neural networks for scaling $\left( {\mathrm{{NN}}}_{s}\right)$ : + +$$ +\mathrm{Linear}\left( {D}_{in},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{D}_{out}\right) \rightarrow \mathrm{Tanh} +$$ + +Training & Evaluation We compare the models using the negative log-likelihood (nll). We train each model using 1,000 images, and the mini-batch size equals 64. Moreover, we take 350 images for validation and 447 for testing. Each model is trained 5 times. During training, we use early stopping with patience equal to 20 epochs, and the best-performing model on the validation set is later evaluated on the test set. The Adam optimizer with the learning rate equal to 0.001 was used. + +Code The experiments can be reproduced by running the code available at: https://github.com/ ANONYMIZED + +Results In Figure 1, we present aggregated results for the five models. Moreover, in Figure 2, examples of unconditional samples are presented, together with a real sample. + +First, we notice that IDF performs better than REALNVP and REALNVP2. 
In this paper, we use fully-connected neural networks and small toy data. Nevertheless, it is interesting to see that IDF could perform better than a widely used continuous flow-based model with dequantization. Moreover, the new coupling layer presented in Remark 8 seems more stable in terms of final results; however, the difference in performance is statistically insignificant. Second, we observe that our proposition of invertible transformations results in improved nll. The proposed coupling layer with 8 partitions performed the best in terms of nll, and the coupling layer with 4 partitions also outperformed IDF, REALNVP and REALNVP2. We want to highlight that all models have almost identical numbers of weights; thus, these results are not caused by taking larger neural networks. The samples presented in Figure 2 are rather hard to analyze due to their small size $\left( {8\mathrm{{px}} \times 8\mathrm{{px}}}\right)$ . Nevertheless, we notice that all IDFs generated crisper images than REALNVP and REALNVP2. Moreover, IDF4 and IDF8 seem to produce digits of higher visual quality. + +![01963e3d-8fcd-71bc-bdd4-7cd9b620cb6a_3_903_193_690_335_0.jpg](images/01963e3d-8fcd-71bc-bdd4-7cd9b620cb6a_3_903_193_690_335_0.jpg) + +Figure 1. The aggregated results for the five models on the test set. + +![01963e3d-8fcd-71bc-bdd4-7cd9b620cb6a_3_922_592_667_401_0.jpg](images/01963e3d-8fcd-71bc-bdd4-7cd9b620cb6a_3_922_592_667_401_0.jpg) + +Figure 2. Unconditional samples from the five models. + +## 4. Conclusion + +In this paper, we proposed a new class of invertible transformations. We showed that many well-known invertible transformations could be derived from our proposition. Moreover, we proposed two coupling layers and presented how they could be utilized in flow-based models for integer values (Integer Discrete Flows). 
Our preliminary experiments on the digits data indicate that the new coupling layers result in better negative log-likelihood values than for IDF and REALNVP. These results are promising and will be pursued in the future on more challenging datasets. + +--- + +${}^{1}$ https://scikit-learn.org/stable/datasets/index.html#digits-dataset + +--- + +References + +Bennett, C. H. Notes on Landauer's principle, reversible computation, and Maxwell's demon. Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, 34(3):501-510, 2003. + +Berg, R. v. d., Hasenclever, L., Tomczak, J. M., and Welling, M. Sylvester normalizing flows for variational inference. UAI, 2018. + +Berg, R. v. d., Gritsenko, A. A., Dehghani, M., Sønderby, C. K., and Salimans, T. IDF++: Analyzing and improving integer discrete flows for lossless compression. arXiv preprint arXiv:2006.12459, 2020. + +Chakraborty, S. and Chakravarty, D. A new discrete probability distribution with integer support on $\left( {-\infty ,\infty }\right)$ . Communications in Statistics-Theory and Methods, 45(2):492-505, 2016. + +Chang, B., Meng, L., Haber, E., Ruthotto, L., Begert, D., and Holtham, E. Reversible architectures for arbitrarily deep residual neural networks. In AAAI, 2018. + +Chen, R. T., Behrmann, J., Duvenaud, D. K., and Jacobsen, J.-H. Residual flows for invertible generative modeling. In Advances in Neural Information Processing Systems, pp. 9916-9926, 2019. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016. + +Feynman, R. P. Quantum mechanical computers. Foundations of Physics, 16(6):507-531, 1986. + +Fredkin, E. and Toffoli, T. Conservative logic. International Journal of Theoretical Physics, 21(3-4):219-253, 1982. + +Gomez, A. N., Ren, M., Urtasun, R., and Grosse, R. B. The reversible residual network: Backpropagation without storing activations. In NIPS, pp. 2214-2224, 2017. 
+ +Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In NIPS, pp. 2672-2680, 2014. + +Hoogeboom, E., Peters, J., van den Berg, R., and Welling, M. Integer discrete flows and lossless compression. In Advances in Neural Information Processing Systems, pp. 12134-12144, 2019. + +Hoogeboom, E., Satorras, V. G., Tomczak, J. M., and Welling, M. The convolution exponential and generalized Sylvester flows. NeurIPS, 2020. + +Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In NeurIPS, pp. 10215-10224, 2018. + +Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. + +Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improved variational inference with inverse autoregressive flow. In NIPS, pp. 4743-4751, 2016. + +MacKay, M., Vicol, P., Ba, J., and Grosse, R. B. Reversible recurrent neural networks. In NeurIPS, pp. 9029-9040, 2018. + +Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019. + +Perugachi-Diaz, Y., Tomczak, J. M., and Bhulai, S. Invertible DenseNets with concatenated LipSwish. arXiv preprint arXiv:2102.02694, 2021. + +Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In ICML, pp. 1530-1538, 2015. + +Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. + +Salimans, T., Karpathy, A., Chen, X., and Kingma, D. P. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017. + +Silvester, J. R. Determinants of block matrices. The Mathematical Gazette, 84(501):460-467, 2000. + +Toffoli, T. Reversible computing. 
In International Colloquium on Automata, Languages, and Programming, pp. 632-644. Springer, 1980. + +Tomczak, J. M. and Welling, M. Improving variational auto-encoders using Householder flow. arXiv preprint arXiv:1611.09630, 2016. + +Tomczak, J. M. and Welling, M. Improving variational auto-encoders using convex combination linear inverse autoregressive flow. arXiv preprint arXiv:1706.02326, 2017. + +Tomczak, J. M., Weglarz-Tomczak, E., and Eiben, A. E. Differential evolution with reversible linear transformations. arXiv preprint arXiv:2002.02869, 2020. + +Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional image generation with PixelCNN decoders. In NIPS, pp. 4790-4798, 2016. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MmiEW51-fJ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MmiEW51-fJ/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..ebcb0e4f6dcbd50803934078da8909e942246da3 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MmiEW51-fJ/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,341 @@ +§ GENERAL INVERTIBLE TRANSFORMATIONS FOR FLOW-BASED GENERATIVE MODELING + +§ ANONYMOUS AUTHORS ${}^{1}$ + +§ ABSTRACT + +In this paper, we present a new class of invertible transformations with an application to flow-based generative models. We indicate that many well-known invertible transformations in reversible logic and reversible neural networks could be derived from our proposition. Next, we propose two new coupling layers that are important building blocks of flow-based generative models. In experiments on digit data, we show how these new coupling layers can be used in Integer Discrete Flows (IDF), and that they achieve better results than standard coupling layers used in IDF and RealNVP. 
+ +§ 1. INTRODUCTION + +Notation Let us consider a $D$ -dimensional space $\mathbf{x} \in \mathcal{X}$ , e.g., $\mathcal{X} = \{ 0,1{\} }^{D},\mathcal{X} = {\mathbb{Z}}^{D}$ or $\mathcal{X} = {\mathbb{R}}^{D}$ . We define a binary invertible operator $\circ : \mathcal{X} \times \mathcal{X} \rightarrow \mathcal{X}$ . The inverse operation to $\circ$ is denoted by $\bullet$ . For instance, for the addition: $\circ \equiv +$ and $\bullet \equiv -$ , and for the XOR operator: $\circ \equiv \oplus$ and $\bullet \equiv \oplus$ . Further, we use the following notation: ${\mathcal{X}}_{i : j}$ is a subset of $\mathcal{X}$ corresponding to the variables from the $i$ -th dimension to the $j$ -th dimension, ${\mathbf{x}}_{i : j}$ ; we assume that ${\mathcal{X}}_{1 : 0} = \varnothing$ and ${\mathcal{X}}_{n + 1 : n} = \varnothing$ . + +Invertible transformations In reversible computing (Toffoli, 1980; Fredkin & Toffoli, 1982), invertible logic gates make it possible to invert logic operations in order to potentially decrease the energy consumption of computation (Bennett, 2003). The typical invertible logic gates are: + + * the Feynman gate: For $\mathbf{x} \in \{ 0,1{\} }^{2}$ , the gate is defined as follows (Feynman, 1986): + +$$ +{y}_{1} = {x}_{1} +$$ + +$$ +{y}_{2} = {x}_{1} \oplus {x}_{2}. \tag{1} +$$ + + * the Toffoli gate: For $\mathbf{x} \in \{ 0,1{\} }^{3}$ , the gate is defined as follows (Toffoli, 1980): + +$$ +{y}_{1} = {x}_{1} +$$ + +$$ +{y}_{2} = {x}_{2} \tag{2} +$$ + +$$ +{y}_{3} = {x}_{3} \oplus \left( {{x}_{1} \land {x}_{2}}\right) . +$$ + + * the Fredkin gate: For $\mathbf{x} \in \{ 0,1{\} }^{3}$ , the gate is defined as follows (Fredkin & Toffoli, 1982): + +$$ +{y}_{1} = {x}_{1} +$$ + +$$ +{y}_{2} = {x}_{2} \oplus \left( {{x}_{1} \land \left( {{x}_{2} \oplus {x}_{3}}\right) }\right) \tag{3} +$$ + +$$ +{y}_{3} = {x}_{3} \oplus \left( {{x}_{1} \land \left( {{x}_{2} \oplus {x}_{3}}\right) }\right) . 
+$$ + +Invertible transformations also play a crucial role in reversible neural networks (Gomez et al., 2017; Chang et al., 2018; MacKay et al., 2018). For instance, an invertible transformation called a coupling layer is an important building block in flow-based models (Dinh et al., 2016). It is defined as follows: + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} +$$ + +$$ +{\mathbf{y}}_{2} = \exp \left\{ {{\mathrm{{NN}}}_{s}\left( {\mathbf{x}}_{1}\right) }\right\} \odot {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{t}\left( {\mathbf{x}}_{1}\right) , \tag{4} +$$ + +where $\odot$ is an element-wise multiplication, ${\mathrm{{NN}}}_{s}\left( \cdot \right)$ and ${\mathrm{{NN}}}_{t}\left( \cdot \right)$ denote arbitrary neural networks, and an input is divided into two parts, $\mathbf{x} = \left\lbrack {{\mathbf{x}}_{1},{\mathbf{x}}_{2}}\right\rbrack$ , e.g., along the channel dimension. If $\exp \left\{ {{\mathrm{{NN}}}_{s}\left( \cdot \right) }\right\} \equiv 1$ , and we stack two coupling layers with reversing the order of variables in between, then we obtain the reversible residual block (Gomez et al., 2017): + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + {\mathrm{{NN}}}_{t,1}\left( {\mathbf{x}}_{2}\right) +$$ + +$$ +{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{t,2}\left( {\mathbf{y}}_{1}\right) . \tag{5} +$$ + +Recently, (Hoogeboom et al., 2019) proposed a modification of the coupling layer for integer-valued variables: + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} +$$ + +$$ +{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + \left\lfloor {{\mathrm{{NN}}}_{t}\left( {\mathbf{x}}_{1}\right) }\right\rceil , \tag{6} +$$ + +where $\lfloor \cdot \rceil$ denotes the rounding operation. In order to allow applying backpropagation to the rounding operation, the straight-through gradient estimator is used (Hoogeboom et al., 2019). + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. 
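The key property of the coupling layer (4) is that inversion never requires inverting the networks themselves: they are simply re-evaluated on ${\mathbf{y}}_{1} = {\mathbf{x}}_{1}$. A scalar Python sketch illustrates this (the particular `nn_s`, `nn_t` below are arbitrary stand-ins, not the layer's actual networks):

```python
import math

# Arbitrary stand-ins for NN_s and NN_t (any functions of x1 work).
nn_s = lambda x1: math.tanh(0.7 * x1)
nn_t = lambda x1: 0.3 * x1 - 1.0

def coupling_forward(x1, x2):
    # Eq. (4): y1 = x1, y2 = exp(NN_s(x1)) * x2 + NN_t(x1)
    return x1, math.exp(nn_s(x1)) * x2 + nn_t(x1)

def coupling_inverse(y1, y2):
    # The networks are never inverted; they are re-evaluated on y1 = x1.
    return y1, (y2 - nn_t(y1)) / math.exp(nn_s(y1))

def log_abs_det_jacobian(x1):
    # The Jacobian is triangular with exp(NN_s(x1)) on the diagonal,
    # so log|det J| = NN_s(x1) (summed over dimensions in general).
    return nn_s(x1)

x1, x2 = 0.5, -2.0
y1, y2 = coupling_forward(x1, x2)
assert y1 == x1
assert abs(coupling_inverse(y1, y2)[1] - x2) < 1e-12
```

The same observation explains why the log-Jacobian-determinant is just the sum of the scaling-network outputs: the exponential guarantees a positive diagonal.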
+ +Flow-based generative models There are three major groups of generative models: autoregressive models (Van den Oord et al., 2016), latent variable models (Goodfellow et al., 2014; Kingma & Welling, 2013; Rezende et al., 2014), and flow-based models (Papamakarios et al., 2019). The last approach takes advantage of the change of variables formula: + +$$ +p\left( \mathbf{x}\right) = \pi \left( {\mathbf{z} = {f}^{-1}\left( \mathbf{x}\right) }\right) {\left| {\mathbf{J}}_{f}\left( \mathbf{z}\right) \right| }^{-1}, \tag{7} +$$ + +where $\pi \left( \cdot \right)$ is a known distribution (a base distribution, e.g., Normal distribution), $f : \mathcal{X} \rightarrow \mathcal{X}$ is a bijective map, and ${\mathbf{J}}_{f}\left( \mathbf{z}\right)$ denotes the Jacobian matrix. + +The main challenge of flow-based generative models lies in formulating invertible transformations for which the Jacobian determinant is computationally tractable. In the simplest case, we can use volume-preserving transformations that result in $\left| {{\mathbf{J}}_{f}\left( \mathbf{z}\right) }\right| = 1$ , e.g., linear autoregressive flows (Kingma et al., 2016; Tomczak & Welling, 2017) or Householder flows (Tomczak & Welling, 2016). However, the volume-preserving flows cannot model arbitrary distributions; therefore, non-linear transformations are preferable. In (Rezende & Mohamed, 2015; Berg et al., 2018; Hoogeboom et al., 2020) a specific form of non-linear transformations was constructed so as to take advantage of the matrix determinant lemma and its generalization to efficiently calculate the Jacobian determinant. Recently, the transformation used in (Rezende & Mohamed, 2015; Berg et al., 2018) was further generalized to arbitrary contractive residual networks (Chen et al., 2019) and contractive densenets (Perugachi-Diaz et al., 2021) with the Russian roulette estimator of the Jacobian determinant. 
Coupling layers constitute a different group of non-linear transformations used in flow-based models like RealNVP (Dinh et al., 2016) and GLOW (Kingma & Dhariwal, 2018). + +In the case of discrete variables, e.g., $\mathcal{X} = {\mathbb{Z}}^{D}$ , the change of variables takes a simpler form due to the fact that there is no change of volume for probability mass functions: + +$$ +p\left( \mathbf{x}\right) = \pi \left( {\mathbf{z} = {f}^{-1}\left( \mathbf{x}\right) }\right) . \tag{8} +$$ + +To date, coupling layers with the rounding operator (Eq. 6) are typically used. The resulting flow-based models are called Integer Discrete Flows (IDF) (Hoogeboom et al., 2019; Berg et al., 2020) with a mixture of discretized logistic distributions (Salimans et al., 2017) as the base distribution. + +§ 2. OUR APPROACH + +§ 2.1. GENERAL INVERTIBLE TRANSFORMATIONS + +The main contribution of this paper is a new class of invertible transformations that generalizes many invertible transformations in reversible computing and reversible deep learning. + +Proposition 1. Let us take $\mathbf{x},\mathbf{y} \in \mathcal{X}$ . The following transformation from $\mathbf{x}$ to $\mathbf{y}$ with ${f}_{i} : {\mathcal{X}}_{1 : i - 1} \times {\mathcal{X}}_{i + 1 : D} \rightarrow {\mathcal{X}}_{i}$ : + +$$ +{y}_{1} = {x}_{1} \circ {f}_{1}\left( {\varnothing ,{\mathbf{x}}_{2 : D}}\right) +$$ + +$$ +{y}_{2} = \left( {{g}_{2}\left( {y}_{1}\right) \vartriangleright {x}_{2}}\right) \circ {f}_{2}\left( {{y}_{1},{\mathbf{x}}_{3 : D}}\right) +$$ + +... + +$$ +{y}_{d} = \left( {{g}_{d}\left( {\mathbf{y}}_{1 : d - 1}\right) \vartriangleright {x}_{d}}\right) \circ {f}_{d}\left( {{\mathbf{y}}_{1 : d - 1},{\mathbf{x}}_{d + 1 : D}}\right) +$$ + +... 
+ +$$ +{y}_{D} = \left( {{g}_{D}\left( {\mathbf{y}}_{1 : D - 1}\right) \vartriangleright {x}_{D}}\right) \circ {f}_{D}\left( {{\mathbf{y}}_{1 : D - 1},\varnothing }\right) +$$ + +is invertible for given binary transformations $\circ$ and $\vartriangleright$ with inverses $\bullet$ and $\vartriangleleft$ , respectively, and arbitrary functions ${g}_{2},\ldots ,{g}_{D}$ and ${f}_{1},\ldots ,{f}_{D}$ . + +Proof. In order to invert $\mathbf{y}$ and recover $\mathbf{x}$ , we start from the last element to obtain the following: + +$$ +{x}_{D} = {g}_{D}\left( {\mathbf{y}}_{1 : D - 1}\right) \vartriangleleft \left( {{y}_{D} \bullet {f}_{D}\left( {{\mathbf{y}}_{1 : D - 1},\varnothing }\right) }\right) . +$$ + +Then, we can proceed with the remaining expressions in decreasing order (i.e., from $D - 1$ to 1) to eventually obtain: + +$$ +{x}_{D - 1} = {g}_{D - 1}\left( {\mathbf{y}}_{1 : D - 2}\right) \vartriangleleft \left( {{y}_{D - 1} \bullet {f}_{D - 1}\left( {{\mathbf{y}}_{1 : D - 2},{x}_{D}}\right) }\right) +$$ + +... + +$$ +{x}_{d} = {g}_{d}\left( {\mathbf{y}}_{1 : d - 1}\right) \vartriangleleft \left( {{y}_{d} \bullet {f}_{d}\left( {{\mathbf{y}}_{1 : d - 1},{\mathbf{x}}_{d + 1 : D}}\right) }\right) +$$ + +... + +$$ +{x}_{2} = {g}_{2}\left( {y}_{1}\right) \vartriangleleft \left( {{y}_{2} \bullet {f}_{2}\left( {{y}_{1},{\mathbf{x}}_{3 : D}}\right) }\right) +$$ + +$$ +{x}_{1} = {y}_{1} \bullet {f}_{1}\left( {\varnothing ,{\mathbf{x}}_{2 : D}}\right) . +$$ + +Next, we show that many widely known invertible transformations could be derived from the proposed general invertible transformation. First, for the space of binary variables, we show that our proposition can be used to obtain three of the most important reversible logic gates. + +Corollary 2 (Feynman gate). 
Let us consider $\mathbf{x} \in \{ 0,1{\} }^{2}$ , and $\circ \equiv \oplus$ and $\vartriangleright \equiv \oplus$ with $\bullet \equiv \oplus$ and $\vartriangleleft \equiv \oplus$ , where $\oplus$ is the XOR operation. Then, taking ${g}_{2} \equiv 0,{f}_{1}\left( {x}_{2}\right) = 0$ and ${f}_{2}\left( {y}_{1}\right) = {y}_{1}$ results in the Feynman gate: + +$$ +{y}_{1} = {x}_{1} \tag{9} +$$ + +$$ +{y}_{2} = {x}_{2} \oplus {x}_{1}\text{ . } +$$ + +Proof. Eq. 9 follows from the fact that 0 is the neutral element of XOR (i.e., $0 \oplus x = x$ ). + +Corollary 3 (Toffoli gate). Let us consider $\mathbf{x} \in \{ 0,1{\} }^{3}$ , and $\circ \equiv \oplus$ and $\vartriangleright \equiv \oplus$ with $\bullet \equiv \oplus$ and $\vartriangleleft \equiv \oplus$ , where $\oplus$ is the XOR operation. Then, taking ${g}_{2}\left( {y}_{1}\right) \equiv 0,{g}_{3}\left( {\mathbf{y}}_{1 : 2}\right) \equiv 0$ , + +${f}_{1}\left( {\mathbf{x}}_{2 : 3}\right) \equiv 0,{f}_{2}\left( {{y}_{1},{x}_{3}}\right) \equiv 0$ and ${f}_{3}\left( {\mathbf{y}}_{1 : 2}\right) = {y}_{1} \land {y}_{2}$ results in the Toffoli gate: + +$$ +{y}_{1} = {x}_{1} \tag{10} +$$ + +$$ +{y}_{2} = {x}_{2} \tag{11} +$$ + +$$ +{y}_{3} = {x}_{3} \oplus \left( {{y}_{1} \land {y}_{2}}\right) . \tag{12} +$$ + +Proof. Eqs. 10 - 12 follow from the fact that 0 is the neutral element of XOR. + +Corollary 4 (Fredkin gate). Let us consider $\mathbf{x} \in \{ 0,1{\} }^{4}$ , and $\circ \equiv \oplus$ and $\vartriangleright \equiv \oplus$ with $\bullet \equiv \oplus$ and $\vartriangleleft \equiv \oplus$ , where $\oplus$ is the XOR operation. 
Then, taking ${x}_{1} \equiv 0,{g}_{2}\left( {y}_{1}\right) \equiv 0$ , ${g}_{3}\left( {\mathbf{y}}_{1 : 2}\right) \equiv 0,{g}_{4}\left( {\mathbf{y}}_{1 : 3}\right) \equiv 0,{f}_{1}\left( {\mathbf{x}}_{2 : 4}\right) = {x}_{2} \land \left( {{x}_{3} \oplus {x}_{4}}\right) ,$ ${f}_{2}\left( {{y}_{1},{\mathbf{x}}_{3 : 4}}\right) \equiv 0,{f}_{3}\left( {{\mathbf{y}}_{1 : 2},{x}_{4}}\right) = {y}_{1}$ and ${f}_{4}\left( {\mathbf{y}}_{1 : 3}\right) \equiv {y}_{1}$ results in the Fredkin gate: + +$$ +{y}_{1} = {x}_{1} \oplus \left( {{x}_{2} \land \left( {{x}_{3} \oplus {x}_{4}}\right) }\right) \tag{13} +$$ + +$$ +{y}_{2} = {x}_{2} \oplus 0 \tag{14} +$$ + +$$ +{y}_{3} = {x}_{3} \oplus {y}_{1} \tag{15} +$$ + +$$ +{y}_{4} = {x}_{4} \oplus {y}_{1} \tag{16} +$$ + +Remark 5 (On the Fredkin gate). Comparing equations 13-16 with the definition of the Fredkin gate, we notice that in Corollary 4 we have to introduce an additional equation to be consistent with Proposition 1. Moreover, we introduced a dummy variable ${x}_{1}$ that always equals 0. + +Moreover, we observe that our proposition generalizes invertible layers in neural networks. + +Corollary 6 (A coupling layer). Let us consider $\mathbf{x} = {\left\lbrack {\mathbf{x}}_{1},{\mathbf{x}}_{2}\right\rbrack }^{\top }$ , where ${\mathcal{X}}_{i} = {\mathbb{R}}^{{D}_{i}}$ , and $\circ \equiv + , \vartriangleright \equiv \odot$ with $\bullet \equiv -$ and $\vartriangleleft \equiv \oslash$ , where $\odot$ and $\oslash$ denote element-wise multiplication and division, respectively. 
Then, taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) = \exp \left( {{\mathrm{{NN}}}_{s}\left( {\mathbf{y}}_{1}\right) }\right) ,{f}_{1}\left( {\mathbf{x}}_{2}\right) = 0$ and ${f}_{2}\left( {\mathbf{y}}_{1}\right) =$ ${\mathrm{{NN}}}_{t}\left( {\mathbf{y}}_{1}\right)$ , where ${\mathrm{{NN}}}_{s},{\mathrm{{NN}}}_{t}$ are neural networks, results in the coupling layer (Dinh et al., 2016): + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} \tag{17} +$$ + +$$ +{\mathbf{y}}_{2} = \exp \left( {{\mathrm{{NN}}}_{s}\left( {\mathbf{y}}_{1}\right) }\right) \odot {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{t}\left( {\mathbf{y}}_{1}\right) . \tag{18} +$$ + +Corollary 7 (A reversible residual layer). Let us consider $\mathbf{x} = {\left\lbrack {\mathbf{x}}_{1},{\mathbf{x}}_{2}\right\rbrack }^{\top }$ , where ${\mathcal{X}}_{i} = {\mathbb{R}}^{{D}_{i}}$ , and $\circ \equiv + , \vartriangleright \equiv \odot$ with $\bullet \equiv -$ and $\vartriangleleft \equiv \oslash$ , where $\odot$ and $\oslash$ denote element-wise multiplication and division, respectively. Then, taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) \equiv 1,{f}_{1}\left( {\mathbf{x}}_{2}\right) = {\mathrm{{NN}}}_{1}\left( {\mathbf{x}}_{2}\right)$ and ${f}_{2}\left( {\mathbf{y}}_{1}\right) = {\mathrm{{NN}}}_{2}\left( {\mathbf{y}}_{1}\right)$ , where ${\mathrm{{NN}}}_{1}$ and ${\mathrm{{NN}}}_{2}$ are neural networks, results in the reversible residual layer proposed in (Gomez et al., 2017): + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + {\mathrm{{NN}}}_{1}\left( {\mathbf{x}}_{2}\right) \tag{19} +$$ + +$$ +{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{2}\left( {\mathbf{y}}_{1}\right) . \tag{20} +$$ + +Remark 8 (On the reversible residual layer). 
According to Proposition 1, we can further generalize the reversible residual layer proposed in (Gomez et al., 2017) by taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) = \exp \left( {{\mathrm{{NN}}}_{3}\left( {\mathbf{y}}_{1}\right) }\right)$ that would result in the following invertible layer: + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + {\mathrm{{NN}}}_{1}\left( {\mathbf{x}}_{2}\right) \tag{21} +$$ + +$$ +{\mathbf{y}}_{2} = \exp \left( {{\mathrm{{NN}}}_{3}\left( {\mathbf{y}}_{1}\right) }\right) \odot {\mathbf{x}}_{2} + {\mathrm{{NN}}}_{2}\left( {\mathbf{y}}_{1}\right) . \tag{22} +$$ + +Interestingly, we can calculate the Jacobian of such a transformation. The Jacobian takes the following form: + +$$ +\mathbf{J}\left( \mathbf{z}\right) = \left\lbrack \begin{array}{ll} \frac{\partial {\mathbf{y}}_{1}}{\partial {\mathbf{x}}_{1}} & \frac{\partial {\mathbf{y}}_{1}}{\partial {\mathbf{x}}_{2}} \\ \frac{\partial {\mathbf{y}}_{2}}{\partial {\mathbf{x}}_{1}} & \frac{\partial {\mathbf{y}}_{2}}{\partial {\mathbf{x}}_{2}} \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{array}\right\rbrack \tag{23} +$$ + +where in (23) we represent the Jacobian as a block matrix with $\mathbf{A} = \mathbf{I}$ , i.e., the identity matrix. Then, $\mathbf{{AC}} = \mathbf{{CA}}$ and according to Theorem 3 on determinants of block matrices in (Silvester, 2000), the logarithm of the Jacobian-determinant equals: + +$$ +\log \left| {\det \mathbf{J}\left( \mathbf{z}\right) }\right| = \mathop{\sum }\limits_{i}{\mathrm{{NN}}}_{3,i}\left( {\mathbf{y}}_{1}\right) . \tag{24} +$$ + +Corollary 9 (A reversible differential mutation). Let us consider ${\mathbf{x}}_{1},{\mathbf{x}}_{2},{\mathbf{x}}_{3} \in {\mathbb{R}}^{D},\gamma \in {\mathbb{R}}_{ + }$ , and $\circ \equiv + , \vartriangleright \equiv \odot$ with $\bullet \equiv -$ . 
Then, taking ${g}_{2}\left( {\mathbf{y}}_{1}\right) \equiv 1,{g}_{3}\left( {\mathbf{y}}_{1 : 2}\right) \equiv 1$ , ${f}_{1}\left( {\mathbf{x}}_{2 : 3}\right) = \gamma \left( {{\mathbf{x}}_{2} - {\mathbf{x}}_{3}}\right) ,{f}_{2}\left( {{\mathbf{y}}_{1},{\mathbf{x}}_{3}}\right) = \gamma \left( {{\mathbf{x}}_{3} - {\mathbf{y}}_{1}}\right)$ , and ${f}_{3}\left( {\mathbf{y}}_{1 : 2}\right) = \gamma \left( {{\mathbf{y}}_{1} - {\mathbf{y}}_{2}}\right)$ , results in the reversible differential mutation proposed in (Tomczak et al., 2020): + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + \gamma \left( {{\mathbf{x}}_{2} - {\mathbf{x}}_{3}}\right) \tag{25} +$$ + +$$ +{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + \gamma \left( {{\mathbf{x}}_{3} - {\mathbf{y}}_{1}}\right) \tag{26} +$$ + +$$ +{\mathbf{y}}_{3} = {\mathbf{x}}_{3} + \gamma \left( {{\mathbf{y}}_{1} - {\mathbf{y}}_{2}}\right) \text{ . } \tag{27} +$$ + +§ 2.2. GENERAL INVERTIBLE TRANSFORMATIONS FOR INTEGER DISCRETE FLOWS + +We propose to utilize our general invertible transformations in IDF. For this purpose, we formulate two new coupling layers that fulfill Proposition 1, namely: - We divide the input into four parts, $\mathbf{x} = \left\lbrack {{\mathbf{x}}_{1},{\mathbf{x}}_{2},{\mathbf{x}}_{3},{\mathbf{x}}_{4}}\right\rbrack$ : + +$$ +{\mathbf{y}}_{1} = {\mathbf{x}}_{1} + \left\lfloor {{\mathrm{{NN}}}_{t,1}\left( {\mathbf{x}}_{2 : 4}\right) }\right\rceil +$$ + +$$ +{\mathbf{y}}_{2} = {\mathbf{x}}_{2} + \left\lfloor {{\mathrm{{NN}}}_{t,2}\left( {{\mathbf{y}}_{1},{\mathbf{x}}_{3 : 4}}\right) }\right\rceil \tag{28} +$$ + +$$ +{\mathbf{y}}_{3} = {\mathbf{x}}_{3} + \left\lfloor {{\mathrm{{NN}}}_{t,3}\left( {{\mathbf{y}}_{1 : 2},{\mathbf{x}}_{4}}\right) }\right\rceil +$$ + +$$ +{\mathbf{y}}_{4} = {\mathbf{x}}_{4} + \left\lfloor {{\mathrm{{NN}}}_{t,4}\left( {\mathbf{y}}_{1 : 3}\right) }\right\rceil +$$ + + * We divide the input into eight parts, $\mathbf{x} = \left\lbrack {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{8}}\right\rbrack$ . 
Then, we formulate the coupling layer analogously to (28). + +Further, we use the discretized two-parameter logistic distribution. It could be expressed as a difference of the logistic cumulative distribution functions (Salimans et al., 2017) or analytically (Chakraborty & Chakravarty, 2016). + +§ 3. EXPERIMENTS + +Data In the experiment, we use a toy dataset of handwritten digits available in Scikit-learn ${}^{1}$ that consists of 1,797 images. Each image consists of 64 pixels $\left( {8\mathrm{{px}} \times 8\mathrm{{px}}}\right)$ . + +Models In the experiment, we compare the following models: RealNVP with uniform dequantization and the standard Gaussian base distribution (REALNVP), RealNVP with the coupling layer in Remark 8 and the standard Gaussian base distribution (REALNVP2), IDF with the coupling layer in (6) (IDF), IDF with the coupling layer in (28) (IDF4), IDF with the coupling layer for eight parts (IDF8). In all models, we used the order-reversing permutation (i.e., a matrix with ones on the anti-diagonal and zeros elsewhere) after each coupling layer. + +In order to keep a similar number of weights, we used 16 flows for IDF, 8 flows for REALNVP, 4 flows for REALNVP2, 4 flows for IDF4, and 2 flows for IDF8. All models have roughly ${1.32}\mathrm{M}$ weights. 
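The discretized two-parameter logistic mentioned above, written as a difference of logistic CDFs at half-integer bin edges, can be sketched in a few lines of Python (a sketch following Salimans et al., 2017; the parameter values are arbitrary, and edge-bin handling for bounded data is omitted):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def discretized_logistic_pmf(x, mu, s):
    # P(X = x) = sigma((x + 0.5 - mu) / s) - sigma((x - 0.5 - mu) / s):
    # the logistic CDF mass that falls into the unit bin around integer x.
    return sigmoid((x + 0.5 - mu) / s) - sigmoid((x - 0.5 - mu) / s)

probs = {x: discretized_logistic_pmf(x, mu=1.3, s=2.0) for x in range(-200, 201)}
assert all(p >= 0.0 for p in probs.values())  # a valid pmf
assert abs(sum(probs.values()) - 1.0) < 1e-6  # mass ~1 over a wide support
```

Because consecutive bins share an edge, the probabilities telescope, so the total mass equals the CDF difference between the outermost edges.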
For IDF, IDF4, and IDF8 we utilized the following neural networks for transitions $\left( {\mathrm{{NN}}}_{t}\right)$ : + +$$ +\mathrm{Linear}\left( {D}_{in},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{D}_{out}\right) +$$ + +and for REALNVP, we additionally used the following neural networks for scaling $\left( {\mathrm{{NN}}}_{s}\right)$ : + +$$ +\mathrm{Linear}\left( {D}_{in},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{256}\right) \rightarrow \mathrm{LeakyReLU} \rightarrow \mathrm{Linear}\left( {256},{D}_{out}\right) \rightarrow \mathrm{Tanh} +$$ + +Training & Evaluation We compare the models using the negative log-likelihood (nll). We train each model using 1,000 images, and the mini-batch size equals 64. Moreover, we take 350 images for validation and 447 for testing. Each model is trained 5 times. During training, we use early stopping with patience equal to 20 epochs, and the best-performing model on the validation set is later evaluated on the test set. The Adam optimizer with the learning rate equal to 0.001 was used. + +Code The experiments can be reproduced by running the code available at: https://github.com/ ANONYMIZED + +Results In Figure 1, we present aggregated results for the five models. Moreover, in Figure 2, examples of unconditional samples are presented, together with a real sample. + +First, we notice that IDF performs better than REALNVP and REALNVP2. In this paper, we use fully-connected neural networks and small toy data. Nevertheless, it is interesting to see that IDF could perform better than a widely used continuous flow-based model with dequantization. Moreover, the new coupling layer presented in Remark 8 seems more stable in terms of final results; however, the difference in performance is statistically insignificant. 
Second, we observe that our proposition of invertible transformations results in improved nll. The proposed coupling layer with 8 partitions performed the best in terms of nll, and the coupling layer with 4 partitions also outperformed IDF, REALNVP and REALNVP2. We want to highlight that all models have almost identical numbers of weights; thus, these results are not caused by taking larger neural networks. The samples presented in Figure 2 are rather hard to analyze due to their small size $\left( {8\mathrm{{px}} \times 8\mathrm{{px}}}\right)$ . Nevertheless, we notice that all IDFs generated crisper images than REALNVP and REALNVP2. Moreover, IDF4 and IDF8 seem to produce digits of higher visual quality. + +[graphics] + +Figure 1. The aggregated results for the five models on the test set. + +[graphics] + +Figure 2. Unconditional samples from the five models. + +§ 4. CONCLUSION + +In this paper, we proposed a new class of invertible transformations. We showed that many well-known invertible transformations could be derived from our proposition. Moreover, we proposed two coupling layers and presented how they could be utilized in flow-based models for integer values (Integer Discrete Flows). Our preliminary experiments on the digits data indicate that the new coupling layers result in better negative log-likelihood values than for IDF and REALNVP. These results are promising and will be pursued in the future on more challenging datasets. 
${}^{1}$ https://scikit-learn.org/stable/datasets/index.html#digits-dataset

\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MvjsWTCfXpA/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MvjsWTCfXpA/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..05aebf49aa5e49c07a119945814b75e6e35f0f05
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MvjsWTCfXpA/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,540 @@

# Agent Forecasting at Flexible Horizons using ODE Flows

Anonymous Authors ${}^{1}$

## Abstract

In this work we describe OMEN, a neural ODE based normalizing flow for the prediction of marginal distributions at flexible evaluation horizons, and apply it to agent position forecasting. OMEN's architecture embeds the assumption that the marginal distributions of a given agent moving forward in time are related, allowing for an efficient representation of marginal distributions through time and for reliable interpolation between the prediction horizons seen in training. Experiments on a popular agent forecasting dataset demonstrate significant improvements over most baseline approaches and performance comparable to the state of the art, while providing the new functionality of reliable interpolation of predicted marginal distributions between prediction horizons, as demonstrated on synthetic data.

## 1. Introduction

Autonomous driving has benefited tremendously from recent progress in deep learning and computer vision (Grigorescu et al., 2019). The capability of recognizing traffic signs (Arcos-García et al., 2018; Zhou et al., 2020), localizing pedestrians (Mao et al., 2017; Liu et al., 2019), etc. makes it possible for autonomous vehicles to "see" the world (Zhao et al., 2019).
However, one critical component for safe and efficient planning in autonomous vehicles is an accurate prediction of the future positions of surrounding agents (such as pedestrians or moving vehicles) in the environment (Mozaffari et al., 2019; Rudenko et al., 2020). Despite the importance of the position prediction problem, performance on this task is still far from satisfactory because of the following challenging requirements: (1) predictions must be conditioned on the environment, as contextual clues are essential for an accurate prediction (an example is given in Fig. 1a); and (2) predictions are required to be highly multi-modal (shown in Fig. 1b), as the real-world environment often exhibits junctions where an agent has several distinct possible future trajectories, and mode collapse in these moments could lead to disastrous planning outcomes.

It is common to frame the agent forecasting task as learning marginal distributions over potential agent positions (Makansi et al., 2019; Oh & Valois, 2019; Zieba et al., 2020), also known as "occupancy maps", a popular representation in planning for robotics and autonomous vehicles (Grigorescu et al., 2019; Mozaffari et al., 2019). By predicting the marginal distribution at a specific point in time, these methods are often superior at capturing the complex multi-modal nature of the data, avoiding the challenges of generating diverse trajectories (Ma et al., 2020). In addition, while the underlying process of an agent's trajectory is continuous, most popular forecasting models operate on a discretized representation of time chosen during training (Whittle, 1951; Rhinehart et al., 2018; Mozaffari et al., 2019; Makansi et al., 2019; Salinas et al., 2019; Tang & Salakhutdinov, 2019; Rhinehart et al., 2019; Oh & Valois, 2019; Zieba et al., 2020). The granularity of the time-steps used in training constrains the resolution and utility of these approaches. Please refer to the Appendix for a detailed discussion of related works.
Recently, Deng et al. (2020) demonstrated a conditional temporal process which can produce marginals and trajectories fully continuous in time. However, the expressiveness of this approach is ultimately bounded by its formulation as a stochastic process, taken in their paper to be a differential deformation of the Wiener process.

Building upon this approach, we propose a novel normalizing flow based architecture motivated by the assumption of modelling a continuous temporal process, where our model defines a new temporal process rather than deforming an existing one. The described model is shown in Fig. 1c. The main contributions of this work are summarized as follows: (1) An expressive, multi-modal conditional normalizing flow based model for predicting agent positions. (2) A model capable of predicting at flexible horizons, including those not seen in training. (3) A flow architecture that embeds the assumption that, for a continuous process, predicted marginal distributions deform smoothly in time. (4)

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

![01963e2a-b711-7f29-8363-936c04a78daa_1_247_189_1249_397_0.jpg](images/01963e2a-b711-7f29-8363-936c04a78daa_1_247_189_1249_397_0.jpg)

Figure 1. Overview. (a) Environmental conditioning. Agent location prediction requires synthesis of complex conditioning information, e.g. road markings, agent histories, lidar, video data. (b) Flexible prediction. Our goal is to predict marginals across agent locations at any choice of time, shown here for agent 1 (top, blue) and agent 2 (bottom, red). (c) Continuous representation. We propose a continuous flow based architecture, explicitly connecting marginal predictions across horizons.
Here a base distribution (left) is connected to a marginal prediction at 2 seconds (middle) and 8 seconds (right) by a single neural ODE. Black lines show sample trajectories, corresponding to solutions to the ODE with an initial value taken from the base distribution.

Demonstrations on both synthetic data and an important agent forecasting dataset.

## 2. Method

In this section, we present our model and its optimization. We consider the task of predicting marginal distributions over future vehicle positions based on asynchronous conditioning information. Specifically, given 2D positional data $\mathbf{x} \mathrel{\text{:=}} {\left\{ {\mathbf{x}}_{{i}^{\prime }}^{\left( {t}_{j}^{\prime }\right) }\right\} }_{{i}^{\prime }, j}$ for a set of dependent agents ${i}^{\prime } \in {A}^{\prime }$ at asynchronous times ${t}_{j}^{\prime } \in {T}^{\prime }$, we are interested in the marginal distributions $p\left( {\left\{ {\mathbf{x}}_{i}^{\left( {t}_{j}\right) }\right\} }_{i}\right)$, with $i \in A \subseteq {A}^{\prime }$ and $T \ni {t}_{j} > \max \left( {T}^{\prime }\right)$, where $T$ is a set of target horizons. In practice, we may also have image-based auxiliary information $\mathbf{a} = {\left\{ {\mathbf{a}}_{{i}^{\prime }}^{\left( {t}_{j}^{\prime }\right) }\right\} }_{{i}^{\prime }, j}$, such as Lidar scans, and write $\phi \mathrel{\text{:=}} \{ \mathbf{x},\mathbf{a}\}$ to summarize all available information up to time ${t}_{0} \mathrel{\text{:=}} \max \left( {T}^{\prime }\right)$. Due to the nature of the data we work with, we will principally refer to timepoints (e.g., ${t}_{j},{t}_{j}^{\prime }$); however, our model is continuous in time, and as such it will at times be necessary to refer to the continuous time axis $t$ on which those observations lie. Further, the positional data ${\mathbf{x}}_{i}^{\left( {t}_{j}\right) }$ are taken to be discrete vectorized observations of a function $x\left( t\right)$.
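The problem setup above can be sketched as a small container type (a hypothetical illustration in Python; the name `ForecastTask` and its fields are our own and not part of the paper):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np


@dataclass
class ForecastTask:
    """Asynchronous 2D observations per agent (the set x), auxiliary data
    such as Lidar scans (the set a), and the future target horizons T."""
    # (agent i', time t'_j) -> 2D position x_{i'}^{(t'_j)}
    positions: Dict[Tuple[str, float], np.ndarray] = field(default_factory=dict)
    # (agent i', time t'_j) -> auxiliary observation a_{i'}^{(t'_j)}, e.g. a Lidar scan
    auxiliary: Dict[Tuple[str, float], np.ndarray] = field(default_factory=dict)
    # target horizons T, each strictly greater than t0
    target_horizons: List[float] = field(default_factory=list)

    @property
    def t0(self) -> float:
        """t0 := max(T'), the latest observation time."""
        return max(t for (_, t) in self.positions)
```

Everything stored in `positions` and `auxiliary` up to `t0` corresponds to the conditioning information $\phi$; the model is then queried at each horizon in `target_horizons`.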
Our approach builds upon previous work on normalizing flows (NFs) and their continuous counterparts. We refer the reader to (Rezende & Mohamed, 2016; Chen et al., 2018; Grathwohl et al., 2018; Papamakarios et al., 2019; Kobyzev et al., 2020) for additional details.

### 2.1. Normalizing Flows with Informative Base Distributions

In the normalizing flow literature, it is usually assumed that a sufficiently expressive flow makes the choice of base distribution irrelevant (Papamakarios et al., 2019; Kobyzev et al., 2020), and the base distribution is therefore commonly chosen to be a simple Gaussian. However, recent works (Deng et al., 2020; Jaini et al., 2020; Mahajan et al., 2020) have started exploring constructions where the choice of base distribution embeds information about the target distribution, allowing good approximations of the target distribution with simpler flow transforms. For example, Jaini et al. (2020) demonstrated that for a target distribution with heavy tails, choosing a base distribution with similar heavy tails can be more effective at capturing the target distribution accurately than a wide variety of modern complex NF transforms.

Motivated by this discussion, we suggest that to model the distribution ${p}_{t}\left( {x\left( t\right) \mid \mathbf{\phi }}\right)$ for a range of values of $t > {t}_{0}$, a desirable property of the model is that the distributions ${p}_{t}\left( {x\left( t\right) \mid \phi }\right)$ and ${p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) \mid \phi }\right)$ should be similar for small $\epsilon$ and identical as $\epsilon \rightarrow 0$.${}^{1}$ In other words, ${p}_{t}\left( {x\left( t\right) }\right)$ can serve as an informative base distribution for ${p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) }\right)$. This can be realized by incrementally transforming distributions as time progresses.
Therefore, we can formulate the proposed model as follows: at any target time in the future, we can describe the target distribution ${p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) }\right)$ as a transform $f$ (taken to be a normalizing flow) forward in time from the previous time-step ${p}_{t}\left( {x\left( t\right) }\right)$,

$$
{p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) }\right) = {p}_{t}\left( {{f}^{-1}\left( {x\left( {t + \epsilon }\right) }\right) }\right) \left| {\det \frac{\partial {f}^{-1}}{\partial x\left( {t + \epsilon }\right) }}\right| . \tag{1}
$$

In addition, we can take advantage of the fact that the series of flow transforms at any point in a sequence building out from the base distribution represents a valid normalizing flow. Therefore, we can implement a network with multiple

---

${}^{1}$ To ease notation, we drop references to the conditioning information $\phi$ from now on.

---

![01963e2a-b711-7f29-8363-936c04a78daa_2_239_194_1270_425_0.jpg](images/01963e2a-b711-7f29-8363-936c04a78daa_2_239_194_1270_425_0.jpg)

Figure 2. Interpolation in Time with Synthetic Data. Plots of predicted likelihood vs. the $x$- and $y$-coordinates at a series of times into the future. The number of modes ${n}_{m}$ was provided as conditioning information, and times marked with * were seen in training. The times shown here are a subset of those in Table 1.

outputs, with each output further from the base distribution learning to predict a point further into the future. This formulation, inspired by recent progress on informative base distributions for NFs (Deng et al., 2020; Jaini et al., 2020; Mahajan et al., 2020), motivates our proposed architecture.

### 2.2. Representation Through a Continuous Conditional Normalizing Flow

Building upon the discrete model described above, we realize the proposed NF architecture by adopting a neural ODE representation.
With this approach, we find our model can, with minimal regularization (Finlay et al., 2020), learn reasonable interpolations between the evaluation points seen during training, allowing us to produce valid marginal distributions at arbitrary target times. The proposed model utilizes the above-discussed "prior" intuition when constructing marginal distributions by taking marginals at earlier time-steps as informative base distributions. An illustration outlining this approach is available in the Appendix.

To facilitate asynchronous conditioning when predicting conditional marginal distributions, a vector of conditioning information from an encoder model is passed to the neural ODE. Specifically, as an extension to (Chen et al., 2018; Grathwohl et al., 2018), this information is concatenated to the input of the fully-connected neural network $f$ defining the neural ODE transform $\frac{\partial z\left( t\right) }{\partial t}$, such that for some parameters $\theta$ and conditioning information $\phi$ we have

$$
f\left( {z\left( t\right) , t,\phi ;\theta }\right) = \frac{\partial z\left( t\right) }{\partial t}. \tag{2}
$$

Following (Chen et al., 2018; Grathwohl et al., 2018), given an observation $z\left( t\right)$, we can solve the initial value problem to find the equivalent point $z\left( 0\right)$ in the base distribution and the corresponding log-likelihood:

$$
\log p\left( {z\left( t\right) }\right) = \log p\left( {z\left( 0\right) }\right) - {\int }_{0}^{t}\operatorname{tr}\frac{\partial f}{\partial z\left( s\right) }\,{ds}. \tag{3}
$$

Calculating likelihood estimates at multiple horizons of interest simply requires solving the initial value problem for a different choice of $t$, where the temporal axis of the ODE is explicitly aligned with the axis of time in the dataset of interest.
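As a toy illustration of Eq. (3), the log-density under a continuous flow can be approximated by integrating the state backwards in time while accumulating the trace term. The sketch below uses simple Euler steps and a finite-difference trace estimate; it is not the paper's implementation, which would instead use an adaptive ODE solver and automatic differentiation as in Chen et al. (2018) and Grathwohl et al. (2018):

```python
import numpy as np


def trace_jacobian(f, z, t, eps=1e-5):
    """Finite-difference estimate of tr(df/dz) at (z, t)."""
    tr = 0.0
    for i in range(z.shape[0]):
        e = np.zeros_like(z)
        e[i] = eps
        tr += (f(z + e, t)[i] - f(z - e, t)[i]) / (2 * eps)
    return tr


def cnf_log_density(f, log_p0, z_t, t, n_steps=1000):
    """Euler-integrate dz/ds = f(z, s) backwards from s = t to s = 0,
    accumulating the trace integral of Eq. (3) along the way."""
    dt = t / n_steps
    z = np.array(z_t, dtype=float)
    trace_integral = 0.0
    for k in range(n_steps):
        s = t - k * dt
        trace_integral += trace_jacobian(f, z, s) * dt
        z = z - f(z, s) * dt  # step towards the base distribution at s = 0
    # log p_t(z(t)) = log p_0(z(0)) - integral of the trace term
    return log_p0(z) - trace_integral
```

For a linear vector field the result can be checked against the closed-form change of variables, since both the trajectory and the trace integral are available analytically.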
A 'trajectory' can be generated by first sampling from the base distribution and then solving the ODE for the sampled point at $t = 0$; however, unlike a true trajectory, the only source of stochasticity is the initial sample from the base distribution.

Training. The proposed model can be optimized by minimizing the mean negative log-likelihood of the distributions at $\left| T\right|$ target horizons for $\left| A\right|$ agents. Therefore, our optimization objective can be formulated as:

$$
{\mathcal{L}}_{\mathrm{{NLL}}}\left( {f\left( {z\left( t\right) , t,\phi ;\theta }\right) ,\mathbf{x}}\right) = - \mathop{\sum }\limits_{{i = 1}}^{\left| A\right| }\mathop{\sum }\limits_{{j = 1}}^{\left| T\right| }\log \left( {{p}_{{t}_{j}}\left( {{\mathbf{x}}_{i}^{\left( {t}_{j}\right) } \mid \phi ,{t}_{j},\theta }\right) }\right) . \tag{4}
$$

Note that although the model is trained on a finite selection of time-steps, inference (evaluation) can be conducted at any time.

## 3. Evaluation

In this section we demonstrate the ability of the model to generate realistic position estimates for an agent at a future time on both synthetic datasets and a complex autonomous driving environment.

### 3.1. Position Estimation on Synthetic 2D Data

In order to explore our model's ability to interpolate and extrapolate through time, we created a synthetic multi-modal temporal process dataset. This process consists of radially growing angular distribution bands and has three configurations, with the number of modes controlling the angular division of the bands. At each time-step the radial distance of a band grows with a step length drawn from a normal
| ${n}_{m}$ | Model | 10* | 15 | 20* | 25 | 30 | 35 | 40* | 50* | 60* | 70* | 80 | 90 | 100 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | ● | 1.588 | 2.159 | 2.399 | 2.713 | 2.938 | 3.117 | 3.335 | 3.656 | 3.942 | 4.195 | 4.441 | 4.78 | 5.426 |
| 1 | ○ | | 1.957 | | 2.565 | 2.782 | 3.012 | | | | | 4.443 | 4.578 | 4.770 |
| 3 | ● | -0.006 | 0.344 | 0.676 | 0.974 | 1.188 | 1.379 | 1.580 | 1.930 | 2.181 | 2.435 | 2.677 | 2.942 | 3.155 |
| 3 | ○ | | 0.649 | | 0.994 | 1.248 | 1.476 | | | | | 2.670 | 2.915 | 3.231 |
| 8 | ● | 1.092 | 1.516 | 1.805 | 2.153 | 2.380 | 2.572 | 2.788 | 3.064 | 3.321 | 3.558 | 3.803 | 4.150 | 4.601 |
| 8 | ○ | | 1.726 | | 2.133 | 2.368 | 2.562 | | | | | 3.842 | 4.092 | 4.329 |

Table 1. Performance (NLL) at target prediction horizons. The number of modes ${n}_{m}$ is treated as a conditioning variable of the model. ● marks the model trained on the times marked with * and interpolated/extrapolated to the times without *; ○ marks a model trained and evaluated only on the times not marked with *. Performance is broadly equivalent between the two models, demonstrating an ability to both interpolate and extrapolate to times unseen in training.
| Method | Test $\widehat{e}$ |
| --- | --- |
| PRECOG-ESP (Rhinehart et al., 2019) | ${0.634} \pm {0.006}$ |
| HCNAF (Oh & Valois, 2019) | 0.114 |
| CTFP* (Deng et al., 2020) | ${0.500} \pm {0.014}$ |
| OMEN* | ${0.185} \pm {0.002}$ |
| OMEN-discrete | ${0.144} \pm {0.006}$ |
Table 2. PRECOG-Carla single-agent forecasting evaluation. Lower is better. All models use the PRECOG-Carla Town 1 training set in training and are evaluated on the PRECOG-Carla Town 1 test set. OMEN and CTFP, marked with *, are able to produce likelihood estimates for unseen target horizons.

distribution. Conditioning information on the number of modes ${n}_{m} \in \{ 1,3,8\}$ is encoded using an MLP before being concatenated to every layer of the neural ODE flow in place of $\phi$. Our model was trained on a specific subset of time points, $t \in \{ {10},{20},{40},{50},{60},{70}\}$, then evaluated at a variety of times never seen in training, including examples of both interpolation and extrapolation. Performance on log-likelihood estimation is comparable to a model trained explicitly on the held-out times. Full results are shown in Table 1; qualitative results are shown in Fig. 2.

### 3.2. Agent Forecasting Experiments

Baselines and Ablations. Results from our model are compared to several leading approaches for likelihood estimation on agent forecasting. Minor modifications to the CTFP model (Deng et al., 2020) and a discrete ablation of OMEN are described in the Appendix. While all baselines are capable of producing likelihood estimates for agents and times seen in training, only the full OMEN model and the CTFP model (Deng et al., 2020) are able to produce likelihood estimates for unseen time points.

Metrics. Following Rhinehart et al.
(2019), results are presented here using the extra nats metric $\widehat{e}$, which provides a normalized and bounded likelihood metric, $\widehat{e} \mathrel{\text{:=}} \left( {H\left( {{p}^{\prime }, q}\right) - H\left( \eta \right) }\right) /\left( {\left| T\right| \cdot \left| A\right| \cdot {N}_{D}}\right)$, where $H\left( {{p}^{\prime }, q}\right)$ is the cross-entropy between the true distribution ${p}^{\prime }$ perturbed by some noise $\eta$ (taken here as $\eta = \mathcal{N}\left( {\mathbf{0},{0.01}^{2} \cdot \mathbf{I}}\right)$ to match Rhinehart et al. (2019)) and our model's prediction $q$, ${N}_{D}$ is the number of dimensions of the position data, and $H\left( \eta \right)$ can be calculated analytically.

PRECOG Carla Dataset. The PRECOG Carla dataset (Rhinehart et al., 2019) comprises the complex simulated trajectories of an autopilot and four other agents in the Carla traffic simulator (Dosovitskiy et al., 2017), and includes additional Lidar data centred on the main autopilot agent. Here the train, validation, and test data subsets were chosen to match Rhinehart et al. (2019). OMEN and its ablations were trained to minimize the NLL of PRECOG Carla's autopilot for all future time-steps available in the dataset. Results are presented in Table 2, and plots showing example predictions are available in the Appendix. We also refer the reader to the Appendix for further implementation details.

## 4. Conclusion

We presented a normalizing flow based architecture with a structure motivated by the assumption of modelling a continuous temporal process. Experimental evidence suggested that the constraints that allow for the smooth interpolation of likelihood estimates did cause some degradation in performance; however, novel capabilities are demonstrated in comparison to other leading approaches for likelihood estimation on agent forecasting.
Specifically, we demonstrated the ability to conditionally model complex processes, and to both interpolate and extrapolate those results through time. Further, performance on the important and challenging task of agent forecasting was explored, and performance comparable to the state of the art was achieved.

In future work, the authors plan to extend this approach to the important task of multi-agent forecasting, where a normalizing flow formulation is expected to be particularly useful for capturing complex, high-dimensional distributions.

## References

Arcos-García, Á., Alvarez-Garcia, J. A., and Soria-Morillo, L. M. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods. Neural Networks, 99:158-165, 2018.

Bhattacharyya, A., Hanselmann, M., Fritz, M., Schiele, B., and Straehle, C. Conditional flow variational autoencoders for structured sequence prediction. CoRR, abs/1908.09008, 2019. URL http://arxiv.org/abs/1908.09008.

Brouwer, E. D., Simm, J., Arany, A., and Moreau, Y. Gru-ode-bayes: Continuous modeling of sporadically-observed time series. CoRR, abs/1905.12374, 2019. URL http://arxiv.org/abs/1905.12374.

Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. Neural ordinary differential equations, 2018.

Chen, R. T. Q., Amos, B., and Nickel, M. Neural spatiotemporal point processes, 2021.

Deng, R., Chang, B., Brubaker, M. A., Mori, G., and Lehrmann, A. Modeling continuous stochastic processes with dynamic normalizing flows, 2020.

Dosovitskiy, A., Ros, G., Codevilla, F., López, A. M., and Koltun, V. CARLA: an open urban driving simulator. CoRR, abs/1711.03938, 2017. URL http://arxiv.org/abs/1711.03938.

Finlay, C., Jacobsen, J.-H., Nurbekyan, L., and Oberman, A. M. How to train your neural ode: the world of jacobian and kinetic regularization, 2020.

Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D.
FFJORD: Free-form continuous dynamics for scalable reversible generative models. ArXiv, abs/1810.01367, 2018.

Grigorescu, S., Trasnea, B., Cocias, T., and Macesanu, G. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37, 11 2019. doi: 10.1002/rob.21918.

Huang, C., Krueger, D., Lacoste, A., and Courville, A. C. Neural autoregressive flows. CoRR, abs/1804.00779, 2018a. URL http://arxiv.org/abs/1804.00779.

Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows, 2018b.

Jain, A., Casas, S., Liao, R., Xiong, Y., Feng, S., Segal, S., and Urtasun, R. Discrete residual flow for probabilistic pedestrian behavior prediction. CoRR, abs/1910.08041, 2019. URL http://arxiv.org/abs/1910.08041.

Jaini, P., Kobyzev, I., Yu, Y., and Brubaker, M. Tails of lipschitz triangular flows, 2020.

Kingma, D. P., Salimans, T., and Welling, M. Improving variational inference with inverse autoregressive flow. CoRR, abs/1606.04934, 2016. URL http://arxiv.org/abs/1606.04934.

Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2020. ISSN 1939-3539. doi: 10.1109/tpami.2020.2992934. URL http://dx.doi.org/10.1109/TPAMI.2020.2992934.

Kumar, M., Babaeizadeh, M., Erhan, D., Finn, C., Levine, S., Dinh, L., and Kingma, D. Videoflow: A flow-based generative model for video. CoRR, abs/1903.01434, 2019. URL http://arxiv.org/abs/1903.01434.

Li, Y., Yi, H., Bender, C. M., Shan, S., and Oliva, J. B. Exchangeable neural ode for set modeling, 2020.

Liu, W., Liao, S., Ren, W., Hu, W., and Yu, Y. High-level semantic feature detection: A new perspective for pedestrian detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5187-5196, 2019.

Ma, Y. J., Inala, J. P., Jayaraman, D., and Bastani, O.
Diverse sampling for normalizing flow based trajectory forecasting, 2020.

Mahajan, S., Bhattacharyya, A., Fritz, M., Schiele, B., and Roth, S. Normalizing flows with multi-scale autoregressive priors, 2020.

Makansi, O., Ilg, E., Çiçek, Ö., and Brox, T. Overcoming limitations of mixture density networks: A sampling and fitting framework for multimodal future prediction. CoRR, abs/1906.03631, 2019. URL http://arxiv.org/abs/1906.03631.

Mao, J., Xiao, T., Jiang, Y., and Cao, Z. What can help pedestrian detection? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3127-3136, 2017.

Mehrasa, N., Deng, R., Ahmed, M. O., Chang, B., He, J., Durand, T., Brubaker, M., and Mori, G. Point process flows. CoRR, abs/1910.08281, 2019. URL http://arxiv.org/abs/1910.08281.

Mozaffari, S., Al-Jarrah, O. Y., Dianati, M., Jennings, P. A., and Mouzakitis, A. Deep learning-based vehicle behaviour prediction for autonomous driving applications: A review. CoRR, abs/1912.11676, 2019. URL http://arxiv.org/abs/1912.11676.

Oh, G. and Valois, J. HCNAF: hyper-conditioned neural autoregressive flow and its application for probabilistic occupancy map forecasting. CoRR, abs/1912.08111, 2019. URL http://arxiv.org/abs/1912.08111.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference, 2019.

Qiu, C., Mandt, S., and Rudolph, M. Variational dynamic mixtures, 2020.

Rasul, K., Sheikh, A.-S., Schuster, I., Bergmann, U., and Vollgraf, R. Multivariate probabilistic time series forecasting via conditioned normalizing flows, 2021.

Rempe, D., Birdal, T., Zhao, Y., Gojcic, Z., Sridhar, S., and Guibas, L. J. Caspr: Learning canonical spatiotemporal point cloud representations, 2020.

Rezende, D. J. and Mohamed, S. Variational inference with normalizing flows, 2016.

Rhinehart, N., Kitani, K., and Vernaza, P.
R2P2: A reparameterized pushforward policy for diverse, precise generative path forecasting. In European Conference on Computer Vision. Springer, 2018.

Rhinehart, N., McAllister, R., Kitani, K. M., and Levine, S. PRECOG: prediction conditioned on goals in visual multi-agent settings. CoRR, abs/1905.01296, 2019. URL http://arxiv.org/abs/1905.01296.

Rubanova, Y., Chen, R. T. Q., and Duvenaud, D. Latent odes for irregularly-sampled time series. CoRR, abs/1907.03907, 2019. URL http://arxiv.org/abs/1907.03907.

Rudenko, A., Palmieri, L., Herman, M., Kitani, K. M., Gavrila, D. M., and Arras, K. O. Human motion trajectory prediction: a survey. The International Journal of Robotics Research, 39(8):895-935, Jun 2020. ISSN 1741-3176. doi: 10.1177/0278364920917446. URL http://dx.doi.org/10.1177/0278364920917446.

Salinas, D., Flunkert, V., and Gasthaus, J. Deepar: Probabilistic forecasting with autoregressive recurrent networks, 2019.

Shchur, O., Bilos, M., and Günnemann, S. Intensity-free learning of temporal point processes. CoRR, abs/1909.12127, 2019. URL http://arxiv.org/abs/1909.12127.

Tang, Y. and Salakhutdinov, R. Multiple futures prediction. In NeurIPS, 2019.

Tong, A., Huang, J., Wolf, G., van Dijk, D., and Krishnaswamy, S. Trajectorynet: A dynamic optimal transport network for modeling cellular dynamics, 2020.

Voelker, A., Kajić, I., and Eliasmith, C. Legendre memory units: Continuous-time representation in recurrent neural networks. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 15570-15579. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/952285b9b7e7a1be5aa7849f32ffff05-Paper.pdf.

Whittle, P. Hypothesis Testing in Time Series Analysis. Statistics / Uppsala universitet. Almqvist & Wiksells boktr., 1951. ISBN 9780598919823. URL https://books.google.ca/books?id=nE_QAAAAMAAJ.
Zhao, Z.-Q., Zheng, P., tao Xu, S., and Wu, X. Object detection with deep learning: A review, 2019.

Zhou, S., Deng, C., Piao, Z., and Zhao, B. Few-shot traffic sign recognition with clustering inductive bias and random neural network. Pattern Recognition, 100:107160, 2020.

Zieba, M., Przewiezlikowski, M., Smieja, M., Tabor, J., Trzcinski, T., and Spurek, P. Regflow: Probabilistic flow-based regression for future prediction, 2020.

## A. Related Works

The proposed framework is intrinsically related to two broad families of literature: (1) ODE-based time-series forecasting models, and (2) distribution-based forecasting models. In this section, we review statistical models from the relevant literature and discuss the differences between the proposed method and previous works.

Neural ODEs for Time Series Forecasting. Much recent work has explored embedding neural ODEs in models designed to process sequential data, like Recurrent Neural Networks (RNNs), replacing the hidden state with a neural ODE which evolves as a function of time (Rubanova et al., 2019; Brouwer et al., 2019; Voelker et al., 2019). These works are principally preoccupied with solving the problem of encoding asynchronous time series data; in contrast, we focus on predicting the evolution of a probability distribution in what is assumed to be a continuous process.

Other recent works have used neural ODE based flows to connect multiple distributions (Li et al., 2020; Rempe et al., 2020). As in our architecture, these models leverage a neural ODE flow to smoothly interpolate between multiple complex distributions. However, unlike our model, this transformation is not aligned with the temporal axis of the observed data. Similar to our proposed architecture, Tong et al. (2020) use a neural ODE flow to connect predictions at several horizons, aligning ODE 'time' with the time of observations.
However, their model uses no conditional information and generates plausible trajectories between observed data rather than attempting to forecast future marginal distributions.

In Deng et al. (2020) the model learns a distribution through time by flowing from the target distribution to a Wiener process. Similar to the work presented here, this approach allows for an efficient estimation of the marginal distribution at any target horizon of interest. The key distinction is that in their method the continuous prediction as a function of prediction horizon comes from the choice of a Wiener base distribution, separate from the choice of flow model. In our work the continuous behaviour is instead a direct result of the flow architecture used, defining a new temporal process rather than deforming an existing one.

Concurrent with and closely related to our work is Chen et al. (2021), which explores a similar architecture for the related problem of point processes, and also utilizes a continuous normalizing flow to describe a marginal distribution over predicted event features as a function of target time. However, their approach differs from ours in that they are principally concerned with conditioning on the features and timing of past events in order to predict the timing and features of discrete future events, whereas our model is concerned with the smoothly interpolated prediction of an underlying continuous process (e.g. the path of a vehicle) using a synthesis of very high-dimensional conditioning information (Lidar, cameras, etc.). Practically, this means that the way conditioning information is passed to the continuous flow model is quite distinct in the two approaches.

Distribution-Based Forecasting Models.
Autoregressive forecasting models provide a way to generate trajectories of any length (Whittle, 1951), with modern models allowing for the prediction of expressive distributions which can capture complex multi-modal behavior (Salinas et al., 2019; Qiu et al., 2020), and with a number of approaches utilizing normalizing flows in some way (Kumar et al., 2019; Shchur et al., 2019; Mehrasa et al., 2019; Bhattacharyya et al., 2019; Rasul et al., 2021). However, in order to infer the statistics of a marginal distribution beyond the next time-step, extensive sampling is required, and in these works a fixed discrete sampling in time is assumed. Jain et al. (2019) propose an architecture which, similar to our approach, explicitly relates marginal distributions in time. However, their model is discrete in both time and agent position and does not use the formalism of normalizing flows, instead learning direct transforms on a discretized representation of the marginal distribution, or "occupancy grid" (Grigorescu et al., 2019; Mozaffari et al., 2019).

Rhinehart et al. (2019) describe a model which uses a series of affine transforms to learn a conditional joint distribution over a selection of agents and horizons. This formulation is similar to a discrete version of our model with a much less expressive choice of normalizing flow and, unlike our model, is limited to predicting only the times seen in training.

Most similar to our model is Oh & Valois (2019), a conditional autoregressive flow for marginal prediction at flexible horizons. Here, however, the flow model is a series of discrete layers, specifically a conditional extension of Neural Autoregressive Flows (Huang et al., 2018b), with the predicted horizon passed as an explicit conditioning variable.

## B. Method

Figure 3 shows an overview of our model's computational graph.

## C. Evaluation

### C.1. Position Estimation on Synthetic 2D Data

#### C.1.1.
Synthetic Gaussians

Following (Oh & Valois, 2019), we explore an extension of the synthetic Gaussian experiment from (Huang et al., 2018a), where a single model conditionally represents one of three multi-modal configurations. For OMEN, conditioning information ${n}_{m} \in \{ 0,1,2\}$ is encoded using an MLP before being concatenated to every layer of the neural ODE flow in place of $\phi$ . Results are shown in Table 3; performance is comparable to the HCNAF approach and demonstrates that our choice of a conditional neural-ODE-based normalizing flow is capable of conditionally representing complex multi-modal data.
| Model | AAF | NAF | HCNAF | OMEN |
| --- | --- | --- | --- | --- |
| 2 by 2 | 6.056 | 3.775 | 3.896 | 3.896 |
| 5 by 5 | 5.289 | 3.865 | 3.966 | 3.975 |
| 10 by 10 | 5.087 | 4.176 | 4.278 | 4.336 |
Table 3. NLL for the synthetic Gaussian experiments. The AAF (Kingma et al., 2016) and NAF results are for individual models for each configuration. The HCNAF and OMEN results are for a single model across all three configurations. Results for the AAF, NAF, and HCNAF models are taken from (Oh & Valois, 2019).

![01963e2a-b711-7f29-8363-936c04a78daa_7_304_498_1216_951_0.jpg](images/01963e2a-b711-7f29-8363-936c04a78daa_7_304_498_1216_951_0.jpg)

Figure 3. Architecture. Computation graph and model outline for our proposed architecture OMEN. Data. Shown in pink is the process we hope to predict, with observations ${\mathbf{x}}^{{t}_{j}}$ in the past and ${\mathbf{x}}^{{t}_{j}^{\prime }}$ in the future shown as circles. At inference only points ${t}_{m}$ through ${t}_{0}$ are available, with ${t}_{0}^{\prime }$ through ${t}_{n}^{\prime }$ used in training. The process shown in green represents additional conditioning information passed to the encoder that we do not intend to predict, reported at points ${\mathbf{a}}^{{t}_{j}}$, e.g. periodic lidar and video observations of the environment. Encoder. Observations from ${t}_{m}$ through ${t}_{0}$ are combined in a neural network to produce a single vector of conditioning information $\phi$. LL. Log-likelihood is calculated by solving our neural ODE, given the observation ${\mathbf{z}}^{{\tau }_{n}}$ at ODE time ${\tau }_{n}$ and conditioning information $\phi$, to find the corresponding point in the base distribution ${\mathbf{z}}^{0}$ and the log determinant of the transform, given by the trace of the transform (boxed blue line). Sampling. Here we first sample from the base distribution to find ${\mathbf{z}}^{0}$, then solve forward for that point, conditioning information $\phi$, and $n$ ODE time points of interest ${\tau }_{0},\cdots ,{\tau }_{n}$ to find points on the corresponding trajectory ${\mathbf{z}}^{{\tau }_{0}},\cdots ,{\mathbf{z}}^{{\tau }_{n}}$ (boxed blue line).

### C.2. 
Precog Carla Dataset, Example Results

For the Precog Carla dataset, we use an encoder network that is a partial re-implementation of the one in (Oh & Valois, 2019). LSTM modules encode the past trajectories of agents in the environment, and a residual CNN encodes Lidar information from a single main agent. The encoded trajectory and Lidar information are combined in an MLP and concatenated to every layer of a neural ODE describing a normalizing flow, as outlined in Section 3.3.

In addition to Table 2, we also provide qualitative results. Figures 4, 5, and 6 show example predicted conditional marginal distributions for four of the twenty horizons in the Precog Carla Dataset. All examples are taken from the Precog Carla Town01 test set.

#### C.2.1. BASELINES AND ABLATIONS

Minor extensions are made to the CTFP (Deng et al., 2020) model to provide a functional baseline. Specifically, additional encoding information was concatenated with the output of the ODE-RNN, and an extra loss on extrapolating the predicted process into the future was added in training.

OMEN-discrete has a separate ODE flow transform between each pair of consecutive inference time points in training. In this way it resembles a model following Eq. 1, where $\epsilon$ is the delta between forecast time points in the training set, and each neural ODE transform represents a separate but sequential normalizing flow transform. This ablation is expected to have superior expressive power, as the representation is no longer constrained to be fully continuous in time, and each separate ODE transform can learn its own ODE stop time, allowing expressive power to vary between time steps. However, it does not allow for continuous interpolation of marginals in time.

![01963e2a-b711-7f29-8363-936c04a78daa_9_218_190_1295_1931_0.jpg](images/01963e2a-b711-7f29-8363-936c04a78daa_9_218_190_1295_1931_0.jpg)

Figure 4. 
Example Precog-Carla prediction. Example predicted conditional marginal distributions for four of the twenty horizons in the Precog Carla Dataset. The full conditioning information available to the agent is shown at the top, specifically the autopilot's historical trajectory, the historical trajectories of the four closest cars, and a lidar scan captured by the autopilot at $t = 0$. A single future point for each agent is appended to the top plot to aid the reader when estimating the direction of those agents. The four bottom plots show marginals at $t \in \{1, 2, 3, 4\}$ s into the future and the true future location of the autopilot at those times.

![01963e2a-b711-7f29-8363-936c04a78daa_10_224_193_1286_1925_0.jpg](images/01963e2a-b711-7f29-8363-936c04a78daa_10_224_193_1286_1925_0.jpg)

Figure 5. Example Precog-Carla prediction. Example predicted conditional marginal distributions for four of the twenty horizons in the Precog Carla Dataset. The full conditioning information available to the agent is shown at the top, specifically the autopilot's historical trajectory, the historical trajectories of the four closest cars, and a lidar scan captured by the autopilot at $t = 0$. A single future point for each agent is appended to the top plot to aid the reader when estimating the direction of those agents. The four bottom plots show marginals at $t \in \{1, 2, 3, 4\}$ s into the future and the true future location of the autopilot at those times. 
![01963e2a-b711-7f29-8363-936c04a78daa_11_196_184_1317_1940_0.jpg](images/01963e2a-b711-7f29-8363-936c04a78daa_11_196_184_1317_1940_0.jpg)

Figure 6. Example Precog-Carla prediction. Example predicted conditional marginal distributions for four of the twenty horizons in the Precog Carla Dataset. The full conditioning information available to the agent is shown at the top, specifically the autopilot's historical trajectory, the historical trajectories of the four closest cars, and a lidar scan captured by the autopilot at $t = 0$. A single future point for each agent is appended to the top plot to aid the reader when estimating the direction of those agents. The four bottom plots show marginals at $t \in \{1, 2, 3, 4\}$ s into the future and the true future location of the autopilot at those times. 
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MvjsWTCfXpA/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MvjsWTCfXpA/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ed0b034a7956bbe24587fc42a276f68199af7ffe
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/MvjsWTCfXpA/Initial_manuscript_tex/Initial_manuscript.tex

§ AGENT FORECASTING AT FLEXIBLE HORIZONS USING ODE FLOWS

Anonymous Authors ${}^{1}$

§ ABSTRACT

In this work we describe OMEN, a neural ODE based normalizing flow for the prediction of marginal distributions at flexible evaluation horizons, and apply it to agent position forecasting. OMEN's architecture embeds the assumption that the marginal distributions of a given agent moving forward in time are related, allowing for an efficient representation of marginal distributions through time and for reliable interpolation between prediction horizons seen in training. Experiments on a popular agent forecasting dataset demonstrate significant improvements over most baseline approaches, and comparable performance to the state of the art, while providing the new functionality of reliable interpolation of predicted marginal distributions between prediction horizons, as demonstrated with synthetic data.

§ 1. INTRODUCTION

Autonomous driving has benefited tremendously from recent progress in deep learning and computer vision (Grigorescu et al., 2019). The capability of recognizing traffic signs (Arcos-García et al., 2018; Zhou et al., 2020), localizing pedestrians (Mao et al., 2017; Liu et al., 2019), etc. makes it possible for autonomous vehicles to "see" the world (Zhao et al., 2019). 
However, one critical component for safe and efficient planning in autonomous vehicles is an accurate prediction of the future positions of agents (such as pedestrians or moving vehicles) in the environment (Mozaffari et al., 2019; Rudenko et al., 2020). Despite the importance of the position prediction problem, performance on this task is still far from satisfactory because of the following challenging requirements: (1) predictions must be conditioned on the environment, as contextual clues are essential for an accurate prediction (an example is given in Fig. 1a); and (2) predictions are required to be highly multi-modal (shown in Fig. 1b), as the real-world environment often exhibits junctions where an agent has several distinct possible future trajectories, and mode collapse in these moments could lead to disastrous planning outcomes.

It is common to frame the agent forecasting task as learning marginal distributions over potential agent positions (Makansi et al., 2019; Oh & Valois, 2019; Zieba et al., 2020), also known as "occupancy maps", a popular representation in planning for robotics and autonomous vehicles (Grigorescu et al., 2019; Mozaffari et al., 2019). By predicting the marginal distribution at a specific point in time, these methods are often superior at capturing the complex multi-modal nature of the data, avoiding the challenges of generating diverse trajectories (Ma et al., 2020). In addition, while the underlying process of an agent's trajectory is continuous, most popular forecasting models operate on a discretized representation of time chosen during training (Whittle, 1951; Rhinehart et al., 2018; Mozaffari et al., 2019; Makansi et al., 2019; Salinas et al., 2019; Tang & Salakhutdinov, 2019; Rhinehart et al., 2019; Oh & Valois, 2019; Zieba et al., 2020). The granularity of the time-steps used in training constrains the resolution and utility of these approaches. Please refer to the Appendix for a detailed discussion of related work. 
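The occupancy-map representation referred to above is, at its simplest, a normalized spatial histogram over discretized positions. A minimal sketch (the grid bounds and cell counts are illustrative assumptions, not any cited implementation):

```python
def occupancy_grid(points, x_range, y_range, n_cells):
    # Discretize 2D positions into a normalized occupancy map:
    # each cell holds the fraction of positions that fall inside it.
    (x0, x1), (y0, y1) = x_range, y_range
    grid = [[0.0] * n_cells for _ in range(n_cells)]
    for x, y in points:
        i = min(int((x - x0) / (x1 - x0) * n_cells), n_cells - 1)
        j = min(int((y - y0) / (y1 - y0) * n_cells), n_cells - 1)
        grid[i][j] += 1.0
    total = sum(sum(row) for row in grid)
    return [[c / total for c in row] for row in grid]

# Positions clustered near the origin put most mass in the central cell.
pts = [(0.1, 0.1), (0.2, -0.1), (-0.1, 0.0), (2.0, 2.0)]
grid = occupancy_grid(pts, (-3.0, 3.0), (-3.0, 3.0), n_cells=3)
```

The resolution of such a map is fixed by the chosen grid, which is precisely the discretization limitation discussed above.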
Recently, Deng et al. (2020) demonstrated a conditional temporal process which can produce marginals and trajectories fully continuous in time. However, the expressiveness of this approach is ultimately bounded by the formulation as a stochastic process, taken in their paper to be a differential deformation of the Wiener process.

Building upon this approach, we propose a novel normalizing flow based architecture motivated by the assumption of modelling a continuous temporal process, where our model defines a new temporal process rather than deforming an existing one. The described model is shown in Fig. 1c. The main contributions of this work are summarized as follows: (1) an expressive, multi-modal conditional normalizing flow based model for predicting agent positions; (2) a model capable of predicting at flexible horizons, including those not seen in training; (3) a flow architecture that embeds the assumption that, for a continuous process, predicted marginal distributions deform smoothly in time; and (4) demonstrations on both synthetic data and an important agent forecasting dataset.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

 < g r a p h i c s > 

Figure 1. Overview. (a) Environmental conditioning. Agent location prediction requires synthesis of complex conditioning information, e.g. road markings, agent histories, lidar, video data. (b) Flexible prediction. Our goal is to predict marginals across agent locations at any choice of time, shown here for agent 1 (top, blue) and agent 2 (bottom, red). (c) Continuous representation. We propose a continuous flow based architecture, explicitly connecting marginal predictions across horizons. Here a base distribution (left) is connected to a marginal prediction at 2 seconds (middle) and 8 seconds (right) by a single neural ODE. 
Black lines show sample trajectories, corresponding to solutions to the ODE with an initial value taken from the base distribution.

§ 2. METHOD

In this section, we present our model and its optimization. We consider the task of predicting marginal distributions over future vehicle positions based on asynchronous conditioning information. Specifically, given 2D positional data $\mathbf{x} \mathrel{\text{ := }} {\left\{ {\mathbf{x}}_{{i}^{\prime }}^{\left( {t}_{j}^{\prime }\right) }\right\} }_{{i}^{\prime },j}$ for a set of dependent agents ${i}^{\prime } \in {A}^{\prime }$ at asynchronous times ${t}_{j}^{\prime } \in {T}^{\prime }$ , we are interested in the marginal distributions $p\left( {\left\{ {\mathbf{x}}_{i}^{\left( {t}_{j}\right) }\right\} }_{i}\right)$ , with $i \in A \subseteq {A}^{\prime }$ and $T \ni {t}_{j} > \max \left( {T}^{\prime }\right)$ , where $T$ is a set of target horizons. In practice, we may also have image-based auxiliary information $\mathbf{a} = {\left\{ {\mathbf{a}}_{{i}^{\prime }}^{\left( {t}_{j}^{\prime }\right) }\right\} }_{{i}^{\prime },j}$ , such as Lidar scans, and write $\phi \mathrel{\text{ := }} \{ \mathbf{x},\mathbf{a}\}$ to summarize all available information up to time ${t}_{0} \mathrel{\text{ := }} \max \left( {T}^{\prime }\right)$ . Due to the nature of the data we work with, we will principally refer to timepoints (e.g., ${t}_{j},{t}_{j}^{\prime }$ ); however, our model is continuous in time, and as such it will at times be necessary to refer to the continuous axis of time $t$ on which those observations lie. Further, the positional data ${\mathbf{x}}_{i}^{\left( {t}_{j}\right) }$ are taken to be discrete vectorized observations of a function $x\left( t\right)$ .

Our approach builds upon previous work on normalizing flows (NFs) and their continuous counterparts. 
We refer the reader to (Rezende & Mohamed, 2016; Chen et al., 2018; Grathwohl et al., 2018; Papamakarios et al., 2019; Kobyzev et al., 2020) for additional details.

§ 2.1. NORMALIZING FLOWS WITH INFORMATIVE BASE DISTRIBUTIONS

In the normalizing flow literature, it is usually assumed that a sufficiently expressive flow makes the choice of base distribution irrelevant (Papamakarios et al., 2019; Kobyzev et al., 2020), and the base distribution is therefore commonly chosen to be a simple Gaussian. However, recent works (Deng et al., 2020; Jaini et al., 2020; Mahajan et al., 2020) have started exploring constructions where the choice of base distribution embeds information about the target distribution, allowing good approximations of the target distribution with simpler flow transforms. For example, Jaini et al. (2020) demonstrated that for a target distribution with heavy tails, choosing a base distribution with similarly heavy tails can be more effective at capturing the target distribution accurately than a wide variety of modern complex NF transforms.

Inspired by the discussion above, we suggest that to model the distribution ${p}_{t}\left( {x\left( t\right) \mid \phi }\right)$ for a range of values of $t > {t}_{0}$ , a desired property of the model would be that the distributions ${p}_{t}\left( {x\left( t\right) \mid \phi }\right)$ and ${p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) \mid \phi }\right)$ should be similar for small $\epsilon$ and identical as $\epsilon \rightarrow 0$ .${}^{1}$ In other words, ${p}_{t}\left( {x\left( t\right) }\right)$ can serve as an informative base distribution for ${p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) }\right)$ . This can be realized by incrementally transforming distributions as time progresses. 
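The incremental-transform idea can be checked concretely for a single step: if an invertible map $f$ pushes the marginal forward, the new density follows from the change-of-variables formula. A minimal sketch, assuming a hypothetical 1D affine map $f(x) = ax + b$ (chosen only because the pushforward of a standard normal is then known in closed form):

```python
import math

def std_normal_pdf(x):
    # Density of the base distribution p_t, taken here to be N(0, 1).
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Hypothetical one-step flow moving the marginal forward in time.
a, b = 2.0, 0.5          # f(x) = a * x + b

def p_next(x):
    # Change of variables: p_{t+eps}(x) = p_t(f^{-1}(x)) |d f^{-1} / dx|.
    f_inv = (x - b) / a
    jac = 1.0 / a        # derivative of the inverse of an affine map
    return std_normal_pdf(f_inv) * abs(jac)

def gauss_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

# The pushforward of N(0, 1) under f is N(b, a^2); both routes agree.
vals = [(p_next(x), gauss_pdf(x, b, a * a)) for x in (-1.0, 0.0, 0.5, 2.0)]
```

Chaining many such small steps, one per increment of time, is exactly the incremental construction described above.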
Therefore, we can formulate the proposed model as follows: at any target time in the future, we can describe the target distribution ${p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) }\right)$ as a transform $f$ (taken to be a normalizing flow) forward in time from the previous time-step ${p}_{t}\left( {x\left( t\right) }\right)$ ,

$$
{p}_{t + \epsilon }\left( {x\left( {t + \epsilon }\right) }\right) = {p}_{t}\left( {{f}^{-1}\left( {x\left( {t + \epsilon }\right) }\right) }\right) \left| {\det \frac{\partial {f}^{-1}}{\partial x\left( {t + \epsilon }\right) }}\right| . \tag{1}
$$

In addition, we can take advantage of the fact that the series of flow transforms at any point in a sequence building out from the base distribution represents a valid normalizing flow. Therefore, we can implement a network with multiple outputs, with each output further from the base distribution learning to predict a point further into the future. This formulation, inspired by recent progress on informative base distributions for NFs (Deng et al., 2020; Jaini et al., 2020; Mahajan et al., 2020), motivates our proposed architecture.

${}^{1}$ To ease notation, we drop references to the conditioning information $\phi$ from now on.

 < g r a p h i c s > 

Figure 2. Interpolation in Time with Synthetic Data. Plots of predicted likelihood vs. $x$ - and $y$ -coordinates at a series of times into the future. The number of modes ${n}_{m}$ was provided as conditioning information, and times marked with * were seen in training. The times shown here are a subset of those in Table 1.

§ 2.2. REPRESENTATION THROUGH A CONTINUOUS CONDITIONAL NORMALIZING FLOW

Building upon the discrete model described above, we realise the proposed NF architecture by adopting a neural ODE representation. 
With this approach, we find our model can, with minimal regularization (Finlay et al., 2020), learn reasonable interpolations between evaluation points seen during training, allowing us to produce valid marginal distributions at arbitrary target times. The proposed model utilizes the "prior" intuition discussed above when constructing marginal distributions by taking marginals at earlier time-steps as informative base distributions. An illustration outlining this approach is available in the Appendix.

To facilitate asynchronous conditioning when predicting conditional marginal distributions, a vector of conditioning information from an encoder model is passed to the neural ODE. Specifically, as an extension to (Chen et al., 2018; Grathwohl et al., 2018), this information is concatenated to the input of a fully-connected neural network $f$ describing the neural ODE transform $\frac{\partial z\left( t\right) }{\partial t}$ , such that for some parameters $\theta$ and conditioning information $\phi$ we have

$$
f\left( {z\left( t\right) ,t,\phi ;\theta }\right) = \frac{\partial z\left( t\right) }{\partial t}. \tag{2}
$$

Following (Chen et al., 2018; Grathwohl et al., 2018), given an observation $z\left( t\right)$ , we can solve the initial value problem to find the equivalent point in the base distribution $z\left( 0\right)$ :

$$
\log p\left( {z\left( t\right) }\right) = \log p\left( {z\left( 0\right) }\right) - {\int }_{0}^{t}\operatorname{tr}\frac{\partial f}{\partial z\left( t\right) }{dt}. \tag{3}
$$

Calculating likelihood estimates at multiple horizons of interest simply requires solving the initial value problem for a different choice of $t$ , where the temporal axis of the ODE is explicitly aligned with the axis of time in the dataset of interest. 
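For linear dynamics the trace integral in Eq. (3) is available in closed form, which gives a cheap sanity check of the procedure just described. A minimal sketch (not the paper's implementation), assuming hypothetical scalar dynamics $f(z,t) = az$, so that $z(t) = e^{at} z(0)$ and $\int_0^t \operatorname{tr}(\partial f/\partial z)\, dt = at$:

```python
import math

a = 0.7   # dynamics: dz/dt = a * z, so tr(df/dz) = a everywhere

def log_p_base(z):
    # Standard normal base distribution at ODE time 0.
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def log_likelihood(z_t, t, n_steps=1000):
    # Eq. (3): solve the initial value problem backwards to the base
    # sample z(0) while accumulating the integral of tr(df/dz)
    # (simple Euler discretization).
    z, trace_int = z_t, 0.0
    dt = t / n_steps
    for _ in range(n_steps):
        z -= a * z * dt          # step towards the base distribution
        trace_int += a * dt      # accumulate the trace integral
    return log_p_base(z) - trace_int

# Closed form for comparison: z(t) = e^{a t} z(0), so z(t) ~ N(0, e^{2 a t}).
t, z_t = 1.0, 0.8
exact = -0.5 * z_t**2 * math.exp(-2 * a * t) - 0.5 * math.log(2 * math.pi) - a * t
approx = log_likelihood(z_t, t)
```

Evaluating at a different horizon only changes the choice of `t` passed to `log_likelihood`, mirroring the flexible-horizon evaluation described above.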
A 'trajectory' can be generated by first sampling from the base distribution and then solving the ODE for the sampled point at $t = 0$ ; however, unlike a true trajectory, the only source of stochasticity is the initial sample from the base distribution.

Training. The proposed model can be optimized by minimizing the mean negative log-likelihood of distributions at $\left| T\right|$ target horizons from $\left| A\right|$ agents. Therefore, our optimization objective can be formulated as:

$$
{\mathcal{L}}_{\mathrm{{NLL}}}\left( {f\left( {z\left( t\right) ,t,\phi ;\theta }\right) ,\mathbf{x}}\right) = - \mathop{\sum }\limits_{{i = 1}}^{\left| A\right| }\mathop{\sum }\limits_{{j = 1}}^{\left| T\right| }\log \left( {{p}_{{t}_{j}}\left( {{\mathbf{x}}_{i}^{\left( {t}_{j}\right) } \mid \phi ,{t}_{j},\theta }\right) }\right) . \tag{4}
$$

Note that although the model is trained on a finite selection of time-steps, inference (evaluation) can be conducted at any time.

§ 3. EVALUATION

In this section we demonstrate the ability of the model to generate realistic position estimates for an agent at a future time on both synthetic datasets and a complex autonomous driving environment.

§ 3.1. POSITION ESTIMATION ON SYNTHETIC 2D DATA

In order to explore our model's ability to interpolate and extrapolate through time, we created a synthetic multi-modal temporal process dataset. This process consists of radially growing angular distribution bands, generated in three different mode configurations; the number of modes controls the angular division of the bands. 
At each time-step the radial distance of the band grows with a step length drawn from a normal distribution.

| ${n}_{m}$ | Model | 10* | 15 | 20* | 25 | 30 | 35 | 40* | 50* | 60* | 70* | 80 | 90 | 100 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | $\bullet$ | 1.588 | 2.159 | 2.399 | 2.713 | 2.938 | 3.117 | 3.335 | 3.656 | 3.942 | 4.195 | 4.441 | 4.78 | 5.426 |
| 1 | $\circ$ | - | 1.957 | - | 2.565 | 2.782 | 3.012 | - | - | - | - | 4.443 | 4.578 | 4.770 |
| 3 | $\bullet$ | -0.006 | 0.344 | 0.676 | 0.974 | 1.188 | 1.379 | 1.580 | 1.930 | 2.181 | 2.435 | 2.677 | 2.942 | 3.155 |
| 3 | $\circ$ | - | 0.649 | - | 0.994 | 1.248 | 1.476 | - | - | - | - | 2.670 | 2.915 | 3.231 |
| 8 | $\bullet$ | 1.092 | 1.516 | 1.805 | 2.153 | 2.380 | 2.572 | 2.788 | 3.064 | 3.321 | 3.558 | 3.803 | 4.150 | 4.601 |
| 8 | $\circ$ | - | 1.726 | - | 2.133 | 2.368 | 2.562 | - | - | - | - | 3.842 | 4.092 | 4.329 |

Table 1. Performance (NLL) on Target Horizons (prediction horizon in columns). The number of modes ${n}_{m}$ is treated as a conditioning variable of the model. $\bullet$ marks the model trained on the times marked with * and interpolated/extrapolated to the times without *; $\circ$ marks a model trained and evaluated only on times not marked with *. Performance can be seen to be broadly equivalent between the two models, demonstrating an ability to both interpolate and extrapolate to times unseen in training.

| Method | Test $\widehat{e}$ |
| --- | --- |
| PRECOG-ESP (Rhinehart et al., 2019) | ${0.634} \pm {0.006}$ |
| HCNAF (Oh & Valois, 2019) | 0.114 |
| CTFP* (Deng et al., 2020) | ${0.500} \pm {0.014}$ |
| OMEN* | ${0.185} \pm {0.002}$ |
| OMEN-discrete | ${0.144} \pm {0.006}$ |

Table 2. PRECOG-Carla single agent forecasting evaluation. Lower is better. All models use the PRECOG-Carla Town 1 training set in training, and are evaluated on the PRECOG-Carla Town 1 test set. OMEN and CTFP, marked with *, are able to produce likelihood estimates for unseen target horizons.

Conditioning information on the number of modes ${n}_{m} \in \{ 1,3,8\}$ is encoded using an MLP before being concatenated to every layer of the neural ODE flow in place of $\phi$ . 
Our model was trained on a specific subset of time points $t \in \{ {10},{20},{40},{50},{60},{70}\}$ , then evaluated at a variety of times never seen in training, including examples of both interpolation and extrapolation. Performance on log-likelihood estimation is comparable to a model trained explicitly on the held-out times. Full results are shown in Table 1; qualitative results are shown in Fig. 2.

§ 3.2. AGENT FORECASTING EXPERIMENTS

Baselines and Ablations. Results from our model are compared to several leading approaches for likelihood estimation on agent forecasting. Minor modifications to the CTFP model (Deng et al., 2020) and a discrete ablation of OMEN are described in the Appendix. While all baselines are capable of producing likelihood estimates for agents and times seen in training, only the full OMEN model and the CTFP model (Deng et al., 2020) are able to produce likelihood estimates for unseen time points.

Metrics. Following Rhinehart et al. (2019), results are presented here using the extra nats metric $\widehat{e}$ , which provides a normalized and bounded likelihood metric, $\widehat{e} \mathrel{\text{ := }} \left( {H\left( {{p}^{\prime },q}\right) - H\left( \eta \right) }\right) /\left( {\left| T\right| \cdot \left| A\right| \cdot {N}_{D}}\right)$ , where $H\left( {{p}^{\prime },q}\right)$ is the cross-entropy between the true distribution ${p}^{\prime }$ perturbed by some noise $\eta$ (taken here as $\eta = \mathcal{N}\left( {\mathbf{0},{0.01}^{2} \cdot \mathbf{I}}\right)$ to match Rhinehart et al. (2019)) and our model's prediction $q$ ; ${N}_{D}$ is the number of dimensions in the position data; and $H\left( \eta \right)$ can be calculated analytically.

PRECOG Carla Dataset. The PRECOG Carla dataset (Rhinehart et al., 2019) comprises the complex simulated trajectories of an autopilot and four other agents in the Carla traffic simulation (Dosovitskiy et al., 2017), and includes additional Lidar data centred on the main autopilot agent. 
Here, train, validation, and test data subsets were chosen to match Rhinehart et al. (2019). OMEN and its ablations were trained to minimize the NLL of PRECOG Carla's autopilot for all future time-steps available in the dataset. Results are presented in Table 2, and plots showing example predictions are available in the Appendix. We also refer the reader to the Appendix for further implementation details.

§ 4. CONCLUSION

We presented a normalizing flow based architecture with a structure motivated by the assumption of modelling a continuous temporal process. Experimental evidence suggested that the constraints that allow for the smooth interpolation of likelihood estimates did cause some degradation in performance; however, novel capabilities are demonstrated in comparison to other leading approaches for likelihood estimation on agent forecasting. Specifically, we demonstrated the ability to conditionally model complex processes, and to both interpolate and extrapolate those results through time. Further, performance on the important and challenging task of agent forecasting is explored, and comparable performance to the state of the art is achieved.

In future work, the authors plan to extend this approach to the important task of multi-agent forecasting, where a normalizing flow formulation is expected to be particularly useful for capturing the complex, high-dimensional distributions involved. 
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/OOlxsoRPyFL/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/OOlxsoRPyFL/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..06f798e2923017bcae9510d12743d11d1fa09651 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/OOlxsoRPyFL/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,291 @@ +# Deep Signature Statistics for Likelihood-free Time-series Models + +Anonymous Authors ${}^{1}$ + +## Abstract + +Simulation-based inference (SBI) has emerged as a family of methods for performing inference on complex simulation models with intractable likelihood functions. A common bottleneck in SBI is the construction of low-dimensional summary statistics of the data. In this respect, time-series data, often being high-dimensional, multivariate, and complex in structure, present a particular challenge. To address this we introduce deep signature statistics, a principled and automated method for combining summary statistic selection for time-series data with neural SBI methods. Our approach leverages deep signature transforms, trained concurrently with a neural density estimator, to produce informative statistics for multivariate sequential data that encode important geometric properties of the underlying path. We obtain competitive results across benchmark models. + +## 1. Introduction + +In recent decades, scientific modelers have increasingly adopted simulation-based models: computer programs describing stochastic generative processes. Such models are widely employed in a variety of disciplines, e.g. economics (Baptista et al., 2016) and ecology (Wood, 2010). 
Their popularity lies in the greater flexibility afforded to the modeler over conventional equation-based modeling, enabling a higher degree of fidelity to the true data-generating process.

A drawback of this greater flexibility is that the likelihood functions of simulation models are typically intractable, being defined only implicitly (Diggle & Gratton, 1984). Consequently, traditional frequentist and Bayesian approaches relying on likelihood evaluations are infeasible. This limitation has motivated a plethora of methods for performing likelihood-free or simulation-based inference (SBI) (for a recent overview, see Cranmer et al., 2020). Early examples of such methods include approximate Bayesian computation (ABC) (Pritchard et al., 1999; Beaumont et al., 2002) and synthetic likelihood (Wood, 2010), while more recent approaches exploit the flexibility of modern machine learning techniques, e.g. (sequential) neural likelihood estimation (Papamakarios et al., 2019; Lueckmann et al., 2019) and (sequential) neural ratio estimation (Hermans et al., 2020).

In traditional approaches, the selection of an appropriate, low-dimensional set of summary statistics is key to the quality of inference. A popular approach in ABC is to select a large set of candidate statistics from which lower-dimensional summaries are obtained through 'best subset selection', 'projection', or 'regularization' (Blum et al., 2013). A major disadvantage of these approaches is that they require the user to know in advance a powerful set of summary statistics, a process that can be time-consuming and arbitrary, and that often requires domain knowledge and experimentation.

Some more recent approaches, such as sequential posterior and ratio estimation, are able to automate the learning of summary statistics by leveraging the expressiveness of neural networks. 
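The classical ABC pipeline referred to above can be made concrete with a rejection sampler. A minimal sketch using a hypothetical one-parameter Gaussian simulator and the sample mean as the hand-chosen summary statistic (exactly the manual selection step that deep signature statistics aims to automate):

```python
import random

def simulator(theta, n, rng):
    # Implicit model: we can sample from it but never evaluate a likelihood.
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def summary(xs):
    # Hand-picked, low-dimensional summary statistic of the data.
    return sum(xs) / len(xs)

def abc_rejection(x_obs, n_draws=5000, eps=0.1, seed=0):
    # Draw theta from the prior, simulate, and keep draws whose
    # summary lands within eps of the observed summary.
    rng = random.Random(seed)
    s_obs = summary(x_obs)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)              # prior draw
        s_sim = summary(simulator(theta, len(x_obs), rng))
        if abs(s_sim - s_obs) < eps:                # rejection step
            accepted.append(theta)
    return accepted

rng = random.Random(1)
x_obs = simulator(1.5, 50, rng)     # "observed" data, true theta = 1.5
posterior = abc_rejection(x_obs)
post_mean = sum(posterior) / len(posterior)
```

The quality of the resulting approximate posterior hinges entirely on how informative `summary` is, which is the motivation for learning the statistics instead.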
Yet, it is not guaranteed that the summary statistics implicitly generated through the use of such neural networks provide representations of sufficient quality for posterior inference. Fully connected networks and deep multilayer perceptrons (see e.g. Wong et al., 2018) still lack the inductive biases to easily extract meaningful representations from time-series, which poses the question of how automated techniques can fill the knowledge gap left by domain expertise. One such possibility is partially exchangeable networks (PENs) (Wiqvist et al., 2019), which propose, in the context of ABC, a neural network architecture that exploits certain model symmetries.

Here, we argue for the use of the so-called "signature method" (Morrill et al., 2020; Bonnier et al., 2019) for extracting features from multimodal, multivariate sequential data. The central object of study, the path signature, is, in a sense, a canonical feature transformation, in that the signature of a path-valued random variable captures all possible nonlinearities (Arribas, 2018). Applications of the signature method have produced promising results in a number of tasks, including character recognition (Xie et al., 2018), gesture recognition (Li et al., 2017), and early identification of Alzheimer's from clinical data (Moore et al., 2019).

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

We term our method deep signature statistics (DSS). The main idea is to embed a deep signature model (Bonnier et al., 2019), in which the signature appears as a pooling operation in a neural network, into an existing neural method for posterior density estimation. Doing so combines the useful inductive biases provided by the signature transform with the power of neural networks to efficiently learn summary statistics and posterior estimates concurrently. 
Our results suggest that DSS offers a robust, automatic, and theoretically principled pipeline for posterior inference which performs competitively across benchmark models.

## 2. Background: Path signatures

The signature of a path $X = \left( {{X}^{1},{X}^{2},\ldots ,{X}^{d}}\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{d}$ is an infinite collection of statistics that characterizes the path up to a negligible equivalence class (Lyons, 2014):

$$
\operatorname{Sig}\left( X\right) = \left( 1,\; S{\left( X\right) }_{0, T}^{1},\; S{\left( X\right) }_{0, T}^{2},\;\ldots ,\; S{\left( X\right) }_{0, T}^{d},\; S{\left( X\right) }_{0, T}^{1,1},\; S{\left( X\right) }_{0, T}^{1,2},\;\ldots \right),
$$

consisting of the $k$-fold iterated integrals of $X$ with multi-index ${i}_{1},\ldots ,{i}_{k}$ defined as

$$
S{\left( X\right) }_{0, T}^{{i}_{1},\ldots ,{i}_{k}} = \mathop{\int }\limits_{{0 \leq {t}_{1} < \cdots < {t}_{k} \leq T}}\mathrm{d}{X}_{{t}_{1}}^{{i}_{1}}\cdots \mathrm{d}{X}_{{t}_{k}}^{{i}_{k}}. \tag{1}
$$

We provide a geometric interpretation of the depth-1 terms, $S{\left( X\right) }_{0, T}^{i}$, and depth-2 terms, $S{\left( X\right) }_{0, T}^{i, j}$, of the signature in Figure 1. When the underlying path $X$ is of bounded variation, the integral (1) can be understood as the Riemann-Stieltjes integral with respect to $X$. In particular, for differentiable paths, this reduces to the more familiar Riemann integral by substituting $\mathrm{d}{X}_{t} = \frac{\mathrm{d}{X}_{t}}{\mathrm{d}t}\mathrm{d}t$.

$\operatorname{Sig}\left( X\right)$ can be understood as the equivalent of statistical moments for path-valued random variables, the terms of which constitute a set of "canonical features that can be intuitively described as an ordered version of sample moments" (Kiraly & Oberhauser, 2019).
It is standard to truncate the infinitely long signature to depth $N \in \mathbb{N}$, retaining all terms in the signature with index sets $\left\{ {{i}_{1},{i}_{2},\ldots ,{i}_{k}}\right\}$ for $k \leq N$. We denote this by ${\operatorname{Sig}}_{N}\left( X\right)$. We further denote the set of all streams on a set $V$ by

$$
\mathcal{S}\left( V\right) = \left\{ {\mathbf{x} = \left( {{x}_{1},\ldots ,{x}_{n}}\right) : {x}_{i} \in V, n \in \mathbb{N}}\right\} .
$$

Here, to obtain a signature from a stream of data $\mathbf{x} \in \mathcal{S}\left( {\mathbb{R}}^{d}\right)$, the data points ${x}_{i}$ are first interpolated into a path before the integrals of Equation 1 are computed. For example, one may interpolate via a continuous function $f = \left( {{f}_{1},\ldots ,{f}_{d}}\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{d}$ with $f\left( {\frac{i - 1}{n - 1} \cdot T}\right) = {x}_{i}$,

![01963e35-e859-7aa8-8884-f035104b5bdf_1_896_210_701_458_0.jpg](images/01963e35-e859-7aa8-8884-f035104b5bdf_1_896_210_701_458_0.jpg)

Figure 1. A geometric interpretation of signature terms. Red circles indicate (possibly irregular) observations, and the black curve illustrates the underlying continuous path. Depth-1 terms are the increments $\Delta {X}^{1}$ and $\Delta {X}^{2}$, while the depth-2 terms ${S}^{\left( 1,2\right) }$ and ${S}^{\left( 2,1\right) }$ correspond to the green and blue areas, respectively.

leaving us with the signature terms

$$
S{\left( X\right) }_{0, T}^{{i}_{1},\ldots ,{i}_{k}} = \mathop{\int }\limits_{{0 \leq {t}_{1} < \cdots < {t}_{k} \leq T}}\mathop{\prod }\limits_{{j = 1}}^{k}\frac{\mathrm{d}{f}_{{i}_{j}}}{\mathrm{d}t}\left( {t}_{j}\right) \mathrm{d}{t}_{1}\cdots \mathrm{d}{t}_{k}.
$$

## 3. Method: Deep Signature Statistics

A deep signature transform (Bonnier et al., 2019), shown in Figure 2, entails repeated application of blocks of three key elements: an augmentation of the stream with a stream-preserving feature map ${\Phi }^{\varphi }$ with learnable parameters $\varphi$; a lift operation $\ell$, which transforms the augmented stream into a stream of streams; and the depth-$N$ signature transform applied to each substream, giving a stream of signatures. Blocks are composable by design, and after as many blocks as desired have been stacked, the output is obtained by passing the output of the final block through an optional additional neural network. The ultimate effect is to capture higher-order signature information using fewer terms (Chevyrev & Oberhauser, 2016; Kiraly & Oberhauser, 2019). We provide further details on deep signature transforms in Appendix A.

Combining path signatures, with their strong mathematical basis, with the expressivity of neural networks has been seen to produce competitive results in a number of learning tasks (Morrill et al., 2020). In this way, the use of a deep signature transform may yield approximately sufficient statistics, despite truncation. This makes it an ideal candidate for use in likelihood-free inference settings as a means for generating data-dependent summary statistics for both univariate and multivariate data of any length. We term deep signature transforms used in this way deep signature statistics (DSS).

![01963e35-e859-7aa8-8884-f035104b5bdf_2_296_185_1155_266_0.jpg](images/01963e35-e859-7aa8-8884-f035104b5bdf_2_296_185_1155_266_0.jpg)

Figure 2. Deep signature transform with parameters ${\varphi }_{1},\ldots ,{\varphi }_{k + 1}$.

We train DSS in tandem with a neural density estimator, using the same loss function.
The SBI algorithm then targets the posterior $p\left( {\mathbf{\theta } \mid {\sigma }^{\varphi }\left( \mathbf{x}\right) }\right)$ with a conditional density estimator ${q}_{\phi }\left( {\mathbf{\theta } \mid {\sigma }^{\varphi }\left( \mathbf{x}\right) }\right)$, or a classifier $f\left( {\mathbf{\theta },{\sigma }^{\varphi }\left( \mathbf{x}\right) }\right)$, while the extended parameter set $\left( {\phi ,\varphi }\right)$ is learned jointly using the loss function of the posterior density estimator. Throughout, we make use of neural ratio estimation (NRE) (Hermans et al., 2020) or its sequential version, denoted SNRE, using a ResNet classifier with batch size 50 and learning rate 0.005.

## 4. Experiments

### 4.1. Evaluation metrics

To assess the quality of the estimated posteriors, we compute the sliced Wasserstein distance (SWD) (Peyre & Cuturi, 2019) between samples from approximate ground truth posteriors and samples from the estimated posterior densities. In all cases, SWDs were computed using the Python Optimal Transport package (Flamary & Courty, 2017) and 1000 samples from the posterior density estimated in each training round. To train the ratio estimator, we generate 1000 training examples during each round for 20 rounds.

### 4.2. Ornstein-Uhlenbeck process

The Ornstein-Uhlenbeck (OU) process (Uhlenbeck & Ornstein, 1930) is a prototypical Gauss-Markov stochastic differential equation (SDE) model. We discretize the SDE such that the data $\mathbf{x} = \left( {{x}_{0},{x}_{1},\ldots ,{x}_{T}}\right), {x}_{i} \in \mathbb{R}$ are generated according to

$$
{x}_{i} = {\theta }_{1}\exp \left( {\theta }_{2}\right) {\Delta t} + \left( {1 - {\theta }_{1}{\Delta t}}\right) {x}_{i - 1} + \frac{{\epsilon }_{i}}{2},
$$

with ${x}_{0} = {10}$, ${\Delta t} = {0.2}$, and ${\epsilon }_{i} \sim \mathcal{N}\left( {0,{\Delta t}}\right)$. The parameters $\mathbf{\theta } = \left( {{\theta }_{1},{\theta }_{2}}\right)$ are to be inferred.
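This discretized update is straightforward to simulate directly. The following NumPy sketch is a minimal illustration (function and argument names are our own choosing, and we read $\mathcal{N}\left( {0,{\Delta t}}\right)$ as a normal with variance $\Delta t$):

```python
import numpy as np

def simulate_ou(theta1, theta2, T=100, x0=10.0, dt=0.2, rng=None):
    """Simulate the discretized OU process
    x_i = theta1 * exp(theta2) * dt + (1 - theta1 * dt) * x_{i-1} + eps_i / 2,
    with eps_i ~ N(0, dt) (variance dt, so standard deviation sqrt(dt))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(T + 1)
    x[0] = x0
    for i in range(1, T + 1):
        eps = rng.normal(0.0, np.sqrt(dt))
        x[i] = theta1 * np.exp(theta2) * dt + (1 - theta1 * dt) * x[i - 1] + eps / 2
    return x

# One trajectory at the true parameter values (0.5, 1).
x = simulate_ou(0.5, 1.0, rng=np.random.default_rng(0))
```

Repeated calls with parameters drawn from the prior provide the $(\mathbf{\theta }, \mathbf{x})$ pairs used to train the ratio estimator.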
We set uniform priors ${\theta }_{1} \sim \mathcal{U}\left( {0,1}\right)$ and ${\theta }_{2} \sim \mathcal{U}\left( {-2,2}\right)$, and generate a ground truth observation ${\mathbf{x}}_{o} \sim p\left( {\mathbf{x} \mid {\mathbf{\theta }}^{ * }}\right)$ at true parameter values ${\mathbf{\theta }}^{ * } = \left( {{\theta }_{1}^{ * },{\theta }_{2}^{ * }}\right) = \left( {{0.5},1}\right)$.

We plot the marginal posteriors obtained for the OU process using Metropolis-Hastings and DSS in Figure 3. We see from this that DSS + sequential neural ratio estimation (SNRE) is able to accurately recover the posterior density for this model. A more quantitative evaluation of the quality of the estimated posteriors is shown in Figure 4, in which we compare the SWD between samples from the approximate ground truth posterior and estimated posteriors using each summary method at each training round. The hand-crafted summaries we used were the mean, standard deviation, and autocorrelations at lags 1 and 2 of the observed time series, giving four hand-crafted summary statistics.

![01963e35-e859-7aa8-8884-f035104b5bdf_2_899_543_695_236_0.jpg](images/01963e35-e859-7aa8-8884-f035104b5bdf_2_899_543_695_236_0.jpg)

Figure 3. (Ornstein-Uhlenbeck) Example of the marginal posteriors obtained from DSS after round 10 (orange), and the approximate ground truth marginals from Metropolis-Hastings (blue).

Of the learned summaries, DSS tends to perform as well as or better than the recurrent neural network (RNN) in 11 training rounds, while it outperforms PEN in almost all rounds. We also see for this simulator that the hand-crafted summary statistics outperform all learned summaries at almost every training round.
This demonstrates the importance of well-chosen summary statistics and inductive biases, and the non-trivial nature of learning appropriate summary statistics for time series data: even with state-of-the-art neural network models such as RNN and PEN for summarizing time series data, it is difficult to meet, let alone surpass, the performance of sensible hand-crafted summaries.

### 4.3. Ricker model

The Ricker model (Ricker, 1954) is a simple ecological model of population dynamics with an intractable likelihood function. A population size ${N}_{t}$ evolves as

$$
\log {N}_{t + 1} = \log r + \log {N}_{t} - {N}_{t} + {e}_{t},
$$

where $r$ is a parameter determining the growth rate of the population and ${e}_{t} \sim \mathcal{N}\left( {0,\sigma }\right)$. The model assumes that the observations $\mathbf{x} = \left( {{x}_{0},{x}_{1},\ldots ,{x}_{T}}\right), {x}_{i} \in \mathbb{R}$ are measurements of the population size, which in turn are Poisson random deviates ${x}_{t} \sim \operatorname{Po}\left( {\phi {N}_{t}}\right)$, for scale parameter $\phi$. We consider the task of estimating the posterior density $p\left( {\mathbf{\theta } \mid \mathbf{x}}\right)$ for $\mathbf{\theta } = \left( {\log r,\phi ,\sigma }\right)$ given an observation ${\mathbf{x}}_{o} \sim p\left( {\mathbf{x} \mid {\mathbf{\theta }}^{ * }}\right)$, where ${\mathbf{\theta }}^{ * } = \left( {4,{10},{0.3}}\right)$ is the true parameter set. We assume uniform priors for each parameter, with $\log r \sim \mathcal{U}\left( {3,8}\right)$, $\phi \sim \mathcal{U}\left( {0,{20}}\right)$, and $\sigma \sim \mathcal{U}\left( {0,{0.6}}\right)$.

![01963e35-e859-7aa8-8884-f035104b5bdf_3_157_193_683_530_0.jpg](images/01963e35-e859-7aa8-8884-f035104b5bdf_3_157_193_683_530_0.jpg)

Figure 4. (Ornstein-Uhlenbeck) The sliced Wasserstein distances between approximate ground truth and estimated posterior densities for each summary method at each training round.
Crosses and shaded regions indicate mean and standard error over 20 seeds.

In Figure 5, we plot samples from the approximate ground truth posterior $p\left( {\mathbf{\theta } \mid {\mathbf{x}}_{o}}\right)$, obtained using particle MCMC (Andrieu et al., 2010) following the guidelines of Schmon et al. (2020), alongside the posteriors obtained using DSS for the Ricker model. From this, we see that DSS + SNRE has been reasonably successful in recovering the approximate ground truth density for $\sigma$, while it has accurately recovered the location and shape of the densities for $\log r$ and $\phi$.

In Figure 6, we show the SWD between the samples from the approximate ground truth posterior and estimated posteriors for each summary statistic method. The hand-crafted summary statistics used in this instance are those proposed in Wood (2010), and consist of: the autocovariances to lag 5; the mean; the number of zeros in the sequence; the coefficients of the regression ${x}_{t + 1}^{0.3} = {\beta }_{1}{x}_{t}^{0.3} + {\beta }_{2}{x}_{t}^{0.6} + {\epsilon }_{t}$ for error term ${\epsilon }_{t}$; and the coefficients of the cubic regression of the ordered differences ${x}_{t} - {x}_{t - 1}$ on their observed values.

From Figure 6, we see that DSS matches or exceeds PEN's (resp. RNN's) performance in 16 (resp. 18) out of 20 rounds. In particular, DSS appears to achieve greater asymptotic accuracy of the recovered posteriors at a high number of training examples. This example also highlights the possibility that learned summary statistics can outperform expert hand-crafted summaries, in particular when model complexity does not allow for straightforward selection.

![01963e35-e859-7aa8-8884-f035104b5bdf_3_900_210_694_236_0.jpg](images/01963e35-e859-7aa8-8884-f035104b5bdf_3_900_210_694_236_0.jpg)

Figure 5.
(Ricker model) An example of the marginal posteriors obtained from DSS after 10 rounds of 1000 simulations (orange), and the approximate ground truth marginals from particle MCMC (blue).

![01963e35-e859-7aa8-8884-f035104b5bdf_3_912_615_671_524_0.jpg](images/01963e35-e859-7aa8-8884-f035104b5bdf_3_912_615_671_524_0.jpg)

Figure 6. (Ricker model) The sliced Wasserstein distances between the approximate ground truth and estimated posterior densities for each summary statistic method at each training round. Crosses and shaded regions indicate mean and standard error over 20 different seeds.

## 5. Discussion

In this paper, we address the problem of learning summary statistics for implicit time-series models with intractable likelihood functions. We propose the use of path signatures as a means for automatically generating approximately sufficient statistics for general multivariate time-series data. We demonstrate how the truncated signature can be combined with neural networks via deep signature transforms to generate informative summaries, and observe competitive performance in comparisons against existing state-of-the-art methods. Our method is general and not specific to the models we consider. In particular, the size of the learned statistic can be made model-dependent, although in our experiments the summary statistics learned with DSS are always of size 3. Furthermore, while we use the signature truncated to degree 3 in each neural-lift-signature block, this can be replaced with the log-signature truncated to a higher degree if more signature terms are required, while keeping the dimensionality low.

## References

Andrieu, C., Doucet, A., and Holenstein, R. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269-342, 2010.

Arribas, I. P. Derivatives pricing using signature payoffs. 2018.

Baptista, R., Farmer, J. D., Hinterschweiger, M., Low, K., Tang, D., and Uluc, A.
Macroprudential policy in an agent-based model of the UK housing market. 2016.

Beaumont, M. A., Zhang, W., and Balding, D. J. Approximate Bayesian computation in population genetics. Genetics, 162(4):2025-2035, 2002.

Blum, M. G., Nunes, M. A., Prangle, D., Sisson, S. A., et al. A comparative review of dimension reduction methods in approximate Bayesian computation. Statistical Science, 28(2):189-208, 2013.

Bonnier, P., Kidger, P., Arribas, I. P., Salvi, C., and Lyons, T. Deep signature transforms. In Advances in Neural Information Processing Systems (NeurIPS), 2019.

Chevyrev, I. and Oberhauser, H. Signature moments to characterize laws of stochastic processes. 2016.

Cranmer, K., Brehmer, J., and Louppe, G. The frontier of simulation-based inference. Proceedings of the National Academy of Sciences, 117(48):30055-30062, 2020. doi: 10.1073/pnas.1912789117.

Diggle, P. J. and Gratton, R. J. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society: Series B (Methodological), 46(2):193-212, 1984.

Flamary, R. and Courty, N. POT: Python Optimal Transport library, 2017. URL https://pythonot.github.io/.

Hermans, J., Begy, V., and Louppe, G. Likelihood-free MCMC with amortized approximate ratio estimators. In International Conference on Machine Learning, pp. 4239-4248. PMLR, 2020.

Kiraly, F. J. and Oberhauser, H. Kernels for sequentially ordered data. Journal of Machine Learning Research, 20(31):1-45, 2019. URL http://jmlr.org/papers/v20/16-314.html.

Li, C., Zhang, X., and Jin, L. LPSNet: A novel log path signature feature based hand gesture recognition framework. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 631-639, 2017. doi: 10.1109/ICCVW.2017.80.
Lueckmann, J.-M., Bassetto, G., Karaletsos, T., and Macke, J. H. Likelihood-free inference with emulator networks. In Symposium on Advances in Approximate Bayesian Inference, pp. 32-53. PMLR, 2019.

Lyons, T. Rough paths, signatures and the modelling of functions on streams. 2014. URL http://arxiv.org/abs/1405.4537.

Moore, P. J., Lyons, T. J., and Gallacher, J. Using path signatures to predict a diagnosis of Alzheimer's disease. PLoS ONE, 14(9):1-16, 2019. doi: 10.1371/journal.pone.0222212.

Morrill, J., Fermanian, A., Kidger, P., and Lyons, T. A generalised signature method for time series. 2020. URL http://arxiv.org/abs/2006.00873.

Pacchiardi, L. and Dutta, R. Score matched conditional exponential families for likelihood-free inference, 2020.

Papamakarios, G., Sterratt, D., and Murray, I. Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 837-848. PMLR, 2019.

Peyre, G. and Cuturi, M. Computational optimal transport. Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019.

Pritchard, J. K., Seielstad, M. T., Perez-Lezaun, A., and Feldman, M. W. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Molecular Biology and Evolution, 16(12):1791-1798, 1999.

Reizenstein, J. and Graham, B. Algorithm 1004: The iisignature library: Efficient calculation of iterated-integral signatures and log signatures. ACM Transactions on Mathematical Software (TOMS), 2020.

Ricker, W. E. Stock and recruitment. Journal of the Fisheries Board of Canada, 11(5):559-623, 1954.

Schmon, S. M., Deligiannidis, G., Doucet, A., and Pitt, M. K. Large-sample asymptotics of the pseudo-marginal method. Biometrika, 108(1):37-51, 2020. doi: 10.1093/biomet/asaa044.
Tejero-Cantero, A., Boelts, J., Deistler, M., Lueckmann, J.-M., Durkan, C., Gonçalves, P. J., Greenberg, D. S., and Macke, J. H. sbi: A toolkit for simulation-based inference. Journal of Open Source Software, 5(52):2505, 2020. doi: 10.21105/joss.02505.

Uhlenbeck, G. and Ornstein, L. On the theory of the Brownian motion. Physical Review, 36(5):823-841, 1930.

Wiqvist, S., Mattei, P.-A., Picchini, U., and Frellsen, J. Partially exchangeable networks and architectures for learning summary statistics in approximate Bayesian computation. In International Conference on Machine Learning, pp. 6798-6807. PMLR, 2019.

Wong, W., Jiang, B., Wu, T.-y., and Zheng, C. Learning summary statistic for approximate Bayesian computation via deep neural network. Statistica Sinica, 2018. doi: 10.5705/ss.202015.0340.

Wood, S. N. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310):1102-1104, 2010. doi: 10.1038/nature09319.

Xie, Z., Sun, Z., Jin, L., Ni, H., and Lyons, T. Learning spatial-semantic context with fully convolutional recurrent network for online handwritten Chinese text recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(8):1903-1917, 2018. doi: 10.1109/TPAMI.2017.2732978.

## Supplementary Material

## A. Deep Signature Transforms

We now describe the components of deep signature models in detail.
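As a concrete reference point for the components below, recall that the depth-2 truncated signature of the piecewise-linear path through a stream can be computed incrementally via Chen's identity: a single linear segment with increment $d$ has level-1 term $d$ and level-2 term $d \otimes d / 2$, and segment signatures combine multiplicatively. The following NumPy sketch is a minimal illustration of this computation (our experiments use the iisignature library instead):

```python
import numpy as np

def sig_depth2(stream):
    """Depth-2 signature of the piecewise-linear path through `stream`
    (shape (n, d)). Returns (S1, S2) with S1[i] = S(X)^i, S2[i, j] = S(X)^{i,j}."""
    stream = np.asarray(stream, dtype=float)
    d = stream.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for a, b in zip(stream[:-1], stream[1:]):
        inc = b - a  # increment of the linear segment
        # Chen's identity, truncated at level 2:
        S2 += np.outer(S1, inc) + np.outer(inc, inc) / 2
        S1 += inc
    return S1, S2

# Axis-aligned path: right one unit, then up one unit.
S1, S2 = sig_depth2([[0, 0], [1, 0], [1, 1]])
```

For this path, $S^{1} = S^{2} = 1$ recovers the total increments, while $S^{(1,2)} = 1$ and $S^{(2,1)} = 0$, matching the areas depicted in Figure 1; the shuffle identity $S^{i,j} + S^{j,i} = S^{i} S^{j}$ holds by construction.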
## STREAM-PRESERVING FEATURE MAP

The learnable, stream-preserving neural network ${\Phi }^{\varphi } : {\mathbb{R}}^{d \times m} \rightarrow {\mathbb{R}}^{e}$ for some $m \in \mathbb{N}$ operates on the original stream $\mathbf{x}$ as

$$
\Phi \left( \mathbf{x}\right) = \left( {{\Phi }_{1},\ldots ,{\Phi }_{n - m + 1}}\right),
$$

where ${\Phi }_{k} = {\Phi }^{\varphi }\left( {{x}_{k},\ldots ,{x}_{k + m};{\Phi }_{k - 1}}\right)$ and ${\Phi }_{0} = 0$. This general structure can take the form of a one-dimensional convolutional layer, a feedforward network, or a recurrent network.

## LIFT OPERATION

The learnable feature map obtained from the stream-preserving neural network augments the existing stream with additional channels. Its operation is described as "stream-preserving" since it does not destroy the stream-like nature of the data. The signature transform, on the other hand, operates on streams to produce an infinite set of features with no inherent stream-like properties. Directly applying the signature transform thus precludes applying it again.

In general, however, we may wish to apply the signature transform repeatedly. This motivates the inclusion of a lift operation between the learnable, stream-preserving network and the signature transform. A lift operation $\ell : \mathcal{S}\left( {\mathbb{R}}^{d}\right) \rightarrow \mathcal{S}\left( {\mathcal{S}\left( {\mathbb{R}}^{e}\right) }\right)$ for some $e \in \mathbb{N}$ maps a stream into the space of streams of streams.
Applying the signature transform element-wise to the lifted stream therefore yields a stream of signatures,

$$
{\operatorname{Sig}}_{N}\left( {\ell \left( \mathbf{x}\right) }\right) \mathrel{\text{:=}} \left( {{\operatorname{Sig}}_{N}\left( {{\ell }_{1}\left( \mathbf{x}\right) }\right) ,\ldots ,{\operatorname{Sig}}_{N}\left( {{\ell }_{v}\left( \mathbf{x}\right) }\right) }\right) \in \mathcal{S}\left( {\mathbb{R}}^{\left( {{e}^{N + 1} - 1}\right) /\left( {e - 1}\right) }\right),
$$

which is amenable to further signature-based analysis (because the output is a stream). Examples of a lift operation include expanding windows $\ell \left( \mathbf{x}\right) = \left( {{\mathbf{x}}_{2},{\mathbf{x}}_{3},\ldots ,{\mathbf{x}}_{n}}\right)$ where ${\mathbf{x}}_{i} = \left( {{x}_{1},\ldots ,{x}_{i}}\right)$, or sliding windows with window length $p$, in which case ${\mathbf{x}}_{i} = \left( {{x}_{i},\ldots ,{x}_{i + p}}\right)$.

## NEURAL-LIFT-SIGNATURE BLOCK

A stream-preserving neural network can be combined with a lift-signature operation to create a neural-lift-signature block

$$
{B}_{N}^{\varphi }\left( \mathbf{x}\right) = \left( {{\operatorname{Sig}}_{N} \circ \ell \circ {\Phi }^{\varphi }}\right) \left( \mathbf{x}\right) .
$$

This composite operation may or may not be stream-preserving. In particular, a neural-lift-signature block is not stream-preserving if we take $\ell \left( \mathbf{x}\right) \mathrel{\text{:=}} \mathbf{x}$ for that block.

## DEEP SIGNATURE TRANSFORMS

Let $\mathcal{X}$ be some set and ${f}^{\varphi } : \mathcal{S}\left( {\mathbb{R}}^{c}\right) \rightarrow \mathcal{X}$ be a neural network with trainable parameters $\varphi$.
A deep signature transform ${\sigma }^{\varphi }\left( \mathbf{x}\right)$, illustrated in Figure 2, is a mapping from $\mathcal{S}\left( {\mathbb{R}}^{d}\right)$ to $\mathcal{X}$ defined as any sequence of $k$ neural-lift-signature blocks followed by an optional final neural network ${f}^{{\varphi }_{k + 1}}$, i.e.

$$
{\sigma }^{\varphi }\left( \mathbf{x}\right) = \left( {{f}^{{\varphi }_{k + 1}} \circ {B}_{{N}_{k}}^{{\varphi }_{k}} \circ \cdots \circ {B}_{{N}_{2}}^{{\varphi }_{2}} \circ {B}_{{N}_{1}}^{{\varphi }_{1}}}\right) \left( \mathbf{x}\right) \tag{2}
$$

where $\varphi = \left( {{\varphi }_{1},\ldots ,{\varphi }_{k + 1}}\right)$. Note that the lift operation can be different in each of the $k$ neural-lift-signature blocks ${B}_{{N}_{i}}^{{\varphi }_{i}}$.

## B. Implementation details

### B.1. Software

For evaluating signatures and deep signature transforms, we used iisignature (Reizenstein & Graham, 2020) and https://github.com/patrick-kidger/Deep-Signature-Transforms (Bonnier et al., 2019). SBI algorithms were implemented using sbi (Tejero-Cantero et al., 2020). The Python implementation of PEN is found at https://github.com/LoryPack/SM-ExpFam-LFI (Pacchiardi & Dutta, 2020).

### B.2. Neural network specifications

#### B.2.1. DEEP SIGNATURE STATISTICS

The deep signature model we use involves three neural-lift-signature blocks followed by a final recurrent network. The neural component of the first block consists of a feedforward network with kernel size 3 and 2 hidden layers of size 16, swept across the input stream. The output size of this network is 3, so that the initial layer augments the input stream with an additional 3 channels. The neural components of the remaining two blocks are recurrent networks with 2 hidden layers of size 16. For each block, we use expanding windows with initial size 2 that grow by 1 time step in each iteration, followed by the signature transform truncated at degree 3.
For all simulators, we apply basepoint and time augmentations to the input stream before passing it through the deep signature model, and take an output of size 3. This yields a model with 9,735 trainable parameters.

#### B.2.2. PARTIALLY EXCHANGEABLE NETWORKS

Let $\mathbf{x} = \left( {{x}_{1},\ldots ,{x}_{n}}\right), {x}_{i} \in \mathcal{X}$ be sequential data generated by a stochastic process of Markov order $r$, and $A$ be a metric space. Partially exchangeable network models $F : {\mathcal{X}}^{n} \rightarrow A$ consist of two networks $\phi : {\mathcal{X}}^{r + 1} \rightarrow \mathbb{R}$ and $\rho : {\mathcal{X}}^{r} \times \mathbb{R} \rightarrow A$ combined as

$$
F\left( \mathbf{x}\right) = \rho \left( {{x}_{1 : r},\mathop{\sum }\limits_{{i = 1}}^{{n - r}}\phi \left( {x}_{i : \left( {i + r}\right) }\right) }\right) .
$$

For our experiments, we follow Wiqvist et al. (2019) and take the $\phi$ network to be a fully connected network with three layers of sizes 11, 100, and 50 and output size 10, and the $\rho$ network to be a fully connected network with four layers of sizes $(10 + r)$, 50, 50, and 20. ReLU activations were used for all hidden layers. For PEN1, this yields a model with 10,093 trainable parameters.

#### B.2.3. RECURRENT NEURAL NETWORK

The recurrent network model consists of two recurrent neural networks. The first network has layers of size 64, 64, and 32, with an output of size 6, while the second network has layers of size 32, 32, and 32 with output size 7. Windows of size 4 were swept across the input for both networks, with strides of 4 and 2 in the first and second, respectively. Altogether, this yields a model with 10,157 trainable parameters.
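To make these specifications concrete, the following simplified NumPy sketch mimics the data flow of a single neural-lift-signature block as described in Appendix A: a fixed random linear map stands in for the learned stream-preserving network $\Phi^{\varphi}$, the lift uses expanding windows of initial size 2, and each window is summarized by its flattened depth-2 signature (computed via Chen's identity). This illustrates the structure only; the trained model uses learned networks and degree-3 signatures:

```python
import numpy as np

def sig_depth2_flat(stream):
    """Flattened depth-2 signature (levels 1 and 2) of the piecewise-linear
    path through `stream` of shape (n, d), via Chen's identity."""
    d = stream.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for a, b in zip(stream[:-1], stream[1:]):
        inc = b - a
        S2 += np.outer(S1, inc) + np.outer(inc, inc) / 2
        S1 += inc
    return np.concatenate([S1, S2.ravel()])

def neural_lift_signature_block(x, W):
    """One block: augment the stream with extra channels (x @ W stands in
    for the learned network), lift via expanding windows of initial size 2,
    and apply the depth-2 signature to each window."""
    aug = np.concatenate([x, x @ W], axis=1)             # (n, d + extra)
    windows = [aug[:i] for i in range(2, len(aug) + 1)]  # expanding lift
    return np.stack([sig_depth2_flat(w) for w in windows])

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))   # a toy 2-channel stream of length 20
W = rng.normal(size=(2, 3))    # stand-in for the learned augmentation
out = neural_lift_signature_block(x, W)  # a stream of signatures
```

Because the output is again a stream (here of length 19, with $5 + 5^2 = 30$ channels), further blocks can be stacked on top of it, exactly as in Equation 2.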
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/OOlxsoRPyFL/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/OOlxsoRPyFL/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..fb9ef114cc5bba22f6a102d407159b8357e21efa --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/OOlxsoRPyFL/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,131 @@ +§ DEEP SIGNATURE STATISTICS FOR LIKELIHOOD-FREE TIME-SERIES MODELS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Simulation-based inference (SBI) has emerged as a family of methods for performing inference on complex simulation models with intractable likelihood functions. A common bottleneck in SBI is the construction of low-dimensional summary statistics of the data. In this respect, time-series data, often being high-dimensional, multivariate, and complex in structure, present a particular challenge. To address this we introduce deep signature statistics, a principled and automated method for combining summary statistic selection for time-series data with neural SBI methods. Our approach leverages deep signature transforms, trained concurrently with a neural density estimator, to produce informative statistics for multivariate sequential data that encode important geometric properties of the underlying path. We obtain competitive results across benchmark models. + +§ 1. INTRODUCTION + +In recent decades, scientific modelers have increasingly adopted simulation-based models: computer programs describing stochastic generative processes. Such models are widely employed in a variety of disciplines, e.g. economics (Baptista et al., 2016) and ecology (Wood, 2010). 
Their popularity lies in the greater flexibility afforded to the modeler over conventional equation-based modeling, enabling a higher degree of fidelity to the true data-generating process. + +A drawback of this greater flexibility is that the likelihood functions of simulation models are typically intractable, being defined only implicitly (Diggle & Gratton, 1984). Consequently, traditional frequentist and Bayesian approaches relying on likelihood evaluations are infeasible. This limitation has motivated a plethora of methods for performing likelihood-free or simulation-based inference (SBI) (for a recent overview, see Cranmer et al., 2020). Early examples of such methods include approximate Bayesian computation (ABC) (Pritchard et al., 1999; Beaumont et al., 2002) and synthetic likelihood (Wood, 2010), while more recent approaches exploit the flexibility of modern machine learning techniques, e.g. (sequential) neural likelihood estimation (Papamakarios et al., 2019; Lueckmann et al., 2019) and (sequential) neural ratio estimation (Hermans et al., 2020). + +In traditional approaches, the selection of an appropriate, low-dimensional set of summary statistics is key to the quality of inference. A popular approach in $\mathrm{{ABC}}$ is to select a large set of candidate statistics from which lower dimensional summaries are obtained through 'best subset selection', 'projection' or 'regularization' (Blum et al., 2013). A major disadvantage of these approaches is that they require the user to know in advance a powerful set of summary statistics, which can be time-consuming and arbitrary and often requires domain knowledge and experimentation. + +Some more recent approaches, such as sequential posterior and ratio estimation, are able to automate the learning of summary statistics by leveraging the expressiveness of neural networks. 
Yet, it is not guaranteed that the summary statistics implicitly generated through the use of such neural networks provide representations of sufficient quality for posterior inference. Fully connected networks and deep multilayer perceptrons (see e.g. Wong et al., 2018) still lack the inductive biases to easily extract meaningful representations from time-series, which poses the question of how automated techniques can fill the knowledge gap of domain expertise. One such possibility is partially exchangeable networks (PENs) (Wiqvist et al., 2019), in which a neural network architecture that exploits certain model symmetries is proposed in the context of ABC. + +Here, we argue for the use of the so-called "signature method" (Morrill et al., 2020; Bonnier et al., 2019) for extracting features from multimodal, multivariate sequential data. The central object of study-the path signature-is, in a sense, a canonical feature transformation in that the signature of path-valued random variable captures all possible nonlinearities (Arribas, 2018). Applications of the signature method have produced promising results in a number of tasks, including character recognition (Xie et al., 2018), gesture recognition (Li et al., 2017), and early identification of Alzheimer's from clinical data (Moore et al., 2019). + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +We term our method deep signature statistics (DSS). The main idea is to embed a deep signature model (Bonnier et al., 2019)—in which the signature appears as a pooling operation in a neural network-into an existing neural method for posterior density estimation. Doing so combines the useful inductive biases provided by the signature transform with the power of neural networks to efficiently learn summary statistics and posterior estimates concurrently. 
Our results suggest that DSS offers a robust, automatic, and theoretically principled pipeline for posterior inference which performs competitively across benchmark models. + +## 2. Background: Path Signatures + +The signature of a path $X = \left( {{X}^{1},{X}^{2},\ldots ,{X}^{d}}\right) : \left\lbrack {0,T}\right\rbrack \rightarrow {\mathbb{R}}^{d}$ is an infinite collection of statistics that characterizes the path up to a negligible equivalence class (Lyons, 2014). It is defined as + +$$ +\operatorname{Sig}\left( X\right) = \left( {1, S{\left( X\right) }_{0,T}^{1}, S{\left( X\right) }_{0,T}^{2},\ldots, S{\left( X\right) }_{0,T}^{d}, S{\left( X\right) }_{0,T}^{1,1}, S{\left( X\right) }_{0,T}^{1,2},\ldots }\right) , +$$ + +consisting of the $k$ -fold iterated integrals of $X$ , which for a multi-index ${i}_{1},\ldots ,{i}_{k}$ are defined as + +$$ +S{\left( X\right) }_{0,T}^{{i}_{1},\ldots ,{i}_{k}} = \mathop{\int }\limits_{{0 \leq {t}_{1} < \cdots < {t}_{k} \leq T}}\mathrm{\;d}{X}_{{t}_{1}}^{{i}_{1}}\ldots \mathrm{d}{X}_{{t}_{k}}^{{i}_{k}}. \tag{1} +$$ + +We provide a geometric interpretation of the depth-1 terms, $S{\left( X\right) }_{0,T}^{i}$ , and depth-2 terms, $S{\left( X\right) }_{0,T}^{i,j}$ , of the signature in Figure 1. When the underlying path $X$ is of bounded variation, the integral (1) can be understood as the Riemann-Stieltjes integral with respect to $X$ . In particular, for differentiable paths, this reduces to the more familiar Riemann integral via the substitution $\mathrm{d}{X}_{t} = \frac{\mathrm{d}{X}_{t}}{\mathrm{\;d}t}\mathrm{\;d}t$ . + +$\operatorname{Sig}\left( X\right)$ can be understood as the equivalent of statistical moments for path-valued random variables, the terms of which constitute a set of "canonical features that can be intuitively described as an ordered version of sample moments" (Kiraly & Oberhauser, 2019).
It is standard to truncate the infinitely long signature to depth $N \in \mathbb{N}$ , which consists of all terms in the signature that have index sets $\left\{ {{i}_{1},{i}_{2},\ldots ,{i}_{k}}\right\}$ with $k \leq N$ . We denote this by ${\operatorname{Sig}}_{N}\left( X\right)$ . We further denote the set of all streams on a set $V$ by + +$$ +\mathcal{S}\left( V\right) = \left\{ {\mathbf{x} = \left( {{x}_{1},\ldots ,{x}_{n}}\right) : {x}_{i} \in V, n \in \mathbb{N}}\right\} . +$$ + +Here, to obtain a signature from a stream of data $\mathbf{x} \in \mathcal{S}\left( {\mathbb{R}}^{d}\right)$ , the data points ${x}_{i}$ are first interpolated into a path before the integrals of Equation 1 are computed. For example, one may interpolate via a continuous function $f = \left( {{f}_{1},\ldots ,{f}_{d}}\right) : \left\lbrack {0,T}\right\rbrack \rightarrow {\mathbb{R}}^{d}$ with $f\left( {\frac{i - 1}{n - 1} \cdot T}\right) = {x}_{i}$ , + +Figure 1. A geometric interpretation of signature terms. Red circles indicate (possibly irregular) observations, and the black curve illustrates the underlying continuous path. Depth-1 terms are the increments $\Delta {X}^{1}$ and $\Delta {X}^{2}$ , while the depth-2 terms ${S}^{\left( 1,2\right) }$ and ${S}^{\left( 2,1\right) }$ correspond to the green and blue areas, respectively. + +leaving us with the signature terms + +$$ +S{\left( X\right) }_{0,T}^{{i}_{1},\ldots ,{i}_{k}} = \mathop{\int }\limits_{{0 \leq {t}_{1} < \cdots < {t}_{k} \leq T}}\mathop{\prod }\limits_{{j = 1}}^{k}\frac{\mathrm{d}{f}_{{i}_{j}}}{\mathrm{\;d}t}\left( {t}_{j}\right) \mathrm{d}{t}_{1}\cdots \mathrm{d}{t}_{k} . +$$ + +## 3.
Method: Deep Signature Statistics + +A deep signature transform (Bonnier et al., 2019), shown in Figure 2, entails the repeated application of blocks of three key elements: an augmentation of the stream with a stream-preserving feature map ${\Phi }^{\varphi }$ with learnable parameters $\varphi$ ; a lift operation $\ell$ , which transforms the augmented stream into a stream of streams; and the depth-$N$ signature transform applied to each substream, giving a stream of signatures. Blocks are composable by design, and after as many blocks as desired have been stacked, the output is obtained by passing the output of the final block through an optional additional neural network. The ultimate effect is to capture higher-order signature information using fewer terms (Chevyrev & Oberhauser, 2016; Kiraly & Oberhauser, 2019). We provide further details on deep signature transforms in Appendix A. + +Combining path signatures, with their strong mathematical basis, with the expressivity of neural networks has been seen to produce competitive results in a number of learning tasks (Morrill et al., 2020). In this way, the use of a deep signature transform may yield approximately sufficient statistics, despite truncation. This makes it an ideal candidate for use in likelihood-free inference settings as a means for generating data-dependent summary statistics for both univariate and multivariate data of any length. We term deep signature transforms used in this way deep signature statistics (DSS). + +Figure 2. Deep signature transform with parameters ${\varphi }_{1},\ldots ,{\varphi }_{k + 1}$ . + +We train DSS in tandem with a neural density estimator, using the same loss function.
The SBI algorithm then targets the posterior $p\left( {\mathbf{\theta } \mid {\sigma }^{\varphi }\left( \mathbf{x}\right) }\right)$ with a conditional density estimator ${q}_{\phi }\left( {\mathbf{\theta } \mid {\sigma }^{\varphi }\left( \mathbf{x}\right) }\right)$ , or a classifier $f\left( {\mathbf{\theta },{\sigma }^{\varphi }\left( \mathbf{x}\right) }\right)$ , while the extended parameter set $\left( {\phi ,\varphi }\right)$ is learned jointly using the loss function of the posterior density estimator. Throughout, we make use of neural ratio estimation (NRE) (Hermans et al., 2020) or its sequential version, written SNRE, using a ResNet classifier with batch size 50 and learning rate 0.005. + +## 4. Experiments + +### 4.1. Evaluation metrics + +To assess the quality of the estimated posteriors, we compute the sliced Wasserstein distance (SWD) (Peyré & Cuturi, 2019) between samples from approximate ground truth posteriors and samples from the estimated posterior densities. In all cases, SWDs were computed using the Python Optimal Transport package (Flamary & Courty, 2017) and 1000 posterior samples from the posterior density estimated in each training round. To train the ratio estimator, we generate 1000 training examples during each round for 20 rounds. + +### 4.2. Ornstein-Uhlenbeck process + +The Ornstein-Uhlenbeck (OU) process (Uhlenbeck & Ornstein, 1930) is a prototypical Gauss-Markov stochastic differential equation (SDE) model. We discretize the SDE such that the data $\mathbf{x} = \left( {{x}_{0},{x}_{1},\ldots ,{x}_{T}}\right) ,{x}_{i} \in \mathbb{R}$ is generated according to + +$$ +{x}_{i} = {\theta }_{1}\exp \left( {\theta }_{2}\right) {\Delta t} + \left( {1 - {\theta }_{1}{\Delta t}}\right) {x}_{i - 1} + \frac{{\epsilon }_{i}}{2}, +$$ + +with ${x}_{0} = {10},{\Delta t} = {0.2}$ , and ${\epsilon }_{i} \sim \mathcal{N}\left( {0,{\Delta t}}\right)$ . The parameters $\mathbf{\theta } = \left( {{\theta }_{1},{\theta }_{2}}\right)$ are to be inferred.
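As a reference for how training data can be generated under this discretization, here is a minimal simulator sketch (our own illustration, not the authors' code; the function name and `rng` argument are assumptions). Note that ${\epsilon }_{i} \sim \mathcal{N}\left( {0,{\Delta t}}\right)$ specifies the variance, so the standard deviation is $\sqrt{\Delta t}$:

```python
import math
import random

def simulate_ou(theta1, theta2, n_steps, dt=0.2, x0=10.0, rng=None):
    """Simulate the discretized OU process
    x_i = theta1*exp(theta2)*dt + (1 - theta1*dt)*x_{i-1} + eps_i/2,
    with eps_i ~ N(0, dt), i.e. standard deviation sqrt(dt)."""
    rng = rng or random.Random(0)
    xs = [x0]
    for _ in range(n_steps):
        eps = rng.gauss(0.0, math.sqrt(dt))
        xs.append(theta1 * math.exp(theta2) * dt
                  + (1.0 - theta1 * dt) * xs[-1]
                  + eps / 2.0)
    return xs
```

With the noise switched off, the recursion contracts towards the fixed point $\exp(\theta_2)$, which provides a quick sanity check on an implementation.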
We set uniform priors ${\theta }_{1} \sim \mathcal{U}\left( {0,1}\right)$ and ${\theta }_{2} \sim \mathcal{U}\left( {-2,2}\right)$ , and generate a ground truth observation ${\mathbf{x}}_{o} \sim p\left( {\mathbf{x} \mid {\mathbf{\theta }}^{ * }}\right)$ at true parameter values ${\mathbf{\theta }}^{ * } = \left( {{\theta }_{1}^{ * },{\theta }_{2}^{ * }}\right) = \left( {{0.5},1}\right)$ . + +We plot the marginal posteriors obtained for the OU process using Metropolis-Hastings and DSS in Figure 3. We see from this that DSS + sequential neural ratio estimation (SNRE) is able to accurately recover the posterior density for this model. A more quantitative evaluation of the quality of the estimated posteriors is shown in Figure 4, in which we compare the SWD between samples from the approximate ground truth posterior and estimated posteriors using each summary method at each training round. The hand-crafted summaries we used were the mean, standard deviation, and autocorrelations at lags 1 and 2 of the observed time series, giving four hand-crafted summary statistics. + +Figure 3. (Ornstein-Uhlenbeck) Example of the marginal posteriors obtained from DSS after round 10 (orange), and the approximate ground truth marginals from Metropolis-Hastings (blue). + +Of the learned summaries, DSS tends to perform as well as or better than the recurrent neural network (RNN) in 11 of the 20 training rounds, while it outperforms PEN in almost all rounds. We also see for this simulator that the hand-crafted summary statistics outperform all learned summaries at almost every training round. This demonstrates the importance of well-chosen summary statistics and inductive biases, and the non-trivial nature of learning appropriate summary statistics for time-series data: even with state-of-the-art neural network models such as RNNs and PENs for summarizing time-series data, it is difficult to meet, let alone surpass, the performance of sensible hand-crafted summaries. + +### 4.3.
Ricker model + +The Ricker model (Ricker, 1954) is a simple ecological model of population dynamics with an intractable likelihood function. A population size ${N}_{t}$ evolves as + +$$ +\log {N}_{t + 1} = \log r + \log {N}_{t} - {N}_{t} + {e}_{t}, +$$ + +where $r$ is a parameter determining the growth rate of the population and ${e}_{t} \sim \mathcal{N}\left( {0,\sigma }\right)$ . The model assumes that the observations $\mathbf{x} = \left( {{x}_{0},{x}_{1},\ldots ,{x}_{T}}\right) ,{x}_{i} \in \mathbb{R}$ are measurements of the population size, which in turn are Poisson random deviates ${x}_{t} \sim \operatorname{Po}\left( {\phi {N}_{t}}\right)$ , for a scale parameter $\phi$ . We consider the task of estimating the posterior density $p\left( {\mathbf{\theta } \mid \mathbf{x}}\right)$ for $\mathbf{\theta } = \left( {\log r,\phi ,\sigma }\right)$ given an observation ${\mathbf{x}}_{o} \sim p\left( {\mathbf{x} \mid {\mathbf{\theta }}^{ * }}\right)$ , where ${\mathbf{\theta }}^{ * } = \left( {4,{10},{0.3}}\right)$ is the true parameter set. We assume uniform priors for each parameter, with $\log r \sim \mathcal{U}\left( {3,8}\right) ,\phi \sim \mathcal{U}\left( {0,{20}}\right)$ , and $\sigma \sim \mathcal{U}\left( {0,{0.6}}\right)$ . + +Figure 4. (Ornstein-Uhlenbeck) The sliced Wasserstein distances between approximate ground truth and estimated posterior densities for each summary method at each training round. Crosses and shaded regions indicate mean and standard error over 20 seeds. + +In Figure 5, we plot samples from the approximate ground truth posterior $p\left( {\mathbf{\theta } \mid {\mathbf{x}}_{o}}\right)$ , obtained using particle MCMC (Andrieu et al., 2010) following the guidelines of Schmon et al. (2020), and the posteriors obtained using DSS for the Ricker model.
From this, we see that DSS + SNRE has been reasonably successful in recovering the approximate ground truth density for $\sigma$ , while it has accurately recovered the location and shape of the densities for $\log r$ and $\phi$ . + +In Figure 6, we show the SWD between the samples from the approximate ground truth posterior and estimated posteriors for each summary statistic method. The hand-crafted summary statistics used in this instance are those proposed in Wood (2010), and consist of: the autocovariances to lag 5; the mean; the number of zeros in the sequence; the coefficients of the regression ${x}_{t + 1}^{0.3} = {\beta }_{1}{x}_{t}^{0.3} + {\beta }_{2}{x}_{t}^{0.6} + {\epsilon }_{t}$ for an error term ${\epsilon }_{t}$ ; and the coefficients of the cubic regression of the ordered differences ${x}_{t} - {x}_{t - 1}$ on their observed values. + +From Figure 6, we see that DSS matches or exceeds PEN's (resp. RNN's) performance in 16 (resp. 18) out of 20 rounds. In particular, DSS appears to achieve greater asymptotic accuracy of the recovered posteriors at a high number of training examples. This example also highlights the possibility that learned summary statistics can outperform expert hand-crafted summaries, in particular when model complexity does not allow for straightforward selection. + +Figure 5. (Ricker model) An example of the marginal posteriors obtained from DSS after 10 rounds of 1000 training examples (orange) and the approximate ground truth marginals from particle MCMC (blue). + +Figure 6. (Ricker model) The sliced Wasserstein distances between the true and estimated posterior densities for each summary statistic method at each training round. Crosses and shaded regions indicate mean and standard error over 20 different seeds. + +## 5. Discussion + +In this paper, we address the problem of learning summary statistics for implicit time-series models with intractable likelihood functions.
We propose the use of path signatures as a means for automatically generating approximately sufficient statistics for general multivariate time-series data. We demonstrate how the truncated signature can be combined with neural networks via deep signature transforms to generate informative summaries, and observe competitive performance in comparisons against existing state-of-the-art methods. Our method is general and not specific to the models we consider. In particular, the size of the learned statistic can be taken to be model-dependent, although in our experiments the summary statistics learned with DSS are always of size 3. Furthermore, while we use the signature truncated to degree 3 in each neural-lift-signature block, this can be replaced with the log-signature truncated to a higher degree if more signature terms are required, while keeping the dimensionality low. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/QFbpSF6DBB/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/QFbpSF6DBB/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..b883420bb6dc5986b6317623941808608abd6223 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/QFbpSF6DBB/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,469 @@ +# Representation Learning in Continuous-Time Score-Based Generative Models + +Anonymous Authors ${}^{1}$ + +## Abstract + +Score-based methods represented as stochastic differential equations on a continuous time domain have recently proven successful as a non-adversarial generative model. Training such models relies on denoising score matching, which can be seen as multi-scale denoising autoencoders. Here, we augment the denoising score-matching framework to enable representation learning without any supervised signal.
GANs and VAEs learn representations by directly transforming latent codes to data samples. In contrast, score-based representation learning relies on a new formulation of the denoising score-matching objective and thus encodes information needed for denoising. We show how this difference allows for manual control of the level of detail encoded in the representation. + +## 1. Score-based generative modeling + +Score-based methods have recently proven successful for generating images (Song & Ermon, 2020; Song et al., 2020), graphs (Niu et al., 2020), shapes (Cai et al., 2020), and audio (Chen et al., 2020b; Kong et al., 2021). Two promising approaches apply step-wise perturbations to samples of the data distribution until the perturbed distribution matches a known prior (Song & Ermon, 2019; Ho et al., 2020). A model is trained to estimate the reverse process, which transforms samples of the prior to samples of the data distribution. These diffusion models have been further refined (Nichol & Dhariwal, 2021; Jolicoeur-Martineau et al., 2020; Luhman & Luhman, 2021) and have even achieved better image sample quality than GANs (Dhariwal & Nichol, 2021). Further, Song et al. showed that these frameworks are discrete versions of continuous-time perturbations by stochastic differential equations and proposed a score-based generative modeling framework in continuous time. + +![01963e47-f00a-74ff-af56-8cda257bd281_0_906_504_673_443_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_0_906_504_673_443_0.jpg) + +Figure 1. Conditional score matching with a parametrized latent code is representation learning. Denoising score matching estimates the score at each $\widetilde{x}$ ; we add a latent representation $z$ of the clean data $x$ as additional input to the score estimator.
Learning desirable representations has been an inseparable component of generative models such as GANs and VAEs (Radford et al., 2016; Chen et al., 2016; Higgins et al., 2017; Burgess et al., 2018; van den Oord et al., 2017; Donahue & Simonyan, 2019; Chen et al., 2020a). Considering score-based methods as promising and theoretically grounded generative models, here we propose a method to augment their underlying SDE for learning a latent data-generating code. The key idea of our approach is illustrated in Figure 1. We begin by briefly revisiting the foundations of score-based generative diffusion models in section 1.1. In section 2 we present our method and follow up with experimental results in section 3. + +### 1.1. Forward and reverse diffusion process + +The forward diffusion process of the data is modeled as a Stochastic Differential Equation (SDE) on a continuous time domain $t \in \left\lbrack {0, T}\right\rbrack$ . Let ${x}_{0} \in {\mathbb{R}}^{d}$ denote a sample of the data distribution ${x}_{0} \sim {p}_{0}$ , where $d$ is the data dimension. The trajectory ${\left( {x}_{t}\right) }_{t \in \left\lbrack {0, T}\right\rbrack }$ of data samples is a function of time determined by the stochastic diffusion process. The SDE is chosen such that the distribution ${p}_{0T}\left( {{x}_{T} \mid {x}_{0}}\right)$ for any sample ${x}_{0} \sim {p}_{0}$ can be approximated by a known prior distribution. Notice that the subscript ${0T}$ of ${p}_{0T}$ refers to the conditional distribution of the diffused data at time $T$ given the data at time 0.
For simplicity we limit the remainder of this paper to the so-called Variance Exploding SDE (Song et al., 2021), which is the special case of the general forward SDE $\mathrm{d}x = f\left( {x, t}\right) \mathrm{d}t + g\left( t\right) \mathrm{d}\mathrm{w}$ with + +$$ +f\left( {x, t}\right) \mathrel{\text{:=}} 0,\; g\left( t\right) \mathrel{\text{:=}} \sqrt{\frac{\mathrm{d}\left\lbrack {{\sigma }^{2}\left( t\right) }\right\rbrack }{\mathrm{d}t}}, \tag{1} +$$ + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +where $\mathrm{w}$ is the standard Wiener process. The perturbation kernel of this diffusion process has the closed-form solution ${p}_{0t}\left( {{x}_{t} \mid {x}_{0}}\right) = \mathcal{N}\left( {{x}_{t};{x}_{0},\left\lbrack {{\sigma }^{2}\left( t\right) - {\sigma }^{2}\left( 0\right) }\right\rbrack I}\right)$ . It was shown by Anderson that the reverse diffusion process is the solution to the following SDE: + +$$ +\mathrm{d}x = \left\lbrack {f\left( {x, t}\right) - {g}^{2}\left( t\right) {\nabla }_{x}\log {p}_{t}\left( x\right) }\right\rbrack \mathrm{d}t + g\left( t\right) \mathrm{d}\overline{\mathrm{w}}, \tag{2} +$$ + +where $\overline{\mathrm{w}}$ is the standard Wiener process under reverse-time flow. Thus, given the score function ${\nabla }_{x}\log {p}_{t}\left( x\right)$ for all $t \in \left\lbrack {0, T}\right\rbrack$ , we can generate samples from the data distribution ${p}_{0}\left( x\right)$ . + +### 1.2. Denoising score matching objective + +In order to learn the score function, one would like to minimize the distance between the model and the true score function. This method is called Explicit Score Matching (Vincent, 2011) and has the following objective function: + +$$ +{J}_{t}^{ESM}\left( \theta \right) = {\mathbf{E}}_{{x}_{t}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t}, t\right) - {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack .
\tag{3} +$$ + +Since the ground-truth score function ${\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right)$ is generally not known, one can apply denoising score matching (Vincent, 2011), which is defined as follows: + +$$ +{J}_{t}\left( \theta \right) = {\mathbf{E}}_{{x}_{0}}\left\{ {{\mathbf{E}}_{{x}_{t} \mid {x}_{0}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t}, t\right) - {\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack }\right\} . \tag{4} +$$ + +The issue of a single noise scale motivated Song & Ermon to extend the objective to a sum of denoising score matching terms over multiple noise scales. They further augment the objective with a positive weighting function $\lambda \left( \sigma \right) > 0$ to empirically balance the loss magnitudes across all noise levels. For the continuous time domain, Song et al. uniformly sample $t \in \left\lbrack {0, T}\right\rbrack$ and use a time-dependent positive weighting function $\lambda \left( t\right)$ , leading to the following objective: + +$$ +J\left( \theta \right) = {\mathbf{E}}_{t}\left\lbrack {\lambda \left( t\right) {J}_{t}\left( \theta \right) }\right\rbrack . \tag{5} +$$ + +We now show that this objective cannot be made arbitrarily small. It is known that (4) is equal to explicit score matching up to a constant which is independent of $\theta$ (Vincent, 2011). Thus, the objective is minimized when the model equals the ground-truth score function, ${s}_{\theta }\left( {{x}_{t}, t}\right) = {\nabla }_{x}\log {p}_{t}\left( x\right)$ , and the additional constant is equal to the loss when this equality holds. This leads to the following new formulation of the denoising score matching objective: + +$$ +{J}_{t}\left( \theta \right) = {\mathbf{E}}_{{x}_{0}}\left\{ {{\mathbf{E}}_{{x}_{t} \mid {x}_{0}}\left\lbrack {{\begin{Vmatrix}{\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) - {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) \end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t}, t\right) - {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) \end{Vmatrix}}_{2}^{2}}\right\rbrack }\right\} . \tag{6} +$$ + +This observation has not been emphasized previously, probably because it has no direct effect on the learning of the score function. However, the additional constant has major implications for finding other hyperparameters. Examples of such hyperparameters are the weighting function $\lambda \left( t\right)$ and the choice of the forward SDE. While these hyperparameters could be optimized in explicit score matching using gradient-based learning, this ability is severely limited by the additional non-vanishing constant in (6). In particular, optimizing such hyperparameters based on the denoising score matching objective leads to solutions that do not necessarily minimize the distance from the model to the ground-truth score function. Instead, they are heavily biased towards solutions with a smaller value of the additional constant. For example, trying to minimize the worst-case $\lambda$ -divergence as defined in (Durkan & Song, 2021) with an adversarially trained $\lambda$ is not directly possible, since $\lambda$ will focus on regions where the constant is high and mostly ignore the model fit to the ground-truth score. + +The non-vanishing constant in the denoising score matching objective, which presents a burden in multiple ways, such as hyperparameter search and model evaluation, nevertheless also provides an opportunity for latent representation learning, which will be described in the following sections. + +## 2. Representation learning through score matching + +### 2.1.
Conditional score matching + +Class-conditional generation can be achieved in this framework by training an additional time-dependent classifier ${p}_{t}\left( {y \mid {x}_{t}}\right)$ (Song et al., 2021). In particular, the conditional score for a fixed $y$ can be expressed as the sum of the unconditional score and the score of the classifier, that is, + +$$ +{\nabla }_{{x}_{t}}\log {p}_{t}\left( {{x}_{t} \mid y}\right) = {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) + {\nabla }_{{x}_{t}}\log {p}_{t}\left( {y \mid {x}_{t}}\right) . +$$ + +We propose conditional score matching as an alternative way to allow for controllable generation. Given supervised labels $y\left( x\right)$ , the new training objective for each time $t$ becomes + +$$ +{J}_{t}\left( \theta \right) = {\mathbf{E}}_{{x}_{0}}\left\{ {{\mathbf{E}}_{{x}_{t} \mid {x}_{0}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t}, t, y\left( {x}_{0}\right) \right) - {\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack }\right\} . \tag{7} +$$ + +The conditional objective is minimized if and only if the model equals the conditional score function ${\nabla }_{{x}_{t}}\log {p}_{t}\left( {{x}_{t} \mid y\left( {x}_{0}\right) = \widehat{y}}\right)$ for all labels $\widehat{y}$ . Note that conditional score matching is performed directly during training and does not require training an additional classifier over the whole time domain. + +### 2.2. Learning the latent representation + +Since supervised data is limited and rarely available, we propose to learn the labeling function $y\left( {x}_{0}\right)$ at the same time as optimizing the conditional score matching objective (7).
In particular, we represent the labeling function as a trainable encoder ${E}_{\phi } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{{d}_{z}}$ , where ${E}_{\phi }\left( {x}_{0}\right)$ maps the data sample ${x}_{0}$ to its corresponding code in the ${d}_{z}$ -dimensional latent space. The code is then used as additional input to the model. Formally, the proposed learning objective for latent representation learning is the following: + +$$ +J\left( {\theta ,\phi }\right) = {\mathbf{E}}_{t,{x}_{0},{x}_{t}}\left\lbrack {\lambda \left( t\right) {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t}, t,{E}_{\phi }\left( {x}_{0}\right) \right) - {\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) \end{Vmatrix}}_{2}^{2}}\right\rbrack . \tag{8} +$$ + +Intuitively, ${E}_{\phi }\left( {x}_{0}\right)$ selects the vector field used to denoise ${x}_{0}$ starting from ${x}_{t}$ . We show in the following that (8) is a valid representation learning objective. The score of the perturbation kernel ${\nabla }_{{x}_{t}}\log {p}_{0t}\left( {{x}_{t} \mid {x}_{0}}\right)$ is a function of only $t,{x}_{t}$ and ${x}_{0}$ . Thus the objective can be reduced to zero if all information about ${x}_{0}$ is contained in the latent representation ${E}_{\phi }\left( {x}_{0}\right)$ . When ${E}_{\phi }\left( {x}_{0}\right)$ has no mutual information with ${x}_{0}$ , the objective can only be reduced up to the constant in (6). Hence, our proposed formulation takes advantage of the non-zero lower bound of (6), which can only vanish when the data information is distilled into a code provided as input to the model. + +### 2.3. Controlling the representation + +In contrast to other methods used for unsupervised representation learning (Radford et al., 2016; Chen et al., 2016; Higgins et al., 2017), the proposed objective benefits from the continuous nature of the SDE.
The encoder is trained to represent the information needed to denoise ${x}_{0}$ at different noise levels $\sigma \left( t\right)$ . We hypothesize that by adjusting the weighting function $\lambda \left( t\right)$ , we can manually control the granularity of the features encoded in the representation. For high noise levels, the mutual information between ${x}_{t}$ and ${x}_{0}$ is insignificant; thus denoising requires all information about ${x}_{0}$ to be contained in the code. In contrast, for small values of $t$ , ${x}_{t}$ still contains the coarse-grained features of ${x}_{0}$ , and denoising can be performed even when the representation encodes only fine-grained features. We provide empirical evidence to support this hypothesis in Section 3. + +## 3. Experimental results + +For all experiments, we use the same function $\sigma \left( t\right) , t \in \left\lbrack {0,1}\right\rbrack$ as in (Song et al., 2021), which is $\sigma \left( t\right) = {\sigma }_{\min }{\left( \frac{{\sigma }_{\max }}{{\sigma }_{\min }}\right) }^{t}$ , where ${\sigma }_{\min } = {0.01}$ and ${\sigma }_{\max } = {50}$ . Further, we use $\lambda \left( t\right) = {\sigma }^{2}\left( t\right)$ , which has been shown to yield the KL-divergence objective (Durkan & Song, 2021). For visualization purposes, we use a 2-dimensional latent space unless stated otherwise. Our goal is not to produce state-of-the-art image quality, but rather to showcase the representation learning method. Because of that, and also due to limited computational resources, we did not carry out an extensive hyperparameter sweep. Hence, the model architecture in all experiments is similar to, but significantly smaller than, the one proposed in (Song et al., 2021). Details for the architecture and hyperparameters are described in the appendix (A.1). Figure 10 in the appendix further illustrates how the representation encodes information for denoising.
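To make this setup concrete, the following sketch (our own illustration, not the authors' code; function names are assumptions) implements the geometric schedule $\sigma(t)$ with the values above and integrates the reverse-time SDE with Euler-Maruyama for a toy 1-d Gaussian data distribution, for which the score $\nabla_x \log p_t(x) = -x/(s^2 + \sigma^2(t) - \sigma^2(0))$ is available in closed form:

```python
import math
import random

SIGMA_MIN, SIGMA_MAX = 0.01, 50.0

def sigma(t):
    # geometric noise schedule sigma(t) = sigma_min * (sigma_max / sigma_min)**t
    return SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** t

def g2(t):
    # g(t)^2 = d[sigma^2(t)]/dt for the Variance Exploding SDE
    return sigma(t) ** 2 * 2.0 * math.log(SIGMA_MAX / SIGMA_MIN)

def reverse_sample(data_var=1.0, n_steps=500, rng=None):
    """Draw one sample by integrating the reverse-time SDE from t=1 to t=0,
    using the exact score of p_t = N(0, data_var + sigma(t)^2 - sigma(0)^2)."""
    rng = rng or random.Random(0)
    dt = 1.0 / n_steps
    x = rng.gauss(0.0, SIGMA_MAX)  # approximate prior sample at t = 1
    for k in range(n_steps, 0, -1):
        t = k * dt
        score = -x / (data_var + sigma(t) ** 2 - SIGMA_MIN ** 2)
        # Euler-Maruyama step of dx = -g^2(t) * score * dt + g(t) dw, reversed
        x += g2(t) * score * dt + math.sqrt(g2(t) * dt) * rng.gauss(0.0, 1.0)
    return x
```

Averaging many such samples, the empirical variance approaches `data_var` (plus the negligible $\sigma_{\min}^2$), which is a useful sanity check before replacing the analytic score with a learned, latent-conditioned $s_\theta$.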
![01963e47-f00a-74ff-af56-8cda257bd281_2_901_190_686_887_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_2_901_190_686_887_0.jpg) + +Figure 2. Samples and latent distribution of a model trained on MNIST (a-b) and the first three classes of CIFAR-10 (c-d) using L1-regularization and uniform sampling of $t$ + +### 3.1. Uniform sampling of $t$ + +We first train a model using L1-regularization on the latent code for the MNIST dataset (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky et al.). Due to computational limitations, we limit CIFAR-10 to a subset of only three classes, which we chose to be the first three classes. Figure 2 shows samples from a grid over the latent space and a point cloud visualization of the latent values $z = {E}_{\phi }\left( {x}_{0}\right)$ . For MNIST, we can see that the value of ${z}_{1}$ controls the stroke width, while ${z}_{2}$ weakly indicates the class. In contrast, the latent code of CIFAR-10 samples mostly encodes information about the class label. We can see from the samples that part of the reason might be an encoding of the background, which is highly correlated with the class labels. We conducted the same experiment with a probabilistic encoder, where the latent representation is regularized using + +![01963e47-f00a-74ff-af56-8cda257bd281_3_166_201_676_430_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_3_166_201_676_430_0.jpg) + +Figure 3. Samples and latent distribution of a model trained on MNIST using KL-divergence and uniform sampling of $\sigma$ + +the KL-divergence. The resulting representation is similar and can be seen in the appendix (Figures 5 and 6). We also trained models on all classes of CIFAR-10, though not until convergence due to computational constraints (cf. Figures 8 and 9). Early results indicate an encoding of overall image brightness.
We further want to point out that the generative process using the reverse SDE involves randomness and thus can generate different samples for a single latent representation. The diversity of samples generated for the same representation steadily decreases with the dimensionality of the latent space, which is shown empirically in Figure 11 of the appendix. + +### 3.2. Controlling the representation + +Next, we analyze the behavior of the representation when adjusting the weighting function $\lambda \left( t\right)$ , which can be done by changing the sampling distribution of $t$ . + +#### 3.2.1. HIGH NOISE LEVELS + +First, we focus the training on higher noise levels. To this end, we sample $t$ such that $\sigma \left( t\right)$ is uniformly sampled from the interval $\left\lbrack {{\sigma }_{\min },{\sigma }_{\max }}\right\rbrack = \left\lbrack {{0.01},{50}}\right\rbrack$ . Note that after learning the representation we additionally train the model with uniform sampling of $t$ and a frozen encoder to achieve good sample quality. Figure 3 shows the resulting representation for MNIST using a probabilistic encoder (cf. Figure 7 for L1 regularization results). As expected, the latent representation encodes information about classes rather than fine-grained features such as stroke width. This validates our hypothesis of section 2.3 that we can control the granularity of features encoded in the latent space. + +#### 3.2.2. TRAINING ON SINGLE TIMESCALES + +To understand the effect of training on different timescales more clearly, we limit the support of the weighting function $\lambda \left( t\right)$ to a single value of $t$ . We analyze the resulting quality of the latent representation for different values of $t$ using the silhouette score with Euclidean distance based on the dataset classes (Rousseeuw, 1987). It compares the average distance between a point and all other points in its cluster with the average distance to points in the nearest different cluster.
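The computation just described can be sketched in a few lines (our own minimal illustration; `sklearn.metrics.silhouette_score` provides a production implementation):

```python
def silhouette_score(points, labels):
    """Mean silhouette over all points; each point is a tuple of coordinates."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    scores = []
    for p, l in zip(points, labels):
        same = [q for q in clusters[l] if q is not p]
        if not same:  # convention: silhouette of a singleton cluster is 0
            scores.append(0.0)
            continue
        # a: mean distance to the other points in the same cluster
        a = sum(dist(p, q) for q in same) / len(same)
        # b: smallest mean distance to the points of any other cluster
        b = min(sum(dist(p, q) for q in qs) / len(qs)
                for k, qs in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

For two tight, well-separated clusters this returns a value close to 1, and a value below 0 when the labels are shuffled across clusters.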
Thus we measure how well the latent representation encodes classes, ignoring any other features. + +![01963e47-f00a-74ff-af56-8cda257bd281_3_902_203_684_328_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_3_902_203_684_328_0.jpg) + +Figure 4. Mean and standard deviation of silhouette scores when training a model on MNIST (left) and the first three classes of CIFAR-10 (right) using a single $t$ over three runs. + +Figure 4 shows the silhouette scores of latent codes of MNIST and CIFAR-10 samples for different values of $t$. In alignment with our hypothesis of section 2.3, training on a small $t$ and thus low noise levels leads to almost no encoded class information in the latent representation, while the opposite is the case for a range of $t$ which differs between the two datasets. The decline in encoded class information for high values of $t$ can be explained by the vanishing difference between distributions of perturbed samples when $t$ gets large. This shows that the distinction among the code classes represented by the silhouette score is controlled by $\lambda \left( t\right)$. + +Overall, the difference in the latent codes for varying $\lambda \left( t\right)$ shows that we can control the granularity encoded in the representation. While this ability does not exist in previously proposed models for representation learning, it provides a significant advantage when there exists prior information about the level of detail that we intend to encode in the target representation. + +## 4. Conclusion + +We presented a new objective for representation learning based on conditional denoising score matching. In doing so, we turned the original non-vanishing objective function into one that can be reduced to zero if all information is distilled in the code. We showed that the proposed method learns interpretable features in the latent space.
In contrast to previous approaches, denoising score matching as a foundation comes with the ability to manually control the granularity of features encoded in the representation. We demonstrated that the encoder can learn to separate classes when focusing on high noise levels and encodes fine-grained features such as stroke width when mainly trained on low-level noise. + +## References + +Anderson, B. D. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. ISSN 0304-4149. doi: 10.1016/0304-4149(82)90051-5. URL https://www.sciencedirect.com/science/article/pii/0304414982900515. + +Burgess, C. P., Higgins, I., Pal, A., Matthey, L., Watters, N., Desjardins, G., and Lerchner, A. Understanding disentangling in $\beta$-VAE, 2018. + +Cai, R., Yang, G., Averbuch-Elor, H., Hao, Z., Belongie, S., Snavely, N., and Hariharan, B. Learning gradient fields for shape generation, 2020. + +Chen, M., Radford, A., Child, R., Wu, J., Jun, H., Luan, D., and Sutskever, I. Generative pretraining from pixels. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1691-1703. PMLR, 13-18 Jul 2020a. URL http://proceedings.mlr.press/v119/chen20s.html. + +Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. WaveGrad: Estimating gradients for waveform generation, 2020b. + +Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, 2016. + +Dhariwal, P. and Nichol, A. Diffusion models beat GANs on image synthesis, 2021. + +Donahue, J. and Simonyan, K. Large scale adversarial representation learning, 2019. + +Durkan, C. and Song, Y. On maximum likelihood training of score-based generative models, 2021. + +Higgins, I., Matthey, L., Pal, A., Burgess, C.
P., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. + +Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. CoRR, abs/2006.11239, 2020. URL https://arxiv.org/abs/2006.11239. + +Jolicoeur-Martineau, A., Piché-Taillefer, R., des Combes, R. T., and Mitliagkas, I. Adversarial score matching and improved sampling for image generation, 2020. + +Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. DiffWave: A versatile diffusion model for audio synthesis, 2021. + +Krizhevsky, A., Nair, V., and Hinton, G. CIFAR-10 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html. + +LeCun, Y. and Cortes, C. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/. + +Luhman, E. and Luhman, T. Knowledge distillation in iterative generative models for improved sampling speed, 2021. + +Nichol, A. and Dhariwal, P. Improved denoising diffusion probabilistic models. CoRR, abs/2102.09672, 2021. URL https://arxiv.org/abs/2102.09672. + +Niu, C., Song, Y., Song, J., Zhao, S., Grover, A., and Ermon, S. Permutation invariant graph generation via score-based generative modeling, 2020. + +Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks, 2016. + +Rousseeuw, P. J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53-65, 1987. ISSN 0377-0427. doi: 10.1016/0377-0427(87)90125-7. URL https://www.sciencedirect.com/science/article/pii/0377042787901257. + +Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models, 2020. + +Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. CoRR, abs/1907.05600, 2019. URL http://arxiv.org/abs/1907.05600. + +Song, Y. and Ermon, S.
Improved techniques for training score-based generative models. CoRR, abs/2006.09011, 2020. URL https://arxiv.org/abs/2006.09011. + +Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations, 2021. + +van den Oord, A., Vinyals, O., and Kavukcuoglu, K. Neural discrete representation learning. CoRR, abs/1711.00937, 2017. URL http://arxiv.org/abs/1711.00937. + +Vincent, P. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661-1674, 2011. doi: 10.1162/NECO_a_00142. + +## A. Appendix + +![01963e47-f00a-74ff-af56-8cda257bd281_5_264_320_1211_660_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_5_264_320_1211_660_0.jpg) + +Figure 5. Samples and latent distribution of a model trained on MNIST using KL-divergence and uniform sampling of $t$ + +### A.1. Architecture and Hyperparameters + +The model architecture we use for all experiments is based on "DDPM++ cont. (deep)" used for CIFAR-10 in (Song et al., 2021). It is composed of a downsampling and an upsampling block with residual blocks at multiple resolutions. We did not change any of the hyperparameters of the optimizer. Depending on the dataset, we adjusted the number of resolutions, the number of channels per resolution, and the number of residual blocks per resolution in order to reduce training time. + +For representation learning, we use an encoder with the same architecture as the downsampling block of the model, followed by another three dense layers mapping to a low-dimensional latent space. Another four dense layers map the latent code back to a higher-dimensional representation. It is then given as input to the model in the same way as the time embedding. That is, each channel is provided with a conditional bias determined by the representation and time embedding at multiple stages of the downsampling and upsampling block.
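The conditioning just described, a per-channel bias derived from the latent code and the time embedding, can be sketched in a few lines; the projection weights and dimensions below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_emb, channels = 2, 16, 32

# hypothetical dense projections: latent code / time embedding -> one bias per channel
w_z = rng.normal(size=(d_z, channels))
w_t = rng.normal(size=(d_emb, channels))

def condition(feature_map, z, t_emb):
    """Add a bias, shared across spatial positions, to each channel of a feature map."""
    bias = z @ w_z + t_emb @ w_t          # shape: (channels,)
    return feature_map + bias[None, None, :]  # broadcast over height and width

h = rng.normal(size=(14, 14, channels))   # a feature map inside the network
z = rng.normal(size=(d_z,))               # latent code E_phi(x0)
t_emb = rng.normal(size=(d_emb,))         # embedding of the diffusion time t
out = condition(h, z, t_emb)
print(out.shape)  # (14, 14, 32)
```

The same bias is applied at every spatial position, so the code steers each channel globally rather than pixel by pixel.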
+ +**Regularization of the latent space.** For both datasets, we use a regularization weight of ${10}^{-5}$ when applying L1-regularization, and a weight of ${10}^{-7}$ when using a probabilistic encoder regularized with KL-Divergence. + +**MNIST hyperparameters.** Due to the simplicity of MNIST, we only use two resolutions of size ${28} \times {28} \times {32}$ and ${14} \times {14} \times {64}$, respectively. The number of residual blocks at each resolution is set to two. In each experiment, the model is trained for ${80k}$ iterations. For uniform sampling of $\sigma$ we trained the models for an additional ${80k}$ iterations with a frozen encoder and uniform sampling of $t$. + +**CIFAR-10 hyperparameters.** For the silhouette score analysis, we use three resolutions of size ${32} \times {32} \times {32}$, ${16} \times {16} \times {32}$, and $8 \times 8 \times {32}$, again with only two residual blocks at each resolution. Each model is trained for ${90k}$ iterations. + +**CIFAR-10 (deep) hyperparameters.** While representation learning works for small models already, sample quality on CIFAR-10 is poor for models of the size described above. Thus for models used to generate samples, we use eight residual blocks per resolution and the following resolutions: ${32} \times {32} \times {32}$, ${16} \times {16} \times {64}$, $8 \times 8 \times {64}$, and $4 \times 4 \times {64}$. Each model is trained for ${300k}$ iterations. + +![01963e47-f00a-74ff-af56-8cda257bd281_6_259_344_1213_662_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_6_259_344_1213_662_0.jpg) + +Figure 6. Samples and latent distribution of a model trained on the first three classes of CIFAR-10 using KL-divergence and uniform sampling of $t$ + +![01963e47-f00a-74ff-af56-8cda257bd281_6_265_1325_1206_661_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_6_265_1325_1206_661_0.jpg) + +Figure 7.
Samples and latent distribution of a model trained on MNIST using L1-regularization and uniform sampling of $\sigma$ + +![01963e47-f00a-74ff-af56-8cda257bd281_7_263_350_1211_663_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_7_263_350_1211_663_0.jpg) + +Figure 8. Samples and latent distribution of a model trained on all classes of CIFAR-10 using KL-divergence and uniform sampling of $t$ + +![01963e47-f00a-74ff-af56-8cda257bd281_7_266_1314_1206_664_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_7_266_1314_1206_664_0.jpg) + +Figure 9. Samples and latent distribution of a model trained on all classes of CIFAR-10 using L1-regularization and uniform sampling of $t$ + +![01963e47-f00a-74ff-af56-8cda257bd281_8_578_823_590_594_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_8_578_823_590_594_0.jpg) + +Figure 10. Samples generated starting from ${x}_{t}$ (left column) using the score function with the latent code of another ${x}_{0}$ (top row) as input. It shows that samples are denoised correctly only when conditioning on the latent code of the corresponding original image ${x}_{0}$. + +![01963e47-f00a-74ff-af56-8cda257bd281_9_270_465_1211_1321_0.jpg](images/01963e47-f00a-74ff-af56-8cda257bd281_9_270_465_1211_1321_0.jpg) + +Figure 11. Samples generated using the same latent code for each generation, showing that the randomness of the code-conditional generation reduces in higher-dimensional latent spaces.
+ \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/QFbpSF6DBB/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/QFbpSF6DBB/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..457c6a6e038c31091c4c81f2eca1dc5c97cb14a2 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/QFbpSF6DBB/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,181 @@ +§ REPRESENTATION LEARNING IN CONTINUOUS-TIME SCORE-BASED GENERATIVE MODELS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Score-based methods represented as stochastic differential equations on a continuous time domain have recently proven successful as a non-adversarial generative model. Training such models relies on denoising score matching, which can be seen as training multi-scale denoising autoencoders. Here, we augment the denoising score-matching framework to enable representation learning without any supervised signal. GANs and VAEs learn representations by directly transforming latent codes to data samples. In contrast, score-based representation learning relies on a new formulation of the denoising score-matching objective and thus encodes information needed for denoising. We show how this difference allows for manual control of the level of detail encoded in the representation. + +§ 1. SCORE-BASED GENERATIVE MODELING + +Score-based methods have recently proven successful for generating images (Song & Ermon, 2020; Song et al., 2020), graphs (Niu et al., 2020), shapes (Cai et al., 2020), and audio (Chen et al., 2020b; Kong et al., 2021).
Two promising approaches apply step-wise perturbations to samples of the data distribution until the perturbed distribution matches a known prior (Song & Ermon, 2019; Ho et al., 2020). A model is trained to estimate the reverse process, which transforms samples of the prior to samples of the data distribution. These diffusion models have been further refined (Nichol & Dhariwal, 2021; Jolicoeur-Martineau et al., 2020; Luhman & Luhman, 2021) and have even achieved better image sample quality than GANs (Dhariwal & Nichol, 2021). Further, Song et al. showed that these frameworks are discrete versions of continuous-time perturbations by stochastic differential equations and proposed a score-based generative modeling framework in continuous time. + +Figure 1. Conditional score matching with a parametrized latent code is representation learning. Denoising score matching estimates the score at each $\widetilde{x}$; we add a latent representation $z$ of the clean data $x$ as additional input to the score estimator. + +Learning desirable representations has been an inseparable component of generative models such as GANs and VAEs (Radford et al., 2016; Chen et al., 2016; Higgins et al., 2017; Burgess et al., 2018; van den Oord et al., 2017; Donahue & Simonyan, 2019; Chen et al., 2020a). Considering score-based methods as promising and theoretically grounded generative models, here we propose a method to augment their underlying SDE for learning a latent data-generating code. The key idea of our approach is illustrated in Figure 1. We begin by briefly revisiting the foundations of score-based generative diffusion models in section 1.1. In section 2 we present our method and follow up with experimental results in section 3.
Let ${x}_{0} \in {\mathcal{R}}^{d}$ denote a sample of the data distribution ${x}_{0} \sim {p}_{0}$, where $d$ is the data dimension. The trajectory ${\left( {x}_{t}\right) }_{t \in \left\lbrack {0,T}\right\rbrack }$ of data samples is a function of time determined by the stochastic diffusion process. The SDE is chosen such that the distribution ${p}_{0T}\left( {{x}_{T} \mid {x}_{0}}\right)$ for any sample ${x}_{0} \sim {p}_{0}$ can be approximated by a known prior distribution. Notice that the subscript ${0T}$ of ${p}_{0T}$ refers to the conditional distribution of the diffused data at time $T$ given the data at time 0. For simplicity we limit the remainder of this paper to the so-called Variance Exploding SDE (Song et al., 2021), which is defined as + +$$
\mathrm{d}x = f\left( {x,t}\right) \mathrm{d}t + g\left( t\right) \mathrm{d}\mathrm{w},\;\text{ with }\;f\left( {x,t}\right) = 0\text{ and }g\left( t\right) \mathrel{\text{ := }} \sqrt{\frac{\mathrm{d}\left\lbrack {{\sigma }^{2}\left( t\right) }\right\rbrack }{\mathrm{d}t}}, \tag{1}
$$ + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +where $\mathrm{w}$ is the standard Wiener process. The perturbation kernel of this diffusion process has a closed-form solution, being ${p}_{0t}\left( {{x}_{t} \mid {x}_{0}}\right) = \mathcal{N}\left( {{x}_{t};{x}_{0},\left\lbrack {{\sigma }^{2}\left( t\right) - {\sigma }^{2}\left( 0\right) }\right\rbrack I}\right)$. Anderson (1982) showed that the reverse diffusion process is the solution to the following SDE: + +$$
\mathrm{d}x = \left\lbrack {f\left( {x,t}\right) - {g}^{2}\left( t\right) {\nabla }_{x}\log {p}_{t}\left( x\right) }\right\rbrack \mathrm{d}t + g\left( t\right) \mathrm{d}\overline{\mathrm{w}}, \tag{2}
$$ + +where $\overline{\mathrm{w}}$ is the standard Wiener process in reverse time.
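To make this concrete, here is a NumPy sketch of sampling the closed-form perturbation kernel and of a simple Euler-Maruyama integration of the reverse SDE. The geometric schedule with $\sigma_{\min}=0.01$, $\sigma_{\max}=50$ matches the paper's experiments, but the 1-d Gaussian "data" distribution and its analytic score are toy assumptions made purely for illustration:

```python
import numpy as np

SIGMA_MIN, SIGMA_MAX = 0.01, 50.0
LOG_RATIO = np.log(SIGMA_MAX / SIGMA_MIN)

def sigma(t):
    """Noise schedule sigma(t) = sigma_min * (sigma_max / sigma_min)**t."""
    return SIGMA_MIN * (SIGMA_MAX / SIGMA_MIN) ** t

def g2(t):
    """Squared diffusion coefficient g(t)^2 = d[sigma^2(t)]/dt for the VE SDE."""
    return 2.0 * LOG_RATIO * sigma(t) ** 2

def perturb(x0, t, rng):
    """Sample the kernel p_0t(x_t | x0) = N(x_t; x0, [sigma^2(t) - sigma^2(0)] I)."""
    return x0 + np.sqrt(sigma(t) ** 2 - sigma(0.0) ** 2) * rng.normal(size=x0.shape)

# Toy data p0 = N(MU, S0^2), so the marginal score is analytic:
# grad_x log p_t(x) = -(x - MU) / (S0^2 + sigma^2(t) - sigma^2(0)).
MU, S0 = 3.0, 1.0

def score(x, t):
    return -(x - MU) / (S0 ** 2 + sigma(t) ** 2 - sigma(0.0) ** 2)

def reverse_sde_sample(n, steps, rng):
    """Euler-Maruyama discretization of the reverse SDE, from t=1 down to t=0."""
    dt = 1.0 / steps
    x = rng.normal(0.0, sigma(1.0), size=n)   # prior approximation N(0, sigma_max^2 I)
    for i in range(steps, 0, -1):
        t = i * dt
        x = x + g2(t) * score(x, t) * dt + np.sqrt(g2(t) * dt) * rng.normal(size=n)
    return x

rng = np.random.default_rng(0)
xt = perturb(np.zeros(100_000), t=0.5, rng=rng)
print(xt.std())                             # roughly sigma(0.5), about 0.707
samples = reverse_sde_sample(5_000, 1_000, rng)
print(samples.mean(), samples.std())        # roughly MU = 3 and S0 = 1
```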
Thus, given the score function ${\nabla }_{x}\log {p}_{t}\left( x\right)$ for all $t \in \left\lbrack {0,T}\right\rbrack$, we can generate samples from the data distribution ${p}_{0}\left( x\right)$. + +§ 1.2. DENOISING SCORE MATCHING OBJECTIVE + +In order to learn the score function, one would like to minimize the distance between the model and the true score function. This method is called Explicit Score Matching (Vincent, 2011) and has the following objective function: + +$$
{J}_{t}^{ESM}\left( \theta \right) = {\mathbf{E}}_{{x}_{t}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t},t\right) - {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack . \tag{3}
$$ + +Since the ground-truth score function ${\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right)$ is generally not known, one can apply denoising score matching (Vincent, 2011), which is defined as follows: + +$$
{J}_{t}\left( \theta \right) = {\mathbf{E}}_{{x}_{0}}\left\{ {{\mathbf{E}}_{{x}_{t} \mid {x}_{0}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t},t\right) - {\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack }\right\} . \tag{4}
$$ + +The issue of single-scale noise motivated Song & Ermon to expand the objective to a sum of denoising score matching terms on multiple noise scales. They further augment the objective with a positive weighting function $\lambda \left( \sigma \right) > 0$ to empirically balance the loss magnitudes for all noise levels. For the continuous time domain, Song et al. uniformly sample $t \in \left\lbrack {0,T}\right\rbrack$ and use a time-dependent positive weighting function $\lambda \left( t\right)$, leading to the following objective: + +$$
J\left( \theta \right) = {\mathbf{E}}_{t}\left\lbrack {\lambda \left( t\right) {J}_{t}\left( \theta \right) }\right\rbrack .
\tag{5}
$$ + +We now show that this objective cannot be made arbitrarily small. It is known that (4) is equal to explicit score matching up to a constant which is independent of $\theta$ (Vincent, 2011). Thus, the objective is minimized when the model equals the ground-truth score function, ${s}_{\theta }\left( {{x}_{t},t}\right) = {\nabla }_{x}\log {p}_{t}\left( x\right)$, and the additional constant is equal to the loss when this equality holds. This leads to the following new formulation of the denoising score matching objective: + +$$
{J}_{t}\left( \theta \right) = {\mathbf{E}}_{{x}_{0}}\left\{ {{\mathbf{E}}_{{x}_{t} \mid {x}_{0}}\left\lbrack {{\begin{Vmatrix}{\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) - {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) \end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t},t\right) - {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) \end{Vmatrix}}_{2}^{2}}\right\rbrack }\right\} . \tag{6}
$$ + +This observation has not been emphasized previously, probably because it has no direct effect on the learning of the score function. However, the additional constant has major implications for finding other hyperparameters, such as the weighting function $\lambda \left( t\right)$ and the choice of the forward SDE. While these hyperparameters could be optimized in explicit score matching using gradient-based learning, this ability is severely limited by the additional non-vanishing constant in (6). In particular, optimizing such hyperparameters based on the denoising score matching objective leads to solutions that do not necessarily minimize the distance from the model to the ground-truth score function. Instead, they are heavily biased towards solutions with a smaller value of the additional constant.
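The non-vanishing constant can be checked numerically on a toy example where both scores are analytic (1-d Gaussian data at a single noise level; this sketch is not the paper's image-model setup). For $p_0 = \mathcal{N}(\mu, s_0^2)$ and noise level $\sigma$, the constant evaluates to $1/\sigma^2 - 1/(s_0^2 + \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
MU, S0, SIG = 3.0, 1.0, 0.5             # data N(MU, S0^2), one fixed noise level
n = 200_000
x0 = rng.normal(MU, S0, n)
xt = x0 + SIG * rng.normal(size=n)      # x_t ~ p_0t(. | x0)

kernel_score = -(xt - x0) / SIG ** 2                # grad log p_0t(x_t | x0)
marginal_score = -(xt - MU) / (S0 ** 2 + SIG ** 2)  # grad log p_t(x_t): the ESM optimum

# DSM loss (4) evaluated at the ESM optimum: it does NOT vanish ...
dsm_at_optimum = np.mean((marginal_score - kernel_score) ** 2)
# ... but matches the analytic constant 1/sigma^2 - 1/(s0^2 + sigma^2) from (6).
analytic_constant = 1.0 / SIG ** 2 - 1.0 / (S0 ** 2 + SIG ** 2)
print(dsm_at_optimum, analytic_constant)   # both about 3.2
```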
For example, trying to minimize the worst-case $\lambda$-divergence as defined in (Durkan & Song, 2021) with an adversarially trained $\lambda$ is not directly possible, since $\lambda$ will focus on regions where the constant is high and mostly ignore the model fit to the ground-truth score. + +While the non-vanishing constant in the denoising score matching objective is thus a burden in multiple ways, such as hyperparameter search and model evaluation, it also provides an opportunity for latent representation learning, which will be described in the following sections. + +§ 2. REPRESENTATION LEARNING THROUGH SCORE MATCHING + +§ 2.1. CONDITIONAL SCORE MATCHING + +Class-conditional generation can be achieved in this framework by training an additional time-dependent classifier ${p}_{t}\left( {y \mid {x}_{t}}\right)$ (Song et al., 2021). In particular, the conditional score for a fixed $y$ can be expressed as the sum of the unconditional score and the score of the classifier, that is, + +$$
{\nabla }_{{x}_{t}}\log {p}_{t}\left( {{x}_{t} \mid y}\right) = {\nabla }_{{x}_{t}}\log {p}_{t}\left( {x}_{t}\right) + {\nabla }_{{x}_{t}}\log {p}_{t}\left( {y \mid {x}_{t}}\right) .
$$ + +We propose conditional score matching as an alternative way to allow for controllable generation. Given supervised labels $y\left( x\right)$, the new training objective for each time $t$ becomes + +$$
{J}_{t}\left( \theta \right) = {\mathbf{E}}_{{x}_{0}}\left\{ {{\mathbf{E}}_{{x}_{t} \mid {x}_{0}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t},t,y\left( {x}_{0}\right) \right) - {\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack }\right\} . \tag{7}
$$ + +The conditional objective is minimized if and only if the model equals the conditional score function ${\nabla }_{{x}_{t}}\log {p}_{t}\left( {{x}_{t} \mid y\left( {x}_{0}\right) = \widehat{y}}\right)$ for all labels $\widehat{y}$.
Note that conditional score matching is performed directly during training and does not require training an additional classifier over the whole time domain. + +§ 2.2. LEARNING THE LATENT REPRESENTATION + +Since supervised data is limited and rarely available, we propose to learn the labeling function $y\left( {x}_{0}\right)$ at the same time as optimizing the conditional score matching objective (7). In particular, we represent the labeling function as a trainable encoder ${E}_{\phi } : {\mathcal{R}}^{d} \rightarrow {\mathcal{R}}^{{d}_{z}}$, where ${E}_{\phi }\left( {x}_{0}\right)$ maps the data sample ${x}_{0}$ to its corresponding code in the ${d}_{z}$-dimensional latent space. The code is then used as additional input to the model. Formally, the proposed learning objective for latent representation learning is the following: + +$$
J\left( {\theta ,\phi }\right) = {\mathbf{E}}_{t,{x}_{0},{x}_{t}}\left\lbrack {\lambda \left( t\right) {\begin{Vmatrix}{s}_{\theta }\left( {x}_{t},t,{E}_{\phi }\left( {x}_{0}\right) \right) - {\nabla }_{{x}_{t}}\log {p}_{0t}\left( {x}_{t} \mid {x}_{0}\right) \end{Vmatrix}}_{2}^{2}}\right\rbrack . \tag{8}
$$ + +Intuitively, ${E}_{\phi }\left( {x}_{0}\right)$ selects the vector field used to denoise ${x}_{0}$ starting from ${x}_{t}$. We show in the following that (8) is a valid representation learning objective. The score of the perturbation kernel ${\nabla }_{{x}_{t}}\log {p}_{0t}\left( {{x}_{t} \mid {x}_{0}}\right)$ is a function of only $t$, ${x}_{t}$, and ${x}_{0}$. Thus the objective can be reduced to zero if all information about ${x}_{0}$ is contained in the latent representation ${E}_{\phi }\left( {x}_{0}\right)$. When ${E}_{\phi }\left( {x}_{0}\right)$ has no mutual information with ${x}_{0}$, the objective can only be reduced up to the constant in (6).
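This claim admits a quick numerical check on a toy problem (1-d Gaussian data at one noise level; the "encoder" and "model" below are analytic stand-ins, not trained networks): an identity code lets the model reproduce the kernel score exactly, driving (8) to zero, while an uninformative code can at best reach the marginal score and is stuck at the constant of (6):

```python
import numpy as np

rng = np.random.default_rng(0)
MU, S0, SIG = 3.0, 1.0, 0.5
n = 200_000
x0 = rng.normal(MU, S0, n)
xt = x0 + SIG * rng.normal(size=n)

target = -(xt - x0) / SIG ** 2           # kernel score: the regression target in (8)

def model(xt, z):
    """Stand-in conditional score model: denoise towards the point implied by the code z."""
    return -(xt - z) / SIG ** 2

# identity encoder E_phi(x0) = x0: the code contains all information about x0
loss_identity = np.mean((model(xt, x0) - target) ** 2)

# uninformative code: the best achievable model is the marginal score grad log p_t
marginal_score = -(xt - MU) / (S0 ** 2 + SIG ** 2)
loss_uninformative = np.mean((marginal_score - target) ** 2)

print(loss_identity)        # 0: objective (8) vanishes when the code carries x0
print(loss_uninformative)   # > 0: reducible only up to the constant in (6)
```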
Hence, our proposed formulation takes advantage of the non-zero lower bound of (6), which can only vanish when data information is distilled in a code provided as input to the model. + +§ 2.3. CONTROLLING THE REPRESENTATION + +In contrast to other methods used for unsupervised representation learning (Radford et al., 2016; Chen et al., 2016; Higgins et al., 2017), the proposed objective here enjoys the continuous nature of the SDE. The encoder is trained to represent information needed to denoise ${x}_{0}$ for different levels of noise $\sigma \left( t\right)$. We hypothesize that by adjusting the weighting function $\lambda \left( t\right)$, we can manually control the granularity of the features encoded in the representation. For high noise levels, the mutual information of ${x}_{t}$ and ${x}_{0}$ is insignificant; thus, denoising requires all information about ${x}_{0}$ to be contained in the code. In contrast, for small values of $t$, ${x}_{t}$ still contains coarse-grained features of ${x}_{0}$, and denoising can be performed even when the representation encodes only fine-grained features. We provide empirical evidence to support this hypothesis in Section 3. + +§ 3. EXPERIMENTAL RESULTS + +For all experiments, we use the same function $\sigma \left( t\right)$, $t \in \left\lbrack {0,1}\right\rbrack$, as in (Song et al., 2021), which is $\sigma \left( t\right) = {\sigma }_{\min }{\left( \frac{{\sigma }_{\max }}{{\sigma }_{\min }}\right) }^{t}$, where ${\sigma }_{\min } = {0.01}$ and ${\sigma }_{\max } = {50}$. Further, we use $\lambda \left( t\right) = {\sigma }^{2}\left( t\right)$, which has been shown to yield the KL-Divergence objective (Durkan & Song, 2021). For visualization purposes, we use a 2-dimensional latent space if not stated otherwise. Our goal is not to produce state-of-the-art image quality, but rather to showcase the representation learning method.
Because of this, and due to limited computational resources, we did not carry out an extensive hyperparameter sweep. Hence, the model architecture in all experiments is similar to, but significantly smaller than, the one proposed in (Song et al., 2021). Details on the architecture and hyperparameters are described in the appendix (A.1). Figure 10 in the appendix further illustrates how the representation encodes information for denoising. + +Figure 2. Samples and latent distribution of a model trained on MNIST (a-b) and the first three classes of CIFAR-10 (c-d) using L1-regularization and uniform sampling of $t$ + +§ 3.1. UNIFORM SAMPLING OF $t$ + +We first train a model using L1-regularization on the latent code for the MNIST dataset (LeCun & Cortes, 2010) and CIFAR-10 (Krizhevsky et al.). Due to computational limitations, we limit CIFAR-10 to a subset of only three classes, which we randomly chose to be the first three classes. Figure 2 shows samples from a grid over the latent space and a point cloud visualization of the latent values $z = {E}_{\phi }\left( {x}_{0}\right)$. For MNIST, we can see that the value of ${z}_{1}$ controls the stroke width, while ${z}_{2}$ weakly indicates the class. In contrast, the latent code of CIFAR-10 samples mostly encodes information about the class label. We can see from the samples that part of the reason might be an encoding of the background, which is highly correlated with the class labels. We conducted the same experiment with a probabilistic encoder, where the latent representation is regularized using KL-Divergence. The resulting representation is similar and can be seen in the appendix (Figures 5 and 6). + +Figure 3. Samples and latent distribution of a model trained on MNIST using KL-divergence and uniform sampling of $\sigma$
We also trained models on all classes of CIFAR-10, though not until convergence due to computational constraints (cf. Figures 8 and 9). Early results indicate an encoding of overall image brightness. We further want to point out that the generative process using the reverse SDE involves randomness and thus can generate different samples for a single latent representation. The diversity of samples generated for the same representation steadily decreases with the dimensionality of the latent space, which is empirically shown in Figure 11 of the appendix. + +§ 3.2. CONTROLLING THE REPRESENTATION + +Next, we analyze the behavior of the representation when adjusting the weighting function $\lambda \left( t\right)$, which can be done by changing the sampling distribution of $t$. + +§ 3.2.1. HIGH NOISE LEVELS + +First, we focus the training on higher noise levels. To this end, we sample $t$ such that $\sigma \left( t\right)$ is uniformly sampled from the interval $\left\lbrack {{\sigma }_{\min },{\sigma }_{\max }}\right\rbrack = \left\lbrack {{0.01},{50}}\right\rbrack$. Note that after learning the representation we additionally train the model with uniform sampling of $t$ and a frozen encoder to achieve good sample quality. Figure 3 shows the resulting representation for MNIST using a probabilistic encoder (cf. Figure 7 for L1-regularization results). As expected, the latent representation encodes information about classes rather than fine-grained features such as stroke width. This validates our hypothesis of section 2.3 that we can control the granularity of features encoded in the latent space. + +§ 3.2.2. TRAINING ON SINGLE TIMESCALES + +To understand the effect of training on different timescales more clearly, we limit the support of the weighting function $\lambda \left( t\right)$ to a single value of $t$.
We analyze the resulting quality of the latent representation for different values of $t$ using the silhouette score with Euclidean distance based on the dataset classes (Rousseeuw, 1987). It compares the average distance between a point and all other points in its cluster with the average distance to points in the nearest different cluster. Thus we measure how well the latent representation encodes classes, ignoring any other features. + +Figure 4. Mean and standard deviation of silhouette scores when training a model on MNIST (left) and the first three classes of CIFAR-10 (right) using a single $t$ over three runs. + +Figure 4 shows the silhouette scores of latent codes of MNIST and CIFAR-10 samples for different values of $t$. In alignment with our hypothesis of section 2.3, training on a small $t$ and thus low noise levels leads to almost no encoded class information in the latent representation, while the opposite is the case for a range of $t$ which differs between the two datasets. The decline in encoded class information for high values of $t$ can be explained by the vanishing difference between distributions of perturbed samples when $t$ gets large. This shows that the distinction among the code classes represented by the silhouette score is controlled by $\lambda \left( t\right)$. + +Overall, the difference in the latent codes for varying $\lambda \left( t\right)$ shows that we can control the granularity encoded in the representation. While this ability does not exist in previously proposed models for representation learning, it provides a significant advantage when there exists prior information about the level of detail that we intend to encode in the target representation. + +§ 4. CONCLUSION + +We presented a new objective for representation learning based on conditional denoising score matching.
In doing so, we turned the original non-vanishing objective function into one that can be reduced to zero if all information is distilled in the code. We showed that the proposed method learns interpretable features in the latent space. In contrast to previous approaches, denoising score matching as a foundation comes with the ability to manually control the granularity of features encoded in the representation. We demonstrated that the encoder can learn to separate classes when focusing on high noise levels and encodes fine-grained features such as stroke width when mainly trained on low noise levels.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Sl09p1eK9D/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Sl09p1eK9D/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6a90779d027126081ec9cf752b7a2ba21c269aa6
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Sl09p1eK9D/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,652 @@

# Understanding Event-Generation Networks via Uncertainties

## Anonymous Authors ${}^{1}$

## Abstract

Generative models, and normalizing-flow based models among them, have made great progress in recent years, both in their theoretical development and in a growing number of applications. As such models find more and more use, the desire increases for predictive uncertainties that tell us when to trust the underlying model. In this extended abstract we target the application area of Large Hadron Collider (LHC) simulations and show how to extend normalizing flows with probabilistic, Bayesian neural network based transformations to model LHC events with uncertainties.

## 1.
Introduction

The role of first-principle simulations in our understanding of large data sets makes LHC physics stand out in comparison to many other areas of science. Three aspects define the application of modern big data methods in this field: (i) ATLAS and CMS deliver proper big data with excellent control over uncertainties, (ii) perturbative quantum field theory provides consistent precision predictions, and (iii) fast and reliable precision simulations generate events from first principles. The fact that experiments, field theory calculations, and simulations control their uncertainties implies that we can work with a complete uncertainty budget, including statistical, systematic, and theory uncertainties. To sustain this approach at the upcoming HL-LHC, with a data set more than 25 times the current Run 2 data set, the theory challenge is to provide faster simulations and keep full control of the uncertainties at the per-cent level and better.

In recent years it has been shown that modern machine learning, and especially generative models, can improve LHC event simulations in many ways (Butter & Plehn, 2020). See Section A in the appendix for an overview of recent work and the generative models used.

The problem with these applications is that we know little about how these generative networks work, and what the uncertainty on the generative network output is. As we will see, these two questions are closely related.

In general, we can track statistical and systematic uncertainties in neural network outputs with Bayesian neural networks (BNNs) (MacKay, 1995; Neal, 1995; Gal, 2016; Kendall & Gal, 2017). Such networks have been used in particle physics for a long time (Bhat & Prosper, 2005; Saucedo, 2007; Xu et al., 2008). For the LHC they have been proposed to extract uncertainties in jet classification (Bollweg et al., 2020) and jet calibration (Kasieczka et al., 2020).
They can cover essentially all uncertainties related to statistical, systematic, and structural limitations of the training sample (Nachman, 2020). Similar ideas can be used as part of ensemble techniques (Araz & Spannowsky, 2021).

We propose a Bayesian invertible neural net (BINN) which combines the flexibility of normalizing flows with BNNs and demonstrate it on a simple 2d toy example and finally a semi-realistic LHC example. See the appendix for an extended discussion of these experiments and further results.

## 2. Generative Networks with Uncertainties

We start by reminding ourselves that we often assume that a generative model has learned a phase-space density perfectly, so the only remaining source of uncertainty is the statistics of the generated sample binned in phase space. However, we know that such an assumption is not realistic (Bollweg et al., 2020; Kasieczka et al., 2020), and we need to estimate the effect of statistical or systematic limitations of the training data. The problem with such a statistical limitation is that it is turned into a systematic shortcoming of the generative model (Butter et al., 2019): once we generate a new sample, the information on the training data is lost, and the only way we might recover it is by training many networks and comparing their outcome. For most applications this is not a realistic or economical option, so we will show how an alternative solution could look.

Invertible Neural Networks. To model complex densities such as LHC phase space distributions, we can employ normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016; Kingma & Dhariwal, 2018; Kobyzev et al., 2019).

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author.

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---
They use the fact that we can transform a random variable $z \sim {p}_{Z}\left( z\right)$ using a bijective map $G : z \rightarrow x$ into a random variable $x = G\left( z\right)$ with the density

$$
{p}_{X}\left( x\right) = {p}_{Z}\left( z\right) {\left| \det \frac{\partial G\left( z\right) }{\partial z}\right| }^{-1}
$$

$$
= {p}_{Z}\left( {\bar{G}\left( x\right) }\right) \left| {\det \frac{\partial \bar{G}\left( x\right) }{\partial x}}\right| , \tag{1}
$$

where we defined $\bar{G} \mathrel{\text{:=}} {G}^{-1}$ . Given a sample $z$ from the base distribution ${p}_{Z}$ , we can use the map $G$ in the forward direction to generate a sample from the target distribution, and in the inverse direction to map a sample $x$ from the target back to the base distribution.

For this to be a useful approach, we require the base distribution ${p}_{Z}$ to be simple enough to allow for effective sample generation, $G$ to be flexible enough for a non-trivial transformation, and its Jacobian determinant to be efficiently computable. With these constraints, $G$ gives us a powerful generative pipeline to model the phase space density ${p}_{X}$ . To satisfy them we choose the base distribution to be a multivariate Gaussian with mean zero and an identity matrix as the covariance, and rely on the real non-volume preserving flow (Dinh et al., 2016) in the invertible neural network (INN) formulation by Ardizzone et al. (2018) for $G$ .

An INN composes multiple coupling layers, each with the following structure.
The input vector $z$ into a layer is split in half, $z = \left( {{z}_{1},{z}_{2}}\right)$ , allowing us to compute the output $x = \left( {{x}_{1},{x}_{2}}\right)$ of the layer as

$$
\left( \begin{array}{l} {x}_{1} \\ {x}_{2} \end{array}\right) = \left( \begin{array}{l} {z}_{1} \odot {e}^{{s}_{2}\left( {z}_{2}\right) } + {t}_{2}\left( {z}_{2}\right) \\ {z}_{2} \odot {e}^{{s}_{1}\left( {x}_{1}\right) } + {t}_{1}\left( {x}_{1}\right) \end{array}\right) , \tag{2}
$$

where ${s}_{i},{t}_{i}\left( {i = 1,2}\right)$ are small multi-layer perceptrons (MLPs), and $\odot$ is the element-wise product. This structure allows for both easy invertibility and a tractable Jacobian. Throughout, we refer to their weights jointly as $\theta$ .

Bayesian INN. The invertible neural net provides us with a powerful generative model of the underlying data distribution. However, it lacks a mechanism to account for our uncertainty in the transformation parameters $\theta$ themselves. To model it, we switch from deterministic to probabilistic transformations, replacing the deterministic sub-networks ${s}_{1,2}$ and ${t}_{1,2}$ in each of the coupling layers with Bayesian neural nets. Placing priors over their weights we get as the generative pipeline for our BINN

$$
\theta \sim p\left( \theta \right) ,
$$

$$
x \mid \theta \sim {p}_{X}\left( {x \mid \theta }\right) = {p}_{Z}\left( {\bar{G}\left( {x;\theta }\right) }\right) \left| {\det \frac{\partial \bar{G}\left( {x;\theta }\right) }{\partial x}}\right| . \tag{3}
$$

Given our set of observations $\mathcal{D}$ we can rely on variational inference (Blei et al., 2017) to approximate the intractable posterior $p\left( {\theta \mid \mathcal{D}}\right)$ with a mean-field Gaussian as the variational posterior ${q}_{\phi }\left( \theta \right)$ .
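The mean-field Gaussian variational posterior can be made concrete in a few lines. The following minimal NumPy sketch (our own illustration with hypothetical names, not the authors' implementation) draws a reparameterized weight sample and evaluates the closed-form KL divergence to a standard-normal prior:

```python
import numpy as np

def softplus(rho):
    # maps the unconstrained parameter rho to a positive standard deviation
    return np.log1p(np.exp(rho))

def sample_weights(mu, rho, rng):
    # reparameterization trick: w = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through mu and rho during training
    eps = rng.normal(size=mu.shape)
    return mu + softplus(rho) * eps

def kl_to_standard_normal(mu, rho):
    # closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
    sigma = softplus(rho)
    return np.sum(0.5 * (sigma**2 + mu**2 - 1.0) - np.log(sigma))

rng = np.random.default_rng(0)
mu = np.zeros((4, 4))                     # variational means
rho = np.full((4, 4), np.log(np.e - 1))   # softplus(rho) = 1, i.e. unit width
w = sample_weights(mu, rho, rng)          # one weight sample per forward pass
print(w.shape)                            # (4, 4)
print(kl_to_standard_normal(mu, rho))     # ~0 for mu = 0, sigma = 1
```

In training, one such weight sample per minibatch would be drawn for each sub-network, with the KL term added to the negative log-likelihood.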
Learning then consists of maximizing the evidence lower bound (ELBO)

$$
\mathcal{L} = \mathop{\sum }\limits_{{n = 1}}^{N}{\mathbb{E}}_{{q}_{\phi }\left( \theta \right) }\left\lbrack {\log {p}_{X}\left( {{x}_{n} \mid \theta }\right) }\right\rbrack - \operatorname{KL}\left( {{q}_{\phi }\left( \theta \right) , p\left( \theta \right) }\right)
$$

$$
= \mathop{\sum }\limits_{{n = 1}}^{N}{\mathbb{E}}_{{q}_{\phi }\left( \theta \right) }\left\lbrack {\log {p}_{Z}\left( {\bar{G}\left( {{x}_{n};\theta }\right) }\right) + \log \left| {\det \frac{\partial \bar{G}\left( {{x}_{n};\theta }\right) }{\partial {x}_{n}}}\right| }\right\rbrack
$$

$$
- \mathrm{{KL}}\left( {{q}_{\phi }\left( \theta \right) , p\left( \theta \right) }\right) ,
$$

via stochastic gradient ascent on the parameters $\phi$ . By design, all three terms, namely the log-likelihood, the log-determinant, and the Kullback-Leibler (KL) divergence, can be computed easily, and we can approximate the sum with a minibatch and the expectation with weight samples.

## 3. Experiments

Before we tackle a semi-realistic LHC setup, we first study the behavior of BINNs for a set of toy examples, namely distributions over the minimally allowed two-dimensional parameter space where in one dimension the density is flat. Aside from the fact that these toy examples illustrate that the BINN actually constructs a meaningful uncertainty distribution, we will use the combination of density and uncertainty maps to analyse how an INN actually learns a density distribution. We will see that the INN describes the density map in the sense of a few-parameter fit, rather than numerically encoding patches over the parameter space independently. We discuss one of the three toy experiments here and refer the reader to the appendix for the other two. There we also provide details on the architecture and hyperparameters.
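To make the generative pipeline concrete before turning to the experiments, the affine coupling transform of Eq. (2), its exact inverse, and its log-Jacobian can be sketched in a few lines. This is a minimal NumPy illustration with fixed toy functions standing in for the learned sub-networks ${s}_{i}$ and ${t}_{i}$; it is our own sketch, not the paper's implementation:

```python
import numpy as np

# toy stand-ins for the learned sub-networks; any smooth functions work here
def s(u):
    return 0.5 * np.tanh(u)

def t(u):
    return 0.1 * u

def coupling_forward(z):
    # Eq. (2): x1 = z1 * exp(s(z2)) + t(z2), then x2 = z2 * exp(s(x1)) + t(x1)
    z1, z2 = np.split(z, 2)
    x1 = z1 * np.exp(s(z2)) + t(z2)
    x2 = z2 * np.exp(s(x1)) + t(x1)
    # the Jacobian is block-triangular, so log|det dx/dz| is just a sum
    log_det = np.sum(s(z2)) + np.sum(s(x1))
    return np.concatenate([x1, x2]), log_det

def coupling_inverse(x):
    # invert the two half-steps in reverse order; s and t never need inverting
    x1, x2 = np.split(x, 2)
    z2 = (x2 - t(x1)) * np.exp(-s(x1))
    z1 = (x1 - t(z2)) * np.exp(-s(z2))
    return np.concatenate([z1, z2])

rng = np.random.default_rng(0)
z = rng.normal(size=6)
x, log_det = coupling_forward(z)
print(np.allclose(coupling_inverse(x), z))  # True: the map is exactly invertible
```

In a real INN the split halves are permuted between layers and ${s}_{i}$, ${t}_{i}$ are trainable MLPs; in the Bayesian version their weights are drawn from ${q}_{\phi }\left( \theta \right)$.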
#### 3.1. Toy Events with Uncertainties: The Wedge Ramp

Our first toy example is a two-dimensional ramp distribution, linear in one direction and flat in the other,

$$
p\left( {x, y}\right) = \operatorname{Linear}\left( {x \in \left\lbrack {0,1}\right\rbrack }\right) \cdot \operatorname{Const}\left( {y \in \left\lbrack {0,1}\right\rbrack }\right) = {2x}.
$$

The second factor ensures that the distribution $p\left( {x, y}\right)$ is normalized to one, and the network output is shown in Fig. 1 (upper). The output consists of unweighted events in the two-dimensional parameter space $\left( {x, y}\right)$ . We show one-dimensional distributions after marginalizing over the unobserved direction and find that the network reproduces the equation well. In the bottom row we include the predictive uncertainty given by the BINN. For this purpose we train a network on the two-dimensional parameter space and evaluate it for a set of points with $x \in \left\lbrack {0,1}\right\rbrack$ and a constant $y$ -value. In the left panel we indicate the predictive uncertainty as an error bar around the density estimate. Throughout the paper we always remove the phase space boundaries, because we know that the network is unstable there, and the uncertainties explode just like we expect.

![01963e33-03ea-7360-ae83-3e032c74762f_2_387_196_965_583_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_2_387_196_965_583_0.jpg)

Figure 1. (upper) Two-dimensional and marginal densities for the linear wedge ramp. (lower) Density and predictive uncertainty distribution for the wedge ramp. In the left panel the density and uncertainty are averaged over several lines with constant $y$ . In the central and right panels, the uncertainty band on ${\sigma }_{\text{pred }}$ is given by their variation. The green curve represents a two-parameter fit to (5).
The relative uncertainty grows for small values of $x$ and hence small values of $p\left( {x, y}\right)$ , and it covers the deviation of the extracted density from the true density well. These features are common to all our experiments. In the central and right panels of Fig. 1 we show the relative and absolute predictive uncertainties. The error bar indicates how much ${\sigma }_{\text{pred }}$ varies for different choices of $y$ . We compute it as the standard deviation of different values of ${\sigma }_{\text{pred }}$ , after confirming that the central values agree within this range. As expected, the relative uncertainty decreases towards larger $x$ . However, the absolute uncertainty shows a distinctive minimum in ${\sigma }_{\text{pred }}$ around $x \approx {0.45}$ . This minimum is a common feature in all our training rounds, so we need to explain it.

To understand this non-trivial uncertainty distribution ${\sigma }_{\text{pred }}\left( x\right)$ we focus on the non-trivial $x$ -coordinate and its linear behavior $p\left( x\right) = {ax} + b$ with $x \in \left\lbrack {0,1}\right\rbrack$ . As the model learns a density, we can remove $b$ by fixing the normalization, $p\left( x\right) = a\left( {x - {0.5}}\right) + 1$ . If we now assume that the network acts like a fit of $a$ , as will turn out to be useful, we can relate the uncertainty ${\Delta a}$ to an uncertainty in the density,

$$
{\sigma }_{\text{pred }} \equiv {\Delta p} \approx \left| {x - {0.5}}\right| {\Delta a}. \tag{4}
$$

The absolute value appears because the uncertainties are defined to be positive, as encoded in the usual quadratic error propagation. The uncertainty distribution has a minimum at $x = 1/2$ , close to the observed value in Fig. 1.

The differences between the simple prediction in (4) and our numerical findings in Fig. 1 are that the predictive uncertainty is not symmetric and does not reach zero.
To account for these effects we can expand our very simple ansatz to $p\left( x\right) = {ax} + b$ with $x \in \left\lbrack {{x}_{\min },{x}_{\max }}\right\rbrack$ . Using the normalization condition again we find

$$
p\left( x\right) = {ax} + \frac{1 - \frac{a}{2}\left( {{x}_{\max }^{2} - {x}_{\min }^{2}}\right) }{{x}_{\max } - {x}_{\min }}.
$$

Again assuming a fit-like behavior of the flow network we expect for the predictive uncertainty

$$
{\sigma }_{\text{pred }}^{2} \equiv {\left( \Delta p\right) }^{2} = {\left( x - \frac{1}{2}\right) }^{2}{\left( \Delta a\right) }^{2} + {\left( 1 + \frac{a}{2}\right) }^{2}{\left( \Delta {x}_{\max }\right) }^{2}
$$

$$
+ {\left( 1 - \frac{a}{2}\right) }^{2}{\left( \Delta {x}_{\min }\right) }^{2}\text{.} \tag{5}
$$

Including ${x}_{\max }$ adds an $x$ -independent offset. Also accounting for ${x}_{\min }$ does not change the $x$ -dependence of the predictive uncertainty. The slight shift of the minimum and the asymmetry between the lower and upper boundaries in $x$ are not explained by this argument. We ascribe them to boundary effects, specifically the challenge for the network to describe the correct approach towards $p\left( x\right) \rightarrow 0$ .

The green line in the lower panels of Fig. 1 gives a two-parameter fit of ${\Delta a}$ and $\Delta {x}_{\max }$ to the ${\sigma }_{\text{pred }}$ distribution from the BINN. It indicates a hierarchy: the network extracts the $x$ -independent term with high precision, whereas the uncertainty on the slope $a$ is around $4\%$ .

#### 3.2. LHC Events with Uncertainties

As a physics example we consider the Drell-Yan process

$$
{pp} \rightarrow Z \rightarrow {e}^{ + }{e}^{ - }
$$

with its simple $2 \rightarrow 2$ phase space combined with the parton density.
The training set consists of an unweighted set of 4-vectors simulated with MADGRAPH5 (Alwall et al., 2014) at ${13}\mathrm{{TeV}}$ collider energy with the NNPDF2.3 parton densities (Ball et al., 2013). We fix the masses of the final-state leptons and enforce momentum conservation in the transverse direction, which leaves us with a four-dimensional phase space. In our discussion we limit ourselves to a sufficiently large set of one-dimensional distributions. For these marginalized uncertainties we follow the procedure laid out in Sec. C.1.4 with 50 samples in the BINN-weight space.

![01963e33-03ea-7360-ae83-3e032c74762f_3_324_182_1097_670_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_3_324_182_1097_670_0.jpg)

Figure 2. Marginalized kinematic distributions for the Drell-Yan process. We show the central prediction from the BINN and include the predictive uncertainty as the blue band. The red band indicates the statistical uncertainty of the training data per bin in the Gaussian limit.

To start with, we show a set of generated kinematic distributions in Fig. 2. The positron energy features the expected strong peak from the $Z$ -resonance. Its sizeable tail to larger energies is well described by the training data up to ${E}_{e} \approx {280}\mathrm{{GeV}}$ . The central value learned by the BINN becomes unstable at slightly lower values around ${250}\mathrm{{GeV}}$ , as expected. The momentum component ${p}_{x}$ is not observable given the azimuthal symmetry of the detector, but its broad distribution is nevertheless reproduced correctly. The predictive uncertainty covers the slight deviations over the entire range. What is observable at the LHC is the transverse momentum of the outgoing leptons, with a distribution similar to the energy, just with the $Z$ -mass peak at the upper end of the distribution.
Again, the predictive uncertainty determined by the BINN covers the slight deviations from the truth on the pole and in both tails. In the second row we show the ${p}_{z}$ component as an example of a strongly peaked distribution, similar to the Gaussian toy model in Sec. C.1.2.

While the energy of the lepton pair has a similar basic form as the individual energies, we also show the invariant mass of the electron-positron pair, which is described by the usual Breit-Wigner peak. It is well known that this intermediate resonance is especially hard to learn for a network, because it forms a narrow, highly correlated phase space structure. Going beyond the precision shown here would for instance require an additional MMD loss (as e.g. in Butter et al., 2019; Bellagente et al., 2020b). This resonance peak is the only distribution where the predictive uncertainty does not cover the deviation of the BINN density from the truth. This apparent failure corresponds to the fact that generative networks always overestimate the width and hence underestimate the height of this mass peak (Butter et al., 2019). This is an example of the network being limited by its expressive power in phase space resolution, generating an uncertainty which the Bayesian version cannot account for. See Sec. C.2.1 for further results.

## 4. Conclusion

Controlling the output of generative networks and quantifying their uncertainties is the main task for any application in LHC physics, be it in forward generation, inversion, or inference. We have proposed to use a Bayesian invertible neural network (BINN) to quantify the uncertainties from the network training for each generated event. For a series of two-dimensional toy models and an LHC-inspired application we have shown how the Bayesian setup indeed generates an uncertainty distribution, over the full phase space and over marginalized phase spaces. As expected, the learned uncertainty shrinks with improved training statistics.
Our method can be trivially extended from unweighted to weighted events by adapting the simple loss.

## References

Alanazi, Y., Sato, N., Liu, T., Melnitchouk, W., Kuchera, M. P., Pritchard, E., Robertson, M., Strauss, R., Velasco, L., and Li, Y. Simulation of electron-proton scattering events by a Feature-Augmented and Transformed Generative Adversarial Network (FAT-GAN). 1 2020.

Alwall, J. et al. The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations. JHEP, 07:079, 2014.

Andreassen, A., Feige, I., Frye, C., and Schwartz, M. D. JUNIPR: a Framework for Unsupervised Machine Learning in Particle Physics. Eur. Phys. J., C79(2):102, 2019.

Araz, J. Y. and Spannowsky, M. Combine and Conquer: Event Reconstruction with Bayesian Ensemble Neural Networks. 2 2021.

Ardizzone, L., Kruse, J., Wirkert, S., Rahner, D., Pellegrini, E. W., Klessen, R. S., Maier-Hein, L., Rother, C., and Köthe, U. Analyzing inverse problems with invertible neural networks. 2018.

ATLAS Collaboration. Deep generative models for fast shower simulation in ATLAS. ATL-SOFT-PUB-2018-001, 2018.

ATLAS Collaboration. Energy resolution with a GAN for Fast Shower Simulation in ATLAS. ATLAS-SIM-2019-004, 2019.

Backes, M., Butter, A., Plehn, T., and Winterhalder, R. How to GAN Event Unweighting. 12 2020.

Badger, S. and Bullock, J. Using neural networks for efficient evaluation of high multiplicity scattering amplitudes. 2 2020.

Baldi, P., Blecher, L., Butter, A., Collado, J., Howard, J. N., Keilbach, F., Plehn, T., Kasieczka, G., and Whiteson, D. How to GAN Higher Jet Resolution. 12 2020.

Ball, R. D., Bertone, V., Carrazza, S., Del Debbio, L., Forte, S., Guffanti, A., Hartland, N. P., and Rojo, J. Parton distributions with QED corrections. Nucl. Phys., B877:290-320, 2013.

Belayneh, D. et al. Calorimetry with Deep Learning: Particle Simulation and Reconstruction for Collider Physics. Eur.
Phys. J. C, 80(7):688, 2020.

Bellagente, M., Butter, A., Kasieczka, G., Plehn, T., Rousselot, A., Winterhalder, R., Ardizzone, L., and Köthe, U. Invertible Networks or Partons to Detector and Back Again. SciPost Phys., 9:074, 2020a.

Bellagente, M., Butter, A., Kasieczka, G., Plehn, T., and Winterhalder, R. How to GAN away Detector Effects. SciPost Phys., 8(4):070, 2020b.

Bhat, P. C. and Prosper, H. B. Bayesian neural networks. Conf. Proc. C, 050912:151, 2005.

Bieringer, S., Butter, A., Heimel, T., Höche, S., Köthe, U., Plehn, T., and Radev, S. T. Measuring QCD Splittings with Invertible Networks. 12 2020.

Bishara, F. and Montull, M. (Machine) Learning Amplitudes for Faster Event Generation. 12 2019.

Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859-877, 2017.

Bollweg, S., Haußmann, M., Kasieczka, G., Luchmann, M., Plehn, T., and Thompson, J. Deep-Learning Jets with Uncertainties and More. SciPost Phys., 8(1):006, 2020.

Bothmann, E. and Debbio, L. Reweighting a parton shower using a neural network: the final-state case. JHEP, 01:033, 2019.

Bothmann, E., Janßen, T., Knobbe, M., Schmale, T., and Schumann, S. Exploring phase space with Neural Importance Sampling. SciPost Phys., 8(4):069, 2020.

Brehmer, J. and Cranmer, K. Flows for simultaneous manifold learning and density estimation. 3 2020.

Buhmann, E., Diefenbacher, S., Eren, E., Gaede, F., Kasieczka, G., Korol, A., and Krüger, K. Getting High: High Fidelity Simulation of High Granularity Calorimeters with High Speed. 5 2020.

Buhmann, E., Diefenbacher, S., Eren, E., Gaede, F., Kasieczka, G., Korol, A., and Krüger, K. Decoding Photons: Physics in the Latent Space of a BIB-AE Generative Network. 2 2021.

Butter, A. and Plehn, T. Generative Networks for LHC events. 8 2020.

Butter, A., Plehn, T., and Winterhalder, R. How to GAN LHC Events. SciPost Phys., 7:075, 2019.
+ +Butter, A., Diefenbacher, S., Kasieczka, G., Nachman, B., and Plehn, T. GANplifying Event Samples. 8 2020a. + +Butter, A., Plehn, T., and Winterhalder, R. How to GAN Event Subtraction. SciPost Phys. Core, 3:009, 2020b. + +Chen, I.-K., Klimek, M. D., and Perelstein, M. Improved Neural Network Monte Carlo Simulation. SciPost Phys., 10:023, 2021. + +Creswell, A., White, T., Dumoulin, V., Arulkumaran, K., Sengupta, B., and Bharath, A. A. Generative adversarial networks: An overview. IEEE Signal Processing Magazine, 35(1):53-65, Jan 2018. ISSN 1053-5888. + +Datta, K., Kar, D., and Roy, D. Unfolding with Generative Adversarial Networks. 2018. + +de Oliveira, L., Paganini, M., and Nachman, B. Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis. Comput. Softw. Big Sci., 1(1):4, 2017. + +Di Bello, F. A., Ganguly, S., Gross, E., Kado, M., Pitt, M., Santi, L., and Shlomi, J. Towards a Computer Vision Particle Flow. Eur. Phys. J. C, 81(2):107, 2021. + +Di Sipio, R., Faucci Giannelli, M., Ketabchi Haghighat, S., and Palazzo, S. DijetGAN: A Generative-Adversarial Network Approach for the Simulation of QCD Dijet Events at the LHC. JHEP, 08:110, 2020. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. 2016. + +Dohi, K. Variational Autoencoders for Jet Simulation. 9 2020. + +Erdmann, M., Geiger, L., Glombitza, J., and Schmidt, D. Generating and refining particle detector simulations using the Wasserstein distance in adversarial networks. Comput. Softw. Big Sci., 2(1):4, 2018. + +Erdmann, M., Glombitza, J., and Quast, T. Precise simulation of electromagnetic calorimeter showers using a Wasserstein Generative Adversarial Network. Comput. Softw. Big Sci., 3:4, 2019. + +Gal, Y. Uncertainty in Deep Learning. PhD thesis, Cambridge, 2016. + +Gao, C., Höche, S., Isaacson, J., Krause, C., and Schulz, H. Event Generation with Normalizing Flows. Phys. Rev. D, 101(7):076002, 2020a. 
Gao, C., Isaacson, J., and Krause, C. i-flow: High-dimensional Integration and Sampling with Normalizing Flows. 2020b.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks. 2014.

Hashemi, B., Amin, N., Datta, K., Olivito, D., and Pierini, M. LHC analysis-specific datasets with Generative Adversarial Networks. 2019.

Kasieczka, G., Luchmann, M., Otterpohl, F., and Plehn, T. Per-Object Systematics using Deep-Learned Calibration. SciPost Phys., 9:089, 2020.

Kendall, A. and Gal, Y. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Proc. NIPS, 2017.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1 \times 1$ convolutions. 2018.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. 2014.

Kingma, D. P. and Welling, M. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4):307-392, 2019. ISSN 1935-8245.

Klimek, M. D. and Perelstein, M. Neural Network-Based Approach to Phase Space Integration. 2018.

Knapp, O., Dissertori, G., Cerri, O., Nguyen, T. Q., Vlimant, J.-R., and Pierini, M. Adversarially Learned Anomaly Detection on CMS Open Data: re-discovering the top quark. 5 2020.

Kobyzev, I., Prince, S., and Brubaker, M. A. Normalizing flows: An introduction and review of current methods. 2019.

Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2020. ISSN 1939-3539.

Lin, J., Bhimji, W., and Nachman, B. Machine Learning Templates for QCD Factorization in the Search for Physics Beyond the Standard Model. JHEP, 05:181, 2019.

MacKay, D. Probable Networks and Plausible Predictions - A Review of Practical Bayesian Methods for Supervised Neural Networks. Comp. in Neural Systems, 6:4679, 1995.

Monk, J. W.
Deep Learning as a Parton Shower. JHEP, 12:021, 2018.

Musella, P. and Pandolfi, F. Fast and Accurate Simulation of Particle Detectors Using Generative Adversarial Networks. Comput. Softw. Big Sci., 2(1):8, 2018.

Müller, T., McWilliams, B., Rousselle, F., Gross, M., and Novák, J. Neural importance sampling. 2018.

Nachman, B. A guide for deploying Deep Learning in LHC searches: How to achieve optimality and account for uncertainty. SciPost Phys., 8:090, 2020.

Nachman, B. and Shih, D. Anomaly Detection with Density Estimation. Phys. Rev. D, 101:075042, 2020.

Neal, R. M. Bayesian learning for neural networks. PhD thesis, Toronto, 1995.

Otten, S. et al. Event Generation and Statistical Sampling with Deep Generative Models and a Density Information Buffer. 2019.

Paganini, M., de Oliveira, L., and Nachman, B. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters. Phys. Rev. Lett., 120(4):042003, 2018a.

Paganini, M., de Oliveira, L., and Nachman, B. CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks. Phys. Rev., D97(1):014021, 2018b.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. 2019.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019.

Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., and Köthe, U.
Bayesflow: Learning complex stochastic models with invertible neural networks. 2020.

Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1530-1538, Lille, France, 07-09 Jul 2015. PMLR.

Saucedo, S. R. Bayesian Neural Networks for Classification. Master's thesis, Florida State University, 2007.

Stienen, B. and Verheyen, R. Phase Space Sampling and Inference from Weighted Events with Autoregressive Flows. SciPost Phys., 10:038, 2021.

Xu, Y., Xu, W., Meng, Y., Zhu, K., and Xu, W. Applying Bayesian Neural Networks to Event Reconstruction in Reactor Neutrino Experiments. Nucl. Instrum. Meth. A, 592:451, 2008.

## Appendix

## A. Related Works on Generative Models for LHC event simulations

Promising techniques include generative adversarial networks (GAN) (Goodfellow et al., 2014; Creswell et al., 2018; Butter et al., 2020a), variational autoencoders (Kingma & Welling, 2014; 2019), and normalizing flows (Rezende & Mohamed, 2015; Kobyzev et al., 2020; Papamakarios et al., 2019; Kobyzev et al., 2019; Müller et al., 2018), including invertible networks (INNs) (Ardizzone et al., 2018; Dinh et al., 2016; Kingma & Dhariwal, 2018). They can improve phase space integration (Klimek & Perelstein, 2018; Chen et al., 2021), phase space sampling (Bothmann et al., 2020; Gao et al., 2020b;a), and amplitude computations (Bishara & Montull, 2019; Badger & Bullock, 2020).
Further developments are fully NN-based event generation (Otten et al., 2019; Hashemi et al., 2019; Di Sipio et al., 2020; Butter et al., 2019; Alanazi et al., 2020), event subtraction (Butter et al., 2020b), event unweighting (Stienen & Verheyen, 2021; Backes et al., 2020), detector simulation (Paganini et al., 2018a;b; Musella & Pandolfi, 2018; Erdmann et al., 2018; 2019; ATLAS Collaboration, 2018; 2019; Belayneh et al., 2020; Buhmann et al., 2020; 2021), or parton showering (Bothmann & Debbio, 2019; de Oliveira et al., 2017; Monk, 2018; Andreassen et al., 2019; Dohi, 2020). Generative models will also improve searches for physics beyond the Standard Model (Lin et al., 2019), anomaly detection (Nachman & Shih, 2020; Knapp et al., 2020), detector resolution (Di Bello et al., 2021; Baldi et al., 2020), and inference (Brehmer & Cranmer, 2020; Radev et al., 2020; Bieringer et al., 2020). Finally, conditional GANs and INNs allow us to invert the simulation chain to unfold detector effects (Datta et al., 2018; Bellagente et al., 2020b) and extract the hard scattering process at parton level (Bellagente et al., 2020a).

## B. Uncertainties on Event Samples

Uncertainties on a simulated kinematic or phase space distribution are crucial for any LHC analysis. For instance, we need to know to what degree we can trust a simulated ${p}_{T}$ -distribution in a mono-jet search for dark matter. We denote the complete phase space weight for a given phase space point as $p\left( x\right)$ , and can then write a total cross section as

$$
{\sigma }_{\text{tot }} = {\int }_{0}^{1}{dx}\,p\left( x\right) \;\text{ with }\;p\left( x\right) > 0. \tag{6}
$$

In this simplified notation $x$ stands for a generally multidimensional phase space. For each phase space position, we can also define an uncertainty $\sigma \left( x\right)$ .

One contribution to the error budget comes from systematic and theory uncertainties, ${\sigma }_{\text{th/sys }}\left( x\right)$ .
The former reflect our ignorance of aspects of the training data and do not decrease when we increase the amount of training data. The latter capture the degree to which we trust our prediction, for instance based on self-consistency arguments. For example, when accounting for large, momentum-dependent logarithms, we can compute them from the phase space position of an unweighted event alone. If we use a numerical variation of the factorization and renormalization scales to estimate a theory uncertainty, we typically re-weight events with the scales. Another uncertainty arises from the statistical limitations of the training data, ${\sigma }_{\text{stat }}\left( x\right)$. For instance in mono-jet production, the tails of the predicted ${p}_{T}$-distribution for the Standard Model will at some point be statistics limited. In the Gaussian limit, a statistical uncertainty can be defined by binning the phase space, and in that limit we expect a scaling like ${\sigma }_{\text{stat }}\left( x\right) \sim \sqrt{p\left( x\right) }$; we will test that hypothesis in detail in Sec. C.1.

Once we know the uncertainties as a function of the phase space position, we can account for them as additional entries in unweighted or weighted events. For instance, relative uncertainties can be easily added to unweighted events,

$$
{\operatorname{ev}}_{i} = \left( \begin{matrix} {\sigma }_{\text{stat }}/p \\ {\sigma }_{\text{syst }}/p \\ {\sigma }_{\text{th }}/p \\ \left\{ {x}_{\mu , j}\right\} \\ \left\{ {p}_{\mu , j}\right\} \end{matrix}\right) ,\text{ with }\mu = 0\ldots 3\text{ for each particle }j.
$$

The entries $\sigma$ or $\sigma /p$ are smooth functions of phase space. The challenge in working with this definition is how to extract ${\sigma }_{\text{stat }}$ without binning. We will show how Bayesian networks give us access to the limited information in the training data.
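Before turning to networks, the $\sqrt{p(x)}$ scaling hypothesis can be checked numerically on a binned toy sample. A minimal NumPy sketch, using the wedge-ramp density $p(x) = 2x$ from the toy studies as an illustrative stand-in for the training data (sample size and binning are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check sigma_stat(x) ~ sqrt(p(x)) on a binned toy sample. The wedge-ramp
# density p(x) = 2x on [0, 1] stands in for the training data; its CDF is
# x^2, so inverse-CDF sampling gives x = sqrt(u) with uniform u.
N, M = 20_000, 200
bins = np.linspace(0.0, 1.0, 11)
width = bins[1] - bins[0]
centers = 0.5 * (bins[:-1] + bins[1:])

# Estimate the density M times and measure the per-bin statistical spread.
estimates = np.empty((M, len(centers)))
for i in range(M):
    x = np.sqrt(rng.random(N))
    counts, _ = np.histogram(x, bins=bins)
    estimates[i] = counts / (N * width)

sigma_stat = estimates.std(axis=0)
# Gaussian-limit expectation: sigma_stat(x) = sqrt(p(x) / (N * width)).
expected = np.sqrt(2.0 * centers / (N * width))
assert np.allclose(sigma_stat, expected, rtol=0.25)
```

The observed per-bin spread tracks $\sqrt{p(x)}$ up to the common $1/\sqrt{N\,\Delta x}$ prefactor, which is the scaling tested in detail in Sec. C.1.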
Specific theory and systematics counterparts can either be computed directly or extracted by appropriately modifying the training data (Bollweg et al., 2020; Kasieczka et al., 2020).

## C. Further Experiments

#### C.1. Toy Events with Uncertainties

The default architecture for our toy models is a network with 32 units per layer, three layers per coupling block, and a total of 20 coupling blocks. It is implemented in PyTorch (Paszke et al., 2019). More details are given in Tab. 1. The most relevant hyperparameter is the number of coupling blocks: more blocks provide a more stable performance across several trainings of the same architecture. Generally, moderate changes, for instance of the number of units per layer, do not have a visible impact on the performance. For each of the trainings we use a sample of ${300}\mathrm{k}$ events. The widths of the Gaussian priors are set to one. We checked that varying this width over several orders of magnitude does not have a significant impact on the performance.

Table 1. Hyper-parameters for all toy models, implemented in PyTorch (v1.4.0) (Paszke et al., 2019).

<table>
| Parameter | Flow |
| --- | --- |
| Hidden layers (per block) | 3 |
| Units per hidden layer | 32 |
| Batch size | 512 |
| Epochs | 300 |
| Trainable weights | 75k |
| Optimizer | Adam |
| $\left( {\alpha ,{\beta }_{1},{\beta }_{2}}\right)$ | $\left( {1 \times {10}^{-3},{0.9},{0.999}}\right)$ |
| Coupling layers | 20 |
| Training size | 300k |
| Prior width | 1 |
+ +#### C.1.1. Kicker Ramp + +We can test our findings from the linear wedge ramp using the slightly more complex quadratic or kicker ramp, + +$$ +p\left( {x, y}\right) = \operatorname{Quadr}\left( {x \in \left\lbrack {0,1}\right\rbrack }\right) \times \operatorname{Const}\left( {y \in \left\lbrack {0,1}\right\rbrack }\right) = {x}^{2} \times 3. +$$ + +We show the results from the network training for the density in Fig. 3 and find that the network describes the density well, limited largely by the flat, low-statistics approach towards the lower boundary with $p\left( x\right) \rightarrow 0$ . + +In complete analogy to Fig. 1 we show the complete BINN output with the density $p\left( {x, y}\right)$ and the predictive uncertainty ${\sigma }_{\text{pred }}\left( {x, y}\right)$ in Fig. 4. As for the linear case, the BINN reproduces the density well, deviations from the truth being within the predictive uncertainty in all points of phase space. We remove the phase space boundaries, where the network becomes unstable and the predictive uncertainties grows correspondingly. The indicated error bar on ${\sigma }_{\text{pred }}\left( {x, y}\right)$ is given by the variation of the predictions for different $y$ -values, after ensuring that their central values agree. The relative uncertainty at the lower boundary $x = 0$ is large, reflecting the statistical limitation of this phase-space region. An interesting feature appears again in the absolute uncertainty, namely a maximum-minimum combination as a function of $x$ . + +Again in analogy to the wedge ramp, we start with the parametrization of the density + +$$ +p\left( x\right) = a{\left( x - {x}_{0}\right) }^{2}\;\text{ with }\;x \in \left\lbrack {{x}_{0},{x}_{\max }}\right\rbrack , \tag{7} +$$ + +where we assume that the lower boundary coincides with the minimum and there is no constant offset. 
We choose to describe this density through the minimum position ${x}_{0}$, coinciding with the lower end of the $x$-range, and ${x}_{\max }$ as the second parameter.

![01963e33-03ea-7360-ae83-3e032c74762f_8_175_204_1384_418_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_8_175_204_1384_418_0.jpg)

Figure 3. Two-dimensional and marginal densities for the quadratic kicker ramp.

![01963e33-03ea-7360-ae83-3e032c74762f_8_170_722_1394_415_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_8_170_722_1394_415_0.jpg)

Figure 4. Density and predictive uncertainty distribution for the kicker ramp. In the left panel the density and uncertainty are averaged over several lines with constant $y$. In the central and right panels, the uncertainty band on ${\sigma }_{\text{pred }}$ is given by their variation. The green curve represents a two-parameter fit to (9).

The parameter $a$ can be eliminated through the normalization condition and we find

$$
p\left( x\right) = 3\frac{{\left( x - {x}_{0}\right) }^{2}}{{\left( {x}_{\max } - {x}_{0}\right) }^{3}}. \tag{8}
$$

If we vary ${x}_{0}$ and ${x}_{\max }$ we can trace two contributions to the uncertainty in the density,

$$
{\sigma }_{\text{pred }} \equiv {\Delta p} \supset \frac{9}{{\left( {x}_{\max } - {x}_{0}\right) }^{4}}\left| {\left( {x - {x}_{0}}\right) \left( {x - \frac{{x}_{0}}{3} - \frac{2{x}_{\max }}{3}}\right) }\right| \Delta {x}_{0}
$$

and

$$
{\sigma }_{\text{pred }} \equiv {\Delta p} \supset \frac{9}{{\left( {x}_{\max } - {x}_{0}\right) }^{4}}{\left( x - {x}_{0}\right) }^{2}\Delta {x}_{\max }, \tag{9}
$$

one from the variation of ${x}_{0}$ and one from the variation of ${x}_{\max }$. In analogy to (5) they need to be added in quadrature.
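The quadrature combination of the two contributions in (9) can be evaluated numerically. A small sketch, with the illustrative choice $x_0 = 0$, $x_{\max} = 1$ and placeholder parameter errors in which $\Delta x_0$ dominates:

```python
import numpy as np

# Quadrature combination of the two contributions in (9), for the
# illustrative choice x0 = 0, x_max = 1 and placeholder parameter
# errors with dx0 dominating over dxmax.
x = np.linspace(0.0, 1.0, 1001)
x0, xmax = 0.0, 1.0
dx0, dxmax = 0.01, 0.001

pref = 9.0 / (xmax - x0) ** 4
term_x0 = pref * np.abs((x - x0) * (x - x0 / 3.0 - 2.0 * xmax / 3.0)) * dx0
term_xmax = pref * (x - x0) ** 2 * dxmax
sigma_pred = np.sqrt(term_x0**2 + term_xmax**2)  # added in quadrature

# The non-trivial minimum of sigma_pred sits close to x = 2/3.
interior = (x > 0.3) & (x < 0.9)
x_min = x[interior][np.argmin(sigma_pred[interior])]
assert abs(x_min - 2.0 / 3.0) < 0.05
```

With $\Delta x_0$ dominating, the combined uncertainty indeed develops its non-trivial minimum near $x = 2/3$, slightly shifted by the $\Delta x_{\max}$ term.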
If the uncertainty on $\Delta {x}_{0}$ dominates, the uncertainty has a trivial minimum at $x = 0$ and a non-trivial minimum at $x = 2/3$. From $\Delta {x}_{\max }$ we get another contribution which scales like ${\Delta p} \propto p\left( x\right)$. In Fig. 4 we clearly observe both contributions, and the green line in the lower panels is given by the corresponding two-parameter fit to the ${\sigma }_{\text{pred }}$ distribution from the BINN.

#### C.1.2. Gaussian Ring

Our third example is a two-dimensional Gaussian ring, which in terms of polar coordinates reads

$$
p\left( {r,\phi }\right) = \text{Gauss}\left( {r > 0;\mu = 4, w = 1}\right) \times \operatorname{Const}\left( {\phi \in \left\lbrack {0,\pi }\right\rbrack }\right) . \tag{10}
$$

We define the Gaussian density as usual,

$$
\operatorname{Gauss}\left( r\right) = \frac{1}{\sqrt{2\pi }w}\exp \left\lbrack {-\frac{1}{2{w}^{2}}{\left( r - \mu \right) }^{2}}\right\rbrack . \tag{11}
$$

The density defined in (10) can be translated into Cartesian coordinates as

$$
p\left( {x, y}\right) = \operatorname{Gauss}\left( {r\left( {x, y}\right) ;\mu = 4, w = 1}\right) \times \operatorname{Const}\left( {\phi \left( {x, y}\right) \in \left\lbrack {0,\pi }\right\rbrack }\right) \times \frac{1}{r\left( {x, y}\right) }, \tag{12}
$$

where the additional factor $1/r$ comes from the Jacobian.

![01963e33-03ea-7360-ae83-3e032c74762f_9_179_184_1387_437_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_9_179_184_1387_437_0.jpg)

Figure 5. Two-dimensional and marginal densities for the Gaussian (half-)ring.

We train the BINN on Cartesian coordinates, just like in the two examples before, and limit ourselves to $y > 0$ to avoid problems induced by learning a non-trivial topology in mapping the latent and phase spaces. In Fig.
5 we once again see that our network describes the true two-dimensional density well.

In Fig. 6 we show the Cartesian density but evaluated on a line of constant angle. This form includes the Jacobian and has the expected, slightly shifted peak position at ${r}_{\max } = 2 + \sqrt{3} = {3.73}$. The BINN returns a predictive uncertainty which grows towards both boundaries. The error band easily covers the deviation of the density learned by the BINN from the true density. While the relative predictive uncertainty appears to have a simple minimum around the peak of the density, we again see that the absolute uncertainty has a distinct structure with a local minimum right at the peak. The question is what we can learn about the INN from this pattern in the BINN.

As before, we describe our distribution in the relevant direction in terms of convenient fit parameters. For the Gaussian radial density these are the mean $\mu$ and the width $w$ used in (10). The contributions driven by the extraction of the mean and the width in Cartesian coordinates read

$$
{\sigma }_{\text{pred }} \equiv {\Delta p} \supset \left| {\frac{G\left( r\right) }{r}\frac{\mu - r}{{w}^{2}}}\right| {\Delta \mu }
$$

$$
\text{and}\;{\sigma }_{\text{pred }} \equiv {\Delta p} \supset \left| {\frac{{\left( r - \mu \right) }^{2}}{{w}^{3}} - \frac{1}{w}}\right| {\Delta w}. \tag{13}
$$

In analogy to (5) the two contributions need to be added in quadrature for the full, fit-like uncertainty. The contribution from the mean has a minimum at $r = \mu = 4$ and is otherwise dominated by the exponential behavior of the Gaussian, just as we observe in the BINN result. In the opposite limit of ${\Delta \mu } \ll {\Delta w}$ the uncertainty develops the maxima at $r = 3$ and $r = 5$, which we observe in Fig. 6. In the lower panels we show a one-parameter fit of the BINN output and find that the network determined the mean of the Gaussian as $\mu = 4 \pm {0.037}$.

#### C.1.3.
Errors vs Training Statistics

Even though it is clear from the above discussion that we cannot expect the predictive uncertainties to follow a simple scaling pattern, as for the regression (Kasieczka et al., 2020) and classification (Bollweg et al., 2020) networks, there still remains the question of how the BINN uncertainties change with the size of the training sample.

In Fig. 7 we show how the BINN predictions for the density and uncertainty change if we vary the training sample size from ${10}\mathrm{k}$ to $1\mathrm{M}$ events. Note that for all toy models, including the kicker ramp in Sec. C.1.1, we use ${300}\mathrm{k}$ training events. For the small ${10}\mathrm{k}$ training sample, we see that the instability of the BINN density becomes visible even for our reduced $x$-range. The peak-dip pattern of the absolute uncertainty, characteristic for the kicker ramp, is also hardly visible, indicating that the network has not learned the density well enough to determine its shape. Finally, the variation of the predictive density explodes for $x > {0.4}$, confirming the picture of a poorly trained BINN. As a rough estimate, the absolute uncertainty at $x = {0.5}$ with a density value $p\left( {x, y}\right) = {0.75}$ ranges around ${\sigma }_{\text{pred }} = {0.11}\ldots {0.15}$.

For ${100}\mathrm{k}$ training events we see that the patterns discussed in Sec. C.1.1 begin to form. The density and uncertainty encoded in the network are stable, and the peak-dip structure with a minimum around $x = 2/3$ becomes visible. As a rough estimate we can read off ${\sigma }_{\text{pred }}\left( {0.5}\right) \approx {0.06} \pm {0.03}$. For $1\mathrm{M}$ training events the picture improves even more and the network extracts a stable uncertainty of ${\sigma }_{\text{pred }}\left( {0.5}\right) \approx {0.03} \pm {0.01}$. Crucially, the dip around $x \approx 2/3$ remains, and even compared to Fig.
4 with its ${300}\mathrm{k}$ training events the density and uncertainty at the upper phase space boundary are much better controlled.

![01963e33-03ea-7360-ae83-3e032c74762f_10_175_278_1387_431_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_10_175_278_1387_431_0.jpg)

Figure 6. Cartesian density and predictive uncertainty distribution for the Gaussian ring. In the left panel the density and uncertainty are averaged over several lines with constant $\phi$. In the central and right panels, the uncertainty band on ${\sigma }_{\text{pred }}$ is given by their variation. The green curve represents a two-parameter fit to (13).

![01963e33-03ea-7360-ae83-3e032c74762f_10_175_1047_1390_835_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_10_175_1047_1390_835_0.jpg)

Figure 7. Dependence of the density (upper) and absolute uncertainty (lower) on the training statistics for the kicker ramp. We illustrate BINNs trained on ${10}\mathrm{k},{100}\mathrm{k}$, and $1\mathrm{M}$ events (left to right), to be compared to the ${300}\mathrm{k}$ events used for Fig. 4. Our training routine ensures that all models receive the same number of weight updates, regardless of the training set size.

Finally, we briefly comment on a frequentist interpretation of the BINN output. We know from simpler Bayesian networks (Bollweg et al., 2020; Kasieczka et al., 2020) that it is possible to reproduce the predictive uncertainty using an ensemble of deterministic networks with the same architecture. However, from those studies we also know that our class of Bayesian networks has a very efficient built-in regularization, so this kind of comparison is not trivial. For the BINN results shown in this paper we find that the detailed patterns in the absolute uncertainties are extracted by the Bayesian network much more effectively than they would be by ensembles of deterministic INNs.
For naive implementations with a similar network size and no fine-tuned regularization these patterns are somewhat harder to extract. On the other hand, in stable regions without distinctive patterns the spread of ensembles of deterministic networks reproduces the predictive uncertainty reported by the BINN.

#### C.1.4. Marginalizing Phase Space

Before we move to a more LHC-related problem, we need to study how the BINN provides uncertainties for marginalized kinematic distributions. In all three toy examples the two-dimensional phase space consists of one physical and one trivial direction. For instance, the kicker ramp in Sec. C.1.1 has a quadratic physical direction, and in a typical phase space problem we would integrate out the trivial, constant direction and show a one-dimensional kinematic distribution. From our effectively one-dimensional uncertainty extraction we know that the absolute uncertainty has a characteristic maximum-minimum combination, as seen in the lower-right panel of Fig. 4.

To compute the uncertainty for a properly marginalized phase space direction, we remind ourselves how the BINN computes the density and the predictive uncertainty by sampling over the weights,

$$
p\left( {x, y}\right) = \int {d\theta q}\left( \theta \right) p\left( {x, y \mid \theta }\right)
$$

$$
{\sigma }_{\text{pred }}^{2}\left( {x, y}\right) = \int {d\theta q}\left( \theta \right) {\left\lbrack p\left( x, y \mid \theta \right) - p\left( x, y\right) \right\rbrack }^{2}.
\tag{14} +$$ + +If we integrate over the $y$ -direction, the marginalized density is defined as + +$$ +p\left( x\right) = \int {dyp}\left( {x, y}\right) +$$ + +$$ += \int {dyd\theta q}\left( \theta \right) p\left( {x, y \mid \theta }\right) +$$ + +$$ += \int {d\theta q}\left( \theta \right) \int {dyp}\left( {x, y \mid \theta }\right) +$$ + +$$ +\equiv \int {d\theta q}\left( \theta \right) p\left( {x \mid \theta }\right) , \tag{15} +$$ + +which implicitly defines $p\left( {x \mid \theta }\right)$ in the last step, notably without providing us with a way to extract it in a closed form. The key step in this definition is that we exchange the order of the $y$ and $\theta$ integrations. Nevertheless, with this definition at hand, we can define the uncertainty on the marginalized distribution as + +$$ +{\sigma }_{\text{pred }}^{2}\left( x\right) = \int {d\theta q}\left( \theta \right) {\left\lbrack p\left( x \mid \theta \right) - p\left( x\right) \right\rbrack }^{2}. \tag{16} +$$ + +We illustrate this construction with a trivial $p\left( {x, y}\right) =$ $p\left( {x,{y}_{0}}\right)$ , where we can replace the trivial $y$ -dependence by a fixed choice $y = {y}_{0}$ just like for the wedge and kicker ramps. Here we find, modulo a normalization constant in the $y$ -integration + +$$ +{\sigma }_{\text{pred }}^{2}\left( x\right) = \int {d\theta q}\left( \theta \right) {\left\lbrack p\left( x \mid \theta \right) - p\left( x\right) \right\rbrack }^{2} +$$ + +$$ += \int {d\theta q}\left( \theta \right) \int {dy}{\left\lbrack p\left( x,{y}_{0} \mid \theta \right) - p\left( x,{y}_{0}\right) \right\rbrack }^{2} +$$ + +$$ += \int {dyd\theta q}\left( \theta \right) {\left\lbrack p\left( x,{y}_{0} \mid \theta \right) - p\left( x,{y}_{0}\right) \right\rbrack }^{2} +$$ + +$$ += \int {dy}{\sigma }_{\text{pred }}^{2}\left( {x,{y}_{0}}\right) = {\sigma }_{\text{pred }}^{2}\left( {x,{y}_{0}}\right) . 
\tag{17}
$$

Adding a trivial $y$-direction does not affect the predictive uncertainty in the physical $x$-direction.

As mentioned above, unlike for the joint density $p\left( {x, y \mid \theta }\right)$, we do not know the closed form of the marginal distributions $p\left( x\right)$ or $p\left( {x \mid \theta }\right)$. Instead, we can approximate the marginalized uncertainties through a combined sampling in $y$ and $\theta$. We start with one set of weights ${\theta }_{i}$ from the weight distributions, based on one random number per INN weight. We then sample $N$ points in the latent space, ${z}_{j}$, and compute $N$ phase space points ${x}_{j}$ using the BINN configuration ${\theta }_{i}$. We then bin the wanted phase space direction $x$ and approximate $p\left( {x \mid {\theta }_{i}}\right)$ by a histogram. We repeat this procedure $i = 1\ldots M$ times to extract $M$ histograms with identical binning. This allows us to compute a mean and a standard deviation from the $M$ histograms to approximate $p\left( x\right)$ and ${\sigma }_{\text{pred }}\left( x\right)$. The approximation of ${\sigma }_{\text{pred }}$ should be an overestimate, because it includes the statistical uncertainty related to a finite number of samples per bin. For $N \gg 1$ this contribution should become negligible. With this procedure we effectively sample $N \times M$ points in phase space.

Following (15), we can also fix the phase space points, so instead of sampling a new set of phase space points for each weight sample, we use the same phase space points for each weight sampling. This should stabilize the statistical fluctuations, with the drawback of relying on an effective number of only $N$ phase space points. Both approaches lead to the same ${\sigma }_{\text{pred }}$ for sufficiently large $N$, which we typically set to ${10}^{5}\ldots {10}^{6}$. For the Bayesian weights we find stable results for $M = {30}\ldots {50}$.

In Fig.
8 we show the marginalized densities and predictive uncertainties for the kicker ramp. In the $y$-direction the density and the predictive uncertainty show the expected flat behavior. The only exception are the phase space boundaries, where the density starts to deviate slightly from the training data and the uncertainty correctly reflects that instability. In the $x$-direction, the marginalized density and uncertainty can be compared to their one-dimensional counterparts in Fig. 4. While we expect the same peak-dip structure, the key question is whether the numerical values for ${\sigma }_{\text{pred }}\left( x\right)$ change. If the network learns the $y$-direction as uncorrelated additional data, the marginalized uncertainty should decrease through a larger effective training sample. This is what we typically see for Monte Carlo simulations, where a combination of bins in an unobserved direction leads to the usual reduced statistical uncertainty. On the other hand, if the network learns that the $y$-direction is flat, then adding events in this direction will have no effect on the uncertainty of the marginalized distribution. This would correspond to a set of fully correlated bins, where a combination will not lead to any improvement in the uncertainty. In Fig. 8 we see that the ${\sigma }_{\text{pred }}\left( x\right)$ values on the peak, in the dip, and towards the upper end of the phase space boundary hardly change from the one-dimensional results in Fig. 4. This confirms our general observation that the (B)INN learns a functional form of the density in both directions, in close analogy to a fit. It also means that the uncertainty from the generative network training is not described by the simple statistical scaling observed for simpler networks (Bollweg et al., 2020; Kasieczka et al., 2020) and instead points towards a GANplification-like (Butter et al., 2020a) pattern.
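The histogram-based estimate of $p(x)$ and $\sigma_{\text{pred}}(x)$ described in Sec. C.1.4 can be sketched as follows. Here the BINN is replaced by a hypothetical closed-form stand-in, a ramp whose exponent is drawn once per "weight sample" to mimic draws from $q(\theta)$; the values of $N$, $M$, and the binning are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Histogram-based approximation of p(x) and sigma_pred(x): M weight
# samples, N generated points each, identical binning. The BINN is
# replaced by a hypothetical stand-in, a ramp p(x) = (k+1) x^k whose
# exponent k is drawn once per "weight sample" to mimic q(theta).
N, M = 100_000, 40
bins = np.linspace(0.0, 1.0, 21)
width = bins[1] - bins[0]

hists = np.empty((M, len(bins) - 1))
for i in range(M):
    k = rng.normal(1.0, 0.05)               # one "weight sample"
    x = rng.random(N) ** (1.0 / (k + 1.0))  # inverse CDF of (k+1) x^k
    counts, _ = np.histogram(x, bins=bins)
    hists[i] = counts / (N * width)

p_marg = hists.mean(axis=0)     # approximates p(x)
sigma_marg = hists.std(axis=0)  # approximates sigma_pred(x)
assert abs(p_marg.sum() * width - 1.0) < 1e-9
```

As discussed above, for $N \gg 1$ the per-bin sampling noise becomes negligible and the spread across the $M$ histograms is dominated by the variation of the "weights".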
![01963e33-03ea-7360-ae83-3e032c74762f_12_329_202_1076_835_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_12_329_202_1076_835_0.jpg)

Figure 8. Marginalized densities and predictive uncertainties for the kicker ramp. Instead of the true distribution we now show the training data as a reference, to illustrate possible limitations. We use ${10}\mathrm{M}$ phase space points to guarantee a stable prediction.

#### C.2. LHC Experiment

#### C.2.1. Further Discussion

In Fig. 9 we show a set of absolute and relative uncertainties from the BINN. The strong peak combined with a narrow tail in the ${E}_{e}$ distribution shows two interesting features. Just above the peak the absolute uncertainty drops more rapidly than expected, a feature shared by the wedge and kicker ramps at their respective upper phase space boundaries. The shoulder around ${E}_{e} \approx {280}\mathrm{{GeV}}$ indicates that for a while the predictive uncertainty follows the increasingly poor modelling of the phase space density by the BINN, up to a point where the network stops following the truth curve altogether and the predictive uncertainty is limited by the expressive power of the network. Unlike the absolute uncertainty, the relative uncertainty keeps growing for increasing values of ${E}_{e}$. This behavior illustrates that in phase space regions where the BINN starts failing altogether, we cannot trust the predictive uncertainty either, but we do see a pattern in the intermediate phase space regime where the network starts failing.

![01963e33-03ea-7360-ae83-3e032c74762f_13_159_183_1396_849_0.jpg](images/01963e33-03ea-7360-ae83-3e032c74762f_13_159_183_1396_849_0.jpg)

Figure 9. Absolute and relative uncertainties as a function of some of the kinematic Drell-Yan observables shown in Fig. 2.

The second kinematic quantity we select is the (unobservable) $x$-component of the momentum. It forms a relatively flat central plateau with sharp cliffs on each side.
Any network will have trouble learning the exact shape of such sharp phase space patterns. The BINN keeps track of this: the absolute and the relative predictive uncertainties indeed explode. The only difference between the two is that the (learned) density at the foot of the plateau drops even faster than the learned absolute uncertainty, so their ratio keeps growing.

Finally, we show the result for the Breit-Wigner mass peak, the physical counterpart of the Gaussian ring model of Sec. C.1.2. Indeed, we see exactly the same pattern, namely a distinctive minimum in the predictive uncertainty right on the mass peak. This pattern can be explained by the network learning the general form of a mass peak and then adjusting the mean and the width of this peak. Learning the peak position leads to a minimum of the uncertainty right at the peak, and learning the width brings up two maxima on the shoulders of the mass peak. In combination, Figs. 2 and 9 clearly show that the BINN traces uncertainties in generated LHC events just as for the toy models. Again, some distinctive patterns in the predictive uncertainty can be explained by the way the network learns the phase space density.

#### C.2.2. Further Details

The hyperparameters and architecture for the LHC experiment are given in Table 2.

Table 2. Hyper-parameters for the Drell-Yan data set, implemented in PyTorch (v1.4.0) (Paszke et al., 2019).

<table>
| Parameter | Flow |
| --- | --- |
| Hidden layers (per block) | 2 |
| Units per hidden layer | 64 |
| Batch size | 512 |
| Epochs | 500 |
| Trainable weights | $\sim {182}\mathrm{k}$ |
| Number of training events | $\sim 1\mathrm{M}$ |
| Optimizer | Adam |
| $\left( {\alpha ,{\beta }_{1},{\beta }_{2}}\right)$ | $\left( {1 \times {10}^{-3},{0.9},{0.999}}\right)$ |
| Coupling layers | 20 |
| Prior width | 1 |
+ diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Sl09p1eK9D/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Sl09p1eK9D/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..6bf451981eda6bb9ea387e10a1c7986164452caf --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/Sl09p1eK9D/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,167 @@ +§ UNDERSTANDING EVENT-GENERATION NETWORKS VIA UNCERTAINTIES + +§ ANONYMOUS AUTHORS ${}^{1}$ + +§ ABSTRACT + +Generative models and normalizing flow based models have made great progress in recent years both in their theoretical development as well as in a growing number of applications. As such models become applied more and more with it increases the desire for predictive uncertainty to know when to trust the underlying model. In this extended abstract we target the application area of Large Hadron Collider (LHC) simulations and show how to extend normalizing flows with probabilistic Bayesian Neural Network based transformations to model LHC events with uncertainties. + +§ 1. INTRODUCTION + +The role of first-principle simulations in our understanding of large data sets makes LHC physics stand out in comparison to many other areas of science. Three aspects define the application of modern big data methods in this field: (i) ATLAS and CMS deliver proper big data with excellent control over uncertainties, (ii) perturbative quantum field theory provides consistent precision predictions, and (iii) fast and reliable precision simulations generate events from first principles. The fact that experiments, field theory calculations, and simulations control their uncertainties implies that we can work with a complete uncertainty budget, including statistical, systematic, and theory uncertainties. 
To sustain this approach at the upcoming HL-LHC, with a data set more than 25 times the size of the current Run 2 data set, the theory challenge is to provide faster simulations while keeping full control of the uncertainties at the per-cent level and better.

In recent years it has been shown that modern machine learning, and especially generative models, can improve LHC event simulations in many ways (Butter & Plehn, 2020). See Section A in the appendix for an overview of recent work and the generative models used.

The problem with these applications is that we know little about how these generative networks work, and what the uncertainty on the generative network output is. As we will see, these two questions are closely related.

In general, we can track statistical and systematic uncertainties in neural network outputs with Bayesian neural networks (BNNs) (MacKay, 1995; Neal, 1995; Gal, 2016; Kendall & Gal, 2017). Such networks have been used in particle physics for a long time (Bhat & Prosper, 2005; Saucedo, 2007; Xu et al., 2008). For the LHC they have been proposed to extract uncertainties in jet classification (Bollweg et al., 2020) and jet calibration (Kasieczka et al., 2020). They can cover essentially all uncertainties related to statistical, systematic, and structural limitations of the training sample (Nachman, 2020). Similar ideas can be used as part of ensemble techniques (Araz & Spannowsky, 2021).

We propose a Bayesian invertible neural network (BINN) which combines the flexibility of normalizing flows with BNNs, and demonstrate it on a simple 2d toy example and finally a semi-realistic LHC example. See the appendix for an extended discussion of these experiments and further results.

§ 2. GENERATIVE NETWORKS WITH UNCERTAINTIES

We start by reminding ourselves that we often assume that a generative model has learned a phase-space density perfectly, so the only remaining source of uncertainty is the statistics of the generated sample binned in phase space.
However, we know that such an assumption is not realistic (Bollweg et al., 2020; Kasieczka et al., 2020), and we need to estimate the effect of statistical or systematic limitations of the training data. The problem with such a statistical limitation is that it is turned into a systematic shortcoming of the generative model (Butter et al., 2019): once we generate a new sample, the information on the training data is lost, and the only way we might recover it is by training many networks and comparing their outcomes. For most applications this is not a realistic or economical option, so we will show how an alternative solution could look.

Invertible Neural Networks. To model complex densities such as LHC phase space distributions, we can employ normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016; Kingma & Dhariwal, 2018; Kobyzev et al., 2019). They use the fact that we can transform a random variable $z \sim {p}_{Z}\left( z\right)$ using a bijective map $G : z \rightarrow x$ to a random variable $x = G\left( z\right)$ with the density

$$
{p}_{X}\left( x\right) = {p}_{Z}\left( z\right) {\left| \det \frac{\partial G\left( z\right) }{\partial z}\right| }^{-1} = {p}_{Z}\left( {\bar{G}\left( x\right) }\right) \left| {\det \frac{\partial \bar{G}\left( x\right) }{\partial x}}\right| , \tag{1}
$$

where we defined $\bar{G} \mathrel{\text{ := }} {G}^{-1}$. Given a sample $z$ from the base distribution ${p}_{Z}$, we can use the map $G$ to generate a sample from the target distribution going in the forward direction, and vice versa with a sample $x$ from the target.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.
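The change of variables in (1) can be made concrete with a one-dimensional affine bijection $G(z) = az + b$ and a standard-normal base density; a minimal NumPy sketch, where the values of $a$ and $b$ are arbitrary illustrative choices:

```python
import numpy as np

# Change of variables (1) for the bijection G(z) = a*z + b with a
# standard-normal base density; a and b are arbitrary illustrative values.
# The inverse is Gbar(x) = (x - b)/a with |det dGbar/dx| = 1/|a|.
a, b = 2.0, 1.0

def p_z(z):
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

def p_x(x):
    return p_z((x - b) / a) / abs(a)  # base density times Jacobian factor

# Forward direction: sample z ~ p_Z and push it through G.
rng = np.random.default_rng(3)
samples = a * rng.standard_normal(500_000) + b

# The transformed density is properly normalized ...
grid = np.linspace(-7.0, 9.0, 1601)
vals = p_x(grid)
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid))
assert abs(integral - 1.0) < 1e-3
# ... and consistent with the pushed-forward samples.
assert abs(samples.mean() - b) < 0.02
```

The same bookkeeping of inverse map plus Jacobian determinant carries over to the high-dimensional, learned maps $G$ discussed next.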
For this to be a useful approach, we require the base distribution ${p}_{Z}$ to be simple enough to allow for effective sample generation, $G$ to be flexible enough for a non-trivial transformation, and its Jacobian determinant to be efficiently computable. With these constraints, $G$ gives us a powerful generative pipeline to model the phase space density ${p}_{X}$. To fulfill them we choose the base distribution to be a multivariate Gaussian with mean zero and an identity matrix as the covariance, and rely on the real non-volume preserving flow (Dinh et al., 2016) in the invertible neural network (INN) formulation by Ardizzone et al. (2018) for $G$.

An INN composes multiple coupling layers, each with the following structure. The input vector $z$ into a layer is split in half, $z = \left( {{z}_{1},{z}_{2}}\right)$, allowing us to compute the output $x = \left( {{x}_{1},{x}_{2}}\right)$ of the layer as

$$
\left( \begin{array}{l} {x}_{1} \\ {x}_{2} \end{array}\right) = \left( \begin{array}{l} {z}_{1} \odot {e}^{{s}_{2}\left( {z}_{2}\right) } + {t}_{2}\left( {z}_{2}\right) \\ {z}_{2} \odot {e}^{{s}_{1}\left( {x}_{1}\right) } + {t}_{1}\left( {x}_{1}\right) \end{array}\right) , \tag{2}
$$

where ${s}_{i},{t}_{i}\left( {i = 1,2}\right)$ are small multi-layer perceptrons (MLPs), and $\odot$ is the element-wise product. This structure allows for both easy invertibility and an easy Jacobian. Throughout, we refer to their weights jointly as $\theta$.

Bayesian INN. The invertible neural network provides us with a powerful generative model of the underlying data distribution. However, it lacks a mechanism to account for our uncertainty in the transformation parameters $\theta$ themselves. To model this uncertainty, we switch from deterministic to probabilistic transformations, replacing the deterministic sub-networks ${s}_{1,2}$ and ${t}_{1,2}$ in each of the coupling layers with Bayesian neural nets.
Placing priors over their weights, we obtain the generative pipeline of our BINN +

$$
\theta \sim p\left( \theta \right) ,
$$

$$
x \mid \theta \sim {p}_{X}\left( {x \mid \theta }\right) = {p}_{Z}\left( {\bar{G}\left( {x;\theta }\right) }\right) \left| {\det \frac{\partial \bar{G}\left( {x;\theta }\right) }{\partial x}}\right| . \tag{3}
$$

Given our set of observations $\mathcal{D}$ we can rely on variational inference (Blei et al., 2017) to approximate the intractable posterior $p\left( {\theta \mid \mathcal{D}}\right)$ with a mean-field Gaussian as the variational posterior ${q}_{\phi }\left( \theta \right)$ . Learning then consists of maximizing the evidence lower bound (ELBO) +

$$
\mathcal{L} = \mathop{\sum }\limits_{{n = 1}}^{N}{\mathbb{E}}_{{q}_{\phi }\left( \theta \right) }\left\lbrack {\log {p}_{X}\left( {{x}_{n} \mid \theta }\right) }\right\rbrack - \operatorname{KL}\left( {{q}_{\phi }\left( \theta \right) ,p\left( \theta \right) }\right)
$$

$$
= \mathop{\sum }\limits_{{n = 1}}^{N}{\mathbb{E}}_{{q}_{\phi }\left( \theta \right) }\left\lbrack {\log {p}_{Z}\left( {\bar{G}\left( {{x}_{n};\theta }\right) }\right) + \log \left| {\det \frac{\partial \bar{G}\left( {{x}_{n};\theta }\right) }{\partial {x}_{n}}}\right| }\right\rbrack
$$

$$
- \mathrm{{KL}}\left( {{q}_{\phi }\left( \theta \right) ,p\left( \theta \right) }\right) ,
$$

via stochastic gradient descent on the parameters $\phi$ . By design, the log-likelihood, the log-determinant, and the Kullback-Leibler (KL) divergence can all be computed easily, and we can approximate the sum and the expectation with a minibatch and with weight samples, respectively. +

## 3. Experiments +

Before we tackle a semi-realistic LHC setup, we first study the behavior of BINNs on a set of toy examples: distributions over a minimal two-dimensional parameter space in which the density is flat along one dimension.
Aside from the fact that these toy examples illustrate that the BINN actually constructs a meaningful uncertainty distribution, we will use the combination of density and uncertainty maps to analyse how an INN actually learns a density distributions. We will see that the INN describes the density map in the sense of a few-parameter fit, rather than numerically encoding patches over the parameter space independently. We discuss one of the three toy experiments here and refer the reader for the other two to the appendix. We there also provide details on the architecture and hyperparameters. + +§ 3.1.TOY EVENTS WITH UNCERTAINTIES: THE WEDGE RAMP + +Our first toy example is a two-dimensional ramp distribution, linear in one direction and flat in the other, + +$$ +p\left( {x,y}\right) = \operatorname{Linear}\left( {x \in \left\lbrack {0,1}\right\rbrack }\right) \cdot \operatorname{Const}\left( {y \in \left\lbrack {0,1}\right\rbrack }\right) = x \cdot 2. +$$ + +The second factor ensures that the distribution $p\left( {x,y}\right)$ is normalized to one, and the network output is shown in Fig. 1 (upper). The output are unweighted events in the two-dimensional parameters space,(x, y). We show one-dimensional distributions after marginalizing over the unobserved direction and find that the network reproduces the equation well. In the bottom row we include the predictive uncertainty given by the BINN. For this purpose we train a network on the two-dimensional parameter space and evaluate it for a set of points with $x \in \left\lbrack {0,1}\right\rbrack$ and a constant $y$ -value. In the left panel we indicate the predictive uncertainty as an error bar around the density estimate. 
Throughout the paper we always remove the phase space boundaries, because we know that the network is unstable there, and the uncertainties explode just as we expect. The relative uncertainty grows for small values of $x$ and hence small values of $p\left( {x,y}\right)$ , and it covers the deviation of the extracted density from the true density well. These features are common to all our experiments. In the central and right panel of Fig. 1 we show the relative and absolute predictive uncertainties. The error bar indicates how much ${\sigma }_{\text{ pred }}$ varies for different choices of $y$ . We compute it as the standard deviation of different values of ${\sigma }_{\text{ pred }}$ , after confirming that the central values agree within this range. As expected, the relative uncertainty decreases towards larger $x$ . However, the absolute uncertainty shows a distinctive minimum in ${\sigma }_{\text{ pred }}$ around $x \approx {0.45}$ . This minimum is a common feature in all our training rounds, so we need to explain it. +

Figure 1. (upper) Two-dimensional and marginal densities for the linear wedge ramp. (lower) Density and predictive uncertainty distribution for the wedge ramp (fit legend: ${\Delta a} = {0.09}$, $\Delta {x}_{\max } = {0.01}$). In the left panel the density and uncertainty are averaged over several lines with constant $y$ . In the central and right panels, the uncertainty band on ${\sigma }_{\text{ pred }}$ is given by their variation. The green curve represents a two-parameter fit to (5).
+

To understand this non-trivial uncertainty distribution ${\sigma }_{\text{ pred }}\left( x\right)$ we focus on the non-trivial $x$ -coordinate and its linear behavior $p\left( x\right) = {ax} + b$ with $x \in \left\lbrack {0,1}\right\rbrack$ . As the model learns a density, we can remove $b$ by fixing the normalization, $p\left( x\right) = a\left( {x - {0.5}}\right) + 1$ . If we now assume that the network acts like a fit of $a$ , an assumption that will turn out to be useful, we can relate the uncertainty ${\Delta a}$ to an uncertainty in the density, +

$$
{\sigma }_{\text{ pred }} \equiv {\Delta p} \approx \left| {x - {0.5}}\right| {\Delta a}. \tag{4}
$$

The absolute value appears because the uncertainties are defined to be positive, as encoded in the usual quadratic error propagation. The uncertainty distribution has a minimum at $x = 1/2$ , close to the observed value in Fig. 1. +

The differences between the simple prediction in (4) and our numerical findings in Fig. 1 are that the predictive uncertainty is not symmetric and does not reach zero. To account for these effects we can expand our very simple ansatz to $p\left( x\right) = {ax} + b$ with $x \in \left\lbrack {{x}_{\min },{x}_{\max }}\right\rbrack$ . Using the normalization condition again we find +

$$
p\left( x\right) = {ax} + \frac{1 - \frac{a}{2}\left( {{x}_{\max }^{2} - {x}_{\min }^{2}}\right) }{{x}_{\max } - {x}_{\min }}.
$$

Again assuming a fit-like behavior of the flow network, we expect for the predictive uncertainty +

$$
{\sigma }_{\text{ pred }}^{2} \equiv {\left( \Delta p\right) }^{2} = {\left( x - \frac{1}{2}\right) }^{2}{\left( \Delta a\right) }^{2} + {\left( 1 + \frac{a}{2}\right) }^{2}{\left( \Delta {x}_{\max }\right) }^{2}
$$

$$
+ {\left( 1 - \frac{a}{2}\right) }^{2}{\left( \Delta {x}_{\min }\right) }^{2}\text{ . } \tag{5}
$$

Including ${x}_{\max }$ adds an $x$ -independent offset, and also accounting for ${x}_{\min }$ does not change the $x$ -dependence of the predictive uncertainty.
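The fit ansatz (5) can be checked numerically. A short sketch, using the slope $a = 2$ of the wedge ramp and the fit values $\Delta a = 0.09$, $\Delta x_{\max} = 0.01$ quoted in Fig. 1 ($\Delta x_{\min}$ below is an assumption chosen for illustration):

```python
import numpy as np

# Evaluate Eq. (5): a is the wedge-ramp slope, da and dxmax the fit values
# from Fig. 1; dxmin is an assumed illustrative value.
a, da, dxmax, dxmin = 2.0, 0.09, 0.01, 0.01

def sigma_pred(x):
    return np.sqrt((x - 0.5) ** 2 * da ** 2
                   + (1 + a / 2) ** 2 * dxmax ** 2
                   + (1 - a / 2) ** 2 * dxmin ** 2)

x = np.linspace(0.0, 1.0, 1001)
x_min_loc = x[np.argmin(sigma_pred(x))]
# Only the first term depends on x, so the minimum sits at x = 0.5 ...
assert abs(x_min_loc - 0.5) < 1e-6
# ... and the x_max (and x_min) offsets keep it strictly above zero.
assert sigma_pred(x).min() > 0
```

This reproduces the two qualitative features discussed above: a minimum near the center of the ramp and a non-vanishing floor from the boundary terms.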
The slight shift of the minimum and the asymmetry between the lower and upper boundaries in $x$ are not explained by this argument. We ascribe them to boundary effects, specifically the challenge for the network to describe the correct approach towards $p\left( x\right) \rightarrow 0$ . +

The green line in the lower panels of Fig. 1 gives a two-parameter fit of ${\Delta a}$ and $\Delta {x}_{\max }$ to the ${\sigma }_{\text{ pred }}$ distribution from the BINN. It indicates a hierarchy: the network extracts the $x$ -independent term with high precision, whereas the uncertainty on the slope $a$ is around $4\%$ . +

### 3.2. LHC Events with Uncertainties +

As a physics example we consider the Drell-Yan process +

$$
{pp} \rightarrow Z \rightarrow {e}^{ + }{e}^{ - }
$$

with its simple $2 \rightarrow 2$ phase space combined with the parton density. The training set consists of an unweighted set of +

Figure 2. Marginalized kinematic distributions for the Drell-Yan process.
We show the central prediction from the BINN and include the predictive uncertainty as the blue band. The red band indicates the statistical uncertainty of the training data per bin in the Gaussian limit. +

4-vectors simulated with MADGRAPH5 (Alwall et al., 2014) at ${13}\mathrm{{TeV}}$ collider energy with the NNPDF2.3 parton densities (Ball et al., 2013). We fix the masses of the final-state leptons and enforce momentum conservation in the transverse direction, which leaves us with a four-dimensional phase space. In our discussion we limit ourselves to a sufficiently large set of one-dimensional distributions. For these marginalized uncertainties we follow the procedure laid out in Sec. C.1.4 with 50 samples in the BINN-weight space. +

To start with, we show a set of generated kinematic distributions in Fig. 2. The positron energy features the expected strong peak from the $Z$ -resonance. Its sizeable tail towards larger energies is covered by the training data up to ${E}_{e} \approx {280}\mathrm{{GeV}}$ . The central value learned by the BINN becomes unstable at slightly lower values, around ${250}\mathrm{{GeV}}$ , as expected. The momentum component ${p}_{x}$ is not observable given the azimuthal symmetry of the detector, but its broad distribution is nevertheless reproduced correctly. The predictive uncertainty covers the slight deviations over the entire range. What is observable at the LHC is the transverse momentum of the outgoing leptons, with a distribution similar to the energy, just with the $Z$ -mass peak at the upper end of the distribution. Again, the predictive uncertainty determined by the BINN covers the slight deviations from the truth on the pole and in both tails. In the second row we show the ${p}_{z}$ component as an example of a strongly peaked distribution, similar to the Gaussian toy model in Sec. C.1.2.
+

While the energy of the lepton pair has a basic form similar to the individual energies, we also show the invariant mass of the electron-positron pair, which is described by the usual Breit-Wigner peak. It is well known that this intermediate resonance is especially hard to learn for a network, because it forms a narrow, highly correlated phase space structure. Going beyond the precision shown here would for instance require an additional MMD loss (as e.g. in Butter et al., 2019; Bellagente et al., 2020b). This resonance peak is the only distribution where the predictive uncertainty does not cover the deviation of the BINN density from the truth. This apparent failure corresponds to the fact that generative networks always overestimate the width and hence underestimate the height of this mass peak (Butter et al., 2019). This is an example of the network being limited by its expressive power in phase space resolution, generating an uncertainty which the Bayesian version cannot account for. See Sec. C.2.1 for further results. +

## 4. Conclusion +

Controlling the output of generative networks and quantifying their uncertainties is the main task for any application in LHC physics, be it in forward generation, inversion, or inference. We have proposed to use a Bayesian invertible neural network (BINN) to quantify the uncertainties from the network training for each generated event. For a series of two-dimensional toy models and an LHC-inspired application we have shown how the Bayesian setup indeed generates an uncertainty distribution, over the full phase space and over marginalized phase spaces. As expected, the learned uncertainty shrinks with improved training statistics. Our method can be trivially extended from unweighted to weighted events by adapting the simple loss.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/T4Wf0w2jcz/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/T4Wf0w2jcz/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..bf2a988f0acfff49dea8571fee822435fef40f9d --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/T4Wf0w2jcz/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,213 @@ +## Copula-Based Normalizing Flows +

Anonymous Authors ${}^{1}$ +

## Abstract +

Normalizing flows, which learn a distribution by transforming the data to samples from a Gaussian base distribution, have proven to be powerful density approximators. But their expressive power is limited by this choice of the base distribution. We therefore propose to generalize the base distribution to a more elaborate copula distribution to capture the properties of the target distribution more accurately. In a first empirical analysis, we demonstrate that this replacement can dramatically improve vanilla normalizing flows in terms of flexibility, stability, and effectiveness for heavy-tailed data. Our results suggest that the improvements are related to an increased local Lipschitz-stability of the learned flow. +

## 1. Introduction +

Normalizing Flows (NFs) are a recently developed class of density estimators, which aim to transform the distribution of interest ${P}_{\mathbf{x}}$ to some tractable base distribution ${P}_{\mathbf{z}}$ . Using the change of variables formula, this allows for exact likelihood computation, which is in contrast to other deep generative models such as Variational Autoencoders (Kingma & Welling, 2014) or Generative Adversarial Networks (Goodfellow et al., 2014). Impressive estimation results, especially in the field of natural image generation, have led to great popularity of these deep generative models.
Motivated by this success, much effort has been put into the development of new parametric classes of NFs to make them even more performant (Dinh et al., 2015; 2016; Chen et al., 2019; Papamakarios et al., 2017; Grathwohl et al., 2019). However, our theoretical understanding has not developed at the same speed, which, we claim, slows down further progress in the development of powerful NF architectures. +

Fortunately, very recent works have addressed theoretical limitations of these methods: One important limitation is the expressive power of NFs. Because they are based on the change of variables formula, the learned transformations are required to be diffeomorphisms. As a consequence, a NF with bounded Lipschitz constant is unable to map a distribution ${P}_{\mathbf{x}}$ to a lighter-tailed distribution ${P}_{\mathbf{z}}$ (Wiese et al., 2019; Jaini et al., 2020). Therefore, since vanilla NFs are implemented using an isotropic Gaussian base distribution, they are unable to learn heavy-tailed distributions, which are known to appear in natural image data (Zhu et al., 2014; Horn & Perona, 2017; Zhang et al., 2017). This is in conflict with observations recently made by Behrmann et al. (2021): Even though NFs are designed to be invertible, this property is often violated in practice. This is due to numerical inaccuracies, which are promoted by a large Bi-Lipschitz constant. Bounding the Bi-Lipschitz constant, however, conflicts with the previously mentioned theoretical requirements needed to avoid limiting the expressiveness of the NF. +

These findings emphasize the importance of choosing an appropriate base distribution for NFs. We therefore propose to generalize the isotropic Gaussian to a much broader class of distributions: the class of copula distributions.
Copulae are a well-known concept in statistical theory and are used to model complex distributions in finance, actuarial science, and extreme-value theory (Genest et al., 2009; Joe, 2014; Hofert et al., 2018). Broadly speaking, a copula is a function that couples marginal distributions into a multivariate joint distribution. Hence, copulae allow for flexible multivariate modeling with marginals stemming from a huge range of suitable classes. This allows, for example, formulating NF base distributions that combine heavy-tailed marginals, as proposed by Wiese et al. (2019), Jaini et al. (2020), and Alexanderson & Henter (2020), with light-tailed marginals. This paper presents a novel framework for choosing the base distribution of NFs building on the well-studied theory of copulae. A first empirical investigation demonstrates the benefits brought by this approach. Our experimental analysis on toy data reveals that using even the simplest copula model, the Independence Copula, we are able to outperform the vanilla NF approach, which uses an isotropic Gaussian base distribution. The resulting NF converges faster, is more robust, and achieves an overall better test performance. In addition, we show that the learned transformation has a better-behaved functional form in the sense of a more stable local Lipschitz continuity. +

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author. +

Preliminary work. Under review by INNF+ 2021. Do not distribute. +

---

## 2. Background +

In this section, we quickly review some background knowledge about NFs (Section 2.1), followed by an overview of copula theory (Section 2.2). +

### 2.1. Normalizing Flows +

Density estimation via NFs revolves around learning a diffeomorphic transformation ${T}_{\theta }$ that maps some unknown target distribution ${P}_{\mathbf{x}}$ to a known and tractable base distribution ${P}_{\mathbf{z}}$ .
The cornerstone of NFs is the change of variables formula +

$$
{p}_{\theta }\left( x\right) = {p}_{\mathbf{z}}\left( {{T}_{\theta }\left( x\right) }\right) \left| {\det {J}_{{T}_{\theta }}\left( x\right) }\right| \;\text{ for }x \in {\mathbb{R}}^{D}, \tag{1}
$$

which relates the evaluation of the estimated density ${p}_{\theta }$ of $\mathbf{x}$ to the evaluation of the base density ${p}_{\mathbf{z}}$ at ${T}_{\theta }\left( x\right)$ and of $\det {J}_{{T}_{\theta }}\left( x\right)$ . By composing simple diffeomorphic building blocks ${T}_{\theta } \mathrel{\text{:=}} {T}_{\theta , l} \circ \cdots \circ {T}_{\theta ,1}$ , we are able to obtain expressive transformations, while preserving the diffeomorphic properties and computational tractability of the building blocks. Due to the tractable PDF in (1), we are able to train the model via maximum likelihood estimation (MLE) +

$$
\widehat{\theta } \in \underset{\theta }{\arg \min }{\mathbb{E}}_{{p}_{\text{data }}}\left\lbrack {-\log {p}_{\theta }\left( \mathbf{x}\right) }\right\rbrack , \tag{2}
$$

where ${p}_{\text{data }}$ is the PDF of the empirical distribution of $\mathbf{x}$ . A comprehensive overview of NFs, including the exact parameterizations of certain flow models ${T}_{\theta }$ , computational aspects, and more, can be found in Kobyzev et al. (2020) and Papamakarios et al. (2021). +

### 2.2. Copulae +

A completely different approach to density estimation, which has mostly been left unrelated to NFs, is the idea of copulae. +

Definition 2.1 (Copula). A copula is a multivariate distribution with CDF $C : {\left\lbrack 0,1\right\rbrack }^{D} \rightarrow \left\lbrack {0,1}\right\rbrack$ that has standard uniform marginals, i.e. the marginals ${C}_{j}$ of $C$ satisfy ${C}_{j} \sim U\left\lbrack {0,1}\right\rbrack$ . +

The fundamental idea behind copula theory is that we can associate every distribution with a uniquely defined copula $C$ .
Vice versa, given $D$ marginal distributions, each copula $C$ defines a multivariate distribution with the given marginals. Formally, this is known as Sklar's Theorem (Sklar, 1959; 1996). +

Theorem 2.2 (Sklar's Theorem). Taken from Hofert et al. (2018). +

1. For any $D$ -dimensional CDF ${F}_{\mathbf{z}}$ with marginal CDFs ${F}_{{\mathbf{z}}_{1}},\ldots ,{F}_{{\mathbf{z}}_{D}}$ , there exists a copula $C$ such that +

$$
{F}_{\mathbf{z}}\left( z\right) = C\left( {{F}_{{\mathbf{z}}_{1}}\left( {z}_{1}\right) ,\ldots ,{F}_{{\mathbf{z}}_{D}}\left( {z}_{D}\right) }\right) \tag{3}
$$

for all $z \in {\mathbb{R}}^{D}$ . The copula is uniquely defined on $\mathcal{U} \mathrel{\text{:=}} \mathop{\prod }\limits_{{j = 1}}^{D}\operatorname{Im}\left( {F}_{{\mathbf{z}}_{j}}\right)$ , where $\operatorname{Im}\left( {F}_{{\mathbf{z}}_{j}}\right)$ is the image of ${F}_{{\mathbf{z}}_{j}}$ . For all $u \in \mathcal{U}$ it is given by +

$$
C\left( u\right) = {F}_{\mathbf{z}}\left( {{F}_{{\mathbf{z}}_{1}}^{ \leftarrow }\left( {u}_{1}\right) ,\ldots ,{F}_{{\mathbf{z}}_{D}}^{ \leftarrow }\left( {u}_{D}\right) }\right) , \tag{4}
$$

where ${F}_{{\mathbf{z}}_{j}}^{ \leftarrow }$ are the right-inverses of ${F}_{{\mathbf{z}}_{j}}$ . +

2. Conversely, given any $D$ -dimensional copula $C$ and marginal CDFs ${F}_{1},\ldots ,{F}_{D}$ , the function $F$ defined in (3) is a $D$ -dimensional CDF with marginals ${F}_{1},\ldots ,{F}_{D}$ . +

Part 1 of Sklar's Theorem finds much application in statistical dependency analysis (Joe, 2014). In contrast to classical dependency measures, such as Pearson correlation, a copula is a more flexible tool that allows decoupling the marginals from the dependency structure. Part 2 of Sklar's Theorem is of relevance for statistical modeling, and more precisely, for defining multivariate distributions.
Given marginal distributions, which are typically much easier to estimate than the full joint distribution, and a copula $C$ , we can "couple" the marginals and the dependency structure into a multivariate joint distribution. This perspective finds various applications in the context of finance and related disciplines that need to take heavy tails and tail dependencies into account; see Genest et al. (2009) for an overview. In Section A of the Appendix we give some illustrative examples and further details about properties of copula distributions. +

If $C$ and the marginal CDFs ${F}_{{\mathbf{z}}_{1}},\ldots ,{F}_{{\mathbf{z}}_{D}}$ have PDFs $c,{p}_{{\mathbf{z}}_{1}},\ldots ,{p}_{{\mathbf{z}}_{D}}$ , respectively, we can write the PDF of $\mathbf{z}$ as +

$$
{p}_{\mathbf{z}}\left( z\right) = c\left( {{F}_{{\mathbf{z}}_{1}}\left( {z}_{1}\right) ,\ldots ,{F}_{{\mathbf{z}}_{D}}\left( {z}_{D}\right) }\right) \mathop{\prod }\limits_{{j = 1}}^{D}{p}_{{\mathbf{z}}_{j}}\left( {z}_{j}\right) . \tag{5}
$$

## 3. NFs With Copula-Based Distributions +

In this paper, we propose to employ copulae to model a flexible yet appropriate base distribution, with the goal of obtaining a NF that overcomes the limitations of NFs discussed in Section 1. We expect to obtain powerful and robust PDF approximators by combining the properties of theoretically sound copulae (see for instance Chapter 8 in Joe (2014)) with NFs, which allow for the estimation of complex densities. +

### 3.1. A General Framework +

We propose to replace the isotropic Gaussian base distribution in the vanilla NF framework by a more flexible copula distribution. Importantly, we want to learn a base distribution that is able to represent the tail behavior of the distribution of $\mathbf{x}$ . For training a NF with a copula base distribution we build on the fact that we can write the PDF of the latent variables as in (5).
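Written out for a concrete choice, (5) becomes a tractable log-density. A minimal sketch with a Gaussian copula and Student-t marginals, where the correlation and degrees of freedom are illustrative assumptions, not the settings used in the experiments:

```python
import numpy as np
from scipy import stats

# Gaussian copula density on [0,1]^2: map u to the standard-normal scale and
# take the ratio of the correlated joint density to the independent one.
def gaussian_copula_logpdf(u, rho):
    q = stats.norm.ppf(u)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    joint = stats.multivariate_normal(cov=cov).logpdf(q)
    return joint - stats.norm.logpdf(q).sum(axis=-1)

# Illustrative Student-t marginals (degrees of freedom are assumptions).
marginals = [stats.t(df=2), stats.t(df=5)]

# Eq. (5): log p_z(z) = log c(F_1(z_1), F_2(z_2)) + sum_j log p_j(z_j).
def log_p_z(z, rho):
    u = np.column_stack([m.cdf(z[:, j]) for j, m in enumerate(marginals)])
    log_marg = sum(m.logpdf(z[:, j]) for j, m in enumerate(marginals))
    return gaussian_copula_logpdf(u, rho) + log_marg

z = np.array([[0.3, -1.2], [2.0, 0.5], [-4.0, 3.0]])
# With rho = 0 the Gaussian copula is the Independence Copula (c = 1),
# so Eq. (5) collapses to the product of the marginal densities.
indep = marginals[0].logpdf(z[:, 0]) + marginals[1].logpdf(z[:, 1])
assert np.allclose(log_p_z(z, rho=0.0), indep)
```

Any base distribution of this form keeps the NF objective tractable, since only the extra copula log-density is added to the usual sum of marginal log-densities.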
This requires two estimation steps: First, we need to estimate the marginal distributions ${F}_{{\mathbf{z}}_{1}},\ldots ,{F}_{{\mathbf{z}}_{D}}$ , which can further be used to calculate the marginal densities ${p}_{{\mathbf{z}}_{1}},\ldots ,{p}_{{\mathbf{z}}_{D}}$ . Second, we need to estimate the copula density $c$ . A popular approach for estimating the density in (5) is the method of inference functions for margins (IFM) (Joe & Xu, 1996), which first estimates the marginals using MLE, and then employs these marginals to estimate the copula, again using MLE. +

It is important to note that, in contrast to standard applications of copula theory, we do not aim at estimating the full data-generating distribution based on (5). Instead, following the investigations by Alexanderson & Henter (2020) and Jaini et al. (2020), our goal is to capture the tail behavior of $\mathbf{x}$ . Hence, we propose to learn surrogate marginal distributions ${\mathbf{z}}_{1},\ldots ,{\mathbf{z}}_{D}$ that are able to represent the tailedness of the marginals ${\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{D}$ . By combining these marginals with some simple copula structure, such as the Gaussian Copula or the Independence Copula (see (6) below), we are able to create a joint distribution that represents the marginal tail behavior of $\mathbf{x}$ . +

The proposed adjustment can be applied to any existing NF architecture as long as (5) remains tractable. However, as the main goal of the base distribution is not to fully estimate the target but to represent the tail behavior of $\mathbf{x}$ , we can restrict ourselves to tractable parametric marginal distributions and copulae. +

### 3.2. Experimental Analysis +

In this section, we investigate the benefits of the proposed approach by analyzing a toy problem. In the following experiments, we employ the framework proposed in Section 3.1 using the simplest copula, the Independence Copula, i.e.
we consider a base distribution $\mathbf{z}$ with PDF +

$$
{p}_{\mathbf{z}}\left( z\right) = \mathop{\prod }\limits_{{j = 1}}^{D}{p}_{{\mathbf{z}}_{j}}\left( {z}_{j}\right) ,\;z \in \mathop{\prod }\limits_{{j = 1}}^{D}\operatorname{supp}\left( {\mathbf{z}}_{j}\right) . \tag{6}
$$

Note that by plugging Gaussian marginals into (6), we would obtain the vanilla NF. We consider a training set generated from a 2-dimensional heavy-tailed distribution, ${}^{1}$ which has standardized t-distributed marginals ${\mathbf{x}}_{1},{\mathbf{x}}_{2} \sim {t}_{2}\left( {0,1}\right)$ with 2 degrees of freedom. The corresponding copula is a Gumbel Copula with parameter $\rho = {2.5}$ , i.e. +

$$
C\left( u\right) \mathrel{\text{:=}} \exp \left( {-{\left( {\left( -\log \left( {u}_{1}\right) \right) }^{\rho } + {\left( -\log \left( {u}_{2}\right) \right) }^{\rho }\right) }^{1/\rho }}\right) .
$$

As a proof of concept we compare the estimation of this +

![01963e38-f474-750e-a6f9-5659e896345e_2_917_198_659_299_0.jpg](images/01963e38-f474-750e-a6f9-5659e896345e_2_917_198_659_299_0.jpg) +

Figure 1. Mean training and test loss over 100 trials for NFs with different base distributions: normal (blue, solid), correctTails (orange, dashed), correctFamily (green, dotted), and exactMarginals (red, dashdotted). The shaded area depicts the ${95}\%$ confidence interval, which was computed using bootstrapping. We excluded normal runs that achieved a final loss larger than 25, which happened in 17 out of 100 runs. +

heavy-tailed distribution using a ${\mathrm{{NF}}}^{2}$ with an isotropic Gaussian base distribution, and with 3 different heavy-tailed base distributions constructed via (6). We consider the following heavy-tailed marginals: +

1. Laplace(0,4) and ${t}_{5}\left( {0,2}\right)$ . We call this case correctTails because both marginals are heavy-tailed; +

2. ${t}_{5}\left( {0,1}\right)$ and ${t}_{5}\left( {0,1}\right)$ .
We call this case correctFamily since both marginals stem from the same parametric class as the exact marginals; +

3. ${t}_{2}\left( {0,1}\right)$ and ${t}_{2}\left( {0,1}\right)$ . We call this case exactMarginals. +

Samples from the target distribution and from the different base distributions are visualized in Figures 6 and 7 in the Appendix. +

Training and test loss In Figure 1, we plot the average training and test performance over 100 trials. It is apparent that training using a base distribution with the correct type of tail behavior is beneficial. First of all, we observe a significant gap between the test performance of the vanilla NF and the NFs with a heavy-tailed base distribution. Notice that in Figure 1 we excluded all runs with a final test loss above 25, which happened in 17 of the normal runs and not once in the other cases. Furthermore, we clearly observe a much faster convergence and a more stable training procedure. The fluctuations and instabilities in the vanilla $\mathrm{{NF}}$ are due to tail samples that have a massive effect on the likelihood in (2), which can be reduced by choosing base distributions with slower decaying tails (Alexanderson & Henter, 2020). +

Learning the tails To illustrate the ability to model the tails, we compared the estimated empirical quantile functions. We did so for both the marginal distributions (Figure 2) and the distribution of $\parallel \mathbf{x}{\parallel }_{2}$ (Figure 9 in the Appendix). In line with the findings by Jaini et al. (2020), we notice that the vanilla NF is not capable of modeling the quantiles of the target distribution. More precisely, we observe that the corresponding quantile function is steeper around its center and has shorter tails. This means that the distribution learned by the NF does not account for the heavy tails by directly modeling them, but instead covers samples from the tails of the data distribution by being more widespread.
In contrast, the base distributions that took the tailedness of $\mathbf{x}$ into account could achieve a much better fit to the quantiles; see Figure 8 in the Appendix for further results. +

---

${}^{1}$ All computational details can be found in Section C of the Appendix. +

${}^{2}$ We are using a 3-layered MAF (Papamakarios et al., 2017). Further computational details can be found in Section C of the Appendix. +

---

![01963e38-f474-750e-a6f9-5659e896345e_3_201_203_618_239_0.jpg](images/01963e38-f474-750e-a6f9-5659e896345e_3_201_203_618_239_0.jpg) +

Figure 2. Estimated marginal quantiles in the case of normal (orange, dashed) and exactMarginals (green, dotted). The corresponding negative log-likelihoods are 4.00 and 3.39, respectively. +

Invertibility and numerical stability As investigated by Behrmann et al. (2021), the Bi-Lipschitz constant plays a fundamental role in the practical invertibility and numerical stability of NFs. To understand the learned transformation $T$ in terms of its Lipschitz continuity, we propose to study the Lipschitz surface of $T$ . Note that if $T$ is differentiable and $L$ -Lipschitz, we can follow the derivation by Behrmann et al. (2021) (equations (5) and (6) therein) to approximate +

$$
L = \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \mathbf{x}\right) }}{\begin{Vmatrix}{J}_{T}\left( x\right) \end{Vmatrix}}_{2} = \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \mathbf{x}\right) }}\mathop{\sup }\limits_{{\parallel v{\parallel }_{2} = 1}}{\begin{Vmatrix}{J}_{T}\left( x\right) v\end{Vmatrix}}_{2}
$$

$$
\approx \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \mathbf{x}\right) }}\mathop{\sup }\limits_{{\parallel v{\parallel }_{2} = 1}}\frac{1}{\varepsilon }\parallel T\left( x\right) - T\left( {x + {\varepsilon v}}\right) {\parallel }_{2},
$$

where $\varepsilon$ is some small constant.
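The finite-difference approximation above can be sketched for an arbitrary differentiable map; the example map `T` below is a plain linear function chosen so the answer is known, not a trained flow:

```python
import numpy as np

# Finite-difference surrogate for the local Lipschitz constant of a map T:
# max over random unit directions v of ||T(x) - T(x + eps*v)|| / eps.
def local_lipschitz(T, x, eps=1e-4, n_dirs=256, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    v = rng.normal(size=(n_dirs, x.size))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit directions
    diffs = np.linalg.norm(T(x + eps * v) - T(x), axis=1) / eps
    return diffs.max()

# A linear map has the same local constant everywhere, equal to its
# largest singular value (here 2.0).
A = np.array([[2.0, 0.0], [0.0, 0.5]])
T = lambda x: x @ A.T
est = local_lipschitz(T, np.zeros(2))
assert abs(est - 2.0) < 0.1
```

Evaluating this quantity on a grid of points $x$ (and likewise for the inverse map) produces the kind of Lipschitz surface discussed next.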
This motivates considering an estimate ${}^{3}$ of $\mathop{\sup }\limits_{{\parallel v{\parallel }_{2} = 1}}\parallel T\left( x\right) - T\left( {x + {\varepsilon v}}\right) {\parallel }_{2}/\varepsilon$ for $x \in \operatorname{supp}\left( \mathbf{x}\right)$ as a local surrogate for the Lipschitz continuity of $T$ . Plotting these quantities for both $T$ and ${T}^{-1}$ over $x \in {\left\lbrack -{10},{10}\right\rbrack }^{2}$ , we obtain the Lipschitz surfaces, which are depicted in Figure 3. We notice that the vanilla NF exhibits many fluctuations in its local Lipschitz continuity, while the proposed copula method leads to a well-behaved transformation. The inverse transformation ${T}_{\theta }^{-1}$ in the vanilla NF has exploding local Lipschitz constants, while, again, the proposed method results in a stable inverse transformation. + +![01963e38-f474-750e-a6f9-5659e896345e_3_912_207_674_592_0.jpg](images/01963e38-f474-750e-a6f9-5659e896345e_3_912_207_674_592_0.jpg) + +Figure 3. Examples of the Lipschitz surfaces of ${T}_{\theta }$ and ${T}_{\theta }^{-1}$ on a log-scale. The corresponding negative log-likelihood is shown in brackets. + +## 4. Discussion + +In this work, we paved the way toward a general extension of NF architectures using copulae. Synthetic experiments revealed that the modelling performance of NFs can be improved substantially by replacing the vanilla Gaussian base distribution with base distributions that reflect basic properties of the data distribution more accurately. Of course, we have only scratched the surface of the potential of the proposed approach: while we concentrate on the tail behavior of the marginals in this work, the general idea can potentially also be applied to incorporate other types of inductive bias, such as multimodality by choosing multimodal marginals, or symmetries and tail dependencies by selecting appropriate marginals and copulae.
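The coupling underlying all of these choices can be sketched concretely. The snippet below builds a heavy-tailed base distribution by combining a Gaussian copula with $t_2$ marginals via part 2 of Sklar's Theorem; the correlation value 0.7, the sample size, and the use of scipy are illustrative assumptions, not the experimental setup of this paper.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm, t

# Gaussian copula with correlation 0.7, coupled with heavy-tailed t_2 marginals.
R = np.array([[1.0, 0.7], [0.7, 1.0]])
g = multivariate_normal.rvs(mean=np.zeros(2), cov=R, size=2000, random_state=0)
u = norm.cdf(g)        # copula samples: uniform marginals, Gaussian dependence
z = t.ppf(u, df=2)     # push through t_2 inverse CDFs -> heavy-tailed marginals
```

Because the marginal transforms are monotone, the rank-based dependence induced by the copula is preserved while the marginals become heavy-tailed.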
+ +Our experiments suggest that it is sufficient to have only a broad estimate of the marginals. As mentioned in Section 3.1, one could also employ IFM to learn these before training the NF. However, the question of which technique is best for choosing or estimating appropriate marginal distributions and copulae still requires further investigation. Nevertheless, we think that this flexibility brings additional improvement over the methods proposed by Jaini et al. (2020) and Alexanderson & Henter (2020). Of course, our empirical study has so far yielded only preliminary results, which we plan to verify on real-world data and for different models in the future. + +We believe that our analysis of the base functions can help to popularize NFs in a wide spectrum of domains. One such application might be financial risk analysis, where it is essential to model tail dependencies. + +--- + +${}^{3}$ See Section C in the Appendix for details. + +--- + +## References + +Alexanderson, S. and Henter, G. E. Robust model training and generalisation with studentising flows. arXiv preprint arXiv:2006.06599, 2020. + +Behrmann, J., Vicol, P., Wang, K.-C., Grosse, R., and Jacobsen, J.-H. Understanding and mitigating exploding inverses in invertible neural networks. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, volume 130, pp. 1792-1800, Virtual Event, 2021. + +Chen, R. T. Q., Behrmann, J., Duvenaud, D. K., and Jacobsen, J.-H. Residual flows for invertible generative modeling. In Advances in Neural Information Processing Systems, volume 32, pp. 9913-9923, Vancouver, BC, Canada, 2019. + +Dinh, L., Krueger, D., and Bengio, Y. NICE: non-linear independent components estimation. In 3rd International Conference on Learning Representations, ICLR Workshop Track Proceedings, San Diego, CA, USA, 2015. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.
+ +Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. nflows: normalizing flows in PyTorch, 2020. + +Genest, C., Gendron, M., and Bourdeau-Brien, M. The advent of copulas in finance. The European Journal of Finance, volume 15, pp. 609-618. Routledge, 2009. + +Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, volume 27, pp. 2672-2680, Cambridge, MA, USA, 2014. + +Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. FFJORD: Free-form continuous dynamics for scalable reversible generative models. In International Conference on Learning Representations, New Orleans, LA, USA, 2019. + +Hofert, M., Kojadinovic, I., Maechler, M., and Yan, J. Elements of Copula Modeling with R. Springer Use R! Series, 2018. + +Horn, G. V. and Perona, P. The devil is in the tails: Fine-grained classification in the wild. arXiv preprint arXiv:1709.01450, 2017. + +Jaini, P., Kobyzev, I., Yu, Y., and Brubaker, M. Tails of Lipschitz triangular flows. In Proceedings of the 37th International Conference on Machine Learning, volume 119, pp. 4673-4681, Virtual Event, 2020. + +Joe, H. Dependence modeling with copulas. CRC press, 2014. + +Joe, H. and Xu, J. J. The estimation method of inference functions for margins for multivariate models. Faculty Research and Publications, 1996. + +Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR Conference Track Proceedings, Banff, AB, Canada, 2014. + +Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. + +Papamakarios, G., Pavlakou, T., and Murray, I.
Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, volume 30, pp. 2338-2347, Long Beach, CA, USA, 2017. + +Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, volume 22, pp. 1-64, 2021. + +Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut Statistique de l'Université de Paris, volume 8, pp. 229-231, 1959. + +Sklar, A. Random variables, distribution functions, and copulas: a personal look backward and forward. Lecture notes-monograph series, pp. 1-14, 1996. + +Wiese, M., Knobloch, R., and Korn, R. Copula & marginal flows: Disentangling the marginal from its joint. arXiv preprint arXiv:1907.03361, 2019. + +Zhang, X., Fang, Z., Wen, Y., Li, Z., and Qiao, Y. Range loss for deep face recognition with long-tailed training data. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5419-5428, Los Alamitos, CA, USA, 2017. + +Zhu, X., Anguelov, D., and Ramanan, D. Capturing long-tail distributions of object subcategories. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 915-922, Columbus, OH, USA, 2014.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/T4Wf0w2jcz/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/T4Wf0w2jcz/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..16dbf7c19d7fd13ab4bfd7c1446084a40c442282 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/T4Wf0w2jcz/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,153 @@ +§ COPULA-BASED NORMALIZING FLOWS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Normalizing flows, which learn a distribution by transforming the data to samples from a Gaussian base distribution, have proven to be powerful density approximators. But their expressive power is limited by this choice of the base distribution. We therefore propose to generalize the base distribution to a more elaborate copula distribution to capture the properties of the target distribution more accurately. In a first empirical analysis, we demonstrate that this replacement can dramatically improve vanilla normalizing flows in terms of flexibility, stability, and effectiveness for heavy-tailed data. Our results suggest that the improvements are related to an increased local Lipschitz stability of the learned flow. + +§ 1. INTRODUCTION + +Normalizing Flows (NFs) are a recently developed class of density estimators, which aim to transform the distribution of interest ${P}_{\mathbf{x}}$ to some tractable base distribution ${P}_{\mathbf{z}}$ . Using the change of variables formula, this allows for exact likelihood computation, in contrast to other deep generative models such as Variational Autoencoders (Kingma & Welling, 2014) or Generative Adversarial Networks (Goodfellow et al., 2014). Impressive estimation results, especially in the field of natural image generation, have led to great popularity of these deep generative models.
Motivated by this success, much effort has been put into the development of new parametric classes of NFs to make them even more performant (Dinh et al., 2015; 2016; Chen et al., 2019; Papamakarios et al., 2017; Grathwohl et al., 2019). However, our theoretical understanding has not developed at the same speed, which, we claim, slows down further progress in the development of powerful NF architectures. + +Fortunately, very recent works have addressed theoretical limitations of these methods: One important limitation is the expressive power of NFs. Because they are based on the change of variables formula, the learned transformations are required to be diffeomorphisms. As a consequence, a NF with bounded Lipschitz constant is unable to map one distribution ${P}_{\mathbf{x}}$ to a lighter-tailed distribution ${P}_{\mathbf{z}}$ (Wiese et al., 2019; Jaini et al., 2020). Therefore, since vanilla NFs are implemented using an isotropic Gaussian base distribution, they are unable to learn heavy-tailed distributions, which are known to appear in natural image data (Zhu et al., 2014; Horn & Perona, 2017; Zhang et al., 2017). This is in conflict with observations recently made by Behrmann et al. (2021): even though NFs are typically designed to obey invertibility, this property is often violated in practice due to numerical inaccuracies, which are promoted by a large Bi-Lipschitz constant. Bounding the Bi-Lipschitz constant, however, conflicts with the theoretical requirements mentioned above, which are needed to avoid limiting the expressiveness of the NF. + +These findings emphasize the importance of choosing an appropriate base distribution for NFs. We therefore propose to generalize the isotropic Gaussian to a much broader class of distributions: the class of copula distributions.
Copulae are a well-known concept in statistical theory and are used to model complex distributions in finance, actuarial science, and extreme-value theory (Genest et al., 2009; Joe, 2014; Hofert et al., 2018). Broadly speaking, a copula is a function that couples marginal distributions to a multivariate joint distribution. Hence, copulae allow for flexible multivariate modeling with marginals stemming from a wide range of suitable classes. This allows, for example, formulating NF base distributions that combine heavy-tailed marginals (as proposed by Wiese et al. (2019), Jaini et al. (2020), and Alexanderson & Henter (2020)) with light-tailed marginals. This paper presents a novel framework for choosing the base distribution of NFs building on the well-studied theory of copulae. A first empirical investigation demonstrates the benefits brought by this approach. Our experimental analysis on toy data reveals that using even the simplest copula model, the Independence Copula, we are able to outperform the vanilla NF approach, which uses an isotropic Gaussian base distribution. The resulting NF converges faster, is more robust, and achieves an overall better test performance. In addition, we show that the learned transformation has a better-behaved functional form in the sense of a more stable local Lipschitz continuity. + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +§ 2. BACKGROUND + +In this section, we briefly review some background knowledge about NFs (Section 2.1), followed by an overview of copula theory (Section 2.2). + +§ 2.1. NORMALIZING FLOWS + +Density estimation via NFs revolves around learning a diffeomorphic transformation ${T}_{\theta }$ that maps some unknown target distribution ${P}_{\mathbf{x}}$ to a known and tractable base distribution ${P}_{\mathbf{z}}$ .
The cornerstone of NFs is the change of variables formula + +$$ +{p}_{\theta }\left( x\right) = {p}_{\mathbf{z}}\left( {{T}_{\theta }\left( x\right) }\right) \left| {\det {J}_{{T}_{\theta }}\left( x\right) }\right| \;\text{ for }x \in {\mathbb{R}}^{D}, \tag{1} +$$ + +which relates the evaluation of the estimated density ${p}_{\theta }$ of $\mathbf{x}$ to the evaluation of the base density ${p}_{\mathbf{z}}$ , of ${T}_{\theta }\left( x\right)$ , and of $\det {J}_{{T}_{\theta }}\left( x\right)$ . By composing simple diffeomorphic building blocks ${T}_{\theta } \mathrel{\text{ := }} {T}_{\theta ,l} \circ \cdots \circ {T}_{\theta ,1}$ , we are able to obtain expressive transformations, while preserving the diffeomorphism property and computational tractability of the building blocks. Due to the tractable PDF in (1), we are able to train the model via maximum likelihood estimation (MLE) + +$$ +\widehat{\theta } \in \underset{\theta }{\arg \min }{\mathbb{E}}_{{p}_{\text{ data }}}\left\lbrack {-\log {p}_{\theta }\left( \mathbf{x}\right) }\right\rbrack , \tag{2} +$$ + +where ${p}_{\text{ data }}$ is the PDF of the empirical distribution of $\mathbf{x}$ . A comprehensive overview of NFs, including the exact parameterizations of certain flow models ${T}_{\theta }$ , computational aspects, and more, can be found in Kobyzev et al. (2020) and Papamakarios et al. (2021). + +§ 2.2. COPULAE + +A completely different approach to density estimation, which has mostly been left unrelated to NFs, is the idea of copulae. + +Definition 2.1 (Copula). A copula is a multivariate distribution with CDF $C : {\left\lbrack 0,1\right\rbrack }^{D} \rightarrow \left\lbrack {0,1}\right\rbrack$ that has standard uniform marginals, i.e. the marginals ${C}_{j}$ of $C$ satisfy ${C}_{j} \sim U\left\lbrack {0,1}\right\rbrack$ . + +The fundamental idea behind copula theory is that we can associate every distribution with a uniquely defined copula $C$ .
Vice versa, given $D$ marginal distributions, each copula $C$ defines a multivariate distribution with the given marginals. Formally, this is known as Sklar's Theorem (Sklar, 1959; 1996). + +Theorem 2.2 (Sklar's Theorem). Taken from Hofert et al. (2018). + +1. For any D-dimensional CDF ${F}_{\mathbf{z}}$ with marginal CDFs ${F}_{{\mathbf{z}}_{1}},\ldots ,{F}_{{\mathbf{z}}_{D}}$ , there exists a copula $C$ such that + +$$ +{F}_{\mathbf{z}}\left( z\right) = C\left( {{F}_{{\mathbf{z}}_{1}}\left( {z}_{1}\right) ,\ldots ,{F}_{{\mathbf{z}}_{D}}\left( {z}_{D}\right) }\right) \tag{3} +$$ + +for all $z \in {\mathbb{R}}^{D}$ . The copula is uniquely defined on $\mathcal{U} \mathrel{\text{ := }} \mathop{\prod }\limits_{{j = 1}}^{D}\operatorname{Im}\left( {F}_{{\mathbf{z}}_{j}}\right)$ , where $\operatorname{Im}\left( {F}_{{\mathbf{z}}_{j}}\right)$ is the image of ${F}_{{\mathbf{z}}_{j}}$ . For all $u \in \mathcal{U}$ it is given by + +$$ +C\left( u\right) = {F}_{\mathbf{z}}\left( {{F}_{{\mathbf{z}}_{1}}^{ \leftarrow }\left( {u}_{1}\right) ,\ldots ,{F}_{{\mathbf{z}}_{D}}^{ \leftarrow }\left( {u}_{D}\right) }\right) , \tag{4} +$$ + +where ${F}_{{\mathbf{z}}_{j}}^{ \leftarrow }$ are the right-inverses of ${F}_{{\mathbf{z}}_{j}}$ . + +2. Conversely, given any $D$ -dimensional copula $C$ and marginal CDFs ${F}_{1},\ldots ,{F}_{D}$ , a function $F$ as defined in (3) is a D-dimensional CDF with marginals ${F}_{1},\ldots ,{F}_{D}$ . + +Part 1 of Sklar's Theorem finds much application in statistical dependency analysis (Joe, 2014). In contrast to classical dependency measures, such as Pearson correlation, copulae are a more flexible tool that allows decoupling the marginals from the dependency structure. Part 2 of Sklar's Theorem is of relevance for statistical modeling and, more precisely, for defining multivariate distributions.
Given marginal distributions, which are typically much easier to estimate than the full joint distribution, and a copula $C$ , we can "couple" the marginals and the dependency structure to a multivariate joint distribution. This perspective finds various applications in the context of finance and related disciplines that need to take heavy tails and tail dependencies into account; see Genest et al. (2009) for an overview. In Section A of the Appendix we give some illustrative examples and further details about properties of copula distributions. + +If $C$ and the marginal CDFs ${F}_{{\mathbf{z}}_{1}},\ldots ,{F}_{{\mathbf{z}}_{D}}$ have PDFs $c,{p}_{{\mathbf{z}}_{1}},\ldots ,{p}_{{\mathbf{z}}_{D}}$ , respectively, we can write the PDF of $\mathbf{z}$ as + +$$ +{p}_{\mathbf{z}}\left( z\right) = c\left( {{F}_{{\mathbf{z}}_{1}}\left( {z}_{1}\right) ,\ldots ,{F}_{{\mathbf{z}}_{D}}\left( {z}_{D}\right) }\right) \mathop{\prod }\limits_{{j = 1}}^{D}{p}_{{\mathbf{z}}_{j}}\left( {z}_{j}\right) . \tag{5} +$$ + +§ 3. NFS WITH COPULA BASE DISTRIBUTIONS + +In this paper, we propose to employ copulae to model a flexible yet appropriate base distribution, with the goal of obtaining a NF that overcomes the limitations of NFs discussed in Section 1. We expect to obtain powerful and robust PDF approximators by combining the properties of theoretically sound copulae (see for instance Chapter 8 in Joe (2014)) with NFs, which allow for the estimation of complex densities. + +§ 3.1. A GENERAL FRAMEWORK + +We propose to replace the isotropic Gaussian base distribution in the vanilla NF framework with a more flexible copula distribution. Importantly, we want to learn a base distribution that is able to represent the tail behavior of the distribution of $\mathbf{x}$ . To train a NF with a copula base distribution, we build on the fact that the PDF of the latent variables can be written as in (5).
This requires two estimation steps: First, we need to estimate the marginal distributions ${F}_{{\mathbf{z}}_{1}},\ldots ,{F}_{{\mathbf{z}}_{D}}$ , which can further be used to calculate the marginal densities ${p}_{{\mathbf{z}}_{1}},\ldots ,{p}_{{\mathbf{z}}_{D}}$ . Second, we need to estimate the copula density $c$ . A popular approach for estimating the density in (5) is to employ the method of inference functions for margins (IFM) (Joe & Xu, 1996), which first estimates the marginals using MLE and then, given these marginals, estimates the copula using MLE. + +It is important to note that in contrast to standard applications of copula theory, we do not aim at estimating the full data generating distribution based on (5). Instead, following the investigations by Alexanderson & Henter (2020) and Jaini et al. (2020), our goal is to capture the tail behavior of $\mathbf{x}$ . Hence, we propose to learn surrogate marginal distributions for ${\mathbf{z}}_{1},\ldots ,{\mathbf{z}}_{D}$ that are able to represent the tailedness of the marginals ${\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{D}$ . By combining these marginals with some simple copula structure, such as the Gaussian Copula or the Independence Copula (see (6) below), we are able to create a joint distribution that represents the marginal tail behavior of $\mathbf{x}$ . + +The proposed adjustment can be applied to any existing NF architecture as long as (5) remains tractable. However, as the main goal of the base distribution is not to fully estimate the target but to represent the tail behavior of $\mathbf{x}$ , we can restrict ourselves to tractable parametric marginal distributions and copulae. + +§ 3.2. EXPERIMENTAL ANALYSIS + +In this section, we investigate the benefits of the proposed approach by analyzing a toy problem. In the following experiments, we employ the framework proposed in Section 3.1 using the simplest copula, the Independence Copula, i.e.
we consider a base distribution $\mathbf{z}$ with PDF + +$$ +{p}_{\mathbf{z}}\left( z\right) = \mathop{\prod }\limits_{{j = 1}}^{D}{p}_{{\mathbf{z}}_{j}}\left( {z}_{j}\right) ,\;z \in \mathop{\prod }\limits_{{j = 1}}^{D}\operatorname{supp}\left( {\mathbf{z}}_{j}\right) . \tag{6} +$$ + +Note that by plugging Gaussian marginals into (6), we would obtain the vanilla NF. We consider a training set generated from a 2-dimensional heavy-tailed distribution, ${}^{1}$ which has standardized t-distributed marginals ${\mathbf{x}}_{1},{\mathbf{x}}_{2} \sim {t}_{2}\left( {0,1}\right)$ with 2 degrees of freedom. The corresponding copula is a Gumbel Copula with parameter $\rho = {2.5}$ , i.e. + +$$ +C\left( u\right) \mathrel{\text{ := }} \exp \left( {-{\left( {\left( -\log \left( {u}_{1}\right) \right) }^{\rho } + {\left( -\log \left( {u}_{2}\right) \right) }^{\rho }\right) }^{1/\rho }}\right) . +$$ + +Figure 1. Mean training and test loss over 100 trials for NFs with different base distributions: normal (blue, solid), correctTails (orange, dashed), correctFamily (green, dotted), and exactMarginals (red, dashdotted). The shaded area depicts the ${95}\%$ confidence interval, which was computed using bootstrapping. We excluded normal runs that achieved a final loss larger than 25, which happened in 17 out of 100 runs. + +As a proof of concept, we compare the estimation of this heavy-tailed distribution using a NF ${}^{2}$ with an isotropic Gaussian base distribution, and with 3 different heavy-tailed base distributions constructed via (6). We consider the following heavy-tailed marginals: + +1. Laplace(0, 4) and ${t}_{5}\left( {0,2}\right)$ . We call this case correctTails because both marginals are heavy-tailed; + +2. ${t}_{5}\left( {0,1}\right)$ and ${t}_{5}\left( {0,1}\right)$ . We call this case correctFamily since both marginals stem from the same parametric class as the exact marginals; + +3.
${t}_{2}\left( {0,1}\right)$ and ${t}_{2}\left( {0,1}\right)$ . We call this case exactMarginals. + +Samples from the target distribution and from the different base distributions are visualized in Figures 6 and 7 in the Appendix. + +Training and test loss In Figure 1, we plot the average training and test performance over 100 trials. It is apparent that training using a base distribution with the correct type of tail behavior is beneficial. First of all, we observe a significant gap between the test performance of the vanilla NF and the NFs with a heavy-tailed base distribution. Notice that in Figure 1 we excluded all runs with a final test loss above 25, which happened in 17 of the normal runs and not once in the other cases. Furthermore, we clearly observe a much faster convergence and a more stable training procedure. The fluctuations and instabilities in the vanilla NF are due to tail samples that have a massive effect on the likelihood in (2), which can be reduced by choosing base distributions with slower decaying tails (Alexanderson & Henter, 2020). + +Learning the tails To illustrate the ability to model the tails, we compared the estimated empirical quantile functions. We did so both for the marginal distributions (Figure 2) and for the distribution of $\parallel \mathbf{x}{\parallel }_{2}$ (Figure 9 in the Appendix). In line with the findings by Jaini et al. (2020), we notice that the vanilla NF is not capable of modeling the quantiles of the target distribution. More precisely, we observe that the corresponding quantile function is steeper around its center and has shorter tails. This means that the distribution learned by the NF does not account for the heavy tails by directly modeling them, but instead covers samples from the tails of the data distribution by being more widespread.
In contrast, the base distributions that took the tailedness of $\mathbf{x}$ into account could achieve a much better fit to the quantiles; see Figure 8 in the Appendix for further results. + +${}^{1}$ All computational details can be found in Section C of the Appendix. + +${}^{2}$ We are using a 3-layered MAF (Papamakarios et al., 2017). Further computational details can be found in Section C of the Appendix. + +Figure 2. Estimated marginal quantiles in the case of normal (orange, dashed) and exactMarginals (green, dotted). The corresponding negative log-likelihoods are 4.00 and 3.39, respectively. + +Invertibility and numerical stability As investigated by Behrmann et al. (2021), the Bi-Lipschitz constant plays a fundamental role in the practical invertibility and numerical stability of NFs. To understand the learned transformation $T$ in terms of its Lipschitz continuity, we propose to study the Lipschitz surface of $T$ . Note that if $T$ is differentiable and $L$ -Lipschitz, we can follow the derivation by Behrmann et al. (2021) (equations (5) and (6) therein) to approximate + +$$ +L = \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \mathbf{x}\right) }}{\begin{Vmatrix}{J}_{T}\left( x\right) \end{Vmatrix}}_{2} = \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \mathbf{x}\right) }}\mathop{\sup }\limits_{{\parallel v{\parallel }_{2} = 1}}{\begin{Vmatrix}{J}_{T}\left( x\right) v\end{Vmatrix}}_{2} +$$ + +$$ +\approx \mathop{\sup }\limits_{{x \in \operatorname{supp}\left( \mathbf{x}\right) }}\mathop{\sup }\limits_{{\parallel v{\parallel }_{2} = 1}}\frac{1}{\varepsilon }\parallel T\left( x\right) - T\left( {x + {\varepsilon v}}\right) {\parallel }_{2}, +$$ + +where $\varepsilon$ is some small constant.
This motivates considering an estimate ${}^{3}$ of $\mathop{\sup }\limits_{{\parallel v{\parallel }_{2} = 1}}\parallel T\left( x\right) - T\left( {x + {\varepsilon v}}\right) {\parallel }_{2}/\varepsilon$ for $x \in \operatorname{supp}\left( \mathbf{x}\right)$ as a local surrogate for the Lipschitz continuity of $T$ . Plotting these quantities for both $T$ and ${T}^{-1}$ over $x \in {\left\lbrack -{10},{10}\right\rbrack }^{2}$ , we obtain the Lipschitz surfaces, which are depicted in Figure 3. We notice that the vanilla NF exhibits many fluctuations in its local Lipschitz continuity, while the proposed copula method leads to a well-behaved transformation. The inverse transformation ${T}_{\theta }^{-1}$ in the vanilla NF has exploding local Lipschitz constants, while, again, the proposed method results in a stable inverse transformation. + +Figure 3. Examples of the Lipschitz surfaces of ${T}_{\theta }$ and ${T}_{\theta }^{-1}$ on a log-scale. The corresponding negative log-likelihood is shown in brackets. + +§ 4. DISCUSSION + +In this work, we paved the way toward a general extension of NF architectures using copulae. Synthetic experiments revealed that the modelling performance of NFs can be improved substantially by replacing the vanilla Gaussian base distribution with base distributions that reflect basic properties of the data distribution more accurately. Of course, we have only scratched the surface of the potential of the proposed approach: while we concentrate on the tail behavior of the marginals in this work, the general idea can potentially also be applied to incorporate other types of inductive bias, such as multimodality by choosing multimodal marginals, or symmetries and tail dependencies by selecting appropriate marginals and copulae. + +Our experiments suggest that it is sufficient to have only a broad estimate of the marginals. As mentioned in Section 3.1, one could also employ IFM to learn these before training the NF.
However, the question of which technique is best for choosing or estimating appropriate marginal distributions and copulae still requires further investigation. Nevertheless, we think that this flexibility brings additional improvement over the methods proposed by Jaini et al. (2020) and Alexanderson & Henter (2020). Of course, our empirical study has so far yielded only preliminary results, which we plan to verify on real-world data and for different models in the future. + +We believe that our analysis of the base functions can help to popularize NFs in a wide spectrum of domains. One such application might be financial risk analysis, where it is essential to model tail dependencies. + +${}^{3}$ See Section C in the Appendix for details. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/URKYsI2TFl/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/URKYsI2TFl/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..9bb40e8cd2f433000cf7cb411aa2098abf25aa56 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/URKYsI2TFl/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,549 @@ +# On the expressivity of bi-Lipschitz normalizing flows + +Anonymous Authors ${}^{1}$ + +## Abstract + +An invertible function is bi-Lipschitz if both the function and its inverse have bounded Lipschitz constants. Nowadays, most Normalizing Flows are bi-Lipschitz by design or by training to limit numerical errors (among other things). In this paper, we discuss the expressivity of bi-Lipschitz Normalizing Flows and identify several target distributions that are difficult to approximate using such models.
Then, we characterize the expressivity of bi-Lipschitz Normalizing Flows by giving several lower bounds on the total variation divergence between these particularly unfavorable distributions and their best possible approximation. Finally, we discuss potential remedies, which include using more complex latent distributions. + +## 1. Introduction + +A number of recent publications have demonstrated the benefits of constructing machine learning models with a small Lipschitz constant. First, models with a small Lipschitz constant have been linked with better generalization capabilities, both in terms of true risk (Bartlett et al., 2017) and adversarial risk (Farnia et al., 2018). In addition, models with a small Lipschitz constraint are more stable during training and are less prone to numerical errors, a property which is particularly important in the context of invertible neural networks and normalizing flows (Behrmann et al., 2021). + +Unfortunately, enforcing a small Lipschitz constant, either by design or using regularization during training, can impede the ability of a model to fit the data distribution. Based on this observation, several researchers have studied the limitations of neural networks with bounded Lipschitz constant. In particular, Tanielian et al. (2020) were able to identify a family of target distributions with disconnected support that cannot be fitted by a GAN with a bounded Lipschitz constant. + +In this paper we focus on the impact of the Lipschitz constraint on normalizing flows. When they are invertible, normalizing flows are often not just Lipschitz, but bi-Lipschitz, meaning that both the mapping function and its inverse have bounded Lipschitz constants. For example, Affine Coupling, Additive Coupling, neural ODE, Invertible $1 \times 1$ Convolution and Residual Networks are bi-Lipschitz by design. Other types of normalizing flows can also be trained to be bi-Lipschitz, in order to avoid exploding inverses (Behrmann et al., 2021).
We study the expressivity of normalizing flows with bounded Lipschitz constant and discuss the impact of the bi-Lipschitz constant on the total variation divergence. More precisely, we give several lower bounds on the total variation divergence between the generated distribution and the target distribution, in some (particularly unfavorable) training settings. + +## 2. Background + +An invertible normalizing flow, or normalizing flow for short, is an invertible density model in which both density estimation and sampling can be done efficiently. Normalizing flows (NF) learn an invertible mapping between a latent space $\mathcal{Z}$ and a data space $\mathcal{X}$ . Typically, the forward direction $F : \mathcal{X} \rightarrow \mathcal{Z}$ (i.e. the normalizing direction) is tractable and exact, whereas the inverse direction ${F}^{-1} : \mathcal{Z} \rightarrow \mathcal{X}$ (i.e. the generative direction) either has a closed form or can be approximated using an iterative algorithm. + +Let ${P}^{ * }$ be the true data distribution over $\mathcal{X}$ that we wish to approximate. We assume this distribution admits a density ${p}^{ * }$ . Let $q\left( \mathbf{z}\right) = \frac{1}{{\left( \sqrt{2\pi }\right) }^{d}}{e}^{-\parallel \mathbf{z}{\parallel }^{2}/2}$ be the standard Gaussian density function and $Q$ the associated measure.
If we denote the learned data distribution by ${P}_{\theta }$ and its density by ${p}_{\theta }$, density estimation can be performed efficiently using a simple change of variables:

$$
\forall \mathbf{x} \in \mathcal{X},\;{p}_{\theta }\left( \mathbf{x}\right) = \left| {\operatorname{Jac}}_{F}\left( \mathbf{x}\right) \right| q\left( F\left( \mathbf{x}\right) \right) \tag{1}
$$

The learned distribution ${P}_{\theta }$ can then be expressed with the Gaussian measure over any subset $A \subseteq \mathcal{X}$:

$$
{P}_{\theta }\left( A\right) = Q\left( F\left( A\right) \right) = {\int }_{F\left( A\right) }q\left( \mathbf{z}\right) d\mathbf{z}
$$

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

For this to be useful in practice, the Jacobian (more specifically, its determinant) must be computed efficiently, since the relation between the two densities involves this determinant, as given in Equation 1. To satisfy this property, Normalizing Flows usually have particular structures, and consequently most of the common building blocks are bi-Lipschitz.

### 2.1. Bi-Lipschitz Normalizing Flows

In this paper, we consider bi-Lipschitz normalizing flows. We define the bi-Lipschitz property as follows.

Definition 2.1.
A bijective function $F : \mathcal{X} \subset {\mathbb{R}}^{d} \rightarrow \mathcal{Z} \subset {\mathbb{R}}^{d}$ is said to be $\left( {L}_{1},{L}_{2}\right)$-bi-Lipschitz if $F$ is ${L}_{1}$-Lipschitz and its inverse ${F}^{-1}$ is ${L}_{2}$-Lipschitz:

$$
\forall {\mathbf{x}}_{1},{\mathbf{x}}_{2} \in \mathcal{X},\;\begin{Vmatrix}{F\left( {\mathbf{x}}_{1}\right) - F\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix} \leq {L}_{1}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix}
$$

$$
\forall {\mathbf{z}}_{1},{\mathbf{z}}_{2} \in \mathcal{Z},\;\begin{Vmatrix}{{F}^{-1}\left( {\mathbf{z}}_{1}\right) - {F}^{-1}\left( {\mathbf{z}}_{2}\right) }\end{Vmatrix} \leq {L}_{2}\begin{Vmatrix}{{\mathbf{z}}_{1} - {\mathbf{z}}_{2}}\end{Vmatrix}
$$

Bi-Lipschitz continuity implies some intrinsic properties of normalizing flows. Indeed, if a function cannot expand arbitrarily, its inverse cannot contract arbitrarily. Therefore, not only is the contractive power limited, but the determinant of the Jacobian is also bounded.

Proposition 2.1. If a normalizing flow $F$ is $\left( {L}_{1},{L}_{2}\right)$-bi-Lipschitz, then for any ${\mathbf{x}}_{1},{\mathbf{x}}_{2} \in \mathcal{X}$, it satisfies:

$$
\frac{1}{{L}_{2}}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix} \leq \begin{Vmatrix}{F\left( {\mathbf{x}}_{1}\right) - F\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix} \leq {L}_{1}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix}
$$

Moreover, the determinant of its Jacobian matrix ${\operatorname{Jac}}_{F}$ satisfies, for all $\mathbf{x} \in \mathcal{X} \subset {\mathbb{R}}^{d}$:

$$
\frac{1}{{L}_{2}^{d}} \leq \left| {\operatorname{Jac}}_{F}\left( \mathbf{x}\right) \right| \leq {L}_{1}^{d}
$$

Notice that this bounding is often used as an alternate definition of bi-Lipschitzness.
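Both inequalities of Proposition 2.1 can be checked numerically on the simplest bi-Lipschitz flow, a linear map (an illustrative example, not a model from the paper), for which the constants are known in closed form:

```python
import numpy as np

# Numerical check of Proposition 2.1 on a linear flow F(x) = A x.
# For such F: L1 = sigma_max(A), L2 = 1 / sigma_min(A), and
# |det Jac_F(x)| = |det A| for every x.
rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(d, d))
svals = np.linalg.svd(A, compute_uv=False)
L1, L2 = svals.max(), 1.0 / svals.min()

x1, x2 = rng.normal(size=d), rng.normal(size=d)
gap = np.linalg.norm(A @ x1 - A @ x2)
# (1/L2) ||x1 - x2|| <= ||F(x1) - F(x2)|| <= L1 ||x1 - x2||
assert (1.0 / L2) * np.linalg.norm(x1 - x2) <= gap <= L1 * np.linalg.norm(x1 - x2)

det = abs(np.linalg.det(A))
# 1 / L2^d <= |det Jac_F| <= L1^d
assert (1.0 / L2) ** d <= det <= L1 ** d
```

Since $|\det A|$ is the product of the singular values, the Jacobian bound holds with equality exactly when all singular values coincide.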

For instance, the i-ResNet (Behrmann et al., 2019) and the Residual Flow (Chen et al., 2020) are both based on residual atomic blocks ${f}_{i} = {I}_{d} + {g}_{i}$. Their invertibility is ensured by the Lipschitz constraint $\operatorname{Lip}\left( {g}_{i}\right) \leq L < 1$. If $F$ is composed of $m$ residual blocks such that $F = {f}_{m} \circ \cdots \circ {f}_{1}$, then the overall bi-Lipschitz constants satisfy $\operatorname{Lip}\left( F\right) \leq {\left( 1 + L\right) }^{m}$ and $\operatorname{Lip}\left( {F}^{-1}\right) \leq 1/{\left( 1 - L\right) }^{m}$. Alternatively, in Glow (Kingma & Dhariwal, 2018), with atomic blocks ${W}_{i} = {P}_{i}{L}_{i}\left( {{U}_{i} + \operatorname{diag}\left( {s}_{i}\right) }\right)$, the bi-Lipschitz constants satisfy $\operatorname{Lip}\left( F\right) \leq \mathop{\prod }\limits_{i}^{m}{\begin{Vmatrix}{W}_{i}\end{Vmatrix}}_{2}$ and $\operatorname{Lip}\left( {F}^{-1}\right) \leq \mathop{\prod }\limits_{i}^{m}{\begin{Vmatrix}{W}_{i}^{-1}\end{Vmatrix}}_{2}$. Consequently, the bi-Lipschitz constraints on either the function or its Jacobian determinant can be relaxed by increasing the depth of the network, but by doing so the stability of the inverse can be affected (Behrmann et al., 2021).

### 2.2. Assessing the learning abilities

Our goal is to understand how the bi-Lipschitz property affects the approximation ability of the network. To do so, we will compare the true data distribution ${P}^{ * }$ and its density ${p}^{ * }$ with the learned distribution ${P}_{\theta }$ and its density ${p}_{\theta }$.

To evaluate how the true distribution ${P}^{ * }$ and the generated distribution ${P}_{\theta }$ differ, we use the Total Variation distance, whose definition is recalled here:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) = \mathop{\sup }\limits_{A}\left| {{P}^{ * }\left( A\right) - {P}_{\theta }\left( A\right) }\right|
$$

## 3.
Lower Bounds on the TV Distance

### 3.1. A bound on bi-Lipschitz normalizing flows for any subset $A$

The first theorem is a lower bound on the TV distance between the learned distribution and the target distribution in a general setting. Intuitively, the idea is to find an arbitrary subset $A$ that is sufficiently concentrated so that the Lipschitz-constrained mapping cannot concentrate enough weight from the Gaussian distribution onto this subset.

Theorem 3.1 (bi-Lipschitz mappings fail to capture high density subsets). Let $F$ be $\left( {L}_{1},{L}_{2}\right)$-bi-Lipschitz and ${\eta }_{A} = \frac{{P}^{ * }\left( A\right) }{\operatorname{vol}\left( A\right) }$ be the average density over any subset $A \subset {\mathbb{R}}^{d}$. Then:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{A}\operatorname{vol}\left( A\right) \left( {{\eta }_{A} - {\left( \frac{{L}_{1}}{4{L}_{2}\sqrt{2\pi }}\right) }^{d}}\right)
$$

Therefore, if there is a subset $A$ that satisfies ${\eta }_{A} > {\left( \frac{{L}_{1}}{4{L}_{2}\sqrt{2\pi }}\right) }^{d}$, then the TV is necessarily strictly positive.

The proof of this theorem is given in Appendix A.1.

Remark that the bound in Theorem 3.1 depends on both Lipschitz constants ${L}_{1}$ and ${L}_{2}$. If a subset $A$ is very dense, the mapping will not be able to expand the given volume of $A$ to match the lower density of the Gaussian because of ${L}_{1}$. On the other hand, the point with the highest density within $A$ will be matched with the highest point of the Gaussian density, but its whole neighbourhood has to be moved by a factor of $1/{L}_{2}$. The main advantage of this formulation is that it applies to any subset of the data space, but at the expense of a loose bound on the TV.

### 3.2.
Bounds for specific subsets ${B}_{R,{\mathbf{x}}_{0}}$

The bound in Theorem 3.1 can be further improved by making assumptions on the structure of the subset $A$. We choose to focus on ${l}_{2}$ balls instead of arbitrary subsets.

Let ${B}_{R,{\mathbf{x}}_{0}}$ be the ${l}_{2}$ ball with center ${\mathbf{x}}_{0}$ and radius $R$ (i.e. ${B}_{R,{\mathbf{x}}_{0}} = \left\{ {\mathbf{x} \in \mathcal{X} : {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{2} \leq R}\right\}$). Then we can show that both high density balls and low density ones are difficult to fit properly, the former because of the Lipschitz constraint of $F$, the latter because of the Lipschitz constraint of ${F}^{-1}$. We first consider high density balls.

![01963e39-bf81-7a3f-bfaa-28776c987e9f_2_235_184_531_523_0.jpg](images/01963e39-bf81-7a3f-bfaa-28776c987e9f_2_235_184_531_523_0.jpg)

Figure 1. Example of a target distribution where Theorem 3.2 applies: the subset ${B}_{R}$ concentrates most of the weight in ${p}^{ * }$, but $q\left( {F\left( {B}_{R}\right) }\right)$ can only be as big as $q\left( {B}_{R{L}_{1}}\right)$.

Theorem 3.2 (NF with an ${L}_{1}$-Lipschitz mapping $F$ fails to capture high density balls). Let $F$ be ${L}_{1}$-Lipschitz.
Then:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - \frac{R{L}_{1}}{\sqrt{\pi }}}\right)
$$

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) > \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - 4{d}^{1/4}R{L}_{1}}\right)
$$

Therefore, if we find a ball for which the true measure satisfies $\frac{{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) }{R} > \frac{{L}_{1}}{\sqrt{\pi }}$ or $\frac{{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) }{R} > 4{d}^{1/4}{L}_{1}$, then the TV is necessarily strictly positive.

Theorem 3.2 highlights the effect of the Lipschitz constraint of the forward mapping $F$. If a ball has a high probability ${P}^{ * }\left( {B}_{R}\right)$ in the data space, then the probability assigned to this ball in the latent space is at most $Q\left( {B}_{R{L}_{1}}\right)$, which is upper bounded by $R{L}_{1}/\sqrt{\pi }$ (Ball, 1993). A one-dimensional representation of a pathological case for Theorem 3.2 is shown in Figure 1. In other words, no ball with a high enough density in data space can be expanded sufficiently to have a matching probability in the latent space. Note that we could use a closed form of $Q\left( {B}_{R{L}_{1}}\right)$, but it is less open to interpretation than the approximations we have made.

Conversely, the mapping being bi-Lipschitz, it cannot contract arbitrarily either. If there is a low density zone mapped onto the maximum of the Gaussian density, then the Normalizing Flow cannot reduce the probability of the corresponding zone enough. Notice that the assumption of a low density zone is strong but fairly reasonable; for instance, one can observe a multimodal density with fairly well separated modes.
If the modes are roughly equiprobable, we expect a mapping to assign those modes in a balanced way around the mode of the Gaussian distribution in the latent space. Therefore, the low density ball is mapped onto a zone wider than the ball ${B}_{R/{L}_{2}}$, and consequently the associated Gaussian measure is lower bounded by $Q\left( {B}_{R/{L}_{2}}\right)$, as illustrated by the one-dimensional example in Figure 2. Despite the lower bounds established by (Pinelis, 2020), there are no reasonably interpretable bounds, so we use the closed form expressed with the Gamma function $\Gamma$ and the lower incomplete gamma function $\gamma$. Numerical approximations of the closed form are given in Figure 3. We can observe that the higher the dimension is, the larger the ${l}_{2}$ distance between two modes can be.

![01963e39-bf81-7a3f-bfaa-28776c987e9f_2_977_184_528_528_0.jpg](images/01963e39-bf81-7a3f-bfaa-28776c987e9f_2_977_184_528_528_0.jpg)

Figure 2. Example of a target distribution for which Theorem 3.3 applies: the subset ${B}_{R}$ concentrates little weight in ${p}^{ * }$, but $q\left( {F\left( {B}_{R}\right) }\right)$ can only be as small as $q\left( {B}_{R/{L}_{2}}\right)$.

Theorem 3.3 (NF with an ${L}_{2}$-Lipschitz inverse mapping ${F}^{-1}$ fails to capture low density balls). Let ${F}^{-1}$ be ${L}_{2}$-Lipschitz.
We consider balls centered on ${F}^{-1}\left( 0\right)$; then we have the lower bound:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{R}\left( {\frac{\gamma \left( {\frac{d}{2},\frac{{R}^{2}}{2{L}_{2}^{2}}}\right) }{\Gamma \left( \frac{d}{2}\right) } - {P}^{ * }\left( {B}_{R,{F}^{-1}\left( 0\right) }\right) }\right)
$$

Therefore, if we find a ball for which the true measure satisfies ${P}^{ * }\left( {B}_{R,{F}^{-1}\left( 0\right) }\right) < \frac{\gamma \left( {d/2,{R}^{2}/2{L}_{2}^{2}}\right) }{\Gamma \left( {d/2}\right) }$, then the TV is necessarily strictly positive.

Both formal proofs are detailed in Appendices A.2 and A.3.

### 3.3. Comparison to related work

A related setup is used in (Tanielian et al., 2020). The authors consider two disconnected subsets ${M}_{1}$ and ${M}_{2}$ separated by a distance $D$, with equal probabilities in the latent space, i.e. ${P}_{\theta }\left( {M}_{1}\right) = {P}_{\theta }\left( {M}_{2}\right) = 1/2$. As a consequence, ${F}^{-1}\left( 0\right)$ is equidistant from ${M}_{1}$ and ${M}_{2}$, as illustrated in Figure 4.

![01963e39-bf81-7a3f-bfaa-28776c987e9f_3_155_188_712_273_0.jpg](images/01963e39-bf81-7a3f-bfaa-28776c987e9f_3_155_188_712_273_0.jpg)

Figure 3. Representation of the Gaussian measure of balls of radius $R/{L}_{2}$ centered on 0. The measure is given for dimensions 1, 2, 10 and then the dimensions of MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky, 2009) and CelebA (Liu et al., 2015).

![01963e39-bf81-7a3f-bfaa-28776c987e9f_3_280_690_444_390_0.jpg](images/01963e39-bf81-7a3f-bfaa-28776c987e9f_3_280_690_444_390_0.jpg)

Figure 4. Experimental setup given by (Tanielian et al., 2020).

Corollary 3.3.1 (NF with ${L}_{2}$-Lipschitz inverse mapping).
If ${F}^{-1}$ is ${L}_{2}$-Lipschitz, then we have a lower bound on the TV distance based on the distance $D$ between ${M}_{1}$ and ${M}_{2}$:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \gamma \left( {\frac{d}{2},\frac{{D}^{2}}{2{L}_{2}^{2}}}\right) /\Gamma \left( \frac{d}{2}\right) .
$$

Note that, the TV distance being defined as the supremum over any subset $A$, we can accumulate the failures made by the network, and therefore take into account the error when the two manifolds ${M}_{1}$ and ${M}_{2}$ are too dense. This setup is thus an appropriate pathological case for studying the effect of the bi-Lipschitzness of the mapping.

The original work assesses the learning abilities of its generative model, a GAN (Goodfellow et al., 2014), with a definition of precision and recall given by (Sajjadi et al., 2018) and improved by (Kynkäänniemi et al., 2019). The main advantage of this metric is that it is well suited for use with the Gaussian isoperimetric inequality and therefore gives a result independent of the dimension. By using the TV distance, or any distance for that matter, our analysis can be applied to distributions with any support. The details of precision and recall and the comparison between both methods can be found in Appendix B, and the proof of Corollary 3.3.1 is in Appendix A.4.

## 4. Potential remedies

To mitigate the limitations induced by the two Lipschitz constants, one may suggest simply increasing the aggregated Lipschitz constants. For most Normalizing Flow structures, one solution would be to increase the number of layers of the network. However, according to (Behrmann et al., 2021), this method affects the stability of the inverse of the normalizing flow. The solution might instead lie in more complex latent distributions.
The first idea would be to use a Gaussian distribution with learnable parameters $\mathbf{\mu }$ and $\Sigma = \operatorname{diag}\left( {\sigma }_{i}\right)$, but this only results in a trade-off between the failures highlighted by Theorems 3.2 and 3.3. Since a learnable standard deviation $\sigma$ is equivalent to a reparametrization trick, the consequence would be shifting the Lipschitz constants $\left( {{L}_{1},{L}_{2}}\right)$ toward $\left( {\frac{{L}_{1}}{\sigma },{L}_{2}\sigma }\right)$. Still, this could be of interest when only one of the two failure modes is met, provided that stability is not affected by a larger range within the latent space.

To solve this, a Gaussian Mixture (GM) can be used to learn disconnected manifolds (Khayatkhoei et al., 2019; Izmailov et al., 2019). Since each mode can be assigned to a different manifold or high density zone, the maximum of the Gaussian density will not be mapped to the low density space between the two density peaks. Consequently, Theorem 3.3 has no reason to hold, and we can express the lower bound of Theorem 3.2 for a GM with $K$ equally weighted modes with learnable means ${\mathbf{\mu }}_{j}$ and diagonal covariance matrices ${\Sigma }_{j} = \operatorname{diag}\left( {\sigma }_{ji}\right)$:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - \frac{1}{K}\frac{R{L}_{1}}{\mathop{\prod }\limits_{i}{\sigma }_{ji}^{d}\sqrt{\pi }}}\right)
$$

Therefore, the GM could solve the issue posed by disconnected subsets of the data space under two conditions: the number of modes $K$ should be reasonably low, so that the measure mapped onto potentially dense subsets can be large enough, and most importantly, great attention should be given to the experimental issues linked to training with learnable ${\sigma }_{ji}$ and $K$.

## 5.
Conclusion

We have established that bi-Lipschitz constraints reduce the expressivity of Normalizing Flows. When the dataset meets particular conditions, such as a high density zone or a low density zone between two high density zones, the reduced expressivity prevents the model from capturing the real distribution of the dataset. To compensate for this lack of learning ability of the mapping, a more complex, i.e. more expressive, latent distribution can be implemented. However, this method suffers from training difficulties and should be studied further.

## References

Krizhevsky, A. Learning multiple layers of features from tiny images. 2009.

Ball, K. The reverse isoperimetric problem for Gaussian measure. Discrete & Computational Geometry, 10(4):411-420, December 1993. ISSN 1432-0444. URL https://doi.org/10.1007/BF02573986.

Bartlett, P., Foster, D. J., and Telgarsky, M. Spectrally-normalized margin bounds for neural networks. In 30th Conference on Neural Information Processing Systems (NeurIPS 2017), December 2017. arXiv: 1706.08498.

Behrmann, J., Grathwohl, W., Chen, R. T. Q., Duvenaud, D., and Jacobsen, J.-H. Invertible Residual Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, May 2019. arXiv: 1811.00995.

Behrmann, J., Vicol, P., Wang, K.-C., Grosse, R., and Jacobsen, J.-H. Understanding and Mitigating Exploding Inverses in Invertible Neural Networks. In International Conference on Artificial Intelligence and Statistics, pp. 1792-1800. PMLR, March 2021. URL http://proceedings.mlr.press/v130/behrmann21a.html.

Chen, R. T. Q., Behrmann, J., Duvenaud, D., and Jacobsen, J.-H. Residual Flows for Invertible Generative Modeling. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, July 2020. arXiv: 1906.02735.

Farnia, F., Zhang, J., and Tse, D. Generalizable Adversarial Training via Spectral Normalization. September 2018.
URL https://openreview.net/forum?id=Hyx4knR9Ym.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative Adversarial Networks. In 27th Conference on Neural Information Processing Systems (NeurIPS 2014), June 2014. URL http://arxiv.org/abs/1406.2661. arXiv: 1406.2661.

Izmailov, P., Kirichenko, P., Finzi, M., and Wilson, A. G. Semi-Supervised Learning with Normalizing Flows. In Proceedings of the 37th International Conference on Machine Learning, December 2019. arXiv: 1912.13025.

Khayatkhoei, M., Elgammal, A., and Singh, M. Disconnected Manifold Learning for Generative Adversarial Networks. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, January 2019. URL http://arxiv.org/abs/1806.00880.

Kingma, D. P. and Dhariwal, P. Glow: Generative Flow with Invertible 1x1 Convolutions. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, volume 31, 2018.

Kynkäänniemi, T., Karras, T., Laine, S., Lehtinen, J., and Aila, T. Improved Precision and Recall Metric for Assessing Generative Models. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, October 2019. arXiv: 1904.06991.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep Learning Face Attributes in the Wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.

Pinelis, I. Exact lower and upper bounds on the incomplete gamma function. Mathematical Inequalities & Applications, pp. 1261-1278, 2020. doi: 10.7153/mia-2020-23-95.

Sajjadi, M. S. M., Bachem, O., Lucic, M., Bousquet, O., and Gelly, S. Assessing Generative Models via Precision and Recall. In 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, October 2018. URL http://arxiv.org/abs/1806.00035. arXiv: 1806.00035.

Tanielian, U., Issenhuth, T., Dohmatob, E., and Mary, J.
Learning disconnected manifolds: a no GANs land. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, December 2020. arXiv: 2006.04596.

LeCun, Y., Cortes, C., and Burges, C. MNIST handwritten digit database. ATT Labs, 2, 2010. URL http://yann.lecun.com/exdb/mnist.

## A. Proofs

### A.1. Proof of Theorem 3.1

By definition we have ${P}_{\theta }\left( A\right) = Q\left( {F\left( A\right) }\right) = {\int }_{F\left( A\right) }q\left( \mathbf{z}\right) d\mathbf{z}$; then with the change of variables formula we obtain:

$$
{P}_{\theta }\left( A\right) = {\int }_{A}\left| {\operatorname{Jac}}_{F}\left( \mathbf{x}\right) \right| q\left( F\left( \mathbf{x}\right) \right) d\mathbf{x} = \frac{1}{{\left( 2\pi \right) }^{d/2}}{\int }_{A}\left| {\operatorname{Jac}}_{F}\left( \mathbf{x}\right) \right| {e}^{-{\parallel F\left( \mathbf{x}\right) \parallel }^{2}/2}d\mathbf{x}
$$

As ${F}^{-1}$ is ${L}_{2}$-Lipschitz, $F$ satisfies:

$$
\forall {\mathbf{x}}_{1},{\mathbf{x}}_{2} \in \mathcal{X},\;\frac{1}{{L}_{2}}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix} \leq \begin{Vmatrix}{F\left( {\mathbf{x}}_{1}\right) - F\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix},
$$

and in particular, we have:

$$
\forall \mathbf{x} \in \mathcal{X},\;\frac{1}{{L}_{2}}\begin{Vmatrix}{\mathbf{x} - {F}^{-1}\left( 0\right) }\end{Vmatrix} \leq \parallel F\left( \mathbf{x}\right) \parallel .
$$

Consequently, for all $\mathbf{x} \in \mathcal{X}$:

$$
q\left( {F\left( \mathbf{x}\right) }\right) = \frac{1}{{\left( 2\pi \right) }^{d/2}}{e}^{-\parallel F\left( \mathbf{x}\right) {\parallel }^{2}/2} \leq \frac{1}{{\left( 2\pi \right) }^{d/2}}{e}^{-\parallel \mathbf{x}/{L}_{2} - {F}^{-1}\left( 0\right) /{L}_{2}{\parallel }^{2}/2} \leq \frac{1}{{\left( 2\pi \right) }^{d/2}}{e}^{-\parallel T\left( \mathbf{x}\right) {\parallel }^{2}},
$$

where $T$ is the affine mapping given by

$$
T\left( \mathbf{x}\right) = \frac{\mathbf{x} - {F}^{-1}\left( 0\right) }{4{L}_{2}}.
$$

As $F$ is ${L}_{1}$-Lipschitz we have $\left| {\operatorname{Jac}}_{F}\left( \mathbf{x}\right) \right| < {L}_{1}^{d}$, and thus

$$
{P}_{\theta }\left( A\right) \leq {\left( \frac{{L}_{1}}{\sqrt{2\pi }}\right) }^{d}{\int }_{A}{e}^{-\parallel T\left( \mathbf{x}\right) {\parallel }_{2}^{2}}d\mathbf{x} \leq {\left( \frac{{L}_{1}}{\sqrt{2\pi }}\right) }^{d}{\int }_{T\left( A\right) }\frac{1}{{\left( 4{L}_{2}\right) }^{d}}d\mathbf{x} \leq {\left( \frac{{L}_{1}}{4{L}_{2}\sqrt{2\pi }}\right) }^{d}\operatorname{vol}\left( A\right) ,
$$

and thus ${\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) = \mathop{\sup }\limits_{A}\left| {{P}^{ * }\left( A\right) - {P}_{\theta }\left( A\right) }\right|$ implies

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{A}\left( {{P}^{ * }\left( A\right) - {\left( \frac{{L}_{1}}{4{L}_{2}\sqrt{2\pi }}\right) }^{d}\operatorname{vol}\left( A\right) }\right)
$$

### A.2.
Proof of Theorem 3.2

By definition of the TV distance, we have

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left| {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - Q\left( {F\left( {B}_{R,{\mathbf{x}}_{0}}\right) }\right) }\right| ,
$$

where ${B}_{R,{\mathbf{x}}_{0}}$ is the ball of radius $R$ centered at ${\mathbf{x}}_{0}$.

The idea is to show that the image of a ball ${B}_{R}$ by an ${L}_{1}$-Lipschitz function is contained in a ball of radius ${L}_{1}R$, and then to use a reverse isoperimetric inequality to find an upper bound on the measure of a ball of radius ${L}_{1}R$.

Proof of $F\left( {B}_{R,{\mathbf{x}}_{0}}\right) \subset {B}_{{L}_{1}R, F\left( {\mathbf{x}}_{0}\right) }$. For every $\mathbf{z} \in F\left( {B}_{R,{\mathbf{x}}_{0}}\right)$, there exists $\mathbf{x} \in {B}_{R,{\mathbf{x}}_{0}}$ such that ${F}^{-1}\left( \mathbf{z}\right) = \mathbf{x}$, and we have:

$$
\begin{Vmatrix}{F\left( {{F}^{-1}\left( \mathbf{z}\right) }\right) - F\left( {\mathbf{x}}_{0}\right) }\end{Vmatrix} = \begin{Vmatrix}{F\left( \mathbf{x}\right) - F\left( {\mathbf{x}}_{0}\right) }\end{Vmatrix} \leq {L}_{1}\begin{Vmatrix}{\mathbf{x} - {\mathbf{x}}_{0}}\end{Vmatrix} \leq {L}_{1}R
$$

Upper bound on $Q\left( {B}_{{L}_{1}R}\right)$. First of all, it can easily be established that $Q\left( {{B}_{{L}_{1}R}\left( {F\left( {\mathbf{x}}_{0}\right) }\right) }\right)$ is maximal when $F\left( {\mathbf{x}}_{0}\right) = 0$. From now on, we will only consider ${B}_{{L}_{1}R}$, the ball centered on 0.
Therefore the objective is to find an upper bound on:

$$
Q\left( {B}_{{L}_{1}R}\right) = {\int }_{\parallel \mathbf{z}\parallel < {L}_{1}R}q\left( \mathbf{z}\right) d\mathbf{z} = {\int }_{\parallel \mathbf{z}\parallel < {L}_{1}R}\frac{1}{{\left( \sqrt{2\pi }\right) }^{d}}{e}^{-\parallel \mathbf{z}{\parallel }^{2}/2}d\mathbf{z}
$$

We can use polar coordinates to get another expression of the Gaussian measure, with ${S}_{d - 1}\left( r\right) = \frac{2{\pi }^{d/2}{r}^{d - 1}}{\Gamma \left( {d/2}\right) }$ being the surface area of the hypersphere:

$$
Q\left( {B}_{{L}_{1}R}\right) = \frac{1}{{\left( 2\pi \right) }^{d/2}}{\int }_{0}^{{L}_{1}R}{S}_{d - 1}\left( r\right) {e}^{-{r}^{2}/2}{dr} = \frac{2}{{2}^{d/2}\Gamma \left( {d/2}\right) }{\int }_{0}^{{L}_{1}R}{r}^{d - 1}{e}^{-{r}^{2}/2}{dr}
$$

Since ${r}^{d - 1}{e}^{-{r}^{2}/2}$ reaches its maximum at $r = \sqrt{d - 1}$, we obtain the upper bound:

$$
Q\left( {B}_{{L}_{1}R}\right) \leq \frac{2}{{2}^{d/2}\Gamma \left( {d/2}\right) }{\sqrt{d - 1}}^{d - 1}{e}^{-\frac{d - 1}{2}}{\int }_{0}^{{L}_{1}R}{dr} \leq \frac{\sqrt{2}{L}_{1}R}{\Gamma \left( {d/2}\right) }{\left( \frac{d - 1}{2e}\right) }^{\frac{d - 1}{2}}
$$

Then, with the Stirling approximation of the Gamma function:

$$
\frac{1}{2}\Gamma \left( {d/2}\right) = \frac{1}{d}\Gamma \left( {d/2 + 1}\right) \geq \frac{\sqrt{\pi }\sqrt{d}}{d}{\left( d/2\right) }^{d/2}{e}^{-d/2} \geq \frac{\sqrt{\pi }}{{2}^{d/2}}{d}^{\frac{d - 1}{2}}{e}^{-\frac{d}{2}}
$$

We obtain:

$$
Q\left( {B}_{{L}_{1}R}\right) \leq \frac{{L}_{1}R\sqrt{e}}{\sqrt{\pi }}{\left( \frac{d - 1}{d}\right) }^{\frac{d - 1}{2}}
$$

Using the bound

$$
\frac{1}{\sqrt{e}} < {\left( \frac{d - 1}{d}\right) }^{\frac{d - 1}{2}}
$$

we have

$$
Q\left(
{B}_{{L}_{1}R}\right) < \frac{{L}_{1}R}{\sqrt{\pi }}
$$

Alternatively, an inequality from (Ball, 1993) gives, for the hypersphere in ${\mathbb{R}}^{d - 1}$, an upper bound on the Gaussian measure over the boundary of a convex set $C$ in ${\mathbb{R}}^{d}$:

$$
\forall d \geq 2,\;{\int }_{\partial C}q < 4{d}^{1/4}
$$

Therefore, for a ball ${B}_{{L}_{1}R}$:

$$
\forall d \geq 2,\;Q\left( {B}_{{L}_{1}R}\right) < 4{d}^{1/4}{L}_{1}R
$$

Lower bound on the TV. Once we have an upper bound on $Q\left( {B}_{{L}_{1}R}\right)$, we have:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - \frac{{L}_{1}R}{\sqrt{\pi }}}\right)
$$

With the second upper bound, the theorem can be formulated as:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) > \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - 4{d}^{1/4}R{L}_{1}}\right)
$$

### A.3. Proof of Theorem 3.3

In this section, we denote ${B}_{R} = {B}_{R,{F}^{-1}\left( 0\right) }$. As ${F}^{-1}$ is ${L}_{2}$-Lipschitz, ${F}^{-1}\left( {B}_{R/{L}_{2},0}\right) \subset {B}_{R}$ and thus

$$
{P}_{\theta }\left( {B}_{R}\right) \geq {P}_{\theta }\left( {{F}^{-1}\left( {B}_{R/{L}_{2},0}\right) }\right) = Q\left( {B}_{R/{L}_{2},0}\right) .
$$

By construction,

$$
Q\left( {B}_{R/{L}_{2},0}\right) = \mathbb{P}\left( {\parallel \mathbf{z}{\parallel }^{2} \leq \frac{{R}^{2}}{{L}_{2}^{2}}}\right) ,
$$

where $\mathbf{z}$ follows the standard Gaussian distribution in ${\mathbb{R}}^{d}$. This quantity can be computed using the cumulative distribution function of the chi-square distribution, i.e.

$$
Q\left( {B}_{R/{L}_{2},0}\right) = \frac{\gamma \left( {\frac{d}{2},\frac{{R}^{2}}{2{L}_{2}^{2}}}\right) }{\Gamma \left( \frac{d}{2}\right) },
$$

where $\gamma$ is the lower incomplete gamma function, given by

$$
\gamma \left( {k, x}\right) = {\int }_{0}^{x}{t}^{k - 1}{e}^{-t}{dt}.
$$

### A.4. Proof of Corollary 3.3.1

Since ${M}_{1}$ and ${M}_{2}$ are separated by a distance $D$, the largest ball centered on ${F}^{-1}\left( 0\right)$ that intersects neither subset has a radius at least as big as $D$; we call it ${B}_{D}$ to simplify the notation. Therefore:

$$
\bar{\alpha } = {P}_{\theta }\left( {M}_{1}\right) + {P}_{\theta }\left( {M}_{2}\right) = 1 - {P}_{\theta }\left( \overline{{M}_{1} \cup {M}_{2}}\right) \leq 1 - {P}_{\theta }\left( {B}_{D}\right) \leq 1 - Q\left( {F\left( {B}_{D}\right) }\right) \leq 1 - Q\left( {B}_{D/{L}_{2}}\right) \leq 1 - \frac{\gamma \left( {\frac{d}{2},\frac{{D}^{2}}{2{L}_{2}^{2}}}\right) }{\Gamma \left( \frac{d}{2}\right) }
$$

And since ${P}^{ * }\left( {B}_{D}\right) = 0$:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \left| {{P}_{\theta }\left( {B}_{D}\right) - {P}^{ * }\left( {B}_{D}\right) }\right| \geq {P}_{\theta }\left( {B}_{D}\right) \geq \frac{\gamma \left( {\frac{d}{2},\frac{{D}^{2}}{2{L}_{2}^{2}}}\right) }{\Gamma \left( \frac{d}{2}\right) }
$$

## B. The link with Precision and Recall for generative models

### B.1. Definitions of precision and recall

Precision and recall are defined as follows:

Definition B.1.
For $\alpha ,\beta \in \left\lbrack {0,1}\right\rbrack$, the distribution ${P}_{\theta }$ is said to have precision $\alpha$ at recall $\beta$ with respect to ${P}^{ * }$ if there exist distributions $\nu ,{\nu }_{\theta },{\nu }^{ * }$ such that ${P}_{\theta }$ and ${P}^{ * }$ can be decomposed as:

$$
{P}_{\theta } = {\alpha \nu } + \left( {1 - \alpha }\right) {\nu }_{\theta }\;\text{ and }\;{P}^{ * } = {\beta \nu } + \left( {1 - \beta }\right) {\nu }^{ * }
$$

The distribution $\nu$ is defined on $\operatorname{Supp}\left( {P}_{\theta }\right) \cup \operatorname{Supp}\left( {P}^{ * }\right)$, while $\operatorname{Supp}\left( {\nu }_{\theta }\right) = \operatorname{Supp}\left( {P}_{\theta }\right)$ and $\operatorname{Supp}\left( {\nu }^{ * }\right) = \operatorname{Supp}\left( {P}^{ * }\right)$.

This can be interpreted as in (Sajjadi et al., 2018): $\nu$ represents the part of ${P}^{ * }$ that ${P}_{\theta }$ correctly models; ${\nu }_{\theta }$ is simultaneously the part of ${P}^{ * }$ that ${P}_{\theta }$ misses on their joint support and all the points that should not be represented by ${P}_{\theta }$; finally, ${\nu }^{ * }$ covers the points of ${P}^{ * }$ that ${P}_{\theta }$ could not model and the difference between $\nu$ and ${\nu }_{\theta }$ on their joint support. Among all the potential decompositions, i.e. the pairs $\left( {\alpha ,\beta }\right)$, the focus is set on the maximum precision $\bar{\alpha }$ and the maximum recall $\bar{\beta }$.

Proposition B.1. The maximum precision and the maximum recall satisfy:

$$
\bar{\alpha } = {P}_{\theta }\left( {\operatorname{Supp}\left( {P}^{ * }\right) }\right) \;\text{ and }\;\bar{\beta } = {P}^{ * }\left( {\operatorname{Supp}\left( {P}_{\theta }\right) }\right)
$$

An improved version of precision and recall is defined in (Kynkäänniemi et al., 2019).
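On finite supports, Proposition B.1 can be checked directly; the sketch below uses hypothetical toy distributions chosen for illustration, not data from the paper:

```python
# Toy check of Proposition B.1 on finite supports: the maximum
# precision is P_theta(Supp(P*)) and the maximum recall is
# P*(Supp(P_theta)). Distributions are hypothetical illustrations.
p_star = {0: 0.5, 1: 0.5}    # target P*, supported on {0, 1}
p_theta = {1: 0.7, 2: 0.3}   # model P_theta, supported on {1, 2}

alpha_bar = sum(p for x, p in p_theta.items() if x in p_star)  # P_theta(Supp(P*))
beta_bar = sum(p for x, p in p_star.items() if x in p_theta)   # P*(Supp(P_theta))

assert abs(alpha_bar - 0.7) < 1e-12  # only mass on point 1 counts
assert abs(beta_bar - 0.5) < 1e-12
```

Here $1 - \bar{\alpha} = 0.3$ is exactly the model mass placed outside the target support, matching the interpretation of $\nu_{\theta}$ above.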
#### B.2. The link between the maximum precision, the maximum recall and the TV distance

The link between the TV distance, the maximum precision $\bar{\alpha }$ and the maximum recall $\bar{\beta }$ is:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \left| {{P}^{ * }\left( {\operatorname{Supp}\left( {P}^{ * }\right) }\right) - {P}_{\theta }\left( {\operatorname{Supp}\left( {P}^{ * }\right) }\right) }\right| = 1 - \bar{\alpha }
$$

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \left| {{P}^{ * }\left( {\operatorname{Supp}\left( {P}_{\theta }\right) }\right) - {P}_{\theta }\left( {\operatorname{Supp}\left( {P}_{\theta }\right) }\right) }\right| = 1 - \bar{\beta }
$$

### B.3. Corollary 3.3.1 given in terms of maximum precision

Corollary B.0.1 (${F}^{-1}$ ${L}_{2}$ -Lipschitz). If ${F}^{-1}$ is ${L}_{2}$ -Lipschitz, then we have an upper bound on the maximum precision based on the distance $D$ between ${M}_{1}$ and ${M}_{2}$ :

$$
\bar{\alpha } \leq 1 - \frac{\gamma \left( {\frac{d}{2},\frac{{D}^{2}}{2{L}_{2}^{2}}}\right) }{\Gamma \left( \frac{d}{2}\right) }
$$

This result is to be compared with the existing upper bound on the maximum precision:

$$
\bar{\alpha } + \frac{2D}{{L}_{2}}{e}^{-{\Phi }^{-1}{\left( \bar{\alpha }/2\right) }^{2}} \leq 1\text{ where }\Phi \left( t\right) = {\int }_{-\infty }^{t}\frac{\exp \left( {-{r}^{2}/2}\right) }{\sqrt{2\pi }}{dr}
$$

While our result depends on the dimension, it yields an upper bound on the precision that can be computed directly. 
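As a quick numerical illustration of Corollary B.0.1, the sketch below evaluates the precision upper bound for hypothetical values of $d$, $D$ and $L_2$, using the closed form of the regularized lower incomplete gamma function $\gamma(d/2, x)/\Gamma(d/2)$ valid for even $d$, and cross-checks the Gaussian ball mass by Monte Carlo:

```python
import math, random

def gauss_ball_mass(d, r):
    """gamma(d/2, r^2/2) / Gamma(d/2): the mass a standard Gaussian in R^d
    puts on an l2 ball of radius r centered at the origin (chi-square CDF).
    Closed form valid for even d only."""
    assert d % 2 == 0, "this closed form requires even d"
    x = r * r / 2.0
    partial = sum(x ** k / math.factorial(k) for k in range(d // 2))
    return 1.0 - math.exp(-x) * partial

def max_precision_upper_bound(d, D, L2):
    """Corollary B.0.1: max precision <= 1 - gamma(d/2, D^2/(2 L2^2)) / Gamma(d/2)."""
    return 1.0 - gauss_ball_mass(d, D / L2)

# Monte Carlo sanity check of the ball mass in d = 2.
random.seed(0)
d, r, n = 2, 1.5, 200_000
hits = sum(
    random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 <= r * r for _ in range(n)
)
assert abs(hits / n - gauss_ball_mass(d, r)) < 5e-3

# The farther apart the two modes, the lower the achievable precision.
bounds = [max_precision_upper_bound(2, D, 1.0) for D in (0.5, 1.0, 2.0)]
print(bounds)
```

The decreasing values of `bounds` reproduce the qualitative message of the corollary: with a fixed $L_2$, a larger separation $D$ forces a lower maximum precision.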
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/URKYsI2TFl/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/URKYsI2TFl/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..25281e27381bb674eeb500a7fb6a27c05e2aa972
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/URKYsI2TFl/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,197 @@
§ ON THE EXPRESSIVITY OF BI-LIPSCHITZ NORMALIZING FLOWS

Anonymous Authors ${}^{1}$

§ ABSTRACT

An invertible function is bi-Lipschitz if both the function and its inverse have bounded Lipschitz constants. Nowadays, most Normalizing Flows are bi-Lipschitz by design or by training to limit numerical errors (among other things). In this paper, we discuss the expressivity of bi-Lipschitz Normalizing Flows and identify several target distributions that are difficult to approximate using such models. Then, we characterize the expressivity of bi-Lipschitz Normalizing Flows by giving several lower bounds on the total variation divergence between these particularly unfavorable distributions and their best possible approximation. Finally, we discuss potential remedies, which include using more complex latent distributions.

§ 1. INTRODUCTION

A number of recent publications have demonstrated the benefits of constructing machine learning models with a small Lipschitz constant. First, models with a small Lipschitz constant have been linked with better generalization capabilities, both in terms of true risk (Bartlett et al., 2017) and adversarial risk (Farnia et al., 2018). 
In addition, models with a small Lipschitz constant are more stable during training and less prone to numerical errors, a property which is particularly important in the context of invertible neural networks and normalizing flows (Behrmann et al., 2021).

Unfortunately, enforcing a small Lipschitz constant, either by design or using regularization during training, can impede the ability of a model to fit the data distribution. Based on this observation, several researchers have studied the limitations of neural networks with bounded Lipschitz constant. In particular, Tanielian et al. (2020) were able to identify a family of target distributions with disconnected support that cannot be fitted with a GAN with a bounded Lipschitz constant.

In this paper we focus on the impact of the Lipschitz constraint on normalizing flows. When they are invertible, normalizing flows are often not just Lipschitz, but bi-Lipschitz, meaning that both the mapping function and its inverse have bounded Lipschitz constants. For example, Affine Coupling, Additive Coupling, neural ODE, Invertible $1 \times 1$ Convolution and Residual Networks are bi-Lipschitz by design. Other types of normalizing flows can also be trained to be bi-Lipschitz, in order to avoid exploding inverses (Behrmann et al., 2021). We study the expressivity of normalizing flows with bounded Lipschitz constant and discuss the impact of the bi-Lipschitz constant on the total variation divergence. More precisely, we give several lower bounds on the total variation divergence between the generated distribution and the target distribution, in some (particularly unfavorable) training settings.

§ 2. BACKGROUND

An invertible normalizing flow, or normalizing flow for short, is an invertible density model in which both density estimation and sampling can be done efficiently. Normalizing flows (NF) learn an invertible mapping between a latent space $\mathcal{Z}$ and a data space $\mathcal{X}$ . 
Typically, the forward direction $F : \mathcal{X} \rightarrow \mathcal{Z}$ (i.e. the normalizing direction) is tractable and exact, whereas the inverse direction ${F}^{-1} : \mathcal{Z} \rightarrow \mathcal{X}$ (i.e. the generative direction) either has a closed form or can be approximated using an iterative algorithm.

Let ${P}^{ * }$ be the true data distribution over $\mathcal{X}$ that we wish to approximate. We assume this distribution admits a density ${p}^{ * }$ . Let $q\left( \mathbf{z}\right) = \frac{1}{{\left( \sqrt{2\pi }\right) }^{d}}{e}^{-\parallel \mathbf{z}{\parallel }^{2}/2}$ be the standard Gaussian density function and $Q$ the associated measure. If we denote by ${P}_{\theta }$ the learned data distribution and by ${p}_{\theta }$ its density, density estimation can be performed efficiently using a simple change of variables:

$$
\forall x \in \mathcal{X},\;{p}_{\theta }\left( \mathbf{x}\right) = \left| {{\operatorname{Jac}}_{F}\left( \mathbf{x}\right) }\right| q\left( {F\left( \mathbf{x}\right) }\right) \tag{1}
$$

The learned distribution ${P}_{\theta }$ can then be expressed with the Gaussian measure over any subset $A \subseteq \mathcal{X}$ :

$$
{P}_{\theta }\left( A\right) = Q\left( {F\left( A\right) }\right) = {\int }_{F\left( A\right) }q\left( \mathbf{z}\right) d\mathbf{z}
$$

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

For this to be useful in practice, the Jacobian (more specifically its determinant) must be computed efficiently, since relating the two densities in Equation 1 requires this determinant. To satisfy this property, Normalizing Flows usually have particular structures, and consequently most of the common blocks are bi-Lipschitz.

§ 2.1. BI-LIPSCHITZ NORMALIZING FLOWS

In this paper, we consider bi-Lipschitz normalizing flows. 
We define the bi-Lipschitz property as follows.

Definition 2.1. A bijective function $F : \mathcal{X} \subset {\mathbb{R}}^{d} \rightarrow \mathcal{Z} \subset$ ${\mathbb{R}}^{d}$ is said to be $\left( {{L}_{1},{L}_{2}}\right)$ -bi-Lipschitz if $F$ is ${L}_{1}$ -Lipschitz and its inverse ${F}^{-1}$ is ${L}_{2}$ -Lipschitz:

$$
\forall {\mathbf{x}}_{1},{\mathbf{x}}_{2} \in \mathcal{X},\;\begin{Vmatrix}{F\left( {\mathbf{x}}_{1}\right) - F\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix} \leq {L}_{1}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix}
$$

$$
\forall {\mathbf{z}}_{1},{\mathbf{z}}_{2} \in \mathcal{Z},\;\begin{Vmatrix}{{F}^{-1}\left( {\mathbf{z}}_{1}\right) - {F}^{-1}\left( {\mathbf{z}}_{2}\right) }\end{Vmatrix} \leq {L}_{2}\begin{Vmatrix}{{\mathbf{z}}_{1} - {\mathbf{z}}_{2}}\end{Vmatrix}
$$

Bi-Lipschitz continuity implies some intrinsic properties of normalizing flows. Indeed, if a function cannot expand distances arbitrarily, its inverse cannot contract them arbitrarily. Therefore, not only is the contractive power limited, but the determinant of the Jacobian is also bounded.

Proposition 2.1. If a normalizing flow $F$ is $\left( {{L}_{1},{L}_{2}}\right)$ -bi-Lipschitz, then for any ${\mathbf{x}}_{1},{\mathbf{x}}_{2} \in \mathcal{X}$ , it satisfies:

$$
\frac{1}{{L}_{2}}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix} \leq \begin{Vmatrix}{F\left( {\mathbf{x}}_{1}\right) - F\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix} \leq {L}_{1}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix}
$$

Moreover, the determinant of its Jacobian matrix ${\operatorname{Jac}}_{F}$ satisfies, for all $\mathbf{x} \in \mathcal{X} \subset {\mathbb{R}}^{d}$ : $\frac{1}{{L}_{2}^{d}} \leq \left| {{\operatorname{Jac}}_{F}\left( \mathbf{x}\right) }\right| \leq {L}_{1}^{d}$

Notice that this bound is often used as an alternative definition of bi-Lipschitzness. 
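As a concrete check of Proposition 2.1, consider the purely linear flow $F(\mathbf{x}) = W\mathbf{x}$ (a hypothetical toy case, not a model from the paper). Its bi-Lipschitz constants are the extreme singular values of $W$, and both the two-sided distance bound and the determinant bound can be verified numerically:

```python
import math, random

# Hypothetical linear flow F(x) = W x in d = 2; L1 = sigma_max(W) and
# L2 = 1 / sigma_min(W) are its bi-Lipschitz constants.
W = [[2.0, 1.0],
     [0.0, 0.5]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# Singular values of the 2x2 matrix via the eigenvalues of W^T W.
a, b, c, d2 = W[0][0], W[0][1], W[1][0], W[1][1]
g11, g12, g22 = a * a + c * c, a * b + c * d2, b * b + d2 * d2
tr, det_g = g11 + g22, g11 * g22 - g12 * g12
disc = math.sqrt(tr * tr / 4.0 - det_g)
L1 = math.sqrt(tr / 2.0 + disc)          # largest singular value
L2 = 1.0 / math.sqrt(tr / 2.0 - disc)    # inverse of the smallest one

# |det Jac_F| = |det W| is constant here; check 1/L2^d <= |det W| <= L1^d.
det_W = abs(a * d2 - b * c)
assert 1.0 / L2 ** 2 <= det_W <= L1 ** 2

# Empirically check the two-sided distance bound on random pairs.
random.seed(0)
for _ in range(1000):
    x1 = [random.gauss(0, 1), random.gauss(0, 1)]
    x2 = [random.gauss(0, 1), random.gauss(0, 1)]
    gap = norm([u - v for u, v in zip(matvec(W, x1), matvec(W, x2))])
    base = norm([u - v for u, v in zip(x1, x2)])
    assert base / L2 - 1e-9 <= gap <= L1 * base + 1e-9
print(round(L1, 3), round(1.0 / L2, 3), det_W)
```

For a linear map both bounds are tight at the extreme singular directions, which is why the inequalities in the loop hold with equality only along those axes.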
For instance, the i-ResNet (Behrmann et al., 2019) and the Residual Flow (Chen et al., 2020) are both based on residual atomic blocks ${f}_{i} = {I}_{d} + {g}_{i}$ . Their invertibility is ensured by the Lipschitz constant $\operatorname{Lip}\left( {g}_{i}\right) \leq L < 1$ . If $F$ is composed of $m$ residual blocks such that $F = {f}_{m} \circ \cdots \circ {f}_{1}$ , then the overall bi-Lipschitz constants satisfy $\operatorname{Lip}\left( F\right) \leq {\left( 1 + L\right) }^{m}$ and $\operatorname{Lip}\left( {F}^{-1}\right) \leq 1/{\left( 1 - L\right) }^{m}$ . Alternatively, in Glow (Kingma & Dhariwal, 2018) with atomic blocks ${W}_{i} = {P}_{i}{L}_{i}\left( {{U}_{i} + \operatorname{diag}\left( {s}_{i}\right) }\right)$ , the bi-Lipschitz constants satisfy $\operatorname{Lip}\left( F\right) \leq \mathop{\prod }\limits_{i}^{m}{\begin{Vmatrix}{W}_{i}\end{Vmatrix}}_{2}$ and $\operatorname{Lip}\left( {F}^{-1}\right) \leq \mathop{\prod }\limits_{i}^{m}{\begin{Vmatrix}{W}_{i}^{-1}\end{Vmatrix}}_{2}$ . Consequently, the bi-Lipschitz constraints on either the function or its Jacobian determinant can be relaxed by increasing the depth of the network, but, by doing so, the stability of the inverse can be affected (Behrmann et al., 2021).

§ 2.2. ASSESSING THE LEARNING ABILITIES

Our goal is to understand how the bi-Lipschitz property affects the approximation ability of the network. To do so, we compare the true data distribution ${P}^{ * }$ and its density ${p}^{ * }$ with the learned distribution ${P}_{\theta }$ and its density ${p}_{\theta }$ .

To evaluate how the true distribution ${P}^{ * }$ and the generated distribution ${P}_{\theta }$ differ, we use the Total Variation (TV) distance, whose definition we recall here:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) = \mathop{\sup }\limits_{A}\left| {{P}^{ * }\left( A\right) - {P}_{\theta }\left( A\right) }\right|
$$

§ 3. 
LOWER BOUNDS ON THE TV DISTANCE

§ 3.1. A BOUND ON BI-LIPSCHITZ NORMALIZING FLOWS FOR ANY SUBSET $A$

The first theorem is a lower bound on the TV distance between the learned distribution and the target distribution in a general setting. Intuitively, the idea is to find an arbitrary subset $A$ that is sufficiently concentrated so that the Lipschitz-constrained mapping cannot concentrate enough weight from the Gaussian distribution onto this subset.

Theorem 3.1 (bi-Lipschitz mappings fail to capture high density subsets). Let $F$ be $\left( {{L}_{1},{L}_{2}}\right)$ -bi-Lipschitz and ${\eta }_{A} = \frac{{P}^{ * }\left( A\right) }{\operatorname{vol}\left( A\right) }$ be the average density over any subset $A \subset {\mathbb{R}}^{d}$ . Then:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{A}\operatorname{vol}\left( A\right) \left( {{\eta }_{A} - {\left( \frac{{L}_{1}}{4{L}_{2}\sqrt{2\pi }}\right) }^{d}}\right)
$$

Therefore, if there is a subset $A$ that satisfies ${\eta }_{A} > {\left( \frac{{L}_{1}}{4{L}_{2}\sqrt{2\pi }}\right) }^{d}$ , then the TV distance is necessarily strictly positive.

The proof of this theorem is given in Appendix A.1.

Remark that the bound in Theorem 3.1 depends on both Lipschitz constants ${L}_{1}$ and ${L}_{2}$ . If a subset $A$ is very dense, the mapping will not be able to expand the given volume of $A$ to match the lower density of the Gaussian because of ${L}_{1}$ . On the other hand, the point with the highest density within $A$ will be matched with the highest point of the Gaussian density, but all of its neighbourhood has to be moved by a factor of $1/{L}_{2}$ . The main advantage of this formulation is that it applies to any subset of the data space, but at the expense of a loose bound on the TV distance.

§ 3.2. 
BOUNDS FOR SPECIFIC SUBSETS ${B}_{R,{\mathbf{x}}_{0}}$

The bound in Theorem 3.1 can be further improved by making assumptions on the structure of the subset $A$ . We choose to focus on ${l}_{2}$ balls instead of arbitrary subsets.

Let ${B}_{R,{\mathbf{x}}_{0}}$ be the ${l}_{2}$ ball with center ${\mathbf{x}}_{0}$ and radius $R$ (i.e. ${B}_{R,{\mathbf{x}}_{0}} = \left\{ {\mathbf{x} \in \mathcal{X} : {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{2} \leq R}\right\}$ ). Then we can show that both high density balls and low density ones are difficult to fit properly, the former because of the Lipschitz constraint on $F$ , the latter because of the Lipschitz constraint on ${F}^{-1}$ . We first consider high density balls.

 < g r a p h i c s > 

Figure 1. Example of a target distribution where Theorem 3.2 applies: the subset ${B}_{R}$ concentrates most of the weight in ${p}^{ * }$ , but $q\left( {F\left( {B}_{R}\right) }\right)$ can only be as big as $q\left( {B}_{R{L}_{1}}\right)$ .

Theorem 3.2 (NF with an ${L}_{1}$ -Lipschitz mapping $F$ fails to capture high density balls). Let $F$ be ${L}_{1}$ -Lipschitz. Then:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - \frac{R{L}_{1}}{\sqrt{\pi }}}\right)
$$

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) > \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - 4{d}^{1/4}R{L}_{1}}\right)
$$

Therefore, if we find a ball for which the true measure satisfies $\frac{{P}^{ * }\left( {B}_{R,{x}_{0}}\right) }{R} > \frac{{L}_{1}}{\sqrt{\pi }}$ or $\frac{{P}^{ * }\left( {B}_{R,{x}_{0}}\right) }{R} > 4{d}^{1/4}{L}_{1}$ , then the TV distance is necessarily strictly positive. 
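For intuition, the first bound of Theorem 3.2 can be evaluated numerically; the mass fraction, radius and Lipschitz constant below are hypothetical:

```python
import math

def tv_lower_bound_high_density(p_ball, R, L1):
    """First bound of Theorem 3.2 for one fixed ball:
    D_TV >= P*(B_{R,x0}) - R * L1 / sqrt(pi)."""
    return p_ball - R * L1 / math.sqrt(math.pi)

# Hypothetical target: 90% of the mass inside a ball of radius 0.05,
# approximated by a flow whose forward Lipschitz constant is L1 = 4.
bound = tv_lower_bound_high_density(0.9, 0.05, 4.0)
assert bound > 0  # no such flow can drive the TV distance to zero
print(round(bound, 4))
```

Any single ball already witnessing a positive value of this expression certifies a strictly positive TV distance, since the theorem takes a supremum over all balls.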
Theorem 3.2 highlights the effect of the Lipschitz constraint on the forward mapping $F$ . If a ball has a high probability ${P}^{ * }\left( {B}_{R}\right)$ in the data space, then the probability assigned to this ball in the latent space is at most $Q\left( {B}_{R{L}_{1}}\right)$ , which is upper bounded by $R{L}_{1}/\sqrt{\pi }$ (Ball, 1993). A one dimensional representation of a pathological case for Theorem 3.2 is shown in Figure 1. In other words, no ball with a high enough density in data space can be expanded sufficiently to have a matching probability in the latent space. Note that we could use a closed form of $Q\left( {B}_{R{L}_{1}}\right)$ , but it is less open to interpretation than the approximations we have made.

Conversely, since the mapping is bi-Lipschitz, it cannot contract arbitrarily. If there is a low density zone mapped onto the maximum of the Gaussian density, then the Normalizing Flow cannot reduce the probability of the corresponding zone enough. Notice that the assumption of a low density zone is strong but fairly reasonable: for instance, one can observe a multimodal density with fairly well separated modes. If the modes are roughly equiprobable, we expect a mapping to assign those modes in a balanced way around the mode of the Gaussian distribution in the latent space. Therefore, the low density ball is mapped onto a zone wider than the ball ${B}_{R/{L}_{2}}$ , and consequently the associated Gaussian measure is lower bounded by $Q\left( {B}_{R/{L}_{2}}\right)$ , as illustrated in the one dimensional example in Figure 2. Despite the lower bounds established by (Pinelis, 2020), there is no reasonably interpretable bound, so we use the closed form expressed with the Gamma function $\Gamma$ and the lower incomplete gamma function $\gamma$ . The numerical approximations of the closed form are given in Figure 3. 
We can observe that the higher the dimension, the larger the ${l}_{2}$ distance between two modes can be.

 < g r a p h i c s > 

Figure 2. Example of a target distribution for which Theorem 3.3 applies: the subset ${B}_{R}$ concentrates little weight in ${p}^{ * }$ , but $q\left( {F\left( {B}_{R}\right) }\right)$ can only be as small as $q\left( {B}_{R/{L}_{2}}\right)$ .

Theorem 3.3 (NF with an ${L}_{2}$ -Lipschitz inverse mapping ${F}^{-1}$ fails to capture low density balls). Let ${F}^{-1}$ be ${L}_{2}$ -Lipschitz. Considering balls centered on ${F}^{-1}\left( 0\right)$ , we have the lower bound:

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{R}\left( {\frac{\gamma \left( {\frac{d}{2},\frac{{R}^{2}}{2{L}_{2}^{2}}}\right) }{\Gamma \left( \frac{d}{2}\right) } - {P}^{ * }\left( {B}_{R,{F}^{-1}\left( 0\right) }\right) }\right)
$$

Therefore, if we find a ball for which the true measure satisfies ${P}^{ * }\left( {B}_{R,{F}^{-1}\left( 0\right) }\right) < \frac{\gamma \left( {d/2,{R}^{2}/2{L}_{2}^{2}}\right) }{\Gamma \left( {d/2}\right) }$ , then the TV distance is necessarily strictly positive.

Both formal proofs are detailed in Appendices A.2 and A.3.

§ 3.3. COMPARISON TO RELATED WORK

A related setup is used in (Tanielian et al., 2020). The authors consider two disconnected subsets ${M}_{1}$ and ${M}_{2}$ separated by a distance $D$ , with equal probabilities in the latent space, i.e. ${P}_{\theta }\left( {M}_{1}\right) = {P}_{\theta }\left( {M}_{2}\right) = 1/2$ . As a consequence, ${F}^{-1}\left( 0\right)$ is equidistant from ${M}_{1}$ and ${M}_{2}$ , as illustrated in Figure 4.

 < g r a p h i c s > 

Figure 3. Representation of the Gaussian measure of balls of radius $R/{L}_{2}$ centered on 0 . 
The measure is given for dimensions 1, 2 and 10, and for the dimensions of MNIST (LeCun et al., 2010), CIFAR10 (Krizhevsky, 2009) and CelebA (Liu et al., 2015).

 < g r a p h i c s > 

Figure 4. Experimental setup given by (Tanielian et al., 2020)

Corollary 3.3.1 (NF with ${L}_{2}$ -Lipschitz inverse mapping). If ${F}^{-1}$ is ${L}_{2}$ -Lipschitz, then we have a lower bound on the TV distance based on the distance $D$ between ${M}_{1}$ and ${M}_{2}$ :

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \gamma \left( {\frac{d}{2},\frac{{D}^{2}}{2{L}_{2}^{2}}}\right) /\Gamma \left( \frac{d}{2}\right) .
$$

Note that, since the TV distance is defined as the supremum over any subset $A$ , we can accumulate the failures made by the network, and therefore take into account the error made when the two manifolds ${M}_{1}$ and ${M}_{2}$ are too dense. This setup is thus an appropriate pathological case for studying the effect of the bi-Lipschitzness of the mapping.

The original work assesses the learning abilities of its generative model, a GAN (Goodfellow et al., 2014), with a definition of precision and recall given by (Sajjadi et al., 2018) and improved by (Kynkäänniemi et al., 2019). The main advantage of this metric is that it is well suited to the Gaussian Isoperimetric Inequality, and therefore gives a result independent of the dimension. By using the TV distance, or any distance for that matter, our analysis can be applied to distributions with any support. The details of the precision and recall and the comparison between both methods can be found in Appendix B, and the proof of Corollary 3.3.1 is in Appendix A.4.

§ 4. POTENTIAL REMEDIES

To mitigate the limitations induced by the two Lipschitz constants, one may suggest simply increasing the aggregated Lipschitz constants. For most Normalizing Flow structures, one solution would be to increase the number of layers of the network. 
However, according to (Behrmann et al., 2021), this method affects the stability of the inverse of the normalizing flow. The solution might instead lie in a more complex latent distribution. A first idea would be to use a Gaussian distribution with learnable parameters $\mathbf{\mu }$ and $\Sigma = \operatorname{diag}\left( {\sigma }_{i}\right)$ , but this only results in a trade-off between the failures highlighted by Theorems 3.2 and 3.3. Since a learnable standard deviation $\sigma$ is equivalent to a reparametrization trick, the consequence would be to shift the Lipschitz constants $\left( {{L}_{1},{L}_{2}}\right)$ toward $\left( {\frac{{L}_{1}}{\sigma },{L}_{2}\sigma }\right)$ . This could still be interesting when only one failure mode is met, under the condition that the stability is not affected by a larger range within the latent space.

To solve this, a Gaussian Mixture (GM) can be used to learn disconnected manifolds (Khayatkhoei et al., 2019; Izmailov et al., 2019). Since each mode can be assigned to a different manifold or high density zone, the maximum of the Gaussian density will not be mapped onto the low density space between the two density peaks. 
Consequently, while Theorem 3.3 then has no reason to hold, we can still express the lower bound of Theorem 3.2 for a GM with $K$ equally weighted modes with learnable means ${\mathbf{\mu }}_{j}$ and diagonal covariance matrices ${\Sigma }_{j} = \operatorname{diag}\left( {\sigma }_{ji}\right)$ :

$$
{\mathcal{D}}_{\mathrm{{TV}}}\left( {{P}^{ * },{P}_{\theta }}\right) \geq \mathop{\sup }\limits_{{R,{\mathbf{x}}_{0}}}\left( {{P}^{ * }\left( {B}_{R,{\mathbf{x}}_{0}}\right) - \frac{1}{K}\frac{R{L}_{1}}{\mathop{\prod }\limits_{i}{\sigma }_{ji}^{d}\sqrt{\pi }}}\right)
$$

Therefore, the GM could solve the issue posed by split subsets of the data space under two conditions: the number of modes $K$ should be reasonably low, so that the measure mapped onto a potentially dense subset can be large enough; and, most importantly, great attention should be given to the experimental issues linked to training with learnable ${\sigma }_{ji}$ and $K$ .

§ 5. CONCLUSION

We have established that bi-Lipschitz constraints reduce the expressivity of Normalizing Flows. When the dataset meets certain conditions, such as a high density zone, or a low density zone between two high density zones, this reduced expressivity prevents the model from capturing the real distribution of the dataset. To compensate for this lack of learning ability of the mapping, a more complex, i.e. more expressive, latent distribution can be used. However, this method suffers from training difficulties and should be further studied. 
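As a closing numerical note on the remedy discussed in Section 4, the trade-off induced by a learnable latent scale $\sigma$ can be sketched by evaluating both theorem bounds with the shifted constants $(L_1/\sigma, L_2\sigma)$; all numbers below are hypothetical, and the closed form of $\gamma(d/2, x)/\Gamma(d/2)$ is used for even $d$:

```python
import math

def high_density_bound(p_ball, R, L1, sigma):
    """Theorem 3.2 bound for one ball with the shifted forward constant L1/sigma."""
    return p_ball - R * (L1 / sigma) / math.sqrt(math.pi)

def low_density_bound(p_ball, R, L2, sigma, d=2):
    """Theorem 3.3 bound for one ball with the shifted inverse constant L2*sigma,
    via the closed form of gamma(d/2, x)/Gamma(d/2) for even d."""
    x = R * R / (2.0 * (L2 * sigma) ** 2)
    partial = sum(x ** k / math.factorial(k) for k in range(d // 2))
    return (1.0 - math.exp(-x) * partial) - p_ball

# Hypothetical setting: a dense ball (90% of the mass, R = 0.1) and an
# empty ball (no mass, R = 1) around F^{-1}(0), with L1 = 4 and L2 = 2.
for sigma in (0.5, 1.0, 2.0):
    print(sigma,
          round(high_density_bound(0.9, 0.1, 4.0, sigma), 3),
          round(low_density_bound(0.0, 1.0, 2.0, sigma), 3))
```

A larger $\sigma$ shrinks the low-density failure (Theorem 3.3) while enlarging the high-density one (Theorem 3.2), which is exactly the trade-off described above.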
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/VmwEpdsvHZ9/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/VmwEpdsvHZ9/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3d0f72020288207814053704c8f79c70cb7a81d
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/VmwEpdsvHZ9/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,731 @@
## Sliced Iterative Normalizing Flows

Anonymous Authors ${}^{1}$

## Abstract

We develop an iterative (greedy) deep learning algorithm which is able to transform an arbitrary probability distribution function (PDF) into the target PDF. The model is based on iterative Optimal Transport of a series of 1D slices, matching on each slice the marginal PDF to the target. As special cases of this algorithm, we introduce two Sliced Iterative Normalizing Flows (SINF), which map from the data to the latent space (GIS) and vice versa (SIG). We show that SIG is able to generate high quality samples that match the GAN benchmarks. GIS obtains better results than density-trained NFs on small-dataset density estimation tasks. The SINF approach deviates significantly from the current deep learning paradigm, as it is greedy and does not use concepts such as mini-batching, stochastic gradient descent or gradient back-propagation through deep layers.

## 1. 
Introduction

Latent variable generative models such as Normalizing Flows (NFs) (Rezende & Mohamed, 2015; Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018), Variational AutoEncoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015) aim to model the distribution $p\left( x\right)$ of high-dimensional input data $x$ by introducing a mapping from a latent variable $z$ to $x$ , where $z$ is assumed to follow a given prior distribution $\pi \left( z\right)$ . These models usually parameterize the mapping using neural networks, and their training typically consists of minimizing a dissimilarity measure between the model distribution and the target distribution.

In this work we adopt a different approach to build the map from the latent variable $z$ to the data $x$ . We approach this problem from the Optimal Transport (OT) point of view. OT studies whether transport maps exist between two probability distributions and, if they do, how to construct the map that minimizes the transport cost. We propose to decompose the high dimensional problem into a succession of 1D transport problems, where the OT solution is known. The mapping is iteratively augmented, and it has a NF structure that allows explicit density estimation and efficient sampling. We name the algorithm Sliced Iterative Normalizing Flow (SINF). Our objective function is inspired by the Wasserstein distance, which is defined as the minimal transport cost and has been widely used in the loss functions of generative models (Arjovsky & Bottou, 2017; Tolstikhin et al., 2017). We propose a new metric, the max K-sliced Wasserstein distance, which enables the algorithm to scale well to high dimensions.

## 2. 
Background

The $p$-Wasserstein distance, $p \in \lbrack 1,\infty )$ , between two probability distributions ${p}_{1}$ and ${p}_{2}$ is defined as:

$$
{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = \mathop{\inf }\limits_{{\gamma \in \Pi \left( {{p}_{1},{p}_{2}}\right) }}{\left( {\mathbb{E}}_{\left( {x, y}\right) \sim \gamma }\left\lbrack \parallel x - y{\parallel }^{p}\right\rbrack \right) }^{\frac{1}{p}}, \tag{1}
$$

where $\Pi \left( {{p}_{1},{p}_{2}}\right)$ is the set of all possible joint distributions $\gamma \left( {x, y}\right)$ with marginal distributions ${p}_{1}$ and ${p}_{2}$ . In 1D the Wasserstein distance has a closed form solution via Cumulative Distribution Functions (CDFs), but this evaluation is intractable in high dimensions. An alternative metric, the Sliced $p$-Wasserstein Distance (SWD), is defined as:

$$
S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = {\left( {\int }_{{\mathbb{S}}^{d - 1}}{W}_{p}^{p}\left( \mathcal{R}{p}_{1}\left( \cdot ,\theta \right) ,\mathcal{R}{p}_{2}\left( \cdot ,\theta \right) \right) d\theta \right) }^{\frac{1}{p}}, \tag{2}
$$

where ${\mathbb{S}}^{d - 1}$ denotes the unit sphere ${\theta }_{1}^{2} + \cdots + {\theta }_{d}^{2} = 1$ in ${\mathbb{R}}^{d}$ , ${d\theta }$ is the normalized uniform measure on ${\mathbb{S}}^{d - 1}$ , and $\mathcal{R}$ denotes the Radon transform. The definition of the Radon transform can be found in the appendix. For a given $\theta$ , the function $\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right) : \mathbb{R} \rightarrow \mathbb{R}$ is essentially the slice (or projection) of $p\left( x\right)$ on axis $\theta$ .

The SWD can be calculated by approximating the high dimensional integral with Monte Carlo samples. However, in high dimensions a large number of projections is required to accurately estimate the SWD. This motivates the use of the maximum Sliced $p$-Wasserstein Distance (max SWD):

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

$$
\max \text{-}S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = \mathop{\max }\limits_{{\theta \in {\mathbb{S}}^{d - 1}}}{W}_{p}\left( {\mathcal{R}{p}_{1}\left( {\cdot ,\theta }\right) ,\mathcal{R}{p}_{2}\left( {\cdot ,\theta }\right) }\right) , \tag{3}
$$

which is the maximum of the Wasserstein distance of the 1D marginalized distributions over all possible directions.

## 3. Sliced Iterative Normalizing Flows

We consider the general problem of building a NF that maps an arbitrary PDF ${p}_{1}\left( x\right)$ to another arbitrary PDF ${p}_{2}\left( x\right)$ of the same dimensionality. We first introduce our objective function in Section 3.1. The general SINF algorithm is presented in Section 3.2. We then consider the special cases of ${p}_{1}$ and ${p}_{2}$ being standard Normal distributions in Sections 3.3 and 3.4, respectively.

### 3.1. Maximum K-sliced Wasserstein distance

We generalize the idea of the maximum SWD and propose the maximum K-Sliced $p$-Wasserstein Distance (max K-SWD):

$$
\max \text{-}K\text{-}S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = \mathop{\max }\limits_{\left\{ {\theta }_{1},\cdots ,{\theta }_{K}\right\} }{\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}\right) \right) \right) }^{\frac{1}{p}}. \tag{4}
$$

In this work we fix $p = 2$ . The max K-SWD defines $K$ orthogonal axes $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K}}\right\}$ along which the marginal distributions of ${p}_{1}$ and ${p}_{2}$ are most different, providing a natural choice for performing 1D marginal matching in our algorithm (see Section 3.2). The proof that max K-SWD is a proper distance and the details of its estimation are provided in the appendix.

### 3.2. 
Proposed SINF algorithm + +The SINF algorithm is based on iteratively matching the 1D marginalized distribution of ${p}_{1}$ to ${p}_{2}$ . This is motivated by the inverse Radon Transform (see Appendix) and Cramér-Wold theorem, which suggest that matching the high dimensional distributions is equivalent to matching the 1D slices on all possible directions, decomposing the high dimensional problem into a series of $1\mathrm{D}$ problems. Given a set of i.i.d. samples $X$ drawn from ${p}_{1}$ , in each iteration, a set of 1D marginal transformations ${\left\{ {\Psi }_{k}\right\} }_{k = 1}^{K}{}^{1}(K \leq d$ where $d$ is the dimensionality of the dataset) are applied to the samples on orthogonal axes ${\left\{ {\theta }_{k}\right\} }_{k = 1}^{K}$ to match the 1D marginalized PDF of ${p}_{2}$ along those axes. Let $A = \left\lbrack {{\theta }_{1},\cdots ,{\theta }_{K}}\right\rbrack$ be the weight matrix $\left( {{A}^{T}A = {I}_{K}}\right)$ , the transformation at iteration $l$ of samples ${X}_{l}$ can be written as + +$$ +{X}_{l + 1} = {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + {X}_{l}^{ \bot }, \tag{5} +$$ + +Algorithm 1 Sliced Iterative Normalizing Flow + +--- + +Input: ${\left\{ {x}_{i} \sim {p}_{1}\right\} }_{i = 1}^{N},{\left\{ {y}_{i} \sim {p}_{2}\right\} }_{i = 1}^{N}, K$ , number of + +iteration ${L}_{\text{iter }}$ + +for $l = 1$ to ${L}_{\text{iter }}$ do + + ${A}_{l} = \max \mathrm{K} - \operatorname{SWD}\left( {{x}_{i},{y}_{i}, K}\right)$ + + for $k = 1$ to $K$ do + + ${\theta }_{k} = {A}_{l}\left\lbrack { : , k}\right\rbrack$ + + Compute ${\widehat{x}}_{i} = {\theta }_{k} \cdot {x}_{i}$ and ${\widehat{y}}_{i} = {\theta }_{k} \cdot {y}_{i}$ for each $i$ + + ${\widetilde{x}}_{m} = \operatorname{quantiles}\left( {\operatorname{PDF}\left( {\widehat{x}}_{i}\right) }\right)$ + + ${\widetilde{y}}_{m} = \operatorname{quantiles}\left( {\operatorname{PDF}\left( {\widehat{y}}_{i}\right) }\right)$ + + ${\psi }_{l, k} =$ RationalQuadraticSpline 
$\left( {{\widetilde{x}}_{m},{\widetilde{y}}_{m}}\right)$

end for

${\mathbf{\Psi }}_{l} = \left\lbrack {{\Psi }_{l1},\cdots ,{\Psi }_{lK}}\right\rbrack$

Update ${x}_{i} = {x}_{i} - {A}_{l}{A}_{l}^{T}{x}_{i} + {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{x}_{i}}\right)$

end for

---

where ${X}_{l}^{ \bot } = {X}_{l} - {A}_{l}{A}_{l}^{T}{X}_{l}$. ${\mathbf{\Psi }}_{l} = {\left\lbrack {\Psi }_{l1},\cdots ,{\Psi }_{lK}\right\rbrack }^{T}$ is the marginal mapping of each dimension of ${A}_{l}^{T}{X}_{l}$, and its components are required to be monotonic and differentiable. The inverse and Jacobian determinant of transformation 5 can be easily evaluated (see appendix).

The weight matrix ${A}_{l}$ and the marginal transformations ${\mathbf{\Psi }}_{l}$ are determined by iteratively minimizing the max K-SWD (Equation 4) between the transformed ${p}_{1}$ and ${p}_{2}$. Specifically, we propose to iteratively solve for the orthogonal axes $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K}}\right\}$ in max K-SWD, and then apply 1D marginal matching along those axes to minimize the max K-SWD.

Let ${p}_{1, l}$ be the transformed ${p}_{1}$ at iteration $l$. The $k$-th component of ${\mathbf{\Psi }}_{l}$, ${\Psi }_{l, k}$, maps the 1D marginalized PDF of ${p}_{1, l}$ to that of ${p}_{2}$ and has an OT solution:

$$
{\Psi }_{l, k}\left( x\right) = {F}_{k}^{-1}\left( {{G}_{l, k}\left( x\right) }\right) , \tag{6}
$$

where ${G}_{l, k}\left( x\right) = {\int }_{-\infty }^{x}\left( {\mathcal{R}{p}_{1, l}}\right) \left( {t,{\theta }_{k}}\right) {dt}$ and ${F}_{k}\left( x\right) = {\int }_{-\infty }^{x}\left( {\mathcal{R}{p}_{2}}\right) \left( {t,{\theta }_{k}}\right) {dt}$ are the CDFs of ${p}_{1, l}$ and ${p}_{2}$ on axis ${\theta }_{k}$, respectively. The CDFs can be estimated using the quantiles of the samples (in SIG, Section 3.3), or using Kernel Density Estimation (KDE, in GIS, Section 3.4). Equation 6 is monotonic and therefore invertible.
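The 1D OT map of Equation 6 can be sketched with plain empirical quantiles. The following is a minimal NumPy illustration, assuming both CDFs are estimated directly from sorted samples; the helper name `marginal_ot_map` and the bimodal toy target are our illustrative choices, not part of the paper, which fits rational quadratic splines to the quantiles instead:

```python
import numpy as np

def marginal_ot_map(x_src, y_tgt, x_query):
    """Empirical version of Eq. 6, Psi(x) = F^{-1}(G(x)):
    G is the CDF of the source samples, F the CDF of the target samples,
    both estimated from sample quantiles."""
    x_sorted = np.sort(x_src)
    y_sorted = np.sort(y_tgt)
    # Empirical CDF levels associated with the sorted samples.
    lx = (np.arange(len(x_sorted)) + 0.5) / len(x_sorted)
    ly = (np.arange(len(y_sorted)) + 0.5) / len(y_sorted)
    u = np.interp(x_query, x_sorted, lx)   # u = G(x_query)
    return np.interp(u, ly, y_sorted)      # F^{-1}(u)

# Toy example: map standard-normal samples onto a bimodal target.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.concatenate([rng.normal(-3, 0.5, 2500), rng.normal(3, 0.5, 2500)])
z = marginal_ot_map(x, y, x)               # z is approximately bimodal
```

Because both interpolation tables are sorted, the resulting map is monotonic, which is exactly the property that makes each SINF layer invertible.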
We choose to parametrize it with monotonic rational quadratic splines (Gregory & Delbourgo, 1982; Durkan et al., 2019). Details about the splines are shown in the appendix.

The proposed algorithm iteratively minimizes the max K-SWD between the transformed ${p}_{1}$ and ${p}_{2}$. The orthonormal vectors $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K}}\right\}$ specify $K$ axes along which the marginalized PDFs of ${p}_{1, l}$ and ${p}_{2}$ differ most, which maximizes the gain at each iteration and improves the efficiency of the algorithm. The model is able to converge with two orders of magnitude fewer iterations than

---

${}^{1}$ Notation definition: In this paper we use $l, k, j$ and $m$ to represent different iterations of the algorithm, different axes ${\theta }_{k}$, different gradient descent iterations of the max K-SWD calculation (see Algorithm 2), and different knots in the spline functions of the 1D transformations, respectively.

---

Table 1. FID scores on different datasets (lower is better). The errors are generally smaller than the differences.
| Category | Method | MNIST | Fashion | CIFAR-10 | CelebA |
| --- | --- | --- | --- | --- | --- |
| iterative | SWF | 225.1 | 207.6 | - | - |
| iterative | SIG ($T = 1$) (this work) | 4.5 | 13.7 | 66.5 | 37.3 |
| adversarial training | Flow-GAN (ADV) | 155.6 | 216.9 | 71.1 | - |
| adversarial training | WGAN | 6.7 | 21.5 | 55.2 | 41.3 |
| adversarial training | WGAN GP | 20.3 | 24.5 | 55.8 | 30.0 |
| adversarial training | Best default GAN | $\sim 10$ | $\sim 32$ | $\sim 70$ | $\sim 48$ |
| AE based | SWAE (Wu et al., 2019) | - | - | 107.9 | 48.9 |
| AE based | SWAE (Kolouri et al., 2018) | 29.8 | 74.3 | 141.9 | 53.9 |
| AE based | CWAE | 23.6 | 57.1 | 120.0 | 49.7 |
| AE based | PAE | - | 28.0 | - | 49.2 |
| AE based | two-stage VAE | 12.6 | 29.3 | 96.1 | 44.4 |
Table 2. Comparison between SIG and GIS
| Model | SIG | GIS |
| --- | --- | --- |
| Initial PDF ${p}_{1}$ | Gaussian | ${p}_{\text{data}}$ |
| Final PDF ${p}_{2}$ | ${p}_{\text{data}}$ | Gaussian |
| Training | Iteratively maps Gaussian to ${p}_{\text{data}}$ | Iteratively maps ${p}_{\text{data}}$ to Gaussian |
| NF structure | Yes | Yes |
| Advantage | Good samples | Good density estimation |
random axes, and it also leads to better sample quality. This is because, as the dimensionality $d$ grows, the number of slices $\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right)$ required to approximate $p\left( x\right)$ using the inverse Radon formula scales as ${L}^{d - 1}$ (Kolouri et al., 2015), where $L$ is the number of slices needed to approximate a similarly smooth 2D distribution. Therefore, if the $\theta$ are chosen randomly, it takes a large number of iterations to converge in high dimensions due to the curse of dimensionality. Our objective function mitigates this curse of dimensionality by identifying the most relevant directions.

Unlike the KL divergence, which is invariant under flow transformations, the max K-SWD differs between data space and latent space. The direction in which the flow model is built is therefore of key importance. In the next two sections we discuss two different ways of building the flow, which are suited to sample generation and density estimation, respectively.

### 3.3. Sliced Iterative Generator (SIG)

For the Sliced Iterative Generator (SIG), ${p}_{1}$ is a standard Normal distribution and ${p}_{2}$ is the target distribution. The model iteratively maps the Normal distribution to the target distribution using 1D slice transformations. Specifically, one first draws a set of samples from the standard Normal distribution, and then iteratively updates the samples following Equation 5. SIG directly minimizes the max K-SWD between the generated distribution and the target distribution, and is able to generate high quality samples.

### 3.4. Gaussianizing Iterative Slicing (GIS)

For Gaussianizing Iterative Slicing (GIS), ${p}_{1}$ is the target distribution and ${p}_{2}$ is a standard Normal distribution. The model iteratively Gaussianizes the target distribution, and the mapping is learned in the reverse direction of SIG.
In GIS, the max K-SWD between the latent data and the Normal distribution is minimized, so the model performs well in density estimation, even though its objective is not $p\left( x\right)$. A comparison between SIG and GIS is shown in Table 2.

## 4. Experiments

### 4.1. Density estimation $p\left( x\right)$ of small datasets

We perform density estimation with GIS on four UCI datasets (Lichman et al., 2013) and BSDS300 (Martin et al., 2001). The data preprocessing of the UCI datasets and BSDS300 follows Papamakarios et al. (2017). We vary the size of the training set ${N}_{\text{train}}$ from ${10}^{2}$ to ${10}^{5}$ to test the model performance over a wide range of dataset sizes. In Figure 1 we compare GIS to other NF models, GF (Meng et al., 2020), FFJORD (Grathwohl et al., 2018), MAF (Papamakarios et al., 2017) and RQ-NSF (AR) (Durkan et al., 2019), as well as to KDE. Some non-GIS NF models collapsed during training or exceeded our GPU memory, and are not shown in the plot. The results in Figure 1 show that GIS is more stable than the other NFs and outperforms them on small training sets. This highlights that GIS is less sensitive to hyperparameter optimization and achieves good performance out of the box. Training time varies with data size, but is generally lower than that of other NFs on small datasets.

![01963e33-e365-7032-871a-368a64aa692f_3_126_187_734_923_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_3_126_187_734_923_0.jpg)

Figure 1. Density estimation versus training set size. The legend in panel (a) applies to the other panels as well. Higher is better: for training set sizes of 100-1000, GIS has the best performance in all cases.

### 4.2. Generative modeling of images

We evaluate SIG as a generative model of images on the following four datasets: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015). In Figure 2 we show samples for these four datasets.
For the MNIST, Fashion-MNIST and CelebA datasets we show samples from the model with reduced temperature $T = {0.85}$ (i.e., sampling from a Gaussian distribution with standard deviation $T = {0.85}$ in latent space), which slightly improves the sample quality (Parmar et al., 2018; Kingma & Dhariwal, 2018). We report the final FID scores (calculated using temperature $T = 1$) in Table 1, where we compare our results with the similar algorithms SWF (Liutkus et al., 2018) and Flow-GAN (ADV) (Grover et al., 2018). We also list the FID scores of some other generative models for comparison, including models using slice-based distances, SWAE (Wu et al., 2019; Kolouri et al., 2018) and CWAE (Knop et al., 2018), Wasserstein GAN models (Arjovsky et al., 2017; Gulrajani et al., 2017), and other GAN and AE-based models, PAE (Böhm & Seljak, 2020) and two-stage VAE (Dai & Wipf, 2019; Xiao et al., 2019). We note that previous iterative algorithms were unable to produce good samples on high dimensional image datasets. In contrast, SIG obtains the best FID scores on MNIST and Fashion-MNIST, while on CIFAR-10 and CelebA it also outperforms similar algorithms and AE-based models, and achieves results comparable to GANs.

![01963e33-e365-7032-871a-368a64aa692f_3_896_187_701_802_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_3_896_187_701_802_0.jpg)

Figure 2. Random samples from SIG.

## 5. Conclusions

We introduce the sliced iterative normalizing flow (SINF), which iteratively transforms the data distribution to a Gaussian (GIS) or the other way around (SIG) using OT. To the best of our knowledge, SIG is the first greedy deep learning algorithm that is competitive with the SOTA generators in high dimensions, while GIS achieves density estimation results comparable to current NF models, but is more stable, faster to train, and achieves higher $p\left( x\right)$ when trained on small training sets, even though it does not train on $p\left( x\right)$.
SINF has very few hyperparameters and is very insensitive to their choice. SINF is related to several previous models, as discussed in the Appendix. SINF has a deep neural network architecture, but its approach deviates significantly from the current Deep Learning paradigm, as it does not use concepts such as mini-batching, stochastic gradient descent and gradient back-propagation through deep layers. SINF is an existence proof that greedy Deep Learning without these ingredients can be SOTA for modern high dimensional ML applications. SINF may be of particular interest in applications where robustness, insensitivity to hyperparameters, small data size, and speed are of primary importance.

## References

Arjovsky, M. and Bottou, L. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.

Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.

Böhm, V. and Seljak, U. Probabilistic auto-encoder. arXiv preprint arXiv:2006.05479, 2020.

Chen, S. S. and Gopinath, R. A. Gaussianization. In Advances in Neural Information Processing Systems, pp. 423-429, 2001.

Choi, H., Jang, E., and Alemi, A. A. WAIC, but why? Generative ensembles for robust anomaly detection. arXiv preprint arXiv:1810.01392, 2018.

Cramér, H. and Wold, H. Some theorems on distribution functions. Journal of the London Mathematical Society, 1(4):290-294, 1936.

Dai, B. and Wipf, D. Diagnosing and enhancing VAE models. arXiv preprint arXiv:1903.05789, 2019.

Deshpande, I., Zhang, Z., and Schwing, A. G. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3483-3491, 2018.

Deshpande, I., Hu, Y.-T., Sun, R., Pyrros, A., Siddiqui, N., Koyejo, S., Zhao, Z., Forsyth, D., and Schwing, A. G. Max-sliced Wasserstein distance and its use for GANs.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10648-10656, 2019. + +Dinh, L., Krueger, D., and Bengio, Y. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. + +Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. Neural spline flows. In Advances in Neural Information Processing Systems, pp. 7509-7520, 2019. + +Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014. + +Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018. + +Gregory, J. and Delbourgo, R. Piecewise rational quadratic + +interpolation to monotonic data. IMA Journal of Numerical Analysis, 2(2):123-130, 1982. + +Grover, A., Dhar, M., and Ermon, S. Flow-gan: Combining maximum likelihood and adversarial learning in generative models. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. + +Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767-5777, 2017. + +Helgason, S. Integral geometry and Radon transforms. Springer Science & Business Media, 2010. + +Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1\mathrm{\;x}1$ convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018. + +Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. + +Knop, S., Tabor, J., Spurek, P., Podolak, I., Mazur, M., and Jastrzebski, S. Cramer-wold autoencoder. arXiv preprint arXiv:1805.09235, 2018. 
+ +Kolouri, S., Park, S. R., and Rohde, G. K. The radon cumulative distribution transform and its application to image classification. IEEE transactions on image processing, 25(2):920-934, 2015. + +Kolouri, S., Pope, P. E., Martin, C. E., and Rohde, G. K. Sliced-wasserstein autoencoder: An embarrassingly simple generative model. arXiv preprint arXiv:1804.01947, 2018. + +Kolouri, S., Nadjahi, K., Simsekli, U., Badeau, R., and Rohde, G. Generalized sliced wasserstein distances. In Advances in Neural Information Processing Systems, pp. 261-272, 2019. + +Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009. + +Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015. + +Laparra, V., Camps-Valls, G., and Malo, J. Iterative gaus-sianization: from ica to random rotations. IEEE transactions on neural networks, 22(4):537-549, 2011. + +LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. + +Lichman, M. et al. Uci machine learning repository, 2013. + +Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. + +Liutkus, A., Şimşekli, U., Majewski, S., Durmus, A., and Stöter, F.-R. Sliced-wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions. arXiv preprint arXiv:1806.08141, 2018. + +Martin, D., Fowlkes, C., Tal, D., and Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, volume 2, pp. 416-423. IEEE, 2001. + +Meng, C., Ke, Y., Zhang, J., Zhang, M., Zhong, W., and Ma, P. 
Large-scale optimal transport map estimation using projection pursuit. In Advances in Neural Information Processing Systems 32. 2019. + +Meng, C., Song, Y., Song, J., and Ermon, S. Gaussianization flows. arXiv preprint arXiv:2003.01941, 2020. + +Nadjahi, K., Durmus, A., Chizat, L., Kolouri, S., Shahram-pour, S., and Simsekli, U. Statistical and topological properties of sliced probability divergences. Advances in Neural Information Processing Systems, 33, 2020. + +Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and Lakshminarayanan, B. Do deep generative models know what they don't know? arXiv preprint arXiv:1810.09136, 2018. + +Nguyen, K., Ho, N., Pham, T., and Bui, H. Distributional sliced-wasserstein and applications to generative modeling. arXiv preprint arXiv:2002.07367, 2020a. + +Nguyen, K., Nguyen, S., Ho, N., Pham, T., and Bui, H. Improving relational regularized autoencoders with spherical sliced fused gromov wasserstein. arXiv preprint arXiv:2010.01787, 2020b. + +Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017. + +Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, Ł., Shazeer, N., Ku, A., and Tran, D. Image transformer. arXiv preprint arXiv:1802.05751, 2018. + +Pitié, F., Kokaram, A. C., and Dahyot, R. Automated colour grading using colour distribution transfer. Computer Vision and Image Understanding, 107(1-2):123-137, 2007. + +Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative + +adversarial networks. arXiv preprint arXiv:1511.06434, 2015. + +Ren, J., Liu, P. J., Fertig, E., Snoek, J., Poplin, R., Depristo, M., Dillon, J., and Lakshminarayanan, B. Likelihood ratios for out-of-distribution detection. In Advances in Neural Information Processing Systems, pp. 14680-14691, 2019. + +Rezende, D. J. and Mohamed, S. Variational inference with normalizing flows. 
arXiv preprint arXiv:1505.05770, 2015. + +Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. + +Salimans, T., Karpathy, A., Chen, X., and Kingma, D. P. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017. + +Tagare, H. D. Notes on optimization on stiefel manifolds. In Technical report, Technical report. Yale University, 2011. + +Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558, 2017. + +Wu, J., Huang, Z., Acharya, D., Li, W., Thoma, J., Paudel, D. P., and Gool, L. V. Sliced wasserstein generative models. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3713-3722, 2019. + +Xiao, H., Rasul, K., and Vollgraf, R. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. + +Xiao, Z., Yan, Q., Chen, Y., and Amit, Y. Generative latent flow: A framework for non-adversarial image generation. arXiv preprint arXiv:1905.10485, 2019. + +Xiao, Z., Yan, Q., and Amit, Y. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. arXiv preprint arXiv:2003.02977, 2020. + +## A. Radon transform + +Let ${\mathbb{L}}^{1}\left( X\right)$ be the space of absolute integrable functions on $X$ . 
The Radon transform $\mathcal{R} : {\mathbb{L}}^{1}\left( {\mathbb{R}}^{d}\right) \rightarrow {\mathbb{L}}^{1}\left( {\mathbb{R} \times {\mathbb{S}}^{d - 1}}\right)$ is defined as

$$
\left( {\mathcal{R}p}\right) \left( {t,\theta }\right) = {\int }_{{\mathbb{R}}^{d}}p\left( x\right) \delta \left( {t-\langle x,\theta \rangle }\right) {dx}, \tag{7}
$$

where ${\mathbb{S}}^{d - 1}$ denotes the unit sphere ${\theta }_{1}^{2} + \cdots + {\theta }_{d}^{2} = 1$ in ${\mathbb{R}}^{d}$, $\delta \left( \cdot \right)$ is the Dirac delta function, and $\langle \cdot , \cdot \rangle$ is the standard inner product in ${\mathbb{R}}^{d}$. For a given $\theta$, the function $\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right) : \mathbb{R} \rightarrow \mathbb{R}$ is essentially the slice (or projection) of $p\left( x\right)$ on axis $\theta$.

Note that the Radon transform $\mathcal{R}$ is invertible. Its inverse, also known as the filtered back-projection formula, is given by (Helgason, 2010; Kolouri et al., 2019)

$$
{\mathcal{R}}^{-1}\left( {\left( {\mathcal{R}p}\right) \left( {t,\theta }\right) }\right) \left( x\right) = {\int }_{{\mathbb{S}}^{d - 1}}\left( {\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right) * h}\right) \left( {\langle x,\theta \rangle }\right) {d\theta }, \tag{8}
$$

where $*$ is the convolution operator, and the convolution kernel $h$ has the Fourier transform $\widehat{h}\left( k\right) = c{\left| k\right| }^{d - 1}$. The inverse Radon transform provides a practical way to reconstruct the original function $p\left( x\right)$ from its 1D slices $\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right)$, and is widely used in medical imaging. This inverse formula implies that if the 1D slices of two functions agree along all axes, the two functions are identical. This is also known as the Cramér-Wold theorem (Cramér & Wold, 1936).
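In a sample-based setting, a slice $\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right)$ is simply the 1D marginal of the projections $\langle x,\theta \rangle$. A minimal NumPy sketch (the helper `slice_density`, the histogram estimator, and the Gaussian toy data are our illustrative choices, not from the paper):

```python
import numpy as np

def slice_density(samples, theta, bins=50, lo=-4.0, hi=4.0):
    """Estimate the Radon slice (Rp)(., theta) from samples.
    By Eq. 7 the slice is the 1D marginal of the projections <x, theta>,
    estimated here with a normalized histogram."""
    t = samples @ theta                          # projections <x_i, theta>
    hist, edges = np.histogram(t, bins=bins, range=(lo, hi), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return hist, centers

rng = np.random.default_rng(0)
x = rng.normal(size=(20000, 3))                  # isotropic Gaussian in R^3
p_slice, centers = slice_density(x, np.array([1.0, 0.0, 0.0]))
# Every slice of an isotropic standard Gaussian is again a standard
# 1D Gaussian, so p_slice should peak near 1/sqrt(2*pi) at t = 0.
```

In SIG and GIS these 1D marginals (or rather their CDFs) are exactly what gets matched along each chosen axis.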
### B. max K-SWD

The optimization in max K-SWD is performed under the constraints that $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K}}\right\}$ are orthonormal vectors, or equivalently, ${A}^{T}A = {I}_{K}$, where $A = \left\lbrack {{\theta }_{1},\cdots ,{\theta }_{K}}\right\rbrack$ is the matrix whose $i$-th column vector is ${\theta }_{i}$. Mathematically, the set of all possible $A$ matrices is called the Stiefel manifold ${V}_{K}\left( {\mathbb{R}}^{d}\right) = \left\{ {A \in {\mathbb{R}}^{d \times K} : {A}^{T}A = {I}_{K}}\right\}$. As suggested by Tagare (2011), the optimization of the matrix $A$ can be performed by gradient ascent on the Stiefel manifold:

$$
{A}_{\left( j + 1\right) } = {\left( {I}_{d} + \frac{\tau }{2}{B}_{\left( j\right) }\right) }^{-1}\left( {{I}_{d} - \frac{\tau }{2}{B}_{\left( j\right) }}\right) {A}_{\left( j\right) }, \tag{9}
$$

where ${A}_{\left( j\right) }$ is the weight matrix at gradient descent iteration $j$ (which is different from the iteration $l$ of the algorithm), $\tau$ is the learning rate, which is determined by backtracking line search, $B = G{A}^{T} - A{G}^{T}$, and $G$ is the negative gradient matrix $G = \left\lbrack {-\frac{\partial \mathcal{F}}{\partial {A}_{p, q}}}\right\rbrack \in {\mathbb{R}}^{d \times K}$. Equation 9 has the properties that ${A}_{\left( j + 1\right) } \in {V}_{K}\left( {\mathbb{R}}^{d}\right)$, and that the tangent vector ${\left. \frac{d{A}_{\left( j + 1\right) }}{d\tau }\right| }_{\tau = 0}$ is the projection of the gradient $\left\lbrack \frac{\partial \mathcal{F}}{\partial {A}_{p, q}}\right\rbrack$ onto ${T}_{{A}_{\left( j\right) }}\left( {{V}_{K}\left( {\mathbb{R}}^{d}\right) }\right)$ (the tangent space of ${V}_{K}\left( {\mathbb{R}}^{d}\right)$ at ${A}_{\left( j\right) }$) under the canonical inner product (Tagare, 2011).

However, Equation 9 requires the inversion of a $d \times d$ matrix, which is computationally expensive in high dimensions.
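A direct implementation of the Cayley update in Equation 9 makes this cost concrete. The following is a minimal NumPy sketch; the random matrix standing in for the true gradient $G$ is our illustrative assumption (in the actual algorithm, $G$ comes from differentiating the max K-SWD objective, as in Algorithm 2):

```python
import numpy as np

def cayley_step(A, G, tau):
    """One gradient-ascent step on the Stiefel manifold (Eq. 9).
    A: d x K with A^T A = I_K; G: negative gradient direction.
    Note the d x d solve, which motivates the Woodbury form of Eq. 10."""
    d = A.shape[0]
    B = G @ A.T - A @ G.T                     # skew-symmetric, d x d
    I = np.eye(d)
    return np.linalg.solve(I + 0.5 * tau * B, (I - 0.5 * tau * B) @ A)

rng = np.random.default_rng(0)
d, K = 8, 3
A, _ = np.linalg.qr(rng.normal(size=(d, K)))  # random point on V_K(R^d)
G = rng.normal(size=(d, K))                   # stand-in for the true gradient
A_next = cayley_step(A, G, tau=0.1)
# The Cayley transform of a skew-symmetric B is orthogonal, so A_next
# stays exactly on the Stiefel manifold (A_next^T A_next = I_K).
```

Because $B$ is skew-symmetric, the iterate remains exactly orthonormal, with no re-orthogonalization needed.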
Algorithm 2 max K-SWD

---

Input: ${\left\{ {x}_{i} \sim {p}_{1}\right\} }_{i = 1}^{N},{\left\{ {y}_{i} \sim {p}_{2}\right\} }_{i = 1}^{N}, K$, order $p$, max iteration ${J}_{\text{maxiter}}$

Randomly initialize $A \in {V}_{K}\left( {\mathbb{R}}^{d}\right)$

for $j = 1$ to ${J}_{\text{maxiter}}$ do

Initialize $D = 0$

for $k = 1$ to $K$ do

${\theta }_{k} = A\left\lbrack { : , k}\right\rbrack$

Compute ${\widehat{x}}_{i} = {\theta }_{k} \cdot {x}_{i}$ and ${\widehat{y}}_{i} = {\theta }_{k} \cdot {y}_{i}$ for each $i$

Sort ${\widehat{x}}_{i}$ and ${\widehat{y}}_{i}$ in ascending order s.t. ${\widehat{x}}_{\left\lbrack n\right\rbrack } \leq {\widehat{x}}_{\left\lbrack {n + 1}\right\rbrack }$ and ${\widehat{y}}_{\left\lbrack n\right\rbrack } \leq {\widehat{y}}_{\left\lbrack {n + 1}\right\rbrack }$

$D = D + \frac{1}{KN}\mathop{\sum }\limits_{{n = 1}}^{N}{\left| {\widehat{x}}_{\left\lbrack n\right\rbrack } - {\widehat{y}}_{\left\lbrack n\right\rbrack }\right| }^{p}$

end for

$G = \left\lbrack {-\frac{\partial D}{\partial {A}_{p, q}}}\right\rbrack , U = \left\lbrack {G, A}\right\rbrack , V = \left\lbrack {A, - G}\right\rbrack$

Determine learning rate $\tau$ with backtracking line search

$A = A - {\tau U}{\left( {I}_{2K} + \frac{\tau }{2}{V}^{T}U\right) }^{-1}{V}^{T}A$

if $A$ has converged then

Early stop

end if

end for

Output: ${D}^{\frac{1}{p}} \approx \max - K - S{W}_{p}, A \approx \left\lbrack {{\theta }_{1},\cdots ,{\theta }_{K}}\right\rbrack$

---

The matrix inversion can be simplified using the Sherman-Morrison-Woodbury formula, which results in the following equation (Tagare, 2011):

$$
{A}_{\left( j + 1\right) } = {A}_{\left( j\right) } - \tau {U}_{\left( j\right) }{\left( {I}_{2K} + \frac{\tau }{2}{V}_{\left( j\right) }^{T}{U}_{\left( j\right) }\right) }^{-1}{V}_{\left( j\right) }^{T}{A}_{\left( j\right) }, \tag{10}
$$

where $U = \left\lbrack {G, A}\right\rbrack$ (the
concatenation of the columns of $G$ and $A$) and $V = \left\lbrack {A, - G}\right\rbrack$. Equation 10 only involves the inversion of a ${2K} \times {2K}$ matrix, where $K$ is the number of axes to which a marginal transformation is applied in each iteration. For high dimensional data (e.g. images), we use a relatively small $K$ to avoid inverting large matrices. A large $K$ leads to faster training, but one converges to similar results with a small $K$ using more iterations.

The procedure for estimating max K-SWD and $A$ is summarized in Algorithm 2.

Proposition 1. Let ${P}_{p}\left( \Omega \right)$ be the set of Borel probability measures with finite $p$-th moment on a metric space $\left( {\Omega , d}\right)$. The maximum K-sliced p-Wasserstein distance is a metric over ${P}_{p}\left( \Omega \right)$.

Proof. We first prove the triangle inequality. Let ${\mu }_{1}$, ${\mu }_{2}$ and ${\mu }_{3}$ be probability measures in ${P}_{p}\left( \Omega \right)$ with probability density functions ${p}_{1},{p}_{2}$ and ${p}_{3}$, respectively.
Let $\left\{ {{\theta }_{1}^{ * },\cdots ,{\theta }_{K}^{ * }}\right\}$ be the orthonormal set maximizing ${\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}\right) ,\left( \mathcal{R}{p}_{3}\right) \left( \cdot ,{\theta }_{k}\right) \right) \right) }^{\frac{1}{p}}$; then

$$
\max - K - S{W}_{p}\left( {{p}_{1},{p}_{3}}\right) = \mathop{\max }\limits_{\substack{\left\{ {\theta }_{1},\cdots ,{\theta }_{K}\right\} \\ \text{orthonormal}}}{\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}\right) ,\left( \mathcal{R}{p}_{3}\right) \left( \cdot ,{\theta }_{k}\right) \right) \right) }^{\frac{1}{p}}
$$

$$
= {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{3}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}}
$$

$$
\leq {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{\left\lbrack {W}_{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) + {W}_{p}\left( \left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{3}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right\rbrack }^{p}\right) }^{\frac{1}{p}} \tag{11}
$$

$$
\leq {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}} + {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{3}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}}
$$

$$
\leq \max - K - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) + \max - K - S{W}_{p}\left( {{p}_{2},{p}_{3}}\right) ,
$$

where the first inequality comes from the triangle inequality of the Wasserstein distance, the second inequality follows from the Minkowski inequality, and the last inequality holds because taking the maximum over orthonormal sets in each of the two terms can only increase them. Therefore $\max - K - S{W}_{p}$ satisfies the triangle inequality.

Now we prove the identity of indiscernibles.
For any probability measures ${\mu }_{1}$ and ${\mu }_{2}$ in ${P}_{p}\left( \Omega \right)$ with probability density function ${p}_{1}$ and ${p}_{2}$ , let + +$\widehat{\theta } = \arg \mathop{\max }\limits_{{\theta \in {\mathbb{S}}^{d - 1}}}{W}_{p}\left( {\left( {\mathcal{R}{p}_{1}}\right) \left( {\cdot ,\theta }\right) ,\left( {\mathcal{R}{p}_{2}}\right) \left( {\cdot ,\theta }\right) }\right)$ , and + +$\left\{ {{\theta }_{1}^{ * },\cdots ,{\theta }_{K}^{ * }}\right\} = \arg \mathop{\max }\limits_{\left\{ {\theta }_{1},\cdots ,{\theta }_{K}\right\} }$ orthonormal + +$$ +{\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}\right) \right) \right) }^{\frac{1}{p}}\text{, we have} +$$ + +$$ +\max - K - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) +$$ + +$$ += {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}} +$$ + +$$ +\leq {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,\widehat{\theta }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,\widehat{\theta }\right) \right) \right) }^{\frac{1}{p}} \tag{12} +$$ + +$$ += {W}_{p}\left( {\left( {\mathcal{R}{p}_{1}}\right) \left( {\cdot ,\widehat{\theta }}\right) ,\left( {\mathcal{R}{p}_{2}}\right) \left( {\cdot ,\widehat{\theta }}\right) }\right) +$$ + +$$ += \max - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) \text{.} +$$ + +On the other hand, let $\left\{ {\widehat{\theta },{\widetilde{\theta }}_{2},\cdots ,{\widetilde{\theta }}_{K}}\right\}$ be a set of orthonormal vectors in ${\mathbb{S}}^{d - 1}$ where the first element is $\widehat{\theta }$ , we have + +$$ +\max - K - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) 
+$$ + +$$ += {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}} +$$ + +$$ +\geq \left( {\frac{1}{K}{W}_{p}^{p}\left( {\left( {\mathcal{R}{p}_{1}}\right) \left( {\cdot ,\widehat{\theta }}\right) ,\left( {\mathcal{R}{p}_{2}}\right) \left( {\cdot ,\widehat{\theta }}\right) }\right) }\right. +$$ + +$$ ++ {\left. \frac{1}{K}\mathop{\sum }\limits_{{k = 2}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\widetilde{\theta }}_{k}\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\widetilde{\theta }}_{k}\right) \right) \right) }^{\frac{1}{p}} \tag{13} +$$ + +$$ +\geq {\left( \frac{1}{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,\widehat{\theta }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,\widehat{\theta }\right) \right) \right) }^{\frac{1}{p}} +$$ + +$$ += {\left( \frac{1}{K}\right) }^{\frac{1}{p}}\max - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) . +$$ + +Therefore we have ${\left( \frac{1}{K}\right) }^{\frac{1}{p}}\max - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) \leq$ + +$\max - K - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) \leq \max - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right)$ . Thus $\max - K - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = 0 \Leftrightarrow \max - S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = 0 \Leftrightarrow$ ${\mu }_{1} = {\mu }_{2}$ , where we use the non-negativity and identity of indiscernibles of max $- S{W}_{p}$ . 
Finally, the symmetry of $\max\text{-}K\text{-}S{W}_{p}$ follows from the symmetry of the $p$-Wasserstein distance:

$$
\max\text{-}K\text{-}S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}} \tag{14}
$$

$$
= {\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) ,\left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}^{ * }\right) \right) \right) }^{\frac{1}{p}} = \max\text{-}K\text{-}S{W}_{p}\left( {{p}_{2},{p}_{1}}\right) .
$$

## C. More details of SINF

### C.1. Inverse and Jacobian determinant

The SINF transformation of Equation 5 can be easily inverted:

$$
{X}_{l} = {A}_{l}{\mathbf{\Psi }}_{l}^{-1}\left( {{A}_{l}^{T}{X}_{l + 1}}\right) + {X}_{l}^{ \bot }, \tag{15}
$$

where ${X}_{l}^{ \bot } = {X}_{l} - {A}_{l}{A}_{l}^{T}{X}_{l} = {X}_{l + 1} - {A}_{l}{A}_{l}^{T}{X}_{l + 1}$ . The Jacobian determinant of the transformation is also efficient to calculate:

$$
\det \left( \frac{\partial {X}_{l + 1}}{\partial {X}_{l}}\right) = \mathop{\prod }\limits_{{k = 1}}^{K}\frac{d{\Psi }_{lk}\left( x\right) }{dx}. \tag{16}
$$

Proof of Equation 16. Let $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K},\cdots ,{\theta }_{d}}\right\}$ be an orthonormal basis of ${\mathbb{R}}^{d}$ whose first $K$ vectors are ${\theta }_{1},\cdots ,{\theta }_{K}$ . Let ${R}_{l} = \left\lbrack {{\theta }_{1},\cdots ,{\theta }_{d}}\right\rbrack$ be the orthogonal matrix whose $i$-th column vector is ${\theta }_{i}$ , and let ${U}_{l} = \left\lbrack {{\theta }_{K + 1},\cdots ,{\theta }_{d}}\right\rbrack$ .
Since ${A}_{l} = \left\lbrack {{\theta }_{1},\cdots ,{\theta }_{K}}\right\rbrack$ , we have ${R}_{l} = \left\lbrack {{A}_{l},{U}_{l}}\right\rbrack$ (the concatenation of the columns of ${A}_{l}$ and ${U}_{l}$ ). Let ${\mathbf{I}}^{d - K} = {\left\lbrack {\mathrm{{id}}}_{1},\cdots ,{\mathrm{{id}}}_{d - K}\right\rbrack }^{T}$ be the marginal transformation consisting of $d - K$ 1D identity transformations, and let ${\widehat{\mathbf{\Psi }}}_{l} = \left\lbrack \begin{matrix} {\mathbf{\Psi }}_{l} \\ {\mathbf{I}}^{d - K} \end{matrix}\right\rbrack$ . Then we have

$$
{X}_{l + 1} = {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + {X}_{l} - {A}_{l}{A}_{l}^{T}{X}_{l}
$$

$$
= {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + {R}_{l}{R}_{l}^{T}{X}_{l} - {A}_{l}{A}_{l}^{T}{X}_{l}
$$

$$
= {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + \left\lbrack {{A}_{l},{U}_{l}}\right\rbrack \left\lbrack \begin{matrix} {A}_{l}^{T} \\ {U}_{l}^{T} \end{matrix}\right\rbrack {X}_{l} - {A}_{l}{A}_{l}^{T}{X}_{l}
$$

$$
= {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + {U}_{l}{U}_{l}^{T}{X}_{l}
$$

$$
= {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + {U}_{l}{\mathbf{I}}^{d - K}\left( {{U}_{l}^{T}{X}_{l}}\right)
$$

$$
= \left\lbrack {{A}_{l},{U}_{l}}\right\rbrack \left\lbrack \begin{matrix} {\mathbf{\Psi }}_{l} \\ {\mathbf{I}}^{d - K} \end{matrix}\right\rbrack \left( {{\left\lbrack {A}_{l},{U}_{l}\right\rbrack }^{T}{X}_{l}}\right)
$$

$$
= {R}_{l}{\widehat{\mathbf{\Psi }}}_{l}\left( {{R}_{l}^{T}{X}_{l}}\right) .
$$

(17)

Since ${R}_{l}$ is an orthogonal matrix with determinant $\pm 1$ , and the Jacobian of the marginal transformation ${\widehat{\mathbf{\Psi }}}_{l}$ is diagonal, the Jacobian determinant of the above transformation can be written as

$$
\det \left( \frac{\partial {X}_{l + 1}}{\partial {X}_{l}}\right) = \mathop{\prod }\limits_{{k = 1}}^{K}\frac{d{\Psi }_{lk}\left( x\right) }{dx} \cdot \mathop{\prod }\limits_{{k = 1}}^{{d - K}}\frac{d\left( {{\operatorname{id}}_{k}\left( x\right) }\right) }{dx} = \mathop{\prod }\limits_{{k = 1}}^{K}\frac{d{\Psi }_{lk}\left( x\right) }{dx}. \tag{18}
$$

### C.2. Objective

At iteration $l$ , the objective of SINF can be written as:

$$
{\mathcal{F}}_{l} = \mathop{\min }\limits_{\left\{ {\Psi }_{l1},\cdots ,{\Psi }_{lK}\right\} }\mathop{\max }\limits_{\substack{\left\{ {\theta }_{l1},\cdots ,{\theta }_{lK}\right\} \\ \text{orthonormal}}}{\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( {\Psi }_{lk}\left( \left( \mathcal{R}{p}_{1, l}\right) \left( \cdot ,{\theta }_{lk}\right) \right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{lk}\right) \right) \right) }^{\frac{1}{p}} \tag{19}
$$

The algorithm first optimizes ${\theta }_{lk}$ to maximize the objective, with ${\Psi }_{lk}$ fixed to identity transformations (equivalent to Equation 4). Then the axes ${\theta }_{lk}$ are fixed and the objective is minimized by the marginal matching ${\mathbf{\Psi }}_{l}$ . The samples are then updated, and this process repeats until convergence.

### C.3. Monotonic Rational Quadratic Spline

Monotonic Rational Quadratic Splines (Gregory & Delbourgo, 1982; Durkan et al., 2019) approximate the function in each bin with the quotient of two quadratic polynomials. They are monotonic, continuously differentiable, and can be inverted analytically.
The splines are parametrized by the coordinates and derivatives of $M$ knots: ${\left\{ \left( {x}_{m},{y}_{m},{y}_{m}^{\prime }\right) \right\} }_{m = 1}^{M}$ , with ${x}_{m + 1} > {x}_{m},{y}_{m + 1} > {y}_{m}$ and ${y}_{m}^{\prime } > 0$ . Given these parameters, the function in bin $m$ can be written as (Durkan et al., 2019)

$$
y = {y}_{m} + \left( {{y}_{m + 1} - {y}_{m}}\right) \frac{{s}_{m}{\xi }^{2} + {y}_{m}^{\prime }\xi \left( {1 - \xi }\right) }{{s}_{m} + {\sigma }_{m}\xi \left( {1 - \xi }\right) }, \tag{20}
$$

where ${s}_{m} = \left( {{y}_{m + 1} - {y}_{m}}\right) /\left( {{x}_{m + 1} - {x}_{m}}\right) ,{\sigma }_{m} = {y}_{m + 1}^{\prime } + {y}_{m}^{\prime } - 2{s}_{m}$ and $\xi = \left( {x - {x}_{m}}\right) /\left( {{x}_{m + 1} - {x}_{m}}\right)$ . The derivative is given by

$$
\frac{dy}{dx} = \frac{{s}_{m}^{2}\left\lbrack {{y}_{m + 1}^{\prime }{\xi }^{2} + 2{s}_{m}\xi \left( {1 - \xi }\right) + {y}_{m}^{\prime }{\left( 1 - \xi \right) }^{2}}\right\rbrack }{{\left\lbrack {s}_{m} + {\sigma }_{m}\xi \left( 1 - \xi \right) \right\rbrack }^{2}}. \tag{21}
$$

Finally, the inverse can be calculated with

$$
x = {x}_{m} + \left( {{x}_{m + 1} - {x}_{m}}\right) \frac{2c}{-b - \sqrt{{b}^{2} - {4ac}}}, \tag{22}
$$

where $a = \left( {{s}_{m} - {y}_{m}^{\prime }}\right) + \zeta {\sigma }_{m}, b = {y}_{m}^{\prime } - \zeta {\sigma }_{m}, c = - {s}_{m}\zeta$ and $\zeta = \left( {y - {y}_{m}}\right) /\left( {{y}_{m + 1} - {y}_{m}}\right)$ . The derivation of these formulas can be found in Appendix A of Durkan et al. (2019).

In our algorithm the coordinates of the knots are determined by the quantiles of the marginalized PDF (see Algorithm 1).
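A direct transcription of Equations 20 and 22 can make the bin bookkeeping concrete. The sketch below is our own illustration (the names `xk`, `yk`, `dk` for the knot coordinates and derivatives are ours); it checks monotonicity and that the analytic inverse round-trips the forward evaluation:

```python
import numpy as np

def _bin(knots, t):
    """Index m of the bin [knots[m], knots[m+1]] containing each t."""
    return np.clip(np.searchsorted(knots, t, side="right") - 1, 0, len(knots) - 2)

def rq_forward(x, xk, yk, dk):
    """Monotonic rational quadratic spline of Equation 20.
    xk, yk: strictly increasing knot coordinates; dk: positive derivatives."""
    m = _bin(xk, x)
    dx, dy = xk[m + 1] - xk[m], yk[m + 1] - yk[m]
    s, xi = dy / dx, (x - xk[m]) / dx              # bin slope s_m and xi
    sig = dk[m + 1] + dk[m] - 2.0 * s              # sigma_m
    return yk[m] + dy * (s * xi**2 + dk[m] * xi * (1 - xi)) / (s + sig * xi * (1 - xi))

def rq_inverse(y, xk, yk, dk):
    """Analytic inverse of Equation 22."""
    m = _bin(yk, y)
    dx, dy = xk[m + 1] - xk[m], yk[m + 1] - yk[m]
    s = dy / dx
    sig = dk[m + 1] + dk[m] - 2.0 * s
    z = (y - yk[m]) / dy                           # zeta
    a, b, c = (s - dk[m]) + z * sig, dk[m] - z * sig, -s * z
    return xk[m] + dx * 2 * c / (-b - np.sqrt(b * b - 4 * a * c))

rng = np.random.default_rng(1)
xk = np.cumsum(rng.uniform(0.1, 1.0, 9)); xk -= xk[0]   # random monotone knots
yk = np.cumsum(rng.uniform(0.1, 1.0, 9)); yk -= yk[0]
dk = rng.uniform(0.2, 2.0, 9)                           # positive derivatives

x = np.linspace(xk[0] + 1e-6, xk[-1] - 1e-6, 200)
y = rq_forward(x, xk, yk, dk)
assert np.all(np.diff(y) > 0)                           # monotonic
assert np.allclose(rq_inverse(y, xk, yk, dk), x)        # exact round trip
```

In the linear special case ($y_{m}^{\prime} = s_{m}$, so $\sigma_{m} = 0$) both formulas collapse to linear interpolation, which is a quick sanity check on the signs of $a$, $b$, $c$.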
The derivative ${y}_{m}^{\prime }\left( {1 < m < M}\right)$ is determined by fitting a local quadratic polynomial to the neighboring knots $\left( {{x}_{m - 1},{y}_{m - 1}}\right) ,\left( {{x}_{m},{y}_{m}}\right)$ , and $\left( {{x}_{m + 1},{y}_{m + 1}}\right)$ : + +$$ +{y}_{m}^{\prime } = \frac{{s}_{m - 1}\left( {{x}_{m + 1} - {x}_{m}}\right) + {s}_{m}\left( {{x}_{m} - {x}_{m - 1}}\right) }{{x}_{m + 1} - {x}_{m - 1}}. \tag{23} +$$ + +The function outside $\left\lbrack {{x}_{1},{x}_{M}}\right\rbrack$ is linearly extrapolated with slopes ${y}_{1}^{\prime }$ and ${y}_{M}^{\prime }$ . In SIG, ${y}_{1}^{\prime }$ and ${y}_{M}^{\prime }$ are fixed to 1, while in GIS they are fitted to the samples that fall outside $\left\lbrack {{x}_{1},{x}_{M}}\right\rbrack$ . + +![01963e33-e365-7032-871a-368a64aa692f_9_163_191_682_436_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_9_163_191_682_436_0.jpg) + +Figure 3. Illustration of the patch-based approach with $S = 4$ , $p = 2$ and $q = 2$ . At each iteration, different patches are modeled separately. The patches are randomly shifted in each iteration assuming periodic boundaries. + +![01963e33-e365-7032-871a-368a64aa692f_9_159_814_698_230_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_9_159_814_698_230_0.jpg) + +Figure 4. Illustration of the hierarchical modeling of an $S = 8$ image. The patch size starts from $q = 8$ and gradually decreases to $q = 2$ . + +We use $M = {400}$ knots in SIG to interpolate each ${\Psi }_{l, k}$ , while in GIS we set $M = \min \left( {\sqrt{{N}_{\text{train }}},{200}}\right)$ . The performance is insensitive to these choices, as long as $M$ is large enough to fully characterize the $1\mathrm{D}$ transformation ${\Psi }_{l, k}$ . + +### C.4. Patch-based hierarchical approach for SIG + +Generally speaking, the neighboring pixels in images have stronger correlations than pixels that are far apart. 
This fact has been exploited by convolutional neural networks, which outperform Fully Connected Neural Networks (FCNNs) and have become standard building blocks in computer vision tasks. Like FCNNs, vanilla SIG and GIS make no assumption about the structure of the data and cannot model high dimensional images very well. Recently, Meng et al. (2020) proposed a patch-based approach, providing a different way to improve the modeling of the local correlations for NF models. The patch-based approach decomposes an $S \times S$ image into $p \times p$ patches, with $q \times q$ neighboring pixels in each patch $\left( {S = {pq}}\right)$ . In each iteration the marginalized distribution of each patch is modeled separately, without considering the correlations between different patches. This approach effectively reduces the dimensionality from ${S}^{2}$ to ${q}^{2}$ , at the cost of ignoring the long range correlations.

To reduce the effects of ignoring long range correlations, we propose a hierarchical model. In SIG, we start from modeling the entire images, which corresponds to $q = S$ and $p = 1$ . After some iterations the samples show the correct structures, indicating that the long range correlations have been modeled well. We then gradually decrease the patch size $q$ until $q = 2$ , which allows us to gradually focus on the smaller scales. Assuming a periodic boundary condition, we let the patches randomly shift in each iteration. If the patch size $q$ does not divide $S$ , we set $p = \lfloor S/q\rfloor$ and the rest of the pixels are kept unchanged.

### C.5. Regularization of GIS

We add regularization to GIS for density estimation tasks to further improve the performance and reduce overfitting. Regularization is added in the following two ways: 1) The weight matrix ${W}_{l}$ is regularized by limiting the maximum number of iterations ${J}_{\text{maxiter }}$ (see Algorithm 2).
We set ${J}_{\text{maxiter }} = N/d$ , where $N$ is the number of training data and $d$ is the dimensionality. Thus for very small datasets $\left( {N/d \rightarrow 1}\right)$ the axes of the marginal transformation are almost random. This has no effect on datasets of regular size.

2) The CDFs in Equation 6 are estimated using KDE, and the 1D marginal transformation is regularized with:

$$
{\widetilde{\psi }}_{l, k}\left( x\right) = \left( {1 - \alpha }\right) {\psi }_{l, k}\left( x\right) + {\alpha x}, \tag{24}
$$

where $\alpha \in \lbrack 0,1)$ is the regularization parameter and ${\widetilde{\psi }}_{l, k}$ is the regularized transformation. As $\alpha$ increases, the performance improves, but more iterations are needed to converge. Thus $\alpha$ controls the trade-off between performance and speed.

## D. Related work

Iterative normalizing flow models such as RBIG (Chen & Gopinath, 2001; Laparra et al., 2011) are simplified versions of GIS, as they are based on a succession of rotations followed by 1D marginal Gaussianizations. Iterative Distribution Transfer (IDT) (Pitié et al., 2007) is a similar algorithm but does not require the base distribution to be a Gaussian. These models do not scale well to high dimensions because they do not have a good way of choosing the axes, and they are not competitive against modern NFs trained on $p\left( x\right)$ (Meng et al., 2020). Meng et al. (2019) use a similar algorithm, Projection Pursuit Monge Map (PPMM), to construct OT maps. They propose to find the most informative axis using Projection Pursuit (PP) in each iteration, and show that PPMM works well in low-dimensional bottleneck settings $\left( {d = 8}\right)$ . However, it has yet to be proven that PPMM scales to high dimensions, considering that PP scales as $\mathcal{O}\left( {d}^{3}\right)$ .
A deep-learning, non-iterative version of these models is Gaussianization Flow (GF) (Meng et al., 2020), which trains on $p\left( x\right)$ and achieves good density estimation results in low dimensions, but does not have good sampling properties in high dimensions. RBIG, GIS and GF have similar architectures but are trained differently. We compare their density estimation results in Section 4.1.

Another iterative generative model is Sliced Wasserstein Flow (SWF) (Liutkus et al., 2018). Similar to SIG, SWF tries to minimize the SWD between the distributions of the samples and the data, and transforms this problem into solving a $d$-dimensional PDE. The PDE is solved iteratively by performing a gradient flow in the Wasserstein space, and they show SWF works well for low dimensional bottleneck features. However, in each iteration the algorithm requires evaluating an integral over the $d$-dimensional unit sphere, approximated with Monte Carlo integration, which does not scale well to high dimensions. Another difference with SIG is that SWF does not have a flow structure, cannot be inverted, and does not provide the likelihood. We compare the sample qualities of SWF and SIG in Section 4.2.

SWD, max SWD and other slice-based distances (e.g. the Cramér-Wold distance) have been widely used in training generative models (Deshpande et al., 2018; 2019; Wu et al., 2019; Kolouri et al., 2018; Knop et al., 2018; Nguyen et al., 2020b;a; Nadjahi et al., 2020). Wu et al. (2019) propose a differentiable SWD block composed of a rotation followed by marginalized Gaussianizations, but unlike RBIG, the rotation matrix is trained in an end-to-end DL fashion. They propose the Sliced Wasserstein AutoEncoder (SWAE) by adding SWD blocks to an AE to regularize the latent variables, and show that its sample quality outperforms VAE and AE + RBIG. Nguyen et al. (2020b;a) generalize the max-sliced approach using parametrized distributions over projection axes. Nguyen et al.
(2020b) propose Mixture Spherical Sliced Fused Gromov Wasserstein (MSSFG), which samples the slice axes around a few informative directions following a von Mises-Fisher distribution. They apply MSSFG to training the Deterministic Relational regularized AutoEncoder (DRAE) and name it mixture spherical DRAE (ms-DRAE). Nguyen et al. (2020a) go further and propose the Distributional Sliced Wasserstein distance (DSW), which tries to find the optimal axis distribution by parametrizing it with a neural network. They apply DSW to the training of GANs, and we will refer to their model as DSWGAN in this paper. We compare the sample qualities of SWAE, ms-DRAE, DSWGAN and SIG in Section 4.2.

Grover et al. (2018) propose Flow-GAN, using an NF as the generator of a GAN, so that the model can perform likelihood evaluation and allows both maximum likelihood and adversarial training. Similar to our work, they find that adversarial training gives good samples but poor $p\left( x\right)$ , while training by maximum likelihood results in bad samples. Similar to SIG, the adversarial version of Flow-GAN minimizes the Wasserstein distance between samples and data, and has an NF structure. We compare their samples in Section 4.2.

![01963e33-e365-7032-871a-368a64aa692f_10_899_184_695_311_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_10_899_184_695_311_0.jpg)

Figure 5. Gaussian noise (first column), Fashion-MNIST (top panel) and CelebA (bottom) samples at different iterations.

![01963e33-e365-7032-871a-368a64aa692f_10_895_632_704_238_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_10_895_632_704_238_0.jpg)

Figure 6. Middle: interpolations between CelebA samples from SIG. Left and right: the corresponding nearest training data.

## E. Other experiments

### E.1.
Density estimation of full datasets

We perform density estimation with GIS on four UCI datasets (Lichman et al., 2013) and BSDS300 (Martin et al., 2001), as well as the image datasets MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017). The data preprocessing of the UCI datasets and BSDS300 follows Papamakarios et al. (2017). In Table 3 we compare our results with RBIG (Laparra et al., 2011) and GF (Meng et al., 2020). The former can be seen as GIS with random axes for the 1D Gaussianization, while the latter can be seen as a non-iterative GIS with MLE training on $p\left( x\right)$ . We also list the other NF models Real NVP (Dinh et al., 2016), Glow (Kingma & Dhariwal, 2018), FFJORD (Grathwohl et al., 2018), MAF (Papamakarios et al., 2017) and RQ-NSF (AR) (Durkan et al., 2019) for comparison.

We observe that RBIG performs significantly worse than the current SOTA. GIS outperforms RBIG and is the first iterative algorithm that achieves performance comparable to maximum likelihood models. This is even more impressive given that GIS is not trained on $p\left( x\right)$ , yet it outperforms GF on $p\left( x\right)$ on GAS, BSDS300 and Fashion-MNIST.

### E.2. More samples from SIG

In Figure 5 we show the SIG samples at different iterations. In Figure 6 we display interpolations between SIG samples, and the nearest training data, to verify we are not memorizing the training data.

Table 3. Negative test log-likelihood for tabular datasets measured in nats, and image datasets measured in bits/dim (lower is better).
| | Method | POWER | GAS | HEPMASS | MINIBOONE | BSDS300 | MNIST | Fashion |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| iterative | RBIG | 1.02 | 0.05 | 24.59 | 25.41 | -115.96 | 1.71 | 4.46 |
| | GIS (this work) | -0.32 | -10.30 | 19.00 | 14.26 | -155.75 | 1.34 | 3.22 |
| maximum likelihood | GF | -0.57 | -10.13 | 17.59 | 10.32 | -152.82 | 1.29 | 3.35 |
| | Real NVP | -0.17 | -8.33 | 18.71 | 13.55 | -153.28 | 1.06 | 2.85 |
| | Glow | -0.17 | -8.15 | 18.92 | 11.35 | -155.07 | 1.05 | 2.95 |
| | FFJORD | -0.46 | -8.59 | 14.92 | 10.43 | -157.40 | 0.99 | - |
| | MAF | -0.30 | -10.08 | 17.39 | 11.68 | -156.36 | 1.89 | - |
| | RQ-NSF (AR) | -0.66 | -13.09 | 14.01 | 9.22 | -157.31 | - | - |
Table 4. OoD detection accuracy quantified by the AUROC of data $p\left( x\right)$ trained on Fashion-MNIST.
| Method | MNIST | OMNIGLOT |
| --- | --- | --- |
| SIG (this work) | 0.980 | 0.993 |
| GIS (this work) | 0.824 | 0.891 |
| PixelCNN++ | 0.089 | - |
| IWAE | 0.423 | 0.568 |
#### E.3. Out of Distribution (OoD) detection

OoD detection with generative models has recently attracted a lot of attention, since the $\log p$ estimates of NFs and VAEs have been shown to be poor OoD detectors: different generative models can assign higher probabilities to OoD data than to In-Distribution (InD) training data (Nalisnick et al., 2018). One combination of datasets for which this has been observed is Fashion-MNIST and MNIST, where a model trained on the former assigns higher density to the latter.

SINF does not train on the likelihood $p\left( x\right)$ , which is an advantage for OoD detection. Likelihood is sensitive to the smallest-variance directions (Ren et al., 2019): for example, a zero-variance pixel leads to an infinite $p\left( x\right)$ , and noise must be added to regularize it. But zero-variance directions contain little or no information on the global structure of the image. The SINF objective is more sensitive to the meaningful global structures that can separate OoD from InD data. Because the patch-based approach ignores the long range correlations and results in poor OoD performance, we use vanilla SINF without the patch-based approach. We train the models on Fashion-MNIST, and then evaluate anomaly detection on the test data of MNIST and OMNIGLOT (Lake et al., 2015). In Table 4 we compare our results to the maximum likelihood $p\left( x\right)$ models PixelCNN++ (Salimans et al., 2017; Ren et al., 2019) and IWAE (Choi et al., 2018). Other models that perform well include VIB and WAIC (Choi et al., 2018), which achieve 0.941 and 0.943 for MNIST, and 0.766 and 0.796 for OMNIGLOT, respectively (below our SIG results). For the MNIST case Ren et al. (2019) obtained 0.996 using the likelihood ratio between the model and its perturbed version, but they require fine-tuning on some additional OoD dataset, which may not be available in OoD applications.
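The AUROC values quoted here and in Table 4 can be computed from per-sample anomaly scores with a simple rank statistic: the probability that a randomly drawn OoD sample scores higher than a randomly drawn InD sample. The sketch below is our own illustration; the score arrays are synthetic, not taken from the paper:

```python
import numpy as np

def auroc(ood_scores, ind_scores):
    """AUROC as the probability that a random OoD sample receives a
    higher anomaly score than a random in-distribution sample
    (ties count one half)."""
    o = np.asarray(ood_scores, dtype=float)[:, None]
    i = np.asarray(ind_scores, dtype=float)[None, :]
    return np.mean((o > i) + 0.5 * (o == i))

# Hypothetical scores from a detector that separates OoD data well.
rng = np.random.default_rng(2)
ind = rng.normal(0.0, 1.0, 500)   # in-distribution anomaly scores
ood = rng.normal(3.0, 1.0, 500)   # OoD anomaly scores
score = auroc(ood, ind)           # close to 1 for well-separated scores
```

A perfectly separating detector gives AUROC 1.0, and a detector whose scores are uninformative gives 0.5, matching how the numbers in Table 4 are read.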
The lower-dimensional latent space PAE (Böhm & Seljak, 2020) achieves 0.997 and 0.981 for MNIST and OMNIGLOT, respectively, while the VAE-based likelihood regret (Xiao et al., 2020) achieves 0.988 on MNIST, but requires additional (expensive) processing.

![01963e33-e365-7032-871a-368a64aa692f_11_896_683_696_408_0.jpg](images/01963e33-e365-7032-871a-368a64aa692f_11_896_683_696_408_0.jpg)

Figure 7. Fashion-MNIST samples before (left panel) and after SIG improvement (right panel). Top: SWF. Middle: Flow-GAN (ADV). Bottom: MAF.

### E.4. Improving the samples of other generative models

Since SIG is able to transform any base distribution into the target distribution, it can also be used as a "Plug-and-Play" tool to improve the samples of other generative models. To demonstrate this, we train SWF, Flow-GAN (ADV) and MAF(5) on Fashion-MNIST with the default architectures of their papers, and then apply 240 SIG iterations (30% of the total number of iterations in Section 4.2) to improve the sample quality. In Figure 7 we compare the samples before and after the SIG improvement. Their FID scores improve from 207.6, 216.9 and 81.2 to 23.9, 21.2 and 16.6, respectively. These results can be further improved by adding more SIG iterations.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/VmwEpdsvHZ9/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/VmwEpdsvHZ9/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..2c8f56fa20677c41104b2994b430a408f1d4fd00 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/VmwEpdsvHZ9/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,219 @@ +§ SLICED ITERATIVE NORMALIZING FLOWS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +We develop an iterative (greedy) deep learning algorithm which is able to transform an arbitrary probability distribution function (PDF) into the target PDF. The model is based on iterative Optimal Transport of a series of 1D slices, matching on each slice the marginal PDF to the target. As special cases of this algorithm, we introduce two Sliced Iterative Normalizing Flows (SINF), which map from the data to the latent space (GIS) and vice versa (SIG). We show that SIG is able to generate high quality samples that match the GAN benchmarks. GIS obtains better results on small dataset density estimation tasks compared to the density trained NFs. SINF approach deviates significantly from the current DL paradigm, as it is greedy and does not use concepts such as mini-batching, stochastic gradient descent and gradient back-propagation through deep layers. + +§ 1. 
INTRODUCTION

Latent variable generative models such as Normalizing Flows (NFs) (Rezende & Mohamed, 2015; Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018), Variational AutoEncoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015) aim to model the distribution $p\left( x\right)$ of high-dimensional input data $x$ by introducing a mapping from a latent variable $z$ to $x$ , where $z$ is assumed to follow a given prior distribution $\pi \left( z\right)$ . These models usually parameterize the mapping using neural networks, and the training of these models typically consists of minimizing a dissimilarity measure between the model distribution and the target distribution.

In this work we adopt a different approach to build the map from the latent variable $z$ to the data $x$ . We approach this problem from the Optimal Transport (OT) point of view. OT studies whether transport maps exist between two probability distributions, and if they do, how to construct the map so as to minimize the transport cost. We propose to decompose the high dimensional problem into a succession of 1D transport problems, where the OT solution is known. The mapping is iteratively augmented, and it has an NF structure that allows explicit density estimation and efficient sampling. We name the algorithm Sliced Iterative Normalizing Flow (SINF). Our objective function is inspired by the Wasserstein distance, which is defined as the minimal transport cost and has been widely used in the loss functions of generative models (Arjovsky & Bottou, 2017; Tolstikhin et al., 2017). We propose a new metric, the max K-sliced Wasserstein distance, which enables the algorithm to scale well to high dimensions.

§ 2.
BACKGROUND

The $p$-Wasserstein distance, $p \in \lbrack 1,\infty )$ , between two probability distributions ${p}_{1}$ and ${p}_{2}$ is defined as:

$$
{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = \mathop{\inf }\limits_{{\gamma \in \Pi \left( {{p}_{1},{p}_{2}}\right) }}{\left( {\mathbb{E}}_{\left( {x,y}\right) \sim \gamma }\left\lbrack \parallel x - y{\parallel }^{p}\right\rbrack \right) }^{\frac{1}{p}}, \tag{1}
$$

where $\Pi \left( {{p}_{1},{p}_{2}}\right)$ is the set of all possible joint distributions $\gamma \left( {x,y}\right)$ with marginal distributions ${p}_{1}$ and ${p}_{2}$ . In 1D the Wasserstein distance has a closed form solution via Cumulative Distribution Functions (CDFs), but this evaluation is intractable in high dimensions. An alternative metric, the Sliced $p$-Wasserstein Distance (SWD), is defined as:

$$
S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = {\left( {\int }_{{\mathbb{S}}^{d - 1}}{W}_{p}^{p}\left( \mathcal{R}{p}_{1}\left( \cdot ,\theta \right) ,\mathcal{R}{p}_{2}\left( \cdot ,\theta \right) \right) d\theta \right) }^{\frac{1}{p}}, \tag{2}
$$

where ${\mathbb{S}}^{d - 1}$ denotes the unit sphere ${\theta }_{1}^{2} + \cdots + {\theta }_{d}^{2} = 1$ in ${\mathbb{R}}^{d}$ , ${d\theta }$ is the normalized uniform measure on ${\mathbb{S}}^{d - 1}$ , and $\mathcal{R}$ denotes the Radon transform. The definition of the Radon transform can be found in the appendix. For a given $\theta$ , the function $\left( {\mathcal{R}p}\right) \left( {\cdot ,\theta }\right) : \mathbb{R} \rightarrow \mathbb{R}$ is essentially the slice (or projection) of $p\left( x\right)$ on axis $\theta$ .

The SWD can be calculated by approximating the high dimensional integral with Monte Carlo samples. However, in high dimensions a large number of projections is required to accurately estimate the SWD. This motivates the use of the maximum Sliced $p$-Wasserstein Distance (max SWD):

$$
\max\text{-}S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = \mathop{\max }\limits_{{\theta \in {\mathbb{S}}^{d - 1}}}{W}_{p}\left( {\mathcal{R}{p}_{1}\left( {\cdot ,\theta }\right) ,\mathcal{R}{p}_{2}\left( {\cdot ,\theta }\right) }\right) , \tag{3}
$$

which is the maximum of the Wasserstein distance of the 1D marginalized distributions over all possible directions.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

§ 3. SLICED ITERATIVE NORMALIZING FLOWS

We consider the general problem of building an NF that maps an arbitrary PDF ${p}_{1}\left( x\right)$ to another arbitrary PDF ${p}_{2}\left( x\right)$ of the same dimensionality. We first introduce our objective function in Section 3.1. The general SINF algorithm is presented in Section 3.2. We then consider the special cases of ${p}_{1}$ and ${p}_{2}$ being standard Normal distributions in Section 3.3 and Section 3.4, respectively.

§ 3.1. MAXIMUM K-SLICED WASSERSTEIN DISTANCE

We generalize the idea of the maximum SWD and propose the maximum K-Sliced $p$-Wasserstein Distance (max K-SWD):

$$
\max\text{-}K\text{-}S{W}_{p}\left( {{p}_{1},{p}_{2}}\right) = \mathop{\max }\limits_{\substack{\left\{ {\theta }_{1},\cdots ,{\theta }_{K}\right\} \\ \text{orthonormal}}}{\left( \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{W}_{p}^{p}\left( \left( \mathcal{R}{p}_{1}\right) \left( \cdot ,{\theta }_{k}\right) ,\left( \mathcal{R}{p}_{2}\right) \left( \cdot ,{\theta }_{k}\right) \right) \right) }^{\frac{1}{p}}. \tag{4}
$$

In this work we fix $p = 2$ . The max K-SWD defines $K$ orthogonal axes $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K}}\right\}$ along which the marginal distributions of ${p}_{1}$ and ${p}_{2}$ are most different, providing a natural choice for performing 1D marginal matching in our algorithm (see Section 3.2). The proof that max K-SWD is a proper distance and the details of its estimation are provided in the appendix.

§ 3.2.
PROPOSED SINF ALGORITHM

The SINF algorithm is based on iteratively matching the 1D marginalized distributions of ${p}_{1}$ to those of ${p}_{2}$ . This is motivated by the inverse Radon transform (see Appendix) and the Cramér-Wold theorem, which suggest that matching the high dimensional distributions is equivalent to matching the 1D slices along all possible directions, decomposing the high dimensional problem into a series of 1D problems. Given a set of i.i.d. samples $X$ drawn from ${p}_{1}$ , in each iteration a set of 1D marginal transformations ${\left\{ {\Psi }_{k}\right\} }_{k = 1}^{K}$ ${}^{1}$ ( $K \leq d$ , where $d$ is the dimensionality of the dataset) is applied to the samples along orthogonal axes ${\left\{ {\theta }_{k}\right\} }_{k = 1}^{K}$ to match the 1D marginalized PDF of ${p}_{2}$ along those axes. Let $A = \left\lbrack {{\theta }_{1},\cdots ,{\theta }_{K}}\right\rbrack$ be the weight matrix $\left( {{A}^{T}A = {I}_{K}}\right)$ ; the transformation of the samples ${X}_{l}$ at iteration $l$ can be written as

$$
{X}_{l + 1} = {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{X}_{l}}\right) + {X}_{l}^{ \bot }, \tag{5}
$$

where ${X}_{l}^{ \bot } = {X}_{l} - {A}_{l}{A}_{l}^{T}{X}_{l}$ . ${\mathbf{\Psi }}_{l} = {\left\lbrack {\Psi }_{l1},\cdots ,{\Psi }_{lK}\right\rbrack }^{T}$ is the marginal mapping of each dimension of ${A}_{l}^{T}{X}_{l}$ , and its components are required to be monotonic and differentiable. The inverse and Jacobian determinant of transformation 5 can be easily evaluated (see appendix).

Algorithm 1 Sliced Iterative Normalizing Flow

- Input: ${\left\{ {x}_{i} \sim {p}_{1}\right\} }_{i = 1}^{N}$ , ${\left\{ {y}_{i} \sim {p}_{2}\right\} }_{i = 1}^{N}$ , $K$ , number of iterations ${L}_{\text{iter}}$
- for $l = 1$ to ${L}_{\text{iter}}$ do
  - ${A}_{l} = \max \mathrm{K}\text{-}\operatorname{SWD}\left( {{x}_{i},{y}_{i},K}\right)$
  - for $k = 1$ to $K$ do
    - ${\theta }_{k} = {A}_{l}\left\lbrack { : ,k}\right\rbrack$
    - Compute ${\widehat{x}}_{i} = {\theta }_{k} \cdot {x}_{i}$ and ${\widehat{y}}_{i} = {\theta }_{k} \cdot {y}_{i}$ for each $i$
    - ${\widetilde{x}}_{m} = \operatorname{quantiles}\left( {\operatorname{PDF}\left( {\widehat{x}}_{i}\right) }\right)$
    - ${\widetilde{y}}_{m} = \operatorname{quantiles}\left( {\operatorname{PDF}\left( {\widehat{y}}_{i}\right) }\right)$
    - ${\psi }_{l,k} = \operatorname{RationalQuadraticSpline}\left( {{\widetilde{x}}_{m},{\widetilde{y}}_{m}}\right)$
  - end for
  - ${\mathbf{\Psi }}_{l} = \left\lbrack {{\Psi }_{l1},\cdots ,{\Psi }_{lK}}\right\rbrack$
  - Update ${x}_{i} = {x}_{i} - {A}_{l}{A}_{l}^{T}{x}_{i} + {A}_{l}{\mathbf{\Psi }}_{l}\left( {{A}_{l}^{T}{x}_{i}}\right)$
- end for

The weight matrix ${A}_{l}$ and the marginal transformations ${\mathbf{\Psi }}_{l}$ are determined by iteratively minimizing the max K-SWD (Equation 4) between the transformed ${p}_{1}$ and ${p}_{2}$ . Specifically, we propose to iteratively solve for the orthogonal axes $\left\{ {{\theta }_{1},\cdots ,{\theta }_{K}}\right\}$ in the max K-SWD, and then apply 1D marginal matching along those axes to minimize the max K-SWD.

Let ${p}_{1,l}$ be the transformed ${p}_{1}$ at iteration $l$ . The $k$-th component of ${\mathbf{\Psi }}_{l}$ , ${\Psi }_{l,k}$ , maps the 1D marginalized PDF of ${p}_{1,l}$ to that of ${p}_{2}$ and has an OT solution:

$$
{\Psi }_{l,k}\left( x\right) = {F}_{k}^{-1}\left( {{G}_{l,k}\left( x\right) }\right) , \tag{6}
$$

where ${G}_{l,k}\left( x\right) = {\int }_{-\infty }^{x}\left( {\mathcal{R}{p}_{1,l}}\right) \left( {t,{\theta }_{k}}\right) {dt}$ and ${F}_{k}\left( x\right) = {\int }_{-\infty }^{x}\left( {\mathcal{R}{p}_{2}}\right) \left( {t,{\theta }_{k}}\right) {dt}$ are the CDFs of ${p}_{1,l}$ and ${p}_{2}$ on axis ${\theta }_{k}$ , respectively. The CDFs can be estimated using the quantiles of the samples (in SIG, Section 3.3), or using Kernel Density Estimation (KDE, in GIS, Section 3.4). Equation 6 is monotonic and therefore invertible.
We choose to parametrize it with monotonic rational quadratic splines (Gregory & Delbourgo, 1982; Durkan et al., 2019). Details about the splines are given in the Appendix.

The proposed algorithm iteratively minimizes the max K-SWD between the transformed $p_1$ and $p_2$. The orthonormal vectors $\{\theta_1, \cdots, \theta_K\}$ specify the $K$ axes along which the marginalized PDFs of $p_{1,l}$ and $p_2$ differ the most, which maximizes the gain at each iteration and improves the efficiency of the algorithm. The model is able to converge with two orders of magnitude fewer iterations than with random axes, and this also leads to better sample quality. This is because, as the dimensionality $d$ grows, the number of slices $(\mathcal{R}p)(\cdot, \theta)$ required to approximate $p(x)$ using the inverse Radon formula scales as $L^{d-1}$ (Kolouri et al., 2015), where $L$ is the number of slices needed to approximate a similar smooth 2D distribution. Therefore, if the $\theta$ are chosen randomly, it takes a large number of iterations to converge in high dimensions due to the curse of dimensionality. Our objective function mitigates this curse of dimensionality by identifying the most relevant directions.

${}^{1}$ Notation: in this paper we use $l$, $k$, $j$ and $m$ to index the iterations of the algorithm, the axes $\theta_k$, the gradient descent iterations of the max K-SWD calculation (see Algorithm 2), and the knots of the 1D spline transformations, respectively.

Table 1. FID scores on different datasets (lower is better). The errors are generally smaller than the differences.

| | Method | MNIST | Fashion | CIFAR-10 | CelebA |
|---|---|---|---|---|---|
| iterative | SWF | 225.1 | 207.6 | - | - |
| | SIG ($T = 1$) (this work) | 4.5 | 13.7 | 66.5 | 37.3 |
| adversarial training | Flow-GAN (ADV) | 155.6 | 216.9 | 71.1 | - |
| | WGAN | 6.7 | 21.5 | 55.2 | 41.3 |
| | WGAN GP | 20.3 | 24.5 | 55.8 | 30.0 |
| | Best default GAN | $\sim 10$ | $\sim 32$ | $\sim 70$ | $\sim 48$ |
| AE based | SWAE (Wu et al., 2019) | - | - | 107.9 | 48.9 |
| | SWAE (Kolouri et al., 2018) | 29.8 | 74.3 | 141.9 | 53.9 |
| | CWAE | 23.6 | 57.1 | 120.0 | 49.7 |
| | PAE | - | 28.0 | - | 49.2 |
| | two-stage VAE | 12.6 | 29.3 | 96.1 | 44.4 |

Table 2. Comparison between SIG and GIS.

| | SIG | GIS |
|---|---|---|
| Initial PDF $p_1$ | Gaussian | $p_{\text{data}}$ |
| Final PDF $p_2$ | $p_{\text{data}}$ | Gaussian |
| Training | Iteratively maps Gaussian to $p_{\text{data}}$ | Iteratively maps $p_{\text{data}}$ to Gaussian |
| NF structure | Yes | Yes |
| Advantage | Good samples | Good density estimation |

Unlike the KL-divergence, which is invariant under flow transformations, the max K-SWD differs between data space and latent space. Therefore the direction in which the flow model is built is of key importance. In the next two sections we discuss two different ways of building the flow, which are suited to sample generation and to density estimation, respectively.

### 3.3. Sliced Iterative Generator (SIG)

For the Sliced Iterative Generator (SIG), $p_1$ is a standard Normal distribution and $p_2$ is the target distribution. The model iteratively maps the Normal distribution to the target distribution using 1D slice transformations. Specifically, one first draws a set of samples from the standard Normal distribution, and then iteratively updates the samples following Equation 5. SIG directly minimizes the max K-SWD between the generated distribution and the target distribution, and is able to generate high quality samples.
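A minimal NumPy sketch of the SIG-style update in Equation 5, on a toy 2D problem. It is illustrative only: it picks random orthonormal axes instead of maximizing the K-SWD, and uses piecewise-linear quantile matching instead of splines; all function names are ours.

```python
import numpy as np

def sig_iteration(X, Y, K, rng):
    # One SIG-style update (cf. Equation 5): match the 1D marginals of X
    # to those of Y along K orthonormal axes. Axes are random here; the
    # paper instead chooses them by maximizing the K-sliced Wasserstein
    # distance.
    d = X.shape[1]
    A = np.linalg.qr(rng.normal(size=(d, K)))[0]   # d x K, with A^T A = I_K
    P, Q = X @ A, Y @ A                            # projections on the axes
    Pm = np.empty_like(P)
    qs = np.linspace(0.0, 1.0, 65)
    for k in range(K):
        src = np.quantile(P[:, k], qs)
        tgt = np.quantile(Q[:, k], qs)
        Pm[:, k] = np.interp(P[:, k], src, tgt)    # 1D marginal match, axis k
    return X - P @ A.T + Pm @ A.T                  # X_perp + A Psi(A^T X)

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 2))                            # start from N(0, I)
Y = rng.normal(size=(20000, 2)) * [1.0, 0.5] + [2.0, -1.0] # toy target samples
for _ in range(10):
    X = sig_iteration(X, Y, K=2, rng=rng)
# X now approximately follows the target's distribution.
```

With random axes the toy problem still converges because it is low dimensional; the max K-SWD axis selection is what makes this practical in high dimensions, as discussed above.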
### 3.4. Gaussianizing Iterative Slicing (GIS)

For Gaussianizing Iterative Slicing (GIS), $p_1$ is the target distribution and $p_2$ is a standard Normal distribution. The model iteratively Gaussianizes the target distribution, and the mapping is learned in the reverse direction of SIG. In GIS the max K-SWD between the latent data and the Normal distribution is minimized, so the model performs well in density estimation, even though its objective is not $p(x)$. A comparison between SIG and GIS is shown in Table 2.

## 4. Experiments

### 4.1. Density Estimation $p(x)$ of Small Datasets

We perform density estimation with GIS on four UCI datasets (Lichman et al., 2013) and BSDS300 (Martin et al., 2001). The data preprocessing of the UCI datasets and BSDS300 follows Papamakarios et al. (2017). We vary the size of the training set $N_{\text{train}}$ from $10^2$ to $10^5$ to test the model's performance across a wide range of dataset sizes. In Figure 1 we compare GIS to other NF models, GF (Meng et al., 2020), FFJORD (Grathwohl et al., 2018), MAF (Papamakarios et al., 2017) and RQ-NSF (AR) (Durkan et al., 2019), as well as to KDE. Some non-GIS NF models collapsed during training or exceeded our GPU memory, and are not shown in the plot. The results in Figure 1 show that GIS is more stable than the other NFs and outperforms them on small training sets. This highlights that GIS is less sensitive to hyperparameter optimization and achieves good performance out of the box. Training time varies with data size, but is generally lower than that of other NFs on small datasets.

Figure 1. Density estimation versus training set size. The legend in panel (a) applies to the other panels as well. Higher is better: at training set sizes of 100-1000, GIS has the best performance in all cases.

### 4.2. Generative Modeling of Images

We evaluate SIG as a generative model of images on four datasets: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015). In Figure 2 we show samples for these four datasets. For the MNIST, Fashion-MNIST and CelebA datasets we show samples from the model with reduced temperature $T = {0.85}$ (i.e., sampling from a Gaussian distribution with standard deviation $T = {0.85}$ in latent space), which slightly improves the sample quality (Parmar et al., 2018; Kingma & Dhariwal, 2018). We report the final FID score (calculated using temperature $T = 1$) in Table 1, where we compare our results with the similar algorithms SWF (Liutkus et al., 2018) and Flow-GAN (ADV) (Grover et al., 2018). We also list the FID scores of some other generative models for comparison, including models using slice-based distances, SWAEs (Wu et al., 2019; Kolouri et al., 2018) and CWAE (Knop et al., 2018), Wasserstein GAN models (Arjovsky et al., 2017; Gulrajani et al., 2017), and other GAN- and AE-based models, PAE (Böhm & Seljak, 2020) and two-stage VAE (Dai & Wipf, 2019; Xiao et al., 2019). We note that previous iterative algorithms were unable to produce good samples on high dimensional image datasets. In contrast, SIG obtains the best FID scores on MNIST and Fashion-MNIST, while on CIFAR-10 and CelebA it also outperforms similar algorithms and AE-based models, and achieves results comparable with GANs.

Figure 2. Random samples from SIG.

## 5. Conclusions

We introduce the sliced iterative normalizing flow (SINF), which iteratively transforms the data distribution to a Gaussian (GIS) or the other way around (SIG) using OT.
To the best of our knowledge, SIG is the first greedy deep learning algorithm that is competitive with SOTA generators in high dimensions, while GIS achieves results comparable to current NF models on density estimation, but is more stable, faster to train, and achieves higher $p(x)$ when trained on small training sets, even though it does not train on $p(x)$. SINF has very few hyperparameters and is very insensitive to their choice. SINF is related to several previous models, as discussed in the Appendix. SINF has a deep neural network architecture, but its approach deviates significantly from the current Deep Learning paradigm, as it does not use concepts such as mini-batching, stochastic gradient descent and gradient back-propagation through deep layers. SINF is an existence proof that greedy Deep Learning without these ingredients can be SOTA for modern high dimensional ML applications. SINF may be of particular interest in applications where robustness, insensitivity to hyperparameters, small data size, and speed are of primary importance.

# Improving Continuous Normalizing Flows using a Multi-Resolution Framework

Anonymous Authors ${}^{1}$

## Abstract

Recent work has shown that Continuous Normalizing Flows (CNFs) can serve as generative models of images with exact likelihood calculation and invertible generation/density estimation.
In this work we introduce a Multi-Resolution variant of such models (MRCNF). We propose a transformation between resolutions that incurs no change in the log likelihood. We show that this approach yields comparable likelihood values for various image datasets, with improved performance at higher resolutions, with fewer parameters, using only 1 GPU.

## 1. Introduction

Reversible generative models derived through the use of the change-of-variables technique (Dinh et al., 2017; Kingma & Dhariwal, 2018; Ho et al., 2019; Yu et al., 2020) are growing in interest as generative models because they enable efficient density estimation, efficient sampling, and computation of exact likelihoods. A promising variation of the change-of-variables approach is based on a continuous-time variant of normalizing flows (Chen et al., 2018; Grathwohl et al., 2019), which uses an integral over continuous-time dynamics to transform a base distribution into the model distribution, called Continuous Normalizing Flows (CNFs). CNFs have been shown to be capable of modelling complex distributions such as those associated with images. While this new paradigm for the generative modelling of images is not as mature as Generative Adversarial Networks (GANs) (Goodfellow et al., 2016) or Variational Autoencoders (VAEs) (Kingma & Welling, 2013) in terms of generated image quality, it is a promising direction of research.

In this work, we focus on making the training of continuous normalizing flows feasible for higher resolution images, and on reducing computation time. We thus introduce a novel multi-resolution technique for continuous normalizing flows, modelling the conditional distribution of the high-level information at each resolution in an autoregressive fashion. We show that this makes the models perform better at higher resolutions. A high-level view of our approach is shown in Figure 1.
Our main contributions are:

![01963e31-151e-7dfc-ba23-3f8975c319ea_0_891_533_709_305_0.jpg](images/01963e31-151e-7dfc-ba23-3f8975c319ea_0_891_533_709_305_0.jpg)

Figure 1. The architecture of our MRCNF method (best viewed in color). Continuous normalizing flows (CNFs) ${g}_{s}$ are used to generate images ${\mathbf{x}}_{s}$ from noise ${\mathbf{z}}_{s}$ at each resolution, with those at finer resolutions conditioned (dashed lines) on the coarser image one level above, ${\mathbf{x}}_{s + 1}$, except at the coarsest level.

1. We introduce Multi-Resolution Continuous Normalizing Flows (MRCNF), through which we achieve state-of-the-art bits-per-dimension (BPD) (negative log likelihood per pixel) on ImageNet64 using fewer model parameters relative to comparable methods.

2. We propose a multi-resolution transformation that does not add cost in terms of likelihood.

## 2. Background

### 2.1. Normalizing Flows

Normalizing flows (Tabak & Turner, 2013; Jimenez Rezende & Mohamed, 2015; Dinh et al., 2017; Papamakarios et al., 2019; Kobyzev et al., 2020) are generative models that map a complex data distribution, such as real images, to a known noise distribution. They are trained by maximizing the log likelihood of their input images. Suppose a normalizing flow $g$ produces output $\mathbf{z}$ from an input $\mathbf{x}$, i.e. $\mathbf{z} = g\left( \mathbf{x}\right)$. The change-of-variables

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.
+ +--- + +formula provides the likelihood of the image under this transformation as: + +$$ +\log p\left( \mathbf{x}\right) = \log \left| {\det \frac{\mathrm{d}g}{\mathrm{\;d}\mathbf{x}}}\right| + \log p\left( \mathbf{z}\right) \tag{1} +$$ + +The first term on the right (log determinant of the Jacobian) is often intractable, however, previous works on normalizing flows have found ways to estimate this efficiently. The second term, $\log p\left( \mathbf{z}\right)$ , is computed as the log probability of $\mathbf{z}$ under a known noise distribution, typically the standard Gaussian $\mathcal{N}$ . + +### 2.2. Continuous Normalizing Flows + +Continuous Normalizing Flows (CNF) (Chen et al., 2018; Grathwohl et al., 2019; Finlay et al., 2020) are a variant of normalizing flows that operate in the continuous domain, using the framework of Neural ODEs (Chen et al., 2018). Suppose CNF $g$ transforms its state $\mathbf{v}\left( t\right)$ using a Neural ODE (Chen et al.,2018) with neural network $f$ defining the differential. Here, $\mathbf{v}\left( {t}_{0}\right) = \mathbf{x}$ is, say, an image, and at the final time step $\mathbf{v}\left( {t}_{1}\right) = \mathbf{z}$ is a sample from a known noise distribution. + +$$ +\mathbf{v}\left( {t}_{1}\right) = g\left( {\mathbf{v}\left( {t}_{0}\right) }\right) = \mathbf{v}\left( {t}_{0}\right) + {\int }_{{t}_{0}}^{{t}_{1}}f\left( {\mathbf{v}\left( t\right) , t}\right) \mathrm{d}t \tag{2} +$$ + +Chen et al. (2018); Grathwohl et al. (2019) proposed a more efficient method to compute the change in log-probability in the context of CNFs, called the instantaneous variant of the change-of-variables formula, another differential equation: + +$$ +\Delta \log {p}_{\mathbf{v}\left( {t}_{0}\right) \rightarrow \mathbf{v}\left( {t}_{1}\right) } = - {\int }_{{t}_{0}}^{{t}_{1}}\operatorname{Tr}\left( \frac{\partial f}{\partial \mathbf{v}\left( t\right) }\right) \mathrm{d}t \tag{3} +$$ + +An ODE solver solves both differential equations eq. 
(2) and eq. (3). Thus, a CNF provides both the final state $\mathbf{v}\left( {t}_{1}\right)$ and the change in log probability $\Delta \log {p}_{\mathbf{v}\left( {t}_{0}\right) \rightarrow \mathbf{v}\left( {t}_{1}\right) }$ together.

Prior works (Grathwohl et al., 2019; Finlay et al., 2020; Ghosh et al., 2020; Onken et al., 2021; Huang & Yeh, 2021) have trained CNFs as reversible generative models of images by maximizing the likelihood of images:

$$
\mathbf{z} = g\left( \mathbf{x}\right) ;\;\log p\left( \mathbf{x}\right) = \Delta \log {p}_{\mathbf{x} \rightarrow \mathbf{z}} + \log p\left( \mathbf{z}\right) \tag{4}
$$

where $\mathbf{x}$ is an image, $\mathbf{z}$ and $\Delta \log {p}_{\mathbf{x} \rightarrow \mathbf{z}}$ are computed by the CNF using eq. (2) and eq. (3), and $\log p\left( \mathbf{z}\right)$ is the likelihood of the computed $\mathbf{z}$ under a known noise distribution, typically the standard Gaussian $\mathcal{N}\left( {\mathbf{0},\mathbf{I}}\right)$. The CNF $g$ is trained by maximizing ${\mathbb{E}}_{\mathbf{x}}\log p\left( \mathbf{x}\right)$. Novel images are generated by sampling $\mathbf{z}$ from the known noise distribution and running it through the CNF in reverse.

## 3. Our method

Our method is a reversible generative model of images that builds on top of CNFs. We introduce the notion of multiple resolutions in images, and connect the different resolutions in an autoregressive fashion. This helps generate images faster, with better likelihood values at higher resolutions. Moreover, we used only one GPU in all our experiments. We call this model the Multi-Resolution Continuous Normalizing Flow (MRCNF).

### 3.1. Multi-Resolution image representation

Multi-resolution representations of images have been explored in computer vision for decades (Burt, 1981; Witkin, 1987; Burt & Adelson, 1983; Mallat, 1989; Marr, 2010).
We express an image $\mathbf{x}$ as a series of high-level information terms ${\mathbf{y}}_{s}$ not present in the immediately coarser images ${\mathbf{x}}_{s + 1}$ (obtained by averaging every $2 \times 2$ patch), and a final coarse image ${\mathbf{x}}_{S}$:

$$
\mathbf{x} \rightarrow \left( {{\mathbf{y}}_{1},{\mathbf{x}}_{2}}\right) \rightarrow \cdots \rightarrow \left( {{\mathbf{y}}_{1},{\mathbf{y}}_{2},\ldots ,{\mathbf{y}}_{S - 1},{\mathbf{x}}_{S}}\right) \tag{5}
$$

Our overall method is to map these $S$ terms to $S$ noise samples using $S$ CNFs.

### 3.2. Defining the high-level information ${\mathbf{y}}_{s}$

The multi-resolution representation in eq. (5) needs to be invertible, i.e. it should be possible to deterministically obtain ${\mathbf{x}}_{s}$ from ${\mathbf{y}}_{s}$ and ${\mathbf{x}}_{s + 1}$, and vice versa. Further, it is preferable that this transformation incur minimal additional computational cost and not add too much change in terms of log-likelihood. We choose to perform a linear transformation with the following properties: 1) volume preserving, i.e. its determinant is 1, 2) angle preserving, and 3) range preserving (respecting the maximum principle (Varga, 1966)).
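The coarsening chain behind eq. (5) can be sketched in a few lines of NumPy (an illustrative sketch with helper names of our own; the high-level information $\mathbf{y}_s$ itself is defined by the transform derived next):

```python
import numpy as np

def coarsen(x):
    # Average every 2x2 patch: (H, W) -> (H/2, W/2), giving x_{s+1} from x_s.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(x, S):
    # Chain of progressively coarser images [x_1, x_2, ..., x_S], as in eq. (5).
    chain = [x]
    for _ in range(S - 1):
        chain.append(coarsen(chain[-1]))
    return chain

rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, size=(8, 8))  # toy single-channel 8x8 "image"
xs = pyramid(x1, S=3)                    # shapes (8, 8), (4, 4), (2, 2)
# 2x2 averaging preserves both the global mean and the pixel value range.
```

Note that averaging keeps every coarser image inside the pixel range of the original, which is exactly the range-preservation property required of the transform below.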
This could be viewed as a problem of finding a matrix $\mathbf{M}$ such that: ${\left\lbrack {x}_{1},{x}_{2},{x}_{3},{x}_{4}\right\rbrack }^{\top } = \mathbf{M}{\left\lbrack {y}_{1},{y}_{2},{y}_{3},\bar{x}\right\rbrack }^{\top }$ . We fix the last column of $\mathbf{M}$ as ${\left\lbrack 1,1,1,1\right\rbrack }^{\top }$ , since every pixel value in ${\mathbf{x}}_{1}$ depends on $\bar{x}$ . Finding the rest of the parameters can be viewed as requiring four 3D vectors that are non-trivially equally spaced. These can be considered as the four corners of a tetrahedron in 3D space, under any configuration (rotated in space) and scaling of the vectors. + +Out of the many possibilities for this tetrahedron, we could choose the matrix that performs the Discrete Haar Wavelet Transform (DHWT) (Mallat, 1989; Mallat & Peyré, 2009). However, this has $\log \left| {\det \left( {\mathbf{M}}^{-1}\right) }\right| = \log \left( {1/2}\right)$ , and is + +therefore not volume preserving. We introduce a variant of the DHWT matrix that is unimodular, i.e. 
has a determinant of 1 (therefore volume preserving), while also preserving the range of the images for the input and its average: + +$$ +\left\lbrack \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{array}\right\rbrack = {a}^{-1}\left\lbrack \begin{matrix} c & c & c & a \\ c & - c & - c & a \\ - c & c & - c & a \\ - c & - c & c & a \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \\ \bar{x} \end{array}\right\rbrack \Leftrightarrow \tag{6} +$$ + +$$ +\left\lbrack \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \\ \bar{x} \end{array}\right\rbrack = \left\lbrack \begin{matrix} {c}^{-1} & {c}^{-1} & - {c}^{-1} & - {c}^{-1} \\ {c}^{-1} & - {c}^{-1} & {c}^{-1} & - {c}^{-1} \\ {c}^{-1} & - {c}^{-1} & - {c}^{-1} & {c}^{-1} \\ {a}^{-1} & {a}^{-1} & {a}^{-1} & {a}^{-1} \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{array}\right\rbrack \tag{7} +$$ + +where $c = {2}^{2/3}, a = 4$ , and $\log \left| {\det \left( {\mathbf{M}}^{-1}\right) }\right| = \log \left( 1\right) = 0$ . This can be scaled up to larger spatial regions by performing the same calculation for each $2 \times 2$ patch. Let $M$ be the function that uses matrix $\mathbf{M}$ from above and combines every pixel in ${\mathbf{x}}_{s + 1}$ with 3 corresponding pixels in ${\mathbf{y}}_{s}$ to make the $2 \times 2$ patch at that location in ${\mathbf{x}}_{s}$ using eq. (6): + +$$ +{\mathbf{x}}_{s} = M\left( {{\mathbf{y}}_{s},{\mathbf{x}}_{s + 1}}\right) \Leftrightarrow {\mathbf{y}}_{s},{\mathbf{x}}_{s + 1} = {M}^{-1}\left( {\mathbf{x}}_{s}\right) \tag{8} +$$ + +### 3.3. Multi-Resolution Continuous Normalizing Flows + +Using the multi-resolution image representation in eq. 
(5), we characterize the conditional distribution over the additional degrees of freedom $\left( {\mathbf{y}}_{s}\right)$ required to generate a higher resolution image $\left( {\mathbf{x}}_{s}\right)$ that is consistent with the average $\left( {\mathbf{x}}_{s + 1}\right)$ over the equivalent pixel space. At each resolution $s$, we use a CNF to reversibly map between ${\mathbf{y}}_{s}$ (or ${\mathbf{x}}_{S}$ when $s = S$) and a sample ${\mathbf{z}}_{s}$ from a known noise distribution. At generation, ${\mathbf{y}}_{s}$ adds only the information missing from ${\mathbf{x}}_{s + 1}$, conditioned on it. This framework ensures that one coarse image can generate several potential fine images, all of which have that coarse image as their average. This fact is preserved across resolutions.

In principle, any generative model could be used to map between the multi-resolution image and noise. Normalizing flows are good candidates, as they are probabilistic generative models that provide exact likelihood estimates and can be run in reverse to generate novel data from the model's distribution. This allows model comparison and measurement of generalization to unseen data. We choose to use the CNF variant of normalizing flows at each resolution, since CNFs have recently been shown to be effective in modeling image distributions using a fraction of the number of parameters typically used in normalizing flows (and non flow-based approaches), and their underlying framework of Neural ODEs has been shown to be more robust than convolutional layers (Yan et al., 2020).

Training: We train an MRCNF by maximizing the average log-likelihood of the images in the training dataset under the model, i.e. $\max {\mathbb{E}}_{\mathbf{x}}\log p\left( \mathbf{x}\right)$.
The log probability of each image $\log p\left( \mathbf{x}\right)$ can be estimated recursively as:

$$
\log p\left( \mathbf{x}\right) = \log p\left( {{\mathbf{y}}_{1},{\mathbf{x}}_{2}}\right) = \log p\left( {{\mathbf{y}}_{1} \mid {\mathbf{x}}_{2}}\right) + \log p\left( {\mathbf{x}}_{2}\right)
$$

$$
= \mathop{\sum }\limits_{{s = 1}}^{{S - 1}}\log p\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) + \log p\left( {\mathbf{x}}_{S}\right) \tag{9}
$$

where $\log p\left( {\mathbf{x}}_{S}\right)$ is computed by the CNF ${g}_{S}$ using eq. (4):

$$
{\mathbf{z}}_{S} = {g}_{S}\left( {\mathbf{x}}_{S}\right) ;\;\log p\left( {\mathbf{x}}_{S}\right) = \Delta \log {p}_{{\mathbf{x}}_{S} \rightarrow {\mathbf{z}}_{S}} + \log p\left( {\mathbf{z}}_{S}\right) \tag{10}
$$

and $\log p\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right)$ is computed similarly by the CNFs ${g}_{s}$, conditioning on the coarser image:

$$
\left\{ \begin{array}{l} {\mathbf{z}}_{s} = {g}_{s}\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) \\ \log p\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) = \Delta \log {p}_{\left( {{\mathbf{y}}_{s} \rightarrow {\mathbf{z}}_{s}}\right) \mid {\mathbf{x}}_{s + 1}} + \log p\left( {\mathbf{z}}_{s}\right) \end{array}\right. \tag{11}
$$

This model can be seen as a stack of CNFs connected in an autoregressive fashion. Typically, likelihood-based generative models are compared using the metric of bits-per-dimension (BPD), i.e. the negative log likelihood per pixel of the image:

$$
\operatorname{BPD}\left( \mathbf{x}\right) = \frac{-\log p\left( \mathbf{x}\right) }{\operatorname{dims}\left( \mathbf{x}\right) } \tag{12}
$$

Hence, we train our MRCNF to minimize the average BPD of the images in the training dataset, computed using eq. (12).
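As a numerical sanity check on the transform of section 3.2 that underlies these per-resolution likelihood terms, the following sketch (ours, illustrative) builds the analysis matrix of eq. (7) with $c = 2^{2/3}$ and $a = 4$, and verifies that it is unimodular and exactly inverted by the synthesis matrix of eq. (6):

```python
import numpy as np

c, a = 2.0 ** (2.0 / 3.0), 4.0

# Eq. (7): analysis matrix mapping pixels (x1, x2, x3, x4) of a 2x2 patch
# to the high-level information (y1, y2, y3) and the average xbar.
fwd = np.array([
    [1 / c,  1 / c, -1 / c, -1 / c],
    [1 / c, -1 / c,  1 / c, -1 / c],
    [1 / c, -1 / c, -1 / c,  1 / c],
    [1 / a,  1 / a,  1 / a,  1 / a],
])

# Eq. (6): synthesis matrix recovering the pixels from (y1, y2, y3, xbar).
inv = (1.0 / a) * np.array([
    [ c,  c,  c, a],
    [ c, -c, -c, a],
    [-c,  c, -c, a],
    [-c, -c,  c, a],
])

det = abs(np.linalg.det(fwd))           # unimodular: |det| = 4 / c^3 = 1
patch = np.array([0.1, 0.5, 0.7, 0.9])  # toy pixel values x1..x4
y = fwd @ patch                         # y[3] is the patch average xbar
roundtrip = inv @ y                     # recovers the patch exactly
```

The rows of the Hadamard-like core are mutually orthogonal, so the pair is an exact inverse for any $c$; the specific choice $c = 2^{2/3}$ is what makes $|\det| = 4/c^3 = 1$.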
Although the final log likelihood $\log p\left( \mathbf{x}\right)$ involves sequentially summing over values returned by all $S$ CNFs, each CNF can be trained independently, in parallel.

We use FFJORD (Grathwohl et al., 2019) as the baseline model for our CNFs. In addition, we use two regularization terms introduced by RNODE (Finlay et al., 2020) to speed up the training of FFJORD models.

Generation: First, ${\mathbf{z}}_{s}, s = 1,\ldots, S$ are sampled from the latent noise distributions. Given an $S$-resolution model, the CNF ${g}_{s}$ at resolution $s$ transforms the noise sample ${\mathbf{z}}_{s}$ into the high-level information ${\mathbf{y}}_{s}$ conditioned on the immediately coarser image ${\mathbf{x}}_{s + 1}$ (except ${g}_{S}$, which is unconditioned). ${\mathbf{y}}_{s}$ and ${\mathbf{x}}_{s + 1}$ are then combined to form ${\mathbf{x}}_{s}$ as described in section 3.2 (see fig. 1). This process is repeated progressively from coarser to finer resolutions:

$$
{\mathbf{x}}_{S} = {g}_{S}^{-1}\left( {\mathbf{z}}_{S}\right) \;s = S
$$

$$
\left\{ {\begin{array}{l} {\mathbf{y}}_{s} = {g}_{s}^{-1}\left( {{\mathbf{z}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) \\ {\mathbf{x}}_{s} = M\left( {{\mathbf{y}}_{s},{\mathbf{x}}_{s + 1}}\right) \end{array}\;s = S - 1 \rightarrow 1}\right. \tag{13}
$$

## 4. Related work

Several prior works on normalizing flows (Kingma & Dhariwal, 2018; Hoogeboom et al., 2019a;b; Song et al., 2019; Ma et al., 2019; Durkan et al., 2019; Chen et al., 2020; Ho et al., 2019; Lee et al., 2020; Yu et al., 2020) build on RealNVP (Dinh et al., 2017). Although they achieve great results in terms of BPD and image quality, they nonetheless report results obtained with significantly more parameters (some with ${100}\mathrm{x}$ more!) and several times the GPU hours of training.
Although our MRCNF model is similar to the recently published WaveletFlow (Yu et al., 2020), we generalize the notion of a multi-resolution image representation. WaveletFlow builds on the Glow (Kingma & Dhariwal, 2018) architecture, while ours builds on CNFs. WaveletFlow uses an orthonormal transformation, whereas our eq. (6) involves a unimodular transformation. Finally, WaveletFlow applies special sampling techniques to obtain better samples from its model. We have so far not used such techniques for generation, but we believe they could potentially help our models as well.

"Multiple scales" in prior normalizing flows: Normalizing flows (Dinh et al., 2017; Kingma & Dhariwal, 2018; Grathwohl et al., 2019) try to be "multi-scale" by transforming the input at one resolution in a smart way (the squeezing operation) such that the width of the features progressively reduces. In contrast, our model stacks normalizing flows at multiple resolutions in an autoregressive fashion.

## 5. Experimental results

We train Multi-Resolution Continuous Normalizing Flow (MRCNF) and Multi-Resolution Continuous Normalizing Flow - Wavelet (MRCNF-Wavelet) models on the ImageNet (Deng et al., 2009) dataset at 32x32, 64x64 and 128x128. We build on the code provided by Finlay et al. (2020) (https://github.com/cfinlay/ffjord-rnode). In all cases, we train using only one NVIDIA RTX 2080 Ti GPU with 11 GB.

At lower resolutions, we achieve comparable BPDs in less time with far fewer parameters than previous normalizing flows (and non flow-based approaches). However, the power of the multi-resolution formulation is more evident at higher resolutions: we achieve state-of-the-art BPD for ImageNet64 with significantly fewer parameters and less training time, using only one GPU.
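For concreteness, the BPD bookkeeping of eqs. (9) and (12) can be written as a small helper (ours; the log-likelihood values below are arbitrary toy numbers, not model outputs, and we assume the conventional base-2 BPD units):

```python
import math

def total_log_p(cond_log_p, coarsest_log_p):
    # Eq. (9): log p(x) = sum_s log p(y_s | x_{s+1}) + log p(x_S).
    return sum(cond_log_p) + coarsest_log_p

def bpd(log_p_nats, dims):
    # Eq. (12), in the conventional base-2 units: -log2 p(x) per dimension.
    return -log_p_nats / (dims * math.log(2.0))

# Toy 2-resolution model of a 3x32x32 image: one conditional term for y_1
# plus the unconditional term for the 3x16x16 coarse image x_2.
dims = 3 * 32 * 32
log_p = total_log_p(cond_log_p=[-6200.0], coarsest_log_p=-2100.0)
value = bpd(log_p, dims)   # roughly 3.9 bits per dimension
```

Because the total splits into independent per-resolution terms, each CNF's contribution to the BPD can be monitored (and trained) separately.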
Progressive training: Since each resolution can be trained independently, we train an MRCNF model on ImageNet128 by training only the finest resolution $\left( {{128} \times {128}}\right)$ conditioned on the immediately coarser $\left( {{64} \times {64}}\right)$ images, and attaching that to a 3-resolution ${64} \times {64}$ model. The resulting 4-resolution ImageNet128 model gives a BPD of 3.31 (Table 2) with just 2.74M parameters and 59 GPU hours of total training time.

## 6. Conclusion

We presented a Multi-Resolution approach to CNFs, which provides an efficient framework for likelihood calculations by training on a single GPU in less time with significantly fewer parameters. We see a marked improvement in BPD for ImageNet64 and above.

Table 1. Bits-per-dimension (lower is better) of images for CIFAR10, ImageNet at ${32} \times {32}$, and ImageNet at ${64} \times {64}$, reported as the mean and standard deviation across the dataset. We also report the number of parameters in the models, and the time taken to train (in GPU hours). Most previous models use multiple GPUs for training; all our models were trained on only one GPU: an NVIDIA RTX 2080 Ti 11GB. ${}^{ \ddagger }$ As reported in (Ghosh et al., 2020). ${}^{§}$ Re-implemented by us. 'x': fails to train. Blank spaces indicate unreported values. *RNODE (Finlay et al., 2020) used 4 GPUs to train on ImageNet64.
| Method | ImageNet32 BPD | Param | Time | ImageNet64 BPD | Param | Time |
|---|---|---|---|---|---|---|
| *Flow-based prior work* | | | | | | |
| RealNVP | 4.28 | 46.0M | | 3.98 | 96.0M | |
| Glow | 4.09 | 66.1M | | 3.81 | 111.1M | |
| MintNet | 4.06 | 17.4M | | | | |
| MaCow | | | | 3.69 | 122.5M | |
| Flow++ | 3.86 | 169M | | 3.69 | 73.5M | |
| Wavelet Flow | 4.08 | 64M | | 3.78 | 96M | 822 |
| *1-Resolution CNF* | | | | | | |
| FFJORD | 3.96${}^{ \ddagger }$ | 2.00M${}^{ \ddagger }$ | >5 days${}^{ \ddagger }$ | x | x | x |
| RNODE | 2.36${}^{ \ddagger }$ | 2.00M | 30.1${}^{ \ddagger }$ | 3.83* | 2.00M | 64.1* |
| RNODE${}^{§}$ | 3.49${}^{§}$ | 1.58M${}^{§}$ | 40.39${}^{§}$ | | | |
| FFJORD + STEER | 3.84 | 2.00M | >5 days | | | |
| RNODE + STEER | 2.35 | 2.00M | 24.9 | | | |
| RNODE + STEER${}^{§}$ | 3.49${}^{§}$ | 1.58M${}^{§}$ | 30.07${}^{§}$ | | | |
| *(Ours) Multi-Resolution CNF (MRCNF)* | | | | | | |
| 2-resolution | 3.77 | 1.33M | 18.18 | - | - | - |
| 2-resolution | 3.78 | 6.68M | 17.98 | - | - | - |
| 3-resolution | 3.97 | 1.53M | 13.78 | 3.61 | 2.04M | 28.64 |
Table 2. Metrics for unconditional ImageNet128 generation.

| ImageNet128 | BPD | Param | Time |
|---|---|---|---|
| Parallel Multiscale (Reed et al., 2017) | 3.55 | | |
| SPN (Menick & Kalchbrenner, 2019) | 3.08 | 250M | |
| (Ours) 4-resolution MRCNF | 3.30 | 2.74M | 58.59 |
![01963e31-151e-7dfc-ba23-3f8975c319ea_3_912_1505_663_330_0.jpg](images/01963e31-151e-7dfc-ba23-3f8975c319ea_3_912_1505_663_330_0.jpg)

Figure 2. ImageNet: example of super-resolving from ground truth ${16} \times {16}$ to ${64} \times {64}$. Top: ground truth, middle: generated, bottom: ground truth.

## References

Burt, P. and Adelson, E. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532-540, 1983.

Burt, P. J. Fast filter transform for image processing. Computer Graphics and Image Processing, 16(1):20-51, 1981.

Chen, J., Lu, C., Chenli, B., Zhu, J., and Tian, T. VFlow: More expressive generative flows with variational data augmentation. In International Conference on Machine Learning, 2020.

Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 2018.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. In International Conference on Learning Representations, 2017.

Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. Neural spline flows. In Advances in Neural Information Processing Systems, volume 32, pp. 7511-7522, 2019. URL https://proceedings.neurips.cc/paper/2019/file/7ac71d433f282034e088473244df8c02-Paper.pdf.

Finlay, C., Jacobsen, J.-H., Nurbekyan, L., and Oberman, A. How to train your neural ODE: the world of Jacobian and kinetic regularization. International Conference on Machine Learning, 2020.

Ghosh, A., Behl, H. S., Dupont, E., Torr, P. H., and Namboodiri, V. STEER: Simple temporal regularization for neural ODEs. In Advances in Neural Information Processing Systems, 2020.

Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y.
Deep learning, volume 1. MIT Press, 2016. + +Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. International Conference on Learning Representations, 2019. + +Ho, J., Chen, X., Srinivas, A., Duan, Y., and Abbeel, P. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In International Conference on Machine Learning, 2019. + +Hoogeboom, E., Berg, R. v. d., and Welling, M. Emerging convolutions for generative normalizing flows. In International Conference on Machine Learning, 2019a. + +Hoogeboom, E., Peters, J., van den Berg, R., and Welling, M. Integer discrete flows and lossless compression. In Advances in Neural Information Processing Systems, volume 32, pp. 12134-12144, 2019b. URL https://proceedings.neurips.cc/paper/2019/file/9e9a30b74c49d07d8150c8c83b1ccf07-Paper.pdf. + +Huang, H.-H. and Yeh, M.-Y. Accelerating continuous normalizing flow with trajectory polynomial regularization. AAAI Conference on Artificial Intelligence, 2021. + +Jimenez Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, pp. 1530-1538, 2015. + +Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1\mathrm{x}1$ convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018. + +Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. + +Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. + +Lee, S.-g., Kim, S., and Yoon, S. Nanoflow: Scalable normalizing flows with sublinear parameter complexity. In Advances in Neural Information Processing Systems, 2020. + +Ma, X., Kong, X., Zhang, S., and Hovy, E. Macow: Masked convolutional generative flow.
In Advances in Neural Information Processing Systems, pp. 5893-5902, 2019. + +Mallat, S. G. A theory for multiresolution signal decomposition: the wavelet representation. IEEE transactions on pattern analysis and machine intelligence, 11(7):674- 693, 1989. + +Mallat, S. G. and Peyré, G. A wavelet tour of signal processing: the sparse way. Elsevier, 2009. + +Marr, D. Vision: A computational investigation into the human representation and processing of visual information. MIT press, 2010. + +Menick, J. and Kalchbrenner, N. Generating high fidelity images with subscale pixel networks and multidimensional upscaling. In International Conference on Learning Representations, 2019. + +Onken, D., Fung, S. W., Li, X., and Ruthotto, L. Ot-flow: Fast and accurate continuous normalizing flows via optimal transport. AAAI Conference on Artificial Intelligence, 2021. + +Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019. + +Reed, S., Oord, A. v. d., Kalchbrenner, N., Colmenarejo, S. G., Wang, Z., Belov, D., and De Freitas, N. Parallel multiscale autoregressive density estimation. In International Conference on Machine Learning, 2017. + +Song, Y., Meng, C., and Ermon, S. Mintnet: Building invertible neural networks with masked convolutions. In Advances in Neural Information Processing Systems, pp. 11004-11014, 2019. + +Tabak, E. G. and Turner, C. V. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164, 2013. + +Varga, R. S. On a discrete maximum principle. SIAM Journal on Numerical Analysis, 3(2):355-359, 1966. + +Witkin, A. P. Scale-space filtering. In Readings in Computer Vision, pp. 329-332. Elsevier, 1987. + +Yan, H., Du, J., Tan, V. Y. F., and Feng, J. On robustness of neural ordinary differential equations. International Conference on Learning Representations, 2020. 
+ +Yu, J., Derpanis, K., and Brubaker, M. Wavelet flow: Fast training of high resolution normalizing flows. In Advances in Neural Information Processing Systems, 2020. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/XzNsBgj68_L/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/XzNsBgj68_L/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..5d3cdd5d8a51dff1001337b2be0b099fb2935ad2 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/XzNsBgj68_L/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,271 @@ +§ IMPROVING CONTINUOUS NORMALIZING FLOWS USING A MULTI-RESOLUTION FRAMEWORK + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Recent work has shown that Continuous Normalizing Flows (CNFs) can serve as generative models of images with exact likelihood calculation and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF). We introduce a transformation between resolutions that allows for no change in the log likelihood. We show that this approach yields comparable likelihood values for various image datasets, with improved performance at higher resolutions, with fewer parameters, using only 1 GPU. + +§ 1. INTRODUCTION + +Reversible generative models derived through the use of the change of variables technique (Dinh et al., 2017; Kingma & Dhariwal, 2018; Ho et al., 2019; Yu et al., 2020) are growing in interest because they enable efficient density estimation, efficient sampling, and computation of exact likelihoods.
A promising variation of the change-of-variable approach is based on a continuous-time variant of normalizing flows (Chen et al., 2018; Grathwohl et al., 2019), called Continuous Normalizing Flows (CNF), which transforms a base distribution into the model distribution by integrating over continuous-time dynamics. CNFs have been shown to be capable of modelling complex distributions such as those associated with images. While this new paradigm for the generative modelling of images is not as mature as Generative Adversarial Networks (GANs) (Goodfellow et al., 2016) or Variational Autoencoders (VAEs) (Kingma & Welling, 2013) in terms of the generated image quality, it is a promising direction of research. + +In this work, we focus on making the training of continuous normalizing flows feasible for higher resolution images and on reducing computation time. We thus introduce a novel multi-resolution technique for continuous normalizing flows, by modelling the conditional distribution of high-level information at each resolution in an autoregressive fashion. We show that this makes the models perform better at higher resolutions. A high-level view of our approach is shown in Figure 1. Our main contributions are: + +Figure 1. The architecture of our MRCNF method (best viewed in color). Continuous normalizing flows (CNFs) ${g}_{s}$ are used to generate images ${\mathbf{x}}_{s}$ from noise ${\mathbf{z}}_{s}$ at each resolution, with those at finer resolutions conditioned (dashed lines) on the coarser image one level above ${\mathbf{x}}_{s + 1}$ , except at the coarsest level. + +1. We introduce Multi-Resolution Continuous Normalizing Flows (MRCNF), through which we achieve state-of-the-art bits-per-dimension (BPD, the negative log likelihood per pixel) on ImageNet64 using fewer model parameters relative to comparable methods. + +2. We propose a multi-resolution transformation that does not add cost in terms of likelihood. + +§ 2.
BACKGROUND + +§ 2.1. NORMALIZING FLOWS + +Normalizing flows (Tabak & Turner, 2013; Jimenez Rezende & Mohamed, 2015; Dinh et al., 2017; Papamakarios et al., 2019; Kobyzev et al., 2020) are generative models that map a complex data distribution, such as real images, to a known noise distribution. They are trained by maximizing the log likelihood of their input images. Suppose a normalizing flow $g$ produces output $\mathbf{z}$ from an input $\mathbf{x}$ , i.e. $\mathbf{z} = g\left( \mathbf{x}\right)$ . The change-of-variables formula provides the likelihood of the image under this transformation as: + +$$
\log p\left( \mathbf{x}\right) = \log \left| {\det \frac{\mathrm{d}g}{\mathrm{\;d}\mathbf{x}}}\right| + \log p\left( \mathbf{z}\right) \tag{1}
$$ + +The first term on the right (the log determinant of the Jacobian) is often intractable; however, previous works on normalizing flows have found ways to estimate it efficiently. The second term, $\log p\left( \mathbf{z}\right)$ , is computed as the log probability of $\mathbf{z}$ under a known noise distribution, typically the standard Gaussian $\mathcal{N}$ . + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +§ 2.2. CONTINUOUS NORMALIZING FLOWS + +Continuous Normalizing Flows (CNF) (Chen et al., 2018; Grathwohl et al., 2019; Finlay et al., 2020) are a variant of normalizing flows that operate in the continuous domain, using the framework of Neural ODEs (Chen et al., 2018). Suppose CNF $g$ transforms its state $\mathbf{v}\left( t\right)$ using a Neural ODE (Chen et al., 2018) with neural network $f$ defining the differential. Here, $\mathbf{v}\left( {t}_{0}\right) = \mathbf{x}$ is, say, an image, and at the final time step $\mathbf{v}\left( {t}_{1}\right) = \mathbf{z}$ is a sample from a known noise distribution.
+ +$$
\mathbf{v}\left( {t}_{1}\right) = g\left( {\mathbf{v}\left( {t}_{0}\right) }\right) = \mathbf{v}\left( {t}_{0}\right) + {\int }_{{t}_{0}}^{{t}_{1}}f\left( {\mathbf{v}\left( t\right) ,t}\right) \mathrm{d}t \tag{2}
$$ + +Chen et al. (2018); Grathwohl et al. (2019) proposed a more efficient method to compute the change in log-probability in the context of CNFs, called the instantaneous change-of-variables formula, another differential equation: + +$$
\Delta \log {p}_{\mathbf{v}\left( {t}_{0}\right) \rightarrow \mathbf{v}\left( {t}_{1}\right) } = - {\int }_{{t}_{0}}^{{t}_{1}}\operatorname{Tr}\left( \frac{\partial f}{\partial \mathbf{v}\left( t\right) }\right) \mathrm{d}t \tag{3}
$$ + +An ODE solver solves both differential equations, eq. (2) and eq. (3). Thus, a CNF provides both the final state $\mathbf{v}\left( {t}_{1}\right)$ and the change in log probability $\Delta \log {p}_{\mathbf{v}\left( {t}_{0}\right) \rightarrow \mathbf{v}\left( {t}_{1}\right) }$ together. + +Prior works (Grathwohl et al., 2019; Finlay et al., 2020; Ghosh et al., 2020; Onken et al., 2021; Huang & Yeh, 2021) have trained CNFs as reversible generative models of images by maximizing the likelihood of the training images: + +$$
\mathbf{z} = g\left( \mathbf{x}\right) ;\;\log p\left( \mathbf{x}\right) = \Delta \log {p}_{\mathbf{x} \rightarrow \mathbf{z}} + \log p\left( \mathbf{z}\right) \tag{4}
$$ + +where $\mathbf{x}$ is an image, $\mathbf{z}$ and $\Delta \log {p}_{\mathbf{x} \rightarrow \mathbf{z}}$ are computed by the CNF using eq. (2) and eq. (3), and $\log p\left( \mathbf{z}\right)$ is the likelihood of the computed $\mathbf{z}$ under a known noise distribution, typically the standard Gaussian $\mathcal{N}\left( {\mathbf{0},\mathbf{I}}\right)$ . CNF $g$ is trained by maximizing ${\mathbb{E}}_{\mathbf{x}}\log p\left( \mathbf{x}\right)$ . Novel images are generated by sampling $\mathbf{z}$ from the known noise distribution, and running it through the CNF in reverse.
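To make eqs. (2)-(4) concrete, the sketch below integrates a one-dimensional toy differential $f(v, t) = -v$ with forward Euler. This is an illustration only: actual CNFs use a learned neural network $f$, an adaptive ODE solver, and stochastic trace estimation rather than the exact trace used here.

```python
import numpy as np

def euler_cnf(x, t0=0.0, t1=1.0, steps=1000):
    """Jointly integrate eq. (2) for the state and eq. (3) for the
    log-density change, using the toy differential f(v, t) = -v."""
    v, dlogp = x, 0.0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        f = -v                      # f(v, t) = -v
        trace = -1.0                # Tr(df/dv) is constant for this f
        v = v + dt * f              # eq. (2), forward Euler step
        dlogp = dlogp - dt * trace  # eq. (3)
    return v, dlogp

x = 0.7
z, dlogp = euler_cnf(x)
# Closed forms for this f: z = x * exp(-(t1 - t0)) and dlogp = t1 - t0.
assert abs(z - x * np.exp(-1.0)) < 1e-3
assert abs(dlogp - 1.0) < 1e-9

# eq. (4): log p(x) = dlogp + log N(z; 0, 1) under the standard Gaussian base.
log_p_x = dlogp - 0.5 * (z ** 2 + np.log(2.0 * np.pi))
```

Because both integrals are solved by the same loop, the sample and its log-likelihood change come out together, which is the property the training objective in eq. (4) relies on.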
+ +§ 3. OUR METHOD + +Our method is a reversible generative model of images that builds on top of CNFs. We introduce the notion of multiple resolutions in images, and connect the different resolutions in an autoregressive fashion. This helps generate images faster, with better likelihood values at higher resolutions. Moreover, we used only one GPU in all our experiments. We call this model Multi-Resolution Continuous Normalizing Flow (MRCNF). + +§ 3.1. MULTI-RESOLUTION IMAGE REPRESENTATION + +Multi-resolution representations of images have been explored in computer vision for decades (Burt, 1981; Witkin, 1987; Burt & Adelson, 1983; Mallat, 1989; Marr, 2010). We express an image $\mathbf{x}$ as a series of high-level information ${\mathbf{y}}_{s}$ not present in the immediate coarser images ${\mathbf{x}}_{s + 1}$ (obtained by averaging every $2 \times 2$ patch), and a final coarse image ${\mathbf{x}}_{S}$ : + +$$
\mathbf{x} \rightarrow \left( {{\mathbf{y}}_{1},{\mathbf{x}}_{2}}\right) \rightarrow \cdots \rightarrow \left( {{\mathbf{y}}_{1},{\mathbf{y}}_{2},\ldots ,{\mathbf{y}}_{S - 1},{\mathbf{x}}_{S}}\right) \tag{5}
$$ + +Our overall method is to map these $S$ terms to $S$ noise samples using $S$ CNFs. + +§ 3.2. DEFINING THE HIGH-LEVEL INFORMATION ${\mathbf{y}}_{s}$ + +The multi-resolution representation in eq. (5) needs to be invertible, i.e. it should be possible to deterministically obtain ${\mathbf{x}}_{s}$ from ${\mathbf{y}}_{s}$ and ${\mathbf{x}}_{s + 1}$ , and vice versa. Further, it is preferable that this transformation incurs minimal additional computational cost, and does not add too much change in terms of log-likelihood. We choose to perform a linear transformation taking into account the following properties: 1) volume preserving, i.e. the determinant is 1, 2) angle preserving, and 3) range preserving (respecting the maximum principle (Varga, 1966)).
+ +Consider the simplest case of 2 resolutions where ${\mathbf{x}}_{1}$ is a $2 \times 2$ image with pixel values ${x}_{1},{x}_{2},{x}_{3},{x}_{4}$ , and ${\mathbf{x}}_{2}$ is a $1 \times 1$ image with pixel value $\bar{x} = \frac{1}{4}\left( {{x}_{1} + {x}_{2} + {x}_{3} + {x}_{4}}\right)$ . We require three values $\left( {{y}_{1},{y}_{2},{y}_{3}}\right) = {\mathbf{y}}_{1}$ that contain information not present in ${\mathbf{x}}_{2}$ , such that, when combined with ${\mathbf{x}}_{2}$ , they yield ${\mathbf{x}}_{1}$ . This can be viewed as the problem of finding a matrix $\mathbf{M}$ such that: ${\left\lbrack {x}_{1},{x}_{2},{x}_{3},{x}_{4}\right\rbrack }^{\top } = \mathbf{M}{\left\lbrack {y}_{1},{y}_{2},{y}_{3},\bar{x}\right\rbrack }^{\top }$ . We fix the last column of $\mathbf{M}$ as ${\left\lbrack 1,1,1,1\right\rbrack }^{\top }$ , since every pixel value in ${\mathbf{x}}_{1}$ depends on $\bar{x}$ . Finding the rest of the parameters can be viewed as requiring four 3D vectors that are non-trivially equally spaced. These can be considered as the four corners of a tetrahedron in 3D space, under any configuration (rotated in space) and scaling of the vectors. + +Out of the many possibilities for this tetrahedron, we could choose the matrix that performs the Discrete Haar Wavelet Transform (DHWT) (Mallat, 1989; Mallat & Peyré, 2009). However, this has $\log \left| {\det \left( {\mathbf{M}}^{-1}\right) }\right| = \log \left( {1/2}\right)$ , and is therefore not volume preserving. We introduce a variant of the DHWT matrix that is unimodular, i.e.
has a determinant of 1 (therefore volume preserving), while also preserving the range of the images for the input and its average: + +$$ +\left\lbrack \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{array}\right\rbrack = {a}^{-1}\left\lbrack \begin{matrix} c & c & c & a \\ c & - c & - c & a \\ - c & c & - c & a \\ - c & - c & c & a \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \\ \bar{x} \end{array}\right\rbrack \Leftrightarrow \tag{6} +$$ + +$$ +\left\lbrack \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \\ \bar{x} \end{array}\right\rbrack = \left\lbrack \begin{matrix} {c}^{-1} & {c}^{-1} & - {c}^{-1} & - {c}^{-1} \\ {c}^{-1} & - {c}^{-1} & {c}^{-1} & - {c}^{-1} \\ {c}^{-1} & - {c}^{-1} & - {c}^{-1} & {c}^{-1} \\ {a}^{-1} & {a}^{-1} & {a}^{-1} & {a}^{-1} \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{array}\right\rbrack \tag{7} +$$ + +where $c = {2}^{2/3},a = 4$ , and $\log \left| {\det \left( {\mathbf{M}}^{-1}\right) }\right| = \log \left( 1\right) = 0$ . This can be scaled up to larger spatial regions by performing the same calculation for each $2 \times 2$ patch. Let $M$ be the function that uses matrix $\mathbf{M}$ from above and combines every pixel in ${\mathbf{x}}_{s + 1}$ with 3 corresponding pixels in ${\mathbf{y}}_{s}$ to make the $2 \times 2$ patch at that location in ${\mathbf{x}}_{s}$ using eq. (6): + +$$ +{\mathbf{x}}_{s} = M\left( {{\mathbf{y}}_{s},{\mathbf{x}}_{s + 1}}\right) \Leftrightarrow {\mathbf{y}}_{s},{\mathbf{x}}_{s + 1} = {M}^{-1}\left( {\mathbf{x}}_{s}\right) \tag{8} +$$ + +§ 3.3. MULTI-RESOLUTION CONTINUOUS NORMALIZING FLOWS + +Using the multi-resolution image representation in eq. 
(5), we characterize the conditional distribution over the additional degrees of freedom $\left( {\mathbf{y}}_{s}\right)$ required to generate a higher resolution image $\left( {\mathbf{x}}_{s}\right)$ that is consistent with the average $\left( {\mathbf{x}}_{s + 1}\right)$ over the equivalent pixel space. At each resolution $s$ , we use a CNF to reversibly map between ${\mathbf{y}}_{s}$ (or ${\mathbf{x}}_{S}$ when $s = S$ ) and a sample ${\mathbf{z}}_{s}$ from a known noise distribution. At generation, ${\mathbf{y}}_{s}$ only adds information missing in ${\mathbf{x}}_{s + 1}$ , conditioned on it. This framework ensures that one coarse image can generate several potential fine images, all of which have that coarse image as their average. This fact is preserved across resolutions. + +In principle, any generative model could be used to map between the multi-resolution image and noise. Normalizing flows are good candidates for this as they are probabilistic generative models that provide exact likelihood estimates, and can be run in reverse to generate novel data from the model's distribution. This allows model comparison and measurement of generalization to unseen data. We choose to use the CNF variant of normalizing flows at each resolution, since CNFs have recently been shown to be effective in modeling image distributions using a fraction of the number of parameters typically used in normalizing flows (and non flow-based approaches), and their underlying framework of Neural ODEs has been shown to be more robust than convolutional layers (Yan et al., 2020). + +Training: We train an MRCNF by maximizing the average log-likelihood of the images in the training dataset under the model, i.e. $\max {\mathbb{E}}_{\mathbf{x}}\log p\left( \mathbf{x}\right)$ . The log probability of each
The log probability of each + +image $\log p\left( \mathbf{x}\right)$ can be estimated recursively as: + +$$ +\log p\left( \mathbf{x}\right) = \log p\left( {{\mathbf{y}}_{1},{\mathbf{x}}_{2}}\right) = \log p\left( {{\mathbf{y}}_{1} \mid {\mathbf{x}}_{2}}\right) + \log p\left( {\mathbf{x}}_{2}\right) +$$ + +$$ += \mathop{\sum }\limits_{{s = 1}}^{{S - 1}}\left( {\log p\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) }\right) + \log p\left( {\mathbf{x}}_{S}\right) \tag{9} +$$ + +where $\log p\left( {\mathbf{x}}_{S}\right)$ is computed by CNF ${g}_{S}$ using eq. (4): + +$$ +{\mathbf{z}}_{S} = {g}_{S}\left( {\mathbf{x}}_{S}\right) ;\;\log p\left( {\mathbf{x}}_{S}\right) = \Delta \log {p}_{{\mathbf{x}}_{S} \rightarrow {\mathbf{z}}_{S}} + \log p\left( {\mathbf{z}}_{S}\right) +$$ + +(10) + +and $\log p\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right)$ is also computed by CNFs ${g}_{s}$ similarly, conditioning on the coarser image: + +$$ +\left\{ \begin{array}{l} {\mathbf{z}}_{s} = {g}_{s}\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) \\ \log p\left( {{\mathbf{y}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) = \Delta \log {p}_{\left( {{\mathbf{y}}_{s} \rightarrow {\mathbf{z}}_{s}}\right) \mid {\mathbf{x}}_{s + 1}} + \log p\left( {\mathbf{z}}_{s}\right) \end{array}\right. +$$ + +(11) + +This model could be seen as a stack of CNFs connected in an autoregressive fashion. Typically, likelihood-based generative models are compared using the metric of bits-per-dimension (BPD), i.e. the negative log likelihood per pixel in the image: + +$$ +\operatorname{BPD}\left( \mathbf{x}\right) = \frac{-\log p\left( \mathbf{x}\right) }{\operatorname{dims}\left( \mathbf{x}\right) } \tag{12} +$$ + +Hence, we train our MRCNF to minimize the average BPD of the images in the training dataset, computed using eq. (12). 
Although the final log likelihood $\log p\left( \mathbf{x}\right)$ involves sequentially summing over values returned by all $S$ CNFs, each CNF can be trained independently, in parallel. + +We use FFJORD (Grathwohl et al., 2019) as the baseline model for our CNFs. In addition, we use the two regularization terms introduced by RNODE (Finlay et al., 2020) to speed up the training of FFJORD models. + +Generation: First, ${\mathbf{z}}_{s},s = 1,\ldots ,S$ are sampled from the latent noise distributions. Given an $S$ -resolution model, CNF ${g}_{s}$ at resolution $s$ transforms the noise sample ${\mathbf{z}}_{s}$ to high-level information ${\mathbf{y}}_{s}$ conditioned on the immediate coarse image ${\mathbf{x}}_{s + 1}$ (except ${g}_{S}$ , which is unconditioned). ${\mathbf{y}}_{s}$ and ${\mathbf{x}}_{s + 1}$ are then combined to form ${\mathbf{x}}_{s}$ as described in section 3.2 (see fig. 1). This process is repeated progressively from coarser to finer resolutions: + +$$
{\mathbf{x}}_{S} = {g}_{S}^{-1}\left( {\mathbf{z}}_{S}\right) \;s = S
$$ + +$$
\left\{ {\begin{array}{l} {\mathbf{y}}_{s} = {g}_{s}^{-1}\left( {{\mathbf{z}}_{s} \mid {\mathbf{x}}_{s + 1}}\right) \\ {\mathbf{x}}_{s} = M\left( {{\mathbf{y}}_{s},{\mathbf{x}}_{s + 1}}\right) \end{array}\;s = S - 1 \rightarrow 1}\right. \tag{13}
$$ + +§ 4. RELATED WORK + +Several prior works on normalizing flows (Kingma & Dhariwal, 2018; Hoogeboom et al., 2019a;b; Song et al., 2019; Ma et al., 2019; Durkan et al., 2019; Chen et al., 2020; Ho et al., 2019; Lee et al., 2020; Yu et al., 2020) build on RealNVP (Dinh et al., 2017). Although they achieve great results in terms of BPD and image quality, they nonetheless report results from models with significantly more parameters (some with ${100}\mathrm{x}$ !) and several times the GPU hours of training.
+ +Although our MRCNF model is similar to the recently published WaveletFlow (Yu et al., 2020), we generalize the notion of a multi-resolution image representation. WaveletFlow builds on the Glow (Kingma & Dhariwal, 2018) architecture, while ours builds on CNFs. WaveletFlow uses an orthonormal transformation, whereas our eq. (6) uses a unimodular one. Finally, WaveletFlow applies special sampling techniques to obtain better samples from its model. We have so far not used such techniques for generation, but we believe they can potentially help our models as well. + +"Multiple scales" in prior normalizing flows: Normalizing flows (Dinh et al., 2017; Kingma & Dhariwal, 2018; Grathwohl et al., 2019) try to be "multi-scale" by transforming the input at one resolution in a smart way (squeezing operation) such that the width of the features progressively reduces. In contrast, our model stacks normalizing flows at multiple resolutions in an autoregressive fashion. + +§ 5. EXPERIMENTAL RESULTS + +We train Multi-Resolution Continuous Normalizing Flow (MRCNF) and Multi-Resolution Continuous Normalizing Flow - Wavelet (MRCNF-Wavelet) models on the ImageNet (Deng et al., 2009) dataset at 32x32, 64x64, 128x128. We build on the code provided in (Finlay et al., 2020) (https://github.com/cfinlay/ffjord-rnode). In all cases, we train using only one NVIDIA RTX 2080 Ti GPU with 11GB. + +At lower resolutions, we achieve comparable BPDs in less time with far fewer parameters than previous normalizing flows (and non flow-based approaches). However, the power of the multi-resolution formulation is more evident at higher resolutions: we achieve state-of-the-art BPD for ImageNet64 with significantly fewer parameters and less training time using only one GPU.
+ +Progressive training: Since each resolution can be trained independently, we train an MRCNF model on ImageNet128 by training only the finest resolution $\left( {{128} \times {128}}\right)$ conditioned on the immediate coarser $\left( {{64} \times {64}}\right)$ images, and attaching that to a 3-resolution ${64} \times {64}$ model. The resulting 4-resolution ImageNet128 model gives a BPD of 3.30 (Table 2) with just 2.74M parameters and 59 GPU hours of total training time. + +§ 6. CONCLUSION + +We presented a Multi-Resolution approach to CNFs, which provides an efficient framework for likelihood calculations by training on a single GPU in less time with significantly fewer parameters. We see a marked improvement in BPD for ImageNet64 and above. + +Table 1. Bits-per-dimension (lower is better) of images for CIFAR10, ImageNet at ${32} \times {32}$ , and ImageNet at ${64} \times {64}$ , reported as the mean and standard deviation across the dataset. We also report the number of parameters in the models, and the time taken to train (in GPU hours). Most previous models use multiple GPUs for training; all our models were trained on only one GPU: NVIDIA RTX 2080 Ti 11GB. ${}^{ \ddagger }$ As reported in (Ghosh et al., 2020). ${}^{§}$ Re-implemented by us. 'x': Fails to train. Blank spaces indicate unreported values. *RNODE (Finlay et al., 2020) used 4 GPUs to train on ImageNet64.
+ +| Model | ImageNet32 BPD | Param | Time | ImageNet64 BPD | Param | Time |
| --- | --- | --- | --- | --- | --- | --- |
| Flow-based Prior Work | | | | | | |
| RealNVP | 4.28 | 46.0M | x | 3.98 | 96.0M | x |
| Glow | 4.09 | 66.1M | x | 3.81 | 111.1M | x |
| MintNet | 4.06 | 17.4M | x | x | x | x |
| MaCow | x | x | x | 3.69 | 122.5M | x |
| Flow++ | 3.86 | 169M | x | 3.69 | 73.5M | x |
| Wavelet Flow | 4.08 | 64M | x | 3.78 | 96M | 822 |
| 1-Resolution CNF | | | | | | |
| FFJORD | ${3.96}^{ \ddagger }$ | ${2.00}\mathrm{M}^{ \ddagger }$ | >5 days ${}^{ \ddagger }$ | x | x | x |
| RNODE | ${2.36}^{ \ddagger }$ | 2.00M | ${30.1}^{ \ddagger }$ | ${3.83}^{ * }$ | 2.00M | ${64.1}^{ * }$ |
| | ${3.49}^{§}$ | ${1.58}\mathrm{M}^{§}$ | ${40.39}^{§}$ | x | x | x |
| FFJORD + STEER | 3.84 | 2.00M | >5 days | x | x | x |
| RNODE + STEER | 2.35 | 2.00M | 24.9 | x | x | x |
| | ${3.49}^{§}$ | ${1.58}\mathrm{M}^{§}$ | ${30.07}^{§}$ | x | x | x |
| (Ours) Multi-Resolution CNF (MRCNF) | | | | | | |
| 2-resolution | 3.77 | 1.33M | 18.18 | - | - | - |
| 2-resolution | 3.78 | 6.68M | 17.98 | - | - | - |
| 3-resolution | 3.97 | 1.53M | 13.78 | 3.61 | 2.04M | 28.64 |

Table 2. Metrics for unconditional ImageNet128 generation.

| ImageNet128 | BPD | Param | Time |
| --- | --- | --- | --- |
| Parallel Multiscale (Reed et al., 2017) | 3.55 | x | x |
| SPN (Menick & Kalchbrenner, 2019) | 3.08 | 250M | x |
| (Ours) 4-resolution MRCNF | 3.30 | 2.74M | 58.59 |

Figure 2. ImageNet: Example of super-resolving from ground truth ${16} \times {16}$ to ${64} \times {64}$ . Top ground truth, middle generated, bottom ground truth.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZBR9EpEl6G4/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZBR9EpEl6G4/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..8589b529ac02175e962c2878ef445cbad49fb1cf --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZBR9EpEl6G4/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,462 @@ +# Diffeomorphic Explanations with Normalizing Flows + +Anonymous Authors ${}^{1}$ + +## Abstract + +Normalizing flows are diffeomorphisms which are parameterized by neural networks. As a result, they can induce coordinate transformations in the tangent space of the data manifold. In this work, we demonstrate that such transformations can be used to generate interpretable explanations for decisions of neural networks. More specifically, we perform gradient ascent in the base space of the flow to generate counterfactuals which are classified with high confidence as a specified target class. We analyze this generation process theoretically using Riemannian differential geometry and establish a rigorous theoretical connection between gradient ascent on the data manifold and in the base space of the flow. + +![01963e2d-ca26-77e7-937c-17474b0c7bff_0_173_1153_667_266_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_0_173_1153_667_266_0.jpg) + +Figure 1. Diffeomorphic explanation for hair-color classification. + +## 1. Introduction + +Explaining a complex system can be drastically simplified by using a suitable coordinate system. As an example, the solar system can be explained either by using a reference system in which the sun is at rest (heliocentric) or, alternatively, one in which the earth is at rest (geocentric). Contrary to widely held belief, both reference systems are physically valid.
However, the dynamics of the planets are significantly easier to describe in heliocentric coordinates, since the planets then follow geometrically simple trajectories. + +Explanation methods for neural networks have recently gained significant attention because they promise to make black-box classifiers more transparent; see (Samek et al., 2019) for a detailed overview. In this paper, we use the bijectivity of a normalizing flow to consider a classifier in the base space of the flow. This amounts to a coordinate transformation in the data space (or, mathematically more precisely, a diffeomorphism). We will show that in this coordinate system, the classifier is more easily interpretable and can be used to construct counterfactual explanations that lie on the data manifold. Using Riemannian differential geometry, we will analyze the advantages of creating counterfactual explanations in the base space of the flow and establish a process by which the tangent space of the data manifold can be estimated from the flow. We strongly expect these theoretical insights to be useful beyond explainability. + +In summary, our main contributions are as follows: + +- We propose a novel application domain for flows: inducing a bijective transformation to a more interpretable space on which counterfactuals can be easily generated. + +- We analyze the properties of this generation process theoretically using Riemannian differential geometry. + +- We experimentally demonstrate superior performance compared to more traditional approaches for generating counterfactuals for classification tasks in three different domains. + +## 2. Counterfactual Explanations + +Let $f : X \rightarrow {\mathbb{R}}^{K}$ be a classifier whose component $f{\left( x\right) }_{k}$ is the probability for the point $x \in X$ to be of class $k \in$ $\{ 1,\ldots , K\}$ .
We make no assumptions on the architecture of the classifier $f$ and only require that we can evaluate $f\left( x\right)$ and its derivative ${\partial }_{x}f\left( x\right)$ for a given input $x \in X$ . ${}^{1}$ + +In this work, we will follow the well-established paradigm of counterfactual explanations; see (Verma et al., 2020) for a recent review. These methods aim to explain the classifier $f$ by answering the question of which minimal deformations ${x}^{\prime } = x + {\delta x}$ need to be applied to the original input $x$ in order to change its prediction. Often, the difference ${\delta x}$ is then visualized by a heatmap highlighting the relevant pixels for the change in classification; see Figure 1 for an example. + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +${}^{1}$ This assumption can be relaxed: if we do not have access to the gradient, we can approximate it by finite differences. + +--- + +In the following, we will assume that the data lies on a submanifold $S \subset X$ which is of (significantly) lower dimension $n$ than the dimension $N$ of its embedding space $X$ . We stress that this is also known as the manifold assumption and is expected to hold across a wide range of machine learning tasks, see e.g. (Goodfellow et al., 2016). In these situations, we are often interested only in deformations ${x}^{\prime }$ which lie on the data manifold $S$ . As an example, a customer of a bank may want to understand how their financial data needs to change in order to receive a loan. If the classification changes off-manifold, for example for zip codes that do not exist, this is of no relevance since the user is obliged to enter their correct zip code. Furthermore, it is often required that the deformation is minimal, i.e. the perturbation ${\delta x}$ should be as small as possible.
However, the relevant norm is that of the data manifold $S$ and not of its embedding pixel space $X$ . For example, a slightly rotated number in an MNIST image may have large pixel-wise distance but should be considered an infinitesimal perturbation of the original image.

More precisely, we define counterfactuals as follows: let $t = {\operatorname{argmax}}_{j}{f}_{j}\left( x\right)$ be the predicted class for the data point $x \in S$ . The set ${\Delta }_{k,\delta } \subset S$ of counterfactuals ${x}^{\prime }$ of the point $x$ with respect to the target class $k \in \{ 1,\ldots , K\} \smallsetminus \{ t\}$ and confidence $\delta \in (0,1\rbrack$ is defined by

${\Delta }_{k,\delta } = \left\{ {{x}^{\prime }\left( x\right) \in S : \;{\operatorname{argmax}}_{j}{f}_{j}\left( {x}^{\prime }\right) = k \land {f}_{k}\left( {x}^{\prime }\right) > \delta }\right\} ,$

i.e. all points on the data manifold which are classified to be of the target class $k$ with at least confidence $\delta$ . A minimal counterfactual ${x}^{\prime } \in {\Delta }_{k,\delta }$ is then a counterfactual with smallest distance ${d}_{\gamma }\left( {{x}^{\prime }, x}\right)$ to the original point $x$ , where ${d}_{\gamma }$ is the distance on the data manifold (induced by its Riemannian metric $\gamma$ ). Note that there may not be a unique minimal counterfactual.

## 3. Construction of Counterfactuals

We propose to estimate the minimal counterfactual ${x}^{\prime }$ of the data sample $x$ with respect to the classifier $f$ by using a diffeomorphism $g : Z \rightarrow X$ modelled by a normalizing flow.

The flow $g$ equips the space $X$ with a probability density

$$
{q}_{X}\left( x\right) = {q}_{Z}\left( {{g}^{-1}\left( x\right) }\right) \left| {\det \frac{\partial z}{\partial x}}\right| \tag{1}
$$

by push-forward of a simple base density ${q}_{Z}$ , such as $N\left( {0,1}\right)$ , defined on the base space $Z$ . 
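The change-of-variables formula (1) can be sanity-checked in one dimension. The snippet below is our own illustration, not the paper's code: a hypothetical affine map stands in for a trained flow.

```python
import numpy as np

def base_density(z):
    """Standard normal base density q_Z(z) = N(0, 1)."""
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

# Toy affine diffeomorphism g(z) = a*z + b as a stand-in for a flow.
a, b = 2.0, 1.0

def q_X(x):
    """Push-forward density q_X(x) = q_Z(g^{-1}(x)) * |det dz/dx|, eq. (1)."""
    z = (x - b) / a         # g^{-1}(x)
    jac = 1.0 / abs(a)      # |det dz/dx| for the affine map
    return base_density(z) * jac

# q_X should itself be a normalized density (here: that of N(b, a^2)).
xs = np.linspace(-15.0, 17.0, 200001)
print(np.sum(q_X(xs)) * (xs[1] - xs[0]))  # ≈ 1.0
```

By construction, this reproduces the density of $N(b, a^2)$, as the change-of-variables formula requires.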
We assume that the flow was successfully trained to approximate the data distribution ${p}_{X}$ by minimizing the forward KL-divergence as usual (this assumption will be made more precise in Section 4).

We then perform gradient ascent in the base space $Z$ to maximize the probability of the target class $k$ , i.e.

$$
{z}^{\left( t + 1\right) } = {z}^{\left( t\right) } + \lambda \frac{\partial {\left( f \circ g\right) }_{k}}{\partial z}\left( {z}^{\left( t\right) }\right) , \tag{2}
$$

where $\lambda$ is the learning rate and we initialize by mapping the original point $x$ to the base space by ${z}^{\left( 0\right) } = {g}^{-1}\left( x\right)$ . We then take the sample ${x}^{\left( T\right) } = g\left( {z}^{\left( T\right) }\right)$ as an estimator for a minimal counterfactual if ${x}^{\left( T\right) }$ is the first optimization step to be classified as the target $k$ with given confidence $\delta$ :

$$
{\operatorname{argmax}}_{j}{f}_{j}\left( {x}^{\left( T\right) }\right) = k\;\text{ and }\;{f}_{k}\left( {x}^{\left( T\right) }\right) > \delta .
$$

This is because, generically, taking further steps only increases the distance to the original sample, i.e. $\begin{Vmatrix}{{z}^{\left( T + t\right) } - {z}^{\left( 0\right) }}\end{Vmatrix} > \begin{Vmatrix}{{z}^{\left( T\right) } - {z}^{\left( 0\right) }}\end{Vmatrix}$ for $t > 0$ , and we want to find (an estimate of a) minimal counterfactual. This may also be validated by continuing optimization for a certain number of steps and selecting the sample with the minimal distance.

As discussed in Section 6, generative-model-based methods to estimate (minimal) counterfactuals have previously been proposed, for example based on Generative Adversarial Networks or Autoencoders. However, the relevance of normalizing flows in this domain has so far not been recognized. 
This is unfortunate as normalizing flows have important advantages in this application domain compared to other generative models: firstly, a flow $g$ is a diffeomorphism and therefore no information is lost by considering the classifier $f \circ g$ on $Z$ instead of the original classifier $f$ on $X$ , i.e. there is a unique $z = {g}^{-1}\left( x\right) \in Z$ for any data point $x \in X$ . Secondly, performing gradient ascent in the base space $Z$ of a well-trained flow will ensure (to good approximation) that each optimization step ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ will stay on the data manifold $S \subset X$ . Since the base space $Z$ has the same dimension as the data space $X$ , the latter statement is far from obvious and is substantiated with theoretical arguments in the next section.

## 4. Theoretical Analysis

In the following, it will be shown that performing gradient ascent in $Z$ space and then mapping the result to $X$ space yields iterates that stay on the data manifold $S$ .

This is in stark contrast to gradient ascent directly in $X$ space, i.e.

$$
{x}^{\left( t + 1\right) } = {x}^{\left( t\right) } + \lambda \frac{\partial {f}_{k}}{\partial x}\left( {x}^{\left( t\right) }\right) , \tag{3}
$$

where $\lambda$ is the learning rate. It is well-known that such an optimization procedure would very quickly leave the data manifold $S$ , see for example (Goodfellow et al., 2014). For gradient ascent in $Z$ space (2), each step ${z}^{\left( t\right) }$ can uniquely be mapped to $X$ space by ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ . In Supplement A.1, we derive the following result:

![01963e2d-ca26-77e7-937c-17474b0c7bff_2_327_187_359_276_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_2_327_187_359_276_0.jpg)

Figure 2. Gaussian normal coordinates, see Appendix D of (Carroll, 2019) for a detailed mathematical introduction.

Theorem 1. 
Let ${z}^{\left( t\right) }$ be defined as in (2) and ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ . Then, to leading order in the learning rate $\lambda$ ,

$$
{x}^{\left( t + 1\right) } = {x}^{\left( t\right) } + {\left. \lambda {\gamma }^{-1}\right| }_{{g}^{-1}\left( {x}^{\left( t\right) }\right) }\frac{\partial {f}_{k}}{\partial x}\left( {x}^{\left( t\right) }\right) + \mathcal{O}\left( {\lambda }^{2}\right) , \tag{4}
$$

where ${\gamma }^{-1} = \frac{\partial g}{\partial z}{\frac{\partial g}{\partial z}}^{T} \in {\mathbb{R}}^{N, N}$ is the pull-back of the flat metric on $Z$ under the flow $g$ .

Therefore, performing gradient ascent in $X$ or $Z$ space is not equivalent because (3) and (4) do not agree. In particular, the presence of the inverse metric ${\gamma }^{-1}$ in the update formula (4) effectively induces separate learning rates for each direction in tangent space.

In the following, we prove that directions orthogonal to the data manifold $S \subset X$ are heavily suppressed by the inverse metric and thus gradient ascent (4) stays on the data manifold $S$ to very good approximation.

In practice, the data manifold is only approximately of lower dimension. Specifically, we assume that the data manifold is a product manifold, equipped with the canonical product metric, of the form

$$
S = \mathcal{D} \times {B}_{{\delta }_{1}} \times \cdots \times {B}_{{\delta }_{N - n}}, \tag{5}
$$

where $\mathcal{D}$ is an $n$ -dimensional manifold and ${B}_{\delta }$ is an open one-dimensional ball with radius $\delta$ (with respect to the flat metric of the embedding space $X$ ). Since we will choose all the radii ${\delta }_{i}$ to be small, the data manifold $S$ is thus approximately $n$ -dimensional. 
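Theorem 1 can be verified numerically on a toy diffeomorphism. The sketch below is our own illustration (not the paper's code): it performs one $Z$-space gradient step (2) for a hypothetical map $g$ and class score $f_k$, compares it with the metric-corrected $X$-space step (4), and confirms that the mismatch shrinks quadratically in $\lambda$.

```python
import numpy as np

# Hypothetical diffeomorphism g: R^2 -> R^2 and scalar "class score" f_k.
def g(z):
    return np.array([z[0], z[1] + z[0] ** 2])

def jacobian(z):
    """dg/dz, computed analytically for the toy map."""
    return np.array([[1.0, 0.0], [2.0 * z[0], 1.0]])

def grad_f(x):
    """Gradient of the toy score f_k(x) = sin(x_1) + x_1 * x_2."""
    return np.array([np.cos(x[0]) + x[1], x[0]])

z0 = np.array([0.3, -0.2])
x0 = g(z0)

def discrepancy(lam):
    J = jacobian(z0)
    # One gradient-ascent step in Z, eq. (2): grad_z (f o g) = J^T grad_x f.
    x_from_z = g(z0 + lam * J.T @ grad_f(x0))
    # Metric-corrected step in X, eq. (4), with gamma^{-1} = J J^T.
    x_pred = x0 + lam * (J @ J.T) @ grad_f(x0)
    return np.linalg.norm(x_from_z - x_pred)

# Shrinking lambda by 10 shrinks the mismatch by ~100: it is O(lambda^2).
print(discrepancy(1e-2) / discrepancy(1e-3))  # ≈ 100
```

The ratio of discrepancies at two learning rates exposes the quadratic order of the error term, exactly as the $\mathcal{O}(\lambda^2)$ remainder predicts.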

We choose Gaussian normal coordinates $x = \left( {{x}_{\parallel }^{1},\ldots ,{x}_{\parallel }^{n},{x}_{ \bot }^{1},\ldots ,{x}_{ \bot }^{N - n}}\right)$ on $X$ , where the ${x}_{ \bot }^{i}$ are slice coordinates for ${B}_{{\delta }_{i}}$ and the $\left( {{x}_{\parallel }^{1},\ldots ,{x}_{\parallel }^{n}}\right)$ are slice coordinates of $\mathcal{D}$ , see Figure 2. We furthermore require that in our coordinates ${x}_{ \bot }^{i}\left( p\right) \in \left( {-{\delta }_{i}, + {\delta }_{i}}\right)$ for $p \in S$ . We then show in Supplement A.2:

Theorem 2. Let ${p}_{X}$ denote the data density with $\operatorname{supp}\left( {p}_{X}\right) = S$ , and the flow $g$ be well-trained such that

$$
\operatorname{KL}\left( {{p}_{X},{q}_{X}}\right) < \epsilon ,
$$

and the base density be bounded. Let ${\gamma }^{-1} = \frac{\partial g}{\partial z}{\frac{\partial g}{\partial z}}^{T}$ be the inverse of the induced metric $\gamma$ in the canonical basis of coordinates $x$ .

In this basis, ${\gamma }^{-1}$ is given by

$$
{\gamma }^{-1} = \left( \begin{array}{llll} {\gamma }_{\mathcal{D}}^{-1} & & & \\ & {\gamma }_{{B}_{{\delta }_{1}}}^{-1} & & \\ & & \ddots & \\ & & & {\gamma }_{{B}_{{\delta }_{N - n}}}^{-1} \end{array}\right) ,
$$

where ${\gamma }_{\mathcal{M}}^{-1}$ is the inverse of the induced metric on the sub-manifold $\mathcal{M}$ .

Furthermore, ${\gamma }_{{B}_{{\delta }_{i}}}^{-1} \rightarrow 0$ for vanishing radius ${\delta }_{i} \rightarrow 0$ .

We therefore conclude that gradient ascent in $Z$ space corresponds to gradient ascent in $X$ where all gradient components ${\partial }_{{x}_{ \bot }}f$ orthogonal to the data manifold are effectively scaled by a vanishingly small learning rate. As a result, the gradient ascent in $Z$ will, to very good approximation, not leave the data manifold.

## 5. 
Experiments

Tangent Space: A non-trivial consequence of our theoretical insights is that we can infer the tangent plane of each point on the data manifold from our flow $g$ . Specifically, we perform a singular value decomposition of the Jacobian $\frac{\partial g}{\partial z} = U\Sigma {V}^{T}$ and rewrite the inverse induced metric as

$$
{\gamma }^{-1} = \frac{\partial g}{\partial z}\frac{\partial {g}^{T}}{\partial z} = U{\Sigma }^{2}{U}^{T}. \tag{6}
$$

For an approximately $n$ -dimensional data manifold $S$ in an $N$ -dimensional embedding space $X$ , Theorem 2 shows that the inverse induced metric ${\gamma }^{-1}$ has $N - n$ small eigenvalues. The eigenvectors corresponding to the large eigenvalues will then approximately span the tangent space of the data manifold. In order to demonstrate this in a toy example, we train flows to approximate data manifolds with the shape of a helix and a torus, respectively. Figure 3 shows that we can indeed recover the tangent planes of these data manifolds to very good approximation. We refer to Supplement B for details about the flow used and the data generation.

Diffeomorphic Explanations: We now demonstrate applications to image classification in several domains. The discussion is necessarily concise, see Supplement C for more details.

Datasets: We use the MNIST (Deng, 2012), CelebA (Liu et al., 2015), as well as the CheXpert datasets (Irvin et al., 2019). The latter is a dataset of labeled chest X-rays.

Classifiers: We train a ten-class classifier on MNIST (test accuracy of ${99}\%$ ). For CelebA, we train a binary classifier on the blonde attribute (test accuracy of 94%). For CheXpert, we train a binary classifier on the cardiomegaly attribute (test accuracy of ${86}\%$ ). All classifiers consist of a few standard convolutional, pooling and fully-connected layers with ReLU-activations and batch normalization. 
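The tangent-space estimate of Eq. (6) can be reproduced in a few lines. Instead of a trained flow, the sketch below (our illustrative stand-in, not the paper's model) uses a hand-built map that lays the base coordinates along a helix and compresses the two off-manifold directions:

```python
import numpy as np

DELTA = 0.05  # compression of the off-manifold directions

def g(z):
    """Map base coords onto a thin tube around the helix c(t) = (cos t, sin t, t);
    a stand-in for a well-trained flow."""
    t, u, v = z
    c = np.array([np.cos(t), np.sin(t), t])
    n1 = np.array([np.cos(t), np.sin(t), 0.0])                  # unit normal
    n2 = np.array([-np.sin(t), np.cos(t), -1.0]) / np.sqrt(2.0)  # unit binormal
    return c + DELTA * (u * n1 + v * n2)

def numeric_jacobian(f, z, eps=1e-6):
    """Central-difference Jacobian dg/dz, columns = partial derivatives."""
    cols = [(f(z + eps * e) - f(z - eps * e)) / (2.0 * eps) for e in np.eye(3)]
    return np.stack(cols, axis=1)

t0 = 0.7
J = numeric_jacobian(g, np.array([t0, 0.0, 0.0]))
U, s, _ = np.linalg.svd(J)  # gamma^{-1} = J J^T = U diag(s^2) U^T, eq. (6)

# Analytic unit tangent of the helix at t0:
tangent = np.array([-np.sin(t0), np.cos(t0), 1.0]) / np.sqrt(2.0)
print(s[0] / s[1])             # one dominant singular value (≈ 28 here)
print(abs(U[:, 0] @ tangent))  # ≈ 1: top singular vector spans the tangent
```

As in Figure 3, the singular vectors belonging to the large singular values recover the tangent direction, while the small ones point off-manifold.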
Flows: We choose a flow with RealNVP-type couplings (Dinh et al., 2016) for MNIST and the Glow architecture (Kingma & Dhariwal, 2018) for CelebA and CheXpert.

Estimation of Counterfactuals: We select the classes 'nine', 'blonde', and 'cardiomegaly' as targets $k$ for MNIST, CelebA, and CheXpert, respectively, and take the confidence threshold to be $\delta = {0.99}$ . We use Adam for optimization.

Results: Counterfactuals produced by the flow indeed show semantically meaningful deformations, in particular when compared to counterfactuals produced by gradient ascent in the data space $X$ , see Figure 4. For Figure 5, we train a linear SVM for the same classification tasks and show that the flow's counterfactuals generalize better to such a simple model, suggesting that they indeed use semantically more relevant deformations than conventional counterfactuals produced by gradient ascent in $X$ space.

![01963e2d-ca26-77e7-937c-17474b0c7bff_3_263_234_490_219_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_3_263_234_490_219_0.jpg)

Figure 3. Approximate tangent planes for points on the data manifold $S$ . As predicted by theory, the parallelepiped spanned by all three eigenvectors of the inverse induced metric scaled by the corresponding eigenvalues is to good approximation of the same dimension as the data manifold and tangential to it.

## 6. Related Works

An influential reference for our work is (Singla et al., 2019) which uses generative adversarial networks (GANs) to generate counterfactuals, see also (Liu et al., 2015; Samangouei et al., 2018) for similar methods. Other approaches (Dhurandhar et al., 2018; Joshi et al., 2019) use Autoencoders instead of GANs. While both classes of generative models can currently sample more realistic high-dimensional samples, they are not bijective. As a result, an encoder network has to be used, which comes at the risk of mode-dropping and, in contrast to our work, offers no theoretical guarantees. 
(Sixt et al., 2021) propose to train a linear classifier in the base space of a normalizing flow and show that this classifier tends to use highly interpretable features. In contrast to their approach, our method is completely model-agnostic. In (Rombach et al., 2020; Esser et al., 2020), an invertible neural network is used to decompose latent representations of an autoencoder into semantic factors to automatically detect interpretable concepts as well as invariances of classifiers.

![01963e2d-ca26-77e7-937c-17474b0c7bff_3_897_187_700_477_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_3_897_187_700_477_0.jpg)

Figure 4. Counterfactuals for MNIST ('four' to 'nine'), CelebA ('non-blonde' to 'blonde'), and CheXpert ('healthy' to 'cardiomegaly'). Columns of each block show original image $x$ , counterfactual ${x}^{\prime }$ , and difference ${\delta x}$ for three selected datapoints. First row is our method, i.e. gradient ascent in $Z$ space. Second row is standard gradient ascent in $X$ space. Heatmaps show the sum over absolute values of color channels.

![01963e2d-ca26-77e7-937c-17474b0c7bff_3_899_972_687_250_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_3_899_972_687_250_0.jpg)

Figure 5. Generalization of counterfactuals to linear SVMs. Left: accuracy with respect to the target class $k$ generalizes better to the SVM for $Z$ -based counterfactuals. Right: the distance in the base space is smaller for $Z$ -based than for $X$ -based counterfactuals.

## 7. Conclusion

In this work, we have used the fact that a normalizing flow is a diffeomorphism to map the data space to its base space. In this space, we can then straightforwardly perform gradient ascent on the data manifold, as we have established rigorously using Riemannian differential geometry. For future work, we will consider higher-dimensional classification tasks as well as the dependence of the explanations on the chosen flow architecture. 
Furthermore, it would be interesting to evaluate the robustness of these explanations with respect to adversarial model and input manipulations (Ghorbani et al., 2019; Dombrowski et al., 2019; Anders et al., 2020; Heo et al., 2019).

## References

Anders, C., Pasliev, P., Dombrowski, A.-K., Müller, K.-R., and Kessel, P. Fairwashing explanations with off-manifold detergent. In International Conference on Machine Learning, pp. 314-323. PMLR, 2020.

Carroll, S. M. Spacetime and geometry. Cambridge University Press, 2019.

Deng, L. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142, 2012.

Dhurandhar, A., Chen, P.-Y., Luss, R., Tu, C.-C., Ting, P., Shanmugam, K., and Das, P. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. arXiv preprint arXiv:1802.07623, 2018.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.

Dombrowski, A.-K., Alber, M., Anders, C. J., Ackermann, M., Müller, K.-R., and Kessel, P. Explanations can be manipulated and geometry is to blame. arXiv preprint arXiv:1906.07983, 2019.

Esser, P., Rombach, R., and Ommer, B. A disentangling invertible interpretation network for explaining latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9223-9232, 2020.

Ghorbani, A., Abid, A., and Zou, J. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 3681-3688, 2019.

Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.

Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Heo, J., Joo, S., and Moon, T. Fooling neural network interpretations via adversarial model manipulation. 
arXiv preprint arXiv:1902.02041, 2019.

Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R., Shpanskaya, K., et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 590-597, 2019.

Joshi, S., Koyejo, O., Vijitbenjaronk, W., Kim, B., and Ghosh, J. Towards realistic individual recourse and actionable explanations in black-box decision making systems. arXiv preprint arXiv:1907.09615, 2019.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018.

Lee, J. M. Smooth manifolds. In Introduction to Smooth Manifolds, pp. 1-31. Springer, 2013.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.

Rombach, R., Esser, P., and Ommer, B. Making sense of CNNs: Interpreting deep representations & their invariances with INNs. arXiv preprint arXiv:2008.01777, 2020.

Samangouei, P., Saeedi, A., Nakagawa, L., and Silberman, N. ExplainGAN: Model explanation via decision boundary crossing transformations. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 666-681, 2018.

Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. Explainable AI: interpreting, explaining and visualizing deep learning, volume 11700. Springer Nature, 2019.

Singla, S., Pollack, B., Chen, J., and Batmanghelich, K. Explanation by progressive exaggeration. arXiv preprint arXiv:1911.00483, 2019.

Sixt, L., Schuessler, M., Weiß, P., and Landgraf, T. Interpretability through invertibility: A deep convolutional network with ideal counterfactuals and isosurfaces, 2021. URL https://openreview.net/forum?id=8YFhXYe1Ps.

Verma, S., Dickerson, J., and Hines, K. 
Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596, 2020.

## A. Proofs

### A.1. Proof of Theorem 1

We repeat the theorem for convenience:

Theorem. Let ${z}^{\left( t\right) }$ be defined as in (2) and ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ . Then, to leading order in the learning rate $\lambda$ ,

$$
{x}^{\left( t + 1\right) } = {x}^{\left( t\right) } + {\left. \lambda {\gamma }^{-1}\right| }_{{g}^{-1}\left( {x}^{\left( t\right) }\right) }\frac{\partial {f}_{k}}{\partial x}\left( {x}^{\left( t\right) }\right) + \mathcal{O}\left( {\lambda }^{2}\right) , \tag{7}
$$

where ${\gamma }^{-1} = \frac{\partial g}{\partial z}{\frac{\partial g}{\partial z}}^{T} \in {\mathbb{R}}^{N, N}$ is the pull-back of the flat metric on $Z$ under the flow $g$ .

Proof. The step ${x}^{\left( t + 1\right) } = g\left( {z}^{\left( t + 1\right) }\right)$ can be rewritten using the update formula (2) of the gradient ascent in $Z$ as

$$
{x}^{\left( t + 1\right) } = g\left( {{z}^{\left( t\right) } + \lambda \frac{\partial \left( {{f}_{k} \circ g}\right) }{\partial z}\left( {z}^{\left( t\right) }\right) }\right) . \tag{8}
$$

We now perform a Taylor expansion to leading order in the learning rate $\lambda$ , using index notation for clarity:

$$
{x}_{i}^{\left( t + 1\right) } = g{\left( {z}^{\left( t\right) }\right) }_{i} + \lambda \mathop{\sum }\limits_{{j, l}}\frac{\partial {g}_{i}}{\partial {z}_{j}}\frac{\partial {g}_{l}}{\partial {z}_{j}}\frac{\partial {f}_{k}}{\partial {x}_{l}}\left( {g\left( {z}^{\left( t\right) }\right) }\right) + \mathcal{O}\left( {\lambda }^{2}\right) . 
$$

The result then follows by identifying $g\left( {z}^{\left( t\right) }\right) = {x}^{\left( t\right) }$ and ${\gamma }_{il}^{-1} = \mathop{\sum }\limits_{j}\frac{\partial {g}_{i}}{\partial {z}_{j}}\frac{\partial {g}_{l}}{\partial {z}_{j}}$ , which in matrix notation is given by ${\gamma }^{-1} = \frac{\partial g}{\partial z}{\frac{\partial g}{\partial z}}^{T}.$

### A.2. Proof of Theorem 2

Following the notation used throughout the main part, we denote by ${p}_{X}$ the data probability density. In particular, the data manifold is given by $S = \operatorname{supp}\left( {p}_{X}\right)$ . The flow $g : Z \rightarrow X$ induces the probability density ${q}_{X}$ on the target space $X$ by push-forward of a base density ${q}_{Z}$ on the base space $Z$ , i.e. ${q}_{X}\left( x\right) = {q}_{Z}\left( {{g}^{-1}\left( x\right) }\right) \left| {\det \frac{\partial z}{\partial x}}\right|$ .

Before giving the proof of Theorem 2, we will first derive the following result:

Theorem 3. Let the flow be well-trained such that

$$
\operatorname{KL}\left( {{p}_{X},{q}_{X}}\right) < \epsilon , \tag{9}
$$

for some small $\epsilon \in \mathbb{R}$ . Then, we have for the data manifold $S \subset X$

$$
{\int }_{S}{q}_{X}\left( x\right) \mathrm{d}x > 1 - \epsilon . \tag{10}
$$

Proof. By assumption,

$$
- \operatorname{KL}\left( {{p}_{X},{q}_{X}}\right) > - \epsilon .
$$

Using the definition of the KL-divergence and the inequality $\ln \left( a\right) \leq a - 1$ , it then follows that

$$
- \epsilon < {\int }_{S}{p}_{X}\left( x\right) \ln \left( \frac{{q}_{X}\left( x\right) }{{p}_{X}\left( x\right) }\right) \mathrm{d}x
$$

$$
\leq {\int }_{S}{p}_{X}\left( x\right) \left( {\frac{{q}_{X}\left( x\right) }{{p}_{X}\left( x\right) } - 1}\right) \mathrm{d}x
$$

$$
= {\int }_{S}{q}_{X}\left( x\right) \mathrm{d}x - 1,
$$

and thus

$$
{\int }_{S}{q}_{X}\left( x\right) \mathrm{d}x > 1 - \epsilon . 
\tag{11}
$$

We repeat Theorem 2 for convenience:

Theorem. Let ${p}_{X}$ denote the data density with $\operatorname{supp}\left( {p}_{X}\right) = S$ , and the flow $g$ be well-trained such that

$$
\operatorname{KL}\left( {{p}_{X},{q}_{X}}\right) < \epsilon ,
$$

and the base density be bounded. Let ${\gamma }^{-1} = \frac{\partial g}{\partial z}{\frac{\partial g}{\partial z}}^{T}$ be the inverse of the induced metric $\gamma$ in the canonical basis of coordinates $x$ .

In this basis, ${\gamma }^{-1}$ is given by

$$
{\gamma }^{-1} = \left( \begin{array}{llll} {\gamma }_{\mathcal{D}}^{-1} & & & \\ & {\gamma }_{{B}_{{\delta }_{1}}}^{-1} & & \\ & & \ddots & \\ & & & {\gamma }_{{B}_{{\delta }_{N - n}}}^{-1} \end{array}\right) ,
$$

where ${\gamma }_{\mathcal{M}}^{-1}$ is the inverse of the induced metric on the sub-manifold $\mathcal{M}$ .

Furthermore, ${\gamma }_{{B}_{{\delta }_{i}}}^{-1} \rightarrow 0$ for vanishing radius ${\delta }_{i} \rightarrow 0$ .

Proof. In the chosen coordinates, the metric $\gamma$ takes the block-diagonal form (in the canonical basis)

$$
\gamma = \left( \begin{array}{llll} {\gamma }_{\mathcal{D}} & & & \\ & {\gamma }_{{B}_{{\delta }_{1}}} & & \\ & & \ddots & \\ & & & {\gamma }_{{B}_{{\delta }_{N - n}}} \end{array}\right) ,
$$

see e.g. Example 13.2 of (Lee, 2013) for a proof. In these coordinates, we can then perform the integral (10) of Theorem 3:

$$
1 - \epsilon < {\int }_{S}\left| {\det \frac{\partial z}{\partial x}}\right| {q}_{Z}\left( {{g}^{-1}\left( x\right) }\right) \mathrm{d}x
$$

$$
= {\int }_{S}\sqrt{\det \left| \gamma \right| }{q}_{Z}\left( {{g}^{-1}\left( x\right) }\right) \mathrm{d}x,
$$

where in the second step, we have used the definition of the induced metric $\gamma = \frac{\partial z}{\partial x}{\frac{\partial z}{\partial x}}^{T}$ , which implies that $\det \left| \gamma \right| = \det {\left| \frac{\partial z}{\partial x}\right| }^{2}$ . 
Using the Gaussian normal coordinates, we can rewrite the integral as

$$
{\int }_{\mathcal{D}}\sqrt{\left| {\gamma }_{\mathcal{D}}\right| }\mathop{\prod }\limits_{{i = 1}}^{{N - n}}\left( {{\int }_{-{\delta }_{i}}^{{\delta }_{i}}\sqrt{\left| {\gamma }_{{B}_{{\delta }_{i}}}\right| }\mathrm{d}{x}_{ \bot }^{i}}\right) {q}_{Z}\left( {{g}^{-1}\left( x\right) }\right) {\mathrm{d}}^{n}{x}_{\parallel }.
$$

Using the assumption that the base density ${q}_{Z}$ is bounded, i.e. ${q}_{Z}\left( z\right) \leq C$ , we arrive at the inequality

$$
1 - \epsilon < C{\int }_{\mathcal{D}}\sqrt{\det \left| {\gamma }_{\mathcal{D}}\right| }\mathop{\prod }\limits_{{i = 1}}^{{N - n}}\left( {{\int }_{-{\delta }_{i}}^{{\delta }_{i}}\sqrt{\left| {\gamma }_{{B}_{{\delta }_{i}}}\right| }\mathrm{\;d}{x}_{ \bot }^{i}}\right) {\mathrm{d}}^{n}{x}_{\parallel }. \tag{12}
$$

The integral however vanishes in the limit of vanishing radius ${\delta }_{i}$ since

$$
{\int }_{-{\delta }_{i}}^{{\delta }_{i}}\sqrt{\left| {\gamma }_{{B}_{{\delta }_{i}}}\right| }\mathrm{d}{x}_{ \bot }^{i} \rightarrow 0\;\text{ for }\;{\delta }_{i} \rightarrow 0,
$$

unless $\sqrt{\left| {\gamma }_{{B}_{{\delta }_{i}}}\right| } \rightarrow \infty$ . Thus, for the inequality (12) to hold, the metric ${\gamma }_{{B}_{{\delta }_{i}}}$ has to diverge in the limit of vanishing ${\delta }_{i}$ .

Since the induced metric $\gamma$ is block-diagonal, its inverse is given by

$$
{\gamma }^{-1} = \left( \begin{array}{llll} {\gamma }_{\mathcal{D}}^{-1} & & & \\ & {\gamma }_{{B}_{{\delta }_{1}}}^{-1} & & \\ & & \ddots & \\ & & & {\gamma }_{{B}_{{\delta }_{N - n}}}^{-1} \end{array}\right) .
$$

Because ${\gamma }_{{B}_{{\delta }_{i}}} \in \mathbb{R}$ diverges for vanishing radius, it follows that ${\gamma }_{{B}_{{\delta }_{i}}}^{-1} \rightarrow 0$ for ${\delta }_{i} \rightarrow 0$ . 
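As a concrete sanity check of Theorem 2 (our own illustration, not part of the proof), take a linear "flow" that compresses the off-manifold coordinate by a factor $\delta$; the corresponding entry of $\gamma^{-1}$ then vanishes as $\delta^2$, so a $Z$-space gradient step barely moves orthogonally to the manifold:

```python
import numpy as np

def inverse_metric(J):
    """Pull-back gamma^{-1} = (dg/dz)(dg/dz)^T of the flat metric on Z."""
    return J @ J.T

grad_f = np.array([1.0, 1.0])  # equal on- and off-manifold gradient components

for delta in (1.0, 0.1, 0.01):
    J = np.diag([1.0, delta])           # toy flow g(z) = (z_1, delta * z_2)
    step = inverse_metric(J) @ grad_f   # effective X-space step direction
    print(delta, step)  # off-manifold component scales as delta^2
```

With $\delta = 0.01$ the orthogonal component of the step is already suppressed by a factor of $10^{-4}$, mirroring the $\gamma_{B_{\delta_i}}^{-1} \to 0$ behaviour established above.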

### B. Toy Example for Tangent Space

Flow: The flow used for the toy example is composed of 12 RealNVP-type coupling layer blocks. Each of these blocks includes a three-layer fully-connected neural network with leaky ReLU activations for the scale and translation functions. For training, we sample from the target distribution. We train for 5000 epochs using a batch of 500 samples per epoch. We use the Adam optimizer with standard parameters and learning rate $\lambda = 1 \times {10}^{-4}$ . This takes around 10 minutes on a standard CPU.

Latent distribution: We use a 3D standard Gaussian distribution as the latent distribution.

Helix: To get a data sample from the helix, we sample from a uniform distribution ${x}_{3} \sim \mathcal{U}\left( {-4,4}\right)$ and define ${x}_{1} = \sin \left( {x}_{3}\right)$ and ${x}_{2} = \cos \left( {x}_{3}\right)$ .

Torus: We define a torus with outer radius $R = 3$ and unit inner radius. To get a data sample from the torus, we sample from a uniform distribution $\phi ,\theta \sim \mathcal{U}\left( {0,{2\pi }}\right)$ and define ${x}_{1} = \cos \left( \theta \right) \left( {R + \cos \left( \phi \right) }\right) ,{x}_{2} = \sin \left( \theta \right) \left( {R + \cos \left( \phi \right) }\right)$ , and ${x}_{3} = \sin \left( \phi \right)$ .

## C. Details on Experiments

### C.1. Flows

Architecture: We use the RealNVP architecture ${}^{2}$ for MNIST and the Glow architecture ${}^{3}$ for CelebA and CheXpert.

Training: We use the Adam optimizer with a learning rate of $1 \times {10}^{-4}$ and weight decay of $5 \times {10}^{-4}$ for all flows. MNIST: we train for 30 epochs on all available training images. Bits per dimension on the test set average to 1.21. CelebA: we train for 8 epochs on all available training images. We use 5-bit images. Bits per dimension on the test set average to 1.32. CheXpert: we train for 4 epochs on all available training images. Bits per dimension on the test set average to 3.59. 
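The bits-per-dimension figures reported above follow the standard conversion from an average negative log-likelihood in nats; the helper below states the usual definition (it is the conventional formula, not code from the paper):

```python
import numpy as np

def bits_per_dim(nll_nats, num_dims):
    """Convert an average negative log-likelihood (in nats per example)
    to bits per dimension: bpd = NLL / (D * ln 2)."""
    return nll_nats / (num_dims * np.log(2.0))

# e.g. a 28x28 grayscale MNIST image has D = 784 dimensions;
# an NLL of 784 * ln(2) nats corresponds to exactly 1 bit/dim:
print(bits_per_dim(784.0 * np.log(2.0), 784))  # 1.0
```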

### C.2. Classifier

Architecture: All classifiers have a similar structure consisting of convolutional, pooling and fully connected layers. We use ReLU activations and batch normalization. For MNIST we use four convolutional layers and three fully connected layers. For CelebA and CheXpert we use six convolutional layers and four fully connected layers.

Training: We use the Adam optimizer with a weight decay of $5 \times {10}^{-4}$ for all classifiers.

MNIST: we use training and test data as specified in torchvision. We use ${10}\%$ of the training data for validation. We train for 4 epochs using a learning rate of $1 \times {10}^{-3}$ . We get a test accuracy of 0.99.

CelebA: we take training and test data sets as specified in torchvision. We use ${10}\%$ of the training images for validation. We scale and crop the images to ${64} \times {64}$ pixels. We partition the data sets into all images for which the blonde attribute is positive and the rest of the images. We treat the imbalance by undersampling the class with more examples. We train for 10 epochs using a learning rate of $5 \times {10}^{-3}$ . We get a balanced test accuracy of ${93.63}\%$ by averaging over the true positive rate (93.95%) and true negative rate (93.31%).

CheXpert: we choose the first 6500 patients from the training set for testing. The remaining patients are used for training. We select the model based on performance on the original validation set. We only consider frontal images and scale and crop the images to ${128} \times {128}$ pixels. For the training data, the cardiomegaly attribute can take four different values: blank, 0, 1, and -1. We label images with a blank attribute as 0 if the no finding attribute is 1; otherwise we ignore images with blank attributes. We also ignore images where the cardiomegaly attribute is labeled as uncertain. Using this technique, we obtain 25717 training images labelled as healthy and 20603 training images labelled as cardiomegaly. 
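The CheXpert filtering rule just described can be sketched as follows (function name and raw-value encoding are our own illustration, not the paper's code):

```python
def cardiomegaly_label(cardiomegaly, no_finding):
    """Binary label following the rule above.

    Raw values: None (blank), 0 (negative), 1 (positive), -1 (uncertain).
    Returns 0 or 1, or None if the image should be discarded.
    """
    if cardiomegaly is None:                  # blank annotation
        return 0 if no_finding == 1 else None
    if cardiomegaly == -1:                    # uncertain -> ignored
        return None
    return int(cardiomegaly)

# Blank cardiomegaly counts as healthy only when 'no finding' is set:
print(cardiomegaly_label(None, 1), cardiomegaly_label(None, 0))  # 0 None
```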
We do not treat the imbalance but train on the data as is. We train for 9 epochs using a learning rate of $1 \times {10}^{-4}$ . We test on the test set, which was produced in the same way as the training set. We get a balanced test accuracy of 86.07% by averaging over the true positive rate (84.83%) and true negative rate (87.27%).

---

${}^{2}$ adapted from https://github.com/fmu2/realNVP

${}^{3}$ adapted from https://github.com/rosinality/glow-pytorch

---

![01963e2d-ca26-77e7-937c-17474b0c7bff_7_165_231_687_256_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_7_165_231_687_256_0.jpg)

Figure 6. Left: original image. First row: evolution throughout optimization; numbers indicate the confidence with which the image is classified as 'blonde'. Second row: heatmaps of ${\delta x}$ .

### C.3. Optimization of Counterfactuals

Counterfactuals are found using the Adam optimizer with standard parameters. We vary only the learning rate $\lambda$ .

For MNIST we use $\lambda = 5 \times {10}^{-4}$ for conventional counterfactuals and $\lambda = 5 \times {10}^{-2}$ for counterfactuals found via the flow. We do a maximum of 2000 steps, stopping early when we reach the target confidence of 0.99. We perform attacks on 500 images of the true class 'four'. All conventional attacks and 498 of the attacks via the flow reached the target confidence of 0.99 for the target class 'nine'.

For CelebA we use $\lambda = 7 \times {10}^{-4}$ for conventional counterfactuals and $\lambda = 5 \times {10}^{-3}$ for counterfactuals found via the flow. We do a maximum of 1000 steps, stopping early when we reach the target confidence of 0.99. We perform attacks on 500 images of the true class 'non-blonde'. 492 conventional attacks and 496 of the attacks via the flow reached the target confidence of 0.99 for the target class 'blonde'. 
+ +For CheXpert we use $\lambda = 5 \times {10}^{-4}$ for conventional counterfactuals and $\lambda = 5 \times {10}^{-3}$ for counterfactuals found via the flow. We do a maximum of 1000 steps, stopping early when we reach the target confidence of 0.99. We perform attacks on 1000 images of the true class 'healthy'. All conventional attacks and 490 of the attacks via the flow reached the target confidence of 0.99 for the target class 'cardiomegaly'. + +## D. Examples for Counterfactuals + +In this supplement, we present results on randomly selected images from the three datasets. For the heatmaps, we visualize both the sum over the absolute values of the color channels as well as the sum over the color channels. + +![01963e2d-ca26-77e7-937c-17474b0c7bff_7_971_245_555_1738_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_7_971_245_555_1738_0.jpg) + +Figure 7. Randomly selected examples MNIST 'four' to 'nine' + +![01963e2d-ca26-77e7-937c-17474b0c7bff_8_224_228_1306_1749_0.jpg](images/01963e2d-ca26-77e7-937c-17474b0c7bff_8_224_228_1306_1749_0.jpg) + +Figure 8. Randomly selected examples CelebA 'not blonde' to 'blonde' + +Figure 9.
Randomly selected examples CheXpert 'healthy' to 'cardiomegaly' + diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZBR9EpEl6G4/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZBR9EpEl6G4/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..9e0f8794633635dea05222816838dfe95dca9a4a --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZBR9EpEl6G4/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,167 @@ +§ DIFFEOMORPHIC EXPLANATIONS WITH NORMALIZING FLOWS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Normalizing flows are diffeomorphisms which are parameterized by neural networks. As a result, they can induce coordinate transformations in the tangent space of the data manifold. In this work, we demonstrate that such transformations can be used to generate interpretable explanations for decisions of neural networks. More specifically, we perform gradient ascent in the base space of the flow to generate counterfactuals which are classified with high confidence as a specified target class. We analyze this generation process theoretically using Riemannian differential geometry and establish a rigorous theoretical connection between gradient ascent on the data manifold and in the base space of the flow. + + < g r a p h i c s > + +Figure 1. Diffeomorphic explanation for hair-color classification. + +§ 1. INTRODUCTION + +Explaining a complex system can be drastically simplified using a suitable coordinate system. As an example, the solar system can be explained either by using a reference system for which the sun is at rest (heliocentric) or, alternatively, for which the earth is at rest (geocentric). Contrary to widely held belief, both reference systems are physically valid.
However, the dynamics of the planets are significantly easier to describe in heliocentric coordinates since the planets then follow geometrically simple trajectories. + +Explanation methods for neural networks have recently gained significant attention because they promise to make black-box classifiers more transparent, see (Samek et al., 2019) for a detailed overview. In this paper, we use the bijectivity of a normalizing flow to consider a classifier in the base space of the flow. This amounts to a coordinate transformation in the data space (or, more precisely, a diffeomorphism). We will show that in this coordinate system, the classifier is more easily interpretable and can be used to construct counterfactual explanations that lie on the data manifold. Using Riemannian differential geometry, we will analyze the advantages of creating counterfactual explanations in the base space of the flow and establish a process by which the tangent space of the data manifold can be estimated from the flow. We strongly expect these theoretical insights to be useful beyond explainability. + +In summary, our main contributions are as follows: + + * We propose a novel application domain for flows: inducing a bijective transformation to a more interpretable space on which counterfactuals can be easily generated. + + * We analyze the properties of this generation process theoretically using Riemannian differential geometry. + + * We experimentally demonstrate superior performance compared to more traditional approaches for generating counterfactuals for classification tasks in three different domains. + +§ 2. COUNTERFACTUAL EXPLANATIONS + +Let $f : X \rightarrow {\mathbb{R}}^{K}$ be a classifier whose component $f{\left( x\right) }_{k}$ is the probability for the point $x \in X$ to be of class $k \in \{ 1,\ldots ,K\}$.
We make no assumptions on the architecture of the classifier $f$ and only require that we can evaluate $f\left( x\right)$ and its derivative ${\partial }_{x}f\left( x\right)$ for a given input $x \in X$.${}^{1}$ + +In this work, we will follow the well-established paradigm of counterfactual explanations - see (Verma et al., 2020) for a recent review. These methods aim to explain the classifier $f$ by providing an answer to the question of which minimal deformations ${x}^{\prime } = x + {\delta x}$ need to be applied to the original input $x$ in order to change its prediction. Often, the difference ${\delta x}$ is then visualized by a heatmap highlighting the relevant pixels for the change in classification, see Figure 1 for an example. + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author. + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +${}^{1}$ This assumption can be relaxed: if we do not have access to the gradient, we can approximate it by finite differences. + +In the following, we will assume that the data lies on a submanifold $S \subset X$ which is of (significantly) lower dimension $n$ than the dimension $N$ of its embedding space $X$. We stress that this is also known as the manifold assumption and is expected to hold across a wide range of machine learning tasks, see e.g. (Goodfellow et al., 2016). In these situations, we are often interested only in the deformations ${x}^{\prime }$ which lie on the data manifold $S$. As an example, a customer of a bank may want to understand how their financial data needs to change in order to receive a loan. If the classification changes off-manifold, for example for zip codes that do not exist, this is of no relevance since the user is obliged to enter their correct zip code. Furthermore, it is often required that the deformation is minimal, i.e. the perturbation ${\delta x}$ should be as small as possible.
However, the relevant norm is that of the data manifold $S$ and not of its embedding pixel space $X$. For example, a slightly rotated number in an MNIST image may have a large pixel-wise distance but should be considered an infinitesimal perturbation of the original image. + +More precisely, we define counterfactuals as follows: let $t = {\operatorname{argmax}}_{j}{f}_{j}\left( x\right)$ be the predicted class for the data point $x \in S$. The set ${\Delta }_{k,\delta } \subset S$ of counterfactuals ${x}^{\prime }$ of the point $x$ with respect to the target class $k \in \{ 1,\ldots ,K\} \smallsetminus \{ t\}$ and confidence $\delta \in (0,1]$ is defined by + +${\Delta }_{k,\delta } = \left\{ {{x}^{\prime }\left( x\right) \in S : \;{\operatorname{argmax}}_{j}{f}_{j}\left( {x}^{\prime }\right) = k \land {f}_{k}\left( {x}^{\prime }\right) > \delta }\right\} ,$ + +i.e. all points on the data manifold which are classified to be of the target class $k$ with at least the confidence $\delta$. A minimal counterfactual ${x}^{\prime } \in {\Delta }_{k,\delta }$ is then a counterfactual with the smallest distance ${d}_{\gamma }\left( {{x}^{\prime },x}\right)$ to the original point $x$, where ${d}_{\gamma }$ is the distance on the data manifold (induced by its Riemannian metric $\gamma$). Note that there may not be a unique minimal counterfactual. + +§ 3. CONSTRUCTION OF COUNTERFACTUALS + +We propose to estimate the minimal counterfactual ${x}^{\prime }$ of the data sample $x$ with respect to the classifier $f$ by using a diffeomorphism $g : Z \rightarrow X$ modelled by a normalizing flow. + +The flow $g$ equips the space $X$ with a probability density + +$$ +{q}_{X}\left( x\right) = {q}_{Z}\left( {{g}^{-1}\left( x\right) }\right) \left| {\det \frac{\partial z}{\partial x}}\right| \tag{1} +$$ + +by push-forward of a simple base density ${q}_{Z}$, such as $N\left( {0,1}\right)$, defined on the base space $Z$.
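As a quick numerical sanity check of the change-of-variables formula in Eq. (1), the sketch below pushes a standard-normal base density through a toy one-dimensional affine map, a hypothetical stand-in for a trained flow, and verifies that the resulting density integrates to one:

```python
import numpy as np

# Numerical check of Eq. (1) for a toy 1-d affine "flow" g(z) = a*z + b
# (a hypothetical stand-in for a trained normalizing flow): the
# push-forward density q_X(x) = q_Z(g^{-1}(x)) |det dz/dx| must
# integrate to one.
a, b = 2.0, 0.5

def q_Z(z):
    # standard-normal base density on Z
    return np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

def q_X(x):
    z = (x - b) / a          # g^{-1}(x)
    return q_Z(z) / abs(a)   # |det dz/dx| = 1 / |a|

xs = np.linspace(-12.0, 13.0, 100001)
ys = q_X(xs)
mass = np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs))  # trapezoid rule
print(round(mass, 4))  # 1.0
```

The same identity is what makes maximum-likelihood training of the flow tractable: the log-density in data space splits into a base-density term and a log-determinant term.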
We assume that the flow was successfully trained to approximate the data distribution ${p}_{X}$ by minimizing the forward KL-divergence as usual (this assumption will be made more precise in Section 4). + +We then perform gradient ascent in the base space $Z$ to maximize the probability of the target class $k$ , i.e. + +$$ +{z}^{\left( t + 1\right) } = {z}^{\left( t\right) } + \lambda \frac{\partial {\left( f \circ g\right) }_{k}}{\partial z}\left( {z}^{\left( t\right) }\right) , \tag{2} +$$ + +where $\lambda$ is the learning rate and we initialize by mapping the original point $x$ to the base space by ${z}^{\left( 0\right) } = {g}^{-1}\left( x\right)$ . We then take the sample ${x}^{\left( T\right) } = g\left( {z}^{\left( T\right) }\right)$ as an estimator for a minimal counterfactual if ${x}^{\left( T\right) }$ is the first optimization step to be classified as the target $k$ with the given confidence $\delta$ : + +$$ +{\operatorname{argmax}}_{j}{f}_{j}\left( {x}^{\left( T\right) }\right) = k\;\text{ and }\;{f}_{k}\left( {x}^{\left( T\right) }\right) > \delta . +$$ + +This is because, generically, taking further steps only increases the distance to the original sample, $\begin{Vmatrix}{{z}^{\left( {T + t}\right) } - {z}^{\left( 0\right) }}\end{Vmatrix} > \begin{Vmatrix}{{z}^{\left( T\right) } - {z}^{\left( 0\right) }}\end{Vmatrix}$ for $t > 0$ , and we want to find (an estimate of a) minimal counterfactual. This may also be validated by continuing the optimization for a certain number of steps and selecting the sample with the minimal distance. + +As discussed in Section 6, generative-model-based methods to estimate (minimal) counterfactuals have previously been proposed, for example based on Generative Adversarial Networks or Autoencoders. However, the relevance of normalizing flows in this domain has so far not been recognized.
This is unfortunate as normalizing flows have important advantages in this application domain compared to other generative models: firstly, a flow $g$ is a diffeomorphism and therefore no information is lost by considering the classifier $f \circ g$ on $Z$ instead of the original classifier $f$ on $X$ , i.e. there is a unique $z = {g}^{-1}\left( x\right) \in Z$ for any data point $x \in X$ . Secondly, performing gradient ascent in the base space $Z$ of a well-trained flow will ensure (to good approximation) that each optimization step ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ will stay on the data manifold $S \subset X$ . Since the base space $Z$ has the same dimension as the data space $X$ , the latter statement is far from obvious and is substantiated with theoretical arguments in the next section. + +§ 4. THEORETICAL ANALYSIS + +In the following, it will be shown that performing gradient ascent in $Z$ space and then mapping the result in $X$ space will stay on the data manifold $S$ . + +This is in stark contrast to gradient ascent directly in $X$ space, i.e. + +$$ +{x}^{\left( t + 1\right) } = {x}^{\left( t\right) } + \lambda \frac{\partial {f}_{k}}{\partial x}\left( {x}^{\left( t\right) }\right) , \tag{3} +$$ + +where $\lambda$ is the learning rate. It is well-known that such an optimization procedure would very quickly leave the data manifold $S$ , see for example (Goodfellow et al.,2014). For gradient ascent in $Z$ space (2), each step ${z}^{\left( t\right) }$ can uniquely be mapped to $X$ space by ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ . In the Supplement A.1, we derive the following result: + + < g r a p h i c s > + +Figure 2. Gaussian normal coordinates, see Appendix D of (Carroll, 2019) for a detailed mathematical introduction. + +Theorem 1. Let ${z}^{\left( t\right) }$ be defined as in (2) and ${x}^{\left( t\right) } = g\left( {z}^{\left( t\right) }\right)$ . 
Then, to leading order in the learning rate $\lambda$ , + +$$ +{x}^{\left( t + 1\right) } = {x}^{\left( t\right) } + {\left. \lambda {\gamma }^{-1}\right| }_{{g}^{-1}\left( {x}^{\left( t\right) }\right) }\frac{\partial {f}_{k}}{\partial x}\left( {x}^{\left( t\right) }\right) + \mathcal{O}\left( {\lambda }^{2}\right) , \tag{4} +$$ + +where ${\gamma }^{-1} = \frac{\partial g}{\partial z}\frac{\partial {g}^{T}}{\partial z} \in {\mathbb{R}}^{N,N}$ is the pull-back of the flat metric on $Z$ under the flow $g$ . + +Therefore, performing gradient ascent in $X$ or $Z$ space is not equivalent because (3) and (4) do not agree. In particular, the presence of the inverse metric ${\gamma }^{-1}$ in the update formula (4) effectively induces separate learning rates for each direction in tangent space. + +In the following, we prove that directions orthogonal to the data manifold $S \subset X$ are heavily suppressed by the inverse metric, and thus gradient ascent (4) stays on the data manifold $S$ to very good approximation. + +In practice, the data manifold is only approximately of lower dimension. Specifically, we assume that the data manifold is a product manifold, equipped with the canonical product metric, of the form + +$$ +S = \mathcal{D} \times {B}_{{\delta }_{1}} \times \cdots \times {B}_{{\delta }_{N - n}}, \tag{5} +$$ + +where $\mathcal{D}$ is an $n$ -dimensional manifold and ${B}_{\delta }$ is an open one-dimensional ball with radius $\delta$ (with respect to the flat metric of the embedding space $X$ ). Since we will choose all the radii ${\delta }_{i}$ to be small, the data manifold $S$ is thus approximately $n$ -dimensional.
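The leading-order statement of Theorem 1 is easy to verify numerically. The sketch below uses a toy diffeomorphism $g$ and a toy scalar "logit" $f_k$ (both illustrative choices, not the models used in the paper) and checks that one $Z$-space ascent step of Eq. (2), mapped through $g$, agrees with the $\gamma^{-1}$-preconditioned $X$-space step of Eq. (4) up to $\mathcal{O}(\lambda^2)$:

```python
import numpy as np

# Numerical check of Theorem 1 with a toy diffeomorphism g and a toy
# scalar "logit" f (both illustrative): one ascent step taken in Z and
# mapped through g matches the preconditioned X-space step
# x + lam * (J J^T) grad_x f, with J = dg/dz, up to O(lam^2).
def g(z):                       # toy diffeomorphism R^2 -> R^2
    return z + 0.1 * np.sin(z)

def f(x):                       # toy scalar logit f_k
    return np.tanh(x @ np.array([0.8, -0.5]))

def grad(func, v, eps=1e-6):    # central finite differences
    return np.array([(func(v + eps * e) - func(v - eps * e)) / (2 * eps)
                     for e in np.eye(v.size)])

z = np.array([0.4, -0.9])
x = g(z)
J = np.stack([grad(lambda w, i=i: g(w)[i], z) for i in range(2)])  # J_ij = dg_i/dz_j

lam = 1e-3
x_from_z = g(z + lam * grad(lambda w: f(g(w)), z))   # step of Eq. (2), mapped to X
x_precond = x + lam * (J @ J.T) @ grad(f, x)         # leading-order term of Eq. (4)
print(np.linalg.norm(x_from_z - x_precond) < 1e-6)   # True: difference is O(lam^2)
```

Shrinking $\lambda$ by a factor of 10 shrinks the discrepancy by roughly a factor of 100, as expected for an $\mathcal{O}(\lambda^2)$ remainder.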
+ +We choose Gaussian normal coordinates $x = \left( {{x}_{\parallel }^{1},\ldots ,{x}_{\parallel }^{n},{x}_{ \bot }^{1},\ldots ,{x}_{ \bot }^{N - n}}\right)$ on $X$ , where the ${x}_{ \bot }^{i}$ are slice coordinates for ${B}_{{\delta }_{i}}$ and the $\left( {{x}_{\parallel }^{1},\ldots ,{x}_{\parallel }^{n}}\right)$ are slice coordinates of $\mathcal{D}$ , see Figure 2. We furthermore require that in our coordinates ${x}_{ \bot }^{i}\left( p\right) \in \left( {-{\delta }_{i}, + {\delta }_{i}}\right)$ for $p \in S$ . We then show in the Supplement A.2: + +Theorem 2. Let ${p}_{X}$ denote the data density with $\operatorname{supp}\left( {p}_{X}\right) = S$ , and the flow $g$ be well-trained such that + +$$ +\operatorname{KL}\left( {{p}_{X},{q}_{X}}\right) < \epsilon , +$$ + +and the base density be bounded. Let ${\gamma }^{-1} = \frac{\partial g}{\partial z}{\frac{\partial g}{\partial z}}^{T}$ be the inverse of the induced metric $\gamma$ in the canonical basis of coordinates $x$ . + +In this basis, ${\gamma }^{-1}$ is given by + +$$ +{\gamma }^{-1} = \left( \begin{array}{llll} {\gamma }_{\mathcal{D}}^{-1} & & & \\ & {\gamma }_{{B}_{{\delta }_{1}}}^{-1} & & \\ & & \ddots & \\ & & & {\gamma }_{{B}_{{\delta }_{N - n}}}^{-1} \end{array}\right) , +$$ + +where ${\gamma }_{\mathcal{M}}^{-1}$ is the inverse of the induced metric on the submanifold $\mathcal{M}$ . + +Furthermore, ${\gamma }_{{B}_{{\delta }_{i}}}^{-1} \rightarrow 0$ for vanishing radius ${\delta }_{i} \rightarrow 0$ . + +We therefore conclude that gradient ascent in $Z$ space corresponds to gradient ascent in $X$ space in which the gradient components ${\partial }_{{x}_{ \bot }}f$ orthogonal to the data manifold are effectively scaled by a vanishingly small learning rate. As a result, gradient ascent in $Z$ will, to very good approximation, not leave the data manifold. + +§ 5.
EXPERIMENTS + +Tangent Space: A non-trivial consequence of our theoretical insights is that we can infer the tangent plane at each point on the data manifold from our flow $g$ . Specifically, we perform a singular value decomposition of the Jacobian $\frac{\partial g}{\partial z} = U\Sigma {V}^{T}$ and rewrite the inverse induced metric as + +$$ +{\gamma }^{-1} = \frac{\partial g}{\partial z}\frac{\partial {g}^{T}}{\partial z} = U{\Sigma }^{2}{U}^{T}. \tag{6} +$$ + +For an approximately $n$ -dimensional data manifold $S$ in an $N$ -dimensional embedding space $X$ , Theorem 2 shows that the inverse induced metric ${\gamma }^{-1}$ has $N - n$ small eigenvalues. The eigenvectors corresponding to the large eigenvalues will then approximately span the tangent space of the data manifold. In order to demonstrate this in a toy example, we train flows to approximate data manifolds with the shape of a helix and a torus, respectively. Figure 3 shows that we can indeed recover the tangent planes of these data manifolds to very good approximation. We refer to Supplement B for details about the flow and the data generation. + +Diffeomorphic Explanations: We now demonstrate applications to image classification in several domains. The discussion is necessarily concise, see Supplement C for more details. + +Datasets: We use the MNIST (Deng, 2012), CelebA (Liu et al., 2015), as well as the CheXpert datasets (Irvin et al., 2019). The latter is a dataset of labeled chest X-rays. + +Classifiers: We train a ten-class classifier on MNIST (test accuracy of ${99}\%$ ). For CelebA, we train a binary classifier on the blonde attribute (test accuracy of 94%). For CheXpert, we train a binary classifier on the cardiomegaly attribute (test accuracy of ${86}\%$ ). All classifiers consist of a few standard convolutional, pooling and fully-connected layers with ReLU activations and batch normalization.
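The tangent-space estimation based on Eq. (6) can be reproduced in a few lines. The sketch below replaces a trained flow by a hand-built linear map that squeezes one base direction, so the "data manifold" is a tilted plane; the singular direction of the Jacobian with the smallest singular value then recovers the manifold normal, while the remaining singular directions span the tangent plane:

```python
import numpy as np

# Sketch of the tangent-space estimation of Eq. (6). A hand-built
# linear "flow" g squeezes the third base direction by 0.01, so the
# pushed-forward data manifold is (approximately) a 2-d plane tilted
# by a rotation R; this stands in for a trained flow on a toy manifold.
theta = 0.7
R = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
              [0.0, 1.0, 0.0],
              [np.sin(theta), 0.0, np.cos(theta)]])

def g(z):
    return R @ (z * np.array([1.0, 1.0, 0.01]))

# Jacobian dg/dz by central finite differences (exact here: g is linear).
def jacobian(f, z, eps=1e-6):
    n = z.size
    J = np.zeros((n, n))
    for i in range(n):
        dz = np.zeros(n); dz[i] = eps
        J[:, i] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

J = jacobian(g, np.array([0.3, -1.2, 0.5]))
U, S, Vt = np.linalg.svd(J)           # J = U diag(S) V^T
# gamma^{-1} = J J^T = U diag(S^2) U^T has one tiny eigenvalue; its
# eigenvector (the corresponding column of U) estimates the manifold
# normal, and the remaining columns of U span the tangent plane.
normal_est = U[:, np.argmin(S)]
normal_true = R @ np.array([0.0, 0.0, 1.0])
print(abs(normal_est @ normal_true))  # ~1.0: estimated normal is correct
```

For a trained flow the same recipe applies pointwise, with the Jacobian evaluated at $z = g^{-1}(x)$ for each data point $x$.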
Flows: We choose a flow with RealNVP-type couplings (Dinh et al., 2016) for MNIST and the Glow architecture (Kingma & Dhariwal, 2018) for CelebA and CheXpert. + +Estimation of Counterfactuals: We select the classes 'nine', 'blonde', and 'cardiomegaly' as targets $k$ for MNIST, CelebA, and CheXpert, respectively, and take the confidence threshold to be $\delta = {0.99}$ . We use Adam for optimization. + +Results: Counterfactuals produced by the flow indeed show semantically meaningful deformations, in particular when compared to counterfactuals produced by gradient ascent in the data space $X$ , see Figure 4. For Figure 5, we train a linear SVM for the same classification tasks and show that the flow's counterfactuals generalize better to such a simple model, suggesting that they indeed use semantically more relevant deformations than conventional counterfactuals produced by gradient ascent in $X$ space. + + < g r a p h i c s > + +Figure 3. Approximate tangent planes for points on the data manifold $S$ . As predicted by theory, the parallelepiped spanned by all three eigenvectors of the inverse induced metric, scaled by the corresponding eigenvalues, is to good approximation of the same dimension as the data manifold and tangential to it. + +§ 6. RELATED WORKS + +An influential reference for our work is (Singla et al., 2019), which uses generative adversarial networks (GANs) to generate counterfactuals; see also (Liu et al., 2015; Samangouei et al., 2018) for similar methods. Other approaches (Dhurandhar et al., 2018; Joshi et al., 2019) use Autoencoders instead of GANs. While both classes of generative models can currently produce more realistic high-dimensional samples, they are not bijective. As a result, an encoder network has to be used, which comes at the risk of mode dropping and, in contrast to our work, offers no theoretical guarantees.
(Sixt et al., 2021) propose to train a linear classifier in the base space of a normalizing flow and show that this classifier tends to use highly interpretable features. In contrast to their approach, our method is completely model-agnostic. In (Rombach et al., 2020; Esser et al., 2020), an invertible neural network is used to decompose latent representations of an autoencoder into semantic factors to automatically detect interpretable concepts as well as invariances of classifiers. + + < g r a p h i c s > + +Figure 4. Counterfactuals for MNIST ('four' to 'nine'), CelebA ('non-blonde' to 'blonde'), and CheXpert ('healthy' to 'cardiomegaly’). Columns of each block show original image $x$ , counterfactual ${x}^{\prime }$ , and difference ${\delta x}$ for three selected datapoints. First row is our method, i.e. gradient ascent in $Z$ space. Second row is standard gradient ascent in $X$ space. Heatmaps show sum over absolute values of color channels. + + < g r a p h i c s > + +Figure 5. Generalization of counterfactuals to linear SVMs. Left: accuracy with respect to the target class $k$ generalizes better to SVM for $Z$ -based counterfactuals. Right: distance in the base space is smaller for $Z$ than for $X$ -based counterfactuals. + +§ 7. CONCLUSION + +In this work, we have used the fact that a normalizing flow is a diffeomorphism to map the data space to its base space. In this space, we can then straightforwardly perform gradient ascent on the data manifold, as we have established rigorously using Riemannian differential geometry. For future work, more high-dimensional classification tasks will be considered as well as the dependence of the explanations on the chosen flow architecture. Furthermore, it would be interesting to evaluate the robustness of these explanations with respect to adversarial model and input manipulations (Ghorbani et al., 2019; Dombrowski et al., 2019; Anders et al., 2020; Heo et al., 2019). 
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZbSeZKdqNkm/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZbSeZKdqNkm/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..6f0babb141ea0a498cbb2b54d10e453bc814e6f4 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZbSeZKdqNkm/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,726 @@ +# Task-agnostic Continual Learning with Hybrid Probabilistic Models + +Anonymous Authors ${}^{1}$ + +## Abstract + +Learning new tasks continuously without forgetting on a constantly changing data distribution is essential for real-world problems but extremely challenging for modern deep learning. In this work we propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification. We model the distribution of each task and each class with a normalizing flow. The flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting, all leveraging the invertibility and exact likelihood which are uniquely enabled by the normalizing flow model. We use the generative capabilities of the flow to avoid catastrophic forgetting through generative replay and a novel functional regularization technique. For task identification, we use state-of-the-art anomaly detection techniques based on measuring the typicality of the model's statistics. We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST. + +## 1. Introduction + +For humans, it is natural to learn new skills sequentially without forgetting the skills that were learned previously. 
Deep learning models, on the other hand, suffer from catastrophic forgetting: when presented with a sequence of tasks, deep neural networks can successfully learn the new tasks, but the performance on the old tasks degrades (McCloskey & Cohen, 1989; French, 1999; Kirkpatrick et al., 2017; Parisi et al., 2019; Hadsell et al., 2020). Being able to learn sequentially without forgetting is crucial for numerous applications of deep learning. In real life, data often arrives as a continuous stream, and the data distribution is constantly changing. For example, consider a neural network that might be used for object detection in self-driving cars. The model should continuously adapt to different environments, e.g. weather and lighting. While the network learns to work under new conditions, it should also avoid forgetting. For example, once it adapts to driving during the winter, it should still work well in other seasons. This example illustrates the domain-incremental continual learning setting: the distribution of the inputs to the model evolves over time while the target space stays the same. Moreover, in this scenario, the model should be task-agnostic: it has no information on the task boundaries, i.e., the timestamps when the input distribution changes. + +Motivated by the task-agnostic domain-incremental continual learning setting, we propose Hybrid Continual Learning (HCL) - an approach based on simultaneous generative and discriminative modeling of the data with normalizing flows. Fig. 1 schematically demonstrates the framework. The contributions of our work are as follows: + +- We propose HCL, a normalizing flow-based approach to task-agnostic continual learning. We employ two methods to alleviate catastrophic forgetting: generative replay and a novel functional regularization technique. 
We provide an empirical comparison and theoretical analysis of the two techniques, showing that functional regularization constrains the model more strongly than generative replay to avoid forgetting, and generally leads to better performance. + +- We conduct experiments on a range of image classification continual learning problems on the split MNIST, split CIFAR, SVHN-MNIST and MNIST-SVHN datasets. HCL achieves strong performance in all settings. + +- We show that HCL can successfully detect task boundaries and identify new as well as recurring tasks by measuring the typicality of the model's statistics. + +## 2. Background and Notation + +Continual learning (CL) We assume that a continual learning model ${g}_{\theta } : \mathcal{X} \rightarrow \mathcal{Y}$ is trained on a sequence of $\tau$ supervised tasks: ${T}_{{t}_{1}},{T}_{{t}_{2}},\ldots ,{T}_{{t}_{\tau }}$ . Each task ${T}_{i} = {\left\{ \left( {x}_{j}^{i},{y}_{j}^{i}\right) \right\} }_{j = 1}^{{N}_{i}}$ has the input space ${\mathcal{X}}^{i}$ , the label space ${\mathcal{Y}}^{i}$ , and the corresponding data-generating distribution + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author. + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +![01963e31-aecd-7347-8fac-7a60d1329088_1_284_185_1176_293_0.jpg](images/01963e31-aecd-7347-8fac-7a60d1329088_1_284_185_1176_293_0.jpg) + +Figure 1. An illustration of the proposed Hybrid Continual Learning (HCL) framework. HCL models the distribution of each class in each task as a latent Gaussian distribution transformed by a normalizing flow. We show the Gaussian mixtures corresponding to the two tasks ${t}_{1}$ and ${t}_{2}$ in the latent space on the left, and the corresponding data distributions on the right. If a new task $t = 3$ appears, HCL identifies it using the typicality of the flow's statistics, and initializes the Gaussian mixture for a new task.
${p}_{i}\left( {x, y}\right)$ . The number of tasks $\tau$ is not known in advance, and while training on a task ${T}_{i}$ the model does not have access to the data from previous tasks ${T}_{1},\ldots ,{T}_{i - 1}$ or future tasks ${T}_{i + 1},\ldots ,{T}_{\tau }$ . The objective of a CL model is to minimize $\mathop{\sum }\limits_{{i = 1}}^{\tau }{E}_{x, y \sim {p}_{i}\left( {\cdot , \cdot }\right) }l\left( {{g}_{\theta }\left( x\right) , y}\right)$ for some risk function $l\left( {\cdot , \cdot }\right)$ and thus generalize well on all tasks after training. In this work, we focus on classification, and in particular the domain-incremental learning setting with ${\mathcal{Y}}^{i} = \{ 1,\ldots ,K\}$ for all tasks $i$ . For more on CL settings see (van de Ven & Tolias, 2019) and (Hsu et al., 2018). + +Task-agnostic CL In most continual learning algorithms, it is crucial to know the task boundaries - the moments when the training task is changed. At each iteration $j$ of training, we receive a tuple $\left( {x\left( j\right) , y\left( j\right) , t\left( j\right) }\right)$ where $x\left( j\right)$ and $y\left( j\right)$ are a batch of data and the corresponding labels, and $t\left( j\right)$ is the index of the current task. In this work, we also consider the task-agnostic setting, where the task index $t\left( j\right)$ is not provided and the algorithm has to infer it from data. + +## 3. Hybrid Model for Continual Learning + +### 3.1. Modeling the data distribution + +HCL approximates the data distribution with a single normalizing flow, with each class-task pair $\left( {y, t}\right)$ corresponding to a unique Gaussian in the latent space (see Fig. 1 for illustration).
More precisely, we model the joint distribution ${p}_{t}\left( {x, y}\right)$ of the data $x$ and the class label $y$ conditioned on a task $t$ as ${p}_{t}\left( {x, y}\right) \approx \widehat{p}\left( {x, y \mid t}\right) = {\widehat{p}}_{X}\left( {x \mid y, t}\right) \widehat{p}\left( {y \mid t}\right)$ , where ${\widehat{p}}_{X}\left( {x \mid y, t}\right)$ is modeled by a normalizing flow ${f}_{\theta }$ with a base distribution ${\widehat{p}}_{Z} = \mathcal{N}\left( {{\mu }_{y, t}, I}\right)$ : ${\widehat{p}}_{X}\left( {x \mid y, t}\right) = {f}_{\theta }^{-1}\left( {\mathcal{N}\left( {{\mu }_{y, t}, I}\right) }\right)$ . Here ${\mu }_{y, t}$ is the mean of the latent distribution corresponding to the class $y$ and task $t$ . We assume that $\widehat{p}\left( {y \mid t}\right)$ is a uniform distribution over the classes for each task: $\widehat{p}\left( {y \mid t}\right) = \frac{1}{K}$ . + +We train the model by maximum likelihood: for each mini-batch of data $\left( {x\left( j\right) , y\left( j\right) , t\left( j\right) }\right)$ we compute the likelihood using the change of variables formula and take a gradient step with respect to the parameters $\theta$ of the flow. In the task-agnostic setting, we have no access to the task index $t\left( j\right)$ and instead infer it from data (see Section 3.2). At test time, HCL classifies an input $x$ to the class $\widehat{y}$ using the Bayes rule: $\widehat{p}\left( {y \mid x}\right) \propto \widehat{p}\left( {x \mid y}\right)$ , so $\widehat{y} = \arg \mathop{\max }\limits_{y}\mathop{\sum }\limits_{{t = 1}}^{\tau }{\widehat{p}}_{X}\left( {x \mid y, t}\right)$ . Notice that we do not have access to the task index at test time, so we marginalize the predictions over all tasks $t$ .
The model starts with $K$ Gaussians with means ${\left\{ {\mu }_{y,{t}_{1}}\right\} }_{y = 1}^{K}$ in the latent space corresponding to the classes of the first task. We assume that a model first observes batches of data ${B}_{1},\ldots ,{B}_{m}$ from the task ${T}_{{t}_{1}}$ where each $B = {\left\{ \left( {x}_{j},{y}_{j}\right) \right\} }_{j = 1}^{b}$ . Then, at some unknown point in time $m + 1$ , it starts observing data batches ${B}_{m + 1},{B}_{m + 2},\ldots$ coming from the next task ${T}_{{t}_{2}}$ . The model has to detect the task boundary and initialize Gaussian mixture components in the latent space which will correspond to this new task ${\left\{ \mathcal{N}\left( {\mu }_{y,{t}_{2}}, I\right) \right\} }_{y = 1}^{K}$ . Moreover, in our set-up some of the tasks can be recurring. Thus, after observing tasks ${T}_{{t}_{1}},\ldots ,{T}_{{t}_{k}}$ and detecting the change point from the task ${T}_{{t}_{k}}$ , the model has to identify whether this batch of data comes from a completely new task ${T}_{{t}_{k + 1}}$ (and add new Gaussians for this task in the latent space) or from one of the previous tasks ${T}_{{t}_{1}},\ldots ,{T}_{{t}_{k - 1}}$ . + +Similarly to prior work on anomaly detection (Nalisnick et al., 2019c) and (Morningstar et al., 2020), we detect task changes by measuring the typicality of the HCL model's statistics. Following Morningstar et al. (2020), we can use the following statistics on data batches $B$ : the log-likelihood ${S}_{1}\left( {B, t}\right) = \mathop{\sum }\limits_{{\left( {{x}_{j},{y}_{j}}\right) \in B}}\log {\widehat{p}}_{X}\left( {{x}_{j} \mid {y}_{j}, t}\right)$ , the log-likelihood of the latent variable ${S}_{2}\left( {B, t}\right) = \mathop{\sum }\limits_{{\left( {{x}_{j},{y}_{j}}\right) \in B}}\log {\widehat{p}}_{Z}\left( {f\left( {x}_{j}\right) \mid {y}_{j}, t}\right)$ and the log-determinant of the Jacobian ${S}_{3}\left( {B, t}\right) = {S}_{1}\left( {B, t}\right) - {S}_{2}\left( {B, t}\right)$ .
For each task $t$, we keep track of the mean ${\mu }_{S}^{t}$ and the standard deviation ${\sigma }_{S}^{t}$ of these statistics over a window of the last $l$ batches of data. Then, if any statistic $S\left( {B, t}\right)$ of the current batch $B$ and task $t$ falls outside the typical set, i.e. $\left| {S\left( {B, t}\right) - {\mu }_{S}^{t}}\right| > \lambda {\sigma }_{S}^{t}$, HCL detects a task change. In this case, if all the statistics are in the typical set, $\left| {S\left( {B,{t}^{\prime }}\right) - {\mu }_{S}^{{t}^{\prime }}}\right| < \lambda {\sigma }_{S}^{{t}^{\prime }}$, for one of the previous tasks ${t}^{\prime }$, we identify a switch to the task ${t}^{\prime }$; otherwise, we switch to a new task. In practice, for most standard CL benchmarks, such as split-MNIST, we only use a single statistic, HCL's log-likelihood, which is sufficient for robust task change detection. However, for the more challenging scenarios identified in Nalisnick et al. (2019a), we use all three statistics described above.

### 3.3. Alleviating Catastrophic Forgetting

#### 3.3.1. Generative Replay

Following Shin et al. (2017); Rao et al. (2019), we train the model on a mix of real data from the current task and generated data from previous tasks to combat forgetting. For generating the replay data, we store a single snapshot of the HCL model ${\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y, t}\right)$ with weights ${\theta }^{\left( k\right) }$ taken at the point of the last detected task change ${T}_{{t}_{k}} \rightarrow {T}_{{t}_{k + 1}}$. We generate and replay data from old tasks using the snapshot: ${x}_{GR} \sim {\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y, t}\right)$, where $y \sim U\{ 1,\ldots , K\}$ and $t \sim U\left\{ {{t}_{1},\ldots ,{t}_{k}}\right\}$, and maximize its likelihood ${\mathcal{L}}_{GR} = \log {\widehat{p}}_{X}\left( {{x}_{GR} \mid y, t}\right)$ under the current HCL model ${\widehat{p}}_{X}\left( \cdot \right)$.
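A minimal sketch of this replay objective, where `sample_from_snapshot` (the frozen snapshot $\widehat{p}_X^{(k)}$) and `log_px` (the current model's log-likelihood) are illustrative stand-ins rather than the paper's implementation:

```python
import random

def generative_replay_loss(sample_from_snapshot, log_px, seen_tasks,
                           num_classes, n_samples):
    """L_GR: log-likelihood of snapshot samples under the current model.

    `sample_from_snapshot(y, t)` and `log_px(x, y, t)` are assumed
    stand-ins for x_GR ~ p^(k)_X(x | y, t) and log p_X(x | y, t).
    """
    total = 0.0
    for _ in range(n_samples):
        y = random.randrange(num_classes)      # y ~ U{1, ..., K}
        t = random.choice(seen_tasks)          # t ~ U{t_1, ..., t_k}
        x_gr = sample_from_snapshot(y, t)      # replay sample from snapshot
        total += log_px(x_gr, y, t)            # likelihood under current model
    return total
```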
We store only a single snapshot model throughout training, as it approximates the data distribution of all tasks up to ${T}_{{t}_{k}}$. After detecting the task change ${T}_{{t}_{k + 1}} \rightarrow {T}_{{t}_{k + 2}}$, we update the snapshot with the new weights ${\theta }^{\left( k + 1\right) }$. The resulting objective function in generative replay training is ${\mathcal{L}}_{ll} + {\mathcal{L}}_{GR}$, where ${\mathcal{L}}_{ll}$ is the log-likelihood of the data on the current task. See Appendix D for a further discussion of the generative replay objective. We refer to HCL with generative replay as HCL-GR. In prior work, generative replay has been successfully applied predominantly using GANs or VAEs (Shin et al., 2017; Rao et al., 2019; Lee et al., 2020; Ye & Bors, 2020; Pomponi et al., 2020b; Mundt et al., 2019; Achille et al., 2018).

#### 3.3.2. Functional Regularization

We propose a novel functional regularization loss that enforces the flow to map samples from previous tasks to the same latent representations as a snapshot model. Specifically, similarly to GR, we save a snapshot of the model ${\widehat{p}}_{X}^{\left( k\right) }\left( \cdot \right)$ taken after detecting a shift from the task ${T}_{{t}_{k}}$ and produce samples ${x}_{FR} \sim {\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y, t}\right)$ for $y \sim U\{ 1,\ldots , K\}$, $t \sim U\left\{ {{t}_{1},\ldots ,{t}_{k}}\right\}$. However, instead of the generative replay loss ${\mathcal{L}}_{GR}$, we add the following term to the maximum likelihood objective: ${\mathcal{L}}_{FR} = {\begin{Vmatrix}{f}_{\theta }\left( {x}_{FR}\right) - {f}_{{\theta }^{\left( k\right) }}\left( {x}_{FR}\right) \end{Vmatrix}}^{2}$, where ${f}_{\theta }$ is the current flow mapping and ${f}_{{\theta }^{\left( k\right) }}$ is the snapshot model.
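A minimal sketch of $\mathcal{L}_{FR}$, assuming the two flows are represented by stand-in callables `f_current` and `f_snapshot` that map an input to its latent vector (these names are assumptions of this illustration):

```python
def functional_reg_loss(f_current, f_snapshot, replay_samples):
    """L_FR: squared L2 distance between the latent codes assigned to the
    replay samples by the current flow and by the frozen snapshot flow."""
    loss = 0.0
    for x in replay_samples:
        z_new = f_current(x)   # f_theta(x_FR)
        z_old = f_snapshot(x)  # f_theta^(k)(x_FR), held fixed
        loss += sum((a - b) ** 2 for a, b in zip(z_new, z_old))
    return loss
```

A low value of this loss means the current flow has not moved the snapshot's samples in latent space, which is exactly the restriction described above.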
We note that the ${L}_{2}$ distance in ${\mathcal{L}}_{FR}$ is a natural choice given the choice of ${p}_{Z}\left( {z \mid y, t}\right)$ as a Gaussian, as ${\mathcal{L}}_{ll}$ also contains a linear combination of losses of the form ${\begin{Vmatrix}{f}_{\theta }\left( x\right) - {\mu }_{y, t}\end{Vmatrix}}^{2}$. The term ${\mathcal{L}}_{FR}$ can be understood as controlling the amount of change we allow for the function $f$, and hence the trade-off between stability and plasticity. In practice, we weight the term by a coefficient $\alpha$ and optimize ${\mathcal{L}}_{ll} + \alpha {\mathcal{L}}_{FR}$. We refer to the method as HCL-FR. To the best of our knowledge, the loss ${\mathcal{L}}_{FR}$ is novel: it is designed specifically for normalizing flows and cannot be trivially extended to other generative models. In order to apply ${\mathcal{L}}_{FR}$ to VAEs, we would need to apply the loss separately to the encoder and the decoder of the model, and potentially to their composition. Recently, Titsias et al. (2019) and Pan et al. (2020) proposed related regularization techniques for continual learning which rely on the Gaussian process framework.

Theoretical analysis In Appendix E.1 we study the loss ${\mathcal{L}}_{FR}$ theoretically and draw connections to other objectives. In particular, the term can be interpreted as measuring the amount of change in the function via the KL-divergence, assuming the output of the flow is an isotropic Gaussian. Under a Taylor approximation, we show that ${\mathcal{L}}_{FR}$ enforces the weights to move only in directions of low curvature of the mapping ${f}_{{\theta }^{\left( k\right) }}$ when learning a new task. Hence, similarly to regularization-based CL methods, this term limits movement of the weights in directions that lead to large functional changes of the flow.

## 4. Experiments

In this section, we evaluate HCL on a range of image classification tasks in continual learning.
In all experiments, we consider domain-incremental learning, where the number of classes $K$ is the same in all tasks. At test time, the task identity is not provided to any of the considered methods. For HCL, we report the performance both in the task-aware (when the task identity is provided to the model during training) and task-agnostic (no task boundary knowledge during training) settings. We use RealNVP and Glow normalizing flow models. See Appendix A for the detailed setup.

Metrics Let ${a}_{i, j}$ be the accuracy of the model on task $i$ after training on $j$ tasks. We report the following metrics: (1) the final accuracy ${a}_{i,\tau }$ on each task $i \in \{ 1,\ldots ,\tau \}$ at the end of training, (2) the average final accuracy across tasks $\frac{1}{\tau }\mathop{\sum }\limits_{{i = 1}}^{\tau }{a}_{i,\tau }$, (3) the average forgetting: $\frac{1}{\tau - 1}\mathop{\sum }\limits_{{i = 1}}^{{\tau - 1}}\left( {{a}_{i, i} - {a}_{i,\tau }}\right)$, and (4) the overall accuracy: the final accuracy on $\left( {K \times \tau }\right)$-way classification, which indicates how well the model identifies both the class and the task. We run each experiment with 3 seeds and report the mean and standard deviation of the metrics.

Adam We evaluate Adam training without any extra steps for preventing catastrophic forgetting.

Multi-Task Learning (MTL) We evaluate multi-task learning (MTL): the model is trained on each task ${T}_{{t}_{i}}$ for the same number of epochs as in the CL methods; however, when training on ${T}_{{t}_{i}}$, it has access to all previous tasks ${T}_{{t}_{1}},\ldots ,{T}_{{t}_{i - 1}}$. At each iteration, we sample a mini-batch (of the same size as the current data batch) containing the data

![01963e31-aecd-7347-8fac-7a60d1329088_3_221_186_1306_522_0.jpg](images/01963e31-aecd-7347-8fac-7a60d1329088_3_221_186_1306_522_0.jpg)

Figure 2. Results on (a) Split MNIST, (b) MNIST-SVHN and (c) SVHN-MNIST image datasets.
For Split MNIST, the top panel shows the performance of each method on each task at the end of training; for HCL we show the results in the task-agnostic setting with dashed lines. We also show the average accuracy, forgetting, and overall accuracy for each dataset and method. HCL provides strong performance, especially on SVHN-MNIST, where it achieves almost zero forgetting and significantly outperforms ER. HCL-FR provides better results than HCL-GR overall.

from each of the tasks that have been observed so far.

Experience Replay (ER) We reserve a buffer with a fixed size of 1000 samples per task and randomly select samples to add to that buffer during training on each task. When training on task ${T}_{k}$, the model randomly picks a number of samples equal to the current task's batch size from each of the previous task buffers and appends them to the current batch.

CURL We evaluate the state-of-the-art CURL (Rao et al., 2019) method for continual learning, which is most closely related to HCL: CURL also incorporates a generative model (VAE), with an expanding Gaussian mixture in latent space, and likelihood-based task-change detection.

Split MNIST In this experiment, following prior work, we split the MNIST dataset (LeCun et al., 1998) into 5 binary classification tasks. We train for 30 epochs on each task. We use the Glow architecture to model the data distribution. The results are presented in Fig. 2 (a) and Appendix Table 1. HCL shows strong performance, competitive with ER. Of the HCL variants, HCL-FR provides better performance in both the task-aware and the task-agnostic settings. Both HCL variants significantly outperform CURL.
We hypothesize that, since it only uses a single latent Gaussian component for each class, CURL cannot easily capture a highly multimodal and complex data distribution for a single class, which is a requirement for domain-incremental learning, where classes may be visually very different across tasks. In contrast, as HCL initializes multiple latent components in a task-agnostic fashion and draws upon a flexible flow-based model, it is much better suited to the domain-incremental continual learning setting. The final accuracy of the Adam baseline on some tasks is very low: unless we take measures to avoid forgetting, the flow typically maps the data from all tasks to the region in the latent space corresponding to the final task, and it may happen that, e.g., the data in class 1 of the first task is mapped to class 2 of the last task.

MNIST-SVHN and SVHN-MNIST We evaluate HCL and the baselines on two more challenging problems: MNIST-SVHN and SVHN-MNIST. Here, the tasks are 10-way classification problems on either the SVHN (Netzer et al., 2011) or the MNIST dataset. We use the RealNVP architecture with inputs of size ${32} \times {32} \times 3$, and upscale the MNIST images to this resolution. We train the methods for 90 epochs on each task. We report the results in Fig. 2 (b) and (c) and Appendix Table 2. HCL-FR and HCL-GR show strong performance, outperforming ER and Adam significantly and performing on par with MTL. On MNIST-SVHN, the model is able to almost completely avoid forgetting.

See Appendix B for additional experimental results on split CIFAR-10 and split CIFAR-100, and Appendix G for a discussion of task identification results.

## 5. Discussion

In this work we proposed HCL, a hybrid model for continual learning based on normalizing flows. HCL achieves strong performance on a range of image classification problems and is able to automatically detect new and recurring tasks using the typicality of the flow's statistics.
We believe that the key advantage of HCL is its simplicity and extensibility. HCL describes the data generating process using a tractable but flexible probabilistic model and uses maximum likelihood to train the model.

## References

Achille, A., Eccles, T., Matthey, L., Burgess, C., Watters, N., Lerchner, A., and Higgins, I. Life-long disentangled representation learning with cross-domain latent homologies. In NeurIPS, 2018.

Aljundi, R., Kelchtermans, K., and Tuytelaars, T. Task-free continual learning. In CVPR, 2019.

Atanov, A., Volokhova, A., Ashukha, A., Sosnovik, I., and Vetrov, D. Semi-conditional normalizing flows for semi-supervised learning. arXiv:1905.00505, 2019.

Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv:1607.06450, 2016.

Balaji, Y., Farajtabar, M., Yin, D., Mott, A., and Li, A. The effectiveness of memory replay in large scale continual learning. arXiv:2010.02418, 2020.

Behrmann, J., Grathwohl, W., Chen, R. T., Duvenaud, D., and Jacobsen, J.-H. Invertible residual networks. In ICML, 2019.

Bulusu, S., Kailkhura, B., Li, B., Varshney, P. K., and Song, D. Anomalous example detection in deep learning: A survey. IEEE Access, 8:132330-132347, 2020.

Buzzega, P., Boschini, M., Porrello, A., Abati, D., and Calderara, S. Dark experience for general continual learning: a strong, simple baseline. arXiv:2004.07211, 2020.

Chaudhry, A., Ranzato, M., Rohrbach, M., and Elhoseiny, M. Efficient lifelong learning with A-GEM. arXiv:1812.00420, 2018.

Chaudhry, A., Gordo, A., Dokania, P. K., Torr, P., and Lopez-Paz, D. Using hindsight to anchor past knowledge in continual learning. arXiv:2002.08165, 2020.

Chen, R. T. Q., Behrmann, J., Duvenaud, D., and Jacobsen, J. Residual flows for invertible generative modeling. In NeurIPS, 2019.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. In ICLR, 2017.

Duchi, J., Hazan, E., and Singer, Y.
Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.

Farajtabar, M., Azizan, N., Mott, A., and Li, A. Orthogonal gradient descent for continual learning. In AISTATS, 2020.

Finzi, M., Izmailov, P., Maddox, W., Kirichenko, P., and Wilson, A. G. Invertible convolutional networks. In Workshop on Invertible Neural Networks and Normalizing Flows (ICML), 2019.

French, R. M. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 1999.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks. NeurIPS, 2014.

Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv:1810.01367, 2018.

Hadsell, R., Rao, D., Rusu, A. A., and Pascanu, R. Embracing change: Continual learning in deep neural networks. Trends in Cognitive Sciences, 2020.

He, X., Sygnowski, J., Galashov, A., Rusu, A. A., Teh, Y. W., and Pascanu, R. Task agnostic continual learning via meta learning. arXiv:1906.05201, 2019.

Hsu, Y.-C., Liu, Y.-C., Ramasamy, A., and Kira, Z. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv:1810.12488, 2018.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Izmailov, P., Kirichenko, P., Finzi, M., and Wilson, A. G. Semi-supervised learning with normalizing flows. In ICML, 2020.

Jerfel, G., Grant, E., Griffiths, T. L., and Heller, K. A. Reconciling meta-learning and continual learning with online mixtures of tasks. In NeurIPS, 2019.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1 \times 1$ convolutions. In NeurIPS, 2018.

Kingma, D. P.
and Welling, M. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.

Kingma, D. P., Mohamed, S., Jimenez Rezende, D., and Welling, M. Semi-supervised learning with deep generative models. NeurIPS, 2014.

Kirichenko, P., Izmailov, P., and Wilson, A. G. Why normalizing flows fail to detect out-of-distribution data. arXiv:2006.08545, 2020.

Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.

Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.

Lange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G. G., and Tuytelaars, T. Continual learning: A comparative study on how to defy forgetting in classification tasks. arXiv:1909.08383, 2019.

LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

Lee, S., Ha, J., Zhang, D., and Kim, G. A neural Dirichlet process mixture model for task-free continual learning. arXiv:2001.00689, 2020.

Li, X., Zhou, Y., Wu, T., Socher, R., and Xiong, C. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. arXiv:1904.00310, 2019.

Lopez-Paz, D. and Ranzato, M. Gradient episodic memory for continual learning. In NeurIPS, 2017.

McCloskey, M. and Cohen, N. J. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation. 1989.

Mirzadeh, S.-I., Farajtabar, M., and Ghasemzadeh, H. Dropout as an implicit gating mechanism for continual learning. In Proceedings of the Computer Vision and Pattern Recognition Workshops, 2020a.

Mirzadeh, S. I., Farajtabar, M., Gorur, D., Pascanu, R., and Ghasemzadeh, H.
Linear mode connectivity in multitask and continual learning. In ICLR, 2020b.

Mirzadeh, S. I., Farajtabar, M., Pascanu, R., and Ghasemzadeh, H. Understanding the role of training regimes in continual learning. In NeurIPS, 2020.

Morningstar, W. R., Ham, C., Gallagher, A. G., Lakshminarayanan, B., Alemi, A. A., and Dillon, J. V. Density of states estimation for out-of-distribution detection. arXiv:2006.09273, 2020.

Mundt, M., Majumder, S., Pliushch, I., and Ramesh, V. Unified probabilistic deep continual learning through generative replay and open set recognition. arXiv:1905.12019, 2019.

Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and Lakshminarayanan, B. Do deep generative models know what they don't know? In ICLR, 2019a.

Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and Lakshminarayanan, B. Hybrid models with deep and invertible features. In ICML, 2019b.

Nalisnick, E., Matsukawa, A., Teh, Y. W., and Lakshminarayanan, B. Detecting out-of-distribution inputs to deep generative models using typicality. arXiv:1906.02994, 2019c.

Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. 2011.

Oord, A. v. d., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., and Kavukcuoglu, K. Conditional image generation with PixelCNN decoders. arXiv:1606.05328, 2016.

Pan, P., Swaroop, S., Immer, A., Eschenhagen, R., Turner, R. E., and Khan, M. E. Continual deep learning by functional regularisation of memorable past. NeurIPS, 2020.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv:1912.02762, 2019.

Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., and Wermter, S. Continual lifelong learning with neural networks: A review. Neural Networks, 2019.

Pascanu, R. and Bengio, Y.
Revisiting natural gradient for deep networks. arXiv:1301.3584, 2013.

Pomponi, J., Scardapane, S., Lomonaco, V., and Uncini, A. Efficient continual learning in neural networks with embedding regularization. Neurocomputing, 2020a.

Pomponi, J., Scardapane, S., and Uncini, A. Pseudo-rehearsal for continual learning with normalizing flows. arXiv:2007.02443, 2020b.

Rao, D., Visin, F., Rusu, A., Pascanu, R., Teh, Y. W., and Hadsell, R. Continual unsupervised representation learning. In NeurIPS, 2019.

Rebuffi, S.-A., Kolesnikov, A. I., Sperl, G., and Lampert, C. H. iCaRL: Incremental classifier and representation learning. CVPR, 2016.

Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., and Tesauro, G. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv:1810.11910, 2018.

Rios, A. and Itti, L. Closed-loop GAN for continual learning. arXiv:1811.01146, 2018.

Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., and Hadsell, R. Progressive neural networks. arXiv:1606.04671, 2016.

Saha, G., Garg, I., and Roy, K. Gradient projection memory for continual learning. arXiv:2103.09762, 2021.

Shin, H., Lee, J. K., Kim, J., and Kim, J. Continual learning with deep generative replay. In NeurIPS, 2017.

Srivastava, R. K., Greff, K., and Schmidhuber, J. Highway networks. arXiv:1505.00387, 2015.

Tan, M. and Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, 2019.

Titsias, M. K., Schwarz, J., Matthews, A. G. d. G., Pascanu, R., and Teh, Y. W. Functional regularisation for continual learning using Gaussian processes. arXiv:1901.11356, 2019.

van de Ven, G. M. and Tolias, A. S. Three scenarios for continual learning. arXiv:1904.07734, 2019.

van de Ven, G. M., Siegelmann, H. T., and Tolias, A. S. Brain-inspired replay for continual learning with artificial neural networks.
Nature Communications, 11(1):1-14, 2020.

Wortsman, M., Ramanujan, V., Liu, R., Kembhavi, A., Rastegari, M., Yosinski, J., and Farhadi, A. Supermasks in superposition. NeurIPS, 2020.

Ye, F. and Bors, A. G. Learning latent representations across multiple data domains using lifelong VAEGAN. arXiv:2007.10221, 2020.

Yin, D., Farajtabar, M., and Li, A. SOLA: Continual learning with second-order loss approximation. arXiv:2006.10974, 2020.

Yoon, J., Yang, E., Lee, J., and Hwang, S. J. Lifelong learning with dynamically expandable networks. In ICLR, 2018.

Zeno, C., Golan, I., Hoffer, E., and Soudry, D. Task agnostic continual learning using online variational Bayes. arXiv:1803.10123, 2018.

Zhang, H., Li, A., Guo, J., and Guo, Y. Hybrid models for open set recognition. arXiv:2003.12506, 2020.

Zhang, M., Wang, T., Lim, J. H., and Feng, J. Prototype reminding for continual learning. arXiv:1905.09447, 2019.

Zisselman, E. and Tamar, A. Deep residual flow for out of distribution detection. In CVPR, 2020.

# Task-agnostic Continual Learning with Hybrid Probabilistic Models: Supplementary Material

## A. Setup and hyperparameters

### A.1. HCL

Setup We use RealNVP and Glow normalizing flow models. We initialize the class-conditional latent Gaussian means ${\mu }_{y, t}$ randomly when a new task is identified. Following Izmailov et al. (2020), we do not train ${\mu }_{y, t}$ and instead keep them fixed. We use the Adam optimizer (Kingma & Ba, 2014) to train the parameters of the flow, with a batch size of 32. For both GR and FR, we generate a number of replay samples equal to the batch size for each of the previously observed tasks. For task identification, we use a window of $l = {100}$ mini-batches, and the sensitivity $\lambda$ in the threshold for task detection is set to 5. Further details on the hyperparameters are given below.
Split MNIST For split MNIST, we use a Glow architecture (Kingma & Dhariwal, 2018), generally following Nalisnick et al. (2019a) for the model setup. We use Highway networks (Srivastava et al., 2015) as coupling layer networks, which predict the scale and shift of the affine transformation. Each Highway network has 3 hidden layers with 200 channels. We use a multi-scale Glow architecture with 2 scales and 14 coupling layers per scale, squeezing the spatial dimension between the two scales.

For the sensitivity parameter $\lambda$, we tested the values $\lambda = 3$ and $\lambda = 5$ on the split MNIST dataset. For the lower value of $\lambda$ the model correctly identified the actual task shifts; however, it also detected a higher number of spurious task shifts. We used $\lambda = 5$ for the rest of the experiments.

SVHN-MNIST and MNIST-SVHN We use a RealNVP (Dinh et al., 2017) model with ResNet-like coupling layer networks for the SVHN-MNIST and MNIST-SVHN experiments. The ResNet networks have 8 blocks with 64 channels, and use Layer Normalization (Ba et al., 2016) instead of Batch Normalization (Ioffe & Szegedy, 2015). The RealNVP has 3 scales and 16 coupling layers in total.

CIFAR embeddings For this set of experiments, discussed in Appendix B, we use a RealNVP model with 1 scale and 8 coupling layers, and MLP coupling layer networks with 3 hidden layers and 512 hidden units in each layer.

We use a weight decay of $5 \times {10}^{-5}$ in the split MNIST, SVHN-MNIST and MNIST-SVHN experiments, and tune the weight decay in the range $\left\{ {{10}^{-4},{10}^{-3},{10}^{-2}}\right\}$ on a validation set.

For HCL-FR on split MNIST, SVHN-MNIST and MNIST-SVHN we set the weight of the regularization term ${\mathcal{L}}_{FR}$ to $\alpha = 1$. For split CIFAR-10 and split CIFAR-100, we tune $\alpha$ on a validation set over the range $\{ 1, 5, {10}, {100}\}$.
Generally, we do not notice a major difference in the performance of HCL-FR and its task-agnostic version when varying $\alpha$.

We compare HCL-GR and HCL-FR to other training procedures for the same flow model: regular Adam training, multi-task learning, and experience replay. Additionally, we compare to CURL (Rao et al., 2019), which is based on a VAE architecture.

Adam As a baseline, we train the HCL model with the Adam optimizer. Prior work (Mirzadeh et al., 2020; Hsu et al., 2018) argues against using Adam for continual learning; however, it is challenging to train normalizing flows with SGD. We experimented with Adagrad (Duchi et al., 2011) and RMSProp but did not observe a significant improvement over Adam.

Experience Replay Note that we fix the size of the buffer per task throughout the experiments, resulting in varying performance: on SVHN-MNIST the total size of the buffer is only about ${1.5}\%$ of the SVHN dataset, while on Split CIFAR-100 the size of the combined buffer over all tasks is ${20}\%$ of the dataset by the end of training.

### A.2. CURL

We evaluate the state-of-the-art CURL (Rao et al., 2019) method for continual learning, which is most closely related to HCL: CURL also incorporates a generative model (VAE), with an expanding Gaussian mixture in latent space, and likelihood-based task-change detection. However, in the original paper the method is only evaluated in class- and task-incremental learning settings, and focuses on unsupervised learning. To provide a fair comparison, we use the supervised variant of CURL proposed by Rao et al. (2019), in which the label is used to directly train the corresponding Gaussian component in latent

![01963e31-aecd-7347-8fac-7a60d1329088_7_203_206_1335_470_0.jpg](images/01963e31-aecd-7347-8fac-7a60d1329088_7_203_206_1335_470_0.jpg)

Figure 3. Results on Split CIFAR embedding datasets. We use embeddings extracted by an EfficientNet model (Tan & Le, 2019) pre-trained on ImageNet.
In the top panels we show the performance of each method on each task at the end of training; for HCL we show the results in the task-agnostic setting with dashed lines. At the bottom, we show the average accuracy, forgetting, and overall accuracy for each method. HCL outperforms CURL and Adam and performs on par with experience replay with a large replay buffer. HCL-FR provides better performance than HCL-GR.

space. It is important to note that while CURL is task-agnostic in the unsupervised setting, it implicitly infers the task via labels in the supervised task- and class-incremental settings (and hence does not perform task-change detection or unsupervised expansion). Thus, for domain-incremental learning (where the labels do not signal the introduction of a new task), we snapshot the generative model on task change, meaning that it is not task-agnostic in this setting. Finally, since the supervised variant of CURL utilizes a single Gaussian component in latent space for each label, we cannot directly compute the overall accuracy with respect to class labels after training on domain labels.

We train all models with the Adam optimizer, with a learning rate of ${10}^{-3}$. For MNIST, we use the same architecture as in Rao et al. (2019), with an MLP encoder with layer sizes [1200, 600, 300, 150], an MLP Bernoulli decoder with layer sizes [500, 500], and a latent space dimensionality of 32. For CIFAR-10/100 features, we use 2-layer MLPs for both the encoder and the decoder (with a Gaussian likelihood for the decoder), with 512 units in each layer and a latent space dimensionality of 64.

## B. Additional experimental results

Split CIFAR embeddings We consider the Split CIFAR-10 and Split CIFAR-100 datasets, where each task corresponds to 2 classes of CIFAR-10 and 10 classes of CIFAR-100 (Krizhevsky et al., 2009), respectively.
Generative models typically struggle to generate high-fidelity images when trained on the CIFAR datasets due to their high variance and low resolution. For this reason, we utilize transfer learning and, instead of using the raw pixel values, use embeddings extracted by an EfficientNet model (Tan & Le, 2019) pre-trained on ImageNet. Normalizing flows have been shown to perform well on classification and out-of-distribution detection tasks using image embeddings (Izmailov et al., 2020; Kirichenko et al., 2020; Zhang et al., 2020; Zisselman & Tamar, 2020). We report the results in Fig. 3 and Appendix Tables 3 and 4. We train all methods for 15 epochs per task. In the Appendix Tables, we additionally report the performance in the single-pass setting, training for just one epoch per task. HCL provides strong performance, with functional regularization giving the best results. ER provides a strong baseline, especially on CIFAR-100, due to the relatively large (20% of the dataset) size of the replay buffer. CURL underperforms compared to the HCL variants. Since CURL is a VAE-based model, it requires an appropriate decoder likelihood for each data type (a Gaussian distribution in the case of CIFAR embeddings), while our flow-based HCL model directly models the distribution in the data space without extra approximations. We hypothesize that this advantage is one of the reasons for HCL's superior performance.

In Tables 1, 2, 3 and 4, we provide detailed results of the experiments. For each experiment and method, with the exception of MNIST-SVHN and SVHN-MNIST, we repeat the experiments three times with different random initializations and report the mean and standard deviation of the results. We report the results on Split MNIST in Table 1; on SVHN-MNIST and MNIST-SVHN in Table 2; on Split CIFAR-10 in Table 3; and on Split CIFAR-100 in Table 4. For the Split CIFAR datasets we report the results in two settings: with 15 epochs per task and with 1 epoch per task (single-pass).
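For reference, the average final accuracy and average forgetting reported in the tables below follow the definitions from Section 4. A hedged sketch, assuming a 0-indexed accuracy matrix `a[i][j]` (accuracy on task `i` after training on `j + 1` tasks; the matrix layout is an assumption of this illustration):

```python
def avg_final_accuracy(a):
    """Average over tasks of the accuracy after training on all tau tasks."""
    tau = len(a)
    return sum(a[i][tau - 1] for i in range(tau)) / tau

def avg_forgetting(a):
    """Average drop from accuracy right after learning task i to final accuracy."""
    tau = len(a)
    return sum(a[i][i] - a[i][tau - 1] for i in range(tau - 1)) / (tau - 1)
```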
Table 1. Results of the experiments on the split MNIST dataset with MTL (multitask learning), Adam (regular training without alleviating forgetting), ER (standard experience replay with a buffer capacity of 1000 samples per task), HCL-GR (generative replay), and HCL-FR (functional regularization), as well as task-agnostic versions of HCL-FR and HCL-GR. The dataset with 10 classes is split into 5 binary classification tasks.
| TASK # | 1 | 2 | 3 | 4 | 5 | Acc AVG | FORGET AVG | FULL Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTL | 99.78 ± 0.15 | 98.02 ± 1.50 | 98.08 ± 0.96 | 98.98 ± 0.33 | 96.15 ± 1.87 | 98.20 ± 0.88 | -1.32 ± 1.03 | 94.44 ± 1.06 |
| ADAM | 55.59 ± 4.74 | 63.66 ± 3.10 | 19.96 ± 7.03 | 89.41 ± 7.15 | 99.16 ± 0.41 | 65.56 ± 1.33 | 42.57 ± 1.49 | 19.66 ± 0.08 |
| ER | 95.92 ± 5.00 | 94.69 ± 1.98 | 94.27 ± 2.20 | 98.44 ± 0.41 | 96.77 ± 1.00 | 96.02 ± 1.31 | 3.19 ± 1.92 | 92.86 ± 1.92 |
| CURL | 96.67 ± 0.64 | 93.06 ± 1.42 | 80.14 ± 5.70 | 98.05 ± 0.86 | 97.27 ± 0.41 | 93.23 ± 1.06 | 6.90 ± 1.47 | - |
| HCL-FR | 98.31 ± 1.03 | 95.97 ± 0.81 | 96.37 ± 1.06 | 99.24 ± 0.08 | 95.95 ± 3.04 | 97.17 ± 0.65 | 1.53 ± 0.91 | 93.55 ± 1.40 |
| HCL-GR | 95.97 ± 3.65 | 93.08 ± 4.17 | 92.67 ± 2.43 | 98.94 ± 0.66 | 97.02 ± 1.69 | 95.54 ± 1.21 | 4.25 ± 1.99 | 86.58 ± 7.37 |
| HCL-FR (TA) | 94.41 ± 3.42 | 93.88 ± 0.37 | 90.25 ± 4.11 | 98.86 ± 0.23 | 99.28 ± 0.23 | 95.33 ± 0.67 | 5.24 ± 0.95 | 90.89 ± 0.96 |
| HCL-GR (TA) | 96.78 ± 0.98 | 84.49 ± 5.57 | 88.38 ± 4.79 | 99.04 ± 0.53 | 98.87 ± 0.06 | 93.52 ± 1.84 | 7.41 ± 2.18 | 84.65 ± 3.46 |
Table 2. Results of the experiments on SVHN-MNIST and MNIST-SVHN datasets with MTL (multitask learning), Adam (regular training without alleviating forgetting), ER (standard experience or data buffer replay with the capacity of 1000 samples per task), HCL-GR (generative replay), HCL-FR (functional regularization), as well as task-agnostic versions of HCL-FR and HCL-GR. Each dataset contains two 10-way classification tasks corresponding to MNIST and SVHN.
SVHN-MNIST

| TASK # | 1 | 2 | Acc AVG | FORGET AVG | FULL Acc |
| --- | --- | --- | --- | --- | --- |
| MTL | 95.96 | 99.18 | 97.57 | 0.01 | 96.86 |
| ADAM | 9.28 | 99.18 | 54.23 | 86.69 | 30.74 |
| ER | 71.14 | 99.45 | 85.30 | 24.83 | 76.00 |
| HCL-FR | 94.48 | 99.32 | 96.90 | 1.49 | 95.78 |
| HCL-GR | 94.03 | 99.35 | 96.69 | 1.94 | 95.50 |
| HCL-FR (TA) | 95.19 | 99.26 | 97.23 | 0.92 | 96.38 |
| HCL-GR (TA) | 91.76 | 99.50 | 95.63 | 2.87 | 93.84 |
+ +
MNIST-SVHN

| TASK # | 1 | 2 | Acc AVG | FORGET AVG | FULL Acc |
| --- | --- | --- | --- | --- | --- |
| MTL | 99.58 | 95.56 | 97.57 | -0.11 | 96.68 |
| ADAM | 64.16 | 95.82 | 79.99 | 35.31 | 69.23 |
| ER | 98.54 | 88.89 | 93.72 | 0.93 | 91.56 |
| HCL-FR | 99.51 | 95.55 | 97.53 | -0.04 | 96.65 |
| HCL-GR | 99.53 | 95.52 | 97.53 | -0.06 | 96.63 |
| HCL-FR (TA) | 99.47 | 94.14 | 96.81 | 0.02 | 95.62 |
| HCL-GR (TA) | 99.56 | 94.68 | 97.12 | -0.04 | 96.04 |
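The Acc AVG, FORGET AVG and FULL Acc columns in the tables above follow standard continual-learning bookkeeping. A sketch of how such metrics are typically computed from a matrix of per-task accuracies (the exact definitions are our reading of the usual conventions, not quoted from the implementation):

```python
import numpy as np

def cl_metrics(acc):
    """acc[i, j]: accuracy (%) on task j after finishing training on task i.

    Returns (average final accuracy, average forgetting), where forgetting
    on a past task is the drop from the best accuracy ever achieved on it.
    """
    acc = np.asarray(acc, dtype=float)
    num_tasks = acc.shape[0]
    final = acc[-1]                       # accuracy on each task at the end
    acc_avg = float(final.mean())
    forget = [acc[:, j].max() - final[j] for j in range(num_tasks - 1)]
    forget_avg = float(np.mean(forget))
    return acc_avg, forget_avg
```

For example, with two tasks where accuracy on task 1 drops from 90% to 80% after training on task 2 (where the model reaches 95%), the average accuracy is 87.5% and the average forgetting is 10%.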
## C. Differences between HCL-GR and HCL-FR

Intuitively, HCL-FR imposes a stronger restriction on the model. Indeed, in order to achieve low values of the objective ${\mathcal{L}}_{FR}$, the model ${f}_{\theta }$ has to map the replay data to the same locations in the latent space as the snapshot model ${f}_{\theta }^{\left( k\right) }$. On the other hand, to achieve a low value of the HCL-GR objective we only need the likelihood of the replay data to be high according to ${f}_{\theta }$. In other words, HCL-GR only restricts the locations of ${f}_{\theta }\left( x\right)$ (for replayed data) to lie in the high-density set of ${\widehat{p}}_{Z}$, but not in any particular position (see Figure 4(a)).

We visualize the effect of both objectives on a two-dimensional two moons dataset in Figure 4. We treat the two moons as different tasks and first train the HCL model on the top moon, shown in grey. Then, we continue training on the second moon, shown in orange, using either FR or GR to avoid forgetting. To build an understanding of the effect of these methods, we only use a fixed set of four data points, shown as coral squares, as the replay data. We show the learned distributions after training on the second task for HCL-GR and HCL-FR in panels (c) and (d) of Figure 4. With a limited number of replay samples, HCL-GR struggles to avoid forgetting: the method is motivated to maximize the likelihood of the replay data, and it overfits to the small replay buffer, forgetting the structure of the first task. HCL-FR, on the other hand, preserves the structure of the first task better using the same 4 replay samples. In panel (e) of Figure 4 we visualize the positions to which the models map the replay data in the latent space.
![01963e31-aecd-7347-8fac-7a60d1329088_9_243_187_1257_457_0.jpg](images/01963e31-aecd-7347-8fac-7a60d1329088_9_243_187_1257_457_0.jpg)

Figure 4. Comparison of functional regularization and generative replay. (a): A visualization of HCL-FR and HCL-GR; HCL-GR forces the model to maintain high likelihood of the replay data, while HCL-FR penalizes the distance between the locations of the latent representations for the sampled data for the current and snapshot models. (b): Two moons dataset; data from the first and second tasks is shown with grey and orange circles, and coral squares show the replay samples. (c): Learned distribution after training on the second task with HCL-GR, and (d) HCL-FR. (e): Locations of images of the replay data in the latent space for the model trained on the first task (squares), HCL-GR (triangles) and HCL-FR (stars). HCL-FR restricts the model more than GR: the locations of replay samples in the latent space coincide for HCL-FR and the model trained on the first task. Consequently, HCL-FR preserves more information about the structure of the first task.

HCL-FR (shown with stars) maps the replay data to exactly the same locations as the snapshot model ${f}_{\theta }^{\left( 1\right) }$ trained on the first task (shown with squares). HCL-GR (shown with triangles) simply maps the samples to the high-density region without preserving their locations.

To sum up, HCL-FR provides a stronger regularization than HCL-GR while preserving the flexibility of the model, which is crucial for avoiding forgetting. We provide an empirical comparison of HCL-FR and HCL-GR in Section 4, where HCL-FR shows better performance across the board. We discuss the relation between the HCL-FR and HCL-GR objectives further in Appendix E.2.

## D. Alleviating forgetting

The loss ${\mathcal{L}}_{GR}$ in HCL-GR is computed as follows:

$$
{x}_{GR} \sim {\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y, t}\right) ,\;y \sim U\left\lbrack {1,\ldots K}\right\rbrack , t \sim U\left\{ {{t}_{1},\ldots ,{t}_{k}}\right\}
$$

$$
{\mathcal{L}}_{GR} = \log {p}_{X}\left( {{x}_{GR} \mid y, t}\right) = \log {p}_{Z}\left( {{f}_{\theta }\left( {x}_{GR}\right) \mid y, t}\right) + \log \left| \frac{\partial {f}_{\theta }}{\partial x}\right| \tag{1}
$$

$$
= - \frac{1}{2}{\begin{Vmatrix}{f}_{\theta }\left( {x}_{GR}\right) - {\mu }_{y, t}\end{Vmatrix}}^{2} + \log \left| \frac{\partial {f}_{\theta }}{\partial x}\right| + \text{ const }.
$$

We note that, up to a constant, the loss in Eq. (1) can be expressed as the KL-divergence between the distribution ${\widehat{p}}^{\left( k\right) }$ corresponding to the snapshot model ${f}_{{\theta }^{\left( k\right) }}$ and the distribution $\widehat{p}$ corresponding to the current model ${f}_{\theta }$:

$$
{KL}\left\lbrack {{\widehat{p}}^{\left( k\right) }\parallel \widehat{p}}\right\rbrack = \int {\widehat{p}}^{\left( k\right) }\left( {x \mid y, t}\right) \log \frac{{\widehat{p}}^{\left( k\right) }\left( {x \mid y, t}\right) }{\widehat{p}\left( {x \mid y, t}\right) }{dx} \approx \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\log \frac{{\widehat{p}}^{\left( k\right) }\left( {{x}_{i} \mid y, t}\right) }{{\widehat{p}}\left( {{x}_{i} \mid y, t}\right) }
$$

$$
= \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\left( {\log {\widehat{p}}^{\left( k\right) }\left( {{x}_{i} \mid y, t}\right) \underset{GR}{\underbrace{-\log \widehat{p}\left( {{x}_{i} \mid y, t}\right) }}}\right) . \tag{2}
$$

## E. Analysis of Functional regularization

In this section, we look at the interpretation of functional regularization and its connection to regularization-based CL methods as well as to generative replay.

### E.1. Taylor expansion of the functional regularization

For simplicity, let the flow model be given by

$$
f : {\mathcal{R}}^{M} \times {\mathcal{R}}^{N} \rightarrow {\mathcal{R}}^{N},
$$

where ${\mathcal{R}}^{M}$ is the parameter space and ${\mathcal{R}}^{N}$ is the input space. Specifically, we have $f\left( {\theta , x}\right) = z$ for the parameters $\theta$ and input $x$. By abuse of notation, let ${f}^{-1}$ be the inverse flow, namely

$$
{f}^{-1}\left( {\theta , f\left( {\theta , x}\right) }\right) \mathrel{\text{:=}} x.
$$

Note that the only thing we did is make the dependency of the flow on its parameters $\theta$ explicit.

The regularization term that we rely on has the following form:

$$
{\mathcal{L}}_{FR} = \frac{1}{J}\mathop{\sum }\limits_{{z}_{j}}\left\lbrack {\left( {z}_{j} - f\left( {\theta }_{k + 1},{f}^{-1}\left( {\theta }_{k},{z}_{j}\right) \right) \right) }^{2}\right\rbrack , \tag{3}
$$

where ${\theta }_{k + 1}$ and ${\theta }_{k}$ are the parameters of the model after and before learning the $k$-th task.

For legibility, let ${x}_{j} = {f}^{-1}\left( {{\theta }_{k},{z}_{j}}\right)$ and ${\theta }_{k + 1} = {\theta }_{k} + \Delta$. We can re-write the loss as ${\left( {z}_{j} - f\left( {\theta }_{k + 1},{x}_{j}\right) \right) }^{2}$, for a given ${z}_{j}$. We will also drop the subscript ${}_{j}$ when not needed.
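As a concrete illustration of this penalty, the sketch below evaluates ${\mathcal{L}}_{FR}$ for a toy one-parameter linear flow $f(\theta, x) = \theta x$ of the kind used later in Appendix E.2 (the specific flow and numbers are our own illustration):

```python
import numpy as np

def f(theta, x):
    # Toy invertible flow: z = theta * x.
    return theta * x

def f_inv(theta, z):
    # Its inverse: x = z / theta.
    return z / theta

def fr_loss(theta_new, theta_old, z):
    """Eq. (3): pull z back through the snapshot flow, push it forward
    through the current flow, and penalize the squared displacement."""
    x = f_inv(theta_old, z)          # x_j = f^{-1}(theta_k, z_j)
    return np.mean((z - f(theta_new, x)) ** 2)
```

As expected from Eq. (3), the penalty vanishes when the current and snapshot parameters coincide, and grows as the current flow moves the pulled-back points away from their original latent locations.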
We start by taking a first-order Taylor expansion of $f\left( {{\theta }_{k + 1}, x}\right)$ around ${\theta }_{k}$:

$$
f\left( {{\theta }_{k + 1}, x}\right) = f\left( {{\theta }_{k} + \Delta , x}\right) \tag{4}
$$

$$
\approx f\left( {{\theta }_{k}, x}\right) + {\Delta }^{T}\nabla f{|}_{{\theta }_{k}, x}.
$$

We can now re-write the regularizer as:

$$
{\mathcal{L}}_{FR} = \frac{1}{J}\mathop{\sum }\limits_{{z}_{j}}\left( {\left( {z}_{j} - f\left( {\theta }_{k},{x}_{j}\right) - {\Delta }^{T}\nabla f{ \mid }_{{\theta }_{k},{x}_{j}}\right) }^{2}\right)
$$

$$
= \frac{1}{J}\mathop{\sum }\limits_{{z}_{j}}\left\lbrack {\left( {z}_{j} - f\left( {\theta }_{k},{f}^{-1}\left( {\theta }_{k},{z}_{j}\right) \right) - {\Delta }^{T}\nabla f{ \mid }_{{\theta }_{k},{x}_{j}}\right) }^{2}\right\rbrack
$$

$$
= \frac{1}{J}\mathop{\sum }\limits_{{x}_{j}}\left\lbrack {\left( {\Delta }^{T}\nabla f{|}_{{\theta }_{k},{x}_{j}}\right) }^{2}\right\rbrack \tag{5}
$$

$$
= \frac{1}{J}\mathop{\sum }\limits_{{x}_{j}}\left\lbrack {{\Delta }^{T}\nabla f{|}_{{\theta }_{k},{x}_{j}}{\left( \nabla f{|}_{{\theta }_{k},{x}_{j}}\right) }^{T}\Delta }\right\rbrack
$$

$$
= \frac{1}{J}{\Delta }^{T}\mathop{\sum }\limits_{{x}_{j}}\left\lbrack {\nabla f{|}_{{\theta }_{k},{x}_{j}}{\left( \nabla f{|}_{{\theta }_{k},{x}_{j}}\right) }^{T}}\right\rbrack \Delta
$$

$$
= \frac{1}{J}{\left( {\theta }_{k + 1} - {\theta }_{k}\right) }^{T}\mathop{\sum }\limits_{{x}_{j}}\left\lbrack {\nabla f{|}_{{\theta }_{k},{x}_{j}}{\left( \nabla f{|}_{{\theta }_{k},{x}_{j}}\right) }^{T}}\right\rbrack \left( {{\theta }_{k + 1} - {\theta }_{k}}\right) .
$$

From the equation above, we can see that the regularization term is minimized when $\Delta$ spans the directions of low eigenvalues of the matrix $\mathop{\sum }\limits_{{x}_{j}}\left\lbrack {\nabla f{|}_{{\theta }_{k},{x}_{j}}{\left( \nabla f{|}_{{\theta }_{k},{x}_{j}}\right) }^{T}}\right\rbrack$. Note that the updates on the current task can only change $\Delta$. This is similar to methods like EWC that restrict movement in directions of high curvature according to the Fisher information matrix on previous tasks; indeed, the Fisher metric takes the same form, as an expectation over observations ${x}_{j}$ of the outer product of gradients. In particular, this form can be interpreted (see for example (Pascanu & Bengio, 2013)) as the expected KL loss if we consider, for every ${x}_{j}$, isotropic Gaussians centered around $f\left( {{\theta }_{k},{x}_{j}}\right)$ and $f\left( {{\theta }_{k + 1},{x}_{j}}\right)$ respectively. Note however that this expected KL is not the same as the KL between the distributions ${p}_{X}^{\left( k\right) }$ and ${p}_{X}$.

### E.2. Relationship between functional regularization and generative replay

Functional regularization (FR) and generative replay (GR) look very similar at first glance. For both, we take a sample from the latent space and pass it in reverse through the old flow. The difference is that in FR, instead of replaying, we penalize the Euclidean distance between the old and new embeddings. In this subsection we characterize what this subtle but important difference implies. Let us start with the KL divergence between the old and new distributions, which is precisely the loss that GR enforces, and relate it to the FR penalty term.
+ +$$ +{\mathcal{L}}_{GR} = {KL}\left\lbrack {{p}^{\left( k\right) }\parallel p}\right\rbrack = \int {p}^{\left( k\right) }\left( {x \mid y, t}\right) \log \frac{{p}^{\left( k\right) }\left( {x \mid y, t}\right) }{p\left( {x \mid y, t}\right) }{dx} \tag{6} +$$ + +$$ +\approx \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\log \frac{{p}^{\left( k\right) }\left( {{x}_{i} \mid y, t}\right) }{p\left( {{x}_{i} \mid y, t}\right) } \tag{7} +$$ + +$$ += \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\left( {\log \frac{{p}_{Z}\left( {{f}_{{\theta }^{\left( k\right) }}\left( {x}_{i}\right) \mid y, t}\right) }{{p}_{Z}\left( {{f}_{\theta }\left( {x}_{i}\right) \mid y, t}\right) } + \underset{ \approx 0}{\underbrace{\log \frac{\left| \frac{\partial {f}_{{\theta }^{\left( k\right) }}}{\partial x}\right| }{\left| \frac{\partial {f}_{\theta }}{\partial x}\right| }}}}\right) \tag{8} +$$ + +$$ +\approx \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\left( {\log \frac{{p}_{Z}\left( {{f}_{{\theta }^{\left( k\right) }}\left( {{f}_{{\theta }^{k}}^{-1}\left( {z}_{i}\right) }\right) \mid y, t}\right) }{{p}_{Z}\left( {{f}_{\theta }\left( {{f}_{{\theta }^{k}}^{-1}\left( {z}_{i}\right) }\right) \mid y, t}\right) }}\right) \tag{9} +$$ + +$$ += \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\left( {\log \frac{{p}_{Z}\left( {{z}_{i} \mid y, t}\right) }{{p}_{Z}\left( {{f}_{\theta }\left( {{f}_{{\theta }^{k}}^{-1}\left( {z}_{i}\right) }\right) \mid y, t}\right) }}\right) \tag{10} +$$ + +$$ += \frac{1}{J}\mathop{\sum }\limits_{{i = 1}}^{J}\left( {\log {p}_{Z}\left( {{z}_{i} \mid y, t}\right) - \log {p}_{Z}\left( {{f}_{\theta }\left( {{f}_{{\theta }^{k}}^{-1}\left( {z}_{i}\right) }\right) \mid y, t}\right) }\right) \tag{11} +$$ + +$$ += \frac{1}{J \cdot {\sigma }^{2}}\mathop{\sum }\limits_{{i = 1}}^{J}\left( {{\begin{Vmatrix}{f}_{\theta }\left( {f}_{{\theta }^{k}}^{-1}\left( {z}_{i}\right) \right) - {\mu }_{y}^{t}\end{Vmatrix}}^{2} - {\begin{Vmatrix}{z}_{i} - {\mu }_{y}^{t}\end{Vmatrix}}^{2}}\right) 
\tag{12}
$$

$$
\leq \frac{1}{J \cdot {\sigma }^{2}}\mathop{\sum }\limits_{{i = 1}}^{J}{\begin{Vmatrix}{f}_{\theta }\left( {f}_{{\theta }^{k}}^{-1}\left( {z}_{i}\right) \right) - {z}_{i}\end{Vmatrix}}^{2} \tag{13}
$$

$$
= {\mathcal{L}}_{FR} \tag{14}
$$

Note that in Eq. (8) the log of the determinant ratio is assumed to be approximately zero, since the replay implicitly discourages changes in the flow mapping. The above derivation indicates a few key points on the relation between FR and GR:

- The FR loss provides an upper bound on the GR loss and is thus more restrictive; a low FR loss implies a low GR loss, but the reverse does not necessarily hold.

- Comparing the approximation of GR in Eq. (12) and FR in Eq. (13) indicates that FR pushes latent representations to the same points as they were before, while GR pushes the relative locations with respect to the Gaussian centers to be similar. In other words, GR is roughly a mean-normalized FR.

- FR does not rescale the loss according to the log-determinant; that is, it does not take into account how the flow contracts or expands. In contrast, this stretch is well captured by the GR loss.

- In return, FR enjoys a more stable loss function for optimization and convergence, especially when the determinant is close to 0.

- Similarly, the sample estimate of the gradient exhibits less variance in FR compared to GR, which is of practical value.

To further make sense of the relationship between FR and GR, we visualize their associated loss functions on a toy example, where we map a univariate Gaussian (with variance 1) to another univariate Gaussian. In particular, we can afford to parametrize the flow as

$$
f\left( {\theta , x}\right) = {\theta x},\;\theta \in {\mathcal{R}}^{ + },
$$

where $\theta > 0$ is a positive real number.
In particular, let us assume we are regularizing towards a previous version of the flow with parameter ${\theta }^{\left( k\right) } = \gamma$. In this case the functional regularization loss reduces to

$$
{\mathcal{L}}_{FR} = \mathop{\sum }\limits_{z}{\begin{Vmatrix}f\left( {\theta ,{f}^{-1}\left( {\gamma , z}\right) }\right) - z\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{z}{\begin{Vmatrix}\frac{\theta }{\gamma }z - z\end{Vmatrix}}^{2} \approx {\left( \theta - \gamma \right) }^{2}.
$$

In contrast, the loss for generative replay has the form

$$
{\mathcal{L}}_{GR} \approx \frac{1}{2}{\begin{Vmatrix}\frac{\theta }{\gamma }z\end{Vmatrix}}^{2} - \log \left( \left| \theta \right| \right) .
$$

Figure 5 shows these two losses as a function of $\theta$. Note the degeneracy of the GR loss around $\theta = 0$, and how the FR loss becomes more tractable and well behaved by leaving the flow contraction/expansion out of consideration.

![01963e31-aecd-7347-8fac-7a60d1329088_12_451_618_841_524_0.jpg](images/01963e31-aecd-7347-8fac-7a60d1329088_12_451_618_841_524_0.jpg)

Figure 5. GR loss vs. FR loss

## F. Related Work

Continual Learning Following Lange et al. (2019), we review the related methods to alleviate catastrophic forgetting in continual learning in three different but overlapping categories. Replay-based methods store and rehearse a memory of the examples or knowledge learned so far (Rebuffi et al., 2016; Lopez-Paz & Ranzato, 2017; Shin et al., 2017; Riemer et al., 2018; Rios & Itti, 2018; Zhang et al., 2019; Chaudhry et al., 2020; Balaji et al., 2020). Regularization-based methods constrain the parameter updates while learning new tasks to preserve previous knowledge.
They include many popular and new methods such as EWC (Kirkpatrick et al., 2017), function-space regularization (Titsias et al., 2019), feature regularization and feature replay methods (Pomponi et al., 2020a; van de Ven et al., 2020; Pomponi et al., 2020b), and orthogonality-based regularized replay methods such as OGD (Farajtabar et al., 2020), AGEM (Chaudhry et al., 2018) and GPM (Saha et al., 2021). A few works also look at continual learning from the perspectives of the loss landscape (Yin et al., 2020) and the dynamics of optimization (Mirzadeh et al., 2020; Mirzadeh et al., 2020b). Modularity-based methods allocate different subsets of the parameters to each task (Rusu et al., 2016; Yoon et al., 2018; Jerfel et al., 2019; Li et al., 2019; Wortsman et al., 2020; Mirzadeh et al., 2020a).

Task-Agnostic CL Recently, several methods have been developed for the task-agnostic CL setting. Zeno et al. (2018) and He et al. (2019) use the online variational Bayes framework to avoid the need for explicit task identities. Aljundi et al. (2019), an early advocate of task-free CL, detect task changes as peaks in the loss values following a plateau. Jerfel et al. (2019) infer the latent tasks within a Dirichlet process mixture model. Ye & Bors (2020) embed the information associated with different domains into several clusters. Mundt et al. (2019) propose a method based on variational Bayesian inference that combines a joint probabilistic encoder with a generative model and a linear classifier to distinguish unseen unknown data from trained known tasks. Achille et al. (2018) employ a variational autoencoder with shared embeddings which detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while encouraging the learned representations to be disentangled. Buzzega et al. (2020) combine reservoir sampling data replay and model distillation for training models without knowing task boundaries.
The two works most closely related to HCL are CURL (Rao et al., 2019) and CN-DPMM (Lee et al., 2020). CN-DPMM uses a Dirichlet process mixture model to detect task changes; it then uses a separate modularity-based method to perform the classification. CURL uses an end-to-end framework for detecting tasks and learning on them. However, CURL is primarily developed for unsupervised representation learning and cannot be trivially extended to task-agnostic supervised continual learning; in the experiments, we show that HCL achieves superior performance to a supervised version of CURL. Both CN-DPMM and CURL use a variational auto-encoder (Kingma & Welling, 2013) to model the data distribution. HCL, on the other hand, uses a single probabilistic hybrid model based on a normalizing flow to simultaneously learn the data distribution, detect task changes and perform classification.

Out-of-Distribution Detection In the task-agnostic setting, HCL needs to detect data coming from new tasks, which can be viewed as out-of-distribution (OOD) detection (see e.g. Bulusu et al., 2020, for a recent survey). In particular, HCL detects task changes by measuring the typicality of the model's statistics, which is similar to recently proposed state-of-the-art OOD detection methods by Nalisnick et al. (2019c) and Morningstar et al. (2020). In some of our experiments, we apply HCL to embeddings extracted by a deep neural network; Zhang et al. (2020) develop a related method for OOD detection, where a flow-based generative model approximates the density of intermediate representations of the data. Kirichenko et al. (2020) also show that normalizing flows can detect OOD image data more successfully if applied to embeddings.

Hybrid Models HCL is a hybrid generative-discriminative model that simultaneously learns to generate realistic samples of the data and solve the discriminative classification problem.
Architecturally, HCL is most closely related to the semi-supervised flow-based models of Izmailov et al. (2020) and Atanov et al. (2019). These works do not consider continual learning and focus on a very different problem setting. Nalisnick et al. (2019b) and Kingma et al. (2014) provide another two examples of hybrid models for semi-supervised learning. Zhang et al. (2020) develop a hybrid model for OOD detection.

Normalizing flows Normalizing flows are flexible deep generative models with tractable likelihood, based on invertible neural networks. Flows model the data distribution ${p}_{X}$ as a transformation ${\widehat{p}}_{X} = {f}_{\theta }^{-1}\left( {\widehat{p}}_{Z}\right)$, where ${\widehat{p}}_{Z}$ is a fixed density in the latent space (typically a Gaussian), and ${f}_{\theta } : \mathcal{X} \rightarrow \mathcal{Z}$ is an invertible neural network with parameters $\theta$ that maps the input space $\mathcal{X}$ to the latent space $\mathcal{Z}$ of the same dimension. We can then compute the density ${\widehat{p}}_{X}$ exactly using the change of variables formula: ${\widehat{p}}_{X}\left( x\right) = {\widehat{p}}_{Z}\left( {{f}_{\theta }\left( x\right) }\right) \cdot \left| \frac{\partial {f}_{\theta }}{\partial x}\right|$, where $\left| \frac{\partial {f}_{\theta }}{\partial x}\right|$ is the determinant of the Jacobian of ${f}_{\theta }$ at $x$. The architecture of the flow networks is designed to ensure cheap computation of the inverse ${f}_{\theta }^{-1}$ and the Jacobian determinant $\left| \frac{\partial {f}_{\theta }}{\partial x}\right|$. In HCL, we use the RealNVP (Dinh et al., 2017) and Glow (Kingma & Dhariwal, 2018) flow architectures due to their simplicity and strong performance. Other flow architectures include invertible Residual Networks (Behrmann et al., 2019), residual flows (Chen et al., 2019), FFJORD (Grathwohl et al., 2018), invertible CNNs (Finzi et al., 2019) and others.
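The change-of-variables formula can be checked on a one-dimensional affine flow (a toy illustration of ours, not one of the architectures above): with $f(x) = (x - b)/a$ mapping data to a standard-normal latent, ${\widehat{p}}_{X}(x) = {\widehat{p}}_{Z}(f(x)) \cdot |df/dx|$ recovers the $\mathcal{N}(b, a^2)$ density exactly.

```python
import math

def flow_density(x, a, b):
    """Density under the flow f(x) = (x - b) / a with a standard-normal
    latent: p_X(x) = p_Z(f(x)) * |df/dx|, where |df/dx| = 1 / |a|."""
    z = (x - b) / a
    p_z = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return p_z / abs(a)

def gaussian_density(x, mu, sigma):
    # Analytic N(mu, sigma^2) density for comparison.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```

For any `x`, `flow_density(x, a, b)` agrees with `gaussian_density(x, b, abs(a))`, which is the one-dimensional instance of the formula above.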
For a more detailed discussion of normalizing flows, please see the recent survey by Papamakarios et al. (2019).

Normalizing flows have a number of key advantages over other deep generative models that are essential for HCL. First, unlike Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), flows provide a tractable likelihood that can be used for task identification together with other model statistics (Section 3.2). Second, likelihood-based models can be used for both generation and classification, unlike GANs. Moreover, flows can produce samples of higher fidelity than Variational Autoencoders (VAEs) (Kingma & Welling, 2013), and much faster than auto-regressive models (Oord et al., 2016), which is important for alleviating catastrophic forgetting (Section 3.3). Further, we demonstrate that our proposed HCL outperforms CURL (Rao et al., 2019), a VAE-based CL approach.

## G. Task identification

Can flows detect task changes? Nalisnick et al. (2019a) show that deep generative models sometimes fail to detect out-of-distribution data using likelihood; e.g., when trained on the FashionMNIST dataset, both normalizing flows and VAEs assign higher likelihood to out-of-distribution MNIST data. However, they consider unsupervised OOD detection, while in our case label information is available, and for each task HCL models the class-conditional distribution ${p}_{t}\left( {x \mid y}\right)$. Intuitively, the model will not be able to classify unknown task samples correctly when the data distribution shifts, so the task-conditional likelihood $\widehat{p}\left( {B \mid t}\right) = \widehat{p}\left( {y \mid x, t}\right) \widehat{p}\left( {x \mid t}\right)$ of a batch $B$ which comes from a new task ${t}^{\prime }$ should be low.
Moreover, motivated by recent advances in OOD detection with generative models (Nalisnick et al., 2019c; Morningstar et al., 2020), we propose to detect task changes using a two-sided test on multiple statistics of HCL, and demonstrate that HCL is able to correctly identify task changes not only in standard CL benchmarks, but also in the FashionMNIST-MNIST continual learning problem, which is a more challenging scenario, as identified by Nalisnick et al. (2019a). Note that prior works in continual learning which are based on a VAE model (Rao et al. (2019) and Lee et al. (2020)) rely on the VAE's likelihood to determine task change points, which may not be reliable in challenging settings (Nalisnick et al., 2019a).

The proposed task detection based on measuring the typicality of the model's statistics demonstrated strong performance in all benchmark experiments, detecting all existing task changes. In some cases (one of the 3 runs of HCL-FR on CIFAR-10 and CIFAR-100, and HCL-GR on split MNIST) the model identified an extra task change which did not actually happen. In these cases, the model uses multiple clusters in the latent space for modeling the same class in the same task. In practice, this did not significantly hurt the final accuracy. For the runs where spurious task changes were detected, we adjusted the computation of the overall accuracy metric by re-labelling the task identities accordingly. For example, if during training on ${T}_{1}$ the model identifies an extra task change and then identifies the real task change to ${T}_{2}$, we consider all clusters added during training on ${T}_{1}$ to belong to task 1 and the clusters added at the identified task change to ${T}_{2}$ to belong to task 2.

Robustness In addition to standard CL benchmark tasks, we test HCL on FashionMNIST-MNIST and MNIST-FashionMNIST domain-incremental learning classification. Although this dataset pair was identified as a failure mode for OOD detection by Nalisnick et al.
(2019a), HCL's task detection correctly identified task changes in all runs.

Task recurrence Next, we test the ability of HCL to not only detect the task boundaries but also infer the task identities in the presence of recurring tasks. In particular, we consider the Split CIFAR-10 embeddings dataset with the following sequence of tasks: $\left\lbrack {{T}_{1},{T}_{2},{T}_{3},{T}_{1},{T}_{4}}\right\rbrack$, where ${T}_{i}$ is the binary classification task between the original classes ${2i} - 2$ and ${2i} - 1$. The task ${T}_{1}$ appears twice in the sequence, and the model has to identify it as an existing rather than a new task. Both HCL-FR and HCL-GR were able to successfully identify the recurring task, achieving ${95.06} \pm {0.25}\%$ and ${91.27} \pm {0.82}\%$ final average accuracy respectively.

Table 3. Results of the experiments on the split CIFAR-10 embeddings dataset extracted using an EfficientNet model pretrained on ImageNet. The dataset with 10 classes is split into 5 binary classification tasks. The methods used are MTL (multitask learning), Adam (regular training without alleviating forgetting), ER (standard data buffer replay with the capacity of 1000 samples per task), CURL (Rao et al., 2019), HCL-GR (generative replay), HCL-FR (functional regularization), as well as task-agnostic versions of HCL-FR and HCL-GR.

15 EPOCHS PER TASK
| TASK # | 1 | 2 | 3 | 4 | 5 | Acc AVG | FORGET AVG | FULL Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTL | 98.87 ± 0.08 | 95.90 ± 0.29 | 97.48 ± 0.12 | 97.40 ± 0.25 | 98.82 ± 0.13 | 97.69 ± 0.05 | 0.75 ± 0.14 | 93.61 ± 0.14 |
| ADAM | 90.73 ± 1.05 | 58.97 ± 4.88 | 54.90 ± 3.82 | 81.70 ± 3.76 | 99.25 ± 0.04 | 77.11 ± 1.33 | 27.38 ± 1.63 | 19.85 ± 0.01 |
| ER | 95.75 ± 0.43 | 90.60 ± 1.06 | 94.62 ± 0.28 | 98.57 ± 0.15 | 99.20 ± 0.11 | 95.75 ± 0.35 | 3.78 ± 0.46 | 88.27 ± 0.52 |
| CURL | 88.59 ± 3.85 | 73.48 ± 6.00 | 84.46 ± 2.82 | 95.98 ± 0.64 | 97.65 ± 0.41 | 88.03 ± 2.15 | 12.05 ± 2.79 | - |
| HCL-FR | 96.95 ± 0.44 | 93.22 ± 0.59 | 94.58 ± 0.19 | 98.50 ± 0.08 | 98.97 ± 0.19 | 96.44 ± 0.05 | 2.47 ± 0.09 | 90.12 ± 0.35 |
| HCL-GR | 93.98 ± 0.27 | 85.43 ± 0.25 | 93.28 ± 0.92 | 98.63 ± 0.16 | 99.20 ± 0.08 | 94.11 ± 0.21 | 5.86 ± 0.24 | 80.10 ± 1.21 |
| HCL-FR (TA) | 96.63 ± 0.33 | 92.18 ± 1.33 | 94.70 ± 0.19 | 98.73 ± 0.25 | 98.98 ± 0.10 | 96.25 ± 0.17 | 2.86 ± 0.28 | 89.44 ± 0.80 |
| HCL-GR (TA) | 95.47 ± 0.73 | 84.88 ± 1.08 | 92.40 ± 0.16 | 98.32 ± 0.22 | 99.23 ± 0.05 | 94.06 ± 0.25 | 6.05 ± 0.35 | 80.29 ± 0.81 |
+ +846 + +849 + +850 + +851 + +852 + +853 + +855 + +856 + +SINGLE-PASS (1 EPOCH PER TASK) + +
| TASK # | 1 | 2 | 3 | 4 | 5 | Acc AVG | FORGET AVG | FULL Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTL | 98.92 ± 0.10 | 96.67 ± 0.17 | 97.25 ± 0.00 | 97.20 ± 0.23 | 98.18 ± 0.26 | 97.64 ± 0.05 | -0.05 ± 0.13 | 93.69 ± 0.09 |
| ADAM | 92.22 ± 1.20 | 53.35 ± 0.53 | 62.53 ± 3.52 | 78.92 ± 6.34 | 99.13 ± 0.15 | 77.23 ± 1.27 | 26.87 ± 1.54 | 19.83 ± 0.03 |
| ER | 98.32 ± 0.12 | 94.08 ± 0.71 | 97.12 ± 0.14 | 97.73 ± 0.08 | 98.55 ± 0.11 | 97.16 ± 0.09 | 1.32 ± 0.17 | 91.85 ± 0.03 |
| CURL | 95.54 ± 1.16 | 80.97 ± 4.83 | 80.55 ± 7.16 | 94.61 ± 1.99 | 96.25 ± 0.41 | 89.58 ± 0.92 | 8.25 ± 1.10 | - |
| HCL-FR | 95.27 ± 0.46 | 88.03 ± 0.40 | 93.35 ± 0.73 | 98.28 ± 0.31 | 98.92 ± 0.06 | 94.77 ± 0.09 | 4.68 ± 0.13 | 85.94 ± 0.01 |
| HCL-GR | 93.68 ± 0.55 | 85.82 ± 0.27 | 93.28 ± 0.41 | 98.52 ± 0.17 | 99.10 ± 0.04 | 94.08 ± 0.08 | 5.73 ± 0.06 | 82.85 ± 0.40 |
| HCL-FR (TA) | 95.35 ± 0.16 | 87.12 ± 0.81 | 93.40 ± 0.32 | 98.27 ± 0.16 | 98.90 ± 0.08 | 94.61 ± 0.24 | 4.85 ± 0.32 | 85.72 ± 0.37 |
| HCL-GR (TA) | 93.37 ± 0.27 | 82.28 ± 1.94 | 92.33 ± 0.65 | 98.23 ± 0.17 | 99.17 ± 0.18 | 93.08 ± 0.54 | 7.05 ± 0.56 | 79.93 ± 0.84 |
+ +857 + +858 + +859 + +860 + +861 + +862 + +863 + +864 + +865 + +866 + +867 + +868 + +869 + +870 + +871 + +872 + +873 + +874 + +875 + +876 + +877 + +878 + +879 + +880 + +881 + +Table 4. Results of the experiments on split CIFAR-100 embeddings dataset extracted using EfficientNet model pretrained on ImageNet. The dataset with 100 classes is split into ten 10-way classification tasks. The methods used are MTL (multitask learning) setting, Adam (regular training without alleviating forgetting), ER (standard data buffer replay with the capacity of 1000 samples per task), CURL (Rao et al., 2019), HCL-GR (generative replay), HCL-FR (functional regularization), as well as task-agnostic versions of HCL-FR and HCL-GR. 15 EPOCHS PER TASK + +
| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Acc avg | Forget avg | Full acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTL | 78.77 ± 0.39 | 73.30 ± 1.27 | 76.53 ± 1.30 | 72.33 ± 0.25 | 76.87 ± 0.78 | 73.63 ± 2.36 | 77.63 ± 1.60 | 78.73 ± 0.12 | 83.53 ± 0.80 | 71.87 ± 0.97 | 76.32 ± 0.52 | 9.37 ± 0.34 | 74.11 ± 0.55 |
| Adam | 7.77 ± 0.25 | 1.00 ± 0.29 | 4.63 ± 0.60 | 3.80 ± 1.26 | 3.33 ± 1.15 | 10.83 ± 0.46 | 17.60 ± 1.34 | 5.07 ± 0.05 | 17.10 ± 0.80 | 96.83 ± 0.25 | 16.80 ± 0.27 | 86.49 ± 0.33 | 9.84 ± 0.04 |
| ER | 65.70 ± 1.23 | 63.57 ± 1.60 | 68.83 ± 0.78 | 62.97 ± 1.32 | 71.93 ± 0.76 | 71.73 ± 0.24 | 75.20 ± 0.71 | 76.40 ± 1.08 | 86.20 ± 0.57 | 69.37 ± 1.72 | 71.19 ± 0.22 | 17.74 ± 0.32 | 68.46 ± 0.27 |
| CURL | 11.62 ± 2.31 | 3.34 ± 1.07 | 4.34 ± 2.01 | 4.26 ± 2.02 | 6.72 ± 3.23 | 18.76 ± 1.25 | 22.28 ± 2.41 | 31.66 ± 2.02 | 55.44 ± 1.19 | 82.68 ± 0.52 | 24.11 ± 0.72 | 65.24 ± 0.68 | - |
| HCL-FR | 54.27 ± 1.68 | 51.00 ± 1.87 | 59.10 ± 1.85 | 50.30 ± 1.82 | 56.10 ± 0.29 | 57.10 ± 1.08 | 67.67 ± 2.05 | 68.33 ± 0.87 | 81.20 ± 0.75 | 93.47 ± 0.21 | 63.85 ± 0.80 | 31.89 ± 0.99 | 60.58 ± 0.78 |
| HCL-GR | 44.53 ± 1.18 | 41.43 ± 1.75 | 56.57 ± 0.25 | 47.53 ± 0.59 | 52.00 ± 1.77 | 53.87 ± 1.24 | 68.93 ± 1.60 | 68.10 ± 0.43 | 82.93 ± 0.24 | 94.67 ± 0.19 | 61.06 ± 0.43 | 36.10 ± 0.56 | 57.39 ± 0.60 |
| HCL-FR (TA) | 55.03 ± 0.57 | 51.13 ± 1.89 | 55.33 ± 4.23 | 48.17 ± 0.63 | 56.50 ± 1.56 | 56.53 ± 1.02 | 67.87 ± 0.66 | 65.97 ± 2.59 | 80.17 ± 1.36 | 93.93 ± 0.33 | 63.06 ± 0.64 | 32.52 ± 0.88 | 59.66 ± 0.70 |
| HCL-GR (TA) | 42.10 ± 0.75 | 35.17 ± 2.32 | 50.37 ± 0.86 | 40.77 ± 0.88 | 46.13 ± 0.63 | 45.83 ± 1.01 | 61.87 ± 0.87 | 59.33 ± 0.48 | 78.47 ± 0.68 | 95.30 ± 0.22 | 55.53 ± 0.23 | 42.46 ± 0.25 | 51.64 ± 0.14 |
SINGLE-PASS (1 EPOCH PER TASK)
| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Acc avg | Forget avg | Full acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTL | 86.23 ± 0.19 | 81.43 ± 0.54 | 83.77 ± 0.73 | 79.80 ± 0.36 | 78.53 ± 0.37 | 72.87 ± 1.64 | 73.47 ± 0.54 | 63.13 ± 1.54 | 60.80 ± 0.70 | 35.93 ± 2.52 | 71.60 ± 0.38 | -12.18 ± 0.13 | 68.80 ± 0.52 |
| Adam | 8.47 ± 1.15 | 1.13 ± 0.42 | 5.03 ± 0.37 | 2.30 ± 0.65 | 2.90 ± 0.45 | 9.83 ± 0.66 | 19.07 ± 0.52 | 12.00 ± 1.28 | 22.33 ± 2.18 | 95.87 ± 0.17 | 17.89 ± 0.18 | 83.64 ± 0.28 | 11.29 ± 0.31 |
| ER | 78.23 ± 0.79 | 76.77 ± 1.22 | 80.40 ± 0.42 | 80.10 ± 0.45 | 80.63 ± 0.26 | 74.03 ± 1.17 | 72.07 ± 0.12 | 62.87 ± 1.22 | 58.27 ± 1.70 | 17.97 ± 0.76 | 68.13 ± 0.36 | -27.10 ± 0.25 | 65.13 ± 0.25 |
| CURL | 12.82 ± 4.73 | 4.94 ± 1.73 | 9.88 ± 2.95 | 9.38 ± 2.89 | 11.52 ± 2.67 | 14.44 ± 4.05 | 22.44 ± 3.18 | 15.64 ± 1.87 | 24.38 ± 6.32 | 59.98 ± 9.29 | 18.54 ± 1.54 | 44.00 ± 2.66 | - |
| HCL-FR | 50.30 ± 1.16 | 13.80 ± 2.27 | 48.37 ± 1.05 | 38.63 ± 0.57 | 37.90 ± 2.14 | 39.77 ± 0.25 | 51.47 ± 5.70 | 53.33 ± 2.26 | 73.77 ± 0.31 | 94.03 ± 0.76 | 50.14 ± 0.41 | 43.31 ± 0.17 | 45.76 ± 0.34 |
| HCL-GR | 56.10 ± 1.22 | 42.67 ± 0.94 | 58.93 ± 1.84 | 39.63 ± 1.68 | 41.53 ± 2.49 | 36.13 ± 1.60 | 45.83 ± 4.90 | 46.00 ± 2.25 | 51.40 ± 6.68 | 90.87 ± 1.40 | 50.91 ± 0.82 | 40.40 ± 0.85 | 46.10 ± 0.99 |
| HCL-FR (TA) | 49.50 ± 2.40 | 16.27 ± 2.23 | 48.73 ± 2.19 | 37.73 ± 1.14 | 38.87 ± 1.90 | 37.80 ± 3.19 | 51.77 ± 1.65 | 50.53 ± 2.09 | 75.07 ± 1.61 | 93.63 ± 0.25 | 49.99 ± 0.86 | 43.04 ± 0.58 | 45.64 ± 0.97 |
| HCL-GR (TA) | 35.20 ± 0.91 | 30.20 ± 1.99 | 41.17 ± 1.92 | 27.30 ± 5.62 | 35.57 ± 1.14 | 33.13 ± 2.83 | 43.70 ± 0.43 | 45.23 ± 1.27 | 68.03 ± 1.00 | 95.50 ± 0.43 | 45.50 ± 0.46 | 52.08 ± 0.57 | 40.87 ± 0.57 |
+ diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZbSeZKdqNkm/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZbSeZKdqNkm/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..ebb97de0c9ca60729e4d2fba9970ff7718f55110 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/ZbSeZKdqNkm/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,117 @@ +§ TASK-AGNOSTIC CONTINUAL LEARNING WITH HYBRID PROBABILISTIC MODELS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Learning new tasks continuously without forgetting on a constantly changing data distribution is essential for real-world problems but extremely challenging for modern deep learning. In this work we propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification. We model the distribution of each task and each class with a normalizing flow. The flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting, all leveraging the invertibility and exact likelihood which are uniquely enabled by the normalizing flow model. We use the generative capabilities of the flow to avoid catastrophic forgetting through generative replay and a novel functional regularization technique. For task identification, we use state-of-the-art anomaly detection techniques based on measuring the typicality of the model's statistics. We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST. + +§ 1. INTRODUCTION + +For humans, it is natural to learn new skills sequentially without forgetting the skills that were learned previously. 
Deep learning models, on the other hand, suffer from catastrophic forgetting: when presented with a sequence of tasks, deep neural networks can successfully learn the new tasks, but the performance on the old tasks degrades (McCloskey & Cohen, 1989; French, 1999; Kirkpatrick et al., 2017; Parisi et al., 2019; Hadsell et al., 2020). Being able to learn sequentially without forgetting is crucial for numerous applications of deep learning. In real life, data often arrives as a continuous stream, and the data distribution is constantly changing. For example, consider a neural network that might be used for object detection in self-driving cars. The model should continuously adapt to different environments, e.g. weather and lighting. While the network learns to work under new conditions, it should also avoid forgetting. For example, once it adapts to driving during the winter, it should still work well in other seasons. This example illustrates the domain-incremental continual learning setting: the distribution of the inputs to the model evolves over time while the target space stays the same. Moreover, in this scenario, the model should be task-agnostic: it has no information on the task boundaries, i.e., the timestamps when the input distribution changes. + +Motivated by the task-agnostic domain-incremental continual learning setting, we propose Hybrid Continual Learning (HCL) - an approach based on simultaneous generative and discriminative modeling of the data with normalizing flows. Fig. 1 schematically demonstrates the framework. The contributions of our work are as follows: + + * We propose HCL, a normalizing flow-based approach to task-agnostic continual learning. We employ two methods to alleviate catastrophic forgetting: generative replay and a novel functional regularization technique. 
We provide an empirical comparison and theoretical analysis of the two techniques, showing that functional regularization constrains the model more strongly than generative replay and generally leads to better performance.

 * We conduct experiments on a range of image classification continual learning problems on split MNIST, split CIFAR, SVHN-MNIST and MNIST-SVHN datasets. HCL achieves strong performance in all settings.

 * We show that HCL can successfully detect task boundaries and identify new as well as recurring tasks by measuring the typicality of the model's statistics.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

Figure 1. An illustration of the proposed Hybrid Continual Learning (HCL) framework. HCL models the distribution of each class in each task as a latent Gaussian distribution transformed by a normalizing flow. We show the Gaussian mixtures corresponding to the two tasks ${t}_{1}$ and ${t}_{2}$ in the latent space on the left, and the corresponding data distributions on the right. If a new task $t = 3$ appears, HCL identifies it using the typicality of the flow's statistics, and initializes the Gaussian mixture for a new task.

§ 2. BACKGROUND AND NOTATION

Continual learning (CL) We assume that a continual learning model ${g}_{\theta } : \mathcal{X} \rightarrow \mathcal{Y}$ is trained on a sequence of $\tau$ supervised tasks: ${T}_{{t}_{1}},{T}_{{t}_{2}},\ldots ,{T}_{{t}_{\tau }}$ . Each task ${T}_{i} = {\left\{ \left( {x}_{j}^{i},{y}_{j}^{i}\right) \right\} }_{j = 1}^{{N}_{i}}$ has the input space ${\mathcal{X}}^{i}$ , the label space ${\mathcal{Y}}^{i}$ , and the corresponding data-generating distribution ${p}_{i}\left( {x,y}\right)$ .
The number of tasks $\tau$ is not known in advance, and while training on a task ${T}_{i}$ the model does not have access to the data from previous tasks ${T}_{1},\ldots ,{T}_{i - 1}$ or future tasks ${T}_{i + 1},\ldots ,{T}_{\tau }$ . The objective of a CL model is to minimize $\mathop{\sum }\limits_{{i = 1}}^{\tau }{E}_{x,y \sim {p}_{i}\left( {\cdot , \cdot }\right) }l\left( {{g}_{\theta }\left( x\right) ,y}\right)$ for some risk function $l\left( {\cdot , \cdot }\right)$ , and thus generalize well on all tasks after training. In this work, we focus on classification, and in particular the domain-incremental learning setting with ${\mathcal{Y}}^{i} = \{ 1,\ldots ,K\}$ for all tasks $i$ . For more on CL settings, see (van de Ven & Tolias, 2019) and (Hsu et al., 2018).

Task-agnostic CL In most continual learning algorithms, it is crucial to know the task boundaries - the moments when the training task changes. At each iteration $j$ of training, we receive a tuple $\left( {x\left( j\right) ,y\left( j\right) ,t\left( j\right) }\right)$ , where $x\left( j\right)$ and $y\left( j\right)$ are a batch of data and the corresponding labels, and $t\left( j\right)$ is the index of the current task. In this work, we also consider the task-agnostic setting, where the task index $t\left( j\right)$ is not provided and the algorithm has to infer it from data.

§ 3. HYBRID MODEL FOR CONTINUAL LEARNING

§ 3.1. MODELING THE DATA DISTRIBUTION

HCL approximates the data distribution with a single normalizing flow, with each class-task pair $(y, t)$ corresponding to a unique Gaussian in the latent space (see Fig. 1 for an illustration).
More precisely, we model the joint distribution ${p}_{t}\left( {x,y}\right)$ of the data $x$ and the class label $y$ conditioned on a task $t$ as ${p}_{t}\left( {x,y}\right) \approx \widehat{p}\left( {x,y \mid t}\right) = {\widehat{p}}_{X}\left( {x \mid y,t}\right) \widehat{p}\left( {y \mid t}\right)$ , where ${\widehat{p}}_{X}\left( {x \mid y,t}\right)$ is modeled by a normalizing flow ${f}_{\theta }$ with a base distribution ${\widehat{p}}_{Z} = \mathcal{N}\left( {{\mu }_{y,t},I}\right) : {\widehat{p}}_{X}\left( {x \mid y,t}\right) =$ ${f}_{\theta }^{-1}\left( {\mathcal{N}\left( {{\mu }_{y,t},I}\right) }\right)$ . Here ${\mu }_{y,t}$ is the mean of the latent distribution corresponding to the class $y$ and task $t$ . We assume that $\widehat{p}\left( {y \mid t}\right)$ is a uniform distribution over the classes for each task: $\widehat{p}\left( {y \mid t}\right) = \frac{1}{K}$ . + +We train the model by maximum likelihood: for each mini-batch of data $\left( {x\left( j\right) ,y\left( j\right) ,t\left( j\right) }\right)$ we compute the likelihood using the change of variable formula and take a gradient step with respect to the parameters $\theta$ of the flow. In the task-agnostic setting, we have no access to the task index ${t}_{j}$ and instead infer it from data (see Section 3.2). At test-time, HCL classifies an input $x$ to the class $\widehat{y}$ using the Bayes rule: $\widehat{p}\left( {y \mid x}\right) \propto \widehat{p}\left( {x \mid y}\right)$ , so $\widehat{y} = \arg \mathop{\max }\limits_{y}\mathop{\sum }\limits_{{t = 1}}^{\tau }{\widehat{p}}_{X}\left( {x \mid y,t}\right)$ . Notice that we do not have access to the task index at test time, so we marginalize the predictions over all tasks $t$ . + +§ 3.2. TASK IDENTIFICATION + +In the task-agnostic scenario, the task identity $t$ is not given during training and has to be inferred from the data. 
The model starts with $K$ Gaussians with means ${\left\{ {\mu }_{y,{t}_{1}}\right\} }_{y = 1}^{K}$ in the latent space corresponding to the classes of the first task. We assume that the model first observes batches of data ${B}_{1},\ldots ,{B}_{m}$ from the task ${T}_{{t}_{1}}$ , where each $B = {\left\{ \left( {x}_{j},{y}_{j}\right) \right\} }_{j = 1}^{b}$ . Then, at some unknown point in time $m + 1$ , it starts observing data batches ${B}_{m + 1},{B}_{m + 2},\ldots$ coming from the next task ${T}_{{t}_{2}}$ . The model has to detect the task boundary and initialize Gaussian mixture components in the latent space which will correspond to this new task, ${\left\{ \mathcal{N}\left( {\mu }_{y,{t}_{2}},I\right) \right\} }_{y = 1}^{K}$ . Moreover, in our set-up some of the tasks can be recurring. Thus, after observing tasks ${T}_{{t}_{1}},\ldots ,{T}_{{t}_{k}}$ and detecting the change point from the task ${T}_{{t}_{k}}$ , the model has to identify whether the incoming batch of data comes from a completely new task ${T}_{{t}_{k + 1}}$ (and add new Gaussians for this task in the latent space) or from one of the previous tasks ${T}_{{t}_{1}},\ldots ,{T}_{{t}_{k - 1}}$ .

Similarly to prior work on anomaly detection (Nalisnick et al., 2019c; Morningstar et al., 2020), we detect task changes by measuring the typicality of the HCL model's statistics. Following Morningstar et al. (2020), we can use the following statistics on data batches $B$ : the log-likelihood ${S}_{1}\left( {B,t}\right) = \mathop{\sum }\limits_{{\left( {{x}_{j},{y}_{j}}\right) \in B}}\log {\widehat{p}}_{X}\left( {{x}_{j} \mid {y}_{j},t}\right)$ , the log-likelihood of the latent variable ${S}_{2}\left( {B,t}\right) = \mathop{\sum }\limits_{{\left( {{x}_{j},{y}_{j}}\right) \in B}}\log {\widehat{p}}_{Z}\left( {f\left( {x}_{j}\right) \mid {y}_{j},t}\right)$ , and the log-determinant of the Jacobian ${S}_{3}\left( {B,t}\right) = {S}_{1}\left( {B,t}\right) - {S}_{2}\left( {B,t}\right)$ .
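As a concrete illustration (not the authors' code), the three statistics are tied together by the change-of-variables identity $\log {\widehat{p}}_{X}\left( x \mid y,t\right) = \log {\widehat{p}}_{Z}\left( f\left( x\right) \mid y,t\right) + \log \left| \det \partial f/\partial x\right|$; the following sketch uses a toy elementwise affine map in place of the flow ${f}_{\theta }$, and all names are hypothetical:

```python
import numpy as np

# Toy stand-in for the flow f_theta: an elementwise affine map z = a * x + b.
# (In HCL this is a RealNVP/Glow network; all names here are illustrative.)
a, b = np.array([2.0, 0.8]), np.array([0.1, -0.3])
f = lambda x: a * x + b
log_det_jac = np.sum(np.log(np.abs(a)))  # log|det df/dx|, constant for affine maps

def log_p_z(z, mu):
    """Log-density of N(mu, I) evaluated at z."""
    d = z.shape[-1]
    return -0.5 * np.sum((z - mu) ** 2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)

def batch_statistics(x_batch, mu_yt):
    """S1 (data log-lik), S2 (latent log-lik) and S3 = S1 - S2 (total log-det)."""
    z = f(x_batch)
    s2 = log_p_z(z, mu_yt).sum()
    s1 = s2 + x_batch.shape[0] * log_det_jac  # change of variables
    return s1, s2, s1 - s2

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 2))                  # one batch of 2-d "data"
s1, s2, s3 = batch_statistics(x, mu_yt=np.array([1.0, -1.0]))
assert np.isclose(s3, 32 * log_det_jac)       # S3 is the batch log-det term
```

With a real flow, $S_2$ and the log-determinant come directly from the model's forward pass, and $S_1$ is their sum by the identity above.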
For each task $t$ , we keep track of the mean ${\mu }_{S}^{t}$ and the standard deviation ${\sigma }_{S}^{t}$ of these statistics over a window of the last $l$ batches of data. Then, if any statistic $S\left( {B,t}\right)$ of the current batch $B$ and task $t$ falls outside the typical set, i.e., $\left| {S\left( {B,t}\right) - {\mu }_{S}^{t}}\right| > \lambda {\sigma }_{S}^{t}$ , HCL detects a task change. In this case, if all the statistics are in the typical set, $\left| {S\left( {B,{t}^{\prime }}\right) - {\mu }_{S}^{{t}^{\prime }}}\right| < \lambda {\sigma }_{S}^{{t}^{\prime }}$ , for one of the previous tasks ${t}^{\prime }$ , we identify a switch to the task ${t}^{\prime }$ ; otherwise, we switch to a new task. In practice, for most standard CL benchmarks such as split-MNIST, we only use a single statistic, HCL's log-likelihood, which is sufficient for robust task change detection. However, for more challenging scenarios identified in Nalisnick et al. (2019a), we use all three statistics described above.

§ 3.3. ALLEVIATING CATASTROPHIC FORGETTING

§ 3.3.1. GENERATIVE REPLAY

Following Shin et al. (2017); Rao et al. (2019), we train the model on a mix of real data from the current task and generated data from previous tasks to combat forgetting. For generating the replay data, we store a single snapshot of the HCL model ${\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y,t}\right)$ with weights ${\theta }^{\left( k\right) }$ taken at the point of the last detected task change ${T}_{{t}_{k}} \rightarrow {T}_{{t}_{k + 1}}$ . We generate and replay data from old tasks using the snapshot: ${x}_{GR} \sim {\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y,t}\right)$ , where $y \sim U\{ 1,\ldots ,K\}$ and $t \sim U\left\{ {{t}_{1},\ldots ,{t}_{k}}\right\}$ , and maximize its likelihood ${\mathcal{L}}_{GR} = \log {\widehat{p}}_{X}\left( {{x}_{GR} \mid y,t}\right)$ under the current HCL model ${\widehat{p}}_{X}\left( \cdot \right)$ .
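A minimal sketch of the generative-replay term, again with hypothetical affine maps standing in for the frozen snapshot ${f}_{{\theta }^{\left( k\right) }}$ and the current flow ${f}_{\theta }$: latent Gaussian samples are pushed through the snapshot inverse to produce ${x}_{GR}$, which is then scored under the current model.

```python
import numpy as np

rng = np.random.default_rng(1)
K, PAST_TASKS = 2, [0]                       # classes per task, tasks seen so far

def gaussian_loglik(z, mu):
    d = z.shape[-1]
    return -0.5 * np.sum((z - mu) ** 2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)

# Hypothetical affine "flows": frozen snapshot theta^(k) and current theta.
a_snap, b_snap = np.array([2.0, 0.5]), np.array([0.1, -0.3])
snap_inverse = lambda z: (z - b_snap) / a_snap           # x = f_snapshot^{-1}(z)
a_cur, b_cur = np.array([1.5, 0.8]), np.array([0.0, 0.1])
cur_forward = lambda x: a_cur * x + b_cur                # z = f_theta(x)
cur_logdet = np.sum(np.log(np.abs(a_cur)))

mus = {(y, t): rng.normal(size=2) for y in range(K) for t in PAST_TASKS}

def l_gr(n=64):
    """Monte-Carlo estimate of L_GR: log-likelihood of replayed samples under
    the current model, with y ~ U{1..K} and t ~ U{past tasks}."""
    total = 0.0
    for _ in range(n):
        y = int(rng.integers(K))
        t = PAST_TASKS[int(rng.integers(len(PAST_TASKS)))]
        z = mus[(y, t)] + rng.normal(size=2)             # z ~ N(mu_{y,t}, I)
        x_gr = snap_inverse(z)                           # x_GR from the snapshot
        total += gaussian_loglik(cur_forward(x_gr), mus[(y, t)]) + cur_logdet
    return total / n

assert np.isfinite(l_gr())
```

In the actual method this term is added to the current-task log-likelihood and both are maximized jointly.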
We store only a single snapshot model throughout training, as it approximates the data distribution of all tasks up to ${T}_{{t}_{k}}$ . After detecting the task change ${T}_{{t}_{k + 1}} \rightarrow {T}_{{t}_{k + 2}}$ , we update the snapshot with new weights ${\theta }^{\left( k + 1\right) }$ . The resulting objective function in generative replay training is ${\mathcal{L}}_{ll} + {\mathcal{L}}_{GR}$ , where ${\mathcal{L}}_{ll}$ is the log-likelihood of the data on the current task. See Appendix D for a further discussion of the generative replay objective. We refer to HCL with generative replay as HCL-GR. In prior work, generative replay has been successfully applied, predominantly using GANs or VAEs (Shin et al., 2017; Rao et al., 2019; Lee et al., 2020; Ye & Bors, 2020; Pomponi et al., 2020b; Mundt et al., 2019; Achille et al., 2018).

§ 3.3.2. FUNCTIONAL REGULARIZATION

We propose a novel functional regularization loss that forces the flow to map samples from previous tasks to the same latent representations as a snapshot model. Specifically, similar to GR, we save a snapshot of the model ${\widehat{p}}_{X}^{\left( k\right) }\left( \cdot \right)$ taken after detecting a shift from the task ${T}_{{t}_{k}}$ and produce samples ${x}_{FR} \sim {\widehat{p}}_{X}^{\left( k\right) }\left( {x \mid y,t}\right)$ for $y \sim U\{ 1,\ldots ,K\}$ , $t \sim U\left\{ {{t}_{1},\ldots ,{t}_{k}}\right\}$ . However, instead of the generative replay loss ${\mathcal{L}}_{GR}$ , we add the following term to the maximum likelihood objective: ${\mathcal{L}}_{FR} = {\begin{Vmatrix}{f}_{\theta }\left( {x}_{FR}\right) - {f}_{{\theta }^{\left( k\right) }}\left( {x}_{FR}\right) \end{Vmatrix}}^{2}$ , where ${f}_{\theta }$ is the current flow mapping and ${f}_{{\theta }^{\left( k\right) }}$ is the snapshot model.
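The functional-regularization term can be sketched in the same toy setting (hypothetical affine maps in place of the real flows):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical affine stand-ins for the frozen snapshot f_{theta^(k)}
# and the current flow f_theta (RealNVP/Glow in the actual model).
a_snap, b_snap = np.array([2.0, 0.5]), np.array([0.1, -0.3])
f_snap = lambda x: a_snap * x + b_snap
snap_inverse = lambda z: (z - b_snap) / a_snap

def fr_loss(f_current, mu_yt, n=64):
    """L_FR = E ||f_theta(x_FR) - f_{theta^(k)}(x_FR)||^2 over snapshot samples."""
    z = mu_yt + rng.normal(size=(n, 2))       # z ~ N(mu_{y,t}, I)
    x_fr = snap_inverse(z)                    # x_FR generated by the snapshot
    return float(np.mean(np.sum((f_current(x_fr) - f_snap(x_fr)) ** 2, axis=-1)))

f_cur = lambda x: np.array([1.6, 0.9]) * x + np.array([0.05, 0.0])
mu = np.array([1.0, -1.0])
assert fr_loss(f_cur, mu) > 0.0               # a drifted flow is penalized
assert np.isclose(fr_loss(f_snap, mu), 0.0)   # identical flows incur no penalty
```

The regularizer vanishes exactly when the current flow agrees with the snapshot on the replayed samples, which is what makes it a functional (rather than weight-space) constraint.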
We note that the ${L}_{2}$ distance in ${\mathcal{L}}_{FR}$ is a natural choice given that ${p}_{Z}\left( {z \mid y,t}\right)$ is Gaussian, as ${\mathcal{L}}_{ll}$ also contains a linear combination of losses of the form ${\begin{Vmatrix}{f}_{\theta }\left( x\right) - {\mu }_{y,t}\end{Vmatrix}}^{2}$ . The term ${\mathcal{L}}_{FR}$ can be understood as controlling the amount of change we allow for the function $f$ , hence controlling the trade-off between stability and plasticity. In practice, we weight the term by $\alpha : {\mathcal{L}}_{ll} + \alpha {\mathcal{L}}_{FR}$ . We refer to the method as HCL-FR. To the best of our knowledge, the loss ${\mathcal{L}}_{FR}$ is novel: it is designed specifically for normalizing flows and cannot be trivially extended to other generative models. In order to apply ${\mathcal{L}}_{FR}$ to VAEs, we would need to apply the loss separately to the encoder and the decoder of the model, and potentially to their composition. Recently, Titsias et al. (2019) and Pan et al. (2020) proposed related regularization techniques for continual learning which rely on the Gaussian Process framework.

Theoretical analysis In Appendix E.1 we study the loss ${\mathcal{L}}_{FR}$ theoretically and draw connections to other objectives. In particular, the term can be interpreted as measuring the amount of change in the function via the KL-divergence, assuming the output of the flow is an isotropic Gaussian. Under a Taylor approximation, we show that ${\mathcal{L}}_{FR}$ constrains the weights to move only in directions of low curvature of the mapping ${f}_{\theta }^{\left( k\right) }$ when learning a new task. Hence, similar to regularization-based CL methods, this term limits movement of the weights in directions that lead to large functional changes of the flow.

§ 4. EXPERIMENTS

In this section, we evaluate HCL on a range of image classification tasks in continual learning.
In all experiments, we consider domain-incremental learning where the number of classes $K$ is the same in all tasks. At test time, the task identity is not provided to any of the considered methods. For HCL, we report the performance both in the task-aware (when the task identity is provided to the model during training) and task-agnostic (no task boundary knowledge during training) settings. We use RealNVP and Glow normalizing flow models. See Appendix A for the detailed setup.

Metrics Let ${a}_{i,j}$ be the accuracy of the model on task $i$ after training on $j$ tasks. We report the following metrics: (1) the final accuracy ${a}_{i,\tau }$ on each task $i \in \{ 1,\ldots ,\tau \}$ at the end of training, (2) the average final accuracy across tasks $\frac{1}{\tau }\mathop{\sum }\limits_{{i = 1}}^{\tau }{a}_{i,\tau }$ , (3) the average forgetting: $\frac{1}{\tau - 1}\mathop{\sum }\limits_{{i = 1}}^{{\tau - 1}}\left( {{a}_{i,i} - {a}_{i,\tau }}\right)$ , and (4) the overall accuracy: the final accuracy on $\left( {K \times \tau }\right)$ -way classification, which indicates how well the model identifies both the class and the task. We run each experiment with 3 seeds and report the mean and standard deviation of the metrics.

Adam We evaluate Adam training without any extra steps for preventing catastrophic forgetting.

Multi-Task Learning (MTL) We evaluate multitask learning (MTL): the model is trained on each task ${T}_{{t}_{i}}$ for the same number of epochs as in CL methods; however, when training on ${T}_{{t}_{i}}$ , it has access to all previous tasks ${T}_{{t}_{1}},\ldots ,{T}_{{t}_{i - 1}}$ . At each iteration, we sample a mini-batch (of the same size as the current data batch) containing the data

Figure 2. Results on (a) Split MNIST, (b) MNIST-SVHN and (c) SVHN-MNIST image datasets.
For Split MNIST, in the top panel we show the performance of each method on each of the tasks at the end of training; for HCL we show the results in the task-agnostic setting with dashed lines. We also show average accuracy, forgetting and overall accuracy for each of the datasets and methods. HCL provides strong performance, especially on SVHN-MNIST, where it achieves almost zero forgetting and significantly outperforms ER. HCL-FR provides better results than HCL-GR overall.

from each of the tasks that have been observed so far.

Experience Replay (ER) We reserve a buffer with a fixed size of 1000 samples for each task and randomly select samples to add to that buffer during training on each task. When training on task ${T}_{k}$ , the model randomly picks a number of samples equal to the current task's batch size from each of the previous task buffers and appends them to the current batch.

CURL We evaluate the state-of-the-art CURL (Rao et al., 2019) method for continual learning, which is most closely related to HCL: CURL also incorporates a generative model (VAE), with an expanding Gaussian mixture in latent space, and likelihood-based task-change detection.

Split MNIST In this experiment, following prior work, we split the MNIST dataset (LeCun et al., 1998) into 5 binary classification tasks. We train for 30 epochs on each task. We use the Glow architecture to model the data distribution. The results are presented in Fig. 2 (a) and Appendix Table 1. HCL shows strong performance, competitive with ER. Out of the HCL variants, HCL-FR provides better performance both in the task-aware and the task-agnostic settings. Both HCL variants significantly outperform CURL.
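For reference, the accuracy and forgetting metrics used in these experiments can be computed from an accuracy matrix ${a}_{i,j}$ as follows (a minimal sketch with made-up numbers):

```python
import numpy as np

def cl_metrics(acc):
    """acc[i, j] = accuracy on task i after training on j+1 tasks (NaN if unseen).
    Returns final per-task accuracies, their average, and average forgetting."""
    tau = acc.shape[0]
    final = acc[:, -1]                                       # a_{i,tau}
    avg_acc = final.mean()
    forgetting = np.mean([acc[i, i] - acc[i, -1] for i in range(tau - 1)])
    return final, avg_acc, forgetting

# Hypothetical 3-task run: rows are tasks, columns are training stages.
acc = np.array([[90.0, 80.0, 70.0],
                [np.nan, 85.0, 75.0],
                [np.nan, np.nan, 95.0]])
final, avg_acc, forgetting = cl_metrics(acc)
assert np.isclose(avg_acc, (70.0 + 75.0 + 95.0) / 3)
assert np.isclose(forgetting, ((90.0 - 70.0) + (85.0 - 75.0)) / 2)
```

Negative forgetting (as in some single-pass rows of the tables above) simply means a task's accuracy improved after later training.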
We hypothesise that since it only uses a single latent Gaussian component for each class, CURL cannot as easily capture a highly multimodal and complex data distribution for a single class - a requirement for domain-incremental learning, where classes may be visually very different across different tasks. In contrast, as HCL initialises multiple latent components in a task-agnostic fashion and draws upon a flexible flow-based model, it is much better suited to the domain-incremental continual learning setting. The final accuracy of the Adam baseline on some tasks is very low: unless we take measures to avoid forgetting, the flow typically maps the data from all tasks to the region in the latent space corresponding to the final task, and it may happen that, e.g., the data in class 1 of the first task will be mapped to class 2 of the last task.

MNIST-SVHN and SVHN-MNIST We evaluate HCL and the baselines on two more challenging problems: MNIST-SVHN and SVHN-MNIST. Here, the tasks are 10-way classification problems on either the SVHN (Netzer et al., 2011) or the MNIST dataset. We use the RealNVP architecture with inputs of size ${32} \times {32} \times 3$ , and upscale the MNIST images to this resolution. We train the methods for 90 epochs on each task. We report the results in Fig. 2 (b) and (c) and Appendix Table 2. HCL-FR and HCL-GR show strong performance, outperforming ER and Adam significantly, and performing on par with MTL. On MNIST-SVHN, the model is able to almost completely avoid forgetting.

See Appendix B for additional experimental results on split CIFAR-10 and split CIFAR-100 and Appendix G discussing task identification results.

§ 5. DISCUSSION

In this work, we proposed HCL, a hybrid model for continual learning based on normalizing flows. HCL achieves strong performance on a range of image classification problems and is able to automatically detect new and recurring tasks using the typicality of the flow's statistics.
We believe that the key advantage of HCL is its simplicity and extensibility. HCL describes the data-generating process using a tractable but flexible probabilistic model and uses maximum likelihood to train the model.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/_l8XYZe88K4/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/_l8XYZe88K4/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..06aaade4766c854a67ef4ed434410ab4e1aad942
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/_l8XYZe88K4/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,451 @@

Why be adversarial? Let's cooperate!:

Cooperative Dataset Alignment via JSD Upper Bound

## Abstract

Unsupervised dataset alignment estimates a transformation that maps two or more source domains to a shared aligned domain given only the domain datasets. This task has many applications including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min, i.e., cooperative, problem and can provide a natural evaluation metric for unsupervised dataset alignment. We present preliminary results of our proposed framework on simulated and real-world data.

## 1.
Introduction

In many cases, a practitioner has access to multiple related but distinct datasets such as agricultural measurements from two farms, experimental data collected in different months, or sales data before and after a major event. *Unsupervised dataset alignment* (UDA) is the ML task aimed at aligning these related but distinct datasets in a shared space, which may be a latent space, without any pairing information between the two domains (i.e., unsupervised). This task has many applications such as generative modeling (e.g., (Zhu et al., 2017)), unsupervised domain adaptation (e.g., (Grover et al., 2020; Hu et al., 2018)), batch effect mitigation in biology (e.g., (Haghverdi et al., 2018)), and fairness-aware learning (e.g., (Zemel et al., 2013)).

The most common approach for obtaining such alignment transformations stems from Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which can be viewed as minimizing a lower bound on the Jensen-Shannon Divergence (JSD) between real and generated distributions. The lower bound is tight if and only if the inner maximization is solved perfectly. CycleGAN (Zhu et al., 2017) maps between two datasets via two GAN objectives between the two domains and a cycle consistency loss, which encourages approximate invertibility of the transformations. However, adversarial learning can be quite challenging in practice (see e.g. (Lucic et al., 2018; Kurach et al., 2019)) because of the competitive nature of the min-max optimization problem. Also, the research community only has reasonable model evaluation metrics for certain data types. Specifically, the commonly accepted Frechet Inception Distance (FID) (Heusel et al., 2017) is only applicable to image or auditory data, which have standard powerful pretrained classifiers, and even the implementation of FID can have evaluation issues (Parmar et al., 2021). No clear metrics exist for tabular data or non-perceptual data.
Recently, flow-based methods that leverage invertible models have been proposed for the UDA task (Grover et al., 2020; Usman et al., 2020). AlignFlow (Grover et al., 2020) leverages invertible models to make the model cycle-consistent (i.e., invertible) by construction and introduces exact log-likelihood loss terms, derived from standard flow-based generative models, as complementary loss terms to the adversarial ones. Yet, AlignFlow still leverages adversarial learning and does not provide a general evaluation metric. Log-likelihood ratio minimizing flows (LRMF) (Usman et al., 2020) use invertible flow models and density estimation to avoid adversarial learning altogether and define a new metric based on the log-likelihood ratio. However, LRMF depends heavily on the density model class and can only partially align datasets if the target distribution is not in the chosen density model class. Additionally, the LRMF metric depends on this density model class and is only defined for two datasets.

Therefore, to avoid challenging adversarial learning and generalize previous flow-based approaches, we propose a unified non-adversarial UDA framework, which we prove is equivalent to minimizing an upper bound on the JSD. Importantly, our problem reduces to a min-min, i.e., cooperative, problem, and the JSD upper bound can provide a natural evaluation metric for UDA that can be applied in any domain. Our framework has two parts: the outer minimization requires an invertible model, and the inner minimization requires a density model (e.g., Gaussian mixture models or normalizing flows (Dinh et al., 2017)). We summarize our contributions as follows:

- We prove that a minimization problem over density models is an upper bound on a generalized version of JSD that allows for more than two distributions. Importantly, we also theoretically quantify the bound gap and show that it can be made tight if the density model class is flexible enough.
- We use this JSD upper bound to derive a novel regularized loss function for UDA and explain its relationship to prior methods.

- We demonstrate the feasibility of our method on simulated and real-world data.

Notation We will denote distributions as ${P}_{X}\left( \mathbf{x}\right)$ where $X$ is the corresponding random variable. Invertible functions will be denoted by $T\left( \cdot \right)$ . We will use ${X}_{j} \sim {P}_{{X}_{j}}$ to denote the observed random variable from the $j$ -th distribution. We will use ${Z}_{j} \triangleq {T}_{j}\left( {X}_{j}\right) \sim {P}_{{Z}_{j}} \equiv {P}_{{T}_{j}\left( {X}_{j}\right) }$ to denote the latent random variable of the $j$ -th distribution after applying ${T}_{j}$ to ${X}_{j}$ (and note that ${X}_{j} = {T}_{j}^{-1}\left( {Z}_{j}\right)$ ). We will denote the mixtures of these observed or latent distributions as ${P}_{{X}_{\operatorname{mix}}} \triangleq \mathop{\sum }\limits_{j}{w}_{j}{P}_{{X}_{j}}$ and ${P}_{{Z}_{\operatorname{mix}}} \triangleq \mathop{\sum }\limits_{j}{w}_{j}{P}_{{Z}_{j}}$ , where $\mathbf{w}$ is a probability vector. We denote KL divergence, entropy, and cross entropy as $\mathrm{{KL}}\left( {\cdot , \cdot }\right) ,\mathrm{H}\left( \cdot \right)$ , and ${\mathrm{H}}_{\mathrm{c}}\left( {\cdot , \cdot }\right)$ , respectively, where $\mathrm{{KL}}\left( {P, Q}\right) = {\mathrm{H}}_{\mathrm{c}}\left( {P, Q}\right) - \mathrm{H}\left( P\right)$ .

## 2. Regularized Alignment Upper Bound Loss

We first remind the reader of the generalized Jensen-Shannon divergence for more than two distributions, where the standard JSD is recovered if ${w}_{1} = {w}_{2} = {0.5}$ .

Definition 1 (Generalized Jensen-Shannon Divergence (GJSD) (Lin, 1991)).
Given $k$ distributions ${\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k}$ and a corresponding probability weight vector $\mathbf{w}$ , the generalized Jensen-Shannon divergence is defined as (proof of equivalence in appendix): + +$$ +{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{X}_{1}},\cdots ,{P}_{{X}_{k}}}\right) \triangleq \mathop{\sum }\limits_{j}{w}_{j}\operatorname{KL}\left( {{P}_{{X}_{j}},\mathop{\sum }\limits_{{j}^{\prime }}{w}_{{j}^{\prime }}{P}_{{X}_{{j}^{\prime }}}}\right) +$$ + +$$ +\equiv \mathrm{H}\left( {\mathop{\sum }\limits_{j}{w}_{j}{P}_{{X}_{j}}}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) . \tag{1} +$$ + +The goal of distribution alignment is to find a set of transformations ${\left\{ {T}_{j}\left( \cdot \right) \right\} }_{j = 1}^{k}$ (which will be invertible in our case) such that the latent distributions align, i.e., ${P}_{{T}_{j}\left( {X}_{j}\right) } =$ ${P}_{{T}_{{j}^{\prime }}\left( {X}_{{j}^{\prime }}\right) }$ or equivalently ${P}_{{Z}_{j}} = {P}_{{Z}_{{j}^{\prime }}}$ for all $j \neq {j}^{\prime }$ . Given the properties of divergences, this alignment will happen if and only if $\operatorname{GJSD}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) = 0$ . Thus, ideally, we would minimize GJSD directly with respect to ${T}_{j}$ , i.e., + +$$ +\mathop{\min }\limits_{{{T}_{1},\cdots ,{T}_{k} \in \mathcal{T}}}\operatorname{GJSD}\left( {{P}_{{T}_{1}\left( {X}_{1}\right) },\cdots ,{P}_{{T}_{k}\left( {X}_{k}\right) }}\right) \tag{2} +$$ + +$$ +\equiv \mathop{\min }\limits_{{{T}_{1},\cdots ,{T}_{k} \in \mathcal{T}}}\mathrm{H}\left( {\mathop{\sum }\limits_{j}{w}_{j}{P}_{{T}_{j}\left( {X}_{j}\right) }}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{T}_{j}\left( {X}_{j}\right) }\right) , +$$ + +where $\mathcal{T}$ is a class of invertible functions. + +### 2.1. GJSD Upper Bound + +However, we cannot evaluate the entropy terms in Eqn.
2 because we do not know the density of ${P}_{{X}_{j}}$ ; we only have samples from ${P}_{{X}_{j}}$ . Therefore, we will upper bound the first entropy term in Eqn. 2 (H $\left( {\mathop{\sum }\limits_{j}{w}_{j}{P}_{{X}_{j}}}\right)$ ) using an auxiliary density model and decompose the other entropy terms by leveraging the change of variables formula for invertible functions. + +Theorem 1 (GJSD Upper Bound). Given an auxiliary density model class $\mathcal{Q}$ , we form a GJSD upper bound: + +$$ +{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) +$$ + +$$ +\leq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) , +$$ + +where the bound gap is exactly $\mathop{\min }\limits_{{Q \in \mathcal{Q}}}\operatorname{KL}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right)$ . + +Proof of Theorem 1. For any $Q \in \mathcal{Q}$ , we have the following upper bound: + +$$ +{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) +$$ + +$$ += \underset{ = 0}{\underbrace{{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) - {\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) }} + \mathrm{H}\left( {P}_{{Z}_{\operatorname{mix}}}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) +$$ + +$$ += {\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) - \mathrm{{KL}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) +$$ + +$$ +\leq {\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) , +$$ + +where the inequality follows from the fact that KL divergence is non-negative and the bound gap is equal to $\operatorname{KL}\left(
{{P}_{{Z}_{\operatorname{mix}}}, Q}\right)$ . The $Q$ that achieves the minimum in the upper bound is equivalent to the $Q$ that minimizes the bound gap, i.e., + +$$ +{Q}^{ * } = \underset{Q \in \mathcal{Q}}{\arg \min }{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) \underset{\text{Constant w.r.t. }Q}{\underbrace{-\mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) }} \tag{3} +$$ + +$$ += \underset{Q \in \mathcal{Q}}{\arg \min }{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) \underset{\text{Constant w.r.t. }Q}{\underbrace{-\mathrm{H}\left( {P}_{{Z}_{\operatorname{mix}}}\right) }} \tag{4} +$$ + +$$ += \underset{Q \in \mathcal{Q}}{\arg \min }\mathrm{{KL}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) . \tag{5} +$$ + +The tightness of the bound depends on how well the class of density models $\mathcal{Q}$ (e.g., mixture models, normalizing flows, or autoregressive densities) can approximate ${P}_{{Z}_{\operatorname{mix}}}$ ; notably, the bound can be made tight if ${P}_{{Z}_{\operatorname{mix}}} \in \mathcal{Q}$ . Also, one key feature of this upper bound is that the cross entropy term can be evaluated using only samples from ${P}_{{X}_{j}}$ and the transformations ${T}_{j}$ , i.e., ${\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) =$ $\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log Q\left( {{T}_{j}\left( {\mathbf{x}}_{j}\right) }\right) }\right\rbrack$ . However, we still cannot evaluate the other entropy terms $\mathrm{H}\left( {P}_{{Z}_{j}}\right)$ since we do not know the densities of ${P}_{{Z}_{j}}$ (or ${P}_{{X}_{j}}$ ). Thus, we leverage the fact that the ${T}_{j}$ functions are invertible to define an entropy change of variables. + +Lemma 2 (Entropy Change of Variables). Let $X \sim {P}_{X}$ and $Z \triangleq T\left( X\right) \sim {P}_{Z}$ , where $T$ is an invertible transformation. 
The entropy of $Z$ can be decomposed as follows: + +$$ +\mathrm{H}\left( {P}_{Z}\right) = \mathrm{H}\left( {P}_{X}\right) + {\mathbb{E}}_{{P}_{X}}\left\lbrack {\log \left| {{J}_{T}\left( \mathbf{x}\right) }\right| }\right\rbrack , \tag{6} +$$ + +where $\left| {{J}_{T}\left( \mathbf{x}\right) }\right|$ is the absolute value of the determinant of the Jacobian of $T$ . + +The key insight from this lemma is that $\mathrm{H}\left( {P}_{X}\right)$ is a constant with respect to $T$ and can thus be ignored when optimizing $T$ , while ${\mathbb{E}}_{{P}_{X}}\left\lbrack {\log \left| {{J}_{T}\left( \mathbf{x}\right) }\right| }\right\rbrack$ can be approximated using only samples from ${P}_{X}$ . Combining Theorem 1 and Lemma 2, we arrive at our final objective function, which is equivalent to minimizing an upper bound on the GJSD: + +$$ +{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) +$$ + +$$ +\leq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}}, Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) \tag{7} +$$ + +$$ += \mathop{\min }\limits_{{Q \in \mathcal{Q}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| }\right\rbrack \tag{8} +$$ + +$$ +- \mathop{\sum }\limits_{j}{w}_{j}\mathrm{\;H}\left( {P}_{{X}_{j}}\right) , +$$ + +where the last term $- \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right)$ is constant with respect to the ${T}_{j}$ functions and can thus be ignored. We formally define this loss function as follows. + +Definition 2 (Alignment Upper Bound Loss).
Given $k$ continuous distributions ${\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k}$ , a class of continuous distributions $\mathcal{Q}$ , and a probability weight vector $\mathbf{w}$ , the alignment upper bound loss is defined as follows: + +$$ +{\mathcal{L}}_{\mathrm{{AUB}}}\left( {{T}_{1},\cdots ,{T}_{k};{\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k},\mathcal{Q},\mathbf{w}}\right) +$$ + +$$ +\triangleq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack , \tag{9} +$$ + +where ${T}_{j}$ are invertible and $\left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right|$ is the absolute value of the Jacobian determinant. + +Notice that this alignment loss can be seen as learning the best base distribution given fixed flow models ${T}_{j}$ . We now consider the theoretical optimum if we optimize over all invertible functions. + +Theorem 3 (Alignment at Global Minimum of ${\mathcal{L}}_{\mathrm{{AUB}}}$ ). If ${\mathcal{L}}_{\mathrm{{AUB}}}$ is minimized over the class of all invertible functions, a global minimum of ${\mathcal{L}}_{\text{AUB }}$ implies that the latent distributions are aligned, i.e., ${P}_{{T}_{j}\left( {X}_{j}\right) } = {P}_{{T}_{{j}^{\prime }}\left( {X}_{{j}^{\prime }}\right) }$ for all $j \neq {j}^{\prime }$ . Notably, this result holds regardless of $\mathcal{Q}$ . + +Informally, this can be proved by showing that the problem decouples into separate normalizing flow losses where $Q$ is the base distribution and the optimum is achieved only if ${P}_{{T}_{j}\left( {X}_{j}\right) } = Q$ for all ${T}_{j}$ (formal proof in the appendix). This alignment of the latent distributions also implies the translation between any of the observed component distributions. The proof follows directly from Theorem 3 and the change of variables formula. 
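As a concrete illustration of the loss in Def. 2, the following is a minimal one-dimensional sketch (our illustration, not the paper's implementation): the transforms ${T}_{j}$ are restricted to affine maps ${T}_{j}\left( x\right) = {a}_{j}x + {b}_{j}$ and the density class $\mathcal{Q}$ to Gaussians, for which the inner minimization over $Q$ has a closed form (match the mean and variance of the latent mixture). The specific sample distributions below are arbitrary choices for the demonstration.

```python
import numpy as np

def aub_loss(samples, transforms, weights):
    """Monte Carlo estimate of the alignment upper bound loss (Eq. 9) for
    1-d affine maps T_j(x) = a_j * x + b_j and a Gaussian density class Q.
    For Gaussians, the inner minimisation over Q has a closed form: the
    optimal Q matches the mean and variance of the latent mixture P_{Z_mix}."""
    zs = [a * x + b for x, (a, b) in zip(samples, transforms)]
    mu = sum(w * z.mean() for w, z in zip(weights, zs))
    var = sum(w * ((z - mu) ** 2).mean() for w, z in zip(weights, zs))
    loss = 0.0
    for w, z, (a, _b) in zip(weights, zs, transforms):
        neg_log_q = 0.5 * np.log(2 * np.pi * var) + (z - mu) ** 2 / (2 * var)
        # E_{P_{X_j}}[ -log |J_{T_j}(x)| Q(T_j(x)) ]  with  |J_{T_j}| = |a_j|
        loss += w * (neg_log_q - np.log(abs(a))).mean()
    return loss

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 4000)   # samples from P_{X_1} = N(0, 1)
x2 = rng.normal(5.0, 2.0, 4000)   # samples from P_{X_2} = N(5, 4)
weights = [0.5, 0.5]
# identity maps leave the latent distributions misaligned ...
loss_id = aub_loss([x1, x2], [(1.0, 0.0), (1.0, 0.0)], weights)
# ... while T_2(x) = (x - 5) / 2 aligns both latents to N(0, 1)
loss_aligned = aub_loss([x1, x2], [(1.0, 0.0), (0.5, -2.5)], weights)
assert loss_aligned < loss_id
```

Consistent with Theorem 3, the pair of transforms that aligns both latents to $\mathcal{N}\left( {0,1}\right)$ attains a lower AUB value than the identity pair, which leaves the latents misaligned.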
+ +Corollary 4 (Translation at Global Minimum of ${\mathcal{L}}_{\mathrm{{AUB}}}$ ). Similar to Theorem 3, a global minimum of ${\mathcal{L}}_{\mathrm{{AUB}}}$ implies translation between any component distributions using the inverses of ${T}_{j}$ , i.e., ${P}_{{T}_{{j}^{\prime }}^{-1}\left( {{T}_{j}\left( {X}_{j}\right) }\right) } = {P}_{{X}_{{j}^{\prime }}}$ for all $j \neq {j}^{\prime }$ . + +### 2.2. Regularization via Transportation Cost + +While the alignment objective is the most challenging part of UDA, we argue that regularization is also critical for practical and stable alignment (or translation) between datasets because there are many optimal alignment solutions, even infinitely many in most cases (see appendix for two examples). We alleviate this issue by adding an expected transportation cost (usually the squared Euclidean distance) as a regularization term in our objective, inspired by optimal transport (OT) concepts. + +Definition 3 (Regularized Alignment Upper Bound Loss). Given a similar setup as in Def. 2 and a transportation cost function $c\left( {a, b}\right) \geq 0$ for transporting a point from $a$ to $b$ , the regularized alignment upper bound loss is defined as: + +$$ +{\mathcal{L}}_{\text{RAUB }}\left( {{T}_{1},\cdots ,{T}_{k};{\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k},\mathcal{Q},\mathbf{w},\lambda , c}\right) +$$ + +$$ +\triangleq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }\right. \tag{10} +$$ + +$$ +\left. {+{\lambda c}\left( {\mathbf{x},{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack \text{.} +$$ + +### 2.3.
Relationship to Prior Works + +AlignFlow is a special case without adversarial terms AlignFlow (Grover et al., 2020) without adversarial loss terms is a special case of our method for two distributions where the density model class $\mathcal{Q}$ only contains the standard normal distribution (i.e., a singleton class) and no regularization is used (i.e., $\lambda = 0$ ). Thus, AlignFlow can be viewed as initially optimizing a poor upper bound on JSD; however, the JSD bound becomes tighter as training progresses because the latent distributions independently move towards the same normal distribution. + +LRMF is a special case with only one transformation Log-likelihood ratio minimizing flows (LRMF) (Usman et al., 2020) is also a special case of our method for only two distributions, where one transformation is fixed at the identity (i.e., ${T}_{2} =$ Id) and no regularization is applied (i.e., $\lambda = 0$ ). While the final practical LRMF objective is a special case of ours, the theory is developed from a different but complementary perspective. The LRMF metric requires an assumption about a given density model + +![01963e4c-eeae-7bec-97e8-3b6f5aff77fc_3_183_192_1406_333_0.jpg](images/01963e4c-eeae-7bec-97e8-3b6f5aff77fc_3_183_192_1406_333_0.jpg) + +Figure 1: Top row is the latent space and bottom row is the data translated into the other space. (a-c) LRMF, which only has one transformation $T$ , may not be able to align the datasets if the density model class $\mathcal{Q}$ is not expressive enough (in this case Gaussian distributions), while using two transformations as in our framework can align them. (d-f) AlignFlow (without adversarial terms) may not align because ${Q}_{z}$ is fixed at a standard normal, while our approach with a learnable mixture of Gaussians for ${Q}_{z}$ is able to learn an alignment (both use the same ${T}_{j}$ models).
+ +![01963e4c-eeae-7bec-97e8-3b6f5aff77fc_3_177_716_649_447_0.jpg](images/01963e4c-eeae-7bec-97e8-3b6f5aff77fc_3_177_716_649_447_0.jpg) + +Figure 2: An unregularized alignment loss (top) can lead to excessive and unexpected movement of points in the latent representation (lines connect transported points), while our regularized alignment loss (bottom) yields a unique and regularized solution that moves points significantly less and is closer to the identity function. + +![01963e4c-eeae-7bec-97e8-3b6f5aff77fc_3_162_1371_692_218_0.jpg](images/01963e4c-eeae-7bec-97e8-3b6f5aff77fc_3_162_1371_692_218_0.jpg) + +Figure 3: Preliminary results on high-dimensional real-world datasets demonstrate the feasibility of our method. The complex translation between the MNIST and USPS datasets (left) does not seem to overfit, while the simple translation from MNIST 0 to 1 may overfit (as seen in the rightmost column of the test results). The first and third columns are the original images, and the second and fourth columns are the translated images. + +class, which enables a zero point (or absolute value) of the metric to be estimated but requires fitting extra domain density models. Usman et al. (2020) also do not uncover the connection of the objective as an upper bound on JSD regardless of the density model class. Additionally, to ensure alignment, LRMF requires that the density model class includes the true target distribution because only one invertible transform is used, while our approach can theoretically align even if the shared density model class is weak (see Theorem 3 and our simulated experiments). + +Cooperative versus Adversarial Networks Analogous to the generator $G$ and the discriminator $D$ in adversarial learning, our framework has two main networks, ${T}_{j}$ and ${Q}_{z}$ .
We can use any invertible function for ${T}_{j}$ (e.g., coupling-based flows (Dinh et al., 2017), neural ODE flows (Grathwohl et al., 2018), or residual flows (Chen et al., 2019)) and any (approximate) density model for ${Q}_{z}$ (e.g., kernel densities (in low dimensions), mixture models, autoregressive densities (Salimans et al., 2017), normalizing flows (Kingma and Dhariwal, 2018), or even VAEs (Kingma and Welling, 2019)). Thus, our framework has modularity similar to that of adversarial approaches. In contrast, we have a min-min, i.e., cooperative, optimization problem, but our transformations must be invertible and the auxiliary density model ${Q}_{z}$ may be more difficult to train than the auxiliary discriminator $D$ . We expect these limitations to be alleviated as new invertible models and density models are continually being developed. + +## 3. Experiments and Conclusion + +We first demonstrate how our approach differs from LRMF and AlignFlow in Fig. 1 and the importance of regularization in Fig. 2 (see appendix for details). We also demonstrate the feasibility of our approach for high-dimensional datasets in some preliminary experiments shown in Fig. 3 (see appendix for details). Yet, scaling up our framework in practice is still a fundamental challenge, and our approach inherits some weaknesses of JSD, which may not give useful gradient information in certain cases. Thus, while these experiments and our theoretical work build a unified foundation for cooperative alignment learning, our work also opens up new theoretical and practical questions. + +## References + +Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2223-2232, 2017. + +Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, and Stefano Ermon.
Alignflow: Cycle consistent learning from multiple domains via normalizing flows. In AAAI, 2020. + +Lanqing Hu, Meina Kan, Shiguang Shan, and Xilin Chen. Duplex generative adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. + +Laleh Haghverdi, Aaron TL Lun, Michael D Morgan, and John C Marioni. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors. Nature Biotechnology, 36(5):421-427, 2018. + +Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In Sanjoy Dasgupta and David McAllester, editors, International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 325-333, Atlanta, Georgia, USA, 17-19 Jun 2013. + +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014. + +Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs created equal? A large-scale study. In Advances in Neural Information Processing Systems, pages 700-709, 2018. + +Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The GAN landscape: Losses, architectures, regularization, and normalization, 2019. URL https://openreview.net/forum?id=rkGG6s0qKQ. + +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/8a1d694707eb0fefe65871369074926d-Paper.pdf.
+ +Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On buggy resizing libraries and surprising subtleties in FID calculation. arXiv preprint arXiv:2104.11222, 2021. + +Ben Usman, Avneesh Sud, Nick Dufour, and Kate Saenko. Log-likelihood ratio minimizing flows: Towards robust and quantifiable neural distribution alignment. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 21118-21129. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/f169b1a771215329737c91f70b5bf05c-Paper.pdf. + +Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In International Conference on Learning Representations, 2017. + +Jianhua Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151, 1991. + +Will Grathwohl, Ricky TQ Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018. + +Ricky T. Q. Chen, Jens Behrmann, David K Duvenaud, and Joern-Henrik Jacobsen. Residual flows for invertible generative modeling. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/5d0d5594d24f0f955548f0fc0ff83d10-Paper.pdf. + +Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017. + +Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible $1 \times 1$ convolutions. In Advances in Neural Information Processing Systems, pages 10215-10224, 2018. + +Diederik P Kingma and Max Welling.
An introduction to variational autoencoders. arXiv preprint arXiv:1906.02691, 2019. + +Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355-607, 2019. ISSN 1935-8237. doi: 10.1561/2200000073. URL http://dx.doi.org/10.1561/2200000073. + +Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, 2016. + +## A. Proofs + +Proof of Equivalence in Def. 1 While the proof of the equivalence is well-known, we reproduce it here for completeness. As a reminder, the KL divergence is defined as: + +$$ +\mathrm{{KL}}\left( {P, Q}\right) = {\mathbb{E}}_{P}\left\lbrack {\log \frac{P\left( x\right) }{Q\left( x\right) }}\right\rbrack = {\mathbb{E}}_{P}\left\lbrack {-\log Q\left( x\right) }\right\rbrack - {\mathbb{E}}_{P}\left\lbrack {-\log P\left( x\right) }\right\rbrack = {\mathrm{H}}_{\mathrm{c}}\left( {P, Q}\right) - \mathrm{H}\left( P\right) , \tag{11} +$$ + +where ${\mathrm{H}}_{\mathrm{c}}\left( {\cdot , \cdot }\right)$ denotes the cross entropy and $\mathrm{H}\left( \cdot \right)$ denotes entropy.
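Both the decomposition in Eq. (11) and the GJSD equivalence stated in Def. 1 can be checked numerically for discrete distributions. The snippet below is a quick sanity check for illustration only; the particular $p$ , $q$ , and weights $\mathbf{w}$ are arbitrary choices of ours.

```python
import numpy as np

def H(p):        # Shannon entropy H(P)
    return -np.sum(p * np.log(p))

def Hc(p, q):    # cross entropy H_c(P, Q)
    return -np.sum(p * np.log(q))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
w = np.array([0.3, 0.7])

# Eq. (11): KL(P, Q) = H_c(P, Q) - H(P)
kl = np.sum(p * np.log(p / q))
assert np.isclose(kl, Hc(p, q) - H(p))

# Def. 1: sum_j w_j KL(P_j, P_mix) = H(P_mix) - sum_j w_j H(P_j)
mix = w[0] * p + w[1] * q
gjsd_kl = w[0] * np.sum(p * np.log(p / mix)) + w[1] * np.sum(q * np.log(q / mix))
gjsd_entropy = H(mix) - (w[0] * H(p) + w[1] * H(q))
assert np.isclose(gjsd_kl, gjsd_entropy)
```

Both identities hold for any discrete distributions on the same support, since they are purely algebraic consequences of the definitions.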
Given this, we can now easily derive the equivalence: + +$$ +{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{X}_{1}},\cdots ,{P}_{{X}_{k}}}\right) = \mathop{\sum }\limits_{j}{w}_{j}\operatorname{KL}\left( {{P}_{{X}_{j}},{P}_{{X}_{\operatorname{mix}}}}\right) \tag{12} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\left( {{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{X}_{j}},{P}_{{X}_{\operatorname{mix}}}}\right) - \mathrm{H}\left( {P}_{{X}_{j}}\right) }\right) \tag{13} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{X}_{j}},{P}_{{X}_{\operatorname{mix}}}}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) \tag{14} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log {P}_{{X}_{\operatorname{mix}}}}\right\rbrack - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) \tag{15} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}{\int }_{\mathcal{X}} - {P}_{{X}_{j}}\left( x\right) \log {P}_{{X}_{\operatorname{mix}}}\left( x\right) {dx} - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) \tag{16} +$$ + +$$ += {\int }_{\mathcal{X}} - \mathop{\sum }\limits_{j}{w}_{j}{P}_{{X}_{j}}\left( x\right) \log {P}_{{X}_{\operatorname{mix}}}\left( x\right) {dx} - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) \tag{17} +$$ + +$$ += {\int }_{\mathcal{X}} - {P}_{{X}_{\operatorname{mix}}}\left( x\right) \log {P}_{{X}_{\operatorname{mix}}}\left( x\right) {dx} - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) \tag{18} +$$ + +$$ += \mathrm{H}\left( {P}_{{X}_{\operatorname{mix}}}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) . \tag{19} +$$ + +Proof of Lemma 2. 
First, we note the following fact from the standard change of variables formula: + +$$ +{P}_{X}\left( \mathbf{x}\right) = {P}_{Z}\left( {T\left( \mathbf{x}\right) }\right) \left| {{J}_{T}\left( \mathbf{x}\right) }\right| \tag{20} +$$ + +$$ +\Rightarrow {P}_{X}\left( \mathbf{x}\right) {\left| {J}_{T}\left( \mathbf{x}\right) \right| }^{-1} = {P}_{Z}\left( {T\left( \mathbf{x}\right) }\right) . +$$ + +We can now derive our result using the change of variables for expectations (i.e., LOTUS) and the probability change of variables from above: + +$$ +\mathrm{H}\left( {P}_{Z}\right) = {\mathbb{E}}_{{P}_{Z}}\left\lbrack {-\log {P}_{Z}\left( \mathbf{z}\right) }\right\rbrack = {\mathbb{E}}_{{P}_{X}}\left\lbrack {-\log {P}_{Z}\left( {T\left( \mathbf{x}\right) }\right) }\right\rbrack +$$ + +$$ += {\mathbb{E}}_{{P}_{X}}\left\lbrack {-\log {P}_{X}\left( \mathbf{x}\right) {\left| {J}_{T}\left( \mathbf{x}\right) \right| }^{-1}}\right\rbrack +$$ + +$$ += {\mathbb{E}}_{{P}_{X}}\left\lbrack {-\log {P}_{X}\left( \mathbf{x}\right) }\right\rbrack + {\mathbb{E}}_{{P}_{X}}\left\lbrack {-\log {\left| {J}_{T}\left( \mathbf{x}\right) \right| }^{-1}}\right\rbrack +$$ + +$$ += \mathrm{H}\left( {P}_{X}\right) + {\mathbb{E}}_{{P}_{X}}\left\lbrack {\log \left| {{J}_{T}\left( \mathbf{x}\right) }\right| }\right\rbrack . +$$ + +Proof of Theorem 3 Given any fixed $Q$ , minimizing ${\mathcal{L}}_{\mathrm{{AUB}}}$ decouples into minimizing separate normalizing flow losses where $Q$ is the base distribution. For each normalizing flow, there exists an invertible ${T}_{j}$ such that ${T}_{j}\left( {X}_{j}\right) \sim Q$ , and this achieves the minimum value of ${\mathcal{L}}_{\mathrm{{AUB}}}$ . 
More formally, + +$$ +\mathop{\min }\limits_{{{T}_{1},\cdots ,{T}_{k}}}{\mathcal{L}}_{\mathrm{{AUB}}}\left( {{T}_{1},\cdots ,{T}_{k}}\right) \tag{21} +$$ + +$$ += \mathop{\min }\limits_{{{T}_{1},\cdots ,{T}_{k}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack \tag{22} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\mathop{\min }\limits_{{T}_{j}}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack + \mathrm{H}\left( {P}_{{X}_{j}}\right) - \mathrm{H}\left( {P}_{{X}_{j}}\right) \tag{23} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\mathop{\min }\limits_{{T}_{j}}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack + \mathrm{H}\left( {P}_{{X}_{j}}\right) - {\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log {P}_{{X}_{j}}\left( \mathbf{x}\right) }\right\rbrack \tag{24} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) + \mathop{\sum }\limits_{j}{w}_{j}\mathop{\min }\limits_{{T}_{j}}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {\log \frac{{P}_{{X}_{j}}\left( \mathbf{x}\right) {\left| {J}_{{T}_{j}}\left( \mathbf{x}\right) \right| }^{-1}}{Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }}\right\rbrack \tag{25} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) + \mathop{\sum }\limits_{j}{w}_{j}\mathop{\min }\limits_{{T}_{j}}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {\log \frac{{P}_{{T}_{j}\left( {X}_{j}\right) }\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }{Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }}\right\rbrack \tag{26} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) + \mathop{\sum
}\limits_{j}{w}_{j}\mathop{\min }\limits_{{T}_{j}}{\mathbb{E}}_{{P}_{{T}_{j}\left( {X}_{j}\right) }}\left\lbrack {\log \frac{{P}_{{T}_{j}\left( {X}_{j}\right) }\left( \mathbf{z}\right) }{Q\left( \mathbf{z}\right) }}\right\rbrack \tag{27} +$$ + +$$ += \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) + \mathop{\sum }\limits_{j}{w}_{j}\mathop{\min }\limits_{{T}_{j}}\mathrm{{KL}}\left( {{P}_{{T}_{j}\left( {X}_{j}\right) }, Q}\right) . \tag{28} +$$ + +Given that $\operatorname{KL}\left( {P, Q}\right) \geq 0$ and equal to 0 if and only if $P = Q$ , the global minimum is achieved only if ${P}_{{T}_{j}\left( {X}_{j}\right) } = Q,\forall j$ , and such invertible functions exist (e.g., the optimal Monge map between ${P}_{{X}_{j}}$ and $Q$ for the squared Euclidean transportation cost (Peyré and Cuturi, 2019)). Additionally, the optimal value is $\mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right)$ , which is constant with respect to the ${T}_{j}$ transformations. + +## B. Examples of Non-Unique Alignment Solutions + +### B.1. Gaussian Example + +Suppose the component distributions are normal distributions, i.e., ${X}_{1} \sim \mathcal{N}\left( {{\mu }_{1}, I}\right)$ and ${X}_{2} \sim \mathcal{N}\left( {{\mu }_{2}, I}\right)$ , and for even greater simplicity, we assume ${T}_{2}$ is the identity, i.e., ${T}_{2}\left( \mathbf{x}\right) = \mathbf{x}$ . Then, a globally optimal solution could be ${T}_{1}\left( \mathbf{x}\right) =$ $U\left( {\mathbf{x} - {\mu }_{1} + {\mu }_{2}}\right)$ for any orthogonal matrix $U$ , i.e., there are infinitely many invertible functions that align the distributions. Note that this lack of unique solutions is not restricted to orthogonal rotations (see Sec. B.2 for a more complex example). + +### B.2. Complex Example + +Consider the 1D case where $\mathcal{Q}$ only contains the uniform distribution. Thus, ${T}_{1}$ and ${T}_{2}$ must map their distributions to the uniform distribution for alignment.
One solution would be ${T}_{1} = {F}_{1}$ and ${T}_{2} = {F}_{2}$ , where ${F}_{1}$ and ${F}_{2}$ are the CDFs of ${P}_{{X}_{1}}$ and ${P}_{{X}_{2}}$ . Yet, there are infinitely many other possible solutions. Consider an invertible function that subdivides the unit interval into an arbitrarily large number of equal-length intervals and then shuffles these intervals with a fixed arbitrary permutation. More formally, we could define this as: + +$$ +{S}_{m,\pi }\left( x\right) = \left\{ {\begin{array}{ll} x - \frac{1}{m} + \frac{\pi \left( 1\right) }{m} & \text{ if }x \in \left\lbrack {0,\frac{1}{m}}\right) \\ x - \frac{2}{m} + \frac{\pi \left( 2\right) }{m} & \text{ if }x \in \left\lbrack {\frac{1}{m},\frac{2}{m}}\right) \\ \vdots & \vdots \\ x - \frac{m}{m} + \frac{\pi \left( m\right) }{m} & \text{ if }x \in \left\lbrack {\frac{m - 1}{m},1}\right\rbrack \end{array},}\right. \tag{29} +$$ + +where $\pi \left( \cdot \right)$ is a permutation of the integers 1 to $m$ . Given this, other optimal solutions could be ${T}_{1} = {S}_{m,\pi } \circ {F}_{1}$ and ${T}_{2} = {F}_{2}$ for any $m > 1$ and any permutation $\pi$ . This idea could be generalized to higher dimensions as well by mapping to the multivariate uniform distribution and subdividing the unit hypercube similarly. + +## C. 2D dataset comparison with related works + +### C.1. Single $T$ vs. Double $T$ ’s (LRMF vs. Ours) + +We first compare our method with the LRMF (Usman et al., 2020) method. The task in this experiment is to transform between the two half-circle distributions ${X}_{1}$ and ${X}_{2}$ in the moons dataset. We train two models, one with the LRMF setup and one with our RAUB setup. As illustrated in Figure 1, the LRMF method fails to transform between ${X}_{1}$ and ${X}_{2}$ . Even though $Q$ can model ${T}_{1}\left( {X}_{1}\right)$ well enough, $Q$ can only capture the mean and variance of ${T}_{2}\left( {X}_{2}\right)$ , which is clearly not informative enough.
Therefore, LRMF fails to translate between the two datasets. In the RAUB setup, by contrast, both ${T}_{1}\left( {X}_{1}\right)$ and ${T}_{2}\left( {X}_{2}\right)$ are mapped to the same distribution, which $Q$ can model well, and the resulting inverted versions of ${X}_{1}$ and ${X}_{2}$ show a valid translation. Therefore, the performance of the LRMF model is limited by the power of the density model $Q$ : if $Q$ fails to model one of the transformed data distributions well enough, high-quality data alignment cannot be achieved. + +### C.2. Simple Fixed $Q$ vs. Learnable $Q$ (AlignFlow vs. Ours) + +Next, we compare our method with the AlignFlow (Grover et al., 2020; Hu et al., 2018) setup. The task in this experiment is to transform between two random patterns ${X}_{1}$ and ${X}_{2}$ in randomly generated datasets. Again, we train two models with the AlignFlow and our RAUB setups, respectively. As illustrated in Figure 1, the AlignFlow method fails to transform between ${X}_{1}$ and ${X}_{2}$ because the transformed datasets ${T}_{1}\left( {X}_{1}\right)$ and ${T}_{2}\left( {X}_{2}\right)$ fail to reach the normal distribution $Q$ . In the RAUB setup, on the other hand, the density model $Q$ is learned to help fit the transformed distributions ${T}_{1}\left( {X}_{1}\right)$ and ${T}_{2}\left( {X}_{2}\right)$ , which allows them to be aligned with each other. Therefore, the performance of the AlignFlow model is limited by the expressiveness of the invertible functions. + +### C.3. Regularized vs. Un-regularized (Some prior works vs. Ours) + +We finally show the importance of the regularization term. The task in this experiment is to transform between two concentric circles with the same mean but slightly different radii. We train two models with the RAUB setup, one with regularization and one without. As illustrated in Figure 2, both models are able to transform between the two distributions well.
However, the transformation pattern is not natural in terms of the 'moving cost' of each point. Each pair created by the unregularized model has a larger transportation cost than the pairs created by the regularized model. Therefore, we argue that by adding a transportation cost, the resulting transformation between samples is closer to an identity transformation and therefore more stable.

+

Note: All implementation details for the toy datasets are available in Appendix E.

+

## D. Real-world datasets

+

To verify the extensibility of our fundamental idea, we also performed experiments on real-world data. Concretely, we compare our model's performance with the baselines on the image translation task, since an evaluation of dataset alignment there is intuitive and interpretable.

+

### D.1. Experiment Details

+

We use the MNIST and USPS datasets in our experiments. Both datasets consist of hand-written digits with 10 classes. For the simplest setting, we run our experiments with the zero and one classes of the MNIST dataset, because a translation from zero to one and vice versa requires a semantic transformation, i.e., those two digits cannot be transformed into each other with a simple translation or rotation. We then perform experiments in a more complicated setting by aligning the MNIST and USPS datasets.

+

To cover the high complexity of real-world data, we use a CNN-based flow model (Dinh et al., 2016) as our invertible functions. We also use a state-of-the-art density model (Salimans et al., 2017) as our $Q$ . Regarding the training procedure, as mentioned in subsection F.1, we first pre-train our ${Q}_{z}$ model to efficiently train our $T$ functions. We then set our framework to gradually transfer knowledge from $Q$ to the $T$ s by introducing $\beta$ into our loss function, as elaborated in subsection F.2.
The learning rate is empirically set to 0.002 and exponentially decayed after 10 training epochs with a decay factor of 0.95.

+

### D.2. Experiment Results

+

Fig. 3 shows the results on the two real-world datasets. The left experiment in the figure shows that the upper bound on the JSD is effectively minimized, since the translations between MNIST and USPS work decently. This implies that common structures of the different domains, e.g., digit information, are properly mapped into the shared representations ${T}_{1}\left( {X}_{1}\right)$ and ${T}_{2}\left( {X}_{2}\right)$ , while distinctive characteristics of the domains, e.g., the bigger and thicker strokes of USPS, can be recovered via ${T}_{1}^{-1}$ and ${T}_{2}^{-1}$ . We believe this demonstrates that the tight upper bound (by $Q$ ) on the JSD effectively forms indistinguishable representations.

+

On the other hand, our proposed idea has some limitations on real-world datasets. First, as shown in the simpler $0 \Leftrightarrow 1$ experiment in Fig. 3, our complex $T$ with the simple $Q$ leads to overfitting. Second, $\beta$ has to be carefully set when training our framework on real-world data: once the gradually increasing $\beta$ , following the fractional-distributions strategy in subsection F.2, reaches a specific value, it starts to degrade model performance. Third, experiments at higher resolutions, e.g., ${256} \times {256}$ , need to be conducted to convincingly verify performance on real-world data. We think these limitations have to be explored to better understand the behavior of our framework on real-world datasets.

+

## E. Detailed Parameters Used in 2D Dataset Experiment

+

### E.1. LRMF vs. Ours Experiment

+

- $T$ for the LRMF setup: ${T}_{1}$ : RealNVP with 8 channel-wise masks, with $s$ and $t$ derived from fully connected networks with 64 hidden channels. ${T}_{2}$ : identity function.
+

- $T$ for the RAUB setup: ${T}_{1}$ and ${T}_{2}$ : RealNVP with 8 channel-wise masks, with $s$ and $t$ derived from fully connected networks with 64 hidden channels. Regularization coefficient $\lambda = 0$ .

+

- $Q$ for both: a single Gaussian distribution with trainable mean and trainable variances.

+

### E.2. AlignFlow vs. Ours Experiment

+

- $T$ for both: RealNVP with 2 channel-wise masks, with $s$ and $t$ derived from fully connected networks with 8 hidden channels.

+

- $Q$ for the AlignFlow setup: a single fixed normal distribution.

+

- $Q$ for the RAUB setup: a learnable mixture of Gaussians with 3 components. Regularization coefficient $\lambda = 0$ .

+

### E.3. Regularized vs. Unregularized Experiment

+

1. $T$ for both: RealNVP with 8 channel-wise masks, with $s$ and $t$ derived from fully connected networks with 64 hidden channels.

+

2. $Q$ for both: a learnable mixture of Gaussians with 2 components.

+

3. $\lambda$ for the unregularized experiment: $\lambda = 0$

+

4. $\lambda$ for the regularized experiment: $\lambda = 0$

+

## F. Algorithm

+

### F.1. Pre-trained Model

+

One of the benefits of our method over the baselines is that it can harness the power of a pre-trained density model as our $Q$ . It is worth mentioning that using a pre-trained $Q$ keeps the gap between the upper bound and the GJSD small, so better performance can be theoretically ensured compared to baselines with a relatively simple $Q$ . Furthermore, it improves the statistical and computational efficiency of the training procedure: the amount of data required to train the networks $T$ can be significantly reduced, and the training time can be shortened. Based on these considerations, we leveraged a pre-trained state-of-the-art density model (Salimans et al., 2017) as our $Q$ throughout our experiments.

+

### F.2.
Train with fractional distributions

+

Most state-of-the-art density models do not have a nice convex structure across the entire domain of images. They typically have a sharp peak within a small range around the true distribution but remain noisy over the rest of the domain. This can cause the optimization to fall into a local minimum at the very beginning and slow down, or even prevent, the loss from reaching a more desirable low value. At the same time, this narrow range of convexity makes the transform function $T$ much harder to learn, so that the transformed distribution $T\left( x\right)$ fails to fit the distribution of ${Q}_{z}$ . The problem is even bigger if we start from a pre-trained model for ${Q}_{z}$ . Therefore, to circumvent this situation, we propose to use a fractional power of the distribution ${Q}_{z}$ in the first epochs and slowly increase the power up to 1 to recover the original loss objective.

+

The basic idea behind the fractional power is to expand the variance of ${Q}_{z}$ , so that the originally narrow range of convexity is widened and the algorithm starts from a smoother loss landscape. As training proceeds and $T\left( x\right)$ gets closer to the distribution of the density model, we can slightly increase the fractional power to shrink the range of convexity again. Once the power reaches 1, the objective is exactly our original loss function. By using these warm-start epochs, $T\left( x\right)$ starts in a relatively good region of ${Q}_{z}$ , which results in a more efficient learning curve for $T$ .
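As a concrete illustration, here is a minimal numpy sketch of this warm start for a 1-D affine flow $T(x) = ax + b$ with a single Gaussian standing in for ${Q}_{z}$ . The names (`fractional_aub_loss`, `beta_schedule`) and the linear annealing schedule are our own illustrative choices, not the paper's exact recipe:

```python
import numpy as np

def gaussian_logpdf(z, mu=0.0, sigma=1.0):
    """log N(z; mu, sigma^2), a stand-in for the density model Q_z."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((z - mu) / sigma) ** 2

def fractional_aub_loss(x, a, b, beta):
    """Monte-Carlo estimate of -E[log|J_T(x)| + beta * log Q(T(x))] for the
    affine flow T(x) = a*x + b (so log|J_T| = log|a|). beta < 1 tempers Q,
    which is equivalent to widening it; beta = 1 recovers the original loss."""
    z = a * x + b
    log_det = np.log(np.abs(a))
    return -np.mean(log_det + beta * gaussian_logpdf(z))

def beta_schedule(epoch, warmup_epochs, beta0=0.1):
    """Linearly anneal beta from beta0 up to 1 over the warm-up epochs."""
    return min(1.0, beta0 + (1.0 - beta0) * epoch / warmup_epochs)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)  # data far from Q_z's mode
for epoch in (0, 5, 10):
    beta = beta_schedule(epoch, warmup_epochs=10)
    print(epoch, beta, fractional_aub_loss(x, a=1.0, b=0.0, beta=beta))
```

Because the tempered term enters the loss as $\beta \log Q$, the warm-start epochs shrink the penalty for latents that land in the noisy tails of ${Q}_{z}$, and the full objective is recovered exactly once $\beta$ reaches 1.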
+

The implementation of this idea is also straightforward. Introducing the fractional power $\beta$ , we have

$$
\log \left( \left| {{J}_{{T}_{j}}\left( x\right) }\right| {Q}^{\beta }\left( {{T}_{j}\left( x\right) }\right) \right) = \log \left| {{J}_{{T}_{j}}\left( x\right) }\right| + \log {Q}^{\beta }\left( {{T}_{j}\left( x\right) }\right) = \log \left| {{J}_{{T}_{j}}\left( x\right) }\right| + \beta \log Q\left( {{T}_{j}\left( x\right) }\right) .
$$

\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/_l8XYZe88K4/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/_l8XYZe88K4/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4b5127d47d6f95c994e2c3d7c822e3c920f32f6a --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/_l8XYZe88K4/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,211 @@

Why be adversarial? Let's cooperate!:

+

Cooperative Dataset Alignment via JSD Upper Bound

+

§ ABSTRACT

+

Unsupervised dataset alignment estimates a transformation that maps two or more source domains to a shared aligned domain given only the domain datasets. This task has many applications including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min, i.e., cooperative, problem and can provide a natural evaluation metric for unsupervised dataset alignment.
We present preliminary results of our proposed framework on simulated and real-world data.

+

§ 1. INTRODUCTION

+

In many cases, a practitioner has access to multiple related but distinct datasets such as agricultural measurements from two farms, experimental data collected in different months, or sales data before and after a major event. Unsupervised dataset alignment (UDA) is the ML task aimed at aligning these related but distinct datasets in a shared space, which may be a latent space, without any pairing information between the two domains (i.e., unsupervised). This task has many applications such as generative modeling (e.g., (Zhu et al., 2017)), unsupervised domain adaptation (e.g., (Grover et al., 2020; Hu et al., 2018)), batch effect mitigation in biology (e.g., (Haghverdi et al., 2018)), and fairness-aware learning (e.g., (Zemel et al., 2013)).

+

The most common approach for obtaining such alignment transformations stems from Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which can be viewed as minimizing a lower bound on the Jensen-Shannon Divergence (JSD) between the real and generated distributions. The lower bound is tight if and only if the inner maximization is solved perfectly. CycleGAN (Zhu et al., 2017) maps between two datasets via two GAN objectives between the two domains and a cycle-consistency loss, which encourages approximate invertibility of the transformations. However, adversarial learning can be quite challenging in practice (see e.g. (Lucic et al., 2018; Kurach et al., 2019)) because of the competitive nature of the min-max optimization problem. Also, the research community only has reasonable model evaluation metrics for certain data types. Specifically, the commonly accepted Frechet Inception Distance (FID) (Heusel et al., 2017) is only applicable to image or auditory data, which have standard powerful pretrained classifiers, and even the implementation of FID can have evaluation issues (Parmar et al., 2021).
No clear metrics exist for tabular or other non-perceptual data.

+

Recently, flow-based methods that leverage invertible models have been proposed for the UDA task (Grover et al., 2020; Usman et al., 2020). AlignFlow (Grover et al., 2020) leverages invertible models to make the model cycle-consistent (i.e., invertible) by construction and introduces exact log-likelihood loss terms, derived from standard flow-based generative models, as complementary loss terms to the adversarial loss terms. Yet, AlignFlow still leverages adversarial learning and does not provide a general evaluation metric. Log-likelihood ratio minimizing flows (LRMF) (Usman et al., 2020) use invertible flow models and density estimation to avoid adversarial learning altogether and define a new metric based on the log-likelihood ratio. However, LRMF depends heavily on the density model class and can only partially align datasets if the target distribution is not in the chosen density model class. Additionally, the LRMF metric depends on this density model class and is only defined for two datasets.

+

Therefore, to avoid challenging adversarial learning and to generalize previous flow-based approaches, we propose a unified non-adversarial UDA framework, which we prove is equivalent to minimizing an upper bound on the JSD. Importantly, our problem reduces to a min-min, i.e., cooperative, problem, and the JSD upper bound can provide a natural evaluation metric for UDA that can be applied in any domain. Our framework has two parts: the outer minimization requires an invertible model and the inner minimization requires a density model (e.g., Gaussian mixture models or normalizing flows (Dinh et al., 2017)). We summarize our contributions as follows:

+

 * We prove that a minimization problem over density models is an upper bound on a generalized version of JSD that allows for more than two distributions.
Importantly, we also theoretically quantify the bound gap and show that it can be made tight if the density model class is flexible enough. + + * We use this JSD upper bound to derive a novel regularized loss function for UDA and explain its relationship to prior methods. + + * We demonstrate the feasibility of our method on simulated and real-world data. + +Notation We will denote distributions as ${P}_{X}\left( \mathbf{x}\right)$ where $X$ is the corresponding random variable. Invertible functions will be denoted by $T\left( \cdot \right)$ . We will use ${X}_{j} \sim {P}_{{X}_{j}}$ to denote the observed random variable from the $j$ -th distribution. We will use ${Z}_{j} \triangleq {T}_{j}\left( {X}_{j}\right) \sim {P}_{{Z}_{j}} \equiv {P}_{{T}_{j}\left( {X}_{j}\right) }$ to denote the latent random variable of the $j$ -th distribution after applying ${T}_{j}$ to ${X}_{j}$ (and note that ${X}_{j} = {T}_{j}^{-1}\left( {Z}_{j}\right)$ ). We will denote the mixtures of these observed or latent distributions as ${P}_{{X}_{\operatorname{mix}}} \triangleq \mathop{\sum }\limits_{j}{w}_{j}{P}_{{X}_{j}}$ and ${P}_{{Z}_{\operatorname{mix}}} \triangleq \mathop{\sum }\limits_{j}{w}_{j}{P}_{{Z}_{j}}$ , where $\mathbf{w}$ is a probability vector. We denote KL divergence, entropy, and cross entropy as $\mathrm{{KL}}\left( {\cdot , \cdot }\right) ,\mathrm{H}\left( \cdot \right)$ , and ${\mathrm{H}}_{\mathrm{c}}\left( {\cdot , \cdot }\right)$ , respectively, where $\mathrm{{KL}}\left( {P,Q}\right) = {\mathrm{H}}_{\mathrm{c}}\left( {P,Q}\right) - \mathrm{H}\left( P\right)$ . + +§ 2. REGULARIZED ALIGNMENT UPPER BOUND LOSS + +We first remind the reader of the generalized Jensen-Shannon divergence for more than two distributions, where the standard JSD is recovered if ${w}_{1} = {w}_{2} = {0.5}$ . + +Definition 1 (Generalized Jensen-Shannon Divergence (GJSD) (Lin,1991)). 
Given $k$ distributions ${\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k}$ and a corresponding probability weight vector $\mathbf{w}$ , the generalized Jensen-Shannon divergence is defined as (proof of equivalence in appendix):

$$
{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{X}_{1}},\cdots ,{P}_{{X}_{k}}}\right) \triangleq \mathop{\sum }\limits_{j}{w}_{j}\operatorname{KL}\left( {{P}_{{X}_{j}},\mathop{\sum }\limits_{{j}^{\prime }}{w}_{{j}^{\prime }}{P}_{{X}_{{j}^{\prime }}}}\right)
$$

$$
\equiv \mathrm{H}\left( {\mathop{\sum }\limits_{j}{w}_{j}{P}_{{X}_{j}}}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) . \tag{1}
$$

+

The goal of distribution alignment is to find a set of transformations ${\left\{ {T}_{j}\left( \cdot \right) \right\} }_{j = 1}^{k}$ (which will be invertible in our case) such that the latent distributions align, i.e., ${P}_{{T}_{j}\left( {X}_{j}\right) } = {P}_{{T}_{{j}^{\prime }}\left( {X}_{{j}^{\prime }}\right) }$ or equivalently ${P}_{{Z}_{j}} = {P}_{{Z}_{{j}^{\prime }}}$ for all $j \neq {j}^{\prime }$ . Given the properties of divergences, this alignment will happen if and only if $\operatorname{GJSD}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) = 0$ . Thus, ideally, we would minimize the GJSD directly with respect to the ${T}_{j}$ , i.e.,

$$
\mathop{\min }\limits_{{{T}_{1},\cdots ,{T}_{k} \in \mathcal{T}}}\operatorname{GJSD}\left( {{P}_{{T}_{1}\left( {X}_{1}\right) },\cdots ,{P}_{{T}_{k}\left( {X}_{k}\right) }}\right) \tag{2}
$$

$$
\equiv \mathop{\min }\limits_{{{T}_{1},\cdots ,{T}_{k} \in \mathcal{T}}}\mathrm{H}\left( {\mathop{\sum }\limits_{j}{w}_{j}{P}_{{T}_{j}\left( {X}_{j}\right) }}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{T}_{j}\left( {X}_{j}\right) }\right) ,
$$

+

where $\mathcal{T}$ is a class of invertible functions.

+

§ 2.1. GJSD UPPER BOUND

+

However, we cannot evaluate the entropy terms in Eqn.
3 because we do not know the density of ${P}_{{X}_{j}}$ ; we only have samples from ${P}_{{X}_{j}}$ . Therefore, we will upper bound the first entropy term in Eqn. 3, $\mathrm{H}\left( {\mathop{\sum }\limits_{j}{w}_{j}{P}_{{T}_{j}\left( {X}_{j}\right) }}\right)$ , using an auxiliary density model, and decompose the other entropy terms by leveraging the change of variables formula for invertible functions.

+

Theorem 1 (GJSD Upper Bound). Given an auxiliary density model class $\mathcal{Q}$ , we form a GJSD upper bound:

$$
{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) \leq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) ,
$$

where the bound gap is exactly $\mathop{\min }\limits_{{Q \in \mathcal{Q}}}\operatorname{KL}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right)$ .

+

Proof of Theorem 1. For any $Q \in \mathcal{Q}$ , we have the following upper bound:

$$
{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right)
$$

$$
= \underset{ = 0}{\underbrace{{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) - {\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) }} + \mathrm{H}\left( {P}_{{Z}_{\operatorname{mix}}}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right)
$$

$$
= {\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) - \mathrm{{KL}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right)
$$

$$
\leq {\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) ,
$$

where the inequality is by the fact that KL divergence is non-negative and the bound gap is equal to $\operatorname{KL}\left(
{{P}_{{Z}_{\operatorname{mix}}},Q}\right)$ . The $Q$ that achieves the minimum in the upper bound is equivalent to the $Q$ that minimizes the bound gap, i.e., + +$$ +{Q}^{ * } = \underset{Q \in \mathcal{Q}}{\arg \min }{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) \underset{\text{ Constant w.r.t. }Q}{\underbrace{-\mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) }} \tag{3} +$$ + +$$ += \underset{Q \in \mathcal{Q}}{\arg \min }{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) \underset{\text{ Constant w.r.t. }Q}{\underbrace{-\mathrm{H}\left( {P}_{{Z}_{\operatorname{mix}}}\right) }} \tag{4} +$$ + +$$ += \underset{Q \in \mathcal{Q}}{\arg \min }\mathrm{{KL}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) . \tag{5} +$$ + +The tightness of the bound depends on how well the class of density models $\mathcal{Q}$ (e.g., mixture models, normalizing flows, or autoregressive densities) can approximate ${P}_{{Z}_{\operatorname{mix}}}$ ; notably, the bound can be made tight if ${P}_{{Z}_{\operatorname{mix}}} \in \mathcal{Q}$ . Also, one key feature of this upper bound is that the cross entropy term can be evaluated using only samples from ${P}_{{X}_{j}}$ and the transformations ${T}_{j}$ , i.e., ${\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) =$ $\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log Q\left( {{T}_{j}\left( {\mathbf{x}}_{j}\right) }\right) }\right\rbrack$ . However, we still cannot evaluate the other entropy terms $\mathrm{H}\left( {P}_{{Z}_{j}}\right)$ since we do not know the densities of ${P}_{{Z}_{j}}$ (or ${P}_{{X}_{j}}$ ). Thus, we leverage the fact that the ${T}_{j}$ functions are invertible to define an entropy change of variables. + +Lemma 2 (Entropy Change of Variables). Let $X \sim {P}_{X}$ and $Z \triangleq T\left( X\right) \sim {P}_{Z}$ , where $T$ is an invertible transformation. 
The entropy of $Z$ can be decomposed as follows:

$$
\mathrm{H}\left( {P}_{Z}\right) = \mathrm{H}\left( {P}_{X}\right) + {\mathbb{E}}_{{P}_{X}}\left\lbrack {\log \left| {{J}_{T}\left( \mathbf{x}\right) }\right| }\right\rbrack , \tag{6}
$$

where $\left| {{J}_{T}\left( \mathbf{x}\right) }\right|$ is the absolute value of the determinant of the Jacobian of $T$ .

+

The key insight from this lemma is that $\mathrm{H}\left( {P}_{X}\right)$ is a constant with respect to $T$ and can thus be ignored when optimizing $T$ , while ${\mathbb{E}}_{{P}_{X}}\left\lbrack {\log \left| {{J}_{T}\left( \mathbf{x}\right) }\right| }\right\rbrack$ can be approximated using only samples from ${P}_{X}$ . Combining Theorem 1 and Lemma 2, we arrive at our final objective function, which is equivalent to minimizing an upper bound on the GJSD:

$$
{\operatorname{GJSD}}_{\mathbf{w}}\left( {{P}_{{Z}_{1}},\cdots ,{P}_{{Z}_{k}}}\right) \leq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}{\mathrm{H}}_{\mathrm{c}}\left( {{P}_{{Z}_{\operatorname{mix}}},Q}\right) - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{Z}_{j}}\right) \tag{7}
$$

$$
= \mathop{\min }\limits_{{Q \in \mathcal{Q}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left( {Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| }\right) }\right\rbrack - \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right) , \tag{8}
$$

where the last term $- \mathop{\sum }\limits_{j}{w}_{j}\mathrm{H}\left( {P}_{{X}_{j}}\right)$ is constant with respect to the ${T}_{j}$ functions and so can be ignored. We formally define this loss function as follows.

+

Definition 2 (Alignment Upper Bound Loss).
Given $k$ continuous distributions ${\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k}$ , a class of continuous distributions $\mathcal{Q}$ , and a probability weight vector $\mathbf{w}$ , the alignment upper bound loss is defined as follows: + +$$ +{\mathcal{L}}_{\mathrm{{AUB}}}\left( {{T}_{1},\cdots ,{T}_{k};{\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k},\mathcal{Q},\mathbf{w}}\right) +$$ + +$$ +\triangleq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack , \tag{9} +$$ + +where ${T}_{j}$ are invertible and $\left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right|$ is the absolute value of the Jacobian determinant. + +Notice that this alignment loss can be seen as learning the best base distribution given fixed flow models ${T}_{j}$ . We now consider the theoretical optimum if we optimize over all invertible functions. + +Theorem 3 (Alignment at Global Minimum of ${\mathcal{L}}_{\mathrm{{AUB}}}$ ). If ${\mathcal{L}}_{\mathrm{{AUB}}}$ is minimized over the class of all invertible functions, a global minimum of ${\mathcal{L}}_{\text{ AUB }}$ implies that the latent distributions are aligned, i.e., ${P}_{{T}_{j}\left( {X}_{j}\right) } = {P}_{{T}_{{j}^{\prime }}\left( {X}_{{j}^{\prime }}\right) }$ for all $j \neq {j}^{\prime }$ . Notably, this result holds regardless of $\mathcal{Q}$ . + +Informally, this can be proved by showing that the problem decouples into separate normalizing flow losses where $Q$ is the base distribution and the optimum is achieved only if ${P}_{{T}_{j}\left( {X}_{j}\right) } = Q$ for all ${T}_{j}$ (formal proof in the appendix). This alignment of the latent distributions also implies the translation between any of the observed component distributions. The proof follows directly from Theorem 3 and the change of variables formula. 
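To make Def. 2 and Theorem 3 concrete, the following numpy sketch (our own illustration; the name `aub_loss` and the 1-D affine setup are not from the paper) evaluates ${\mathcal{L}}_{\mathrm{AUB}}$ with equal weights for 1-D affine flows and a single-Gaussian $\mathcal{Q}$, for which the inner minimization over $Q$ has a closed form: moment-matching the pooled latent samples ${Z}_{\operatorname{mix}}$.

```python
import numpy as np

def gaussian_logpdf(z, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (z - mu) ** 2 / var

def aub_loss(datasets, affine_Ts):
    """L_AUB (Eq. 9) with equal weights for 1-D affine flows
    T_j(x) = a_j * x + b_j, so log|J_{T_j}| = log|a_j|.

    For the Gaussian family, the inner min over Q is solved in closed form
    by matching Q's mean and variance to the pooled latents Z_mix."""
    latents = [a * x + b for x, (a, b) in zip(datasets, affine_Ts)]
    pooled = np.concatenate(latents)
    mu, var = pooled.mean(), pooled.var()
    return np.mean([np.mean(-np.log(abs(a)) - gaussian_logpdf(z, mu, var))
                    for z, (a, _) in zip(latents, affine_Ts)])

rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 2000)  # P_X1 = N(0, 1)
x2 = rng.normal(4.0, 2.0, 2000)  # P_X2 = N(4, 4)
misaligned = aub_loss([x1, x2], [(1.0, 0.0), (1.0, 0.0)])  # both T_j = Id
aligned = aub_loss([x1, x2], [(1.0, 0.0), (0.5, -2.0)])    # T_2 standardizes X_2
print(misaligned, aligned)
```

Here $T_2(x) = 0.5x - 2$ maps $N(4,4)$ onto $N(0,1)$, so the two latent distributions coincide and the loss is strictly lower than with the identity transforms, illustrating that alignment is rewarded even under this weak $\mathcal{Q}$.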
Corollary 4 (Translation at Global Minimum of ${\mathcal{L}}_{\mathrm{{AUB}}}$ ). Similar to Theorem 3, a global minimum of ${\mathcal{L}}_{\mathrm{{AUB}}}$ implies translation between any component distributions using the inverses of ${T}_{j}$ , i.e., ${P}_{{T}_{{j}^{\prime }}^{-1}\left( {{T}_{j}\left( {X}_{j}\right) }\right) } = {P}_{{X}_{{j}^{\prime }}}$ for all $j \neq {j}^{\prime }$ .

+

§ 2.2. REGULARIZATION VIA TRANSPORTATION COST

+

While the alignment objective is the most challenging part of UDA, we argue that regularization is also critical for practical and stable alignment (or translation) between datasets, because there are many optimal alignment solutions, indeed infinitely many in most cases (see appendix for two examples). We alleviate this issue by adding an expected transportation cost (usually the squared Euclidean distance) as a regularization term to our objective, inspired by optimal transport (OT) concepts.

+

Definition 3 (Regularized Alignment Upper Bound Loss). Given a similar setup as in Def. 2 and a transportation cost function $c\left( {a,b}\right) \geq 0$ for transporting a point from $a$ to $b$ , the regularized alignment upper bound loss is defined as:

$$
{\mathcal{L}}_{\text{ RAUB }}\left( {{T}_{1},\cdots ,{T}_{k};{\left\{ {P}_{{X}_{j}}\right\} }_{j = 1}^{k},\mathcal{Q},\mathbf{w},\lambda ,c}\right) \triangleq \mathop{\min }\limits_{{Q \in \mathcal{Q}}}\mathop{\sum }\limits_{j}{w}_{j}{\mathbb{E}}_{{P}_{{X}_{j}}}\left\lbrack {-\log \left| {{J}_{{T}_{j}}\left( \mathbf{x}\right) }\right| Q\left( {{T}_{j}\left( \mathbf{x}\right) }\right) + {\lambda c}\left( {\mathbf{x},{T}_{j}\left( \mathbf{x}\right) }\right) }\right\rbrack . \tag{10}
$$

+

§ 2.3.
RELATIONSHIP TO PRIOR WORKS

+

AlignFlow is a special case without adversarial terms AlignFlow (Grover et al., 2020) without adversarial loss terms is a special case of our method for two distributions where the density model class $\mathcal{Q}$ only contains the standard normal distribution (i.e., a singleton class) and no regularization is used (i.e., $\lambda = 0$ ). Thus, AlignFlow can be viewed as initially optimizing a poor upper bound on JSD; however, the JSD bound becomes tighter as training progresses because the latent distributions independently move towards the same normal distribution.

+

LRMF is a special case with only one transformation Log-likelihood ratio minimizing flows (LRMF) (Usman et al., 2020) is also a special case of our method for only two distributions, where one transformation is fixed at the identity (i.e., ${T}_{2} = \mathrm{Id}$ ) and no regularization is applied (i.e., $\lambda = 0$ ). While the final practical LRMF objective is a special case of ours, the theory is developed from a different but complementary perspective. The LRMF metric developed requires an assumption about a given density model

+

Figure 1: Top row is the latent space and bottom row is the data translated into the other space. (a-c) LRMF, which only has one transformation $T$ , may not be able to align the datasets if the density model class $\mathcal{Q}$ is not expressive enough (in this case Gaussian distributions), while using two transformations as in our framework can align them. (d-f) AlignFlow (without adversarial terms) may not align because ${Q}_{z}$ is fixed at a standard normal, while our approach with a learnable mixture of Gaussians for ${Q}_{z}$ is able to learn an alignment (both use the same ${T}_{j}$ models).
+

Figure 2: An unregularized alignment loss (top) can lead to excessive and unexpected movement of points in the latent representation (lines connect transported points), while our regularized alignment loss (bottom) yields a unique and regularized solution that moves points significantly less and is closer to the identity function.

+

Figure 3: Preliminary results on high-dimensional real-world datasets demonstrate the feasibility of our method. The complex translation between the MNIST and USPS datasets (left) does not seem to overfit, while the simple translation between MNIST 0 and 1 may overfit (as seen in the right-most column of the test set). The first and third columns are the original images, and the second and fourth columns are the translated images.

+

class, which enables a zero point (or absolute value) of the metric to be estimated but requires fitting extra domain density models. Usman et al. (2020) also do not uncover the connection of the objective as an upper bound on JSD regardless of the density model class. Additionally, to ensure alignment, LRMF requires that the density model class includes the true target distribution because only one invertible transform is used, while our approach can theoretically align even if the shared density model class is weak (see Theorem 3 and our simulated experiments).

+

Cooperative versus Adversarial Networks Analogous to the generator $G$ and the discriminator $D$ in adversarial learning, our framework has two main networks, ${T}_{j}$ and ${Q}_{z}$ .
We can use any invertible function for ${T}_{j}$ (e.g., coupling-based flows (Dinh et al., 2017), neural ODE flows (Grathwohl et al., 2018), or residual flows (Chen et al., 2019)) and any (approximate) density model for ${Q}_{z}$ (e.g., kernel densities (in low dimensions), mixture models, autoregressive densities (Salimans et al., 2017), normalizing flows (Kingma and Dhariwal, 2018), or even VAEs (Kingma and Welling, 2019)). Thus, our framework has modularity similar to adversarial approaches. In contrast, we have a min-min, i.e., cooperative, optimization problem, but our transformations must be invertible and the auxiliary density model ${Q}_{z}$ may be more difficult to train than the auxiliary discriminator $D$ . We expect these limitations to be alleviated as new invertible models and density models are continually being developed.

+

§ 3. EXPERIMENTS AND CONCLUSION

+

We first demonstrate the differences between our approach and LRMF and AlignFlow in Fig. 1, and the importance of regularization in Fig. 2 (see appendix for details). We also demonstrate the feasibility of our approach for high-dimensional datasets in some preliminary experiments shown in Fig. 3 (see appendix for details). Yet, scaling up our framework in practice is still a fundamental challenge, and our approach inherits some weaknesses of JSD, which may not give useful gradient information in certain cases. Thus, while these experiments and our theoretical work build a unified foundation for cooperative alignment learning, our work also opens up new theoretical and practical questions.
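As a minimal sketch of this cooperative structure (our own illustration, using 1-D affine flows for ${T}_{j}$ and a single-Gaussian ${Q}_{z}$, with plain coordinate descent rather than the exact training procedure): both "players" minimize the same objective, so alternating updates behave like coordinate descent rather than a min-max game.

```python
import numpy as np

def gaussian_logpdf(z, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (z - mu) ** 2 / var

def shared_loss(data, params, mu, var):
    """The single objective both players minimize (Eq. 9, equal weights,
    1-D affine flows T_j(x) = a*x + b and a Gaussian Q = N(mu, var))."""
    return np.mean([np.mean(-np.log(abs(a)) - gaussian_logpdf(a * x + b, mu, var))
                    for x, (a, b) in zip(data, params)])

def q_step(data, params):
    """Q-player: closed-form inner min for a Gaussian Q (moment matching)."""
    pooled = np.concatenate([a * x + b for x, (a, b) in zip(data, params)])
    return pooled.mean(), pooled.var()

def t_step(data, params, mu, var, lr=0.05, steps=50):
    """T-player: gradient descent on each (a_j, b_j), with a_j > 0. Per-sample
    loss is -log(a) + (z - mu)^2 / (2 var) + const, where z = a*x + b."""
    new = []
    for x, (a, b) in zip(data, params):
        for _ in range(steps):
            z = a * x + b
            a -= lr * np.mean(-1.0 / a + (z - mu) * x / var)
            b -= lr * np.mean((z - mu) / var)
        new.append((a, b))
    return new

rng = np.random.default_rng(1)
data = [rng.normal(0.0, 1.0, 1000), rng.normal(3.0, 0.5, 1000)]
params = [(1.0, 0.0), (1.0, 0.0)]
mu, var = q_step(data, params)
history = [shared_loss(data, params, mu, var)]
for _ in range(5):
    params = t_step(data, params, mu, var)    # outer min over the flows
    mu, var = q_step(data, params)            # inner min over Q
    history.append(shared_loss(data, params, mu, var))
print(history)  # the shared loss decreases over iterations: min-min, not min-max
```

Because the Q-step exactly minimizes the shared loss for the current flows, and the T-step descends the very same loss, there is no adversarial tug-of-war; the recorded loss history can serve directly as a training and evaluation signal.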
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/agj4cdOfrAP/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/agj4cdOfrAP/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..9ffa081e634253b9972f7f7c3ce6c4c4294364f2 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/agj4cdOfrAP/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,883 @@

# On Fast Sampling of Diffusion Probabilistic Models

+

Anonymous Authors ${}^{1}$

+

## Abstract

+

In this work, we propose FastDPM, a unified framework for fast sampling in diffusion probabilistic models. FastDPM generalizes previous methods and gives rise to new algorithms with improved sample quality. We systematically investigate the fast sampling methods under this framework across different domains, on different datasets, and with different amounts of conditional information provided for generation. We find that the performance of a particular method depends on the data domain (e.g., image or audio), the trade-off between sampling speed and sample quality, and the amount of conditional information. We further provide insights and recipes on the choice of methods for practitioners.

+

## 1. Introduction

+

Diffusion probabilistic models are a class of deep generative models that use Markov chains to gradually transform between a simple distribution (e.g., isotropic Gaussian) and the complex data distribution (Sohl-Dickstein et al., 2015; Ho et al., 2020). Most recently, these models have obtained state-of-the-art results in several important domains, including image synthesis (Ho et al., 2020; Song et al., 2020b; Dhariwal & Nichol, 2021), audio synthesis (Kong et al., 2020b; Chen et al., 2020), and 3-D point cloud generation (Luo & Hu, 2021; Zhou et al., 2021).
We will use "diffusion models" as shorthand to refer to these models.

Diffusion models usually comprise: $i$) a parameter-free $T$-step Markov chain named the diffusion process, which gradually adds random noise to the data, and $ii$) a parameterized $T$-step Markov chain called the reverse or denoising process, which acts as a learned denoising function that removes the added noise. The likelihood in diffusion models is intractable, but they can be efficiently trained by optimizing a variant of the variational lower bound. In particular, Ho et al. (2020) propose a certain parameterization called the denoising diffusion probabilistic model (DDPM) and show its connection with denoising score matching (Song & Ermon, 2019), so the reverse process can be viewed as sampling from a score-based model using Langevin dynamics. DDPM can reliably produce high-fidelity samples with large model capacity and outperforms the state-of-the-art models in image and audio domains (Dhariwal & Nichol, 2021; Kong et al., 2020b). However, a noticeable limitation of diffusion models is their expensive denoising or sampling process. For example, DDPM requires a Markov chain with $T = {1000}$ steps to generate high quality image samples (Ho et al., 2020), and DiffWave requires $T = {200}$ to obtain high-fidelity audio synthesis (Kong et al., 2020b). In other words, one has to run the forward-pass of the neural network $T$ times to generate a sample, which is much slower than the state-of-the-art GANs or flow-based models for image and audio synthesis (e.g., Karras et al., 2020; Kingma & Dhariwal, 2018; Kong et al., 2020a; Ping et al., 2020).

To deal with this limitation, several methods have been proposed to reduce the length of the reverse process to $S \ll T$ steps.
One class of methods computes continuous noise levels based on discrete diffusion steps and retrains a new model conditioned on these continuous noise levels (Song & Ermon, 2019; Chen et al., 2020; Okamoto et al., 2021; San-Roman et al., 2021). Then, a shorter reverse process can be obtained by carefully choosing a small set (size $S$) of noise levels. However, these methods cannot reuse the pretrained diffusion models, because the state-of-the-art DDPM models are conditioned on discrete diffusion steps (Ho et al., 2020; Dhariwal & Nichol, 2021). It is also unclear whether diffusion models conditioned on continuous noise levels can achieve sample quality comparable to the state-of-the-art DDPMs on challenging unconditional image and audio synthesis tasks (Dhariwal & Nichol, 2021; Kong et al., 2020b). Another class of methods directly approximates the original reverse process of DDPM models with shorter ones (of length $S$), which are conditioned on discrete diffusion steps (Song et al., 2020a; Kong et al., 2020b). Although both classes of methods have shown the trade-off between sampling speed and sample quality (i.e., a larger $S$ leads to higher sample quality), the fast sampling methods without retraining are more advantageous for fast iteration and deployment, while still keeping high-fidelity synthesis with a small number of steps in the reverse process (e.g., $S = 6$ in Kong et al. (2020b)).

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

In this work, we propose FastDPM, a unified framework of fast sampling methods for diffusion models without retraining. The core idea of FastDPM is to i) generalize discrete diffusion steps to continuous diffusion steps, and ii) design a bijective mapping between continuous diffusion steps and continuous noise levels.
Then, we use this bijection to construct an approximate diffusion process and an approximate reverse process, both of which have length $S \ll T$.

FastDPM includes and generalizes the fast sampling algorithms from denoising diffusion implicit models (DDIM) (Song et al., 2020a) and DiffWave (Kong et al., 2020b). In detail, FastDPM offers two ways to construct the approximate diffusion process: selecting $S$ steps in the original diffusion process, or more flexibly, choosing $S$ variances. FastDPM also offers two ways to construct the approximate reverse process: using the stochastic DDPM reverse process (DDPM-rev), or using the implicit (deterministic) DDIM reverse process (DDIM-rev). We can control the amount of stochasticity in the reverse process of FastDPM as in Song et al. (2020a).

FastDPM gives rise to new algorithms with better sample quality than previous methods when the length of the approximate reverse process $S$ is small. We then extensively evaluate the family of FastDPM methods across image and audio domains. We find that the deterministic DDIM-rev significantly outperforms the stochastic DDPM-rev in image generation tasks, but DDPM-rev significantly outperforms DDIM-rev in audio synthesis tasks. Finally, we investigate the performance of different methods by varying the amount of conditional information. We find that different amounts of conditional information call for different amounts of stochasticity in the reverse process of FastDPM.

We discuss related work in Appendix A.

## 2. Diffusion Models

Let $d$ be the data dimension. Let ${p}_{\text{data }}$ be the data distribution and ${p}_{\text{latent }} = \mathcal{N}\left( {0,{I}_{d \times d}}\right)$ be the latent distribution. Then, the denoising diffusion probabilistic model (DDPM, Sohl-Dickstein et al., 2015; Ho et al., 2020) is a deep generative model consisting of two Markov chains called the diffusion and reverse processes, respectively.
The length of each Markov chain is $T$, which is called the number of diffusion or reverse steps. The diffusion process gradually adds Gaussian noise to the data distribution until the noisy data distribution is close to the latent distribution. Formally, the diffusion process from data ${x}_{0} \sim {p}_{\text{data }}$ to the latent variable ${x}_{T}$ is defined as $q\left( {{x}_{1},\cdots ,{x}_{T} \mid {x}_{0}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{x}_{t} \mid {x}_{t - 1}}\right)$, where each $q\left( {{x}_{t} \mid {x}_{t - 1}}\right) = \mathcal{N}\left( {{x}_{t};\sqrt{1 - {\beta }_{t}}{x}_{t - 1},{\beta }_{t}I}\right)$ for some small constant ${\beta }_{t} > 0$. The hyperparameters ${\beta }_{1},\cdots ,{\beta }_{T}$ are called the variance schedule.

The reverse process aims to eliminate the noise added in each diffusion step. Formally, the reverse process from ${x}_{T} \sim {p}_{\text{latent }}$ to ${x}_{0}$ is defined as ${p}_{\theta }\left( {{x}_{0},\cdots ,{x}_{T - 1} \mid {x}_{T}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}{p}_{\theta }\left( {{x}_{t - 1} \mid {x}_{t}}\right)$, where each ${p}_{\theta }\left( {{x}_{t - 1} \mid {x}_{t}}\right)$ is defined as $\mathcal{N}\left( {{x}_{t - 1};{\mu }_{\theta }\left( {{x}_{t}, t}\right) ,{\sigma }_{t}^{2}I}\right)$; the mean ${\mu }_{\theta }\left( {{x}_{t}, t}\right)$ is parameterized through a neural network and the variance ${\sigma }_{t}$ is a time-step-dependent constant. Based on the reverse process, the sampling process is to first draw ${x}_{T} \sim \mathcal{N}\left( {0, I}\right)$, then draw ${x}_{t - 1} \sim {p}_{\theta }\left( {{x}_{t - 1} \mid {x}_{t}}\right)$ for $t = T, T - 1,\cdots ,1$, and finally output ${x}_{0}$.

The training objective of DDPM is based on the variational evidence lower bound (ELBO). Under a certain parameterization introduced by Ho et al. (2020), the objective can be largely simplified.
One may first define constants ${\alpha }_{t} = 1 - {\beta }_{t}$, ${\bar{\alpha }}_{t} = \mathop{\prod }\limits_{{i = 1}}^{t}{\alpha }_{i}$, and ${\widetilde{\beta }}_{t} = \frac{1 - {\bar{\alpha }}_{t - 1}}{1 - {\bar{\alpha }}_{t}}{\beta }_{t}$ for $t > 1$ and ${\widetilde{\beta }}_{1} = {\beta }_{1}$. Then, a notable property of diffusion models is

$$
q\left( {{x}_{t} \mid {x}_{0}}\right) = \mathcal{N}\left( {{x}_{t};\sqrt{{\bar{\alpha }}_{t}}{x}_{0},\left( {1 - {\bar{\alpha }}_{t}}\right) I}\right) , \tag{1}
$$

thus one can directly sample ${x}_{t}$ given ${x}_{0}$ (see Appendix B.1 for the derivation). Furthermore, one may parameterize ${\mu }_{\theta }\left( {{x}_{t}, t}\right) = \frac{1}{\sqrt{{\alpha }_{t}}}\left( {{x}_{t} - \frac{{\beta }_{t}}{\sqrt{1 - {\bar{\alpha }}_{t}}}{\epsilon }_{\theta }\left( {{x}_{t}, t}\right) }\right)$, where ${\epsilon }_{\theta }$ is a neural network taking ${x}_{t}$ and the diffusion-step $t$ as inputs. In addition, ${\sigma }_{t}$ is simply parameterized as ${\widetilde{\beta }}_{t}^{\frac{1}{2}}$. Ho et al. (2020) show that minimizing the following unweighted variant of the ELBO leads to higher generation quality:

$$
\mathop{\min }\limits_{\theta }{L}_{\text{unweighted }}\left( \theta \right) = {\mathbb{E}}_{{x}_{0},\epsilon , t}{\begin{Vmatrix}\epsilon - {\epsilon }_{\theta }\left( {x}_{t}, t\right) \end{Vmatrix}}_{2}^{2}, \tag{2}
$$

where $\epsilon \sim \mathcal{N}\left( {0, I}\right)$, ${x}_{0} \sim {p}_{\text{data }}$, $t$ is uniformly taken from $1,\cdots , T$, and ${x}_{t} = \sqrt{{\bar{\alpha }}_{t}} \cdot {x}_{0} + \sqrt{1 - {\bar{\alpha }}_{t}} \cdot \epsilon$ from Eq. (1).

## 3. FastDPM: A Unified Framework for Fast Sampling in Diffusion Models

In order to achieve high-fidelity synthesis, the number of diffusion steps $T$ in DDPM is set to be very large so that $q\left( {{x}_{T} \mid {x}_{0}}\right)$ is close to ${p}_{\text{latent }}$.
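Eq. (1) can be sanity-checked by simulating the stepwise diffusion and comparing the empirical moments of ${x}_{T} \mid {x}_{0}$ against the closed form. The sketch below uses a toy linear schedule ($T$ and the $\beta$ range are assumed illustrative values, not the paper's setup):

```python
import math
import random

# Toy linear variance schedule (assumed values for illustration only).
T = 50
beta = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
alpha = [1.0 - b for b in beta]

alpha_bar = []          # \bar{alpha}_t = prod_{i <= t} alpha_i
prod = 1.0
for a in alpha:
    prod *= a
    alpha_bar.append(prod)

rng = random.Random(0)

def diffuse_stepwise(x0):
    """Apply q(x_t | x_{t-1}) one Gaussian step at a time, T times."""
    x = x0
    for t in range(T):
        x = math.sqrt(alpha[t]) * x + math.sqrt(beta[t]) * rng.gauss(0, 1)
    return x

# Empirical mean/variance of x_T given a fixed scalar x_0, vs. Eq. (1).
x0, n = 1.0, 20000
samples = [diffuse_stepwise(x0) for _ in range(n)]
mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n

print(abs(mean - math.sqrt(alpha_bar[-1]) * x0))  # small (sampling error only)
print(abs(var - (1.0 - alpha_bar[-1])))           # small (sampling error only)
```

Both deviations are on the order of the Monte Carlo error, confirming that the $T$-step chain and the one-shot sampler of Eq. (1) agree in distribution.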
For example, $T = {1000}$ in image synthesis (Ho et al., 2020) and $T = {200}$ in audio synthesis (Kong et al., 2020b). Then, sampling from DDPM requires running through the network ${\epsilon }_{\theta }$ as many as $T$ times, which can be very slow. In this section, we propose FastDPM, which approximates the pretrained DDPM via much shorter diffusion and reverse processes of length $S \ll T$, so it can generate a sample by running the network only $S$ times. The core idea of FastDPM is to: i) generalize discrete diffusion steps to continuous diffusion steps and then ii) design a bijective mapping between continuous diffusion steps and continuous noise levels, where these noise levels indicate the amount of noise in the data. Finally, we use this bijective mapping to construct an approximate diffusion process and an approximate reverse process, respectively.

### 3.1. Bijective Mapping between Continuous Diffusion Steps and Noise Levels

In this section, we generalize discrete (integer) diffusion steps to continuous (real-valued) diffusion steps. Then, we introduce a bijective mapping $\mathcal{R}$, with inverse $\mathcal{T} = {\mathcal{R}}^{-1}$, between continuous diffusion steps $t$ and noise levels $r$.

Define $\mathcal{R}$. We start with an integer diffusion step $t$. From Eq. (1), one can observe ${x}_{t} = \sqrt{{\bar{\alpha }}_{t}} \cdot {x}_{0} + \sqrt{1 - {\bar{\alpha }}_{t}} \cdot \epsilon$ where $\epsilon \sim \mathcal{N}\left( {0, I}\right)$, thus sampling ${x}_{t}$ given ${x}_{0}$ is equivalent to adding a Gaussian noise to ${x}_{0}$. Based on this observation, we define the noise level at step $t$ as $\mathcal{R}\left( t\right) = \sqrt{{\bar{\alpha }}_{t}}$, which means ${x}_{t}$ is composed of an $\mathcal{R}\left( t\right)$ fraction of the data ${x}_{0}$ and a $\left( {1 - \mathcal{R}\left( t\right) }\right)$ fraction of white noise. For example, $\mathcal{R}\left( t\right) = 1$ means no noise and $\mathcal{R}\left( t\right) = 0$ means pure white noise.
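At integer steps, the noise levels are just square roots of the running product ${\bar{\alpha }}_{t}$. A minimal sketch (${\beta }_{T} = {0.02}$ follows footnote 1; $T = 1000$ and ${\beta }_{1} = {10}^{-4}$ are assumed illustrative values):

```python
import math

# Discrete noise levels R(t) = sqrt(\bar{alpha}_t) under a linear schedule.
# beta_T = 0.02 follows the paper's footnote; T and beta_1 are assumed values.
T = 1000
beta = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]

noise_level = []
prod = 1.0
for b in beta:
    prod *= 1.0 - b                  # running product \bar{alpha}_t
    noise_level.append(math.sqrt(prod))

print(noise_level[0])   # close to 1: x_1 is almost all signal
print(noise_level[-1])  # close to 0: x_T is almost pure white noise
```

The sequence decreases strictly from nearly 1 to nearly 0, which is what makes the inversion in the next subsection well-posed.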
Next, we extend the domain of $\mathcal{R}$ to real values. Assume that the variance schedule ${\left\{ {\beta }_{t}\right\} }_{t = 1}^{T}$ is linear: ${\beta }_{i} = {\beta }_{1} + \left( {i - 1}\right) {\Delta \beta }$, where ${\Delta \beta } = \frac{{\beta }_{T} - {\beta }_{1}}{T - 1}$ (Ho et al., 2020). We further define an auxiliary constant $\widehat{\beta } = \frac{1 - {\beta }_{1}}{\Delta \beta }$, which is $\gg T$ assuming that ${\beta }_{T} \ll 1$. ${}^{1}$ Then, we have ${\bar{\alpha }}_{t} = {\left( \Delta \beta \right) }^{t}\Gamma \left( {\widehat{\beta } + 1}\right) /\Gamma \left( {\widehat{\beta } - t + 1}\right)$. Because the Gamma function $\Gamma$ is well-defined on $\left( {0,\infty }\right)$, this gives rise to a natural extension of ${\bar{\alpha }}_{t}$ to continuous diffusion steps $t$. As a result, for $t \in \lbrack 0,\widehat{\beta })$, we define the noise level at $t$ as:

$$
\mathcal{R}\left( t\right) = {\left( \Delta \beta \right) }^{\frac{t}{2}}\Gamma {\left( \widehat{\beta } + 1\right) }^{\frac{1}{2}}\Gamma {\left( \widehat{\beta } - t + 1\right) }^{-\frac{1}{2}}. \tag{3}
$$

Define $\mathcal{T}$. For any noise level $r \in \left( {0,1}\right)$, its corresponding (continuous) diffusion step, $\mathcal{T}\left( r\right)$, is defined by inverting $\mathcal{R}$: $\mathcal{T}\left( r\right) = {\mathcal{R}}^{-1}\left( r\right)$. Given a noise level $r$, we numerically solve $t = \mathcal{T}\left( r\right)$ by applying a binary search based on Eq. (3). We have $\mathcal{T}\left( r\right) \in \left\lbrack {t, t + 1}\right\rbrack$ for $r \in \left\lbrack {\sqrt{{\bar{\alpha }}_{t + 1}},\sqrt{{\bar{\alpha }}_{t}}}\right\rbrack$, and this provides a good initial bracket for the binary search. Experimentally, we find that the binary search converges in no more than 20 iterations.

### 3.2. Approximate the Diffusion Process

Let ${\widehat{x}}_{0} \sim {p}_{\text{data }}$.
Given a sequence of noise levels $1 > {r}_{1} > {r}_{2} > \cdots > {r}_{S} > 0$, we aim to construct each step in the approximate diffusion process as ${\widehat{x}}_{s} \sim \mathcal{N}\left( {{\widehat{x}}_{s};{r}_{s}{\widehat{x}}_{0},\left( {1 - {r}_{s}^{2}}\right) I}\right)$. To achieve this goal, we define ${\gamma }_{s} = {r}_{s}^{2}/{r}_{s - 1}^{2}$, compute the corresponding variances as ${\eta }_{s} = 1 - {\gamma }_{s} = 1 - {r}_{s}^{2}/{r}_{s - 1}^{2}$, and then define the transition probability in the approximate diffusion process as

$$
q\left( {{\widehat{x}}_{s} \mid {\widehat{x}}_{s - 1}}\right) = \mathcal{N}\left( {{\widehat{x}}_{s};\sqrt{1 - {\eta }_{s}}{\widehat{x}}_{s - 1},{\eta }_{s}I}\right)
$$

$$
= \mathcal{N}\left( {{\widehat{x}}_{s};\frac{{r}_{s}}{{r}_{s - 1}}{\widehat{x}}_{s - 1},\left( {1 - \frac{{r}_{s}^{2}}{{r}_{s - 1}^{2}}}\right) I}\right) . \tag{4}
$$

One can see this by rewriting Eq. (1): ${\eta }_{s}$ corresponds to ${\beta }_{t} = 1 - {\alpha }_{t}$, ${\gamma }_{s}$ corresponds to ${\alpha }_{t}$, and ${r}_{s}$ corresponds to $\sqrt{{\bar{\alpha }}_{t}}$. We then propose the following two ways to schedule the noise levels ${\left\{ {r}_{s}\right\} }_{s = 1}^{S}$.

Noise levels from variances (VAR). We start from the variance schedule ${\left\{ {\eta }_{s}\right\} }_{s = 1}^{S}$. Next, we compute ${\gamma }_{s} = 1 - {\eta }_{s}$ and ${\bar{\gamma }}_{s} = \mathop{\prod }\limits_{{i = 1}}^{s}{\gamma }_{i}$. The noise level at step $s$ is ${r}_{s} = \sqrt{{\bar{\gamma }}_{s}}$.

Noise levels from steps (STEP). We start from a subset of diffusion steps ${\left\{ {\tau }_{s}\right\} }_{s = 1}^{S}$ in $\{ 1,\cdots , T\}$. Then, the noise level at step $s$ is ${r}_{s} = \mathcal{R}\left( {\tau }_{s}\right) = \sqrt{{\bar{\alpha }}_{{\tau }_{s}}}$.

When ${\eta }_{s} = 1 - {\bar{\alpha }}_{{\tau }_{s}}/{\bar{\alpha }}_{{\tau }_{s - 1}}$, we have ${\bar{\gamma }}_{s} = {\bar{\alpha }}_{{\tau }_{s}}$.
Therefore, noise levels from steps can be regarded as a special case of noise levels from variances.

### 3.3. Approximate the Reverse Process

Given the same sequence of noise levels as in Section 3.2, we aim to approximate the reverse process in the original DDPM. To achieve this goal, we regard the model ${\epsilon }_{\theta }$ as being trained on the variances ${\left\{ {\eta }_{s}\right\} }_{s = 1}^{S}$ instead of the original ${\left\{ {\beta }_{t}\right\} }_{t = 1}^{T}$. Then, the transition probability in the approximate reverse process is

$$
{p}_{\theta }\left( {{\widehat{x}}_{s - 1} \mid {\widehat{x}}_{s}}\right) = \mathcal{N}\left( {{\widehat{x}}_{s - 1};\widehat{\mu }\left( {{\widehat{x}}_{s}, s}\right) ,{\widetilde{\eta }}_{s}I}\right) , \tag{5}
$$

where $\widehat{\mu }\left( {{\widehat{x}}_{s}, s}\right) = \frac{1}{\sqrt{{\gamma }_{s}}}\left( {{\widehat{x}}_{s} - \frac{{\eta }_{s}}{\sqrt{1 - {\bar{\gamma }}_{s}}}{\epsilon }_{\theta }\left( {{\widehat{x}}_{s},\mathcal{T}\left( {r}_{s}\right) }\right) }\right)$, and ${\widetilde{\eta }}_{s} = \frac{1 - {\bar{\gamma }}_{s - 1}}{1 - {\bar{\gamma }}_{s}}{\eta }_{s}$ for $s > 1$ and ${\widetilde{\eta }}_{1} = {\eta }_{1}$. Here ${\widetilde{\eta }}_{s}$ corresponds to the ${\widetilde{\beta }}_{t} = {\sigma }_{t}^{2}$ term. There are two ways to sample from the approximate reverse process in Eq. (5). Let ${\widehat{\epsilon }}_{s}$, $1 \leq s \leq S$, be i.i.d. standard Gaussians.

DDPM reverse process (DDPM-rev). The sampling procedure based on the DDPM reverse process follows Eq. (5) directly: first sample ${\widehat{x}}_{S} \sim {p}_{\text{latent }}$ and then sample ${\widehat{x}}_{s - 1} = \widehat{\mu }\left( {{\widehat{x}}_{s}, s}\right) + \sqrt{{\widetilde{\eta }}_{s}}{\widehat{\epsilon }}_{s}.$

DDIM reverse process (DDIM-rev). Let $\kappa \in \left\lbrack {0,1}\right\rbrack$ be a hyperparameter.
${}^{2}$ Then, the sampling procedure based on DDIM (Song et al., 2020a) is to first sample ${\widehat{x}}_{S} \sim {p}_{\text{latent }}$ and then sample ${\widehat{x}}_{s - 1} = \sqrt{{\bar{\gamma }}_{s - 1}}\left( \frac{{\widehat{x}}_{s} - \sqrt{1 - {\bar{\gamma }}_{s}}{\epsilon }_{\theta }\left( {{\widehat{x}}_{s},\mathcal{T}\left( {r}_{s}\right) }\right) }{\sqrt{{\bar{\gamma }}_{s}}}\right) + \sqrt{1 - {\bar{\gamma }}_{s - 1} - {\kappa }^{2}{\widetilde{\eta }}_{s}}{\epsilon }_{\theta }\left( {{\widehat{x}}_{s},\mathcal{T}\left( {r}_{s}\right) }\right) + \kappa \sqrt{{\widetilde{\eta }}_{s}}{\widehat{\epsilon }}_{s}$. When $\kappa = 1$, it exactly recovers DDPM-rev (see Appendix B.2 for the derivation).

### 3.4. Connections with Previous Methods

The DDIM (Song et al., 2020a) method is equivalent to STEP + DDIM-rev in FastDPM. The fast sampling algorithm by DiffWave (Kong et al., 2020b) is related to VAR + DDPM-rev in FastDPM. Compared with DiffWave, FastDPM offers an automatic way to select variances in different settings and a more natural way to compute noise levels.

---

${}^{1}$ E.g., ${\beta }_{T} = {0.02}$ in Ho et al. (2020); Kong et al. (2020b).

${}^{2}$ $\kappa$ is $\eta$ in Song et al. (2020a).

---

## 4. Experiments

In this section, we aim to answer the following two questions for FastDPM. (1) Which approximate diffusion process, VAR or STEP, is better? And (2) which approximate reverse process, DDPM-rev or DDIM-rev, is better? We investigate these questions by conducting extensive experiments in both image and audio domains.

We conduct unconditional image generation experiments on CIFAR-10 (Krizhevsky et al., 2009), CelebA (Liu et al., 2015), and LSUN-bedroom (Yu et al., 2015), unconditional and class-conditional audio synthesis experiments on the Speech Commands 0-9 (SC09) dataset (Warden, 2018), and neural vocoding experiments (audio synthesis conditioned on mel spectrogram) on the LJSpeech dataset (Ito, 2017).
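Before turning to results, the pieces of Section 3 can be collected into one minimal sampling sketch: Eq. (3) for the noise levels, binary search for $\mathcal{T} = \mathcal{R}^{-1}$, a STEP schedule, and the generalized reverse update with stochasticity $\kappa$. This is a pure-Python illustration, not the released implementation: ${\beta }_{T} = {0.02}$ follows footnote 1, while $T$, ${\beta }_{1}$, and the zero-valued stand-in network `eps_theta` are assumptions, so the outputs are not meaningful samples.

```python
import math
import random

# Pretrained-DDPM hyperparameters (T and beta_1 assumed; beta_T from footnote 1).
T, BETA1, BETAT = 1000, 1e-4, 0.02
DBETA = (BETAT - BETA1) / (T - 1)
BETA_HAT = (1.0 - BETA1) / DBETA          # the auxiliary constant \hat{beta}

def R(t):
    """Continuous noise level, Eq. (3), via log-Gamma for numerical stability."""
    log_r2 = t * math.log(DBETA) + math.lgamma(BETA_HAT + 1) - math.lgamma(BETA_HAT - t + 1)
    return math.exp(0.5 * log_r2)

def T_of(r, lo=0.0, hi=float(T), iters=60):
    """T(r) = R^{-1}(r) by binary search; R is strictly decreasing in t."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if R(mid) > r else (lo, mid)
    return 0.5 * (lo + hi)

def eps_theta(x, t_cont):
    """Zero-valued stand-in for the pretrained noise-prediction network."""
    return [0.0] * len(x)

def fastdpm_sample(S=10, kappa=1.0, d=4, seed=0):
    """STEP schedule + generalized reverse update (kappa=1: DDPM-rev, kappa=0: DDIM-rev)."""
    rng = random.Random(seed)
    tau = [round((s + 1) * T / S) for s in range(S)]   # tau_1 < ... < tau_S
    r = [R(t) for t in tau]                            # r_1 > ... > r_S
    x = [rng.gauss(0, 1) for _ in range(d)]            # \hat{x}_S ~ p_latent
    for s in range(S - 1, -1, -1):
        gbar = r[s] ** 2                               # \bar{gamma}_s
        gbar_prev = r[s - 1] ** 2 if s > 0 else 1.0    # \bar{gamma}_{s-1} (:= 1 at s = 1)
        gamma = gbar / gbar_prev
        eta = 1.0 - gamma
        eta_tilde = (1.0 - gbar_prev) / (1.0 - gbar) * eta if s > 0 else eta
        eps = eps_theta(x, T_of(r[s]))
        c_eps = math.sqrt(max(1.0 - gbar_prev - kappa ** 2 * eta_tilde, 0.0))
        sigma = kappa * math.sqrt(eta_tilde) if s > 0 else 0.0  # no fresh noise at the last step
        x = [math.sqrt(gbar_prev) * (xi - math.sqrt(1.0 - gbar) * ei) / math.sqrt(gbar)
             + c_eps * ei + sigma * rng.gauss(0, 1)
             for xi, ei in zip(x, eps)]
    return x
```

With a real network plugged in, the loop calls ${\epsilon }_{\theta }$ only $S$ times; the binary search itself is cheap since it only evaluates two log-Gamma terms per iteration.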
We use pretrained models in all experiments (Ho et al., 2020; Esser, 2020; Song et al., 2020a; Kong et al., 2020b). We use Fréchet Inception Distance (FID) (Heusel et al., 2017; Lang, 2020), Inception Score (IS) (Salimans et al., 2016), and the crowdMOS toolkit (Ribeiro et al., 2011) to evaluate generated samples. Details of the experimental setup can be found in Appendix C. Results can be found in Appendix D. Generated samples can be found in Appendix E and on the demo website. ${}^{3}$

### 4.1. Observations and Insights

We have the following observations and insights according to the above experimental results.

VAR marginally outperforms STEP for small $S$. In the above experiments, the two approximate diffusion processes (STEP and VAR) generally match each other's performance. On CIFAR-10, VAR outperforms STEP when $S = {10}$, and STEP slightly outperforms VAR when $S \geq {20}$. On CelebA, VAR slightly outperforms STEP when $S \leq {20}$, and they have similar results when $S \geq {50}$. On LSUN-bedroom, VAR slightly outperforms STEP when $S \leq {50}$, and STEP slightly outperforms VAR when $S = {100}$. On SC09, VAR slightly outperforms STEP in most cases. On LJSpeech, VAR slightly outperforms STEP when $S = 5$. Based on these results, we conclude that VAR marginally outperforms STEP for small $S$.

Different reverse processes dominate in different domains. In the above experiments, the difference between the DDPM and DDIM reverse processes is very clear. In image generation tasks, DDIM-rev significantly outperforms DDPM-rev except for the $S = {100}$ case in the LSUN-bedroom experiment. When we reduce $\kappa$ from 1.0 to 0.0 (see Table 1), the quality of generated samples consistently improves. In contrast, in audio synthesis tasks, DDPM-rev significantly outperforms DDIM-rev. When we increase $\kappa$ from 0.0 to 1.0 (see Table 4), the quality of generated samples consistently improves.
This can also be observed from Figure 8: DDIM produces very noisy utterances while DDPM produces very clean utterances.

The results indicate that in the image domain, DDIM-rev produces better quality, whereas in the audio domain, DDPM-rev produces better quality. We speculate the reason behind the difference is that in the audio domain, waveforms naturally exhibit a significant amount of stochasticity. The DDPM reverse process offers considerable stochasticity because at each reverse step $s$, ${\widehat{x}}_{s - 1}$ is sampled from a Gaussian distribution. However, the DDIM reverse process $\left( {\kappa = {0.0}}\right)$ is a deterministic mapping from latents to data, so it leads to degraded quality in the audio domain. This hypothesis is also aligned with the previous result that a flow-based model with a deterministic mapping was unable to generate intelligible speech unconditionally on SC09 (Ping, 2021).

The amount of conditional information affects the choice of reverse processes. In the audio synthesis experiments, we find that the amount of conditional information affects the generation quality of methods with different reverse processes. In the unconditional generation experiment on SC09, DDPM-rev (which corresponds to $\kappa = {1.0}$) has the best results. When there is slightly more conditional information in the class-conditional generation experiment on SC09, DDIM-rev with $\kappa = {0.5}$ has the best results and slightly outperforms DDPM-rev. In both experiments, DDIM-rev with $\kappa = {0.0}$ has much worse results. When there is much more conditional information (mel spectrogram) in the neural vocoding experiments on LJSpeech, DDPM-rev is still better than DDIM-rev, but the difference between these two methods is reduced. We speculate that adding conditional information reduces the amount of stochasticity required.
When there is no conditional information, we need a large amount of stochasticity $\left( {\kappa = {1.0}}\right)$; when there is weak class information, we need moderate stochasticity $\left( {\kappa = {0.5}}\right)$; and when there is strong mel-spectrogram information, even having no stochasticity $\left( {\kappa = {0.0}}\right)$ suffices to generate reasonable samples.

## 5. Conclusion

Diffusion models are a class of powerful deep generative models that produce superior quality samples on various generation tasks. In this paper, we introduce FastDPM, a unified framework for fast sampling in diffusion models without retraining. FastDPM generalizes prior methods and provides more flexibility. We extensively evaluate and analyze FastDPM in image and audio generation tasks. One limitation of FastDPM is that when $S$ is small, there is still quality degradation compared to the original DDPM. We plan to study algorithms offering higher quality for extremely small $S$ in future work.

---

${}^{3}$ https://fastdpm.github.io

---

References

Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. WaveGrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713, 2020.

Dhariwal, P. and Nichol, A. Diffusion models beat GANs on image synthesis. arXiv preprint arXiv:2105.05233, 2021.

Esser, P. PyTorch pretrained diffusion models. https://github.com/pesser/pytorch_diffusion, 2020.

Gao, R., Song, Y., Poole, B., Wu, Y. N., and Kingma, D. P. Learning energy-based models by diffusion recovery likelihood. arXiv preprint arXiv:2012.08125, 2020.

Goyal, A., Ke, N. R., Ganguli, S., and Bengio, Y. Variational walkback: Learning a transition operator as a stochastic recurrent net. arXiv preprint arXiv:1711.02282, 2017.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium.
In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.

Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020.

Hoogeboom, E., Nielsen, D., Jaini, P., Forré, P., and Welling, M. Argmax flows and multinomial diffusion: Towards non-autoregressive language models. arXiv preprint arXiv:2102.05379, 2021.

Ito, K. The LJ speech dataset. 2017.

Jeong, M., Kim, H., Cheon, S. J., Choi, B. J., and Kim, N. S. Diff-TTS: A denoising diffusion model for text-to-speech. arXiv preprint arXiv:2104.01409, 2021.

Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. Analyzing and improving the image quality of StyleGAN. In CVPR, pp. 8110-8119, 2020.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In NeurIPS, 2018.

Kong, J., Kim, J., and Bae, J. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. In NeurIPS, 2020a.

Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. DiffWave: A versatile diffusion model for audio synthesis. arXiv preprint arXiv:2009.09761, 2020b.

Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.

Lang, S. FID score for PyTorch. https://github.com/mseitzer/pytorch-fid, 2020.

Lee, J. and Han, S. NU-Wave: A diffusion probabilistic model for neural audio upsampling. arXiv preprint arXiv:2104.02321, 2021.

Li, H., Yang, Y., Chang, M., Feng, H., Xu, Z., Li, Q., and Chen, Y. SRDiff: Single image super-resolution with diffusion probabilistic models. arXiv preprint arXiv:2104.14951, 2021.

Liu, J., Li, C., Ren, Y., Chen, F., Liu, P., and Zhao, Z. DiffSinger: Diffusion acoustic model for singing voice synthesis. arXiv preprint arXiv:2105.02446, 2021.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild.
In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.

Luo, S. and Hu, W. Diffusion probabilistic models for 3D point cloud generation. arXiv preprint arXiv:2103.01458, 2021.

Meng, C., Song, J., Song, Y., Zhao, S., and Ermon, S. Improved autoregressive modeling with distribution smoothing. arXiv preprint arXiv:2103.15089, 2021.

Mittal, G., Engel, J., Hawthorne, C., and Simon, I. Symbolic music generation with diffusion models. arXiv preprint arXiv:2103.16091, 2021.

Okamoto, T., Toda, T., Shiga, Y., and Kawai, H. Noise level limited sub-modeling for diffusion probabilistic vocoders. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6029-6033. IEEE, 2021.

Ping, W. WaveFlow on SC09 for unconditional generation. https://openreview.net/forum?id=a-xFK8Ymz5J&noteId=P3ORiRE9C3, 2021.

Ping, W., Peng, K., Zhao, K., and Song, Z. WaveFlow: A compact flow-based model for raw audio. In ICML, 2020.

Popov, V., Vovk, I., Gogoryan, V., Sadekova, T., and Kudinov, M. Grad-TTS: A diffusion probabilistic model for text-to-speech. arXiv preprint arXiv:2105.06337, 2021.

Ribeiro, F., Florêncio, D., Zhang, C., and Seltzer, M. CrowdMOS: An approach for crowdsourcing mean opinion score studies. In ICASSP, 2011.

Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234-2242, 2016.

San-Roman, R., Nachmani, E., and Wolf, L. Noise estimation for generative diffusion models. arXiv preprint arXiv:2104.02600, 2021.

Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.

Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020a.

Song, Y. and Ermon, S.
Generative modeling by estimating gradients of the data distribution. arXiv preprint arXiv:1907.05600, 2019.

Song, Y. and Ermon, S. Improved techniques for training score-based generative models. arXiv preprint arXiv:2006.09011, 2020.

Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020b.

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015. URL http://arxiv.org/abs/1512.00567.

Warden, P. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.

Xu, Y. and Tuguldur, E.-O. Convolutional neural networks for Google speech commands data set with PyTorch, 2017. https://github.com/tugstugi/pytorch-speech-commands.

Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., and Xiao, J. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.

Zhou, L., Du, Y., and Wu, J. 3D shape generation and completion through point-voxel diffusion. arXiv preprint arXiv:2104.03670, 2021.

## A. Related Work

Diffusion models are a class of powerful deep generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Goyal et al., 2017), which have received a lot of attention recently. These models have been applied to various domains, including image generation (Ho et al., 2020; Dhariwal & Nichol, 2021), audio synthesis (Kong et al., 2020b; Chen et al., 2020; Okamoto et al., 2021), image or audio super-resolution (Li et al., 2021; Lee & Han, 2021), text-to-speech (Jeong et al., 2021; Popov et al., 2021), music synthesis (Liu et al., 2021; Mittal et al., 2021), 3-D point cloud generation (Luo & Hu, 2021; Zhou et al., 2021), and language models (Hoogeboom et al., 2021).
Diffusion models are connected with score-based models (Song & Ermon, 2019; 2020; Song et al., 2020b), and there has been a series of works extending and improving diffusion models (Song et al., 2020b; Gao et al., 2020; Dhariwal & Nichol, 2021; San-Roman et al., 2021; Meng et al., 2021).

There are two families of methods that aim to accelerate synthesis in diffusion models by reducing the length of the reverse process from $T$ to a much smaller $S$. One family of methods tackles this problem at training time. These methods retrain the network conditioned on continuous noise levels instead of discrete diffusion steps (Song & Ermon, 2019; Chen et al., 2020; Okamoto et al., 2021; San-Roman et al., 2021). Assuming that the corresponding network is able to predict the added noise at any noise level, we can carefully choose only $S \ll T$ noise levels and construct a short reverse process based on them alone. San-Roman et al. (2021) present a learning scheme that can adjust those noise-level parameters step by step, for any given number of steps $S$. Another family of methods aims to directly approximate the original reverse process within the pretrained DDPM conditioned on discrete steps. In other words, no retraining is needed. Song et al. (2020a) introduce denoising diffusion implicit models (DDIM), which contain non-Markovian processes that lead to the same training objective as DDPM. These non-Markovian processes naturally permit "jumping steps", or formally, using a subset of steps to form a short reverse process. However, compared to using continuous noise levels, selecting discrete steps offers less flexibility. Kong et al. (2020b) introduce a fast sampling algorithm that interpolates steps according to corresponding noise levels. This can be seen as an attempt to map continuous noise levels to discrete diffusion steps. However, it lacks both theoretical justification for the interpolation and extensive empirical studies.
In this paper, we propose FastDPM, a method that approximates the original DDPM model. FastDPM constructs a bijective mapping between (continuous) diffusion steps and continuous noise levels, which allows us to take advantage of the flexibility of continuous noise levels. FastDPM generalizes Kong et al. (2020b) by using the Gamma function to compute noise levels, which naturally extends from the discrete domain to the continuous domain. FastDPM generalizes Song et al. (2020a) by providing a special set of noise levels that exactly correspond to integer steps.

## B. Derivations

### B.1. Derivation of $q\left(x_t \mid x_0\right)$

According to the definition of the diffusion process, we have

$$
x_t = \sqrt{\alpha_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon_t, \tag{6}
$$

where each $\epsilon_t$ is an i.i.d. standard Gaussian. Then, by recursion, we have

$$
x_t = \sqrt{\alpha_t \alpha_{t-1}}\, x_{t-2} + \sqrt{\alpha_t \beta_{t-1}}\, \epsilon_{t-1} + \sqrt{\beta_t}\, \epsilon_t
$$

$$
= \sqrt{\alpha_t \alpha_{t-1} \alpha_{t-2}}\, x_{t-3} + \sqrt{\alpha_t \alpha_{t-1} \beta_{t-2}}\, \epsilon_{t-2} + \sqrt{\alpha_t \beta_{t-1}}\, \epsilon_{t-1} + \sqrt{\beta_t}\, \epsilon_t \tag{7}
$$

$$
= \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{\alpha_t \alpha_{t-1} \cdots \alpha_2 \beta_1}\, \epsilon_1 + \cdots + \sqrt{\alpha_t \beta_{t-1}}\, \epsilon_{t-1} + \sqrt{\beta_t}\, \epsilon_t.
$$

As a result, $q\left(x_t \mid x_0\right)$ is still Gaussian. Its mean vector is $\sqrt{\bar{\alpha}_t}\, x_0$, and its covariance matrix is $\left(\alpha_t \alpha_{t-1} \cdots \alpha_2 \beta_1 + \cdots + \alpha_t \beta_{t-1} + \beta_t\right) I = \left(1 - \bar{\alpha}_t\right) I$.
Formally, we have

$$
q\left(x_t \mid x_0\right) = \mathcal{N}\left(x_t; \sqrt{\bar{\alpha}_t}\, x_0, \left(1 - \bar{\alpha}_t\right) I\right). \tag{8}
$$

### B.2. Derivation of DDIM ($\kappa = 1$)

When $\kappa = 1$, the coefficient of the $\epsilon_\theta$ term in the DDIM reverse process is

$$
-\frac{\sqrt{1 - \bar{\gamma}_s}}{\sqrt{\gamma_s}} + \sqrt{1 - \bar{\gamma}_{s-1} - \frac{1 - \bar{\gamma}_{s-1}}{1 - \bar{\gamma}_s}\eta_s} = -\frac{1 - \bar{\gamma}_s}{\sqrt{\gamma_s\left(1 - \bar{\gamma}_s\right)}} + \frac{\sqrt{\left(\gamma_s - \bar{\gamma}_s\right)\left(1 - \bar{\gamma}_s - \eta_s\right)}}{\sqrt{\gamma_s\left(1 - \bar{\gamma}_s\right)}}
$$

$$
= -\frac{1 - \bar{\gamma}_s}{\sqrt{\gamma_s\left(1 - \bar{\gamma}_s\right)}} + \frac{\gamma_s - \bar{\gamma}_s}{\sqrt{\gamma_s\left(1 - \bar{\gamma}_s\right)}} \tag{9}
$$

$$
= -\frac{\eta_s}{\sqrt{\gamma_s\left(1 - \bar{\gamma}_s\right)}}.
$$

## C. Detailed Experimental Setup

Image datasets. We conduct unconditional image generation experiments on three datasets: CIFAR-10 (50k object images of resolution $32 \times 32$ (Krizhevsky et al., 2009)), CelebA ($\sim$163k face images of resolution $64 \times 64$ (Liu et al., 2015)), and LSUN-bedroom ($\sim$3M bedroom images of resolution $256 \times 256$ (Yu et al., 2015)).

Audio datasets. We conduct unconditional and class-conditional audio synthesis experiments on the Speech Commands 0-9 (SC09) dataset, the spoken-digit subset of the full Speech Commands dataset (Warden, 2018). SC09 contains $\sim$31k one-second-long utterances of ten classes (0 through 9) with a sampling rate of 16 kHz. We conduct neural vocoding experiments (audio synthesis conditioned on mel spectrogram) on the LJSpeech dataset (Ito, 2017).
It contains $\sim$24 hours of audio ($\sim$13k utterances from a female speaker) recorded in a home environment with a sampling rate of 22.05 kHz.

Models. In all experiments, we use pretrained checkpoints from prior works. In detail, the pretrained models for CIFAR-10 and LSUN-bedroom are taken from DDPM (Ho et al., 2020; Esser, 2020), and the pretrained model for CelebA is taken from DDIM (Song et al., 2020a). In these models, $T$ is 1000. The pretrained models for SC09 and LJSpeech are taken from DiffWave (Kong et al., 2020b). In these models, $T$ is 200. In all models, $\beta_1 = 10^{-4}$, $\beta_T = 2 \times 10^{-2}$, and all $\beta_t$'s are linearly interpolated between $\beta_1$ and $\beta_T$.

Noise level schedules. For each of the approximate diffusion processes in Section 3.2, we examine two schedules: linear and quadratic. For noise levels $\{\eta_s\}_{s=1}^S$ from variances, the two schedules are:

- Linear (VAR): $\eta_s = \left(1 + cs\right)\eta_0$.

- Quadratic (VAR): $\eta_s = \left(1 + cs\right)^2 \eta_0$.

We let $\eta_0 = \beta_0$ and the constant $c$ satisfy $\prod_{s=1}^S \left(1 - \eta_s\right) = \bar{\alpha}_T$. The noise level at step $s$ is $r_s = \sqrt{\bar{\gamma}_s}$.

For noise levels $\{\eta_s\}_{s=1}^S$ from steps, they are computed from selected steps $\{\tau_s\}_{s=1}^S$ among $\{1, \cdots, T\}$ (Song et al., 2020a). The two schedules are:

- Linear (STEP): $\tau_s = \lfloor cs \rfloor$, where $c = \frac{T}{S}$.

- Quadratic (STEP): $\tau_s = \left\lfloor cs^2 \right\rfloor$, where $c = \frac{4}{5} \cdot \frac{T}{S^2}$.

Then, the noise level at step $s$ is $r_s = \mathcal{R}\left(\tau_s\right) = \sqrt{\bar{\alpha}_{\tau_s}}$.
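As a concrete illustration, the two STEP schedules above reduce to a few lines of NumPy. The following is a hedged sketch (function and variable names are ours, not from the released FastDPM code), assuming the image models' setting of $T = 1000$ with the linear $\beta$ schedule described in the Models paragraph:

```python
import numpy as np

def alpha_bar(T=1000, beta_1=1e-4, beta_T=2e-2):
    """Cumulative product of alpha_t = 1 - beta_t for a linear beta schedule."""
    betas = np.linspace(beta_1, beta_T, T)
    return np.cumprod(1.0 - betas)

def step_schedule(S, T=1000, kind="linear"):
    """Select steps tau_1..tau_S and return them with noise levels r_s = sqrt(alpha_bar_{tau_s})."""
    s = np.arange(1, S + 1)
    if kind == "linear":
        taus = np.floor(T / S * s).astype(int)              # tau_s = floor(c s), c = T / S
    elif kind == "quadratic":
        taus = np.floor(0.8 * T / S**2 * s**2).astype(int)  # tau_s = floor(c s^2), c = (4/5) T / S^2
    else:
        raise ValueError(kind)
    r = np.sqrt(alpha_bar(T)[taus - 1])                     # steps are 1-indexed
    return taus, r

taus, r = step_schedule(S=10, kind="quadratic")
# taus = [8, 32, 72, 128, 200, 288, 392, 512, 648, 800]; r decreases monotonically
```

The quadratic schedule spends most of its $S$ steps early in the reverse process (small $\tau_s$), where the signal-to-noise ratio changes fastest.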
In image generation experiments, we follow the same noise level schedules as in Song et al. (2020a): quadratic schedules for CIFAR-10 and linear schedules for CelebA and LSUN-bedroom. We use linear schedules in SC09 experiments and quadratic schedules in LJSpeech experiments; we found these schedules to yield better quality.

Evaluations. In all unconditional generation experiments, we use the Fréchet Inception Distance (FID) (Heusel et al., 2017; Lang, 2020) to evaluate generated samples. For the training set $X_t$ and the set of generated samples $X_g$, the FID between these two sets is defined as

$$
\mathrm{FID} = \left\| \mu_t - \mu_g \right\|^2 + \operatorname{tr}\left( \Sigma_t + \Sigma_g - 2\sqrt{\Sigma_t \Sigma_g} \right), \tag{10}
$$

where $\mu_t, \mu_g$ and $\Sigma_t, \Sigma_g$ are the means and covariances of $X_t, X_g$ after a feature transformation. In each image generation experiment, $X_g$ is 50K generated images. The transformed feature is the 2048-dimensional output of the last layer of Inception-V3 (Szegedy et al., 2015). In each audio synthesis experiment, $X_g$ is 5K generated utterances. The transformed feature is the 1024-dimensional output of the last layer of a ResNeXT classifier (Xu & Tuguldur, 2017), which achieves 99.06% accuracy on the training set and 98.76% accuracy on the test set. Lower FID is better.

In the class-conditional generation experiment on SC09, we evaluate with accuracy and the Inception Score (IS). ${}^{4}$ The accuracy is computed by matching the predictions of the ResNeXT classifier against the pre-specified labels in the dataset.
The IS of generated samples $X_g$ is defined as

$$
\mathrm{IS} = \exp\left( \mathbb{E}_{x \sim X_g} \operatorname{KL}\left( p\left(x\right) \parallel \mathbb{E}_{x' \sim X_g}\, p\left(x'\right) \right) \right), \tag{11}
$$

where $p\left(x\right)$ is the class-probability (softmax) vector produced by the ResNeXT classifier. Higher IS and accuracy are better.

---

${}^{4}$ Note that FID is not an appropriate metric for conditional generation.

---

In the neural vocoding experiment on LJSpeech, we evaluate speech quality with the crowdMOS toolkit (Ribeiro et al., 2011), where the test utterances from all models were presented to Mechanical Turk workers. We report 5-scale Mean Opinion Scores (MOS); higher is better.

## D. Evaluation Results in Experiments

We report image generation results under different approximate diffusion processes, approximate reverse processes, and $S$, the length of FastDPM. Evaluation results on CIFAR-10, CelebA, and LSUN-bedroom, measured in FID, are shown in Table 1, Table 2, and Table 3, respectively.

Table 1. CIFAR-10 image generation measured in FID. STEP means noise levels from steps and VAR means noise levels from variances. Both use quadratic schedules. $S$ is the length of FastDPM. The standard DDPM ($T = 1000$) has FID $= 3.03$.
| Approx. Diffusion | Approx. Reverse | FID (↓), $S=10$ | $S=20$ | $S=50$ | $S=100$ |
| --- | --- | --- | --- | --- | --- |
| STEP | DDIM-rev ($\kappa = 0.0$) | 11.01 | 5.05 | 3.20 | 2.86 |
| VAR | DDIM-rev ($\kappa = 0.0$) | 9.90 | 5.22 | 3.41 | 3.01 |
| STEP | DDIM-rev ($\kappa = 0.2$) | 11.32 | 5.16 | 3.27 | 2.87 |
| VAR | DDIM-rev ($\kappa = 0.2$) | 10.18 | 5.32 | 3.50 | 3.04 |
| STEP | DDIM-rev ($\kappa = 0.5$) | 13.53 | 6.14 | 3.61 | 3.05 |
| VAR | DDIM-rev ($\kappa = 0.5$) | 12.22 | 6.55 | 3.86 | 3.15 |
| STEP | DDPM-rev | 36.70 | 14.82 | 5.79 | 4.03 |
| VAR | DDPM-rev | 29.43 | 15.27 | 6.74 | 4.58 |
Table 2. CelebA image generation measured in FID. STEP means noise levels from steps and VAR means noise levels from variances. Both use linear schedules. $S$ is the length of FastDPM. The standard DDPM ($T = 1000$) has FID $= 7.00$.
| Approx. Diffusion | Approx. Reverse | FID (↓), $S=10$ | $S=20$ | $S=50$ | $S=100$ |
| --- | --- | --- | --- | --- | --- |
| STEP | DDIM-rev ($\kappa = 0.0$) | 15.72 | 10.77 | 8.31 | 7.85 |
| VAR | DDIM-rev ($\kappa = 0.0$) | 15.31 | 10.69 | 8.41 | 7.95 |
| STEP | DDPM-rev | 29.52 | 19.38 | 12.83 | 10.35 |
| VAR | DDPM-rev | 28.98 | 18.89 | 12.83 | 10.39 |
Table 3. LSUN-bedroom image generation measured in FID. STEP means noise levels from steps and VAR means noise levels from variances. Both use linear schedules. $S$ is the length of FastDPM.
| Approx. Diffusion | Approx. Reverse | FID (↓), $S=10$ | $S=20$ | $S=50$ | $S=100$ |
| --- | --- | --- | --- | --- | --- |
| STEP | DDIM-rev ($\kappa = 0.0$) | 19.07 | 9.95 | 8.43 | 9.94 |
| VAR | DDIM-rev ($\kappa = 0.0$) | 19.98 | 9.86 | 8.37 | 10.27 |
| STEP | DDPM-rev | 42.69 | 20.97 | 10.24 | 7.98 |
| VAR | DDPM-rev | 41.00 | 20.12 | 10.12 | 8.13 |
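The FID values in the image tables above follow Eq. (10). Once features have been extracted, the computation is a short NumPy routine; the sketch below uses our own naming (real evaluations use 2048-dimensional Inception-V3 features), and computes $\operatorname{tr}\sqrt{\Sigma_t \Sigma_g}$ via the eigenvalues of the PSD product rather than an explicit matrix square root:

```python
import numpy as np

def frechet_distance(feat_a, feat_b):
    """FID between two feature matrices (rows = samples), following Eq. (10)."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # tr sqrt(cov_a cov_b): the product of two PSD matrices has real, non-negative
    # eigenvalues, so the trace of its square root is the sum of their square roots.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    mean_term = np.sum((mu_a - mu_b) ** 2)
    return float(mean_term + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)
```

A quick sanity check: shifting one feature set by a constant vector leaves both covariances unchanged, so the FID equals the squared norm of the shift.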
We report audio synthesis results under different approximate diffusion processes, approximate reverse processes, and $S$, the length of FastDPM. Evaluation results of unconditional generation on SC09 measured in FID and IS are shown in Table 4. Evaluation results of class-conditional generation on SC09 measured in accuracy and IS are shown in Table 5. Evaluation results of neural vocoding on LJSpeech measured in MOS are shown in Table 6.

Table 4. SC09 unconditional audio synthesis measured in FID and IS. STEP means noise levels from steps and VAR means noise levels from variances. Both use linear schedules. $S$ is the length of FastDPM. The original DiffWave ($T = 200$) has FID $= 1.29$ and IS $= 5.30$.
| Approx. Diffusion | Approx. Reverse | FID (↓), $S=10$ | $S=20$ | $S=50$ | IS (↑), $S=10$ | $S=20$ | $S=50$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| STEP | DDIM-rev ($\kappa = 0.0$) | 4.72 | 5.31 | 5.54 | 2.46 | 2.27 | 2.23 |
| VAR | DDIM-rev ($\kappa = 0.0$) | 4.74 | 4.88 | 5.58 | 2.49 | 2.42 | 2.21 |
| STEP | DDIM-rev ($\kappa = 0.5$) | 2.60 | 2.52 | 2.46 | 3.94 | 4.17 | 4.19 |
| VAR | DDIM-rev ($\kappa = 0.5$) | 2.67 | 2.49 | 2.47 | 3.94 | 4.20 | 4.20 |
| STEP | DDPM-rev | 1.75 | 1.40 | 1.33 | 4.03 | 4.57 | 5.16 |
| VAR | DDPM-rev | 1.69 | 1.38 | 1.34 | 4.06 | 4.63 | 5.18 |
Table 5. SC09 class-conditional audio synthesis. The results are measured by accuracy and IS. STEP means noise levels from steps and VAR means noise levels from variances. Both use linear schedules. $S$ is the length of FastDPM. The DiffWave ($T = 200$) has accuracy $= 91.2\%$ and IS $= 6.63$.
| Approx. Diffusion | Approx. Reverse | Accuracy (↑), $S=10$ | $S=20$ | $S=50$ | IS (↑), $S=10$ | $S=20$ | $S=50$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| STEP | DDIM-rev ($\kappa = 0.0$) | 66.5% | 68.3% | 66.1% | 3.21 | 3.18 | 2.87 |
| VAR | DDIM-rev ($\kappa = 0.0$) | 66.6% | 68.5% | 66.1% | 3.26 | 3.22 | 2.88 |
| STEP | DDIM-rev ($\kappa = 0.5$) | 85.8% | 88.4% | 87.8% | 5.79 | 6.23 | 6.00 |
| VAR | DDIM-rev ($\kappa = 0.5$) | **86.0%** | 88.2% | 88.0% | 5.74 | 6.24 | 6.03 |
| STEP | DDPM-rev | 79.9% | 82.7% | 86.8% | 4.71 | 5.10 | 5.83 |
| VAR | DDPM-rev | 81.0% | 82.8% | 87.0% | 4.93 | 5.16 | 5.86 |
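The IS values in Tables 4 and 5 follow Eq. (11) and are similarly compact to compute. A hedged NumPy sketch (our own helper name; `probs` stands in for the ResNeXT classifier's per-sample softmax outputs):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS from an (N, C) matrix of per-sample class probabilities, per Eq. (11)."""
    p = np.clip(probs, eps, 1.0)
    marginal = p.mean(axis=0, keepdims=True)               # E_{x'} p(x')
    kl = np.sum(p * (np.log(p) - np.log(marginal)), axis=1)  # KL(p(x) || marginal)
    return float(np.exp(kl.mean()))
```

A classifier that is both confident and balanced across $C$ classes gives IS near $C$, while predictions that collapse to the uniform distribution give IS $= 1$.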
Table 6. LJSpeech audio synthesis conditioned on mel spectrogram. The results are measured by 5-scale MOS with 95% confidence intervals. STEP means noise levels from steps and VAR means noise levels from variances. Both use quadratic schedules. $S$ is the length of FastDPM.
| Approx. Diffusion | Approx. Reverse | $S$ | MOS (↑) |
| --- | --- | --- | --- |
| STEP | DDIM-rev ($\kappa = 0.0$) | 5 | $3.72 \pm 0.11$ |
| VAR | DDIM-rev ($\kappa = 0.0$) | 5 | $3.75 \pm 0.10$ |
| STEP | DDPM-rev | 5 | $4.28 \pm 0.08$ |
| VAR | DDPM-rev | 5 | $4.31 \pm 0.07$ |
| DiffWave ($T = 200$) | — | 200 | $4.42 \pm 0.10$ |
| Ground truth | — | — | $4.51 \pm 0.07$ |
## E. Generated Samples in Experiments

In this section, we display generated samples of FastDPM, including image samples and mel-spectrograms of audio samples.

### E.1. Unconditional Generation on CIFAR-10

![01963e3a-b64c-7c07-b125-f6127ead9745_12_298_388_1158_383_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_12_298_388_1158_383_0.jpg)

Figure 1. Comparison of generated samples of FastDPM on CIFAR-10 among different $S$ and approximate diffusion processes. The approximate reverse process is DDIM-rev ($\kappa = 0.0$).

![01963e3a-b64c-7c07-b125-f6127ead9745_12_280_916_1192_802_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_12_280_916_1192_802_0.jpg)

Figure 2. Comparison of generated samples of FastDPM on CIFAR-10 among different $S$ and approximate reverse processes. The approximate diffusion process is VAR.

### E.2. Unconditional Generation on CelebA

![01963e3a-b64c-7c07-b125-f6127ead9745_13_280_262_1198_1172_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_13_280_262_1198_1172_0.jpg)

Figure 3. Comparison of generated samples of FastDPM on CelebA among different $S$ and approximate diffusion processes. The approximate reverse process is DDIM-rev ($\kappa = 0.0$).

### E.3. Unconditional Generation on LSUN-bedroom

![01963e3a-b64c-7c07-b125-f6127ead9745_14_309_261_1131_1068_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_14_309_261_1131_1068_0.jpg)

Figure 4. Comparison of generated samples of FastDPM on LSUN-bedroom among different approximate diffusion processes. The approximate reverse process is DDIM-rev ($\kappa = 0.0$) and $S = 100$.

![01963e3a-b64c-7c07-b125-f6127ead9745_15_308_193_1132_1068_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_15_308_193_1132_1068_0.jpg)

Figure 5.
Comparison of generated samples of FastDPM on LSUN-bedroom among different approximate diffusion processes. The approximate reverse process is DDIM-rev ($\kappa = 0.0$) and $S = 50$.

![01963e3a-b64c-7c07-b125-f6127ead9745_16_308_193_1132_1068_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_16_308_193_1132_1068_0.jpg)

Figure 6. Comparison of generated samples of FastDPM on LSUN-bedroom among different approximate diffusion processes. The approximate reverse process is DDIM-rev ($\kappa = 0.0$) and $S = 20$.

![01963e3a-b64c-7c07-b125-f6127ead9745_17_306_193_1134_1068_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_17_306_193_1134_1068_0.jpg)

Figure 7. Comparison of generated samples of FastDPM on LSUN-bedroom among different approximate diffusion processes. The approximate reverse process is DDIM-rev ($\kappa = 0.0$) and $S = 10$.

### E.4. Unconditional Generation on SC09

![01963e3a-b64c-7c07-b125-f6127ead9745_18_379_284_1018_1609_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_18_379_284_1018_1609_0.jpg)

Figure 8. Mel-spectrogram of 16 synthesized utterances ($S = 50$). We use linear noise level schedules from steps in (a) and variances in (b).
In each subplot, the top row shows results of DDIM-rev ($\kappa = 0.0$), the middle row shows results of DDIM-rev ($\kappa = 0.5$), and the bottom row shows results of DDPM-rev. DDPM-rev produces the clearest utterances among these approximate reverse processes.

### E.5. Conditional Generation on SC09

![01963e3a-b64c-7c07-b125-f6127ead9745_19_379_284_1018_1607_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_19_379_284_1018_1607_0.jpg)

Figure 9. Mel-spectrogram of 20 synthesized utterances ($S = 50$). We use linear noise level schedules from steps in (a) and variances in (b). In each subplot, the top row shows results of DDIM-rev ($\kappa = 0.0$), the middle row shows results of DDIM-rev ($\kappa = 0.5$), and the bottom row shows results of DDPM-rev. DDIM-rev ($\kappa = 0.5$) produces the clearest utterances among these approximate reverse processes.

### E.6. Neural Vocoding on LJSpeech

![01963e3a-b64c-7c07-b125-f6127ead9745_20_378_291_1015_1544_0.jpg](images/01963e3a-b64c-7c07-b125-f6127ead9745_20_378_291_1015_1544_0.jpg)

Figure 10.
Mel-spectrogram of the ground truth and generated LJ001-0001 ($S = 5$, channel $= 128$). We use linear noise level schedules from steps in (a) and variances in (b). In each subplot, the top row shows the ground truth, the middle row shows results of DDIM-rev ($\kappa = 0.0$), and the bottom row shows results of DDPM-rev. Both DDPM-rev and DDIM-rev generate high-quality speech.

\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/av2hdS1rLI/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/av2hdS1rLI/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..91a9f689fd16664d6362996106f207409034434b
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/av2hdS1rLI/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,521 @@

# $\alpha$-VAEs: Optimising variational inference by learning data-dependent divergence skew

## Anonymous Authors ${}^{1}$

## Abstract

The skew-geometric Jensen-Shannon divergence ($\mathrm{JS}^{\mathrm{G}_\alpha}$) allows for an intuitive interpolation between forward and reverse Kullback-Leibler (KL) divergence based on the skew parameter $\alpha$. While the benefits of the skew in $\mathrm{JS}^{\mathrm{G}_\alpha}$ are clear (balancing forward and reverse KL in a comprehensible manner), the choice of optimal skew remains opaque and requires an expensive grid search. In this paper we introduce $\alpha$-VAEs, which extend the $\mathrm{JS}^{\mathrm{G}_\alpha}$ variational autoencoder by allowing for learnable, and therefore data-dependent, skew. We motivate the use of a parameterised skew in the dual divergence by analysing trends dependent on data complexity in synthetic examples.
We also prove and discuss the dependency of the divergence minimum on the input data and encoder parameters, before empirically demonstrating that this dependency does not reduce to either direction of KL divergence for benchmark datasets. Finally, we demonstrate that optimised skew values consistently converge across a range of initial values and provide improved denoising and reconstruction properties. These render $\alpha$-VAEs an efficient and practical modelling choice across a range of tasks, datasets, and domains.

## 1. Introduction

As variational inference (VI) progresses, state-of-the-art Variational AutoEncoders (VAEs) increase in complexity (Vahdat & Kautz, 2020; Child, 2020) while continuing to minimise the Evidence Lower BOund (ELBO) (Blei et al., 2017). Compared to generative adversarial networks (GANs) (Goodfellow et al., 2014), VAEs necessitate less stringent and less problem-dependent training regimes, and compared to autoregressive models (Larochelle & Murray, 2011; Germain et al., 2015) (which can be interpreted as instances of very deep VAEs (Child, 2020)) they are less computationally expensive and more efficient to sample from. VAE learning requires optimisation of an objective which balances the quality of decoded reconstructions from encoded representations with a regularising divergence term penalising latent-space deviations from a fixed prior distribution.
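The balance just described is concrete in the standard per-sample objective: a reconstruction term plus a weighted divergence from the $\mathcal{N}(0, I)$ prior. A minimal sketch (our own function names; the closed-form KL below assumes a diagonal-Gaussian encoder):

```python
import math

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ) for one latent vector."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, logvar))

def vae_loss(recon_nll, mu, logvar, beta=1.0):
    """Negative ELBO: reconstruction negative log-likelihood plus a weighted
    divergence penalty (beta = 1 recovers the vanilla VAE; beta != 1 a beta-VAE)."""
    return recon_nll + beta * kl_to_standard_normal(mu, logvar)
```

Learnable divergence skew, introduced below, replaces the fixed KL term here with a parameterised member of a divergence family.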
For instance, in the original VAE (Kingma & Welling, 2014), KL divergence (Kullback & Leibler, 1951) naturally constrained the variational distribution to an isotropic Gaussian unit ball $\operatorname{KL}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel \mathcal{N}\left( {0, I}\right) }\right)$ , despite unfavourable properties (Bishop, 2006), such as unboundedness and asymmetry. Moreover, KL does not capitalise on the full flexibility of the wider family of exponential distributions, a recent direction which has tightened the ELBO (Brekel-mans et al., 2020; Masrani et al., 2019) and rendered VAE divergence regularisation more interpretable in distribution space via skew-geometric Jensen-Shannon $\left( {\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\right)$ divergence (Nielsen, 2019; Deasy et al., 2020). We henceforth refer to VAEs with skewed geometric divergences as $\alpha$ - VAEs, subsuming $\beta$ -VAEs. + +Divergence skew in VAEs balances the contrasting properties of forward and reverse KL (such as zero-avoidance/forcing) and circumvents opaque divergence terms by interpolating between them (Bishop, 2006). However, an expensive grid search over skew values fixed through training is necessary to optimise for tasks such as image reconstruction. This is particularly problematic as the link between optimal skew and dataset properties is not clear and is not easily resolved. Moreover, when skew is treated as static, the divergence constraint does not change during training and therefore does not reflect improvements in the encoder and its embeddings. Instead, an improved optimisation would update the divergence skew relative to these factors without using prior knowledge or compromising performance. + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +055 + +056 + +ELBO + +
| Divergence | KL expression | Train skew |
| --- | --- | --- |
| KL (ELBO) | $\operatorname{KL}\left(q \parallel p\right)$ | ✗ |
| **$\alpha$-VAEs:** | | |
| $\mathrm{JS}^{\mathrm{G}_\alpha}$ | $\left(1 - \alpha\right) \mathrm{KL}\left(p \parallel p^{\alpha} q^{1-\alpha}\right) + \alpha\, \mathrm{KL}\left(q \parallel p^{\alpha} q^{1-\alpha}\right)$ | ✗ |
| $\mathrm{JS}_{*}^{\mathrm{G}_\alpha}$ | $\left(1 - \alpha\right) \mathrm{KL}\left(p^{\alpha} q^{1-\alpha} \parallel p\right) + \alpha\, \mathrm{KL}\left(p^{\alpha} q^{1-\alpha} \parallel q\right)$ | ✗ |
| t-$\mathrm{JS}^{\mathrm{G}_\alpha}$ | $\left(1 - \alpha\right) \mathrm{KL}\left(p \parallel p^{\alpha} q^{1-\alpha}\right) + \alpha\, \mathrm{KL}\left(q \parallel p^{\alpha} q^{1-\alpha}\right)$ | ✓ |
| t-$\mathrm{JS}_{*}^{\mathrm{G}_\alpha}$ | $\left(1 - \alpha\right) \mathrm{KL}\left(p^{\alpha} q^{1-\alpha} \parallel p\right) + \alpha\, \mathrm{KL}\left(p^{\alpha} q^{1-\alpha} \parallel q\right)$ | ✓ |
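For univariate Gaussians the geometric mean $p^{\alpha} q^{1-\alpha}$ is itself Gaussian, so the $\mathrm{JS}^{\mathrm{G}_\alpha}$ row of the table admits a closed form. A hedged sketch (our own naming, following the table's convention under which $\alpha = 0$ recovers forward KL and $\alpha = 1$ reverse KL):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ) for univariate Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2.0 * s2**2) - 0.5

def js_g_alpha(mp, sp, mq, sq, alpha):
    """(1 - alpha) KL(p || G) + alpha KL(q || G), with G the normalised p^alpha q^(1-alpha)."""
    prec = alpha / sp**2 + (1.0 - alpha) / sq**2          # precision of the geometric mean
    vg = 1.0 / prec
    mg = vg * (alpha * mp / sp**2 + (1.0 - alpha) * mq / sq**2)
    sg = math.sqrt(vg)
    return (1.0 - alpha) * kl_gauss(mp, sp, mg, sg) + alpha * kl_gauss(mq, sq, mg, sg)
```

At $\alpha = 0$ the geometric mean collapses to $q$ and the expression reduces to $\operatorname{KL}(p \parallel q)$; at $\alpha = 1$ it collapses to $p$ and gives $\operatorname{KL}(q \parallel p)$, matching the interpolation described in the text.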
Figure 1. The influence of divergence skew, $\alpha$, on test set image reconstruction (measured in mean squared error). (a) $\alpha$-VAEs on the MNIST dataset; (b) breakdown of $\alpha$-VAE divergences. Deasy et al. (2020) used an expensive grid search to fix divergence skew through training. Our separate optimisation of $\alpha$ in $\mathrm{JS}^{\mathrm{G}_\alpha}$ (prefix t-), as a competing objective to the reconstruction loss, leads to strong performance and substantially improves over reverse KL from any initial skew. We plot the optimal $\alpha$, denoted by a star, for 5 training seeds, which (visually) converge to the same point.

Contributions. To overcome these issues, we extend Deasy et al. (2020) and introduce $\alpha$-VAEs, allowing for data- and encoder-dependent skew. Our findings indicate trends in final skew values that depend on proxies for data complexity in synthetic examples. We demonstrate that optimal divergence skew tends toward a balance of forward and reverse KL as dimensionality increases and that, for an increasing number of distribution modes, training skew marginally favours forward KL. In the higher-dimensional setting of standard image benchmark datasets, we then establish that final skew values converge across seeds and are consistent for a range of initial values. We further show that learning skew in $\alpha$-VAEs improves test set reconstruction loss (summarised in Figure 1) and the reconstruction of denoised images from noisy inputs, and we explain both advantages from the rate-distortion perspective (Alemi et al., 2018). Overall, we show that $\alpha$-VAEs with learnable $\alpha$ consistently outperform forward and reverse KL independent of dataset, encoder parameters, and initial skew.

## 2.
$\alpha$-VAE optimisation

### 2.1. Learning divergence parameterisations is not constrained optimisation

In order to flexibly interchange the divergence terms regularising the latent space of VAEs, it is common to formulate VAE training as a constrained optimisation problem (Higgins et al., 2017). A suitable objective to maximise is the marginal (log-)likelihood of the observed data $x \in \mathbb{R}^m$ as an expectation over the distribution of latent factors $z \in \mathbb{R}^n$:

$$
\max_\theta \left[ \mathbb{E}_{p_\theta\left(z\right)}\left[ p_\theta\left(x \mid z\right) \right] \right]. \tag{1}
$$

The latent representation can be controlled by imposing an isotropic unit Gaussian constraint on the prior $p\left(z\right) = \mathcal{N}\left(0, I\right)$, arriving at the constrained optimisation problem

$$
\max_{\phi, \theta} \mathbb{E}_{p_{\mathcal{D}}\left(x\right)}\left[ \log \mathbb{E}_{q_\phi\left(z \mid x\right)}\left[ p_\theta\left(x \mid z\right) \right] \right]
$$

$$
\text{subject to } D\left( q_\phi\left(z \mid x\right) \parallel p\left(z\right) \right) < \varepsilon, \tag{2}
$$

where $\varepsilon$ dictates the strength of the constraint and $D$ is a divergence. Equation (2) is then rewritten as a Lagrangian under the KKT conditions (Karush, 1939; Kuhn & Tucker, 2014), obtaining

$$
\mathcal{F}\left(\theta, \phi, \lambda; x, z\right) = \mathbb{E}_{q_\phi\left(z \mid x\right)}\left[ \log p_\theta\left(x \mid z\right) \right]
$$

$$
- \lambda \left( D\left( q_\phi\left(z \mid x\right) \parallel p\left(z\right) \right) - \varepsilon \right). \tag{3}
$$

However, here we are interested in learning properties of the divergence itself, rendering Equations (2) and (3) invalid.
In this setting, the constrained optimisation problem is no longer well-posed, as the constraint becomes part of the optimisation. Instead, we choose to relax these optimisation assumptions while maintaining competing objectives, the log-likelihood and the parameterised divergence

$$
\mathcal{L}\left(\theta, \phi, \lambda; x, z\right) = \mathbb{E}_{q_\phi\left(z \mid x\right)}\left[ \log p_\theta\left(x \mid z\right) \right]
$$

$$
- \lambda D_{\psi\left(x, z\right)}\left( q_\phi\left(z \mid x\right) \parallel p\left(z\right) \right), \tag{4}
$$

where $\psi\left(x, z\right)$ parameterises $D$.

Such a formulation only relates back to Equation (3) in a valid manner when changes in $\psi$ map to a family of divergences and redundant cases are avoided. For instance, it would be particularly useful if, for a given $\psi$ during optimisation, training could still be understood as a reconstruction loss term plus a closed-form divergence regularisation term. It is therefore important to consider which divergences allow for such a parameterisation, whether they have non-trivial minima, and whether the resulting properties of this optimisation generalise or are useful. To this end, we consider learning $\alpha$, the skew, in the $\mathrm{JS}^{\mathrm{G}_\alpha}$ family of divergences (Appendix A) and proceed by highlighting the connection to forward and reverse KL.

### 2.2. Skew optimisation

From an information geometric viewpoint, the skew parameter's influence on the intermediate distribution can be seen as the weighting of two distributions along a path in the statistical manifold. Masrani et al. (2019) consider the geometric path between the variational distribution $q_\phi\left(z \mid x\right)$ and the model distribution $p\left(x, z\right)$.
The expensive grid search therein required integration along this path and was later avoided by Brekelmans et al. (2020) using a moment-matching approach to learn the optimal point on the path. In contrast to these works, here we consider skew between $q_\phi\left(z \mid x\right)$ and the prior $p\left(z\right)$ as a form of regularisation.

Additionally, the property that $\mathrm{JS}^{\mathrm{G}_\alpha}$ interpolates between forward and reverse KL (Deasy et al., 2020) offers a clear mechanism for trading off their respective properties. In VAE optimisation, as we have generalised the overall objective by extending the constrained optimisation problem to competing objectives, it is possible to consider direct parameterisation of $\psi$ from Equation (4). As long as $\psi$ maps to $\left[0, 1\right]$, the output is a valid divergence (a member of the $\mathrm{JS}^{\mathrm{G}_\alpha}$ family) with extremes at forward and reverse KL, so we can consider optimising $\psi$ via gradient descent.

Before attempting the optimisation, we can derive useful properties of $\mathrm{JS}^{\mathrm{G}_\alpha}\left(P \parallel Q\right)$ which clarify how the optimisation should be carried out. In particular, it is important to understand the behaviour of $\mathrm{JS}^{\mathrm{G}_\alpha}$ with respect to $\alpha$, as this will dictate whether convergence is possible.

Proposition 1.
For Gaussian distributions $P$ and $Q$ with probability density functions $p\left( z\right)$ and ${q}_{\phi }\left( {z \mid x}\right)$ respectively, the derivative of ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\left( {P\parallel Q}\right)$ with respect to $\alpha$ is

$$
\frac{d{\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}}{d\alpha } = 2\left( {\alpha - 1}\right) \mathrm{{KL}}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right)
$$

$$
+ {2\alpha }\,\mathrm{{KL}}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right), \tag{5}
$$

with a stationary point at

$$
{\alpha }^{ * } = \frac{\operatorname{KL}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right) }{\operatorname{KL}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right) + \operatorname{KL}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) }, \tag{6}
$$

which is a global minimum in the $\alpha$ dimension of the optimisation. Proof in Appendix D.1.

A useful sanity check from Equation (6) is that ${\alpha }^{ * }$ is clearly bounded between 0 and 1 due to the non-negativity of the KL divergence. Secondly, as Equation (6) depends on the parameters $\phi$, the expression is data-dependent, meaning the optimal skew shifts during training and should not be fixed as in (Deasy et al., 2020). Finally, considered as a standalone 1D optimisation of $\alpha$, the global minimum means that training the divergence skew is convex and should be simple to minimise in practice, with a closed-form convergence rate.

Proposition 2.
The upper bound on the convergence rate of gradient descent to ${\alpha }^{ * }$ in (6) is

$$
2\left( {\operatorname{KL}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right) + \operatorname{KL}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) }\right) {D}_{1}^{2}{e}^{-{4T}}, \tag{7}
$$

where ${D}_{1} = {\begin{Vmatrix}{\alpha }_{1} - {\alpha }^{ * }\end{Vmatrix}}_{2}$, ${\alpha }_{1}$ is the initial skew value, and $T$ is the number of optimisation steps. Proof in Appendix D.2.

In the next section, we find practical optimisation of the skew in ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ to be well-behaved, suggesting that training the skew in the dual divergence is also well-posed and does not reduce to an invalid divergence or to one of the KL directions, despite an equivalent proof of convexity via differentiation not being obvious, as a mixture term appears outside of the logarithm.

## 3. Experiments

As the convexity shown in Section 2.2 suggests simple optimisation of $\alpha$, we directly train $\alpha$ as another parameter of the model, but use a separate optimiser from that of the model parameters so that our 1D analysis holds.

### 3.1. Characterising skew optimisation for ${\mathbf{{JS}}}_{ * }^{{\mathbf{G}}_{\alpha }}$

To better understand how dual $\alpha$-VAEs, denoted ${\alpha }_{ * }$-VAEs, behave in the more complex setting of modern variational inference benchmarks, we first highlight their properties on synthetic examples. In Figure 2, we depict the different optimal skew values obtained when fitting a multivariate Gaussian to an underlying additive mixture of multivariate Gaussians with trained ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ skew. As the divergence integrals are not tractable, we directly optimise the multivariate Gaussian parameters via samples from the data for all divergences.

In both plots of Figure 2, we depict the emergent low-dimensional trends in optimised skew.
For Figure 2a, we fit a 2D Gaussian ball to a 2D additive mixture of Gaussian balls with an increasing number of components in the mixture, whereas in Figure 2b, we increase the dimension of the fit and keep the ratio of mixture components to dimensions fixed at 5. As the number of mixture components increases in Figure 2a, the optimal skew for ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ decreases, favouring reverse KL as mass becomes more concentrated. Similarly, in Figure 2b, the optimal skew for ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ also decreases, underlining the need for data-dependent skewed divergences and suggesting that learnt skew in dual ${\alpha }_{ * }$-VAEs will be consistent and avoid trivial cases for more complex data.
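Propositions 1 and 2 can be sanity-checked numerically. The sketch below, assuming two illustrative one-dimensional Gaussians (not taken from the paper's experiments), computes the stationary point of Equation (6) from closed-form KL divergences and runs gradient descent on the simplified objective of Equation (18) with the optimal step size of Equation (27):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) )
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# illustrative prior p = N(0, 1) and approximate posterior q = N(0.7, 0.5^2)
kl_pq = kl_gauss(0.0, 1.0, 0.7, 0.5)   # KL(p || q)
kl_qp = kl_gauss(0.7, 0.5, 0.0, 1.0)   # KL(q || p)

# Eq. (6): stationary point of JS^{G_alpha} in alpha
alpha_star = kl_pq / (kl_pq + kl_qp)
assert 0.0 < alpha_star < 1.0          # bounded, by KL non-negativity

# Eq. (5): gradient of JS^{G_alpha} = (1-a)^2 KL(p||q) + a^2 KL(q||p) (Eq. 18)
grad = lambda a: 2 * ((a - 1) * kl_pq + a * kl_qp)

# gradient descent with the optimal step size of Eq. (27)
gamma = 1.0 / (2 * (kl_pq + kl_qp))
alpha = 0.9                            # arbitrary initial skew
for _ in range(5):
    alpha -= gamma * grad(alpha)
print(abs(alpha - alpha_star))         # converges (in one step, as Eq. 18 is quadratic)
```

Because Equation (18) is quadratic in $\alpha$, gradient descent with this step size lands on $\alpha^{*}$ exactly, consistent with the closed-form convergence behaviour discussed above.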
| VAE | MNIST | Fashion-MNIST | dSprites | Chairs | CelebA |
| --- | --- | --- | --- | --- | --- |
| $\beta$-VAE ($\mathrm{KL}(q \parallel p)$, $\beta = 4$, (Deasy et al., 2020)) | 11.75 | 13.32 | 10.51 | 20.79 | 269.52 |
| InfoVAE (MMD, $\lambda = {500}$, (Deasy et al., 2020)) | 13.19 | 11.10 | 11.87 | 18.85 | 271.71 |
| Vanilla VAE ($\mathrm{KL}(q \parallel p)$) | ${10.67} \pm {0.27}$ | ${12.36} \pm {0.22}$ | ${7.78} \pm {0.24}$ | ${20.33} \pm {0.34}$ | ${262.53} \pm {2.07}$ |
| $\alpha$-VAE (fixed $\alpha = {0.5}$) | ${11.24} \pm {0.05}$ | ${11.07} \pm {0.67}$ | ${12.07} \pm {0.06}$ | ${19.11} \pm {0.14}$ | ${270.33} \pm {0.78}$ |
| ${\alpha }_{ * }$-VAE (fixed ${\alpha }_{ * } = {0.5}$) | ${8.82} \pm {0.04}$ | ${9.80} \pm {0.06}$ | ${5.72} \pm {0.07}$ | ${16.40} \pm {0.15}$ | ${264.27} \pm {0.45}$ |
| $\alpha$-VAE | ${8.89} \pm {0.07}$ | ${9.90} \pm {0.05}$ | ${5.03} \pm {0.31}$ | ${16.48} \pm {0.08}$ | ${259.50} \pm {0.32}$ |
| ${\alpha }_{ * }$-VAE | ${8.52} \pm {0.07}$ | ${9.59} \pm {0.03}$ | ${3.88} \pm {0.27}$ | ${15.98} \pm {0.17}$ | ${259.52} \pm {0.36}$ |
Table 1. Final model reconstruction error across regularisation divergences and datasets. For trainable-skew $\alpha$-VAEs (bottom two rows), final $\alpha$ values are given in Table 2. ${\alpha }_{ * }$ indicates dual $\alpha$-VAEs.

![01963e43-f54e-7610-b9dc-808075960c0c_3_166_646_683_222_0.jpg](images/01963e43-f54e-7610-b9dc-808075960c0c_3_166_646_683_222_0.jpg)

(a) Learnt $\alpha$ vs. mixture components. (b) Learnt $\alpha$ vs. mixture dimension.

Figure 2. Emergent trends in optimised skew when fitting a Gaussian ball to an additive mixture of Gaussians in ND for increasing data complexity.

### 3.2. Benchmark image dataset performance

Divergence skew convergence. As the expression for the minimum in Equation (6) is encoder-dependent, we begin our benchmark dataset assessment by depicting the $\alpha$ landscape. In Figure 3, we see that the optimal $\alpha$ decreases during training, before stabilising in the final epochs. This supports the argument that naively fixing the skew is suboptimal and leads to inferior solutions; even a hyperparameter search for a fixed skew followed by training is not sufficient.

The divergences converge to a consistent minimum. This confirms that our 1D assessment of convexity and the separate optimisation of $\alpha$ successfully lead to stable skew values (see Figure 4 in the Appendix). In addition, we exemplify how the skew evolves across 5 different training seeds. These plots delineate the data and encoder dependency: consistency across seeds, which initialise model parameters and training procedures (e.g. dropout and batching), suggests that learnt skew predominantly derives from, and varies between, datasets (see Table 2).

Robustness to noise. As learning the skew allows for a problem-dependent measurement of the distance to the prior, which better accommodates more dispersed encoded distributions, we tested how learning skew regularises VAEs in various noise settings. In Figure 5, we present denoising experiments where we add Gaussian noise, $\mathcal{N}\left( {0,{\sigma }^{2}}\right)$, to the normalised input images and clip to $\left\lbrack {0,1}\right\rbrack$. Although intuition about where the skew balance should lie degrades in the noisy setting, Figure 5 clearly demonstrates that our trainable skew in ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$, and its dual form, consistently provides lower test-set reconstruction loss across noise levels. As a sanity check, we can also verify the expected trends in the other divergences: the less dispersion-friendly reverse KL performs poorly at higher noise levels, where forward KL becomes more appropriate, and MMD exhibits its expected robust behaviour at high noise levels (Zhao et al., 2019).

Improved reconstruction as a rate-distortion trade-off. We test our model's reconstruction-loss performance in the standard clean-image setting. Table 1 demonstrates lower reconstruction loss across multiple datasets, outperforming both the naive choice of fixed $\alpha = {0.5}$ and the KL divergence, the latter by a substantial margin. We further detail the relationship with fixed $\alpha$ in Figure 1a, Figure 6, and the supplementary figures in Appendix E, observing low reconstruction loss from an arbitrary starting skew. To explain this performance gain, we plot distortion (MSE) against rate (Figure 7 in Appendix E), measured using the reverse KL divergence for all divergences. The shift down and to the right for $\alpha$-VAEs means that they trade off rate for distortion, even more so when the skew is learnt, giving higher-quality image reconstruction when starting with an unknown divergence scaling $\lambda$.

## 4. Conclusion

We recast VAE optimisation as a multi-objective task with competing reconstruction and divergence regularisation terms.
This allowed us to use gradient descent to directly learn the divergence skew used to control the variational distribution ${q}_{\phi }\left( {z \mid x}\right)$ relative to the prior distribution $p\left( z\right)$. The resulting method, the $\alpha$-VAE, was shown to be well-posed as a 1D optimisation with a dependency on both the encoder and the data, to have improved reconstruction and denoising properties over standard regularisation techniques, and to avoid an expensive grid search over skew values. Moreover, as $\alpha$-VAEs also generalise the $\beta$-VAE family, they are a practical, efficient, and unbiased choice of VAE across a range of tasks and domains.

## References

Alemi, A., Poole, B., Fischer, I., Dillon, J., Saurous, R. A., and Murphy, K. Fixing a broken ELBO. In International Conference on Machine Learning, pp. 159-168. PMLR, 2018.

Aubry, M., Maturana, D., Efros, A., Russell, B., and Sivic, J. Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models. In CVPR, 2014.

Bishop, C. M. Pattern recognition and machine learning. Springer, 2006.

Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859-877, 2017.

Bozkurt, A., Esmaeili, B., Tristan, J., Brooks, D. H., Dy, J. G., and van de Meent, J. Rate-regularization and generalization in variational autoencoders. In The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pp. 3880-3888. PMLR, 2021.

Brekelmans, R., Masrani, V., Wood, F., Steeg, G. V., and Galstyan, A. All in the exponential family: Bregman duality in thermodynamic variational inference. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp.
1111-1122. PMLR, 2020.

Bubeck, S. Convex optimization: Algorithms and complexity. arXiv preprint arXiv:1405.4980, 2014.

Burgess, C. P., Higgins, I., Pal, A., Matthey, L., Watters, N., Desjardins, G., and Lerchner, A. Understanding disentangling in $\beta$-VAE. arXiv preprint arXiv:1804.03599, 2018.

Child, R. Very deep VAEs generalize autoregressive models and can outperform them on images. arXiv preprint arXiv:2011.10650, 2020.

Deasy, J., Simidjievski, N., and Lió, P. Constraining variational inference with geometric Jensen-Shannon divergence. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

Dieng, A. B., Tran, D., Ranganath, R., Paisley, J., and Blei, D. Variational inference via $\chi$ upper bound minimization. In Advances in Neural Information Processing Systems, pp. 2732-2741, 2017.

Germain, M., Gregor, K., Murray, I., and Larochelle, H. MADE: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pp. 881-889, 2015.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Hensman, J., Zwießele, M., and Lawrence, N. Tilted variational Bayes. In Artificial Intelligence and Statistics, pp. 356-364, 2014.

Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6, 2017.

Huang, C.-W., Tan, S., Lacoste, A., and Courville, A. C. Improving explorability in variational inference with annealed variational objectives. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp.
9701-9711. Curran Associates, Inc., 2018.

Huang, S., Makhzani, A., Cao, Y., and Grosse, R. B. Evaluating lossy compression rates of deep generative models. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 4444-4454. PMLR, 2020.

Karush, W. Minima of functions of several variables with inequalities as side constraints. M.Sc. Dissertation, Dept. of Mathematics, Univ. of Chicago, 1939.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In ICLR 2014, 2014. URL https://arxiv.org/abs/1312.6114.

Kuhn, H. W. and Tucker, A. W. Nonlinear programming. In Traces and emergence of nonlinear programming, pp. 247-258. Springer, 2014.

Kullback, S. and Leibler, R. A. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79-86, 1951.

Larochelle, H. and Murray, I. The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 29-37, 2011.

LeCun, Y., Cortes, C., and Burges, C. MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010.

Li, Y. and Turner, R. E. Rényi divergence variational inference. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 1073-1081. Curran Associates, Inc., 2016.

Lin, J. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151, 1991.

Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.

Lucas, J., Tucker, G., Grosse, R. B., and Norouzi, M. Understanding posterior collapse in generative latent variable models. In DGS@ICLR, 2019.

Masrani, V., Le, T. A., and Wood, F.
The thermodynamic variational objective. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 11521-11530, 2019.

Matthey, L., Higgins, I., Hassabis, D., and Lerchner, A. dSprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.

Murphy, K. P. Machine learning: a probabilistic perspective. MIT Press, 2012.

Neal, R. M. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.

Niculescu, C. and Persson, L.-E. Convex functions and their applications. Springer, 2006.

Nielsen, F. On the Jensen-Shannon symmetrization of distances relying on abstract means. Entropy, 21(5):485, 2019. ISSN 1099-4300. doi: 10.3390/e21050485. Extended version available at https://arxiv.org/abs/1904.04017.

Nielsen, F. On a variational definition for the Jensen-Shannon symmetrization of distances based on the information radius. Entropy, 23(4), 2021. ISSN 1099-4300.

Nielsen, F. and Garcia, V. Statistical exponential families: A digest with flash cards. arXiv preprint arXiv:0911.4863, 2009.

Nishiyama, T. Generalized Bregman and Jensen divergences which include some f-divergences. arXiv preprint arXiv:1808.06148, 2018.

Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 1278-1286. PMLR, 2014.

Rudin, W. et al. Principles of mathematical analysis, volume 3. McGraw-Hill, New York, 1976.

Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. Wasserstein auto-encoders. In International Conference on Learning Representations, 2018.

Vahdat, A. and Kautz, J. NVAE: A deep hierarchical variational autoencoder. arXiv preprint arXiv:2007.03898, 2020.
Xiao, H., Rasul, K., and Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017.

Zhang, M., Bird, T., Habib, R., Xu, T., and Barber, D. Variational f-divergence minimization. arXiv preprint arXiv:1907.11891, 2019.

Zhao, S., Song, J., and Ermon, S. InfoVAE: Balancing learning and inference in variational autoencoders. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, pp. 5885-5892. AAAI Press, Palo Alto, CA, USA, 2019.

### A. The ${\mathrm{JS}}^{{\mathrm{G}}_{\alpha }}$ divergence family

For distributions $P$ and $Q$ of a continuous random variable $Z = {\left\lbrack {Z}_{1},\ldots ,{Z}_{n}\right\rbrack }^{\mathrm{T}}$, the forward KL divergence (Kullback & Leibler, 1951) is defined as

$$
\operatorname{KL}\left( {P\parallel Q}\right) = {\int }_{Z}p\left( z\right) \log \left\lbrack \frac{p\left( z\right) }{q\left( z\right) }\right\rbrack {dz}, \tag{8}
$$

where $p$ and $q$ are the probability densities of $P$ and $Q$ respectively, $z \in {\mathbb{R}}^{n}$, and the reverse KL divergence refers to $\operatorname{KL}\left( {Q\parallel P}\right)$.

Reverse KL from a standard normal distribution ${\mathcal{N}}_{2}\left( {0, I}\right)$ to a diagonal multivariate normal distribution ${\mathcal{N}}_{1}\left( {\mu ,\Sigma }\right)$, $\mu \in {\mathbb{R}}^{n}$ and $\Sigma \in {\mathbb{R}}^{n \times n}$, is used throughout variational models (Higgins et al., 2017; Kingma & Welling, 2014; Neal, 2012) and is known to enforce zero-avoiding behaviour on ${\mathcal{N}}_{1}$ when minimised (Bishop, 2006; Murphy, 2012). On the other hand, the forward KL divergence is known for its zero-forcing property (Bishop, 2006; Murphy, 2012).
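The asymmetry between the two KL directions is easy to exhibit numerically with the closed-form KL between univariate Gaussians; a minimal sketch, with illustrative parameter values not taken from the paper:

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) ), i.e. Eq. (8) for Gaussians
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

p = (0.0, 1.0)               # standard normal
q = (1.5, 0.3)               # narrow, shifted approximation (illustrative)

forward = kl_gauss(*p, *q)   # KL(P || Q): large when q misses mass of p
reverse = kl_gauss(*q, *p)   # KL(Q || P): penalises q placing mass where p is small
print(forward, reverse)      # the two directions differ markedly
assert not math.isclose(forward, reverse)
```

Here the under-dispersed `q` incurs a far larger forward KL than reverse KL, illustrating why the two directions regularise the variational distribution so differently.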
However, there exist well-known drawbacks of the KL divergence, such as the lack of an upper bound, leading to unstable optimisation and poor approximation (Hensman et al., 2014), as well as its asymmetry, $\operatorname{KL}\left( {P\parallel Q}\right) \neq \operatorname{KL}\left( {Q\parallel P}\right)$. Under-dispersed approximations relative to the exact posterior also produce difficulties with light-tailed posteriors when the variational distribution has heavier tails (Dieng et al., 2017).

One attempt at remedying these issues is the well-known symmetrisation, the Jensen-Shannon (JS) divergence (Lin, 1991)

$$
\operatorname{JS}\left( {p\left( z\right) \parallel q\left( z\right) }\right) = \frac{1}{2}\operatorname{KL}\left( {p\parallel \frac{p + q}{2}}\right) + \frac{1}{2}\operatorname{KL}\left( {q\parallel \frac{p + q}{2}}\right). \tag{9}
$$

Although the JS divergence is bounded and offers some intuition through symmetry, it includes the problematic mixture distribution $\frac{p + q}{2}$. This term means that no closed-form expression exists for the JS divergence between two multivariate normal distributions using Equation (9).

Recently, (Nielsen, 2019) and (Nishiyama, 2018) have proposed a further generalisation of the JS divergence using abstract means (quasi-arithmetic means (Niculescu & Persson, 2006), also known as Kolmogorov-Nagumo means).
By choosing the weighted geometric mean ${\mathrm{G}}_{\alpha }\left( {x, y}\right) = {x}^{\alpha }{y}^{1 - \alpha }$ for $\alpha \in \left\lbrack {0,1}\right\rbrack$, and using the property that the weighted product of exponential family distributions (which includes the multivariate normal) stays in the exponential family (Nielsen & Garcia, 2009), we have the divergence

$$
{\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\left( {p\left( z\right) \parallel q\left( z\right) }\right) = \left( {1 - \alpha }\right) \mathrm{{KL}}\left( {p\parallel {G}_{\alpha }\left( {p, q}\right) }\right) \tag{10}
$$

$$
+ \alpha \,\mathrm{{KL}}\left( {q\parallel {G}_{\alpha }\left( {p, q}\right) }\right). \tag{11}
$$

${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$, the skew-geometric Jensen-Shannon divergence, between two multivariate Gaussians ${\mathcal{N}}_{1}\left( {{\mu }_{1},{\Sigma }_{1}}\right)$ and ${\mathcal{N}}_{2}\left( {{\mu }_{2},{\Sigma }_{2}}\right)$ then admits the closed form

$$
{\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\left( {{\mathcal{N}}_{1}\parallel {\mathcal{N}}_{2}}\right) = \left( {1 - \alpha }\right) \mathrm{{KL}}\left( {{\mathcal{N}}_{1}\parallel {\mathcal{N}}_{\alpha }}\right) \tag{12}
$$

$$
+ \alpha \,\mathrm{{KL}}\left( {{\mathcal{N}}_{2}\parallel {\mathcal{N}}_{\alpha }}\right), \tag{13}
$$

with the equivalent dual divergence being

$$
{\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}\left( {{\mathcal{N}}_{1}\parallel {\mathcal{N}}_{2}}\right) = \left( {1 - \alpha }\right) \mathrm{{KL}}\left( {{\mathcal{N}}_{\alpha }\parallel {\mathcal{N}}_{1}}\right) \tag{14}
$$

$$
+ \alpha \,\mathrm{{KL}}\left( {{\mathcal{N}}_{\alpha }\parallel {\mathcal{N}}_{2}}\right), \tag{15}
$$

where ${\mathcal{N}}_{\alpha }$ has parameters

$$
{\Sigma }_{\alpha } = {\left( \left( 1 - \alpha \right) {\Sigma }_{1}^{-1} + \alpha {\Sigma }_{2}^{-1}\right) }^{-1} \tag{16}
$$

$$
{\mu }_{\alpha } = {\Sigma }_{\alpha }\left( {\left( {1 - \alpha }\right) {\Sigma }_{1}^{-1}{\mu }_{1} + \alpha {\Sigma }_{2}^{-1}{\mu }_{2}}\right). \tag{17}
$$

In simple terms, these divergences measure a weighted arithmetic mean of divergences from/to the prior/variational distribution to/from a weighted geometric mean of the two distributions. We proceed by examining why the skew parameter $\alpha$ should be learnt (replacing $\psi$ in Equation (4)), what this means in divergence space, and where the optimal $\alpha$ lies.

## B. Related work

Since its introduction (Nielsen, 2021), ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ (with $\alpha = {0.5}$) has been used to decompose and estimate the multi-modal ELBO loss as regularisation in multi-modal VAEs. (Deasy et al., 2020) further investigated the potential of ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ at different skew values, in turn demonstrating improved reconstruction performance of VAEs. In contrast, in this paper we propose a data-dependent learnable skew, which retains the benefits of using an optimal skew but avoids an expensive grid search. This provides both an efficient and a practical way of training VAEs, leading to improved denoising and reconstruction performance as well as better lossy-compression rates, as focused on in (Huang et al., 2020). In a similar context, our work is related to (Brekelmans et al., 2020), which introduces an optimal integration schedule via dynamic parameter selection when approximating the Thermodynamic Variational Objective (Masrani et al., 2019).

$\alpha$-VAEs extend the standard VAE paradigm (Kingma & Welling, 2014; Rezende et al., 2014) with a regularisation constraint inspired by recent work on closed-form expressions for statistical divergences (Nielsen, 2019; Nishiyama, 2018). In particular, $\alpha$-VAEs offer a stable and intuitive regularisation mechanism. This allows optimal interpolation between forward and reverse KL divergence, thereby combating the issue of posterior collapse (Lucas et al., 2019).
In this regard, our work is related to approaches that address this issue through KL annealing during training (Bozkurt et al., 2021; Huang et al., 2018; Burgess et al., 2018). In a more general sense, this work is also related to other approaches that utilise various statistical divergences and distances for latent space regularisation as an alternative to the conventional KL divergence (Hensman et al., 2014; Tolstikhin et al., 2018; Dieng et al., 2017; Zhang et al., 2019; Zhao et al., 2019; Li & Turner, 2016).

## C. Benchmark datasets for skew exploration

Throughout our experiments we evaluate the reconstruction loss (mean squared error) on five standard benchmark image datasets: MNIST, ${28} \times {28}$ black and white images of handwritten digits (LeCun et al., 2010); Fashion-MNIST, ${28} \times {28}$ black and white images of clothing (Xiao et al., 2017); Chairs, ${64} \times {64}$ black and white images of 3D chairs (Aubry et al., 2014); dSprites, ${64} \times {64}$ black and white images of 2D shapes procedurally generated from 6 ground-truth independent latent factors (Matthey et al., 2017); and CelebA, colour images of celebrity faces resampled to ${64} \times {64} \times 3$ (Liu et al., 2015). For fair comparison, we follow Higgins et al. (2017) by selecting a common neural architecture across experiments (full details in Appendix ??). For consistent analysis, rather than searching within architecture and hyperparameter spaces for the best performing model by some metric, we standardise the comparison and characterise the benefit of learning divergence skew.

## D. Proofs

### D.1. Proposition 1.

Proof.
First, simplifying the divergence

$$
{\mathrm{{JS}}}^{{G}_{\alpha }} = \left( {1 - \alpha }\right) \mathrm{{KL}}\left( {p\parallel {p}^{\alpha }{q}^{1 - \alpha }}\right) + \alpha \mathrm{{KL}}\left( {q\parallel {p}^{\alpha }{q}^{1 - \alpha }}\right)
$$

$$
= \left( {1 - \alpha }\right) {\int }_{z}p\log \left\lbrack \frac{p}{{p}^{\alpha }{q}^{1 - \alpha }}\right\rbrack {dz}
$$

$$
+ \alpha {\int }_{z}q\log \left\lbrack \frac{q}{{p}^{\alpha }{q}^{1 - \alpha }}\right\rbrack {dz}
$$

$$
= {\left( 1 - \alpha \right) }^{2}{\int }_{z}p\log \left\lbrack \frac{p}{q}\right\rbrack {dz} + {\alpha }^{2}{\int }_{z}q\log \left\lbrack \frac{q}{p}\right\rbrack {dz}
$$

$$
= {\left( 1 - \alpha \right) }^{2}\mathrm{{KL}}\left( {p\parallel q}\right) + {\alpha }^{2}\mathrm{{KL}}\left( {q\parallel p}\right). \tag{18}
$$

Then, differentiating (18) with respect to $\alpha$ and setting the derivative to zero,

$$
\frac{d{\mathrm{{JS}}}^{{G}_{\alpha }}}{d\alpha } = 2\left( {\left( {\alpha - 1}\right) \mathrm{{KL}}\left( {p\parallel q}\right) + \alpha \mathrm{{KL}}\left( {q\parallel p}\right) }\right) = 0, \tag{19}
$$

and rearranging gives Equation (6). Differentiating again,

$$
\frac{{d}^{2}{\mathrm{{JS}}}^{{G}_{\alpha }}}{d{\alpha }^{2}} = 2\left( {\mathrm{{KL}}\left( {p\parallel q}\right) + \mathrm{{KL}}\left( {q\parallel p}\right) }\right) \geq 0, \tag{20}
$$

which demonstrates the global minimum property, as the KL divergence is always non-negative.

### D.2. Proposition 2.

Proof. We first define the strong-convexity and smoothness constants of $f\left( \alpha \right) = {\mathrm{{JS}}}^{{G}_{\alpha }}$, using its simplified form in Equation (18).
A function $f\left( \alpha \right) : {\mathbb{R}}^{n} \mapsto \mathbb{R}$ is $\lambda$-strongly convex if

$$
{\nabla }^{2}f\left( \alpha \right) \succcurlyeq {\lambda I}, \tag{21}
$$

where $\succcurlyeq$ is a generalised inequality; in our 1D case, this expression reduces to the second derivative with respect to $\alpha$, giving

$$
\lambda = 2\left( {\mathrm{{KL}}\left( {p\parallel q}\right) + \mathrm{{KL}}\left( {q\parallel p}\right) }\right). \tag{22}
$$

$f\left( \alpha \right)$ is also $\beta$-smooth if $\frac{df}{d\alpha }$ is $\beta$-Lipschitz,

$$
\parallel \nabla f\left( x\right) - \nabla f\left( y\right) \parallel \leq \beta \parallel x - y\parallel. \tag{23}
$$

As our $f$ is twice differentiable, the Mean Value Theorem (Rudin et al., 1976) applied to $g = \frac{df}{d\alpha }$,

$$
\frac{g\left( x\right) - g\left( y\right) }{x - y} = \frac{{dg}\left( z\right) }{d\alpha } = \frac{{d}^{2}f\left( z\right) }{d{\alpha }^{2}}\quad \text{for some } x < z < y, \tag{24}
$$

gives the necessary bound with the same constant

$$
\beta = 2\left( {\mathrm{{KL}}\left( {p\parallel q}\right) + \mathrm{{KL}}\left( {q\parallel p}\right) }\right). \tag{25}
$$

We can now substitute into the result from convex optimisation with gradient descent (Bubeck, 2014) that, for a $\lambda$-strongly convex and $\beta$-smooth function, the optimal step size $\gamma$ is

$$
\gamma = \frac{2}{\lambda + \beta } \tag{26}
$$

$$
= \frac{1}{2\left( {\mathrm{{KL}}\left( {p\parallel q}\right) + \mathrm{{KL}}\left( {q\parallel p}\right) }\right) }, \tag{27}
$$

and the upper bound on the convergence rate is

$$
\beta {\begin{Vmatrix}{\alpha }_{1} - {\alpha }^{ * }\end{Vmatrix}}_{2}^{2}{e}^{-\frac{4T}{\kappa }}, \tag{28}
$$

where $\kappa = \frac{\beta }{\lambda } = 1$ is the condition number, completing the proof.

## E. Further results
| Divergence | MNIST | FashionMNIST | dSprites | Chairs | CelebA |
| --- | --- | --- | --- | --- | --- |
| t-${\mathrm{JS}}^{{\mathrm{G}}_{\alpha }}$ | ${0.118} \pm {0.001}$ | ${0.165} \pm {0.002}$ | ${0.06} \pm {0.007}$ | ${0.121} \pm {0.001}$ | ${0.0377} \pm {0.000}$ |
| t-${\mathrm{JS}}_{ * }^{{\mathrm{G}}_{\alpha }}$ | ${0.360} \pm {0.000}$ | ${0.377} \pm {0.002}$ | ${0.39} \pm {0.001}$ | ${0.365} \pm {0.001}$ | ${0.310} \pm {0.000}$ |
Table 2. Mean learnt $\alpha$ for ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ and ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ with standard deviation across 5 different training seeds.

![01963e43-f54e-7610-b9dc-808075960c0c_8_173_1197_1410_616_0.jpg](images/01963e43-f54e-7610-b9dc-808075960c0c_8_173_1197_1410_616_0.jpg)

Figure 3. $\alpha$ landscape at different training epochs for FashionMNIST.

![01963e43-f54e-7610-b9dc-808075960c0c_9_168_489_1412_1252_0.jpg](images/01963e43-f54e-7610-b9dc-808075960c0c_9_168_489_1412_1252_0.jpg)

Figure 4. The robust nature of our method's convergence: $\alpha$ convergence across a range of starting values and seeds using the MNIST dataset.

![01963e43-f54e-7610-b9dc-808075960c0c_10_176_384_663_434_0.jpg](images/01963e43-f54e-7610-b9dc-808075960c0c_10_176_384_663_434_0.jpg)

Figure 5. Reconstruction loss for noisy input images across different noise levels and regularisation divergences on MNIST.

![01963e43-f54e-7610-b9dc-808075960c0c_10_919_400_653_427_0.jpg](images/01963e43-f54e-7610-b9dc-808075960c0c_10_919_400_653_427_0.jpg)

Figure 6. $\alpha$-VAEs on the Fashion-MNIST dataset.

![01963e43-f54e-7610-b9dc-808075960c0c_10_172_1321_1414_523_0.jpg](images/01963e43-f54e-7610-b9dc-808075960c0c_10_172_1321_1414_523_0.jpg)

Figure 7. Rate-distortion curves for KL, ${\mathrm{JS}}_{ * }^{{G}_{\alpha }}$, and ${\mathrm{JS}}_{ * }^{{G}_{\alpha }}$ with learnt $\alpha$ on MNIST. Dashed or full lines connect values from the training or test set respectively. $\lambda$ values vary consistently for each divergence from left to right but are only annotated for KL.
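The closed-form quantities of Equations (12)-(17) can also be evaluated directly. The following NumPy sketch, restricted to diagonal covariances and using arbitrary illustrative parameters, computes ${\mathcal{N}}_{\alpha}$ and both the primal and dual divergences, checking only the non-negativity that follows from their definition as weighted sums of KL terms:

```python
import numpy as np

def kl_diag(mu1, v1, mu2, v2):
    # KL between diagonal Gaussians N(mu1, diag(v1)) and N(mu2, diag(v2))
    return 0.5 * np.sum(np.log(v2 / v1) + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)

def n_alpha(mu1, v1, mu2, v2, a):
    # Eqs (16)-(17): parameters of the geometric-mean Gaussian N_alpha
    v_a = 1.0 / ((1 - a) / v1 + a / v2)              # diagonal case
    mu_a = v_a * ((1 - a) * mu1 / v1 + a * mu2 / v2)
    return mu_a, v_a

def js_g(mu1, v1, mu2, v2, a):
    # Eqs (12)-(13): skew-geometric JS divergence
    mu_a, v_a = n_alpha(mu1, v1, mu2, v2, a)
    return (1 - a) * kl_diag(mu1, v1, mu_a, v_a) + a * kl_diag(mu2, v2, mu_a, v_a)

def js_g_dual(mu1, v1, mu2, v2, a):
    # Eqs (14)-(15): the dual divergence
    mu_a, v_a = n_alpha(mu1, v1, mu2, v2, a)
    return (1 - a) * kl_diag(mu_a, v_a, mu1, v1) + a * kl_diag(mu_a, v_a, mu2, v2)

mu1, v1 = np.zeros(3), np.ones(3)                    # standard normal
mu2, v2 = np.array([1.0, -0.5, 0.2]), np.array([0.5, 2.0, 1.5])
for a in (0.25, 0.5, 0.75):
    primal, dual = js_g(mu1, v1, mu2, v2, a), js_g_dual(mu1, v1, mu2, v2, a)
    assert primal >= 0 and dual >= 0                 # weighted sums of KLs
assert js_g(mu1, v1, mu1, v1, 0.5) < 1e-12           # vanishes for identical inputs
```

Both divergences stay non-negative for any $\alpha \in [0, 1]$ and collapse to zero when the two Gaussians coincide, matching the closed forms in Appendix A.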
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/av2hdS1rLI/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/av2hdS1rLI/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b4a51ebfee0062dd51393ad905782cf582e35d68
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/av2hdS1rLI/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,232 @@
§ $\alpha$-VAES: OPTIMISING VARIATIONAL INFERENCE BY LEARNING DATA-DEPENDENT DIVERGENCE SKEW

§ ANONYMOUS AUTHORS ${}^{1}$

§ ABSTRACT

The skew-geometric Jensen-Shannon divergence $\left( {\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\right)$ allows for an intuitive interpolation between forward and reverse Kullback-Leibler (KL) divergence based on the skew parameter $\alpha$. While the benefits of the skew in ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ are clear (balancing forward and reverse KL in a comprehensible manner), the choice of optimal skew remains opaque and requires an expensive grid search. In this paper we introduce $\alpha$-VAEs, which extend the ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ variational autoencoder by allowing for a learnable, and therefore data-dependent, skew. We motivate the use of a parameterised skew in the dual divergence by analysing trends dependent on data complexity in synthetic examples. We also prove and discuss the dependency of the divergence minimum on the input data and encoder parameters, before empirically demonstrating that this dependency does not reduce to either direction of KL divergence for benchmark datasets. Finally, we demonstrate that optimised skew values consistently converge across a range of initial values and provide improved denoising and reconstruction properties.
These render $\alpha$ -VAEs an efficient and practical modelling choice across a range of tasks, datasets, and domains.
+
+§ 1. INTRODUCTION
+
+As variational inference (VI) progresses, state-of-the-art Variational AutoEncoders (VAEs) increase in complexity (Vahdat & Kautz, 2020; Child, 2020) while continuing to minimise the Evidence Lower BOund (ELBO) (Blei et al., 2017). Compared to generative adversarial networks (GANs) (Goodfellow et al., 2014), VAEs necessitate less stringent and problem-dependent training regimes, and, compared to autoregressive models (Larochelle & Murray, 2011; Germain et al., 2015) (which can be interpreted as instances of very deep VAEs (Child, 2020)), they are less computationally expensive and more efficient to sample from. VAE learning requires optimisation of an objective which balances the quality of decoded reconstructions from encoded representations with a regularising divergence term penalising latent-space deviations from a fixed prior distribution.
+
+The divergence term should be strongly coupled to the predominant assumption of latent variables parameterised by a multivariate Gaussian, ${p}_{\theta }\left( z\right) = \mathcal{N}\left( {\mu ,{\sigma }^{2}}\right)$ with $z,\mu ,\sigma \in {\mathbb{R}}^{n}$ , approximated by ${q}_{\phi }\left( {z \mid x}\right)$ with $x \in {\mathbb{R}}^{m}$ and $n \leq m$ . For instance, in the original VAE (Kingma & Welling, 2014), KL divergence (Kullback & Leibler, 1951) naturally constrained the variational distribution to an isotropic Gaussian unit ball, $\operatorname{KL}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel \mathcal{N}\left( {0,I}\right) }\right)$ , despite unfavourable properties (Bishop, 2006) such as unboundedness and asymmetry.
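The KL regulariser just mentioned has a simple closed form for a diagonal Gaussian posterior against $\mathcal{N}(0, I)$. A minimal sketch of the two competing terms (the function names and the squared-error Gaussian-decoder reconstruction are our illustrative assumptions, not this paper's implementation):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def vae_objective(x, x_recon, mu, log_var, lam=1.0):
    # Reconstruction quality traded off against the divergence penalty
    # pulling the variational posterior toward the fixed prior.
    recon = np.sum((x - x_recon) ** 2)  # Gaussian decoder, up to constants
    return recon + lam * kl_to_standard_normal(mu, log_var)
```

The KL term is zero exactly when the posterior matches the prior and grows without bound as they separate, which is the unboundedness referred to above.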
Moreover, KL does not capitalise on the full flexibility of the wider family of exponential distributions, a recent direction which has tightened the ELBO (Brekelmans et al., 2020; Masrani et al., 2019) and rendered VAE divergence regularisation more interpretable in distribution space via the skew-geometric Jensen-Shannon $\left( {\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\right)$ divergence (Nielsen, 2019; Deasy et al., 2020). We henceforth refer to VAEs with skewed geometric divergences as $\alpha$ -VAEs, subsuming $\beta$ -VAEs.
+
+Divergence skew in VAEs balances the contrasting properties of forward and reverse KL (such as zero-avoidance/forcing) and circumvents opaque divergence terms by interpolating between them (Bishop, 2006). However, an expensive grid search over skew values fixed through training is necessary to optimise for tasks such as image reconstruction. This is particularly problematic as the link between optimal skew and dataset properties is neither clear nor easily resolved. Moreover, when skew is treated as static, the divergence constraint does not change during training and therefore does not reflect improvements in the encoder and its embeddings. Instead, an improved optimisation would update the divergence skew relative to these factors without using prior knowledge or compromising performance.
+
+${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .
+
+Preliminary work. Under review by INNF+ 2021. Do not distribute.
+
+| Divergence | KL expression | Train skew |
+| --- | --- | --- |
+| KL (ELBO) | $\operatorname{KL}\left( {q\parallel p}\right)$ | ✘ |
+| ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ | $\left( {1 - \alpha }\right) \mathrm{{KL}}\left( {p\parallel {p}^{\alpha }{q}^{1 - \alpha }}\right) + \alpha \mathrm{{KL}}\left( {q\parallel {p}^{\alpha }{q}^{1 - \alpha }}\right)$ | ✘ |
+| ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ | $\left( {1 - \alpha }\right) \mathrm{{KL}}\left( {{p}^{\alpha }{q}^{1 - \alpha }\parallel p}\right) + \alpha \mathrm{{KL}}\left( {{p}^{\alpha }{q}^{1 - \alpha }\parallel q}\right)$ | ✘ |
+| t-${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ | $\left( {1 - \alpha }\right) \mathrm{{KL}}\left( {p\parallel {p}^{\alpha }{q}^{1 - \alpha }}\right) + \alpha \mathrm{{KL}}\left( {q\parallel {p}^{\alpha }{q}^{1 - \alpha }}\right)$ | ✓ |
+| t-${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ | $\left( {1 - \alpha }\right) \mathrm{{KL}}\left( {{p}^{\alpha }{q}^{1 - \alpha }\parallel p}\right) + \alpha \mathrm{{KL}}\left( {{p}^{\alpha }{q}^{1 - \alpha }\parallel q}\right)$ | ✓ |
+
+(a) $\alpha$ -VAEs on the MNIST dataset (reconstruction loss vs. $\alpha$ ). (b) Breakdown of $\alpha$ -VAE divergences.
+
+Figure 1. The influence of divergence skew, $\alpha$ , on test set image reconstruction (measured in mean squared error). (Deasy et al., 2020) used an expensive grid search to fix divergence skew through training. Our separate optimisation of $\alpha$ in ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ (prefix t-), as a competing objective to reconstruction loss, leads to strong performance and substantially improves over reverse KL from any initial skew. We plot optimal $\alpha$ , denoted by a star, for 5 training seeds which (visually) converge to the same point.
+
+Contributions.
To overcome these issues, we extend (Deasy et al., 2020) and introduce $\alpha$ -VAEs by allowing for data- and encoder-dependent skew. Our findings indicate trends in final skew values which are dependent on proxies for data complexity in synthetic examples. We demonstrate that optimal divergence skew tends toward a balance of forward and reverse KL as dimensionality increases and that, for increasing distribution modes, training skew marginally favours forward KL. In the higher-dimensional setting of standard image benchmark datasets, we then establish that final skew values converge across seeds and are consistent for a range of initial values. We further exhibit that learning skew in $\alpha$ -VAEs has a positive impact on test set reconstruction loss (summarised in Figure 1) and on reconstructing denoised images from noisy inputs, and explain both advantages from the rate-distortion perspective (Alemi et al., 2018). Overall, we show that $\alpha$ -VAEs with learnable $\alpha$ consistently outperform forward and reverse KL independent of dataset, encoder parameters, and initial skew.
+
+§ 2. $\alpha$ -VAE OPTIMISATION
+
+§ 2.1. LEARNING DIVERGENCE PARAMETERISATIONS IS NOT CONSTRAINED OPTIMISATION
+
+In order to flexibly interchange divergence terms regularising the latent space of VAEs, it is common to formulate VAE training as a constrained optimisation problem (Higgins et al., 2017). A suitable objective to maximise is the marginal (log-)likelihood of the observed data $x \in {\mathbb{R}}^{m}$ as an expectation over the distribution of latent factors $z \in {\mathbb{R}}^{n}$
+
+$$
+\mathop{\max }\limits_{\theta }\left\lbrack {{\mathbb{E}}_{{p}_{\theta }\left( z\right) }\left\lbrack {{p}_{\theta }\left( {x \mid z}\right) }\right\rbrack }\right\rbrack .
\tag{1}
+$$
+
+The latent representation can be controlled by imposing an isotropic unit Gaussian constraint on the prior $p\left( z\right) = \mathcal{N}\left( {0,I}\right)$ , arriving at the constrained optimisation problem
+
+$$
+\mathop{\max }\limits_{{\phi ,\theta }}{\mathbb{E}}_{{p}_{\mathcal{D}}\left( x\right) }\left\lbrack {\log {\mathbb{E}}_{{q}_{\phi }\left( {z \mid x}\right) }\left\lbrack {{p}_{\theta }\left( {x \mid z}\right) }\right\rbrack }\right\rbrack
+$$
+
+$$
+\text{ subject to }D\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) < \varepsilon , \tag{2}
+$$
+
+where $\varepsilon$ dictates the strength of the constraint and $D$ is a divergence. Equation (2) is then rewritten as a Lagrangian under the KKT conditions (Karush, 1939; Kuhn & Tucker, 2014), obtaining
+
+$$
+\mathcal{F}\left( {\theta ,\phi ,\lambda ;x,z}\right) = {\mathbb{E}}_{{q}_{\phi }\left( {z \mid x}\right) }\left\lbrack {\log {p}_{\theta }\left( {x \mid z}\right) }\right\rbrack
+$$
+
+$$
+- \lambda \left( {D\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) - \varepsilon }\right) . \tag{3}
+$$
+
+However, here we are interested in learning properties of the divergence itself, rendering Equations (2) and (3) invalid. In this setting, the constrained optimisation problem is no longer well-posed, as the constraint becomes part of the optimisation. Instead, we choose to relax these optimisation assumptions while maintaining competing objectives, the log-likelihood and the parameterised divergence
+
+$$
+\mathcal{L}\left( {\theta ,\phi ,\lambda ;x,z}\right) = {\mathbb{E}}_{{q}_{\phi }\left( {z \mid x}\right) }\left\lbrack {\log {p}_{\theta }\left( {x \mid z}\right) }\right\rbrack
+$$
+
+$$
+- \lambda {D}_{\psi \left( {x,z}\right) }\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) , \tag{4}
+$$
+
+where $\psi \left( {x,z}\right)$ parameterises $D$ .
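A minimal sketch of the relaxed objective in Equation (4), with an unconstrained scalar $\psi$ squashed into a valid skew; the sigmoid squashing and all names here are our illustration, not the paper's implementation:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def relaxed_objective(log_lik, divergence_fn, psi, lam=1.0):
    # Equation (4): expected log-likelihood minus a scaled divergence
    # whose skew is itself learnable.  divergence_fn(alpha) should
    # return D(q || p) for a skew alpha in (0, 1).
    alpha = sigmoid(psi)  # map unconstrained psi to a valid skew
    return log_lik - lam * divergence_fn(alpha)
```

Because both terms depend on the model, the two objectives compete rather than forming a constrained problem.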
+
+Such a formulation only relates back to Equation (3) in a valid manner when changes in $\psi$ map to a family of divergences and redundant cases are avoided. For instance, it would be particularly useful if, for a given $\psi$ during optimisation, training could still be understood as a reconstruction loss term and a closed-form divergence regularisation term. It is, therefore, of import to consider: which divergences allow for such a parameterisation, whether they will have non-trivial minima, and whether the subsequent properties of this optimisation generalise or are useful. To this end, we consider learning $\alpha$ , the skew, in the ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ family of divergences (Appendix A) and proceed by highlighting the connection to forward and reverse KL.
+
+§ 2.2. SKEW OPTIMISATION
+
+From an information-geometric viewpoint, the skew parameter's influence on the intermediate distribution can be seen as the weighting of two distributions along a path in the statistical manifold. In (Masrani et al., 2019), the authors consider the geometric path between the variational distribution ${q}_{\phi }\left( {z \mid x}\right)$ and the model distribution $p\left( {x,z}\right)$ . The expensive grid search therein required integration along this path and was later avoided in (Brekelmans et al., 2020) by using a moment-matching approach to learn the optimal point on the path. In contrast to these works, here we consider skew between ${q}_{\phi }\left( {z \mid x}\right)$ and the prior $p\left( z\right)$ as a form of regularisation.
+
+Additionally, the property that ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ interpolates between forward and reverse KL (Deasy et al., 2020) offers a clear mechanism for trading off their respective properties.
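For 1D Gaussians this interpolation can be computed in closed form, since the normalised geometric mean $p^{\alpha}q^{1-\alpha}$ of two Gaussians is itself Gaussian (natural parameters interpolate linearly). A sketch following the convention of the divergence table in Figure 1b; function names are ours:

```python
import math

def kl_gauss(m1, v1, m2, v2):
    # KL( N(m1, v1) || N(m2, v2) ) for 1D Gaussians (v = variance).
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def geometric_mean_gauss(mp, vp, mq, vq, alpha):
    # Normalised p^alpha * q^(1-alpha): precisions mix linearly.
    prec = alpha / vp + (1.0 - alpha) / vq
    v = 1.0 / prec
    m = v * (alpha * mp / vp + (1.0 - alpha) * mq / vq)
    return m, v

def js_g_alpha(mp, vp, mq, vq, alpha):
    # (1-a) KL(p || G) + a KL(q || G), with G the normalised geometric mean.
    mg, vg = geometric_mean_gauss(mp, vp, mq, vq, alpha)
    return ((1.0 - alpha) * kl_gauss(mp, vp, mg, vg)
            + alpha * kl_gauss(mq, vq, mg, vg))
```

At $\alpha = 0$ the geometric mean collapses to $q$ and the value is the forward KL $\mathrm{KL}(p \parallel q)$; at $\alpha = 1$ it collapses to $p$ and gives the reverse KL $\mathrm{KL}(q \parallel p)$.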
In VAE optimisation, as we have generalised the overall objective by extending the constrained optimisation problem to competing objectives, it is possible to consider direct parameterisation of $\psi$ from Equation (4). As long as $\psi$ maps to $\left\lbrack {0,1}\right\rbrack$ , the output is a valid divergence (a member of the ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ family) with extremes at forward and reverse KL, so we can consider optimising $\psi$ via gradient descent.
+
+Before attempting the optimisation, we can derive useful properties of ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\left( {P\parallel Q}\right)$ which clarify how the optimisation should be carried out. In particular, it is important to understand the behaviour of ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ with respect to $\alpha$ , as this will dictate whether convergence is possible.
+
+Proposition 1. For Gaussian distributions $P$ and $Q$ with probability density functions $p\left( z\right)$ and ${q}_{\phi }\left( {z \mid x}\right)$ respectively, the derivative of ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}\left( {P\parallel Q}\right)$ with respect to $\alpha$ is
+
+$$
+\frac{d{\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}}{d\alpha } = 2\left( {\alpha - 1}\right) \mathrm{{KL}}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right)
+$$
+
+$$
++ {2\alpha }\mathrm{{KL}}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) , \tag{5}
+$$
+
+with a stationary point at
+
+$$
+{\alpha }^{ * } = \frac{\operatorname{KL}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right) }{\operatorname{KL}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right) + \operatorname{KL}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) }, \tag{6}
+$$
+
+which is a global minimum in the $\alpha$ dimension of the optimisation. Proof in Appendix D.1.
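Proposition 1 can be verified numerically (the KL values below are arbitrary placeholders): the derivative of Equation (5) vanishes at the $\alpha^{*}$ of Equation (6), and since the second derivative is the constant $2(\mathrm{KL}_{\text{fwd}} + \mathrm{KL}_{\text{rev}}) > 0$, that stationary point is a minimum.

```python
def js_derivative(alpha, kl_fwd, kl_rev):
    # Equation (5), with kl_fwd = KL(p || q) and kl_rev = KL(q || p).
    return 2.0 * (alpha - 1.0) * kl_fwd + 2.0 * alpha * kl_rev

def alpha_star(kl_fwd, kl_rev):
    # Equation (6): the stationary point, always inside [0, 1].
    return kl_fwd / (kl_fwd + kl_rev)
```

For example, `alpha_star(0.7, 0.3)` gives 0.7, where `js_derivative` evaluates to zero.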
+
+A useful sanity check from Equation (6) is that ${\alpha }^{ * }$ is clearly bounded between 0 and 1 due to the non-negativity of KL divergence. Secondly, as Equation (6) relies on the parameters $\phi$ , this expression is clearly data dependent, meaning optimal skew shifts during training and should not be fixed as in (Deasy et al., 2020). Finally, considered as a standalone 1D optimisation of $\alpha$ , the global minimum means that training divergence skew is convex and should be simple to minimise in practice, with closed-form convergence.
+
+Proposition 2. The upper bound on the convergence rate of gradient descent to ${\alpha }^{ * }$ in (6) is
+
+$$
+2\left( {\operatorname{KL}\left( {p\left( z\right) \parallel {q}_{\phi }\left( {z \mid x}\right) }\right) + \operatorname{KL}\left( {{q}_{\phi }\left( {z \mid x}\right) \parallel p\left( z\right) }\right) }\right) {D}_{1}^{2}{e}^{-{4T}}, \tag{7}
+$$
+
+where ${D}_{1} = {\begin{Vmatrix}{\alpha }_{1} - {\alpha }^{ * }\end{Vmatrix}}_{2}^{2}$ , ${\alpha }_{1}$ is the initial skew value, and $T$ is the number of optimisation steps. Proof in Appendix D.2.
+
+In the next section, we find practical optimisation of skew in ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ to be well behaved, suggesting that training skew in the dual divergence is also well-posed and does not reduce to an invalid divergence or to one of the KL directions, despite an equivalent proof of convexity via differentiation not being obvious, due to a mixture term sitting outside of the log.
+
+§ 3. EXPERIMENTS
+
+As the convexity shown in Section 2.2 suggests simple optimisation of $\alpha$ , we directly train $\alpha$ as another parameter of the model, but use a separate optimiser from the model parameters so that our 1D analysis holds.
+
+§ 3.1.
CHARACTERISING SKEW OPTIMISATION FOR ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$
+
+To better understand how dual $\alpha$ -VAEs, denoted ${\alpha }_{ * }$ -VAEs, will behave in the more complex setting of modern variational inference benchmarks, we first highlight their properties on synthetic examples. In Figure 2, we depict different optimal skew values for a fit of a multivariate Gaussian to an underlying additive mixture of multivariate Gaussians with trained ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ skew. As the divergence integrals are not tractable, we directly optimise the multivariate Gaussian parameters via samples from the data for all divergences.
+
+In both plots of Figure 2, we depict the emergent low-dimensional trends in optimised skew. For Figure 2a, we fit a 2D Gaussian ball to a 2D additive mixture of Gaussian balls, with an increasing number of components in the mixture. In Figure 2b, by contrast, we increase the dimension of the fit and keep the ratio of mixtures to dimensions fixed at 5. As the number of mixture components increases in Figure 2a, optimal skew for ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ decreases, favouring reverse KL as mass becomes more concentrated. Similarly, in Figure 2b, the optimal skew for ${\mathrm{{JS}}}_{ * }^{{\mathrm{G}}_{\alpha }}$ also decreases, underlining the need for data-dependent skewed divergences and suggesting learnt skew in dual ${\alpha }_{ * }$ -VAEs will be consistent and avoid trivial cases for more complex data.
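Since the trained skew is a plain 1D parameter, its optimisation can be sketched as gradient descent on the derivative of Equation (5); the KL values, learning rate, and function name here are illustrative assumptions (in the full model the KL terms change as the encoder updates):

```python
def optimise_alpha(kl_fwd, kl_rev, alpha0, lr=0.1, steps=200):
    # 1D gradient descent on the skew using the derivative in Eq. (5).
    # By convexity (Proposition 1), any start in (0, 1) converges to
    # alpha* = kl_fwd / (kl_fwd + kl_rev), at the rate of Proposition 2.
    alpha = alpha0
    for _ in range(steps):
        grad = 2.0 * (alpha - 1.0) * kl_fwd + 2.0 * alpha * kl_rev
        alpha -= lr * grad
    return alpha
```

The update is a contraction whenever the learning rate is below $1 / (\mathrm{KL}_{\text{fwd}} + \mathrm{KL}_{\text{rev}})$, mirroring the robust convergence across starting values reported in Figure 4.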
+
+| VAE | MNIST | Fashion-MNIST | dSprites | Chairs | CelebA |
+| --- | --- | --- | --- | --- | --- |
+| $\beta$ -VAE (KL$\left( {q\parallel p}\right)$ , $\beta = 4$ , (Deasy et al., 2020)) | 11.75 | 13.32 | 10.51 | 20.79 | 269.52 |
+| InfoVAE (MMD, $\lambda = {500}$ , (Deasy et al., 2020)) | 13.19 | 11.10 | 11.87 | 18.85 | 271.71 |
+| Vanilla VAE (KL$\left( {q\parallel p}\right)$ ) | ${10.67} \pm {0.27}$ | ${12.36} \pm {0.22}$ | ${7.78} \pm {0.24}$ | ${20.33} \pm {0.34}$ | ${262.53} \pm {2.07}$ |
+| $\alpha$ -VAE (fixed $\alpha = {0.5}$ ) | ${11.24} \pm {0.05}$ | ${11.07} \pm {0.67}$ | ${12.07} \pm {0.06}$ | ${19.11} \pm {0.14}$ | ${270.33} \pm {0.78}$ |
+| ${\alpha }_{ * }$ -VAE (fixed ${\alpha }_{ * } = {0.5}$ ) | ${8.82} \pm {0.04}$ | ${9.80} \pm {0.06}$ | ${5.72} \pm {0.07}$ | ${16.40} \pm {0.15}$ | ${264.27} \pm {0.45}$ |
+| $\alpha$ -VAE | ${8.89} \pm {0.07}$ | ${9.90} \pm {0.05}$ | ${5.03} \pm {0.31}$ | ${16.48} \pm {0.08}$ | ${259.50} \pm {0.32}$ |
+| ${\alpha }_{ * }$ -VAE | ${8.52} \pm {0.07}$ | ${9.59} \pm {0.03}$ | ${3.88} \pm {0.27}$ | ${15.98} \pm {0.17}$ | ${259.52} \pm {0.36}$ |
+
+Table 1. Final model reconstruction error across regularisation divergences and datasets. For trainable skew $\alpha$ -VAEs (bottom two rows), final $\alpha$ values are given in Table 2. ${\alpha }_{ * }$ indicates dual $\alpha$ -VAEs.
+
+(a) Learnt $\alpha$ vs. mixture components. (b) Learnt $\alpha$ vs. mixture dimension.
+
+Figure 2. Emergent trends in optimised skew when fitting a Gaussian ball to an additive mixture of Gaussians in ND for increasing data complexity.
+
+§ 3.2. BENCHMARK IMAGE DATASET PERFORMANCE
+
+Divergence skew convergence. As the expression for the minimum in Equation (6) is encoder dependent, we begin our benchmark dataset assessment by depicting the $\alpha$ landscape. In Figure 3, we see that optimal $\alpha$ decreases during training, before stabilising in the final epochs.
This supports the argument that naively fixing skew is not optimal and leads to inferior solutions, and that even a hyperparameter search for fixed skew followed by training is not sufficient.
+
+The divergences converge to a consistent minimum. This confirms that our 1D assessment of convexity and separate optimisation of $\alpha$ successfully lead to stable skew values (see Figure 4 in the Appendix). In addition, we exemplify how skew evolves across 5 different training seeds. These plots delineate data and encoder dependency, with consistency across seeds which initialise model parameters and training procedures (e.g. dropout and batching), suggesting learnt skew predominantly derives from, and varies between, datasets (see Table 2).
+
+Robustness to noise. As learning skew allows for problem-dependent measurement of the distance to the prior, which better accommodates more dispersed encoded distributions, we tested how learning skew regularises VAEs in various noise settings. In Figure 5, we present denoising experiments where we add Gaussian noise, $\mathcal{N}\left( {0,{\sigma }^{2}}\right)$ , to the normalised input images and clip to $\left\lbrack {0,1}\right\rbrack$ . Although intuition about where the skew balance should lie degrades in the noisy setting, Figure 5 clearly demonstrates that our trainable skew in ${\mathrm{{JS}}}^{{\mathrm{G}}_{\alpha }}$ , and in its dual form, consistently provides lower test set reconstruction loss across noise levels. As a sanity check, we can also verify expected trends for the other divergences, with the less dispersion-friendly reverse KL performing poorly at higher noise levels as forward KL becomes more appropriate, as well as the expected robust behaviour of MMD at high noise levels (Zhao et al., 2019).
+
+Improved reconstruction as a rate-distortion trade-off. We test our model's reconstruction-loss performance in the standard clean-image setting.
Table 1 demonstrates superior reconstruction loss across multiple datasets, outperforming both the naive choice of fixed $\alpha = {0.5}$ and KL divergence, the latter by a substantial margin. We further detail the relationship with fixed $\alpha$ in Figure 1a, Figure 6, and the supplementary figures in Appendix E, observing low reconstruction loss from an arbitrary starting skew. To explain this performance gain, we plot distortion (MSE) against rate (Figure 7 in Appendix E), measured using the reverse KL divergence for all divergences. The shift down and to the right for $\alpha$ -VAEs means that they trade off rate for distortion, even more so when skew is learnt, giving higher-quality image reconstruction when starting with unknown divergence scaling $\lambda$ .
+
+§ 4. CONCLUSION
+
+We recast VAE optimisation as a multi-objective task with competing reconstruction and divergence regularisation terms. This allowed us to use gradient descent to directly learn the divergence skew used to control the variational distribution ${q}_{\phi }\left( {z \mid x}\right)$ relative to the prior distribution $p\left( z\right)$ . The resulting method, the $\alpha$ -VAE, was shown to be well-posed as a 1D optimisation with a dependency on both the encoder and the data, has improved reconstruction and denoising properties over standard regularisation techniques, and avoids an expensive grid search over skew values. Moreover, as $\alpha$ -VAEs also generalise the $\beta$ -VAE family, they are a practical, efficient, and unbiased choice of VAE across a range of tasks and domains.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/b8ZS7SV3HK/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/b8ZS7SV3HK/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..bfd17d0a798ac8f1faaa606e619482fe832df56b --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/b8ZS7SV3HK/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,183 @@ +# Representational aspects of depth and conditioning in normalizing flows + +Anonymous Authors ${}^{1}$ + +## Abstract + +Normalizing flows are among the most popular paradigms in generative modeling, especially for images, primarily because we can efficiently evaluate the likelihood of a data point. Training normalizing flows can be difficult because models which produce good samples typically need to be extremely deep and can often be poorly conditioned: since they are parametrized as invertible maps from ${\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , and typical training data like images intuitively is lower-dimensional, the learned maps often have Jacobians that are close to being singular. In our paper, we tackle representational aspects around depth and conditioning of normalizing flows: both for general invertible architectures, and for a particular common architecture, affine couplings. We prove that $\Theta \left( 1\right)$ affine coupling layers suffice to exactly represent a permutation or $1 \times 1$ convolution, as used in GLOW, showing that representationally the choice of partition is not a bottleneck for depth. We also show that shallow affine coupling networks are universal approximators in Wasserstein distance if ill-conditioning is allowed, and experimentally investigate related phenomena involving padding. 
Finally, we show a depth lower bound for general flow architectures with few neurons per layer and bounded Lipschitz constant.
+
+## 1. Introduction
+
+Deep generative models are one of the lynchpins of unsupervised learning, underlying tasks spanning distribution learning, feature extraction, and transfer learning. Parametric families of neural-network-based models have been improved to the point of being able to model complex distributions like images of human faces. One paradigm that has received a lot of attention is normalizing flows, which model distributions as pushforwards of a standard Gaussian (or other simple distribution) through an invertible neural network $G$ . Thus, the likelihood has an explicit form via the change-of-variables formula using the Jacobian of $G$ . Training normalizing flows is challenging due to two main issues. Empirically, these models seem to require a much larger size than other generative models (e.g. GANs) and, most notably, a much larger depth. This makes training challenging due to vanishing/exploding gradients. A very related problem is conditioning, more precisely the smallest singular value of the forward map $G$ . It is intuitively clear that natural images have a low-dimensional structure, so a close-to-singular $G$ might be needed. On the other hand, the change-of-variables formula involves the determinant of the Jacobian of ${G}^{-1}$ , which grows larger the more singular $G$ is.
+
+While the universal approximation power of various types of invertible architectures has recently been studied when the input is padded with a sufficiently large number of all-0 coordinates (Dupont et al., 2019; Huang et al., 2020) or arbitrary partitions and permutations are allowed (Teshima et al., 2020), a precise quantification of the cost of invertibility, in terms of the depth required and the conditioning of the model, has not been fleshed out.
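The change-of-variables likelihood just described can be sketched with a toy one-layer flow, $G(z) = az + b$ (our illustrative example; real flows stack many nonlinear invertible layers):

```python
import math

def standard_normal_logpdf(z):
    return -0.5 * (z * z + math.log(2.0 * math.pi))

def affine_flow_loglik(x, a, b):
    # log p_X(x) for X = G(Z) = a*Z + b with Z ~ N(0, 1), via the
    # change-of-variables formula: log p_Z(G^{-1}(x)) + log |dG^{-1}/dx|.
    z = (x - b) / a                   # invert the flow
    log_det_inv = -math.log(abs(a))   # |dG^{-1}/dx| = 1/|a|
    return standard_normal_logpdf(z) + log_det_inv
```

Here $X$ is exactly $\mathcal{N}(b, a^2)$, so the result can be checked against the Gaussian density directly; note how a near-singular $G$ (small $|a|$) inflates the log-determinant term, which is the conditioning tension described above.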
+
+In this paper, we study both mathematically and empirically the representational aspects of depth and conditioning in normalizing flows and answer several fundamental questions.
+
+## 2. Related Work
+
+On the empirical side, flow models were first popularized by (Dinh et al., 2014), who introduced the NICE model and the idea of parametrizing a distribution as a sequence of transformations with triangular Jacobians, so that maximum likelihood training is tractable. Quickly thereafter, (Dinh et al., 2016) improved the affine coupling block architecture they introduced to allow non-volume-preserving (NVP) transformations, (Papamakarios et al., 2017) introduced an autoregressive version, and finally (Kingma & Dhariwal, 2018) introduced $1 \times 1$ convolutions in the architecture, which they view as relaxations of permutation matrices; intuitively, allowing learned partitions for the affine blocks. Subsequently, there have been variants on these ideas: (Grathwohl et al., 2018; Dupont et al., 2019; Behrmann et al., 2018) viewed these models as discretizations of ODEs and introduced ways to approximate determinants of non-triangular Jacobians, though these models still don't scale beyond datasets the size of CIFAR10. The conditioning/invertibility of trained models was experimentally studied in (Behrmann et al., 2019), along with some "adversarial vulnerabilities" of the conditioning. Mathematically understanding the relative representational power of different types of generative models, and its statistical and algorithmic implications, remains, however, a nascent and poorly understood area of study.
+
+---
+
+${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .
+
+Preliminary work. Under review by INNF+ 2021. Do not distribute.
+
+---
+
+Most closely related to our results are the recent works of (Huang et al., 2020), (Zhang et al.) and (Teshima et al., 2020).
The first two prove universal approximation results for invertible architectures (the former for affine couplings, the latter for neural ODEs) if the input is allowed to be padded with zeroes. The third proves universal approximation when GLOW-style permutation layers are allowed, through a construction that operates on one dimension at a time. This is very different from how flows are trained in practice, which is typically with a partition that splits the data roughly in half. It also requires the architectural modification of GLOW to work. As we'll discuss in the following section, our results prove universal approximation even without padding and permutations, but we focus on more fine-grained implications for the depth and conditioning of the learned model and prove universal approximation in a setting that is used in practice. Another work (Kong & Chaudhuri, 2020) studies the representational power of Sylvester and Householder flows, normalizing flow architectures which are quite different from affine coupling networks. In particular, they prove a depth lower bound for local planar flows with bounded weights; for planar flows, our general Theorem 4 can also be applied, but the resulting lower bound instances are very different (ours targets multimodality, theirs targets tail behavior).
+
+## 3. Overview of Results
+
+### 3.1. Results About Affine Coupling Architectures
+
+We begin by proving several results for a particularly common class of normalizing flow architectures: those based on affine coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018). The appeal of these architectures comes from training efficiency. Although layerwise invertible neural networks (i.e.
networks for which each layer consists of an invertible matrix and an invertible pointwise nonlinearity) seem like a natural choice, in practice these models have several disadvantages: for example, computing the determinant of the Jacobian is expensive unless the weight matrices are restricted.
+
+Consequently, it's typical for the transformations in a flow network to be constrained in a manner that allows for efficient computation of the Jacobian determinant. The most common building block is an affine coupling block, originally proposed by (Dinh et al., 2014; 2016). A coupling block partitions the coordinates $\left\lbrack d\right\rbrack$ into two parts, $S$ and $\left\lbrack d\right\rbrack \smallsetminus S$ , for a subset $S$ containing around half of the $d$ coordinates. The transformation then has the form:
+
+Definition 1. An affine coupling block is a map $f : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , s.t. $f\left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S} \odot s\left( {x}_{S}\right) + t\left( {x}_{S}\right) }\right)$
+
+Of course, the modeling power will be severely constrained if the coordinates in $S$ never change: so typically, flow models either change the set $S$ in a fixed or learned way (e.g. alternating between different partitions of the channels in (Dinh et al., 2016) or applying a learned permutation in (Kingma & Dhariwal, 2018)). As a permutation is a discrete object, it is difficult to learn in a differentiable manner, so (Kingma & Dhariwal, 2018) simply learn an invertible linear function (i.e. a $1 \times 1$ convolution) as a differentiation-friendly relaxation thereof.
+
+#### 3.1.1. UNIVERSAL APPROXIMATION WITH ILL-CONDITIONED AFFINE COUPLING NETWORKS
+
+First, we address universal approximation of normalizing flows and its close ties to conditioning.
Namely, a recent work (Theorem 1 of (Huang et al., 2020)) showed that deep affine coupling networks are universal approximators if we allow the training data to be padded with sufficiently many zeros. While zero padding is convenient for their analysis (in fact, similar proofs have appeared for other invertible architectures like Augmented Neural ODEs (Zhang et al.)), in practice models trained on zero-padded data often perform poorly. Another work (Teshima et al., 2020) proves universal approximation, but with the permutations and $\left| S\right| = d - 1$ partition needed for their nonconstructive proof. We remove that requirement in two ways: first, by giving a construction that achieves universal approximation without permutations in 3 composed couplings; and second, by showing that the permutations can be simulated by a constant number of alternating but fixed coupling layers.
+
+First we show that neither padding nor permutations nor depth is necessary representationally: shallow models without zero padding are already universal approximators in Wasserstein distance.
+
+Theorem 1 (Universal approximation without padding). Suppose that $P$ is the standard Gaussian measure in ${\mathbb{R}}^{n}$ with $n$ even and $Q$ is a distribution on ${\mathbb{R}}^{n}$ with bounded support and absolutely continuous with respect to the Lebesgue measure. Then for any $\epsilon > 0$ , there exists a depth-3 affine coupling network $g$ , with maps $s, t$ represented by feedforward ReLU networks, such that ${W}_{2}\left( {{g}_{\# }P, Q}\right) \leq \epsilon$ .
+
+Remark 1. A shared caveat of the universality construction in Theorem 1 with the construction in (Huang et al., 2020) is that the resulting network is poorly conditioned. In the case of the construction in (Huang et al., 2020), this is obvious because they pad the $d$ -dimensional training data with $d$ additional zeros, and a network that takes as input a Gaussian distribution in ${\mathbb{R}}^{2d}$ (i.e.
has full support) and outputs data on a $d$ -dimensional manifold (the space of zero-padded data) must have a singular Jacobian almost everywhere. ${}^{1}$ In the case of Theorem 1, the condition number of the network blows up at least as quickly as $1/\epsilon$ as we take the approximation error $\epsilon \rightarrow 0$ , so this network is also ill-conditioned if we are aiming for a very accurate approximation. + +Remark 2. Based on Theorem 3, the condition number blowup of either the Jacobian or the Hessian is necessary for a shallow model to be universal, even when approximating well-conditioned linear maps. The network constructed in Theorem 1 is also consistent with the lower bound from Theorem 4, because the network we construct in Theorem 1 is highly non-Lipschitz and uses many parameters per layer. + +#### 3.1.2. THE EFFECT OF CHOICE OF PARTITION ON DEPTH + +Next, we ask how much of a saving in terms of the depth of the network one can hope to gain from using learned partitions (à la GLOW) as compared to a fixed partition. More precisely: + +Question 1: Can models like Glow (Kingma & Dhariwal, 2018) be simulated by a sequence of affine blocks with a fixed partition without increasing the depth by much? + +We answer this question in the affirmative, at least for equally sized partitions (which is what is typically used in practice). We show the following surprising fact: consider an arbitrary partition $\left( {S,\left\lbrack {2d}\right\rbrack \smallsetminus S}\right)$ of $\left\lbrack {2d}\right\rbrack$ , such that $S$ satisfies $\left| S\right| = d$ , for $d \in \mathbb{N}$ .
Then for any invertible matrix $T \in {\mathbb{R}}^{{2d} \times {2d}}$ , the linear map $T : {\mathbb{R}}^{2d} \rightarrow {\mathbb{R}}^{2d}$ can be exactly represented by a composition of $O\left( 1\right)$ affine coupling layers that are linear, namely have the form ${L}_{i}\left( {{x}_{S},{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{B}_{i}{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S} + }\right.$ $\left. {{A}_{i}{x}_{S}}\right)$ or ${L}_{i}\left( {{x}_{S},{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S}}\right) = \left( {{C}_{i}{x}_{S} + {D}_{i}{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S},{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S}}\right)$ for matrices ${A}_{i},{B}_{i},{C}_{i},{D}_{i} \in {\mathbb{R}}^{d \times d}$ , s.t. each ${B}_{i},{C}_{i}$ is diagonal. For convenience of notation, without loss of generality let $S = \left\lbrack d\right\rbrack$ . Then, each of the layers ${L}_{i}$ is a matrix of the form $\left\lbrack \begin{matrix} I & 0 \\ {A}_{i} & {B}_{i} \end{matrix}\right\rbrack$ or $\left\lbrack \begin{matrix} {C}_{i} & {D}_{i} \\ 0 & I \end{matrix}\right\rbrack$ , where the rows and columns are partitioned into blocks of size $d$ . + +With this notation in place, we show the following theorem: + +Theorem 2. 
For all $d \geq 4$ , there exists a $k \leq {24}$ such that for any invertible $T \in {\mathbb{R}}^{{2d} \times {2d}}$ with $\det \left( T\right) > 0$ , there exist matrices ${A}_{i},{D}_{i} \in {\mathbb{R}}^{d \times d}$ and diagonal matrices ${B}_{i},{C}_{i} \in {\mathbb{R}}_{ \geq 0}^{d \times d}$ for all $i \in \left\lbrack k\right\rbrack$ such that + +$$ +T = \mathop{\prod }\limits_{{i = 1}}^{k}\left\lbrack \begin{matrix} I & 0 \\ {A}_{i} & {B}_{i} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {C}_{i} & {D}_{i} \\ 0 & I \end{matrix}\right\rbrack +$$ + +Note that the condition $\det \left( T\right) > 0$ is required, since affine coupling networks are always orientation-preserving. Adding one diagonal layer with negative signs suffices to model general matrices. In particular, since permutation matrices are invertible, this means that any applications of permutations to achieve a different partition of the inputs (e.g. like in Glow (Kingma & Dhariwal, 2018)) can in principle be represented as a composition of not-too-many affine coupling layers, indicating that the flexibility in the choice of partition is not the representational bottleneck. + +It is reasonable to ask how optimal the $k \leq {24}$ bound is; we supplement our upper bound with a lower bound, namely that $k \geq 3$ . This is surprising, as naive parameter counting would suggest $k = 2$ might work. Namely, we show: + +Theorem 3. For all $d \geq 4$ and $k \leq 2$ , there exists an invertible $T \in {\mathbb{R}}^{{2d} \times {2d}}$ with $\det \left( T\right) > 0$ , s.t.
for all ${A}_{i},{D}_{i} \in$ ${\mathbb{R}}^{d \times d}$ and for all diagonal matrices ${B}_{i},{C}_{i} \in {\mathbb{R}}_{ > 0}^{d \times d}, i \in \left\lbrack k\right\rbrack$ it holds that + +$$ +T \neq \mathop{\prod }\limits_{{i = 1}}^{k}\left\lbrack \begin{matrix} I & 0 \\ {A}_{i} & {B}_{i} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {C}_{i} & {D}_{i} \\ 0 & I \end{matrix}\right\rbrack +$$ + +Beyond the relevance of this result in the context of how important the choice of partitions is, it also shows a lower bound on the depth for an equal number of nonlinear affine coupling layers (even with quite complex functions $s$ and $t$ in each layer) - since a nonlinear network can always be linearized about a (smooth) point to give a linear network with the same number of layers. In other words, studying linear affine coupling networks lets us prove a depth lower bound/depth separation for nonlinear networks for free. + +Remark 3 (Significance of Theorem 2 for Approximation in Likelihood/KL). All of the universality results in the literature for normalizing flows, including Theorem 1, prove universality in the Wasserstein distance or in the related sense of convergence of distributions. A stronger and probably much more difficult problem is to prove universality under the KL divergence instead: i.e. to show for a well-behaved distribution $P$ , there exists a sequence ${Q}_{n}$ of distributions generated by normalizing flow models such that + +$$ +\operatorname{KL}\left( {P,{Q}_{n}}\right) \rightarrow 0. \tag{1} +$$ + +This is important because Maximum-Likelihood training attempts to pick the model with the smallest KL, not the smallest Wasserstein distance, and the minimizers of these two objectives can be extremely different. 
For $P = N\left( {0,\Sigma }\right)$ , + +--- + +${}^{1}$ Alternatively, we could feed a degenerate Gaussian supported on a $d$ -dimensional subspace into the network as input, but there is no way to train such a model using maximum-likelihood training, since the prior is degenerate. + +--- + +Theorem 2 certainly implies (1) for bounded depth linear affine couplings, and thus gives the first proof that global optimization of the max-likelihood objective of a normalizing flow model would successfully learn a Gaussian with arbitrary nondegenerate $\Sigma$ . + +### 3.2. Results about General Architectures + +In order to guarantee that the network is invertible, normalizing flow models place significant restrictions on the architecture of the model. The most basic and general question we can ask is how this restriction affects the expressive power of the model - in particular, how much the depth must increase to compensate. + +More precisely, we ask: + +Question 2: Is there a distribution over ${\mathbb{R}}^{d}$ which can be written as the pushforward of a Gaussian through a small, shallow generator, which cannot be approximated by the pushforward of a Gaussian through a small, shallow layer-wise invertible neural network? + +Given that there is great latitude in terms of the choice of layer architecture, while keeping the network invertible, the most general way to pose this question is to require each layer to be a function of $p$ parameters - i.e. $f = {f}_{1} \circ {f}_{2} \circ \cdots \circ {f}_{\ell }$ where $\circ$ denotes function composition and each ${f}_{i} : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is an invertible function specified by a vector ${\theta }_{i} \in {\mathbb{R}}^{p}$ of parameters.
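One concrete instance of such parameter-constrained invertible layers, sketched here with our own toy choices (a leaky-ReLU nonlinearity and weight matrices shifted to stay invertible), is a layerwise invertible feedforward network, which can be inverted exactly, layer by layer:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def leaky(x, a=0.1):
    return np.where(x > 0, x, a * x)

def leaky_inv(y, a=0.1):
    return np.where(y > 0, y, y / a)

def make_layer(theta):
    # theta packs a weight matrix and a bias, so p = d * (d + 1) parameters.
    # Shifting by 2 * I keeps A comfortably invertible in this toy example.
    A = theta[: d * d].reshape(d, d) + 2.0 * np.eye(d)
    b = theta[d * d:]
    fwd = lambda x: leaky(A @ x + b)
    inv = lambda y: np.linalg.solve(A, leaky_inv(y) - b)
    return fwd, inv

layers = [make_layer(rng.normal(scale=0.2, size=d * (d + 1))) for _ in range(3)]

x = rng.normal(size=d)
y = x
for fwd, _ in layers:            # f = f_3 o f_2 o f_1
    y = fwd(y)
x_rec = y
for _, inv in reversed(layers):  # invert in the opposite order
    x_rec = inv(x_rec)
assert np.allclose(x_rec, x)
```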
This framing is extremely general: for instance it includes layerwise invertible feedforward networks in which ${f}_{i}\left( x\right) = {\sigma }^{\otimes d}\left( {{A}_{i}x + {b}_{i}}\right) ,\sigma$ is invertible, ${A}_{i} \in {\mathbb{R}}^{d \times d}$ is invertible, ${\theta }_{i} = \left( {{A}_{i},{b}_{i}}\right)$ and $p = d\left( {d + 1}\right)$ . It also includes popular architectures based on affine coupling blocks which we discussed in more detail in the previous subsection. + +We answer this question in the affirmative: namely, we show for any $k$ that there is a distribution over ${\mathbb{R}}^{d}$ which can be expressed as the pushforward of a network with depth $O\left( 1\right)$ and size $O\left( k\right)$ that cannot be (even very approximately) expressed as the pushforward of a Gaussian through a Lipschitz layerwise invertible network of depth smaller than $k/p$ . + +Towards formally stating the result, let $\theta = \left( {{\theta }_{1},\ldots ,{\theta }_{\ell }}\right) \in$ $\Theta \subset {\mathbb{R}}^{{d}^{\prime }}$ be the vector of all parameters (e.g. weights, biases) in the network, where ${\theta }_{i} \in {\mathbb{R}}^{p}$ are the parameters that correspond to layer $i$ , and let ${f}_{\theta } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ denote the resulting function. Define $R$ so that $\Theta$ is contained in the Euclidean ball of radius $R$ . + +We say the family ${f}_{\theta }$ is L-Lipschitz with respect to its parameters and inputs, if + +$$ +\forall \theta ,{\theta }^{\prime } \in \Theta : {\mathrm{E}}_{x \sim \mathcal{N}\left( {0,{I}_{d \times d}}\right) }\begin{Vmatrix}{{f}_{\theta }\left( x\right) - {f}_{{\theta }^{\prime }}\left( x\right) }\end{Vmatrix} \leq L\begin{Vmatrix}{\theta - {\theta }^{\prime }}\end{Vmatrix} +$$ + +and $\forall x, y \in {\mathbb{R}}^{d},\begin{Vmatrix}{{f}_{\theta }\left( x\right) - {f}_{\theta }\left( y\right) }\end{Vmatrix} \leq L\parallel x - y\parallel$ . 
${}^{2}$ We will discuss the reasonable range for $L$ in terms of the weights after the Theorem statement. We show ${}^{3}$ : + +Theorem 4. For any $k = \exp \left( {o\left( d\right) }\right) , L = \exp \left( {o\left( d\right) }\right) , R = \exp \left( {o\left( d\right) }\right)$ , we have that for $d$ sufficiently large and any $\gamma > 0$ there exists a neural network $g : {\mathbb{R}}^{d + 1} \rightarrow {\mathbb{R}}^{d}$ with $O\left( k\right)$ parameters and depth $O\left( 1\right)$ , s.t. for any family $\left\{ {{f}_{\theta },\theta \in \Theta }\right\}$ of layerwise invertible networks that are $L$-Lipschitz with respect to their parameters and inputs, have $p$ parameters per layer, and have depth at most $k/p$ , we have + +$$ +\forall \theta \in \Theta ,{W}_{1}\left( {{\left( {f}_{\theta }\right) }_{\# \mathcal{N}},{g}_{\# \mathcal{N}}}\right) \geq {10}{\gamma }^{2}d +$$ + +Furthermore, for all $\theta \in \Theta ,{KL}\left( {{\left( {f}_{\theta }\right) }_{\# \mathcal{N}},{g}_{\# \mathcal{N}}}\right) \geq 1/{10}$ and $\operatorname{KL}\left( {{g}_{\# \mathcal{N}},{\left( {f}_{\theta }\right) }_{\# \mathcal{N}}}\right) \geq \frac{{10}{\gamma }^{2}d}{{L}^{2}}$ . + +Remark 4. First, note that while the number of parameters in both networks is comparable (i.e. it’s $O\left( k\right)$ ), the invertible network is deeper, which is usually accompanied by algorithmic difficulties for training, due to vanishing and exploding gradients. For layerwise invertible generators, if we assume that the nonlinearity $\sigma$ is 1-Lipschitz and each matrix in the network has operator norm at most $M$ , then a depth $\ell$ network will have $L = O\left( {M}^{\ell }\right)$ ${}^{4}$ and $p = O\left( {d}^{2}\right)$ . For an affine coupling network with $s, t$ parameterized by $H$ -layer networks with $p/2$ parameters each, 1-Lipschitz activations and weights bounded by $M$ as above, we would similarly have $L = O\left( {M}^{\ell H}\right)$ . + +Remark 5.
We make a couple of comments on the "hard" distribution $g$ we construct, as well as the meaning of the parameter $\gamma$ and how to interpret the various lower bounds in the different metrics. The distribution $g$ for a given $\gamma$ will in fact be close to a mixture of $k$ Gaussians, each with mean on the sphere of radius ${10}{\gamma }^{2}d$ and covariance matrix ${\gamma }^{2}{I}_{d}$ . Thus this distribution has most of its mass in a sphere of radius $O\left( {{\gamma }^{2}d}\right)$ - so the Wasserstein guarantee gives close to a trivial approximation for $g$ . The KL divergence bounds are derived by so-called transport inequalities between KL and Wasserstein for subgaussian distributions (Bobkov & Götze, 1999). The discrepancy between the two KL divergences comes from the fact that the functions $g,{f}_{\theta }$ may have different Lipschitz constants, hence the tails of ${g}_{\# \mathcal{N}}$ and ${\left( {f}_{\theta }\right) }_{\# \mathcal{N}}$ behave differently. In fact, if the function ${f}_{\theta }$ had the same Lipschitz constant as $g$ , both KL lower bounds would be on the order of a constant. + +--- + +${}^{2}$ Note that for architectures having trainable biases in the input layer, these two notions of Lipschitzness should be expected to behave similarly. + +${}^{3}$ In this Theorem and throughout, we use the standard asymptotic notation $f\left( d\right) = o\left( {g\left( d\right) }\right)$ to indicate that $\mathop{\limsup }\limits_{{d \rightarrow \infty }}\frac{f\left( d\right) }{g\left( d\right) } = 0$ . For example, the assumption $k, L, R = \exp \left( {o\left( d\right) }\right)$ means that for any sequence ${\left( {k}_{d},{L}_{d},{R}_{d}\right) }_{d = 1}^{\infty }$ such that $\mathop{\limsup }\limits_{{d \rightarrow \infty }}\frac{\max \left( {\log {k}_{d},\log {L}_{d},\log {R}_{d}}\right) }{d} = 0$ the result holds true. + +${}^{4}$ Note that our theorem applies to exponentially large Lipschitz constants. + +--- + +## References + +Behrmann, J., Grathwohl, W., Chen, R.
T., Duvenaud, D., and Jacobsen, J.-H. Invertible residual networks. arXiv preprint arXiv:1811.00995, 2018. + +Behrmann, J., Vicol, P., Wang, K.-C., Grosse, R. B., and Jacobsen, J.-H. On the invertibility of invertible neural networks. 2019. + +Bobkov, S. G. and Götze, F. Exponential integrability and transportation cost related to logarithmic sobolev inequalities. Journal of Functional Analysis, 163(1):1-28, 1999. + +Dinh, L., Krueger, D., and Bengio, Y. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. + +Dupont, E., Doucet, A., and Teh, Y. W. Augmented neural odes. In Advances in Neural Information Processing Systems, pp. 3134-3144, 2019. + +Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018. + +Huang, C.-W., Dinh, L., and Courville, A. Augmented normalizing flows: Bridging the gap between generative flows and latent variable models. arXiv preprint arXiv:2002.07101, 2020. + +Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1 \times 1$ convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018. + +Kong, Z. and Chaudhuri, K. The expressive power of a class of normalizing flow models. arXiv preprint arXiv:2006.00392, 2020. + +Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017. + +Teshima, T., Ishikawa, I., Tojo, K., Oono, K., Ikeda, M., and Sugiyama, M. Coupling-based invertible neural networks are universal diffeomorphism approximators. In Advances in Neural Information Processing Systems, 2020. + +Zhang, H., Gao, X., Unterman, J., and Arodz, T. 
Approximation capabilities of neural odes and invertible residual networks. + +\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/b8ZS7SV3HK/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/b8ZS7SV3HK/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..63b18d9a4939248646cdc473079d302974c0659e --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/b8ZS7SV3HK/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,141 @@ +§ REPRESENTATIONAL ASPECTS OF DEPTH AND CONDITIONING IN NORMALIZING FLOWS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Normalizing flows are among the most popular paradigms in generative modeling, especially for images, primarily because we can efficiently evaluate the likelihood of a data point. Training normalizing flows can be difficult because models which produce good samples typically need to be extremely deep and can often be poorly conditioned: since they are parametrized as invertible maps from ${\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , and typical training data like images intuitively is lower-dimensional, the learned maps often have Jacobians that are close to being singular. In our paper, we tackle representational aspects around depth and conditioning of normalizing flows: both for general invertible architectures, and for a particular common architecture, affine couplings. We prove that $\Theta \left( 1\right)$ affine coupling layers suffice to exactly represent a permutation or $1 \times 1$ convolution, as used in GLOW, showing that representationally the choice of partition is not a bottleneck for depth. We also show that shallow affine coupling networks are universal approximators in Wasserstein distance if ill-conditioning is allowed, and experimentally investigate related phenomena involving padding.
Finally, we show a depth lower bound for general flow architectures with few neurons per layer and bounded Lipschitz constant. + +§ 1. INTRODUCTION + +Deep generative models are one of the lynchpins of unsupervised learning, underlying tasks spanning distribution learning, feature extraction and transfer learning. Parametric families of neural-network based models have been improved to the point of being able to model complex distributions like images of human faces. One paradigm that has received a lot of attention is normalizing flows, which model distributions as pushforwards of a standard Gaussian (or other simple distribution) through an invertible neural network $G$ . Thus, the likelihood has an explicit form via the change of variables formula using the Jacobian of $G$ . Training normalizing flows is challenging due to a couple of main issues. Empirically, these models seem to require a much larger size than other generative models (e.g. GANs) and most notably, a much larger depth. This makes training challenging due to vanishing/exploding gradients. A very related problem is conditioning, more precisely the smallest singular value of the forward map $G$ . It’s intuitively clear that natural images will have a low-dimensional structure, thus a close-to-singular $G$ might be needed. On the other hand, the change-of-variables formula involves the determinant of the Jacobian of ${G}^{-1}$ , which grows larger the more singular $G$ is. + +While the universal approximation power of various types of invertible architectures has recently been studied when the input is padded with a sufficiently large number of all-0 coordinates (Dupont et al., 2019; Huang et al., 2020) or when arbitrary partitions and permutations are allowed (Teshima et al., 2020), a precise quantification of the cost of invertibility in terms of the depth required and the conditioning of the model has not been fleshed out.
+ +In this paper, we study both mathematically and empirically representational aspects of depth and conditioning in normalizing flows and answer several fundamental questions. + +§ 2. RELATED WORK + +On the empirical side, flow models were first popularized by (Dinh et al., 2014), who introduced the NICE model and the idea of parametrizing a distribution as a sequence of transformations with triangular Jacobians, so that maximum likelihood training is tractable. Quickly thereafter, (Dinh et al., 2016) improved the affine coupling block architecture they introduced to allow non-volume-preserving (NVP) transformations, (Papamakarios et al., 2017) introduced an autoregressive version, and finally (Kingma & Dhariwal, 2018) introduced $1 \times 1$ convolutions in the architecture, which they view as relaxations of permutation matrices; intuitively, this allows learned partitions for the affine blocks. Subsequently, there have been variants on these ideas: (Grathwohl et al., 2018; Dupont et al., 2019; Behrmann et al., 2018) viewed these models as discretizations of ODEs and introduced ways to approximate determinants of non-triangular Jacobians, though these models still don't scale beyond datasets the size of CIFAR10. The conditioning/invertibility of trained models was experimentally studied in (Behrmann et al., 2019), along with some "adversarial vulnerabilities" of the conditioning. Mathematically understanding the relative representational power of different types of generative models, and the statistical/algorithmic implications thereof, remains, however, a nascent and poorly understood area of study. + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +Most closely related to our results are the recent works of (Huang et al., 2020), (Zhang et al.) and (Teshima et al., 2020).
The first two prove universal approximation results for invertible architectures (the former affine couplings, the latter neural ODEs) if the input is allowed to be padded with zeroes. The third proves universal approximation when GLOW-style permutation layers are allowed, through a construction that operates on one dimension at a time. This is very different from how flows are trained in practice, which is typically with a partition which splits the data roughly in half. It also requires the architectural modification of GLOW to work. As we'll discuss in the following section, our results prove universal approximation even without padding and permutations, but we focus on more fine-grained implications for the depth and conditioning of the learned model and prove universal approximation in a setting that is used in practice. Another work (Kong & Chaudhuri, 2020) studies the representational power of Sylvester and Householder flows, normalizing flow architectures which are quite different from affine coupling networks. In particular, they prove a depth lower bound for local planar flows with bounded weights; for planar flows, our general Theorem 4 can also be applied, but the resulting lower bound instances are very different (ours targets multimodality, theirs targets tail behavior). + +§ 3. OVERVIEW OF RESULTS + +§ 3.1. RESULTS ABOUT AFFINE COUPLING ARCHITECTURES + +We begin by proving several results for a particularly common class of normalizing flow architectures: those based on affine coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018). The appeal of these architectures comes from training efficiency. Although layerwise invertible neural networks (i.e.
networks for which each layer consists of an invertible matrix and invertible pointwise nonlinearity) seem like a natural choice, in practice these models have several disadvantages: for example, computing the determinant of the Jacobian is expensive unless the weight matrices are restricted. + +Consequently, it's typical for the transformations in a flow network to be constrained in a manner that allows for efficient computation of the Jacobian determinant. The most common building block is an affine coupling block, originally proposed by (Dinh et al., 2014; 2016). A coupling block partitions the coordinates $\left\lbrack d\right\rbrack$ into two parts: $S$ and $\left\lbrack d\right\rbrack \smallsetminus S$ , for a subset $S$ containing around half of the $d$ coordinates. The transformation then has the form: + +Definition 1. An affine coupling block is a map $f : {\mathbb{R}}^{d} \rightarrow$ ${\mathbb{R}}^{d}$ , s.t. $f\left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{x}_{\left\lbrack d\right\rbrack \smallsetminus S} \odot s\left( {x}_{S}\right) + t\left( {x}_{S}\right) }\right)$ + +Of course, the modeling power will be severely constrained if the coordinates in $S$ never change: so typically, flow models either change the set $S$ in a fixed or learned way (e.g. alternating between different partitions of the channels in (Dinh et al., 2016) or applying a learned permutation in (Kingma & Dhariwal, 2018)). As a permutation is a discrete object, it is difficult to learn in a differentiable manner, so (Kingma & Dhariwal, 2018) simply learns an invertible linear function (i.e. a $1 \times 1$ convolution) as a differentiation-friendly relaxation thereof. + +§ 3.1.1. UNIVERSAL APPROXIMATION WITH ILL-CONDITIONED AFFINE COUPLING NETWORKS + +First, we address universal approximation of normalizing flows and its close ties to conditioning.
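The $1 \times 1$ convolution mentioned above can also be sketched concretely (our own numpy illustration, not the GLOW implementation): one invertible $c \times c$ matrix $W$ is applied at every spatial position, a channel permutation is the special case where $W$ is a permutation matrix, and the log-determinant of the whole map is cheap because the same $W$ is shared across positions.

```python
import numpy as np

rng = np.random.default_rng(0)
c, hw = 3, 5                     # channels, flattened spatial positions

# A 1x1 convolution applies one invertible c x c matrix W at every position.
W = rng.normal(size=(c, c)) + 3.0 * np.eye(c)   # shifted to stay invertible
x = rng.normal(size=(c, hw))
y = W @ x                                       # the 1x1 convolution
x_rec = np.linalg.solve(W, y)                   # exact inverse

# log|det Jacobian| of the full map: the same W acts at each of the hw
# positions, so it is just hw * log|det W|.
logdet = hw * np.log(abs(np.linalg.det(W)))

# A channel permutation is the special (discrete) case W = P.
P = np.eye(c)[[2, 0, 1]]
assert np.allclose(P @ x, x[[2, 0, 1]])
assert np.allclose(x_rec, x)
```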
Namely, a recent work (Theorem 1 of (Huang et al., 2020)) showed that deep affine coupling networks are universal approximators if we allow the training data to be padded with sufficiently many zeros. While zero padding is convenient for their analysis (in fact, similar proofs have appeared for other invertible architectures like Augmented Neural ODEs (Zhang et al.)), in practice models trained on zero-padded data often perform poorly. Another work (Teshima et al., 2020) proves universal approximation, but its nonconstructive proof requires the optional permutation layers and partitions with $\left| S\right| = d - 1$ . We remove that requirement in two ways: first, by giving a construction that achieves universal approximation without permutations using 3 composed couplings, and second, by showing that the permutations can be simulated by a constant number of alternating but fixed coupling layers. + +First, we show that neither padding nor permutations nor depth is necessary representationally: shallow models without zero padding are already universal approximators in Wasserstein distance. + +Theorem 1 (Universal approximation without padding). Suppose that $P$ is the standard Gaussian measure in ${\mathbb{R}}^{n}$ with $n$ even and $Q$ is a distribution on ${\mathbb{R}}^{n}$ with bounded support and absolutely continuous with respect to the Lebesgue measure. Then for any $\epsilon > 0$ , there exists a depth-3 affine coupling network $g$ , with maps $s,t$ represented by feedforward ReLU networks, such that ${W}_{2}\left( {{g}_{\# }P,Q}\right) \leq \epsilon$ . + +Remark 1. A shared caveat of the universality construction in Theorem 1 with the construction in (Huang et al., 2020) is that the resulting network is poorly conditioned. In the case of the construction in (Huang et al., 2020), this is obvious because they pad the $d$ -dimensional training data with $d$ additional zeros, and a network that takes as input a Gaussian distribution in ${\mathbb{R}}^{2d}$ (i.e.
has full support) and outputs data on a $d$ -dimensional manifold (the space of zero-padded data) must have a singular Jacobian almost everywhere. ${}^{1}$ In the case of Theorem 1, the condition number of the network blows up at least as quickly as $1/\epsilon$ as we take the approximation error $\epsilon \rightarrow 0$ , so this network is also ill-conditioned if we are aiming for a very accurate approximation. + +Remark 2. Based on Theorem 3, the condition number blowup of either the Jacobian or the Hessian is necessary for a shallow model to be universal, even when approximating well-conditioned linear maps. The network constructed in Theorem 1 is also consistent with the lower bound from Theorem 4, because the network we construct in Theorem 1 is highly non-Lipschitz and uses many parameters per layer. + +§ 3.1.2. THE EFFECT OF CHOICE OF PARTITION ON DEPTH + +Next, we ask how much of a saving in terms of the depth of the network one can hope to gain from using learned partitions (à la GLOW) as compared to a fixed partition. More precisely: + +Question 1: Can models like Glow (Kingma & Dhariwal, 2018) be simulated by a sequence of affine blocks with a fixed partition without increasing the depth by much? + +We answer this question in the affirmative, at least for equally sized partitions (which is what is typically used in practice). We show the following surprising fact: consider an arbitrary partition $\left( {S,\left\lbrack {2d}\right\rbrack \smallsetminus S}\right)$ of $\left\lbrack {2d}\right\rbrack$ , such that $S$ satisfies $\left| S\right| = d$ , for $d \in \mathbb{N}$ .
Then for any invertible matrix $T \in {\mathbb{R}}^{{2d} \times {2d}}$ , the linear map $T : {\mathbb{R}}^{2d} \rightarrow {\mathbb{R}}^{2d}$ can be exactly represented by a composition of $O\left( 1\right)$ affine coupling layers that are linear, namely have the form ${L}_{i}\left( {{x}_{S},{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S}}\right) = \left( {{x}_{S},{B}_{i}{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S} + }\right.$ $\left. {{A}_{i}{x}_{S}}\right)$ or ${L}_{i}\left( {{x}_{S},{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S}}\right) = \left( {{C}_{i}{x}_{S} + {D}_{i}{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S},{x}_{\left\lbrack {2d}\right\rbrack \smallsetminus S}}\right)$ for matrices ${A}_{i},{B}_{i},{C}_{i},{D}_{i} \in {\mathbb{R}}^{d \times d}$ , s.t. each ${B}_{i},{C}_{i}$ is diagonal. For convenience of notation, without loss of generality let $S = \left\lbrack d\right\rbrack$ . Then, each of the layers ${L}_{i}$ is a matrix of the form $\left\lbrack \begin{matrix} I & 0 \\ {A}_{i} & {B}_{i} \end{matrix}\right\rbrack$ or $\left\lbrack \begin{matrix} {C}_{i} & {D}_{i} \\ 0 & I \end{matrix}\right\rbrack$ , where the rows and columns are partitioned into blocks of size $d$ . + +With this notation in place, we show the following theorem: + +Theorem 2. 
For all $d \geq 4$ , there exists a $k \leq {24}$ such that for any invertible $T \in {\mathbb{R}}^{{2d} \times {2d}}$ with $\det \left( T\right) > 0$ , there exist matrices ${A}_{i},{D}_{i} \in {\mathbb{R}}^{d \times d}$ and diagonal matrices ${B}_{i},{C}_{i} \in {\mathbb{R}}_{ \geq 0}^{d \times d}$ for all $i \in \left\lbrack k\right\rbrack$ such that + +$$ +T = \mathop{\prod }\limits_{{i = 1}}^{k}\left\lbrack \begin{matrix} I & 0 \\ {A}_{i} & {B}_{i} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {C}_{i} & {D}_{i} \\ 0 & I \end{matrix}\right\rbrack +$$ + +Note that the condition $\det \left( T\right) > 0$ is required, since affine coupling networks are always orientation-preserving. Adding one diagonal layer with negative signs suffices to model general matrices. In particular, since permutation matrices are invertible, this means that any applications of permutations to achieve a different partition of the inputs (e.g. like in Glow (Kingma & Dhariwal, 2018)) can in principle be represented as a composition of not-too-many affine coupling layers, indicating that the flexibility in the choice of partition is not the representational bottleneck. + +It is reasonable to ask how optimal the $k \leq {24}$ bound is; we supplement our upper bound with a lower bound, namely that $k \geq 3$ . This is surprising, as naive parameter counting would suggest $k = 2$ might work. Namely, we show: + +Theorem 3. For all $d \geq 4$ and $k \leq 2$ , there exists an invertible $T \in {\mathbb{R}}^{{2d} \times {2d}}$ with $\det \left( T\right) > 0$ , s.t.
for all ${A}_{i},{D}_{i} \in$ ${\mathbb{R}}^{d \times d}$ and for all diagonal matrices ${B}_{i},{C}_{i} \in {\mathbb{R}}_{ > 0}^{d \times d},i \in \left\lbrack k\right\rbrack$ it holds that + +$$ +T \neq \mathop{\prod }\limits_{{i = 1}}^{k}\left\lbrack \begin{matrix} I & 0 \\ {A}_{i} & {B}_{i} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {C}_{i} & {D}_{i} \\ 0 & I \end{matrix}\right\rbrack +$$ + +Beyond the relevance of this result in the context of how important the choice of partitions is, it also shows a lower bound on the depth for an equal number of nonlinear affine coupling layers (even with quite complex functions $s$ and $t$ in each layer) - since a nonlinear network can always be linearized about a (smooth) point to give a linear network with the same number of layers. In other words, studying linear affine coupling networks lets us prove a depth lower bound/depth separation for nonlinear networks for free. + +Remark 3 (Significance of Theorem 2 for Approximation in Likelihood/KL). All of the universality results in the literature for normalizing flows, including Theorem 1, prove universality in the Wasserstein distance or in the related sense of convergence of distributions. A stronger and probably much more difficult problem is to prove universality under the KL divergence instead: i.e. to show for a well-behaved distribution $P$ , there exists a sequence ${Q}_{n}$ of distributions generated by normalizing flow models such that + +$$ +\operatorname{KL}\left( {P,{Q}_{n}}\right) \rightarrow 0. \tag{1} +$$ + +This is important because Maximum-Likelihood training attempts to pick the model with the smallest KL, not the smallest Wasserstein distance, and the minimizers of these two objectives can be extremely different. 
For $P = N\left( {0,\Sigma }\right)$, Theorem 2 certainly implies (1) for bounded-depth linear affine couplings, and thus gives the first proof that global optimization of the maximum-likelihood objective of a normalizing flow model would successfully learn a Gaussian with arbitrary nondegenerate $\Sigma$.

${}^{1}$ Alternatively, we could feed a degenerate Gaussian supported on a $d$-dimensional subspace into the network as input, but there is no way to train such a model using maximum-likelihood training, since the prior is degenerate.

### 3.2. Results about general architectures

In order to guarantee that the network is invertible, normalizing flow models place significant restrictions on the architecture of the model. The most basic and general question we can ask is how this restriction affects the expressive power of the model - in particular, how much the depth must increase to compensate.

More precisely, we ask:

Question 2: Is there a distribution over ${\mathbb{R}}^{d}$ which can be written as the pushforward of a Gaussian through a small, shallow generator, which cannot be approximated by the pushforward of a Gaussian through a small, shallow layer-wise invertible neural network?

Given that there is great latitude in terms of the choice of layer architecture, while keeping the network invertible, the most general way to pose this question is to require each layer to be a function of $p$ parameters - i.e. $f = {f}_{1} \circ {f}_{2} \circ \cdots \circ {f}_{\ell }$ where $\circ$ denotes function composition and each ${f}_{i} : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is an invertible function specified by a vector ${\theta }_{i} \in {\mathbb{R}}^{p}$ of parameters.
This framing is extremely general: for instance it includes layerwise invertible feedforward networks in which ${f}_{i}\left( x\right) = {\sigma }^{\otimes d}\left( {{A}_{i}x + {b}_{i}}\right) ,\sigma$ is invertible, ${A}_{i} \in {\mathbb{R}}^{d \times d}$ is invertible, ${\theta }_{i} = \left( {{A}_{i},{b}_{i}}\right)$ and $p = d\left( {d + 1}\right)$ . It also includes popular architectures based on affine coupling blocks which we discussed in more detail in the previous subsection. + +We answer this question in the affirmative: namely, we show for any $k$ that there is a distribution over ${\mathbb{R}}^{d}$ which can be expressed as the pushforward of a network with depth $O\left( 1\right)$ and size $O\left( k\right)$ that cannot be (even very approximately) expressed as the pushforward of a Gaussian through a Lipschitz layerwise invertible network of depth smaller than $k/p$ . + +Towards formally stating the result, let $\theta = \left( {{\theta }_{1},\ldots ,{\theta }_{\ell }}\right) \in$ $\Theta \subset {\mathbb{R}}^{{d}^{\prime }}$ be the vector of all parameters (e.g. weights, biases) in the network, where ${\theta }_{i} \in {\mathbb{R}}^{p}$ are the parameters that correspond to layer $i$ , and let ${f}_{\theta } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ denote the resulting function. Define $R$ so that $\Theta$ is contained in the Euclidean ball of radius $R$ . + +We say the family ${f}_{\theta }$ is L-Lipschitz with respect to its parameters and inputs, if + +$$ +\forall \theta ,{\theta }^{\prime } \in \Theta : {\mathrm{E}}_{x \sim \mathcal{N}\left( {0,{I}_{d \times d}}\right) }\begin{Vmatrix}{{f}_{\theta }\left( x\right) - {f}_{{\theta }^{\prime }}\left( x\right) }\end{Vmatrix} \leq L\begin{Vmatrix}{\theta - {\theta }^{\prime }}\end{Vmatrix} +$$ + +and $\forall x,y \in {\mathbb{R}}^{d},\begin{Vmatrix}{{f}_{\theta }\left( x\right) - {f}_{\theta }\left( y\right) }\end{Vmatrix} \leq L\parallel x - y\parallel$ . 
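The layerwise invertible feedforward family just described can be made concrete with a small numerical sketch (our own illustration with hypothetical weight and nonlinearity choices, not code from the paper): each layer $f_i(x) = \sigma(A_i x + b_i)$ with invertible $\sigma$ and invertible $A_i$ can be inverted exactly, and the input-Lipschitz constant is bounded by the product of the operator norms, matching the $L = O(M^{\ell})$ estimate discussed in Remark 4.

```python
import numpy as np

# Illustrative sketch (ours, not from the paper): a layerwise invertible
# network f = f_l ∘ ... ∘ f_1 with f_i(x) = sigma(A_i x + b_i), sigma invertible
# (leaky ReLU) and A_i invertible, so theta_i = (A_i, b_i) has p = d(d+1) entries.
rng = np.random.default_rng(0)
d, depth, slope = 3, 4, 0.5

sigma = lambda z: np.where(z > 0, z, slope * z)      # invertible, 1-Lipschitz
sigma_inv = lambda z: np.where(z > 0, z, z / slope)

# Shift by 3*I to make each A_i comfortably invertible for this demo.
layers = [(rng.normal(size=(d, d)) + 3 * np.eye(d), rng.normal(size=d))
          for _ in range(depth)]

def f(x):
    for A, b in layers:
        x = sigma(A @ x + b)
    return x

def f_inv(y):
    # Undo the layers in reverse order: invert sigma, then solve the linear map.
    for A, b in reversed(layers):
        y = np.linalg.solve(A, sigma_inv(y) - b)
    return y

x = rng.normal(size=d)
assert np.allclose(f_inv(f(x)), x)

# Input-Lipschitz constant: with 1-Lipschitz sigma and operator norms at most M,
# a depth-l network satisfies L <= M**l, as in Remark 4.
M = max(np.linalg.norm(A, 2) for A, _ in layers)
x2 = x + 0.1 * rng.normal(size=d)
assert np.linalg.norm(f(x) - f(x2)) <= M**depth * np.linalg.norm(x - x2) + 1e-9
```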
We will discuss the reasonable range for $L$ in terms of the weights after the Theorem statement. We show ${}^{3}$:

Theorem 4. For any $k = \exp \left( {o\left( d\right) }\right), L = \exp \left( {o\left( d\right) }\right), R = \exp \left( {o\left( d\right) }\right)$, we have that for $d$ sufficiently large and any $\gamma > 0$ there exists a neural network $g : {\mathbb{R}}^{d + 1} \rightarrow {\mathbb{R}}^{d}$ with $O\left( k\right)$ parameters and depth $O\left( 1\right)$, s.t. for any family $\left\{ {{f}_{\theta },\theta \in \Theta }\right\}$ of layerwise invertible networks that are $L$-Lipschitz with respect to their parameters and inputs, have $p$ parameters per layer and depth at most $k/p$, we have

$$
\forall \theta \in \Theta ,{W}_{1}\left( {{\left( {f}_{\theta }\right) }_{\# \mathcal{N}},{g}_{\# \mathcal{N}}}\right) \geq {10}{\gamma }^{2}d
$$

Furthermore, for all $\theta \in \Theta$, $\operatorname{KL}\left( {{\left( {f}_{\theta }\right) }_{\# \mathcal{N}},{g}_{\# \mathcal{N}}}\right) \geq 1/{10}$ and $\operatorname{KL}\left( {{g}_{\# \mathcal{N}},{\left( {f}_{\theta }\right) }_{\# \mathcal{N}}}\right) \geq \frac{{10}{\gamma }^{2}d}{{L}^{2}}$.

Remark 4. First, note that while the number of parameters in both networks is comparable (i.e. it is $O\left( k\right)$), the invertible network is deeper, which is usually accompanied by algorithmic difficulties for training, due to vanishing and exploding gradients. For layerwise invertible generators, if we assume that the nonlinearity $\sigma$ is 1-Lipschitz and each matrix in the network has operator norm at most $M$, then a depth-$\ell$ network will have $L = O\left( {M}^{\ell }\right)$ ${}^{4}$ and $p = O\left( {d}^{2}\right)$. For an affine coupling network with $g,h$ parameterized by $H$-layer networks with $p/2$ parameters each, 1-Lipschitz activations and weights bounded by $M$ as above, we would similarly have $L = O\left( {M}^{\ell H}\right)$.

Remark 5.
We make a couple of comments on the "hard" distribution $g$ we construct, as well as the meaning of the parameter $\gamma$ and how to interpret the various lower bounds in the different metrics. The distribution $g$ for a given $\gamma$ will in fact be close to a mixture of $k$ Gaussians, each with mean on the sphere of radius ${10}{\gamma }^{2}d$ and covariance matrix ${\gamma }^{2}{I}_{d}$. Thus this distribution has most of its mass in a sphere of radius $O\left( {{\gamma }^{2}d}\right)$, so the Wasserstein guarantee gives close to a trivial approximation for $g$. The KL divergence bounds are derived by so-called transport inequalities between KL and Wasserstein for subgaussian distributions (Bobkov & Götze, 1999). The discrepancy between the two KL divergences comes from the fact that the functions $g,{f}_{\theta }$ may have different Lipschitz constants, hence the tails of ${g}_{\# \mathcal{N}}$ and ${\left( {f}_{\theta }\right) }_{\# \mathcal{N}}$ behave differently. In fact, if the function ${f}_{\theta }$ had the same Lipschitz constant as $g$, both KL lower bounds would be on the order of a constant.

${}^{2}$ Note for architectures having trainable biases in the input layer, these two notions of Lipschitzness should be expected to behave similarly.

${}^{3}$ In this Theorem and throughout, we use the standard asymptotic notation $f\left( d\right) = o\left( {g\left( d\right) }\right)$ to indicate that $\mathop{\limsup }\limits_{{d \rightarrow \infty }}\frac{f\left( d\right) }{g\left( d\right) } = 0$. For example, the assumption $k,L,R = \exp \left( {o\left( d\right) }\right)$ means that for any sequence ${\left( {k}_{d},{L}_{d},{R}_{d}\right) }_{d = 1}^{\infty }$ such that $\mathop{\limsup }\limits_{{d \rightarrow \infty }}\frac{\max \left( {\log {k}_{d},\log {L}_{d},\log {R}_{d}}\right) }{d} = 0$ the result holds true.

${}^{4}$ Note, our theorem applies to exponentially large Lipschitz constants.
# Beyond In-Place Corruption: Insertion and Deletion in Denoising Probabilistic Models

Anonymous Authors ${}^{1}$

## Abstract

Denoising diffusion probabilistic models have shown impressive results for generation of sequences by iteratively corrupting each example and then learning to map corrupted versions back to the original. However, previous work has largely focused on in-place corruption, adding noise to each pixel or token individually while keeping their locations the same. In this work, we consider a broader class of corruption processes and denoising models over sequence data that can insert and delete elements, while still being efficient to train and sample from. We demonstrate that these models outperform standard in-place models on an arithmetic sequence task, and that when trained on the text8 dataset they can be used to fix spelling errors without any fine-tuning.

## 1. Introduction

Although autoregressive models are generally considered state of the art for language modeling, machine translation, and other sequence-generation tasks (Raffel et al., 2019; van den Oord et al., 2016), they must process sequences left-to-right, which can make generation slow. As such, significant research effort has been put into non-autoregressive models that allow for parallel generation (Wang & Cho, 2019; Ghazvininejad et al., 2019).
Recently, denoising diffusion probabilistic models (DDPMs) (Sohl-Dickstein et al., 2015) have shown impressive results in a variety of domains (Chen et al., 2020; Ho et al., 2020; Hoogeboom et al., 2021), in some cases achieving comparable results to autoregressive models with far fewer steps. In these models, a forward process iteratively corrupts the data towards a noise distribution, and a generative model is trained to learn the reverse denoising process. However, these models share one limitation: the corruption process always modifies sequence

![01963e47-0800-74e3-9e54-d71797366f94_0_899_559_690_182_0.jpg](images/01963e47-0800-74e3-9e54-d71797366f94_0_899_559_690_182_0.jpg)

Figure 1. Generating an arithmetic sequence by denoising with insertion and deletion over ten steps, showing $x \bmod 100$ with color and $x \bmod 10$ with text. 'D' denotes deletion and 'I' insertion according to the fixed forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$. This sequence was generated by the learned reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$.

elements in-place. While convenient, this choice introduces strong constraints which limit the efficacy of the generative denoising process. For example, if the model makes a mistake and places a word or phrase in the wrong place, it cannot easily compensate.

In order to overcome similar limitations in non-autoregressive translation, Gu et al. (2019) proposed the Levenshtein transformer, which learns to perform insertion and deletion operations. Although this model works well for translation, it was not designed as a purely generative model and does not allow estimation of log-likelihoods.
In this work, we integrate insertion and deletion into the DDPM framework, generalizing multinomial diffusion models (Hoogeboom et al., 2021) and taking inspiration from the Levenshtein transformer. We carefully design a forward noising process that allows for tractable sampling of corrupted sequences and computing estimates of the log-likelihood bound. We show that our models outperform in-place diffusion on a dataset of arithmetic sequences, and that they learn error-correction mechanisms that work on misaligned inputs when trained on text.

## 2. Background

In this section we discuss the most relevant related work that is needed to introduce our proposed method. In Appendix A we discuss additional related work.

### 2.1. Denoising diffusion probabilistic models

DDPMs are latent variable generative models defined by a forward Markov process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ which gradually adds noise, and a learned reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ that removes noise. The forward process defines a joint distribution $q\left( {\mathbf{x}}_{0 : T}\right) = q\left( {\mathbf{x}}_{0}\right) \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ where $q\left( {\mathbf{x}}_{0}\right)$ is the data distribution and ${\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{T}$ are increasingly noisy latent variables that converge to a known distribution $q\left( {\mathbf{x}}_{T}\right)$. The reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ is then trained to match the forward process posteriors $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right)$, yielding a gradual denoising model with a tractable variational bound on the log-likelihood.
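As a concrete reference point for the in-place case, here is a minimal sketch (our own illustration with an assumed uniform-replacement schedule, not the paper's code) of multinomial corruption in the style of Hoogeboom et al. (2021): each token is independently resampled with probability $\beta_t$, and because corruption is token-wise and in place, $q(\mathbf{x}_t \mid \mathbf{x}_0)$ reduces to a product of per-step transition matrices.

```python
import numpy as np

# Minimal sketch of in-place multinomial corruption (Hoogeboom et al., 2021).
# The uniform-replacement beta schedule below is an illustrative assumption.
rng = np.random.default_rng(0)
V, T = 5, 50                        # vocabulary size, number of diffusion steps
betas = np.linspace(0.02, 0.2, T)   # per-step replacement probabilities

def Q(beta):
    """Single-step transition matrix: keep a token w.p. 1-beta, else resample uniformly."""
    return (1 - beta) * np.eye(V) + beta * np.ones((V, V)) / V

def forward_step(x, beta):
    """One step of q(x_t | x_{t-1}): each token moves independently, in place."""
    Qt = Q(beta)
    return np.array([rng.choice(V, p=Qt[tok]) for tok in x])

x = np.array([0, 1, 2, 3, 4])       # a toy sequence x_0
for beta in betas:                  # simulate the whole forward chain
    x = forward_step(x, beta)

# Because corruption is in-place and token-wise, q(x_t | x_0) is given by the
# cumulative product Qbar_t = Q_1 @ ... @ Q_t, applied to each token separately.
Qbar = np.eye(V)
for beta in betas:
    Qbar = Qbar @ Q(beta)

# After many steps the token marginals converge to the uniform noise distribution.
assert np.allclose(Qbar, np.ones((V, V)) / V, atol=1e-2)
```

Note that the sequence length never changes here, which is exactly the restriction the proposed method lifts.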
To enable efficient training, $q$ is often chosen such that these posteriors can be computed analytically: for continuous DDPMs, $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ is typically a Gaussian, while for discrete DDPMs, Hoogeboom et al. (2021) propose setting $q$ to randomly replace tokens with some probability. All recent diffusion models perform corruption in-place: the $k$ th element of ${\mathbf{x}}_{t}$ is a noisier version of the $k$ th element of ${\mathbf{x}}_{t - 1}$, with no dependence on other tokens.

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

### 2.2. Levenshtein Transformer

Gu et al. (2019) proposed the Levenshtein transformer, a non-autoregressive model that inserts and deletes tokens in parallel. Over a series of generation steps, the model marks tokens in the current sequence $\mathbf{x}$ that should be deleted, predicts how many tokens should be inserted at each position, and finally predicts values for the newly inserted tokens. To train the model, Gu et al. use imitation learning: they corrupt a dataset example $\mathbf{x}$ using a combination of random noise and the model's own predictions, use a dynamic programming algorithm to compute the minimal set of edits to recover $\mathbf{x}$ from the corrupted version ${\mathbf{x}}^{\prime }$, then train the model to maximize the likelihood of those optimal edits.

The Levenshtein transformer has shown impressive results in non-autoregressive machine translation. However, it has only been applied in a conditional generation setting where there is only a small set of possible correct answers, and it cannot be directly trained or evaluated in terms of the log-likelihood of dataset samples.

## 3. Method

Our goal is to design an insertion-deletion-based generative model within the probabilistic framework of diffusion models with a tractable bound on the log-likelihood. The main considerations are (a) how to define the forward corruption process so that it leads to a reverse process with insertions, deletions, and replacements, (b) how to parameterize the reverse process, and (c) how to do both tractably within the diffusion process framework.

### 3.1. Forward Process

The forward corruption process specifies how to gradually convert data ${\mathbf{x}}_{0}$ into noise ${\mathbf{x}}_{T}$ by repeatedly applying a single-step forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$. Since the learned reverse process is trained to undo each of these corruption steps, and insertion and deletion are inverses, we can obtain a learned reverse process with deletion, insertion, and replacement operations by including insertion, deletion, and replacement operations in the forward process, respectively.

A challenge is that if a single forward step can apply an arbitrary set of insertions, deletions, and replacements, then there are many ways to get ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{t - 1}$. For example, ${\mathbf{x}}_{t}$ can be related to ${\mathbf{x}}_{t - 1}$ through the minimum edit between the two, or by deleting the full ${\mathbf{x}}_{t - 1}$ and then inserting the full ${\mathbf{x}}_{t}$. In order to compute $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$, one would need to sum over all these possibilities. Our first idea is to restrict the forward process so that there is a single way to get each ${\mathbf{x}}_{t}$ from each ${\mathbf{x}}_{t - 1}$, by adding two auxiliary symbols into the vocabulary that explicitly track insertion and deletion operations: every insertion operation produces the insertion-marker token INS, and every deletion operation deletes the deletion-marker token DEL.
(We note that, since the reverse process is reversing the forward corruption process, the learned model must instead delete INS tokens and insert DEL tokens.) We propose the following form for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$:

1. Remove all DEL tokens from ${\mathbf{x}}_{t - 1}$.

2. For each token $x$ in ${\mathbf{x}}_{t - 1}$, sample a new value (possibly DEL) as ${x}^{\prime } \sim \operatorname{Cat}\left( {{x}^{\prime };{\delta }_{x}^{T}{Q}_{t}}\right)$, where ${Q}_{t}$ is a Markov transition matrix and ${\delta }_{x}$ is a one-hot vector for $x$.

3. Between each pair of tokens in the result, and also at the start and end of the sequence, sample ${n}_{i}^{\text{new }} \sim \operatorname{Geom}\left( {1 - {\alpha }_{t}}\right)$ and insert that many INS tokens. (We explain this choice in Section 3.4.)

We allow ${Q}_{t}$ to include transitions from INS to any other token, and from any token to DEL, but disallow transitions to INS or from DEL, to ensure they only arise from insertions and deletions. This ensures unique one-step alignments.

### 3.2. Parameterization of the reverse process

As an inductive bias, we prefer reverse processes that produce ${\mathbf{x}}_{t - 1}$ by modifying ${\mathbf{x}}_{t}$, instead of predicting it from scratch. As such, the learned reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ first removes all INS tokens from ${\mathbf{x}}_{t}$, then predicts two things for each remaining token: the previous value of the token (which might be INS if the token should be removed), and the number of DEL tokens that should be inserted before the token. (Recall that, since this is the reverse process, the auxiliary tokens have opposite meanings here.)
We also take inspiration from other work on diffusion models (Ho et al., 2020; Hoogeboom et al., 2021), which finds improved performance by guessing ${\mathbf{x}}_{0}$ and then using knowledge of the forward process to derive ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$, as opposed to specifying ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ directly. Our full parameterization combines these two ideas: it attempts to infer the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ that was applied to ${\mathbf{x}}_{0}$ to produce ${\mathbf{x}}_{t}$ (as shown in Fig. 2), then uses the known form of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ to derive

![01963e47-0800-74e3-9e54-d71797366f94_2_176_195_659_429_0.jpg](images/01963e47-0800-74e3-9e54-d71797366f94_2_176_195_659_429_0.jpg)

Figure 2. An example of sequences ${\mathbf{x}}_{0}$ through ${\mathbf{x}}_{3}$ produced by a forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ (top), along with the corresponding edit summary ${\mathbf{a}}_{0 \rightarrow 3}$ (bottom) that summarizes how to obtain ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{0}$ without describing the full sample path. Note that multiple sample paths can correspond to the same edit summary. Our model ${p}_{\theta }$ predicts the corresponding keep/replace or insert edge in ${\mathbf{a}}_{0 \rightarrow t}$ for each token in ${\mathbf{x}}_{t}$ (including the previous value $v$ in the first case), and also predicts the number of delete edges immediately before each token in ${\mathbf{x}}_{t}$ (e.g. there is one before 'f' and zero before 'i').

${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$.
Specifically, we compute

$$
{p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) \propto \mathop{\sum }\limits_{{{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t}}}{\widetilde{p}}_{\theta }\left( {{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\mathbf{x}}_{t}}\right) \cdot q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\widetilde{\mathbf{x}}}_{0}}\right), \tag{1}
$$

where tildes denote predictions that are not directly supervised, and we intentionally use $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\widetilde{\mathbf{x}}}_{0}}\right)$ in place of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t}}\right)$ to prevent the model from predicting edits ${\widetilde{\mathbf{a}}}_{0 \rightarrow t}$ that have zero probability under $q\left( {{\mathbf{x}}_{t},{\mathbf{a}}_{0 \rightarrow t} \mid {\mathbf{x}}_{0}}\right)$. Intuitively, the model predicts a summary of which edits likely happened (at an unknown time $s \leq t$) to produce ${\mathbf{x}}_{t}$, then $q$ determines the details of which specific edits appeared in ${\mathbf{x}}_{t - 1}$. This parameterization requires us to be able to compute $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\widetilde{\mathbf{x}}}_{0}}\right)$, which we discuss in Section 3.4.

### 3.3. Loss function

We optimize the standard evidence bound on the negative log-likelihood, which can be expressed as

$$
L = {\mathbb{E}}_{q\left( {\mathbf{x}}_{0 : T}\right) }\left\lbrack {\underset{{L}_{T}}{\underbrace{-\log {p}_{\theta }\left( {\mathbf{x}}_{T}\right) }} + \mathop{\sum }\limits_{{t = 1}}^{T}\underset{{L}_{t - 1}}{\underbrace{-\log \frac{{p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) }{q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) }}}}\right\rbrack \tag{2}
$$

We restrict our attention to $q$ for which the last transition $q\left( {{\mathbf{x}}_{T} \mid {\mathbf{x}}_{T - 1}}\right)$ deterministically replaces every token with DEL and inserts no new tokens; we can then simply learn a tabular distribution ${p}_{\theta }\left( \left| {\mathbf{x}}_{T}\right| \right)$ of final forward process lengths to compute the ${L}_{T}$ term.

For the ${L}_{t - 1}$ terms, we randomly sample $t$ and then compute

$$
{\mathbb{E}}_{q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right) }\left\lbrack {{\mathbb{E}}_{q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right) }\left\lbrack {-\log \frac{{p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) }{q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) }}\right\rbrack }\right\rbrack . \tag{3}
$$

![01963e47-0800-74e3-9e54-d71797366f94_2_907_195_685_511_0.jpg](images/01963e47-0800-74e3-9e54-d71797366f94_2_907_195_685_511_0.jpg)

Figure 3. Representation of $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ (left) and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ (right) as PFSTs, along with their composition $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ (bottom). Execution starts at the black dot and continues until reaching end-of-sequence at the double-outlined state.
Some probabilities omitted for readability; see Fig. 5 (in Appendix B). + +It turns out that we can compute this inner expectation in closed form for a given sample $\left( {t,{\mathbf{x}}_{0},{\mathbf{x}}_{t},{\mathbf{a}}_{0 \rightarrow t}}\right)$ , as we discuss in the next section. + +### 3.4. Computational considerations + +While a diffusion model could be trained by simply drawing sequences ${\mathbf{x}}_{0},{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{T}$ and training the model to undo each step, these models are usually trained by analytically computing terms of the negative evidence lower bound for individual timesteps $t$ and samples $\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$ , by using closed form representations of $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right)$ (Ho et al., 2020). Unfortunately, doing this for a forward process that inserts and deletes tokens is nontrivial. Over multiple steps, the $\overline{\text{ INS }}$ and $\overline{\text{ DEL }}$ markers may be skipped, which means that (as mentioned in Section 3.1) there will likely be many possible sets of insertions and deletions that produce ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{0}$ , with a corresponding wide variety of intermediate sequences $\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{t - 1}}\right)$ . + +To address this challenge, we introduce two main ideas: (a) cast the necessary quantities in terms of probabilistic finite-state transducers (PFSTs), which allow us to marginalize out details about intermediate sequences that do not matter for computing the loss, and (b) choose to condition on the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ in addition to the pair $\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$ while analytically computing the loss term ${L}_{t}$ in Eq. (3), which allows us to efficiently compute those PFST-based quantities. 
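A small illustration of why a finite-state machine can handle the geometric insertion counts ${n}_{i}^{\text{new}} \sim \operatorname{Geom}\left(1 - {\alpha }_{t}\right)$ from Section 3.1: a geometric variable is exactly the number of successes of repeated Bernoulli coin flips before the first failure, so a transducer can simply loop on a single state, emitting one INS token per successful flip. The sketch and names below are ours, not the paper's implementation.

```python
import random

# Sketch (ours): emitting n ~ Geom(1 - alpha_t) insertion markers one coin
# flip at a time, as a finite-state machine with a single self-loop would.
random.seed(0)

def sample_insertions(alpha_t):
    """Number of INS tokens to insert: flip a coin with success prob alpha_t,
    keep inserting until the first failure, so n ~ Geom(1 - alpha_t)."""
    n = 0
    while random.random() < alpha_t:
        n += 1
    return n

alpha = 0.3
samples = [sample_insertions(alpha) for _ in range(200_000)]
mean = sum(samples) / len(samples)
# The mean of this geometric distribution is alpha / (1 - alpha) = 3/7 here.
assert abs(mean - alpha / (1 - alpha)) < 0.02
```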
A PFST is a probabilistic finite state machine that has an input tape and one or more output tapes. It repeatedly makes stochastic transitions governed by a combination of transition probabilities and the current symbol from the input tape. As it makes transitions, it consumes input tape symbols and writes to its output tape(s).

Figure 4. Left: generating text with an insertion-deletion denoising model ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ trained on the text8 dataset (generative process flows upward). Right: fixing typos using an insert-delete model (and an in-place baseline), showing five random predictions from each model.

To make use of PFSTs
| | NLL (nats) | Error rate (%) |
| --- | --- | --- |
| In-place | $\leq {39.95} \pm {0.06}$ | ${13.12} \pm {2.40}$ |
| 0.4 ins/del | $\leq {36.35} \pm {0.07}$ | ${5.70} \pm {0.37}$ |
| 0.6 ins/del | $\leq {35.71} \pm {0.04}$ | ${5.16} \pm {0.27}$ |
| 0.8 ins/del | $\leq {38.51} \pm {0.17}$ | ${6.48} \pm {0.13}$ |
Table 1. Results on arithmetic sequences. NLL denotes negative log-likelihood, error rate denotes the fraction of the step sizes in each generated example that differ from the most common step size. Standard deviation taken over five random seeds.

for our purposes, we begin by expressing $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ as a PFST, which is possible because geometric random variables can be sampled as repeated coin flips. The PFST iteratively consumes the input $\left( {\mathbf{x}}_{t - 1}\right)$, transitioning between states and writing to the output $\left( {\mathbf{x}}_{t}\right)$. We additionally make use of an algebra over PFSTs that allows composing PFSTs and integrating out output tapes. By composing PFSTs for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$, we obtain a two-output-tape PFST for $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$, with which we can integrate out ${\mathbf{x}}_{t - 1}$ to obtain $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$. Fig. 3 shows the high-level structure of each PFST; full details are in Appendix B.2.

Given a specific edit summary ${\mathbf{a}}_{0 \rightarrow t}$, we can reconstruct the state transitions in the PFST for $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$, which allows us to compute $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\mathbf{a}}_{0 \rightarrow t} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ in closed form. Details on how to compute the necessary terms for our loss in Section 3.3 and our model parameterization in Section 3.2 are given in Appendices B.3 and B.4, respectively.

## 4. Experiment: Toy sequence datasets

We start by exploring the expressive power of denoising insertion-deletion models on a toy dataset of increasing and decreasing arithmetic sequences. We take a multinomial diffusion corruption process (Hoogeboom et al., 2021) and augment it with varying probabilities of insertion and deletion. As shown in Table 1, adding a moderate amount of insertion and deletion in the forward process leads to better log-likelihoods and also produces generated sequences that have fewer deviations from being a valid arithmetic sequence. Figure 1 visualizes an example sequence generated by the 0.6 insert/delete rate model. See Appendix C.2 for experiment details.

## 5. Experiment: Text generation

We also investigate training a 32-step multinomial-diffusion-based model augmented with insertion and deletion on the character-level language dataset text8 (Mahoney, 2011). Although insert/delete models have slightly worse log-likelihood bounds on this dataset (see Table 2 in Appendix C), the samples are still high quality, and the models show qualitative differences in the generative process: they can correct spelling errors, insert spaces between words, and make other human-like edits. In Fig. 4 we show a generated sentence from an insert-delete model, and also show that it can be used to "spellcheck" a badly typed human-written sentence without being trained on this task, by simply treating the sentence as ${\mathbf{x}}_{10}$ and sampling from ${p}_{\theta }\left( {{\mathbf{x}}_{0} \mid {\mathbf{x}}_{10}}\right)$. The insert-delete model generates imperfect but intuitive suggestions, whereas an in-place model generates nonsense due to misalignment issues. See Appendix C.3 for experiment details.

## 6. Discussion

In this work we have opened up the class of denoising-based generative models to more flexible processes that include insert and delete operations in addition to the commonly used replacement operation.
While we have motivated these insert-delete diffusion-like models from the perspective of text generation, this class of models could be useful for several other applications, such as image super-resolution (by inserting and deleting pixel rows and columns), video generation (by inserting and deleting video frames), and molecular structure generation (by editing SMILES representations (Weininger, 1988)). We are also excited about the potential for incorporating other types of global structure and semantically meaningful edits (such as duplication or reordering) into corruption processes as a strategy for improving denoising-based generative models. + +## References + +Alva-Manchego, F., Bingel, J., Paetzold, G., Scarton, C., and Specia, L. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 295-305, Taipei, Taiwan, November 2017. Asian Federation of Natural Language Processing. URL https://www.aclweb.org/anthology/I17-1030. + +Bahdanau, D., Serdyuk, D., Brakel, P., Ke, N. R., Chorowski, J., Courville, A., and Bengio, Y. Task loss estimation for sequence prediction. arXiv preprint arXiv:1511.06456, 2015. + +Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. WaveGrad: Estimating gradients for waveform generation. September 2020. + +Dinella, E., Dai, H., Li, Z., Naik, M., Song, L., and Wang, K. Hoppity: Learning graph transformations to detect and fix bugs in programs. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJeqs6EFvB. + +Dong, Y., Li, Z., Rezagholizadeh, M., and Cheung, J. C. K. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3393-3402, Florence, Italy, July 2019.
Association for Computational Linguistics. doi: 10.18653/v1/P19-1331. URL https://www.aclweb.org/anthology/P19-1331. + +Ghazvininejad, M., Levy, O., Liu, Y., and Zettlemoyer, L. Mask-Predict: Parallel decoding of conditional masked language models. April 2019. + +Graves, A., Fernández, S., Gomez, F., and Schmidhuber, J. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pp. 369-376, New York, NY, USA, 2006. Association for Computing Machinery. ISBN 1595933832. doi: 10.1145/1143844.1143891. URL https://doi.org/10.1145/1143844.1143891. + +Gu, J., Wang, C., and Zhao, J. Levenshtein transformer. May 2019. + +Guu, K., Hashimoto, T. B., Oren, Y., and Liang, P. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437-450, 2018. + +Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020. + +Hoogeboom, E., Nielsen, D., Jaini, P., Forré, P., and Welling, M. Argmax flows and multinomial diffusion: Towards non-autoregressive language models. 2021. + +Mahoney, M. Text8 dataset. http://mattmahoney.net/dc/textdata, 2011. Accessed: 2021-5-24. + +Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified Text-to-Text transformer. 2019. + +Sabour, S., Chan, W., and Norouzi, M. Optimal completion distillation for sequence learning. arXiv preprint arXiv:1810.01398, 2018. + +Seff, A., Zhou, W., Damani, F., Doyle, A., and Adams, R. P. Discrete object generation with reversible inductive construction. July 2019. + +Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.
+ +van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016. + +Wang, A. and Cho, K. BERT has a mouth, and it must speak: BERT as a Markov random field language model. February 2019. + +Weininger, D. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J. Chem. Inf. Comput. Sci., 28(1):31-36, 1988. ISSN 0095-2338. doi: 10.1021/ci00057a005. + +Yao, Z., Xu, F. F., Yin, P., Sun, H., and Neubig, G. Learning structural edits via incremental tree transformations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=v9hAX77--cZ. + +Yin, P., Neubig, G., Allamanis, M., Brockschmidt, M., and Gaunt, A. L. Learning to represent edits. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJ16AjC5F7. + +Zhao, R., Bieber, D., Swersky, K., and Tarlow, D. Neural networks for modeling source code edits. arXiv preprint arXiv:1904.02818, 2019. + +## A. Other related work + +A few other works have studied diffusion-like generative models for structured data, including Seff et al. (2019), which exploits a structured forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ to impose constraints on the generated samples. A number of other edit-based generative models have been proposed, including Guu et al. (2018), which edits prototypical examples in a latent space. In natural language processing, edit-based models have been proposed for learning to simplify complex sentences into simple ones (Alva-Manchego et al., 2017; Dong et al., 2019). In source code applications, it is common to generate edits for bug-fixing (Yin et al., 2019; Zhao et al., 2019; Dinella et al., 2020; Yao et al., 2021).
There are also models that use edit distances for purposes of supervision (either directly or via imitation learning), but still generate left-to-right (Graves et al., 2006; Bahdanau et al., 2015; Sabour et al., 2018). + +## B. Computing probabilities with PFSTs + +In this section we describe how to compute the necessary probabilities for the forward process and learned reverse process using probabilistic finite state transducers. + +### B.1. Notation for PFST representations + +We will begin by introducing additional notation which will be useful for representing PFSTs of the multi-step forward process probabilities $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ . + +For all of our PFSTs, we associate each transition with a label " $p : x \mapsto y$ ", which indicates that, conditioned on $x$ being the next symbol on the input tape, with probability $p$ the PFST consumes $x$ and produces $y$ . We use $\varepsilon$ to denote the empty sequence, and thus $p : \varepsilon \mapsto y$ denotes a transition that (with probability $p$ ) inserts $y$ without consuming any input. Similarly $p : x \mapsto \varepsilon$ denotes consuming $x$ without producing any output, which corresponds to a deletion. For the product transducer $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ , we write $p : x \mapsto y \mapsto z$ to indicate consuming $x$ from ${\mathbf{x}}_{0}$ , writing $y$ to ${\mathbf{x}}_{t - 1}$ , and writing $z$ to ${\mathbf{x}}_{t}$ . + +As stated in Section 3.1, each single step of the forward process is parameterized by a scalar ${\alpha }_{t}$ and a Markov transition matrix ${\mathbf{Q}}_{t}$ . 
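To make the single-step parameterization concrete, here is a small hedged sketch (our own illustrative code, not the paper's implementation) of one reading of $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$: with probability ${\alpha }_{t}$ an insertion marker is emitted before consuming the next input token, and each consumed token is replaced according to its row of ${\mathbf{Q}}_{t}$. The vocabulary layout and function name here are assumptions.

```python
import numpy as np

# Hedged sketch of one forward corruption step q(x_t | x_{t-1}).
# Vocabulary layout is our own assumption: 0..K-1 are data tokens,
# followed by the special INS and DEL markers.
K = 4
INS, DEL = K, K + 1

def forward_step(x_prev, alpha_t, Q_t, rng):
    """Insert INS markers with probability alpha_t before each consumed
    token (and at the end), and replace each consumed token x by a sample
    from row Q_t[x]. Tokens mapped to DEL act as deletion placeholders."""
    out = []
    for tok in x_prev:
        while rng.random() < alpha_t:   # geometric number of insertions
            out.append(INS)
        out.append(int(rng.choice(len(Q_t), p=Q_t[tok])))
    while rng.random() < alpha_t:       # trailing insertions
        out.append(INS)
    return out

# With alpha_t = 0 and Q_t = identity, the step is a no-op.
rng = np.random.default_rng(0)
assert forward_step([0, 1, 2], 0.0, np.eye(K + 2), rng) == [0, 1, 2]
```

Because insertions are realized as repeated Bernoulli trials, the number of markers inserted at each slot is geometric, which is what makes the PFST representation in the next appendix possible.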
To represent the aggregate probabilities over multiple steps, we introduce three parameters ${\overline{\mathbf{\alpha }}}_{t},{\overline{\mathbf{\beta }}}_{t}$ , and ${\overline{\mathbf{Q}}}_{t}$ : + +- ${\bar{\alpha }}_{t}$ is a vector of insertion probabilities, such that ${\left\lbrack {\bar{\alpha }}_{t}\right\rbrack }_{i}$ gives the chance of inserting token $i$ when skipping from time 0 to time $t$ . In particular, ${\left\lbrack {\bar{\alpha }}_{t}\right\rbrack }_{\langle \mathrm{{INS}}\rangle }$ denotes the probability of inserting $\overline{\mathrm{{INS}}}$ , and ${\left\lbrack {\overline{\mathbf{\alpha }}}_{t}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }$ denotes the probability of inserting $\overline{\text{ DEL }}$ . ${\overline{\mathbf{\alpha }}}_{t}$ is used to summarize inserts at some time $s \leq t$ followed by a chain of replacements ${\mathbf{Q}}_{s + 1},\ldots ,{\mathbf{Q}}_{t}$ . If $s < t$ , we call this a silent insertion. + +- Conversely, ${\overline{\mathbf{\beta }}}_{t}$ is a vector of deletion probabilities, such that ${\left\lbrack {\overline{\mathbf{\beta }}}_{t}\right\rbrack }_{i}$ gives the chance of deleting token $i$ conditional on it appearing in ${\mathbf{x}}_{0}$ . ${\overline{\mathbf{\beta }}}_{t}$ is used to summarize a chain of replacements ${\mathbf{Q}}_{1},\ldots ,{\mathbf{Q}}_{s}$ that produce $\overline{\text{ DEL }}$ at some time $s < t$ . We call this a silent deletion. + +- Finally, ${\overline{\mathbf{Q}}}_{t}$ is a matrix that specifies how tokens will be replaced over multiple steps, such that ${\left\lbrack {\overline{\mathbf{Q}}}_{t}\right\rbrack }_{xy}$ denotes the probability of consuming $x$ and producing $y$ conditioned on $x$ appearing in ${\mathbf{x}}_{0}$ . Notably, this encompasses both chains of replacements due to ${\mathbf{Q}}_{t}$ , as well as silent deletion-insertion pairs, where a token is inserted immediately after a deleted token. (For instance, in Fig.
2, 'b' and 'h' form a deletion-insertion pair) + +Using this, we can fully specify the PFSTs for each process of interest: + +![01963e47-0800-74e3-9e54-d71797366f94_5_917_735_672_594_0.jpg](images/01963e47-0800-74e3-9e54-d71797366f94_5_917_735_672_594_0.jpg) + +Figure 5. From top to bottom: $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) , q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ , and $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ as probabilistic finite-state transducers. + +### B.2. Calculating $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ from $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ + +As discussed in Section 3.4, we can use the transducer representations shown in Fig. 5 to recursively construct probabilities for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ from the individual step distributions $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ . We proceed inductively by constructing a deterministic $q\left( {{\mathbf{x}}_{0} \mid {\mathbf{x}}_{0}}\right)$ and then repeatedly computing $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ from $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ . 
+ +As our base case, observe that $q\left( {{\mathbf{x}}_{0} \mid {\mathbf{x}}_{0}}\right)$ is the identity transformation, and we can represent it using the following parameters: + +$$ +{\left\lbrack {\overline{\mathbf{\alpha }}}_{0}\right\rbrack }_{i} = 0,\;{\left\lbrack {\overline{\mathbf{\beta }}}_{0}\right\rbrack }_{i} = 0, \tag{4} +$$ + +$$ +{\left\lbrack {\overline{\mathbf{Q}}}_{0}\right\rbrack }_{ij} = 1\text{ if }i = j,0\text{ otherwise.} +$$ + +Now suppose we know ${\overline{\mathbf{\alpha }}}_{t - 1},{\overline{\mathbf{\beta }}}_{t - 1}$ , and ${\overline{\mathbf{Q}}}_{t - 1}$ for $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ , and ${\mathbf{Q}}_{t}$ and ${\alpha }_{t}$ for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ , and we wish to compute ${\overline{\mathbf{\alpha }}}_{t},{\overline{\mathbf{\beta }}}_{t}$ , and ${\overline{\mathbf{Q}}}_{t}$ for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ . We start by constructing the product transducer for $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ by composing the two transducers for $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ , as shown in Fig. 5. Next, we marginalize out the middle timestep ${\mathbf{x}}_{t - 1}$ . This entails removing the middle step from each transition, and instead summing over all possible values for that middle token. We obtain the two-tape transducer shown in Fig. 6. + +![01963e47-0800-74e3-9e54-d71797366f94_6_186_196_647_245_0.jpg](images/01963e47-0800-74e3-9e54-d71797366f94_6_186_196_647_245_0.jpg) + +Figure 6. Transducer for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ after marginalizing out ${\mathbf{x}}_{t - 1}$ from $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ in Fig. 5.
Note the presence of matrix-vector products with ${\overline{\mathbf{\alpha }}}_{t - 1}$ and ${\overline{\mathbf{Q}}}_{t - 1}$ , instead of explicit indices. + +Next, we eliminate the middle state ${S}_{B}^{A}$ , by replacing all paths that pass through it with new transitions that directly connect ${S}_{A}^{A}$ and ${S}_{B}^{B}$ . We note that these paths may enter the loop ${S}_{B}^{A} \mapsto {S}_{B}^{A}$ arbitrarily many times without producing any output or consuming any input (this is a silent-insertion-deletion pair). The total probability of all paths that take that loop an arbitrary number of times is thus + +$$ +\mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( {\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }\right) }^{n} = \frac{1}{1 - {\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}. \tag{5} +$$ + +We obtain the following new transitions. From ${S}_{A}^{A}$ to ${S}_{A}^{A}$ : + +$$ +\left( {1 - {\alpha }_{t}}\right) \frac{1}{1 - {\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}{\bar{\alpha }}_{t - 1}^{T}{Q}_{t}{\delta }_{y} : \varepsilon \mapsto y \tag{6} +$$ + +From ${S}_{A}^{A}$ to ${S}_{B}^{B}$ : + +$$ +\left( {1 - {\alpha }_{t}}\right) \frac{1}{1 - {\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}\left( {1 - \mathop{\sum }\limits_{y}{\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{y}}\right) : \varepsilon \mapsto \varepsilon \tag{7} +$$ + +From ${S}_{B}^{B}$ to ${S}_{B}^{B}$ : + +$$ +{\left\lbrack {\overline{\mathbf{Q}}}_{t - 1}\right\rbrack }_{x\langle \mathrm{{DEL}}\rangle }\frac{1}{1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}\left( {1 - \mathop{\sum }\limits_{y}{\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{y}}\right) : x \mapsto \varepsilon
\tag{8} +$$ + +From ${S}_{B}^{B}$ to ${S}_{A}^{A}$ : + +$$ +{\left\lbrack {\overline{\mathbf{Q}}}_{t - 1}\right\rbrack }_{x\langle \mathrm{{DEL}}\rangle }\frac{1}{1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}{\overline{\mathbf{\alpha }}}_{t - 1}^{T}{\mathbf{Q}}_{t}{\mathbf{\delta }}_{z} : x \mapsto z \tag{9} +$$ + +Equation (9) is particularly notable, as it corresponds to a silent-deletion-insertion pair, in which $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ replaces $x$ with $\overline{\text{ DEL }}$ and then inserts some other token ( $y$ in Fig. 5, but marginalized out here), after which $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ removes $\overline{\text{ DEL }}$ and produces $z$ from $y$ . + +Combining these new transitions with the old ones between ${S}_{A}^{A}$ and ${S}_{B}^{B}$ gives us the following values for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ : + +$$ +{\overline{\mathbf{\alpha }}}_{t}^{T} = {\alpha }_{t}{\mathbf{\delta }}_{\langle \mathrm{{INS}}\rangle }^{T} + \frac{1 - {\alpha }_{t}}{1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}{\overline{\mathbf{\alpha }}}_{t - 1}^{T}{\mathbf{Q}}_{t}, \tag{10} +$$ + +$$ +{\overline{\mathbf{\beta }}}_{t} = {\overline{\mathbf{\beta }}}_{t - 1} + {\overline{\mathbf{Q}}}_{t - 1}{\mathbf{\delta }}_{\langle \mathrm{{DEL}}\rangle }\frac{1 - \mathop{\sum }\limits_{y}{\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{y}}{1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}, \tag{11} +$$ + +$$ +{\overline{\mathbf{Q}}}_{t} = {\overline{\mathbf{Q}}}_{t - 1}{\mathbf{Q}}_{t} + \frac{{\overline{\mathbf{Q}}}_{t - 1}{\delta }_{\langle \mathrm{{DEL}}\rangle }{\overline{\mathbf{\alpha }}}_{t - 1}^{T}{\mathbf{Q}}_{t}}{1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }} \tag{12} +$$
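As a concrete illustration, the updates in Eqs. (10) to (12) amount to a few lines of array code. The sketch below is our own (the toy vocabulary layout and function name are assumptions, not the paper's code); it carries ${\overline{\mathbf{\alpha }}}_{t}$ , ${\overline{\mathbf{\beta }}}_{t}$ , and ${\overline{\mathbf{Q}}}_{t}$ through one induction step:

```python
import numpy as np

V, INS, DEL = 5, 3, 4   # toy vocabulary: data tokens 0-2 plus INS and DEL

def accumulate(abar, bbar, qbar, alpha_t, Q_t):
    """One induction step of Eqs. (10)-(12): combine the parameters of
    q(x_{t-1} | x_0) with the single-step (alpha_t, Q_t)."""
    denom = 1.0 - abar[DEL]
    ins = np.eye(V)[INS]                 # one-hot delta at <INS>
    abar_n = alpha_t * ins + (1.0 - alpha_t) / denom * (abar @ Q_t)   # Eq. (10)
    bbar_n = bbar + qbar[:, DEL] * (1.0 - abar.sum()) / denom          # Eq. (11)
    qbar_n = qbar @ Q_t + np.outer(qbar[:, DEL], abar @ Q_t) / denom   # Eq. (12)
    return abar_n, bbar_n, qbar_n

# Base case (Eq. 4) and a toy Q_t with no transitions from DEL or to INS.
abar, bbar, qbar = np.zeros(V), np.zeros(V), np.eye(V)
Q = np.array([[0.8, 0.05, 0.05, 0.0, 0.1],
              [0.05, 0.8, 0.05, 0.0, 0.1],
              [0.05, 0.05, 0.8, 0.0, 0.1],
              [1/3, 1/3, 1/3, 0.0, 0.0],    # INS row: uniform over data
              [0.0, 0.0, 0.0, 0.0, 0.0]])   # DEL row: no outgoing mass
for _ in range(3):
    abar, bbar, qbar = accumulate(abar, bbar, qbar, 0.1, Q)
# Sanity check: per data token, "still visible somewhere" + "deleted" = 1.
assert np.allclose(qbar[:3].sum(axis=1) + bbar[:3], 1.0)
```

The final assertion checks a conservation property implied by the recursion: for every token of ${\mathbf{x}}_{0}$ , the mass accounted for by ${\overline{\mathbf{Q}}}_{t}$ and ${\overline{\mathbf{\beta }}}_{t}$ together stays equal to one.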
+(Note: Here we assume ${\left\lbrack {\mathbf{Q}}_{t}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle i} = {\left\lbrack {\mathbf{Q}}_{t}\right\rbrack }_{i\langle \mathrm{{INS}}\rangle } = 0$ , as ${\mathbf{Q}}_{t}$ does not allow transitions from DEL or to INS.) Intuitively, Eq. (10) says that inserts occur either as $\overline{\text{ INS }}$-marked inserts at time $t$ or (silent) inserts before time $t$ that are then perturbed; Eq. (11) says that deletions occur either as silent deletions before time $t$ or as transitions to DEL at time $t$ that are then removed without inserting new tokens; and Eq. (12) says that replacements occur either because a token was copied/replaced before time $t$ and then copied/replaced again at $t$ , or because a token $x$ was replaced by $\overline{\text{ DEL }}$ at time $t - 1$ , but a new token $y$ was (silently) inserted at or before time $t - 1$ , so that at time $t$ the new token $y$ looks like a replacement for the old token $x$ . + +### B.3. Closed form of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ + +We can similarly obtain a closed-form representation of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ by reasoning backwards about the elimination steps in the previous section. We start by observing that the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ tells us the sequence of replacements $x \mapsto z$ , insertions $\varepsilon \mapsto z$ , and deletions $x \mapsto \varepsilon$ executed by the transducer while sampling ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{0}$ . + +Suppose we observe a replacement $x \mapsto z$ (where perhaps $x = z$ if it was copied unmodified). This must have been produced by the ${\overline{\mathbf{Q}}}_{t}$ edge. From Eq. (12) and Fig.
5 we can infer the distribution over the intermediate value $x \mapsto y \mapsto z$ , if it exists: + +$$ +p\left( {x \mapsto y \mapsto z \mid x \mapsto z}\right) = \frac{{\left\lbrack {\overline{\mathbf{Q}}}_{t - 1}\right\rbrack }_{xy} \cdot {\left\lbrack {\mathbf{Q}}_{t}\right\rbrack }_{yz}}{{\left\lbrack {\overline{\mathbf{Q}}}_{t}\right\rbrack }_{xz}} \tag{13} +$$ + +$$ +p\left( {\left. \begin{matrix} x \mapsto \overline{\text{ DEL }} \\ \varepsilon \mapsto y \mapsto z \end{matrix}\right| \;x \mapsto z}\right) = \frac{{\left\lbrack {\overline{\mathbf{Q}}}_{t - 1}\right\rbrack }_{x\langle \mathrm{{DEL}}\rangle }{\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{y}{\left\lbrack {\mathbf{Q}}_{t}\right\rbrack }_{yz}}{\left( {1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}\right) {\left\lbrack {\overline{\mathbf{Q}}}_{t}\right\rbrack }_{xz}} \tag{14} +$$ + +If the event in Eq. (14) occurs, we can also infer that there was a geometric number ${n}_{i}^{\text{extra }} \sim \operatorname{Geom}\left( {1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\left\langle \mathrm{{DEL}}\right\rangle }}\right)$ of extra $\varepsilon \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transitions due to the loop in ${S}_{B}^{A}$ . + +Now suppose we observe an insert $\varepsilon \mapsto z$ . If $z = \overline{\text{ INS }}$ , we know it was inserted at time $t$ , so it must have been produced by the $\varepsilon \mapsto \varepsilon \mapsto \overline{\text{ INS }}$ transition. If $z$ is any other token, it must have already existed at time $t - 1$ , with + +$$ +p\left( {\varepsilon \mapsto y \mapsto z \mid \varepsilon \mapsto z}\right) = \frac{{\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{y} \cdot {\left\lbrack {\mathbf{Q}}_{t}\right\rbrack }_{yz}}{{\left\lbrack {\bar{\alpha }}_{t - 1}{\mathbf{Q}}_{t}\right\rbrack }_{z}}.
\tag{15} +$$ + +In this second case we also pass through ${S}_{B}^{A}$ and generate ${n}_{i}^{\text{extra }} \sim \operatorname{Geom}\left( {1 - {\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}\right)$ extra $\varepsilon \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transitions. + +Next suppose we observe a deletion $x \mapsto \varepsilon$ (where we know $x \neq \overline{\text{ DEL }}$ because there are no deletion markers in the data distribution). In this case we have + +$$ +p\left( {x \mapsto \varepsilon \mapsto \varepsilon \mid x \mapsto \varepsilon }\right) = \frac{{\left\lbrack {\overline{\mathbf{\beta }}}_{t - 1}\right\rbrack }_{x}}{{\left\lbrack {\overline{\mathbf{\beta }}}_{t}\right\rbrack }_{x}} \tag{16} +$$ + +$$ +p\left( {x \mapsto \overline{\text{ DEL }} \mapsto \varepsilon \mid x \mapsto \varepsilon }\right) = \frac{{\left\lbrack {\overline{\mathbf{Q}}}_{t - 1}\right\rbrack }_{x\langle \mathrm{{DEL}}\rangle }\frac{1 - \mathop{\sum }\limits_{y}{\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{y}}{1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}}{{\left\lbrack {\overline{\mathbf{\beta }}}_{t}\right\rbrack }_{x}} \tag{17} +$$ + +where, like before, the second case passes through ${S}_{B}^{A}$ and generates ${n}_{i}^{\text{extra }} \sim \operatorname{Geom}\left( {1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\left\langle \mathrm{{DEL}}\right\rangle }}\right)$ extra $\varepsilon \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transitions. + +Finally, we note that every time we move from ${S}_{A}^{A}$ to ${S}_{B}^{B}$ (in other words, whenever we stop inserting tokens), there is one more ${n}_{i}^{\text{extra }} \sim \operatorname{Geom}\left( {1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\left\langle \mathrm{{DEL}}\right\rangle }}\right)$ set of $\varepsilon \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transitions.
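The marginalization below relies on the fact that a sum of independent geometric random variables is negative-binomially distributed. A quick self-contained Monte-Carlo check of that fact (our own illustration, with $p$ playing the role of $1 - {\left\lbrack {\overline{\mathbf{\alpha }}}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }$):

```python
import numpy as np

# Illustration (not from the paper): a sum of k i.i.d. Geometric(p) counts
# of "failures" matches a negative binomial NB(k, p) distribution.
rng = np.random.default_rng(0)
p, k, n = 0.7, 3, 200_000

# NumPy's geometric() counts trials (support >= 1); subtract 1 for failures.
geom_sums = (rng.geometric(p, size=(n, k)) - 1).sum(axis=1)
nb_draws = rng.negative_binomial(k, p, size=n)

# Both should have mean k * (1 - p) / p = 9/7.
assert abs(geom_sums.mean() - k * (1 - p) / p) < 0.05
assert abs(geom_sums.mean() - nb_draws.mean()) < 0.05
```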
+ +Using the above analysis allows us to compute $q\left( {{\mathbf{x}}_{t - 1},{\mathbf{a}}_{0 \rightarrow \left( {t - 1}\right) } \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ , where the extra information ${\mathbf{a}}_{0 \rightarrow \left( {t - 1}\right) }$ specifies the sequence of $x \mapsto \varepsilon \mapsto \varepsilon$ , $\varepsilon \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ and $x \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transitions (which are ambiguous from ${\mathbf{a}}_{0 \rightarrow t}$ alone). Since we do not particularly care about this information, we can marginalize it out by noting that the total number ${n}^{\text{obs }}$ of consecutive $\overline{\text{ DEL }}$ tokens observed at a particular position in ${\mathbf{x}}_{t - 1}$ is the sum of the number of explicit deletions $x \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ and insertion-deletion pairs $\varepsilon \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ . Given a fixed number of explicit deletions, the total number of insertion-deletion pairs is a sum of independent geometric random variables and thus has a negative binomial distribution. We can thus: + +- compute for each deleted token $x$ in ${\mathbf{x}}_{0}$ the probability of an explicit $x \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transition using Eq. (17) + +- compute for each perturbed $x \mapsto z$ transition the probability of an explicit $x \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transition using Eq.
(14) + +- compute the distribution of the total number ${n}^{\text{explicit }}$ of $x \mapsto \overline{\text{ DEL }} \mapsto \varepsilon$ transitions at this location in ${\mathbf{x}}_{t - 1}$ by noting that it is a sum of independent Bernoulli r.v.s (which can be computed either by taking convolutions of their PMFs, or, if all tokens are deleted with the same probability, by observing that this is a binomial distribution) + +- use this distribution to compute a mixture of negative binomial distributions: ${n}^{\text{obs }} \sim {n}^{\text{explicit }} + \mathrm{{NB}}\left( {{n}^{\text{explicit }} + 1},{1 - {\left\lbrack {\bar{\alpha }}_{t - 1}\right\rbrack }_{\langle \mathrm{{DEL}}\rangle }}\right)$ . + +### B.4. Combining ${\widetilde{p}}_{\theta }\left( {{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\mathbf{x}}_{t}}\right)$ with $q$ + +The ${\mathbf{x}}_{0}$ -predicting parameterization of ${p}_{\theta }$ follows the same general procedure outlined above for inferring ${\mathbf{x}}_{t - 1}$ from ${\mathbf{x}}_{t},{\mathbf{x}}_{0}$ and ${\mathbf{a}}_{0 \rightarrow t}$ . However, we make a few slight modifications due to the structure of ${\widetilde{p}}_{\theta }$ . + +For each token $z$ in ${\mathbf{x}}_{t}$ , the model predicts a modification probability ${\widetilde{p}}_{\theta }\left( {x \mapsto z}\right)$ for each token and an insertion probability ${\widetilde{p}}_{\theta }\left( {\epsilon \mapsto z}\right)$ . We use these as weights to scale the appropriate inference terms in Eqs. (13) to (15). + +Additionally, the model predicts a distribution ${\widetilde{p}}_{\theta }\left( {n}_{i}^{\text{del }}\right)$ of the number of $x \mapsto \varepsilon$ transitions that occurred before each position $i$ in ${\mathbf{x}}_{t}$ .
We use this to infer the number ${n}_{i}^{\text{obs }}$ of DEL placeholders that appear at time $t - 1$ using the same inference procedure as above, but we now have a mixture of mixtures of negative binomial distributions because we may be uncertain about how many insertions there were. (Usually, we will have ${n}_{i}^{\text{obs }} \leq {n}_{i}^{\text{del }}$ , since deletions could have occurred at any time from 0 to $t$ .) When implementing this parameterization we assume that every token is equally likely to be deleted at each timestep, so that the model only has to predict the number of missing tokens from ${\mathbf{x}}_{0}$ ; if this is not the case, it would be possible to predict ${\widetilde{p}}_{\theta }\left( {n}_{i}^{obs}\right)$ directly instead. + +We choose to predict deletion-insertion pairs simply as an insertion preceded by a deletion, instead of reasoning about it as a replacement; this simplifies our computation by avoiding having to separately reason about Eq. (14). + +## C. Experimental details + +### C.1. Model architecture + +For all of our experiments, we use a standard decoder-only transformer following the T5 (Raffel et al., 2019) architecture, with either six or twelve layers depending on the task. The main modification we make is to introduce two output heads instead of one. The first output head, like a standard transformer, predicts a matrix ${f}_{\theta }\left( {\mathbf{x}}_{t}\right) \in {\mathbb{R}}^{L \times K}$ of unnormalized log-probabilities (logits), where $L$ is the sequence length and $K$ is the vocabulary size. 
We interpret ${f}_{\theta }{\left( {\mathbf{x}}_{t}\right) }_{iv}$ as the log-probability of the $i$ th token being produced by a replacement edit (equivalently $v \mapsto {\left\lbrack {\mathbf{x}}_{t}\right\rbrack }_{i}$ in the PFST notation) in the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ , and similarly interpret ${f}_{\theta }{\left( {\mathbf{x}}_{t}\right) }_{i\langle \mathrm{{INS}}\rangle }$ as the log-probability of the $i$ th token of ${\mathbf{x}}_{t}$ being an insertion (or $\varepsilon \mapsto {\left\lbrack {\mathbf{x}}_{t}\right\rbrack }_{i}$ ). We reuse the embeddings for the input vocabulary as the final output layer for this head. The second output head produces a matrix ${g}_{\theta }\left( {\mathbf{x}}_{t}\right) \in {\mathbb{R}}^{L \times L}$ , for which ${g}_{\theta }{\left( {\mathbf{x}}_{t}\right) }_{in}$ gives the (unnormalized) log-probability of having $n$ different deletion (or ${\left\lbrack {\mathbf{x}}_{0}\right\rbrack }_{j} \mapsto \varepsilon$ ) edges immediately before the $i$ th token of ${\mathbf{x}}_{t}$ . + +When running the transformer on an input sequence, we introduce an extra end-of-sequence token EOS that denotes the last position in the input. The first output head ${f}_{\theta }$ is ignored for the EOS token, but we do use the output ${g}_{\theta }$ for the EOS token to determine the number of + +![01963e47-0800-74e3-9e54-d71797366f94_8_206_235_568_377_0.jpg](images/01963e47-0800-74e3-9e54-d71797366f94_8_206_235_568_377_0.jpg) + +Figure 7. Noise schedule for arithmetic sequence task for $r = {0.6}$ . For each number $x \in \{ 0,\ldots ,{511}\}$ , probability mass shown by the red line is evenly divided among all of the other 511 dataset tokens (not including $\overline{\text{ INS }}$ or $\overline{\text{ DEL }}$ ).
Schedules for other values of $r$ are similar, but with higher or lower values of $\alpha$ and ${\mathbf{Q}}_{x\langle \mathrm{{DEL}}\rangle }$ . + +edits in the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ that occur at the end of the sequence. + +As mentioned in Section 3.3, we additionally store a fixed-size table ${p}_{\theta }\left( \left| {\mathbf{x}}_{T}\right| \right) \in {\mathbb{R}}^{L}$ , which we fit to the distribution of observed lengths $\left| {\mathbf{x}}_{T}\right|$ . + +### C.2. Arithmetic sequences + +We construct a dataset of arithmetic sequences by randomly sampling a step size $s$ between 1 and 10, a direction (increasing or decreasing), a length $\ell$ between 32 and 64 (with the constraint that $s\left( {\ell - 1}\right) < {509}$ ), and finally a random starting position so that all terms in the sequence are between 2 and 511, inclusive. (0 is used to denote padding in the data loader, and 1 was reserved for preliminary experiments that required additional reserved tokens, but both are treated as ordinary tokens by the model.) Along with $\overline{\mathrm{{INS}}},\overline{\mathrm{{DEL}}}$ , and an end-of-sequence marker EOS, this yields a total augmented vocabulary of size 515. + +We compare four different forward process schedules, each of which is tuned to add less noise for timesteps closer to 0 and more noise as $t$ approaches 10. We start by choosing an insert/delete rate $r \in \{ 0,{0.4},{0.6},{0.8}\}$ .
Next, for $1 \leq t \leq 9$ , we calculate a fraction ${u}_{t} = {0.1}\frac{t}{9} + {0.9}{\left( \frac{t}{9}\right) }^{2}$ , then choose the insertion probability ${\alpha }_{t}$ and matrix ${\mathbf{Q}}_{t}$ for each $t$ so that, cumulatively after step $t$ , approximately ${u}_{t} \times r$ of the elements of ${\mathbf{x}}_{0}$ have been deleted, ${u}_{t} \times r$ of the elements of ${\mathbf{x}}_{t}$ come from insertions (so that the length of the sequence remains approximately the same), and ${u}_{t}$ of the remaining elements from ${\mathbf{x}}_{0}$ have been replaced by a random integer between 0 and 511. Finally, at step 10 we append a deterministic step ${\mathbf{Q}}_{10}$ that replaces every token with $\overline{\mathrm{{DEL}}}$ , and set ${\alpha }_{10} = 0$ . When $r = {0.0}$ , no insertions or deletions occur until the last step, which is simply used to allow the model to predict the length of the sequence. We choose ${\left\lbrack {\mathbf{Q}}_{t}\right\rbrack }_{\langle \mathrm{{INS}}\rangle n} = \frac{1}{512}$ for all $0 \leq n < {512}$ so that $\mathrm{{INS}}$ is equally likely to transition to any of the 512 numbers in the vocabulary. The full schedule for $r = {0.6}$ is shown in Fig. 7. + +
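The cumulative corruption targets above can be sketched in a few lines (our own illustrative code; the paper derives the per-step ${\alpha }_{t}$ and ${\mathbf{Q}}_{t}$ from these targets, which we omit here):

```python
# Hedged sketch of the cumulative corruption targets from Appendix C.2.
def cumulative_targets(r, T=9):
    """For t = 1..T, the target fractions after step t: deleted tokens,
    inserted tokens, and replaced survivors, given insert/delete rate r."""
    targets = []
    for t in range(1, T + 1):
        u_t = 0.1 * (t / T) + 0.9 * (t / T) ** 2
        targets.append({"t": t,
                        "deleted": u_t * r,     # fraction of x_0 deleted
                        "inserted": u_t * r,    # fraction of x_t inserted
                        "replaced": u_t})       # fraction of survivors replaced
    return targets

sched = cumulative_targets(0.6)
assert abs(sched[-1]["replaced"] - 1.0) < 1e-12   # u_9 = 0.1 + 0.9 = 1
assert all(a["replaced"] < b["replaced"] for a, b in zip(sched, sched[1:]))
```

The quadratic term makes the schedule add little noise near $t = 0$ and ramp up towards $t = 9$, matching the description above.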
| Model | Bits/char |
| --- | --- |
| In place | ≤ 1.669 |
| 0.4 insert/delete | ≤ 1.759 |
| 0.6 insert/delete | ≤ 1.789 |
| 0.8 insert/delete | ≤ 1.844 |
+ +Table 2. Preliminary quantitative results on text8. Shown are the best results over a hyperparameter sweep of 12 learning rate schedules. + +For each insert/delete rate $r$ , we train a six-layer transformer model over 100,000 minibatches of 512 random examples, using the Adam optimizer and a learning rate that increases linearly to $2 \times {10}^{-4}$ over 5000 steps, then stays constant. We rerun training with five random seeds for each schedule. Since the loss seemed to stabilize at around 90,000 steps, we take averages of the validation metrics computed during the last 10,000 steps of training for each seed, corresponding to ELBO estimates for 46,080 random dataset examples and error rate metrics for 2304 samples drawn from the model. We then report the average and standard deviation of these per-seed metrics across the five random seeds for each schedule. + +### C.3. Text generation on text8 + +For text8, we construct a dataset of training examples by taking randomly selected 118-character chunks of the full concatenated lower-cased training set. We use a dataset vocabulary of 28 tokens, including each character 'a' through 'z', a space, and an extra token '-' that does not appear in the dataset (again used for preliminary experiments); including INS, DEL, and EOS gives a vocabulary of size 31. During training, since we may insert a large number of tokens by chance, we enforce a maximum length of the intermediates ${\mathbf{x}}_{t}$ by rejection sampling until we draw a sample shorter than 128 characters (which we correct for when computing the ELBO during evaluation). + +As in the arithmetic sequence dataset, we compare forward process schedules with four insert/delete rates $r \in \{ 0,{0.4},{0.6},{0.8}\}$ , constructed to add less noise near time 0.
In this case, we instead set ${u}_{t} = {0.1}\frac{t}{31} + {0.9}{\left( \frac{t}{31}\right) }^{2}$ and produce a 32-step corruption process; similarly, when randomizing, we randomly choose from the 28 tokens in the vocabulary instead of the 512 numbers.

For each insert/delete rate $r$, we train a twelve-layer transformer model over 1,000,000 minibatches of 512 random examples, using the Adam optimizer. We perform a sweep over four learning rates $\left\{ {5 \times {10}^{-5},1 \times {10}^{-4},2 \times {10}^{-4},5 \times {10}^{-4}}\right\}$ and three schedule types: linear increase until 5000 steps followed by constant, linear increase until 5000 steps followed by reciprocal square root decay, and a cyclical cosine schedule with period 100,000.

As a preliminary estimate of performance, and because training seemed to converge before 900,000 steps, we evaluated over a subset of 40,960 length-118 segments sampled from the validation set, averaged over the last 100,000 steps of training. Table 2 shows preliminary bits/char measurements for the run with the best performance for each value of $r$.

To produce the typo-repair example on the right side of Fig. 4, we took the human-written sentence "thisn sentsnetne wasstype vssry babdly", intended as a typo-ridden version of "this sentence was typed very badly". We then padded the sentence out with placeholder text ("lorem ipsum dolor sit amet lorem ipsum dolor sit amet...") until it had length 119, to be approximately the length of the training examples. We set this padded sentence as ${\mathbf{x}}_{10}$, then drew five random samples for both the 0.6 insert/delete rate model and the 0.0 insert/delete rate model. We trimmed off the placeholder text (which the model generally left alone) but did not make any other edits.
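The rejection-sampling length cap mentioned above can be sketched as follows; `toy_sample` is a hypothetical stand-in for drawing a corrupted sequence from the forward process, not the paper's implementation:

```python
import random

# Sketch (ours) of the length cap: keep redrawing the corrupted sequence until
# it is shorter than the maximum length. The acceptance probability this
# rejection step induces is what must be corrected for in the ELBO.
def sample_capped(sample_fn, max_len, rng):
    while True:
        x = sample_fn(rng)
        if len(x) < max_len:
            return x

# hypothetical corruption draw: a sequence of random length up to 199 tokens
def toy_sample(rng):
    return ["a"] * rng.randrange(1, 200)

x = sample_capped(toy_sample, max_len=128, rng=random.Random(0))
```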
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/cAsVBUe1Rnj/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/cAsVBUe1Rnj/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..7290984722634549844810cd00a0c602ae926e --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/cAsVBUe1Rnj/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,171 @@

§ BEYOND IN-PLACE CORRUPTION: INSERTION AND DELETION IN DENOISING PROBABILISTIC MODELS

Anonymous Authors ${}^{1}$

§ ABSTRACT

Denoising diffusion probabilistic models have shown impressive results for generation of sequences by iteratively corrupting each example and then learning to map corrupted versions back to the original. However, previous work has largely focused on in-place corruption, adding noise to each pixel or token individually while keeping their locations the same. In this work, we consider a broader class of corruption processes and denoising models over sequence data that can insert and delete elements, while still being efficient to train and sample from. We demonstrate that these models outperform standard in-place models on an arithmetic sequence task, and that when trained on the text8 dataset they can be used to fix spelling errors without any fine-tuning.

§ 1. INTRODUCTION

Although autoregressive models are generally considered state of the art for language modeling, machine translation, and other sequence-generation tasks (Raffel et al., 2019; van den Oord et al., 2016), they must process sequences left-to-right, which can make generation slow.
As such, significant research effort has been put into non-autoregressive models that allow for parallel generation (Wang & Cho, 2019; Ghazvininejad et al., 2019). Recently, denoising diffusion probabilistic models (DDPMs) (Sohl-Dickstein et al., 2015) have shown impressive results in a variety of domains (Chen et al., 2020; Ho et al., 2020; Hoogeboom et al., 2021), in some cases achieving comparable results to autoregressive models with far fewer steps. In these models, a forward process iteratively corrupts the data towards a noise distribution, and a generative model is trained to learn the reverse denoising process. However, these models share one limitation: the corruption process always modifies sequence elements in-place. While convenient, this choice introduces strong constraints which limit the efficacy of the generative denoising process. For example, if the model makes a mistake and places a word or phrase in the wrong place, it cannot easily compensate.

Figure 1. Generating an arithmetic sequence by denoising with insertion and deletion over ten steps, showing $x \bmod 100$ with color and $x \bmod 10$ with text. 'D' denotes deletion and 'I' insertion according to the fixed forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$. This sequence was generated by the learned reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$.

In order to overcome similar limitations in non-autoregressive translation, Gu et al. (2019) proposed the Levenshtein transformer, which learns to perform insertion and deletion operations. Although this model works well for translation, it was not designed as a purely generative model and does not allow estimation of log-likelihoods.
+ +In this work, we integrate insertion and deletion into the DDPM framework, generalizing multinomial diffusion models (Hoogeboom et al., 2021) and taking inspiration from the Levenshtein transformer. We carefully design a forward noising process that allows for tractable sampling of corrupted sequences and computing estimates of the log-likelihood bound. We show that our models outperform in-place diffusion on a dataset of arithmetic sequences, and that they learn error-correction mechanisms that work on misaligned inputs when trained on text. + +§ 2. BACKGROUND + +In this section we discuss the most relevant related work that is needed to introduce our proposed method. In Appendix A we discuss additional related work. + +§ 2.1. DENOISING DIFFUSION PROBABILISTIC MODELS + +DDPMs are latent variable generative models defined by a forward Markov process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ which gradually adds noise, and a learned reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ that removes noise. The forward process defines a joint distribution $q\left( {\mathbf{x}}_{0 : T}\right) = q\left( {\mathbf{x}}_{0}\right) \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ where $q\left( {\mathbf{x}}_{0}\right)$ is the data distribution and ${\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{T}$ are increasingly noisy latent variables that converge to a known distribution $q\left( {\mathbf{x}}_{T}\right)$ . The reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ is then trained to match the forward process posteriors $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right)$ , yielding a gradual denoising model with a tractable variational bound on the log-likelihood. 
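For intuition, a single in-place corruption step of the discrete kind used as this paper's baseline can be sketched as follows; this is a simplified stand-in (ours), not the paper's implementation:

```python
import random

# Simplified sketch (ours) of an in-place discrete forward step in the spirit
# of multinomial diffusion: each token is independently replaced by a uniform
# random vocabulary token with probability beta_t. Note that the sequence
# length never changes -- the "in-place" property discussed in this paper.
def in_place_forward_step(x, beta_t, vocab_size, rng):
    return [rng.randrange(vocab_size) if rng.random() < beta_t else tok
            for tok in x]

rng = random.Random(0)
x0 = [3, 1, 4, 1, 5]
x1 = in_place_forward_step(x0, beta_t=0.5, vocab_size=28, rng=rng)
```

Because every position is corrupted independently and stays fixed, no step of this process can shift content left or right, which is exactly the misalignment limitation the insertion-deletion models below address.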
To enable efficient training, $q$ is often chosen such that these posteriors can be computed analytically: for continuous DDPMs, $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ is typically a Gaussian, while for discrete DDPMs, Hoogeboom et al. (2021) propose setting $q$ to randomly replace tokens with some probability. All recent diffusion models perform corruption in-place: the $k$ th element of ${\mathbf{x}}_{t}$ is a noisier version of the $k$ th element of ${\mathbf{x}}_{t - 1}$, with no dependence on other tokens.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

§ 2.2. LEVENSHTEIN TRANSFORMER

Gu et al. (2019) proposed the Levenshtein transformer, a non-autoregressive model that inserts and deletes tokens in parallel. Over a series of generation steps, the model marks tokens in the current sequence $\mathbf{x}$ that should be deleted, predicts how many tokens should be inserted at each position, and finally predicts values for the newly inserted tokens. To train the model, Gu et al. use imitation learning: they corrupt a dataset example $\mathbf{x}$ using a combination of random noise and the model's own predictions, use a dynamic programming algorithm to compute the minimal set of edits to recover $\mathbf{x}$ from the corrupted version ${\mathbf{x}}^{\prime }$, then train the model to maximize the likelihood of those optimal edits.

The Levenshtein transformer has shown impressive results in non-autoregressive machine translation. However, it has only been applied in a conditional generation setting where there is only a small set of possible correct answers, and it cannot be directly trained or evaluated in terms of the log-likelihood of dataset samples.

§ 3.
METHOD

Our goal is to design an insertion-deletion-based generative model within the probabilistic framework of diffusion models, with a tractable bound on the log-likelihood. The main considerations are (a) how to define the forward corruption process so that it leads to a reverse process with insertions, deletions, and replacements, (b) how to parameterize the reverse process, and (c) how to do both tractably within the diffusion framework.

§ 3.1. FORWARD PROCESS

The forward corruption process specifies how to gradually convert data ${\mathbf{x}}_{0}$ into noise ${\mathbf{x}}_{T}$ by repeatedly applying a single-step forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$. Since the learned reverse process is trained to undo each of these corruption steps, and insertion and deletion are inverses, we can obtain a learned reverse process with deletion, insertion, and replacement operations by including insertion, deletion, and replacement operations in the forward process, respectively.

A challenge is that if a single forward step can apply an arbitrary set of insertions, deletions, and replacements, then there are many ways to get ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{t - 1}$. For example, ${\mathbf{x}}_{t}$ can be related to ${\mathbf{x}}_{t - 1}$ through the minimum edit between the two, or by deleting the full ${\mathbf{x}}_{t - 1}$ and then inserting the full ${\mathbf{x}}_{t}$. In order to compute $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$, one would need to sum over all these possibilities. Our first idea is to restrict the forward process so that there is a single way to get each ${\mathbf{x}}_{t}$ from each ${\mathbf{x}}_{t - 1}$, by adding two auxiliary symbols into the vocabulary that explicitly track insertion and deletion operations: every insertion operation produces the insertion-marker token INS, and every deletion operation deletes the deletion-marker token DEL.
(We note that, since the reverse process is reversing the forward corruption process, the learned model must instead handle deletion of INS tokens and insertion of DEL tokens.) We propose the following form for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$:

1. Remove all DEL tokens from ${\mathbf{x}}_{t - 1}$.

2. For each token $x$ in ${\mathbf{x}}_{t - 1}$, sample a new value (possibly DEL) as ${x}^{\prime } \sim \operatorname{Cat}\left( {{x}^{\prime };{\delta }_{x}^{T}{Q}_{t}}\right)$, where ${Q}_{t}$ is a Markov transition matrix and ${\delta }_{x}$ is a one-hot vector for $x$.

3. Between each pair of tokens in the result, and also at the start and end of the sequence, sample ${n}_{i}^{\text{new}} \sim \operatorname{Geom}\left( {1 - {\alpha }_{t}}\right)$ and insert that many INS tokens. (We explain this choice in Section 3.4.)

We allow ${Q}_{t}$ to include transitions from INS to any other token, and from any token to DEL, but disallow transitions to INS or from DEL, to ensure that these markers only arise from insertions and deletions. This ensures unique one-step alignments.

§ 3.2. PARAMETERIZATION OF THE REVERSE PROCESS

As an inductive bias, we prefer reverse processes that produce ${\mathbf{x}}_{t - 1}$ by modifying ${\mathbf{x}}_{t}$, instead of predicting it from scratch. As such, the learned reverse process ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ first removes all INS tokens from ${\mathbf{x}}_{t}$, then predicts two things for each remaining token: the previous value of the token (which might be INS if the token should be removed), and the number of DEL tokens that should be inserted before the token. (Recall that, since this is the reverse process, the auxiliary tokens have opposite meanings here.)
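The three-rule single-step forward process of Section 3.1 can be sketched as follows; this is a minimal stand-in (ours), with a caller-supplied per-token `transition` kernel in place of the matrix $Q_t$:

```python
import random

INS, DEL = "INS", "DEL"  # auxiliary marker tokens

# Sketch (ours) of one forward step q(x_t | x_{t-1}) following Section 3.1:
# 1) drop the DEL markers left by the previous step, 2) resample every token
# through a transition kernel that may emit DEL, 3) insert Geom(1 - alpha_t)
# many INS markers in every gap, including before the first token and after
# the last one (the geometric draw is realized as repeated coin flips).
def forward_step(x_prev, transition, alpha_t, rng):
    x = [tok for tok in x_prev if tok != DEL]          # rule 1
    x = [transition(tok, rng) for tok in x]            # rule 2
    out = []
    for tok in x + [None]:                             # None marks the final gap
        while rng.random() < alpha_t:                  # repeated coin flips
            out.append(INS)                            # rule 3
        if tok is not None:
            out.append(tok)
    return out
```

With an identity kernel and $\alpha_t = 0$ this reduces to simply stripping DEL markers, which makes the unique one-step alignment property easy to see.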
We also take inspiration from other work on diffusion models (Ho et al., 2020; Hoogeboom et al., 2021), which finds improved performance by guessing ${\mathbf{x}}_{0}$ and then using knowledge of the forward process to derive ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$, as opposed to specifying ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ directly. Our full parameterization combines these two ideas: it attempts to infer the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ that was applied to ${\mathbf{x}}_{0}$ to produce ${\mathbf{x}}_{t}$ (as shown in Fig. 2), then uses the known form of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ to derive ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$. Specifically, we compute

$$
{p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) \propto \mathop{\sum }\limits_{{{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t}}}{\widetilde{p}}_{\theta }\left( {{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\mathbf{x}}_{t}}\right) \cdot q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\widetilde{\mathbf{x}}}_{0}}\right) , \tag{1}
$$

where tildes denote predictions that are not directly supervised, and we intentionally use $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\widetilde{\mathbf{x}}}_{0}}\right)$ in place of $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\widetilde{\mathbf{x}}}_{0},{\widetilde{\mathbf{a}}}_{0 \rightarrow t}}\right)$ to prevent the model from predicting edits ${\widetilde{\mathbf{a}}}_{0 \rightarrow t}$ that have zero probability under $q\left( {{\mathbf{x}}_{t},{\mathbf{a}}_{0 \rightarrow t} \mid {\mathbf{x}}_{0}}\right)$. Intuitively, the model predicts a summary of which edits likely happened (at an unknown time $s \leq t$) to produce ${\mathbf{x}}_{t}$, then $q$ determines the details of which specific edits appeared in ${\mathbf{x}}_{t - 1}$. This parameterization requires us to be able to compute $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\widetilde{\mathbf{a}}}_{0 \rightarrow t} \mid {\widetilde{\mathbf{x}}}_{0}}\right)$, which we discuss in Section 3.4.

Figure 2. An example of sequences ${\mathbf{x}}_{0}$ through ${\mathbf{x}}_{3}$ produced by a forward process $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ (top), along with the corresponding edit summary ${\mathbf{a}}_{0 \rightarrow 3}$ (bottom) that summarizes how to obtain ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{0}$ without describing the full sample path. Note that multiple sample paths can correspond to the same edit summary. Our model ${p}_{\theta }$ predicts the corresponding edge in ${\mathbf{a}}_{0 \rightarrow t}$ for each token in ${\mathbf{x}}_{t}$ (including the previous value $v$ where the token was kept or replaced), and also predicts the number of deletion edges immediately before each token in ${\mathbf{x}}_{t}$ (e.g. there is one before 'f' and zero before 'i').

§ 3.3.
LOSS FUNCTION

We optimize the standard variational bound on the negative log-likelihood, which can be expressed as

$$
L = {\mathbb{E}}_{q\left( {\mathbf{x}}_{0 : T}\right) }\left\lbrack {\underset{{L}_{T}}{\underbrace{-\log {p}_{\theta }\left( {\mathbf{x}}_{T}\right) }} + \mathop{\sum }\limits_{{t = 1}}^{T}\underset{{L}_{t - 1}}{\underbrace{-\log \frac{{p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) }{q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) }}}}\right\rbrack \tag{2}
$$

We restrict our attention to $q$ for which the last transition $q\left( {{\mathbf{x}}_{T} \mid {\mathbf{x}}_{T - 1}}\right)$ deterministically replaces every token with DEL and inserts no new tokens; we can then simply learn a tabular distribution ${p}_{\theta }\left( \left| {\mathbf{x}}_{T}\right| \right)$ of final forward-process lengths to compute the ${L}_{T}$ term.

For the ${L}_{t - 1}$ terms, we randomly sample $t$ and then compute

$$
{\mathbb{E}}_{q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right) }\left\lbrack {{\mathbb{E}}_{q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right) }\left\lbrack {-\log \frac{{p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right) }{q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right) }}\right\rbrack }\right\rbrack . \tag{3}
$$

Figure 3. Representation of $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ (left) and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ (right) as PFSTs, along with their composition $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$ (bottom). Execution starts at the black dot and continues until reaching end-of-sequence at the double-outlined state. Some probabilities are omitted for readability; see Fig. 5 (in Appendix B).
It turns out that we can compute this inner expectation in closed form for a given sample $\left( {t,{\mathbf{x}}_{0},{\mathbf{x}}_{t},{\mathbf{a}}_{0 \rightarrow t}}\right)$, as we discuss in the next section.

§ 3.4. COMPUTATIONAL CONSIDERATIONS

While a diffusion model could be trained by simply drawing sequences ${\mathbf{x}}_{0},{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{T}$ and training the model to undo each step, these models are usually trained by analytically computing terms of the negative evidence lower bound for individual timesteps $t$ and samples $\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$, using closed-form representations of $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0}}\right)$ (Ho et al., 2020). Unfortunately, doing this for a forward process that inserts and deletes tokens is nontrivial. Over multiple steps, the INS and DEL markers may be skipped, which means that (as mentioned in Section 3.1) there will likely be many possible sets of insertions and deletions that produce ${\mathbf{x}}_{t}$ from ${\mathbf{x}}_{0}$, with a correspondingly wide variety of intermediate sequences $\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{t - 1}}\right)$.

To address this challenge, we introduce two main ideas: (a) cast the necessary quantities in terms of probabilistic finite-state transducers (PFSTs), which allow us to marginalize out details about intermediate sequences that do not matter for computing the loss, and (b) choose to condition on the edit summary ${\mathbf{a}}_{0 \rightarrow t}$ in addition to the pair $\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{t}}\right)$ while analytically computing the loss term ${L}_{t}$ in Eq. (3), which allows us to efficiently compute those PFST-based quantities.
A PFST is a probabilistic finite state machine that has an input tape and one or more output tapes. It repeatedly makes stochastic transitions governed by a combination of transition probabilities and the current symbol from the input tape. As it makes transitions, it consumes input tape symbols and writes to its output tape(s). To make use of PFSTs for our purposes, we begin by expressing $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ as a PFST, which is possible because geometric random variables can be sampled as repeated coin flips. The PFST iteratively consumes the input $\left( {\mathbf{x}}_{t - 1}\right)$, transitioning between states and writing to the output $\left( {\mathbf{x}}_{t}\right)$. We additionally make use of an algebra over PFSTs that allows composing PFSTs and integrating out output tapes.

Figure 4. Left: generating text with an insertion-deletion denoising model ${p}_{\theta }\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t}}\right)$ trained on the text8 dataset (generative process flows upward). Right: fixing typos using an insert-delete model (and an in-place baseline), showing five random predictions from each model.

|  | NLL (nats) | Error rate (%) |
| --- | --- | --- |
| In-place | $\leq 39.95 \pm 0.06$ | $13.12 \pm 2.40$ |
| 0.4 ins/del | $\leq 36.35 \pm 0.07$ | $5.70 \pm 0.37$ |
| 0.6 ins/del | $\leq 35.71 \pm 0.04$ | $5.16 \pm 0.27$ |
| 0.8 ins/del | $\leq 38.51 \pm 0.17$ | $6.48 \pm 0.13$ |

Table 1. Results on arithmetic sequences. NLL denotes negative log-likelihood; the error rate is the fraction of the step sizes in each generated example that differ from the most common step size. Standard deviation taken over five random seeds.
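Because each gap writes $\operatorname{Geom}(1 - \alpha_t)$ insertion markers, the probability of any particular INS placement factorizes over gaps. The toy calculation below (ours, not the paper's code) illustrates the kind of closed-form quantity this repeated-coin-flip view exposes:

```python
import math

# Toy sketch (ours): log-probability of observing k_i INS markers in each gap
# under the geometric insertion mechanism. Each gap contributes log(1 - alpha)
# for the final "stop" coin flip plus k * log(alpha) for the k successful
# "insert" flips, and gaps are independent.
def log_prob_ins_counts(counts, alpha):
    log_p = 0.0
    for k in counts:
        log_p += math.log(1.0 - alpha)
        if k > 0:
            log_p += k * math.log(alpha)
    return log_p
```

Summing the resulting probabilities over all counts in a single gap recovers 1, which is the kind of exact marginalization the PFST algebra performs over whole intermediate sequences.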
By composing PFSTs for $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{t - 1}}\right)$ and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$, we obtain a two-output-tape PFST for $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$, with which we can integrate out ${\mathbf{x}}_{t - 1}$ to obtain $q\left( {{\mathbf{x}}_{t} \mid {\mathbf{x}}_{0}}\right)$. Fig. 3 shows the high-level structure of each PFST; full details are in Appendix B.2.

Given a specific edit summary ${\mathbf{a}}_{0 \rightarrow t}$, we can reconstruct the state transitions in the PFST for $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{0}}\right)$, which allows us to compute $q\left( {{\mathbf{x}}_{t},{\mathbf{x}}_{t - 1},{\mathbf{a}}_{0 \rightarrow t} \mid {\mathbf{x}}_{0}}\right)$ and $q\left( {{\mathbf{x}}_{t - 1} \mid {\mathbf{x}}_{t},{\mathbf{x}}_{0},{\mathbf{a}}_{0 \rightarrow t}}\right)$ in closed form. Details on how to compute the necessary terms for our loss in Section 3.3 and our model parameterization in Section 3.2 are given in Appendix B.3 and B.4, respectively.

§ 4. EXPERIMENT: TOY SEQUENCE DATASETS

We start by exploring the expressive power of denoising insertion-deletion models on a toy dataset of increasing and decreasing arithmetic sequences. We take a multinomial diffusion corruption process (Hoogeboom et al., 2021) and augment it with varying probabilities of insertion and deletion. As shown in Table 1, adding a moderate amount of insertion and deletion in the forward process leads to better log-likelihoods and also produces generated sequences that have fewer deviations from being a valid arithmetic sequence. Figure 1 visualizes an example generated sequence from the 0.6 insert/delete rate model. See Appendix C.2 for experiment details.

§ 5.
EXPERIMENT: TEXT GENERATION

We also investigate training a 32-step multinomial-diffusion-based model augmented with insertion and deletion on the character-level language dataset text8 (Mahoney, 2011). Although insert/delete models have slightly worse log-likelihood bounds on this dataset (see Table 2 in Appendix C), the samples are still high quality, and the models show qualitative differences in the generative process: they can correct spelling errors, insert spaces between words, and make other human-like edits. In Fig. 4 we show a generated sentence from an insert-delete model, and also show that it can be used to "spellcheck" a badly typed human-written sentence without being trained on this task, by simply treating the sentence as ${\mathbf{x}}_{10}$ and sampling from ${p}_{\theta }\left( {{\mathbf{x}}_{0} \mid {\mathbf{x}}_{10}}\right)$. The insert-delete model generates imperfect but intuitive suggestions, whereas an in-place model generates nonsense due to misalignment issues. See Appendix C.3 for experiment details.

§ 6. DISCUSSION

In this work we have opened up the class of denoising-based generative models to more flexible processes that include insert and delete operations in addition to the commonly used replacement operation. While we have motivated these insert-delete diffusion-like models from the perspective of text generation, this class of models could be useful for several other applications, such as image super-resolution (by inserting and deleting pixel rows and columns), video generation (by inserting and deleting video frames), and molecular structure generation (by editing SMILES representations (Weininger, 1988)). We are also excited about the potential for incorporating other types of global structure and semantically meaningful edits (such as duplication or reordering) into corruption processes as a strategy for improving denoising-based generative models.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/fEPhiuZS9TV/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/fEPhiuZS9TV/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..376459a716e60df4d454cc1404b0363e6d8dd960 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/fEPhiuZS9TV/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,549 @@ +# Distilling the Knowledge from Normalizing Flows + +Anonymous Authors ${}^{1}$ + +## Abstract + +Normalizing flows are a powerful class of generative models demonstrating strong performance in several speech and vision problems. In contrast to other generative models, normalizing flows have tractable likelihoods and allow for stable training. However, they have to be carefully designed to be bijective functions with efficient Jacobian calculation. In practice, these requirements lead to over-parameterized and sophisticated architectures that are inferior to alternative feed-forward models in terms of inference time and memory consumption. In this work, we investigate whether one can distill knowledge from flow-based models to more efficient alternatives. We provide a positive answer to this question by proposing a simple distillation approach and demonstrating its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis. + +## 1. Introduction + +Normalizing flows (NF) (Dinh et al., 2015; Rezende & Mohamed, 2015) are a class of generative models that construct complex probability distributions by applying a series of invertible transformations to a simple base density (typically multivariate Gaussian). Such a design allows for exact likelihood computation via change-of-variables formula. Therefore, NF can be straightforwardly trained via likelihood maximization. 
This NF property is appealing for practitioners since alternative GAN-based models require adversarial optimization, which suffers from vanishing gradients, mode collapse, and oscillating or cyclic behavior (Goodfellow, 2016).

To enable exact likelihood computation, NF architectures must be composed of invertible modules that also support the efficient calculation of their Jacobian determinant. A large number of such modules have been recently developed, including autoregressive, bipartite, linear and residual transformations (Huang et al., 2018; Kingma et al., 2016; Papamakarios et al., 2017; van den Berg et al., 2018; Durkan et al., 2019; Dinh et al., 2017; 2015; Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018; Chen et al., 2019). While some modules are significantly more efficient than others, normalizing flows are generally inferior to feed-forward counterparts (e.g., GANs) in terms of sampling time. In particular, autoregressive flows (Huang et al., 2018; Papamakarios et al., 2017) use a slow sequential generation procedure, while bipartite flows (Dinh et al., 2017; Kingma & Dhariwal, 2018) can require a large number of submodules with low expressive power. Moreover, invertibility limits the size of the inner representation, leading to impractically deep models. A more comprehensive discussion of the background and related work is deferred to Appendix A.

However, in many applications, one does not need explicit density estimation but requires efficient inference at deployment. This raises the question of whether one can relinquish the invertibility of normalizing flows to improve their runtime and memory consumption after training. In this work, we investigate whether this can be achieved through knowledge distillation. In particular, we describe how to distill knowledge from pretrained flow-based models into efficient architectures, which do not suffer from NF design limitations. Our code and models are available online.
${}^{1}$

We summarize our contributions as follows:

- We propose a plain training strategy and student design that allow for knowledge distillation from conditional normalizing flows to feed-forward models with streamlined inference and lower memory consumption. To the best of our knowledge, this is the first work that proposes NF distillation to feed-forward architectures.

- In our experiments, we empirically confirm the effectiveness of our method on the state-of-the-art flow-based models for super-resolution (SRFlow (Lugmayr et al., 2020a)) and speech synthesis (WaveGlow (Prenger et al., 2018)). We can achieve up to $\times 10$ speedups with no perceptible loss in quality.

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

https://www.dropbox.com/sh/bpdo3rwga3y4hw1/AACtVHTpaDoqFXLeaMb0-py1a?dl=0

---

![01963e4b-a253-780a-b475-1fc9919e6103_1_319_178_1116_365_0.jpg](images/01963e4b-a253-780a-b475-1fc9919e6103_1_319_178_1116_365_0.jpg)

Figure 1. Overall scheme of the proposed knowledge distillation approach. Knowledge is transferred from a flow-based teacher to a feed-forward student using reconstruction and feature losses. As a result, the student is no longer restricted to be a flow and hence can exploit more efficient architectures.

## 2. Method

This section formally describes the details of our distillation approach. We also provide a general guide on how one can design a student architecture for a given flow-based teacher. As illustrative examples, we use two state-of-the-art models from the speech and vision domains.

### 2.1.
Distillation procedure

A pretrained conditional flow-based model defines a deterministic bijective mapping from a noise sample $z \sim N\left( {0,{\sigma I}}\right)$ and contextual information $c$ to an output sample $x$. Our distillation approach approximates this mapping with a more efficient model in a supervised manner. The general scheme of our approach is presented in Figure 1.

### 2.2. Objective

The proposed distillation objective is a combination of reconstruction and feature losses:

$$
L = {L}_{\mathrm{rec}} + \alpha \cdot {L}_{\mathrm{feature}} \tag{1}
$$

where $\alpha$ is a hyperparameter balancing the loss terms.

For both applications, the reconstruction loss is the average ${L}_{1}$-norm between the student and teacher samples:

$$
{L}_{\mathrm{rec}} = {\begin{Vmatrix}{T}_{\theta }\left( z, c\right) - {S}_{\psi }\left( z, c\right) \end{Vmatrix}}_{1} \tag{2}
$$

where ${T}_{\theta }$ and ${S}_{\psi }$ denote the teacher and student models, respectively.

The feature loss for SR model distillation is a perceptual distance between generated images computed by LPIPS (Zhang et al., 2018b).

The feature loss for speech vocoder distillation is a multi-resolution STFT${}^{2}$ loss (van den Oord et al., 2018; Yamamoto et al., 2020; 2019), which is a sum of STFT losses with different parameters (i.e., FFT size, window size and frame shift). In more detail, a single STFT loss is the sum of spectral convergence and log STFT magnitude terms (Yamamoto et al., 2020).

### 2.3. Student Design

The proposed student design can be described by a general recipe: "just clone the teacher architecture and take advantage of no longer being a flow". In other words, our student models inherit a reduced teacher architecture and are freed from the following constraints:

Invertibility. In flow-based models, the size of the inner representations is limited by the dimensions of the input data vectors.
This restriction can result in much deeper and hence less efficient models. Since the student no longer has to be invertible, one can vary the representation dimensions arbitrarily. As a result, student architectures with far fewer blocks can achieve similar performance.

Tractable Jacobian determinants. Flow-based models usually have to rely on specific operations with easy-to-compute Jacobian determinants. Therefore, the student models might benefit from replacing these operations with more efficient and expressive ones.

Below, we describe the particular design options using the example of flow-based models with affine coupling layers (Dinh et al., 2017; Kingma & Dhariwal, 2018). A supporting illustration is provided in Figure 2.

a) The initial flow-based model with affine coupling layers, e.g., SRFlow or WaveGlow with fewer parameters.

b) One can consider adjusting the inner representation dimensions to increase the overall expressive power.

c) Affine coupling layers split the input $z$ into two halves ${z}_{a}$ and ${z}_{b}$. Then, ${z}_{b}$ is used to predict the affine transform parameters applied to ${z}_{a}$. This split operation is needed to make the Jacobian determinant easily computable. In the student model, one can remove the split operation and transform the entire input vector.

---

${}^{2}$ STFT stands for the Short-Time Fourier Transform

---

![01963e4b-a253-780a-b475-1fc9919e6103_2_269_179_1194_413_0.jpg](images/01963e4b-a253-780a-b475-1fc9919e6103_2_269_179_1194_413_0.jpg)

Figure 2. Student design options on the example of affine coupling layers. a) Affine coupling block; b) the same but with increased inner representation dimensions; c) affine block without partition operations; d) the affine block is substituted by a feed-forward module.
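The contrast between option a) and options c)/d) can be made concrete with a small numpy sketch; the toy dimensions and weight matrices below are illustrative assumptions, not the paper's actual layers. The coupling block is exactly invertible with a triangular Jacobian, while the student block transforms the whole vector and may use an arbitrary hidden width.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                   # toy input dimension (assumption)
W_s = rng.normal(size=(D // 2, D // 2)) # toy conditioner weights (assumption)
W_t = rng.normal(size=(D // 2, D // 2))

def coupling_forward(z):
    """Option a): z_b predicts scale/shift applied to z_a. The Jacobian is
    triangular, so log|det| is simply the sum of the log-scales."""
    z_a, z_b = z[:D // 2], z[D // 2:]
    log_s, t = np.tanh(W_s @ z_b), W_t @ z_b   # tanh keeps scales bounded
    return np.concatenate([z_a * np.exp(log_s) + t, z_b]), log_s.sum()

def coupling_inverse(x):
    """Exact inverse: z_b passes through unchanged, so the same scale/shift
    can be recomputed and undone."""
    x_a, z_b = x[:D // 2], x[D // 2:]
    log_s, t = np.tanh(W_s @ z_b), W_t @ z_b
    return np.concatenate([(x_a - t) * np.exp(-log_s), z_b])

def student_block(z, width=32):
    """Options c)/d): no split, the whole vector is transformed. The block is
    not invertible, and the hidden width is decoupled from the input size."""
    rng_b = np.random.default_rng(1)
    W1 = rng_b.normal(size=(width, D)) / np.sqrt(D)
    W2 = rng_b.normal(size=(D, width)) / np.sqrt(width)
    return W2 @ np.tanh(W1 @ z)

z = rng.normal(size=D)
x, logdet = coupling_forward(z)
assert np.allclose(coupling_inverse(x), z)   # coupling is exactly invertible
```

Removing the split frees the student to use a hidden width much larger than the input dimension, at the cost of invertibility and a tractable Jacobian determinant, which distillation does not need.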
d) Finally, it might be helpful to substitute the entire affine block with a feed-forward module commonly used for the corresponding task.

In our SRFlow and WaveGlow students, we adopt options b) and d). Specifically, we tune the inner representation size and replace the flow steps with stacked RRDB (Wang et al., 2018) and WaveNet blocks (Oord et al., 2016), respectively. Detailed descriptions of the student architectures are provided in Appendix B. A student design ablation on the example of WaveGlow is presented in Appendix F.

## 3. Experiments

In this section, we compare the learned students against the corresponding teachers and other relevant baselines from the literature.

### 3.1. Super-resolution

For super-resolution, we compare the student to the following set of models:

- ESRGAN (Wang et al., 2018), RankSRGAN (Mittal et al., 2013) - recent GAN-based super-resolution models. ESRGAN is trained with a combination of reconstruction, perceptual and adversarial losses. RankSRGAN uses an additional ranker network to optimize non-differentiable perceptual metrics.

- RRDB (Wang et al., 2018) - the same as ESRGAN but trained only with the ${L}_{1}$ objective.

- SRFlow (Lugmayr et al., 2020a) - the flow-based teacher model. The model is optimized for efficient inference by precomputing inversions for $1 \times 1$ invertible convolutions and fusing actnorm layers.

- SRFlow Student - the proposed student model designed according to Section 2.3.

Datasets. The evaluation is performed on the DIV2K dataset (Agustsson & Timofte, 2017) - one of the established benchmarks for single image super-resolution. In addition to the DIV2K train set, the Flickr2K dataset (Lim et al., 2017) is also used for training. We consider $\times 4$ and $\times 8$ scaling factors between LR and HR images.

Evaluation Metrics. 
As the primary measure of perceptual performance, we report the established LPIPS (Zhang et al., 2018b), which has been shown to correlate with human judgment (Lugmayr et al., 2020b). We also report the standard fidelity-oriented metrics, Peak Signal-to-Noise Ratio (PSNR) and the structural similarity index (SSIM) (Wang et al., 2004). In addition, we evaluate consistency with the LR image by reporting LR-PSNR, computed as the PSNR between the downsampled SR image and the original LR image.

To evaluate diversity, we follow (Mao et al., 2019) and measure the average pairwise LPIPS distance between the samples generated for a particular LR image. Larger pairwise distances indicate higher diversity.

Evaluation results. We report metrics, inference times and parameter counts in Table 1. For most baseline models, we reproduce the results from publicly released checkpoints. Runtimes are measured on a single Tesla V100 on a single validation sample in half precision. The SRFlow student matches the teacher's metric values while being $\times {2.1}$ and $\times {4.9}$ faster for the $\times 4$ and $\times 8$ scaling factors, respectively.

We also provide qualitative results for the $\times 4$ and $\times 8$ scaling factors in the appendix; see Figure 4 and Figure 5, respectively. Inference for the teacher and student models is performed with the same input noise vectors $z$. We observe that the student produces samples of quality similar to the teacher's.

Note that the student preserves the teacher's diversity, which indicates that our distillation procedure is potentially able to produce student models with an exploitable latent space (Lugmayr et al., 2020a).

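The diversity score described above, the average pairwise LPIPS distance between samples drawn for a single LR image, can be sketched as follows; a plain L2 distance stands in for LPIPS here, which is an assumption made purely for illustration.

```python
import itertools
import numpy as np

def diversity(samples, dist=lambda a, b: float(np.linalg.norm(a - b))):
    """Mean pairwise distance between SR samples generated for one LR input
    (following Mao et al., 2019); higher values indicate more diverse outputs.
    The paper uses LPIPS as `dist`; plain L2 is a stand-in here."""
    pairs = list(itertools.combinations(samples, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Identical samples give zero diversity; distinct samples give a positive score.
same = [np.ones(4)] * 3
mixed = [np.zeros(4), np.ones(4), 2 * np.ones(4)]
assert diversity(same) == 0.0
assert diversity(mixed) > 0.0
```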
| Models ($\times 4$) | PSNR ↑ | SSIM ↑ | LPIPS ↓ | LR-PSNR ↑ | Param. (M) | Time (ms) | Diversity ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RRDB | 29.43 | 0.84 | 0.253 | 49.04 | 16.70 | 303 ± 1 | - |
| RankSRGAN | 26.55 | 0.75 | 0.132 | 42.35 | 1.554 | 40 ± 1 | - |
| ESRGAN | 26.63 | 0.76 | 0.115 | 42.39 | 16.70 | 303 ± 1 | - |
| SRFlow | 27.07 | 0.76 | 0.120 | 49.75 | 39.54 | 866 ± 2 | 0.068 |
| SRFlow Student | 27.32 | 0.77 | 0.120 | 49.52 | 20.39 | **420 ± 1** | 0.064 |
+ +
| Models ($\times 8$) | PSNR ↑ | SSIM ↑ | LPIPS ↓ | LR-PSNR ↑ | Param. (M) | Time (ms) | Diversity ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RRDB | 25.54 | 0.70 | 0.418 | 44.98 | 16.74 | 110 ± 2 | - |
| RankSRGAN | - | - | - | - | - | - | - |
| ESRGAN* | 22.18 | 0.58 | 0.277 | 31.35 | 16.74 | 110 ± 2 | - |
| SRFlow | 23.01 | 0.57 | 0.271 | 50.20 | 50.84 | 690 ± 5 | 0.167 |
| SRFlow Student | 23.30 | 0.58 | 0.269 | 48.36 | 16.16 | 140 ± 2 | 0.159 |
Table 1. Evaluation metrics on the DIV2K dataset. LPIPS is considered the primary measure of perceptual quality. The student models provide similar metrics while being $\times {2.1}$ and $\times {4.9}$ faster for the $\times 4$ and $\times 8$ scaling factors, respectively. (*) denotes that metrics are taken from (Lugmayr et al., 2020a).

### 3.2. Speech Synthesis

Here, we consider the following set of models for speech synthesis evaluation:

- WaveGlow (Prenger et al., 2018) - the flow-based teacher model. For efficient inference, we remove weight norms (Salimans & Kingma, 2016) and precompute inversions for convolutional layers.

- NanoFlow (Lee et al., 2020) - a recent flow-based vocoder that produces samples of quality comparable to WaveGlow but has about $\times {30}$ fewer parameters.

- MelGAN (Kumar et al., 2019), Parallel WaveGAN (Yamamoto et al., 2020), HiFi-GAN (Kong et al., 2020) - recent GAN-based vocoders. While MelGAN and Parallel WaveGAN are inferior to WaveGlow, HiFi-GAN represents the current state of the art.

- WG Student - the student model designed according to Section 2.3. For evaluation, we consider three configurations with $4/4/2$ WaveNet blocks of ${128}/{96}/{96}$ hidden channels and denote them as V1/V2/V3, respectively.

Datasets. All experiments are performed on the LJ Speech dataset (Ito & Johnson, 2017), one of the most common benchmarks in speech synthesis. We use a sampling rate of 22.05 kHz and produce mel-spectrograms of the original audio according to (Prenger et al., 2018). While Prenger et al. (2018) originally provide train/validation/test splits for the LJ Speech dataset, other methods are not always consistent with them. Therefore, for a fair comparison of the pretrained models, we collect a novel evaluation set and provide its details in Appendix E.

Evaluation results. 
We compare vocoders in the setting where models are conditioned on ground-truth mel-spectrograms, as in previous works (Prenger et al., 2018; Kim et al., 2019). For all baseline models, we use officially released pretrained checkpoints. As the primary metric for evaluating generated audio, we report the Mean Opinion Score
| Models | MOS | Param. (M) | Speed (MHz) |
| --- | --- | --- | --- |
| Ground-truth | 4.50 ± 0.06 | - | - |
| NanoFlow | 3.67 ± 0.09 | 2.82 | 0.369 |
| WaveGlow | **3.92 ± 0.08** | 87.73 | 1.26 |
| WG Student V1 | 3.93 ± 0.09 | 9.53 | 9.36 |
| WG Student V2 | **3.89 ± 0.09** | 6.35 | 12.21 |
| WG Student V3 | 3.62 ± 0.09 | 3.18 | 23.62 |
| Parallel WaveGAN | 3.84 ± 0.08 | 1.44 | 2.76 |
| MelGAN | 3.58 ± 0.08 | 4.27 | 28.40 |
| HiFi-GAN | 4.10 ± 0.08 | 1.46 | 54.13 |
Table 2. Evaluation results on the LJ Speech dataset. WG Student V1 and V2 demonstrate speech quality similar to the WaveGlow teacher while being $\times {7.4}$ and $\times {9.7}$ faster, respectively.

(MOS) with corresponding 95% confidence intervals (CI) in Table 2. In Appendix D, we describe the detailed protocol used for MOS evaluation. The inference speed is reported in MHz, which stands for ${10}^{6}$ audio samples per second. The runtimes are measured on a single Tesla V100 in half precision with sufficiently large batch sizes and sequence lengths to suppress non-model-related overheads.

The V1 and V2 students demonstrate speech quality similar to the teacher while providing $\times {7.4}$ and $\times {9.7}$ faster speech generation, respectively. Moreover, these students have $\times {9.2}$ and $\times {13.8}$ fewer parameters than the teacher.

## 4. Conclusion

In this work, we address the high computational cost of normalizing flows and explain how one can increase efficiency by giving up invertibility and tractable Jacobians, which are often unnecessary for deployed models. In particular, we describe an effective knowledge distillation method from a flow-based teacher to more lightweight student architectures. We empirically demonstrate that the models distilled from normalizing flows offer an appealing combination of the simplicity, stability and efficiency needed for typical production pipelines.

## References

Agustsson, E. and Timofte, R. Ntire 2017 challenge on single image super-resolution: Dataset and study. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.

Arik, S. Ö., Chrzanowski, M., Coates, A., Diamos, G., Gibiansky, A., Kang, Y., Li, X., Miller, J., Ng, A., Raiman, J., Sengupta, S., and Shoeybi, M. Deep voice: Real-time neural text-to-speech. In Precup, D. and Teh, Y. W. 
(eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 195-204, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR.

Behrmann, J., Grathwohl, W., Chen, R. T. Q., Duvenaud, D., and Jacobsen, J.-H. Invertible residual networks. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 573-582, Long Beach, California, USA, 09-15 Jun 2019. PMLR.

Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. Wavegrad: Estimating gradients for waveform generation. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=NsMLjcFa080.

Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. Neural ordinary differential equations. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 6571-6583. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf.

Chen, R. T. Q., Behrmann, J., Duvenaud, D., and Jacobsen, J. Residual flows for invertible generative modeling. In Advances in Neural Information Processing Systems, 2019.

Dinh, L., Krueger, D., and Bengio, Y. Nice: Non-linear independent components estimation. CoRR, abs/1410.8516, 2015.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real NVP. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.

Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. Neural spline flows. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 
7511-7522. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/7ac71d433f282034e088473244df8c02-Paper.pdf.

Goodfellow, I. NIPS 2016 tutorial: Generative adversarial networks. Advances in Neural Information Processing Systems, 2016.

Grathwohl, W., Chen, R. T. Q., Bettencourt, J., and Duvenaud, D. Scalable reversible generative models with free-form continuous dynamics. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJxgknCcK7.

Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. URL https://arxiv.org/abs/1503.02531v1.

Ho, J., Chen, X., Srinivas, A., Duan, Y., and Abbeel, P. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In International Conference on Machine Learning, pp. 2722-2730. PMLR, 2019.

Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2078-2087, Stockholmsmässan, Stockholm Sweden, 10-15 Jul 2018. PMLR.

Ito, K. and Johnson, L. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017.

Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., van den Oord, A., Dieleman, S., and Kavukcuoglu, K. Efficient neural audio synthesis. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2410-2419, Stockholmsmässan, Stockholm Sweden, 10-15 Jul 2018. PMLR.

Karami, M., Schuurmans, D., Sohl-Dickstein, J., Dinh, L., and Duckworth, D. Invertible convolutional flow. In NeurIPS, 2019.

Kim, S., Lee, S.-G., Song, J., Kim, J., and Yoon, S. FloWaveNet: A generative flow for raw audio. 
In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 3370-3378. PMLR, 09-15 Jun 2019. URL http://proceedings.mlr.press/v97/kim19b.html.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1 \times 1$ convolutions. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems. Curran Associates, Inc., 2018.

Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improved variational inference with inverse autoregressive flow. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 4743-4751. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/ddeebdeefdb7e7e7a697e1c3e3d8ef54-Paper.pdf.

Kong, J., Kim, J., and Bae, J. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 17022-17033. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/c5d736809766d46260d816d8dbc9eb44-Paper.pdf.

Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. Diffwave: A versatile diffusion model for audio synthesis. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=a-xFK8Ymz5J.

Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, W. Z., Sotelo, J., de Brébisson, A., Bengio, Y., and Courville, A. C. Melgan: Generative adversarial networks for conditional waveform synthesis. 
In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 14910-14921. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/6804c9bca0a615bdb9374d00a9fcba59-Paper.pdf.

Ledig, C., Theis, L., Huszár, F., Caballero, J., Aitken, A., Tejani, A., Totz, J., Wang, Z., and Shi, W. Photo-realistic single image super-resolution using a generative adversarial network. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 105-114, 2017.

Lee, S., Kim, S., and Yoon, S. Nanoflow: Scalable normalizing flows with sublinear parameter complexity, 2020.

Lim, B., Son, S., Kim, H., Nah, S., and Lee, K. M. Enhanced deep residual networks for single image super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.

Lugmayr, A., Danelljan, M., Gool, L. V., and Timofte, R. Srflow: Learning the super-resolution space with normalizing flow, 2020a.

Lugmayr, A., Danelljan, M., and Timofte, R. Ntire 2020 challenge on real-world image super-resolution: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2020b.

Mao, Q., Lee, H.-Y., Tseng, H.-Y., Ma, S., and Yang, M.-H. Mode seeking generative adversarial networks for diverse image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1429-1437, 2019.

Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., Sotelo, J., Courville, A., and Bengio, Y. Samplernn: An unconditional end-to-end neural audio generation model, 2016. URL http://arxiv.org/abs/1612.07837.

Mittal, A., Soundararajan, R., and Bovik, A. C. Making a "completely blind" image quality analyzer. IEEE Signal Processing Letters, 20(3):209-212, 2013. doi: 10.1109/LSP.2012.2227726.

Oord, A. v. 
d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. Wavenet: A generative model for raw audio. 2016.

Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive flow for density estimation. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30, pp. 2338-2347. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/6c1da886822c67822bcf3679d04369fa-Paper.pdf.

Ping, W., Peng, K., Gibiansky, A., Arik, S. O., Kannan, A., Narang, S., Raiman, J., and Miller, J. Deep voice 3: 2000-speaker neural text-to-speech. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HJtEm4p6Z.

Ping, W., Peng, K., and Chen, J. Clarinet: Parallel wave generation in end-to-end text-to-speech. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HklY120cYm.

Ping, W., Peng, K., Zhao, K., and Song, Z. WaveFlow: A compact flow-based model for raw audio. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 7706-7716, Virtual, 2020. PMLR.

Prenger, R., Valle, R., and Catanzaro, B. Waveglow: A flow-based generative network for speech synthesis, 2018.

Ren, Y., Ruan, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., and Liu, T.-Y. Fastspeech: Fast, robust and controllable text to speech. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 3171-3180. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/f63f65b503e22cb970527f23c9ad7db1-Paper.pdf.

Ren, Y., Hu, C., Tan, X., Qin, T., Zhao, S., Zhao, Z., and Liu, T.-Y. 
Fastspeech 2: Fast and high-quality end-to-end text to speech, 2020.

Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1530-1538. PMLR, 07-09 Jul 2015.

Ribeiro, F. P., Florêncio, D., Zhang, C., and Seltzer, M. L. Crowdmos: An approach for crowdsourcing mean opinion score studies. 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2416-2419, 2011.

Romero, A., Ballas, N., Kahou, S., Chassang, A., Gatta, C., and Bengio, Y. Fitnets: Hints for thin deep nets. CoRR, abs/1412.6550, 2015.

Salimans, T. and Kingma, D. P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 901-909. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/ed265bc903a5a097f61d3ec064d96d2e-Paper.pdf.

Sau, B. and Balasubramanian, V. Deep model compression: Distilling knowledge from noisy teachers. 10 2016.

Serrà, J., Pascual, S., and Segura Perales, C. Blow: a single-scale hyperconditioned flow for non-parallel raw-audio voice conversion. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32, pp. 6793-6803. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/9426c311e76888b3b2368150cd05f362-Paper.pdf.

Shen, J., Pang, R., Weiss, R., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., Skerrv-Ryan, R., Saurous, R., Agiomvrgiannakis, Y., and Wu, Y. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. pp. 4779-4783, 04 2018. doi: 10.1109/ICASSP.2018.8461368. 
Smith, L. N. and Topin, N. Super-convergence: Very fast training of residual networks using large learning rates, 2018. URL https://openreview.net/forum?id=H1A5ztj3b.

Uria, B., Côté, M.-A., Gregor, K., Murray, I., and Larochelle, H. Neural autoregressive distribution estimation. The Journal of Machine Learning Research, 17(1):7184-7220, 2016.

van den Berg, R., Hasenclever, L., Tomczak, J., and Welling, M. Sylvester normalizing flows for variational inference. In proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2018.

van den Oord, A., Kalchbrenner, N., Espeholt, L., kavukcuoglu, k., Vinyals, O., and Graves, A. Conditional image generation with pixelcnn decoders. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29, pp. 4790-4798. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/b1301141feffabac455e1f90a7de2054-Paper.pdf.

van den Oord, A., Li, Y., Babuschkin, I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., van den Driessche, G., Lockhart, E., Cobo, L., Stimberg, F., Casagrande, N., Grewe, D., Noury, S., Dieleman, S., Elsen, E., Kalchbrenner, N., Zen, H., Graves, A., King, H., Walters, T., Belov, D., and Hassabis, D. Parallel WaveNet: Fast high-fidelity speech synthesis. In Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, Stockholmsmässan, Stockholm Sweden, 2018. PMLR.

Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., and Change Loy, C. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, September 2018.

Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. Trans. Img. Proc., 13(4):600-612, April 2004. ISSN 1057-7149. doi: 10.1109/TIP.2003.819861. 
Yamamoto, R., Song, E., and Kim, J.-M. Probability density distillation with generative adversarial networks for high-quality parallel waveform generation. In Proc. Interspeech 2019, pp. 699-703, 2019. doi: 10.21437/Interspeech.2019-1965. URL http://dx.doi.org/10.21437/Interspeech.2019-1965.

Yamamoto, R., Song, E., and Kim, J.-M. Parallel wavegan: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram, 2020.

Yang, G., Huang, X., Hao, Z., Liu, M.-Y., Belongie, S., and Hariharan, B. Pointflow: 3d point cloud generation with continuous normalizing flows. arXiv, 2019.

Zhang, K., Zuo, W., and Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3262-3271, 2018a.

Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018b.

Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., and Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV), pp. 286-301, 2018c.

Zhang, Y., Tian, Y., Kong, Y., Zhong, B., and Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2472-2481, 2018d.

## Appendices

## A. Background and Related work

This section briefly describes the basics of normalizing flows and their applications to the image super-resolution and speech synthesis tasks. Finally, we also provide a short overview of knowledge distillation techniques.

### A.1. Normalizing flows

Normalizing flows rely on the change-of-variables formula to compute the exact log-likelihood as a sum of Jacobian log-determinants. 
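Concretely, for a flow $f = {f}_{K} \circ \cdots \circ {f}_{1}$ mapping a data point $x$ to base noise $z = f\left( x\right)$ with base density ${p}_{Z}$ (generic notation, stated here for completeness):

$$
\log {p}_{X}\left( x\right) = \log {p}_{Z}\left( {f\left( x\right) }\right) + \mathop{\sum }\limits_{{k = 1}}^{K}\log \left| {\det \frac{\partial {f}_{k}}{\partial {h}_{k - 1}}}\right|,
$$

where ${h}_{0} = x$ and ${h}_{k} = {f}_{k}\left( {h}_{k - 1}\right)$; each module must therefore be invertible and admit a cheap log-determinant.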
In general, computing the Jacobian determinant of a transformation with $N$-dimensional inputs and outputs has $O\left( {N}^{3}\right)$ computational complexity, which is intractable for large $N$ in typical deep learning applications. Therefore, one of the challenges is to design invertible architectures with efficient determinant calculation.

Currently, there are two main families of flow-based models with easy-to-compute Jacobian determinants: those based on autoregressive and those based on bipartite transformations. The autoregressive family includes the autoregressive flow (AF) (Papamakarios et al., 2017; Huang et al., 2018) and the inverse autoregressive flow (IAF) (Kingma et al., 2016). AFs resemble autoregressive models (Uria et al., 2016), which allow for parallel density estimation but perform sequential synthesis. In contrast, IAF offers parallel synthesis but sequential density estimation, making likelihood-based training very slow. The second family consists of bipartite transformations (Dinh et al., 2017; Kingma & Dhariwal, 2018), which provide both efficient likelihood-based training and parallel synthesis. However, bipartite flows are usually less expressive than autoregressive ones, and hence such models require a larger number of layers and parameters. Recent works develop more powerful and efficient normalizing flows. The authors of (Ping et al., 2020; Lee et al., 2020) combine the ideas of autoregressive and bipartite transformations. Others propose more expressive invertible layers and architectures (Ho et al., 2019; Durkan et al., 2019; Karami et al., 2019; Chen et al., 2019; Behrmann et al., 2019) or continuous transformations (Grathwohl et al., 2019; Chen et al., 2018).

At the moment, conditional NFs are gaining popularity in various practical speech and vision applications. 
In particular, several recent works (Prenger et al., 2018; Kim et al., 2019; Lugmayr et al., 2020a; Yang et al., 2019) exploit the ideas of Glow (Kingma & Dhariwal, 2018), Real-NVP (Dinh et al., 2017) and continuous flows (Chen et al., 2018) for waveform synthesis, image super-resolution, and point cloud generation. In this work, we focus on the state-of-the-art conditional flow-based models for image super-resolution (Lugmayr et al., 2020a) and speech synthesis (Prenger et al., 2018).

### A.2. Super-Resolution

Super-resolution (SR) is one of the fundamental image processing problems, which aims to improve the quality of low-resolution (LR) images by upscaling them to high-resolution (HR) ones with natural high-frequency details.

Single image super-resolution approaches tend to either discover the most efficient neural network architectures (Zhang et al., 2018d;c;a) or improve the generation of high-frequency details by introducing more sophisticated objectives (Ledig et al., 2017; Wang et al., 2018; Mittal et al., 2013). In contrast to these approaches, the SRFlow model (Lugmayr et al., 2020a) designs a flow-based architecture that is trained by likelihood maximization. As a result, SRFlow estimates a conditional distribution of natural HR images corresponding to a given LR image.

SRFlow is built upon the Glow (Kingma & Dhariwal, 2018) architecture, which gradually transforms a sample from the initial distribution $z \sim N\left( {0, I}\right)$ into the HR image. The model is conditioned on representations of LR images delivered by a deterministic encoder. The LR encoder is a popular feed-forward SR architecture based on Residual-in-Residual Dense Blocks (RRDB) (Wang et al., 2018).

### A.3. Speech Synthesis

The state-of-the-art performance for speech synthesis is also currently achieved by deep generative models. In typical speech synthesis pipelines, neural vocoders ${}^{3}$ play one of the most important roles. 
Usually, a neural vocoder synthesizes a time-domain waveform and is conditioned on mel-spectrograms from a text-to-spectrogram model (Arik et al., 2017; Ping et al., 2018; Ren et al., 2019; 2020; Shen et al., 2018). +

Long-standing state-of-the-art neural vocoders are autoregressive models (Oord et al., 2016; Kalchbrenner et al., 2018; Mehri et al., 2016) that provide impressive speech quality but suffer from a slow sequential generation process. Therefore, a wide line of work aims to speed up their inference. +

Flow-based models have been successfully applied for parallel waveform synthesis with fidelity comparable to autoregressive models (van den Oord et al., 2018; Prenger et al., 2018; Ping et al., 2020; Lee et al., 2020; Kim et al., 2019; Ping et al., 2019; Serrà et al., 2019). Among flow-based vocoders, WaveGlow (Prenger et al., 2018), WaveFlow (Ping et al., 2020), NanoFlow (Lee et al., 2020) and FloWaveNet (Kim et al., 2019) place a particular emphasis on efficiency. In this work, we specifically consider the WaveGlow model (Prenger et al., 2018), one of the current state-of-the-art flow-based vocoders. +

--- +

${}^{3}$ vocoder - a speech waveform synthesizer +

--- +

![01963e4b-a253-780a-b475-1fc9919e6103_9_191_184_1395_602_0.jpg](images/01963e4b-a253-780a-b475-1fc9919e6103_9_191_184_1395_602_0.jpg) +

Figure 3. Left: the WaveGlow student architecture is a sequence of WaveNet blocks (Oord et al., 2016) conditioned on upsampled mel-spectrograms. Right: the SRFlow student architecture consists of $L$ levels. At each level $l$, feature maps from the previous level are first combined with the corresponding LR encoding ${c}_{l}$ and noise vector ${z}_{l}$. Then, stacked RRDB blocks followed by an unsqueeze operation are applied.
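The sequential-versus-parallel distinction above can be sketched with a toy linear model (the AR(1) form and its coefficients are assumptions for illustration, not any actual vocoder): an autoregressive sampler needs a length-$T$ loop, while the same map, written as one lower-triangular transform of the whole noise vector, produces all samples in a single vectorized call, which is the structural advantage flow-based vocoders exploit.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000                    # number of waveform samples
z = rng.normal(size=T)      # base noise
a, b = 0.9, 0.1             # toy AR(1) coefficients (illustrative only)

# Autoregressive synthesis: each sample depends on the previous one,
# which forces a sequential loop of length T.
x_ar = np.zeros(T)
for t in range(1, T):
    x_ar[t] = a * x_ar[t - 1] + b * z[t]

# "Flow-style" parallel synthesis: the same process written as a single
# lower-triangular linear map applied to the entire noise vector at once.
idx = np.arange(T)
exponent = np.clip(idx[:, None] - idx[None, :], 0, None)
M = b * np.where(idx[:, None] >= idx[None, :], float(a) ** exponent, 0.0)
M[0, :] = 0.0  # x_0 is fixed to zero in the loop above
M[:, 0] = 0.0  # z_0 is never used by the loop above
x_par = M @ z

assert np.allclose(x_ar, x_par)
```

Real flows are nonlinear, but the same principle holds: the whole output is one application of an easily parallelized transform.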
+

While reasonably efficient, GAN-based vocoders (Kumar et al., 2019; Yamamoto et al., 2020) used to provide inferior speech fidelity compared to the state-of-the-art flow-based models. However, the recently proposed HiFi-GAN (Kong et al., 2020) efficiently delivers high-quality speech thanks to a training procedure with multi-scale (Kumar et al., 2019) and multi-period discriminators (Kong et al., 2020). +

An alternative line of work proposes vocoders based on the denoising diffusion framework (Kong et al., 2021; Chen et al., 2021). These models demonstrate promising speech quality but are not efficient enough for many usage scenarios. +

### A.4. Knowledge Distillation +

Knowledge distillation is one of the most popular compression and acceleration techniques for large models and ensembles of neural networks (Hinton et al., 2015; Sau & Balasubramanian, 2016; Romero et al., 2015). Specifically, the idea of knowledge distillation is to train a single efficient student on predictions produced by a computationally expensive teacher. +

The closest works to ours distill knowledge from an expensive autoregressive neural vocoder to normalizing flows with parallel inference. In particular, Parallel WaveNet (van den Oord et al., 2018) and ClariNet (Ping et al., 2019) transfer knowledge from a pretrained WaveNet (Oord et al., 2016) to an IAF (Kingma et al., 2016). As opposed to these works, we address the computational inefficiency of flow-based models themselves and propose to transfer knowledge from normalizing flows to feed-forward networks. +

## B. Student Architectures +

Below, we describe the architectures of the WaveGlow and SRFlow students in more detail and depict them in Figure 3 (Left) and (Right), respectively. +

### B.1. SRFlow Student +

LR Encoder. Both teacher and student models condition on the LR representations produced by the LR encoder.
The LR encoder is a feed-forward SR architecture based on 23 Residual-in-Residual Dense Blocks (RRDB) (Wang et al., 2018). In the original design, each RRDB block is composed of 3 Residual Dense Blocks (RDB), and each RDB consists of 5 convolutional layers. In our student models, we vary the number of RDB blocks as well as the number of convolutional layers within them. +

The LR encoder is unchanged for the $\times 4$ scaling factor. For the $\times 8$ scaling factor, each RRDB block is substituted by a single RDB block with 3 convolutional layers. +

Base Network. The base architecture consists of $L$ levels. At each level $l$, the model combines the corresponding LR representation ${c}_{l}$, the activations from the previous level ${h}_{l-1}$ and the noise vector ${z}_{l}$ via a Concat2d layer. At this step, one can vary the $h$ channel dimensions, according to Section 2.3 b). +

Then, instead of the sequence of flow steps in the teacher model, a sequence of RRDB blocks follows. For the $\times 4$ scaling factor, we use 6 RRDB blocks of 2 RDBs with 3 convolutional layers each. For the $\times 8$ scaling factor, a single original RRDB is used. +

Finally, similarly to the SRFlow teacher, a transition step and an unsqueeze operation are applied. The transition step is a Conv $1 \times 1$ followed by an Actnorm2d layer (Kingma & Dhariwal, 2018). The unsqueeze operation doubles the spatial resolution of the feature maps and reduces their channel dimension by a factor of 4. +

### B.2. WaveGlow Student +

The WaveGlow student is a sequence of conditional WaveNet blocks (Oord et al., 2016). We start and finish with Conv $1 \times 1$ layers to adjust the input channel dimension. Each WaveNet block is organized into 8 residual layers, each consisting of a dilated convolution followed by a gated activation unit (van den Oord et al., 2016) and a $1 \times 1$ convolution. The upsampled mel-spectrograms are added to the intermediate activations before the gated activation unit.
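The core of such a residual layer, a dilated convolution feeding a gated activation unit with additive mel conditioning, can be sketched in a few lines (toy channel counts and random weights are assumptions; real WaveNet blocks also include residual and skip projections, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, D = 4, 32, 2            # channels, time steps, dilation (toy sizes)
x = rng.normal(size=(C, T))   # activations entering a residual layer
m = rng.normal(size=(C, T))   # upsampled mel-spectrogram conditioning

# Toy weights for the filter and gate branches (kernel size 2, dilation D).
Wf = 0.3 * rng.normal(size=(2, C, C))
Wg = 0.3 * rng.normal(size=(2, C, C))

def dilated_conv(h, W, d):
    """Causal dilated conv, kernel size 2: out_t = W[0] @ h_t + W[1] @ h_{t-d}."""
    shifted = np.concatenate([np.zeros((C, d)), h[:, :-d]], axis=1)
    return W[0] @ h + W[1] @ shifted

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Gated activation unit: the conditioning m is added to the pre-activations
# of both branches before the tanh/sigmoid nonlinearities.
out = np.tanh(dilated_conv(x, Wf, D) + m) * sigmoid(dilated_conv(x, Wg, D) + m)
```

The tanh branch acts as a filter and the sigmoid branch as a per-element gate, so the output stays bounded regardless of the input scale.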
In contrast to the teacher, the student does not inject noise between the intermediate WaveNet blocks but obtains the entire $z$ at the very beginning. +

## C. Training Details +

SRFlow Students. The models are trained on patches of size ${128} \times {128}$ with the Adam optimizer (Kingma & Ba, 2014) at a learning rate of 2e-4 for 100 epochs. The learning rate is halved every 25 epochs. Batch sizes of 80 and 128 are used for the $\times 4$ and $\times 8$ scaling factors, respectively. The loss coefficient $\alpha$ is 10. The hyperparameters are tuned on a hold-out set, which is separate from the test one. +

WaveGlow Students. The models are optimized on short segments of 16000 samples with the Adam optimizer (Kingma & Ba, 2014) at a learning rate of 1e-4 for 200000 iterations. For faster convergence, we use the One Cycle learning rate schedule (Smith & Topin, 2018) with 5000 warm-up steps. The batch size is 256. The loss coefficient $\alpha$ is 1e-2 for most student settings. The hyperparameters are tuned on hold-out train samples, which do not participate in the final evaluation. The training is performed on $8 \times$ Tesla V100 GPUs. +

## D. Mean Opinion Score +

We crowd-sourced MOS tests with 400 English-speaking individuals per model. Each rater had to pass a training stage on 21 validation audio samples from various models, with golden results and corresponding clarifications. Then, they listened to 10 audio samples from a single model and rated their naturalness on a five-point scale. Each rater was asked to wear headphones and work in a quiet environment. To aggregate MOS results for each model, we follow the protocol from (Kumar et al., 2019). In Table 3, we provide the rating scale used for our MOS evaluation.
| Rating | Quality |
| --- | --- |
| 5 | Natural speech. Distortions are imperceptible. |
| 4 | Mostly natural speech. Distortions are slightly perceptible and rare. |
| 3 | Equally natural and unnatural speech. Distortions are perceptible and almost permanent. |
| 2 | Mostly unnatural speech. Distortions are annoying, but not objectionable. |
| 1 | Completely unnatural speech. Distortions are very objectionable. |
+

Table 3. The rating scale presented to 400 individuals for MOS evaluation. This rating scale was adapted from (Ribeiro et al., 2011) to better fit modern speech synthesis applications. +

## E. Evaluation Set for Speech Synthesis +

The collected set consists of 50 utterances trimmed from publicly available recordings by Linda Johnson from LibriVox. We carefully chose recordings similar to the original dataset and normalized the audio samples accordingly. This test set will be publicly available online for reproducibility purposes. +

## F. Student Design Ablation +

To identify the core change to the architecture that improves the student performance, we ablate the student design principles described in Section 2.3 on the speech synthesis task. In this experiment, we consider student models with 4 WaveNet blocks of 96 channels and evaluate the following configurations, in correspondence with Figure 2: +

a) Flow Student - a student corresponding to a smaller WaveGlow model, with all the NF restrictions mentioned in Section 2.3. +

b) Wide Flow Student - a flow student in which the inner representation channel dimension is increased from 8 to 96. Note that this model is no longer invertible. +

c) Affine Student - the same as a flow student but with the partition operations removed, see Figure 2c). This student is still invertible but has $O(N^{3})$ complexity for the Jacobian determinant calculation.
| Models | MOS | Param. (M) | Speed (MHz) |
| --- | --- | --- | --- |
| Flow Student | $3.42 \pm 0.09$ | 6.28 | 12.25 |
| Wide Flow Student | $3.93 \pm 0.08$ | 6.37 | 11.95 |
| Affine Student | $3.90 \pm 0.09$ | 6.28 | 12.32 |
| WG Student | $3.89 \pm 0.09$ | 6.35 | 12.21 |
+

Table 4. The comparison of the student design principles described in Section 2.3. All design options provide essentially the same performance, indicating that relaxing either the invertibility or the Jacobian constraint is sufficient to achieve significant improvements. +

d) WG Student - a student model in which the affine coupling layers are replaced with WaveNet blocks. +

According to Table 4, once any of the NF constraints is lifted, the corresponding student achieves significantly better speech quality than the flow student of the same capacity. +

![01963e4b-a253-780a-b475-1fc9919e6103_12_70_313_1536_1559_0.jpg](images/01963e4b-a253-780a-b475-1fc9919e6103_12_70_313_1536_1559_0.jpg) +

Figure 4. Qualitative results of the SRFlow teacher and student for the $\times 4$ scaling factor. The student model produces SR samples similar to the teacher's. +

![01963e4b-a253-780a-b475-1fc9919e6103_13_135_316_1472_1553_0.jpg](images/01963e4b-a253-780a-b475-1fc9919e6103_13_135_316_1472_1553_0.jpg) +

Figure 5. Qualitative results of the SRFlow teacher and student on the DIV2K dataset for the $\times 8$ scaling factor. On most samples, the student model demonstrates performance close to the teacher's.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/fEPhiuZS9TV/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/fEPhiuZS9TV/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..ff6897be71501f4ae6f22157fe4c860fc48358d0 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/fEPhiuZS9TV/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,246 @@ +§ DISTILLING THE KNOWLEDGE FROM NORMALIZING FLOWS +

Anonymous Authors ${}^{1}$ +

§ ABSTRACT +

Normalizing flows are a powerful class of generative models demonstrating strong performance in several speech and vision problems. In contrast to other generative models, normalizing flows have tractable likelihoods and allow for stable training. However, they have to be carefully designed to be bijective functions with efficient Jacobian calculation. In practice, these requirements lead to over-parameterized and sophisticated architectures that are inferior to alternative feed-forward models in terms of inference time and memory consumption. In this work, we investigate whether one can distill knowledge from flow-based models to more efficient alternatives. We provide a positive answer to this question by proposing a simple distillation approach and demonstrating its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis. +

§ 1. INTRODUCTION +

Normalizing flows (NF) (Dinh et al., 2015; Rezende & Mohamed, 2015) are a class of generative models that construct complex probability distributions by applying a series of invertible transformations to a simple base density (typically multivariate Gaussian).
Such a design allows for exact likelihood computation via the change-of-variables formula. Therefore, NFs can be trained straightforwardly via likelihood maximization. This NF property is appealing for practitioners, since alternative GAN-based models require adversarial optimization, which suffers from vanishing gradients, mode collapse, and oscillating or cyclic behavior (Goodfellow, 2016). +

To enable exact likelihood computation, NF architectures must be composed of invertible modules that also support the efficient calculation of their Jacobian determinant. A large number of such modules have been developed recently, including autoregressive, bipartite, linear and residual transformations (Huang et al., 2018; Kingma et al., 2016; Papamakarios et al., 2017; van den Berg et al., 2018; Durkan et al., 2019; Dinh et al., 2017; 2015; Rezende & Mohamed, 2015; Kingma & Dhariwal, 2018; Chen et al., 2019). While some modules are significantly more efficient than others, normalizing flows are generally inferior to feed-forward counterparts (e.g., GANs) in terms of sampling time. In particular, autoregressive flows (Huang et al., 2018; Papamakarios et al., 2017) use a slow sequential generation procedure, while bipartite flows (Dinh et al., 2017; Kingma & Dhariwal, 2018) can require many submodules of low expressive power. Moreover, invertibility limits the size of the inner representation, leading to impractically deep models. A more comprehensive discussion of the background and related work is deferred to Appendix A. +

However, in many applications, one does not need explicit density estimation but requires efficient inference at deployment. This raises the question of whether one can relinquish the invertibility of normalizing flows to improve their runtime and memory consumption after training. In this work, we investigate whether this can be achieved through the knowledge distillation approach.
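At its core, this kind of distillation regresses an efficient student onto samples of a frozen teacher. A minimal sketch with hypothetical linear stand-ins for both models and only an L1 reconstruction loss (the actual method, described in Section 2, distills deep networks and adds a feature loss):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical stand-ins: a frozen linear "teacher" T(z, c) mapping noise z and
# context c to a sample, and a "student" S(z, c) with trainable weights.
W_teacher = rng.normal(size=(dim, dim))   # frozen
W_student = np.zeros((dim, dim))          # to be distilled

teacher = lambda z, c: W_teacher @ z + c
student = lambda z, c: W_student @ z + c

def l1_loss(z, c):
    return np.abs(student(z, c) - teacher(z, c)).mean()

z_val, c_val = rng.normal(size=dim), rng.normal(size=dim)
loss_before = l1_loss(z_val, c_val)

# Distillation: sample noise/context pairs and regress the student onto the
# teacher with an L1 reconstruction loss (subgradient descent on W_student).
lr = 0.01
for _ in range(5000):
    z, c = rng.normal(size=dim), rng.normal(size=dim)
    diff = student(z, c) - teacher(z, c)
    W_student -= lr * np.outer(np.sign(diff), z) / dim  # d mean|diff| / dW

loss_after = l1_loss(z_val, c_val)
assert loss_after < loss_before
```

Note that nothing in the loop requires the student to be invertible or to have a tractable Jacobian; only the teacher's forward samples are used.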
In particular, we describe how to distill knowledge from pretrained flow-based models into efficient architectures that do not suffer from NF design limitations. Our code and models are available online. ${}^{1}$ +

We summarize our contributions as follows: +

* We propose a simple training strategy and student design that allow for knowledge distillation from conditional normalizing flows to feed-forward models with streamlined inference and lower memory consumption. To the best of our knowledge, this is the first work that proposes NF distillation to feed-forward architectures. +

* In our experiments, we empirically confirm the effectiveness of our method on the state-of-the-art flow-based models for super-resolution (SRFlow (Lugmayr et al., 2020a)) and speech synthesis (WaveGlow (Prenger et al., 2018)). We achieve up to $\times {10}$ speedups with no perceptible loss in quality. +

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . +

Preliminary work. Under review by INNF+ 2021. Do not distribute. +

https://www.dropbox.com/sh/bpdo3rwga3y4hw1/AACtVHTpaDoqFXLeaMb0-py1a?dl=0 +

<graphics> +

Figure 1. Overall scheme of the proposed knowledge distillation approach. Knowledge is transferred from a flow-based teacher to a feed-forward student using reconstruction and feature losses. As a result, the student is no longer restricted to be a flow and hence can exploit more efficient architectures. +

§ 2. METHOD +

This section formally describes the details of our distillation approach. We also provide a general guide on how to design a student architecture for a given flow-based teacher. As illustrative examples, we use two state-of-the-art models from the speech and vision domains. +

§ 2.1.
DISTILLATION PROCEDURE +

A pretrained conditional flow-based model defines a deterministic bijective mapping from a noise sample $z \sim \mathcal{N}(0, \sigma I)$ and contextual information $c$ to an output sample $x$. Our distillation approach approximates this mapping with a more efficient model in a supervised manner. The general scheme of our approach is presented in Figure 1. +

§ 2.2. OBJECTIVE +

The proposed distillation objective is a combination of reconstruction and feature losses: +

$$
L = {L}_{\text{rec}} + \alpha \cdot {L}_{\text{feature}} \tag{1}
$$

where $\alpha$ is a hyperparameter balancing the loss terms. +

For both applications, the reconstruction loss is the average ${L}_{1}$-norm between the student and teacher samples: +

$$
{L}_{\text{rec}} = {\begin{Vmatrix}{T}_{\theta }\left( z, c\right) - {S}_{\psi }\left( z, c\right) \end{Vmatrix}}_{1} \tag{2}
$$

where ${T}_{\theta }$ and ${S}_{\psi }$ denote the teacher and student models, respectively. +

The feature loss for SR model distillation is a perceptual distance between generated images computed by LPIPS (Zhang et al., 2018b). +

The feature loss for speech vocoder distillation is a multi-resolution STFT${}^{2}$ loss (van den Oord et al., 2018; Yamamoto et al., 2020; 2019), which is a sum of STFT losses with different parameters (i.e., FFT size, window size and frame shift). In more detail, a single STFT loss is the sum of a spectral convergence term and a log STFT magnitude term (Yamamoto et al., 2020). +

§ 2.3. STUDENT DESIGN +

The proposed student design can be described by a general recipe: "just clone the teacher architecture and take advantage of no longer being a flow". In other words, our student models inherit a reduced teacher architecture and are freed from the following constraints: +

Invertibility. In flow-based models, the size of the inner representations is limited by the dimensions of the input data vectors.
This restriction can result in much deeper and hence less efficient models. Since the student no longer has to be invertible, one can vary the representation dimensions arbitrarily. As a result, student architectures with far fewer blocks can achieve similar performance. +

Tractable Jacobian determinants. Flow-based models usually have to use specific operations with easy-to-compute Jacobian determinants. The student models can therefore benefit from replacing these operations with more efficient and expressive ones. +

Below, we describe the particular design options using the example of flow-based models with affine coupling layers (Dinh et al., 2017; Kingma & Dhariwal, 2018). A supporting illustration is provided in Figure 2. +

a) The initial flow-based model with affine coupling layers, e.g., SRFlow or WaveGlow with fewer parameters. +

b) One can adjust the inner representation dimensions to increase the overall expressive power. +

c) Affine coupling layers split the input $z$ into two halves ${z}_{a}$ and ${z}_{b}$. Then, ${z}_{b}$ is used to predict the parameters of the affine transform applied to ${z}_{a}$. This split operation is needed to make the Jacobian determinant easily computable. In the student model, one can remove the split operation and transform the entire input vector. +

${}^{2}$ STFT stands for the Short-Time Fourier Transform +

<graphics> +

Figure 2. Student design options on the example of affine coupling layers. a) Affine coupling block; b) the same but with increased inner representation dimensions; c) affine block without partition operations; d) the affine block is substituted by a feed-forward module. +

d) Finally, it might be helpful to substitute the entire affine block with a feed-forward module commonly used for the corresponding task. +

In our SRFlow and WaveGlow students, we adhere to options b) and d).
Specifically, we tune the inner representation size and replace the flow steps with stacked RRDB blocks (Wang et al., 2018) and WaveNet blocks (Oord et al., 2016), respectively. Detailed descriptions of the student architectures are provided in Appendix B. A student design ablation on the example of WaveGlow is presented in Appendix F. +

§ 3. EXPERIMENTS +

In this section, we compare the learned students against the corresponding teachers and other relevant baselines from the literature. +

§ 3.1. SUPER-RESOLUTION +

For super-resolution, we compare the student to the following set of models: +

* ESRGAN (Wang et al., 2018) and RankSRGAN (Mittal et al., 2013) - recent GAN-based super-resolution models. ESRGAN is trained with a combination of reconstruction, perceptual and adversarial losses. RankSRGAN uses an additional ranker network to optimize non-differentiable perceptual metrics. +

* RRDB (Wang et al., 2018) - the same as ESRGAN but trained only with an ${L}_{1}$ objective. +

* SRFlow (Lugmayr et al., 2020a) - the flow-based teacher model. The model is optimized for efficient inference by precomputing inversions for the $1 \times 1$ invertible convolutions and fusing the actnorm layers. +

* SRFlow Student - the proposed student model designed according to Section 2.3. +

Datasets. The evaluation is performed on the DIV2K dataset (Agustsson & Timofte, 2017), one of the established benchmarks for single image super-resolution. In addition to the DIV2K train set, the Flickr2K dataset (Lim et al., 2017) is also used for training. We consider $\times 4$ and $\times 8$ scaling factors between LR and HR images. +

Evaluation Metrics. As the primary measure of perceptual performance, we report the established LPIPS (Zhang et al., 2018b), which has been shown to correlate with human judgement (Lugmayr et al., 2020b). We also report the standard fidelity-oriented metrics, Peak Signal-to-Noise Ratio (PSNR) and the structural similarity index (SSIM) (Wang et al., 2004).
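For reference, PSNR is a simple function of the mean squared error. A short sketch (random stand-in images rather than DIV2K data; `max_val` assumes images scaled to $[0, 1]$):

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images in [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(0)
hr = rng.uniform(size=(32, 32, 3))                               # stand-in "HR" image
sr = np.clip(hr + rng.normal(scale=0.05, size=hr.shape), 0, 1)   # noisy "SR" output

score = psnr(sr, hr)  # around 26 dB for additive noise with std 0.05
```

The LR-PSNR reported below is the same quantity, computed after downsampling the SR output back to the LR resolution.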
In addition, we evaluate consistency with the LR image by reporting the LR-PSNR, computed as the PSNR between the downsampled SR image and the original LR image. +

To evaluate diversity, we follow Mao et al. (2019) and measure the average pairwise LPIPS distance between the generated samples for a particular LR image. Larger pairwise distances indicate higher diversity. +

Evaluation results. We report the metrics as well as the corresponding inference time and number of parameters in Table 1. For most baseline models, we reproduce the results from publicly released checkpoints. Runtimes are measured on a single Tesla V100 on a single validation sample in half precision. The SRFlow student achieves the same metric values as the teacher while being $\times {2.1}$ and $\times {4.9}$ faster for the $\times 4$ and $\times 8$ scaling factors, respectively. +

We also provide qualitative results for the $\times 4$ and $\times 8$ scaling factors in the appendix, see Figure 4 and Figure 5, respectively. Inference for the teacher and student models is performed with the same input noise vectors $z$. We observe that the student produces samples with quality similar to the teacher. +

Note that the student preserves the teacher diversity, which indicates that our distillation procedure is potentially able to produce student models with an exploitable latent space (Lugmayr et al., 2020a). +

| Models ($\times 4$) | PSNR ↑ | SSIM ↑ | LPIPS ↓ | LR-PSNR ↑ | Param. (M) | Time (ms) | Diversity ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RRDB | 29.43 | 0.84 | 0.253 | 49.04 | 16.70 | $303 \pm 1$ | - |
| RankSRGAN | 26.55 | 0.75 | 0.132 | 42.35 | 1.554 | $40 \pm 1$ | - |
| ESRGAN | 26.63 | 0.76 | 0.115 | 42.39 | 16.70 | $303 \pm 1$ | - |
| SRFlow | 27.07 | 0.76 | 0.120 | 49.75 | 39.54 | $866 \pm 2$ | 0.068 |
| SRFlow Student | 27.32 | 0.77 | 0.120 | 49.52 | 20.39 | $\mathbf{420 \pm 1}$ | 0.064 |

| Models ($\times 8$) | PSNR ↑ | SSIM ↑ | LPIPS ↓ | LR-PSNR ↑ | Param. (M) | Time (ms) | Diversity ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RRDB | 25.54 | 0.70 | 0.418 | 44.98 | 16.74 | $110 \pm 2$ | - |
| RankSRGAN | - | - | - | - | - | - | - |
| ESRGAN* | 22.18 | 0.58 | 0.277 | 31.35 | 16.74 | $110 \pm 2$ | - |
| SRFlow | 23.01 | 0.57 | 0.271 | 50.20 | 50.84 | $690 \pm 5$ | 0.167 |
| SRFlow Student | 23.30 | 0.58 | 0.269 | 48.36 | 16.16 | $140 \pm 2$ | 0.159 |

Table 1. Evaluation metrics on the DIV2K dataset. LPIPS is considered the primary measure of perceptual quality. The student models provide similar metrics while being $\times {2.1}$ and $\times {4.9}$ faster for the $\times 4$ and $\times 8$ scaling factors, respectively. (*) denotes metrics taken from (Lugmayr et al., 2020a). +

§ 3.2. SPEECH SYNTHESIS +

Here, we consider the following set of models for the speech synthesis evaluation: +

* WaveGlow (Prenger et al., 2018) - the flow-based teacher model. For efficient inference, we remove the weight norms (Salimans & Kingma, 2016) and precompute inversions for the convolutional layers. +

* NanoFlow (Lee et al., 2020) - a recent flow-based vocoder that produces samples of comparable quality to WaveGlow but has about $\times {30}$ fewer parameters. +

* MelGAN (Kumar et al., 2019), Parallel WaveGAN (Yamamoto et al., 2020) and HiFi-GAN (Kong et al., 2020) - recent GAN-based vocoders. While MelGAN and Parallel WaveGAN are inferior to WaveGlow, HiFi-GAN represents the current state of the art. +

* WG Student - a student model designed according to Section 2.3. For evaluation, we consider three configurations with $4/4/2$ WaveNet blocks of ${128}/{96}/{96}$ hidden channels and denote them V1/V2/V3, respectively. +

Datasets. All experiments are performed on the LJ Speech dataset (Ito & Johnson, 2017), one of the most common benchmarks in speech synthesis. We use a sampling rate of 22,050 Hz and produce mel-spectrograms of the original audio according to (Prenger et al., 2018). While Prenger et al.
(2018) originally provide train/validation/test splits for the LJ Speech dataset, other methods are not always consistent with them. Therefore, for a fair comparison of the pretrained models, we collect a novel evaluation set and provide its details in Appendix E. +

Evaluation results. We compare vocoders in the setting where the models are conditioned on ground-truth mel-spectrograms, as in previous works (Prenger et al., 2018; Kim et al., 2019). For all baselines, we use officially released pretrained models. As the primary metric for evaluating the generated audio, we report the Mean Opinion Score (MOS) with the corresponding 95% confidence intervals (CI) in Table 2. In Appendix D, we describe the detailed protocol used for the MOS evaluation. The inference speed is reported in MHz, i.e., ${10}^{6}$ audio samples per second. The runtimes are measured on a single Tesla V100 in half precision with sufficiently large batch sizes and sequence lengths to suppress non-model-related overheads. +

| Models | MOS | Param. (M) | Speed (MHz) |
| --- | --- | --- | --- |
| Ground-truth | $4.50 \pm 0.06$ | - | - |
| NanoFlow | $3.67 \pm 0.09$ | 2.82 | 0.369 |
| WaveGlow | $\mathbf{3.92 \pm 0.08}$ | 87.73 | 1.26 |
| WG Student V1 | $3.93 \pm 0.09$ | 9.53 | 9.36 |
| WG Student V2 | $\mathbf{3.89 \pm 0.09}$ | 6.35 | 12.21 |
| WG Student V3 | $3.62 \pm 0.09$ | 3.18 | 23.62 |
| Parallel WaveGAN | $3.84 \pm 0.08$ | 1.44 | 2.76 |
| MelGAN | $3.58 \pm 0.08$ | 4.27 | 28.40 |
| HiFi-GAN | $4.10 \pm 0.08$ | 1.46 | 54.13 |

Table 2. Evaluation results on the LJ Speech dataset. WG Student V1 and V2 demonstrate speech quality similar to the WaveGlow teacher while being $\times {7.4}$ and $\times {9.7}$ faster, respectively. +

The V1 and V2 students demonstrate speech quality similar to the teacher and provide $\times {7.4}$ and $\times {9.7}$ faster speech generation, respectively. Moreover, these students have $\times {9.2}$ and $\times {13.8}$ fewer parameters than the teacher. +

§ 4.
CONCLUSION +

In this work, we address the high computational cost of normalizing flows and explain how one can improve their efficiency by giving up invertibility and tractable Jacobians, which are often unnecessary for deployed models. In particular, we describe an effective method for distilling knowledge from a flow-based teacher into more lightweight student architectures. We empirically demonstrate that the models distilled from normalizing flows offer an appealing combination of the simplicity, stability and efficiency needed in typical production pipelines.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/gGJRwZmCFm4/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/gGJRwZmCFm4/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..2f07d7a7e621f094f9c8fe3bf9a7c760cf97cb7b --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/gGJRwZmCFm4/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,789 @@ +## Equivariant Manifold Flows +

Anonymous Authors ${}^{1}$ +

## Abstract +

Tractably modelling distributions over manifolds has long been an important goal in the natural sciences. Recent work has focused on developing general machine learning models to learn such distributions. However, for many applications these distributions must respect manifold symmetries, a trait which most previous models disregard. In this paper, we lay the theoretical foundations for learning symmetry-invariant distributions on arbitrary manifolds via equivariant manifold flows. We demonstrate the utility of our approach by using it to learn gauge-invariant densities over ${SU}\left( n\right)$ in the context of quantum field theory. +

## 1.
Introduction +

Density learning over manifolds has a broad array of applications, ranging from quantum field theory in physics (Wirnsberger et al., 2020) to motion estimation in robotics (Feiten et al., 2013) to protein-structure prediction in computational biology (Hamelryck et al., 2006). Recent work (Lou et al., 2020; Mathieu & Nickel, 2020; Falorsi & Forré, 2020) has extended the powerful framework of continuous normalizing flows (Chen et al., 2018; Grathwohl et al., 2019) to the setting of Riemannian manifolds, lifting the utility of these models for learning complex probability distributions to a more general setting. +

Although these manifold normalizing flows were a considerable step forward, they are insufficient for many problems in the natural sciences. For example, coupled particle systems in physical chemistry (Köhler et al., 2020) and ${SU}\left( n\right)$ lattice gauge theories in theoretical physics (Boyda et al., 2020) require symmetries that are nontrivial to enforce. Typically, these symmetries are enforced in an ad hoc way using properties specific to the manifold (Boyda et al., 2020). In contrast, our paper presents a fully general way to learn flows that induce symmetry-invariant distributions. +

## 2. Related Work +

Normalizing Flows on Manifolds Normalizing flows on manifolds have received a considerable amount of attention, both in terms of manifold-specific and general constructions. Rezende et al. (2020) introduced constructions specific to tori and spheres, while Bose et al. (2020) introduced constructions for hyperbolic space. Following this work, Lou et al. (2020) and Mathieu & Nickel (2020) introduced a general construction by extending Neural ODEs (Chen et al., 2018) to the setting of Riemannian manifolds. +

Equivariant Machine Learning Recent work has incorporated equivariance into machine learning models for the purpose of modelling symmetries (Cohen & Welling, 2016; Kondor & Trivedi, 2018). In particular, Köhler et al.
(2020) introduced equivariant normalizing flows for Euclidean space and Boyda et al. (2020) introduced equivariant flows for ${SU}\left( n\right)$ via a manifold-specific construction. In contrast, the equivariant manifold flows in our paper are fully general and applicable to arbitrary Riemannian manifolds. + +## 3. Background + +In this section, we provide a terse overview of the necessary concepts for understanding our paper. For more detailed introductions, we refer the reader to Lee (2013) for Riemannian geometry and Kobyzev et al. (2020) for normalizing flows. + +### 3.1. Riemannian Geometry + +A Riemannian manifold $\left( {\mathcal{M}, h}\right)$ is an $n$ -dimensional manifold with a smooth collection of inner products ${\left( {h}_{x}\right) }_{x \in \mathcal{M}}$ for every tangent space ${T}_{x}\mathcal{M}$ . The Riemannian metric $h$ induces a distance ${d}_{h}$ on the manifold. + +A diffeomorphism $f : \mathcal{M} \rightarrow \mathcal{M}$ is called an isometry if $h\left( {{D}_{x}f\left( u\right) ,{D}_{x}f\left( v\right) }\right) = h\left( {u, v}\right)$ for all tangent vectors $u, v \in {T}_{x}\mathcal{M}$ where ${D}_{x}f$ is the differential of $f$ . Note that isometries preserve the manifold distance function. The collection of all isometries forms a group $G$ , which we call the isometry group of the manifold $\mathcal{M}$ . + +Riemannian metrics also allow for a natural analogue of gradients on ${\mathbb{R}}^{n}$ . For a function $f : \mathcal{M} \rightarrow \mathbb{R}$ , we define the Riemannian gradient ${\nabla }_{x}f$ to be the vector in ${T}_{x}\mathcal{M}$ such that $h\left( {{\nabla }_{x}f, v}\right) = {D}_{x}f\left( v\right)$ for $v \in {T}_{x}\mathcal{M}$ . + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +### 3.2. Normalizing Flows on Manifolds + +Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold. 
A normalizing flow on $\mathcal{M}$ is a diffeomorphism ${f}_{\theta } : \mathcal{M} \rightarrow \mathcal{M}$ (parametrized by $\theta$ ) that transforms a prior density $\rho$ into the model density ${\rho }_{{f}_{\theta }}$ . The model distribution can be computed via the change of variables equation: + +$$ +{\rho }_{{f}_{\theta }}\left( x\right) = \rho \left( {{f}_{\theta }^{-1}\left( x\right) }\right) \left| {\det \frac{\partial {f}_{\theta }^{-1}\left( x\right) }{\partial x}}\right| . +$$ + +### 3.3. Equivariance and Invariance + +Let $G$ be an isometry subgroup of $\mathcal{M}$ . We notate the action of an element $g \in G$ on $\mathcal{M}$ by the map ${R}_{g} : \mathcal{M} \rightarrow \mathcal{M}$ . + +Equivariant and Invariant Functions We say that a function $f : \mathcal{M} \rightarrow \mathcal{N}$ is equivariant if, for symmetries ${g}_{x} : \mathcal{M} \rightarrow \mathcal{M}$ and ${g}_{y} : \mathcal{N} \rightarrow \mathcal{N}$ , $f \circ {g}_{x} = {g}_{y} \circ f$ . We say a function $f : \mathcal{M} \rightarrow \mathcal{N}$ is invariant if $f \circ {g}_{x} = f$ . When $\mathcal{M}$ and $\mathcal{N}$ are manifolds, the symmetries ${g}_{x}$ and ${g}_{y}$ are isometries. + +Equivariant Vector Fields Let $X : \mathcal{M} \times \lbrack 0,\infty ) \rightarrow T\mathcal{M}$ , $X\left( {m, t}\right) \in {T}_{m}\mathcal{M}$ , be a time-dependent vector field on a manifold $\mathcal{M}$ with base point ${x}_{0} \in \mathcal{M}$ . $X$ is a $G$ -equivariant vector field if $\forall \left( {m, t}\right) \in \mathcal{M} \times \lbrack 0,\infty )$ , $X\left( {{R}_{g}m, t}\right) = \left( {{D}_{m}{R}_{g}}\right) X\left( {m, t}\right)$ . + +Equivariant Flows A flow $f$ on a manifold $\mathcal{M}$ is $G$ -equivariant if it commutes with actions from $G$ , i.e. 
we have ${R}_{g} \circ f = f \circ {R}_{g}.$ + +Invariance of Density For a group $G$ , a density $\rho$ on a manifold $\mathcal{M}$ is $G$ -invariant if, for all $g \in G$ and $x \in \mathcal{M}$ , $\rho \left( {{R}_{g}x}\right) = \rho \left( x\right)$ , where ${R}_{g}$ is the action of $g$ on $x$ . + +## 4. Invariant Densities from Equivariant Flows + +In this section, we describe a tractable way to learn a density over a manifold that obeys a symmetry given by an isometry subgroup $G$ . Directly parameterizing a density that obeys this symmetry is nontrivial. Hence we prove the following implications that yield a tractable solution to this problem: + +1. G-invariant potential $\Rightarrow$ G-equivariant vector field (Theorem 1). We show that given a $G$ -invariant potential function $f : \mathcal{M} \rightarrow \mathbb{R}$ , the vector field $\nabla f$ is $G$ -equivariant. + +2. G-equivariant vector field $\Rightarrow$ G-equivariant flow (Theorem 2). We show that a $G$ -equivariant vector field on $\mathcal{M}$ uniquely induces a $G$ -equivariant flow. + +3. G-equivariant flow $\Rightarrow$ G-invariant density (Theorem 3). We show that given a $G$ -invariant prior $\rho$ and a $G$ -equivariant flow ${f}_{\theta }$ , the flow density ${\rho }_{{f}_{\theta }}$ is $G$ -invariant. + +Hence, we can obtain a $G$ -invariant density from a $G$ -invariant potential. We claim that constructing a $G$ -invariant potential function on a manifold is far simpler than directly parameterizing a $G$ -invariant density or a $G$ -equivariant flow (an example construction will be given). We defer the proofs of all theorems to Appendix A. + +### 4.1. Equivariant Gradient of Potential Function + +We start by showing how to construct $G$ -equivariant vector fields from $G$ -invariant potential functions. 
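This construction can be checked numerically in the simplest possible setting. The sketch below (an illustration under our own assumptions, not code from the paper) takes $\mathcal{M} = \mathbb{R}^2$ with $G = SO(2)$ and verifies that the gradient of a rotation-invariant potential transforms equivariantly, i.e. $\nabla\Phi(Rx) = R\,\nabla\Phi(x)$:

```python
import numpy as np

# Minimal check of the claim on M = R^2 with G = SO(2):
# the gradient of a rotation-invariant potential is a
# rotation-equivariant vector field.

def potential(x):
    # Rotation-invariant potential: depends on x only through ||x||^2.
    r2 = np.dot(x, x)
    return np.sin(r2) + 0.5 * r2

def grad_potential(x, eps=1e-5):
    # Central finite differences for the Euclidean gradient.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (potential(x + e) - potential(x - e)) / (2 * eps)
    return g

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([0.3, -1.1])

lhs = grad_potential(R @ x)   # gradient at the rotated point
rhs = R @ grad_potential(x)   # pushforward of the gradient at x
assert np.allclose(lhs, rhs, atol=1e-4)
```

Theorem 1 establishes the same identity on a general Riemannian manifold, with the rotation matrix replaced by the differential of the isometry.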
To design an equivariant vector field $X$ , it is sufficient to set the vector field dynamics of $X$ as the gradient of some $G$ -invariant potential function $\Phi : \mathcal{M} \rightarrow \mathbb{R}$ : + +Theorem 1. Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold and $G$ be its group of isometries (or an isometry subgroup). If $\Phi : \mathcal{M} \rightarrow \mathbb{R}$ is a smooth $G$ -invariant function, then the following diagram commutes for any $g \in G$ : + +![01963e28-5820-7de4-83a0-aa949b9fe71c_1_1126_868_244_155_0.jpg](images/01963e28-5820-7de4-83a0-aa949b9fe71c_1_1126_868_244_155_0.jpg) + +or ${\nabla }_{{R}_{g}u}\Phi = {D}_{u}{R}_{g}\left( {{\nabla }_{u}\Phi }\right)$ . Hence $\nabla \Phi$ is a $G$ -equivariant vector field. This condition is also tight, in the sense that it holds only if $G$ is a group of isometries. + +### 4.2. Constructing Equivariant Manifold Flows from Equivariant Vector Fields + +To construct equivariant manifold flows, we use tools from manifold ordinary differential equations (ODEs) and continuous normalizing flows (CNFs). In particular, note the definition below: + +Manifold Continuous Normalizing Flows A manifold continuous normalizing flow with base point $z$ is a function $\gamma : \lbrack 0,\infty ) \rightarrow \mathcal{M}$ that satisfies the manifold ODE + +$$ +\frac{{d\gamma }\left( t\right) }{dt} = X\left( {\gamma \left( t\right) , t}\right) ,\quad \gamma \left( 0\right) = z. +$$ + +We define ${F}_{X, T} : \mathcal{M} \rightarrow \mathcal{M}, z \mapsto {F}_{X, T}\left( z\right)$ to map any base point $z \in \mathcal{M}$ to the value of the CNF starting at $z$ , evaluated at time $T$ . This function is known as the (vector field) flow of $X$ . Observe that there exists a natural correspondence between equivariant flows and equivariant vector fields, by the following theorem: + +Theorem 2. Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold, and $G$ be its isometry group (or one of its subgroups). 
Let $X$ be any time-dependent vector field on $\mathcal{M}$ , and ${F}_{X, T}$ be the flow of $X$ . Then $X$ is a $G$ -equivariant vector field if and only if ${F}_{X, T}$ is a $G$ -equivariant flow. + +### 4.3. Invariant Manifold Densities from Equivariant Flows + +We now show that $G$ -equivariant flows induce $G$ -invariant densities. Note that we require the group $G$ to be an isometry subgroup in order to control the density ${\rho }_{f}$ ; the following theorem does not hold for general diffeomorphism groups. + +Theorem 3. Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold, and $G$ be its isometry group (or one of its subgroups). If $\rho$ is a $G$ -invariant density on $\mathcal{M}$ , and $f$ is a $G$ -equivariant diffeomorphism, then ${\rho }_{f}\left( x\right)$ is also $G$ -invariant. + +### 4.4. Universality of Flows Generated by Invariant Potentials + +We prove that flows induced by invariant potentials suffice to learn any smooth invariant distribution over a closed manifold, as measured by Kullback-Leibler (KL) divergence. + +Theorem 4. Let $\left( {\mathcal{M}, h}\right)$ be a closed Riemannian manifold. Let $\rho ,\pi$ be smooth $G$ -invariant distributions over said manifold, and let ${D}_{KL}\left( {\rho \parallel \pi }\right)$ be the KL divergence between distributions $\rho$ and $\pi$ . If we choose a function $g : \mathcal{M} \rightarrow \mathbb{R}$ such that for $x \in \mathcal{M}$ , + +$$ +g\left( x\right) = \log \left( \frac{\pi \left( x\right) }{\rho \left( x\right) }\right) , +$$ + +then we have: + +$$ +\frac{\partial }{\partial t}{D}_{KL}\left( {\rho \parallel \pi }\right) = - {\int }_{\mathcal{M}}\rho \exp \left( g\right) \parallel \nabla g{\parallel }^{2}{dx} \leq 0. 
+$$ + +In particular, note that if the target distribution is $\pi$ and the current distribution is $\rho$ , and we take the potential from which the flow is obtained to be $g = \log \left( {\pi \left( x\right) /\rho \left( x\right) }\right)$ , then the KL divergence between $\pi$ and $\rho$ is monotonically decreasing by Theorem 4. + +## 5. Learning Invariant Densities with Equivariant Flows + +In this section, we discuss how the theory in Section 4 is applied. + +### 5.1. Equivariant Manifold Flow Model + +We assume that a $G$ -invariant potential function $f : \mathcal{M} \rightarrow \mathbb{R}$ is given. The equivariant flow model works by using automatic differentiation (Paszke et al., 2017) on $f$ to obtain $\nabla f$ , using this for the vector field, and integrating in a step-wise fashion over the manifold. Specifically, forward integration and change-in-density (divergence) computations utilize the Riemannian CNF (Mathieu & Nickel, 2020) framework. This flow model is used with a specific training procedure (see Section 5.3) to obtain a $G$ -invariant model density that approximates some target. + +### 5.2. Constructing Conjugation-invariant Potential Functions on ${SU}\left( n\right)$ + +For many applications in physics (specifically gauge theory and lattice quantum field theory), one works with the Lie group ${SU}\left( n\right)$ , the group of unitary matrices with determinant 1 (for details on the manifold structure of ${SU}\left( n\right)$ , see Appendix B). In particular, when modelling probability distributions on ${SU}\left( n\right)$ for lattice QFT, the desired distribution must be invariant under conjugation by ${SU}\left( n\right)$ (Boyda et al., 2020). Conjugation is an isometry on ${SU}\left( n\right)$ (see Appendix A.5), so we can model probability distributions invariant under this action with our developed theory. 
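The isometry claim for the conjugation action can be sanity-checked numerically. The NumPy sketch below (our own illustration; the helper functions are hypothetical, not the paper's code) verifies that conjugation by a random $g \in SU(3)$ preserves the inner product $\langle A, B\rangle = \operatorname{Re}\operatorname{tr}(A^{*}B)$ between tangent vectors, i.e. traceless skew-Hermitian matrices:

```python
import numpy as np

# Check that conjugation V -> g V g^{-1} preserves the inner product
# <A, B> = Re tr(A* B) between tangent vectors of SU(n), which are
# traceless skew-Hermitian matrices.

def random_su(n, rng):
    # Illustrative helper: QR of a complex Gaussian, phase-normalized
    # (Haar on U(n)), then rescaled to have determinant 1.
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1.0 / n)

def random_tangent(n, rng):
    # Traceless skew-Hermitian matrix: an element of the Lie algebra su(n).
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    a = (z - np.conj(z).T) / 2
    return a - np.trace(a) / n * np.eye(n)

def inner(a, b):
    return np.real(np.trace(np.conj(a).T @ b))

rng = np.random.default_rng(1)
n = 3
g = random_su(n, rng)
a, b = random_tangent(n, rng), random_tangent(n, rng)
# The pushforward of conjugation is conjugation itself (the action is linear).
ga, gb = g @ a @ np.conj(g).T, g @ b @ np.conj(g).T
assert abs(inner(ga, gb) - inner(a, b)) < 1e-10
```

Since $g^{-1} = g^{*}$ on $SU(n)$, the invariance follows from cyclicity of the trace; the check above confirms it to machine precision.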
+ +Invariant Potential Parameterization We want to produce a $G$ -invariant potential function $\Phi : {SU}\left( n\right) \rightarrow \mathbb{R}$ . Note that matrix conjugation preserves eigenvalues. Thus, for a function $\Phi : {SU}\left( n\right) \rightarrow \mathbb{R}$ to be invariant to matrix conjugation, it has to act on the eigenvalues of $x \in {SU}\left( n\right)$ as a multi-set. + +We can parameterize such potential functions $\Phi$ by the DeepSet network from Zaheer et al. (2017). DeepSet is a permutation invariant neural network that acts on the eigenvalues, so the mapping of $x \in {SU}\left( n\right)$ is $\Phi \left( x\right) =$ $\widehat{\Phi }\left( \left\{ {{\lambda }_{1}\left( x\right) ,\ldots ,{\lambda }_{n}\left( x\right) }\right\} \right)$ for some set function $\widehat{\Phi }$ . + +As a result of this design, the only variance in the learned distribution will be amongst non-similar matrices, while all similar matrices will be assigned the same density value. + +Prior Distributions For the prior distribution of the flow, we need a distribution that is invariant to conjugation by ${SU}\left( n\right)$ . We use the Haar measure, whose volume element is given for $x \in {SU}\left( n\right)$ as $\operatorname{Haar}\left( x\right) = \mathop{\prod }\limits_{{i < j}}{\left| {\lambda }_{i}\left( x\right) - {\lambda }_{j}\left( x\right) \right| }^{2}$ (Boyda et al., 2020). We can sample from and compute log probabilities for this distribution efficiently with standard matrix computations (Mezzadri, 2007). + +### 5.3. Training Equivariant Manifold Flows + +Learning to sample given an exact density is important in settings such as the one in Boyda et al. (2020), where we are given conjugation-invariant densities on ${SU}\left( n\right)$ for which we know the exact density function yet sampling well is nontrivial. 
We train our models by sampling from the Haar distribution on ${SU}\left( n\right)$ , computing the KL divergence between the model probabilities and target probabilities at these samples, and backpropagating from this KL loss. + +## 6. Experiments + +In this section, we learn densities on ${SU}\left( n\right)$ that are invariant to conjugation by ${SU}\left( n\right)$ , which is important for constructing flow-based samplers for ${SU}\left( n\right)$ lattice gauge theories in theoretical physics (Boyda et al., 2020). Our model outperforms the construction given in Boyda et al. (2020). An experiment on an additional manifold with a new symmetry is given in Appendix D. Moreover, we demonstrate in Appendix D that leveraging symmetries inherent to the manifold improves performance over general manifold flows (Lou et al., 2020). + +![01963e28-5820-7de4-83a0-aa949b9fe71c_3_248_218_580_761_0.jpg](images/01963e28-5820-7de4-83a0-aa949b9fe71c_3_248_218_580_761_0.jpg) + +(a) ${SU}\left( 2\right)$ learned densities from (Left) our model and (Right) the Boyda et al. (2020) model. The target densities are in orange, while model densities are in blue. The $x$-axis is $\theta$ for an eigenvalue ${e}^{i\theta }$ of a matrix in ${SU}\left( 2\right)$ (the second eigenvalue is determined as ${e}^{-{i\theta }}$ ). Our model has better behavior in low-density regions, and more smoothly captures the targets in high-density regions. + +![01963e28-5820-7de4-83a0-aa949b9fe71c_3_882_217_656_779_0.jpg](images/01963e28-5820-7de4-83a0-aa949b9fe71c_3_882_217_656_779_0.jpg) + +(b) ${SU}\left( 3\right)$ learned densities from (Middle) our model and (Right) the Boyda et al. (2020) model for different target densities (Left). 
The $x$-axis and $y$-axis are the angles ${\theta }_{1}$ and ${\theta }_{2}$ for eigenvalues ${e}^{i{\theta }_{1}}$ and ${e}^{i{\theta }_{2}}$ of a matrix in ${SU}\left( 3\right)$ (the third eigenvalue is determined as ${e}^{-i{\theta }_{1} - i{\theta }_{2}}$ ), and the probabilities correspond to colors on a logarithmic scale. Our model better captures the geometry of the target densities. + +Figure 1. Comparison of learned densities on (a) ${SU}\left( 2\right)$ and (b) ${SU}\left( 3\right)$ . All densities are normalized to have maximum value 1. + +For the sake of staying true to the application area, we follow the framework of Boyda et al. (2020) in learning densities on ${SU}\left( n\right)$ that are invariant to conjugation by ${SU}\left( n\right)$ . In particular, our goal is to learn a flow to model a target distribution so that we may efficiently sample from it. As mentioned above in Section 5.3, this setting follows the paradigm in which we are given exact density functions and learn how to sample. Our model is as described in Section 5; further training details are given in Appendix C.1. + +Figure 1a displays learned densities for our model and the model of Boyda et al. (2020) for three densities on ${SU}\left( 2\right)$ described in Appendix C.2.1. While both models match the target distributions well in high-density regions, our model exhibits a considerable improvement in lower-density regions, where the tails of our learned distribution decay faster. By contrast, the model of Boyda et al. (2020) seems to be unable to reduce mass near $\pm \pi$ , a possible consequence of their construction. Even in high-density regions, our model appears to vary smoothly, with fewer unnecessary bumps and curves compared to the densities of the model in Boyda et al. (2020). + +Figure 1b displays learned densities for our model and the model of Boyda et al. (2020) for three densities on ${SU}\left( 3\right)$ described in Appendix C.2.2. 
In this case, our models fit the target densities more accurately and better respect the geometry of the target distribution. Indeed, while the models of Boyda et al. (2020) are often sharp and have pointed corners, our models learn densities that vary smoothly and curve in ways that are representative of the target distributions. + +## 7. Conclusion + +In this work, we introduce equivariant manifold flows in a fully general context and provide the necessary theory to ensure a principled construction. We also demonstrate the efficacy of our approach in the context of learning a conjugation invariant density over ${SU}\left( n\right)$ , which is an important task for sampling ${SU}\left( n\right)$ lattice gauge theories in quantum field theory. + +## References + +Bose, J., Smofsky, A., Liao, R., Panangaden, P., and Hamilton, W. Latent variable modelling with hyperbolic normalizing flows. In Proceedings of the 37th International Conference on Machine Learning, pp. 1045-1055, 2020. + +Boyda, D., Kanwar, G., Racanière, S., Rezende, D. J., Albergo, M. S., Cranmer, K., Hackett, D. C., and Shanahan, P. E. Sampling using ${su}\left( n\right)$ gauge equivariant flows. arXiv preprint arXiv:2008.05456, 2020. + +Bump, D. Lie groups. Springer, 2004. + +Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, volume 31, pp. 6571-6583, 2018. + +Cohen, T. and Welling, M. Group equivariant convolutional networks. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2990-2999, 2016. + +Falorsi, L. and Forré, P. Neural ordinary differential equations on manifolds. arXiv preprint arXiv:2006.06663, 2020. + +Feiten, W., Lang, M., and Hirche, S. Rigid motion estimation using mixtures of projected gaussians. Proceedings of the 16th International Conference on Information Fusion, pp. 1465-1472, 2013. + +Gallier, J. and Quaintance, J. 
Differential geometry and Lie groups: A computational perspective. 2020. + +Grathwohl, W., Chen, R. T. Q., Bettencourt, J., and Duvenaud, D. Scalable reversible generative models with free-form continuous dynamics. In International Conference on Learning Representations, 2019. + +Hamelryck, T., Kent, J. T., and Krogh, A. Sampling realistic protein conformations using local structural bias. PLoS Computational Biology, 2(9), 2006. + +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. + +Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. + +Köhler, J., Klein, L., and Noé, F. Equivariant flows: Exact likelihood generative learning for symmetric densities. In Proceedings of the 37th International Conference on Machine Learning, pp. 5361-5370, 2020. + +Kondor, R. and Trivedi, S. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In Proceedings of the 35th International Conference on Machine Learning, pp. 2747-2755, 2018. + +Lee, J. M. Introduction to Smooth Manifolds. Graduate Texts in Mathematics. Springer New York, 2013. + +Lou, A., Lim, D., Katsman, I., Huang, L., Jiang, Q., Lim, S. N., and De Sa, C. M. Neural manifold ordinary differential equations. In Advances in Neural Information Processing Systems, volume 33, pp. 17548-17558, 2020. + +Mathieu, E. and Nickel, M. Riemannian continuous normalizing flows. In Advances in Neural Information Processing Systems, volume 33, pp. 2503-2515, 2020. + +Mezzadri, F. How to generate random matrices from the classical compact groups. Notices of the American Mathematical Society, 54:592-604, 2007. + +Nagano, Y., Yamaguchi, S., Fujita, Y., and Koyama, M. A wrapped normal distribution on hyperbolic space for gradient-based learning. 
In Proceedings of the 36th International Conference on Machine Learning, pp. 4693-4702, 2019. + +Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019. + +Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. Automatic differentiation in pytorch. In Neural Information Processing System Autodiff Workshop, 2017. + +Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703, 2019. + +Rezende, D. J., Papamakarios, G., Racanière, S., Albergo, M., Kanwar, G., Shanahan, P., and Cranmer, K. Normalizing flows on tori and spheres. In Proceedings of the 37th International Conference on Machine Learning, pp. 8083-8092, 2020. + +Skopek, O., Ganea, O.-E., and Bécigneul, G. Mixed-curvature variational autoencoders. arXiv preprint arXiv:1911.08411, 2019. + +Wirnsberger, P., Ballard, A. J., Papamakarios, G., Abercrombie, S., Racanière, S., Pritzel, A., Jimenez Rezende, D., and Blundell, C. Targeted free energy estimation via learned mappings. The Journal of Chemical Physics, 153 (14):144112, 2020. + +Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R. R., and Smola, A. J. Deep sets. In Advances in Neural Information Processing Systems, volume 30, pp. 3391-3401, 2017. + +## Appendix + +## A. Proof of Theorems + +In this section, we restate and prove the theorems in Section 4. These give the theoretical foundations that we use to build our models. + +### A.1. Proof of Theorem 1 + +Theorem 1. Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold and $G$ be its group of isometries (or an isometry subgroup). 
If $\Phi : \mathcal{M} \rightarrow \mathbb{R}$ is a smooth $G$ -invariant function, then the following diagram commutes for any $g \in G$ : or ${\nabla }_{{R}_{g}u}\Phi = {D}_{u}{R}_{g}\left( {{\nabla }_{u}\Phi }\right)$ . This condition is also tight, in the sense that it holds only if $G$ is a group of isometries. + +![01963e28-5820-7de4-83a0-aa949b9fe71c_6_756_613_243_154_0.jpg](images/01963e28-5820-7de4-83a0-aa949b9fe71c_6_756_613_243_154_0.jpg) + +Proof. We first recall the Riemannian gradient chain rule: + +$$ +{\nabla }_{u}\left( {\Phi \circ {R}_{g}}\right) = {\left( {D}_{u}{R}_{g}\right) }^{\top }\left( {{\nabla }_{{R}_{g}u}\Phi }\right) +$$ + +where ${\left( {D}_{u}{R}_{g}\right) }^{\top } : {T}_{{R}_{g}u}\mathcal{M} \rightarrow {T}_{u}\mathcal{M}$ is the "adjoint" given by + +$$ +h\left( {{D}_{u}{R}_{g}\left( v\right) , w}\right) = h\left( {v,{\left( {D}_{u}{R}_{g}\right) }^{\top }\left( w\right) }\right) . +$$ + +Since ${R}_{g}$ is an isometry, we also have + +$$ +h\left( {x, y}\right) = h\left( {{D}_{u}{R}_{g}\left( x\right) ,{D}_{u}{R}_{g}\left( y\right) }\right) . +$$ + +Combining the above two equations gives + +$$ +h\left( {x, y}\right) = h\left( {{D}_{u}{R}_{g}\left( x\right) ,{D}_{u}{R}_{g}\left( y\right) }\right) = h\left( {x,{\left( {D}_{u}{R}_{g}\right) }^{\top }\left( {{D}_{u}{R}_{g}\left( y\right) }\right) }\right) , +$$ + +which implies that for all $y$ , + +$$ +h\left( {x, y - {\left( {D}_{u}{R}_{g}\right) }^{\top }\left( {{D}_{u}{R}_{g}\left( y\right) }\right) }\right) = 0. +$$ + +Since $h$ is a Riemannian metric (even a pseudo-metric works due to non-degeneracy), we must have that ${\left( {D}_{u}{R}_{g}\right) }^{\top } \circ \left( {{D}_{u}{R}_{g}}\right) = I$ . + +To complete the proof, we recall that $\Phi = \Phi \circ {R}_{g}$ , and this combined with the chain rule gives + +$$ +{\nabla }_{u}\Phi = {\nabla }_{u}\left( {\Phi \circ {R}_{g}}\right) = {\left( {D}_{u}{R}_{g}\right) }^{\top }\left( {{\nabla }_{{R}_{g}u}\Phi }\right) . 
+$$ + +Now applying ${D}_{u}{R}_{g}$ on both sides gives + +$$ +{\nabla }_{{R}_{g}u}\Phi = {D}_{u}{R}_{g}{\nabla }_{u}\Phi +$$ + +which is exactly what we want to show. + +We see that this is an "only if" condition because we must necessarily get that the adjoint is the inverse, which implies that ${R}_{g}$ is an isometry. + +### A.2. Proof of Theorem 2 + +Theorem 2. Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold, and $G$ be its isometry group (or one of its subgroups). Let $X$ be any time-dependent vector field on $\mathcal{M}$ , and ${F}_{X, T}$ be the flow of $X$ . Then $X$ is a $G$ -equivariant vector field if and only if ${F}_{X, T}$ is a $G$ -equivariant flow for any $T \in \lbrack 0,\infty )$ . + +Proof. $G$ -equivariant $X \Rightarrow G$ -equivariant ${F}_{X, T}$ . We invoke the following lemma from Lee (2013, Corollary 9.14): + +Lemma 1. Let $F : \mathcal{M} \rightarrow \mathcal{N}$ be a diffeomorphism. If $X$ is a smooth vector field over $\mathcal{M}$ and $\theta$ is the flow of $X$ , then the flow of ${F}_{ * }X$ (here ${F}_{ * }$ is another notation for the differential of $F$ ) is ${\eta }_{t} = F \circ {\theta }_{t} \circ {F}^{-1}$ , with domain ${N}_{t} = F\left( {M}_{t}\right)$ for each $t \in \mathbb{R}$ . + +Examine ${R}_{g}$ and its action on $X$ . Since $X$ is $G$ -equivariant, we have for any $\left( {x, t}\right) \in \mathcal{M} \times \lbrack 0,\infty )$ , + +$$ +\left( {{\left( {R}_{g}\right) }_{ * }X}\right) \left( {x, t}\right) = \left( {{D}_{{R}_{g}^{-1}\left( x\right) }{R}_{g}}\right) X\left( {{R}_{g}^{-1}\left( x\right) , t}\right) = X\left( {{R}_{g} \circ {R}_{g}^{-1}\left( x\right) , t}\right) = X\left( {x, t}\right) +$$ + +so it follows that ${\left( {R}_{g}\right) }_{ * }X = X$ . Applying the lemma above, we get: + +$$ +{F}_{{\left( {R}_{g}\right) }_{ * }X, T} = {R}_{g} \circ {F}_{X, T} \circ {R}_{g}^{-1} +$$ + +and, by simplifying, we get that ${F}_{X, T} \circ {R}_{g} = {R}_{g} \circ {F}_{X, T}$ , as desired. 
$G$ -equivariant $X \Leftarrow G$ -equivariant ${F}_{X, T}$ . This direction follows from the chain rule. If ${F}_{X, t}$ is $G$ -equivariant for all $t$ , then we have: + +$$ +\left( {{D}_{m}{R}_{g}}\right) \left( {X\left( {{F}_{X, t}\left( m\right) , t}\right) }\right) = \left( {{D}_{m}{R}_{g}}\right) \left( {\frac{d}{dt}{F}_{X, t}\left( m\right) }\right) +$$ + +(definition) + +$$ += \frac{d}{dt}\left( {{R}_{g} \circ {F}_{X, t}}\right) \left( m\right) +$$ + +(chain rule) + +$$ += \frac{d}{dt}{F}_{X, t}\left( {{R}_{g}m}\right) +$$ + +(equivariance) + +$$ += X\left( {{R}_{g}\left( {{F}_{X, t}\left( m\right) }\right) , t}\right) +$$ + +(definition and flow equivariance) + +This concludes the proof of the backward direction. + +### A.3. Proof of Theorem 3 + +Theorem 3. Let $\left( {\mathcal{M}, h}\right)$ be a Riemannian manifold, and $G$ be its isometry group (or one of its subgroups). If $\rho$ is a $G$ -invariant density on $\mathcal{M}$ , and $f$ is a $G$ -equivariant diffeomorphism, then ${\rho }_{f}\left( x\right)$ is also $G$ -invariant. + +Proof. We wish to show ${\rho }_{f}\left( x\right)$ is also $G$ -invariant, i.e. ${\rho }_{f}\left( {{R}_{g}x}\right) = {\rho }_{f}\left( x\right)$ for all $g \in G, x \in \mathcal{M}$ . + +We first recall the definition of ${\rho }_{f}$ : + +$$ +{\rho }_{f}\left( x\right) = \rho \left( {{f}^{-1}\left( x\right) }\right) \left| {\det \frac{\partial {f}^{-1}\left( x\right) }{\partial x}}\right| = \rho \left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| . +$$ + +Since $f \in {C}^{1}\left( {\mathcal{M},\mathcal{M}}\right)$ is $G$ -equivariant, we have $f \circ {R}_{g} = {R}_{g} \circ f$ for any $g \in G$ . Also, since $\rho$ is $G$ -invariant, we have $\rho \circ {R}_{g} = \rho$ . 
Combining these properties, we see that: + +$$ +{\rho }_{f}\left( {{R}_{g}x}\right) = {\rho }_{f}\left( {{R}_{g}x}\right) \frac{\left| \det {J}_{{R}_{g}}\left( x\right) \right| }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } = \frac{{\rho }_{{R}_{{g}^{-1}} \circ f}\left( x\right) }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } +$$ + +(expanding definition of ${\rho }_{f}$ ) + +$$ += \frac{{\rho }_{f \circ {R}_{{g}^{-1}}}\left( x\right) }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } = \rho \left( {\left( {{R}_{g} \circ {f}^{-1}}\right) \left( x\right) }\right) \frac{\left| \det {J}_{{R}_{g} \circ {f}^{-1}}\left( x\right) \right| }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } +$$ + +(G-equivariance of $f$ ) + +$$ += \left( {\rho \circ {R}_{g} \circ {f}^{-1}}\right) \left( x\right) \frac{\left| \det {J}_{{R}_{g}}\left( {f}^{-1}\left( x\right) \right) {J}_{{f}^{-1}}\left( x\right) \right| }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } +$$ + +(expanding Jacobian) + +$$ += \left( {\rho \circ {f}^{-1}}\right) \left( x\right) \frac{\left| {\det {J}_{{R}_{g}}\left( {{f}^{-1}\left( x\right) }\right) }\right| \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } +$$ + +(G-invariance of $\rho$ ) + +$$ += \rho \left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| \cdot \frac{\left| \det {J}_{{R}_{g}}\left( {f}^{-1}\left( x\right) \right) \right| }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } +$$ + +(rearrangement) + +$$ += {\rho }_{f}\left( x\right) \cdot \frac{\left| \det {J}_{{R}_{g}}\left( {f}^{-1}\left( x\right) \right) \right| }{\left| \det {J}_{{R}_{g}}\left( x\right) \right| } +$$ + +(expanding definition of ${\rho }_{f}$ ) + +Now note that $G$ is contained in the isometry group, and thus ${R}_{g}$ is an isometry. 
This means $\left| {\det {J}_{{R}_{g}}\left( x\right) }\right| = 1$ for any $x \in \mathcal{M}$ , so the right-hand side above is simply ${\rho }_{f}\left( x\right)$ , which proves the theorem. + +### A.4. Proof of Theorem 4 + +Theorem 4. Let $\left( {\mathcal{M}, h}\right)$ be a closed Riemannian manifold. Let $\rho$ and $\pi$ be distributions over said manifold, and let ${D}_{KL}\left( {\rho \parallel \pi }\right)$ be the Kullback-Leibler divergence between distributions $\rho$ and $\pi$ . If we choose $g : \mathcal{M} \rightarrow \mathbb{R}$ such that: + +$$ +g\left( x\right) = \log \left( \frac{\pi \left( x\right) }{\rho \left( x\right) }\right) +$$ + +for $x \in \mathcal{M}$ , we have: + +$$ +\frac{\partial }{\partial t}{D}_{KL}\left( {\rho \parallel \pi }\right) = - \int \rho \exp \left( g\right) \parallel \nabla g{\parallel }^{2}{dx} +$$ + +Proof. We start by noting the following by the Fokker-Planck equation: + +$$ +\frac{\partial \rho }{\partial t} = \nabla \cdot \left( {\rho \nabla g}\right) . +$$ + +This gives: + +$$ +\frac{\partial }{\partial t}{D}_{KL}\left( {\rho \parallel \pi }\right) = \int \frac{\pi }{\rho }\frac{\partial \rho }{\partial t}{dx} +$$ + +$$ += \int \frac{\pi }{\rho }\nabla \cdot \left( {\rho \nabla g}\right) {dx} +$$ + +$$ += \int \left( {\nabla \cdot \left( {\frac{\pi }{\rho }\left( {\rho \nabla g}\right) }\right) - \left( {\rho \nabla g}\right) \cdot \nabla \frac{\pi }{\rho }}\right) {dx} +$$ + +$$ += \int \left( {\nabla \cdot \left( {\pi \nabla g}\right) - \left( {\rho \nabla g}\right) \cdot \nabla \frac{\pi }{\rho }}\right) {dx} +$$ + +$$ += - \int \left( {\rho \nabla g}\right) \cdot \nabla \frac{\pi }{\rho }{dx} +$$ + +where the final equality follows from the divergence theorem, since the integral of the divergence over a closed manifold is 0. Now if we choose $g$ such that: + +$$ +g\left( x\right) = \log \left( \frac{\pi \left( x\right) }{\rho \left( x\right) }\right) . 
+$$ + +Then we have: + +$$ +\frac{\partial }{\partial t}{D}_{KL}\left( {\rho \parallel \pi }\right) = - \int \left( {\rho \nabla g}\right) \cdot \nabla \exp \left( g\right) {dx} +$$ + +$$ += - \int \rho \exp \left( g\right) \parallel \nabla g{\parallel }^{2}{dx} +$$ + +as desired. + +### A.5. Conjugation by ${SU}\left( n\right)$ as an Isometry + +We now prove a lemma showing that conjugation by ${SU}\left( n\right)$ acts by isometries, i.e. forms an isometry subgroup. This implies that Theorems 1 through 3 above can be specialized to the setting of ${SU}\left( n\right)$ . + +Lemma 2. Let $G$ be the group action of conjugation by ${SU}\left( n\right)$ , and let each ${R}_{g}$ represent the corresponding action of conjugation by $g \in {SU}\left( n\right)$ . Then $G$ is an isometry subgroup. + +Proof. We first show that the matrix conjugation action of ${SU}\left( n\right)$ is unitary. For $R, X \in {SU}\left( n\right)$ , note that the action of conjugation is given by $\operatorname{vec}\left( {{RX}{R}^{-1}}\right) = \left( {{R}^{-T} \otimes R}\right) \operatorname{vec}\left( X\right)$ . We have that ${R}^{-T} \otimes R$ is unitary because: + +$$ +{\left( {R}^{-T} \otimes R\right) }^{ * }\left( {{R}^{-T} \otimes R}\right) +$$ + +$$ += \left( {\overline{{R}^{-1}} \otimes {R}^{ * }}\right) \left( {{R}^{-T} \otimes R}\right) +$$ + +(conjugate transposes distribute over $\otimes$ ) + +$$ += \left( {\overline{{R}^{-1}}{R}^{-T}}\right) \otimes \left( {{R}^{ * }R}\right) +$$ + +(mixed-product property of $\otimes$ ) + +$$ += \left( {{R}^{T}{R}^{-T}}\right) \otimes \left( I\right) = \left( I\right) \otimes \left( I\right) = {I}_{{n}^{2} \times {n}^{2}} +$$ + +(simplification) + +Now choose an orthonormal frame ${X}_{1},\ldots ,{X}_{m}$ of $T\mathcal{M}$ . Note that $T\mathcal{M}$ locally consists of ${SU}\left( n\right)$ shifts of the algebra, which itself consists of traceless skew-Hermitian matrices (Gallier & Quaintance, 2020). 
We show $G$ is an isometry subgroup by noting that when it acts on the frame, the resulting frame is orthonormal. Let $g \in G$, and consider the result of the action of $g$ on the frame, namely ${R}_{g}{X}_{1},\ldots ,{R}_{g}{X}_{m}$. Then we have:

$$
{\left( {R}_{g}{X}_{i}\right) }^{ * }\left( {R}_{g}{X}_{j}\right) = {X}_{i}^{ * }{R}_{g}^{ * }{R}_{g}{X}_{j} = {X}_{i}^{ * }{X}_{j}.
$$

Note that for $i \neq j$ we have ${X}_{i}^{ * }{X}_{j} = 0$, and for $i = j$ we see ${X}_{i}^{ * }{X}_{i} = 1$. Hence the resulting frame is orthonormal and $G$ is an isometry subgroup.

## B. Manifold Details for the Special Unitary Group ${SU}\left( n\right)$

In this section, we give a basic introduction to the special unitary group ${SU}\left( n\right)$ and its relevant properties.

Definition. The special unitary group ${SU}\left( n\right)$ consists of all $n$-by-$n$ unitary matrices $U$ (i.e. ${U}^{ * }U = U{U}^{ * } = I$ for ${U}^{ * }$ the conjugate transpose of $U$) that have determinant $\det \left( U\right) = 1$.

Note that ${SU}\left( n\right)$ is a smooth manifold; in particular, it has Lie group structure (Gallier & Quaintance, 2020). Moreover, the tangent space at the identity (i.e. the Lie algebra) consists of traceless skew-Hermitian matrices (Gallier & Quaintance, 2020). The Riemannian metric is given by $\langle A, B\rangle = \operatorname{tr}\left( {A}^{\top }B\right)$.

### B.1. Haar Measure on ${SU}\left( n\right)$

Haar Measure. Haar measures are generic constructions of measures on topological groups $G$ that are invariant under the group operation. For example, the Lie group $G = {SU}\left( n\right)$ has Haar measure ${\mu }_{H}$, defined as the unique normalized measure such that for any $U \in {SU}\left( n\right)$, we have

$$
{\mu }_{H}\left( {VU}\right) = {\mu }_{H}\left( {UW}\right) = {\mu }_{H}\left( U\right)
$$

for all $V, W \in {SU}\left( n\right)$, with ${\mu }_{H}\left( G\right) = 1$.
A topological group $G$ together with its unique Haar measure defines a probability space on the group. This gives one natural way of defining probability distributions on the group, explaining its importance in our construction of probability distributions on Lie groups, specifically ${SU}\left( n\right)$.

To make the above Haar measure definition more concrete, we note from Bump (2004, Proposition 18.4) that we can transform an integral over ${SU}\left( n\right)$ with respect to the Haar measure into an integral over the corresponding diagonal matrices under eigendecomposition:

$$
{\int }_{{SU}\left( n\right) }f\,d{\mu }_{H} = \frac{1}{n!}{\int }_{T}f\left( \operatorname{diag}\left( {\lambda }_{1},\ldots ,{\lambda }_{n}\right) \right) \mathop{\prod }\limits_{{i < j}}{\left| {\lambda }_{i} - {\lambda }_{j}\right| }^{2}\,{d\lambda }.
$$

Thus, we can think of the Haar measure as inducing the change of variables with volume element

$$
\operatorname{Haar}\left( x\right) = \mathop{\prod }\limits_{{i < j}}{\left| {\lambda }_{i}\left( x\right) - {\lambda }_{j}\left( x\right) \right| }^{2}.
$$

To sample uniformly from the Haar measure, we just need to ensure that we sample each $x \in {SU}\left( n\right)$ with probability proportional to $\operatorname{Haar}\left( x\right)$.

Sampling from the Haar Prior. We use Algorithm 1 (Mezzadri, 2007) to generate a sample uniformly from the Haar prior on ${SU}\left( n\right)$:

Algorithm 1 Sampling from the Haar Prior on ${SU}\left( n\right)$

---

Sample $Z \in {\mathbb{C}}^{n \times n}$ where each entry ${Z}_{ij} = {Z}_{ij}^{\left( 1\right) } + i{Z}_{ij}^{\left( 2\right) }$ for independent random variables ${Z}_{ij}^{\left( 1\right) },{Z}_{ij}^{\left( 2\right) } \sim \mathcal{N}\left( 0,1/2\right)$.

Let $Z = {QR}$ be the QR factorization of $Z$.
+ + Let $\Lambda = \operatorname{diag}\left( {\frac{{R}_{11}}{\left| {R}_{11}\right| },\ldots ,\frac{{R}_{nn}}{\left| {R}_{nn}\right| }}\right)$ . + + Output ${Q}^{\prime } = {Q\Lambda }$ as distributed with Haar measure. + +--- + +### B.2. Eigendecomposition on ${SU}\left( n\right)$ + +One main step in the invariant potential computation for ${SU}\left( n\right)$ is to derive formulas for the eigendecomposition of $U \in {SU}\left( n\right)$ as well as formulas for differentiation through the eigendecomposition (recall that we must differentiate the ${SU}\left( n\right)$ -invariant potential $f$ to get ${SU}\left( n\right)$ -equivariant vector field $\nabla f$ , as described in Section 5.2). This section first derives general formulas for how to do this for $U \in {SU}\left( n\right)$ . In practice, such general methods often introduce instability, and thus, for the oft-used special cases of $n = 2,3$ , we derive explicit formulas for the eigenvalues based on finding roots of the characteristic polynomials (given by root formulas for quadratic/cubic equations). + +#### B.2.1. DERIVATIONS FOR THE GENERAL CASE ${SU}\left( N\right)$ + +Here we reconstruct the steps of differentiation through eigendecomposition from Boyda et al. (2020, Appendix C) that allow efficient computation in our use-case. For our matrix-conjugation-invariant ${SU}\left( n\right)$ flow, we need only differentiate the eigenvalues with respect to the input $U \in {SU}\left( n\right)$ . + +For an input $U \in {SU}\left( n\right)$ , let its eigendecomposition be $U = {PD}{P}^{ * }$ , where $w = \operatorname{diag}\left( D\right) \in {\mathbb{C}}^{n}$ contains its eigenvalues, and $P = \left\lbrack \begin{array}{lll} {p}_{1} & \cdots & {p}_{n} \end{array}\right\rbrack \in {\mathbb{C}}^{n \times n}$ with ${p}_{i} \in {\mathbb{C}}^{n}$ as its eigenvectors. 
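The setup above is easy to exercise numerically. The NumPy sketch below (our own illustration, not code from the paper) draws a sample via the QR construction of Algorithm 1, adds an $n$-th-root-of-determinant correction (our assumption, not a step of Algorithm 1 as stated, used here to pin the determinant to 1), and forms the eigendecomposition $U = PD{P}^{ * }$:

```python
import numpy as np

def sample_haar_su_n(n, rng):
    """QR-based Haar sampling in the spirit of Algorithm 1, followed by an
    n-th-root-of-determinant correction (our assumption) to land in SU(n)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of R's diagonal
    return q / np.linalg.det(q) ** (1.0 / n)    # divide out an n-th root of det

rng = np.random.default_rng(0)
u = sample_haar_su_n(3, rng)
w, p = np.linalg.eig(u)                         # U = P diag(w) P^*

print(np.allclose(u @ u.conj().T, np.eye(3)))   # unitary
print(np.isclose(np.linalg.det(u), 1.0))        # determinant 1
print(np.allclose(np.abs(w), 1.0))              # eigenvalues on the unit circle
print(np.allclose(p @ np.diag(w) @ np.linalg.inv(p), u))  # reconstructs U
```

Since $U$ is unitary (hence normal), its eigenvalues lie on the unit circle and the eigendecomposition is numerically well behaved, which is what the checks above confirm.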
Let $L$ denote our loss function, and write the downstream gradients in row vector format: + +$$ +g = \left\lbrack \begin{array}{ll} \frac{\partial L}{\partial \operatorname{Re}w} & \frac{\partial L}{\partial \operatorname{Im}w} \end{array}\right\rbrack = \left\lbrack \begin{array}{ll} {g}^{\left( 1\right) } & {g}^{\left( 2\right) } \end{array}\right\rbrack . +$$ + +Then following similar steps as in Boyda et al. (2020), we can compute the gradient of $L$ with respect to the real and imaginary parts of $U$ as follows: + +$$ +\frac{\partial L}{\partial \operatorname{Re}U} = \mathop{\sum }\limits_{{i = 1}}^{n}{g}_{i}^{\left( 1\right) }\operatorname{Re}\left( {\overline{{p}_{i}}{p}_{i}^{\top }}\right) + \mathop{\sum }\limits_{{i = 1}}^{n}{g}_{i}^{\left( 2\right) }\operatorname{Im}\left( {\overline{{p}_{i}}{p}_{i}^{\top }}\right) +$$ + +$$ +\frac{\partial L}{\partial \operatorname{Im}U} = - \mathop{\sum }\limits_{{i = 1}}^{n}{g}_{i}^{\left( 1\right) }\operatorname{Im}\left( {\overline{{p}_{i}}{p}_{i}^{\top }}\right) + \mathop{\sum }\limits_{{i = 1}}^{n}{g}_{i}^{\left( 2\right) }\operatorname{Re}\left( {\overline{{p}_{i}}{p}_{i}^{\top }}\right) +$$ + +If we define + +$$ +{Q}^{\left( 1\right) } = \left\lbrack \begin{array}{lll} {g}_{1}^{\left( 1\right) }\overline{{p}_{1}} & \ldots & {g}_{n}^{\left( 1\right) }\overline{{p}_{n}} \end{array}\right\rbrack \;{Q}^{\left( 2\right) } = \left\lbrack \begin{array}{lll} {g}_{1}^{\left( 2\right) }\overline{{p}_{1}} & \ldots & {g}_{n}^{\left( 2\right) }\overline{{p}_{n}} \end{array}\right\rbrack +$$ + +Then we can write the gradients in terms of efficient matrix computations: + +$$ +\frac{\partial L}{\partial \operatorname{Re}U} = \operatorname{Re}\left( {{Q}^{\left( 1\right) }{P}^{\top }}\right) + \operatorname{Im}\left( {{Q}^{\left( 2\right) }{P}^{\top }}\right) +$$ + +$$ +\frac{\partial L}{\partial \operatorname{Im}U} = - \operatorname{Im}\left( {{Q}^{\left( 1\right) }{P}^{\top }}\right) + \operatorname{Re}\left( {{Q}^{\left( 
2\right) }{P}^{\top }}\right) . +$$ + +#### B.2.2. Explicit Formula for ${SU}\left( 2\right)$ + +We now derive an explicit eigenvalue formula for the $U \in {SU}\left( 2\right)$ case. Let us denote $U = \left\lbrack \begin{array}{ll} a + {bi} & - c + {di} \\ c + {di} & a - {bi} \end{array}\right\rbrack$ for $a, b, c, d \in \mathbb{R}$ such that ${a}^{2} + {b}^{2} + {c}^{2} + {d}^{2} = 1$ as an element of ${SU}\left( 2\right)$ ; then the characteristic polynomial of this matrix is given by + +$$ +\det \left( {{\lambda I} - U}\right) = \left( {\lambda - \left( {a + {bi}}\right) }\right) \left( {\lambda - \left( {a - {bi}}\right) }\right) + \left( {c + {di}}\right) \left( {c - {di}}\right) = {\left( a - \lambda \right) }^{2} + {b}^{2} + {c}^{2} + {d}^{2} = {\lambda }^{2} - {2a\lambda } + 1 +$$ + +and thus its eigenvalues are given by + +$$ +{\lambda }_{1} = a + i\sqrt{1 - {a}^{2}} = a + i\sqrt{{b}^{2} + {c}^{2} + {d}^{2}} +$$ + +$$ +{\lambda }_{2} = a - i\sqrt{1 - {a}^{2}} = a - i\sqrt{{b}^{2} + {c}^{2} + {d}^{2}} +$$ + +Remark. We note that there is a natural isomorphism $\phi : {S}^{3} \rightarrow {SU}\left( 2\right)$ , given by + +$$ +\phi \left( {a, b, c, d}\right) = \left\lbrack \begin{matrix} a + {bi} & - c + {di} \\ c + {di} & a - {bi} \end{matrix}\right\rbrack +$$ + +We can exploit this isomorphism by learning a flow over ${S}^{3}$ with a regular manifold flow like NMODE (Lou et al.,2020) and mapping it to a flow over ${SU}\left( 2\right)$ . This is also an acceptable way to obtain stable density learning over ${SU}\left( 2\right)$ . + +#### B.2.3. Explicit Formula for ${SU}\left( 3\right)$ + +We now derive an explicit eigenvalue formula for the $U \in {SU}\left( 3\right)$ case. 
For the case of $U \in {SU}\left( 3\right)$, we can compute the characteristic polynomial as

$$
\det \left( {\lambda I} - U\right) = \det \left( \left\lbrack \begin{matrix} \lambda - {U}_{11} & - {U}_{12} & - {U}_{13} \\ - {U}_{21} & \lambda - {U}_{22} & - {U}_{23} \\ - {U}_{31} & - {U}_{32} & \lambda - {U}_{33} \end{matrix}\right\rbrack \right)
$$

$$
= {\lambda }^{3} + {c}_{2}{\lambda }^{2} + {c}_{1}\lambda + {c}_{0}
$$

where

$$
{c}_{2} = - \left( {U}_{11} + {U}_{22} + {U}_{33}\right)
$$

$$
{c}_{1} = {U}_{11}{U}_{22} + {U}_{22}{U}_{33} + {U}_{33}{U}_{11} - {U}_{12}{U}_{21} - {U}_{23}{U}_{32} - {U}_{13}{U}_{31}
$$

$$
{c}_{0} = - \left( {U}_{12}{U}_{23}{U}_{31} + {U}_{13}{U}_{21}{U}_{32} + {U}_{11}{U}_{22}{U}_{33} - {U}_{12}{U}_{21}{U}_{33} - {U}_{13}{U}_{31}{U}_{22} - {U}_{23}{U}_{32}{U}_{11}\right)
$$

Now to solve the equation

$$
{\lambda }^{3} + {c}_{2}{\lambda }^{2} + {c}_{1}\lambda + {c}_{0} = 0,
$$

we first transform it into a depressed cubic

$$
{t}^{3} + {pt} + q = 0
$$

via the substitution

$$
t = \lambda + \frac{{c}_{2}}{3}
$$

$$
p = \frac{3{c}_{1} - {c}_{2}^{2}}{3}
$$

$$
q = \frac{2{c}_{2}^{3} - 9{c}_{2}{c}_{1} + {27}{c}_{0}}{27}
$$

Now from Cardano's formula, the roots of the depressed cubic are given by

$$
{t}_{1,2,3} = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{{q}^{2}}{4} + \frac{{p}^{3}}{27}}} + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{{q}^{2}}{4} + \frac{{p}^{3}}{27}}}
$$

where the two cube roots in the above equation are picked such that they multiply to $- \frac{p}{3}$ (each valid pairing of cube roots yields one of the three roots), and the eigenvalues are recovered as ${\lambda }_{i} = {t}_{i} - \frac{{c}_{2}}{3}$.

## C. Experimental Details for Learning Equivariant Flows on ${SU}\left( n\right)$

This section presents some additional details regarding the experiments that learn invariant densities on ${SU}\left( n\right)$ in Section 6.

### C.1. Training Details

Our DeepSet network (Zaheer et al., 2017) consists of a feature extractor and regressor.
The feature extractor is a 1-layer tanh network with 32 hidden channels. We concatenate the time component to the sum component of the feature extractor before feeding the resulting 33 size tensor into a 1-layer tanh regressor network. + +To train our flows, we minimize the KL divergence between our model distribution and the target distribution (Papamakarios et al.,2019), as is done in Boyda et al. (2020). In a training iteration, we draw a batch of samples uniformly from ${SU}\left( n\right)$ , map them through our flow, and compute the gradients with respect to the batch KL divergence between our model probabilities and the target density probabilities. We use the Adam stochastic optimizer for gradient-based optimization (Kingma & Ba, 2015). The graph shown in Figure 1 was trained for 300 iterations with a batch size of 8192 and weight decay setting of 0.01 ; the starting learning rate for Adam was 0.01 , and a multi-step learning rate schedule that decreased the learning rate by a factor of 10 every 100 epochs was used. We use PyTorch to implement our models and run experiments (Paszke et al., 2019). Experiments are run on one CPU and/or GPU at a time, where we use one NVIDIA RTX 2080Ti GPU with 11 GB of GPU RAM. + +### C.2. Conjugation-Invariant Target Distributions + +Boyda et al. (2020) defined a family of matrix-conjugation-invariant densities on ${SU}\left( n\right)$ as: + +$$ +{p}_{\text{toy }}\left( U\right) = \frac{1}{Z}{e}^{\frac{\beta }{n}\operatorname{Re}\operatorname{tr}\left( {\mathop{\sum }\limits_{k}{c}_{k}{U}^{k}}\right) }, +$$ + +which is parameterized by scalars ${c}_{k}$ and $\beta$ . The normalizing constant $Z$ is chosen to ensure that ${p}_{toy}$ is a valid probability density with respect to the Haar measure. + +More specifically, the experiments of Boyda et al. 
(2020) focus on learning to sample from the distribution with the above density with three components, in the following form: + +$$ +{p}_{\text{toy }}\left( U\right) = \frac{1}{Z}{e}^{\frac{\beta }{n}\operatorname{Re}\operatorname{tr}\left( {{c}_{1}U + {c}_{2}{U}^{2} + {c}_{3}{U}^{3}}\right) } +$$ + +We tested on three instances of the density, also used in Boyda et al. (2020): + +
| set $i$ | ${c}_{1}$ | ${c}_{2}$ | ${c}_{3}$ | $\beta$ |
|---|---|---|---|---|
| 1 | 0.98 | -0.63 | -0.21 | 9 |
| 2 | 0.17 | -0.65 | 1.22 | 9 |
| 3 | 1 | 0 | 0 | 9 |
Table 1. Sets of parameters ${c}_{1},{c}_{2},{c}_{3}$ and $\beta$ used in the ${SU}\left( 2\right)$ and ${SU}\left( 3\right)$ experiments.

Note that the rows of Figure 1 correspond to coefficient sets 3, 2, 1, given in order from top to bottom.

#### C.2.1. Case for ${SU}\left( 2\right)$

In the case of $n = 2$, we can represent the eigenvalues of a matrix $U \in {SU}\left( 2\right)$ in the form ${e}^{i\theta },{e}^{-{i\theta }}$ for some angle $\theta \in \left\lbrack 0,\pi \right\rbrack$. We then have $\operatorname{tr}\left( U\right) = {e}^{i\theta } + {e}^{-{i\theta }} = 2\cos \left( \theta \right)$, so the above density takes the form:

$$
{p}_{\text{toy }}\left( U\right) = \frac{1}{Z}{e}^{{c}_{1}\beta \cos \theta } \cdot {e}^{{c}_{2}\beta \cos \left( {2\theta }\right) } \cdot {e}^{{c}_{3}\beta \cos \left( {3\theta }\right) }.
$$

#### C.2.2. Case for ${SU}\left( 3\right)$

In the case of $n = 3$, we can represent the eigenvalues of $U \in {SU}\left( 3\right)$ in the form ${e}^{i{\theta }_{1}},{e}^{i{\theta }_{2}},{e}^{i\left( -{\theta }_{1} - {\theta }_{2}\right) }$. Thus, we have

$$
\operatorname{Re}\operatorname{tr}\left( U\right) = \cos \left( {\theta }_{1}\right) + \cos \left( {\theta }_{2}\right) + \cos \left( -{\theta }_{1} - {\theta }_{2}\right)
$$

and thus

$$
{p}_{\text{toy }}\left( U\right) = \frac{1}{Z}{e}^{\frac{{c}_{1}\beta }{3}\left( \cos \left( {\theta }_{1}\right) + \cos \left( {\theta }_{2}\right) + \cos \left( -{\theta }_{1} - {\theta }_{2}\right) \right) }
$$

$$
\cdot {e}^{\frac{{c}_{2}\beta }{3}\left( \cos \left( 2{\theta }_{1}\right) + \cos \left( 2{\theta }_{2}\right) + \cos \left( -2{\theta }_{1} - 2{\theta }_{2}\right) \right) }
$$

$$
\cdot {e}^{\frac{{c}_{3}\beta }{3}\left( \cos \left( 3{\theta }_{1}\right) + \cos \left( 3{\theta }_{2}\right) + \cos \left( -3{\theta }_{1} - 3{\theta }_{2}\right) \right) }
$$

## D. Sphere Isotropy Experiments

In this section, we illustrate the generality of the framework in the paper by presenting an additional invariant potential construction on a different manifold: the $n$-sphere ${S}^{n}$. Moreover, to demonstrate the need for enforcing equivariance of flow models, we directly compare our flow construction with a general purpose flow while learning a density with an inherent symmetry. The densities we use for this purpose are sphere densities that are invariant to the action of the isotropy group. Our model is able to learn these densities much better than previous manifold ODE models that do not enforce equivariance of flows (Lou et al., 2020), thus showing the ability of our model to leverage the desired symmetries. In fact, even on simple isotropy-invariant densities, our model succeeds while the free model without equivariance fails.

Definition. The unit $n$-sphere ${S}^{n}$ can be thought of as an embedded manifold of ${\mathbb{R}}^{n + 1}$, given by

$$
{S}^{n} = \left\{ x \in {\mathbb{R}}^{n + 1} : {x}_{1}^{2} + \ldots + {x}_{n + 1}^{2} = 1\right\}
$$

The discussion below focuses on the special case of the unit sphere ${S}^{2}$ in 3 dimensions.

### D.1. Isotropy Invariance on ${S}^{2}$

Isotropy Group. The isotropy group for a point $v \in {S}^{2}$ is defined as the subgroup of the isometry group which fixes $v$, i.e. the set of rotations around an axis that passes through $v$. In practice, we let $v = \left( 0,0,1\right)$, so the isotropy group is the group of rotations of the ${xy}$-plane. An isotropy invariant density is invariant to such rotations, and hence looks like a horizontally-striped density on the sphere.

Invariant Potential Parameterization. We design an invariant potential by applying a neural network to the free parameter. In the case of our specific isotropy group listed above, the free parameter is the $z$-coordinate.
The invariant potential is simply a 1-input neural network on the $z$-coordinate of the input. As a result of this design, the only variance in the learned distribution that uses this potential will be along the $z$-axis, as desired.

Prior Distributions. For proper learning with a normalizing flow, we need a prior distribution on the sphere that respects the isotropy invariance. There are many isotropy-invariant distributions on the sphere: natural choices include the uniform density (which is invariant to all rotations) and the wrapped distribution centered at $v$ (Skopek et al., 2019; Nagano et al., 2019). For our experiments, we use the uniform density.

### D.2. Experiments

In this section, we present experiments on learning isotropy-invariant densities on the sphere. The specific density that we would like to learn is illustrated in Figure 2a; it is invariant under the isotropy group of rotations of the ${xy}$-plane.

We try to learn this density using our equivariant flow construction (result in Figure 2b), and compare it to the previous manifold ODE model that does not enforce equivariance of flows in Lou et al. (2020) (result in Figure 2c). Both models are trained for 100 epochs with a learning rate of 0.001 and a batch size of 200.

![01963e28-5820-7de4-83a0-aa949b9fe71c_14_204_679_1347_261_0.jpg](images/01963e28-5820-7de4-83a0-aa949b9fe71c_14_204_679_1347_261_0.jpg)

Figure 2. We compare the equivariant manifold flow and regular manifold flow on an invariant dataset. Note that our model is able to accurately capture the ground truth data distribution while the regular manifold flow struggles.

Despite our equivariant flow having fewer parameters (as both flows have the same width and the equivariant flow has an input dimension of 1), our model is able to capture the distribution much better than the base manifold flow.
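The invariant potential parameterization used here needs very little machinery. The NumPy sketch below (our own toy illustration with random, untrained weights, not the trained network from the experiments) builds a potential that reads only the $z$-coordinate and checks numerically that it is invariant under rotations about the $z$-axis:

```python
import numpy as np

rng = np.random.default_rng(0)
# a tiny 1-input tanh network on z; these weights are arbitrary placeholders
w1, b1, w2 = rng.standard_normal(16), rng.standard_normal(16), rng.standard_normal(16)

def potential(x):
    """Isotropy-invariant potential on S^2: depends on the input only through z."""
    return float(np.tanh(x[2] * w1 + b1) @ w2)

def rot_z(t):
    """Rotation about the z-axis, i.e. an element of the isotropy group of (0, 0, 1)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

x = np.array([1.0, 2.0, 2.0]) / 3.0  # a point on the unit sphere
print(np.isclose(potential(rot_z(0.7) @ x), potential(x)))  # → True
```

Because the network never sees the $x$- or $y$-coordinate, invariance under the isotropy group holds by construction, and the gradient field of such a potential is correspondingly equivariant.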
We believe this is due to the inductive bias of our equivariant model, which explicitly leverages the underlying symmetry.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/gGJRwZmCFm4/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/gGJRwZmCFm4/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9472f1d36b6ad70210a4ccb4e4c7b0a4eef84241
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/gGJRwZmCFm4/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,161 @@

§ EQUIVARIANT MANIFOLD FLOWS

Anonymous Authors ${}^{1}$

§ ABSTRACT

Tractably modelling distributions over manifolds has long been an important goal in the natural sciences. Recent work has focused on developing general machine learning models to learn such distributions. However, for many applications these distributions must respect manifold symmetries, a trait which most previous models disregard. In this paper, we lay the theoretical foundations for learning symmetry-invariant distributions on arbitrary manifolds via equivariant manifold flows. We demonstrate the utility of our approach by using it to learn gauge invariant densities over ${SU}\left( n\right)$ in the context of quantum field theory.

§ 1. INTRODUCTION

Density learning over manifolds has a broad array of applications, ranging from quantum field theory in physics (Wirnsberger et al., 2020) to motion estimation in robotics (Feiten et al., 2013) to protein-structure prediction in computational biology (Hamelryck et al., 2006).
Recent work (Lou et al., 2020; Mathieu & Nickel, 2020; Falorsi & Forré, 2020) has extended the powerful framework of continuous normalizing flows (Chen et al., 2018; Grathwohl et al., 2019) to the setting of Riemannian manifolds, lifting the utility of these models for learning complex probability distributions to a more general setting. + +Although these manifold normalizing flows were a considerable step forward, they are insufficient for many problems in the natural sciences. For example, coupled particle systems in physical chemistry (Köhler et al.,2020) and ${SU}\left( n\right)$ lattice gauge theories in theoretical physics (Boyda et al., 2020) require symmetries that are nontrivial to enforce. Typically, these symmetries are enforced in an ad hoc way using properties specific to the manifold (Boyda et al., 2020). In contrast, our paper presents a fully general way to learn flows that induce symmetry-invariant distributions. + +§ 2. RELATED WORK + +Normalizing Flows on Manifolds Normalizing flows on manifolds have received a considerable amount of attention, both in terms of manifold-specific and general constructions. Rezende et al. (2020) introduced constructions specific to tori and spheres, while Bose et al. (2020) introduced constructions for hyperbolic space. Following this work, Lou et al. (2020); Mathieu & Nickel (2020) introduced a general construction by extending Neural ODEs (Chen et al., 2018) to the setting of Riemannian manifolds. + +Equivariant Machine Learning Recent work has incorporated equivariance into machine learning models for the purpose of modelling symmetries (Cohen & Welling, 2016; Kondor & Trivedi, 2018). In particular, Köhler et al. (2020) introduced equivariant normalizing flows for Euclidean space and Boyda et al. (2020) introduced equivariant flows for ${SU}\left( n\right)$ via a manifold-specific construction. 
In contrast, the equivariant manifold flows in our paper are fully general and applicable to arbitrary Riemannian manifolds.

§ 3. BACKGROUND

In this section, we provide a terse overview of the necessary concepts for understanding our paper. For a more detailed introduction to Riemannian geometry, we refer the reader to texts such as Lee (2013); Kobyzev et al. (2020).

§ 3.1. RIEMANNIAN GEOMETRY

A Riemannian manifold $(\mathcal{M}, h)$ is an $n$-dimensional manifold with a smooth collection of inner products ${\left( {h}_{x}\right) }_{x \in \mathcal{M}}$, one for every tangent space ${T}_{x}\mathcal{M}$. The Riemannian metric $h$ induces a distance ${d}_{h}$ on the manifold.

A diffeomorphism $f : \mathcal{M} \rightarrow \mathcal{M}$ is called an isometry if $h\left( {D}_{x}f\left( u\right) ,{D}_{x}f\left( v\right) \right) = h\left( u,v\right)$ for all tangent vectors $u,v \in {T}_{x}\mathcal{M}$, where ${D}_{x}f$ is the differential of $f$. Note that isometries preserve the manifold distance function. The collection of all isometries forms a group $G$, which we call the isometry group of the manifold $\mathcal{M}$.

Riemannian metrics also allow for a natural analogue of gradients on ${\mathbb{R}}^{n}$. For a function $f : \mathcal{M} \rightarrow \mathbb{R}$, we define the Riemannian gradient ${\nabla }_{x}f$ to be the vector in ${T}_{x}\mathcal{M}$ such that $h\left( {\nabla }_{x}f,v\right) = {D}_{x}f\left( v\right)$ for $v \in {T}_{x}\mathcal{M}$.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

§ 3.2. NORMALIZING FLOWS ON MANIFOLDS

Let $(\mathcal{M}, h)$ be a Riemannian manifold. A normalizing flow on $\mathcal{M}$ is a diffeomorphism ${f}_{\theta } : \mathcal{M} \rightarrow \mathcal{M}$ (parametrized by $\theta$) that transforms a prior density $\rho$ into a model density ${\rho }_{{f}_{\theta }}$.
The model distribution can be computed via the change of variables equation:

$$
{\rho }_{{f}_{\theta }}\left( x\right) = \rho \left( {f}_{\theta }^{-1}\left( x\right) \right) \left| \det \frac{\partial {f}_{\theta }^{-1}\left( x\right) }{\partial x}\right| .
$$

§ 3.3. EQUIVARIANCE AND INVARIANCE

Let $G$ be an isometry subgroup of $\mathcal{M}$. We notate the action of an element $g \in G$ on $\mathcal{M}$ by the map ${R}_{g} : \mathcal{M} \rightarrow \mathcal{M}$.

Equivariant and Invariant Functions We say that a function $f : \mathcal{M} \rightarrow \mathcal{N}$ is equivariant if, for symmetries ${g}_{x} : \mathcal{M} \rightarrow \mathcal{M}$ and ${g}_{y} : \mathcal{N} \rightarrow \mathcal{N}$, we have $f \circ {g}_{x} = {g}_{y} \circ f$. We say a function $f : \mathcal{M} \rightarrow \mathcal{N}$ is invariant if $f \circ {g}_{x} = f$. When $\mathcal{M}$ and $\mathcal{N}$ are manifolds, the symmetries ${g}_{x}$ and ${g}_{y}$ are isometries.

Equivariant Vector Fields Let $X : \mathcal{M} \times \lbrack 0,\infty ) \rightarrow T\mathcal{M}$, $X\left( m,t\right) \in {T}_{m}\mathcal{M}$, be a time-dependent vector field on a manifold $\mathcal{M}$ with base point ${x}_{0} \in \mathcal{M}$. $X$ is a $G$-equivariant vector field if $\forall \left( m,t\right) \in \mathcal{M} \times \lbrack 0,\infty )$, $X\left( {R}_{g}m,t\right) = \left( {D}_{m}{R}_{g}\right) X\left( m,t\right)$.

Equivariant Flows A flow $f$ on a manifold $\mathcal{M}$ is $G$-equivariant if it commutes with actions from $G$, i.e. we have ${R}_{g} \circ f = f \circ {R}_{g}$.

Invariance of Density For a group $G$, a density $\rho$ on a manifold $\mathcal{M}$ is $G$-invariant if, for all $g \in G$ and $x \in \mathcal{M}$, $\rho \left( {R}_{g}x\right) = \rho \left( x\right)$, where ${R}_{g}$ is the action of $g$ on $x$.

§ 4. INVARIANT DENSITIES FROM EQUIVARIANT FLOWS

In this section, we describe a tractable way to learn a density over a manifold that obeys a symmetry given by an isometry subgroup $G$. Directly parameterizing a density that obeys this symmetry is nontrivial. Hence we prove the following implications, which yield a tractable solution to this problem:

1. G-invariant potential $\Rightarrow$ G-equivariant vector field (Theorem 1). We show that given a $G$-invariant potential function $f : \mathcal{M} \rightarrow \mathbb{R}$, the vector field $\nabla f$ is $G$-equivariant.

2. G-equivariant vector field $\Rightarrow$ G-equivariant flow (Theorem 2). We show that a $G$-equivariant vector field on $\mathcal{M}$ uniquely induces a $G$-equivariant flow.

3. G-equivariant flow $\Rightarrow$ G-invariant density (Theorem 3). We show that given a $G$-invariant prior $\rho$ and a $G$-equivariant flow ${f}_{\theta }$, the flow density ${\rho }_{{f}_{\theta }}$ is $G$-invariant.

Hence, we can obtain a $G$-invariant density from a $G$-invariant potential. We claim that constructing a $G$-invariant potential function on a manifold is far simpler than directly parameterizing a $G$-invariant density or a $G$-equivariant flow (an example construction will be given). We defer the proofs of all theorems to Appendix A.

§ 4.1. EQUIVARIANT GRADIENT OF POTENTIAL FUNCTION

We start by showing how to construct $G$-equivariant vector fields from $G$-invariant potential functions. To design an equivariant vector field $X$, it is sufficient to set the vector field dynamics of $X$ as the gradient of some $G$-invariant potential function $\Phi : \mathcal{M} \rightarrow \mathbb{R}$:

Theorem 1. Let $(\mathcal{M}, h)$ be a Riemannian manifold and $G$ be its group of isometries (or an isometry subgroup).
If $\Phi : \mathcal{M} \rightarrow \mathbb{R}$ is a smooth $G$-invariant function, then the following diagram commutes for any $g \in G$:

[commutative diagram omitted]

i.e. ${\nabla }_{{R}_{g}u}\Phi = {D}_{u}{R}_{g}\left( {\nabla }_{u}\Phi \right)$. Hence $\nabla \Phi$ is a $G$-equivariant vector field. This condition is also tight, in the sense that it holds only if $G$ is a group of isometries.

§ 4.2. CONSTRUCTING EQUIVARIANT MANIFOLD FLOWS FROM EQUIVARIANT VECTOR FIELDS

To construct equivariant manifold flows, we use tools from manifold ordinary differential equations (ODEs) and continuous normalizing flows (CNFs). In particular, note the definition below:

Manifold Continuous Normalizing Flows A manifold continuous normalizing flow with base point $z$ is a function $\gamma : \lbrack 0,\infty ) \rightarrow \mathcal{M}$ that satisfies the manifold ODE

$$
\frac{d\gamma \left( t\right) }{dt} = X\left( \gamma \left( t\right) ,t\right) ,\;\gamma \left( 0\right) = z.
$$

We define ${F}_{X,T} : \mathcal{M} \rightarrow \mathcal{M}$, $z \mapsto {F}_{X,T}\left( z\right)$, to map any base point $z \in \mathcal{M}$ to the value of the CNF starting at $z$, evaluated at time $T$. This function is known as the (vector field) flow of $X$. Observe that there exists a natural correspondence between equivariant flows and equivariant vector fields, by the following theorem:

Theorem 2. Let $(\mathcal{M}, h)$ be a Riemannian manifold, and $G$ be its isometry group (or one of its subgroups). Let $X$ be any time-dependent vector field on $\mathcal{M}$, and ${F}_{X,T}$ be the flow of $X$. Then $X$ is a $G$-equivariant vector field if and only if ${F}_{X,T}$ is a $G$-equivariant flow.

§ 4.3. INVARIANT MANIFOLD DENSITIES FROM EQUIVARIANT FLOWS

We now show that $G$-equivariant flows induce $G$-invariant densities.
Note that we require the group $G$ to be an isometry subgroup in order to control the density ${\rho }_{f}$; the following theorem does not hold for general diffeomorphism groups.

Theorem 3. Let $(\mathcal{M}, h)$ be a Riemannian manifold, and $G$ be its isometry group (or one of its subgroups). If $\rho$ is a $G$-invariant density on $\mathcal{M}$, and $f$ is a $G$-equivariant diffeomorphism, then ${\rho }_{f}\left( x\right)$ is also $G$-invariant.

§ 4.4. UNIVERSALITY OF FLOWS GENERATED BY INVARIANT POTENTIALS

We prove that flows induced by invariant potentials suffice to learn any smooth invariant distribution over a closed manifold, as measured by Kullback-Leibler (KL) divergence.

Theorem 4. Let $(\mathcal{M}, h)$ be a closed Riemannian manifold. Let $\rho ,\pi$ be smooth $G$-invariant distributions over said manifold, and let ${D}_{KL}\left( \rho \parallel \pi \right)$ be the ${KL}$ divergence between distributions $\rho$ and $\pi$. If we choose a function $g : \mathcal{M} \rightarrow \mathbb{R}$ such that for $x \in \mathcal{M}$,

$$
g\left( x\right) = \log \left( \frac{\pi \left( x\right) }{\rho \left( x\right) }\right) ,
$$

then we have:

$$
\frac{\partial }{\partial t}{D}_{KL}\left( \rho \parallel \pi \right) = - {\int }_{\mathcal{M}}\rho \exp \left( g\right) \parallel \nabla g{\parallel }^{2}\,{dx} \leq 0.
$$

In particular, if the target distribution is $\pi$, the current distribution is $\rho$, and the flow is obtained from the potential $g\left( x\right) = \log \left( \pi \left( x\right) /\rho \left( x\right) \right)$, then by Theorem 4 the KL divergence between $\pi$ and $\rho$ is monotonically decreasing.

§ 5. LEARNING INVARIANT DENSITIES WITH EQUIVARIANT FLOWS

In this section, we discuss how the theory in Section 4 is applied.

§ 5.1. EQUIVARIANT MANIFOLD FLOW MODEL

We assume that a $G$-invariant potential function $f : \mathcal{M} \rightarrow \mathbb{R}$ is given.
The equivariant flow model works by using automatic differentiation (Paszke et al., 2017) on $f$ to obtain $\nabla f$, using this for the vector field, and integrating in a step-wise fashion over the manifold. Specifically, forward integration and change-in-density (divergence) computations utilize the Riemannian CNF (Mathieu & Nickel, 2020) framework. This flow model is used with a specific training procedure (see Section 5.3) to obtain a $G$-invariant model density that approximates some target. + +§ 5.2. CONSTRUCTING CONJUGATION-INVARIANT POTENTIAL FUNCTIONS ON ${SU}\left( n\right)$ + +For many applications in physics (specifically gauge theory and lattice quantum field theory), one works with the Lie group ${SU}\left( n\right)$, the group of unitary matrices with determinant 1 (for details on the manifold structure of ${SU}\left( n\right)$, see Appendix B). In particular, when modelling probability distributions on ${SU}\left( n\right)$ for lattice QFT, the desired distribution must be invariant under conjugation by ${SU}\left( n\right)$ (Boyda et al., 2020). Conjugation is an isometry on ${SU}\left( n\right)$ (see Appendix A.5), so we can model probability distributions invariant under this action with our developed theory. + +Invariant Potential Parameterization. We want to produce a $G$-invariant potential function $\Phi : {SU}\left( n\right) \rightarrow \mathbb{R}$. Note that matrix conjugation preserves eigenvalues. Thus, for a function $\Phi : {SU}\left( n\right) \rightarrow \mathbb{R}$ to be invariant to matrix conjugation, it has to act on the eigenvalues of $x \in {SU}\left( n\right)$ as a multiset. + +We can parameterize such potential functions $\Phi$ by the DeepSet network from Zaheer et al. (2017).
DeepSet is a permutation-invariant neural network that acts on the eigenvalues, so the mapping of $x \in {SU}\left( n\right)$ is $\Phi \left( x\right) = \widehat{\Phi }\left( \left\{ {{\lambda }_{1}\left( x\right) ,\ldots ,{\lambda }_{n}\left( x\right) }\right\} \right)$ for some set function $\widehat{\Phi }$. + +As a result of this design, the learned density varies only across non-similar matrices, while all similar matrices are assigned the same density value. + +Prior Distributions. For the prior distribution of the flow, we need a distribution that is invariant to conjugation by ${SU}\left( n\right)$. We use the Haar measure, whose volume element is given for $x \in {SU}\left( n\right)$ as $\operatorname{Haar}\left( x\right) = \mathop{\prod }\limits_{{i < j}}{\left| {\lambda }_{i}\left( x\right) - {\lambda }_{j}\left( x\right) \right| }^{2}$ (Boyda et al., 2020). We can sample from and compute log probabilities for this distribution efficiently with standard matrix computations (Mezzadri, 2007). + +§ 5.3. TRAINING EQUIVARIANT MANIFOLD FLOWS + +Learning to sample given an exact density is important in settings such as that of Boyda et al. (2020), where conjugation-invariant densities on ${SU}\left( n\right)$ are known exactly yet sampling from them is nontrivial. We train our models by sampling from the Haar distribution on ${SU}\left( n\right)$, computing the KL divergence between the model probabilities and target probabilities at these samples, and backpropagating from this KL loss. + +§ 6. EXPERIMENTS + +In this section, we learn densities on ${SU}\left( n\right)$ that are invariant to conjugation by ${SU}\left( n\right)$, which is important for constructing flow-based samplers for ${SU}\left( n\right)$ lattice gauge theories in theoretical physics (Boyda et al., 2020). Our model outperforms the construction given in Boyda et al. (2020).
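The conjugation-invariant potential of Section 5.2 and the Haar prior can be sketched numerically. In this sketch, fixed functions stand in for the learned DeepSet networks, and random $SU(2)$ matrices are drawn via a unit-quaternion parametrization; both of these are our own conveniences, not machinery from the paper:

```python
import numpy as np

def eig_angles(u):
    """Sorted eigenvalue angles of a unitary matrix: a permutation-invariant
    representation of the eigenvalue multiset."""
    return np.sort(np.angle(np.linalg.eigvals(u)))

def invariant_potential(u, phi=np.cos, rho=np.sum):
    """Toy conjugation-invariant potential Phi(u) = rho(phi(angles)).
    phi and rho are fixed stand-ins for the learned DeepSet networks."""
    return rho(phi(eig_angles(u)))

def haar_density(u):
    """Unnormalized Haar volume element prod_{i<j} |lambda_i - lambda_j|^2."""
    lam = np.exp(1j * eig_angles(u))
    n = len(lam)
    out = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            out *= abs(lam[i] - lam[j]) ** 2
    return out

def random_su2(rng):
    """Random SU(2) matrix from a unit quaternion (our convenience choice)."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

rng = np.random.default_rng(0)
u, g = random_su2(rng), random_su2(rng)
conj = g @ u @ g.conj().T     # conjugation preserves eigenvalues
assert np.isclose(invariant_potential(u), invariant_potential(conj))
assert np.isclose(haar_density(u), haar_density(conj))
```

Because both the potential and the prior depend on $u$ only through its eigenvalue multiset, conjugation invariance holds by construction.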
An experiment on an additional manifold with a new symmetry is given in Appendix D. Moreover, we demonstrate in Appendix D that leveraging symmetries inherent to the manifold improves performance over general manifold flows (Lou et al., 2020). + + *(graphics omitted)* + +(a) ${SU}\left( 2\right)$ learned densities from (Left) our model and (Right) the Boyda et al. (2020) model. The target densities are in orange, while model densities are in blue. The $x$-axis is $\theta$ for an eigenvalue ${e}^{i\theta }$ of a matrix in ${SU}\left( 2\right)$ (the second eigenvalue is determined as ${e}^{-{i\theta }}$). Our model has better behavior in low-density regions, and more smoothly captures the targets in high-density regions. + + *(graphics omitted)* + +(b) ${SU}\left( 3\right)$ learned densities from (Middle) our model and (Right) the Boyda et al. (2020) model for different target densities (Left). The $x$-axis and $y$-axis are the angles ${\theta }_{1}$ and ${\theta }_{2}$ for eigenvalues ${e}^{i{\theta }_{1}}$ and ${e}^{i{\theta }_{2}}$ of a matrix in ${SU}\left( 3\right)$ (the third eigenvalue is determined as ${e}^{-i{\theta }_{1} - i{\theta }_{2}}$), and the probabilities correspond to colors on a logarithmic scale. Our model better captures the geometry of the target densities. + +Figure 1. Comparison of learned densities on (a) ${SU}\left( 2\right)$ and (b) ${SU}\left( 3\right)$. All densities are normalized to have maximum value 1. + +For the sake of staying true to the application area, we follow the framework of Boyda et al. (2020) in learning densities on ${SU}\left( n\right)$ that are invariant to conjugation by ${SU}\left( n\right)$. In particular, our goal is to learn a flow to model a target distribution so that we may efficiently sample from it. As mentioned above in Section 5.3, this setting follows the paradigm in which we are given exact density functions and learn how to sample.
Our model is as described in Section 5; further training details are given in Appendix C.1. + +Figure 1a displays learned densities for our model and the model of Boyda et al. (2020) for three densities on ${SU}\left( 2\right)$ described in Appendix C.2.1. While both models match the target distributions well in high-density regions, our model exhibits a considerable improvement in lower-density regions, where the tails of our learned distribution decay faster. By contrast, the model of Boyda et al. (2020) seems to be unable to reduce mass near $\pm \pi$ , a possible consequence of their construction. Even in high-density regions, our model appears to vary smoothly, with fewer unnecessary bumps and curves compared to the densities of the model in Boyda et al. (2020). + +Figure 1b displays learned densities for our model and the model of Boyda et al. (2020) for three densities on ${SU}\left( 3\right)$ described in Appendix C.2.2. In this case, our models fit the target densities more accurately and better respect the geometry of the target distribution. Indeed, while the models of Boyda et al. (2020) are often sharp and have pointed corners, our models learn densities that vary smoothly and curve in ways that are representative of the target distributions. + +§ 7. CONCLUSION + +In this work, we introduce equivariant manifold flows in a fully general context and provide the necessary theory to ensure a principled construction. We also demonstrate the efficacy of our approach in the context of learning a conjugation invariant density over ${SU}\left( n\right)$ , which is an important task for sampling ${SU}\left( n\right)$ lattice gauge theories in quantum field theory. 
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/jxsmOXCDv9l/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/jxsmOXCDv9l/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..d5c21b3427ab6ec6cedcccc9ae2b07a94aa36388 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/jxsmOXCDv9l/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,579 @@ +# Interpreting diffusion score matching using normalizing flow + +Anonymous Authors ${}^{1}$ + +## Abstract + +Score matching (SM) and its closely related counterpart, Stein discrepancy (SD), have achieved great success in model training and evaluation. However, recent research shows their limitations when dealing with certain types of distributions. One possible fix is to incorporate a diffusion matrix into the original score matching (or Stein discrepancy), yielding diffusion score matching (DSM) (or diffusion Stein discrepancy (DSD)). However, the lack of an interpretation of the diffusion matrix limits its usage to simple distributions and manually chosen matrices. In this work, we fill this gap by interpreting the diffusion matrix using normalizing flows. Specifically, we theoretically prove that DSM (or DSD) is equivalent to the original score matching (or Stein discrepancy) evaluated in the transformed space defined by the normalizing flow, where the diffusion matrix is the inverse of the flow's Jacobian matrix. In addition, we build its connection to Riemannian manifolds, and further extend it to continuous flows, where the change of DSM is characterized by an ODE. + +## 1. Introduction + +Recently, score matching (Hyvärinen & Dayan, 2005) and its closely related counterpart, Stein discrepancy (Gorham, 2017), have made great progress in both the understanding of their theoretical properties and their practical usage.
Particularly, unlike the Kullback-Leibler (KL) divergence, which can only be used for distributions with a known normalizing constant, SM (or SD) can be evaluated for unnormalized densities, and requires fewer assumptions on the probability distributions (Fisher et al., 2021). Such useful properties enable them to be widely applied in training energy-based models (EBMs) (Song et al., 2020a; Grathwohl et al., 2020; Wenliang et al., 2019), state-of-the-art score-based generative models (Song & Ermon, 2019; Song et al., 2020b), statistical tests (Liu et al., 2016; Chwialkowski et al., 2016) and variational inference (Hu et al., 2018; Liu & Wang, 2016). + +Despite their elegant statistical properties, recent work (Barp et al., 2019) demonstrated their failure when dealing with certain types of distributions (e.g. heavy-tailed distributions). For instance, when the data and the model are both heavy-tailed, the model can fail to recover the true mode even in the one-dimensional case. The root of this problem is that the SM (or SD) objective is highly non-convex and does not correlate well with likelihood. To fix this, Barp et al. (2019) proposed a variant called diffusion score matching (and diffusion Stein discrepancy), where a diffusion matrix is introduced. However, the authors did not provide an interpretation of this diffusion matrix; indeed, the diffusion matrices used in Barp et al. (2019) are manually chosen for toy densities. This lack of interpretation hinders the development of a proper training method for the diffusion matrix. + +In this paper, we aim to give an interpretation based on normalizing flows, which sheds light on how to develop training methods for the diffusion matrix. We summarize our contributions as follows: + +- We theoretically prove that DSM (or DSD) is equivalent to the original SM (or SD) performed in the transformed space defined by the normalizing flow. The diffusion matrix is exactly the inverse of the flow's Jacobian matrix.
+ +- We further show its connection to Riemannian manifolds. Specifically, we show that the diffusion matrix is closely related to the Riemannian metric tensor. + +- We further extend DSM to a continuous version. Namely, we derive an ODE characterizing its instantaneous change. + +We hope that by building these connections, a broad range of techniques from the normalizing flow community can be leveraged to develop training methods for the diffusion matrix. + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +## 2. Background: Diffusion Stein discrepancy + +### 2.1. Score matching and Stein discrepancy + +Let $\mathcal{P}$ be the space of Borel probability measures on ${\mathbb{R}}^{D}$, and let $\mathbb{Q} \in \mathcal{P}$ be a probability measure. The objective of model learning is to find a sequence of probability measures $\left\{ {{\mathbb{P}}_{\theta } : \theta \in \Theta }\right\} \subset \mathcal{P}$ that approximates $\mathbb{Q}$ in an appropriate sense. One common way to achieve this is by defining a discrepancy measure $\mathcal{D} : \mathcal{P} \times \mathcal{P} \rightarrow \mathbb{R}$, which quantifies the difference between two probability measures. Thus, the optimal parameters ${\theta }^{ * }$ can be obtained as ${\theta }^{ * } = \operatorname{argmin}_{\theta }\mathcal{D}\left( {\mathbb{Q}\parallel {\mathbb{P}}_{\theta }}\right)$. The choice of discrepancy depends on the properties of the probability measures, as well as on efficiency and robustness considerations. The one we focus on is the Fisher divergence. Assume the probability measures $\mathbb{Q}$ and ${\mathbb{P}}_{\theta }$ have corresponding twice-differentiable densities $q\left( \mathbf{x}\right) ,{p}_{\theta }\left( \mathbf{x}\right)$.
The Fisher divergence (Johnson, 2004) is defined as + +$$
\mathcal{F}\left( {q,{p}_{\theta }}\right) = \frac{1}{2}{\mathbb{E}}_{q}\left\lbrack {\begin{Vmatrix}{\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right) \end{Vmatrix}}^{2}\right\rbrack \tag{1}
$$ + +where ${\mathbf{s}}_{p}\left( \mathbf{x}\right) = {\nabla }_{\mathbf{x}}\log {p}_{\theta }\left( \mathbf{x}\right)$ is called the score of ${p}_{\theta }$, and ${\mathbf{s}}_{q}$ is defined accordingly. Although $q$ is typically the underlying data density, with intractable ${\mathbf{s}}_{q}$, ${\mathbf{s}}_{q}$ is in fact constant with respect to the parameter $\theta$. Thus, one can use integration by parts to derive the following: + +$$
\mathcal{F}\left( {q,{p}_{\theta }}\right) = \underset{\mathcal{L}\left( \theta \right) }{\underbrace{{\mathbb{E}}_{q}\left\lbrack {\frac{1}{2}{\begin{Vmatrix}{\mathbf{s}}_{p}\left( \mathbf{x}\right) \end{Vmatrix}}^{2} + \operatorname{Tr}\left( {{\nabla }_{\mathbf{x}}{\mathbf{s}}_{p}\left( \mathbf{x}\right) }\right) }\right\rbrack }} + {C}_{q} \tag{2}
$$ + +where ${C}_{q}$ is a constant w.r.t. the parameter $\theta$. This alternative objective $\mathcal{L}\left( \theta \right)$ is referred to as score matching (Hyvärinen & Dayan, 2005). + +Another discrepancy measure of interest is the Stein discrepancy, which is defined as + +$$
\mathcal{S}\left( {q,{p}_{\theta }}\right) = \mathop{\sup }\limits_{{\mathbf{f} \in \mathcal{H}}}{\mathbb{E}}_{q}\left\lbrack {{\mathbf{s}}_{p}{\left( \mathbf{x}\right) }^{T}\mathbf{f}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}^{T}\mathbf{f}\left( \mathbf{x}\right) }\right\rbrack \tag{3}
$$ + +where $\mathbf{f} : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ is a test function, and $\mathcal{H}$ is an appropriate test function family, e.g. a reproducing kernel Hilbert space (Liu et al., 2016; Chwialkowski et al., 2016) or a Stein class (Gorham, 2017; Liu et al., 2016).
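The integration-by-parts identity behind Eq. (2) can be checked numerically in one dimension. In the sketch below (a toy setup of our own, not from the paper), $q = N(0,1)$ and $p_\theta = N(\theta, 1)$, so both objectives have simple closed-form scores, and Eq. (2) should differ from Eq. (1) by a $\theta$-independent constant $C_q$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)   # samples from q = N(0, 1)
x -= x.mean()                  # center to suppress Monte Carlo bias in the check

def sm_loss(theta):
    """Eq. (2): E_q[ 1/2 s_p(x)^2 + s_p'(x) ] for p_theta = N(theta, 1)."""
    s_p = theta - x            # score of N(theta, 1): d/dx log p = -(x - theta)
    ds_p = -1.0                # derivative of that score
    return np.mean(0.5 * s_p ** 2 + ds_p)

def fisher(theta):
    """Eq. (1): 1/2 E_q[(s_p(x) - s_q(x))^2], with s_q(x) = -x for q = N(0, 1)."""
    return np.mean(0.5 * ((theta - x) - (-x)) ** 2)

# Eq. (2) equals Eq. (1) up to a theta-independent constant C_q.
gaps = [sm_loss(t) - fisher(t) for t in (-1.0, 0.0, 2.0)]
assert max(gaps) - min(gaps) < 1e-6
```

The gap between the two objectives is the same for every $\theta$, which is exactly why minimizing Eq. (2), which never touches the intractable $\mathbf{s}_q$, minimizes the Fisher divergence.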
Recent work (Hu et al., 2018) proved a strong connection between the Stein discrepancy and the Fisher divergence by deriving the optimal test function: + +$$
{\mathbf{f}}^{ * }\left( \mathbf{x}\right) \propto {\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right) \tag{4}
$$ + +Thus, substituting back into Eq. (3) shows that the Stein discrepancy is equivalent to the Fisher divergence up to a multiplicative constant. + +Recent works (Barp et al., 2019; Gorham et al., 2019) further extend score matching and Stein discrepancy by incorporating a diffusion matrix $\mathbf{m}\left( \mathbf{x}\right) : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D \times D}$, yielding diffusion score matching (DSM) and diffusion Stein discrepancy (DSD). DSM is defined as + +$$
{DS}{M}_{m}\left( {q,{p}_{\theta }}\right) = \frac{1}{2}{\mathbb{E}}_{q}\left\lbrack {\begin{Vmatrix}\mathbf{m}{\left( \mathbf{x}\right) }^{T}\left( {\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right) \right) \end{Vmatrix}}^{2}\right\rbrack \tag{5}
$$ + +![01963e4a-40a8-7943-9c90-781bd76352b4_1_1041_188_403_388_0.jpg](images/01963e4a-40a8-7943-9c90-781bd76352b4_1_1041_188_403_388_0.jpg) + +Figure 1. The SM loss computed for different $\theta$. Orange line: original SM loss, with the orange dashed line indicating the valid initialization region. Blue line: DSM loss, with the dashed blue line indicating the optimum solution ${\theta }^{ * }$. The black dashed line indicates the true $\theta$. + +where $\mathbf{m}\left( \mathbf{x}\right)$ is a matrix-valued function.
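The substitution claim around Eq. (4) can be checked by Monte Carlo in one dimension. With $q = N(0,1)$ and $p_\theta = N(\theta, 1)$ (a toy setup of our own), the optimal test function $\mathbf{f}^*(x) = \mathbf{s}_p(x) - \mathbf{s}_q(x) = \theta$ is constant, and plugging it into Eq. (3) yields twice the Fisher divergence (the factor of two reflects our choice of proportionality constant in Eq. (4)):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)        # samples from q = N(0, 1)
theta = 1.5
s_p, s_q = theta - x, -x              # scores of p_theta and q
f, df = s_p - s_q, 0.0                # optimal f* = theta (a constant), f*' = 0

stein = np.mean(s_p * f + df)         # Eq. (3) evaluated at the optimal f
fisher = np.mean(0.5 * (s_p - s_q) ** 2)   # Eq. (1)
assert np.isclose(stein, 2 * fisher, rtol=1e-2)
```

Any proportional rescaling of $\mathbf{f}^*$ rescales the Stein objective by the same factor, which is why the equivalence holds only up to a multiplicative constant.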
Similarly, DSD is defined as + +$$
{DS}{D}_{m}\left( {q,{p}_{\theta }}\right) = \mathop{\sup }\limits_{{\mathbf{f} \in \mathcal{H}}}{\mathbb{E}}_{q}\left\lbrack {{\left( \mathbf{m}{\left( \mathbf{x}\right) }^{T}{\mathbf{s}}_{p}\left( \mathbf{x}\right) \right) }^{T}\mathbf{f}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}^{T}\left( {\mathbf{m}\left( \mathbf{x}\right) \mathbf{f}\left( \mathbf{x}\right) }\right) }\right\rbrack \tag{6}
$$ + +It can be shown that as long as $\mathbf{m}\left( \mathbf{x}\right)$ is invertible, ${DS}{M}_{m}\left( {q,{p}_{\theta }}\right)$ and ${DS}{D}_{m}\left( {q,{p}_{\theta }}\right)$ are valid divergences. + +These two extensions have demonstrated superior performance when dealing with certain types of distributions. In the following, we give a motivating example similar to Barp et al. (2019). + +### 2.2. Motivating example: Student-t distribution + +Let us assume $q,{p}_{\theta }$ are one-dimensional Student-t distributions. The target is to approximate $q$ by ${p}_{\theta }$. The training set consists of 300 i.i.d. samples drawn from $q$ with mean 0 and scale 0.3. We assume the scale parameter for ${p}_{\theta }$ is the same as for $q$, and the only trainable parameter $\theta$ is the mean. The number of degrees of freedom is 5 for both $q$ and ${p}_{\theta }$. Figure 1 shows the score matching loss computed for different $\theta$. We can observe that the original SM loss is highly non-convex, and the loss value does not correlate well with likelihood. Indeed, we can see the true location $\theta = 0$ is protected by two high 'walls'. In other words, unlike for the maximum likelihood estimator, a parameter $\theta$ that is closer to the ground truth does not necessarily produce a low SM loss. One important consequence is that unless the initialized $\theta$ is within the narrow valid region, gradient-based optimization will never recover the ground truth.
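This example can be reproduced numerically. The Student-t score and its derivative are available in closed form, so both losses can be evaluated on a grid of $\theta$; the diffusion $\mathbf{m}(x) = 1 + (x - \theta)^2/0.6$ is the one used in the text, while the sample size (larger than the 300 points in the text, to smooth the Monte Carlo curves) and the grid are our own choices:

```python
import numpy as np

nu, sigma = 5.0, 0.3                 # degrees of freedom and scale from the text

def score(x, theta):
    """d/dx log p_theta(x) for a Student-t(nu, loc=theta, scale=sigma)."""
    u = x - theta
    return -(nu + 1) * u / (nu * sigma**2 + u**2)

def dscore(x, theta):
    """Derivative of the score w.r.t. x."""
    u = x - theta
    return -(nu + 1) * (nu * sigma**2 - u**2) / (nu * sigma**2 + u**2) ** 2

rng = np.random.default_rng(0)
x = sigma * rng.standard_t(nu, size=100_000)   # samples from q (true theta = 0)

thetas = np.linspace(0.0, 4.0, 81)
sm = np.array([np.mean(0.5 * score(x, t) ** 2 + dscore(x, t)) for t in thetas])
m = lambda t: 1.0 + (x - t) ** 2 / 0.6         # diffusion matrix from the text
dsm = np.array([np.mean(0.5 * (m(t) * (score(x, t) - score(x, 0.0))) ** 2)
                for t in thetas])

assert sm.max() > max(sm[0], sm[-1])   # the SM loss has a 'wall' away from 0
assert thetas[np.argmin(dsm)] == 0.0   # the DSM loss is minimized at the truth
```

The SM curve rises to a bump between the truth and large $\theta$ (the 'wall'), while the DSM curve with this diffusion is minimized exactly at $\theta = 0$.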
On the other hand, if we choose $\mathbf{m}\left( \mathbf{x}\right) = \left( {1 + \frac{{\left( \mathbf{x} - \theta \right) }^{2}}{0.6}}\right)$ as the diffusion matrix, the corresponding ${DS}{M}_{m}\left( {q,{p}_{\theta }}\right)$ loss is convex. The ground truth can be recovered by minimizing DSM with a proper gradient-based optimizer. + +The selection of the diffusion matrix is crucial to the success of the estimator. Unfortunately, the interpretation of this matrix has been unclear, let alone a selection algorithm. In the following, we aim to shed light on this problem by connecting this diffusion with normalizing flows. + +## 3. Diffusion matrix as normalizing flow + +### 3.1. Interpreting DSM/DSD using normalizing flow + +Assume we have two twice-differentiable densities ${q}_{X}\left( \mathbf{x}\right) ,{p}_{X}\left( \mathbf{x}\right)$ defined on ${\mathbb{R}}^{D}$. We further define a differentiable invertible transformation $\mathbf{T}\left( \mathbf{x}\right) : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$. Define + +$$
\mathbf{y} = \mathbf{T}\left( \mathbf{x}\right) \tag{7}
$$ + +with corresponding densities ${q}_{Y}\left( \mathbf{y}\right)$ and ${p}_{Y}\left( \mathbf{y}\right)$. We can prove the following theorem: + +Theorem 3.1. For twice differentiable densities ${q}_{X}\left( \mathbf{x}\right)$, ${p}_{X}\left( \mathbf{x}\right)$ and an invertible differentiable transformation $T : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$, the diffusion score matching objective (Eq. 5) is equivalent to the original score matching + +$$
\mathcal{F}\left( {{q}_{Y},{p}_{Y}}\right) = \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\mathbf{s}}_{{p}_{Y}}\left( \mathbf{y}\right) - {\mathbf{s}}_{{q}_{Y}}\left( \mathbf{y}\right) \end{Vmatrix}}^{2}\right\rbrack \tag{8}
$$ + +where $\mathbf{y} = T\left( \mathbf{x}\right)$, and ${p}_{Y},{q}_{Y}$ are the corresponding densities after the transformation.
The diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$ is the inverse of the Jacobian matrix: $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}$. + +Proof. From the change of variables formula, the corresponding densities ${p}_{Y}\left( \mathbf{y}\right) ,{q}_{Y}\left( \mathbf{y}\right)$ can be defined as: + +$$
{p}_{Y}\left( \mathbf{y}\right) = {p}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) \left| \frac{\partial {T}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| ,
$$ + +$$
{q}_{Y}\left( \mathbf{y}\right) = {q}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) \left| \frac{\partial {T}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| .
$$ + +Then the Fisher divergence (Eq. 1) for ${p}_{Y},{q}_{Y}$ can be formulated as follows, where we use the notation ${\mathcal{F}}_{T}$ to emphasize that ${p}_{Y},{q}_{Y}$ are obtained using the transformation $T$, and where the log-determinant terms introduced by the change of variables cancel in the difference of the two scores: + +$$
{\mathcal{F}}_{T}\left( {{p}_{Y},{q}_{Y}}\right) \mathrel{\text{:=}} \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\nabla }_{\mathbf{y}}\log {p}_{Y}\left( \mathbf{y}\right) - {\nabla }_{\mathbf{y}}\log {q}_{Y}\left( \mathbf{y}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack
$$ + +$$
= \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\nabla }_{\mathbf{y}}\log {p}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) - {\nabla }_{\mathbf{y}}\log {q}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) \end{Vmatrix}}_{2}^{2}\right\rbrack
$$ + +$$
= \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\nabla }_{\mathbf{y}}{T}^{-1}{\left( \mathbf{y}\right) }^{\top }\left( {{\nabla }_{{T}^{-1}\left( \mathbf{y}\right) }\log {p}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) - {\nabla }_{{T}^{-1}\left( \mathbf{y}\right) }\log {q}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) }\right) \end{Vmatrix}}_{2}^{2}\right\rbrack
$$ + +$$
= \frac{1}{2}{\mathbb{E}}_{{q}_{X}}\left\lbrack {\begin{Vmatrix}{\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-\top }\left( {{\nabla }_{\mathbf{x}}\log {p}_{X}\left( \mathbf{x}\right) - {\nabla }_{\mathbf{x}}\log {q}_{X}\left( \mathbf{x}\right) }\right) \end{Vmatrix}}_{2}^{2}\right\rbrack , \tag{9}
$$ + +where the last step comes from changing the variable to $\mathbf{x} = {T}^{-1}\left( \mathbf{y}\right)$ and noticing that ${\nabla }_{\mathbf{y}}{T}^{-1}\left( \mathbf{y}\right) = {\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}$ by the inverse function theorem. This objective coincides with diffusion score matching (Eq. 5). Importantly, ${DS}{M}_{m}\left( {{q}_{X},{p}_{X}}\right)$ is a valid divergence (i.e. ${DS}{M}_{m}\left( {{p}_{X},{q}_{X}}\right) = 0$ iff ${p}_{X} = {q}_{X}$) when $\mathbf{m}\left( \mathbf{x}\right)$ is an invertible matrix for every $\mathbf{x}$. As normalizing flow transformations naturally give invertible Jacobian matrices, we can easily establish the connection $\mathcal{F}\left( {{p}_{Y},{q}_{Y}}\right) = {DS}{M}_{m}\left( {{p}_{X},{q}_{X}}\right)$ with $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}$. + +We also include additional plots for the Student-t example. Specifically, we plot the log-likelihood function after the transformation defined by $\mathbf{m}\left( \mathbf{x}\right) = \left( {1 + \frac{{\left( \mathbf{x} - \theta \right) }^{2}}{0.6}}\right)$ in Figure 2 (Appendix D). + +Similarly, we can prove the connection between DSD (Eq. 6) and normalizing flows. The proof is in appendix A. + +Theorem 3.2.
For twice differentiable densities ${q}_{X}\left( \mathbf{x}\right)$, ${p}_{X}\left( \mathbf{x}\right)$, an invertible differentiable transformation $\mathbf{T}\left( \mathbf{x}\right) : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$, and a differentiable test function $\mathbf{f} : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ in a suitable test function family $\mathcal{H}$, the diffusion Stein discrepancy (Eq. 6) is equivalent to the original Stein discrepancy + +$$
\mathcal{S}\left( {{q}_{Y},{p}_{Y}}\right) = \mathop{\sup }\limits_{{\mathbf{g} \in {\mathcal{H}}^{\prime }}}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {{\mathbf{s}}_{{p}_{Y}}{\left( \mathbf{y}\right) }^{T}\mathbf{g}\left( \mathbf{y}\right) + {\nabla }_{\mathbf{y}}^{T}\mathbf{g}\left( \mathbf{y}\right) }\right\rbrack \tag{10}
$$ + +where $\mathbf{g}\left( \mathbf{y}\right) = \mathbf{f}\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right)$, ${\mathcal{H}}^{\prime }$ is the corresponding function space for $\mathbf{g}$, and ${p}_{Y}$ and ${q}_{Y}$ are the densities transformed by $\mathbf{T}\left( \cdot \right)$. The diffusion matrix is $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}$. + +Based on the above two theorems, we formally establish the connection between DSM/DSD and normalizing flows. This gives us an interpretation of the diffusion matrix as the inverse of the Jacobian matrix defined by the flow. + +We can further interpret this diffusion using distributions defined on a Riemannian manifold. In fact, we can show that DSM is equivalent to the original SM performed on two densities defined on a Riemannian manifold with the metric tensor $\mathbf{G}\left( \mathbf{x}\right) = \mathbf{m}{\left( \mathbf{x}\right) }^{-T}\mathbf{m}{\left( \mathbf{x}\right) }^{-1}$. + +### 3.2. Interpreting DSM using Riemannian manifold + +Assume we have a Riemannian manifold $(\mathcal{M}, \mathbf{g})$ with Riemannian metric tensor $\mathbf{g}$.
For each point $\mathbf{a} \in \mathcal{M}$, we assume it has local coordinates ${\mathbf{x}}_{a} = \left\lbrack {{x}_{a}^{1},\ldots ,{x}_{a}^{D}}\right\rbrack$. We can prove the following proposition: + +Proposition 3.1. Given the Riemannian manifold $(\mathcal{M}, \mathbf{g})$ defined above, let $\mathbb{Q},\mathbb{P}$ be two probability measures. We denote the corresponding densities (in terms of local coordinates $\mathbf{x}$) w.r.t. the Riemannian manifold as $\widetilde{p}\left( \mathbf{x}\right) = \frac{d\mathbb{P}}{d\mathcal{M}\left( \mathbf{x}\right) }$ and $\widetilde{q}\left( \mathbf{x}\right) = \frac{d\mathbb{Q}}{d\mathcal{M}\left( \mathbf{x}\right) }$. Then, the score matching loss for $\widetilde{p}$ and $\widetilde{q}$ is + +$$
{\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right) = \frac{1}{2}{\mathbb{E}}_{q}\left\lbrack {\mathbf{\Delta }{\left( \mathbf{x}\right) }^{T}\mathbf{G}{\left( \mathbf{x}\right) }^{-1}\mathbf{\Delta }\left( \mathbf{x}\right) }\right\rbrack \tag{11}
$$ + +where $p\left( \mathbf{x}\right) = \frac{d\mathbb{P}}{d\mathcal{M}\left( \mathbf{x}\right) }\frac{d\mathcal{M}\left( \mathbf{x}\right) }{d\mathbf{x}}$, $q\left( \mathbf{x}\right)$ is defined similarly, and $\mathbf{\Delta }\left( \mathbf{x}\right) = {\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right)$. $\mathbf{G}\left( \mathbf{x}\right)$ is a symmetric positive definite matrix representing the Riemannian metric tensor. In particular, if $\mathbf{G}\left( \mathbf{x}\right) = \mathbf{m}{\left( \mathbf{x}\right) }^{-T}\mathbf{m}{\left( \mathbf{x}\right) }^{-1}$, then ${\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right)$ is equivalent to DSM (Eq. 5) with diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$. + +The proof is in appendix B. + +This result is more general than theorem 3.1. Specifically, theorem 3.1 only proves a sufficient condition for the DSM to be a valid discrepancy.
Namely, if we have an invertible flow, the corresponding diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$ must be invertible. However, the converse is not true. On the other hand, proposition 3.1 only requires $\mathbf{m}\left( \mathbf{x}\right)$ to be invertible, which is more general. Indeed, from the topological point of view, if we have an invertible and differentiable flow $T$, then the transformed space (Riemannian manifold) is actually diffeomorphic to the original space (e.g. ${\mathbb{R}}^{D}$). Thus, this flow can be viewed as a special case of Gemici et al. (2016). But in general, a Riemannian manifold may not be diffeomorphic to ${\mathbb{R}}^{D}$, which explains why theorem 3.1 is only a sufficient condition. + +### 3.3. Continuous DSM with ODE flow + +The previous sections assume a deterministic transformation $\mathbf{T}\left( \mathbf{x}\right)$. Recent work has shown promising results for continuous flows characterised by an ODE (Chen et al., 2018; Grathwohl et al., 2018). Namely, we can consider a transformation + +$$
d\mathbf{x} = \mathbf{g}\left( {\mathbf{x}\left( t\right) }\right) {dt} \tag{12}
$$ + +where $\mathbf{g}\left( {\mathbf{x}\left( t\right) }\right)$ is a deterministic drift that is uniformly Lipschitz continuous w.r.t. $\mathbf{x}$. We define ${p}_{t}$ and ${q}_{t}$ to be the corresponding densities for $\mathbf{x}\left( t\right)$. Inspired by Chen et al. (2018), we can characterise the instantaneous change of the score matching loss $\frac{d\mathcal{F}\left( {{q}_{t},{p}_{t}}\right) }{dt}$ by the following proposition: + +Proposition 3.2. Let ${p}_{t}\left( {\mathbf{x}\left( t\right) }\right) ,{q}_{t}\left( {\mathbf{x}\left( t\right) }\right)$ be two probability density functions, where $\mathbf{x}\left( t\right)$ is characterized by the ODE defined in Eq. 12. Assume $\mathbf{g}\left( {\mathbf{x}\left( t\right) }\right)$ is uniformly Lipschitz continuous w.r.t. $\mathbf{x}\left( t\right)$.
Then, the instantaneous change of the score matching loss follows: + +$$
\frac{d\mathcal{F}\left( {{q}_{t},{p}_{t}}\right) }{dt} = - \frac{1}{2}{\mathbb{E}}_{{q}_{t}}\left\lbrack {\Delta {\left( \mathbf{x}\right) }^{T}\left( {{\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}\mathbf{g}{\left( \mathbf{x}\right) }^{T}}\right) \Delta \left( \mathbf{x}\right) }\right\rbrack \tag{13}
$$ + +where $\Delta \left( \mathbf{x}\right) = {\mathbf{s}}_{{p}_{t}}\left( \mathbf{x}\right) - {\mathbf{s}}_{{q}_{t}}\left( \mathbf{x}\right)$. + +The proof is in appendix C. + +## 4. Conclusion + +In this paper, we discuss the connections of diffusion score matching and diffusion Stein discrepancy to normalizing flows. Specifically, we theoretically prove that DSM (or DSD) is equivalent to performing the original score matching (or Stein discrepancy) on the transformed densities, with the diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$ given by the inverse of the flow's Jacobian matrix. We also establish the connection of DSM with densities defined on Riemannian manifolds by showing that the diffusion matrix is closely related to the Riemannian metric tensor. Finally, we extend DSM to continuous flows and derive an ODE characterizing its instantaneous change. By building these connections, we hope to shed light on developing training methods for the diffusion matrix to enable the practical usage of DSM (or DSD) for large models. + +## References + +Barp, A., Briol, F.-X., Duncan, A. B., Girolami, M., and Mackey, L. Minimum stein discrepancy estimators. arXiv preprint arXiv:1906.08283, 2019. + +Chen, R. T., Rubanova, Y., Bettencourt, J., and Duvenaud, D. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018. + +Chwialkowski, K., Strathmann, H., and Gretton, A. A kernel test of goodness of fit. In International conference on machine learning, pp. 2606-2615. PMLR, 2016.
+ +Fisher, M., Nolan, T., Graham, M., Prangle, D., and Oates, C. Measure transport with kernel stein discrepancy. In International Conference on Artificial Intelligence and Statistics, pp. 1054-1062. PMLR, 2021. + +Gemici, M. C., Rezende, D., and Mohamed, S. Normalizing flows on riemannian manifolds. arXiv preprint arXiv:1611.02304, 2016. + +Gorham, J. Measuring sample quality with Stein's method. Stanford University, 2017. + +Gorham, J., Duncan, A. B., Vollmer, S. J., Mackey, L., et al. Measuring sample quality with diffusions. Annals of Applied Probability, 29(5):2884-2928, 2019. + +Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018. + +Grathwohl, W., Wang, K.-C., Jacobsen, J.-H., Duvenaud, D., and Zemel, R. Learning the stein discrepancy for training and evaluating energy-based models without sampling. In International Conference on Machine Learning, pp. 3732-3747. PMLR, 2020. + +Hu, T., Chen, Z., Sun, H., Bai, J., Ye, M., and Cheng, G. Stein neural sampler. arXiv preprint arXiv:1810.03545, 2018. + +Hyvärinen, A. and Dayan, P. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005. + +Johnson, O. Information theory and the central limit theorem. World Scientific, 2004. + +Liu, Q. and Wang, D. Stein variational gradient descent: A general purpose bayesian inference algorithm. arXiv preprint arXiv:1608.04471, 2016. + +Liu, Q., Lee, J., and Jordan, M. A kernelized stein discrepancy for goodness-of-fit tests. In International conference on machine learning, pp. 276-284. PMLR, 2016. + +Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. arXiv preprint arXiv:1907.05600, 2019. + +Song, Y., Garg, S., Shi, J., and Ermon, S. Sliced score matching: A scalable approach to density and score estimation. 
In Uncertainty in Artificial Intelligence, pp. 574-584. PMLR, 2020a. + +Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020b. + +Wenliang, L., Sutherland, D., Strathmann, H., and Gretton, A. Learning deep kernels for exponential family densities. In International Conference on Machine Learning, pp. 6737-6746. PMLR, 2019. + +## A. Proof of theorem 3.2 + +Proof. Let's first define the Stein operator as + +$$ +{\mathcal{S}}_{{p}_{Y}}\left\lbrack \mathbf{g}\right\rbrack = {\mathbf{s}}_{{p}_{Y}}{\left( \mathbf{y}\right) }^{T}\mathbf{g}\left( \mathbf{y}\right) + {\nabla }_{\mathbf{y}}^{T}\mathbf{g}\left( \mathbf{y}\right) \tag{14} +$$ + +for the test function $\mathbf{g}\left( \mathbf{y}\right)$ and density ${p}_{Y}\left( \mathbf{y}\right)$ . Thus, the Stein discrepancy can be rewritten as + +$$ +\mathcal{S}\left( {{q}_{Y},{p}_{Y}}\right) = \mathop{\sup }\limits_{{\mathbf{g} \in \mathcal{H}}}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {{\mathcal{S}}_{{p}_{Y}}\left\lbrack \mathbf{g}\right\rbrack }\right\rbrack \tag{15} +$$ + +In the following, we will focus on the Stein operator. 
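As a quick sanity check, the Stein operator of Eq. 14 has zero expectation under ${p}_{Y}$ (Stein's identity), which is what makes Eq. 15 a valid discrepancy. A minimal numerical sketch, assuming a 1-d standard Gaussian for $p$ and the test function $g(y) = \tanh(y)$ (both illustrative choices, not from the paper):

```python
import numpy as np

# Check that E_p[ s_p(y) g(y) + g'(y) ] = 0 (Stein's identity) for
# p = N(0, 1) and g(y) = tanh(y), both illustrative choices.

y = np.linspace(-10.0, 10.0, 200001)
h = y[1] - y[0]

p = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)   # density p(y)
score = -y                                    # s_p(y) = d/dy log p(y)
g = np.tanh(y)                                # test function
dg = 1.0 - g**2                               # its derivative

integrand = p * (score * g + dg)              # p(y) * S_p[g](y)
expectation = np.sum((integrand[1:] + integrand[:-1]) / 2) * h   # trapezoid rule

print(abs(expectation))   # ~0 up to quadrature error
```

Any sufficiently smooth, bounded test function would serve equally well here.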
From the change of variable formula $\mathbf{y} = \mathbf{T}\left( \mathbf{x}\right)$ , we have + +$$ +{p}_{Y}\left( \mathbf{y}\right) = {p}_{X}\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right) \left| \frac{\partial {\mathbf{T}}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| ,\;\mathbf{g}\left( \mathbf{y}\right) = \mathbf{f}\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right) +$$ + +(16) + +Now we can rewrite the Stein operator: + +$$ +{\mathcal{S}}_{{p}_{Y}}\left\lbrack \mathbf{g}\right\rbrack = {\nabla }_{\mathbf{y}}\log {p}_{Y}{\left( \mathbf{y}\right) }^{T}\mathbf{g}\left( \mathbf{y}\right) + {\nabla }_{\mathbf{y}}^{T}\mathbf{g}\left( \mathbf{y}\right) +$$ + +$$ += {\nabla }_{\mathbf{y}}\log {p}_{X}{\left( {\mathbf{T}}^{-1}\left( \mathbf{y}\right) \right) }^{T}\mathbf{g}\left( \mathbf{y}\right) + {\left( {\nabla }_{\mathbf{y}}\log \left| \frac{\partial {\mathbf{T}}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| \right) }^{T}\mathbf{g}\left( \mathbf{y}\right) +$$ + +$$ ++ {\nabla }_{\mathbf{y}}^{T}\mathbf{g}\left( \mathbf{y}\right) +$$ + +$$ += {\left\lbrack {\left( {\nabla }_{\mathbf{y}}{\mathbf{T}}^{-1}\left( \mathbf{y}\right) \right) }^{T}\left( {\nabla }_{{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\log {p}_{X}\left( {\mathbf{T}}^{-1}\left( \mathbf{y}\right) \right) \right) \right\rbrack }^{T}\mathbf{g}\left( \mathbf{y}\right) +$$ + +$$ ++ \underset{\left( 1\right) }{\underbrace{{\left( {\nabla }_{\mathbf{y}}\log \left| \frac{\partial {\mathbf{T}}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| \right) }^{T}\mathbf{g}\left( \mathbf{y}\right) }} +$$ + +$$ ++ \operatorname{Tr}\left\lbrack {\left( {{\nabla }_{\mathbf{y}}{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right) {\nabla }_{{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\mathbf{f}\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right) }\right\rbrack +$$ + +(17) + +The second equality follows from the chain rule and the definition of the divergence operator ${\nabla }^{T}$ . 
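The chain-rule step above can be verified numerically in one dimension. The sketch below assumes $T(x) = x^3 + x$ and $p_X = \mathcal{N}(0,1)$ (illustrative choices, not from the paper) and compares the analytic transformed score against finite differences of $\log p_Y$ :

```python
import numpy as np

# 1-d check of the change-of-variables score used in Eq. 17:
#   d/dy log p_Y(y) = (dT^{-1}/dy) s_{p_X}(T^{-1}(y)) + d/dy log|dT^{-1}/dy|.
# T(x) = x^3 + x is strictly increasing (T' = 3x^2 + 1 >= 1), hence invertible.

def T_inv(y):
    # Newton iteration for the inverse of T(x) = x^3 + x
    x = 0.0
    for _ in range(100):
        x -= (x**3 + x - y) / (3 * x**2 + 1)
    return x

def log_pY(y):
    # log p_Y(y) = log p_X(T^{-1}(y)) + log|dT^{-1}/dy|, with p_X = N(0, 1)
    x = T_inv(y)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi) - np.log(3 * x**2 + 1)

y, h = 1.3, 1e-5
fd_score = (log_pY(y + h) - log_pY(y - h)) / (2 * h)   # finite-difference score

x = T_inv(y)
dT_inv = 1.0 / (3 * x**2 + 1)                          # inverse function theorem
analytic = dT_inv * (-x) - 6 * x / (3 * x**2 + 1)**2   # score term + log-det term

print(abs(fd_score - analytic))   # agreement up to finite-difference error
```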
For the layout of the matrix calculus, we follow the column vector layout as follows: for functions $h : {\mathbb{R}}^{D} \rightarrow \mathbb{R}$ and $\mathbf{f} : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{N}$ , we have + +$$ +\frac{\partial h\left( \mathbf{x}\right) }{\partial \mathbf{x}} = \left\lbrack \begin{matrix} \frac{\partial h\left( \mathbf{x}\right) }{\partial {x}_{1}} \\ \vdots \\ \frac{\partial h\left( \mathbf{x}\right) }{\partial {x}_{D}} \end{matrix}\right\rbrack \tag{18} +$$ + +$$ +\frac{\partial \mathbf{f}\left( \mathbf{x}\right) }{\partial \mathbf{x}} = \left\lbrack \begin{matrix} \frac{\partial {f}_{1}\left( \mathbf{x}\right) }{\partial {x}_{1}} & \ldots & \frac{\partial {f}_{1}\left( \mathbf{x}\right) }{\partial {x}_{D}} \\ \vdots & \ddots & \vdots \\ \frac{\partial {f}_{N}\left( \mathbf{x}\right) }{\partial {x}_{1}} & \ldots & \frac{\partial {f}_{N}\left( \mathbf{x}\right) }{\partial {x}_{D}} \end{matrix}\right\rbrack +$$ + +Now, we focus on term (1): + +$$ +{\nabla }_{\mathbf{y}}\log \left| \frac{\partial {T}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| = \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\mathbf{y}}{T}^{-1}\left( \mathbf{y}\right) \right) }^{-1}{\nabla }_{\mathbf{y}}{\nabla }_{\mathbf{y}}{T}^{-1}\left( \mathbf{y}\right) }\right\rbrack +$$ + +$$ += \operatorname{Tr}\left\lbrack {{\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) {\nabla }_{\mathbf{y}}{\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}}\right\rbrack +$$ + +$$ += \operatorname{Tr}\left\lbrack {{\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) {\nabla }_{\mathbf{y}}{T}^{-1}\left( \mathbf{y}\right) {\nabla }_{\mathbf{x}}{\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}}\right\rbrack +$$ + +$$ += \operatorname{Tr}\left\lbrack {{\nabla }_{\mathbf{x}}{\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}}\right\rbrack +$$ + +(19) where we use the inverse function theorem ${\nabla 
}_{\mathbf{y}}{\mathbf{T}}^{-1}\left( \mathbf{y}\right) =$ ${\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}$ . In addition, we define ${\nabla }_{\mathbf{x}}^{T}{\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1} =$ $\operatorname{Tr}\left\lbrack {{\nabla }_{\mathbf{x}}{\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}}\right\rbrack$ . + +So, setting $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}$ , we obtain: + +$$ +{\mathcal{S}}_{{p}_{Y}}\left\lbrack \mathbf{g}\right\rbrack = {\left( \mathbf{m}{\left( \mathbf{x}\right) }^{T}{\mathbf{s}}_{p}\left( \mathbf{x}\right) \right) }^{T}\mathbf{f}\left( \mathbf{x}\right) + {\left( {\nabla }_{\mathbf{x}}^{T}\mathbf{m}\left( \mathbf{x}\right) \right) }^{T}\mathbf{f}\left( \mathbf{x}\right) +$$ + +$$ ++ \operatorname{Tr}\left\lbrack {\mathbf{m}\left( \mathbf{x}\right) {\nabla }_{\mathbf{x}}\mathbf{f}\left( \mathbf{x}\right) }\right\rbrack +$$ + +$$ += {\left( \mathbf{m}{\left( \mathbf{x}\right) }^{T}{\mathbf{s}}_{p}\left( \mathbf{x}\right) \right) }^{T}\mathbf{f}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}^{T}\left\lbrack {\mathbf{m}\left( \mathbf{x}\right) \mathbf{f}\left( \mathbf{x}\right) }\right\rbrack +$$ + +(20) + +which is exactly the same as the inner part of DSD (Eq. 6). So, with the change of variable formula, we can easily show + +$$ +\mathcal{S}\left( {{q}_{Y},{p}_{Y}}\right) = {DS}{D}_{m}\left( {{q}_{X},{p}_{X}}\right) \tag{21} +$$ + +## B. 
Proof of proposition 3.1 + +With the definition of the Riemannian manifold $\left( {\mathcal{M},\mathbf{g}}\right)$ , for any point $\mathbf{a} \in \mathcal{M}$ with local coordinates $\mathbf{x} \in {\mathbb{R}}^{D}$ , and two vectors $\mathbf{u},\mathbf{v}$ from its tangent plane ${T}_{a}\mathcal{M}$ , we can represent $\mathbf{u},\mathbf{v}$ using the basis ${\left( \frac{\partial }{\partial {x}_{i}}\right) }_{\mathbf{a}}$ as + +$$ +\mathbf{u} = \mathop{\sum }\limits_{{i = 1}}^{D}{u}_{i}{\left( \frac{\partial }{\partial {x}_{i}}\right) }_{\mathbf{a}},\;\mathbf{v} = \mathop{\sum }\limits_{{i = 1}}^{D}{v}_{i}{\left( \frac{\partial }{\partial {x}_{i}}\right) }_{\mathbf{a}} \tag{22} +$$ + +The inner product defined by the metric $\mathbf{g}$ can be expressed as + +$$ +\mathbf{g}\left( {\mathbf{u},\mathbf{v}}\right) = \mathop{\sum }\limits_{{i, j}}^{D}{u}_{i}{v}_{j}{\left\langle {\left( \frac{\partial }{\partial {x}_{i}}\right) }_{\mathbf{a}},{\left( \frac{\partial }{\partial {x}_{j}}\right) }_{\mathbf{a}}\right\rangle }_{g} = \mathop{\sum }\limits_{{i, j}}^{D}{u}_{i}{g}_{ij}\left( \mathbf{x}\right) {v}_{j} +$$ + +(23) + +where ${g}_{ij}\left( \mathbf{x}\right)$ is the ${ij}$ -th element of the matrix $\mathbf{G}\left( \mathbf{x}\right)$ and $\langle \cdot , \cdot {\rangle }_{g}$ is the inner product defined by the Riemannian metric $\mathbf{g}$ . + +We assume the measure $\mathcal{M}\left( \mathbf{x}\right)$ is absolutely continuous w.r.t. 
Lebesgue measure; then we have the following change of variable formula + +$$ +d\mathcal{M}\left( \mathbf{x}\right) = \sqrt{\left| \mathbf{G}\left( \mathbf{x}\right) \right| }d\mathbf{x} \tag{24} +$$ + +Then we can represent the densities $\widetilde{p},\widetilde{q}$ under the Lebesgue measure + +$$ +p\left( \mathbf{x}\right) = \frac{d\mathbb{P}}{d\mathcal{M}\left( \mathbf{x}\right) }\frac{d\mathcal{M}\left( \mathbf{x}\right) }{d\mathbf{x}} = \widetilde{p}\left( \mathbf{x}\right) \sqrt{\left| \mathbf{G}\left( \mathbf{x}\right) \right| } \tag{25} +$$ + +and $q\left( \mathbf{x}\right)$ is defined accordingly. The score matching loss for $\widetilde{p}$ and $\widetilde{q}$ is + +$$ +{\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right) = \frac{1}{2}\int \widetilde{q}\left( \mathbf{x}\right) \parallel \nabla \log \widetilde{p}\left( \mathbf{x}\right) - \nabla \log \widetilde{q}\left( \mathbf{x}\right) {\parallel }_{g}^{2}d\mathcal{M}\left( \mathbf{x}\right) +$$ + +$$ += \frac{1}{2}\int q\left( \mathbf{x}\right) \parallel \nabla \log \widetilde{p}\left( \mathbf{x}\right) - \nabla \log \widetilde{q}\left( \mathbf{x}\right) {\parallel }_{g}^{2}d\mathbf{x} +$$ + +(26) + +Now let's define $\nabla \log \widetilde{p}\left( \mathbf{x}\right)$ . 
From the basics of Riemannian geometry, for a point $\mathbf{a} \in \mathcal{M}$ with local coordinate $\mathbf{x}$ , and a vector field $\mathcal{X}$ on $\mathcal{M}$ , we have the following definition + +$$ +{\left\langle \mathop{\sum }\limits_{{i = 1}}^{D}{\left( \nabla \log \widetilde{p}\left( \mathbf{x}\right) \right) }_{i}{\left( \frac{\partial }{\partial {x}_{i}}\right) }_{\mathbf{a}},\mathop{\sum }\limits_{{j = 1}}^{D}{\mathcal{X}}_{j}\left( \frac{\partial }{\partial {x}_{j}}\right) \right\rangle }_{g} = \mathop{\sum }\limits_{{i = 1}}^{D}{\mathcal{X}}_{i}\frac{\partial \log \widetilde{p}}{\partial {x}_{i}} +$$ + +(27) + +In matrix form, with $\mathbf{X} =$ ${\left\lbrack {\mathcal{X}}_{1},\ldots ,{\mathcal{X}}_{D}\right\rbrack }^{T}$ and ${g}_{ij}\left( \mathbf{x}\right)$ the entries of the symmetric positive definite matrix $\mathbf{G}\left( \mathbf{x}\right)$ , we have + +$$ +{\left( \nabla \log \widetilde{p}\right) }^{T}\mathbf{G}\left( \mathbf{x}\right) \mathbf{X} = {\left( \frac{\partial \log \widetilde{p}}{\partial \mathbf{x}}\right) }^{T}\mathbf{X} \tag{28} +$$ + +$$ +\Rightarrow \nabla \log \widetilde{p} = {\mathbf{G}}^{-1}\left( \mathbf{x}\right) \left( \frac{\partial \log \widetilde{p}}{\partial \mathbf{x}}\right) +$$ + +Therefore, we have + +$$ +\parallel \nabla \log \widetilde{p}\left( \mathbf{x}\right) - \nabla \log \widetilde{q}\left( \mathbf{x}\right) {\parallel }_{g}^{2} +$$ + +$$ += \langle \nabla \log \widetilde{p}\left( \mathbf{x}\right) - \nabla \log \widetilde{q}\left( \mathbf{x}\right) ,\nabla \log \widetilde{p}\left( \mathbf{x}\right) - \nabla \log \widetilde{q}\left( \mathbf{x}\right) {\rangle }_{g} +$$ + +$$ += {\left\langle {\mathbf{G}}^{-1}\left( \mathbf{x}\right) \underset{\widetilde{\Delta }\left( \mathbf{x}\right) }{\underbrace{\left( \frac{\partial \log \widetilde{p}}{\partial \mathbf{x}} - \frac{\partial \log \widetilde{q}}{\partial \mathbf{x}}\right) }},{\mathbf{G}}^{-1}\left( \mathbf{x}\right) \left( 
\frac{\partial \log \widetilde{p}}{\partial \mathbf{x}} - \frac{\partial \log \widetilde{q}}{\partial \mathbf{x}}\right) \right\rangle }_{g} +$$ + +$$ += \widetilde{\Delta }{\left( \mathbf{x}\right) }^{T}{\mathbf{G}}^{-1}\left( \mathbf{x}\right) \mathbf{G}\left( \mathbf{x}\right) {\mathbf{G}}^{-1}\left( \mathbf{x}\right) \widetilde{\Delta }\left( \mathbf{x}\right) +$$ + +$$ += \widetilde{\Delta }{\left( \mathbf{x}\right) }^{T}{\mathbf{G}}^{-1}\left( \mathbf{x}\right) \widetilde{\Delta }\left( \mathbf{x}\right) +$$ + +(29) + +By the change of variable formula, it is also easy to show that + +$$ +\widetilde{\Delta }\left( \mathbf{x}\right) = \underset{\Delta \left( \mathbf{x}\right) }{\underbrace{\left( \frac{\partial \log p}{\partial \mathbf{x}} - \frac{\partial \log q}{\partial \mathbf{x}}\right) }} \tag{30} +$$ + +Therefore, we have + +$$ +{\begin{Vmatrix}\nabla \log \widetilde{p}\left( \mathbf{x}\right) - \nabla \log \widetilde{q}\left( \mathbf{x}\right) \end{Vmatrix}}_{g}^{2} = {\Delta }^{T}\left( \mathbf{x}\right) {\mathbf{G}}^{-1}\left( \mathbf{x}\right) \Delta \left( \mathbf{x}\right) +$$ + +(31) + +Substituting back into ${\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right)$ (Eq. 26), we obtain the result. In particular, comparing with DSM (Eq. 5), we can observe that if $\mathbf{G}\left( \mathbf{x}\right) = \mathbf{m}{\left( \mathbf{x}\right) }^{-T}\mathbf{m}{\left( \mathbf{x}\right) }^{-1}$ , then ${\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right)$ is equivalent to ${DSM}$ . Indeed, as $\mathbf{m}\left( \mathbf{x}\right) \in {\mathbb{R}}^{D \times D}$ is an invertible matrix, $\mathbf{m}{\left( \mathbf{x}\right) }^{-T}\mathbf{m}{\left( \mathbf{x}\right) }^{-1}$ must be symmetric positive definite, which satisfies the requirements for $\mathbf{G}\left( \mathbf{x}\right)$ . + +## C. 
Proof of proposition 3.2 + +An ODE flow is defined by the solution of the following ODE: + +$$ +d\mathbf{x} = \mathbf{g}\left( \mathbf{x}\right) {dt} \tag{32} +$$ + +with $\mathbf{g}\left( \mathbf{x}\right)$ the deterministic drift term. Let us consider the forward Euler discretisation of the ODE, which gives + +$$ +\mathbf{x}\left( {t + \delta }\right) = \mathbf{x}\left( t\right) + \delta \mathbf{g}\left( {\mathbf{x}\left( t\right) }\right) \mathrel{\text{:=}} {\mathbf{T}}_{\delta }\left( {\mathbf{x}\left( t\right) }\right) . \tag{33} +$$ + +For sufficiently small $\delta$ , ${\mathbf{T}}_{\delta }$ is an invertible transformation. Now consider $\mathbf{y} = \mathbf{x}\left( {t + \delta }\right)$ and $\mathbf{x}\left( t\right) = \mathbf{x}$ . This again pushes ${p}_{X}\left( \mathbf{x}\right)$ and ${q}_{X}\left( \mathbf{x}\right)$ to ${p}_{Y}\left( \mathbf{y}\right)$ and ${q}_{Y}\left( \mathbf{y}\right)$ , respectively. Therefore we can reuse results from theorem 3.1 and derive + +$$ +F\left( {{p}_{Y},{q}_{Y}}\right) = \frac{1}{2}{\mathbb{E}}_{{q}_{X}}\left\lbrack {\begin{Vmatrix}\mathbf{m}{\left( \mathbf{x}\right) }^{T}\left( {\mathbf{s}}_{{p}_{X}}\left( \mathbf{x}\right) - {\mathbf{s}}_{{q}_{X}}\left( \mathbf{x}\right) \right) \end{Vmatrix}}^{2}\right\rbrack , +$$ + +(34) + +where $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}{\mathbf{T}}_{\delta }\left( \mathbf{x}\right) \right) }^{-1}$ . Notice that ${\mathbf{T}}_{\delta }\left( \mathbf{x}\right) = \mathbf{x}$ when $\delta = 0$ . 
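The Euler-step construction can be checked numerically. The sketch below assumes a linear drift $\mathbf{g}(\mathbf{x}) = J\mathbf{x}$ (an illustrative choice), for which ${\nabla }_{\mathbf{x}}\mathbf{g} = J$ exactly, and verifies the small-$\delta$ behaviour ${\delta }^{-1}\left( \mathbf{m}{\mathbf{m}}^{\top } - \mathbf{I}\right) \rightarrow - \left( {\nabla }_{\mathbf{x}}\mathbf{g} + {\nabla }_{\mathbf{x}}\mathbf{g}^{\top }\right)$ derived in the remainder of this proof:

```python
import numpy as np

# For T_delta(x) = x + delta * g(x) with Jacobian I + delta * J,
# delta^{-1} (m m^T - I) -> -(J + J^T) as delta -> 0+,
# where m = (grad T_delta)^{-1}.

J = np.array([[0.5, -0.3],
              [0.2,  0.1]])   # Jacobian of the linear drift g(x) = J x
I = np.eye(2)

delta = 1e-6
m = np.linalg.inv(I + delta * J)    # m(x) = (I + delta J)^{-1}
lhs = (m @ m.T - I) / delta         # delta^{-1}(m m^T - I)
rhs = -(J + J.T)                    # claimed limit

print(np.max(np.abs(lhs - rhs)))    # residual is O(delta)
```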
This means we can compute the change of score matching at time $t$ as: + +$$ +\frac{\partial }{\partial t}F\left( {{p}_{Y},{q}_{Y}}\right) = \mathop{\lim }\limits_{{\delta \rightarrow {0}^{ + }}}\frac{F\left( {{p}_{Y},{q}_{Y}}\right) - F\left( {{p}_{X},{q}_{X}}\right) }{\delta } +$$ + +$$ += \frac{1}{2}\mathop{\lim }\limits_{{\delta \rightarrow {0}^{ + }}}{\mathbb{E}}_{{q}_{X}\left( \mathbf{x}\right) }\left\lbrack {\Delta {\left( \mathbf{x}\right) }^{\top }{\delta }^{-1}\left( {m\left( \mathbf{x}\right) m{\left( \mathbf{x}\right) }^{\top } - \mathbf{I}}\right) \Delta \left( \mathbf{x}\right) }\right\rbrack , +$$ + +(35) + +with $\Delta \left( \mathbf{x}\right) = {\nabla }_{\mathbf{x}}\log {p}_{X}\left( \mathbf{x}\right) - {\nabla }_{\mathbf{x}}\log {q}_{X}\left( \mathbf{x}\right)$ . As ${\nabla }_{\mathbf{x}}{T}_{\delta }\left( \mathbf{x}\right) = \mathbf{I} + \delta {\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right)$ , simple calculation shows that + +$$ +{\delta }^{-1}\left( {m\left( \mathbf{x}\right) m{\left( \mathbf{x}\right) }^{\top } - \mathbf{I}}\right) +$$ + +$$ += {\delta }^{-1}\left\lbrack {{\left\lbrack {\left( \mathbf{I} + \delta {\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) \right) }^{\top }\left( \mathbf{I} + \delta {\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) \right) \right\rbrack }^{-1} - \mathbf{I}}\right\rbrack +$$ + +$$ += {\delta }^{-1}\left\lbrack {{\left\lbrack \mathbf{I} + \delta \left( {\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}\mathbf{g}{\left( \mathbf{x}\right) }^{\top }\right) + \mathcal{O}\left( {\delta }^{2}\right) \right\rbrack }^{-1} - \mathbf{I}}\right\rbrack +$$ + +$$ += - \left\lbrack {{\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}\mathbf{g}{\left( \mathbf{x}\right) }^{\top } + \mathcal{O}\left( \delta \right) }\right\rbrack +$$ + +$$ +{\left\lbrack \mathbf{I} + \delta \left( {\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) 
+ {\nabla }_{\mathbf{x}}\mathbf{g}{\left( \mathbf{x}\right) }^{\top }\right) + \mathcal{O}\left( {\delta }^{2}\right) \right\rbrack }^{-1}, +$$ + +(36) + +which leads to + +$$ +\frac{\partial }{\partial t}F\left( {{p}_{Y},{q}_{Y}}\right) +$$ + +$$ += \mathop{\lim }\limits_{{\delta \rightarrow {0}^{ + }}}\frac{1}{2}{\mathbb{E}}_{{q}_{X}\left( \mathbf{x}\right) }\left\lbrack {\Delta {\left( \mathbf{x}\right) }^{\top }{\delta }^{-1}\left( {m\left( \mathbf{x}\right) m{\left( \mathbf{x}\right) }^{\top } - \mathbf{I}}\right) \Delta \left( \mathbf{x}\right) }\right\rbrack +$$ + +$$ += - \frac{1}{2}{\mathbb{E}}_{{q}_{X}\left( \mathbf{x}\right) }\left\lbrack {\Delta {\left( \mathbf{x}\right) }^{\top }\left( {{\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}\mathbf{g}{\left( \mathbf{x}\right) }^{\top }}\right) \Delta \left( \mathbf{x}\right) }\right\rbrack +$$ + +(37) + +As this quantifies the instantaneous change, replacing ${p}_{Y},{p}_{X}$ and ${q}_{Y},{q}_{X}$ with ${p}_{t}$ and ${q}_{t}$ gives the instantaneous change of the score matching loss. + +## D. Additional plots + +From the motivating example and theorem 3.1, we know $\mathbf{m}\left( \mathbf{x}\right) = \left( {1 + \frac{{\left( \mathbf{x} - \theta \right) }^{2}}{b}}\right)$ . 
Therefore, by simple calculus, the corresponding transformation $\mathbf{y} = \mathbf{T}\left( \mathbf{x}\right)$ can be defined as + +$$ +\mathbf{y} = \mathbf{T}\left( \mathbf{x}\right) = \frac{1}{b\sqrt{b}}{\tan }^{-1}\left( \frac{\mathbf{x} - \theta }{\sqrt{b}}\right) \tag{38} +$$ + +$$ +\mathbf{x} = {\mathbf{T}}^{-1}\left( \mathbf{y}\right) = \sqrt{b}\tan \left( {b\sqrt{b}\mathbf{y}}\right) + \theta +$$ + +Let's define the transformed densities ${p}_{Y}\left( \mathbf{y}\right)$ and ${q}_{Y}\left( \mathbf{y}\right)$ as + +$$ +{p}_{Y}\left( \mathbf{y}\right) = p\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right) \left| {{\nabla }_{\mathbf{y}}{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right| \tag{39} +$$ + +$$ +{q}_{Y}\left( \mathbf{y}\right) = q\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right) \left| {{\nabla }_{\mathbf{y}}{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right| +$$ + +Therefore, we can plot the log likelihood for the original densities $p, q$ and the transformed densities ${p}_{Y},{q}_{Y}$ in Figure 2. + +![01963e4a-40a8-7943-9c90-781bd76352b4_7_207_184_1334_485_0.jpg](images/01963e4a-40a8-7943-9c90-781bd76352b4_7_207_184_1334_485_0.jpg) + +Figure 2. Left: the log-likelihood plot for the original densities $q, p$ . Middle: the log-likelihood function for the transformed density ${p}_{Y}$ . Right: the log-likelihood function for ${q}_{Y}$ . We choose $\theta = - {2.5}$ and $b = {0.6}$ . Notice that the transformed densities ${p}_{Y},{q}_{Y}$ are periodic, since we consider $y \in \mathbb{R}$ . This does not happen if we consider $\mathbf{y} = \mathbf{T}\left( \mathbf{x}\right)$ , because all $\mathbf{x}$ values are squeezed into the period containing 0, i.e. $\mathbf{y}$ lies inside $\left\lbrack {-{3.37},{3.37}}\right\rbrack$ in this case. + +In this case, we set $p\left( \mathbf{x}\right)$ to have mean -2.5 with the same scale 0.3 as $q$ , whereas $q$ has mean 0 . 
For the transformation $\mathbf{T}$ , we set $\theta = - {2.5}$ with $b = {0.6}$ . \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/jxsmOXCDv9l/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/jxsmOXCDv9l/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..fe8cd9ec044fb237bbdc44c26e770d046f6edc79 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/jxsmOXCDv9l/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,203 @@ +§ INTERPRETING DIFFUSION SCORE MATCHING USING NORMALIZING FLOW + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Score matching (SM) and its related counterpart, Stein discrepancy (SD), have achieved great success in model training and evaluation. However, recent research shows their limitations when dealing with certain types of distributions. One possible fix is to incorporate the original score matching (or Stein discrepancy) with a diffusion matrix, which is called diffusion score matching (DSM) (or diffusion Stein discrepancy (DSD)). However, the lack of an interpretation of the diffusion matrix limits its usage to simple distributions and manually chosen matrices. In this work, we fill this gap by interpreting the diffusion matrix using normalizing flows. Specifically, we theoretically prove that DSM (or DSD) is equivalent to the original score matching (or Stein discrepancy) evaluated in the transformed space defined by the normalizing flow, where the diffusion matrix is the inverse of the flow's Jacobian matrix. In addition, we build its connection to Riemannian manifolds, and further extend it to continuous flows, where the change of DSM is characterized by an ODE. + +§ 1. 
INTRODUCTION + +Recently, score matching (Hyvärinen & Dayan, 2005) and its closely related counterpart, Stein discrepancy (Gorham, 2017), have made great progress in both the understanding of their theoretical properties and their practical usage. Particularly, unlike the Kullback-Leibler (KL) divergence, which can only be used for distributions with a known normalizing constant, SM (or SD) can be evaluated for unnormalized densities and requires fewer assumptions on the probability distributions (Fisher et al., 2021). Such useful properties enable them to be widely applied in training energy-based models (EBMs) (Song et al., 2020a; Grathwohl et al., 2020; Wenliang et al., 2019), state-of-the-art score-based generative models (Song & Ermon, 2019; Song et al., 2020b), statistical tests (Liu et al., 2016; Chwialkowski et al., 2016) and variational inference (Hu et al., 2018; Liu & Wang, 2016). + +Despite their elegant statistical properties, recent work (Barp et al., 2019) demonstrated their failure when dealing with certain types of distributions (e.g. heavy-tailed distributions). For instance, when the data and the model are heavy-tailed distributions, the model can fail to recover the true mode even in the one-dimensional case. The root of this problem is that the SM (or SD) objective is highly non-convex and does not correlate well with likelihood. To fix this, Barp et al. (2019) proposed a variant called diffusion score matching (and diffusion Stein discrepancy), where a diffusion matrix is introduced. However, the authors did not provide an interpretation of this diffusion matrix. In fact, the diffusion matrix used by Barp et al. (2019) is manually chosen for toy densities. Such a lack of interpretation hinders further development of a proper training method for the diffusion matrix. + +In this paper, we aim to give an interpretation based on normalizing flows, which sheds light on developing training methods for the diffusion matrix. 
We summarize our contributions as follows: + + * We theoretically prove that DSM (or DSD) is equivalent to the original SM (or SD) performed in the transformed space defined by the normalizing flow. The diffusion matrix is exactly the inverse of the flow's Jacobian matrix. + + * We further show its connection to Riemannian manifolds. Specifically, we show the diffusion matrix is closely related to the Riemannian metric tensor. + + * We further extend DSM to its continuous version. Namely, we derive an ODE to characterize its instantaneous change. + +We hope that by building these connections, a broad range of techniques from the normalizing flow community can be leveraged to develop training methods for the diffusion matrix. + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +§ 2. BACKGROUND: DIFFUSION STEIN DISCREPANCY + +§ 2.1. SCORE MATCHING AND STEIN DISCREPANCY + +Let $\mathcal{P}$ be the space of Borel probability measures on ${\mathbb{R}}^{D}$ and $\mathbb{Q} \in \mathcal{P}$ a probability measure; the objective of model learning is to find a sequence of probability measures $\left\{ {{\mathbb{P}}_{\theta } : \theta \in \Theta }\right\} \subset \mathcal{P}$ that approximates $\mathbb{Q}$ in an appropriate sense. One common way to achieve this is by defining a discrepancy measure $\mathcal{D} : \mathcal{P} \times \mathcal{P} \rightarrow \mathbb{R}$ , which quantifies the difference between two probability measures. Thus, the optimal parameters ${\theta }^{ * }$ can be obtained by ${\theta }^{ * } = \operatorname{argmin}_{\theta }\mathcal{D}\left( {\mathbb{Q}\parallel {\mathbb{P}}_{\theta }}\right)$ . The choice of discrepancy depends on the properties of the probability measures, efficiency, and robustness. The one we focus on is the Fisher divergence. 
Assume that for the probability measures $\mathbb{Q}$ and ${\mathbb{P}}_{\theta }$ we have corresponding twice differentiable densities $q\left( \mathbf{x}\right) ,{p}_{\theta }\left( \mathbf{x}\right)$ . The Fisher divergence (Johnson, 2004) is defined as + +$$ +\mathcal{F}\left( {q,p}\right) = \frac{1}{2}{\mathbb{E}}_{q}\left\lbrack {\begin{Vmatrix}{\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right) \end{Vmatrix}}^{2}\right\rbrack \tag{1} +$$ + +where ${\mathbf{s}}_{p}\left( \mathbf{x}\right) = {\nabla }_{\mathbf{x}}\log {p}_{\theta }\left( \mathbf{x}\right)$ is called the score of ${p}_{\theta }$ , and ${s}_{q}$ is defined accordingly. Although $q$ is often the underlying data density with an intractable score ${s}_{q}$ , the term involving ${s}_{q}$ in fact acts as a constant w.r.t. the parameter $\theta$ . Thus, one can use integration by parts to derive the following: + +$$ +\mathcal{F}\left( {q,{p}_{\theta }}\right) = \underset{\mathcal{L}\left( \theta \right) }{\underbrace{{\mathbb{E}}_{q}\left\lbrack {\frac{1}{2}{\begin{Vmatrix}{\mathbf{s}}_{p}\left( \mathbf{x}\right) \end{Vmatrix}}^{2} + \operatorname{Tr}\left( {{\nabla }_{\mathbf{x}}{\mathbf{s}}_{p}\left( \mathbf{x}\right) }\right) }\right\rbrack }} + {C}_{q} \tag{2} +$$ + +where ${C}_{q}$ is a constant w.r.t. the parameter $\theta$ . This alternative objective $\mathcal{L}\left( \theta \right)$ is referred to as score matching (Hyvärinen & Dayan, 2005). + +Another discrepancy measure we are interested in is the Stein discrepancy, which is defined as + +$$ +\mathcal{S}\left( {q,{p}_{\theta }}\right) = \mathop{\sup }\limits_{{\mathbf{f} \in \mathcal{H}}}{\mathbb{E}}_{q}\left\lbrack {{\mathbf{s}}_{p}{\left( \mathbf{x}\right) }^{T}\mathbf{f}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}^{T}\mathbf{f}\left( \mathbf{x}\right) }\right\rbrack \tag{3} +$$ + +where $\mathbf{f} : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ is a test function, and $\mathcal{H}$ is an appropriate test function family, e.g. 
a reproducing kernel Hilbert space (Liu et al., 2016; Chwialkowski et al., 2016) or a Stein class (Gorham, 2017; Liu et al., 2016). Recent work (Hu et al., 2018) proved a strong connection between the Stein discrepancy and the Fisher divergence by deriving the optimal test function: + +$$ +{\mathbf{f}}^{ * }\left( \mathbf{x}\right) \propto {\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right) \tag{4} +$$ + +Thus, by substituting back into Eq. 3, one can show that the Stein discrepancy is equivalent to the Fisher divergence up to a multiplicative constant. + +Recent works (Barp et al., 2019; Gorham et al., 2019) further extend score matching and the Stein discrepancy by incorporating a diffusion matrix $\mathbf{m}\left( \mathbf{x}\right) : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D \times D}$ , yielding diffusion score matching (DSM) and diffusion Stein discrepancy (DSD). DSM is defined as + +$$ +{DS}{M}_{m}\left( {q,{p}_{\theta }}\right) = \frac{1}{2}{\mathbb{E}}_{q}\left\lbrack {\begin{Vmatrix}\mathbf{m}{\left( \mathbf{x}\right) }^{T}\left( {\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right) \right) \end{Vmatrix}}^{2}\right\rbrack \tag{5} +$$ + +Figure 1. The SM loss computed for different $\theta$ . Orange line: original SM loss, with the orange dashed line indicating the valid initialization region. Blue line: DSM loss, with the blue dashed line indicating the optimal solution ${\theta }^{ * }$ . The black dashed line indicates the true $\theta$ . + +where $\mathbf{m}\left( \mathbf{x}\right)$ is a matrix-valued function. 
Similarly, DSD is defined as + +$$ +{DS}{D}_{m}\left( {q,{p}_{\theta }}\right) +$$ + +$$ += \mathop{\sup }\limits_{{\mathbf{f} \in \mathcal{H}}}{\mathbb{E}}_{q}\left\lbrack {{\left( \mathbf{m}{\left( \mathbf{x}\right) }^{T}{\mathbf{s}}_{p}\left( \mathbf{x}\right) \right) }^{T}\mathbf{f}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}^{T}\left( {\mathbf{m}\left( \mathbf{x}\right) \mathbf{f}\left( \mathbf{x}\right) }\right) }\right\rbrack +$$ + +(6) + +It can be shown that as long as $\mathbf{m}\left( \mathbf{x}\right)$ is invertible, ${DS}{M}_{m}\left( {q,{p}_{\theta }}\right)$ and ${DS}{D}_{m}\left( {q,{p}_{\theta }}\right)$ are valid divergences. + +These two extensions have demonstrated superior performance when dealing with certain types of distributions. In the following, we give a motivating example similar to Barp et al. (2019). + +§ 2.2. MOTIVATING EXAMPLE: STUDENT-T DISTRIBUTION + +Let us assume $q,{p}_{\theta }$ are one-dimensional Student-t distributions. The target is to approximate $q$ by ${p}_{\theta }$ . The training set consists of 300 i.i.d. samples drawn from $q$ with mean 0 and scale 0.3. We assume the scale parameter of ${p}_{\theta }$ is the same as that of $q$ , and the only trainable parameter $\theta$ is the mean. The degrees of freedom are 5 for both $q$ and ${p}_{\theta }$ . Figure 1 shows the score matching loss computed for different $\theta$ . We can observe that the original SM loss is highly non-convex, and its value does not correlate well with the likelihood. Indeed, the true location $\theta = 0$ is protected by two high 'walls'. In other words, unlike the maximum likelihood estimator, a parameter $\theta$ that is closer to the ground truth does not necessarily produce a lower SM loss. One important consequence is that unless the initialized $\theta$ is within the narrow valid region, gradient-based optimization will never recover the ground truth. 
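The motivating example can be reproduced with a short numerical sketch. Here we integrate the population SM and DSM losses on a grid rather than using the paper's 300-sample training set, so the curves are only qualitatively comparable to Figure 1; the Student-t density and score formulas below are standard:

```python
import numpy as np
from math import gamma, sqrt, pi

# Population SM and DSM losses over theta for Student-t q (mean 0) and
# p_theta (mean theta), both with df = 5 and scale 0.3, as in Section 2.2.

NU, S = 5.0, 0.3

def t_pdf(x, loc):
    z = (x - loc) / S
    c = gamma((NU + 1) / 2) / (sqrt(NU * pi) * S * gamma(NU / 2))
    return c * (1 + z**2 / NU) ** (-(NU + 1) / 2)

def t_score(x, loc):
    # d/dx log p(x) = -(nu + 1)(x - loc) / (nu s^2 + (x - loc)^2)
    return -(NU + 1) * (x - loc) / (NU * S**2 + (x - loc)**2)

x = np.linspace(-40, 40, 80001)
h = x[1] - x[0]
q, sq = t_pdf(x, 0.0), t_score(x, 0.0)

def loss(theta, diffusion):
    d = t_score(x, theta) - sq                                # s_p - s_q
    m = (1 + (x - theta)**2 / 0.6) if diffusion else 1.0      # diffusion matrix
    f = 0.5 * q * (m * d)**2
    return np.sum((f[1:] + f[:-1]) / 2) * h                   # trapezoid rule

thetas = np.linspace(-3, 3, 61)
sm = np.array([loss(t, False) for t in thetas])
dsm = np.array([loss(t, True) for t in thetas])
# As in Figure 1, the SM curve is non-convex with high 'walls' around the
# true theta = 0, while the DSM curve attains its minimum of 0 at theta = 0.
```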
On the other hand, if we choose $\mathbf{m}\left( \mathbf{x}\right) = \left( {1 + \frac{{\left( \mathbf{x} - \theta \right) }^{2}}{0.6}}\right)$ as the diffusion matrix, the corresponding ${DSM}_{m}\left( {q,{p}_{\theta }}\right)$ loss is convex, and the ground truth can be recovered by minimizing DSM with a proper gradient-based optimizer.

The selection of the diffusion matrix is crucial to the success of the estimator. Unfortunately, the interpretation of this matrix is unclear, let alone a principled selection algorithm. In the following, we aim to shed light on this problem by connecting the diffusion matrix with normalizing flows.

## 3. Diffusion Matrix as Normalizing Flow

## 3.1. Interpreting DSM/DSD Using Normalizing Flows

Assume we have two twice-differentiable densities ${q}_{X}\left( \mathbf{x}\right) ,{p}_{X}\left( \mathbf{x}\right)$ defined on ${\mathbb{R}}^{D}$ , and a differentiable invertible transformation $\mathbf{T}\left( \mathbf{x}\right) : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ . We define

$$
\mathbf{y} = \mathbf{T}\left( \mathbf{x}\right) \tag{7}
$$

with corresponding densities ${q}_{Y}\left( \mathbf{y}\right)$ and ${p}_{Y}\left( \mathbf{y}\right)$ . We can prove the following theorem:

Theorem 3.1. For twice differentiable densities ${q}_{X}\left( \mathbf{x}\right)$ , ${p}_{X}\left( \mathbf{x}\right)$ and an invertible differentiable transformation $T : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ , the diffusion score matching objective (Eq. 5) is equivalent to the original score matching objective

$$
\mathcal{F}\left( {{q}_{Y},{p}_{Y}}\right) = \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\mathbf{s}}_{{p}_{Y}}\left( \mathbf{y}\right) - {\mathbf{s}}_{{q}_{Y}}\left( \mathbf{y}\right) \end{Vmatrix}}^{2}\right\rbrack \tag{8}
$$

where $\mathbf{y} = T\left( \mathbf{x}\right)$ , and ${p}_{Y},{q}_{Y}$ are the corresponding densities after the transformation.
The diffusion matrix is the inverse of the Jacobian, $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}$ .

Proof. From the change of variables formula, the corresponding densities ${p}_{Y}\left( \mathbf{y}\right) ,{q}_{Y}\left( \mathbf{y}\right)$ are

$$
{p}_{Y}\left( \mathbf{y}\right) = {p}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) \left| \frac{\partial {T}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| ,
$$

$$
{q}_{Y}\left( \mathbf{y}\right) = {q}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) \left| \frac{\partial {T}^{-1}\left( \mathbf{y}\right) }{\partial \mathbf{y}}\right| .
$$

Then the Fisher divergence (Eq. 1) for ${p}_{Y},{q}_{Y}$ can be written as follows (note that the log-determinant terms are identical for ${p}_{Y}$ and ${q}_{Y}$ and therefore cancel in the difference of scores):

$$
\mathcal{F}\left( {{q}_{Y},{p}_{Y}}\right) \mathrel{\text{:=}} \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\nabla }_{\mathbf{y}}\log {p}_{Y}\left( \mathbf{y}\right) - {\nabla }_{\mathbf{y}}\log {q}_{Y}\left( \mathbf{y}\right) \end{Vmatrix}}_{2}^{2}\right\rbrack
$$

$$
= \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\nabla }_{\mathbf{y}}\log {p}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) - {\nabla }_{\mathbf{y}}\log {q}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) \end{Vmatrix}}_{2}^{2}\right\rbrack
$$

$$
= \frac{1}{2}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {\begin{Vmatrix}{\nabla }_{\mathbf{y}}{T}^{-1}{\left( \mathbf{y}\right) }^{\top }\left( {{\nabla }_{{T}^{-1}\left( \mathbf{y}\right) }\log {p}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) - {\nabla }_{{T}^{-1}\left( \mathbf{y}\right) }\log {q}_{X}\left( {{T}^{-1}\left( \mathbf{y}\right) }\right) }\right) \end{Vmatrix}}_{2}^{2}\right\rbrack
$$

$$
= \frac{1}{2}{\mathbb{E}}_{{q}_{X}}\left\lbrack {\begin{Vmatrix}{\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-\top }\left( {{\nabla }_{\mathbf{x}}\log {p}_{X}\left( \mathbf{x}\right) - {\nabla }_{\mathbf{x}}\log {q}_{X}\left( \mathbf{x}\right) }\right) \end{Vmatrix}}_{2}^{2}\right\rbrack , \tag{9}
$$

where the last step comes from changing the variable to $\mathbf{x} = {T}^{-1}\left( \mathbf{y}\right)$ and noticing that ${\nabla }_{\mathbf{y}}{T}^{-1}\left( \mathbf{y}\right) = {\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}$ by the inverse function theorem. This objective coincides with diffusion score matching (Eq. 5). Importantly, ${DSM}_{m}\left( {{q}_{X},{p}_{X}}\right)$ is a valid divergence (i.e. ${DSM}_{m}\left( {{p}_{X},{q}_{X}}\right) = 0$ iff ${p}_{X} = {q}_{X}$ ) when $\mathbf{m}\left( \mathbf{x}\right)$ is an invertible matrix for every $\mathbf{x}$ . As normalizing flow transformations naturally have invertible Jacobian matrices, we can easily establish the connection $\mathcal{F}\left( {{p}_{Y},{q}_{Y}}\right) = {DSM}_{m}\left( {{p}_{X},{q}_{X}}\right)$ with $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}T\left( \mathbf{x}\right) \right) }^{-1}$ .

We also include additional plots for the Student-t example. Specifically, we plot the log-likelihood function after the transformation defined by $\mathbf{m}\left( \mathbf{x}\right) = \left( {1 + \frac{{\left( \mathbf{x} - \theta \right) }^{2}}{0.6}}\right)$ in Figure 2 (Appendix D).

Similarly, we can prove the connection between DSD (Eq. 6) and normalizing flows. The proof is in Appendix A.

Theorem 3.2.
For twice differentiable densities ${q}_{X}\left( \mathbf{x}\right)$ , ${p}_{X}\left( \mathbf{x}\right)$ , an invertible differentiable transformation $\mathbf{T}\left( \mathbf{x}\right) : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ and differentiable test functions $\mathbf{f} : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ in a suitable test function family $\mathcal{H}$ , the diffusion Stein discrepancy (Eq. 6) is equivalent to the original Stein discrepancy

$$
\mathcal{S}\left( {{q}_{Y},{p}_{Y}}\right) = \mathop{\sup }\limits_{{\mathbf{g} \in {\mathcal{H}}^{\prime }}}{\mathbb{E}}_{{q}_{Y}}\left\lbrack {{\mathbf{s}}_{{p}_{Y}}{\left( \mathbf{y}\right) }^{T}\mathbf{g}\left( \mathbf{y}\right) + {\nabla }_{\mathbf{y}}^{T}\mathbf{g}\left( \mathbf{y}\right) }\right\rbrack \tag{10}
$$

where $\mathbf{g}\left( \mathbf{y}\right) = \mathbf{f}\left( {{\mathbf{T}}^{-1}\left( \mathbf{y}\right) }\right)$ , ${\mathcal{H}}^{\prime }$ is the corresponding function space for $\mathbf{g}$ , and ${p}_{Y}$ and ${q}_{Y}$ are the densities transformed by $\mathbf{T}\left( \cdot \right)$ . The diffusion matrix is $\mathbf{m}\left( \mathbf{x}\right) = {\left( {\nabla }_{\mathbf{x}}\mathbf{T}\left( \mathbf{x}\right) \right) }^{-1}$ .

With the above two theorems, we formally establish the connection between DSM/DSD and normalizing flows. This gives us an interpretation of the diffusion matrix as the inverse of the Jacobian matrix defined by the flow.

We can further interpret this diffusion matrix through distributions defined on a Riemannian manifold. In fact, we can show that DSM is equivalent to the original SM performed on two densities defined on a Riemannian manifold with metric tensor $\mathbf{G}\left( \mathbf{x}\right) = \mathbf{m}{\left( \mathbf{x}\right) }^{-T}\mathbf{m}{\left( \mathbf{x}\right) }^{-1}$ .

## 3.2. Interpreting DSM Using Riemannian Manifolds

Assume we have a Riemannian manifold $\left( {\mathcal{M},\mathbf{g}}\right)$ with Riemannian metric tensor $\mathbf{g}$ .
For each point $\mathbf{a} \in \mathcal{M}$ , we assume it has local coordinates ${\mathbf{x}}_{a} = \left\lbrack {{x}_{a}^{1},\ldots ,{x}_{a}^{D}}\right\rbrack$ . We can prove the following proposition:

Proposition 3.1. On the Riemannian manifold $\left( {\mathcal{M},\mathbf{g}}\right)$ defined above, consider two probability measures $\mathbb{Q},\mathbb{P}$ . We denote the corresponding densities (in terms of local coordinates $\mathbf{x}$ ) w.r.t. the Riemannian manifold as $\widetilde{p}\left( \mathbf{x}\right) = \frac{d\mathbb{P}}{d\mathcal{M}\left( \mathbf{x}\right) }$ and $\widetilde{q}\left( \mathbf{x}\right) = \frac{d\mathbb{Q}}{d\mathcal{M}\left( \mathbf{x}\right) }$ . Then, the score matching loss for $\widetilde{p}$ and $\widetilde{q}$ is

$$
{\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right) = \frac{1}{2}{\mathbb{E}}_{q}\left\lbrack {\mathbf{\Delta }{\left( \mathbf{x}\right) }^{T}\mathbf{G}{\left( \mathbf{x}\right) }^{-1}\mathbf{\Delta }\left( \mathbf{x}\right) }\right\rbrack \tag{11}
$$

where $p\left( \mathbf{x}\right) = \frac{d\mathbb{P}}{d\mathcal{M}\left( \mathbf{x}\right) }\frac{d\mathcal{M}\left( \mathbf{x}\right) }{d\mathbf{x}}$ , $q\left( \mathbf{x}\right)$ is defined similarly, $\mathbf{\Delta }\left( \mathbf{x}\right) = {\mathbf{s}}_{p}\left( \mathbf{x}\right) - {\mathbf{s}}_{q}\left( \mathbf{x}\right)$ , and $\mathbf{G}\left( \mathbf{x}\right)$ is a symmetric positive definite matrix representing the Riemannian metric tensor. In particular, if $\mathbf{G}\left( \mathbf{x}\right) = \mathbf{m}{\left( \mathbf{x}\right) }^{-T}\mathbf{m}{\left( \mathbf{x}\right) }^{-1}$ , then ${\mathcal{F}}_{\mathcal{M}}\left( {\widetilde{q},\widetilde{p}}\right)$ is equivalent to DSM (Eq. 5) with diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$ .

The proof is in Appendix B.

This result is more general than Theorem 3.1. Specifically, Theorem 3.1 only proves a sufficient condition for DSM to be a valid discrepancy.
Namely, if we have an invertible flow, the corresponding diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$ must be invertible; the converse, however, is not true. Proposition 3.1, on the other hand, only requires $\mathbf{m}\left( \mathbf{x}\right)$ to be invertible, which is more general. Indeed, from the topological point of view, if we have an invertible and differentiable flow $T$ , then the transformed space (a Riemannian manifold) is diffeomorphic to the original space (e.g. ${\mathbb{R}}^{D}$ ), so this flow can be viewed as a special case of Gemici et al. (2016). In general, however, a Riemannian manifold may not be diffeomorphic to ${\mathbb{R}}^{D}$ , which explains why Theorem 3.1 is only a sufficient condition.

## 3.3. Continuous DSM with ODE Flows

The previous sections assume a deterministic transformation $\mathbf{T}\left( \mathbf{x}\right)$ . Recent work has shown promising results for continuous flows characterized by an ODE (Chen et al., 2018; Grathwohl et al., 2018). Namely, we can consider a transformation

$$
d\mathbf{x} = \mathbf{g}\left( {\mathbf{x}\left( t\right) }\right) {dt} \tag{12}
$$

where $\mathbf{g}\left( {\mathbf{x}\left( t\right) }\right)$ is a deterministic drift that is uniformly Lipschitz continuous w.r.t. $\mathbf{x}$ . We define ${p}_{t}$ and ${q}_{t}$ to be the corresponding densities of $\mathbf{x}\left( t\right)$ . Inspired by Chen et al. (2018), we can characterize the instantaneous change of the score matching loss $\frac{d\mathcal{F}\left( {{q}_{t},{p}_{t}}\right) }{dt}$ by the following proposition:

Proposition 3.2. Let ${p}_{t}\left( {\mathbf{x}\left( t\right) }\right) ,{q}_{t}\left( {\mathbf{x}\left( t\right) }\right)$ be two probability density functions, where $\mathbf{x}\left( t\right)$ is characterized by the ODE defined in Eq. 12, and assume $\mathbf{g}\left( {\mathbf{x}\left( t\right) }\right)$ is uniformly Lipschitz continuous w.r.t. $\mathbf{x}\left( t\right)$ .
Then, the instantaneous change of the score matching loss follows:

$$
\frac{d\mathcal{F}\left( {{q}_{t},{p}_{t}}\right) }{dt} = - \frac{1}{2}{\mathbb{E}}_{{q}_{t}}\left\lbrack {\Delta {\left( \mathbf{x}\right) }^{T}\left( {{\nabla }_{\mathbf{x}}\mathbf{g}\left( \mathbf{x}\right) + {\nabla }_{\mathbf{x}}\mathbf{g}{\left( \mathbf{x}\right) }^{T}}\right) \Delta \left( \mathbf{x}\right) }\right\rbrack \tag{13}
$$

where $\Delta \left( \mathbf{x}\right) = {\mathbf{s}}_{{p}_{t}}\left( \mathbf{x}\right) - {\mathbf{s}}_{{q}_{t}}\left( \mathbf{x}\right)$ .

The proof is in Appendix C.

## 4. Conclusion

In this paper, we discussed the connections of diffusion score matching and the diffusion Stein discrepancy to normalizing flows. Specifically, we proved that DSM (or DSD) is equivalent to performing the original score matching (or Stein discrepancy) on the transformed densities, where the diffusion matrix $\mathbf{m}\left( \mathbf{x}\right)$ is the inverse of the flow's Jacobian matrix. We also established a connection between DSM and densities defined on Riemannian manifolds by showing that the diffusion matrix is closely related to the Riemannian metric tensor. Finally, we extended DSM to continuous flows and derived an ODE characterizing its instantaneous change. By building these connections, we hope to shed light on developing training methods for the diffusion matrix, enabling the practical use of DSM (or DSD) for large models.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/kJpj-levLl/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/kJpj-levLl/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..303376685ecadb2071c968aa8cc3a8a4b982dbb6 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/kJpj-levLl/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,257 @@

## PNF: Progressive normalizing flows

Anonymous Authors ${}^{1}$

## Abstract

In this work, we introduce Progressive Normalizing Flows (PNF), a generative network that models high-dimensional input data distributions by progressively training several flow-based modules. Competing generative models, such as GANs or autoencoders, do not aim at learning the probability density of real data, while flow-based models realize this objective at the prohibitive cost of a high-dimensional internal representation. Here, we address these limitations and introduce a new strategy for training flow-based models. We progressively train consecutive models at increasing data resolutions, which allows us to construct low-dimensional representations, as done in autoencoders, while directly approximating the log-likelihood function. An additional feature of our model is its intrinsic ability to upscale data resolution in the consecutive stages. We show that PNF offers superior or comparable performance to the state of the art.

## 1. Introduction

Generative models trained to sample realistic data points, such as images, are a mainstream field of machine learning, with multiple applications ranging from 3D shape optimization (Spurek et al., 2020) to high energy physics simulations (Deja et al., 2020).
Existing approaches include GANs (Goodfellow et al., 2014), autoencoders (Kingma & Welling, 2013) and invertible flows (Uria et al., 2014; Jain et al., 2020). GANs give high-quality results, yet their main working principle relies on competition between adversarially trained networks, and the sampling prior distribution is not explicitly modeled, hence it does not provide an intuitive representation of data samples. The second family of generative models, autoencoder-based methods, does not share this limitation of GANs, as these models can simultaneously fit a data manifold and approximate its prior distribution (Kingma & Welling, 2013; Tolstikhin et al., 2017; Knop et al., 2020). More precisely, generative autoencoders map data into a lower-dimensional latent space regularized to follow a certain distribution, e.g. Gaussian, and reconstruct it back. However, neither GANs nor autoencoder-based models are able to explicitly learn the probability density of real data in the input space.

![01963e37-4b1e-7897-af1b-38fe1962aff6_0_892_550_701_491_0.jpg](images/01963e37-4b1e-7897-af1b-38fe1962aff6_0_892_550_701_491_0.jpg)

Figure 1. In PNF we start with a density model on a downscaled image. Next, we progressively add information encoded by a conditional flow model (bottom images), which allows us to upsample a specific dimension of the image. In two steps of upscaling we upsample both dimensions and finally obtain a high-resolution image (upper left).

The third family of generative models, flow-based methods, aims to address this limitation by constructing an invertible transformation from the data space to a Gaussian distribution, called a Normalizing Flow (Rezende & Mohamed, 2015). Contrary to the other methods, the model explicitly learns the data distribution, and the resulting loss function is therefore the negative log-likelihood.
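To make this working principle concrete, here is a minimal one-dimensional sketch (our illustration, not any model from the paper): a single invertible affine map with a tractable Jacobian turns the Gaussian base log-density into an exact data-space log-likelihood, which can then be minimized directly by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=5000)   # toy 1-d "dataset"

def neg_log_lik(mu, s):
    # Invertible map f(x) = (x - mu) * exp(-s) sends data to the base space;
    # change of variables: log p_X(x) = log N(f(x); 0, 1) + log|f'(x)|,
    # where log|f'(x)| = -s.
    z = (data - mu) * np.exp(-s)
    return -np.mean(-0.5 * (z ** 2 + np.log(2.0 * np.pi)) - s)

# Gradient descent on the exact negative log-likelihood.
mu, s, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    z = (data - mu) * np.exp(-s)
    mu += lr * np.mean(z) * np.exp(-s)      # -d NLL / d mu
    s += lr * (np.mean(z ** 2) - 1.0)       # -d NLL / d s
print(mu, np.exp(s))                        # close to the sample mean and std
```

The fitted location and scale recover the maximum likelihood estimates, illustrating how a flow trains by exact likelihood rather than an adversarial or reconstruction objective.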
The main challenge in modeling flows is finding an invertible function that allows efficient computation of the Jacobian determinant. For discrete flows, like NICE (Dinh et al., 2014), RealNVP (Dinh et al., 2016) and Glow (Kingma & Dhariwal, 2018), that issue is solved by applying so-called coupling layers. Continuous flows, like FFJORD (Grathwohl et al., 2018), use the Jacobian trace of the transformation specified by an ordinary differential equation.

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

Another important group of flow-based approaches are autoregressive methods, like IAF (Kingma et al., 2016) or MAF (Papamakarios et al., 2017), which use the chain rule and model the conditional distributions of single variables with invertible transformations. A good approximation of the negative log-likelihood, however, comes at the price of a high-dimensional internal representation, which is costly both in terms of the memory required to store the model and the computational effort needed to train it.

In this work, we address these limitations of existing generative models and introduce the Progressive Normalizing Flows (PNF) model, which takes the best of autoencoder and flow-based methods. Our approach combines the ability to construct a low-dimensional representation, as done in autoencoder-based methods, with the direct approximation of the data distribution using the negative log-likelihood function of flow-based models.

The main idea of PNF is to train several conditional flow generative models that learn the data distribution at recursively reduced resolutions, see Fig. 1. We cascade multiple flow stages: the first flow model is trained at the base resolution (e.g.
$8 \times 8$ pixels), the second one is trained at a resolution doubled along a given axis (e.g. ${16} \times 8$ ), with a conditioning mechanism taking the output of the base-resolution model. This operation is repeated recursively until the model can output data samples at the desired resolution. To enable convergence of our model, we represent data at subsequent stages by encoding mean and difference values, which allows us to effectively train the flow-based modules.

Our contributions can be summarized as follows:

- We introduce a divide-and-conquer progressive strategy to effectively train high-dimensional flow-based generative models.

- We introduce a new PNF generative network that takes the best of the two worlds of autoencoder and flow-based generative models: our approach constructs a low-dimensional data representation, while directly approximating the log-likelihood function.

- The latent space of PNF is given by lower-resolution images, which consequently allows the upsampling of images.

## 2. Description of PNF

Basic idea of PNF We aim to define a generative model on high-resolution images in a progressive fashion. In autoregressive models, one orders the image pixels in a given way (Jain et al., 2020) and produces the next pixels by sampling from a conditional distribution. In our paper, we apply a similar strategy, but we progressively increase the resolution of the images: we sample higher-resolution images from a distribution conditioned on their lower-resolution versions.

To describe it, let us denote by ${\mathcal{I}}_{ij}$ the set of images of resolution ${2}^{i}d \times {2}^{j}d$ $\left( {i, j = 0,1,\ldots , k}\right)$ and let us fix an image $M \in {\mathcal{I}}_{k}$ $\left( {{\mathcal{I}}_{k} \mathrel{\text{:=}} {\mathcal{I}}_{kk}}\right)$ . We denote by ${M}_{ij} \in {\mathcal{I}}_{ij}$ the image $M$ downscaled to the resolution ${2}^{i}d \times {2}^{j}d$ .
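As a concrete reading of this notation, the sketch below (our illustration; we assume factor-2 mean pooling as the per-axis downscaling operator, consistent with the $S$ operators defined later in this section) computes $M_{ij}$ for a toy array:

```python
import numpy as np

def downscale(M, i, j, k):
    """Return M_ij: M of size 2^k d x 2^k d downscaled to 2^i d x 2^j d
    by repeated factor-2 averaging (an assumed pooling operator)."""
    for _ in range(k - i):                 # halve the row resolution
        M = 0.5 * (M[0::2, :] + M[1::2, :])
    for _ in range(k - j):                 # halve the column resolution
        M = 0.5 * (M[:, 0::2] + M[:, 1::2])
    return M

d, k = 2, 2                                # base size d, full size 2^k d = 8
M = np.arange(64.0).reshape(8, 8)          # a toy "image" M in I_kk
print(downscale(M, 0, 0, k).shape)         # (2, 2): the coarsest M_00
print(downscale(M, 1, 2, k).shape)         # (4, 8): resolution 2^1 d x 2^2 d
```

Note that averaging preserves the global mean, so every $M_{ij}$ is a faithful coarse summary of $M$.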
Let us note that one can recover the image $M$ from the following information: the image ${M}_{00}$ , together with the information necessary to upscale ${M}_{ii}$ to ${M}_{i, i + 1}$ , and ${M}_{i, i + 1}$ to ${M}_{i + 1, i + 1}$ , for $i = 0,1,\ldots , k - 1$ . We describe this idea more formally in the two following paragraphs. In the first, we give a detailed description for one-dimensional data (vectors), and then we show how to adapt it to images (two-dimensional matrices).

Theory: progressive approach in ${\mathbb{R}}^{D}$ PNF is based on a fusion of the ideas behind autoregressive density models (Uria et al., 2014; Jain et al., 2020) and the wavelet approach to data analysis and compression (in particular, Haar wavelets).

Let us recall the basic idea of autoregressive models. We assume that our data lie in ${\mathbb{R}}^{D}$ , and to model the density we apply the formula

$$
p\left( x\right) = \mathop{\prod }\limits_{{i = 1}}^{D}p\left( {{x}_{i} \mid {x}_{1},\ldots ,{x}_{i - 1}}\right) ,\;x = \left( {{x}_{1},\ldots ,{x}_{D}}\right) .
$$

In our approach we use a version of the autoregressive approach, but for a special decomposition of the space. To do this, we will need the following notation: given vectors $\left( {{x}_{1},\ldots,{x}_{n}}\right) ,\left( {{y}_{1},\ldots,{y}_{n}}\right) \in {\mathbb{R}}^{n}$ we define

$x \oplus y = \left( {{x}_{1} + {y}_{1},{x}_{1} - {y}_{1},\ldots ,{x}_{n} + {y}_{n},{x}_{n} - {y}_{n}}\right) \in {\mathbb{R}}^{2n}.$

For $x \in {\mathbb{R}}^{2n}$ we additionally define two operators

$$
{Sx} \mathrel{\text{:=}} \left( {\frac{{x}_{1} + {x}_{2}}{2},\ldots ,\frac{{x}_{{2n} - 1} + {x}_{2n}}{2}}\right) \in {\mathbb{R}}^{n},
$$

$$
{\Delta x} \mathrel{\text{:=}} \left( {\frac{{x}_{1} - {x}_{2}}{2},\ldots ,\frac{{x}_{{2n} - 1} - {x}_{2n}}{2}}\right) \in {\mathbb{R}}^{n}.
$$

Observe that ${S}^{k}x$ is the point $x$ with the resolution decreased $k$ times, and that for $x \in {\mathbb{R}}^{2n}$ we have

$$
x = {Sx} \oplus {\Delta x}.
$$

Applying the above formula $k$ times for $x \in {\mathbb{R}}^{D}$ $\left( {D = {2}^{k}d}\right)$ we get

$$
x = {Sx} \oplus {\Delta x} = \left( {{S}^{2}x \oplus {\Delta Sx}}\right) \oplus {\Delta x} = \ldots = \left\lbrack {{S}^{k}x \oplus \Delta {S}^{k - 1}x \oplus \ldots \oplus {\Delta Sx}}\right\rbrack \oplus {\Delta x}.
$$

Then we can apply the autoregressive density formula:

$$
p\left( x\right) = {C}_{1} \cdot p\left( {{\Delta x} \mid {Sx}}\right) \cdot p\left( {Sx}\right) = {C}_{2} \cdot p\left( {{\Delta x} \mid {Sx}}\right) \cdot p\left( {{\Delta Sx} \mid {S}^{2}x}\right) \cdot p\left( {{S}^{2}x}\right) = \ldots = {C}_{k} \cdot p\left( {{S}^{k}x}\right) \cdot \mathop{\prod }\limits_{{i = 1}}^{k}p\left( {\Delta {S}^{k - i}x \mid {S}^{k - i + 1}x}\right) , \tag{1}
$$

where ${C}_{k}$ is the inverse of the absolute value of the determinant of the map

$$
{\mathbb{R}}^{D} = {\mathbb{R}}^{d} \times \left\lbrack {{\mathbb{R}}^{{2}^{0}d} \times \ldots \times {\mathbb{R}}^{{2}^{k - 1}d}}\right\rbrack \ni \left( {v,{v}_{1},\ldots ,{v}_{k}}\right) \rightarrow \left\lbrack {\left( {v \oplus {v}_{1}}\right) \oplus \ldots }\right\rbrack \oplus {v}_{k} \in {\mathbb{R}}^{D}.
$$

PNF: model summary Let us now summarize the final result of applying the conditional flow models to our data $X$ in the procedure described above. For $x \in {\mathbb{R}}^{{2}^{l}d}$ and $j < l$ , by ${x}_{\left\lbrack j\right\rbrack } = {S}^{l - j}x \in {\mathbb{R}}^{{2}^{j}d}$ we denote the point $x$ "downscaled" to the resolution ${2}^{j}d$ .
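The operators $S$, $\Delta$ and $\oplus$ act like an (unnormalized) Haar transform. A short sketch (our illustration) verifies the identity $x = Sx \oplus \Delta x$ and the lossless $k$-step decomposition underlying Eq. (1):

```python
import numpy as np

def S(x):   # pairwise means: halves the resolution
    return 0.5 * (x[0::2] + x[1::2])

def D(x):   # pairwise differences (the "detail" coefficients)
    return 0.5 * (x[0::2] - x[1::2])

def oplus(s, d):  # interleave sums and differences: inverse of (S, D)
    out = np.empty(2 * len(s))
    out[0::2] = s + d
    out[1::2] = s - d
    return out

x = np.array([4.0, 2.0, 7.0, 1.0, 0.0, 6.0, 3.0, 5.0])  # D = 2^k d, d=1, k=3
assert np.allclose(oplus(S(x), D(x)), x)                # x = Sx (+) Delta x

# Full decomposition: keep only S^k x and the detail vectors.
details, coarse = [], x
for _ in range(3):
    details.append(D(coarse))
    coarse = S(coarse)

# Progressive reconstruction, mirroring p(S^k x) * prod_i p(...|...) in Eq. (1).
rec = coarse
for d_ in reversed(details):
    rec = oplus(rec, d_)
print(np.allclose(rec, x))   # True: the decomposition is lossless
```

The coarsest component $S^3 x$ is a single number (the global mean of $x$), and each upscaling step only needs the conditional detail vector, exactly as in the autoregressive factorization above.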
Thus, if by $\mathcal{X}$ we denote the true random vector from which our data $X$ were generated, by ${\mathcal{X}}_{\left\lbrack l\right\rbrack }$ we denote the rescaling of $\mathcal{X}$ to the respective lower resolution. Then we obtain:

- an invertible map $\phi : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ such that if $U \sim N\left( {0, I}\right)$ , then $\phi \left( U\right) \sim {\mathcal{X}}_{\left\lbrack 0\right\rbrack }$ ,

- for $j < l$ , "upsampling" maps ${\Phi }_{jl} : {\mathbb{R}}^{{2}^{j}d} \times {\mathbb{R}}^{\left( {{2}^{l} - {2}^{j}}\right) d} \rightarrow {\mathbb{R}}^{{2}^{l}d}$ such that if $x \sim {\mathcal{X}}_{\left\lbrack j\right\rbrack }$ and $U \sim N\left( {0, I}\right)$ , then ${\Phi }_{jl}\left( {x, U}\right) \sim {\mathcal{X}}_{\left\lbrack l\right\rbrack }$ .

Moreover, downscaling ${\Phi }_{jl}\left( {x, U}\right)$ recreates $x$ , i.e.

$$
{\Phi }_{jl}{\left( x, U\right) }_{\left\lbrack j\right\rbrack } = x\text{ for }x \in {\mathbb{R}}^{{2}^{j}d},\;U \in {\mathbb{R}}^{\left( {{2}^{l} - {2}^{j}}\right) d}. \tag{2}
$$

Let us now observe that the above hierarchy gives us a latent model for $\mathcal{X}$ . Namely, as the latent space $Z$ we take ${\mathbb{R}}^{d}$ , on which we sample by taking $\phi \left( U\right)$ for $U \sim N\left( {0, I}\right)$ . Then the encoder $\mathcal{E} : {\mathbb{R}}^{D} \rightarrow Z$ and decoder $\mathcal{D} : Z \rightarrow {\mathbb{R}}^{D}$ are given by

$$
\mathcal{E}x = {x}_{\left\lbrack 0\right\rbrack }\text{ and }\mathcal{D}z = {\Phi }_{0, k}\left( {z,0}\right) .
$$

Observe that we upsample from $Z$ by taking fixed zero noise. As always, the lower-dimensional manifold spanned by $Z$ is given by $\mathcal{M} = \mathcal{D}Z$ . Moreover, by (2) we obtain that $\mathcal{E}$ and $\mathcal{D}$ are right invertible, i.e.

$$
\mathcal{E}\mathcal{D}z = z,\;z \in Z.
$$

In other words, the map

$$
{p}_{\mathcal{M}} : {\mathbb{R}}^{D} \ni x \rightarrow \mathcal{D}\mathcal{E}x \in \mathcal{M}
$$

is a true projection onto $\mathcal{M}$ , that is

$$
{p}_{\mathcal{M}}x = x,\;x \in \mathcal{M}.
$$

Observe that the above does not hold for standard autoencoder models, for which we only obtain an approximate identity.

PNF for images Now we describe the natural modification of the above approach for matrices (representing images). As before, we consider $M \in {\mathbb{R}}^{D \times D}$ , where $D = {2}^{k}d$ . We need the operators ${S}_{x},{S}_{y},{\Delta }_{x},{\Delta }_{y}$ and operations ${ \oplus }_{x},{ \oplus }_{y}$ , which are the analogues of the operators $S,\Delta$ and the operation $\oplus$ applied along the corresponding axis. Notice that ${S}_{x}^{i}{S}_{y}^{j}M$ denotes the image where the resolution is reduced in $x$ by ${2}^{i}$ and in $y$ by ${2}^{j}$ . Namely, we have

$$
M = {S}_{x}M{ \oplus }_{x}{\Delta }_{x}M = \left\lbrack {{S}_{y}{S}_{x}M{ \oplus }_{y}{\Delta }_{y}{S}_{x}M}\right\rbrack { \oplus }_{x}{\Delta }_{x}M = \ldots = \left\lbrack {\left( {\left( {{S}_{y}^{k}{S}_{x}^{k}M{ \oplus }_{y}{\Delta }_{y}{S}_{y}^{k - 1}{S}_{x}^{k}M}\right) { \oplus }_{y}\ldots }\right) { \oplus }_{x}{\Delta }_{x}M}\right\rbrack ,
$$

see Figure 1, where we depict the operations ${ \oplus }_{x}$ and ${ \oplus }_{y}$ .

Then one can obtain the analogue of formula (1), which allows us to compute the probability of the image $M$ at the original scale by computing the probability of $M$ at the reduced resolution, multiplied by the respective conditional probabilities which tell us "how much probability" we have to add to increase the resolution.

Thus, exactly as in the case of vectors, PNF applied to images gives us the following features:

- a density model on images,

- the ability to upscale the resolution,

- a lower-dimensional manifold with a true projection.

## 3.
Experiments

The main steps in the training process of PNF are to train: ${1}^{ \circ }$ a baseline flow model ${f}_{0}$ modelling significantly reduced-resolution images from the original dataset, and ${2}^{ \circ }$ several conditional flow generative models used in the process of upscaling the image resolution. We can apply the PNF approach an appropriate number of times and finally compare, e.g., the resulting log-likelihood with that of methods applied directly to high-resolution images from the original dataset. For the baseline generative flow model, we use RealNVP; see (Dinh et al., 2016). We follow the same multi-scale architecture for all conditional models as well.

For the toy example, we choose the MNIST dataset, upscaled to ${32} \times {32}$ pixels. Following (Dinh et al., 2016), we also consider the CIFAR-10 (Krizhevsky et al., 2009) and CelebFaces Attributes (Liu et al., 2015) (CelebA, downscaled to ${64} \times {64}$ pixels) datasets. To illustrate our approach during the training phase, in the case of an original dataset with images of size $D \times D$ $\left( {D = {4d}}\right)$ , we proceed as follows. First we train the baseline RealNVP model ${f}_{0}$ on the downscaled images of size $d \times d$ . Next, we keep progressing by modelling four conditional flow models ${f}_{1}^{x}$ , ${f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ trained on the images rescaled to the resolutions ${2d} \times d,{2d} \times {2d},{4d} \times {2d},{4d} \times {4d} = D \times D$ , respectively. The training and conditioning observations are provided by application of the ${S}_{x},{S}_{y}$ and ${\Delta }_{x},{\Delta }_{y}$ operators, described in

![01963e37-4b1e-7897-af1b-38fe1962aff6_3_162_210_688_132_0.jpg](images/01963e37-4b1e-7897-af1b-38fe1962aff6_3_162_210_688_132_0.jpg)

Figure 2. Upscaling pipeline with PNF models.
From the left: an image sampled by the baseline RealNVP model ${f}_{0}$ trained on the low-resolution ( $8 \times 8$ pixels) MNIST dataset, and images upscaled to ${16} \times 8,{16} \times {16},{32} \times {16},{32} \times {32}$ pixels with the PNF models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ , respectively.

![01963e37-4b1e-7897-af1b-38fe1962aff6_3_156_558_695_258_0.jpg](images/01963e37-4b1e-7897-af1b-38fe1962aff6_3_156_558_695_258_0.jpg)

Figure 3. Upscaling pipeline with PNF models. From the left: an image sampled by the baseline RealNVP model ${f}_{0}$ trained on the low-resolution ( ${16} \times {16}$ pixels) CelebA dataset, and images upscaled to ${32} \times {16},{32} \times {32},{64} \times {32},{64} \times {64}$ pixels with the PNF models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ , respectively.

Section 2; see also Figure 1, where we depict these operations for a test image from the CelebA dataset.

For example, in the case of base images rescaled to ${32} \times {32}$ (the MNIST and CIFAR-10 case), we train the following models:

- a baseline flow model ${f}_{0}$ for $8 \times 8$ images,

- conditional flow models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ for images ${\Delta }_{z}\left( I\right)$ conditioned on ${S}_{z}\left( I\right)$ $\left( {z \in \{ x, y\} }\right)$ , where $I$ is of size ${16} \times 8,{16} \times {16},{32} \times {16},{32} \times {32}$ , respectively.

Figure 2 shows a sample image generated by the above model ${f}_{0}$ and the resulting images upscaled using the generative models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ ; when upscaling the images, we sample from the generative flow models with fixed zero noise.

In the case of the MNIST dataset, we consider both dense (linear) and convolutional architectures for the RealNVP coupling layers. In the dense case, we use 6 affine coupling layers as invertible dense neural networks (18 layers).
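For readers unfamiliar with the coupling layers mentioned above, a minimal affine coupling step in the spirit of RealNVP (a generic sketch with toy parameters, not the trained models used in our experiments) can be written as:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "networks" s(.) and t(.) acting on the first half of the input.
W1, b1 = rng.normal(size=(4, 4)) * 0.1, np.zeros(4)
W2, b2 = rng.normal(size=(4, 4)) * 0.1, np.zeros(4)

def coupling_forward(x):
    """Affine coupling: x1 passes through; x2 is scaled/shifted by nets of x1."""
    x1, x2 = x[:4], x[4:]
    s, t = np.tanh(x1 @ W1 + b1), x1 @ W2 + b2
    y = np.concatenate([x1, x2 * np.exp(s) + t])
    log_det = s.sum()   # the Jacobian is triangular: log|det J| = sum of log-scales
    return y, log_det

def coupling_inverse(y):
    """Exact inverse: recompute s, t from the untouched half and undo the affine map."""
    y1, y2 = y[:4], y[4:]
    s, t = np.tanh(y1 @ W1 + b1), y1 @ W2 + b2
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=8)
y, log_det = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x))   # True: invertible by construction
```

The key point is that invertibility and a cheap log-determinant hold for arbitrary (non-invertible) networks $s$ and $t$, which is what makes stacking many such layers tractable.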
We follow (Dinh et al., 2016) and use the same architecture for the convolutional coupling layers of the RealNVP models for the CIFAR-10 (and MNIST) and CelebA datasets. Figure 3 is an analogue of Figure 2, but with PNF trained on the CelebA dataset.

For the purpose of comparison, for each considered dataset (images of $D \times D$ pixels), i.e. MNIST $\left( {D = {32}}\right)$ , CIFAR-10 $\left( {D = {32}}\right)$ and CelebA $\left( {D = {64}}\right)$ , we train the reference baseline model $f$ and all five PNF models ${f}_{0},{f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ . We use the regular train/test splits for each dataset. For the analysis, we compare the log-likelihood values (and resulting

![01963e37-4b1e-7897-af1b-38fe1962aff6_3_893_210_701_424_0.jpg](images/01963e37-4b1e-7897-af1b-38fe1962aff6_3_893_210_701_424_0.jpg)

Figure 4. Upscaling low-resolution CelebA images. Columns from the left: a downscaled ( ${16} \times {16}$ pixels) test CelebA image, and images upscaled with the PNF models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x}$ and ${f}_{2}^{y}$ , respectively.

Table 1. Log-likelihood and bits-per-dimension (in parentheses) values for the baseline RealNVP and our PNF models. For the MNIST dataset we considered both dense (top row) and convolutional (bottom row) architectures of the coupling layers.
| Data set | RealNVP | PNF |
| --- | --- | --- |
| MNIST (dense) | 4433.35 (1.75) | 4857.89 (1.16) |
| MNIST (conv.) | 4473.32 (1.70) | 4264.79 (2.00) |
| CIFAR-10 | 9421.04 (3.57) | 8891.81 (3.82) |
| CelebA | 46787.85 (2.51) | 44092.34 (2.82) |
+ +bits-per-dimension) obtained by the baseline model $f$ and all PNF models; we evaluate the models using the test split of each considered dataset. For the training process we chose the Adam algorithm (Da, 2014) with default hyperparameters and use ${L}_{2}$ regularization on the weight scale parameters with coefficient $5 \cdot {10}^{-5}$ . We train all flow models by setting the prior ${p}_{Z}$ to be an isotropic unit norm Gaussian. + +In the case of the CIFAR-10 and CelebA datasets, the baseline models reproduce the results in (Dinh et al., 2016). Let us remark that, in the case of the CelebA dataset, we trained the baseline model for more iterations and achieved a better bits-per-dimension value (see Table 1) than the one given in the reference paper (Dinh et al., 2016). + +## 4. Conclusion + +In this paper we introduced a new flow-based architecture, which can be seen as the fusion of classical flow models and autoregressive models. Since we split the space as done in the wavelet transform, we obtain progressive densities of the data scaled to the respective lower resolutions. This allows us to view the lower-resolution images as the latent representation, which is decoded to the original resolution by the upscaling given by the respective flow model with zero noise. + +References + +Da, K. A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. + +Deja, K., Dubiński, J., Nowak, P., Wenzel, S., Spurek, P., and Trzciński, T. End-to-end sinkhorn autoencoder with noise generator. IEEE Access, 2020. + +Dinh, L., Krueger, D., and Bengio, Y. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. + +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016. + +Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Adv. in Neural Information Processing Systems, NeurIPS, pp.
2672-2680, 2014. + +Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018. + +Jain, A., Abbeel, P., and Pathak, D. Locally masked convolution for autoregressive models. In Conference on Uncertainty in Artificial Intelligence, pp. 1358-1367. PMLR, 2020. + +Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1\mathrm{x}1$ convolutions. arXiv preprint arXiv:1807.03039, 2018. + +Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. + +Kingma, D. P., Salimans, T., Józefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016. + +Knop, S., Spurek, P., Tabor, J., Podolak, I., Mazur, M., and Jastrzebski, S. Cramer-wold auto-encoder. Journal of Machine Learning Research, 21, 2020. + +Krizhevsky, A. et al. Learning multiple layers of features from tiny images. 2009. + +Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730-3738, 2015. + +Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive flow for density estimation. arXiv preprint arXiv:1705.07057, 2017. + +Rezende, D. J. and Mohamed, S. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015. + +Spurek, P., Winczowski, S., Tabor, J., Zamorski, M., Zieba, M., and Trzciński, T. Hypernetwork approach to generating point clouds. Proceedings of Machine Learning Research, 119, 2020. + +Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558, 2017. + +Uria, B., Murray, I., and Larochelle, H. A deep and tractable density estimator. In International Conference on Machine Learning, pp. 467-475. PMLR, 2014.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/kJpj-levLl/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/kJpj-levLl/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..ec2d2123546e5e41c0b8eaaa182647f99493b8d8 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/kJpj-levLl/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,229 @@ +§ PNF: PROGRESSIVE NORMALIZING FLOWS + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +In this work, we introduce Progressive Normalizing Flows (PNF), a generative network that allows us to model high-dimensional input data distributions by progressively training several flow-based modules. Competing generative models, such as GANs or autoencoders, do not aim at learning the probability density of real data, while flow-based models realize this objective at the prohibitive cost of a high-dimensional internal representation. Here, we address these limitations and introduce a new strategy to train flow-based models. We progressively train consecutive models at increasing data resolutions, which allows us to construct low-dimensional representations, as done in autoencoders, while directly approximating the log-likelihood function. An additional feature of our model is its intrinsic ability to upscale data resolution in the consecutive stages. We show that PNF offers performance superior or comparable to the state of the art. + +§ 1. INTRODUCTION + +Generative models trained to sample realistic data-points, such as images, are a mainstream field of machine learning, with multiple applications ranging from 3D shape optimization (Spurek et al., 2020) to high energy physics simulations (Deja et al., 2020).
+ +Existing approaches include GANs (Goodfellow et al., 2014), autoencoders (Kingma & Welling, 2013) and invertible flows (Uria et al., 2014; Jain et al., 2020). GANs give high-quality results, yet their main working principle relies on the competition between adversarially trained networks, while the sampling prior distribution is not explicitly modeled and hence does not provide an intuitive data sample representation. The second family of generative models, autoencoder-based methods, does not share the limitation of GANs, as these models can simultaneously fit a data manifold and approximate its prior distribution (Kingma & Welling, 2013; Tolstikhin et al., 2017; Knop et al., 2020). More precisely, generative autoencoders map data into a lower-dimensional latent space regularized to follow a certain distribution, e.g. Gaussian, and reconstruct it back. However, neither GANs nor autoencoder-based models are able to explicitly learn the probability density of real data in the input space. + + < g r a p h i c s > + +Figure 1. In PNF we start with a density model on a downscaled image. Next we progressively add information encoded by a conditional flow model (bottom images), which allows us to up-sample a specific dimension of the image. In two steps of upscaling we upsample both dimensions, and finally obtain a high-resolution image (upper left). + +The third family of generative models, flow-based methods, aims to address this limitation by constructing an invertible transformation from data space to a Gaussian distribution, called a Normalizing Flow (Rezende & Mohamed, 2015). Contrary to the other methods, the model explicitly learns the data distribution, and therefore the resulting loss function is defined as the negative log-likelihood. The main challenge in modeling flows is finding an invertible function that allows efficient computation of the Jacobian determinant.
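The loss such models optimize is the change-of-variables formula $\log {p}_{X}\left( x\right) = \log {p}_{Z}\left( {f\left( x\right) }\right) + \log \left| \det \frac{\partial f}{\partial x}\right|$; a one-dimensional sketch with an affine map (our illustration, not from the paper, with assumed names):

```python
import math

def log_pdf_standard_normal(z):
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def flow_log_likelihood(x, a, b):
    """Change of variables for the invertible map f(x) = a*x + b:
    log p_X(x) = log p_Z(f(x)) + log|f'(x)|, with p_Z a standard normal."""
    z = a * x + b
    return log_pdf_standard_normal(z) + math.log(abs(a))

def log_pdf_normal(x, mu, sigma):
    """Closed-form density of N(mu, sigma^2), used only as a check:
    pushing N(0, 1) through f^{-1} gives N(-b/a, 1/a^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))
```

For this map the Jacobian is the constant $a$, which makes the log-determinant trivial; the coupling layers discussed next achieve the same tractability for high-dimensional, nonlinear maps.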
For discrete flows, like NICE (Dinh et al., 2014), RealNVP (Dinh et al., 2016) and Glow (Kingma & Dhariwal, 2018), that issue is solved by the application of so-called coupling layers. Continuous flows, like FFJORD (Grathwohl et al., 2018), use the Jacobian trace of the transformation specified by an ordinary differential equation. + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +Another important group of flow-based approaches are autoregressive methods, like IAF (Kingma et al., 2016) or MAF (Papamakarios et al., 2017), that use the chain rule and aim at modeling conditional distributions of single variables with invertible transformations. A good approximation of the negative log-likelihood, however, comes at the price of a high-dimensional internal representation, which is costly both in terms of the memory required to store the model and the computational effort needed to train it. + +In this work, we address these limitations of the existing generative models and introduce the Progressive Normalizing Flows (PNF) model that takes the best of autoencoder and flow-based methods. Our approach combines the ability to construct a low-dimensional representation, as done in autoencoder-based methods, with the direct approximation of the data distribution using the negative log-likelihood function of flow-based models. + +The main idea of PNF is to train several conditional flow generative models that learn the data distribution at recursively reduced resolutions, see Fig. 1. We cascade multiple flow stages: the first flow model is trained at the base resolution (e.g. $8 \times 8$ pixels), the second one is trained at a resolution increased twice along a given axis (e.g. ${16} \times 8$ ), but with the conditioning mechanism taking the results of the base resolution model.
This operation is recursively repeated until the model can output data samples at the desired resolution. To enable convergence of our model we represent data at subsequent stages by encoding the mean and difference values, which allows us to effectively train the flow-based modules. + +Our contributions can be summarized as follows: + + * We introduce a divide-and-conquer progressive strategy to effectively train high-dimensional flow-based generative models. + + * We introduce a new PNF generative network that takes the best of the two worlds of autoencoder-based and flow-based generative models: our approach constructs a low-dimensional data representation, while directly approximating the log-likelihood function. + + * PNF has a latent space given by lower-resolution images, which consequently allows for the upsampling of images. + +§ 2. DESCRIPTION OF PNF + +Basic idea of PNF We aim to define the generative model on high-resolution images in a progressive fashion. In autoregressive models, we order the image pixels in a given way (Jain et al., 2020), and produce the next pixels by sampling from a conditional distribution. In our paper, we apply a similar strategy, but we progressively increase the resolution of the images. We sample higher-resolution images from a distribution conditioned on their lower-resolution versions. + +To describe it, let us denote by ${\mathcal{I}}_{ij}$ the set of images of resolution ${2}^{i}d \times {2}^{j}d\left( {i,j = 0,1,\ldots ,k}\right)$ and let us fix the image $M \in {\mathcal{I}}_{k}\left( {{\mathcal{I}}_{k} \mathrel{\text{ := }} {\mathcal{I}}_{kk}}\right)$ . We denote by ${M}_{ij} \in {\mathcal{I}}_{ij}$ the image $M$ downscaled to the resolution ${2}^{i}d \times {2}^{j}d$ .
Let us note that one can obtain the image $M$ from the knowledge of the following information: the image ${M}_{00}$ , the information necessary to upscale the resolution from ${M}_{ii}$ to get the image ${M}_{i,i + 1}$ , and from ${M}_{i,i + 1}$ to ${M}_{i + 1,i + 1}$ , for $i = 0,1,\ldots ,k - 1$ . We will describe this idea more formally in the two following paragraphs. In the first, we give a detailed description in the case of one-dimensional data (vectors), and then we show how we can adapt it to the case of images (two-dimensional matrices). + +Theory: progressive approach in ${\mathbb{R}}^{D}$ PNF is based on the fusion of the ideas behind autoregressive density models (Uria et al., 2014; Jain et al., 2020) and the wavelet approach to data analysis and compression (in particular Haar wavelets). + +Let us recall the basic idea of autoregressive models. We assume that our data lies in ${\mathbb{R}}^{D}$ , and to model the density we apply the formula + +$$
p\left( x\right) = \mathop{\prod }\limits_{{i = 1}}^{D}p\left( {{x}_{i} \mid {x}_{1},\ldots ,{x}_{i - 1}}\right) ,\;x = \left( {{x}_{1},\ldots ,{x}_{D}}\right) .
$$ + +In our approach we use a version of the autoregressive approach, but for a special decomposition of the space. To do this, we will need the following notation: given vectors $\left( {{x}_{1},..,{x}_{n}}\right) ,\left( {{y}_{1},..,{y}_{n}}\right) \in {\mathbb{R}}^{n}$ we define + +$x \oplus y = \left( {{x}_{1} + {y}_{1},{x}_{1} - {y}_{1},\ldots ,{x}_{n} + {y}_{n},{x}_{n} - {y}_{n}}\right) \in {\mathbb{R}}^{2n}.$ + +For $x \in {\mathbb{R}}^{2n}$ we additionally define two operators + +$$
{Sx} \mathrel{\text{ := }} \left( {\frac{{x}_{1} + {x}_{2}}{2},\ldots ,\frac{{x}_{{2n} - 1} + {x}_{2n}}{2}}\right) \in {\mathbb{R}}^{n},
$$ + +$$
{\Delta x} \mathrel{\text{ := }} \left( {\frac{{x}_{1} - {x}_{2}}{2},\ldots ,\frac{{x}_{{2n} - 1} - {x}_{2n}}{2}}\right) \in {\mathbb{R}}^{n}.
+$$ + +Observe that ${S}^{k}x$ is the point $x$ with decreased resolution, and that for $x \in {\mathbb{R}}^{2n}$ we have + +$$
x = {Sx} \oplus {\Delta x}.
$$ + +Applying the above formula $k$ times for $x \in {\mathbb{R}}^{D}\left( {D = {2}^{k}d}\right)$ we get + +$$
x = {Sx} \oplus {\Delta x} = \left( {{S}^{2}x \oplus {\Delta Sx}}\right) \oplus {\Delta x} = \ldots
$$ + +$$
= \left\lbrack {{S}^{k}x \oplus \Delta {S}^{k - 1}x \oplus \ldots \oplus {\Delta Sx}}\right\rbrack \oplus {\Delta x}.
$$ + +Then we can apply the autoregressive density formula: + +$$
p\left( x\right) = {C}_{1} \cdot p\left( {{\Delta x} \mid {Sx}}\right) \cdot p\left( {Sx}\right)
$$ + +$$
= {C}_{2} \cdot p\left( {{\Delta x} \mid {Sx}}\right) \cdot p\left( {{\Delta Sx} \mid {S}^{2}x}\right) \cdot p\left( {{S}^{2}x}\right)
$$ + +$$
= \ldots = {C}_{k} \cdot p\left( {{S}^{k}x}\right) \cdot \mathop{\prod }\limits_{{i = 1}}^{k}p\left( {\Delta {S}^{k - i}x \mid {S}^{k - i + 1}x}\right) , \tag{1}
$$ + +where ${C}_{k}$ is the inverse of the absolute value of the determinant of the map + +$$
{\mathbb{R}}^{D} = {\mathbb{R}}^{d} \times \left\lbrack {{\mathbb{R}}^{{2}^{0}d} \times \ldots \times {\mathbb{R}}^{{2}^{k - 1}d}}\right\rbrack \ni \left( {v,{v}_{1},\ldots ,{v}_{k}}\right) \rightarrow
$$ + +$$
\rightarrow \left\lbrack {\left( {v \oplus {v}_{1}}\right) \oplus \ldots }\right\rbrack \oplus {v}_{k} \in {\mathbb{R}}^{D}\text{ . }
$$ + +PNF: model summary Let us now summarize the final result of applying the conditional flow models to our data $X$ in the procedure described above. For $x \in {\mathbb{R}}^{{2}^{l}d}$ and $j < l$ , by ${x}_{\left\lbrack j\right\rbrack } = {S}^{l - j}x \in {\mathbb{R}}^{{2}^{j}d}$ we denote the point $x$ "downscaled" to the resolution ${2}^{j}d$ .
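The operators $S$, $\Delta$ and the operation $\oplus$ can be sketched in a few lines of numpy (our illustration, not the authors' code); note that $x = Sx \oplus \Delta x$ holds exactly, so repeated application recovers the full-resolution point from its coarsest version plus the details:

```python
import numpy as np

def S(x):
    """Pairwise means: the point x at half the resolution."""
    return (x[0::2] + x[1::2]) / 2.0

def D(x):
    """Pairwise half-differences: the detail discarded by S."""
    return (x[0::2] - x[1::2]) / 2.0

def oplus(s, d):
    """Interleave s + d and s - d, inverting the pair (S, D)."""
    out = np.empty(2 * len(s))
    out[0::2] = s + d
    out[1::2] = s - d
    return out

x = np.array([4.0, 2.0, 1.0, 5.0, 0.0, 6.0, 3.0, 3.0])  # D = 2^k * d
```

This is exactly the (unnormalized) Haar analysis/synthesis step mentioned above.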
Thus, if by $\mathcal{X}$ we denote the true random vector from which our data $X$ was generated, by ${\mathcal{X}}_{\left\lbrack l\right\rbrack }$ we denote the rescaling of $\mathcal{X}$ to the respective lower resolution. Then we obtain: + + * an invertible map $\phi : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ , such that if $U \sim N\left( {0,I}\right)$ , then $\phi \left( U\right) \sim {\mathcal{X}}_{\left\lbrack 0\right\rbrack }$ , + + * for $j < l$ , "upsampling" maps ${\Phi }_{jl} : {\mathbb{R}}^{{2}^{j}d} \times$ ${\mathbb{R}}^{\left( {{2}^{l} - {2}^{j}}\right) d} \rightarrow {\mathbb{R}}^{{2}^{l}d}$ such that if $x \sim {\mathcal{X}}_{\left\lbrack j\right\rbrack }$ and $U \sim$ $N\left( {0,I}\right)$ , then ${\Phi }_{jl}\left( {x,U}\right) \sim {\mathcal{X}}_{\left\lbrack l\right\rbrack }$ . + +Moreover, downscaling ${\Phi }_{jl}\left( {x,U}\right)$ recreates $x$ , i.e. + +$$
{\Phi }_{jl}{\left( x,U\right) }_{\left\lbrack j\right\rbrack } = x\text{ for }x \in {\mathbb{R}}^{{2}^{j}d},\;U \in {\mathbb{R}}^{\left( {{2}^{l} - {2}^{j}}\right) d}. \tag{2}
$$ + +Let us now observe that the above hierarchy gives us a latent model for $\mathcal{X}$ . Namely, as the latent space $Z$ we take ${\mathbb{R}}^{d}$ , on which we sample by taking $\phi \left( U\right)$ for $U \sim N\left( {0,I}\right)$ . Then the following mappings, encoder $\mathcal{E} : {\mathbb{R}}^{D} \rightarrow Z$ and decoder $\mathcal{D} : Z \rightarrow {\mathbb{R}}^{D}$ , are given by + +$$
\mathcal{E}x = {x}_{\left\lbrack k\right\rbrack }\text{ and }\mathcal{D}z = {\Phi }_{0,k}\left( {z,0}\right) \text{ . }
$$ + +Observe that we upsample from $Z$ by taking the fixed zero noise. As always, the lower-dimensional manifold spanned by $Z$ is given by $\mathcal{M} = \mathcal{D}Z$ . Moreover, by (2) we obtain that $\mathcal{D}$ is a right inverse of $\mathcal{E}$ , i.e. + +$$
\mathcal{E}\mathcal{D}z = z,\;z \in Z.
+$$ + +In other words, we obtain that the following map + +$$
{p}_{\mathcal{M}} : {\mathbb{R}}^{D} \ni x \rightarrow \mathcal{D}\mathcal{E}x \in \mathcal{M}
$$ + +is a true projection onto $\mathcal{M}$ , that is + +$$
{p}_{\mathcal{M}}x = x,\;x \in \mathcal{M}.
$$ + +Observe that the above does not hold for standard autoencoder models, for which we only obtain an approximate identity. + +PNF for images Now we describe the natural modification of the above approach for matrices (representations of images). As before, we consider $M \in {\mathbb{R}}^{D \times D}$ , where $D = {2}^{k}d$ . Then we need the operators ${S}_{x},{S}_{y},{\Delta }_{x},{\Delta }_{y}$ and operations ${ \oplus }_{x},{ \oplus }_{y}$ , which are componentwise analogues of the operators $S,\Delta$ and the operation $\oplus$ . Notice that ${S}_{x}^{i}{S}_{y}^{j}M$ denotes the image where we reduce the resolution in $x$ by ${2}^{i}$ , and in $y$ by ${2}^{j}$ . Namely, we have + +$$
M = {S}_{x}M{ \oplus }_{x}{\Delta }_{x}M = \left\lbrack {{S}_{y}{S}_{x}M{ \oplus }_{y}{\Delta }_{y}{S}_{x}M}\right\rbrack { \oplus }_{x}{\Delta }_{x}M
$$ + +$$
= \ldots = \left\lbrack {\left( {\left( {{S}_{y}^{k}{S}_{x}^{k}M{ \oplus }_{y}{\Delta }_{y}{S}_{y}^{k - 1}{S}_{x}^{k}M}\right) { \oplus }_{y}\ldots }\right) { \oplus }_{x}{\Delta }_{x}M}\right\rbrack
$$ + +see Figure 1, where we depict the operations ${ \oplus }_{x}$ and ${ \oplus }_{y}$ . + +Then one can obtain the analogue of formula (1), which allows us to compute the probability of the image $M$ at the original scale by computing the probability of $M$ at the reduced resolution, multiplied by the respective conditional probabilities which tell us "how much probability" we have to add to increase the resolution. + +Thus, exactly as in the case of vectors, PNF applied to images gives us the following features: + + * a density model on images, + + * the ability to upscale the resolution, + + * a lower-dimensional manifold with a true projection. + +§ 3. 
EXPERIMENTS + +The main step in the training process of PNF is to train: ${1}^{ \circ }$ the baseline flow model ${f}_{0}$ modelling significantly reduced-resolution images from the original dataset, ${2}^{ \circ }$ several conditional flow generative models which can be used in the process of upscaling the image resolution. We can apply the PNF approach an appropriate number of times and finally compare, e.g., the resulting log-likelihood with that of methods applied directly to high-resolution images from the original dataset. For the baseline generative flow model, we use RealNVP; see (Dinh et al., 2016). We use the same multi-scale architecture for all conditional models as well. + +For a toy example, we choose the MNIST dataset, upscaled to ${32} \times {32}$ pixels. Following (Dinh et al., 2016), we also consider the CIFAR-10 (Krizhevsky et al., 2009) and CelebFaces Attributes (Liu et al., 2015) (CelebA, downscaled to ${64} \times {64}$ pixels) datasets. To illustrate our approach during the training phase, in the case of an original dataset with images of size $D \times D$ $\left( {D = {4d}}\right)$ , we proceed as follows. First we train the baseline RealNVP model ${f}_{0}$ for the downscaled images of size $d \times d$ . Next, we keep progressing by modelling four conditional flow models ${f}_{1}^{x}$ , ${f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ trained on the images rescaled to the resolutions ${2d} \times d,{2d} \times {2d},{4d} \times {2d},{4d} \times {4d} = D \times D$ , respectively. The training and conditioning observations are provided by application of the ${S}_{x},{S}_{y}$ and ${\Delta }_{x},{\Delta }_{y}$ operators, described in + + < g r a p h i c s > + +Figure 2. Upscaling pipeline with PNF models. 
From the left: image sampled by the baseline RealNVP model ${f}_{0}$ trained on the low-resolution ( $8 \times 8$ pixels) MNIST dataset, and upscaled images of size ${16} \times 8,{16} \times {16},{32} \times {16},{32} \times {32}$ pixels produced by the PNF models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ , respectively. + + < g r a p h i c s > + +Figure 3. Upscaling pipeline with PNF models. From the left: image sampled by the baseline RealNVP model ${f}_{0}$ trained on the low-resolution ( ${16} \times {16}$ pixels) CelebA dataset, and upscaled images of size ${32} \times {16},{32} \times {32},{64} \times {32},{64} \times {64}$ pixels produced by the PNF models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ , respectively. + +Section 2; see also Figure 1, where we depict the process of these operations in the case of a test image from the CelebA dataset. + +For example, in the case of base images rescaled to ${32} \times {32}$ (the MNIST and CIFAR-10 case), we train the following models: + + * the baseline flow model ${f}_{0}$ for images of size $8 \times 8$ , + + * conditional flow models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ for images ${\Delta }_{z}\left( I\right)$ conditioned on ${S}_{z}\left( I\right) \left( {z \in \{ x,y\} }\right)$ , where $I$ is of size ${16} \times 8,{16} \times {16},{32} \times {16},{32} \times {32}$ , respectively. + +Figure 2 shows a sample image generated by the above model ${f}_{0}$ and the resulting upscaled images obtained using the generative models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x},{f}_{2}^{y}$ ; when upscaling the images we sample from the generative flow models with a fixed zero noise. + +In the case of the MNIST dataset we consider both dense (linear) and convolutional architectures for the RealNVP coupling layers. In the dense case, we use 6 affine coupling layers built as invertible dense neural networks (18 layers). We follow (Dinh et al., 2016) and use the same architecture for the convolutional coupling layers of the RealNVP models for the CIFAR-10 (same for MNIST) and CelebA datasets.
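The construction of training pairs for the conditional models (condition ${S}_{z}\left( I\right)$, target ${\Delta }_{z}\left( I\right)$) can be sketched with numpy for $z = x$, arbitrarily treating the $x$-axis as image rows (our illustration of the data preparation, not the authors' code):

```python
import numpy as np

def S_x(I):
    """Halve the resolution along one axis by averaging adjacent row pairs."""
    return (I[0::2, :] + I[1::2, :]) / 2.0

def D_x(I):
    """The detail discarded by S_x: half-differences of adjacent row pairs."""
    return (I[0::2, :] - I[1::2, :]) / 2.0

def oplus_x(s, d):
    """Upscale by interleaving s + d and s - d rows, inverting (S_x, D_x)."""
    out = np.empty((2 * s.shape[0], s.shape[1]))
    out[0::2, :] = s + d
    out[1::2, :] = s - d
    return out

rng = np.random.default_rng(1)
I = rng.normal(size=(16, 8))        # one 16x8 image from the pipeline
condition, target = S_x(I), D_x(I)  # what the conditional flow sees / models
```

At sampling time the flow produces `target` given `condition` (with zero noise), and `oplus_x` assembles the higher-resolution image.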
Figure 3 is the analogue of Figure 2, but with PNF trained on the CelebA dataset. + +For the purpose of comparison, for each considered dataset (images of $D \times D$ pixels), i.e. MNIST $\left( {D = {32}}\right)$ , CIFAR-10 $\left( {D = {32}}\right)$ and CelebA $\left( {D = {64}}\right)$ , we train the reference baseline model $f$ and all five PNF models ${f}_{0},{f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x}$ , ${f}_{2}^{y}$ . We use the regular train/test splits for each dataset. For the analysis, we compare the log-likelihood value (and resulting + + < g r a p h i c s > + +Figure 4. Upscaling the low-resolution CelebA images. Columns from the left: downscaled ( ${16} \times {16}$ pixels) test CelebA image, and upscaled images with PNF models ${f}_{1}^{x},{f}_{1}^{y},{f}_{2}^{x}$ and ${f}_{2}^{y}$ , respectively. + +Table 1. Log-likelihood and bits-per-dimension (in parentheses) values for the baseline RealNVP and our PNF models. For the MNIST dataset we considered both dense (top value) and convolutional (bottom) architectures of the coupling layers. + +DATA SET          REALNVP           PNF
MNIST (dense)     4433.35 (1.75)    4857.89 (1.16)
MNIST (conv.)     4473.32 (1.70)    4264.79 (2.00)
CIFAR-10          9421.04 (3.57)    8891.81 (3.82)
CELEBA            46787.85 (2.51)   44092.34 (2.82)

bits-per-dimension) obtained by the baseline model $f$ and all PNF models; we evaluate the models using the test split of each considered dataset. For the training process we chose the Adam algorithm (Da, 2014) with default hyperparameters and use ${L}_{2}$ regularization on the weight scale parameters with coefficient $5 \cdot {10}^{-5}$ . We train all flow models by setting the prior ${p}_{Z}$ to be an isotropic unit norm Gaussian. + +In the case of the CIFAR-10 and CelebA datasets, the baseline models reproduce the results in (Dinh et al., 2016).
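The bits-per-dimension values in Table 1 appear consistent with the standard conversion $\text{bpd} = 8 - \mathrm{LL}/\left( {D\ln 2}\right)$ for 8-bit images, where LL is the per-image log-likelihood in nats and $D$ the number of dimensions (our reading; the conversion is not stated explicitly in the text):

```python
import math

def bits_per_dimension(log_likelihood_nats, num_dims):
    """bpd = 8 - LL / (D * ln 2) for 8-bit images whose log-likelihood LL
    (in nats) is reported on data rescaled to [0, 1]."""
    return 8.0 - log_likelihood_nats / (num_dims * math.log(2.0))

# Assumed dimensionalities: MNIST 32x32x1, CIFAR-10 32x32x3, CelebA 64x64x3.
mnist_bpd = bits_per_dimension(4433.35, 32 * 32)
cifar_bpd = bits_per_dimension(9421.04, 32 * 32 * 3)
celeba_bpd = bits_per_dimension(46787.85, 64 * 64 * 3)
```

Under this assumption the baseline entries of Table 1 reproduce to within rounding (1.75, 3.57 and 2.51 bpd, respectively).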
Let us remark that, in the case of the CelebA dataset, we trained the baseline model for more iterations and achieved a better bits-per-dimension value (see Table 1) than the one given in the reference paper (Dinh et al., 2016). + +§ 4. CONCLUSION + +In this paper we introduced a new flow-based architecture, which can be seen as the fusion of classical flow models and autoregressive models. Since we split the space as done in the wavelet transform, we obtain progressive densities of the data scaled to the respective lower resolutions. This allows us to view the lower-resolution images as the latent representation, which is decoded to the original resolution by the upscaling given by the respective flow model with zero noise. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mj6qILYHjbS/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mj6qILYHjbS/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..33ab8008d97adc69c819174bbbfdfa66f46101cc --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mj6qILYHjbS/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,412 @@ +# Semantic Perturbations with Normalizing Flows for Improved Generalization + +Anonymous Authors ${}^{1}$ + +## Abstract + +Several methods from two separate lines of work, namely data augmentation (DA) and adversarial training techniques, rely on perturbations done in latent space. Often, these methods are either non-interpretable due to their non-invertibility or are notoriously difficult to train due to their numerous hyperparameters. We exploit the exactly reversible encoder-decoder structure of normalizing flows to perform perturbations in the latent space.
We demonstrate that these on-manifold perturbations match the performance of advanced DA techniques, reaching ${96.6}\%$ test accuracy for CIFAR-10 using ResNet-18, and outperform existing methods particularly in low-data regimes, yielding a 10-25% relative improvement in test accuracy over classical training. We find our latent adversarial perturbations, adaptive to the classifier throughout its training, to be most effective. + +## 1. Introduction + +Successfully applying Deep Neural Networks (DNNs) in real-world tasks in large part depends on the availability of large annotated datasets for the task at hand. Thus, besides several techniques for reducing overfitting, data augmentation (DA) often remains a "mandatory" step when deploying DNNs in practice. Traditional DA techniques consist of applying a predefined set of transformations to the training samples that do not change the class label. As this approach is limited to making the classifier robust solely to the fixed set of hard-coded transformations, advanced methods incorporate more loosely defined transformations in the data space (Zhang et al., 2018a; DeVries & Taylor, 2017; Yun et al., 2019). Furthermore, recently proposed DA techniques exploit the latent space to perform such transformations (Antoniou et al., 2017; Zhao et al., 2018), while typically solving the model's non-invertibility by training a separate model (Zhao et al., 2018), thus making them hard to train. + +A separate line of work focuses on adversarial training (see (Biggio & Roli, 2018) and references therein), where the final model is trained with samples perturbed in a way that makes its prediction incorrect, called adversarial samples. However, further empirical studies showed that such training reduces the accuracy on "clean" samples, indicating the two objectives are competing (Tsipras et al., 2019b; Su et al., 2018). Stutz et al. 
(2019) postulate that this robustness-generalization trade-off appears due to using off-manifold adversarial attacks that leave the data manifold, and that 'on-manifold adversarial attacks' can improve generalization. Thus, the authors proposed to use perturbations in the latent space of a generative model, VAE-GAN (Larsen et al., 2016; Rosca et al., 2017). However, as this method relies on the VAE-GAN model, which is particularly hard to train (in addition to GAN training, it involves hard-to-tune hyperparameters balancing the VAE and GAN objectives), its usage has remained limited. + +Motivated by the advantages of normalizing flows (NF) relevant to these two lines of work, we employ NFs (e.g. Glow; Kingma & Dhariwal, 2018) to define entirely unsupervised augmentations, contrasting with pre-defined fixed transformations, with the same goal of improving the generalization of deep classifiers. In particular, NF models offer appealing advantages for latent space perturbations, such as: (i) exact latent-variable inference and log-likelihood evaluation, and (ii) efficient inference and synthesis that can be parallelized (Kingma & Dhariwal, 2018). + +Related works. Several works learn useful DA policies, for instance by optimization (Fawzi et al., 2016; Ratner et al., 2017), Reinforcement Learning techniques (Cubuk et al., 2019; 2020; Zhang et al., 2020b), specifically trained generator networks (Peng et al., 2018), or assisted by generative adversarial networks (Perez & Wang, 2017; Antoniou et al., 2017; Zhang et al., 2018b; Tran et al., 2020). Several methods traverse the latent space to find virtual data samples that are misclassified (Baluja & Fischer, 2017; Song et al., 2018; Xiao et al., 2018; Zhang et al., 2020a). Complementarily, the connection between adversarial learning and generalization has also been studied in (Tanay & Griffin, 2016; Rozsa et al., 2016; Jalal et al., 2017; Tsipras et al., 2019a; Gilmer et al., 2018; Zhao et al., 2018). 
+ +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. + +--- + +![01963e4e-5e57-7d6a-b941-a5813388f608_1_188_184_622_408_0.jpg](images/01963e4e-5e57-7d6a-b941-a5813388f608_1_188_184_622_408_0.jpg) + +Figure 1. Exactness of NF encoding-decoding. Here $\mathcal{F}$ denotes the bijective NF model, and $\mathcal{G}/{\mathcal{G}}^{-1}$ the encoder/decoder pair of inexact methods such as VAE or VAE-GAN which, due to inherent decoder noise, is only approximately bijective. + +Overview of contributions. We exploit the exactly reversible encoder-decoder structure of NFs to perform efficient and controllable augmentations in the learned on-manifold space. We demonstrate that our on-manifold perturbations consistently outperform standard training on CIFAR-10/100 using ResNet-18. Moreover, in a low-data regime, such training yields up to ${25}\%$ relative improvement over classical training; among these perturbations, we find the adversarial ones, adaptive to the classifier throughout its training, to be most effective, see §3. + +## 2. Perturbations in Latent Space using Normalizing Flows + +The invertibility of normalizing flows enables bidirectional transitions between image and latent space, see Figure 1. This in turn allows for applying perturbations directly in the latent space rather than in image space. We denote a trained normalizing flow model, mapping the data manifold $\mathcal{X}$ to the latent space $\mathcal{Z}$ , as $\mathcal{F} : \mathcal{X} \rightarrow \mathcal{Z}$ . Given a perturbation function $\mathcal{P} : \mathcal{Z} \rightarrow \mathcal{Z}$ defined over the latent space, we define its counterpart in image space as ${\mathcal{F}}^{-1}\left( {\mathcal{P}\left( {\mathcal{F}\left( \mathbf{x}\right) }\right) }\right)$ . 
Our goal is to define a latent perturbation function $\mathcal{P}\left( z\right)$ such that we obtain identity-preserving semantic modifications of the original image $\mathbf{x}$ in the image domain. To this end, we limit the structure of possible $\mathcal{P}\left( z\right)$ in two ways. Firstly, we directly consider incremental perturbations of the form $z + \mathcal{P}\left( z\right)$ . Secondly, we use an $\epsilon$ parameter to control the size of the perturbation allowed. More precisely, we have:

$$
{\mathcal{F}}^{-1}\left( {z + \mathcal{P}\left( {\mathcal{F}\left( \mathbf{x}\right) ,\epsilon }\right) }\right) .
$$

For brevity, we refer to $\mathcal{P}\left( \mathbf{z}\right)$ as "latent attacks" (LA) and we consider two variants, described below.

### 2.1. Randomized Latent Attacks

At training time, given a datapoint ${\mathbf{x}}_{i}$ , with $1 \leq i \leq n$ , we use the trained normalizing flow to obtain its corresponding latent code ${\mathbf{z}}_{i} = \mathcal{F}\left( {\mathbf{x}}_{i}\right)$ .

As a first perturbation function, we consider simple Gaussian noise in the latent space:

$$
{\mathcal{P}}_{\text{rand }}\left( {\cdot ,\epsilon }\right) = \epsilon \cdot \mathcal{N}\left( {0,\mathbf{I}}\right) ,\;\text{ (Randomized-LA) }
$$

which is independent of ${\mathbf{z}}_{i}$ . Sampling from any such distribution around the original ${\mathbf{z}}_{i}$ is equivalent to sampling from the learned manifold: the normalizing flow pushes forward this simple Gaussian distribution centered around ${\mathbf{z}}_{i}$ to a distribution on the image space around ${\mathbf{x}}_{i} = {\mathcal{F}}^{-1}\left( {\mathbf{z}}_{i}\right)$ . Thus, sampling from the simple prior distribution $\mathcal{N}\left( {0,\mathbf{I}}\right)$ is equivalent to sampling from a complex conditional distribution around the original image over the data manifold.
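Concretely, the randomized latent attack is three steps: encode, add scaled Gaussian noise, decode. The sketch below uses a toy invertible linear map in place of a trained Glow model; the names here are illustrative placeholders, not the paper's implementation:

```python
import numpy as np

# Toy bijection standing in for a trained normalizing flow F: X -> Z.
# Invertibility is the only property the attack relies on.
A = np.array([[2.0, 0.0], [1.0, 3.0]])  # invertible linear map

def flow_forward(x):
    return A @ x                   # z = F(x)

def flow_inverse(z):
    return np.linalg.solve(A, z)   # x = F^{-1}(z)

def randomized_latent_attack(x, eps, rng):
    """Randomized-LA: decode(encode(x) + eps * N(0, I))."""
    z = flow_forward(x)
    z_perturbed = z + eps * rng.standard_normal(z.shape)
    return flow_inverse(z_perturbed)

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0])
x_aug = randomized_latent_attack(x, eps=0.1, rng=rng)
# In the limit eps -> 0 the original sample is recovered exactly.
assert np.allclose(randomized_latent_attack(x, eps=0.0, rng=rng), x)
```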
We also define norm-truncated versions as follows:

$$
{\mathcal{P}}_{\text{rand }}^{{\ell }_{p}}\left( {\cdot ,\epsilon }\right) = \Pi \left( {\epsilon \cdot \mathcal{N}\left( {0,\mathbf{I}}\right) }\right) ,
$$

where ${\ell }_{p}$ denotes the selected norm, e.g. ${\ell }_{2}$ or ${\ell }_{\infty }$ . For the ${\ell }_{2}$ norm, $\Pi$ is defined as ${\ell }_{2}$ norm scaling, and for ${\ell }_{\infty }$ , $\Pi$ is the component-wise clipping operation defined below:

$$
{\left( \Pi \left( \mathbf{x}\right) \right) }_{i} \mathrel{\text{:=}} \max \left( {-\epsilon ,\min \left( {+\epsilon ,{\mathbf{x}}_{i}}\right) }\right) .
$$

### 2.2. Adversarial Latent Attacks

Analogously to the randomized latent attacks above, at train time, given a datapoint ${\mathbf{x}}_{i}$ and its associated label ${l}_{i}$ , with $1 \leq i \leq n$ , we use the trained normalizing flow to obtain its corresponding latent code ${\mathbf{z}}_{i} = \mathcal{F}\left( {\mathbf{x}}_{i}\right)$ .

We search for ${\Delta }_{{\mathbf{z}}_{i}} \in \mathcal{Z}$ such that the loss of the generated image ${\widetilde{\mathbf{x}}}_{i} = {\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}}\right)$ is maximal:

$$
{\Delta }_{{\mathbf{z}}_{i}}^{ \star } = \underset{{\begin{Vmatrix}{\Delta }_{{\mathbf{z}}_{i}}\end{Vmatrix}}_{{\ell }_{p}} \leq \epsilon }{\arg \max }{\mathcal{L}}_{\theta }\left( {{\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}}\right) ,{l}_{i}}\right) ,
$$

$$
{\mathcal{P}}_{\text{adv }}^{{\ell }_{p}}\left( {{\mathbf{z}}_{i},\epsilon }\right) = {\Delta }_{{\mathbf{z}}_{i}}^{ \star },\;\text{ (Adversarial-LA) }
$$

where ${\mathcal{L}}_{\theta }$ is the loss function of the classifier, and ${\ell }_{p}$ denotes the selected norm, e.g. ${\ell }_{2}$ or ${\ell }_{\infty }$ .
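The two projection operators $\Pi$ follow directly from the definitions above; this is a plain NumPy sketch, not taken from the paper's code:

```python
import numpy as np

def project_l2(delta, eps):
    # l2 norm scaling: rescale delta onto the l2-ball of radius eps
    # if it lies outside; leave it unchanged otherwise.
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta = delta * (eps / norm)
    return delta

def project_linf(delta, eps):
    # Component-wise clipping: (Pi(x))_i = max(-eps, min(+eps, x_i)).
    return np.clip(delta, -eps, eps)

delta = np.array([3.0, -4.0])          # l2 norm 5
assert np.isclose(np.linalg.norm(project_l2(delta, eps=1.0)), 1.0)
assert np.all(np.abs(project_linf(delta, eps=1.0)) <= 1.0)
```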
+ +In practice, we define the number of steps $k$ to optimize for ${\Delta }_{{\mathbf{z}}_{i}}^{ \star } \in \mathcal{Z}$ , as well as the step size $\alpha$ (as in Stutz et al. (2019); Wong & Kolter (2021)), and we have the following procedure: + +- Initialize a random ${\Delta }_{{\mathbf{z}}_{i}}^{0}$ with ${\begin{Vmatrix}{\Delta }_{{\mathbf{z}}_{i}}^{0}\end{Vmatrix}}_{{\ell }_{p}} \leq \epsilon$ . + +- Iteratively update ${\Delta }_{{\mathbf{z}}_{i}}^{j}$ for $j = 1,\ldots , k$ number of steps with step size $\alpha$ as follows: + +$$ +{\Delta }_{{\mathbf{z}}_{i}}^{j} = \Pi \left( {{\Delta }_{{\mathbf{z}}_{i}}^{j - 1} + \alpha \cdot \frac{\nabla {\mathcal{L}}_{\theta }\left( {{\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}^{j - 1}}\right) ,{l}_{i}}\right) }{{\begin{Vmatrix}\nabla {\mathcal{L}}_{\theta }\left( {\mathcal{F}}^{-1}\left( {\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}^{j - 1}\right) ,{l}_{i}\right) \end{Vmatrix}}_{{\ell }_{p}}}}\right) +$$ + +where $\Pi$ is the projection operator that ensures ${\begin{Vmatrix}{\Delta }_{{\mathbf{z}}_{i}}^{j}\end{Vmatrix}}_{{\ell }_{p}} \leq \epsilon$ and the gradient is with respect to ${\Delta }_{{\mathbf{z}}_{i}}^{j}$ . + +Table 1. Test accuracy (%) on CIFAR-10, in the low-data regime compared to the full train set. For the former, we use 5% and 100% of the training and test set, respectively. In addition to standard training, we consider standard training with commonly used data augmentations (DA) in the image space, which includes rotation and horizontal flips (Zagoruyko & Komodakis, 2016), as well as more recent Cutout (DeVries & Taylor, 2017) and Mixup (Zhang et al.,2018a) methods. See $§{3.1}$ for a discussion. + +
| Method | Low-data | Full-set |
| --- | --- | --- |
| Standard (no DA) | 49.8 | 89.7 |
| Standard + common DA | 64.1 | 95.2 |
| VAE-GAN (Stutz et al., 2019) | 58.9 | 94.2 |
| Cutout (DeVries & Taylor, 2017) | 66.8 | 96.0 |
| Mixup (Zhang et al., 2018a) | 73.4 | 95.9 |
| Randomized-LA (ours) | 70.1 | 96.3 |
| Adversarial-LA (ours) | 80.4 | 96.6 |
- Output ${\mathcal{P}}_{adv}\left( {{\mathbf{z}}_{i},\epsilon }\right) = {\Delta }_{{\mathbf{z}}_{i}}^{k}$ .

For the case of ${\ell }_{\infty }$ , we replace the normalization of the gradient with the $\operatorname{sign}\left( \cdot \right)$ operator, i.e.:

$$
{\Delta }_{{\mathbf{z}}_{i}}^{j} = \Pi \left( {{\Delta }_{{\mathbf{z}}_{i}}^{j - 1} + \alpha \cdot \operatorname{sign}\left( {\nabla {\mathcal{L}}_{\theta }\left( {{\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}^{j - 1}}\right) ,{l}_{i}}\right) }\right) }\right)
$$

and use component-wise clipping for the projection, equivalent to the standard ${\ell }_{\infty }$ PGD adversarial attack of Madry et al. (2018).

Similarly, as NFs directly model the underlying data manifold, this perturbation is equivalent to a search over the on-manifold adversarial samples (Stutz et al., 2019).

## 3. Experiments

### 3.1. Generalization on CIFAR-10

We are primarily interested in the performance of our perturbations in the low-data regime, when using only a small subset of CIFAR-10 as the training set. We train ResNet-18 classifiers on only $5\%$ of the full training set and evaluate models on the full test set.

We compare our methods with some of the most commonly used data augmentation methods, such as Cutout (DeVries & Taylor, 2017) and Mixup (Zhang et al., 2018a), as well as with the VAE-GAN based approach (Stutz et al., 2019). For (Stutz et al., 2019), we use the authors' implementation.

For (DeVries & Taylor, 2017), we report the best test accuracy observed in a grid search over the learning rate $\eta \in \{ {0.1},{0.01}\}$ . Similarly, for (Zhang et al., 2018a), we report the best accuracy in a grid search over the learning rate $\eta \in \{ {0.1},{0.01}\}$ and the mixup coefficient $\lambda \in \{ {.1},{.2},{.3},{.4},{1.0}\}$ . For Randomized-LA, we use $\ell = {\ell }_{\infty },\epsilon = {0.25}$ , and for Adversarial-LA, we use $\ell = {\ell }_{2},\epsilon = {1.0},\alpha = {0.5}, k = 3$ .
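Put together, the adversarial latent attack of §2.2 is PGD run in the latent space through the decoder. The sketch below substitutes a toy invertible linear map for the flow and a quadratic stand-in for the classifier loss ${\mathcal{L}}_{\theta}$ , so every name here is an illustrative placeholder rather than the paper's implementation:

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])   # toy invertible "flow"
A_inv = np.linalg.inv(A)

def decode(z):
    return A_inv @ z                      # stand-in for F^{-1}

def loss_grad_wrt_delta(z, delta, target):
    # Quadratic stand-in loss L(x) = 0.5 * ||x - target||^2; chain rule
    # through the decoder gives dL/ddelta = A^{-T} (x - target).
    x = decode(z + delta)
    return A_inv.T @ (x - target)

def adversarial_latent_attack(z, target, eps, alpha, k, rng):
    delta = rng.standard_normal(z.shape)
    delta *= min(1.0, eps / np.linalg.norm(delta))     # random start in the ball
    for _ in range(k):
        g = loss_grad_wrt_delta(z, delta, target)
        delta = delta + alpha * g / np.linalg.norm(g)  # normalized ascent step
        norm = np.linalg.norm(delta)                   # l2 projection back
        if norm > eps:
            delta *= eps / norm
    return delta

rng = np.random.default_rng(0)
z = A @ np.array([1.0, -1.0])
delta = adversarial_latent_attack(z, target=np.zeros(2), eps=1.0,
                                  alpha=0.5, k=3, rng=rng)
assert np.linalg.norm(delta) <= 1.0 + 1e-9
```

For the ${\ell }_{\infty }$ variant described above, the normalized ascent step would be replaced by `alpha * np.sign(g)` and the projection by component-wise clipping.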
Table 2. Cross-dataset experiments. Test accuracy (%) on CIFAR-100, in the low-data regime, where we use ${10}\%$ of the training set and the full test set. The normalizing flow used to generate training samples is trained on CIFAR-10.
| Method | Test | Improvement |
| --- | --- | --- |
| Standard | 36.4 | - |
| Randomized-LA, $\ell = {\ell }_{\infty },\epsilon = {.2}$ | 39.7 | +3.3 |
| Randomized-LA, $\ell = {\ell }_{\infty },\epsilon = {.3}$ | 41.0 | +4.6 |
| Randomized-LA, $\ell = {\ell }_{2},\epsilon = {10}$ | 40.4 | +4.0 |
| Randomized-LA, $\ell = {\ell }_{2},\epsilon = {20}$ | 42.3 | +5.9 |
| Adversarial-LA, $\ell = {\ell }_{2},\alpha = {.5}, k = 3$ | 45.0 | +8.6 |
Table 1 summarizes our generalization experiments in the low-data regime, using only $5\%$ of CIFAR-10 for training, compared to the full CIFAR-10 training set. Both Randomized-LA and Adversarial-LA notably outperform the standard training baseline. In particular, we observe that (i) our simple Randomized-LA method already outperforms some recent strong data augmentation methods, and (ii) Adversarial-LA achieves the best test accuracy in both the low-data and full-set regimes. See §3.3 below for additional benchmarks against VAE-GAN (Stutz et al., 2019).

### 3.2. Cross-Dataset Experiments

To further analyze potential applications of our NF-based latent attacks in real-world use cases, we conduct the following experiment. Assuming a relevant large-scale dataset is available, a natural question is whether the NF within our approach can be pre-trained on it and then used while training the classifier on a different dataset. In particular, we use CIFAR-10 to train the NF model, and then use our latent attacks to train a classifier on ${10}\%$ of the CIFAR-100 dataset.

Table 2 shows our results for a selection of latent attacks. Randomized-LA and Adversarial-LA achieve 16% and 24% relative improvements over the standard baseline, respectively. The results indicate that normalizing flows are capable of transferring useful augmentations across datasets.

### 3.3. Additional Comparison with VAE-GAN

We study the performance of our latent perturbation-based training strategies in varying settings, ranging from the low-data regime to the full set. We reproduce the classifier and the hyperparameter setup used in (Stutz et al., 2019), and use an analogous setup for our method. For the reported VAE-GAN results, we used the source code provided by the authors ${}^{1}$ . For our Randomized-LA, we use perturbations of size $\epsilon = {0.15}$ , and for Adversarial-LA, we use $\epsilon = {0.05},\alpha = {0.01}$ and number of steps $k = {10}$ .
---

${}^{1}$ https://github.com/davidstutz/disentangling-robustness-generalization

---

![01963e4e-5e57-7d6a-b941-a5813388f608_3_174_190_661_485_0.jpg](images/01963e4e-5e57-7d6a-b941-a5813388f608_3_174_190_661_485_0.jpg)

Figure 2. Test accuracy (y-axis) on the full test set, for a varying number of training samples (x-axis), on FashionMNIST. To replicate the setup of VAE-GAN (Stutz et al., 2019), only a portion of the dataset (x-axis) is used to train the classifier, while the corresponding generative model is trained on the full dataset. We run each experiment with three different random seeds, and report the mean and standard deviation of the test accuracy. See §3.3.

Figure 2 shows our average results over 3 runs with training sizes in $\{ {600},{2400},{6000},{50000}\}$ . We observe that Randomized-LA performs comparably to the standard training baseline, whereas Adversarial-LA outperforms the standard baseline across all training set sizes. Note that the difference from the standard baselines shrinks as we increase the number of samples available to the classifiers.

In line with our results, Stutz et al. (2019) report diminishing performance gains when moving from FashionMNIST to increasingly challenging datasets such as CelebA, when using their VAE-GAN based approach. One potential cause could be the approximate encoding and decoding mappings, and/or sensitivity to hyperparameter tuning. Indeed, our results support the numerous appealing advantages of NF models for latent space perturbations, and indicate that they have better capacity to produce useful augmented training samples.

## 4. Discussion

Exact Coding. As noted in §2, normalizing flows perform exact encoding and decoding by construction. That is, the decoding operation is exactly the reverse of the encoding operation. Any continuous encoder maps a local neighborhood of a sample to some local neighborhood of its latent representation.
However, the invertibility of normalizing flows also maps any local neighborhood of a latent code back to a local neighborhood of the original sample. This property gives them significant advantages over approximate invertible encoder-decoder methods, including VAE-GANs, for defining perturbations in latent space.

Increasing Effective Dataset Size. The primary advantage of exact coding is that the samples generated via latent perturbations improve the generalization performance of classifiers, as we show in §3.1. To understand why this occurs, consider the limit case $\epsilon \rightarrow 0$ for a latent perturbation. For a numerically stable NF, this implies that we recover the original data samples, and hence the original data manifold. As we increase $\epsilon$ , we "enlarge" our manifold simultaneously around all data samples. Thus, by increasing $\epsilon$ , we add further plausible data points to our training set, as long as the learned latent representation is a good approximation of the underlying data manifold. This does not necessarily hold for approximate mappings, due to inherent decoder noise.

Controllability. In §2, we introduced two variants of latent perturbations with normalizing flows. These two variants define different local objectives around the latent code of the original sample. The randomized latent attack defines a sampling operation on the data manifold, whereas the adversarial latent attack defines a stochastic search procedure to find samples attaining high classifier losses. Here, we exploit the diffeomorphism provided by the normalizing flow to efficiently map a complex sampling operation (sampling from the data manifold) or a complex search operation (finding on-manifold adversarial samples) to the latent space. Combined with the simple prior structure of NFs, this allows for future work on designing efficient algorithms tackling various on-manifold problems, see §5.

Compatibility with Data Augmentations.
It is important to note that our method is orthogonal to image-space data augmentation methods. In other words, we can train normalizing flows with commonly used data augmentations. Indeed, in our experiments we observe that the trained flow models capture some of the training-time augmentations, such as cropping. This allows us to encode and decode augmented samples as well as original samples of CIFAR-10. Additionally, we can use more advanced methods such as DeVries & Taylor (2017); Zhang et al. (2018a) concurrently with our latent perturbations to train classifiers.

## 5. Conclusion

Motivated by the numerous advantages of normalizing flows, we propose flow-based latent perturbation methods that augment the training dataset used to train a classifier. Our empirical results on several real-world datasets demonstrate the efficacy of these generative models for improving test accuracy both in the full and in the low-data regime.

Further directions include exploiting potentially more complex prior structures to design efficient flow-based algorithms tackling on-manifold sampling or optimization problems. For example, using NF models with an explicit parametrization of specific semantic transformations (e.g., zoom or orientation) would enable the training of more robustly generalizing classifiers.

References

Antoniou, A., Storkey, A., and Edwards, H. Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340, 2017.

Ardizzone, L., Lüth, C., Kruse, J., Rother, C., and Köthe, U. Guided image generation with conditional invertible neural networks. arXiv preprint arXiv:1907.02392, 2019.

Baluja, S. and Fischer, I. Adversarial transformation networks: Learning to generate adversarial examples. arXiv preprint arXiv:1703.09387, 2017.

Biggio, B. and Roli, F. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331, 2018. ISSN 0031-3203. doi: https://doi.org/10.1016/j.patcog.2018.07.023.

Cubuk, E.
D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 113-123, 2019.

Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. RandAugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2020.

DeVries, T. and Taylor, G. W. Improved regularization of convolutional neural networks with Cutout. arXiv preprint arXiv:1708.04552, 2017. URL http://arxiv.org/abs/1708.04552.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. In International Conference on Learning Representations (ICLR), 2017.

Fawzi, A., Samulowitz, H., Turaga, D., and Frossard, P. Adaptive data augmentation for image classification. In IEEE International Conference on Image Processing (ICIP), 2016.

Gilmer, J., Metz, L., Faghri, F., Schoenholz, S. S., Raghu, M., Wattenberg, M., and Goodfellow, I. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018.

Jalal, A., Ilyas, A., Daskalakis, C., and Dimakis, A. G. The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196, 2017.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.

Larsen, A. B. L., Sønderby, S. K., and Winther, O. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016.

Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998. ISSN 1558-2256. doi: 10.1109/5.726791.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.

Peng, X., Tang, Z., Yang, F., Feris, R. S., and Metaxas, D. Jointly optimize data augmentation and network training: Adversarial data augmentation in human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

Perez, L. and Wang, J. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621, 2017.

Ratner, A. J., Ehrenberg, H., Hussain, Z., Dunnmon, J., and Ré, C. Learning to compose domain-specific transformations for data augmentation. In Advances in Neural Information Processing Systems (NeurIPS), volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/f26dab9bf6a137c3b6782e562794c2f2-Paper.pdf.

Rosca, M., Lakshminarayanan, B., Warde-Farley, D., and Mohamed, S. Variational approaches for auto-encoding generative adversarial networks. arXiv preprint arXiv:1706.04987, 2017.

Rozsa, A., Günther, M., and Boult, T. E. Are accuracy and robustness correlated. In 15th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 227-232. IEEE Computer Society, 2016. doi: 10.1109/ICMLA.2016.0045. URL https://doi.org/10.1109/ICMLA.2016.0045.

Song, Y., Shu, R., Kushman, N., and Ermon, S. Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems (NeurIPS), volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/8cea559c47e4fbdb73b23e0223d04e79-Paper.pdf.

Stutz, D., Hein, M., and Schiele, B. Disentangling adversarial robustness and generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6976-6987, 2019.
Su, D., Zhang, H., Chen, H., Yi, J., Chen, P., and Gao, Y. Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models. In ECCV, 2018.

Tanay, T. and Griffin, L. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.

Tran, N.-T., Tran, V.-H., Nguyen, N.-B., Nguyen, T.-K., and Cheung, N.-M. On data augmentation for GAN training. arXiv preprint arXiv:2006.05338, 2020.

Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. In International Conference on Learning Representations (ICLR), 2019a.

Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2019b.

Wong, E. and Kolter, J. Z. Learning perturbation sets for robust machine learning. In International Conference on Learning Representations (ICLR), 2021.

Xiao, C., Li, B., Zhu, J.-Y., He, W., Liu, M., and Song, D. Generating adversarial examples with adversarial networks. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), pp. 3905-3911, 2018. doi: 10.24963/ijcai.2018/543. URL https://doi.org/10.24963/ijcai.2018/543.

Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. CutMix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 6023-6032, 2019.

Zagoruyko, S. and Komodakis, N. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, pp. 87.1-87.12, York, UK, 2016. British Machine Vision Association. ISBN 978-1-901725-59-9. doi: 10.5244/C.30.87. URL http://www.bmva.org/bmvc/2016/papers/paper087/index.html.

Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond Empirical Risk Minimization.
In International Conference on Learning Representations (ICLR), 2018a. URL http://arxiv.org/abs/1710.09412.

Zhang, L., Yu, M., Chen, T., Shi, Z., Bao, C., and Ma, K. Auxiliary training: Towards accurate and robust models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a.

Zhang, R., Che, T., Ghahramani, Z., Bengio, Y., and Song, Y. MetaGAN: An adversarial approach to few-shot learning. In Advances in Neural Information Processing Systems (NeurIPS), volume 31. Curran Associates, Inc., 2018b. URL https://proceedings.neurips.cc/paper/2018/file/4e4e53aa080247bc31d0eb4e7aeb07a0-Paper.pdf.

Zhang, X., Wang, Q., Zhang, J., and Zhong, Z. Adversarial AutoAugment. In International Conference on Learning Representations (ICLR), 2020b.

Zhao, Z., Dua, D., and Singh, S. Generating natural adversarial examples. In International Conference on Learning Representations (ICLR), 2018.

## A. Details on the implementation

In this section, we list all details of the implementation.

Source Code. Our source code is anonymously provided in this repository: https://anonymous.4open.science/r/37e44cb8-6c57-4dc2-8bde-2ef5bf844c73/

### A.1. Architectures

Generative model (NF) architecture. We use Glow (Kingma & Dhariwal, 2018) for the normalizing flow architecture. For the MNIST and FMNIST experiments, we use a conditional, 12-step, Glow-coupling-based architecture similar to (Ardizzone et al., 2019). See Table 3 for details. For the CIFAR-10/100 experiments, we use the original Glow architecture described in (Kingma & Dhariwal, 2018), i.e., 3 scales of 32 steps, each containing activation normalization, affine coupling and an invertible $1 \times 1$ convolution. We adapt an existing PyTorch implementation ${}^{2}$ to better match the original TensorFlow implementation ${}^{3}$ . For more details on multi-scale architectures in normalizing flows, see (Dinh et al., 2017).

Table 3.
Normalizing flow architectures used for our experiments on MNIST and FMNIST. With ${c}_{in} \rightarrow {c}_{\text{out }}$ we denote the number of channels of the input and output of the layer. With $\oplus$ we denote the concatenation operation. We use the implementation provided in https://github.com/VLL-HD/FrEIA. For more details on affine coupling layers, see (Kingma & Dhariwal, 2018).

Generative Model:

- Input: $\mathbf{x} \in {\mathbb{R}}^{784},\mathbf{y} \in {\mathbb{R}}^{10}$
- GLOWCouplingBlock:
  - split $\mathbf{x} \rightarrow {\mathbf{x}}_{1},{\mathbf{x}}_{2}$ $\left( {{784} \rightarrow {392},{392}}\right)$
  - subnet ${\mathbf{x}}_{2} \oplus \mathbf{y} \rightarrow {\mathbf{s}}_{1},{\mathbf{t}}_{1}$ $\left( {{402} \rightarrow {392},{392}}\right)$
  - affine coupling ${\mathbf{x}}_{1},{\mathbf{s}}_{1},{\mathbf{t}}_{1} \rightarrow {\mathbf{z}}_{1}$ $\left( {3 \times {392} \rightarrow {392}}\right)$
  - subnet ${\mathbf{z}}_{1} \oplus \mathbf{y} \rightarrow {\mathbf{s}}_{2},{\mathbf{t}}_{2}$ $\left( {{402} \rightarrow {392},{392}}\right)$
  - affine coupling ${\mathbf{x}}_{2},{\mathbf{s}}_{2},{\mathbf{t}}_{2} \rightarrow {\mathbf{z}}_{2}$ $\left( {3 \times {392} \rightarrow {392}}\right)$
  - concat. ${\mathbf{z}}_{1} \oplus {\mathbf{z}}_{2}$ $\left( {{392},{392} \rightarrow {784}}\right)$
Subnets:

- Input: $\mathbf{x} \in {\mathbb{R}}^{402}$
- linear $\left( {{402} \rightarrow {512}}\right)$
- ReLU
- linear $\left( {{512} \rightarrow {784}}\right)$
- split $\left( {{784} \rightarrow {392},{392}}\right)$
The full network alternates GLOWCouplingBlock and PermuteRandom layers, repeating this pair for a total of 12 coupling steps.

Classifier architecture. For our experiments on MNIST, we use LeNet-5 (Lecun et al., 1998) with a replaced nonlinearity (ReLU instead of tanh), and we initialize the network parameters with a truncated normal distribution with $\sigma = {0.1}$ . For the FMNIST experiments, we use the same classifier as in (Stutz et al., 2019). See Table 4 for more details. For CIFAR-10/100, we use the ResNet-18 architecture as implemented in (DeVries & Taylor, 2017; Zhang et al., 2018a). This ResNet-18 includes slight modifications over the standard ResNet-18 architecture in order to achieve better performance on CIFAR-10/100; see ${}^{4}$ and ${}^{5}$ for the implementations. In particular, the first layer is changed from the original $7 \times 7$ convolution with stride 2 and padding 3 to a $3 \times 3$ convolution with stride 1 and padding 1. Additionally, the following max-pooling layer is removed.

### A.2. Hyperparameters

Generative Models. For MNIST and FMNIST, we train the normalizing flows with the Adam (Kingma & Ba, 2014) optimizer, a batch size of 100, and a learning rate of ${10}^{-6}$ for 100 epochs. For CIFAR-10, we use the Adamax (Kingma & Ba, 2014) optimizer with a learning rate of 0.0005 and a weight decay of 0.00005. We use warmup learning rate scheduling for the first 500,000 steps of the training. That is, the learning rate is linearly increased from 0 to the base learning rate 0.0005 over 500,000 steps.

For VAE-GAN training, we run the implementation provided by the authors ${}^{6}$ with the default architectures and parameters.
That

---

${}^{2}$ https://github.com/chrischute/glow

${}^{3}$ https://github.com/openai/glow

${}^{4}$ https://github.com/facebookresearch/mixup-cifar10

${}^{5}$ https://github.com/uoguelph-mlrg/Cutout

${}^{6}$ https://github.com/davidstutz/disentangling-robustness-generalization

---

Table 4. Convolutional Neural Network (CNN) architectures used for our experiments on MNIST and FMNIST. We use ker and pad to denote the kernel and padding of the convolution layers, respectively. With $h \times w$ we denote the kernel size. With ${c}_{in} \rightarrow {c}_{\text{out }}$ we denote the number of channels of the input and output of the layer.
LeNet-5:

- Input: $\mathbf{x} \in {\mathbb{R}}^{1 \times {28} \times {28}}$
- convolution (ker: $5 \times 5,1 \rightarrow 6$ ; stride: 1; pad: 2)
- ReLU
- AvgPool2d (ker: $2 \times 2$ )
- convolution (ker: $5 \times 5,6 \rightarrow {16}$ ; stride: 1; pad: 0)
- ReLU
- AvgPool2d (ker: $2 \times 2$ )
- Flatten $\left( {{16} \times 5 \times 5 \rightarrow {400}}\right)$
- linear $\left( {{400} \rightarrow {120}}\right)$
- ReLU
- linear $\left( {{120} \rightarrow {84}}\right)$
- ReLU
- linear $\left( {{84} \rightarrow {10}}\right)$
- ReLU

CNN from (Stutz et al., 2019):

- Input: $\mathbf{x} \in {\mathbb{R}}^{1 \times {28} \times {28}}$
- convolution (ker: $4 \times 4,1 \rightarrow {16}$ ; stride: 2; pad: 1)
- Batch Normalization
- ReLU
- convolution (ker: $4 \times 4,{16} \rightarrow {32}$ ; stride: 2; pad: 1)
- Batch Normalization
- ReLU
- convolution (ker: $4 \times 4,{32} \rightarrow {64}$ ; stride: 2; pad: 1)
- Batch Normalization
- ReLU
- Flatten $\left( {{64} \times 3 \times 3 \rightarrow {576}}\right)$
- linear $\left( {{576} \rightarrow {100}}\right)$
- linear $\left( {{100} \rightarrow {10}}\right)$

is, for MNIST, we use $\beta = {2.75},\gamma = 1,\eta = 0$ and a latent space size of 10. We use the Adam optimizer with a batch size of 100, a learning rate of 0.005, a weight decay of 0.0001, and train the VAE-GANs for 60 epochs with an exponential decay schedule of 0.9 for the learning rate. For CIFAR-10, we use the provided CelebA setup (the only 3-channel color dataset provided) and thus use $\beta = {3.0}$ , a latent space size of 25, and 30 epochs instead. Note that we report "On-Learned-Manifold Adversarial Training" from (Stutz et al., 2019), which uses class-specific VAE-GANs. That is, 10 separate VAE-GAN architectures are trained for both the FMNIST and CIFAR-10 datasets.

Discussion on Hyperparameters of Generative Models. As normalizing flows directly optimize the log-likelihood of the data, there are no hyperparameters in their loss function. Additionally, the normalizing flow models that we use have a fixed latent dimension equal to the input dimension, due to their architectural design.
As noted in §3.3, this is in contrast to the VAE-GAN used in (Stutz et al., 2019), where the training involves optimizing separate losses for three networks (namely, encoder, decoder and discriminator) concurrently. Coefficients $\beta ,\gamma$ and $\eta$ are used to scale the different loss terms involved, such as the reconstruction, decoder and discriminator losses. Additionally, the latent size for the VAE-GAN is hand-picked for each dataset.

Classifiers. We list the hyperparameters used for classifier training in our experiments on FMNIST and CIFAR-10 in §3. For MNIST, we use the Adam optimizer with a learning rate of 0.001 and a weight decay of 0.001. We train the LeNet-5 classifiers for 20 epochs with an exponential learning rate decay of 0.1 every 10,000 steps.

Data Augmentation. For CIFAR-10/100, we use standard data augmentation akin to (Zagoruyko & Komodakis, 2016). That is, we zero-pad images with 4 pixels on each side, take a random crop of size ${32} \times {32}$ , and then mirror the resulting image horizontally with ${50}\%$ probability. We use this data augmentation for training both the generative and the classifier models. Hence, our normalizing flow model is capable of encoding-decoding operations on augmented samples as well. The advanced data augmentation baselines we use in Table 1 (DeVries & Taylor, 2017; Zhang et al., 2018a) also include the same standard data augmentations. However, (Stutz et al., 2019) do not use data augmentation in their generative model. To provide a more direct comparison between the performance of the two generative models, in §B.2 we conduct an additional study without any data augmentations.

## B. Additional Results

### B.1. Results on MNIST

Table 5 summarizes our results on MNIST in the full data regime. Although the baseline already performs very well on this dataset, we observe improved generalization.

Table 5. Train and test accuracy (%) as well as loss on MNIST.
Comparison of standard training versus our latent-space perturbations (${\mathcal{P}}_{\text{rand}}$ and ${\mathcal{P}}_{adv}$).
| Perturbation | Train Accuracy | Train Loss | Test Accuracy | Test Loss |
| --- | --- | --- | --- | --- |
| Standard | 99.80 | 0.0069 | 99.24 | 0.0288 |
| ${\mathcal{P}}_{rand}^{{\ell }_{\infty }}, \epsilon = 0.15$ | 99.78 | 0.0076 | 99.28 | 0.0262 |
| ${\mathcal{P}}_{adv}^{{\ell }_{\infty }}, \epsilon = 0.05, \alpha = 0.01, k = 10$ | 99.26 | 0.0230 | 99.43 | 0.0216 |
### B.2. Additional Results on CIFAR-10

Results without Data Augmentation. To provide a direct comparison between the two generative models and eliminate the effect of data augmentation, we run additional experiments. Table 6 shows results for our latent perturbations without any data augmentation for training the normalizing flow and the classifier. In line with our FMNIST results in §3.3, we observe that both the randomized and the adversarial latent attack outperform the standard baseline and the VAE-GAN based approach.

Table 6. Test accuracy (%) on CIFAR-10, in the low-data regime (5% of training samples) without any data augmentation.
| $\mathbf{{Method}}$ | Low-data |
| --- | --- |
| Standard | 49.8 |
| VAE-GAN (Stutz et al., 2019) | 49.4 |
| Randomized-LA (ours) | 54.9 |
| Adversarial-LA (ours) | 58.2 |
Results with Different Attack Parameters. In Table 7 we provide results with varying hyperparameters for the different attacks. Observe that in the "higher" Adversarial-LA perturbation setting, where $\epsilon = 2.0$, the classifier still did not fully fit the training set, but the test performance is above the standard baseline.

Multi-step Training. We run additional experiments where we sequentially apply different attack hyperparameters in a multi-step training with weaker perturbations to increase the performance on the test set. The results are listed in Table 7, denoted with +.

Table 7. Train and test accuracy (%) as well as loss on CIFAR-10, using ResNet-18. All of the models are trained with the same hyperparameters listed in §A.2. Perturbations listed with the + sign indicate multi-step training. For example, the last row lists the result of the model trained with ${P}_{adv}^{{\ell }_{2}}, \epsilon = 2.0, \alpha = 1.5, k = 2$ for 130 epochs, ${P}_{rand}, \epsilon = 0.25$ for 40 epochs and ${P}_{rand}^{{\ell }_{2}}, \epsilon = 10.0$ for 30 epochs. Note that, regardless of multi-step training, the hyperparameters, including the total number of training epochs ($= 200$), remain fixed across the experiments.
| Perturbation | Train Accuracy | Train Loss | Test Accuracy | Test Loss |
| --- | --- | --- | --- | --- |
| **Baselines:** | | | | |
| Standard | 100.0 | 0.002 | 95.2 | 0.194 |
| ${P}_{PGD}^{{\ell }_{2}}, \epsilon = 2.0, \alpha = 0.5, k = 10$ | 61.13 | 0.895 | 75.7 | 0.731 |
| ${P}_{PGD}^{{\ell }_{\infty }}, \epsilon = 0.03, \alpha = 0.008, k = 10$ | 77.3 | 0.521 | 86.3 | 0.442 |
| **Ours:** | | | | |
| ${\mathcal{P}}_{rand}^{{\ell }_{2}}, \epsilon = 10.0$ | 99.8 | 0.007 | 95.8 | 0.161 |
| ${\mathcal{P}}_{rand}^{{\ell }_{\infty }}, \epsilon = 0.25$ | 99.5 | 0.015 | 96.3 | 0.142 |
| $+ {\mathcal{P}}_{rand}, \epsilon = 0.15$ | 100.0 | 0.002 | 96.4 | 0.133 |
| ${\mathcal{P}}_{adv}^{{\ell }_{2}}, \epsilon = 1.0, \alpha = 0.5, k = 3$ | 99.9 | 0.005 | 96.6 | 0.126 |
| ${\mathcal{P}}_{adv}^{{\ell }_{2}}, \epsilon = 2.0, \alpha = 1.5, k = 2$ | 89.1 | 0.214 | 95.8 | 0.134 |
| $+ {\mathcal{P}}_{adv}^{{\ell }_{2}}, \epsilon = 1.0, \alpha = 0.75, k = 2$ | 99.2 | 0.030 | 96.5 | 0.114 |
| $+ {\mathcal{P}}_{adv}^{{\ell }_{2}}, \epsilon = 0.75, \alpha = 0.5, k = 2$ | 99.7 | 0.011 | 96.7 | 0.115 |
| $+ {\mathcal{P}}_{rand}, \epsilon = 0.25$ | 100.0 | 0.002 | 96.5 | 0.132 |
| $+ {\mathcal{P}}_{rand}^{{\ell }_{2}}, \epsilon = 10.0$ | 100.0 | 0.002 | 96.6 | 0.131 |
+ diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mj6qILYHjbS/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mj6qILYHjbS/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..13edba63c340bb0f1fae4384f143ec36275b4f28 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mj6qILYHjbS/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,210 @@ +§ SEMANTIC PERTURBATIONS WITH NORMALIZING FLOWS FOR IMPROVED GENERALIZATION + +Anonymous Authors ${}^{1}$ + +§ ABSTRACT + +Several methods from two separate lines of works, namely, data augmentation (DA) and adversarial training techniques, rely on perturbations done in latent space. Often, these methods are either non-interpretable due to their non-invertibility or are notoriously difficult to train due to their numerous hyperparameters. We exploit the exactly reversible encoder-decoder structure of normalizing flows to perform perturbations in the latent space. We demonstrate that these on-manifold perturbations match the performance of advanced DA techniques-reaching ${96.6}\%$ test accuracy for CIFAR-10 using ResNet-18 and outperform existing methods particularly in low data regimes-yielding ${10} - {25}\%$ relative improvement of test accuracy from classical training. We find our latent adversarial perturbations, adaptive to the classifier throughout its training, are most effective. + +§ 1. INTRODUCTION + +Successfully applying Deep Neural Networks (DNNs) in real world tasks in large part depends on the availability of large annotated datasets for the task at hand. Thus, besides several overfitting techniques, data-augmentation (DA) often remains a "mandatory" step when deploying DNNs in practice. 
Traditional DA techniques consist of applying a predefined set of transformations to the training samples that do not change the class label. As this approach is limited to making the classifier robust solely to the fixed set of hard-coded transformations, advanced methods incorporate more loosely defined transformations in the data space (Zhang et al., 2018a; DeVries & Taylor, 2017; Yun et al., 2019). Furthermore, recently proposed DA techniques exploit the latent space to perform such transformations (Antoniou et al., 2017; Zhao et al., 2018), while typically addressing the model's non-invertibility by training a separate model (Zhao et al., 2018), thus making them hard to train.

A separate line of work focuses on adversarial training (see (Biggio & Roli, 2018) and references therein), where the final model is trained with samples perturbed in a way that makes its prediction incorrect, called adversarial samples. However, further empirical studies showed that such training reduces the accuracy on "clean" samples, indicating that the two objectives are competing (Tsipras et al., 2019b; Su et al., 2018). Stutz et al. (2019) postulate that this robustness-generalization trade-off appears due to using off-manifold adversarial attacks that leave the data manifold, and that 'on-manifold adversarial attacks' can improve generalization. Thus, the authors proposed to use perturbations in the latent space of a generative model, VAE-GAN (Larsen et al., 2016; Rosca et al., 2017). However, as this method relies on the VAE-GAN model, which is particularly hard to train, since in addition to GAN training it involves hard-to-tune hyperparameters balancing the VAE and GAN objectives, its usage remained limited.

Motivated by the advantages of normalizing flows (NF) relevant to these two lines of work, we employ NFs (e.g.
Glow, Kingma & Dhariwal, 2018) to define entirely unsupervised augmentations, in contrast to pre-defined fixed transformations, with the same goal of improving the generalization of deep classifiers. In particular, NF models offer appealing advantages for latent space perturbations, such as: (i) exact latent-variable inference and log-likelihood evaluation, and (ii) efficient inference and synthesis that can be parallelized (Kingma & Dhariwal, 2018).

Related works. Several works learn useful DA policies, for instance by optimization (Fawzi et al., 2016; Ratner et al., 2017), Reinforcement Learning techniques (Cubuk et al., 2019; 2020; Zhang et al., 2020b), specifically trained generator networks (Peng et al., 2018), or assisted by generative adversarial networks (Perez & Wang, 2017; Antoniou et al., 2017; Zhang et al., 2018b; Tran et al., 2020). Several methods traverse the latent space to find virtual data samples that are misclassified (Baluja & Fischer, 2017; Song et al., 2018; Xiao et al., 2018; Zhang et al., 2020a). Complementarily, the connection between adversarial learning and generalization has also been studied in (Tanay & Griffin, 2016; Rozsa et al., 2016; Jalal et al., 2017; Tsipras et al., 2019a; Gilmer et al., 2018; Zhao et al., 2018).

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

Figure 1. Exactness of NF encoding-decoding. Here $\mathcal{F}$ denotes the bijective NF model, and $\mathcal{G}/{\mathcal{G}}^{-1}$ the encoder/decoder pair of inexact methods such as VAE or VAE-GAN which, due to inherent decoder noise, is only approximately bijective.

Overview of contributions. We exploit the exactly reversible encoder-decoder structure of NFs to perform efficient and controllable augmentations in the learned on-manifold space.
We demonstrate that our on-manifold perturbations consistently outperform standard training on CIFAR-10/100 using ResNet-18. Moreover, in a low-data regime, such training yields up to a 25% relative improvement over classical training; we find the adversarial perturbations, which adapt to the classifier throughout its training, to be the most effective, see §3.

§ 2. PERTURBATIONS IN LATENT SPACE USING NORMALIZING FLOWS

The invertibility of normalizing flows enables bidirectional transitions between image and latent space, see Figure 1. This in turn allows for applying perturbations directly in the latent space rather than in the image space. We denote a trained normalizing flow model mapping the data manifold $\mathcal{X}$ to the latent space $\mathcal{Z}$ as $\mathcal{F} : \mathcal{X} \rightarrow \mathcal{Z}$. Given a perturbation function $\mathcal{P} : \mathcal{Z} \rightarrow \mathcal{Z}$ defined over the latent space, we define its counterpart in image space as ${\mathcal{F}}^{-1}\left( \mathcal{P}\left( \mathcal{F}\left( \mathbf{x}\right) \right) \right)$.

Our goal is to define a latent perturbation function $\mathcal{P}\left( \mathbf{z}\right)$ such that we obtain identity-preserving semantic modifications of the original image $\mathbf{x}$ in the image domain. To this end, we limit the structure of possible $\mathcal{P}\left( \mathbf{z}\right)$ in two ways. Firstly, we only consider incremental perturbations of the form $\mathbf{z} + \mathcal{P}\left( \mathbf{z}\right)$. Secondly, we use a parameter $\epsilon$ to control the size of the allowed perturbation. More precisely, we have:

$$
{\mathcal{F}}^{-1}\left( \mathbf{z} + \mathcal{P}\left( \mathbf{z},\epsilon \right) \right) ,\;\text{ with }\mathbf{z} = \mathcal{F}\left( \mathbf{x}\right) .
$$

For brevity, we refer to $\mathcal{P}\left( \mathbf{z}\right)$ as "latent attacks" (LA) and consider the two variants described below.

§ 2.1. RANDOMIZED LATENT ATTACKS

At training time, given a datapoint ${\mathbf{x}}_{i}$, with $1 \leq i \leq n$, we use the trained normalizing flow to obtain its corresponding latent code ${\mathbf{z}}_{i} = \mathcal{F}\left( {\mathbf{x}}_{i}\right)$.

As perturbation function, we primarily consider simplistic Gaussian noise in the latent space:

$$
{\mathcal{P}}_{\text{rand}}\left( {\cdot ,\epsilon }\right) = \epsilon \cdot \mathcal{N}\left( {0,\mathbf{I}}\right) ,\;\text{ (Randomized-LA) }
$$

which is independent of ${\mathbf{z}}_{i}$. Sampling from such a distribution around the original ${\mathbf{z}}_{i}$ is equivalent to sampling from the learned manifold: the normalizing flow pushes forward this simple Gaussian distribution centered around ${\mathbf{z}}_{i}$ to a distribution on the image space around ${\mathbf{x}}_{i} = {\mathcal{F}}^{-1}\left( {\mathbf{z}}_{i}\right)$. Thus, sampling from the simple prior distribution $\mathcal{N}\left( {0,\mathbf{I}}\right)$ is equivalent to sampling from a complex conditional distribution around the original image over the data manifold.

We also define norm-truncated versions as follows:

$$
{\mathcal{P}}_{\text{rand}}^{{\ell }_{p}}\left( {\cdot ,\epsilon }\right) = \Pi \left( {\epsilon \cdot \mathcal{N}\left( {0,\mathbf{I}}\right) }\right) ,
$$

where ${\ell }_{p}$ denotes the selected norm, e.g. ${\ell }_{2}$ or ${\ell }_{\infty }$. For the ${\ell }_{2}$ norm, $\Pi$ is defined as ${\ell }_{2}$ norm scaling, and for ${\ell }_{\infty }$, $\Pi$ is the component-wise clipping operation defined below:

$$
{\left( \Pi \left( \mathbf{x}\right) \right) }_{i} \mathrel{\text{ := }} \max \left( {-\epsilon ,\min \left( {+\epsilon ,{\mathbf{x}}_{i}}\right) }\right) .
$$

§ 2.2.
ADVERSARIAL LATENT ATTACKS

Analogously to the randomized latent attacks above, at train time, given a datapoint ${\mathbf{x}}_{i}$ and its associated label ${l}_{i}$, with $1 \leq i \leq n$, we use the trained normalizing flow to obtain the corresponding latent code ${\mathbf{z}}_{i} = \mathcal{F}\left( {\mathbf{x}}_{i}\right)$.

We search for ${\Delta }_{{\mathbf{z}}_{i}} \in \mathcal{Z}$ such that the loss of the generated image ${\widetilde{\mathbf{x}}}_{i} = {\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}}\right)$ is maximal:

$$
{\Delta }_{{\mathbf{z}}_{i}}^{ \star } = \underset{{\begin{Vmatrix}{\Delta }_{{\mathbf{z}}_{i}}\end{Vmatrix}}_{{\ell }_{p}} \leq \epsilon }{\arg \max }{\mathcal{L}}_{\theta }\left( {{\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}}\right) ,{l}_{i}}\right) ,
$$

$$
{\mathcal{P}}_{\text{adv}}^{{\ell }_{p}}\left( {{\mathbf{z}}_{i},\epsilon }\right) = {\Delta }_{{\mathbf{z}}_{i}}^{ \star },\;\text{ (Adversarial-LA) }
$$

where ${\mathcal{L}}_{\theta }$ is the loss function of the classifier, and ${\ell }_{p}$ denotes the selected norm, e.g. ${\ell }_{2}$ or ${\ell }_{\infty }$.

In practice, we fix the number of optimization steps $k$ for ${\Delta }_{{\mathbf{z}}_{i}}^{ \star } \in \mathcal{Z}$, as well as the step size $\alpha$ (as in Stutz et al. (2019); Wong & Kolter (2021)), and use the following procedure:

* Initialize a random ${\Delta }_{{\mathbf{z}}_{i}}^{0}$ with ${\begin{Vmatrix}{\Delta }_{{\mathbf{z}}_{i}}^{0}\end{Vmatrix}}_{{\ell }_{p}} \leq \epsilon$ .
+ + * Iteratively update ${\Delta }_{{\mathbf{z}}_{i}}^{j}$ for $j = 1,\ldots ,k$ number of steps with step size $\alpha$ as follows: + +$$ +{\Delta }_{{\mathbf{z}}_{i}}^{j} = \Pi \left( {{\Delta }_{{\mathbf{z}}_{i}}^{j - 1} + \alpha \cdot \frac{\nabla {\mathcal{L}}_{\theta }\left( {{\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}^{j - 1}}\right) ,{l}_{i}}\right) }{{\begin{Vmatrix}\nabla {\mathcal{L}}_{\theta }\left( {\mathcal{F}}^{-1}\left( {\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}^{j - 1}\right) ,{l}_{i}\right) \end{Vmatrix}}_{{\ell }_{p}}}}\right) +$$ + +where $\Pi$ is the projection operator that ensures ${\begin{Vmatrix}{\Delta }_{{\mathbf{z}}_{i}}^{j}\end{Vmatrix}}_{{\ell }_{p}} \leq \epsilon$ and the gradient is with respect to ${\Delta }_{{\mathbf{z}}_{i}}^{j}$ . + +Table 1. Test accuracy (%) on CIFAR-10, in the low-data regime compared to the full train set. For the former, we use 5% and 100% of the training and test set, respectively. In addition to standard training, we consider standard training with commonly used data augmentations (DA) in the image space, which includes rotation and horizontal flips (Zagoruyko & Komodakis, 2016), as well as more recent Cutout (DeVries & Taylor, 2017) and Mixup (Zhang et al.,2018a) methods. See $§{3.1}$ for a discussion. 
| $\mathbf{{Method}}$ | Low-data | Full-set |
| --- | --- | --- |
| Standard (no DA) | 49.8 | 89.7 |
| Standard + common DA | 64.1 | 95.2 |
| VAE-GAN (Stutz et al., 2019) | 58.9 | 94.2 |
| Cutout (DeVries & Taylor, 2017) | 66.8 | 96.0 |
| Mixup (Zhang et al., 2018a) | 73.4 | 95.9 |
| Randomized-LA (ours) | 70.1 | 96.3 |
| Adversarial-LA (ours) | 80.4 | 96.6 |

* Output ${\mathcal{P}}_{adv}\left( {{\mathbf{z}}_{i},\epsilon }\right) = {\Delta }_{{\mathbf{z}}_{i}}^{k}$

For the case of ${\ell }_{\infty }$, we replace the normalization of the gradient with the $\operatorname{sign}\left( \cdot \right)$ operator, i.e.:

$$
{\Delta }_{{\mathbf{z}}_{i}}^{j} = \Pi \left( {{\Delta }_{{\mathbf{z}}_{i}}^{j - 1} + \alpha \cdot \operatorname{sign}\left( {\nabla {\mathcal{L}}_{\theta }\left( {{\mathcal{F}}^{-1}\left( {{\mathbf{z}}_{i} + {\Delta }_{{\mathbf{z}}_{i}}^{j - 1}}\right) ,{l}_{i}}\right) }\right) }\right)
$$

and use component-wise clipping for the projection, equivalent to the standard ${\ell }_{\infty }$ PGD adversarial attack of Madry et al.

Similarly, as NFs directly model the underlying data manifold, this perturbation is equivalent to a search over the on-manifold adversarial samples (Stutz et al., 2019).

§ 3. EXPERIMENTS

§ 3.1. GENERALIZATION ON CIFAR-10

We are primarily interested in the performance of our perturbations in the low-data regime, when using only a small subset of CIFAR-10 as the training set. We train ResNet-18 classifiers on only 5% of the full training set and evaluate the models on the full test set.

We compare our methods with some of the most commonly used data augmentation methods, such as Cutout (DeVries & Taylor, 2017) and Mixup (Zhang et al., 2018a), as well as with the VAE-GAN based approach (Stutz et al., 2019). For (Stutz et al., 2019), we use the authors' implementation.

For (DeVries & Taylor, 2017), we report the best test accuracy observed among a grid search on the learning rate $\eta \in \{ 0.1, 0.01\}$.
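As a concrete illustration, the $k$-step Adversarial-LA procedure of §2.2 with the $\ell_2$ update can be sketched in a few lines of NumPy. The affine `flow`/`flow_inv` pair and the logistic `loss` below are toy stand-ins for the trained normalizing flow and classifier used in the paper; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained normalizing flow F : X -> Z (elementwise affine,
# hence trivially invertible; a real setup would use e.g. Glow).
scale, shift = 1.5, 0.2
flow = lambda x: (x - shift) / scale            # F(x) = z
flow_inv = lambda z: scale * z + shift          # F^{-1}(z) = x

# Toy classifier: logistic loss for a fixed label, with its input gradient.
w = rng.normal(size=8)
loss = lambda x: np.logaddexp(0.0, -(w @ x))    # -log sigmoid(w.x)
loss_grad = lambda x: -w / (1.0 + np.exp(w @ x))

def adversarial_la(x, eps=1.0, alpha=0.5, k=3):
    """Adversarial-LA sketch: k steps of normalized gradient ascent on the
    latent offset Delta, projected back onto the l2 ball of radius eps."""
    z = flow(x)
    delta = rng.normal(size=x.shape)
    delta *= eps / max(np.linalg.norm(delta), 1e-12)   # random init, ||Delta|| <= eps
    for _ in range(k):
        # Chain rule through F^{-1}; for the affine toy flow the Jacobian is scale*I.
        g = scale * loss_grad(flow_inv(z + delta))
        delta = delta + alpha * g / max(np.linalg.norm(g), 1e-12)
        norm = np.linalg.norm(delta)
        if norm > eps:                                  # projection Pi onto the eps-ball
            delta *= eps / norm
    return delta

x0 = rng.normal(size=8)
delta = adversarial_la(x0)
x_adv = flow_inv(flow(x0) + delta)                      # perturbed on-manifold sample
```

Because the coding is exact, `flow_inv(flow(x0))` recovers `x0` up to floating-point error, so the budget $\epsilon$ is enforced directly in latent space.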
Similarly, for (Zhang et al., 2018a), we report the best accuracy among a grid search on the learning rate $\eta \in \{ 0.1, 0.01\}$ and the mixup coefficient $\lambda \in \{ 0.1, 0.2, 0.3, 0.4, 1.0\}$. For Randomized-LA, we use $\ell = {\ell }_{\infty }$, $\epsilon = 0.25$, and for Adversarial-LA, we use $\ell = {\ell }_{2}$, $\epsilon = 1.0$, $\alpha = 0.5$, $k = 3$.

Table 2. Cross-dataset experiments. Test accuracy (%) on CIFAR-100, in the low-data regime, where we use 10% of the training set and the full test set. The normalizing flow used to generate training samples is trained on CIFAR-10.

| $\mathbf{{Method}}$ | Test | $\mathbf{{Improvement}}$ |
| --- | --- | --- |
| Standard | 36.4 | - |
| Randomized-LA, $\ell = {\ell }_{\infty }, \epsilon = 0.2$ | 39.7 | +3.3 |
| Randomized-LA, $\ell = {\ell }_{\infty }, \epsilon = 0.3$ | 41.0 | +4.6 |
| Randomized-LA, $\ell = {\ell }_{2}, \epsilon = 10$ | 40.4 | +4.0 |
| Randomized-LA, $\ell = {\ell }_{2}, \epsilon = 20$ | 42.3 | +5.9 |
| Adversarial-LA, $\ell = {\ell }_{2}, \alpha = 0.5, k = 3$ | 45.0 | +8.6 |

Table 1 summarizes our generalization experiments in the low-data regime, using only 5% of CIFAR-10 for training, compared to the full CIFAR-10 training set. Both Randomized-LA and Adversarial-LA notably outperform the standard training baseline. In particular, we observe that (i) our simplistic Randomized-LA method already outperforms some recent strong data augmentation methods, and (ii) Adversarial-LA achieves the best test accuracy in both the low-data and full-set regimes. See §3.3 below for additional benchmarks against VAE-GAN (Stutz et al., 2019).

§ 3.2. CROSS-DATASET EXPERIMENTS

To further analyze potential applications of our NF based latent attacks to real-world use cases, we conduct the following experiment. Assuming a relevant large-scale dataset is available, the question arises whether the NF within our approach can be pre-trained on it and then used for training the classifier on a different dataset.
In particular, we use CIFAR-10 to train the NF model, and then our latent attacks to train a classifier on 10% of the CIFAR-100 dataset.

Table 2 shows our results for a selection of latent attacks. Randomized-LA and Adversarial-LA achieve 16% and 24% relative improvements over the standard baseline, respectively. The results indicate that normalizing flows are capable of transferring useful augmentations across datasets.

§ 3.3. ADDITIONAL COMPARISON WITH VAE-GAN

We study the performance of our latent perturbation-based training strategies in varying settings, from the low-data regime to the full set. We reproduce the classifier and the hyperparameter setup used in (Stutz et al., 2019), and use an analogous setup for our method. For the reported VAE-GAN results, we used the source code provided by the authors ${}^{1}$. For our Randomized-LA, we use perturbations of size $\epsilon = 0.15$, and for Adversarial-LA, we use $\epsilon = 0.05$, $\alpha = 0.01$ and $k = 10$ steps.

${}^{1}$ https://github.com/davidstutz/disentangling-robustness-generalization

Figure 2. Test accuracy (y-axis) on the full test set, for a varying number of training samples (x-axis), on FashionMNIST. To replicate the setup of VAE-GAN (Stutz et al., 2019), only a portion of the dataset (x-axis) is used to train the classifier, while the corresponding generative model is trained on the full dataset. We run each experiment with three different random seeds, and report the mean and standard deviation of the test accuracy. See §3.3.

Figure 2 shows our average results over 3 runs with training sizes in $\{ 600, 2400, 6000, 50000\}$. We observe that Randomized-LA performs comparably to the standard training baseline, whereas Adversarial-LA outperforms the standard baseline across all train set sizes. Note that the difference to the standard baseline shrinks as we increase the number of samples available to the classifiers.
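The Randomized-LA perturbations evaluated above (e.g. $\epsilon = 0.15$) amount to $\epsilon$-scaled Gaussian noise truncated by the operator $\Pi$ from §2.1; a minimal NumPy sketch, where the `rng` and the latent codes are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_la(z, eps, norm="linf"):
    """Randomized-LA sketch: eps-scaled Gaussian noise in latent space,
    truncated to the chosen norm ball by the operator Pi (Sec. 2.1)."""
    noise = eps * rng.normal(size=z.shape)
    if norm == "linf":
        return np.clip(noise, -eps, eps)         # component-wise clipping
    n = np.linalg.norm(noise)                     # l2: rescale onto the ball
    return noise if n <= eps else noise * (eps / n)

z = rng.normal(size=(4, 8))                       # stand-in latent codes z_i = F(x_i)
pert = randomized_la(z, eps=0.15)                 # add to z, then decode with F^{-1}
```

The perturbed codes `z + pert` would then be decoded by the flow to obtain augmented training images.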
In line with our results, Stutz et al. (2019) report diminishing performance gains when moving to increasingly challenging datasets, from FashionMNIST to CelebA, with their VAE-GAN based approach. One potential cause could be the approximate encoding and decoding mappings, and/or the sensitivity to hyper-parameter tuning. Indeed, our results support the numerous appealing advantages of NF models for latent space perturbations, and indicate that they have a better capacity to produce useful augmented training samples.

§ 4. DISCUSSION

Exact Coding. As noted in §2, normalizing flows perform exact encoding and decoding by construction. That is, the decoding operation is exactly the reverse of the encoding operation. Any continuous encoder maps a local neighborhood of a sample to some local neighborhood of its latent representation. However, the invertibility of normalizing flows also maps any local neighborhood of a latent code to a local neighborhood of the original sample. This property has significant advantages over approximate invertible encoder-decoder methods, including VAE-GANs, for defining perturbations in latent space.

Increasing Effective Dataset Size. The primary advantage of exact coding is that the samples generated via latent perturbations improve the generalization performance of classifiers, as we show in §3.1. To understand why this occurs, consider the limit case $\epsilon \rightarrow 0$ for a latent perturbation. For a numerically stable NF, this implies that we recover the original data samples, and hence the original data manifold. As we increase $\epsilon$, we "enlarge" our manifold simultaneously around all data samples. Thus, by increasing $\epsilon$, we add further plausible data points to our training set, as long as the learned latent representation is a good approximation of the underlying data manifold. This does not necessarily hold for approximate mappings due to inherent decoder noise.

Controllability.
In §2, we introduced two variants of latent perturbations with normalizing flows. These two variants define different local objectives around the latent code of the original sample. The randomized latent attack defines a sampling operation on the data manifold, whereas the adversarial latent attack defines a stochastic search procedure to find samples attaining high classifier losses. Here, we exploit the diffeomorphism provided by the normalizing flow to efficiently map a complex sampling operation (sampling from the data manifold) or a complex search operation (finding on-manifold adversarial samples) to the latent space. Combined with the simple prior structure of NFs, this allows for future work on designing efficient algorithms tackling various on-manifold problems (§5).

Compatibility with Data Augmentations. It is important to note that our method is orthogonal to image space data augmentation methods. In other words, we can train normalizing flows with commonly used data augmentations. Indeed, in our experiments, we observe that trained models apply some of the training-time augmentations such as cropping. This allows us to encode and decode augmented samples as well as original samples of CIFAR-10. Additionally, we can use more advanced methods such as DeVries & Taylor (2017); Zhang et al. (2018a) concurrently with our latent perturbations to train classifiers.

§ 5. CONCLUSION

Motivated by the numerous advantages of normalizing flows, we propose flow-based latent perturbation methods that augment the training datasets used to train a classifier. Our empirical results on several real-world datasets demonstrate the efficacy of these generative models for improved test accuracy in both the full and the low-data regimes.

Further directions include exploiting potentially more complex prior structures to design efficient flow-based algorithms tackling on-manifold sampling or optimization problems.
For example, using NF models with explicit parametrization of specific semantic transformations (e.g., zoom or orientation) would enable the training of more robustly generalizing classifiers. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/msCiI5dejr/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/msCiI5dejr/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..38530ca8df641903c1c95c70c9c78275eb622b43 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/msCiI5dejr/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,635 @@ +# Generalization of the Change of Variables Formula with Applications to Residual Flows + +## Anonymous Authors ${}^{1}$ + +## Abstract + +Normalizing flows leverage the Change of Variables Formula (CVF) to define flexible density models. Yet, the requirement of smooth transformations (diffeomorphisms) in the CVF poses a significant challenge in the construction of these models. To enlarge the design space of flows, we introduce $\mathcal{L}$ -diffeomorphisms as generalized transformations which may violate these requirements on zero Lebesgue-measure sets. This relaxation allows e.g. the use of non-smooth activations functions such as ReLU. Finally, we apply the obtained results to planar, radial, and contractive residual flows. + +## 1. Introduction + +The term normalizing flow refers to a concatenation of arbitrarily many simple transformations such that together they describe a transformation of desired flexibility and expressiveness. Formally, a transformation $f : Z \rightarrow X$ denotes a diffeomorphism, i.e., a bijective mapping where both $f$ and ${f}^{-1}$ are continuously differentiable. 
The crucial reason why transformations are considered in normalizing flows is the validity of the Change of Variables Formula (CVF) for a probability density ${p}_{Z}$ on $Z$ described by + +$$ +{\int }_{{f}^{-1}\left( A\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) = {\int }_{A}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) , \tag{1} +$$ + +where ${p}_{f}\left( x\right) \mathrel{\text{:=}} {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right|$ and $A \subseteq X$ . This formula provides an explicit expression of the density ${p}_{f}$ induced by $f$ on the target space $X$ , which includes the determinant of the Jacobian as a volume correction term. + +Based on this definition of a transformation, however, it is generally not accurate to use non-smooth activations, such as ReLU, Leaky ReLU, or ELU with $\alpha \neq 1$ , in the design of normalizing flows. These usually cause the flow to become non-differentiable on a set with no volume w.r.t. the Lebesgue measure ${\lambda }^{n}$ , hence no diffeomorphism. In measure theory, these sets are called ${\lambda }^{n}$ -null sets or only null sets for short and are negligible in integration. + +We demonstrate that the requirements for a flow can be significantly weakened by excluding null sets from the base and target space while preserving the validity of the CVF. There are remarks on using almost everywhere (a.e.) differentiable activation functions in Kobyzev et al. (2020) or Kong & Chaudhuri (2020), yet both works lack a proof of the validity of such transformations to define flows. In our work, we provide such proofs for an even more general statement. At the same time, we discuss the probabilistic background of normalizing flows to induce a well-defined density in the end. Furthermore, we point out that not every generalization of the CVF found in the mathematical literature is immediately suitable for flows. 
Finally, we put a special emphasis on the applications to residual flows. In doing so, we prove that non-smooth activations are also valid for both planar and radial flows (Rezende & Mohamed, 2015), as well as for contractive residual flows (Behrmann et al., 2019). + +## 2. Background on CVF in Probability Theory + +The basic idea behind normalizing flows is to transform a known and tractable probability space into a more complex one. Mathematically, a probability space $\left( {Z,{\mathcal{A}}_{Z},{\mathbb{P}}_{Z}}\right)$ is composed of a set $Z$ equipped with a $\sigma$ -algebra and a probability measure ${\mathbb{P}}_{Z}$ . In the target space $\left( {X,{\mathcal{A}}_{X},{\mathbb{P}}_{\text{data }}}\right)$ , only the set and $\sigma$ -algebra are fixed, and the data distribution ${\mathbb{P}}_{\text{data }}$ is unknown. For simplicity, we only consider open subsets of ${\mathbb{R}}^{n}$ and trace $\sigma$ -algebras of the Lebesgue algebra $\mathcal{L}$ in the following, i.e., ${\mathcal{A}}_{Z} = \mathcal{L}\left( Z\right)$ and ${\mathcal{A}}_{X} = \mathcal{L}\left( X\right)$ . The trace $\sigma$ -Algebra is a restricted $\sigma$ -Algebra on a subset defined by $\mathcal{L}\left( Z\right) \mathrel{\text{:=}} \{ A \cap Z \mid A \in \mathcal{L}\}$ . Besides, we assume that the distribution ${\mathbb{P}}_{Z}$ is absolutely continuous w.r.t. the $n$ -dimensional Lebesgue measure ${\lambda }^{n}$ , i.e., ${\lambda }^{n}$ -null sets have a ${\mathbb{P}}_{Z}$ -probability of zero. Therefore, the existence of a probability density ${p}_{Z}$ follows by Radon-Nikodym’s theorem (Bogachev, 2006, Sec. 3.2). For more information on measure theory, see Bogachev (2006) or Elstrodt (2013). + +--- + +${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author . + +Preliminary work. Under review by INNF+ 2021. Do not distribute. 
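As a quick numerical illustration of the CVF in eq. (1), one can check a simple one-dimensional diffeomorphism: the choice $f(z) = e^z$ with a standard normal base density is ours for illustration only and does not appear in the paper.

```python
import numpy as np
from math import erf, sqrt

# Base density p_Z: standard normal on Z = R; Phi is its CDF.
p_Z = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Diffeomorphism f(z) = exp(z) : R -> (0, inf), so f^{-1}(x) = log x and
# |det J_{f^{-1}}(x)| = 1/x; the CVF gives the pushforward density p_f.
p_f = lambda x: p_Z(np.log(x)) / x

# Check eq. (1) on A = (a, b): the integral of p_f over A must equal the
# base probability of the preimage f^{-1}(A) = (log a, log b).
a, b = 0.5, 3.0
xs = np.linspace(a, b, 20001)
ys = p_f(xs)
lhs = float(np.sum((ys[1:] + ys[:-1]) / 2 * np.diff(xs)))   # trapezoid rule
rhs = Phi(np.log(b)) - Phi(np.log(a))
```

Here `lhs` and `rhs` agree up to the quadrature error of the trapezoid rule, as eq. (1) predicts.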
---

At the lowest level, a transformation $f : Z \rightarrow X$ has to be at least an ${\mathcal{A}}_{Z}$ - ${\mathcal{A}}_{X}$ -measurable mapping (i.e., a random variable) in order to induce a distribution on the target space via the so-called pushforward measure:

$$
{\mathbb{P}}_{f}\left( A\right) \mathrel{\text{:=}} {\mathbb{P}}_{Z}\left( {{f}^{-1}\left( A\right) }\right) \;\left( {A \in {\mathcal{A}}_{X}}\right) .
$$

Under the assumption that the base distribution ${\mathbb{P}}_{Z}$ has a density function ${p}_{Z}$, there also exists an integral representation for the pushforward measure, i.e.,

$$
{\mathbb{P}}_{f}\left( A\right) = {\int }_{{f}^{-1}\left( A\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) . \tag{2}
$$

Assumption 1 (Transformations for the CVF). The function $f : Z \rightarrow X$ between two open sets $Z, X \subseteq {\mathbb{R}}^{n}$ is a diffeomorphism; or, equivalently expressed via the inverse function theorem, $f$ is bijective, continuously differentiable, and without critical points.

We note that $z \in Z$ is a critical point if the Jacobian determinant vanishes at this point, i.e., $\det {J}_{f}\left( z\right) = 0$. In particular, a critical point $z$ indicates that the inverse is non-differentiable or not continuously differentiable at $f\left( z\right)$.

If the mapping $f$ satisfies Assumption 1, the CVF from eq. (1) holds, and we can extend the expression (2) of the distribution ${\mathbb{P}}_{f}$ by

$$
{\mathbb{P}}_{f}\left( A\right) = {\int }_{{f}^{-1}\left( A\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) = {\int }_{A}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) .
\tag{3}
$$

Since the equality (3) is valid for all ${\mathcal{A}}_{X}$ -measurable sets, the ${\lambda }^{n}$ -unique probability density of ${\mathbb{P}}_{f}$ is given by

$$
{p}_{f}\left( x\right) = {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| \;\left( {\text{ a.e. }x \in X}\right) . \tag{4}
$$

In order to unify the existing definitions of flows in the literature, we will speak of a flow or a proper flow whenever a density on $X$ of the form of eq. (4) is induced.

## 3. Generalization of the CVF

When it is argued that a flow indeed induces a probability density, the CVF is often merely invoked, and only in rare cases is reference made to sources like Rudin (1987) or Bogachev (2006). In most of the mathematical literature, it is proved in the following form

$$
{\int }_{f\left( Z\right) }\psi \left( x\right) {dx} = {\int }_{Z}\psi \left( {f\left( z\right) }\right) \left| {\det {J}_{f}\left( z\right) }\right| {dz}, \tag{5}
$$

where $f : U \rightarrow {\mathbb{R}}^{n}$ is injective, continuously differentiable with $U \subseteq {\mathbb{R}}^{n}$ open, $Z \subset U$ measurable, and $\psi$ Lebesgue integrable. Moreover, there are even broader formulations of this statement where $f$ is only differentiable or even only Lipschitz continuous everywhere and injective almost everywhere (cf. Varberg, 1971). In Bogachev (2006, Thm. 5.8.30) and Hajłasz (1993), generalizations are discussed where injectivity is not even required, by considering the cardinality of the preimage set.

The identity (5) is, however, rather analytically motivated for solving integrals and does not aim to provide a representation of the density induced by $f$ . In the common case where $f$ satisfies Assumption 1, the two variants (eq. (1) and (5)) are both valid, since both $f$ and ${f}^{-1}$ form diffeomorphisms.
Nevertheless, when we consider generalizations, it is usually no longer clear how and whether they can also be applied for the purposes of normalizing flows. In the following sections, we derive a generalization of the CVF that is similarly strong as (5) but more suited to transforming probability densities, i.e., more suited for normalizing flows.

### 3.1. $\mathcal{L}$ -Diffeomorphism

The basic idea is to require the conditions for a diffeomorphism only almost everywhere, since null sets do not affect integration. This idea leads to the following definition of a generalized transformation, providing a weaker set of conditions than in Assumption 1.

Definition 2 (Lebesgue-Diffeomorphism). A mapping $f : Z \rightarrow X$ between two open sets $Z, X \subseteq {\mathbb{R}}^{n}$ is called a Lebesgue-diffeomorphism ( $\mathcal{L}$ -diffeomorphism for short) if there are ${\lambda }^{n}$ -null sets ${N}_{Z},{N}_{X}$ with ${N}_{Z}$ closed such that the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is bijective, continuously differentiable, and the set of critical points is a null set.

Examples. In the following, we list a few $\mathcal{L}$ -diffeomorphisms and their corresponding null sets:

1. A diffeomorphism $f : Z \rightarrow X$ forms an $\mathcal{L}$ -diffeomorphism where both ${N}_{Z}$ and ${N}_{X}$ are empty sets.

2. The cubic function $f : \mathbb{R} \rightarrow \mathbb{R}$ with $f\left( x\right) = {x}^{3}$ is an $\mathcal{L}$ -diffeomorphism with ${N}_{Z} = {N}_{X} = \varnothing$ and one critical point $\{ 0\}$ , which is a null set.

3.
In particular, the plane polar coordinates transformation $f : {\mathbb{R}}_{ + } \times \left\lbrack {0,{2\pi }}\right\rbrack \rightarrow {\mathbb{R}}^{2}$ given by $f\left( {r,\phi }\right) = \left( {r\cos \left( \phi \right) , r\sin \left( \phi \right) }\right)$ is an $\mathcal{L}$ -diffeomorphism with

$$
{N}_{Z} = \{ 0\} \times \left\lbrack {0,{2\pi }}\right\rbrack \cup {\mathbb{R}}_{ > 0} \times \{ 0,{2\pi }\} \;\text{ and }
$$

$$
{N}_{X} = {\mathbb{R}}_{ + } \times \{ 0\} \text{.}
$$

In Lemma A.1 in the appendix, we show that an $\mathcal{L}$ -diffeomorphism is a measurable mapping with respect to the corresponding trace $\sigma$ -algebras of the Lebesgue algebra, thus inducing a distribution on $X$ by the pushforward measure. Moreover, Definition 2 does justice to the name 'diffeomorphism', as we state in the following lemma (see Appendix A for the proof):

Lemma 3. Let $f : Z \rightarrow X$ be an $\mathcal{L}$ -diffeomorphism. Then there are ${\lambda }^{n}$ -null sets ${N}_{Z},{N}_{X}$ with ${N}_{Z}$ closed such that the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is a diffeomorphism.

The reasoning behind the assumptions of an $\mathcal{L}$ -diffeomorphism can be understood as follows: We can remove negligible sets from the domain and the target space such that the restriction is bijective and continuously differentiable. To show that the restricted inverse is continuously differentiable, we use the inverse function theorem (Rudin, 1976, Thm. 9.24). Nevertheless, for the inverse function theorem to apply, all critical points and their image must still be removable, i.e., measure-theoretically negligible. That the set of critical points is a null set was assumed, and for the image, we use Sard's theorem (see Appendix A for the detailed proof).
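To make the cubic example above concrete, the following sketch (our own illustration, not part of the formal development; the standard normal base distribution and all numerical tolerances are assumptions) checks the density formula of eq. (4) for the $\mathcal{L}$ -diffeomorphism $f\left( z\right) = {z}^{3}$ , whose single critical point $\{ 0\}$ is a null set:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Base density p_Z of a standard normal (an assumed choice) and the cubic
# L-diffeomorphism f(z) = z^3 with inverse f^{-1}(x) = x^{1/3}.
p_Z = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

# Induced density following eq. (4): p_f(x) = p_Z(f^{-1}(x)) |det J_{f^{-1}}(x)|,
# valid almost everywhere (here: for x != 0, the image of the critical point).
p_f = lambda x: p_Z(np.cbrt(x)) / (3 * np.abs(np.cbrt(x)) ** 2)

# Check 1: Monte Carlo estimate of P(Z^3 <= 1) against Phi(1).
z = rng.standard_normal(200_000)
c = 1.0
phi = 0.5 * (1.0 + erf(float(np.cbrt(c)) / sqrt(2)))
mc_gap = abs(np.mean(z**3 <= c) - phi)

# Check 2: midpoint-rule integral of p_f over (0.5, 8), away from x = 0,
# against the exact probability Phi(8^{1/3}) - Phi(0.5^{1/3}).
lo, hi, m = 0.5, 8.0, 20_000
xs = lo + (hi - lo) * (np.arange(m) + 0.5) / m
approx = p_f(xs).sum() * (hi - lo) / m
exact = 0.5 * (erf(float(np.cbrt(hi)) / sqrt(2)) - erf(float(np.cbrt(lo)) / sqrt(2)))
print(mc_gap, abs(approx - exact))  # both close to zero
```

Note that ${p}_{f}$ has an (integrable) singularity at $x = 0$ , which is exactly the null set on which eq. (4) is allowed to misbehave.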
Furthermore, it is necessary to suppose that the set of critical points is a null set, because there are examples of continuously differentiable and bijective functions whose set of critical points does not have measure zero. These points cause the inverse function to be non-continuously differentiable on a set with positive measure (see Appendix A.2 for an example).

### 3.2. CVF for $\mathcal{L}$ -Diffeomorphisms

The previously defined $\mathcal{L}$ -diffeomorphisms form a reasonable generalization of diffeomorphisms and are comparable to those transformations discussed in the introduction of this section. Furthermore, the following theorem justifies the validity of the CVF also for $\mathcal{L}$ -diffeomorphisms (see Appendix A for the proof):

Theorem 4 (CVF for $\mathcal{L}$ -Diffeomorphism). Let $f : Z \rightarrow X$ be an $\mathcal{L}$ -diffeomorphism and ${\mathbb{P}}_{Z}$ a distribution on $Z$ with probability density ${p}_{Z}$ w.r.t. ${\lambda }^{n}$ . Then the CVF (see eq. (3)) holds for $f$ . In particular, $f$ induces a distribution on $X$ with density ${p}_{f}$ given by

$$
{p}_{f}\left( x\right) = {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| \;\left( {\text{ a.e. }x \in X}\right) . \tag{6}
$$

This theorem legitimizes the use of functions as proper flows that are not everywhere bijective, continuous, differentiable, or continuously differentiable. Even more, the inverse does not have to fulfill these properties everywhere either. In short, we can apply $\mathcal{L}$ -diffeomorphisms as flows.

### 3.3. Invariance under Composition

The strength and the tremendous upswing of normalizing flows are mainly due to the fact that simple flows can be chained together. Thus, we can achieve the desired degree of complexity and expressiveness by increasing the number of simple flows. For this purpose, flows are often considered on the same base and target space.
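As a toy illustration of such a chaining (our own sketch; the specific one-dimensional layers and the standard normal base distribution are assumptions, not taken from the paper), two copies of the piecewise-linear $\mathcal{L}$ -diffeomorphism ${f}_{i}\left( z\right) = z + \frac{1}{2}\operatorname{ReLU}\left( z\right)$ can be composed, and the density of the composition evaluated through the product of the inverse-layer Jacobians:

```python
import numpy as np

rng = np.random.default_rng(1)

# One layer f(z) = z + 0.5 * ReLU(z): continuously differentiable except at
# the single point {0}, which is a Lebesgue null set.
f = lambda z: z + 0.5 * np.maximum(z, 0.0)
f_inv = lambda x: np.where(x > 0, x / 1.5, x)
df_inv = lambda x: np.where(x > 0, 1.0 / 1.5, 1.0)  # |d f^{-1}/dx|, a.e.

# Density of f(f(Z)) for Z ~ N(0, 1) via the product of the two inverse-layer
# Jacobian factors, each evaluated at the corresponding intermediate point.
def p_comp(x):
    x1 = f_inv(x)   # undo the outer layer
    z0 = f_inv(x1)  # undo the inner layer
    return np.exp(-z0**2 / 2) / np.sqrt(2 * np.pi) * df_inv(x) * df_inv(x1)

# Sanity check: P(f(f(Z)) <= 2.25) = P(Z <= 1) ~ 0.8413, both by Monte Carlo
# and by a midpoint-rule integral of p_comp.
z = rng.standard_normal(200_000)
mc = np.mean(f(f(z)) <= 2.25)
lo, hi, m = -8.0, 2.25, 20_000
xs = lo + (hi - lo) * (np.arange(m) + 0.5) / m
integral = p_comp(xs).sum() * (hi - lo) / m
print(mc, integral)  # both ~ 0.8413
```

The Jacobian factor is undefined exactly at the kink, but since this happens only on a null set, the computed density is still a proper density in the sense of eq. (4).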
This crucial property is also retained for $\mathcal{L}$ -diffeomorphisms (see Appendix A for the proof):

Lemma 5 (Composition). Let $\Omega \subseteq {\mathbb{R}}^{n}$ be an open set and ${f}_{1},{f}_{2} : \Omega \rightarrow \Omega$ be $\mathcal{L}$ -diffeomorphisms. Then the composition ${f}_{2} \circ {f}_{1}$ is also an $\mathcal{L}$ -diffeomorphism on $\Omega$ .

From this lemma, it follows inductively that the concatenation $f = {f}_{K} \circ \ldots \circ {f}_{1}$ of $K$ $\mathcal{L}$ -diffeomorphisms ${f}_{1},\ldots ,{f}_{K}$ on $\Omega$ forms an $\mathcal{L}$ -diffeomorphism. Hence, common formulas from the normalizing flow literature, e.g., presented in Kobyzev et al. (2020) or Papamakarios et al. (2021), also apply to $\mathcal{L}$ -diffeomorphisms or can be extended to them. For example, the following holds for the Jacobian determinant of $f$ with ${x}_{i} \mathrel{\text{:=}} {f}_{i + 1}^{-1} \circ \ldots \circ {f}_{K}^{-1}\left( x\right)$ and ${x}_{K} \mathrel{\text{:=}} x$

$$
\det {J}_{{f}^{-1}}\left( x\right) = \mathop{\prod }\limits_{{i = 1}}^{K}\det {J}_{{f}_{i}^{-1}}\left( {x}_{i}\right) \;\left( {\text{ a.e. }x \in \Omega }\right) .
$$

Despite the mathematically legitimate use of $\mathcal{L}$ -diffeomorphisms, it is essential to note that these can lead to numerical instabilities. At some points, the Jacobian determinant or the inverse function need not exist. Nevertheless, these values can be set meaningfully or ignored in some situations.

## 4. Non-smooth Activations in Residual Flows

In this section, we apply the previous results to residual mappings, which are perturbations of the identity of the form $f\left( x\right) = x + g\left( x\right)$ . This justifies the use of non-smooth activations in planar, radial, and contractive residual flows.

### 4.1.
Planar Flows

The term planar flow was first introduced by Rezende & Mohamed (2015) and refers to functions of the form

$$
{f}_{\mathrm{P}}\left( x\right) = x + {uh}\left( {{w}^{T}x + b}\right)
$$

with non-linearity $h$ and $w, u \in {\mathbb{R}}^{n}, b \in \mathbb{R}$ . They describe a plane-wise expansion or contraction of all hyperplanes orthogonal to $w$ . In order to also admit non-smooth activations, we generalize the results from Rezende & Mohamed (2015) in the following theorem and obtain a sufficient criterion for the bijectivity of a planar flow ${f}_{\mathrm{P}}$ (see Appendix B.1 for the proof).

Theorem 6. Let ${f}_{P}$ be a planar flow with activation $h$ . If the one-dimensional mapping $\psi : \mathbb{R} \rightarrow \mathbb{R}$ with

$$
\psi \left( \lambda \right) = \lambda + {w}^{T}{uh}\left( \lambda \right)
$$

is bijective, then ${f}_{P}$ is also bijective.

However, bijectivity alone is not sufficient for the planar flow to induce a density of the desired form. Similarly to Theorem 6, conditions can be imposed on a one-dimensional mapping such that ${f}_{\mathrm{P}}$ describes an $\mathcal{L}$ -diffeomorphism (see Appendix B.1 for the proof):

Table 1. Conditions on the parameters of a planar flow ${f}_{\mathrm{P}}$ , such that it is a proper flow for the activation $h$ (see B.1.1 for the proofs).
| Non-linearity | Condition |
| --- | --- |
| ReLU | ${w}^{T}u > - 1$ |
| ELU $\left( {\alpha > 0}\right)$ | ${w}^{T}u > \max \left( {-1, - \frac{1}{\alpha }}\right)$ |
| tanh | ${w}^{T}u \geq - 1$ |
| Softplus | ${w}^{T}u > - 1$ |
Theorem 7. Let ${f}_{P}$ be a bijective planar flow. If there is a countable and closed set $N \subset \mathbb{R}$ such that the activation function $h$ is continuously differentiable on $\mathbb{R} \smallsetminus N$ and

$$
C = \left\{ {x \in \mathbb{R} \smallsetminus N \mid 1 + {w}^{T}u{h}^{\prime }\left( x\right) = 0}\right\}
$$

is countable, then ${f}_{P}$ is an $\mathcal{L}$ -diffeomorphism. In particular, ${f}_{P}$ is a proper flow.

Using Theorem 7, conditions on different activations under which the resulting planar flow is a proper flow can be derived by reducing the problem to one dimension. The constraints for the most popular non-linearities are summarized in Table 1.

### 4.2. Radial Flows

Another intuitive way to perturb the identity in ${\mathbb{R}}^{n}$ is to expand or contract spherically around a center point. This type of transformation was initially studied by Tabak & Turner (2013) and subsequently by Rezende & Mohamed (2015). These transformations of the form

$$
{f}_{\mathrm{R}}\left( x\right) = x + {\beta h}\left( {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2}\right) \left( {x - {x}_{0}}\right)
$$

are called radial flows, where $h : {\mathbb{R}}_{ + } \rightarrow {\mathbb{R}}_{ + }$ is a localization function, ${x}_{0} \in {\mathbb{R}}^{n}$ the center, and $\beta \in \mathbb{R}$ . When $\beta$ is negative, a contraction occurs, and positive values lead to an expansion around the center ${x}_{0}$ . The following theorem provides a sufficient criterion for the bijectivity of a radial flow if we consider non-smooth functions $h$ (see Appendix B.2 for the proof):

Theorem 8. Let ${f}_{R}$ be a radial flow with localization $h$ . If the one-dimensional mapping $\psi : {\mathbb{R}}_{ + } \rightarrow {\mathbb{R}}_{ + }$ with

$$
\psi \left( r\right) \mathrel{\text{:=}} r + {\beta h}\left( r\right) r
$$

is bijective, then ${f}_{R}$ is also bijective.
Again, bijectivity guarantees only the existence of the inverse, which means that ${f}_{\mathrm{R}}$ , in general, describes neither an $\mathcal{L}$ -diffeomorphism nor a flow. However, this property is ensured by the conditions of the following theorem (see Appendix B.2 for the proof):

Theorem 9. Let ${f}_{R}$ be a bijective radial flow. If there is a countable, closed set $N \subset {\mathbb{R}}_{ > 0}$ such that the localization function $h$ is continuously differentiable on ${\mathbb{R}}_{ > 0} \smallsetminus N$ and

$$
C \mathrel{\text{:=}} \left\{ {r \in {\mathbb{R}}_{ > 0} \smallsetminus N \mid 1 + \beta \left( {h\left( r\right) + r{h}^{\prime }\left( r\right) }\right) = 0}\right\}
$$

is countable, then ${f}_{R}$ is an $\mathcal{L}$ -diffeomorphism. In particular, ${f}_{R}$ is a proper flow.

### 4.3. Contractive Residual Flows

Contractive mappings provide a more general type of perturbation of the identity. A function $g$ is called contractive if there exists a constant $L < 1$ such that, with respect to a fixed norm on the vector space ${\mathbb{R}}^{n}$ , it holds for all $x, y \in {\mathbb{R}}^{n}$ that

$$
\parallel g\left( x\right) - g\left( y\right) \parallel \leq L\parallel x - y\parallel .
$$

In Behrmann et al. (2019) and Chen et al. (2019), these kinds of mappings are called (contractive) residual flows and are denoted henceforth by ${f}_{\mathrm{C}}$ . On the one hand, this strong condition on $g$ gives the bijectivity of ${f}_{\mathrm{C}}$ by Banach’s fixed point theorem; on the other hand, it follows that ${f}_{\mathrm{C}}$ has no critical points (Behrmann, 2019, Lem. 5.4). Thus, only a few assumptions are required to guarantee that a residual flow forms an $\mathcal{L}$ -diffeomorphism.

Theorem 10. Let ${f}_{C}$ be a residual flow with contractive perturbation $g$ .
If there is a closed ${\lambda }^{n}$ -null set $N \subset {\mathbb{R}}^{n}$ such that $g$ is continuously differentiable on ${\mathbb{R}}^{n} \smallsetminus N$ , then ${f}_{C}$ is an $\mathcal{L}$ -diffeomorphism. In particular, ${f}_{C}$ is a proper flow.

The proof of this statement follows directly from the property that Lipschitz continuous functions map ${\lambda }^{n}$ -null sets to sets of ${\lambda }^{n}$ -measure zero (Rudin, 1987, Lem. 7.25) and the fact that $\operatorname{Lip}\left( {f}_{\mathrm{C}}\right) \leq 1 + L$ .

## 5. Conclusion

In this paper, we have shown that the conditions on a flow need not be strictly satisfied everywhere, and that a certain degree of freedom is permitted on measure-theoretically negligible sets. With this, we have justified using $\mathcal{L}$ -diffeomorphisms instead of ordinary diffeomorphisms as flows. This gain in generality significantly increases the possibilities in the design of normalizing flows and, in particular, allows the usage of non-smooth activations. Nevertheless, we have only justified their usage mathematically, which is still far from a successful practical application. Thus, future work should investigate whether and in which situations non-smooth activations provide an actual gain. Moreover, we only applied these generalizations to simple residual flows. For this reason, it remains open to what extent other flows, such as more general planar flows like Sylvester flows (Van Den Berg et al., 2018) or autoregressive flows (Huang et al., 2018; Jaini et al., 2019), also benefit from this.

## References

Behrmann, J. Principles of Neural Network Architecture Design - Invertibility and Domain Knowledge. PhD thesis, University of Bremen, 2019.

Behrmann, J., Grathwohl, W., Chen, R. T. Q., Duvenaud, D., and Jacobsen, J.-H. Invertible residual networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 573-582. PMLR, 2019.

Bogachev, V. Measure Theory.
Springer Berlin Heidelberg, 2006.

Bressoud, D. M. A Radical Approach to Lebesgue's Theory of Integration. Cambridge University Press, 2008.

Chen, R. T. Q., Behrmann, J., Duvenaud, D. K., and Jacobsen, J.-H. Residual flows for invertible generative modeling. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

DiMartino, R. and Urbina, W. On Cantor-like sets and Cantor-Lebesgue singular functions, 2014. arXiv:1410.5093.

Elstrodt, J. Maß- und Integrationstheorie. Springer-Lehrbuch. Springer Berlin Heidelberg, 2013.

Floret, K. Maß- und Integrationstheorie. Teubner Studienbücher Mathematik. Vieweg+Teubner Verlag, 1981.

Hajłasz, P. Change of variables formula under minimal assumptions. Colloquium Mathematicae, 64(1):93-101, 1993.

Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 2078-2087. PMLR, 2018.

Jaini, P., Selby, K. A., and Yu, Y. Sum-of-squares polynomial flow. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 3009-3018. PMLR, 2019.

Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2020.

Kong, Z. and Chaudhuri, K. The expressive power of a class of normalizing flow models. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108, pp. 3599-3609. PMLR, 2020.

Milnor, J. Topology from the Differentiable Viewpoint. University Press of Virginia, 1965.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1-64, 2021.

Rezende, D. and Mohamed, S. Variational inference with normalizing flows.
In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pp. 1530-1538. PMLR, 2015.

Rudin, W. Principles of Mathematical Analysis. McGraw-Hill New York, 3rd edition, 1976.

Rudin, W. Real and Complex Analysis. McGraw-Hill New York, 3rd edition, 1987.

Tabak, E. and Turner, C. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164, 2013.

Van Den Berg, R., Hasenclever, L., Tomczak, J., and Welling, M. Sylvester normalizing flows for variational inference. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pp. 393-402. Association For Uncertainty in Artificial Intelligence (AUAI), 2018.

Varberg, D. E. Change of variables in multiple integrals. The American Mathematical Monthly, 78(1):42-45, 1971.

Willard, S. General Topology. Addison-Wesley Series in Mathematics. Addison-Wesley Publishing Company, 1970.

## A. Generalization of the Change of Variables Formula

Lemma A.1. Let $f : Z \rightarrow X$ be an $\mathcal{L}$ -diffeomorphism from the measurable space $\left( {Z,{\mathcal{A}}_{Z}}\right)$ into the target space $\left( {X,{\mathcal{A}}_{X}}\right)$ . Then $f$ is an ${\mathcal{A}}_{Z} - {\mathcal{A}}_{X}$ -measurable mapping, i.e., a random variable.

Proof. According to the definition of measurable functions, we have to show that the preimage of every measurable set in ${\mathcal{A}}_{X}$ is an element of ${\mathcal{A}}_{Z}$ . Let $B \in {\mathcal{A}}_{X}$ . Since $f$ is an $\mathcal{L}$ -diffeomorphism, there exist ${\lambda }^{n}$ -null sets ${N}_{Z},{N}_{X}$ such that the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is bijective and continuously differentiable.
We can decompose the preimage of $B$ as

$$
{f}^{-1}\left( B\right) = \left( {{N}_{Z} \cap {f}^{-1}\left( B\right) }\right) \cup \left( {\left( {Z \smallsetminus {N}_{Z}}\right) \cap {f}^{-1}\left( B\right) }\right) .
$$

Since ${N}_{Z}$ is a null set, the subset ${N}_{Z} \cap {f}^{-1}\left( B\right)$ also has measure zero. We can represent the other set of the equation above by the bijectivity of $f$ on $Z \smallsetminus {N}_{Z}$ as

$$
\left( {Z \smallsetminus {N}_{Z}}\right) \cap {f}^{-1}\left( B\right) = {f}^{-1}\left( {\left( {X \smallsetminus {N}_{X}}\right) \cap B}\right) . \tag{7}
$$

Thus it results from the continuity of the restricted mapping, which is in particular $\mathcal{L}\left( {Z \smallsetminus {N}_{Z}}\right) - \mathcal{L}\left( {X \smallsetminus {N}_{X}}\right)$ -measurable, that the set (7) is contained in the trace $\sigma$ -algebra $\mathcal{L}\left( {Z \smallsetminus {N}_{Z}}\right)$ , hence an element of ${\mathcal{A}}_{Z}$ . In the end, we can represent ${f}^{-1}\left( B\right)$ as a union of a null set and an ${\mathcal{A}}_{Z}$ -measurable set, which shows the claim.

Proof of Lemma 3. From Definition 2 of an $\mathcal{L}$ -diffeomorphism, we obtain the existence of null sets ${N}_{Z},{N}_{X}$ with ${N}_{Z}$ closed such that the restricted function $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is bijective, continuously differentiable, and the set of critical points $C$ has measure zero. At first, we show that $C$ is a closed set. Consider the mapping $g : Z \smallsetminus {N}_{Z} \rightarrow \mathbb{R}$ with $g\left( x\right) = \det \left( {{J}_{f}\left( x\right) }\right)$ . Because of the continuity of the determinant and the continuous differentiability of the restricted $f$ , the mapping $g$ is also continuous.
Therefore, the preimage under $g$ of a closed set is also closed according to the topological definition of continuity, i.e.,

$$
{g}^{-1}\left( {\{ 0\} }\right) = \left\{ {x \in Z \smallsetminus {N}_{Z} \mid \det {J}_{f}\left( x\right) = 0}\right\} = C
$$

is a closed set; thus, $Z \smallsetminus \left( {{N}_{Z} \cup C}\right)$ is open in ${\mathbb{R}}^{n}$ . It follows by the inverse function theorem and its consequences (Rudin, 1976, Thm. 9.24 and 9.25) that the restriction

$$
f : Z \smallsetminus \left( {{N}_{Z} \cup C}\right) \rightarrow X \smallsetminus \left( {{N}_{X} \cup f\left( C\right) }\right)
$$

forms a diffeomorphism. By assumption, ${N}_{Z}$ and $C$ are ${\lambda }^{n}$ -null sets, and so is ${N}_{Z} \cup C$ . From Sard’s theorem (Milnor, 1965, §2), it follows that $f\left( C\right)$ , hence especially ${N}_{X} \cup f\left( C\right)$ , is a set of measure zero.

Proof of Theorem 4. We obtain from Lemma 3 the existence of two null sets ${N}_{Z},{N}_{X}$ so that the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ of an $\mathcal{L}$ -diffeomorphism forms a diffeomorphism. Let $B \in {\mathcal{A}}_{X}$ be a measurable set in the target space $X$ . Because of the additivity of ${\mathbb{P}}_{Z}$ and the definition of the pushforward measure ${\mathbb{P}}_{f}$ , it follows

$$
{\mathbb{P}}_{f}\left( B\right) \mathrel{\text{:=}} {\mathbb{P}}_{Z}\left( {{f}^{-1}\left( B\right) }\right) = {\mathbb{P}}_{Z}\left( {\left( {{f}^{-1}\left( B\right) \smallsetminus {N}_{Z}}\right) \dot{ \cup }\left( {{f}^{-1}\left( B\right) \cap {N}_{Z}}\right) }\right)
$$

$$
= {\mathbb{P}}_{Z}\left( {{f}^{-1}\left( B\right) \smallsetminus {N}_{Z}}\right) + {\mathbb{P}}_{Z}\left( {{f}^{-1}\left( B\right) \cap {N}_{Z}}\right) . \tag{8}
$$

Since ${\mathbb{P}}_{Z}$ is absolutely continuous w.r.t.
the Lebesgue measure and ${f}^{-1}\left( B\right) \cap {N}_{Z}$ is a subset of a ${\lambda }^{n}$ -null set, the right summand of term (8) vanishes. For the other term, we note the following set equality:

$$
{f}^{-1}\left( B\right) \smallsetminus {N}_{Z} = {f}^{-1}\left( {B \smallsetminus {N}_{X}}\right) . \tag{9}
$$

Because the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is a diffeomorphism and $B \smallsetminus {N}_{X}$ is an element of the trace $\sigma$ -algebra $\mathcal{L}\left( {X \smallsetminus {N}_{X}}\right)$ , equation (8) can be extended by the CVF from eq. (1) as

$$
{\mathbb{P}}_{f}\left( B\right) = {\int }_{{f}^{-1}\left( {B \smallsetminus {N}_{X}}\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) = {\int }_{B \smallsetminus {N}_{X}}{p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| d{\lambda }^{n}\left( x\right) . \tag{10}
$$

We define a function ${p}_{f} : X \rightarrow {\mathbb{R}}_{ + }$ with

$$
{p}_{f}\left( x\right) = \left\{ {\begin{matrix} {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| & x \in X \smallsetminus {N}_{X} \\ 0 & x \in {N}_{X} \end{matrix}.}\right.
$$

Since $B \cap {N}_{X} \subseteq {N}_{X}$ is a ${\lambda }^{n}$ -null set, we can additionally integrate in (10) over this set and obtain the expression of the distribution induced by $f$

$$
{\mathbb{P}}_{f}\left( B\right) = {\int }_{B \smallsetminus {N}_{X}}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) + {\int }_{B \cap {N}_{X}}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) = {\int }_{B}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) , \tag{11}
$$

hence ${p}_{f}$ forms a density of the distribution ${\mathbb{P}}_{f}$ . Finally, by the Radon-Nikodym theorem (Bogachev, 2006, Sec. 3.2), the density function is ${\lambda }^{n}$ -unique, which shows the claim.

Proof of Lemma 5.
Since ${f}_{1}$ and ${f}_{2}$ are $\mathcal{L}$ -diffeomorphisms, by Lemma 3 there are ${\lambda }^{n}$ -null sets ${N}_{Z}^{1},{N}_{Z}^{2},{N}_{X}^{1},{N}_{X}^{2} \subset \Omega$ with ${N}_{Z}^{1}$ and ${N}_{Z}^{2}$ closed such that both

$$
{f}_{1} : \Omega \smallsetminus {N}_{Z}^{1} \rightarrow \Omega \smallsetminus {N}_{X}^{1}\;\text{ and }\;{f}_{2} : \Omega \smallsetminus {N}_{Z}^{2} \rightarrow \Omega \smallsetminus {N}_{X}^{2} \tag{12}
$$

are diffeomorphisms. We remove from the domain of ${f}_{1}$ all elements mapping to the set ${N}_{Z}^{2}$ , since ${f}_{2}$ is not a diffeomorphism on it. Hence we define

$$
{N}_{Z} \mathrel{\text{:=}} {N}_{Z}^{1} \cup {f}_{1}^{-1}\left( {{N}_{Z}^{2} \cap \Omega \smallsetminus {N}_{X}^{1}}\right) \;\text{ and }\;{N}_{X} \mathrel{\text{:=}} {N}_{X}^{2} \cup {f}_{2}\left( {{N}_{X}^{1} \cap \Omega \smallsetminus {N}_{Z}^{2}}\right) . \tag{13}
$$

It is relatively easy to see that both ${f}_{1}\left( {\Omega \smallsetminus {N}_{Z}}\right) = \Omega \smallsetminus \left( {{N}_{X}^{1} \cup {N}_{Z}^{2}}\right)$ and ${f}_{2}\left( {{f}_{1}\left( {\Omega \smallsetminus {N}_{Z}}\right) }\right) = \Omega \smallsetminus {N}_{X}$ are valid. Moreover, from the continuity of ${f}_{1}$ and since ${N}_{Z}^{2} \cap \Omega \smallsetminus {N}_{X}^{1}$ is closed in the subspace topology on $\Omega \smallsetminus {N}_{X}^{1}$ , it follows that ${f}_{1}^{-1}\left( {{N}_{Z}^{2} \cap \Omega \smallsetminus {N}_{X}^{1}}\right)$ is closed in $\Omega \smallsetminus {N}_{Z}^{1}$ . Hence there exists a closed set $A \subset \Omega$ such that (see Sec. 6 in Willard, 1970 for more information)

$$
{f}_{1}^{-1}\left( {{N}_{Z}^{2} \cap \Omega \smallsetminus {N}_{X}^{1}}\right) = A \cap \Omega \smallsetminus {N}_{Z}^{1}.
$$

This results with eq. (13) in

$$
{N}_{Z} = {N}_{Z}^{1} \cup \left( {A \cap \Omega \smallsetminus {N}_{Z}^{1}}\right) = {N}_{Z}^{1} \cup A
$$

which is a closed set in $\Omega$ .
In summary, the mapping ${f}_{2} \circ {f}_{1} : \Omega \smallsetminus {N}_{Z} \rightarrow \Omega \smallsetminus {N}_{X}$ is well-defined, continuously differentiable, bijective, and has no critical points, since we have merely further restricted the diffeomorphisms from (12). In addition, ${N}_{Z}$ is a closed set in $\Omega$ . To complete the proof, we still need to show that ${N}_{Z}$ and ${N}_{X}$ have a ${\lambda }^{n}$ -measure of zero. However, this follows directly from the fact that diffeomorphisms map null sets to null sets.

The following example illustrates why we assume that the set of critical points is a null set. Indeed, one can construct bijective and continuously differentiable functions whose inverse is not differentiable on a set with positive measure.

Example A.2. Consider the following recursively defined set on the interval $\left\lbrack {0,1}\right\rbrack$ , called the Smith-Volterra-Cantor set ${C}_{\mathrm{{SV}}}$ . The definition and properties of this set are only briefly sketched here. For more information and detailed proofs, the reader is referred to Bressoud (2008) or DiMartino & Urbina (2014). Another example can also be found in Floret (1981, Sec. 17.15).

The recursive definition of ${C}_{\mathrm{{SV}}}$ on $\left\lbrack {0,1}\right\rbrack$ starts by removing the open middle quarter from the interval. Then we create subsequent sets ${S}_{n}$ by removing an open interval of length $\frac{1}{{4}^{n}}$ from the center of each interval in ${S}_{n - 1}$ , i.e.,

$$
{S}_{1} = \left\lbrack {0,\frac{3}{8}}\right\rbrack \cup \left\lbrack {\frac{5}{8},1}\right\rbrack ,\;{S}_{2} = \left\lbrack {0,\frac{5}{32}}\right\rbrack \cup \left\lbrack {\frac{7}{32},\frac{3}{8}}\right\rbrack \cup \left\lbrack {\frac{5}{8},\frac{25}{32}}\right\rbrack \cup \left\lbrack {\frac{27}{32},1}\right\rbrack ,\;\ldots
$$

![01963e43-43cb-760e-a6b7-3c5b7dd11d15_6_1223_1723_334_191_0.jpg](images/01963e43-43cb-760e-a6b7-3c5b7dd11d15_6_1223_1723_334_191_0.jpg)

Figure 1. Visualization of ${S}_{n}$ .

Now the Smith-Volterra-Cantor set is defined as the following intersection of all ${S}_{n}$

$$
{C}_{\mathrm{{SV}}} \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{n = 1}}^{\infty }{S}_{n},
$$

which is closed as a countable intersection of closed sets. In addition, in recursion step $n$ , a middle piece of Lebesgue measure $\frac{1}{{4}^{n}}$ is removed from each of the ${2}^{n - 1}$ intervals. Hence it holds

$$
{\lambda }^{1}\left( {C}_{\mathrm{{SV}}}\right) = 1 - \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{{2}^{n - 1}}{{4}^{n}} = 1 - \frac{1}{4}\mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( \frac{1}{2}\right) }^{n} = 1 - \frac{1}{2} = \frac{1}{2}
$$

due to the geometric series. In order to construct a counterexample, we consider the function $d : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbb{R}$ with $d\left( x\right) = \mathop{\inf }\limits_{{c \in {C}_{\mathrm{{SV}}}}}\left| {x - c}\right|$ . This function is obviously continuous, and $d\left( x\right) = 0$ holds if and only if $x \in {C}_{\mathrm{{SV}}}$ ; otherwise it only takes positive values.
Because of the continuity, this function is integrable, and it follows from the fundamental theorem of calculus that

$$
f : \left( {0,1}\right) \rightarrow f\left( \left( {0,1}\right) \right)
$$

$$
f\left( x\right) = {\int }_{0}^{x}d\left( z\right) {dz},
$$

is continuously differentiable. Moreover, $f$ is injective, which results from the construction of ${C}_{\mathrm{{SV}}}$ and the fact that $d$ is not constantly zero on any interval of positive measure. But the set of critical points of $f$ is the Smith-Volterra-Cantor set, which is not a $\lambda$ -null set.

## B. Non-smooth Activations in Residual Flows

### B.1. Planar Flows

Theorem 6 presented here generalizes the result given by Rezende & Mohamed (2015), since not only the smooth activation function $h\left( x\right) = \tanh \left( x\right)$ is considered. However, the argument in the proof is very similar.

Proof of Theorem 6. Let $y \in {\mathbb{R}}^{n}$ be arbitrary. Then we have to show that the following equation has a unique solution

$$
{f}_{\mathrm{P}}\left( x\right) = x + {uh}\left( {{w}^{T}x + b}\right) = y. \tag{14}
$$

If $w = 0$ holds, then a unique solution is given by $x = y - {uh}\left( b\right)$ , from which bijectivity follows. For this reason, let $w \neq 0$ in the following; thus, $w$ spans a one-dimensional linear subspace $W$ in ${\mathbb{R}}^{n}$ . Consequently, each element $x \in {\mathbb{R}}^{n}$ has a unique orthogonal decomposition $x = {x}_{\parallel } + {x}_{ \bot }$ with ${x}_{\parallel } \in W$ and ${x}_{ \bot } \in {W}^{ \bot } \mathrel{\text{:=}} \left\{ {x \in {\mathbb{R}}^{n} \mid {w}^{T}x = 0}\right\}$ . Due to the orthogonality of ${x}_{ \bot }$ and $w$ , the following solution of the orthogonal component depending on the parallel one can be inferred from eq. (14)

$$
{x}_{ \bot } = y - {x}_{\parallel } - {uh}\left( {{w}^{T}{x}_{\parallel } + b}\right) .
\tag{15}
$$

Since $W$ is a one-dimensional linear subspace, there is a unique $\lambda \in \mathbb{R}$ with ${x}_{\parallel } = w\frac{\lambda }{{w}^{T}w}$. Using this representation, multiplying the original equation by ${w}^{T}$ from the left yields

$$
{w}^{T}y = {w}^{T}{x}_{ \bot } + {w}^{T}w\frac{\lambda }{{w}^{T}w} + {w}^{T}{uh}\left( {\lambda + b}\right) = \lambda + {w}^{T}{uh}\left( {\lambda + b}\right) \tag{16}
$$

$$
= \psi \left( {\lambda + b}\right) - b,
$$

where the first equality in (16) again uses ${w}^{T}{x}_{ \bot } = 0$ and the last one the definition of $\psi$. The assumed bijectivity of $\psi$ yields the unique existence of a ${\lambda }_{y} \in \mathbb{R}$ solving the equation above. This in turn uniquely determines ${x}_{\parallel }$ and ${x}_{ \bot }$, so that

$$
x = {x}_{\parallel } + {x}_{ \bot } = y - {uh}\left( {{\lambda }_{y} + b}\right)
$$

is the unique solution of the initial equation (14), and consequently the bijectivity of ${f}_{\mathrm{P}}$ follows.

Proof of Theorem 7. In case $w = 0$, ${f}_{\mathrm{P}}$ represents an affine linear mapping describing a diffeomorphism, hence an $\mathcal{L}$ -diffeomorphism. For this reason, let $w \neq 0$. Consequently, $w$ spans a one-dimensional linear subspace $W$, from which follows a unique orthogonal decomposition of the vector space ${\mathbb{R}}^{n} = W \oplus {W}^{ \bot }$ as a direct sum. This leads to a characterization of the vector space as a disjoint union of hyperplanes, i.e.,

$$
{\mathbb{R}}^{n} = \mathop{\bigcup }\limits_{{\lambda \in \mathbb{R}}}^{ \cdot }{H}_{\lambda }\;\text{ with }\;{H}_{\lambda } \mathrel{\text{:=}} \left\{ {\left. {w\frac{\lambda }{{w}^{T}w} + {x}_{ \bot }}\right| \;{x}_{ \bot } \in {W}^{ \bot }}\right\} .
$$

Consider the function $\tau : {\mathbb{R}}^{n} \rightarrow \mathbb{R}$ with $\tau \left( x\right) = {w}^{T}x + b$ mapping every element of a hyperplane ${H}_{\lambda }$ to the same value $\lambda + b$ .
Since the activation function $h$ is not continuously differentiable on the countable set $N$, we remove all hyperplanes mapping to $N$ under $\tau$. So we define

$$
{H}_{N} \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{n \in N}}{H}_{n - b}\;\text{ such that }\;\tau \left( {H}_{N}\right) = N.
$$

Because $\tau$ is continuous and $N$ is closed in $\mathbb{R}$, it follows that ${H}_{N} = {\tau }^{-1}\left( N\right)$ is closed in ${\mathbb{R}}^{n}$. Moreover, as a countable union of hyperplanes, ${H}_{N}$ is a ${\lambda }^{n}$ -null set by subadditivity of the Lebesgue measure. Accordingly, the restriction ${f}_{\mathrm{P}} : {\mathbb{R}}^{n} \smallsetminus {H}_{N} \rightarrow {\mathbb{R}}^{n} \smallsetminus {f}_{\mathrm{P}}\left( {H}_{N}\right)$ is bijective and continuously differentiable, and ${H}_{N}$ is a closed ${\lambda }^{n}$ -null set. We note that the image of a hyperplane ${H}_{\lambda }$ under ${f}_{\mathrm{P}}$ is also a hyperplane; hence ${f}_{\mathrm{P}}\left( {H}_{N}\right)$ forms a countable union of hyperplanes, which is a null set. Finally, it is left to show that the set of critical points of the restricted planar flow is also a null set. By the matrix determinant lemma, we get for the set of critical points the set equality

$$
{C}_{{f}_{\mathrm{P}}} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{R}}^{n} \smallsetminus {H}_{N} \mid \det {J}_{{f}_{\mathrm{P}}}\left( x\right) = 0}\right\} = \left\{ {x \in {\mathbb{R}}^{n} \smallsetminus {H}_{N} \mid 1 + {w}^{T}u{h}^{\prime }\left( {\tau \left( x\right) }\right) = 0}\right\} .
$$

This gives $\tau \left( {C}_{{f}_{\mathrm{P}}}\right) = C$, which we have assumed to be a countable set. Hence

$$
{C}_{{f}_{\mathrm{P}}} = {\tau }^{-1}\left( C\right) = \mathop{\bigcup }\limits_{{c \in C}}{\tau }^{-1}\left( {\{ c\} }\right) = \mathop{\bigcup }\limits_{{c \in C}}{H}_{c - b},
$$

which, as a countable union of hyperplanes, is a ${\lambda }^{n}$ -null set.
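The proof of Theorem 6 is constructive and directly yields an inversion routine: reduce eq. (14) to the one-dimensional equation (16) in $\lambda$, solve it by root finding, and recover $x$. A minimal numerical sketch (an illustration, not part of the paper; it assumes an activation whose associated $\psi$ is strictly increasing, here ReLU under the condition ${w}^{T}u > -1$ of Lemma B.1 below):

```python
import numpy as np

def planar_forward(x, u, w, b, h):
    return x + u * h(w @ x + b)

def planar_inverse(y, u, w, b, h, lo=-1e6, hi=1e6, tol=1e-12):
    """Invert f_P via the reduction in the proof of Theorem 6:
    solve  lam + (w^T u) h(lam + b) = w^T y  for lam by bisection
    (valid since psi is strictly increasing), then x = y - u h(lam + b)."""
    wu, target = w @ u, w @ y
    g = lambda lam: lam + wu * h(lam + b) - target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    lam = 0.5 * (lo + hi)
    return y - u * h(lam + b)

rng = np.random.default_rng(0)
relu = lambda t: np.maximum(t, 0.0)
w, u, b = rng.normal(size=3), rng.normal(size=3), 0.3
if w @ u <= -1:                 # enforce the condition of Lemma B.1
    u = -0.5 * u / (w @ u)
x = rng.normal(size=3)
y = planar_forward(x, u, w, b, relu)
print(np.allclose(planar_inverse(y, u, w, b, relu), x))  # True
```

Bisection is used here only for simplicity; any scalar root finder applied to the monotone map $\psi$ works, which is exactly what the reduction to one dimension buys.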
#### B.1.1. EXAMPLES FOR SOME ACTIVATIONS

In the following, we infer conditions for particular choices of activation functions for a planar flow ${f}_{\mathrm{P}}$. A visualization of the crucial function for each proof can be found in Figure 2, covering cases in which the conditions on the flow are satisfied, satisfied exactly at the boundary, and violated.

Lemma B.1 (ReLU). If ${w}^{T}u > - 1$ holds, then ${f}_{P}$ with activation ReLU is an $\mathcal{L}$ -diffeomorphism.

Proof. Consider the mapping $\psi : \mathbb{R} \rightarrow \mathbb{R}$ given by

$$
\psi \left( \lambda \right) = \lambda + {w}^{T}u\operatorname{ReLU}\left( {\lambda + b}\right) = \left\{ \begin{array}{ll} \lambda & ,\lambda < - b \\ \lambda + {w}^{T}u\left( {\lambda + b}\right) & ,\lambda \geq - b \end{array}\right.
$$

Because of the inequality ${w}^{T}u > - 1$, this mapping is strictly monotonically increasing and unbounded in both directions, hence bijective. Consequently, the bijectivity of the planar flow ${f}_{\mathrm{P}}$ follows from Theorem 6. Furthermore, the ReLU activation is continuously differentiable on $\mathbb{R} \smallsetminus \{ 0\}$, and for all $\lambda \in \mathbb{R} \smallsetminus \{ 0\}$ it holds that

$$
1 + {w}^{T}u{\operatorname{ReLU}}^{\prime }\left( \lambda \right) = \left\{ \begin{array}{ll} 1 & ,\lambda < 0 \\ 1 + {w}^{T}u & ,\lambda > 0 \end{array}\right. \neq 0.
$$

Therefore, the planar flow ${f}_{\mathrm{P}}$ with activation ReLU has no critical points; thus, the claim follows from Theorem 7.

Lemma B.2 (tanh). If ${w}^{T}u \geq - 1$ holds, then ${f}_{P}$ with activation tanh is an $\mathcal{L}$ -diffeomorphism.

Proof. Consider the function $\psi : \mathbb{R} \rightarrow \mathbb{R}$ given by $\psi \left( \lambda \right) = \lambda + {w}^{T}u\tanh \left( \lambda \right)$ with derivative ${\psi }^{\prime }\left( \lambda \right) = 1 + \frac{{w}^{T}u}{{\cosh }^{2}\left( \lambda \right) }$ .
Since the hyperbolic cosine equals 1 only at 0 and is strictly greater than 1 otherwise, the assumed inequality gives ${\psi }^{\prime }\left( \lambda \right) \geq 0$, with equality only if $\lambda = 0$ and ${w}^{T}u = - 1$. Therefore, the function $\psi$ is strictly monotonically increasing, hence injective. Moreover, surjectivity follows from the boundedness of the hyperbolic tangent. Consequently, the bijectivity of the planar flow ${f}_{\mathrm{P}}$ follows from Theorem 6. The activation function is continuously differentiable, and as seen above, the equation ${\psi }^{\prime }\left( \lambda \right) = 1 + \frac{{w}^{T}u}{{\cosh }^{2}\left( \lambda \right) } = 0$ is only satisfied if ${w}^{T}u = - 1$ and $\lambda = 0$. In any case, the set of critical points is countable, so the claim follows from Theorem 7.

Lemma B.3 (ELU). If ${w}^{T}u > \max \left( {-1, - \frac{1}{\alpha }}\right)$ holds, then ${f}_{P}$ with activation ELU $\left( {\alpha > 0}\right)$ is an $\mathcal{L}$ -diffeomorphism.

Proof. Consider the function $\psi : \mathbb{R} \rightarrow \mathbb{R}$ with

$$
\psi \left( \lambda \right) = \lambda + {w}^{T}u\operatorname{ELU}\left( {\lambda + b}\right) = \left\{ {\begin{array}{ll} \lambda + {w}^{T}{u\alpha }\left( {{e}^{\lambda + b} - 1}\right) & ,\lambda \leq - b \\ \lambda + {w}^{T}u\left( {\lambda + b}\right) & ,\lambda > - b \end{array}.}\right.
$$

This function is continuously differentiable on $\mathbb{R} \smallsetminus \{ -b\}$ with derivative given by

$$
{\psi }^{\prime }\left( \lambda \right) = \left\{ \begin{array}{ll} 1 + {w}^{T}{u\alpha }{e}^{\lambda + b} & ,\lambda < - b \\ 1 + {w}^{T}u & ,\lambda > - b \end{array}\right.
$$

By the assumed inequality and ${e}^{\lambda + b} \in \left( {0,1}\right)$ for $\lambda < - b$, we obtain for $\lambda < - b$

$$
{\psi }^{\prime }\left( \lambda \right) = 1 + {w}^{T}{u\alpha }{e}^{\lambda + b} > 1 + \max \left( {-1, - \frac{1}{\alpha }}\right) \alpha {e}^{\lambda + b} > 1 + \max \left( {-\alpha , - 1}\right) \geq 0.
\tag{17}
$$

For the other case, $\lambda > - b$, we likewise get positivity

$$
{\psi }^{\prime }\left( \lambda \right) = 1 + {w}^{T}u > 1 + \max \left( {-1, - \frac{1}{\alpha }}\right) \geq 0. \tag{18}
$$

Together with the continuity at the junction, $\mathop{\lim }\limits_{{\lambda \searrow - b}}\psi \left( \lambda \right) = \psi \left( {-b}\right) = - b$, the function $\psi$ is strictly monotonically increasing, thus injective. Furthermore, surjectivity results from the intermediate value theorem; hence ${f}_{\mathrm{P}}$ is bijective by Theorem 6. Additionally, the activation function ELU is continuously differentiable on $\mathbb{R} \smallsetminus \{ 0\}$, and similar to (17) and (18), the inequality $1 + {w}^{T}u{\operatorname{ELU}}^{\prime }\left( \lambda \right) > 0$ follows for all $\lambda \neq 0$ . Therefore we can apply Theorem 7, which shows the claim.

Lemma B.4 (Softplus). If ${w}^{T}u > - 1$ holds, then ${f}_{P}$ with activation Softplus is an $\mathcal{L}$ -diffeomorphism.

Proof. For the function $\psi : \mathbb{R} \rightarrow \mathbb{R}$ given by $\psi \left( \lambda \right) = \lambda + {w}^{T}u\operatorname{Softplus}\left( {\lambda + b}\right)$ with $\operatorname{Softplus}\left( \lambda \right) = \log \left( {1 + {e}^{\lambda }}\right)$, the derivative is

$$
{\psi }^{\prime }\left( \lambda \right) = 1 + {w}^{T}u\frac{{e}^{\lambda + b}}{1 + {e}^{\lambda + b}} = 1 + {w}^{T}u\frac{1}{1 + {e}^{-\left( {\lambda + b}\right) }}.
$$

Since the factor $\frac{1}{1 + {e}^{-\left( {\lambda + b}\right) }}$ takes values in the interval $\left( {0,1}\right)$, the assumed inequality yields for all $\lambda \in \mathbb{R}$

$$
{\psi }^{\prime }\left( \lambda \right) > 1 - \frac{1}{1 + {e}^{-\left( {\lambda + b}\right) }} > 0. \tag{19}
$$

Consequently, the function $\psi$ is strictly monotonically increasing, hence injective. Moreover, since $\operatorname{Softplus}\left( {\lambda + b}\right) \rightarrow 0$ for $\lambda \rightarrow - \infty$, the limit $\mathop{\lim }\limits_{{\lambda \rightarrow - \infty }}\psi \left( \lambda \right) = - \infty$ results.
For the limit $\lambda \rightarrow \infty$, it suffices to consider $\lambda > - b$. In this case, the inequality $\log \left( {1 + {e}^{\lambda + b}}\right) \leq \log \left( {2{e}^{\lambda + b}}\right)$ holds, and we obtain for ${w}^{T}u < 0$

$$
\mathop{\lim }\limits_{{\lambda \rightarrow \infty }}\psi \left( \lambda \right) \geq \mathop{\lim }\limits_{{\lambda \rightarrow \infty }}\lambda + {w}^{T}u\log \left( {2{e}^{\lambda + b}}\right) = \mathop{\lim }\limits_{{\lambda \rightarrow \infty }}\left( {1 + {w}^{T}u}\right) \lambda + {w}^{T}u\left( {\log \left( 2\right) + b}\right) = \infty ,
$$

since $1 + {w}^{T}u > 0$. For the other case, ${w}^{T}u \geq 0$, we directly get $\mathop{\lim }\limits_{{\lambda \rightarrow \infty }}\psi \left( \lambda \right) \geq \mathop{\lim }\limits_{{\lambda \rightarrow \infty }}\lambda = \infty$ . Thus, surjectivity follows from the intermediate value theorem, and we get bijectivity from Theorem 6. In addition, the activation function Softplus is everywhere continuously differentiable, and by (19) the planar flow has no critical points, so the claim follows from Theorem 7.

### B.2. Radial Flows

The proof of Theorem 8 presented here generalizes the one given by Rezende & Mohamed (2015), since localization functions other than $h\left( r\right) = \frac{1}{\alpha + r}$ are admitted; the arguments, however, are very similar.

![01963e43-43cb-760e-a6b7-3c5b7dd11d15_10_212_231_1312_1176_0.jpg](images/01963e43-43cb-760e-a6b7-3c5b7dd11d15_10_212_231_1312_1176_0.jpg)

Figure 2. Visualization of the function $\psi$ considered in the proofs of Section B.1.1. For each of the activations ReLU, tanh, ELU $\left( {\alpha = 2}\right)$ and Softplus, we plotted the function $\psi$ with $b = 0$ for three different choices of ${w}^{T}u$ : the conditions for an $\mathcal{L}$ -diffeomorphism are satisfied (green), satisfied exactly at the boundary (blue), and not satisfied (magenta).

Proof of Theorem 8.
Let $y \in {\mathbb{R}}^{n}$ be arbitrary. For bijectivity, we have to show that the following equation has a unique solution

$$
{f}_{\mathrm{R}}\left( x\right) = x + {\beta h}\left( r\right) \left( {x - {x}_{0}}\right) = y\;\text{ where }\;r \mathrel{\text{:=}} {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2}. \tag{20}
$$

If $y = {x}_{0}$, we obtain the equality

$$
0 = {\begin{Vmatrix}y - {x}_{0}\end{Vmatrix}}_{2} = {\begin{Vmatrix}x - {x}_{0} + \beta h\left( r\right) \left( x - {x}_{0}\right) \end{Vmatrix}}_{2} = r + {\beta h}\left( r\right) r = \psi \left( r\right)
$$

after rearranging equation (20) and taking the Euclidean norm. Since $\psi \left( 0\right) = 0$ and $\psi$ was assumed to be bijective, it follows that $\begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} = 0$, which is equivalent to $x = {x}_{0}$ because of the definiteness of the Euclidean distance. Hereafter, let $y \in {\mathbb{R}}^{n} \smallsetminus \left\{ {x}_{0}\right\}$, i.e., $r = {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2} > 0$. In this case, each element $x - {x}_{0}$ with $x \in {\mathbb{R}}^{n} \smallsetminus \left\{ {x}_{0}\right\}$ can be expressed unambiguously as the product of its normalized direction and its Euclidean distance. Thus, for $x \in {\mathbb{R}}^{n} \smallsetminus \left\{ {x}_{0}\right\}$ there exists a unique unit vector $\widehat{x} \in {S}_{1}\left( 0\right) \mathrel{\text{:=}} \left\{ {v \in {\mathbb{R}}^{n} \mid \parallel v{\parallel }_{2} = 1}\right\}$ such that $x = {x}_{0} + r\widehat{x}$ where $r \mathrel{\text{:=}} {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2}$ .
Substituting this representation into equation (20), one obtains after rearranging

$$
y - {x}_{0} = \widehat{x}\left( {r + {\beta h}\left( r\right) r}\right) = \widehat{x}\psi \left( r\right) , \tag{21}
$$

and taking the norm of this,

$$
0 < {\begin{Vmatrix}y - {x}_{0}\end{Vmatrix}}_{2} = \parallel \widehat{x}\psi \left( r\right) {\parallel }_{2} = \psi \left( r\right) .
$$

Because of the bijectivity of $\psi$, the unique existence of a radius ${r}_{y} > 0$ around the centering point results. Since $\psi \left( r\right) > 0$ remains valid for $r > 0$, equation (21) gives the following unambiguous expression for the direction vector $\widehat{x}$

$$
{\widehat{x}}_{y} \mathrel{\text{:=}} \frac{y - {x}_{0}}{\psi \left( {r}_{y}\right) } = \frac{y - {x}_{0}}{{r}_{y} + {\beta h}\left( {r}_{y}\right) {r}_{y}}.
$$

Hence $x = {x}_{0} + {r}_{y}{\widehat{x}}_{y}$ is the unique solution of the original equation (20), which finally shows that ${f}_{\mathrm{R}}$ is bijective.

Proof of Theorem 9. By assumption, the localization function is not continuously differentiable at all radii, so spheres with such distances around the centering point must be removed from the domain of ${f}_{\mathrm{R}}$. Furthermore, the point ${x}_{0}$, corresponding to a radius of 0, must be eliminated in order to restrict the localization function to an open set, thus allowing us to properly speak of differentiability. For this purpose we define

$$
S \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{r \in N}}{S}_{r}\left( {x}_{0}\right) \cup \left\{ {x}_{0}\right\} \;\text{ where }\;{S}_{r}\left( {x}_{0}\right) \mathrel{\text{:=}} \left\{ {x \in {\mathbb{R}}^{n} \mid {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2} = r}\right\} .
$$

Since the shifted Euclidean norm $\tau \left( x\right) \mathrel{\text{:=}} {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2}$ is continuous, it follows that the preimage of the closed set $N \cup \{ 0\}$ is also closed. Therefore, the set of eliminated points

$$
S = \mathop{\bigcup }\limits_{{r \in N}}{\tau }^{-1}\left( {\{ r\} }\right) \cup {\tau }^{-1}\left( {\{ 0\} }\right) = {\tau }^{-1}\left( {N\cup \{ 0\} }\right)
$$

is closed; moreover, it is a set of measure zero, because a countable union of spheres and points is a ${\lambda }^{n}$ -null set. In summary, the restriction ${f}_{\mathrm{R}} : {\mathbb{R}}^{n} \smallsetminus S \rightarrow {\mathbb{R}}^{n} \smallsetminus {f}_{\mathrm{R}}\left( S\right)$ describes a bijective and continuously differentiable mapping. Moreover, the following is valid for every $x \in {S}_{r}\left( {x}_{0}\right)$

$$
{\begin{Vmatrix}{f}_{\mathrm{R}}\left( x\right) - {x}_{0}\end{Vmatrix}}_{2} = {\begin{Vmatrix}x - {x}_{0} + \beta h\left( r\right) \left( x - {x}_{0}\right) \end{Vmatrix}}_{2} = r + {\beta h}\left( r\right) r. \tag{22}
$$

This indicates that ${f}_{\mathrm{R}}$ maps spheres with radius $r$ to spheres with radius $r + {\beta h}\left( r\right) r$ around the centering point ${x}_{0}$; thus, ${f}_{\mathrm{R}}\left( S\right)$ is also a countable union of spheres and points and therefore itself a null set. Finally, it remains to show that the set of critical points of this restriction is also a Lebesgue null set. From the matrix determinant lemma and the higher-dimensional differentiation rules, the Jacobian determinant of ${f}_{\mathrm{R}}$ at position $x \in {\mathbb{R}}^{n} \smallsetminus S$ is given by

$$
\det \left( {{J}_{{f}_{\mathrm{R}}}\left( x\right) }\right) = {\left( 1 + \beta h\left( r\right) \right) }^{n - 1}\left( {1 + {\beta h}\left( r\right) + \beta {h}^{\prime }\left( r\right) r}\right) .
\tag{23}
$$

Since ${f}_{\mathrm{R}}\left( {x}_{0}\right) = {x}_{0}$ holds and ${f}_{\mathrm{R}}$ is bijective, both ${\begin{Vmatrix}{f}_{\mathrm{R}}\left( x\right) - {x}_{0}\end{Vmatrix}}_{2} > 0$ and $r = {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2} > 0$ hold for all $x \in {\mathbb{R}}^{n} \smallsetminus S$. Thus equation (22) yields

$$
\frac{{\begin{Vmatrix}{f}_{\mathrm{R}}\left( x\right) - {x}_{0}\end{Vmatrix}}_{2}}{r} = 1 + {\beta h}\left( r\right) > 0.
$$

Knowing that this term does not vanish for any $x \in {\mathbb{R}}^{n} \smallsetminus S$, the set of critical points can be represented using equation (23) as

$$
{C}_{{f}_{\mathrm{R}}} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{R}}^{n} \smallsetminus S \mid \det {J}_{{f}_{\mathrm{R}}}\left( x\right) = 0}\right\} = \left\{ {x \in {\mathbb{R}}^{n} \smallsetminus S \mid 1 + {\beta h}\left( {\tau \left( x\right) }\right) + \beta {h}^{\prime }\left( {\tau \left( x\right) }\right) \tau \left( x\right) = 0}\right\} .
$$

This gives $\tau \left( {C}_{{f}_{\mathrm{R}}}\right) = C$, which we have assumed to be a countable set. Hence

$$
{C}_{{f}_{\mathrm{R}}} = {\tau }^{-1}\left( C\right) = \mathop{\bigcup }\limits_{{c \in C}}{\tau }^{-1}\left( {\{ c\} }\right) = \mathop{\bigcup }\limits_{{c \in C}}{S}_{c}\left( {x}_{0}\right) ,
$$

which, as a countable union of spheres, is a ${\lambda }^{n}$ -null set.
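The constructive proof of Theorem 8 likewise doubles as an inversion algorithm for radial flows: solve $\psi(r) = \|y - {x}_{0}\|_2$ for the radius, then rescale $y - {x}_{0}$. A numerical sketch (an illustration outside the paper, assuming the classical localization function $h(r) = \frac{1}{\alpha + r}$ with $\beta > -\alpha$ so that $\psi$ is strictly increasing); it also cross-checks the determinant formula (23) against finite differences:

```python
import numpy as np

def radial_forward(x, x0, beta, alpha):
    r = np.linalg.norm(x - x0)
    return x + beta / (alpha + r) * (x - x0)

def radial_inverse(y, x0, beta, alpha, tol=1e-12):
    """Invert f_R as in the proof of Theorem 8: solve psi(r) = ||y - x0||
    for r by bisection, then x = x0 + r * (y - x0) / psi(r)."""
    rho = np.linalg.norm(y - x0)
    if rho == 0.0:
        return x0.copy()
    psi = lambda r: r + beta * r / (alpha + r)
    lo, hi = 0.0, rho + abs(beta) + 1.0   # psi(hi) > rho since psi(r) >= r - |beta|
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if psi(mid) > rho else (mid, hi)
    r = 0.5 * (lo + hi)
    return x0 + r * (y - x0) / psi(r)

rng = np.random.default_rng(1)
x0, beta, alpha = rng.normal(size=2), 0.8, 1.0   # beta > -alpha
x = rng.normal(size=2)
y = radial_forward(x, x0, beta, alpha)
print(np.allclose(radial_inverse(y, x0, beta, alpha), x))  # True

# Cross-check eq. (23) with h(r) = 1/(alpha+r), h'(r) = -1/(alpha+r)**2, n = 2.
r = np.linalg.norm(x - x0)
h, dh = 1 / (alpha + r), -1 / (alpha + r) ** 2
det_formula = (1 + beta * h) ** (2 - 1) * (1 + beta * h + beta * dh * r)
eps = 1e-6
J = np.column_stack([
    (radial_forward(x + eps * e, x0, beta, alpha)
     - radial_forward(x - eps * e, x0, beta, alpha)) / (2 * eps)
    for e in np.eye(2)])
print(np.isclose(np.linalg.det(J), det_formula, rtol=1e-4))  # True
```

The finite-difference Jacobian is only an approximation, but it agrees with the closed-form determinant to within the central-difference error, illustrating that (23) holds away from the removed set $S$.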
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/msCiI5dejr/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/msCiI5dejr/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dce4fdb373be9eb702bd61881b24333221da1e9b
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/msCiI5dejr/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,231 @@
§ GENERALIZATION OF THE CHANGE OF VARIABLES FORMULA WITH APPLICATIONS TO RESIDUAL FLOWS

§ ANONYMOUS AUTHORS ${}^{1}$

§ ABSTRACT

Normalizing flows leverage the Change of Variables Formula (CVF) to define flexible density models. Yet, the requirement of smooth transformations (diffeomorphisms) in the CVF poses a significant challenge in the construction of these models. To enlarge the design space of flows, we introduce $\mathcal{L}$ -diffeomorphisms as generalized transformations which may violate these requirements on sets of Lebesgue measure zero. This relaxation allows e.g. the use of non-smooth activation functions such as ReLU. Finally, we apply the obtained results to planar, radial, and contractive residual flows.

§ 1. INTRODUCTION

The term normalizing flow refers to a concatenation of arbitrarily many simple transformations such that together they describe a transformation of desired flexibility and expressiveness. Formally, a transformation $f : Z \rightarrow X$ denotes a diffeomorphism, i.e., a bijective mapping where both $f$ and ${f}^{-1}$ are continuously differentiable.
The crucial reason why transformations are considered in normalizing flows is the validity of the Change of Variables Formula (CVF) for a probability density ${p}_{Z}$ on $Z$, described by

$$
{\int }_{{f}^{-1}\left( A\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) = {\int }_{A}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) , \tag{1}
$$

where ${p}_{f}\left( x\right) \mathrel{\text{ := }} {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right|$ and $A \subseteq X$. This formula provides an explicit expression for the density ${p}_{f}$ induced by $f$ on the target space $X$, which includes the determinant of the Jacobian as a volume correction term.

Based on this definition of a transformation, however, it is generally not accurate to use non-smooth activations, such as ReLU, Leaky ReLU, or ELU with $\alpha \neq 1$, in the design of normalizing flows. These usually cause the flow to become non-differentiable on a set with no volume w.r.t. the Lebesgue measure ${\lambda }^{n}$, hence not a diffeomorphism. In measure theory, such sets are called ${\lambda }^{n}$ -null sets, or null sets for short, and are negligible in integration.

We demonstrate that the requirements for a flow can be significantly weakened by excluding null sets from the base and target space while preserving the validity of the CVF. There are remarks on using almost everywhere (a.e.) differentiable activation functions in Kobyzev et al. (2020) and Kong & Chaudhuri (2020), yet both works lack a proof that such transformations validly define flows. In our work, we provide such proofs for an even more general statement. At the same time, we discuss the probabilistic background of normalizing flows required to induce a well-defined density in the end. Furthermore, we point out that not every generalization of the CVF found in the mathematical literature is immediately suitable for flows.
Finally, we put a special emphasis on the applications to residual flows. In doing so, we prove that non-smooth activations are also valid for both planar and radial flows (Rezende & Mohamed, 2015), as well as for contractive residual flows (Behrmann et al., 2019).

§ 2. BACKGROUND ON THE CVF IN PROBABILITY THEORY

The basic idea behind normalizing flows is to transform a known and tractable probability space into a more complex one. Mathematically, a probability space $\left( {Z,{\mathcal{A}}_{Z},{\mathbb{P}}_{Z}}\right)$ is composed of a set $Z$ equipped with a $\sigma$ -algebra ${\mathcal{A}}_{Z}$ and a probability measure ${\mathbb{P}}_{Z}$. In the target space $\left( {X,{\mathcal{A}}_{X},{\mathbb{P}}_{\text{ data }}}\right)$, only the set and $\sigma$ -algebra are fixed, and the data distribution ${\mathbb{P}}_{\text{ data }}$ is unknown. For simplicity, we only consider open subsets of ${\mathbb{R}}^{n}$ and trace $\sigma$ -algebras of the Lebesgue algebra $\mathcal{L}$ in the following, i.e., ${\mathcal{A}}_{Z} = \mathcal{L}\left( Z\right)$ and ${\mathcal{A}}_{X} = \mathcal{L}\left( X\right)$. The trace $\sigma$ -algebra is the restriction of a $\sigma$ -algebra to a subset, defined by $\mathcal{L}\left( Z\right) \mathrel{\text{ := }} \{ A \cap Z \mid A \in \mathcal{L}\}$. Besides, we assume that the distribution ${\mathbb{P}}_{Z}$ is absolutely continuous w.r.t. the $n$ -dimensional Lebesgue measure ${\lambda }^{n}$, i.e., ${\lambda }^{n}$ -null sets have a ${\mathbb{P}}_{Z}$ -probability of zero. Therefore, the existence of a probability density ${p}_{Z}$ follows from Radon-Nikodym’s theorem (Bogachev, 2006, Sec. 3.2). For more information on measure theory, see Bogachev (2006) or Elstrodt (2013).

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author.

Preliminary work. Under review by INNF+ 2021. Do not distribute.
At the lowest level, a transformation $f : Z \rightarrow X$ has to be at least an ${\mathcal{A}}_{Z}$ - ${\mathcal{A}}_{X}$ -measurable mapping (i.e., a random variable) in order to induce a distribution on the target space by the so-called pushforward measure:

$$
{\mathbb{P}}_{f}\left( A\right) \mathrel{\text{ := }} {\mathbb{P}}_{Z}\left( {{f}^{-1}\left( A\right) }\right) \;\left( {A \in {\mathcal{A}}_{X}}\right) .
$$

Under the assumption that the base distribution ${\mathbb{P}}_{Z}$ has a density function ${p}_{Z}$, there also exists an integral representation of the pushforward measure, i.e.,

$$
{\mathbb{P}}_{f}\left( A\right) = {\int }_{{f}^{-1}\left( A\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) . \tag{2}
$$

Assumption 1 (Transformations for the CVF). The function $f : Z \rightarrow X$ between two open sets $Z,X \subseteq {\mathbb{R}}^{n}$ is a diffeomorphism; or, equivalently expressed by the inverse function theorem, $f$ is bijective, continuously differentiable, and without critical points.

We note that $z \in Z$ is a critical point if the Jacobian determinant vanishes at this point, i.e., $\det {J}_{f}\left( z\right) = 0$. In particular, a critical point $z$ indicates that the inverse is non-differentiable or not continuously differentiable at $f\left( z\right)$.

If the mapping $f$ satisfies Assumption 1, the CVF from eq. (1) holds, and we can extend the expression (2) of the distribution ${\mathbb{P}}_{f}$ by

$$
{\mathbb{P}}_{f}\left( A\right) = {\int }_{{f}^{-1}\left( A\right) }{p}_{Z}\left( z\right) d{\lambda }^{n}\left( z\right) = {\int }_{A}{p}_{f}\left( x\right) d{\lambda }^{n}\left( x\right) .
\tag{3}
$$

Since the equality (3) is valid for all ${\mathcal{A}}_{X}$ -measurable sets, the ${\lambda }^{n}$ -unique probability density of ${\mathbb{P}}_{f}$ is given by

$$
{p}_{f}\left( x\right) = {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| \;\left( {\text{ a.e. }x \in X}\right) . \tag{4}
$$

In order to unify the existing definitions of flows in the literature, we will speak of a flow or a proper flow when a density on $X$ of the form in eq. (4) is induced.

§ 3. GENERALIZATION OF THE CVF

When it is argued that a flow indeed induces a probability density, the CVF is often merely invoked, and only rarely is reference made to sources like Rudin (1987) or Bogachev (2006). In most of the mathematical literature, it is proved in the following form

$$
{\int }_{f\left( Z\right) }\psi \left( x\right) {dx} = {\int }_{Z}\psi \left( {f\left( z\right) }\right) \left| {\det {J}_{f}\left( z\right) }\right| {dz}, \tag{5}
$$

where $f : U \rightarrow {\mathbb{R}}^{n}$ is injective and continuously differentiable with $U \subseteq {\mathbb{R}}^{n}$ open, $Z \subset U$ measurable, and $\psi$ Lebesgue integrable. Moreover, there are even broader formulations of this statement where $f$ is only differentiable, or even only Lipschitz continuous, everywhere and injective almost everywhere (cf. Varberg, 1971). In Bogachev (2006, Thm. 5.8.30) and Hajłasz (1993), generalizations are discussed where injectivity is not even required, by considering the cardinality of the preimage set.

The identity (5) is, however, rather analytically motivated for solving integrals and does not aim to provide a representation of the density induced by $f$. In the common case where $f$ satisfies Assumption 1, both variants (eq. (1) and (5)) are valid, since both $f$ and ${f}^{-1}$ form diffeomorphisms.
Nevertheless, when we consider generalizations, it is usually no longer clear how and whether they can also be applied for the purposes of normalizing flows. In the following sections, we derive a similarly strong generalization of the CVF as in (5), which is more suited to transforming probability densities, i.e., more suited for normalizing flows.

§ 3.1. $\mathcal{L}$ -DIFFEOMORPHISM

The basic idea is to require the conditions for a diffeomorphism only almost everywhere, since null sets do not affect integration. This idea leads to the following definition of a generalized transformation, providing a weaker set of conditions than in Assumption 1.

Definition 2 (Lebesgue-Diffeomorphism). A mapping $f : Z \rightarrow X$ between two open sets $Z,X \subseteq {\mathbb{R}}^{n}$ is called a Lebesgue-diffeomorphism ( $\mathcal{L}$ -diffeomorphism for short) if there are ${\lambda }^{n}$ -null sets ${N}_{Z},{N}_{X}$ with ${N}_{Z}$ closed such that the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is bijective and continuously differentiable, and the set of critical points is a null set.

Examples. In the following, we list a few $\mathcal{L}$ -diffeomorphisms and their corresponding null sets:

1. A diffeomorphism $f : Z \rightarrow X$ forms an $\mathcal{L}$ -diffeomorphism where both ${N}_{Z}$ and ${N}_{X}$ are empty sets.

2. The cubic function $f : \mathbb{R} \rightarrow \mathbb{R}$ with $f\left( x\right) = {x}^{3}$ is an $\mathcal{L}$ -diffeomorphism with ${N}_{Z} = {N}_{X} = \varnothing$ and one critical point $\{ 0\}$, which is a null set.

3.
In particular, the plane polar coordinates transformation $f : {\mathbb{R}}_{ + } \times \left\lbrack {0,{2\pi }}\right\rbrack \rightarrow {\mathbb{R}}^{2}$ given by $f\left( {r,\phi }\right) = \left( {r\cos \left( \phi \right) ,r\sin \left( \phi \right) }\right)$ is an $\mathcal{L}$ -diffeomorphism with

$$
{N}_{Z} = \{ 0\} \times \left\lbrack {0,{2\pi }}\right\rbrack \cup {\mathbb{R}}_{ > 0} \times \{ 0,{2\pi }\} \;\text{ and }
$$

$$
{N}_{X} = {\mathbb{R}}_{ + } \times \{ 0\} \text{ . }
$$

In Lemma A.1 in the appendix, we show that an $\mathcal{L}$ -diffeomorphism is a measurable mapping with respect to the corresponding trace $\sigma$ -algebras of the Lebesgue algebra, thus inducing a distribution on $X$ by the pushforward measure. Moreover, Definition 2 does justice to the name 'diffeomorphism', as the following lemma states (see Appendix A for the proof):

Lemma 3. Let $f : Z \rightarrow X$ be an $\mathcal{L}$ -diffeomorphism. Then there are ${\lambda }^{n}$ -null sets ${N}_{Z},{N}_{X}$ with ${N}_{Z}$ closed such that the restriction $f : Z \smallsetminus {N}_{Z} \rightarrow X \smallsetminus {N}_{X}$ is a diffeomorphism.

The reasoning behind the assumptions of an $\mathcal{L}$ -diffeomorphism can be understood as follows: We can remove negligible sets from the domain and the target space such that the restriction is bijective and continuously differentiable. To show that the restricted inverse is continuously differentiable, we use the inverse function theorem (Rudin, 1976, Thm. 9.24). Nevertheless, for the inverse function theorem to apply, all critical points and their image must still be removable, i.e., measure-theoretically negligible. That the set of critical points is a null set was assumed, and for the image, we use Sard's theorem (see Appendix A for the detailed proof).
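The cubic map from Example 2 gives a concrete numerical illustration that an $\mathcal{L}$ -diffeomorphism still induces the density of eq. (4): for $f(z) = z^{3}$ the inverse ${f}^{-1}(x) = \operatorname{sign}(x)|x|^{1/3}$ fails to be differentiable only at $x = 0$, a null set. The following sketch (not part of the paper) compares a Monte Carlo estimate of the pushforward of a standard normal with the density from eq. (4):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base density on Z = R: standard normal.
p_Z = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

# Induced density via eq. (4): p_f(x) = p_Z(f^{-1}(x)) |det J_{f^{-1}}(x)|,
# valid almost everywhere (the formula breaks down only at x = 0).
def p_f(x):
    finv = np.sign(x) * np.abs(x) ** (1 / 3)
    return p_Z(finv) * np.abs(x) ** (-2 / 3) / 3

# Monte Carlo estimate of P(a <= f(Z) <= b) vs. midpoint-rule integration of p_f.
a, b = 0.5, 8.0
samples = rng.standard_normal(1_000_000) ** 3
mc = np.mean((samples >= a) & (samples <= b))
xs = np.linspace(a, b, 20_001)
mids = (xs[:-1] + xs[1:]) / 2
quad = np.sum(p_f(mids)) * (xs[1] - xs[0])
print(abs(mc - quad) < 5e-3)  # True
```

Both estimates agree with the analytic value $\Phi(2) - \Phi(0.5^{1/3}) \approx 0.191$, even though ${f}^{-1}$ is not differentiable everywhere.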
Furthermore, it is necessary to suppose that the set of critical points is a null set, since there are examples of continuously differentiable and bijective functions whose set of critical points does not have measure zero. These points cause the inverse function to be non-continuously differentiable on a set of positive measure (see Appendix A.2 for an example).

§ 3.2. CVF FOR $\mathcal{L}$ -DIFFEOMORPHISM

The previously defined $\mathcal{L}$ -diffeomorphisms form a reasonable generalization of diffeomorphisms and are comparable to those transformations discussed in the introduction of this section. Furthermore, the following theorem justifies the validity of the CVF for $\mathcal{L}$ -diffeomorphisms as well (see Appendix A for the proof):

Theorem 4 (CVF for $\mathcal{L}$ -Diffeomorphism). Let $f : Z \rightarrow X$ be an $\mathcal{L}$ -diffeomorphism and ${\mathbb{P}}_{Z}$ a distribution on $Z$ with probability density ${p}_{Z}$ w.r.t. ${\lambda }^{n}$ . Then the CVF (see eq. (3)) holds for $f$ . In particular, $f$ induces a distribution on $X$ with density ${p}_{f}$ given by

$$
{p}_{f}\left( x\right) = {p}_{Z}\left( {{f}^{-1}\left( x\right) }\right) \left| {\det {J}_{{f}^{-1}}\left( x\right) }\right| \;\left( {\text{ a.e. }x \in X}\right) . \tag{6}
$$

This theorem legitimizes the use, as proper flows, of functions that are not everywhere bijective, continuous, differentiable, or continuously differentiable. Even more, the inverse does not have to fulfill these properties everywhere either. In short, we can apply $\mathcal{L}$ -diffeomorphisms as flows.

§ 3.3. INVARIANCE UNDER COMPOSITION

The strength and tremendous upswing of normalizing flows mainly stem from the fact that simple flows can be chained together. Thus, we can achieve the desired degree of complexity and expressiveness by increasing the number of simple flows. For this purpose, flows are often considered on the same base and target space.
This crucial property is also retained for $\mathcal{L}$-diffeomorphisms (see Appendix A for the proof):

Lemma 5 (Composition). Let $\Omega \subseteq {\mathbb{R}}^{n}$ be an open set and ${f}_{1}, {f}_{2} : \Omega \rightarrow \Omega$ be $\mathcal{L}$-diffeomorphisms. Then the composition ${f}_{2} \circ {f}_{1}$ is also an $\mathcal{L}$-diffeomorphism on $\Omega$.

From this lemma it follows inductively that the concatenation $f = {f}_{K} \circ \ldots \circ {f}_{1}$ of $K$ $\mathcal{L}$-diffeomorphisms ${f}_{1}, \ldots, {f}_{K}$ on $\Omega$ forms an $\mathcal{L}$-diffeomorphism. Hence, common formulas from the normalizing flow literature, e.g., those presented in Kobyzev et al. (2020) or Papamakarios et al. (2021), also apply to $\mathcal{L}$-diffeomorphisms or can be extended to them. For example, the following holds for the Jacobian determinant of $f$ with ${x}_{i} \mathrel{\text{ := }} {f}_{i+1}^{-1} \circ \ldots \circ {f}_{K}^{-1}\left( x\right)$ and ${x}_{K} \mathrel{\text{ := }} x$:

$$
\det {J}_{{f}^{-1}}\left( x\right) = \mathop{\prod }\limits_{{i = 1}}^{K}\det {J}_{{f}_{i}^{-1}}\left( {x}_{i}\right) \;\left( {\text{ a.e. }x \in \Omega }\right) .
$$

Despite being mathematically legitimate, $\mathcal{L}$-diffeomorphisms can lead to numerical instabilities: at some points, the Jacobian determinant or the inverse function need not exist. Nevertheless, these values can be set meaningfully or ignored in some situations.

## 4. Non-Smooth Activations in Residual Flows

In this section, we apply the previous results to residual mappings, which are perturbations of the identity of the form $f\left( x\right) = x + g\left( x\right)$. This justifies the use of non-smooth activations in planar, radial, and contractive residual flows.
### 4.1. Planar Flows

The term planar flow was first introduced by Rezende & Mohamed (2015) and refers to functions of the form

$$
{f}_{\mathrm{P}}\left( x\right) = x + u\, h\left( {{w}^{T}x + b}\right)
$$

with non-linearity $h$ and $w, u \in {\mathbb{R}}^{n}, b \in \mathbb{R}$. They describe a plane-wise expansion or contraction of all hyperplanes orthogonal to $w$. In order to also admit non-smooth activations, we generalize the results from Rezende & Mohamed (2015) in the following theorem and obtain a sufficient criterion for the bijectivity of a planar flow ${f}_{\mathrm{P}}$ (see Appendix B.1 for the proof).

Theorem 6. Let ${f}_{P}$ be a planar flow with activation $h$. If the one-dimensional mapping $\psi : \mathbb{R} \rightarrow \mathbb{R}$ with

$$
\psi \left( \lambda \right) = \lambda + {w}^{T}u\, h\left( \lambda \right)
$$

is bijective, then ${f}_{P}$ is also bijective.

However, bijectivity is not sufficient for the planar flow to induce a density of the desired form. But similarly to Theorem 6, conditions can be imposed on a one-dimensional mapping such that ${f}_{\mathrm{P}}$ describes an $\mathcal{L}$-diffeomorphism (see Appendix B.1 for the proof):

Table 1. Conditions on the parameters of a planar flow ${f}_{\mathrm{P}}$ such that it is a proper flow for the activation $h$ (see B.1.1 for the proofs).

| Non-linearity | Condition |
| --- | --- |
| ReLU | ${w}^{T}u > -1$ |
| ELU $\left( {\alpha > 0}\right)$ | ${w}^{T}u > \max \left( {-1, -\frac{1}{\alpha }}\right)$ |
| Tanh | ${w}^{T}u \geq -1$ |
| Softplus | ${w}^{T}u > -1$ |

Theorem 7. Let ${f}_{P}$ be a bijective planar flow.
If there is a countable and closed set $N \subset \mathbb{R}$ such that the activation function $h$ is continuously differentiable on $\mathbb{R} \smallsetminus N$ and

$$
C = \left\{ {x \in \mathbb{R} \smallsetminus N \mid 1 + {w}^{T}u\, {h}^{\prime }\left( x\right) = 0}\right\}
$$

is countable, then ${f}_{P}$ is an $\mathcal{L}$-diffeomorphism. In particular, ${f}_{P}$ is a proper flow.

Using Theorem 7, conditions on different activations such that the resulting planar flow describes a proper flow can be found by reduction to a one-dimensional problem. The constraints for the most popular non-linearities are summarized in Table 1.

### 4.2. Radial Flows

Another intuitive way to perturb the identity in ${\mathbb{R}}^{n}$ is to expand or contract spherically around a centering point. This type of transformation was initially studied by Tabak & Turner (2013) and subsequently by Rezende & Mohamed (2015). These transformations of the form

$$
{f}_{\mathrm{R}}\left( x\right) = x + \beta h\left( {\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}_{2}\right) \left( {x - {x}_{0}}\right)
$$

are called radial flows, where $h : {\mathbb{R}}_{+} \rightarrow {\mathbb{R}}_{+}$ is a localization function, ${x}_{0} \in {\mathbb{R}}^{n}$ the center, and $\beta \in \mathbb{R}$. Negative values of $\beta$ lead to a contraction and positive values to an expansion around the center ${x}_{0}$. The following theorem provides a sufficient criterion for the bijectivity of a radial flow when we admit non-smooth functions $h$ (see Appendix B.2 for the proof):

Theorem 8. Let ${f}_{R}$ be a radial flow with localization $h$. If the one-dimensional mapping $\psi : {\mathbb{R}}_{+} \rightarrow {\mathbb{R}}_{+}$ with

$$
\psi \left( r\right) \mathrel{\text{ := }} r + \beta h\left( r\right) r
$$

is bijective, then ${f}_{R}$ is also bijective.
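Theorems 6 and 8 both reduce invertibility to a monotone one-dimensional map $\psi$. As an illustration (our own sketch, not code from the paper), a ReLU planar flow satisfying the Table 1 condition $w^T u > -1$ can be inverted numerically by solving $\psi(\lambda) = w^T y + b$ with bisection and then setting $x = y - u\, h(\lambda)$:

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

rng = np.random.default_rng(1)
n = 5
w = rng.normal(size=n)
u = rng.normal(size=n)
if w @ u <= -1.0:
    u = -u            # flip u so that the Table 1 condition w^T u > -1 holds
wu = w @ u
b = 0.3

def f_P(x):
    return x + u * relu(w @ x + b)

def f_P_inv(y):
    # With lam = w^T x + b, applying w^T to y = f_P(x) gives
    # psi(lam) = lam + (w^T u) h(lam) = w^T y + b.  Since w^T u > -1, psi is
    # strictly increasing, so lam can be found by bisection.
    target = w @ y + b
    psi = lambda lam: lam + wu * relu(lam)
    lo, hi = -1e6, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < target else (lo, mid)
    lam = 0.5 * (lo + hi)
    return y - u * relu(lam)

x = rng.normal(size=n)
assert np.allclose(f_P_inv(f_P(x)), x, atol=1e-8)
```

The same one-dimensional reduction applies to radial flows, with $\psi(r) = r + \beta h(r) r$ solved for the radius instead.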
Again, bijectivity is sufficient only for the existence of the inverse, which means that ${f}_{\mathrm{R}}$, in general, describes neither an $\mathcal{L}$-diffeomorphism nor a flow. However, this property is ensured by the conditions of the following theorem (see Appendix B.2 for the proof):

Theorem 9. Let ${f}_{R}$ be a bijective radial flow. If there is a countable, closed set $N \subset {\mathbb{R}}_{>0}$ such that the localization function $h$ is continuously differentiable on ${\mathbb{R}}_{>0} \smallsetminus N$ and

$$
C \mathrel{\text{ := }} \left\{ {r \in {\mathbb{R}}_{>0} \smallsetminus N \mid 1 + \beta \left( {h\left( r\right) + r{h}^{\prime }\left( r\right) }\right) = 0}\right\}
$$

is countable, then ${f}_{R}$ is an $\mathcal{L}$-diffeomorphism. In particular, ${f}_{R}$ is a proper flow.

### 4.3. Contractive Residual Flows

Contractive mappings provide a more general type of perturbation of the identity. A function $g$ is called contractive if there exists a constant $L < 1$ such that, for some norm on ${\mathbb{R}}^{n}$ and all $x, y \in {\mathbb{R}}^{n}$,

$$
\parallel g\left( x\right) - g\left( y\right) \parallel \leq L\parallel x - y\parallel .
$$

In Behrmann et al. (2019) and Chen et al. (2019), these kinds of flows are called (contractive) residual flows; we denote them henceforth by ${f}_{\mathrm{C}}$. On the one hand, this strong condition on $g$ yields the bijectivity of ${f}_{\mathrm{C}}$ by Banach's fixed point theorem; on the other hand, it implies that ${f}_{\mathrm{C}}$ has no critical points (Behrmann, 2019, Lem. 5.4). Thus, only a few assumptions are required to guarantee that a residual flow forms an $\mathcal{L}$-diffeomorphism.

Theorem 10. Let ${f}_{C}$ be a residual flow with contractive perturbation $g$.
If there is a closed ${\lambda}^{n}$-null set $N \subset {\mathbb{R}}^{n}$ such that $g$ is continuously differentiable on ${\mathbb{R}}^{n} \smallsetminus N$, then ${f}_{C}$ is an $\mathcal{L}$-diffeomorphism. In particular, ${f}_{C}$ is a proper flow.

The proof of this statement follows directly from the property that Lipschitz continuous functions map ${\lambda}^{n}$-null sets to sets of ${\lambda}^{n}$-measure zero (Rudin, 1987, Lem. 7.25) and the fact that $\operatorname{Lip}\left( {f}_{\mathrm{C}}\right) \leq 1 + L$.

## 5. Conclusion

In this paper, we have shown that the conditions on a flow need not be strictly satisfied everywhere: a certain degree of freedom is permitted on measure-theoretically negligible sets. This justifies using $\mathcal{L}$-diffeomorphisms instead of ordinary diffeomorphisms as flows, which significantly increases the possibilities in the design of normalizing flows and, in particular, allows the usage of non-smooth activations. Nevertheless, we have only justified their existence and usage mathematically; this is still far from a successful practical application. Thus, future work should investigate whether and in which situations non-smooth activations provide an actual gain. Moreover, we only applied these generalizations to simple residual flows. For this reason, it remains open to what extent other flows, such as more general planar flows like Sylvester flows (Van Den Berg et al., 2018) or autoregressive flows (Huang et al., 2018; Jaini et al., 2019), also benefit from this.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mvtooHbjOwx/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mvtooHbjOwx/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..3d181591bc54c06990953fa137075412f7432054
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mvtooHbjOwx/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,613 @@

# Efficient Bayesian Sampling Using Normalizing Flows to Assist Markov Chain Monte Carlo Methods

## Anonymous Authors ${}^{1}$

## Abstract

Normalizing flows can generate complex target distributions and thus show promise in many applications in Bayesian statistics as an alternative or complement to MCMC for sampling posteriors. Since no data set from the target posterior distribution is available beforehand, the flow is typically trained using the reverse Kullback-Leibler (KL) divergence that only requires samples from a base distribution. This strategy may perform poorly when the posterior is complicated and hard to sample with an untrained normalizing flow. Here we explore a distinct training strategy, using the direct KL divergence as loss, in which samples from the posterior are generated by (i) assisting a local MCMC algorithm on the posterior with a normalizing flow to accelerate its mixing rate and (ii) using the data generated this way to train the flow. The method only requires a limited amount of a priori input about the posterior, and can be used to estimate the evidence required for model validation, as we illustrate on examples.

## 1. Introduction
Given a model with continuous parameters $\theta \in \Theta \subseteq {\mathbb{R}}^{d}$, a prior on these parameters in the form of a probability density function ${\rho }_{o}\left( \theta \right)$, and a set of observational data $D$ whose likelihood given the model is $L\left( \theta \right)$, Bayes' formula asserts that the posterior distribution of the parameters has probability density

$$
{\rho }_{ * }\left( \theta \right) = \rho \left( {\theta \mid D}\right) = {Z}_{ * }^{-1}L\left( \theta \right) {\rho }_{o}\left( \theta \right) \tag{1}
$$

where the normalization factor ${Z}_{ * } = {\int }_{\Theta }L\left( \theta \right) {\rho }_{o}\left( \theta \right) {d\theta }$ is the unknown evidence. A primary aim of Bayesian inference is to sample this posterior to identify which parameters best explain the data given the model. In addition, one is typically interested in estimating ${Z}_{ * }$ since it allows for model validation, comparison, and selection.

Markov Chain Monte Carlo (MCMC) algorithms (Liu, 2008) are nowadays the methods of choice to sample complex posterior distributions. MCMC methods generate a sequence of configurations over which the time average of any suitable observable converges towards its ensemble average over some target distribution, here the posterior. This is achieved by proposing new samples from a proposal density that is easy to sample, then accepting or rejecting them using a criterion that guarantees that the transition kernel of the chain is in detailed balance with respect to the posterior density; a popular choice is the Metropolis-Hastings criterion.

MCMC methods, however, suffer from two problems. First, mixing may be slow when the posterior density ${\rho }_{ * }$ is multimodal, which can occur when the likelihood is non-log-concave (Fong et al., 2019).
This is because proposal distributions using local dynamics, like the popular Metropolis adjusted Langevin algorithm (MALA) (Roberts & Tweedie, 1996), are inefficient at making the chain transition from one mode to another, whereas uninformed non-local proposal distributions lead to high rejection rates. The second issue with MCMC algorithms is that they provide no efficient way to estimate the evidence ${Z}_{ * }$: to this end, they need to be combined with other techniques such as thermodynamic integration or replica exchange, or traded for other techniques such as annealed importance sampling (Neal, 2001), nested sampling (Skilling, 2006), or the descent/ascent nonequilibrium estimator proposed in (Rotskoff & Vanden-Eijnden, 2019) and recently explored in (Thin et al., 2021).

Here, we employ a data-driven approach to aid designing a fast-mixing transition kernel (Levy et al., 2018; Titsias, 2017; Song et al., 2017). Normalizing flows (Papamakarios et al., 2021) are especially promising in this context: these maps can approximate the posterior density ${\rho }_{ * }$ as the pushforward of a simple base density ${\rho }_{\mathrm{B}}$ (e.g. the prior density ${\rho }_{o}$) by an invertible map $T : \Theta \rightarrow \Theta$. Their use for Bayesian inference, as opposed to density estimation (Song et al., 2017; 2021), was first advocated in Rezende & Mohamed (2015). Since a representative training set of samples from the posterior density is typically unavailable beforehand, these authors proposed to use the reverse Kullback-Leibler (KL) divergence of the posterior ${\rho }_{ * }$ from the pushforward of the base ${\rho }_{\mathrm{B}}$, since this divergence can be expressed as an expectation over samples generated from ${\rho }_{\mathrm{B}}$, consistent with the variational inference framework (Jordan et al., 1998; Blei et al., 2017). This procedure has the potential drawback that training the map is hard if the posterior differs significantly from the initial pushforward (Hartnett & Mohseni, 2020), as it may lead to "mode collapse." Annealing of ${\rho }_{ * }$ during training was shown to reduce this issue (Wu et al., 2019; Nicoli et al., 2020).

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author.

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

Building on works using normalizing flows with MCMC (Albergo et al., 2019; Noé et al., 2019; Gabrié et al., 2021), here we explore an alternative strategy that blends sampling and learning, where we (i) assist an MCMC algorithm with a normalizing flow to accelerate mixing and (ii) use the generated data to train the flow on the direct KL divergence.

## 2. Posterior Sampling and Model Validation with Normalizing Flows

A normalizing flow (NF) is an invertible map $T$ that pushes forward a simple base density ${\rho }_{\mathrm{B}}$ (typically a Gaussian with unit variance, though we could also take ${\rho }_{\mathrm{B}} = {\rho }_{o}$) towards a target distribution, here the posterior density ${\rho }_{ * }$. An ideal map ${T}_{ * }$ (with inverse ${\bar{T}}_{ * }$) is such that if ${\theta }_{\mathrm{B}}$ is drawn from ${\rho }_{\mathrm{B}}$, then ${T}_{ * }\left( {\theta }_{\mathrm{B}}\right)$ is a sample from ${\rho }_{ * }$. Of course, in practice, we have no access to this exact ${T}_{ * }$, but an approximation $T$ of ${T}_{ * }$ still assists sampling from ${\rho }_{ * }$.
Denote by $\widehat{\rho }$ the pushforward of ${\rho }_{\mathrm{B}}$ under the map $T$,

$$
\widehat{\rho }\left( \theta \right) = {\rho }_{\mathrm{B}}\left( {\bar{T}\left( \theta \right) }\right) \left| {\det {\nabla }_{\theta }\bar{T}\left( \theta \right) }\right| . \tag{2}
$$

As long as $\widehat{\rho }$ and ${\rho }_{ * }$ are either both positive or both zero at any point $\theta \in \Theta$, we can use a Metropolis-Hastings MCMC algorithm to sample from ${\rho }_{ * }$ using $\widehat{\rho }$ as a proposal density: a configuration ${\theta }^{\prime } = T\left( {\theta }_{\mathrm{B}}\right)$ proposed from a given configuration $\theta$ is accepted with probability

$$
\operatorname{acc}\left( {\theta ,{\theta }^{\prime }}\right) = \min \left\lbrack {1,\frac{\widehat{\rho }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }{{\rho }_{ * }\left( \theta \right) \widehat{\rho }\left( {\theta }^{\prime }\right) }}\right\rbrack . \tag{3}
$$

This procedure is equivalent to using the transition kernel

$$
{\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right) = \operatorname{acc}\left( {\theta ,{\theta }^{\prime }}\right) \widehat{\rho }\left( {\theta }^{\prime }\right) + \left( {1 - r\left( \theta \right) }\right) \delta \left( {\theta - {\theta }^{\prime }}\right) \tag{4}
$$

where $r\left( \theta \right) = {\int }_{\Theta }\operatorname{acc}\left( {\theta ,{\theta }^{\prime }}\right) \widehat{\rho }\left( {\theta }^{\prime }\right) d{\theta }^{\prime }$. Since ${\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right)$ is irreducible and aperiodic under the aforementioned conditions on ${\rho }_{ * }$ and $\widehat{\rho }$, its associated chain is ergodic with respect to ${\rho }_{ * }$ (Meyn & Tweedie, 2012).
In addition, the evidence is given by

$$
{Z}_{ * } = {\mathbb{E}}_{{\rho }_{\mathrm{B}}}\left\lbrack \frac{L\left( {T\left( {\theta }_{\mathrm{B}}\right) }\right) {\rho }_{o}\left( {T\left( {\theta }_{\mathrm{B}}\right) }\right) }{\widehat{\rho }\left( {T\left( {\theta }_{\mathrm{B}}\right) }\right) }\right\rbrack . \tag{5}
$$

For the scheme to be efficient, two conditions are required. First, the parametrization of the map $T$ must allow for easy evaluation of the density $\widehat{\rho }$, which requires easily computable Jacobian determinants and inverses. This issue has been one of the main foci of the normalizing flow literature (Papamakarios et al., 2021) and is for instance solved using coupling layers (Dinh et al., 2015; 2017). Second, as shown by formula (3), the proposal density $\widehat{\rho }$ must produce samples with statistical weights comparable to the posterior density ${\rho }_{ * }$ to ensure appreciable acceptance rates. This requires training the map $T$ to resemble the optimal ${T}_{ * }$.
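A quick way to see how (5) works in practice is a conjugate-Gaussian toy problem where the evidence is known in closed form. In the sketch below (our own illustration; all numerical values are arbitrary), we take the base equal to the prior and an untrained identity map $T$, so $\widehat{\rho} = \rho_{\mathrm{B}}$ and the estimator reduces to plain prior sampling; with a trained $T$ the same formula applies with a nontrivial $\widehat{\rho}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# One scalar observation y ~ N(theta, sigma^2) with prior theta ~ N(0, sigma0^2);
# the exact evidence is then N(y; 0, sigma0^2 + sigma^2).
sigma0, sigma, y = 1.0, 0.5, 0.8

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

T = lambda th: th                                  # untrained (identity) map
rho_hat = lambda th: normal_pdf(th, 0.0, sigma0)   # pushforward == prior here
prior = lambda th: normal_pdf(th, 0.0, sigma0)
L = lambda th: normal_pdf(y, th, sigma)

theta_B = rng.normal(0.0, sigma0, size=200_000)    # samples from the base density
Z_hat = np.mean(L(T(theta_B)) * prior(T(theta_B)) / rho_hat(T(theta_B)))  # eq. (5)

Z_exact = normal_pdf(y, 0.0, np.sqrt(sigma0**2 + sigma**2))
assert abs(Z_hat - Z_exact) / Z_exact < 0.05
```

The better $T$ approximates $T_*$, the smaller the variance of this estimator, which is the practical motivation for training the map.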
Algorithm 1 Concurrent MCMC sampling and map training

---

1: SampleTrain$\left( {{U}_{ * }, T,\{ {\theta }_{i}\left( 0\right) \}_{i = 1}^{n},\tau ,{k}_{\max },{k}_{\mathrm{Lang}},\epsilon }\right)$
2: Inputs: ${U}_{ * }$ target potential, $T$ initial map, ${\left\{ {\theta }_{i}\left( 0\right) \right\} }_{i = 1}^{n}$ initial chains, $\tau > 0$ time step, ${k}_{\max } \in \mathbb{N}$ total duration, ${k}_{\mathrm{Lang}} \in \mathbb{N}$ number of Langevin steps per NF resampling step, $\epsilon > 0$ map training time step
3: $k = 0$
4: while $k < {k}_{\max }$ do
5:   for $i = 1,\ldots , n$ do
6:     if $k \bmod \left( {k}_{\mathrm{Lang}} + 1\right) = 0$ then
7:       ${\theta }_{\mathrm{B}, i}^{\prime } \sim {\rho }_{\mathrm{B}}$
8:       ${\theta }_{i}^{\prime } = T\left( {\theta }_{\mathrm{B}, i}^{\prime }\right)$ {push-forward via $T$}
9:       ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}^{\prime }$ with prob. $\operatorname{acc}\left( {{\theta }_{i}\left( k\right) ,{\theta }_{i}^{\prime }}\right)$, otherwise ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}\left( k\right)$ {resampling step}
10:    else
11:      ${\theta }_{i}^{\prime } = {\theta }_{i}\left( k\right) - \tau \nabla {U}_{ * }\left( {{\theta }_{i}\left( k\right) }\right) + \sqrt{2\tau }{\eta }_{i}$ with ${\eta }_{i} \sim \mathcal{N}\left( {{0}_{d},{I}_{d}}\right)$ {discretized Langevin step}
12:      ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}^{\prime }$ with MALA acceptance prob. (or always, for ULA), otherwise ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}\left( k\right)$
13:  $k \leftarrow k + 1$
14:  $\mathcal{L}\left\lbrack T\right\rbrack = - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\log \widehat{\rho }\left( {{\theta }_{i}\left( k\right) }\right)$
15:  $T \leftarrow T - \epsilon \nabla \mathcal{L}\left\lbrack T\right\rbrack$ {update the map}
16: return: ${\left\{ {\theta }_{i}\left( k\right) \right\} }_{k = 0, i = 1}^{{k}_{\max }, n}, T$
---

## 3. Compounding Local MCMCs and Generative Sampling

In Algorithm 1, we present a concurrent sampling/training strategy that synergistically uses $T$ to improve sampling from ${\rho }_{ * }$ and uses the samples obtained from ${\rho }_{ * }$ to train $T$. Let us describe the different components of the scheme.

Sampling. The sampling component of Algorithm 1 alternates between steps of the Metropolis-Hastings procedure using an NF as discussed in Section 2 and steps of a local MCMC sampler (here MALA) (line 11), using as potential

$$
{U}_{ * }\left( \theta \right) = - \log L\left( \theta \right) - \log {\rho }_{o}\left( \theta \right) . \tag{6}
$$

Strictly speaking, the second transition kernel does not need to be local; it should, however, have satisfactory acceptance rates early in the training procedure to provide initial data to start up the optimization of $T$. From a random initialization of $T$, the parametrized density $\widehat{\rho }$ initially has little overlap with the posterior ${\rho }_{ * }$, and the moves proposed by the NF have a high probability of being rejected. However, thanks to the data generated by the local sampler, the training of $T$ can be initiated. As training goes on, more and more of the moves proposed by the NF are accepted.

![01963e3d-0333-7a58-a236-5b9018a4a58d_2_145_215_711_809_0.jpg](images/01963e3d-0333-7a58-a236-5b9018a4a58d_2_145_215_711_809_0.jpg)

Figure 1. Sampling a mixture of 2 Gaussians in 10d. Top row: Training loss and acceptance rate of the NF non-local moves as a function of iterations. Middle row: Target density and estimation of the relative weight of modes A and B using sampling with $\widehat{\rho }$. Bottom row: $\widehat{\rho }$ and example chains along training/sampling. (See Appendix A.2 for setup details.)
It is crucial to notice that these moves, generated by pushing forward independent draws from the base distribution ${\rho }_{\mathrm{B}}\left( \theta \right)$, are non-local and easily mix between modes.

Training. A standard way of training $T$ is to minimize the Kullback-Leibler divergence from $\widehat{\rho }$ to ${\rho }_{ * }$. Since we do not have access to samples from ${\rho }_{ * }$, here we use instead

$$
{D}_{\mathrm{{KL}}}\left( {{\rho }_{k}\parallel \widehat{\rho }}\right) = {C}_{k} - {\int }_{\Theta }\log \widehat{\rho }\left( \theta \right) {\rho }_{k}\left( \theta \right) {d\theta }, \tag{7}
$$

where ${\rho }_{k}$ denotes the density of the MCMC after $k \in \mathbb{N}$ steps and ${C}_{k} = {\int }_{\Theta }\log {\rho }_{k}\left( \theta \right) {\rho }_{k}\left( \theta \right) {d\theta }$ is a constant irrelevant to the optimization of $T$. In practice, we run $n$ walkers of the chain in parallel: denoting their positions at iteration $k$ by ${\left\{ {\theta }_{i}\left( k\right) \right\} }_{i = 1}^{n}$, we use the following estimator of (7) (minus the constant) as the training objective:

$$
{\mathcal{L}}_{k}^{n}\left\lbrack T\right\rbrack = - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\log \widehat{\rho }\left( {{\theta }_{i}\left( k\right) }\right) . \tag{8}
$$

![01963e3d-0333-7a58-a236-5b9018a4a58d_2_897_231_700_370_0.jpg](images/01963e3d-0333-7a58-a236-5b9018a4a58d_2_897_231_700_370_0.jpg)

Figure 2. Radial velocities. From the signal plotted in blue, we draw the noisy observations in red and obtain the initial samples in gray with the Joker algorithm (Price-Whelan et al., 2017).

The training component of Algorithm 1 uses stochastic gradient descent on this loss function to update the parameters of the normalizing flow (line 15).
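To make the interplay of the two kernels and the loss (8) concrete, here is a minimal one-dimensional sketch of Algorithm 1 in which an affine map $T(z) = \mu + s z$ stands in for the normalizing flow (so $\widehat{\rho} = \mathcal{N}(\mu, s^2)$ in closed form); the bimodal target, step sizes, and schedule are our own illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unnormalized bimodal target: equal-weight Gaussian modes at +/- 3.
def U(th):               # target potential U_* = -log(L * rho_o), up to a constant
    return -np.logaddexp(-0.5 * (th - 3.0) ** 2, -0.5 * (th + 3.0) ** 2)

def grad_U(th, h=1e-5):  # numerical gradient for the Langevin step
    return (U(th + h) - U(th - h)) / (2.0 * h)

mu, s = 0.0, 1.0                       # parameters of the affine "flow"
n, tau, eps = 100, 0.05, 0.05          # walkers, Langevin step, training step
theta = np.concatenate([rng.normal(-3, 0.1, n // 2), rng.normal(3, 0.1, n // 2)])

log_rho_hat = lambda th: -0.5 * ((th - mu) / s) ** 2 - np.log(s)

for k in range(2000):
    if k % 5 == 0:       # non-local NF resampling step (Metropolis-Hastings, eq. 3)
        prop = mu + s * rng.normal(size=n)
        log_acc = (log_rho_hat(theta) - U(prop)) - (-U(theta) + log_rho_hat(prop))
        accept = np.log(rng.uniform(size=n)) < log_acc
        theta = np.where(accept, prop, theta)
    else:                # local (unadjusted) Langevin step
        theta = theta - tau * grad_U(theta) + np.sqrt(2 * tau) * rng.normal(size=n)
    # one SGD step on the direct-KL loss (eq. 8) for the affine map's parameters
    g_mu = -np.mean((theta - mu) / s**2)
    g_s = -np.mean((theta - mu) ** 2 / s**3 - 1.0 / s)
    mu, s = mu - eps * g_mu, s - eps * g_s

assert abs(mu) < 1.5 and 2.0 < s < 5.0   # matches the mixture's first two moments
```

After training, the affine map matches the first two moments of the mixture, and the non-local resampling steps let the chains hop between the modes at $\pm 3$ while the Langevin steps explore locally.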
Note that ${\left\{ {\theta }_{i}\left( k\right) \right\} }_{i = 1}^{n}$ are not perfect samples from ${\rho }_{ * }$ to start with, but their quality increases with the number of iterations of the MCMC. Note also that this training strategy differs from the one proposed in (Rezende & Mohamed, 2015), which uses instead the reverse KL divergence of $\widehat{\rho }$ from ${\rho }_{ * }$: that objective could be combined with the one in (8). Here we stick to using (8) as loss, which has the advantage of having approximate input from ${\rho }_{ * }$ from the get-go through the MCMC samples, when the chain is initialized as explained next.

Initialization. To initialize the MCMC chains, we assume that we have initial data lying in each of the important modes of the posterior ${\rho }_{ * }$, but require no additional information. That is, we take ${\theta }_{i}\left( 0\right) = {\theta }_{i}$, where the ${\theta }_{i}$ are initially located in these modes but not necessarily drawn from ${\rho }_{ * }$. We stress that the method therefore applies in situations where these modes have been located beforehand, for example by doing gradient descent on $L\left( \theta \right) {\rho }_{o}\left( \theta \right)$ from random points in $\Theta$.

Architecture details of Algorithm 1 are deferred to Appendix A, and questions of convergence to Appendix B.

## 4. Numerical Experiments

### 4.1. Sampling a Mixture of Gaussians in High Dimension

As a first test case of Algorithm 1, we sample a Gaussian mixture with 2 components in 10 dimensions and estimate the relative statistical weights of its two modes.${}^{1}$ The bottom row of Fig. 1 shows 2d projections of the trajectories of representative chains (in black) from initializations in each of the modes (red stars) as the NF learns to model the target density (blue contours).
Running first locally under the Langevin sampler, the chains progressively mix between modes and capture the difference in statistical weights, which is also reflected in the final map $T$. Quantitatively, the acceptance rate of moves proposed by the NF reaches $\sim {80}\%$ at the end of training (Fig. 1, top row). The estimator of the relative statistical weights of the two modes (the right mode, denoted by A, is twice as likely as the left mode, denoted by B) using Eq. (5) also converges to the exact value within a small statistical error (Fig. 1, middle row).

---

${}^{1}$ In this experiment and the one that follows we used unadjusted Langevin dynamics (ULA) rather than MALA because the time steps were sufficiently small to ensure a high acceptance rate.

---

![01963e3d-0333-7a58-a236-5b9018a4a58d_3_166_241_1423_470_0.jpg](images/01963e3d-0333-7a58-a236-5b9018a4a58d_3_166_241_1423_470_0.jpg)

Figure 3. Comparing samples from Algorithm 1 (top row in blue, 2d projections) with the samples from the Joker (in green) and the initialization samples (in pink). The posterior densities marginalized over two parameters are shown in the bottom row.

### 4.2. Radial Velocity of an Exoplanet

Next we apply our method to the Bayesian sampling of radial velocity parameters in a model close to the one studied by Price-Whelan et al. (2017) for a star-exoplanet system. The model has 4 parameters: an offset velocity ${v}_{0}$, an amplitude $K$, a period $P$, and a phase ${\phi }_{0}$. Denoting the vector of parameters by $\theta = \left( {{v}_{0}, K,{\phi }_{0},\ln P}\right) \in \Theta \subset {\mathbb{R}}^{4}$, the radial velocity is given by

$$
v\left( {t;\theta }\right) = {v}_{0} + K\cos \left( {{\Omega t} + {\phi }_{0}}\right) \tag{9}
$$

with $\Omega = {2\pi }/P$.
From a set of observations $D = {\left\{ {v}_{k},{t}_{k}\right\} }_{k = 1}^{N}$, the goal is to sample the posterior distribution over $\theta$. Following (Price-Whelan et al., 2017), we assume a Gaussian likelihood $L\left( \theta \right) = \mathop{\prod }\limits_{{k = 1}}^{N}\mathcal{N}\left( {{v}_{k};v\left( {{t}_{k};\theta }\right) ,{\sigma }_{\text{obs }}^{2}}\right)$, with known variance ${\sigma }_{\text{obs }}^{2}$, and the prior distributions

$$
\ln P \sim \mathcal{U}\left( {\ln {P}_{\min },\ln {P}_{\max }}\right) ,\;{\phi }_{0} \sim \mathcal{U}\left( {0,{2\pi }}\right) , \tag{10}
$$

$$
K \sim \mathcal{N}\left( {{\mu }_{K},{\sigma }_{K}^{2}}\right) ,\;{v}_{0} \sim \mathcal{N}\left( {0,{\sigma }_{{v}_{0}}^{2}}\right) .
$$

We sample $N = 6$ noisy observations at different times ${t}_{k}$ (red diamonds in Fig. 2) from a ground-truth radial velocity with parameters ${\theta }_{0}$ (dashed blue line). Using one iteration of the accept-reject Joker algorithm (Price-Whelan et al., 2017) with ${10}^{3}$ samples from the prior distributions, we obtain 11 sets of likely parameters (corresponding to the gray lines in Fig. 2), which we use as starting points for the MCMC chains in Algorithm 1. Note that, to ensure a minimum acceptance rate of $\sim 1\%$, the Joker samples the priors of $P$ and ${\phi }_{0}$ only, and computes the maximum likelihood value for the "linear parameters" $K$ and ${v}_{0}$.

We assess the quality of sampling after ${10}^{4}$ iterations of Algorithm 1 in Fig. 3, looking at all possible 2d projections of the parameter space. The samples from Algorithm 1 (top row, in blue; final acceptance rate of ${60}\%$, see Fig. 4) generally cover the modes of the marginal posterior distribution (second row) well, far beyond the initial samples (top row, in pink), with the exception of a low-weight mode in the neighborhood of ${v}_{0} = 0$ and ${\phi }_{0} = 0$ in which no chain was initialized.
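For reference, the unnormalized log-posterior of the model (9)-(10) can be sketched in a few lines; the hyperparameter values below are placeholders, not those used by Price-Whelan et al. (2017):

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder hyperparameters (illustrative, not the paper's values).
mu_K, sigma_K, sigma_v0, sigma_obs = 5.0, 2.0, 10.0, 0.5
lnP_min, lnP_max = np.log(1.0), np.log(100.0)

def v_model(t, theta):
    v0, K, phi0, lnP = theta
    return v0 + K * np.cos(2.0 * np.pi / np.exp(lnP) * t + phi0)  # eq. (9)

def log_posterior(theta, t_obs, v_obs):
    v0, K, phi0, lnP = theta
    if not (lnP_min <= lnP <= lnP_max and 0.0 <= phi0 <= 2.0 * np.pi):
        return -np.inf                           # outside the uniform priors
    # Gaussian likelihood with known observation noise
    resid = v_obs - v_model(t_obs, theta)
    log_L = -0.5 * np.sum((resid / sigma_obs) ** 2)
    # Gaussian priors on the "linear" parameters K and v0 (eq. 10)
    log_prior = -0.5 * (K - mu_K) ** 2 / sigma_K**2 - 0.5 * v0**2 / sigma_v0**2
    return log_L + log_prior                     # unnormalized log rho_*

# Synthetic data from a ground-truth parameter vector theta_0.
theta_0 = np.array([1.0, 5.0, 0.7, np.log(10.0)])
t_obs = np.sort(rng.uniform(0.0, 30.0, size=6))
v_obs = v_model(t_obs, theta_0) + sigma_obs * rng.normal(size=6)

# The truth should score higher than a clearly wrong parameter set.
theta_bad = theta_0 + np.array([0.0, 3.0, 1.5, 0.5])
assert log_posterior(theta_0, t_obs, v_obs) > log_posterior(theta_bad, t_obs, v_obs)
```

The potential $U_*(\theta) = -$`log_posterior` is what the Langevin kernel of Algorithm 1 would differentiate in this experiment.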
This illustrates the need for prior knowledge of the rough location of a basin of interest in order to successfully sample it with the proposed method. For comparison, we also report samples accepted by the Joker algorithm from an initial draw of ${10}^{6}$ samples (top row, in green). Because of the maximum likelihood step along $K$ and ${v}_{0}$ mentioned above, the posterior is not appropriately sampled along these two dimensions by this strategy.

## 5. Conclusion

Our results show that normalizing flows can be exploited to augment MCMC schemes used in Bayesian inference with a non-local transport component that significantly enhances mixing. By design, the method blends MCMC with an optimization component similar to that used in variational inference (Blei et al., 2017), and it would be interesting to investigate further how both approaches compare on challenging applications with complex posteriors over high-dimensional parameter spaces.

## References

Albergo, M. S., Kanwar, G., and Shanahan, P. E. Flow-based generative models for Markov chain Monte Carlo in lattice field theory. Physical Review D, 100(3):034515, August 2019. ISSN 2470-0010, 2470-0029. doi: 10.1103/PhysRevD.100.034515.

Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. Variational Inference: A Review for Statisticians. Journal of the American Statistical Association, 112(518):859-877, April 2017. ISSN 0162-1459, 1537-274X. doi: 10/gb2dc6.

Dinh, L., Krueger, D., and Bengio, Y. NICE: Non-linear independent components estimation. 3rd International Conference on Learning Representations, ICLR 2015 - Workshop Track Proceedings, 1(2):1-13, 2015.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density Estimation Using Real NVP. In International Conference on Learning Representations, pp. 32, 2017.

Fong, E., Lyddon, S., and Holmes, C. Scalable Nonparametric Sampling from Multimodal Posteriors with the Posterior Bootstrap. In International Conference on Machine Learning, pp. 1952-1962.
PMLR, May 2019.

Gabrié, M., Rotskoff, G. M., and Vanden-Eijnden, E. Adaptive Monte Carlo augmented with normalizing flows. arXiv:2105.12603 [cond-mat, physics:physics], May 2021.

Hartnett, G. S. and Mohseni, M. Self-Supervised Learning of Generative Spin-Glasses with Normalizing Flows. arXiv:2001.00585 [cond-mat, physics:quant-ph, stat], January 2020.

Jordan, R., Kinderlehrer, D., and Otto, F. The Variational Formulation of the Fokker-Planck Equation. SIAM Journal on Mathematical Analysis, 29(1):1-17, January 1998. ISSN 0036-1410, 1095-7154. doi: 10.1137/S0036141096303359.

Levy, D., Hoffman, M. D., and Sohl-Dickstein, J. Generalizing Hamiltonian Monte Carlo with Neural Networks. In International Conference on Learning Representations, February 2018.

Liu, J. Monte Carlo strategies in scientific computing. Springer Verlag, New York, Berlin, Heidelberg, 2008.

Meyn, S. P. and Tweedie, R. L. Markov chains and stochastic stability. Springer Science & Business Media, 2012.

Neal, R. M. Annealed importance sampling. Statistics and Computing, 11:125-139, 2001. doi: 10.1023/A:1008923215028.

Nicoli, K. A., Nakajima, S., Strodthoff, N., Samek, W., Müller, K. R., and Kessel, P. Asymptotically unbiased estimation of physical observables with neural samplers. Physical Review E, 101(2), 2020. ISSN 24700053. doi: 10.1103/PhysRevE.101.023304.

Noé, F., Olsson, S., Köhler, J., and Wu, H. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 365(6457):eaaw1147, September 2019. ISSN 0036-8075, 1095-9203. doi: 10.1126/science.aaw1147.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1-64, 2021.

Price-Whelan, A. M., Hogg, D. W., Foreman-Mackey, D., and Rix, H.-W. The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data.
The Astrophysical Journal, 837(1):20, 2017.

Rezende, D. and Mohamed, S. Variational Inference with Normalizing Flows. In International Conference on Machine Learning, pp. 1530-1538. PMLR, June 2015.

Roberts, G. O. and Tweedie, R. L. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341-363, December 1996. ISSN 1350-7265.

Rotskoff, G. M. and Vanden-Eijnden, E. Dynamical Computation of the Density of States and Bayes Factors Using Nonequilibrium Importance Sampling. Physical Review Letters, 122(15):150602, April 2019. ISSN 0031-9007, 1079-7114. doi: 10.1103/PhysRevLett.122.150602.

Skilling, J. Nested sampling for general Bayesian computation. Bayesian Analysis, 1(4):833-859, December 2006. ISSN 1936-0975. doi: 10.1214/06-BA127.

Song, J., Zhao, S., and Ermon, S. A-NICE-MC: Adversarial training for MCMC. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.

Thin, A., Janati, Y., Corff, S. L., Ollion, C., Doucet, A., Durmus, A., Moulines, E., and Robert, C. Invertible Flow Non Equilibrium sampling. arXiv:2103.10943 [stat], March 2021.

Titsias, M. K. Learning Model Reparametrizations: Implicit Variational Inference by Fitting MCMC distributions. arXiv:1708.01529 [stat], August 2017.

Wu, D., Wang, L., and Zhang, P. Solving Statistical Mechanics Using Variational Autoregressive Networks. Physical Review Letters, 122(8):080602, February 2019. doi: 10.1103/PhysRevLett.122.080602.

## A. Computational details

All code is made available through the GitHub repository anonymous (to be disclosed after double-blind review).

### A.1. Architectures

We parametrize the map $T$ as a RealNVP (Dinh et al., 2017), for which the inverse and Jacobian determinant can be computed efficiently. Its building block is an invertible affine coupling layer updating half of the variables,

$$
{\theta }_{1 : d/2}^{\left( k + 1\right) } = {e}^{s\left( {\theta }_{d/2 : d}^{\left( k\right) }\right) } \odot {\theta }_{1 : d/2}^{\left( k\right) } + t\left( {\theta }_{d/2 : d}^{\left( k\right) }\right) \tag{11}
$$

where $s\left( \cdot \right)$ and $t\left( \cdot \right)$ are learnable neural networks from ${\mathbb{R}}^{d/2}$ to ${\mathbb{R}}^{d/2}$, $\odot$ denotes component-wise multiplication (Hadamard product), and $k$ indexes the depth of the network. In our experiments we use fully connected networks of depth 3, with hidden layers of width 100 and ReLU activations. The parameters of these networks are initialized with random Gaussian values of small variance so that the RealNVP implements a map $T$ close to the identity at initialization.

### A.2. Mixture of Gaussians experiment

Target and hyperparameters. The target distribution is a mixture of Gaussians in dimension $d = {10}$ with two components:

$$
{\rho }_{ * }\left( \theta \right) = {w}_{\mathrm{A}}\frac{{e}^{-\frac{1}{2}{\left| \theta - {\theta }_{\mathrm{A}}\right| }^{2}}}{{\left( 2\pi \right) }^{d/2}} + {w}_{\mathrm{B}}\frac{{e}^{-\frac{1}{2}{\left| \theta - {\theta }_{\mathrm{B}}\right| }^{2}}}{{\left( 2\pi \right) }^{d/2}}, \tag{12}
$$

with weights ${w}_{\mathrm{A}} = 2/3,{w}_{\mathrm{B}} = 1/3$ and centroids with non-zero coordinates only in the first two dimensions, ${\theta }_{\mathrm{A},1,2} = \left( {8,3}\right)$ and ${\theta }_{\mathrm{B},1,2} = \left( {-2,3}\right)$. Fig. 1 displays density slices in the $\left( {{\theta }_{1},{\theta }_{2}}\right)$-plane.
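As an illustration of the coupling update in Eq. (11), a minimal numpy sketch of one affine coupling layer and its exact inverse follows. The toy functions `s` and `t` below are hypothetical stand-ins for the learned networks, not the implementation used in the experiments.

```python
import numpy as np

def affine_coupling_forward(theta, s, t):
    """One RealNVP coupling layer, Eq. (11): update the first half of the
    coordinates conditioned on the (unchanged) second half."""
    d = theta.shape[-1]
    x1, x2 = theta[..., : d // 2], theta[..., d // 2 :]
    y1 = np.exp(s(x2)) * x1 + t(x2)      # elementwise affine update
    log_det = s(x2).sum(axis=-1)         # log |det Jacobian| of the layer
    return np.concatenate([y1, x2], axis=-1), log_det

def affine_coupling_inverse(theta, s, t):
    """Exact inverse: subtract the shift, divide by the scale."""
    d = theta.shape[-1]
    y1, x2 = theta[..., : d // 2], theta[..., d // 2 :]
    x1 = (y1 - t(x2)) * np.exp(-s(x2))
    return np.concatenate([x1, x2], axis=-1)

# Toy stand-ins for the learnable networks s(.) and t(.); small outputs
# keep the layer close to the identity, as in the initialization above.
s = lambda x: 0.1 * np.tanh(x)
t = lambda x: 0.5 * x

theta = np.random.default_rng(0).normal(size=(4, 10))
out, log_det = affine_coupling_forward(theta, s, t)
back = affine_coupling_inverse(out, s, t)
assert np.allclose(back, theta)          # invertibility check
```

Stacking such layers while alternating which half is updated yields the full map $T$; since the Jacobian is triangular with diagonal ${e}^{s\left( \cdot \right)}$, its log-determinant is simply the sum of the outputs of $s$.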
We use a RealNVP network with 6 pairs of coupling layers and a standard normal distribution as the base ${\rho }_{\mathrm{B}}$. A set of 100 independent walkers were initialized in equal shares in modes A and B of the target. For the optimization, we compute gradients using batches of 1000 samples corresponding to 10 consecutive repeated sampling steps of Algorithm 1 (with $\tau = {0.005}$ and ${k}_{\text{Lang }} = 1$). We run 4000 parameter updates using Adam with a learning rate of 0.005.

Computing log-evidence differences. Interpreting ${\rho }_{ * }\left( \theta \right)$ as a posterior distribution over a set of parameters $\Theta$, we would like to estimate the relative evidence of modes $A$ and $B$. Denoting by $A$ and $B$ two sets of configurations in $\Theta$ corresponding to each mode, the difference between their log-evidences is given by

$$
\log {Z}_{A} - \log {Z}_{B} = \log \frac{{\int }_{\Theta }{\mathbb{1}}_{A}\left( \theta \right) {\rho }_{ * }\left( \theta \right) {d\theta }}{{\int }_{\Theta }{\mathbb{1}}_{B}\left( \theta \right) {\rho }_{ * }\left( \theta \right) {d\theta }} = \log {\mathbb{E}}_{ * }\left( {\mathbb{1}}_{A}\right) - \log {\mathbb{E}}_{ * }\left( {\mathbb{1}}_{B}\right) . \tag{13}
$$

Once the normalizing flow has been learned to assist sampling, it can be used to approximate Eq. 5. Drawing ${\left\{ {\theta }_{i}\right\} }_{i = 1}^{n}$ from $\widehat{\rho }$ we have the Monte Carlo estimate ${\widehat{Z}}_{*,\mathrm{\;A}}$ of ${\mathbb{E}}_{ * }\left( {\mathbb{1}}_{A}\right)$:

$$
{\widehat{Z}}_{*,\mathrm{\;A}} = \frac{1}{n{Z}_{ * }}\mathop{\sum }\limits_{{i = 1}}^{n}{\mathbb{1}}_{A}\left( {\theta }_{i}\right) \widehat{w}\left( {\theta }_{i}\right) , \tag{14}
$$

with the unnormalized weights ${\widehat{w}}_{i} = L\left( {\theta }_{i}\right) {\rho }_{o}\left( {\theta }_{i}\right) /\widehat{\rho }\left( {\theta }_{i}\right)$, taking the form of an importance sampling estimator.
The quality of the estimator can be monitored using an estimate of the effective sample size,

$$
{n}_{\text{eff }} = \frac{{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\widehat{w}}_{i}\right) }^{2}}{\mathop{\sum }\limits_{{i = 1}}^{n}{\widehat{w}}_{i}^{2}}, \tag{15}
$$

which can be used to adjust the variance estimate obtained from the empirical variance. Finally, the unknown ${Z}_{ * }$ cancels out in the log-evidence difference:

$$
\log {Z}_{\mathrm{A}} - \log {Z}_{\mathrm{B}} \approx \log \left( {\mathop{\sum }\limits_{{i = 1}}^{n}{\mathbb{1}}_{A}\left( {\theta }_{i}\right) {\widehat{w}}_{i}}\right) - \log \left( {\mathop{\sum }\limits_{{i = 1}}^{n}{\mathbb{1}}_{B}\left( {\theta }_{i}\right) {\widehat{w}}_{i}}\right) . \tag{16}
$$

In the middle right panel of Fig. 1 we report the performance of this estimator using the sets $A = \left\{ {\theta : {\begin{Vmatrix}\theta - {\theta }_{A}\end{Vmatrix}}_{2} \leq 5}\right\}$ and $B = \left\{ {\theta : {\begin{Vmatrix}\theta - {\theta }_{B}\end{Vmatrix}}_{2} \leq 5}\right\}$, and $n = {10}^{5}$. The estimator converges to the exact value $\ln \left( {2/3}\right) - \ln \left( {1/3}\right) = \ln 2$ as the quality of $\widehat{\rho }$ increases over training.

![01963e3d-0333-7a58-a236-5b9018a4a58d_7_173_250_1410_457_0.jpg](images/01963e3d-0333-7a58-a236-5b9018a4a58d_7_173_250_1410_457_0.jpg)

Figure 4. Training of a NF to model the posterior distribution for the radial velocity experiment. Left: Evolution of the approximate KL divergence used as an objective for training. Middle: The acceptance rate of the non-local moves proposed by the NF rises above ${60}\%$ by the end of training. Right: The blue contour plot reports the true posterior marginalized along $\ln P$ and ${\phi }_{0}$, computed with numerical integration. The colored lines mark the trajectories of MCMC chains using the assisted sampling strategy considered here with the final learned map $T$.
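The importance-sampling machinery of Eqs. (14)-(16) can be sketched in numpy. Everything below is a hypothetical stand-in: a two-dimensional analogue of the mixture target (12), with a broad Gaussian proposal playing the role of the trained flow density $\widehat{\rho }$.

```python
import numpy as np

rng = np.random.default_rng(1)

# 2d analogue of the mixture target (12): unnormalized L(theta) * rho_o(theta)
mu_A, mu_B, w_A, w_B = np.array([8.0, 3.0]), np.array([-2.0, 3.0]), 2 / 3, 1 / 3

def unnorm_posterior(theta):
    pa = np.exp(-0.5 * ((theta - mu_A) ** 2).sum(-1))
    pb = np.exp(-0.5 * ((theta - mu_B) ** 2).sum(-1))
    return (w_A * pa + w_B * pb) / (2 * np.pi)

# Broad Gaussian proposal covering both modes, standing in for rho_hat
mu_q, sig_q = np.array([3.0, 3.0]), 6.0
n = 100_000
theta = mu_q + sig_q * rng.normal(size=(n, 2))
log_q = -0.5 * (((theta - mu_q) / sig_q) ** 2).sum(-1) - np.log(2 * np.pi * sig_q**2)

w = unnorm_posterior(theta) / np.exp(log_q)       # unnormalized weights, Eq. (14)
n_eff = w.sum() ** 2 / (w**2).sum()               # effective sample size, Eq. (15)

in_A = ((theta - mu_A) ** 2).sum(-1) <= 25.0      # ||theta - theta_A||_2 <= 5
in_B = ((theta - mu_B) ** 2).sum(-1) <= 25.0
log_ratio = np.log(w[in_A].sum()) - np.log(w[in_B].sum())   # Eq. (16)
print(n_eff, log_ratio)   # log_ratio should approach ln 2 ≈ 0.693 as n grows
```

Note that both the unknown normalization and the $1/n$ factor cancel in the difference (16), so only the unnormalized weights are needed.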
+ +387 + +388 + +389 + +390 + +391 + +392 + +393 + +394 + +395 + +### A.3. Radial velocity experiment + +Setup. The parameters of the prior are set to the values $\ln {P}_{\min } = 3,\ln {P}_{\max } = 5,{\sigma }_{\text{obs }} = {1.8},{\sigma }_{K} = 3,{\mu }_{K} = 5$ and ${\sigma }_{{v}_{0}} = 1$ . The RealNVP used to learn the posterior distribution has 6 pairs of coupling layers. The base distribution ${\rho }_{\mathrm{B}}$ is a standard normal distribution as in the previous experiment, but the NF is learned on a whitened representation of the parameters $\theta$ . Concretely, using the 11 initial samples from the Joker, we compute the empirical mean $\widehat{\theta }$ and empirical covariance $\widehat{\sum } = \mathop{\sum }\limits_{{i = 1}}^{n}\left( {{\theta }_{i} - \widehat{\theta }}\right) {\left( {\theta }_{i} - \widehat{\theta }\right) }^{\top }/n$ . The latter admits an eigenvalue decomposition $\widehat{\sum } = {OD}{O}^{\top }$ , with $D$ the diagonal matrix of eigenvalues and $O$ the orthogonal eigenvectors basis. Using $W = O{D}^{-1/2}{O}^{\top }$ , we train the normalize flow to model the distribution of the whitened parameters ${\theta }_{W} = W\left( {\theta - \widehat{\theta }}\right)$ . + +Training. A set of 110 independent walkers were initialized using 10 copies of each of 11 set of parameters retained from an initial iteration of the Joker algorithm from ${10}^{3}$ draws of ${\phi }_{0}$ and $\ln P$ . Here again we use 5 consecutive repeated sampling steps of Algorithm 1 (with $\tau = 5{e}^{-6}$ and ${k}_{\text{Lang }} = 1$ ) to compute gradients using batches of 550 samples. We run Adam for ${10}^{4}$ iterations with a learning rate of 0.001 . + +Fig. 4 reports the evolution of the approximate KL divergence ${\mathcal{L}}_{k}^{n}$ along training (left) and the acceptance rates of the non-local moves proposed by the NF (middle). 
We also plot the projected trajectories of 11 chains started from the initial Joker samples and updated with the combined sampling components of Algorithm 1 using the final learned map $T$. During these short chains, which included 10 non-local resampling steps, the walkers can be seen to jump back and forth between two of the modes of the marginalized density (shown in blue in the plot). However, mixing towards the mode around ${\phi }_{0} = 0$ is not well assisted (see also the discussion in the main text).

## B. Continuous limit of the MCMC scheme

### B.1. Chapman-Kolmogorov equation

Written in terms of the densities ${\rho }_{ * }$ and $\widehat{\rho }$ (assumed to be fixed for now), the transition kernel in (4) reads

$$
{\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right) = a\left( {\theta ,{\theta }^{\prime }}\right) \widehat{\rho }\left( {\theta }^{\prime }\right) + \left( {1 - b\left( \theta \right) }\right) \delta \left( {\theta - {\theta }^{\prime }}\right) \tag{17}
$$

where

$$
a\left( {\theta ,{\theta }^{\prime }}\right) = \min \left( {\frac{\widehat{\rho }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }{\widehat{\rho }\left( {\theta }^{\prime }\right) {\rho }_{ * }\left( \theta \right) },1}\right) , \tag{18}
$$

$$
b\left( \theta \right) = {\int }_{\Theta }a\left( {\theta ,{\theta }^{\prime }}\right) \widehat{\rho }\left( {\theta }^{\prime }\right) d{\theta }^{\prime }.
$$

Denoting by ${\left\{ {\rho }_{k}\left( \theta \right) \right\} }_{k \in \mathbb{N}}$ the updated probability density of the walker in the Markov chain associated with the kernel ${\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right)$ alone, this density satisfies the Chapman-Kolmogorov equation

$$
{\rho }_{k + 1}\left( \theta \right) = {\int }_{\Theta }{\rho }_{k}\left( {\theta }^{\prime }\right) {\pi }_{T}\left( {{\theta }^{\prime },\theta }\right) d{\theta }^{\prime }.
\tag{19}
$$

Using the explicit form of ${\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right)$ in (17), after some simple reorganization this equation can be written as

$$
{\rho }_{k + 1}\left( \theta \right) = {\rho }_{k}\left( \theta \right) + {\int }_{\Theta }R\left( {\theta ,{\theta }^{\prime }}\right) \left( {{\rho }_{ * }\left( \theta \right) {\rho }_{k}\left( {\theta }^{\prime }\right) - {\rho }_{k}\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }\right) d{\theta }^{\prime } \tag{20}
$$

where we defined

$$
R\left( {\theta ,{\theta }^{\prime }}\right) = R\left( {{\theta }^{\prime },\theta }\right) = \min \left( {\frac{\widehat{\rho }\left( \theta \right) }{{\rho }_{ * }\left( \theta \right) },\frac{\widehat{\rho }\left( {\theta }^{\prime }\right) }{{\rho }_{ * }\left( {\theta }^{\prime }\right) }}\right) . \tag{21}
$$

Note that if we had $\widehat{\rho } = {\rho }_{ * }$, then $R\left( {\theta ,{\theta }^{\prime }}\right) = 1$ and (20) would reach equilibrium in one step, ${\rho }_{k + 1} = {\rho }_{ * }$, regardless of ${\rho }_{k}$.

### B.2. Continuous limit

To take the continuous limit of (20), we modify this equation so that the update of the density is only partial. Specifically, denoting by ${\rho }_{t}$ the value of the density at time $t \geq 0$, we turn this equation into

$$
{\rho }_{t + \tau }\left( \theta \right) = {\rho }_{t}\left( \theta \right) + {\alpha \tau }{\int }_{\Theta }R\left( {\theta ,{\theta }^{\prime }}\right) \left( {{\rho }_{ * }\left( \theta \right) {\rho }_{t}\left( {\theta }^{\prime }\right) - {\rho }_{t}\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }\right) d{\theta }^{\prime } \tag{22}
$$

where $\alpha > 0$ and $\tau > 0$ are parameters. This will allow us to put the MCMC resampling updates on par with those of MALA, using $\tau > 0$ as the timestep in both.
Subtracting ${\rho }_{t}\left( \theta \right)$ from both sides of (22), dividing by $\tau$, and letting $\tau \rightarrow 0$ gives

$$
{\partial }_{t}{\rho }_{t}\left( \theta \right) = \alpha {\int }_{\Theta }R\left( {\theta ,{\theta }^{\prime }}\right) \left( {{\rho }_{ * }\left( \theta \right) {\rho }_{t}\left( {\theta }^{\prime }\right) - {\rho }_{t}\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }\right) d{\theta }^{\prime }. \tag{23}
$$

We can now add the Langevin terms that arise in the continuous limit of the compounded MCMC scheme that we use, to arrive at

$$
{\partial }_{t}{\rho }_{t} = \nabla \cdot \left( {{\rho }_{t}\nabla {U}_{ * } + \nabla {\rho }_{t}}\right) + \alpha {\int }_{\Theta }R\left( {\theta ,{\theta }^{\prime }}\right) \left( {{\rho }_{ * }\left( \theta \right) {\rho }_{t}\left( {\theta }^{\prime }\right) - {\rho }_{t}\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }\right) d{\theta }^{\prime } \tag{24}
$$

where $\alpha > 0$ measures the separation of time scales between the Langevin and resampling terms. Written in terms of ${g}_{t} = {\rho }_{t}/{\rho }_{ * }$ and ${\widehat{g}}_{t} = {\widehat{\rho }}_{t}/{\rho }_{ * }$ (now also allowed to vary with time), Eq. (24) reads

$$
{\partial }_{t}{g}_{t} = - \nabla {U}_{ * } \cdot \nabla {g}_{t} + \Delta {g}_{t} + \alpha {\int }_{\Theta }\min \left( {{\widehat{g}}_{t}\left( \theta \right) ,{\widehat{g}}_{t}\left( {\theta }^{\prime }\right) }\right) \left( {{g}_{t}\left( {\theta }^{\prime }\right) - {g}_{t}\left( \theta \right) }\right) {\rho }_{ * }\left( {\theta }^{\prime }\right) d{\theta }^{\prime } \tag{25}
$$

### B.3. Convergence rate

Consider the Pearson ${\chi }^{2}$-divergence of ${\rho }_{t}$ with respect to ${\rho }_{ * }$, defined as

$$
{D}_{t} = {\int }_{\Theta }\frac{{\rho }_{t}^{2}}{{\rho }_{ * }}{d\theta } - 1 = {\int }_{\Theta }{g}_{t}^{2}{\rho }_{ * }{d\theta } - 1 \geq 0.
\tag{26} +$$ + +Assuming that ${D}_{0} < \infty$ and using (25) we deduce that ${D}_{t}$ satisfies + +$$ +\frac{d{D}_{t}}{dt} = 2{\int }_{\Theta }{g}_{t}\left( \theta \right) {\partial }_{t}{g}_{t}\left( \theta \right) {\rho }_{ * }\left( \theta \right) {d\theta } +$$ + +$$ += 2{\int }_{\Theta }{g}_{t}\left( \theta \right) \nabla \cdot \left( {{\rho }_{ * }\left( \theta \right) \nabla {g}_{t}\left( \theta \right) }\right) {d\theta } + {2\alpha }{\int }_{{\Theta }^{2}}\min \left( {{\widehat{g}}_{t}\left( \theta \right) ,{\widehat{g}}_{t}\left( {\theta }^{\prime }\right) }\right) \left( {{g}_{t}\left( {\theta }^{\prime }\right) - {g}_{t}\left( \theta \right) }\right) {g}_{t}\left( \theta \right) {\rho }_{ * }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) {d\theta d}{\theta }^{\prime } \tag{27} +$$ + +$$ += - 2{\int }_{\Theta }{\left| \nabla {g}_{t}\left( \theta \right) \right| }^{2}{\rho }_{ * }\left( \theta \right) {d\theta } - \alpha {\int }_{{\Theta }^{2}}\min \left( {{\widehat{g}}_{t}\left( \theta \right) ,{\widehat{g}}_{t}\left( {\theta }^{\prime }\right) }\right) {\left| {g}_{t}\left( {\theta }^{\prime }\right) - {g}_{t}\left( \theta \right) \right| }^{2}{\rho }_{ * }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) {d\theta d}{\theta }^{\prime } +$$ + +$$ +\leq - \alpha {\int }_{{\Theta }^{2}}\min \left( {{\widehat{g}}_{t}\left( \theta \right) ,{\widehat{g}}_{t}\left( {\theta }^{\prime }\right) }\right) {\left| {g}_{t}\left( {\theta }^{\prime }\right) - {g}_{t}\left( \theta \right) \right| }^{2}{\rho }_{ * }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) {d\theta d}{\theta }^{\prime } +$$ + +where we used $\left( {-\nabla {U}_{ * } \cdot \nabla {g}_{t} + \Delta {g}_{t}}\right) {\rho }_{ * } = \nabla \cdot \left( {{\rho }_{ * }\nabla {g}_{t}}\right)$ to reexpress the first integral in the second equality. 
If we denote ${\widehat{G}}_{t} = \mathop{\inf }\limits_{{\theta \in \Theta }}{\widehat{g}}_{t}\left( \theta \right) \in \left\lbrack {0,1}\right\rbrack$, (27) implies

$$
\frac{d{D}_{t}}{dt} \leq - \alpha {\widehat{G}}_{t}{\int }_{{\Theta }^{2}}{\left| {g}_{t}\left( {\theta }^{\prime }\right) - {g}_{t}\left( \theta \right) \right| }^{2}{\rho }_{ * }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) {d\theta d}{\theta }^{\prime } = - {2\alpha }{\widehat{G}}_{t}{D}_{t}, \tag{28}
$$

where we used the normalization conditions ${\int }_{\Theta }{g}_{t}\left( \theta \right) {\rho }_{ * }\left( \theta \right) {d\theta } = {\int }_{\Theta }{\rho }_{t}\left( \theta \right) {d\theta } = 1$. As a result, using Gronwall's inequality we deduce

$$
{D}_{t} \leq {D}_{0}{e}^{-{2\alpha }{\int }_{0}^{t}{\widehat{G}}_{s}{ds}}. \tag{29}
$$

This equation indicates that ${D}_{t} \rightarrow 0$ as $t \rightarrow \infty$ as long as ${\int }_{0}^{t}{\widehat{G}}_{s}{ds} \rightarrow \infty$. That is, convergence can only fail if ${\widehat{G}}_{t} = o\left( {t}^{-1}\right)$ as $t \rightarrow \infty$, and it is guaranteed otherwise. Convergence is also asymptotically exponential, as long as ${\widehat{G}}_{t}$ remains bounded away from 0 as $t \rightarrow \infty$.

To get a more explicit convergence rate, let us analyze (29) in two subcases. First, let us assume that the map is not trained, i.e. ${\widehat{g}}_{t}\left( \theta \right) = \widehat{g}\left( \theta \right)$ is fixed, and denote $\widehat{G} = \mathop{\inf }\limits_{{\theta \in \Theta }}\widehat{g}\left( \theta \right) \in \left\lbrack {0,1}\right\rbrack$. In this case, (29) reduces to

$$
{D}_{t} \leq {D}_{0}{e}^{-{2\alpha }\widehat{G}t}\;\left( {{\widehat{g}}_{t} = \widehat{g}\text{ fixed }}\right) . \tag{30}
$$

Note that this bound is only nontrivial if $\widehat{G} > 0$.
Even if that is the case, the rate in (30) can be quite poor if $\widehat{G}$ is very small (e.g. exponentially small in the input dimension $d$), which is to be expected if the map is not trained. The best-case scenario is of course the idealized situation when $\widehat{G} = 1$, which requires that $\widehat{g} = 1$ (i.e. $\widehat{\rho } = {\rho }_{ * }$) because of the normalization conditions ${\int }_{\Theta }\widehat{g}\left( \theta \right) {\rho }_{ * }\left( \theta \right) {d\theta } = {\int }_{\Theta }\widehat{\rho }\left( \theta \right) {d\theta } = 1$: this case is the continuous equivalent of the one-step convergence of the discrete MCMC scheme with resampling from ${\rho }_{ * }$.

Second, let us assume that ${\widehat{g}}_{t} = {g}_{t}$, that is, the trained distribution instantaneously follows the walkers' distribution at all times. In this case, (29) reduces to

$$
{D}_{t} \leq {D}_{0}{e}^{-{2\alpha }{\int }_{0}^{t}{G}_{s}{ds}}\;\left( {{\widehat{g}}_{t} = {g}_{t}}\right) , \tag{31}
$$

where we denote

$$
{G}_{t} = \mathop{\inf }\limits_{{\theta \in \Theta }}\left( \frac{{\rho }_{t}\left( \theta \right) }{{\rho }_{ * }\left( \theta \right) }\right) = \mathop{\inf }\limits_{{\theta \in \Theta }}{g}_{t}\left( \theta \right) \in \left\lbrack {0,1}\right\rbrack . \tag{32}
$$

To make this bound explicit, let us consider the evolution of ${G}_{t}$.
Denoting ${\theta }_{t} = {\operatorname{argmin}}_{\theta \in \Theta }{g}_{t}\left( \theta \right)$ so that ${G}_{t} = {g}_{t}\left( {\theta }_{t}\right)$ , and using $\min \left( {{g}_{t}\left( {\theta }_{t}\right) ,{g}_{t}\left( {\theta }^{\prime }\right) }\right) = {g}_{t}\left( {\theta }_{t}\right) = {G}_{t},\nabla {g}_{t}\left( {\theta }_{t}\right) = 0$ , and $\Delta {g}_{t}\left( {\theta }_{t}\right) \geq 0$ by definition of ${\theta }_{t}$ , from (25) we have + +$$ +\frac{d{G}_{t}}{dt} = {\partial }_{t}{g}_{t}\left( {\theta }_{t}\right) + {\dot{\theta }}_{t} \cdot \nabla {g}_{t}\left( {\theta }_{t}\right) +$$ + +$$ += \Delta {g}_{t}\left( {\theta }_{t}\right) + \alpha {G}_{t}{\int }_{\Theta }\left( {{g}_{t}\left( {\theta }^{\prime }\right) - {G}_{t}}\right) {\rho }_{ * }\left( {\theta }^{\prime }\right) d{\theta }^{\prime } \tag{33} +$$ + +$$ += \Delta {g}_{t}\left( {\theta }_{t}\right) + \alpha {G}_{t} - \alpha {G}_{t}^{2} +$$ + +$$ +\geq \alpha {G}_{t} - \alpha {G}_{t}^{2} +$$ + +where we used again the normalization conditions ${\int }_{\Theta }{g}_{t}\left( {\theta }^{\prime }\right) {\rho }_{ * }\left( {\theta }^{\prime }\right) d{\theta }^{\prime } = {\int }_{\Theta }{\rho }_{ * }\left( {\theta }^{\prime }\right) d{\theta }^{\prime } = 1$ . Eq. (33) implies that + +$$ +\frac{1}{{G}_{t} - {G}_{t}^{2}}\frac{d{G}_{t}}{dt} \geq \alpha \tag{34} +$$ + +which after integration gives + +$$ +\log \left( \frac{{G}_{t}\left( {1 - {G}_{0}}\right) }{{G}_{0}\left( {1 - {G}_{t}}\right) }\right) \geq {\alpha t} \tag{35} +$$ + +This means that we have + +$$ +{G}_{t} \geq \frac{{G}_{0}}{{G}_{0} + \left( {1 - {G}_{0}}\right) {e}^{-{\alpha t}}}. \tag{36} +$$ + +Inserting this equation in (31) and performing the integral explicitly gives + +$$ +{D}_{t} \leq \frac{{D}_{0}}{{\left( {G}_{0}\left( {e}^{\alpha t} - 1\right) + 1\right) }^{2}}. \tag{37} +$$ + +This bound is only nontrivial if ${G}_{0} \in (0,1\rbrack$ . 
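As a numerical sanity check on the resampling-only update (20) and its exponential convergence, one can iterate a finite-state analogue of the Chapman-Kolmogorov dynamics directly. All distributions below are synthetic stand-ins; the fixed proposal plays the role of an imperfect, untrained $\widehat{\rho }$.

```python
import numpy as np

# Finite-state analogue of the resampling-only update (20):
#   rho_{k+1}(i) = rho_k(i) + sum_j R(i,j) (rho*(i) rho_k(j) - rho_k(i) rho*(j)),
# with R(i,j) = min(rho_hat(i)/rho*(i), rho_hat(j)/rho*(j)) as in (21).
rng = np.random.default_rng(0)
m = 50
rho_star = rng.random(m); rho_star /= rho_star.sum()   # synthetic target
rho_hat = 0.5 * rho_star + 0.5 / m                     # fixed imperfect proposal
rho = rng.random(m); rho /= rho.sum()                  # initial walker density

g_hat = rho_hat / rho_star
R = np.minimum.outer(g_hat, g_hat)                     # symmetric R(i,j)

def chi2(rho):
    """Pearson chi^2 divergence D of rho from rho_star, Eq. (26)."""
    return np.sum(rho**2 / rho_star) - 1.0

D = [chi2(rho)]
for _ in range(200):
    flux = R * (np.outer(rho_star, rho) - np.outer(rho, rho_star))
    rho = rho + flux.sum(axis=1)   # antisymmetric flux keeps rho normalized
    D.append(chi2(rho))

# D_k decreases monotonically toward 0, consistent with the bound (30)
assert all(D[k + 1] <= D[k] + 1e-12 for k in range(len(D) - 1))
assert D[-1] < 1e-3 * D[0]
```

Here $\widehat{G} = {\inf }_{i}\,{\widehat{\rho }}_{i}/{\rho }_{*,i} \geq 1/2$ by construction, so the decay is fast; shrinking the uniform component of the proposal shrinks $\widehat{G}$ and visibly slows the convergence, matching the discussion of the untrained-map case.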
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mvtooHbjOwx/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mvtooHbjOwx/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9a8d24ac3263075c0a2e22513f29ac9a7d5365ed
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/mvtooHbjOwx/Initial_manuscript_tex/Initial_manuscript.tex

§ EFFICIENT BAYESIAN SAMPLING USING NORMALIZING FLOWS TO ASSIST MARKOV CHAIN MONTE CARLO METHODS

§ ANONYMOUS AUTHORS ${}^{1}$

§ ABSTRACT

Normalizing flows can generate complex target distributions and thus show promise in many applications in Bayesian statistics as an alternative or complement to MCMC for sampling posteriors. Since no data set from the target posterior distribution is available beforehand, the flow is typically trained using the reverse Kullback-Leibler (KL) divergence, which only requires samples from a base distribution. This strategy may perform poorly when the posterior is complicated and hard to sample with an untrained normalizing flow. Here we explore a distinct training strategy, using the direct KL divergence as loss, in which samples from the posterior are generated by (i) assisting a local MCMC algorithm on the posterior with a normalizing flow to accelerate its mixing rate and (ii) using the data generated this way to train the flow. The method only requires a limited amount of a priori input about the posterior, and can be used to estimate the evidence required for model validation, as we illustrate on examples.

§ 1.
INTRODUCTION

Given a model with continuous parameters $\theta \in \Theta \subseteq {\mathbb{R}}^{d}$, a prior on these parameters in the form of a probability density function ${\rho }_{o}\left( \theta \right)$, and a set of observational data $D$ whose likelihood given the model is $L\left( \theta \right)$, Bayes' formula asserts that the posterior distribution of the parameters has probability density

$$
{\rho }_{ * }\left( \theta \right) = \rho \left( {\theta \mid D}\right) = {Z}_{ * }^{-1}L\left( \theta \right) {\rho }_{o}\left( \theta \right) \tag{1}
$$

where the normalization factor ${Z}_{ * } = {\int }_{\Theta }L\left( \theta \right) {\rho }_{o}\left( \theta \right) {d\theta }$ is the unknown evidence. A primary aim of Bayesian inference is to sample this posterior to identify which parameters best explain the data given the model. In addition, one is typically interested in estimating ${Z}_{ * }$ since it allows for model validation, comparison, and selection.

Markov chain Monte Carlo (MCMC) algorithms (Liu, 2008) are nowadays the methods of choice to sample complex posterior distributions. MCMC methods generate a sequence of configurations over which the time average of any suitable observable converges towards its ensemble average over some target distribution, here the posterior. This is achieved by proposing new samples from a proposal density that is easy to sample, then accepting or rejecting them using a criterion that guarantees that the transition kernel of the chain is in detailed balance with respect to the posterior density; a popular choice is the Metropolis-Hastings criterion.

MCMC methods, however, suffer from two problems. First, mixing may be slow when the posterior density ${\rho }_{ * }$ is multimodal, which can occur when the likelihood is non-log-concave (Fong et al., 2019).
This is because proposal distributions using local dynamics, like the popular Metropolis-adjusted Langevin algorithm (MALA) (Roberts & Tweedie, 1996), are inefficient at making the chain transition from one mode to another, whereas uninformed non-local proposal distributions lead to high rejection rates. The second issue with MCMC algorithms is that they provide no efficient way to estimate the evidence ${Z}_{ * }$: to this end, they need to be combined with other techniques such as thermodynamic integration or replica exchange, or traded for other techniques such as annealed importance sampling (Neal, 2001), nested sampling (Skilling, 2006), or the descent/ascent nonequilibrium estimator proposed in (Rotskoff & Vanden-Eijnden, 2019) and recently explored in (Thin et al., 2021).

Here, we employ a data-driven approach to aid in designing a fast-mixing transition kernel (Levy et al., 2018; Titsias, 2017; Song et al., 2017). Normalizing flows (Papamakarios et al., 2021) are especially promising in this context: these maps can approximate the posterior density ${\rho }_{ * }$ as the pushforward of a simple base density ${\rho }_{\mathrm{B}}$ (e.g. the prior density ${\rho }_{o}$) by an invertible map $T : \Theta \rightarrow \Theta$. Their use for Bayesian inference, as opposed to density estimation (Song et al., 2017; 2021), was first advocated in Rezende & Mohamed (2015). Since a representative training set of samples from the posterior density is typically unavailable beforehand, these authors proposed to use the reverse Kullback-Leibler (KL) divergence of the posterior ${\rho }_{ * }$ from the pushforward of the base ${\rho }_{\mathrm{B}}$, since this divergence can be expressed as an expectation over samples generated from ${\rho }_{\mathrm{B}}$, consistent with the variational inference framework (Jordan et al., 1998; Blei et al., 2017). This procedure has the potential drawback that training the map is hard if the posterior differs significantly from the initial pushforward (Hartnett & Mohseni, 2020), as it may lead to "mode collapse." Annealing of ${\rho }_{ * }$ during training was shown to reduce this issue (Wu et al., 2019; Nicoli et al., 2020).

Building on works using normalizing flows with MCMC (Albergo et al., 2019; Noé et al., 2019; Gabrié et al., 2021), here we explore an alternative strategy that blends sampling and learning, where we (i) assist an MCMC algorithm with a normalizing flow to accelerate mixing and (ii) use the generated data to train the flow on the direct KL divergence.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author.

Preliminary work. Under review by INNF+ 2021. Do not distribute.

§ 2. POSTERIOR SAMPLING AND MODEL VALIDATION WITH NORMALIZING FLOWS

A normalizing flow (NF) is an invertible map $T$ that pushes forward a simple base density ${\rho }_{\mathrm{B}}$ (typically a Gaussian with unit variance, though we could also take ${\rho }_{\mathrm{B}} = {\rho }_{o}$) towards a target distribution, here the posterior density ${\rho }_{ * }$. An ideal map ${T}_{ * }$ (with inverse ${\bar{T}}_{ * }$) is such that if ${\theta }_{\mathrm{B}}$ is drawn from ${\rho }_{\mathrm{B}}$ then ${T}_{ * }\left( {\theta }_{\mathrm{B}}\right)$ is a sample from ${\rho }_{ * }$. Of course, in practice, we have no access to this exact ${T}_{ * }$, but if we have an approximation $T$ of ${T}_{ * }$, it still assists sampling ${\rho }_{ * }$.
Denote by $\widehat{\rho }$ the push-forward of ${\rho }_{\mathrm{B}}$ under the map $T$,

$$
\widehat{\rho }\left( \theta \right) = {\rho }_{\mathrm{B}}\left( {\bar{T}\left( \theta \right) }\right) \left| {\det {\nabla }_{\theta }\bar{T}\left( \theta \right) }\right| . \tag{2}
$$

As long as $\widehat{\rho }$ and ${\rho }_{ * }$ are either both positive or both zero at any point $\theta \in \Theta$, we can use a Metropolis-Hastings MCMC algorithm to sample from ${\rho }_{ * }$ using $\widehat{\rho }$ as a proposal density: a proposed configuration ${\theta }^{\prime } = T\left( {\theta }_{\mathrm{B}}\right)$ from a given configuration $\theta$ is accepted with probability

$$
\operatorname{acc}\left( {\theta ,{\theta }^{\prime }}\right) = \min \left\lbrack {1,\frac{\widehat{\rho }\left( \theta \right) {\rho }_{ * }\left( {\theta }^{\prime }\right) }{{\rho }_{ * }\left( \theta \right) \widehat{\rho }\left( {\theta }^{\prime }\right) }}\right\rbrack . \tag{3}
$$

This procedure is equivalent to using the transition kernel

$$
{\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right) = \operatorname{acc}\left( {\theta ,{\theta }^{\prime }}\right) \widehat{\rho }\left( {\theta }^{\prime }\right) + \left( {1 - r\left( \theta \right) }\right) \delta \left( {\theta - {\theta }^{\prime }}\right) \tag{4}
$$

where $r\left( \theta \right) = {\int }_{\Theta }\operatorname{acc}\left( {\theta ,{\theta }^{\prime }}\right) \widehat{\rho }\left( {\theta }^{\prime }\right) d{\theta }^{\prime }$. Since ${\pi }_{T}\left( {\theta ,{\theta }^{\prime }}\right)$ is irreducible and aperiodic under the aforementioned conditions on ${\rho }_{ * }$ and $\widehat{\rho }$, its associated chain is ergodic with respect to ${\rho }_{ * }$ (Meyn & Tweedie, 2012).
In addition, the evidence is given by

$$
{Z}_{ * } = {\mathbb{E}}_{{\rho }_{\mathrm{B}}}\left\lbrack \frac{L\left( {T\left( {\theta }_{\mathrm{B}}\right) }\right) {\rho }_{o}\left( {T\left( {\theta }_{\mathrm{B}}\right) }\right) }{\widehat{\rho }\left( {T\left( {\theta }_{\mathrm{B}}\right) }\right) }\right\rbrack . \tag{5}
$$

For the scheme to be efficient, two conditions are required. First, the parametrization of the map $T$ must allow for easy evaluation of the density $\widehat{\rho }$ , which requires easily estimable Jacobian determinants and inverses. This issue has been one of the main foci in the normalizing flow literature (Papamakarios et al., 2021) and is for instance solved using coupling layers (Dinh et al., 2015; 2017). Second, as shown by formula (3), the proposal density $\widehat{\rho }$ must produce samples with statistical weights comparable to the posterior density ${\rho }_{ * }$ to ensure appreciable acceptance rates. This requires training the map $T$ to resemble the optimal ${T}_{ * }$ .
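The evidence estimator (5) is an importance-sampling average over base draws, and is conveniently computed in log space; a minimal sketch (all names hypothetical, with `push_forward` playing the role of $T$ ):

```python
import numpy as np

def log_evidence_estimate(theta_B, push_forward, log_rho_hat, log_likelihood, log_prior):
    """Monte Carlo estimate of log Z_* from Eq. (5): the log of the sample mean of
    the importance weights L(T(theta_B)) rho_o(T(theta_B)) / rho_hat(T(theta_B))."""
    theta = push_forward(theta_B)                            # T(theta_B), theta_B ~ rho_B
    log_w = log_likelihood(theta) + log_prior(theta) - log_rho_hat(theta)
    return np.logaddexp.reduce(log_w) - np.log(len(log_w))   # log-mean-exp of the weights
```

When $\widehat{\rho }$ matches the posterior exactly, every weight equals ${Z}_{ * }$ and the estimator has zero variance; a poorly trained map inflates the variance of this average.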
Algorithm 1 Concurrent MCMC sampling and map training

1: SAMPLETRAIN $\left( {{U}_{ * },T,\{ {\theta }_{i}\left( 0\right) {\} }_{i = 1}^{n},\tau ,{k}_{\max },{k}_{\text{Lang}},\epsilon }\right)$

2: **Inputs:** ${U}_{ * }$ target potential, $T$ initial map, ${\left\{ {\theta }_{i}\left( 0\right) \right\} }_{i = 1}^{n}$ initial chains, $\tau > 0$ time step, ${k}_{\max } \in \mathbb{N}$ total duration, ${k}_{\text{Lang}} \in \mathbb{N}$ number of Langevin steps per NF resampling step, $\epsilon > 0$ map training time step

3: $k = 0$

4: **while** $k < {k}_{\max }$ **do**

5: $\quad$ **for** $i = 1,\ldots ,n$ **do**

6: $\quad\quad$ **if** $\left( {k + 1}\right) \;\operatorname{mod}\;{k}_{\text{Lang}} = 0$ **then**

7: $\quad\quad\quad$ ${\theta }_{\mathrm{B},i}^{\prime } \sim {\rho }_{\mathrm{B}}$

8: $\quad\quad\quad$ ${\theta }_{i}^{\prime } = T\left( {\theta }_{\mathrm{B},i}^{\prime }\right)$ {push-forward via $T$ }

9: $\quad\quad\quad$ ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}^{\prime }$ with prob $\operatorname{acc}\left( {{\theta }_{i}\left( k\right) ,{\theta }_{i}^{\prime }}\right)$ , otherwise ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}\left( k\right)$ {resampling step}

10: $\quad\quad$ **else**

11: $\quad\quad\quad$ ${\theta }_{i}^{\prime } = {\theta }_{i}\left( k\right) - \tau \nabla {U}_{ * }\left( {{\theta }_{i}\left( k\right) }\right) + \sqrt{2\tau }{\eta }_{i}$ with ${\eta }_{i} \sim \mathcal{N}\left( {{0}_{d},{I}_{d}}\right)$ {discretized Langevin step}

12: $\quad\quad\quad$ ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}^{\prime }$ with MALA acceptance prob (or ULA), otherwise ${\theta }_{i}\left( {k + 1}\right) = {\theta }_{i}\left( k\right)$

13: $\quad$ $k \leftarrow k + 1$

14: $\quad$ $\mathcal{L}\left\lbrack T\right\rbrack = - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\log \widehat{\rho }\left( {{\theta }_{i}\left( k\right) }\right)$

15: $\quad$ $T \leftarrow T - \epsilon \nabla \mathcal{L}\left\lbrack T\right\rbrack$ {update the map}

16: **return** ${\left\{ {\theta }_{i}\left( k\right) \right\} }_{k = 0,i = 1}^{{k}_{\max },n},T$

§ 3.
COMPOUNDING LOCAL MCMCS AND GENERATIVE SAMPLING

In Algorithm 1, we present a concurrent sampling/training strategy that synergistically uses $T$ to improve sampling from ${\rho }_{ * }$ and samples obtained from ${\rho }_{ * }$ to train $T$ . Let us describe the different components of the scheme.

**Sampling.** The sampling component of Algorithm 1 alternates between steps of the Metropolis-Hastings procedure using an NF as discussed in Section 2 and steps of a local MCMC sampler (here MALA; line 11), using as potential

$$
{U}_{ * }\left( \theta \right) = - \log L\left( \theta \right) - \log {\rho }_{o}\left( \theta \right) . \tag{6}
$$

Strictly speaking, the second transition kernel does not need to be local; it should, however, have satisfactory acceptance rates early in the training procedure to provide initial data to start up the optimization of $T$ . From a random initialization of $T$ , the parametrized density $\widehat{\rho }$ initially has little overlap with the posterior ${\rho }_{ * }$ , and the moves proposed by the NF have a high probability of being rejected. However, thanks to the data generated by the local sampler, the training of $T$ can be initiated. As training goes on, more and more moves proposed by the NF can be accepted. It is crucial to notice that these moves, generated by pushing forward independent draws from the base distribution ${\rho }_{\mathrm{B}}\left( \theta \right)$ , are non-local and easily mix between modes.

Figure 1. Sampling a mixture of 2 Gaussians in 10d. Top row: Training loss and acceptance rate of the NF non-local moves as a function of iterations. Middle row: Target density and estimation of the relative weight of modes A and B using sampling with $\widehat{\rho }$ . Bottom row: $\widehat{\rho }$ and example chains along training/sampling. (See Appendix A.2 for setup details.)

**Training.**
A standard way of training $T$ is to minimize the Kullback-Leibler divergence from $\widehat{\rho }$ to ${\rho }_{ * }$ . Since we do not have access to samples from ${\rho }_{ * }$ , here we use instead

$$
{D}_{\mathrm{{KL}}}\left( {{\rho }_{k}\parallel \widehat{\rho }}\right) = {C}_{k} - {\int }_{\Theta }\log \widehat{\rho }\left( \theta \right) {\rho }_{k}\left( \theta \right) {d\theta }, \tag{7}
$$

where ${\rho }_{k}$ denotes the density of the MCMC after $k \in \mathbb{N}$ steps and ${C}_{k} = {\int }_{\Theta }\log {\rho }_{k}\left( \theta \right) {\rho }_{k}\left( \theta \right) {d\theta }$ is a constant irrelevant for the optimization of $T$ . In practice, we run $n$ walkers in parallel: denoting their positions at iteration $k$ by ${\left\{ {\theta }_{i}\left( k\right) \right\} }_{i = 1}^{n}$ , we use the following estimator of (7) (minus the constant) as objective for training:

$$
{\mathcal{L}}_{k}^{n}\left\lbrack T\right\rbrack = - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\log \widehat{\rho }\left( {{\theta }_{i}\left( k\right) }\right) . \tag{8}
$$

Figure 2. Radial velocities. From the signal plotted in blue, we draw the noisy observations in red and obtain the initial samples in gray with the Joker algorithm (Price-Whelan et al., 2017).

The training component of Algorithm 1 uses stochastic gradient descent on this loss function to update the parameters of the normalizing flow (line 15). Note that ${\left\{ {\theta }_{i}\left( k\right) \right\} }_{i = 1}^{n}$ are not perfect samples from ${\rho }_{ * }$ to start with, but their quality increases with the number of iterations of the MCMC. Note also that this training strategy is different from the one proposed in (Rezende & Mohamed, 2015), which uses instead the reverse KL divergence of $\widehat{\rho }$ from ${\rho }_{ * }$ : this objective could be combined with the one in (8).
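To make the mechanics of (8) concrete, here is a deliberately toy sketch in which the push-forward density is a Gaussian $\mathcal{N}\left( {\mu ,1}\right)$ with a single learnable parameter $\mu$ standing in for the flow, so the gradient of the loss is available in closed form (all names are illustrative, not the paper's implementation):

```python
import numpy as np

def forward_kl_loss(walkers, mu):
    """Eq. (8) for a push-forward density N(mu, 1), dropping the mu-independent constant:
    -1/n sum_i log rho_hat(theta_i) = 1/(2n) sum_i (theta_i - mu)^2 + const."""
    return 0.5 * np.mean((walkers - mu) ** 2)

def train_step(walkers, mu, lr):
    """One SGD step on the loss; for this toy model grad = -mean(theta_i - mu)."""
    grad = -np.mean(walkers - mu)
    return mu - lr * grad
```

Each step pulls $\mu$ towards the empirical mean of the walker positions, which is exactly the mechanism by which MCMC data trains the flow: the better the walkers cover ${\rho }_{ * }$ , the closer the fitted push-forward gets to the posterior.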
Here we will stick to using (8) as loss, which has the advantage of having approximate input from ${\rho }_{ * }$ from the get-go through the MCMC samples, when the chain is initialized as explained next.

**Initialization.** To initialize MCMC chains, we assume that we have initial data lying in each of the important modes of the posterior ${\rho }_{ * }$ , but require no additional information. That is, we take ${\theta }_{i}\left( 0\right) = {\theta }_{i}$ , where the ${\theta }_{i}$ are initially located in these modes but not necessarily drawn from ${\rho }_{ * }$ . We stress that the method therefore applies in situations where these modes have been located beforehand, for example by doing gradient descent on $L\left( \theta \right) {\rho }_{o}\left( \theta \right)$ from random points in $\Theta$ .

Architecture details of Algorithm 1 are deferred to Appendix A, and questions of convergence to Appendix B.

§ 4. NUMERICAL EXPERIMENTS

§ 4.1. SAMPLING A MIXTURE OF GAUSSIANS IN HIGH DIMENSION

As a first test case of Algorithm 1, we sample a Gaussian mixture with 2 components in 10 dimensions and estimate the relative statistical weights of its two modes. ${}^{1}$ The bottom row of Fig. 1 shows 2d projections of the trajectories of representative chains (in black) from initializations in each of the modes (red stars) as the NF learns to model the target density (blue contours). Running at first only under the local Langevin sampler, the chains progressively mix between modes and capture the difference in statistical weights, which is also reflected by the final map $T$ . Quantitatively, the acceptance rate of moves proposed by the NF reaches $\sim {80}\%$ at the end of training (Fig. 1, top row). The estimator of the relative statistical weights of each mode (the right mode, denoted by $\mathrm{A}$ , is twice as likely as the left mode, denoted by $\mathrm{B}$ ) using Eq. (5) also converges to the exact value within a small statistical error (Fig. 1, middle row).

${}^{1}$ In this experiment and the one that follows we used unadjusted Langevin dynamics (ULA) rather than MALA because the time steps were sufficiently small to ensure a high acceptance rate.

Figure 3. Comparing samples from Algorithm 1 (top row in blue, 2d projections) with the samples from Joker (in green) and the initialization samples (in pink). The posterior densities marginalized over two parameters are shown in the bottom row.

§ 4.2. RADIAL VELOCITY OF AN EXOPLANET

Next we apply our method to the Bayesian sampling of radial velocity parameters in a model close to the one studied by Price-Whelan et al. (2017) for a star-exoplanet system. The model has 4 parameters: an offset velocity ${v}_{0}$ , an amplitude $K$ , a period $P$ and a phase ${\phi }_{0}$ . Denoting the vector of parameters by $\theta = \left( {{v}_{0},K,{\phi }_{0},\ln P}\right) \in \Theta \subset {\mathbb{R}}^{4}$ , the radial velocity is given by

$$
v\left( {t;\theta }\right) = {v}_{0} + K\cos \left( {{\Omega t} + {\phi }_{0}}\right) \tag{9}
$$

with $\Omega = {2\pi }/P$ . From a set of observations $D = {\left\{ {v}_{k},{t}_{k}\right\} }_{k = 1}^{N}$ , the goal is to sample the posterior distribution over $\theta$ . Following (Price-Whelan et al., 2017), we assume a Gaussian likelihood $L\left( \theta \right) = \mathop{\prod }\limits_{{k = 1}}^{N}\mathcal{N}\left( {{v}_{k};v\left( {{t}_{k};\theta }\right) ,{\sigma }_{\text{obs}}^{2}}\right)$ , with known variance ${\sigma }_{\text{obs}}^{2}$ , and the prior distributions

$$
\ln P \sim \mathcal{U}\left( {\ln {P}_{\min },\ln {P}_{\max }}\right) ,\;{\phi }_{0} \sim \mathcal{U}\left( {0,{2\pi }}\right) , \tag{10}
$$

$$
K \sim \mathcal{N}\left( {{\mu }_{K},{\sigma }_{K}^{2}}\right) ,\;{v}_{0} \sim \mathcal{N}\left( {0,{\sigma }_{{v}_{0}}^{2}}\right) .
$$

We sample $N = 6$ noisy observations at different times ${t}_{k}$ (red diamonds in Fig. 2) from a ground-truth radial velocity with parameters ${\theta }_{0}$ (dashed blue line). Using one iteration of the accept-reject Joker algorithm (Price-Whelan et al., 2017) with ${10}^{3}$ samples from the prior distributions, we obtain 11 sets of likely parameters (corresponding to the gray lines in Fig. 2), which we will use as starting points for the MCMC chains in Algorithm 1. Note that, to ensure a minimum acceptance rate of $\sim 1\%$ , the Joker samples the priors of $P$ and ${\phi }_{0}$ only and computes the maximum likelihood value for the "linear parameters" $K$ and ${v}_{0}$ .

We assess the quality of sampling after ${10}^{4}$ iterations of Algorithm 1 in Fig. 3, looking at all possible 2d projections of the parameter space. The samples from Algorithm 1 (top row, in blue, final acceptance rate of ${60}\%$ , see Fig. 4) generally cover the modes of the marginal posterior distribution (second row) well, extending far beyond the initial samples (top row, in pink), with the exception of a light mode in the neighborhood of ${v}_{0} = 0$ and ${\phi }_{0} = 0$ in which no chain was initialized. This illustrates that successfully sampling a basin of interest with the proposed method requires prior knowledge of its rough location. For comparison, we also report samples accepted by the Joker algorithm from an initial draw of ${10}^{6}$ samples (top row, in green). Because of the maximum likelihood step along $K$ and ${v}_{0}$ mentioned above, this strategy does not appropriately sample the posterior along these two dimensions.

§ 5. CONCLUSION

Our results show that normalizing flows can be exploited to augment MCMC schemes used in Bayesian inference with a nonlocal transport component that significantly enhances mixing.
By design, the method blends MCMC with an optimization component similar to that used in variational inference (Blei et al., 2017), and it would be interesting to investigate further how both approaches compare on challenging applications with complex posteriors on high-dimensional parameter spaces.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/s-Fg3dXQzyS/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/s-Fg3dXQzyS/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b8937ec6cb5dbb70028fd7bbc18b76c75afd678
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/s-Fg3dXQzyS/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,611 @@

## Rectangular Flows for Manifold Learning

Anonymous Authors ${}^{1}$

## Abstract

Normalizing flows allow for tractable maximum likelihood estimation of their parameters but are incapable of modelling low-dimensional manifold structure in observed data. Flows which injectively map from low- to high-dimensional space provide promise for fixing this issue, but the resulting likelihood-based objective becomes more challenging to evaluate. Current approaches avoid computing the entire objective - which may induce pathological behaviour - or assume the manifold structure is known beforehand and thus are not widely applicable. Instead, we propose two methods relying on tricks from automatic differentiation and numerical linear algebra to either evaluate or approximate the full likelihood objective, performing end-to-end manifold learning and density estimation. We study the trade-offs between our methods, demonstrate improved results over previous injective flows, and show promising results on out-of-distribution detection.

## 1.
Introduction + +Normalizing Flows (NFs) have recently become a staple of generative modelling, and particularly for density estimation (see Papamakarios et al. (2019) or Kobyzev et al. (2020) for a review). Here, we typically have access to a set of points in some high-dimensional space ${\mathbb{R}}^{D}$ , which NFs model as the pushforward of some simple distribution on ${\mathbb{R}}^{D}$ through a parametrized bijection. Although this construction can admit tractable maximum likelihood training, the learned density has $D$ -dimensional support; this directly contradicts the manifold hypothesis (Bengio et al., 2013) which states that high-dimensional data lives on a lower-dimensional manifold embedded in ambient space. + +Instead, we may consider injective flows to circumvent this misspecification. These now map a random variable on ${\mathbb{R}}^{d}$ into ${\mathbb{R}}^{D}$ , defining a distribution on some $d$ -dimensional manifold embedded in ${\mathbb{R}}^{D}$ . We have access to a change-of-variable formula as in NFs, but the volume-change term now becomes much more difficult to evaluate. While there have been efforts towards training flows where the resulting distribution is supported on a low-dimensional manifold (e.g. (Rezende et al., 2020; Brehmer & Cranmer, 2020)), these approaches either assume the manifold is known beforehand or otherwise avoid the volume-change term. Both of these are undesirable: in the former, we generally do not know the manifold structure a priori; the latter can result in learning manifolds to which it is difficult to assign density. + +In this work, we show that likelihood-based density estimation for injective flows can be made tractable. We propose two methods for backpropagating through the injective volume-change term which rely on careful application of forward- and backward-mode automatic differentiation. 
The first method involves exact evaluation of this term and its gradient, which incurs a higher memory cost; the second uses conjugate gradients and Hutchinson's trace estimator to obtain unbiased stochastic gradient estimates. Unlike previous work, our methods do not need the data manifold to be specified beforehand, and simultaneously estimate this manifold along with the distribution on it end-to-end, thus enabling maximum likelihood training to occur. Ours are the first methods to backpropagate through the volume-change term in ambient dimensions $D$ approaching 1,000. We study the trade-off between memory and variance introduced by our methods and show improvements over injective flow baselines for density estimation. We also show that injective flows obtain state-of-the-art performance for likelihood-based Out-of-Distribution (OoD) detection.

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

## 2. Background and Motivation

### 2.1. Rectangular Normalizing Flows

Standard NFs unrealistically result in the learned density ${p}_{X}$ having $D$ -dimensional support. To overcome this, we first follow Brehmer & Cranmer (2020), where an injective mapping ${g}_{\phi } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{D}$ with $d < D$ is constructed. Here, $Z \in {\mathbb{R}}^{d}$ is the low-dimensional variable used to model the data as $X \mathrel{\text{:=}} {g}_{\phi }\left( Z\right)$ . A well-known result from differential geometry (Krantz & Parks, 2008) provides a change-of-variable formula for $x \in {\mathcal{M}}_{\phi } \mathrel{\text{:=}} \left\{ {{g}_{\phi }\left( z\right) : z \in {\mathbb{R}}^{d}}\right\}$ :

$$
{p}_{X}\left( x\right) = {p}_{Z}\left( {z}_{\phi }\right) {\left| \det \mathbf{J}{\left\lbrack {g}_{\phi }\right\rbrack }^{\top }\left( {z}_{\phi }\right) \mathbf{J}\left\lbrack {g}_{\phi }\right\rbrack \left( {z}_{\phi }\right) \right| }^{-1/2}, \tag{1}
$$

where ${z}_{\phi } \mathrel{\text{:=}} {g}_{\phi }^{-1}\left( x\right)$ , and ${p}_{X}\left( x\right) = 0$ for $x \notin {\mathcal{M}}_{\phi }$ . The Jacobian-transpose-Jacobian (JtJ) determinant now characterizes the change in volume from $Z$ to $X$ . We make several relevant observations: (i) The Jacobian matrix $\mathbf{J}\left\lbrack {g}_{\phi }\right\rbrack \left( {{g}_{\phi }^{-1}\left( x\right) }\right) \in {\mathbb{R}}^{D \times d}$ is no longer a square matrix, and we thus refer to these flows as rectangular. (ii) ${g}_{\phi }^{-1} : {\mathcal{M}}_{\phi } \rightarrow {\mathbb{R}}^{d}$ is only properly defined on ${\mathcal{M}}_{\phi }$ and not ${\mathbb{R}}^{D}$ , and ${p}_{X}$ is now supported on the $d$ -dimensional manifold ${\mathcal{M}}_{\phi }$ . (iii) This is not a density with respect to the Lebesgue measure; the dominating measure is the Riemannian measure on the manifold ${\mathcal{M}}_{\phi }$ (Pennec, 2006). (iv) When $d = D$ , we recover the standard change-of-variable formula.

Since data points $x$ will almost surely not lie exactly on ${\mathcal{M}}_{\phi }$ , we use a left inverse ${g}_{\phi }^{ \dagger } : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{d}$ such that ${g}_{\phi }^{ \dagger }\left( {{g}_{\phi }\left( z\right) }\right) = z$ for all $z \in {\mathbb{R}}^{d}$ in place of ${g}_{\phi }^{-1}$ . This exists by injectivity and is properly defined on ${\mathbb{R}}^{D}$ , unlike ${g}_{\phi }^{-1}$ which only exists on ${\mathcal{M}}_{\phi }$ .
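To make (1) concrete, here is a small sketch that evaluates the log-density for a toy injective map with a closed-form Jacobian (an assumption made purely for illustration; here $d = 1$ , $D = 2$ , and $g\left( z\right) = \left( {z,{z}^{2}}\right)$ , a parabola embedded in ${\mathbb{R}}^{2}$ ):

```python
import numpy as np

def log_p_x(z, log_p_z, jacobian):
    """Log-density from Eq. (1) at x = g(z): log p_Z(z) - 0.5 * log det(J^T J)."""
    J = jacobian(z)                                  # the D x d Jacobian of g at z
    sign, logdet = np.linalg.slogdet(J.T @ J)
    assert sign > 0, "J^T J must be positive definite for an injective immersion"
    return log_p_z(z) - 0.5 * logdet

# g(z) = (z, z^2): J = (1, 2z)^T, so J^T J = 1 + 4 z^2 (a 1 x 1 matrix)
toy_jacobian = lambda z: np.array([[1.0], [2.0 * z]])
log_std_normal = lambda z: -0.5 * z ** 2 - 0.5 * np.log(2.0 * np.pi)
```

At $z = 0$ the parabola is locally flat, ${J}^{\top }J = 1$ , and the volume-change term vanishes; away from the origin the map stretches lengths, and the density on the manifold is correspondingly deflated.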
Setting ${z}_{\phi } \mathrel{\text{:=}} {g}_{\phi }^{ \dagger }\left( x\right)$ in (1) is equivalent to projecting $x$ onto ${\mathcal{M}}_{\phi }$ as $x \leftarrow {g}_{\phi }\left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right)$ , and then evaluating the density from (1) at the projected point. + +Now, ${g}_{\phi }$ is injectively constructed as follows: + +$$ +{g}_{\phi } = {\widetilde{f}}_{\theta } \circ \operatorname{pad} \circ {h}_{\eta }\;\text{ and }\;{g}_{\phi }^{ \dagger } = {h}_{\eta }^{-1} \circ {\operatorname{pad}}^{ \dagger } \circ {\widetilde{f}}_{\theta }^{-1}, \tag{2} +$$ + +where ${\widetilde{f}}_{\theta } : {\mathbb{R}}^{D} \rightarrow {\mathbb{R}}^{D}$ and ${h}_{\eta } : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ are both square flows, $\phi \mathrel{\text{:=}} \left( {\theta ,\eta }\right)$ , and pad : ${\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{D}$ and pad ${}^{ \dagger } : {\mathbb{R}}^{D} \rightarrow$ ${\mathbb{R}}^{d}$ are defined as $\operatorname{pad}\left( z\right) = \left( {z,\mathbf{0}}\right)$ and ${\operatorname{pad}}^{ \dagger }\left( {z,{z}^{\prime }}\right) = z$ , where $\mathbf{0},{z}^{\prime } \in {\mathbb{R}}^{D - d}$ . We can thus rewrite (1) using this specific form of ${g}_{\phi }$ , with details in Appendix B. + +Constructing flows with a tractable volume-change term is more challenging than in the standard case. Brehmer & Cranmer (2020) thus propose a two-step training procedure, wherein ${f}_{\theta } \mathrel{\text{:=}} {\widetilde{f}}_{\theta } \circ$ pad and ${h}_{\eta }$ are trained separately, to avoid this calculation. Training ${f}_{\theta }$ involves minimizing the reconstruction error ${\begin{Vmatrix}x - {f}_{\theta }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \end{Vmatrix}}_{2}^{2}$ , which encourages the observed data to lie on ${\mathcal{M}}_{\theta }$ . 
Then, since ${h}_{\eta }$ will not appear in the determinant term in (1), it can be taken to be any $d$ -dimensional NF and, fixing $\theta$ , $\eta$ can be learned by maximizing its likelihood over the lower-dimensional points ${\left\{ {f}_{\theta }^{ \dagger }\left( {x}_{i}\right) \right\} }_{i}$ . In practice, gradient steps in $\theta$ and $\eta$ are alternated. This procedure circumvents evaluation of the JtJ term, but we soon show that this comes with downsides.

### 2.2. Motivation

**Dimensionality issues** Problems originating from dimensionality mismatch have been observed throughout the deep generative modelling literature. Dai & Wipf (2019) show that using powerful variational autoencoders supported on ${\mathbb{R}}^{D}$ to model data living in a low-dimensional manifold results in the manifold itself being learned, but not the distribution on it. Cornish et al. (2020) demonstrate the drawbacks of using normalizing flows for estimating the density of topologically-complex data, but still model the support as being $D$ -dimensional; Behrmann et al. (2021) provide a related result characterizing non-invertibility of trained flows. This body of work strongly motivates the development of models whose support has matching topology - including dimension - to that of the true data distribution.

**Manifold flows** A challenge to overcome for NFs on manifolds is the JtJ term; this is currently handled in one of two ways. The first assumes the manifold is known beforehand (Rezende et al., 2020), limiting its general applicability to low-dimensional data where the true manifold can be known. The second group circumvents the computation of this term entirely; this includes the aforementioned Brehmer & Cranmer (2020). Kumar et al. (2020) use a loose lower bound of the log-likelihood and do not explicitly enforce injectivity, so that the change-of-variable almost surely does not hold. Cunningham et al.
(2020) propose to convolve the manifold distribution with Gaussian noise, which results in the model having high-dimensional support.

**Why optimize this term?** We can imagine a situation where, even if ${f}_{\theta }$ maps to the correct manifold, it might unnecessarily change volume in such a way that makes correctly learning ${h}_{\eta }$ more challenging than it needs to be. For example, there is nothing in the two-step objective encouraging ${f}_{\theta }$ to learn a manifold parametrized with a well-controlled "speed", which we observe to be an issue in Figure 2 of the experiments. This is but one example of a failure which could have been avoided by learning the manifold in a density-aware fashion that includes the JtJ term.

## 3. Rectangular Flow Maximum Likelihood

### 3.1. Our Optimization Objective

We have noted that including the JtJ term in the optimization is sensible, but (1) corresponds to the density of the projection of $x$ onto ${\mathcal{M}}_{\theta }$ . Thus, optimizing only this term would not result in learning ${\mathcal{M}}_{\theta }$ such that observed data lies on it; it would only encourage projected data points to have high likelihood.
Instead, we use the KKT conditions (Karush, 1939; Kuhn & Tucker,1951) to maximize the following Lagrangian in $\phi$ subject to the constraint that the reconstruction error should be smaller than some threshold: + +$$ +\log {p}_{Z}\left( {z}_{\phi }\right) - \log \left| {\det \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {z}_{\phi }\right) }\right| \tag{3} +$$ + +$$ +- \frac{1}{2}\log \det {J}_{\theta }^{\top }\left( x\right) {J}_{\theta }\left( x\right) - \beta {\begin{Vmatrix}x - {f}_{\theta }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \end{Vmatrix}}_{2}^{2}, +$$ + +where we treat $\beta > 0$ as a hyperparameter, and denote $\mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {{f}_{\theta }^{ \dagger }\left( x\right) }\right)$ as ${J}_{\theta }\left( x\right)$ and again ${z}_{\phi } \mathrel{\text{:=}} {g}_{\phi }^{ \dagger }\left( x\right)$ for simplicity. We now make a technical but relevant observation about our objective: since our likelihoods are Radon-Nikodym derivatives with respect to the Riemannian measure on ${\mathcal{M}}_{\theta }$ , different values of $\theta$ will result in different dominating measures. One should thus be careful to compare likelihoods for models with different values of $\theta$ . However, thanks to the smoothness of the objective over $\theta$ , we should expect likelihoods for values of $\theta$ which are "close enough" to be comparable for practical purposes. In other words, comparisons remain reasonable locally, and the gradient of the volume-change term still contains information that helps learning ${\mathcal{M}}_{\theta }$ in such a way that ${h}_{\eta }$ can easily learn a density on the pulled-back dataset ${\left\{ {f}_{\theta }^{ \dagger }\left( {x}_{i}\right) \right\} }_{i}$ . + +### 3.2. 
Optimizing our Objective: Stochastic Gradients + +All the terms in (3) are straightforward to evaluate and back-propagate through except for the third one; in this section we show how to obtain unbiased stochastic estimates of its gradient. We now drop the dependence of the Jacobian on $x$ from our notation and write ${J}_{\theta }$ , knowing that the end computation will be parallelized over a batch ${\left\{ {x}_{i}\right\} }_{i}$ . We assume access to an efficient matrix-vector product routine, i.e. computing ${J}_{\theta }^{\top }{J}_{\theta }\epsilon$ can be quickly achieved for any $\epsilon \in {\mathbb{R}}^{d}$ . We elaborate on how we obtain these matrix-vector products in the next section. It is a well known fact from matrix calculus (Petersen & Pedersen, 2008) that: + +$$ +\frac{\partial }{\partial {\theta }_{j}}\log \det {J}_{\theta }^{\top }{J}_{\theta } = \operatorname{tr}\left( {{\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1}\frac{\partial }{\partial {\theta }_{j}}{J}_{\theta }^{\top }{J}_{\theta }}\right) , \tag{4} +$$ + +where tr denotes the trace operator and ${\theta }_{j}$ is the $j$ -th element of $\theta$ . Next, we use Hutchinson’s trace estimator (Hutchinson, 1989), which says that for any matrix $M \in {\mathbb{R}}^{d \times d},\operatorname{tr}\left( M\right) =$ ${\mathbb{E}}_{\epsilon }\left\lbrack {{\epsilon }^{\top }{M\epsilon }}\right\rbrack$ for any ${\mathbb{R}}^{d}$ -valued random variable $\epsilon$ with zero mean and identity covariance matrix. 
We can thus obtain an unbiased stochastic estimate of our gradient as:

$$
\frac{\partial }{\partial {\theta }_{j}}\log \det {J}_{\theta }^{\top }{J}_{\theta } \approx \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{\epsilon }_{k}^{\top }{\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1}\frac{\partial }{\partial {\theta }_{j}}{J}_{\theta }^{\top }{J}_{\theta }{\epsilon }_{k}, \tag{5}
$$

where ${\left\{ {\epsilon }_{k}\right\} }_{k}$ are typically sampled either from standard Gaussian or Rademacher distributions. Naïve computation of the above estimate remains intractable, as it would require explicitly constructing ${J}_{\theta }^{\top }{J}_{\theta }$ . Fortunately, the ${J}_{\theta }^{\top }{J}_{\theta }\epsilon$ terms can be trivially obtained using the given matrix-vector product routine, avoiding the construction of ${J}_{\theta }^{\top }{J}_{\theta }$ , and then $\partial /\partial {\theta }_{j}{J}_{\theta }^{\top }{J}_{\theta }\epsilon$ follows by taking the gradient w.r.t. $\theta$ .

Yet there is still the issue of computing ${\epsilon }^{\top }{\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1} = {\left\lbrack {\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1}\epsilon \right\rbrack }^{\top }$ . We use Conjugate Gradients (CG) (Nocedal & Wright, 2006) to achieve this. CG is an iterative method to solve problems of the form ${Au} = \epsilon$ for given $A \in {\mathbb{R}}^{d \times d}$ (in our case $A = {J}_{\theta }^{\top }{J}_{\theta }$ ) and $\epsilon \in {\mathbb{R}}^{d}$ ; we include the CG algorithm in Appendix D for completeness. CG has several important properties. First, it is known to recover the solution (assuming exact arithmetic) after at most $d$ steps, which means we can evaluate ${A}^{-1}\epsilon$ . The solution converges exponentially (in the number of iterations $\tau$ ) to the true value (Shewchuk et al., 1994), so often $\tau \ll d$ iterations are sufficient for accuracy to many decimal places.
Second, CG only requires a method to compute matrix-vector products against $A$ , and does not require access to $A$ itself. One such product is performed at each iteration, and CG thus requires at most $d$ of these products, although again $\tau \ll d$ products usually suffice. This results in $\mathcal{O}\left( {\tau {d}^{2}}\right)$ solve complexity, less than the $\mathcal{O}\left( {d}^{3}\right)$ required by direct inversion. We denote ${A}^{-1}\epsilon$ computed with conjugate gradients as $\mathrm{{CG}}\left( {A;\epsilon }\right)$ . We can then compute the estimator from (5) as:

$$
\frac{\partial }{\partial {\theta }_{j}}\log \det {J}_{\theta }^{\top }{J}_{\theta } \approx \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\operatorname{CG}{\left( {J}_{\theta }^{\top }{J}_{\theta };{\epsilon }_{k}\right) }^{\top }\frac{\partial }{\partial {\theta }_{j}}{J}_{\theta }^{\top }{J}_{\theta }{\epsilon }_{k}. \tag{6}
$$

In practice, we implement this term by applying a stop_gradient on the CG step, thereby allowing us to avoid implementing a custom backward pass. We add this term into (3) and write out in full the contribution of a point $x$ to the training objective in the Appendix (Equation (12)).

### 3.3. AD Considerations: The Exact Method and the Forward-Backward AD Trick

Here we derive the routine for vector products against ${J}_{\theta }^{\top }{J}_{\theta }$ , along with an exact method that avoids Hutchinson's estimator but has increased memory requirements. We will use commonly-known properties of AD to derive our approach, which we review in Appendix E. First, consider the problem of explicitly constructing ${J}_{\theta }$ . This construction can then be used to evaluate ${J}_{\theta }^{\top }{J}_{\theta }$ and exactly compute its log determinant, thus avoiding having to use stochastic gradients as in the previous section. We refer to this procedure as the exact method.
Naïvely, one might try to explicitly construct ${J}_{\theta }$ using only backward-mode AD, which would require $D$ vector-Jacobian products (vjps) of the form ${v}^{\top }{J}_{\theta }$, one per basis vector $v \in {\mathbb{R}}^{D}$. A better way to explicitly construct ${J}_{\theta }$ is with forward-mode AD, which only requires $d$ Jacobian-vector products (jvps) ${J}_{\theta }\epsilon$, again one per basis vector $\epsilon \in {\mathbb{R}}^{d}$. We use a custom implementation of forward-mode AD in the popular PyTorch (Paszke et al., 2019) library ${}^{1}$ for the exact method, as well as for the forward-backward AD trick described below.

We now explain how to combine forward- and backward-mode AD to obtain efficient matrix-vector products against ${J}_{\theta }^{\top }{J}_{\theta }$, in order to obtain the tractable gradient estimates from the previous section. Note that $v \mathrel{\text{:=}} {J}_{\theta }\epsilon$ can be computed with a single jvp call, and then ${J}_{\theta }^{\top }{J}_{\theta }\epsilon = {\left\lbrack {v}^{\top }{J}_{\theta }\right\rbrack }^{\top }$ can be computed efficiently with a single additional vjp call. We refer to this way of computing matrix-vector products against ${J}_{\theta }^{\top }{J}_{\theta }$ as the forward-backward AD trick. Note that (12) requires $K\left( {\tau + 1}\right)$ such matrix-vector products, which might appear less efficient, as this can exceed the $d$ jvps required by the exact method. However, the stochastic method is much more memory-efficient than its exact counterpart when optimizing: of the $K\left( {\tau + 1}\right)$ matrix-vector products needed to evaluate (12), only $K$ require gradients with respect to $\theta$. Thus only $K$ jvps and $K$ vjps, along with their intermediate steps, must be stored in memory over a training step.
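The forward-backward trick can be sketched matrix-free in NumPy with hand-written jvp/vjp routines standing in for AD. The toy map $f(z) = W_2 \tanh(W_1 z)$ and all names below are illustrative, not part of the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D, H = 3, 8, 6
W1 = rng.standard_normal((H, d))
W2 = rng.standard_normal((D, H))

def f(z):
    # toy smooth map from R^d to R^D, standing in for f_theta
    return W2 @ np.tanh(W1 @ z)

def jvp(z, v):
    # forward-mode product J_f(z) @ v, computed without forming J_f
    s = 1.0 - np.tanh(W1 @ z) ** 2
    return W2 @ (s * (W1 @ v))

def vjp(z, u):
    # backward-mode product J_f(z)^T @ u, again matrix-free
    s = 1.0 - np.tanh(W1 @ z) ** 2
    return W1.T @ (s * (W2.T @ u))

def jtj_mvp(z, eps):
    # forward-backward trick: J^T J eps with one jvp followed by one vjp
    return vjp(z, jvp(z, eps))

z = rng.standard_normal(d)
eps = rng.standard_normal(d)

# "exact method": build J explicitly, one jvp per basis vector of R^d
J = np.stack([jvp(z, e) for e in np.eye(d)], axis=1)
```

Checking `jtj_mvp(z, eps)` against the explicitly constructed `J.T @ J @ eps` confirms that the two routes agree, while the trick never materializes a $D \times d$ matrix.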
In contrast, the exact method requires gradients for every one of its $d$ jvp computations, which means storing these, along with their intermediate steps, in memory.

---

${}^{1}$ PyTorch has a forward-mode AD implementation which relies on the "double backward" trick, which is known to be memory-inefficient. See https://j-towns.github.io/2017/06/12/A-new-trick.html for a description.

---

Our proposed methods thus offer a memory-variance trade-off. Increasing $K$ in the stochastic method results in larger memory requirements, which imply longer training times, as the batch size must be set lower. On the other hand, the larger the memory cost, the smaller the variance of the gradient. The same holds for the exact method, which yields exact gradients at the cost of increased memory requirements (as long as $K \ll d$; if $K$ is large enough, the stochastic method should never be used over the exact one).

## 4. Experiments

We compare our methods against that of Brehmer & Cranmer (2020), and study the memory vs. variance trade-off. We include all experimental details in Appendix H. Figure 2 shows how the two-step method (TS) correctly recovers the manifold, but not the distribution on it, when trying to learn a simple von Mises ground truth distribution on the unit circle in ${\mathbb{R}}^{2}$, which our method (ML) handily recovers.

We also compare the methods on the tabular datasets used by Papamakarios et al. (2017), along with MNIST and FMNIST.
Due to space constraints, we include our results in Appendix A, where we show that: (i) our maximum likelihood methods better recover the target distribution, as measured by FID score (Heusel et al., 2017); (ii) our stochastic version with $K = 1$ is competitive against its more memory-hungry alternatives; and (iii) rectangular flows perform very well for OoD detection. In particular, they assign higher likelihoods to FMNIST than to MNIST when trained on the former, contrary to what has been observed in previous NFs literature (Nalisnick et al., 2019), as can be seen in Figure 1.

![01963e34-b00a-708a-a0ec-96108121a19a_3_509_1541_325_353_0.jpg](images/01963e34-b00a-708a-a0ec-96108121a19a_3_509_1541_325_353_0.jpg)

Figure 1. OoD detection with RNFs-ML (exact).

![01963e34-b00a-708a-a0ec-96108121a19a_3_943_191_612_911_0.jpg](images/01963e34-b00a-708a-a0ec-96108121a19a_3_943_191_612_911_0.jpg)

Figure 2. Left column: RNFs-ML (exact) (top), von Mises ground truth (middle), and RNFs-TS (bottom). Right column: speed at which ${f}_{{\theta }^{ * }}$ maps to ${\mathcal{M}}_{{\theta }^{ * }}$ (measured as the ${l}_{2}$ distance between uniformly spaced consecutive points in $\mathbb{R}$ mapped through ${f}_{{\theta }^{ * }}$) for RNFs-ML (exact) (top), RNFs-TS (bottom), and the distribution ${h}_{\eta }$ has to learn in order to recover the ground truth, fixing ${\theta }^{ * }$ (middle). We can see that RNFs-ML map from low to high dimensions at a more constant speed, thus providing a simpler $z$ distribution for ${h}_{\eta }$ to learn. RNFs-TS map at a higher speed towards the top of the circle, which impacts density estimates.

## 5. Conclusions

In this paper we argue for the importance of likelihood-based training of rectangular flows, and introduce two methods for doing so. We study the benefits of our methods, and empirically show that they are preferable to current alternatives.
We anticipate improvements to our methods with more powerful flow architectures than Real NVP, along with advancements in specifying flow models with more flexible topological properties.

## References

Alemi, A. A., Fischer, I., Dillon, J. V., and Murphy, K. Deep variational information bottleneck. ICLR, 2017.

Alemi, A. A., Fischer, I., and Dillon, J. V. Uncertainty in the variational information bottleneck. arXiv preprint arXiv:1807.00906, 2018.

Baydin, A. G., Pearlmutter, B. A., Radul, A. A., and Siskind, J. M. Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18, 2018.

Behrmann, J., Vicol, P., Wang, K.-C., Grosse, R., and Jacobsen, J.-H. Understanding and mitigating exploding inverses in invertible neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 1792-1800. PMLR, 2021.

Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Brehmer, J. and Cranmer, K. Flows for simultaneous manifold learning and density estimation. In Advances in Neural Information Processing Systems, volume 33, 2020.

Choi, H., Jang, E., and Alemi, A. A. WAIC, but why? Generative ensembles for robust anomaly detection. arXiv preprint arXiv:1810.01392, 2018.

Cornish, R., Caterini, A., Deligiannidis, G., and Doucet, A. Relaxing bijectivity constraints with continuously indexed normalising flows. In International Conference on Machine Learning, pp. 2133-2143. PMLR, 2020.

Cunningham, E., Zabounidis, R., Agrawal, A., Fiterau, I., and Sheldon, D. Normalizing flows across dimensions. arXiv preprint arXiv:2006.13070, 2020.

Dai, B. and Wipf, D. Diagnosing and enhancing VAE models. ICLR, 2019.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. ICLR, 2017.

Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G.
Neural spline flows. In Advances in Neural Information Processing Systems, volume 32, 2019.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, volume 30, 2017.

Hutchinson, M. F. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 18(3):1059-1076, 1989.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456. PMLR, 2015.

Karush, W. Minima of functions of several variables with inequalities as side constraints. M.Sc. Dissertation, Dept. of Mathematics, Univ. of Chicago, 1939.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. ICLR, 2015.

Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Krantz, S. G. and Parks, H. R. Geometric integration theory. Springer Science & Business Media, 2008.

Kuhn, H. W. and Tucker, A. W. Nonlinear programming. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 481-492. University of California Press, Berkeley, CA, 1951.

Kumar, A., Poole, B., and Murphy, K. Regularized autoencoders via relaxed injective probability flow. In International Conference on Artificial Intelligence and Statistics, pp. 4292-4301. PMLR, 2020.

Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. ICLR, 2019.

Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and Lakshminarayanan, B. Do deep generative models know what they don't know? ICLR, 2019.

Nocedal, J. and Wright, S. Numerical optimization.
Springer Science & Business Media, 2006.

Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, volume 30, 2017.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, volume 32, 2019.

Pennec, X. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. Journal of Mathematical Imaging and Vision, 25(1):127-154, 2006.

Petersen, K. B. and Pedersen, M. S. The matrix cookbook, October 2008. URL http://www2.imm.dtu.dk/pubdb/p.php?3274. Version 20081110.

Ren, J., Liu, P. J., Fertig, E., Snoek, J., Poplin, R., Depristo, M., Dillon, J., and Lakshminarayanan, B. Likelihood ratios for out-of-distribution detection. In Advances in Neural Information Processing Systems, volume 32, 2019.

Rezende, D. J., Papamakarios, G., Racaniere, S., Albergo, M., Kanwar, G., Shanahan, P., and Cranmer, K. Normalizing flows on tori and spheres. In International Conference on Machine Learning, pp. 8083-8092. PMLR, 2020.

Shewchuk, J. R. An introduction to the conjugate gradient method without the agonizing pain, 1994.

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.

## A. Main Experimental Results

In all our experiments, we use the Real NVP (Dinh et al., 2017) architecture for all flows, except that we do not use batch normalization (Ioffe & Szegedy, 2015), as it causes issues with vjp computations. All comparisons nonetheless remain fair; we include a detailed explanation of this phenomenon in Appendix F, along with all experimental details in Appendix H. Throughout, we use the labels RNFs-ML for our maximum likelihood training method, RNFs-TS for the two-step method, and RNFs for rectangular NFs in general. For most runs, we found it useful to anneal the likelihood term(s): at the beginning of training we optimize only the reconstruction term, and then slowly incorporate the other terms. This likelihood annealing procedure helped avoid local optima where the manifold is not recovered (large reconstruction error) but the likelihood of projected data is high.

### A.1. Simulated Data

We consider a simulated dataset where we have access to the ground truth, which allows us to empirically verify the deficiencies of RNFs-TS. We use a von Mises distribution, which is supported on the one-dimensional unit circle in ${\mathbb{R}}^{2}$. Figure 2 shows this distribution, along with its estimates from RNFs-ML (exact) and RNFs-TS. As previously observed, RNFs-TS correctly approximate the manifold, but fail to learn the right distribution on it. In contrast, we can see that RNFs-ML, by virtue of including the Jacobian-transpose-Jacobian term in the optimization, manage to recover both the manifold and the distribution on it (top left panel), while also resulting in an easier-to-learn low-dimensional distribution (middle right panel) thanks to ${f}_{{\theta }^{ * }}$ mapping to ${\mathcal{M}}_{{\theta }^{ * }}$ at a more consistent speed (top right panel). We do point out that, while the results presented here are representative of usual runs for both methods, we also had runs with different results, which we include in Appendix H.
We finish with the observation that, even though the line and the circle are not homeomorphic, and thus RNFs are not perfectly able to recover the support, they manage to adequately approximate it.

### A.2. Tabular Data

We now turn our attention to the tabular datasets used by Papamakarios et al. (2017), now a common benchmark for NFs as well. As previously mentioned, one should be careful when comparing models with different supports, as we cannot rely on test likelihoods as a metric. We take inspiration from the FID score (Heusel et al., 2017), which is commonly used to evaluate the quality of generated images when likelihoods are not available. The FID score compares the first and second moments of a well-chosen statistic of the model and data distributions using the squared Wasserstein-2 metric (between Gaussians). Instead of using the last hidden layer of a pre-trained classifier, as is often done for images, we take the statistic to be the data itself: in other words, our metric compares the mean and covariance of generated data against those of observed data with the same squared Wasserstein-2 metric. We include the mathematical formulas for computing both FID and our modified version for tabular data in Appendix G. We use early stopping with our FID-like score across all models. Our results are summarized in Table 1, where we can see that RNFs-ML consistently do a better job at recovering the underlying distribution. Once again, these results emphasize the benefits of including the Jacobian-transpose-Jacobian in the objective. Interestingly, except for HEPMASS, the results from our stochastic version with $K = 1$ are not significantly exceeded by the exact version or by using a larger value of $K$, suggesting that the added variance does not result in decreased empirical performance. We highlight that no tuning was done (except on GAS, for which we changed $d$ from 4 to 2); RNFs-ML outperform RNFs-TS out-of-the-box here (details are in Appendix H).
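Concretely, the score reduces to the squared Wasserstein-2 distance $\lVert \mu_1 - \mu_2 \rVert^2 + \operatorname{tr}\big(\Sigma_1 + \Sigma_2 - 2(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2})^{1/2}\big)$ between moment-matched Gaussians. A minimal NumPy sketch, with function names of our choosing and the PSD matrix square root taken via an eigendecomposition:

```python
import numpy as np

def psd_sqrt(M):
    # symmetric PSD square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def fid_like(x, y):
    """Squared Wasserstein-2 distance between Gaussians fitted to two samples,
    with the data itself as the statistic (rather than classifier features)."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    r2 = psd_sqrt(s2)
    cross = psd_sqrt(r2 @ s1 @ r2)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * cross))

rng = np.random.default_rng(0)
samples = rng.standard_normal((4000, 2))                 # stand-in for observed data
shifted = rng.standard_normal((4000, 2)) + [1.0, 0.0]    # stand-in for model samples
```

The score vanishes for identical samples, and for the mean-shifted toy data above it is close to the squared shift, as expected.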
We report training times in Appendix H, and observe that RNFs-ML take a similar amount of time to train as RNFs-TS on datasets with lower values of $D$. While we do take longer to train on the other datasets, our training times remain reasonable, and we often require fewer epochs to converge.

Table 1. FID-like metric for tabular data (lower is better). Bolded runs are the best or overlap with it.

| Method | POWER | GAS | HEPMASS | MINIBOONE |
| --- | --- | --- | --- | --- |
| RNFs-ML (exact) | ${0.069} \pm {0.014}$ | ${0.138} \pm {0.021}$ | ${0.486} \pm {0.032}$ | ${0.978} \pm {0.082}$ |
| RNFs-ML $\left( {K = 1}\right)$ | ${0.083} \pm {0.015}$ | ${0.110} \pm {0.021}$ | ${0.779} \pm {0.191}$ | ${1.001} \pm {0.051}$ |
| RNFs-ML $\left( {K = {10}}\right)$ | ${0.113} \pm {0.037}$ | ${0.140} \pm {0.013}$ | ${0.495} \pm {0.055}$ | ${0.878} \pm {0.083}$ |
| RNFs-TS | ${0.178} \pm {0.021}$ | ${0.161} \pm {0.016}$ | ${0.649} \pm {0.081}$ | ${1.085} \pm {0.0622}$ |

### A.3. Image Data and Out-of-Distribution Detection

We also compare RNFs-ML to RNFs-TS for image modelling on MNIST and FMNIST. We point out that these datasets have ambient dimension $D = {784}$, and being able to fit RNFs-ML is in itself noteworthy: to the best of our knowledge, no previous method has scaled optimizing the Jacobian-transpose-Jacobian term to these dimensions. We use FID scores both for comparing models and for early stopping during training. We also used likelihood annealing and set $d \mathrel{\text{:=}} {20}$, with all experimental details again given in Appendix H. We report FID scores in Table 2, where we can see that we outperform RNFs-TS. Our RNFs-ML $\left( {K = 1}\right)$ variant also outperforms its decreased-variance counterparts. This is partially explained by the fact that we used this variant to tune RNFs-ML, but we also hypothesize that this added variance can be helpful because of the remaining (non-dimension-based) topological mismatch. Nonetheless, once again these results suggest that the variance induced by our stochastic method is not empirically harmful. We also report training times in Appendix H, where we can see the computational benefits of our stochastic method.

Table 2. FID scores (lower is better) and decision stump OoD accuracy (higher is better).

| Method | FID (MNIST) | FID (FMNIST) | OoD accuracy (MNIST $\rightarrow$ FMNIST) | OoD accuracy (FMNIST $\rightarrow$ MNIST) |
| --- | --- | --- | --- | --- |
| RNFs-ML (exact) | 36.09 | 296.01 | 92% | 91% |
| RNFs-ML $\left( {K = 1}\right)$ | $\underline{\mathbf{{33.98}}}$ | 288.39 | 97% | 78% |
| RNFs-ML $\left( {K = 4}\right)$ | 42.90 | 342.91 | 77% | 89% |
| RNFs-TS | 35.52 | 318.59 | 98% | 96% |

We further evaluate the performance of RNFs for OoD detection. Nalisnick et al. (2019) pointed out that square NFs trained on FMNIST assign higher likelihoods to MNIST than they do to FMNIST. While there has been research attempting to fix this puzzling behaviour (Alemi et al., 2017; 2018; Choi et al., 2018; Ren et al., 2019), to the best of our knowledge no method has managed to correct it using only likelihoods of trained models. Figure 1 shows that RNFs remedy this phenomenon: models trained on FMNIST assign higher test likelihoods to FMNIST than to MNIST. This correction does not come at the cost of the same anomalous behaviour emerging in the opposite direction (i.e. when training on MNIST; see Appendix H for a histogram). Table 2 quantifies these results (arrows point from in-distribution datasets to OoD ones) with the accuracy of a decision stump using only log-likelihood, and we can see that the best-performing RNFs models essentially solve this OoD task. While we leave a formal explanation of this result for future work, we believe this discovery highlights the importance of properly specifying models and of ensuring the use of appropriate inductive biases, in this case low intrinsic dimensionality of the observed data. We point out that this seems to be a property of RNFs, rather than of our ML training method, although our exact method is still used to compute these log-likelihoods at test time. We include additional results on OoD detection using reconstruction errors, along with a discussion, in Appendix H, where we found the opposite unexpected behaviour: FMNIST always has smaller reconstruction errors, regardless of which dataset was used for training.

## B. Injective Change-of-Variable Formula and Stacking Injective Flows

We first state the density of projected points, and then derive it.
Defining ${f}_{\theta } \mathrel{\text{:=}} {\widetilde{f}}_{\theta } \circ \operatorname{pad}$ and ${f}_{\theta }^{ \dagger } \mathrel{\text{:=}} {\operatorname{pad}}^{ \dagger } \circ {\widetilde{f}}_{\theta }^{-1}$, our construction of ${g}_{\phi }$ yields:

$$
{p}_{X}\left( x\right) = {p}_{Z}\left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right) {\left| \det \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \right| }^{-1}{\left| \det \mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \right| }^{-1/2}. \tag{7}
$$

We now derive (7) from (1) (with ${g}_{\phi }^{ \dagger }$ in place of ${g}_{\phi }^{-1}$). By the chain rule, we have:

$$
\mathbf{J}\left\lbrack {g}_{\phi }\right\rbrack \left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right) = \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {{f}_{\theta }^{ \dagger }\left( x\right) }\right) \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right) . \tag{8}
$$

The Jacobian-transpose-Jacobian term in (1) thus becomes:

$$
\begin{aligned}
&{\left| \det \mathbf{J}{\left\lbrack {g}_{\phi }\right\rbrack }^{\top }\left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \mathbf{J}\left\lbrack {g}_{\phi }\right\rbrack \left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \right| }^{-1/2} \\
&= {\left| \det \mathbf{J}{\left\lbrack {h}_{\eta }\right\rbrack }^{\top }\left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \right| }^{-1/2} \\
&= {\left| \det \mathbf{J}{\left\lbrack {h}_{\eta }\right\rbrack }^{\top }\left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \right| }^{-1/2}{\left| \det \mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \right| }^{-1/2}{\left| \det \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \right| }^{-1/2} \\
&= {\left| \det \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {g}_{\phi }^{ \dagger }\left( x\right) \right) \right| }^{-1}{\left| \det \mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \right| }^{-1/2},
\end{aligned} \tag{9}
$$

where the second equality follows from the fact that $\mathbf{J}{\left\lbrack {h}_{\eta }\right\rbrack }^{\top }\left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right)$, $\mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {{f}_{\theta }^{ \dagger }\left( x\right) }\right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {{f}_{\theta }^{ \dagger }\left( x\right) }\right)$, and $\mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right)$ are all square $d \times d$ matrices; and the third equality follows because determinants are invariant to transposition. The observation that the three involved matrices are square is the reason why we can decompose the change-of-variable formula for ${g}_{\phi }$ into applying first the change-of-variable formula for ${h}_{\eta }$, and then applying it for ${f}_{\theta }$.

This property, unlike in the case of standard flows, does not always hold. That is, the change-of-variable formula for a composition of injective transformations is not necessarily equivalent to applying the injective change-of-variable formula twice. To see this, consider the case where ${g}_{1} : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{{d}_{2}}$ and ${g}_{2} : {\mathbb{R}}^{{d}_{2}} \rightarrow {\mathbb{R}}^{D}$ are injective, where $d < {d}_{2} < D$, and let $g = {g}_{2} \circ {g}_{1}$. Clearly $g$ is injective by construction, and thus the determinant from its change-of-variable formula at a point $z \in {\mathbb{R}}^{d}$ is given by:

$$
\det \mathbf{J}{\left\lbrack g\right\rbrack }^{\top }\left( z\right) \mathbf{J}\left\lbrack g\right\rbrack \left( z\right) = \det \mathbf{J}{\left\lbrack {g}_{1}\right\rbrack }^{\top }\left( z\right) \mathbf{J}{\left\lbrack {g}_{2}\right\rbrack }^{\top }\left( {{g}_{1}\left( z\right) }\right) \mathbf{J}\left\lbrack {g}_{2}\right\rbrack \left( {{g}_{1}\left( z\right) }\right) \mathbf{J}\left\lbrack {g}_{1}\right\rbrack \left( z\right) , \tag{10}
$$

where now $\mathbf{J}\left\lbrack {g}_{1}\right\rbrack \left( z\right) \in {\mathbb{R}}^{{d}_{2} \times d}$ and $\mathbf{J}\left\lbrack {g}_{2}\right\rbrack \left( {{g}_{1}\left( z\right) }\right) \in {\mathbb{R}}^{D \times {d}_{2}}$.
Unlike the determinant from (9), this determinant cannot be easily decomposed into a product of determinants, since the involved matrices are not all square. In particular, (10) need not match:

$$
\det \mathbf{J}{\left\lbrack {g}_{1}\right\rbrack }^{\top }\left( z\right) \mathbf{J}\left\lbrack {g}_{1}\right\rbrack \left( z\right) \cdot \det \mathbf{J}{\left\lbrack {g}_{2}\right\rbrack }^{\top }\left( {{g}_{1}\left( z\right) }\right) \mathbf{J}\left\lbrack {g}_{2}\right\rbrack \left( {{g}_{1}\left( z\right) }\right) , \tag{11}
$$

which would be the determinant terms from applying the change-of-variable formula twice. Note that this observation does not imply that a flow like $g$ could not be trained with our method; it simply implies that the $\det \mathbf{J}{\left\lbrack g\right\rbrack }^{\top }\left( z\right) \mathbf{J}\left\lbrack g\right\rbrack \left( z\right)$ term has to be considered as a whole, and not decomposed into separate terms. It is easy to verify that, in general, only an initial $d$-dimensional square flow can be separated from the overall Jacobian-transpose-Jacobian determinant.

## C. Full Hutchinson-Based Objective

Here, we provide the full contribution of a point $x$ to the objective containing Hutchinson's estimator and conjugate gradients:

$$
\begin{aligned}
&\log {p}_{Z}\left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right) - \log \left| {\det \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {{g}_{\phi }^{ \dagger }\left( x\right) }\right) }\right| - \beta {\begin{Vmatrix}x - {f}_{\theta }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \end{Vmatrix}}_{2}^{2} \\
&\quad - \frac{1}{2K}\mathop{\sum }\limits_{{k = 1}}^{K}\operatorname{stop\_gradient}\left( {\operatorname{CG}{\left( {J}_{\theta }^{\top }{J}_{\theta };{\epsilon }_{k}\right) }^{\top }}\right) {J}_{\theta }^{\top }{J}_{\theta }{\epsilon }_{k}.
\end{aligned} \tag{12}
$$

## D. Conjugate Gradients

We outline the CG algorithm in Algorithm 1, whose output we write as $\mathrm{{CG}}\left( {A;\epsilon }\right)$ in the main manuscript. Note that CG does not need access to $A$, just a matrix-vector product routine against $A$, $\operatorname{mvp\_A}(\cdot)$. If $A$ is symmetric positive definite, then CG converges in at most $d$ steps, i.e. its output matches ${A}^{-1}\epsilon$ and the corresponding residual is 0, so CG uses at most $d$ calls to $\operatorname{mvp\_A}(\cdot)$. This convergence holds mathematically, but can be violated numerically if $A$ is ill-conditioned, which is why the $\tau < d$ condition is added in the while loop.

Algorithm 1 CG

---

Input: $\operatorname{mvp\_A}(\cdot)$, a routine for matrix-vector products against $A \in {\mathbb{R}}^{d \times d}$; $\epsilon \in {\mathbb{R}}^{d}$; a tolerance $\delta \geq 0$

Output: ${A}^{-1}\epsilon$

${u}_{0} \leftarrow \mathbf{0} \in {\mathbb{R}}^{d}$ // current solution

${r}_{0} \leftarrow \epsilon$ // current residual

${q}_{0} \leftarrow {r}_{0}$

$\tau \leftarrow 0$

while ${\begin{Vmatrix}{r}_{\tau }\end{Vmatrix}}_{2} > \delta$ and $\tau < d$ do

&nbsp;&nbsp;&nbsp;&nbsp; ${v}_{\tau } \leftarrow \operatorname{mvp\_A}\left( {q}_{\tau }\right)$

&nbsp;&nbsp;&nbsp;&nbsp; ${\alpha }_{\tau } \leftarrow \left( {{r}_{\tau }^{\top }{r}_{\tau }}\right) /\left( {{q}_{\tau }^{\top }{v}_{\tau }}\right)$

&nbsp;&nbsp;&nbsp;&nbsp; ${u}_{\tau + 1} \leftarrow {u}_{\tau } + {\alpha }_{\tau }{q}_{\tau }$

&nbsp;&nbsp;&nbsp;&nbsp; ${r}_{\tau + 1} \leftarrow {r}_{\tau } - {\alpha }_{\tau }{v}_{\tau }$

&nbsp;&nbsp;&nbsp;&nbsp; ${\beta }_{\tau } \leftarrow \left( {{r}_{\tau + 1}^{\top }{r}_{\tau + 1}}\right) /\left( {{r}_{\tau }^{\top }{r}_{\tau }}\right)$

&nbsp;&nbsp;&nbsp;&nbsp; ${q}_{\tau + 1} \leftarrow {r}_{\tau + 1} + {\beta }_{\tau }{q}_{\tau }$

&nbsp;&nbsp;&nbsp;&nbsp; $\tau \leftarrow \tau + 1$

end

return ${u}_{\tau }$

---

## E. Automatic Differentiation

Here we summarize the relevant properties of forward- and backward-mode automatic differentiation (AD), which we use in the main manuscript.
Let $f$ be the composition of smooth functions ${f}_{1},\ldots ,{f}_{L}$, i.e. $f = {f}_{L} \circ {f}_{L - 1} \circ \cdots \circ {f}_{1}$. For example, in our setting this function could be ${f}_{\theta }$, so that ${f}_{1} = \operatorname{pad}$, and the remaining functions could be coupling layers from a $D$-dimensional square flow (or the functions whose compositions result in the coupling layers). By the chain rule, the Jacobian of $f$ is given by:

$$
\mathbf{J}\left\lbrack f\right\rbrack \left( z\right) = \mathbf{J}\left\lbrack {f}_{L}\right\rbrack \left( {{\bar{f}}_{L - 1}\left( z\right) }\right) \cdots \mathbf{J}\left\lbrack {f}_{2}\right\rbrack \left( {{\bar{f}}_{1}\left( z\right) }\right) \mathbf{J}\left\lbrack {f}_{1}\right\rbrack \left( z\right) , \tag{13}
$$

where ${\bar{f}}_{l} \mathrel{\text{:=}} {f}_{l} \circ {f}_{l - 1} \circ \cdots \circ {f}_{1}$ for $l = 1,2,\ldots ,L - 1$. Forward-mode AD computes products from right to left, and is thus efficient for computing jvp operations: $\mathbf{J}\left\lbrack f\right\rbrack \left( z\right) \epsilon$ is obtained by performing $L$ matrix-vector multiplications, one against each of the Jacobians on the right hand side of (13). Backward-mode AD computes products from left to right, and would thus result in significantly less efficient jvp evaluations involving $L - 1$ matrix-matrix products and a single matrix-vector product. Analogously, backward-mode AD computes vjps of the form ${v}^{\top }\mathbf{J}\left\lbrack f\right\rbrack \left( z\right)$ efficiently, using $L$ vector-matrix products, while forward-mode AD would require $L - 1$ matrix-matrix products and a single vector-matrix product.
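These accumulation orders are easy to verify on a small composition of toy layers (the layers and names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
dims = [3, 5, 4, 6]   # z in R^3 mapped through three layers f1, f2, f3
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]

def layer(W, x):
    # a smooth layer f_l(x) = tanh(W x)
    return np.tanh(W @ x)

def layer_jac(W, x):
    # Jacobian of f_l at x: diag(1 - tanh(W x)^2) W
    return (1.0 - np.tanh(W @ x) ** 2)[:, None] * W

z = rng.standard_normal(dims[0])

# intermediate values \bar f_l(z), needed to evaluate each layer Jacobian
acts = [z]
for W in Ws:
    acts.append(layer(W, acts[-1]))

# full Jacobian via the chain rule (13)
J = np.eye(dims[0])
for W, a in zip(Ws, acts[:-1]):
    J = layer_jac(W, a) @ J

# forward-mode jvp: accumulate right to left, one matrix-VECTOR product per layer
eps = rng.standard_normal(dims[0])
v = eps
for W, a in zip(Ws, acts[:-1]):
    v = layer_jac(W, a) @ v

# backward-mode vjp: accumulate left to right, i.e. starting from the last layer
u = rng.standard_normal(dims[-1])
w = u
for W, a in zip(reversed(Ws), reversed(acts[:-1])):
    w = layer_jac(W, a).T @ w
```

Both accumulations use only matrix-vector products, matching the full Jacobian products $\mathbf{J}[f](z)\epsilon$ and $\mathbf{J}[f](z)^{\top}u$ exactly.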
Typically, the cost of evaluating a matrix-vector or vector-matrix product against $\mathbf{J}\left\lbrack {f}_{l + 1}\right\rbrack \left( {{\bar{f}}_{l}\left( z\right) }\right)$ (or $\mathbf{J}\left\lbrack {f}_{1}\right\rbrack \left( z\right)$) is the same as the cost of computing ${\bar{f}}_{l + 1}\left( z\right)$ from ${\bar{f}}_{l}\left( z\right)$, i.e. the cost of evaluating ${f}_{l + 1}$ (or ${f}_{1}$ in the case of $\mathbf{J}\left\lbrack {f}_{1}\right\rbrack \left( z\right)$) (Baydin et al., 2018). jvp and vjp computations thus not only have the same computational cost; this cost is also equivalent to a forward pass, i.e. computing $f$.

When computing $f$, obtaining a jvp with forward-mode AD adds the same memory cost as another computation of $f$, since intermediate results do not have to be stored. That is, in order to compute $\mathbf{J}\left\lbrack {f}_{l}\right\rbrack \left( {{\bar{f}}_{l - 1}\left( z\right) }\right) \cdots \mathbf{J}\left\lbrack {f}_{1}\right\rbrack \left( z\right) \epsilon$, we only need to keep $\mathbf{J}\left\lbrack {f}_{l - 1}\right\rbrack \left( {{\bar{f}}_{l - 2}\left( z\right) }\right) \cdots \mathbf{J}\left\lbrack {f}_{1}\right\rbrack \left( z\right) \epsilon$ and ${\bar{f}}_{l - 1}\left( z\right)$ (which has to be stored anyway for computing $f$) in memory.
On the other hand, computing a vjp with backward-mode AD has a higher memory cost: one has to first compute $f$ and store all the intermediate ${\bar{f}}_{l}\left( z\right)$ (along with $z$), since computing ${v}^{\top }\mathbf{J}\left\lbrack {f}_{L}\right\rbrack \left( {{\bar{f}}_{L - 1}\left( z\right) }\right) \cdots \mathbf{J}\left\lbrack {f}_{l}\right\rbrack \left( {{\bar{f}}_{l - 1}\left( z\right) }\right)$ from ${v}^{\top }\mathbf{J}\left\lbrack {f}_{L}\right\rbrack \left( {{\bar{f}}_{L - 1}\left( z\right) }\right) \cdots \mathbf{J}\left\lbrack {f}_{l + 1}\right\rbrack \left( {{\bar{f}}_{l}\left( z\right) }\right)$ requires having ${\bar{f}}_{l - 1}\left( z\right)$ in memory.

## F. Batch Normalization

We now explain the issues that arise when combining batch normalization with vjps. These issues arise not only in our setting, but every time backward-mode AD has to be called to compute or approximate the gradient of the determinant term.

We consider the case of a batch of size 2, ${x}_{1}$ and ${x}_{2}$, as it exemplifies the issue and simplifies notation. Consider applying ${f}_{\theta }$ (without batch normalization) to each element in the batch, which we denote with the batch function ${F}_{\theta }$:

$$
{F}_{\theta }\left( {{x}_{1},{x}_{2}}\right) \mathrel{\text{:=}} \left( {{f}_{\theta }\left( {x}_{1}\right) ,{f}_{\theta }\left( {x}_{2}\right) }\right) . \tag{14}
$$

The Jacobian of ${F}_{\theta }$ clearly has a block-diagonal structure:

$$
\mathbf{J}\left\lbrack {F}_{\theta }\right\rbrack \left( {{x}_{1},{x}_{2}}\right) = \left( \begin{matrix} \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{1}\right) & \mathbf{0} \\ \mathbf{0} & \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{2}\right) \end{matrix}\right) .
\tag{15} +$$ + +This structure implies that relevant computations such as vjps, jvps, and determinants parallelize over the batch: + +$$ +{\left( {v}_{1},{v}_{2}\right) }^{\top }\mathbf{J}\left\lbrack {F}_{\theta }\right\rbrack \left( {{x}_{1},{x}_{2}}\right) = \left( {{v}_{1}^{\top }\mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{1}\right) ,{v}_{2}^{\top }\mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{2}\right) }\right) \tag{16} +$$ + +$$ +\mathbf{J}\left\lbrack {F}_{\theta }\right\rbrack \left( {{x}_{1},{x}_{2}}\right) \left( \begin{array}{l} {\epsilon }_{1} \\ {\epsilon }_{2} \end{array}\right) = \left( \begin{array}{l} \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{1}\right) {\epsilon }_{1} \\ \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{2}\right) {\epsilon }_{2} \end{array}\right) +$$ + +$$ +\det \mathbf{J}{\left\lbrack {F}_{\theta }\right\rbrack }^{\top }\left( {{x}_{1},{x}_{2}}\right) \mathbf{J}\left\lbrack {F}_{\theta }\right\rbrack \left( {{x}_{1},{x}_{2}}\right) = \det \mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {x}_{1}\right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{1}\right) \det \mathbf{J}{\left\lbrack {f}_{\theta }\right\rbrack }^{\top }\left( {x}_{2}\right) \mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {x}_{2}\right) . 
$$

In contrast, when using batch normalization, the resulting computation $F_\theta^{BN}(x_1, x_2)$ does not have a block-diagonal Jacobian, and thus this parallelism over the batch breaks down. In other words:

$$
(v_1, v_2)^\top \mathbf{J}[F_\theta^{BN}](x_1, x_2) \neq \left( v_1^\top \mathbf{J}[f_\theta](x_1), v_2^\top \mathbf{J}[f_\theta](x_2) \right) \tag{17}
$$

$$
\mathbf{J}[F_\theta^{BN}](x_1, x_2) \left( \begin{array}{l} \epsilon_1 \\ \epsilon_2 \end{array} \right) \neq \left( \begin{array}{l} \mathbf{J}[f_\theta](x_1) \epsilon_1 \\ \mathbf{J}[f_\theta](x_2) \epsilon_2 \end{array} \right)
$$

$$
\det \mathbf{J}[F_\theta^{BN}]^\top(x_1, x_2) \mathbf{J}[F_\theta^{BN}](x_1, x_2) \neq \det \mathbf{J}[f_\theta]^\top(x_1) \mathbf{J}[f_\theta](x_1) \det \mathbf{J}[f_\theta]^\top(x_2) \mathbf{J}[f_\theta](x_2),
$$

where the above $\neq$ signs should be interpreted as "not generally equal to" rather than "always not equal to", as equalities could hold coincidentally in rare cases.

In square flow implementations, AD is never used to obtain any of these quantities, and the Jacobian log determinants are explicitly computed for each element in the batch.
In other words, this batch dependence is ignored in square flows, both in the log determinant computation and when backpropagating through it. Elaborating on this point, AD is only used to backpropagate (with respect to $\theta$) through this explicit computation. If AD were used on $F_\theta^{BN}$ to construct the matrices and we then computed the corresponding log determinants, the results would not match the explicitly computed log determinants: the latter would be equivalent to using batch normalization with a stop_gradient operation with respect to $(x_1, x_2)$ but not with respect to $\theta$, while the former would use no stop_gradient whatsoever. Unfortunately, this partial stop_gradient operation, applied only with respect to inputs but not parameters, is not available in commonly used AD libraries. While our custom implementation of jvps can easily be "hard-coded" to have this behaviour, doing so for vjps would require significant modifications to PyTorch. We note that this is not a fundamental limitation, and that these modifications could be made to obtain vjps that behave as expected with a low-level re-implementation of batch normalization, but these fall outside the scope of our paper. Thus, in the interest of performing computations in a manner that remains consistent with what is commonly done for square flows and that allows fair comparisons of our exact and stochastic methods, we avoid using batch normalization.
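The batch coupling introduced by batch normalization can be checked numerically: for a toy elementwise map, the cross-example block of the batch Jacobian vanishes, while adding train-mode batch normalization makes it nonzero. This is a finite-difference sketch of ours, not the paper's implementation.

```python
import numpy as np

def F_plain(X):
    """Batch map applying tanh elementwise: no interaction across examples."""
    return np.tanh(X)

def F_bn(X):
    """Same map preceded by (train-mode) batch normalization: the batch
    mean and std couple every example to every other one."""
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-5
    return np.tanh((X - mu) / sd)

def jac(F, X, h=1e-6):
    """Finite-difference Jacobian of the flattened batch map."""
    y0 = F(X).ravel()
    J = np.zeros((y0.size, X.size))
    for i in range(X.size):
        Xp = X.ravel().copy()
        Xp[i] += h
        J[:, i] = (F(Xp.reshape(X.shape)).ravel() - y0) / h
    return J

X = np.array([[0.3, -1.2], [0.8, 0.4], [-0.5, 0.9]])  # batch of 3, D = 2
D = X.shape[1]
# Cross-example block: derivative of example 0's output w.r.t. example 1's input.
off_plain = jac(F_plain, X)[:D, D:2 * D]
off_bn = jac(F_bn, X)[:D, D:2 * D]
assert np.allclose(off_plain, 0.0, atol=1e-4)   # block-diagonal without BN
assert not np.allclose(off_bn, 0.0, atol=1e-4)  # coupling introduced by BN
```

Once the off-diagonal blocks are nonzero, per-example determinants no longer multiply to the determinant of the batch Jacobian, which is exactly the failure described above.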
### G. FID and FID-like Scores

For a given dataset $\{x_1, \ldots, x_n\} \subset \mathbb{R}^D$ and a set of samples generated by a model $\{x_1^{(g)}, \ldots, x_m^{(g)}\} \subset \mathbb{R}^D$, along with a statistic $T : \mathbb{R}^D \rightarrow \mathbb{R}^r$, the empirical means and covariances are given by:

$$
\widehat{\mu} := \frac{1}{n}\sum_{i=1}^{n} T(x_i), \qquad \widehat{\Sigma} := \frac{1}{n-1}\sum_{i=1}^{n}\left(T(x_i) - \widehat{\mu}\right)\left(T(x_i) - \widehat{\mu}\right)^{\top} \tag{18}
$$

$$
\widehat{\mu}^{(g)} := \frac{1}{m}\sum_{i=1}^{m} T\left(x_i^{(g)}\right), \qquad \widehat{\Sigma}^{(g)} := \frac{1}{m-1}\sum_{i=1}^{m}\left(T\left(x_i^{(g)}\right) - \widehat{\mu}^{(g)}\right)\left(T\left(x_i^{(g)}\right) - \widehat{\mu}^{(g)}\right)^{\top}. \tag{19}
$$

Table 3. Training times in seconds; each cell reports per-epoch / total time. "$K > 1$" means $K = 10$ for tabular data and $K = 4$ for images.

| Dataset | RNFs-ML (exact) | RNFs-ML ($K = 1$) | RNFs-ML ($K > 1$) | RNFs-TS |
| --- | --- | --- | --- | --- |
| POWER | 53.8 / 4.13e3 | 67.4 / 6.76e3 | 136 / 1.14e4 | 45.1 / 3.83e3 |
| GAS | 37.3 / 2.51e3 | 62.7 / 4.51e3 | 80.1 / 5.24e3 | 43.2 / 3.49e3 |
| HEPMASS | 143 / 1.01e4 | 146 / 8.28e3 | 159 / 1.20e4 | 29.1 / 2.42e3 |
| MINIBOONE | 49.3 / 4.16e3 | 26.3 / 2.01e3 | 29.8 / 2.94e3 | 4.61 / 481 |
| MNIST | 2.40e3 / 2.59e5 | 1.71e3 / 1.57e5 | 3.03e3 / 3.20e5 | 2.13e2 / 3.90e4 |
| FMNIST | 2.34e3 / 2.59e5 | 1.72e3 / 1.50e5 | 3.15e3 / 2.10e5 | 1.04e2 / 1.11e4 |

The FID score takes $T$ as the last hidden layer of a pretrained inception network (Szegedy et al., 2015), and evaluates generated sample quality by comparing generated moments against data moments. This comparison is done with the squared Wasserstein-2 distance between Gaussians with corresponding moments, which is given by:

$$
\left\| \widehat{\mu} - \widehat{\mu}^{(g)} \right\|_2^2 + \operatorname{tr}\left(\widehat{\Sigma} + \widehat{\Sigma}^{(g)} - 2\left(\widehat{\Sigma}\,\widehat{\Sigma}^{(g)}\right)^{1/2}\right), \tag{20}
$$

which is 0 if and only if the moments match. Our proposed FID-like score for tabular data is computed in the exact same way, except no inception network is used: we simply take $T$ to be the identity, $T(x) = x$.

## H. Experimental Details

First, we comment on hyperparameter and architectural choices shared across experiments. The $D$-dimensional square flow that we use, as mentioned in the main manuscript, is a RealNVP network (Dinh et al., 2017). In all cases, we use the ADAM optimizer (Kingma & Ba, 2015) and train with early stopping against a validation criterion specified separately for each experiment and discussed further in the relevant subsections below. We use no weight decay. We also do not use batch normalization in any experiment, for the reasons given in Appendix F. We use a standard Gaussian on $d$ dimensions as $p_Z$ in all experiments.

**Compute** We ran our two-dimensional experiments on a Lenovo T530 laptop with an Intel i5 processor, with negligible training time per epoch. We ran the tabular data experiments on a variety of NVIDIA GeForce GTX GPUs on a shared cluster: we had, at varying times, access to 1080, 1080 Ti, and 2080 Ti models, but never access to more than six cards in total at once.
For the image experiments, we had access to a 32GB-configuration NVIDIA Tesla V100 GPU. We ran each of the tabular and image experiments on a single card at a time, except for the image experiments for the RNFs-ML (exact) and $(K = 10)$ models, which we parallelized over four cards.

Table 3 includes training times for all of our experiments. Since we used FID-like and FID scores for early stopping, we include both per-epoch and total times. Per-epoch times of RNFs-ML exclude epochs where the Jacobian-transpose-Jacobian log determinant is annealed with a weight of 0, although we include the time from this portion of training in the total time cost. Note that throughout this section we consider one epoch of the two-step baseline procedure to be one full pass through the data training the likelihood term, followed by one full pass through the data training the reconstruction term.

### H.1. Simulated Data

The data for this experiment is simulated from a von Mises distribution centred at $\frac{\pi}{2}$, projected onto a circle of radius 1. We randomly generate 10,000 training data points and train with batch sizes of 1,000. We use 1,000 points for validation, performing early stopping using the value of the full objective and halting training when we do not see any validation improvement for 50 epochs. We create the visualizations in Figure 2 and Figure 3 by taking 1,000 equally-spaced grid points between -3 and 3 in the low-dimensional space, projecting these into higher dimensions by applying the flow $g_\phi$, and then assigning density to these points using the injective change-of-variable formula (1). In this low-dimensional example, we use the full Jacobian-transpose-Jacobian, which ends up being a scalar as $d = 1$. We commence likelihood annealing (when active) on the 500th training epoch and end up with a full likelihood term by the 1000th.
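The data-generating process just described can be sketched as follows; note that the von Mises concentration parameter $\kappa$ is our assumption, as the text does not state its value.

```python
import numpy as np

rng = np.random.default_rng(0)
# Angles from a von Mises distribution centred at pi/2 (kappa = 1.0 is our
# assumption), projected onto the unit circle to give strictly
# one-dimensional data embedded in R^2.
theta = rng.vonmises(mu=np.pi / 2, kappa=1.0, size=10_000)
data = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # shape (10000, 2)

assert data.shape == (10_000, 2)
assert np.allclose(np.linalg.norm(data, axis=1), 1.0)    # radius exactly 1
```

Because every sample lies exactly on the circle, the dataset has no $D$-dimensional support, which is the regime rectangular flows are designed for.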
For the $D$-dimensional square flow $f_\theta$, we used a 5-layer RealNVP model, with each layer having a fully-connected coupler network of size $2 \times 10$, i.e. 2 hidden layers each of size 10, outputting the shift and (log) scale values. The baseline additionally uses a simple shift-and-scale transformation in $d$-dimensional space as $h_\eta$; we simply use the identity map for $h_\eta$ in this simple example.

We perform slightly different parameter sweeps for the two methods based on preliminary exploration. For the baseline two-step procedure, we perform runs over the following grid:

- Learning rate: $10^{-3}$, $10^{-4}$.

- Regularization parameter ($\beta$): 100; 1,000; 10,000 (which for this method is equivalent to having a separate learning rate for the regularization objective).

- Likelihood annealing: True or False.

For our method, we search over the following, excluding the learning rate since our method was stable at the higher rate of $10^{-3}$:

- Regularization parameter ($\beta$): 10, 50, 200.

- Likelihood annealing: True or False.

Empirically, we found the two-step baseline performed better with the higher regularization, which also agrees with the hyperparameter settings from their paper.

**Divergences on RNFs-TS between our codebase and the implementation of Brehmer & Cranmer (2020)** Although we were able to replicate the baseline RNFs-TS method, some choices made in the codebase of the baseline method (available here: https://github.com/johannbrehmer/manifold-flow) differ from ours, which we outline below:

- The baseline was trained for 120 epochs and then selects the model with the best validation score, whereas we use early stopping over an (essentially) unlimited number of epochs.

- The baseline weights the reconstruction term with a factor of 100 and the likelihood term with a factor of 0.1. This is equivalent in our codebase to setting $\beta = 1{,}000$ and lowering the learning rate by a factor of 10.

- The baseline uses cosine annealing of the learning rate, which we do not use.

- The baseline includes a sharp Normal base distribution on the pulled-back padded coordinates. We neglected to include this as it is not mentioned in the paper and can end up resulting in essentially a square flow construction.

- The baseline uses the ADAMW optimizer (Loshchilov & Hutter, 2019) to fix issues with weight decay within ADAM (which they also use). We stick with standard ADAM as we do not use weight decay.

- The baseline flow reparametrizes the scale $s$ of the RealNVP network as $s = \sigma(\widetilde{s} + 2) + 10^{-3}$, where $\widetilde{s}$ is the unconstrained scale and $\sigma$ is the sigmoid function; this constrains the scale to be less than $1 + 10^{-3}$. This appears to be done for stability of the transformation (cf. the ResNets below). We instead use the standard parametrization $s = \exp(\widetilde{s})$, as the fully-connected networks appear to be adequately stable.

- The baseline uses ResNets with ReLU activations of size $2 \times 100$ as the affine coupling networks. We use MLPs with the tanh activation function instead.

- The baseline uses a dataset which is not strictly on a manifold: the radius of a point on the circle is sampled from $\mathcal{N}(1, 0.01^2)$. We use a strictly one-dimensional distribution instead, with a von Mises distribution on the angle as noted above.

In general, we favoured more standard and simpler choices for modelling the circle, outside of the likelihood annealing, which is non-standard.

![01963e34-b00a-708a-a0ec-96108121a19a_13_420_189_918_580_0.jpg](images/01963e34-b00a-708a-a0ec-96108121a19a_13_420_189_918_580_0.jpg)

Figure 3. Densities (top row) and speeds (bottom row) for additional runs.
Failed runs recovering neither the manifold nor the distribution on it: RNFs-ML (exact) (left column) and RNFs-TS (right column). Successful RNFs-TS run (middle column).

We note that, while the results reported in the main manuscript are representative of common runs for both RNFs-ML (exact) and RNFs-TS, not every single run of RNFs-ML (exact) obtained results as good as the ones from the main manuscript. Similarly, some runs of RNFs-TS recovered better likelihoods than the one from the main manuscript. We emphasize again that the results reported in the main manuscript are the most common ones: most RNFs-ML (exact) runs correctly recovered both the manifold and the distribution on it, and most RNFs-TS runs recovered only the manifold correctly. For completeness, we include in Figure 3 some of the rare runs where results differed from those reported in the main manuscript. Interestingly, we can see that the successful RNFs-TS run, which managed to recover the distribution on the manifold, had more constant speeds than other RNFs-TS runs.

### H.2. Tabular Data

For the tabular data, we use the GAS, POWER, HEPMASS, and MINIBOONE datasets, preprocessed as in Papamakarios et al. (2017), although we neglect to use a test dataset as we simply compare moments on the training data, as is typically done with the FID score. We did not observe problems with overfitting in practice for any of the methods. We use the FID-like metric with the first and second moments of the generated and observed data, as described in Appendix G, for early stopping, halting training after 20 epochs of no improvement.

We again use a RealNVP flow in $D$ dimensions but now with 10 layers, each layer having a fully-connected coupler network of hidden dimension $4 \times 128$. The $d$-dimensional flow here is also a RealNVP, but just a 5-layer network with couplers of size $2 \times 32$.
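The FID-like early-stopping metric mentioned above (Appendix G with $T$ taken to be the identity) amounts to the following computation; this is an illustrative sketch of ours, not the authors' code, and it computes $\operatorname{tr}((\widehat{\Sigma}\widehat{\Sigma}^{(g)})^{1/2})$ via the eigenvalues of the product, which are real and nonnegative for PSD factors.

```python
import numpy as np

def fid_like(x, x_gen):
    """FID-like score with T = identity: squared Wasserstein-2 distance
    between Gaussians fitted to data and generated samples (Eq. 20)."""
    mu, mu_g = x.mean(0), x_gen.mean(0)
    S, S_g = np.cov(x, rowvar=False), np.cov(x_gen, rowvar=False)
    # tr((S S_g)^{1/2}) via eigenvalues: the product of two PSD matrices
    # has a nonnegative real spectrum.
    eig = np.linalg.eigvals(S @ S_g)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return np.sum((mu - mu_g) ** 2) + np.trace(S + S_g) - 2.0 * tr_sqrt

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 3))
assert fid_like(x, x) < 1e-8           # identical samples score ~0
assert fid_like(x, x + 1.0) > 1.0      # a mean shift is penalized
```

Early stopping then simply tracks this score on samples drawn from the model after each epoch.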
In all methods, we use a regularization parameter of $\beta = 50$. We introduce the likelihood term with low weight after 25 epochs, linearly increasing its contribution to the objective until it is set to its full weight after 50 epochs. We select $d$ as $\lfloor \frac{D}{2} \rfloor$, except for ML methods on the $D = 8$ GAS dataset, which use $d = 2$ (noted below). We use a learning rate of $10^{-4}$. For the methods involving the Hutchinson estimator, we use a standard Gaussian as the estimating distribution. We also experimented with a Rademacher distribution here but found the Gaussian to be superior.

Results reported in the main manuscript are the mean of 5 runs (with different seeds) plus/minus standard error. Occasionally, both RNFs-ML and RNFs-TS resulted in failed runs with FID-like scores at least an order of magnitude larger than those of other runs. In these rare instances, we did another run and ignored the outlier. We did this for both methods, and we point out that RNFs-ML did not have a higher number of failed runs.

As mentioned in the main manuscript, GAS required slightly more tuning, as RNFs-ML did not outperform RNFs-TS when using $d = 4$. We instead use latent dimension $d = 2$, where RNFs-ML did outperform. Since RNFs-TS did better with $d = 4$, we report those numbers in the main manuscript. Otherwise, our methods outperformed the baseline out-of-the-box, using parameter configurations gleaned from the image and circle experiments.

Table 4. Parameter combinations investigated for MNIST runs. Note that the final two rows are irrelevant for RNFs-ML (exact) and RNFs-TS. We include "short names" for ease of listing parameters for the runs in Table 2.

| Parameter | Short name | Main value | Alternatives |
| --- | --- | --- | --- |
| Likelihood annealing | LA | True | False |
| Reconstruction parameter | $\beta$ | 50 | 5, 500, 10,000 |
| Low dimension | $d$ | 20 | 10, 15, 30 |
| $D$-dim flow coupler | $D$ NET | $8 \times 64$ | $4 \times 64$ |
| $d$-dim flow layers | $d$ LAYERS | 5 | 10 |
| Hutchinson distribution | HUTCH | Gaussian | Rademacher |
| CG tolerance (normalized) | tol | 1 | 0.001 |

Table 5. Parameter choices for the MNIST runs reported in Table 2.

| Method | LA | $\beta$ | $d$ | $D$ NET | $d$ LAYERS | HUTCH | tol |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RNFs-ML (exact) | True | 5 | 20 | $8 \times 64$ | 10 | N/A | N/A |
| RNFs-ML ($K = 1$) | True | 5 | 20 | $8 \times 64$ | 10 | Gaussian | 0.001 |
| RNFs-ML ($K = 4$) | True | 50 | 20 | $8 \times 64$ | 5 | Gaussian | 1 |
| RNFs-TS | True | 50 | 20 | $8 \times 64$ | 5 | N/A | N/A |

### H.3. Image Data and Out-of-Distribution Detection

In this set of experiments, we mostly tuned the RNFs-ML methods on MNIST for $K = 1$ (applying any applicable settings to RNFs-TS on MNIST as well), which is likely one of the main reasons that RNFs-ML performs so well for $K = 1$ versus the exact method or $K = 4$. The reason we spent so much time on $K = 1$ is that it was the fastest experiment to run and thus the easiest to iterate on. Our general strategy for tuning was to stick to a base set of parameters that performed reasonably well and then try various things to improve performance. A full grid search of all the parameters we might have wanted to try was quite prohibitive on the compute we had available. Some specific details on settings follow below.

For the $D$-dimensional square flow, we mainly used the 10-layer RealNVP model which exactly mirrors the setup of Dinh et al. (2017) on image data, except we neglect to include batch normalization (as discussed in Appendix F), and we also tried reducing the size of the ResNet coupling networks from $8 \times 64$ to $4 \times 64$ for computational purposes. For further computational savings, we additionally attempted to use a RealNVP with fewer layers as the $D$-dimensional square flow, but this performed extremely poorly and we did not revisit it. For the $d$-dimensional square component, we used another RealNVP with either 5 or 10 layers, and fully-connected coupler networks of size $4 \times 32$. We also looked into modifying the flow here to be a neural spline flow (Durkan et al., 2019), but this, like the smaller $D$-dimensional RealNVP, performed very poorly as well. This may be because we did not constrain the norm of the gradients, although further investigation is required. We also looked into using no $d$-dimensional flow for our methods, as in the circle experiment, but this did not work well at all.
For padding, we first randomly permute the $d$-dimensional input (although this permutation is fixed once the run begins), pad to reach the appropriate vector length, and then reshape into image dimensions. We also pad with zeros when performing the inverse of the density split operation (cf. the $z$ to $x$ direction of Dinh et al. (2017, Figure 4(b))), so that the input is actually padded twice at various steps of the flow.

When we used likelihood annealing, we did the same thing as for the tabular data: optimize only the reconstruction term for 25 epochs, then slowly and linearly introduce the likelihood term until it has a weight of 1 in the objective function after epoch 50.

We summarize our attempted parameters in Table 4. For some choices of parameters, such as likelihood annealing set to False, $d = 15, 30$, $\beta = 10{,}000$, and CG tolerance set to 1, we had very few runs because of computational reasons. However, we note that the run with low CG tolerance ends up being the most successful run on MNIST. We have included short names in the table for ease of listing hyperparameter values for the runs in Table 2, which we now provide for MNIST and FMNIST in Table 5 and Table 6 respectively.

Table 6. Parameter choices for the FMNIST runs reported in Table 2.

| Method | LA | $\beta$ | $d$ | $D$ NET | $d$ LAYERS | HUTCH | tol |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RNFs-ML (exact) | True | 50 | 20 | $8 \times 64$ | 10 | N/A | N/A |
| RNFs-ML ($K = 1$) | True | 50 | 20 | $8 \times 64$ | 5 | Rademacher | 1 |
| RNFs-ML ($K = 4$) | True | 50 | 20 | $8 \times 64$ | 10 | Rademacher | 1 |
| RNFs-TS | False | 5 | 20 | $4 \times 64$ | 10 | N/A | N/A |

![01963e34-b00a-708a-a0ec-96108121a19a_15_400_485_967_331_0.jpg](images/01963e34-b00a-708a-a0ec-96108121a19a_15_400_485_967_331_0.jpg)

Figure 4. OoD log-likelihood histograms trained on MNIST (left), and OoD reconstruction error histograms trained on FMNIST (middle) and MNIST (right). Log-likelihood results (left) are RNFs-ML (exact), and reconstruction results (middle and right) are RNFs-ML ($K = 1$). Note that green denotes in-distribution data and blue OoD data; colors do not correspond to datasets.

Figure 4 shows the best RNFs-ML log-likelihoods for models trained on MNIST (left panel), and we can see that MNIST is indeed assigned higher likelihoods than FMNIST. We also include OoD detection results when using reconstruction error instead of log-likelihoods, for models trained on FMNIST (middle panel) and MNIST (right panel). We observed similar results with RNFs-TS. Surprisingly, it is now the reconstruction error which exhibits puzzling behaviour: it is always lower on FMNIST, regardless of whether the model was trained on FMNIST or MNIST. Once again, this behaviour also happens for RNFs-TS, where the reconstruction error is optimized separately. We thus hypothesize that this behaviour is not due to maximum likelihood training, and rather is a consequence of inductive biases of the architecture.
\ No newline at end of file
diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/s-Fg3dXQzyS/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/s-Fg3dXQzyS/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..0ba45846dfa6be35961a90cacbb9d72febdc1933
--- /dev/null
+++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/s-Fg3dXQzyS/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,129 @@

§ RECTANGULAR FLOWS FOR MANIFOLD LEARNING

Anonymous Authors ${}^{1}$

§ ABSTRACT

Normalizing flows allow for tractable maximum likelihood estimation of their parameters but are incapable of modelling low-dimensional manifold structure in observed data. Flows which injectively map from low- to high-dimensional space provide promise for fixing this issue, but the resulting likelihood-based objective becomes more challenging to evaluate. Current approaches avoid computing the entire objective - which may induce pathological behaviour - or assume the manifold structure is known beforehand and thus are not widely applicable. Instead, we propose two methods relying on tricks from automatic differentiation and numerical linear algebra to either evaluate or approximate the full likelihood objective, performing end-to-end manifold learning and density estimation. We study the trade-offs between our methods, demonstrate improved results over previous injective flows, and show promising results on out-of-distribution detection.

§ 1. INTRODUCTION

Normalizing Flows (NFs) have recently become a staple of generative modelling, and particularly for density estimation (see Papamakarios et al. (2019) or Kobyzev et al. (2020) for a review).
Here, we typically have access to a set of points in some high-dimensional space ${\mathbb{R}}^{D}$ , which NFs model as the pushforward of some simple distribution on ${\mathbb{R}}^{D}$ through a parametrized bijection. Although this construction can admit tractable maximum likelihood training, the learned density has $D$ -dimensional support; this directly contradicts the manifold hypothesis (Bengio et al., 2013) which states that high-dimensional data lives on a lower-dimensional manifold embedded in ambient space. + +Instead, we may consider injective flows to circumvent this misspecification. These now map a random variable on ${\mathbb{R}}^{d}$ into ${\mathbb{R}}^{D}$ , defining a distribution on some $d$ -dimensional manifold embedded in ${\mathbb{R}}^{D}$ . We have access to a change-of-variable formula as in NFs, but the volume-change term now becomes much more difficult to evaluate. While there have been efforts towards training flows where the resulting distribution is supported on a low-dimensional manifold (e.g. (Rezende et al., 2020; Brehmer & Cranmer, 2020)), these approaches either assume the manifold is known beforehand or otherwise avoid the volume-change term. Both of these are undesirable: in the former, we generally do not know the manifold structure a priori; the latter can result in learning manifolds to which it is difficult to assign density. + +In this work, we show that likelihood-based density estimation for injective flows can be made tractable. We propose two methods for backpropagating through the injective volume-change term which rely on careful application of forward- and backward-mode automatic differentiation. The first method involves exact evaluation of this term and its gradient which incurs a higher memory cost; the second uses conjugate gradients and Hutchinson's trace estimator to obtain unbiased stochastic gradient estimates. 
Unlike previous work, our methods do not need the data manifold to be specified beforehand; they estimate this manifold and the distribution on it simultaneously, end-to-end, thus enabling maximum likelihood training. Ours are the first methods to backpropagate through the volume-change term in ambient dimensions $D$ approaching 1,000. We study the trade-off between memory and variance introduced by our methods and show improvements over injective flow baselines for density estimation. We also show that injective flows obtain state-of-the-art performance for likelihood-based Out-of-Distribution (OoD) detection.

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

§ 2. BACKGROUND AND MOTIVATION

§ 2.1. RECTANGULAR NORMALIZING FLOWS

Standard NFs unrealistically result in the learned density $p_X$ having $D$-dimensional support. To overcome this, we first follow Brehmer & Cranmer (2020), where an injective mapping $g_\phi : \mathbb{R}^d \rightarrow \mathbb{R}^D$ with $d < D$ is constructed. Here, $Z \in \mathbb{R}^d$ is the low-dimensional variable used to model the data as $X := g_\phi(Z)$. A well-known result from differential geometry (Krantz & Parks, 2008) provides a change-of-variable formula for $x \in \mathcal{M}_\phi := \{g_\phi(z) : z \in \mathbb{R}^d\}$:

$$
p_X(x) = p_Z(z_\phi) \left| \det \mathbf{J}[g_\phi]^\top(z_\phi) \mathbf{J}[g_\phi](z_\phi) \right|^{-1/2}, \tag{1}
$$

where $z_\phi := g_\phi^{-1}(x)$, and $p_X(x) = 0$ for $x \notin \mathcal{M}_\phi$.
The Jacobian-transpose-Jacobian (JtJ) determinant now characterizes the change in volume from $Z$ to $X$. We make several relevant observations: (i) the Jacobian matrix $\mathbf{J}[g_\phi](g_\phi^{-1}(x)) \in \mathbb{R}^{D \times d}$ is no longer square, and we thus refer to these flows as rectangular; (ii) $g_\phi^{-1} : \mathcal{M}_\phi \rightarrow \mathbb{R}^d$ is only properly defined on $\mathcal{M}_\phi$ and not on $\mathbb{R}^D$, and $p_X$ is now supported on the $d$-dimensional manifold $\mathcal{M}_\phi$; (iii) this is not a density with respect to the Lebesgue measure: the dominating measure is the Riemannian measure on the manifold $\mathcal{M}_\phi$ (Pennec, 2006); (iv) when $d = D$, we recover the standard change-of-variable formula.

Since data points $x$ will almost surely not lie exactly on $\mathcal{M}_\phi$, we use a left inverse $g_\phi^\dagger : \mathbb{R}^D \rightarrow \mathbb{R}^d$, satisfying $g_\phi^\dagger(g_\phi(z)) = z$ for all $z \in \mathbb{R}^d$, in place of $g_\phi^{-1}$. This exists by injectivity and is properly defined on all of $\mathbb{R}^D$, unlike $g_\phi^{-1}$, which only exists on $\mathcal{M}_\phi$. Setting $z_\phi := g_\phi^\dagger(x)$ in (1) is equivalent to projecting $x$ onto $\mathcal{M}_\phi$ as $x \leftarrow g_\phi(g_\phi^\dagger(x))$, and then evaluating the density from (1) at the projected point.
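A toy numerical check of (1), using our own example of a one-dimensional curve in $\mathbb{R}^2$ rather than anything from the paper: for $d = 1$, the JtJ term is just the squared speed $\|g'(z)\|^2$ of the curve.

```python
import numpy as np

def g(z):
    """Toy injective map from R^1 to R^2: a curve (hypothetical example)."""
    return np.array([np.sin(z[0]), 0.5 * z[0]])

def log_density(z, h=1e-6):
    """log p_X(g(z)) via Eq. (1), with a standard normal p_Z and a
    finite-difference rectangular Jacobian J of shape D x d = 2 x 1."""
    J = np.array([(g(z + h) - g(z - h)) / (2 * h)]).T  # D x d
    log_pz = -0.5 * z[0] ** 2 - 0.5 * np.log(2 * np.pi)
    _, logdet = np.linalg.slogdet(J.T @ J)             # log det of JtJ
    return log_pz - 0.5 * logdet

# For d = 1, J^T J equals the squared speed ||g'(z)||^2 = cos(z)^2 + 0.25.
z = np.array([0.7])
speed2 = np.cos(0.7) ** 2 + 0.25
expected = -0.5 * 0.7 ** 2 - 0.5 * np.log(2 * np.pi) - 0.5 * np.log(speed2)
assert np.isclose(log_density(z), expected, atol=1e-6)
```

The same computation with $d = D$ collapses to the familiar square-flow change of variable, matching observation (iv).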
Now, $g_\phi$ is injectively constructed as follows:

$$
g_\phi = \widetilde{f}_\theta \circ \operatorname{pad} \circ h_\eta \quad \text{and} \quad g_\phi^\dagger = h_\eta^{-1} \circ \operatorname{pad}^\dagger \circ \widetilde{f}_\theta^{-1}, \tag{2}
$$

where $\widetilde{f}_\theta : \mathbb{R}^D \rightarrow \mathbb{R}^D$ and $h_\eta : \mathbb{R}^d \rightarrow \mathbb{R}^d$ are both square flows, $\phi := (\theta, \eta)$, and $\operatorname{pad} : \mathbb{R}^d \rightarrow \mathbb{R}^D$ and $\operatorname{pad}^\dagger : \mathbb{R}^D \rightarrow \mathbb{R}^d$ are defined as $\operatorname{pad}(z) = (z, \mathbf{0})$ and $\operatorname{pad}^\dagger(z, z') = z$, where $\mathbf{0}, z' \in \mathbb{R}^{D-d}$. We can thus rewrite (1) using this specific form of $g_\phi$, with details in Appendix B.

Constructing flows with a tractable volume-change term is more challenging than in the standard case. Brehmer & Cranmer (2020) thus propose a two-step training procedure, wherein $f_\theta := \widetilde{f}_\theta \circ \operatorname{pad}$ and $h_\eta$ are trained separately, to avoid this calculation. Training $f_\theta$ involves minimizing the reconstruction error $\|x - f_\theta(f_\theta^\dagger(x))\|_2^2$, which encourages the observed data to lie on $\mathcal{M}_\theta$. Then, since $h_\eta$ will not appear in the determinant term in (1), it can be taken to be any $d$-dimensional NF and, with $\theta$ fixed, $\eta$ can be learned by maximizing its likelihood over the lower-dimensional points $\{f_\theta^\dagger(x_i)\}_i$. In practice, gradient steps in $\theta$ and $\eta$ are alternated. This procedure circumvents evaluation of the JtJ term, but we soon show that this comes with downsides.

§ 2.2. MOTIVATION

Dimensionality issues Problems originating from dimensionality mismatch have been observed throughout the deep generative modelling literature. Dai & Wipf (2019) show that using powerful variational autoencoders supported on $\mathbb{R}^D$ to model data living on a low-dimensional manifold results in the manifold itself being learned, but not the distribution on it. Cornish et al. (2020) demonstrate the drawbacks of using normalizing flows for estimating the density of topologically-complex data, but still model the support as being $D$-dimensional; Behrmann et al. (2021) provide a related result characterizing non-invertibility of trained flows. This body of work strongly motivates the development of models whose support has matching topology, including dimension, to that of the true data distribution.

Manifold flows A challenge to overcome for NFs on manifolds is the JtJ term; this is currently handled in one of two ways. The first assumes the manifold is known beforehand (Rezende et al., 2020), limiting general applicability to low-dimensional data where the true manifold can be known. The second circumvents the computation of this term entirely; this includes the aforementioned Brehmer & Cranmer (2020). Kumar et al. (2020) use a loose lower bound of the log-likelihood and do not explicitly enforce injectivity, so that the change-of-variable formula almost surely does not hold. Cunningham et al. (2020) propose to convolve the manifold distribution with Gaussian noise, which results in the model having high-dimensional support.

Why optimize this term? We can imagine a situation where, even if $f_\theta$ maps to the correct manifold, it might unnecessarily change volume in such a way that makes correctly learning $h_\eta$ more challenging than it needs to be.
For example, there is nothing in the two-step objective encouraging ${f}_{\theta }$ to learn a manifold parametrized with a well-controlled "speed", which we observe to be an issue in Figure 2 of the experiments. This is but one example of a failure which could have been avoided by learning the manifold in a density-aware fashion that includes the JtJ term.

## 3. Rectangular Flow Maximum Likelihood

### 3.1. Our Optimization Objective

We have noted that including the JtJ term in the optimization is sensible, but (1) corresponds to the density of the projection of $x$ onto ${\mathcal{M}}_{\theta }$. Thus, optimizing only this would not result in learning ${\mathcal{M}}_{\theta }$ such that observed data lies on it; it would only encourage projected data points to have high likelihood. Instead, following the KKT conditions (Karush, 1939; Kuhn & Tucker, 1951), we maximize in $\phi$ the following Lagrangian, which arises from constraining the reconstruction error to be smaller than some threshold:

$$
\log {p}_{Z}\left( {z}_{\phi }\right) - \log \left| {\det \mathbf{J}\left\lbrack {h}_{\eta }\right\rbrack \left( {z}_{\phi }\right) }\right| - \frac{1}{2}\log \det {J}_{\theta }^{\top }\left( x\right) {J}_{\theta }\left( x\right) - \beta {\begin{Vmatrix}x - {f}_{\theta }\left( {f}_{\theta }^{ \dagger }\left( x\right) \right) \end{Vmatrix}}_{2}^{2}, \tag{3}
$$

where we treat $\beta > 0$ as a hyperparameter, and for simplicity denote $\mathbf{J}\left\lbrack {f}_{\theta }\right\rbrack \left( {{f}_{\theta }^{ \dagger }\left( x\right) }\right)$ as ${J}_{\theta }\left( x\right)$ and again write ${z}_{\phi } \mathrel{\text{ := }} {g}_{\phi }^{ \dagger }\left( x\right)$. We now make a technical but relevant observation about our objective: since our likelihoods are Radon-Nikodym derivatives with respect to the Riemannian measure on ${\mathcal{M}}_{\theta }$, different values of $\theta$ will result in different dominating measures.
One should thus be careful when comparing likelihoods for models with different values of $\theta$. However, thanks to the smoothness of the objective over $\theta$, we should expect likelihoods for values of $\theta$ which are "close enough" to be comparable for practical purposes. In other words, comparisons remain reasonable locally, and the gradient of the volume-change term still contains information that helps learn ${\mathcal{M}}_{\theta }$ in such a way that ${h}_{\eta }$ can easily learn a density on the pulled-back dataset ${\left\{ {f}_{\theta }^{ \dagger }\left( {x}_{i}\right) \right\} }_{i}$.

### 3.2. Optimizing our Objective: Stochastic Gradients

All the terms in (3) are straightforward to evaluate and back-propagate through except for the third one; in this section we show how to obtain unbiased stochastic estimates of its gradient. We now drop the dependence of the Jacobian on $x$ from our notation and write ${J}_{\theta }$, knowing that the end computation will be parallelized over a batch ${\left\{ {x}_{i}\right\} }_{i}$. We assume access to an efficient matrix-vector product routine, i.e. computing ${J}_{\theta }^{\top }{J}_{\theta }\epsilon$ can be quickly achieved for any $\epsilon \in {\mathbb{R}}^{d}$; we elaborate on how we obtain these matrix-vector products in the next section. It is a well-known fact from matrix calculus (Petersen & Pedersen, 2008) that:

$$
\frac{\partial }{\partial {\theta }_{j}}\log \det {J}_{\theta }^{\top }{J}_{\theta } = \operatorname{tr}\left( {{\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1}\frac{\partial }{\partial {\theta }_{j}}{J}_{\theta }^{\top }{J}_{\theta }}\right) , \tag{4}
$$

where tr denotes the trace operator and ${\theta }_{j}$ is the $j$-th element of $\theta$.
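Identity (4) can be checked numerically on a toy Jacobian (a hypothetical smooth matrix function standing in for $\mathbf{J}[f_{\theta}]$; both sides are approximated with finite differences, so this is a sketch rather than an exact derivation):

```python
import numpy as np

def J(theta):
    # Toy rectangular "Jacobian" in R^{3x2}, smooth in a scalar theta.
    base = np.array([[1.0, 0.5], [0.0, 2.0], [1.5, -1.0]])
    return np.cos(theta) * base + theta * np.eye(3, 2)

def A(theta):                       # JᵀJ, a 2x2 positive-definite matrix here
    Jt = J(theta)
    return Jt.T @ Jt

def logdet(M):
    return np.linalg.slogdet(M)[1]

theta, h = 0.3, 1e-5
# Left-hand side of (4): finite-difference derivative of log det(JᵀJ)
lhs = (logdet(A(theta + h)) - logdet(A(theta - h))) / (2 * h)
# Right-hand side of (4): tr((JᵀJ)⁻¹ d(JᵀJ)/dθ), with dA/dθ by finite differences
dA = (A(theta + h) - A(theta - h)) / (2 * h)
rhs = np.trace(np.linalg.inv(A(theta)) @ dA)
assert abs(lhs - rhs) < 1e-6
```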
Next, we use Hutchinson's trace estimator (Hutchinson, 1989), which says that for any matrix $M \in {\mathbb{R}}^{d \times d}$, $\operatorname{tr}\left( M\right) = {\mathbb{E}}_{\epsilon }\left\lbrack {{\epsilon }^{\top }{M\epsilon }}\right\rbrack$ for any ${\mathbb{R}}^{d}$-valued random variable $\epsilon$ with zero mean and identity covariance. We can thus obtain an unbiased stochastic estimate of our gradient as:

$$
\frac{\partial }{\partial {\theta }_{j}}\log \det {J}_{\theta }^{\top }{J}_{\theta } \approx \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{\epsilon }_{k}^{\top }{\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1}\frac{\partial }{\partial {\theta }_{j}}{J}_{\theta }^{\top }{J}_{\theta }{\epsilon }_{k}, \tag{5}
$$

where the ${\left\{ {\epsilon }_{k}\right\} }_{k}$ are typically sampled either from standard Gaussian or Rademacher distributions. Naïve computation of the above estimate would require explicitly constructing ${J}_{\theta }^{\top }{J}_{\theta }$, which is intractable. Fortunately, the ${J}_{\theta }^{\top }{J}_{\theta }\epsilon$ terms can be trivially obtained using the given matrix-vector product routine, avoiding the construction of ${J}_{\theta }^{\top }{J}_{\theta }$, and then $\partial /\partial {\theta }_{j}\,{J}_{\theta }^{\top }{J}_{\theta }\epsilon$ follows by taking the gradient w.r.t. $\theta$.

There is still the issue of computing ${\epsilon }^{\top }{\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1} = {\left\lbrack {\left( {J}_{\theta }^{\top }{J}_{\theta }\right) }^{-1}\epsilon \right\rbrack }^{\top }$. We use Conjugate Gradients (CG) (Nocedal & Wright, 2006) to achieve this. CG is an iterative method to solve problems of the form ${Au} = \epsilon$ for given $A \in {\mathbb{R}}^{d \times d}$ (in our case $A = {J}_{\theta }^{\top }{J}_{\theta }$) and $\epsilon \in {\mathbb{R}}^{d}$; we include the CG algorithm in Appendix D for completeness. CG has several important properties.
First, it is known to recover the solution (assuming exact arithmetic) after at most $d$ steps, which means we can evaluate ${A}^{-1}\epsilon$. The solution converges exponentially (in the number of iterations $\tau$) to the true value (Shewchuk et al., 1994), so often $\tau \ll d$ iterations are sufficient for accuracy to many decimal places. Second, CG only requires a method to compute matrix-vector products against $A$, and does not require access to $A$ itself. One such product is performed at each iteration, and CG thus requires at most $d$ of these products, although again $\tau \ll d$ products usually suffice. This results in $\mathcal{O}\left( {\tau {d}^{2}}\right)$ solve complexity, less than the $\mathcal{O}\left( {d}^{3}\right)$ required by direct inversion. We denote ${A}^{-1}\epsilon$ computed with conjugate gradients as $\mathrm{{CG}}\left( {A;\epsilon }\right)$. We can then compute the estimator from (5) as:

$$
\frac{\partial }{\partial {\theta }_{j}}\log \det {J}_{\theta }^{\top }{J}_{\theta } \approx \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\operatorname{CG}{\left( {J}_{\theta }^{\top }{J}_{\theta };{\epsilon }_{k}\right) }^{\top }\frac{\partial }{\partial {\theta }_{j}}{J}_{\theta }^{\top }{J}_{\theta }{\epsilon }_{k} \tag{6}
$$

In practice, we implement this term by applying a `stop_gradient` on the CG step, thereby allowing us to avoid implementing a custom backward pass. We add this term into (3) and write out in full the contribution of a point $x$ to the training objective in the Appendix (Equation (12)).

### 3.3. AD Considerations: the Exact Method and the Forward-Backward AD Trick

Here we derive the routine for vector products against ${J}_{\theta }^{\top }{J}_{\theta }$, along with an exact method that avoids Hutchinson's estimator but has increased memory requirements. We will use commonly-known properties of AD to derive our approach, which we review in Appendix E.
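The conjugate-gradients component of the estimators (5)-(6) can be sketched as follows. The solver only ever touches $A$ through a matrix-vector product callback; here a toy SPD matrix stands in for ${J}_{\theta }^{\top }{J}_{\theta }$, whereas in the method that callback would be the jvp/vjp routine.

```python
import numpy as np

# Minimal conjugate-gradients solver for A u = ε using only matvecs,
# as used to evaluate (JᵀJ)⁻¹ε in the estimator (6). Sketch only.
def cg(matvec, eps, tau=50, tol=1e-10):
    u = np.zeros_like(eps)
    r = eps - matvec(u)              # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(tau):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 5))
A = B.T @ B + 0.1 * np.eye(5)        # SPD, playing the role of JᵀJ
eps = rng.normal(size=5)
u = cg(lambda v: A @ v, eps)
assert np.allclose(u, np.linalg.solve(A, eps), atol=1e-6)
```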
First, consider the problem of explicitly constructing ${J}_{\theta }$. This construction can then be used to evaluate ${J}_{\theta }^{\top }{J}_{\theta }$ and exactly compute its log determinant, thus avoiding the stochastic gradients of the previous section. We refer to this procedure as the exact method. Naïvely, one might try to explicitly construct ${J}_{\theta }$ using only backward-mode AD, which would require $D$ vector-Jacobian products (vjps) of the form ${v}^{\top }{J}_{\theta }$, one per basis vector $v \in {\mathbb{R}}^{D}$. A better way to explicitly construct ${J}_{\theta }$ is with forward-mode AD, which only requires $d$ Jacobian-vector products (jvps) ${J}_{\theta }\epsilon$, again one per basis vector $\epsilon \in {\mathbb{R}}^{d}$. We use a custom implementation of forward-mode AD in the popular PyTorch (Paszke et al., 2019) library ${}^{1}$ for the exact method, as well as for the forward-backward AD trick described below.

We now explain how to combine forward- and backward-mode AD to obtain the efficient matrix-vector products against ${J}_{\theta }^{\top }{J}_{\theta }$ needed for the tractable gradient estimates of the previous section. Note that $v \mathrel{\text{ := }} {J}_{\theta }\epsilon$ can be computed with a single jvp call, and then ${J}_{\theta }^{\top }{J}_{\theta }\epsilon = {\left\lbrack {v}^{\top }{J}_{\theta }\right\rbrack }^{\top }$ can be efficiently computed using only a vjp call. We refer to this way of computing matrix-vector products against ${J}_{\theta }^{\top }{J}_{\theta }$ as the forward-backward AD trick. Note that (12) requires $K\left( {\tau + 1}\right)$ such matrix-vector products, which might appear less efficient, as this could exceed the $d$ jvps required by the exact method.
However, the stochastic method is much more memory-efficient than its exact counterpart when optimizing: of the $K\left( {\tau + 1}\right)$ matrix-vector products needed to evaluate (12), only $K$ require gradients with respect to $\theta$. Thus only $K$ jvps and $K$ vjps, along with their intermediate steps, must be stored in memory over a training step. In contrast, the exact method requires gradients for every one of its $d$ jvp computations, which requires storing all of these along with their intermediate steps in memory.

${}^{1}$ PyTorch has a forward-mode AD implementation which relies on the "double backward" trick, which is known to be memory-inefficient. See https://j-towns.github.io/2017/06/12/A-new-trick.html for a description.

Our proposed methods thus offer a memory-variance trade-off. Increasing $K$ in the stochastic method results in larger memory requirements, which imply longer training times, as the batch size must be set lower. On the other hand, the larger the memory cost, the smaller the variance of the gradient. This also holds for the exact method, which produces exact gradients at the cost of increased memory requirements (as long as $K \ll d$; if $K$ is large enough, the stochastic method should never be used over the exact one).

## 4. Experiments

We compare our methods against that of Brehmer & Cranmer (2020), and study the memory vs. variance trade-off. We include all experimental details in Appendix H. Figure 2 shows how the two-step method (TS) correctly recovers the manifold, but not the distribution on it, when trying to learn a simple von Mises ground truth distribution on the unit circle in ${\mathbb{R}}^{2}$, which our method (ML) handily recovers.

We also compare the methods on the tabular datasets used by Papamakarios et al. (2017), along with MNIST and FMNIST.
Due to space constraints, we include our results in Appendix A, where we show that: (i) our maximum likelihood methods better recover the target distribution, as measured by FID score (Heusel et al., 2017); (ii) our stochastic version with $K = 1$ is competitive against its more memory-hungry alternatives; and (iii) rectangular flows perform very well for OoD detection. In particular, they assign higher likelihoods to FMNIST than to MNIST when trained on the former, contrary to what has been observed in previous NFs literature (Nalisnick et al., 2019), as can be seen in Figure 1.

Figure 1. OoD detection with RNFs-ML (exact).

Figure 2. Left column: RNFs-ML (exact) (top), von Mises ground truth (middle), and RNFs-TS (bottom). Right column: speed at which ${f}_{{\theta }^{ * }}$ maps to ${\mathcal{M}}_{{\theta }^{ * }}$ (measured as the ${l}_{2}$ distance between uniformly spaced consecutive points in $\mathbb{R}$ mapped through ${f}_{{\theta }^{ * }}$) for RNFs-ML (exact) (top), RNFs-TS (bottom), and the distribution ${h}_{\eta }$ has to learn in order to recover the ground truth, fixing ${\theta }^{ * }$ (middle). We can see that RNFs-ML map from low to high dimensions at a more constant speed, thus providing a simpler $z$ distribution for ${h}_{\eta }$ to learn. RNFs-TS map at a higher speed towards the top of the circle, which impacts density estimates.

## 5. Conclusions

In this paper we argue for the importance of likelihood-based training of rectangular flows, and introduce two methods for doing so. We study the benefits of our methods, and empirically show that they are preferable to current alternatives. We anticipate improvements to our methods with more powerful flow architectures than RealNVP, along with advancements in specifying flow models with more flexible topological properties.
\ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/uWhWujR6Tl0/Initial_manuscript_md/Initial_manuscript.md b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/uWhWujR6Tl0/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..84af859571c282ea7130d4af2a54da9cdd8a06b8 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/uWhWujR6Tl0/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,235 @@ +# Multi-Scale Continuous Normalizing Flows + +## Anonymous Authors ${}^{1}$ + +## Abstract + +We introduce a multi-scale variant of Continuous Normalizing Flows, and explore the computation of likelihood values. We also introduce a Wavelet-version of the model. However, we find that this formulation is flawed in the computation of BPD, and explore ways to alleviate this problem. + +## 1. Introduction + +Reversible generative models derived through the use of the change of variables technique (Dinh et al., 2017; Kingma & Dhariwal, 2018; Ho et al., 2019; Yu et al., 2020) are growing in interest as alternatives to generative models based on Generative Adversarial Networks (GANs) (Goodfellow et al., 2016) and Variational Autoencoders (VAEs) (Kingma & Welling, 2013). A change of variables approach facilitates the transformation of a simple known probability distribution such as Gaussian noise into a more complex model distribution, such as images. Reversible generative models using this technique are attractive because they enable efficient density estimation, efficient sampling, and admit exact likelihoods to be computed. A promising variation of the change-of-variable approach is based on the use of a continuous time variant of normalizing flows (Chen et al., 2018; Grathwohl et al., 2019), which uses an integral over continuous time dynamics to transform a base distribution into the model distribution. 
This approach uses ordinary differential equations (ODEs) specified by a neural network, i.e. Neural ODEs.

In this work, we consider a direct multi-resolution approach to continuous normalizing flows. While state-of-the-art GANs and VAEs exploit the multi-resolution properties of images, and recently top-performing methods also inject noise at each resolution (Brock et al., 2019; Shaham et al., 2019; Karras et al., 2020; Vahdat & Kautz, 2020), only recently have normalizing flows exploited the multi-resolution properties of images, using wavelets (Yu et al., 2020).

We use Continuous Normalizing Flows (CNFs) in a multi-resolution fashion to generate an image at finer resolutions conditioned on the immediate coarser resolution image. A high-level view of our approach is shown in Figure 1.

![01963e2e-bb21-7192-92f6-976bdd0555bf_0_1032_689_426_430_0.jpg](images/01963e2e-bb21-7192-92f6-976bdd0555bf_0_1032_689_426_430_0.jpg)

Figure 1. The architecture of our MSFlow-Image method. Continuous normalizing flows are used to generate images at each resolution, with finer resolutions being generated conditioned on the coarser image one level above.

## 2. Our method

Since images naturally exhibit structure in resolution, they can be decomposed into representations at multiple resolutions. We take advantage of this property by first decomposing an image in resolution space, i.e. into a series of images at coarser resolutions: $\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{S}}\right) = {\mathbf{x}}_{s \leq S}$. We then train an invertible generative model that normalizes this joint multi-resolution image into multi-resolution noise.

### 2.1. Normalizing Flows

We wish to train a generative model on a multi-resolution set of true images, i.e. find a probability distribution $p\left( {\mathbf{x}}_{s \leq S}\right) = p\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{S}}\right)$ that matches the true data distribution.
Normalizing flows (Tabak & Turner, 2013; Jimenez Rezende & Mohamed, 2015; Dinh et al., 2017; Papamakarios et al., 2019; Kobyzev et al., 2020) are good candidates for such a model, as they are probabilistic generative models that provide exact likelihood estimates and can be run in reverse to generate novel data from the model's distribution. This allows model comparison and measuring generalization to unseen data. Normalizing flows are trained by maximizing the log-likelihood of the input images. If a normalizing flow produces output $\mathbf{z}$ from an input image $\mathbf{x}$, the change-of-variables formula provides the likelihood of the image under this transformation as:

$$
\log p\left( \mathbf{x}\right) = \log p\left( \mathbf{z}\right) + \log \left| {\det \frac{\mathrm{d}\mathbf{z}}{\mathrm{d}\mathbf{x}}}\right| \tag{1}
$$

$\log p\left( \mathbf{z}\right)$ is computed as the log probability of $\mathbf{z}$ under the noise distribution (typically a standard Gaussian).

---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author .

Preliminary work. Under review by INNF+ 2021. Do not distribute.

---

### 2.2. Joint multi-resolution representation

We depart from the typical usage of the multi-resolution representation of images by observing that, had we not known that the images at different resolutions could be derived from one fine image, we would have a joint distribution over all possible images at all resolutions. Suppose $\mathbf{x}$ and $\mathbf{y}$ are two different images.
Under this joint multi-resolution distribution, $\left( {{\mathbf{x}}_{S - 2},{\mathbf{x}}_{S - 1},{\mathbf{x}}_{S}}\right)$ and $\left( {{\mathbf{y}}_{S - 2},{\mathbf{y}}_{S - 1},{\mathbf{y}}_{S}}\right)$ are valid multi-resolution images, but so are $\left( {{\mathbf{y}}_{S - 2},{\mathbf{x}}_{S - 1},{\mathbf{y}}_{S}}\right)$ and $\left( {{\mathbf{x}}_{S - 2},{\mathbf{y}}_{S - 1},{\mathbf{x}}_{S}}\right)$. The real data distribution of multi-resolution images consists of exactly those multi-resolution data points that are correlated in resolution space. This is analogous to the fact that, among all possible single-resolution images, only those with correlated pixels in width and height are real/natural images, as opposed to noise images without any correlation among pixels.

### 2.3. Multi-Resolution Normalizing Flows

We now wish to map the joint distribution of multi-resolution images ${\mathbf{x}}_{s \leq S}$ to "joint" multi-resolution noise ${\mathbf{z}}_{s \leq S}$. In this case, the multi-resolution change-of-variables formula is:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \log p\left( {\mathbf{z}}_{s \leq S}\right) + \log \left| {\det \frac{\partial {\mathbf{z}}_{s \leq S}}{\partial {\mathbf{x}}_{s \leq S}}}\right| \tag{2}
$$

The multi-resolution structure of the data simplifies the calculation of the Jacobian determinant. To illustrate this, choose a non-redundant basis of multi-resolution variables such that ${\mathbf{z}}_{s}$ at any resolution is linearly independent of the ${\mathbf{x}}_{s + j}, j > 0$ at finer resolutions.
This leads to the following block lower triangular structure in the variables:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right)
$$

$$
= \mathop{\sum }\limits_{{s = 0}}^{S}\log p\left( {\mathbf{z}}_{s}\right) + \log \left| {\det \left\lbrack \begin{matrix} \frac{\partial {\mathbf{z}}_{0}}{\partial {\mathbf{x}}_{0}} & 0 & \cdots & 0 \\ \frac{\partial {\mathbf{z}}_{1}}{\partial {\mathbf{x}}_{0}} & \frac{\partial {\mathbf{z}}_{1}}{\partial {\mathbf{x}}_{1}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial {\mathbf{z}}_{S}}{\partial {\mathbf{x}}_{0}} & \frac{\partial {\mathbf{z}}_{S}}{\partial {\mathbf{x}}_{1}} & \cdots & \frac{\partial {\mathbf{z}}_{S}}{\partial {\mathbf{x}}_{S}} \end{matrix}\right\rbrack }\right|
$$

$$
= \mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \log \left| {\det \frac{\partial {\mathbf{z}}_{s}}{\partial {\mathbf{x}}_{s}}}\right| }\right) \tag{3}
$$

We train a normalizing flow at each resolution to compute the likelihood of the image up to that resolution using Equation 3. This allows us to learn the normalizing flows at each resolution independently, and in parallel.

Since the Jacobian is (block) lower triangular, its non-zero off-diagonal blocks do not contribute to the determinant, and hence not to the final log probability. Hence, we can freely condition each normalizing flow on the coarser images, by treating the coarser images as independent variables. This allows us to learn only the higher-level information at each resolution.
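The simplification in Equation 3 is the standard determinant identity for block lower triangular matrices; it can be checked numerically with random blocks standing in for the $\partial {\mathbf{z}}_{s}/\partial {\mathbf{x}}_{j}$ (toy sizes, not an implementation of the model):

```python
import numpy as np

# Check that for a block lower-triangular Jacobian, log|det| equals the
# sum of the log|det|s of the diagonal blocks, as used in Equation 3.
rng = np.random.default_rng(0)
dims = [4, 3, 2]                     # toy sizes of x_0, x_1, x_2
D = sum(dims)
Jfull = np.zeros((D, D))
diag_logdets = 0.0
r = 0
for d in dims:
    Jfull[r:r + d, :r] = rng.normal(size=(d, r))   # off-diagonal blocks ∂z_s/∂x_j, j < s
    block = rng.normal(size=(d, d))                # diagonal block ∂z_s/∂x_s
    Jfull[r:r + d, r:r + d] = block
    diag_logdets += np.linalg.slogdet(block)[1]    # accumulate log|det| of the block
    r += d
assert np.isclose(np.linalg.slogdet(Jfull)[1], diag_logdets)
```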
We use this to our advantage, and train each normalizing flow ${g}_{s}$ between ${\mathbf{x}}_{s}$ and ${\mathbf{z}}_{s}$ conditioned on the immediate coarser ${\mathbf{x}}_{s - 1}$, making a Markov assumption:

$$
{\mathbf{z}}_{0} = {g}_{0}\left( {\mathbf{x}}_{0}\right) ;\;{\mathbf{z}}_{s} = {g}_{s}\left( {{\mathbf{x}}_{s} \mid {\mathbf{x}}_{s - 1}}\right) \;\forall s > 0 \tag{4}
$$

Hence, Equation 3 can be rewritten as:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \log \mathcal{N}\left( {{g}_{0}\left( {\mathbf{x}}_{0}\right) ;\mathbf{0},\mathbf{I}}\right) + \log \left| {\det \frac{\partial {g}_{0}}{\partial {\mathbf{x}}_{0}}}\right|
$$

$$
+ \mathop{\sum }\limits_{{s = 1}}^{S}\left( {\log \mathcal{N}\left( {{g}_{s}\left( {{\mathbf{x}}_{s} \mid {\mathbf{x}}_{s - 1}}\right) ;\mathbf{0},\mathbf{I}}\right) + \log \left| {\det \frac{\partial {g}_{s}}{\partial {\mathbf{x}}_{s}}}\right| }\right) \tag{5}
$$

### 2.4. Multi-Resolution Continuous Normalizing Flows

We choose to use a Continuous Normalizing Flow (CNF) (Chen et al., 2018; Grathwohl et al., 2019) at each resolution, since CNFs have recently been shown to effectively model image distributions using a fraction of the number of parameters typically used in normalizing flows (and non-flow-based approaches). At each resolution, the CNF ${g}_{s}$ transforms its state (say $\mathbf{v}\left( t\right)$) using a Neural ODE (Chen et al., 2018) with neural network ${f}_{s}$:

$$
\mathbf{v}\left( {t}_{1}\right) = {g}_{s}\left( {\mathbf{v}\left( {t}_{0}\right) \mid \mathbf{c}}\right) = \mathbf{v}\left( {t}_{0}\right) + {\int }_{{t}_{0}}^{{t}_{1}}{f}_{s}\left( {\mathbf{v}\left( t\right) , t,\mathbf{c}}\right) \mathrm{d}t \tag{6}
$$

Chen et al. (2018); Grathwohl et al. (2019) proposed an instantaneous variant of the change-of-variables formula for CNFs, which expresses the change in log-probability of the state of the Neural ODE, i.e.
$\Delta \log {p}_{\mathbf{v}}$, as a differential equation:

$$
\Delta \log {p}_{\mathbf{v}\left( {t}_{0}\right) \rightarrow \mathbf{v}\left( {t}_{1}\right) } = - {\int }_{{t}_{0}}^{{t}_{1}}\operatorname{Tr}\left( \frac{\partial {f}_{s}}{\partial \mathbf{v}\left( t\right) }\right) \mathrm{d}t \tag{7}
$$

Hence, the ODE solver solves for the state augmented with the above differential, obtaining both the final state and the change in log probability simultaneously. Thus, the log probability at each resolution in eqs. (3) and (5) can be computed as:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{z}}_{s}}}\right) \tag{8}
$$

We call this model MSFlow-Image.

In general, at each resolution (except the coarsest), the image ${\mathbf{x}}_{s}$ could first be converted to another representation ${\mathbf{y}}_{s}$ using a suitable orthogonal bijective transformation $T$ from ${\mathbf{x}}_{s}$ to ${\mathbf{y}}_{s}$, so that $\Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{y}}_{s}} = 0$:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \Delta \log {p}_{{\mathbf{y}}_{s} \rightarrow {\mathbf{z}}_{s}} + \underbrace{\Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{y}}_{s}}}_{ = 0}}\right) \tag{9}
$$

In the simplest case, ${\mathbf{y}}_{s} = {\mathbf{x}}_{s}$, which is MSFlow-Image. A more complex orthogonal transform to use is the Haar wavelet transform; we call this model Multi-Scale Continuous Normalizing Flow-Wavelet (MSFlow-Wavelet). At each resolution, ${\mathbf{x}}_{s}$ is transformed into a composition of the 3 wavelet coefficients ${\mathbf{w}}_{s}$ and the coarser version ${\mathbf{x}}_{s - 1}$, i.e. ${\mathbf{y}}_{s} = \left( {{\mathbf{w}}_{s},{\mathbf{x}}_{s - 1}}\right)$.
In this case, the conditioning becomes more explicit: each CNF maps the wavelet coefficients ${\mathbf{w}}_{s}$ to a noise sample ${\mathbf{z}}_{s}$ conditioned on ${\mathbf{x}}_{s - 1}$ (see Figure 2), similar to WaveletFlow (Yu et al., 2020), which builds on Glow (Kingma & Dhariwal, 2018).

![01963e2e-bb21-7192-92f6-976bdd0555bf_2_180_1007_653_376_0.jpg](images/01963e2e-bb21-7192-92f6-976bdd0555bf_2_180_1007_653_376_0.jpg)

Figure 2. Architecture of MSFlow-Wavelet.

Training: The overall model is trained to maximize the log-probability of the joint multi-resolution image, given by Equation 8 as the sum of the likelihoods of the images at each resolution. Equivalently, our model is trained to minimize the bits-per-dimension (BPD) of the image at the finest resolution $S$ with ${D}_{S}$ pixels:

$$
\operatorname{bpd}\left( {\mathbf{x}}_{s \leq S}\right) = \frac{-\log p\left( {\mathbf{x}}_{s \leq S}\right) }{{D}_{S}\log 2}
$$

$$
= \frac{-1}{{D}_{S}\log 2}\left\lbrack {\mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{z}}_{s}}}\right) }\right\rbrack \tag{10}
$$

Since each CNF ${g}_{s}$ independently models the conditional distribution of the image at that resolution, we train each ${g}_{s}$ to minimize each $\operatorname{bpd}\left( {\mathbf{x}}_{{s}^{\prime } \leq s}\right)$ step by step, from the coarsest resolution $\left( {s = 0}\right)$ to the finest resolution $\left( {s = S}\right)$, having frozen the ${g}_{j}, j \neq s$.

We use FFJORD (Grathwohl et al., 2019) as the baseline model for our CNFs. In addition, to speed up the training of FFJORD models by stabilizing the learnt dynamics, FFJORD RNODE (Finlay et al., 2020) introduced two regularization terms: the kinetic energy of the flow and the Jacobian norm. STEER (Ghosh et al., 2020) introduced temporal regularization by making the final time of integration stochastic.
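The claim $\Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{y}}_{s}} = 0$ rests on $T$ being orthogonal. For a single $2 \times 2$ Haar step this is easy to verify numerically (a sketch with one normalization convention assumed; the authors' implementation may differ):

```python
import numpy as np

# One 2x2 Haar step: maps 4 pixels to 1 coarse + 3 wavelet coefficients.
# With the 1/2 normalization, T is orthogonal, so |det T| = 1 and the
# change of variables contributes Δlog p = log|det T| = 0.
T = 0.5 * np.array([[ 1,  1,  1,  1],   # coarse (scaled average)
                    [ 1, -1,  1, -1],   # horizontal detail
                    [ 1,  1, -1, -1],   # vertical detail
                    [ 1, -1, -1,  1]])  # diagonal detail
assert np.allclose(T @ T.T, np.eye(4))          # orthogonal rows
assert np.isclose(abs(np.linalg.det(T)), 1.0)   # |det T| = 1 ⇒ Δlog p = 0
x = np.array([3.0, 1.0, 4.0, 1.5])              # a 2x2 pixel block, flattened
y = T @ x
assert np.allclose(T.T @ y, x)                  # invertible: T⁻¹ = Tᵀ
```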
Generation: Assuming each ${g}_{s}$ is invertible (which CNFs are), we may then generate images using ancestral sampling: we first sample the ${\mathbf{z}}_{s}$'s from the latent noise distribution, and transform them back into image space progressively, from coarser to finer resolutions, through the CNFs:

$$
\left\{ \begin{array}{l} {\mathbf{x}}_{0} = {g}_{0}^{-1}\left( {\mathbf{z}}_{0}\right) \\ {\mathbf{x}}_{s} = {T}^{-1}\left( {\mathbf{y}}_{s}\right) = {T}^{-1}\left( {{g}_{s}^{-1}\left( {{\mathbf{z}}_{s} \mid {\mathbf{x}}_{s - 1}}\right) }\right) \;\forall s > 0 \end{array}\right. \tag{11}
$$

## 3. Related work

Several prior works on normalizing flows (Kingma & Dhariwal, 2018; Song et al., 2019; Ma et al., 2019; Yu et al., 2020) build on RealNVP (Dinh et al., 2017). Although they achieve great results in terms of BPD and image quality, they nonetheless rely on significantly more parameters and several GPU-hours of training.

Our MSFlow-Wavelet model is quite similar to the recently published WaveletFlow (Yu et al., 2020). However, WaveletFlow builds on the Glow (Kingma & Dhariwal, 2018) architecture, while ours builds on CNFs (Grathwohl et al., 2019; Finlay et al., 2020). Moreover, WaveletFlow applies certain techniques to obtain better samples from its model. We have so far not used such techniques for generation, but they could potentially help generate better samples from our models.

## 4. Experimental results

We train MSFlow-Image and MSFlow-Wavelet models on the CIFAR10 (Krizhevsky et al., 2009) dataset at a finest resolution of ${32} \times {32}$, and the ImageNet (Deng et al., 2009) dataset at ${32} \times {32}$, ${64} \times {64}$, and ${128} \times {128}$. We build on top of the code provided by Finlay et al. (2020) ${}^{1}$. In all cases, we train using only one NVIDIA V100 GPU with 16 GB of memory.
Ablation study on regularizers: We perform an ablation study using the two regularizations mentioned above: with/without RNODE (Finlay et al., 2020), and with/without STEER (Ghosh et al., 2020). We find that, consistently across all cases, FFJORD RNODE achieves superior BPD in less time. In some cases, FFJORD fails to train.

Ablation study on resolutions: We train models with varying numbers of total resolutions. Increasing the number of

---

${}^{1}$ https://github.com/cfinlay/ffjord-rnode

---

Table 1. Unconditional image generation metrics (lower is better in all cases): number of parameters in the model, bits-per-dimension, time (in hours). Most previous models use multiple GPUs for training; all our models were trained on only one NVIDIA V100 GPU. ${}^{ \ddagger }$ As reported in (Ghosh et al., 2020). ${}^{ * }$ FFJORD RNODE (Finlay et al., 2020) used 4 GPUs to train on ImageNet64. 'x': fails to train. Blank spaces indicate unreported values.
| Model | CIFAR10 Param | CIFAR10 BPD | CIFAR10 Time | ImageNet32 Param | ImageNet32 BPD | ImageNet32 Time | ImageNet64 Param | ImageNet64 BPD | ImageNet64 Time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **1-scale Continuous Normalizing Flow** | | | | | | | | | |
| FFJORD (Grathwohl et al., 2019) | | 3.40 | ≥5 days | | ${3.96}^{\ddagger}$ | >5 days${}^{\ddagger}$ | | x | x |
| FFJORD RNODE (Finlay et al., 2020) | 1.36M | 3.38 | 31.84 | 2.00M | ${2.36}^{\ddagger}$ | ${30.1}^{\ddagger}$ | | ${3.83}^{*}$ | ${64.1}^{*}$ |
| FFJORD + STEER (Ghosh et al., 2020) | | 3.40 | 86.34 | | 3.84 | >5 days | 2.00M | x | x |
| FFJORD RNODE + STEER (Ghosh et al., 2020) | | 3.397 | 22.24 | | 2.35 | 24.9 | | x | x |
| **(Ours) 2-scale MSFlow-Image** | | | | | | | | | |
| 2-scale FFJORD | | 1.85 | 17.89 | | x | x | 0.16M | x | x |
| 2-scale FFJORD RNODE | 0.48M | 1.69 | 16.37 | | 1.92 | 26.20 | | 1.54 | 51.21 |
| 2-scale FFJORD + STEER | | 2.04 | 18.76 | 0.16M | 2.36 | 20.12 | | x | x |
| 2-scale FFJORD RNODE + STEER | | 1.74 | 18.43 | | 1.97 | 65.16 | | 1.58 | 66.76 |
| **(Ours) 3-scale MSFlow-Image** | | | | | | | | | |
| 3-scale FFJORD | 0.48M | 1.54 | 21.51 | | 2.00 | 30.54 | 0.13M | x | x |
| 3-scale FFJORD RNODE | | 1.32 | 21.48 | | 1.66 | 41.17 | | 1.21 | 60.89 |
| 3-scale FFJORD + STEER | | 1.72 | 21.09 | 0.13M | 2.21 | 21.36 | | x | x |
| 3-scale FFJORD RNODE + STEER | | 1.44 | 23.44 | | 1.68 | 54.05 | | 1.26 | 59.14 |
| **(Ours) 4-scale MSFlow-Image** | | | | | | | | | |
| 4-scale FFJORD | | 1.42 | 19.95 | | 1.84 | 30.63 | 0.17M | x | x |
| 4-scale FFJORD RNODE | 0.64M | 1.28 | 19.08 | 0.17M | 1.62 | 42.60 | | 1.18 | 65.6 |
| 4-scale FFJORD + STEER | | 1.88 | 17.73 | | x | x | | x | x |
| 4-scale FFJORD RNODE + STEER | | 1.44 | 17.60 | | 1.63 | 62.82 | | 1.36 | 66.2 |
| (Ours) 5-scale MSFlow-Image | 0.81M | 1.28 | 19.42 | | | | 0.22M | 1.17 | 71.33 |
| (Ours) 6-scale MSFlow-Image | 0.97M | 1.24 | 20.52 | | | | | | |
| (Ours) 2-scale MSFlow-Wavelet | 0.50M | 3.56 | 17.17 | 0.33M | 3.92 | 15.30 | | | |
| (Ours) 3-scale MSFlow-Wavelet | 0.76M | 3.69 | 13.99 | 0.51M | 4.00 | 17.70 | 0.51M | 4.04 | 37.82 |
| (Ours) 4-scale MSFlow-Wavelet | 1.03M | 3.77 | 13.94 | 0.69M | 4.02 | 16.83 | | | |
| (Ours) 5-scale MSFlow-Wavelet | 1.29M | 3.87 | 10.73 | | | | | | |
total resolutions consistently improves BPD across models with the same number of parameters per resolution, except for MSFlow-Wavelet, where we observe the opposite trend.

Progressive training: Since each resolution can be trained independently, we can train an MSFlow-Image model on ImageNet128 by training only the finest resolution ($128 \times 128$), conditioned on ${64} \times {64}$ images, for 1 epoch, and then attaching it to a 4-resolution model trained on ImageNet64 from scratch. This 5-resolution ImageNet128 model gives a BPD of 1.13.

## 5. Fundamental flaw

However, we note that there is a fundamental flaw in this calculation of BPD: we calculated the BPD of ${\mathbf{x}}_{s \leq S}$, while prior works report the BPD of ${\mathbf{x}}_{S}$. This implies that our model maps the joint distribution of images to joint noise, so the model also covers combinations in which the coarser images do not correspond to the finest image. This does not apply to our MSFlow-Wavelet models, since the wavelet formulation ensures the consistency of the coarser images with respect to the finest image.

Hence, to find the likelihood of ${\mathbf{x}}_{S}$ under our MSFlow-Image model, the likelihood of ${\mathbf{x}}_{s \leq S}$ needs to be marginalized over the entire subspace of lower-resolution images. This is intractable. To make it tractable, we could approximate this marginal using Monte Carlo integration, by sampling multiple lower-resolution images and summing over the respective joint likelihoods. Inevitably, this leads to much greater BPD values than the ones reported in Table 1. Hence, Table 1 is not a fair comparison to make, except for the MSFlow-Wavelet rows.

## 6. 
Conclusion

We have presented a multi-resolution approach to Continuous Normalizing Flows, and performed exact likelihood calculations on several benchmark image datasets by training on a single GPU, in less time and with a fraction of the number of parameters of other competitive models. However, we found that our formulation is fundamentally flawed in the computation of BPD for a single image. We explored ideas for fixing this issue, and found that formulations like the wavelet one, which ensure the consistency of the coarsened images with respect to the finest image, can help alleviate this problem.

## References

Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.

Chen, R. T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 2018.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.

Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. In International Conference on Learning Representations, 2017.

Finlay, C., Jacobsen, J.-H., Nurbekyan, L., and Oberman, A. How to train your neural ode: the world of jacobian and kinetic regularization. International Conference on Machine Learning, 2020.

Ghosh, A., Behl, H. S., Dupont, E., Torr, P. H., and Namboodiri, V. Steer: Simple temporal regularization for neural odes. In Advances in Neural Information Processing Systems, 2020.

Goodfellow, I., Bengio, Y., Courville, A., and Bengio, Y. Deep learning, volume 1. MIT Press, 2016.

Grathwohl, W., Chen, R. T. Q., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models.
International Conference on Learning Representations, 2019.

Ho, J., Chen, X., Srinivas, A., Duan, Y., and Abbeel, P. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In International Conference on Machine Learning, 2019.

Jimenez Rezende, D. and Mohamed, S. Variational inference with normalizing flows. In International Conference on Machine Learning, pp. 1530-1538, 2015.

Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8110-8119, 2020.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible $1 \times 1$ convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Kobyzev, I., Prince, S., and Brubaker, M. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009. URL https://www.cs.toronto.edu/~kriz/cifar.html.

Ma, X., Kong, X., Zhang, S., and Hovy, E. Macow: Masked convolutional generative flow. In Advances in Neural Information Processing Systems, pp. 5893-5902, 2019.

Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019.

Shaham, T. R., Dekel, T., and Michaeli, T. Singan: Learning a generative model from a single natural image. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4570-4580, 2019.

Song, Y., Meng, C., and Ermon, S. Mintnet: Building invertible neural networks with masked convolutions.
In Advances in Neural Information Processing Systems, pp. 11004-11014, 2019.

Tabak, E. G. and Turner, C. V. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164, 2013.

Vahdat, A. and Kautz, J. Nvae: A deep hierarchical variational autoencoder. In Advances in Neural Information Processing Systems, 2020.

Yu, J., Derpanis, K., and Brubaker, M. Wavelet flow: Fast training of high resolution normalizing flows. In Advances in Neural Information Processing Systems, 2020. \ No newline at end of file diff --git a/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/uWhWujR6Tl0/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/uWhWujR6Tl0/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..9aebc3f3aab9461f0c6d9847eb6ab391aaa4c0d1 --- /dev/null +++ b/papers/ICML/ICML 2021/ICML 2021 Workshop/ICML 2021 Workshop INNF/uWhWujR6Tl0/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,288 @@

§ MULTI-SCALE CONTINUOUS NORMALIZING FLOWS

§ ANONYMOUS AUTHORS ${}^{1}$

§ ABSTRACT

We introduce a multi-scale variant of Continuous Normalizing Flows, and explore the computation of likelihood values. We also introduce a wavelet version of the model. However, we find that this formulation is flawed in the computation of BPD, and explore ways to alleviate this problem.

§ 1. INTRODUCTION

Reversible generative models derived through the use of the change-of-variables technique (Dinh et al., 2017; Kingma & Dhariwal, 2018; Ho et al., 2019; Yu et al., 2020) are growing in interest as alternatives to generative models based on Generative Adversarial Networks (GANs) (Goodfellow et al., 2016) and Variational Autoencoders (VAEs) (Kingma & Welling, 2013).
A change-of-variables approach facilitates the transformation of a simple known probability distribution, such as Gaussian noise, into a more complex model distribution, such as that of images. Reversible generative models using this technique are attractive because they enable efficient density estimation, efficient sampling, and allow exact likelihoods to be computed. A promising variation of the change-of-variables approach is based on a continuous-time variant of normalizing flows (Chen et al., 2018; Grathwohl et al., 2019), which uses an integral over continuous-time dynamics to transform a base distribution into the model distribution. This approach uses ordinary differential equations (ODEs) specified by a neural network, or Neural ODEs.

In this work, we consider a direct multi-resolution approach to continuous normalizing flows. While state-of-the-art GANs and VAEs exploit the multi-resolution properties of images, and recently top-performing methods also inject noise at each resolution (Brock et al., 2019; Shaham et al., 2019; Karras et al., 2020; Vahdat & Kautz, 2020), only recently have normalizing flows exploited the multi-resolution properties of images, using wavelets (Yu et al., 2020).

We use Continuous Normalizing Flows (CNFs) in a multi-resolution fashion to generate an image at finer resolutions conditioned on the immediate coarser-resolution image. A high-level view of our approach is shown in Figure 1.

Figure 1. The architecture of our MSFlow-Image method. Continuous normalizing flows are used to generate images at each resolution, with finer resolutions being generated conditioned on the coarser image one level above.

§ 2. OUR METHOD

Since images naturally exhibit structure in resolution, images can be decomposed into representations at multiple resolutions. We take advantage of this property by first decomposing an image in resolution space, i.e.
into a series of images at coarser resolutions: $\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{1},\ldots {\mathbf{x}}_{S}}\right) = {\mathbf{x}}_{s \leq S}$. We then train an invertible generative model that normalizes this joint multi-resolution image into multi-resolution noise.

§ 2.1. NORMALIZING FLOWS

We wish to train a generative model on a multi-resolution set of true images, i.e. find a probability distribution $p\left( {\mathbf{x}}_{s \leq S}\right) = p\left( {{\mathbf{x}}_{0},{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{S}}\right)$ that matches the true data distribution. Normalizing flows (Tabak & Turner, 2013; Jimenez Rezende & Mohamed, 2015; Dinh et al., 2017; Papamakarios et al., 2019; Kobyzev et al., 2020) are good candidates for such a model, as they are probabilistic generative models that permit exact likelihood computation and can be run in reverse to generate novel data from the model's distribution. This allows model comparison and measuring generalization to unseen data. Normalizing flows are trained by maximizing the log likelihood of the input images. If a normalizing flow produces output $\mathbf{z}$ from an input image $\mathbf{x}$, the change-of-variables formula provides the likelihood of the image under this transformation as:

$$
\log p\left( \mathbf{x}\right) = \log p\left( \mathbf{z}\right) + \log \left| {\det \frac{\mathrm{d}\mathbf{z}}{\mathrm{d}\mathbf{x}}}\right| \tag{1}
$$

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author.

Preliminary work. Under review by INNF+ 2021. Do not distribute.

$\log p\left( \mathbf{z}\right)$ is computed as the log probability of $\mathbf{z}$ under the noise distribution (typically standard Gaussian).

§ 2.2. 
JOINT MULTI-RESOLUTION REPRESENTATION

We depart from the typical usage of the multi-resolution representation of images with the following observation: if we do not assume that the images at different resolutions are derived from a single fine image, we obtain a joint distribution over all possible images at all resolutions. Suppose $\mathbf{x}$ and $\mathbf{y}$ are two different images. Under this joint multi-resolution distribution, $\left( {{\mathbf{x}}_{S - 2},{\mathbf{x}}_{S - 1},{\mathbf{x}}_{S}}\right)$ and $\left( {{\mathbf{y}}_{S - 2},{\mathbf{y}}_{S - 1},{\mathbf{y}}_{S}}\right)$ are valid multi-resolution images, but so are $\left( {{\mathbf{y}}_{S - 2},{\mathbf{x}}_{S - 1},{\mathbf{y}}_{S}}\right)$ and $\left( {{\mathbf{x}}_{S - 2},{\mathbf{y}}_{S - 1},{\mathbf{x}}_{S}}\right)$. Our real data distribution of multi-resolution images is supported on those multi-resolution data points that are correlated in resolution space. This is analogous to the fact that, among all possible single-resolution images, only those with correlated pixels in width and height are real/natural images, as opposed to noise images without any correlation among pixels.

§ 2.3. MULTI-RESOLUTION NORMALIZING FLOWS

We now wish to map the joint distribution of multi-resolution images ${\mathbf{x}}_{s \leq S}$ to "joint" multi-resolution noise ${\mathbf{z}}_{s \leq S}$. In this case, the multi-resolution change-of-variables formula is:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \log p\left( {\mathbf{z}}_{s \leq S}\right) + \log \left| {\det \frac{\partial {\mathbf{z}}_{s \leq S}}{\partial {\mathbf{x}}_{s \leq S}}}\right| \tag{2}
$$

The multi-resolution structure of the data simplifies the calculation of the Jacobian determinant. To illustrate this, choose a non-redundant basis of multi-resolution variables such that ${\mathbf{z}}_{s}$ at any resolution is linearly independent of ${\mathbf{x}}_{s + j}, j > 0$ at finer resolutions.
This leads to the following block lower-triangular structure in the variables:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right)
$$

$$
= \mathop{\sum }\limits_{{s = 0}}^{S}\log p\left( {\mathbf{z}}_{s}\right) + \log \left| {\det \left\lbrack \begin{matrix} \frac{\partial {\mathbf{z}}_{0}}{\partial {\mathbf{x}}_{0}} & 0 & \cdots & 0 \\ \frac{\partial {\mathbf{z}}_{1}}{\partial {\mathbf{x}}_{0}} & \frac{\partial {\mathbf{z}}_{1}}{\partial {\mathbf{x}}_{1}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial {\mathbf{z}}_{S}}{\partial {\mathbf{x}}_{0}} & \frac{\partial {\mathbf{z}}_{S}}{\partial {\mathbf{x}}_{1}} & \cdots & \frac{\partial {\mathbf{z}}_{S}}{\partial {\mathbf{x}}_{S}} \end{matrix}\right\rbrack }\right|
$$

$$
= \mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \log \left| {\det \frac{\partial {\mathbf{z}}_{s}}{\partial {\mathbf{x}}_{s}}}\right| }\right) \tag{3}
$$

We train a normalizing flow at each resolution to compute the likelihood of the image up to that resolution using Equation 3. This allows us to learn the normalizing flows at each resolution independently, and in parallel.

Since the Jacobian is block lower-triangular, the off-diagonal blocks do not contribute to the final log-probability. Hence, we can freely condition each normalizing flow on the coarser images, by treating the coarser images as independent variables. This allows us to learn only the higher-level information at each resolution.
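The determinant identity behind Equation 3 (the log-determinant of a block lower-triangular Jacobian equals the sum of the log-determinants of its diagonal blocks) can be checked numerically; a minimal NumPy sketch with hypothetical block sizes, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-resolution block sizes; the diagonal blocks play the role
# of dz_s/dx_s, the sub-diagonal blocks the role of dz_s/dx_j for j < s.
sizes = [2, 3, 4]
diag_blocks = [rng.normal(size=(n, n)) for n in sizes]

offs = np.concatenate(([0], np.cumsum(sizes)))
D = offs[-1]
J = np.zeros((D, D))
for i, B in enumerate(diag_blocks):
    J[offs[i]:offs[i + 1], offs[i]:offs[i + 1]] = B
    # arbitrary sub-diagonal blocks: they do not affect the determinant
    for j in range(i):
        J[offs[i]:offs[i + 1], offs[j]:offs[j + 1]] = rng.normal(size=(sizes[i], sizes[j]))

logdet_full = np.linalg.slogdet(J)[1]                          # log|det J|
logdet_sum = sum(np.linalg.slogdet(B)[1] for B in diag_blocks)  # per-resolution terms
assert np.isclose(logdet_full, logdet_sum)
```

This is why the joint likelihood factorizes into independent per-resolution terms.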
We use this to our advantage, and train each normalizing flow ${g}_{s}$ between ${\mathbf{x}}_{s}$ and ${\mathbf{z}}_{s}$ conditioned on the immediate coarser ${\mathbf{x}}_{s - 1}$, making a Markov assumption:

$$
{\mathbf{z}}_{0} = {g}_{0}\left( {\mathbf{x}}_{0}\right) ;\;{\mathbf{z}}_{s} = {g}_{s}\left( {{\mathbf{x}}_{s} \mid {\mathbf{x}}_{s - 1}}\right) \;\forall s > 0 \tag{4}
$$

Hence, Equation 3 can be rewritten as:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \log \mathcal{N}\left( {{g}_{0}\left( {\mathbf{x}}_{0}\right) ;\mathbf{0},\mathbf{I}}\right) + \log \left| {\det \frac{\partial {g}_{0}}{\partial {\mathbf{x}}_{0}}}\right|
$$

$$
+ \mathop{\sum }\limits_{{s = 1}}^{S}\left( {\log \mathcal{N}\left( {{g}_{s}\left( {{\mathbf{x}}_{s} \mid {\mathbf{x}}_{s - 1}}\right) ;\mathbf{0},\mathbf{I}}\right) + \log \left| {\det \frac{\partial {g}_{s}}{\partial {\mathbf{x}}_{s}}}\right| }\right) \tag{5}
$$

§ 2.4. MULTI-RESOLUTION CONTINUOUS NORMALIZING FLOWS

We choose to use Continuous Normalizing Flows (CNFs) at each resolution (Chen et al., 2018; Grathwohl et al., 2019), since they have recently been shown to effectively model image distributions using a fraction of the number of parameters typically used in normalizing flows (and non-flow-based approaches). At each resolution, each CNF ${g}_{s}$ transforms its state (say $\mathbf{v}\left( t\right)$) using a Neural ODE (Chen et al., 2018) with neural network ${f}_{s}$:

$$
\mathbf{v}\left( {t}_{1}\right) = {g}_{s}\left( {\mathbf{v}\left( {t}_{0}\right) \mid \mathbf{c}}\right) = \mathbf{v}\left( {t}_{0}\right) + {\int }_{{t}_{0}}^{{t}_{1}}{f}_{s}\left( {\mathbf{v}\left( t\right) ,t,\mathbf{c}}\right) \mathrm{d}t \tag{6}
$$

Chen et al. (2018) and Grathwohl et al. (2019) proposed an instantaneous variant of the change-of-variables formula for CNFs, which expresses the change in log-probability of the state of the Neural ODE, i.e.
$\Delta \log {p}_{\mathbf{v}}$, as a differential equation:

$$
\Delta \log {p}_{\mathbf{v}\left( {t}_{0}\right) \rightarrow \mathbf{v}\left( {t}_{1}\right) } = - {\int }_{{t}_{0}}^{{t}_{1}}\operatorname{Tr}\left( \frac{\partial {f}_{s}}{\partial \mathbf{v}\left( t\right) }\right) \mathrm{d}t \tag{7}
$$

Hence, the ODE solver solves for the state augmented with the above differential, obtaining both the final state and the change in log-probability simultaneously. Thus, the log-probability at each resolution in eqs. (3) and (5) can be computed as:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{z}}_{s}}}\right) \tag{8}
$$

We call this model MSFlow-Image.

In general, at each resolution (except the coarsest), the image ${\mathbf{x}}_{s}$ could first be converted to another representation ${\mathbf{y}}_{s}$ using a suitable orthogonal bijective transformation $T$ from ${\mathbf{x}}_{s}$ to ${\mathbf{y}}_{s}$, so that $\Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{y}}_{s}} = 0$:

$$
\log p\left( {\mathbf{x}}_{s \leq S}\right) = \mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \Delta \log {p}_{{\mathbf{y}}_{s} \rightarrow {\mathbf{z}}_{s}} + \underbrace{\Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{y}}_{s}}}_{= 0}}\right) \tag{9}
$$

In the simplest case, ${\mathbf{y}}_{s} = {\mathbf{x}}_{s}$, which is MSFlow-Image. A more complex orthogonal transform to use is the Haar wavelet transform; we call this model Multi-Scale Continuous Normalizing Flow - Wavelet (MSFlow-Wavelet). At each resolution, ${\mathbf{x}}_{s}$ is transformed into a composition of the 3 wavelet coefficients ${\mathbf{w}}_{s}$ and the coarser version ${\mathbf{x}}_{s - 1}$, i.e. ${\mathbf{y}}_{s} = \left( {{\mathbf{w}}_{s},{\mathbf{x}}_{s - 1}}\right)$.
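To make the representation ${\mathbf{y}}_{s} = \left( {{\mathbf{w}}_{s},{\mathbf{x}}_{s - 1}}\right)$ concrete, the sketch below (NumPy; an illustration, not the paper's implementation) performs one level of the orthonormal 2D Haar transform: it is exactly invertible and, being orthonormal, contributes zero log-determinant:

```python
import numpy as np

def haar2d(x):
    """One level of the orthonormal 2D Haar transform.
    Returns the coarse image x_{s-1} and the 3 detail coefficients w_s."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    coarse = (a + b + c + d) / 2
    wh = (a + b - c - d) / 2   # horizontal detail
    wv = (a - b + c - d) / 2   # vertical detail
    wd = (a - b - c + d) / 2   # diagonal detail
    return coarse, (wh, wv, wd)

def ihaar2d(coarse, w):
    """Inverse of haar2d: reassemble the finer image from (w_s, x_{s-1})."""
    wh, wv, wd = w
    a = (coarse + wh + wv + wd) / 2
    b = (coarse + wh - wv - wd) / 2
    c = (coarse - wh + wv - wd) / 2
    d = (coarse - wh - wv + wd) / 2
    H, W = coarse.shape
    x = np.empty((2 * H, 2 * W))
    x[0::2, 0::2], x[0::2, 1::2] = a, b
    x[1::2, 0::2], x[1::2, 1::2] = c, d
    return x

x = np.random.default_rng(0).normal(size=(8, 8))
coarse, w = haar2d(x)
assert np.allclose(ihaar2d(coarse, w), x)  # exactly invertible
# orthonormal: energy is preserved, so the transform has zero log-determinant
assert np.isclose((x ** 2).sum(), (coarse ** 2).sum() + sum((wi ** 2).sum() for wi in w))
```

Applying `haar2d` recursively to the coarse output yields the full multi-resolution decomposition that MSFlow-Wavelet operates on.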
In this case, the conditioning becomes more explicit: each CNF maps the wavelet coefficients ${\mathbf{w}}_{s}$ to a noise sample ${\mathbf{z}}_{s}$ conditioned on ${\mathbf{x}}_{s - 1}$ (see Figure 2), similar to WaveletFlow (Yu et al., 2020), which builds on Glow (Kingma & Dhariwal, 2018).

Figure 2. Architecture of MSFlow-Wavelet.

Training: The overall model is trained to maximize the log-probability of the joint multi-resolution image, given by Equation 8 as the sum of the likelihoods of the images at each resolution. Equivalently, our model is trained to minimize the bits-per-dimension (BPD) of the image at the finest resolution $S$ with ${D}_{S}$ pixels:

$$
\operatorname{bpd}\left( {\mathbf{x}}_{s \leq S}\right) = \frac{-\log p\left( {\mathbf{x}}_{s \leq S}\right) }{{D}_{S}\log 2}
$$

$$
= \frac{-1}{{D}_{S}\log 2}\left\lbrack {\mathop{\sum }\limits_{{s = 0}}^{S}\left( {\log p\left( {\mathbf{z}}_{s}\right) + \Delta \log {p}_{{\mathbf{x}}_{s} \rightarrow {\mathbf{z}}_{s}}}\right) }\right\rbrack \tag{10}
$$

Since each CNF ${g}_{s}$ independently models the conditional distribution of the image at that resolution, we train each ${g}_{s}$ to minimize each $\operatorname{bpd}\left( {\mathbf{x}}_{{s}^{\prime } \leq s}\right)$ step by step from the coarsest resolution $\left( {s = 0}\right)$ to the finest resolution $\left( {s = S}\right)$, having frozen ${g}_{j} : j \neq s$.

We use FFJORD (Grathwohl et al., 2019) as the baseline model for our CNFs. In addition, to speed up the training of FFJORD models by stabilizing the learnt dynamics, FFJORD RNODE (Finlay et al., 2020) introduced two regularization terms: the kinetic energy of the flow and the Jacobian norm. STEER (Ghosh et al., 2020) introduced temporal regularization by making the final time of integration stochastic.
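The objective in Equation 10 amounts to summing, over resolutions, the standard-normal log-density of each ${\mathbf{z}}_{s}$ and the log-density change accumulated by the corresponding CNF, then normalizing by the dimensionality of the finest image. A minimal sketch (NumPy; the tensors and $\Delta \log p$ values are placeholders for what trained CNFs would return):

```python
import numpy as np

def standard_normal_logp(z):
    # log N(z; 0, I), summed over all dimensions of z
    return -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()

def bpd(zs, delta_logps, d_finest):
    """Bits-per-dimension of the joint multi-resolution image (Equation 10)."""
    log_p = sum(standard_normal_logp(z) + dlp for z, dlp in zip(zs, delta_logps))
    return -log_p / (d_finest * np.log(2))

# Hypothetical 3-resolution example: 2x2, 4x4 and 8x8 single-channel images.
rng = np.random.default_rng(0)
zs = [rng.normal(size=(n, n)) for n in (2, 4, 8)]
delta_logps = [0.0, 0.0, 0.0]  # placeholders for the CNF trace integrals (Eq. 7)
joint_bpd = bpd(zs, delta_logps, d_finest=8 * 8)
assert joint_bpd > 0 and np.isfinite(joint_bpd)
```

Training then minimizes this quantity resolution by resolution, coarsest to finest, with the other flows frozen.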
Generation: Assuming each ${g}_{s}$ is invertible (which CNFs are), we may then generate images using ancestral sampling: we first sample ${\mathbf{z}}_{s}$'s from a latent noise distribution, and transform them backwards into image space progressively from coarser to finer resolutions through the CNFs:

$$
\left\{ \begin{array}{l} {\mathbf{x}}_{0} = {g}_{0}^{-1}\left( {\mathbf{z}}_{0}\right) \\ {\mathbf{x}}_{s} = {T}^{-1}\left( {\mathbf{y}}_{s}\right) = {T}^{-1}\left( {{g}_{s}^{-1}\left( {{\mathbf{z}}_{s} \mid {\mathbf{x}}_{s - 1}}\right) }\right) \;\forall s > 0 \end{array}\right. \tag{11}
$$

§ 3. RELATED WORK

Several prior works on normalizing flows (Kingma & Dhariwal, 2018; Song et al., 2019; Ma et al., 2019; Yu et al., 2020) build on RealNVP (Dinh et al., 2017). Although they achieve strong results in terms of BPD and image quality, they report results obtained with significantly more parameters and many GPU-hours of training.

Our MSFlow-Wavelet model is quite similar to the recently published WaveletFlow (Yu et al., 2020). However, WaveletFlow builds on the Glow (Kingma & Dhariwal, 2018) architecture, while ours builds on CNFs (Grathwohl et al., 2019; Finlay et al., 2020). Moreover, WaveletFlow applies certain techniques to obtain better samples from its model. We have so far not used such techniques for generation, but they could potentially help generate better samples from our models.

§ 4. EXPERIMENTAL RESULTS

We train MSFlow-Image and MSFlow-Wavelet models on the CIFAR10 (Krizhevsky et al., 2009) dataset at a finest resolution of ${32} \times {32}$, and the ImageNet (Deng et al., 2009) dataset at ${32} \times {32}$, ${64} \times {64}$, and ${128} \times {128}$. We build on top of the code provided in (Finlay et al., 2020)${}^{1}$. In all cases, we train using only one NVIDIA V100 GPU with 16GB.
Ablation study on regularizers: We perform an ablation study with the two regularizations mentioned above: with/without RNODE (Finlay et al., 2020) and with/without STEER (Ghosh et al., 2020). We find that, consistently across all cases, FFJORD RNODE achieves better BPD in less time. In some cases, FFJORD fails to train.

Ablation study on resolutions: We train models with a varying number of total resolutions. Increasing the number of total resolutions consistently improves BPD across models with the same number of parameters per resolution, except for MSFlow-Wavelet, where we observe the opposite trend.

${}^{1}$ https://github.com/cfinlay/ffjord-rnode

Table 1. Unconditional image generation metrics (lower is better in all cases): number of parameters in the model, bits-per-dimension, time (in hours). Most previous models use multiple GPUs for training; all our models were trained on only one NVIDIA V100 GPU. ${}^{\ddagger}$ As reported in (Ghosh et al., 2020). ${}^{*}$ FFJORD RNODE (Finlay et al., 2020) used 4 GPUs to train on ImageNet64. 'x': Fails to train. Blank spaces indicate unreported values.

\begin{tabular}{l|ccc|ccc|ccc}
 & \multicolumn{3}{c|}{CIFAR10} & \multicolumn{3}{c|}{ImageNet32} & \multicolumn{3}{c}{ImageNet64} \\
Model & Param & BPD & Time & Param & BPD & Time & Param & BPD & Time \\
\hline
\multicolumn{10}{c}{1-scale Continuous Normalizing Flow} \\
\hline
FFJORD (Grathwohl et al., 2019) & & 3.40 & $\geq$5 days & & 3.96$^{\ddagger}$ & $>$5 days$^{\ddagger}$ & & x & x \\
FFJORD RNODE (Finlay et al., 2020) & 1.36M & 3.38 & 31.84 & 2.00M & 2.36$^{\ddagger}$ & 30.1$^{\ddagger}$ & & 3.83$^{*}$ & 64.1$^{*}$ \\
FFJORD + STEER (Ghosh et al., 2020) & & 3.40 & 86.34 & & 3.84 & $>$5 days & 2.00M & x & x \\
FFJORD RNODE + STEER (Ghosh et al., 2020) & & 3.397 & 22.24 & & 2.35 & 24.9 & & x & x \\
\hline
\multicolumn{10}{c}{(Ours) 2-scale MSFlow-Image} \\
\hline
2-scale FFJORD & & 1.85 & 17.89 & & x & x & 0.16M & x & x \\
2-scale FFJORD RNODE & 0.48M & 1.69 & 16.37 & & 1.92 & 26.20 & & 1.54 & 51.21 \\
2-scale FFJORD + STEER & & 2.04 & 18.76 & 0.16M & 2.36 & 20.12 & & x & x \\
2-scale FFJORD RNODE + STEER & & 1.74 & 18.43 & & 1.97 & 65.16 & & 1.58 & 66.76 \\
\hline
\multicolumn{10}{c}{(Ours) 3-scale MSFlow-Image} \\
\hline
3-scale FFJORD & 0.48M & 1.54 & 21.51 & & 2.00 & 30.54 & 0.13M & x & x \\
3-scale FFJORD RNODE & & 1.32 & 21.48 & & 1.66 & 41.17 & & 1.21 & 60.89 \\
3-scale FFJORD + STEER & & 1.72 & 21.09 & 0.13M & 2.21 & 21.36 & & x & x \\
3-scale FFJORD RNODE + STEER & & 1.44 & 23.44 & & 1.68 & 54.05 & & 1.26 & 59.14 \\
\hline
\multicolumn{10}{c}{(Ours) 4-scale MSFlow-Image} \\
\hline
4-scale FFJORD & & 1.42 & 19.95 & & 1.84 & 30.63 & 0.17M & x & x \\
4-scale FFJORD RNODE & 0.64M & 1.28 & 19.08 & 0.17M & 1.62 & 42.60 & & 1.18 & 65.6 \\
4-scale FFJORD + STEER & & 1.88 & 17.73 & & x & x & & x & x \\
4-scale FFJORD RNODE + STEER & & 1.44 & 17.60 & & 1.63 & 62.82 & & 1.36 & 66.2 \\
\hline
(Ours) 5-scale MSFlow-Image & 0.81M & 1.28 & 19.42 & & & & 0.22M & 1.17 & 71.33 \\
(Ours) 6-scale MSFlow-Image & 0.97M & 1.24 & 20.52 & & & & & & \\
(Ours) 2-scale MSFlow-Wavelet & 0.50M & 3.56 & 17.17 & 0.33M & 3.92 & 15.30 & & & \\
(Ours) 3-scale MSFlow-Wavelet & 0.76M & 3.69 & 13.99 & 0.51M & 4.00 & 17.70 & 0.51M & 4.04 & 37.82 \\
(Ours) 4-scale MSFlow-Wavelet & 1.03M & 3.77 & 13.94 & 0.69M & 4.02 & 16.83 & & & \\
(Ours) 5-scale MSFlow-Wavelet & 1.29M & 3.87 & 10.73 & & & & & & \\
\end{tabular}

Progressive training: Since each resolution can be trained independently, we can train an MSFlow-Image model on ImageNet128 by training only the finest resolution ($128 \times 128$), conditioned on ${64} \times {64}$ images, for 1 epoch, and then attaching it to a 4-resolution model trained on ImageNet64 from scratch. This 5-resolution ImageNet128 model gives a BPD of 1.13.

§ 5. FUNDAMENTAL FLAW

However, we note that there is a fundamental flaw in this calculation of BPD: we calculated the BPD of ${\mathbf{x}}_{s \leq S}$, while prior works report the BPD of ${\mathbf{x}}_{S}$.
This implies that our model maps the joint distribution of images to joint noise, so the model also covers combinations in which the coarser images do not correspond to the finest image. This does not apply to our MSFlow-Wavelet models, since the wavelet formulation ensures the consistency of the coarser images with respect to the finest image.

Hence, to find the likelihood of ${\mathbf{x}}_{S}$ under our MSFlow-Image model, the likelihood of ${\mathbf{x}}_{s \leq S}$ needs to be marginalized over the entire subspace of lower-resolution images. This is intractable. To make it tractable, we could approximate this marginal using Monte Carlo integration, by sampling multiple lower-resolution images and summing over the respective joint likelihoods. Inevitably, this leads to much greater BPD values than the ones reported in Table 1. Hence, Table 1 is not a fair comparison to make, except for the MSFlow-Wavelet rows.

§ 6. CONCLUSION

We have presented a multi-resolution approach to Continuous Normalizing Flows, and performed exact likelihood calculations on several benchmark image datasets by training on a single GPU, in less time and with a fraction of the number of parameters of other competitive models. However, we found that our formulation is fundamentally flawed in the computation of BPD for a single image. We explored ideas for fixing this issue, and found that formulations like the wavelet one, which ensure the consistency of the coarsened images with respect to the finest image, can help alleviate this problem.
\ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/01lfX8qrh1O/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/01lfX8qrh1O/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..049b82edfe7c7ede738bbbe3ca114ffe91145303 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/01lfX8qrh1O/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,173 @@ +# Integrating Force-based Manipulation Primitives with Deep Learning-based Visual Servoing for Robotic Assembly + +Yee Sien Lee ${}^{1}$ , Nghia Vuong ${}^{1}$ , Nicholas Adrian ${}^{1,2}$ , and Quang-Cuong Pham ${}^{1}$ + +Abstract-This paper explores the idea of combining Deep Learning-based Visual Servoing and dynamic sequences of force-based Manipulation Primitives for robotic assembly tasks. Most current peg-in-hole algorithms assume the initial peg pose is already aligned within a minute deviation range before a tight-clearance insertion is attempted. With the integration of tactile and visual information, highly-accurate peg alignment before insertion can be achieved autonomously. In the alignment phase, the peg mounted on the end-effector can be aligned automatically from an initial pose with large displacement errors to an estimated insertion pose with errors lower than ${1.5}\mathrm{\;{mm}}$ in translation and ${1.5}^{ \circ }$ in rotation, all in one-shot Deep Learning-Based Visual Servoing estimation. A dynamic sequence of Manipulation Primitives will then be automatically generated via Reinforcement Learning to finish the last stage of insertion. + +## I. INTRODUCTION + +Robots have drastically increased industrial productivity by assisting humans to undertake high-volume and repetitive tasks such as lifting, assembly, and picking and placing of manufacturing parts. 
Specifically, robotic assembly has become progressively more common in the modern workspace, with increasingly complex and autonomous assembly tasks having been conducted in recent years [1]. However, robotic peg-in-hole assemblies require extremely high success rates and generalization to different contexts, which are well beyond today's industrial robots' autonomous capability [2], [3]. Manual design and fine-tuning are still required for such tasks. Therefore, to achieve autonomous dexterous robotic peg-in-hole assembly, Vuong et al. proposed the idea of automatically discovering the dynamic sequence of Manipulation Primitives (MPs) via Reinforcement Learning (RL) [4].

The research from [4] utilized a force-torque sensor to gauge the external force exerted on the end-effector. Besides achieving high accuracy, their method also showed promising generalization capability across different geometries. Nonetheless, the absence of a visual device limited the effectiveness of assembly, as the peg had to be readily aligned within a small deviation range before insertion.

This study aims to improve the solely force-based solution of [4] in terms of practicality in real-world settings by implementing Deep Learning-based Visual Servoing (DLVS) in the alignment phase. In this project, the DLVS work by Yu et al. [5] was chosen to complement the solely force-based solution. With DLVS capability, the hole pose can be estimated automatically in the alignment phase.

![01963e18-0b18-7651-a210-e2a00b439beb_0_910_454_735_458_0.jpg](images/01963e18-0b18-7651-a210-e2a00b439beb_0_910_454_735_458_0.jpg)

Fig. 1: Robotic assembly setup. A square peg was used in this study.
+ +The scope of this study includes: (1) achieving high-accuracy ( $< {1.5}\mathrm{\;{mm}}$ in translation and $< {1.5}^{ \circ }$ in rotation) autonomous estimation of the hole pose in the alignment phase, and (2) enhancing generalization across the workspace with the newly integrated DLVS feature. + +## II. RELATED WORK + +## A. Studies Achieving Alignment and Insertion + +A study focusing on fast, robust peg-in-hole insertion with continuous visual servoing was conducted in [6]. In the alignment phase, the peg was aligned to the hole based on heatmaps generated by a Deep Neural Network (DNN). After alignment, peg insertion was attempted via compliance using force feedback. This approach achieved high accuracy (peg-hole clearance of ${0.015}\mathrm{\;{mm}}$ ). However, there were two downsides: (1) the DNN in the alignment phase could only align position, not orientation; (2) in the insertion phase, a simple compliant force insertion was applied, which could not account for large rotational errors. + +[7] focused on achieving peg-in-hole assembly using multi-view images and DLVS trained on synthetic data. There were two steps in the alignment phase: (1) DLVS quickly moved the peg closer to the hole; (2) spiral search then precisely aligned the peg to the hole. The process then proceeded to the insertion phase, where impedance control was used to perform the insertion. The clearance of the hole in this experiment was ${0.4}\mathrm{\;{mm}}$ . However, this approach likewise could not correct orientation errors, due to the limitations of the DNN. Another downside was the long execution time: the approach needed more than 40 seconds to complete peg insertion from the start of the search phase. + +--- + +${}^{1}$ School of Mechanical and Aerospace Engineering, NTU, Singapore + +${}^{2}$ HP-NTU Digital Manufacturing Corporate Lab, Singapore + +--- + +## B.
Alignment Phase: Deep Learning-based Visual Servoing + +Deep learning-based visual servoing (DLVS) repeatedly estimates the camera pose while the robot moves towards the target pose, achieving high final accuracy [5]. + +Bateux et al. explored an efficient method of generating a dataset to train a robust neural network for DLVS, accounting for changing lighting conditions and the addition of random occlusions [8]. The network achieves sub-millimeter accuracy but can only estimate a camera pose with respect to a fixed reference pose; it has to be retrained every time a new reference pose is introduced, which is impractical in real-life usage. The authors therefore proposed, in the same paper, another neural network which accepts a pair of images taken at random poses as input. Nevertheless, this extension could only achieve centimeter accuracy. + +Yu et al. proposed a new neural network based on a Siamese architecture that can output the relative pose between any pair of images taken at arbitrary poses with sub-millimeter accuracy [5]. The network is also effective under varying lighting conditions and with the inclusion of random occlusions, and can even generalize to objects with similar physical appearances. During actual insertion experiments, the model achieved sub-millimeter accuracy in one-shot camera pose estimation from initial deviations of $\left( {-5,5}\right) \mathrm{{mm}}$ for $x$ and $y$ , $\left( {0,{10}}\right) \mathrm{{mm}}$ for $z$ , $\left( {-5,5}\right)$ deg for roll and pitch, and $\left( {-{10},{10}}\right)$ deg for yaw. + +## C. Insertion Phase: Force-based Manipulation Primitives in Robotic Assembly + +[3] proposed a robot manipulation technique that achieved faster cylindrical peg-in-hole insertions than humans. Nonetheless, the authors had to design the sequence of MPs manually, so the method did not generalize to different contexts. Moreover, the sequence of MPs was fixed before execution.
Thus, the suggested approach could not adjust to unpredictable circumstances during real-time execution. Other works where a rigid sequence of MPs was manually defined include [2], [9], and [10]. + +In another study, Vuong et al. utilized RL to automatically discover dynamic sequences of MPs [4]. The dynamically generated MPs could generalize across a wide range of assembly tasks and were more robust against environmental uncertainties during execution. Nonetheless, the proposed method was solely force-based: the lack of a visual sensor limited its effectiveness in real-life robotic assembly tasks, since the peg had to be pre-aligned before insertion. + +## III. METHODOLOGY + +## A. Task Description + +The peg-in-hole insertion task was split into two phases, namely (1) the alignment phase and (2) the insertion phase. + +In the alignment phase, the expected outcome was improved alignment between the peg and hole through the DLVS algorithm from [5]. The peg was then moved down until contact with the hole block. After rotating the resulting peg pose by ${180}^{ \circ }$ about the $x$ -axis and translating it down along the $z$ -axis by the hole depth, the estimated hole pose was recorded (Fig. 2). Before proceeding to the insertion phase, to check whether the alignment phase had already achieved complete insertion, the peg was moved along the $x, y, z$ axes under two stopping criteria: a maximum movement duration and a threshold on the sensed force. In unsuccessful attempts, the process subsequently proceeded to the insertion phase. + +In the insertion phase, a dynamic sequence of MPs based on the RL policy trained in [4] was generated for final insertion. After each MP step, the policy would inspect the insertion status by computing the distance between the latest achieved pose and the estimated goal pose.
If the $x$ and $y$ errors between the two poses were both less than $5\mathrm{\;{mm}}$ and the $z$ error was smaller than $4\mathrm{\;{mm}}$ , the insertion was deemed successful. + +![01963e18-0b18-7651-a210-e2a00b439beb_1_986_735_594_398_0.jpg](images/01963e18-0b18-7651-a210-e2a00b439beb_1_986_735_594_398_0.jpg) + +Fig. 2: Hole pose (green) estimation deduced from after-contact peg pose (blue) in alignment phase. + +## B. Deep Learning-based Visual Servoing Neural Network + +The neural network developed in [5] was designed to estimate the relative transformation between any two random camera poses. In training, the neural network took a pair of samples as input each time. Each sample in the pair comprised: (1) an image taken at a random pose and (2) the transformation matrix of the pose. The output of the network was the relative pose between the input pair of camera poses, in the form of a translation $(x, y, z)$ and a quaternion $(a, b, c, d)$ . + +Dataset Generation. Firstly, the peg was guided manually to the insertion pose. The peg was then lifted vertically by ${12}\mathrm{\;{cm}}$ so that a full view of the target hole could be captured. This end-effector pose was then recorded as the default pose ${T}_{d}$ (Fig. 3). Samples were generated at random poses around ${T}_{d}$ . The origins of the new arbitrary end-effector poses were randomly sampled within a vertical cylinder of ${10}\mathrm{\;{mm}}$ radius and ${20}\mathrm{\;{mm}}$ height $\left( {{Cy}{l}_{r = {10}, h = {20}}}\right)$ , with the origin of ${T}_{d}$ at the bottom center of the cylinder (Fig. 3). The rotation was randomly sampled within the range of $- {10}^{ \circ }$ to ${10}^{ \circ }$ for roll and pitch, and $- {20}^{ \circ }$ to ${20}^{ \circ }$ for yaw. At each random pose, an image was captured and the transformation matrix ${T}_{de}$ of the pose was recorded.
${T}_{de}$ is the transformation matrix which transforms the end-effector's coordinate frame to the default pose's coordinate frame. This image and ${T}_{de}$ formed a complete sample which would later be input to the neural network as part of an input pair. The two input images in a pair were identified as ${I}_{A}$ and ${I}_{B}$ . Before training, the ground-truth label, i.e. the relative transformation ${T}_{BA}$ , was calculated as follows: + +$$ +{T}_{BA} = \left\lbrack {T}_{dB}^{-1}\right\rbrack \left\lbrack {T}_{dA}\right\rbrack \tag{1} +$$ + +![01963e18-0b18-7651-a210-e2a00b439beb_2_148_451_737_434_0.jpg](images/01963e18-0b18-7651-a210-e2a00b439beb_2_148_451_737_434_0.jpg) + +Fig. 3: The red coordinate frame defines the default pose, ${T}_{d}$ . The origin of the new random end-effector pose, ${T}_{Oe}$ , can be anywhere in the red cylinder. + +As the robotic arm's shadows could affect the network's performance, the samples were generated with the hole placed at different positions and orientations to ensure the shadows did not always appear at the same position. The hole was placed at 5 points on the base, where 4 points formed the vertices of a 5-cm square and 1 point was at the center of the square. At each point, the hole block was rotated clockwise to ${0}^{ \circ },{30}^{ \circ },{60}^{ \circ },{90}^{ \circ }$ . At each orientation, 200 samples were collected. This amounted to 4000 samples (5 points $\times 4$ orientations $\times {200}$ samples). + +## C. Dynamic Sequences of Manipulation Primitives + +Dynamic sequences of MPs could be discovered automatically through RL [4]. The RL policies were trained entirely in Mujoco simulation and transferred directly to physical execution. + +Manipulation Primitives in the Insertion Phase. The MPs were defined as the appropriate motions of the end-effector in a task space.
The motions were controlled by three types of instructions: (1) a velocity command, (2) a force command, and (3) a stopping condition. The MPs were categorized into two families: free-space MPs and in-contact MPs. Free-space MPs were executed when the peg was not in contact with the hole block, while in-contact MPs were executed when the peg was touching the hole block. + +Using Reinforcement Learning to automatically generate dynamic sequences of Manipulation Primitives. The learning of dynamic sequences of MPs was regarded as a discounted episodic RL problem formulated as a Markov Decision Process (MDP) [11]. An MDP is defined by the tuple $\left( {S, A, P, R,\gamma }\right)$ : a state set $S$ , an action set $A$ , a state-transition probability $P$ , a reward function $R$ , and a discount factor $\gamma$ . + +In this paper, an action $a$ was one of the MPs. The state vector $s$ was defined as the position of the peg relative to the estimated hole frame. After an MP had been executed at time $t - 1$ and its stopping condition had been reached, the new state at time $t$ was measured. The reward function combined three terms, favoring: (1) MPs moving the peg closer to the goal pose, (2) MPs with short execution time, and (3) MPs that achieved the SUCCESS stopping condition. + +With an initial pose deviation of $\left( {-{1.5},{1.5}}\right)$ mm and $\left( {-{1.5},{1.5}}\right) {}^{ \circ }$ , this RL policy could achieve a ${94}\%$ success rate over 50 insertion attempts with only one episode run. Thus, the peg's pose displacement errors needed to be within this range at the end of the alignment phase. + +## IV. EXPERIMENTS AND RESULTS + +The performance of the model was first evaluated on the test set. After that, the model was tested on actual insertion tasks. To prove the usefulness of the pre-insertion alignment, two baseline experiments were conducted. Lastly, the model was appraised for its generalization capability over the workspace. + +## A.
Experimental setup + +All experiments were conducted with a plastic square peg and a square hole with ${19.98} - \mathrm{{mm}}$ sides and ${20} - \mathrm{{mm}}$ depth. The clearance between the mating parts was ${0.26}\mathrm{\;{mm}}$. + +The robot used in this project was the 7-DOF Franka Emika Panda cobot. An in-hand camera was mounted on the end-effector (Fig. 1). The camera was short-range, with a field of view of ${70}^{ \circ }$ and a resolution of ${640} \times {480}$. + +An additional force-torque sensor, the Gamma IP60, was used to measure the external force exerted on the peg, as the force estimation in libfranka was too imprecise for the execution of force-based MPs. + +## B. Training and model evaluation + +Both samples in each input pair to the network had to be taken at random poses generated with respect to the same ${T}_{d}$ . In total, there were ${200}^{2} \times {20} = {800000}$ pairs of samples taken from 20 sets (4 orientations at each of the 5 points). ${80}\%$ of the samples were used for training and the rest formed the test set. The model was trained for 10 epochs. The learning rate was ${10}^{-4}$ at the beginning and halved after the ${4}^{th},{6}^{th},{8}^{th}$ epochs. The batch size was 256. The entire training was run on 4 GTX-1080Ti's. + +TABLE I: Test set errors $\left( {{e}_{\phi } : }\right.$ roll error, ${e}_{\theta } :$ pitch error, ${e}_{\psi }$ : yaw error). + +
| ${e}_{x}/\mathrm{{mm}}$ | ${e}_{y}/\mathrm{{mm}}$ | ${e}_{z}/\mathrm{{mm}}$ | ${e}_{\phi }/{}^{ \circ }$ | ${e}_{\theta }/{}^{ \circ }$ | ${e}_{\psi }/{}^{ \circ }$ |
| --- | --- | --- | --- | --- | --- |
| 0.2441 | 0.2875 | 0.2044 | 0.1792 | 0.1856 | 0.2148 |
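As a quick sanity check, the Table I errors sit well inside the ${1.5}\mathrm{\;{mm}}$ / ${1.5}^{ \circ }$ tolerance accepted by the RL insertion policy (Section III-C). A small script, with the values copied from Table I, confirms the margins:

```python
# Test-set errors from Table I (translation in mm, rotation in degrees).
errors_mm = {"x": 0.2441, "y": 0.2875, "z": 0.2044}
errors_deg = {"roll": 0.1792, "pitch": 0.1856, "yaw": 0.2148}

# Tolerances accepted by the RL insertion policy (Section III-C).
TOL_MM, TOL_DEG = 1.5, 1.5

worst_mm = max(errors_mm.values())    # 0.2875 (y)
worst_deg = max(errors_deg.values())  # 0.2148 (yaw)
assert worst_mm < TOL_MM and worst_deg < TOL_DEG
print(f"translation margin: {TOL_MM - worst_mm:.4f} mm, "
      f"rotation margin: {TOL_DEG - worst_deg:.4f} deg")
```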
+ +The performance of the model on the test set is recorded in Table I. Since the RL policy used in the insertion phase could accept errors of up to ${1.5}\mathrm{\;{mm}}$ in translation and ${1.5}^{ \circ }$ in rotation, the test errors were low enough to proceed to physical execution. + +## C. Actual insertion task + +The peg was manually guided to the goal pose at the beginning. 50 random poses within the sampling range defined by ${Cy}{l}_{r = 5, h = {10}}$ were generated around the goal pose. At each attempt, ${I}_{A}$ and one of the 50 images taken at the arbitrary poses, ${I}_{B}$ , were input to the model, and ${\widehat{T}}_{BA}$ between the two poses was estimated through DLVS. The peg moved to ${\widehat{T}}_{0A}$ at the end of the alignment phase. + +In the insertion phase, the true hole pose was not given explicitly to the RL policy. Instead, the estimated hole pose deduced from ${\widehat{T}}_{0A}$ in the alignment phase was input to the policy. The peg was subsequently guided into the hole by a sequence of force-based MPs generated within one episode. The success rate and time taken are shown in Fig. 4. + +## D. Comparing our method to baseline methods + +Two baseline experiments were conducted to prove the usefulness of the proposed approach: (1) the peg was aligned with the same DLVS algorithm, followed by pure compliance insertion; and (2) insertion with RL-generated MPs was attempted without alignment, from the same sampling range defined by ${\mathrm{{Cyl}}}_{r = 5, h = {10}}$ . As shown in Fig. 4a, both baseline methods' insertion success rates were much lower than that of our method, whereas the time taken per attempt for alignment and insertion was much longer. + +![01963e18-0b18-7651-a210-e2a00b439beb_3_161_1163_705_322_0.jpg](images/01963e18-0b18-7651-a210-e2a00b439beb_3_161_1163_705_322_0.jpg) + +Fig.
4: (a) Insertion success rates and (b) average time taken per attempt for alignment and insertion of our method compared to the two baseline methods, over 50 attempts. In (b), there was no alignment phase in baseline (2). + +## E. Generalization over workspace + +Iterative estimation with the same DLVS algorithm managed to align the peg to within the acceptable deviation thresholds from initial pose differences larger than the sampling range ${Cy}{l}_{r = 5, h = {10}}$ . Two test cases (1 easy, 1 hard) were executed. In both cases, all pose errors converged to within ${1.5}\mathrm{\;{mm}}$ and ${1.5}^{ \circ }$ after a number of iterations (Fig. 5). + +## V. CONCLUSIONS + +The addition of DLVS has improved the practicality of the force-based peg insertion solution proposed by Vuong et al. [4]. With visual capabilities in the alignment phase, the tolerable starting pose errors of the peg in both translation and orientation were increased. Even initial pose differences larger than the normal sampling range could be handled when iterative visual servoing was applied. + +![01963e18-0b18-7651-a210-e2a00b439beb_3_919_138_726_549_0.jpg](images/01963e18-0b18-7651-a210-e2a00b439beb_3_919_138_726_549_0.jpg) + +(a) Easy test case. Initial pose errors: $\left( {\mathrm{x},\mathrm{y},\mathrm{z}}\right) = \left( {{10},{10},{10}}\right) \mathrm{{mm}}$ , (roll, pitch, yaw) $= {\left( {10},{10},{20}\right) }^{ \circ }$ . Converged to acceptable error thresholds after 4 iterations. + +![01963e18-0b18-7651-a210-e2a00b439beb_3_912_821_733_552_0.jpg](images/01963e18-0b18-7651-a210-e2a00b439beb_3_912_821_733_552_0.jpg) + +(b) Hard test case. Initial pose errors: $\left( {\mathrm{x},\mathrm{y},\mathrm{z}}\right) = \left( {{25},{25},{25}}\right) \mathrm{{mm}}$ , (roll, pitch, yaw) $= {\left( {25},{25},{50}\right) }^{ \circ }$ . Converged to acceptable error thresholds after 6 iterations. + +Fig.
5: Initial pose differences that were larger than the sampling range ${Cy}{l}_{r = 5, h = {10}}$ converged to within ${1.5}\mathrm{\;{mm}}$ and ${1.5}^{ \circ }$ in both easy and hard test cases. + +Furthermore, the true hole pose is no longer required in this new approach. The estimated hole pose can be deduced from DLVS and input to the RL policy. This improvement is significant as in real-world robotic assembly tasks, the pose of the part to be mated is normally unknown. + +In future work, the DLVS model's generalization capability to different shapes can be evaluated without retraining. Optimal numbers of visual servoing iterations can also be found for different magnitudes of initial pose errors to boost the proposed approach's usability in real-life assembly tasks. + +## REFERENCES + +[1] F. Suarez, X. Zhou, and Q. C. Pham, "Can robots assemble an ikea chair?" Science Robotics, vol. 3, p. eaat6385, 04 2018. + +[2] F. Suárez-Ruiz and Q.-C. Pham, "A framework for fine robotic assembly," in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 421-426. + +[3] L. Johannsmeier, M. Gerchow, and S. Haddadin, "A framework for robot manipulation: Skill formalism, meta learning and adaptive control," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 5844-5850. + +[4] N. Vuong, H. Pham, and Q.-C. Pham, "Learning sequences of manipulation primitives for robotic assembly," in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 4086- 4092. + +[5] C. Yu, Z. Cai, H. Pham, and Q.-C. Pham, "Siamese convolutional neural network for sub-millimeter-accurate camera pose estimation and visual servoing," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 935-941. + +[6] R. L. Haugaard, J. Langaa, C. Sloth, and A. G. Buch, "Fast robust peg-in-hole insertion with continuous visual servoing," 2020. [Online]. Available: https://arxiv.org/abs/2011.06399 + +[7] J. 
C. Triyonoputro, W. Wan, and K. Harada, "Quickly inserting pegs into uncertain holes using multi-view images and deep network trained on synthetic data," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 5792-5799. + +[8] Q. Bateux, E. Marchand, J. Leitner, F. Chaumette, and P. Corke, "Training deep neural networks for visual servoing," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 3307-3314. + +[9] U. Thomas, B. Finkemeyer, T. Kroger, and F. Wahl, "Error-tolerant execution of complex robot tasks based on skill primitives," in 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), vol. 3, 2003, pp. 3069-3075 vol.3. + +[10] B. Finkemeyer, T. Kröger, and F. Wahl, "Executing assembly tasks specified by manipulation primitive nets," Advanced Robotics, vol. 19, pp. 591-611, 01 2005. + +[11] M. Otterlo and M. Wiering, "Reinforcement learning and markov decision processes," Reinforcement Learning: State of the Art, pp. 3- 42, 01 2012. 
\ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/01lfX8qrh1O/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/01lfX8qrh1O/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..c6b6c421b9b0fdc33b53e35361e5a4ba1381c3f7 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/01lfX8qrh1O/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,152 @@ +§ INTEGRATING FORCE-BASED MANIPULATION PRIMITIVES WITH DEEP LEARNING-BASED VISUAL SERVOING FOR ROBOTIC ASSEMBLY + +Yee Sien Lee ${}^{1}$ , Nghia Vuong ${}^{1}$ , Nicholas Adrian ${}^{1,2}$ , and Quang-Cuong Pham ${}^{1}$ + +Abstract-This paper explores the idea of combining Deep Learning-based Visual Servoing and dynamic sequences of force-based Manipulation Primitives for robotic assembly tasks. Most current peg-in-hole algorithms assume the initial peg pose is already aligned within a minute deviation range before a tight-clearance insertion is attempted. With the integration of tactile and visual information, highly-accurate peg alignment before insertion can be achieved autonomously. In the alignment phase, the peg mounted on the end-effector can be aligned automatically from an initial pose with large displacement errors to an estimated insertion pose with errors lower than ${1.5}\mathrm{\;{mm}}$ in translation and ${1.5}^{ \circ }$ in rotation, all in one-shot Deep Learning-Based Visual Servoing estimation. A dynamic sequence of Manipulation Primitives will then be automatically generated via Reinforcement Learning to finish the last stage of insertion. + +§ I. INTRODUCTION + +Robots have drastically increased industrial productivity by assisting humans to undertake high-volume and repetitive tasks such as lifting, assembly, and picking and placing of manufacturing parts. 
Specifically, robotic assembly has become progressively more common in the modern workspace, with increasingly complex and autonomous assembly tasks having been conducted in recent years [1]. However, robotic peg-in-hole assemblies require extremely high success rate and generalization to different contexts which are well beyond today's industrial robots' autonomous capability [2], [3]. Manual designing and fine-tuning are still required to achieve such tasks. Therefore, to achieve autonomous dexterous robotic peg-in-hole assembly, Vuong et al. proposed the idea of automatically discovering the dynamic sequence of Manipulation Primitives (MPs) via Reinforcement Learning (RL) [4]. + +The research from [4] utilized a force torque sensor to gauge the external force exerted on the end-effector. Besides maintaining a high accuracy, their method also showed promising generalization capability across different geometries. Nonetheless, the absence of a visual device limited the effectiveness of assembly as the peg had to be readily aligned within a small deviation range before insertion. + +This study aims to improve the solely force-based solution of [4] in terms of practicality in real-world settings by implementing Deep Learning-based Visual Servoing (DLVS) in the alignment phase. In this project, the DLVS work by Yu et al. [5] was chosen to complement the solely force-based solution. With DLVS capability, the hole pose can be estimated automatically in the alignment phase. + + < g r a p h i c s > + +Fig. 1: Robotic assembly setup. A square peg was used in this study. + +The scope of this study includes: (1) achieving high-accuracy ( $< {1.5}\mathrm{\;{mm}}$ in translation and $< {1.5}^{ \circ }$ in rotation) autonomous estimation of hole pose in the alignment phase, and (2) enhancing the generalization capabilities across workspace with the newly integrated DLVS feature. + +§ II. RELATED WORK + +§ A. 
STUDIES ACHIEVING ALIGNMENT AND INSERTION + +A study focusing on fast robust peg-in-hole insertion with continuous visual servoing was conducted in [6]. In the alignment phase, the peg was aligned to the hole based on heatmaps generated from a Deep Neural Network (DNN). After alignment, peg insertion was attempted via compliance using force-feedback. This approach was able to achieve high accuracy (peg-hole clearance of ${0.015}\mathrm{\;{mm}}$ ). However, there were two downsides: (1) DNN in alignment phase could only align position, but not orientation; (2) In the insertion phase, a simple compliant force insertion which was unable to account for large rotational errors was applied. + +[7] focused on achieving peg-in-hole assembly using multi-view images and DLVS trained on synthetic data. There were two steps in the alignment phase: (1) DLVS quickly moved the peg closer to the hole; (2) spiral search then precisely aligned the peg to the hole. The process would then proceed to the insertion phase where impedance control was used to perform the insertion. The clearance of the hole in this experiment was ${0.4}\mathrm{\;{mm}}$ . However, this approach could not align orientation errors as well due to the limitations of the DNN. Another downside was the long execution time. The approach needed more than 40 seconds to complete peg insertion from the start of the search phase. + +${}^{1}$ School of Mechanical and Aerospace Engineering, NTU, Singapore + +${}^{2}$ HP-NTU Digital Manufacturing Corporate Lab, Singapore + +§ B. ALIGNMENT PHASE: DEEP LEARNING-BASED VISUAL SERVOING + +Deep learning-based visual servoing (DLVS) estimates the camera pose repeatedly while the robot is moving towards the target pose to achieve high final accuracy [5]. + +Bateux et al. explored an efficient method of generating dataset to train robust neural network for DLVS which considers changing lighting conditions and the addition of random occlusions [8]. 
The network achieves sub-millimeter accuracy but can only estimate a camera pose with respect to a fixed reference pose. The neural network has to be retrained every time a new reference pose is introduced, which is impractical in real-life usages. Thus, the authors proposed in the same paper another neural network which accepts a pair of images taken at random poses as input. Nevertheless, this extension could only achieve centimeter accuracy. + +Yu et al. proposed a new neural network based on Siamese architecture that can output the relative pose between any pair of images taken at arbitrary poses with sub-millimeter accuracy [5]. The network is also effective under varying lighting conditions and with the inclusion of random occlusions, and can even generalize to objects with similar physical appearances. During actual insertion experiments, the model achieved sub-millimeter accuracy in camera pose estimation in one shot from initial deviations of: $\left( {-5,5}\right) \mathrm{{mm}}$ for $x$ and $y,\left( {0,{10}}\right) \mathrm{{mm}}$ for $z,\left( {-5,5}\right)$ deg for roll and pitch, (-10,10)deg for yaw. + +§ C. INSERTION PHASE: FORCE-BASED MANIPULATION PRIMITIVES IN ROBOTIC ASSEMBLY + +[3] proposed a robot manipulation technique that achieved faster cylindrical peg-in-hole insertions than human. Nonetheless, the author had to design manually the sequence of MPs, therefore the method did not have generalization capabilities to different contexts. Moreover, the sequence of MPs generated was fixed before execution. Thus, the approach suggested could not adjust to unpredictable circumstances in real-time execution. Some other works where a rigid sequence of MPs were manually defined include [2], [9], and [10]. + +In another study, Vuong et al. utilized RL to automatically discover the dynamic sequences of MPs [4]. 
The dynamically generated MPs could generalize across a wide range of assembly tasks and are more robust against environmental uncertainties during execution. Nonetheless, the method proposed was solely force-based. The lack of a visual sensor limited the effectiveness of robotic assembly tasks in real life due to the need for pre-defined peg pre-insertion alignment. + +§ III. METHODOLOGY + +§ A. TASK DESCRIPTION + +The peg-in-hole insertion task was split into two phases, namely (1) alignment phase and (2) insertion phase. + +In the alignment phase, the expected outcome was the improved alignment between the peg and hole through the DLVS algorithm from [5]. The peg was then manipulated to move down until contact with the hole block. After rotating the resulting peg pose by ${180}^{ \circ }$ against $x$ -axis and translating it down along z-axis by the hole depth, the estimated hole pose was recorded (Fig. 2). Before proceeding to the insertion phase, to check whether the alignment phase had achieved complete insertion, the peg was manipulated to move in $x,y,z$ - axes under two criteria, maximum movement duration and threshold of the force sensed. In unsuccessful attempts, the process subsequently proceeded to the insertion phase. + +In the insertion phase, a dynamic sequence of MPs based on the RL policy trained in [4] were generated for final insertion. After each MP step, the policy would inspect the insertion status by finding the distance between the latest achieved pose and the estimated goal pose. If the $x$ and $y$ errors between the two poses were less than $5\mathrm{\;{mm}}$ and the $z$ errors were smaller than $4\mathrm{\;{mm}}$ simultaneously, the insertion would be deemed successful. + + < g r a p h i c s > + +Fig. 2: Hole pose (green) estimation deduced from after-contact peg pose (blue) in alignment phase. + +§ B. 
DEEP LEARNING-BASED VISUAL SERVOING NEURAL NETWORK + +The neural network developed in [5] was designed to estimate the relative transformation between any two random camera poses. In training, the neural network took a pair of samples as input each time. Each sample in the pair comprised: (1) an image taken at a random pose and (2) the transformation matrix of the pose. The output of the network was the relative pose between the input pair of camera poses in the form of translation(x, y, z)and quaternions(a, b, c, d). + +Dataset Generation. Firstly, the peg was guided manually to the insertion pose. The peg was then lifted vertically for ${12}\mathrm{\;{cm}}$ so that a full view of the target hole could be captured. This end-effector pose was then recorded as the default pose ${T}_{d}$ (Fig. 3). Samples were generated at random poses revolving around ${T}_{d}$ . The origins of the new arbitrary end-effector poses were randomly sampled within a vertical cylinder of ${10}\mathrm{\;{mm}}$ radius and ${20}\mathrm{\;{mm}}$ height $\left( {{Cy}{l}_{r = {10},h = {20}}}\right)$ , with the origin of ${T}_{d}$ at the the bottom center of the cylinder (Fig. 3). The rotation was randomly sampled within the range of $- {10}^{ \circ }$ to ${10}^{ \circ }$ for roll and pitch, and $- {20}^{ \circ }$ to ${20}^{ \circ }$ for yaw. At each random pose, an image was captured and the transformation matrix ${T}_{de}$ of the pose was recorded. ${T}_{de}$ is the transformation matrix which transforms the end-effector's coordinate frame to the default pose's coordinate frame. This image and ${T}_{de}$ formed a complete sample which would later be input to the neural network as part of an input pair. The two input images in a pair were identified as ${I}_{A}$ and ${I}_{B}$ . 
Before training, the true label which was the relative transformation ${T}_{BA}$ could be calculated as follows: + +$$ +{T}_{BA} = \left\lbrack {T}_{dB}^{-1}\right\rbrack \left\lbrack {T}_{dA}\right\rbrack \tag{1} +$$ + + < g r a p h i c s > + +Fig. 3: The red coordinate frame defines the default pose, ${T}_{d}$ . The origin of the new random end-effector pose, ${T}_{Oe}$ can be anywhere in the red cylinder. + +As the robotic arm's shadows could affect the network's performance, the samples were generated with the hole being placed at different positions and orientations to ensure the shadows did not always appear at the same position. The hole was placed at 5 points on the base, where 4 points would form the vertices of a 5-cm square and 1 point would be at the center of the square. At each point, the hole block was rotated clockwise at ${0}^{ \circ },{30}^{ \circ },{60}^{ \circ },{90}^{ \circ }$ . At each orientation, 200 samples were collected. This would amount to 4000 samples ( 5 points $\times 4$ orientations $\times {200}$ samples). + +§ C. DYNAMIC SEQUENCES OF MANIPULATION PRIMITIVES + +Dynamic sequences of MPs could be discovered automatically through RL [4]. The RL policies were trained entirely in Mujoco simulation and transferred directly to physical execution. + +Manipulation Primitives in the Insertion Phase. The MPs were defined as the appropriate motions of the end-effector in a task space. The motions were controlled by three types of instructions: (1) velocity command, (2) force command, and (3) stopping condition. The MPs were categorized into two families: free-space MPs and in-contact MPs. Free-space MPs were executed when the peg was not in contact with the hole block while in-contact MPs were executed when the peg was touching the hole block. + +Using Reinforcement Learning to automatically generate dynamic sequences of Manipulation Primitives. 
The learning of dynamic sequences of MPs was cast as a discounted episodic RL problem, formulated as a Markov Decision Process (MDP) [11]. An MDP is defined by a state set $S$, an action set $A$, state-transition probabilities $P$, a reward function $R$, and a discount factor $\gamma$.

In this paper, an action $a$ was one of the MPs. The state vector $s$ was defined as the position of the peg relative to the estimated hole frame. After an MP had been executed at time $t-1$ and its stopping condition had been reached, the new state at time $t$ was measured. The reward function comprised three terms, favoring (1) MPs moving the peg closer to the goal pose, (2) MPs with short execution time, and (3) MPs that achieved the SUCCESS stopping condition.

With an initial pose deviation of $(-1.5, 1.5)\,\mathrm{mm}$ and $(-1.5, 1.5)^{\circ}$, this RL policy could achieve a 94% success rate out of 50 insertion attempts with only one episode run. Thus, the peg's pose displacement errors needed to be within this range at the end of the alignment phase.

§ IV. EXPERIMENTS AND RESULTS

The performance of the model was first evaluated on the test set. After that, the model was tested on actual insertion tasks. To prove the usefulness of the pre-insertion alignment, two baseline experiments were conducted. Lastly, the model was appraised for its generalization capability over the workspace.

§ A. EXPERIMENTAL SETUP

All experiments were conducted with a plastic square peg and a square hole with 19.98-mm sides and 20-mm depth. The clearance between the mating parts was $0.26\,\mathrm{mm}$.

The robot used in this project was the 7-DOF Franka Emika Panda cobot. An in-hand camera was mounted on the end-effector (Fig. 1). The camera was short-range, with a field of view of $70^{\circ}$ and a resolution of $640 \times 480$.
An additional force-torque sensor, the Gamma IP60, was used to measure the external force exerted on the peg, as the force estimation in libfranka was too imprecise for the execution of force-based MPs.

§ B. TRAINING AND MODEL EVALUATION

Both samples in each input pair to the network had to be taken at random poses generated with respect to the same $T_d$. In total, there were $200^2 \times 20 = 800{,}000$ pairs of samples taken from 20 sets (4 orientations at each of the 5 points). 80% of the samples were used for training and the rest were used as the test set. A model was trained for 10 epochs. The learning rate was $10^{-4}$ at the beginning and halved after the $4^{th}$, $6^{th}$, and $8^{th}$ epochs. The batch size was 256. The entire training was run on 4 GTX 1080 Ti GPUs.

TABLE I: Test set errors ($e_{\phi}$: roll error, $e_{\theta}$: pitch error, $e_{\psi}$: yaw error).

| $e_x$/mm | $e_y$/mm | $e_z$/mm | $e_{\phi}/{}^{\circ}$ | $e_{\theta}/{}^{\circ}$ | $e_{\psi}/{}^{\circ}$ |
| --- | --- | --- | --- | --- | --- |
| 0.2441 | 0.2875 | 0.2044 | 0.1792 | 0.1856 | 0.2148 |

The performance of the model on the test set is recorded in Table I. Since the RL policy used in the insertion phase could accept errors up to $1.5\,\mathrm{mm}$ in translation and $1.5^{\circ}$ in rotation, the test errors were low enough to proceed to physical execution.

§ C. ACTUAL INSERTION TASK

The peg was manually guided to the goal pose at the beginning. 50 random poses within the sampling range defined by $Cyl_{r=5,h=10}$ were generated around the goal pose. At each attempt, $I_A$ and one of the 50 images taken at the arbitrary poses, $I_B$, were input to the model. $\widehat{T}_{BA}$ between the two poses was estimated through DLVS. The peg would move to $\widehat{T}_{0A}$ at the end of the alignment phase.
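Both the training label of Eq. (1) and the relative poses used here reduce to a single matrix product. A minimal numpy sketch with $4 \times 4$ homogeneous transforms (the helper names are ours, not from the paper):

```python
import numpy as np

def rot_z(angle):
    """Homogeneous transform for a rotation about z (demo helper)."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def relative_transform(T_dA, T_dB):
    """Eq. (1): T_BA = T_dB^{-1} T_dA. Both inputs map their end-effector
    frames into the default frame d, so the result expresses pose A in
    the coordinate frame of pose B."""
    return np.linalg.inv(T_dB) @ T_dA
```

As a consistency check, composing $T_{dB}$ with $T_{BA}$ recovers $T_{dA}$.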
In the insertion phase, the true hole pose was not given explicitly to the RL policy. The estimated hole pose deduced from $\widehat{T}_{0A}$ in the alignment phase was input to the policy. The peg was subsequently guided into the hole by a sequence of force-based MPs generated within one episode. The success rate and time taken are shown in Fig. 4.

§ D. COMPARING OUR METHOD TO BASELINE METHODS

Two baseline experiments were conducted to prove the usefulness of the proposed approach: (1) the peg aligned with the same DLVS algorithm, followed by pure compliance insertion, and (2) attempted insertion with RL-generated MPs without alignment, from the same sampling range defined by $Cyl_{r=5,h=10}$. As shown in Fig. 4a, both baseline methods' insertion success rates were much lower than that of our method, while the time taken per attempt for alignment and insertion was much longer.

Fig. 4: (a) Insertion success rates and (b) average time taken per attempt for alignment and insertion of our method compared to the two baseline methods, out of 50 attempts. In (b), there was no alignment phase in baseline (2).

§ E. GENERALIZATION OVER WORKSPACE

Iterative estimations with the same DLVS algorithm managed to align the peg to within the acceptable deviation threshold from initial pose differences larger than the sampling range $Cyl_{r=5,h=10}$. Two test cases (1 easy, 1 hard) were executed. In both cases, all pose errors converged to within $1.5\,\mathrm{mm}$ and $1.5^{\circ}$ after a number of iterations (Fig. 5).

§ V. CONCLUSIONS

The addition of DLVS has improved the practicality of the force-based peg insertion solution proposed by Vuong et al. [4]. With visual capabilities at the alignment phase, the peg's starting pose error thresholds in both translation and orientation were increased.
Even initial pose differences larger than the normal sampling range could be handled if iterative visual servoing was applied.

(a) Easy test case. Initial pose errors: $(x, y, z) = (10, 10, 10)\,\mathrm{mm}$, (roll, pitch, yaw) $= (10, 10, 20)^{\circ}$. Converged to acceptable error thresholds after 4 iterations.

(b) Hard test case. Initial pose errors: $(x, y, z) = (25, 25, 25)\,\mathrm{mm}$, (roll, pitch, yaw) $= (25, 25, 50)^{\circ}$. Converged to acceptable error thresholds after 6 iterations.

Fig. 5: Initial pose differences larger than the sampling range $Cyl_{r=5,h=10}$ converged to within $1.5\,\mathrm{mm}$ and $1.5^{\circ}$ in both easy and hard test cases.

Furthermore, the true hole pose is no longer required in this new approach. The estimated hole pose can be deduced from DLVS and input to the RL policy. This improvement is significant, as in real-world robotic assembly tasks the pose of the part to be mated is normally unknown.

In future work, the DLVS model's generalization capability to different shapes can be evaluated without retraining. Optimal numbers of visual servoing iterations can also be found for different magnitudes of initial pose errors, to boost the proposed approach's usability in real-life assembly tasks.
\ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/9Big79N1FrZ/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/9Big79N1FrZ/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..1a06f9ae513890e0c751d74cc038d67b1fd227cb --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/9Big79N1FrZ/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,251 @@

# Efficient Object Manipulation Planning with Monte Carlo Tree Search

Huaijiang Zhu$^{1}$, Ludovic Righetti$^{1,2}$

Abstract-This work presents an efficient approach to object manipulation planning using Monte Carlo Tree Search (MCTS) to find contact sequences and an efficient ADMM-based trajectory optimization algorithm to evaluate the dynamic feasibility of candidate contact sequences. To accelerate MCTS, we propose a methodology to learn a goal-conditioned policy-value network used to direct the search towards promising nodes. Further, manipulation-specific heuristics enable a drastic reduction of the search space. Systematic object manipulation experiments in a physics simulator demonstrate the efficiency of our approach. In particular, our approach scales favorably for long manipulation sequences thanks to the learned policy-value network, significantly improving the planning success rate.

## I. INTRODUCTION

The ability to plan sequences of contacts and movements to manipulate objects is central to endowing robots with sufficient autonomy to perform complex tasks. This remains, however, particularly challenging. Indeed, finding dynamically feasible sequences of contacts between the manipulator and an object typically leads to intractable combinatorial and nonlinear problems.
Recently, trajectory optimization has become a popular tool for multi-contact locomotion [1]-[4], as it leads to desirable formulations to reason about interaction forces. Yet, it remains unclear how the planning of contact modes should be efficiently incorporated, primarily due to its discrete nature, which creates an undesirable consequence: discontinuity in the dynamics at contact switches. To handle this discontinuity within the trajectory optimization framework, two streams of methodologies have emerged:

1) the contact-invariant or contact-implicit approach enforces contact complementarity either as hard constraints [5], [6], as penalty terms in a cost function [7]-[9], or with differentiable soft contact models [10];

2) the hybrid approach treats contact switches as discrete decisions within a continuous problem [11]-[13].

In this work, we examine the latter methodology to propose an optimization framework amenable to customization with object manipulation-specific heuristics and to learning from data to improve its computational efficiency.

The most common formulation of such a problem is via Mixed-Integer Programming (MIP). In the context of robot manipulation, one representative work is the Contact-Trajectory Optimization proposed in [12], where contact scheduling is modeled with binary decision variables and the non-convexity due to the cross product is relaxed by McCormick envelopes. The resulting problem is a Mixed-Integer Quadratic Program (MIQP) which can be solved by off-the-shelf MIQP solvers. However, the approach has only been demonstrated on 2D object manipulation with very short manipulation sequences. This is in contrast to our approach, which handles 3D objects and long sequences.
In the context of machine learning, CoCo, proposed in [14], finds feasible solutions to MIPs by first learning offline to map the problem parameters to the assignment of the discrete variables and then solving the resulting continuous optimization problem online. While this greatly improves the solution speed at inference time, it assumes that one is able to solve the original MIP in a reasonable amount of time to construct the dataset. If the original problem is prohibitively expensive to solve, collecting a large dataset may not be practical without abundant computational resources.

Recently, an algorithm that augments Contact-Implicit Trajectory Optimization (CITO) with tree search was proposed in [15] to incorporate domain-specific knowledge for robot manipulation. It uses depth-first search (DFS) to find a sequence of kinematically feasible contact modes that maintain a stable grasp and then constrains the CITO problem with the found contact sequence.

In principle, we could employ a brute-force approach to our problem: search over all possible combinations of the discrete variables and, for each combination, solve the resulting continuous optimization problem. In general, such a strategy is not practical due to its factorial complexity. However, it can be made more efficient if 1) the search space can be notably reduced, 2) good search heuristics are available, and 3) the non-convex continuous optimization problem can be solved efficiently. In this work, we show that all three requirements can be met.
In particular, our contributions are:

1) we adapt learning-based Monte Carlo Tree Search (MCTS) to discrete contact planning problems for robotic manipulation,

2) we formulate the resulting continuous optimization problem as a biconvex program to allow an efficient solution via the Alternating Direction Method of Multipliers (ADMM) [16], and

3) we learn a policy-value network from data collected on short-horizon tasks which provides good heuristics for long-horizon tasks and significantly decreases the overall solution time.

To the best of our knowledge, this is the first application of learning-based MCTS to contact planning for manipulation.

---

${}^{1}$ Tandon School of Engineering, New York University, USA

${}^{2}$ Max-Planck Institute for Intelligent Systems, Germany

An extended version of this work has been submitted to IROS 2022 and is under review.

---

## II. Problem Statement

## A. Inputs

We aim to solve an object manipulation task similar to the Contact-Trajectory Optimization problem proposed in [12], where the following quantities are given:

1) a rigid object with known geometry, friction coefficient $\mu$, mass $m$, moment of inertia $\mathcal{I}$, and ${N}_{\Omega}$ pre-defined touchable regions,

2) a trajectory with discretization step ${\Delta t}$ and length $T$ that consists of the desired object pose, velocity, and acceleration,

3) an environment with known geometry and friction coefficient ${\mu}_{e}$, and

4) a manipulator with known kinematics that can make at most ${N}_{c}$ contacts with the object.

At the $t$-th time step, given the object motion and the object dynamics, we can compute the desired total force ${f}_{\text{des}}\left( t\right)$ and torque ${\tau}_{\text{des}}\left( t\right)$ to be applied to the object from rigid-body dynamics.
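Assuming the desired wrench is expressed in a world-aligned frame and gravity must be compensated, this is the standard Newton-Euler computation. A sketch (the symbols follow the text; the gravity convention and function name are our assumptions):

```python
import numpy as np

g = np.array([0.0, 0.0, -9.81])  # gravity, assumed world frame with z up

def desired_wrench(m, inertia, a_des, omega, omega_dot):
    """Desired total contact force/torque on the object at one time step,
    from rigid-body (Newton-Euler) dynamics:
        f_des   = m * (a - g)
        tau_des = I @ omega_dot + omega x (I @ omega)
    """
    f_des = m * (a_des - g)
    tau_des = inertia @ omega_dot + np.cross(omega, inertia @ omega)
    return f_des, tau_des
```

For a stationary object this reduces to gravity compensation: the contacts must jointly supply $m\|g\|$ upward and zero torque.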
In addition, as the geometry of the object and the environment as well as the object motion are known, we can obtain ${N}_{e}\left( t\right)$ environment contact locations ${r}_{e}\left( t\right)$ for $e \in \left\{ {1,\ldots ,{N}_{e}\left( t\right) }\right\}$ at each time step $t$ by checking the collisions between the object and the environment, assuming uniform pressure distribution. + +## B. Outputs + +For each time step $t$ , we aim to find the following: + +1) the contact region ${\Omega }_{c}\left( t\right) \in \left\{ {0,1,\ldots ,{N}_{\Omega }}\right\}$ , the contact force ${f}_{c}\left( t\right)$ and the contact location ${r}_{c}\left( t\right)$ for each contact point $c$ of the manipulator; ${\Omega }_{c}\left( t\right) = 0$ indicates that the $c$ -th contact point is not in contact, and + +2) the environment contact force ${f}_{e}\left( t\right)$ + +such that the forces and torques sum to the desired ones + +$$ +\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{f}_{c}\left( t\right) + \mathop{\sum }\limits_{{e = 1}}^{{{N}_{e}\left( t\right) }}{f}_{e}\left( t\right) = {f}_{\text{des }}\left( t\right) \tag{1a} +$$ + +$$ +\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{r}_{c}\left( t\right) \times {f}_{c}\left( t\right) + \mathop{\sum }\limits_{{e = 1}}^{{{N}_{e}\left( t\right) }}{r}_{e}\left( t\right) \times {f}_{e}\left( t\right) = {\tau }_{\mathrm{{des}}}\left( t\right) . \tag{1b} +$$ + +## III. METHOD + +The problem described above is challenging even though the desired object motion is provided, as one needs to find both the discrete contact region ${\Omega }_{c}\left( t\right)$ and the continuous manipulator contact forces ${f}_{c}\left( t\right)$ , the contact locations ${r}_{c}\left( t\right)$ , and the environment contact forces ${f}_{e}\left( t\right)$ . + +## A. Discrete Contact Planning via MCTS + +A series of learning-based MCTS algorithms has been proposed in [17], [18] for the chess-playing agents AlphaGo and AlphaZero. 
We adapt this approach to solve the discrete contact region planning problem and refer to the algorithm as Policy-Value Monte Carlo Tree Search (PVMCTS): for a given object motion $\xi$, we want to find the contact region for each contact point $c$ at each time step $t$. The whole sequence is then evaluated to return a reward $r$ that guides future search.

1) Assumptions: To reduce the search space, we make the following assumptions:

- Persistent contact: While the downstream continuous optimization problem may have a small discretization step, for example ${\Delta t} = 0.1\,\mathrm{s}$, most manipulation tasks do not require contact switches at such a high frequency. Thus, we assume that a contact point must remain in the same region for $d$ time steps.

- Contact switch: We allow at most one contact point to break or make contact at each contact switch, and we only allow contact switches when the desired object velocity and acceleration are zero.

- Contact region: Each contact region can be touched by at most one contact point.

2) State and action representation: With the assumptions above, at each planning step $n$, the PVMCTS chooses for each contact point $c$ its contact region for the next $d$ time steps, hence ${a}_{n} = {\left\lbrack {\Omega }_{1}\left( t\right) ,\ldots ,{\Omega }_{{N}_{c}}\left( t\right) \right\rbrack }_{t = {nd}}^{{nd} + d}$, and the state ${s}_{n}$ is simply the concatenation of all previous actions.

3) Heuristics: To reduce the search space, we apply the following heuristics to further limit the size of the legal action set $\mathcal{A}\left( s\right)$:

- Kinematic feasibility: For each contact point $c$, a contact region will only be considered if inverse kinematics can find a manipulator configuration that reaches the center of this region within an error threshold of $1\,\mathrm{cm}$.
- Number of contacts: For time steps where the angular acceleration is nonzero, we require at least $\min \left( {{N}_{c},3}\right)$ contact points to be in contact.

4) Reward function: Once the PVMCTS reaches a terminal state, hence ${Nd} = T$, we obtain a sequence of contact regions ${\left\lbrack {\Omega }_{1}\left( t\right) ,\ldots ,{\Omega }_{{N}_{c}}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$, which is used to construct a continuous optimization problem that solves for the contact forces ${f}_{c}\left( t\right) ,{f}_{e}\left( t\right)$ and the contact locations ${r}_{c}\left( t\right)$ (cf. Sec. III-B). To evaluate this solution, we integrate it to obtain an object pose $\widehat{q}\left( {T - 1}\right)$ with the semi-implicit Euler method. We then compare it with the desired pose $q\left( {T - 1}\right)$ to compute a weighted distance

$$
D\left( {q,\widehat{q}}\right) = \parallel p - \widehat{p}\parallel + \beta \begin{Vmatrix}{\log \left( {{\widehat{R}}^{\top }R}\right) }\end{Vmatrix}, \tag{2}
$$

where $\beta > 0$ scales the angular distance. A weighted distance within the threshold $D \leq {D}_{\text{th}}$ is then normalized to $\left\lbrack {0,1}\right\rbrack$ to obtain the reward.

5) Goal-conditioned policy-value network: Note that each PVMCTS instance only searches for the contact sequence of a given object motion $\xi$, thus the rewards are motion-specific. To allow learning from object motion information as well, we define an intermediate goal ${\lambda }_{n} = \left\lbrack {q\left( {nd}\right), q\left( {{nd} + h}\right) }\right\rbrack$ for each planning step $n$ that consists of the current desired object pose $q\left( {nd}\right)$ and the future one $q\left( {{nd} + h}\right)$ in $h$ steps. Fig. 1 depicts the policy-value network architecture.
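The weighted distance of Eq. (2) can be computed without a rotation library: $\|\log(\widehat{R}^{\top}R)\|$ is the geodesic angle of the relative rotation, recoverable from its trace. A sketch ($\beta = 0.5$ is an arbitrary choice; the paper only requires $\beta > 0$):

```python
import numpy as np

def weighted_pose_distance(p, R, p_hat, R_hat, beta=0.5):
    """Eq. (2): translation error plus beta times the rotation angle of
    R_hat^T R, which equals the norm of that rotation's matrix logarithm."""
    M = R_hat.T @ R
    # trace(M) = 1 + 2 cos(angle); clip guards against round-off outside [-1, 1]
    cos_angle = np.clip((np.trace(M) - 1.0) / 2.0, -1.0, 1.0)
    return np.linalg.norm(p - p_hat) + beta * np.arccos(cos_angle)
```

Normalizing this distance against $D_{\text{th}}$ then yields the scalar reward fed back to the tree search.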
6) Value classifier: One key difference between our task and generic game play is that our dataset is highly imbalanced: many contact sequences explored by the PVMCTS are dynamically infeasible, resulting in rewards equal to zero. Directly training on such a dataset leads to underestimation of the value function. Instead, we only train our policy-value network on samples that incur positive rewards. Additionally, on the entire dataset $\mathcal{D}$, we train a binary classifier ${C}_{\phi }\left( s\right)$ with logistic regression, where positive samples are given more weight. At inference time, a state is only fed into the policy-value network if the classifier labels it as positive; otherwise, we simply output zero value ${v}_{\theta }\left( s\right) = 0$ and a uniformly distributed action probability ${p}_{\theta }\left( {s, a}\right) = \frac{1}{\left| \mathcal{A}\left( s\right) \right| }$.

![01963e19-b1c8-780a-a840-c11a1274c3b5_2_208_188_616_315_0.jpg](images/01963e19-b1c8-780a-a840-c11a1274c3b5_2_208_188_616_315_0.jpg)

Fig. 1: Schematic diagram of the policy-value network architecture. Activation functions and regularization layers such as Dropout and BatchNorm are omitted.

## B. Continuous Contact Optimization via ADMM

Now let us consider the sub-problem where we have already obtained a sequence of contact regions ${\left\lbrack {\Omega }_{1}\left( t\right) ,\ldots ,{\Omega }_{{N}_{c}}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$ for each contact point $c$: we can find the contact forces and locations by solving a continuous optimization problem with the following constraints and cost. For brevity, we omit the time indices when there is no ambiguity.
1) Dynamics: The contact forces and torques must sum to the desired ones:

$$
\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{f}_{c} + \mathop{\sum }\limits_{{e = 1}}^{{N}_{e}}{f}_{e} = {f}_{\text{des }} \tag{3a}
$$

$$
\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{r}_{c} \times {f}_{c} + \mathop{\sum }\limits_{{e = 1}}^{{N}_{e}}{r}_{e} \times {f}_{e} = {\tau }_{\text{des }}. \tag{3b}
$$

2) Contact location: The contact location must be inside the given contact region ${\Omega }_{c}$ for ${\Omega }_{c} \neq 0$.

3) Contact force: If the $c$-th contact point is not in contact with any contact region, hence ${\Omega }_{c} = 0$, the contact force is set to zero. Note that this is not a complementarity constraint, as ${\Omega }_{c}$ is already given.

4) Sticking contact: If the $c$-th contact point is in contact with the same region at two consecutive time steps, then the contact location remains the same, to prevent the manipulator from sliding on the object.

5) Coulomb friction: The contact force has to stay inside the friction cone of the given surface. Note that the environment contact can be either sticking or sliding depending on the velocity ${\dot{r}}_{e}\left( t\right)$ of the contact point relative to the environment, which can be obtained from the object motion.

6) Performance cost: Finally, we minimize a quadratic objective that avoids applying large forces at the boundary of the contact region:

$$
J = \mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{\begin{Vmatrix}{f}_{c}\left( t\right) \end{Vmatrix}}^{2} + {\begin{Vmatrix}{r}_{c}\left( t\right) \end{Vmatrix}}^{2} \tag{4}
$$

7) Biconvex Decomposition: The continuous optimization problem described above has an interesting feature: its only non-convex constraint, (3b), involving the cross product ${r}_{c} \times {f}_{c}$, is in fact biconvex.
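The biconvexity claim can be checked numerically: with the forces held fixed, each cross-product term is linear in the contact locations, and vice versa (arbitrary test vectors; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
r1, r2, f = rng.normal(size=(3, 3))  # two candidate locations, one fixed force
a = 0.3                              # arbitrary convex-combination weight

# With f fixed, r -> r x f is linear, so constraint (3b) is affine in r:
assert np.allclose(np.cross(a * r1 + (1 - a) * r2, f),
                   a * np.cross(r1, f) + (1 - a) * np.cross(r2, f))

# Symmetrically, with r fixed, f -> r x f is linear in f:
f1, f2, r = rng.normal(size=(3, 3))
assert np.allclose(np.cross(r, a * f1 + (1 - a) * f2),
                   a * np.cross(r, f1) + (1 - a) * np.cross(r, f2))
```

This is exactly why fixing one variable group turns the torque constraint into a linear one in the other group, so each ADMM half-step below reduces to a convex program.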
When we group the decision variables into two sets $x = {\left\lbrack {r}_{c}\left( t\right) ,{\alpha }_{c}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$ and $z = {\left\lbrack {f}_{c}\left( t\right) ,{f}_{e}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$, we can re-write the original problem in the standard ADMM form with a biconvex constraint

$$
G\left( {x, z}\right) = \mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{r}_{c} \times {f}_{c} + \mathop{\sum }\limits_{{e = 1}}^{{N}_{e}}{r}_{e} \times {f}_{e} - {\tau }_{\mathrm{{des}}} = 0. \tag{5}
$$

As all other constraints are separable in $x$ and $z$, they can be added to the objective as indicator functions, so that each ADMM update step reduces to a standard constrained Quadratic Program (QP).

## IV. EXPERIMENTS

We conduct simulation experiments to show that our framework 1) is capable of finding dynamically feasible solutions to the manipulation planning problems defined in Sec. II, and 2) scales to long-horizon tasks even when trained only on data collected from shorter-horizon tasks.

## A. Experiment Setup

Throughout all experiments, we consider a manipulator with ${N}_{c} = 2$ contact points, composed of two modular robot fingers similar to the ones used in [19], and a $10\,\mathrm{cm} \times 10\,\mathrm{cm} \times 10\,\mathrm{cm}$ cube with mass $m = 0.5\,\mathrm{kg}$ on an infinitely large plane. The cube and the plane have the same friction coefficient $\mu = {\mu }_{e} = 0.8$. We consider the following primitive object motions and composites of them: 1) Sliding (S), 2) Sliding with curvature (SC), 3) Rotating (R), 4) Lifting (L), and 5) Pivoting (P), generated by interpolating between the initial and desired object poses. An interpolated trajectory for a single primitive motion has $T = 48$ time steps and lasts $4.8\,\mathrm{s}$. We require each contact point to remain in the same region for $d = 8$ time steps, hence the trajectory has $N = 6$ contact modes.
The trajectory always starts with zero velocity and acceleration for $2.4\,\mathrm{s}$, allowing at most two contact switches.

## B. Metrics

We examine three performance metrics to evaluate the effectiveness and efficiency of our method:

1) Pose error: the error between the desired pose and the one integrated from the solution.

2) Number of evaluations: the number of continuous optimization problems the PVMCTS needs to solve until it finds the first feasible solution below the error threshold ${D}_{\text{th}}$.

3) Solution time: the total time needed to find the first feasible solution.

TABLE I: Task performance for motions interpolated from randomly sampled poses with various lengths. Pose errors are calculated only for successful tasks.
| # Object motions | Trajectory length $T$ | Model | Success rate | Error avg [cm, °] | Error worst [cm, °] | # Eval. avg | # Eval. worst | Time avg [s] | Time worst [s] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 48 | Untrained | 20/20 | 0.16, 1.18 | 0.57, 5.89 | 4.65 | 11 | 2.09 | 4.88 |
|  |  | Trained | 20/20 | 0.15, 0.39 | 0.24, 0.83 | 1.5 | 4 | 0.71 | 1.73 |
| 2 | 96 | Untrained | 20/20 | 0.35, 1.23 | 0.79, 2.24 | 8.15 | 25 | 8.54 | 21.88 |
|  |  | Trained | 20/20 | 0.32, 0.88 | 0.48, 1.78 | 2 | 5 | 1.96 | 4.68 |
| 3 | 144 | Untrained | 12/20 | 0.48, 1.86 | 0.91, 5.98 | 29.85 | 50 | 46.23 | 84.63 |
|  |  | Trained | 20/20 | 0.43, 1.81 | 0.58, 4.84 | 2.3 | 8 | 3.18 | 9.43 |
| 4 | 192 | Untrained | 5/20 | 0.61, 1.95 | 0.74, 2.13 | 43.05 | 50 | 93.57 | 137.31 |
|  |  | Trained | 20/20 | 0.65, 2.56 | 1.59, 6.92 | 2.8 | 16 | 6.12 | 31.02 |
+ +## C. Untrained PVMCTS + +In this set of experiments, we show that our method is capable of generating feasible contact plans for primitive object motions using an untrained PVMCTS. The network outputs are simply set to ${v}_{\theta }\left( {s, a}\right) = 0$ and ${p}_{\theta }\left( {s, a}\right) = \frac{1}{\left| \mathcal{A}\left( s\right) \right| }$ . + +1) Tasks: In this experiment, we consider for each primitive object motion the following desired poses summarized in Table II. They are given relative to the initial object pose and the orientation is expressed in the axis-angle representation. + +TABLE II: Desired object poses for various primitive motions. + +
| Tasks | Position [cm] | Orientation [°] |
| --- | --- | --- |
| $S$ | $[0, 10, 0]$ | $[0, 0, 0]$ |
| $SC$ | $[0, 5, 0]$ | $[0, 0, 45]$ |
| $R$ | $[0, 0, 0]$ | $[0, 0, 90]$ |
| $L$ | $[0, 0, 10]$ | $[0, 0, 0]$ |
| $P$ | $[5, 0, 2]$ | $[0, 45, 0]$ |
+ +2) Results: Table III shows that our method, even with an untrained model, is capable of finding dynamically feasible solutions for object motions after only a handful evaluations on average. Indeed, the heuristics we proposed greatly reduces the search space while still allowing discovery of dynamically feasible contact plans that results in small pose errors for the object motions considered in this task. + +3) Executing the contact plan: To validate the solution found by ADMM, we execute the contact plan in an open-loop fashion with a simple impedance controller for each finger in the PyBullet simulator [20]. In simulation, the robot is able to move the object towards its desired pose even without the feedback of the actual object pose. + +TABLE III: Task performance for primitive object motions. + +
| Tasks | Error avg [cm, °] | Error worst [cm, °] | # Eval. avg | # Eval. worst | Time avg [s] | Time worst [s] |
| --- | --- | --- | --- | --- | --- | --- |
| $S$ | 0.25, 0.72 | 0.27, 1.69 | 6.3 | 15 | 3.10 | 7.03 |
| $SC$ | 0.09, 0.42 | 0.14, 0.42 | 4.3 | 12 | 2.31 | 6.38 |
| $R$ | 0.00, 1.69 | 0.00, 1.69 | 2.5 | 9 | 1.16 | 3.96 |
| $L$ | 0.35, 0.00 | 0.35, 0.00 | 5.7 | 12 | 2.48 | 4.89 |
| $P$ | 0.36, 2.12 | 0.42, 3.84 | 8.2 | 19 | 3.62 | 7.56 |
+ +## D. Learning Planar Manipulation Tasks + +In the previous experiments, we have shown the effectiveness of our search strategy thanks to the heuristics that greatly reduces the size of the set of legal actions $\mathcal{A}\left( s\right)$ . Nevertheless, the search space still grows exponentially with the length of the contact sequence $N$ . It is thus natural to ask if learning from past experience can accelerate the search. + +In this experiment, we show that we can significantly reduce the solution time for longer-horizon tasks even if they are not contained in the training data. + +1) Tasks: : We consider the composition of planar object motions ${SC}$ with randomly sampled desired poses, which are then interpolated. The trajectories in the training data all have a length of $T = {96}$ . + +2) Results: We evaluate the trained and untrained models on tasks that are generated by the same procedure yet have different trajectory lengths. Each task category with the same trajectory length has 20 different randomly generated tasks. We set the maximal number of evaluations to be 50 , hence a task is considered failed if no feasible solution within the error threshold is found after evaluating 50 contact sequences. Table 1 reports the performance metrics of the untrained and trained model for each task category. We see that the trained model consistently solve all the tasks, regardless of the trajectory length, while the untrained model struggles in long-horizon tasks, solving only 5 out of 20 tasks with trajectory length $T = {192}$ . In contrast to the untrained model, the average number of evaluations required by the trained model to find the first feasible solution grows rather slowly with the trajectory length. + +## V. CONCLUSION + +In this work, we proposed a framework that combines data-driven tree search via PVMCTS and efficient non-convex optimization via ADMM to find dynamically feasible contact forces and locations to realize a given object motion. 
We show that the capability of learning from data allows our framework to achieve great scalability for long-horizon motions even when the dataset only contains data collected from shorter motions.

The main limitation of our approach is that the object motion must be provided. While this is possible for simple tasks, true dexterity requires automatic generation of the object motion by reasoning about the environment, which can be achieved by enumerating not only the manipulator contacts but also the environment contacts, as proposed in [21]. It is thus an interesting future research direction to incorporate such a component.

## References

[1] A. Escande and A. Kheddar, "Contact planning for acyclic motion with tasks constraints," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. IROS 2009., pp. 435-440, Oct 2009.

[2] Y.-C. Lin, L. Righetti, and D. Berenson, "Robust humanoid contact planning with learned zero- and one-step capturability prediction," IEEE Robotics and Automation Letters, vol. 5, no. 2, 2020.

[3] B. Ponton, M. Khadiv, A. Meduri, and L. Righetti, "Efficient multi-contact pattern generation with sequential convex approximations of the centroidal dynamics," IEEE Transactions on Robotics, vol. 37, no. 5, pp. 1661-1679, 2021.

[4] J. Carpentier, S. Tonneau, M. Naveau, O. Stasse, and N. Mansard, "A versatile and efficient pattern generator for generalized legged locomotion," in IEEE-RAS International Conference on Robotics and Automation, 2016.

[5] D. Stewart and J. C. Trinkle, "An implicit time-stepping scheme for rigid body dynamics with coulomb friction," in IEEE International Conference on Robotics and Automation, 2000, pp. 162-169.

[6] M. Posa, C. Cantu, and R. Tedrake, "A direct method for trajectory optimization of rigid bodies through contact," The International Journal of Robotics Research, vol. 33, no. 1, pp. 69-81, 2014.

[7] I. Mordatch, E. Todorov, and Z.
Popović, "Discovery of complex behaviors through contact-invariant optimization," ACM Transactions on Graphics (TOG), vol. 31, no. 4, pp. 1-8, 2012.
+
+[8] I. Mordatch, J. M. Wang, E. Todorov, and V. Koltun, "Animating human lower limbs using contact-invariant optimization," ACM Transactions on Graphics (TOG), vol. 32, no. 6, pp. 1-8, 2013.
+
+[9] I. Mordatch, Z. Popović, and E. Todorov, "Contact-invariant optimization for hand manipulation," in ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2012, pp. 137-144.
+
+[10] M. Neunert et al., "Whole-body nonlinear model predictive control through contacts for quadrupeds," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1458-1465, 2018.
+
+[11] M. Toussaint, "Logic-geometric programming: An optimization-based approach to combined task and motion planning," in Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
+
+[12] B. Aceituno-Cabezas and A. Rodriguez, "A global quasi-dynamic model for contact-trajectory optimization," in Robotics: Science and Systems (RSS), 2020.
+
+[13] N. Doshi, F. R. Hogan, and A. Rodriguez, "Hybrid differential dynamic programming for planar manipulation primitives," in IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 6759-6765.
+
+[14] A. Cauligi et al., "CoCo: Online mixed-integer control via supervised learning," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1447-1454, 2021.
+
+[15] C. Chen, P. Culbertson, M. Lepert, M. Schwager, and J. Bohg, "Trajectotree: Trajectory optimization meets tree search for planning multi-contact dexterous manipulation," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021, pp. 8262-8268.
+
+[16] S. Boyd et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1-122, 2011.
+
+[17] D.
Silver et al., "Mastering the game of go without human knowledge," Nature, vol. 550, no. 7676, pp. 354-359, 2017. + +[18] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, et al., "Mastering atari, go, chess and shogi by planning with a learned model," Nature, vol. 588, no. 7839, pp. 604-609, 2020. + +[19] M. Wüthrich et al., "Trifinger: An open-source robot for learning dexterity," arXiv preprint arXiv:2008.03596, 2020. + +[20] E. Coumans and Y. Bai, "Pybullet, a python module for physics simulation for games, robotics and machine learning," http://pybullet.org + +[21] X. Cheng, E. Huang, Y. Hou, and M. T. Mason, "Contact mode guided motion planning for quasidynamic dexterous manipulation in 3d," arXiv preprint arXiv:2105.14431, 2021. \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/9Big79N1FrZ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/9Big79N1FrZ/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..831f1c107cf579767b0b6a692ead1a244626eee7 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/9Big79N1FrZ/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,277 @@ +§ EFFICIENT OBJECT MANIPULATION PLANNING WITH MONTE CARLO TREE SEARCH + +Huaijiang ${\mathrm{{Zhu}}}^{1}$ , Ludovic Righetti ${}^{1,2}$ + +Abstract-This work presents an efficient approach to object manipulation planning using Monte Carlo Tree Search (MCTS) to find contact sequences and an efficient ADMM-based trajectory optimization algorithm to evaluate the dynamic feasibility of candidate contact sequences. To accelerate MCTS, we propose a methodology to learn a goal-conditioned policy-value network used to direct the search towards promising nodes. 
Further, manipulation-specific heuristics enable us to drastically reduce the search space. Systematic object manipulation experiments in a physics simulator demonstrate the efficiency of our approach. In particular, our approach scales favorably for long manipulation sequences thanks to the learned policy-value network, significantly improving the planning success rate.
+
+§ I. INTRODUCTION
+
+The ability to plan sequences of contacts and movements to manipulate objects is central to endowing robots with sufficient autonomy to perform complex tasks. This remains, however, particularly challenging. Indeed, finding dynamically feasible sequences of contacts between the manipulator and an object typically leads to intractable combinatorial and nonlinear problems.
+
+Recently, trajectory optimization has become a popular tool for multi-contact locomotion [1]-[4] as it leads to desirable formulations to reason about interaction forces. Yet, it remains unclear how the planning of contact modes should be efficiently incorporated, primarily due to its discrete nature, which creates an undesirable consequence: discontinuity in the dynamics at contact switches. To handle this discontinuity under the trajectory optimization framework, two streams of methodologies have emerged:
+
+1) contact-invariant or contact-implicit approaches enforce contact complementarity either as hard constraints [5], [6], as penalty terms in a cost function [7]-[9], or with differentiable soft contact models [10], and
+
+2) hybrid approaches treat contact switches as discrete decisions within a continuous problem [11]-[13].
+
+In this work, we examine the latter methodology to propose an optimization framework amenable to customization with object manipulation-specific heuristics and to learning from data to improve its computational efficiency.
+
+The most common formulation of such a problem is via Mixed-integer Programming (MIP).
In the context of robot manipulation, one representative work is the Contact-Trajectory Optimization proposed in [12], where contact scheduling is modeled with binary decision variables and the non-convexity due to the cross product is relaxed by McCormick envelopes. The resulting problem is a Mixed-integer Quadratic Program (MIQP) which can be solved by off-the-shelf MIQP solvers. However, the approach has only been demonstrated on $2\mathrm{D}$ object manipulation with very short manipulation sequences. This is in contrast to our approach, which handles $3\mathrm{D}$ objects and long sequences.
+
+In the context of machine learning, CoCo, proposed in [14], finds feasible solutions to MIPs by first learning offline to map the problem parameters to the assignment of the discrete variables and then solving the resulting continuous optimization problem online. While this greatly improves the solution speed at inference time, it assumes that one is able to solve the original MIP in a reasonable amount of time to construct the dataset. If the original problem is prohibitive to solve, collecting a large dataset for this problem may not be practical without abundant computational resources.
+
+Recently, an algorithm that augments Contact-Implicit Trajectory Optimization (CITO) with tree search was proposed in [15] to incorporate domain-specific knowledge for robot manipulation. It uses depth-first search (DFS) to find a sequence of kinematically feasible contact modes that yield a stable grasp and then constrains the CITO problem with the found contact sequence.
+
+In principle, we can employ a brute-force approach to our problem: search over all possible combinations of the discrete variables and for each such combination solve the resulting continuous optimization problem. In general, such a strategy is not practical due to the factorial complexity.
However, it can be made more efficient if 1) the search space can be notably reduced, 2) good search heuristics are available, and 3) the non-convex continuous optimization problem can be solved efficiently. In this work, we show that all three requirements can be met. In particular, our contributions are
+
+1) we adapt learning-based Monte Carlo Tree Search (MCTS) to discrete contact planning problems for robotic manipulation,
+
+2) we formulate the resulting continuous optimization problem as a biconvex program to allow an efficient solution via the Alternating Direction Method of Multipliers (ADMM) [16], and
+
+3) we learn a policy-value network from data collected on short-horizon tasks which provides good heuristics for long-horizon tasks and significantly decreases the overall solution time.
+
+To the best of our knowledge, this is the first application of learning-based MCTS to contact planning for manipulation.
+
+${}^{1}$ Tandon School of Engineering, New York University, USA
+
+${}^{2}$ Max-Planck Institute for Intelligent Systems, Germany
+
+An extended version of this work has been submitted to IROS 2022 and is under review.
+
+§ II. PROBLEM STATEMENT
+
+§ A. INPUTS
+
+We aim to solve an object manipulation task similar to the Contact-Trajectory Optimization problem proposed in [12], where the following quantities are given:
+
+1) a rigid object with known geometry, friction coefficient $\mu$ , mass $m$ , moment of inertia $\mathcal{I}$ , and ${N}_{\Omega }$ pre-defined touchable regions,
+
+2) a trajectory with discretization step ${\Delta t}$ and length $T$ that consists of the desired object poses, velocities, and accelerations,
+
+3) an environment with known geometry and friction coefficient ${\mu }_{e}$ , and
+
+4) a manipulator with known kinematics that can make at most ${N}_{c}$ contacts with the object.
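Given inputs 1) and 2), the desired total force and torque on the object follow from the Newton-Euler equations of rigid-body dynamics. The sketch below illustrates this step; the function name, the world-frame convention, and the gravity vector are our assumptions and not part of the original formulation:

```python
import numpy as np

def desired_wrench(m, I_body, R, a, omega, domega,
                   g=np.array([0.0, 0.0, -9.81])):
    """Desired total force/torque on the object (Newton-Euler, world frame).

    m: mass, I_body: 3x3 inertia matrix in the body frame, R: object rotation,
    a: linear acceleration, omega/domega: angular velocity/acceleration.
    """
    I_world = R @ I_body @ R.T           # inertia expressed in the world frame
    f_des = m * (a - g)                  # gravity must be compensated
    tau_des = I_world @ domega + np.cross(omega, I_world @ omega)
    return f_des, tau_des

# Static hold of the 0.5 kg cube from Sec. IV: only gravity compensation remains.
f, tau = desired_wrench(0.5, np.eye(3) * 1e-3, np.eye(3),
                        np.zeros(3), np.zeros(3), np.zeros(3))
```

At a trajectory point with zero velocity and acceleration, the desired wrench reduces to gravity compensation, which is consistent with the contact-switch assumption made later in Sec. III.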
+ +At the $t$ -th time step, given the object motion and the object dynamics, we can compute the desired total force ${f}_{\text{ des }}\left( t\right)$ and torque ${\tau }_{\text{ des }}\left( t\right)$ to be applied to the object from rigid-body dynamics. In addition, as the geometry of the object and the environment as well as the object motion are known, we can obtain ${N}_{e}\left( t\right)$ environment contact locations ${r}_{e}\left( t\right)$ for $e \in \left\{ {1,\ldots ,{N}_{e}\left( t\right) }\right\}$ at each time step $t$ by checking the collisions between the object and the environment, assuming uniform pressure distribution. + +§ B. OUTPUTS + +For each time step $t$ , we aim to find the following: + +1) the contact region ${\Omega }_{c}\left( t\right) \in \left\{ {0,1,\ldots ,{N}_{\Omega }}\right\}$ , the contact force ${f}_{c}\left( t\right)$ and the contact location ${r}_{c}\left( t\right)$ for each contact point $c$ of the manipulator; ${\Omega }_{c}\left( t\right) = 0$ indicates that the $c$ -th contact point is not in contact, and + +2) the environment contact force ${f}_{e}\left( t\right)$ + +such that the forces and torques sum to the desired ones + +$$ +\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{f}_{c}\left( t\right) + \mathop{\sum }\limits_{{e = 1}}^{{{N}_{e}\left( t\right) }}{f}_{e}\left( t\right) = {f}_{\text{ des }}\left( t\right) \tag{1a} +$$ + +$$ +\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{r}_{c}\left( t\right) \times {f}_{c}\left( t\right) + \mathop{\sum }\limits_{{e = 1}}^{{{N}_{e}\left( t\right) }}{r}_{e}\left( t\right) \times {f}_{e}\left( t\right) = {\tau }_{\mathrm{{des}}}\left( t\right) . \tag{1b} +$$ + +§ III. 
METHOD
+
+The problem described above is challenging even though the desired object motion is provided, as one needs to find both the discrete contact regions ${\Omega }_{c}\left( t\right)$ and the continuous manipulator contact forces ${f}_{c}\left( t\right)$ , contact locations ${r}_{c}\left( t\right)$ , and environment contact forces ${f}_{e}\left( t\right)$ .
+
+§ A. DISCRETE CONTACT PLANNING VIA MCTS
+
+A series of learning-based MCTS algorithms has been proposed in [17], [18] for the game-playing agents AlphaGo Zero and MuZero. We adapt this approach to solve the discrete contact region planning problem and refer to the algorithm as Policy-Value Monte Carlo Tree Search (PVMCTS): for a given object motion $\xi$ , we want to find the contact region for each contact point $c$ at each time step $t$ . The whole sequence is then evaluated to return a reward $r$ that guides future search.
+
+1) Assumptions: To reduce the search space, we make the following assumptions:
+
+ * Persistent contact: While the downstream continuous optimization problem may have a small discretization step, for example ${\Delta t} = {0.1}\mathrm{\;s}$ , most manipulation tasks do not require contact switches at such a high frequency. Thus, we assume that a contact point must remain in the same region for $d$ time steps.
+
+ * Contact switch: We allow at most one contact point to break or make contact at each contact switch, and we only allow contact switches when the desired object velocity and acceleration are zero.
+
+ * Contact region: Each contact region can be touched by at most one contact point.
+
+2) State and action representation: With the assumptions above, at each planning step $n$ , the PVMCTS chooses for each contact point $c$ its contact region for the next $d$ time steps, hence ${a}_{n} = {\left\lbrack {\Omega }_{1}\left( t\right) ,\ldots ,{\Omega }_{{N}_{c}}\left( t\right) \right\rbrack }_{t = {nd}}^{{nd} + d}$ , and the state ${s}_{n}$ is simply the concatenation of all previous actions.
+
+3) Heuristics: To reduce the search space, we apply the following heuristics to further limit the size of the legal action set $\mathcal{A}\left( s\right)$ :
+
+ * Kinematic feasibility: For each contact point $c$ , a contact region is only considered if inverse kinematics can find a manipulator configuration that reaches the center of this region within an error threshold of $1\mathrm{\;{cm}}$ .
+
+ * Number of contacts: For time steps where the angular acceleration is nonzero, we require at least $\min \left( {{N}_{c},3}\right)$ contact points to be in contact.
+
+4) Reward function: Once the PVMCTS reaches a terminal state, i.e., ${Nd} = T$ , we obtain a sequence of contact regions ${\left\lbrack {\Omega }_{1}\left( t\right) ,\ldots ,{\Omega }_{{N}_{c}}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$ , which is used to construct a continuous optimization problem that solves for the contact forces ${f}_{c}\left( t\right) ,{f}_{e}\left( t\right)$ and the contact locations ${r}_{c}\left( t\right)$ (cf. Sec III-B). To evaluate this solution, we integrate it with the semi-implicit Euler method to obtain an object pose $\widehat{q}\left( {T - 1}\right)$ . We then compare it with the desired pose $q\left( {T - 1}\right)$ to compute a weighted distance
+
+$$
+D\left( {q,\widehat{q}}\right) = \parallel p - \widehat{p}\parallel + \beta \begin{Vmatrix}{\log \left( {{\widehat{R}}^{\top }R}\right) }\end{Vmatrix}, \tag{2}
+$$
+
+where $\beta > 0$ scales the angular distance.
The weighted distance, if within the threshold $D \leq {D}_{\text{ th }}$ , is then normalized to $\left\lbrack {0,1}\right\rbrack$ to obtain the reward.
+
+5) Goal-conditioned policy-value network: Note that each PVMCTS instance only searches for the contact sequence of a given object motion $\xi$ ; the rewards are thus motion-specific. To also allow learning from object motion information, we define an intermediate goal ${\lambda }_{n} = \left\lbrack {q\left( {nd}\right) ,q\left( {{nd} + h}\right) }\right\rbrack$ for each planning step $n$ that consists of the current desired object pose $q\left( {nd}\right)$ and the future one $q\left( {{nd} + h}\right)$ in $h$ steps. Fig 1 depicts the policy-value network architecture.
+
+6) Value classifier: One key difference between our task and generic game play is that our dataset is highly imbalanced: many contact sequences explored by the PVMCTS are dynamically infeasible, resulting in rewards that equal zero. Directly training on such a dataset leads to underestimation of the value function. Instead, we only train our policy-value network on samples that incur positive rewards. Additionally, on the entire dataset $\mathcal{D}$ , we train a binary classifier ${C}_{\phi }\left( s\right)$ with logistic regression where positive samples are given more weight. At inference time, a state is only fed into the policy-value network if the classifier labels it as positive; otherwise, we simply output zero value ${v}_{\theta }\left( s\right) = 0$ and uniformly distributed action probabilities ${p}_{\theta }\left( {s,a}\right) = \frac{1}{\left| \mathcal{A}\left( s\right) \right| }$ .
+
+Fig. 1: Schematic diagram of the policy-value network architecture. Activation functions and regularization layers such as Dropout and BatchNorm are omitted.
+
+§ B.
CONTINUOUS CONTACT OPTIMIZATION VIA ADMM + +Now let us consider the sub-problem where we already obtained a sequence of contact regions ${\left\lbrack {\Omega }_{1}\left( t\right) ,\ldots ,{\Omega }_{{N}_{c}}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$ for each contact point $c$ : we can find the contact forces and locations by solving a continuous optimization problem with the following constraints and cost. For brevity, we omit the time indices if there is no ambiguity. + +1) Dynamics: The contact forces and torques must sum to the desired ones + +$$ +\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{f}_{c} + \mathop{\sum }\limits_{{e = 1}}^{{N}_{e}}{f}_{e} = {f}_{\text{ des }} \tag{3a} +$$ + +$$ +\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{r}_{c} \times {f}_{c} + \mathop{\sum }\limits_{{e = 1}}^{{N}_{e}}{r}_{e} \times {f}_{e} = {\tau }_{\text{ des }}. \tag{3b} +$$ + +2) Contact location: The contact location must be inside the given contact region ${\Omega }_{c}$ for ${\Omega }_{c} \neq 0$ . + +3) Contact force: If the $c$ -th contact point is not in contact with any contact region, hence ${\Omega }_{c} = 0$ , the contact force is set to zero. Note that this is not a complementarity constraint as ${\Omega }_{c}$ is already given. + +4) Sticking contact: If the $c$ -th contact point is in contact with the same region at two consecutive time steps, then the contact location remains the same to prevent the manipulator from sliding on the object. + +5) Coulomb friction: The contact force has to stay inside the friction cone of the given surface. Note that the environment contact can be either sticking or sliding depending on the velocity of the contact point ${\dot{r}}_{e}\left( t\right)$ relative to the environment, which can be obtained from the object motion. 
+
+6) Performance cost: Finally, we minimize a quadratic objective function that avoids applying large forces at the boundary of the contact region
+
+$$
+J = \mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{\begin{Vmatrix}{f}_{c}\left( t\right) \end{Vmatrix}}^{2} + {\begin{Vmatrix}{r}_{c}\left( t\right) \end{Vmatrix}}^{2}. \tag{4}
+$$
+
+7) Biconvex decomposition: The continuous optimization problem described above has an interesting feature: the only non-convex constraint, (3b), due to the cross product ${r}_{c} \times {f}_{c}$ , is in fact biconvex. When we group the decision variables into two sets $x = {\left\lbrack {r}_{c}\left( t\right) ,{\alpha }_{c}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$ and $z = {\left\lbrack {f}_{c}\left( t\right) ,{f}_{e}\left( t\right) \right\rbrack }_{t = 0}^{T - 1}$ , we can re-write the original problem in the standard ADMM form with a biconvex constraint
+
+$$
+G\left( {x,z}\right) = \mathop{\sum }\limits_{{c = 1}}^{{N}_{c}}{r}_{c} \times {f}_{c} + \mathop{\sum }\limits_{{e = 1}}^{{N}_{e}}{r}_{e} \times {f}_{e} - {\tau }_{\mathrm{{des}}} = 0. \tag{5}
+$$
+
+As all other constraints are separable in $x$ and $z$ , they can be added as indicator functions to the objective and, at each ADMM update step, solved as standard constrained Quadratic Programs (QPs).
+
+§ IV. EXPERIMENTS
+
+We conduct simulation experiments to show that our framework 1) is capable of finding dynamically feasible solutions to the manipulation planning problems defined in Sec. II, and 2) scales to long-horizon tasks even when trained only on data collected from shorter-horizon tasks.
+
+§ A.
EXPERIMENT SETUP
+
+Throughout all experiments, we consider a manipulator with ${N}_{c} = 2$ contact points, composed of two modular robot fingers similar to the ones used in [19], and a ${10}\mathrm{\;{cm}} \times {10}\mathrm{\;{cm}} \times {10}\mathrm{\;{cm}}$ cube with mass $m = {0.5}\mathrm{\;{kg}}$ on an infinitely large plane. The cube and the plane have the same friction coefficient $\mu = {\mu }_{e} = {0.8}$ . We consider the following primitive object motions and their compositions: 1) Sliding (S), 2) Sliding with curvature (SC), 3) Rotating (R), 4) Lifting (L), and 5) Pivoting (P), generated by interpolating between the initial and desired object poses. An interpolated trajectory for a single primitive motion has $T = {48}$ time steps and lasts ${4.8}\mathrm{\;s}$ . We require each contact point to remain in the same region for $d = 8$ time steps; hence, the trajectory has $N = 6$ contact modes. The trajectory always starts with zero velocity and acceleration for ${2.4}\mathrm{\;s}$ , allowing at most two contact switches.
+
+§ B. METRICS
+
+We examine three performance metrics to evaluate the effectiveness and efficiency of our method:
+
+1) Pose error: the error between the desired pose and the one integrated from the solution.
+
+2) Number of evaluations: the number of continuous optimization problems the PVMCTS needs to solve until it finds the first feasible solution below the error threshold ${D}_{\text{ th }}$ .
+
+3) Solution time: the total time needed to find the first feasible solution.
+
+TABLE I: Task performance for motions interpolated from randomly sampled poses with various lengths. Pose errors are calculated only for successful tasks.
+
+| #Object motions | Trajectory length $T$ | Model | Success rate | Error [cm, °] Avg | Error [cm, °] Worst | #Evaluations Avg | #Evaluations Worst | Time [s] Avg | Time [s] Worst |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | 48 | Untrained | 20/20 | 0.16, 1.18 | 0.57, 5.89 | 4.65 | 11 | 2.09 | 4.88 |
+| 1 | 48 | Trained | 20/20 | 0.15, 0.39 | 0.24, 0.83 | 1.5 | 4 | 0.71 | 1.73 |
+| 2 | 96 | Untrained | 20/20 | 0.35, 1.23 | 0.79, 2.24 | 8.15 | 25 | 8.54 | 21.88 |
+| 2 | 96 | Trained | 20/20 | 0.32, 0.88 | 0.48, 1.78 | 2 | 5 | 1.96 | 4.68 |
+| 3 | 144 | Untrained | 12/20 | 0.48, 1.86 | 0.91, 5.98 | 29.85 | 50 | 46.23 | 84.63 |
+| 3 | 144 | Trained | 20/20 | 0.43, 1.81 | 0.58, 4.84 | 2.3 | 8 | 3.18 | 9.43 |
+| 4 | 192 | Untrained | 5/20 | 0.61, 1.95 | 0.74, 2.13 | 43.05 | 50 | 93.57 | 137.31 |
+| 4 | 192 | Trained | 20/20 | 0.65, 2.56 | 1.59, 6.92 | 2.8 | 16 | 6.12 | 31.02 |
+
+§ C. UNTRAINED PVMCTS
+
+In this set of experiments, we show that our method is capable of generating feasible contact plans for primitive object motions using an untrained PVMCTS. The network outputs are simply set to ${v}_{\theta }\left( s\right) = 0$ and ${p}_{\theta }\left( {s,a}\right) = \frac{1}{\left| \mathcal{A}\left( s\right) \right| }$ .
+
+1) Tasks: For each primitive object motion, we consider the desired poses summarized in Table II. They are given relative to the initial object pose, and the orientation is expressed in the axis-angle representation.
+
+TABLE II: Desired object poses for various primitive motions.
+
+| Task | Position [cm] | Orientation [°] |
+| --- | --- | --- |
+| $S$ | [0, 10, 0] | [0, 0, 0] |
+| ${SC}$ | [0, 5, 0] | [0, 0, 45] |
+| $R$ | [0, 0, 0] | [0, 0, 90] |
+| $L$ | [0, 0, 10] | [0, 0, 0] |
+| $P$ | [5, 0, 2] | [0, 45, 0] |
+
+2) Results: Table III shows that our method, even with an untrained model, is capable of finding dynamically feasible solutions for primitive object motions after only a handful of evaluations on average. Indeed, the heuristics we proposed greatly reduce the search space while still allowing the discovery of dynamically feasible contact plans that result in small pose errors for the object motions considered in this task.
+
+3) Executing the contact plan: To validate the solution found by ADMM, we execute the contact plan in an open-loop fashion with a simple impedance controller for each finger in the PyBullet simulator [20]. In simulation, the robot is able to move the object towards its desired pose even without feedback of the actual object pose.
+
+TABLE III: Task performance for primitive object motions.
+
+| Task | Error [cm, °] Avg | Error [cm, °] Worst | #Evaluations Avg | #Evaluations Worst | Time [s] Avg | Time [s] Worst |
+| --- | --- | --- | --- | --- | --- | --- |
+| $S$ | 0.25, 0.72 | 0.27, 1.69 | 6.3 | 15 | 3.10 | 7.03 |
+| ${SC}$ | 0.09, 0.42 | 0.14, 0.42 | 4.3 | 12 | 2.31 | 6.38 |
+| $R$ | 0.00, 1.69 | 0.00, 1.69 | 2.5 | 9 | 1.16 | 3.96 |
+| $L$ | 0.35, 0.00 | 0.35, 0.00 | 5.7 | 12 | 2.48 | 4.89 |
+| $P$ | 0.36, 2.12 | 0.42, 3.84 | 8.2 | 19 | 3.62 | 7.56 |
+
+§ D. LEARNING PLANAR MANIPULATION TASKS
+
+In the previous experiments, we have shown the effectiveness of our search strategy thanks to the heuristics, which greatly reduce the size of the set of legal actions $\mathcal{A}\left( s\right)$ .
Nevertheless, the search space still grows exponentially with the length of the contact sequence $N$ . It is thus natural to ask whether learning from past experience can accelerate the search.
+
+In this experiment, we show that we can significantly reduce the solution time for longer-horizon tasks even if they are not contained in the training data.
+
+1) Tasks: We consider compositions of the planar object motion ${SC}$ with randomly sampled desired poses, which are then interpolated. The trajectories in the training data all have a length of $T = {96}$ .
+
+2) Results: We evaluate the trained and untrained models on tasks that are generated by the same procedure yet have different trajectory lengths. Each task category with the same trajectory length contains 20 different randomly generated tasks. We set the maximal number of evaluations to 50; hence, a task is considered failed if no feasible solution within the error threshold is found after evaluating 50 contact sequences. Table 1 reports the performance metrics of the untrained and trained models for each task category. We see that the trained model consistently solves all the tasks, regardless of the trajectory length, while the untrained model struggles on long-horizon tasks, solving only 5 out of 20 tasks with trajectory length $T = {192}$ . In contrast to the untrained model, the average number of evaluations required by the trained model to find the first feasible solution grows rather slowly with the trajectory length.
+
+§ V. CONCLUSION
+
+In this work, we proposed a framework that combines data-driven tree search via PVMCTS and efficient non-convex optimization via ADMM to find dynamically feasible contact forces and locations to realize a given object motion. We show that the capability of learning from data allows our framework to scale to long-horizon motions even when the dataset only contains data collected from shorter motions.
+ +The most limited aspect of our approach is that the object motion must be provided. While this is possible for simple tasks, true dexterity requires automatic generation of the object motion by reasoning about the environment, which can be achieved by enumerating not only the manipulator contacts but also the environment contacts as proposed in [21]. It is thus an interesting future research direction to incorporate such a component. \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/B_bgRmVoir/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/B_bgRmVoir/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..981cb45018000d0d41a69b441272ae2e7ff6b285 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/B_bgRmVoir/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,193 @@ +# Learning Goal-Oriented Non-Prehensile Pushing in Cluttered Scenes + +Nils Dengler David Großklaus Maren Bennewitz + +Abstract-Pushing objects through cluttered scenes is a challenging task, especially when the objects to be pushed have initially unknown dynamics and touching other entities has to be avoided to reduce the risk of damage. In this paper, we approach this problem by applying deep reinforcement learning to generate pushing actions for a robotic manipulator acting on a planar surface where objects have to be pushed to goal locations while avoiding other items in the same workspace. With the latent space learned from a depth image of the scene and other observations of the environment, such as contact information between the end effector and the object as well as distance to the goal, our framework is able to learn contact-rich pushing actions that avoid collisions with other objects. 
As the experimental results with a six degrees-of-freedom robotic arm show, our system is able to successfully push objects from start to end positions while avoiding nearby objects. Furthermore, we evaluate our learned policy in comparison to a state-of-the-art pushing controller for mobile robots and show that our agent performs better in terms of success rate, collisions with other objects, and continuous object contact in various scenarios.
+
+## I. INTRODUCTION
+
+Pushing is often used for re-positioning and re-orienting objects since it simplifies object manipulation in comparison to pick-and-place approaches. Furthermore, pushing allows for moving large, heavy, and irregularly shaped, as well as small and fragile, objects to target positions and can be used for reducing uncertainty in the position of objects [1]. The term pushing covers both non-prehensile pushing [2] and prehensile pushing (push-grasp) [3], [4]. For example, in limited space [5], [6] and when dealing with fragile objects, non-prehensile pushing is the preferred manipulation action, since grasping increases the risk of damage. In the past, pushing has been used to separate objects for better grasping [7], [8] or to sort objects from a table into a bin [9], and it is assumed to be more time-efficient than grasping for covering short distances [10].
+
+In general, pushing actions should be contact-rich with smooth arm motions. Furthermore, contact with other objects in the workspace should be avoided to prevent damage and changes to the configuration of the scene. While pushing behaviors were long created analytically using expert knowledge, more and more work focuses on reinforcement learning (RL) to solve this task. In particular, the ability to learn from environment interactions and own experience makes RL a useful way to learn challenging new skills.
Start-to-goal pushing with an RL agent has been tackled before [13] and serves as a benchmark for RL [14]; however, pushing in cluttered environments where collisions with other objects have to be avoided is a less researched area. While there are already approaches for mobile bases [15], [16], they have not been transferred to robotic manipulators so far.
+
+![01963e1d-38bf-7068-b601-3819c1c340da_0_983_489_592_599_0.jpg](images/01963e1d-38bf-7068-b601-3819c1c340da_0_983_489_592_599_0.jpg)
+
+Fig. 1: Targeted application scenario of our system within the RePAIR-project ${}^{1}$ . The goal is to push the small fragment to the desired goal pose (green). Shown in magenta is the best pushing path, which maintains a safety distance to the other objects.
+
+In this paper, we present a framework to train an RL agent that is able to realize obstacle-aware pushing in a contact-rich manner to guide objects with initially unknown dynamics on a planar surface to desired target configurations with a robotic manipulator. As a representation of the workspace, we use a depth image taken from a bird's eye view. To reduce the size of the observation space and therefore the complexity, we use the latent space of a variational autoencoder. To accelerate learning, we calculate subgoals from an optimal $2\mathrm{D}$ path in a grid representation of the environment generated from the depth image. In addition, we use further observations, such as contact information between the end effector of the manipulator and the object as well as the distance to the goal. The output of our system is an incremental motion of the current $\left( {x, y,\theta }\right)$ -position of the robot’s end effector. Fig. 1 illustrates a targeted application scenario from the RePAIR-project ${}^{1}$ . The goal is to push the small fresco fragment to the desired position in a gentle manner while not damaging it or any other fragment on the assembly table.
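The subgoal computation described above amounts to a shortest-path search on an occupancy grid thresholded from the depth image, with subgoals sampled along the resulting path. The sketch below uses breadth-first search on a 4-connected grid; the function and variable names are illustrative, not the authors' implementation:

```python
from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first shortest path on a 2D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                      # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no collision-free path exists

# Subgoals could then be taken every k cells along the returned path.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = grid_path(grid, (0, 0), (2, 0))
```

On the toy grid above, the path must detour around the obstacle row, mirroring how the magenta path in Fig. 1 keeps clear of the other fragments.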
---

All authors are with the Humanoid Robots Lab, University of Bonn, Germany. This work has partially been funded by the European Commission under grant agreement number 964854-RePAIR - H2020-FETOPEN-2018-2020 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy, EXC-2070 - 390732324 - Phenorob.

---

Regarding related work, our approach differs from that of Bejjani et al. [4] in terms of object avoidance, uses relative coordinates like Migimatsu et al. [17], and includes tactile force information, as proposed by Lee et al. [18] and Lin et al. [13]. Furthermore, our approach builds on ideas from Krivic et al. [15] and Regier et al. [19].

The key contributions of our work are the following:

- A model-free RL system that learns to generate smooth pushing paths with contact-rich pushing actions to reach the object's target positions in cluttered environments, thereby avoiding contact with other nearby objects.

- A qualitative and quantitative evaluation in simulation in comparison to a state-of-the-art pushing controller [15], which we adapted to our scenario.

As the experiments show, our system leads to reliable pushing, while achieving better performance compared to [15] with respect to success rate, collisions with other objects, and continuous object contact in various scenarios.

## II. Problem Description

In this work, we consider the following problem. In a tabletop environment, a robotic arm is supposed to move an object from its current position to a 2D goal configuration. To achieve this, we consider the end effector (EE) of the arm moving in planar space $(x, y, \theta)$. The robotic arm can have any number of degrees of freedom (DOF). In addition to the pushing object, there are other objects that need to be considered as obstacles and that might obstruct the direct path to the end configuration. The obstacles have to be avoided by the EE and the object at all times. 
The goal of the RL agent is to determine the best incremental movement $(\Delta x, \Delta y, \Delta \theta)$ of its EE position at each time step, to move the object with the EE as fast and as safely as possible to the goal position while avoiding obstacles on the way. An RGB-D camera is mounted centered above the scene in bird's eye view to obtain observations of the objects in the workspace.

## III. OUR APPROACH

We apply deep reinforcement learning to solve the task described above. This is motivated by the fact that we expect to obtain smoother trajectories than we would get with a pure control-based approach. Especially for traversing narrow passages, the lack of parameter tuning can be beneficial. We use a variational autoencoder (VAE) to decouple the feature extraction of the given depth image from the policy learning process [20]. Fig. 2 shows an overview of our proposed system. In the following, we describe our RL framework in detail.

## A. Reinforcement Learning

For the implementation, we followed ideas by Regier et al. [19], who proposed an RL framework for navigating in cluttered environments with a mobile robot. In the following, we define the action and observation space, the reward function, the used RL algorithm, the experience replay buffer strategy, as well as the learning strategy.

1) Action Space: Our action space consists of the three values $(\Delta x, \Delta y, \Delta \theta)$, which are the increments to the current $x$ and $y$ position, as well as to the yaw angle $\theta$ of the gripper. We set the maximum value of $(\Delta x, \Delta y, \Delta \theta)$ to the maximum distance change possible during one predefined time window. 
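The scaling of the action described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the per-step limits are hypothetical placeholders, not values from the paper.

```python
# Hypothetical per-step limits: |dx|, |dy| in meters, |dtheta| in radians.
MAX_STEP = (0.02, 0.02, 0.1)

def to_ee_increment(action):
    """Map a raw policy action in [-1, 1]^3 to (dx, dy, dtheta):
    clip each component, then scale by its per-step limit."""
    return tuple(max(-1.0, min(1.0, a)) * m for a, m in zip(action, MAX_STEP))

print(to_ee_increment((0.5, -2.0, 1.0)))
```

Clipping the raw network output before scaling keeps the commanded EE motion within what the arm can execute in one control window, regardless of what the policy emits.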
2) Observation Space: The observation space of our RL agent consists of 49 values, namely: the latent space (32), the EE position (5), the 6D joint angles (6), the 2D subgoals at $t-1$ and $t-5$ $(2 \times 2)$, an indication of EE contact with an obstacle (1), and the object-to-goal distance (1). To give the agent an indication of the best pushing direction, we include two subgoals of different time steps into the observation, which we calculate from the current shortest path. The shortest path is calculated on a binary map obtained from the depth image, where all obstacles are inflated by half of the object's diameter. The agent never receives the complete shortest path in its observation and re-calculates it at each time step. Therefore, our agent is not constructed as a path-following agent but learns the best pushing behavior during training.

3) Reward: Our reward function consists of the following three components:

$$
r_{\text{dist}} = \begin{cases} 50, & \text{if goal reached} \\ -r_{g\_\text{dist}} - r_{o\_\text{dist}}, & \text{otherwise} \end{cases} \tag{1}
$$

$$
r_{\text{collision}} = \begin{cases} -10, & \text{if object out of bounds} \\ -5, & \text{if collision occurred} \end{cases} \tag{2}
$$

$$
r_{\text{touch}} = \begin{cases} r_{o\_\text{dist}}, & \text{if contact with object} \\ 0, & \text{otherwise} \end{cases} \tag{3}
$$

The first equation encourages faster learning since it penalizes larger distances between object and goal as well as between object and EE via $r_{g\_\text{dist}}$ and $r_{o\_\text{dist}}$. $r_{\text{collision}}$ penalizes each collision of the object with clutter in the scene or the object being pushed out of bounds. The last part of the reward, $r_{\text{touch}}$, follows the suggestion of Lin et al. [13]. 
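The reward terms of Eqs. (1)-(3), combined into the agent's total reward, can be sketched as follows. This is an illustrative sketch, not the authors' code: the constants come from the equations, and the zero reward in the no-collision case is a plain reading of the text.

```python
def total_reward(goal_reached, out_of_bounds, collided, ee_contact,
                 r_g_dist, r_o_dist):
    """r_g_dist / r_o_dist are the object-goal and EE-object
    distance penalties (assumed non-negative)."""
    # Eq. (1): large bonus on success, otherwise distance penalties.
    r_dist = 50.0 if goal_reached else -r_g_dist - r_o_dist
    # Eq. (2): penalize leaving the table or hitting clutter.
    if out_of_bounds:
        r_collision = -10.0
    elif collided:
        r_collision = -5.0
    else:
        r_collision = 0.0
    # Eq. (3): cancel the EE-object penalty while in contact,
    # which encourages contact-rich pushing.
    r_touch = r_o_dist if ee_contact else 0.0
    return r_dist + r_collision + r_touch

# While the EE touches the object, only the goal-distance penalty remains.
print(total_reward(False, False, False, True, r_g_dist=0.4, r_o_dist=0.1))
```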
Since we calculate the distance between the EE and the center of the object, a small distance value remains even when the EE is in contact with the object. Therefore, we negate the $r_{o\_\text{dist}}$ penalty of $r_{\text{dist}}$ each time the EE is in contact with the object, to encourage contact-rich behavior.

Together, all three parts form the total reward function of our agent:

$$
r_{\text{total}} = r_{\text{dist}} + r_{\text{collision}} + r_{\text{touch}}
$$

---

${}^{1}$ https://www.repairproject.eu

---

![01963e1d-38bf-7068-b601-3819c1c340da_2_231_140_1330_527_0.jpg](images/01963e1d-38bf-7068-b601-3819c1c340da_2_231_140_1330_527_0.jpg)

Fig. 2: Overview of our deep reinforcement learning pushing framework. Our system receives a depth image of the environment taken from an RGB-D camera. We compute an object-centered egocentric local window and feed it into the variational autoencoder to obtain the latent space. Furthermore, a global path from the current object position to the goal position, including subgoals, is calculated. The latent space, the subgoals, and further observations from the environment are used as the concatenated observation for the policy network of the deep RL agent, which computes the best 3D incremental motion.

![01963e1d-38bf-7068-b601-3819c1c340da_2_152_863_1479_277_0.jpg](images/01963e1d-38bf-7068-b601-3819c1c340da_2_152_863_1479_277_0.jpg)

Fig. 3: Figures (a) to (c) depict example environments used for training and the quantitative evaluation. (d) and (e) show unseen, complex environments to further evaluate the performance of our system, and (f) shows the objects to be pushed. All objects have the same weight but differ in their geometrical shape. As the pushing object during training, we used the red cube. In a curriculum learning manner, we rotated the obstacle in (a) and varied its size during training. 
Furthermore, we decreased the distance between the obstacles in (b) and (c) from $20\;\mathrm{cm}$ to $10\;\mathrm{cm}$, making the task more difficult.

4) RL Algorithm: We evaluated three popular off-policy algorithms: Soft Actor-Critic (SAC) [21], Twin Delayed Deep Deterministic policy gradient (TD3) [22], and Truncated Quantile Critics (TQC) [23]. In our experiments, TQC led to the best and most reproducible results and is therefore used throughout.

5) Attentive Experience Replay: The experience replay strategy enables agents to learn from previous experiences made while interacting with the environment. We use the Attentive Experience Replay (AER) strategy, which samples entries from the replay buffer according to the similarity between an entry's state and the current state of the agent.

6) Learning Strategy: As the agent's learning strategy, we chose curriculum learning, which divides the task into subtasks and learns them one after another in increasing difficulty. We began the training with a maximum start-goal Euclidean distance of ${0.06}\;\mathrm{m}$ and increased it during training up to ${0.6}\;\mathrm{m}$. As training environments, we used the scenes shown in Fig. 3. The agent was trained for $7 \times 10^{6}$ iterations. Without curriculum learning, the agent was not able to learn the task.

## IV. EXPERIMENTS

The goal of our experiments is to demonstrate the performance of our system qualitatively and quantitatively in free space as well as in obstacle-laden environments in terms of success rate, object contact, number of collisions, and shortest path deviation, i.e., normalized inverse path length (SPL) [24]. Furthermore, we provide a comparative evaluation against a state-of-the-art pushing control approach by Krivic et al. [15]. We performed the evaluation in PyBullet [25] with a 6-DOF UR5${}^{2}$ equipped with a Robotiq 2F-85 two-finger gripper${}^{3}$. 
The implementation of our learning framework with all hyperparameters, as well as the reimplementation of the baseline approach, is available on GitHub${}^{4}$.

## A. Quantitative Evaluation

The quantitative evaluation consists of three parts, i.e., pushing with known and unknown objects in scenes with obstacles, and pushing in previously unseen, highly cluttered scenes. All metrics except the success rate and the SPL are evaluated only on episodes that both methods could solve successfully.

---

${}^{2}$ https://www.universal-robots.com/products/ur5-robot/

${}^{3}$ https://robotiq.com/products/2f85-140-adaptive-robot-gripper

${}^{4}$ https://github.com/NilsDengler/cluttered-pushing

---

![01963e1d-38bf-7068-b601-3819c1c340da_3_150_136_737_400_0.jpg](images/01963e1d-38bf-7068-b601-3819c1c340da_3_150_136_737_400_0.jpg)

Fig. 4: Qualitative results from the quantitative evaluation of our approach (a) in comparison to the baseline [15] (b). Red indicates the start, green the goal position, and magenta the initial shortest path calculated by Lazy Theta* [26]. The path taken by the end effector is shown in black and the path of the object in blue. The grey area in (b) shows the increased traversal costs around obstacles used for the baseline approach, while the obstacles in our approach (a) are inflated only by a small amount, namely by half of the object's diameter.

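For reference, the SPL metric [24] reported in Tab. I can be sketched as follows: $\mathrm{SPL} = \frac{1}{N}\sum_i S_i \, \frac{l_i}{\max(p_i, l_i)}$, where $S_i$ indicates success, $l_i$ is the shortest-path length, and $p_i$ is the length of the path actually taken. The code below is an illustrative sketch with made-up episode data, not the authors' evaluation script.

```python
def spl(episodes):
    """episodes: iterable of (success: bool, shortest_len, taken_len)."""
    total = sum(s * l / max(p, l) for s, l, p in episodes)
    return total / len(episodes)

episodes = [
    (True, 1.0, 1.25),   # success, path 25% longer than the shortest
    (True, 1.0, 1.0),    # success along the shortest path
    (False, 1.0, 0.8),   # failure contributes 0 regardless of length
]
print(spl(episodes))  # 0.6 = (0.8 + 1.0 + 0.0) / 3
```

Because failed episodes contribute zero, SPL jointly rewards reaching the goal and doing so efficiently, which is why it complements the raw success rate in the tables.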
(a)

| object avoidance | Success Rate | Object Contact Rate * | Collision Rate | SPL | Path Length * |
|---|---|---|---|---|---|
| Ours | 0.980 | 0.995 ± 0.02 | 0.008 ± 0.04 | 0.910 | 0.523 ± 0.18 |
| Krivic et al. [15] | 0.955 | 0.850 ± 0.10 | 0.011 ± 0.05 | 0.952 | **0.513** ± 0.16 |

(b)

| Fragment | Success Rate | Object Contact Rate * | Collision Rate * | SPL | Path Length * |
|---|---|---|---|---|---|
| Ours | 0.867 | 0.980 ± 0.05 | 0.05 ± 0.11 | 0.71 | 0.630 ± 0.29 |
| Ours re-trained | 0.867 | 0.980 ± 0.05 | 0.05 ± 0.11 | 0.71 | 0.630 ± 0.29 |
| Krivic et al. [15] | 0.953 | 0.868 ± 0.11 | 0.024 ± 0.07 | 0.951 | 0.501 ± 0.16 |

(c)

| complex task | Success Rate | Object Contact Rate * | Collision Rate * | SPL | Path Length |
|---|---|---|---|---|---|
| Ours | 0.88 | 0.977 ± 0.06 | 0.065 ± 0.13 | 0.779 | 0.492 ± 0.13 |
| Krivic et al. [15] | 0.72 | 0.566 ± 0.11 | 0.01 ± 0.05 | 0.720 | 0.550 ± 0.18 |

TABLE I: Quantitative evaluation of pushing in cluttered environments with respect to success rate, object contact, collisions, normalized inverse path length (SPL), and path length in meters. The values are averages over 500 runs for (a) and (b) and over 50 runs for (c). The results are compared to the approach by Krivic et al. [15]; the metrics marked with a $*$ are significant according to a paired t-test with a p-value of 0.05. As shown, our approach achieves overall better results in terms of success rate, object contact rate, and collision rate. Please refer to the text for more details.

The object contact rate is evaluated for each episode from the moment the EE first touches the object. Both the object contact rate and the collision rate are averaged within each episode and then over all episodes. For all experiments, we randomly sampled the distance between start and goal within 0.2 to ${0.6}\;\mathrm{m}$. As the pushing object during training, we used the red object shown in Fig. 3.

1) Pushing in Scenes With Obstacles: We generated different environments as shown in Fig. 3 (a) to (c) and evaluated them with the learned object shown in (f), together with the completely unknown complex fragment shown in Fig. 1. We sampled the orientation and size of the obstacle in (a) as well as the distance between the obstacles in (b) and (c). For each object, we randomly generated 1,000 start-goal configurations within the randomly sampled environments. The results are shown in Tab. I(a) and (b). With both objects, our approach achieves a significantly higher object contact rate in comparison to the baseline, which shows the benefit of our approach in terms of gentle pushing through contact-rich behavior. In terms of the SPL, the baseline achieves better results, while there is no significantly increased path length. The baseline's higher SPL can be explained by the larger obstacle inflation it requires, as illustrated in Fig. 4, which depicts example trajectories from the experiments. As can be seen, our agent has learned to safely navigate around objects without strictly following the initial shortest path. This is a key advantage over the baseline approach, which follows the shortest path as tightly as possible due to the properties of the controller, and is crucial if the parameters are not fine-tuned. Example 4 in Fig. 4 shows a scenario where our agent pushes along a more efficient path, since it does not rely on any cost map and therefore requires no parameter tuning. As the fragment was never seen during training, we retrained the agent and achieved similar results as with the red object. This underlines that our system can serve as a general-purpose solution but can also be retrained for specific scenarios.

2) Pushing in Unseen, Complex Environments: Finally, we designed more complex tasks with the goal of evaluating the capabilities of our trained agent in unseen environments with a higher density of clutter. We randomly sampled 50 start-goal configurations in the two scenarios (Fig. 3d and e), which contain many narrow passages. The results in Tab. I(c) show the good performance in complex and completely unseen environments. Our agent achieved better results than Krivic et al. [15] in each metric except the collision rate. In particular, the contact rate is significantly increased. As already mentioned, our agent had not been trained on such scenarios; accordingly, the success rate is slightly lower in comparison to the other evaluations with the small cube. Regier et al. [19] showed that the agent benefits if it continues training in the unknown environment for a short time period.

## V. CONCLUSION

We presented a novel deep reinforcement learning approach for object pushing in cluttered tabletop environments. 
We demonstrated the efficacy of our approach in multiple simulated experiments, where the results show improved performance in comparison to an existing control-based method with respect to various metrics. We showed that the pushing behavior highly benefits from our learning approach in terms of constant object contact and smooth, obstacle-avoiding trajectories, while maintaining a path length comparable to the baseline method [15]. The evaluation of the runtime highlights that our system is capable of online pushing. The code of our system can be found on GitHub${}^{4}$ and a video on our web page${}^{5}$.

---

${}^{5}$ https://www.hrl.uni-bonn.de/publications/dengler22iros-final.mp4

---

## REFERENCES

[1] M. T. Mason, "Mechanics and planning of manipulator pushing operations," The International Journal of Robotics Research, 1986.

[2] J. Lloyd and N. F. Lepora, "Goal-driven robotic pushing using tactile and proprioceptive feedback," IEEE Transactions on Robotics, 2021.

[3] M. R. Dogar and S. S. Srinivasa, "Push-grasping with dexterous hands: Mechanics and a method," in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2010.

[4] W. Bejjani, M. R. Dogar, and M. Leonetti, "Learning physics-based manipulation in clutter: Combining image-based generalization and look-ahead planning," in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE.

[5] A. Cosgun, T. Hermans, V. Emeli, and M. Stilman, "Push planning for object placement on cluttered table surfaces," in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2011.

[6] W. Bejjani, "Learning deep policies for physics-based robotic manipulation in cluttered real-world environments," Ph.D. dissertation, University of Leeds, 2021.

[7] A. Eitel, N. Hauff, and W. Burgard, "Learning to singulate objects using a push proposal network," in Robotics research. Springer, 2020, pp. 405-419. 
[8] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser, "Learning synergies between pushing and grasping with self-supervised deep reinforcement learning," in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2018.

[9] M. Ewerton, A. Martínez-González, and J.-M. Odobez, "An efficient image-to-image translation hourglass-based architecture for object pushing policy learning," in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2021.

[10] J. Li and D. Hsu, "Push-net: Deep planar pushing for objects with unknown physical properties," in Proc. of Robotics: Science and Systems (RSS), 2018.

[11] F. Paus, T. Huang, and T. Asfour, "Predicting pushing action effects on spatial object relations by learning internal prediction models," in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA). IEEE, 2020.

[12] I. Nematollahi, O. Mees, L. Hermann, and W. Burgard, "Hindsight for foresight: Unsupervised structured dynamics models from physical interaction," in Proc. of the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). IEEE, 2020.

[13] N. Lin, L. Zhang, Y. Chen, Y. Zhu, R. Chen, P. Wu, and X. Chen, "Reinforcement learning for robotic safe control with force sensing," in WRC Symposium on Advanced Robotics and Automation (WRC SARA). IEEE, 2019.

[14] A. Raffin, "RL Baselines3 Zoo," https://github.com/DLR-RM/rl-baselines3-zoo, 2020.

[15] S. Krivic and J. Piater, "Pushing corridors for delivering unknown objects with a mobile robot," Autonomous Robots, 2019.

[16] J. Stüber, M. Kopicki, and C. Zito, "Feature-based transfer learning for robotic push manipulation," in Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA). IEEE, 2018.

[17] T. Migimatsu and J. Bohg, "Object-centric task and motion planning in dynamic environments," IEEE Robotics and Automation Letters (RA-L), 2020.

[18] M. A. Lee, Y. Zhu, P. Zachares, M. Tan, K. Srinivasan, S. 
Savarese, L. Fei-Fei, A. Garg, and J. Bohg, "Making sense of vision and touch: Learning multimodal representations for contact-rich tasks," IEEE Transactions on Robotics, 2020.

[19] P. Regier, L. Gesing, and M. Bennewitz, "Deep reinforcement learning for navigation in cluttered environments," in Proc. of the Intl. Conf. on Machine Learning and Applications (CMLA), 2020.

[20] A. Raffin, A. Hill, R. Traoré, T. Lesort, N. Díaz-Rodríguez, and D. Filliat, "Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics," arXiv preprint arXiv:1901.08651, 2019.

[21] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International Conference on Machine Learning, 2018.

[22] S. Fujimoto, H. van Hoof, and D. Meger, "Addressing function approximation error in actor-critic methods," in International Conference on Machine Learning, 2018.

[23] A. Kuznetsov, P. Shvechikov, A. Grishin, and D. Vetrov, "Controlling overestimation bias with truncated mixture of continuous distributional quantile critics," in International Conference on Machine Learning, 2020.

[24] P. Anderson, A. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V. Koltun, J. Kosecka, J. Malik, R. Mottaghi, M. Savva, et al., "On evaluation of embodied navigation agents," arXiv preprint arXiv:1807.06757, 2018.

[25] E. Coumans and Y. Bai, "PyBullet, a Python module for physics simulation for games, robotics and machine learning," http://pybullet.org, 2016-2021.

[26] A. Nash, S. Koenig, and C. Tovey, "Lazy Theta*: Any-angle path planning and path length analysis in 3D," in Proc. of the Conference on Advancements of Artificial Intelligence (AAAI), 2010. 
\ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/B_bgRmVoir/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/B_bgRmVoir/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..117ae64f017f4ce6046230b15bf143b1a3586c76 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/B_bgRmVoir/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,156 @@ +§ LEARNING GOAL-ORIENTED NON-PREHENSILE PUSHING IN CLUTTERED SCENES + +Nils Dengler David Großklaus Maren Bennewitz + +Abstract-Pushing objects through cluttered scenes is a challenging task, especially when the objects to be pushed have initially unknown dynamics and touching other entities has to be avoided to reduce the risk of damage. In this paper, we approach this problem by applying deep reinforcement learning to generate pushing actions for a robotic manipulator acting on a planar surface where objects have to be pushed to goal locations while avoiding other items in the same workspace. With the latent space learned from a depth image of the scene and other observations of the environment, such as contact information between the end effector and the object as well as distance to the goal, our framework is able to learn contact-rich pushing actions that avoid collisions with other objects. As the experimental results with a six degrees of freedom robotic arm show, our system is able to successfully push objects from start to end positions while avoiding nearby objects. Furthermore, we evaluate our learned policy in comparison to a state-of-the-art pushing controller for mobile robots and show that our agent performs better in terms of success rate, collisions with other objects, and continuous object contact in various scenarios. + +§ I. 
INTRODUCTION + +Pushing is often used for re-positioning and re-orientating objects since it simplifies the object manipulation in comparison to pick-and-place approaches. Furthermore, pushing allows for moving large, heavy, and irregularly shaped, as well as small and fragile objects to target positions and can be used for reducing uncertainty in the position of objects [1]. Hereby, the term pushing is separated in non-prehensile pushing [2] and prehensile pushing (push-grasp) [3], [4]. For example, in limited space [5], [6] and when dealing with fragile objects, non-prehensile pushing is the preferred manipulation action, since grasping increases the risk of damage. In the past, pushing has been used to separate objects for better grasping [7], [8] or to sort objects from a table into a bin [9] and is assumed to be more time-efficient than grasping to overcome short distances [10]. + +In general, pushing actions should be contact-rich with smooth arm motions. Furthermore, the contact to other objects in the workspace should be avoided to prevent any damages and changes the configuration of the scene. While for a long time, pushing behaviors were created using expert knowledge in an analytical way, more and more work is focusing on reinforcement learning (RL) to solve this task. Especially the ability to learn from environment interactions and own experiences makes RL a useful way to learn challenging new skills. Start-to-goal pushing with a RL-agent has been tackled before [13] and serves as a benchmark for RL [14], however, pushing in cluttered environments where collisions with other objects have to be avoided is a less researched area. While there are already approaches for mobile bases [15], [16], they have not been transferred to robotic manipulators so far. + + < g r a p h i c s > + +Fig. 1: Targeted application scenario of our system within the RePAIR-project ${}^{1}$ . The goal is to push the small fragment to the desired goal pose (green). 
Shown in magenta is the best pushing path, which maintains a safety distance to the other objects. + +In this paper, we present a framework to train a RL-agent that is able to realize obstacle-aware pushing in a contact-rich manner to guide objects with initially unknown dynamics on a planar surface to desired target configurations with a robotic manipulator. As representation of the workspace, we use a depth image taken from a bird's eye view. To reduce the size of the observation space and therefore the complexity, we use the latent space of a variational autoencoder. To accelerate learning, we calculate subgoals from an optimal $2\mathrm{D}$ path in a grid representation of the environment generated from the depth image. In addition, we use further observations, such as contact information between the end effector of the manipulator and the object as well as the distance to the goal. The output of our system is an incremental motion of the current $\left( {x,y,\theta }\right)$ -position of the robot’s end effector. Fig. 1 illustrates a targeted application scenario from the RePAIR-project ${}^{1}$ . The goal is to push the small fresco fragment to the desired position in a gentle manner while not damaging it or any other fragment on the assembly table. + +All authors are with the Humanoid Robots Lab, University of Bonn, Germany. This work has partially been funded by the European Commission under grant agreement number 964854-RePAIR - H2020-FETOPEN-2018- 2020 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy, EXC-2070 - 390732324 - Phenorob. + +Regarding related work, we differ compared to Bejjani et al. [4] in terms of object avoidance, use relative coordinates, like Migimatsu et al. [17] and include tactile force information, as proposed by Lee et al. [18] and Lin et al. [13]. Furthermore, we used for our approach ideas from Krivic et al. [15] and Regier et al. [19]. 
+ +The key contributions of our work are the following + + * A model-free RL system that learns to generate smooth pushing paths, with contact-rich pushing actions to reach the object's target positions in cluttered environments, thereby avoiding contact to other, nearby objects. + + * A qualitative and quantitative evaluation in simulation in comparison to a state-of-the-art pushing controller [15], which we adapted to our scenario. + +As the experiments show, our system leads to reliable pushing, while achieving better performance compared to [15] with respect to success rate, collisions with other objects, and continuous object contact in various scenarios. + +§ II. PROBLEM DESCRIPTION + +In this work we consider the following problem. In a tabletop environment, a robotic arm is supposed to move an object from its current position to a 2D goal configuration. To achieve this, we consider the end effector (EE) of the arm moving in planar space $\left( {x,y,\theta }\right)$ . The robotic arm can be of any degree of freedom (DOF). In addition to the pushing object, there are other objects which need to be considered as obstacles and which might obstruct the direct path to the end configuration. The obstacles have to be avoided by the EE and the object at all time. The goal of the RL-agent is to determine the best incremental movement $\left( {{\Delta x},{\Delta y},{\Delta \theta }}\right)$ of its EE position at each time step, to move the object with the EE as fast, but also as safe as possible to the goal position while avoiding obstacles on the way. An RGB-D camera is mounted centered above the scene in bird's eye view to obtain observations of the objects in the workspace. + +§ III.OUR APPROACH + +We apply deep reinforcement learning to solve the task described above. This is motivated by the fact that we expect to obtain smoother trajectories as we would get with a pure control-based approach. 
Especially for traversing narrow passages, the lack of parameter tuning can be beneficial. We use a variational auto encoder (VAE) to decouple the feature extraction of the given depth image from the policy learning process [20]. Fig. 2 shows an overview of our proposed system. In the following, we describe our RL framework in detail. + +§ A. REINFORCEMENT LEARNING + +For the implementation, we followed some ideas proposed by Regier et al. [19], which proposed a RL-framework to navigate in cluttered environments with a mobile robot. In the following we define the action and observation space, the reward function, the used RL-algorithm, the experience replay buffer strategy, as well as the learning strategy. + +1) Action Space: Our action space consists of the three values, $\left( {{\Delta x},{\Delta y},{\Delta \theta }}\right)$ , which are the increment to the current $x$ and $y$ position, as well as the yaw angle $\theta$ of the gripper. We set the maximum value of $\left( {{\Delta x},{\Delta y},{\Delta \theta }}\right)$ to the maximum distance change possible during one predefined time window. + +2) Observation Space: The observation space of our RL-agent consists of 49 values, namely: The latent space (32), the EE position (5), the 6D joint angles (6), the 2D subgoal at t-1 and t-5 $\left( {2 * 2}\right)$ , an EE contact with obstacle indication (1) and the object to goal distance (1). To give the agent an indication of the best pushing direction, we include two subgoals of different time steps into the observation that we calculate from the current shortest path. The shortest path is calculated on a binary map, gathered from the depth image, where all obstacles are inflated according to half of the object's diameter. The agent never receives the complete shortest path in its observation and re-calculates it at each time step. Therefore, our agent is not constructed as a path following agent but learns the best pushing behavior during training. 
+ +3) Reward: Our reward function consists of following three components: + +$$ +{r}_{\text{ dist }} = \left\{ \begin{array}{ll} {50}, & \text{ if goal reached } \\ - {r}_{{g}_{ - }\text{ dist }} - {r}_{{o}_{ - }\text{ dist }}, & \text{ otherwise } \end{array}\right. \tag{1} +$$ + +$$ +{r}_{\text{ collison }} = \left\{ \begin{array}{ll} - {10}, & \text{ if object out of bounds } \\ - 5 & \text{ if collision occurred } \end{array}\right. \tag{2} +$$ + +$$ +{r}_{\text{ touch }} = \left\{ \begin{array}{ll} {r}_{{o}_{ - }\text{ dist }}, & \text{ contact to object } \\ 0 & \text{ otherwise } \end{array}\right. \tag{3} +$$ + +The first equation encourages the agent toward a faster learning behavior since it penalizes higher distances between object and goal as well as object and EE with ${r}_{{g}_{ - }\text{ dist }}$ and ${r}_{\text{ o\_dist }}$ . ${r}_{\text{ collison }}$ penalizes each collision of the object with clutter in the scene or if the object gets pushed out of bounds. The last part of the reward ${r}_{\text{ touch }}$ considers the suggestion of Lin et al. [13]. Since we calculate the distance between the EE and the center of the object, a small distance value remains, even if the EE has contact to the object. Therefore, we negate the ${r}_{{o}_{ - }{dist}}$ penalty of ${r}_{dist}$ each time the EE has contact to the object, to encourage a contact-rich behavior. + +Together all three parts form the reward function ${r}_{\text{ total }}$ of our agent: + +$$ +{r}_{\text{ total }} = {r}_{\text{ dist }} + {r}_{\text{ collision }} + {r}_{\text{ touch }} +$$ + +${}^{1}$ https://www.repairproject.eu + + < g r a p h i c s > + +Fig. 2: Overview of our deep reinforcement learning pushing framework. Our system receives a depth image of the environment taken from an RGB-D camera. We calculate an object centered egocentric local window and feed it into the variational auto encoder to get the latent space. 
Furthermore, a global path from the current object position to the goal position, including subgoals, is calculated. The latent space, the subgoals, and further observations from the environment are concatenated into the observation for the policy network of the deep RL agent, which computes the best 3D incremental motion. + 

Fig. 3: Figures (a) to (c) depict example environments used for training and the quantitative evaluation. (d) and (e) show unseen, complex environments to further evaluate the performance of our system, and (f) the objects to be pushed. All objects have the same weight but differ in their geometrical shape. As the pushing object during training, we used the red cube. In a curriculum learning manner, we rotated the obstacle in (a) and varied its size during training. Furthermore, the distance between the obstacles in (b) and (c) decreased from 20 $\mathrm{{cm}}$ to ${10}\mathrm{\;{cm}}$ , making the task more difficult. + 

4) RL Algorithm: We evaluated two popular off-policy algorithms, namely Soft Actor-Critic [21] and Twin Delayed Deep Deterministic policy gradient (TD3) [22], as well as Truncated Quantile Critics (TQC) [23]. In our experiments, TQC led to the best and most reproducible results and is therefore used throughout. + 

5) Attentive Experience Replay: The experience replay strategy enables agents to learn from previous experiences made while interacting with the environment. We use the Attentive Experience Replay (AER) strategy, which samples entries in the replay buffer according to the similarity between an entry's state and the current state of the agent. + 

6) Learning Strategy: As the agent's learning strategy, we chose curriculum learning, which divides the task into subtasks and learns them one after another in increasing difficulty. 
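A distance-based curriculum of this kind can be expressed as a simple schedule. The sketch below is our own illustration: the 0.06 m and 0.6 m bounds come from our training setup, while the linear interpolation and the function name are assumptions.

```python
def curriculum_distance(step, total_steps, d_start=0.06, d_max=0.6):
    """Grow the maximum start-goal Euclidean distance (in meters) over
    training, from 0.06 m up to 0.6 m. The linear schedule is an
    illustrative assumption, not the exact schedule used in training."""
    frac = min(max(step / total_steps, 0.0), 1.0)  # training progress in [0, 1]
    return d_start + frac * (d_max - d_start)
```

At each episode reset, the start-goal distance would then be sampled up to `curriculum_distance(step, total_steps)`.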
We began the training with a maximum start-goal Euclidean distance of ${0.06}\mathrm{\;m}$ and increased it during training up to ${0.6}\mathrm{\;m}$ . As training environments, we used the scenes shown in Fig. 3. The agent was trained for $7 \times 10^{6}$ iterations. Without curriculum learning, the agent was not able to learn the task. + 

§ IV. EXPERIMENTS + 

The goal of our experiments is to demonstrate the performance of our system qualitatively and quantitatively, in free space as well as in obstacle-laden environments, in terms of success rate, object contact, number of collisions, and shortest-path deviation, i.e., normalized inverse path length (SPL) [24]. Furthermore, we provide a comparative evaluation against a state-of-the-art pushing control approach by Krivic et al. [15]. We performed the evaluation in pybullet [25] with a 6-DOF UR5${}^{2}$ equipped with a Robotiq 2F85 two-finger gripper${}^{3}$ . The implementation of our learning framework with all hyperparameters, as well as the reimplementation of the baseline approach, is available on GitHub${}^{4}$ . + 

§ A. QUANTITATIVE EVALUATION + 

The quantitative evaluation consists of three parts, i.e., pushing with known and unknown objects in scenes with obstacles, and in previously unseen, highly cluttered scenes. All metrics except the success rate and the SPL are evaluated only on episodes that both methods could solve successfully. + 

${}^{2}$ https://www.universal-robots.com/products/ur5-robot/ + 

${}^{3}$ https://robotiq.com/products/2f85-140-adaptive-robot-gripper + 

${}^{4}$ https://github.com/NilsDengler/cluttered-pushing + 

Fig. 4: Qualitative results from the quantitative evaluation of our approach (a) in comparison to the baseline [15] (b). Red indicates the start, green the goal position, and magenta the initial shortest path calculated by Lazy Theta* [26]. The path taken by the end effector is shown in black and the path of the object in blue. 
The grey area in (b) shows the increased traversal costs around obstacles used for the baseline approach, while the obstacles in our approach (a) are inflated only by a small amount, i.e., half of the object's diameter. + 

| (a) object avoidance | Success Rate | Object Contact Rate * | Collision Rate | SPL | Path Length * |
| --- | --- | --- | --- | --- | --- |
| Ours | 0.980 | ${0.995} \pm {0.02}$ | ${0.008} \pm {0.04}$ | 0.910 | ${0.523} \pm {0.18}$ |
| Krivic et al. [15] | 0.955 | ${0.850} \pm {0.10}$ | ${0.011} \pm {0.05}$ | 0.952 | $\mathbf{{0.513}} \pm {0.16}$ |

| (b) Fragment | Success Rate | Object Contact Rate * | Collision Rate * | SPL | Path Length * |
| --- | --- | --- | --- | --- | --- |
| Ours | 0.867 | ${0.980} \pm {0.05}$ | ${0.05} \pm {0.11}$ | 0.71 | ${0.630} \pm {0.29}$ |
| Ours re-trained | 0.867 | ${0.980} \pm {0.05}$ | ${0.05} \pm {0.11}$ | 0.71 | ${0.630} \pm {0.29}$ |
| Krivic et al. [15] | 0.953 | ${0.868} \pm {0.11}$ | ${0.024} \pm {0.07}$ | 0.951 | ${0.501} \pm {0.16}$ |

| (c) complex task | Success Rate | Object Contact Rate * | Collision Rate * | SPL | Path Length |
| --- | --- | --- | --- | --- | --- |
| Ours | 0.88 | ${0.977} \pm {0.06}$ | ${0.065} \pm {0.13}$ | 0.779 | ${0.492} \pm {0.13}$ |
| Krivic et al. [15] | 0.72 | ${0.566} \pm {0.11}$ | ${0.01} \pm {0.05}$ | 0.720 | ${0.550} \pm {0.18}$ |

TABLE I: Quantitative evaluation of pushing in cluttered environments w.r.t. success rate, object contact, collisions, normalized inverse path length (SPL), and path length in meters. The values are averages over 500 runs for (a) and (b) and over 50 runs for (c), in comparison to the approach by Krivic et al. [15]. Metrics marked with a $*$ are significant according to a paired t-test with a chosen p-value of 0.05. As shown, our approach achieves overall better results in terms of success rate, object contact rate, and collision rate. Please refer to the text for more details. + 

The object contact rate is evaluated for each episode once the EE has first touched the object. 
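For reference, the SPL metric [24] weights each episode's success by the ratio of the shortest-path length to the length of the path actually taken; the following minimal sketch uses our own variable names.

```python
def spl(episodes):
    """Success weighted by normalized inverse path length (SPL) [24].

    `episodes` is a list of (success, shortest_len, actual_len) tuples,
    where `shortest_len` is the initial shortest-path length and
    `actual_len` the length of the path actually taken."""
    total = 0.0
    for success, shortest_len, actual_len in episodes:
        if success:
            total += shortest_len / max(actual_len, shortest_len)
    return total / len(episodes)
```

An agent that always succeeds along the shortest path scores 1.0; detours and failures lower the score.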
Both the object contact rate and the collision rate are averaged within each episode and then over all episodes. For all experiments, we randomly sampled the distance between start and goal within 0.2 to ${0.6}\mathrm{\;m}$ . As the pushing object during training, we used the red object shown in Fig. 3. + 

1) Pushing in Scenes With Obstacles: We generated different environments as shown in Fig. 3 (a) to (c) and evaluated them with the learned object shown in (f), together with the completely unknown complex fragment shown in Fig. 1. We sampled the orientation and size of the obstacle in (a) as well as the distance between the obstacles in (b) and (c). For each object, we randomly generated 1,000 start-goal configurations within the randomly sampled environments. The results are shown in Tab. Ia and b. With both objects, our approach achieves a significantly higher object contact rate in comparison to the baseline, which shows the benefit of our approach in terms of gentle pushing through contact-rich behavior. In terms of the SPL, the baseline achieves better results, while there is no significantly increased path length. The higher SPL can be explained by the larger obstacle inflation necessary for the baseline approach and is illustrated in Fig. 4, which depicts example trajectories from the experiments. As can be seen, our agent has learned to safely navigate around objects without strictly following the initial shortest path. This is a key advantage over the baseline approach, which follows the shortest path as tightly as possible due to the properties of its controller, which is critical if the parameters are not fine-tuned. Example 4 of Fig. 4 shows a scenario where our agent pushes along a more efficient path, since it does not rely on any cost map and therefore requires no parameter tuning. As the fragment was never seen during training, we also retrained the agent and achieved similar results as with the red object. 
This underlines that our system can serve a general purpose but can also be retrained to specialize in given scenarios. + 

2) Pushing in Unseen, Complex Environments: Finally, we designed more complex tasks with the goal of evaluating the capabilities of our trained agent in unseen environments with a higher density of clutter. We randomly sampled 50 start-goal configurations in the two scenarios (Fig. 3d and e), which contain many narrow passages. The results in Tab. Ic show the good performance in complex and completely unseen environments. Our agent achieved better results than Krivic et al. [15] in each metric except the collision rate. In particular, the contact rate is significantly higher. As mentioned above, our agent has not been trained on such scenarios; accordingly, the success rate is somewhat lower in comparison to the other evaluations with the small cube. Regier et al. [19] showed that an agent benefits if it continues training in the unknown environment for a short period of time. + 

§ V. CONCLUSION + 

We presented a novel deep reinforcement learning approach for object pushing in cluttered tabletop environments. We demonstrated the efficacy of our approach in multiple simulated experiments, where the results show improved performance in comparison to an existing control-based method with respect to various metrics. We showed that the pushing behavior highly benefits from our learning approach in terms of constant object contact and smooth, obstacle-avoiding trajectories while maintaining an equal path length in comparison to the baseline method [15]. The evaluation of the runtime highlights that our system is capable of online pushing. The code of our system can be found on GitHub${}^{4}$ and a video on our web page${}^{5}$ . 
+ 

${}^{5}$ https://www.hrl.uni-bonn.de/publications/dengler22iros-final.mp4 \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/DhcNXrsb78p/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/DhcNXrsb78p/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..a5a5ef5ca34cfc02d9e30120ca9a88dc703d163f --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/DhcNXrsb78p/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,197 @@ +# Synthesizing and Simulating Volumetric Meshes from Vision-based Tactile Imprints + 

Xinghao Zhu ${}^{1,2}$ , Siddarth Jain ${}^{2, * }$ , Masayoshi Tomizuka ${}^{1}$ , and Jeroen van Baar ${}^{2}$ + 

Abstract-Vision-based tactile sensors typically employ a deformable elastomer and a camera to provide high-resolution contact images. This work focuses on learning to simulate and synthesize the volumetric mesh of the elastomer based on the image imprints acquired from tactile sensors. Obtaining accurate volumetric meshes for the elastomer can provide direct contact information and benefit robotic grasping and manipulation. Our method [1] follows a train-then-adapt strategy, leveraging synthetic image-mesh pairs from finite element methods (FEM) and real-world images from physical sensors. Our approach can accurately reconstruct the deformation of the real-world tactile sensor elastomer in various domains. While the proposed learning approaches have been shown to produce solutions, we discuss some limitations and challenges for viable real-world applications. + 

## I. INTRODUCTION + 

Tactile sensing is essential for humans when interacting with environments. Robotic tactile sensors can provide contact profiles during grasping and manipulation tasks. Among different designs, vision-based tactile sensors are popular variants [2]-[9]. 
They use cameras to capture the deformation of the contact elastomer with high-resolution images, as shown in Fig. 1 (a) and (b). + 

Representing the deformable elastomer with a volumetric mesh can advance the development of vision-based tactile sensors. Volumetric meshes provide accurate and direct information about the contact. This information can benefit manipulation tasks like in-hand object localization [10]- [12], vision-free manipulation [13]-[15], and contact profile reconstruction [4], [5], [16]-[18]. Moreover, meshes can be used for precise dynamics learning [19]-[21] and future state prediction [22], [23]. + 

Our method directly predicts the volumetric mesh from images of vision-based tactile sensors, such as the GelSlim [5], in a sim-to-real setting. We first employ supervised training to synthesize the volumetric mesh from image imprints gathered from synthetic 3D FEM simulations. However, directly deploying the network in the real world yields poor reconstruction results due to the sim-to-real gap. Thus, we propose data augmentations and a self-supervised adaptation method on real-world images to address this gap. Experiments demonstrate that the proposed method can transfer networks from simulation to the real world, from seen contact objects to novel contact objects, and between different GelSlim sensor instances (as shown in Fig. 1 (c) and (d)). + 

![01963e18-bdb5-768a-ad54-a811364fef7d_0_915_456_731_197_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_0_915_456_731_197_0.jpg)

Fig. 1: (a) the GelSlim visual-tactile sensor, (b) the construction of the sensor, with the elastomer (1), the transparent lens (2), the lights (3), and the camera (4), (c) a depth image observation obtained from the sensor, and (d) the corresponding reconstructed volumetric mesh with our method. The red rectangle denotes the camera's view range, and the color represents the displacement level. 
+ 

In the remainder of this paper, we illustrate the method and demonstrate the results in Sections II and III. We then discuss the limitations and future directions in Section IV. + 

## II. Methods + 

This section first introduces the problem statement and preliminaries. Next, the image-to-mesh projection and self-supervised adaptation methods are discussed. Finally, the datasets are described, including synthetic labeled data, real-world unlabeled data, and data augmentation techniques. + 

## A. Problem Statement and Preliminaries + 

This paper focuses on the problem of reconstructing an elastomer's volumetric mesh from image observations. The non-injective projection from surface images to volumetric vertices makes this problem nontrivial. Some preliminaries follow: + 

1) Image Observations: Vision-based tactile sensors make contact with objects through a silicone elastomer and use a camera to capture the deformation of the surface, as shown in Fig. 1. The captured RGB image can be used to construct a depth map using shape from shading [5], [24]. Compared to raw RGB images, depth maps better represent the geometry of the contact surface and are much easier to simulate using synthetic cameras. Thus, we use $\left( {{128} \times {128}}\right)$ depth maps $I$ as the image observations in this paper. + 

2) Volumetric Meshes with FEM: In the FEM, geometrical shapes are represented by volumetric meshes $\mathcal{M}$ . With high-resolution meshes and small computation steps, FEM can estimate the forward dynamics of soft bodies [19], [25]. This paper uses graphs to represent volumetric meshes. Specifically, a volumetric mesh is defined as a set of vertices and edges, $\mathcal{M} = \left( {\mathcal{V},\mathcal{A}}\right)$ , with $n$ vertices in $3\mathrm{D}$ Euclidean space, $\mathcal{V} \in {\mathbb{R}}^{n \times 3}$ . The adjacency matrix $\mathcal{A} \in \{ 0,1{\} }^{n \times n}$ represents the edges. 
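To make the graph representation concrete, the following sketch (our own illustration, not the authors' code) builds the binary adjacency matrix $\mathcal{A}$ from a list of tetrahedral FEM elements by connecting all vertex pairs within each tetrahedron:

```python
import numpy as np

def mesh_to_graph(vertices, tetrahedra):
    """Represent a volumetric mesh M = (V, A) as vertex positions plus a
    binary adjacency matrix, following the formulation above.

    vertices:   (n, 3) array of 3D vertex positions V.
    tetrahedra: iterable of 4-tuples of vertex indices (FEM elements)."""
    V = np.asarray(vertices, dtype=float)
    n = V.shape[0]
    A = np.zeros((n, n), dtype=np.uint8)
    for tet in tetrahedra:
        for i in range(4):
            for j in range(i + 1, 4):
                a, b = tet[i], tet[j]
                A[a, b] = A[b, a] = 1  # undirected edge
    return V, A

# A single tetrahedron yields a fully connected 4-vertex graph.
V, A = mesh_to_graph(np.eye(4, 3), [(0, 1, 2, 3)])
```

Graph neural networks such as the spectral convolutions used later operate directly on this $(\mathcal{V}, \mathcal{A})$ pair.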
+ 

--- 

* Corresponding Author + 

${}^{1}$ Mechanical Systems Control Lab, UC Berkeley, Berkeley, CA, USA. \{zhuxh, tomizuka\}@berkeley.edu + 

${}^{2}$ Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA \{sjain, jeroen\}@merl.com + 

--- 

![01963e18-bdb5-768a-ad54-a811364fef7d_1_145_135_739_652_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_1_145_135_739_652_0.jpg)

Fig. 2: Training structure. The image-to-mesh projection network is optimized with pre-trained autoencoders. The self-supervised adaptation transfers the projection network to various domains with a differentiable renderer. + 

## B. Supervised Image-to-Mesh Projection + 

The image-to-mesh projection is learned with latent representations. Fig. 2 shows the training structure of the network. The image variational autoencoder (VAE) reconstructs depth maps $I$ as $\widehat{I}$ and is trained as a $\beta$ -VAE. We adopt convolutional mesh autoencoders (COMA) [26] for the volumetric mesh VAE. COMA uses spectral graph convolutions [27] to extract features and a hierarchical pooling operation to reduce the number of vertices. The latent projection model comprises three fully connected layers. It is trained in a supervised manner with the encoder and decoder frozen. + 

## C. Self-Supervised Adaptation + 

When deploying the trained network to the real world, covariate shift may reduce the performance significantly [28]. Moreover, the real-world data contains only depth maps $\left\{ {I}_{j}\right\}$ ; the ground-truth volumetric meshes are not available, making it hard to fine-tune the network in a supervised manner. Thus, we propose a self-supervised adaptation framework (Fig. 2) to resolve the covariate shift. + 

The reconstructed mesh $\widehat{\mathcal{M}}$ is rendered to an image $\widetilde{I}$ using a differentiable renderer, which allows gradients to propagate backward. In parallel, we use the pre-trained image VAE to reconstruct the input depth map as $\widehat{I}$ . 
The network is adapted to minimize the difference between $\widetilde{I}$ and $\widehat{I}$ . + 

## D. Datasets + 

Labeled synthetic data $\left\{ \left( {{I}_{i},{\mathcal{M}}_{i}}\right) \right\}$ and unlabeled real-world data $\left\{ {I}_{j}\right\}$ are required to train the image-to-mesh projection and adapt the network across different domains. + 

1) Synthetic Data: Labeled image-mesh pairs $\left\{ \left( {{I}_{i},{\mathcal{M}}_{i}}\right) \right\}$ for $i \in \left\lbrack {1,\ldots , N}\right\rbrack$ can be simulated using FEM and synthetic cameras. In this work, FEM is performed using the GPU-based Isaac Gym [29]. A FEM model of the GelSlim is created as shown in Fig. 1(b). To generate data pairs, 16 primitive indenters (Fig. 3-Left) interact with the elastomer at randomized positions and rotations. The Isaac Gym simulator collects the vertex positions $\mathcal{M}$ along each contact trajectory. The depth map $I$ is then rendered with a synthetic camera. Fig. 4 shows examples of synthetic data pairs. + 

![01963e18-bdb5-768a-ad54-a811364fef7d_1_951_137_653_277_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_1_951_137_653_277_0.jpg)

Fig. 3: Left: Primitive indenters. Right: Novel contact objects. + 

![01963e18-bdb5-768a-ad54-a811364fef7d_1_914_517_734_412_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_1_914_517_734_412_0.jpg)

Fig. 4: Data samples. Top: Raw synthetic depth observations, corresponding ground-truth meshes, and augmented synthetic depth observations. Bottom: Real-world depth observations for sample indenters. + 

2) Real-World Data: Real-world datasets $\left\{ {I}_{j}\right\}$ are obtained with physical GelSlim sensors and various indenters (Fig. 4). Primitive indenters are 3D printed, and their interaction with the sensor is randomized. Besides primitive shapes, several household and industrial objects are used as a novel set (Fig. 3-Right). The novel set represents common objects that the GelSlim will work with. 
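The self-supervised objective of Sec. II-C, minimizing the discrepancy between the rendered image $\widetilde{I}$ and the VAE-reconstructed image $\widehat{I}$, reduces to a pixel-wise loss. Below is a minimal NumPy sketch of the objective only (names are ours; in the actual pipeline the gradient of this loss flows through the differentiable renderer into the projection network):

```python
import numpy as np

def adaptation_loss(I_rendered, I_reconstructed):
    """Self-supervised adaptation objective: mean squared error between
    the image rendered from the predicted mesh (I tilde) and the image
    reconstructed by the pre-trained VAE (I hat)."""
    diff = (np.asarray(I_rendered, dtype=float)
            - np.asarray(I_reconstructed, dtype=float))
    return float(np.mean(diff ** 2))

def rmse(I_rendered, I_reconstructed):
    """RMSE in the units of the depth maps, the quantity reported
    before and after adaptation (in cm)."""
    return float(np.sqrt(adaptation_loss(I_rendered, I_reconstructed)))
```

Identical image pairs yield zero loss; the adaptation drives the two images toward each other on unlabeled real-world data.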
Moreover, we use two GelSlim sensors to collect real-world data. + +3) Image Augmentations: The appearance of synthetic images is quite different from that of real-world depth maps, as in Fig. 4. The depth reconstruction process for the physical GelSlim introduces significant noise into the image. To enhance the performance in the real world, this paper injects Perlin noise and adds a real-world reference noise image into the synthetic images [28]. The Perlin noise provides a realistic gradient for the image and imitates the real-world camera noise. The reference image provides sensor-specific noise. + +In total, ${1.28}\mathrm{M}$ unique labeled image-mesh pairs were obtained from the simulator, and 1,651 real-world images were obtained for 2 GelSlim sensors with 19 indenters. + +![01963e18-bdb5-768a-ad54-a811364fef7d_2_151_138_731_262_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_2_151_138_731_262_0.jpg) + +Fig. 5: Image-to-mesh projection results with synthetic data. First row: Input depth observations. Second row: Corresponding ground-truth mesh. Third row: Reconstructed volumetric mesh with our approach. + +![01963e18-bdb5-768a-ad54-a811364fef7d_2_152_600_729_324_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_2_152_600_729_324_0.jpg) + +Fig. 6: Experiments with real-world primitive contact objects. First row: Input depth observations. Second row: Reconstructed volumetric meshes. Third row: Rendered depth images from reconstructed meshes. + +## III. RESULTS + +In this section, we present the experiments for supervised image-to-mesh projection and self-supervised adaptation. + +## A. Supervised Projection + +Our proposed supervised image-to-mesh projection was evaluated using synthetic data. The training yields ${0.012}\mathrm{\;{cm}}$ root-mean-square error (RMSE) between the ground-truth vertex positions and predicted vertex positions. Fig. 5 shows a batch of projection results. 
The results show that the reconstruction is accurate and captures the contact information. We also investigated the usefulness of the VAE pre-training described in Sec. II-B. We trained the image-to-mesh network from scratch and observed ${0.009}\mathrm{\;{cm}}$ and ${0.085}\mathrm{\;{cm}}$ training and validation errors, respectively. This suggests that the network overfits without the pre-training, which aligns with the findings presented in [30]. + 

## B. Self-Supervised Adaptation + 

Section II-C and Section II-D.3 introduce a self-supervised adaptation method and synthetic data augmentations to resolve the covariate shift problem. This section shows that neither adaptation nor augmentation can achieve the objective alone. Moreover, experiments demonstrate that the proposed methods can adapt networks across different domains. + 

The adaptation is performed with the real-world dataset $\left\{ {I}_{j}\right\}$ , without ground-truth meshes being available. We use the RMSE between $\widehat{I}$ and $\widetilde{I}$ to evaluate the performance of the adaptations. + 

TABLE I: Domain adaptation results with real-world data. The root-mean-square error (RMSE) is measured between rendered images $\widetilde{I}$ and reconstructed images $\widehat{I}$ . 
| Source $\rightarrow$ Target | RMSE before/after Adaptation (cm) |
| --- | --- |
| Sim-Prim $\rightarrow$ Real-Prim | ${0.57} \rightarrow {0.12}$ |
| Sim-Prim $\rightarrow$ Real-Prim-2 | ${0.77} \rightarrow {0.20}$ |
| Real-Prim $\rightarrow$ Real-Prim-2 | ${0.35} \rightarrow {0.16}$ |
| Real-Prim $\rightarrow$ Real-Novel | ${0.64} \rightarrow {0.41}$ |
| Sim-Prim $\rightarrow$ Real-Novel | ${1.30} \rightarrow {0.62}$ |
+ 

Networks were trained or tuned on source domains and then adapted to target domains. The RMSEs were measured before and after the adaptation. + 

![01963e18-bdb5-768a-ad54-a811364fef7d_2_916_542_729_216_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_2_916_542_729_216_0.jpg)

Fig. 7: Experiments with real-world novel contact objects. First row: Input depth observations. Second row: Reconstructed volumetric meshes from the network. + 

1) Ablation Studies: We compare the effects of the adaptation model and the synthetic data augmentations. As a baseline, a model with neither augmentations nor adaptations yields ${1.03}\mathrm{\;{cm}}$ RMSE. We observe that using only adaptation (0.79 cm RMSE) or only augmentation (0.57 cm RMSE) results in lower performance. The reason for the higher performance when both are used (0.12 cm RMSE) is two-fold. On the one hand, the data augmentation enlarges the distribution of the synthetic dataset, which brings the real-world data within (or close to) the training distribution. On the other hand, the adaptation model transfers the network from the simulated distribution to the real-world distribution, ensuring invariant feature encodings. A batch of qualitative reconstruction examples is shown in Fig. 6. + 

2) Domain Adaptations: The network was adapted among various data domains, including simulated data with primitive contact objects (Sim-Prim), real-world data with primitive contact objects (Real-Prim), real-world data with novel contact objects (Real-Novel), and real-world primitive data with a second GelSlim sensor (Real-Prim-2). + 

Table I and Fig. 7 show the transfer results among domains. The results suggest that the proposed method can effectively improve the performance of the network under both visual and shape differences. However, we conclude that the covariate shifts due to visual noise and shape differences are not correlated, as the improvement for the experiment Sim-Prim $\rightarrow$ Real-Novel is smaller than for the others. 
Adapting to each shift separately performs better than adapting to both at once. + 

## IV. DISCUSSION + 

This work presents a framework to synthesize volumetric meshes of vision-based tactile sensors for novel contact interactions. Our work has several contributions. First, we present a 3D FEM simulator for vision-based tactile sensors and a simulator calibration approach. Second, we generate a dataset for the GelSlim sensor with both simulated and real-world contacts. Third, we propose a label-free adaptation method and image augmentations for domain transfer; this approach can effectively transfer networks across scenarios with different visual appearance and different shapes. Lastly, our network efficiently reconstructs the volumetric mesh from depth images and precisely estimates the contact profiles of different shapes. More details of the method and results are available in the full version of the paper [1]. + 

![01963e18-bdb5-768a-ad54-a811364fef7d_3_158_137_717_216_0.jpg](images/01963e18-bdb5-768a-ad54-a811364fef7d_3_158_137_717_216_0.jpg)

Fig. 8: Non-injective projection of the internal vertices. Black nodes are surface vertices that the observations can supervise. Gray nodes are internal vertices that can move arbitrarily without affecting the proposed adaptation loss, i.e., (a) and (b) will yield the same observation on the surface, while (a) is preferred. + 

The present work also has some limitations. First, we do not constrain internal vertices during the adaptation. In the self-supervised training, the network is optimized with image observations, which only capture the surface displacement of the volumetric mesh. However, the surface displacement does not provide injective supervision for the unobservable internal vertices. In other words, the internal vertices remain free-floating during the adaptation; they can move freely in the interior of the mesh without affecting the training loss, as shown in Fig. 8. 
Such unconstrained vertices can be detrimental to reconstructing the mesh vertices from surface observations. Our experiments demonstrate that the network begins to predict random internal vertices after the first several epochs. We hypothesize that the network has some self-regularization at the beginning of the adaptation, inherited from the pre-training dataset. A potential solution to this problem is adding penalty terms as regularization. By leveraging the minimum-energy principle [31], [32], it is possible to design a differentiable function that computes the energy of deformations. This energy should be minimized simultaneously during the adaptation to mitigate the randomness of the internal vertices. + 

The second limitation is that the current method does not predict the dynamics of the elastomer. Instead, our proposed method learns the mapping from surface observations to mesh states. Compared to other state representations, e.g., images [13] and surface meshes [4], [5], [33], a volumetric mesh contains internal vertices and edges and can thus better simulate the deformation of objects [25], [34]. Learning the dynamics of volumetric meshes would allow model predictive control (MPC) applications and benefit reinforcement learning (RL). An MPC-based algorithm or a model-based RL agent [35] could be designed to determine actions for robotic manipulation. On the other hand, there are many previous works on learning dynamics for meshes [19]-[21]. These methods, however, focus on simple problem formulations in which the mesh vertices are known exactly at each timestep. A more challenging and practical scenario is to learn the dynamics from observations only, since the actual mesh states are unavailable in real-world data. + 

## REFERENCES + 

[1] X. Zhu, S. Jain, M. Tomizuka, and J. van Baar, "Learning to synthesize volumetric meshes from vision-based tactile imprints," ArXiv Preprint, 2022. + 

[2] W. Yuan, S. Dong, and E. H. 
Adelson, "Gelsight: High-resolution robot tactile sensors for estimating geometry and force," Sensors, vol. 17, no. 12, 2017. + 

[3] E. Donlon, S. Dong, M. Liu, J. Li, E. Adelson, and A. Rodriguez, "Gelslim: A high-resolution, compact, robust, and calibrated tactile-sensing finger," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. + 

[4] D. Ma, E. Donlon, S. Dong, and A. Rodriguez, "Dense tactile force estimation using gelslim and inverse fem," in 2019 International Conference on Robotics and Automation (ICRA), 2019. + 

[5] I. Taylor, S. Dong, and A. Rodriguez, "Gelslim3.0: High-resolution measurement of shape, force and slip in a compact tactile-sensing finger," ArXiv Preprint, vol. abs/2103.12269, 2021. + 

[6] C. Matl, J. Koe, and R. Bajcsy, "Stretch: a soft to resistive elastic tactile hand," arXiv preprint arXiv:2105.08154, 2021. + 

[7] B. W. McInroe, C. L. Chen, K. Y. Goldberg, R. Bajcsy, and R. S. Fearing, "Towards a soft fingertip with integrated sensing and actuation," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. + 

[8] M. Lambeta, P.-W. Chou, S. Tian, B. Yang, B. Maloon, V. R. Most, D. Stroud, R. Santos, A. Byagowi, G. Kammerer, et al., "Digit: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation," IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 3838-3845, 2020. + 

[9] A. Padmanabha, F. Ebert, S. Tian, R. Calandra, C. Finn, and S. Levine, "Omnitact: A multi-directional high-resolution touch sensor," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020. + 

[10] M. Bauza, O. Canal, and A. Rodriguez, "Tactile mapping and localization from high-resolution tactile imprints," in 2019 International Conference on Robotics and Automation (ICRA), 2019. + 

[11] Y. S. Narang, K. V. Wyk, A. Mousavian, and D. 
Fox, "Interpreting and predicting tactile signals via a physics-based and data-driven framework," ArXiv Preprint, 2020. + 

[12] M. Bauza, E. Valls, B. Lim, T. Sechopoulos, and A. Rodriguez, "Tactile object pose estimation from the first touch with geometric contact rendering," ArXiv Preprint, 2020. + 

[13] S. Dong, D. Jha, D. Romeres, S. Kim, D. Nikovski, and A. Rodriguez, "Tactile-rl for insertion: Generalization to objects of unknown geometry," in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021. + 

[14] F. R. Hogan, J. Ballester, S. Dong, and A. Rodriguez, "Tactile dexterity: Manipulation primitives with tactile feedback," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020. + 

[15] Y. She, S. Wang, S. Dong, N. Sunil, A. Rodriguez, and E. Adelson, "Cable manipulation with a tactile-reactive gripper," in Robotics: Science and Systems (RSS), 2020. + 

[16] N. F. Lepora, A. Church, C. de Kerckhove, R. Hadsell, and J. Lloyd, "From pixels to percepts: Highly robust edge perception and contour following using deep learning and an optical biomimetic tactile sensor," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 2101-2107, 2019. + 

[17] D. Ma, S. Dong, and A. Rodriguez, "Extrinsic contact sensing with relative-motion tracking from distributed tactile measurements," ArXiv Preprint, 2021. + 

[18] C. Wang, S. Wang, B. Romero, F. Veiga, and E. Adelson, "SwingBot: Learning physical features from in-hand tactile exploration for dynamic swing-up manipulation," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020. + 

[19] T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, and P. W. Battaglia, "Learning mesh-based simulation with graph networks," in International Conference on Learning Representations, 2021. + 

[20] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, and P. 
Battaglia, "Graph networks as learnable physics engines for inference and control," arXiv preprint, 2018. + +[21] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia, "Learning to simulate complex physics with graph networks," in International Conference on Machine Learning, 2020. + +[22] F. d. A. Belbute-Peres, T. D. Economon, and J. Z. Kolter, "Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction," in ICML, 2020. + +[23] Y. Li, T. Lin, K. Yi, D. Bear, D. L. Yamins, J. Wu, J. B. Tenenbaum, and A. Torralba, "Visual grounding of learned physical models," in ICML, 2020. + +[24] M. K. Johnson and E. H. Adelson, "Retrographic sensing for the measurement of surface texture and shape," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009. + +[25] E. G. Thompson, Introduction to the Finite Element Method: Theory, Programming and Applications. Wiley Text Books, 2004. + +[26] A. Ranjan, T. Bolkart, S. Sanyal, and M. J. Black, "Generating 3D faces using convolutional mesh autoencoders," in European Conference on Computer Vision (ECCV), 2018. + +[27] M. Defferrard, X. Bresson, and P. Vandergheynst, "Convolutional neural networks on graphs with fast localized spectral filtering," in Advances in Neural Information Processing Systems, 2016. + +[28] X. Zhu, L. Sun, Y. Fan, and M. Tomizuka, "6-dof contrastive grasp proposal network," in Proceedings of the 2021 International Conference on Robotics and Automation (ICRA), 2021. + +[29] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State, "Isaac gym: High performance gpu-based physics simulation for robot learning," arXiv preprint, 2021. + +[30] Y. Narang, B. Sundaralingam, M. Macklin, A. Mousavian, and D.
Fox, "Sim-to-real for robotic tactile sensing via physics-based simulation and learned latent projections," Proceedings of the 2021 International Conference on Robotics and Automation (ICRA), 2021. + +[31] H. B. Callen, Thermodynamics and an Introduction to Thermostatistics. New York: Wiley, 1985. + +[32] D. Arndt and G. Kanschat, "A differentiable mapping of mesh cells based on finite elements on quadrilateral and hexahedral meshes," Computational Methods in Applied Mathematics, vol. 21, no. 1, pp. 1-11, 2021. + +[33] B. Hudson, G. L. Miller, T. Phillips, and D. Sheehy, Size Complexity of Volume Meshes vs. Surface Meshes, 2009. + +[34] M. A. Neto, A. Amaro, L. Roseiro, J. Cirne, and R. Leal, Finite Element Method for 3D Solids. Cham: Springer International Publishing, 2015, pp. 233-263. + +[35] M. Janner, J. Fu, M. Zhang, and S. Levine, "When to trust your model: Model-based policy optimization," arXiv preprint, 2019. \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/DhcNXrsb78p/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..428d69eb30c5e90499860b3cb738c5db0f0c8188 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/DhcNXrsb78p/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,140 @@ +§ SYNTHESIZING AND SIMULATING VOLUMETRIC MESHES FROM VISION-BASED TACTILE IMPRINTS + +Xinghao Zhu ${}^{1,2}$ , Siddarth Jain ${}^{2, * }$ , Masayoshi Tomizuka ${}^{1}$ , and Jeroen van Baar ${}^{2}$ + +Abstract-Vision-based tactile sensors typically employ a deformable elastomer and a camera to provide high-resolution contact images.
This work focuses on learning to simulate and synthesize the volumetric mesh of the elastomer based on the image imprints acquired from tactile sensors. Obtaining accurate volumetric meshes for the elastomer can provide direct contact information and benefit robotic grasping and manipulation. Our method [1] proposes a train-then-adapt scheme that leverages synthetic image-mesh pairs generated with the finite element method (FEM) and real-world images from physical sensors. Our approach can accurately reconstruct the deformation of the real-world tactile sensor elastomer in various domains. While the proposed learning approach has been shown to produce viable solutions, we discuss some remaining limitations and challenges for real-world applications. + +§ I. INTRODUCTION + +Tactile sensing is essential for humans when interacting with environments. Robotic tactile sensors can provide contact profiles during grasping and manipulation tasks. Among the different designs, vision-based tactile sensors [2]-[9] form a prominent family. They use cameras to capture the deformation of the contact elastomer with high-resolution images, as shown in Fig. 1 (a) and (b). + +Representing the deformable elastomer with a volumetric mesh can advance the development of vision-based tactile sensors. Volumetric meshes provide accurate and direct information about the contact. This information can benefit manipulation tasks like in-hand object localization [10]-[12], vision-free manipulation [13]-[15], and contact profile reconstruction [4], [5], [16]-[18]. Moreover, meshes can be used for precise dynamics learning [19]-[21] and future state prediction [22], [23]. + +Our method directly predicts the volumetric mesh from images using vision-based tactile sensors, such as the GelSlim [5], in a sim-to-real setting. We first employ supervised training to synthesize the volumetric mesh from image imprints generated with synthetic 3D FEM.
However, directly deploying the network in the real world yields poor reconstruction results due to the sim-to-real gap. Thus, we propose data augmentations and a self-supervised adaptation method on real-world images to address this gap. Experiments demonstrate that the proposed method can transfer networks from simulation to the real world, from seen to novel contact objects, and between different GelSlim sensor instances (as shown in Fig. 1 (c) and (d)). + +<graphics> + +Fig. 1: (a) the GelSlim visual-tactile sensor, (b) the construction of the sensor, with the elastomer (1), the transparent lens (2), the lights (3), and the camera (4). (c) a depth image observation obtained from the sensor, and (d) the corresponding reconstructed volumetric mesh with our method. The red rectangle denotes the camera's view range, and the color represents the displacement level. + +In the remainder of this paper, we illustrate the method and demonstrate the results in Section II and Section III. Then we discuss the limitations and future directions in Section IV. + +§ II. METHODS + +This section first introduces the problem statement and preliminaries. Next, the image-to-mesh projection and self-supervised adaptation methods are discussed. Finally, the datasets are described, including synthetic labeled data, real-world unlabeled data, and data augmentation techniques. + +§ A. PROBLEM STATEMENT AND PRELIMINARIES + +This paper focuses on the problem of reconstructing an elastomer's volumetric mesh from image observations. The non-injective projection from surface images to volumetric vertices makes this problem nontrivial. Some preliminaries are: + +1) Image Observations: Visual tactile sensors contact objects with a silicone elastomer and use a camera to capture the deformation of the surface, as shown in Fig. 1. The captured RGB image can be used to construct a depth map using shape from shading [5], [24].
Compared to raw RGB images, depth maps can better represent the geometry of the contact surface and are much easier to simulate using synthetic cameras. Thus, we use $\left( {{128} \times {128}}\right)$ depth maps $I$ as the image observations in this paper. + +2) Volumetric Meshes with FEM: In FEM, geometric shapes are represented by volumetric meshes $\mathcal{M}$ . With high-resolution meshes and small computation steps, FEM can estimate the forward dynamics of soft bodies [19], [25]. This paper uses graphs to represent volumetric meshes. Specifically, volumetric meshes are defined as a set of vertices and edges, $\mathcal{M} = \left( {\mathcal{V},\mathcal{A}}\right)$ , with $n$ vertices in $3\mathrm{D}$ Euclidean space, $\mathcal{V} \in {\mathbb{R}}^{n \times 3}$ . The adjacency matrix $\mathcal{A} \in \{ 0,1\}^{n \times n}$ represents the edges. + +* Corresponding Author + +${}^{1}$ Mechanical Systems Control Lab, UC Berkeley, Berkeley, CA, USA. {zhuxh, tomizuka}@berkeley.edu + +${}^{2}$ Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA {sjain, jeroen}@merl.com + +<graphics> + +Fig. 2: Training structure. The image-to-mesh projection network is optimized with pre-trained autoencoders. The self-supervised adaptation transfers the projection network to various domains with a differentiable renderer. + +§ B. SUPERVISED IMAGE-TO-MESH PROJECTION + +The image-to-mesh projection is learned with latent representations. Fig. 2 shows the training structure of the network. The image variational autoencoder (VAE) reconstructs depth maps $I$ to $\widehat{I}$ and is trained as a $\beta$ -VAE. We adopt the convolutional mesh autoencoder (COMA) [26] for the volumetric mesh VAE. COMA uses spectral graph convolutions [27] to extract features and a hierarchical pooling operation to reduce vertices. The latent projection model comprises three fully connected layers.
It is trained in a supervised manner with the encoder and decoder frozen. + +§ C. SELF-SUPERVISED ADAPTATION + +When deploying the trained network to the real world, covariate shift may reduce the performance significantly [28]. Moreover, the real-world data only contains depth maps $\left\{ {I}_{j}\right\}$ ; the ground-truth volumetric meshes are not available, making it hard to fine-tune the network in a supervised manner. Thus, we propose a self-supervised adaptation framework (Fig. 2) to resolve the covariate shift. + +The reconstructed mesh $\widehat{\mathcal{M}}$ is rendered to the image $\widetilde{I}$ using a differentiable renderer, which allows gradients to propagate backward. In parallel, we use the pre-trained image VAE to reconstruct the input depth map as $\widehat{I}$ . The network is adapted to minimize the difference between $\widetilde{I}$ and $\widehat{I}$ . + +§ D. DATASETS + +Labeled synthetic data $\left\{ \left( {{I}_{i},{\mathcal{M}}_{i}}\right) \right\}$ and unlabeled real-world data $\left\{ {I}_{j}\right\}$ are required to train the image-to-mesh projection and adapt the network among different domains. + +1) Synthetic Data: Labeled image-mesh pairs $\left\{ \left( {{I}_{i},{\mathcal{M}}_{i}}\right) \right\}$ for $i \in \left\lbrack {1,\ldots ,N}\right\rbrack$ can be simulated using FEM and synthetic cameras. In this work, FEM is performed using the GPU-based Isaac Gym [29]. An FEM model of the GelSlim is created as shown in Fig. 1(b). To generate data pairs, 16 primitive indenters (Fig. 3-Left) are utilized to interact with the elastomer at randomized positions and rotations. The Isaac Gym simulator records the vertex positions $\mathcal{M}$ along each contact trajectory. The depth map $I$ is then rendered with a synthetic camera. Fig. 4 shows examples of synthetic data pairs. + +<graphics> + +Fig. 3: Left: Primitive indenters. Right: Novel contact objects. + +<graphics> + +Fig. 4: Data samples.
Top: Raw synthetic depth observations, corresponding ground-truth meshes, and augmented synthetic depth observations. Bottom: Real-world depth observations for sample indenters. + +2) Real-World Data: Real-world datasets $\left\{ {I}_{j}\right\}$ are obtained with physical GelSlim sensors and various indenters (Fig. 4). Primitive indenters are 3D printed and interact with the sensor at randomized positions and rotations. Besides primitive shapes, several household and industrial objects are used as a novel set (Fig. 3-Right). The novel set represents common objects that the GelSlim will work with. Moreover, we use two GelSlim sensors to collect real-world data. + +3) Image Augmentations: The appearance of synthetic images is quite different from that of real-world depth maps, as in Fig. 4. The depth reconstruction process for the physical GelSlim introduces significant noise into the image. To enhance the performance in the real world, this paper injects Perlin noise and adds a real-world reference noise image into the synthetic images [28]. The Perlin noise provides a realistic gradient for the image and imitates the real-world camera noise. The reference image provides sensor-specific noise. + +In total, 1.28M unique labeled image-mesh pairs were obtained from the simulator, and 1,651 real-world images were obtained from two GelSlim sensors with 19 indenters. + +<graphics> + +Fig. 5: Image-to-mesh projection results with synthetic data. First row: Input depth observations. Second row: Corresponding ground-truth mesh. Third row: Reconstructed volumetric mesh with our approach. + +<graphics> + +Fig. 6: Experiments with real-world primitive contact objects. First row: Input depth observations. Second row: Reconstructed volumetric meshes. Third row: Rendered depth images from reconstructed meshes. + +§ III. RESULTS + +In this section, we present the experiments for supervised image-to-mesh projection and self-supervised adaptation. + +§ A. SUPERVISED PROJECTION + +Our proposed supervised image-to-mesh projection was evaluated using synthetic data. The training yields a 0.012 cm root-mean-square error (RMSE) between the ground-truth and predicted vertex positions. Fig. 5 shows a batch of projection results. The results show that the reconstruction is accurate and captures the contact information. We also investigate the usefulness of the VAE pre-training as described in Section II-B. We trained the image-to-mesh network from scratch and observed 0.009 cm and 0.085 cm training and validation errors, respectively. This suggests that the network overfits without the pre-training, which aligns with the findings presented in [30]. + +§ B. SELF-SUPERVISED ADAPTATION + +Section II-C and Section II-D.3 introduce a self-supervised adaptation method and synthetic data augmentations to resolve the covariate shift problem. This section shows that neither adaptation nor augmentation can achieve the objective alone. Moreover, experiments demonstrate that the proposed methods can adapt networks across different domains. + +The adaptation is performed with the real-world dataset $\left\{ {I}_{j}\right\}$ , without ground-truth mesh availability. We use the RMSE between $\widehat{I}$ and $\widetilde{I}$ to evaluate the performance of the adaptations. + +TABLE I: Domain adaptation results with real-world data. The root-mean-square error (RMSE) is measured between rendered images $\widetilde{I}$ and reconstructed images $\widehat{I}$ . + +| Source $\rightarrow$ Target | RMSE before/after Adaptation (cm) |
| --- | --- |
| Sim-Prim $\rightarrow$ Real-Prim | ${0.57} \rightarrow {0.12}$ |
| Sim-Prim $\rightarrow$ Real-Prim-2 | ${0.77} \rightarrow {0.20}$ |
| Real-Prim $\rightarrow$ Real-Prim-2 | ${0.35} \rightarrow {0.16}$ |
| Real-Prim $\rightarrow$ Real-Novel | ${0.64} \rightarrow {0.41}$ |
| Sim-Prim $\rightarrow$ Real-Novel | ${1.30} \rightarrow {0.62}$ | + +Networks were trained or tuned on source domains and then adapted to target domains. The RMSEs were measured before and after the adaptation. + +<graphics> + +Fig. 7: Experiments with real-world novel contact objects. First row: Input depth observations. Second row: Reconstructed volumetric mesh from the network. + +1) Ablation Studies: We compare the effects of the adaptation model and the synthetic data augmentations. As a baseline, models with neither augmentations nor adaptations yield 1.03 cm RMSE. We observe that using only adaptation (0.79 cm RMSE), or only augmentation (0.57 cm RMSE), results in lower performance. The reason for the higher performance when both are used (0.12 cm RMSE) is two-fold. On one hand, the data augmentation enlarges the distribution of the synthetic dataset, which causes the real-world data to be within distribution (or close to it). On the other hand, the adaptation model transfers the network from the simulated distribution to the real-world distribution, ensuring invariant feature encodings. A batch of qualitative reconstruction examples is shown in Fig. 6. + +2) Domain Adaptations: The network was adapted among various data domains, including simulated data with primitive contact objects (Sim-Prim), real-world data with primitive contact objects (Real-Prim), real-world data with novel contact objects (Real-Novel), and real-world primitive data with a second GelSlim sensor (Real-Prim-2). + +Table I and Fig. 7 show the transfer results among domains.
The results suggest that the proposed method can effectively improve the performance of the network under both visual and shape differences. However, we conclude that the covariate shifts caused by visual noise and by shape differences are not correlated, as the improvement in the Sim-Prim $\rightarrow$ Real-Novel experiment is smaller than in the others: adapting to each shift separately performs better than adapting to both at once. + +§ IV. DISCUSSION + +This work presents a framework to synthesize volumetric meshes of vision-based tactile sensors for novel contact interactions. Our work has several contributions. First, we present a 3D FEM simulator for vision-based tactile sensors and a simulator calibration approach. Second, we generate a dataset for the GelSlim sensor with both simulated and real-world contacts. Third, we propose a label-free adaptation method and image augmentations for domain transfers; this approach can effectively transfer networks to scenarios with different visual appearances and contact shapes. Lastly, our network efficiently reconstructs the volumetric mesh from depth images and precisely estimates the contact profiles of different shapes. More details of the method and results are available in the full version of the paper [1]. + +<graphics> + +Fig. 8: Non-injective projection of the internal vertices. Black nodes are surface vertices that the observations can supervise. Gray nodes are internal vertices that can randomly move without affecting the proposed adaptation loss. That is, (a) and (b) will yield the same observation on the surface, while (a) is preferred. + +The present work also has some limitations. First, we do not constrain internal vertices during the adaptation. In the self-supervised training, the network is optimized with image observations, which only capture the surface displacement of the volumetric mesh. However, the surface displacement does not provide injective supervision for unobservable internal vertices.
In other words, the internal vertices remain free-floating during the adaptation; they can move freely in the interior of the mesh without affecting the training loss, as shown in Fig. 8. Such unconstrained vertices can be detrimental to reconstructing the mesh vertices using surface observations. Our experiments demonstrate that the network begins to predict random internal vertices after the first several epochs. We hypothesize that the network has some self-regularity at the beginning of the adaptation, inherited from the pretraining dataset. A potential solution to this problem is to add penalty terms as regularization. By leveraging the minimum energy principle [31], [32], it is possible to design a differentiable function that computes the energy of deformations. Such an energy should be minimized simultaneously during the adaptation to mitigate the randomness of the internal vertices. + +The second limitation is that the current method does not predict the dynamics of the elastomer. Instead, our proposed method learns the mapping from surface observations to the mesh states. Compared to other state representations, e.g., images [13] and surface meshes [4], [5], [33], a volumetric mesh contains internal vertices and edges and thus can better simulate the deformation of the objects [25], [34]. Learning the dynamics for volumetric meshes can allow model predictive control (MPC) applications and benefit reinforcement learning (RL). An MPC-based algorithm or a model-based RL agent [35] can be designed to determine actions for robotic manipulation. On the other hand, there are many previous works on learning the dynamics for meshes [19]-[21]. These methods, however, focus on simple problem formulations in which the mesh vertices are known exactly at each timestep. A more challenging and practical scenario is to learn the dynamics from observations only, since the actual mesh states are unavailable in real-world data.
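The energy-based remedy suggested above for the free-floating internal vertices can be made concrete with a toy example. The snippet below is a hypothetical minimal sketch (not the authors' implementation): a quadratic spring energy over the mesh edges that penalizes deviations of edge lengths from their rest-state values, which would be added as a weighted regularization term to the adaptation loss.

```python
import math

def deformation_energy(rest_verts, verts, edges, stiffness=1.0):
    """Quadratic spring energy over mesh edges.

    rest_verts, verts: lists of (x, y, z) vertex positions in the rest
    and deformed configurations. edges: (i, j) index pairs taken from
    the mesh adjacency. Deviations of edge lengths from their rest
    values are penalized, so internal vertices that drift arbitrarily
    raise the energy even though they are invisible to the renderer.
    """
    energy = 0.0
    for i, j in edges:
        rest_len = math.dist(rest_verts[i], rest_verts[j])
        cur_len = math.dist(verts[i], verts[j])
        energy += 0.5 * stiffness * (cur_len - rest_len) ** 2
    return energy
```

In practice such a term would be computed inside an automatic-differentiation framework, so that its gradients flow to the predicted vertex positions alongside the image reconstruction loss.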
\ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Fdj08qJuZxH/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..daea980d79b9a29070582f849055181723f92b49 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Fdj08qJuZxH/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,279 @@ +# SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning + +Jun Lv ${}^{*1}$ , Qiaojun Yu ${}^{*1}$ , Lin Shao ${}^{*2}$ , Wenhai Liu ${}^{3}$ , Wenqiang Xu ${}^{1}$ , and Cewu Lu ${}^{1}$ + +Abstract-Building general-purpose robots to perform a diverse range of tasks in a large variety of environments in the physical world at the human level is extremely challenging. According to [1], it requires the robot learning to be sample-efficient, generalizable, compositional, and incremental. In this work, we introduce a systematic learning framework called SAGCI-system towards achieving the above four requirements. Our system first takes the raw point clouds gathered by the camera mounted on the robot's wrist as input and produces initial modeling of the surrounding environment represented as a file of Unified Robot Description Format (URDF). Our system adopts a learning-augmented differentiable simulation that loads the URDF. The robot then utilizes interactive perception, interacting with the environment to verify and modify the URDF online. Leveraging the differentiable simulation, we propose a model-based learning algorithm combining object-centric and robot-centric stages to efficiently produce policies to accomplish manipulation tasks.
We apply our system to perform articulated object manipulation tasks, both in simulation and in the real world. Extensive experiments demonstrate the effectiveness of our proposed learning framework. Supplemental materials and videos are available on our project webpage https://sites.google.com/view/egci. + +## I. INTRODUCTION + +Building general-purpose robots to perform a diverse range of tasks in a large variety of environments in the physical world at the human level is extremely challenging. Consider a robot operating in a household. The robot faces various challenges. It would need to perform a broad range of tasks, such as preparing food in the kitchen and cleaning the floor in the living room. In the process, it must be able to handle the immense diversity of objects and variability of unstructured environments. Moreover, it would also need to quickly learn online to accomplish new tasks requested by humans. According to [1], building such an intelligent system requires the robot learning to be: sample-efficient, which means the robot needs to master skills using few training samples; generalizable, which requires the robot to accomplish tasks under unseen but similar environments or settings; compositional, which indicates that the various knowledge and skills can be decomposed and combined to solve exponentially more problems; and incremental, which requires that new knowledge and abilities can be added over time. However, deep learning/deep reinforcement learning fails to meet these requirements [1]. + +We propose a robot learning framework called SAGCI-System that aims to satisfy the above four requirements. In our setting, the robot has an RGB-D camera mounted on its wrist and can perceive the surrounding environment. Our system first takes the raw point clouds captured by the camera as input and produces initial modeling of the surrounding environment represented as a URDF file [2].
Based on the initial modeling, the robot leverages interactive perception (IP) [3] to interact with the environment and modify the URDF online. Our pipeline adopts a learning-augmented differentiable simulation that loads the URDF. We augment the differentiable simulation with neural networks to tackle the simulation error. We propose a novel model-based learning algorithm combining object-centric and robot-centric stages to efficiently produce policies to accomplish manipulation tasks. The policies learned through our pipeline would be sample-efficient and generalizable since we integrate the structured physics-informed "inductive bias" in the learning process. Moreover, the knowledge and robotic skills learned through our model-based approaches are associated with the generated URDF files, which are organized with hierarchical structures, directly resulting in compositionality and incrementality. + +Our primary contributions are: (1) we propose a systematic learning framework towards achieving sample-efficient, generalizable, compositional, and incremental robot learning; (2) we propose neural network models to generate the URDF file of the environment and leverage interactive perception to iteratively make the environment model more accurate; (3) we introduce a model-based learning approach based on the learning-augmented differentiable simulation to reduce the sim-to-real gap; (4) we conduct extensive quantitative experiments to demonstrate the effectiveness of our proposed approach; (5) we apply our framework in real-world experiments to accomplish a diverse set of tasks. + +## II. RELATED WORK + +We review literature related to the key components of our approach, including interactive perception, differentiable simulation, and model-based reinforcement learning/control, and describe how our work differs from previous works. + +## A.
Interactive Perception + +Interactive perception (IP) is related to an extensive body of work in robotics and computer vision. It has enjoyed success in a wide range of applications, including object segmentation [4, 5], object recognition [6], object sorting [7, 8], and object search [9, 10]. For a broader review of IP, we refer to [3]. Hausman et al. [11] introduced a particle filter-based approach to represent the uncertainty over articulation models and selected actions to efficiently reduce the uncertainty over these models. Martin and Brock [12] presented an IP algorithm to perform articulated object estimation on the fly by formulating the perception problem as three interconnected recursive estimation filters. In this work, we leverage IP to construct, verify, and modify the modeling of the surrounding environments. + +## B. Differentiable Simulation + +Differentiable simulation provides gradients in physical systems for learning, control, and inverse problems. Leveraging recent advances in automatic differentiation methods [13-18], a number of differentiable physics engines have been proposed to solve system identification and control problems [15, 19-28]. An alternative approach towards a differentiable physics engine is to approximate the physics with neural networks using data-driven approaches [29-31]. These neural networks are implicitly differentiable, but the learned networks might not satisfy the physical dynamics. The simulation quality may also degenerate beyond the training data distribution. One line of work [32, 33] addresses these issues by augmenting the simulation with neural networks. In this work, we adopt the Nimble simulator [34] and augment the differentiable simulation with neural networks to address the sim-to-real gap while taking advantage of its differentiability to develop manipulation policies.
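To illustrate what differentiability of a physics engine buys, consider a deliberately tiny example (a sketch, not the Nimble API): an explicit-Euler rollout of a 1-D point mass whose final-position loss is differentiated analytically with respect to the control input, so the control can be improved by plain gradient descent.

```python
def rollout(v0, steps=10, dt=0.1):
    """Explicit-Euler rollout of a 1-D point mass moving at velocity v0."""
    x = 0.0
    for _ in range(steps):
        x += v0 * dt  # no forces: final position is v0 * steps * dt
    return x

def grad_loss_wrt_v0(v0, target, steps=10, dt=0.1):
    """Analytic gradient of L = (x_T - target)^2 with respect to v0.

    Since x_T = v0 * steps * dt, dL/dv0 = 2 * (x_T - target) * steps * dt.
    A differentiable engine produces such gradients automatically for
    arbitrary (contact-rich) dynamics instead of this hand derivation.
    """
    x_T = rollout(v0, steps, dt)
    return 2.0 * (x_T - target) * steps * dt

# Gradient descent on the control input v0 to reach a target position.
v0, target, lr = 0.0, 2.0, 0.1
for _ in range(100):
    v0 -= lr * grad_loss_wrt_v0(v0, target)
```

In a real differentiable simulator the same loop would update joint torques or trajectories through contact dynamics, which is what the gradient-based policy generation below relies on.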
+ +--- + +*The authors contributed equally + +${}^{1}$ Jun Lv, Qiaojun Yu, Wenqiang Xu, Cewu Lu are with Department of Computer Science, Shanghai Jiao Tong University, China. \{lyujune_sjtu, yqjllxs, vinjohn, lucewu\}@sjtu.edu.cn + +${}^{2}$ Lin Shao is with Artificial Intelligence Lab, Stanford University, USA. lins2@stanford.edu + +${}^{3}$ Wenhai Liu is with School of Mechanical Engineering, Shanghai Jiao Tong University, China. sjtu-wenhai@sjtu.edu.cn + +--- + +![01963e1e-59dd-78da-9235-845308dbc53a_1_158_144_723_237_0.jpg](images/01963e1e-59dd-78da-9235-845308dbc53a_1_158_144_723_237_0.jpg) + +Fig. 1: The proposed pipeline takes the raw point clouds captured by the RGB-D camera mounted on the robot as input and produces initial modeling of the surrounding environment represented as a URDF. The robot then leverages interactive perception to verify and modify the URDF online. We also propose a novel model-based manipulation learning method to accomplish manipulation tasks. + +## C. Model-Based Reinforcement Learning + +Model-based reinforcement learning (MBRL) is recognized as having the potential to be significantly more sample-efficient than model-free RL [35]. However, developing an accurate model of the environment is a challenging problem. Modeling errors degrade performance by misleading policies into exploiting the models' deficiencies. For a broader review of MBRL, we refer to [36]. In this work, we develop a system to model the environment and integrate a learning-augmented differentiable simulation to reduce the modeling error. To mitigate reward/cost function engineering, we propose a two-level (object-centric and robot-centric) procedure to generate manipulation policies. + +## III. TECHNICAL APPROACH + +Once deployed in an unstructured environment, the robot begins to perceive the surrounding environment through an RGB-D camera mounted on its wrist.
Our pipeline takes the raw point clouds as inputs and produces initial modeling of the surrounding environment represented as a URDF. Based on the initial modeling, our robot leverages interactive perception to modify the URDF online. Our pipeline adopts a learning-augmented differentiable simulation to tackle the modeling error. Leveraging the differentiable simulation, we propose a model-based learning algorithm combining object-centric and robot-centric stages to efficiently produce policies to accomplish manipulation tasks. An overview of our proposed method is shown in Fig. 1. We describe the above modules in the following subsections. + +## A. Environment Initial Modeling + +With an RGB-D camera mounted at the wrist, the robot receives point clouds at time step $t$ as ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N} \in {\mathcal{R}}^{N \times 3}$ , where $N$ is the total number of points. In this subsection, we discuss our pipeline, which takes ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ as input to initially model the surrounding environment. + +We use a URDF file to represent the environment, which can be directly loaded into most popular robotic simulators such as MuJoCo [37], Bullet [38], and Nimble [34] for model-based control/learning/planning. In this work, we only care about one object at a time. The generated URDF file should only contain descriptions of a single object's links and joints. Each link is a rigid body part containing the mesh files describing the geometric shape and physical attributes such as the mass value, the inertia matrix, and the contact friction coefficients. Each joint describes the relationship between two links and its parameters. For more details about the URDF format, we refer to [39]. + +We first describe how we generate these descriptions of the object's links.
We develop a part-level instance segmentation model that takes the point clouds ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ as input and outputs a part-level instance segmentation mask denoted as ${\left\{ {\mathcal{M}}_{t}^{i}\right\} }_{i = 1}^{N}$ , where ${\mathcal{M}}_{t}^{i} \in \{ 1,2,\ldots , K\}$ and $K$ is the number of parts. We then segment the raw point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ into a set of groups denoted as ${\left\{ {\mathcal{G}}_{t}^{j}\right\} }_{j = 1}^{K}$ , where each group corresponds to a link in the URDF. For each group, we then generate a watertight mesh through ManifoldPlus [40]. Based on the raw input point clouds of each group, our model also estimates the physical properties of the link. Note that these values may not be accurate; we revise them at the IP stage in Sec. III-B. + +We then explain how to produce the corresponding joints in the URDF. First, given the $K$ links, we develop a joint relationship model that takes the segmented point clouds of links $u$ and $v$ , where $u, v \in \{ 1,2,\ldots , K\}$ , and estimates their pairwise joint relationship denoted as $\mathcal{J} \in {\mathcal{R}}^{K \times K \times 4}$ . The predicted entry $\mathcal{J}\left( {u, v}\right)$ contains the probabilities of the "None", "Revolute", "Prismatic", and "Fixed" joint types between the two links, where "None" indicates that the two links have no direct joint relationship. The model also predicts the joint spatial descriptions denoted as $\mathcal{C} \in {\mathcal{R}}^{K \times K \times 9}$ , which contain the joint axis, origin, and orientation for each pair. At this point, we have a complete directed graph over the $K$ links. To compose a URDF, we adopt a greedy algorithm to find a directed tree denoted as $\mathcal{E} = {\left\{ {u}_{i},{v}_{i}\right\} }_{i = 1}^{K - 1}$ in the complete graph, which contains $K - 1$ joints.
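The greedy tree-composition step can be read as a maximum-spanning-tree pass over the pairwise joint confidences. The sketch below is our illustration under that reading (Kruskal-style selection with an assumed symmetric score matrix; the paper's exact algorithm is deferred to its supplementary material):

```python
def extract_tree(scores):
    """Greedily pick K-1 edges forming a tree over K links (Kruskal-style).

    scores[u][v]: confidence that links u and v share a direct joint,
    e.g. 1 - P("None"); assumed symmetric, K x K.
    """
    K = len(scores)
    edges = sorted(((scores[u][v], u, v)
                    for u in range(K) for v in range(u + 1, K)), reverse=True)
    parent = list(range(K))

    def find(x):                      # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for _, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge (u, v) keeps the graph acyclic
            parent[ru] = rv
            tree.append((u, v))
        if len(tree) == K - 1:        # a tree over K nodes has K-1 edges
            break
    return tree

# Three links: the door-handle pair is most confident, then body-door.
scores = [[0.00, 0.90, 0.40],
          [0.90, 0.00, 0.95],
          [0.40, 0.95, 0.00]]
tree = extract_tree(scores)
```

Orienting the resulting edges away from a chosen root link then yields the directed tree $\mathcal{E}$.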
At this point, we have generated an initial URDF file describing the environment. Due to the page limit, we report the detailed pipeline in the supplementary material. + +## B. Interactive Perception + +Although we have estimated a URDF file from the raw point clouds, the method above inevitably introduces modeling errors due to imperfect recognition. We propose a pipeline that trains the robot to exploit Interactive Perception (IP) to correct the URDF. We first discuss which modeling parameters are re-estimated through IP and then introduce the pipeline for updating them. + +a) Model Parameter: We update the following model parameters: (1) the joint types $\mathcal{J}$ ; (2) the joint spatial descriptions $\mathcal{C}$ ; (3) each link’s segmentation mask $\mathcal{M}$ and the corresponding mesh file; (4) the physical attributes ${\alpha }^{\text{sim }}$ . We denote the model parameter set as $\mathcal{Z} = \left\{ {\mathcal{J},\mathcal{C},\mathcal{M},{\alpha }^{\text{sim }}}\right\}$ and use $\mathcal{E}$ to describe the URDF tree structure. + +Our system receives the raw point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ at time step $t$ and generates an action denoted as ${a}_{t}^{IP}$ . After the action ${a}_{t}^{IP}$ is executed, we observe the state difference between the simulation and the real world. Minimizing this difference leads to a more accurate $\mathcal{Z}$ . To accomplish this, we first need to establish the correspondence between the real world and the simulation. + +b) Correspondence in Simulation: In the simulation, we can directly calculate the 3D positions of ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ at time $t + 1$ , denoted as ${\left\{ {\overline{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ , via a forward function ${\mathcal{F}}^{\text{Sim }}$ , given the model parameters $\mathcal{Z},\mathcal{E}$ and the action ${a}_{t}^{IP}$ .
+ +$$ +{\overline{\mathcal{P}}}_{t + 1}^{i} = {\mathcal{F}}^{Sim}\left( {{\mathcal{P}}_{t}^{i},\mathcal{Z},\mathcal{E},{a}_{t}^{IP}}\right) \tag{1} +$$ + +As shown in Eqn. 1, each point ${\mathcal{P}}_{t}^{i}$ is associated with ${\overline{\mathcal{P}}}_{t + 1}^{i}$ through the forward simulation. + +c) Correspondence in Real World: Through the RGB-D camera, the robot in the real world receives a new point cloud at time $t + 1$ denoted as ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ . We train a scene flow [41] model that takes the raw point clouds ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ and ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ as inputs and outputs the scene flow ${\left\{ {\mathcal{U}}_{t}^{i}\right\} }_{i = 1}^{N}$ . We then calculate the 3D positions of the point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ at time $t + 1$ , denoted as ${\left\{ {\mathcal{P}}_{t}^{i} + {\mathcal{U}}_{t}^{i}\right\} }_{i = 1}^{N}$ . Each point ${\mathcal{P}}_{t}^{i} + {\mathcal{U}}_{t}^{i}$ then searches for the nearest point in ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ . We denote these found points as ${\left\{ {\widetilde{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ , which are in point-wise correspondence with ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ .
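The correspondence search just described, advecting each point by its scene-flow vector and then snapping it to the nearest point of the newly observed cloud, can be sketched in a few lines (brute-force NumPy purely for illustration; the paper does not specify an implementation):

```python
import numpy as np

def real_world_correspondence(P_t, U_t, P_t1):
    """Advect P_t by the scene flow U_t, then snap each advected point to
    its nearest neighbour in the newly observed cloud P_t1.
    All arguments are (N, 3) arrays; returns the corresponded points."""
    advected = P_t + U_t                               # predicted positions at t+1
    # Brute-force pairwise distances; fine for a sketch at small N.
    d = np.linalg.norm(advected[:, None, :] - P_t1[None, :, :], axis=-1)
    return P_t1[d.argmin(axis=1)]

# Toy check: a unit translation along x recovers the shifted cloud.
P_t  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
U_t  = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
P_t1 = np.array([[2.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
corr = real_world_correspondence(P_t, U_t, P_t1)
```

At realistic cloud sizes the quadratic distance matrix would be replaced by a spatial index such as a k-d tree.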
We denote the real-world forward function ${\mathcal{F}}^{\text{real }}$ as + +$$ +{\widetilde{\mathcal{P}}}_{t + 1}^{i} = {\mathcal{F}}^{\text{real }}\left( {{\mathcal{P}}_{t}^{i},{\mathcal{U}}_{t}^{i},{\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}}\right) \tag{2} +$$ + +d) Model Parameter Optimization: Given ${\mathcal{P}}_{t}^{i}$ , we can compute its 3D position at the next time step in both the simulation and the real world via ${\mathcal{F}}^{\text{sim }}$ and ${\mathcal{F}}^{\text{real }}$ , obtaining ${\overline{\mathcal{P}}}_{t + 1}^{i}$ and ${\widetilde{\mathcal{P}}}_{t + 1}^{i}$ respectively. Accurate model parameters lead to a small difference between them. We denote the distance between ${\left\{ {\overline{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ and ${\left\{ {\widetilde{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ as ${\mathcal{L}}_{t + 1}$ : + +$$ +{\mathcal{L}}_{t + 1}\left( {{a}_{t}^{IP},\mathcal{Z},\mathcal{E}}\right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\begin{Vmatrix}{\overline{\mathcal{P}}}_{t + 1}^{i} - {\widetilde{\mathcal{P}}}_{t + 1}^{i}\end{Vmatrix}}^{2} \tag{3} +$$ + +${\mathcal{L}}_{t + 1}$ is adopted to measure the modeling quality. Since ${\mathcal{F}}^{\text{sim }}$ is differentiable with respect to the model parameters $\mathcal{Z}$ , we can optimize $\mathcal{Z}$ through gradient descent, find a new model parameter set ${\mathcal{Z}}^{\prime } = \left\{ {{\mathcal{J}}^{\prime },{\mathcal{C}}^{\prime },{\mathcal{M}}^{\prime },{\alpha }^{\text{sim }\prime }}\right\}$ , and obtain ${\mathcal{E}}^{\prime }$ from $\mathcal{E}$ with the help of the newly optimized ${\mathcal{Z}}^{\prime }$ . In this way, we gradually approach an accurate model of the environment.
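Concretely, the update ${\mathcal{Z}}^{\prime } = \mathcal{Z} - \lambda\,\partial {\mathcal{L}}_{t+1}/\partial \mathcal{Z}$ can be sketched on a toy forward model: below, a single rotation parameter stands in for ${\mathcal{F}}^{\text{sim}}$, and finite differences stand in for the analytic gradients a differentiable simulator such as Nimble would supply automatically (this is our illustration, not the authors' implementation).

```python
import numpy as np

def modeling_loss(P_bar, P_tilde):
    """Eqn. 3: mean squared distance between simulated and real points."""
    return np.mean(np.sum((P_bar - P_tilde) ** 2, axis=-1))

def f_sim(P, z):
    """Toy stand-in for F^sim: one scalar parameter z rotates 2-D points,
    a crude proxy for a revolute joint's forward model."""
    c, s = np.cos(z), np.sin(z)
    return P @ np.array([[c, -s], [s, c]]).T

def grad_z(P, z, P_tilde, eps=1e-6):
    """Central finite difference; a differentiable simulator provides
    this gradient analytically for the real F^sim."""
    return (modeling_loss(f_sim(P, z + eps), P_tilde)
            - modeling_loss(f_sim(P, z - eps), P_tilde)) / (2 * eps)

P_t = np.array([[1.0, 0.0], [0.0, 1.0]])
P_tilde = f_sim(P_t, 0.3)      # "real-world" points: true parameter is 0.3

z, lr = 0.0, 0.5
for _ in range(100):           # the update Z' = Z - lambda * dL/dZ
    z -= lr * grad_z(P_t, z, P_tilde)
```

With the loss driven to zero, the recovered parameter matches the value that generated the "real-world" points.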
+ +$$ +{\mathcal{Z}}^{\prime } = \mathcal{Z} - \lambda \frac{\partial {\mathcal{L}}_{t + 1}}{\partial \mathcal{Z}} \tag{4} +$$ + +e) Policy Network: We introduce a deep reinforcement learning network which takes as inputs the raw point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ and the model parameter set $\mathcal{Z}$ , and outputs an action denoted as ${a}_{t}^{IP}$ . Here ${a}_{t}^{IP}$ contains a discrete action, the link id on the predicted URDF, and a continuous action that determines the goal state change of the corresponding joint in the simulation. How to generate the associated robot manipulation actions is discussed in Sec. III-D. We define the modeling quality improvement in Eqn. 5 as the reward for taking the action ${a}_{t}^{IP}$ in the current state. + +$$ +{r}_{t} = {\mathcal{L}}_{t + 1}\left( {{a}_{t}^{IP},\mathcal{Z},\mathcal{E}}\right) - {\mathcal{L}}_{t + 1}\left( {{a}_{t}^{IP},{\mathcal{Z}}^{\prime },{\mathcal{E}}^{\prime }}\right) \tag{5} +$$ + +Leveraging IP, our pipeline can verify and modify the URDF file to achieve a better environment model. A detailed explanation of each part is given in the supplementary material. + +## C. Sim-Real Gap Reduction through Augmenting Differentiable Simulation with Neural Networks + +Even if the analytical model parameters are provided, rigid-body dynamics alone often does not exactly predict the motion of mechanisms in the real world [33]. In the interactive perception stage, our pipeline optimizes the modeling parameters to reduce the differences between the simulation and the real world. However, some discrepancy always remains. To address this, we propose a simulation that leverages differentiable physics models and neural networks to allow efficient reduction of the sim-real gap. We denote by ${s}_{t}^{sim}$ and ${s}_{t}$ the current states of the simulation and the real world, respectively.
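Schematically, such a learning-augmented step adds a learned residual to the analytic simulator's prediction and fits it to minimize the sim-real state mismatch. A minimal sketch (our illustration: a linear map with bias stands in for the residual network, and the toy "real" world differs from the simulator by a constant drift):

```python
import numpy as np

def features(s, a, sim_next):
    """Inputs to the residual model: current state, action, simulator
    prediction, plus a bias term."""
    return np.concatenate([s, a, sim_next, [1.0]])

def augmented_step(sim_next, W, s, a):
    """Corrected next state: analytic simulator output plus learned residual."""
    return sim_next + W @ features(s, a, sim_next)

def fit_residual(transitions, sim_step):
    """Fit the residual by least squares, minimizing the mismatch between
    corrected simulated states and observed real next states."""
    X = [features(s, a, sim_step(s, a)) for s, a, _ in transitions]
    Y = [s_next - sim_step(s, a) for s, a, s_next in transitions]
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    return W.T

# Toy world: the "real" dynamics add a constant drift the simulator misses.
sim_step = lambda s, a: s + a
drift = np.array([0.05, -0.02])
rng = np.random.default_rng(0)
transitions = [(s, a, s + a + drift)
               for s, a in (rng.normal(size=(2, 2)) for _ in range(50))]
W = fit_residual(transitions, sim_step)
pred = augmented_step(sim_step(np.zeros(2), np.ones(2)), W, np.zeros(2), np.ones(2))
```

In the system itself a neural network plays the role of the linear map, trained on the memory buffer of real transitions.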
We develop a neural network model, denoted NeuralNet, to predict the residual change of the next state based on the current state, the current action, and the next state calculated by the Nimble simulation. In the real world, our robot takes action ${a}_{t}$ in state ${s}_{t}$ and gathers the transition tuple $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ . Meanwhile, in the simulation, our robot takes the same action ${a}_{t}$ in state ${s}_{t}$ and gathers the transition tuple $\left( {{s}_{t}^{sim},{a}_{t},{s}_{t + 1}^{sim}}\right)$ . Here ${s}_{t}^{sim}$ and ${s}_{t}$ are the same. We define a loss denoted as ${\mathcal{L}}^{\text{aug }}$ to measure the difference between ${s}_{t + 1}^{sim}$ and ${s}_{t + 1}$ . Due to the page limit, we report the detailed process of calculating ${\mathcal{L}}^{\text{aug }}$ in the supplementary material. + +## D. Model-Based Manipulation Learning + +In this subsection, we discuss how our system produces robotic manipulation actions to accomplish a given task denoted as $\mathcal{T}$ . For example, the robot may receive a task request from Sec. III-B to open the microwave by $\theta$ degrees, which is the action ${a}^{IP}$ defined during the interactive perception stage in Sec. III-B. We first train the robot to accomplish the task $\mathcal{T}$ in the differentiable simulation. We then record the resulting manipulation sequences from the simulation, denoted as ${\left\{ \left( {s}_{t}^{sim},{a}_{t}^{sim}\right) \right\} }_{t = 0}^{T - 1}$ , and utilize them to guide the robotic execution in the real world. + +a) Manipulation in the Simulation: Based on the model of the environment, we propose a two-level procedure to guide the robot in the simulation to reach the target goal $\mathcal{G}\left( \mathcal{T}\right)$ , which indicates the success of the task $\mathcal{T}$ . + +Object-centric setting.
In this setting, the robot is not loaded into the simulation and we directly control the objects to find a feasible path to the goal $\mathcal{G}\left( \mathcal{T}\right)$ . We denote the state and the action in this object-centric setup as ${x}_{t}$ and ${u}_{t}$ . Here ${x}_{t} = \left\lbrack {{q}_{t},{\dot{q}}_{t}}\right\rbrack$ contains the object’s current joint value ${q}_{t}$ and joint velocity ${\dot{q}}_{t}$ , and ${u}_{t}$ represents the external forces exerted directly to control the objects. + +We formulate finding a feasible path to the goal as an optimal control problem with the dynamics function ${\mathcal{F}}^{\text{sim }}$ and the cost function ${l}^{o}$ shown below, where $T$ is the maximal time step. + +$$ +{\mathcal{L}}_{t}^{o}\left( \mathcal{T}\right) = \mathop{\sum }\limits_{{t = 0}}^{{T - 1}}{l}_{t}^{o}\left( {{x}_{t},{u}_{t};\mathcal{T}}\right) \tag{6} +$$ + +$$ +\text{s.t.}{x}_{t + 1} = {\mathcal{F}}^{\text{sim }}\left( {{x}_{t},{u}_{t}}\right) ,{x}_{0} = {x}_{\text{init }}\text{.} \tag{7} +$$ + +We can find a solution to this optimization problem by leveraging the differentiable simulation; detailed explanations are reported in the supplementary material. We record the resulting procedure sequences, denoted as ${\left\{ {x}_{t}^{ * },{u}_{t}^{ * }\right\} }_{t = 0}^{T - 1}$ . These sequences reflect how the objects should be transformed in order to reach the goal $\mathcal{G}\left( \mathcal{T}\right)$ , and they are used to guide robot actions at the next level. + +Robot-centric setting. After gathering ${\left\{ {x}_{t}^{ * },{u}_{t}^{ * }\right\} }_{t = 0}^{T - 1}$ in the object-centric setting, we load the robot into the simulation and start the robot-centric procedure, in which we find the robotic action sequences that accomplish the task $\mathcal{T}$ .
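For the object-centric problem of Eqns. 6-7, gradient-based shooting through the differentiable dynamics is one natural solver: roll out the control sequence, evaluate the summed cost, and descend on the controls. A toy 1-D sketch under that reading (our illustration with a terminal-goal cost; the paper's actual solver is in its supplementary material):

```python
import numpy as np

def rollout(q0, u):
    """Toy F^sim: a 1-D kinematic joint, q_{t+1} = q_t + u_t."""
    return np.concatenate([[q0], q0 + np.cumsum(u)])

def shoot(q0, goal, T=10, iters=200, lr=0.05, reg=0.01):
    """Gradient descent on the whole control sequence through the dynamics.
    The gradient is analytic for this toy model; a differentiable simulator
    such as Nimble supplies it automatically for the real F^sim."""
    u = np.zeros(T)
    for _ in range(iters):
        q_T = q0 + u.sum()                    # terminal state of the rollout
        # d/du of (q_T - goal)^2 + reg * sum(u^2)
        grad = 2.0 * (q_T - goal) + 2.0 * reg * u
        u -= lr * grad
    return u, rollout(q0, u)

u_star, x_star = shoot(q0=0.0, goal=1.0)      # e.g. open a joint to angle 1.0
```

The recorded pair $(x^{*}, u^{*})$ then plays exactly the reference role described above for the robot-centric level.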
Note that in the robot-centric setting, the state ${s}_{t}^{\text{sim }} = \left\lbrack {{x}_{t};{q}_{t}^{r},{\dot{q}}_{t}^{r}}\right\rbrack$ is composed of two components: the objects’ state ${x}_{t}$ , i.e., the joint positions ${q}_{t}^{o}$ and joint velocities ${\dot{q}}_{t}^{o}$ , and the robot’s joint positions ${q}_{t}^{r}$ and joint velocities ${\dot{q}}_{t}^{r}$ . The action ${a}_{t}^{\text{sim }}$ in the robot-centric setting is the robot control forces denoted as $\tau$ . Note that ${a}_{t}^{\text{sim }}$ does not contain the ${u}_{t}$ used in the object-centric setup, meaning that we do not directly control the objects in the robot-centric setting. + +$$ +{\mathcal{L}}_{t}^{r}\left( \mathcal{T}\right) = \mathop{\sum }\limits_{{t = 0}}^{{T - 1}}{l}_{t}^{r}\left( {{s}_{t}^{sim},{a}_{t}^{sim};{\left\{ {x}_{t}^{ * }\right\} }_{t = 0}^{T - 1},{\left\{ {u}_{t}^{ * }\right\} }_{t = 0}^{T - 1},\mathcal{T}}\right) \tag{8} +$$ + +$$ +\text{s.t.}\;{s}_{t + 1}^{sim} = {\mathcal{F}}^{sim}\left( {{s}_{t}^{sim},{a}_{t}^{sim}}\right) ,{s}_{0}^{sim} = {s}_{\text{init }} \tag{9} +$$ + +$$ +{a}_{t}^{sim} = {\pi }_{\theta }\left( {s}_{t}^{sim}\right) +$$ + +We formulate directly finding the sequence of robotic actions ${\left\{ {a}_{t}^{\text{sim }}\right\} }_{t = 0}^{T - 1}$ that accomplishes the task $\mathcal{T}$ as an optimization problem. The cost function is denoted as ${l}^{r}$ and the policy parameterized by $\theta$ is ${\pi }_{\theta }$ . Due to the page limit, we put the detailed explanation of how to optimize ${\mathcal{L}}^{r}$ and ${\pi }_{\theta }$ in the supplementary material. We denote the resulting sequences as ${\left\{ {s}_{t}^{ * },{a}_{t}^{ * }\right\} }_{t = 0}^{T}$ . Note that we can switch back to the object-centric setting if hard constraints are violated, such as the robot reaching its joint limits or nearing a potential collision.
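The robot-centric cost $l^{r}$ is deferred to the supplement; one plausible form, consistent with its arguments in Eqn. 8, tracks the object-centric reference ${\left\{ {x}_{t}^{*}\right\}}$ with the object component of ${s}_{t}^{sim}$ while penalizing control effort. A sketch of such an assumed cost (weights and layout are our illustration):

```python
import numpy as np

def tracking_cost(s_seq, a_seq, x_ref, w_track=1.0, w_ctrl=0.01):
    """Illustrative l^r: the object component of each robot-centric state
    s_t should follow the object-centric reference x_t^*, with a small
    penalty on the robot control forces tau.

    s_seq: (T, d) states whose leading x_ref.shape[1] dims are x_t;
    a_seq: (T, m) robot controls; x_ref: (T, k) reference trajectory.
    """
    obj = s_seq[:, : x_ref.shape[1]]          # object part of s_t^sim
    track = np.sum((obj - x_ref) ** 2)        # sum_t ||x_t - x_t^*||^2
    effort = np.sum(a_seq ** 2)               # sum_t ||tau_t||^2
    return w_track * track + w_ctrl * effort

# A state sequence sitting one unit off the reference everywhere:
cost = tracking_cost(np.ones((3, 4)), np.zeros((3, 2)), np.zeros((3, 2)))
```

Minimizing such a cost over $\theta$ through the differentiable dynamics yields the policy ${\pi}_{\theta}$ used for execution.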
+ +b) Guided Manipulation in the Real World: After receiving the sequence of states ${\left\{ {s}_{t}^{{sim} * },{a}_{t}^{{sim} * }\right\} }_{t = 0}^{T}$ and the policy ${\pi }_{\theta }$ , we execute the learned policy ${\pi }_{\theta }\left( {s}_{t}\right)$ and the state changes to ${s}_{t + 1}$ . Detailed explanations are in the supplementary material. Meanwhile, we record the transition tuple in the real world $\left( {{s}_{t}^{r},{a}_{t}^{r},{s}_{t + 1}^{r}}\right)$ . If the current episode fails to accomplish the task $\mathcal{T}$ in the real world, these states are added into the memory buffer to improve the quality of the model as described in Sec. III-C. + +## IV. EXPERIMENTS + +In this work, we develop a learning framework aiming to achieve sample-efficient, generalizable, compositional, and incremental robot learning. Our experiments focus on evaluating the following questions: (1) How effective is our proposed interactive perception framework? (2) Is our approach more sample-efficient, with better generalization, than other approaches? (3) How useful is our model in zero/few-shot learning, task composition/combination, and long-horizon manipulation tasks? + +## A. Experimental Setup + +a) Simulation: We use PyBullet [38] to simulate the real world, which is distinct from our differentiable simulation based on Nimble [34]. We conduct our experiments using six object categories from the SAPIEN dataset [42]: box, door, microwave, oven, refrigerator, and storage furniture. We select 176 models in total, 143 for training and 33 for testing. + +b) Real World: We also set up a real-world experiment. We mount a RealSense RGB-D camera on the wrist of a 7-DoF Franka robot. Due to the page limit, we put the real-world experiment in the supplementary material. + +## B.
Evaluating the Performance of Interactive Perception + +In this subsection, we compare the modeling quality before and after the interactive perception operations. Given the initial model of the environment, we verify and modify each part of the predicted URDF with a sequence of interactions, and optimize the URDF online at each interaction step. To evaluate the performance, we compute the average precision of instance part segmentation under IoU 0.75 (AP75), the joint type classification accuracy (Acc.), the joint orientation error (Rot.), and the joint translation error (Tran.). Comparing the "init" and "opt" results in Tab. I, the pipeline significantly improves the URDF quality after interactive perception. + +TABLE I: Performance of interactive perception. + +
| Category | AP75 ↑ (init) | AP75 ↑ (opt) | Acc. ↑ (init) | Acc. ↑ (opt) | Rot. ↓ (init) | Rot. ↓ (opt) | Tran. ↓ (init) | Tran. ↓ (opt) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Door | 0.748 | 0.714 | 0.975 | 0.975 | 13.713 | 2.252 | 15.806 | 3.670 |
| Microwave | 0.800 | 0.939 | 1.000 | 1.000 | 11.562 | 4.230 | 15.439 | 3.788 |
| Box | 0.895 | 0.822 | 0.877 | 0.907 | 10.143 | 4.530 | 15.241 | 7.899 |
| Oven | 0.773 | 0.875 | 1.000 | 1.000 | 23.525 | 6.589 | 18.731 | 10.066 |
| Fridge | 0.584 | 0.814 | 0.989 | 0.955 | 12.174 | 5.331 | 18.000 | 7.820 |
| Storage | 0.499 | 0.519 | 0.924 | 0.949 | 11.564 | 9.831 | 33.510 | 7.404 |
| Overall | 0.717 | 0.781 | 0.961 | 0.964 | 13.780 | 5.461 | 19.455 | 6.775 |
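The Rot. and Tran. metrics above can be computed as the angle between predicted and ground-truth joint axes (sign-invariant, in degrees) and the distance between the joint origins. A sketch of such metrics (our assumed definitions; the paper does not spell them out):

```python
import numpy as np

def joint_errors(axis_pred, origin_pred, axis_gt, origin_gt):
    """Rotation error in degrees between joint axes (invariant to the
    arbitrary sign of an axis) and distance between joint origins."""
    a = np.asarray(axis_pred, float); a /= np.linalg.norm(a)
    b = np.asarray(axis_gt, float);   b /= np.linalg.norm(b)
    cos = np.clip(abs(np.dot(a, b)), 0.0, 1.0)   # |.|: axis sign is arbitrary
    rot_err = np.degrees(np.arccos(cos))
    tran_err = np.linalg.norm(np.asarray(origin_pred, float)
                              - np.asarray(origin_gt, float))
    return rot_err, tran_err

# A prediction whose axis is tilted 45 degrees and origin is 0.1 off:
rot, tran = joint_errors([0, 0, 1], [0.3, 0.0, 0.0], [0, 1, 1], [0.3, 0.1, 0.0])
```

For a revolute joint, the point-to-axis-line distance would be the more faithful translation measure; the origin-to-origin distance above keeps the sketch short.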
+ +## C. Evaluating the Sample-Efficiency and Generalization + +We compare our proposed approach with two popular model-free RL algorithms, SAC [43] and TD3 [44], on articulated object manipulation tasks. The tasks are to teach the robot to open articulated objects of the six classes. We report the results of opening microwaves here; results on the other categories are given in the supplementary material. + +![01963e1e-59dd-78da-9235-845308dbc53a_3_972_415_611_285_0.jpg](images/01963e1e-59dd-78da-9235-845308dbc53a_3_972_415_611_285_0.jpg) + +Fig. 2: (a) Learning curves of SAC and TD3. (b) Comparison of SAC, TD3 and our method in terms of generalization on the microwave + +For the model-free RL approaches, we describe the definitions of the states, actions, and reward functions in the supplementary material. Fig. 2(a) shows the average success rates of SAC and TD3 during training. After training for ${700}\mathrm{k}$ steps, the average success rates of the SAC and TD3 models in opening microwaves are about ${80}\%$ and ${70}\%$ , respectively. In contrast, our pipeline achieves a success rate of around ${90}\%$ after just five interactive perception operations and the corresponding training samples. Moreover, the model constructed by our approach can be adopted for other related tasks immediately. To evaluate the generalization abilities of the different approaches, we perturb the articulated object's 6D pose by a 6D offset/disturbance. As shown in Fig. 2(b), with increasing disturbance to the robot end-effector’s initial 6D pose, the performance of SAC and TD3 decreases rapidly, especially when manipulating unseen microwaves. The results show that our method performs comparably to the baselines under small disturbances and outperforms them under large disturbances and on unseen microwaves, indicating a significantly better generalization ability. + +## D.
Evaluating the Compositionality and Incrementality + +We put the experiments of this subsection in the supplementary material due to the page limit. + +![01963e1e-59dd-78da-9235-845308dbc53a_3_944_1551_662_126_0.jpg](images/01963e1e-59dd-78da-9235-845308dbc53a_3_944_1551_662_126_0.jpg) + +Fig. 3: Open the microwave, put the mug into it, and close it + +## V. CONCLUSION + +We present a learning system called SAGCI-System aiming to achieve sample-efficient, generalizable, compositional, and incremental robot learning. Our system first estimates an initial URDF to model the surrounding environment. The URDF is loaded into a learning-augmented differentiable simulation. We leverage interactive perception to correct the URDF online. Based on this model, we propose a new model-based learning approach to generate policies that accomplish various tasks. We apply the system to articulated object manipulation tasks. Extensive experiments in the simulation and the real world demonstrate the effectiveness of our proposed learning framework. + +## REFERENCES + +[1] L. P. Kaelbling, "The foundation of efficient robot learning," Science, vol. 369, no. 6506, pp. 915-916, 2020. + +[2] "Unified robot description format (urdf)," 2019. [Online]. Available: http://wiki.ros.org/urdf + +[3] J. Bohg, K. Hausman, B. Sankaran, O. Brock, D. Kragic, S. Schaal, and G. S. Sukhatme, "Interactive perception: Leveraging action in perception and perception in action," IEEE Transactions on Robotics, vol. 33, no. 6, pp. 1273-1291, 2017. + +[4] H. van Hoof, O. Kroemer, and J. Peters, "Probabilistic segmentation and targeted exploration of objects in cluttered environments," IEEE Transactions on Robotics, vol. 30, no. 5, pp. 1198-1209, 2014. + +[5] D. Pathak, Y. Shentu, D. Chen, P. Agrawal, T. Darrell, S. Levine, and J. Malik, "Learning instance segmentation by interaction," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 2042-2045. + +[6] D.
Schiebener, J. Morimoto, T. Asfour, and A. Ude, "Integrating visual perception and manipulation for autonomous learning of object representations," Adaptive Behavior, vol. 21, no. 5, pp. 328-345, 2013. [Online]. Available: https://doi.org/10.1177/1059712313484502 + +[7] L. Y. Chang, J. R. Smith, and D. Fox, "Interactive singulation of objects from a pile," in IEEE International Conference on Robotics and Automation, ICRA 2012, 14-18 May, 2012, St. Paul, Minnesota, USA. IEEE, 2012, pp. 3875-3882. [Online]. Available: https://doi.org/10.1109/ICRA.2012.6224575 + +[8] M. Gupta and G. S. Sukhatme, "Using manipulation primitives for brick sorting in clutter," in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 3883-3889. + +[9] M. Gupta, T. Rühr, M. Beetz, and G. S. Sukhatme, "Interactive environment exploration in clutter," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 5265-5272. + +[10] T. Novkovic, R. Pautrat, F. Furrer, M. Breyer, R. Siegwart, and J. Nieto, "Object finding in cluttered scenes using interactive perception," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 8338-8344. + +[11] K. Hausman, S. Niekum, S. Osentoski, and G. S. Sukhatme, "Active articulation model estimation through interactive perception," in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 3305-3312. + +[12] R. M. Martin and O. Brock, "Online interactive perception of articulated objects with multi-level recursive estimation based on task-specific priors," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014, pp. 2494-2501. + +[13] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "Pytorch: An imperative style, high-performance deep learning library," Advances in neural information processing systems, vol. 32, pp. 8026-8037, 2019.
+ +[14] T. T. D. Team, R. Al-Rfou, G. Alain, A. Almahairi, C. Angermueller, D. Bahdanau, N. Ballas, F. Bastien, J. Bayer, A. Belikov et al., "Theano: A python framework for fast computation of mathematical expressions," arXiv preprint arXiv:1605.02688, 2016. + +[15] Y. Hu, T.-M. Li, L. Anderson, J. Ragan-Kelley, and F. Durand, "Taichi: a language for high-performance computation on spatially sparse data structures," ACM Transactions on Graphics (TOG), vol. 38, no. 6, p. 201, 2019. + +[16] B. Bell, "Cppad: a package for c++ algorithmic differentiation," http://www.coin-or.org/CppAD, 2020. + +[17] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang, "JAX: composable transformations of Python+NumPy programs," 2018. [Online]. Available: http://github.com/google/jax + +[18] S. Agarwal, K. Mierle, and Others, "Ceres solver," http://ceres-solver.org + +[19] J. Degrave, M. Hermans, J. Dambre et al., "A differentiable physics engine for deep learning in robotics," Frontiers in neurorobotics, vol. 13, p. 6, 2019. + +[20] F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter, "End-to-end differentiable physics for learning and control," in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31. Curran Associates, Inc., 2018. [Online]. Available: https://proceedings.neurips.cc/paper/2018/file/842424a1d0595b76ec4fa03c46e8d755-Paper.pdf + +[21] M. Geilinger, D. Hahn, J. Zehnder, M. Bächer, B. Thomaszewski, and S. Coros, "Add: analytically differentiable dynamics for multi-body systems with frictional contact," ACM Transactions on Graphics (TOG), vol. 39, no. 6, pp. 1-15, 2020. + +[22] Y.-L. Qiao, J. Liang, V. Koltun, and M. Lin, "Scalable differentiable physics for learning and control," in International Conference on Machine Learning. PMLR, 2020, pp. 7847-7856.
+ +[23] E. Heiden, D. Millard, E. Coumans, Y. Sheng, and G. S. Sukhatme, "NeuralSim: Augmenting differentiable simulators with neural networks," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2021. [Online]. Available: https://github.com/google-research/tiny-differentiable-simulator + +[24] M. Toussaint, K. R. Allen, K. A. Smith, and J. B. Tenenbaum, "Differentiable physics and stable modes for tool-use and manipulation planning -extended abstract," in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence, 2019, pp. 6231-6235. [Online]. Available: https://www.ijcai.org/Proceedings/2019/ + +[25] C. Schenck and D. Fox, "Spnets: Differentiable fluid dynamics for deep neural networks," in Conference on Robot Learning. PMLR, 2018, pp. 317-335. + +[26] J. Liang, M. Lin, and V. Koltun, "Differentiable cloth simulation for inverse problems," in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc., 2019. [Online]. Available: https://proceedings.neurips.cc/paper/ 2019/file/28f0b864598a1291557bed248a998d4e-Paper.pdf + +[27] Y. Hu, J. Liu, A. Spielberg, J. B. Tenenbaum, W. T. Freeman, J. Wu, D. Rus, and W. Matusik, "Chainqueen: A real-time differentiable physical simulator for soft robotics," in 2019 International conference on robotics and automation (ICRA). IEEE, 2019, pp. 6265-6271. + +[28] P. Holl, N. Thuerey, and V. Koltun, "Learning to control pdes with differentiable physics," in International Conference on Learning Representations, 2019. + +[29] P. W. Battaglia, R. Pascanu, M. Lai, D. Rezende, and K. Kavukcuoglu, "Interaction networks for learning about objects, relations and physics," arXiv preprint arXiv:1612.00222, 2016. + +[30] M. B. Chang, T. Ullman, A. Torralba, and J. B. 
Tenenbaum, "A compositional object-based approach to learning physical dynamics," arXiv preprint arXiv:1612.00341, 2016. + +[31] D. Mrowca, C. Zhuang, E. Wang, N. Haber, L. Fei-Fei, J. B. Tenenbaum, and D. L. Yamins, "Flexible neural representation for physics prediction," arXiv preprint arXiv:1806.08047, 2018. + +[32] A. Ajay, J. Wu, N. Fazeli, M. Bauza, L. P. Kaelbling, J. B. Tenenbaum, and A. Rodriguez, "Augmenting physical simulators with stochastic neural networks: Case study of planar pushing and bouncing," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 3066-3073. + +[33] E. Heiden, D. Millard, E. Coumans, Y. Sheng, and G. S. Sukhatme, "Neuralsim: Augmenting differentiable simulators with neural networks," arXiv preprint arXiv:2011.04217, 2020. + +[34] K. Werling, D. Omens, J. Lee, I. Exarchos, and C. K. Liu, "Fast and feature-complete differentiable physics for articulated rigid bodies with contact," in Proceedings of Robotics: Science and Systems (RSS), July 2021. + +[35] T. Wang, X. Bao, I. Clavera, J. Hoang, Y. Wen, E. Langlois, S. Zhang, G. Zhang, P. Abbeel, and J. Ba, "Benchmarking model-based reinforcement learning," arXiv preprint arXiv:1907.02057, 2019. + +[36] T. M. Moerland, J. Broekens, and C. M. Jonker, "Model-based reinforcement learning: A survey," arXiv preprint arXiv:2006.16712, 2020. + +[37] E. Todorov, T. Erez, and Y. Tassa, "Mujoco: A physics engine for model-based control," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 5026-5033. + +[38] E. Coumans and Y. Bai, "Pybullet, a python module for physics simulation for games, robotics and machine learning," http://pybullet.org, 2016-2021. + +[39] R. ROS, "urdf," http://wiki.ros.org/urdf + +[40] J. Huang, Y. Zhou, and L. Guibas, "Manifoldplus: A robust and scalable watertight manifold surface generation method for triangle soups," arXiv preprint arXiv:2005.11621, 2020. + +[41] X. Liu, C. R.
Qi, and L. J. Guibas, "Flownet3d: Learning scene flow in 3d point clouds," CVPR, 2019. + +[42] F. Xiang, Y. Qin, K. Mo, Y. Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y. Yuan, H. Wang, L. Yi, A. X. Chang, L. J. Guibas, and H. Su, "SAPIEN: A simulated part-based interactive environment," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. + +[43] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International conference on machine learning. PMLR, 2018, pp. 1861-1870. + +[44] S. Fujimoto, H. van Hoof, and D. Meger, "Addressing function approximation error in actor-critic methods," in International Conference on Machine Learning. PMLR, 2018, pp. 1587-1596. \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Fdj08qJuZxH/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Fdj08qJuZxH/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..2c50010f7bbf577f00d7fcbd4567a074f6c8bceb --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Fdj08qJuZxH/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,207 @@ +§ SAGCI-SYSTEM: TOWARDS SAMPLE-EFFICIENT, GENERALIZABLE, COMPOSITIONAL, AND INCREMENTAL ROBOT LEARNING + +Jun Lv ${}^{*1}$ , Qiaojun Yu ${}^{*1}$ , Lin Shao ${}^{*2}$ , Wenhai Liu ${}^{3}$ , Wenqiang Xu ${}^{1}$ and Cewu Lu ${}^{1}$ + +Abstract-Building general-purpose robots to perform a diverse range of tasks in a large variety of environments in the physical world at the human level is extremely challenging. According to [1], it requires the robot learning to be sample-efficient, generalizable, compositional, and incremental.
In this work, we introduce a systematic learning framework called SAGCI-system towards achieving the above four requirements. Our system first takes the raw point clouds gathered by the camera mounted on the robot's wrist as inputs and produces an initial model of the surrounding environment represented as a Unified Robot Description Format (URDF) file. Our system adopts a learning-augmented differentiable simulation that loads the URDF. The robot then utilizes interactive perception to interact with the environment and verify and modify the URDF online. Leveraging the differentiable simulation, we propose a model-based learning algorithm combining object-centric and robot-centric stages to efficiently produce policies that accomplish manipulation tasks. We apply our system to articulated object manipulation tasks, both in the simulation and the real world. Extensive experiments demonstrate the effectiveness of our proposed learning framework. Supplemental materials and videos are available on our project webpage https://sites.google.com/view/egci. + +§ I. INTRODUCTION + +Building general-purpose robots to perform a diverse range of tasks in a large variety of environments in the physical world at the human level is extremely challenging. Consider a robot operating in a household. The robot faces various challenges. It would need to perform a broad range of tasks, such as preparing food in the kitchen and cleaning the floor in the living room. In the process, it must be able to handle the immense diversity of objects and the variability of unstructured environments. Moreover, it would also need to quickly learn online to accomplish new tasks requested by humans.
According to [1], building such an intelligent system requires robot learning to be: sample-efficient, meaning the robot masters skills from few training samples; generalizable, meaning the robot accomplishes tasks in unseen but similar environments or settings; compositional, meaning the acquired knowledge and skills can be decomposed and recombined to solve exponentially more problems; and incremental, meaning new knowledge and abilities can be added over time. However, deep learning and deep reinforcement learning fail to meet these requirements [1]. + +We propose a robot learning framework called SAGCI-System that aims to satisfy these four requirements. In our setting, the robot has an RGB-D camera mounted on its wrist and can perceive the surrounding environment. Our system first takes the raw point clouds captured by the camera as input and produces an initial model of the surrounding environment, represented as a URDF file [2]. Based on this initial model, the robot leverages interactive perception (IP) [3] to interact with the environment and modify the URDF online. Our pipeline adopts a learning-augmented differentiable simulation that loads the URDF. We augment the differentiable simulation with neural networks to tackle the simulation error. We propose a novel model-based learning algorithm combining object-centric and robot-centric stages to efficiently produce policies that accomplish manipulation tasks. The policies learned through our pipeline are sample-efficient and generalizable because we integrate a structured, physics-informed inductive bias into the learning process. Moreover, the knowledge and robotic skills learned through our model-based approach are associated with the generated URDF files, which are organized in hierarchical structures, directly yielding compositionality and incrementality.
+ +Our primary contributions are: (1) we propose a systematic learning framework toward achieving sample-efficient, generalizable, compositional, and incremental robot learning; (2) we propose neural network models to generate the URDF file of the environment and leverage interactive perception to iteratively make the environment model more accurate; (3) we introduce a model-based learning approach based on the learning-augmented differentiable simulation to reduce the sim-to-real gap; (4) we conduct extensive quantitative experiments to demonstrate the effectiveness of our proposed approach; (5) we apply our framework in real-world experiments to accomplish a diverse set of tasks. + +§ II. RELATED WORK + +We review literature related to the key components of our approach, including interactive perception, differentiable simulation, and model-based reinforcement learning/control, and describe how our work differs from prior work. + +§ A. INTERACTIVE PERCEPTION + +Interactive perception (IP) is related to an extensive body of work in robotics and computer vision. It has enjoyed success in a wide range of applications, including object segmentation [4, 5], object recognition [6], object sorting [7, 8], and object search [9, 10]. For a broader review of IP, we refer to [3]. Hausman et al. [11] introduced a particle filter-based approach to represent the uncertainty over articulation models and selected actions to efficiently reduce the uncertainty over these models. Martin and Brock [12] presented an IP algorithm that performs articulated object estimation on the fly by formulating the perception problem as three interconnected recursive estimation filters. In this work, we leverage IP to construct, verify, and modify the model of the surrounding environment. + +§ B. DIFFERENTIABLE SIMULATION + +Differentiable simulation provides gradients in physical systems for learning, control, and inverse problems.
Leveraging recent advances in automatic differentiation methods [13-18], a number of differentiable physics engines have been proposed to solve system identification and control problems [15, 19-28]. An alternative approach toward a differentiable physics engine is to approximate physics with neural networks using data-driven approaches [29-31]. These neural networks are implicitly differentiable, but the learned networks might not satisfy the physical dynamics. The simulation quality may also degenerate beyond the training data distribution. One line of work [32, 33] addresses these issues by augmenting the simulation with neural networks. In this work, we adopt the Nimble simulator [34] and augment the differentiable simulation with neural networks to address the sim-to-real gap while taking advantage of the differentiability to develop manipulation policies. + +*The authors contributed equally + +${}^{1}$ Jun Lv, Qiaojun Yu, Wenqiang Xu, Cewu Lu are with the Department of Computer Science, Shanghai Jiao Tong University, China. {lyujune_sjtu, yqjllxs, vinjohn, lucewu}@sjtu.edu.cn + +${}^{2}$ Lin Shao is with the Artificial Intelligence Lab, Stanford University, USA. lins2@stanford.edu + +${}^{3}$ Wenhai Liu is with the School of Mechanical Engineering, Shanghai Jiao Tong University, China. sjtu-wenhai@sjtu.edu.cn + + < g r a p h i c s > + +Fig. 1: The proposed pipeline takes the raw point clouds captured by the RGB-D camera mounted on the robot as input and produces an initial model of the surrounding environment represented as a URDF. The robot then leverages interactive perception to verify and modify the URDF online. We also propose a novel model-based manipulation learning method to accomplish manipulation tasks. + +§ C. MODEL-BASED REINFORCEMENT LEARNING + +Model-based reinforcement learning (MBRL) is recognized as having the potential to be significantly more sample-efficient than model-free RL [35].
However, developing an accurate model of the environment is a challenging problem. Modeling errors degrade performance by misleading policies into exploiting the models' deficiencies. For a broader review of MBRL, we refer to [36]. In this work, we develop a system to model the environment and integrate a learning-augmented differentiable simulation to reduce the modeling error. To mitigate reward/cost function engineering, we propose a two-level (object-centric and robot-centric) procedure to generate manipulation policies. + +§ III. TECHNICAL APPROACH + +Once deployed in an unstructured environment, the robot begins to perceive the surrounding environment through an RGB-D camera mounted on its wrist. Our pipeline takes the raw point clouds as input and produces an initial model of the surrounding environment represented as a URDF. Based on this initial model, the robot leverages interactive perception to modify the URDF online. Our pipeline adopts a learning-augmented differentiable simulation to tackle the modeling error. Leveraging the differentiable simulation, we propose a model-based learning algorithm combining object-centric and robot-centric stages to efficiently produce policies that accomplish manipulation tasks. An overview of our proposed method is shown in Fig. 1. We describe these modules in the following subsections. + +§ A. ENVIRONMENT INITIAL MODELING + +With an RGB-D camera mounted at the wrist, the robot receives point clouds at time step $t$ as ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N} \in {\mathcal{R}}^{N \times 3}$ , where $N$ is the total number of points. In this subsection, we discuss our pipeline, which takes ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ as input to build an initial model of the surrounding environment.
+ +We use a URDF file to represent the environment, which can be directly loaded into most popular robotic simulators such as MuJoCo [37], Bullet [38], and Nimble [34] for model-based control/learning/planning. In this work, we only consider one object at a time, so the generated URDF file should only contain descriptions of a single object's links and joints. Each link is a rigid body part containing the mesh files describing the geometric shape and physical attributes such as the mass value, the inertia matrix, and the contact friction coefficients. Each joint describes the relationship between two links and its parameters. For more explanation of the URDF, we refer to [39]. + +We first describe how we generate the descriptions of the object's links. We develop a part-level instance segmentation model that takes the point clouds ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ as input and outputs a part-level instance segmentation mask denoted as ${\left\{ {\mathcal{M}}_{t}^{i}\right\} }_{i = 1}^{N}$ where ${\mathcal{M}}_{t}^{i} \in \{ 1,2,\ldots ,K\}$ , with $K$ the number of parts. We then segment the raw point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ into a set of groups denoted as ${\left\{ {\mathcal{G}}_{t}^{j}\right\} }_{j = 1}^{K}$ . Here each group corresponds to a link in the URDF. We then generate a watertight mesh for each group using ManifoldPlus [40]. Based on the raw input point clouds of each group, our model also estimates the physical properties of the link. Note that these values may not be correct, and we modify these properties at the IP stage in Sec. III-B. + +Next, we explain how to produce the corresponding joints in the URDF.
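Before turning to the joints, it may help to see the shape of the artifact being generated. The following is a minimal sketch of such a URDF, built with Python's standard xml.etree API; all link and joint names, mass values, mesh filenames, and the axis below are illustrative assumptions, not output of the actual pipeline.

```python
import xml.etree.ElementTree as ET

# Illustrative skeleton of a generated URDF: one <link> per segmented
# part group, one <joint> per predicted relation between two links.
robot = ET.Element("robot", name="estimated_object")

for name, mass in [("base", 2.0), ("door", 0.5)]:     # assumed part names
    link = ET.SubElement(robot, "link", name=name)
    inertial = ET.SubElement(link, "inertial")
    ET.SubElement(inertial, "mass", value=str(mass))  # estimated property
    visual = ET.SubElement(link, "visual")
    geom = ET.SubElement(visual, "geometry")
    ET.SubElement(geom, "mesh", filename=f"{name}.obj")  # watertight mesh

joint = ET.SubElement(robot, "joint", name="base_to_door", type="revolute")
ET.SubElement(joint, "parent", link="base")
ET.SubElement(joint, "child", link="door")
ET.SubElement(joint, "axis", xyz="0 0 1")             # predicted joint axis

urdf_xml = ET.tostring(robot, encoding="unicode")
```

A file of this form is what gets loaded into the simulator and later corrected during interactive perception.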
First, given $K$ links, we develop a joint relationship model that takes the segmented point clouds of links $u$ and $v$ , where $u,v \in \{ 1,2,\ldots ,K\}$ , and estimates their pairwise joint relationship denoted as $\mathcal{J} \in {\mathcal{R}}^{K \times K \times 4}$ . The predicted entry $\mathcal{J}\left( {u,v}\right)$ contains the probabilities of the "None", "Revolute", "Prismatic", and "Fixed" joint types between the two links. Here "None" indicates that the two links have no direct joint relationship. The model also predicts the joint spatial descriptions denoted as $\mathcal{C} \in {\mathcal{R}}^{K \times K \times 9}$ , which contain the joint axis, origin, and orientation for each pair. This yields a complete directed graph over the $K$ links. To compose a URDF, we adopt a greedy algorithm to find a directed tree denoted as $\mathcal{E} = {\left\{ {u}_{i},{v}_{i}\right\} }_{i = 1}^{K - 1}$ from the complete graph, which contains $K - 1$ joints. At this point, we have generated an initial URDF file describing the environment. Due to the page limit, we report the detailed pipeline in the supplementary material. + +§ B. INTERACTIVE PERCEPTION + +Although we have estimated a URDF file based on the raw point clouds, the above method always introduces modeling errors due to imperfect recognition. We propose a pipeline that trains the robot to take advantage of interactive perception (IP) to modify the URDF. We first discuss which modeling parameters are re-estimated through IP and then introduce the pipeline that updates these modeling parameters. + +a) Model Parameters: We update the following model parameters: (1) the joint type $\mathcal{J}$ ; (2) the joint spatial descriptions $\mathcal{C}$ ; (3) each link's mask segmentation $\mathcal{M}$ and the corresponding mesh file; (4) the physical attributes ${\alpha }^{\text{ sim }}$ .
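Looking back at the tree-construction step in Sec. III-A, the greedy selection of $K-1$ joints from the complete directed graph can be sketched as follows. The confidence score used here, one minus the predicted "None" probability, is an assumption about how the predicted $\mathcal{J}$ would be consumed, not the paper's exact rule.

```python
import numpy as np

def greedy_urdf_tree(joint_prob_none):
    """Pick K-1 directed joints from a complete graph, greedily by confidence.

    joint_prob_none[u, v] is the predicted probability that links u and v
    have *no* direct joint (the "None" class).
    """
    K = joint_prob_none.shape[0]
    score = 1.0 - joint_prob_none                 # confidence a joint exists
    edges = sorted(((score[u, v], u, v)
                    for u in range(K) for v in range(K) if u != v),
                   reverse=True)
    parent = list(range(K))                       # union-find for cycle checks

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, has_parent = [], [False] * K
    for s, u, v in edges:
        ru, rv = find(u), find(v)
        if not has_parent[v] and ru != rv:        # one parent per link, no cycle
            tree.append((u, v))
            has_parent[v] = True
            parent[rv] = ru
        if len(tree) == K - 1:
            break
    return tree

# Toy example: 3 links where (0,1) and (1,2) are the confident joints.
P_none = np.array([[1.0, 0.1, 0.9],
                   [0.2, 1.0, 0.15],
                   [0.9, 0.3, 1.0]])
tree = greedy_urdf_tree(P_none)
```

On this toy input the result is the chain 0 → 1 → 2, i.e. exactly $K-1 = 2$ joints.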
We denote the model parameter set as $\mathcal{Z} = \left\{ {\mathcal{J},\mathcal{C},\mathcal{M},{\alpha }^{\text{ sim }}}\right\}$ , and we use $\mathcal{E}$ to denote the URDF tree structure. + +Our system receives the raw point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ at time step $t$ and generates an action denoted as ${a}_{t}^{IP}$ . After the action ${a}_{t}^{IP}$ is executed, we observe the state difference between the simulation and the real world. Minimizing their difference leads to a more accurate $\mathcal{Z}$ . To accomplish this, we first need to establish the correspondence between the real world and the simulation. + +b) Correspondence in Simulation: In the simulation, we can directly calculate the 3D positions of ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ at time $t + 1$ , denoted as ${\left\{ {\overline{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ , via a forward function ${\mathcal{F}}^{\text{ Sim }}$ , given model parameters $\mathcal{Z},\mathcal{E}$ and action ${a}_{t}^{IP}$ . + +$$ +{\overline{\mathcal{P}}}_{t + 1}^{i} = {\mathcal{F}}^{Sim}\left( {{\mathcal{P}}_{t}^{i},\mathcal{Z},\mathcal{E},{a}_{t}^{IP}}\right) \tag{1} +$$ + +As shown in Eqn 1, each point ${\mathcal{P}}_{t}^{i}$ is associated with ${\overline{\mathcal{P}}}_{t + 1}^{i}$ through the forward simulation. + +c) Correspondence in Real World: Through the RGB-D camera, the robot in the real world receives a new point cloud at time $t + 1$ denoted as ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ . We train a scene flow [41] model that takes the raw point clouds ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ and ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ as inputs and outputs the scene flow ${\left\{ {\mathcal{U}}_{t}^{i}\right\} }_{i = 1}^{N}$ .
We then calculate the $3\mathrm{D}$ positions of the point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ at time $t + 1$ , denoted as ${\left\{ {\mathcal{P}}_{t}^{i} + {\mathcal{U}}_{t}^{i}\right\} }_{i = 1}^{N}$ . For each point ${\mathcal{P}}_{t}^{i} + {\mathcal{U}}_{t}^{i}$ , we search for the nearest point in ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ . We denote these found points in ${\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ as ${\left\{ {\widetilde{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ . The set ${\left\{ {\widetilde{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ is in point-wise correspondence with ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ . We denote the real-world forward function ${\mathcal{F}}^{\text{ real }}$ as + +$$ +{\widetilde{\mathcal{P}}}_{t + 1}^{i} = {\mathcal{F}}^{\text{ real }}\left( {{\mathcal{P}}_{t}^{i},{\mathcal{U}}_{t}^{i},{\left\{ {\mathcal{P}}_{t + 1}^{i}\right\} }_{i = 1}^{N}}\right) \tag{2} +$$ + +d) Model Parameter Optimization: Given ${\mathcal{P}}_{t}^{i}$ , we can compute its 3D position at the next time step in both the simulation and the real world via ${\mathcal{F}}^{\text{ sim }}$ and ${\mathcal{F}}^{\text{ real }}$ , namely ${\overline{\mathcal{P}}}_{t + 1}^{i}$ and ${\widetilde{\mathcal{P}}}_{t + 1}^{i}$ , respectively. Accurate model parameters lead to a small difference between them. We denote the distance between ${\left\{ {\overline{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ and ${\left\{ {\widetilde{\mathcal{P}}}_{t + 1}^{i}\right\} }_{i = 1}^{N}$ as ${\mathcal{L}}_{t + 1}$ : + +$$ +{\mathcal{L}}_{t + 1}\left( {{a}_{t}^{IP},\mathcal{Z},\mathcal{E}}\right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\begin{Vmatrix}{\overline{\mathcal{P}}}_{t + 1}^{i} - {\widetilde{\mathcal{P}}}_{t + 1}^{i}\end{Vmatrix}}^{2} \tag{3} +$$ + +${\mathcal{L}}_{t + 1}$ measures the modeling quality.
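For concreteness, the nearest-neighbour correspondence of Eqn. 2 and the loss of Eqn. 3 can be sketched in a few lines of numpy. The synthetic point clouds and the constant flow below stand in for real observations and a trained scene-flow model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: N points observed at time t, their predicted scene
# flow, and the (unordered) point cloud observed at time t+1.
N = 50
P_t = rng.uniform(0, 1, size=(N, 3))
U_t = np.full((N, 3), 0.05)            # scene flow (assumed constant here)
perm = rng.permutation(N)
P_t1 = (P_t + U_t)[perm]               # real cloud at t+1, order unknown

# Eqn (2): warp P_t by the flow, then take the nearest neighbour in P_{t+1}.
warped = P_t + U_t
d2 = ((warped[:, None, :] - P_t1[None, :, :]) ** 2).sum(-1)
P_tilde_t1 = P_t1[d2.argmin(axis=1)]   # point-wise correspondence to P_t

# Eqn (3): mean squared distance to the simulator's prediction P_bar_{t+1}.
P_bar_t1 = warped                      # pretend the simulator predicts this
loss = np.mean(np.sum((P_bar_t1 - P_tilde_t1) ** 2, axis=1))
```

With a perfect simulator prediction the loss is zero; any mismatch between the simulated and real motion shows up directly in this quantity.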
Since the function ${\mathcal{F}}^{\text{ sim }}$ is differentiable with respect to the model parameters $\mathcal{Z}$ , we can optimize $\mathcal{Z}$ through gradient descent, finding a new model parameter set ${\mathcal{Z}}^{\prime } = \left\{ {{\mathcal{J}}^{\prime },{\mathcal{C}}^{\prime },{\mathcal{M}}^{\prime },{\alpha }^{\text{ sim }\prime }}\right\}$ and obtaining ${\mathcal{E}}^{\prime }$ from $\mathcal{E}$ with the help of the newly optimized ${\mathcal{Z}}^{\prime }$ . In this way, we gradually approach an accurate model of the environment. + +$$ +{\mathcal{Z}}^{\prime } = \mathcal{Z} - \lambda \frac{\partial {\mathcal{L}}_{t + 1}}{\partial \mathcal{Z}} \tag{4} +$$ + +e) Policy Network: We introduce a deep reinforcement learning network that takes as input the raw point cloud ${\left\{ {\mathcal{P}}_{t}^{i}\right\} }_{i = 1}^{N}$ and the model parameter set $\mathcal{Z}$ , and outputs an action denoted as ${a}_{t}^{IP}$ . Here ${a}_{t}^{IP}$ contains a discrete action, which is the link ID on the predicted URDF, and a continuous action that determines the goal state change of the corresponding joint in the simulation world. How to generate the associated robot manipulation actions is discussed in Sec. III-D. We define the modeling quality improvement in Eqn. 5 as the reward for taking the action ${a}_{t}^{IP}$ given the current state. + +$$ +{r}_{t} = {\mathcal{L}}_{t + 1}\left( {{a}_{t}^{IP},\mathcal{Z},\mathcal{E}}\right) - {\mathcal{L}}_{t + 1}\left( {{a}_{t}^{IP},{\mathcal{Z}}^{\prime },{\mathcal{E}}^{\prime }}\right) \tag{5} +$$ + +Leveraging IP, our pipeline can verify and modify the URDF file to achieve a better environment model. A detailed explanation of each part is given in the supplementary material. + +§ C.
SIM-REAL GAP REDUCTION THROUGH AUGMENTING DIFFERENTIABLE SIMULATION WITH NEURAL NETWORKS + +Even if the analytical model parameters have been provided, rigid-body dynamics alone often does not exactly predict the motion of mechanisms in the real world [33]. In the interactive perception stage, our pipeline optimizes the modeling parameters to reduce the differences between the simulation and the real world. However, some discrepancy always remains. To address this, we propose a simulation that leverages differentiable physics models and neural networks to allow efficient reduction of the sim-real gap. We denote ${s}_{t}^{sim}$ and ${s}_{t}$ as the current states of the simulation and the real world, respectively. We develop a neural network model, denoted NeuralNet, to predict the residual change of the next state based on the current state, the current action, and the next state calculated by the Nimble simulator. In the real world, our robot takes action ${a}_{t}$ based on the state ${s}_{t}$ and gathers the transition tuple $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ . Meanwhile, in the simulation, the robot takes the same action ${a}_{t}$ based on the state ${s}_{t}$ and gathers the transition tuple $\left( {{s}_{t}^{sim},{a}_{t},{s}_{t + 1}^{sim}}\right)$ . Here ${s}_{t}^{sim}$ and ${s}_{t}$ are the same. We define a loss denoted as ${\mathcal{L}}^{\text{ aug }}$ to measure the difference between ${s}_{t + 1}^{sim}$ and ${s}_{t + 1}$ . Due to the page limit, we report the detailed process for calculating ${\mathcal{L}}^{\text{ aug }}$ in the supplementary material. + +§ D. MODEL-BASED MANIPULATION LEARNING + +In this subsection, we discuss how our system produces robotic manipulation actions to accomplish a given task denoted as $\mathcal{T}$ . For example, the robot may receive a task request from Sec. III-B to open the microwave by $\theta$ degrees, i.e., the action ${a}^{IP}$ defined during the interactive perception stage.
We first train the robot to accomplish the task $\mathcal{T}$ in the differentiable simulation. We then record the resulting manipulation sequences from the simulation, denoted as ${\left\{ \left( {s}_{t}^{sim},{a}_{t}^{sim}\right) \right\} }_{t = 0}^{T - 1}$ , and use these sequences to guide the robotic execution in the real world. + +a) Manipulation in the Simulation: Based on the model of the environment, we propose a two-level procedure to guide the robot in the simulation to reach the target goal $\mathcal{G}\left( \mathcal{T}\right)$ , which indicates the success of the task $\mathcal{T}$ . + +Object-centric setting. In this setting, the robot is not loaded into the simulation, and we directly control the objects to find a feasible path to the goal $\mathcal{G}\left( \mathcal{T}\right)$ . We denote the state and the action in this object-centric setup as ${x}_{t}$ and ${u}_{t}$ . Here ${x}_{t} = \left\lbrack {{q}_{t},{\dot{q}}_{t}}\right\rbrack$ , which contains the object's current joint value ${q}_{t}$ and joint velocity ${\dot{q}}_{t}$ . ${u}_{t}$ represents the external forces exerted directly to control the objects. + +We formulate the problem of finding a feasible path to the goal as an optimal control problem with dynamics function ${\mathcal{F}}^{\text{ sim }}$ and cost function ${l}^{o}$ , as shown below. Here $T$ is the maximal time step. + +$$ +{\mathcal{L}}^{o}\left( \mathcal{T}\right) = \mathop{\sum }\limits_{{t = 0}}^{{T - 1}}{l}_{t}^{o}\left( {{x}_{t},{u}_{t};\mathcal{T}}\right) \tag{6} +$$ + +$$ +\text{ s.t. }{x}_{t + 1} = {\mathcal{F}}^{\text{ sim }}\left( {{x}_{t},{u}_{t}}\right) ,{x}_{0} = {x}_{\text{ init }}\text{ . } \tag{7} +$$ + +We can solve this optimization problem by leveraging the differentiable simulation. Detailed explanations are reported in the supplementary material.
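To illustrate the flavour of Eqns. 6-7, here is a toy version of the object-centric stage: gradient descent on a control sequence for a single-joint double integrator, with finite differences standing in for the analytic gradients a differentiable simulator such as Nimble would provide. The dynamics, cost weights, and goal value are all assumptions made for this sketch.

```python
import numpy as np

dt, T = 0.1, 20
q_goal = 1.0                       # e.g. "open the door by theta" (assumed)

def rollout(u):
    """x_{t+1} = F_sim(x_t, u_t): toy single-joint double integrator."""
    q, qd = 0.0, 0.0
    qs = []
    for t in range(T):
        qd = qd + dt * u[t]        # velocity update from the control force
        q = q + dt * qd            # position update
        qs.append(q)
    return np.array(qs)

def cost(u):
    """Eqn 6-style objective: reach the goal joint value, small controls."""
    qs = rollout(u)
    return (qs[-1] - q_goal) ** 2 + 1e-3 * np.sum(u ** 2)

def grad(u, eps=1e-5):
    """Central finite differences; a differentiable simulator would supply
    these gradients analytically instead."""
    g = np.zeros_like(u)
    for t in range(T):
        du = np.zeros_like(u)
        du[t] = eps
        g[t] = (cost(u + du) - cost(u - du)) / (2 * eps)
    return g

u = np.zeros(T)                    # control sequence u_0 ... u_{T-1}
for _ in range(200):               # plain gradient descent on the controls
    u = u - 0.5 * grad(u)
```

On this toy problem the optimized controls drive the final joint value very close to the goal; in the paper's setting the same descent would run through the dynamics of the loaded URDF instead.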
We record the resulting sequences, denoted as ${\left\{ {x}_{t}^{ * },{u}_{t}^{ * }\right\} }_{t = 0}^{T - 1}$ . These sequences reflect how the objects should be transformed to successfully reach the goal $\mathcal{G}\left( \mathcal{T}\right)$ and are used to guide robot actions at the next level. + +Robot-centric setting. After gathering ${\left\{ {x}_{t}^{ * },{u}_{t}^{ * }\right\} }_{t = 0}^{T - 1}$ in the object-centric setting, we load the robot into the simulation and start the robot-centric procedure, in which we find the robotic action sequences that accomplish the task $\mathcal{T}$ . Note that in the robot-centric setting, the state ${s}_{t}^{\text{ sim }} = \left\lbrack {{x}_{t};{q}_{t}^{r},{\dot{q}}_{t}^{r}}\right\rbrack$ is composed of two components: the objects' state, denoted as ${x}_{t}$ , consisting of the joint position ${q}_{t}^{o}$ and joint velocity ${\dot{q}}_{t}^{o}$ ; and the robot's joint position ${q}_{t}^{r}$ and joint velocity ${\dot{q}}_{t}^{r}$ . The action ${a}_{t}^{\text{ sim }}$ in the robot-centric setting is the robot control forces, denoted as $\tau$ . Note that ${a}_{t}^{\text{ sim }}$ does not contain the ${u}_{t}$ used in the object-centric setup, meaning that we do not directly control the objects in the robot-centric setting. + +$$ +{\mathcal{L}}^{r}\left( \mathcal{T}\right) = \mathop{\sum }\limits_{{t = 0}}^{{T - 1}}{l}_{t}^{r}\left( {{s}_{t}^{sim},{a}_{t}^{sim};{\left\{ {x}_{t}^{ * }\right\} }_{t = 0}^{T - 1},{\left\{ {u}_{t}^{ * }\right\} }_{t = 0}^{T - 1},\mathcal{T}}\right) \tag{8} +$$ + +$$ +\text{ s.t. }\;{s}_{t + 1}^{sim} = {\mathcal{F}}^{sim}\left( {{s}_{t}^{sim},{a}_{t}^{sim}}\right) ,{s}_{0}^{sim} = {s}_{\text{ init }} \tag{9} +$$ + +$$ +{a}_{t}^{sim} = {\pi }_{\theta }\left( {s}_{t}^{sim}\right) +$$ + +We formulate the problem of directly finding the sequence of robotic actions ${\left\{ {a}_{t}^{\text{ sim }}\right\} }_{t = 0}^{T - 1}$ that accomplishes the task $\mathcal{T}$ as an optimization problem.
The cost function is denoted as ${l}^{r}$ and the policy parameterized by $\theta$ is ${\pi }_{\theta }$ . Due to the page limit, we put the detailed explanation of how to optimize ${\mathcal{L}}^{r}$ and ${\pi }_{\theta }$ in the supplementary material. We denote the corresponding sequences as ${\left\{ {s}_{t}^{ * },{a}_{t}^{ * }\right\} }_{t = 0}^{T}$ . Note that we can switch back to the object-centric setting if hard constraints are met, such as the robot reaching its joint limits or nearing a potential collision. + +b) Guided Manipulation in the Real World: After receiving the sequence of states ${\left\{ {s}_{t}^{{sim} * },{a}_{t}^{{sim} * }\right\} }_{t = 0}^{T}$ and the policy ${\pi }_{\theta }$ , we execute the learned policy ${\pi }_{\theta }\left( {s}_{t}\right)$ and the state changes to ${s}_{t + 1}$ . Detailed explanations are in the supplementary material. Meanwhile, we record the real-world transition tuple $\left( {{s}_{t}^{r},{a}_{t}^{r},{s}_{t + 1}^{r}}\right)$ . If the current episode fails to accomplish the task $\mathcal{T}$ in the real world, these states are added to the memory buffer to improve the quality of the model as described in Sec. III-C. + +§ IV. EXPERIMENTS + +In this work, we develop a learning framework aiming to achieve sample-efficient, generalizable, compositional, and incremental robot learning. Our experiments focus on evaluating the following questions: (1) How effective is our proposed interactive perception framework? (2) Is our approach more sample-efficient and does it generalize better than other approaches? (3) How useful is our model in zero/few-shot learning, task composition/combination, and long-horizon manipulation tasks? + +§ A. EXPERIMENTAL SETUP + +a) Simulation: We use PyBullet [38] to simulate the real world, which is distinct from our differentiable simulation based on Nimble [34].
We conduct our experiments using six object categories from the SAPIEN dataset [42], namely box, door, microwave, oven, refrigerator, and storage furniture. We select 176 models in total: 143 models for training and 33 models for testing. + +b) Real World: We also set up real-world experiments. We mount a RealSense RGB-D camera on the wrist of a 7-DoF Franka robot. Due to the page limit, we report the real-world experiments in the supplementary material. + +§ B. EVALUATING THE PERFORMANCE OF INTERACTIVE PERCEPTION + +In this subsection, we compare the modeling quality before and after the interactive perception operations. Given the initial model of the environment, we verify and modify each part of the predicted URDF with a sequence of interactions and optimize the URDF online after each interaction step. To evaluate the performance, we compute the average precision of instance part segmentation at IoU 0.75 (AP75), joint type classification accuracy (Acc.), joint orientation error (Rot.), and joint translation error (Tran.). Comparing the "init" and "opt" results in Tab. I shows that the pipeline significantly improves the URDF quality after interactive perception. + +TABLE I: Performance of interactive perception ("init" = before, "opt" = after interactive perception).

| Category | AP75 ↑ (init / opt) | Acc. $\uparrow$ (init / opt) | Rot. $\downarrow$ (init / opt) | Tran. $\downarrow$ (init / opt) |
|---|---|---|---|---|
| Door | 0.748 / 0.714 | 0.975 / 0.975 | 13.713 / 2.252 | 15.806 / 3.670 |
| Microwave | 0.800 / 0.939 | 1.000 / 1.000 | 11.562 / 4.230 | 15.439 / 3.788 |
| Box | 0.895 / 0.822 | 0.877 / 0.907 | 10.143 / 4.530 | 15.241 / 7.899 |
| Oven | 0.773 / 0.875 | 1.000 / 1.000 | 23.525 / 6.589 | 18.731 / 10.066 |
| Fridge | 0.584 / 0.814 | 0.989 / 0.955 | 12.174 / 5.331 | 18.000 / 7.820 |
| Storage | 0.499 / 0.519 | 0.924 / 0.949 | 11.564 / 9.831 | 33.510 / 7.404 |
| Overall | 0.717 / 0.781 | 0.961 / 0.964 | 13.780 / 5.461 | 19.455 / 6.775 |

+ +§ C.
EVALUATING THE SAMPLE-EFFICIENCY AND GENERALIZATION + +We compare our proposed approach with two popular model-free RL algorithms, SAC [43] and TD3 [44], on articulated object manipulation tasks. The tasks require the robot to open articulated objects from the six classes. We report results for opening microwaves; results on the other categories are in the supplementary material. + + < g r a p h i c s > + +Fig. 2: (a) Learning curves of SAC and TD3. (b) Comparison of SAC, TD3, and our method in terms of generalization on the microwave category. + +For the model-free RL approaches, we describe the definitions of the states, actions, and reward functions in the supplementary material. Fig. 2(a) shows the average success rate of SAC and TD3 during training. After training for ${700}\mathrm{k}$ steps, the average success rates of the SAC and TD3 models for opening microwaves are about ${80}\%$ and ${70}\%$ , respectively. In contrast, our pipeline achieves a success rate of around ${90}\%$ after only five interactive perception operations and the corresponding training samples. Moreover, the model constructed by our approach can be reused immediately for other related tasks. To evaluate the generalization abilities of the different approaches, we change the articulated object's 6D pose by a 6D offset/disturbance. As shown in Fig. 2(b), with increasing disturbance to the robot end-effector's initial 6D pose, the performance of SAC and TD3 decreases rapidly, especially when manipulating unseen microwaves. The results show that our method performs comparably to the baseline methods under small disturbances and outperforms them under large disturbances and on unseen microwaves, indicating a significantly better generalization ability. + +§ D. EVALUATING THE COMPOSITIONALITY AND INCREMENTALITY + +We put the experiments of this subsection in the supplementary material due to the page limit. + + < g r a p h i c s > + +Fig.
3: Open the microwave, put the mug into it, and close it + +§ V. CONCLUSION + +We present a learning system called SAGCI-System aiming to achieve sample-efficient, generalizable, compositional, and incremental robot learning. Our system first estimates an initial URDF to model the surrounding environment. The URDF is loaded into a learning-augmented differentiable simulation. We leverage interactive perception to correct the URDF online. Based on this model, we propose a new model-based learning approach to generate policies that accomplish various tasks. We apply the system to articulated object manipulation tasks. Extensive experiments in simulation and the real world demonstrate the effectiveness of our proposed learning framework. \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Hdm-mhXpFjv/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Hdm-mhXpFjv/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..0dd8e5bc1cb27be954e2c4fea1f9287e97a33e08 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Hdm-mhXpFjv/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,171 @@ +# Self-Supervised Learning of Multi-Object Keypoints for Robotic Manipulation + +Jan Ole von Hartz*, Eugenio Chisari*, Tim Welschehold, and Abhinav Valada + +Abstract- In recent years, policy learning methods based on either reinforcement or imitation have made significant progress. However, both techniques still suffer from being computationally expensive and requiring large amounts of training data. This problem is especially prevalent in real-world robotic manipulation tasks, where access to ground truth scene features is not available and policies are instead learned from raw camera observations.
In this paper, we demonstrate the efficacy of learning image keypoints via the Dense Correspondence pretext task for downstream policy learning. Extending prior work to challenging multi-object scenes, we show that our model can be trained to deal with important problems in representation learning, primarily scale-invariance and occlusion. We evaluate our approach on diverse robot manipulation tasks, compare it to other visual representation learning approaches, and demonstrate its flexibility and effectiveness for sample-efficient policy learning. + +## I. INTRODUCTION + +Despite major advancements in reinforcement and imitation learning, sample efficiency is still a dominant challenge for both techniques, severely limiting their applicability to robotic manipulation. While learning manipulation policies from raw camera observations is typically computationally expensive and requires large amounts of training data, ground truth scene features are usually not available outside of simulation. This results in a large disparity between the potential promised by the state of the art in, e.g., reinforcement learning research and its practical use in robotic manipulation. Representation learning has been exploited to bridge the gap between training on camera observations and training on ground truth features [1], and over the years, a wide variety of approaches have been proposed. However, not all of them are equally suited to the challenges of policy learning in robotic manipulation. From experience, we offer the following criteria: 1) Meaningfulness, i.e. encoding rich semantic content relevant to the task. 2) Compactness, to enable efficient policy learning from small amounts of data. 3) Invariance to image rotation, scale, and partial occlusions, i.e. being temporally and spatially consistent. 4) Interpretability, to foster trust and safety. 5) Applicability to deformable objects and multi-object scenes. 6) Minimal supervision requirements.
+ +One family of representation learning approaches is pose estimation-based methods [2]-[5]. Although compact, meaningful and easy to interpret, they typically need a $3\mathrm{D}$ model of the object, are not applicable to deformable objects, and do not work well in the presence of occlusions. Image reconstruction-based methods [6]-[8], on the other hand, are applicable to arbitrary scenes, but are hard to interpret and their usefulness for policy learning is limited due to the large discrepancy between pretext and downstream tasks. Interestingly, Dense Object Nets (DON) trained on dense correspondence were found to produce object keypoints suitable for robot manipulation [9]-[11]. Constituting a compact and easy-to-interpret representation, they facilitate efficient policy learning. Notably, they can even generalize across objects. Nevertheless, they have only been studied in settings without occlusions and moving cameras [9]-[12]. Moreover, in the context of policy learning, they have only been demonstrated to work for simple single-object tasks. + +![01963e1b-f569-7084-9561-bcbd5c7136d7_0_911_470_727_245_0.jpg](images/01963e1b-f569-7084-9561-bcbd5c7136d7_0_911_470_727_245_0.jpg) + +Fig. 1: We demonstrate that keypoints can be trained to be scale-invariant and handle occlusions, while tracking multiple objects across time and camera perspectives. + +A major problem in image representation learning is scale-invariance, i.e. the ability to find corresponding image features across views with different distances between the object and the camera. While substantial work has been done to achieve scale-invariance for handcrafted features [13], both work on keypoints [9]-[12] and other prior work [1] have avoided this problem by using fixed overhead cameras. However, this severely limits the utility of this method for diverse applications.
For example, it can neither be used for mobile robots, nor for manipulation tasks where a wrist camera is required for precise alignment of the gripper. + +In this work, we evaluate the feasibility of using keypoints as a representation for policy learning on a new set of simulated tasks including, for the first time, multi-object tasks. The selected tasks introduce a set of new challenges, most importantly, transparency, occlusion, moving cameras (hence the need for scale-invariance) and stereo correspondence. We perform extensive evaluations and demonstrate that DON-based keypoints can be trained to deal with all these challenges, often outperforming other methods. + +In summary, our main contributions are: + +1) We extend Dense Object Nets to deal with scale variance and occlusions, e.g. due to moving cameras. + +2) We demonstrate that keypoints are useful for learning multi-object manipulation tasks, where additional objects are not just clutter, but relevant to the task. + +--- + +*These authors contributed equally. + +Department of Computer Science, University of Freiburg, Germany. + +--- + +3) Finally, comparing it to other representation learning approaches, we identify key challenges and propose potential solutions for future work. + +## II. RELATED WORK + +Representation Learning: A common approach to generate more compact representations of camera observations is by training a neural network model with a bottleneck such as a Variational Auto-Encoder (VAE) for image reconstruction. An example of this approach is $\beta$ -VAE [7], which can be parameterized to favor disentangled representations. MONet [6] first partitions the image into several slots, which are subsequently encoded using multiple VAEs. In both cases, the latent representation at the respective bottleneck(s) serves as the representation for the downstream task.
Similarly, Transporter [8] is trained to reconstruct a source image $I$ from a target image ${I}^{\prime }$ via transporting local features. In [1], these representations have been compared and shown to enable more efficient policy learning on a set of robotic manipulation tasks. Due to their agnosticism toward the scene's content, in principle, all these methods work out of the box with multi-object scenes, occlusions and changes in perspective. + +Keypoints: Keypoints are a set of pixel- or 3D coordinates, usually placed on the task-relevant objects, that are ideally invariant to changes in the camera and object positions. These keypoints can be generated by training an encoder on the dense correspondence pretext task [9]. They are suited for policy learning via behavioral cloning [10] and model-based reinforcement learning (RL) [11]. Subsequent work has shown that keypoints can be learned end-to-end through RL [14]. Moreover, keypoints are able to generalize between instances of the same object class [9] and they are also applicable to deformable objects [10]. + +Multi-Object Scenes: Recent work [12] explores the use of DC-generated keypoints in a multi-object setting. The authors employ a similarity graph between scenes, from which they sample using random walker sampling. To label the similarity between scenes, they leverage a pretrained ResNet [15] and cluster the resulting embeddings. Strikingly, this allows them to retain within-class generalization, while simultaneously distinguishing object classes without additional supervision. However, this method has a number of drawbacks. First, it requires single-object scans of all objects. Second, as the authors noted, it deals poorly with occlusions. Third, their similarity measure requires the computation of a $K$ -means clustering. This both increases the complexity of the method, which scales with the number of training sequences, and leads to further problems when the number of classes is not known.
Much of this stems from attempting to generalize between objects of the same class. Florence et al. [9] identified that their underlying technique can be used for multi-object scenes, but did not provide a way to train the network to distinguish between the individual objects in a multi-object scene. In contrast, our method allows us to collect data from multi-object scenes and to train the DON on them directly. This not only makes data collection much faster and removes the mentioned computational overhead, but also allows us to train the encoder directly for object discrimination and occlusion handling. + +Policy Learning: Another common problem for visual representations is object scaling, i.e. finding correspondences between views of the same object that show it from different distances. Prior work circumvents this problem by either planning a grasp from a fixed height [9], [12] or, for policy learning, capturing the scene using a fixed overhead camera [10], [11]. This restricts the use of DONs, for example, with a moving wrist-camera. In robotic manipulation, however, such a camera might be needed for precise grasping. Similarly, prior work does not address the problem of occlusion. In this work, we demonstrate that DONs can be trained to be invariant to scale as well as to occlusions, while still having the ability to distinguish multiple objects. + +## III. TECHNICAL APPROACH + +Our goal is to extract keypoints from camera observations for efficient policy learning. We achieve this by training a DON in a self-supervised manner, for which we first need to estimate ground truth pixel correspondences between pairs of images. In this section, we first recap the general DON pretraining procedure and keypoint generation. We then describe how to adapt these techniques to the multi-object case and how to achieve scale-invariance. Finally, we detail how we use the keypoints for policy learning. + +## A. 
Correspondence Estimation + +By moving an RGB-D camera in a static scene, we can reconstruct the $3\mathrm{D}$ representation of that scene, e.g. using volumetric reconstruction [16], from which a point cloud can be extracted. After filtering out background points, we can project the point cloud back onto the image plane to generate object masks for all images along the trajectory. For a given pixel position in one image in the trajectory, we can find the corresponding pixel position in another image of the same trajectory via simple 3D projections using the respective camera pose and calibration matrix. + +## B. Dense-Correspondence Pretraining + +Using the aforementioned technique for finding correspondences between pairs of images, we train an encoder network ${e}_{\eta } : {\mathbb{R}}^{H \times W \times 3} \rightarrow {\mathbb{R}}^{H \times W \times D}$ , mapping an RGB image to a $D$-dimensional descriptor, to minimize the descriptor distance between corresponding points, while enforcing at least a margin $M$ between non-corresponding points. In doing so, we utilize a self-supervised pixelwise contrastive loss [17], [18]. Specifically, for a given pair of images ${I}_{a},{I}_{b}$ , we sample $m$ pixel locations ${u}_{a}$ from the object mask of ${I}_{a}$ and compute the corresponding positions ${u}_{b}$ in ${I}_{b}$ . Additionally, for each point $i$ in ${u}_{a}$ we sample $n$ non-corresponding points ${u}_{i}$ from both ${I}_{b}$ ’s object mask and the background. We then compute a gradient update for the encoder as + +$$
\mathcal{L}\left( {{I}_{a},{I}_{b}}\right) = \mathop{\sum }\limits_{{i = 0}}^{m}\left( \frac{{\begin{Vmatrix}{e}_{\eta }{\left( {I}_{a}\right) }_{{u}_{a}^{i}} - {e}_{\eta }{\left( {I}_{b}\right) }_{{u}_{b}^{i}}\end{Vmatrix}}^{2}}{m} + \mathop{\sum }\limits_{{j = 0}}^{n}\frac{\max \left( {0, M - {\begin{Vmatrix}{e}_{\eta }{\left( {I}_{a}\right) }_{{u}_{a}^{i}} - {e}_{\eta }{\left( {I}_{b}\right) }_{{u}_{i}^{j}}\end{Vmatrix}}^{2}}\right) }{n}\right) \text{.} \tag{1}
$$ + +![01963e1b-f569-7084-9561-bcbd5c7136d7_2_149_133_719_234_0.jpg](images/01963e1b-f569-7084-9561-bcbd5c7136d7_2_149_133_719_234_0.jpg) + +Fig. 2: Keypoint generation during policy learning. + +Similar to [9], we use a ResNet50 encoder with stride 8, pretrained on ImageNet, and bilinearly upsample the feature maps back to the full input resolution. We use the Adam [19] optimizer with an initial learning rate of $1\mathrm{e} - 4$ and exponentially decay it by a factor of 0.9 every 25 steps. Additionally, we regularize the training via an L2 penalty of $1\mathrm{e} - 4$ . + +## C. Keypoint Generation + +During policy learning, we freeze the encoder and sample one frame from the set of trajectories that we want the policy to learn. We then either randomly sample reference positions from the relevant object masks or manually select them. The descriptors at these positions of the sampled image serve as the reference descriptors for the model. A camera observation is encoded by feeding it through the frozen encoder and computing the Euclidean distance between each of the reference descriptors and the respective descriptor at all positions in the embedding of the image. Applying a softmax to the negative distance map yields an activation map interpreted as the probability of each pixel location corresponding to the reference position. Unlike prior work [9], [10], we select the keypoint location as the global mode of the activation map, which we found to work better than the expectation in the presence of noise and multimodality. Fig. 2 illustrates this approach. + +To generate 3D keypoints, [10] propose to project the pixel-coordinates into the world frame.
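The keypoint readout just described (Euclidean distance to each reference descriptor, softmax over the negative distance map, global mode) can be sketched in a few lines of numpy. This is a minimal sketch only; the function name, array shapes, and temperature parameter are our assumptions, not the paper's implementation:

```python
import numpy as np

def keypoints_from_descriptors(desc_map, refs, temperature=1.0):
    """Read out keypoints from a dense descriptor image.

    desc_map: (H, W, D) descriptor image produced by the frozen encoder
    refs:     (k, D) reference descriptors picked on the object(s)
    Returns (k, 2) pixel coordinates (row, col) and the (k, H, W) activation maps.
    """
    H, W, D = desc_map.shape
    flat = desc_map.reshape(-1, D)                                        # (H*W, D)
    # Euclidean distance of every pixel's descriptor to each reference.
    dists = np.linalg.norm(flat[None, :, :] - refs[:, None, :], axis=-1)  # (k, H*W)
    # Softmax over the negative distance map -> per-pixel correspondence probability.
    logits = -dists / temperature
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Keypoint = global mode (argmax) of each activation map, not its expectation.
    idx = probs.argmax(axis=1)
    keypoints = np.stack([idx // W, idx % W], axis=-1)
    return keypoints, probs.reshape(-1, H, W)
```

Taking the mode rather than the expectation keeps the keypoint on a single peak when the activation map is multimodal, e.g. under occlusion or object symmetry.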
In contrast, we find that either projecting the pixel-coordinates into the local camera frame, or even just appending the respective depth values to the coordinate vector, besides being simpler, yields a more effective representation for our LSTM policy. This is due to the LSTM being sensitive to the scale of the data. We normalize the pixel coordinates to lie within $\left\lbrack {-1,1}\right\rbrack$ to ease learning the policy. + +## D. Scale-Invariance and Multi-Object Tasks + +To extend DONs to multi-object tasks, we want to pretrain directly on multi-object scenes, such that the data is fast to collect and already contains occlusions. Thus, we need to generate separate masks for each object in the scene. To do so, we can employ volumetric reconstruction and split the resulting point cloud using simple clustering. Projecting these object-wise point clouds back onto the camera planes yields consistent object masks for the trajectory. During one iteration of pretraining, we sample one of the object masks and treat the other object as part of the background, teaching the model to distinguish the two objects. Just sampling an object mask has the added benefit of working with any number of objects; empty masks are simply skipped. Furthermore, we collect an additional set of trajectories, only showing the robot arm, to teach the model not to confuse it with the objects in the scene. + +![01963e1b-f569-7084-9561-bcbd5c7136d7_2_912_139_722_225_0.jpg](images/01963e1b-f569-7084-9561-bcbd5c7136d7_2_912_139_722_225_0.jpg) + +Fig. 3: RLBench Tasks: CloseMicrowave, TakeLidOffSaucePan, PhoneOnBase, PutRubbishInBin + +Similarly, as we found the DON to generalize badly to unseen perspectives such as close-ups, we needed to pretrain the model on perspectives similar to those it would see during policy learning. Note that this is not limited to cases where the change in perspective cuts away necessary context, but applies to changes in distance between object and camera in general.
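Stepping back to the pretraining objective of Sec. III-B, the pixelwise contrastive loss of Eq. (1) can be sketched with an explicit per-sample loop. Shapes, names, and the margin value below are our assumptions; a real implementation would vectorize this and backpropagate through the encoder:

```python
import numpy as np

def pixelwise_contrastive_loss(desc_a, desc_b, u_a, u_b, u_neg, margin=0.5):
    """Sketch of Eq. (1): pull matching descriptors together, push non-matches apart.

    desc_a, desc_b: (H, W, D) descriptor images e(I_a), e(I_b)
    u_a, u_b:       (m, 2) corresponding pixel coordinates in I_a and I_b
    u_neg:          (m, n, 2) non-corresponding pixels in I_b for each match
    """
    m, n = len(u_a), u_neg.shape[1]
    loss = 0.0
    for i in range(m):
        fa = desc_a[u_a[i, 0], u_a[i, 1]]
        fb = desc_b[u_b[i, 0], u_b[i, 1]]
        # Match term: squared descriptor distance, averaged over the m matches.
        loss += np.sum((fa - fb) ** 2) / m
        for j in range(n):
            fn = desc_b[u_neg[i, j, 0], u_neg[i, j, 1]]
            # Non-match term: hinge enforcing at least `margin` separation.
            loss += max(0.0, margin - np.sum((fa - fn) ** 2)) / n
    return loss
```

The hinge term is zero once a non-match is already more than the margin away, so only hard negatives contribute gradient.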
In contrast, having these different perspectives in the training data teaches the model to generate a scale-invariant representation. In our experience, larger descriptor dimensions enable training the encoder on more perspectives without loss in quality. Yet, we find it important to normalize the descriptor distances in pretraining by the square root of the descriptor dimension. During policy learning, we sample an equal number of reference positions from all the object masks. + +## E. Imitation Learning + +We follow [20] in the setup of our experiments, with the action space being constituted by the change in the robot's end-effector pose. Using an LSTM, we predict the mean of a Gaussian action distribution with fixed variance, i.e. ${\pi }_{\theta }\left( {a \mid s}\right) = \mathcal{N}\left( {{f}_{\theta }\left( s\right) ;{\sigma }^{2}}\right)$ . The variance is set to correspond to $1\mathrm{\;{mm}}$ for the translational and 0.25 degrees for the rotational components of the action. For the observation, we concatenate the visual representation with the robot's current joint angles or the end-effector pose. Across all trajectories contained in a batch and their time steps, we minimize the negative log-likelihood of the predicted action distribution as + +$$
\mathcal{L}\left( {s, a}\right) = - q\log \left( {{\pi }_{\theta }\left( {a \mid s}\right) }\right) . \tag{2}
$$ + +We again train the model using the Adam optimizer with a learning rate of $3\mathrm{e} - 4$ and an L2 penalty of $3\mathrm{e} - 6$ . + +## IV. EXPERIMENTAL EVALUATIONS + +We evaluate the utility of different representations for policy learning using RLBench [21], a suite of realistic manipulation tasks using everyday objects. In this framework, between instances of the same task, the objects are placed randomly in the scene.
We select two single-object tasks (CloseMicrowave, TakeLidOffSaucePan) and two multi-object tasks (PhoneOnBase, PutRubbishInBin), and perform all of them with the model of a Franka Emika Panda robot (see Fig. 3). These tasks pose different challenges. In the CloseMicrowave task, we confront the models with an object that changes its shape and has very different appearance across a trajectory, while TakeLidOffSaucePan features high object symmetry and transparency. The PhoneOnBase task requires careful alignment of the gripper and the PutRubbishInBin task adds visual clutter. The multi-object tasks further introduce occlusions and the need to track multiple objects. For the single-object tasks, we train the policy on 14 expert demonstrations, using a wrist-mounted camera with ${256} \times {256}$ pixels, while for the multi-object tasks we provide 140 demonstrations and use a stereo setup of overhead and wrist cameras (with identical resolutions). The overhead camera provides an overview of the scene, while the wrist camera facilitates alignment of the gripper with the objects and requires scale-invariance from the encoders. Both camera observations are encoded independently with the same encoder. + +TABLE I: Success rates of the learned policies. + +
| Method | Microwave | Lid | Phone | Rubbish |
| --- | --- | --- | --- | --- |
| CNN | 0.615 | 0.315 | 0.420 | 0.245 |
| CNND | 0.560 | 0.180 | - | - |
| $\beta$-VAE [7] | 0.110 | 0.000 | 0.005 | 0.000 |
| Transporter [8] | 0.035 | 0.075 | 0.000 | 0.005 |
| MONet [6] | 0.785 | 0.875 | 0.385 | 0.760 |
| DC KP 2D | 0.805 | 0.280 | - | - |
| DC KP 3D | 0.935 | 0.800 | 0.640 | 0.590 |
| GT KP | 0.875 | 0.990 | 0.720 | 0.885 |
+ +Besides the three pretrained representation learning methods $\beta$ -VAE [7], Transporter [8] and MONet [6], we further compare against an end-to-end optimized Convolutional Neural Network (CNN) and a variant that has access to the camera's depth values (CNND). To disentangle the effects of representation and policy learning, we also add a ground truth keypoints model (GT-KP). We train all the policies for 1000 steps for the single-object tasks and 1500 steps for the multi-object tasks. We then evaluate all the policies in the respective task environments for 200 episodes. + +## A. Single-Object Tasks + +From the results shown in Tab. I, we observe that both $\beta$ -VAE and Transporter learn representations that are unsuited for the tasks at hand. Transporter in particular is designed for scenarios with a top-down view on the scene and well-separated local features. Although the CNNs achieve reasonable policy success on CloseMicrowave, they are vastly outperformed by both MONet and keypoints. Note that in TakeLidOffSaucePan, there is little information in the object's pixel position; most of the information is instead in the camera depth. This can account for the performance gap between the $2\mathrm{D}$ and $3\mathrm{D}$ keypoints in Tab. I. Therefore, we drop the 2D keypoints and the underperforming CNND from the comparisons for the more difficult tasks. + +While MONet manages to outperform the learned keypoints on TakeLidOffSaucePan, the GT-KP model still outperforms it by a large margin, indicating that keypoints are the more effective representation, although the current implementation still has room for improvement. MONet's strong performance on TakeLidOffSaucePan is due to its partitioning of the lid into parts. It considers the dark parts (handle and knob) as one entity, thus enabling the policy to learn a reliable grasp, and incidentally ignoring the difficulties entailed by the transparency of the rest of the lid. + +## B. 
Multi-Object Tasks + +In PhoneOnBase, many task instances are so difficult that even the human pilot providing the demonstrations could solve only ${84}\%$ of them. Even among the remaining instances, a sizable fraction requires a more complex trajectory than the rest, such that the ground truth model's success rate of ${72}\%$ is close to what can be expected to be achieved by behavioral cloning. While the learned keypoints come very close to this upper bound on PhoneOnBase, the gap is larger on PutRubbishInBin due to the additional visual clutter and complex shape of the bin. MONet tends to conflate the robot arm with the scene object, making it perform poorly on PhoneOnBase, where precise alignment is critical. On PutRubbishInBin, where precision is less crucial, it outperforms the learned keypoints. + +## V. CONCLUSIONS + +Keypoints allow for efficient policy learning. Not only are they a compact representation that still encodes the task-relevant information, but if properly normalized, they are easy to learn from. Compared to other methods, this yields a more effective downstream policy when training data is scarce but precision is still required. Unlike CNNs and representations such as MONet, they can easily incorporate depth information, making them a powerful choice for robotic manipulation. Moreover, the method can be extended to new challenges in a straightforward manner. This includes, but is not limited to, multi-object tasks, occlusions, and changing object scale. While it needs to be specifically trained to handle these problems, doing so is intuitive, making the method a flexible approach. Compared to, e.g., MONet, keypoints generalize less well to unseen camera perspectives and need to be specifically trained for new challenges, whereas MONet is more robust and generalizes better out of the box. Even with the improved pretraining, keypoints remain significantly more noisy for the moving wrist camera than for the stable overhead camera.
+ +The descriptor distance provides an intuitive measure of the encoder's certainty that can be used in multiple ways, e.g. to ignore or resample uncertain keypoints to better deal with occlusions. Leveraging this fact will help to make the encoding less noisy. Moreover, using additional techniques such as random crop [22] in DC pretraining can help reduce the noise further. A more fundamental challenge lies in object symmetry when not enough context is provided to uniquely identify positions on the object. This is especially prevalent for the wrist camera, e.g. with the lid or phone receiver. One way to remedy this would be to introduce a memory component or extend the current camera observation by previously seen frames to add context. Finally, pretraining a Dense Object Net on a large set of different objects can enable the method to generalize beyond different instances of the same class. + +## REFERENCES + +[1] M. Wulfmeier, A. Byravan, T. Hertweck, I. Higgins, A. Gupta, T. Kulkarni, M. Reynolds, D. Teplyashin, R. Hafner, T. Lampe, et al., "Representation matters: Improving perception and exploration for robotics," in IEEE International Conference on Robotics and Automation, 2021, pp. 6512-6519. + +[2] R. B. Rusu and S. Cousins, "3d is here: Point cloud library (pcl)," in IEEE International Conference on Robotics and Automation, 2011, pp. 1-4. + +[3] X. Deng, A. Mousavian, Y. Xiang, F. Xia, T. Bretl, and D. Fox, "Poserbpf: A rao-blackwellized particle filter for 6d object pose estimation," in Robotics: Science and Systems, 2019. + +[4] C. Wang, D. Xu, Y. Zhu, R. Martín-Martín, C. Lu, L. Fei-Fei, and S. Savarese, "Densefusion: 6d object pose estimation by iterative dense fusion," in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 3343-3352. + +[5] N. A. Piga, F. Bottarel, C. Fantacci, G. Vezzani, U. Pattacini, and L. 
Natale, "Maskukf: An instance segmentation aided unscented kalman filter for 6d object pose and velocity tracking," Frontiers in Robotics and AI, vol. 8, p. 38, 2021. + +[6] C. P. Burgess, L. Matthey, N. Watters, R. Kabra, I. Higgins, M. Botvinick, and A. Lerchner, "Monet: Unsupervised scene decomposition and representation," arXiv preprint arXiv:1901.11390, 2019. + +[7] I. Higgins, L. Matthey, A. Pal, C. P. Burgess, X. Glorot, M. M. Botvinick, S. Mohamed, and A. Lerchner, "beta-vae: Learning basic visual concepts with a constrained variational framework," in ICLR, 2017. + +[8] T. D. Kulkarni, A. Gupta, C. Ionescu, S. Borgeaud, M. Reynolds, A. Zisserman, and V. Mnih, "Unsupervised learning of object keypoints for perception and control," Advances in neural information processing systems, vol. 32, 2019. + +[9] P. Florence, L. Manuelli, and R. Tedrake, "Dense object nets: Learning dense visual object descriptors by and for robotic manipulation," Conference on Robot Learning, 2018. + +[10] ——, “Self-supervised correspondence in visuomotor policy learning,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 492-499, 2019. + +[11] L. Manuelli, Y. Li, P. Florence, and R. Tedrake, "Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning," in Conference on Robot Learning, 2021, pp. 693-710. + +[12] D. Hadjivelichkov and D. Kanoulas, "Fully self-supervised class awareness in dense object descriptors," in Conference on Robot Learning, 2022, pp. 1522-1531. + +[13] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International journal of computer vision, vol. 60, no. 2, pp. 91-110, 2004. + +[14] B. Chen, P. Abbeel, and D. Pathak, "Unsupervised learning of visual 3d keypoints for control," in International Conference on Machine Learning, 2021, pp. 1539-1549. + +[15] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CoRR, vol. abs/1512.03385, 2015. [Online]. 
Available: http://arxiv.org/abs/1512.03385 + +[16] B. Curless and M. Levoy, "A volumetric method for building complex models from range images," in Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, 1996, pp. 303-312. + +[17] C. B. Choy, J. Gwak, S. Savarese, and M. Chandraker, "Universal correspondence network," Advances in neural information processing systems, vol. 29, 2016. + +[18] T. Schmidt, R. Newcombe, and D. Fox, "Self-supervised visual descriptor learning for dense correspondence," IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 420-427, 2016. + +[19] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015. + +[20] E. Chisari, T. Welschehold, J. Boedecker, W. Burgard, and A. Valada, "Correct me if i am wrong: Interactive learning for robotic manipulation," IEEE Robotics and Automation Letters, 2022. + +[21] S. James, Z. Ma, D. Rovick Arrojo, and A. J. Davison, "Rlbench: The robot learning benchmark & learning environment," IEEE Robotics and Automation Letters, 2020. + +[22] M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas, "Reinforcement learning with augmented data," arXiv preprint arXiv:2004.14990, 2020. 
\ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Hdm-mhXpFjv/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Hdm-mhXpFjv/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..f309833e38605d436120708827b9b012b681285b --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/Hdm-mhXpFjv/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,147 @@ +§ SELF-SUPERVISED LEARNING OF MULTI-OBJECT KEYPOINTS FOR ROBOTIC MANIPULATION + +Jan Ole von Hartz*, Eugenio Chisari*, Tim Welschehold, and Abhinav Valada + +Abstract- In recent years, policy learning methods using either reinforcement or imitation have made significant progress. However, both techniques still suffer from being computationally expensive and requiring large amounts of training data. This problem is especially prevalent in real-world robotic manipulation tasks, where access to ground truth scene features is not available and policies are instead learned from raw camera observations. In this paper, we demonstrate the efficacy of learning image keypoints via the Dense Correspondence pretext task for downstream policy learning. Extending prior work to challenging multi-object scenes, we show that our model can be trained to deal with important problems in representation learning, primarily scale-invariance and occlusion. We evaluate our approach on diverse robot manipulation tasks, compare it to other visual representation learning approaches, and demonstrate its flexibility and effectiveness for sample-efficient policy learning. + +§ I. INTRODUCTION + +Despite major advancements in reinforcement and imitation learning, sample efficiency is still a dominant challenge for both techniques, severely limiting their applicability to robotic manipulation. 
While learning manipulation policies from raw camera observations is typically computationally expensive and requires large amounts of training data, ground truth scene features are usually not available outside of simulation. This results in a large disparity between the potential promised by the state of the art in e.g. reinforcement learning research and its practical use in robotic manipulation. Representation learning has been exploited to bridge the gap between training on camera observations and training on ground truth features [1], and over the years, a wide variety of approaches have been proposed. However, not all of them are equally suited for the challenges of policy learning in robotic manipulation. From experience, we offer the following criteria: 1) Meaningfulness, i.e. encoding rich semantic content relevant to the task. 2) Compactness, to enable efficient policy learning from small amounts of data. 3) Invariance to image rotation, scale and partial occlusions, i.e. being temporally and spatially consistent. 4) Interpretability, to foster trust and safety. 5) Applicability to deformable objects and multi-object scenes. 6) Minimal supervision requirements. + +One family of representation learning approaches is pose estimation-based methods [2]-[5]. Although compact, meaningful and easy to interpret, they typically need a $3\mathrm{D}$ model of the object, are not applicable to deformable objects, and do not work well in the presence of occlusions. Image reconstruction-based methods [6]-[8], on the other hand, are applicable to arbitrary scenes, but are hard to interpret and their usefulness for policy learning is limited due to the large discrepancy between pretext and downstream tasks. Interestingly, Dense Object Nets (DON) trained on dense correspondence were found to produce object keypoints suitable for robot manipulation [9]-[11]. Constituting a compact and easy-to-interpret representation, they facilitate efficient policy learning.
Notably, they can even generalize across objects. Nevertheless, they have only been studied in settings without occlusions and moving cameras [9]-[12]. Moreover, in the context of policy learning, they have only been demonstrated to work for simple single-object tasks. + +Fig. 1: We demonstrate that keypoints can be trained to be scale-invariant and handle occlusions, while tracking multiple objects across time and camera perspectives. + +A major problem in image representation learning is scale-invariance, i.e. the ability to find corresponding image features across views with different distances between the object and the camera. While substantial work has been done to achieve scale-invariance for handcrafted features [13], both work on keypoints [9]-[12] and other prior work [1] have avoided this problem by using fixed overhead cameras. However, this severely limits the utility of this method for diverse applications. For example, it can neither be used for mobile robots, nor for manipulation tasks where a wrist camera is required for precise alignment of the gripper. + +In this work, we evaluate the feasibility of using keypoints as a representation for policy learning on a new set of simulated tasks including, for the first time, multi-object tasks. The selected tasks introduce a set of new challenges, most importantly, transparency, occlusion, moving cameras (hence the need for scale-invariance) and stereo correspondence. We perform extensive evaluations and demonstrate that DON-based keypoints can be trained to deal with all these challenges, often outperforming other methods. + +In summary, our main contributions are: + +1) We extend Dense Object Nets to deal with scale variance and occlusions, e.g. due to moving cameras. + +2) We demonstrate that keypoints are useful for learning multi-object manipulation tasks, where additional objects are not just clutter, but relevant to the task.
+ +*These authors contributed equally. + +Department of Computer Science, University of Freiburg, Germany. + +3) Finally, comparing it to other representation learning approaches, we identify key challenges and propose potential solutions for future work. + +§ II. RELATED WORK + +Representation Learning: A common approach to generate more compact representations of camera observations is by training a neural network model with a bottleneck such as a Variational Auto-Encoder (VAE) for image reconstruction. An example of this approach is $\beta$ -VAE [7], which can be parameterized to favor disentangled representations. MONet [6] first partitions the image into several slots, which are subsequently encoded using multiple VAEs. In both cases, the latent representation at the respective bottleneck(s) serves as the representation for the downstream task. Similarly, Transporter [8] is trained to reconstruct a source image $I$ from a target image ${I}^{\prime }$ via transporting local features. In [1], these representations have been compared and shown to enable more efficient policy learning on a set of robotic manipulation tasks. Due to their agnosticism toward the scene's content, in principle, all these methods work out of the box with multi-object scenes, occlusions and changes in perspective. + +Keypoints: Keypoints are a set of pixel- or 3D coordinates, usually placed on the task-relevant objects, that are ideally invariant to changes in the camera and object positions. These keypoints can be generated by training an encoder on the dense correspondence pretext task [9]. They are suited for policy learning via behavioral cloning [10] and model-based reinforcement learning (RL) [11]. Subsequent work has shown that keypoints can be learned end-to-end through RL [14]. Moreover, keypoints are able to generalize between instances of the same object class [9] and they are also applicable to deformable objects [10].
Multi-Object Scenes: Recent work [12] explores the use of DC-generated keypoints in a multi-object setting. The authors employ a similarity graph between scenes, from which they sample using random-walker sampling. To label the similarity between scenes, they leverage a pretrained ResNet [15] and cluster the resulting embeddings. Strikingly, this allows them to retain within-class generalization, while simultaneously distinguishing object classes without additional supervision. However, this method has a number of drawbacks. First, it requires single-object scans of all objects. Second, as the authors noted, it deals poorly with occlusions. Third, their similarity measure requires the computation of a $K$ -means clustering. This both increases the complexity of the method, which scales with the number of training sequences, and causes further problems when the number of classes is not known. Much of this stems from attempting to generalize between objects of the same class. Florence et al. [9] identified that their underlying technique can be used for multi-object scenes, but did not provide a way to train the network to distinguish between the individual objects in a multi-object scene. In contrast, our method allows collecting data from multi-object scenes and training the DON on it directly. This not only makes data collection much faster and removes the mentioned computational overhead, but also allows training the encoder directly for object discrimination and occlusions.

Policy Learning: Another common problem for visual representations is object scaling, i.e. finding correspondences between views of the same object that show it from different distances. Prior work circumvents this problem by either planning a grasp from a fixed height [9], [12] or, for policy learning, capturing the scene using a fixed overhead camera [10], [11]. This restricts the use of DONs, for example, with a moving wrist camera.
In robotic manipulation, however, such a camera might be needed for precise grasping. Similarly, prior work does not address the problem of occlusion. In this work, we demonstrate that DONs can be trained to be invariant to scale as well as to occlusions, while still having the ability to distinguish multiple objects.

§ III. TECHNICAL APPROACH

Our goal is to extract keypoints from camera observations for efficient policy learning. We achieve this by training a DON in a self-supervised manner, for which we first need to estimate ground-truth pixel correspondences between pairs of images. In this section, we first recap the general DON pretraining procedure and keypoint generation. We then describe how to adapt these techniques to the multi-object case and how to achieve scale-invariance. Finally, we detail how we use the keypoints for policy learning.

§ A. CORRESPONDENCE ESTIMATION

By moving an RGB-D camera in a static scene, we can reconstruct the 3D representation of that scene, e.g. using volumetric reconstruction [16], from which a point cloud can be extracted. After filtering out background points, we can project the point cloud back onto the image plane to generate object masks for all images along the trajectory. For a given pixel position in one image in the trajectory, we can find the corresponding pixel position in another image of the same trajectory via simple 3D projections using the respective camera pose and calibration matrix.

§ B. DENSE-CORRESPONDENCE PRETRAINING

Using the aforementioned technique for finding correspondences between pairs of images, we train an encoder network ${e}_{\eta } : {\mathbb{R}}^{H \times W \times 3} \rightarrow {\mathbb{R}}^{H \times W \times D}$ , mapping an RGB image to $D$ -dimensional per-pixel descriptors, to minimize the descriptor distance between corresponding points, while enforcing at least a margin $M$ between non-corresponding points.
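The correspondence lookup in § A boils down to a depth-based reprojection between camera poses. A minimal sketch under a pinhole model (intrinsics `K` and world-from-camera poses `T_a`, `T_b` assumed given; all names are illustrative, not the authors' code):

```python
import numpy as np

def reproject(u_a, depth_a, K, T_a, T_b):
    """Map pixel u_a = (x, y) with depth in camera a to a pixel in camera b.

    K: 3x3 intrinsics; T_a, T_b: 4x4 world-from-camera poses.
    """
    # Back-project the pixel into camera a's frame.
    p_cam_a = depth_a * np.linalg.inv(K) @ np.array([u_a[0], u_a[1], 1.0])
    # Move the 3D point into the world frame, then into camera b's frame.
    p_world = T_a @ np.append(p_cam_a, 1.0)
    p_cam_b = np.linalg.inv(T_b) @ p_world
    # Project onto camera b's image plane.
    uvw = K @ p_cam_b[:3]
    return uvw[:2] / uvw[2]

# Sanity check: identical poses map a pixel onto itself.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
I4 = np.eye(4)
u_b = reproject((10.0, 20.0), 1.5, K, I4, I4)  # -> approx (10, 20)
```

In practice the reprojected pixel is additionally checked against the depth map of the target view to reject points that are occluded there.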
In doing so, we utilize a self-supervised pixelwise contrastive loss [17], [18]. Specifically, for a given pair of images ${I}_{a},{I}_{b}$ , we sample $m$ pixel locations ${u}_{a}$ from the object mask of ${I}_{a}$ and compute the corresponding positions ${u}_{b}$ in ${I}_{b}$ . Additionally, for each point $i$ in ${u}_{a}$ we sample $n$ non-corresponding points ${u}_{i}$ from both ${I}_{b}$ 's object mask and the background. We then compute a gradient update for the encoder as

$$
\mathcal{L}\left( {{I}_{a},{I}_{b}}\right) = \mathop{\sum }\limits_{{i = 0}}^{m}\left( \frac{{\begin{Vmatrix}{e}_{\eta }{\left( {I}_{a}\right) }_{{u}_{a}^{i}} - {e}_{\eta }{\left( {I}_{b}\right) }_{{u}_{b}^{i}}\end{Vmatrix}}^{2}}{m} + \mathop{\sum }\limits_{{j = 0}}^{n}\frac{\max \left( {0,M - {\begin{Vmatrix}{e}_{\eta }{\left( {I}_{a}\right) }_{{u}_{a}^{i}} - {e}_{\eta }{\left( {I}_{b}\right) }_{{u}_{i}^{j}}\end{Vmatrix}}^{2}}\right) }{n}\right) \text{ . } \tag{1}
$$

Fig. 2: Keypoint generation during policy learning.

Similar to [9], we use a ResNet50 encoder with stride 8, pretrained on ImageNet, and bilinearly upsample the feature maps back to the full input resolution. We use the Adam [19] optimizer with an initial learning rate of $1\mathrm{e}{-4}$ , decayed exponentially by a factor of 0.9 every 25 steps. Additionally, we regularize the training via an L2 penalty of $1\mathrm{e}{-4}$ .

§ C. KEYPOINT GENERATION

During policy learning, we freeze the encoder and sample one frame from the set of trajectories that we want the policy to learn. We then either randomly sample reference positions from the relevant object masks or manually select them. The descriptors at these positions of the sampled image serve as the reference descriptors for the model.
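For reference, the contrastive loss of Eq. (1) can be sketched in plain numpy on already-extracted descriptors (shapes and names illustrative; a real implementation would backpropagate through the encoder):

```python
import numpy as np

def contrastive_loss(d_a, d_b_match, d_b_nonmatch, M=0.5):
    """Eq. (1): d_a and d_b_match are (m, D) matched descriptor pairs;
    d_b_nonmatch is (m, n, D) non-matches per anchor; M is the margin."""
    m = d_a.shape[0]
    # Pull matched descriptors together (squared distance, averaged over m).
    match_term = np.sum((d_a - d_b_match) ** 2, axis=1) / m
    # Push non-matches at least a margin M apart (hinge, averaged over n).
    sq = np.sum((d_a[:, None, :] - d_b_nonmatch) ** 2, axis=2)
    nonmatch_term = np.maximum(0.0, M - sq).mean(axis=1)
    return float(np.sum(match_term + nonmatch_term))

rng = np.random.default_rng(0)
d = rng.normal(size=(8, 3))
# Identical matches and far-away non-matches give zero loss.
loss = contrastive_loss(d, d, d[:, None, :] + 10.0, M=0.5)  # -> 0.0
```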
A camera observation is encoded by feeding it through the frozen encoder and computing the Euclidean distance between each of the reference descriptors and the respective descriptor at all positions in the embedding of the image. Applying a softmax to the negative distance map yields an activation map interpreted as the probability of each pixel location corresponding to the reference position. Unlike prior work [9], [10], we select the keypoint location as the global mode of the activation map, which we found to work better than the expectation in the presence of noise and multimodality. Fig. 2 illustrates this approach.

To generate 3D keypoints, [10] propose to project the pixel coordinates into the world frame. In contrast, we find that either projecting the pixel coordinates into the local camera frame, or even just appending the respective depth values to the coordinate vector, besides being simpler, yields a more effective representation for our LSTM policy. This is due to the LSTM being sensitive to the scale of the data. We normalize the pixel coordinates to lie within $\left\lbrack {-1,1}\right\rbrack$ to ease learning the policy.

§ D. SCALE-INVARIANCE AND MULTI-OBJECT TASKS

To extend DONs to multi-object tasks, we want to pretrain directly on multi-object scenes, such that the data is fast to collect and already contains occlusions. Thus, we need to generate separate masks for each object in the scene. To do so, we can employ volumetric reconstruction and split the resulting point cloud using simple clustering. Projecting these object-wise point clouds back onto the camera planes yields consistent object masks for the trajectory. During one iteration of pretraining, we sample one of the object masks and treat the other object as part of the background, teaching the model to distinguish the two objects. Sampling an object mask in this way has the added benefit of working with any number of objects, since empty masks are simply skipped.
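A hypothetical sketch of this mask generation, assuming the clustering step has already assigned an object label to every projected point (all names illustrative):

```python
import numpy as np

def object_masks(pixels, labels, shape):
    """Turn a clustered, projected point cloud into per-object binary masks.

    pixels: (N, 2) integer (x, y) pixel coordinates of the points in this view;
    labels: (N,) cluster id per point, e.g. from a clustering step.
    """
    masks = {}
    for lab in np.unique(labels):
        mask = np.zeros(shape, dtype=bool)
        pts = pixels[labels == lab]
        mask[pts[:, 1], pts[:, 0]] = True  # row = y, col = x
        masks[lab] = mask
    return masks

def sample_mask(masks, rng):
    """Pick one object's mask at random, skipping empty ones."""
    nonempty = [m for m in masks.values() if m.any()]
    return nonempty[rng.integers(len(nonempty))]

pixels = np.array([[0, 0], [1, 0], [3, 2]])
labels = np.array([0, 0, 1])          # two objects in this toy view
masks = object_masks(pixels, labels, (4, 4))
chosen = sample_mask(masks, np.random.default_rng(0))
```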
Furthermore, we collect an additional set of trajectories, showing only the robot arm, to teach the model not to confuse it with the objects in the scene.

Fig. 3: RLBench Tasks: CloseMicrowave, TakeLidOffSaucePan, PhoneOnBase, PutRubbishInBin

Similarly, as we found the DON to generalize badly to unseen perspectives such as close-ups, we needed to pretrain the model on perspectives similar to those it would see during policy learning. Note that this is not limited to cases where the change in perspective cuts away necessary context, but applies to changes in distance between object and camera in general. In contrast, having these different perspectives in the training data teaches the model to generate a scale-invariant representation. In our experience, larger descriptor dimensions enable training the encoder on more perspectives without loss in quality. Yet, we find it important to normalize the descriptor distances in pretraining by the square root of the descriptor dimension. During policy learning, we sample an equal number of reference positions from all the object masks.

§ E. IMITATION LEARNING

We follow [20] in the setup of our experiments, with the action space being constituted by the change in the robot's end-effector pose. Using an LSTM, we predict the mean of a Gaussian action distribution with fixed variance, i.e. ${\pi }_{\theta } \sim \mathcal{N}\left( {{f}_{\theta }\left( s\right) ;{\sigma }^{2}}\right)$ . The variance is set to correspond to $1\mathrm{\;{mm}}$ for the translational and 0.25 degrees for the rotational components of the action. For the observation, we concatenate the visual representation with the robot's current joint angles or the end-effector pose. Across all trajectories contained in a batch and their time steps, we minimize the negative log-likelihood of the predicted action distribution as

$$
\mathcal{L}\left( {s,a}\right) = - \log \left( {{\pi }_{\theta }\left( {a \mid s}\right) }\right) .
\tag{2}
$$

We again train the model using the Adam optimizer with a learning rate of $3\mathrm{e}{-4}$ and an L2 penalty of $3\mathrm{e}{-6}$ .

§ IV. EXPERIMENTAL EVALUATIONS

We evaluate the utility of different representations for policy learning using RLBench [21], a suite of realistic manipulation tasks using everyday objects. In this framework, the objects are placed randomly in the scene between instances of the same task. We select two single-object tasks (CloseMicrowave, TakeLidOffSaucePan) and two multi-object tasks (PhoneOnBase, PutRubbishInBin), and perform all of them with the model of a Franka Emika Panda robot, see Fig. 3. These tasks pose different challenges. In the CloseMicrowave task, we confront the models with an object that changes its shape and has very different appearance across a trajectory, and in TakeLidOffSaucePan there is high object symmetry and transparency. The PhoneOnBase task requires careful alignment of the gripper, and the PutRubbishInBin task adds visual clutter. The multi-object tasks further introduce occlusions and the need to track multiple objects. For the single-object tasks, we train the policy on 14 expert demonstrations, using a wrist-mounted camera with ${256} \times {256}$ pixels, while for the multi-object tasks we provide 140 demonstrations and use a stereo setup of an overhead and a wrist camera (with identical resolutions). The overhead camera provides an overview of the scene, while the wrist camera facilitates alignment of the gripper with the objects and requires scale-invariance from the encoders. Both camera observations are encoded independently with the same encoder.

TABLE I: Success rates of the learned policies.
| Method | Microwave | Lid | Phone | Rubbish |
| --- | --- | --- | --- | --- |
| CNN | 0.615 | 0.315 | 0.420 | 0.245 |
| CNND | 0.560 | 0.180 | - | - |
| $\beta$ -VAE [7] | 0.110 | 0.000 | 0.005 | 0.000 |
| Transporter [8] | 0.035 | 0.075 | 0.000 | 0.005 |
| MONet [6] | 0.785 | 0.875 | 0.385 | 0.760 |
| DC KP 2D | 0.805 | 0.280 | - | - |
| DC KP 3D | 0.935 | 0.800 | 0.640 | 0.590 |
| GT KP | 0.875 | 0.990 | 0.720 | 0.885 |

Besides the three pretrained representation learning methods $\beta$ -VAE [7], Transporter [8] and MONet [6], we further compare against an end-to-end optimized Convolutional Neural Network (CNN) and a variant that has access to the camera's depth values (CNND). To disentangle the effects of representation and policy learning, we also add a ground-truth keypoints model (GT KP). We train all the policies for 1000 steps for the single-object tasks and 1500 steps for the multi-object tasks. We then evaluate all the policies in the respective task environments for 200 episodes.

§ A. SINGLE-OBJECT TASKS

From the results shown in Tab. I, we observe that both $\beta$ -VAE and Transporter learn representations that are unsuited for the tasks at hand. Transporter in particular is designed for scenarios with a top-down view of the scene and well-separated local features. Although the CNNs achieve reasonable policy success on CloseMicrowave, they are vastly outperformed by both MONet and keypoints. Note that in TakeLidOffSaucePan, there is little information in the object's pixel position; instead, most of the information is in the camera depth. This can account for the performance gap between the 2D and 3D keypoints in Tab. I. Therefore, we drop the 2D keypoints and the underperforming CNND from the comparisons for the more difficult tasks.
While MONet manages to outperform the learned keypoints on TakeLidOffSaucePan, the GT KP model still outperforms it by a large margin, indicating that keypoints are the more effective representation, although the current implementation still has room for improvement. MONet's strong performance on TakeLidOffSaucePan is due to it partitioning the lid into its parts. It considers the dark parts (handle and knob) as one entity, thus enabling the policy to learn a sure grasp, and incidentally ignoring the difficulties entailed by the transparency of the rest of the lid.

§ B. MULTI-OBJECT TASKS

In PhoneOnBase, many task instances are so difficult that even the human pilot providing the demonstrations could only solve ${84}\%$ of them. Among the remaining instances, a sizable fraction requires a more complex trajectory than the rest, such that the ground-truth model's success rate of ${72}\%$ is close to what can be expected from behavioral cloning (BC). While the learned keypoints come very close to this upper bound on PhoneOnBase, the gap is larger on PutRubbishInBin due to the additional visual clutter and the complex shape of the bin. MONet tends to conflate the robot arm with the scene object, making it perform poorly on PhoneOnBase, where precise alignment is critical. On PutRubbishInBin, where precision is less crucial, it outperforms the learned keypoints.

§ V. CONCLUSIONS

Keypoints allow for efficient policy learning. Not only are they a compact representation that still encodes the task-relevant information, but, if properly normalized, they are also easy to learn from. Compared to other methods, this allows higher efficacy of the downstream policy when training data is scarce but precision is still required. Unlike CNNs and representations such as MONet, they can easily incorporate depth information, making them a powerful choice for robotic manipulation. Moreover, the method can be extended to new challenges in a straightforward manner.
This includes, but is not limited to, multi-object tasks, occlusions, and changing object scale. While the method needs to be specifically trained to handle these problems, doing so is intuitive, making it a flexible approach. Compared to, e.g., MONet, keypoints generalize less well to unseen camera perspectives and need to be specifically trained for new challenges, whereas MONet is more robust and generalizes better out of the box. Even with the improved pretraining, keypoints remain significantly noisier for the moving wrist camera than for the stable overhead camera.

The descriptor distance provides an intuitive measure of the encoder's certainty that can be used in multiple ways, e.g. to ignore or resample uncertain keypoints to better deal with occlusions. Leveraging this fact will help to make the encoding less noisy. Moreover, using additional techniques such as random crops [22] in DC pretraining can help reduce the noise further. A more fundamental challenge lies in object symmetry when not enough context is provided to uniquely identify positions on the object. This is especially prevalent for the wrist camera, e.g. with the lid or the phone receiver. One way to remedy this would be to introduce a memory component or to extend the current camera observation with previously seen frames to add context. Finally, pretraining a Dense Object Net on a large set of different objects could enable the method to generalize beyond different instances of the same class.
\ No newline at end of file
diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/e1UbzlmDUI/Initial_manuscript_md/Initial_manuscript.md b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/e1UbzlmDUI/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..16f30499fca2f320f42e4068056e4044311acd66
--- /dev/null
+++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/e1UbzlmDUI/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,191 @@

# Learning Dense Reward with Temporal Variant Self-Supervision

Yuning Wu ${}^{1,2}$ , Jieliang Luo ${}^{2}$ , Hui Li ${}^{2}$

Abstract-Rewards play an essential role in reinforcement learning. In contrast to rule-based game environments with well-defined reward functions, complex real-world robotic applications, such as contact-rich manipulation, lack explicit and informative descriptions that can directly be used as a reward. Previous efforts have shown that it is possible to algorithmically extract dense rewards directly from multimodal observations. In this paper, we aim to extend this effort by proposing a more efficient and robust way of sampling and learning. In particular, our sampling approach utilizes temporal variance to simulate the fluctuating state and action distribution of a manipulation task. We then propose a network architecture for self-supervised learning to better incorporate temporal information in latent representations. We tested our approach in two experimental setups, namely joint-assembly and door-opening. Preliminary results show that our approach is effective and efficient in learning dense rewards, and that the learned rewards lead to faster convergence than baselines.

## I. INTRODUCTION

Reinforcement learning (RL) is gaining momentum in solving complex real-world robotics problems. One challenging category is contact-rich manipulation tasks.
The success of RL in these scenarios depends on a reliable reward system. While this genre of problems is marked by rich, high-dimensional, continuous observations, it is typically hard to come up with a dense reward that can harness such richness to guide RL training. The conventional approach of using sparse, Boolean rewards (e.g., 1 if the task is successfully completed and 0 otherwise) often makes training challenging and inefficient. This difficulty has led to the practice of reward engineering, where rewards are hand-crafted based on domain knowledge and trial-and-error. However, such solutions often require extensive robotics expertise and can be quite task-specific.

In this research, we propose an end-to-end learning framework that can extract dense rewards from multimodal observations, inspired by [1]. The reward is learned in a self-supervised manner by combining one or two human demonstrations with a physics simulator, and can then be directly used in training RL algorithms. We evaluate our framework on two contact-rich manipulation tasks, joint-assembly and door-opening.

There are two main contributions in this paper: 1) a temporal variant forward sampling (TVFS) method that is more robust and cost-efficient in generating samples from human demonstrations for contact-rich manipulation tasks, and 2) a self-supervised latent representation learning architecture that can utilize sample pairs from TVFS.

## II. Problem Statement & Related Work

## A. Problem Statement

We focus on contact-rich tasks that can be suitably framed as discrete-time Markov Decision Processes (MDPs) [2], described by a set of states $S$ , a set of actions $A$ , a set of conditional probabilities $p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$ for the state transition ${s}_{t} \rightarrow {s}_{t + 1}$ , a reward function $R : S \times A \rightarrow \mathbb{R}$ , and a discount factor $\gamma \in \left\lbrack {0,1}\right\rbrack$ .
The MDPs can be solved by using RL algorithms to train an optimal policy $\pi \left( s\right) \rightarrow a$ that maximizes the expected total reward. Our goal is to learn a dense reward function $R$ , which can be used by RL algorithms to reach the optimal policy for the MDPs.

## B. Inverse Reinforcement Learning

To tackle the reward engineering problem, Inverse Reinforcement Learning (IRL) has arisen as a prominent solution [3]. Rather than crafting the reward, IRL methods learn reward functions [4], [5], [6] from expert demonstrations. However, conventional IRL methods mostly deal with ideal scenarios where states and representations are discrete and low-dimensional. Recent advances in deep learning have extended classical IRL's capability to continuous, high-dimensional observation spaces [7], [8]. However, despite much improved performance, learning is often conducted using generative adversarial frameworks [9], [10], meaning that one must train the reward function alongside a policy. Besides instability in training, this framework essentially diverges from our goal, i.e. to learn a reward function independently without concurrently learning a policy.

## C. Learning Dense Reward for Contact-Rich Manipulation

[11] and [12] explored the idea of training a dense reward function directly from human feedback. These methods integrate human experts into the RL training process and periodically ask for their preference over pairs of video clips of the agent's behavior. A reward function is gradually trained that eventually best explains the human judgments. The methods showed impressive results on training complex robotic locomotion tasks, but haven't been tested on contact-rich manipulation tasks, where the behaviors are hard to observe from purely visual cues. [1] proposed a DREM framework that extracts dense reward from multimodal observation through sampling and self-supervised learning.
The framework shows great potential in translating rich, continuous, high-dimensional observations into a task progress variable that can be used to guide RL training. Our work builds on top of this research by proposing improvements to the sampling method and the self-supervised learning architecture. [13] also proposed to learn a multimodal representation of sensor inputs and use the representation for policy learning on contact-rich tasks, such as peg insertion. However, it uses hand-crafted reward functions for various sub-tasks.

---

${}^{1}$ Carnegie Mellon University, Pittsburgh. ${}^{2}$ Autodesk Research, San Francisco. This research was conducted during Yuning Wu's internship at the Autodesk AI Lab and Autodesk Robotics Lab.

---

## III. Approach: Learning Dense Reward with Temporal Variant Self-Supervision

Similar to ideas presented in [1], our method also aims to learn a task progress variable $p \in \left\lbrack {0,1}\right\rbrack$ that captures the progress towards finishing a task. The variable can then be used as a dense reward. With $p = 0$ representing the initial state and $p = 1$ representing the goal state, the variable is structured as a similarity score in the latent space $\mathcal{H}$ . The latent representation ${h}_{\phi } : \mathcal{S} \rightarrow \mathcal{H}$ is learned in a self-supervised manner with two major objectives. The first is to capture an efficient, low-dimensional embedding of the multimodal observation space $\mathcal{S}$ . The second is to encode temporal information in the learned representation.
Following [1], for contact-rich manipulation tasks with a relatively deterministic and repeatable goal state, the task progress can be derived with a distance measure $d$ in $\mathcal{H}$ :

$$
p = 1 - \frac{d\left( {{h}_{\phi }\left( s\right) ,{h}_{\phi }\left( {s}_{g}\right) }\right) }{d\left( {{h}_{\phi }\left( {s}_{0}\right) ,{h}_{\phi }\left( {s}_{g}\right) }\right) } \tag{1}
$$

Prior work has explored ways to learn the representation by explicitly enforcing temporal order through a triplet loss function [1]. Such enforcement by design involves tuning multiple hyperparameters. We propose a framework where temporal information can be injected in a more natural, self-consistent manner by utilizing the dynamic relation among pairs of adjacent observations $\left( {{s}_{t},{s}_{t + 1}}\right)$ :

$$
{h}_{\phi }\left( {s}_{t}\right) + \Delta {h}_{\psi }\left( {s}_{t}\right) = {h}_{\phi }\left( {s}_{t + 1}\right) \tag{2}
$$

${h}_{\phi }$ is the latent representation, whereas $\Delta {h}_{\psi }$ is the change of latent representation between $t$ and $t + 1$ resulting from the dynamics. ${h}_{\phi }$ and $\Delta {h}_{\psi }$ are learned using different modalities. The insight behind this constraint is that the latent representation of ${s}_{t + 1}$ should be consistent with the representation of ${s}_{t}$ , plus any dynamic change that happened within the time step.

Our proposed improvements are the following. The first is temporal variant forward sampling, which generates a tree of data (i.e. observations with temporal information) from a single human demonstration. The second is a self-supervised representation learning network architecture, which uses generated observation pairs to learn representations through Eq.(2).

## A. Temporal Variant Forward Sampling

Collecting observations with temporal information is an essential step for training, but can be challenging when relying on human demonstrations alone.
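For concreteness, once embeddings are available, the progress variable of Eq. (1) is just a ratio of latent distances; a minimal numpy sketch (the embeddings are assumed given, names are illustrative):

```python
import numpy as np

def task_progress(h_t, h_0, h_g):
    """Eq. (1): progress as a normalized latent distance to the goal.

    h_t, h_0, h_g: latent embeddings of the current, initial and goal states.
    """
    d = np.linalg.norm  # Euclidean distance in the latent space
    return 1.0 - d(h_t - h_g) / d(h_0 - h_g)

h_0, h_g = np.zeros(4), np.ones(4)
p_start = task_progress(h_0, h_0, h_g)        # -> 0.0 at the initial state
p_goal = task_progress(h_g, h_0, h_g)         # -> 1.0 at the goal state
p_mid = task_progress(0.5 * h_g, h_0, h_g)    # -> 0.5 halfway in latent space
```

States that drift further from the goal than the initial state yield negative progress, which is the behavior exploited later as a penalty signal for failed trajectories.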
Therefore, the idea of sampling has been explored extensively in [1], [14]. By combining one or two human demonstrations with a physics simulator, it is possible to obtain a tree of data through sampling. [1] proposed a backward sampling process based on the insight that the variance of the goal state is smaller than that of the initial state. However, although it is feasible to sample backward positions and generate visual images from the positions in simulation, it is typically hard to sample backward force and torque (F/T). Through experiments, we have found that restoring a state with the exact F/T reading can be computationally intensive and simulator-dependent. Also, when playing forward a backward-sampled action sequence, the F/T readings in the forward pass do not necessarily match the F/T readings recorded in the backward pass.

![01963e1f-1bd6-7150-b289-36e574d6b19a_1_986_159_586_750_0.jpg](images/01963e1f-1bd6-7150-b289-36e574d6b19a_1_986_159_586_750_0.jpg)

Fig. 1. Sampling variance along different stages of a manipulation task. The blue ${V}_{1}\left( t\right)$ and red ${V}_{2}\left( t\right)$ curves show potential temporal variance control functions that can be used in sampling.

Without loss of generality, we propose a new sampling process, named Temporal Variant Forward Sampling (TVFS), that aims to tackle the aforementioned challenges while capturing the fluctuating variance of manipulation tasks. The insight behind our method is to loosely control sampled actions with a temporal variance $V\left( t\right)$ , such that sampled actions do not diverge too much from the potential distribution of an expert demonstration, and such that the actions are mostly progressing forward. For instance, an action that is the opposite of an expert action may not appear at certain stable stages. As shown in Fig.1, at the starting stage $\left( {p = 0}\right)$ , the sampling variance is limited.
At the intermediate stage, the variance is high due to the lack of constraints and the high freedom of movement. At the ending stage $\left( {p = 1}\right)$ , the variance is low because the goal state is relatively deterministic. $V\left( t\right)$ can be described with a chosen kernel function; in our experiments, we have chosen the quadratic function for simplicity. The general process of our sampling method is illustrated in Fig.2 and is described as follows.

1) We record an expert demonstration in simulation and choose the sampling seeds (states) $\left\{ {{Q}_{0},{Q}_{1},\cdots ,{Q}_{M}}\right\}$ at a fixed sampling interval.

![01963e1f-1bd6-7150-b289-36e574d6b19a_2_170_156_662_357_0.jpg](images/01963e1f-1bd6-7150-b289-36e574d6b19a_2_170_156_662_357_0.jpg)

Fig. 2. Temporal variant forward sampling (TVFS). The difference between sampled actions (blue) and demo actions (black) is controlled through $V\left( t\right)$ , which is a variance measure that changes along the task process.

2) At each seed (state) ${Q}_{i}$ , randomly sample $N$ branches. Each branch may contain multiple forward steps. At each step, control the variance between the sampled action ${a}_{t}^{\text{sampled }}$ and the demo action ${a}_{t}^{\text{demo }}$ . The variance can be measured using any similarity score; in our case, we have chosen the cosine similarity.

3) Record the sampled observations in pairs $\left( {{s}_{t},{s}_{t + 1}}\right)$ , such that they can be used in learning Eq.(2).

## B. Multimodal Representation Learning

With the generated multimodal observation pairs from TVFS, we have designed a network architecture and a loss function to incorporate temporal information in representation learning. As mentioned above, we use different modalities to learn different components of Eq.(2).
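The per-step sampling of § A (a quadratic $V(t)$ bounding the cosine similarity between sampled and demo actions) can be sketched as follows; this is a hypothetical rejection-sampling variant with illustrative names, not the authors' implementation:

```python
import numpy as np

def variance_schedule(p, theta_lo=np.pi / 12, theta_hi=np.pi / 4):
    """Quadratic V(t): low variance near p=0 and p=1, high in between."""
    return theta_lo + (theta_hi - theta_lo) * 4.0 * p * (1.0 - p)

def sample_action(a_demo, theta_max, rng):
    """Sample an action whose angle to the (nonzero) demo action stays within
    theta_max, by rejecting samples on the cosine similarity."""
    cos_min = np.cos(theta_max)
    while True:
        a = a_demo + rng.normal(scale=np.tan(theta_max), size=a_demo.shape)
        cos = a @ a_demo / (np.linalg.norm(a) * np.linalg.norm(a_demo))
        if cos >= cos_min:
            return a

rng = np.random.default_rng(0)
a = sample_action(np.array([1.0, 0.0, 0.0]), variance_schedule(0.5), rng)
```

Tying the perturbation scale to `theta_max` is one simple way to keep the rejection rate low when the allowed cone is narrow.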
${h}_{\phi }$ is learned with static modalities such as images, depth maps and poses, while $\Delta {h}_{\psi }$ is learned using dynamic modalities such as F/T and velocities. The two separately learned components should be consistent in the latent space. We accentuate such consistency with a hybrid loss function, consisting of a temporal enforcement loss and a reconstruction loss. The architecture and loss functions are detailed as follows.

![01963e1f-1bd6-7150-b289-36e574d6b19a_2_161_1586_715_400_0.jpg](images/01963e1f-1bd6-7150-b289-36e574d6b19a_2_161_1586_715_400_0.jpg)

Fig. 3. Self-supervised learning network architecture

1) Static Modality Encoder ${\mathbf{E}}^{\mathbf{s}}$ . To learn ${h}_{\phi }$ , we use a fixed RGB-D camera as input for the static modality encoders. The RGB image (256x256x3) and depth map (256x256x1) are handled separately. Similar to [13], the network is composed of a 6-layer Convolutional Neural Network and a fully-connected layer. Depending on the experiment scenarios, one may switch or combine the modalities. An extra Multi-Layer Perceptron may be used for output fusion. The final embedding is a 64-dimensional hidden vector.

2) Dynamic Modality Encoder ${\mathbf{E}}^{\mathbf{d}}$ . To learn $\Delta {h}_{\psi }$ , we use the F/T reading and velocity as input for the dynamic modality encoder. Due to the accumulative nature of F/T, we use a window size of 32 to better capture the momentum. The ${32} \times 6$ input is passed into a 4-layer Causal Convolution Network. The output is concatenated and fused with velocity to produce another 64-dimensional hidden vector.

3) Static Modality Decoder ${\mathbf{D}}^{\mathbf{s}}$ . Through experiments, we have found that instead of enforcing self-supervised learning on both static and dynamic modalities, it is better to focus on one side only. This choice is explained further below.
In our case, we propose an auto-encoding architecture on the static modality side. The decoder takes a 64-dimensional hidden vector as input and uses a transposed Convolutional Neural Network to reconstruct the RGB image / depth map.

4) Latent Representation Learning. As mentioned above, the temporal order is injected through learning with pairs of adjacent observations $\left( {{s}_{t},{s}_{t + 1}}\right)$ . By enforcing Eq.(2) among each pair, the temporal relation among latent representations is propagated. Fig. 3 illustrates the whole network architecture. We first encode and decode ${s}_{t}$ with ${\mathbf{E}}^{\mathbf{s}},{\mathbf{E}}^{\mathbf{d}}$ and ${\mathbf{D}}^{\mathbf{s}}$ , then use the embedding to craft a latent representation for ${s}_{t + 1}$ :

$$
\widehat{h}\left( {s}_{t + 1}\right) = {\mathbf{E}}^{\mathbf{s}}\left( {s}_{t}\right) + {\overline{\mathbf{E}}}^{\mathbf{d}}\left( {s}_{t}\right) \tag{3}
$$

The hybrid loss function is structured around $\widehat{h}\left( {s}_{t + 1}\right)$ . The first component is the temporal enforcement loss, which enforces Eq.(2) in the latent space. To ensure the effectiveness of $\Delta {h}_{\psi }$ , we apply L2 normalization to ${\overline{\mathbf{E}}}^{\mathbf{d}}\left( {s}_{t}\right)$ .

$$
{l}^{\text{temporal }} = \operatorname{MSE}\left\lbrack {\widehat{h}\left( {s}_{t + 1}\right) ,{\mathbf{E}}^{\mathbf{s}}\left( {s}_{t + 1}\right) }\right\rbrack \tag{4}
$$

The second component is the reconstruction loss, which provides the supervision signal for the auto-encoding architecture. As opposed to directly decoding ${\mathbf{E}}^{\mathbf{s}}\left( {s}_{t + 1}\right)$ , we decode $\widehat{h}\left( {s}_{t + 1}\right)$ so that Eq.(2) is also enforced in self-supervision.
+

$$
+{l}^{\text{recon }} = {l}^{\text{recon }}\left( {s}_{t}\right) + {\widehat{l}}^{\text{recon }}\left( {s}_{t + 1}\right) \tag{5}
+$$

$$
+{l}^{\text{recon }}\left( {s}_{t}\right) = \operatorname{MSE}\left\lbrack {{\mathbf{D}}^{\mathbf{s}}\left( {{\mathbf{E}}^{\mathbf{s}}\left( {s}_{t}\right) }\right) ,{s}_{t}}\right\rbrack \tag{6}
+$$

$$
+{\widehat{l}}^{\text{recon }}\left( {s}_{t + 1}\right) = \operatorname{MSE}\left\lbrack {{\mathbf{D}}^{\mathbf{s}}\left( {\widehat{h}\left( {s}_{t + 1}\right) }\right) ,{s}_{t + 1}}\right\rbrack \tag{7}
+$$

The two loss components are then combined through a hyperparameter $\lambda$ . +

$$
+l = {l}^{\text{recon }} + \lambda \cdot {l}^{\text{temporal }} \tag{8}
+$$

We set $\lambda = {10}$ in training to accentuate the temporal relation, so that representation learning does not converge prematurely to a suboptimal solution. The learned embedding is then used in Eq.(1) for the dense reward. +

## IV. EXPERIMENTAL RESULTS +

The experiments are conducted in simulation. In order to test that our sampling method generalizes to different simulators and robot controllers, we tested lap-joint assembly in PyBullet [15] with a robot-agnostic environment, and door-opening in Robosuite [16] with a Panda robot. We conducted a similar sampling process on both tasks, setting the sampling interval $I = {50}$ , the number of branches $N = 5$ , and the number of steps per branch $K = {10}$ . The temporal variance is controlled in $\theta \in \left\lbrack {\frac{\pi }{12},\frac{\pi }{4}}\right\rbrack$ . We trained the model on an NVIDIA 3060 GPU for around 5,000 iterations. Compared to the training iterations mentioned in [1], our method is potentially more efficient. We defer an ablation study and further examination of this comparison to future work. +

## A. Visualization of Learned Dense Reward +

We visualized the learned dense reward in two cases. The results indicate that the dense reward learned by our approach is effective.
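The reward curves examined in this section are values of the task-progress variable from Eq. (1). A minimal sketch of that computation, assuming Euclidean distance for $d$ (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def task_progress(h_s, h_s0, h_sg):
    """Task progress p = 1 - d(h(s), h(s_g)) / d(h(s_0), h(s_g)), Eq. (1).

    h_s, h_s0, h_sg: latent embeddings of the current, initial, and goal
    states. Euclidean distance is one possible choice for the measure d.
    """
    d = lambda a, b: np.linalg.norm(a - b)
    return 1.0 - d(h_s, h_sg) / d(h_s0, h_sg)

# At the initial state p = 0; at the goal state p = 1. A state whose
# embedding is farther from the goal than the initial state yields p < 0.
h0, hg = np.zeros(64), np.ones(64)
assert abs(task_progress(h0, h0, hg)) < 1e-9        # p = 0 at the start
assert abs(task_progress(hg, h0, hg) - 1.0) < 1e-9  # p = 1 at the goal
```

A reward below 0 therefore corresponds to an embedding that has drifted farther from the goal than the initial state was, which is the behavior discussed for failed trajectories below.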
In the first case (Fig.4), we compare the rewards between a successful trajectory and a failed trajectory in the lap-joint task. The plots suggest that a successful trajectory has rewards gradually increasing from 0 to 1, which matches the definition of task progress. A failed trajectory has a decreasing reward that drops below 0, which can happen when the agent gets into unexpected scenarios. +

![01963e1f-1bd6-7150-b289-36e574d6b19a_3_165_1344_710_270_0.jpg](images/01963e1f-1bd6-7150-b289-36e574d6b19a_3_165_1344_710_270_0.jpg) +

Fig. 4. Comparison of a successful trajectory (left) and a failed trajectory (right). Visualization produced in the lap-joint task. +

In the second case (Fig.5), we examine the rewards of an inexpert demonstration in door-opening. The demonstrator experienced a plateau of trial-and-error when rotating the door handle back and forth. While the hand-crafted reward gives largely indistinguishable signals during this period, our reward fluctuates, providing more detailed feedback for learning. +

## B. Dense Rewards for Policy Training +

To examine the performance of the learned reward in policy training, we chose Soft Actor-Critic [17] as the RL algorithm for benchmarking. We compared three types of rewards in the door-opening task, namely our dense reward, a hand-crafted reward based on distance $\left( {\gamma {\begin{Vmatrix}{x}_{t} - {x}_{g}\end{Vmatrix}}_{2}}\right)$ , and a sparse boolean reward. In particular, we trained the policy for the door-opening task for 500 epochs. For each type of reward, we conducted three training experiments with different random seeds. The results (Fig.6) indicate that our dense reward leads to faster convergence and greater training stability. +

![01963e1f-1bd6-7150-b289-36e574d6b19a_3_936_150_700_290_0.jpg](images/01963e1f-1bd6-7150-b289-36e574d6b19a_3_936_150_700_290_0.jpg) +

Fig. 5.
Comparison of dense reward and hand-crafted reward in the door-opening task +

![01963e1f-1bd6-7150-b289-36e574d6b19a_3_916_924_723_394_0.jpg](images/01963e1f-1bd6-7150-b289-36e574d6b19a_3_916_924_723_394_0.jpg) +

Fig. 6. RL training comparison among three types of rewards: our dense reward (blue), hand-crafted distance reward (orange), and sparse boolean reward (red). +

## V. Conclusions and Future Works +

In this paper, we propose an improved framework for learning dense rewards for contact-rich manipulation tasks. The framework includes a more robust sampling method, namely temporal variant forward sampling (TVFS), that can generate samples from one or two human demonstrations with a physics simulator. A self-supervised learning architecture is also designed to efficiently utilize the generated sample pairs. +

For future work, we intend to conduct more ablation studies regarding the framework's adaptability and modalities. For instance, during experiments we observed that the camera setup can have a substantial impact on the learning result. Therefore, one direction is to study whether we can rely mainly on pure tactile sensors for reward inference. Another direction is to test whether the reward can be transferred to manipulation tasks with nondeterministic goal states. +

## ACKNOWLEDGEMENT +

We thank Tonya Custis and Sachin Chitta for budgetary support of the project; Yotto Koga for simulation support; our colleagues at Autodesk Research for their valuable feedback; and Zheng Wu for the discussions. +

## REFERENCES +

[1] Z. Wu, W. Lian, V. Unhelkar, M. Tomizuka, and S. Schaal, "Learning dense rewards for contact-rich manipulation tasks," 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6214-6221, 2021. +

[2] M. L. Puterman, Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014. +

[3] A. Y. Ng, S. J. Russell, et al., "Algorithms for inverse reinforcement learning," in ICML, vol. 1, 2000, p. 2.
+

[4] P. Abbeel and A. Y. Ng, "Apprenticeship learning via inverse reinforcement learning," in Proceedings of the twenty-first international conference on Machine learning, 2004, p. 1. +

[5] B. D. Ziebart, A. L. Maas, J. A. Bagnell, A. K. Dey, et al., "Maximum entropy inverse reinforcement learning," in AAAI, vol. 8, Chicago, IL, USA, 2008, pp. 1433-1438. +

[6] D. Ramachandran and E. Amir, "Bayesian inverse reinforcement learning," in IJCAI, vol. 7, 2007, pp. 2586-2591. +

[7] M. Wulfmeier, P. Ondruska, and I. Posner, "Maximum entropy deep inverse reinforcement learning," arXiv preprint arXiv:1507.04888, 2015. +

[8] S. K. Seyed Ghasemipour, S. S. Gu, and R. Zemel, "SMILe: Scalable meta inverse reinforcement learning through context-conditional policies," Advances in Neural Information Processing Systems, vol. 32, 2019. +

[9] J. Ho and S. Ermon, "Generative adversarial imitation learning," Advances in Neural Information Processing Systems, vol. 29, 2016. +

[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," Advances in Neural Information Processing Systems, vol. 27, 2014. +

[11] P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei, "Deep reinforcement learning from human preferences," Advances in Neural Information Processing Systems, vol. 30, 2017. +

[12] K. Lee, L. Smith, and P. Abbeel, "Pebble: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training," arXiv preprint arXiv:2106.05091, 2021. +

[13] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg, "Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 8943-8950. +

[14] C. Florensa, D. Held, M. Wulfmeier, M. Zhang, and P.
Abbeel, "Reverse curriculum generation for reinforcement learning," in Conference on Robot Learning. PMLR, 2017, pp. 482-495. +

[15] E. Coumans and Y. Bai, "Pybullet, a python module for physics simulation for games, robotics and machine learning," http://pybullet.org, 2016-2021. +

[16] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín, "robosuite: A modular simulation framework and benchmark for robot learning," arXiv preprint arXiv:2009.12293, 2020. +

[17] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International Conference on Machine Learning. PMLR, 2018, pp. 1861-1870. \ No newline at end of file diff --git a/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/e1UbzlmDUI/Initial_manuscript_tex/Initial_manuscript.tex b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/e1UbzlmDUI/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..f432b4e3a92fdbefbecef8983024cd8ae79b84f3 --- /dev/null +++ b/papers/ICRA/ICRA 2022/ICRA 2022 Workshop/ICRA 2022 Workshop Contact-Rich/e1UbzlmDUI/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,151 @@ +§ LEARNING DENSE REWARD WITH TEMPORAL VARIANT SELF-SUPERVISION

Yuning Wu ${}^{1,2}$ , Jieliang Luo ${}^{2}$ , Hui Li ${}^{2}$

Abstract-Rewards play an essential role in reinforcement learning. In contrast to rule-based game environments with well-defined reward functions, complex real-world robotic applications, such as contact-rich manipulation, lack explicit and informative descriptions that can directly be used as a reward. Previous work has shown that it is possible to algorithmically extract dense rewards directly from multimodal observations. In this paper, we aim to extend this effort by proposing a more efficient and robust way of sampling and learning.
In particular, our sampling approach utilizes temporal variance to simulate the fluctuating state and action distribution of a manipulation task. We then propose a network architecture for self-supervised learning to better incorporate temporal information in latent representations. We tested our approach in two experimental setups, namely joint-assembly and door-opening. Preliminary results show that our approach is effective and efficient in learning dense rewards, and the learned rewards lead to faster convergence than baselines. +

§ I. INTRODUCTION +

Reinforcement learning (RL) is gaining momentum in solving complex real-world robotics problems. One challenging category is contact-rich manipulation tasks. The success of RL in these scenarios depends on a reliable reward system. While this genre of problems is marked by rich, high-dimensional, continuous observations, it is typically hard to come up with a dense reward that can harness such richness to guide RL training. The conventional practice of using sparse, boolean rewards (e.g., 1 if the task is successfully completed and 0 otherwise) often makes training challenging and inefficient. This difficulty has led to the practice of reward engineering, where rewards are hand-crafted based on domain knowledge and trial-and-error. However, such solutions often require extensive robotics expertise and can be quite task-specific. +

In this research, we propose an end-to-end learning framework that can extract dense rewards from multimodal observations, inspired by [1]. The reward is learned in a self-supervised manner by combining one or two human demonstrations with a physics simulator, and can then be directly used in training RL algorithms. We evaluate our framework in two contact-rich manipulation tasks: joint-assembly and door-opening.
+

There are two main contributions in this paper: 1) a temporal variant forward sampling (TVFS) method that is more robust and cost-efficient in generating samples from human demonstrations for contact-rich manipulation tasks, and 2) a self-supervised latent representation learning architecture that can utilize sample pairs from TVFS. +

§ II. PROBLEM STATEMENT & RELATED WORK +

§ A. PROBLEM STATEMENT +

We focus on contact-rich tasks that can be suitably framed as discrete-time Markov Decision Processes (MDPs) [2], which are described by a set of states $S$ , a set of actions $A$ , a set of conditional probabilities $p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$ for the state transition ${s}_{t} \rightarrow {s}_{t + 1}$ , a reward function $R : S \times A \rightarrow \mathbb{R}$ , and a discount factor $\gamma \in \left\lbrack {0,1}\right\rbrack$ . Such MDPs can be solved by using RL algorithms to train an optimal policy $\pi \left( s\right) \rightarrow a$ that maximizes the expected total reward. Our goal is to learn a dense reward function $R$ , which can be used by RL algorithms to reach the optimal policy for these MDPs. +

§ B. INVERSE REINFORCEMENT LEARNING +

To tackle the reward engineering problem, Inverse Reinforcement Learning (IRL) has arisen as a prominent solution [3]. Rather than crafting the reward, IRL methods learn reward functions [4], [5], [6] from expert demonstrations. However, conventional IRL methods mostly deal with ideal scenarios where states and representations are discrete and low-dimensional. Recent advances in deep learning have extended classical IRL's capability to continuous, high-dimensional observation spaces [7], [8]. However, despite much improved performance, learning is often conducted using generative adversarial learning frameworks [9], [10], meaning that one must train the reward function alongside a policy. Besides instability in training, this framework essentially diverges from our goal, i.e.
to learn a reward function independently, without concurrently learning a policy. +

§ C. LEARNING DENSE REWARD FOR CONTACT-RICH MANIPULATION +

[11] and [12] explored the idea of training a dense reward function directly from human feedback. These methods integrate human experts into the RL training process and periodically ask for their preferences over pairs of video clips of the agent's behavior. A reward function is gradually trained to best explain the humans' judgments. The methods showed impressive results on training complex robotic locomotion tasks, but have not been tested on contact-rich manipulation tasks, where behaviors are hard to observe from visual cues alone. [1] proposed the DREM framework, which extracts dense rewards from multimodal observations through sampling and self-supervised learning. The framework shows great potential in translating rich, continuous, high-dimensional observations into a task-progress variable that can be used to guide RL training. Our work builds on this research by proposing improvements to the sampling method and the self-supervised learning architecture. [13] also proposed to learn a multimodal representation of sensor inputs and use the representation for policy learning on contact-rich tasks, such as peg insertion. However, it uses hand-crafted reward functions for the various sub-tasks. +

${}^{1}$ Carnegie Mellon University, Pittsburgh. ${}^{2}$ Autodesk Research, San Francisco. This research was conducted during Yuning Wu's internship at the Autodesk AI Lab and Autodesk Robotics Lab. +

§ III. APPROACH: LEARNING DENSE REWARD WITH TEMPORAL VARIANT SELF-SUPERVISION +

Similar to ideas presented in [1], our method also aims to learn a task-progress variable $p \in \left\lbrack {0,1}\right\rbrack$ that captures progress towards finishing a task. The variable can then be used as a dense reward.
With $p = 0$ representing the initial state, and $p = 1$ representing the goal state, the variable is structured as a similarity score in the latent space $\mathcal{H}$ . The latent representation ${h}_{\phi } : \mathcal{S} \rightarrow \mathcal{H}$ is learned in a self-supervised manner with two major objectives. The first is to capture an efficient, low-dimensional embedding of the multimodal observation space $\mathcal{S}$ . The second is to encode temporal information in the learned representation. Following [1], for contact-rich manipulation tasks with a relatively deterministic and repeatable goal state, the task progress can be derived with a distance measure $d$ in $\mathcal{H}$ : +

$$
+p = 1 - \frac{d\left( {{h}_{\phi }\left( s\right) ,{h}_{\phi }\left( {s}_{g}\right) }\right) }{d\left( {{h}_{\phi }\left( {s}_{0}\right) ,{h}_{\phi }\left( {s}_{g}\right) }\right) } \tag{1}
+$$

Prior work has explored ways to learn the representation by explicitly enforcing temporal order through a triplet loss function [1]. Such enforcement by design involves tuning multiple hyperparameters. We propose a framework where temporal information can be injected in a more natural, self-consistent manner by utilizing the dynamic relation between pairs of adjacent observations $\left( {{s}_{t},{s}_{t + 1}}\right)$ . +

$$
+{h}_{\phi }\left( {s}_{t}\right) + \Delta {h}_{\psi }\left( {s}_{t}\right) = {h}_{\phi }\left( {s}_{t + 1}\right) \tag{2}
+$$

${h}_{\phi }$ is the latent representation, whereas $\Delta {h}_{\psi }$ is the change of the latent representation between $t$ and $t + 1$ resulting from the dynamics. ${h}_{\phi }$ and $\Delta {h}_{\psi }$ are learned using different modalities. The insight behind such a constraint is that the latent representation of ${s}_{t + 1}$ should be consistent with the representation of ${s}_{t}$ , plus any dynamic change that happened within the time step. +

Our proposed improvements are the following.
The first is temporal variant forward sampling, which generates a tree of data (i.e. observations with temporal information) from a single human demonstration. The second is a self-supervised representation-learning network architecture, which uses the generated observation pairs to learn representations through Eq.(2). +

§ A. TEMPORAL VARIANT FORWARD SAMPLING +

Collecting observations with temporal information is an essential step for training, but can be challenging when relying only on human demonstrations. Therefore, the idea of sampling has been broadly explored in [1], [14]. By combining one or two human demonstrations with a physics simulator, it is possible to obtain a tree of data through sampling. [1] proposed a backward sampling process based on the insight that the variance of the goal state is smaller than that of the initial state. However, although it is feasible to sample backward positions and generate visual images from those positions in simulation, it is typically hard to sample backward force and torque (F/T). Through experiments, we have found that restoring a state with the exact $\mathrm{F}/\mathrm{T}$ reading can be computationally intensive and simulator-dependent. Also, when playing forward a backward-sampled action sequence, the $\mathrm{F}/\mathrm{T}$ readings in the forward pass do not necessarily match the F/T readings recorded in the backward pass. +

Fig. 1. Sampling variance along different stages of a manipulation task. The blue ${V}_{1}\left( t\right)$ and red ${V}_{2}\left( t\right)$ curves show potential temporal variance control functions that can be used in sampling. +

Without loss of generality, we propose a new sampling process, named Temporal Variant Forward Sampling (TVFS), that aims to tackle the aforementioned challenges, while capturing the fluctuating variance of manipulation tasks.
The insight behind our method is to roughly control sampled actions with a temporal variance $V\left( t\right)$ , such that sampled actions do not diverge too much from the potential distribution of an expert demonstration, and that the actions are mostly progressing forward. For instance, an action that is the opposite of an expert action may not appear at certain stable stages. As shown in Fig.1, at the starting stage $\left( {p = 0}\right)$ , the sampling variance is limited. At the intermediate stage, the variance is high due to the lack of constraints and the high freedom of movement. At the ending stage $\left( {p = 1}\right)$ , the variance is low because the goal state is relatively deterministic. $V\left( t\right)$ can be described by a chosen kernel function. In our experiment, we chose a quadratic function for simplicity. The general process of our sampling method is illustrated in Fig.2, and is described as follows. +

1) We record an expert demonstration in simulation, and choose the sampling seeds (states) $\left\{ {{Q}_{0},{Q}_{1},\cdots ,{Q}_{M}}\right\}$ at a fixed sampling interval. +

Fig. 2. Temporal variant forward sampling (TVFS). The difference between sampled actions (blue) and demo actions (black) is controlled through $V\left( t\right)$ , which is also a variance measure that changes along the task process. +

2) At each seed (state) ${Q}_{i}$ , randomly sample $N$ branches. Each branch may contain multiple forward steps. At each step, control the variance between the sampled action ${a}_{t}^{\text{ sampled }}$ and the demo action ${a}_{t}^{\text{ demo }}$ . The variance can be measured using any similarity score. In our case, we chose cosine similarity. +

3) Record the sampled observations in pairs $\left( {{s}_{t},{s}_{t + 1}}\right)$ , such that they can be used in learning Eq.(2). +

§ B.
MULTIMODAL REPRESENTATION LEARNING +

With the generated multimodal observation pairs from TVFS, we have designed a network architecture and a loss function to incorporate temporal information in representation learning. As mentioned above, we use different modalities to learn the different components of Eq.(2). ${h}_{\phi }$ is learned with static modalities such as images, depth maps and poses, while $\Delta {h}_{\psi }$ is learned using dynamic modalities such as $\mathrm{F}/\mathrm{T}$ and velocities. The two separately learned components should be consistent in the latent space. We accentuate such consistency with a hybrid loss function, consisting of a temporal enforcement loss and a reconstruction loss. The architecture and loss functions are detailed as follows. +

Fig. 3. Self-supervised learning network architecture +

1) Static Modality Encoder ${\mathbf{E}}^{\mathbf{s}}$ . To learn ${h}_{\phi }$ , we use a fixed RGB-D camera as input for the static modality encoders. The RGB image (256x256x3) and depth map (256x256x1) are handled separately. Similar to [13], the network is composed of a 6-layer Convolutional Neural Network and a fully-connected layer. Depending on the experiment scenario, one may switch or combine the modalities. An extra Multi-Layer Perceptron may be used for output fusion. The final embedding is a 64-dimensional hidden vector. +

2) Dynamic Modality Encoder ${\mathbf{E}}^{\mathbf{d}}$ . To learn $\Delta {h}_{\psi }$ , we use F/T readings and velocity as input for the dynamic modality encoder. Due to the accumulative nature of $\mathrm{F}/\mathrm{T}$ , we use a window size of 32 to better capture the momentum. The ${32} \times 6$ input is passed into a 4-layer Causal Convolution Network. The output is concatenated and fused with velocity to produce another 64-dimensional hidden vector. +

3) Static Modality Decoder ${\mathbf{D}}^{\mathbf{s}}$ .
Through experiments, we have found that instead of enforcing self-supervised learning on both static and dynamic modalities, it is better to focus on one side only. This choice is explained further below. In our case, we propose an auto-encoding architecture on the static modality side. The decoder takes a 64-dimensional hidden vector as input and uses a transposed Convolutional Neural Network to reconstruct the RGB image / depth map. +

4) Latent Representation Learning. As mentioned above, the temporal order is injected through learning with pairs of adjacent observations $\left( {{s}_{t},{s}_{t + 1}}\right)$ . By enforcing Eq.(2) on each pair, the temporal relation among latent representations is propagated. Fig. 3 is an illustration of the whole network architecture. We first encode and decode ${s}_{t}$ with ${\mathbf{E}}^{\mathbf{s}},{\mathbf{E}}^{\mathbf{d}}$ and ${\mathbf{D}}^{\mathbf{s}}$ , then use the embeddings to craft a latent representation for ${s}_{t + 1}$ . +

$$
+\widehat{h}\left( {s}_{t + 1}\right) = {\mathbf{E}}^{\mathbf{s}}\left( {s}_{t}\right) + {\overline{\mathbf{E}}}^{\mathbf{d}}\left( {s}_{t}\right) \tag{3}
+$$

The hybrid loss function is structured around $\widehat{h}\left( {s}_{t + 1}\right)$ . The first component is the temporal enforcement loss, which enforces Eq.(2) in the latent space. To ensure the effectiveness of $\Delta {h}_{\psi }$ , we apply ${L2}$ normalization to ${\overline{\mathbf{E}}}^{\mathbf{d}}\left( {s}_{t}\right)$ . +

$$
+{l}^{\text{ temporal }} = \operatorname{MSE}\left\lbrack {\widehat{h}\left( {s}_{t + 1}\right) ,{\mathbf{E}}^{\mathbf{s}}\left( {s}_{t + 1}\right) }\right\rbrack \tag{4}
+$$

The second component is the reconstruction loss, which provides the supervision signal for the auto-encoding architecture.
As opposed to directly decoding ${\mathbf{E}}^{\mathbf{s}}\left( {s}_{t + 1}\right)$ , we decode $\widehat{h}\left( {s}_{t + 1}\right)$ so that Eq.(2) is also enforced in self-supervision. +

$$
+{l}^{\text{ recon }} = {l}^{\text{ recon }}\left( {s}_{t}\right) + {\widehat{l}}^{\text{ recon }}\left( {s}_{t + 1}\right) \tag{5}
+$$

$$
+{l}^{\text{ recon }}\left( {s}_{t}\right) = \operatorname{MSE}\left\lbrack {{\mathbf{D}}^{\mathbf{s}}\left( {{\mathbf{E}}^{\mathbf{s}}\left( {s}_{t}\right) }\right) ,{s}_{t}}\right\rbrack \tag{6}
+$$

$$
+{\widehat{l}}^{\text{ recon }}\left( {s}_{t + 1}\right) = \operatorname{MSE}\left\lbrack {{\mathbf{D}}^{\mathbf{s}}\left( {\widehat{h}\left( {s}_{t + 1}\right) }\right) ,{s}_{t + 1}}\right\rbrack \tag{7}
+$$

The two loss components are then combined through a hyperparameter $\lambda$ . +

$$
+l = {l}^{\text{ recon }} + \lambda \cdot {l}^{\text{ temporal }} \tag{8}
+$$

We set $\lambda = {10}$ in training to accentuate the temporal relation, so that representation learning does not converge prematurely to a suboptimal solution. The learned embedding is then used in Eq.(1) for the dense reward. +

§ IV. EXPERIMENTAL RESULTS +

The experiments are conducted in simulation. In order to test that our sampling method generalizes to different simulators and robot controllers, we tested lap-joint assembly in PyBullet [15] with a robot-agnostic environment, and door-opening in Robosuite [16] with a Panda robot. We conducted a similar sampling process on both tasks, setting the sampling interval $I = {50}$ , the number of branches $N = 5$ , and the number of steps per branch $K = {10}$ . The temporal variance is controlled in $\theta \in \left\lbrack {\frac{\pi }{12},\frac{\pi }{4}}\right\rbrack$ . We trained the model on an NVIDIA 3060 GPU for around 5,000 iterations. Compared to the training iterations mentioned in [1], our method is potentially more efficient. We defer an ablation study and further examination of this comparison to future work.
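Under the configuration above ($I=50$, $N=5$, $K=10$, a variance range of $[\pi/12, \pi/4]$), one TVFS pass can be sketched as follows. This is a sketch under stated assumptions: the quadratic kernel shape, the perturbation scheme (rotating the demo action by exactly $V(t)$), and the `env.set_state` / `env.step` interface are illustrative, not the paper's implementation:

```python
import numpy as np

def variance_kernel(p, theta_min=np.pi / 12, theta_max=np.pi / 4):
    """Quadratic temporal variance V(t): low near p=0 and p=1, peaking at
    the intermediate stage, mapped into the allowed angle range."""
    return theta_min + (theta_max - theta_min) * 4.0 * p * (1.0 - p)

def sample_action(a_demo, p, rng):
    """Perturb the demo action so the angle between sampled and demo action
    (i.e. their cosine similarity) follows the temporal variance bound."""
    theta = variance_kernel(p)
    noise = rng.normal(size=a_demo.shape)
    noise -= noise @ a_demo / (a_demo @ a_demo) * a_demo  # orthogonal part
    noise /= np.linalg.norm(noise)
    a = np.cos(theta) * a_demo / np.linalg.norm(a_demo) + np.sin(theta) * noise
    return a * np.linalg.norm(a_demo)

def tvfs(env, demo_states, demo_actions, I=50, N=5, K=10, seed=0):
    """Temporal variant forward sampling: from every I-th demo state, roll
    out N branches of up to K perturbed forward steps, recording pairs
    (s_t, s_{t+1}) for the representation learning of Eq. (2)."""
    rng, pairs, T = np.random.default_rng(seed), [], len(demo_actions)
    for i in range(0, T, I):                   # sampling seeds Q_i
        for _ in range(N):                     # N branches per seed
            env.set_state(demo_states[i])      # hypothetical sim interface
            s_t = demo_states[i]
            for k in range(i, min(i + K, T)):  # up to K forward steps
                a = sample_action(demo_actions[k], k / T, rng)
                s_next = env.step(a)
                pairs.append((s_t, s_next))
                s_t = s_next
    return pairs
```

In a real setup the recorded observations would be the multimodal ones (RGB-D, F/T, velocity) rather than raw simulator states; cosine similarity between `a` and `a_demo` equals $\cos V(t)$ by construction, which is one way to realize the "controlled variance" of step 2.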
+

§ A. VISUALIZATION OF LEARNED DENSE REWARD +

We visualized the learned dense reward in two cases. The results indicate that the dense reward learned by our approach is effective. In the first case (Fig.4), we compare the rewards between a successful trajectory and a failed trajectory in the lap-joint task. The plots suggest that a successful trajectory has rewards gradually increasing from 0 to 1, which matches the definition of task progress. A failed trajectory has a decreasing reward that drops below 0, which can happen when the agent gets into unexpected scenarios. +

Fig. 4. Comparison of a successful trajectory (left) and a failed trajectory (right). Visualization produced in the lap-joint task. +

In the second case (Fig.5), we examine the rewards of an inexpert demonstration in door-opening. The demonstrator experienced a plateau of trial-and-error when rotating the door handle back and forth. While the hand-crafted reward gives largely indistinguishable signals during this period, our reward fluctuates, providing more detailed feedback for learning. +

§ B. DENSE REWARDS FOR POLICY TRAINING +

To examine the performance of the learned reward in policy training, we chose Soft Actor-Critic [17] as the RL algorithm for benchmarking. We compared three types of rewards in the door-opening task, namely our dense reward, a hand-crafted reward based on distance $\left( {\gamma {\begin{Vmatrix}{x}_{t} - {x}_{g}\end{Vmatrix}}_{2}}\right)$ , and a sparse boolean reward. In particular, we trained the policy for the door-opening task for 500 epochs. For each type of reward, we conducted three training experiments with different random seeds. The results (Fig.6) indicate that our dense reward leads to faster convergence and greater training stability. +

Fig. 5. Comparison of dense reward and hand-crafted reward in the door-opening task +

Fig. 6.
RL training comparison among three types of rewards: our dense reward (blue), hand-crafted distance reward (orange), and sparse boolean reward (red). +

§ V. CONCLUSIONS AND FUTURE WORKS +

In this paper, we propose an improved framework for learning dense rewards for contact-rich manipulation tasks. The framework includes a more robust sampling method, namely temporal variant forward sampling (TVFS), that can generate samples from one or two human demonstrations with a physics simulator. A self-supervised learning architecture is also designed to efficiently utilize the generated sample pairs. +

For future work, we intend to conduct more ablation studies regarding the framework's adaptability and modalities. For instance, during experiments we observed that the camera setup can have a substantial impact on the learning result. Therefore, one direction is to study whether we can rely mainly on pure tactile sensors for reward inference. Another direction is to test whether the reward can be transferred to manipulation tasks with nondeterministic goal states. +

§ ACKNOWLEDGEMENT +

We thank Tonya Custis and Sachin Chitta for budgetary support of the project; Yotto Koga for simulation support; our colleagues at Autodesk Research for their valuable feedback; and Zheng Wu for the discussions. \ No newline at end of file