Introduction
In this section, we provide more detailed background knowledge.
Let $\mathbf{x}$ be a high-dimensional continuous variable. We suppose that $\mathbf{x}$ is drawn from $p^*(\mathbf{x})$, which is the true data distribution. Given a collected dataset $\mathcal{D} = \{\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_D\}$, we are interested in approximating $p^*(\mathbf{x})$ with a model $p_{\theta}(\mathbf{x})$. We optimize $\theta$ by minimizing the negative log-likelihood $$\begin{equation} \mathcal{L}(\mathcal{D}) = \sum_{i=1}^{D} - \log p_{\theta}(\mathbf{x}_i). \label{eq:likelihood} \end{equation}$$
For some settings, the variable $\tilde{\mathbf{x}}$ is discrete, e.g., image pixel values are often integers. In these cases, we dequantize $\tilde{\mathbf{x}}$ by adding continuous noise $\bm{\mu}$ to it, resulting in a continuous variable $\mathbf{x} = \tilde{\mathbf{x}} + \bm{\mu}$. As shown by @ho2019flow, the log-likelihood of $\tilde{\mathbf{x}}$ is lower-bounded by the log-likelihood of $\mathbf{x}$.
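As a concrete illustration, the dequantization step can be sketched in a few lines of numpy (the function name is ours, and uniform noise is only one choice; the noise distribution can also be learned):

```python
import numpy as np

def dequantize(x_int, seed=None):
    """Add uniform noise in [0, 1) to integer-valued data.

    A minimal sketch: each integer value is spread over a disjoint
    unit interval, giving a continuous variable x = x_tilde + mu.
    """
    rng = np.random.default_rng(seed)
    return x_int.astype(np.float64) + rng.uniform(0.0, 1.0, size=x_int.shape)

pixels = np.array([[0, 128], [255, 64]])   # integer pixel values
x = dequantize(pixels, seed=0)
```

Because the noise stays within each unit interval, rounding $\mathbf{x}$ back down recovers the original integers, so no information is lost.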
Normalizing flows enable computation of $p_{\theta}(\mathbf{x})$, even though it is usually intractable for many other model families. A normalizing flow [@rezende2015variational] is composed of a series of invertible functions $\mathbf{f} = \mathbf{f}_1 \circ \mathbf{f}_2 \circ ... \circ \mathbf{f}_K$, which transform $\mathbf{x}$ to a latent code $\mathbf{z}$ drawn from a simple distribution. Therefore, with the change of variables formula, we can rewrite $\log p_{\theta}(\mathbf{x})$ to be $$\begin{equation} \log p_{\theta}(\mathbf{x}) = \log p_{Z}(\mathbf{z}) + \sum_{i=1}^{K} \log \left|\det \left(\frac{\partial \mathbf{f}_i}{\partial \mathbf{r}_{i-1}}\right)\right|, \label{eq:relikelihood} \end{equation}$$ where $\mathbf{r}_i = \mathbf{f}_i(\mathbf{r}_{i-1})$, $\mathbf{r}_{0} = \mathbf{x}$, and $\mathbf{r}_{K}=\mathbf{z}$.
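To make the change of variables formula concrete, the following numpy sketch evaluates the log-likelihood for a toy flow of elementwise affine layers (a hypothetical example of ours; practical flows use more expressive invertible layers, but the bookkeeping is the same):

```python
import numpy as np

def flow_log_likelihood(x, scales, shifts):
    """Log-likelihood of x under a chain of elementwise affine flows.

    Each layer computes r_i = s_i * r_{i-1} + b_i, whose Jacobian is
    diagonal, so its log|det| is simply sum(log|s_i|). The base
    distribution p_Z is a standard normal.
    """
    r, log_det = x, 0.0
    for s, b in zip(scales, shifts):
        r = s * r + b                          # r_i = f_i(r_{i-1})
        log_det += np.sum(np.log(np.abs(s)))   # log|det(df_i/dr_{i-1})|
    log_pz = -0.5 * np.sum(r ** 2) - 0.5 * r.size * np.log(2.0 * np.pi)
    return log_pz + log_det
```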
Emerging convolutions [@hoogeboom2019emerging] combine two autoregressive convolutions [@germain2015made; @kingma2016improved]. Formally, $$\begin{eqnarray*} \mathbf{M}'_1 = \mathbf{M}_1 \odot \mathbf{A}_1, ~~~~~~~~ \mathbf{M}'_2 = \mathbf{M}_2 \odot \mathbf{A}_2, ~~~~~~~~ \mathbf{y} = \mathbf{M}'_2 \star (\mathbf{M}'_1 \star \mathbf{x}), \end{eqnarray*}$$ where $\mathbf{M}_1, \mathbf{M}_2$ are convolutional kernels of size $c \times c \times d \times d$, and $\mathbf{A}_1, \mathbf{A}_2$ are binary masks. The symbol $\star$ represents the convolution operator.[^2] An emerging convolutional layer has the same receptive field as a standard convolutional layer, so it can capture correlations between a target pixel and its neighboring pixels. However, like other autoregressive convolutions, computing the inverse of an emerging convolution requires sequentially traversing each dimension of the input, so the computation is not parallelizable and becomes a bottleneck when the input is high-dimensional.
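For intuition, the single-channel masks can be constructed as below (a rough sketch with our own helper name; the multi-channel masks of @hoogeboom2019emerging additionally impose an ordering over channels):

```python
import numpy as np

def emerging_masks(d):
    """Binary masks A1, A2 for a pair of autoregressive d x d kernels.

    In raster order, A1 keeps the taps up to and including the center,
    and A2 keeps the center and everything after it, so each masked
    kernel is autoregressive while their supports together cover the
    full d x d window.
    """
    idx = np.arange(d * d).reshape(d, d)
    center = (d * d) // 2
    return (idx <= center).astype(int), (idx >= center).astype(int)
```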
Periodic convolutions [@hoogeboom2019emerging; @Finzi2019Invertible] use discrete Fourier transformations to transform both the input and the kernel to the Fourier domain. A periodic convolution is computed as $$\begin{equation*} \mathbf{y}_{u,:,:} = \sum_{v} \mathcal{F}^{-1}(\mathcal{F}(\mathbf{M}^{(p)}_{u,v,:,:})\odot \mathcal{F}(\mathbf{x}_{v,:,:})), \end{equation*}$$ where $\mathcal{F}$ is a discrete Fourier transformation, and $\mathbf{M}^{(p)}$ is the convolution kernel of size $c \times c \times d \times d$. The computational complexity of periodic convolutions is $\mathcal{O}(c^2hw\log(hw) +c^3hw)$. In our experiments, we found that the Fourier transformation also requires a large amount of memory. The high computational cost and memory usage impact the efficiency of both training and sampling when the input is high-dimensional.
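The equation above can be sketched with numpy's FFT routines as follows (our own function name; we assume, as in the periodic setting, that the $d \times d$ kernel has been zero-padded to the $h \times w$ spatial size, so the convolution is circular):

```python
import numpy as np

def periodic_conv2d(x, kernels):
    """Periodic (circular) convolution computed in the Fourier domain.

    x has shape (c, h, w); kernels has shape (c, c, h, w). For each
    output channel u: y[u] = sum_v IFFT( FFT(M[u, v]) * FFT(x[v]) ).
    """
    fx = np.fft.fft2(x)            # FFT over the last two axes: (c, h, w)
    fm = np.fft.fft2(kernels)      # (c, c, h, w)
    y = np.fft.ifft2(np.sum(fm * fx[None, :, :, :], axis=1))
    return y.real
```

Pointwise multiplication in the Fourier domain corresponds to circular convolution in the spatial domain, which is what makes the kernel's spectrum easy to invert frequency by frequency.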
Memory-efficient Woodbury transformations can effectively reduce the space complexity. The main idea is to perform spatial transformations along the height and width axes separately, i.e., a height transformation and a width transformation. Viewing the input $\mathbf{x}$ as a $c \times hw$ matrix, the transformations are: $$\begin{eqnarray} \mathbf{x}_c &=& (\mathbf{I}^{(c)} + \mathbf{U}^{(c)}\mathbf{V}^{(c)}) \mathbf{x}, \nonumber\\ \mathbf{x}_w &=& \text{reshape}(\mathbf{x}_c, (ch, w)), \nonumber\\ \mathbf{x}_w &=& \mathbf{x}_w (\mathbf{I}^{(w)} + \mathbf{U}^{(w)}\mathbf{V}^{(w)}), \nonumber\\ \mathbf{x}_h &=& \text{reshape}(\mathbf{x}_w, (cw, h)), \nonumber\\ \mathbf{y} &=& \mathbf{x}_h(\mathbf{I}^{(h)} + \mathbf{U}^{(h)}\mathbf{V}^{(h)}), \nonumber\\ \mathbf{y} &=& \text{reshape}(\mathbf{y}, (c, hw)), \label{eq:me-w} \end{eqnarray}$$ where $\text{reshape}(\mathbf{x}, (n,m))$ reshapes $\mathbf{x}$ to be an $n \times m$ matrix. Matrices $\mathbf{I}^{(c)}$, $\mathbf{I}^{(w)}$, and $\mathbf{I}^{(h)}$ are $c$-, $w$-, and $h$-dimensional identity matrices, respectively. Matrices $\mathbf{U}^{(c)}, \mathbf{V}^{(c)}, \mathbf{U}^{(w)}, \mathbf{V}^{(w)}, \mathbf{U}^{(h)}$, and $\mathbf{V}^{(h)}$ are $c \times d_c$, $d_c \times c$, $w \times d_w$, $d_w \times w$, $h \times d_h$, and $d_h \times h$ matrices, respectively, where $d_c$, $d_w$, and $d_h$ are constant latent dimensions.
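As a concrete check of these equations, the forward pass can be sketched in a few lines of numpy (the function name is ours; the input is stored in row-major order, so each $\text{reshape}$ is a free reinterpretation of the entries):

```python
import numpy as np

def me_woodbury_forward(x, Uc, Vc, Uw, Vw, Uh, Vh, c, h, w):
    """Forward memory-efficient Woodbury transformation (numpy sketch).

    x has shape (c, h*w). The channel, width, and height transformations
    are applied in turn, with reshapes between them as in Eq. (me-w).
    """
    xc = (np.eye(c) + Uc @ Vc) @ x        # channel transformation
    xw = xc.reshape(c * h, w)
    xw = xw @ (np.eye(w) + Uw @ Vw)       # width transformation
    xh = xw.reshape(c * w, h)
    y = xh @ (np.eye(h) + Uh @ Vh)        # height transformation
    return y.reshape(c, h * w)
```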
Using the Woodbury matrix identity and Sylvester's determinant identity, we can compute the inverse and the Jacobian determinant: $$\begin{eqnarray} \mathbf{y} &=& \text{reshape}(\mathbf{y}, (cw, h)), \nonumber\\ \mathbf{x}_h &=& \mathbf{y}(\mathbf{I}^{(h)} - \mathbf{U}^{(h)}(\mathbf{I}^{(d_h)} + \mathbf{V}^{(h)}\mathbf{U}^{(h)})^{-1}\mathbf{V}^{(h)}), \nonumber\\ \mathbf{x}_w &=& \text{reshape}(\mathbf{x}_h, (ch, w)), \nonumber\\ \mathbf{x}_w &=& \mathbf{x}_w(\mathbf{I}^{(w)} - \mathbf{U}^{(w)}(\mathbf{I}^{(d_w)} + \mathbf{V}^{(w)}\mathbf{U}^{(w)})^{-1}\mathbf{V}^{(w)}), \nonumber\\ \mathbf{x}_c &=& \text{reshape}(\mathbf{x}_w, (c, hw)), \nonumber\\ \mathbf{x} &=& (\mathbf{I}^{(c)} - \mathbf{U}^{(c)}(\mathbf{I}^{(d_c)} + \mathbf{V}^{(c)}\mathbf{U}^{(c)})^{-1}\mathbf{V}^{(c)})\mathbf{x}_c, \end{eqnarray}$$ $$\begin{eqnarray} \log \left| \det\left(\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right) \right| &=& hw \log\left|\det(\mathbf{I}^{(d_c)}+\mathbf{V}^{(c)}\mathbf{U}^{(c)})\right| + ch\log\left|\det(\mathbf{I}^{(d_w)}+\mathbf{V}^{(w)}\mathbf{U}^{(w)})\right| \nonumber\\ && +\, cw\log\left|\det(\mathbf{I}^{(d_h)}+\mathbf{V}^{(h)}\mathbf{U}^{(h)})\right|, \end{eqnarray}$$ where $\mathbf{I}^{(d_c)}$, $\mathbf{I}^{(d_w)}$, and $\mathbf{I}^{(d_h)}$ are $d_c$-, $d_w$-, and $d_h$-dimensional identity matrices, respectively. The Jacobian of $\text{reshape}()$ is an identity matrix, so its log-determinant is $0$.
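This inverse pass can also be sketched in numpy (our own function names; the input is viewed as a $c \times hw$ matrix; for clarity `woodbury_inverse` materializes the full inverse matrix, whereas an efficient implementation would apply the low-rank correction directly):

```python
import numpy as np

def woodbury_inverse(U, V):
    """Inverse of (I + U V) via the Woodbury matrix identity.

    Only the small d x d system (I + V U) is inverted, costing
    O(n d^2 + d^3) instead of O(n^3) for an n x n matrix.
    """
    n, d = U.shape
    small = np.linalg.inv(np.eye(d) + V @ U)   # d x d inverse
    return np.eye(n) - U @ small @ V

def me_woodbury_inverse(y, Uc, Vc, Uw, Vw, Uh, Vh, c, h, w):
    """Invert the memory-efficient Woodbury transformation by undoing
    the height, width, and channel transformations in reverse order."""
    xh = y.reshape(c * w, h) @ woodbury_inverse(Uh, Vh)
    xw = xh.reshape(c * h, w) @ woodbury_inverse(Uw, Vw)
    xc = xw.reshape(c, h * w)
    return woodbury_inverse(Uc, Vc) @ xc
```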
We call Equation [eq:me-w]{reference-type="ref" reference="eq:me-w"} the memory-efficient Woodbury transformation because it reduces the space complexity from $\mathcal{O}(c+hw)$ to $\mathcal{O}(c+h+w)$. This method is effective when $h$ and $w$ are large. To analyze its complexity, we let all latent dimensions be at most $d$ as before. The complexity of the forward transformation is $\mathcal{O}(dchw)$; the complexity of computing the determinant is $\mathcal{O}(d(c+h+w)+d^3)$; and the complexity of computing the inverse is $\mathcal{O}(dchw + d^2(c+ch+cw)+d^3)$. As with Woodbury transformations, $d$ is a constant, so we can omit it when the input is high-dimensional. Therefore, the computational complexities of the memory-efficient Woodbury transformation are also linear in the input size.
We list the complexities of different methods in Table [2](#tab:complexity){reference-type="ref" reference="tab:complexity"}. The computational complexities of Woodbury transformations are comparable to those of other methods, and may be smaller when the input is high-dimensional, i.e., when $c$, $h$, and $w$ are large.
:::: center
::: {#tab:complexity}
  Method                       Forward                            Backward
  ---------------------------- ---------------------------------- ----------------------------------
  1x1 convolution              $\mathcal{O}(c^2hw+c^3)$           $\mathcal{O}(c^2hw)$
  Periodic convolution         $\mathcal{O}(chw\log(hw)+c^3hw)$   $\mathcal{O}(chw\log(hw)+c^2hw)$
  Emerging convolution         $\mathcal{O}(c^2hw)$               $\mathcal{O}(c^2hw)$
  ME-Woodbury transformation   $\mathcal{O}(dchw)$                $\mathcal{O}(dchw)$
  Woodbury transformation      $\mathcal{O}(dchw)$                $\mathcal{O}(dchw)$

  : Comparisons of computational complexities.
:::
::::
In this section, we present additional details about our experiments to aid reproducibility.