\section{Introduction} \begin{figure}[t] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{cover.pdf}} \caption{Visualization of SR results on an NTIRE2020 validation image. The red one is our proposed dSRVAE approach.} \label{Figure 1} \end{center} \vskip -0.3in \end{figure} Example based image Super-Resolution (SR) is a classic supervised learning approach that has inspired many SR works. The concept is based on the observation that the same patterns are likely to repeat across an image. In order to fill in the missing pixels accurately, researchers have proposed many approaches to model image patterns for prediction. Pixel based interpolation is one of the earliest learning based SR approaches. It models a group of pixels by assuming geometric duality across neighbourhoods. The problem is that an individual pixel carries very little information, so the assumption only holds in a small region. To better capture the patterns, patch based SR approaches~\cite{KSVD,NCSR,A+,SRRMF,CRFSR,ISCAS17,ICIP18} were proposed and dominated the field for a long time. Here, researchers use patches rather than pixels, based on piece-wise linearity: the complete image can be divided into many patches, and each patch can be modelled by a simple linear regression. Similar patches can be found not only in the image itself but also in external images, and there are substantial research works investigating internal or external based image SR. In order to improve SR quality, more data are exploited for patch clustering and regression, but such methods quickly become cumbersome and overly complex. Convolutional Neural Networks (CNNs) work better than most machine learning approaches because they can digest huge amounts of data to learn different filters for feature extraction via backpropagation.
Many CNN based SR approaches~\cite{SRCNN,VDSR,LapSRN,EDSR,RCAN,DBPN,HBPN,ABPN,FrequencySF,RefSR,ZSSR,BlindSR,SRGAN,ESRGAN,NTIRE2019,NTIRE2020RWSRchallenge,AIM2019RWSRchallenge,lugmayrICCVW2019} have successfully boosted image super-resolution performance in both computation and quality. Most pixel and patch based algorithms rely on supervised learning: they require paired low-resolution (LR) and ground truth high-resolution (HR) images to build the reconstruction mapping. To mimic real images, the most common process is to simulate LR images from HR images by spatial domain down-sampling (Bicubic) or transform domain down-sampling (DCT and Wavelet). However, this kind of simulation oversimplifies the real situation, where real images may also be degraded by various noises or photo editing. A better simulation uses cameras to capture LR and HR images with different focal lengths and then aligns the pixels by image registration~\cite{NTIRE2019}. Though researchers have come up with different simulations to model the down-sampling process, each still targets one specific application. Real-world super-resolution is far more complicated. As investigated in~\cite{AIM2019RWSRchallenge,NTIRE2020RWSRchallenge}, no ground-truth LR-HR image pairs are available, and most supervised image SR approaches suffer from overfitting. As shown in Figure~\ref{Figure 1}, once the down-sampling differs from the assumption, supervised approaches fail, while our proposed method generates robust and good results. Instead of learning the reconstruction in a supervised manner, in this work we propose a novel unsupervised real image denoising and Super-Resolution approach via Variational AutoEncoder (dSRVAE). We add a denoising task to super-resolution because real images usually contain various types of noise and degradation.
With the lack of target HR images, pursuing lower pixel distortion loses its meaning. As shown by Generative Adversarial Network (GAN) based SR approaches~\cite{BlindSR,SRGAN,ESRGAN,CycleGAN}, a discriminator can push the network to generate photo-realistic results at the cost of pixel distortion. Based on this observation, the proposed network is made of two parts: a Denoising AutoEncoder (DAE) and a Super-Resolution Sub-Network (SRSN) with an attached discriminator. In contrast to previous works, our contributions are as follows: \begin{enumerate} \item To the best of our knowledge, this is the first work on joint real image denoising and super-resolution via unsupervised learning. \item This is also the first work combining a Variational AutoEncoder and a Generative Adversarial Network for image super-resolution. \item To stabilize the adversarial training, we propose a simple cycle training strategy that forces the network to balance the reference and super-resolved images. \end{enumerate} \section{Related Work} In this section, we give a brief review of previous works related to our proposed method. We focus on perceptual image super-resolution, hence omitting the main body of works on generative approaches for image super-resolution. We also introduce related unsupervised learning for image super-resolution, such as blind image SR. Interested readers may refer to the literature for more details. \subsection{Perceptual Image Super-Resolution} In the past few years, it has been widely observed that there is a tradeoff between distortion and perception. SR approaches focused on reducing pixel distortion tend to generate over-smooth results. For practical applications, in the absence of ground truth images, researchers are more attracted to images with distinct textures (even fabricated ones). Generative Adversarial Networks~\cite{BlindSR,SRGAN,ESRGAN}, adopted by many SR approaches, have the ability to provide photo-realistic images.
The basic idea is to train the generator well enough that the discriminator cannot distinguish SR images from HR images. Additional pre-trained deep networks are usually used to measure key feature losses. SRGAN~\cite{SRGAN} is the first work using a GAN for perceptual image SR. It uses VGG feature maps to encourage visually pleasant image generation. ESRGAN~\cite{ESRGAN} further improves the visual quality by replacing the standard discriminator with a relativistic discriminator, which sharpens textures and edges. To evaluate perceptual quality, some works~\cite{PIRM,LPIPS} measure visual quality by handcrafted or learned criteria. For instance, the Learned Perceptual Image Patch Similarity (LPIPS) metric uses deep network activations to score image visual quality. \begin{figure*}[h] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.65\textwidth]{network.pdf}} \caption{Complete structure of the proposed dSRVAE model. It includes a Denoising AutoEncoder (DAE) and a Super-Resolution Sub-Network (SRSN). The discriminator is attached for photo-realistic SR generation.} \label{Figure 2} \end{center} \vskip -0.3in \end{figure*} \subsection{Real-World Super-Resolution} Given that a real image contains more complicated noise and artifacts, real-world super-resolution was proposed to resolve the problem. There are two features of ``real-world'' super-resolution: 1) online training and testing, and 2) estimating degradation factors using prior information. One representative work is ZSSR~\cite{ZSSR}. It uses the low-resolution image itself to learn internal statistics for super-resolution; no prior information is required for training. It can be considered the first CNN based unsupervised image SR approach.
On the other hand, with the huge learning capacity of deep neural networks, we can assume degradation factors in low-resolution image generation, such as adding different noise levels or forming blur kernels with combinations of scale factors, and then combine these factors for general image super-resolution. For example, we can have joint demosaicing and super-resolution~\cite{Demosaic_SR_1,Demosaic_SR_2}, joint denoising and super-resolution~\cite{Denoise_SR}, and joint deblurring, denoising and super-resolution~\cite{BlindSR,ss,CycleGAN}. Considering that real images are normally obtained by an unknown or non-ideal process, it is cumbersome or even impossible to include all degradation factors in the training phase. A better real-world image SR model should be learned in an unsupervised manner, where ground truth images are not involved in the training stage. \section{The Proposed Method} In the following section, we give a detailed introduction of our proposed work. Let us formally define real image SR. Mathematically, we are given an LR image $\mathbf{X}\in\mathbb{R}^{\mathit{m}\times\mathit{n}\times3}$ which may be down-sampled from an unknown HR image $\mathbf{Y}\in\mathbb{R}^{\mathit{\alpha m}\times\mathit{\alpha n}\times3}$, where ($\mathit{m}$, $\mathit{n}$) is the dimension of the image and $\alpha$ is the up-sampling factor. They are related by the following degradation model, \begin{small} \begin{equation} \mathbf{X}=\mathbf{sKY}+\mu \tag{1} \label{Equation 1} \end{equation} \end{small}\\ where $\mu$ is the additive noise, \textbf{s} is the down-sampling operator and \textbf{K} is the blur kernel.
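The degradation model in Equation~\ref{Equation 1} can be illustrated with a minimal NumPy sketch. The Gaussian blur kernel, the stride down-sampling and the noise level below are illustrative assumptions only, since for real images the true operators $\mathbf{s}$, $\mathbf{K}$ and the noise $\mu$ are unknown:

```python
import numpy as np

def degrade(Y, alpha=4, blur_sigma=1.0, noise_std=5.0, seed=0):
    """Sketch of X = sKY + mu for a single-channel image Y.
    K: separable Gaussian blur (an assumed kernel shape),
    s: stride-alpha down-sampling, mu: additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    r = np.arange(-2, 3)
    k = np.exp(-r**2 / (2.0 * blur_sigma**2))
    k /= k.sum()
    blurred = Y.astype(float)
    for axis in (0, 1):  # apply K row-wise, then column-wise
        blurred = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, blurred)
    lr = blurred[::alpha, ::alpha]                    # s: keep every alpha-th pixel
    return lr + rng.normal(0.0, noise_std, lr.shape)  # + mu
```

Real sensor noise is signal-dependent, which is exactly why the paper decouples denoising from super-resolution rather than assuming this additive model holds.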
The goal of image SR is to resolve Equation~\ref{Equation 1} as a Maximum A Posteriori (MAP) problem as follows, \begin{small} \begin{equation} \mathbf{\hat{Y}}=\underset{\mathbf{Y}}{\arg\max}\, \log \mathit{P}(\mathbf{X|Y})+\log \mathit{P}(\mathbf{Y}) \tag{2} \label{Equation 2} \end{equation} \end{small}\\ where $\mathbf{\hat{Y}}$ is the predicted SR image, $\log\mathit{P}(\mathbf{X|Y})$ represents the log-likelihood of LR images given HR images, and $\log\mathit{P}(\mathbf{Y})$ is the prior on HR images that is used for model optimization. Formally, we resolve the image SR problem as follows, \begin{small} \begin{equation} \underset{\theta}{\min} \Arrowvert\mathbf{Y-\hat{Y}}\Arrowvert^\mathit{r} \ \text{s.t.}\ \mathbf{\hat{Y}}=\underset{\mathbf{Y}}{\arg\min} \frac{1}{2}\Arrowvert\mathbf{X}-\mathbf{sKY}\Arrowvert^2+\lambda\Omega(\mathbf{Y}) \tag{3} \label{Equation 3} \end{equation} \end{small}\\ where $\Arrowvert\ast\Arrowvert^\mathit{r}$ represents the $\mathit{r}$-th order estimation of pixel based distortion and the regularization term $\Omega(\mathbf{Y})$ controls the complexity of the model. The noise term is omitted in Equation~\ref{Equation 3} on the assumption that the noise is independent of the signal, so the residual between the estimation and the ground truth can be optimized by various linear or non-linear approaches. In the real world, however, the noise comes from the camera sensor or data compression and is signal-dependent. Direct super-resolution usually fails to generate clean images. For practical applications, a generalized super-resolution model is required to handle various degradations and distortions. It is therefore useful to first decouple the noise from the LR image and then perform super-resolution. Meanwhile, this disentanglement process can also be beneficial to other real applications.
As shown in Figure~\ref{Figure 2}, we propose a joint image denoising and Super-Resolution model using a generative Variational AutoEncoder (dSRVAE). It includes two parts: a Denoising AutoEncoder (DAE) and a Super-Resolution Sub-Network (SRSN). In the absence of target images, a discriminator is attached to the autoencoder to encourage the SR images to pick up the desired visual patterns from the reference images. The details of the structure are discussed in the following parts. \subsection{Denoising AutoEncoder (DAE)} Mathematically, a conditional Variational AutoEncoder (VAE) can be formulated as follows, \begin{small} \begin{equation} P(\mathbf{Y|X})=\int P(\mathbf{Y|X},z)P(z|\mathbf{X})\, dz \tag{4} \label{Equation 4} \end{equation} \end{small}\\ where the vector $z$ is sampled from the high-dimensional space $\mathbf{Z}$. The VAE aims to learn the latent variable that describes the conditional distribution. Introducing a variational distribution $Q_{\phi}(z|\mathbf{X,Y})$ and applying Jensen's inequality, we can bound Equation~\ref{Equation 4} as \begin{small} \begin{equation} \tag*{(5)} \begin{split} \log P_{\theta}(\mathbf{Y|X})&=\log\int P_{\theta}(\mathbf{Y|X},z)P_{\theta}(z|\mathbf{X})\, dz \\ &\ge E_{Q_{\phi}(z|\mathbf{X,Y})}\left[\log\frac{P_{\theta}(\mathbf{Y|X},z)P_{\theta}(z|\mathbf{X})}{Q_{\phi}(z|\mathbf{X,Y})}\right] \end{split} \label{Equation 5} \end{equation} \end{small}\\ We design the network to learn parameters $\theta$ that maximize the data log-likelihood $P_{\theta}(\mathbf{Y|X})$. Equation~\ref{Equation 5} can be further rearranged as follows, \begin{small} \begin{equation} \tag*{(6)} \begin{split} \log P_{\theta}(\mathbf{Y|X})&\ge E_{Q_{\phi}(z|\mathbf{X,Y})}\left[\log\frac{P_{\theta}(\mathbf{Y|X},z)P_{\theta}(z|\mathbf{X})}{Q_{\phi}(z|\mathbf{X,Y})}\right] \\ &=E_{Q_{\phi}(z|\mathbf{X,Y})}[\log P_{\theta}(\mathbf{Y|X},z)] \\ &-KL[Q_{\phi}(z|\mathbf{X,Y})\,\Vert\,P_{\theta}(z|\mathbf{X})] \end{split} \label{Equation 6} \end{equation} \end{small}\\ where $KL[p\,\Vert\,q]$ represents the KL divergence.
Equation~\ref{Equation 6} can be interpreted as follows: the encoder learns a set of parameters $\phi$ to approximate the posterior with $Q_{\phi}(z|\mathbf{X,Y})$, while the decoder learns parameters $\theta$ to represent the likelihood $P_\theta(\mathbf{Y|X},z)$. The KL divergence measures the mismatch between the approximate posterior $Q_{\phi}(z|\mathbf{X,Y})$ and the prior $P_\theta(z|\mathbf{X})$. In order to compute gradients for backpropagation, the ``reparameterization trick''~\cite{VAE} is used to sample from $Q_{\phi}(z|\mathbf{X,Y})$: the latent variable is computed as $z=\mu(\mathbf{X,Y})+\varepsilon\ast\sigma^{0.5}(\mathbf{X,Y})$. To utilize the variational autoencoder for image denoising, the posterior needs to be modified from $P(\mathbf{Y|X})$ to $P(\mathbf{T|X})$, where $\mathbf{T}$ is the target clean image. The encoder compresses the clean image to learn the latent variables, and the decoder learns to extract the noise from the noisy image and the sampled vector $\mathit{z}$. Since results in~\cite{SRCNN,ESRGAN} show that the VGG19 network~\cite{VGG} is a good feature extractor for image processing, we discard its fully connected layers and use the remaining convolution layers as the encoder to extract feature maps from the clean image. Let us mathematically define the training process of the DAE as follows. \begin{small} \begin{equation} \tag*{(7)} \begin{split} \frac{1}{N}&\sum_{n=1}^N \log P_{\theta}(\mathbf{T_n|X_n}) \\ &\ge\frac{1}{N}\sum_{n=1}^N E_{Q_{\phi}(z|\mathbf{X_n,T_n})}\left[\log P_{\theta}(\mathbf{T_n|X_n},z=\mu+\varepsilon\ast\sigma^{0.5})\right] \\ &-KL[Q_{\phi}(z|\mathbf{X_n,T_n})\,\Vert\,P_\theta(z|\mathbf{X_n})] \end{split} \label{Equation 7} \end{equation} \end{small}\\ where \textit{N} is the batch size. The output of the decoder is an estimate of the noise pattern. By subtracting it from the real LR image, we obtain the clean image for the subsequent super-resolution process.
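The reparameterization step and the Gaussian KL term can be sketched in a few lines of NumPy. This is a toy version assuming diagonal Gaussians and, for the KL illustration only, a standard-normal prior (the prior in the model is conditioned on $\mathbf{X}$, not necessarily standard normal):

```python
import numpy as np

def reparameterize(mu, log_var, seed=0):
    """z = mu + eps * sigma**0.5, with sigma the predicted variance
    (log-parameterized here, a common convention) and eps ~ N(0, I).
    Sampling this way keeps z differentiable w.r.t. mu and log_var."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(np.shape(mu))
    return mu + eps * np.exp(log_var) ** 0.5

def kl_diag_gauss(mu, log_var):
    """Closed-form KL[N(mu, diag(var)) || N(0, I)] -- illustrative stand-in
    for the KL term between the approximate posterior and the prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

Because the randomness is isolated in `eps`, the gradient flows through `mu` and `log_var`, which is the whole point of the trick.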
During testing, the encoder can be discarded and only the decoder is needed for image denoising. \subsection{Super-Resolution Sub-Network (SRSN)} After the denoising process, we propose a light subnetwork for image enlargement, which we refer to as the Super-Resolution Sub-Network (SRSN). As shown in Figure~\ref{Figure 2}, in order to obtain images with photo-realistic visual quality, a discriminator is attached to form a generative adversarial network. The basic structure of the SRSN is a set of hierarchical residual blocks, which have been widely used in several works~\cite{EDSR,LapSRN,RCAN}. In order to match the dimension, the denoised image is initially up-sampled to the desired dimension by bicubic interpolation. \begin{figure}[t] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.7\columnwidth]{cycle_train.pdf}} \caption{Spatial and frequency domain differences between HR and LR images.} \label{Figure 3} \end{center} \vskip -0.3in \end{figure} Since there are no ground truth HR images to calculate a reconstruction loss (e.g. an L1-norm loss), we propose a novel cycle training strategy derived from back-projection theory, which differs from previous related works~\cite{CycleGAN,CycleGAN2017}. Let us use Figure~\ref{Figure 3} to interpret image SR from the signal processing perspective. An HR image contains both low- and high-frequency components: the former represents the basic structure of the image, while the latter represents complex patterns such as edges and textures. Assume that we obtain a ``perfect'' SR image and down-sample it to generate the corresponding LR image. The LR image then stands for the low-frequency information of the SR image. The super-resolution process can be updated by back-projecting the residues learned from the down-sampled SR image and the original LR image. We therefore form a cycle that checks the network's ability to make robust super-resolution.
Mathematically, we have the following loss function to express the cycle training strategy. \begin{small} \begin{equation}\tag*{(8)} \begin{split} L_{MAE}=&\sum_{c=1}^C \sum_{h=1}^H \sum_{w=1}^W |s(\mathbf{Y})_{c,h,w}-\mathit{g}(\mathbf{X})_{c,h,w}| \\ & + |\mathbf{Y}_{c,h,w}-\mathbb{Y}_{c,h,w}| \\ & \text{where} \ \mathbb{Y}=\mathit{f}(s(\mathbf{Y})), \ \mathbf{Y}=\mathit{f}(\mathit{g}(\mathbf{X})) \end{split} \label{Equation 8} \end{equation} \end{small}\\ where $L_{MAE}$ is the pixel based Mean Absolute Error (MAE), \textit{f} and \textit{g} denote the SRSN and DAE mappings, \textit{C}, \textit{H} and \textit{W} are the size of the SR image, and \textit{s} is the down-sampling operator (Bicubic, for simplicity). $\mathbf{Y}$ is the output SR image and $\mathbb{Y}$ is the back-projected SR image. Equation~\ref{Equation 8} is a loose constraint on image super-resolution because there is no ground truth to compute the actual loss. The first term in Equation~\ref{Equation 8} guarantees low-frequency consistency and the second term forces the back-projected SR image to be close to the SR image. We use Bicubic down-sampling because the real down-sampling operator is too complicated to model. It is unnecessary to enforce an exact match between the down-sampled SR image $s(\mathbf{Y})$ and the denoised LR image $\mathit{g}(\mathbf{X})$, because the network is trained until the estimated LR is close to the ground truth LR. On the other hand, Equation~\ref{Equation 8} does not provide strong supervision for the high-frequency reconstruction, and it is crucial to constrain the high-frequency component. We therefore add a discriminator that takes both the reference image and the SR image as inputs for real-fake classification. Its objective is to distinguish the high-frequency differences between the SR and reference images.
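A toy NumPy version of the cycle loss in Equation~\ref{Equation 8} looks as follows, with block-average down-sampling standing in for the bicubic operator $s$, and placeholder callables for $f$ (SRSN) and $g$ (DAE):

```python
import numpy as np

def down(img, alpha=4):
    # stand-in for s: block-average down-sampling by factor alpha
    h = img.shape[0] // alpha * alpha
    w = img.shape[1] // alpha * alpha
    return img[:h, :w].reshape(h // alpha, alpha, w // alpha, alpha).mean(axis=(1, 3))

def cycle_loss(f, g, X, alpha=4):
    """L_MAE of Eq. (8): |s(Y) - g(X)| keeps low-frequency consistency;
    |Y - f(s(Y))| ties the back-projected SR image to the SR image."""
    Y = f(g(X))               # SR output
    Y_bp = f(down(Y, alpha))  # back-projected SR image
    return np.abs(down(Y, alpha) - g(X)).sum() + np.abs(Y - Y_bp).sum()
```

For a perfectly cycle-consistent pair, e.g. nearest-neighbour up-sampling as $f$ and the identity as $g$, both terms vanish; during training, neither networks nor ground truth HR images are needed to evaluate this loss.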
Considering that there are no corresponding HR images, for $\alpha\times$ image SR we randomly crop an $\alpha H \times\alpha W$ patch from the reference image to match the dimension of the SR result. To encourage the network to pick up photo-realistic features, we also use a pre-trained VGG19 to extract feature maps for estimation. Both the SR and the denoised LR images are sent to VGG19 to output the feature maps obtained by the 4th convolution layer before the 5th ``Maxpooling'' layer. The SR feature maps are down-sampled by $\alpha\times$ to match the LR feature maps. The total training loss is described as follows, \begin{small} \begin{equation}\tag*{(9)} \begin{split} \mathbf{L}=& \lambda \left \| s(\phi_i(\mathit{f}(\mathit{g}(\mathbf{X}))))-\phi_i(\mathit{g}(\mathbf{X})) \right \|_1^1 \\ & + \eta \log[1-D_{\theta_D} (G_{\theta_G} (\mathit{g}(\mathbf{X})))] + L_{MAE} \end{split} \label{Equation 9} \end{equation} \end{small} where $\lambda$ and $\eta$ are two weighting parameters that balance the VGG feature loss and the adversarial loss, $\theta_G$ and $\theta_D$ are the learnable parameters of the generator and discriminator, respectively, and $\phi_i$ represents the features from the $i$-th convolutional layer. \section{Experiments} \subsection{Data Preparation and Network Implementation} We conducted experiments with the training data provided by the NTIRE2020 Real World Super-Resolution Challenge~\cite{NTIRE2020RWSRchallenge}. The training dataset is formed by Flickr2K and DIV2K, which both contain images with resolution larger than 1000$\times$1000. The Flickr2K dataset is not only degraded by unknown factors but also down-sampled $4\times$ by an unknown operator. The objective is to learn a mapping from the source domain (Flickr2K) to the target domain (DIV2K). We extracted patches of size 128$\times$128 from the training dataset.
For the discriminator of the proposed SRSN, we extracted 512$\times$512 patches as references for training. For testing, we focused not only on super-resolution but also on denoising of real images. The testing datasets include BSD68~\cite{BSD68}, Set5~\cite{Set5}, Urban100~\cite{Urban100}, NTIRE2019 Real Images~\cite{NTIRE2019} and NTIRE2020 validation~\cite{NTIRE2020RWSRchallenge}. Among them, BSD68 is a common dataset for image denoising, while Set5 and Urban100 are used for image super-resolution. NTIRE2019 Real Images contains 20 images captured by different cameras with various noise and blurring effects. NTIRE2020 validation includes images with the same degradation as the training images. To efficiently super-resolve the LR image, we used the pre-trained VGG19 (with the fully connected layers removed) as the encoder of the proposed DAE. The length of the latent vector is 512. The decoder is made of 2 deconvolution layers with kernel size 6, stride 4 and padding 1, and 3 residual blocks with kernel size 3, stride 1 and padding 1. The Super-Resolution Sub-Network (SRSN) has 4 residual blocks; each residual block contains 64 kernels of size 3, stride 1 and padding 1. In the following experiments, we demonstrate that the proposed dSRVAE can achieve comparable or even better SR performance. We conducted our experiments using PyTorch 1.4 on a PC with two NVIDIA GTX1080Ti GPUs. During training, we set the learning rate to 0.0001 for all layers. The batch size was set to 16 for 1$\times10^6$ iterations. For optimization, we used Adam with momentum set to 0.9 and a weight decay of 0.0001. The code and more experimental results can be found at the following link: \url{https://github.com/Holmes-Alan/dSRVAE}. We encourage readers to download the SR results from the link for better visual comparison. \subsection{Image Denoising} For our proposed dSRVAE, the Denoising AutoEncoder (DAE) is trained to remove noise from the input LR image.
To demonstrate the capability of the Variational AutoEncoder, we tested on two different datasets: BSD68 and NTIRE2019. Note that BSD68 is a clean dataset to which random noise can be added for evaluation, while the NTIRE2019 dataset was originally collected for image super-resolution. We used it because it was captured in real life by cameras; it reflects the real image processing scenario and can therefore be used for denoising evaluation. In order to evaluate the efficiency of the DAE for denoising, we designed a plain convolutional network made of multiple convolutional layers for comparison, which we refer to as \textit{net-CNN}. We also experimented with other state-of-the-art image denoising approaches and show the comparison in the following table. \begin{table}[b] \caption{Quantitative comparison of different networks for image denoising. {\color{red}Red} indicates the best results.} \label{Table 1} \vskip -0.1in \begin{center} \begin{small} \scalebox{0.7}{ \begin{tabular}{ccccccc} \hline \multicolumn{7}{c}{BSD68($\sigma=$15)} \\ \hline Algorithm & BM3D & DnCNN & FFDNet & TNRD & net-CNN & DAE (ours) \\ PSNR (dB) & 31.07 & 31.73 & 31.63 & 31.42 & 31.56 & {\color{red}31.81} \\ \hline \multicolumn{7}{c}{NTIRE2019} \\ \hline \multicolumn{3}{c}{Algorithm} & DnCNN & PD & net-CNN & DAE (ours) \\ \multicolumn{3}{c}{PSNR (dB)} & 29.30 & 29.53 & 29.36 & {\color{red}29.54} \\ \hline \end{tabular} } \end{small} \end{center} \vskip -0.3in \end{table} \begin{figure*}[h] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.7\textwidth]{noise_vis.pdf}} \caption{Visualization of image denoising on NTIRE2020 validation images. Enlarged red boxes are included for better comparison.} \label{Figure 4} \end{center} \vskip -0.3in \end{figure*} In Table~\ref{Table 1}, we compare our approach with five classic denoising approaches on BSD68 and NTIRE2019. The PSNR results show that the proposed DAE achieves better performance.
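The PSNR numbers used throughout the comparisons can be sketched in NumPy as follows; this version measures PSNR on the luminance (Y) channel with BT.601 weights and an 8-bit peak, which are assumed conventions for these tables:

```python
import numpy as np

def psnr_y(img1, img2, peak=255.0):
    """PSNR (dB) between two HxWx3 RGB images in [0, peak],
    computed on the luminance channel (BT.601 weights assumed)."""
    w = np.array([0.299, 0.587, 0.114])
    y1, y2 = img1 @ w, img2 @ w
    mse = np.mean((y1 - y2) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak**2 / mse)
```

Identical images score infinite PSNR, and maximally different 8-bit images score roughly 0 dB, so the roughly 30 dB figures in the table indicate small residual noise.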
Note that we tested BSD68 with Gaussian noise of level $\sigma=15$. We did not test other Gaussian noise levels because our objective is not additive noise removal; our aim is to illustrate the denoising capability of the proposed DAE model. To show the denoising ability on real images with unknown noise, we tested the NTIRE2020 validation dataset and show the visual comparison in Figure~\ref{Figure 4}. It can be seen from Figure~\ref{Figure 4} that both approaches can remove the noise in the background, like the sky in these two images. More interestingly, the proposed DAE preserves as much detail as possible, while the PD approach tends to over-smooth edge areas (check the windows on the buildings and the textures on the wheel) to remove the noise. \subsection{Image Super-Resolution} More importantly, to prove the effectiveness of the proposed dSRVAE network, we conducted experiments comparing against state-of-the-art SR algorithms: Bicubic, SRGAN~\cite{SRGAN}, ESRGAN~\cite{ESRGAN} and BlindSR~\cite{BlindSR}. PSNR and SSIM were used to evaluate quantitative distortion performance and the PI score~\cite{PIRM} was used to indicate perceptual performance. PSNR and SSIM were calculated by converting the RGB image to YUV and taking the Y-channel image for estimation, while PI takes the RGB image. We only focus on 4$\times$ image SR. All approaches were evaluated using the code provided by the corresponding authors. In the following sections, we evaluate different down-sampling scenarios, including ideal bicubic down-sampling, camera simulation and unknown degradation. \begin{figure*}[h] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.7\textwidth]{bic_vis.pdf}} \caption{Visualization of 4$\times$ image super-resolution on Set5 and Urban100 images.
Enlarged red boxes are included for better comparison.} \label{Figure 5} \end{center} \vskip -0.3in \end{figure*} \noindent\textbf{Analysis on ideal Bicubic down-sampled SR}\\ First, for classic image SR, we assume bicubic as the standard down-sampling operator. With sufficient training images and deeper structures, many works have been proposed to improve SR performance. Our network was originally trained in an unsupervised way for real images, so it cannot be directly compared with most existing SR approaches. For a fair comparison, we modified our network to take paired LR and HR images for supervised training. The MAE loss function in Equation~\ref{Equation 8} was modified to calculate the errors between SR and HR images, and the adversarial loss was kept for photo-realistic image SR. For objective measurement, Table~\ref{Table 2} shows the quantitative results of the different approaches. \begin{table}[t] \caption{Quantitative comparison of different networks for 4$\times$ image super-resolution on Set5 and Urban100. {\color{red}Red} indicates the best results.} \label{Table 2} \vskip -0.1in \begin{center} \begin{small} \scalebox{0.9}{ \begin{tabular}{cccccc} \hline \multicolumn{2}{c}{\multirow{2}{*}{Algorithm}} & \multicolumn{2}{c}{Set5} & \multicolumn{2}{c}{Urban100} \\ \multicolumn{2}{c}{} & PSNR & PI & PSNR & PI \\ \hline Bicubic & \multirow{4}{*}{4$\times$} & 28.42 & 7.370 & 23.64 & 6.944 \\ ESRGAN & & 30.47 & 3.755 & 24.36 & {\color{red}3.484} \\ SRGAN & & 29.40 & {\color{red}3.355} & 24.41 & 3.771 \\ dSRGAN(ours) & & {\color{red}31.46} & 4.836 & {\color{red}26.33} & 4.481 \\ \hline \end{tabular} } \end{small} \end{center} \vskip -0.3in \end{table} \begin{figure*}[h] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.75\textwidth]{ntire2019_vis.pdf}} \caption{Visualization of 4$\times$ image super-resolution on NTIRE2019 validation.
Enlarged red boxes are included for better comparison.} \label{Figure 6} \end{center} \vskip -0.3in \end{figure*} Table~\ref{Table 2} lists the PSNR and PI scores on Set5 and Urban100 for 4$\times$ SR. Higher PSNR means lower distortion, and a lower PI score means better visual quality. The results show that the proposed network achieves performance comparable to state-of-the-art image SR approaches. Since all approaches focus on perceptual quality, we use Figure~\ref{Figure 5} to demonstrate the visual comparison. Figure~\ref{Figure 5} shows two examples from Set5 and Urban100. We can see that the proposed dSRVAE provides photo-realistic details, like the textures on the hat of the \textit{Baby} and the metal bars in the mirror of \textit{img004}. \noindent\textbf{Analysis on real images captured by cameras}\\ In this part, we show a visual comparison on the NTIRE2019 dataset, which contains images captured by different cameras under different conditions. We use this dataset to test the generalization of the proposed dSRVAE. Our comparison includes supervised approaches (ESRGAN, SRGAN) and BlindSR, a blind image SR approach trained on different blur and down-sampling kernels. From the results in Figure~\ref{Figure 6}, we can see that the proposed dSRVAE not only effectively removes the noise from the LR image, but also preserves the original pattern without severe distortion. For example, ESRGAN generates much sharper edges on the text of image \textit{Cam2\_03}, but with some bizarre patterns. On the other hand, compared with BlindSR, dSRVAE provides sharper reconstruction without distorting the pattern of the text. Similar results can also be observed for image \textit{cam2\_05}. \noindent\textbf{Analysis on real images with unknown degradation factors}\\ Finally, let us make a comparison on the NTIRE2020 testing images. This dataset contains 100 high-resolution images.
The LR images were down-sampled 4$\times$ by unknown degradation factors, including noise and artifacts. This is more complicated than simple bicubic down-sampling or camera simulation scenarios. Without knowing the ground truth images, we provide visualizations of different SR approaches to illustrate the SR performance. \begin{figure*}[h] \vskip 0.01in \begin{center} \centerline{\includegraphics[width=0.85\textwidth]{ntire2020_vis_5.pdf}} \caption{Visualization of 4$\times$ image super-resolution on NTIRE2020 validation. Enlarged red boxes are included for better comparison.} \label{Figure 7} \end{center} \vskip -0.3in \end{figure*} Based on our assumption, jointly learning denoising and super-resolution is beneficial for real image SR, because real images contain various noise that cannot be resolved by a single SR model alone, especially when the noise is signal-dependent. It is useful to disentangle the correlation between noise and signal for subsequent processing. To demonstrate the performance of our proposed dSRVAE, we compare with two perceptual image SR approaches (ESRGAN and SRGAN) and one blind image SR approach (BlindSR). Our target is to test whether the proposed ``first-denoising-then-SR'' strategy works. In Figure~\ref{Figure 7}, dSRVAE refers to our final result. We separate the Denoising AutoEncoder and the Super-Resolution Sub-Network to test independently whether they work. We refer to SRSN as the variant without using DAE for denoising, and to ``DAE+ESRGAN'' as first using DAE for denoising and then ESRGAN for SR. Figure~\ref{Figure 7} shows the results on images ``0922'' and ``0953''. We can see that the keyboard and the road are much better reconstructed by dSRVAE. SRGAN and ESRGAN were trained using clean LR images down-sampled by bicubic interpolation, so they cannot handle the noise. BlindSR, on the other hand, can partially remove some noise but cannot provide much improvement on the textures.
DAE+ESRGAN helps to remove the noise, with a slight blurring effect because it was not trained end-to-end. Using SRSN alone is affected by the noise. Our approach, dSRVAE, can effectively remove the noise and also improve the overall quality. \section{Discussion} In this paper, we propose an unsupervised real image super-resolution approach with a Generative Variational AutoEncoder. Two key points were introduced: 1) a Variational AutoEncoder for image denoising and 2) a cycle training strategy for unsupervised image super-resolution. In order to obtain photo-realistic SR images, we combine a variational autoencoder and a generative adversarial network for joint image denoising and super-resolution. Experimental results show that our proposed real image denoising and Super-Resolution via Variational AutoEncoder (dSRVAE) approach achieves good perceptual performance on different datasets. More importantly, results on the NTIRE2019 and NTIRE2020 datasets show that the proposed dSRVAE can handle real image super-resolution for practical applications. {\small \bibliographystyle{ieee_fullname}
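As a side note on the evaluation protocol used above (PSNR computed on the Y channel of a YUV conversion), a minimal sketch follows; the BT.601 luma coefficients and the toy three-pixel images are our own assumptions for illustration, not taken from the paper.

```python
# Sketch of Y-channel PSNR, assuming BT.601 luma coefficients.
import math

def rgb_to_y(pixel):
    """Map an (R, G, B) pixel in [0, 255] to the luma (Y) value."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr_y(img_a, img_b, peak=255.0):
    """PSNR between two RGB images (lists of (R, G, B) pixels) on the Y channel."""
    assert len(img_a) == len(img_b)
    mse = sum((rgb_to_y(p) - rgb_to_y(q)) ** 2
              for p, q in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

# Identical images give infinite PSNR; a one-level perturbation gives ~56 dB.
ref = [(120, 64, 30), (200, 180, 90), (10, 20, 30)]
noisy = [(121, 64, 30), (200, 181, 90), (10, 20, 31)]
print(psnr_y(ref, noisy))
```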
\section{Introduction \label{sec:Intro}} There is a growing interest in experimentally verifying the predictions of quantum electrodynamics (QED) in the strong-field, high-intensity regime. To access this regime in experiment, two requirements must be met: (i) an electromagnetic field is present which is sufficiently intense so that many field quanta participate in a given process; (ii) the momentum transfer (recoil) in scattering is large enough that the quantum nature of processes is manifest. Upcoming laser facilities such as ELI-Beamlines~\cite{eli-beams}, ELI-NP~\cite{eli-np}, and SEL~(see~\cite{Danson:2019} for an overview) will reach field strengths that fulfil requirement~(i). One way to fulfil~(ii) is to use laser wakefield accelerated particles, recent successes of which include the generation of positron beams in the lab~\cite{sarri15} and measurement of quantum signals of radiation reaction~\cite{Cole:2017zca,Poder:2018ifi}. The background electromagnetic field strength can be quantified using an intensity parameter, $\xi$, equivalent to the work done by the background over a Compton wavelength, in units of the background photon energy. When $\xi\sim O(1)$, the standard approach of treating the background in perturbation theory fails, because this assumes that processes are more probable when \textit{fewer} background photons are involved. When $\xi\gg1$, an alternative approximation is often employed, in which the instantaneous rate for processes in a constant (`crossed') plane wave background (treated without recourse to perturbation theory) is integrated over the classical trajectories of the scattered particles. This ``locally-constant field approximation'' (LCFA) \cite{Nikishov:1964zza,Ritus:1985,king15d,DiPiazza:2017raw} has the particular advantage that it can be applied to arbitrary external fields. Therefore, when used in conjunction with a classical Maxwell field equation solver, it can be employed for inhomogeneous backgrounds.
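As a rough numerical illustration of the intensity parameter (not part of the original text): for a plane wave, $\xi = eE/(m\omega c)$, where $E$ is the peak electric field obtained from the intensity. The snippet below is a back-of-envelope sketch reproducing the common rule of thumb $\xi \approx 0.85$ at $10^{18}\,\mathrm{W/cm^2}$ for a $1\,\mu$m laser.

```python
# Back-of-envelope estimate of the intensity parameter xi = e*E/(m*omega*c)
# for a laser of given intensity and wavelength (SI units; peak field taken
# from the cycle-averaged intensity of a linearly polarised wave).
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]
C_LIGHT = 2.99792458e8       # speed of light [m/s]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]

def xi(intensity_w_cm2, wavelength_um):
    intensity = intensity_w_cm2 * 1e4                        # -> W/m^2
    e_field = math.sqrt(2.0 * intensity / (EPS0 * C_LIGHT))  # peak E [V/m]
    omega = 2.0 * math.pi * C_LIGHT / (wavelength_um * 1e-6)
    return E_CHARGE * e_field / (M_E * omega * C_LIGHT)

# Rule of thumb: xi ~ 0.85 for 1e18 W/cm^2 at 1 micron; xi scales as sqrt(I).
print(xi(1e18, 1.0))
```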
The locally-constant field approximation is almost exclusively the method by which QED processes in intense fields are added to laser-plasma simulation codes \cite{nerush11,elkina11,ridgers12,king13a,bulanov13,ridgers14,Gonoskov:2014mda,blackburn14,gelfer15,jirka16,gonoskov17,efimenko19}. It has recently been extended in several respects, by including higher derivative corrections \cite{Ilderton:2018nws,DiPiazza:2018bfu,King:2019igt}, analysing simple, non-constant, fields in Schwinger pair production~\cite{aleksandrov19} and extending it to previously neglected processes~\cite{Ilderton:2019bop,Tang:2019ffe}. An alternative approach to probe the strong-field regime of QED is to use a conventional particle accelerator to fulfil the energy condition (ii), and a less intense laser to fulfil the field condition (i). This was demonstrated by the landmark E144 experiment~\cite{Bamber:1999zt} which investigated photon emission~\cite{Bula:1996st} and pair production~\cite{Breit:1934zz,Burke:1997ew} in the weakly nonlinear regime. Using modern high-intensity laser systems, this form of experiment will be performed at E320 at FACET-II and at LUXE \cite{Abramowicz:2019gvx} at DESY, to measure QED in the highly nonlinear, non-perturbative regime, which was out of reach for E144. These experiments will access the intermediate intensity regime $\xi\sim O(1)$, where the locally-constant field approximation breaks down and fails to capture experimental observables such as the harmonic structure in spectra~\cite{Chen:1998,sakai15,Khrennikov15}. To address this problem we derive here, from QED, the ``locally monochromatic approximation'' (LMA). Because the LMA is based upon a perturbation around a monochromatic background, it is not suitable for intense laser-matter collisions where a plasma is generated. Instead, it \emph{complements} the locally-constant-field approximation by covering the regime of high-energy and intermediate intensity where the LCFA becomes invalid. 
As usual, we assume that the laser background is well defined and backreaction~\cite{seipt17,Ekman:2020vsc} can be neglected to a first approximation. Rather than taking the constant crossed field result to be fundamental and the basis of the approximation, the LMA builds upon the monochromatic result, which is more specific to propagating fields such as laser pulses. One can show that both field configurations are `null' (characterised by vanishing field invariants) and thus have the \emph{same} degree of symmetry, so that the dynamics becomes maximally super-integrable in either case \cite{Heinzl:2017zsr,Heinzl:2017blq}. Various numerical codes have already been implemented that include the QED effects of nonlinear Compton scattering and nonlinear Breit-Wheeler pair-creation, by using an ``instantaneously monochromatic'' rate that samples a non-plane-wave field around the probe particles. Examples include the simulation code to support the SLAC E144 experiment \cite{Bamber:1999zt}, CAIN \cite{cain1} and IP Strong \cite{hartin18}, which has lately been used to provide simulation support for the planning of the LUXE experiment \cite{Abramowicz:2019gvx}. In this paper, we formalise the LMA and identify the approximations necessary to derive it from QED. We find that the LMA treats the fast dynamics related to the carrier frequency of the plane wave exactly, but uses a local expansion to describe the slow dynamics associated with the pulse envelope. This combines the slowly-varying envelope approximation~\cite{Narozhnyi:1996qf,McDonald:1997,Seipt:2010ya,Seipt:2014yga,Seipt:2016rtk} with the locally-constant field approximation, improving upon both. It captures features to which the locally-constant field approximation is blind, yet because it is still an explicitly local approximation, it can be added to single-particle simulation codes.
Furthermore, by benchmarking the LMA against exact calculations in pulses, an additional feature in the mid-IR region of nonlinear Compton scattering will become apparent, which may provide an additional signal to be searched for in experiment. The paper is organised as follows. In Sec.~\ref{sec:LMA} we outline the key steps in deriving the LMA for a general first-order strong field QED process. In Sec.~\ref{sec:FullQED} we give an outline of the numerical methods that form the basis of our benchmarking against finite-pulse results. The LMA for nonlinear Compton scattering is then compared to QED in circularly and linearly polarised pulse backgrounds in Sec.~\ref{sec:NLC}. We demonstrate the validity of the LMA for nonlinear Breit-Wheeler pair production in Sec.~\ref{sec:BW}. We conclude in Sec.~\ref{sec:Summary}. In Appendix~\ref{app:NLCLMA}, a detailed derivation of the LMA for nonlinear Compton scattering in a circularly polarised background is presented and in Appendix~\ref{app:Smalls} we include an alternative derivation of the infra-red\footnote{Here and throughout, we use `infra-red' to denote low \textit{lightfront} energy $n\cdot P$, for $n$ the laser propagation direction and $P$ any given particle momentum. This is a natural variable in plane wave calculations.} limit of nonlinear Compton scattering, demonstrating also that the correct limit is trivially reproduced from the LMA. Finally, in Appendix~\ref{app:LCFA}, we show that the locally-constant field approximation can be recovered as a high-intensity limit of the LMA. \section{Outline of the locally monochromatic approximation\label{sec:LMA}} Let the gauge potential of the background, $a_\mu(\varphi)$, depend only on the phase $\varphi = k \cdot x$, with $k$ being the wave four-vector. We will work in lightfront coordinates $x = (x^{\scriptscriptstyle +},x^\LCm,\bm{x}^{\scriptscriptstyle \perp})$ where $x^{{\scriptscriptstyle \pm}} = x^0 \pm x^3$ and $\bm{x}^{\scriptscriptstyle \perp} = (x^1, x^2)$. 
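The lightfront split can be sanity-checked in a few lines (a sketch under the mostly-minus metric convention; the numerical four-vector is an arbitrary example): the Minkowski norm should equal $x^+ x^- - |\bm{x}^{\scriptscriptstyle \perp}|^2$.

```python
# Lightfront coordinates x^± = x^0 ± x^3, x_perp = (x^1, x^2).
# Check that the Minkowski norm (mostly-minus metric) equals
# x^+ x^- - |x_perp|^2.

def minkowski_norm(x0, x1, x2, x3):
    return x0**2 - x1**2 - x2**2 - x3**2

def lightfront(x0, x1, x2, x3):
    xp, xm = x0 + x3, x0 - x3
    return xp, xm, (x1, x2)

x = (1.3, 0.2, -0.7, 0.5)          # arbitrary four-vector (x^0, x^1, x^2, x^3)
xp, xm, perp = lightfront(*x)
lf_norm = xp * xm - perp[0]**2 - perp[1]**2
print(minkowski_norm(*x), lf_norm)
```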
Here $x^{\scriptscriptstyle +}$ is lightfront time while $x^{\LCm}$ and $\bm{x}^{{\scriptscriptstyle \perp}}$ are called the longitudinal and perpendicular directions, respectively~\cite{Heinzl:2000ht}. With this notation, the wave vector of the background $k_\mu = \delta_\mu^+ k_+$, and $\varphi = k_+ x^+$. The scattering amplitude, $S_{fi}$, for an incoming electron with on-shell momentum $p$, $p^{2}=m^{2}$, is then calculated using the Volkov wavefunction~\cite{Volkov:1935zz}, \begin{align}\label{def:Volkov} \Psi_p(x) = \left( 1 + \frac{\slashed{k} \slashed{a}(\varphi)}{2 k \cdot p} \right) u_p e^{- i S_p(x)} \; . \; \end{align} In the exponent, $S_p(x)$ is the classical action for an electron in a plane wave background, \begin{align}\label{def:ClassAct} S_p(x) = p \cdot x + \int^{\varphi}_{- \infty} \frac{2 p \cdot a(t) - a^2(t)}{2 k \cdot p}dt . \end{align} The scattering amplitude $S_{fi}$ in a plane wave background can then be written as \begin{align}\label{def:Amplitude} S_{fi} =& (2\pi)^3 \delta^3_{-,\perp}(p_{\text{in}} - p_{\text{out}}) \mathcal{M}, \; \end{align} with an invariant amplitude $\mathcal{M}$. Due to the non-trivial structure of the background, overall momentum conservation (encoded in the delta functions) only holds in three directions, $\{-,\perp\}$. A closed form solution for phase integrals such as (\ref{def:ClassAct}) is only known for some special cases of the background field, for example infinite ``monochromatic'' plane waves (see e.g.~\cite{Ritus:1985} for extensive applications). Beyond these solutions, one can turn to a numerical approach or employ an approximation. The slowly varying envelope approximation is known to simplify the classical action (\ref{def:ClassAct}) occurring in the exponent and hence make the phase integrations tractable~\cite{Narozhnyi:1996qf,McDonald:1997,Seipt:2010ya,Seipt:2014yga,Seipt:2016rtk}. It is applied as follows. 
Let the pulse $a_\mu(\varphi)$ have the form \begin{align}\label{def:Gauge} a^\mu(\varphi) =& m \, \xi \, f\Big(\frac{\varphi}{\Phi}\Big) \big( \varepsilon^\mu \cos\delta \cos\varphi + \bar{\varepsilon}^\mu \sin\delta \sin\varphi \big) \;, \end{align} where $\xi$ is the dimensionless Lorentz and gauge invariant measure of the field intensity~\cite{Heinzl:2008rh}, $f(\varphi/\Phi)$ is the pulse envelope with phase duration $\Phi$, and $\varepsilon^\mu$ and $\bar{\varepsilon}^\mu$ are polarisation directions satisfying $\varepsilon^2 = \bar{\varepsilon}^2 = -1$ and $\varepsilon \cdot \bar{\varepsilon} = k \cdot \varepsilon = k \cdot \bar{\varepsilon} = 0$. The parameter $\delta \in [0,\pi/2]$ determines the polarisation of the pulse; $\delta = 0$ for linear polarisation along $\varepsilon$, $\delta = \pi/2$ for linear polarisation along $\bar{\varepsilon}$ and $\delta = \pi/4$ for circular polarisation\footnote{We make implicit a normalisation factor in the gauge potential (\ref{def:Gauge}) such that $\text{Max}[a_\mu(\varphi)/(m \xi)] = 1$.}. We consider the pulse envelope to be asymptotically switched on and off, $\lim_{\varphi\to\pm\infty}f(\varphi) = 0$. The slowly varying envelope approximation assumes that the pulse duration $\Phi$ is sufficiently long that terms of order $\mathcal{O}(\Phi^{-1})$ can be neglected. (Higher orders can in principle be included in the approximation, but they lead to a more complicated result that takes longer to evaluate numerically and, as we shall see, the leading-order terms are already sufficient to reproduce the main features of the spectra.) As a result, derivatives of the envelope with respect to the phase can be neglected, because they are of the form $df(\varphi/\Phi)/d\varphi \sim \Phi^{-1}f'(\varphi/\Phi)$. In other words, the envelope varies slowly compared to the fast dynamics of the carrier frequency. The practical benefit of this is that we can simplify the classical action (\ref{def:ClassAct}).
More explicitly, the classical action will have terms both linear and quadratic in the field envelope. In all terms involving both fast and slow oscillations, we integrate by parts, picking up terms of order $\mathcal{O}(\Phi^{-1})$ which we neglect, and so remove the integrals from (\ref{def:ClassAct}). This gives us, for the possible linear terms arising, \begin{align}\label{eqn:LinearIntApprox} \int^{\varphi}_{-\infty} \! \mathrm{d} \psi \; f\Big( \frac{\psi}{\Phi} \Big) \big\{ \cos \psi , \sin \psi \big\} \simeq f\Big( \frac{\varphi}{\Phi} \Big) \big\{ \sin\varphi , - \cos\varphi \big\} \,, \end{align} and for the possible quadratic terms \begin{align}\label{eqn:QuadIntApprox} & \int^{\varphi}_{-\infty} \! \mathrm{d} \psi \; f^2\Big( \frac{\psi}{\Phi} \Big) \big\{\cos^2 \psi , \sin^2 \psi \big\} \nonumber\\ &\simeq \frac12 f^2\Big( \frac{\varphi}{\Phi} \Big) \Big\{ \big( \varphi + \sin\varphi \cos\varphi \big) , \big( \varphi - \sin\varphi \cos\varphi \big) \Big\} \,. \end{align} For the particular case of a circularly polarised background, there arises a term containing only slow oscillations (the integral of $f^2$ without trigonometric functions), which must be approximated by different means (see below). With these approximations, the background-dependent parts of the classical action can always be put in the form \begin{align}\label{eqn:ClassActApprox} S_p(x) &\simeq G\left(\varphi,\frac{\varphi}{\Phi}\right) + \frac{1}{2} \alpha\Big(\frac{\varphi}{\Phi}\Big) \big[u(\varphi) - u^{-1}(\varphi)\big] \nonumber\\ &+ \frac{1}{2} \beta\Big(\frac{\varphi}{\Phi}\Big) \big[v(\varphi) - v^{-1}(\varphi)\big] \; . \end{align} The functions $\alpha$ and $\beta$ are purely slowly-varying functions of the phase $\varphi$. The functions $u(\varphi)$ and $v(\varphi)$ are of the form $\exp(ic\varphi)$, for $c\in\{1,2\}$. 
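The integration-by-parts estimate (\ref{eqn:LinearIntApprox}) is easy to confirm numerically; the Gaussian envelope and the parameter values below are our own choices for illustration.

```python
# Numerical check of the slowly varying envelope approximation:
#   int_{-inf}^{phi} f(psi/Phi) cos(psi) dpsi  ≈  f(phi/Phi) sin(phi),
# with corrections of order 1/Phi.  We use a Gaussian envelope f(x) = exp(-x^2).
import math

def f(x):
    return math.exp(-x * x)

def lhs(phi, Phi, steps=200_000):
    # Trapezoidal rule from -6*Phi (where the envelope is negligible) to phi.
    a = -6.0 * Phi
    h = (phi - a) / steps
    total = 0.5 * (f(a / Phi) * math.cos(a) + f(phi / Phi) * math.cos(phi))
    for i in range(1, steps):
        psi = a + i * h
        total += f(psi / Phi) * math.cos(psi)
    return total * h

Phi, phi = 100.0, 0.7
numeric = lhs(phi, Phi)
approx = f(phi / Phi) * math.sin(phi)
print(numeric, approx)
```

For $\Phi = 100$ the two sides agree to well below the nominal $\mathcal{O}(\Phi^{-1})$ accuracy.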
Note the similarity of the form of the exponent with the generating function for the Bessel function of the first kind, \begin{equation}\label{def:Generating} \exp \left\{ \sfrac{1}{2} z\left( \frac{\varphi}{\Phi} \right) \left[u(\varphi) - u^{-1}(\varphi)\right] \right\} =\! \sum_{n \in \mathbb{Z}} u^{n}(\varphi) J_{n}\!\left[z\left(\frac{\varphi}{\Phi}\right)\right]\!. \end{equation} This was recognised and exploited in~\cite{Narozhnyi:1996qf} and essentially gives a generalisation of the infinite monochromatic field results~\cite{Berestetsky:1982aq,Ritus:1985} to the case where the argument of the Bessel function now depends slowly on the phase. There will also appear rapidly oscillating terms in the pre-exponent, but these can be incorporated by differentiating (\ref{def:Generating}) with respect to $z$ and combining terms. The scattering amplitude will thus be defined in terms of harmonics, represented by the sum over integers~$n$ in (\ref{def:Generating}). So far everything has been typical for the application of the slowly-varying envelope approximation in the strong-field QED literature~\cite{Narozhnyi:1996qf,McDonald:1997,Seipt:2010ya,Seipt:2014yga,Seipt:2016rtk}. It is at this point that we take the further step of performing a local expansion in the phase variables to arrive at a local ``rate'' which can be implemented in one-particle numerical simulations. To define the local expansion, we will concentrate on single (dressed) vertex ``one-to-two'' processes: nonlinear Compton scattering and nonlinear Breit-Wheeler pair production. The amount of literature on these processes has become too large to be cited here in full; regarding nonlinear Compton scattering see \cite{Nikishov:1964zza,Brown:1964zzb,Goldman:1964} for the original papers, \cite{Ritus:1985,Ehlotzky:2009} for reviews and \cite{Harvey:2009ry,Boca:2009zz,Mackenroth:2010jr,Heinzl:2009nd,Seipt:2010ya} for a selection of more recent results. 
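For $u = e^{i\varphi}$, the identity (\ref{def:Generating}) is the Jacobi--Anger expansion $e^{iz\sin\varphi} = \sum_n J_n(z)\, e^{in\varphi}$, which can be checked numerically; the sketch below evaluates $J_n$ from Bessel's integral representation rather than a library call.

```python
# Verify exp(z/2 * (u - 1/u)) = sum_n u^n J_n(z) for u = exp(i*phi),
# i.e. the Jacobi-Anger expansion exp(i z sin(phi)) = sum_n J_n(z) exp(i n phi).
import cmath
import math

def bessel_j(n, z, steps=2000):
    # Bessel's integral: J_n(z) = (1/pi) int_0^pi cos(n*t - z*sin(t)) dt.
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))  # endpoint values t=0, t=pi
    for i in range(1, steps):
        t = i * h
        total += math.cos(n * t - z * math.sin(t))
    return total * h / math.pi

z, phi = 1.3, 0.4
exact = cmath.exp(1j * z * math.sin(phi))
series = sum(bessel_j(n, z) * cmath.exp(1j * n * phi) for n in range(-20, 21))
print(exact, series)
```

Truncating the harmonic sum at $|n| = 20$ is ample here, since $J_n(z)$ decays rapidly for $n \gg z$.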
Nonlinear Breit-Wheeler pair creation was first discussed in \cite{Toll:1952rq,Reiss:1962,Nikishov:1964zza}, while the study of finite size effects was initiated in \cite{Heinzl:2010vg}. Both processes were observed (at mildly nonlinear intensities) by the SLAC E144 experiment \cite{Bula:1996st,Burke:1997ew,Bamber:1999zt}. For the two examples to be considered, the reduced amplitude $\mathcal{M}$ in (\ref{def:Amplitude}) will have one phase integral, and after applying the slowly-varying envelope approximation, will be defined in terms of an infinite sum over the harmonic order $n$, i.e. \begin{align} \mathcal{M} = \sum_{n = - \infty}^{\infty} \int \! \mathrm{d} \varphi \; \mathcal{M}_{n}(\varphi) . \end{align} Squaring the amplitude for the probability, we will have something of the form, \begin{align} \mathbb{P} \sim \sum_{n,n^\prime = - \infty}^{\infty} \int \mathrm{d} \Omega_{\text{LIPS}} \int \! \mathrm{d} \varphi \, \mathrm{d} \varphi^\prime \; \mathcal{M}_{n}^{\dagger}(\varphi) \mathcal{M}_{n^\prime}(\varphi^\prime) \;, \end{align} i.e., a double infinite sum over harmonic orders, two phase integrals, and an integration over the Lorentz invariant phase space of the process, $\mathrm{d}\Omega_{\text{LIPS}}$. Now we perform a local expansion of the probability, in analogy to the locally-constant field approximation (see e.g.~\cite{Nikishov:1964zza,Ritus:1985,king15d,DiPiazza:2017raw}). We make a change of variables to the sum and difference of phases, \begin{align}\label{def:Phase} \phi = \frac{1}{2} \big( \varphi + \varphi^\prime \big) \;, \quad \theta = \varphi - \varphi^\prime \;. \end{align} Terms in the probability are then expanded in a Taylor series in $\theta \ll 1$, and the slowly-varying envelope approximation is then applied to all derivatives of the pulse envelope, giving \begin{align} f\Big(\frac{\varphi}{\Phi}\Big) \approx f\Big(\frac{\varphi^\prime}{\Phi}\Big) \approx f\Big(\frac{\phi}{\Phi}\Big) \;. 
\end{align} This allows the $\mathrm{d} \theta$ integrals to be performed, and the probability takes the form \begin{align} \mathbb{P} = \int \!\mathrm{d}\phi \; \mathcal{R}(\phi) , \end{align} where $\mathcal{R}(\phi)$ is interpreted as a local ``rate''\footnote{ In general $\mathcal{R}(\phi)$ will contain infinite sums over harmonic orders, and a number of final state momentum integrals. The aim is to do as many of these final state momentum integrals as possible. Despite the added complexity which arises from retaining a slowly varying dependence on the phase variable $\phi$, the number of final state integrals that can be performed is the same in the LMA as for a first order process in an infinite monochromatic plane wave \cite{Berestetsky:1982aq,Ritus:1985} (see appendix~\ref{app:NLCLMA}). }. For the processes of nonlinear Compton scattering and nonlinear Breit-Wheeler pair production we can write: \begin{eqnarray} \mathbb{P}_{\scriptstyle \textsf{LMA}} \approx \int \mathrm{d}\phi \, \mathcal{R}_{\scriptstyle{\textsf{mono}}}[\xi f(\phi/\Phi)] \;, \label{eqn:exp1} \end{eqnarray} where $\mathcal{R}_{\scriptstyle{\textsf{mono}}}$ is the probability per unit phase of the process in a monochromatic (infinitely long) plane wave. For a circularly polarised background the LMA is exactly equal to the integral on the right-hand side of (\ref{eqn:exp1}). For a linearly polarised background, it is not so straightforward, as interference between different harmonic orders is included, but we will find that, to a good approximation, both sides of (\ref{eqn:exp1}) are equal. To conclude this outline of the LMA, we reiterate that the LMA is simply the application of two well-known approximations in the strong-field QED literature, the slowly varying envelope approximation and the ``local'' expansion in the relative phase variable, $\theta$, carried out at the level of the probability. 
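The structure of (\ref{eqn:exp1}) --- a monochromatic rate evaluated at the local envelope value and integrated over phase --- can be illustrated with a toy model; the quadratic ``rate'' and the $\cos^2$ envelope below are hypothetical stand-ins for illustration, not the QED expressions.

```python
# Toy illustration of the LMA master formula
#   P ≈ int dphi  R_mono[ xi * f(phi / Phi) ],
# with a hypothetical quadratic rate R_mono(x) = x^2 and envelope
# f(x) = cos^2(x/2) supported on |phi| < pi*Phi.  For this choice the
# phase integral is analytic: xi^2 * int cos^4(phi/(2*Phi)) dphi
#   = 3*pi*Phi*xi^2/4.
import math

def r_mono(x):
    return x * x  # hypothetical monochromatic rate

def f(x):
    return math.cos(0.5 * x) ** 2

def p_lma(xi, Phi, steps=100_000):
    a, b = -math.pi * Phi, math.pi * Phi
    h = (b - a) / steps
    total = 0.5 * (r_mono(xi * f(a / Phi)) + r_mono(xi * f(b / Phi)))
    for i in range(1, steps):
        total += r_mono(xi * f((a + i * h) / Phi))
    return total * h

xi, Phi = 0.5, 16.0
numeric = p_lma(xi, Phi)
analytic = 3.0 * math.pi * Phi * xi**2 / 4.0
print(numeric, analytic)
```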
For each term in the local expansion we apply the slowly-varying envelope approximation, which reduces the complexity of the rates and allows us to progress further analytically. What this means is that no further restrictions have to be imposed on the pulse envelope beyond those required for the slowly-varying envelope approximation to be valid, i.e. that the phase duration $\Phi$ be sufficiently large that derivatives of the envelope can be safely neglected. Although the approximation has been used before for a circularly polarised background~\cite{Titov:2019kdk}, as far as we are aware, this is the first explicit derivation and benchmarking against the direct calculation from QED for a plane-wave pulse. The monochromatic result is obtained from the LMA by taking the infinite pulse limit $\Phi \to \infty$, i.e. $f\to 1$. \section{Direct calculation from QED for a pulsed background \label{sec:FullQED}} We wish to benchmark the LMA against the numerical evaluation of exact expressions from high-intensity QED. We provide here the details of the integration scheme used. For both nonlinear Compton scattering and nonlinear Breit-Wheeler pair production in a plane-wave pulse, one can write the total probability in the form $\mathbb{P} = \alpha \mathcal{I}/\eta$, where $\alpha$ denotes the fine structure constant, $\eta=k\cdot P /m^2$ is the energy parameter of the incoming particle (where $k$ is the light-like wave vector of the plane wave background and $P$ is the four-momentum of the incoming particle) and $\mathcal{I}$ is a triple integral. $\mathcal{I}$ involves two phase integrals, $\phi$, $\theta$, and an integral over $s$, the fraction of the incoming particle's light-front momentum, $P^{-}$, carried away by the emitted particle. For nonlinear Compton scattering, this is of the form: \begin{eqnarray} \mathcal{I} &=& \int_{-\infty}^{\infty} \mathrm{d} \phi \int_{0}^{1} \mathrm{d} s \left\{- \frac{\pi}{2} \right. \nonumber \\ && \left.
+ \int_{0}^{\infty} \frac{\mathrm{d} \theta}{\theta}\left[1 + h(a,s)\right]\sin\left[g(s)\theta\mu(\phi,\theta)\right] \right\} \, . \label{eqn:pulse1} \end{eqnarray} For the numerical calculation of the exact QED result, we use the ``$i\epsilon$'' regularisation at the level of the probability (see e.g. \cite{dinu12}), as evidenced by the $\pi/2$ counter-term in \eqnref{eqn:pulse1}. The dependence on the field $a$ defined in (\ref{def:Gauge}) resides both in $h(a,s)$ and in the Kibble mass \cite{Kibble:1965zz,Kibble:1975vz} normalised by the electron mass: \begin{eqnarray} \label{eqn:Kibble} \mu(\phi,\theta) = 1- \frac{1}{\theta}\int_{\phi - \theta/2}^{\phi + \theta/2} \mathrm{d}\psi\, \frac{a^{2}(\psi)}{m^{2}} + \bigg( \frac{1}{\theta} \int_{\phi - \theta/2}^{\phi + \theta/2} \mathrm{d}\psi\, \frac{a(\psi)}{m} \bigg)^2.\nonumber \\ \end{eqnarray} In what follows we outline some manipulations allowing for a straightforward numerical integration of $\mathcal{I}$. \begin{figure}[!!t] \centering \includegraphics[width=0.7\linewidth]{figure1.pdf} \caption{Overview of the regions integrated over in the $\phi$-$\theta$ plane. The non-striped regions are inside the pulse: \mbox{$|\varphi'|<2\pi\Phi$}. The dark subregion in the area covered by $I_{4}$ signifies $|\varphi|<2\pi\Phi$.} \label{fig:regionPlot1} \end{figure} The phase integration plane $(\phi,\theta)$ can be split naturally into subregions where the integrand in (\ref{eqn:pulse1}) takes a specific form, according to the following two observations: first, the field-dependent function $h(a,s)$ only has support for $a \ne 0$; second, the Kibble mass becomes phase-independent when $\phi \pm \theta/2$ obey certain inequalities (see below). Suppose we consider a pulse envelope, $f(\varphi/\Phi)$, which is symmetric about the origin with support $|\varphi|<L/2$.
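As a concrete check of the Kibble mass (\ref{eqn:Kibble}): for an infinitely long circularly polarised wave ($f \equiv 1$, $\delta = \pi/4$) the window averages can be done in closed form, giving $\mu = 1 + \tfrac{1}{2}\xi^{2}\,[1 - \mathrm{sinc}^{2}(\theta/2)]$ with $\mathrm{sinc}\,x = \sin x / x$, independent of $\phi$. The sketch below (our own illustration) compares this with a direct numerical evaluation of the averages.

```python
# Kibble mass mu(phi, theta) = 1 - <a^2>/m^2 + (<a>/m)^2, where <.> denotes
# the phase-window average over [phi - theta/2, phi + theta/2].  We take a
# monochromatic circularly polarised wave, a/m = xi*(eps*cos + epsbar*sin)/sqrt(2),
# for which mu = 1 + (xi^2/2)*(1 - sinc^2(theta/2)) in closed form.
import math

def window_avg(func, phi, theta, steps=20_000):
    a, b = phi - 0.5 * theta, phi + 0.5 * theta
    h = (b - a) / steps
    total = 0.5 * (func(a) + func(b)) + sum(func(a + i * h)
                                            for i in range(1, steps))
    return total * h / theta

def kibble_mu(phi, theta, xi):
    c = window_avg(math.cos, phi, theta)
    s = window_avg(math.sin, phi, theta)
    # -<a^2>/m^2 = xi^2/2 (circular); (<a>/m)^2 = -(xi^2/2)*(c^2 + s^2)
    return 1.0 + 0.5 * xi**2 - 0.5 * xi**2 * (c * c + s * s)

def mu_closed(theta, xi):
    sinc = math.sin(0.5 * theta) / (0.5 * theta)
    return 1.0 + 0.5 * xi**2 * (1.0 - sinc**2)

print(kibble_mu(0.3, 2.0, 1.5), mu_closed(2.0, 1.5))
```

Note $\mu \to 1$ as $\theta \to 0$, consistent with the vacuum limit quoted in the text.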
The example pulse shape we consider in this paper is $f = \cos^{2}$, where the phase duration is $L=\pi\Phi$ and the pulse length parameter, $\Phi$, can be related to the number of cycles, $N$, via $\Phi = 2N$. Using the symmetry of the integrand, we only have to consider the first quadrant in the $(\phi,\theta)$-plane, which splits into the sub-regions shown in \figref{fig:regionPlot1} such that $\mathcal{I} = \int ds \sum_{k=1}^4 I_k$. To deal with the infinite numerical integration of a nonlinearly oscillating pure phase term, we first rewrite the regularisation factor as \[ \frac{\pi}{2} = \int_{0}^{\infty} \frac{\mathrm{d}\theta}{\theta}\,\sin K\theta, \] which is independent of the choice of the constant factor $K$. In order to make for a simpler numerical evaluation, we choose $K=g(s)$, allowing us to combine it with the other infinite phase term in (\ref{eqn:pulse1}). (Other choices are useful in other circumstances, see for example~\cite{Dinu:2013hsd} and (\ref{eqn:NLCdif}) in the appendix.) Using this trick, we find that the first integral vanishes, \begin{eqnarray} I_1 = \int_{2\pi\Phi}^{\infty} \mathrm{d}\phi \int_{0}^{2(\phi-2\pi\Phi)} \frac{d\theta}{\theta} & & \left\{ -\sin\left[g(s)\theta\right] \right. \nonumber \\ && \left. + \sin\left[g(s)\theta\mu(\phi,\theta)\right]\right\} = 0.\nonumber \end{eqnarray} This can be shown by noting that $\lim_{a\to 0}\mu(\phi,\theta) = 1$, and in this phase region the pulse has no support. This is because terms depending on the potential, $a(\varphi)$, $a(\varphi')$, are zero unless: \[|\varphi|=|\phi+\theta/2|<2\pi\Phi\quad\trm{or}\quad|\varphi'|=|\phi-\theta/2|<2\pi\Phi.\] In contrast, the integral $I_{2}$, over the region where the pulse is yet to pass through, is non-zero: \begin{eqnarray} I_2= \int_{0}^{\infty} \mathrm{d}\phi \int_{2(2\pi\Phi+\phi)}^{\infty} \frac{\mathrm{d}\theta}{\theta}& & \left\{ -\sin\left[g(s)\theta\right] \right. \nonumber \\ && \left.
+ \sin\left[g(s)\theta\mu(\phi,\theta)\right]\right\} \ne 0 . \nonumber \end{eqnarray} Nevertheless, it may be calculated analytically by noting that the combination $\theta \mu(\phi,\theta)$ accumulates a constant total phase, $\theta \mu \to \theta + \theta_{\infty}$, when the probe particle traverses the pulse and continues to propagate in vacuum. Explicitly, one finds for both nonlinear Compton and Breit-Wheeler processes that $\theta_{\infty} = 3\pi c_{\varepsilon}\xi^{2}\Phi/2$, where $c_\varepsilon = 1$ ($c_\varepsilon = 1/2$) for a circularly (linearly) polarised background. This finally leads to \begin{eqnarray} I_2 = 4\pi\Phi\sin X&&\left[\cos X\, \trm{Ci}\,Y-\sin X\, \trm{Si}\,Y \right. \nonumber \\ && \left. + \frac{\pi}{2}\sin X - \frac{1}{Y} \sin \left(X+Y\right)\right], \end{eqnarray} where $X = \theta_{\infty}g(s)/2$ and $Y=4\pi\Phi g(s)$. This is related to recent studies of interference effects in a double-pulse background \cite{ilderton19,Ilderton:2020dhs}. The remaining integral, $I_3$, collects the contributions where the average phase $\phi$ is outside the pulse, while the phase difference $\theta$ is large enough that $\phi-\theta/2$ reaches back into the pulse, \begin{eqnarray} I_{3} = \int_{2\pi\Phi}^{\infty} \mathrm{d}\phi \int_{2(\phi-2\pi\Phi)}^{2(\phi+2\pi\Phi)} \frac{\mathrm{d}\theta}{\theta}& & \left\{ -\sin\left[g(s)\theta\right] \right. \nonumber \\ && \left. + \sin\left[g(s)\theta\mu(\phi,\theta)\right]\right\} .\nonumber \end{eqnarray} The integrand oscillates with a slowly decaying amplitude for $\phi > 2\pi\Phi$ outside the pulse. As the oscillations are regular, they can be handled by using many data points. We also expect (and will show later) that contributions from outside the pulse are important mainly in the infra-red region of the spectrum, where we have an analytical expression for the limit.
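The $K$-independence of the regularising identity $\pi/2 = \int_0^\infty \mathrm{d}\theta\,\sin(K\theta)/\theta$ used above can be confirmed numerically; the truncation point and step size below are arbitrary choices, with the truncated tail contributing an oscillation of amplitude $\sim 1/(KT)$.

```python
# The Dirichlet integral int_0^inf sin(K*theta)/theta dtheta = pi/2 for K > 0.
# Truncate at T; the remainder oscillates with amplitude ~ 1/(K*T).
import math

def dirichlet(K, T=2000.0, steps=400_000):
    h = T / steps
    total = 0.5 * (K + math.sin(K * T) / T)  # integrand -> K as theta -> 0
    for i in range(1, steps):
        theta = i * h
        total += math.sin(K * theta) / theta
    return total * h

d1 = dirichlet(1.0)
d2 = dirichlet(2.5)
print(d1, d2, math.pi / 2)
```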
Finally, $I_{4}$ is just the evaluation of the full integral in \eqnref{eqn:pulse1}, for $\phi \in [0,2\pi\Phi]$, $\theta \in [0, 2(2\pi\Phi + \phi)]$, i.e. ``on top of'' the pulse. As this is a well-defined, finite integration range, convergence can be assured by simply increasing the sampling resolution of the integrand. \begin{figure}[!!h] \centering \includegraphics[width=0.99\linewidth]{figure2.pdf} \caption{A demonstrative plot showing how different parts of the integration region contribute to the spectrum (here, for a linearly polarised pulse) using a) a log scale and b) a linear scale.} \label{fig:decompPlot1} \end{figure} The contribution of each part of the phase integration plane $(\phi,\theta)$ to the spectrum is shown, for example parameters, in Fig.~\ref{fig:decompPlot1}. This demonstrates that in the infra-red limit, $s \to 0$, the integral $\mathcal{I}$ from (\ref{eqn:pulse1}) is dominated by the sub-integral $I_2$, i.e.\ by contributions from phase regions located \emph{outside} the pulse. On the one hand, this agrees with intuition based on the uncertainty principle---the lowest photon energies require the longest interaction of the electron with the background as has already been pointed out in the literature for nonlinear Compton scattering~\cite{king15d}. On the other hand, when studying the infra-red, one should take into account soft contributions from higher-order processes~\cite{dinu12}. \section{Nonlinear Compton scattering \label{sec:NLC}} \subsection{Circularly polarised plane wave}\label{sec:NLCcirc} \begin{figure}[h!!] \centering \includegraphics[width=0.99\linewidth]{figure3.pdf} \caption{The photon spectrum from nonlinear Compton scattering in a \emph{circularly} polarised background, in the high-energy, weakly nonlinear regime, normalised by $N/2$ for pulses with different numbers of cycles, $N$. 
The locally-constant field approximation (light short-dashed line) poorly approximates the spectrum, whereas the LMA (dark long-dashed line) captures the harmonic structure and becomes more accurate as the length of the pulse increases. Plotted left-to-right is: a) the yield spectrum; b) the energy spectrum; c) the IR part of the spectrum (log-linear); d) the UV part of the spectrum (log). The vertical solid lines here and in the following figures correspond to the positions of the harmonic edges calculated for an infinite monochromatic plane wave.}\label{fig:NLCCP1} \end{figure} \begin{figure}[h!!] \centering \includegraphics[width=0.99\linewidth]{figure4.pdf} \caption{The photon spectrum from nonlinear Compton scattering in a \emph{circularly}-polarised background, in the high-energy nonlinear regime, normalised by half the number of laser cycles, $N/2$. The locally-constant field approximation (light short-dashed line) approximates the spectrum well for values of $s$ corresponding to higher harmonics. The LMA (dark long-dashed line) captures both the harmonic structure and the large-$s$ behaviour and becomes more accurate as the length of the pulse increases. Plotted left-to-right is: a) the yield spectrum; b) the energy spectrum; c) the IR part of the spectrum (log-linear); d) the UV part of the spectrum (log). }\label{fig:NLCCP2} \end{figure} Having evaluated the full QED integrals for a pulse, we can now compare with the LMA. The latter is numerically more efficient, but also implies enhanced analytical control as it typically results in well-known special functions. 
Beyond these immediate advantages, our motivation to improve standard literature approximations is three-fold: (i) to have a locally defined rate which could in principle be implemented in numerical simulation codes; (ii) to be able to resolve the harmonic structures present in the exact QED probabilities with this approximation; and (iii) to be able to work in the moderate intensity regime, $\xi \sim 1$, relevant for current state-of-the-art laser facilities. By construction, item (i) is readily provided by the LMA. To test the LMA for the other two goals, we will benchmark it against numerically integrated exact QED probabilities, beginning with the process of nonlinear Compton scattering. Consider the interaction of an electron, initial invariant energy parameter $\eta_{e} = k \cdot p/m^{2}$, with the plane wave \begin{align}\label{eqn:CircPot} a_{\mu}(\phi) = m \xi \cos^2\Big( \frac{\phi}{\Phi} \Big) \big( \varepsilon_{\mu} \cos\phi + \bar{\varepsilon}_{\mu} \sin\phi \big) \;, \end{align} which has circular polarisation and envelope $f\sim \cos^{2}$. The LMA to the nonlinear Compton spectrum in this setup is given in (\ref{eqn:NLCcirc}). In Fig.~\ref{fig:NLCCP1} we compare the photon spectrum predicted by the LMA with the exact QED result, for the parameters $\xi = 0.5$ and $\eta_{e} = 0.1$, and various pulse lengths $\Phi$. This is the low intensity, high-energy regime which will be probed at, for example, LUXE~\cite{Abramowicz:2019gvx}. In this regime, the locally-constant field approximation, valid for $\xi^2/\eta_{e} \gg 1$~\cite{Khok1}, is no longer applicable and fails by a large margin as demonstrated in Fig.~\ref{fig:NLCCP1}. Each of the plots (a)--(d) in Fig.~\ref{fig:NLCCP1} shows the spectra for the LMA (dark long-dashed line), the locally-constant field approximation (light short-dashed line) and the numerically integrated exact QED results, the latter of which is plotted for various pulse lengths. 
(We recall that the number of cycles, $N$, and the pulse duration, $\Phi$, are related by $\Phi = 2N$.) The numerically integrated exact QED spectra have been normalised by $N/2$ to facilitate comparison. As discussed above, one of the steps in deriving the LMA for a given process is to first apply the slowly-varying envelope approximation, which assumes that the pulse duration is sufficiently long that derivatives of the profile can be neglected. We can see the consequences of this in Fig.~\ref{fig:NLCCP1}. As the pulse duration is increased, the LMA result remains the same (when normalised by pulse duration), but the results from the numerical integration of the exact QED probability become progressively more peaked around the first harmonic, and their agreement with the LMA improves. In all cases, the locally-constant field approximation not only misses the key harmonic structures and the infra-red limit completely, but also fails in the high-energy, $s \to 1$, regime. This is characteristic of the locally-constant field approximation for $\xi < 1$. In Fig.~\ref{fig:NLCCP2} we show the same spectra as before, now for the increased field strength of $\xi = 2.5$. We are now in a regime where the locally-constant field approximation is able to more accurately capture at least the $s \to 1$ behaviour of the spectra, but we can see that the LMA is still vastly superior. In fact, in Fig.~\ref{fig:NLCCP2}c we can distinguish three distinct regions of the spectrum on the interval $0 < s < 1$, defined in relation to the position of the first harmonic/Compton edge, which for a monochromatic plane wave is located at $s_1 = 2 \eta_{e} / (1 + \xi^2 + 2\eta_{e})$. There is the far infra-red sector where $0 < s \ll s_1$, the harmonic range where $s > s_1$, which includes all of the harmonic structure of the spectrum, and the intermediate regime where $s \lesssim s_1$. 
In both the far infra-red and the harmonic range the LMA gives very good agreement with the numerically integrated exact QED spectrum, outperforming the locally-constant field approximation in both cases. One of the most striking improvements in this regard is the agreement between the LMA and the exact QED spectrum in the far infra-red ($s \to 0$) limit. This agreement is not only visible numerically; one can trivially derive the correct $s \to 0$ limit from the LMA, as shown in Appendix~\ref{app:Smalls}, where we also provide a novel derivation of the limit from the exact QED probability. The second area in which the LMA performs well is the harmonic range. For sufficiently long pulses, which in Fig.~\ref{fig:NLCCP2} means $8$ cycles (full-width-half-max duration of around $11\,\trm{fs}$ for an $800\,\trm{nm}$ carrier wavelength), the LMA not only predicts the correct position of the leading harmonic in the spectrum, but is also accurate in predicting the locations and magnitudes of the sub-leading harmonics. The only part of the spectrum in which the LMA deviates somewhat from the exact QED result is the intermediate regime where $s \lesssim s_1$. It turns out that this sector of the spectrum contains features which, to the best of our knowledge, have not been extensively commented on in the literature. Most numerical investigations of the exact QED spectrum/probability are compared to the locally-constant field approximation, which is well known (i) to not capture harmonic structure and (ii) to diverge towards the infra-red. The LMA, however, yields the correct infra-red limit, $s \to 0$, and very good agreement in the harmonic range, but does not capture the full structure of the spectrum in the intermediate range. In each of the spectra coming from the numerically integrated exact QED results there is a clear ``bump'' in the range just before the first harmonic. 
This same feature can be seen in various other works in the literature, see for example \cite{Ilderton:2018nws,DiPiazza:2018bfu,Blackburn:2018sfn}. A qualitative explanation for these additional peaks is that a pulse profile introduces additional frequency scales in the dynamics, analogous to the usual harmonics found at locations determined by the carrier frequency scale of the background, see e.g.~\cite{Boca:2009zz,Seipt:2010ya,Mackenroth:2010jr}. For the current choice of a $\cos^{2}$ pulse envelope, we found that the approximate position of these peaks can be determined as follows. One first introduces a rescaled frequency, $\tilde{k}^{0} = k^{0}/(2I)$, where $k^{0}$ is the carrier wave frequency and $I$ is the integral\footnote{For circular polarisation, i.e.\ the choice (\ref{eqn:CircPot}), one finds $I = \pi\Phi/2$. An analogous argument for linear polarisation (see below) employs the scaling $\tilde{k}^{0} = k^{0}/(\sqrt{2} \pi \Phi)$.} of the pulse profile, $f$. One then calculates the position of the first harmonic/Compton edge, $s_{1}$, using the rescaled energy parameter $\eta_{e}\to \eta_{e}/(2I)$. As pulse duration increases, the additional broad peaks get pushed further back into the infra-red and are smoothed out, eventually disappearing in the infinite plane wave (monochromatic) limit. Therefore, an improvement of the accuracy of the LMA in this part of the spectrum might be achieved by including higher-order terms in $1/\Phi$, i.e.\ the slowly-varying-envelope part of the approximation. The amplitude of these peaks also decreases significantly as $\xi$ falls below unity. Fig.~\ref{fig:NLCCP2} also shows that in the UV range, $s\to1$, there is good agreement between the LMA and the locally constant field approximation for $\xi=2.5$. However, this is no longer true when $\xi=0.5$ as in Fig.~\ref{fig:NLCCP1}. 
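The peak-position recipe above can be transcribed directly into code. A minimal sketch (illustrative parameters; the profile integral $I = \pi\Phi/2$ is the circular-polarisation value quoted in the footnote):

```python
import math

def compton_edge(xi, eta_e):
    """First harmonic/Compton edge for a circularly polarised
    monochromatic plane wave, s_1 = 2*eta_e/(1 + xi^2 + 2*eta_e)."""
    return 2 * eta_e / (1 + xi**2 + 2 * eta_e)

def bump_position(xi, eta_e, Phi):
    """Approximate position of the envelope-induced peak below the first
    harmonic: rescale eta_e -> eta_e/(2I), with I = pi*Phi/2 the integral
    of the cos^2 profile for circular polarisation, then re-evaluate s_1."""
    I = math.pi * Phi / 2          # integral of the pulse profile f
    return compton_edge(xi, eta_e / (2 * I))

xi, eta_e = 2.5, 0.1               # example parameters used in the text
s1 = compton_edge(xi, eta_e)       # monochromatic edge, ~0.027
# The bump lies below the first harmonic and recedes into the
# infra-red as the pulse duration Phi = 2N grows:
assert bump_position(xi, eta_e, Phi=32) < bump_position(xi, eta_e, Phi=8) < s1
```

For $\xi = 0.5$ the same formula gives $s_1 \approx 0.138$, so the edge moves towards larger $s$ as the intensity is lowered, in line with the figures.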
To capture the UV limit in more detail one could adopt the methods of \cite{torgrimsson18, torgrimsson19} and use the saddle point method, noting that, in the exponent, the pre-factor of the Kibble mass is proportional to $ (1-s)^{-1}$. Following this route, though, is beyond the scope of our present discussion. The case of a circularly polarised plane wave pulse gives the simplest form of the LMA due to the additional symmetries of the choice of background. The approach can, however, still be used for the case of linear polarisation, to which we now turn. \begin{figure}[t!!] \centering \includegraphics[width=0.99\linewidth]{figure5.pdf} \caption{The photon spectrum from nonlinear Compton scattering in a \emph{linearly} polarised background, in the high-energy, weakly nonlinear regime, normalised by half the number of laser cycles, $N/2$. The agreement of the LMA (dark long-dashed line) and disagreement of the locally-constant field approximation (light short-dashed line) with the numerically exact results is similar to the circularly polarised case. The dot-dashed line is the spectrum acquired by taking the LMA for a \emph{circularly} polarised background and rescaling the intensity parameter $\xi\to\xi/\sqrt{2}$.}\label{fig:NLCLP1} \end{figure} \begin{figure}[b!!] \centering \includegraphics[width=0.99\linewidth]{figure6.pdf} \caption{The photon spectrum from nonlinear Compton scattering in a \emph{linearly} polarised background, in the high-energy, nonlinear regime, normalised by half the number of laser cycles. The agreement of the LMA (dark long-dashed line) and the locally-constant field approximation (light short-dashed line) with the exact pulsed results is similar to the circularly polarised case. The dot-dashed line is the spectrum acquired by taking the LMA for a \emph{circularly} polarised background and replacing the intensity parameter $\xi\to\xi/\sqrt{2}$. 
Unlike in the weak-field regime, the linearly polarised LMA is not well approximated by rescaling the intensity parameter in the circularly polarised LMA.}\label{fig:NLCLP2} \end{figure} \subsection{Linearly polarised plane wave \label{sec:NLClin}} As above, we compare the LMA for a linearly polarised background field with the numerically integrated exact result for a fixed electron energy $\eta_{e} = 0.1$ and field strengths $\xi = 0.5$ (Fig.~\ref{fig:NLCLP1}) and $\xi = 2.5$ (Fig.~\ref{fig:NLCLP2}). In this case the LMA is given by (\ref{eqn:NLClin}). Even for infinite monochromatic plane wave fields, the probability of nonlinear Compton scattering for a linearly polarised background field has extra structure compared to the circularly polarised case. The same is true for the LMA in a pulsed linearly polarised field. The source of the extra structure is that for linear polarisation the term which is quadratic in the background field in the classical action (\ref{def:ClassAct}) is dependent on both the slow oscillations due to the pulse profile, and the fast oscillations of the carrier frequency of the plane wave. Within the LMA, this results in a non-trivial integration over the angular spread of the emitted photons. As a consequence (see Appendix~\ref{app:NLCLMA} for details), there remains a double harmonic sum, compared to the circularly polarised case, where it simplifies due to the extra symmetry in the background. Hence, it is not possible to simply take the textbook expression for linearly polarised monochromatic plane waves \cite{Berestetsky:1982aq} and localise the field intensity, $\xi \to \xi f$, as could be done in the circularly polarised case. In principle, the additional structure of a double-harmonic sum allows for the possibility of interference effects between the harmonics. However, in the intermediate intensity, high-energy regime, we did not find any appreciable contribution from this interference. 
For weak fields, $\xi < 1$, the low-energy part of the spectrum, i.e.\ the region $s \lesssim s_{1}$ below the first harmonic, is well approximated by the perturbative contribution from the squared potential, $a^{2}$. In this case, the linearly polarised LMA turns out to be well-approximated by taking the circularly polarised LMA and making the replacement $\xi\to\xi/\sqrt{2}$, as is demonstrated in Fig.~\ref{fig:NLCLP1}. Because of this, rescaling the circularly polarised result is a method which has been used to implement rates for linear polarisation in numerical codes. However, this method fails for $\xi>1$. In this regime, higher harmonics, proportional to $a^{2n}$ for the $n$th harmonic, contribute to the spectrum and can no longer be obtained through a simple modification of the circularly polarised LMA. This impact of the background polarisation at higher values of the field strength is demonstrated in \figref{fig:NLCLP2}. Although the position of the harmonics is still correctly predicted by the rescaled circularly polarised LMA, their amplitude is not, nor is the overall shape of the spectrum correctly captured: the rescaled circularly polarised result gives an underestimate for the smallest values of $s$, but an overestimate for larger values. Hence, the linearly polarised LMA proper, rather than the rescaled circularly polarised LMA, must be used in the intensity regime of upcoming experiments \cite{Abramowicz:2019gvx}. From both the circular and linear polarisation cases just discussed, one notes that the higher the field strength $\xi$, the better the agreement between the LMA and the locally-constant field approximation in the ultra-violet (large-$s$) regime. In appendix~\ref{app:LCFA} we show explicitly that this is not just a numerical accident. Indeed, we will derive the locally-constant field approximation as the high-field limit of the LMA. 
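The origin of the $\sqrt{2}$ rescaling at weak fields is heuristic but easy to check: at lowest perturbative order the spectrum is governed by the cycle-averaged squared potential, and for linear polarisation $\langle \xi^{2}\cos^{2}\phi\rangle = \xi^{2}/2$, which matches a circularly polarised wave of strength $\xi/\sqrt{2}$. A quick numerical check of this average:

```python
import math

# Cycle average of xi^2 cos^2(phi), computed by simple quadrature over
# one full carrier period, compared with the rescaled circular value
# (xi/sqrt(2))^2. Equality of the two is the weak-field rescaling rule.
xi = 0.5
n = 100_000
avg = sum(xi**2 * math.cos(2 * math.pi * k / n)**2 for k in range(n)) / n
assert abs(avg - (xi / math.sqrt(2))**2) < 1e-9
```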
\section{Nonlinear Breit-Wheeler \label{sec:BW}} So far our focus has been on implementing and analysing the LMA for nonlinear Compton scattering. In principle, however, the LMA can be applied to any QED scattering process in a plane wave background. As another example, consider nonlinear Breit-Wheeler pair production, where an initial photon decays into an electron-positron pair. The derivation of the LMA for this process follows the same route as for nonlinear Compton scattering (see appendix~\ref{app:NLCLMA}), and we again find that in the case of a circularly polarised plane wave the final differential probability is simply the textbook result in a monochromatic plane wave \cite{Berestetsky:1982aq} with a localisation of the field strength, $\xi \rightarrow \xi f$, see (\ref{eqn:LMABW}) in the appendix. A well known feature of the nonlinear Breit-Wheeler process is the strict lower bound, $n_\star$, on the harmonic number contributing for a given field strength and initial photon energy. This is because the outgoing particle states are massive, so that their production can only proceed above an energy threshold. For a monochromatic plane wave, the lower bound is given by $n_{\star}^{\text{mono}} = 2 (1 + \xi^2)/\eta_{\gamma}$, where $\eta_{\gamma}=k\cdot \ell/m^2$ is the energy parameter for the incident photon with four-momentum $\ell$. Comparing this to (\ref{eqn:BWparams}), we can see that for a pulse there are points along the phase for which the minimum harmonic $n_{\star} < n_{\star}^{\text{mono}}$ for the same $\xi$ and $\eta_{\gamma}$. At first glance, this would appear to mean that at the wings of the pulse, as $f \to 0$, the minimum harmonic contributing would decrease, and since Bessel harmonics of lower order are typically greater in magnitude, that the process would be more probable at lower field strengths. 
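To make the threshold counting concrete, the sketch below evaluates the minimum harmonic number with a local, envelope-scaled field strength $\xi \to \xi f(\phi)$ (an assumption in the spirit of the LMA localisation described above; the ceiling makes the bound an integer). As the text goes on to explain, the smaller threshold at the wings does not translate into a larger probability there.

```python
import math

def n_star(xi_local, eta_gamma):
    """Minimum harmonic number for nonlinear Breit-Wheeler pair
    production, n* = ceil(2*(1 + xi^2)/eta_gamma), evaluated with a
    local (envelope-scaled) field strength xi_local = xi * f(phi)."""
    return math.ceil(2 * (1 + xi_local**2) / eta_gamma)

xi, eta_gamma = 1.0, 3.0   # parameters of the Breit-Wheeler example below
assert n_star(xi, eta_gamma) == 2    # peak of the pulse, f = 1
assert n_star(0.0, eta_gamma) == 1   # wings of the pulse, f -> 0
```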
One has to keep in mind, though, that the argument $z(\phi)$ of the Bessel function depends on the pulse profile $f$ and vanishes in the limit $f \to 0$. The only Bessel function surviving this limit is $J_{0}$. However, since the harmonic sum in (\ref{eqn:LMABW}) is over strictly positive $n > 0$ and thus excludes $J_0$, there is no contribution to the probability for $f \to 0$. Hence, in comparison to nonlinear Compton scattering, the nonlinear Breit-Wheeler process will still require either very high field strengths, for which the locally-constant field approximation should be a good approximation, or very high initial photon energies. For both the Compton and Breit-Wheeler processes, the momentum taken from the field increases with field strength, and the harmonic structure becomes less well defined. In order to demonstrate the LMA for the Breit-Wheeler process, the centre-of-mass energy should be close to the pair rest-energy in order that only very few laser photons are required for pair production to take place. In Fig.~\ref{fig:NBW1}, we demonstrate such a situation, where we present the spectrum of electrons produced by a head-on collision of a $250\,\trm{GeV}$ photon ($\eta_{\gamma} = 3$) with a laser pulse of intensity $\xi =1$. We note that the harmonic structure of the spectrum for long pulses is well-approximated by the LMA, whereas the locally-constant field approximation both misses the harmonics in the spectrum and under-predicts the yield. \begin{figure}[t!!] \includegraphics[width=0.7\linewidth]{figure7.pdf} \caption{The spectrum of electrons produced in nonlinear Breit-Wheeler pair production for $\xi=1$, $\eta_{\gamma}=3$. 
A comparison of the locally-constant field approximation (light short-dashed line) and the LMA (dark long-dashed line) with the exact numerical result in pulses of different numbers of cycles, $N$.}\label{fig:NBW1} \end{figure} \section{Summary \label{sec:Summary}} Motivated by the need to improve the theoretical tools required for supporting state-of-the-art laser experiments probing the high-intensity regime of QED, we have introduced here the locally monochromatic approximation (LMA). This technique treats the quickly- and slowly-oscillating components of laser field profiles differently, in order to improve on the accuracy of the existing locally-constant field approximation, which essentially treats all field components as slowly varying. Oscillations due to the carrier frequency of the laser field are treated exactly, while the slowly-varying field envelope degrees of freedom are treated in a local expansion. Therefore, the accuracy of the LMA increases with increasing pulse duration as we have shown by comparing directly with exact QED results. This conclusion agrees with other works that have compared a train of monochromatic pulses with single short-pulse spectra \cite{krajewska12a,krajewska12b}. Although we have not included it here, the LMA could easily be extended to include a carrier-envelope-phase, since the separation between fast and slow time scales would remain (see e.g. \cite{Titov:2019kdk} for an example of this applied to the slowly-varying-envelope approximation). The LMA (or its precursors) have been used in several numerical codes, albeit in an ad-hoc fashion. To put the LMA on a firmer basis, we provide the first derivation from, and the first benchmarking against, QED in a plane wave background. In doing so we have identified the character of expansions at work and established how the accuracy improves with pulse duration. Finally, we have located spectral features in the mid-infra-red that are missed by this approximation. 
We note, however, that despite being local in the phase variable, the LMA is unsuitable for intense laser-matter interactions where plasma is present. This is because the LMA relies on the presence of structures particular to laser fields, essentially a central frequency and an envelope, which normally are absent in a plasma. Instead, the LMA can be thought to extend the LCFA up to higher energies and down to intermediate and low intensities, in situations where the background field is a laser pulse of well-characterised shape. Such a situation is to be found in upcoming high-energy experiments~\cite{Abramowicz:2019gvx}. When applicable, the LMA correctly resolves harmonic structure in particle spectra. Whilst it is known that these can be washed out due to multi-particle effects \cite{angioi16}, they have been observed in experiments ~\cite{Bamber:1999zt,babzien06,sakai15,Khrennikov15}. The washing-out effect is expected to be less significant if the electron beam has a narrow momentum spread. A further advantage of the LMA is its capability to capture the infra-red limit of nonlinear Compton scattering. In contrast, the locally-constant field approximation is well-known to fail in this regard. In this paper we have considered the first-order processes of nonlinear Compton scattering and nonlinear Breit-Wheeler pair production, but the LMA could also be extended to higher-order processes such as trident pair production (see e.g.~\cite{Ritus:1972nf,Ilderton:2010wr,King:2018ibi,Mackenroth:2018smh,Dinu:2019wdw,Acosta:2019bvh}) and double nonlinear Compton scattering (see e.g.~\cite{Morozov:1975uah,Lotstedt:2009zz,Seipt:2012tn,Mackenroth:2012rb,King:2014wfa,Dinu:2018efz}). This extension is not trivial, as it would need to deal with the appearance of resonant singularities in dressed propagators \cite{oleinik67,krajewska11} and is therefore a subject for further work. 
\acknowledgements The authors thank Anton Ilderton for many useful discussions and a careful reading of the manuscript. B.K.\ and A.J.M.\ are supported by the EPSRC grant EP/S010319/1.
\section{Simulated Results} \label{app:sim} In each simulation, we constructed an environment with two tasks. For each, we sample 750 times from the first task, followed by 750 times from the second task. These 1,500 samples comprise the training data. We sample another 1,000 hold-out samples to evaluate the algorithms. We fit a random forest (\sct{RF}) (technically, an uncertainty forest, which is an honest forest with a finite-sample correction~\cite{Guo2019-xe}) and an \sct{Odif}. We repeat this process 30 times to obtain error bars, which were negligible in all cases. \subsection{Gaussian XOR} Gaussian XOR is a two-class classification problem with equal class priors. Conditioned on being in class 0, a sample is drawn from a mixture of two Gaussians with means $\pm \begin{bmatrix} 0.5, & 0.5\end{bmatrix}\T $, and variances proportional to the identity matrix. Conditioned on being in class 1, a sample is drawn from a mixture of two Gaussians with means $ \pm \begin{bmatrix} 0.5, & - 0.5\end{bmatrix}\T $, and variances proportional to the identity matrix. Gaussian XNOR has the same distribution as Gaussian XOR but with the class labels flipped. Rotated XOR (R-XOR) rotates XOR by $\theta$ degrees. \begin{figure} \centering \includegraphics[width=.8\linewidth]{images/spiral_plot.pdf} \caption{ \textit{Top}: 750 samples from 3 spirals (left) and 5 spirals (right). \textit{Bottom left}: \sct{Odif}\ outperforms \sct{RF}\ on 3 spirals when 5 spirals data is available, demonstrating \textit{backward} transfer in \sct{Odif}. \textit{Bottom center}: \sct{Odif}\ outperforms \sct{RF}\ on 5 spirals when 3 spirals data is available, demonstrating \textit{forward} transfer in \sct{Odif}. \textit{Bottom right}: Transfer Efficiency of \sct{Odif}. The forward (solid) and backward (dashed) curves are the ratio of the generalization error of \sct{Odif}\ to \sct{RF}\ in their respective figures. 
\sct{Odif}\ demonstrates decreasing forward transfer and increasing backward transfer in this environment.} \label{fig:spiral} \end{figure} \subsection{Spirals} A description of the distributions for the two tasks is as follows: let $ K $ be the number of classes and $ S \sim$ multinomial$(\frac{1}{K}\vec{1}_{K}, n) $. Conditioned on $S$, each feature vector is parameterized by two variables, the radius $ r $ and an angle $ \theta $. For each sample, $ r $ is sampled uniformly in $ [0, 1] $. Conditioned on a particular class, the angles are evenly spaced between $ \frac{4\pi(k-1)t_{K}}{K} $ and $ \frac{4\pi(k)t_{K}}{K} $, where $ t_{K} $ controls the number of turns in the spiral. To inject noise along the spiral, we add Gaussian noise to the evenly spaced angles $ \theta': \theta = \theta' + \mathcal{N}(0, \sigma_{K}^{2}) $. The observed feature vector is then $ (r \; \cos(\theta), r \; \sin(\theta)) $. In Figure \ref{fig:spiral} we set $ t_{3} = 2.5 $, $ t_{5} = 3.5 $, $ \sigma_{3}^{2} = 3$ and $ \sigma_{5}^{2}=1.876 $. Consider an environment with a three spiral and five spiral task (Figure~\ref{fig:spiral}). In this environment, axis-aligned splits are inefficient, because the optimal partitions are better approximated by irregular polytopes than by the orthotopes provided by axis-aligned splits. The three spiral data helps the five spiral performance because the optimal partitioning for these two tasks is relatively similar to one another, as indicated by positive forward transfer. This is despite the fact that the five spiral task requires finer partitioning than the three spiral task. Because \sct{Odif}\ grows relatively deep trees, it over-partitions space, thereby rendering tasks with coarser optimal decision boundaries useful for tasks with finer optimal decision boundaries. The five spiral data also improves the three spiral performance. 
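For concreteness, a minimal sampler for the spiral distribution just described (a sketch: class labels are drawn with equal priors, and the angles are drawn uniformly within each class's interval rather than on an even grid, a simplification):

```python
import numpy as np

def sample_spirals(n, K, t_K, sigma2_K, rng=None):
    """Draw n samples from the K-spiral distribution: class labels have
    equal priors; within class k the angle lies in
    [4*pi*(k-1)*t_K/K, 4*pi*k*t_K/K] plus Gaussian noise of variance
    sigma2_K; the radius is uniform on [0, 1]; and the observed feature
    vector is (r*cos(theta), r*sin(theta))."""
    rng = np.random.default_rng(rng)
    y = rng.integers(0, K, size=n)          # equal class priors
    r = rng.uniform(0, 1, size=n)
    lo = 4 * np.pi * y * t_K / K            # y plays the role of k-1
    hi = 4 * np.pi * (y + 1) * t_K / K
    theta = rng.uniform(lo, hi) + rng.normal(0, np.sqrt(sigma2_K), size=n)
    X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    return X, y

# 3-spiral task with the parameters used in the figure:
X, y = sample_spirals(750, K=3, t_K=2.5, sigma2_K=3.0, rng=0)
assert X.shape == (750, 2) and set(y) <= {0, 1, 2}
```

With $(K, t_K, \sigma_K^2) = (3, 2.5, 3)$ and $(5, 3.5, 1.876)$ this approximates the two tasks shown at the top of Figure~\ref{fig:spiral}.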
\section{Omnidirectional Algorithms} \label{app:progl_algs} We propose two concrete omnidirectional algorithms, Omnidirectional Forests (\sct{Odif}) and Omnidirectional Networks (\sct{Odin}). The two algorithms differ in their details of how to update representers and voters, but abstracting a level up, they are both special cases of the same procedure. Let \sct{Odix} refer to any possible omnidirectional algorithm. Algorithms~\ref{alg:odxtrain}, \ref{alg:odix_add_voter}, \ref{alg:odix_update_voter}, and \ref{alg:odix_predict} provide pseudocode for adding representers, updating voters, and making predictions for any \sct{Odix} algorithm; the sections below provide \sct{Odif}- and \sct{Odin}-specific details. \subsection{Omnidirectional Forests} \begin{algorithm}[t] \caption{Add a new \sct{Odix}~representer for a task. OOB = out-of-bag. } \label{alg:odxtrain} \begin{algorithmic}[1] \Require \Statex (1) $t$ \Comment{current task number} \Statex (2) $\mathcal{D}_n^t = (\mathbf{x}^t,\mathbf{y}^t) \in \mathbb{R}^{n \times p} \times \{1,\ldots, K\}^n$ \Comment{training data for task $t$} \Ensure \Statex (1) $u_t$ \Comment{a representer set} \Statex (2) $\mc{I}_{OOB}^t$ \Comment{a set of the indices of OOB data} \Function{\sct{Odix}.fit}{$t, (\mathbf{x}^t,\mathbf{y}^t)$} \State $u_t, \mc{I}_{OOB}^t \leftarrow$ X.fit($\mathbf{x}^t$, $\mathbf{y}^t$) \Comment train a representer X on bootstrapped data \State \Return $u_t, \mc{I}_{OOB}^t$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{Add a new \sct{Odix}~voter for the current task. 
} \label{alg:odix_add_voter} \begin{algorithmic}[1] \Require \Statex (1) $t$ \Comment{current task number} \Statex (2) $\mb{u}_t = \{u_{t'}\}_{t'=1}^t$ \Comment{the set of representers} \Statex (3) $\mathcal{D}_n^t = (\mathbf{x}^t,\mathbf{y}^t) \in \mathbb{R}^{n \times p} \times \{1,\ldots, K\}^n$ \Comment{training data for task $t$} \Statex (4) $\mc{I}_{OOB}^t$ \Comment{a set of the indices of OOB data for the current task} \Ensure $\mb{v}_t = \{v_{t,t'}\}_{t'=1}^t$ \Comment in-task ($t'=t$) and cross-task ($t'\neq t$) voters for task $t$ \Function{\sct{Odix}.add\_voter}{$t, \mb{u}_t, (\mathbf{x}_t, \mathbf{y}_t), \mc{I}_{OOB}^t$} \State $v_{tt} \leftarrow u_{t}$.add\_voter($(\mathbf{x}_t, \mathbf{y}_t), \mc{I}_{OOB}^t$) \Comment add the in-task voter using OOB data \For{$t' = 1, \ldots, t-1$} \Comment update the cross-task voters for task $t$ \State $v_{tt'} \leftarrow u_{t'}$.add\_voter($\mathbf{x}_t, \mathbf{y}_t$) \EndFor \State \Return $\mb{v}_t$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{Update \sct{Odix}~voter for the previous tasks. } \label{alg:odix_update_voter} \begin{algorithmic}[1] \Require \Statex (1) $t$ \Comment current task number \Statex (2) $u_t$ \Comment representer for the current task \Statex (3) $\mathcal{D} =\{\mathcal{D}^{t'} \}_{t'=1}^{t-1}$ \Comment training data for tasks $t'=1,\cdots, t-1$ \Ensure $\mb{v} = \{\mb{v}_{t'}\}_{t'=1}^{t-1}$ \Comment all previous task voters \Function{\sct{Odix}.update\_voter}{$t, u_t, \mathcal{D}$} \For{$t'= 1, \ldots, t-1$} \Comment update the cross-task voters \State $v_{t't} \leftarrow u_{t}$.get\_voter($\mathbf{x}_{t'}, \mathbf{y}_{t'}$) \EndFor \State \Return $\mb{v}$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{Predicting a class label using \sct{Odix}. 
} \label{alg:odix_predict} \begin{algorithmic}[1] \Require \Statex (1) $x \in \mathbb{R}^{p}$ \Comment test datum \Statex (2) $t$ \Comment task identity associated with $x$ \Statex (3) $\mb{u}$ \Comment all $T$ representers \Statex (4) $\mb{v}_t$ \Comment voter for task $t$ \Ensure $\hat{y}$ \Comment a predicted class label \Function{$\hat{y} =$ \sct{Odix}.predict}{$t, x, v_t$} \State $T \leftarrow$ \sct{Odix}.get\_task\_number() \Comment get the total number of tasks \State $\hat{\mathbf{p}}_t = \mathbf{0}$ \Comment $\hat{\mathbf{p}}_t$ is a $K$-dimensional posterior vector \For{$t' = 1,\ldots, T$} \Comment update the posteriors calculated from $T$ task voters \State $\hat{\mathbf{p}}_t \leftarrow \hat{\mathbf{p}}_t + v_{tt'}$.predict\_proba($u_{t'}(x)$) \EndFor \State $\hat{\mathbf{p}}_t \leftarrow \hat{\mathbf{p}}_t / T$ \State $\hat{y} = \operatornamewithlimits{argmax}_i(\hat{\mathbf{p}}_t)$ \Comment find the index $i$ of the element of the vector $\hat{\mathbf{p}}_t$ with maximum probability \State \Return $\hat{y}$ \EndFunction \end{algorithmic} \end{algorithm} An Omnidirectional Forest (\sct{Odif}) is a decision forest-based instance of ensembling representations. For each task, the transformer $u_t$ of an \sct{Odif}\ is a decision forest~\cite{Amit1997-nd, breiman2001random}. The leaf nodes of each decision tree partition the input space $ \mc{X} $~\cite{breiman1984classification}. The representation of $ x \in \mc{X} $ corresponding to a single tree can be a one-hot encoded $L_b$-dimensional vector with a 1 in the location corresponding to the leaf of tree $b$ into which $x$ falls. The representation of $ x $ resulting from the collection of trees simply concatenates the $B$ one-hot vectors from the $B$ trees. Thus, the transformer $ u_{t} $ is the mapping from $ \mc{X}$ to a $B$-sparse vector of length $\sum_{b=1}^B L_b$. 
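The leaf-indicator representation just described can be sketched with scikit-learn (a hypothetical minimal implementation, not the authors' code): `apply` returns the leaf reached by each sample in each tree, which is then one-hot encoded and concatenated over the $B$ trees.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_transformer(forest, X):
    """Map each sample to the concatenation of B one-hot leaf
    indicators, one per tree: a B-sparse binary vector of length
    sum_b L_b, where L_b is the number of leaves of tree b."""
    leaves = forest.apply(X)                 # shape (n, B): leaf node ids
    blocks = []
    for b, tree in enumerate(forest.estimators_):
        # node ids of the leaves of tree b, in a fixed order
        is_leaf = tree.tree_.children_left == -1
        leaf_ids = np.flatnonzero(is_leaf)
        blocks.append((leaves[:, b][:, None] == leaf_ids[None, :]).astype(int))
    return np.hstack(blocks)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # XOR-like labels
forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)
U = forest_transformer(forest, X)
# each row activates exactly one leaf per tree, i.e. it is B-sparse:
assert (U.sum(axis=1) == 5).all()
```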
The posteriors are learned by populating the cells of the partitions and taking class votes with out-of-bag samples, as in `honest trees'~\cite{breiman1984classification, denil14, Athey19}. The posteriors output the average normalized class votes across the collection of trees, adjusted for finite sample bias~\cite{Guo2019-xe}. The decider $w_t$ averages the posterior estimates and outputs the argmax to produce a single prediction. Recall that honest decision forests are universally consistent classifiers and regressors~\cite{Athey19}, meaning that with sufficiently large sample sizes, under suitable though general assumptions, they will converge to minimize risk. The single-task version of this approach simplifies to an approach called `Uncertainty Forests'~\cite{Guo2019-xe}. Table~\ref{tab:hyperparameter_table} lists the hyperparameters used in the CIFAR experiments. \begin{table}[ht] \caption{Hyperparameters for \sct{Odif}\ in CIFAR experiments. n\_estimators is denoted by $B$, the number of trees, above. } \label{tab:hyperparameter_table} \begin{tabular}{|l|l|} \hline \textbf{Hyperparameters} & \textbf{Value} \\ \hline n\_estimators ($500$ training samples per task) & $10$\\ \hline n\_estimators ($5000$ training samples per task) & $40$\\ \hline max\_depth & $30$\\ \hline max\_samples & $0.67$\\ \hline min\_samples\_leaf & $1$\\ \hline \end{tabular} \end{table} \subsection{Omnidirectional Networks} An Omnidirectional Network (\sct{Odin}) is a deep network (DN)-based instance of ensembling representations. For each task, the representer $ u_{t} $ in an \sct{Odin}\ is the ``backbone'' of a DN, including all but the final layer. Thus, each $u_t$ maps an element of $ \mc{X} $ to an element of $ \mbb{R}^{d} $, where $ d $ is the number of neurons in the penultimate layer of the DN. 
In practice, we use the architecture described in \citet{tolias_architecture} as ``5 convolutional layers followed by 2 fully-connected layers each containing 2,000 nodes with ReLU non-linearities and a softmax output layer.'' We trained this network using cross-entropy loss and the Adam optimizer~\cite{kingma2014adam} to learn the transformer. The omni-voters are learned via $ k $-Nearest Neighbors ($k$-NN) \cite{Stone1977-fi}. Recall that a $k$-NN, with $ k $ chosen such that as the number of samples $ n $ goes to infinity $ k $ goes to infinity and $ \frac{k}{n} \to 0 $, is a universally consistent classifier~\cite{Stone1977-fi}. We use $k = 16\log_2{n}$, which satisfies these conditions. \section{Decision Tree as a Compositional Hypothesis} \label{app:example} Consider learning a decision tree for a two-class classification problem. The input to the decision tree is a set of $n$ feature-vector/response pairs, $(x_i,y_i)$. The learned tree structure corresponds to the representer $u$, because the tree structure maps each input feature vector into an indicator encoding in which leaf node each feature vector resides. Formally, $u: \mathcal{X} \mapsto [L]$, where $[L] = \{1, 2, \ldots, L\}$ and $L$ is the total number of leaf nodes. In other words, $u$ maps from the original data space to an $L$-dimensional one-hot encoded sparse binary vector, where the sole non-zero entry indicates in which leaf node a particular observation falls, that is, $\tilde{x} := u(x) \in \{0,1\}^L$ where $\norm{\tilde{x}} = 1$. Learning the voter is simply a matter of counting the fraction of observations in each leaf per class. So, the voter is trained using $n$ pairs of transformed feature-vector/response pairs $(\tilde{x}_i,y_i)$, and it assigns a probability of each class in each leaf: $\{ v_l := \mathbb{P}[y_i=1 | \tilde{x}_i=l], \forall l \in [L] \}$ and $v(\tilde{x})=v_{\tilde{x}}$. 
In other words, for two-class classification, $v$ maps from the $L$-dimensional binary vector to the probability that $x$ is in class 1. The decider is simply $w\left(v(\tilde{x})\right)=\mathbbm{1}_{\{v(\tilde{x})>0.5\}}$, that is, it outputs the most likely class label of the leaf node that $x$ falls into. For inference, the tree is given a single $x$, and it is passed down the tree until it reaches a leaf node, where it is represented by its leaf identifier $\tilde{x}$. The voter takes $\tilde{x}$ as input, and outputs the estimated posterior probability of being in class $1$ for the leaf node in which $\tilde{x}$ resides: $v(\tilde{x}) = \mathbb{P}[y=1 | \tilde{x}]$. If $v(\tilde{x})$ is greater than $0.5$, the decider decides that $x$ is in class $1$; otherwise, it decides that $x$ is in class $0$. \section{Compositional Representation Ensembling} \label{app:compositional} Consider a scenario in which we have two tasks, one following the other. Assume that we have already learned a single decomposable hypothesis for the first task, $w_{1} \circ v_{1} \circ u_{1}$, and then we get new data associated with a second task. Let $n_1$ denote the sample size for the first task, $n_2$ the sample size for the second task, and $n=n_1+n_2$. The representation ensembling approach generally works as follows. First, since we want to transfer forward to the second task, we push all the new data through the first representer $u_1$, which yields $\tilde{x}_{n_1+1}^{(1)}, \ldots, \tilde{x}_n^{(1)}$. Second, we learn a new representer $u_{2}$ using the new data, $\{(x_i,y_i)\}_{i=n_1+1}^n$. We then push the new data through the new representer, yielding $\tilde{x}_{n_1+1}^{(2)}, \ldots, \tilde{x}_n^{(2)}$. Third, we train a new omni-voter, $v_2$. To do so, $v_2$ is trained on the outputs from both representers, that is, $\{(\tilde{x}_i^{(j)},y_i)\}_{i=n_1+1}^{n}$ for $j=1,2$. 
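The two-task procedure just described can be sketched in Python. This is only an illustrative sketch, not the \sct{Odif}/\sct{Odin}\ code: the representers are stand-in forests whose probability outputs serve as a fixed representation map, the voters are $k$-NNs with $k = 16\log_2 n$ (capped at $n$) per the omni-voter description, and all function names are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def fit_representer(X, y):
    # stand-in representer u_t: any fixed map learned from task data works here
    return RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

def represent(u, X):
    return u.predict_proba(X)  # a fixed map from inputs to a representation

def knn_voter(R, y):
    # k = 16 log2(n), capped at n so the k-NN remains well defined
    k = min(max(1, int(16 * np.log2(len(y)))), len(y))
    return KNeighborsClassifier(n_neighbors=k).fit(R, y)

def ensemble_two_tasks(task1, task2):
    (X1, y1), (X2, y2) = task1, task2
    u1 = fit_representer(X1, y1)
    u2 = fit_representer(X2, y2)                            # new representer
    # forward transfer: task-2 voter sees task-2 data under BOTH representers
    R2 = np.hstack([represent(u1, X2), represent(u2, X2)])
    v2 = knn_voter(R2, y2)
    # backward transfer: retrain the task-1 voter on both representations
    R1 = np.hstack([represent(u1, X1), represent(u2, X1)])
    v1 = knn_voter(R1, y1)
    return (u1, u2), (v1, v2)
```

The key design point matches the text: each voter is trained on the concatenated outputs of all representers, so newer representers can improve older tasks' voters and vice versa.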
The output of $v_2$ for any new input $x$ is the posterior probability (or score) for that point for each potential response in task two (class label). Thus, by virtue of ensembling these representations, this approach enables forward transfer~\cite{rusu2016progressive, Dhillon2019-fj}. Now, we would also like to improve performance on the first task using the second task's data. While many lifelong methods have tried to achieve this kind of backward transfer, to date, they have mostly failed~\cite{Ruvolo2013-hk}. Recall that previously we had already pushed all the first task data through the first task representer, which had yielded $\tilde{x}_{1}^{(1)}, \ldots, \tilde{x}_{n_1}^{(1)}$. Assuming we kept the first task's data, or can adequately simulate it, we can push those data through $u_2$ to get a second representation of the first task's data: $\tilde{x}_{1}^{(2)}, \ldots, \tilde{x}_{n_1}^{(2)}$. Then, $v_1$ would be trained on both representations of the first task's data. This `replay-like' procedure facilitates backward transfer, that is, improving performance on previous tasks by leveraging data from newer tasks. Both the forward and backward transfer updates can be implemented every time we obtain data associated with a new task. \textbf{Enabling the omni-voters to ensemble \textit{omnidirectionally} between all sets of tasks is the key innovation of our proposed omnidirectional learning approaches.} \subsection{Reference Algorithm Implementation Details}\label{app:archs} The same network architecture was used for all compared deep learning methods. Following~\citet{tolias_architecture}, the `base network architecture' consisted of five convolutional layers followed by two fully-connected layers each containing 2000 nodes with ReLU non-linearities and a softmax output layer. 
The convolutional layers had 16, 32, 64, 128 and 254 channels; each used batch-norm and a ReLU non-linearity, with a 3x3 kernel, a padding of 1, and a stride of 2 (except the first layer, which had a stride of 1). This architecture was used with a multi-headed output layer (i.e., a different output layer for each task) for all algorithms using a fixed-size network. For ProgNN and DF-CNN the same architecture was used for each column introduced for each new task, and in our \sct{Odin}\ this architecture was used for the transformers $u_t$ (see above). \section{Real Data Extended Results} \label{app:cifar} \begin{figure} \centering \includegraphics[width=\linewidth]{images/spectrogram.pdf} \caption{Spectrograms extracted from $8$ different recordings of $6$ speakers uttering the digit `$5$'. } \label{fig:spectrogram} \end{figure} \begin{figure} \centering \includegraphics[width=.75\linewidth]{images/language.pdf} \caption{Both \sct{Odif}~and \sct{Odin}~show positive forward and backward transfer for the spoken digit tasks.} \label{fig:language} \end{figure} \subsection{Spoken Digit Experiment} \begin{table}[ht] \caption{Hyperparameters for \sct{Odif}\ in the spoken digit experiment.} \label{tab:hyperparameter_table_language} \begin{tabular}{|l|l|} \hline \textbf{Hyperparameters} & \textbf{Value} \\ \hline n\_estimators ($275$ training samples per task) & $10$\\ \hline max\_depth & $30$\\ \hline max\_samples & $0.67$\\ \hline min\_samples\_leaf & $1$\\ \hline \end{tabular} \end{table} In this experiment, we used the spoken digit dataset provided at \url{https://github.com/Jakobovski/free-spoken-digit-dataset}. The dataset contains audio recordings from $6$ different speakers with $50$ recordings for each digit per speaker ($3000$ recordings in total). The experiment was set up with $6$ tasks, where each task contains recordings from only one speaker. 
For each recording, a spectrogram was extracted using Hanning windows of duration $16$ ms with an overlap of $4$ ms between adjacent windows. The spectrograms were resized down to $28 \times 28$. The extracted spectrograms from $8$ random recordings of `$5$' for $6$ speakers are shown in Figure \ref{fig:spectrogram}. For each Monte Carlo repetition of the experiment, the spectrograms extracted for each task were randomly divided into $55\%$ train and $45\%$ test sets. As shown in Figure \ref{fig:language}, both \sct{Odif}~and \sct{Odin}~show positive transfer between the spoken digit tasks. \subsection{CIFAR 10x10} \begin{table*} \caption{Task splits for CIFAR 10x10.} \label{tab:cifar-table} \begin{tabular}{|l|l|} \hline Task \# & Image Classes \\ \hline 1 & apple, aquarium fish, baby, bear, beaver, bed, bee, beetle, bicycle, bottle \\ \hline 2 & bowl, boy, bridge, bus, butterfly, camel, can, castle, caterpillar, cattle\\ \hline 3 & chair, chimpanzee, clock, cloud, cockroach, couch, crab, crocodile, cup, dinosaur\\ \hline 4 & dolphin, elephant, flatfish, forest, fox, girl, hamster, house, kangaroo, keyboard \\ \hline 5 & lamp, lawn mower, leopard, lion, lizard, lobster, man, maple tree, motorcycle, mountain \\ \hline 6 & mouse, mushroom, oak tree, orange, orchid, otter, palm tree, pear, pickup truck, pine tree \\ \hline 7 & plain, plate, poppy, porcupine, possum, rabbit, raccoon, ray, road, rocket \\ \hline 8 & rose, sea, seal, shark, shrew, skunk, skyscraper, snail, snake, spider \\ \hline 9 & squirrel, streetcar, sunflower, sweet pepper, table, tank, telephone, television, tiger, tractor \\ \hline 10 & train, trout, tulip, turtle, wardrobe, whale, willow tree, wolf, woman, worm \\ \hline \end{tabular} \captionsetup{justification=centering} \end{table*} Supplementary Table \ref{tab:cifar-table} shows the image classes associated with each task number. 
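The spectrogram pipeline just described can be sketched with SciPy. This is an illustrative reconstruction, not the original preprocessing code: the $8$ kHz sampling rate (the dataset's nominal rate), the log-power scaling, and the spline-based resize are our assumptions, and the function name is ours.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

def extract_spectrogram(recording, fs=8000, size=(28, 28)):
    """Hanning windows of 16 ms with 4 ms overlap, resized to 28x28."""
    nperseg = int(0.016 * fs)   # 16 ms window -> 128 samples at 8 kHz
    noverlap = int(0.004 * fs)  # 4 ms overlap ->  32 samples
    _, _, S = spectrogram(recording, fs=fs, window='hann',
                          nperseg=nperseg, noverlap=noverlap)
    S = np.log(S + 1e-10)       # log-power scaling (an assumed choice)
    # resize the (freq x time) array to the target size via interpolation
    return zoom(S, (size[0] / S.shape[0], size[1] / S.shape[1]))
```

Any resizing method (e.g., image-library bilinear resize) would serve equally well; `zoom` keeps the sketch dependency-light.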
Supplementary Figure~\ref{fig:sample5000} is the same as Figure~\ref{fig:cifar-10x10} but with 5,000 training samples per task, rather than 500. Notably, with 5,000 samples, replay methods are able to transfer both forward and backward as well, though recall they are considerably more computationally intensive than \sct{Odin}\ and \sct{Odif}. \begin{figure} \centering \includegraphics[width=\linewidth]{images/benchmark_5000.pdf} \caption{Performance of different algorithms on the CIFAR 10x10 vision dataset for $5$,$000$ training samples per task. \sct{Odin}\ maintains approximately the same forward transfer (top left and bottom left) and backward transfer (top center and bottom center) efficiency as those for $500$ samples per task, whereas other algorithms show reduced or nearly unchanged transfer. \sct{Odif}\ still demonstrates positive forward, backward, and final transfer, unlike most of the state-of-the-art algorithms, which demonstrate forgetting. The replay methods, however, do demonstrate transfer, albeit with significantly higher computational cost.} \label{fig:sample5000} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=.5\linewidth]{images/adversary_5000.pdf} \caption{\textbf{Label shuffle experiment on the CIFAR 10x10 vision dataset for $5$,$000$ training samples per task.} Shuffling class labels within tasks two through nine with 5000 samples each demonstrates that both \sct{Odif}~and \sct{Odin}~can still achieve positive backward transfer, and that the other algorithms that do not replay the previous task data fail to transfer. } \label{fig:cifar5000} \end{figure} \subsection{CIFAR Label Shuffling} Supplementary Figure~\ref{fig:cifar5000} shows the same result as the label shuffling from Figure~\ref{fig:cifar2}, but with 5,000 samples per task. The results for \sct{Odin}\ and \sct{Odif}\ are qualitatively similar, in that they transfer backwards. 
The replay methods are also able to transfer when using this larger number of samples, although with considerably higher computational cost. \subsection{CIFAR 10x10 Repeated Classes} We also considered the setting where each task is defined by a random sampling of 10 out of 100 classes with replacement. This environment is designed to demonstrate the effect of tasks with shared subtasks, which is a common property of real-world lifelong learning tasks. Supplementary Figure \ref{fig:overlapping} shows the transfer efficiency of \sct{Odif}\ and \sct{Odin}\ on Task 1. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{images/random_class.pdf} \caption{\sct{Odif}\ and \sct{Odin}\ transfer knowledge effectively when tasks share common classes. Each task is a random selection of 10 out of the 100 CIFAR-100 classes. Both \sct{Odif}\ and \sct{Odin}\ demonstrate monotonically increasing transfer efficiency for up to 30 tasks. } \label{fig:overlapping} \end{figure} \section{Background} \label{sec:background} \subsection{Classical Machine Learning} \label{sec:cml} Classical supervised learning \cite{Mohri2018-tf} considers random variables \sloppy $(X, Y) \sim P_{X, Y}$, where $X$ is an $\mc{X}$-valued input, $Y$ is a $\mc{Y}$-valued label (or response), and $ P_{X,Y} \in \mc{P}_{X, Y} $ is the joint distribution of $ (X, Y) $. Given a loss function $\ell: \mc{Y} \times \mc{Y} \rightarrow [0, \infty)$, the goal is to find the hypothesis (also called predictor or decision rule), $h: \mc{X} \rightarrow \mc{Y}$, that minimizes expected loss, or \textit{risk}, $ R(h) = \mbb{E}_{X, Y}\left[\ell(h(X), Y)\right].$ A learning algorithm (or rule) is a function $f$ that maps data sets ($n$ training samples) to a hypothesis, where a data set $\mathbf{S}_n = \{X_i, Y_i\}_{i = 1}^{n}$ is a set of $n$ input/response pairs. We assume the $n$ samples of $(X,Y)$ pairs are independently and identically distributed from some true but unknown $P_{X, Y}$~\cite{Mohri2018-tf}. 
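The risk $R(h)$ defined above can be made concrete with a short Monte Carlo sketch under 0-1 loss. The distribution (two unit-variance Gaussians) and the threshold hypothesis are toy assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Toy P_{X,Y}: Y ~ Bernoulli(1/2), and X | Y=y ~ N(2y - 1, 1)."""
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=2.0 * y - 1.0, scale=1.0, size=n)
    return x, y

def h(x):
    """A fixed hypothesis: threshold at zero (the Bayes rule for this toy P)."""
    return (x > 0).astype(int)

# R(h) = E[ 1{h(X) != Y} ], estimated by an i.i.d. Monte Carlo average;
# for this toy distribution the true risk is Phi(-1) ~ 0.159
x, y = sample(200_000)
risk_estimate = np.mean(h(x) != y)
```

Averaging such risk estimates over many independently drawn training sets $\mathbf{S}_n$ would likewise estimate the generalization error of a learner $f$.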
A learning algorithm is evaluated on its generalization error (or expected risk): $\mbb{E}\left[R(f(\mathbf{S}_n))\right],$ where the expectation is taken with respect to the true but unknown distribution governing the data, $P_{X,Y}$. The goal is to choose a learner $f$ that learns a hypothesis $h$ that has a small generalization error for the given task~\citep{bickel2015mathematical}. \subsection{Lifelong Learning} Lifelong learning generalizes classical machine learning in a few ways: (i) instead of one task, there is an environment $\mathcal{T}$ of (possibly infinitely) many tasks, (ii) data arrive sequentially, rather than in batch mode, and (iii) there are computational complexity constraints on the learning algorithm and hypotheses. This third requirement is crucial, though often implicit. Consider, for example, an algorithm that stores all the data and retrains everything from scratch each time a new sample arrives. Without computational constraints, such an algorithm could be classified as a lifelong learner; we do not think such a label is appropriate for that algorithm. The goal in lifelong learning is therefore, given new data and a new task, to use all the existing data to achieve a lower generalization error on the new task, while also using the new data to obtain a lower generalization error on previous tasks. This is distinct from classical online learning scenarios, because previously experienced tasks may recur, so we are concerned with maintaining and improving performance on those tasks as well. Previous work in lifelong learning falls loosely into two algorithmic camps: (i) continually updating a fixed parametric model as new tasks arrive, and (ii) adding resources as new tasks arrive. Some approaches additionally store or replay previously encountered data to reduce forgetting~\cite{robins1995catastrophic,shin2017continual,tolias_architecture}. 
In `task-aware' scenarios, the learner is aware of all task details for all tasks, meaning that the hypotheses are of the form $h:\mc{X}\times\mc{T}\rightarrow\mc{Y}$. In `task-unaware' (or task-agnostic~\cite{Zeno2018-hu}) scenarios, the learner may not know that the task has changed at all, which means that the hypotheses are of the form $h:\mc{X}\rightarrow\mc{Y}$. We only address task-aware scenarios here. \subsection{Reference algorithms} We compared our approaches to nine reference lifelong learning methods. These algorithms can be classified into two groups based on whether they build new resources or leverage fixed resources given new tasks. Among them, ProgNN~\cite{rusu2016progressive} and DF-CNN~\cite{Lee2019-eg} learn new tasks by building new resources. For {ProgNN}, a new `column' of network is introduced for each new task. In addition to introducing this column, lateral connections from all previous columns to the new column are added. These lateral connections are computationally costly, as explained below. Deconvolution-Factorized CNNs (DF-CNN)~\cite{Lee2019-eg} is a lifelong learning algorithm that improves upon {ProgNN}\ by introducing a knowledge base with lateral connections to each new column, thereby avoiding all pairwise connections and dramatically reducing computational costs. The other seven algorithms, EWC \cite{kirkpatrick2017overcoming}, Online-EWC \cite{schwarz2018progress}, SI \cite{zenke2017continual}, LwF \cite{li2017learning}, `None', and two variants of exact replay (Total Replay and Partial Replay)~\cite{rolnick2019experience}, all have fixed-capacity resources. For the first variant of exact replay, referred to as ``Total Replay'', we replay all the data from all previous tasks whenever a new task is encountered. In the lifelong learning literature this is typically called ``offline training''. 
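The two exact-replay baselines, replaying all stored data versus replaying a random subset of old samples equal in size to the new task's training set, reduce to a simple sampling rule, sketched below. This is our illustrative rendering, not the reference implementations; the function name and signature are ours.

```python
import numpy as np

def replay_batch(previous_tasks, X_new, y_new, total=True, rng=None):
    """Exact replay: mix new-task data with stored data from previous tasks.

    total=True  -> `Total Replay': replay ALL previous tasks' data.
    total=False -> `Partial Replay': replay a random subset of old samples,
                   the same size as the new task's training set.
    """
    if rng is None:
        rng = np.random.default_rng()
    X_old = np.concatenate([X for X, _ in previous_tasks])
    y_old = np.concatenate([y for _, y in previous_tasks])
    if not total:
        budget = min(len(X_new), len(X_old))  # cap if little old data remains
        idx = rng.choice(len(X_old), size=budget, replace=False)
        X_old, y_old = X_old[idx], y_old[idx]
    X = np.concatenate([X_new, X_old])
    y = np.concatenate([y_new, y_old])
    perm = rng.permutation(len(y))            # shuffle before training
    return X[perm], y[perm]
```

With `total=True` the training set grows with every task, which is why Total Replay's cumulative training time scales quadratically with the number of samples seen.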
Replaying everything, however, might not be needed~\cite{tolias_architecture}. For the second variant of exact replay, the amount of replay for each new task is fixed to the number of training samples in the new task, and the samples to be replayed are randomly selected from all the data of the previous tasks. For the baseline `None', the network was incrementally trained on all tasks in the standard way, always using only the data from the current task. The implementations of all the algorithms are adapted from open-source code~\cite{Lee2019-eg,Van_de_Ven2019-wy}; for implementation details, see Appendix~\ref{app:archs}. \section{Discussion} \label{sec:discussion} We introduced quasilinear representation ensembling as an approach to omnidirectional lifelong learning. The two specific algorithms we developed, \sct{Odif}\ and \sct{Odin}, demonstrate the possibility of achieving both forward and backward transfer by leveraging resources (representers) learned for other tasks, without undue computational burdens. Forest-based representation ensembling approaches can easily add new resources when appropriate. This work therefore further motivates work on deep learning methods that can dynamically add resources when appropriate~\cite{Yoon2017-jb}. To achieve backward transfer, \sct{Odif}~and \sct{Odin}~stored old data to vote on the newly learned transformers. Because the representation space scales quasilinearly with sample size, storing the data does not increase the computational complexity of the algorithm, which remains quasilinear. In contrast, ProgNN's representation space scales quadratically with sample size, rendering it less computationally efficient than merely storing all the data and retraining (which `Total Replay' does). Both ProgNN and Total Replay, however, have quadratic time complexity, unlike \sct{Odif}~and \sct{Odin}. Nonetheless, a natural extension of this work would obviate the need to store any data. 
While we employed representation ensembling to address catastrophic forgetting, the paradigm of ensembling \textit{representations} rather than \textit{learners} can be readily applied more generally. For example, `batch effects' (sources of variability unrelated to the scientific question of interest) have plagued many fields of inquiry, including neuroscience~\cite{Bridgeford2020-ay} and genomics~\cite{Johnson2007-qs}. Similarly, federated learning is becoming increasingly central in artificial intelligence, due to its importance in differential privacy~\cite{Dwork2008-sw}. This may be particularly important in light of global pandemics such as COVID-19, where combining small datasets across hospital systems could enable more rapid discoveries~\cite{Vogelstein2020-jq}. Finally, biological learning leverages ensembles of representations, so we hope this work motivates a tighter connection between biological and machine learning. By carefully designing experiments in which both behavior and the brain are observed while learning across sequences of tasks (possibly in multiple stages of neural development or degeneration), we may be able to learn more about how biological agents are able to learn omnidirectionally so efficiently, and transfer that understanding to building more effective artificial intelligences. In the meantime, our code, including code to reproduce the experiments in this manuscript, is available from \url{http://proglearn.neurodata.io/}. \subsection{Real data experiments} \label{sec:real} We consider two modalities for real data experiments: vision and language. Below we provide a detailed analysis of the performance of lifelong learning algorithms on vision data; Appendix~\ref{app:cifar} provides details for our language experiments, which have qualitatively similar results, illustrating that \sct{Odif}\ is a modality-agnostic, sample- and computationally-efficient lifelong learning algorithm. 
The CIFAR 100 challenge~\cite{krizhevsky2009learning} consists of 50,000 training and 10,000 test samples, each a 32x32 RGB image of a common object, from one of 100 possible classes, such as apples and bicycles. CIFAR 10x10 divides these data into 10 tasks, each with 10 classes~\cite{Lee2019-eg} (see Appendix~\ref{app:cifar} for details). We compare \sct{Odif}~and \sct{Odin}\ to the deep lifelong learning algorithms discussed above. The experiments below use only 500 training samples per task; see Supplementary Figure~\ref{fig:sample5000} for the corresponding results using 5,000 training samples per task. \subsubsection{Resource Growing Experiments} We first compare \sct{Odif}~and \sct{Odin}~to state-of-the-art resource growing algorithms: ProgNN and DF-CNN (Figure~\ref{fig:cifar-10x10}, top panels). Both \sct{Odif}~and \sct{Odin}~demonstrate positive forward transfer for every task (\sct{Odif}~increases nearly monotonically), indicating they are robust to distributional shift in ways that ProgNN and DF-CNN are not. \sct{Odin}~and \sct{Odif}~uniquely demonstrate positive backward transfer; \sct{Odin}'s backward transfer is actually monotonically increasing, indicating that with each new task, performance on all prior tasks increases (and \sct{Odif}~nearly monotonically increases BTE as well). In contrast, while neither ProgNN nor DF-CNN exhibits catastrophic forgetting, they also do not exhibit any positive backward transfer. Final transfer efficiency per task is the transfer efficiency associated with that task after having seen all the data. \sct{Odif}~and \sct{Odin}~both demonstrate positive final transfer efficiency for all tasks, whereas ProgNN and DF-CNN both exhibit negative final transfer efficiency for at least one task. \subsubsection{Resource Constrained Experiments} It is possible that the above algorithms are leveraging additional resources to improve performance without meaningfully transferring information between representations. 
To address this concern, we devised a `resource constrained' variant of \sct{Odif}. In this constrained variant, we compare the lifelong learning algorithm to its single-task variant, but ensure that they both have the same amount of resources. For example, on Task 2, we would compare \sct{Odif}~with 20 trees (10 trained on 500 samples from Task 1, and another 10 trained on 500 samples from Task 2) to \sct{RF}\ with 20 trees (all trained on 500 samples from Task 2). If \sct{Odif}~is able to meaningfully transfer information across tasks, then its resource-constrained FTE and BTE will still be positive. Indeed, FTE remains positive after enough tasks, and BTE is actually invariant to this change (Figure~\ref{fig:cifar-10x10}, bottom left and center). In contrast, all of the reference algorithms that have fixed resources exhibit negative forward and backward transfer. Moreover, the reference algorithms also all exhibit negative final transfer efficiency on each task, whereas our resource-constrained \sct{Odif}\ maintains positive final transfer on every task (Figure~\ref{fig:cifar-10x10}, top right). Interestingly, when using 5,000 samples per task, replay methods are able to demonstrate positive forward and backward transfer (Supplementary Figure~\ref{fig:sample5000}), although they require quadratic time. Note that in this experiment, building the single-task learners actually required substantially \textit{more} resources, specifically, $10+20+\cdots+100=550$ trees, as compared with only $100$ trees in the prior experiments. In general, ensuring that single-task learners use the same amount of resources per task as omnidirectional learners requires $\tmc{O}(n^2)$ resources, whereas \sct{Odif}~requires only $\tmc{O}(n)$, a polynomial reduction in resources. \subsubsection{Resource Recruiting Experiments} The binary distinction we made above, that algorithms either build resources or reallocate them, is a false dichotomy, and biologically unnatural. 
In biological learning, systems develop from building (juvenile) to recruiting (adult) resources. We therefore trained \sct{Odif}~on the first nine CIFAR 10x10 tasks using 50 trees per task, with 500 samples per task. For the tenth task, we could (i) select the 50 trees (out of the 450 existing trees) that perform best on task 10 (recruiting), (ii) train 50 new trees, as \sct{Odif}~would normally do (building), (iii) build 25 and recruit 25 trees (hybrid), or (iv) ignore all prior trees (\sct{RF}). \sct{Odif}~outperforms the other approaches except when 5,000 training samples are available, where the recruiting approach is nearly as good as \sct{Odif}~(Figure~\ref{fig:cifar-10x10}, bottom right). This result motivates future work on strategies for optimally leveraging existing resources given a new task, as well as on task-unaware settings. \subsubsection{Adversarial Experiments} Consider the same CIFAR 10x10 experiment above, but, for tasks two through nine, randomly permute the class labels within each task, rendering each of those tasks adversarial with regard to the first task (because the labels are uninformative). Figure~\ref{fig:cifar2}A indicates that backward transfer efficiency for both \sct{Odif}~and \sct{Odin}~is invariant to such label shuffling (the other algorithms also seem invariant to label shuffling, but did not demonstrate positive backward transfer). Now, consider a Rotated CIFAR experiment, which uses only data from the first task, divided into two equally sized subsets (making two tasks), where the second subset is rotated by different amounts (Figure~\ref{fig:cifar2}, right). The transfer efficiency of both \sct{Odif}~and \sct{Odin}~is nearly invariant to rotation angle, whereas the other approaches are far more sensitive to rotation angle. Note that a zero rotation angle corresponds to the two tasks \textit{having identical distributions}. 
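The label-shuffle adversary above is simple to state in code: within each affected task, the label vector is permuted across samples, so the labels carry no information about the inputs while the input distribution is untouched. A minimal sketch, with function names of our choosing:

```python
import numpy as np

def make_adversarial_tasks(tasks, rng=None):
    """Permute labels across samples within every task after the first.

    The shuffled labels are uninformative about the inputs, so any measured
    backward transfer to task 1 cannot come from label structure in the
    shuffled tasks."""
    if rng is None:
        rng = np.random.default_rng()
    out = [tasks[0]]                       # task 1 is left intact
    for X, y in tasks[1:]:
        out.append((X, rng.permutation(y)))  # permuted copy of the labels
    return out
```

Note that the permutation preserves the class proportions of each task; only the input-label pairing is destroyed.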
\begin{figure}[htbp] \centering \includegraphics[width=.8\linewidth]{images/adversary.pdf} \caption{\textbf{Extended CIFAR 10x10 experiments.} (\textit{A}) Shuffling class labels within tasks two through nine with 500 samples each demonstrates that both \sct{Odif}~and \sct{Odin}~can still achieve positive backward transfer, and that the other algorithms still fail to transfer. (\textit{B}) \sct{Odif}~and \sct{Odin}~are nearly invariant to rotations, whereas other approaches are more sensitive to rotation. } \label{fig:cifar2} \end{figure} \section{Introduction} \label{sec:introduction} Learning is the process by which an intelligent system improves performance on a given task by leveraging data~\cite{mitchell1999machine}. Biological learning is lifelong, with agents continually building on past knowledge and experiences, improving on many tasks given data associated with any task. For example, learning a second language often improves performance in an individual's native language~\cite{Zhao2016-vo}. In classical machine learning, the system often starts with essentially zero knowledge, a ``tabula rasa'', and is optimized for a single task~\cite{Vapnik1971-um,Valiant1984-dx}. While it is relatively easy to \textit{simultaneously} optimize for multiple tasks (multi-task learning)~\cite{caruana1997multitask}, it has proven much more difficult to \textit{sequentially} optimize for multiple tasks~\cite{thrun1996learning, Thrun2012-sj}. Specifically, classical machine learning systems, and natural extensions thereof, exhibit ``catastrophic forgetting'' when trained sequentially, meaning their performance on prior tasks drops precipitously upon training on new tasks~\cite{mccloskey1989catastrophic,mcclelland1995there}. This is in contrast to many biological learning settings, such as the second-language learning setting mentioned above. Over the past 30 years, a number of sequential task learning algorithms have attempted to overcome catastrophic forgetting. 
These approaches naturally fall into one of two camps. In one, the algorithm has fixed resources, and so must reallocate resources (essentially compressing representations) in order to incorporate new knowledge~\cite{kirkpatrick2017overcoming,zenke2017continual,li2017learning,schwarz2018progress,Finn2019-yv}. Biologically, this corresponds to adulthood, where brains have a nearly fixed or decreasing number of cells and synapses. In the other, the algorithm adds (or builds) resources as new data arrive~\cite{Ruvolo2013-hk, rusu2016progressive,Lee2019-eg}. Biologically, this corresponds to development, where brains grow by adding cells, synapses, etc. Approaches from both camps demonstrate some degree of continual (or lifelong) learning~\cite{parisi2019continual}. In particular, they can sometimes learn new tasks while not catastrophically forgetting old tasks. However, as we will show, many state-of-the-art lifelong learning algorithms are unable to transfer knowledge forward, and none are able to transfer knowledge backward with small sample sizes, where it is particularly important. This inability to transfer omnidirectionally has been identified as one of the key obstacles limiting the capabilities of artificial intelligence~\cite{Pearl2019-bp,Marcus2019-dj}. We present an approach to lifelong learning called ``omnidirectional learning''. Omnidirectional learning algorithms build on the ideas introduced in Progressive Neural Networks ({ProgNN})~\cite{rusu2016progressive}, in which new tasks yield additional representational capacity. However, although {ProgNN}s are able to transfer forward, they fail to transfer backward. Moreover, as we will show, {ProgNN}\ requires quadratic space and time complexity in sample size. Our key innovation is the introduction of representation ensembling, which enables omnidirectional transfer via an ``omni-voter'' layer, reducing computational time and space from quadratic to quasilinear (i.e., linear up to polylog terms). 
We implement two complementary omnidirectional learning algorithms, one based on decision forests (Omnidirectional Forests, \sct{Odif}), and another based on deep networks (Omnidirectional Networks, \sct{Odin}). Both \sct{Odif}\ and \sct{Odin}\ demonstrate forward and backward transfer, while maintaining computational efficiency. Simulations illustrate their learning capabilities, including performance properties in the presence of adversarial tasks. We then demonstrate their learning capabilities in vision and language benchmark applications. Although the omnidirectional algorithms presented here are primarily resource building, we illustrate that they can effectively leverage prior representations. This ability implies that the algorithm can convert from a ``juvenile'' resource building state to the ``adult'' resource recruiting state -- all while maintaining key omnidirectional learning capabilities and efficiencies. \section{Omnidirectional Algorithms}\label{sec:algorithms} \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{images/learning_schema_new.pdf} \caption{Schemas of composable hypotheses. Ensembling voters is a well-established practice, including random forests and gradient boosted trees. Ensembling representations was previously used in lifelong learning scenarios, but without connections from future tasks to past ones. We introduce such connections, thereby enabling backward transfer. } \label{fig:schematic} \end{figure} Our approach to lifelong learning relies on hypotheses that can be decomposed into three constituent parts: $ h(\cdot) = w \circ v \circ u(\cdot) $ (Figure~\ref{fig:schematic}A). The representer, $u: \mc{X} \mapsto \tilde{\mc{X}}$, maps an $\mc{X}$-valued input into an internal representation space $\tilde{\mc{X}}$~\cite{Vaswani2017-lq,Devlin2018-lk}. The voter $v: \tilde{\mc{X}} \mapsto \Delta_{\mc{Y}}$ maps the transformed data into a posterior distribution (or, more generally, a score) on the response space $\mc{Y}$.
Finally, a decider $w: \Delta_{\mc{Y}} \mapsto \mc{Y}$ produces a predicted label.\footnote{In coding theory, these three functions are frequently called the encoder, channel, and decoder, respectively~\cite{Cover2012-sl,Cho2014-ew}.} See Appendix~\ref{app:example} for a concrete example using a decision tree. One can generalize the above decomposition by allowing for multiple representers. Given $B$ different representers, one can attach a single voter to each representer, yielding $B$ different voters (Figure~\ref{fig:schematic}B). Doing so requires generalizing the definition of a decider, which would operate on multiple voters. The decider is then said to \textit{ensemble the voters}. This is the learning paradigm behind boosting~\cite{Freund1995-md} and bagging~\cite{Breiman1996-yz}---indeed, decision forests are a canonical example of a decision function operating on a collection of $ B $ outputs~\cite{breiman2001random}. A decision forest learns $B$ different decision trees, each of which has a tree structure corresponding to a representer. Each tree is assigned a voter that outputs that single tree's guess as to the probability that an observation is in any class. The decider outputs the most likely class averaged over the trees. A further generalization of the above decomposition allows for \textit{each voter to ensemble the representers} (Figure~\ref{fig:schematic}C). Doing so requires the introduction of an \textit{omni-voter} layer, which is formally distinct from the voter function described above that operates solely on a single representer. The omni-voter ensembles all the existing representations, regardless of the order in which they were learned. In this scenario, as with bagging and boosting, the ensemble of voters then feeds into the single decider. When each representer has learned complementary representations, this latter approach has certain appealing properties, particularly in multiple task scenarios, including lifelong learning.
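The representer/voter/decider decomposition and the omni-voter ensembling described above can be sketched structurally in a few lines. This is a hypothetical minimal implementation, not the paper's code: the quadrant representer and count-based posterior voter are illustrative stand-ins for learned forests or networks.

```python
from collections import defaultdict

class OmniEnsemble:
    """Structural sketch of representation ensembling: each task adds an
    independent representer; each task's voters span ALL representers,
    and the decider averages their posteriors."""

    def __init__(self, learn_representer, learn_voter):
        self.learn_representer = learn_representer  # (X, y) -> u
        self.learn_voter = learn_voter              # (u(X), y) -> v
        self.representers = []                      # grows by one per task
        self.voters = {}                            # task -> one voter per representer

    def _learn_voters(self, X, y):
        return [self.learn_voter([u(x) for x in X], y)
                for u in self.representers]

    def add_task(self, task, X, y):
        self.representers.append(self.learn_representer(X, y))
        # The new task's voters see old representers too: forward transfer.
        self.voters[task] = self._learn_voters(X, y)

    def update_task(self, task, X, y):
        # An old task re-votes through representers added later: backward transfer.
        self.voters[task] = self._learn_voters(X, y)

    def predict(self, task, x):
        # Decider: average the per-representer posteriors, take the argmax.
        avg = defaultdict(float)
        for u, v in zip(self.representers, self.voters[task]):
            for label, p in v(u(x)).items():
                avg[label] += p
        return max(avg, key=avg.get)

# Toy stand-ins for a learned forest/network and its leaf posteriors:
def quadrant_representer(X, y):
    return lambda x: (x[0] >= 0, x[1] >= 0)   # map a 2-D point to its quadrant

def posterior_voter(X_tilde, y):
    counts = defaultdict(lambda: defaultdict(int))
    for xt, yi in zip(X_tilde, y):
        counts[xt][yi] += 1
    def v(xt):
        c = counts.get(xt, {})
        n = sum(c.values()) or 1
        return {label: k / n for label, k in c.items()}
    return v

ens = OmniEnsemble(quadrant_representer, posterior_voter)
ens.add_task("xor",  [(1, 1), (-1, -1), (1, -1), (-1, 1)], [0, 0, 1, 1])
ens.add_task("xnor", [(1, 1), (-1, -1), (1, -1), (-1, 1)], [1, 1, 0, 0])
# Backward transfer: the xor voters now ensemble both representers.
ens.update_task("xor", [(1, 1), (-1, -1), (1, -1), (-1, 1)], [0, 0, 1, 1])
```

With this plumbing, `update_task` is all that backward transfer requires: once a later task has contributed a representer, re-learning an earlier task's voters lets that task vote through it.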
See Appendix~\ref{app:compositional} for a concrete example. We developed two different omnidirectional learning algorithms that ensemble representations (see Appendix~\ref{app:progl_algs} for full details, including pseudocode). Omnidirectional Forest (\sct{Odif}) uses decision forests as the representers, specifically a variant of decision forests called `Uncertainty Forest'~\cite{Guo2019-xe}. An Omnidirectional Network (\sct{Odin}) uses deep networks as the representers. In either case, as new data from a new task arrive, our algorithms first build a new independent representer (using forests or networks). Then, the algorithm builds the voter for this new task, which integrates information across all existing representers, thereby enabling forward transfer. If new data arrive from an old task, the algorithm can leverage the new representers to update the voters from the old tasks, thereby enabling backward transfer. In either case, new test data are passed through all existing representers and corresponding voters to make a prediction. \sct{Odin}\ was motivated by ProgNN, but differs from ProgNN in two key ways. First, recall that ProgNN builds a new neural network `column' for each new task, and also builds lateral connections between the new column and all previous columns. In contrast, \sct{Odin}\ excludes those lateral connections, thereby greatly reducing the number of parameters and training time. Moreover, this makes each representation independent, thereby potentially avoiding interference across representations. Second, for inference on task $j$ data, assuming we have observed tasks up to $J > j$, ProgNN only leverages representations learned from tasks up to $j$, thereby excluding tasks $j+1, \ldots, J$. In contrast, \sct{Odin}\ leverages representations from all $J$ tasks. This difference enables backward transfer. \sct{Odif}\ adds yet another difference by replacing the deep network representers with random forest representers.
This has the effect of making the capacity, space and time complexity scale with the complexity and sample size of each task. In contrast, both ProgNN and \sct{Odin}\ have a fixed capacity for each task, even if the tasks have very different sample sizes and complexities. \section{Results} \subsection{A computational taxonomy of lifelong learning} \label{sec:taxonomy} \begin{table}[ht] \caption{Capacity, space, and time constraints of various lifelong learning algorithms. We show soft-O notation ($\tmc{O}(\cdot, \cdot)$ defined in main text) as a function of $n$ and $T$, as well as the common setting where $n$ is proportional to $T$. Our omnidirectional algorithms are the only algorithms whose representation space grows, but sub-quadratically with $n$ or $T$, and \sct{Odif}\ is the only algorithm whose time complexity is linear in $n$ for learning the representation.} \label{tab:tax} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \textbf{Parametric} & \textbf{Capacity} & \multicolumn{2}{c|}{\textbf{Space}} & \multicolumn{2}{c|}{\textbf{Time}} & \textbf{Examples}\\ \cline{2-6} & ($n,T$) & ($n,T$) & ($n \propto T$) & $(n,T)$ & $(n \propto T)$ & \\ \hline parametric & $1$ & $T$ & $n$ & $nT$ & $n^2$ & EWC\\ \hline parametric & $1$ & $1$ & $1$ & $n $ & $n$ & O-EWC, SI, LwF\\ \hline \hline parametric & $1$ & $n$ & $n$ & $nT$ & $n^2$ & Total Replay \\ \hline \hline semiparametric & $T$ & $T^2$ & $n^2$ & $n T$ & $n^2$ & ProgNN\\ \hline semiparametric & $T$ & $T$ & $n$ & $n$ & $n$ & DF-CNN\\ \hline \hline \textcolor{red}{semiparametric} & \textcolor{red}{$T$} & \textcolor{red}{$T+n$} & \textcolor{red}{$n$} & \textcolor{red}{$n$} & \textcolor{red}{$n$} & \textcolor{red}{\sct{Odin}} \\ \hline \textcolor{red}{nonparametric} & \textcolor{red}{$n$} & \textcolor{red}{$n$} & \textcolor{red}{$ n$} & \textcolor{red}{$n$} & \textcolor{red}{$n$} & \textcolor{red}{\sct{Odif}} \\ \hline \end{tabular} \end{table} Lifelong learning approaches can be divided into those with fixed resources,
and those with growing resources. We therefore quantify the computational space and time complexity of the internal representation of a number of algorithms, using both theoretical analysis and empirical investigations. We also study the representation capacity of these algorithms. We use the soft-O notation $\tilde{\mc{O}}$ to quantify complexity~\cite{Van_Rooij2019-hu}. Letting $n$ be the sample size and $T$ be the number of tasks, we write that a lifelong learning algorithm is $f(n,T) = \tilde{\mc{O}}(g(n,T))$ when $|f|$ is bounded above asymptotically by a function $g$ of $n$ and $T$ up to a constant factor and polylogarithmic terms. Table~\ref{tab:tax} summarizes the capacity, space and time complexity of several reference algorithms, as well as our \sct{Odin}~and \sct{Odif}. For the deep learning methods, we assume that the number of iterations is proportional to the number of samples. For space and time complexity, the table shows results as a function of $n$ and $T$, as well as the common scenario where sample size per task is fixed and therefore proportional to the number of tasks, $n \propto T$. Fixed resource lifelong learning methods are parametric, in that the representational capacity is invariant to sample size and task number, and have computational space complexity of $\tilde{\mc{O}}(1)$~\cite{bickel2015mathematical}. Given a sufficiently large number of tasks, without placing constraints on the relationship between the tasks, eventually all parametric methods will catastrophically forget at least some things. EWC, Online EWC, SI, and LwF are all examples of parametric lifelong learning algorithms. Semi-parametric algorithms are algorithms whose representational capacity grows slower than sample size. For example, if $T$ is increasing slower than $n$ (e.g., $T \propto \log n$), then algorithms whose capacity is proportional to $T$ are semi-parametric.
{ProgNN}\ is semi-parametric with space complexity $\tilde{\mc{O}}(T^2)$ due to the lateral connections. Moreover, the time complexity for {ProgNN}\ also scales quadratically with $n$ when $n \propto T$. Thus, an algorithm that literally stores all the data it has ever seen, and retrains a fixed size network on all that data with the arrival of each new task, would have smaller space complexity and the same time complexity as {ProgNN}. For comparison, we implement such an algorithm and refer to it as {Total Replay}. DF-CNN improves upon {ProgNN}\ by introducing a knowledge base with lateral connections to each new column, thereby avoiding all pairwise connections. Because these semi-parametric methods have a fixed representational capacity per task, they will either lack the representation capacity to perform well given sufficiently complex tasks, and/or will waste resources for very simple tasks. \sct{Odin}\ eliminates the lateral connections between columns of the network, thereby reducing space complexity down to $\tmc{O}(T)$. \sct{Odin}\ stores all the data to enable backwards transfer, but retains linear time complexity. \sct{Odif}\ is the only non-parametric lifelong learning algorithm to our knowledge. Its capacity, space and time complexity are all $\tilde{\mc{O}}(n)$, meaning that its representational capacity naturally increases with the complexity of each task. \subsection{Illustrating Omnidirectional Learning with \sct{Odif}} \label{sec:simulations} \begin{figure} \centering \includegraphics[width=\linewidth]{images/parity_exp.pdf} \caption{\textbf{Omnidirectional Forests demonstrate forward and backward transfer.} (\textit{A}) 750 samples from: (\textit{Ai}) Gaussian XOR, (\textit{Aii}) XNOR, which has the same optimal discriminant boundary as XOR, and (\textit{Aiii}) R-XOR, which has a discriminant boundary that is uninformative, and therefore adversarial, to XOR. 
(\textit{Bi}) Generalization error for XOR, and (\textit{Bii}) XNOR of both \sct{Odif}\ (red) and \sct{RF}\ (green). \sct{Odif}\ outperforms \sct{RF}\ on XOR when XNOR data are available, and on XNOR when XOR data are available. (\textit{Biii}) Forward and backward transfer efficiency of \sct{Odif}\ are positive for all sample sizes, and are negative for all sample sizes for \sct{RF}. (\textit{Ci}) In an adversarial task setting (XOR followed by R-XOR), \sct{Odif}\ gracefully forgets XOR while positively forward transferring to R-XOR, whereas \sct{RF}\ demonstrates catastrophic forgetting and interference. (\textit{Cii}) log BTE with respect to XOR is positive when the optimal decision boundary of $\theta$-XOR is similar to that of XOR (e.g., angles near $0^\circ$ and $90^\circ$), and negative when the discriminant boundary is uninformative, and therefore adversarial, to XOR (e.g., angles near $45^\circ$). (\textit{Ciii}) BTE increases monotonically with respect to sample size for XOR versus $25^\circ$-XOR. } \label{fig:xor-nxor} \end{figure} \subsubsection{Omnidirectional learning in a simple environment} Consider a very simple two-task environment: Gaussian XOR and Gaussian Exclusive NOR (XNOR) (Figure \ref{fig:xor-nxor}A, see Appendix~\ref{app:sim} for details). The two tasks share the exact same discriminant boundaries: the coordinate axes. Thus, transferring from one task to the other merely requires learning a bit flip. We draw 750 samples from XOR, followed by another 750 from XNOR. \sct{Odif}\ and random forests (\sct{RF}) achieve the same generalization error on XOR when training with XOR data (Figure \ref{fig:xor-nxor}Bi). But because \sct{RF}\ does not account for a change in task, when XNOR data appear, \sct{RF}\ performance on XOR gets worse and worse. In contrast, \sct{Odif}\ continues to improve on XOR given XNOR data, demonstrating backwards transfer. Now consider the generalization error on \textit{XNOR} (Figure \ref{fig:xor-nxor}Bii).
Both \sct{Odif}\ and \sct{RF}\ are at chance levels when only XOR data are available. When XNOR data are available, \sct{RF}\ must unlearn everything it learned from the XOR data, and thus its performance on XNOR starts out nearly maximally inaccurate, and quickly improves. On the other hand, because \sct{Odif}\ can leverage the representer learned using the XOR data, upon getting \textit{any} XNOR data, it immediately performs quite well, and then continues to improve with further XNOR data, demonstrating forward transfer (Figure \ref{fig:xor-nxor}Biii). \sct{Odif}\ demonstrates positive forward and backward transfer for all sample sizes, whereas \sct{RF}\ fails to demonstrate forward or backward transfer, and eventually catastrophically forgets the previous tasks. \subsubsection{Omnidirectional learning in adversarial environments} Statistics has a rich history of \textit{robust learning}~\cite{huber1996robust}, and machine learning has recently focused on \textit{adversarial learning}~\cite{Bruna2013-iq}. However, in both cases the focus is on adversarial \textit{examples}, rather than adversarial \textit{tasks}. In the context of omnidirectional learning, we informally define a task $t$ to be adversarial with respect to task $t'$ if the true joint distribution of task $t$, without any domain adaptation, impedes performance on task $t'$. In other words, training data from task $t$ can only add noise, rather than signal, for task $t'$. An adversarial task for Gaussian XOR is Gaussian XOR rotated by $45^\circ$ (R-XOR) (Figure~\ref{fig:xor-nxor}Aiii). Training on R-XOR therefore impedes the performance of \sct{Odif}\ on XOR, and thus backward transfer falls below one, demonstrating graceful forgetting (Figure \ref{fig:xor-nxor}Ci). 
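The simulation distributions above can be sketched with a short generator. The class means $(\pm 1, \pm 1)$ and variance below are assumed values for illustration, not necessarily the paper's exact parameters; setting \texttt{angle\_deg=0} gives XOR, \texttt{45} the maximally adversarial R-XOR, and flipping the labels gives XNOR.

```python
import math
import random

def gaussian_xor(n, angle_deg=0.0, var=0.25, seed=0):
    """Sample n labeled points from a rotated Gaussian XOR.

    Class 0 is a mixture of Gaussians centered at (1, 1) and (-1, -1);
    class 1 at (1, -1) and (-1, 1); the whole distribution is then
    rotated by angle_deg. angle_deg=90 yields the same discriminant
    boundary as XOR (the coordinate axes)."""
    rng = random.Random(seed)
    th = math.radians(angle_deg)
    X, y = [], []
    for _ in range(n):
        label = rng.randrange(2)
        sx = rng.choice((1, -1))
        sy = sx if label == 0 else -sx   # class 0: equal signs; class 1: opposite
        px = rng.gauss(sx, math.sqrt(var))
        py = rng.gauss(sy, math.sqrt(var))
        # rotate the sampled point by th
        X.append((px * math.cos(th) - py * math.sin(th),
                  px * math.sin(th) + py * math.cos(th)))
        y.append(label)
    return X, y

X, y = gaussian_xor(200, angle_deg=0.0)
```

At angle zero, class-0 points mostly fall in the quadrants where the coordinates share a sign, which is what makes the axes the optimal discriminant boundary.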
Because R-XOR is more difficult than XOR for \sct{Odif}\ (because the discriminant boundaries are oblique~\cite{Tomita2020-xe}), and because the discriminant boundaries are learned imperfectly with finite data, data from XOR can actually improve performance on R-XOR, and thus forward transfer is positive. In contrast, both forward and backward transfer are negative for \sct{RF}. To further investigate this relationship, we designed a suite of R-XOR examples, varying the rotation angle $\theta$ between $0^\circ$ and $360^\circ$, sampling 100 points from XOR, and another 100 from each R-XOR (Figure~\ref{fig:xor-nxor}Cii). As the angle increases from $0^\circ$ to $45^\circ$, log BTE flips from positive ($\approx 0.30$) to negative ($\approx -0.06$). The $45^\circ$-XOR is the maximally adversarial R-XOR. Thus, as the angle further increases, log BTE increases back up to $\approx 0.30$ at $90^\circ$, which has an identical discriminant boundary to XOR. Moreover, when $\theta$ is fixed at $25^\circ$, BTE monotonically increases with sample size (Figure~\ref{fig:xor-nxor}Ciii). Together, these experiments indicate that the amount of transfer can be a complicated function of (i) the difficulty of learning good representations for each task, (ii) the relationship between the two tasks, and (iii) the sample size of each. Appendix~\ref{app:sim} further investigates this phenomenon in a multi-spiral environment. \section{Evaluation Criteria} \label{sec:evaluation-ceiterion} Others have previously introduced criteria to evaluate transfer, including forward and backward transfer~\cite{LopezPaz2017GradientEM,Benavides-Prado2018-nv}. These definitions typically compare the difference, rather than the ratio, between learning with and without transfer. Pearl~\cite{Pearl2019-bp} introduced the transfer benefit ratio, which builds directly off relative efficiency from classical statistics \cite{bickel2015mathematical}. Our definitions are closely related to his. 
\textit{Transfer efficiency} is the ratio of the generalization error of (i) an algorithm that has learned only from data associated with a given task, to (ii) the same learning algorithm that also has access to other data. Let $R^t$ be the risk associated with task $t$, and $\mathbf{S}_n^t$ be the data from $\mathbf{S}_n$ that is specifically associated with task $t$, so $R^t(f(\mathbf{S}_n^t))$ is the risk on task $t$ of the hypothesis learned by $f$ only on task $t$ data, and $R^t(f(\mathbf{S}_n))$ denotes the risk on task $t$ of the hypothesis learned on all the data. \begin{Def}[Transfer Efficiency] The transfer efficiency of algorithm $ f $ for given task $ t $ with sample size $n$ is $\mathsf{TE}_n^t(f) := \mbb{E}\left[{R^t\left(f(\mathbf{S}_n^t)\right)}\right]/\mbb{E}\left[{R^t\left(f(\mathbf{S}_n)\right)}\right] $. We say that algorithm $ f $ has transfer learned for task $t$ with data $\mathbf{S}_n$ if and only if $ \mathsf{TE}_n^t(f) > 1 $. \end{Def} To evaluate a lifelong learning algorithm while respecting the streaming nature of the tasks, it is convenient to consider two extensions of transfer efficiency. \textit{Forward} transfer efficiency is the expected ratio of the risk of the learning algorithm with (i) access only to task $t$ data, to (ii) access to the data up to and including the last observation from task $ t $. This quantity measures the relative effect of previously seen out-of-task data on the performance on task $ t $. Formally, let $N^t = \max\{i: T_i = t\}$, be the index of the last occurrence of task $t$ in the data sequence. Let $\mathbf{S}_n^{<t} = \{(X_1, Y_1, T_1), ..., (X_{N^t}, Y_{N^t}, T_{N^t}) \}$ be all data up to and including that data point. \begin{Def}[Forward Transfer Efficiency] The forward transfer efficiency of $ f $ for task $t$ given $ {n} $ samples is $\mathsf{FTE}_n^t(f) := \mbb{E}\left[R^{t}\left(f(\mathbf{S}_n^t)\right)\right] / \mbb{E}\left[R^t\left(f(\mathbf{S}_n^{<t})\right)\right]$. 
\end{Def} We say an algorithm (positive) forward transfers for task $t$ if and only if $\mathsf{FTE}_n^t{(f)} >1$. In other words, if $\mathsf{FTE}_n^t{(f)} >1$, then the algorithm has used data associated with past tasks to improve performance on task $t$. One can also determine the rate of \textit{backward} transfer by comparing $R^t\left(f(\mathbf{S}_n^{< t})\right)$ to the risk of the hypothesis learned having seen the entire training dataset. More formally, backward transfer efficiency is the expected ratio of the risk of the learned hypothesis with (i) access to the data up to and including the last observation from task $ t $, to (ii) access to the entire dataset. Thus, this quantity measures the relative effect of future task data on the performance on task $ t $. \begin{Def}[Backward Transfer Efficiency] The backward transfer efficiency of $ f $ for task $t $ given $ {n} $ samples is $\mathsf{BTE}_n^t(f) := \mbb{E}\left[R^{t}\left(f(\mathbf{S}_n^{< t})\right)\right] / \mbb{E}\left[R^t\left(f(\mathbf{S}_n)\right)\right]$. \end{Def} We say an algorithm (positive) backward transfers for task $t$ if and only if $\mathsf{BTE}_n^t{(f)} >1$. In other words, if $\mathsf{BTE}_n^t{(f)} >1$, then the algorithm has used data associated with future tasks to improve performance on previous tasks. After observing $m$ tasks, the extent to which the $\mathsf{TE}$ for the $j^{th}$ task comes from forward transfer versus from backwards transfer depends on the order of the tasks. If we have a sequence in which tasks do not repeat, transfer efficiency for the first task is all backwards transfer, for the last task it is all forwards transfer, and for the middle tasks it is a combination of the two. 
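As a concrete illustration, the three expected risks appearing in these definitions combine into $\mathsf{TE}$, $\mathsf{FTE}$, and $\mathsf{BTE}$ as follows; the risk values in the example are illustrative placeholders, not results from the paper.

```python
def transfer_efficiencies(risk_single, risk_upto, risk_all):
    """Compute (TE, FTE, BTE) from Monte-Carlo risk estimates for task t.

    risk_single: E[R^t(f(S_n^t))]    -- trained on task-t data only
    risk_upto:   E[R^t(f(S_n^{<t}))] -- data up to the last task-t sample
    risk_all:    E[R^t(f(S_n))]      -- the entire dataset
    TE factorizes as FTE * BTE; values > 1 indicate positive transfer."""
    fte = risk_single / risk_upto
    bte = risk_upto / risk_all
    return fte * bte, fte, bte

# Placeholder risks: past tasks and future tasks each reduce the error on t.
te, fte, bte = transfer_efficiencies(0.30, 0.25, 0.20)
# TE = 1.5, FTE = 1.2, BTE = 1.25 (up to float rounding)
```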
In general, $\mathsf{TE}$ factorizes into $\mathsf{FTE}$ and $\mathsf{BTE}$: \begin{align*} \mathsf{TE}_n^t(f) = \frac{\mbb{E}\left[{R^t\left(f(\mathbf{S}_n^t)\right)}\right]}{\mbb{E}\left[{R^t\left(f(\mathbf{S}_n)\right)}\right]} = \frac{\mbb{E}\left[R^{t}\left(f(\mathbf{S}_n^t)\right)\right]} {\mbb{E}\left[R^t\left(f(\mathbf{S}_n^{<t})\right)\right]} \times \frac{\mbb{E}\left[R^{t}\left(f(\mathbf{S}_n^{< t})\right)\right]} {\mbb{E}\left[R^t\left(f(\mathbf{S}_n)\right)\right]}. \end{align*} Throughout, we will report log $\mathsf{TE}$, so that positive values correspond to $\mathsf{TE}>1$. \section{A General Lifelong Learning Algorithm} \label{app:LL} To explain the general lifelong learning algorithm, we first describe a general honest learning algorithm. There are two generic kinds of functions for honest learning: \begin{enumerate} \item \sct{LearnTransformer} takes data as input and outputs a \sct{Transformer} function that maps $ \mc{X} $ to $ \mtc{X}$, so, \sct{Transformer}$(x)=\mt{x}$. \item \sct{LearnDecider} takes transformed data as input and outputs a \sct{Decider} function that maps $\mtc{X}$ to an action, that is, \sct{Decider}$(\mt{x}) \in \mc{A}$. \end{enumerate} \noindent \begin{algorithm}[] \caption{Honest Learner} \begin{algorithmic} \State A generic description for constructing an honest or transfer learning algorithm. It assumes that the resulting decision function can be described as a composition of a transformer and a decider. \Require \\ \begin{itemize} \item All the data, $\mc{D}_n = (x_i,y_i,j_i)$ for $i \in [n]$, where $j_i$ denotes group or task membership. For honest learners, $j_i$ denotes whether the sample is in the structure set. For transfer learning, $j_i$ denotes whether the sample is in the source set. \item the previous $J$ transformers, \sct{Transformer}$_j$ for $j \in [J]$, and \item the previous $J^2$ deciders (one for each pair of tasks), \sct{Decider}$^{(j)}_{j'}$ for $j,j' \in [J]$.
\end{itemize} \Ensure \\ \begin{itemize} \item A transformer, \sct{Transformer}, and \item a decider, \sct{Decider}. \end{itemize} \Function{HonestLearner}{$\mathcal{D}_n$} \State \begin{enumerate} \State \textbf{Learn and apply transformer} \begin{enumerate} \item Learn the transformer on a subset of the data, $i \in \mc{I}$ $$\sct{Transformer} = \sct{LearnTransformer}\big(\{x_i,y_i\}_{i \in \mc{I}}\big).$$ \item {Apply the transformer to all the data}: $$\forall i \in [n]: \quad \mt{x}_{i} = \sct{Transformer}(x_i).$$ \end{enumerate} \State \textbf{Learn decider}: \begin{enumerate} \item Learn the decider on data not used for learning the transformer: $\sct{Decider} = \sct{LearnDecider}\big(\{\mt{x}_i,y_i\}_{i \notin \mc{I}} \big).$ \end{enumerate} \end{enumerate} \State \Return $\sct{Transformer}, \sct{Decider}$ \EndFunction \end{algorithmic} \end{algorithm} \vspace{10pt} \vspace{10pt} Artificial neural networks, trees, ensembles of trees (such as random forests and gradient boosted trees), ensembles of linear or quadratic decision rules, and $k$-nearest neighbors are all examples of classification procedures that can be learned honestly. We now describe a general lifelong learning algorithm. The lifelong learning algorithms we describe require the two functions for honest learning and a third function to possibly update previously learned transformers: \begin{enumerate} \item \sct{LearnTransformer} takes data as input and outputs a \sct{Transformer} function that maps $ \mc{X} $ to $ \mtc{X}$, so, \sct{Transformer}$(x)=\mt{x}$. \item \sct{LearnDecider} takes transformed data as input and outputs a \sct{Decider} function that maps $\mtc{X}$ to an action, that is, \sct{Decider}$(\mt{x}) \in \mc{A}$. \item \sct{UpdateTransformer} takes both new data and previous transformer as input and updates the transformer (this is an optional step). \end{enumerate} \noindent Below, for simplicity, we assume that data are batched into tasks, and each task is unique. 
Also assume that we have already observed and operated on $J$ tasks, and now we observe a $J+1^{th}$ task. \vspace{25pt} \clearpage \label{alg:general-lifelong} \begin{algorithm}[] \caption{Lifelong Learner} \begin{algorithmic} \State A generic algorithm for updating a lifelong learning machine. This particular algorithm presumes all data are stored forever, because past data are used for \emph{reverse transfer} purposes. Past data can be discarded if one is only concerned about \emph{forward transfer}, and still no catastrophic forgetting will occur. The description of this algorithm starts after the learning machine has already experienced $J$ tasks, and now it is faced with data from task $J+1$. \Require \\ \begin{itemize} \item All the data, $\mc{D}_n = (x_i,y_i,j_i)$ for $i \in [n]$, \item the previous $J$ transformers, \sct{Transformer}$_j$ for $j \in [J]$, and \item the previous $J^2$ deciders (one for each pair of tasks), \sct{Decider}$^{(j)}_{j'}$ for $j,j' \in [J]$. \end{itemize} \Ensure \\ \begin{itemize} \item The $J+1^{th}$ transformer, \sct{Transformer}$_{J+1}$, and \item for each of the $J+1$ tasks, new deciders, \sct{Decider}$^{(j)}_{J+1}$ for $j \in [J+1]$. \end{itemize} \Function{UpdateLifelongLearner}{$\big(\mc{D}_n, \{\sct{Transformer}_j\}_{j \in [J]}, \{\sct{Decider}^{(j)}_{j'}\}_{j,j' \in [J]} \big)$} \State \begin{enumerate} \item \textbf{Update and apply transformers} \begin{enumerate} \item $[$Optional$]$ Update the existing $J$ transformers using the data from the $J+1^{th}$ task, that is, $\forall j < J+1$: $$\sct{Transformer}_{j} = \sct{UpdateTransformer}\big(\{x_i,y_i\}_{i : j_i = J+1}, \sct{Transformer}_j \big). 
$$ \item $[$Optional$]$ Update transformations of all data using these new transformers: $$\forall\, j \in [J], \forall \, i \in [n]: \quad \mt{x}^j_i = \sct{Transformer}_j(x_i)$$ \item Learn the $J+1^{th}$ transformer on a subset of the data, $i \in \mc{I}^{J+1}$, from task $J+1$ $$\sct{Transformer}_{J+1} = \sct{LearnTransformer}\big(\{x_i,y_i\}_{i \in \mc{I}^{J+1}}\big).$$ \item {Apply the $J+1^{th}$ transformer to all the data}: $$\forall i \in [n]: \quad \mt{x}^{J+1}_{i} = \sct{Transformer}_{J+1}(x_i).$$ \end{enumerate} \item \textbf{Update deciders}: \begin{enumerate} \item For each task $ j'\neq j$, learn a decider on the $j^{\text{th}}$ data using the $j'^{th}$ transformed data: $$ \forall j' \neq j: \quad \sct{Decider}_{j'}^{(j)} = \sct{LearnDecider}\big(\{\mt{x}^{(j')}_i,y_i\}_{i: j_i = j}\big).$$ \item For task $j' = j$, learn a decider on the $j^{\text{th}}$ data using only the data not used for learning the transformer on the $j'^{th}$ data: $$\sct{Decider}_{j'}^{(j)} = \sct{LearnDecider}\big(\{\mt{x}^{(j)}_i,y_i\}_{i \notin \mc{I}^{J+1}} \big).$$ \item For each task, update its decider by averaging the $J+1$ deciders on the task: $$\forall j \in [J+1] : \quad \sct{Decider}^{(j)} \leftarrow \frac{J}{J+1} \sct{Decider}^{(j)} + \frac{1}{J+1} \sum_{j'=1}^{J+1} \sct{Decider}_{j'}^{(j)}.$$ \end{enumerate} \end{enumerate} \State \Return $\{\sct{Transformer}_j\}_{j \in [J+1]}, \{\sct{Decider}^{(j)}_{j'}\}_{j,j' \in [J+1]}$ \EndFunction \end{algorithmic} \end{algorithm}
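The Lifelong Learner pseudocode above can be transcribed into a simplified executable sketch. This version omits the optional transformer-update step, folds each task's per-transformer deciders into a single majority vote rather than an explicit running average, and uses trivial placeholder learners; it is an illustration of the control flow, not the paper's implementation.

```python
from collections import Counter

def update_lifelong_learner(data, transformers, learn_transformer,
                            learn_decider, new_task):
    """data: list of (x, y, task) triples -- all data are stored, as the
    pseudocode assumes, so that old tasks can transfer backward.
    transformers: dict mapping task -> transformer, updated in place."""
    # 1. Learn the (J+1)-th transformer on the new task's data.
    new_pairs = [(x, y) for x, y, t in data if t == new_task]
    transformers[new_task] = learn_transformer(new_pairs)
    us = list(transformers.values())

    def make_decider(voters):
        # Ensemble one vote per transformer; ties break arbitrarily.
        return lambda x: Counter(v(u(x)) for u, v in zip(us, voters)
                                 ).most_common(1)[0][0]

    # 2. Rebuild every task's decider across ALL transformers, old and
    #    new -- later transformers give old tasks a route to backward transfer.
    deciders = {}
    for task in transformers:
        pairs = [(x, y) for x, y, t in data if t == task]
        voters = [learn_decider([(u(x), y) for x, y in pairs]) for u in us]
        deciders[task] = make_decider(voters)
    return transformers, deciders

# Placeholder plug-ins (assumptions for illustration): an identity
# transformer and a table-lookup decider.
identity_transformer = lambda pairs: (lambda x: x)
lookup_decider = lambda pairs: (lambda xt, table=dict(pairs): table.get(xt))

data = [(0, "even", "a"), (1, "odd", "a"), (0, "zero", "b"), (1, "one", "b")]
tf, dec = update_lifelong_learner(data, {}, identity_transformer,
                                  lookup_decider, "a")
tf, dec = update_lifelong_learner(data, tf, identity_transformer,
                                  lookup_decider, "b")
```

Calling the update once per task reproduces the growth pattern of the pseudocode: each call adds one transformer and rebuilds every task's decider over all of them.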
% arXiv:2004.12908, https://arxiv.org/abs/2004.12908 (timestamp 2021-03-04)